Nvidia Sees AI Chip Demand at $1T by 2027; Debuts Groq 3 Chips, NemoClaw Platform at GTC
Jensen Huang unveils the 'Vera Rubin' era at GTC 2026, promising a 25x performance leap while Meta weighs massive layoffs to fund its AI future.

Nvidia CEO Jensen Huang has stunned the tech industry by forecasting that demand for AI infrastructure will skyrocket to $1 trillion by 2027, double the $500 billion in demand and order backlog for Blackwell and Rubin chips that the company had previously estimated through 2026.
'In fact, we are going to be short. I am certain computing demand will be much higher than that,' Huang said while speaking at the GTC 2026 keynote in San Jose, California on 16 March.
Huang declared that the world has reached an 'inference inflection point', where the shift from training models to deploying AI agents is driving an insatiable need for compute power.
To meet this surge, Nvidia unveiled the Vera Rubin architecture, a liquid-cooled supercomputing platform that delivers up to 25 times more performance than the H100 for specialised tasks, alongside a $20 billion integration of Groq's high-speed inference technology.
Huang touted Nvidia's relationships with cloud service providers, such as Google, Microsoft, Amazon, and Oracle, saying the AI company is 'bringing customers to them.'
'Moore's Law has run out of steam; we need a new approach,' Huang said. 'Accelerated computing allows us to take these giant leaps forward, and as you will see later, because we continue to optimise the algorithms ... and because our reach is so large and our installed base is so large, we can reduce the computing cost, increasing the scale, increasing the speed for everybody, continuously.'
Groq 3 Chips, NemoClaw Platform Launched
Nvidia debuted multiple chips at GTC, ranging from the Nvidia Groq 3 language processing unit to the Vera central processing unit. The company also launched five massive server racks, each designed to serve a unique purpose in AI data centres.
Nvidia entered into an agreement to license technology from Groq and onboarded founder Jonathan Ross and President Sunny Madra as part of a $20 billion deal in December 2025. Groq's processors prioritise AI inferencing, or running AI models.
Nvidia also launched the NemoClaw platform, which provides privacy and security controls for companies that use OpenClaw AI agents. OpenClaw can run AI agents powered by various AI models on users' machines via apps such as WhatsApp, Discord, and Slack.
AI Chips For Space Data Centres, Self-Driving Ubers
Nvidia also unveiled the Vera Rubin Space Module platform for orbital data centres, geospatial intelligence, and autonomous space operations.
Nvidia's chips have already made it to space: startup Starcloud launched an H100 processor on a satellite in November 2025, where it became the first to run an AI model based on Google's Gemini in orbit.
However, Vera Rubin offers a major performance boost over H100, with Nvidia saying the Rubin GPU will provide up to 25 times more AI compute power for 'space-based inferencing.'
Nvidia also said that Uber will start rolling out a fleet of Level 4 autonomous vehicles in Los Angeles and San Francisco in 2027 as part of the companies' broader self-driving efforts. The firms had previously stated that they plan to deploy 100,000 vehicles running Nvidia's Drive Hyperion self-driving platform and a new reasoning model called Alpamayo. The service will eventually move beyond California to include 28 cities across four continents. In addition to Uber, Nvidia said Lyft, Estonia-based Bolt, and Singapore's Grab are also using its systems to power their self-driving capabilities.
Meta Secures $27B Capacity Amid Looming Layoffs
The surging cost of this infrastructure was highlighted by a landmark deal between Meta Platforms and AI cloud provider Nebius Group. Meta has committed up to $27 billion through 2027 to secure dedicated capacity built on Nvidia's Vera Rubin platform.
However, this massive capital expenditure comes at a human cost. Reports surfaced during GTC that Meta is preparing to cut 20 per cent of its workforce, roughly 16,000 employees, to offset its 'expensive AI bets' in what could be the company's largest restructuring since 2022.
CEO Mark Zuckerberg has recently suggested that AI-driven productivity gains allow projects that once required large teams to be completed by a single person, signalling a major restructuring of the Silicon Valley workforce.
Disclaimer: Our digital media content is for informational purposes only and not investment advice. Please conduct your own analysis or seek professional advice before investing. Remember, investments are subject to market risks and past performance doesn't indicate future returns.
© Copyright IBTimes 2025. All rights reserved.