At the AWS re:Invent 2025 conference, Amazon (AMZN) launched its new, in-house Trainium3 AI chip, along with new Trainium3 UltraServers powered by the chip. The chip is built on a 3-nanometer process and promises faster performance, lower costs, and greater energy efficiency for training and deploying AI models. It competes directly with offerings from Nvidia (NVDA) and Google (GOOGL) in the growing AI chip market.
The tech giant disclosed that each AWS Trainium3 chip delivers 2.52 PFLOPs of compute and carries 144 GB of HBM3e memory with 4.9 TB/s of bandwidth, which amounts to 1.5x more memory and 1.7x faster bandwidth than Trainium2. The chip is built to handle complex AI workloads, such as real-time, multimodal, and reasoning tasks, more efficiently.
Early adopters of the new AI chip include Anthropic (PC:ANTPQ), Karakuri, Splash Music, and Decart (PC:DECAR). These companies have already used Trainium3 to cut inference costs and speed up generative AI workloads.
Amazon’s New Servers Boost Performance While Cutting Costs
Among the key features and benefits, the Trainium3 UltraServers can deliver up to 4.4 times more compute performance and nearly four times the memory bandwidth of the previous-generation Trainium2 systems.
In terms of efficiency, the new servers are four times more energy-efficient and use 40% less power, which helps reduce operational costs.
Moreover, systems using the Trainium3 chips can scale up to 144 chips in a single UltraServer, and thousands of these servers can be connected into a cluster of up to one million Trainium chips for massive AI projects.
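For a rough sense of how those headline figures combine, here is a minimal back-of-the-envelope sketch. It assumes the quoted 2.52 PFLOPs per-chip peak scales linearly across an UltraServer and a full cluster, which real workloads will not achieve once interconnect and utilization overheads are factored in.

```python
# Back-of-the-envelope math using the figures quoted above.
# Assumption: per-chip peak compute scales linearly; overheads are ignored.

PFLOPS_PER_CHIP = 2.52          # quoted peak compute per Trainium3 chip
CHIPS_PER_ULTRASERVER = 144     # chips in a single UltraServer
CLUSTER_CHIPS = 1_000_000       # quoted upper bound for a Trainium cluster

ultraserver_pflops = PFLOPS_PER_CHIP * CHIPS_PER_ULTRASERVER
cluster_eflops = PFLOPS_PER_CHIP * CLUSTER_CHIPS / 1_000  # 1 EFLOP = 1,000 PFLOPs

print(f"Peak per UltraServer: ~{ultraserver_pflops:.0f} PFLOPs")   # ~363 PFLOPs
print(f"Peak per 1M-chip cluster: ~{cluster_eflops:,.0f} EFLOPs")  # ~2,520 EFLOPs
```

These are theoretical peaks only; AWS has not published aggregate cluster-level throughput figures in these terms.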
In addition, AMZN said that Trainium platform users have lowered training and inference costs by as much as 50% compared with other GPU-based systems.
AWS Teases Trainium4: Built to Work With Nvidia GPUs
At the conference, AWS also teased Trainium4, the next AI accelerator chip in development. The company has promised that Trainium4 will deliver another “big step-up in performance” over its predecessors.
A key feature of the Trainium4 chips will be their ability to work with Nvidia’s GPUs using the NVLink Fusion chip-to-chip communications technology. This is a major development, as it could make it easier for AI applications built on Nvidia’s CUDA framework to run on Amazon’s infrastructure.
What Is the Price Target for Amazon Stock?
Currently, Wall Street has a Strong Buy consensus rating on Amazon stock based on 43 Buys and one Hold. The average AMZN stock price target of $295.60 indicates a 25.55% upside potential from current levels.


