Blackwell Ultra is an enhanced version of Blackwell, but Nvidia is mostly comparing it to 2022's H100.
Nvidia now makes $2,300 in net profit every second on the back of the AI revolution. Its data center business is so gargantuan that even its networking hardware now rakes in more money than its gaming GPUs. Now, the company is announcing the AI GPUs it hopes will extend its commanding lead: the Blackwell Ultra GB300, which will ship in the second half of this year; Vera Rubin, for the second half of next year; and Rubin Ultra, which will arrive in the second half of 2027.
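As a rough back-of-the-envelope check on that per-second figure (our arithmetic, not a number Nvidia reports this way), $2,300 every second annualizes to roughly $72 billion:

```python
# Back-of-envelope check: annualize the ~$2,300-per-second net profit figure.
profit_per_second = 2_300               # USD, figure cited above
seconds_per_year = 60 * 60 * 24 * 365   # ignoring leap years

annual_profit = profit_per_second * seconds_per_year
print(f"~${annual_profit / 1e9:.1f} billion per year")  # -> ~$72.5 billion per year
```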
This year's Blackwell Ultra isn't quite what we originally expected when Nvidia said last year that it would begin producing new AI chips on a yearly cadence, faster than ever before, because Blackwell Ultra is not on a new architecture. But Nvidia quickly moved on from Blackwell Ultra during today's GTC keynote to reveal that next architecture, Vera Rubin, whose full rack should offer 3.3x the performance of a comparable Blackwell Ultra one.
Nvidia isn't making it easy to tell how much better Blackwell Ultra is than the original Blackwell. In a prebriefing with journalists, Nvidia revealed that a single Ultra chip will offer the same 20 petaflops of AI performance as Blackwell, but now with 288GB of HBM3e memory rather than 192GB of the same. Meanwhile, a Blackwell Ultra DGX GB300 "Superpod" cluster will offer the same 288 CPUs, 576 GPUs, and 11.5 exaflops of FP4 computing as the Blackwell version, but with 300TB of memory rather than 240TB.
Mostly, Nvidia compares its new Blackwell Ultra to the H100, the 2022 chip that originally built Nvidia's AI fortunes and from which leading companies might presumably want to upgrade: there, Nvidia says it offers 1.5x the FP4 inference and can dramatically speed up "AI reasoning," with the NVL72 cluster capable of running an interactive copy of DeepSeek-R1 671B that can provide answers in just 10 seconds instead of the H100's 1.5 minutes. Nvidia says that's because it can process 1,000 tokens per second, 10 times the rate of its 2022 chips.
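Those two response times are consistent with the throughput claim if the answer runs to roughly 10,000 tokens; here is a minimal sketch of the arithmetic, with that token count as our assumption rather than a figure Nvidia gave:

```python
# Rough sketch of how the claimed token throughput maps to response time.
# The ~10,000-token answer length is our assumption, chosen so the numbers
# line up with the quoted 10-second and ~1.5-minute figures.
answer_tokens = 10_000

for system, tokens_per_second in [("Blackwell Ultra NVL72", 1_000), ("H100", 100)]:
    seconds = answer_tokens / tokens_per_second
    print(f"{system}: ~{seconds:.0f} seconds ({seconds / 60:.1f} minutes)")
# Blackwell Ultra NVL72: ~10 seconds (0.2 minutes)
# H100: ~100 seconds (1.7 minutes)
```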
But one intriguing difference is that some companies will be able to buy a single Blackwell Ultra chip: Nvidia announced a desktop computer called the DGX Station with a single GB300 Blackwell Ultra on board, 784GB of coherent system memory, built-in 800Gbps Nvidia networking, and the promised 20 petaflops of AI performance. Asus, Dell, and HP will join Boxx, Lambda, and Supermicro in selling versions of the desktop.
Nvidia will also offer a single rack called the GB300 NVL72 that offers 1.1 exaflops of FP4, 20TB of HBM memory, 40TB of "fast memory," 130TB/sec of NVLink bandwidth, and 14.4TB/sec networking.
But Vera Rubin and Rubin Ultra may dramatically improve on that performance when they arrive in 2026 and 2027. Rubin has 50 petaflops of FP4, up from 20 petaflops in Blackwell. Rubin Ultra will feature a chip that effectively contains two Rubin GPUs connected together, with twice the performance at 100 petaflops of FP4 and nearly quadruple the memory at 1TB.
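For quick reference, here are the per-chip FP4 numbers quoted in this piece, with the generation-over-generation ratios worked out (the ratios are our arithmetic, not Nvidia's framing):

```python
# Per-chip FP4 figures as quoted above, in petaflops.
fp4_petaflops = {
    "Blackwell": 20,
    "Blackwell Ultra": 20,
    "Rubin": 50,
    "Rubin Ultra": 100,
}

names = list(fp4_petaflops)
for prev, cur in zip(names, names[1:]):
    print(f"{cur}: {fp4_petaflops[cur] / fp4_petaflops[prev]:.1f}x {prev}")
# Blackwell Ultra: 1.0x Blackwell
# Rubin: 2.5x Blackwell Ultra
# Rubin Ultra: 2.0x Rubin
```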
A full NVL576 rack of Rubin Ultra claims to offer 15 exaflops of FP4 inference and 5 exaflops of FP8 training, which Nvidia says is 14x the performance of the Blackwell Ultra rack it's shipping this year. Find other specs by expanding the image below:
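That 14x figure roughly checks out against the rack numbers quoted earlier, assuming Nvidia is comparing FP4 inference for the Rubin Ultra NVL576 against this year's 1.1-exaflop GB300 NVL72 (our reading of the claim):

```python
# Sanity check on the "14x" claim, assuming it compares FP4 rack figures
# (Rubin Ultra NVL576 vs. this year's GB300 NVL72); that pairing is our assumption.
rubin_ultra_nvl576_fp4_exaflops = 15.0
gb300_nvl72_fp4_exaflops = 1.1

ratio = rubin_ultra_nvl576_fp4_exaflops / gb300_nvl72_fp4_exaflops
print(f"~{ratio:.1f}x")  # -> ~13.6x, which rounds to the quoted 14x
```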
Nvidia says it has already shipped $11 billion worth of Blackwell; the top four buyers alone have purchased 1.8 million Blackwell chips so far in 2025.
Nvidia is pitching these new chips, and all its AI chips, as essential to the future of computing, and it is trying to argue today that companies will need more and more compute power, not less, as some assumed after DeepSeek shook up investor assumptions and sent Nvidia's stock price tumbling. At the Nvidia GPU Technology Conference today, founder and CEO Jensen Huang said the industry needs "100 times more than we thought we needed this time last year" to keep up with demand.
Huang said Nvidia's next architecture after Vera Rubin, coming in 2028, will be named Feynman, presumably after Richard Feynman, the famed theoretical physicist. He said some of pioneering astronomer Vera Rubin's family was in the audience today.