Jensen Huang is the World’s Smartest Man

[Embedded video: NVIDIA GTC keynote, https://www.youtube.com/embed/Y2F8yisiS6E]

Jensen Huang, founder and CEO of NVIDIA, unveiled the company's new Blackwell platform at its GTC conference in San Jose. The new processor lets organizations build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor.

The Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and generative AI — all emerging industry opportunities for NVIDIA.

"For three decades we've pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI," said Huang. "Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry."

Among the many organizations expected to adopt Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.

Sundar Pichai, CEO of Alphabet and Google: "Scaling services like Search and Gmail to billions of users has taught us a lot about managing compute infrastructure. As we enter the AI platform shift, we continue to invest deeply in infrastructure for our own products and services, and for our Cloud customers. We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google, including Google DeepMind, to accelerate future discoveries."

Andy Jassy, president and CEO of Amazon: "Our deep collaboration with NVIDIA goes back more than 13 years, when we launched the world's first GPU cloud instance on AWS. Today we offer the widest range of GPU solutions available anywhere in the cloud, supporting the world's most technologically advanced accelerated workloads. It's why the new NVIDIA Blackwell GPU will run so well on AWS and the reason that NVIDIA chose AWS to co-develop Project Ceiba, combining NVIDIA's next-generation Grace Blackwell Superchips with the AWS Nitro System's advanced virtualization and ultra-fast Elastic Fabric Adapter networking, for NVIDIA's own AI research and development. Through this joint effort between AWS and NVIDIA engineers, we're continuing to innovate together to make AWS the best place for anyone to run NVIDIA GPUs in the cloud."

Michael Dell, founder and CEO of Dell Technologies: "Generative AI is critical to creating smarter, more reliable and efficient systems. Dell Technologies and NVIDIA are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries."

Demis Hassabis, cofounder and CEO of Google DeepMind: "The transformative potential of AI is incredible, and it will help us solve some of the world's most important scientific problems. Blackwell's breakthrough technological capabilities will provide the critical compute needed to help the world's brightest minds chart new scientific discoveries."

Mark Zuckerberg, founder and CEO of Meta: "AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it's only going to get more important in the future. We're looking forward to using NVIDIA's Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products."

Satya Nadella, executive chairman and CEO of Microsoft: "We are committed to offering our customers the most advanced infrastructure to power their AI workloads. By bringing the GB200 Grace Blackwell processor to our datacenters globally, we are building on our long-standing history of optimizing NVIDIA GPUs for our cloud, as we make the promise of AI real for organizations everywhere."

Sam Altman, CEO of OpenAI: "Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We're excited to continue working with NVIDIA to enhance AI compute."

Larry Ellison, chairman and CTO of Oracle: "Oracle's close collaboration with NVIDIA will enable qualitative and quantitative breakthroughs in AI, machine learning and data analytics. In order for customers to uncover more actionable insights, an even more powerful engine like Blackwell is needed, which is purpose-built for accelerated computing and generative AI."

Elon Musk, CEO of Tesla and xAI: "There is currently nothing better than NVIDIA hardware for AI."

Named in honor of David Harold Blackwell — a mathematician who specialized in game theory and statistics, and the first Black scholar inducted into the National Academy of Sciences — the new architecture succeeds the NVIDIA Hopper architecture, launched two years ago.

Blackwell Innovations to Fuel Accelerated Computing and Generative AI

Blackwell's six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:

- World's Most Powerful Chip — Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process, with two reticle-limit GPU dies connected by a 10 TB/s chip-to-chip link into a single, unified GPU.
- Second-Generation Transformer Engine — Fueled by new micro-tensor scaling support and NVIDIA's advanced dynamic-range management algorithms, integrated into the NVIDIA TensorRT-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities.
- Fifth-Generation NVLink — To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink delivers a groundbreaking 1.8 TB/s of bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
- RAS Engine — Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. The architecture also adds chip-level capabilities that use AI-based preventive maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency, so massive-scale AI deployments can run uninterrupted for weeks or even months at a time at lower operating cost.
- Secure AI — Advanced confidential-computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols — critical for privacy-sensitive industries like healthcare and financial services.
- Decompression Engine — A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.

A Massive Superchip

The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900 GB/s ultra-low-power NVLink chip-to-chip interconnect.

For the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, also announced today, which deliver advanced networking at speeds of up to 800 Gb/s.

The GB200 is a key component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads. It combines 36 Grace Blackwell Superchips — 72 Blackwell GPUs and 36 Grace CPUs — interconnected by fifth-generation NVLink. GB200 NVL72 also includes NVIDIA BlueField-3 data processing units to enable cloud-network acceleration, composable storage, zero-trust security and GPU compute elasticity in hyperscale AI clouds. For LLM inference workloads, the GB200 NVL72 provides up to a 30x performance increase over the same number of NVIDIA H100 Tensor Core GPUs, and reduces cost and energy consumption by up to 25x.

The platform acts as a single GPU with 1.4 exaflops of AI performance and 30 TB of fast memory, and is a building block for the newest DGX SuperPOD.

NVIDIA also offers the HGX B200, a server board that links eight B200 GPUs through NVLink to support x86-based generative AI platforms. HGX B200 supports networking speeds of up to 400 Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms.
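The 4-bit floating-point inference mentioned above relies on block-level scaling: many low-precision values share one higher-precision scale factor, which is the general idea behind "micro-tensor scaling." The sketch below is a rough illustration of that idea using the standard FP4 (E2M1) value grid — it is not NVIDIA's Transformer Engine implementation, and the function names are ours:

```python
# Illustrative sketch of block-scaled FP4 (E2M1) quantization. Each small
# block of values shares one scale factor so a 4-bit code can still cover
# the block's dynamic range. Not NVIDIA's implementation.

# The distinct values representable by a 4-bit E2M1 float
# (1 sign bit, 2 exponent bits, 1 mantissa bit; +0 and -0 collapse).
_POS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_LEVELS = sorted([-v for v in _POS if v > 0] + _POS)

def quantize_block(block):
    """Map each value to the nearest FP4 level under a shared block scale."""
    scale = max(abs(x) for x in block) / 6.0 or 1.0  # 6.0 = largest FP4 magnitude
    codes = [min(FP4_LEVELS, key=lambda lv: abs(x / scale - lv)) for x in block]
    return scale, codes

def dequantize_block(scale, codes):
    """Recover approximate values from the shared scale and the FP4 codes."""
    return [scale * lv for lv in codes]

weights = [0.02, -0.11, 0.30, 0.07]
scale, codes = quantize_block(weights)
approx = dequantize_block(scale, codes)
# approx tracks the original weights at 4 bits per value, plus one
# shared scale per block.
```

The per-block scale is what keeps tiny weights from collapsing to zero: a single tensor-wide scale would waste the narrow 4-bit range on outliers, while per-block scales adapt the range locally.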
