Tesla DOJO Exa-Scale Lossy AI Network using the Tesla Transport Protocol over Ethernet (TTPoE)
Patrick Kennedy
Tesla brought its Dojo V1 networking hardware to Hot Chips 2024 and announced that it is donating its own TTPoE protocol to the Ultra Ethernet Consortium.
The next-gen Meta MTIA is a custom RISC-V-based accelerator for the company's recommendation model AI inference workloads, and it is being deployed this year.
The AMD Versal AI Edge Series Gen 2 is the update to the 2021 series of chips for automotive and edge inference applications.
The Preferred Networks MN-Core 2 is an HPC and AI chip from Japan that focuses on power-efficient compute.
At Hot Chips 2024, AMD detailed Zen 5 again. If you missed our previous coverage, you can see the summary slides here.
In one of the coolest presentations at Hot Chips 2024 so far, Broadcom showed co-packaged silicon photonics for switches and AI ASICs.
The FuriosaAI RNGD processor was detailed and shown at Hot Chips 2024 for lower-power AI inference applications.
AMD had a talk on its big AI GPU at Hot Chips 2024. The AMD Instinct MI300X is a multi-billion-dollar product line for the company.
We have more details on the Intel Gaudi 3 for AI training and inference from Hot Chips 2024, including generational performance gains.
This is the SambaNova SN40L, a dataflow-architecture AI accelerator with 520MB of SRAM, 64GB of HBM, and 1.5TB of DDR memory.