Tesla Not Compute Limited for FSD AI Means 100+ Exaflops and 1+ Exabyte of Cache Memory
By Brian Wang
Tesla indicated in August 2023 that it was activating a cluster of 10,000 Nvidia H100 GPUs and over 200 petabytes of hot cache (NVMe) storage. This storage holds the massive amount of video driving data used to train the FSD AI.
Elon Musk posted yesterday that Tesla FSD training is no longer compute-constrained. Tesla has likely activated 5 times more compute and c…
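The headline figures follow from simple scaling arithmetic. A minimal back-of-envelope sketch, assuming roughly 2 petaflops of dense FP8 throughput per H100 and a 5x scale-up of the August 2023 cluster (the exact multiple and throughput figure are assumptions, not confirmed by Tesla):

```python
# Rough check of the headline figures, assuming ~2 PFLOPS of dense FP8
# throughput per H100 and a 5x expansion of the 10,000-GPU / 200 PB
# cluster Tesla described in August 2023 (scale-up factor is assumed).

H100_FP8_PFLOPS = 2.0          # approx. dense FP8 tensor throughput per GPU
GPUS_AUG_2023 = 10_000         # H100s Tesla said it was activating in August 2023
HOT_CACHE_PB_AUG_2023 = 200    # NVMe hot cache reported at the same time
SCALE_UP = 5                   # assumed expansion factor since then

compute_exaflops = GPUS_AUG_2023 * SCALE_UP * H100_FP8_PFLOPS / 1_000
cache_exabytes = HOT_CACHE_PB_AUG_2023 * SCALE_UP / 1_000

print(f"Estimated training compute: ~{compute_exaflops:.0f} exaflops (FP8)")
print(f"Estimated hot cache storage: ~{cache_exabytes:.1f} exabytes")
# -> ~100 exaflops and ~1.0 exabytes, consistent with the headline figures.
```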