Tesla Not Compute Limited for FSD AI Means 100+ Exaflops and 1+ Exabyte of Cache Memory

By Brian Wang

Mar 23, 2024

Tesla indicated in August 2023 that it was activating a 10,000-GPU Nvidia H100 cluster with over 200 petabytes of hot-cache (NVMe) storage. This storage holds the massive volume of video driving data used to train the FSD AI.

Elon Musk posted yesterday that Tesla's FSD training is no longer compute-constrained. Tesla has likely activated 5 times more compute and c…
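As a rough sanity check on the headline figures, the sketch below simply scales the August 2023 numbers by the roughly 5X expansion suggested here. The ~2 petaflops of dense FP8 per H100 is an approximate published spec, and the 5X scale factor is an assumption based on this post, not a confirmed Tesla figure.

```python
# Back-of-envelope check of the headline numbers.
# Assumptions: ~2 PFLOPS dense FP8 per Nvidia H100 (approximate spec),
# and a ~5x expansion over the August 2023 cluster (assumed, not confirmed).

H100_FP8_PFLOPS = 2.0        # approx. dense FP8 throughput per GPU, petaflops
GPUS_AUG_2023 = 10_000       # H100 cluster Tesla reported activating in Aug 2023
HOT_CACHE_PB_AUG_2023 = 200  # NVMe hot-cache storage reported in Aug 2023
SCALE_UP = 5                 # hypothetical ~5x expansion since then

compute_exaflops = GPUS_AUG_2023 * SCALE_UP * H100_FP8_PFLOPS / 1_000
cache_exabytes = HOT_CACHE_PB_AUG_2023 * SCALE_UP / 1_000

print(f"Estimated training compute: ~{compute_exaflops:.0f} exaflops")  # ~100
print(f"Estimated hot cache:        ~{cache_exabytes:.1f} exabytes")    # ~1.0
```

Under those assumptions the arithmetic lands at roughly 100 exaflops and 1 exabyte, which is where the title's figures come from.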
