Tesla has just introduced a chip it designed in-house for the servers that train its models, named Dojo D1. Like Google, Amazon, Intel, Baidu, and Nvidia, Tesla is building silicon optimized for artificial intelligence tasks, and the Dojo D1 exists so Tesla can run those workloads more efficiently. The D1 is currently used in Tesla's Dojo supercomputer. It is manufactured by TSMC on a 7nm process, packs 50 billion transistors, and has a die area of up to 645 mm², which is very large (because it targets servers, not cars or mobile devices).
Tesla quotes a peak performance of 362 teraflops in FP16 or 22.6 teraflops in FP32. For comparison, Nvidia's A100 Ampere GPU reaches only 312 teraflops on FP16 workloads.
Looking at a deeper level, the chip is made up of many functional units (FUs) linked together into one overall chip. Each FU contains a 64-bit CPU with a custom instruction set tailored to matrix computation, plus 1.25 MB of SRAM. Each FU delivers 1 teraflop of FP16 or 64 gigaflops of FP32 performance, with 512 GB/s of bandwidth in every direction across the on-chip network. Tesla says that if the wiring connecting the FUs were stretched out straight, it would be 17.7 km long. D1 chips can then be connected to each other in a 5 x 5 configuration, meaning one "training tile" carries 25 working D1 chips; such a tile delivers 9 petaflops and 36 TB/s of bandwidth, and measures about 30 x 30 x 10 cm. Tesla then connects these tiles in a 2 x 3 configuration to form a tray, and each cabinet holds 2 trays, for a total of 100 petaflops and 12 TB/s of bandwidth per cabinet. Tesla decided it was time to build its own chip: the company is growing quickly and wants to do things that other chips on the market cannot handle. The scaling arithmetic is illustrated in the sketch below.
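To make the chip-to-cabinet scaling easier to follow, here is a minimal Python sketch that reproduces the arithmetic above starting from the per-chip figure of 362 FP16 teraflops. The constants are taken directly from the figures quoted in this article; the "9 petaflops" and "100 petaflops" totals are rounded values, so the computed numbers land slightly above them.

# Minimal sketch: reproduce the Dojo scaling arithmetic from the article's figures.
CHIP_FP16_TFLOPS  = 362      # one D1 chip, FP16 peak
CHIPS_PER_TILE    = 5 * 5    # a training tile is a 5 x 5 grid of D1 chips
TILES_PER_TRAY    = 2 * 3    # tiles arranged 2 x 3 per tray
TRAYS_PER_CABINET = 2        # two trays per cabinet

tile_tflops    = CHIP_FP16_TFLOPS * CHIPS_PER_TILE    # 9,050 TFLOPS ~ 9 PFLOPS
tray_tflops    = tile_tflops * TILES_PER_TRAY         # 54,300 TFLOPS
cabinet_tflops = tray_tflops * TRAYS_PER_CABINET      # 108,600 TFLOPS ~ 100+ PFLOPS

print(f"Tile:    {tile_tflops / 1_000:.1f} PFLOPS")     # ~9.1 PFLOPS
print(f"Tray:    {tray_tflops / 1_000:.1f} PFLOPS")     # ~54.3 PFLOPS
print(f"Cabinet: {cabinet_tflops / 1_000:.1f} PFLOPS")  # ~108.6 PFLOPS, quoted as ~100 PFLOPS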