Back in mid-May we had an opportunity to take a distant look at the NVIDIA Tesla V100 accelerator with the PCI Express 3.0 interface. In addition to the version with a TDP of no more than 250 W, a more compact accelerator with a TDP of no more than 150 W is being developed. Clearly, the latter will have to make some compromises in terms of speed, but so far the characteristics of that version of the Tesla V100 remain a closely guarded secret.
The full-size version of the Tesla V100 with the PCI Express 3.0 interface, however, was recently introduced to the general public. NVIDIA's website already offers not only an image of this accelerator, which will appear in partners' ready-made systems before the end of the year, but also the specifications of the new product, which carries 16 GB of HBM2 memory.
In this form, the Tesla V100 accelerator's TDP is limited to 250 W. That is 50 W less than the SXM2 version, even though the cooling system has been spared the use of fans under the accelerator's own shroud: they are meant to sit in the server chassis, pushing air through the card's heatsink.
The lower TDP came at the cost of roughly 6.5% of performance: the memory still operates at the same frequencies, so only the graphics processor's clock speed had to be sacrificed. In addition, the Tesla V100 with the PCI Express 3.0 interface does not support the proprietary NVLink interface, which offers nearly a tenfold advantage in bandwidth.
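As a rough sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The peak numbers used (about 15 TFLOPS FP32 for the SXM2 card versus 14 TFLOPS for the PCIe card, and about 300 GB/s for NVLink versus about 32 GB/s for a bidirectional PCIe 3.0 x16 link) are assumptions drawn from publicly listed specifications, not values confirmed in this article.

```python
# Back-of-the-envelope comparison of the two Tesla V100 variants.
# All peak figures below are assumptions based on publicly listed specs.

SXM2_FP32_TFLOPS = 15.0   # assumed peak FP32 throughput of the 300 W SXM2 card
PCIE_FP32_TFLOPS = 14.0   # assumed peak FP32 throughput of the 250 W PCIe card

NVLINK_GBPS = 300.0       # assumed aggregate NVLink bandwidth, GB/s
PCIE3_X16_GBPS = 32.0     # assumed bidirectional PCIe 3.0 x16 bandwidth, GB/s

perf_drop = (1 - PCIE_FP32_TFLOPS / SXM2_FP32_TFLOPS) * 100
link_ratio = NVLINK_GBPS / PCIE3_X16_GBPS

print(f"PCIe card vs SXM2 performance drop: {perf_drop:.1f}%")    # ~6.7%
print(f"NVLink vs PCIe 3.0 x16 bandwidth:   {link_ratio:.1f}x")   # ~9.4x
```

With these assumed figures, the computed gap of roughly 6.7% and the bandwidth ratio of roughly 9.4x line up with the "about 6.5%" and "almost tenfold" claims above.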
One of the NVIDIA partners that will offer the Tesla V100 in this form is HPE.