What are actual Tesla M60 models used by AWS?

hans asked:

Wikipedia says that the Tesla M60 has 2×8 GB of RAM (whatever that means) and a TDP of 225–300 W.

I use an EC2 g3s.xlarge instance, which is supposed to have a Tesla M60. But the nvidia-smi command says it has 8 GB of RAM and a max power limit of 150 W:

> sudo nvidia-smi
Tue Mar 12 00:13:10 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   43C    P0    37W / 150W |   7373MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6779      C   python                                      7362MiB |
+-----------------------------------------------------------------------------+

What does this mean? Do I get ‘half’ of the card? Is the Tesla M60 actually two cards stuck together, as the RAM specification (2×8) suggests?

My answer:


Yes, the Tesla M60 is a single board carrying two GPUs, each with its own 8 GB of memory, and each g3s.xlarge or g3.4xlarge instance gets one of the two GPUs. That is why nvidia-smi reports roughly 8 GB of memory and a 150 W power cap: each GPU gets half of the board-level 225–300 W TDP. (The 7618 MiB reported instead of the full 8192 MiB is most likely memory set aside for ECC, which the output shows as enabled.)
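As a quick sanity check, the numbers nvidia-smi reports on a g3s.xlarge line up with exactly half of the published M60 board specs. A minimal sketch of that arithmetic (the board figures are taken from the specs quoted in the question):

```python
# Tesla M60 board-level specs, as quoted in the question.
BOARD_MEMORY_GIB = 16   # 2 × 8 GiB, one 8 GiB bank per GPU
BOARD_TDP_MAX_W = 300   # upper end of the 225–300 W board TDP

# What a single-GPU instance like g3s.xlarge should therefore see.
per_gpu_memory_gib = BOARD_MEMORY_GIB // 2   # 8 GiB
per_gpu_power_w = BOARD_TDP_MAX_W // 2       # 150 W, matching nvidia-smi's Pwr cap

print(per_gpu_memory_gib, "GiB,", per_gpu_power_w, "W")  # → 8 GiB, 150 W
```

If you want to confirm this directly on the instance, `nvidia-smi --query-gpu=name,memory.total,power.limit --format=csv` prints just those fields without the full table.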


View the full question and any other answers on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.