There are two options:
- With Docker
- With Mamba/Conda
In either case, it is of utmost importance to ensure version compatibility between the GPU driver version, the GPU's CUDA compute capability, and the PyTorch version.
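
A quick way to confirm the stack lines up is to run a short check inside the target environment. This is a minimal sketch: it prints the PyTorch version, the CUDA version PyTorch was built against, and the GPU's compute capability, all via standard `torch` APIs.

```python
# Sanity-check the GPU/CUDA/PyTorch stack from inside the environment.
import torch

print("PyTorch version:    ", torch.__version__)
print("CUDA build version: ", torch.version.cuda)         # CUDA toolkit PyTorch was built against
print("CUDA available:     ", torch.cuda.is_available())  # False often means a driver mismatch

if torch.cuda.is_available():
    print("GPU:                ", torch.cuda.get_device_name(0))
    print("Compute capability: ", torch.cuda.get_device_capability(0))
    print("cuDNN version:      ", torch.backends.cudnn.version())
```

If `torch.cuda.is_available()` returns `False` even though `nvidia-smi` sees the GPU, the usual culprit is a driver that is too old for the CUDA version PyTorch was built with.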
Docker provides an isolated workspace, which is ideal for replicating a development environment. This works well when I do not want to change my default installations just to test out some crazy GitHub repos out there, a lesson learnt from past experiences of sweat and blood. The best place to pick prebuilt Docker images is NVIDIA's NGC catalog, where images are well tested against NVIDIA's software layers, including the CUDA toolkit, cuDNN, and specific versions of either TensorFlow or PyTorch. A sketch of this workflow follows.
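
As an illustration of this workflow, here is a minimal sketch using the `docker-py` SDK (the same steps can be done with the plain `docker` CLI). The image tag is an assumption; pick one from the NGC catalog that matches your driver version.

```python
# Pull an NVIDIA-built PyTorch image and smoke-test it without touching the host env.
import docker

client = docker.from_env()

# nvcr.io/nvidia/pytorch is NVIDIA's prebuilt PyTorch image line; the tag
# below is an assumption -- choose one compatible with your GPU driver.
image = "nvcr.io/nvidia/pytorch:24.01-py3"
client.images.pull(image)

# Run a one-off container with all GPUs attached and check CUDA visibility.
output = client.containers.run(
    image,
    command='python -c "import torch; print(torch.cuda.is_available())"',
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())
```

If this prints `True`, the container's CUDA toolkit, cuDNN, and PyTorch build all work with the host driver, and the crazy GitHub repo can be tested inside the container without disturbing the default installations.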