If you are setting up a workstation for machine learning and want to keep your NVIDIA GPU's memory free for CUDA work, this post is for you.
First, make sure that:
- You are using Ubuntu or a similar Linux distribution.
- You have a machine with an NVIDIA graphics card and a CPU with an integrated GPU.
- You have installed the NVIDIA graphics driver and the CUDA libraries. If you haven't, the following tutorial will help you.
- You may have to plug the display cable into a motherboard display port (HDMI, VGA) instead of the ports on the NVIDIA card. You may also have to configure the BIOS to enable and load the integrated GPU on boot (in my BIOS, the option for the integrated card is called iGFX).
You can use this guide to install the correct NVIDIA driver.
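If you still need the driver, Ubuntu's own `ubuntu-drivers` tool is one common route. This is a sketch assuming a stock Ubuntu install, not the guide's exact steps; package names and versions vary by release:

```shell
# List detected GPUs and the drivers ubuntu-drivers recommends for them.
ubuntu-drivers devices

# Install the recommended proprietary NVIDIA driver.
sudo ubuntu-drivers autoinstall

# Optionally, install the CUDA toolkit from the Ubuntu repositories.
sudo apt install nvidia-cuda-toolkit
```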
Create `/etc/X11/xorg.conf` with the following content:
```
Section "Device"
    Identifier "intel"
    Driver "intel"
    BusId "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```
(If you try to do the same, make sure to check where your GPU is. Mine was on 00:02.0, which translates to PCI:0:2:0.)
```
lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop)
01:00.0 VGA compatible controller: NVIDIA Corporation TU106 [GeForce RTX 2070] (rev a1)
```
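One easy mistake here: `lspci` prints bus and device numbers in hexadecimal, while the `BusId` in `xorg.conf` expects decimal, so a card on `0a:00.0` becomes `PCI:10:0:0`, not `PCI:0a:0:0`. A small helper (hypothetical, not from the original post) makes the conversion explicit:

```python
# Hypothetical helper: convert an lspci slot like "01:00.0" into the
# "PCI:bus:device:function" string expected by xorg.conf's BusId.
# lspci prints bus/device/function in hex; xorg.conf wants decimal.
def lspci_to_xorg_busid(slot: str) -> str:
    bus, dev_fn = slot.split(":")
    dev, fn = dev_fn.split(".")
    return f"PCI:{int(bus, 16)}:{int(dev, 16)}:{int(fn, 16)}"

print(lspci_to_xorg_busid("00:02.0"))  # PCI:0:2:0  (the Intel iGPU above)
print(lspci_to_xorg_busid("0a:00.0"))  # PCI:10:0:0 (hex 0a -> decimal 10)
```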
- Finally, reboot your system.
If everything goes right, the system will boot with the integrated GPU as the default graphics card.
You can check the result with `nvidia-smi`:
If you see `No running processes found`, as in the following image, your system is now using the integrated GPU for display. You still have all the NVIDIA drivers installed, and you can use the NVIDIA card for CUDA work.
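If you'd rather check this from a script than by eye, you can look for that same message in `nvidia-smi`'s output. A minimal sketch, with a hypothetical helper and a sample string standing in for real output (the live call assumes `nvidia-smi` is on your PATH):

```python
def gpu_has_no_display_processes(smi_output: str) -> bool:
    # nvidia-smi prints this line when nothing is running on the GPU,
    # which is what we expect once the iGPU is driving the display.
    return "No running processes found" in smi_output

# Sample fragment mirroring nvidia-smi's process table, for illustration:
sample = "|  No running processes found                                       |"
print(gpu_has_no_display_processes(sample))  # True

# On a real system:
# import subprocess
# output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
# print(gpu_has_no_display_processes(output))
```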