CUDA on WSL2 for Dummies

Last year, I gifted myself a gaming laptop, after going back and forth between “I must have one” and “They are just so bulky and expensive. Plus, I already have a 2015 MBP with blown speakers, an iPad with Apple Pencil, and two e-book readers, one of which was supposed to be sold ages ago.” Despite all my protests, of course, the former won. Sooner or later it always wins, as long as the focus keeps coming back to the subject.

Finding the right one was another issue. Some reviews mentioned horrible screen quality, some a missing USB-C port, others bad speakers… After much struggle, I chose an Acer Predator Helios 300 with an RTX 2070. The card is good enough to last for some years, and I could not only play games but also dive into the world of CUDA. There are three ways one can go about using CUDA on a Windows computer. The first, and probably the recommended route for less headache, is to wipe out the Windows operating system and replace it with Linux. A less dramatic measure would be installing Linux with dual boot. You get the best of both worlds. If you survive until the end of this post, however, you are stubborn enough and should give either method a try.

The second method is installing all the support for CUDA in the native Windows environment. My goal is to try simulation programs, so here is one guide I found for running GROMACS. I did not try this method because I thought it would be more of a headache given my unfamiliarity with the Windows world. Do not underestimate its complexity: the other day, I plugged in an external hard drive and spent a good 15 minutes making it work.

The third method is using the Windows Subsystem for Linux and following the steps described in this guide, or this one, or a million other comments and pages you find on the internet. The guides are not perfect. I have spent more than 24 hours on this task. After downloading several versions of the CUDA toolkit, I believe I was even temporarily banned by NVIDIA. It’s either that or their servers had an issue, which was resolved, coincidentally, at the time I turned on the VPN to reattempt the download.

I started my journey with the first guide. The first task is to get a build of Windows different from the one I have, which means registering for the “Windows Insider Program”. Easy enough: click the link, create an account. Then I have to download this build from the “Fast Ring”. I did download a build, but I still have no clue what this “Fast Ring” means. The fast ring link talks about “Flighting” when you log in with your insider account.

I had a feeling I shouldn’t have taken this route at this stage, but I proceeded anyway. Now, this first guide mentions a build number requirement. When you are about to select a channel for the Insider Program, you might ask yourself which build number comes with which channel… One would think that the latest build number in each channel would be reported in the description, or that the guide would tell you to choose a specific channel to meet the requirement. Neither does. So I decided to use the “Recommended” channel. So much for recommendations…

At this point, I have “a” build and an NVIDIA driver (which additionally updates GeForce Experience), so I move on to the next step of getting WSL 2. I follow the link; of course the automated way does not work. The “install” command is not even recognized. Luckily, the manual commands were easy enough to follow. You just have to restart the machine. I have the habit of ignoring this restart step on Windows, because who wants to restart the machine every 5 minutes… Definitely not me! But in this case the restart was mandatory, because the following steps had failed without it. So remember to restart before attempting to upgrade to version 2.
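For reference, the manual route boils down to a few commands in an elevated PowerShell or command prompt (feature names as documented in Microsoft’s manual installation steps; double-check against the current guide):

```shell
# Enable WSL and the virtual machine platform (run elevated):
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# Restart Windows here -- the next command fails on a machine that hasn't restarted.

# Make version 2 the default, then install a distro from the Microsoft Store:
wsl --set-default-version 2
```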

I now have “a” build, a driver, and WSL 2. I went back to the guide for the next steps. There is an update mentioned, but I had not seen such an update; I didn’t care, because I had the latest of “everything”. Now comes the part where I download the CUDA toolkit. What the guide doesn’t mention is that if you get the “.run” version and manage to make it work, it may actually install an additional NVIDIA driver. You have to remember not to install that driver. So my recommendation: use the .deb over the network. Less hassle.
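As a rough sketch, the network .deb route looks something like this (repository URL and package name are from NVIDIA’s CUDA-on-WSL instructions at the time and may have changed since); the key point is to install the toolkit package rather than the `cuda` meta-package, which would pull in a driver:

```shell
# Add NVIDIA's WSL-Ubuntu CUDA repository (check NVIDIA's download page for
# the current URL and signing key before copying these lines):
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/ /"
sudo apt-get update

# Install the toolkit only -- NOT "cuda", which would also install a Linux
# driver that must not exist inside WSL:
sudo apt-get install -y cuda-toolkit-11-0
```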

The download took forever on WSL 2. I realized the speed was insanely low, so I started searching for answers. I saw that I should disable some property on the Hyper-V virtual network adapter… yada yada. I disabled that and, boom, I could no longer connect to the NVIDIA download server. Not within WSL 2, not within Windows. So I gave up on NVIDIA and switched to conda. It had a CUDA toolkit; I could use that, right?

So I got the CUDA toolkit and OpenMM. I needed to test whether the GPU (“CUDA”) platform is detected. CUDA on WSL has limitations — an actual calculation wouldn’t work, but detection should. How naive I was. OpenMM couldn’t find “libcuda.so.1”, and I kept wondering why. It turns out libcuda.so.1 comes with the driver, not with the toolkit — and the driver was installed in… where again? Oh right, this was the driver I installed on Windows. It must be in… Program Files, System, the Windows folder… somewhere… I found the path with a file search for libcuda.so.1, linked that path, and thought detection should now work since the file was found. It didn’t. Maybe because the toolkit was version 10 and I had the latest driver… I don’t know. I have zero clue. CUDA is alien technology for me, but I am learning.
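A minimal way to check both things at once — where the Windows driver dropped its libcuda.so.1, and whether OpenMM can see it — might look like this (the lxss path is the one that turned up on my machine and is an assumption; the `simtk.openmm` import path is the OpenMM 7.x convention):

```shell
# Locate the driver-provided library that the toolkit does not ship:
find /mnt/c/Windows -name 'libcuda.so.1' 2>/dev/null

# Put it on the loader path (use whatever path the search found above):
export LD_LIBRARY_PATH=/mnt/c/Windows/System32/lxss/lib:$LD_LIBRARY_PATH

# List the platforms OpenMM can actually load:
python -c "from simtk.openmm import Platform; print([Platform.getPlatform(i).getName() for i in range(Platform.getNumPlatforms())])"
```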

I decided to give the NVIDIA download another try, but this time with the VPN enabled. Surprise, surprise: it works. I downloaded toolkit version 11, installed it, and moved on to the verification step. I wish they had mentioned how to compile only selected folders. Compiling the entire samples folder takes ages — more than enough time to take a break for a cup of tea. After my short break, I realized I didn’t need to wait for the entire thing: I could run the tests with whatever was already compiled, and I should learn how to read a Makefile to learn about compilation options.
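Each sample ships with its own Makefile, so — assuming the default toolkit install location — you can build just the one test you need instead of the whole tree:

```shell
# Build and run a single sample instead of the entire samples folder:
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
```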

The test mentioned in the guide… fails! deviceQuery fails. What about the docker examples… Fail! Fail! Fail! Every test fails. The troubleshooting section hints at a version mismatch between some of the components I downloaded above. How lovely! The first thing I should have checked was the very first component I got: the Windows build version. But I thought walking backwards would be easier. Even in the physical world, walking backwards is harder. It’s easier to face the path you have walked and examine it as an outsider, with the freedom to jump between steps.
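For the record, the docker check from the guide is a one-liner (image name as given in NVIDIA’s CUDA-on-WSL guide at the time; it may have moved since):

```shell
# N-body benchmark inside a CUDA container; assumes the docker daemon is
# running inside WSL 2 with the NVIDIA container toolkit installed:
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```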

After many long hours spent trying to find the culprit, I stumbled upon a post mentioning that the required build is only available on the “Dev” channel, not on the recommended Beta channel. It also points to the second guide, from Microsoft. OK, easy enough to solve: go back, change the channel, check for updates, and download the latest build on “Dev”. Of course it was not that easy. What was I thinking? The new updates kept downloading and installing themselves repeatedly; over and over I watched the progress go back to zero. I couldn’t be the only one experiencing this. There had to be a solution, and there was: pause the updates so they won’t consume bandwidth for no reason, uninstall the updates, restart the machine, then check for updates again. Finally, after these steps, the update worked and I got myself the required Windows build.

The sample tests passed. The docker tests passed. I downloaded and compiled OpenMM from source, since there was no cuda-toolkit-11-compatible binary on conda yet and that was the toolkit I had. I also learned, when I looked at the manual for compilation instructions, that the “omnia” conda channel had been switched to “conda-forge”. Every time you get a new version of a software package, RTFM, Ozge! Well, now OpenMM GPU detection works too. I had to pass an extra flag to link against libcuda.so.1 during compilation, and place that path in “LD_LIBRARY_PATH”. Here is the command to run within the build directory:

cmake ../ -DOPENMM_BUILD_CUDA_LIB=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DCMAKE_LIBRARY_PATH=/mnt/c/Windows/System32/lxss/lib/
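After configuring, the remaining steps are the usual ones (the PythonInstall target and the testInstallation module are named in the OpenMM 7.x build documentation; treat this as a sketch):

```shell
# The driver library must be visible at runtime as well as at link time:
export LD_LIBRARY_PATH=/mnt/c/Windows/System32/lxss/lib:$LD_LIBRARY_PATH

make -j"$(nproc)"
sudo make install
sudo make PythonInstall

# Lists the available platforms and compares forces between them:
python -m simtk.testInstallation
```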

I won’t be running any simulations any time soon, though. GPU detection works, but it won’t pass the tests (Error loading CUDA module: CUDA_ERROR_UNSUPPORTED_PTX_VERSION (222)). Who knows, maybe there is a way to make it work with a docker image. The CUDA world is new to me. Before this journey, I didn’t even know all the different components of the CUDA libraries, what it means to have a driver, what the toolkit is, what nvcc is…

Now I know a bit more. The tip of the iceberg… But I am eagerly waiting for full CUDA support on WSL.
