Creating a Machine Learning Auto-Shoot bot for CS:GO. Part 6.

Fletch
Jul 25, 2021 · 5 min read

In continuation of Part 5 of “Creating a Machine Learning Auto-shoot bot for CS:GO.”, this time around I have worked on perfecting the Keras CNN trained models for both the original 28x28 sample window size and the new 96x192 sample window.

Throughout this entire series there has been one major bottleneck to this project: the speed at which the X Window System, commonly used with Linux, can retrieve rectangles of pixels from the CS:GO game. For the 28x28 sample size, this bottleneck tends to average around 50–60 FPS; for the 96x192 sample size, we’re talking about 5–6 FPS. I am sure that a developer well versed in Xlib could retrieve the whole game window of pixels and convert it to a mean/samplewise normalised buffer at 100–150 FPS, but that’s a little beyond my expertise, and I’m not experienced enough in this area to know whether bypassing safe functions such as XGetPixel and XQueryColor could cause compatibility issues. Still, 50–60 FPS for the 28x28 sample size is honestly just fine for our purposes. 96x192 was a novel idea, but ultimately it didn’t prove to be as adequate at detecting enemies as the 28x28 model.
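For readers unfamiliar with what that sampling step looks like, here is a minimal Python sketch of the idea using the python-xlib bindings; the sample coordinates, pixel layout, and exact normalisation are illustrative assumptions on my part, not the project’s exact code.

import numpy as np
from Xlib import display, X

root = display.Display().screen().root

def grab_sample(x, y, w=28, h=28):
    # Retrieve a w x h rectangle of screen pixels starting at (x, y).
    img = root.get_image(x, y, w, h, X.ZPixmap, 0xFFFFFFFF)
    # On typical 24/32-bit visuals the raw data is 4 bytes per pixel (BGRX);
    # this may differ on other setups.
    px = np.frombuffer(img.data, dtype=np.uint8).reshape(h, w, 4)
    rgb = px[:, :, 2::-1].astype(np.float32)  # reorder BGR to RGB
    # Samplewise normalisation: zero mean, unit variance per sample.
    return (rgb - rgb.mean()) / (rgb.std() + 1e-7)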

One of the major changes I have made in this iteration is that I am no longer converting the weights to C for the forward propagation. This time I have created a small Python program that acts as a daemon: it loops every millisecond, checking to see if there is a new input file to ingest, which it uses to perform a prediction on a loaded Keras model before finally outputting the result as a separate file. The new “aim” program now just exports screen samples to a file and then reads the result of the Keras prediction back from another file. The throughput of this process is very high, and when executed correctly it performs flawlessly for the end user. But the main reason for moving to this model was to increase the productivity of prototyping new or slightly modified networks. This system allows me to train a new network topology and simply export it as a Keras model, which the Python daemon can then load and execute. The only hard-coded parameters in the daemon that might need to be changed are the expected size of the input in bytes and the shape of the array that will be fed into the predictor.
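To make that shape concrete, here is a minimal sketch of such a daemon; the file names, the float32 sample format, and the model path are my own illustrative assumptions rather than the project’s exact code.

import os
import time
import numpy as np
from tensorflow import keras

INPUT_SHAPE = (1, 28, 28, 3)                   # shape fed into the predictor
INPUT_BYTES = 28 * 28 * 3 * 4                  # expected input size in bytes (float32)
IN_FILE, OUT_FILE = "input.dat", "result.dat"  # hypothetical file names

model = keras.models.load_model("model.h5")    # hypothetical model path

while True:
    if os.path.isfile(IN_FILE):
        with open(IN_FILE, "rb") as f:
            buf = f.read()
        if len(buf) == INPUT_BYTES:
            x = np.frombuffer(buf, dtype=np.float32).reshape(INPUT_SHAPE)
            pred = model.predict(x, verbose=0)
            with open(OUT_FILE, "w") as f:
                f.write(str(float(pred[0][0])))
        os.remove(IN_FILE)    # consume the input so it is not predicted twice
    time.sleep(0.001)         # poll roughly every millisecond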

When it comes to writing a forward propagation in pure C, I found there is nothing more enjoyable than doing this with a Feed-Forward Neural Network. Not only are they easier to implement, but they are also much easier to vectorise, as the forward propagation reduces to a multiply-accumulate (MAC) operation; this can be achieved in plain C using the fma() function, or via the -mfma compiler flag when using GCC. Although this can also be manually implemented using SIMD intrinsics, that is a niche case these days with compilers able to vectorise your code for you, but this article goes into detail concerning vectorisation of a MAC operation using SIMD intrinsics if you are curious.
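To illustrate the point, here is the whole forward pass reduced to that multiply-accumulate pattern, sketched in Python for consistency with the other examples in this article; in the C version, each iteration of the inner loop maps to a single fma(w, x, acc) call.

def neuron(inputs, weights, bias):
    # The core of feed-forward propagation is a multiply-accumulate:
    # acc = sum(w * x) + b, followed by an activation function.
    acc = bias
    for w, x in zip(weights, inputs):
        acc += w * x              # one MAC; fma(w, x, acc) in C
    return max(0.0, acc)          # ReLU activation

def forward(inputs, layers):
    # A layer is one neuron per output; the network is layers chained together.
    for weights, biases in layers:
        inputs = [neuron(inputs, w, b) for w, b in zip(weights, biases)]
    return inputs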

Beyond a simple Feed-Forward network, writing CNN models in C from scratch even without the back-propagation can be very laborious and completely disruptive to rapid prototyping.

The results were unanimous: me, myself, and I all came to the agreement that the 28x28 sample window was the most effective and responsive at short and long-range headshots. I tried the 96x192 sample in two topologies: A, three layers of 3x3 filters; and B, three layers of 9x9, 6x6, and 3x3 filters. I have no comment on which of the two 96x192 topologies was better; both were just “OK”.
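For reference, topology B might look something like the following in Keras; the filter counts, pooling layers, and dense head are my own assumptions, as only the kernel sizes and layer count are described above.

from tensorflow import keras
from tensorflow.keras import layers

# Topology B for the 96x192 sample window: three convolutional layers
# with 9x9, 6x6, and 3x3 filters respectively.
model = keras.Sequential([
    layers.Conv2D(32, (9, 9), activation="relu", input_shape=(192, 96, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (6, 6), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # enemy / not-enemy
])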

Setting up Python and TensorFlow, however, is another beast entirely, but Ubuntu users with an NVIDIA GPU can follow the steps below:

sudo apt install nvidia-cuda-toolkit
sudo apt install nvidia-driver-465
sudo apt install python3
sudo apt install python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow-gpu
sudo pip3 install --upgrade tensorflow-gpu

Keep in mind that nvidia-driver-465 has to be installed after nvidia-cuda-toolkit, otherwise the toolkit overwrites the nvidia-smi program, which is particularly helpful. The nvidia-settings utility is also optionally worth installing.

If you get an error claiming that cusolver.so could not be located, the generally accepted fix is to create a symbolic link to an older version in the place where the newer one was expected to be found, like so (although your libcusolver.so might be in a different location or be a different version, in which case you will have to search your drive for it):
sudo ln -s /usr/lib/x86_64-linux-gnu/libcusolver.so.10.6.0.245 /usr/lib/x86_64-linux-gnu/libcusolver.so.11

You will also need to install NVIDIA cuDNN, which can now be done via the Ubuntu package manager using sudo apt install nvidia-cudnn. Then you should be good to go!
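A quick way to verify that TensorFlow can actually see the GPU once everything is installed:

import tensorflow as tf

# Should print at least one PhysicalDevice of type 'GPU' if CUDA and cuDNN
# were picked up correctly; an empty list means TensorFlow fell back to CPU.
print(tf.config.list_physical_devices("GPU"))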

But if you’re just planning to utilise your CPU, it’s much easier:

sudo apt install python3
sudo apt install python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow-cpu
sudo pip3 install --upgrade tensorflow-cpu

If you’re still stuck, you can refer to the official installation guide here.

You’ll also want to make sure you have xterm, espeak, clang, and libx11-dev installed, like so (because I actually use clang in the compile.sh files and not gcc as recommended in the source files):

sudo apt install espeak
sudo apt install xterm
sudo apt install clang
sudo apt install libx11-dev

This is a video of the latest GOBOT in action.

If you just want to quickly test these things out, I recommend you launch a Deathmatch game as “Practice With Bots” and then punch these settings into the developer console:

sv_cheats 1
hud_showtargetid 0
cl_teamid_overhead_mode 1
cl_teamid_overhead_maxdist 0.1
bot_stop 1

These bots pose absolutely no threat to online gaming in CS:GO because neural networks currently are just not as complex or responsive as a human brain, although the benefit, I suppose, is that they never get distracted.

Beyond academic and novelty value, I think the only real-world application of this work would be for those who have mobility impairments such as ALS, MS, or CP. If such users are still able to control a joystick with their fingers or hand to control player motion, and have enough neck or eye movement to direct a computer cursor to aim the player reticule in-game, then they could combine that input control with this auto-shoot project to automatically issue weapon triggers for them in-game. It’s plausible that such games against other mobility-impaired players in a LAN tournament could yield an easier and more enjoyable experience.

So that’s it for now; I’m never really fully finished. Some of the latest work on the project can be found either on my personal GitHub here or on the TFNN project GitHub under TBVGG here.
