Creating a Machine Learning Auto-Shoot bot for CS:GO. Part 6.

Continuing from Part 5 of “Creating a Machine Learning Auto-shoot bot for CS:GO.”, this time around I have worked on perfecting the Keras-trained CNN models for both the original 28x28 sample window size and the new 96x192 sample window.

Throughout this entire series there has been one major bottleneck to this project: the speed at which the X window system, commonly used on Linux, can retrieve rectangles of pixels from the CS:GO game. For the 28x28 sample size, this bottleneck tends to average around 50–60 FPS; for the 96x192 sample size, we’re talking about 5–6 FPS. I am sure that a developer well versed in working with Xlib could retrieve the whole game window of pixels and convert them to a mean/samplewise normalised buffer at 100–150 FPS, but that is a little beyond my expertise, and I’m not experienced enough in this area to know whether bypassing safe functions such as XGetPixel and XQueryColor could cause compatibility issues. In any case, 50–60 FPS at the 28x28 sample size is honestly just fine for our purposes. 96x192 was a novel idea, but ultimately it didn’t prove to be as adequate at detecting enemies as the 28x28 model.
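
For context, here is a minimal sketch of how a sample rectangle can be pulled from the screen using those safe Xlib functions; the screen offset, the 24-bit TrueColor pixel masks, and the grey normalisation are illustrative assumptions, not the exact code used by the aim program (compile with -lX11):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

int main(void)
{
    Display *d = XOpenDisplay(NULL); // connect to the X server
    if(d == NULL){return 1;}

    // fetch a 28x28 rectangle of pixels at an assumed screen offset
    XImage *img = XGetImage(d, DefaultRootWindow(d), 960, 540, 28, 28, AllPlanes, ZPixmap);
    if(img == NULL){XCloseDisplay(d); return 1;}

    // XGetPixel is the "safe" per-pixel accessor mentioned above;
    // calling it 784 times per sample is where the time goes
    float sample[28*28];
    for(int y = 0; y < 28; y++)
    {
        for(int x = 0; x < 28; x++)
        {
            const unsigned long p = XGetPixel(img, x, y);
            const float r = (float)((p & 0x00FF0000) >> 16); // assumes 24-bit TrueColor
            const float g = (float)((p & 0x0000FF00) >> 8);
            const float b = (float)( p & 0x000000FF);
            sample[y*28+x] = (r+g+b) / 765.f; // crude grey normalisation to 0-1
        }
    }

    XDestroyImage(img);
    XCloseDisplay(d);
    printf("first pixel: %f\n", sample[0]);
    return 0;
}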

One of the major changes I have made in this iteration is that I am no longer converting the weights to C for the forward propagation. This time I have created a small Python program that acts as a daemon: it loops every millisecond checking for a new input file to ingest, performs a prediction on a loaded Keras model, and finally outputs the result as a separate file. The new “aim” program now just exports screen samples to a file and then reads the result of the Keras prediction back from another file. The throughput of this process is very high, and when executed correctly it appears flawless to the end user. But the main reason for moving to this model was to increase the productivity of prototyping new or slightly modified networks. This system allows me to train a new network topology and simply export it as a Keras model, which the Python daemon can then load and execute. The only hard-coded parameters in the daemon that may need changing are the expected input size in bytes and the shape of the array fed into the predictor.
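
To make that handoff concrete, below is a sketch of what the C side of the exchange could look like; the file names, the 784-float sample size, and the write-then-rename trick to stop the daemon ingesting a half-written file are illustrative assumptions rather than the project’s exact code:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    float sample[784] = {0}; // the normalised screen sample would go here

    // write the sample to a temporary file and rename it, so that the
    // daemon never ingests a partially written input
    FILE *f = fopen("input.tmp", "wb");
    if(f == NULL){return 1;}
    fwrite(sample, sizeof(float), 784, f);
    fclose(f);
    rename("input.tmp", "input.dat");

    // poll until the daemon has written its prediction back
    float prediction = 0.f;
    while(1)
    {
        f = fopen("result.dat", "rb");
        if(f != NULL)
        {
            const size_t r = fread(&prediction, sizeof(float), 1, f);
            fclose(f);
            if(r == 1)
            {
                remove("result.dat"); // consume the result
                break;
            }
        }
        usleep(100); // don't spin the CPU while waiting
    }

    printf("activation: %f\n", prediction);
    return 0;
}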

When it comes to writing a forward propagation in pure C, I found there is nothing more enjoyable than doing this with a Feed-Forward Neural Network. Not only are they easier to implement, but they are also much easier to vectorise, as the forward propagation reduces to a multiply-accumulate (MAC) operation. This can be achieved in plain C using the fma() function, or via compiler flags such as -mavx -mfma using the GCC compiler. It can also be manually implemented using SIMD intrinsics, but that is a niche case these days with compilers able to vectorise your code for you; this article goes into detail concerning vectorisation of a MAC operation using SIMD intrinsics if you are curious.
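
As a minimal sketch of what that looks like, here is a single fully-connected layer written so the inner loop is a pure MAC; the layer sizes, dummy weights, and use of the single-precision fmaf() variant are illustrative assumptions (compile with something like gcc -O3 -mavx -mfma prog.c -lm):

#include <stdio.h>
#include <math.h>

#define INPUTS  784
#define OUTPUTS 256

// one dense layer: out = relu(W*in + b); the inner loop is a plain
// multiply-accumulate that the compiler can vectorise with -mavx -mfma
void dense_forward(const float *in, const float *w, const float *b, float *out)
{
    for(int o = 0; o < OUTPUTS; o++)
    {
        float acc = b[o];
        for(int i = 0; i < INPUTS; i++)
            acc = fmaf(w[o*INPUTS + i], in[i], acc); // acc += w*x
        out[o] = acc > 0.f ? acc : 0.f; // ReLU activation
    }
}

int main(void)
{
    // static so the ~800 KB weight matrix is zero-initialised and off the stack
    static float in[INPUTS], w[OUTPUTS*INPUTS], b[OUTPUTS], out[OUTPUTS];
    for(int i = 0; i < INPUTS; i++){in[i] = 0.5f;}         // dummy input
    for(int i = 0; i < OUTPUTS*INPUTS; i++){w[i] = 0.01f;} // dummy weights
    dense_forward(in, w, b, out);
    printf("out[0]: %f\n", out[0]);
    return 0;
}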

Beyond a simple Feed-Forward network, writing CNN models in C from scratch, even without the back-propagation, can be very laborious and completely disruptive to rapid prototyping.

The results were unanimous: me, myself, and I all came to the agreement that the 28x28 sample window was the most effective and responsive at short- and long-range headshots. I tried the 96x192 in two topologies: A, three layers of 3x3 filters; and B, three layers of 9x9, 6x6, and 3x3 filters. I have no comment on which of the two 96x192 topologies was better; both were “OK”.

Now, these two models and the datasets I created for them are pretty huge; the whole zip file comes to 374 MB, so I have uploaded it to Mega.nz this time, simply because Mega is the only large-file hosting service with a reputable history of preserving large files with minimal hassle for end users wanting to download them. The download is here. The models are very simple to use: if you’re looking to just jump right in, navigate to the /PredictBot folder and execute exec.sh, which will launch the Python daemon and aim program for you. Setting up Python and TensorFlow, however, is another beast entirely, but Ubuntu users with an NVIDIA GPU can follow the steps below:

sudo apt install nvidia-cuda-toolkit
sudo apt install nvidia-driver-465
sudo apt install python3
sudo apt install python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow-gpu
sudo pip3 install --upgrade tensorflow-gpu

Keep in mind that nvidia-driver-465 has to be installed after nvidia-cuda-toolkit, otherwise the toolkit overwrites the nvidia-smi program, which is a particularly helpful utility. The nvidia-settings utility is also optionally worth installing.

If you get an error claiming that libcusolver.so.11 could not be located, the generally accepted fix is to create a symbolic link to an older version at the location where it was expected to be found, like so (although your libcusolver.so might be in a different location or a different version, in which case you will have to search your drive for it):
sudo ln -s /usr/lib/x86_64-linux-gnu/libcusolver.so.10.6.0.245 /usr/lib/x86_64-linux-gnu/libcusolver.so.11

You will also need to install NVIDIA cuDNN. Then you should be good to go!

But if you’re just planning to utilise your CPU, it’s much easier:

sudo apt install python3
sudo apt install python3-pip
sudo pip3 install --upgrade pip
sudo pip3 install tensorflow-cpu
sudo pip3 install --upgrade tensorflow-cpu

If you’re still stuck, you can refer to the official installation guide here.

You’ll also want to make sure you have xterm, espeak, and clang installed, like so (I actually use clang in the compile.sh files, and not gcc as recommended in the source files):

sudo apt install espeak
sudo apt install xterm
sudo apt install clang

Once you’re all set up, all you need to know is that the GOBOT9 directory is the superior 28x28 headshot bot and the GOBOT10 directory is the novel 96x192 bot. In the respective /PredictBot folder you will find several different keras_model folders; just rename the one you wish to use to “keras_model”. There are a few different variations: in GOBOT9 they are all just variations of the filter_resolution value used in /SampleBot/train.py, and in GOBOT10 it is the same deal but with the A and B versions alluded to above.

So which are the best versions? If you’re looking for the best all-in-one FNN, it’s GOBOT7; but if you’re after the best of the best, it’s the CNN-based GOBOT9, hands down: minimal misfires and fast, responsive headshots.

Want to see GOBOT9 in action? Watch it here to avoid the lacklustre quality of the YouTube embed below.

Using the A256 dataset, in the “T” full auto-shoot mode.

If you just want to quickly test these things out, I recommend launching a Deathmatch game as “Practice With Bots” and then punching these settings into the developer console:

sv_cheats 1
hud_showtargetid 0
cl_teamid_overhead_mode 1
cl_teamid_overhead_maxdist 0.1
bot_stop 1

Although throughout this series I have maintained a tongue-in-cheek humour about the project, these bots pose absolutely no threat to online gaming in CS:GO and were absolutely never intended to. Beyond academic and novelty value, I think the only real-world application of this work would be for those who have mobility impairments such as ALS, MS, or CP. Such users, if still able to control a joystick with either the left or right arm to control player motion, and with some neck or eye movement to direct a computer cursor to aim the player reticule in-game, could combine that input control with this auto-shoot project to automatically issue weapon triggers for them in-game. It’s plausible that such games against other mobility-impaired players in a LAN tournament could yield an easier and more enjoyable experience.

So that’s it. I know I said this before, but I’m done.

You win Gabe.

… until next time >:)

Next time happened: https://github.com/mrbid/CSGO_TENSOR_TRIGGER