Creating a Machine Learning Auto-Shoot bot for CS:GO. Part 5.

James William Fletcher
4 min read · Jul 17, 2021


Continuing from Part 4 of “Creating a Machine Learning Auto-shoot bot for CS:GO”, I have once again used TensorFlow Keras to train the network weights, but this time for the CNN version.

It’s not elegant; what I set out to achieve here was to re-use the same TBVGG3 code with minimal modification to load weights trained in TensorFlow Keras using Python. In essence, I stripped out all of the backpropagation code, made some small changes to the forward pass (adding a simple dense layer), exported the weights from Keras as flat files, and then loaded that data into the multidimensional arrays used in the C program.

The first part of the process was to train the weights in Keras using Conv2D layers, and this was the easy part. In my original TBVGG3 solution I went from a GAP layer to an average of the GAP outputs and then into a modified sigmoid function. In Keras I could not seem to achieve this type of transformation using standard layers, so I opted for the simpler route of going from a GAP layer into a single-unit Dense layer with a sigmoid output, like so:

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

model = Sequential([
    keras.Input(shape=(28, 28, 3)),
    layers.Conv2D(2, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(4, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(8, kernel_size=(3, 3), activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
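Training itself was then plain binary classification. A minimal sketch of the compile-and-fit step, assuming binary cross-entropy with Adam and placeholder train_x/train_y arrays (the exact optimiser, loss and epoch count I used aren't recorded in this article):

# hypothetical training configuration; adjust to taste
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_x, train_y, batch_size=32, epochs=20, validation_split=0.1)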

It is worth noting that this version has fewer kernels per layer than the original; this is because I squeezed as much accuracy as I could out of the trained results for the smallest possible network size.
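To put that in perspective, a quick parameter count of the model above (kernel_h × kernel_w × in_channels × out_channels, plus one bias per filter):

model.summary()
# Conv2D(2): 3*3*3*2 + 2 =  56 parameters
# Conv2D(4): 3*3*2*4 + 4 =  76 parameters
# Conv2D(8): 3*3*4*8 + 8 = 296 parameters
# Dense(1):  8*1 + 1     =   9 parameters
# total trainable:         437 parameters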

Once the weights had trained I flattened and exported them, but I also exported a non-flattened view so that I could see what order the weights were coming in. I had a hunch that it would not be a straight copy-and-paste job and that the weights would need re-ordering for the C multidimensional array (and I was right); in future scenarios I would write the forward pass in C to use the weights as-is from Keras. But anyhoo, this time around exporting the weights was a lot simpler than before:

import sys
import numpy as np

np.set_printoptions(threshold=sys.maxsize)
for i, layer in enumerate(model.layers):
    if layer.get_weights():
        # human-readable dump so the weight ordering can be inspected
        with open(f"view{i}.txt", "w") as f:
            f.write(str(layer.get_weights()))
        # flat weight and bias dumps, one file per layer so earlier
        # layers are not overwritten by later ones
        np.savetxt(f"w{i}", layer.get_weights()[0].flatten(), delimiter=",")
        np.savetxt(f"b{i}", layer.get_weights()[1].flatten(), delimiter=",")

However, it did make the import code a little heavier in exchange, as the weights needed to be re-ordered.
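For reference, Keras stores a Conv2D kernel as a (kernel_h, kernel_w, in_channels, out_channels) array, so getting it into a C array indexed the other way round amounts to a transpose before flattening. A minimal sketch, assuming a [filter][channel][row][col] target layout (illustrative only; the real target is whatever ordering the TBVGG3 arrays use):

import numpy as np

k = model.layers[0].get_weights()[0]           # shape (3, 3, 3, 2) for the first Conv2D
reordered = k.transpose(3, 2, 0, 1).flatten()  # -> (out_ch, in_ch, row, col), then flat
np.savetxt("w0_reordered", reordered, delimiter=",")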

Once I had the weights imported I tried out the solution without the added dense layer and had abysmal results; then I added in the dense layer and, well… nothing, no activation at all. I thought this odd, so I removed the sigmoid to see what was being fed into it, and I could see that I was getting values between -100 and 0. I have no idea why, but I assume this is down to some optimisation TensorFlow uses. The good news is that the values being output were as expected, just in an odd range, so to scale them back to a 0–1 range I only had to multiply the output by -0.01 (the reciprocal of -100, so equivalent to dividing by -100 but a touch cheaper), which is also more efficient than the traditional 1 / (1 + exp(-x)) sigmoid. Cool.
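In code, the sigmoid replacement is just one multiply. A minimal sketch of the idea (shown in Python for clarity; in the bot this lives in the C forward pass):

def scaled_output(x):
    # observed pre-activation range was roughly [-100, 0];
    # multiplying by -0.01 maps -100 -> 1.0 and 0 -> 0.0
    return x * -0.01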

So how did it perform? The code which houses this bot was the same code used in Part 4 for the FNN version, and I used the same “hyperparameters”:

#define SCAN_VARIANCE 3
#define SCAN_DELAY 10000
#define ACTIVATION_SENITIVITY 0.75
#define REPEAT_ACTIVATION 2
#define FIRE_RATE_LIMIT_MS 800

And to be honest with you, the CNN version was in this case a little worse. Why? It could be down to the FNN having less compute time and thus calculating activations faster, or to this job being better solved by cookie-cutter object detection than by feature matching, given the small input of 28x28 pixels. Sure, this may be the same size used by the MNIST dataset, but one has to consider that MNIST is a much simpler representation to feed a network: it’s practically solid black lines against a white background representing handwritten numeric characters or “digits”, whereas with CS:GO we are feeding the network very complex and noisy images.

I do have a plan though, and that is to make another convolutional network trained in Keras, but this time using a scan window that can encapsulate the entire form of a player model at an average distance from the player; something around 96x192 pixels. I will save all of the samples out as bitmaps rather than pre-normalised float32 arrays and then let Keras deal with normalisation, to allow users a bit more flexibility over the dataset.
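As a sketch of what letting Keras handle the normalisation could look like, recent Keras versions provide a Rescaling layer that maps raw 8-bit bitmap pixels into the 0–1 range inside the model itself (the 192x96 input shape below is an assumption based on the planned window, not a final design):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(192, 96, 3))  # planned 96x192 scan window (height, width, channels)
x = layers.Rescaling(1.0 / 255)(inputs)   # normalise 0-255 bitmap pixels to 0-1 in-graph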

(96x192) Pesky counter-terrorists!

Don’t worry, this is far from finished! Stay tuned for Part 6; I promise you perfect headshots in the near future.

Edit: I ran some tests using a 96x192 sample window, but it was not fast enough to process in real time due to the X11 XGetImage() bottleneck, although there was much less misfiring. I have since created a new 28x28 sample set twice the size of the old one and had better results with the FNN model. I will be sticking to the 28x28 pixel sample window in future iterations.

I also retrained the FNN version with a new dataset, download it here.

Continue reading this series of articles in Part 6 here.
