Creating a Machine Learning Auto-Shoot Bot for CS:GO. Part 4.

James William Fletcher
4 min read · Jul 2, 2021


Continuing from Part 3 of “Creating a Machine Learning Auto-shoot bot for CS:GO,” this time I used TensorFlow Keras with Python to train the neural network weights. Previously I had trained weights with back-propagation code written entirely in C, and the only mainstream optimiser I had not implemented was Adam. The reason was that Adam required more parameters, which didn’t sit well with me when every other optimiser needed only one extra set of parameters. Overlooking Adam in this way was a mistake on my part, because in my first trial run with Keras I found that the Adam optimiser was, for my purposes, much superior.

Having discovered the effortless power of back-propagation that Keras armed me with, I decided to revisit the CS:GO auto-shoot project from the beginning: could I now get the fully connected feed-forward version to train an adequate set of weights? I ran a few tests on the old dataset I had collected in Part 3 and pared the solution down as far as I could, until I eventually found that I could get good results with a simple two-layer network with one unit per layer. Preposterous, right? Not really: since the input was still 28x28 pixels, what I was essentially training was a single kernel, a kind of fuzzy tracing template; either the object matched the fuzzy cookie-cutter enemy or it was something else. The first layer used a tanh activation and the final layer output a sigmoid.
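
To give an idea of how little is involved, the model amounts to something along these lines in Keras. Only the two single-unit layers, their activations and the Adam optimiser follow from the description above; the placeholder data, loss choice and training settings below are illustrative assumptions, not the project's real settings.

import numpy as np
from tensorflow import keras

# sketch of the two single-unit layers described above; data, loss and
# hyperparameters are illustrative assumptions, not the project's real settings
model = keras.Sequential([
    keras.layers.Dense(1, activation='tanh', input_shape=(784,)),  # 28x28 scan window flattened
    keras.layers.Dense(1, activation='sigmoid')                    # enemy / not-enemy output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# stand-in data just to make the sketch runnable; substitute the real 28x28 samples and labels
train_x = np.random.rand(256, 784).astype('float32')
train_y = np.random.randint(0, 2, size=(256,)).astype('float32')
model.fit(train_x, train_y, epochs=8, batch_size=32)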

Of course, it was not great. But it was also not all that bad, and what I particularly liked was how well it performed in contrast to how simple the set of weights had become; it was truly no huge feat for a CPU to process. With some tweaking I managed to reduce the misfire rate by decreasing the scan interval and adding a sequence detector, so that the trigger was only executed when the neural network's output activation was above the threshold x times in a row over consecutive scans. This worked well: I was able to significantly reduce the output threshold and the scan interval, meaning that the weights which had only been trained on counter-terrorist samples now worked on terrorists too, and by reducing the scan frequency even less pressure was being put on the CPU.
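
The sequence detector itself is trivial. Here is a rough Python sketch of the idea; the actual bot does this in C, and the threshold and run length below are just example values.

def sequence_gate(activations, threshold=0.5, run_length=3):
    # yield True only once `run_length` consecutive activations exceed `threshold`
    consecutive = 0
    for a in activations:
        if a > threshold:
            consecutive += 1
            if consecutive >= run_length:
                consecutive = 0
                yield True      # fire
                continue
        else:
            consecutive = 0     # any miss resets the run
        yield False             # hold fire

# a lone spike never fires; only a run of consecutive hits does
print(list(sequence_gate([0.9, 0.2, 0.8, 0.9, 0.95, 0.4])))
# [False, False, False, False, True, False]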

So, to my amazement, here I was with a small, slightly inadequate but not terrible model, and with some tweaking it now had a very minimal misfire rate and fairly impressive detection rates. To further improve the perceived quality of the network I gated its activation behind a key-down toggle, so that players only enable the bot as they drift the mouse over an enemy; this saves more CPU cycles and eliminates what had already become a much less frequent misfire.
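
If you want to experiment with the key-down toggle in Python, something along these lines works; the real bot handles its hotkey in C, and pynput plus the choice of left Alt here are my own assumptions.

from pynput import keyboard

bot_enabled = False

def on_press(key):
    global bot_enabled
    if key == keyboard.Key.alt_l:   # hypothetical hold-to-enable key
        bot_enabled = True

def on_release(key):
    global bot_enabled
    if key == keyboard.Key.alt_l:
        bot_enabled = False

# run the listener in the background and skip the scan/inference entirely while bot_enabled is False
keyboard.Listener(on_press=on_press, on_release=on_release).start()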

Exporting my weights from Keras to C was not that difficult; the syntax of Python was a bit of a learning curve, but not terribly so. Below is the Python script I put together to export the weights of a sequential network. It iterates over each layer, writing out each unit's weight array followed by the unit's bias, and saves everything as a C header file with each layer as a separate array.

print("Exporting weights...")
li = 0
f = open(project + "/" + project + "_layers.h", "w")
if f:
f.write("#ifndef " + project + "_layers\n#define " + project + "_layers\n\n")
for layer in model.layers:
total_layer_weights = layer.get_weights()[0].flatten().shape[0]
total_layer_units = layer.units
layer_weights_per_unit = total_layer_weights / total_layer_units
print("+ Layer:", li)
print("Total layer weights:", total_layer_weights)
print("Total layer units:", total_layer_units)
print("Weights per unit:", int(layer_weights_per_unit))
isfirst = 0
wc = 0
bc = 0
weights = layer.get_weights()
if weights != []:
f.write("const float " + project + "_layer" + str(li) + "[] = {")
for w in weights[0].flatten():
wc += 1
if isfirst == 0:
f.write(str(w))
isfirst = 1
else:
f.write("," + str(w))
if wc == layer_weights_per_unit:
f.write(", /* bias */ " + str(weights[1].flatten()[bc]))
wc = 0
bc += 1
f.write("};\n\n")
li += 1
f.write("#endif\n")
f.close()

Do I have a video? Yes! Every shot fired in this video, like every video prior, was fired by the AI. I am just the actor moving and aiming.

There are a few other tricks I employed to increase detection, such as giving the scan area a random offset from the center of the screen on each scan interval. I often found I got better results if I shook my cursor around on the target, so I figured: why not have the algorithm do that for me? I also implemented an adaptive firing rate: if detection was over 0.7 the bot would fire once, over 0.8 it would fire three times, and over 0.9 it would fire six times. Simple, yet effective.
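
Both tricks are only a few lines. Here is a quick sketch of how they might look; the 0.7/0.8/0.9 thresholds are the ones above, while the jitter range and the helper names are assumptions for illustration.

import random

MAX_OFFSET = 6  # pixels of jitter around the screen center - assumed value

def scan_origin(center_x, center_y):
    # random offset of the scan area from the center of the screen on each scan
    return (center_x + random.randint(-MAX_OFFSET, MAX_OFFSET),
            center_y + random.randint(-MAX_OFFSET, MAX_OFFSET))

def shots_for(activation):
    # adaptive firing rate: stronger detections fire longer bursts
    if activation > 0.9:
        return 6
    if activation > 0.8:
        return 3
    if activation > 0.7:
        return 1
    return 0

print(scan_origin(960, 540), shots_for(0.85))  # e.g. (963, 537) 3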

In the next article I use Keras to train a CNN version, and one with a larger scan window.

Note: when running these bots you need to run your game at native screen resolution, because that's how I trained them. No half-resolutions with blurry pixel stretching!

Getting bored? Hop over to the GitHub to browse the latest code release here.

See how the CNN version trained by Keras turned out in Part 5 by clicking here!
