A new alternative to the Fast Artificial Neural Network Library (FANN) in C

Fletch
4 min read · Nov 30, 2020

FANN was originally released in November 2003, making it 17 years old to the month at the time of writing, November 2020. It is one of only a few machine learning libraries written in C, others including THNN, NeonRVM, cONNXr, CCV, VLFeat, and the most recent addition (2016), Darknet; a more comprehensive list can be found here. The problem is that many of these projects veer towards being specialised implementations, and for this reason FANN has remained the king of general-purpose use throughout its 17-year lifespan to date.

While C has become less favoured in the domain of neural network programming over the years, with Python now vastly dominating the scene, much of the performance-critical back end is still written in C, or more commonly C++ in modern times. It is usually not presented or packaged in a developer-friendly manner, however, because it is not intended to be used directly; rather, it is intended to be used via its Python bindings.

The problem with C++ is that it generally forces code into an object-oriented design pattern, whereas with C you can optionally wrap the code in an object-oriented C++ wrapper. In this sense, C provides the best performance, portability, and flexibility: flexible in how it is integrated, and portable in binding to other languages. That said, in some cases C++ has superior out-of-the-box support for dynamic memory, among other functionality provided by common C++ libraries such as the STL and Boost.

So here we are in 2020, and out of the rich selection of Python libraries we now have for machine learning (Chainer, Keras, and PyTorch, to name a few), I think it is fair to say that PyTorch is really leading the way, given its wide support in leading machine learning work such as Google's BERT and OpenAI's GPT-2. It is an age where the mathematics and intricacies of how a neural network operates are no longer a primary concern for those implementing one. The design process is now akin to Duplo blocks: the designer first selects some order of layers, then an activation function, optimiser, training data, and back-propagation algorithm. In this manner the core principles are, I believe or fear, becoming somewhat of an esoteric knowledge, particularly in a field that has been predominantly a "see what sticks" trial-and-error domain rather than one of mathematical proofs.

So, after a long-winded introduction, this brings us back around to the importance of C in neural network design. I feel it is important to have core C libraries that are simple, concise, and clear for future neural network engineers to learn from. There is a big step between an algebraic formula and an actual programmatic implementation, and because C so closely resembles many other common languages, such as JavaScript and PHP, I argue that it best represents a working, compilable pseudocode. While the algebraic form is concise, it leaves a lot to the imagination regarding methods of implementation.
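To make that point concrete, here is a minimal sketch (my own illustration, not code from any particular library) of how the algebraic form of a single neuron, y = sigma(sum(w_i * x_i) + b), translates almost line-for-line into C:

```c
#include <math.h>

/* Sigmoid activation: sigma(x) = 1 / (1 + e^-x) */
float sigmoid(const float x)
{
    return 1.f / (1.f + expf(-x));
}

/* Forward pass of a single neuron: y = sigma(sum(w_i * x_i) + b) */
float neuron_forward(const float* w, const float* x, const float b, const int n)
{
    float sum = b;
    for(int i = 0; i < n; i++)
        sum += w[i] * x[i];
    return sigmoid(sum);
}
```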

Furthermore, I should point out that I concern myself primarily with standard CPU implementations in C, not SIMD or GPU implementations, which sacrifice clarity for performance; that is what the TensorFlows of the field readily provide for us.

So, without further ado…

Introducing the Tiny Fully Connected Neural Network (TFCNN) library.

TFCNN is a fully connected neural network library in C with a small footprint; as such, it can be included in your project via a single header file. It also serves as a great example for beginners, who can not only implement but also rapidly learn the fundamental algorithms and functions of a neural network.
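For example, the training step at the heart of such a library boils down to little more than the delta rule of stochastic gradient descent. Continuing the single-neuron sketch from earlier (again my own illustration, not TFCNN's source):

```c
/* One stochastic gradient descent step for a single sigmoid neuron
   trained on squared error. 'out' is the output of neuron_forward()
   from the earlier sketch, 'target' is the desired label. */
void neuron_train(float* w, float* b, const float* x, const float out,
                  const float target, const float lr, const int n)
{
    /* dE/dz = (out - target) * sigma'(z), and for the sigmoid
       sigma'(z) = out * (1 - out) */
    const float delta = (out - target) * out * (1.f - out);
    for(int i = 0; i < n; i++)
        w[i] -= lr * delta * x[i]; /* dE/dw_i = delta * x_i */
    *b -= lr * delta;              /* dE/db  = delta */
}
```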

TFCNNv1 targets any platform that compiles C code. It features binary classification and a staple set of 5 activation functions, 5 optimisers, and 3 uniform weight initialisation methods. A CPU-based uint8 quantised version is additionally available which, unlike the equivalent FANN implementation, can be used for training as well as classification. This does, however, come at the additional cost of casting operations and slow float32 emulation on platform architectures without native support.
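To illustrate that trade-off, a common affine quantisation scheme maps a float32 range onto the 256 levels of a uint8. The sketch below shows the idea, though I should stress it is a textbook scheme and not necessarily the exact one TFCNN uses:

```c
#include <stdint.h>

/* Affine uint8 quantisation over a known value range [min, max]. */
typedef struct { float scale; float min; } qparams;

qparams make_qparams(const float min, const float max)
{
    const qparams q = { (max - min) / 255.f, min };
    return q;
}

/* Assumes x lies within [min, max]; a robust version would clamp. */
uint8_t quantise(const float x, const qparams q)
{
    return (uint8_t)((x - q.min) / q.scale + 0.5f);
}

/* Every use of a quantised weight pays a cast back to float32. */
float dequantise(const uint8_t x, const qparams q)
{
    return (float)x * q.scale + q.min;
}
```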

TFCNNv2 targets Linux/BSD/Unix platforms and features a superfluous set of 20 activation functions, softmax, and a regular multiple-classification implementation. Without going into too much detail: where v1 is vanilla and reliable, using tried and tested methods, v2 expands upon that with more selection and, in some cases, purely experimental options, such as derivatives based on lookup tables where perhaps they really should not be.
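Softmax itself, at least, is simple enough to show here; a standard numerically stable version (illustrative, not lifted from the TFCNNv2 source) looks like this:

```c
#include <math.h>

/* Numerically stable softmax: subtract the maximum before
   exponentiating so that expf() never overflows. */
void softmax(float* x, const int n)
{
    float max = x[0];
    for(int i = 1; i < n; i++)
        if(x[i] > max) max = x[i];

    float sum = 0.f;
    for(int i = 0; i < n; i++)
    {
        x[i] = expf(x[i] - max);
        sum += x[i];
    }
    for(int i = 0; i < n; i++)
        x[i] /= sum;
}
```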

In no way is TFCNN intended to be a replacement for FANN, as the two projects have different goals and functionality subsets. When I was personally looking for C implementations of fully connected neural networks, the only complete, versatile, and ready-to-use option I could find was FANN, and I wanted to expand that selection with something closer to what I would have liked to have found. I am an enthusiast of the C programming language and, more recently, of programming neural networks.

TFCNN supports more than FANN in some areas and less in others, and for this reason I believe each makes for its own specific use cases.

I do, however, propose that TFCNN makes the better choice for beginners who wish to understand how a classical neural network works without having to flick through too many source files. This really is as simple, clear, and concise as I feel one could make the implementation of such a neural network in the C programming language.

And, not to take up any more of your time: if you are interested in learning more about the TFCNN project, please visit the GitHub at:
https://github.com/tfcnn
