28th September

Code(f.) members flocked to our host and sponsors, Equal Experts, for our inaugural meetup. We were very fortunate to have an international conference speaker, Katharine Beaumont (@katharineCodes), presenting on “Neural Networks and Artificial Intelligence”.

The presentation got off to a good start with a quote from Ian Goodfellow et al.’s book, “Deep Learning”, that explains the challenges currently faced by artificial intelligence:

“The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally – problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images.”

So how can we use neural networks to help with artificial intelligence? To help us understand this complex challenge, the topic was broken down into the following questions:

  • What are we trying to get from neurons?
  • How can we use information from neurons in artificial intelligence?
  • Why is it important?

In order to fully understand how artificial neurons work, we first need to understand their underlying biology.

Diagram of a neuron and the synapse. Picture taken from https://www.khanacademy.org

In our bodies, we have a plethora of dendrites (nerve endings) that connect to neurons. When dendrites receive excitatory signals, they fire an electrical impulse down the neuron through the axon. A signal can also be inhibitory, which prevents an electrical impulse from being sent. Neurons are connected to each other by synapses; for communication to occur across these synapses, a neurotransmitter is passed from the presynaptic cell to the target receptors on the postsynaptic cell. The amount of neurotransmitter released is positively correlated with the magnitude of the impulse: the larger the impulse, the more chemical is transmitted.


By studying how neurons function, we can work towards three things:

  1. Understanding how the brain works.
  2. A style of parallel computation inspired by neurons.
  3. Novel algorithms that solve practical problems.


Artificial neurons essentially transform input data into output data. We can translate the biological picture into a mathematical model in which the synapses become weights. The simplest neural network was invented in 1957 by Frank Rosenblatt: a single neuron that uses the ‘feed-forward’ computational model. This became known as the perceptron, which consists of one or more inputs, a single processor and an output.
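To make this concrete, here is a minimal sketch of a Rosenblatt-style perceptron (not code from the talk): weighted inputs, a bias, a step activation, and the classic perceptron learning rule, trained on the logical AND function. The function names and the learning-rate value are illustrative choices.

```python
def predict(weights, bias, inputs):
    """Feed-forward: weighted sum of the inputs, then a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, learning_rate=0.1, epochs=20):
    """Perceptron learning rule: nudge the weights towards correct outputs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions, which is exactly why later work stacked many such neurons into networks.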

Setup of Artificial Neurons. Picture taken from http://www.global-warming-and-the-climate.com/climate-forcing.html

The above picture shows a neural network that receives n inputs, each multiplied by its own weight. The neuron sums these weighted inputs and applies an activation function to the result; in artificial neural networks this is also known as a transfer function, as the diagram shows. An activation function controls whether a neuron is ‘active’ or ‘inactive’, much like our biological electrical impulses being excitatory or inhibitory. Activation functions come in many forms, such as linear or sigmoid. 3D graphical representations of non-linear activation functions can be visualised here. Finally, the artificial neuron outputs the result.
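The weighted-sum-then-activation step described above can be sketched in a few lines of Python (the input and weight values below are made up for illustration):

```python
import math

def sigmoid(z):
    """Sigmoid squashes any real number into (0, 1) - a smooth 'on/off'."""
    return 1 / (1 + math.exp(-z))

def artificial_neuron(inputs, weights, activation):
    """Multiply each input by its weight, sum, then apply the activation."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return activation(z)

inputs, weights = [0.5, -1.0, 2.0], [0.8, 0.2, 0.4]
# Weighted sum z = 0.5*0.8 + (-1.0)*0.2 + 2.0*0.4 = 1.0
print(artificial_neuron(inputs, weights, lambda z: z))        # linear: 1.0
print(round(artificial_neuron(inputs, weights, sigmoid), 3))  # sigmoid: 0.731
```

Swapping the `activation` argument is all it takes to move between a linear neuron and a sigmoid one, which is why activation functions are usually treated as a pluggable choice.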


Google famously uses machine learning with neural networks in its voice and image recognition work. It has various programmes utilising this technique, such as DeepDream, a computer vision programme that uses a convolutional neural network.

Katharine provided a visual example to illustrate this. Our brains are very good at recognising objects such as a rabbit; computers, on the other hand, find it hard to recognise the shape and detail of such objects. What about the following? What does it look like?

A Komondor Dog

From a computer’s perspective, it looks very similar to:

A Yarn Mop

We want to build a network of neurons that can figure this out artificially.
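“Building a network of neurons” simply means stacking the single-neuron idea into layers, where each neuron takes all of the previous layer’s outputs as its inputs. The weights below are made up purely for illustration; a real network would learn them from data:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_matrix):
    """One layer: each neuron has its own weights over all the inputs."""
    return [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            for weights in weight_matrix]

# A tiny two-layer feed-forward network:
# 3 inputs -> 2 hidden neurons -> 1 output neuron.
hidden_weights = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]]
output_weights = [[1.0, -1.0]]

x = [0.5, 0.1, 0.9]
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)
print(output)  # a single value between 0 and 1
```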

The presentation was concluded with a great visualisation of how neural networks work.

For further reading on this topic, Katharine recommended the following Coursera courses: Machine Learning and Neural Networks.

We look forward to seeing everyone at our next meetup in October. Please sign up here so you don’t miss out.

Blog post as seen on www.codef.co.uk