The eye, whether in a human or another animal, is one of the most complex organs in the body, and the one most relied upon for perceiving the outside world. Most creatures also use other senses such as touch and smell, but operating in a complex environment becomes extremely difficult without vision. For exactly this reason the human eye became a fundamental point of study for machine learning and artificial intelligence engineers. AI has been moving into the physical world for some time now, as shown by the rise of autonomous vehicles, so replicating the observational machinery refined in the eye provides a well-tested blueprint for external perception.
The human eye, shown on the left, displays the basic components involved. Light bounces off external objects, a leaf in this case, then enters the eye through the pupil and is promptly focused by the lens onto the rear of the eye. The biological processes required to achieve this much are already astounding, but what happens next is at a whole new level, borderline magical. I think so, at least.
The retina contains photoreceptors, cells that transmit signals when light hits them. There are two types, each detecting a different element of the incoming light: rods, which respond to brightness (black-and-white vision), and cones, which respond to colour. How these receptors talk to the brain, however, is where AI engineers get excited. Rods and cones feed into the brain in different ways. Before signals from cones are sent to the brain, they are first collected by ganglion cells. Each ganglion cell takes the signals from a large group of cones, processes them, and outputs a single signal. More specifically, a ganglion cell adds and subtracts signals depending on how far each contributing cone is from the centre of the group.
Here's the magic. On the right you can see the response of the ganglion cells. To break it down a bit more: light enters the scene and hits a large group of cones at the back of the eye, and these all send their signals to the ganglion. Cones surrounding the centre of the group reduce the ganglion's output, whereas cones at the centre add to it. What this does is allow the eye to detect edges! An example scenario: imagine that all of the light detected by the group of cones is a constant, lovely blue. The ganglion would send no signal, because the centre and surrounding cones would cancel out. No edge, no signal. But if there is drastic variation in the received light, the centre and surround cones will not cancel in the ganglion, suggesting an edge.
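We can sketch this centre-surround behaviour numerically. Below is a minimal, illustrative model (the kernel values and names are my own, not taken from biology): a 3×3 grid where the centre weight is positive, the surround weights are negative, and all weights sum to zero, so uniform light cancels out exactly as described above.

```python
import numpy as np

# Centre-surround "ganglion" weights: positive centre, negative surround.
# The weights sum to zero, so a uniform patch of light produces no signal.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=float)

def ganglion_response(patch):
    """Response of one ganglion to a 3x3 patch of cone signals."""
    return float(np.sum(patch * kernel))

flat_sky = np.full((3, 3), 0.5)          # a constant "lovely blue"
edge = np.array([[0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0]])       # a dark-to-bright boundary

print(ganglion_response(flat_sky))  # 0.0 — no edge, no signal
print(ganglion_response(edge))      # nonzero — an edge is present
```

Running this, the flat patch gives exactly zero while the edge patch does not, which is the whole trick: the ganglion stays quiet on uniform input and fires only where the light varies.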
This principle is used not just in AI or ML but also in classical image processing. Where AI really shines, though, is in asking: what if we don't want to detect just an edge, but something stranger? What if the relationship between the centre and surround cones feeding into the ganglion were reversed? What if we know what we want to detect, but don't know the best way for the ganglion to detect it? This last question is exactly the one AI developers work on.
A very common type of AI architecture is the Convolutional Neural Network (CNN). The "convolutional" part of this system is essentially a digital form of the ganglion described above. Following the eye example, a convolution is the mathematical operation that takes a group of cones' signals through to the single ganglion output. After the convolution (our artificial ganglion) comes the "neural network" part, which acts as the brain and processes the convolutional output into something we can work with.
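To make that concrete, here is a rough sketch of the convolution operation in plain NumPy (the function and image are invented for illustration): the centre-surround kernel is slid across an entire image, so every output position is one "ganglion" summarising the patch of "cones" beneath it.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` (valid mode): each output value is one
    'ganglion' summarising the patch of 'cones' beneath it."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x6 image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

centre_surround = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]], dtype=float)

response = convolve2d(image, centre_surround)
# Flat regions come out as zero; positions near the dark/bright
# boundary produce strong nonzero responses.
```

In a real CNN library this loop is heavily optimised, but the idea is the same: one small set of ganglion weights reused at every position in the image.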
CNNs are enormously popular in the AI community, and for good reason. Autonomous driving uses them extensively, filtering several video feeds from cameras placed around a car. Inside a CNN there are many of these ganglion-like units that are adjusted, or more accurately trained, to detect specific objects. Autonomous driving engineers train their CNN models to detect trees, sidewalks, traffic lights, other cars, (hopefully) people, and much more. All building on the original work of the eye.
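Tying the pieces together, here is a toy-scale sketch of a CNN forward pass in pure NumPy. Everything here is a simplification for illustration (real systems use frameworks such as PyTorch or TensorFlow, with many layers and trained weights; the sizes, names, and random values below are invented): a convolution acts as the ganglion layer, a non-linearity and pooling condense its output, and a final linear layer plays the role of the brain, producing one score per object class.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid-mode 2D convolution: the 'ganglion' layer of the network."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn(image, kernel, weights, bias):
    feature_map = conv2d(image, kernel)        # ganglion responses
    activated = np.maximum(feature_map, 0.0)   # ReLU non-linearity
    pooled = activated.mean()                  # global average pooling
    return pooled * weights + bias             # "brain": one score per class

image = rng.random((8, 8))             # a stand-in for a camera frame
kernel = rng.standard_normal((3, 3))   # in a real CNN this is *trained*
weights = rng.standard_normal(4)       # 4 hypothetical object classes
bias = np.zeros(4)

scores = tiny_cnn(image, kernel, weights, bias)  # one score per class
```

Training, which I've skipped entirely, is the process of nudging `kernel` and `weights` until the scores reliably light up for trees, traffic lights, people, and so on.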
If you like this post, subscribe to our newsletter to keep up to date!
Author: Paul Hetherington