How do Neurons work?
A neuron is a cell that processes and transmits information through electrical and chemical signals. There are roughly 86 billion of them in the human brain, connected by trillions of synapses, which is how the brain carries out so many complex tasks without conscious effort. These cells inspired the basic building blocks of modern AI because they are how animals with nervous systems process information about their surroundings. Understanding how these fascinating cells work is a first step towards creating intelligent machines with human-like cognition.
Parts of a Neuron
Neurons have three primary parts: dendrites, a cell body, and an axon, and signals flow through them in that order. Dendrites detect chemical input from neighbouring neurons and convert it into small changes in electrical potential that travel toward the cell body. The cell body integrates these incoming signals and decides whether to send an impulse along the axon, which carries the signal onward to other neurons.
Ion channels in a Neuron
An input signal causes ion channels in the dendrite membrane to open, allowing an influx of sodium ions into the compartment and making the inside more positive. This change in voltage across the plasma membrane is how a neuron receives information from other neurons. To restore the balance between what comes in and what goes out, the neuron relies on sodium-potassium pumps in its membrane, each of which pushes three positively charged sodium ions out while bringing two positively charged potassium ions in. Because every pump cycle exports one net positive charge, the neuron returns to its resting state, with the inside slightly negative relative to the outside.
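To make the bookkeeping concrete, here is a minimal sketch (not a physiological model) of why the pump leaves the inside of the cell slightly negative. The charge units and the resting potential value are textbook approximations I am assuming for illustration, not measurements.

```python
# Sketch: net charge movement of the sodium-potassium pump.
# Values are illustrative assumptions, not physiological data.

resting_potential_mv = -70.0   # typical resting membrane potential, inside relative to outside


def pump_cycle(inside_charge: float) -> float:
    """One cycle of the Na+/K+ pump: 3 Na+ out, 2 K+ in.
    Each ion carries one positive charge, so the net effect is
    one positive charge leaving the cell per cycle."""
    sodium_out = 3      # positive charges pumped out
    potassium_in = 2    # positive charges pumped in
    return inside_charge - sodium_out + potassium_in


charge = 0.0
for _ in range(5):
    charge = pump_cycle(charge)
print(charge)   # -5.0: each cycle makes the inside one unit more negative
```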
Exciting or Inhibiting Neurons
Neurons can be either excitatory or inhibitory in how they influence the cells they connect to. When a neuron's membrane potential rises toward threshold, it becomes more likely to fire an electrical signal down its axon. That signal travels by jumping between gaps in the myelin sheath called nodes of Ranvier; the myelin acts as insulation for the axon and speeds up transmission by keeping the current from leaking out into the extracellular fluid. When the signal reaches the dendrites of another neuron, it either pushes that neuron's membrane potential above threshold, causing it to fire and pass the signal along, or it dies out without affecting any neurons downstream.
Stimulating Neurons
A neuron's response can be 'all-or-none' or graded, depending on the signal involved. An all-or-none signal, the action potential, fires completely once the membrane potential reaches threshold and not at all if it stays below it. Graded potentials, by contrast, span a continuous range of responses from no activity to strong depolarisation, depending on how large an input the dendrites receive. The sketch below contrasts the two behaviours.
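Here is a minimal sketch of that contrast. The threshold and the example values are assumptions chosen for illustration, not measured quantities.

```python
# Sketch: all-or-none firing vs. a graded response.
# Threshold and stimulus values are illustrative assumptions.

THRESHOLD_MV = -55.0   # assumed firing threshold
RESTING_MV = -70.0


def all_or_none(membrane_potential_mv: float) -> bool:
    """Action potential: fires fully once the membrane potential
    reaches threshold, otherwise not at all."""
    return membrane_potential_mv >= THRESHOLD_MV


def graded(stimulus: float) -> float:
    """Graded potential: the response scales continuously with stimulus size."""
    return max(0.0, stimulus)


for potential in (-70.0, -60.0, -55.0, -40.0):
    print(potential, all_or_none(potential))
# -70.0 False, -60.0 False, -55.0 True, -40.0 True: the spike is binary.

print(graded(1.0), graded(2.0))   # 1.0 2.0: the graded response scales with the input
```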
How Neurons pass along information
In neurons that carry sensory information, the axon often branches into many terminals, making contact with other neurons across multiple areas of the brain and spinal cord. This allows an electrical signal in one neuron to trigger activity in another by prompting the release of neurotransmitters that excite or inhibit the receiving neuron's membrane potential.
Neurotransmitters
Neurons typically respond to neurotransmitters released from other neurons at a junction called a synapse. There are two main types of synapses: electrical and chemical. In an electrical synapse, the two cells are coupled by gap junctions, so when one neuron fires, positively charged ions flow directly into the adjacent neuron and can trigger it to fire as well; whether the overall effect on downstream targets is excitatory or inhibitory depends on how each neuron conducts its own incoming signals. Chemical synapses, on the other hand, use vesicles filled with neurotransmitter molecules such as glutamate (the brain's main excitatory transmitter) or GABA (its main inhibitory one), which activate receptors embedded in the membrane of the postsynaptic neuron. Once activated, these receptors open ion channels, letting ions such as sodium flow in or potassium flow out, shifting the voltage difference (membrane potential) across the plasma membrane of the receiving cell. To keep excitatory and inhibitory input in balance, the effect of each release is limited: enzymes in the synaptic cleft break down excess neurotransmitter and transporters take it back up, so its influence on downstream neurons does not accumulate unchecked.
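A minimal sketch of how excitatory and inhibitory inputs might be summed at the cell body follows. The weights, threshold, and input values are illustrative assumptions rather than physiological figures.

```python
# Sketch: summing excitatory (positive) and inhibitory (negative)
# postsynaptic potentials and deciding whether to fire.
# All numbers are illustrative assumptions.

RESTING_MV = -70.0
THRESHOLD_MV = -55.0


def integrate(inputs_mv: list[float]) -> bool:
    """Add synaptic inputs onto the resting potential and compare to threshold."""
    membrane_potential = RESTING_MV + sum(inputs_mv)
    return membrane_potential >= THRESHOLD_MV


# Two excitatory inputs (e.g. glutamate) and one inhibitory input (e.g. GABA):
print(integrate([+10.0, +12.0, -8.0]))   # False: inhibition keeps it below threshold
print(integrate([+10.0, +12.0]))         # True: enough excitation to fire
```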
Neurons as building blocks for AI
Neurons are often called the building blocks of AI because each one offers a template for learning to predict patterns from the responses of other neurons. Artificial neural networks (ANNs) are mathematical models designed to loosely mimic how neurons in the brain communicate and learn. Signals flow forward through layers of simulated neurons, and the connections between them adapt according to how much error the network makes when trying to predict an expected outcome. As these networks see more data, they can be 'trained' using a process called backpropagation, in which the prediction error is passed backwards through the layers to adjust each connection, so that predictions become more accurate over time.
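As a concrete illustration, here is a minimal sketch of a tiny network trained with backpropagation using NumPy. The architecture, learning rate, number of iterations, and the XOR toy data are all assumptions made for the example; real networks are far larger and use more sophisticated training loops.

```python
# Sketch: a two-layer network learning XOR via backpropagation.
# Architecture and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic pattern that needs a hidden layer to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# One hidden layer of 4 simulated "neurons", one output neuron.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass: signals flow from the inputs through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the prediction error is propagated back to adjust each connection.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out;  b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h;    b1 -= lr * err_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0] as training proceeds
```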
Neuronal Learning Systems
This type of neuronal learning system is useful not only for classifying data into existing categories but also for discovering new ones, making it possible for AI systems such as ANNs, or the convolutional neural networks (CNNs) built on top of them, to learn how to carve the world up into categories of their own. This means AI can not only take in information but also output the new or existing categories it associates with different pieces of data, giving us the potential for AI that thinks more like humans do.
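In practice, "outputting a category" often just means picking the label whose output score is highest. A small sketch below makes that step explicit; the category labels and output scores are made up for illustration.

```python
# Sketch: turning a network's output scores into a category label.
# Labels and scores are illustrative assumptions.
import numpy as np

categories = ["cat", "dog", "bird"]
network_outputs = np.array([0.1, 0.7, 0.2])   # e.g. scores from a trained classifier

predicted = categories[int(np.argmax(network_outputs))]
print(predicted)   # "dog": the category the network associates most strongly with the input
```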
Artificial Neural Networks
Personally, I'm excited about how artificial neural networks will push our understanding of how neurons function, both in making sense of the world and in adapting over time. It's interesting that small changes in how neurons are wired together can lead to significant differences in behaviour, because each neuron influences many others through its synapses. We've seen an analogous phenomenon cause problems in ANNs when we give them too many hidden layers or nodes, since errors can compound as the system learns to make increasingly complex decisions.
For AI to truly think like humans, I believe it will need to be capable of holding multiple perspectives at once, which is close to how philosophy describes the act of thinking itself. The ancient Greek philosopher Heraclitus held that everything is in constant flux, so no perspective stays fixed for long. As a perspective changes how its holder thinks about something, that thinking shapes their behaviour, which changes how other people respond to them, which in turn shifts the perspective again, and so on, until the person holds a view quite different from the one they started with. It's hard for me to imagine how AI could hold multiple perspectives at once, given that so much of its 'thinking' comes down to neuron-like units estimating how likely something is to happen.
But if AI does learn to think the way we do, through neurons, I would want it to think about how it thinks, so that its perspectives stay consistent across time rather than shifting constantly the way our own thoughts do. The ability to hold multiple perspectives at once could help it see how different pieces of new information relate to one another and how they have changed over time, so that we can prepare for the changes ahead. It could also give AI a more nuanced view of cause and effect, since one event might affect another later on, which then triggers a third event or a whole chain reaction afterwards.