Jun 29, 2022

Let's first explore Newtonian Mechanics.

Newtonian physics is, quite literally, the physics of common sense. Also referred to as classical mechanics, it explains how objects move through space. It allowed us to predict how objects will move in the future (determinism) and to reconstruct how they moved in the past (reversibility). Simply put, for the first time in human experience, we could use mathematical methods to model most circumstances in our physical world. This provided a kinematical basis for the whole of classical physics up until the early twentieth century.

Newtonian physics suggests that everything looks the same to everyone in the universe, no matter the observer's speed or location. Given our three-dimensional experience, this is an easy and logical concept to accept; it is precisely how we view day-to-day life. The world seems to be a material plenum, with space and time being absolute (Rynasiewicz, 2004). When we look out into the distance, we do not see the curvature of the Earth; it appears flat and constant. And since most of us will never observe Earth from space, we will never visually comprehend a round Earth. Our perception suggests Newtonian law.

This is how Newton founded classical mechanics. It was established on the view that space is distinct from the bodies in it and that time passes equivalently regardless of what happens in the world. Absolute time would suggest that time flows on its own, so that time can lapse even without any change occurring anywhere; time is not merely a measure of change within the world. Absolute space would suggest space as an absolute (unmoving) reference frame within which inertial systems exist (DiSalle, 2002). Therefore, each and every object has an absolute state of motion, reflecting either a state of absolute rest or movement at some absolute speed.

Newton, even more so than Galileo before him, was a direct product of his time. He approached physics from the perspective that God had created the laws of nature, and believed the job of a scientist was merely to uncover their workings.

Absolute time was a crucial component of the Newtonian framework. Whether Newton actually believed in absolute time matters less than understanding why he would want to: by systemizing and formalizing time, he was able to formulate his laws of motion, which set the stage for all his later developments.

Newton’s third law, commonly known as the law of action and reaction, states that whenever a body exerts a force on a second body, the second body exerts a force equal in magnitude and opposite in direction. Simply put, every action results in an equal and opposite reaction. Imagine throwing a ball against a wall: the ball exerts a force on the wall (action), and the wall exerts a force on the ball (reaction), which results in the ball bouncing off the wall. Physics understood this phenomenon in terms of energy transfer. But from a philosophical standpoint, it means every action has a connected consequence.

Cause and Effect

In the eighteenth century, the great philosopher David Hume explored human curiosity. He focused on a key feature of human nature: that we are more influenced by our feelings than by reason (Schmitter, 2021). He insisted that reason is the slave of the passions, and that we are more motivated by our feelings than by any results of analysis and logic.

He was very impressed with Newton and with experimental philosophy: if you want to find out about the nature of reality, the best way is to go out, look, and see how things work. Hume wanted to apply this approach to our own case, or to “moral subjects”, as he referred to humans (Morris & Brown, 2001). He wanted to know how the human mind works using a similar kind of experimental method. He focused on three fundamental principles: resemblance (the similarity between objects and events), contiguity (the temporal and spatial “proximity” of events), and cause and effect, also referred to as causation (Buonomano, 2017, 28). He knew that reason alone could not prove the reality of causation. Because of this, he provided rules that must be followed to determine whether two events are causally related:

1. The cause and effect must be contiguous in space and time.
2. The cause must be prior to the effect.

If every event has a cause, how could that chain go on forever? Tracing back in time, there must somehow be a first cause that was a cause of itself, or there must have been some form of necessary existent that did not need a cause. Our reality seems to be one thing after another. What is the connection between events that occur at different moments in time and different points in space? Can we prove any connection between events? Hume’s view was not only that we cannot prove any connections between events, but that we also cannot see any connections, in the sense that there is no sensory impression of causation (Morris & Brown, 2001). This was a problem Hume emphasized because he believed that all human ideas (concepts) come from impressions. How do we experience this connection? We cannot find an impression of causality just by looking.

Consider billiard balls. If you watch a white ball hit a red ball, you will see the red ball move. What you don’t see is the “cause”. We do not see the white ball “cause” the red ball to move. We do not see the connection. And, at the time, the complaint that physics could not explain the “cause” of the phenomena was dismissed as a problem that was philosophical or metaphysical, rather than empirical.

Newtonian mechanics shook the world; it made the world seem simpler and more understandable. The resemblance between causality and Newton’s third law may not even be what drove the dismissal. Newtonian physics allowed no exceptions to its laws. Reality was thought to be fully determined by mechanical properties, leaving no room for indeterminacy. There was no need to appeal to substantial forms or top-down causality; everything was fully explained bottom-up. Determinism in the form of physical mechanics rendered all other forms of causality irrelevant (Hoefer, 2003). As tempting as it is to treat the law of cause and effect as a parallel of the law of action and reaction, the parallel is not useful in physics. Where we will find use for it is in neuroscience.

Synaptic Cause and Effect

At the beginning of the twentieth century, scientists who studied the nervous system were troubled by a major dilemma. They knew that neurons were independent structures. They also understood that some type of electrical charge passed along each neuron, from the dendrites toward the axon. What they did not understand is what happened next. Somehow, in animals, nerve impulses passed from one cell to another, even though the cells were separate (Bullock, 1997). They explained it as contiguity, contact without continuity, by analogy with the electrical technology of the day. But at the end of the day, they had no proof of what really happened where two neurons met, let alone how the “current” was transmitted.

In 1897, Sir Charles Scott Sherrington introduced the term ‘synapsis’ to the world (Breathnach, 2004). The name came before the understanding; he still referred to it as a “special connection”, since it was not yet fully understood. Within two years the term had been shortened to ‘synapse’, the term we use today.

Sherrington was not able to theorize until he could examine data showing what actually happens to nerve impulses on both sides of the synapse. He came to realize that a sort of funneling was taking place, one that worked something like a row of falling dominoes.

The temporal asymmetry of cause and effect is encoded at the most basic level in the human brain. The human brain is made up of close to 100 billion neurons, all communicating with each other through synapses. Neurons receive inputs and generate outputs. Neurons are extroverts (Buonomano, 2017, 32): each is connected to thousands of others. The interface between two neurons consists of a presynaptic neuron sending out a signal and a postsynaptic neuron receiving it. Excitatory synapses encourage the postsynaptic neuron to generate outputs and send electrical signals to all of its downstream neurons, almost as if each neuron had postsynaptic partners waiting to receive signals. The opposite of excitatory synapses are inhibitory synapses, which try to keep the postsynaptic neuron quiet.
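The push-and-pull between excitatory and inhibitory synapses can be sketched in a few lines. This is a minimal toy model, assuming a simple threshold neuron; the threshold and input values are illustrative, not physiological measurements:

```python
# Toy model: a postsynaptic neuron sums its excitatory (positive) and
# inhibitory (negative) synaptic inputs, and fires an output spike only
# if the total drive crosses a threshold. All numbers are illustrative.

THRESHOLD = 1.0  # arbitrary firing threshold

def postsynaptic_fires(excitatory_inputs, inhibitory_inputs):
    """Return True if the summed input drives the neuron past threshold."""
    drive = sum(excitatory_inputs) - sum(inhibitory_inputs)
    return drive >= THRESHOLD

# Three active excitatory synapses push the neuron to fire...
print(postsynaptic_fires([0.5, 0.4, 0.3], []))     # True
# ...but one active inhibitory synapse can keep it quiet.
print(postsynaptic_fires([0.5, 0.4, 0.3], [0.6]))  # False
```

The sign of a synapse's contribution is what distinguishes excitation from inhibition in this sketch; real neurons integrate such inputs continuously over time, which this toy model ignores.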

It is not a question of which elements should be connected to each other, but of how strong each of those connections should be.

The strength of a synapse refers to the degree to which a presynaptic neuron influences how a postsynaptic neuron behaves. A strong excitatory synapse between neuron A and neuron B means that when neuron A fires, neuron B is likely to fire. With a weak synapse between neuron A and neuron B, by contrast, neuron B would barely react to the firing of neuron A. Synaptic strength is adjusted by algorithms referred to as synaptic learning rules (Kennedy, 2016). The main role of a synaptic learning rule is to strengthen or weaken a synapse, depending on the pattern of activity between the presynaptic and postsynaptic neurons.
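Synaptic strength is often modeled as a numeric weight on the connection. A toy illustration under the same simple-threshold assumption as before (the weight, threshold, and background-drive values are all made up for illustration):

```python
# Toy model of synaptic strength as a weight: the weight on the A->B
# synapse sets how far a spike in neuron A pushes neuron B toward its
# firing threshold. All values are illustrative, not measured quantities.

THRESHOLD = 1.0

def b_fires_when_a_fires(weight_a_to_b, background_drive=0.2):
    """Does a single spike from A, plus some background input, fire B?"""
    return weight_a_to_b + background_drive >= THRESHOLD

print(b_fires_when_a_fires(0.9))  # strong synapse: True (B fires)
print(b_fires_when_a_fires(0.1))  # weak synapse: False (B stays silent)
```

A learning rule, in this picture, is simply a procedure that raises or lowers `weight_a_to_b` over time based on the activity of the two neurons.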

One such learning rule is spike-timing-dependent plasticity, which demonstrates how the temporal asymmetry of cause and effect is built directly into our synapses (Feldman, 2012). If the presynaptic neuron consistently fires before the postsynaptic one (especially when the two spikes fall close together in time), this leads to synaptic potentiation, where the firing of the postsynaptic cell is heightened in response to presynaptic inputs. The reverse also holds: when the postsynaptic neuron fires before the presynaptic one, synaptic depression occurs.
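The timing window for spike-timing-dependent plasticity is commonly modeled with an exponential decay on either side of coincidence. A sketch under that common assumption; the amplitudes and time constant here are illustrative, not values from any particular experiment:

```python
import math

# Sketch of a spike-timing-dependent plasticity (STDP) update rule.
# dt_ms = t_post - t_pre: positive when the presynaptic spike precedes
# the postsynaptic spike. Amplitudes and time constant are illustrative.

A_PLUS, A_MINUS = 0.05, 0.05  # maximum weight change per spike pair
TAU = 20.0                    # width of the timing window, in ms

def stdp_delta(dt_ms):
    """Weight change for one pre/post spike pair separated by dt_ms."""
    if dt_ms > 0:    # pre fired before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU)
    elif dt_ms < 0:  # post fired before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0

print(stdp_delta(10.0) > 0)   # True: pre-then-post strengthens
print(stdp_delta(-10.0) < 0)  # True: post-then-pre weakens
```

The asymmetry around zero is the point: the sign of the weight change depends on which neuron fired first, which is exactly the before/after structure of cause and effect.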

It was not until the 1990s that neuroscientists came upon this simple learning rule. The rule is essentially a neural coincidence detector that allows a cause-and-effect mechanism to be established in the brain. It shows how the brain adapts to the patterns we repeat or reject over time.

Written by: Jordan Ettner


References

Breathnach, C. S. (2004). Charles Scott Sherrington's Integrative Action: A centenary notice. NCBI. Retrieved June 29, 2022.

Bullock, T. H. (1997). Signals and signs in the nervous system: The dynamic anatomy of electrical activity is probably information-rich. NCBI. Retrieved June 29, 2022.

Buonomano, D. (2017). Your Brain Is a Time Machine: The Neuroscience and Physics of Time. W. W. Norton.

DiSalle, R. (2002, March 30). Space and time: Inertial frames. Stanford Encyclopedia of Philosophy. Retrieved June 29, 2022.

Feldman, D. E. (2012). The spike-timing dependence of plasticity. NCBI. Retrieved June 29, 2022.

Hoefer, C. (2003, January 23). Causal determinism. Stanford Encyclopedia of Philosophy. Retrieved June 29, 2022.

Kennedy, M. B. (2016). Synaptic signaling in learning and memory. NCBI. Retrieved June 29, 2022.

Morris, W. E., & Brown, C. R. (2001, February 26). David Hume. Stanford Encyclopedia of Philosophy. Retrieved June 29, 2022.

Rynasiewicz, R. (2004, August 12). Newton's views on space, time, and motion. Stanford Encyclopedia of Philosophy. Retrieved June 29, 2022.

Schmitter, A. M. (2021). Hume on the emotions. In 17th and 18th century theories of emotions. Stanford Encyclopedia of Philosophy. Retrieved June 29, 2022.

Anuradha Rao Memorial Experiment: Neuropharmacology: Effect of nicotine and MSG on neurons. (n.d.). Backyard Brains. Retrieved June 29, 2022.