2.2.6 Speed

The role of speed and time, particularly in the context of an object's changing location over time, has long held a mystique in the realm of scientific discovery (and has recently regained that mystique in cosmology). This intrigue can be traced back to historical debates, such as Zeno's paradoxes. The acknowledgment that an object's movement speed can influence our observations is a topic that has sparked significant scientific discourse over the years. In this section, we aim to draw parallels between the historical debate on the finite speed of light and its contemporary implications in various scientific disciplines, such as the finite speed of ionic currents in biology.

It has been a long-standing mystery how interactions with different speeds play their roles simultaneously. The issue forces researchers to give non-scientific explanations for everyday phenomena, only because they routinely assume that all interactions have the same speed and apply laws derived for strictly pair-wise interactions. They have no choice: there is no formalism for handling non-equal speeds.

We need different abstractions (finite-speed interaction in modern physics) for different phenomena, and these require different mathematical handling, which is not as simple and friendly. In biology, the speeds of observation and of propagation of electric fields remain the same, so it is easy to extrapolate, mistakenly, that all interactions have infinitely large interaction speeds. However, slow interaction speeds also exist; furthermore, different interaction speeds can intermix in the same phenomenon. Neglecting that effect introduces the need to assume fake mechanisms and effects to explain some details which are naturally explained by assuming finite interaction speeds and their combinations. We discuss this further in section 2.4.1.

Instant Interaction

The above dichotomies are rooted in deeper layers of science. Notice that in the electrical abstraction, no mass is present, so one can use the equations assuming 'instant interaction', which in biology led to non-physical explanations of the observations (such as 'delayed current'). In the mechanical/thermodynamic abstraction, mass must be moved, making the finite-speed interaction evident. Classical physics is based on the Newtonian idea that space and time are absolute, so everything happens simultaneously. Consequently, when nature's objects interact, the interaction must be instantaneous; in other words, their interaction speed is infinitely large. Furthermore, electromagnetic waves with the same high (logically, infinitely high) speed inform the observer. This self-consistent abstraction enables a "nice" mathematical description of nature in various phenomena: classical science.

We learned that the idea resulted in "nice" reciprocal-square dependencies: Kepler's and Coulomb's Laws. We discussed that the macroscopic phenomenon "current" is implemented at the microscopic level by transferring (in different forms) discrete charges; furthermore, that solids show a macroscopic behavior, "resistance", against forwarding microscopic charges. We also learned that without charge (and without atomic charge carriers), neither potential nor current exists. We did not learn, however, that thermodynamic forces can also move the ions, given that ions have an inseparable mass.

The low speed of ions in electrolytes introduces problems. The same physical phenomenon, the interaction (or movement) of ions, is described using an 'infinite' electrical speed and a mass propagation speed a million times lower, respectively, which leads to an unresolvable discrepancy, given that physics is not prepared to handle different speeds in the same interaction event [22]. (Historically, Onsager's work in 1931 referred to thermoelectricity and transport phenomena in electrolytes, thus connecting two separate disciplines experimentally.)

Physics notoriously suffers from the lack of handling different simultaneous interactions; facing such a case leads to misunderstandings, debates, and causality problems. A famous such case is the interaction speed of entanglement. E. Schrödinger introduced his famous law of motion in quantum mechanics entirely analogously to how I. Newton introduced his laws of motion. Similarly to the Newtonian 'absolute time', the quantum mechanical interaction is supposed to be 'instant' (this is the price for having 'nice' equations in both classical and quantum mechanics), i.e., its speed is supposed to be infinitely high. However, it was already known at that time that the electrical interaction (the propagation of electromagnetic waves) has finite speed, so if an object has both a quantum mechanical interaction (aka entanglement) and an electrical interaction, the corresponding forces start at the same time but arrive at the other object at different times. The entanglement arrives instantly; the electromagnetic effect arrives at the time we can calculate from the interaction speed and the spatial distance of the objects. This leads to causality problems: the effects of the two interactions of photons entangled earlier in an exploded supernova should be measured at two different times, meaning a 'spooky action at a distance', as A. Einstein coined it, and leading to contradictions such as the Einstein–Podolsky–Rosen paradox. Actually, the issue is rooted in the improper handling of mixed interaction speeds: the Schrödinger equation introduces an infinitely large interaction speed, while the EM interaction has finite speed.

The confusion and question marks in connection with the scientific description of life mostly arise from the interpretation of the notion 'speed' in physics. When discussing the underlying physical laws, we go back to the very basic physical notions instead of taking over the approximations and abstractions used in classical physics for non-biological matter and less complex interactions. As we have emphasized many times, in all fields of science we construct laws and conclusions based on somewhat simplified abstractions about nature. The notions and laws depend on the circle of phenomena we know and want to describe. The Newtonian and Einsteinian worlds are basically distinguished by considering speed dependence, which actually means explicit time dependence. Interesting consequences are that in the Einsteinian world the mass is not constant, time and space are not absolute, and so on. We must be prepared for similarly counter-intuitive experiences in physiology: "we must be prepared to find it working in a manner that cannot be reduced to the ordinary laws of physics" [12]. Here we scrutinize the basic notions and discover some differences between physics and biology as consequences of the required different abstractions and approximations.

During our college studies, we learned that light is an electromagnetic wave with a vast but finite propagation speed. Still, it was not highlighted that this is, at the same time, the propagation speed of the electric (and magnetic, and gravitational) interaction force fields as well. The effect of the "retarded-time potential" is also known in physics and communication engineering. Algorithms such as "marching-on-in-time" and "analytical retarded-time potential expressions" were derived to handle the problem; for a discussion, see [98]. The telegrapher's equations (unfortunately, also used to describe biological signal transfer) explicitly assume a finite propagation speed millions of times slower than that of the (implicitly assumed) EM interaction. The issue is not confined to large distances: designers of micro-electronic devices must also consider the effect; they introduced clock time domains and clock distribution trees; see, for example, [99, 25].
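To make the scale of the problem tangible, a back-of-the-envelope sketch can compare the delays over the same distance. The numbers are illustrative assumptions only: an on-chip EM signal speed of about 2e8 m/s and an axonal conduction velocity of 20 m/s.

```python
# Back-of-the-envelope propagation delays (illustrative values only).

def propagation_delay(distance_m: float, speed_m_per_s: float) -> float:
    """Time for a signal front to cover `distance_m` at `speed_m_per_s`."""
    return distance_m / speed_m_per_s

# A 1 cm path on a chip; EM signal speed taken as ~2e8 m/s (assumed value):
chip_delay = propagation_delay(0.01, 2e8)
# A 1 cm stretch of axon at a 20 m/s conduction velocity:
axon_delay = propagation_delay(0.01, 20.0)

print(f"chip:  {chip_delay:.1e} s")             # tens of picoseconds
print(f"axon:  {axon_delay:.1e} s")             # half a millisecond
print(f"ratio: {axon_delay / chip_delay:.0e}")  # millions of times slower
```

The ratio, about seven orders of magnitude, is why clock distribution already matters on a chip, and why the 'instant interaction' shortcut is far less defensible for axons.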

Science uses 'instant' in the sense that one interaction is much faster than the process under study; we then consider the faster interaction as instant. The approach of classical science is based on the oversimplified approximation that the interaction speed is always much higher than the speed of the changes it causes, and that the processes can always be described by a single stage. In our approach, for biology, we put together a series of stages to describe the observed complex phenomena, where the stages provide input and output for each other, involve more than one interaction speed, and use per-stage-valid approximations. We simplify the approximations by omitting the less significant interactions and introduce ideas for accounting for the different interaction speeds. This way, we reduce the problem to a case that science can describe mathematically. This procedure is fundamentally different from applying mathematical equations derived for an abstracted case of science to a complex biological phenomenon without validating that we use the appropriate formalism.

Speed of light

In 1676, the Danish astronomer Ole Rømer was making meticulous observations of Jupiter's moon Io and concluded not only that the speed of light is finite, but also measured its value with reasonable accuracy. Rømer never published a formal description of his method, possibly because of the opposition of his superiors, Cassini and Picard, to his ideas. Cassini knew Rømer's idea and the measurement data. However, instead of accepting the finite value of the speed of observation, he made periodic corrections to the tables of eclipses of Io to take account of its irregular orbital motion: periodically resetting the clock. The speed of light had to remain infinitely large.

However, the theory of finite speed quickly gained support among other natural philosophers of the period, such as Christiaan Huygens and Isaac Newton [100]. Although Newton surely knew that the observation speed was finite, in his "Philosophiae Naturalis Principia Mathematica" [101], published in 1687, he decided to refer to observations as happening "at the same time", despite knowing that what we observe at the same time happens at different times. Using instant interaction results in "nice" mathematical laws and enables us to describe most of nature's experiences with sufficient accuracy. Einstein, in 1905, discovered [102] that the speed of observation (in moving reference frames) may play a decisive role in interpreting scientific phenomena. The results he derived using Minkowski coordinates [89] were counter-intuitive, with many unexpected consequences. Instead of introducing improvements or corrections to the existing classical principles and methods, he introduced a new principle: the finite (limiting) interaction speed. The disciplinary analysis of the reception of Minkowski's Cologne lecture reveals an overwhelmingly positive response on the part of mathematicians and a decidedly mixed reaction on the part of physicists [103]; since then, the reception has turned into the exact opposite. Today, physics generally accepts the description, that is, the existence of a finite interaction speed (resulting in the birth of a series of modern science disciplines). However, other disciplines, including biology and computing science, refute (or at least do not use) it, despite its effects being evident.

Processes vs states

The finite speed introduces something unusual into science: processes, rather than jumps between states, must be considered.

Speed in neuroscience

Helmholtz, in 1850, sent a short report to the Academy [104]: "I have found that a measurable time passes when the stimulus exerted by a momentary electrical current on the hip plexus (Hüftgeflecht) of a frog propagates itself to the nerves of the thigh and enters the calf muscle." His teacher "had thought that the speed of nervous conduction might be in excess of the speed of light and could probably never be measured. Helmholtz's father, on hearing of the experiment and the surprisingly slow measured speed, wrote to his son that he would as soon believe this result as that one can see the light of a star that burned out a million years ago" [105].

With the development of measurement technology, it became evident that finite speed is a general feature of the "nervous connection". (Somehow, "the speed of nervous conduction" has been renamed to "conduction velocity", neglecting the clear distinction physics makes between the two wordings.) With the dawn of instrumental electronics and computing, the McCulloch–Pitts model [106] introduced the picture that the brain can be modeled by a network of simple perceptron nodes connected by wires; that is, it comprises two-state equipotential membranes connected by perfect wires. Experimental research also quickly (re-)discovered that those wires forward signals in a particular way: the speed of the potential wave is finite. Furthermore, the axons are not equipotential during transmission. Although the membrane's structure is practically identical to that of axons, biology assumes that, unlike an axon, the membrane remains equipotential during its operation, although the evidence shows the opposite: 'the action potential spreads as a traveling wave from the initial site of depolarization to involve the entire plasma membrane' [107].

When it was seen that assuming an equipotential membrane was wrong, and that a single equipotential surface (in other words, classical physics' instant interaction) cannot describe neurons adequately, multi-compartment models (typically comprising equipotential cylinders at different potentials) were introduced [45]. (Notice that this is a consequence of the wrong oscillator model hypothesized by Hodgkin and Huxley: the membrane is modeled as a series of resistors and capacitors.) Then (forgetting that Ohm's Law is valid only for classical physics' 'instant interaction', and furthermore that no external potential is connected to any of the compartments and no charge is present at the beginning, except at the input of the first compartment), the individually equipotential compartment pieces were connected by individual resistors. This model shows that the more compartments, the better the agreement (accuracy) with experiments. It happens because the shorter the compartment (approaching a differential equation), the less noticeable the deviation from the true non-equipotential surface. This conclusion means that charging the capacitance attached to the compartment takes time, resulting in a delayed distributed current. Using infinitely many compartments, we would arrive at the differential equations describing a delayed distributed current on the surface of the non-equipotential membrane. However, biology did not entirely give up its position. It admitted that membrane current exists, but only between compartments, and that its speed must be infinite (or, at least, the speed of the EM interaction); the compartment pieces, at least, must remain equipotential. Instead of fixing the wrong hypothesis, biology is "periodically resetting the clock".
Instead of accepting that the charged ions represent a "slow current" (compared to the "fast current" represented by electrons and their charge transmission method), biology introduced changing conductance, delayed current, rectifying current, and the like.
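The delay produced by a chain of compartments can be illustrated with a minimal passive-cable sketch. All parameters below are hypothetical, dimensionless placeholders, not a calibrated neuron model; the point is only that the far end of the chain responds later than the near end, because charging the intervening capacitances takes time.

```python
# Minimal sketch of a passive multi-compartment chain: N equipotential RC
# compartments with a leak conductance, coupled by axial resistors.
# All parameters are hypothetical, dimensionless placeholders.

def simulate_cable(n_comp, r_axial=1.0, c_mem=1.0, g_leak=0.1,
                   i_inj=1.0, dt=0.01, t_end=200.0):
    """Explicit-Euler integration; returns the times at which the first
    and last compartments reach half of their steady-state voltage."""
    v = [0.0] * n_comp
    times, trace_first, trace_last = [], [], []
    for step in range(int(t_end / dt)):
        dv = []
        for k in range(n_comp):
            # Axial current in from the previous compartment (or injection):
            i_in = i_inj if k == 0 else (v[k - 1] - v[k]) / r_axial
            # Axial current out to the next compartment (none at the far end):
            i_out = 0.0 if k == n_comp - 1 else (v[k] - v[k + 1]) / r_axial
            dv.append((i_in - i_out - g_leak * v[k]) / c_mem)
        v = [vk + dt * dvk for vk, dvk in zip(v, dv)]
        times.append(step * dt)
        trace_first.append(v[0])
        trace_last.append(v[-1])

    def half_time(trace):
        half = 0.5 * trace[-1]
        for t, vv in zip(times, trace):
            if vv >= half:
                return t
        return None

    return half_time(trace_first), half_time(trace_last)

t_first, t_last = simulate_cable(n_comp=5)
print(f"half-rise at input end: {t_first:.2f}, at far end: {t_last:.2f}")
# The far end lags the input end: a delayed, distributed current.
```

The lag between the two half-rise times is exactly the "delayed distributed current" discussed above; it does not require any extra mechanism beyond the time needed to charge the capacitances.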

Finite-speed interactions

When speaking about speed, especially the speed of charged objects inside biological objects, one needs to consider microscopic and macroscopic levels of understanding. On the boundary of the two levels, we need to distinguish between different kinds of speeds, among others (in units of m/s): the propagation speed ($10^{8}$) of the electric field (aka potential gradient); the speed ($10^{5}$) of thermal motion and potential-accelerated motion; the apparent speed ($10^{1}$) of current (the potential-assisted speed of a macroscopic stream, both in metals and electrolytes, mainly due to the repulsion of nearby ions in the stream); the speed ($10^{-2}$) of ion current inside a neuron (see Fig. 1 in [50]); the diffusion speed ($10^{-4}$) of electrons in a wire; the drift speed ($10^{-7}$) of individual carriers in aqueous solutions; and that ($10^{-8}$) of ions moving in a narrow tube filled with viscous liquid. Fortunately, in most but not all cases, different mechanisms (such as the Grotthuss mechanism or the free electron model; for a review, see [108]) at the level of the microscopic structure help to create the illusion of a high macroscopic propagation speed (a million times higher than the speed of its microscopic carriers). The same carrier can have macroscopic speeds differing by orders of magnitude, depending on the context; see a biological example at ion channels. When more than one of those speeds plays a role in the phenomenon we study, we must carefully consider its context and be prepared to handle fast and slow effects and, furthermore, their mixing.
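The spread of these speeds can be put side by side in a few lines of code. The orders of magnitude below are illustrative values only; the point is their spread, not the exact numbers.

```python
# Orders of magnitude for some of the speeds discussed in the text
# (illustrative values, in m/s; the spread is the point, not the digits).
speeds = {
    "EM field propagation":          1e8,
    "thermal motion of carriers":    1e5,
    "apparent macroscopic current":  1e1,
    "electron diffusion in a wire":  1e-4,
    "ion drift in aqueous solution": 1e-7,
}

fastest = max(speeds.values())
for name, v in speeds.items():
    print(f"{name:30s} {v:8.0e} m/s  ({fastest / v:.0e} x slower than the field)")
```

The listed mechanisms span roughly fifteen orders of magnitude; no single 'instant interaction' approximation can cover all of them in the same phenomenon.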

When an object can interact with another in a way abstracted by science as more than one interaction type, we need to find the relation (the 'extraordinary' law) between them. A famous such case is electricity and magnetism. Their interrelation is defined by the Maxwell equations: how an electrical field creates a magnetic one and vice versa (notice that the law is about their space derivatives, aka space gradients, instead of the entities themselves). While we understand that the speeds of electromagnetic and gravitational interactions are finite, we can use the 'instant interaction' approximation in classical physics because one effect of the first particle reaches the second particle simultaneously with the other effect, leading to the absence of a time-dependent term in the mathematical formulation.

An apparently similar case is found in electrodiffusion, where ions can be abstracted as mass and charge, one belonging to thermodynamics and the other to electricity. There is, however, an essential difference between the two cases: the interaction speeds are the same in the first case (moreover, in the spirit of classical physics, the interactions are instant) and differ by several orders of magnitude in the second one. Of course, the Maxwell equations can be nicely solved and modeled for biology, too, if one introduces [109] the assumption that the axial currents have the same speed (which, by the way, was measured to be 20 m/s) as the electrical and magnetic waves; furthermore, the longitudinal current is (?)defined(?) to have no attenuation. Furthermore, it is likely also defined that the current needs no driving force, which is why the positive and negative ions flow in the same direction. It is really a novel paradigm, leading to "(mis)understanding cell interactions", but it definitely describes some alternative nature.

Speed in laws of science

Actually, the famous Coulomb’s Law is expressed as

$F_{Q_1}(t) = k\,\dfrac{Q_2}{r^2}$ (2.1)

where $r$ is a space-time distance. In the Newtonian approximation, time is identical at all places, so we are used to omitting it. However, physics also knows the notion of retarded time. Considering the finite field propagation speed requires revisiting the fundamental physical laws; Coulomb's Law (in a Lorentz-transformed form) should be written as

$F_{Q_1}(t) = k\,\dfrac{Q_2}{r^2(t - r/c)}$ (2.2)

It is an implicit equation usually solved numerically or using perturbation methods for moving sources.
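A minimal numerical sketch of such a solution is fixed-point iteration on the retarded-time condition $t_r = t - r(t_r)/c$, here for a hypothetical source moving along one axis with constant velocity (units scaled so that $c = 1$; all numbers are illustrative).

```python
# Fixed-point solution of the implicit retarded-time condition
# t_r = t - r(t_r) / c for a source moving along x with constant velocity.
# Hypothetical, dimensionless numbers; c is scaled to 1 for readability.

def retarded_time(t, x_obs, x0, v, c=1.0, tol=1e-12, max_iter=1000):
    """Find t_r such that t_r = t - |x_obs - x_src(t_r)| / c."""
    t_r = t  # initial guess: no delay
    for _ in range(max_iter):
        r = abs(x_obs - (x0 + v * t_r))  # source position at candidate t_r
        t_next = t - r / c
        if abs(t_next - t_r) < tol:
            return t_next
        t_r = t_next
    raise RuntimeError("fixed-point iteration did not converge")

# Static source at distance 3: the retardation is exactly r/c = 3.
print(retarded_time(t=10.0, x_obs=3.0, x0=0.0, v=0.0))  # -> 7.0
# Moving source (v = 0.1 c): the delay differs from the naive r(t)/c.
print(retarded_time(t=10.0, x_obs=3.0, x0=0.0, v=0.1))
```

For a static source the iteration terminates immediately, reproducing the simple $r/c$ delay; for a moving source it converges to the self-consistent retarded time, exactly the complication mentioned above.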

The electrostatic field that charge $Q_1$ experiences, due to the finite propagation speed $c$ of the electric field (or interaction), corresponds to the field that $Q_2$, at a distance $r$, generated $r/c$ time ago ($k$ is the constant describing the electrical interaction). This term has no role if the two charges do not change their position; similarly, in the special theory of relativity, only relative movement leads to complications. If the distance changes, its effect is so tiny that the term can usually be omitted. So, our college knowledge can serve as a good first approximation.

This speed term brings another law from classical physics to mind: Kirchhoff's junction rule. The law is perfect in the 'instant interaction' approximation that classical physics uses, but not for biology. First, because it expresses charge conservation, it is invalid when charges are "created" inside biological objects (ions diffuse into the junction; see the role of ion channels in the membrane wall). Second, it is not valid for input currents arriving with finite speeds into finite-size space regions; it is valid only for a single point in space-time (in other words, in differential-equation form). As we discuss in section 2.5.2, using a wrong definition of current means assuming 'instant interaction', that is, that neural signals propagate with the speed of light. The currents (and the voltages) measured at two different points in space-time are different. Consequently, for extended objects (such as a line-like, finite-size neuron), the rule is valid only with a time delay

$I_{\mathrm{out}}(t) = I_{\mathrm{in}}(t - \Delta t)$ (2.3)

The time delay in biology is in the 1 ms range. We must not describe the axon or the membrane with the non-differential form of the Kirchhoff equation: the input and the output currents flow at different times (the charge carriers need time to travel from input to output); only the differential-equation form expresses charge conservation (furthermore, in the case of "producing" ions, even the differential form is invalid). For its exact interpretation, see the sections on axonal charge delivery and on the true membrane current, Fig. 3.23, and the text around it. Studying electrical phenomena in structured media, such as biological cells, needs much care. We must not apply laws derived under entirely different conditions (mainly for metals). The "ghost image" can be the "zeroth-order" approach to understanding the AP: we assume that a "slow current", appearing as the PSP, flows out through the AIS some $\Delta t$ time later than it flowed in "at the other end" of the neuron. The time delay is needed for the slow current to travel from the entry point to the exit. In fact, the figure displays Kirchhoff's junction rule for the slow currents in biology: the rush-in current triggered by exceeding the threshold leaves the junction a few tenths of a millisecond later. The difference between the numerical solution and the simple difference is mainly due to the distributed nature of the input current. As the conduction speed increases, the difference current decreases, and at the speed of 'instant interaction', it results in Kirchhoff's junction rule as known in technical electricity.
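The delayed junction rule can be sketched in a few lines: feeding a smooth hypothetical input pulse through a pure delay, the instantaneous difference current $I_{\mathrm{in}}(t) - I_{\mathrm{out}}(t)$ is non-zero during the transient and vanishes as the delay goes to zero (the pulse shape and time scale below are arbitrary assumptions).

```python
# Sketch of the delayed junction rule I_out(t) = I_in(t - delay): during a
# transient, the instantaneous difference I_in(t) - I_out(t) is non-zero,
# and it shrinks as the delay goes to zero (the 'instant interaction'
# limit, where the classical Kirchhoff junction rule is recovered).
import math

def i_in(t):
    """A smooth hypothetical input pulse (arbitrary units and time scale)."""
    return math.exp(-((t - 1.0) ** 2) / 0.1)

def max_difference_current(delay, t_max=3.0, steps=3000):
    """Largest |I_in(t) - I_in(t - delay)| over the transient."""
    return max(abs(i_in(k * t_max / steps) - i_in(k * t_max / steps - delay))
               for k in range(steps + 1))

for delay in (1e-1, 1e-2, 1e-3, 0.0):
    print(f"delay {delay:8.0e}  max difference current "
          f"{max_difference_current(delay):.3e}")
```

The difference current scales with the delay: a faster conduction speed (smaller delay) makes it smaller, and at zero delay the classical junction rule holds exactly, matching the limit described above.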