Galilei said, "Mathematics is the language in which God has written the universe." However, when we attempt to read a piece of the universe written in that language, it is not certain that we use the right piece and dialect of the language, or even that humans have already invented the needed piece; not to mention whether one understands the given piece correctly. For example, mathematical calculus (integral and differential) was invented mainly for the practical need of analyzing the spatial motion of celestial bodies. Similarly, Minkowski's mathematics proliferated widely [110] only after the special theory of relativity was invented: although the mathematical description had been developed earlier, there was no practical need to apply it. The classical laws of motion were valid only until more meticulous observations required considering the dependence on speed and acceleration (the time derivatives of position) in addition to the dependence on position. Newton's static laws remained valid, but for the dynamic description we must revisit the second law of motion.
Also, we must not forget that "mathematics is not just a language. Mathematics is a language plus reasoning. It's like a language plus logic. Mathematics is a tool for reasoning." (Richard P. Feynman) Mathematical formulas work with numbers, but mathematical theorems and statements begin with "If … then": they have their range of validity, even when they describe nature. One can use mathematics, via the classical equations of motion, to calculate forces and times that would accelerate bodies above the speed of light, but in that case mathematics is applied to an inappropriate approximation of nature. When approaching the speed of light, a different physical approximation (which calls for a different mathematical handling) must be used. A mathematical formula, without naming which interactions it describes and under which conditions and approximations it can be applied, is just numbers without meaning: it surely describes something, but not necessarily what we studied. Galilei made measurements with objects subject to friction, but his careful analysis extrapolated his results to the abstraction that no friction was present. We know his name because he made meticulous abstractions and omissions (and, mainly, recognized the need to do so!) instead of publishing a vast amount of half-understood measured data.
The discrete-continuous dichotomy raises an interesting question: down to what scale do quantities describing matter remain differentiable? For macroscopic quantities, for example in technical electricity, surely they do: for a large number of charge carriers flowing through a cross section, the statistical fluctuation of the number of charge carriers is obviously small enough to operate with the approximation that the fluctuation is zero, so the definition of the current at a cross section makes sense. One can even approximate the current by saying that it cannot be smaller than the elementary charge (recall what a revolution happened upon assuming that the energy of radiation cannot be arbitrarily small). Mathematically, the time course of a single charge carrier passing the cross section is a δ-function, which is surely not differentiable. Then, is the rush-in current, with its charge carriers, differentiable? If it happens in fractions of a nanosecond, over a surface of the membrane? The same question arises for single ion channels. Or when that rush-in current is distributed over a range of several milliseconds? In a microsecond, on average, about a thousand electrons represent the current; at the beginning and the end of an AP, only a few dozen. These are reasonable questions when measuring the voltage gradient that such a low current generates on the resistor represented by an ion channel.
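The arithmetic behind these questions can be sketched numerically. The snippet below is a hedged back-of-the-envelope check (the 1 pA single-channel current and the function names are illustrative assumptions, not values from the text): it counts how many elementary charges cross a section per time window, and how large the relative Poisson fluctuation of that count is, contrasting channel-scale and macroscopic currents.

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact, SI 2019)

def carriers_per_window(current_amperes, window_seconds):
    """Mean number of elementary charges crossing a section in one time window."""
    return current_amperes * window_seconds / E_CHARGE

def relative_fluctuation(mean_count):
    """Relative fluctuation, sqrt(N)/N, of a Poisson-distributed count N."""
    return mean_count ** -0.5

# A single open ion channel passing ~1 pA, observed over one microsecond:
n_channel = carriers_per_window(1e-12, 1e-6)  # only ~6 charges per microsecond
print(n_channel, relative_fluctuation(n_channel))  # fluctuation ~40%: far from continuous

# A technical current of 1 A, same one-microsecond window:
n_macro = carriers_per_window(1.0, 1e-6)  # ~6e12 charges
print(n_macro, relative_fluctuation(n_macro))  # fluctuation ~4e-7: effectively smooth
```

The point of the contrast is that the zero-fluctuation approximation, excellent for the 1 A wire, visibly fails at the single-channel scale.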
The interdependent behavior of charge and mass has an interesting, entirely mathematical consequence as well. The partial derivative of a function of several variables is defined as its derivative with respect to one of those variables, with the others held constant. In the case of ions, varying the function with respect to mass or charge means simultaneously varying the other as well; that is, the partial derivatives depend on each other. In other words, one cannot calculate the partial derivatives, only the total derivative. As a consequence, equations that use the partial derivatives of concentration and electrical potential are either incorrect or only more or less good approximations; classical mathematics cannot be applied. Consequently, in the case of ions and electrolytes, one must not use the well-established concepts of thermodynamics (enthalpy, entropy, etc.) in unchanged form. (By the way, the discrete nature of charged particles leaves open the question of under which conditions the ion current remains differentiable.)
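The constraint can be made explicit in a short derivation (the symbols $m_{\mathrm{ion}}$, $z$, and $N$ are illustrative notation, not taken from the text). For a single ion species, adding $\mathrm{d}N$ ions changes mass and charge in lockstep:

```latex
\[
  \mathrm{d}m = m_{\mathrm{ion}}\,\mathrm{d}N, \qquad \mathrm{d}q = z e\,\mathrm{d}N .
\]
% The definition of the partial derivative of a state function f(m, q),
\[
  \left.\frac{\partial f}{\partial m}\right|_{q}
  = \lim_{\Delta m \to 0} \frac{f(m+\Delta m,\,q) - f(m,\,q)}{\Delta m},
\]
% requires varying m while q is held fixed -- a variation ions cannot
% physically realize. Only the total derivative along the constraint exists:
\[
  \frac{\mathrm{d}f}{\mathrm{d}N}
  = \frac{\partial f}{\partial m}\, m_{\mathrm{ion}}
  + \frac{\partial f}{\partial q}\, z e .
\]
```

That is, the limit defining $\partial f/\partial m$ at constant $q$ refers to a physically unrealizable variation for ions; only the combination along the constraint line is observable.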
Experience shows that thermodynamics correctly describes the macroscopic behavior of physical media when an enormous number of microscopic objects are present in a macroscopic volume. Are biological objects large enough to neglect the statistical fluctuations of the objects they contain? The Nernst-Planck equation (and the corresponding transport equations) relates concentration and voltage gradients. Are those laws valid at the size of ion channels, where it is hard to interpret those derivatives? Do those statistical laws remain valid for quasi-continuous infinite volumes, for non-infinite closed volumes, or for individual ions? Recall that thermodynamics follows a hybrid path: it calculates the probable number of particles in a volume of phase space and treats that as if it were the actual number of particles.
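To make concrete the kind of continuum relation whose channel-scale validity is being questioned, here is a hedged numerical sketch of the Nernst equation, the equilibrium special case of Nernst-Planck (the potassium concentrations are textbook-style illustrative values, not measurements from the text):

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_potential(z, c_out, c_in, temp_kelvin=310.0):
    """Equilibrium (Nernst) potential in volts: E = (R*T)/(z*F) * ln(c_out/c_in)."""
    return (R * temp_kelvin) / (z * F) * math.log(c_out / c_in)

# Potassium (z = +1), illustrative mammalian concentrations in mM, at 37 C:
e_k = nernst_potential(z=1, c_out=5.0, c_in=140.0)
print(f"E_K = {e_k * 1000:.0f} mV")  # roughly -89 mV
```

The formula presupposes well-defined concentrations on both sides; inside a channel pore holding at most a handful of ions at a time, "concentration" (and hence its gradient) loses its straightforward meaning.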
Mathematics and physics have no methods that go beyond pairwise, instantaneous interactions.
Science has a well-established method for deriving its laws. It groups resemblant phenomena into major disciplines, and these branch along even closer resemblance into sub-disciplines. The hierarchy is not strict, given that "nature is not interested in our separations, and many of the interesting phenomena bridge the gaps between fields" [6]. Mathematics is a discipline in its own right. It abstracted its notions from nature's concrete objects and phenomena; its abstract methods serve as a solid base for its fellow sciences and are frequently used to describe new discoveries. It is commonly accepted that physics is the basic tool for studying nature at some elementary level. Chemistry, at another level of abstracting nature, is based upon the laws of physics and constructs its own laws for the phenomena it researches: the laws of physics are borrowed and respected, and the laws of chemistry cannot conflict with the laws of physics. Biology, the discipline studying living matter, is considered an even higher-level abstraction, with its own laws. These laws, as expected, cannot conflict with the laws of chemistry and physics. However, it looks as if some laws, mainly those belonging to thermodynamics, are not perfect for describing biological phenomena. In particular, life itself seems to contradict the second law of thermodynamics. E. Schrödinger suspected that some non-ordinary laws have yet to be derived to describe life.
Classical physics simplifies things until it can attribute a single substance (such as mass or charge) to the tested object, and in its laboratories it accurately derives the laws and determines the attributes of the substances. As E. Schrödinger pointed out, "the construction [of living matter] is different from anything we have yet tested in the physical laboratory". Science has experienced many times that the approximations in use do not describe some phenomena accurately. At such times, scrutinizing the fundamental assumptions (such as whether Galilei's relativity principle remains valid when approaching the speed of light, or whether matter remains continuous when the tested sample is made extremely small) has led to the establishment of new scientific fields, such as the theories of relativity or the theory of quantum mechanics. When describing living matter, one needs some further approximations.