Quotation:
each neuron is a compact, efficient, nonlinear, analog summing device, …the rules of which are not quite understood as of yet. [56] @1972
The ultimate aim of computational neuroscience is
to explain how electrical and chemical signals are used in the brain
to represent and process information [14] @1988
The brain computes!
This is accepted as a truism by the majority of neuroscientists. [24] @1999
”We so far lack principles to understand rigorously how computation is done
in living, or active, matter” [153] @2018
Yet for the most part, we still do not understand the brain’s underlying computational logic [5]
Despite the impressive results of grandiose projects [5, 154], and the sometimes triumphal communications, there has been no significant advance in understanding neuronal computing in four decades. Unfortunately, these projects attempt to merge a decades-old, outdated theory [155] with modern hardware and software facilities. Furthermore, even the true advances are miscommunicated [31], and they are mismatched with unrealistic designs and plans. The Human Brain Project believes that it is sufficient to build a computing system with theoretically sufficient resources for simulating 1 billion neurons, and does not want to admit that such a system can be used [156] to simulate orders of magnitude fewer, below 100 thousand, simply because the presently available serial systems do not enable such performance to be reached [157]. Although even the 1-million-core system proved unusable, building a 10-million-core system received the green light. Despite this lack of knowledge, the half-understood principles of neuromorphic computing are extensively used [158, 159], although it is clear that brain-inspired computing needs a master plan [160]. Maybe, really, ”a new understanding of the brain” [and the cooperation of scientific disciplines] is needed.
Quotation: ”Indeed, the operation of our brain differs vastly from that of human-made computing systems, both in terms of topology and in the way it processes information, which explains its different aptitudes” [159] Our ”present-day digital computers are optimized for high-precision calculations but consume an inordinate amount of energy when they run the type of cognitive tasks that the brain excels at” [159].
Today we have the ”golden age” of neuromorphic (brain-inspired, artificial intelligence) architectures and computing. However, the meaning of the word has changed considerably since Carver Mead [161] coined the term. Today practically every solution that borrows at least one operating principle from biology, and mimics some of its functionality more or less successfully, deserves this name. As always, seizing on a single aspect and implementing it in an environment and from components based on entirely different principles is dangerous. Historically, ’neuromorphic’ architectures were proposed on the basis of different principles and components, such as mechanics, pneumatics, telephones, analog and digital electronics, and computing. Some initial resemblance surely exists, and even some straightforward systems can demonstrate, more or less successfully, functionality similar in some aspects to that of the nervous system. There is a noteworthy analogy between the deep learning of neuronal nodes and the long-term potentiation found in synapses.
However, when scrutinizing scalability (i.e., how those systems behave under real-life conditions, in which a vast number of similar subsystems must work and cooperate), the picture is not favorable at all. ”Successfully addressing these challenges [of neuromorphic computing] will lead to a new class of computers and systems architectures” [158] has been targeted. However, as noticed by the judges of the Gordon Bell Prize, ”surprisingly, [among the winners,] there have been no brain-inspired massively parallel specialized computers” [162]. This holds despite the vast need and investments, and despite the concentrated and coordinated efforts, simply because computing mimics the biological systems inadequately.
Given ”that the quest to build an electronic computer based on the operational principles of biological brains has attracted attention over many years” [163], modeling neuronal operation became a well-known field in both electronics and computing. At the same time, more and more details come to light about the computational operations of the brain. However, it would appear that ’wet’ neuroscience is miles ahead of ’silicon’ neuroscience. There are projects and exaggerated claims about extremely large computing systems, even about targeting the simulation of the brains of some animals and eventually even the human brain. Often these claims are followed by a long silence, or by rather slim results, or none at all. As the operating principles of large computer systems tend to deviate from those of a single processor, it is worth reopening the discussion on a decade-old question: ”Do computer engineers have something to contribute … to the understanding of brain and mind?” [163]. Maybe; and they surely have something to contribute to the understanding of computing itself. There is no doubt that the brain does computing; the key question is how.