PERCEPTRON 5)
An electro-mechanical device able to recognize some patterns among a number of stimuli it is able to register.
The perceptron was developed by F. ROSENBLATT during the fifties. It consisted of a trainable network of connected nodes that interacted in such a way as to be able to "learn", at least to a limited degree.
Each node could do simple calculations, like adding and subtracting, based on the signals it received, and could also emit signals of its own. To decide which signal to send, a node adds up its inputs after "weighing" them through very simple mechanisms that give them different values. A positive weight makes the node more likely to emit when it receives that specific signal; a negative weight makes it less likely. A perceptron is thus more or less progressively self-programmed: it becomes able to identify "right" answers and, through further adjustment of the weights, to close in ever more on them by a discriminating use of its inputs.
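The mechanism described above — a weighted sum of inputs, a threshold decision, and a gradual adjustment of weights toward "right" answers — can be sketched as follows. This is a minimal illustration of the classic perceptron learning rule, not Rosenblatt's own formulation; all function names and parameters are illustrative.

```python
def predict(weights, bias, inputs):
    """Emit 1 if the weighted sum of the inputs exceeds zero, else 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=20, rate=0.1):
    """Self-programming by weight adjustment: weights on inputs that
    should have caused the node to emit are raised, weights on inputs
    that caused a wrong emission are lowered."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learn to recognize one simple pattern: the logical AND of two stimuli.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

After training, the node emits only when both stimuli are present — a pattern it was never explicitly programmed to detect, only rewarded toward.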
ROSENBLATT considered at the time that the perceptron was a probabilistic model "for information storage and organization of the brain", and "as a concept it would seem that the perceptron has established, beyond doubt, the feasibility and principle of non-human systems which may embody human cognitive functions at a level far beyond that which can be achieved through present day automatons" (F. ROSENBLATT, 1958, p.449).
The perceptron was shelved for many years after M. MINSKY and S. PAPERT showed in 1969 the practical difficulties of achieving any efficient self-programming able to obtain really interesting results. They proposed instead a "top-down" approach to artificial intelligence, introducing programs for the manipulation of abstract symbols representing knowledge, together with transformation rules.
The same problem had also been signaled by F. OFFNER in a 1965 paper (p.195-200).
Such a critique seems at once well founded and off the mark. The perceptron is by necessity a highly parallel machine, thus inevitably chaotic and practically unprogrammable. But, as observed by E. ANDREEWSKY, such systems "do not have central units, and each of their elements functions in its local environment in order to determine a global cooperation, that is the system's self-organization. The network shapes itself progressively through the stimuli it receives; in other words, its learning activity, which is continuous during its implementation, results in changes of the network itself (whereas classically, learning is only change in knowledge representation)" (1993, p.193).
"Global cooperation in local environments" leads to the very interesting new field of computational ecology (B.A. HUBERMAN, 1988).
ROSENBLATT's idea resurfaced in the eighties in the guise of connection machines, or interactions among machines. As stated by P. DENNING, this was made possible when "several investigators independently discovered how to transform ROSENBLATT's perceptron learning heuristic into a precise algorithm… called backward error propagation, because errors in the output are used to adjust the weight applied to the inputs. One version of the backward error propagation algorithm was described in 1986 by D.E. RUMELHART and J.L. McCLELLAND and their colleagues" (1992, p.427).
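DENNING's phrase — errors in the output are used to adjust the weights applied to the inputs — can be illustrated with a minimal sketch of backward error propagation on a tiny two-layer network. This is not the published RUMELHART-McCLELLAND code; the network size (two inputs, two hidden units, one output) and all names are illustrative assumptions.

```python
import math

def sigmoid(z):
    """Smooth threshold: maps any sum to an emission level between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(w_hidden, w_out, x):
    """Forward pass through hidden units to a single output unit."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, y

def backprop_step(w_hidden, w_out, x, target, rate=0.5):
    """One update: the output error propagates backward, via the chain
    rule, to adjust every weight, including those on the raw inputs."""
    h, y = forward(w_hidden, w_out, x)
    # Output-layer error term (squared error differentiated through the sigmoid).
    delta_out = (y - target) * y * (1 - y)
    # Each hidden unit receives its share of the output error, scaled by
    # its own sigmoid derivative.
    delta_h = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    # Move every weight a small step against its error gradient.
    new_w_out = [w_out[j] - rate * delta_out * h[j] for j in range(len(h))]
    new_w_hidden = [[w_hidden[j][i] - rate * delta_h[j] * x[i]
                     for i in range(len(x))] for j in range(len(h))]
    return new_w_hidden, new_w_out

# A single backward-propagation step reduces the output error on an example.
w_h = [[0.3, -0.2], [0.1, 0.4]]   # illustrative starting weights
w_o = [0.25, -0.35]
x, t = [1.0, 0.5], 1.0
_, y_before = forward(w_h, w_o, x)
w_h, w_o = backprop_step(w_h, w_o, x, t)
_, y_after = forward(w_h, w_o, x)
```

The key difference from Rosenblatt's heuristic is precision: the smooth sigmoid makes the error differentiable, so each weight's adjustment is an exact gradient rather than a fixed increment.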
Categories
- 1) General information
- 2) Methodology or model
- 3) Epistemology, ontology and semantics
- 4) Human sciences
- 5) Discipline oriented
Publisher
Bertalanffy Center for the Study of Systems Science (2020).
To cite this page, please use the following information:
Bertalanffy Center for the Study of Systems Science (2020). Title of the entry. In Charles François (Ed.), International Encyclopedia of Systems and Cybernetics (2). Retrieved from www.systemspedia.org/[full/url]
We thank the following partners for making the open access of this volume possible: