A procedure to compute the error in the activity level (EA) of a unit in a neural computer network (i.e. the difference between the actual and the desired output), and to correct the corresponding weights accordingly, via the error with respect to each weight (EW).
In a neuron which processes signals (= activities), each input is multiplied by a number called its weight. The weighted sum of the inputs, as computed by the neuron, determines the output of the neuron through an input-output function.
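The computation just described can be sketched for a single unit as follows. The choice of a logistic sigmoid as the input-output function is an assumption for illustration; the source does not name a particular function.

```python
import math

def unit_output(inputs, weights):
    """A single unit: each input is multiplied by its weight, the
    products are summed, and the total input is passed through an
    input-output function (here a logistic sigmoid, an assumption)."""
    total_input = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total_input))
```

For example, with inputs whose weighted sum is zero, the sigmoid returns 0.5, the midpoint of its output range.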
If the output differs from the desired activity, i.e. when there is an error in activity (EA), this error is calculated as "the square of the difference between the actual and the desired activities", as stated by G. HINTON.
The weights are then modified in order to reduce the error in activity.
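A minimal sketch of this corrective step for a single linear unit: EA is the square of the difference between actual and desired activities, and each weight is moved against the gradient of EA. The learning rate `lr` is an assumption for illustration, not from the source.

```python
def update_weights(inputs, weights, desired, lr=0.1):
    """One corrective step for a single linear unit (a sketch).
    EA = (actual - desired)**2, per Hinton's definition; each weight
    is nudged opposite to dEA/dw_i = 2 * (actual - desired) * x_i."""
    actual = sum(x * w for x, w in zip(inputs, weights))
    ea = (actual - desired) ** 2
    new_weights = [w - lr * 2 * (actual - desired) * x
                   for x, w in zip(inputs, weights)]
    return ea, new_weights
```

When the actual output already matches the desired one, EA is zero and the weights are left unchanged.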
The back-propagation algorithm – invented around 1974 by P. WERBOS – is thus basically a feedback process.
In HINTON's words, "The back-propagation algorithm is easiest to understand if all the units in the network are linear… For nonlinear units, the back-propagation algorithm includes an extra step. Before back-propagating, the EA must be converted into the EI, the rate at which the error changes as the total input received by a unit is changed". (1992, p.106)
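The extra step Hinton describes can be sketched as follows, assuming a logistic sigmoid input-output function (an assumption; any differentiable function would do, with its own derivative). For the sigmoid y = 1/(1+exp(-x)), the derivative dy/dx equals y(1-y), so EI is obtained from the derivative of the error with respect to the activity by multiplying by that factor.

```python
def ea_to_ei(ea_derivative, output):
    """Convert the error with respect to a unit's activity (here its
    derivative dE/dy) into EI, the rate at which the error changes as
    the unit's total input changes. Assumes a logistic sigmoid
    input-output function, whose derivative is y * (1 - y)."""
    return ea_derivative * output * (1.0 - output)
```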
This corrective mechanism is iterative (see below).
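The iteration can be sketched by repeating the corrective step until EA falls below a threshold. The learning rate, tolerance, and step limit are illustrative assumptions, not from the source.

```python
def train(inputs, weights, desired, lr=0.1, tol=1e-6, max_steps=1000):
    """Iteratively correct the weights of a single linear unit until
    the error in activity, EA = (actual - desired)**2, is below tol."""
    for _ in range(max_steps):
        actual = sum(x * w for x, w in zip(inputs, weights))
        if (actual - desired) ** 2 < tol:
            break
        # Move each weight against dEA/dw_i = 2 * (actual - desired) * x_i.
        weights = [w - lr * 2 * (actual - desired) * x
                   for x, w in zip(inputs, weights)]
    return weights
```

Each pass feeds the error back into the weights, which is why the procedure is described above as basically a feedback process.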
To cite this page, please use the following information:
Bertalanffy Center for the Study of Systems Science (2020). Title of the entry. In Charles François (Ed.), International Encyclopedia of Systems and Cybernetics (2). Retrieved from www.systemspedia.org/[full/url]