"An array of processors interlinked by connections that can be strengthened or weakened" (H. COLLINS, 1992, p.40).

(Different types of similarly constructed networks are also known as connectionist models or machines, artificial neural systems, or parallel distributed processors)

P. DENNING explains: "Neural networks are structures consisting of highly interconnected elementary computational units. The networks map input patterns, encoded as vectors of bits, into output patterns. They are called neurons not because they model the nervous systems of animals but because they were inspired by them. Most authors do not emphasize the biological metaphor: they talk of units and connections rather than neurons and synapses" (1992b, p.426).

R. ROSEN offers the following short synthesis about the origins of neural network models: "They were initially introduced by McCULLOCH and PITTS in 1943, as discrete, Boolean versions of the continuous-time dynamical network introduced earlier by RASHEVSKY (1960, p.230-41) and his collaborators. Technically the passage to discrete time served to sidestep the then insuperable analytical problems arising from the systems of nonlinear differential equations which characterize the continuous-time case: the study of neural nets requires only algebra and combinatorics. It quickly became clear that the neural networks not only furnished key insights into the operation of the brain, but also indicated how one might fabricate new systems which could behave in a brain-like fashion. Consequently, neural networks became one of the primary motivations of the theory of automata in all of its ramifications, especially in computation" (1979, p.181).
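The discrete, Boolean character of the McCULLOCH-PITTS units can be sketched in a few lines (the weights and thresholds below are illustrative choices, not figures taken from the 1943 paper):

```python
# Minimal sketch of a McCulloch-Pitts threshold unit: the neuron "fires" (1)
# when the weighted sum of its binary inputs reaches a fixed threshold.
def mcculloch_pitts(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 realizes Boolean AND,
# while a threshold of 1 realizes Boolean OR.
AND = lambda x, y: mcculloch_pitts([x, y], [1, 1], threshold=2)
OR = lambda x, y: mcculloch_pitts([x, y], [1, 1], threshold=1)
```

Note that only algebra (a weighted sum and a comparison) is involved, which is precisely ROSEN's point about why the discrete model was tractable.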

As to the operation of the brain, W.J. FREEMAN (1975) (as quoted by S. GOONATILAKE, 1991, p.20) showed that "collectivities of nerve cells up to the level of ten million nerves… have been studied as dissipative structures and shown to be self-organizing with autopoietic properties".

Presently, the analogy between neural networks (biological) and massively parallel computers (constructed artifacts assimilated to neural networks from a functional viewpoint) works in both directions.

S. STROGATZ and I. STEWART make the following suggestive comment: "Neurons often act as oscillators and so, if central pattern generators exist, it is reasonable to expect their dynamics to resemble those of an oscillator network" (1993, p.73).
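The behavior of such an oscillator network can be illustrated with a small simulation. The sketch below uses the Kuramoto model of coupled phase oscillators, a model choice of our own for illustration, not one named in the quoted comment:

```python
import math

# Illustrative oscillator network: each phase oscillator is pulled toward
# the phases of the others; with identical natural frequencies and positive
# coupling, the population synchronizes.
def simulate(phases, freqs, coupling, dt=0.01, steps=5000):
    n = len(phases)
    for _ in range(steps):  # simple Euler integration of the phase equations
        phases = [p + dt * (w + coupling / n *
                            sum(math.sin(q - p) for q in phases))
                  for p, w in zip(phases, freqs)]
    return phases

# Order parameter r in [0, 1]: r near 1 means near-perfect synchrony.
def order(phases):
    n = len(phases)
    return math.hypot(sum(math.cos(p) for p in phases) / n,
                      sum(math.sin(p) for p in phases) / n)

start = [0.0, 1.0, 2.0, 3.0]           # four oscillators, scattered phases
end = simulate(start, [1.0] * 4, coupling=1.0)
```

Starting from scattered phases, the order parameter climbs toward 1 as the oscillators lock together, which is the kind of collective rhythm a central pattern generator is supposed to produce.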

COLLINS, in turn, states: "… the concept takes its inspiration from the interconnected neurons of the brain. A neural network is "trained" by being given a series of examples of correct responses and the connections between its processors are strengthened or weakened according to its success in reproducing what is wanted".

… and "The neural network is never given an explicit body of rules to follow" (1992, p.40).

In other words, a neural network is trainable. It does not receive an algorithmic program, and its behavior is neither precisely specified nor standardised. It "learns", i.e. acquires patterns of activity, by progressive adjustment of the connections between inputs and outputs through feedback and "synaptic weighting". It must not, however, present completely random behavior if it is to reach some operative state: the absence of a rigidly deterministic body of rules does not mean no rules at all, since networks generally become functional through Boolean switching functions of either the AND or the OR type.
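This training by strengthening and weakening of connections can be sketched concretely. The example below uses the classical perceptron learning rule, an illustrative choice since the entry names no specific rule, to learn the Boolean OR function from examples alone:

```python
# Sketch of "training": the net is never given the OR rule explicitly,
# only examples of correct responses; its weights are strengthened or
# weakened according to its success (the perceptron rule, an assumption).
def train(examples, lr=0.1, epochs=20):
    w = [0.0, 0.0]          # connection weights
    b = 0.0                 # bias ("threshold" adjustment)
    for _ in range(epochs):
        for inputs, target in examples:
            out = 1 if w[0] * inputs[0] + w[1] * inputs[1] + b > 0 else 0
            err = target - out                    # feedback on the response
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err                         # adjust the connections
    return w, b

OR_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(OR_examples)
predict = lambda x, y: 1 if w[0] * x + w[1] * y + b > 0 else 0
```

After a few passes over the examples the adjusted weights reproduce OR, even though no explicit body of rules was ever supplied, exactly COLLINS's point.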

D. GREENWOOD enumerates the following advantages in information processing by neural networks:

"1) Information learned by a network (i.e., stored in network memory) can be, as in biological systems, very tolerant to hardware failure and can be insensitive to arithmetic rounding problems…

"2) Existing network learning methods can circumvent the ordinary difficult problem of programming parallel processors. Networks learn by example and computer algorithms often are not required.

"3) Pattern recognition can occur in parallel and patterns with missing or distorted components can be completed.

"4) Feature extraction or abstraction can result as a by-product of the network learning process or from specially configurated "competititive" networks.

"5) Neural networks can perceive and recognize spatial and/or temporal patterns in the presence of noise and with distortion. They also can process discrete or continuous data.

"6) Hierarchies of data structures can be represented as layers in a multilayer network.

"7) Conflicting aspects of information can be accomodated through network self-organization. Competing hypotheses can be represented, evaluated and selected in one network.

"8) Neural network classifiers can be nonparametric and make relatively weak assumptions concerning the shapes of underlying probability density functions compared to traditional statistical classifiers. Networks can learn multimensional probability density functions.

"9) Networks can act as controllers in environments with several constraints" (1991, p.3).

Most of the former objections to perceptrons by M. MINSKY and S. PAPERT have thus been overcome.

However, D. GERNERT recently observed that: "… the neural network still associates an output pattern to any given input pattern, such that the neural network itself can be considered a model of the original system. In a lot of practical applications this idea works with sufficient precision" (1994, p.128). But: "As far as can be seen presently, implicit models in neural networks do not open a way toward explicit understanding. A network can be prepared such that all internal weights and connection strengths at a certain instant can be measured, but such data offer no chance of an interpretation – no rules, correlations, or similar elements of our customary explicit models can be derived.

"Even worse, implicit models remain valid only as long as the conditions of their formation persist… Unfortunately… very often, the decay of such models creeps up slowly and unnoticed" (p.129).

It also seems probable that neural networks are globally subject to power laws.

Neural networks are now much more than a kind of metaphor or abstract model.

D. FLOREANO writes: "by neural networks, it must of course be understood "nets of artificial neurons". These can be materialised either as electronic components, or computer simulated" (2002, p.24).

FLOREANO and his team at the Federal Polytechnic School in Lausanne, Switzerland, have in fact created a real mobile mini-robot (Khepera), equipped with two wheels, eight light sensors observing its environment, and contacts designed for spontaneous connection to electrical sockets when the robot is in need of energy.

Khepera's incorporated neural network is made of electronic components that may become interconnected in different ways, receiving and emitting signals and transmitting them in a way similar to the behavior of biological neurons (2002, p.24-26).

→ Automata nets, von NEUMANN's architecture

2) methodology or model
