International Encyclopedia of Systems and Cybernetics

2nd Edition, as published by Charles François in 2004. Presented by the Bertalanffy Center for the Study of Systems Science, Vienna, for public access.


The International Encyclopedia of Systems and Cybernetics was first edited and published by the system scientist Charles François in 1997. The online version provided here is based on the 2nd edition of 2004. It was uploaded and gifted to the center by ASC president Michael Lissack in 2019; the BCSSS purchased the rights for the re-publication of this volume in 200?. In 2018, the original editor expressed his wish to pass the stewardship over the maintenance and further development of the encyclopedia to the Bertalanffy Center. In the future, the BCSSS seeks to further develop the encyclopedia through open collaboration within the systems sciences. Until the center has found and implemented an adequate technical solution for this, the static website is made accessible for the benefit of public scholarship and education.



"The creation of algorithms and systems that exhibit intelligent behavior, including activities such as adaptive and heuristic programming, cognitive process modelling, expert systems, natural language processing, and neural networks" (G. FORGIONE, 1991, p.64)

This definition covers a wide area of quite different subjects. It is much more than, in K. KRIPPENDORFF's words: "A branch of computer science concerned with the programming of computers so that they exhibit apparently intelligent behavior, e.g. the design of robots, chess playing automata or theorem proving machines" (1986, p.4)

The "apparently intelligent behavior" conforms to TURING's criterion, according to which a "machine" should be considered intelligent if an observer, unaware that she/he is dialoguing with a machine, cannot differentiate it from a human being.

According to M. MINSKY, an A.I. device must have an understanding, at least rudimentary, of its own problem-solving processes. He believes that, if one endows it with a model of its own workings, it could eventually improve itself.

This seems questionable, at least for two reasons:

- GÖDEL's Incompleteness theorem may not allow any system to contain a complete model of itself.

- If the model is introduced from outside, the predominance and necessity of natural intelligence is re-established.

FORGIONE's definition reflects the evolution of the field, already observed in 1980 by M. BODEN, who wrote: "Artificial Intelligence cannot be expressed in the terminology of traditional cybernetics, which focusses on feedback and adaptive networks and which defines information-processing in quantitative rather than qualitative terms" (1990, p.30)

E. ANDREEWSKY states: "Two main axes which, to judge by current debates, appear to be antagonistic, characterize the cognitive models of Artificial Intelligence: the so-called 'classical' Artificial Intelligence where cognition is considered from a "software" viewpoint (insofar as it concerns, as in computer programs, "computation" on symbols) and Connectionism where cognition is considered as if "emerging" from the brain and its neurons. These axes define different, but nevertheless very complementary, points of view on mind" (1993, p.189)

And "Connectionism is considered by the tenants of classical Artificial Intelligence as a resurgence of Behaviorism, with the negative connotations associated to this approach. For connectionists, classical Artificial Intelligence is merely a "Spanish inn", insofar as the rules (or the meta-rules) of reasoning are given to the system" (p.194) (Note: In Spanish inns of old you were supposed to find only what you brought in yourself)

In fact, the classical view on A.I. implies that no intelligent system can work without a program of algorithms, i.e. software. This is probably true, but how are these very algorithms constructed?
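The classical, symbolic view of A.I. described above (the "Spanish inn", where the rules of reasoning are given to the system) can be illustrated with a minimal sketch. The rule format, fact names, and `forward_chain` function below are illustrative assumptions, not taken from the entry:

```python
# Minimal sketch of "classical" A.I.: intelligence as computation on
# symbols, with the rules supplied in advance by the programmer.
# All rule and fact names here are invented for illustration.

def forward_chain(facts, rules):
    """Repeatedly fire rules (premises -> conclusion) until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]
derived = forward_chain({"has_feathers", "can_fly"}, rules)
print(sorted(derived))
```

Every conclusion the system reaches was already implicit in the rules it was given, which is precisely the connectionists' "Spanish inn" objection.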

The connectionist answer tries to explain this point through the concept of "spontaneous" build-up of patterns from unplanned interconnections in the network. Furthermore, the representations acquired by the network are seemingly spread out over the whole network, in a kind of holographic way. But how do these patterns become fixed, and how are the rules to construct the network generated? The very general topic of order from randomness lurks behind these questions.
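The "holographic" spreading of a representation over the whole network can be sketched with a small Hopfield-style associative memory (a standard connectionist model, chosen here as an illustration; the specific pattern and network size are arbitrary assumptions). The stored pattern resides in no single unit but in the entire weight matrix, and can be recalled from a corrupted cue:

```python
import numpy as np

# Hopfield-style sketch: a pattern stored "holographically" in the
# weight matrix via Hebbian learning, then recalled from a noisy cue.

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian storage: outer product of the pattern with itself,
# with self-connections removed
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# corrupt two of the eight units in the cue
cue = pattern.copy()
cue[0] = -cue[0]
cue[3] = -cue[3]

# synchronous updates until a fixed point is reached
state = cue
for _ in range(10):
    new = np.sign(W @ state)
    if np.array_equal(new, state):
        break
    state = new

print(np.array_equal(state, pattern))  # prints True: the pattern is recovered
```

Note that the recovery rule itself (Hebbian storage, sign updates) is still supplied from outside, which is exactly the open question the entry raises: how are the rules to construct the network generated?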

The problem seems to be closely related to that of autogenesis and, as ANDREEWSKY says, both viewpoints are probably needed. It remains to be seen how they can be interconnected and integrated.

See "Power laws in fractal structures" and "Autogenetic systems precursors".

Still a different angle is generalization through induction, as for example in the famed BACON program, which "rediscovered" several classic physical laws: KEPLER's law of planetary motion, NEWTON's law of gravitation and BOYLE's law of gases, among others.
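BACON's style of induction can be caricatured in a few lines: search for a simple combination of the measured variables that stays constant across the data. The exponent search below is a drastic simplification of the actual program (which used richer heuristics), and the tolerance is an arbitrary assumption; the planetary data are real (semi-major axis in AU, period in years):

```python
# Toy BACON-style induction: find small integer exponents (m, n)
# such that T**m / a**n is (nearly) constant across the data,
# thereby "rediscovering" KEPLER's third law.

planets = {  # semi-major axis a (AU), orbital period T (years)
    "Mercury": (0.387, 0.241),
    "Venus": (0.723, 0.615),
    "Earth": (1.000, 1.000),
    "Mars": (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

def find_invariant(data, max_exp=3, tol=0.01):
    """Return the first (m, n, constant) with T**m / a**n constant within tol."""
    for m in range(1, max_exp + 1):
        for n in range(1, max_exp + 1):
            values = [T**m / a**n for a, T in data.values()]
            mean = sum(values) / len(values)
            if all(abs(v - mean) / mean < tol for v in values):
                return m, n, mean
    return None

m, n, const = find_invariant(planets)
print(f"T^{m} / a^{n} is constant, about {const:.3f}")
```

The search finds m = 2, n = 3: T²/a³ is the same for every planet, which is KEPLER's third law. The law is induced from the data, yet the space of hypotheses searched is still fixed in advance by the programmer.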

Generally speaking, D. GREGORY describes as follows what he calls the basic "realist" axioms, problems and goals of A.I.:


- "Knowledge is a commodity. We can trade it, teach it, learn it, forget it, remember it, represent it, discover it…

- Knowledge is distinct from its knower – just like data are distinct from computer disks

- Knowledge is sets of true facts together with rules for combining them

- Knowledge can be reduced to sets of primitives - just like matter is ultimately reducible to fundamental particles" (1993, p.67)

(All these views are controversial, some of them highly so. Knowledge should be understood here only in T. ÖREN's sense (see "Knowledge (Taxonomy of)"))


- "To elicit facts that are free of the distortions produced by the way they are known or described

- To represent facts and their inter-relationships so as to capture the way the world really is

- To specify a utilization engine that arrives at conclusions that are true in the world" (Ibid)

(The words "facts", "really" and "true" should be carefully pondered in order to cautiously relativize their use. See for example "Club of Rome" models and "Systems dynamics")


- "To build an Artificial Intelligence: a disembodied, artificial subject matter expert with whom conversations are possible" (Ibid)

D. GREGORY's conclusion is that "A.I. research seeks to understand the principles by which pragmatically and semantically interesting behaviour can be produced with syntactic machinery" (p.63)

Such a mechanistic and more or less reductionist view is widely, but not universally accepted.




Bertalanffy Center for the Study of Systems Science (2020).

To cite this page, please use the following information:

Bertalanffy Center for the Study of Systems Science (2020). Title of the entry. In Charles François (Ed.), International Encyclopedia of Systems and Cybernetics (2). Retrieved from www.systemspedia.org/[full/url]
