artificial intelligence

Summoning psychoanalysis into the world of artificial intelligence

In this article we would like to present the work of Paul Jorion on intelligent systems, and then extend the presentation to neural networks, since the two belong to a single ongoing history: that of artificial intelligence, a field still in its infancy yet already marked by ruptures.
Photo: Film Ex Machina by Alex Garland, 2015
"And in a way, I didn't really regret leaving artificial intelligence at the time I left it, because I was going in a completely different direction from everybody else. We'll see whether one day it will be called 'innovative', but at the time, well, I did something that surprised everybody. The people working around me were swallowing big treatises on formal logic, and I said: 'Well, we're going to take Freud and Lacan and we're going to make a machine work from that.' Voilà! That was a time when, among scientists (I'm not talking about society as a whole) psychoanalysis was seen as a kind of charlatanism. That will change, of course; we will realize that the psychology we need in this field of artificial intelligence (and I already said it in the book I wrote in '89, Principles of Intelligent Systems) is the type of understanding that comes from Freud, Lacan, Melanie Klein, a number of people who have thought about these things."
Paul Jorion, from his blog, Dec 5, 2014 ("Le temps qu'il fait").

Paul Jorion

Language, machine language, thinking, learning and personality

"Machine learning", self-learning machines
The inaugural session of the Collège de France's annual chair in Computer Science and Digital Sciences took place in early February 2016. Led by Yann LeCun, it focused on artificial intelligence, and more specifically on "deep learning" [1]. Something is therefore happening on the side of the machines, and it has now been heard by a state institution, as if to validate and legitimize an importance finally taken into consideration; something, moreover, that tends to erase a limit hitherto held to be unassailable, the one separating the human subject from the machine with respect to the possibility of self-organization, whose effects machines can now simulate ever more perfectly and finely [2] through the noetic activity of learning. A last stage would thus be in the process of being reached [3], and another old separation would fall more or less completely: the one that opposed the evolutionary self-organization of the living to the fixity of machine programs, which until now could not evolve and learn by themselves but had to be supervised. The consequences of this for societies (in metaphysical, social, affective, categorical, economic and technical terms) are difficult to predict. But all this has already been thought about, analysed and discussed in various parts of the world, and each society may not react in the same way to the challenge posed by these machines and by their integration into the social body, depending on the singularity of the axiomatic regimes specific to each one.
Computing and the intelligence effect
At the time of writing this paper we have not yet listened to the lectures mentioned above, but we would like to take this institutional event as a starting point to discuss a book written by Paul Jorion (who is also an anthropologist and economist) in 1989 and entitled Principles of Intelligent Systems [4], because this book struck us like a real jolt. It seems to us that he approached artificial intelligence, among the very first, in another way, articulating several elements (some of which may even appear foreign to the computing discipline) which together construct what he calls: the intelligence effect.
"Intelligence effect" [5], because Jorion does not ask himself what intelligence is; he tries neither to provide a complete definition nor to extract an essence from it, but rather to identify what is in practice considered by humans to be intelligence, what gives the impression of intelligence. It is therefore from the point of view of human experience, from the phenomenal and immanent point of view one could say, that he addresses the problem. In doing so, he asks what conditions must be met by inorganic machine systems in their internal operations for humans placed in front of them to be able to say: this computer is intelligent, it thinks, its intelligence is similar to mine. And we will see that among the principles he identified to achieve this effect of similarity, some later became the fundamental principles of the daughter branches of these first ("princeps") intelligent systems. All of them share the particularity of moving away, to varying degrees, from expert systems, toward self-learning or evolutionary systems.
We will come back to these systems a little later; for the time being, let us follow in the footsteps of Paul Jorion, who will lead us (through the problematic of language and its conceptualization through, among others, psychoanalysis, which is what seemed strange to us and what interested us in the first place) toward the learning systems that we today call neural networks, as well as toward the method of genetic programming [6].
Do we think because we talk?
In his introduction, Paul Jorion asks about language and its place in relation to thought. He wonders, in terms of chronology, whether one thinks because one speaks, or speaks because one thinks. Faced with this alternative, he chooses to posit and question the anteriority of words, to posit that thought emerges from words, and he formulates his problematic as follows: "In other words, what if thought is the result of the self-organization of words?" This will be the hypothesis of his book. A strong hypothesis.
This hypothesis will be used to develop another type of language modeling in intelligent systems, one on which he himself will work [7]. Another language, therefore, which "would be like the production of a coherent discourse resulting from the dynamic exercise of constraints on a word space". We must then find out how to structure machines so that they can follow such a procedure and move in such a space.
When a human addresses a machine and asks it a question, it is to obtain an answer, information he does not possess; but this answer, says Paul Jorion, can be of two types. Either it does not surprise, and simply meets the question and the expectation of the questioner; or it surprises, and gives the impression that the machine has grasped more than the explicit statement, "that it confronts the human being with the usual overflow of desire in relation to demand".
This is to show that if we want a machine to give an impression of itself that is not mechanical, it must be structured according to certain characteristics, which, moreover, strangely point to notions found in the field of psychoanalysis. We specify, however, that this is not the case for all the characteristics. We will enumerate them succinctly.
First characteristic: the obligation for an intelligent system to have knowledge (a database); then to be able to transmit its knowledge (an output-oriented natural-language interface); then to be able to acquire knowledge (to extract knowledge from what the user or the environment transmits to it, thanks to a parser and a learning module); then to know how to question the user; then not to impose its knowledge but to negotiate it (to manage to determine the degree of adhesion of the interlocutor); and finally to have its own personality.
Certain characteristics, one observes, seem difficult to attribute to a machine; they belong to the effect Paul Jorion calls intention, that is, the system has taken an initiative. It did not behave in the usual "machinic" way; moreover, it offered the most relevant information, i.e. the finest information in relation to the context; it moved away from the stereotype. The last characteristic is of a higher level: it is the one that would give the impression that a "person", a thought and an intelligence like a human one, is there [8]. The machine would thus have a personality and would be capable of self-organization.
But how do you build such a machine?
From symbolic systems to semantic systems: a step towards associationism
First of all, it should be pointed out that the intelligent systems capable of this are not the so-called expert systems (also called symbolic systems, which is what our current computers are), but, according to Jorion, semantic systems that have become mnemonic. A mnemonic system rests on several postulates concerning language, memory and affects. We will elaborate on each of these aspects in the course of this article.
Let's focus on language first
Let us take a discourse. Either we can consider it from the side of signification [9], or we can consider it from the side of the signifier, and even as a sequential path within a space of signifiers (independently of their signification); i.e., to use Paul Jorion's words, language would be "a path traced in a lexicon understood as a list of all the words in a language". But then a question arises: if it is not the meaning of the words that matters for their association, and if it is not what they refer to in the world and in things that matters, according to what associative rules can we then articulate them, taking into account the temporal linearity of the word, the sentence or the utterance?
There are several options. The so-called monkey method: "which explores the prints of a vast combinatorium". The so-called rules method: "which gives itself a priori a set of constraints to which the route will be subjected". We are familiar with this method, since this is where we find the different constraints: syntactic (all the words of the language are divided into parts of speech); semantic, corresponding to the internal organization of the language (e.g. the verb "to think" requires as subject a noun denoting an animate being); and pragmatic, the properly dialectical dimension stating that one sentence cannot be followed by another whose meaning is unrelated to it or contradicts it, which corresponds to the subject of the Topics of Aristotle's Organon.
Then, to finish, the logical constraints. But as Jorion says: "This method requires that even before a sentence can be generated, a huge system of rules and stored metrics has to be built, which requires unacceptable processing time for simple operation."
The human brain does not work like that: it takes a human being only a few tenths of a second to produce a sentence. We must therefore consider the last method, that of the blow by blow, and it is there that we find an intuition of psychoanalysis (that of free association) concerning language and signifiers. Here there is no need to define a priori rules; we only need a principle that makes it possible to determine, once a word has been put down, which word could follow. And "one can imagine that channels and chreodes are constantly being laid down, privileged passages for getting from one word to another". Here, then, we must reason in terms of traces: as if, to go from one place to another, from one word to another, we always took the same little path, and as if, by dint of use, the vegetation being worn down, the passage from one word to another were reinforced, an ease of association ensuing [10]. Paul Jorion thinks that the path of thought proceeds in the same way: once the starting point is set, in this case a word, the path that unfolds after it is somehow already indicated, and it is the one we take rather than cutting through the forest, that is, rather than choosing a new word (poetry, for its part, tries to get out of these channels). But let us add right away that this path is not eternally present: it is the result of a singular life, a construction, a learning and a memory constituted throughout a life. To put it another way, we are not born with all these paths already laid out, but only with a structure that allows their progressive establishment, this structure being the network of our neurons and synapses (we will come back to this a little later).
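The picture of worn paths can be given a minimal toy form (our own illustration, not Jorion's code, and with an arbitrary word graph and reinforcement rate): a weighted word graph in which each traversal reinforces the arc used, so that frequent passages become ever easier to take.

```python
import random

# Toy lexicon graph: each word points to candidate successors with a weight
# (the "depth" of the worn path). Words and weights are illustrative only.
paths = {
    "the":    {"apple": 1.0, "forest": 1.0},
    "apple":  {"falls": 1.0, "is": 1.0},
    "falls":  {"the": 1.0},
    "is":     {"red": 1.0},
    "red":    {"the": 1.0},
    "forest": {"is": 1.0},
}

def step(word, rng):
    """Choose the next word with probability proportional to arc weight,
    then reinforce the arc taken (the path gets worn deeper)."""
    succ = paths[word]
    words = list(succ)
    nxt = rng.choices(words, weights=[succ[w] for w in words])[0]
    succ[nxt] += 0.5  # reinforcement: this passage becomes easier next time
    return nxt

def utter(start, length, rng):
    """Generate a 'discourse' as a path traced through the lexicon."""
    out = [start]
    for _ in range(length - 1):
        out.append(step(out[-1], rng))
    return out

rng = random.Random(0)
sentence = utter("the", 6, rng)
print(" ".join(sentence))
```

No grammar is stored anywhere: the only constraint is the set of privileged passages and their weights, which the system's own activity keeps modifying.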
And we can already see that the concatenation of signifiers is not the result of an a priori application of rules (as the grammar we learn at school might suggest) but of a more or less frequent facilitation (frayage) between two signifiers over the course of a lifetime (depending on the learning parameters that make up an individual singularity); and through this we see the direct link between moving from one signifier to another and memory, because the facilitation we have been talking about is memory.
Let us now quickly recapitulate so that we can move on.
The first hypothesis followed was:
- From the signifiers, thought emerges, not the other way around [11].
The second hypothesis was:
- What determines the intensity of the links between the different signifiers (we abandon the plane of meaning) when they make up speech, i.e. the passage from one to the other in an utterance, is an association that is neither axiomatic and a priori nor a random combinatorial one, but the creation of an ease of passage, a reinforcement, a habit of links; and in this, memory plays a primordial role.
From there, we can look at things from the point of view of associationism, for which "it is a simple gradation from recollection through reasoning to the generation of an ordinary discourse". We can think here of the term Freud used when he gave up hypnosis: "free association", about which, and we quote Paul Jorion, "the originality of the doctrine of a language of thought is to have realized that what can be studied with full scientific rigor is not the association of ideas, but the association of images, and especially in their material supposition".
We insist, because this is important: what concerns us is no longer the association of signifieds but that of signifiers that shift toward other signifiers. And furthermore we must add that if, with this theory, we are indeed inscribed within a space of words, it is not possible, as Paul Jorion says, to ignore an element that plays an essential role in the associative chains, namely the production of images, because this is what happens in human beings. Indeed, some words have the capacity to evoke an image: e.g. when we hear "apples", we hallucinate the image of one or more apples.
Accordingly, there are several possible ways of chaining: word to word, word to image, image to word, image to image. And this again under two regimes: that of the unconscious (intuitive and automatic) and that of the conscious. Paul Jorion again distinguishes the different types of associative sequences, which can be material (acoustic, graphic) or semantic (synonymy, inclusion, simple connection, translation, etc.), but he adds that, since the associative sequences are not only a matter of the material, "it may be better to dispense with this mechanism for intelligent system modelling". Does this mean that we would be going too far in mimicry, because it would be too complicated for the moment to translate into a machine structure? We do not know, but Paul Jorion concludes provisionally that, for the languages familiar to us (he shows elsewhere in the book how, for the Chinese language for example, it works differently), the associative linking required here as a model for intelligent systems reflects relations of inclusion, attribution and synonymy expressed using the "being" copula, and of "simple connection" expressed using the "having" copula, which we will have to try to implement in the machine.
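The link types retained here can be sketched in a few lines (a hypothetical mini-implementation of ours, with invented example triples): each associative link is tagged with the copula that carries it, "is" for inclusion, attribution and synonymy, "has" for simple connection.

```python
# Minimal sketch (ours, hypothetical) of Jorion's retained link types:
# inclusion, attribution and synonymy carried by the copula "to be",
# simple connection carried by the copula "to have".
links = [
    ("apple", "is", "fruit"),    # inclusion
    ("apple", "is", "red"),      # attribution
    ("apple", "has", "a stem"),  # simple connection
]

def associate(word, kb):
    """Return the statements derivable from a word by following its
    'is'/'has' links — a first, crude associative chaining."""
    return [f"{s} {copula} {o}" for (s, copula, o) in kb if s == word]

for statement in associate("apple", links):
    print(statement)
```

Everything semantic about the link is reduced to which copula it uses; no definition of "apple" is stored anywhere, only its passages to other signifiers.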
The associationism discussed here, which can serve as a model for constructing a new type of intelligent system, deals, however, with only one aspect of the problem, because we still have to think about the environment in which it can be deployed; and so we move on to the problem of structure, of space, of the topology that will have to be translated into mathematical objects.
But what is this environment?
Evolutionary structure (P-dual of a graph).
First of all, it must be a mnemonic network [12], that is to say it must store signifiers in the most economical way; but it must be added that at the beginning, since we are trying to build machines that imitate human intelligence, this network must also, paradoxically, not exist "too much": it must not already be formed and complete, as with a very small child for whom what is stored in memory is still quite small. And the question arises of the inscription of a first mnemonic trace that will act as a seed, because the network will evolve, learn and change; otherwise it would be like an expert system, like our computers, which is precisely what we are trying to get away from. And it is on the channel that we are going to make demands in order to achieve this.
The channel (the passage from one signifier to another) should no longer follow the sequence "vertex/arc/vertex" but "arc/vertex/arc". This transforms the semantic network into a mnemonic network [13]. The transformation is made possible by a new mathematical object, the "P-dual" of a graph. We have not mastered this object and therefore refer the reader to other works if they wish to go further into this aspect.
But in operational terms this allows two very important things. Firstly, the delocalization of signifiers: we no longer think of a signifier's situation in one place, but of its situation between such and such places. Let us add all the same that the phenomenon of delocalization cannot be complete, that is to say that no representation can be entirely delocalized. And secondly, distribution, which allows the signifier to be inscribed in a multiplicity of associative sequences, an inscription that will not carry the same weight in each case. Paul Jorion gives this example: "the signifier 'apple' is weighted differently if it appears between 'plum' and 'pear' and if it appears between 'Eve' and 'Adam'." The affective load can differ, and translated in terms of membership this means that the insertion of a signifier in this or that associative chain does not receive the same intensity: there are associative sequences we accept to question and others for which it is much more difficult. Paul Jorion describes the former as "knowledge", and as far as they are concerned we can modify them without too much violence. This, for example, is what science does on a daily basis when it issues statements of truth which it subsequently modifies, when the previous theory is invalidated or has become less effective in its power of generalization compared with the new one. On the other hand, there are associative sequences, observable in human beings, that are very costly to question, sometimes even impossible to question; Paul Jorion calls these sequences "beliefs", and we quote: "belief, on the contrary [unlike knowledge], is of central importance and can only be modified in a 'catastrophic' manner: through conversion, which must then be considered as a modification of the connections existing between the elements that are chronologically the first."
Conversion is, of course, observed in human beings, usually at the cost of considerable energy expenditure, in what Freud calls "Nachträglichkeit", the afterwardsness of such restructuring.
Thus, if we want to build an artificial intelligence that imitates human intelligence in its effects, all of this must be taken into consideration and modelled in the machine's structure. A memory network, as Paul Jorion calls it, must have, beyond what the so-called "expert" computer systems offer, two features: the ability to learn, and the ability to negotiate with the user on the basis, as we have just discussed, of the degrees of adherence the user gives to their statements. The machine must therefore be capable of somehow "perceiving" the emotional charge and the degree of rootedness of an utterance in its interlocutor's memory network; only then will it be able to give the human being the impression that a similar intelligence (even if mechanical), or indeed a person, stands before them. At the very beginning of the article we stated that the intelligent system must have something "like" a personality. It must therefore also have a model of the human psyche. This is why psychology and psychoanalysis must meet computer science. This is what Paul Jorion attempts.
Now let's retrace the path taken here before opening up to what we were announcing, namely neural networks.
Personality effect: open structure and facilitation
The new artificial intelligence must produce an effect of intelligence, of personality. To do this, it must imitate the human being, who always remains (under more or less normal and serene conditions, let us say) an open, self-learning system that modifies itself in contact with others and the world, but who has a character, and therefore also a kind of structural core that is not easily modified. Both aspects, evolutionary and fixed, will have to be simulated.
The machine's memory network must have a structure (the P-dual of a graph) into which signifiers are added as the machine is exercised; these in turn can modify the associations already drawn between the signifiers present, as well as modify themselves.
Moreover, this very open memory structure (incomparably more open than that of expert systems) must, in order not to go "in all directions" as Paul Jorion says, be "domesticated". That is to say, the discourse generated cannot, at each of its bifurcations, be the result of a random choice. It must be "informed", "motivated". There must be, within the space of the lexicon, a sub-space of privileged paths, and this motivation of choice will operate according to two parameters [14].
First, according to affect. Paul Jorion takes Freud's theory of affects as a model for giving the mnemonic network its singular structure, inasmuch as it is this affective charge that guides the passages from one signifier to another, and is responsible for the dynamics and the more or less profound inscription of the signifier in the mnemonic network, and thus for the network's own structure. For Freud, any recording of a percept (visual, auditory) passes through the limbic system, which gives it a small affective charge and makes it inscribe itself more or less strongly in us (think of Proust's madeleine and the associative sequence that followed, in the form of a novelistic statement, decades later). For Freud: "Memory is represented by the facilitations between neurons... facilitation depends on the quantity of excitation that passes through the neuron during the process, and on the number of times the process is repeated."
Thus, we may think that it is the impressions that have acted most strongly on us that determine us the most, that make us say one thing rather than another, that make us associate one signifier with another. It will therefore be necessary to give an affective weighting to the choice at a bifurcation within the machine, and to transpose what Freud calls "Bahnung" (facilitation, frayage) in the human psyche into a Hebbian reinforcement [15] in the machine.
Thus, we would store two values on each arc of the machine: an impedance, corresponding to the affect value, and a resistance, which would be the inverse of the facilitation; in doing so we would associate with the arc not a single value but a vector. However, with affect, what comes into play is perception, and therefore the representation of the world through the organs and tissues of a body [16]. The machine will therefore have to equip itself with an interface in touch with the phenomena of the world, so that it is no longer just a language machine. This will be the case with neural-network-type machines, and we are thinking of the perceptron among others.
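This arc-as-vector idea can be sketched as follows (a reading of ours, with invented names, numbers and update rule, not Jorion's specification): each arc carries an impedance fixed by affect at inscription, and a resistance that drops, Hebbian-fashion, each time the arc is traversed.

```python
from dataclasses import dataclass

@dataclass
class Arc:
    """An arc of the mnemonic network carrying a vector of two values:
    an impedance (the affective charge deposited at inscription) and a
    resistance (the inverse of the facilitation)."""
    impedance: float   # affect value, set when the trace is inscribed
    resistance: float  # decreases each time the arc is traversed

    def traverse(self):
        # Hebbian-style reinforcement: repeated use lowers the resistance,
        # i.e. increases the facilitation (Bahnung) of this passage.
        self.resistance *= 0.9

    def ease(self):
        # A crude score (our choice) for how readily this arc is taken:
        # strong affect and low resistance both favour the passage.
        return self.impedance / self.resistance

arc = Arc(impedance=2.0, resistance=1.0)
before = arc.ease()
for _ in range(10):
    arc.traverse()
assert arc.ease() > before  # the worn path is now easier to take
```

Affect (impedance) is fixed by the circumstances of inscription, while facilitation (the inverse of resistance) is a pure effect of repetition; the vector keeps the two dynamics distinct.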
Interfaced machines and neo-engineering
In 1989, in his book Principles of Intelligent Systems, Paul Jorion developed another approach to artificial intelligence. Rather than sticking to systems entirely programmed in advance and fixed by means of logical structures, he proposed to make these machines neo-genic machines able to self-organize, to learn, to negotiate their knowledge, in short to have something like a biography and a personality; and this according to principles borrowed both from the theory of language (abandoning the problem of meaning, proceeding blow by blow, and the postulate that thought emerges from the self-organization of words [17]), from Freud's, Lacan's and Klein's theory of the psyche (associationism together with the idea of the "free association" of signifiers in the theory of the unconscious, and then the affective charge that structures the memory matrix through facilitation), and also from an anatomical basis (the structure of the cerebral cortex, neurons and synapses, whose mathematical formalization is the P-dual).
On the basis of this approach, what we now call neural networks or evolutionary networks would develop, which we will now briefly present. However, it is not certain that what had so struck (and pleased) us when reading Jorion's book, namely the presence of the psychoanalytic dimension in artificial intelligence, that of affects and the unconscious, was taken up and reworked within neural networks, for it would seem that a shift toward the biological occurred. The model of the dynamics of affects seems to have been pushed aside in favour of the biological. In favour, perhaps, of a bio-reductionist tendency?

Biomimicry but reductionism?

Human structure modeling and implementation in machines
The problem here is still that of intelligence and the imitation of its effects by the machine, but considered a little differently.
On one side, therefore, neurons, glial cells, capillaries, blood, synapses; on the other side, mineral material, conductors or semiconductors, electricity, and algorithms.
A very brief comparison, but one that places the elements present, or those related to them, in an awkward position, because a priori there is little connection between the two except through modelling. It is thus modelling that made the transition from one to the other possible, from human to machine, by means of what is known as the biomimicry of "neural networks". The researchers took the structure of the brain as their starting point and built a model of it, focusing on two aspects in particular [18]. In the "Que sais-je?" volume on neural networks written by F. Blayo and M. Verleysen we find this definition: "Neural networks are a metaphor of (modelled) brain structures: assemblages of elementary constituents, each of which performs a simple processing but which together bring out global properties worthy of interest.
The whole is a highly interconnected parallel system
The information held by the network is distributed across all the components, not located in one part of memory in the form of a symbol. Finally, a neural network is not programmed to perform a task, but is trained on acquired data, thanks to a learning mechanism that acts on the constituents of the network."
Once this high-level modelling [19] was done, they then tried to translate it into algorithmic terms [20]. But to do this, it was necessary to postulate that human intelligence is computation, i.e. that reason and human thought reduce to calculation, which Paul Jorion had refused when he tried to model the unconscious part of human thought, the part of its desire and of what concerns affect, in order to implement it in the machine. Thus the machine takes the structure of the human cortex as its structure, and at the same time it is assumed that when humans think, what they do is calculate. There is thus a kind of movement of influence, a back and forth, from the machine to the human and from the human to the machine. We can find an origin for such a postulate in the words of the 17th-century English philosopher Thomas Hobbes; in his Leviathan we read: "Reasoning is but the conclusion of an addition of parts to a total sum, or of the subtraction of a sum from another to a remainder... These operations are not peculiar to numbers; they concern all kinds of things that can be added to one another. [...] In short, wherever there is room for addition and subtraction there is also room for reason... reason is in this sense only the calculation of the general names agreed upon to mark and signify our thoughts; I say mark them when we think for ourselves, and signify them when we demonstrate our calculations to others." [21]
So here's pretty much what the researchers are working on: neurons, synapses and computation.
We will now briefly review the important steps that have allowed the field of artificial intelligence to take shape and give rise to the neural network.
First there is the work of Herbert Spencer when he shows how a nerve structure controls a muscle.
Then there is the concept of associative memory [22] which shows that the frequency of the conjunction of events tends to strengthen the links between their cerebral representations and that the recall of one idea tends to call for another.
Then there is the contribution of D. Hebb, which we have already discussed, who interprets learning as the dynamic modification of synaptic weights.
Then again, the "all or nothing" law discovered in the 1920s by Edgar Douglas Adrian [23], which shows that a neuron fires only if a threshold is reached. That is to say, even if a neuron is stimulated there may be no action potential, no output signal, because the stimulation (in frequency) was too weak. This discovery is particularly important in that it allows biology to be linked to logic, a passage to be made between the two, since this "all or nothing" law is similar in form (transforming a continuous physiological process into a discontinuous, binary one) to the truth tables of propositional logic (which is binary), and therefore formally close to the tools of mathematical logic. In 1943 an article was published under the title "A logical calculus of the ideas immanent in nervous activity" [24].
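The passage from the all-or-nothing law to logic can be illustrated with the threshold unit of McCulloch and Pitts's 1943 article: the unit fires (1) if and only if the weighted sum of its binary inputs reaches a threshold, and by choosing weights and threshold it computes logical functions (the weights below are one standard choice, not the only one).

```python
def mcculloch_pitts(inputs, weights, threshold):
    """All-or-nothing unit: fires (1) iff the weighted sum of its binary
    inputs reaches the threshold — a continuous physiological quantity
    turned into a discontinuous, binary event."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND: both inputs are needed to reach the threshold.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
# Logical OR: either input suffices.
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```

This is exactly the formal bridge mentioned above: a physiological threshold behaves like a truth table.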
These elements are at the basis of connectionism [25], itself situated (on an epistemological level) at the intersection of neurobiology, psychology, and the development of mathematical logic [26]. The goal was to match a modelled biological structure to a binary logical structure.
This crossing (so to speak) gave rise to machines called "neural networks".
These, unlike previous systems (computers built according to the von Neumann architecture), "do not assume that one knows the solution to the problem to be addressed, specifically, the procedure to be followed to reach or approach it" [27]. "Neural networks have a different framework: the representation of data is indirect and distributed through a set of connections and synaptic weights. Self-organized, the network is capable of high-level representations (concepts)." To conclude, we can say that "the information processing capacity is dynamic, through modulation, creation or destruction of neural links (...)", and that these networks "adapt and learn, know how to generalize, group or classify information, and this solely because of their structure and dynamics, but for the time being they must be considered as a complement to traditional methods" [28], because we do not know how to trace back the chain of their reasoning, nor exactly why they give this or that answer. In a way, we cannot understand them; they escape us.
Let us now quickly present the structure of these neural networks.
Neural network
First of all, a neural network is a topology. It is the placement of elements within a space, and it is this space, its configuration, that determines the potential of the network. Thus a feed-forward network and a recurrent network will not have the same possibilities; nor will a single-layer network and a multilayer network. Representations of these networks can be found on the Internet if one wants to get an idea. Here, to illustrate our point, we will take the perceptron, one of the first models [29]. It consists of sensory units responsive to various physical stimuli, association units (incoming and outgoing connections), a response unit (which generates a response outside the network), and an interaction matrix which defines coupling coefficients between the different units.
The particularity of these systems is that they are learning systems, "that is to say that external stimuli induce - through various mechanisms - internal transformations [30] modifying the responses to the environment". It should also be added that the units of meaning of these systems ("neurons/synapses") function according to three modalities: competition, cooperation, and adaptation.
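As a hedged sketch of this supervised learning process, here is the classic perceptron update rule, in which the error on each example modifies the coupling coefficients. The OR example, the learning rate and all names below are our own illustrative choices, a simplification of Rosenblatt's model.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Supervised learning: the error on each example nudges
    the weights (the 'synaptic' coupling coefficients)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - out           # the external stimulus induces...
            for i in range(n):           # ...an internal transformation
                w[i] += lr * err * x[i]
            b += lr * err
    return w, b

# Learning the OR function from labelled examples
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(samples)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
print([predict(x) for x, _ in samples])  # [0, 1, 1, 1]
```

The human presence is visible in the `target` labels: this is exactly the supervision that the next section's evolutionary networks aim to remove.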
But the problem with these systems is that they still need to be supervised during the learning process. In a way, they still need to be "trained" by a human being; the next step is unsupervised learning networks, the so-called "evolutionary" networks we announced at the very beginning of this article.
Evolutionary networks
These evolutionary networks no longer need to be supervised while learning. Here, adaptation and self-organization are seen not so much from the point of view of learning itself as from a Darwinian perspective, from the point of view of the genetic code. According to the theory of evolution, living organisms have adapted to their environment through the modification and recombination of their genetic make-up. And this is what computer scientists are now trying to model and implement in the machine. We see that the epistemological framework has shifted since we started this article.
At the beginning, with Paul Jorion, we had a multidisciplinary framework in which even psychoanalysis (which had interested us enormously) was taken into account; then, with neural networks, this dimension disappeared and only learning remained, understood as synaptic weighting, recurrence and back-propagation of the gradient; and now, with evolutionary networks, we find ourselves in a purely biological and genetic framework (this is made possible because the genome is represented as information, i.e. the units of meaning of the genome are treated like the units of meaning in computer science; we fold one over the other - that is one perspective, but there could be others). Thus, what we are observing is a dynamic akin to biological reductionism. But before we finish, let us outline the principles that govern evolutionary networks.
Principles of evolutionary networks: neotenia, randomness and code self-generation
"The basic idea is to construct, on a random and/or heuristic basis, a set of potential solutions to a given problem. » 31] What is at issue here is the expression: "a set of potential solutions", as if we were building up a reserve of solutions with a view to sorting them out gradually, as if the approach to the result, which is progressive, were to be carried out through the interplay of mutations" [32].
Usually, we go directly to the solution; the very intention is to look for "the solution". But here the intention is different. We begin by randomly generating a population, and the "genetic material" of this population - which in this case is coded as a string of bits rather than nitrogenous bases - represents a set of potential solutions to the problem posed. Then, once these individuals (randomly constituted bit strings) have been generated, a score (adaptation level) is calculated for each one. If the objective is reached, the algorithm outputs its result. Otherwise, reproducers are selected according to their scores (a kind of genetic selection, in which we may hear echoes of eugenics and of a decrease in biodiversity). Then descendants are constructed by the application of different genetic operators (crossover, mutation). Finally, the population is replaced by its descendants, and the cycle begins again.
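The cycle just described - random population, scoring, selection, crossover, mutation, replacement - can be sketched as a tiny genetic algorithm. The target bit string and every parameter below are purely illustrative assumptions, not taken from the cited sources.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # the "problem" to solve (illustrative)

def fitness(ind):
    """Score (adaptation level): bits matching the target."""
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop_size=20, mutation_rate=0.05, max_gens=200):
    # 1. Randomly generate a population of bit strings
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):        # objective reached: output
            return pop[0], gen
        parents = pop[:pop_size // 2]             # 2. select reproducers by score
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))            # 3. crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)  # 4. mutation
                     for bit in child]
            children.append(child)
        pop = children                            # 5. replace by descendants
    return max(pop, key=fitness), max_gens

best, generations = evolve()
print(fitness(best), "/", len(TARGET))
```

Note that nothing in the loop "knows" the direction of the solution in advance; the population's diversity and the scores alone drive the progression.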
We see that evolutionary algorithms "work by taking advantage of the diversity of a population to move towards the desired solution. Initially you have no way of knowing which direction to go in, and you build a random population" [33]. What these evolutionary algorithms make possible is the generation of computer programs, and surprisingly, it is not uncommon for the algorithms discovered to be at least equivalent to those built by humans. The most astonishing case is the rediscovery of Kepler's third law from data on the movement of the planets, a rediscovery that, during the progression of the algorithm, passed through one of the German scientist's initial conjectures.
In the future with these networks, computer programs will be more and more automatically generated and less and less constructed (if the process does not encounter a limitation that could not yet have been taken into account).
To conclude, we can say that evolutionary networks are neural networks, but focused on three main aims. First, substituting a genetic algorithm (a pool of potential solutions made up of individuals consisting of bit strings) for the synaptic weights of the learning problem. Second, replacing manual trial-and-error procedures (human presence, supervised learning) with unsupervised learning, again thanks to the genetic algorithm. And third, evolving not the parameters themselves but the coding of the parameters, which amounts to searching for the adaptation rule best suited to adaptation itself; with this, we take a further step back.
Thus we come to the end of this journey, which has taken us from Paul Jorion's book and his gesture - the convocation of psychoanalysis in the world of artificial intelligence to open intelligent systems to self-organization and learning according to the affect/memory couple, that is, to ensure that the systems possess a personality and a biography - to a situation where the biological-genetic model prevails, itself underpinned by the information paradigm.
So, of course, we would still have to talk about this information paradigm, which was the great absentee from this article, and perhaps not for nothing.
Juliette Wolf
Many thanks to the magazine Temps marranes n°30, in which the original version of this article appeared.
[1] Deep learning comes under the heading of "machine learning" or "statistical learning", a field of study concerning the design, analysis, development and implementation of methods that allow a machine (in the broadest sense) to evolve through a systematic process, and thus to perform tasks that are difficult or impossible to perform by more classical algorithmic means. The algorithms used allow, to a certain extent, a computer-controlled (possibly robotic) or computer-assisted system to adapt its analyses and response behaviours based on the analysis of empirical data from a database or from sensors. There are several learning modes: supervised, unsupervised, and semi-supervised learning.
[2] In October 2015, the AlphaGo program, having learned to play the game of go by the deep learning method, beat the European champion Fan Hui by 5 games to 0. In March 2016, the same program defeated the world champion Lee Sedol 4 games to 1. Information extracted from a Wikipedia article.
[3] We take up Bernard Stiegler's tripartite distinction, for whom three periods can be identified in the history of the evolution of the man-machine relationship: the time when machines began to "do" for us, which for the author is the stratum of "know-how" (savoir-faire), for example Vaucanson's loom; the time when machines began to replace us in our "knowing how to live together" (savoir-vivre), for example with television; then the time when machines began to replace us in our thinking activities, the last stratum, that of noesis, of theoretical knowledge, which we would be reaching today with machines of the "learning machine" type.
[4] Principles of Intelligent Systems. Paul Jorion. Éd. du Croquant, 2012.
[5] We find this notion of effect in Baruch Spinoza (1632-1677), philosopher and grinder of lenses for glasses and microscopes.
[6] This is not the genome but a new way of generating code: it is no longer the human user but the machine itself that, by means of a random combination of binary populations, generates "individual solutions", i.e. an efficient algorithmic output.
[7] We refer to his work on Anella: Associative Network with Emerging Logical and Learning Abilities.
[8] The series Real Humans (the first season) was a good way to address this issue. How will humans relate to their robots, which will now have a human appearance and an intelligence identical - or even superior - to their own? A master-slave relationship? A relationship of equality? Of fear? Rejection? Jealousy?
[9] Paul Jorion says about meaning: "The problem is not that we do not understand how this thing we call meaning works, but rather that we do not know what it is. In other words, we don't know what the word means, because if meaning is the thing that the word refers to, there are few words that have meaning." For example, with the word freedom: to which thing does "freedom" refer? It is not easy to determine; we see that it is more a question of definition, of a subnetwork of the memory network, and that it is a convention.
[10] This ease of passage is what researchers into intelligent systems of the neural-network type call progressive reinforcement, by self-modification of the synaptic weight (recurrence), which is closely related to the problem of learning and then to that of self-organization.
[11] We quote Paul Jorion: "My words surprise me and teach me my thinking." Thought would thus be only a reconstruction based on the words spoken.
[12] This will give the term "neural network" for the most recent systems.
[13] In a semantic network, the signifiers are placed at the vertices, for example "a parrot", and the relations - for example "is a" - are placed on the arcs (arcs are a kind of link). It is the opposite in a mnemonic network: the relations are at the vertices and the signifiers on the arcs.
[14] The first parameter, affect, we discuss in this article; if we do not discuss the second, it is not because we have forgotten it, but because we have not really understood what it is about. Paul Jorion talks about gravity, in the sense of gravitation, of what attracts downwards. So we leave the reader to go and see for themselves. We are aware that there is a lack here.
[15] Hebb (1904-1985) was a Canadian psychologist and neuropsychologist. He attempted to find an alternative to behaviorism, and in doing so emphasized synaptic reinforcement through simultaneity. What is interesting is that Freud had already postulated it: "There is a fundamental law of association by simultaneity [which] provides the basis for all connections between psi neurons. […] The charge is equivalent to facilitation, relative to the passage of quantity (Q'n)." In Naissance de la psychanalyse, Paris, PUF, 1979.
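Hebb's principle - units that are active simultaneously strengthen their connection - is often summarized by a rule of the form Δw = η·x·y. A minimal sketch (the function name, values and learning rate are our own illustrative choices, not Hebb's notation):

```python
def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: the weight between two units grows only
    when pre-synaptic (x) and post-synaptic (y) activity coincide."""
    return w + lr * x * y

w = 0.0
for x, y in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_update(w, x, y)
print(round(w, 2))  # 0.2 -- only the two simultaneous activations reinforced
```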
[16] Spinoza showed how affect is like a split fibre. We quote him: "By affect I mean the affections of the body by which its power to act is increased or reduced, aided or restrained, and at the same time as these affections, their ideas." Affect is both a bodily event (affection) and the awareness of that event.
[17] Cf. Wittgenstein.
[18] The synapse is a functional junction between two neurons; in topological terms it is a small inter-membrane gap. It allows the passage of neurotransmitters (for chemical synapses) that translate the action potential of the afferent neuron into an action potential of the efferent neuron (a potential that can become zero or even be reversed). Synapses therefore play a major role in the coding of nerve information.
[19] It is a high-level modelling because it is situated at the level of the cell as a whole (the neuron and its synapses), inserted in the network of the cortical assembly, and not, for example, at the level of ion exchanges along the membrane. But we could have made that choice.
[20] Perhaps we must wait a little longer for a more pronounced hybridization to become concrete, a convergence that would tend towards a proportional homeostasis between biological and "inorganic-mineral" carriers.
[21] Neural networks, an introduction. J.P Rennard. Ed Vuibert (2006).
[22] W. James (1842-1910).
[23] E.D. Adrian is an English physician and electrophysiologist.
[24] W. McCulloch and W. Pitts.
[25] We specify that connectionism is only one of the forms of biomimicry applied to artificial intelligence. There is also, for example, the "animat" ("animal-like") or distributed A.I. approach.
[26] Influenced by Whitehead and Russell, for whom mathematics had to be re-founded on logical foundations alone; cf. their Principia Mathematica, published in 1910-1913.
[27] Neural networks, an introduction. J.P Rennard. Ed Vuibert (2006).
[28] Ibid.
[29] Wikipedia
[30] Neural networks, an introduction. J.P Rennard. Ed Vuibert (2006).
[31] Ibid.
[32] Ibid.
[33] Ibid.


