Prof. Dr. med. Wolfgang Kromer
Specialist in Pharmacology and Toxicology; Specialist in Clinical Pharmacology

 



IN THIS PLACE (SEE BELOW!) YOU WILL FIND, FROM TIME TO TIME, A CRITICAL COMMENTARY ON A CURRENT TOPIC

THE FAIRY TALE ABOUT "INTERNET CONSCIOUSNESS"

When I was young, I loved fairy tales. But when I grew up, I learned that fairy tales may produce profound misunderstandings about the real world, sometimes with unpleasant or even fatal consequences. One such fairy-tale message was the pronouncement by Johann Grolle (1) that human brain organoids, implanted into the rat brain, may develop their own, genuine human consciousness, living in the rat brain but nonetheless independent of it ("Me in the rat"). A real horror scenario! I have disputed this nonsensical scenario in my contribution to FORUM WISSENSCHAFT (2).

Today's topic is the no less imaginative speculation that the internet may become conscious, a widespread vision published, for example, by Jeff Stibel in 2014 (3). Joel Wille (4) struck the same note in 2017 when he speculated about the reasons why the internet may in fact develop consciousness. And Felix Stadler (5) asked in his 2012 feature article: "Träumt das Internet?" ("Does the internet dream?")

The counterarguments are always the same, and they have been discussed in detail in two publications (6; 7). In short, the crucial point is that consciousness needs an individual in order to emerge: there must be someone to whom something can become conscious. Moreover, such an individual must be able to discriminate between its own body and its environment. That is to say, consciousness requires embodiment (8), which in turn depends on a complex sensorium that perceives information from inside and outside one's own body, is able to discriminate between the two sources of information, and represents them comparatively in the neuronal networks of the brain.

However, neither the human brain organoid nor the internet has a body of its own that it could experience as opposed to its environment. Consequently, and no less important: neither has at its disposal any sensorial infrastructure equipped with sensory organs such as eyes, ears, nose, the organ of equilibrium, or somatovisceral sensitivity. While the sensorium of the rat of course generates genuine rat consciousness, the internet has no such equipment at all. Even the architecture of the internet is far removed from that of the brain, quite in contrast to what has been argued (4). Neither the organoid nor the internet is an individual to whom something could become conscious. The organoid is no more than a poorly organized assembly of cells, and the internet is a collection of countless facts, "alternative facts" and most diverse opinions, lacking any personal entity. It is the embodiment that counts, in conjunction with the comparative representation of the inside versus the outside world. Try introspection and you will easily recognize that the experience of your environment is always paralleled by the experience of your own body. It is no mystery!

The storytellers should take this into account! But beware of a confusion: the completely unrealistic hypothesis of a "conscious internet" must not be mixed up with the internet viewed as a "mirror of the collective consciousness", two fundamentally different notions.

1) Grolle, J.: "Ich in der Ratte". DER SPIEGEL No. 18/2018, p. 102.

2) Kromer, W.: "Verirrungen in der Bioethik". FORUM WISSENSCHAFT 3/18, p. 49.

3) Stibel, J.: "Will the internet become conscious?" BBC, November 18, 2014.

4) Wille, J.: "Warum das Internet tatsächlich ein Bewusstsein haben kann". welt.de, 29.12.2017.

5) Stadler, F.: "Digitales Bewusstsein: Wovon träumt das Internet?". Berliner Gazette, 27.08.2012.

6) Kromer, W.: "Is consciousness a mystery? A simplified approach to pinpoint the basic nature of consciousness". Journal of Psychiatry and Psychiatric Disorders 2022; 6, 196–202.

7) Kromer, W.: "The Amazing Gap in Consciousness Research". American J. Neurol. Res. 2025; 4 (1), 1–3.

8) Krauss, P., and Maier, A.: "Will we ever have conscious machines?". Frontiers in Computational Neuroscience 2020; 14, 556544.

Previous contribution:



ARTIFICIAL INTELLIGENCE is currently a hot topic. However, a contribution about MIND READING by NEURAL DECODING deserves a closer look:

The magazine DER SPIEGEL (issue 23/2024, pp. 88–91) offered an imaginative interview (passages translated by the author) with the "neuroethicist" Prof. Dr. Marcello Ienca, regarding mind reading by artificial intelligence (AI). The rather obliging interview was conducted by the SPIEGEL journalists Johann Grolle and Claus Hecking. Central but surprisingly uncritical statements left me speechless, e.g.:

QUESTION: "Will it be possible to read out our intentions by a brain scan?"

ANSWER: "Quite possible. Dreams will probably become decipherable as well" and "Dictatorships could hack the brains of prisoners or political opponents, extract their thoughts [...] I guess: By brain manipulation, this will become technically possible in a few decades."

Thus far Marcello Ienca's vision of the future. While at this point, at the latest, some critical follow-up questions were urgently needed, these statements remained unquestioned. An absurd scenario, if one considers the experimental background, for example the publication by Jerry Tang et al.: "Semantic reconstruction of continuous language from non-invasive brain recordings" (1). The authors write:

"To compare word sequences to a subject's brain responses, we trained an encoding model that predicts how that subject's brain responds to phrases in natural language. This model can score the likelihood [!] that the subject is hearing or imagining a candidate [!] sequence by measuring how well the recorded brain responses match the predicted [!] brain responses." (exclamation marks inserted)

To put it very simply: Using high-resolution fMRI (functional magnetic resonance imaging), brain activity is measured during hour-long sessions and assigned to known [!] contents of consciousness. Thereafter the game is essentially turned around: the subject imagines the respective content, which is then identified from the fMRI signals, however only with a certain probability! This has nothing to do with "reading out" unknown thoughts from subjects the decoder has not been trained on. Yet, without critical contextualization, readers naive to this field of knowledge will misunderstand it in just this wrong way. With his imaginative vision of the future, Marcello Ienca himself leads the readership onto the wrong track. After all, DER SPIEGEL has presented him as an expert, so in the reader's understanding his statements must be true!
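The logic just described (an encoding model predicts a brain response for each candidate sequence, and the candidates are then ranked by how well the prediction matches the recording) can be caricatured in a few lines of code. The following is a deliberately crude toy sketch with random, made-up numbers, not Tang et al.'s actual model; all names and dimensions are invented. It only illustrates that such a decoder selects among known candidates rather than reading out arbitrary thoughts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoding model": maps semantic features of a
# candidate word sequence to a predicted brain-response vector.
# (In the real study this is fitted per subject on hours of fMRI data.)
W = rng.normal(size=(8, 20))  # toy stand-in for learned, subject-specific weights

def predict_response(features):
    """Predicted brain response for one candidate sequence."""
    return features @ W

def score(candidate_features, recorded):
    """Likelihood-style score: similarity between the predicted and the
    recorded response (here simply the Pearson correlation)."""
    return np.corrcoef(predict_response(candidate_features), recorded)[0, 1]

# Toy demo: the "true" sequence the subject imagined, plus measurement noise ...
true_features = rng.normal(size=8)
recorded = predict_response(true_features) + rng.normal(scale=0.1, size=20)

# ... competes against random candidates; the decoder can only rank them.
candidates = [true_features] + [rng.normal(size=8) for _ in range(4)]
scores = [score(c, recorded) for c in candidates]
best = int(np.argmax(scores))
print(best)  # index of the best-matching candidate
```

Note that such a decoder can only rank candidates it is given, using weights fitted to one particular subject. A thought outside the candidate set, or a recording from a subject the weights were not trained on, yields scores near chance, consistent with the cross-subject finding of Tang et al. (1).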

In the individual context, contents of consciousness (2) such as spontaneous ideas, intentions, dreams, intimate thoughts and memories are represented in the brain in an extremely complex way, and quite differently from subject to subject. fMRI signals or EEG potentials are, despite any high-resolution technique, relatively blurry sum effects, and their meaning is, above all, not transferable from one person to another.

Tang et al. (1) address this problem as follows: "An important ethical consideration for semantic decoding is its potential to compromise mental privacy. To test if decoders can be trained without a person's cooperation, we attempted to decode perceived speech from each subject using decoders trained on data from other subjects. For this analysis, we collected data from seven subjects as they listened to five hours of narrative stories [...] Decoders trained on cross-subject data performed barely above chance, and significantly worse than decoders trained on within-subject data. This suggests that subject cooperation remains necessary for decoder training".

To take an example, consider the word "table": it may generate strikingly different signals in the brain scan depending on whether the subject was tortured on a table or had a nice dinner at that table, in a relaxed atmosphere with good friends. The associations of pictures, sounds, words, pain or joy etc. will contrast sharply between the two situations, and all of this will of course contribute to the overall brain signal. This holds true not only between subjects, but also when one and the same subject repeatedly speaks or imagines the same word in a different context! The problem that arises has nothing to do with the technique of "neural decoding" but simply with the semantic content of the scanned sequence. Therefore, improvements in scan technology will not change anything about this particular point, which readily explains the finding of Tang et al. (1) when they used cross-subject data.

Marcello Ienca and his interviewers sweep this most critical point under the table! So what shall we do, after this relatively clear-cut message, with the countless people for whom no individually trained decoder exists? How, then, should despots "read out" their inescapably diverse contents of consciousness? I am afraid this Nobel Prize will remain (thank God) locked away! The "neuroethicist" Marcello Ienca need not suffer from too many worries!

But even if one day, thanks to improved technology and contrary to expectations, a few specific signals from the "brain scan" should prove interindividually stable and interpretable, one can easily imagine the error rate of such mind reading. After all, this requires less imagination than Marcello Ienca's vision of the future.

However, why should that bother the magazine DER SPIEGEL? A horror story sells better anyway than critical and rather dry science! Not a new experience, really! (3)

(1) Tang, J., et al.: "Semantic reconstruction of continuous language from non-invasive brain recordings". Nature Neuroscience 2023; 26, 858–866.

(2) Kromer, W.: "Is consciousness a mystery? A simplified approach to pinpoint the basic nature of consciousness". Journal of Psychiatry and Psychiatric Disorders 2022; 6 (3), 196–202.
https://www.fortunejournals.com/articles/is-consciousness-a-mystery-a-simplified-approach-to-pinpoint.pdf

(3) Kromer, W.: "Verirrungen in der Bioethik". Forum Wissenschaft 2018; Nr. 3, 49–50.
https://www.bdwi.de/forum/archiv/themen/gesund/10678222.html