Last year, attracted by the irresistible offer of Shanghai University and the Ninth Nishan Forum on World Civilisations, I made the brief but tiring trip to China to lecture. Even poor health and retirement could not stop me: firstly, I have valued all my human and professional experiences there over the last decade (cut short as it was by COVID-19), and secondly, I could not refuse an invitation to a congress held in honour of Confucius, in his very birthplace. He is perhaps the philosopher who came closest to grasping the general concept of what would later become museums and other memory institutions.
I said a lot, and would say it better now, but I will nevertheless try to share here some of the slides I presented in both places. Here is one.
China has lately been firmly committed to making great advances in new technologies, especially AI, and I wanted to pass on Stephen Hawking's warning from his book Brief Answers to the Big Questions, published a few months after his death in 2018. The book offers many variations of the quote on the slide.
Reviewing summaries of and commentary on the book, it occurred to me that, although he is extremely convincing and detailed about how wary we should be of developments on the planet, he seemingly did not mention bioengineering (BE), which deserves to be treated as an equal danger among them. The term loosely covers a broad set of technologies, including genetic engineering, biochemical engineering, and bioprocess engineering, with biomedical engineering being one of the more frightening sub-areas of danger.
Why do I put the two into the same context? It is striking that AI competes with the human mind, whereas BE tries to control and outperform human biology. Both touch on identity and our eternal ambition to improve ourselves through processes guided by moral imperatives and virtues. This striving is common to all human civilisations. It is always purpose that sets the boundary between good and bad, together with the same capacity for complex scientific and emotional judgement. The imperative is that this capacity remains a human privilege.
Turning to AI: shall we give up our public memory profession by commanding an AI service, “Create us a museum that will support the prestige and power of our community X”? The problem is the danger of misuse, which can damage and debase the process by which memory is collectively formed. That process works by filtering, compressing, and sublimating, through intellectual, spiritual, and artistic interpretation and communication; it completes the work of recognition, research, collecting, and the study of truth, originality, and authenticity, ennobling humanity’s capacity to liken itself to its ideals and gods, however it has chosen to imagine them.
“Trash in, trash out” was how experts dismissed the great expectations of the early IT revolution. In a way, the situation is the same, but half a century later mankind is more ignorant, selfish, and manipulable, and easier prey to illusions and deceptions; for the first time, humanity faces multiple scenarios of the species’ end. Will museums become, or remain, the places where their communities of users can discern dangerous illusions from reliable reality?