Generative Artificial Intelligence as a “Black Screen” for Society: The Logic and Practice of a Stacked Black-Box

GUO Quanzhong, LI Li

Jinan Journal ›› 2024, Vol. 46 ›› Issue (12) : 81-96. DOI: 10.11778/j.jnxb.20241587

Abstract

Generative Artificial Intelligence (GenAI) has not only reshaped how content is produced, disseminated, and knowledge is created, but has also profoundly affected the social order and human modes of thinking. The data-driven logic, algorithmic optimization mechanisms, and black-box characteristics behind this powerful technical tool have increasingly become a focus of attention for academics and policymakers. The core question of this study is how to deconstruct the black-box characteristics of GenAI from technical, ethical, and social perspectives and to explore their far-reaching impact on social and cultural structures.
The core mission of GenAI is to mimic human language, values, and thinking abilities, a goal that makes it increasingly complex and opaque as a technology. Unlike conventional AI, GenAI exhibits a high degree of non-interpretability in its technical model, its training process, and its generated results, giving it an unprecedentedly pure black-box character. Drawing on cutting-edge research and practice in GenAI development, this paper constructs a “stacked black-box” model, proposing that the black-box character of GenAI is the superposition of three components: a technical black-box, reflected in the complexity of the algorithmic model; a nourishment black-box, reflected in the hidden sources and processing of training data; and a result black-box, reflected in the weak interpretability and uncertainty of the generated text.
Although explainable artificial intelligence (XAI) attempts to demystify the AI black-box through technical means, its effect remains limited. From “pre-modeling explanation” and “interpretable models” to “post-modeling explanation”, the development of XAI has not yet genuinely opened up the full workings of deep learning models, and adding secondary explanatory models may even increase system complexity. At the same time, the “explainable-human-like” paradox becomes more pronounced: on the one hand, humans want AI to simulate the complexity of human thinking; on the other, they demand transparency in that process, and this contradiction leaves technological development in a dilemma. This paper argues that completely demystifying the GenAI black-box may be unrealistic, and that it is more important to balance technological transparency against social needs.
As GenAI becomes widely embedded in daily life, the traditional “black-box society” is evolving into a “Neoblack-box society”. The technology-outsourcing system constructed by GenAI is becoming an important pillar of social decision-making, but it brings new problems such as the centralization of power, technological inequality, and the untraceability of decisions. This paper proposes the concept of the “thinking ratio”, arguing that in a context of widely embedded black-box technological systems, human beings need to strengthen their ability to “think about thinking” and “judge about judging” so as to achieve rational control of technology amid uncertainty. In doing so, the paper offers a new path for collaboration between technology and society at the cognitive level.
This paper extends previous studies in three respects. First, it examines the key ethical issue of the “explainable-human-like” paradox from the multidimensional perspective of “superposition”, revealing the inherent contradiction in generative AI between the pursuit of transparency and the pursuit of human-like intelligence; this comprehensive analysis, from technical logic to ethical dilemma, provides a new theoretical perspective for the study of AI ethics. Second, it understands the impact of AI technology on social formations from the perspective of the black-box, revealing how technology is profoundly reshaping social order, resource distribution, and individual ways of thinking. Third, in response to criticisms that AI has caused humans to stop thinking, it proposes the concept of the “thinking ratio”, turning attention to human cognitive adaptability in complex technological environments and offering a new way of understanding the evolution of human decision-making ability in the technological era.

Key words

generative artificial intelligence / intelligent black-box / explainable artificial intelligence / Neoblack-box society / thinking ratio

Cite this article

GUO Quanzhong, LI Li. Generative Artificial Intelligence as a “Black Screen” for Society: The Logic and Practice of a Stacked Black-Box. Jinan Journal. 2024, 46(12): 81-96. https://doi.org/10.11778/j.jnxb.20241587