Generative artificial intelligence (AIGC), built around large models, is rapidly being integrated into business scenarios, but the ethical problems arising along the way are increasingly prominent, particularly algorithmic "black boxes", data abuse, and the evasion of responsibility. These emerging forms of market failure urgently call for institutional governance.
The author has summarized the main manifestations of AIGC's ethical risks in the context of commercialization:
——Unclear property rights over data elements invite data abuse and technological "black boxes". Data, a core factor of digital production, still lacks a clear mechanism for confirming ownership and setting reasonable prices. Platform companies can acquire user data at low cost through vague authorization terms and cross-platform crawling, while users lack control over their own data. Under this structural asymmetry, AIGC products are widely embedded in business processes through the SaaS model, and their algorithmic logic is highly closed and opaque, forming a technical "black box". Users contribute data passively and unknowingly, and their rights to know and to choose cannot be effectively guaranteed.
——Lagging corporate governance structures aggravate the retreat of ethical boundaries. Some enterprises still follow traditional industrial logic, oriented toward profit and scale, and have not fully incorporated ethical governance into corporate strategy, leaving it marginalized or merely formal. Under commercialization pressure, some companies apply AIGC in sensitive areas such as deepfakes, emotional manipulation, and induced consumption, manipulating user decisions and even shaping public cognition. The short-term gains come at the cost of long-term social trust and the ethical order.
——Imperfect regulatory rules create governance windows and responsibility vacuums. The existing regulatory system has not yet adapted to AIGC's rapid evolution in its division of rights and responsibilities, technical understanding, and enforcement methods, allowing some companies to operate in regulatory blind spots. When generated content causes controversy, platforms often evade responsibility on the grounds of "technological neutrality" and "absence of human control", creating an imbalance between social risk and economic benefit and weakening public confidence in governance mechanisms.
——Biased algorithm training mechanisms entrench prejudice and misalign values. For reasons of efficiency and cost, enterprises often train models on historical data. Without a bias-control mechanism, algorithmic outputs can easily entrench existing prejudice. In advertising recommendation, talent screening, information distribution, and similar settings, such bias may reinforce labeling tendencies, harm the rights and interests of specific groups, and even distort social values.
——A weak foundation of social awareness lets ethical risks spill over. Most users understand little about how AIGC works or what risks it poses, making it hard for them to identify false information and covertly guiding behavior. Education, the media, and platforms have not yet joined forces to popularize ethical literacy, so the public falls into misconceptions and misdirection more easily. This provides a low-resistance environment for AIGC abuse, and the risks spread rapidly into public opinion and cognitive security.
How, then, should the ethical-risk governance system be designed to ensure that technology is used for good?
The author believes that resolving the ethical-risk dilemma in AIGC's commercial application requires working along multiple dimensions, including the property-rights system, corporate governance, the regulatory system, algorithmic mechanisms, and public literacy, to build a systematic governance structure that covers the whole process from ex ante to ex post and combines targeted and comprehensive measures, so as to achieve forward-looking warning and structural mitigation of ethical risks.
First, establish data property rights and a pricing mechanism to crack open data abuse and the technical "black box". Legislation confirming rights over data elements should be accelerated, clarifying the boundaries of data ownership, use rights, and trading rights and guaranteeing users a complete chain of rights over their data: knowledge, authorization, retraction, and traceability. A unified data trading platform and an explicit pricing mechanism should be built so that users can actively manage and price their own data. Platforms should be pushed to disclose how their algorithms operate, or to provide interpretability disclosures, and an information-source labeling mechanism should be established to improve the transparency of AIGC operations and users' ability to perceive them.
Secondly, reform corporate governance structures to embed ethical responsibility and value orientation. AI ethical governance should be placed on the corporate strategic agenda, with an algorithmic ethics committee and a designated ethics officer to institutionalize ethics at the organizational level. A "technology ethics evaluation" mechanism should precede product design and deployment, conducting ethical impact assessments to ensure a reasonable value orientation and clear safety boundaries. An ethical audit system should be introduced and ethical practice included in ESG performance appraisal. Leading platforms should be encouraged to publish ethical-practice reports, forming an industry demonstration effect and guiding enterprises toward "innovation for good".
Thirdly, strengthen cross-departmental coordinated supervision to narrow governance windows and gray areas of responsibility. A cross-departmental regulatory coordination mechanism should be established as soon as possible, forming a joint AIGC comprehensive governance team to coordinate the formulation and implementation of regulations. Special regulations on labeling generated content, defining data ownership, and assigning algorithmic responsibility should be issued promptly to clarify platforms' primary responsibility for generated content. A principle of "presumed liability" can be applied to AIGC-generated content: unless a platform can prove it is not at fault, it bears corresponding responsibility. This prevents enterprises from evading governance obligations in the name of "automatic algorithmic generation" and establishes a full-chain governance system combining ex ante prevention, in-process supervision, and ex post accountability.
At the same time, improve training-data governance rules to eliminate algorithmic bias and value misalignment. An authoritative third party should lead the construction of a public training corpus, providing diverse, credible, and audited corpus resources for enterprises and raising the ethical quality of foundational data. Enterprises should be required to disclose their training-data sources, de-biasing techniques, and value-review processes, and an algorithm filing mechanism should be established to strengthen external oversight. Enterprises should also be encouraged to introduce plural indicators such as fairness and diversity into algorithmic objectives, moving away from the current single business orientation centered on "click-through rate" and "dwell time" and building a value-balanced logic for AIGC applications.
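To make the idea of plural indicators concrete, here is a minimal illustrative sketch, not drawn from the article: a toy reranking rule that blends a predicted click-through rate with a bonus for content categories not yet shown, rather than ranking by click-through rate alone. The item names, scores, categories, and the weight `w_div` are all hypothetical.

```python
# Toy multi-objective reranking: trade predicted click-through rate (CTR)
# against a diversity bonus, instead of sorting by CTR alone.

def rerank(items, w_div=0.5):
    """Greedily pick items one by one. Each pick scores an item as its
    predicted CTR plus a bonus (w_div) if its category has not been
    shown yet. `items` is a list of (item_id, predicted_ctr, category)."""
    remaining = list(items)
    shown_categories = set()
    ranked = []
    while remaining:
        def score(it):
            _, ctr, cat = it
            return ctr + (w_div if cat not in shown_categories else 0.0)
        best = max(remaining, key=score)
        remaining.remove(best)
        shown_categories.add(best[2])   # this category now counts as shown
        ranked.append(best[0])
    return ranked

# Hypothetical candidates: two high-CTR items share one category.
candidates = [
    ("a", 0.30, "gossip"),
    ("b", 0.28, "gossip"),
    ("c", 0.10, "science"),
    ("d", 0.05, "civic"),
]
print(rerank(candidates))  # → ['a', 'c', 'd', 'b']
```

A CTR-only ranking would order the items a, b, c, d; the diversity bonus promotes the science and civic items above the second gossip item, which is the kind of rebalancing of a single engagement objective the article calls for.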
Finally, improve the public's digital literacy to consolidate the consensus foundation of ethical governance. AI ethics and algorithmic literacy education should be included in the curricula of primary and secondary schools and universities, and social forces such as the media, industry associations, and public-interest organizations should be supported in participating in AI ethical governance. Establishing "public technology observation teams" and "ethical-risk reporting channels" would normalize civic oversight. Platforms should be encouraged to build mechanisms for ethics popularization and risk warning, releasing timely technical explanations and ethical guidance on trending AIGC applications, easing public anxiety, and strengthening society's overall ability to identify and guard against AIGC risks.
The commercial application of generative artificial intelligence is a major opportunity for integrating technological progress with economic development, and also a severe test of the ethical governance system. Only by coordinating development and regulation under a concept of systematic governance, and by strengthening institutional design and the implementation of responsibility, can we hold the ethical bottom line while promoting technological innovation and cultivate a safe, sustainable, and trustworthy digital-economy ecosystem.
(Authors: Li Dayuan is a professor at the School of Business, Central South University; Suaya is a doctoral student at the School of Business, Central South University)
[Editor in charge: Zhu Jiaqi]