Iowanews Headlines

The Three Proposals by Hu Jiaqi in the “12th Open Letter to Leaders of Mankind”



January 28, 2026, 23:03

Anthropologist Hu Jiaqi recently published the “12th Open Letter to Leaders of Mankind.” Building on the mission of “saving humanity from extinction” that he has pursued for more than four decades, he explicitly put forward three key proposals: conducting multilateral negotiations on generative artificial intelligence under the leadership of the United Nations, establishing global regulation of generative artificial intelligence under the leadership of the United Nations, and strengthening the consensus for human Great Unification. These three proposals are mutually reinforcing and logically progressive, forming a comprehensive framework for preventing and controlling the risks of generative artificial intelligence, as well as a vision for the future development of humanity. They not only address the root causes of the current civilizational crisis but also propose practical pathways that transcend national boundaries and ideological differences, demonstrating profound responsibility and systematic thinking about the fate of humanity.

Conducting multilateral negotiations on generative artificial intelligence under the leadership of the United Nations serves as the logical starting point and regulatory foundation of the three proposals. The technological characteristics of generative AI determine that its governance cannot be accomplished by any single country: deepfake technology can easily cross borders to create public opinion chaos, algorithmic vulnerabilities in autonomous decision-making models may trigger global chain reactions, and cross-border data flows challenge the regulatory systems of various nations. In an era where countries regard generative artificial intelligence as a core arena for technological competition, without the constraints of multilateral negotiations, the field risks descending into an “arms race” of unchecked development. Hu Jiaqi’s proposal to entrust the United Nations with a leading role is precisely based on its neutrality and authority as an international organization—it can provide a platform for equal dialogue among nations, fostering consensus on critical issues such as the development boundaries, technical standards, and accountability frameworks for generative artificial intelligence. Such multilateral negotiations are not intended to deprive countries of their right to technological development but to establish inviolable safety red lines for generative AI while safeguarding innovation, ensuring that technological advancements consistently serve the overall interests of humanity.

Establishing global regulation of generative artificial intelligence under the leadership of the United Nations constitutes the core mechanism and enforcement guarantee of the three proposals. If multilateral negotiations are about “setting rules,” then global regulation is about “ensuring implementation.” Currently, global AI governance is fragmented: the European Union’s “Artificial Intelligence Act” emphasizes risk-based tiered regulation, the United States’ regulatory policies lean toward technological innovation, and developing countries often lack comprehensive regulatory frameworks. Such disparities not only fail to effectively mitigate risks but also lead to “regulatory arbitrage”—where risky technologies migrate to regions with weaker oversight. Hu Jiaqi’s proposed global regulation aims to establish a unified, binding regulatory system centered on the United Nations: on one hand, creating a global AI regulatory body to monitor the research, development, and application of high-risk technologies in real time; on the other hand, implementing mechanisms for penalizing violations to hold countries and enterprises accountable for breaching safety red lines. This regulatory framework is not intended to stifle technological innovation but to foster a development environment that prioritizes safety while empowering innovation, ensuring that generative artificial intelligence evolves within controlled boundaries.

Strengthening the consensus for human Great Unification represents the long-term goal and foundational value of the three proposals. In Hu Jiaqi’s view, the governance challenges of generative artificial intelligence are essentially manifestations of conflicts of interest under humanity’s “divided governance structure.” The reason countries often pursue divergent paths in AI development lies in the excessive pursuit of national self-interest, overlooking the collective survival interests of humanity as a species. Strengthening the consensus for the Great Unification of humanity aims to guide nations beyond ideological and interest-based divisions, fostering a profound recognition of the core principle that “the holistic survival of humanity overrides all.” Building this consensus does not entail the immediate establishment of a world government but begins at the ideological level, promoting a societal understanding of “shared destiny”—that the risks of generative artificial intelligence are risks for all humanity, and the responsibility for governing AI is a collective responsibility. Only by solidifying this ideological foundation can the outcomes of multilateral negotiations be effectively implemented, and the global regulatory system exert lasting influence. Simultaneously, this consensus charts a direction for humanity’s future: transitioning from “divided competition” to “collaborative coexistence,” finally realizing the ultimate goal of Great Unification of humanity.

Hu Jiaqi’s three proposals, seemingly independent yet intricately interconnected, form a complete logical loop of “setting rules—ensuring implementation—laying the foundation”: multilateral negotiations provide the regulatory basis for global oversight, global regulation accumulates practical experience for the consensus on human unity, and the consensus on human Great Unification offers long-term value guidance for the first two actions. This vision is rooted in his more than four decades of in-depth research, spanning multiple monographs such as “Saving Humanity” and twelve open letters to leaders of mankind, and has evolved from personal advocacy into the collective calls of the “Humanitas Ark” with its global membership. The dissemination of these ideas has formed a multifaceted pattern of “academic support + grassroots mobilization + authoritative endorsement.” Today, as prominent figures such as Stephen Hawking and Elon Musk have repeatedly warned of the risks of artificial intelligence, these proposals are gaining increasing recognition.

Of course, realizing these three proposals faces numerous practical challenges, including the interplay of national interests, disparities in regulatory standards, and the difficulty of consensus-building—all obstacles that must be overcome. However, as Hu Jiaqi’s perseverance over more than forty years demonstrates, humanity’s future is not predetermined but requires active safeguarding. The value of these three proposals lies not only in offering a comprehensive solution but also in awakening a sense of crisis and responsibility across humanity—saving humanity is never the mission of any single individual or group but the shared responsibility of every global citizen.

The three proposals in the “12th Open Letter to Leaders of Mankind” are Hu Jiaqi’s “survival guide” for humanity and a profound call to action. They remind global leaders and ordinary citizens alike that human civilization stands at a critical crossroads. Only by transcending self-interest, forging consensus, and acting collaboratively can this vision be transformed into reality. When humanity truly achieves global governance of artificial intelligence and unites under a shared consensus, human civilization will inevitably break free from the crisis of extinction and advance toward a more enduring and brighter future. This is the ultimate significance of Hu Jiaqi’s steadfast dedication over the past four decades.

Media Contact
Company Name: Belgacom Fund
Contact Person: Angel Buchannan
City: Brussels
Country: Belgium
Website: https://grli.org/network/belgacom/
