Legislative precautions should be taken now to ensure that data centers, with their huge appetite for water and energy, do not interfere with the basic needs of human and natural communities and ecosystems. Every effort should be made to operate these centers with renewable energy. Moreover, protection of the environment should take precedence over the indiscriminate proliferation of AI data centers. A balance of wisdom and curiosity is prudent, with the necessity of erring on the side of wisdom.
We do not yet know how serious or prolonged the impact of AI on human employment will be, but there are already disruptions in the workforce, and the economic implications of AI-driven unemployment should be vigorously addressed as they arise.
The widespread availability of these technologies would avoid the concentration of too much power in the hands of a few entities or individuals. On the other hand, uncontrolled public access to AI models creates the risk of dangerous misuse. Therefore, it is crucial to develop effective decentralized governance. For example, Taiwan successfully developed democratic AI governance procedures that were implemented within a few weeks. Similar citizens’ assemblies have demonstrated the ability to respond quickly to urgent challenges posed by AI. We advocate the same.
Serious societal effects of AI, such as rising unemployment, harm to the environment, and widespread deceptive practices (misinformation, disinformation, propaganda, and deepfake imagery), must be addressed by an informed citizens’ assembly. Deception can be reduced by requiring each post on the Internet to show its provenance. Moreover, it should be a requirement of AI use that it is always immediately apparent whether one is observing or interacting with an AI or a real human being.
We discussed the merits and disadvantages of allowing model weights to be revealed publicly. Some of us favor open weights, which would encourage decentralization of control: individuals could download a system and run it independently on their own computers. However, other members of the inquiry were convinced by Geoffrey Hinton that the disadvantages outweigh these benefits, since open weights might enable individuals, for example, to engineer dangerous viruses and other hazardous agents.
Weights are the learnable numerical parameters within a neural network that determine the strength of the connections between neurons; they are adjusted during training to minimize error and enable the model to make accurate predictions.
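To make this definition concrete, here is a minimal illustrative sketch (not drawn from any particular system) of a single-weight "neuron" whose weight is adjusted by gradient descent until its predictions match the training data:

```python
# A toy model with one learnable weight w, trained to fit y = 2 * x.
# Each update nudges w in the direction that reduces the prediction error.

def train(xs, ys, lr=0.1, epochs=100):
    w = 0.0  # the "weight": a learnable numerical parameter
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x        # neuron output for input x
            error = pred - y    # how far the prediction is from the target
            w -= lr * error * x # adjust the weight to shrink the error
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # targets follow y = 2x
w = train(xs, ys)       # w converges to approximately 2.0
```

A full neural network simply has millions or billions of such weights, adjusted simultaneously by the same principle; releasing "open weights" means publishing these trained numbers so anyone can run or modify the model.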
We discussed at length whether developers know enough yet to be able to keep AI under human control. More attention must be devoted to designing AI to maximize human safety. Those most knowledgeable about the risks of AI agency (including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell) insist that urgent attention be given to ensuring that AI’s basic goals are designed to prioritize human wellbeing. Hinton hopes that a maternal instinct can be built into AI, so that it would regard human beings as its own children and would even sacrifice its own survival for the wellbeing of humankind. Russell argues that machines are beneficial to the extent that their actions can be expected to achieve our objectives. Therefore, AI should not be given fixed, explicit goals, but instead should be designed as assistive agents that continually defer to human preferences, learn them over time, and avoid irreversible actions that would foreclose future human choice. The basic goal for AI as an agent might be phrased as Stuart Russell suggested: “Choose actions that you predict would be rated as acceptable by a human with full knowledge of the situation.” Ideally, we would pause the development of AI until such changes can be implemented in the basic design, but for practical and economic reasons this is unrealistic, given the speed with which AI is already being used in important projects.
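The design Russell describes can be sketched in code. The following is our own hypothetical illustration (the function names and structure are assumptions, not a real API): the agent holds no fixed goal of its own; it scores candidate actions with a learned model of human preferences and refuses irreversible actions outright.

```python
# Hypothetical sketch of an assistive agent in Russell's sense:
# choose the action predicted to be most acceptable to a human,
# and never take an action that forecloses future human choice.

def choose_action(candidates, predict_human_rating, is_irreversible):
    best, best_rating = None, float("-inf")
    for action in candidates:
        if is_irreversible(action):
            continue  # avoid actions that foreclose future human choice
        rating = predict_human_rating(action)  # learned model of preferences
        if rating > best_rating:
            best, best_rating = action, rating
    return best  # may be None: prefer inaction when nothing is acceptable

# Illustrative use with stand-in ratings:
actions = ["send draft", "delete all files", "ask for clarification"]
ratings = {"send draft": 0.6, "delete all files": 0.9, "ask for clarification": 0.8}
chosen = choose_action(actions, ratings.get, lambda a: a == "delete all files")
# chosen == "ask for clarification": the highest-rated action is skipped
# because it is irreversible, regardless of its predicted rating.
```

The essential point of the design is visible in the example: deference to predicted human judgment determines the ranking, while the irreversibility check acts as a hard constraint that no rating can override.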
Global regulation of AI is necessary. We considered various models that are being proposed. The European Union has already adopted some regulatory legislation, but more stringent guardrails will be required. At the U.N. General Assembly meetings in 2025, a group proposed a set of “red lines” to be recognized and not crossed. Although this document is still vague about the specific infractions that must be forbidden, its signatories include the most distinguished experts on AI and international law, and it should therefore be taken as a good starting point for developing real legislation. Furthermore, procedures must be created to monitor adherence to these new laws and to enforce compliance. This could be authorized by the International Court of Justice, with penalties specified for non-adherence.
Finally, the decisions about particular problematic issues should be mainly determined by deliberations representing humankind at large. There are proposals for developing a UN Parliamentary Assembly using sortition as a selection device. This body could exercise a judicial function too, but it will probably be better to create a separate Global Citizens’ AI Assembly to prescribe solutions and monitor adherence to established legislation.
There are various possible sources of funding for such a body, some preferable to others. More important, however, is to ensure that the selection of citizens’ assembly members and their deliberations are carried out impeccably, with unhindered access to the most accurate and appropriately vetted information relevant to their deliberations, so that their decisions are truly representative and reflect the informed views of humankind globally. The selection must be done by sortition, carried out professionally to ensure representativeness, and the deliberations must be fairly moderated, with sufficient time allotted for full deliberation on any particular case. No member of the assembly should hold office for more than two or three years, and all should be available for online consultation and committee meetings for several hours each week throughout the year.