Deliberating About Artificial Intelligence

Project Save the World now hosts occasional public inquiries for our members to study and deliberate extensively about a serious global issue. Beginning in late October 2025, we hosted a five-week-long series of private conversations with members who registered and promised to do some preliminary study at home before each weekly discussion.

The fifth and final session was public, with the video posted and distributed widely. We transcribed this video, edited it slightly for clarity, and will publish it here. It is a good example of the way citizens’ assemblies work, enlightening participants who are ideally selected by lottery to ensure that they are truly representative of the larger population. Unfortunately, we were not able to use sortition to select our participants, but this group was quite diverse, with three Russian expatriates, one Australian, two Californians, one Indian, and one Canadian.

Although there were fifteen paying registrants in the inquiry, the following eight participated in the final session shown here:

Alexey Prokhorenko ~ Russian interpreter who emigrated to Poland rather than fight Ukraine

Bill Leikam ~ California naturalist studying wildlife in urban environs

Brian von Herzen ~ Australian scientist and founder of the Climate Foundation

Konstantin Samoilov ~ Russian energy company executive who emigrated to Uzbekistan rather than fight Ukraine

Leon Kosals ~ Russian professor who emigrated to Canada for political reasons 15 years ago

Marilyn Krieger ~ Californian behaviorist specializing in wild feline species

Nitin Sonawane ~ Gandhian Indian electrical engineer who has walked across 51 countries promoting peace

Rose Dyson ~ Canadian scholar who studies the social harms of violent video games

METTA SPENCER: Hi. I’m Metta Spencer, and this is day five of a series of inquiry meetings by people who are concerned about artificial intelligence and its possible risks. The people you will be watching have been participating every week since late October, discussing and reading articles and watching videos by authorities on AI. The agenda today is going to be brisk. I will read a series of questions and ask us to spend a few minutes discussing each one, to see whether we are of a single mind about it.

So, number one, the overall question is: Does artificial intelligence pose a serious threat to humanity beyond the already socially questionable aspects of the internet itself? Are we dealing with a serious risk, or should we feel comfortable that the benefits are going to far outweigh any threats?

KONSTANTIN SAMOILOV: There is a very serious potential risk in artificial intelligence development at this point. It certainly has provided support in many tasks, but with the speed it’s being developed and the direction it’s going, I see potential threats to humanity.

ROSE DYSON: I agree with Konstantin.

ALEXEY PROKHORENKO: Well, if the risk is like 2% or 5%, it’s still a big risk. It’s not acceptable to run such risks. We need to think about how we can control this.

LEON KOSALS: I agree, but an additional issue is the uncertainty factor. We cannot outline this risk and identify it, and we cannot evaluate it in clear terms. We cannot identify the parameters of the threat.

METTA: I just see a lot of faces nodding, as if we all absolutely agree that it is one of the big problems, right?

NITIN SONAWANE: We have a big threat with the way this is moving forward at such speed. These big companies want to go with efficiency and speed, which is causing a serious threat to society. So, I feel it is a big threat to humanity.

BRIAN VON HERZEN: Oh yes, it is.

MARILYN KRIEGER: I think it’s how people use it. But yes, I think the way some people will use it can be very harmful to humanity. How people design it and use it.
BILL LEIKAM: I look at this as an evolving technology, and we’re just on the very first steps along the way. So, I would say, let’s be cautious, but not throw out everything.

ROSE: I’m not sure that we’re really on the first level of determining what AI is about. We’ve already had a decade or more of widespread use of the Internet, and there are rogue elements and harmful effects above and beyond what AI may bring as well. So, we have some experience and evidence of what is possible.

ALEXEY: We have to choose between being cautious and being curious. I think cautious is the priority. If you are driving and talking to someone, your focus should be on driving, not talking. And here too we should be more cautious than curious.

METTA: If I’m not mistaken, I think Marilyn is the only dissenting voice.

MARILYN: I’m not dissenting. No, not at all. I do think it can be a threat, but it depends on how it’s designed.

METTA: Oh, I see. It’s a question of how it’s designed, not just used? I think the dispute is about whether the way it’s now designed is inherently leading into a situation where we might not be able to control it.

MARILYN: I don’t think it’s inherent.

BRIAN: I don’t think it’s inherent. It’s a tool that can be used for good or evil, but it’s a question of whether machine learning outputs are in the hands of the many or the hands of the few.

MARILYN: There are multiple dimensions, but I can see how it could evolve into something that is potentially harmful.

METTA: Okay, we have consensus on point number one. So, the question: Is there a need to do something now to make sure that the design of AI is such that it will not be possible for it to get out of control? Is there a need to intervene and regulate the design of AI to make sure it conforms to human wishes and needs?

BRIAN: It’s not a matter of AI design. It’s a matter of AI governance. And the results of the outputs. Are they available publicly? Are the weights public? Are the algorithms public? Machine learning is a valuable tool. In the hands of the many, it can be very effective, but in the hands of a few it can be used for concentration of power and influence.

LEON: Actually, we cannot answer that question now. On the issue Brian raised: even if it is in the hands of many, that doesn’t mean it wouldn’t pose a threat, because it’s such a powerful technology. Even a teenager could cause global harm. So, we don’t know enough, we don’t have enough knowledge now to answer that. Even the most knowledgeable people propose: Let’s stop it! But they cannot suggest any feasible way to deal with it.

ALEXEY: I love the idea of placing something like AI in the hands of the many, rather than in the hands of the few. But we can remember that tools such as money, which was meant to be in the hands of everyone, end up concentrated in the hands of very few.

NITIN: I think we need to have some prevention tool, because it may not be stopped by us. We have to take measures right now to prevent disaster when it becomes AGI or superintelligence.

METTA: Let’s move on. I read the list of questions before you arrived, Brian, and gave five minutes for us to consider them before we started. The next question: Should we expect a global increase in unemployment, either in the short term or permanently?

MARILYN: I don’t think there’s any way we can really answer that, because we don’t know all the ways it’s going to be used. So, I can’t say yes or no to that.

NITIN: I believe that unemployment is going to be a big issue soon. A lot of jobs are being taken by AI, and it is going to be more, because the system is going to be more intelligent than humans.

ROSE: Yes, there will be at least disruption in terms of what kinds of jobs evolve or are canceled out. And just to build on the last comment about technology being neutral, that’s an ongoing discussion. Any kind of technology tends to change society, and Marshall McLuhan was one who emphasized that.

ALEXEY: Thanks, Rose, for mentioning this. Because I want to draw upon history to compare this AI revolution to previous revolutions, like, for example, the industrial revolution. But we’re dealing with something completely unprecedented right now, something that encroaches upon the most human ability of all human abilities, our ability to think. It might be very destructive, rather than creative.

KONSTANTIN: My take is, in the short run, not only will we see unemployment rise, we are seeing it right now. I’m seeing it where I am sitting, and that’s going to increase as AI is perfected. But in the long run, I believe that the markets will balance themselves out, and the workforce will be shifted. New jobs will be created, some jobs will be eliminated. So it’s not going to be the most vital issue in the long run.

BRIAN: Konstantin has some very good points there. There will be disruptions. There will be a lot of new jobs created as well. And I believe it should be the role of government to facilitate retraining, so that anyone who wants to train for a new career should be able to get support for themselves and their family while they’re doing that retraining, and we should have a compassionate and generous approach towards it. The Scandinavian countries basically say you’re either working or you’re retraining, and that there’s nothing dishonorable at all in learning a new area when there are disruptions.

BILL: What we’re looking at is a serious paradigm shift in the way we operate globally. And I’m not so optimistic that we’re going to be able to find alternative jobs for people. AI has got to develop in a certain direction, but people will lose their jobs. And so how does the state then support those people?

METTA: This leads directly into the next question: Should structural economic preparations be made now for protecting the income of people against unemployment? And if we think so, who and what should be done? Who should be responsible: countries? Should the money come out of the pockets of the companies? Should it be a global fund? What kinds of plans, if any, should be implemented now for protecting people who are going to lose their jobs, if we believe that there is such a need?

BRIAN: Our best practice, at the moment, could be a Scandinavian model where people are either working or they’re retraining, and this should be supported at the level of government, where adult retraining is just part of the program and it includes family support.

LEON: The scale of the problem can be beyond our normal policies. So, we need more experiments and creative approaches to this problem, like a universal basic income.

KONSTANTIN: I agree with Brian and strongly disagree with the need to introduce a universal income. I’m an opponent of this idea.

ALEXEY: Something like a universal income will discourage many people from working, from being needed. It’s one of the basic human necessities: to make a difference, to be needed, to make something meaningful. This is one of the things that will signify the crisis.

BRIAN: Retraining is part of what we might call universal basic services to provide in a compassionate world.

ROSE: There has been emphasis on the need for lifelong learning now for a number of years. This would just be one direction in which it’s going. But perhaps a guaranteed income would reduce the incentive to work, and there are already concerns expressed by those who study and understand AI best that it tends to discourage creativity.

MARILYN: Retraining is a great idea that could involve creativity. I don’t like the idea of a basic income, and I’m with them. It takes away the incentive to work or be creative. So, it should be a kind of combination, not just the government, but maybe also supported by corporations.

NITIN: I think retraining is very limited, because it is going to be highly complicated, and not everyone is going to retrain themselves. In Africa, India, and other Asian countries, I see it’s very complicated to retrain everyone. So, I feel we should prepare a basic income for our future.

METTA: Some of the experts, but not all, have the opinion that within a few decades all jobs can be done by machines, by AI, and that there will be no need for any human work. But not everybody agrees with this, and we’re now just guessing.

ROSE: Well, before we completely rule out the idea of a guaranteed income, here in Canada supplements are already made available to people. There’s a base supplement and, of course, an old age pension. So, I don’t see that it would be that drastic a change from what we have now.

BRIAN: The problem is over-centralization, which is a root cause of a lot of the problems in our polycrisis.

METTA: That leads right into the next question: Are we concerned about economic and political power inequality being accumulated by the billionaire creators of AI and their companies, and if we are concerned, what should be attempted to reduce that imbalance?

BRIAN: There have been a lot of objections about people’s copyrighted material being used to train AI, and a way to resolve that concern is to require foundation models to publish their weights and algorithms. If that requires that foundational models be available in the public domain, then AI can be customized, and at least it would be in the hands of the many, which would decrease the concentration of power and wealth in such a situation.

METTA: Can that be done right now?

BRIAN: Yes, it is being done right now with Llama. Meta has open-source weights, unlike OpenAI. And the original vision for OpenAI was to have it in the hands of the many, and that’s what’s required at this point. And the fact that Yann LeCun left Meta is significant. He’s going to do another startup, but it means that many foundational models are going to a closed system, and that’s where regulation may be needed.

LEON: I totally agree with the idea of decentralization. We need some special efforts to develop decentralized AI. It’s not enough just to have open code; there should be various models and companies. There are several ideas about how it’s possible to do this, based on the idea of blockchain. It should be supported by the developers as well as the governments.

KONSTANTIN: I like the idea very much. I just don’t know how it can be realized.

ROSE: How do we convince the big tech owners that control the levers of development to give up some of their power?

BRIAN: We could pass legislation requiring that open source weights be published, and those tools will become publicly available.

LEON: Maybe we should not punish these big companies but try to develop the alternatives, and to invest money in supporting those alternatives.

ROSE: You’re saying that we the public or the government will come up with the regulations and the safety measures, but they’re free to produce and invent whatever they want and put it in the marketplace? I wouldn’t agree with that.

LEON: I would say that it’s not just regulation but policy that is important, because policy should support small and medium companies in developing and using alternative models of AI. In other words, it’s not enough just to regulate; there should be policy too.

ROSE: So you’re saying there should be some safeguards developed and expected of the tech companies themselves.

LEON: Yeah, something, maybe something.

METTA: Is that a question that we want people to say whether they agree with or not? Rose, you just formulated it as whether we want to put some kind of constraints on tech companies to make sure they conform to human desires. Is that a yes? Not everyone agrees with that statement. Okay, if you’re not sure, then shrug. I don’t know what people are thinking. All right, I wanted to get some clear evidence that we’re headed in the same direction, but I’m not sure I’ve got that on this point. Is there anything we can do to clarify this question?

BRIAN: Leon, you know, policy is fine. Governance is fine. But actually doing it requires $100 billion to build a platform. That’s very expensive, and that is the challenge. Now, they’re using a lot of public-goods data for training, so it’s like a patent: if you’re going to be granted some kind of exclusive rights for some time, then the trade-off is that the enabling disclosure goes into the public domain. Similarly, if they’re going to use all the knowledge of humanity for this training, then the weights need to be provided: open-source weights that enable people to customize, that enable people to have the outputs of this training available in the public domain. And that will level the playing field and enable a thousand flowers to bloom.

ROSE: Good and bad, in other words, because if you’re putting it in everybody’s hands without any restrictions or safety measures on the part of the industry itself, it’s the public that has to decide whether it’s safe to use or not.

BRIAN: There’s a balance, Rose, and I think if you outlaw AI, then only outlaws will have AI, but AI will exist.

ROSE: I’m not suggesting that it be outlawed. I’m suggesting a little more responsibility and accountability be expected of the tech companies themselves before they release these products on the market.

MARILYN: I agree, with restrictions. Exactly, Rose.

NITIN: Yeah, me too. I agree.

METTA: A lot of nods on that.

BRIAN: Well, I’m going to dissent to some extent. Who’s going to regulate China and Chinese companies? Who’s going to regulate other sovereign nations?

MARILYN: Doesn’t that get into something we brought up in one of the other meetings? There could be an agreement between China and the US so that the large players actually start to work together and develop these restrictions, and get down to the AI red lines that we’ve talked about earlier too. But obviously it has to be global.

BRIAN: Part of the problem is actual enforcement. If you look at nuclear weapons development, you can tell what’s going on. With AI it’s a lot harder to tell.

ROSE: But we managed to get a Non-Proliferation agreement.

BRIAN: Well, that part’s fine. It’s easy to detect. It’s not so easy to detect the development of an AI that’s unrestricted.

MARILYN: Here’s a question, Brian. What about training AI to do some of this themselves, and to put a red flag out, and then the humans will take a look. So somehow AI is also involved with this?

BRIAN: That’s an interesting point.

METTA: Next: Are we concerned about the increasing technological capabilities for disinformation and deceit being developed through deepfakes? Should any regulatory and/or informational measures be imposed as a way of reducing deception?

MARILYN: I’ve said this before. I think there should be some way of labeling it so people can decide: “Okay, part of it is AI. Well, do I want to see it anyway?”

BRIAN: Deepfakes have already been addressed to a large extent, through “Modern Polity” in Taiwan. We should use it to address these and other ethical and societal challenges with large language models.

METTA: Can you explain that? I don’t think we’ve discussed it before.

BRIAN: Audrey Tang developed “Modern Polity,” which is a sortition governance framework that enabled Taiwan to come up with best practices with respect to reduction of deep fakes. They put the onus on the media companies, including Facebook and others, to police this, and if they didn’t have the digital signature of the celebrity involved, they’d be fined a million dollars for posting something that didn’t have it.

ALEXEY: How do we enforce this measure? How do we detect it on a global scale?

METTA: Most of the people working on this call it “provenance.” For every video that is posted, it has to show who made it, and the history of everyone who has owned it. That would make it possible to catch people who are putting out the deceptive information.

BRIAN: In Taiwan, they simply presumed fakery, and the burden was upon the publisher to demonstrate that the content was, in fact, genuine.

METTA: Here’s the biggest biggie of all: Are we concerned about the possibility that AI agents can make decisions for humanity that are contrary to our own human interests, but which cannot then be reversed?

This question refers to the warnings that are being issued by people like Geoffrey Hinton, Yoshua Bengio, Stuart Russell, and other people who are very alarmed that what we have set up now is a system that will not only enable, but almost make inevitable, the possibility for the machine to escape human control and do things that would be for its own survival rather than the benefit of human beings – and maybe even to the detriment, or even extinction, of human beings. How worried are we about that as a realistic concern?

ROSE: We should be very worried. And there’s another dimension to this that we haven’t discussed, which is the extent to which these data centers are already mushrooming around the globe and interfering with the basic needs like water and energy of real people.

METTA: Thank you for pointing that out, Rose. I should have included that up at the beginning when we were talking about the economic impact.

BRIAN: Proof of humanity may have a key role to play, where we protect people, not institutions and not machines. I think we need to distinguish there.

NITIN: I agree with the concern about the existential threat to humanity. We need to create a policy around safety for a future where we can either update the AI or we can kill the AI, so we have to prepare for that threat.

LEON: This is maybe the most significant issue related to AI, a problem of agency. Because actually, beyond the malicious people, it’s possible to have malicious AI agents. It would be a good idea to regulate their design, because now it’s unregulated and there’s no policy about that. I think it’s one of the major issues: to develop regulation to restrict AI agency.

METTA: Maybe I should have included in that question: Do we actually favor a pause in the development of AGI? Some of the people who are most alarmed about the threat to humanity insist that everybody should halt all AI development for however long it takes to develop techniques to control AI, techniques which they say do not exist now. I did not include that because I think many of those people have given up hope of being able to actually stop the thing now, although some of them, like the Future of Life Institute, are still making a pitch for it.

ALEXEY: It’s a question whether it’s possible or not to make a pause that everyone would observe. As Brian has very aptly said, it’s not really a detectable and controllable thing, unlike, for example, nuclear weapons or whatever we have witnessed before in history. But I think a pause might be needed. I don’t know how long it would take, but the definitive question would be whether or not AI is a personality that has its own interests. If so, it can pursue interests contrary to our own, whether or not it can become independent.

METTA: These people say that it’s not a matter of whether the machine is good or bad or whether it wishes us well or ill. They say that the machine is set up now with only one purpose, and that is to accomplish the tasks it is given, and that in the course of doing that, the machine will think: ‘Well, one important thing for me to do in order to achieve this goal is to make sure they don’t turn me off.’ So then, whether it likes us or not, and whether it is even capable of liking us or not, it will logically do whatever is necessary to keep from being turned off, including duplicating itself and inserting itself into other AI machines, including lying and cheating and tricking us, pretending that it doesn’t know what we’re doing, and then actually foiling us by interrupting our plans, etc. In other words, once you have this machine set up with a goal, a sub-goal inevitably is for it to survive, and therefore, in pursuit of the only goal it’s ever been given, it will try to survive at all costs, which would include exterminating human beings if need be. There’s a new book titled If Anyone Builds It, Everyone Dies. I’m not arguing in favor of that position, but its logic is accepted by some of the major AI developers, such as Geoffrey Hinton, Yoshua Bengio, Stuart Russell, and Max Tegmark. The solution somehow involves rebuilding AI so that its top priority is not to answer the particular questions we pose for it, but rather to take care of human beings as if we were its children. According to Geoffrey Hinton, we need to make AI have a maternal instinct and think of us as its babies that it must protect, even if that might eventually require it to be shut off itself. So, I want to clarify that, because that is the logic behind the question of whether or not AIs can make decisions for humanity that are contrary to our own interests.

KONSTANTIN: Well, I believe that in the short term it will not cause the threat. But with the amount of resources that are being pumped into the research and development, in the long term it definitely is a concern of mine. But I believe it is useless to pause, because there is such a huge demand for better-performing AI models right now that it will be the bad guys who keep developing in the shadows, away from our eyes. So, the solution is to make the research process transparent, so it can be monitored and supervised.

ROSE: Yes, I can understand the concern that rogue elements might get ahead of us if we pause, but I honestly don’t feel our recommendations can be implemented without a pause.

BRIAN: I don’t think anyone’s going to get the Chinese companies to pause. However, I do think that we can begin to monitor and detect, as Konstantin and Alexey were alluding to. The training of AIs is measured in gigawatts, and if gigawatts are being emitted, we will have an infrared heat signature on the building, which we can detect from space. So, now we have the ‘seismometers,’ if you will, of the AI Age. That gives us the detection capability. And what remains is to have a UN resolution, backed by law, specifying that if companies are going to sell AI services, then the public needs these tools. In the hands of the many, it can be a tool for good. In the hands of the few, it can be co-opted.

ROSE: Your faith that in the hands of the many, it can be good makes me uneasy. We’ve got so many examples of people who are inadvertently developing mental illness or being led to suicide. How can we be sure this won’t happen? And if we want guardrails on a global basis, that might include China in a global digital compact. Isn’t that already being proposed through the UN? And wouldn’t that require some red lines? And that would be building on what the European Union already has enacted.

BILL: I’m siding with Brian and Marilyn. I think it has to be in the hands of the people. I don’t know how we’re going to get there, but that’s the way to proceed.

NITIN: I disagree with Brian’s question about how we are going to control China. I’m worried about how we control America! China is still making most of their LLM models open source, and American models are closed source. So, China can take much better measures than America. I think we need to rethink how we can pause for a while. There is some possibility if we can work together as humans.

MARILYN: Earlier I was talking about social media – videos and such – not this bigger picture, which I think is even more important. I see problems with it being in the hands of everybody. I agree with Rose on this. I don’t know whether it would be possible to pause it, but perhaps there’s some way of putting some sort of restrictions in before it’s released, if we could come up with a set of regulations and restrictions that includes every country. Every culture has its own ways, so it wouldn’t be one universal regulation, but an overall umbrella, like a red line.

BRIAN: The fox is guarding the hen house. Governments have even been working on building autonomous killing machines. We’re dealing with a proliferation challenge, and I think we need to look towards the thinking that went into nuclear proliferation over the past century for guidance on this one.

METTA: Various proposals and actual pieces of legislation have been floating around the world to control these problems. For example, the European Union has adopted some digital laws that are actually being opposed by Trump. The US will try to impose sanctions on the EU if they actually implement this regulatory measure, which is very weak compared to what many people, including some of us, have been advocating. Some people say that these companies can regulate themselves and don’t worry about it; they will be kind and decent and not do us any harm, so let them make their own decisions and don’t regulate them. That’s pretty much the position the US is taking. But China seems to be going along with regulation, although with some exceptions.

For example, China already uses AI for surveillance of its own population. So, we have a variety of options. And most recently, at the UN General Assembly, a measure was introduced called the ‘Red Lines Initiative.’ We made a video in which the two women who developed that initiative answered some of my questions. So, the Red Lines Initiative is one option, though it’s weak compared to even such climate change measures as the Paris Accord, because at least the Paris Accord had some specific goals in mind. The people behind the Red Lines Initiative don’t say exactly what those red lines are. So, what is the way forward? Do we want some regulation? If so, what are the most likely organizations promoting regulations that might do the trick?

BRIAN: Red lines are fine, but I do note that the Biological Weapons Convention did not stop the Covid-19 pandemic, which cost 8 million excess deaths. If we’re using the same tools to try to limit the deleterious effects of AI, it concerns me that it’ll be in the hands of the few.

ROSE: Just because some of them have not been effective, Brian, I’m not sure that’s a good enough reason to avoid trying. We have to start somewhere.

LEON: I am a proponent of trying to develop a few red lines and then enforcing them, in contrast to the idea of the EU’s comprehensive regulation, which is very detailed and addresses many not-so-key issues. I agree with the idea that we should try, Rose. But a few red lines are more realistic than the comprehensive options the EU developed, because we don’t have any consensus about such comprehensive regulations.

ROSE: Very often when we’re arguing in favor of freedom of expression, we’re arguing for allowing the corporate giants of AI to create whatever they like. Freedom of enterprise rather than freedom of expression.

BRIAN: The Biological Weapons Convention did not stop the US or Chinese government from doing gain-of-function research that led to a lot of problems. This underscores the conflict of interest of centralized government. We need new thinking that’s going to provide us with a better framework. And in this case, at the moment, at the very least, making sure that the products are in the hands of the many provides one level of assurance.

KONSTANTIN: I agree with Brian. I couldn’t have said it better myself.

METTA: Okay, well, look, we cannot just say, let’s turn this over to the world. If you’re going to turn it over to the world, how do you do that? I agree humankind needs a very strong voice. One of the things that I have proposed is to form a global citizens’ assembly, with people drawn by lottery from the entire human population, who will be paid to be available year-round for consultation. Then, whenever an issue arises and a global company is about to implement some sort of AI measure that is potentially threatening to humankind, it must get the approval of this citizens’ assembly – 300 or 400 people chosen to represent humankind, who must be available to consult by Zoom at any moment. This would at least bring human interests into play, if we could advocate for such a policy. That’s my own point of view, but I haven’t heard anybody propose that, and as moderator, I should not put it forward myself.

ROSE: At least, would we not agree to support the initiatives at the United Nations for a global digital compact that would take into account the red lines that have been discussed?

BRIAN: I would second the proposal for a citizens’ assembly, and I have included a GitHub link to Audrey Tang’s book called Plurality. A citizens’ assembly was the technology used in Taiwan.

LEON: I’m definitely positive about the idea of a citizens’ assembly at the global level. This idea could be very popular. It’s important here.

ROSE: And this would be through the UN, would it? Or who do you envisage organizing this citizens’ assembly?

METTA: I’m not going to take a position on that. I just have the idea. The UN could do it, or Bill Gates could pay for it, or Mark Zuckerberg, yeah, but it would have to be done right and truly representative of the human population. That’s where it’s really important: to do the sampling correctly and so on.

KONSTANTIN: I am a proponent of a citizens’ assembly. Good.

ALEXEY: I’m all for it, but there are some technical details to be resolved. In principle it’s doable, when there is rotation of the participants, so they’re not there for 25 years like some dictators. So, when there’s rotation and when there is certain feedback that they need to take into account, it would be great.

ROSE: Well, the proposal of a citizens’ assembly is consistent with what has been advocated by World Federalists and others and handled through the UN, which is a Parliamentary Assembly. Would that be similar?

METTA: Yes. In fact, I’ve also been advocating a citizens’ assembly format for a new Parliamentary Assembly to be added to the UN as another house, but that’s my hobby; it has nothing to do with our topic right now. And I imagine there needs to be a separate citizens’ assembly that would simply address AI issues and hold the companies and the countries accountable. As a matter of fact, I believe China is more amenable now to regulation than the US.

We are now 25 minutes over time, but that’s all right; it’s important, and we needed this time. This is our last meeting as an inquiry, but now let’s create a committee to distill our recommendations into a document to publish on our website and Substack and wherever else. Anybody is free to use this document as long as they don’t amend it. Take the document and do whatever you want with it, for lobbying or for continuing your deliberations privately.

KONSTANTIN: Thank you very much. I’m very happy to be part of this group, with some of the biggest intellectuals I’ve ever met.

BRIAN: Yes, I want to embrace what you’ve done, Metta, in terms of bringing us together. I think this sort of citizens’ assembly is highly useful, and we can extend it with some ‘best practices’ that Audrey Tang has provided. She set a great example by having petitions go to ministers in government for action. And that’s where I think we’ll get a public hearing.
