Artificial Intelligence in Medicine

Professor Enrico Coiera is the director of the Centre for Health Informatics at the Australian Institute of Health Innovation and Professor in Medical Informatics at Macquarie University. Dr. Ronald St. John is a retired epidemiologist who managed infectious disease programs at the World Health Organization's Regional Office for the Americas. He was manager of Canada's SARS response. Adam Wynne is program manager, Project Save the World. This is an extract from a longer Project Save the World forum in February, which you can watch here: https://tosavetheworld.ca/episode-545-ai-and-pandemics. We began by asking Dr. Coiera the difference between AI and AGI.

ENRICO COIERA: AGI – or artificial general intelligence – would be sentient and have autonomy. And, yes, that is coming. Some people say it's a few years away and some people say we're nowhere near doing that. I remain agnostic. But what we normally mean by AI today is a set of tools and methods that computer scientists have built that help automate the way we make decisions.
AI, using machine learning, can look at data for patterns associated with an outcome or a diagnosis. The new generation technologies, which have taken over in the last five to seven years, are called deep learning. That refers to the use of what we call “neural networks,” which are not really models of human brains, but were loosely modeled against them in the early days. These are big networks that sort of connect together different ideas. And like neurons, there are different weightings between them. Training those networks can take huge amounts of data and huge energy resources.
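To make the idea of weighted connections concrete, here is a minimal sketch of a tiny feed-forward network in Python (the layer sizes, weights, and input values are invented for illustration):

```python
import numpy as np

# A tiny two-layer network: each layer multiplies its inputs by a
# weight matrix (the "weightings between neurons") and applies a
# non-linearity. Sizes and values are arbitrary illustrations.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights: 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(4, 1))   # weights: 4 hidden -> 1 output neuron

def forward(x):
    """Propagate an input through the network's weighted connections."""
    hidden = np.tanh(x @ W1)                 # weighted sum + squashing
    return 1 / (1 + np.exp(-(hidden @ W2)))  # output score in (0, 1)

x = np.array([0.5, -1.2, 3.0])   # a made-up three-feature input
print(forward(x))                # e.g. a risk score between 0 and 1
```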
AI is already replacing some humans, as for example with mammogram screening. It's increasingly hard to find humans to read all those thousands of mammograms and, it turns out, most of them are normal anyway. So, AI would screen all the obviously normal ones out, throw them away, and reserve those that are somehow concerning for a reading by the human and the AI together.
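The triage logic Coiera describes can be sketched in a few lines; the function name and clearance threshold below are hypothetical, not taken from any deployed screening system:

```python
def triage_mammogram(p_abnormal, clear_threshold=0.02):
    """Hypothetical triage rule: auto-clear scans the model is very
    confident are normal; everything else goes to a human reader who
    also sees the AI's assessment. The threshold is illustrative."""
    if p_abnormal < clear_threshold:
        return "auto-clear: reported normal without a human reading"
    return "flagged: joint human + AI reading"

print(triage_mammogram(0.001))   # obviously normal -> screened out
print(triage_mammogram(0.40))    # concerning -> human + AI review
```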
Backpropagation is the mechanism for teaching the network. When I was completing my doctoral work in the 1980s, it was being used, but the computer power to do it was very limited. So, we would tend to build networks with very few neurons simply because the computers couldn't do more.
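For the curious, here is a minimal backpropagation sketch in Python, training a one-hidden-layer network on the classic XOR toy problem from that era of tiny networks (the architecture and learning rate are arbitrary illustrations):

```python
import numpy as np

# Train a one-hidden-layer network on XOR by backpropagation:
# run inputs forward, measure the error, then propagate the error
# backward to nudge every weight against its gradient.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
lr = 0.5  # learning rate (illustrative)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: error gradients for each layer's weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```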
METTA SPENCER: You call them neurons? You’re actually trying to mimic the structure of a brain?
COIERA: Well, that's how it started, but there's no longer much correlation. About ten years ago, two big things happened. One was on the back of video games. People started to build really fast computer chips to play video games, and they turned out to be perfect for teaching neural networks. So, all of a sudden, we had computing power. Companies like Google had big warehouses full of computers and, for the first time, we had access to massive computing power. The other big thing that happened in healthcare was that we were probably a decade into digitizing healthcare. So, at last, we were getting enough patient data to train these networks and have them discover the patterns I'm describing. So yes, there have been great innovations in the design of these neural networks.
RONALD ST. JOHN: That's remarkable. When you finish medical school, you have to take a final exam to qualify as a doctor, and I've heard that ChatGPT passed the exam. So I guess ChatGPT technically became a doctor.
SPENCER: Is there any conceivable way in which that’s good news?
COIERA: It sort of is and sort of is not.
Technology, I like to think, is neutral. It's the human uses of technology that are of concern. ChatGPT is what's called a “large language model.” Its neural network has been fed literally billions of words of text from the web, and it has built a model of how people speak. It's a great storyteller but it's not a truth teller. Just by knowing how people speak about things, it often gets things right, but it actually is not knowledgeable. So, you can ask ChatGPT for something and it'll give you an answer that's “truthy,” to use that American phrase. It'll be articulate and polished, and it often gets things right.
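As a toy illustration of “modeling how people speak” (a drastic simplification of a real large language model, with an invented miniature corpus), a next-word predictor can be built from raw text counts; it produces fluent-sounding sequences with no notion of truth:

```python
import random
from collections import defaultdict

# Toy next-word model: count which word follows which in a scrap of
# text, then sample. Real LLMs use vast neural networks rather than
# count tables, but the principle is the same: they model how people
# speak, not what is true.
corpus = ("the patient has a cough the patient has a fever "
          "the doctor ordered a test the test was negative").split()

follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

random.seed(0)
word, sentence = "the", ["the"]
for _ in range(8):
    options = follows.get(word)
    if not options:               # a word with no recorded follower
        break
    word = random.choice(options)  # pick a plausible next word
    sentence.append(word)
print(" ".join(sentence))          # fluent-sounding, but not a fact
```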
Instead of doing a Google search, I’m sure in the next few months, consumers will speak to or type questions into some version of ChatGPT. There’s a rivalry going on now. Microsoft will incorporate it into its search. So, the way we look for information will change. And instead of getting a list of links, we’re going to get an answer.
Before Google, there was something called Ask Jeeves, which was a very early search engine. And the idea then was the same thing: you would just give a question to the computer and it would just tell you what it thinks the answer is. ChatGPT and all these language models can also write essays, limericks, and songs.
During the pandemic there was an explosion in research. Thousands of preprint articles that had not been peer reviewed were just deposited online because it was so urgent to get the results out. We could look at all that research, but no human could really judge it all, so we're interested in building what we call “automated systematic review technologies.”
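One building block of such a tool might be a relevance classifier over abstracts; a minimal sketch with scikit-learn follows, where the labelled abstracts are invented stand-ins for a real screening dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: abstracts labelled relevant (1) or not
# (0) to some review question. A real tool would use thousands.
abstracts = [
    "randomized trial of remdesivir in hospitalized covid patients",
    "observational study of mask mandates and transmission rates",
    "case report of a rare dermatological condition",
    "survey of hospital cafeteria food preferences",
]
labels = [1, 1, 0, 0]

# Turn each abstract into word-weight features, then fit a classifier.
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(abstracts, labels)

new = ["trial of antiviral treatment for covid pneumonia"]
print(screener.predict_proba(new)[0][1])  # probability it is relevant
```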
My biggest worry right now is misinformation. A malevolent state actor can flood the internet with lots of plausible versions of misinformation. If you were wanting to change an election, you’d just create lots of very plausible tweets or Facebook posts. Then it becomes an arms race about message control. We’re only in the very early days of misinformation.
ST. JOHN: One wonderful application would be in helping us sort out long COVID. There is now a list of 200 symptoms of long COVID. It can’t all be one disease. There are lots of theories, publications, and papers. To sift through all that to see if there is a common thread that can define the long COVID syndrome – that would be fantastic.
COIERA: Yeah, I’ll get the guys to do it today! It’s an interesting challenge. I suspect part of the problem with long COVID is that we still don’t have all the right things measured, so we’re left with proxies, like fever and cough.
ST. JOHN: One of my interests is early detection of pandemics. Have you given any thought to applications that might help detect an unusual event that might become a pandemic?
COIERA: Yeah, we did some work about a year or two ago, looking at the way AI had been used in the COVID pandemic, just trying to review the different tasks it was used for. The first task was signaling pandemic risk. A program called HealthMap, which is run by the Boston Children's Hospital, basically uses AI to read news feeds and social media, essentially looking for signals in people's behavior and what they're reporting, to see if there are early clusters. HealthMap detected the COVID pandemic first. It was seeing signals in social media in China suggesting that something was happening in that population before people mentioned it. There's a Canadian model called BlueDot, which I don't know too much about. Just a few days later it also detected the same kind of signal.
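The core statistical idea behind this kind of signal detection can be sketched briefly; the mention counts and alert threshold below are invented, and real systems like HealthMap and BlueDot are far more elaborate:

```python
import statistics

# Sketch of outbreak signal detection: compare today's count of, say,
# "unexplained pneumonia" mentions in a region's news and social feeds
# against the recent baseline, and flag unusual spikes.
daily_mentions = [3, 5, 4, 2, 6, 4, 3, 5, 4, 3]  # trailing baseline
today = 19

mean = statistics.mean(daily_mentions)
sd = statistics.stdev(daily_mentions)
z = (today - mean) / sd

if z > 3:  # illustrative threshold: >3 standard deviations above normal
    print(f"ALERT: mention count {today} is {z:.1f} sd above baseline")
```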
ST. JOHN: Just a bit of background. Back in the late 90s, due to a couple of scares we had in Canada, we thought Canada needed an early warning of things that might be imported from around the world, given that you could come from anywhere in the world in 24 hours, incubating something that might create an outbreak of disease. We thought the early internet could help find the information we were looking for. But at that time, the fastest search engine was going to take one whole month to make a pass, whereas we wanted stuff in five minutes.
What we landed on was reading RSS feeds from the media. We created the Global Public Health Intelligence Network, or GPHIN. Now that's archaic technology, and you have AI to look not only at newsprint but at all the other social media jabber going on around the world. But certainly, tapping into news media and social media is the way to go in AI, which can now do the job much faster and better than humans can.
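Today the RSS-reading step itself is a few lines of Python with the feedparser library; the feed URL and keyword list below are placeholders, not a real surveillance configuration:

```python
import feedparser  # third-party: pip install feedparser

# Minimal GPHIN-style ingestion sketch: pull headlines from an RSS
# feed and flag items mentioning outbreak-related keywords.
FEED_URL = "https://example.com/world-news.rss"   # placeholder feed
KEYWORDS = {"outbreak", "pneumonia", "cluster", "quarantine"}

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(word in text for word in KEYWORDS):
        print("possible signal:", entry.get("title", "(no title)"))
```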
COIERA: Yeah. The lesson is that you can do it, and now we have ChatGPT. We knew that this was coming, but the sophistication of the technology! It is more capable than we thought would be possible at this time.
We’re going to have new pandemics because of shifting patterns of climate. We’re going to have floods, heat events, smoke events, cold events. Each one is a shock to the health system. So, the big question now is, how do we re-engineer healthcare to be resilient enough to deal with all these things? If we do another COVID response, we’re not going to have many doctors and nurses left willing to play. It’s just going to be too hard. But we actually have a very interesting project – diagnosing COVID based on cough. Just based on the kind of cough, you can distinguish whether you’ve got COVID or not. You cough into your smartphone. It’s performing better than a lot of these swab tests.
There are different labs and centers now that share information. We have a national alliance here with about 100 organizational members – not just universities. It's industry, health service providers, and clinicians.
ST. JOHN: I’m sure you’re familiar with the International Bank of genetic sequences for viruses. Is there a bank for software programs for AI?
COIERA: We'd like that to happen for the sake of replicability. Just because somebody publishes a study doesn't mean it's true. They might have made a mistake, or biased it, wanting a certain answer because of a financial conflict of interest. In my field of AI, people write their own algorithms, publish a result, and nobody checks their homework to see if it was true or not. So, one solution is to have open repositories of patient data, so everybody can test their systems on the same data, and open repositories of the algorithms themselves.
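The mechanics of “checking the homework” are straightforward once a shared dataset exists; here is a sketch using a scikit-learn built-in dataset as a stand-in for an open patient-data repository, with a fixed split so anyone can re-run the published number:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for an open repository: everyone evaluates on the same
# held-out split, so a published result can be independently re-run.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)  # fixed split: reproducible

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC on the shared test set: {auc:.3f}")  # anyone can verify
```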
ADAM WYNNE: I've read that you were a member of the pandemic working group of the Global Partnership on Artificial Intelligence, which is one of the broader international artificial intelligence organizations.
COIERA: Right. GPAI was put together under the OECD to try and bring together some rapid response to the pandemic. One of the big things is the articulation of ethical principles in the use of AI. What is appropriate, what is the safe use of AI? Not just in health care, but in the military, etc.
A science fiction reader may be familiar with Isaac Asimov and his Robot series. The rules that all robots have to follow are: Don’t harm people. Don’t kill people. Do what you’re told. But a clinical AI would break all those rules if it did its job.
One example is end of life. In the last couple of weeks of life, many people end up, unfortunately, in hospital, though they probably need to be at home with their loved ones. They don’t need heroic intensive care. They need the tube pulled out and to be comfortable. And that’s a decision to be made. But if you make that decision to remove care, you’re allowing somebody to die and you’ve broken the Asimov rules.
We have algorithms that are very good at predicting whether you're about to die in hospital. Really, really accurate. So, what is the ethics of using that algorithm to decide whether to withdraw care? Such decisions involve the family and caregivers, with guidance from the clinical team. But a clinical team now may rely on the technology to push them toward one recommendation or the other.
SPENCER: I guess real, practical cost-benefit analysis plays into real decisions anyway. I've heard of cases where the hospital says, now look here, this person has been on a machine for 40 years. That's long enough. And then the family keeps saying no, don't turn it off.
COIERA: Yeah, things that we used to do occasionally are now becoming pervasive. Another good example of an issue is algorithmic bias. Algorithms might suggest what sort of treatment you should get, or what your chances of survival are. But people from one socio-economic group, or one ethnic group, might have very poor health outcomes for reasons around inequity. The algorithm doesn't know that and says, “Oh, look, if I see a patient like that, they're not going to do well.” So, they don't get treatment.
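A small synthetic demonstration of the mechanism Coiera describes (group labels, outcome rates, and all numbers are invented): if one group's historical outcomes are worse for reasons of inequity, a model trained on outcomes alone reproduces the disparity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration of algorithmic bias. Group membership (0/1)
# is the only feature; group 1's historical outcomes are worse for
# reasons of inequity, not biology. All numbers are invented.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)
good_outcome = rng.random(2000) < np.where(group == 0, 0.8, 0.5)

model = LogisticRegression().fit(group.reshape(-1, 1), good_outcome)

for g in (0, 1):
    p = model.predict_proba([[g]])[0][1]
    print(f"predicted chance of doing well, group {g}: {p:.2f}")
# The model faithfully encodes the inequity: using it to ration
# treatment would deny care to the group already underserved.
```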
SPENCER: Somebody’s going to own the algorithms and charge rates for diagnostics. How’s the economics of medicine going to be affected by the use of this thing?
COIERA: I don't have the answer. There's also something called “algorithmic sovereignty.” It's important for nations to have control of the algorithms that run their country. If you've got foreign actors creating the software that runs your electricity grid, your transport grid, your water supply, you don't really know what's in the software. So, there are societal risks in not having that capability.
SPENCER: How's it going to change the working life of doctors – the whole profession? If much of the work can be given over to machines, it's going to affect the job market; it's going to affect the whole structure of the medical profession.
COIERA: Early on, people were saying that AI was going to put doctors out of work. We now know that’s nonsense. Look at all those patients who have problems who are not getting the care they need! There’s more than enough work to be done.
