A friend of mine travelled to America for surgery. On her return, she proceeded to pester me about all the wonderful things she had seen in their hospitals.
“Do you know they do robotic surgery? Do you know that they checked my blood glucose levels without drawing blood? Do you know that I could see my surgery entirely on screen?” she asked.
By the time she was done, I was thoroughly exhausted. Then she said something that irked me: “You people will soon be out of job fa! Computers have taken over.”
I let out a small laugh and followed with a question: “Would you allow a robot to operate on you without human supervision?”
She was quiet for a bit as she mulled over the question. Finally, she let out a small ‘no.’
Exactly two weeks ago, I discussed the pros of Artificial Intelligence in medicine and its role in healthcare delivery. Today, we are going to dissect the cons.
For me, by far the biggest con is the ethical dilemma that AI poses. AI’s deployment in healthcare raises complex ethical questions with unclear liability and accountability. Who is responsible for AI-related mistakes? If a robot operates on you alone, whom do you contact in cases of complications? Who will be liable? How should AI handle end-of-life decisions? How do we ensure that AI does not perpetuate healthcare disparities or demographic biases?
Another disadvantage is that AI still needs human surveillance. Although AI has come a long way in the medical world, human oversight remains essential. For example, most robotic surgeries still have a human controlling or supervising the machine. Additionally, robots operate logically rather than empathetically. Health practitioners may make vital behavioural observations that help diagnose or prevent medical complications.
Let me give an example using prostatectomy. Robot-assisted surgery has become very popular in urology, particularly in the United States, where it is extensively used to excise prostate cancer because it enables access to anatomical areas that are difficult to reach. However, robot-assisted surgery carries not only the risk of human error in operating the robotic system but also the potential for mechanical failure: components such as the robotic arms, camera, robotic tower, binocular lenses and instruments can fail. In other cases, the electrical current in a robotic instrument can leave the robotic arm and be misapplied to surrounding tissue, resulting in accidental burn injuries. Likewise, robot-assisted surgery can cause nerve palsies due to extreme body positioning or direct nerve compression.
Another dimension to AI is its inability to detect social variables; patients’ needs often extend beyond their immediate physical conditions. Social, economic and historical factors can all bear on the appropriate recommendation for a particular patient. For instance, an AI system may be able to allocate a patient to a specific treatment based on diagnosis, yet fail to account for that patient’s economic constraints or other personal preferences.
Back in medical school, we were taught to assess patients holistically before prescribing medication or deciding on a mode of treatment. The doctor looks at how well dressed you are, your shoes, your watch, the way you speak, before prescribing certain medications. What good will it do to prescribe Exforge HCT, a hypertension medication that costs more than N30,000 per pack, to a man who sells fruits by the road when he can buy Bendroflumethiazide and Amlodipine for N2,000? A human being can detect the difference; a machine cannot. AI will just prescribe whatever is written in the books. You see the problem?
Again, while AI systems can be highly accurate, they are not infallible. There is always a risk of misdiagnosis or overlooking crucial information, leading to potentially life-threatening errors. Medical AI depends heavily on diagnosis data available from millions of catalogued cases. In cases where little data exists on particular illnesses, demographics or environmental factors, a misdiagnosis is entirely possible. This factor becomes especially important in rare conditions.
Another concern is data privacy and security. AI relies on vast amounts of sensitive patient data, which makes data privacy and security a paramount concern. The misuse of, unauthorised access to, or exposure of this data can have serious personal, ethical and legal consequences. Because AI generally depends on data networks, its systems are susceptible to security risks. With the onset of offensive AI, improved cybersecurity will be required to ensure the technology remains sustainable. According to Forrester Consulting, 88 per cent of decision-makers in the security industry are convinced that offensive AI is an emerging threat.
Lastly, there is the need to address the elephant in the room: unemployment. Although AI may help cut costs and reduce pressure on clinicians, it may also render some jobs redundant, displacing professionals who invested time and money in healthcare education and raising equity concerns. The automation of administrative tasks, and even some clinical functions, fuels fears of job displacement within the healthcare industry; however, I am confident that this transition can be managed harmoniously by striking a balance in how the technology is deployed.
A 2018 World Economic Forum report projected that, by 2022, AI and automation would displace or destroy some 75 million jobs, yet create enough new roles to yield a net gain of 58 million. The major reason for this elimination of job opportunities is that, as AI becomes more integrated across different sectors, roles that entail repetitive tasks will become redundant.
Well, here we are. Truth is: Whether we like it or not, artificial intelligence is here to stay. While the incorporation of AI into healthcare has the potential to revolutionise patient care, improve outcomes and streamline operations, it is crucial to address the associated challenges, such as data privacy, diagnostic accuracy and ethical considerations.
As we continue along this transformative journey, a cautious and intentional balance between AI and human intervention will be key to responsibly harnessing the full potential of this technology in healthcare.