Imagine a world in which the precision of artificial intelligence is integrated with the experience of health practitioners to make medicine more effective. Artificial intelligence, the theory and development of computer systems able to perform tasks that normally require human intelligence (such as visual perception, speech recognition, decision-making, and translation between languages), has become a popular field of study in recent decades, and it increasingly intersects with other fields such as medicine. The medical field, broadly defined as everything relating to physicians or the practice of medicine, includes nurses, doctors, and various specialists. Artificial intelligence developed collaboratively by programmers and medical researchers should therefore be used, as it has greatly benefited the medical field by allowing medical procedures to be performed more efficiently and with less risk.
The figure above shows the trend in the number of research articles published about artificial intelligence in healthcare from 2013 to 2017. The number of articles has grown exponentially, reflecting both the increased interest in the role of artificial intelligence in healthcare and the growth in technology that allows new developments to occur.
Algorithms could make the practice of medicine and healthcare much easier, but they are not yet able to replace actual practitioners, because many regulations are hard to program and many decisions are left to the discretion of individual practitioners. “We think it’s important to emphasize that these tools are never going to replace clinicians. These technologies will provide assistance, helping care providers see important signals in massive amounts of data that would otherwise remain hidden. But at the same time, there are levels of understanding that computers still can’t and may never replicate” (Strait 2018). Additionally, “defining the qualities necessary for an algorithm to be deemed sufficiently accurate for the clinic, while addressing the potential sources of error in the algorithm’s decision making, and being transparent about where an algorithm thrives and where it fails, could allow for public acceptance of algorithms to supplant doctors in certain tasks. These challenges, however, are worth trying to overcome in order to universally increase the accuracy and efficiency of medical practices for various diseases” (Greenfield 2019). Some might argue that, given how quickly technology is advancing, it is hard to predict whether it could improve to the point of replacing health practitioners. It is not that simple, however: many regulations must still be addressed, and many ethical questions cannot be programmed but must instead be decided by an expert or professional. For example, the question of whether to save the mother or the baby during a dangerous childbirth poses a huge dilemma and is often left to the practitioner’s judgment, based on morals and each patient’s chances of survival.
The aid of artificial intelligence could also improve the precision of many medical procedures, such as surgeries. A counter-argument is that the use of artificial intelligence in medicine is unnecessary because doctors have performed procedures well enough without it for many years. All fields are constantly evolving, however, especially the sciences; if there is a way for doctors to perform procedures with more ease while causing less discomfort to the patient, why should we not take it? Additionally, many kinds of tests are beginning to be performed with artificial intelligence, and these tests can help diseases be diagnosed earlier so that treatment can begin sooner and survival rates increase. This is shown by the statement “popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and the modern deep learning, as well as natural language processing for unstructured data. Major disease areas that use AI tools include cancer, neurology and cardiology” (Jiang et al., 2017). Another counter-argument is that, beyond the issues already discussed, the advancement of technology in the medical field raises many ethical issues, such as genetic engineering. “‘The concept of altering the human germ line in embryos for clinical purposes has been debated over many years from many different perspectives and has been viewed almost universally as a line that should not be crossed,’ Francis S. Collins, director of the National Institutes of Health, said in response to the news that scientists in China were using gene-editing technology to alter human embryos” (Erickson 2015). “Thus, in summary, genetic engineering is beginning to deliver meaningful products to the market place, proving that it is a viable body of technology on which to base an industry.
Its rites of passage serve to drive home the tired lessons that good technical ideas need to be complemented by good management, financial savvy, and an eye to the market to be successful” (Dickson 1984). However, issues such as these will never have a truly ethical outcome. Even so, this technology will help many people who are diagnosed with diseases become healthier, and that is still beneficial to many people.
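The machine-learning methods for structured data mentioned above can be illustrated with a deliberately simplified sketch. The example below is not a clinical tool: it uses a basic nearest-centroid classifier (a much simpler cousin of the support vector machine named in the Jiang et al. quote) on invented patient features, purely to show how such a model learns to separate labeled records into classes.

```python
# Hypothetical sketch: nearest-centroid classification of structured
# patient records. All features, values, and labels are invented for
# illustration; real clinical models are trained and validated far
# more rigorously.
# Each record: ((resting_heart_rate, systolic_bp), risk_label).

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(records):
    """Compute one centroid (average feature vector) per class label."""
    by_label = {}
    for features, label in records:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

training_data = [
    ((62, 115), "low"), ((58, 110), "low"), ((65, 120), "low"),
    ((95, 160), "high"), ((102, 155), "high"), ((98, 165), "high"),
]
model = train(training_data)
print(predict(model, (60, 112)))   # resembles the "low"-risk cluster
print(predict(model, (100, 158)))  # resembles the "high"-risk cluster
```

The point of the sketch is only that a model generalizes from past labeled records to new ones; a real diagnostic system would use far richer data and a validated algorithm.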
Lastly, the data collected by researchers in clinical studies in recent years is enough to improve the function of artificial intelligence substantially. For example, “accurate predictions of sequence repair could allow researchers to computationally predict the precise guide RNAs that will reproduce exact human patient mutations, leading to the development of better research models to study genetic disease” (Yeager 2019). Additionally, “experts in artificial intelligence (AI) are working to bring computers into the clinic. Advances in a technique called “deep learning” help computers to find patterns in massive data sets, which should be very useful in medicine” (Johnson 2017). Some people say that more data should be collected before we let artificial intelligence deal with something as valuable as human lives. The problem with that argument is that all technology has to start somewhere: it must first be implemented before we can study it and learn how to improve it. It is therefore a better idea, and more beneficial in the long run, to implement it in certain procedures now so that data can be collected. Practitioners could start by implementing it in relatively safe and simple procedures, then move on to more difficult ones as they collect more data and improve the technology.
The figure above shows where most artificial intelligence research in medicine is directed. It is most often applied to neoplasms, followed by research into problems such as nervous system disorders and cardiovascular diseases.
Artificial intelligence helps medical procedures become more precise, and advancements in the field continue to develop. Algorithms could make the practice of medicine and healthcare much easier, but they cannot yet replace actual practitioners, because many regulations are hard to program and many decisions rest with individual practitioners; there is little chance that they will ever be able to do so. Additionally, the aid of artificial intelligence could improve the precision of many medical procedures, such as surgeries. Finally, the data collected by researchers in clinical studies in recent years is enough to improve the function of artificial intelligence substantially. Thus, artificial intelligence developed collaboratively by programmers and medical researchers should be used, as it has greatly benefited the medical field by allowing medical procedures to be performed more efficiently and with less risk.