Essay: Could Artificial Intelligence Ever Replace a Doctor?

Artificial Intelligence is becoming increasingly capable of performing complicated tasks reliably. I am going to discuss its potential to replace a doctor – specifically a surgeon, researcher or consultant. Some tasks in a doctor's routine are already being automated, just as many everyday tasks are for the general public. However, there are many moral and ethical concerns with having an artificially intelligent doctor: for example, it lacks compassion and empathy. There are many positive and negative factors to consider from both scientific and moral viewpoints.
Firstly, many examples of Artificial Intelligence rely on optimisation algorithms such as Genetic Algorithms. According to Jenna Carr, these are optimisation algorithms that adapt their calculations to effectively "evolve" and become more accurate, in a process similar to natural selection and Charles Darwin's theory of "Survival of the Fittest". This research is very reliable as it had to be verified before being published on an educational website. One of the most famous and widely used machine-learning techniques, related to but distinct from genetic algorithms, is the neural network, which mimics the neural structure of the brain. It uses weighted sums (linear equations) of its inputs to form predictions and a quadratic, squared-error function to calculate how incorrect those predictions are. The predictions are made from a data set containing inputs whose correct outputs are known. Several of these data sets, consisting essentially of questions and answers, are used to train and "test" the Artificial Intelligence. The difference between the prediction and the correct result is squared, which forms the function of error – the cost function. This results in a quadratic graph, and the minimum of the parabola is theoretically the "most correct" setting. The neural network uses backpropagation to adjust the coefficients and constants of each "neuron" so as to move towards that minimum. After the network has been trained it should be able to make correct predictions without needing to be verified. This could pertain to thousands of medical situations, and the Artificial Intelligence would have impeccable memory along with excellent decisiveness and precision. Moreover, the neural network will only improve over time if it continues to adapt based on its experiences.
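To make this training process concrete, the short Python sketch below is a minimal illustration of my own (the data set is made up for the example and has nothing to do with medicine): a tiny network makes predictions from weighted sums, measures its error with the squared-difference cost function described above, and uses backpropagation to nudge its coefficients and constants towards the minimum of that cost.

```python
# Minimal illustrative sketch of training a tiny neural network with a
# squared-error cost function and backpropagation (made-up data, not medical).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "question and answer" data set: inputs X with known outputs y.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)          # known correct outputs

# One hidden layer of 8 "neurons": coefficients (weights) and constants (biases).
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass: linear combinations plus a non-linearity give the prediction.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2

    # Cost function: the squared difference between prediction and correct result.
    err = pred - y
    cost = np.mean(err ** 2)

    # Backpropagation: gradients of the cost with respect to each weight and bias.
    d_pred = 2 * err / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)        # derivative of tanh
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Adjust the coefficients and constants to move towards the cost minimum.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training cost: {cost:.4f}")
```

A real medical system would of course use far larger networks and far more data, but the principle of minimising a cost function on known question-and-answer examples is the same.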
However, the stochastic nature of a neural network means that there is no guarantee the algorithm will always be correct. As Lawrence Davis explains in "Genetic Algorithms and Simulated Annealing", genetic algorithms attempt to find a pattern in a seemingly random set of data. This information is quite reliable as it was sourced from a verified government website; although it was published in 1987 and may be considered outdated, the fundamentals of Genetic Algorithms haven't changed. If there is no real pattern in the data, the algorithm will still attempt to find one, which results in consistently incorrect predictions. Also, no-one, including the programmer, understands exactly what calculations are being made. Even if a graph were drawn to visualise the calculations, any widely used neural network would produce a graph with far too many dimensions for a human brain, which can only visualise three, to comprehend.
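As a simple illustration of this danger (an example of my own, using curve fitting rather than a full neural network), the sketch below "trains" increasingly flexible models on completely random data. The training error can always be pushed down, yet the pattern found is useless on fresh random data, which is exactly the failure described above.

```python
# Illustration: an optimiser will still find a "pattern" in random data,
# but that pattern does not generalise to new data.
import numpy as np

rng = np.random.default_rng(1)

# Purely random inputs and outputs: there is no genuine relationship to learn.
x_train, y_train = rng.uniform(size=50), rng.uniform(size=50)
x_new, y_new = rng.uniform(size=50), rng.uniform(size=50)

for degree in (1, 5, 9):
    # Fit a polynomial of increasing flexibility to the random training data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: training error {train_err:.3f}, "
          f"error on fresh random data {new_err:.3f}")
```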
Therefore it is impossible to change anything in a neural network without constructing an entirely new one – this is the issue of editability that concerns people regarding Artificial Intelligence. It is stated as one of the problems with Artificial Intelligence in Nick Bostrom and Eliezer Yudkowsky's journal article; they are both from research institutes, so their insights are reliable and relevant. This flaw makes it even more important that the data set is extremely accurate and, most importantly, representative of the real-life scenarios and problems the Artificial Intelligence will be expected to face and solve. If these data sets are not accurate, then patients may be diagnosed incorrectly or be put in danger, especially after surgery. However, a study carried out in 28 European countries found that after elective surgery 4% of patients died before hospital discharge and a further 8% were admitted to critical care for a median length of 1-2 days. These statistics are from surgeries conducted by human doctors, showing that even without Artificial Intelligence there is still a risk of losing patients: there is a risk of the Artificial Intelligence miscalculating just as there is a risk of a human doctor miscalculating. The extensive archive of existing doctors' reports and medical records should ideally suffice to lower the risk of miscalculation; if not, simulations of previous medical cases can be used as a final test for the Artificial Intelligence, which could then be verified by human doctors for increased credibility.
Despite these issues, genetic algorithms are used very frequently. In a survey of 75 people, 68 said that they have used some form of Artificial Intelligence in their lives, such as YouTube recommendation algorithms, image recognition and self-driving cars; there are also personal assistants such as Siri, Cortana, Alexa and Google Home. These examples are widely varied, which shows the versatility of Artificial Intelligence. Despite its flaws, Artificial Intelligence has proved to work well in these settings without significant problems. Be that as it may, some may argue that these specific applications are simple and carry no consequences if there is an error: for example, if YouTube recommends videos the user isn't interested in, no harm is done. In the case of medicine, even a slight error or incorrect judgement by the Artificial Intelligence, given the thousands of factors involved, can have catastrophic consequences.
Nonetheless, personal assistants are becoming more and more useful and accurate at judging and imitating human behaviour, which shows that they are getting better at understanding human cognition. For example, the Turing Test created by Alan Turing was nearly passed by Cleverbot in 2011, despite many people still being able to identify it as a robot. This is reported by the BBC, a well-known and reliable news organisation. In the near future it is very possible that another Artificial Intelligence will pass the test and actually be believable; the aforementioned examples of Siri, Alexa and Google Home are already on that path. This suggests that one day, despite its current state, Artificial Intelligence could fully imitate human behaviour and understand complex situations. This would greatly reduce the margin of error in its decision-making process and make it more viable as a replacement for a doctor.
There are two main types of doctor that people interact with: surgeons, and consultants such as General Practitioners. According to the UCAS website, which is very reliable and experienced concerning such matters, surgeons are required to carry out a number of tasks: discussing the details of the medical process with the patient and potentially their family; arranging and taking tests; carrying out operations; filing reports concerning the patient and informing related consultants; and checking up on patients. The majority of these seem feasible for Artificial Intelligence to attempt – in fact, personal assistants can already arrange tests and file reports. However, in some primary research, when asked "Do you think there are any issues with AI?", the first issue 8 out of 75 people thought of was that software is not empathetic, which may severely worry and unsettle patients. On the other hand, some people think that a robot doctor would make people suffering from anxiety or body-image concerns more comfortable during consultations and check-ups, as the lack of empathy would create a judgement-free encounter. Then again, the idea of a robot operating on people may deeply concern them. Yet, with future advancements, it may be more practical and even preferable to have a robot operating on them rather than a human, as the risk may be lower and it may be more efficient (since the robot wouldn't need to be a humanoid shape, i.e. it could have extra limbs or other auxiliary features). For a diagnosis, many people think that they would accept Artificial Intelligence; in fact, some commented that they do not think it is any different from self-diagnosing using official medical websites. This is particularly useful because 67 out of 75 people (89%) think the wait to see a doctor is too long, and since the software is portable, people could be accurately diagnosed at any time and place – eradicating the unnecessary wait time.
However, there is a significant concern that Artificial Intelligence in general reduces the number of available jobs and therefore increases unemployment. In a survey of experts on the economic impact of robotic advances and AI, 42% believed that Artificial Intelligence will automate many blue-collar and white-collar jobs, leaving many people unemployed and causing social distress. On the other hand, the majority of 52% believed that although many jobs will be automated, human ingenuity will find new jobs and tasks to do, likening the change to the Industrial Revolution. It is also stated that only blue- and white-collar jobs are currently threatened, whereas a doctor is a gold-collar job. This indicates that it may take a long time for technology to progress enough to automate that role, but it is still possible.
Furthermore, there are many moral issues concerning Artificial Intelligence in general, not just in medicine. Nick Bostrom and Eliezer Yudkowsky note the issues of transparency and editability mentioned above with neural networks – a significant concern, as the software can become unpredictable. This volatility would cause people to lose confidence in Artificial Intelligence, as to them it appears that it may malfunction at any given moment. Bostrom and Yudkowsky also mention the fear of exploitation: if Artificial Intelligence is too predictable, then people may find loopholes or may even hack the software and manipulate it for their own malicious ends. This raises several data privacy issues. In order for the Artificial Intelligence to make optimally accurate predictions, it would require vast amounts of data, and the people who have to provide this may feel uncomfortable, as that information in the wrong hands could have devastating repercussions. Accountability is another problem: if the Artificial Intelligence fails at the set task, it can't be held accountable. Some may believe that the programmers are next to blame, but it is likely that, if the software is commercial, the developers would insist on zero accountability in the contract. This makes the organisation using the software responsible and would possibly deter it from using Artificial Intelligence at all. These issues would severely impede the likelihood of Artificial Intelligence actually being used for high-level tasks with significant consequences. However, a main cause of this "fear" is not knowing the real risks and, as a result, imagining them to be exaggerated versions of what they really are. In the future, the main concerns of data security, privacy and malfunctions should become less and less of an issue. Also, by the idea of "argumentum ad populum", as more people use Artificial Intelligence, the general public will increasingly come to accept it.
In addition, even if direct contact with Artificial Intelligence is not welcomed, it can still be used to aid the research side of medicine. For example, a hexapod Artificial Intelligence was assigned the task of walking with its feet touching the ground as little as possible. It found a way to achieve 0% contact; the programmers were confused until they saw that it had flipped itself upside down and was walking on its elbows. This shows the ability of Artificial Intelligence to find innovative and unique solutions that humans wouldn't have considered – for example, predicting mutations of viruses such as the common cold and effectively finding cures for them. This is most likely because its judgement process is not influenced by its environment, whereas, according to Genetics Home Reference, human intelligence is strongly influenced by the environment. Therefore the use of Artificial Intelligence in medical research can provide unexpected perspectives, which could be developed into significant breakthroughs.
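The kind of open-ended, fitness-driven search behind that result can be sketched in a few lines of Python. The example below is a generic genetic algorithm of my own, not the hexapod experiment itself, and its fitness function is a deliberately simple stand-in: candidate solutions are scored, the fittest survive and breed, and whatever maximises the score wins, whether or not it is the answer the designer had in mind.

```python
# Generic sketch of a genetic algorithm: score, select, breed, mutate, repeat.
import random

random.seed(0)

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 60


def fitness(genome):
    # Hypothetical objective: reward genomes containing as many 1s as possible.
    return sum(genome)


def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]


def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]


# Random starting population of bit-string "solutions".
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # "Survival of the fittest": the best half become parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]

    # Breed and mutate to refill the population for the next generation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of a possible", GENOME_LEN)
```

In the hexapod case the fitness score was effectively the proportion of time the feet avoided the ground, and the algorithm maximised it in a way nobody had specified.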
