What is really interesting about Covid-19 is that an AI-driven algorithm used by a Canadian company called BlueDot actually sent details of a flu-like illness spreading in China to its customers on 31st December 2019, nine days before the World Health Organisation publicly announced it. The algorithm draws on foreign-language news reports, plant and animal disease networks and official notices to analyse data and make predictions. The founder of the company, Kamran Khan, had learned lessons from working in Toronto as an infectious disease specialist during the Severe Acute Respiratory Syndrome (SARS) outbreak of 2003; SARS was a precursor of Covid-19 from the same coronavirus family. This is perhaps one of the best potential uses of AI in medicine and public health in the future, as the AI predictions produced by BlueDot are checked by epidemiologists to make sure that they are sensible and then sent to health officials and organisations who may be slower to spot these trends. In a viral epidemic, where speed of response has been shown to be vital this year, this is of huge importance.
What I have discussed above are just a few of the examples of the use of AI in the present day that I came across in my research and reading, but what really interested me while writing this project was what AI will look like in the future.
The Future
The four interviews that I carried out gave interesting views on the future of AI, but what was clear was that even among a small group of professionals the definition of AI varied. All four were aware that an algorithm is basically a set of instructions, but AI, as Michael put it, was much harder to define. He felt that AI had a human-like element to it, while others described it simply as intelligence produced by artificial means, or as using that intelligence to do things that humans could not do. Two mentioned how unhelpful the Hollywood version was, quoting The Terminator and Minority Report as films that give human qualities to AI; even this well-qualified group were influenced by it when it came to a definition. And if it is hard to agree on a definition, deciding what AI's impact on the future will be is harder still. As Raghul put it, the AI goalposts keep moving: any definition you come up with will be revised again and again.
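To make that baseline concrete, here is a minimal sketch in Python of an algorithm in that everyday sense: a fixed, step-by-step set of instructions (Euclid's method for finding the greatest common divisor) with nothing "intelligent" about it. The example is my own illustration rather than anything the interviewees referred to.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed set of instructions followed step by step.

    Repeatedly replace the pair (a, b) with (b, a % b) until b is zero;
    the value left in a is the greatest common divisor.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```

Every step here is fully specified in advance; the contrast with AI, where behaviour is learned from data rather than written out by hand, is exactly what made a definition harder for the group to pin down.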
Certain things came up repeatedly in the interviews. All accepted that there will be increased use of AI both in their own fields and in society in general, for example through the Internet of Things, which is essentially everyday devices connected to the internet. Increased use of AI-driven robotics was also mentioned, as was the fact that AI is already widely established in daily life, as I have discussed in the previous section.
Regulation of the use of AI also came up as an issue. The financial sector is heavily regulated and seems likely to cope, and the pharmaceutical industry seems similar, which matters now that AI has started to create new drugs such as the antibiotic Halicin, as discussed above. The legal profession might create its own problems: Patrick pointed out that data privacy law might actually hinder AI by restricting access to Big Data, as anti-trust issues may stop businesses in the same industry from pooling it. He also mentioned how Amazon have got around this by being so big that they can look at just their own data, and so use AI to select which of their own products to promote or services to develop that offer them the best profit margin. The problem is that if we believe Moore's Law, under which computers double in power and halve in price every 18 months, the speed of change will be faster than regulation can keep up with. The bleeding edge will be ahead of the regulated edge.
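As a rough back-of-the-envelope illustration of that compounding, here is a short Python sketch. It assumes, purely for the sake of the arithmetic, that Moore's Law holds exactly as stated above, with a clean doubling every 18 months.

```python
# Back-of-the-envelope sketch: computing power doubling every 18 months.
DOUBLING_PERIOD_MONTHS = 18

def growth_factor(years: float) -> float:
    """How many times more powerful computers become after `years` years."""
    return 2 ** (years * 12 / DOUBLING_PERIOD_MONTHS)

for years in (3, 6, 9, 15):
    print(f"After {years:>2} years: ~{growth_factor(years):,.0f}x the power")
```

On that assumption the gap compounds to roughly 1,000 times within fifteen years, which is the arithmetic behind the bleeding edge pulling ahead of the regulated edge.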
The other issue adding to this is the future use of quantum computing, which is expected to speed up computers enormously and allow greater and quicker AI. This might allow someone, for example, to break the encryption protecting banking or transport systems. Will the world move fast enough to regulate this, or to develop quantum-resistant encryption? There will be unexpected consequences of AI, and how we respond to these will be crucial.
All of these fit with the concept of Future Shock, which Alvin Toffler first wrote about in 1970. He defined it as "a certain psychological state of individuals and entire societies" resulting from "too much change in too short a period of time." As technological shocks occur more quickly than we can react, the impact on industries and society may be huge and disruptive.
Human acceptance and understanding of AI will also be a factor. Patrick was of the view that using AI instead of judges to decide cases may be difficult for the public to accept unless it is in relatively simple, data-driven cases where decision making is based on limited data, such as speeding fines. In his view AI will not replace lawyers, as it is not sophisticated enough: it would take longer to write the software to allow a computer to, for example, do the necessary due diligence than for a human simply to do it. The danger, Michael pointed out, is that if humans don't understand the decisions being made but accept them passively, they may come to regard AI almost as a quasi-religious being. This would be open to abuse.
Ethical Issues
All of the interviewees could see significant ethical problems in the future with the use of AI. The driverless car was a popular example: if it kills someone, who is responsible? Or if, to protect the driver, it chooses to hit a small object rather than a large one, but the small object is a toddler and the large object is a dog, then who answers for that? One interviewee also pointed out that the answer may vary in different parts of the world depending on cultural acceptability. Will this lead to the same AI idea being implemented differently in different political systems?
Leading on from this, the issue of the availability of AI was mentioned, in that its benefits as a whole may be tailored towards the more developed world, widening the inequality gap between haves and have-nots. Bias may also occur against certain groups. Some machine learning programmes have looked at reading facial expressions, but where the systems have been trained on Caucasian faces they perform far less well on black faces, creating bias. Any system will only be as good as the data used: if the data is biased, so will be the AI. In a similar way this could be used for political reasons. The Black Lives Matter movement has highlighted that some organisations are trying to use facial recognition systems to decide whether or not someone is likely to go to jail. But if these use data from the present US jail system, they will pick up a higher proportion of black males as inmates, and using this data will simply continue the same bias. Some may try to sell the result as an AI decision, but if the data used is biased against black individuals then the AI will simply reflect the racism of those choosing the data, as the sketch below illustrates. Michael wondered whether, if AI becomes a replacement for sentient thinking, it will affect everything; the problem then is that the ethics of the programmer become more important. Will national traits or, for example, political views affect the AI output?
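The "biased data in, biased AI out" point can be shown with a deliberately simplified Python sketch. The groups, numbers and outcomes below are entirely invented for illustration; the point is that a model "trained" on skewed historical records does nothing more than reproduce the skew.

```python
from collections import Counter

# Hypothetical historical records: (group, outcome) pairs. The imbalance
# reflects bias in past decisions, not any real difference between groups.
history = ([("A", "jailed")] * 70 + [("A", "released")] * 30 +
           [("B", "jailed")] * 30 + [("B", "released")] * 70)

# "Training": count the outcomes recorded for each group.
by_group = {}
for group, outcome in history:
    by_group.setdefault(group, Counter())[outcome] += 1

def predict(group: str) -> str:
    """Predict the majority historical outcome for a group."""
    return by_group[group].most_common(1)[0][0]

print(predict("A"))  # -> 'jailed'
print(predict("B"))  # -> 'released'
```

The model has learned nothing about individuals; it simply echoes the historical skew, exactly as the interviewees warned a real system trained on biased jail data would.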
Simpler problems arise too. If AI is used to generate knowledge of an individual's DNA, and a rare disease is identified as a possibility, will insurers use this data to refuse medical insurance? Privacy may become an issue. One of the obvious issues in the world of Covid-19 is the use of apps to track movements. But more covertly, companies may wish to buy big data and use it to generate AI useful for employment selection. An individual may fail to get a job and never know why.
Patrick raised the issue of the age of those making decisions about AI use: most governments and boards of big companies are made up mainly of men aged over 50, who are less likely to understand AI yet make the big decisions. He hoped that they would have the sense to accept advice from those who do understand these systems.
So it would appear that AI is a tool with upsides and downsides. It can be used ethically but is also open to misuse. Perhaps the main question regarding the ethical problems is: are we as a race learning fast enough to be able to deal with them?
Conclusions & Final Comments
Overall, what I have learnt from researching algorithms and artificial intelligence is that their uses are broad and versatile and affect all aspects of modern-day life. In my opinion their greatest advantage is the ability to process large amounts of data rapidly, at a scale no human could manage. This allows us to gain insights into a wide range of fields. We are already seeing positive advantages, such as the development of a new antibiotic, and the potential for wider future use is obvious. My research also taught me how important it is that any AI is trained correctly, with the right data, to prevent a flawed AI. It is still true that rubbish in produces rubbish out! The problem is that if this is not recognised, there can be serious and unfortunate consequences when AI is applied.
What is particularly interesting are the ethical and legal issues that are developing. I believe that while there are problems with regulation that will not be solved in the near future, these will eventually be resolved as the adoption of AI across more fields continues. However, the ethical and moral issues that arise are more complicated and will change as society develops over time; whether we can react to them as a group fast enough is critical if we are to avoid AI being misused more than it is applied usefully. The final lesson from this project that I found interesting is that people's views differ even on a basic definition of what an algorithm is and what is meant by artificial intelligence. It is these definitions that will need to be made clearer if AI in the future is to be not only useful but universally accepted.