About Artificial Intelligence
Artificial Intelligence (AI), despite being prevalent in the everyday life of most individuals and touching almost every modern industry in some capacity, curiously lacks a precise, universally accepted definition.
AI was first named in the 1950s, when McCarthy, Minsky, and colleagues described artificial intelligence as “that of making a machine behave in ways that would be called intelligent if a human were so behaving” (source).
Artificial intelligence has been characterized in many ways: as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together” (Winston, n.d.); as “a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason and take action” (Panel, 2016); as “the activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson, 2010); and, since “AI can also be defined by what AI researchers do”, as “primarily a branch of computer science that studies the properties of intelligence by synthesizing intelligence” (Simon, 1995).
At its core, artificial intelligence is the ability of a machine to complete a task that, if done by a human, would require intelligence.
Types of AI
The general definition of AI is broad, as is the way it can be classified. AI is currently classified according to two separate systems: one classifies AI by its similarity to the human mind, while the other, a broader scheme more commonly used in the technology industry, places AI into three categories.
Classified by its relation to the human mind, AI falls into four separate categories:
Reactive: This is the original form of AI, and such machines operate in an extremely limited capacity. They emulate the ability to respond to different stimuli but have no memory-based functionality; they do not use previous experience to make decisions about their current actions. In basic terms, they cannot learn; they can only respond to a limited range of inputs.
Limited memory: In addition to the capabilities of reactive machines, this type of AI can learn from historical data to make decisions. These machines are trained using data stored in their memory as a reference model for solving problems. Almost all current AI fits into this category. (A toy sketch contrasting reactive and limited-memory behaviour follows this list.)
Theory of mind: This type of AI currently exists only in theory. Theory of mind “is the ability to attribute mental states — beliefs, intents, desires, emotions, knowledge, etc. — to oneself and to others” (Wikipedia, n.d.).
Self-awareness: This type also exists only hypothetically and is largely self-explanatory: it is an AI that has developed self-awareness.
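To make the distinction between the first two categories concrete, here is a minimal sketch (an illustrative toy in Python, not any real system; the agents, stimuli, and rules are invented for this example). A reactive agent maps each stimulus directly to a response, while a limited-memory agent also consults stored history:

```python
# Toy contrast between a reactive agent and a limited-memory agent.
# A reactive agent maps the current input straight to an output;
# a limited-memory agent also consults its stored past observations.

def reactive_agent(stimulus: str) -> str:
    """No memory: the same stimulus always produces the same response."""
    responses = {"obstacle": "turn", "clear": "forward"}
    return responses.get(stimulus, "stop")

class LimitedMemoryAgent:
    """Keeps a history of stimuli and uses it to inform decisions."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def act(self, stimulus: str) -> str:
        # If obstacles keep recurring, past experience changes the action.
        if stimulus == "obstacle" and self.history[-2:] == ["obstacle", "obstacle"]:
            action = "reverse"
        else:
            action = reactive_agent(stimulus)
        self.history.append(stimulus)
        return action

agent = LimitedMemoryAgent()
for s in ["clear", "obstacle", "obstacle", "obstacle"]:
    print(s, "->", agent.act(s))  # the last response differs due to memory
```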
These four types of AI can also be grouped under three broader classifications: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
ANI: the only form of AI that exists in our world today, often referred to as “weak AI”. These are intelligent systems that operate within a limited context, learning to carry out specific tasks through sets of self-learning algorithms rather than explicit programming, and they are at best a basic simulation of human intelligence.
Narrow AI is generally focused on performing a single task extremely well, often at much faster speeds and with higher accuracy than humans. While this form of AI seems intelligent, it operates under a far larger set of constraints and limitations than even the most basic human intelligence: such systems can only perform the specific tasks they were designed for, which is where the name “narrow AI” comes from. Reactive and limited-memory AI fit into this category.
AGI: this type of AI would have the same abilities as a human being; it could learn, perceive and understand independently, and build connections and generalizations across multiple fields in the same manner that humans can. This form of AI currently exists only in theory.
ASI: a theoretical type of AI that surpasses human intelligence and ability in every facet. An example of this would be Skynet from the Terminator series.
How does AI work?
As stated, the field of AI is concerned with creating machines capable of executing tasks that would otherwise require human intelligence. Machine learning is a subset of that field, one that allows machines to “learn” independently, and deep learning is a further subset of that, and the area currently producing the field’s greatest advancements.
“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.” – Frank Chen (Source).
Machine learning
Machine learning is a subset of AI that allows a system to learn from data without being specifically programmed to do so. It does this through sets of rules – or “algorithms” – that the system is able to follow.
This is achieved by training the system: it is fed data, uses statistical techniques to find patterns in that data, and derives from those patterns a rule or procedure that explains the data or can predict future data. More simply put, the system learns.
“In essence, you could build an AI consisting of many different rules and it would also be able to be AI. But instead of programming all the rules, you feed the algorithm data and let the algorithm adjust itself to improve the accuracy of the algorithm. Traditional science algorithms mainly process, whereas machine learning is about applying an algorithm to fit a model to the data. Examples of machine-learning algorithms that are used a lot and that you might be familiar with are decision trees, random forest, Bayesian networks, K-mean clustering, neural networks, regression, artificial neural networks, deep learning and reinforcement learning.” (IBM, 2018)
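As a small sketch of “fitting a model to the data” rather than programming explicit rules (the library choice, scikit-learn, and the synthetic data are assumptions made for this illustration):

```python
# Fitting a model to data instead of hand-coding the rule.
# Synthetic example: the true relationship is y = 2x + 1 plus noise;
# the algorithm adjusts itself to approximate it from the data alone.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))                    # inputs
y = 2 * X.ravel() + 1 + rng.normal(scale=0.5, size=200)  # noisy outputs

model = LinearRegression().fit(X, y)   # the "learning" step
print(model.coef_, model.intercept_)   # should land close to 2 and 1
print(model.predict([[5.0]]))          # predicting future data
```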
Machine learning methods are usually divided into two broad categories: supervised and unsupervised learning.
Supervised learning is where algorithms are trained using labelled examples. It is similar to learning by example: the system is given a data set with labels that act as the “answers”, and it eventually learns to tell the difference between the labels by comparing its outputs with the correct outputs – the answers – to find errors and adjust itself accordingly.
For example, a system might be shown pictures of cats and dogs; given enough data, it will learn to differentiate them by, perhaps, the structure of the ears or the shape of the face.
Once the system has been “trained”, it can then be applied to new data and classify it using the rules it has learnt.
The problem with supervised learning is that it usually requires enormous amounts of labelled data to work effectively, with systems potentially needing millions of images to, say, identify pictures of cats and dogs accurately.
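A minimal sketch of the supervised train-then-predict loop (assuming scikit-learn, with two synthetic numeric features standing in for image-derived measurements, since a real image classifier is beyond a short example):

```python
# Supervised learning sketch: labelled examples in, learnt classifier out.
# Two synthetic features stand in for image measurements
# (e.g. "ear pointiness" and "snout length"); labels: 0 = cat, 1 = dog.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
cats = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)   # the labelled "answers"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # training
predictions = model.predict(X_test)                     # applying learnt rules
print("accuracy:", accuracy_score(y_test, predictions))
```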
Unsupervised learning is where algorithms are trained using unlabelled data sets. The system is not given the correct “answer” for the data and instead must figure out what it is being shown. The aim of unsupervised learning is for the system to explore the data and try to identify patterns that can be used to classify and categorize it.
For example, unsupervised learning might cluster together data that can be grouped by similarity, such as a news website grouping together stories on similar topics.
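A brief sketch of this clustering idea (again assuming scikit-learn; the two point clouds are synthetic stand-ins for real items):

```python
# Unsupervised learning sketch: k-means clustering with no labels given.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two unlabelled blobs of points; the system must find the grouping itself.
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.4, size=(100, 2)),
    rng.normal(loc=[4.0, 4.0], scale=0.4, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # discovered group IDs
print(kmeans.cluster_centers_)                  # centres of the two groups
```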
Deep learning
Deep learning is a subset of machine learning that employs a system inspired by the human brain: neural networks. It operates through progressive layers, each of which subsequently extracts and composites information. As data is passed through the layers:
“each unit combines a set of input values to produce an output value, which in turn is passed on to other neurons downstream. For example, in an image recognition application, a first layer of units might combine the raw data of the image to recognize simple patterns in the image; a second layer of units might combine the results of the first layer to recognize patterns-of-patterns; a third layer might combine the results of the second layer; and so on.”
This allows systems to process large amounts of uncategorized and complex data efficiently, by breaking it down into smaller, simpler parts and using those parts to recognize complex, precise patterns in data that would not be possible using traditional machine learning techniques.
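To make the layer-by-layer flow concrete, here is a minimal sketch in plain NumPy (the layer sizes and the random, untrained weights are assumptions for illustration only; this shows the data flow, not a trained network):

```python
# Sketch of layered processing in a neural network (NumPy only).
# Each unit combines a set of input values and passes its output downstream,
# mirroring the "patterns, then patterns-of-patterns" idea quoted above.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: weighted combination + ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[0], n_out))  # untrained, random weights
    return np.maximum(0, w.T @ x)             # each unit combines all inputs

x = rng.normal(size=64)   # e.g. raw pixel values of a tiny image
h1 = layer(x, 32)         # first layer: simple patterns
h2 = layer(h1, 16)        # second layer: patterns-of-patterns
out = layer(h2, 2)        # final layer: e.g. scores for two classes
print(out)
```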
The larger the neural network and the more data it has access to, the better the performance of the system. However, deep learning requires enormous amounts of processing power and specific hardware (GPUs have made recent advancements possible), long training times and large amounts of data to work effectively.
In addition, one of the problems facing deep learning is known as the “black box” problem: it is often next to impossible to determine how the system came to a particular conclusion, which in turn makes it difficult to gain the insight required to refine and improve the system.
Development of AI
Although AI has existed for more than half a century since the term was coined in the 1950s, the field has only recently seen large breakthroughs and broad interest from modern industries. This is due to advancements in computing power (GPUs in particular) and the exponential growth in the volume and variety of data, which in turn has increased the potential value of, and advancement in, algorithms.
As the rise of big data has made the implementation of AI systems increasingly necessary, and as AI has begun to provide a greater return on investment, more research and development has been put into the field.
Challenges for development
The main challenge in the development of increasingly advanced AI is computing power. Until recently there was a technical brick wall around development: there were plenty of theoretical ideas but not enough computing power to implement or develop them effectively.
Modern cloud computing and parallel processing systems have helped, but they are little more than a stopgap: as complex deep learning algorithms advance and data volumes continue to grow, ever more power is required.
Another problem in the development of AI is that current systems can only learn from the data they are given; knowledge cannot be integrated in any other way. This means, for example, that any inaccuracies in the data will be reflected in the results.
This is partly because modern AI operates with a one-track mind: it is only capable of performing a specific task, and thus unable to take into consideration learning and data from tasks other than the one it is performing. The sketch below illustrates how inaccurate data propagates into results.
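A small demonstration of this “inaccurate data in, inaccurate results out” effect (the corruption rate, the synthetic data, and the library, scikit-learn, are assumptions chosen for illustration):

```python
# Demonstrating that inaccuracies in training data surface in the results.
# We flip 30% of the training labels and compare test accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # clean ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

noisy_y = y_tr.copy()
flip = rng.random(len(noisy_y)) < 0.3          # corrupt 30% of the labels
noisy_y[flip] = 1 - noisy_y[flip]
noisy_model = DecisionTreeClassifier(random_state=0).fit(X_tr, noisy_y)

print("trained on clean labels:", clean_model.score(X_te, y_te))
print("trained on noisy labels:", noisy_model.score(X_te, y_te))
```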
There is also a lack of professionals in the field: despite the increased demand for AI experts, machine and deep learning developers and data scientists, the talent supply remains at a deficit – as of early 2019 there were estimated to be fewer than 40,000 AI specialists in the world (Source).
Writing an artificial intelligence essay
Artificial intelligence (AI) is a quickly growing field of computer science which has recently become a hot topic of discussion. AI has the potential to revolutionize the way we interact with the world and the way we do business. While the potential benefits of AI are vast, there are also many potential risks and drawbacks associated with it. Essays on this theme typically require a discussion of some of the main benefits and risks of artificial intelligence, and of how AI can be used responsibly.
One of the main benefits of AI is the potential to increase efficiency and accuracy in many tasks. By using AI-powered systems and algorithms, businesses can automate many of their processes, freeing up more of their employees’ time to focus on more important tasks. AI systems are also capable of analyzing large amounts of data quickly and accurately, making it easier for businesses to make informed decisions. Additionally, AI can be used to create powerful tools that can help us better understand and interact with the world.
However, there are also some potential risks associated with AI. One of the most commonly cited risks is the potential for AI systems to be used maliciously. AI can be used to create powerful weapons or to manipulate and deceive people, so it is important to consider these potential uses of AI when developing AI-powered systems. Additionally, there is the potential for AI systems to be biased or to produce inaccurate results. As AI systems become more complex, it becomes increasingly difficult to ensure that the results are unbiased and accurate, so it is important to consider these issues when developing and deploying AI-powered systems.
Another important consideration is the ethical implications of AI. AI-powered systems can have a significant impact on our lives, and it is important to consider the ethical implications of using AI. For example, there are questions about the impact of AI on privacy, autonomy, and freedom of choice. Additionally, there are concerns about the potential for AI systems to be used to discriminate against certain groups of people. It is important to consider these ethical considerations when using AI-powered systems.
Finally, it is important to consider the potential impact of AI on employment. While AI-powered systems can help increase efficiency and accuracy in certain tasks, they also have the potential to replace human labor in some areas. This could lead to increased unemployment, which could have a major impact on the economy. As such, it is important to consider the potential impact of AI on employment when developing and deploying AI-powered systems.
Artificial Intelligence essay themes:
- The potential risks and benefits of AI
- The impact of AI on human employment opportunities
- The potential ethical implications of AI
- The ethical implications of creating sentient AI
- The implications of AI on data privacy and security
- The potential for AI to be used for nefarious purposes
- The potential for AI to be used for good
- The potential for AI to augment human capabilities
- The potential for AI to automate certain tedious tasks
- The potential for AI to lead to increased inequality in society