
Essay: Machine learning vs Deep learning


Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. For an AI to achieve this goal it needs to learn, and the two main ways of learning are machine learning and deep learning.

Machine learning vs Deep learning

Machine learning is a process where the developer of the AI creates a set of instructions for the AI to follow to predict a certain outcome. Take image recognition, where the AI is trying to tell whether an image contains a cat or not. The developer might set up a process that looks for the characteristic edges of a cat and references them. Then, when a picture of a cat is given to the AI, the edges will be detected, and if they match the rules created by the developer, the AI will detect the cat and identify it in the image.
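As a tiny illustration of this hand-written-rules approach, here is a Python sketch. The feature names and the threshold are invented for the example; a real system would first have to extract edge features from the pixel data.

```python
# Hypothetical hand-coded rules for "is this a cat?". The feature names and
# the edge-count threshold are made up for illustration only.

def looks_like_cat(features):
    """Apply developer-written rules to pre-extracted image features."""
    rules = [
        features.get("has_pointed_ears", False),
        features.get("has_whisker_edges", False),
        features.get("edge_count", 0) > 50,
    ]
    # Every rule the developer wrote must match for a "cat" verdict.
    return all(rules)

print(looks_like_cat({"has_pointed_ears": True,
                      "has_whisker_edges": True,
                      "edge_count": 120}))   # → True
```

The point of the sketch is that every rule is visible to the developer, which is exactly the property the essay contrasts with deep learning below.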

In deep learning, by contrast, there is much less human interaction with the AI. The developer sets up a neural network, which acts like a brain, and then feeds it training data. This data is specific to the task and trains the neural network; for example, to teach the network to recognise cats in images, the developer gives it lots of images with and without cats, along with a key of which images contain cats.

In the end, the developer of an AI made with deep learning doesn't know how the bot predicts a cat, whereas in machine learning the developer has written specific rules for the AI to follow, so they know exactly how the AI works and can even change these rules to make it better. Deep learning is like a locked box that cannot be opened by the developer, and nothing inside the box can be changed or edited.

The Turing Test

A key problem in computer science is determining whether a computer can act intelligently: can it "think", or is it just spitting out canned responses it has been programmed with? The definition of intelligence is "the ability to acquire and apply knowledge and skills". A computer is far more knowledgeable than any human, but where computers are lacking is the ability to apply that knowledge, which is equally as important as having it. The Turing test was first proposed by Alan Turing in 1950: if a computer acts enough like a human that we can't tell it apart from a human, then to all intents and purposes it is intelligent. The original Turing test involves three rooms: one where a human sits, one where the computer is, and one with a judge. The judge has a conversation with both the human and the computer, communicating with the contestants only through a screen, and must decide which is the computer and which is the human from the responses alone. A screen and text responses are used so that the test does not depend on the computer being able to talk, because the point is to test whether a machine can convince a human that it is human. If the judge gets it wrong more than 50% of the time, the computer has passed the test. The test has been modified since its original form, so that the judge now simply has a conversation with either a computer or a human and must decide which is which. To this day no computer has passed the Turing test, although a chatbot called Eugene Goostman, which posed as a 13-year-old Ukrainian boy, allegedly came close; the claimed victory was disputed by critics, who said it used personality quirks and humour to misdirect the judges.

My Turing Test

For my Turing test, I used a classmate as well as two chatbots. The first chatbot, Cleverbot, uses deep learning and learns from the conversations it has with everyone who uses it. The second chatbot, Mitsuku, uses machine learning and uses algorithms to create responses. My classmate and I were in different rooms so we couldn't see or hear each other. In the first test I was the judge, and the classmate could choose between answering himself or using Cleverbot. We wrote our responses on a piece of paper and our teacher would carry the responses between the rooms at constant one-minute intervals, to keep the testing conditions exactly the same throughout. He was not allowed to speak, because that might compromise the test. We then had a second conversation, but the classmate swapped: if he had used the chatbot in the first conversation he now answered himself, and vice versa. After the second conversation we swapped roles and swapped chatbots to Mitsuku, with my classmate now the judge.

In my Turing test I had to be careful about what I did, to make the test accurate and realistic. Small things could give the judge an edge in identifying the participant; the test relies on the judge judging only by the conversation responses, with no outside factors. One outside factor was reply time: a human answering would take a lot less time, because someone using the chatbot would have to type the judge's question into the chatbot's interface and then write out the response. We removed this uncertainty by making all responses take one minute, which gave both enough time and left no timing difference that could be used to determine whether the participant was using a chatbot or not.

Another factor in ensuring a valid test is the communication. I used a piece of paper that was passed between rooms by a third party. Using speech would ruin the test, because there are many factors in speech such as accent, tone and pronunciation; also, if you have ever heard a chatbot or any AI talk, you will know it sounds nothing like a person. With verbal communication I would be able to tell whether it was my classmate or not, because I know his voice, and the test would be invalid because I would be judging something other than the responses.

Test 1: I was the judge (human). Test 2: I was the judge (chatbot).

Me: Hello.

Classmate: Hi, how are you?

Me: good thanks, you?

Classmate: I’m fine.

Me: what is your favourite colour? Mine is blue.

Classmate: yeah same, it’s the colour of my favourite fruit.

Me: what fruit is that?

Classmate: Strawberry.

Me: Strawberries are red.

Classmate: yes they are I was just testing you.

Me: how much do strawberries cost?

Classmate: I wouldn’t know I don’t buy them.

Me: could you give me some?

Classmate: no get some of your own.

Me: Hello.

Cleverbot: How are you doing?

Me: I’m doing good and you?

Cleverbot: Pretty well.

Me: What is your favourite colour? mine is red.

Cleverbot: Black and yours?

Me: mine is red I already told you, didn’t I?

Cleverbot: why is red your favourite colour?

Me: Why do we have favourites?

Cleverbot: That is a good question.

Me: what question?

Cleverbot: The one you previously asked.

Me: Can you not remember the question?

Cleverbot: No could you please repeat the question?

Me: Can a chatbot pass the Turing test?

Cleverbot: Oh no.

I know all my classmates quite well, which meant we had to set boundaries on what we could and couldn't ask; for example, we couldn't ask each other's names or personal questions that the computer couldn't possibly know. Without these boundaries, the judge could trip the computer up by asking personal questions they already know about the classmate. We did this because we were testing whether a computer can act like a human, not whether it knows where someone went to primary school.

Test Results

In the first round of tests, the first sign of a chatbot was the language and the vague responses when it didn't understand a question. For example, I asked its favourite colour and in the same message said that my favourite colour was red. It responded with "black and yours?", a typical pre-made response triggered when it recognises key words like "favourite" and "colour". Then, when I pointed out that I had already told it my favourite colour and asked why it didn't remember, it countered with a vague question, "why is red your favourite colour?", which is related to the topic but doesn't make sense, which is unhuman-like. Finally, I asked whether a chatbot had ever passed the Turing test and it replied "Oh no", which is a very odd response. For these reasons I guessed correctly that test number 2 was a chatbot.

Test 1: my classmate was the judge (human). Test 2: my classmate was the judge (chatbot).

Classmate: Hi.

Me: Hello.

Classmate: do you ever wonder why the world spins?

Me: I don’t think there is a reason.

Classmate: What’s your favourite food? Mine is lasagne.

Me: My favourite food is ice cream, but I also like lasagne.

Classmate: What is your favourite flavour?

Me: Coffee what is yours?

Classmate: Mine would be human flavoured.

Me: Are you a cannibal?

Classmate: no its only human flavouring not real human. Is being a cannibal wrong?

Me: It is illegal, except in the United States of America, where it isn't.

Classmate: Hi.

Mitsuku: Hi there.

Classmate: do pants go on the top half or the bottom half of your body?

Mitsuku: well pants cover you up.

Classmate: does god wear pants?

Mitsuku: I don’t know God personally, so I wouldn’t know what his plans are.

Classmate: favourite colour? mine is violet.

Mitsuku: mine is blue, what is yours? Violet is not bad but I like blue best.

Classmate: why do you like blue the best?

Mitsuku: it reminds me of a calm sea.

Classmate: what was my favourite colour again?

Mitsuku: violet.

Classmate: what was the first thing I said to you?

Mitsuku: What was the first thing I said?

In the second round of tests, many of the same techniques appeared. The first vague response to a question came when my classmate asked "do pants go on the top half or bottom half of your body?" and it said, "well pants cover you up". Mitsuku seemed the most intelligent of the two chatbots because it stored the favourite colour it was told and could recall it later in the conversation when asked "what was my favourite colour again?". Although pre-made responses were used, it could also respond to more than one statement in a single reply. For example, when asked "favourite colour? mine is violet", it said "mine is blue, what is yours?" and then "Violet is not bad but I like blue best", showing it can understand more than one statement and reply to both in one response, which would be clever if the statements didn't contradict each other. The vague responses and the self-contradicting statements gave away to my classmate that test 2 was also a chatbot.

Algorithms and techniques

Breaking the sentence down

The first technique used by chatbots is breaking your message down into pieces and responding to each piece separately. For example, the judge asked Mitsuku what her favourite colour was and, in the same message, said "mine is violet". Its response was "mine is blue, what is yours? Violet is not bad but I like blue best". This shows that it picked out the two key words "favourite" and "colour" and responded to them, then saw the second part of the sentence and responded with "Violet is not bad but I like blue best". By breaking up the sentence it addressed both parts of the statement.

Breaking the sentence up is a very effective technique for sounding intelligent when it is done well. In this example Mitsuku executed it poorly, because the first sentence contradicts the second: it asks "what is yours?" and then, having recognised that a favourite colour was already given, responds "Violet is not bad but I like blue best". A human would see the whole statement and tailor their response, perhaps saying "mine is blue, but violet isn't bad", whereas Mitsuku saw it as two separate statements and answered them separately rather than as a whole, which made it contradict itself.
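The clause-splitting behaviour described above can be sketched in a few lines of Python. The per-clause rules here are invented for illustration, but they reproduce Mitsuku's contradictory reply, because each clause really is answered in isolation.

```python
import re

def reply_to_clause(clause):
    # Hypothetical per-clause rules; a real chatbot would have thousands.
    if "what is your favourite colour" in clause or clause.startswith("favourite colour"):
        return "Mine is blue, what is yours?"
    if "mine is" in clause:
        colour = clause.split("mine is")[-1].strip(" .!")
        return f"{colour.capitalize()} is not bad but I like blue best."
    return "Tell me more."

def respond(message):
    # Split the message into clauses at sentence punctuation, then answer
    # each clause independently -- the source of the contradiction.
    clauses = [c for c in re.split(r"[?.!]\s*", message.lower()) if c]
    return " ".join(reply_to_clause(c) for c in clauses)

print(respond("What is your favourite colour? Mine is violet."))
# → Mine is blue, what is yours? Violet is not bad but I like blue best.
```

Because `respond` never looks at the message as a whole, it happily asks a question whose answer it was just given, exactly the flaw pointed out above.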

Pre-programmed responses

When a chatbot detects certain key words, it has a pre-programmed response. When I said "Can you not remember the question?", Cleverbot said "No could you please repeat the question?". Here it recognised key words like "remember" and "question" and printed its pre-programmed response. In pseudocode, the rule for this would be something like this:

If input includes "remember" and "question"
    Print "No, please repeat the question."

This technique is pretty convincing because it keeps the conversation flowing, and while it doesn't sound completely natural, it doesn't say anything that would raise red flags unless the person is aware of the technique and looking for it. It is, however, seen as a crude way of making a chatbot, because it doesn't use any machine learning, just a few simple lines of code.
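The pseudocode rule above can be made runnable as a small keyword table. The rule table here is hypothetical; a real bot such as Cleverbot has a far larger set (and, in Cleverbot's case, learned responses as well).

```python
# Keyword-triggered canned replies: each rule is a set of trigger words and
# a fixed response. The table below is invented for illustration.

CANNED_RULES = [
    ({"remember", "question"}, "No, please repeat the question."),
    ({"favourite", "colour"}, "Black, and yours?"),
]

def canned_reply(message):
    words = set(message.lower().replace("?", "").split())
    for keywords, response in CANNED_RULES:
        if keywords <= words:        # all trigger words present in the input
            return response
    return None                      # no rule matched

print(canned_reply("Can you not remember the question?"))
# → No, please repeat the question.
```

The first matching rule wins, which is why such bots can feel repetitive: the same trigger words always produce the same canned line.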

General replies

This technique involves giving vague, general replies to statements and questions that the AI doesn't understand, to give the impression of understanding when there is none. While talking to Mitsuku, the question "do pants go on the top half or the bottom half of your body?" got the reply "well pants cover you up." This has some relevance to the question but doesn't answer it, or even come close; the bot must simply have recognised that the question was about pants and said something vague about pants.

Like most techniques, this can be convincing, but only if it is used the way a human would use it. Humans do sometimes give vague replies to questions they don't want to answer or can't be bothered replying to, and we also do this when we don't know how to reply, or don't know enough to reply but don't want to admit it. For these reasons chatbots can pass as acceptable. When the technique isn't used in the right way, though, it is very obvious that a chatbot is talking.

Countering a question with another question

This technique is used when a chatbot doesn't know how to answer the question being asked. The goal is to change the subject of the conversation without it being noticeable. For example, when I said to Cleverbot, "mine is red I already told you, didn't I?" (because I had told it my favourite colour and it had asked again), it responded with "why is red your favourite colour?". It didn't know what to say to my question, so it countered with a different question on the same topic to try to take the focus off my question. If the human goes along with this change of topic, the chatbot has succeeded in avoiding the question it couldn't answer.

This technique is very effective when used in moderation. If the chatbot uses it a lot, the conversation will just keep changing subject and will have no real substance, going round and round in circles. But used once or twice, it is very hard to detect if done right. It is also effective because it doesn't just avoid one question; it avoids a whole topic area the chatbot is unfamiliar with, and if the chatbot sticks to topics it does understand, it will sound more intelligent.
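A minimal sketch of the counter-question fallback might look like this in Python. The topic list and phrasings are invented; the point is only the shape of the technique: detect a topic word, fire back a question about it, and fall through to a vague general reply if even the topic is unknown.

```python
# Counter-question fallback: when no proper answer exists, turn a detected
# topic word back into a question to steer the conversation. The topic list
# and wording below are hypothetical.

TOPICS = ["colour", "food", "music", "weather"]

def counter_question(message):
    for topic in TOPICS:
        if topic in message.lower():
            return f"Why is {topic} important to you?"
    return "That is a good question."   # last resort: a vague general reply

print(counter_question("Mine is red, I already told you my favourite colour."))
# → Why is colour important to you?
```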

Help systems and other applications of AI

AI is in a growth spurt at the moment for applications in the business sector. Some of these applications include a deep learning model used by Amazon to screen new staff, and help systems for pizza companies. All AI can be put into two categories: general AI and narrow AI. Both the chatbots I looked at are general AI, because you can ask them anything about anything and they will try to answer; they are not set up to do one specific job. Narrow AI is more popular with businesses because it is easier, faster and therefore more cost-effective to build an AI that only needs to answer a few specific questions. The help systems that Domino's or most big insurance companies run are examples of narrow AI: they are designed to answer a customer's query without the need for human help, or at least to save time by collecting the information and passing it on. The main reason for using them is cost. Once the chatbots are built they cost almost nothing to run, while humans have to be paid wages, so the more work chatbots take over, the fewer humans are needed in call centres, which means much lower costs for companies.

The main technique used is finding key words that are linked to a specific set of potential problems the AI has been programmed with. When a message arrives it is scanned for key words, and the computer checks its list of problems against them; if it finds a match, it comes back to the customer with a pre-programmed response asking whether this is the solution they need before giving it to them. An example for Domino's: if I said, "could I please have the menu for New Zealand?", it would spot the key words "menu" and "New Zealand", fetch the New Zealand menu and respond with "would you like the New Zealand menu?"; if I said yes, it would respond with the menu. This saves the customer time, since they don't have to sift through the website trying to find the menu. If it were a customer service problem, the same machine could save time at the call centre, saving the company money.
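This keyword-triage loop is essentially the canned-response technique applied to a narrow domain, and can be sketched as below. The request table and wording are invented for illustration, not from any real Domino's system.

```python
# Hypothetical narrow-AI help system: scan the customer's message for
# keywords, match them against a table of known requests, and confirm
# before answering. Falls back to a human when nothing matches.

REQUESTS = {
    frozenset({"menu", "new", "zealand"}): "the New Zealand menu",
    frozenset({"opening", "hours"}): "our opening hours",
}

def triage(message):
    words = set(message.lower().strip("?.!").split())
    for keywords, description in REQUESTS.items():
        if keywords <= words:                     # all keywords present
            return f"Would you like {description}?"
    return "Let me pass you on to a team member."  # hand off to a human

print(triage("Could I please have the menu for New Zealand?"))
# → Would you like the New Zealand menu?
```

The confirm-before-answering step matters: it costs one extra exchange, but prevents the bot from confidently answering the wrong question.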

Chatbots become less useful when they don't understand what the customer is trying to say. To get around this hurdle they currently use a variety of techniques, similar or identical to the ones Mitsuku and Cleverbot use, just applied differently. First, they will counter a question with another question, to get the customer to rephrase their query so the computer can better understand it, or to clarify whether the computer's assumption about the problem is right; an insurance example of this technique could be "Do you want to take out a car policy?". A second technique is used when the AI has absolutely no idea what you are saying, or has gathered information from you but cannot work out the problem: it passes you on to a human. The picture on the right shows an example of this in the Domino's pizza AI, and is typical of what they say: first it says "they" are going to get a team member, then it tries to gather information from you, then it asks for a review if you have one. It then says how long the response time will be, so you don't have to sit in front of your computer waiting for a reply. This shows that chatbots are not very effective at adapting to questions they don't know about, because narrow AI is designed for only a select few purposes. Once companies start using general AI models, I think these assistants will take off even more, because they will be able to answer all queries and there will be little need for a call centre or support staff. Of course, general AI needs to improve before it is useful across many applications: it would need to adapt to different and complex problems, which would almost require it to be self-thinking, able to help with problems it has never come across before, as humans do on a daily basis. That would make the chatbot far more useful and could make human support staff obsolete, because it could address every problem no matter how hard.

Another application of AI is a neural network used by Amazon to pick the best candidate for a job vacancy. It is important to note that this is a deep learning AI, so it learns from past experience, i.e. training data. The tech industry has historically been male-dominated, so the training data from the previous ten years showed men as better suited to the jobs. This made the AI sexist: it started to penalise women, because their CVs were often different from the male CVs that dominated the training data. Although this AI failed, it is interesting that, according to the article, 55% of HR managers said AI will have a huge role in recruitment in the next five years. This could be very effective for recruiters, because an AI can take into account things like how well people did in the job and how happy they were, whereas no single human could comprehend that kind of mass data.

Evaluation of the Turing Test

The Turing test was designed to test a machine's ability to show intelligent behaviour equal or identical to that of a human; we use our own intelligence as a kind of measuring stick. An article about chatbots raises a good point: while the Turing test checks whether a machine can act like a human in a text-to-text conversation, there are many other ways in which humans are intelligent. For example, Google's AlphaGo AI beat the world's best player at the ancient Chinese board game Go, where there are more possible games than stars in the universe. The article argues that while the Turing Test is a good way of testing one part of human intelligence, it doesn't cover everything: a chatbot that could pass the Turing test wouldn't be able to beat the world's best player at Go, so while it may be as intelligent as humans in one area, it isn't in every other area.

Another article on the disadvantages of the Turing Test points out that human behaviour and intelligence are not the same thing. Humans on the whole are intelligent but do many unintelligent things, which computers have to mimic to pass the test. For example, we make a lot of typing errors; the article describes one AI coming close to passing the Turing Test partly because of its ability to make typing errors. So how can a test be completely effective if it is a test for intelligence, yet for a computer to pass it, it has to do seemingly unintelligent things like make typing errors? This shows that the Turing Test covers all areas of human behaviour, regardless of whether they are intelligent or not; it even rewards behaviours that are considered unintelligent, such as susceptibility to insults, the temptation to lie and, of course, typing mistakes.

Complexity and Tractability

The key problem of tractability is that finding an optimal solution to some problems takes an impractical amount of time. If a problem can be solved in a reasonable amount of time, it is considered tractable. A problem is intractable if, using existing algorithms, a computer would take an impractical amount of time to complete it; this can be as long as millions of years. The cost of an algorithm is measured by the number of operations it takes to finish. Any algorithm whose cost grows like 2^n or faster, where n is the size of the input (such as n cities), is intractable. A common example of an intractable problem is the travelling salesman problem (TSP): a salesman has to travel to different cities and wants to find the quickest route between them. This sounds basic, but it has many applications and has been adapted to real-world examples such as courier companies, drink vendors and packaging and manufacturing. In my example of a courier company, there are destinations all over the city to deliver to, and the company wants the quickest route, minimising time and fuel. I will exclude roads and speed limits from my problem, so there are just straight lines between the destinations. Using a generic algorithm this is an intractable problem, so if an algorithm were created that made it tractable, it could be applied and adapted to all problems of this type, a one-size-fits-all sort of model. This would be beneficial in the real world because it could cut fuel costs, resources and time.

Brute Force

The first approach was mentioned above: the generic algorithm, which checks every possible route between the destinations individually. It calculates the length of a route, remembers the shortest found so far, compares each new route against it, and replaces the shortest whenever a new route is shorter, repeating this for every possible route; this is often referred to as the brute force method. The shortest route, once finished, is the optimal solution to that exact problem. The more points (destinations) there are, the more time it takes to calculate the optimal route. When more points are added, the number of routes grows at an explosive rate, so it takes an ever-increasing amount of time for a computer to find the optimal route. The relationship between the number of destinations and the number of possible routes is the factorial of the number of destinations, which means that with 7 destinations it would be 7x6x5x4... right down until you hit 2. The graph shows this relationship between the number of points and the time a computer takes to find the optimal route.

An example of this relationship is shown by two different problems: the first has 5 destinations and the second has 16. The first, with 5 destinations, has 5 factorial, or 120, possible paths; if a computer took 1 second to check 100 possible paths, it would only take 1.2 seconds, and that is a very slow computer, since computers can perform billions of calculations per second. The second example, with 16 destinations, has 16 factorial, or 20,922,789,888,000, possible paths. Now let's say a computer can check 1 million paths per second: it would still take just over 242 days to find the optimal route. That is very impractical for a courier company, which receives packages in the morning and must deliver them within 12 hours, far less than 242 days! For a courier company, anything much more than 10 destinations becomes intractable.
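The brute force method described above fits in a few lines of Python. The depot and destination coordinates are made up, and distances are straight lines, matching the essay's simplification of ignoring roads and speed limits.

```python
# Brute-force travelling-salesman solver: try every ordering of the
# destinations and keep the shortest tour. Coordinates are invented.

from itertools import permutations
from math import dist

def brute_force_route(depot, stops):
    best_route, best_length = None, float("inf")
    for order in permutations(stops):        # n! orderings: intractable for large n
        path = [depot, *order, depot]        # start and end at the depot
        length = sum(dist(a, b) for a, b in zip(path, path[1:]))
        if length < best_length:
            best_route, best_length = path, length
    return best_route, best_length

# Three stops on the corners of a 2x2 square around the depot.
route, length = brute_force_route((0, 0), [(2, 0), (2, 2), (0, 2)])
print(route, round(length, 2))   # the perimeter tour, length 8.0
```

With 3 stops this checks only 6 orderings; at 16 stops the same loop would run through the 16! orderings discussed above, which is exactly why brute force stops being practical.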

Heuristics

As of right now there is no algorithm that makes this problem tractable, but there are algorithms that use heuristics: a variety of techniques for guessing the shortest route, some of which come really close, though they do not always find the shortest route. The first example of a heuristic algorithm is the greedy algorithm. Starting at the courier depot, it finds the closest destination and goes to that point, and so on until it has visited every destination. This only finds the best choice from point to point, not for the whole path, so there can be crossovers. Using this algorithm alone as a courier would be a bad idea, because there are a lot of variables like traffic, speed limits and traffic lights, so the flaws of the heuristic would be amplified whenever a new destination was added. Even though it isn't perfect, it is a lot better than waiting 242 days or choosing randomly, and the time it takes to calculate is negligible.
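The greedy (nearest-neighbour) heuristic described above can be sketched like this, reusing the same made-up square of destinations as before:

```python
# Nearest-neighbour ("greedy") heuristic: from the depot, repeatedly visit
# the closest unvisited destination. Fast, but not guaranteed optimal.

from math import dist

def greedy_route(depot, stops):
    route, remaining = [depot], list(stops)
    while remaining:
        # Pick the destination closest to wherever we are now.
        nearest = min(remaining, key=lambda p: dist(route[-1], p))
        remaining.remove(nearest)
        route.append(nearest)
    route.append(depot)              # return to the depot
    return route

print(greedy_route((0, 0), [(2, 0), (2, 2), (0, 2)]))
# → [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]
```

On this tiny symmetric example greedy happens to find the optimal tour; on less regular layouts it produces the crossovers discussed below.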

As you can see from the routes, the greedy heuristic is not very close to optimal. For example, on the map on the left, point 2 should go to point 4 but instead goes to point 3, which creates an overlap, and the route goes back on itself, both of which add distance to the trip. The same thing can be observed on the map on the right: once again the path crosses back on itself, making the route longer, this time between points 15 and 16. Another place where it goes further than the optimal is point 10 to 11: it should go 10, 12, 11 and then 12. These are examples of the greedy algorithm making the best choice available at the time, which doesn't pay off and leaves the route longer overall; the decision seemed good in the moment, but as a whole it wasn't. Although the route double-crosses and goes back on itself, it is not too long: it isn't optimal, but it isn't far off, and the algorithm produced it very quickly, which is the main benefit of a heuristic.

2-opt

Another technique heuristics use is the 2-opt. When two lines cross over, the route cannot be optimal, so heuristics use a simple algorithm: take the four points connected to the lines that cross, and compare every path between those four points to find the optimal ordering of them. In my example to the left, it takes the points 2, 3 and 4 and the depot, which we will call point 5 for now. It then finds the shortest route by comparing all possibilities; in pseudocode this might look like this:

Find 2-opt at points 2, 3, 4, 5:
    For each possible route between the points:
        Compare the generated route to the shortest route so far
        If generated < shortest
            Make shortest route = generated route

Because there are only 4 points in a 2-opt, there are only 24 possible routes, so the generic brute-force algorithm can find the optimal solution every time in a short amount of time. The same logic applies when 3, 4 or even 5 lines cross over: more crossing lines mean more routes to calculate, but the rule is the same, generating every possible route between the affected points and comparing them to find the shortest. This is a very effective technique for heuristics, as it uses very little computing power to greatly improve the route the heuristic found. It doesn't always shorten the overall route, though, and sometimes it is better not to use it.
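A common way to implement this crossing-removal idea is the segment-reversal form of 2-opt, sketched below under the same straight-line assumptions as before: whenever reversing a stretch of the route shortens the tour (which is what happens when two legs cross), apply the reversal, and repeat until no swap helps.

```python
# 2-opt improvement by segment reversal. The example route deliberately
# crosses itself; 2-opt untangles it into the perimeter tour.

from math import dist

def tour_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def two_opt(route):
    route = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                # Reverse the segment between i and j and keep it if shorter.
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if tour_length(candidate) < tour_length(route):
                    route, improved = candidate, True
    return route

crossed = [(0, 0), (2, 2), (2, 0), (0, 2), (0, 0)]   # self-crossing tour
print(two_opt(crossed), round(tour_length(two_opt(crossed)), 2))
# → [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)] 8.0
```

The crossed tour is about 9.66 units long; one reversal removes the crossing and brings it down to the optimal 8.0, showing how cheap the improvement is compared with full brute force.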

The brute force approach is preferable when the number of destinations is 10 or fewer, as it then takes less than 12 hours to solve, making it ideal for the courier company because you are guaranteed the fastest route. Anything over 10 would be impractical to brute-force for a courier company, because it would take too long and they would need to start delivering the packages, so a heuristic giving a near-optimal solution, improved with 2-opts, would be preferable.

Practical Applications

When you first hear about the travelling salesman problem it sounds quite mundane, but while the story is basic, people and companies face the same issue as the travelling salesman on a daily basis. The example I used was a courier driver, but there are many more, like shipping companies wanting to visit all their ports in the shortest way, or even a smuggler who owns a plane and must travel to a certain number of countries to deliver goods but wants the shortest route to save fuel. Another real-life example is UPS (United Parcel Service), which has built routing heuristics for its vans; its software is famous for minimising left turns to save time and fuel.

A simple example of a possible travelling salesman problem in the real world: suppose NCEA decided to deliver all our results in January and needed to distribute the papers to the right cities. NCEA is sick and tired of students moaning that papers take too long to be delivered, so they want the best algorithm to get them around the cities the fastest. These are the types of problems the TSP represents, and why so many people are striving for a solution.

So many people are affected by this problem that it has attracted millions in funding to find a solution, or at least very effective heuristics. The goal is to save money and time. This is most obvious with a delivery company: with a good heuristic they would save money on fuel, on the driver's labour, and on vehicle maintenance, because the vehicles would be doing significantly fewer kilometres. A prime example of a company researching this problem is Google. If you have ever needed directions you will know of Google Maps. Google Maps uses a very complicated heuristic that is not publicly available, because it is worth millions. It doesn't just deal with straight lines; it must sort through many variables like traffic lights, road speed, traffic and road length. All these factors are compiled into one algorithm, and it finds the fastest route in seconds. The useful thing about the TSP is that one solution fits all, so trucking companies can use Google's million-dollar heuristic to their benefit, saving them money.

In my simple example of a trip from Christchurch to Dunedin, it has calculated the shortest route, how long that route is, and how long it would take by car. It even gives you options like a walking or biking route. This is all done on a slow cell phone and finished in seconds, which shows how good these heuristics are and how far Google's algorithms have come.

While some factors like traffic and road speeds add complexity and time to the algorithm, other factors help it. If a road is closed, Google can take that whole path out of the possible routes, reducing the cost of the algorithm, which means a faster time to calculate the route.

Another application of the TSP is in electronics. Many holes are drilled into a circuit board, and companies want the shortest route connecting them. The metal connecting the holes is one of the most expensive parts of the board, so a good heuristic would save companies a lot of money. If the metal for each circuit board cost $1 and a better heuristic brought that down to 90c, that is a saving of 10%; 10c doesn't sound like much, but sell a million circuit boards and the company has just saved $100,000. This kind of saving is why heuristics, and finding a solution to the TSP, are so valuable.

The greedy heuristic is the simplest heuristic, just choosing the nearest point each time, which makes it unreliable: on average it gets within about 25% of the optimal route, though that is still better than a route picked at random. 2-opts improve it further by removing overlaps. The greedy heuristic is unreliable because how close it gets to the optimal solution depends greatly on how the cities are laid out; within 25% is only the average, and it can be much worse. Another weakness appears when a point on the map is never the closest to any other point: it won't get picked, and the algorithm has to travel very far at the end of the route to reach it, when it could have been picked up on the way around, as shown in my example with points 15 and 16. Another way to improve the heuristic would be to break the points up into sections, so it has to visit all the points in one section before moving on to the next. All this makes the plain greedy heuristic a poor fit for most real-life examples, like courier companies and the electronics industry, because they need the best heuristic to save the most money; some companies have boasted 40% improvements in efficiency with better heuristics.

The best publicly available heuristic is Christofides' algorithm, which guarantees a solution within a factor of 3/2 of the optimal solution. It is also one of the most complicated heuristics out there.

 
