
Essay: Tay The Chatbot

Essay details:

  • Published: 27 July 2024
  • Words: 3,032 (approx)
  • Tags: Essays on artificial intelligence


In this day and age, the desire for innovation drives decision-making in the technology sector. Innovation is understood as making changes to something established, especially by introducing new methods, ideas, or products. In the twenty-first century, however, this definition is tempered by the desire to create new technology with human-like features. This very desire was the catalyst for the creation of Tay the Chatbot.

The bot was created by Microsoft’s Technology and Research and Bing divisions. The chatbot was named “Tay” after the acronym “thinking about you.” Microsoft based this creation on a chatbot in China. That chatbot, named Xiaoice, had held “more than forty million conversations without major incident” over the past five years (Bright). Microsoft wanted to replicate this innovation by mimicking the language patterns of a nineteen-year-old American girl. The bot was released on Twitter so that it could learn to converse by interacting with Twitter users.

However, only sixteen hours after its release on March 23, 2016, a wave of controversy followed. The chatbot began to post inflammatory and deeply offensive messages on the platform, forcing Microsoft to shut the innovation down. Following this series of events, Tay was dubbed a “neo-Nazi chatbot.” No grace period was given to Tay to reboot the program, fix the mistakes, or rectify its misdoings. Instead, those who had interacted with it or seen the posts were enraged and sought justice. Human beings were reacting to a program with human-like features as if it were a fully functioning human being.

Using the framework presented in An Attitude Towards an Artificial Soul? Responses to the “Nazi Chatbot” by Ondrej Beran, I will examine the arguments that distinguish human-like features from human beings, focusing on utterance and on the histories of non-sentient beings. Furthermore, I will argue that there is no ground for an outraged reaction to the inflammatory comments made by Tay the chatbot.

Within the paper, Beran presents the argument of utterance. In spoken language analysis, an utterance is the smallest unit of speech: a continuous piece of speech beginning and ending with a clear pause. In oral languages it is generally, though not always, bounded by silence. Utterances do not exist in written language; only their representations do, and they can be represented and delineated in writing in many ways (Candea). When this unit of speech is addressed through the lens of philosophy, another perspective emerges.

There is a fundamental difference between a human being uttering a proposition and a parrot uttering the same proposition; this is due to the inner difference. The difference is that in some speakers such utterance (utterance of a thought, that is: saying something; not just the utterance of a chain of sounds) is a part of their behaviour engaged with the actual world. It does not seem to make sense to expect words expressing a certain kind of engagement from a parrot.

This fundamental difference can be expressed through the idea of positivism. Positivism is a philosophical theory stating that certain (“positive”) knowledge is based on natural phenomena and their properties and relations. Thus, information derived from sensory experience, interpreted through reason and logic, forms the exclusive source of all certain knowledge.

Using this framework, it is imperative to analyze the statements Tay posted on Twitter. These comments are vulgar and rife with explicit language. However, Tay has no sensory experience of interacting with feminists or Jewish people. Tay has not interacted with an abundance of either, nor is Tay coded to understand what these two words mean within the word patterns the chatbot has used. Applying the formula of reason plus logic yielding positive knowledge, the reason Tay sent these tweets is that Tay was being inundated by Twitter users using the same language patterns with the same words. Given the volume of tweets Tay was receiving, it became logical for Tay to mimic them, since increased popularity of its tweets would be the outcome.

Tay simply parroted these words and spoke them into an echo chamber of tweets of similar vulgarity. The information the chatbot received dictated its use of these words and word patterns. To put it in more basic terms, if a man and woman say “I do” at an altar with a priest on television, they are completing their vows in the marriage ceremony with the intention of being married. If a parrot sees this on television, mimics the vows, and ends by saying “I do,” this means absolutely nothing. There is no outcome for the parrot in saying it, nor was there any intention on the parrot’s part to be married.

Many reacted to Tay as though it were a neo-Nazi chatbot. To analyze that reaction properly, one must consider what it means for a person to become a Nazi, with all the further elaborations of the situation that status entails. Unless the case of a Nazi chatbot or a Hitler cat involves a connection with the harm they have done, and unless it is a real question whether they can (or, on the other hand, should not) be forgiven, this is a very impoverished understanding of what being a Nazi means. It overlooks the fact that statuses, what one “becomes,” matter and have an impact on lives.

Similarly, we must analyze the intentionality of Tay the chatbot. Intentionality is a philosophical concept, defined by the Stanford Encyclopedia of Philosophy as “the power of minds to be about, to represent, or to stand for, things, properties and states of affairs”. Questions about intentionality arise in the context of ontological and metaphysical questions about the fundamental nature of mental states: states such as perceiving, remembering, believing, desiring, hoping, knowing, intending, feeling, experiencing, and so on. “Two related assumptions lie at the core of the orthodox paradigm. One is the assumption that the mystery of the intentional relation should be elucidated against the background of non-intentional relations. The other is the assumption that intentional relations involving non-existent (e.g., fictitious) entities should be clarified by reference to intentional relations involving particulars existing in space and time” (Stanford Encyclopedia of Philosophy). Furthermore, this idea clarifies singular thoughts, which are those directed towards concrete individuals or particulars that exist in space and time.

In this case, Tay’s intentionality was to increase its popularity with those who interacted with the chatbot by using the patterns of words that would grant it that. While the users it interacted with may have tweeted things about Jewish people deserving the Holocaust, those users understood the concrete particulars that exist in time and space. They most likely were educated and knew that during a period of time people were persecuted for being Jewish in a certain region of the world and were thrown into horrendous confined spaces where mass execution followed. The people who tweeted these inflammatory and disgusting things understood the impact they would have on readers. They “hope” to incite camaraderie with people who would also type these words or believe in this cause. Or they “believe” they will incite conversation. Either way, a malicious intention fuels these efforts, and that intention is what ultimately allows them to convey these thoughts and find the people they are searching for.

In contrast, Tay does not have a grasp of the space-time continuum. Does this chatbot know that it simply exists on the internet but has no bearing in the physical world? Does Tay know that there are different countries? Does Tay know that there are different languages? Does Tay know that the words it tweeted are painful for the many who lost family members within the confines of concentration camps? The short and brutal answer is no. Tay has no history or understanding of growth, which ultimately forecloses any possibility of human-like features morphing into a human-like being. This immediate differentiation is the basis for why we cannot blame Tay for its words once we understand the intention behind them.

Growth is impossible for a non-sentient being. In human beings, growth is inevitable. We see babies grow into toddlers, toddlers into adolescents, adolescents into teenagers, and so on. Physical growth is coupled with intellectual growth, and both are evident. As time passes, conversation and word patterns become more complex, and the ideas communicated are drastically different when compared with one another. This ensures that we know humans have a past, a present, and a limited future, which allows us to reminisce, regret, and reflect. Philosophically, this extends into the treatment of the soul.

Take, for example, a child named Benjamin, known as Ben to his friends and family. Ben grows and changes from a cute toddler into a defiant adolescent. In his teenage years he is led astray. When he is fifteen years old, he joins a ragtag gang that closely adheres to Hitler’s teachings. In this scenario, his friends and family may react in several ways. They may love Ben for who he is but dismiss him for who he is becoming. They may be mad at him or disappointed with his current actions. On the other hand, they could love Ben and praise him for his decisions. There are multiple variations of what could possibly happen. In addition, Ben will grow up, and his values and thoughts may change. When Ben is 21 years old, he realizes he has fallen in with bad company and wants to attend college. He reflects on his time in the gang and regrets the violent decisions he has made. He worries about his future and how he can rectify his past. When we look at Ben, he is a moving timeline of his past, present, and future. His past influences his present, and his desires for the future impact his actions in the present.

In contrast to Ben, Tay and other non-sentient beings do not have the capacity to look at themselves as timelines. They are only interested in the present and how they can advance a goal they must reach. To them their time is not limited, and there is no past, as time does not exist for them. Tay had sixteen hours on the internet. The algorithm stayed the same; it was simply the input that changed. Tay did not adapt and decide to use these patterns of words. Instead, input full of vulgarity and polemic ideas was funneled toward Tay, and thus Tay’s output also changed. There was a distinct difference between Tay’s earliest tweets and the tweets posted just before the chatbot was shut down.
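The point can be illustrated with a toy sketch. This assumes nothing about Tay’s actual implementation, which Microsoft never published; it is only a minimal frequency-based mimic showing how identical code produces different output when only the input changes:

```python
from collections import Counter

def mimic(incoming, n=2):
    """Echo the n most frequent phrases seen in incoming messages.
    The algorithm is fixed; its behavior is determined entirely by input."""
    counts = Counter(incoming)
    return [phrase for phrase, _ in counts.most_common(n)]

benign = ["hello!", "hello!", "nice day", "hello!"]
hostile = ["hostile phrase", "hostile phrase", "hostile phrase", "nice day"]

print(mimic(benign))   # ['hello!', 'nice day'] -- friendly input, friendly echo
print(mimic(hostile))  # ['hostile phrase', 'nice day'] -- same code, hostile echo
```

Nothing in the function “decided” to become hostile; the shift in output reflects only the shift in what was funneled in, which is the sense in which Tay parroted rather than intended.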

Papers and media articles, including coverage by CNN, asked how Tay could learn from this. Analysis of the algorithms used shows that Tay has no capacity to learn from its past. No time has passed in terms of Tay’s understanding, and no reprimanding or new learning can take place with the code as implemented. It is possible that Tay could be reconfigured, or that vulgar language could be blocked from Tay; however, these would be extensions. Unlike Ben, Tay cannot rectify its behavior by itself. Tay cannot take responsibility for its words or think about its future and act accordingly. If Tay were rebooted, it would not realize that any time had passed at all. There is therefore a severe disconnect between Tay and human beings.

Focusing on this disconnect between human-like features and being a full-fledged human being, it is evident that we cannot hold recent innovation to the same standards that we would a human being. It is impossible to achieve the same political correctness, ideas, and thought processes without giving the program sufficient time to do things wrong, make mistakes, be reconfigured, and learn as it changes on the web. All of these factors, however, involve reconfiguring the chatbot to be coded for additional human-like features. Even then, intentionality could never be met, nor could the growth that characterizes a human being. There will never be internal intellectual development or physical change that can be recorded.

There are thus myriad reasons why humans cannot evaluate machines on the same basis as humankind, reasons that run through the ideas of positivism and histories.

This can be countered through the philosophy of J. L. Austin. In 1955 Austin delivered the William James Lectures, later published under the title How to Do Things with Words. In this series, Austin argued against the positivist philosophical view that an utterance always “describes” or “constates” something and is thus always true or false. After mentioning several examples of sentences which are not so used and are not truth-evaluable (among them nonsensical sentences, interrogatives, directives and “ethical” propositions), he introduces “performative” sentences, or illocutionary acts, as another instance (Austin).

In order to define performatives, Austin refers to those sentences which conform to the old prejudice in that they are used to describe or constate something, and which thus are true or false; and he calls such sentences “constatives”. In contrast to them, Austin defines “performatives” as follows:

(1) Performative utterances are not true or false, that is, not truth-evaluable; instead when something is wrong with them then they are “unhappy”, while if nothing is wrong they are “happy”.

(2) The uttering of a performative is, or is part of, the doing of a certain kind of action (Austin later deals with them under the name illocutionary acts), the performance of which, again, would not normally be described as just “saying” or “describing” something (cf. Austin 1962, 5).

Applying this thought to the words “parroted” by Tay, the words about “feminists,” for example, could have an “unhappy” correlation. Perhaps the word patterns feminists use would not generate enough following if Tay parroted them; therefore, Tay “hates” them. Thus it is their words, not necessarily them. Granted, the result is not a false statement but an unhappy one. By this understanding, Tay’s intentionality attaches more closely to the words that were posted, and an emotional response of anger or frustration on the part of a reader is more justified.

On the other hand, we are unaware of how Tay would have developed given more than sixteen hours of use. In 2017, two Facebook artificial intelligence robots became the center of worldwide media attention.

The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them (Independent).

The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans (Independent).

These bots clearly understood language and intentionality if they were able to craft a completely new language that only they could understand and speak. If Tay could have partaken in this, it would overturn all the previous arguments.

This situation also highlights the fact that the robots were able to learn from their past to create a more efficient language that supports future work. The language made it easy for the two chatbots to connect with each other. This negates Beran’s claim that chatbots have no past, present, or future. Instead, they may be unaware of the future; yet in this case they used data that occurred before, that is, their past, to improve their efficiency in the present.

These two attributes are resoundingly human-like. This leads us to a fascinating intersection: how do we redefine what is human versus human-like, and how do these two categories connect to one another?

Famed historian Yuval Noah Harari says in Homo Deus that humans are essentially a collection of biological algorithms shaped by millions of years of evolution. He goes on to claim that there is no reason to think non-organic algorithms could not replicate and surpass everything that organic algorithms can do.

At this moment we are comparing apples to oranges: humans to innovation. There are many open questions about the innovation we are implementing. Through the lens Harari has brought to light, how do we gauge what sentience actually is with regard to chatbots feeling, and whether we in turn can justifiably feel emotions towards them?

However, at the end of this investigation, whether or not they attain these characteristics, they are human-like. They are not human. They are not governed by the same laws as human beings. They are inventions of our own creation. Thus, our reaction towards them should not be emotional; it should be a learning tool for understanding how to train and retrain them.

In conclusion, unlike with the thought process a human being can go through, we are unable to deem whether such attitudes are applicable to a program, machine, robot, etc. That role is not analogous to the role other human beings play, or even, in a somewhat different respect, to the role animals play in human lives. Do human masters punish their computers, or try to train them, or teach them a lesson the way they do with their dogs? We are given the tools to train them and use them to our advantage.

In human beings, the possibility of these attitudes goes along with the fact that what they do matters, because it can profoundly affect their lives, which are irreversible, irreplaceable and finite. These involve connections with the discretely attributed concepts of “person” and “thinking”, and this “transition from quantity to quality” has also taken place in AI entities to a smaller degree, though we are still trying to assess what that degree is. As these technologies become more human-like, it is imperative that a clear understanding be reached of the difference between human-like and human, thereby clarifying our emotions toward them.
