
Can a weak form of AI consent? Attempts to strip weak AI of moral worth


The continuing advancement of artificial intelligence (AI) raises unique and troubling ethical issues concerning the boundary that demarcates a robot from a human being, and whether the former is worthy of any moral consideration. Notably, the potential roboticisation of the sex trade and the introduction of nascent AI pose the question of whether a weak form of AI can properly be considered to consent. In this essay, I will examine attempts to avoid this problem by stripping weak AI of moral worth, arguing that, as this cannot be done satisfactorily, the issue must be confronted head on.
In the first part of this essay I will examine appeals to the intuition that weak AI is ‘just’ an object, and thus worthy of neither ethical consideration nor the capacity to consent. Having highlighted how this solution is not only unsatisfactory but unreflective of developing technological trends, I will then turn in the second part to whether arguments for animal rights can be enlisted in favour of attributing moral worth to weak AI. As this is a short essay, I will touch only briefly on a number of issues relating to AI and the sex trade, including the objectification of women, in relation to our perceptions of weak AI.
Weak AI as ‘just’ an object
It seems intuitive to consider weak AI, which includes the likes of Siri, smart furniture and internet chatbots, as purely an object. The idea of granting moral consideration to a kettle, no matter how sophisticated, appears at first absurd. Often classified explicitly as non-sentient, weaker variants of AI are programmed to focus on a narrow range of tasks, without ‘understanding’ their actions. Even if such an AI can pass the Turing test, it is only emulating human experience, doing nothing more than processing an algorithm (Searle, 1999). In this sense, the weak AI in sex dolls may lead them to be considered nothing more than objects, with no moral weight. The issue of consent thus becomes a non-issue, but treating them as ‘only’ objects raises numerous other ethical concerns. Within this section, I will focus on the issues of objectification and the uncertainty over the point at which weak AI transitions to strong AI, in order to challenge this ‘intuitive’ solution.
Firstly, and of concern to many feminist theorists, is the problem that treating sex dolls with weak AI as objects encourages the corresponding treatment of women as objects. The sexual objectification of women has a complex relationship with the sex industry in feminist theory, with MacKinnon (2014:159) asserting that prostitution is fundamentally a power play, not merely the act of sex but ‘you do what I say, sex’. Sex robots, frequently gender-coded as feminine, with a weak AI programmed to serve the sexual desires of others whilst mimicking the servile attitudes of an ‘ideal’ woman, only serve to reinforce this patriarchal narrative. This is particularly troublesome if one accepts a view like that of Satz (2010:136), who objects to both markets in women’s reproductive labour and markets in their sexual labour ‘… on [their] relationship to the pervasive social inequality between men and women’. Without an obligation to respect the AI, which looks, sounds and acts like a ‘real’ woman, the narrative of women as sexually subservient to men not only goes unchallenged but is exacerbated. This problem underpins many of the concerns around the development of sex robots, and it is one to which I will return later in the essay.
Moreover, recent years have seen a shift in attitudes towards robots that does not seem to reflect this perception of them as objects. Notably, the EU recently voted on whether robots should be accorded rights (Bulman, 2017), a status that would allow blame to be assigned to them in situations where things go wrong under their control. In noting that robots are increasingly independent of their engineers and programmers (ibid.), the vote echoes a further concern that would undermine the notion of weak AI as purely an object: the line dividing weak AI from strong AI is uncharted territory, and as technology grows and changes, intuition may not be able to provide us with a reliable guide (Boden, 1988).
As the capacity of weak AI to analyse and act on its environment grows, an argument such as Searle’s (1999), that the AI is simply processing an algorithm without any formal understanding, appears susceptible to the challenge of specifying what exactly ‘understanding’ entails (Pinker, 1997). A sufficiently developed weak AI may appear almost indistinguishable from a strong AI, exhibiting traits to which humans may accord a certain notion of ‘understanding’, such as acting upon what it analyses a person to want rather than what they are literally asking for (Saenz, 2010). In this light, it would seem arbitrary to deny it rights simply because it does not ‘think exactly like us’ while it exhibits traits of sentience. We attribute a limited understanding to young children and animals, so why not to weak AI? In the second part of my essay, taking up the latter case, I will draw attention to the potential distinction between animal rights and the rights of weak AI.
Weak AI as an ‘animal level’ intelligence
Referring back to my initial description, weak AI lacks what humans would term ‘sentience’. Nonetheless, as I have briefly illustrated throughout the first part of my essay, there appear, contrary to our intuitions, to be some compelling reasons for granting weak AI at least some moral consideration. Singer’s (2014) case for animal rights can be invoked here in defence of weak AI. Although it is not actually ‘like us’, the fact that humans possess a higher or more refined degree of intelligence does not entail that humans can exploit AI with impunity. Just as we cannot give greater moral weight to our own species without committing ‘speciesism’, it appears we cannot give greater moral weight to organic life forms over synthetic ones without committing an analogous form of discrimination. Thus we should treat weak AI as subject to some moral consideration, and the issue of whether or not it can properly be considered to consent to sexual acts returns. Within this section, I will first attempt to refute the notion of weak AI as a rights bearer, before turning to the implications if it is to be considered one.
Notably, within Singer’s (2014) utilitarian account, as with Bentham’s (1789), the emphasis is on moral consideration deriving from the capacity to suffer rather than from reason or intelligence. On this basis, many variants of weak AI can be rejected as rights bearers, as they do not possess the capacity to feel what we would term suffering; because they cannot suffer, they possess no interests (Singer, 2014). Although this returns us to our initial position, and to the common-sense conclusion that a smart kettle, since it cannot suffer, possesses no moral worth, it can be challenged in two ways.
First, recent technological developments point towards robots that ‘experience’ pain and use it to adjust their behaviour. Such beings can rightly be said to ‘suffer’, and thus must be worthy of some moral consideration. Second, these developments create an uneasy distinction that must be resolved: a weak AI that possesses the ability to experience pain and one that does not do not seem to command equal moral weight.
While this distinction may seem common sense when comparing the smart kettle and the sex robot, as we would want the latter to possess some degree of consideration over the former, it becomes more problematic between two cases of a similar type, such as two sex robots, one possessing the capacity for pain and the other not. In this case the former, being worthy of moral consideration, must be considered capable of some degree of consent and deserving of certain treatment. The latter, however, is not, resurrecting the issues concerning objectification. After all, if the end goal of the development of sex robots is, as Levy (2010) states, solely to be responsive and receptive to human sexual desires, giving them a degree of respect may well be seen as getting in the way. To avoid such trouble, the programmer may well decide simply to bypass the option of installing pain sensors, reducing the sex robot once more to an object and once more raising issues concerning objectification and the degradation of women. Thus, if we accept that, to avoid socially harmful consequences, weak AI must be afforded some moral consideration, however limited, the issue arises of whether it can properly be considered capable of consenting.
This essay has briefly covered a broad range of topics, including the objectification of women, the issue of robot intelligence and an appeal to arguments made in favour of animal rights, to argue that the issue of consent within robotics, particularly concerning weak AI within the sex trade, needs to be confronted head on. As the literature on the issue is comparatively small and technology increasingly evolves in this direction, it appears imperative that, in order to avoid provoking other ethical issues, this unique problem posed by the presence of AI be answered without appeal to the current, but changing, object status of robots.
