An Agent or a Tool? Artificial Intelligence, Heidegger’s Robot and the Ignorant’s Toaster


David Holtgrave

 

Abstract
Be it in art, aesthetics, or anywhere in or beyond academia, artificial intelligence, or AI, is often said to do a lot of stuff formerly done by humans. In this essay, I will argue that this line of thinking is to some extent misleading. On my account, it is still humans who do the same stuff, only by different means. Taking AI's employment and discussion in the arts as an example, I will set up and discuss what I call the 'agent view' of AI and show how, for those cases in which we typically attribute agency to AI, this attribution can and should be reduced to a lack of practical knowledge. For an alternative description of what AI is, I then give an account of AI as a tool in the spirit of Heidegger's analysis of tools. With both the agent and the tool view of AI in hand, I finally show how questions about AI's role and employment in art, aesthetics, or any other context call for different kinds of answers, depending on whether we take the agent or the tool view as a premise.

Key Words
aesthetics; agency; AI; AI-generated art; black boxes; Heidegger; ignorance; practical knowledge; tool

 

1. Introduction

I believe much of the fuss that AI currently stirs in the artworld, aesthetics, and anywhere in and beyond academia rests on the idea that AI is some new kind of agent entering the stage. It quickly became a common habit to speak of it in agency-laden ways, like saying something was "generated" by an AI, and that it "talks," "answers," "learns," "understands," and so on. On this note, AI as an agent obviously brings about many challenges that some of us think we need to face at this stage of development. On the other hand, AI in many contexts is employed as a kind of tool. Here we may say we "generate," "research," or make any other deliberate use of AI programs. In both ways, however, AI seems to fall short of our common understanding of what makes an agent or a tool. It seems not agent enough to be an agent and not tool enough to be a tool.

Now, to make it clear from the start, when I say 'AI,' I mean current, state-of-the-art AIs, often referred to as 'weak AI' or 'artificial narrow intelligence.'[1] Also, I will not oppose the idea that we can plausibly render AI as an agent of some kind. My question is whether, as philosophers, we should. What is there to gain? For it seems to me that the idea of AI as an agent may not only lead to a lot of theoretical agitation. It may also obstruct a more pragmatic and accurate way of thinking about AI programs: as mere tools. If we can think of AIs as tools like any other, not only may some of the problems that an agent view bears simply dissolve, but taking on what I call the 'tool view' of AI may also elicit questions of keen philosophical interest. For in recognizing AI as a tool, we may come to understand that questions about its agency are better understood as questions about our own. That is, instead of wondering from an agent view what AI is, can, and should be doing, from a tool view we come to ask what it is we can and should be doing by making use of it as a tool.

My argument goes like this: I will first concern myself with a typical setup of an agent view and with the way and conditions under which such a view will hold. While I assume that we can frame AI as an agent, I will argue that the fact that we can give such descriptions does not alone make them good descriptions of the correlating events. Ascribing agency to an entity must also provide for the better description, especially when there are human agents involved.

Following that, I shall discuss, by means of a thought experiment, production events with more than one possible agent involved, and argue that the agency we attribute to one of them may be a good explanation only in light of our ignorance. This is especially the case for black boxes and tools of which we lack the practical knowledge necessary to make proper use.

To establish a tool view of AI, I then give an account of AI as a ‘tool’ by reference to Heidegger’s analysis of the term, thereby showing that AI can plausibly be thought of as a tool akin to a hammer.

With both the agent and the tool view in hand, I then conclude by showing how the way we understand questions about AI and its role and employment in different contexts is heavily dependent on whether we endorse one or the other.

So, while I may not have anything specific to say about how AI may or may not change the field of academic aesthetics, I take the unnerving philosophical approach of showing that to answer that question, we should first think about how we are to understand it, for the answers will likely differ depending on whether we take on the agent or the tool view.

2. Setting up an agent view

In computer science, AIs are already designated agents by technical definition.[2] However, such definitions seem to fall short of our ordinary thinking about agency. For when it comes to an AI's actual employment, talk about its agency is often ambiguous. An example is when a fan of Nick Cave posted what he said ChatGPT had generated in the spirit of the musician. Here is a part of it:

Verse 1:

In the depths of the night, I hear a call

A voice that echoes, through the hall

It’s a siren’s song, that pulls me in

Takes me to a place, where I can’t begin

 

Chorus:

I am the sinner, I am the saint

I am the darkness, I am the light

I am the hunter, I am the prey

I am the devil, I am the savior

 

Verse 2:

I walk the streets, with my head held high

I’ve got my demons, but I’ll make them fly

I’ll dance with the devil, and I’ll play his game

I’ll be the one, to call his name

 

Chorus:

I am the sinner, I am the saint

I am the darkness, I am the light

I am the hunter, I am the prey

I am the devil, I am the savior[3]

Now, Cave didn’t particularly endorse it. Part of his answer is typical of how AI is often viewed by artists and critics alike:

What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. […] Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer.[4]

The history of technical advances (not only) in the arts is known to be full of quarrels.[5] So it is not only artists or critics who are reluctant to acknowledge AI-generated art as "real" art but oftentimes also those engaged in aesthetics and neighboring disciplines. For example, in the cognitive sciences, Keith Holyoak notes:

[F]or all the functions an AI can potentially achieve […] an intelligent program will fall short of authenticity as a poet. AI lacks what is most needed to place the footprints of its own consciousness on another mind: inner experience. That is, experience shaded by the accumulated memories derived over a lifetime.[6]

And Claire Benn, working in the ethics of AI, data, and algorithms, writes:

One thing that AIs will not be able to do is to write in an authentic way: that is, from a unified subjective experience.[7]

Notice how all of them talk of AI in agency-laden, however ambiguous, ways; AI is rendered as having the capacity to act on its own, in this case, to write. Yet at the same time, they find it lacking certain features that one would probably expect from actual agents, that is, having a mental life and thus being able to write in an authentic way. What to make of this?

The notion of agency typically revolves around concepts of intentionality,[8] autonomy,[9] and responsibility.[10] All three of them may make up the paradigm kind of agency, that is, the human kind. And it seems clear that, in the current state of the art, it would just be blatantly wild to attribute that kind of agency to an AI. In this light, we may say the problem here is that AI's agency is unduly compared to a notion of human agency under conditions we take to be necessary for authenticity in art. It is therefore simply no fault of the AI that its art cannot live up to our common expectations about art. AI, in terms of its agency, is just not what we think artistic agency requires.

But this, of course, does not mean that AI cannot be an agent of some nonhuman kind. Even if it does not yet have this kind of full-blown agency, one may think that our intuition about AI's agency at least gives ground for a constrained version. For even if AI programs are still programs made by people, as Mark Coeckelbergh notes, they are programmed

in the sense that the algorithm, the code, is programmed, but the end product—what is claimed to be the work of art—is not directly made by a human being. The algorithm, not the human, is the “artistic” agent. The human is the creator of the code, not of the work of art.[11]

What Coeckelbergh claims about AI in art production, I believe, also holds true of our intuition about AI's doings in any other kind of productive context. It seems to us that the AI is doing something on its own, something we have no hold over, something of which we have no clear sight or understanding. Even more, we do not talk about AIs only in terms of "doing." We talk about their functional features in terms of language use and more complicated brain stuff. AI seems to "talk" and "communicate" with us, and it is built on large language models (LLMs) and artificial neural networks modeled in ways that to some extent resemble our own: making use of a language in communicative ways, having a functional organization akin to a neural network, and doing stuff on its own. All of this seems to be calling for agency, right?

Unsurprisingly, philosophical attempts and discussions of how and whether to make sense of AI’s “doings” run wild across ideas of autonomy,[12] (higher-order) intentionality,[13] and moral responsibility[14] as applicable to machines or AIs, specifically.[15] Some are designed more specifically to capture the kind of “agentive cognition” machine intelligence exhibits.[16]

As mentioned above, I will not concern myself with such attempts here, nor will I oppose the idea that AI can be construed as an agent. My interest is in whether we actually ought to ascribe such notions to it. For I think that, whatever applicable weak or strong version of an agent view of AI one may come up with, the eligibility of agency rests not only on whether we can construe such a notion. A proponent of an agent view of AI not only needs a) to give a description of AI-infused production events that rests upon a coherent notion of AI's agency but also needs b) to make it clear why our descriptions of such events are in need of such notions at all, if there is also a human agent involved. For if in any case the agency of such events can, on the very same or an even stronger notion, be attributed solely to a human entity and still deliver a description at least as plausible as the other, it seems rather questionable why we should attribute it to a software program in the first place. To my understanding, any agent view must collapse if a) is not met, and it would at least be considerably weakened if b) cannot be established. For the sake of this argument, I will simply assume that a) can be met and focus on b).

In what follows, I will discuss how in production events we are inclined to ascribe agency to more than one agent or not. My whole argument rests on the simple idea that black boxes are bad candidates for agency. Put more simply: If I do not understand how to make proper use of a toaster and it burns my hair, explaining my burned hair in terms of the toaster's agency is a bad explanation.

3. Of robot-painters, black boxes, and the ignorance stance

Let us begin with what I call a 'production event': an event in which entities interact with other entities in a way that results in some kind of output. For example, a painter interacts with colors, a brush, and a canvas in a way that results in a painting. In such production events, on what basis are we entitled to attribute agency to one of the entities involved? The simplest answer probably would be: whenever one of the involved entities "does" something, that is, takes active part in the production and thus exercises agency. And it seems clear that neither the brush nor the colors nor the canvas is "doing" anything. It is clearly the painter who is the singular agent in that event.

Then there may be cases in which more than one entity does something and thus two or more entities exercise some kind of agency in these events. For instance, we can have two painters producing a single painting. In such cases, it seems clear that we should attribute agency to both of them, for both took part in the production event under the same, strong notion of agency.

Now, there may be other cases in which we have two or more entities that we can say do part of the production under differing notions of agency. The painter puts bread into the toaster, and the toaster toasts the bread. In such events, both painter and toaster can be described as agents without much trouble if we can construe adequate notions of agency for both. However, we can also simply say that the painter toasted the bread. This description is in no way deficient, for it seems plausible to see the painter as the single agent in the event of toasting the bread, be it by the use of that or any other device that needs no rendering as an agent.

Now, the painter has made some good money and bought a robot-painter. That robot-painter does whatever the painter tells it to do. The robot executes the painter's orders in a way that, in some cases, exactly matches what the painter intended. However, the painter learns that the more complex the goal set for the robot, the more likely the outcome will deviate from the painter's intention, for it seems easy to make the robot draw a single line but difficult to make it draw the painting as the painter intends. The robot often fails miserably, or at least produces something very unexpected. The painter therefore learns to give more refined, procedural, and intricate commands that take account of the complexity of the desired task.[17] Thus, the painter gains a kind of practical knowledge about the robot. Time passes, and this knowledge allows the painter not only to understand how the robot must be handled so that it will produce the desired output but also to understand what kinds of output to actually expect from it. Put more simply, the painter has mastered the use of the robot as a device of the craft.

Here, we have a production event involving two entities doing something. It seems clear that both the painter and the robot-painter can be rendered as agents under differing conceptions, the notion of agency attributed to the robot-painter being somewhat weaker and derived. And here is where the agent and the tool view collide: to the unknowing bystander, it will seem that two agents are producing one painting. It may even seem that the robot did most of the work. When later seeing that painting in a gallery, the bystander will therefore tell a friend that it was not really made by the artist but, actually, by a robot. So the two go and confront the artist with their indignation. To the artist, however, who thoroughly knows that any of the robot's doings come down to the artist's own doings, its employment is just like that of any other device the painter has mastered. Certainly, the painter could have drawn this or that line by himself or herself, but what would that even mean to the painter? 'By myself,' to the painter, must mean by means of the tools painters use to draw lines, be it pencils, digital art boards, or robots. The two bystanders, not sharing the kind of practical knowledge that the artist has, will insist that the robot did all this and reject any of the nonsense the artist says in defense. To them, the robot is just a black box that seems to act on its own. If, however, the two bystanders had the kind of practical knowledge that the painter has, to my understanding they would be obliged to concur with the painter's position. For they would have to judge the painting not on the basis of the doings of a robot but on the ways the painter painted with the robot. If they then found any inconsistency in the painting, they ought not to ascribe it to the robot's shortcomings but to a mistake the painter made in the use of the robot.

I think my point is clear. The question now is whether one may have this practical knowledge of AI. If we can have this kind of knowledge, there seems to be little reason to render AI as an agent, at least in such kinds of events. For we would not have to invoke a second agent to explain how any deviance between the product and our intent came to be; it would simply become clear to us that we were not handling our tool properly.

One may argue that, given the current state of the art, it seems rather unlikely that we can have this kind of knowledge. Current AI models often are construed as "black boxes," for their learning and decision-making processes are not readily interpretable by humans, especially when it comes to so-called "deep learning."[18] So we may have to admit that the AI is doing stuff in some autonomous way that we cannot control because we cannot make sense of it. I mean, this would call for an explanation in terms of agency, right?[19] Not necessarily so.

First, the metaphor of the black box may actually be misleading. Under the term 'explainable artificial intelligence,' or XAI, there are in fact promising attempts to elucidate what is happening inside those alleged black boxes,[20] as well as to design models that are inherently explainable or at least much easier to explain.[21] If it turns out that we can actually explain what is happening inside those black boxes, or design transparent ones that do the same magic without the trick, we may just have mistaken something that is hard to explain for something that is impossible to explain. If we can understand how AI works, we can understand how to work with it in a way that aligns our intentions with its possible outcomes. Surely the idea that we can get this kind of knowledge may at this point seem rather speculative. However, the same holds for the opposing view.

Second, even working with something that to some degree has the character of a black box does not entail that we cannot gain the practical knowledge required to use that thing in alignment with our intentions. A photographer need not know about all the physics happening inside a camera to make proper use of it but can, by the mere experiment of usage, gain the practical knowledge necessary to understand what kinds of output can or cannot be achieved. This way, the photographer can align her or his intentions with the usage of what at the start may have seemed like an all-around black box, and what may still be one when it comes to the detailed physical processes happening inside. However, now the photographer knows how to handle it and thereby bend its outputs to his or her intents and vice versa.

One may object that this analogy does not hold, since, unlike a camera, AIs will typically produce at least slightly differing outputs on one and the same input. But this does not necessarily entail agency, nor does it rule out the possibility of intended outcomes. It may just be another part of our practical knowledge of AI that its usage and outcomes are to some extent subject to chance. Just because chance is a feature of such events does not mean we cannot be the singular agent of events in which we interact with entities toward an outcome that, to some degree, is not in our complete control but also subject to chance. It is perfectly plausible to say that someone can be the singular agent of a chance outcome by intending it as such. We can find many modes of production by chance in the arts, Dadaism being just one prime example.[22] In fact, it seems more accurate to think that an artist does not start with a clear intention and then realize it through tools and material, but proceeds one step after another, giving the material a part in influencing the production event, which often will be an invitation to chance.[23] We can also just throw dice and thereby intend an outcome by chance. That we do not have complete control over the outcome does not mean that there is another agent that has it (even if only in part).

It is a common thing that, when in need of an explanation for events happening between different entities, we refer to terms implying agency to make sense of what is going on. In such cases, we seem to take the kind of intentional stance described by Dennett, that is, a stance in which it seems appropriate to attribute mental states to a thing because doing so benefits the predictive success of explanations of its behavior.[24] In everyday life, it may help to think a toaster does whatever the heck it wants to. Yet only for the sake of ignorance can this be regarded as a good explanation, when the toaster is actually just broken or was never meant for drying my hair.

To my understanding, the intentional stance will typically emerge exactly when we are confronted with events involving a black box or when we lack the practical knowledge necessary to make proper use of a thing. Sure, in some contexts it may help to think of AI as an agent, for example, when it is interacting with other kinds of computer stuff. Yet I contend that whenever in a production event we have two or more possible agents involved under different notions of agency, the conceptually weaker agent may be an agent only in light of our ignorance. That it does not do what I want does not necessarily mean that it does what it wants. It may just mean that I do not understand what it is for and how to use it properly.

If we can imagine having the kind of knowledge of an AI program that the painter has of the robot, there would be no need to attribute agency to it. That kind of knowledge need not be perfect; it need only be sufficient practical knowledge. If such practical knowledge can plausibly be imagined of an AI, we can conceive of it as a mere tool, as the brush or the robot is to the painter. For on the basis of such practical knowledge, we will know that a single line written as a prompt to an AI program is to the realization of a proper song text what a single stroke of a brush on a canvas is to the realization of a proper painting. Just not enough.

To show that we can plausibly think of AI that way, in the next section I will give an analysis of AI as a tool following Heidegger's analysis of tools.

4. A tool view of AI

Just as for the agent view, the issue with the tool view of AI seems not to be whether we can conceive of AI as such. Its qualities as a tool are discussed no less, in contexts like medicine,[25] education,[26] and art.[27] So why argue? Well, as mentioned in the case of the AI-generated song text, it seems that for many of us AI possesses features of an agent but is still not agent enough. On the other end, AI is often recognized as a tool yet seems to go beyond the character of any ordinary tool. Therefore, a philosophical challenge we may find here is a) to explain why AI is hard to perceive as a mere tool, and b) to show that we can give a description under which AI can be understood as a tool akin to any ordinary tool. Both challenges, I think, can be faced with Heidegger. I shall tackle b) first.

Distinguishing two senses of the word 'thing,' Heidegger introduces two technical expressions: "being on hand" (vorhanden) and "being at hand" (zuhanden). Things being on hand are what philosophers traditionally call 'material objects.' A flower, for instance, has certain properties and is located somewhere in space. Listing its properties and stating its location gives a description of the flower.[28] Now, if you ask me what it is for, I will be entitled to reject your question as inadequate, because a flower is not for anything. It is just a flower. Things about which such a question can be asked are things at hand. If I describe a hammer to you and you ask me the same question, I will not be entitled to dismiss it by saying that it is just a hammer. A hammer is a tool, and for Heidegger, tools are part of the stuff of the second sense of 'thing.'[29]

But before bluntly applying the question "what is it for?" to AI, we should check the conditions under which we are entitled to ask it of a thing. To make that question applicable, it obviously is not enough to refer to properties like shape, size, color, or the location of a material object. Both a flower and a hammer possess such properties but cannot be said to be for something on that ground. Just as little can the possession of a particular set of properties establish a thing's being for anything.

Another possibility would be to say that a hammer is for hammering on the grounds that it is used for that. Yet being so used cannot be the condition, for hammers need not actually be used for hammering to be hammers. A hammer just sold that has never been used as such can still be sold as a hammer. So maybe we should be more modest and say hammers are hammers on the basis that if they are used at all, they are used for hammering. But then again, hammers need not be used for hammering alone to be hammers. They are also used as weapons, for opening beer bottles, for poking things with the handle, and so on. It cannot be their use alone that makes the hammer a thing for this or that.

However you twist or turn it, it seems hardly possible to make sense of a thing as a tool in light of the thing or its use alone. Heidegger understands this and therefore speaks of the tool-context, or what he calls the "Zeugganzes," consisting in the relations of different tools, gear, and objects of use.[30] Hammers are for hammering nails. Nails are for connecting boards. Boards are for forming walls. Walls are for keeping bad weather outside. In this light, hammers are for hammering if nails are for connecting boards and saws for sawing boards. A time may come when we no longer need nails because everything gets printed by a 3-D printer. Then the context would disappear, and a hammer would cease to be a tool in this context. To know what a thing is for, then, is to know what it is for in its context of tools. The marking question, "what is it for?," is therefore applicable only to things for which there is such a context of tools.

Applying this to AI, we can see that Heidegger has a good point, for AI's tool-character emerges exactly where it gets integrated into such tool-contexts. In medicine, AI is employed for diagnoses, diagnoses are for determining the right treatment, and so on. In the arts, AI may be for combining art styles, combining art styles is for inspiring new styles, and so on. To know what AI is for is to know what it can be for in a given tool-context. More specifically, to know how to properly use AI as a tool requires at least some practical knowledge not only about AI but also about the tool-context one is employing it in.

This, then, also yields an answer to a), that is, to why it often seems hard for us to conceive of AI as a tool. According to Heidegger, we grasp the tool-character of a thing not by looking at it in isolation but only through its use in the tool-context.[31] It is the hammering in a hammering-tool-context that reveals the hammer as a tool, as a thing at hand. And since hammering is a practice that takes place only in a tool-context, it is not only its use but its specific use in a specific context of tools that reveals its tool-character. If we cannot grasp the tool-character of AI, it is less because AI does not look like a hammer or has properties unlike any tool we know. The looks and properties of a thing do not reveal that character. It is because we lack knowledge of, or simply choose to ignore, the tool-contexts in which AI can and does play the role of a thing at hand, a thing being for something.

One may object that there is still an important gap between a hammer and an AI, in that there are no complex processes going on inside a hammer while hammering, whereas the inside of an AI seems to do a lot of work on its own. But again, these are mere properties of the thing, which can be described while the question "what is it for?" is still rejected, and which, looked upon in isolation, may invite questions of agency. Yet it is not the properties of a thing that decide whether it is a tool or not; it is the AI's being integrated into a tool-context and used as a tool that makes the question "what is it for?" applicable. And while we may still describe its properties in terms of agency or not, this makes no difference for its construal as a tool. In fact, wondering about its properties in isolation seems rather confused as soon as we grasp the tool-character of a thing. For if we know what it is good for in this or that context, its properties will be less puzzling than functional. Missing that tool-character gives way to a kind of theoretical puzzlement that has no hold over the practical knowledge of the tool-user.

Yet another objection may emerge from this analysis: under such a description, a tool can at the same time also be an agent. A human laborer in the tool-context of a construction site may be both an agent and a tool.[32] Notice, however, that it seems rather implausible to think that, in the event of a failure of the construction, the failure can be explained by a lack of practical knowledge on the part of the head staff or whoever employed the laborer in this tool-context. It may still hold true that weaker kinds of agents like AI programs can at the same time be described as agents and tools. However, as I have tried to show, the agent view will hold only as long as the intents of our inputs and the outputs of such agents align to a sufficient degree. As soon as there is deviance between inputs and outputs in regard to our intents, any nonhuman notion of agency applied to a tool ought to be considered coherent only in light of our lack of practical knowledge.

5. Conclusion for a tool in arts and aesthetics

I have argued a) that, given it is possible to frame AI as an agent, there is better reason to understand it as a mere tool if there is also a human being involved to whom we can ascribe the agency, b) that AI can be thought of as a tool akin to ordinary tools, and c) that the reason we have a hard time conceiving of it that way is a lack of practical knowledge.

Finally, coming to the lead question of this special issue: How will AI change the field of academic aesthetics? As annoying as a philosopher can be, I have answered this: It depends on how we conceive the question, or more specifically, on what we mean by 'AI.' As I have argued above, this is a question begging for theoretical discussion primarily when we adopt the agent view of AI. For then all sorts of questions arise concerning the conflicts that emerge when a new kind of agent enters the scene. Questions like: What if AI writes students' papers? What are the implications of AI-infused research? On what basis are we to describe and judge AI-generated art? What kinds of contribution can or must a human agent provide when collaborating with an AI? What kinds of risk does AI pose?

Now, these questions certainly do not simply disappear once we drop the agent view for the tool view of AI. But what is actually meant by these questions, and how we are to answer them, differs significantly between the two views. On the agent view, the underlying question is: What kind of agent is AI, and what does that mean for this or that context? On the tool view, the question becomes: What is AI for in this or that tool-context?

The implications then are not so much about the agential conflicts emerging between humans and AI. With AI taken as a tool, the follow-up questions are about what a proper use of it may look like in a given tool-context. To my understanding, answers to these questions demand practical knowledge first and theoretical discussion later. For to make sense of the tool as a tool, we first need to employ it in our tool-contexts, to reveal what we may or may not be able to achieve by its means. Take art again, for example. What a proper use of AI in the arts may look like depends not on what we know about art and AI but on the ways artists actually find to make use of AI in their creative processes. How AI will change the field of academic aesthetics is therefore at best a speculative question; at worst, it misunderstands the way this change occurs. Call me captious. But strictly speaking, AI will not change the field of aesthetics. It will not "do" anything. That is completely up to us. And we should be perfectly clear about our agency in that.

I have argued that once we gain the practical knowledge needed to make proper use of AI as a tool, there will no longer be a need to refer to it as an agent. Until then, we may as well make use of it.

 

David Holtgrave
davholtgrav@gmail.com

David Holtgrave is a PhD candidate and lecturer of philosophy at the Martin-Luther-Universität Halle-Wittenberg, Germany. He teaches aesthetics, has given talks about AI in the arts and expression in poetry, and is currently working on a theory of poetry. He also fancies himself a poet.

 

Published on July 14, 2025.

Cite this article: David Holtgrave, “An Agent or a Tool? Artificial Intelligence, Heidegger’s Robot and the Ignorant’s Toaster,” Contemporary Aesthetics, Special Volume 13 (2025), accessed date.

 

Endnotes

 

[1] Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh, An Introduction to Ethics in Robotics and AI (Berlin: Springer, 2021), 10. While I confine myself to weak AI in this paper, I believe that much of what I argue also applies to strong and more speculative forms of AI – namely, Artificial General Intelligence and Artificial Superintelligence – since my case rests not on the notion of agency itself, but on the attribution of agency.

[2] Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (Ann Arbor, MI: University of Michigan Press, 2011), 6.

[3] “Issue #218,” The Red Hand Files, https://www.theredhandfiles.com/chat-gpt-what-do-you-think/, accessed June 30, 2024.

[4] “Issue #218,” The Red Hand Files.

[5] A historical overview with examples of analogous quarrels is given by Anthony O’Hear, “Art and Technology: An Old Tension,” Royal Institute of Philosophy Supplements 38 (1995): 143-158, https://doi.org/10.1017/S1358246100007335.

[6] Keith J. Holyoak, The Spider’s Thread: Metaphor in Mind, Brain, and Poetry (Cambridge, MA: MIT Press, 2019), 181f.

[7] Claire Benn, “Eight Scholars on Art and Artificial Intelligence,” Aesthetics for Birds, https://aestheticsforbirds.com/2023/11/02/eight-scholars-on-art-and-artificial-intelligence/.

[8] Michael Bratman, Intention, Plans, and Practical Reason (Stanford: CSLI, 2000).

[9] Richard M. Ryan and Edward L. Deci, Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness (New York: Guilford Press, 2017).

[10] John M. Fischer and Mark Ravizza, Responsibility and Control: A Theory of Moral Responsibility (Cambridge, UK: Cambridge University Press, 1998).

[11] Mark Coeckelbergh, “Can Machines Create Art?” Philosophy and Technology 30, no. 3 (2016): 286, https://doi.org/10.1007/s13347-016-0231-5.

[12] John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat, and Merel Noorman, “7 Autonomy,” in A Citizen’s Guide to Artificial Intelligence (Cambridge, MA: MIT Press, 2021), 107-126.

[13] Daniel C. Dennett, “When HAL Kills, Who’s to Blame? Computer Ethics,” in HAL’s Legacy: 2001’s Computer as Dream and Reality, ed. David G. Stork (Cambridge, MA: MIT Press, 1997), 351-365.

[14] John P. Sullins, “When is a Robot a Moral Agent?” International Review of Information Ethics 6, no. 12 (2006): 23-29, https://doi.org/10.29173/irie136.

[15] One may argue that these accounts bear only on strong and more speculative kinds of AI. However, on my view this is also a consequence of the ways we talk about ANI in terms of agency. It is part of the science fiction of the present that we think of even weak AIs as agents, for they not only somehow resemble us but, much more, the fictional artifacts we know from books and movies.

[16] Aziz Zambak and Roger Vergauwen, “Artificial Intelligence and Agentive Cognition: A Logico-Linguistic Approach,” Logique et Analyse 52, no. 205 (2009): 57-96, https://www.jstor.org/stable/44084913.

[17] In the case of AI, acquiring this kind of knowledge in many cases will mean to learn prompt engineering. See, for example, Yunlong Wang, Shuyuan Shen, and Brian Y. Lim, “RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions,” in CHI ’23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, ed. Albrecht Schmidt, Kaisa Väänänen, Tesh Goyal et al. (New York: Association for Computing Machinery, 2023), article no. 22, 1-29, https://doi.org/10.1145/3544548.3581402. Han Qiao, Vivian Liu, and Lydia Chilton, “Initial Images: Using Image Prompts to Improve Subject Representation in Multimodal AI Generated Art,” in C&C ’22: Proceedings of the 14th Conference on Creativity and Cognition, ed. Neil Maiden and Corina Sas (New York: Association for Computing Machinery, 2022): 15-28, https://doi.org/10.1145/3527927.3532792. Aras Bozkurt and Ramesh C. Sharma, “Generative AI and Prompt Engineering: The Art of Whispering to Let the Genie Out of the Algorithmic World,” Asian Journal of Distance Education 18, no. 2 (2023): i-vii, http://asianjde.com/ojs/index.php/AsianJDE/article/view/749.

[18] Quotation marks, because again I find it questionable whether saying of these programs that they “learn” is true in any other than a technical sense. I think it is at best a metaphorical usage; at worst, it distorts a clear understanding of their technical nature.

[19] It seems that wherever there is a black box, we tend to think of agents hidden inside, even if it is actually us being the cause for what is happening. See also Frank Pasquale, The Black Box Society (Cambridge, MA: Harvard University Press, 2016).

[20] Neil Savage, “Breaking into the black box of artificial intelligence,” Nature, March 29, 2022, https://www.nature.com/articles/d41586-022-00858-1.

[21] Plamen P. Angelov, Eduardo A. Soares, Richard Jiang, Nicholas I. Arnold, and Peter M. Atkinson, “Explainable artificial intelligence: an analytical review,” WIREs Data Mining and Knowledge Discovery 11, no. 5 (2021), https://doi.org/10.1002/widm.1424.

[22] Harriett A. Watts, “Chance: A Perspective on Dada” (PhD diss., University of Texas at Austin, 1975), ProQuest (7524979).

[23] Mark Titmarsh, Expanded Painting: Ontological Aesthetics and the Essence of Colour (London: Bloomsbury, 2017), 103.

[24] Daniel C. Dennett, The Intentional Stance (Cambridge, MA: MIT Press, 1996), 17.

[25] Mohamed H. Shahin, Aline Barth, Jagdeep T. Podichetty et al., “Artificial Intelligence: From Buzzword to Useful Tool in Clinical Pharmacology,” Clinical Pharmacology and Therapeutics 115, no. 4 (2023): 698-709, https://doi.org/10.1002/cpt.3083.

[26] Juan M. Alberola, Elena del Val, Victor Sanchez-Anguix, Alberto Palomares, and Maria Dolores Teruel, “An artificial intelligence tool for heterogeneous team formation in the classroom,” Knowledge-Based Systems 101, no. 1 (2016): 1-14, https://doi.org/10.1016/j.knosys.2016.02.010.

[27] Zorana Ivcevic and Mike Grandinetti, “Artificial intelligence as a tool for creativity,” Journal of Creativity 34, no. 2 (2024), https://doi.org/10.1016/j.yjoc.2024.100079.

[28] Martin Heidegger, Sein und Zeit (Tübingen: Max Niemeyer Verlag, 1953), 67-68. Cited as SZ hereafter.

[29] SZ, 68.

[30] SZ, 68-71.

[31] SZ, 69.

[32] This is no consequence of Heidegger’s account, since for him the ontological status of humans is another kind of being than that of mere objects.