Interactivist Summer Institute

July 22 - 26, 2003


Affects, Autonomy, and AI

Craig DeLancey
Department of Philosophy
SUNY Oswego
Oswego, NY 13126
585-223-7417 (tel. and fax)

26 February 2003
ABSTRACT: The unifying theme that connects AI, philosophy, and other sciences of mind is autonomy. An autonomous system is a system that has its own purposes, some of which are directed to changing the external environment, and pursues those purposes with sufficient success and flexibility under changing environmental conditions. Autonomy is importantly related to affects. Real affects are a necessary condition for autonomy in biological systems, and provide a strategy to enable autonomy more generally. Affects include highly-encapsulated kinds, such as basic emotions. The affect program theory offers a way to understand highly-encapsulated emotions, and provides some insight into developing biologically realistic artificial autonomous systems.

“Affects, Autonomy, and AI”

The unifying theme that connects AI and robotics, neural science, philosophy, and other sciences of mind should be autonomy. In this paper, I will present a series of hypotheses as part of an argument that both the challenge of autonomy and the neglect of affects by AI can fruitfully be addressed together. There are at least two reasons to see affects and autonomy as related problems at this time. First, as researchers work to understand autonomy, they will find that our best understanding of some affects provides preliminary insights into what may be required to understand and ultimately recreate autonomy, suggesting a way out of a potential impasse for both AI and the theory of mind. Thus, researchers in AI and the theory of mind have an interest in understanding affects in order to understand autonomy. Second, many of those concerned solely with emotions and other affects, and not necessarily with the mind as a whole, should also have an interest in autonomy. One important set of issues in emotion research is how dependent an emotion must be upon other kinds of cognitive states. Some views see emotions as composed of high-level cognitive states (e.g., conscious declarative beliefs); others see emotions as potentially independent of such cognitive systems. If the latter kind of view is correct, some kind of account of agency that is more general than classical notions of mind-as-intelligence is needed; if the former view is correct, it would be helpful to know where a more general notion of autonomy fails to enable some feature of the emotions.

1. Autonomy, AI, and the Sciences of Mind

In a general way, the imperative to at least consider autonomy an issue has become well understood in AI. Many AI projects have as a goal a system that can be largely “left alone.” But typically AI systems have been developed with a focus upon recreating piecemeal aspects of autonomous systems, and not with a positive account of autonomy itself. Solving the kinds of piecemeal problems that we might list as challenges to a system being “left alone” or being capable of unsupervised learning is an essential but only very preliminary step. This piecemeal approach is very likely part of the reason why affects have been of marginal interest to AI. An approach to a particular problem, like chess performance, may be specified in terms of some basic rules. There is no demand for flexible learning, coping with changing environmental conditions, or valuing diverse and potentially conflicting opportunities – the very kinds of tasks which affects seem to enable in biological systems.

Clarity about what autonomy is ultimately will clarify what our design goals should be in AI; such clarity will also provide neural science with hypotheses about global system function that can be tested. Some important offerings have been made. Andy Clark has defined an autonomous agent as “a creature capable of survival, action, and motion in real time in a complex and somewhat realistic environment” (Clark 1997: 6). Rodney Brooks, in a seminal article on his bottom-up approach to AI, listed some requirements for “building Creatures,” or autonomous systems, including that “A Creature should do something in the world; it should have some purpose in being” (1991: 402). Christensen and Hooker argue that autonomous systems are “self-structuring far-from-equilibrium systems which seek out energy gradients that can maintain their dissipative processes and also act to maintain and sometimes modify and elaborate the processes which enable the exploitation of such gradients” (2000: 2-3).

These mutually consistent descriptions seem to me largely correct. However, the definition and description given by Christensen and Hooker is at a much lower level of detail than is readily applicable to philosophical debates about the nature of mind; a definition consistent with it, but pitched at a higher level, is possible. Furthermore, a more succinct account than those given by Brooks and Clark is possible. I will attempt one; it is preliminary, and revisions and criticisms are welcome, as they will help us get clear about what is now obscure.

An autonomous system is a system that (1) has its own purposes, some of which are directed to changing the real external environment; and (2) pursues those purposes (a) with sufficient success and (b) flexibly under changing environmental conditions. Each of these points requires some explanation. First, having purposes is a normative notion, essentially linked to the fact that representation is a normative notion. That is, a simple causal account of a purposeful system, where we assume that what is caused to happen is the function of the system, is not sufficient; we also need an account of how such a system can be right or wrong. If a probe on Mars is sitting on an incline and so fails to scoop up soil for some kind of soil test, and instead ends up performing the test on the thin Martian atmosphere, we say that it has failed in its purpose, and not that the soil testing apparatus is now a soil-and-Martian-atmosphere testing apparatus. This is a designed system, and so the normative constraints are apparent in our design intentions, but we can make the same kind of claims about biological systems. When a child is born with a hole in her heart, we say that there is a congenital defect. Of course, if we could trace the cause, we would see nothing like a “defective cause” -- that is, we would see no breakdown in physical or chemical laws. Rather, the hole is a defect because we understand the purpose of the heart in the whole biological system, and see that this heart will fail to properly perform this role. These kinds of cases make clear that a symptom of having purposes is that the system can fail.

In what follows, I will show that a better understanding of some affects will give us the first steps to understanding some kinds of purposefulness. However, as a general notion, there are a number of candidate ways to understand purposefulness. These arise from accounts that aim to determine what the function (and thus success conditions) of a biological subsystem are, and provide preliminary accounts of how a system can have purposes (that is, they are reductive accounts of purposefulness). There is a vast literature on this topic in philosophy (most of it following and modifying Wright 1973, 1976), and we need not review it here, but can mention a few examples. Wright’s formulation is that a biological subsystem has a function if that subsystem performs that function and the reason it exists is that it performs that function. The latter clause is typically explained by saying that the subsystem was selected for that role and continues to be inherited in the species because of that role (Millikan 1984). An alternative (which I find preferable) is to explain the sense of the subsystem having a function not by reference to evolution but rather to how it sustains other subsystems which in turn sustain it – a kind of circular bootstrapping found in very complex systems (Schlosser 1998; $$$).

Second, only once a system has purposes can it exhibit success and behavioral flexibility. If no instances of a kind of system exhibit some success in their purposes, then we cannot call the system autonomous. This is captured in, and replaces, Clark’s notion that the system must be capable of survival and must be acting in real time. Acting in real time is just another way of saying that the system must act quickly enough to succeed in achieving some of its purposes. That is, many tasks can be completed only if undertaken in a timely way. And survival, although typically a purpose and typically something that is necessary to achieve other purposes, is not always required. For many biological organisms, there are purposes which actually kill the organism; examples include breeding for salmon or stinging an attacker for the honey bee. And, we may want similar kinds of goals for designed systems; for example, a mine-detecting robot may have as its purpose the discovery and then self-destructive detonation of a land mine.

Success need not be complete or even common, however. A tiger may fail on most of its hunts, and we would still call it an autonomous hunter. Similarly, we may rightly ascribe autonomy to a type of system if only some of the instances of that type are successful. For many species, most of the young die shortly after birth or hatching. Those that survive, however, demonstrate autonomy in the fulfillment of some of their purposes. This is sufficient for us to say that the species is an autonomous type.

Like success, flexibility also follows from having purposes. A system that has purposes can in principle, when one strategy fails, pursue those purposes by attempting a different strategy. Flexibility comes in degrees, and we should neither expect nor demand an account of where flexible behavior ends and inflexible begins. Even very complex autonomous systems, like human beings, are flexible only within limits; we cannot solve some practical problems, cannot continue to think when the temperature in our environment is too high, and so on. However, even though the borders are unclear, it is an empirical fact that some systems do show remarkable degrees of flexibility. They move in order to assist thermoregulation, they adopt different strategies to hunt for food depending on different environmental conditions, they hide from threats in novel environments, and so on. Furthermore, ultimately we should be able to make robust predictions about what kind of systems will be more flexible than others. For example, in very advanced autonomous systems, like mammals, many purposes are represented in the mind/brain so that they can be pursued. However, a system might still have purposes even without them being richly represented; for example, an organism may have affects which compel it to random behaviors until certain states (that is, goals) are achieved. It is a likely hypothesis that systems that represent their goals with internal states, or at least represent objects or events which play some essential role in their goal state, could be capable of kinds of flexibility of which systems that do not represent their goals are incapable.
Considering just parts of biological systems: a reflex can be said to have a purpose, but it does not pursue that purpose very flexibly; a predator, however, may take advantage of representations of its prey to allow it to better attack (it may, for example, run towards where the prey is likely to be upon its arrival, even when the prey is invisible because it has run behind an obstacle). Thus, once represented, a goal would be available for different strategies to pursue. This example illustrates that it will be possible to lay out a ranking of systems in terms of their autonomy; an arbitrary line can be drawn at any place in such a ranking, and AI researchers could declare themselves primarily interested in behavior that fell beyond some such distinction.
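The contrast between a reflex and a goal-representing system can be made concrete with a toy sketch. This is purely illustrative, and every name in it (the agent class, the strategies) is hypothetical; the point is only that a goal represented separately from any one strategy allows fallback strategies, while a reflex allows none.

```python
# Illustrative sketch: a fixed reflex versus an agent with a represented
# goal that can swap strategies when one fails. All names are hypothetical.

def reflex(stimulus):
    """A fixed stimulus-response mapping: no represented goal, no fallback."""
    return "withdraw" if stimulus == "heat" else None

class GoalDirectedAgent:
    def __init__(self, goal, strategies):
        self.goal = goal              # an explicitly represented end state
        self.strategies = strategies  # alternative ways to pursue it

    def pursue(self, world):
        # Because the goal is represented apart from any one strategy, the
        # agent can try strategies in turn until the goal state is reached.
        for strategy in self.strategies:
            world = strategy(world)
            if world.get("state") == self.goal:
                return world
        return world

def chase_directly(world):
    # Works only while the prey is in view.
    world = dict(world)
    if world["prey_visible"]:
        world["state"] = "prey_caught"
    return world

def intercept(world):
    # Run to where the prey is likely to reappear, even when it is hidden.
    world = dict(world)
    world["state"] = "prey_caught"
    return world

agent = GoalDirectedAgent("prey_caught", [chase_directly, intercept])
result = agent.pursue({"prey_visible": False, "state": "searching"})
```

Here the direct chase fails because the prey is out of sight, and the agent falls back to interception, much as the predator in the example above runs to where the prey will arrive.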

There are profound questions that remain unanswered about the nature of autonomous systems, but this characterization is sufficient to make some important observations about the role that affects may play in them.

2. Real affects

If a system has goals and pursues those goals, it is typically useful to describe it as having motivations. Thus, we can use, and in some cases may need to use, the concept of motivation to make sense of autonomy. But the practical utility of referring to affects to explain autonomy contains an important ambiguity: it does not require that affects be real, measurable states. This is because affects may be merely attributed states. Philosophers call such a view “irrealism”; and forms of irrealism about motivation have been extremely influential in contemporary analytic philosophy (e.g., Davidson 1963; Dennett 1987). For example, for Dennett (1987), we understand some complex systems by ascribing to them beliefs and desires, and then predicting their behavior as a result. On such a view, motivations are a useful fiction, and nothing more. Dennett (1978) uses the example of a chess program. Its code is too complex for someone to explain its output in terms of the computations performed. However, we can easily “understand” the program if we assume things like, “it put its rook in danger because it believes we’ll take the rook with our queen and it wants to capture our queen.” But of course, in a typical chess program, the program believes and wants nothing. There are only processes, which search ahead through possibilities and output some syntax as a result. Nothing in these functions could plausibly be said to be a state dedicated to a role we would call an affect – or a belief, for that matter. One might thus be an irrealist about motivation, and assume that autonomous systems like humans are undifferentiated but overwhelmingly complex collections of processes, like very, very large chess programs, and there is nothing in this system that actually is a dedicated internal state which corresponds to our notion of motivation. Rather, it is convenient to talk of each other as having such states.
Irrealist approaches to affects are perhaps consistent with some emergentist expectations in some AI programs.

The question of whether we should be realists about affects has immediate practical import. If there are no real motivations, then they are not useful posits for the engineer concerned with creating autonomous systems from the ground up, and they are explanatory posits that the scientist of mind should work to eventually eliminate from theoretical discourse. Note also that irrealism about affects is not inconsistent with anything we have said so far about autonomy. Because flexibility comes in degrees, autonomy also comes in degrees. There is some sense in which one might want to argue that bacteria show some autonomy (more than the average AI program, alas), but they are clearly an example at the limit of what we mean by autonomy. Presumably we would not want to ascribe fears or desires or pains or any other affects to a bacterium. This raises the question: do we really need these affects to understand more complex organisms?

There are a number of reasons to suppose that we should be realists about at least some motivational states, when we are considering more complex organisms (i.e., certainly any mammal, and perhaps anything with a central nervous system); and, as a consequence, we should assume that there are genuine internal states of a complex autonomous system that have as their sole or primary role causing behavior and thus deserve the appellation “affect.” First, distinct affects have been a powerful explanatory tool in psychology, giving some credence to the idea that they actually exist. The role of these states in most psychological theory is as motivations that can be associated with diverse stimuli, resulting in a powerful tool for generalization in practical action. This kind of explanation requires motivations that are distinct in kind from other kinds of representational states, with which they can be associated. Second, some complex affects have behavioral features that are measurable and which plausibly function to support action. For example, when we are very frightened, adrenaline typically surges through our bodies. Our best explanation of this function is that it serves to prepare us for flight. This suggests a measurable, complex coordinated set of events that support some action. These aspects could plausibly be said to be caused by or to constitute a real affect. Third, there has long been evidence for differentiation in brain systems that enable motivation; the brain does not appear to be a homogeneous system of similar kinds of processes, as we expect of the snarl of complex processes I described above. Fourth, and finally, a distinction between motivations and other states is a better position for the AI researcher to start with, since the alternative is a homogeneous collection of processes that we hope will appear to have both motivations and information states when complex enough.
This sounds like an invitation to write spaghetti code and hope that eventually the system will become autonomous. That is, irrealism about affects offers no direction for those kinds of problems that (according to psychology or even folk psychology) require some differentiation of motivation. Some AI researchers have explicitly recognized this problem (see Brooks 1997: 297).

If we accept realism about motivation, and work with the hypothesis that there are genuine internal states to many autonomous systems that are motivations, then we may have a necessary minimal first building block for autonomy. Reward and punishment alone could plausibly allow for a system to have purposes, built out of the general purpose of maximizing reward and minimizing punishment. Such systems could learn strategies for achieving those goals, and acquire new goals and shed old ones, in that these motivations and new ones can be associated, dissociated, and reassociated with certain potential end states and activities. Thus, just a single or a handful of motivational states may be a sufficient starting place for developing an autonomous system.
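The idea that a single reward/punishment signal could bootstrap purposes can be sketched as a toy learner. This is an assumption-laden illustration, not a claim about any real architecture: the agent, its actions, and the learning rate are all invented for the example. What it shows is only that one generic motivational signal suffices to acquire, shed, and reassociate stimulus-action pairings.

```python
# Toy sketch: one generic reward/punishment signal shaping stimulus-action
# associations. All names and values are hypothetical.

class MotivatedAgent:
    def __init__(self, actions, lr=0.5):
        self.actions = actions
        self.values = {}   # (stimulus, action) -> learned value
        self.lr = lr       # learning rate for updating associations

    def choose(self, stimulus):
        # Prefer the action currently associated with the most reward.
        return max(self.actions,
                   key=lambda a: self.values.get((stimulus, a), 0.0))

    def learn(self, stimulus, action, reward):
        # Move the stored value toward the received reward or punishment;
        # repeated experience can also dissociate and reassociate pairings.
        key = (stimulus, action)
        old = self.values.get(key, 0.0)
        self.values[key] = old + self.lr * (reward - old)

agent = MotivatedAgent(["approach", "flee"])
# Reward shapes a "purpose": approach food, avoid fire.
for _ in range(10):
    agent.learn("food", "approach", +1.0)
    agent.learn("fire", "approach", -1.0)
    agent.learn("fire", "flee", +0.5)
```

After training, the agent approaches food and flees fire, without any dedicated fear or hunger subsystem: the differentiation is entirely learned from the single valenced signal.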

At this point, if the arguments and assumptions made up to now are correct, then we have established that AI should seek to create affects as a powerful strategy for creating complex autonomous systems. For the other sciences of mind, these hypotheses and corollaries suggest both directions for study and plausible frameworks for relating existing findings in an overarching theory. However, additional understanding about the nature of affects could be very fruitful. About the general nature of affects, there is another question which remains unanswered and which is highly relevant to AI and the theory of mind. There are a host of diverse motivations to which we refer in our common sense explanations of behavior: not just pleasure and pain, but anger, fear, grief, joy, lust, and so on. Are these just names for complex combinations of reward, punishment, and information states, or are they distinct kinds of motivations? And if different, are these differences any guide to strategies for creating autonomous systems?

3. Affects and encapsulation

Biological systems appear to have a very rich diversity of affects. To understand some of these motivations, and to have a plausible opportunity to reconstruct some of them, we need a better understanding of their nature. Neural science, psychology, and other sources of theories of mind have a range of perspectives to offer on affects, many of them inconsistent with the others. Most of these views differ most significantly along one dimension: the degree to which some affects are potentially encapsulated from other subsystems of the autonomous agent. This encapsulation will be from other information states and/or other affects (should there be more than one kind of affect). The encapsulation need only be potential, since a system may only show its encapsulation under certain conditions. To put this point differently, there are diverse views about how much of the behavior that an affect motivates is written into the affect itself, as a modular subsystem in some significant ways distinct from other subsystems of the brain. A highly-encapsulated system must have more of its behavior specific to that encapsulated system, since by definition it is less integrated with other systems and sometimes cannot use them to accomplish certain tasks.

For any particular affect, a range of views about its encapsulation are thus possible. We can simplify the issue by idealizing to two polar views.

On the low-encapsulation view, an affect may be widely used and integrated into (we might say, transparent to) a range of other systems. A low-encapsulation view about all affects, which we can call a global low-encapsulation view, would typically be of the form that, for the organism in question, we have a small number of relatively generic affects that can explain all of its behavior. Emotions like fear might be explained by reference to such generic motivations coupled with information states. In philosophy, a brand of theory of emotion called “cognitive theories” (here “cognitive” is used in a richer sense than in the sciences) reduces an instance of an emotion to beliefs and desires. Thus, fear of a rabid dog growling before me is something like the belief that this is a dog, that it might bite me, that it is rabid, that bites hurt, that rabid dog bites can cause rabies, and so on, and the desire not to be bitten, the desire not to suffer pain, the desire not to get rabies, and so on (Marks 1982; in some senses Davidson 1963, Solomon 1977). Similar views exist in psychology, often under the label “cognitive” also. In neural science, where belief is a state that is recognized to be extremely complex and not fully understood at present, a global minimal encapsulation view will more likely take the form of reliance upon association of stimuli with reward and punishment, perhaps augmented by certain kinds of accompanying cognitive states (Rolls 1999 appears to be an example of a global low-encapsulation view).

On the low-encapsulation view, the kind of complex stereotypical behavior one sees with some emotions may be explained by the relationship of a simpler affect to other states of the system. One consistently flees the rabid dog because one consistently reasons out that this is the best response to take. Such reasoning may or may not be conscious. Ultimately, the point is that emotional behaviors are going to be contiguous with other kinds of behaviors, and fully integrated into the systems which (in part) cause other kinds of behaviors.

On the high-encapsulation views, the relevant affects can be extremely diverse. Some affects will have dedicated neural systems, or will at least be enabled by particular functional arrangements of other neural systems which make a relatively unique use of their elements; in either case, each of the highly-encapsulated affective systems will be capable of operating with a high degree of independence from other affective systems and from some cognitive systems. Thus, fear may be best understood not as a combination of beliefs and desires; nor as a kind of relationship between reward or punishment states and associations with other information states, where all these states are potentially shared with other subsystems or constitute any other affects; but rather as the functioning of a partly or potentially independent fear system. Typical fearful behaviors, such as flight or freezing, may be seen as motor programs that in part constitute the affect.

It is important to recognize that each of these two kinds of view may be correct of different affects. Which affects are highly encapsulated, and which not, is an empirical question which remains unanswered. Furthermore, it is possible that even for each particular kind of affect some middle-ground view will prove correct, with some encapsulation but not a high degree. And, of course, details about the kinds of other systems that an affect is encapsulated from will ultimately replace the utility of a one-dimensional measure. Finally, both the low and the high-encapsulation views are able to offer plausible explanations of the phenomena as we understand them now. It is still possible (and frequent) for some neural scientists and psychologists to defend strong forms of one position, while others defend a strong form of the other position, for the same affects. That is, if someone flees in fear or attacks in anger, both the hypothesis that they are behaving in response to certain generic motivations given certain information, and the hypothesis that they were caused by fear to flee or anger to attack, are possible explanations, and so both are still going concerns. Nonetheless, this kind of distinction is a useful one, since it can make clear some practical consequences for design and for hypothesis testing.

Each approach suggests distinct architectures for the AI developer. On a global low-encapsulation approach, a small set of affects would be used to motivate all behaviors. Complex motivations would be built out of these affects and other mental states. The approach would be consistent with explanations of emotions in highly cognitive terms (e.g., Ortony, Clore, and Collins 1988 provide an excellent example of a global low-encapsulation approach which is described in terms that were readily used by AI researchers). Thus one might pursue, for example, a classical symbol manipulation architecture, where certain outcomes have associated with them reward or punishment; the agent reasons (in some sense) from outcomes to possible paths to those outcomes; and paths to reward are motivated, paths to punishment avoided.
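The low-encapsulation architecture just described can be sketched in a few lines. The situations, outcomes, and values below are invented for illustration; the point is the shape of the architecture: a single generic valence attached to outcomes, with all "emotional" behavior derived by reasoning from valued outcomes back to paths, and no dedicated fear or anger subsystem anywhere.

```python
# Sketch of a global low-encapsulation architecture: generic
# reward/punishment values on outcomes, plus reasoning from outcomes
# to paths. All situations and values are hypothetical.

# Outcomes carry a single generic reward/punishment value.
OUTCOME_VALUE = {"bitten": -10, "safe_indoors": +5, "stay_put": 0}

# Available action paths and the outcomes they lead to, per situation.
PATHS = {
    "rabid_dog_nearby": {
        "approach": "bitten",
        "flee_indoors": "safe_indoors",
        "ignore": "stay_put",
    }
}

def choose_path(situation):
    # The agent "reasons out" the best response: it searches from paths to
    # valued outcomes and is motivated toward reward, away from punishment.
    options = PATHS[situation]
    return max(options, key=lambda act: OUTCOME_VALUE[options[act]])
```

On this architecture, fleeing the rabid dog is not the output of a fear system; it is simply the path that the generic valuation ranks highest, exactly as the cognitive theories above would have it.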

To pick a clear contrast, a global high-encapsulation approach would assume that there are affects but they are very distinct programs. Each of these programs coordinates behavioral changes like motor programs, perceptual changes, autonomic changes, and so on. The developer seeking to emulate such a system would develop each affect as a semi-independent behavioral subsystem. These affect subsystems could and typically would interact with other mental systems, but just as we could see them act alone in abnormal situations in biological organisms (such as at high speeds – as in the mere exposure effect, Zajonc 1968, 1980 – or after brain damage or in surgical preparations), we should expect them to be capable of operating with varying degrees of independence from other systems. An architecture consistent with this might be the subsumption architecture of Brooks, augmented with a posit that some affects are low-level states and not emergent ones (see Brooks 1998). On this approach, robots are built out of distinct and partly independent systems which communicate, and even compete, with each other to result in behavior, and some of these could be affect systems.
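The contrasting high-encapsulation architecture can be sketched the same way. The modules, triggers, and priorities below are invented for illustration, and the competition rule is a crude stand-in for the layered arbitration of a subsumption architecture; what matters is that each affect is a semi-independent subsystem carrying its own behavior, able to act without any deliberative layer.

```python
# Sketch of a global high-encapsulation architecture: each affect is a
# distinct program with its own trigger and motor behavior, and active
# modules compete to drive the agent. All details are hypothetical.

class AffectModule:
    def __init__(self, name, trigger, motor_program, priority):
        self.name = name
        self.trigger = trigger              # predicate on the percept
        self.motor_program = motor_program  # behavior the affect carries
        self.priority = priority

    def activated(self, percept):
        return self.trigger(percept)

fear = AffectModule("fear",
                    trigger=lambda p: p.get("threat", 0) > 0.5,
                    motor_program="flee", priority=2)
anger = AffectModule("anger",
                     trigger=lambda p: p.get("obstruction", 0) > 0.5,
                     motor_program="attack", priority=1)

def behave(percept, modules=(fear, anger)):
    # Active modules compete; the highest-priority one drives behavior,
    # loosely as lower layers can subsume others in Brooks's architecture.
    active = [m for m in modules if m.activated(percept)]
    if not active:
        return "explore"  # default behavior when no affect fires
    return max(active, key=lambda m: m.priority).motor_program
```

Note the design contrast with the low-encapsulation sketch: here "flee" is written into the fear module itself, so the behavior remains available even if the rest of the system is degraded, mirroring the biological cases of high-speed or post-lesion affective response mentioned above.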

For at least three reasons, I believe that for many affects, which I will call here “basic emotions,” a high-encapsulation view is the better working hypothesis. The first reason is that this better explains the behaviors that are caused by or otherwise accompany basic emotions. This point is most clear when we think of those cases where emotional behavior “sticks out” beyond what we would expect of a low-encapsulation view. Jones may know that flying is safe, but be afraid to fly and act accordingly, even though it impedes his career. Here Jones is acting contrary to his own interests and to his professed understanding. Smith may be afraid of snakes but be very well educated. Faced with a garter snake in his front yard, he agrees that it is unlikely to attack him, and that it cannot open doors, and that it certainly cannot unlock a door. But, to get away from the snake, Smith runs into his house, locks the front door, and moves away from the door. Smith is acting beyond the requirements of his own interests according to his own professed understanding. If you accept that cases like these are possible, and that our usual explanation is correct (Jones is afraid of flying; Smith is terrified of snakes), then some emendation of the low-encapsulation view is required. Either Jones and Smith must be allowed to believe contradictory things (so that there is no encapsulation between the information that flying is dangerous and the information that flying is not dangerous, and for some reason the former is consistently winning out in determining behavior), or they must act on information that they do not profess to believe (that is, there must be some encapsulation between professed beliefs and the information states that initiate and guide their action). 
Either proposal is reasonable for the low-encapsulation view; however, either proposal could be introduced to make the low-encapsulation view consistent with any observation, possibly making the theory unfalsifiable, but more importantly revealing that the theory can tell us nothing about why organisms continually reuse certain strategies (as opposed to the infinitely many other strategies the low-encapsulation view would allow).

This should draw our attention to the second reason: there are characteristic behaviors which do not well fit a low-encapsulation view, and which raise the question of why these emotional behaviors should be common to our species and others. Many different kinds of animals appear to have fear and anger and grief and other basic emotions, with recognizable behaviors accompanying them. The low-encapsulation view can explain these behaviors, but what it cannot explain is why these behaviors are shared across cultures for humans, and across species. We may well ask why flight or attack, along with the many behaviors that accompany them (characteristic displays, regular autonomic changes), should be common strategies. For the high-encapsulation view, there is an explanation: there are neural structures that enable the affect programs, which are inheritable, pancultural in humans, and are enabled by structures which have homologues in many other animals.

A third reason to adopt the high-encapsulation view about some affects is similar to the practical reason to be a realist about affects: as a working hypothesis, it is a useful one for AI researchers. The architectural constraints are far greater on a high-encapsulation approach to basic emotions than on a low-encapsulation approach. On a low encapsulation approach, some small number of motivational states will be used to build the programs for all behaviors. This gives the developer so much freedom in her approach that she has little to glean from such theories. A high-encapsulation approach, however, will require the developer to implement or choose between a number of distinct motivational types, each with its own kind of behavioral response. These can include some generic motivational state(s), but can also include things like flee, attack, defend, and so on. In other words, the AI researcher has at her disposal a range of behavioral programs that can be developed and implemented.

4. The affect program theory

If we assume a high-encapsulation view about some group of affects, then we assume that the relevant agents have dedicated affect systems which motivate them in particular ways. Many highly encapsulated affects appear to be just those affects for which we typically reserve the term “emotion.” There is a theory of emotions which is consistent with this approach: the affect program theory. Suitably extended, the affect program theory of emotions provides a fruitful starting place for an AI approach to emotions and autonomy. On this theory, some of the things we call emotions are inherited (and in humans, pancultural) complex coordinated states of the organism that include things like motor programs, autonomic changes, alterations in perception, and other factors (Tomkins 1962, 1963; Ekman 1980; Panksepp 1998). One version of this theory argues that these emotions evolved from motor programs, and are still in part fruitfully understood as motor programs (see DeLancey 2002: 25ff). Plausible candidates for affect programs, which I will continue to call “basic emotions,” include fear, anger, and disgust. The affect program theory suggests benefits to both architecture and theory.

In terms of architecture, the hypothesis that affect programs evolved in part as motor programs, accompanied by other features such as displays of intent, is highly suggestive. An affect program, idealized and simplified, would seem to have some of the following elements:

- A motor program, which can produce relational behaviors or may be suppressed;
- A set of action preparedness features;
- Perceptual integration and alteration, which can produce evaluations;
- Effects on learning and memory;
- Display behaviors, which may be suppressed.
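
Idealized in code, these elements might be grouped as the fields of a single structure. The following is a minimal sketch; every name and value here is hypothetical, illustrating the decomposition rather than any committed design.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class AffectProgram:
    """One highly encapsulated affect (e.g., fear), idealized as five elements."""
    name: str
    motor_program: Optional[str] = None   # relational behavior, e.g. "flee"; may be suppressed
    action_preparedness: Dict[str, float] = field(default_factory=dict)  # e.g. autonomic changes
    perceptual_bias: Dict[str, float] = field(default_factory=dict)      # evaluative weights on percepts
    memory_gain: float = 1.0              # multiplier on memory consolidation while active
    display: Optional[str] = None         # communicative behavior; may also be suppressed

# An illustrative instance; the numbers are invented for the sketch.
fear = AffectProgram(
    name="fear",
    motor_program="flee",
    action_preparedness={"heart_rate": 1.4, "muscle_tension": 1.3},
    perceptual_bias={"threat": 2.0},
    memory_gain=1.5,
    display="fear_face",
)
```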

Some affects appear to have as a clear primary function a specific relational behavior – that is, a behavior concerned not solely with communication or internal changes, but with directly changing some state of the world relevant to the organism. Not all affects plausibly contain such motor programs, but some appear to: flight seems to be a defining feature of fear, attack of anger, vomiting and withdrawal of disgust, and so on. There is much to recommend the view that mind should be understood foremost as a motor control system (Llinas 2001: “that which we call thinking is the evolutionary internalization of movement”). But even if one rejects such a view, an approach to building autonomous systems that starts with clearly defined motor programs – such as flee or attack – offers a clear place to begin constructing an affect, and also a theoretically powerful way to conceive of motivation in terms of motor programs. We don’t always flee when afraid, or attack when angry, but it does appear that we are more likely to undertake these behaviors. As a working hypothesis, I suggest that when a basic emotion that contains a motor program is elicited, the motor program must be suppressed if the behavior is not to occur. That is, fear without flight requires both fear and some kind of motor suppression. Furthermore, when we feel an emotion strongly, our own phenomenology is that, although we may not act on that emotion – we may feel fear but not flee, we may feel anger but not attack – we nonetheless feel motivated to act on it. We can think of this as an engine revving while the clutch is engaged. This approach, in which motivational activation is the default and suppression must be actively applied, has the advantage of keeping the system prepared for action, and may also offer a strategy for communicating the motivational aspect of the affect to other subsystems.
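
The working hypothesis – activation by default, with suppression as the active intervention – can be sketched as follows. The class and method names are my own inventions for illustration, not drawn from any existing system.

```python
class ElicitedAffect:
    """A basic emotion whose motor program runs unless actively suppressed."""

    def __init__(self, name, motor_program):
        self.name = name
        self.motor_program = motor_program
        self.active = False
        self.suppressed = False

    def elicit(self):
        self.active = True        # the engine starts revving

    def suppress(self):
        self.suppressed = True    # the clutch is engaged

    def step(self):
        """Return the behavior emitted this cycle, if any."""
        if self.active and not self.suppressed:
            return self.motor_program   # the default: fear issues in flight
        return None

    @property
    def prepared(self):
        # Even under suppression the system remains mobilized for action,
        # which is the advantage claimed for the default-activation scheme.
        return self.active

fear = ElicitedAffect("fear", "flee")
fear.elicit()
assert fear.step() == "flee"   # unsuppressed fear issues in flight
fear.suppress()
assert fear.step() is None     # fear without flight: motor suppression applied
assert fear.prepared           # but the system stays ready to act
```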

Many basic emotions include more than a disposition to certain relational behaviors; some appear to include features which can assist the relevant action. In fear or anger, heart rate may change, adrenalin may flow, muscles may become tense, and so on. These changes serve to ensure greater success of the relational action, if it is taken. These same changes, and perhaps others, also appear to be part of the perceptual changes that can occur when an emotion is active. Laboratory evidence that our perception changes to match our affects (called “emotional congruence”) is still preliminary, but it lends some support to the common wisdom that the fearful see threats, the angry see slights, and the joyful see through rose-colored glasses. These changes may also help to assist in action. It is better, for example, to be attentive and even overcautious in a frightening situation, even though this may tend to produce too many false-positive identifications of threats. The fact that we experience frequent affective arousal, evidenced by things like the skin conductance response, indicates not only that we often perceive things in affective terms, but also that we sometimes have some degree of affective reaction to even the most passing of perceptions.
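
Emotional congruence suggests one very simple mechanism: lower the evidence threshold for affect-relevant categories while the affect is active. A toy classifier makes the trade-off explicit; the threshold values are invented for illustration.

```python
def classify_threat(evidence, fearful=False):
    """Label a stimulus a threat if its evidence clears a threshold.

    When fear is active the threshold drops: the fearful see threats,
    buying attentiveness at the cost of more false positives.
    """
    threshold = 0.4 if fearful else 0.7
    return evidence >= threshold

# A weak cue (evidence 0.5) is ignored by a calm system...
assert not classify_threat(0.5, fearful=False)
# ...but flagged by a fearful one: overcaution as preparedness.
assert classify_threat(0.5, fearful=True)
```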

Basic emotions also appear to play a role in learning. Some of the autonomic changes that accompany some kinds of emotional arousal affect how well we remember an event perceived during that arousal (e.g., Cahill et al. 1994). The ability to associate a stimulus with an appropriate affect could be extremely useful in future encounters with that stimulus: we want to be able to recall quickly and accurately whether something is frightening, infuriating, and so on. Together, the typical relational behaviors, the changes they can cause, and their role in learning mean that an agent with some basic emotions thereby has a rich set of valuations it can bring to bear on its world. Things can now be recognized not only in terms of disinterested properties they may have (green, large, triangular, etc.), but also in terms of affective properties. They are frightening, a property which is not disinterested because it encourages us to flee from the thing; or infuriating, which encourages us to attack the thing; and so on. Human beings, and many other animals, move through their world in a constant state of affective excitement: things we see, hear, smell, and touch cause in us affective reactions. These reactions are mostly fleeting, and mostly go unnoticed. But the world remains for us a place of opportunities and threats, of fearful things and infuriating things, of sad events and joyous events. If we approach the design of an autonomous system as a challenge to incorporate affect programs like fear and anger, then we have provided our autonomous system not only with motor strategies but also with specific valuations. The world of the robot can now include fearful and annoying events and objects.
As such, these events or objects could be understood not only in terms of recognition and categorization (as in the obsolete GOFAI paradigm of the robot recognizing and filing the details of its environment), but also in terms of perception and understanding. To have frightening or infuriating things in your environment is in part to encounter the world as offering affordances for flight and attack. The system in part perceives the world in these terms. This approach also offers a way to understand things in these terms: items can be remembered not only (if at all!) in terms of some formal ontology constructed in a knowledge representation system, but as fearful things, infuriating things.
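
The learning role can be sketched as affective tagging: stimuli encountered under arousal are stored with the active affect, and arousal scales how strongly the association is retained. All names here are hypothetical, and the arousal-weighted update is a deliberate simplification of the consolidation effects reported by Cahill et al. (1994).

```python
class AffectiveMemory:
    """Associate stimuli with affects; arousal strengthens the association."""

    def __init__(self):
        self.valuations = {}   # stimulus -> {affect: strength}

    def encounter(self, stimulus, affect, arousal):
        # Higher arousal yields stronger retention of the affective tag.
        tags = self.valuations.setdefault(stimulus, {})
        tags[affect] = tags.get(affect, 0.0) + arousal

    def recall(self, stimulus):
        """Return the dominant affective valuation of a stimulus, if any."""
        tags = self.valuations.get(stimulus)
        if not tags:
            return None
        return max(tags, key=tags.get)

memory = AffectiveMemory()
memory.encounter("snake", "fear", arousal=0.9)
memory.encounter("snake", "anger", arousal=0.1)
assert memory.recall("snake") == "fear"   # the snake is remembered as frightening
assert memory.recall("flower") is None    # no valuation has been learned for it
```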

Finally, although space limitations have led me to neglect this issue in my discussion of autonomy, many affects, and especially basic emotions, appear to have important social roles, and these are in part served by display behaviors. The most obvious of these in humans are characteristic, pancultural, and inherited facial displays. The important social roles that anger and fear, for example, surely play in human society are greatly augmented by our ability to communicate that we are in such a state without having to engage in the relational behavior of the state. The utility of being able to credibly threaten attack without having to attack, or to communicate genuine fear to one’s peers, should be obvious.

5. Conclusion: five key hypotheses

I have proposed that the unifying theme that connects AI and robotics, neural science, philosophy, and other sciences of mind should be autonomy; and that autonomy and affect are fruitfully treated together at this time. I have done this by defending five hypotheses:

1. The unifying theme that connects AI, philosophy, and the other sciences of mind is autonomy.
2. An autonomous system is a system that has its own purposes, some of which are directed at changing the external environment, and that pursues those purposes with sufficient success and flexibility under changing environmental conditions.
3. Real affects are a necessary condition for autonomy in biological systems, and provide a strategy to enable autonomy more generally.
4. Affects include highly-encapsulated kinds, such as basic emotions.
5. The affect program theory offers a way to understand highly-encapsulated emotions, and provides insight into developing biologically realistic artificial autonomous systems.

Averill, James (1980) “Emotion and Anxiety: Sociocultural, Biological, and Psychological Determinants.” In Rorty: 37-72.

Brooks, Rodney (1997) “From Earwigs to Humans.” Robotics and Autonomous Systems, Vol. 20, Nos. 2–4, June: 291–304.

----- (1991) “Intelligence without Representation.” In John Haugeland (ed.) (1997), Mind Design II: Philosophy, Psychology, Artificial Intelligence: 395-420. Cambridge, MA: The MIT Press.

Cahill, Larry, Bruce Prins, Michael Weber, and James L. McGaugh (1994) “β-Adrenergic activation and memory for emotional events.” Nature, 371: 702-704.

Christensen, W. D. and C. A. Hooker (2000) “Autonomy and the Emergence of Intelligence: Organised Interactive Construction.” Communication and Cognition – Artificial Intelligence vol. 17 (3-4): 133-157.

Clark, Andy (1997) Being there: Putting Brain, Body and World Together Again. Cambridge, MA: The MIT Press.

Davidson, Donald (1963) “Actions, Reasons, and Causes.” In Essays on Actions and Events, New York: Oxford University Press, 3-20.

DeLancey, Craig (2002) Passionate Engines: What emotions reveal about Mind and Artificial Intelligence. New York: Oxford University Press.

----- (1998) “Real Emotions.” Philosophical Psychology, 11 (4).

Dennett, Daniel (1978) “Intentional Systems.” In Brainstorms, Cambridge, MA: The MIT Press, 3-22.

----- (1987) “True Believers.” In The Intentional Stance, Cambridge, MA: The MIT Press, 13-36.

Ekman, Paul (1980) “Biological and cultural contributions to body and facial movement in the expression of emotions.” In Rorty: 37-72.

Fodor, Jerry (1990) “A Theory of Content, I.” In A Theory of Content and Other Essays, Cambridge, MA: The MIT Press, 51-88.

Llinas, Rodolfo R. (2001) I of the Vortex: From Neurons to Self. Cambridge, MA: The MIT Press.

Marks, Joel (1982) “A Theory of Emotion.” Philosophical Studies 42: 227-242.

Millikan, Ruth (1984) Language, Thought, and Other Biological Categories. Cambridge, MA: The MIT Press.

Minsky, Marvin (1985) The Society of Mind. New York: Simon & Schuster.

Newman, P. L. (1960) “‘Wild Man’ Behavior in a New Guinea Highland Community.” American Anthropologist, 62: 603-623.

Panksepp, Jaak (1998) Affective Neuroscience. New York: Oxford University Press.

Rolls, Edmund (1999) The Brain and Emotion. Oxford: Oxford University Press.

Rorty, Amelie Oksenberg (1980) Explaining Emotions. Berkeley: University of California Press.

Schlosser, Gerhard (1998) “Self-re-production and Functionality: A Systems-Theoretical Approach to Teleological Explanation.” Synthese 116: 303-354.

Sloman, Aaron (2001) “Varieties of Affect and the CogAff Architecture Schema.” Presented at the AISB 2001 convention.

Solomon, Robert (1977) The Passions. Garden City, NY: Anchor Books.

Tomkins, Silvan (1962) Affect, Imagery, Consciousness. Volume 1. New York: Springer.

----- (1963) Affect, Imagery, Consciousness. Volume 2. New York: Springer.

Wright, Larry (1973) “Functions.” Philosophical Review 82: 139-168.

----- (1976) Teleological Explanations. Berkeley: University of California Press.

Zajonc, R. B. (1968) “Attitudinal effects of mere exposure.” Journal of Personality and Social Psychology Monograph 9 (2): 1-28.

----- (1980) “Feeling and Thinking: Preferences need no inferences.” American Psychologist 35 (2): 151-175.


1. I describe here a form of the “disjunction problem,” which generalizes from representation to other kinds of normative systems, such as goal-directed systems, to show that a simple causal account of such systems is inadequate. See Fodor (1990).

