Function, Anticipation, Representation

Mark H. Bickhard
Lehigh University
mhb0@lehigh.edu
http://www.lehigh.edu/~mhb0/mhb0.html

Abstract

Function emerges in certain kinds of far-from-equilibrium systems. One important kind of function is that of interactive anticipation, an adaptedness to temporal complexity. Interactive anticipation is the locus of the emergence of normative representational content, and, thus, of representation in general: interactive anticipation is the naturalistic core of the evolution of cognition. Higher forms of such anticipation are involved in the subsequent macro-evolutionary sequence of learning, emotions, and reflexive consciousness.

Keywords: representation, action selection, anticipation, far-from-equilibrium systems, self-maintenant systems

1. Introduction

I argue that representation is emergent in a particular form of anticipatory function, a kind of anticipation involved in action selection. Specifically, representation is emergent in anticipations of what further actions and interactions are possible under current or indicated conditions. Developing and supporting that claim, plus arguing that various alternative approaches to representation are flawed, is the primary focus of this paper.

There are six steps in this development: 1) establishing the legitimacy of naturalistic emergence, 2) modeling the emergence of normative function, 3) modeling the emergence of primitive representation, 4) critiquing some alternative models of representation, 5) indicating the adequacy of this model of representation for more complex forms of representation, and 6) situating this model of representational phenomena in a broader framework of a macro-evolutionary sequence of anticipatory adaptations.

2. Emergence

There are many notions of emergence in the literature, of varying kinds and strengths. I argue for a naturalistic form of emergence that yields genuinely novel causal powers; on one hand, it is not a form of epiphenomenal emergence, and on the other hand, it is fully naturalistic.

2.1 Kim's Argument

Jaegwon Kim has developed one of the strongest arguments against emergence in the literature, and I take his argument as a point of departure (Kim, 1989, 1990, 1991, 1992a, 1992b, 1993a, 1993b, 1997). Kim argues that any emergence model faces a dilemma: either emergence exists, but naturalism fails, or naturalism is correct, and emergence is at best epiphenomenal.

The core reasoning points out that, on a naturalistic account, everything is ultimately made up of fundamental particles, of whatever kind they turn out to be. Perhaps quarks, gluons, and electrons, perhaps something else.

But this point immediately yields the consequence that everything that happens is caused by the causal interactions of these basic particles, in whatever spatio-temporal organization in which they find themselves. In that case, the only genuine causality is that of the particles, and any causal regularities at any higher level of organization are nothing more than such regularities -- they are not causal powers in their own right, but merely manifestations of the causal dance of the fundamental particles. Consequently, there is no higher level emergence of novel causal power: everything at higher levels of organization is epiphenomenal relative to the fundamental particles -- including all mental properties and processes!

Or, on the other hand, some higher level organizations do yield genuinely novel emergent causal power. But, in this case, that higher level causality yields consequences that are not consequences of the working out of the causal interactions of basic physical particles, and we find that the world is not physically closed: there are some causal chains that do not remain strictly at the physical level. This is a kind of dualism. Consequently, naturalism fails.

2.2 Kim's Reductio

I find Kim's argument to be valid, but unsound. That is, the logic is correct, but at least one of the assumptions is false. In a sense, Kim has discovered a reductio of that false assumption (Bickhard, 1998a).

The false assumption is that of a particle metaphysics. If a particle metaphysics were correct, there would seem to be no way to avoid Kim's dilemma. But, there are no particles.

First, it should be clear that no metaphysics restricted to particles could suffice: dimensionless points would never interact and nothing would ever happen. At a minimum, we need particles plus the fields with respect to which they interact -- gravitational, electric, whatever. The introduction of fields as loci of causal power is already sufficient for my basic point, but the case against particles is actually even stronger.

Our best current physics, quantum field theory, is a theory of continuous ongoing processes in the vacuum. Any particle-like manifestations of field activity are results of the quantization of that field activity, and are no more metaphysical particles than are the quantized waves in a vibrating guitar string. So, not only do particles not suffice, there are no particles. Everything is patterns and organizations of quantum fields (Aitchison, 1985; Aitchison & Hey, 1989; Brown & Harré, 1988; Davies, 1984; Ryder, 1985; Sciama, 1991; Weinberg, 1977, 1995).

The crucial point here is that, in the particle metaphysics that Kim presupposes, higher level organizations are just the stage on which and in which the particles engage in their dance. Particles do not have organization; they participate in organization. Organization, then, is not a legitimate locus of causal power, it is merely the stage setting.

But the core of the notion of emergence is emergence in particular patterns and organizations, so Kim has shown that, in the framework of a particle metaphysics, such emergence, if it exists, forces a failure of naturalism. From a different angle, if naturalism is accepted as a working framework, Kim has shown that naturalistic emergence plus a particle metaphysics are inconsistent. He has found a reductio of particle metaphysics -- given that emergence in some sense clearly occurs.

A metaphysics of quantum fields, on the other hand, is inherently organized. A field cannot exist at a single point of space-time; fields are processes, and processes must be organized. This means that organization cannot be delegitimized as a locus of causal power, because nothing exists, at any level, that isn't organized. So, to delegitimize organization as a locus of causal power would yield the conclusion that there is nothing with causal power.

Organization, then, is a legitimate locus of causal power, including, potentially, of emergent causal power. Models of emergence, to be consistent, must work within a process metaphysics, unless they focus exclusively on some epiphenomenal sense of emergence, an emergence without genuine causal import in the world.

3. Normative Function

I will be modeling representation as an emergent of a kind of function, the function of action selection, and the anticipations upon which those selections are based. But representation is a normative phenomenon -- it can be true or false -- and accounting for that normativity in a naturalistic way is one of the difficult aspects of representation to model. In this model, the normativity of representation is derivative from the normativity of biological function, the sense in which it is the normative function of the heart to pump blood. A descriptive normativity, ultimately derived from some human observer or user or designer, such as the sense in which it is the function of an automobile to carry passengers, or of a computer program to handle word processing, will not suffice: normativity that is derived from unmodeled human beings is not a naturalized normativity.

The dominant approach to modeling normative function in the literature is what is usually designated as the etiological approach, for which Ruth Millikan is among the most careful advocates (Godfrey-Smith, 1994; Millikan, 1984, 1993). I could develop the representational model on such an etiological model of function, though I would do so in a different way than is to be found in that etiological literature, but I find that the etiological approach to normative function itself fails the test of naturalistic emergence.

3.1 Etiological Function

The core intuition of the etiological approach to function is that organs have functions -- the heart to pump blood, the kidney to filter blood, and so on -- insofar as they have the right evolutionary history. In particular, if their ancestors have been selected for yielding those consequences. So, previous hearts have been selected for, and thus this heart exists now, because they have in fact pumped blood and that has permitted the organisms to live and reproduce, thus eventuating in this organism with this heart.

This is a strong model, and has a strong sense of naturalism to it. After all, nothing but pure evolutionary facts are being appealed to, and the model seems to make good sense.

3.2 Some Counter-Intuitive Consequences

The etiological approach, however, does have some counter-intuitive consequences. One of them is that the first time an organ had some consequence that was useful to the organism, that consequence did not constitute being functional for the organism, because there was not yet any evolutionary selection history for it. In a few generations, however, when such a selection history does exist (or at least might exist), the descendants of that organ could well have the function of yielding that useful consequence.

A thought experiment yields an even stronger counter-intuitive consequence. Suppose that, by magic or extreme chance, molecules in the air come together to form a lion that is molecule by molecule identical to a lion in the zoo. The organs in the zoo lion will have functions, because they have the proper history, but the organs of the suddenly appearing lion will not have any functions, because they have no evolutionary history at all.

This is counter-intuitive. After all, the lions, by assumption, are identical molecule by molecule. Nevertheless, the usual conclusion is that the overall model is so strong that a few counter-intuitive results such as these are just the price to be paid for a naturalistic model. Science often changes the details of some of our pre-sophisticated notions.

3.3 A Failure of Naturalism

I argue, however, that the etiological model fails the criterion of naturalism. The lion example has an aspect that is not usually seen in this literature. The two lions, by assumption, have identical current states, but one lion has organs with functions and the other doesn't. Current state, then, is not sufficient to model etiological function: the right history must be there.

But only current state can have causal consequences. History can have causal consequences only insofar as history factors through current state. And appeal to distal causes, such as evolutionary history, is legitimate only insofar as those distal causes factor through current state without loss. Simply put, the past cannot cause anything in the future without the full mediation by the present (state).

Etiological function, then, since it cannot be modeled in current state, is not a causally efficacious model of function. Etiological function is causally epiphenomenal. Etiological function fails to provide a naturalistic account of the emergence of normative function (Bickhard, 1993, 1998a, in press-a).

3.4 Self-maintenant Systems

I offer a completely different approach to the modeling of normative function, an approach that yields a naturalistic emergence of such function. Consider a system that is far from thermodynamic equilibrium. Such a system, if isolated from its environment, will go to equilibrium. Maintenance of such a system in its far from equilibrium condition requires ongoing interactions with the environment that counteract the natural tendency toward equilibrium. A chemical bath, for example, may be maintained in a far from equilibrium condition by the continuous infusion of various chemicals from external reservoirs by electric pumps. In this case, maintenance at far from equilibrium conditions is totally dependent on these external pumps and reservoirs.

Some systems, however, contribute to their own conditions of far from equilibrium stability. They help to maintain the conditions of their far from equilibrium state. A candle flame, for example, maintains above threshold combustion temperatures, and, in standard atmospheres and gravitational conditions, induces convection, which brings in fresh oxygen and gets rid of waste products. Candle flames make definite contributions to the maintenance of their far from equilibrium continued existence. They are self-maintenant.

3.5 Function as Self-maintenant Contribution

I propose to model the serving of a function as making such a contribution to the self-maintenance of a far from equilibrium system. Function, in this model, emerges with self-maintenance.

Note first that this model is in terms of current system state, and, thus, is not epiphenomenal. Either the system remains far from equilibrium or it goes to equilibrium, and the causal consequences of the two possibilities can be vastly different.

Second, such functional contribution may fail. If the candle flame runs out of wax, or if it is doused with water, the self-maintenant contributions are insufficient, and the candle has no alternative self-maintenant functional contributions to make.

Third, this model of serving a function is relative to the system involved. The heart of a parasite may serve a function for the parasite, but it is dysfunctional for the host.

Fourth, this is a model of serving a function as the primary locus of emergence. This is the opposite of the etiological model, in which the primary locus is that of having a function, and a heart, for example, serves a function insofar as it succeeds in a function that it has. The model under development, then, must eventually address the notion of having a function, and the related notion of having a function even if not serving it, such as a kidney that has failed.

3.6 Having a Function: Recursive Self-maintenance

Self-maintenant contributions only succeed in certain conditions. The candle is out of luck if the wax runs out. The maintenance of above threshold combustion temperatures fails if too much energy is removed too quickly: the burning does not provide enough energy to counteract all possible energy drains. The conditions under which the serving of a function succeeds constitute the dynamic presuppositions of those functional processes. If the dynamic presuppositions hold, the processes succeed; if the dynamic presuppositions do not hold, the processes fail.

For a simple self-maintaining system, if the dynamic presuppositions of its self-maintaining processes fail, the system goes to equilibrium and ceases to exist. Some systems, however, have the ability to switch among two or more means of being self-maintaining, two or more functional processes, such that if the dynamic presuppositions of one fail, the dynamic presuppositions of the other may hold. Such switching, then, can expand the range of conditions under which the system can be self-maintenant to include certain changes in conditions. Such a system can maintain its condition of being self-maintenant in the face of environmental changes. It is recursively self-maintenant.

A simple example is a bacterium that can swim so long as it is swimming up a sugar gradient, but can tumble if it finds itself swimming down a sugar gradient (D. Campbell, 1974, 1990). In general, the conditions in which swimming is appropriate are not those in which tumbling is appropriate. The dynamic presuppositions of the two are different.

A recursively self-maintenant system requires some way of differentiating environments, and some way of switching among available self-maintaining processes appropriate to those differentiations. The bacterium must differentiate "up the gradient" from "down the gradient". Such differentiation and switching processes require some sort of infrastructure in the system, some organization of processes in the system that has a longer time scale than that of the differentiation and switching per se. That is, such differentiation and switching requires that the mechanisms for differentiation and switching be effectively stable relative to the processes of differentiation and switching.
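To make this concrete, here is a minimal Python sketch of such differentiation-and-switching. It is my own toy construction, not anything from Campbell's treatment or from bacterial biology: the one-dimensional sugar gradient, the class and method names, and the step sizes are illustrative assumptions only.

```python
import random

class Bacterium:
    """Toy recursively self-maintenant system: it differentiates 'up the
    gradient' from 'down the gradient' and switches between two
    self-maintaining processes (swim, tumble) accordingly."""

    def __init__(self):
        self.last_sugar = 0.0
        self.heading = 1  # +1 or -1 along a one-dimensional gradient

    def differentiate(self, sugar_now):
        """Pure differentiation: only the resulting condition, not anything
        about sugar itself, is functionally available to the switch."""
        rising = sugar_now >= self.last_sugar
        self.last_sugar = sugar_now
        return "up_gradient" if rising else "down_gradient"

    def select_process(self, condition):
        """Switch among self-maintaining processes on the basis of the condition."""
        return self.swim if condition == "up_gradient" else self.tumble

    def swim(self):
        return self.heading                      # keep going in the same direction

    def tumble(self):
        self.heading = random.choice([1, -1])    # reorient randomly
        return self.heading


def sugar_at(position):
    """Hypothetical environment: sugar concentration increases with position."""
    return position

position = 0.0
cell = Bacterium()
for _ in range(50):
    condition = cell.differentiate(sugar_at(position))
    position += 0.1 * cell.select_process(condition)()
print(round(position, 2))   # tends to drift up the gradient
```

The point of the sketch is architectural: the differentiation yields nothing but a final condition, and the appropriateness of swimming versus tumbling resides entirely in the switching organization.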

There are many important aspects of such infrastructural phenomena, such as the introduction of metabolism (Moreno & Ruiz-Mirazo, 1999), but the point of importance for the current discussion is that such infrastructure provides a differentiation of parts of the overall system. In turn, such parts can be presupposed by various self-maintenant processes; that the parts do certain things can be dynamically presupposed by other processes. In that sense, the parts will have a function (for further development of such distinctions, including that of proper function, see Bickhard, 1993, 1998a, in press-a; Christensen & Bickhard, in preparation).

4. The Emergence of Representation

In a simple recursively self-maintenant system, such as the bacterium, the selection of which action to perform is by simple triggering: if an "up sugar gradient" condition is differentiated, then swim, while if a "down sugar gradient" is differentiated, then tumble. In more complex systems, there may be more than one system interaction that is possible under a single differentiated condition. An animal can, in general, walk in many different directions. Some process more complex than triggering is involved in the selection of next interactions in such situations.

One aspect of such selection is that of indicating, anticipating, which interactions are in fact available in a given current environment. That is, one aspect is the indication of the range of interactions among which the selection should "choose". One way this could be done, to use a computer analogy, would be with pointers to appropriate subsystems that would engage in various kinds of interactions. Such pointers could be set up when appropriate environmental differentiations had occurred. The computer architecture is a limiting and misleading framework in some deep respects (Bickhard, 1993; Bickhard & Terveen, 1995), but it serves to indicate how such indications could occur. It is these indications of interactive potentiality that are central to the interactive model of representation.
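Pressing the computer analogy just one step further, the following hedged Python sketch illustrates how differentiations might set up such pointers; the subsystem names, the differentiation table, and the trivial selection rule are all invented for illustration.

```python
from typing import Callable, Dict, List

# Hypothetical interaction subsystems the agent could engage.
def scan_left() -> str: return "done"
def scan_right() -> str: return "done"
def grasp() -> str: return "done"

interaction_subsystems: Dict[str, Callable[[], str]] = {
    "scan_left": scan_left,
    "scan_right": scan_right,
    "grasp": grasp,
}

# Indications of interactive potentiality: set up when a differentiation
# arrives at a particular final state, these function as pointers to the
# subsystems indicated as available in the current environment.
current_indications: List[str] = []

def on_differentiation(final_state: str) -> None:
    """Set up indications contingent on the differentiated condition."""
    table = {
        "state_A": ["scan_left", "grasp"],   # several interactions may be indicated
        "state_B": ["scan_right"],
    }
    current_indications[:] = table.get(final_state, [])

def select_and_run() -> str:
    """Selection must 'choose' among the indicated interactions (here: the first)."""
    return interaction_subsystems[current_indications[0]]()

on_differentiation("state_A")
print(current_indications)   # ['scan_left', 'grasp']
print(select_and_run())      # 'done'
```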

4.1 Truth Value

The central point is that an indication that such and such an interaction is available in the current environment may be false. Such an indication is, in effect, a predication about that environment, a predication that that environment is appropriate for this interaction. Such an indication, such a predication, constitutes the emergence of a primitive bearer of truth value. Truth value, in turn, is the central emergent property of representation.

4.2 Content

The indication of an interactive potentiality is true or false, but what makes it true or false? Such an indication will hold, be true, if the dynamic presuppositions of the indication are true. That is, it is the dynamic presuppositions that constitute the content of the indication. An interactive indication is a primitive form of representation, with emergent truth value, and content constituted by the dynamic presuppositions of the indication (Bickhard, 1993, 1998b, in press-a; Bickhard & Terveen, 1995).

Note that such interactive content is implicit. It is presupposed, not explicitly encoded. Such implicitness is of fundamental importance -- I argue elsewhere, for example, that it provides a solution to the frame problems (Bickhard & Terveen, 1995). This implicitness is a dynamic generalization of the notion of implicit definition as found in model theory, especially the sense in which a set of formal sentences (interactive kinds) implicitly defines the class of its models (appropriate environments) (Bickhard, 1998a; Hilbert, 1971; Kneale & Kneale, 1986).

4.3 Contact

An indication of interactive potentiality constitutes an emergence of content with truth value: of representation. On what basis are such indications created? I have been discussing such creations, such setting up of interactive indications, as contingent on differentiations of appropriate environmental categories. It is such differentiations that provide the necessary contact with the environment for the interactive indications to be appropriate to that environment, or at least to have a chance to be so appropriate.

Such differentiations can occur with any interaction. The course of an interaction will depend on the organization of the (sub)system engaged in the interaction and on the environment being interacted with. In particular, some environments will leave the subsystem in one final state when the interaction has completed, while other environments will leave the system in different final states when the interaction is finished. Such final states, then, serve to categorize environments either together, if they yield the same state, or apart, if they yield different states. This is a pure differentiation. There is no control information functionally available to the system in such a differentiation beyond what state it arrived at. But this is all that is needed.

If the system has evolved, or learned, that environments yielding final states of type A are appropriate for interactions of type Q, while those of type B are appropriate for interactions of type R, that is all that is required. It may well be the case that interactions of type Q require that there be a fly in a certain location in order for Q to succeed -- that is, it may well be that Q interactions dynamically presuppose that there is a fly at a certain location -- and it may be that a particular (visual) interaction happens to factually arrive at A when there is in fact a fly in that location. But nothing in this model requires that that fly or its location per se be explicitly represented in the system. All that is represented is that the frog, say, is in a particular kind of tongue-flicking and eating situation.
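The point can be illustrated with a toy Python sketch, again of my own construction: the fly figures only in the simulated environment, never as an encoded item inside the "frog", which functions entirely over final states and their evolved pairing with interaction types.

```python
import random

def environment():
    """Simulated world: the fly is a fact about the environment, not anything
    represented inside the frog."""
    return {"fly_at_strike_location": random.random() < 0.5}

def visual_interaction(world):
    """An interaction whose final state differentiates environments; type-A
    environments just happen to be ones containing a suitably placed fly."""
    return "A" if world["fly_at_strike_location"] else "B"

# Evolved (or learned) pairing of final states with indicated interaction types.
indicated = {"A": "Q_tongue_flick_and_eat", "B": "R_keep_scanning"}

world = environment()
final_state = visual_interaction(world)
# All the frog 'represents' is that it is in a Q-appropriate (or R-appropriate)
# situation; the dynamic presupposition -- a fly at that location -- stays implicit.
print(final_state, indicated[final_state])
```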

4.4 Encodingism

Nevertheless, in standard treatments, precisely such a visual differentiation is supposed to represent, to encode, that which has been differentiated. The frog is supposed to encode the fly, and then infer on the basis of such encoded representations that such and such kinds of eating are possible. Such models assume that what is differentiated is thereby encoded. The representation of the fly is constituted in the correspondence with that fly -- an informational, or causal, or nomological, or some other kind of correspondence, depending on the particular model at issue.

Such correspondence models conflate contact with content. They assume that content is of whatever contact is with. In so doing, they encounter a labyrinth of fatal errors, some of which have been known for millennia, some of which have only recently been noticed (Bickhard, 1993, in press-a; Bickhard & Terveen, 1995).

Here is one. In a correspondence model, there are only two possibilities: either the crucial correspondence exists, in which case the "representation" exists, and it is correct, or the correspondence does not exist, in which case the representation does not exist, and, therefore, cannot be incorrect. There are only two correspondence possibilities, but there are three representational possibilities that have to be accounted for: the representation exists and is correct; the representation exists and is incorrect; and the representation does not exist. This problem and its variants have attracted a great deal of effort during the last decades (e.g., Dretske, 1981, 1988; Fodor, 1981, 1990a, 1990b, 1998; Loewer & Rey, 1991; Millikan, 1984, 1993). I will not address these in detail here. Suffice it to say that none of these attempts have succeeded in providing a naturalistic model of normative representation (see, e.g., Bickhard, 1993, 1998b, in press-a; Bickhard & Terveen, 1995).

On the other hand, note that the interactive model has no trouble at all with such issues of the possibility of error. An indication of interactive potentiality can exist and be true, it can exist and be false, or it may not exist at all. All three representational possibilities are accounted for simply and directly. Furthermore, the interactive model accounts for a strengthened version of the "possibility of error" issue. If an interaction is indicated, and is undertaken, the course of the interaction may or may not be consistent with the indications. If it is not, then the representation of that interactive possibility is not only false, but its falsity is potentially detectable by the system itself.

This is critical to such phenomena as error-guided action and error-guided learning. They require not just the possibility of error, but the possibility of system-detectable error. No other model in the literature even addresses this strengthened error problem. Clearly such error guidance occurs, so clearly it is possible, but it is not possible in current models. Error guidance is, in and of itself, a refutation of current models (Bickhard, 1993, 1999).
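A minimal sketch of this strengthened, system-detectable sense of error, under the assumption that an indication pairs an interaction with an anticipated internal outcome (the interaction name and outcome labels are illustrative only):

```python
def flick_tongue(world):
    """Outcome depends on whether the dynamic presupposition actually holds."""
    return "prey_captured" if world.get("fly_present") else "nothing_there"

def run_indicated(interaction, anticipated_outcome, world):
    """Undertake the indicated interaction and check its course against the
    anticipation. A mismatch is error detectable *by the system itself* --
    the kind of signal that error-guided action and learning require."""
    actual = interaction(world)
    return actual == anticipated_outcome

# The indication: this interaction, with this anticipated outcome.
print(run_indicated(flick_tongue, "prey_captured", {"fly_present": True}))   # True: borne out
print(run_indicated(flick_tongue, "prey_captured", {"fly_present": False}))  # False: falsity detected
```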

There is only one form of correspondence representation that is genuine: genuine encodings, such as Morse code, in which "..." encodes "S", for example, or the use of some natural phenomenon as an encoding or indicator of some other, such as smoke indicating fire or neutrino fluxes encoding fusion processes in the sun. These are real and important, but they cannot be the foundational form of representation.

Encodings, whether they be conventional in nature or natural, require that the persons using them know both ends of the encoding relationship and know that relationship itself. Only with such full prior knowledge does "..." encode "S", or neutrino fluxes encode fusion processes. Again, these cases are important, but, because they require prior knowledge of the elements and the relationships between them, such encodings cannot provide new representations, or representations for something that is not already represented. Encodings borrow representational content from whatever is being encoded. Encodings cannot generate new or emergent representational content.

The conflation between contact and content that is so widespread is an identification that actually holds only for cases of genuine encodings: there is one encoding correspondence that provides both contact and content for the encoding. For this reason, I often dub such models "encoding models" or such positions as constituting "encodingism" -- they assume that all representation has the character of encoded representation, and that is not possible. Encodings must ultimately derive their representational content from some emergent source, and that cannot be encodings themselves on pain of circularity or infinite regress (Bickhard, 1993).

4.5 Anticipation

The emergence of representational truth value, of representational content, occurs in indications, or anticipations, of interactive potentialities. It is anticipation that can be false, and that can be detected to be false by the system itself.

Note that anticipation is future oriented, while standard models of representation are past oriented -- they look back down the stream of perceptual inputs, trying vainly to see (into the past) what the inputs are coming from. Interactive anticipation, in contrast, anticipates into the future yet to come, and may discover that it anticipated incorrectly.

This anticipation is action and interaction based, and is not possible in a merely passive processing of inputs. In being action based and future oriented, the interactive model joins the general pragmatist orientation. It also joins in the pragmatist critique of passive "spectator" models of perception and representation (Joas, 1993; Rosenthal, 1983, 1986; Smith, J., 1987).

In being action based, the interactive model forces that the emergence of representation occur only in embodied systems. It forces that all representation is fundamentally perspectival and contextualized -- all differentiations and all interactive potentialities are for a system in a particular location and time and orientation. It forces that representation and cognition are temporally characterized, and cannot be captured by the purely sequential formalisms of Turing machine theory and its equivalents or by the decontextualized encodings of formal model theory (Bickhard & Terveen, 1995).

In being future oriented, interactive representation inherently partakes of modalities: interactive possibilities and impossibilities and necessities. Modality is not an additional consideration somehow added onto an atemporal and amodal encoding model. Modalities must be differentiated out of this primitive mix, and this is in fact what we find in the cognitive development of children (Bickhard, 1988; Piaget, 1987).

And so on. In general, the interactive model accounts for representational truth value and content, but in ways and with properties that are not familiar in standard approaches. These differing characteristics themselves constitute part of the power of the model (Bickhard, 1998b; Bickhard & Terveen, 1995).

5. What About More Familiar Representations?

The interactive model accounts for the emergence of representational content and truth value, but, so far, seems to do so only for relatively primitive representations of interactive potentialities. What about more familiar kinds of representations, such as of simple objects?

Consider a child with a toy block. The block offers, or affords (Bickhard & Richie, 1983; Gibson, 1977), multiple possible interactions. There are many visual scans and many manipulations available. Some of these are directly available to the child. Some of them are contingently available in the sense that they may require some intermediate interaction to bring a particular interactive potentiality into direct potentiality. The block may need to be rotated, for example, in order to bring some particular visual scan into direct affordance.

So, interactive indications can branch, as when there are multiple possibilities in a given condition, and they can conditionally iterate, as when the scan is possible if the manipulation is engaged in first. Such conditionalized indications can form vast and complex webs of indications, anticipations, of interactive potentiality.

Some subwebs, such as for the toy block, have two crucial additional properties: 1) every potentiality in the web is reachable from every other part of the web, perhaps with intermediate interactions, and 2) the overall subweb and its internal reachability remain invariant under an important class of other interactions and happenings. The affordances of the toy block remain even if it is put away in the toy box, or if the child walks into a different room. Such interactions will embed the potentialities of the toy block in differing mediating interactions required in order to re-access the potentialities of the block -- perhaps walking back into the original room, or opening the toy box -- but the subweb of anticipated potentialities remains itself invariant.

It is not invariant under all such interactions, however. If the block is crushed or burned, then its affordances disappear. In general, such locomotor and manipulational invariances of internally reachable webs of interactive potentiality are what constitute representations of simple objects. This is clearly a generally Piagetian model of object representation (Bickhard, 1980a, 1998b; Campbell & Bickhard, 1986; Piaget, 1954, 1977), and I would urge similarly Piagetian-inspired action based models of other familiar forms of representation, such as of abstractions and mathematics. In any case, it is clearly not in general an aporia to ask of the interactive model how it can account for more complex and sophisticated forms of representation.
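Here is a toy sketch, of my own construction, of such a web: nodes are interactive potentialities, edges are indications that one interaction makes another directly available, and the two properties just discussed appear as internal reachability and as invariance of the subweb under embedding interactions.

```python
from collections import deque

def reachable(web, start):
    """Everything reachable from `start`, possibly via intermediate interactions."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in web.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Subweb for a toy block: scans and manipulations, conditionally iterated
# (rotating the block makes the far-side scan directly available, and so on).
block_web = {
    "scan_front": ["rotate_block", "grasp_block"],
    "rotate_block": ["scan_back", "scan_front"],
    "scan_back": ["rotate_block"],
    "grasp_block": ["scan_front"],
}
nodes = set(block_web)

# Property 1: every potentiality is reachable from every other.
print(all(nodes <= reachable(block_web, n) for n in nodes))   # True

# Property 2: putting the block in the toy box only embeds the subweb behind a
# mediating interaction; the subweb and its internal reachability are invariant.
embedded = {"open_toy_box": ["grasp_block"], **block_web}
print(nodes <= reachable(embedded, "open_toy_box"))           # True: still re-accessible

# Crushing the block, by contrast, destroys the subweb: those potentialities are gone.
```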

6. Higher Order Anticipatory Processes

Representation emerges in recursively self-maintenant systems. It emerges in the anticipations of interactive possibilities that such systems "consider" in selecting their next interactions. But all such possibilities and selections must be inherent in the organization of the system infrastructure. The entire field of such potentialities must be anticipated in the hard wiring of the organism. Clearly, that is not always possible. Sometimes situations change character much too rapidly for evolution to keep pace. Sometimes learning, or perhaps something even more powerful, is required -- some way of appropriately expanding the realm of potentialities of interaction. Anticipation must ultimately be of change in possibility as well as of possibility per se. I now turn to some characterizations of such higher order anticipations.

The discussion has, of necessity, been somewhat sketchy, and will now become even more so. I will outline a macro-evolutionary sequence consisting of interactive knowing and representing, or recursively self-maintenant systems, followed by learning, then by emotions, and finally by reflexive consciousness. Greater detail than this outline is available in other sources (Bickhard, 1980b; Bickhard & D. Campbell, in preparation; Campbell & Bickhard, 1986). The point of the outline is to illustrate that anticipation is involved in all orders of adaptation and adaptability.

6.2 Interactive Knowing

A system is interactively knowing its environment insofar as it is successfully interacting with it. If so, it will also be successfully interactively representing it. Since the problem of motivation is itself the problem of action selection, such a system will also be successfully solving its problems of motivation (Bickhard, 1997, in press-b).

But, if such a system encounters failure, it has limited resources available for response. It could try again; it could switch to some other available interaction. But, by assumption, it could not attempt to create some new form of interaction. It could not engage in learning.

Nevertheless, if we consider a system continuously in process, taking into account the organizations of interactive potentialities as its flow of interactive process proceeds, we already find several properties of interest. This will be a process flow that is inherently contentful, inherently from a point of view, inherently contextualized and situated. These are crucial properties of simple, unreflexive consciousness, or simple awareness. Even this level of the model can begin to account for some properties of consciousness (Bickhard, 1988b, in press-b, in press-c).

6.3 Learning

If the system can respond to interactive failure with a reorganization of interactive control processes in an appropriate way, then a primitive form of learning can emerge. If interactive failure, for example, induces a destabilization of the interactive processes that yielded that failure, and a stabilization of interactive processes that are successful, then the system will have a simple variation and selection learning process, an evolutionary epistemology (D. Campbell, 1974). Heuristic learning is to be preferred to blind variation, when it is available, but it requires its own powerful process architectures (Bickhard & Campbell, 1996).
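A minimal sketch of such a variation-and-selection learning process, under my own illustrative assumptions about the repertoire, the success probabilities, and the stabilization parameters:

```python
import random

# Hypothetical repertoire of interaction procedures and their current stability.
procedures = {"procedure_1": lambda: random.random() < 0.2,   # rarely succeeds
              "procedure_2": lambda: random.random() < 0.8}   # usually succeeds
stability = {name: 1.0 for name in procedures}

def select_procedure():
    """Prefer stabilized procedures; fall back on blind variation when none remain."""
    stable = [name for name, s in stability.items() if s > 0.1]
    return random.choice(stable if stable else list(procedures))

for _ in range(200):
    name = select_procedure()
    if procedures[name]():                                   # interactive success
        stability[name] = min(2.0, stability[name] + 0.1)    # selective retention
    else:                                                    # interactive failure
        stability[name] = max(0.0, stability[name] - 0.2)    # destabilization

print(max(stability, key=stability.get))   # typically 'procedure_2'
```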

Such a learning system is an evolutionary advance on a simple interactive knower in at least two senses. First, a learning process is built upon an interactive process; it is a modification and addition to it. Second, such an addition will increase the adaptability of the overall system. A learning system can learn to anticipate; it anticipates the necessity to learn new anticipations. A learning system, then, constitutes a second step in a macro-evolutionary sequence of increasing adaptedness to adaptability.

6.4 Emotions

If a learning system encounters a novel situation, say, a tiger on a path, it can engage in learning trials. But, insofar as such learning trials are themselves relatively unguided, something more powerful is sometimes useful. In particular, a learning system per se can encounter conditions in which its anticipations fail, but its only available coping is to attempt to learn a successful anticipation, and it may not have the temporal luxury of doing so.

The problem is that such a system can be in a condition of anticipatory uncertainty, but cannot interactively deal with such a condition. It can only attempt to cope via learning, not directly via interaction. The reason is that, although the system by assumption has available functional information about its own internal condition of anticipatory uncertainty, that information is available only to the learning processes, not to the interactive processes. Consequently, what is not possible for such a system is the development, whether via evolution or learning or both, of generic interactive manners of coping with types of such uncertainty situations. It would be nice, for example, if, no matter how novel and unheard of it was to encounter a tiger on a trail, there would nevertheless be some general characterization of this as a large likely dangerous animal situation, or something like that, with the generic interactive strategy of running.

If information of interactive anticipatory uncertainty were fed back into the interactive system as an input, then this new capacity would result. Such a system would be capable of developing general interactive strategies for relevant kinds of uncertainty situations. Such a system would interact with its own uncertainty conditions. I offer such interactions with internal interactive anticipatory uncertainty conditions as a model of emotions (Bickhard, 1980b, in press-b).

Just to illustrate a little further, if an uncertainty situation is encountered, and it is categorized, differentiated, as of a kind for which no likely successful interaction is available, then the original uncertainty about the situation will be augmented by the additional uncertainty of how to deal with this uncertainty situation. In particularly virulent versions of this, a runaway positive feedback of uncertainty about uncertainty about uncertainty, and so on, can result -- a panic attack.

On the other hand, if the uncertainty situation is categorized as of a kind for which a general coping strategy is known -- I may not know the solution to this problem, but I do know a strategy for solving this kind of problem -- then the overall anticipation is of the resolution of the uncertainty. I offer this distinction as the general form of the difference between negative and positive emotions.
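The feedback architecture suggested here can be sketched roughly as follows, with all categories, strategies, and numbers invented for illustration: anticipatory uncertainty is fed back into the interactive system as an input, where it can itself be differentiated and met with a generic strategy or, failing that, compound.

```python
def interactive_step(situation_inputs, uncertainty_signal=None):
    """First-level interaction; with the feedback loop in place, internal
    anticipatory uncertainty is just another input the system can
    differentiate and interact with."""
    if uncertainty_signal is None:
        return {"action": "proceed", "uncertainty": 0.0}   # anticipations cover the situation
    # The uncertainty itself gets categorized; a generic strategy may be indicated.
    generic_strategies = {"large_dangerous_animal": "run",
                          "unfamiliar_terrain": "slow_down_and_probe"}
    strategy = generic_strategies.get(uncertainty_signal["category"])
    if strategy is not None:
        # Anticipated resolution of the uncertainty: roughly, the positive case.
        return {"action": strategy, "uncertainty": 0.0}
    # No coping strategy: uncertainty about the uncertainty compounds --
    # the runaway version of this would correspond to something like panic.
    return {"action": "cast_about", "uncertainty": uncertainty_signal["level"] * 2}

print(interactive_step({"path": "clear"}))
print(interactive_step({"path": "tiger"},
                       {"category": "large_dangerous_animal", "level": 0.7}))
print(interactive_step({"path": "tiger"},
                       {"category": "never_seen_before", "level": 0.7}))
```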

Much more development of the emotions model is needed, but, for current purposes, what is most important is that: 1) systems capable of emotions as modeled constitute an increase in adaptability over learning systems, and 2) the modeled emotion processes are a modification and addition to the processes already present in learning systems. Again, we have a next macro-evolutionary step of increasing adaptedness to adaptability.

6.5 Reflexive Consciousness

An emotion system is a kind of meta-system that monitors uncertainty conditions in the flows of interactive and learning processes. In addition, it outputs a signal of that uncertainty into the interactive system. An emotion system, then, is a partial interactive system, with the first level interactive system as its interaction environment. If the inputs to such a meta-system come to be adequate to the functional flow at the first level, and the outputs come to be competent to modify and re-organize that flow, then a full second level, or meta-interactive system will have evolved.

Such a second level system, a second level interactive knowing system, will be able to track first level interactive processes and organization. It will be able to rehearse particular first level processes and contents. It will be able to "examine" first level interactive organization as a means to planning and anticipating the environment. These are just a few of the new capabilities that emerge with a second level knowing system.

As is familiar, such a step: 1) constitutes an increase in adaptability over a system capable only of emotions (e.g., planning), and 2) it is generated by a modification and addition to processes already present in emotion systems. Yet again, we have a next macro-evolutionary step of increasing adaptedness to adaptability. It is worth pointing out that this sequence of knowing, learning, emotions, and reflexive consciousness is in fact the sequence in which these phenomena evolved (Bickhard, 1980b, 1988b; Campbell & Bickhard, 1986).

One additional phenomenon emergent with reflexive consciousness will be mentioned. Just as the flow of first level interaction is inherently contentful, with the content being about environmental properties, the flow of second level knowing will be inherently contentful, with the content being about first level properties. There are many important kinds of such properties, such as the predicational organization of interactive anticipations. But one that is of focal concern is that of the properties that distinguish one kind of ongoing first level interactive flow from another -- the properties that distinguish one experiential flow from another. I would offer this kind of second level analysis and representation as a model of the emergence of the consciousness of experience, and of the quality of experience, including, in cases of highly abstract analysis, perceptual qualia.

Qualities of experience, in this model, are emergences of second level analysis of first level experience. They are not originary and constitutive of that first level experience. Qualia, for example, are not building blocks of perception, nor of perceptual experience. It is a vestige of mistaken positivistic empiricism to assume that they are.

7. Conclusion

We have traced higher order forms of anticipatory processes through the macro-evolutionary sequence of (interactive) knowing, learning, emotions, and reflexive consciousness, finding along the way many concurrent and companion emergences of mental characteristics and properties. Anticipation is at the core of self-maintenance -- at all levels.

References

Aitchison, I. J. R. (1985). Nothing's Plenty: The vacuum in modern quantum field theory. Contemporary Physics, 26(4), 333-391.

Aitchison, I. J. R., Hey, A. J. G. (1989). Gauge Theories in Particle Physics. Bristol, England: Adam Hilger.

Bickhard, M. H. (1980a). Cognition, Convention, and Communication. New York: Praeger Publishers.

Bickhard, M. H. (1980b). A Model of Developmental and Psychological Processes. Genetic Psychology Monographs, 102, 61-116.

Bickhard, M. H. (1988). The Necessity of Possibility and Necessity. Review of Piaget's Possibility and Necessity. Harvard Educational Review, 58(4), 502-507.

Bickhard, M. H. (1993). Representational Content in Humans and Machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285-333.

Bickhard, M. H. (1997). Is Cognition an Autonomous Subsystem? In S. O'Nuallain, P. McKevitt, E. MacAogain (Eds.). Two Sciences of Mind. (115-131). Amsterdam: John Benjamins.

Bickhard, M. H. (1998a). A Process Model of the Emergence of Representation. In G. L. Farre, T. Oksala (Eds.) Emergence, Complexity, Hierarchy, Organization, Selected and Edited Papers from the ECHO III Conference. Acta Polytechnica Scandinavica, Mathematics, Computing and Management in Engineering Series No. 91, Espoo, Finland, August 3 - 7, 1998, 263-270.

Bickhard, M. H. (1998b). Levels of Representationality. Journal of Experimental and Theoretical Artificial Intelligence, 10(2), 179-215.

Bickhard, M. H. (1999). Interaction and Representation. Theory & Psychology, 9(4), 435-458.

Bickhard, M. H. (in press-a). Autonomy, Function, and Representation. Communication and Cognition.

Bickhard, M. H. (in press-b). Motivation and Emotion: An Interactive Process Model. In R. D. Ellis, N. Newton (Eds.) The Cauldron of Consciousness. J. Benjamins.

Bickhard, M. H. (in press-c). The Emergence of Contentful Experience. In T. Kitamura (Ed.). What Should be Computed to Model Brain Functioning? Singapore: World Scientific.

Bickhard, M. H., Campbell, Donald T. (in preparation). Variations in Variation and Selection: The Ubiquity of the Variation-and-Selective-Retention Ratchet in Emergent Organizational Complexity.

Bickhard, M. H., Campbell, R. L. (1996). Topologies of Learning and Development. New Ideas in Psychology, 14(2), 111-156.

Bickhard, M. H., Richie, D. M. (1983). On the Nature of Representation: A Case Study of James J. Gibson's Theory of Perception. New York: Praeger.

Bickhard, M. H., Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science -- Impasse and Solution. Amsterdam: Elsevier Scientific.

Brown, H. R., & Harré, R. (1988). Philosophical foundations of quantum field theory. Oxford: Oxford University Press.

Campbell, D. T. (1974). Evolutionary Epistemology. In P. A. Schilpp (Ed.) The Philosophy of Karl Popper. (413-463). LaSalle, IL: Open Court.

Campbell, D. T. (1990). Levels of Organization, Downward Causation, and the Selection-Theory Approach to Evolutionary Epistemology. In Greenberg, G., & Tobach, E. (Eds.) Theories of the Evolution of Knowing. (1-17). Hillsdale, NJ: Erlbaum.

Campbell, R. L., Bickhard, M. H. (1986). Knowing Levels and Developmental Stages. Contributions to Human Development. Basel, Switzerland: Karger.

Christensen, W. D., Bickhard, M. H. (in preparation). The Dynamic Emergence of Normative Function.

Davies, P. C. W. (1984). Particles Do Not Exist. In S. M. Christensen (Ed.) Quantum Theory of Gravity. (66-77). Adam Hilger.

Dretske, F. I. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Dretske, F. I. (1988). Explaining Behavior. Cambridge, MA: MIT Press.

Fodor, J. A. (1981). The present status of the innateness controversy. In J. Fodor RePresentations (257-316). Cambridge: MIT Press.

Fodor, J. A. (1990a). A Theory of Content. Cambridge, MA: MIT Press.

Fodor, J. A. (1990b). Information and Representation. In P. P. Hanson (Ed.) Information, Language, and Cognition. (175-190). Vancouver: University of British Columbia Press.

Fodor, J. A. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.) Perceiving, acting and knowing. (67-82). Hillsdale, NJ: Erlbaum.

Godfrey-Smith, P. (1994). A Modern History Theory of Functions. Nous, 28(3), 344-362.

Hilbert, D. (1971). The Foundations of Geometry. Open Court.

Joas, H. (1993). American Pragmatism and German Thought: A History of Misunderstandings. In H. Joas Pragmatism and Social Theory. (94-121). University of Chicago Press.

Kim, J. (1989). The Myth of Nonreductive Materialism. Proceedings and Addresses of the American Philosophical Association, 63, 31-47.

Kim, J. (1990). Supervenience as a Philosophical Concept. Metaphilosophy, 21(1-2), 1-27.

Kim, J. (1991). Epiphenomenal and Supervenient Causation. In D. M. Rosenthal (Ed.) The Nature of Mind. (257-265). Oxford University Press.

Kim, J. (1992a). "Downward Causation" in Emergentism and Non-reductive Physicalism. In A. Beckermann, H. Flohr, J. Kim (Eds.) Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism. (119-138). Berlin: Walter de Gruyter.

Kim, J. (1992b). Multiple Realization and the Metaphysics of Reduction. Philosophy and Phenomenological Research, 52, 1-26.

Kim, J. (1993a). Supervenience and Mind. Cambridge University Press.

Kim, J. (1993b). The Non-Reductivist's Troubles with Mental Causation. In J. Heil, A. Mele (Eds.) Mental Causation. (189-210). Oxford University Press.

Kim, J. (1997). What is the Problem of Mental Causation? In Chiara, M. L. D., Doets, K., Mundici, D., van Benthem, J. (Eds.) Structures and Norms in Science. (319-329). Dordrecht: Kluwer Academic.

Kneale, W., Kneale, M. (1986). The Development of Logic. Oxford: Clarendon.

Loewer, B., Rey, G. (1991). Meaning in Mind: Fodor and his critics. Oxford: Blackwell.

Millikan, R. G. (1984). Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.

Millikan, R. G. (1993). White Queen Psychology and Other Essays for Alice. Cambridge, MA: MIT Press.

Moreno, A., Ruiz-Mirazo, K. (1999). Metabolism and the Problem of its Universalization. BioSystems, 49, 45-61.

Piaget, J. (1954). The Construction of Reality in the Child. New York: Basic.

Piaget, J. (1977). The Role of Action in the Development of Thinking. In W. F. Overton, J. M. Gallagher (Eds.) Knowledge and Development: Vol. 1. (17-42). New York: Plenum.

Piaget, J. (1987). Possibility and Necessity. Vols. 1 and 2. Minneapolis: U. of Minnesota Press.

Rosenthal, S. B. (1983). Meaning as Habit: Some Systematic Implications of Peirce's Pragmatism. In E. Freeman (Ed.) The Relevance of Charles Peirce. (312-327). La Salle, IL: Monist.

Rosenthal, S. B. (1986). Speculative Pragmatism. La Salle, IL: Open Court.

Ryder, L. H. (1985). Quantum Field Theory. Cambridge: Cambridge University Press.

Sciama, D. W. (1991). The Physical Significance of the Vacuum State of a Quantum Field. In S. Saunders, H. R. Brown (Eds.) The Philosophy of Vacuum. (137-158) Oxford: Clarendon.

Smith, J. E. (1987). The Reconception of Experience in Peirce, James, and Dewey. In R. S. Corrington, C. Hausman, T. M. Seebohm (Eds.) Pragmatism Considers Phenomenology. (73-91). Washington, D.C.: University Press.

Weinberg, S. (1977). The Search for Unity, Notes for a History of Quantum Field Theory. Daedalus, 106(4), 17-35.

Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1: Foundations. Cambridge: Cambridge University Press.