Why Children Don't Have to Solve the Frame Problems

Mark H. Bickhard
Lehigh University






Mark H. Bickhard
Department of Psychology
17 Memorial Drive East
Lehigh University
Bethlehem, PA 18015
610-758-3633 office
610-758-3630 psychology dept
mhb0@lehigh.edu
http://www.lehigh.edu/~mhb0/mhb0.html

Running head: The frame problems

Key words: frame problems, representation, pragmatism, interactivism, Piaget, connectionism, innatism, constructivism, scaffolding, mental models, Perner

Deepest thanks are due to Carol Feldman for comments on an earlier draft, and to the Henry R. Luce Foundation for support during the preparation of this paper.

Why Children Don't Have to Solve the Frame Problems

Mark H. Bickhard

Abstract

We all believe an unbounded number of things about the way the world is and about the way the world works. For example, I believe that if I move this book into the other room, it will not change color -- unless there is a paint shower on the way, unless I carry an umbrella through that shower, and so on; I believe that large red trucks at high speeds can hurt me, that trucks with polka dots can hurt me, and so on; that if I move this book, the room will stay in place -- unless there is a pressure switch under the book attached to a bomb, unless the switch communicates to the bomb by radio and there is shielding in the way, and so on; that the moon is not made of green cheese, that the moon is not made of caviar, that the moon is not made of gold, and so on. The problems involved in accounting for such infinite proliferations of beliefs -- and the computations and inferences that take them into account -- are collectively called the Frame Problems, and are considered by some to constitute a major discovery of a new philosophical problem. How could we possibly learn them all? How could the brain possibly hold them all? The problems appear insoluble, impossible. Yet we all learn and hold such unbounded numbers of beliefs; in particular, children do. Something must be wrong. I wish to argue that the frame problems arise from a fundamental presupposition about the nature of representation -- a false presupposition. Yet, it is a presupposition that dominates contemporary developmental psychology (and psychology more broadly, and cognitive science, artificial intelligence, philosophy of mind, and so on). In particular, I will offer an alternative model of the nature of representation within which the frame problems do not arise -- within which such unboundedness is natural.




Why Children Don't Have to Solve the Frame Problems

Mark H. Bickhard

Introduction

We all believe an unbounded number of things about the way the world is and about the way the world works. For example, I believe that if I move this book into the other room, it will not change color -- unless there is a paint shower on the way, unless I carry an umbrella through that shower, and so on; I believe that large red trucks at high speeds can hurt me, that trucks with polka dots can hurt me, and so on; that if I move this book, the room will stay in place -- unless there is a pressure switch under the book attached to a bomb, unless the switch communicates to the bomb by radio and there is shielding in the way, and so on; that the moon is not made of green cheese, that the moon is not made of caviar, that the moon is not made of gold, and so on. The problems involved in accounting for such infinite proliferations of beliefs -- and the computations and inferences that take them into account -- are collectively called the Frame Problems, and are considered by some to constitute a major discovery of a new philosophical problem (McCarthy & Hayes, 1969; Amarel, 1981; Pylyshyn, 1987; Ford & Hayes, 1991; Ford & Pylyshyn, 1996; Genesereth & Nilsson, 1987; Toth, 1995).

How could we possibly learn them all? How could the brain possibly hold them all? The problems appear insoluble, impossible. Yet we all learn and hold such unbounded numbers of beliefs; in particular, children do. Something must be wrong.[1]

I wish to argue that the frame problems arise from a fundamental presupposition about the nature of representation -- a false presupposition. Yet, it is a presupposition that dominates contemporary developmental psychology (and psychology more broadly, and cognitive science, artificial intelligence, philosophy of mind, and so on). Roughly, it is the presupposition that representation -- all representation -- is constituted as encodings. In contrast, I will offer an alternative model of the nature of representation within which the frame problems do not arise -- within which such unboundedness is natural.

I will begin at the end -- with the alternative model of representation. It is a model called interactivism. It is part of the general historical development of pragmatism, of the emergence of representation out of action and interaction. This tradition is historically quite recent: it is known best in psychology through the work of Piaget, but its descent is roughly from Peirce and James to Baldwin to Piaget. I will not argue a specifically Piagetian model of representation, because in fact I think he got it wrong (Bickhard, 1988a, 1988b; Bickhard & Campbell, 1989), but I will present a generally pragmatist model, that, though developed independently (e.g., Bickhard, 1973, 1980), happens to be somewhat more similar to Peirce's than to Piaget's (Hoopes, 1991; Houser & Kloesel, 1992; Murphy, 1990; Rosenthal, 1983; Smith, 1987).

Pragmatist conceptions of representation are in strong contrast to standard notions of representation that have dominated Western thought since the Greeks (Bickhard, 1987, 1993, 1995b). Those dominant assumptions turn on the notion that representations are in some sort of correspondence -- causal, informational, isomorphic, and so on -- with what is represented. Picture or statue representations are the basic analogy; presumed visual sensory "encodings" are paradigmatic (see Bickhard & Richie, 1983, for a critique of this notion). In general, these assumptions hold that representations encode what they represent. Such notions are still dominant, and, with the decline of Piagetian developmentalism in favor of computer and information processing (including connectionist) models, they have become re-ascendant in developmental psychology as well.

This paper is a part of the logical and conceptual conflict between these two traditions about representation. In particular, I will argue that the frame problems are problems for, and only for, the dominant correspondence or encoding models of representation. That is, the frame problems are derivative from encoding conceptions of representation, and do not arise for the interactive model of representation. Conversely, the consequences of the alternative pragmatist model of representation that I propose ramify quite broadly throughout developmental theory -- and psychology and philosophy more broadly.

That is, the frame problems constitute a fatal counter-example to encoding correspondence notions of representation, and, therefore, to much of contemporary developmental psychology (for many other problematics, see: Bickhard, 1993, 1996; Bickhard & Richie, 1983; Bickhard & Terveen, 1995; Campbell & Bickhard, 1986, 1992b). If the basic presuppositions that provide the foundations for work on child development cannot even in principle account for a clear fact about development -- the unbounded growth of beliefs -- then they clearly cannot serve as a valid foundation for understanding development. Such false presuppositions can render an entire field of study nugatory and doom it to historical sterility, as happened to the field of "verbal learning" when its associationistic presuppositions were rejected.

So, the stakes at hand are both the particular ones of how children, and adults, could possibly solve these seemingly impossible problems, and, more broadly and deeply, the theoretical validity of the view of representation in developmental psychology today.

Representation Emergent Out of Interaction

Consider an organism interacting in its environment. At any particular time, it must differentiate the environment that it is in; it must somehow indicate for itself what next interactions are possible in those differentiated environments; and it must select which of those possible interactions to actually engage in. My claim is that representation is emergent in this basic task of interaction indication and selection (Bickhard, 1993; Bickhard & D. Campbell, forthcoming).

Functional control of interaction. If there is more than one action possible, the organism will need some way to functionally select which action to engage in. Such selections will, in general, depend on the internal outcomes of previous interactions -- for example, if interaction A has just completed with internal outcome S, then begin interaction B. If a paramecium is swimming down a sugar gradient, then it will stop swimming and tumble for a bit, before resuming swimming -- eventually it will hit upon a direction that moves up the sugar gradient, and then it will continue swimming (Campbell, 1974, 1990).
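
This control structure can be made concrete in a few lines of code. The following minimal sketch (in Python; every interaction name and outcome in it is a hypothetical placeholder, not a claim about any actual organism) implements outcome-contingent selection of the next interaction, along the lines of the paramecium example.

```python
# A minimal sketch of outcome-contingent interaction selection.
# All interaction names and outcomes are hypothetical placeholders.

def select_next_interaction(just_completed, internal_outcome):
    """If interaction A has just completed with internal outcome S,
    then begin interaction B -- here rendered as a simple lookup."""
    control_table = {
        ("swim", "gradient_increasing"): "swim",    # keep swimming up-gradient
        ("swim", "gradient_decreasing"): "tumble",  # stop and tumble for a bit
        ("tumble", "done"): "swim",                 # resume swimming in a new direction
    }
    return control_table[(just_completed, internal_outcome)]

# E.g., a swim that just ended moving down the sugar gradient:
assert select_next_interaction("swim", "gradient_decreasing") == "tumble"
```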

For more complicated organisms, the relationships among possible interaction outcomes and consequent further possible interactions will get quite complex. First, there can be multiple indications of interactive possibilities based on a single internal interaction outcome: interaction A with internal outcome S may indicate not only the possibility of interaction B, but also the possibilities of C, D, and E as well. Which, if any, of such multiple possibilities is selected to actually engage in will depend upon other states in the organism.

Another form of complexity is the possibility of iterated conditional relationships among possible interactions and outcomes: if A reaches internal outcome S, then B is possible with a possible outcome T, and if B is in fact engaged in and T is in fact reached, then C becomes possible, and so on. Combinations of multiple indications and iterated conditional indications of interactive possibility can achieve enormous complexity, including interlocking webs of indications of possible interactions with closed loops and paths of indications.

A third form of complexity is that of context dependency: outcome S may indicate one possibility if some other interaction K has reached internal state X, while S may indicate a different possibility if K has reached Y, or if some other interaction L has reached Z, and so on. Context dependencies of interaction complicate even more the webs of indications.
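
These three forms of complexity -- multiple indications, iterated conditional indications, and context dependency -- can all be expressed in one functional data structure. The sketch below is one hypothetical way of doing so; the particular interactions and outcomes are placeholders taken from the examples above.

```python
# A sketch of a web of conditional indications. Keys are (interaction,
# outcome) pairs that have been reached; values are further interactions
# thereby indicated as possible. All names are hypothetical.

indication_web = {
    ("A", "S"): ["B", "C", "D", "E"],  # multiple indications from one outcome
    ("B", "T"): ["C"],                 # iterated conditional: B reaching T indicates C
}

# Context dependency: what ("A", "S") indicates can depend on the state
# reached by some other interaction, K or L.
contextual_indications = {
    (("A", "S"), ("K", "X")): ["B"],   # S indicates B if K reached X
    (("A", "S"), ("K", "Y")): ["F"],   # but a different possibility if K reached Y
    (("A", "S"), ("L", "Z")): ["G"],   # or if L reached Z
}
```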

To this point, I have only outlined some of the functional properties of complex interacting organisms (much more careful development is required to show that various potential problems can be avoided and questions answered, but this introduction should suffice for my current purposes -- for more elaborated treatments, see Bickhard, 1980, 1987, 1992a, 1993; Bickhard & Richie, 1983; Bickhard & Terveen, 1995; Campbell & Bickhard, 1986). In particular, as their interactive capabilities become more complex, the internal functional machinery for keeping track of what to do next, and of what could be done next, will get more complex accordingly. What I wish to argue now is that this functional machinery for controlling interaction is already sufficient to capture foundational properties of representation, and in a form that avoids the frame problems.

Interactive representation. Consider first the internal outcomes of interactions already engaged in. The general course of an interaction between an organism and an environment will depend on the organization of the functional system in the organism that is engaged in the interaction, and it will depend upon properties of the environment being interacted with -- some environments will yield one outcome for that interactive functional subsystem, and other environments will yield different outcomes for that same interactive functional subsystem. Which outcome is attained after a particular interaction, then, serves to differentiate types of environments: those environments yielding outcome S are grouped together, and are differentiated from environments yielding any other outcomes.

Such differentiation is important because it is precisely such environmental differentiations that serve as the basis upon which the organism can then make further selections of which further interactions to engage in. Environments of type S resulting from interaction A will be indicated to also be environments of type T resulting from interaction B in the sense that if A reaches S, then the indication is that, if B is engaged in, B will, in this environment, yield T. It is such indications of interactive possibilities based on interactive differentiations of the environments that permit organisms to control their interactions in ways that are appropriate to those environments. This is the general form of interactive environmental sensitivity.

Truth value. This simple core of the model is already sufficient to capture a fundamental property of representation: truth value. It may or may not be the case that, when the organism engages in an indicated interaction, it will arrive at one of the associated indicated internal outcomes. If it does not, then that indication is false, and is falsified. If it does reach one of the indicated outcomes, then the indication is not falsified (though it is still possible that it is false in its full scope -- the indication might not always hold). Indications of interactive possibilities with associated internal outcomes are potentially false, and potentially falsifiable -- this is an emergence of the most fundamental and primitive characteristic of representation.
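
The emergence of falsifiability is easy to state functionally. In the following hypothetical sketch, an indication is tested simply by engaging the indicated interaction and checking whether one of the indicated internal outcomes is in fact reached; no comparison with any external description of the environment is involved.

```python
# A sketch of system-detectable falsification of an indication. `engage`
# stands in for actually running an interaction in the current environment
# and returning the internal outcome reached; all names are hypothetical.

def indication_survives(engage, interaction, indicated_outcomes):
    """Return True if the indication is not falsified, False if engaging
    the interaction falsifies it."""
    actual_outcome = engage(interaction)
    return actual_outcome in indicated_outcomes

# An environment in which interaction "B" yields outcome "U" falsifies an
# indication that "B" would yield "T":
assert indication_survives(lambda b: "U", "B", {"T"}) is False
```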

Thus, the interactive approach can model the emergence of truth value, and that is an emergence that has not been accounted for within the encodingist tradition. Error, in fact, is seriously problematic within that tradition. If the correspondence that is supposed to constitute a representation exists, then the representation presumably exists, and is correct, but if the correspondence does not exist, then the representation does not exist, and, therefore, cannot be incorrect. There is much effort being expended on attempting to solve this problem (Dretske, 1988; Fodor, 1990; Hanson, 1990; Loewer & Rey, 1991), but none of these efforts have been successful to date. Even if they were, they would at best define error from the perspective of some external observer of the organism and its interactions (Bickhard, 1993, in preparation; Bickhard & Terveen, 1995).

Most important, there seems to be no way in the encoding framework to account for the possibility of error that might be detectable by the organism itself. But this is a fundamental requirement for any model of human representation, for how can learning or error-guided behavior occur if error cannot be detected?

More sophisticated representation. Primitive interactive representation captures a basic property, that of truth value. But such representation of interactive potentiality doesn't look much like the sorts of representations that we normally consider: representations of objects and abstractions, perceptions, language, and so on. I will sketch how object representations would be modeled in this approach. For more detail, see Bickhard (1980, 1987, 1992a, 1993), Bickhard and Campbell (1992), Bickhard and Richie (1983), Bickhard and Terveen (1995), and Campbell and Bickhard (1986, 1992a).

The basic notion of object representation is that, within the webs of complex interactive indications, there may be some subwebs that have two basic properties: 1) all indications in such subwebs are reachable from each other -- that is, the subwebs are closed in the sense that if any part of such a subweb is attainable, then all parts of it are attainable; and 2) such subweb patterns remain invariant under large classes of other interactions -- they are not changed in their organization by other interactions. My claim is that these two properties capture the epistemology of objects for children and some less complex organisms (Bickhard, 1980).

Consider a toy block. It offers many possible visual scans, manipulations, tactile experiences, and so on. Furthermore, the possibility of any one of these interactions carries with it the possibility of all of the rest -- if I can scan it this way, then I can turn it and scan it this other way. The interactive possibilities afforded by the block, or at least a critical subset of them, are closed in the sense outlined above. Furthermore, this entire organization of interactive possibilities remains invariantly possible under scans, manipulations, drops, throws, locomotions, head turnings, hidings, lines of approach, and so on. Of course, it does not remain invariant under burning, crushing, and so on. I offer such closedness and invariance as capturing the basic epistemological nature of object representations emergent out of interactive representations. Clearly, this is a basically Piagetian notion of object representation (Piaget, 1954).
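
The closure property, at least, can be stated precisely. The following sketch (a deliberately simplified, hypothetical rendering; real interactive webs would be far richer) checks whether every indicated interaction in a subweb is reachable from every other, as in the toy block example.

```python
# A sketch of the closure property of an object subweb. `subweb` maps each
# indicated interaction to the interactions indicated as possible from it;
# all names are hypothetical.

def is_closed(subweb):
    """True if, from any member of the subweb, every other member is
    (transitively) reachable -- if any part is attainable, all parts are."""
    members = set(subweb)
    for start in members:
        reached, frontier = {start}, [start]
        while frontier:
            for nxt in subweb.get(frontier.pop(), []):
                if nxt not in reached:
                    reached.add(nxt)
                    frontier.append(nxt)
        if reached != members:
            return False
    return True

# E.g., visual scans and manipulations of a toy block, each reachable from
# the others:
block_subweb = {"scan_front": ["turn_over"], "turn_over": ["scan_back"],
                "scan_back": ["turn_back"], "turn_back": ["scan_front"]}
assert is_closed(block_subweb)
```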

This is a model of representation -- and object representation -- as being emergent in interaction, not just in the processing of inputs. In this model, representation is forward looking in time, not backward looking. Representation is most fundamentally of future potentialities for further interactions. Some patterns of such possible actions, such as with physical objects, generally remain invariant (as patterns) over time. These form a critically important kind of representation, but object representation also contributes to the mistaken assumption that representation is backward looking rather than forward looking: We think that a representation of an object is most fundamentally of the object in the past that reflected the light, rather than of the object of the immediate (or distant) future that offers multiple possibilities of further interaction.

Implicitness. I turn now to the first step in accounting for the absence of the frame problems: implicit representation. Consider an internal indication of the possibility of interaction B with outcome T. Some environments would support that interaction yielding that outcome, and some would not. Some environments possess whatever interactive properties are required for the system to reach that internal outcome, and some do not. Note that the internal falsification of the interactive indication entails that that environment did not possess those properties, while the absence of falsification indicates that the environment did possess those properties. But in neither case is there any explicit representation of what those critical properties are. Whatever the requisite environmental properties are for reaching T, they are detected, or not, and represented implicitly, not explicitly. That is the key.

Unboundedness. Implicit representation is unbounded. Any interactive representation of the form "whatever yields this outcome for this interaction will also yield that outcome for that interaction" is unbounded both in the class of environments that might satisfy the indication and in the class of possible environmental properties that would support that satisfaction. These classes may or may not be finite (this question, in fact, may not be well defined: the answer depends strongly on the formalisms chosen for explicitly representing the elements of the classes), but they are unbounded -- there is no a priori upper bound on them. This is a dynamic interactive version of implicit definition in model theory, such as the sense in which the abstract axioms of geometry implicitly define the notions of geometry -- point, line, etc. -- and the infinite class of potential satisfiers of those axioms (Coffa, 1991; Keisler, 1977; Kneale & Kneale, 1986; Campbell & Bickhard, 1992a).
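
The contrast at issue is that between a predicate and a list: an explicit representation enumerates its instances, while an implicit representation is a condition of satisfaction that an unbounded class of environments may meet. A minimal hypothetical sketch:

```python
# Explicit representation: a finite enumeration of encoded instances.
explicit_class = ["environment_1", "environment_17", "environment_42"]

# Implicit representation: a condition of satisfaction. Whatever environment
# yields this internal outcome for this interaction belongs to the class --
# no enumeration, and no a priori upper bound on the class of satisfiers.
# (The outcome name "S" is a hypothetical placeholder.)
def in_differentiated_class(internal_outcome):
    return internal_outcome == "S"
```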

Modality. Interactive representation, and, thus, the unboundedness of interactive representation, is intrinsically modal. Interactive indicators are of potentialities of further interaction. They intrinsically involve the modality of possibility: it is precisely these possibilities that are intrinsically unbounded. This is in contrast to encoding approaches, in which possibilities and necessities require distinct encoded representations together with explicit markers about their modal status.

Modality and the frame problems are intimately related: the frame problems have to do with the unboundedness of possibilities of what might be the case, of what might have been the case, of what might come to be the case, and so on. There are even counterfactual frame problems: for example, if kangaroos did not have tails, they would fall over, unless they got very skilled with crutches -- and so on (Stein, 1991).

The interactive model provides a natural approach to modality (Bickhard & Campbell, 1992). Actuality, possibility, and necessity are not differentiated for a primitive interactive representation, as they are forced to be within an encoding model, but must be progressively differentiated within the fundamental interactive framework. Such a progressive differentiation is, in fact, what is found developmentally -- not the explicit marking of explicit modalities invoked by encoding models (Bickhard, 1988a, 1988b; Piaget, 1987).

The modality of interactive representation is implicit, thus unbounded. It is precisely this unboundedness of implicit modal representation that avoids the computational and memory impossibilities imposed by the frame problems. Encodings are intrinsically explicit, so any such unboundednesses easily become impossible for any reasonable resources of time, computational power, and memory (see below).

Practical implicitness. What are the practical implications of such implicitness, and, therefore, unboundedness? One example is simply that a system differentiating various categories of possible histories of its own experience can easily differentiate such categories of histories that are unbounded. Insofar as histories set up current contexts, this also implies that unbounded categories of possible contexts can be differentiated (Shanon, 1988, 1993). Insofar as future potentialities, and future interactions, are dependent upon and sensitive to such contexts, this provides an entire class of examples of unboundednesses that we all function with all of the time (Bickhard & Terveen, 1995). It is not possible to exhaust, for example, a list of all circumstances -- all histories and all contexts -- that we would take as flattering, or as insulting.

Encodings are explicit. Such a list would be a list of encodings, and encodings are, and are necessarily, explicit. Encodings are correspondences to what they represent, and there must be at least one such actual correspondence with an actual encoding element for each thing or property encoded.

The only way for encodings to capture the representation of novel unbounded categories is the definition of a new encoding element, defined as representing "that (novel, unbounded) category". Such a definition is possible, of course, but it is parasitic, and necessarily parasitic, on an implicit representation of "that category" in order for the encoding definition to define anything at all. It is only with the resources of implicit representations that there is any possible solution to this problem (Bickhard & Terveen, 1995).

Connectionism. Connectionism and Parallel Distributed Processing have created great excitement, and have excited great controversy, since the early 1980s. Connectionist networks were shown to be capable of discriminations that had been thought to be beyond their capabilities, and the vision opened up of many more problems that were proving intractable for Good Old Fashioned AI falling to the new approach. Connectionism has matured out of its early heady period, and entered a phase of serious work -- and of encountering serious problems (Bickhard & Terveen, 1995; Clark, 1989; McClelland & Rumelhart, 1986; Rich & Knight, 1991; Rumelhart, 1989; Rumelhart & McClelland, 1986; Smolensky, 1988; Waltz & Feldman, 1988).

I will not explore these general problems and promises of connectionism here, except to point out one foundational limitation: connectionism is committed to the same notion of representation-as-correspondence, the same notion of encodingism, as classical AI. Connectionism, therefore, provides no new solutions to problems of unboundedness. It is as ineffective before the frame problems as Good Old Fashioned AI.

Connectionist systems receive patterns of inputs, and settle into patterns of outputs. Classes of input patterns can each individually settle into the same output pattern, and are, thereby, classified together. Furthermore, connectionist systems can be trained to classify training sets of input patterns in accordance with the "right" classification, and then will generalize (with varying "correctness") those classification categories to new input patterns never encountered. This possibility for training was a major source of the excitement about such systems.

It may be that the class of input patterns classified together is unbounded, so, it might be argued, here is the unboundedness that is needed to solve the frame problems. That this is not so can be seen by examining a case from classical AI: the transducer. A transducer is supposed to transduce inputs, say, light patterns, into internal encodings, which can then be processed by the symbol manipulation system of a classical AI model. The class of possible light conditions that might be "transduced" into a particular internal encoding might well be unbounded, if an attempt were made to list it in some notation for light patterns. In both the case of the classical "transducer" and the case of the connectionist "classifier", unbounded input conditions may get classified together. So, the only difference between the two is that transducers have to be engineered, while connectionist differentiations can be trained.
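
The parallel between the engineered and the trained case can be made concrete with a single threshold unit, a deliberately minimal, hypothetical stand-in for either a "transducer" or a trained classifier.

```python
# One unit -- whether "engineered" as a transducer or "trained" as a
# connectionist classifier -- maps unboundedly many input patterns onto
# a single explicit internal category. The category itself is explicit.

def threshold_unit(inputs, weights, bias=0.0):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Unboundedly many distinct input patterns settle into category 1, but
# "1" remains a single explicit encoding element:
assert threshold_unit([0.5, 2.0], [1.0, 1.0]) == 1
assert threshold_unit([100.0, 3.7], [1.0, 1.0]) == 1
assert threshold_unit([-1.0, 0.5], [1.0, 1.0]) == 0
```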

Can the possibility of training solve the problem? No, because training is to a designed standard, set by the trainer. In both cases, whether designed or trained, supposed encoding correspondences must be set up, and instances of such correspondences must actually exist, in order for a "representation" to exist. All representation of all basic categories must be explicit -- however much they may be unbounded relative to the input conditions that evoke them. They must be explicit with respect to the category they represent, and, thus, cannot solve frame problems with respect to that category. As mentioned above, for explicit representations, everything turns on the notation chosen, on the basic categories in that notation. The limitations of finiteness with respect to whatever that basic category level might be cannot be transcended.

Interactive implicit representation, in contrast, is not explicitly constructed out of explicit atoms of representation (whether engineered or trained), and, therefore, is not subject to the restriction of finiteness of those constructions.

The discussion to this point has neglected an additional critical point. The representationality of an interactive representation is constituted in the indications of potential further interactions. Such indications may be evoked or set up in response to prior interactive differentiations of the environment, but what is being represented is not dependent on those prior differentiations. Both transductions and connectionist nets are at best differentiators of classes of environments -- in general, passive, non-interactive differentiators. As such, they might be useful in an overall interactive system. They can, for example, be useful for setting up appropriate indications of potential interactions, for setting up appropriate interactive representations. In themselves, however, such environmental differentiations have no representationality at all. In an encoding perspective, they are presumed to represent that which they are in correspondence with, but the system itself cannot know what any such correspondences might or might not be with -- cannot represent what they might be with. Furthermore, they cannot detect errors in any such supposed correspondence. Any such check for error will merely be a check of the transducer or connectionist net against itself: an empty circularity (Bickhard, 1987, 1992a, 1993; Bickhard & Terveen, 1995). Transducers provide engineered correspondences with differentiated classes of environments, and connectionist nets provide trained correspondences with differentiated classes, but, in neither case do we have representations.

Development

Several potentially interesting properties and processes of development follow naturally from the interactive model of representation. I will outline three of them: 1) a consequent variation and selection constructivism, 2) an avoidance of the logic that yields much contemporary strong innatism, and 3) a broad model of scaffolding that, among other things, yields the possibility of self-scaffolding. Following these adumbrations, I will show, in a particular example from the literature, how standard encoding assumptions about representation can yield serious problems -- creativity and insightful argument do not in themselves protect against false premises.

Interactivism Forces Constructivism. If representation is thought of as being constituted by encoding correspondences, there is a temptation to think that representation could be impressed into a passive mind -- transduction, induction, waxed slates, and so on. There is a temptation to think that representations are constructed out of, or inferred on the basis of, input representations. Or, if such encodings do not come in from the world, then the alternative seems to be that they were there all along, in the genes. That is, encoding notions force us, rather indifferently it seems, to either empiricism or innatism. The encodings have to come from somewhere, and there are only two possibilities: from the environment or from the genes. Encodings cannot develop, thus, encodingism leads to extreme theories that have no place for real development (cf. Elman, et al, 1996).

If representation is understood as an emergent of interactive systems, however, then there is no temptation to think that functional system organizations could be impressed into the mind by the world, and no need to postulate a magical preformism. System organization can be constructed by the system itself, and, therefore, representation can be constructed by the system itself. Interactivism leads to constructivist ideas of development, with real and meaningful change across interactive experience (Bickhard, 1980, 1987, 1992a, 1993; Bickhard & Campbell, 1989; Campbell & Bickhard, 1986).

Furthermore, such constructions will not always be correct. Knowledge of which are the "right" constructions cannot always be presciently available to the system -- if it were, then the system would already know everything. Heuristics, of course, are possible, and heuristics constitute their own form of knowledge, but the origins of that knowledge too must be accounted for. In general, the system tries out constructions to see if they work, and then uses the results as indications of erroneous constructions if they do not work. That is, the constructivism must be a variation and selection constructivism. Interactivism, then, forces a variation and selection constructivism, an evolutionary epistemology (Campbell, 1974).

Innatism. Still further, the representations that are emergent in such constructions of system organization do not come from anywhere. They are constructed de novo -- they are emergent. Information in a gene pool concerning how that species can survive in a particular niche has not come into the gene pool from that niche -- has not "come from" anywhere at all. There is no more reason for representation or knowledge to have to come from "somewhere" for the child. In particular, there is no reason for representation or knowledge to have to come from the genes, even if it did not or could not have come from the environment.

Contemporary developmental innatism is based primarily on the poverty of the stimulus argument, which is simply an argument that, since there is insufficient information in the environment -- the stimulus -- the information must be genetic. This is an invalid argument because, among other problems, it overlooks the possibility of emergent representation (and of intrinsic constraint on constructions -- see Bickhard, 1992b, 1995; Bickhard & D. Campbell, forthcoming-b; Campbell & Bickhard, 1986, 1992b). In the view of the interactive model, then, environmentalism and innatism fall together, and for the same reason: they share the false assumption that representation and knowledge must come from somewhere, and differ only on details of where it is supposed to come from.

The interactive position differs from both standard empiricist and innatist positions, and from Piaget as well (Bickhard, 1988a). There are partial recognitions of the importance of variation and selection constructivism in developmental psychology (Siegler, 1984; Siegler & Jenkins, 1989), but, if this argument is valid, an evolutionary epistemology is logically forced, not just an interesting possibility to study. This is one major consequence of the frame problems: explicit representation isn't sufficient; implicit representation is required; implicit representation requires interactivism; interactivism forces variation and selection constructivism; variation and selection constructivism is a very under-appreciated framework in today's literature.

Scaffolding and Self-Scaffolding. Interactivism forces a variation and selection constructivism -- one of the consequences of a variation and selection constructivism is a model of developmental scaffolding with interesting, and arguably important, new properties. In particular, this functional model of scaffolding yields the possibility of, and the ubiquitous importance of, self-scaffolding (Bickhard, 1992a, 1992c). I turn now to an adumbration of that model.

Some constructions will be more complex than others. Successful constructions are retained, and unsuccessful constructions are eliminated -- selected out. If a learning or developmental task requires a very complex construction in order for the construction to be successful, the constructive process is not likely to hit upon it. Most constructions are relatively simple, and the greater the complexity involved, the larger the number of alternative constructions that are possible -- therefore, the less likely that any particular complex construction is one that will succeed and be retained.

Constructions, then, will progress via sequences of successful, or at least partially successful, constructions that are not too far away from previous successful constructions. That is, constructions will progress via trajectories of relatively nearby stable (successful) constructions. But what if there are no such intermediate successful constructions -- intermediate successful task or developmental partial accomplishments? If the constructions required are too complex, success at the learning or developmental task becomes extremely unlikely.

Suppose, however, that some of the selection pressures that would otherwise eliminate simpler constructions are somehow blocked. Such a blocking of relevant selection pressures may allow simpler intermediate constructions to be successful in the context of those blocked selection pressures. In some circumstances, the blocking of selection pressures may create a trajectory of successful constructions that are sufficiently nearby to be traversed by the child. If such a trajectory ends on a construction that is successful in the full task environment -- one that is stable with respect to the full set of relevant selection pressures -- then the blocking of selection pressures could result in the constructive attainment of the full task capability. At that point, further blocking of the selection pressures would be unnecessary, and those blocks could be removed. Blocking selection pressures, in this sense, can serve to scaffold the attainment of otherwise unlikely or practically impossible developments. This is functional scaffolding.
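
The functional point can be illustrated with a toy simulation, in which the "constructions" are mere numbers and the "selection pressures" mere predicates. Everything about the sketch is hypothetical, but it shows how blocking a pressure can open a trajectory of retained intermediate constructions that would otherwise never survive.

```python
import random

# Hypothetical selection pressures: a construction (here just a number)
# is retained only if it satisfies every pressure not currently blocked.
pressures = {
    "locally_viable": lambda c: c >= 0.0,  # easily met
    "full_task": lambda c: c > 0.9,        # the hard, full-task pressure
}

def constructive_episode(start=0.0, blocked=(), steps=100):
    """Try nearby variations; retain a variant only if it survives all
    non-blocked selection pressures."""
    current = start
    for _ in range(steps):
        variant = current + random.uniform(0.0, 0.1)  # a nearby construction
        active = [test for name, test in pressures.items() if name not in blocked]
        if all(test(variant) for test in active):
            current = variant  # retained; base for further variation
    return current

# Unscaffolded: no intermediate construction survives "full_task", so no
# construction is ever retained.
assert constructive_episode() == 0.0

# Scaffolded: block "full_task" so intermediates survive, then remove the
# block once the trajectory has reached a fully successful construction.
intermediate = constructive_episode(blocked=("full_task",))
final = constructive_episode(start=intermediate)
```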

There are a number of interesting properties of functional scaffolding, but the one that I will focus on is that the functional model of scaffolding permits self-scaffolding. Simply, blocking selection pressures is a function that, at least in some forms, a person can do for him- or herself. This differs from classical models of scaffolding, derived from Vygotsky's notion of the zone of proximal development (Bruner, 1975). In these notions, scaffolding consists primarily of the provision of knowledge not otherwise available -- often knowledge of how to coordinate subskills into a resultant more sophisticated skill -- that suffices to make a task accomplishable. Such provision of knowledge can be one way to block selection pressures -- those that would require that knowledge for success. A child (or adult), however, cannot provide to him- or herself knowledge that he or she does not already have. This notion of scaffolding renders the idea of self-scaffolding internally contradictory.

Blocking of selection pressures, however, can be done in myriads of ways. Problems can be broken down into subproblems; criteria for success can be temporarily set aside; ideal cases or analogies can be explored first; resources can be used that might not be needed once the overall construction is attained; and so on. These are forms of scaffolding that individuals can perform for themselves. In these forms, self-scaffolding is very possible. I argue elsewhere, in fact, that learning the skills of self-scaffolding is a major field of development, one that has been largely overlooked (Bickhard, 1992c).

The major point here, however, is that models of functional scaffolding and self-scaffolding are natural consequences of taking seriously the interactive model of representation, consequences such as variation and selection constructivism. Such models result from taking seriously the lessons of the frame problems, and moving to a model of representation that is not subject to those errors. Such a move has positive implications for understanding processes and potentialities of development.

Consequences of Representational Models. I turn now to an example of some of the difficulties that can be introduced into even the most careful arguments if those arguments are based on a false model, such as on standard encodingist models. Perspicacity and logical validity of considerations do not guarantee against the falseness of the premises -- do not guarantee against unsoundness.

Perner (1991) presents an ingenious set of analyses and critiques of assumptions about representation in contemporary developmental literature, and offers a model of early representational development based on some of the most sophisticated conceptions of representation in the philosophical and cognitive science literature. His discussion draws most heavily from Dretske (e.g., Dretske, 1988) and Fodor (e.g., Fodor, 1990), and from Johnson-Laird's model of mental models (Johnson-Laird, 1983).

One of the central issues that Perner addresses is whether children are capable of reflection prior to about age 4. There is good reason to think that children older than 4 can reflect, but Leslie (1987) argues that pretend play, which occurs prior to age 4, requires a meta- or reflective representation -- a representation that the play situation is not real.

There is little doubt that reflection is sufficient for pretense, but the question is whether or not it is necessary (Bickhard, 1995c), and Perner provides a compelling demonstration that it is not (a demonstration that seems not to have been understood by some: Gopnik, 1993). Perner develops his argument in terms of mental models of situations, using a military sand-box model of a battlefield as his basic analogy. He points out that if a child is capable of having multiple mental models, then those models can be indexed or marked for differences in the kinds of situations they represent. In particular, they can be marked with respect to temporal differences and modal differences: past and future; actual, possible -- and pretend. A sand-box model, similarly, can represent the current battlefield, or can represent potential changes in strategy or tactics -- a battlefield that is not currently "actual."

Perner suggests that confusion about representation is especially likely when the representation of non-existents is considered -- non-existents such as the pretenses involved in pretend play. In particular, if we consider a representation of a unicorn, we are apt to construe not only the representation of the unicorn as representational, but also the unicorn itself as representational (there isn't any real unicorn, so what is being represented?) -- a representation of a representation: metarepresentations. We have no similar intuition if the represented situation is real: the toy block that is represented is not itself a representation. Perner argues that what is represented by a representation of a unicorn is not itself a representation. Instead, it is a possible or pretend situation, rather than a real situation. There is only one representation involved: the representation of the unicorn. It is represented by a mental model, just like real situations, but the model carries a marker of "play" or "pretense" rather than of "real".

A strong reason for the pull to conclude that representations of unicorns are actually representations of representations is the correspondence model of representation itself. If representation is constituted as correspondence, then what does the unicorn representation correspond with if there are no unicorns? This is where the temptation arises to conclude that the unicorn representation corresponds to some other representation -- a unicorn representation at a lower level (but what would it be in correspondence with?). Perner provides an alternative model. There is only one level of unicorn representation, marked as "unreal", hence no metarepresentation.

Perner argues that evidence for metarepresentation is not conclusive without evidence for (meta)representation of both ends of the representational correspondence relationship -- the representation and the represented. The strongest evidence for that capability seems to be the capacity for understanding false representation, representations involved in false belief. To represent the falseness of a representation seems to require metarepresentation of the primary level representation, and metarepresentation of what it is being taken to represent, tied together by a metarepresentation of the purported representational correspondence between them. Only in such a rich metarepresentational structure could it be represented that what is supposed to be being represented is not in fact the case. Such understanding of false belief seems to emerge about age 4; hence the issue, outlined above, between Perner and Leslie.

This is an ingenious set of arguments and an alternative model for pretense. I take it as demonstrated that, within the framework of correspondence models of representation, pretense, thus pretend play, does not require genuine metarepresentation -- and, therefore, that there is no convincing evidence for metarepresentation prior to about age 4.

But what if that framework is incorrect? What of the possibility that standard encoding models have misled Perner? Then Perner's analyses are unsound. Clearly, I wish to urge that that framework is false: it is an encodingist framework. Unsoundness does not necessarily imply that Perner's conclusions are in error, but it does imply that the arguments for those conclusions cannot be depended upon. The issues need to be re-examined.

The mental models model is just a particular example of an encoding model of representation, and, as such, all of the basic encodingist problems are manifested. I will illustrate some of them. First, I re-open the question of the representation of non-existents. Perner rightly argues against resemblance as a foundation of representation. For one problem, A can represent B by virtue of resembling B only if the knowing agent already represents both A and B so that any such resemblance can be noticed. Resemblance presupposes the very problem it might be taken to solve.

But the assumed alternative to resemblance as the constitutive form of representation is correspondence, and, for non-existents, no such correspondence can exist because there is by assumption nothing to be in correspondence with. No correspondence, no representation. So how can representation of non-existents occur?

In Perner's military sand-box models, a block of wood can represent a non-existent tank because the officers using the model understand and interpret the blocks of wood as representing tanks. Similarly for sticks representing soldiers, and so on. But who or what is the equivalent of the interpreting officer for mental models? Who is the homunculus that understands and interprets them?

Perner models pretense without metarepresentation by introducing markers for various statuses of situations that the mental models represent. Some may be of past situations, some of possible future situations, some of pretend situations. Particular elements of such mental models may be representing non-existents by virtue of the overall situation being past or future or pretend. This does avoid metarepresentation, but it does not solve the homunculus problem. Who is the homunculus that understands and interprets those markers?

And still the question remains of what constitutes understanding and interpreting an element of a mental model as representing something if that something does not in fact exist. No correspondence is possible, so how does the homunculus understand what the supposed correspondence is supposed to be with? That would seem to be the problem of representational content all over again: how can representations of non-existents have representational contents for the agent him- or herself? (For that matter, how does the existence of a correspondence create or generate or constitute any such content: how does the agent know that there is any such correspondence, or know what any such correspondence is a correspondence with, even if such a correspondence does in fact exist? See Bickhard, 1993, in preparation; Bickhard & Terveen, 1995.)

Perner has been careful about distinctions between representation and represented, and perspicacious in diagnosing equivocations between those in other literature, but the encoding models of representation that he relied on do not provide any understanding of what representation is -- of what constitutes representational content for an epistemic agent.

Another question to be raised about mental models addresses the fact that they presuppose objects. People, soldiers, tanks, furniture, and so on are the sorts of things that the elements of a mental model are supposed to represent. What about properties, events, actions, processes, scripts (Nelson, 1985, 1986), and so on?

Furthermore, even within the object focus, is a two-month-old infant's representation of an object the same as a ten-year-old's representation of that same object? Presumably not, but if correspondence is all the resource available to model representation, then the two-month-old will have correspondences to precisely the same object as the ten-year-old -- and how are the differences between the infant and the child to be accounted for? How can Perner's mental models model account for any such developmental changes? As mentioned above, encodingism makes genuine development difficult to account for.[2]

A third question addresses how mental models are individuated. Are the representation of the child's bedroom and the representation of the child's living room parts of the same mental model, or are they different mental models? How far does a mental model extend spatially -- how about temporally? What about the child's representation of the day care center play room that he or she has not been to for a week? This issue is crucial to how many mental models and how much mental-model-computation is required. For a closely related issue, is the child's representation of a play room specific about where all the toys and furniture and people are located? Not likely. How do mental models handle indefiniteness of that sort?

A fourth asks how mental models are set up or created. If mental models constitute our basic epistemic and representational contact with the world -- our copies of the world -- then how do we know what the world is in order to construct our mental model copies of it (Piaget, 1970)?

Still another question to be addressed to mental models is simply the error question. How can someone check whether or not their mental model is correct? Presumably by checking it against the world that it is supposed to represent. But how can they know how the world actually is in order to be able to compare it to how their mental model represents it to be? If their representation of the world is their mental model, then checking their mental model is just checking it against itself. Any such check is circular, and, therefore, no check at all. How can mental models account for representational error that the agent can detect for him- or herself?

The final critique that I will bring to bear against mental models is the frame problems. There is a particular aptness in this, because Perner's reply to Leslie's claim that pretense requires metarepresentation consists of mental models carrying markers of their temporal and modal status. One mental model might represent an actual situation, while another could be marked as possible or as pretend.

It is such considerations of what might be, of what might occur, that generate the frame problems. There are unbounded spaces of such considerations. How do we take them into account? In the mental models framework, this becomes the question of how mental models could handle unbounded organizations of possible situations. How many mental models are required to keep track of all that we seem to take into account?

Mental models, whatever their status regarding present or past or future or actual or potential, and so on, are of particular situations. Mental models are explicit representations, and, therefore, must be of explicit particular situations. There must be a model for each potentiality. This property they share with all encoding models. But it immediately follows that they cannot handle the frame problems: unbounded numbers of mental models would be required, and unbounded amounts of computation would be required to take all of them into account. There is no machinery in the mental models model for addressing the frame problems.

I conclude that the mental models framework within which Perner analyzes children's representational development suffers from its own fatal problems. This conclusion suggests that those developmental issues need to be re-examined. I will address, though only briefly, two of the primary issues that Perner discusses: pretense and metarepresentation.

I would argue that Perner's basic intuition regarding pretense is correct. Pretense, along with other indexing such as for temporal differences and modal differences, requires appropriate contextualization of the representations involved, and that contextualization does not necessarily require metarepresentation. The difficulty in Perner's model is that the only means available for such contextualization is some sort of context-marking markers, and these simply recreate all of the problems of encodingism at the level of the markers, as well as at the level of the mental models being marked: rampant homunculi, no model of representational content, and so on.

In the interactive model, all representation is intrinsically contextualized. All system interactive organization is contextualized in the sorts of activities that make some suborganization appropriate. All interactive organization, in order to be functional for the system, must be interactively reachable by the system -- there must be some sets of interactions that, if followed, would arrive at the suborganization of interactive potentialities in question. That suborganization will be functionally contextualized in whatever might lead to it, and representationally contextualized in whatever that functional context represents. So, if the context is a play situation, then the subrepresentations will be specific to that contextualizing play situation. The play representations are pretense precisely in the sense that they are accessible and functional only within the play situation. They are "play"-context representations. Instead of Perner's markers, then, I suggest a functional embedding into a contextual interactive representation. This captures a similar idea, but does not commit to homunculi to interpret the markers -- not to mention the representations per se.

This addresses only the most basic of the issues involved. Another issue is concerned with the fact that pretense generally involves making use of some aspects of external objects in order to carry out the "symbolicness" of the pretense, but ignoring other aspects. Wooden blocks can serve as sand-box models of tanks because they share some properties, such as rough shape, with tanks, but they clearly differ in others, such as size, weight, self-locomotion, and so on. Similarly, a child can use an eraser as a pretend snail (Piaget, 1962). Pretense, in other words, involves making use of some properties of real objects, and ignoring others. Pretense involves representation and functioning at certain levels of generality, but disregards more particular levels. How is such representational generality possible? (Note that Perner's mental models do not address this issue, even though the analogy is dependent on it.)

Interactive representation, unlike encoding representation, is intrinsically general. It is intrinsically contextual and general. It is general to whatever differentiations are involved, but not necessarily at any more particular level. If a control system can control a robot in a room without bumping into anything, then it could also in principle control a pen to draw the layout of the room. Similarly, a control system that could control a visual scan of a house could in principle control a hand holding a pen to draw the house (Bickhard, 1978). Any control system that can control any particular interaction can in principle control a different interaction that manifests the same general interactive properties. The interactive model, in other words, offers a rich set of resources for modeling pretense, including generality, without adverting to metarepresentation (Bickhard, 1980).

This point brings us to phenomena of metarepresentation. I will be quite brief here, because there is considerable other discussion available. The interactive model yields a natural model of levels of reflective knowing -- reflective or metarepresentation -- very similar to Piaget's levels of reflective abstraction (Bickhard, 1973, 1980; Campbell & Bickhard, 1986). This model addresses such apparently reflective phenomena as imagery that occurs prior to age 4, shows that it does not require reflection, and locates the emergence of genuine reflection at about age 4 (Bickhard, 1973, 1978). That model of roughly age-4 reflection, in turn, can address metarepresentation, including Perner's metarepresentation, as well as multiple other developmental shifts that seem to occur at about the same age (Bickhard, 1992d).

In a striking convergence, then, the interactive model agrees with Perner that pretense does not require metarepresentation, but can be modeled with proper contextualization, and it supports and generalizes Perner's claims about metarepresentation, including even the age of emergence of genuine metarepresentation (Bickhard, 1973, 1978, 1992d). The interactive model, however, is not subject to the encoding aporias, including, especially, the frame problems.

Conclusions

The frame problems are problems of computational intractability. They are inherent for any system that has only explicit representations. But unboundedness of representation is commonplace in living and in child development. Consequently, it is not possible for all human representation to be explicit, as it is in standard conceptions. Human representations must be implicit, but standard conceptions of representation compel them to be explicit. Standard conceptions of representation cannot be correct.

My proposal for a form of pragmatically based "interactive representation" offers implicitness of representation and, thus, unboundedness of representation. It offers to solve the frame problems, but only at the price of abandoning the conceptions of representation that dominate many fields today: information-processing, symbol-manipulation, computer-modeling, sensory-encoding, and connectionist approaches to understanding the cognitions of children and adults.

Standard approaches to cognition in terms of explicit encoded representations, then, cannot solve the frame problems, because such approaches are themselves the source of the frame problems. The frame problems are not a fundamental problematic of epistemology, but rather a refutation of representational models that require all representation to be explicit. Psychology cannot ultimately progress within an approach that is subject to such deep and unavoidable refutations. In contrast, the interactive approach to representation, by virtue of its fundamental implicitness, dissolves the frame problems.

Of particular importance to developmental psychology is that encoding models of representation drive us to the absurd anti-developmental extremes of innatism and environmentalism. The interactive model avoids both by providing a model of the emergence of representation, representation that doesn't have to come from anywhere.

The frame problems, then, constitute a reductio ad absurdum of standard encoding approaches to representation. Real children do not have to solve the frame problems, because for them the problems never arise. Taking that reductio seriously, and making the consequent shift to an interactive model of representation, offers many advantages.

References

Amarel, S. (1981). On Representations of Problems of Reasoning about Actions. In B. L. Webber, N. J. Nilsson (Eds.) Readings in Artificial Intelligence. Los Altos, CA: Morgan Kaufmann.

Bickhard, M. H. (1973). A Model of Developmental and Psychological Processes. Ph. D. Dissertation, University of Chicago.

Bickhard, M. H. (1978). The Nature of Developmental Stages. Human Development, 21, 217-233.

Bickhard, M. H. (1980). Cognition, Convention, and Communication. New York: Praeger.

Bickhard, M. H. (1987). The Social Nature of the Functional Nature of Language. In M. Hickmann (Ed.) Social and Functional Approaches to Language and Thought (pp. 39-65). New York: Academic.

Bickhard, M. H. (1988a). Piaget on Variation and Selection Models: Structuralism, Logical Necessity, and Interactivism. Human Development, 31, 274-312.

Bickhard, M. H. (1988b). The Necessity of Possibility and Necessity: Review of Piaget's Possibility and Necessity. Harvard Educational Review, 58, No. 4, 502-507.

Bickhard, M. H. (1992a). How does the Environment Affect the Person? In L. T. Winegar, J. Valsiner (Eds.) Children's Development within Social Context: Metatheoretical, Theoretical and Methodological Issues. (pp. 63-92). Hillsdale, NJ: Erlbaum.

Bickhard, M. H. (1992b). Myths of Science: Misconceptions of science in contemporary psychology. Theory and Psychology, 2(3), 321-337.

Bickhard, M. H. (1992c). Scaffolding and Self Scaffolding: Central Aspects of Development. In L. T. Winegar, J. Valsiner (Eds.) Children's Development within Social Context: Research and Methodology. (33-52). Erlbaum.

Bickhard, M. H. (1992d). Commentary on the Age 4 Transition. Human Development, 35(3), 182-192.

Bickhard, M. H. (1993). Representational Content in Humans and Machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285-333.

Bickhard, M. H. (1995a). Intrinsic Constraints on Language: Grammar and Hermeneutics. Journal of Pragmatics, 23, 541-554.

Bickhard, M. H. (1995b). World Mirroring versus World Making: There's Gotta Be a Better Way. In L. Steffe, J. Gale (Eds.) Constructivism in Education. (229-267). Erlbaum.

Bickhard, M. H. (1995c). Transcending False Dichotomies in Developmental Psychology -- Review of Beyond Modularity. by A. Karmiloff-Smith. Theory and Psychology, 5(1), 161-165.

Bickhard, M. H. (1996). Troubles with Computationalism. In W. O'Donohue, R. F. Kitchener (Eds.) The Philosophy of Psychology. (173-183). London: Sage.

Bickhard, M. H. (1997). Piaget and Active Cognition. Human Development, 40, 238-244.

Bickhard, M. H. (manuscript, in preparation). Interaction and Representation.

Bickhard, M. H. with Campbell, D. T. (forthcoming). Emergence. In P. B. Andersen, N. O. Finnemann, C. Emmeche, & P. V. Christiansen (Eds.) Emergence and Downward Causation.

Bickhard, M. H., Campbell, D. T. (forthcoming-b). Variations in Variation and Selection: The Ubiquity of the Variation-and-Selective-Retention Ratchet in Emergent Organizational Complexity. In D. Hull, C. Heyes (Eds.) A Festschrift for Don Campbell.

Bickhard, M. H., Campbell, R. L. (1989). Interactivism and Genetic Epistemology. Archives de Psychologie, 57(221), 99-121.

Bickhard, M. H., Campbell, R. L. (1992). Some Foundational Questions Concerning Language Studies: With a Focus on Categorial Grammars and Model Theoretic Possible Worlds Semantics. Journal of Pragmatics, 17(5/6), 401-433.

Bickhard, M. H., Richie, D. M. (1983). On the Nature of Representation: A Case Study of James J. Gibson's Theory of Perception. New York: Praeger.

Bickhard, M. H., Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science -- Impasse and Solution. Amsterdam: Elsevier Scientific.

Bogartz, R. S., Shinskey, J. L., & Speaker, C. J. (1997). Interpreting infant looking: The event set × event set design. Developmental Psychology, 33(3), 408-422.

Bruner, J. S. (1975) The Ontogenesis of Speech Acts. Journal of Child Language, 2, 1-19.

Campbell, D. T. (1974). Evolutionary Epistemology. In P. A. Schilpp (Ed.) The Philosophy of Karl Popper. LaSalle, IL: Open Court.

Campbell, D. T. (1990). Levels of Organization, Downward Causation, and the Selection-Theory Approach to Evolutionary Epistemology. In G. Greenberg, E. Tobach (Eds.) Theories of the Evolution of Knowing. (1-17). Hillsdale, NJ: Erlbaum.

Campbell, R. L., Bickhard, M. H. (1986). Knowing Levels and Developmental Stages. Basel: Karger.

Campbell, R. L., Bickhard, M. H. (1992a). Clearing the Ground: Foundational Questions Once Again. Journal of Pragmatics, 17(5/6), 557-602.

Campbell, R. L., Bickhard, M. H. (1992b). Types of Constraints on Development: An Interactivist Approach. Developmental Review, 12(3), 311-338.

Chomsky, N. (1964). A Review of B. F. Skinner's Verbal Behavior. In J. A. Fodor, J. J. Katz (Eds.) The Structure of Language. Prentice-Hall, 547-578.

Clark, A. (1989). Microcognition. MIT.

Coffa, J. A. (1991). The Semantic Tradition from Kant to Carnap. Cambridge.

Dretske, F. I. (1988). Explaining Behavior. MIT.

Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., Plunkett, K. (1996). Rethinking Innateness. MIT.

Fodor, J. A. (1990). A Theory of Content and Other Essays. MIT.

Ford, K. M., Hayes, P. J. (1991). Reasoning Agents in a Dynamic World: The Frame Problem. Greenwich, CT: JAI Press.

Ford, K. M., Pylyshyn, Z. (1996). The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex Press.

Genesereth, M. R., Nilsson, N. J. (1987). Logical Foundations of Artificial Intelligence. Morgan Kaufmann.

Gopnik, A. (1993). Theories and Illusions. Behavioral and Brain Sciences, 16(1), 90-108.

Haith, M. M. (1997). Who Put the Cog in Infant Cognition: Is Rich Interpretation Too Costly? Biennial meetings of the Society for Research in Child Development, Washington, D. C., April 3.

Hanson, P. P. (1990). Information, Language, and Cognition. University of British Columbia Press.

Hoopes, J. (1991). Peirce on Signs. Chapel Hill.

Houser, N., Kloesel, C. (1992). The Essential Peirce. Vol. 1. Indiana.

Johnson-Laird, P. N. (1983). Mental Models. Cambridge: Harvard.

Keisler, H. J. (1977). Fundamentals of Model Theory. In J. Barwise (Ed.) Handbook of Mathematical Logic. Amsterdam: North-Holland.

Kneale, W., & Kneale, M. (1986).The development of logic. Oxford: Clarendon.

Leslie, A. M. (1987). Pretense and representation: The origins of "Theory of Mind." Psychological Review, 94, 412-426.

Loewer, B., Rey, G. (1991). Meaning in Mind: Fodor and his critics. Blackwell.

McCarthy, J., Hayes, P. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer, D. Michie (Eds.) Machine Intelligence 4 (pp. 463-502). New York: American Elsevier.

McClelland, J. L., Rumelhart, D. E. (1986). Parallel Distributed Processing. Vol. 2: Psychological and Biological Models. MIT.

Murphy, J. P. (1990). Pragmatism. Westview.

Nelson, K. (1985). Making Sense: The Acquisition of Shared Meaning. Academic.

Nelson, K. (1986). Event Knowledge. Erlbaum.

Perner, J. (1991). Understanding the Representational Mind. MIT.

Piaget, J. (1954). The Construction of Reality in the Child. New York: Basic.

Piaget, J. (1962). Play, Dreams, and Imitation in Childhood. New York: Norton.

Piaget, J. (1970). Genetic epistemology. New York: Columbia.

Piaget, J. (1987). Possibility and Necessity. Vols. 1 and 2. Minneapolis: U. of Minnesota Press.

Pylyshyn, Z. (1987). The Robot's Dilemma. Ablex.

Rich, E., Knight, K. (1991). Artificial Intelligence. New York: McGraw-Hill.

Rosenthal, S. B. (1983). Meaning as Habit: Some Systematic Implications of Peirce's Pragmatism. In E. Freeman (Ed.) The Relevance of Charles Peirce. Monist, 312-327.

Rumelhart, D. E. (1989). The Architecture of Mind: A Connectionist Approach. In M. I. Posner (Ed.) Foundations of Cognitive Science. MIT, 133-160.

Rumelhart, D. E., McClelland, J. L. (1986). Parallel Distributed Processing. Vol. 1: Foundations. MIT.

Shanon, B. (1988). Semantic Representation of Meaning. Psychological Bulletin, 104(1), 70-83.

Shanon, B. (1993). The Representational and the Presentational. Hertfordshire, England: Harvester Wheatsheaf.

Siegler, R. S. (1984). Mechanisms of Cognitive Growth. In R. J. Sternberg (Ed.) Mechanisms of Cognitive Development. (pp. 141-162). New York: Freeman.

Siegler, R. S., Jenkins, E. A. (1989). How children discover new strategies. Hillsdale, NJ: Erlbaum.

Smith, J. E. (1987). The Reconception of Experience in Peirce, James, and Dewey. In R. S. Corrington, C. Hausman, T. M. Seebohm (Eds.) Pragmatism Considers Phenomenology. (73-91). Washington, D.C.: University Press.

Smolensky, P. (1988). On the Proper Treatment of Connectionism. Behavioral and Brain Sciences, 11, 1-74.

Stein, L. (1991). An Atemporal Frame Problem. In K. M. Ford, P. J. Hayes (Eds.) Reasoning Agents in a Dynamic World: The Frame Problem. (219-230). Greenwich, CT: JAI Press.

Toth, J. A. (1995). Review of Kenneth Ford and Patrick Hayes, eds., Reasoning agents in a dynamic world: The frame problem. Artificial Intelligence, 73, 323-369.

Waltz, D., Feldman, J. A. (1988). Connectionist Models and Their Implications. Norwood, NJ: Ablex.