1. I have mentioned three stances that are commonly taken with respect to a potentially representational system: a designer stance, a user stance, and an observer stance. The three stances differ in the origins of their knowledge of correspondences between conditions internal to the system and those external to it: a designer typically stipulates and engineers such correspondences; a user typically learns them; and an observer typically diagnoses them from observations and analyses. What all three have in common, however, is that they constitute perspectives simultaneously on both the system and on that system's environment. It is by virtue of this dual perspective that encoding correspondences can be defined -- between known conditions in the system and known conditions in the environment. But these are known only to the holder of such an external perspective. None of these three perspectives is that of the system itself, and the system has no independent perspective on its own environment. Such correspondences may (or may not) exist between the system and its environment, but to take any such correspondences as constituting representations for the system is to require that the system know not only the internal states, but also the correspondences and what those correspondences are with. Such a project encounters precisely the circularity of the incoherence problem: the system must already know what the correspondences are with in order for the correspondences to constitute representations for the system, but such knowledge is what those correspondence representations were supposed to provide in the first place.

2. A camera produces correspondences -- lawful correspondences -- between the image at the back of the camera and scenes in front of the camera. Of course, no one confuses such images with representations -- at least not for the camera itself. The imaging in a camera does not involve transduction in its typical sense, but in standard approaches the only function of the energy transformations of transduction is to provide the grounds for the claims of lawfulness of the correspondences produced.

3. In spite of this double recognition -- 1) of the distinction between correspondence = correlation = covariation = information, on the one hand, and genuine representation, on the other, and 2) of the fact that the latter problem is still essentially untouched -- we still find Fodor presuming, as if the matter were settled, the core of his claimed demolition of Gibson (Fodor and Pylyshyn, 1981; e.g., in Fodor, 1991). That critique of Gibson, however, is based essentially on a non-sequitur equivocation on exactly the distinction between information = correspondence and genuine representational content -- a slide from transduction, which gives at best correspondence, to "that the light is so-and-so", which is representation. Bickhard and Richie (1983) show just how bad that "demolition" of Gibson really is.

Incidentally, we also find in Fodor (1991, p. 257) the assumption "that we have a story about representation along the lines of representation-is-information-plus-asymmetric-dependence", in spite of his apparent recognition of the distinction between information and representation.

In yet another instance of Fodor's attack on Gibson, we find in Fodor (1986, p. 19) the claim that "Gibsonians require that for each nonnomic property to which we can respond selectively, there must be a coextensive, transducer-detectable, psychophysical invariant; e.g., a light structure in the case of each such visual property." Gibsonians require no such thing. Fodor is again simply charging the Gibsonians with the consequences of his own non-sequitur. It is Fodor's confusion that yields the presumed exhaustive dichotomy between transduction and inference, and it is therefore Fodor's confusion that yields the conclusion that, if the Gibsonians deny inference, they must be committed to transduction for everything. In fact, Fodor's transduction cannot exist (Bickhard and Richie, 1983) -- it directly encounters the incoherence problem -- and the dichotomy between transduction and inference is not exhaustive; it appears even plausibly exhaustive only within a very narrowly focused encodingist perspective. Within such a narrow encodingist focus, you either generate encodings directly -- transduction -- or you generate them on the basis of earlier encodings -- inference (see Fodor on Transduction in the main text below).

4. The notion of functionalism as involving the possibility of substituting for components or aspects, in virtue of the substitutes' serving the same function as what they replace, does not endorse the Pylyshyn (1984) notion of a singular cleavage between biological and functional levels of consideration. In fact, the hierarchical process-emergence model within which this discussion proceeds forces, and thus converges with, something like Lycan's (1990) model of hierarchical functionalism.

5. There is an approach to this regress problem that construes current functions of a subsystem type in terms of past contributions to meeting selection pressures in ancestral organisms, and holds that the first instances of such subsystems, presumably accidental, did not in fact serve a function. Functional emergence, then, depends on past contributions to the satisfaction of selection pressures; and past satisfaction of selection pressures as a ground for this emergence is taken to be at least one step closer to an ultimate naturalistic model. This maneuver, however, does not address the conceptual problem of systems without evolutionary history, and does not address the normativity problem.

It also relies on the choice of 'satisfying selection pressures' as somehow criterial for 'function', but does not address why the satisfaction of selection pressures should itself be taken as criterial for anything. The implicit response to this charge is that it is in terms of such selection pressure satisfaction that the existence of the current system is to be explained. I would argue that there is a germ of a solution hidden within this implicit answer, but, prima facie, it simply backs up the regress one step: Why should contributions to the existence of current systems be taken as criterial?

From my perspective, the deepest problem with this approach is that it fails to render function non-intentional and non-epiphenomenal. Whether or not a system has a function, in this view, supposedly depends on whether or not there is an appropriate history of contributing to the satisfaction of selection pressures, but the presence or absence of such a history makes no difference whatsoever to the current activities of the system, or to their consequences. The presence or absence of such a history, and therefore, in this account, the presence or absence of there being functions to be served at all, is causally and ontologically epiphenomenal. An intentional observer might conclude that there was or was not such a history, and, therefore, that there was or was not a function in this sense, but that does not provide grounds for a naturalism of functional analysis.

6. Note that since this analysis is in terms of patterns or types of process organizations, it is already intrinsically also in terms of tendencies or propensities for certain consequences to obtain. It thus naturally addresses such potentially problematic examples as malfunctioning instances of system components, or components that are never called on to perform their function in a particular system instance. That is, it addresses normative issues in virtue of the modalities of propensity at the level of types. I will not address here the complexities that can obtain when, for example, normative functional analysis and functional etiological explanation interact, as for vestigial organs. Clearly, such interactions and differentiations can ramify with impressive complexity.

7. Self-maintenance and recursive self-maintenance can be construed as functional propensities (Bigelow and Pargetter, 1987), except that here the propensity to contribute to the recursive self-maintenance of the system is taken to be a property of the organization of the underlying process in the system, not a property of some part of the system.

8. Note that this framework relies on properties that were emphasized by the cybernetic ground out of which Artificial Intelligence and Cognitive Science developed, but which were abandoned in the shift to the symbol manipulation framework. The symbol manipulation framework as an approach to cognition, however, presupposed away one of its own fundamental problems -- the nature of representation. In effect, I am proposing that the problem of emergent representation cannot be solved without taking into account several of the aspects of cybernetics that have been ignored -- plus evolutionary considerations, a better grounding of functional analysis, and so on. On the other hand, it should also be noted that cybernetics itself had no satisfactory model of representation either.

9. The notion of a final state, and of a set of possible final states, is essentially that of the conditions in a system that are functional for exerting control on other parts of the system. The "final" part of the term is strengthened if the subsystem in question switches off only when in some final state, though little of the interactive model requires that. It would be quite in keeping with the model developed here for a differentiator to function continuously and concurrently with other parts of the system, and to change, from time to time on the basis of its interactions, the "final state" that exerts control influences on (indications for; see below) the rest of the system. The term "final state" is adopted from automata theory, but certain aspects of automata theory, such as discreteness and strict sequentiality, are not necessary for the interactive model. It is also possible, with very interesting consequences, for a system's differentiators collectively to store their final states as indicators. For example, a dynamics internal to the system could then ensue among those stored indicators, even though the initial indicators would be stored on the basis of interactions with the environment (Bickhard, 1980b; Bickhard and Richie, 1983).
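
As a concrete illustration only -- a minimal sketch of my own, not part of the model's formal development, with all names hypothetical -- a differentiator of this kind can be caricatured in a few lines of Python: it interacts with an environment, settles into one of a fixed set of possible final states, and the stored final states then serve as indicators on which a purely internal dynamics could operate.

    import random

    class Environment:
        """Hypothetical stand-in: interaction outcomes are just scalars."""
        def respond(self):
            return random.uniform(0.0, 1.0)

    class Differentiator:
        """Settles into one of a fixed set of possible final states; which
        one depends on how the interaction with the environment goes. The
        final state itself carries no content -- it merely differentiates."""
        def __init__(self, final_states):
            self.final_states = final_states
            self.state = None

        def interact(self, env):
            outcome = env.respond()
            # Settle into the final state "nearest" the interaction outcome.
            self.state = min(self.final_states, key=lambda s: abs(s - outcome))
            return self.state

    env = Environment()
    d = Differentiator(final_states=[0.0, 0.5, 1.0])
    indicators = [d.interact(env) for _ in range(3)]   # stored final states
    # An internal dynamics could now ensue among these stored indicators,
    # without any further interaction with the environment.
    print(indicators)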

10. This notion of implicit definition apparently originated in the 19th century with the realization that appropriate axiomatizations of geometry could be taken as implicitly defining the notions of geometry (Kneale and Kneale, 1986). It was adopted by Hilbert as a formalist approach to mathematics in general, in which all of mathematics would be so implicitly defined (Moore, 1988). It carries on in formal model theory in more refined notions such as those of categorical or monomorphic axioms (Kneale and Kneale, 1986) or of an elementary class of models (Keisler, 1977). It is this notion of axioms implicitly defining a class of models (Kneale and Kneale, 1986; Quine, 1966) that is generalized in the idea of interactive implicit definition.
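
A standard illustration (mine, not the text's): the group axioms

    \forall x, y, z \;\; (x \cdot y) \cdot z = x \cdot (y \cdot z)
    \exists e \, \forall x \;\; e \cdot x = x \cdot e = x
    \forall x \, \exists y \;\; x \cdot y = y \cdot x = e

name no particular group; rather, they implicitly define the class of all models $(G, \cdot)$ that satisfy them -- any structure whatsoever satisfying the axioms counts. It is this relationship between axioms and their class of models that interactive implicit definition generalizes.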

There is also a related notion of the implicit definition of a term in an axiom system by its position and role within that system, and a proof that the possibility of such an implicit definition implies the possibility of (a somewhat ad hoc) explicit definition (Boolos and Jeffrey, 1974; Kneale and Kneale, 1986; Quine, 1966). The interactive notion of implicit definition is not the definition of a term, but this theorem is nevertheless of relevance in showing that implicit definition is not intrinsically less powerful than explicit definition (Quine, 1966) -- when explicit definition is possible at all.

It is this last point, of course, that is at the core of the issue: such explicit definition is possible only in terms of already available representations -- it is an encoding definition -- and, therefore, cannot serve any fundamental epistemological functions. Explicit definitions cannot yield new representational content or new knowledge; they can only yield stand-ins for representation and knowledge already available. Implicit definitions can.

11. This notion of environmental sensitivity is with respect to the environment of the system under analysis. In the case of the central nervous system, for example, the 'environment' includes the rest of the body, and sensitivity to, say, blood sugar level, would constitute a functional sensitivity of an appropriate kind. The key point is that such sensitivity -- to blood sugar level -- is a strictly functional notion, and does not require any representation, of blood sugar or anything else.
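
A toy sketch (mine; all names and thresholds are illustrative) may make the point vivid: the behavior of the following function covaries with blood sugar level, and is in that sense functionally sensitive to it, yet nothing in the system represents blood sugar -- the variable name is for the reader, not for the system.

    def regulate(blood_sugar_mg_dl):
        # Strictly functional sensitivity: behavior switches with the
        # signal, but no state here encodes or represents 'blood sugar';
        # the descriptive name exists only for the human reader.
        if blood_sugar_mg_dl < 70:
            return "release_glucose"
        elif blood_sugar_mg_dl > 140:
            return "release_insulin"
        else:
            return "idle"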

12. Note also that while a system in which the function of representation is served must be a goal-directed system -- else there would be no internal criterion of error -- the converse does not hold. Not all goal-directed systems will be representational. In particular, only those goal-directed systems that differentiate their activities in the service of the goal in accordance with differentiations of the environment (or some other source of such informational differentiation) will constitute minimal representational emergence according to this model.
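
A minimal sketch, on my own illustrative assumptions (the scenario, thresholds, and names are all hypothetical): a goal-directed system with an internal error criterion, whose activity is selected in accordance with a differentiation of the environment. It is the conjunction of the two -- goal-directedness plus environmentally differentiated activity selection -- that the model takes as minimally representational.

    import random

    def differentiate(signal):
        # Environmental differentiation: settle into one of two final
        # states; the state itself has no content.
        return "A" if signal > 0.5 else "B"

    # Activity indicated for each differentiated class of environments.
    STRATEGIES = {"A": lambda x: x - 0.1, "B": lambda x: x + 0.1}

    def pursue_goal(state, goal=0.5, tolerance=0.06):
        for _ in range(100):
            if abs(state - goal) < tolerance:    # internal criterion of error
                return state                     # goal satisfied
            kind = differentiate(state)          # differentiate the situation
            state = STRATEGIES[kind](state)      # select activity accordingly
        return state

    print(pursue_goal(random.uniform(0.0, 1.0)))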

13. Note that, on this explication, any genuinely adaptive system -- such as a recursively self-maintenant system -- will be at least primitively representational. This is in the sense of involving indications of interactive potentialities about the environment -- such (indications of) interactive potentialities are the representational content.

Such a notion of content is more primitive than any involving systematicities or constituents of such content, or involving any differential attitudes towards (the predication of) such content. These issues, which commonly exercise Fodor and others, are changed in major ways by the interactive implicit definition explication of content, but will not be addressed in this chapter.

The general notion that representation must be functional for a system's interactions with its environment in order to functionally exist for the system at all is similar to that of Van Gulick (1982), but that explication proceeds, beyond the basic intuition, within a clearly encodingist framework. There seems to be an assumption that only covariational correspondence could serve such an interactive function.

14. With respect to number: one property of a lower level control system -- or of its executions -- is the number of iterations of a subsystem within the larger system. Such a count could be detected and controlled from within the given system level, but could be represented only from within the next higher knowing level.
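
A rough and purely suggestive sketch (the names are hypothetical): at the first level, the iteration count functions in control -- it halts the loop -- without being an object for the system; only a second-level process, taking the first level's activity as its object, has the count available as something to represent.

    def level_one(task, times):
        # The count functions in control here, but it is not an object of
        # consideration at this level; it is merely used.
        return [task() for _ in range(times)]

    def level_two(trace):
        # A next higher "knowing level" can take properties of the lower
        # level -- such as how many iterations occurred -- as its objects.
        return {"iterations": len(trace)}

    trace = level_one(lambda: "step", 4)
    print(level_two(trace))   # {'iterations': 4}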

15. Interactivism per se is a model of the nature of representation. In particular, it is not a model of learning or development. Interactivism, however, does have strong implications for learning and development. In particular, interactivist system organizations cannot be passively impressed into a mind via induction or transduction, and, therefore, they must be actively constructed. Interactivism, then, forces a constructivism. Furthermore, such constructions, unless they are prescient, must be blind trials, so interactivist constructivism must be a blind variation and selection constructivism, an evolutionary epistemology (Campbell, 1974b). Variation and selection constructivism, in turn, is a kind of interactionism between system and environment with regard to learning and development. So, interactivism as a model of representation forces interactionism as a model of development. The two models of interactivism and interactionism, then, are not the same, not even models of the same phenomena, but they do have a strong relationship, with the first forcing the second (Bickhard, 1988, 1991d, 1992a, 1992b; Bickhard and Campbell, 1989; Campbell and Bickhard, 1986).
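
A schematic variation-and-selection loop (illustrative only: the 'constructions' are strings and the selection criterion is a stand-in) shows the sense in which such construction is blind: candidates are generated without foresight, and only selection, after the fact, determines what is retained.

    import random

    def construct(target="cabde", alphabet="abcdefgh", trials=10000):
        def fitness(candidate):                        # selection criterion
            return sum(c == t for c, t in zip(candidate, target))
        best = [random.choice(alphabet) for _ in target]
        for _ in range(trials):
            candidate = best[:]
            # Blind variation: no prescience about what will succeed.
            candidate[random.randrange(len(candidate))] = random.choice(alphabet)
            if fitness(candidate) >= fitness(best):    # selection retains it
                best = candidate
        return "".join(best)

    print(construct())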

16. For still another example, note that the interactive model of emergent representational content is not subject to the peculiar dissolution into "meaning holism" (Fodor, 1987a, 1990). The minimal interactive representation system, and its representational content, are far from holistic.

17. A reaction I have received several times at talks that I have given on interactivism and the encodingism critique.

18. Note that by defining transducers as non-inferential rather than in terms of their being nomic, Fodor risks being committed to accepting connectionist nets as transducers. Among other consequences, he could no longer simply claim (as in his argument against his construal of Gibson, for example) that non-nomic properties, such as that of being a "crumpled shirt", cannot be "transduced."

19. To recapitulate, it is impossible for transductions to yield encodings because a transduced encoding must encode that which produces it, that which is energetically transduced, and, in order to do that, the system must already know what is being transduced -- as usual, you must already have encoding representational content in order to get encoding representational content. The temptation to pass the origin of this knowledge off onto prior learning or onto evolution fails because the same problem recurs at all levels: encoding representational content must come from somewhere, and neither evolution nor learning nor transduction can generate it (within the encodingist framework). In all cases, the question of the origin of the knowledge, the origin of the representational content, cannot be addressed. In all cases, the incoherence problem is encountered (Bickhard and Richie, 1983, contains a detailed tracking of the counters and counters-to-the-counters that might be proposed in this argument).

20. As mentioned before, so also is a camera.

21. The Twin Earth intuitions turn on the presumption of naturalism: the conceptual experiment wouldn't work if thoughts were taken to be entertainments of Platonic forms. In particular, if naturalism is correct, then representation, whatever it is, must be constitutable in physical terms, or perhaps in physically instantiated functional terms. But functional interrelationships can at best functionally differentiate -- differentiate with respect to functional properties -- they do not provide a privileged epistemic access to noumena. So, if all physical relations are identical in two situations, then so also are all functional relations, and naturalism cannot epistemically penetrate deeper than to differentiate down to, but not below or within, functionally identical properties. Twin Earth scenarios postulate a situation in which H2O and XYZ have functionally identical properties, even though they are in fact different compounds, and turn on the intuition that whatever is in the mind could not differentiate more finely than those interactive functional properties -- without violating naturalism. Therefore, there must be something in the mind that picks out external differentiations, and such picking-out must be functionally context dependent. Assuming naturalism, then, whatever is representational in the mind cannot select or pick out or differentiate more finely than identity of interactive functional properties, and, therefore, such differentiation will necessarily be functionally context dependent.

If two substances were identical with respect to all possible functional properties, then there would be no grounds for supposing them to differ at all. So, the Twin Earth intuitions also lead to the conclusion that if a party on either earth had access to both H2O and XYZ, it would be possible to find differentiating functional properties. Similarly, science on either world might already have found properties of H2O or XYZ, respectively, that would differentiate it from the other should the opportunity to examine both occur.

Twin Earth arguments, then, turn on a basic principle of interactivism: the functional -- functional in the interactive sense, not in the Fodor encoding sense -- character of representation, and the consequent impossibility of representational differentiations partitioning more finely than those interactive functional properties. To attempt to capture those differentiating properties in terms of "contents" -- narrow content -- is to attempt to capture a differentiation process in terms of what it differentiates, but what is differentiated is specifiable only in a fully context dependent manner. The differentiating functional organization itself, however, is not context dependent -- only its executions and the external conditions of its outcomes are -- and, therefore, once again, the encodingism approach presupposes itself. Once again, it must have content in order to get content, and it is impossible to have that content in the first place -- no necessarily context dependent characterization of content, narrow or otherwise, can possibly capture the context independent functional organizations that context dependently differentiate such contents. So long as content is what is represented, and what is represented is what is corresponded to, the circularities of encodingism are unavoidable. Real narrow 'contents' -- interactive differentiators -- have, in themselves, no content at all. And they cannot be given content in terms of what they differentiate or correspond to. Among other consequences, they cannot be ultimately understood or explicated or modeled in terms of any such content.

There is still another encodingism-created problem that appears here: naturalism forces a distinction between narrow and broad content; folk psychology, meanwhile, seems to require that representational contents have causal interactions with each other, and that these interactions be in terms of broad content; yet only narrow contents are in the brain, and, therefore, only narrow contents are potentially causally available to each other. One move would be to attempt to demonstrate that folk psychology can get by with only causal interactions among narrow contents, but, however superficially plausible this might seem in a Twin Earth case, its plausibility disappears for more sensitive Kaplanesque context sensitivities such as indexicals. A different move might be to deny folk psychology completely, but such denials must ultimately offer some sort of replacement (Churchland, 1984). The phenomena of talk, and of talk of beliefs and desires, and of the causal consequences of such talk, may or may not end up being understood in standard folk psychology construals, but the phenomena must nevertheless be accounted for in some way. (The encodingism argument, in fact, has as a consequence that folk psychology in its standard interpretation cannot be correct: propositional attitudes -- whether beliefs, desires, or any other sort -- cannot be attitudes toward encoded propositions.)

This apparent conflict between naturalism and folk psychology, however, is itself a product of encodingism. Naturalism may force the narrow-broad content distinction, but resources for representational content, and for the potentiality of causal interaction among representational content, are restricted to broad or narrow contents only from within an encodingism.

In the interactive model, in particular, the representational content is not given in the context sensitive differentiator at all, nor in its "broad" differentiated externality, but, rather, in indications of potential further interactions -- indications of the potentiality of executing other system organizations. The differentiators, the indications, and the other system organizations whose potential executions are indicated, are all aspects and parts of the system itself, and, thus, are all naturalistically available for possible interaction with each other in that system. Whatever may be the ultimate fate of folk psychology, it won't fail on the basis of the necessity of impossible broad-content causal interactions -- at least not from within the interactive model. (There are other implications of the interactive model for folk psychology that I will not pursue here: one example is the sense in which an interactive system may function 'as if' it had certain beliefs -- may function in ways that involve implicit functional presuppositions about the environment or the system itself, that do not involve any explicit representational content, Bickhard and Christopher, in press.)

22. There is another interesting sense in which this entire standard debate is, from the interactive perspective, misguided from the start: the questions of intentionality and representation are being asked about "mental states". Interactivism is embedded in a strict process ontology, and, from such a perspective, to ask about mental states is equivalent to asking about flame states. Both flames and mental phenomena are intrinsically open systems, necessarily in process, and a state approach is at best an aspect of a mathematical idealization, and at worst a labyrinthian dead-end. Flames can have states in the sense of the idealization of a single time slice through the process, where the process itself is captured, say, by some differential equations, but there is no way to explicate what a flame is, nor to understand it, just in terms of such a state. Or a flame could have a state in terms of some relatively, even if only momentarily, stable aspect of its process, such as "being in the state of burning that piece of wood". Again, however, although such "conditions of the process" may be quite relevant to some considerations, they cannot model what a flame is at all. There is no a priori reason why a state approach to mind should be any better suited than one to flames, and the interactive perspective -- for that matter, any reflection on the hierarchies of emergence of patterns of process through atoms, molecules, life, mind, and so on -- argues strongly that a state approach is precisely such a labyrinthian blindness. Simply, a state approach is just an idealization of a substance approach. A state approach, a mental state approach, then, is bound to be fatally misleading as a presupposition of the very statement of the issues concerning mentality.
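
In the differential equation idealization just mentioned (my formulation, purely illustrative), the point can be put compactly: if the process is captured by

    \dot{x} = f(x, t), \qquad x(t_0) = x_0,

then a "state" is a single time slice $x(t_0)$, while the flame is the ongoing trajectory $x(t)$ generated by $f$ -- and no $x(t_0)$ alone, however fully specified, models what the flame is.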