
Interactivist Summer Institute

July 23 - 27, 2001

Lehigh University

Abstracts of Papers

Richard Alterman

C. A. Hooker

Gila Y. Sher

Mark H. Bickhard

Bipin Indurkhya

Iris Stammberger

Dr. Jason Chihyu Chan

Richard F. Kitchener

Georgi Stojanov

Silvano P. Colombano

Hideki Kozima

Goran Trajkovski & Georgi Stojanov

Dr. Naomi Goldblum

Prof. Ralph D. Ellis & Prof. Natika Newton

Tom Ziemke

Nira Granott

Helena De Preester

Coordinating Representations

Richard Alterman
Computer Science Department
Center for Complex Systems
Brandeis University
Waltham, MA 02454 USA
alterman@cs.brandeis.edu

The overarching interest of this research is to continue to develop a framework for Cognitive Science that depends not only on the mental operations of the individual but also on the social interaction within which they are embedded. This talk will develop the concept of the coordinating representation as an everyday method for structuring the coordination of actors engaged in a non-face-to-face joint activity.

The first part of this talk will develop the notion of a coordinating representation in the context of the interdisciplinary literature on coordination. The second part will focus on the cognitive engineering task of building coordinating representations for computer-mediated joint activities.

Staying Coordinated

In the everyday world, humans manage to stay coordinated for joint tasks where face-to-face interaction is largely not an option. A method for coordination in these environments is the coordinating representation. A coordinating representation is an external representation shared among participants in a joint activity. It is designed for the activity-at-hand and reduces the complexity of the coordination task. It mediates and structures the activity. It has the designated purpose of helping participants to achieve coordination in non-face-to-face cooperative activities. It helps the participants to jointly make sense of the situation in the absence of face-to-face interaction. Its meaning is based on conventional interpretation. It signals to the participants - without dictating action - that a convention of behavior is in place.

An everyday example of a coordinating representation is the "stop sign". The stop sign is a representation shared among the participants in a traffic setting. It presents a structure for organizing the collective behavior of drivers, pedestrians, and cyclists at a busy intersection. The interpretation of the structure imposed by the stop sign is negotiated during the activity. Things may run smoothly at the intersection - but there will also be interruptions. An impatient driver piggybacks on the driver in front of him. A pedestrian decides to ignore the stop sign altogether.

Computer-Mediated Joint Activity

For the last several years my group has been building a same-time/different-place groupware system (VesselWorld) as an experimental platform for analyzing real-time computer-mediated collaborations. In VesselWorld, three users are engaged in a set of cooperative tasks that require the coordination of behavior in a simulated environment. In the simulated world, each participant is the captain of a ship, and their joint task is to find and remove barrels of toxic waste from a harbor. A demo of the system was run at CSCW 2000.

There are several important characteristics of the joint activity of participants in a VesselWorld problem-solving session. Participants have different roles (both predefined and emergent). Cooperation and collaboration are needed to succeed. Participants must develop a shared understanding of an unfolding situation to improve their performance. Uncertainty at the outset makes pre-planning inefficient in many circumstances. There are numerous problems of coordination.

In a base version of the VesselWorld system, participants can coordinate only by electronic chatting. Most of the participant dialogue centers on the barrels, and on how effort can be coordinated in removing the barrels from the harbor and transporting them to a large barge. An analysis of participant dialogue identifies a set of problem areas in organizing behavior in relation to a shared domain object. For example, a large volume of information must be exchanged about the naming, status, location, and properties of the toxic wastes. In a second version of the system, coordinating representations are introduced that structure and simplify the exchange of information in these problem areas of coordination.
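To make this concrete, here is a minimal sketch - our illustration, not the actual VesselWorld implementation - of a coordinating representation as a shared, structured record per barrel; the field names follow the problem areas just listed, and all identifiers are hypothetical:

    # A shared, structured record: participants fill in agreed-upon fields for
    # each barrel instead of negotiating them in free-form chat.
    from dataclasses import dataclass, field

    @dataclass
    class BarrelRecord:
        name: str                   # agreed label, e.g. "small-red-1"
        location: tuple             # last reported (x, y) position in the harbor
        status: str = "unverified"  # e.g. "unverified", "located", "lifted", "on barge"
        properties: dict = field(default_factory=dict)  # e.g. {"size": "large", "leaking": True}

    # One table shared by all three captains; updating a field replaces a
    # whole chat exchange about that barrel.
    shared_barrels = {}
    shared_barrels["small-red-1"] = BarrelRecord("small-red-1", (120.0, 45.5))
    shared_barrels["small-red-1"].status = "located"

The design point is that the record's fixed fields carry the conventional interpretation: seeing "status: located" signals that a convention of behavior is in place without dictating anyone's next action.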

An experimental evaluation conducted at Brandeis compared the performance of teams of participants with (and without) the coordinating representations. Two groups could only electronically chat during problem-solving sessions, and two groups could chat but also had access to coordinating representations. Each team was trained and then played for about 10 hours over several sessions of problem solving. All events that occur during a problem-solving session are recorded in a log file by the system and can be reviewed using a "VCR"-like device. In the talk I will present both quantitative and qualitative evaluations that contrast the social interaction and joint decision making of groups that had coordinating representations with those that did not.

The Interactivist Model and Analytic Philosophy

Mark H. Bickhard
17 Memorial Drive East
Lehigh University
Bethlehem, PA 18015
mark.bickhard@lehigh.edu

Twentieth century analytic philosophy emerged from and incorporated a number of themes, some having originated centuries before. These include an anti-naturalism, an anti-psychologism, a location of normativity exclusively in grammar or syntax, a corresponding sundering of epistemology from naturalistic grounds, and others. There were variants on these themes, few analytic philosophers were pure embodiments of all of them, and a claim of naturalism in some form had a resurgence later in the century, but such themes nevertheless characterize most analytic philosophy, especially in its earlier years.

Interestingly, the interactivist model holds opposite positions regarding virtually every one of these themes. This puts it in opposition not only to analytic philosophy in a broad sense, and to most contemporary philosophy insofar as it manifests vestiges of this past, but also to psychology and cognitive science insofar as they have incorporated these themes themselves - and this is in fact a deep and powerful incorporation. These themes are, arguably, not only false but also aporetic. The interactivist model, then, attempts to offer a way out.

The Creative Problem-Solving Process:
Perspectives from Constructivism, Interactivism, and Evolutionary Epistemology

Dr. Jason Chihyu Chan
Department of Education
National Chengchi University
64, Chi-nan Rd.,
Wenshan, Taipei,
Taiwan 116, R.O.C.
Tel: 02-29364946.
Fax: 02-29394946
E-mail: jyjan@nccu.edu.tw

Since Donald Campbell proposed the blind-variation-and-selective-retention theory of human creativity in 1960, debates about the selectionist perspective on the process of knowledge growth have never ended. Some psychologists, such as Robert Sternberg, may agree that evolution is powerful enough to produce adaptive organisms, but they disagree that blind variation is necessary or efficient for producing creative ideas in problem solving. Indeed, the Darwinian idea may be useful in the field of evolutionary psychology, but its role in cognitive psychology, especially in modeling the creative problem-solving process, still waits to be explicated.

The aim of my proposed research is to integrate perspectives from constructivism, interactivism, and evolutionary epistemology. Specifically, I would like to explore the process of variation-generation, the interaction between creative and critical thinking, and the process of problem-solving. Hopefully, my research can fill the gap between evolutionary psychology and cognitive psychology, help to avoid the adaptive-as-correspondence picture of knowing, and give a holistic account of creative problem-solving.

From the Origins of Life to Intelligence:
The Emergence of Symbolic Constructs

Silvano P. Colombano
NASA-Ames Research Center
Computational Sciences Division
Scolombano@mail.arc.nasa.gov

Intelligence cannot be understood - and cannot be "artificially recreated" - without also understanding how it fits as a phenomenon in the evolution of matter.

I say "matter" instead of "life" because my thesis is that the distinction between matter, with its associated interactions, and life, is simply in the complexity of these interactions and in the number of "symbolic levels" that are defined by these interactions.

Most of us think of symbols only in the context of language. This is understandable, since it is at this level that it is easiest to draw a distinction between natural objects and their "names", i.e. the different sets of "abstract" objects that can be manipulated to produce models of the "real" world. Of course sets of abstract objects can also acquire names and be manipulated at higher and higher conceptual levels. When we use the words "philosophy" or "the Declaration of Independence" we use abstract constructs that will only make sense in other specific abstract contexts.

Just as the concept of language can be "pushed up" to higher and higher levels of abstraction, it can also be "pushed down" to sets of objects that define very specific relationships with each other. In the genetic code, for example, we see that the assignment of specific amino acids to triplets of nucleotides (codons) is completely analogous to a language where the codons correspond to names, and amino acids to objects. Just as names and other verbal constructs in higher-level languages are strung together to form concepts that affect people's behavior ("come here", "go there"...), codons are strung together in the DNA to represent proteins that will affect the behavior of cells and organisms.
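To make the analogy concrete, here is a toy illustration of ours (not the author's): the codon table really is a name-to-object lookup. The table fragment below is part of the real standard code; the translate function itself is just a sketch:

    # Codons act as "names"; amino acids are the "objects" they denote.
    CODON_TABLE = {
        "AUG": "Met",   # methionine; also the usual start signal
        "UUU": "Phe",   # phenylalanine
        "UGG": "Trp",   # tryptophan
        "UAA": "STOP",  # regulatory "punctuation", not an amino acid
    }

    def translate(mrna):
        """Read an mRNA string three letters at a time, like words in a sentence."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    print(translate("AUGUUUUGGUAA"))  # ['Met', 'Phe', 'Trp']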

I contend that the genetic code does not just show some interesting analogies to natural languages; it IS a language in its very essence. Just as in language, other constructs for regulation (stop and start points, etc.) are present; the language is defined in the context of itself - just like any dictionary - and, most importantly, its assignments are fundamentally ARBITRARY. Just as any set of sounds could be chosen to represent an object, so could any codon represent a particular amino acid. This does not preclude historical and evolutionary reasons why some particular representations ended up being chosen and fixed in the system. The important point is that, if ambiguities occur and are deleterious, the system has the capacity to create and implement arbitrary rules that will disambiguate the assignments. These rules are implemented as grammar and syntax in higher-level languages, and as genetic regulatory machinery in the genetic code.

Early researchers in the origins of the genetic code tended to ignore this possibility, and exerted most effort in determining "physical" reasons that would explain codon-amino acid assignments. It was the position of my Ph.D. thesis (1977), inspired by the work of my advisor Howard Pattee, that this extremely reductionist approach was missing the essential feature of the system: its basis in self-formed arbitrary rules, or stated differently, its "symbolic" nature.

Languages and symbols occur again and again in the evolution of matter, but they may not be always easily identifiable. Nevertheless, keeping a language framework in mind provides a unifying element in understanding the nature of evolution and emergence.

Matter that cannot produce arbitrary interaction rules is matter that cannot evolve. What is most difficult to understand in the early stages of the evolution of matter is how arbitrary choices can emerge from deterministic chemical dynamics. The answer is actually simple if we allow for the capacity to produce a large number of different molecular structures of increasing size. Large size will increase the likelihood that any particular structure, purely by random event, will be the first of its kind. If this structure happens to have a function that will enhance the probability of its own future formation (some rough self-catalytic behavior) in the context of all other structures, we have obtained the first "arbitrary" rule in the system. The rule is that this structure, from now on, is favored over other structures. The system itself, as a whole, creates and enforces this rule. Had a different structure with similar properties come about first, the rule would have been different: this second structure would have been favored. Physics and chemical dynamics are the same in both cases but different rules are now being enforced.
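The "first of its kind" argument can be made vivid with a small stochastic simulation - our sketch, not the author's model. The "physics" below is identical on every run; only the random history differs, and with it the rule that gets frozen in:

    # Many structures can form; a few are capable of self-catalysis. Each time
    # one of those forms, its own future formation becomes more likely, so the
    # earliest lucky structure tends to become the enforced "rule".
    import random

    def frozen_rule(seed, n_structures=100, steps=2000):
        rng = random.Random(seed)
        autocatalytic = {3, 41, 59, 77, 90}   # structures capable of self-catalysis
        weights = [1.0] * n_structures
        counts = [0] * n_structures
        for _ in range(steps):
            s = rng.choices(range(n_structures), weights=weights)[0]
            counts[s] += 1
            if s in autocatalytic:
                weights[s] *= 1.2   # rough self-catalysis favors its own re-formation
        return max(range(n_structures), key=counts.__getitem__)

    print(frozen_rule(seed=1), frozen_rule(seed=2))  # typically different "rules"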

If we push the envelope in the other direction, towards intelligence, we may conjecture that sensory input creates at first random patterns of brain activity (within the limits of "pre-wired" features) until some of these patterns fall into self-reinforcing cycles and become dominant. These are the first perceptual "rules". Associations (self-reinforcing cycles) of patterns of objects with patterns of sounds are quickly recognized as the beginning of "natural language", but again, this is the same phenomenon we see at the origins of life.

What the Brain Can't Do

Dr. Naomi Goldblum
2/22 Mevo Refidim
Jerusalem, Israel
Email: goldbln@mail.biu.ac.il

One of the most important current debates in cognitive science is about the extent to which our understanding of brain processes can help us understand mental processes as well. On one side there is the dualist view, championed by "representationalists" such as Chomsky and Fodor, who are so impressed with the obvious differences between the mind and the brain that they think there is no point using any of our knowledge about the brain to try to understand the mind. On the other, there is the monist view, put forward by "materialists" such as the Churchlands, who infer from the fact that mental processes take place in the brain that all we need to understand the mind is a deeper understanding of the brain. I would like to present a third view, one that distinguishes between what we can learn about the mind from the brain and what we cannot.

My claim is that there is a crucial distinction between process and content that is ignored by most theorists in cognitive science. I believe that we can learn a great deal about how our minds work, about the processes of our thinking, by studying the workings of the brain. The contents of our minds, on the other hand, will always need to be based on the interaction between the human being and the environment. In other words, while the mental level of explanation of our cognitive processes can usefully be anchored in the physical level of explanation, there is no point in trying to explain our particular pieces of knowledge or belief in this way. What I would like to do here is to explore this distinction and its implications.

Cognitive scientists use our current knowledge about the brain to build models of how we recognize people and things, how we classify things into categories, how we learn how to do things we couldn't do before, how we learn to speak and later to read, and how we act on the basis of what we believe and what we want. What cannot be learned from studying the brain, however, is what information there is in the mind -- what any particular person is thinking at any particular moment, or what knowledge any particular person possesses in general.

Much has been written, especially in the PDP paradigm, about those aspects of the mind that can be understood better by using our knowledge of how the brain works. In this paper I would like to explain why I nevertheless believe that no one will ever be able to find out what you are thinking or what you know by finding out what state your brain is in, no matter how much we learn about the brain and how well we are able to monitor what your brain is doing.

First I must specify that my suggestion is based on an acceptance of token-token identity with a rejection of type-type identity. Since I believe that the mind is inseparable from the brain, I believe that there is a correspondence between tokens of mental states and tokens of brain states -- that every time we think or feel or sense or want something there is some process occurring in our brain. I do not accept, however, the claim that the relations between the mind and the brain can be described in terms of general laws. I reject the view that for every type of mental event -- all stabbing pains in the left big toe, all thoughts that the grass is yellow, all intentions to go to Alaska, all wishes to be a millionaire -- there is a particular type of brain state or brain process that corresponds to it in every person who is experiencing this type of mental event. I therefore deny that the mind can be "reduced" to the brain -- that once the brain has been described in physical terms we will know everything there is to know about the mind.

Since I accept token-token identity, I believe that, for instance, when I see a red square and you see the same red square, neurons are activated in the visual areas of my brain and neurons are also activated in the visual areas of your brain. However, the neurons firing in my brain when I see a red square are probably not the same as the ones that are firing in your brain, in the sense of "same" that we use when we say that if I am typing the word "red" on my PC and you are typing the word "red" on your PC with an identical keyboard, we are both depressing the same keys in the same order. Similarly, if I am thinking that Arafat needs a shave and you are thinking that Arafat needs a shave, it is most unlikely that the neurons activated in my brain are the same as the ones that are activated in yours. The correspondence between mind and brain involves only tokens, not types.

This is why I believe that there is no need to worry that someday scientists will be able to find out what we are thinking by hooking us up to machines that can tell them which neurons are active in our brains. There are, to be sure, machines that perform PET scans, which can tell what sort of processes are going on in our brains. These machines produce pictures that show in vivid color which parts of our brains are most active when we are performing a mental calculation or watching a movie or listening intently to our favorite music. Then why am I so certain that further scientific progress won't enable us to pick out the exact neurons that are active when we are thinking that "2+2=4" or watching Casablanca or listening to the Beatles' rendition of "Yesterday"?

To explain why I am so sure that this will never happen, I will detail my reasons for believing that the neurons that are active in my brain when I am thinking that "2+2=4" are not the same as the ones that are active in yours when you are thinking the same thing. If this is the case, then it is enough to ensure that no one will be able to "read your mind" by "reading your brain." Let's assume for the sake of the argument that scientists will someday be able to pick out the exact neurons that are firing in my brain when I am thinking that "2+2=4" or seeing a red square or intending to get some chocolate ice cream out of the freezer. But if the neurons that are firing in my brain when I am thinking that "2+2=4" are not the same as the ones that are active in yours when you are thinking the same thing, then it won't do the scientists any good to know which ones are firing in my brain, because even if they could see the "very same" neurons firing in your brain -- that is, if they could find neurons active in exactly the same physical location in your brain as in mine -- they would not be able to deduce that you are also thinking that "2+2=4." Although they might be able to deduce that you are thinking about some numerical fact because the "numerical" area of your brain is lit up, they wouldn't know which particular numerical fact you are thinking about. Similarly, if they discovered exactly which neurons are firing in my brain when I am thinking that Arafat needs a shave, and they saw the "same" neurons firing in your brain, they might be able to say that you too are thinking about something related to the visual appearance of a human face, but they could not deduce that you are thinking that Arafat needs a shave.

For a more detailed example, consider the speech areas of the brain. If your Broca's area, for instance, is electrically active, then we can deduce that you are likely to be formulating a sentence that you are about to say. Even though this is not the case for the small number of people whose language centers are on the right side of their brain, the generalization is a useful one anyway because it is true for most people. It will never be possible, however, to use our knowledge of the brain to predict the exact sentence you are formulating when your Broca's area is lit up. We might be able to say something about its general content -- if, for example, the area responsible for mathematical calculations was lit up a fraction of a second earlier, then it is very likely that you are about to express the result of some such calculation. But we will never be able to predict, say, that if neurons G3, W1243 and X756 are active, then you are about to say "2+2=4," while if H856, Q2064, V902 and three thousand other particular neurons are active, you are planning to say "If it takes Faucet A 20 minutes to fill up the bathtub, while Faucet B can do the job in 30 minutes, then if both faucets are turned on the bathtub will be filled up in 12 minutes."
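(The figure in that last hypothetical sentence does check out: the filling rates add, 1/20 + 1/30 = 3/60 + 2/60 = 5/60 = 1/12 of a bathtub per minute, so both faucets together fill the tub in 12 minutes.)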

Therefore, even if "brain-readers" could map all my present thoughts onto specific brain states by recording which neurons are firing each time I am thinking some specific thought, this would enable them to know only what I myself am thinking when the same configurations of neurons fired again, and then only in case I had not learned something new about this particular topic in the meantime. They would not be able to deduce the specific details about what anyone else was thinking if the same configuration of neurons fired in some other person's brain, because that configuration might be used to encode some other thought.

But science is built on generalizations. Being able to read one specific person's mind would not be worth the effort, as it would be easier just to ask her what she is thinking. The effort of mapping neuronal configurations onto thoughts would be worthwhile only if this knowledge could be used to read other people's minds as well, but what I am claiming here is that this is impossible.

Why is this so? If neuroscience has advanced to the point that we can tell whether you are making a mathematical calculation or imagining a colorful scene, how can I be so sure that further advances in this area will not allow us to figure out which particular mathematical calculation you are making, or what scene you are imagining?

The answer is not that the technology couldn't be invented, since there don't seem to be any limits to the sorts of technology that can be invented. It is rather that there wouldn't be any point in it, because each person's knowledge is organized differently within her/his brain, so that no generalizations can be made on the level of specific pieces of information. Even if we could use some sort of advanced technology to discover precisely which neurons in Dick's brain are active when he is getting ready to say "See Spot run," it wouldn't tell us anything about which neurons are active in Jane's brain when she is preparing to utter the very same sentence.

There are at least two reasons for this -- one biological and one environmental. The biological reason is that the fine structure of the connections between the neurons in different people's brains is different, just as everyone has different fingerprints from everyone else. Normal people all have their middle finger longer than their other fingers, for example, but the exact pattern of whorls on the fingertip is different for each person. We can, of course, take everyone's fingerprints and thus know the exact pattern of whorls on every person's fingers, but that will not enable us to predict the exact pattern of the next person's fingerprints. In the same way, some new advanced technology might someday enable us to find out which neurons are active in Dick's brain when he is about to say "See Spot run," but this will not tell us anything about which neurons are active in Jane's brain when she is planning to say the same thing, because the fine structure of the neurons in their brains is different. Even if we find out that the same area of the brain is active when Dick and Jane are thinking about animals, the particular pattern of neurons that are active when they are about to say "See Spot run" will remain as individual as a fingerprint.

The environment-based reason for the difference between people in the exact pattern of neurons involved in planning to say "See Spot run" is that each person learns the individual words in a different way. The way we understand each word we know is based on the different sentences in which we've heard it used, and the different things that were going on at the time we heard these sentences. But no two people have heard exactly the same sentences, and even in the case of those sentences that were the same, not exactly the same thing was going on when they heard them. Thus the neurons that encode the word "run" are connected differently for Dick, who first encountered the word when his older brother yelled "Look at those dogs run" while taking him for a walk in the park, than they are for Jane, who first heard the word when her mother said "Don't run so fast." Therefore the particular pattern of neurons that encodes this word in Dick's brain cannot be exactly the same as the pattern that encodes it in Jane's.

In contrast to the content of our thoughts, however, the underlying processes of thinking are likely to be similar across people, making it useful to base our models of cognitive processing on brain function. Although people's higher thought processes are also liable to be quite individual, the unconscious processes on which they are based are probably similar for most people. Most advances in cognitive science have made use of the assumption that people's thought processes are similar, and the results of the myriad experiments on processes of object perception, reading, attention, language understanding, memory and the like all support this assumption.

It therefore seems likely that we can usefully develop theories about cognitive processes that are modeled closely on brain function, such as connectionist theories, while these theories are unlikely to be so useful for understanding the contents of our thoughts. Understanding what we are thinking about requires a knowledge of how our brain processes interact with whatever happens to be out there in the world, and this is a much more intricate enterprise. Considering that the human brain is the most complex object in the known universe, the study of the interaction between the brain and all the rest of the universe is liable to be more complex by several orders of magnitude and should therefore provide cognitive scientists with many decades, if not centuries, of exciting challenges.

Emergent Representation out of Actions in Social Systems:
Discovery in Collaborative Problem Solving

Nira Granott
University of Texas at Dallas
P.O. Box 830688, GR41
Richardson, TX 75083-0688
Ph. 972-883-6349
Fax 972-883-2491
Email: ngranott@utdallas.edu

One of the challenges that interactivist researchers face is the development of models that explain and predict emergent processes distributed across interactive systems of people and objects. This paper presents a model that explains the emergence of representations out of action in such systems during microdevelopment. Microdevelopment is development during short time spans (e.g., during learning or problem solving). The advantage of using microdevelopment is that it is possible to encapsulate (by continuous documentation and analysis) a whole process, rather than snapshots at different time points. When the content of microdevelopment is an unfamiliar problem, the data (e.g., videotapes) can capture the emergence of new knowledge. The model described here is demonstrated by such experimental findings.

The model uses a system-based unit of analysis: the "ensemble". The ensemble is a group that co-constructs knowledge through interaction within an activity-context (Granott, 1998). The activity-context (e.g., objects, materials) forms part of the ensemble, as it has a direct effect on the ensuing constructive process. An ensemble can be of any size. Like a molecule, it retains the attributes of the whole (interactive developmental process) and can consist of diverse numbers and arrangements of elements. The individual is the special case of N = 1. In unconstrained social contexts, ensembles naturally form, change, and reform differently throughout time. Attributes of ensembles (e.g., participants' ages, their knowledge in relation to each other, feedback from the activity's materials, the materials' representational support) determine the attributes of the developmental process. During the ensemble's interaction, the participants' discourse externalizes their knowledge and understanding, which can be related to the ensemble's overt actions. Analysis of ensembles' microdevelopment can identify developmental patterns and mechanisms.

The model suggests that microdevelopment evolves through reiterated sequences (Granott, in press). Each sequence develops within a specific context. An unfamiliar or difficult problem may be processed at first at a level lower than exhibited by the same person/ensemble when solving familiar or easy problems. Such a backward transition to lower levels usually marks the beginning of a new microdevelopmental sequence related to that specific problem. Then the sequence shows progress, yet progress is rarely smooth and unidirectional: it shows much variability within a developmental range and reiterative construction of higher and lower knowledge levels. Three types of variability operate within and across sequences: (1) backward transitions that form temporary regressions followed by progress; (2) ordered fluctuations, bounded within a developmental range, a Zone of Current Development (ZCD); and (3) reiteration of processes. A series of microdevelopmental sequences creates generalization and transfer of knowledge across contents and domains, while making knowledge more robust and valid within specific content areas. During microdevelopment, variability creates different patterns, characterized by different distributions. Moreover, across time, patterns self-assemble into specific meta-patterns. Meta-patterns show different developmental profiles that can predict and explain development. Progressive meta-patterns evolve to a higher level, including stability at a higher-level attractor (which corresponds to a developmental milestone). By contrast, other patterns self-assemble into a meta-pattern of stagnation and decay.
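The shape of a progressive meta-pattern can be illustrated with a toy time series - our sketch, not Granott's model or data:

    # A backward transition to a low level at the start, noisy reiterative
    # fluctuation bounded within a slowly rising Zone of Current Development,
    # and eventual stabilization at a higher-level attractor.
    import random

    rng = random.Random(0)
    level, attractor = 1.0, 5.0                     # knowledge levels, arbitrary units
    trace = []
    for t in range(200):
        zcd_low = min(attractor, 1.0 + 0.03 * t)    # floor of the zone rises with progress
        zcd_high = min(attractor, zcd_low + 1.5)    # bounded developmental range
        level += rng.uniform(-0.6, 0.8)             # ups and downs, net upward drift
        level = max(zcd_low, min(zcd_high, level))  # fluctuation stays inside the ZCD
        trace.append(level)

    print("start %.2f -> end %.2f (attractor %.1f)" % (trace[0], trace[-1], attractor))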

Findings of experimental studies support hypotheses derived from the model. Time-series data of ensembles' discovery and collaborative solutions of unfamiliar problems (i.e., understanding how robots function) are analyzed using microgenetic and dynamical systems techniques. The analysis shows that ensembles started constructing their knowledge through action-based explorations. Progressive meta-patterns involved dense fluctuations between knowledge levels, first leaping to the unknown and then increasingly specifying higher-level knowledge. Initial representations started to emerge through such fluctuations. These fluctuations served as a necessary and sufficient condition for transition to a still higher-level attractor.

The crucial interactivist nature of the resolution of the fundamental problem of intelligence:
problem definition.

C. A. Hooker
Philosophy Department
University of Newcastle
Callaghan 2308
NSW, Australia
plcah@alinga.newcastle.edu.au
Phone +61 (0) 249 215185
Fax +61 (0) 249 216928

From a biological perspective, the typical cognitive predicament of creatures is that of ill-definedness, that is, the vagueness and ambiguity of the problems they face. How do I stay safe? How do I eat? ... How do I research this new domain? How do I become a mature person?

This predicament derives of course from ignorance, of what the environment is like, certainly, but equally of what creatures' bodies need and can do, and of what unrevealed possibilities there are in both; it is expressed cognitively, not just as lack of information to fill already well-defined categories, but as a lack of well-defined problems. The real power of intelligence is the capacity to transform such ill-defined problems into better defined ones.

In resolving ill-defined problems, typically the question to be posed, criteria for a relevant, successful answer, the methods for reaching that answer, and those for justifying it, and the character of the answer itself, all come into focus together. By comparison with that feat, solving already well-defined problems, however sophisticated, is a derivative capacity, one largely concerned, not with intelligent creativity, but with a cluster of separate technical and managerial skills for using analytic rule systems, managing information, and the like.

Since the problem-defining process begins in ignorance (and assuming naturalism), it can only be resolved through the consequences of interacting with the world; this is why interaction is essential. Moreover, if all aspects of problems are to come into focus together, the consequences of interaction must be profound; that is why characterising it properly is crucial. But the consequences of interaction cannot be foreseen by the cognitive agent, else they would be uninformative for definedness; that is why its role cannot be posed in terms of any kind of already achieved well-defined problem-solving capacity. These features make standard representationalist, computational problem-solving approaches unattractive. This paper focuses on an organisational approach to characterising problem-identifying processes, and highlights the problem of providing informative general accounts of problem solving.

An Interaction-Based Approach to Metaphor and Cognition

Bipin Indurkhya
Department of Computer, Information and Communication Sciences
Tokyo University of Agriculture and Technology
2-24-16 Nakacho, Koganei, Tokyo 184-8588, Japan
bipin@cc.tuat.ac.jp

It has been recognized for quite some time that many metaphors go beyond the existing similarities between two objects; rather, they create the similarities. However, such metaphors have received only scant attention in cognitive science research. I focus on such similarity-creating metaphors, and argue that they are manifestations of the interactive nature of cognition. A key to understanding the cognitive force of similarity-creating metaphor is to resolve a central paradox that seems inherent in any interaction theory: how can reality lack a mind-independent ontology and structure, yet still manifest objectivity by constraining the possible worlds a cognitive agent can create in it? I will outline an approach to cognition that resolves this paradox, and then address the problem of similarity-creating metaphor within this framework. I will demonstrate the efficacy of this approach by showing how it can be used to model context effects in perception, creativity in legal reasoning, and cognition in art and poetry. Finally, I will examine some implications of this approach for computers and creativity.

Developmental Epistemology

Richard F. Kitchener
Dept. of Philosophy
Colorado State University
Fort Collins, CO 80523
970-491-6315 (office)
rkitchener@vines.colostate.edu

Recently, individuals in evolutionary epistemology (EE), philosophy of science, and cognitive science have (in quite different ways) concluded there is a need to understand cognition in a historical or developmental way. This requires, I will argue, a developmental epistemology (DE).

Such a DE would be different from EE in that, although evolution and evolutionary biology provide one model for the growth of knowledge, a better model comes from developmental biology. This is because, on the standard interpretation, evolution does not manifest progress and directionality, whereas these features appear to be present, e.g., in the history of science. This gives rise to what has been called developmentalism (both in the history of ideas and in biology). Although developmentalism has been seriously criticized by several individuals, much of it can be retained as a model for the growth of knowledge, especially if it is based on recent work in developmental biology such as developmental systems theory (DST). Although there are objections one can raise to DST and related schools of thought, its core features should be retained.

DE would be a version of naturalistic epistemology, in which the empirical results of the empirical sciences (especially cognitive developmental psychology) would be relevant to constructing a philosophical theory of how knowledge develops in the individual, in science, in the various species, etc. Objections to this program are considered and replies given.

Ontogeny of Socially Communicative Robots

Hideki Kozima
Communications Research Laboratory
Hikaridai 2-2-2, Seika-cho, Soraku-gun,
Kyoto 619-0289, JAPAN
Email: xkozima@crl.go.jp
Fax: +81-774-95-2419

Introduction

This paper proposes an ontogenetic model of artificial social intelligence that can acquire the ability to communicate with human beings and to commit itself to human social interaction. In order to investigate the underlying mechanisms of social development, we build an infant-like humanoid robot, Infanoid, and bring it up through interaction with the social environment, especially with human caregivers. The developmental model, which is being implemented in Infanoid, starts from (1) the acquisition of intentionality, which enables the intentional use of methods for obtaining goals; then develops into (2) the ability to identify the intentional self with others, which enables indirect experience of others' behavior; and finally attains (3) the ability of social communication, in which the robot understands others' behavior by ascribing to it the intention that best explains the behavior.

Intentionality

A symbol, in any form, like a word or gesture, is not a label of its referent, but a piece of high-potential information from which the receiver derives the sender's intention to manifest something in the environment and thereby to change the receiver's attention and behavioral disposition. Therefore a robot learning to use symbols has to understand the intentions of the human interlocutors using them. How can the robot do that?

First of all, the robot itself must be an intentional being capable of goal-directed spontaneous behavior. To make the robot intentional, we assume the following prerequisites are needed:

(1) a sensori-motor system, with which the robot rides on affordance in the environment as we do,

(2) a repertoire of behavior, whose initial contents are innate reflexes, such as grasping whatever the hand touches,

(3) a value system, like that for pleasure and displeasure, which evaluates what the robot feels (as exteroception and proprioception), and

(4) a learning mechanism, which reinforces the evoked behavior positively or negatively according to the value of the result.

Starting with innate reflexes, which have a continuous spectrum over the sensori-motor modalities, the robot reinforces profitable cause-effect relations, partitioning the spectrum into meaningful units through interaction with the environment. The robot then gradually becomes able to use these units spontaneously as method-goal relations. This amounts to the acquisition of intentionality.
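Read as an algorithm, prerequisites (2)-(4) describe a simple reinforcement loop. The following is our sketch of that reading, not Infanoid's actual code, and the payoff numbers are invented:

    # An innate reflex is evoked, the value system scores the felt result, and
    # the learning mechanism strengthens or weakens that cause-effect pairing.
    import random

    rng = random.Random(0)
    repertoire = {"grasp": 1.0, "turn_head": 1.0, "vocalize": 1.0}  # innate reflexes

    def value_system(behavior):
        """Pleasure/displeasure stand-in; these payoffs are purely hypothetical."""
        return {"grasp": 0.5, "turn_head": 0.1, "vocalize": -0.2}[behavior]

    for step in range(500):
        names = list(repertoire)
        act = rng.choices(names, weights=[repertoire[n] for n in names])[0]  # evoked behavior
        repertoire[act] = max(0.05, repertoire[act] + 0.1 * value_system(act))

    print(max(repertoire, key=repertoire.get))  # "grasp" comes to be used spontaneously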

Identification

Understanding others' intention requires the ability to understand how others feel and act. As a number of researchers have suggested, joint attention plays an important role in understanding others' exteroception: how they perceive the environment. Understanding how others act requires what we call action capture for observing others' action and translating the action into the robot's own motor program or proprioception.

Joint attention is the activity of sharing each other's attentional focus. It spotlights the objects and events being focused on by the participants of the communicative interaction, creating a shared context in front of them. The shared context is a subset of the environment, the constituents of which are mutually manifested among the participants; it reduces the computational cost of selecting and segmenting possible referents from the vast environment and also makes the communicative interaction coherent.
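A back-of-the-envelope sketch (ours, not the paper's) of that cost reduction: reference resolution searches only the jointly attended subset rather than the vast environment.

    environment = ["object_%d" % i for i in range(10000)]     # the vast environment
    shared_context = {"object_17", "object_42", "object_99"}  # jointly attended subset

    def resolve(matches):
        # Only 3 candidates are segmented and tested instead of 10,000.
        return sorted(o for o in shared_context if matches(o))

    print(resolve(lambda o: o.endswith("42")))  # ['object_42']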

Action capture connects quite different modalities:

(1) others' bodily action (movement or posture), observed mainly visually, and

(2) the robot's motor program for, or proprioception attached to, the observed action.

A number of researchers have suggested that human beings are equipped with the ability of action capture. Possible evidence includes (1) neonatal mimicking of some facial expressions and (2) mirror neurons, found in macaques' pre-motor cortex, which activate both when one observes someone else doing a particular action and when one is actually doing the same action. We think neither piece of evidence fully accounts for the human ability of action capture, since (1) neonates seem to respond to specific stimuli like the motion of the tongue, and (2) the macaques could have learned the correspondence.

We would rather assume neonates' amodal (or synesthetic) perception, in which both exteroception (of visual stimuli, for instance) and proprioception (of their own action) appear in the same space, spanned by dimensions like spatial and temporal frequency, amplitude, and egocentric localization. This amodal perception will produce primordial imitation, like that of head rotation and arm stretching. Starting from quite a rough mapping, the amodal reflexes will be shaped into meaningful correspondences between the self and others through imitative interaction with caregivers.
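Schematically - our illustration, not the implemented system - action capture can then be treated as nearest-neighbor matching in the shared amodal space:

    # A seen movement and the robot's own motor repertoire are coded in one
    # space (temporal frequency, amplitude, egocentric location); the nearest
    # repertoire entry is taken as the primordial imitation.
    import math

    motor_repertoire = {
        "head_rotation": (0.5, 0.3, 0.0),   # (temporal freq, amplitude, location)
        "arm_stretch":   (0.2, 0.8, 0.6),
    }

    def capture(observed):
        """Map an observed movement onto the proprioceptively nearest own action."""
        return min(motor_repertoire,
                   key=lambda a: math.dist(observed, motor_repertoire[a]))

    print(capture((0.45, 0.35, 0.1)))  # -> "head_rotation"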

Social Communication

The ability to identify with others opens up the power of empathetic understanding of others' intentions in their communicative behavior. This empathetic process is so-called self-reflection, but we emphasize that the process is an automatic, subconscious one of which we are unaware. Human beings, and possibly the robot, identify the intentional self with the interlocutor, reflectively understand the intention behind the observable behavior, and so project the intention back onto the interlocutor.

This empathetic understanding of others' intentions is the key not only to human communication but also to imitative learning. Imitation is qualitatively different from emulation: while emulation is the reproduction of the same result by using a pre-existing repertoire of behavior or one's own trial-and-error, imitation is the reproduction of the intentional use of methods for obtaining goals. This ability of imitation may be specific to Homo sapiens, and may have given the species the ability to share individual creations and to maintain them over generations, and so have led to the emergence of language and culture.

The Unified Self: An Enactionist Approach

Prof. Ralph D. Ellis
Department of Philosophy
Clark Atlanta University
P.O. Box 81
Atlanta, GA 30314
ralphellis@mindspring.com
(Office) (404) 880-8262

Prof. Natika Newton
Department of Philosophy
Nassau County College
Garden City, N.Y. 11533
nnewton@suffolk.lib.ny.us;
natika@worldnet.att.net
(Office) (516) 572-7450

Hume argued that we do not experience a unified self in the content of our conscious experience, but instead are aware of a "bundle of impressions" which changes from moment to moment. What Hume meant can be confirmed by introspection: there is no object of conscious perception that remains unchanged throughout all our experience. More recently, philosophers and cognitive scientists (Dennett, 1991) have argued that there is no single location or mechanism in the brain that can be identified with the "self" as we experience it. This claim is not only an empirical one. It is supported by philosophical arguments (Perry, 1993; Shoemaker, 1963) that any intentional content of experience that is describable cannot intrinsically carry with it the information that it is identical with our selves. Any such content is subject to misidentification. Yet we cannot misidentify our selves, per se. Most terms and names are subject to misuse (I can point to Jones and mistakenly call him "Smith"), but when I use "I," I cannot fail to refer to myself. I am normally considered to be an authority, moreover, on whether a subjective state like a feeling or a thought is mine, or not mine. I cannot mistake my headache for yours.

How is it possible that a conscious subject is capable of infallible self-identification without reference to any empirical information about the self? Since all other cases of reference make use of empirical evidence, how can there be an exception? The puzzle is not only linguistic but experiential: I experience clearly the difference between self-initiated actions and reflexive movements, but am not able to identify any sensational content that shows me the difference, and I cannot explain how I know. The solution to this problem has been elusive, we argue, because of an entrenched assumption that all cognitive abilities make use of the same basic mechanisms of information-processing. Data, externally or internally generated, is received and transformed into beliefs upon which the subject then can act. But the belief that certain states are states of my SELF does not seem to depend upon received data. The inability to imagine any alternative origin of the belief has led to this impasse.

We argue that the passive, perceptual model of knowledge - the model of a subject whose experiences result solely from input impressed upon it, in Hume's terms - is and has been seriously misleading. There is a new movement in philosophy and cognitive science, of which this Institute is a welcome recognition, holding that cognitive abilities and experiences do not derive from passively received sensory and proprioceptive input, but instead are activities of the subject, considered as an organism in which diverse processes can act separately or in concert. Most of the philosophical mysteries surrounding consciousness evaporate if an action model, rather than a perceptual model, is used to explain our experience. I.e., if the organism is a self-organizing process that enacts its conscious states, rather than a partes extra partes receiver of inputs, then it is obvious why another person cannot enact my conscious states, and therefore cannot have direct access to what they are like.

Our focus here is on the experience of the unity of the self, which we examine from both empirical and philosophical standpoints. First, we explore the unity of the self from a philosophical perspective. Hurley (1998) and Rovane (1998), reviving views of phenomenologists such as Heidegger and Merleau-Ponty, see the self and agency as inseparable; we defend one version of their claims. Second, in support of the philosophical arguments, we look at recent work in the neurosciences that suggests that it is the experience of active agency, rather than passive receptivity, that constitutes selfhood for a subject. There are three areas of research that support this view. First, the work of Jeannerod (1997) on motor imagery supplies a foundation for the claim that mental imagery of motor activity is central to subjective experience, even though it may go unnoticed by a subject. Second, studies of efference copy in motor commands supply a mechanism whereby the experience of agency is constantly in the background and often in the foreground (Galliano, 1990). Third, recent work on mirror neurons (Gallese, 1999, 2000; Rizzolatti et al., 2000) shows how actions, even those of others when witnessed and represented by a subject, constitute a central aspect of that subject's own self.

It's important to realize that mirror neurons aren't some special class of neurons that perform the specific function of imitation or empathy with others' actions and conscious states. They are the same neurons that are active when we imagine ourselves doing the action. When we understand the actions of another, we do so by imagining what it would be like to execute the action, and that entails activating neural firing patterns that would correspond to imagining ourselves doing the action. The interesting point is what this says about how we are conscious and how we understand the world and other persons in general -- that when we're conscious of any object, we are so most fundamentally in terms of its possible action affordances, and when we understand another conscious being, we understand them as beings who understand objects in the same way we do -- most basically, by imagining how they could or couldn't act in relation to the object. But in order to understand the actions of other persons in this way, we have to imagine ourselves executing the actions as they are executing them, which of course fires some of the same neurons that would fire if we were simply to imagine ourselves executing the action (which in turn also includes some of the same neurons firing as if we ourselves were to actually execute the action). The neurons in question are the parietal ones that form the body map, and are activated when we imagine moving those parts of our body. What is new is the realization of how we are born with a tendency to empathize -- i.e., the so-called "mirror neurons" will fire and allow us to empathize with what it's like to execute an action another is doing (e.g., someone pointing their finger) even before we have ever actually done that specific action ourselves. We seem to come into the world with this tendency already activated, to be able to map others' actions onto our own body maps and imagine what it would be like to execute their actions, without first having learned this by executing the actions ourselves. (See Meltzoff and Gopnik, 1993.)

In our discussion of this empirical work, we also briefly indicate some implications of the role of agency in the unity of the self for the issue of free will.

A particularly exciting development to emerge from the new scientific work is the support for a view of consciousness and selfhood as unified activities composed of the interaction of various processes. It is tempting to call the unified self an illusion, but that would be misleading. The self exists as an embodied agent, engaged in unifying projects that constitute its integrity. Yet the self is not identical with the body as such; rather, it is the body enacting the emotional motivations of the organism as a whole. Conscious subjects must represent themselves by means of sensorimotor/proprioceptive imagery, but the imagery itself can only symbolize the self as it is experienced in the process of enacting the subject's goals (see Bickhard 1999). There is ample opportunity in this dynamic and complex process for metaphysical wild-goose chases, and the history of philosophy is the history of many such enterprises. The focus on interactive processes, rather than on the static individual components of one sort or another that are parts of the self, offers the hope of catching the self in the act, so to speak, of its own conscious creation.

REFERENCES

Bickhard, M. H. (1999). Interaction and Representation. Theory & Psychology, 9(4), 435-458.

Dennett, D. (1991), Consciousness Explained (Boston: Little, Brown & Co.)

Ellis, R.D. (1986), An Ontology of Consciousness (Dordrecht: Kluwer/Martinus Nijhoff)

Ellis, R.D. (1995), Questioning Consciousness (Amsterdam: John Benjamins)

Ellis, R.D. and Newton, N. (1998), "Three Paradoxes of Phenomenal Consciousness," Journal of Consciousness Studies 5, 4, 419-442.

Ellis, R.D. and Newton, N., eds. (2000). The Caldron of Consciousness. (Amsterdam: John Benjamins)

Fadiga, L. and Gallese, V. (1997). Action representation and language in the brain. Theoretical Linguistics, 23: 267-280.

Gallese, V., Fadiga, L., Fogassi, L. and Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119: 593-609.

Gallese, V. and Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2(12): 493-501.

Gallese, V. (1999). Agency and the self model. Consciousness and Cognition, 8: 837-839.

Gallese, V. (2000). From grasping to language: mirror neurons and the origin of social communication. In: Towards a Science of Consciousness, S. Hameroff, A. Kazniak and D. Chalmers (eds.), MIT Press.

Gallese, V. (2000). The acting subject: towards the neural basis of social cognition. In: Neural Correlates of Consciousness - Empirical and Conceptual Questions, T. Metzinger (ed.), pp. 325-333, MIT Press.

Hurley, S.L. (1998), Consciousness in Action, (Cambridge, Mass., Harvard University Press).

Jeannerod, M. (1994). The representing brain. Behavioral and Brain Sciences, 17(2): 187-244.

Meltzoff, A. and Gopnik, V. (1993). The role of imitation in understanding persons and developing theories of mind. In Understanding Other Minds: Perspectives from Autism, Baron-Cohen, S., Tager-Flusberg, H., and Cohen, D. (eds.), 335-366. Oxford: Oxford University Press.

Newton, N. (1991), "Consciousness, Qualia and Reentrant Signalling," Behavior and Philosophy 19: 21-41.

Newton, N. (1996), Foundations of Understanding (Amsterdam: John Benjamins)

Perry, J. (1993), The Problem of the Essential Indexical and Other Essays (New York: Oxford University Press).

Rizzolatti, G., Fogassi, L. and Gallese, V. (2000). Cortical mechanisms subserving object grasping and action recognition: a new view on the cortical motor functions. In: The New Cognitive Neurosciences, 2nd Edition, M. Gazzaniga (ed.), pp. 539-552, MIT Press.

Rizzolatti, G. and Gallese, V. (1997). From action to meaning. In: Les Neurosciences et la Philosophie de l'Action, J.-L. Petit (ed.), Librairie Philosophique J. Vrin, Paris.

Rizzolatti, G., Fadiga, L., Fogassi, L. and Gallese, V. (2000). From mirror neurons to imitation: facts and speculations. In: The Imitative Mind: Development, Evolution and Brain Bases, W. Prinz and A. Meltzoff (eds.), Cambridge University Press.

Rovane, C. (1998). The Bounds of Agency: An Essay in Revisionary Metaphysics. (Princeton: Princeton University Press).

Shoemaker, S. (1963). Self-Knowledge and Self-Identity (Ithaca: Cornell University Press).

Intentionality and the inside/outside distinction in sensitive systems

Helena De Preester
Universiteit Gent - Department of Philosophy
Blandijnberg 2
B-9000 Gent (Belgium)
Tel: 003292643969
helena.depreester@rug.ac.be

Mind-like phenomena are often studied from a one-sided focus, in which the mental is considered as something quasi-autonomous, without a necessary relation to the body and the environment. This is obviously the case in studies of intentionality, in particular (i) in the natural-scientific, materialistic and causal explanatory scheme, where the mind is viewed as a brain and the body is seen as relatively unimportant, and (ii) where intentionality is restricted to having 'propositional attitudes'. Although propositional attitudes are not linguistic entities, the formal-logical approach has the drawback of hindering the development of a general theory of intentionality. This twofold narrowing of the phenomenon of intentionality results in systems that are closed with regard to their semantics. To conceive of the mind as closed with respect to semantics and to describe the mental without extra-mental references is, however, highly problematic. The symbol grounding problem - how are symbols related to the external world? - illustrates that it is the human observer who makes the intentional circle operational.

We reject the narrowing of the problem of intentionality, and elaborate the main shortcomings in the current cognitive-scientific approach on that point. In so doing, we are inspired by phenomenological insights, especially those in which the broader conception of intentionality and embodiment are brought together (Husserl's Leib, Merleau-Ponty's corps-sujet). The monistic-materialistic perspective precludes involving the body as a meaningful instance for the emergence of consciousness and of a conscious subject. In contrast, the intentionally directed subject, as first person, requires an embodied subjectivity, which the dominating paradigm, inspired by Descartes' determination of the res extensa, cannot account for. In order to account for the emergence of intentionality and consciousness, a third term has to be introduced between mere matter and conscious matter, namely the living system or the organism.

In this regard, the following issues are discussed:

The emergence of the inside/outside distinction, based on an adequate view of the body, should be articulated, for this distinction contains the basic terms of the reference relation. Instead of studying a substance that produces an output in reaction to certain stimuli, it is necessary to study the emergence of a living, embodied instance that actively draws a distinction between an inside and an outside. Consequently, in order to study intentionality as the relation in which a subject relates itself meaningfully to something external, it is necessary to investigate the emergence of an embodied subjectivity. We make a radical choice for embodiment, because only in that way can the conditions for having intentionality be traced; in naturalising intentionality, one should not start in the middle of the story. A living system is organisationally closed, yet open to various kinds of exchange of matter and energy. In other words, a living system not only forms a cohesive whole on the basis of self-referential dynamics; it also has to build its internal cohesion by referring to the environment. This double dynamics of internal and external coherent organisation must be examined in connection with the subjectivity of the system.

This can be done by explaining the difference between a conception of the body as a mere registering device and the sensitivity of a living being. The intentional relation does not only consist in the reception of stimuli; it also implies a dimension of sense, to which the sensitivity of the living body contributes. Already in Husserl we find the Leib as a necessary term between the mental 'closure' and what is external to it: Husserl, in opposition to Brentano, does not accept the inner presence of the terminus of the reference relation. The active contribution of the sensitive body to the reference relation must be made explicit, among other things by showing that it is impossible to reduce the body to a mental content. Nevertheless, Merleau-Ponty's analysis of the corps-sujet as the primary intentional instance remains unsatisfactory for us, because the consciousness component in it is ultimately left insufficiently explained. What we want to elaborate instead is the relation between the sensitivity of an organism and consciousness, viewed as a structure that enables meaningful relations with the external. Focusing on the sensitivity of the organism enables us to see intentionality as a general mark of living systems and to trace the roots of the inside/outside relation. Such an account is, in our opinion, the missing framework for an adequate theory of intentionality, in which the inside/outside relation occupies a central place.

The Formal-Structural View of Logical Consequence

Gila Y. Sher
Department of Philosophy
The University of California, San Diego
La Jolla, CA 92093-0302
gsher@ucsd.edu

In this talk I will offer a “formal-structural” view of logic that grounds it philosophically and has considerable mathematical as well as linguistic consequences. The roots of this view go back to Mostowski (1957), Tarski (1966), and Lindström (1966), and I have worked it out in a number of recent papers and an earlier book (The Bounds of Logic: A Generalized Viewpoint, MIT, 1991). I will describe the philosophical motivation and basic principles of this conception of logic, its criterion of logicality, some of its linguistic applications, and the place it assigns to logic in our system of knowledge.
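As a rough gloss (ours, not necessarily the talk's exact formulation), the invariance criterion of logicality descending from Mostowski and Tarski can be stated for type <1> generalized quantifiers as follows. Let $Q$ assign to each nonempty domain $D$ a set $Q_D \subseteq \mathcal{P}(D)$. Then $Q$ is logical iff it is invariant under bijections: for every bijection $f : D \to D'$ and every $A \subseteq D$,

$$A \in Q_D \iff f[A] \in Q_{D'}.$$

The existential quantifier, $\exists_D = \{A \subseteq D : A \neq \emptyset\}$, and cardinality quantifiers such as 'most' pass this test; a predicate-bound operator such as 'is red' does not.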

Relevant Literature:

1. Barwise, J. & S. Feferman, eds. 1985. Model-Theoretic Logics. (New York: Springer-Verlag.)

2. Lindström, P. 1966. “First Order Predicate Logic with Generalized Quantifiers”. Theoria 32: 186-95.

3. McGee, V. 1996. “Logical Operations”. Journal of Philosophical Logic 26: 567-80.

4. Mostowski, A. 1957. “On a Generalization of Quantifiers”. Fundamenta Mathematicae 44: 12-36.

5. Sher, G. 1991. The Bounds of Logic: A Generalized Viewpoint. (Cambridge: MIT.)

6. --- 1996a. “Did Tarski Commit ‘Tarski's Fallacy’?” Journal of Symbolic Logic 61: 653-86.

7. --- 1996b. “Semantics and Logic”. Handbook of Contemporary Semantic Theory, ed. S. Lappin (Oxford: Blackwell): 509-35.

8. --- 1999a. “Is Logic a Theory of the Obvious?” European Review of Philosophy 4 (The Nature of Logic): 207-38.

9. --- 1999b. “Is There a Place for Philosophy in Quine's Theory?” Journal of Philosophy 96: 491-524.

10. Tarski, A. 1936. “On the Concept of Logical Consequence”. In Logic, Semantics, Metamathematics (1983): 409-20.

11. --- 1966. “What Are Logical Notions?” Ed. J. Corcoran. History and Philosophy of Logic 7 (1986): 143-54.

12. Van Benthem, J. & D. Westerståhl. 1995. “Directions in Generalized Quantifier Theory”. Studia Logica 55: 389-419.

Knowing-what, knowing-how and levels of explanation:
How to build interdisciplinary theories of creativity

Iris Stammberger
Tufts University Interdisciplinary Program
8 Hancock Street #2
Somerville, MA 02144
E-mail: istammberger@mindspring.com
Phone: (617) 776-2873

In mature sciences, when different explanatory strategies are used to approach a phenomenon, theorists organize them into an abstract hierarchy of levels of explanation. This allows for a comparison of locally articulated theories in terms of the following questions: a. at each level, "Which theory offers the best generalization of the phenomenon?" and b. "Which is the best level at which to explain the phenomenon?" Cognitive scientists applying this procedure have found that the answers that emerge depend heavily on the pragmatic interests of the researcher in question and on the assumptions, methodologies and techniques each disciplinary focus can afford. They have therefore suggested the need for interdisciplinary frameworks in which different models could cohere and theorists with different interests could inform each other's theorizing regardless of their pragmatic explanatory strategy (Claxton, 1988; Newell, 1990; Hardcastle, 1996; Dennett, 1981, 1987, 1991).

In creativity research, where explanations range from the subpersonal to the social, the articulation of unified frameworks would allow theorists working within different explanatory strategies to illuminate each other's findings, and would offer the scientific community at large a more sophisticated theoretical explanation of the phenomenon we all call 'creativity'.

It has been recognized that the science of creativity needs to be interdisciplinary (Gardner, 1994), but if this is so, what are the criteria by which local theories of creativity - in sociology or neuroscience, for instance - could inform each other? If we decide that each theory is equally valid - the anthropological and the computational, for instance - how do we make the science of creativity an interdisciplinary endeavor rather than a multidisciplinary one? How do we make different perspectives cohere? In a 25-year longitudinal study of art students, Csikszentmihalyi (1988) found that the creative individual cannot be studied in isolation but only in relation to her domain of expertise and the social field that supports it. He found that the social field that promotes, evaluates, finances, and qualifies, as well as its dynamics, is as determinant of an individual's creativity as her talents or the specific particularities of her domain of practice - painting, dance, philosophy, biology, etc. Csikszentmihalyi proposes an interdisciplinary framework for the study of creativity that he calls the Domain-Individual-Field-Interacting systems approach, in which the study of the phenomenon includes the individual dimension, the domain dimension, and the field dimension as systems that interact (Csikszentmihalyi, 1988, 1999).

An interdisciplinary theory must address questions such as these: how to understand Csikszentmihalyi's dimensions as different levels of analysis of the creative phenomenon, each of which can offer an independent and valid explanation; how these levels interact as systems; and how this framework relates to explanations of creativity couched in terms of evolutionary and computational algorithms, neuronal events or anthropological data.

This paper develops a method to build interdisciplinary theories of creativity using:

conceptualizations developed in philosophy of science for the evaluation of theoretical explanations, and

arguments developed in cognitive science for the articulation of unified theories of cognition.

My proposal is that different local theories of creativity can be organized as a set of general principles and loosely coupled models, and that by making the general principles more rigorous, the local models can cohere into an interdisciplinary theory.

The procedure I follow starts by organizing existing local theories of creativity into an abstract hierarchy of levels of explanation. This is accomplished by asking, for each local theory, the following questions: 1. the know-what question (What is explained?) and 2. the know-how question (How is it explained?). Then, 3. using a unifying power criterion, I look horizontally for the best generalization at each level (Does this theory connect creativity with more apparently unrelated phenomena?), and 4. using a screening-off criterion, I look vertically for the best level of explanation of the phenomenon (What is the best level at which to study this local phenomenon?). A toy sketch of the organizing step follows.
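As a sketch only (the level assignments and field names below are invented for exposition, not taken from the paper), the organizing step might be rendered as:

# Toy illustration (hypothetical) of the organizing step described above.
from collections import defaultdict

theories = [
    # (theory, level of explanation, know-what, know-how)
    ("Boden", "algorithmic", "novel combinations of concepts", "search in conceptual spaces"),
    ("Amabile", "psychic", "individual creative performance", "componential/motivational analysis"),
    ("Csikszentmihalyi", "social", "attributions of creativity", "domain-individual-field interaction"),
]

# Steps 1-2: organize theories into a hierarchy of levels, recording
# the know-what ("What is explained?") and know-how ("How is it explained?").
hierarchy = defaultdict(list)
for name, level, what, how in theories:
    hierarchy[level].append({"theory": name, "know-what": what, "know-how": how})

# Steps 3-4: the unifying-power criterion compares entries horizontally
# within a level; the screening-off criterion compares levels vertically.
for level, entries in hierarchy.items():
    print(level, "->", [e["theory"] for e in entries])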

A comparison of existing creativity theories using the know-what and know-how questions, together with the levels-of-explanation distinction, the unifying power criterion and the screening-off criterion, shows that some apparently incompatible theories could instead be considered part of a unified framework (Amabile's, Boden's, and Csikszentmihalyi's, for instance). To cohere within a unified framework, these local theories' contrast classes - their alternative views - must correspond. With this and other distinctions borrowed from philosophy of science, I transform the initial procedure for comparing different theories into a method for building interdisciplinary theories (Friedman, 1991; Hardcastle, 1996). To illustrate the method at work, I outline an interdisciplinary theory of creativity, the ASPAS theory, which considers five different levels of explanation - the artificial, the social, the psychic, the algorithmic and the subpersonal - and I show how existing local theories of creativity could cohere within this framework.

My project is both descriptive and normative: besides comparing and contrasting different theories of creativity, it aims to determine which common elements would allow different models to be unified under a single theory.

I do not claim there is only one unified framework but instead show how unified theoretical frameworks can be created for the study of creativity as an interdisciplinary phenomenon.

Implementing Interactivism: An Outline for Generic Interactivist Architecture

Georgi Stojanov
Computer Science Department
SS Cyril and Methodius University in Skopje
Macedonia
geos@cerera.etf.ukim.edu.mk

In this paper, which has the flavor of philosophy-with-a-screwdriver, I attempt to identify the key features of an artifact architecture that would classify it as an interactivist one. Taking as a point of departure the extensive descriptions of the interactivist approach (Bickhard, 1993; Bickhard & Terveen, 1995), I try to spell out the design guidelines that arise when one actually wants to implement an autonomous agent according to these principles.

Taking as a motivational example the architecture of the autonomous agent Petitagé proposed in (Stojanov et al., 1997), I outline and discuss a hypothetical generic interactivist architecture. Issues concerning action, perception, representation, the notion of layers of representation, intentionality, and communication (artifact-artifact, artifact-human) are discussed.
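As one minimal sketch of such a control loop (ours, not the Petitagé architecture; every name below is invented for illustration), interactivist representation can be rendered as anticipations of interaction outcomes whose failure is detectable by the agent itself:

# Illustrative sketch only: representation as anticipation of interaction
# outcomes, with anticipation failure available to the agent as error.
import random

class InteractivistAgent:
    def __init__(self, actions):
        self.actions = actions
        self.anticipations = {}  # (situation, action) -> anticipated outcome

    def act(self, situation):
        # Prefer actions whose outcomes we already anticipate; else explore.
        known = [a for a in self.actions if (situation, a) in self.anticipations]
        return random.choice(known or self.actions)

    def update(self, situation, action, outcome):
        anticipated = self.anticipations.get((situation, action))
        if anticipated is not None and anticipated != outcome:
            # System-detectable error: the anticipation was falsified from
            # the agent's own perspective; a fuller architecture revises here.
            pass
        self.anticipations[(situation, action)] = outcome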

Works of Merleau-Ponty (1962) and Piaget (1929, 1969) are treated within the interactivist framework as possible sources of additional artifact design constraints.

Bickhard, M.H., Terveen, L., Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution, Elsevier Science Publishers, 1995.

Bickhard, M.H., “Representational Content in Humans and Machines”, Journal of Theoretical and Experimental Artificial Intelligence, 5, 1993.

Merleau-Ponty, M., Phenomenology of Perception, London: Routledge, 1962.

Piaget, J., The Child's Conception of the World, NY: Harcourt, Brace Jovanovich, 1929.

Piaget, J., The Mechanisms of Perception, London: Routledge & Kegan Paul, 1969.

Stojanov, G., Bozinovski, S., Trajkovski, G., “Interactionist-Expectative View on Agency and Learning”, IMACS Journal for Mathematics and Computers in Simulation, North-Holland Publishers, Amsterdam, Vol. 44, 1997.

Approaching Process Ontology via Tense Fuzzy Algebraic Structures

Goran Trajkovski
West Virginia University at Parkersburg
Department of Computer Science
300 Campus Drive, Parkersburg, WV 26101, USA
E-mail: gtrajkovski@wvup.wvnet.edu

Georgi Stojanov
St. Cyril and Methodius University
Faculty of Electrical Engineering
PO Box 574, 91000 Skopje, Macedonia
E-mail: geos@cerera.etf.ukim.edu.mk

Abstract.

The problems of context and context-dependent representation were among the weakest points of the classical representationalist approach in AI, which used pregiven, context-free representations reflecting the designer's ontology. This paper uses original approaches, methods and results from fuzzy algebraic theory to define context-dependent representations and related terms within the interactivist approach. The framework used to illustrate the theory is extremely general and consists of an autonomous, linguistically incompetent agent. The agent possesses inborn drives that, alone or in combination, are active at different moments in time. The active drives dynamically change the agent's relevant perspective (representation) of the environment.

We formalize the concepts of context and designer's ontology via tense fuzzy algebraic structures, which are especially suitable for modeling processes based on partially ordered hierarchies. This approach complements our Interactivist-Expectative Theory on Agency and Learning (IETAL) presented in (Stojanov et al., 1997).
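On a deliberately toy reading of this idea (ours; fuzzy drive weights only, no tense operators, and all names below are invented), the drive-dependent perspective can be illustrated as a fuzzy union over the active drives:

# Toy illustration (hypothetical): active drives re-weight a fuzzy
# "perspective" over environmental features; the union is taken as max,
# the standard fuzzy-set union.

def perspective(active_drives, drive_relevance, features):
    return {
        f: max((drive_relevance[d].get(f, 0.0) for d in active_drives), default=0.0)
        for f in features
    }

drive_relevance = {  # hypothetical inborn drives and their feature weights
    "hunger":    {"food": 1.0, "obstacle": 0.3},
    "curiosity": {"novel_object": 0.9, "obstacle": 0.5},
}
features = ["food", "obstacle", "novel_object"]

print(perspective({"hunger"}, drive_relevance, features))
# -> {'food': 1.0, 'obstacle': 0.3, 'novel_object': 0.0}
print(perspective({"hunger", "curiosity"}, drive_relevance, features))
# -> {'food': 1.0, 'obstacle': 0.5, 'novel_object': 0.9}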

Reference.

Stojanov, G., Bozinovski, S., Trajkovski, G., “Interactionist-Expectative View on Agency and Learning”, IMACS Journal for Mathematics and Computers in Simulation, 44 (1997).

Interactivist Knowledge Representation in Recurrent Neural Robot Controllers:
Umwelt and Embodiment of Robots

Tom Ziemke
Department of Computer Science, University of Skövde
Box 408, SE-54128 Skövde, Sweden
+46-500-448330 (phone) / +46-500-448399 (fax)
tom@ida.his.se

Much research in cognitive science has recently been devoted to the study of the situated and embodied nature of intelligent behavior in general, and adaptive robots (or 'autonomous agents') in particular. Such systems are typically said to 'learn', 'develop' and 'evolve' in interaction with their environments. Hence, it has been argued that these self-organizing properties solve the problem of symbol or representation grounding in AI research, and thus place autonomous agents in a position of semiotic and cognitive-scientific interest.

This talk discusses recurrent neural robot control systems and the way they represent, in an interactivist sense, an agent's knowledge of its environment. Peschl (1996) has referred to the recurrent neural net (RNN) style of representation as "representation without representations", because the RNN's internal structures do not map or encode environmental structures in the traditional, encodingist sense of referential representation. Instead, the RNN dynamics generate functionally fitting behavior that is triggered and modulated by the environment, but determined by the network's own internal structure. The talk illustrates and analyzes in detail several examples of interactivist knowledge representation in recurrent neural robot controllers (cf. Ziemke, 1999, 2000, 2001).
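For concreteness, a minimal sketch (ours, not one of the controllers analyzed in the talk) of a recurrent controller whose internal state is perturbed by sensor input but whose trajectory is shaped by its own recurrent weights:

# Illustrative recurrent neural controller (toy dimensions and weights).
# Input triggers and modulates the hidden state; the recurrence, not any
# encoding of the environment, determines how that state evolves.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_hidden, n_motors = 4, 8, 2

W_in  = rng.normal(0, 0.5, (n_hidden, n_sensors))   # sensors -> hidden
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))    # hidden -> hidden
W_out = rng.normal(0, 0.5, (n_motors, n_hidden))    # hidden -> motors

h = np.zeros(n_hidden)  # internal state persists across time steps

def step(sensors, h):
    h = np.tanh(W_in @ sensors + W_rec @ h)
    return np.tanh(W_out @ h), h

for t in range(5):
    sensors = rng.uniform(-1, 1, n_sensors)  # stand-in for the environment
    motors, h = step(sensors, h)
    print(t, np.round(motors, 3))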

Furthermore, we discuss the relation to theoretical biologist Jakob von Uexküll's (1864-1944) early 20th-century theory of meaning (von Uexküll, 1940) and his Umwelt concept (von Uexküll, 1928, 1957), which describes the species-dependent embedding of every organism in its own conceptual/phenomenal world. Particularly relevant to RNN-style representation in autonomous agents is his concept of functional tone, which aimed to explain how (identical) objects can subjectively be attributed completely different meanings in different contexts. Here RNNs provide a model of how agents, through dynamical adaptation of their behavioral disposition, can realize internal structures that allow them to interact reliably with their environment, even in the absence of sufficient external scaffolds. As mentioned above, such internal structures can be considered representations in an interactivist sense because they are formed by an agent in interaction with its environment, not as an abstract model or encoding of the world, but for the purpose of guiding its own behavior. We further discuss the relation to the Kantian notions of Vorstellung and Darstellung (often both, incorrectly, translated as representation), to radical constructivist notions of knowledge as sensorimotor transformation knowledge (cf. Ziemke, 2001), as well as to modern interactivist notions of representation (Bickhard & Terveen, 1995).

Finally, the relation between von Uexküll's work and modern theories of embodied cognition and its biological basis, in particular the work of Maturana and Varela, is examined (cf. Ziemke & Sharkey, 2001), and different notions of embodiment in contemporary cognitive science are discussed (cf. Sharkey & Ziemke, 1998, 2000, 2001; Ziemke, 2001) in order to clarify the differences between artificial autonomous agents and living organisms.

References

Bickhard and Terveen (1995). Foundational Issues in Artificial Intelligence and Cognitive Science - Impasse and Solution. New York: Elsevier.

Peschl (1996). The representational relation between environmental structures and neural systems: Autonomy and environmental dependency in neural knowledge representation, Nonlinear Dynamics, Psychology and Life Sciences, 1(3).

Sharkey & Ziemke (1998). A consideration of the biological and psychological foundations of autonomous robotics. Connection Science, 10(3-4), 361-391.

Sharkey & Ziemke (2000). Life, Mind and Robots - The Ins and Outs of Embodied Cognition. In: Wermter & Sun (eds.) Hybrid Neural Systems. Heidelberg, Germany: Springer Verlag.

Sharkey & Ziemke (2001). Does a Robot have a Body? Cognitive Systems Research, to appear.

von Uexküll, Jakob (1928). Theoretische Biologie. Berlin: Springer Verlag.

von Uexküll, Jakob (1957). A stroll through the worlds of animals and men - a picture book of invisible worlds. In Schiller, editor, Instinctive Behavior - The Development of a Modern Concept, pages 5-80. New York: International Universities Press. Appeared also in Semiotica, 89(4):319-391.

von Uexküll, Jakob (1982). The Theory of Meaning. Semiotica, 42(1): 25-82. (English translation of Bedeutungslehre, 1940.)

Ziemke (1999). Remembering how to behave: Recurrent neural networks for adaptive robot behavior. In Medsker & Jain, editors, Recurrent Neural Networks: Design and Applications, pages 355-389. New York: CRC Press.

Ziemke (2000). On 'Parts' and 'Wholes' of Adaptive Behavior: Functional Modularity and Diachronic Structure in Recurrent Neural Robot Controllers. In Meyer et al., editors, From Animals to Animats 6 - Proceedings of the Sixth International Conference on the Simulation of Adaptive Behavior, pages 115-124. Cambridge, MA: MIT Press.

Ziemke (2001). The Construction of 'Reality' in the Robot: Constructivist Perspectives on Situated Artificial Intelligence and Adaptive Robotics. Foundations of Science, 6(1), (special issue on 'Radical Constructivism and the Sciences'), to appear.

Ziemke & Sharkey (2001). A stroll through the worlds of robots and animals: Applying Jakob von Uexküll's theory of meaning to adaptive robots and artificial life. Semiotica, special issue on the work of Jakob von Uexküll, to appear.
