InterJournal Complex Systems, 1289
Status: Accepted
Manuscript Number: 1289
Submission Date: 2004
Evolutionary Dynamics of Knowledge
Author(s): Carlos Parra, Masakazu Yano

Subject(s): CX.0

Abstract:

This study discusses the human version of an artificial agent's interpretative devices (Arthur, 1997) by presenting a definition of interpretants that follows Varela's (1999) neurophenomenological perspective coupled with a cybernetic understanding of Peircean semiotics: experiences made of alternate bundles of embodied experiences (distinctions). This definition of interpretants is useful from a human development perspective, in which capabilities are composed of alternate bundles of choices or functionings (an individual's beings and doings; Sen, 1993), these choices stand for embodied distinctions (Parra and Yano, 2002), and the distinctions in turn stand for embodied experiences (Parra and Yano, 2004); but it is more useful still because this approach uses liberty instead of utility for economic decision-making, replacing widely used traditional assumptions (i.e., individual rationality) and thereby adopting recent behavioral and experimental findings. In particular, this paper proposes a learning model (or inner-world reconstructing model) that can overcome neoclassical obstacles and increase the predictive power of computational economics by letting agents' knowledge evolve by itself, irrespective of globally specified goals and even of individual motives of behavior, using simultaneous (or parallel) Genetic Algorithms (GAs), each with different general specifications, to evolve a single agent's learning strategy in a multi-agent setting. To implement our definition of interpretants computationally, artificial agents would need to be designed so as to: experience something; distinguish the source of this experience; ground what they are experiencing; embody the new experience as a distinction when it is eventually employed; and self-provoke random interpretations accounting for the effects of chance, leading to "misinterpretations" that could end up having positive effects on the performance of an agent or of the system as a whole. Moreover, this single-agent inner-world reconstruction model, when used in a multi-agent scenario, could help scrutinize the transition from free-will-guided agents to rule-based interactions (i.e., cooperation and/or self-organization). Even though we do not provide detailed specifications of how to implement the learning model or put it into practice, we do give a real-life perspective on what the outcomes of such an exercise could be in institutional terms (North, 1990), pointing to the evolutionary dynamics of experiences, distinctions, and choices. This is done so as to contribute to the cognitive debate around agent-based learning models, which in our view should be about methods for handling variation (inside learning algorithms, between algorithms, among agents, and for systems in general).
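
The abstract describes the learning model only in outline, and the paper itself gives no implementation specification. As a minimal illustrative sketch of the parallel-GA idea — several GAs, each with different general specifications, simultaneously evolving candidate learning strategies for one agent — something like the following Python could serve; the bit-string strategy encoding, the parameter values, and the fitness function are all placeholder assumptions (the paper explicitly avoids globally specified goals, so the fitness here merely stands in for whatever evaluation the agent's inner world provides):

```python
import random

def run_ga(fitness, genome_len=16, pop_size=30, mutation_rate=0.01,
           crossover_rate=0.7, generations=50):
    """One conventional GA over bit-string strategies. The keyword
    parameters are the 'general specifications' that differ between
    the parallel GAs; all values here are illustrative."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]       # one-point crossover
            else:
                child = list(a)
            # Bit-flip mutation.
            children.append([1 - g if random.random() < mutation_rate else g
                             for g in child])
        pop = children
    return max(pop, key=fitness)

# Placeholder fitness: stands in for the agent's own evaluation.
def fitness(genome):
    return sum(genome)

# GAs with different general specifications, each proposing a learning
# strategy for the same agent; run sequentially here for brevity, though
# the paper envisions them running simultaneously (in parallel).
specs = [dict(pop_size=20, mutation_rate=0.05),
         dict(pop_size=50, mutation_rate=0.01),
         dict(pop_size=30, mutation_rate=0.10)]
candidates = [run_ga(fitness, **spec) for spec in specs]
best_strategy = max(candidates, key=fitness)
```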
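Similarly, the five design requirements for interpretants (experience, distinguish the source, ground, embody as a distinction, self-provoke random interpretations) admit many readings. The sketch below is one hypothetical interpretation, not the authors' design: class and method names are invented, and "misinterpretation" is modeled as simple random re-association.

```python
import random

class InterpretantAgent:
    """Illustrative agent that builds interpretants from experiences,
    loosely following the five requirements in the abstract."""

    def __init__(self, misinterpretation_rate=0.05):
        self.distinctions = {}  # source -> list of embodied experiences
        self.misinterpretation_rate = misinterpretation_rate

    def perceive(self, source, signal):
        """(1) Experience something and (2) distinguish its source."""
        experience = self.ground(source, signal)
        # (5) Occasionally self-provoke a random interpretation; such
        # "misinterpretations" may by chance improve the agent's (or the
        # system's) performance.
        if random.random() < self.misinterpretation_rate:
            experience = self.random_interpretation(signal)
        self.embody(source, experience)

    def ground(self, source, signal):
        """(3) Ground the experience: relate the raw signal to what is
        already embodied for this source (a trivial association here)."""
        prior = self.distinctions.get(source, [])
        return {"signal": signal, "context": list(prior[-3:])}

    def embody(self, source, experience):
        """(4) When the experience is employed, embody it as a distinction."""
        self.distinctions.setdefault(source, []).append(experience)

    def random_interpretation(self, signal):
        """Chance effect: re-associate the signal with another source's history."""
        if self.distinctions:
            other = random.choice(list(self.distinctions))
            return {"signal": signal, "context": list(self.distinctions[other][-3:])}
        return {"signal": signal, "context": []}

# Example use: one agent perceiving signals from three hypothetical sources.
agent = InterpretantAgent()
for t in range(10):
    agent.perceive(source=f"agent-{t % 3}", signal=random.gauss(0, 1))
```

In a multi-agent run, many such agents perceiving one another's actions would provide the setting in which the transition from free-will-guided behavior to rule-based interaction could be observed.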
