A real-time agent system perspective of meaning and sapience
Ricardo Sanz, Julita Bermejo, Ignacio López, Jaime Gómez
Autonomous Systems Laboratory, Universidad Politécnica de Madrid, Spain
Wisdom and sapience have traditionally been considered desirable traits in humans, but use of the terms is decaying, perhaps due to a rising post-modern relativism that lessens the value of others' knowledge. This chapter proposes an interpretation of sapience in terms of meaning generation and knowledge exploitation in social groups of knowledge-based agents.
We will describe sapient agents as those that are able to generate useful meanings for other agents, beyond their own capability to generate self-meanings. This makes sapient agents especially valuable entities in agent societies: they provide reliable third-person meaning generation across agents, a form of functional redundancy that enhances individual and social robustness as well as global performance. This approach to meaning generation is pursued by our research group in the context of the ASys theory of autonomous cognitive systems.
1 Introduction

Knowledge-based systems have been a matter of research and development for years: from the logic-based problem solvers of the sixties to the expert systems of the eighties and contemporary model-based systems, the nature of exploitable knowledge has been a core issue in artificial intelligence. Building well-performing systems seems to require the codification of suitable knowledge, in suitable forms, for the agent's activity.
In a sense, there has been a growing awareness that having knowledge, whatever its form, is not enough. To perform adequately, agents need to acquire an understanding of their action context, so that they can rationally decide on the proper action to take and the proper knowledge to use in deciding about it. This means that agents should interpret the information coming from their sensors and generate meanings from it, to be used in the action decision-making process. This issue of situation awareness has been raised many times and even addressed specifically in the design of intelligent system architectures (see for example Figure 1).
Fig. 1. Two-level phasing of situated intelligent systems: 1) plant situation awareness and 2) control action generation; from Sanz (1990).
While this brief analysis leads directly into the old debate about data, information, knowledge and meaning, we will not contribute extensively to it; it will, however, be necessary to clarify some of the terms used in the analysis of wisdom and sapience that follows (e.g. intelligence, meaning or knowledge).
Mayorga (2005) proposed a differentiation between intelligence and wisdom based on the inner architecture of action. He sees “Intelligence” as related to an “Analysis” → “Action” process, whereas “Wisdom” is seen as related to an “Analysis” → “Synthesis” → “Action” process.
Although we are not going to enter the debate about the definition of intelligence (see Sanz et al. (2000) for a partial account of our views, which we can summarize as utility maximisation in knowledge-based action), it is necessary to analyze the nature of the knowledge involved in action generation and to propose a model of third-person meaning generation that provides a simple interpretation of the concepts of “wisdom” and “sapience”.
To achieve this objective, we will first present a model of first-person meaning generation. Next, we apply this model to a cross-agent meaning generation process.
Other authors (Tien, 2003) consider that wisdom is just a further step in the data → information → knowledge ladder (see Figure 2). Or as Landauer puts it in his meaning hierarchy, the ladder is data → information → knowledge → understanding (Landauer, 1998).
While meaning (semantics) is critical for purposeful action, few psychological theories of mind have taken the study of meaning as the foundation of a working theory of the mind (Combs, 2000).
Hardy (1998) argues that meaning is generated by the continuous closed causal link between an internal context (what she calls semantic constellations) and an external context (a meaningful environment).
Others argue for a theory of meaning based on embodiment (e.g., Barsalou, 1993; Glenberg, 1997; Lakoff, 1987), the idea
Fig. 2. Moving from information to wisdom according to Tien (2003).
that cognition is intimately connected with the functioning of the body (Glenberg et al., 1999).
2 The nature of meaning

Beyond classical accounts of life-related information and meaning generation (Oyama, 1985), we will focus on
cognitive agents, with the (perhaps hopeless) purpose of having a theory applicable both to the analysis of extant cognitive agents and to the engineering of high-performance artificial agents, such as those found controlling the technical systems of today's world.
Some authors have proposed that meaning is just a list of features (like a frame in classical AI), but there are compelling arguments from different sources against this interpretation (see for example Shanon (1988)). Another classic alternative was to consider that the meaning of symbols is a semantic network; but this leads to a recursive search for meaning that finally ends in the symbol grounding problem (Harnad, 1990). A third solution is based on symbols taking on meaning by referring to entities outside the agent; that is, perception is seen as the core engine of meaning assignment to internal symbols. This corresponds to the views of interactivist schools;
but the recurrent discussion about the necessity of embodiment will disappear when constructors become aware that minds necessarily run on virtual machines and hence the existence and awareness of an extant body is both unavoidable and useful for enhancing behavior.
Fig. 3. The in and out paths of a situated system show the range of decision-making activities coupled with the different information levels.
In most of these interpretations, however, there is a big pending issue: they usually lack support for a core feature of meanings, namely that meanings can capture the dynamics of entities in their contexts. Meanings are not constrained to statics; they also express change (actual or potential).
If we can say that X captures the meaning of a concrete piece of information, it is because X provides a sensible account of the relation of the agent with the originator (the causal agent) of the information, in present and potentially future conditions.
As Meystel (2001) says, “the first fundamental property of intelligent systems architectures (the property of the existence of intelligence) can be visualized in the law of forming the loop of closure” (see Figure 4). In intelligent systems, this loop of closure is composed of the world, sensors, world models and behavior generators, the latter three being parts of the agent. A fourth component is necessary to provide the goal-centered behavior of agents: the value judgment engine.
If we consider how behavior is generated, the value judgment component of the RCS architecture is critical (see Figure 7). But this value judgment should not be made over raw or filtered sensor data (i.e. judging the present state of affairs), nor over the agent's present mental state alone. Value judgment is necessarily made over potential futures derived from the agent's present mental state. It is this value judgment of potential future states that assigns meanings to facts of reality.
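The loop of closure extended with a value judgment engine can be sketched in code. The following is a minimal, illustrative Python sketch (all function names and the scalar "plant" state are invented for the example, not part of the RCS or ASys specifications); its point is that value judgment scores predicted futures, not raw sensor data.

```python
# Illustrative sketch: sensors -> world model -> predictor -> value judgment.
# All names and the toy scalar plant are assumptions made for this example.

def sense(world):
    """Sensors: return an observation of the world state."""
    return world["observable"]

def update_model(model, observation):
    """World model: fold the new observation into the agent's state estimate."""
    model["estimate"] = observation
    return model

def predict_futures(model, actions):
    """Predictor: one candidate future state per candidate action."""
    return {a: model["estimate"] + effect for a, effect in actions.items()}

def value_judgment(futures, utility):
    """Value judgment: score *predicted futures*, not present sensor data."""
    return max(futures, key=lambda a: utility(futures[a]))

def loop_of_closure(world, model, actions, utility):
    obs = sense(world)
    model = update_model(model, obs)
    futures = predict_futures(model, actions)
    return value_judgment(futures, utility)  # behavior generation

# Example: a scalar plant state that the agent wants near zero.
world = {"observable": 3.0}
model = {"estimate": 0.0}
actions = {"decrease": -1.0, "hold": 0.0, "increase": +1.0}
best = loop_of_closure(world, model, actions, utility=lambda s: -abs(s))
# best -> "decrease", the action whose predicted future scores highest
```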
Fig. 4. The elementary loop of functioning —loop of closure— as described by Meystel (2003).
3 Meaning generation in the ASys model

The previous analysis shows that the core elements of a meaning generation engine are a predictor and a state value calculator. This is what our brain does all the time to generate meanings: it evaluates the causal impact of what we see. Meanings are generated by means of temporal utility functions.
A real-time time/utility function expresses the utility to the system of an action's completion as a function of its completion time. This idea sits at the core of real-time systems engineering, i.e. the engineering of systems that have requirements related to the passage of time.
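Such a function can be written down directly. The sketch below assumes one common illustrative shape (full utility up to a soft deadline, then linear decay to zero at a hard deadline); real time/utility functions can take many other forms, so this is an example, not a definition.

```python
# Illustrative time/utility function (TUF): the utility of completing an
# action as a function of its completion time. The soft/hard deadline
# shape below is an assumed example form.

def time_utility(completion_time, soft_deadline, hard_deadline, max_utility=1.0):
    if completion_time <= soft_deadline:
        return max_utility          # completing early enough: full utility
    if completion_time >= hard_deadline:
        return 0.0                  # too late: no utility at all
    # Between the two deadlines, utility decays linearly to zero.
    span = hard_deadline - soft_deadline
    return max_utility * (hard_deadline - completion_time) / span

time_utility(5, soft_deadline=10, hard_deadline=20)   # -> 1.0
time_utility(15, soft_deadline=10, hard_deadline=20)  # -> 0.5
time_utility(25, soft_deadline=10, hard_deadline=20)  # -> 0.0
```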
The meaning of a concrete perceived fact (external or internal) is the partitioning it induces on the potential future trajectories of the agent in its state space. For example, if I see that it is raining outside, this fact divides all my potential futures into two sets: in one I stay dry; in the other I get wet. This partition is the meaning of the fact “it's raining outside”.
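This partition view of meaning can be made concrete in a few lines. The following Python sketch (the trajectories and the predicate are invented to mirror the rain example) partitions candidate futures by whether the consequence of the perceived fact holds in them.

```python
# Illustrative sketch: the meaning of a fact as a partition of the agent's
# potential future trajectories. Trajectories and predicate are assumptions.

def meaning_of(fact, futures):
    """Partition candidate futures by whether the fact's consequence holds."""
    partition = {True: [], False: []}
    for trajectory in futures:
        partition[fact(trajectory)].append(trajectory)
    return partition

# Potential futures, each a sequence of states the agent might pass through.
futures = [
    ("home", "office_by_car"),    # I stay dry
    ("home", "office_on_foot"),   # I get wet, given that it is raining
    ("home", "home"),             # I stay dry
]

def i_get_wet(trajectory):
    return "office_on_foot" in trajectory

# The fact "it's raining outside" is meaningful exactly through this split:
partition = meaning_of(i_get_wet, futures)
# partition[True]  -> futures in which I get wet
# partition[False] -> futures in which I stay dry
```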
This interpretation of meaning as related to the dynamics of futures can be found in many different areas, for example, neurobiology, psychology or even software engineering.
Fig. 6. Situated cognitive agents exploit the interaction with the world to maximise utility; this is achieved by driving such interaction with models of the reality (the plant under control, in artificial systems) that constitute the very knowledge of the agent.
To help in this calculation of futures and future values, situated cognitive agents exploit the interaction with the world (and with other cognitive agents) to maximise behaviour utility, driving such interaction through adaptive models of the reality they are dealing with (see Figure 6).
These models of a part of the reality (the plant under control, in artificial systems) constitute the core of the agent's real-world knowledge, and are the very foundation of meaning calculation.
4 Other analyses of meaning generation

4.1 Freeman's mental dynamics
Walter Freeman identifies meanings with “the focus of an activity pattern that occupies the entire available brain” (Freeman, 1997). From his point of view there are no representations in the brain, only meanings. The brain is an engine for meaning generation, based on brain perceptual dynamics, and simultaneously an engine for action generation based on the same type of dynamics.
4.2 Gibson’s affordance theory
According to the ecological psychologist James Gibson (Gibson, 1979), an affordance is an activity that is made possible (an action possibility, so to say) by some property of an object. A valve affords flow control by being of the right shape and size, and by sitting in the proper place in the pipe where one needs to reduce flow.
In some contexts, affordances are classified into three categories, according to whether they are based on sensory (unlearned sensory experience), perceptual (learned categorizations of sensory experience) or cognitive (thought-based) processes. There are even considerations about the possibility of non-aware affordances.
The most classic example of affordances involves doors and their handles (in buildings, cars, etc.), but the world of control systems is full of these entities: actuators are embodiments of affordances.
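The notion of an affordance as an action possibility grounded in object properties can be sketched as data. The names and the property catalogue below are illustrative assumptions made for this example, not Gibson's formalism; the valve case follows the text above.

```python
# Illustrative sketch: affordances as actions made possible by object
# properties. The catalogue and property names are invented for the example.

from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    properties: set = field(default_factory=set)

def affordances(obj, catalogue):
    """Actions the object affords: those whose required properties it has."""
    return [action for action, required in catalogue.items()
            if required <= obj.properties]

# Which properties each action requires of an object.
catalogue = {
    "control_flow": {"right_shape", "in_pipe"},
    "open_manually": {"has_handle"},
}

valve = Obj("valve", {"right_shape", "in_pipe", "has_handle"})
affordances(valve, catalogue)  # -> ['control_flow', 'open_manually']
```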
4.3 Griswold's program meaning
In the area of tool-based software engineering, programmers look for automated methods for transforming program specifications into final deployable packages. This is expected to solve the handcrafting bottleneck of manual programming.
See for example, the work of Griswold and Notkin (1995) in the ﬁeld of computer program transformation.
This implies having meaningful transformations of programs between different representations. The MDA proposal, for example, considers transformations from UML-based Platform Independent Models into platform-dependent models, and then into concrete implementation-oriented languages (IDL, C++, etc.).
All these transformations should, however, be meaning-preserving. But program meaning is not related to the actual wording of the code (which, in model-centric software development, may not even exist in some phases) but to the concrete program functionality (the program behavior) when executed over the appropriate platform, i.e. the platform that provides the required abstractions that the application was built upon.
Fig. 7. The elementary loop of functioning of Meystel, augmented with a value judgment unit to generate meanings; this design matches what the ASys theory proposes about meaning generation. This structure corresponds to the elementary control node of the RCS intelligent control architecture (Albus, 1992).
5 Meaning in control systems
From the former analysis, we can see that meaning cannot be associated with an isolated piece of information, but with a set composed of the information, the agent for which the information is meaningful, and the context in which the agent operates. To summarize, the meaning of a piece of information is agent- and context-dependent, something well known in psychology (Clark, 1998).
Most researchers' creatures manipulate meanings without an explicit theory of them, by means of ad hoc meaning generation processes embedded in their control architectures. These are based on a particular, hidden ontology and a value system that is implicit in the architecture (see for example the work of Steels (1998)).