Valuable engineering efforts are those oriented toward a clarification of the role that architecture plays in control systems and of how it is possible to attain constructability of complex systems by means of scalable design patterns. This approach is especially well captured in the multiresolutional approach fostered by the control design pattern that Meystel calls the elementary loop of functioning (Meystel, 2003). Of importance in relation with the ASys theory of meaning is the incorporation of value judgment mechanisms over this elementary loop (see Figure 7).
The elementary loop of functioning, when applied hierarchically, generates a multiresolutional ladder of meanings speciﬁcally focused on the controllable subspace of each control level. This approach partitions both the problem of meaning generation and the problem of action determination, leading to hierarchical control structures that have interesting properties of self-similarity.
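To make the pattern concrete, the hierarchical application of the elementary loop can be sketched as nested sense-model-judge-act loops, where the command chosen at a coarse resolution becomes the context for the finer loop below it. The following is a minimal Python sketch with invented models and value functions; none of the names correspond to Meystel's formal notation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ElementaryLoop:
    """One elementary loop of functioning: sense -> model -> judge -> act.
    All components here are illustrative placeholders."""
    resolution: str
    # world model: maps a raw observation to a state estimate at this resolution
    model: Callable[[float], float]
    # value judgment: scores a candidate command for the estimated state
    value: Callable[[float, float], float]
    commands: List[float] = field(default_factory=lambda: [-1.0, 0.0, 1.0])

    def step(self, observation: float) -> float:
        state = self.model(observation)
        # pick the command the value-judgment unit scores highest
        return max(self.commands, key=lambda u: self.value(state, u))

# A two-level multiresolutional ladder: the coarse loop's command
# becomes the context (setpoint) for the fine loop below it.
coarse = ElementaryLoop("coarse", model=lambda o: round(o),
                        value=lambda s, u: -abs(s + u))
fine = ElementaryLoop("fine", model=lambda o: o,
                      value=lambda s, u: -abs(s + 0.1 * u))

obs = 0.7
setpoint = coarse.step(obs)             # coarse decision on a low-resolution state
correction = fine.step(obs + setpoint)  # fine loop refines within that context
```

Each level only "means" what is expressible in its own controllable subspace: the coarse loop sees a rounded state, the fine loop a residual, which is the self-similarity the text describes.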
This core design pattern is extended in the concept of a control node of the RCS control architecture (Albus, 1992) (see Figure 8). Beyond the model of the world and the sensing and acting units, this architecture considers the existence of a value judgment unit that evaluates both static states and dynamic states derived from hypothetical plan execution.
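The evaluation of dynamic states derived from hypothetical plan execution can be sketched as rolling candidate plans through the world model and letting a value-judgment function score the resulting trajectories. The trivial world model, cost terms and plan set below are invented for illustration; this is not the actual RCS interface:

```python
def rollout(state: float, plan: list[float]) -> list[float]:
    """Simulate plan execution on a trivial world model in which each
    action increments the state. Returns the hypothetical trajectory."""
    traj = [state]
    for action in plan:
        traj.append(traj[-1] + action)
    return traj

def judge(trajectory: list[float], goal: float) -> float:
    """Value judgment over a hypothetical trajectory: penalize the final
    distance to the goal (static state) and the effort spent (dynamics)."""
    final_cost = abs(trajectory[-1] - goal)
    effort = sum(abs(b - a) for a, b in zip(trajectory, trajectory[1:]))
    return -(final_cost + 0.1 * effort)

# No plan is executed: the node selects by judging simulated outcomes.
plans = [[1.0, 1.0], [2.0, 0.0], [0.5, 0.5]]
best = max(plans, key=lambda p: judge(rollout(0.0, p), goal=2.0))
```

The point of the sketch is the separation of concerns: the world model only predicts, while the value judgment unit alone decides what a predicted outcome is worth.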
Fig. 8. The basic RCS node interchanges sensory and command flows with upper and lower nodes. While these may be considered meaningful flows, their meaning, sensu stricto, is limited to the originating node (Albus and Barbera, 2005).
6 Sapience: Generating others' meanings

To get to the core issue of the problem, i.e. the nature of sapience, we interpret it as the capability of generating meanings for others. Sapient agents can interpret the state of affairs and generate meanings that are valuable for other agents, i.e. meanings like those generated by transpersonal value judgment engines. The attribution of sapience is social in the sense that it happens when the sapient agent is able to generate meanings that are socially valid, i.e. valid not only for one agent but for a group of agents.
Generating meanings that are valid for more than one agent is beyond normal agent capabilities. That is what makes sapient agents special.
To some extent, sapient systems can voluntarily select and use shared ontologies (ontologies used by others) and prediction engines to generate meanings that are valid for them. This capability of shared-ontology selection and use is widely sought (Mizoguchi and Ikeda, 1996) in present-day research on distributed information systems (see, for example, the efforts related to the Semantic Web).
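A minimal sketch of why shared ontologies matter for meaning reconstruction: a receiver can only turn the symbols in a message into meanings if it resolves them against the same ontology the sender used. The ontology entries and symbol names below are invented for illustration:

```python
# A toy shared ontology mapping message symbols to concepts.
SHARED_ONTOLOGY = {
    "temp_high": "temperature above the safe operating threshold",
    "valve_3": "coolant inlet valve on line 3",
}

def interpret(message: list[str], ontology: dict[str, str]) -> list[str]:
    """Reconstruct meaning symbol by symbol; symbols missing from the
    receiver's ontology stay opaque."""
    return [ontology.get(sym, f"<unresolved:{sym}>") for sym in message]

msg = ["temp_high", "valve_3"]
meaning = interpret(msg, SHARED_ONTOLOGY)  # receiver shares the ontology
private = interpret(msg, {})               # receiver without it: no meaning
```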
Beyond this meaning-calculation capability, sapient systems usually manifest themselves through their explanatory capabilities; i.e. they can communicate the results of the calculation to the target agent. This may be seen as clearly rejecting those fashionable accounts of sapience as obscure manifestations of mental capability. Explanation is hence strongly related to the perception of sapience (see Craik (1943), Brewer et al. (1998) or Wilson and Keil (1998)).
Obviously this vision is strongly related to the psychological concept of "theory of mind", but it goes well beyond it, since the "theory of mind" is typically restricted to agent-to-agent interaction.
This sapience can be implicit or explicit (when the sapient system consciously uses the model of the other to calculate meanings). The latter amounts to a kind of 'deliberative' sapience.
7 Meanings in hive minds

Of major interest for us, who focus our research on the domain of complex distributed controllers, is the capability of exploiting this sapience mechanism to improve the integration level of a distributed controller.
We may wonder to what extent meaning integration can lead to mind federation and the emergence of a single, unified controller: a hive mind. If meaning is globally integrated, the different subsystems may become aware of what is going on in, and affecting, the other subsystems. A kind of distributed consciousness emerges.
Some people have considered the possibility of shared or collective consciousness even for humans (see for example Hardy (1998), Sheldrake (1988) or Laszlo (1996)).
From this perspective, individuals can jointly share a particular experience even when at a distance.
People dealing with practically independent environments can use others' previous experiences in similar situations to better understand the present state of affairs. These previous experiences are culturally shared and, when executed over similar virtual machines (Sloman and Chrisley, 2003), can generate similar interpretations of reality that coalesce into coherent social behaviors, which can be seen as a form of collective understanding.
Perhaps we can exploit this kind of social phenomenon in the implementation of advanced cognitive conscious modular controllers.
An agent's meanings are not static interpretations of agent-perceived data but capture future trajectories of the agent in its state space in a particular context. This is strongly related to Putnam's causal theory of meaning (Putnam, 1975).
Sapient systems are agents that have the capability to generate meanings for others, i.e. they can assess situations as other agents would and suggest courses of action based on those agents' sets of values.
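This "thinking for others" can be sketched as an agent that holds models of other agents' value functions and evaluates the same set of options under each of them. The agents, options and value functions below are invented for illustration:

```python
from typing import Callable, Dict, List, Tuple

Option = Tuple[float, float]  # (speed, safety margin), a toy state description

# Models of other agents' values held by the sapient agent.
others_values: Dict[str, Callable[[float, float], float]] = {
    "cautious_agent": lambda speed, margin: margin - 0.5 * speed,
    "hurried_agent": lambda speed, margin: speed - 0.5 * margin,
}

def advise(options: List[Option],
           values: Dict[str, Callable[[float, float], float]]) -> Dict[str, Option]:
    """Suggest, per modeled agent, the option that agent itself would prefer,
    i.e. generate a meaning that is valid for the other, not for oneself."""
    return {
        name: max(options, key=lambda opt: value(*opt))
        for name, value in values.items()
    }

options = [(1.0, 3.0), (4.0, 1.0)]
advice = advise(options, others_values)
```

The same situation yields different advice per agent because the assessment runs over each agent's own value function, which is exactly what distinguishes this from the agent's ordinary self-directed value judgment.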
Fig. 9. Multi-agent systems can only operate if ontologies are shared, so that meaning can be reconstructed from messages coming from other agents.
Wisdom is hence nothing categorically different from what is available in conventional agent architectures, but a particular capability of an agent to use its own resources to think for others. Wisdom is thus attributed by others due to this capability, which goes beyond usual agent capabilities.
This understanding of meaning is strongly related to recent theories of consciousness and leads us to the possibility of achieving consciousness states in control systems (Sanz and Meystel, 2002).
This approach to explicit management of meanings is currently under implementation in the SOUL Project (http://www.aslab.org/public/projects/SOUL/) in the laboratory of the authors.
9 Acknowledgements

The authors would like to acknowledge the support of the Spanish Ministry of Education and Science through the DPI C3 grant and the support of the European Commission through the IST ICEA grant.
References

Albus, J. S. (1992). A reference model architecture for intelligent systems design. In Antsaklis, P. and Passino, K., editors, An Introduction to Intelligent and Autonomous Control, pages 57–64. Kluwer Academic Publishers, Boston, MA.
Albus, J. S. and Barbera, A. J. (2005). RCS: A cognitive architecture for intelligent multi-agent systems. Annual Reviews in Control, 29(1):87–99.
Antsaklis, P. and Passino, K. E. (1993). An Introduction to Intelligent and Autonomous Control. Kluwer Academic Publishers.
Brewer, W. F., Chinn, C. A., and Samarapungavan, A. (1998). Explanation in scientists and children. Minds and Machines, 8:119–136.
Clark, A. (1998). Twisted tales: Causal complexity and cognitive scientiﬁc explanation. Minds and Machines, 8:79–99.
Combs, A. (2000). Book review: Networks of meaning: The bridge between mind and matter, by Christine Hardy. Nonlinear Dynamics, Psychology, and Life Sciences, 4(1):129–134.
Craik, K. (1943). The Nature of Explanation. Cambridge University Press, London.
Freeman, W. J. (1997). A neurobiological interpretation of semiotics: meaning vs. representation. In 1997 IEEE International Conference on Systems, Man, and Cybernetics, volume 2, pages 12–15.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Houghton Mifﬂin, Boston.
Glenberg, A. M., Robertson, D. A., Jansen, J. L., and Johnson-Glenberg, M. C. (1999). Not propositions. Cognitive Systems Research, 1(1):19–33.
Griswold, W. and Notkin, D. (1995). Architectural tradeoffs for a meaning-preserving program restructuring tool. IEEE Transactions on Software Engineering, 21(4):275– 287.
Hardy, C. (1998). Networks of Meaning: The Bridge Between Mind and Matter. Praeger/Greenwood Publishing Group, Westport, CT.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42:335–346.
Landauer, C. (1998). Data, information, knowledge, understanding: computing up the meaning hierarchy. In 1998 IEEE International Conference on Systems, Man, and Cybernetics, volume 3, pages 2255–2260.
Laszlo, E. (1996). The whispering pond. Element Books, Rockport, MA.
Mayorga, R. V. (2005). Towards computational sapience (wisdom): A paradigm for sapient (wise) systems. In IEEE International Conference on Integration of Knowledge Intensive Multi-agent Systems, Boston.
Meystel, A. (2001). Multiresolutional representation and behavior generation: How do they affect the performance of intelligent systems. In Tutorial at ISIC’2001, Mexico D.F.
Meystel, A. M. (2003). Multiresolutional hierarchical decision support systems. IEEE Transactions on Systems, Man and Cybernetics, 33(1):86–101.
Mizoguchi, R. and Ikeda, M. (1996). Towards ontology engineering. Technical Report AI-TR-96-1, The Institute of Scientiﬁc and Industrial Research, Osaka University.
Van Orden, G. C., Moreno, M. A., and Holden, J. G. (2003). A proper metaphysics for cognitive performance. Nonlinear Dynamics, Psychology, and Life Sciences, 7(1):49–60.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Cambridge University Press, Cambridge.
Pustejovsky, J. (1990). Perceptual semantics: the construction of meaning in artiﬁcial devices. In 5th IEEE International Symposium on Intelligent Control, volume 1, pages 86–91.
Putnam, H. (1975). Mind, Language and Reality. Number 2 in Philosophical Papers. Cambridge University Press, Cambridge.
Sanz, R. (1990). Arquitectura de Control Inteligente de Procesos. PhD thesis, Universidad Politécnica de Madrid.
Sanz, R., Matía, F., and Galán, S. (2000). Fridges, elephants and the meaning of autonomy and intelligence. In IEEE International Symposium on Intelligent Control, ISIC'2000, Patras, Greece.
Sanz, R. and Meystel, A. (2002). Modeling, self and consciousness: Further perspectives of ai research. In Proceedings of PerMIS ’02, Performance Metrics for Intelligent Systems Workshop, Gaithersburg (MD), USA.
Shanon, B. (1988). Semantic representation of meaning: A critique. Psychological Bulletin, 104:7–83.
Sheldrake, R. (1988). The presence of the past. Random House, New York.
Sloman, A. and Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4-5):133–172.
Steels, L. (1998). The origins of syntax in visually grounded robotic agents. Artiﬁcial Intelligence, 103:133–156.
Tien, J. M. (2003). Toward a decision informatics paradigm: A real-time, information-based approach to decision making. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 33(1):102–112.
Tuomi, I. (1999). Data is more than knowledge. In Proceedings of the 32nd Hawaii International Conference on System Sciences.
von Uexküll, J. (1982). The theory of meaning. Semiotica, 42(1):25–82.
Wilson, R. A. and Keil, F. (1998). The shadows and shallows of explanation. Minds and Machines, 8:137–159.
Ziemke, T. (2002). On the epigenesis of meaning in robots and organisms. Sign Systems Studies, 30:101–111.
Ziemke, T. and Sharkey, N. E. (2001). A stroll through the worlds of robots and animals: Applying Jakob von Uexküll's theory of meaning to adaptive robots and artificial life. Semiotica, 134(1-4):701–746.
Zlatev, J. (2001). The epigenesis of meaning in human beings, and possibly in robots. Minds and Machines, 11.