Department of Computer Science


  • The objective of this research is to implement metacognitive computing capability in artificial agents. Metacognition comprises both meta-level control of cognitive activities and the introspective monitoring of those activities to evaluate and explain them. Meta-level control is the ability of an agent to efficiently trade off its resources between object-level actions like learning, reasoning and planning, and ground-level actions like perceiving and performing, so as to maximize the quality of its decisions. Meta-level control allows agents to decide when to stop thinking and start acting. Introspective monitoring is an awareness of one’s own thoughts and the factors that influence thinking. It allows agents to gather the information needed to make effective meta-level control decisions or to explain failed reasoning.

    Closely related to this research is the need to improve the perturbation tolerance of real-world agents, that is, their ability to detect and recover from errors or unexpected changes. A person who has never been on ice before may slip and fall a few times, but will soon learn to adjust their gait to avoid falling. The ability to notice when things go wrong and to adapt is essential for an agent to survive and to improve. This project investigates how to create agent software that can learn and repair itself, so that agents adapt to new situations rather than fail completely.


    We have developed a model of metacognition that uses expectation as a switch that turns on metacognitive control of an agent’s behavior. We have shown that adding a metacognitive component can improve the performance of agents operating in dynamic environments ranging from air traffic control simulators to human-machine natural language interfaces. We have also used a time-sensitive and contradiction-tolerant reasoning mechanism on explicit representations of beliefs, desires, intentions, expectations and observations, to monitor and correct actions of an agent operating in a setting where concurrent actions are allowed. We have implemented metacognition in diverse systems to monitor and fix problems in natural language understanding as well as in task execution.
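    The expectation-as-switch idea can be illustrated with a minimal sketch (the class and method names here are illustrative, not the lab's actual implementation): the agent records what it expects each action to produce, and an observation that violates the expectation switches on meta-level control.

```python
# Minimal sketch of expectation-driven metacognition (illustrative names,
# not the actual system described above).

class Agent:
    def __init__(self):
        self.expectation = None   # what the agent expects to observe next
        self.log = []             # introspective trace of control decisions

    def act(self, action, expected_outcome):
        """Perform a ground-level action and record the expected outcome."""
        self.expectation = expected_outcome
        return action

    def observe(self, outcome):
        """An expectation violation acts as the switch that turns on
        meta-level control; otherwise the agent simply keeps acting."""
        if self.expectation is not None and outcome != self.expectation:
            self.log.append(("violation", self.expectation, outcome))
            return self.meta_control(outcome)
        return "continue-acting"

    def meta_control(self, outcome):
        """Meta-level response: stop acting and revise the plan."""
        return "revise-plan"

agent = Agent()
agent.act("push-door", expected_outcome="door-open")
print(agent.observe("door-closed"))   # violation -> "revise-plan"
print(agent.observe("door-open"))     # expectation met -> "continue-acting"
```

    The introspective trace kept in `log` stands in for the monitoring side of the model: it is the record a meta-level reasoner could later inspect to explain which expectation failed and why.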

    We have strong collaborations with the Active Logic, Metacognitive Computation and Mind Lab at UMD and the Cognitive Informatics & Cognitive Computing Lab at the Universidad de Córdoba.

  • The objective of this research is to study the effects of simulated emotions on the behavior and performance of artificial agents that work alone or in groups. Artificial agents need to adapt to changes in the environment in order to function effectively. Some changes may be safely ignored, whereas others may require changes to plans, goals and behaviors. When working in groups, agents need to determine whether their partners are competitive, destructive or cooperative in order to decide with whom to collaborate to achieve their goals. This research explores how modeling emotions can help an agent focus on what is important for achieving its goals.


    As part of our research, we have shown how modeling emotions in artificial agents can help them adapt their behavior to changes in the environment. Each agent works on its goals or intentions and monitors their progress. Achieving a goal moves the agent's emotional state toward a more pleasant level. Goals that are not met within a set amount of time result in expectation violations, which negatively affect the agent's emotion. Hostile agents that prevent an agent from achieving its goals can also negatively impact its emotion. We have shown how implementing emotions enables an agent to navigate a two-dimensional plane more efficiently and successfully.
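    A minimal sketch of this kind of emotion dynamics (the class name, update amounts and the single valence variable are all illustrative assumptions, not the published model): goal achievement moves valence toward the pleasant end, while expectation violations and hostile encounters move it the other way.

```python
# Illustrative sketch of emotion dynamics in an agent (hypothetical
# names and numbers, not the actual model described above).

class EmotionalAgent:
    def __init__(self):
        self.valence = 0.0  # unpleasant (-1.0) to pleasant (+1.0)

    def _clamp(self):
        self.valence = max(-1.0, min(1.0, self.valence))

    def goal_achieved(self):
        self.valence += 0.2   # success moves emotion toward pleasant
        self._clamp()

    def expectation_violated(self):
        self.valence -= 0.3   # a goal that timed out hurts emotional state
        self._clamp()

    def hostile_encounter(self):
        self.valence -= 0.1   # interference by hostile agents
        self._clamp()

a = EmotionalAgent()
a.goal_achieved()          # valence rises to 0.2
a.expectation_violated()   # valence drops to about -0.1
```

    The resulting valence could then bias behavior selection, for example making a low-valence agent avoid partners it has recorded as hostile.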

  • This research focuses on analyzing time series data to detect seasonality even when the length of seasons varies or when there are multiple levels of seasonality, such that seasons may contain sub-seasons, sub-sub-seasons and so on. Seasonality detection is important for autonomous agents to choose behaviors appropriate to the season and to differentiate seasonal changes from chaotic changes in their normal operating environment.


    We have developed an object (Kasai) that provides a seasonal pattern detection service that can be embedded within a client application. Kasai accepts a sequence of symbols as its input and represents the sequence as a computational graph in which nodes represent the observed symbols and edges the order in which they occurred. Cycles within the graph indicate seasonality and allow the next symbol to be predicted from what has been observed so far.
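    The graph idea can be sketched as follows (a simplified illustration, not Kasai's actual code): each observed symbol becomes a node, each consecutive pair adds a directed edge, and the outgoing edges of the current symbol serve as the prediction of what comes next; a repeating season shows up as a cycle in this graph.

```python
# Simplified sketch of a symbol-sequence graph for seasonality
# (illustrative only, not the Kasai implementation).

from collections import defaultdict

class SeasonGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # symbol -> symbols seen to follow it
        self.prev = None

    def observe(self, symbol):
        """Add the symbol as a node and an edge from the previous symbol."""
        if self.prev is not None:
            self.edges[self.prev].add(symbol)
        self.prev = symbol

    def predict(self, symbol):
        """Predict possible next symbols from the outgoing edges."""
        return sorted(self.edges[symbol])

g = SeasonGraph()
for s in "spring summer fall winter spring summer".split():
    g.observe(s)

print(g.predict("spring"))   # ['summer']
print(g.predict("winter"))   # ['spring'] -- this edge closes the seasonal cycle
```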

  • Agents that must operate in unforeseen environments need the ability to experiment and learn autonomously. Such learning may include observational learning as well as experiential learning. This research focuses on designing and developing components that allow such human-free learning and behavior formation.


    We have created the design of an agent that learns patterns from its observations and chooses behaviors that are conducive to maintaining its homeostasis. The observed patterns form the basis for predicting what to expect and for noticing changes in its environment. We are in the process of implementing the different components of this design.
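    Since the design is still being implemented, the following is only a hypothetical sketch of homeostasis-driven behavior selection: the agent picks whichever behavior it predicts will bring an internal variable (say, energy) closest to its set point. All names and numbers are illustrative.

```python
# Hypothetical sketch of homeostasis-driven behavior selection
# (the design above is still under implementation; everything here
# is illustrative).

def choose_behavior(state, behaviors, setpoint=0.5):
    """Pick the behavior whose predicted effect brings the internal
    variable closest to the homeostatic set point."""
    def predicted_error(behavior):
        name, effect = behavior
        return abs((state + effect) - setpoint)
    return min(behaviors, key=predicted_error)[0]

# Each behavior is paired with its learned, predicted effect on energy.
behaviors = [("rest", +0.3), ("explore", -0.2), ("eat", +0.5)]
print(choose_behavior(0.2, behaviors))  # "rest": 0.2 + 0.3 hits the set point
```

    The predicted effects would come from the learned observation patterns, which is what ties behavior formation back to expectation and change detection.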

  • Autonomous agents need to decide what data is relevant, how to store relevant data effectively, how to transfer sensor data, and how to integrate data coming from multiple sensors. A loud sound in the sound sensor reading and a sudden flash of light in the camera sensor reading at the same time are more likely to indicate an explosion than either event occurring alone.
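    The explosion example can be sketched as a simple evidence-fusion calculation (the prior and likelihood numbers are made up for illustration): treating the two sensor readings as independent evidence, a naive Bayes odds update shows that the joint detection supports the explosion hypothesis far more strongly than either reading alone.

```python
# Illustrative sketch of fusing evidence from two sensors with a naive
# Bayes odds update (all numbers are made up for the example).

def fuse(prior, likelihood_ratios):
    """Update the probability of an event given independent sensor
    likelihood ratios P(reading | event) / P(reading | no event)."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.01                  # explosions are rare a priori
sound_lr, flash_lr = 20, 15   # each reading is much likelier given an explosion

p_sound = fuse(prior, [sound_lr])            # about 0.168
p_both = fuse(prior, [sound_lr, flash_lr])   # about 0.752
print(round(p_sound, 3), round(p_both, 3))
```

    The same pattern generalizes to more sensors: each extra consistent reading multiplies the odds, which is why integrating multi-sensor data matters.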


    We have designed a ROS-based framework for packaging sensor data from multiple sensors and publishing it as MAVLink messages for use by a cognitive agent in an object detection and tracking application. This is ongoing work under the MAST project.