One of the key factors in understanding what interfaces will be easy to use is the limited capacity of the human information-processing system. This work outlines a theory of human working memory which is instantiated as a computational system called SPAN. Working memory and the related construct of short-term memory have a long history in psychology, and in the last decade have been used to explain differences in performance on a wide variety of tasks both at the individual level and between different age groups. The production system SPAN was constructed as an attempt to address working memory issues based on several well-established mechanisms such as decay, interference, and processing speed. One property unique to SPAN is its ability to model the use of external memory. It is this last property, combined with SPAN's explicit acknowledgment of individual differences, which gives it a great deal of promise in applications to HCI domains--particularly in the prediction of errors.
Cognitive models, individual differences, user models, GOMS, human memory
The basic idea of humans as limited-capacity information processors goes back at least as far as the 1950s [7] and has gradually evolved into the construct we now know as "working memory." Particularly in the last decade, working memory has received a great deal of attention from researchers interested in different aspects of human cognition. One of the central findings in the cognitive aging literature is that working memory capacity generally declines with age in adults. Differences in working memory capacity have been used to explain individual differences in young populations and group differences between young and older adults in a tremendous range of tasks, including natural language use (e.g., comprehension, production, and discourse recall), a wide variety of reasoning tasks, recognition of declarative memory, procedural errors, skill acquisition, and so on. This phenomenon is so pervasive that some researchers have claimed that working memory capacity is the primary source of individual differences in cognitive abilities outside of domain-specific knowledge differences [6]. It seems clear that working memory also plays a critical role in understanding the interaction of humans with other information systems, especially in understanding the errors that users make [e.g. 2].
Given that working memory plays a key role in our ability to predict human performance, what do we know about how the human working memory system actually works? It turns out that several important mechanisms have been identified:

Decay. Items in working memory decay over time. That is, the longer it has been since an item was needed in working memory, the less likely it is that it is currently available [1].

Displacement/interference. As new items enter working memory, there are at least two repercussions in the rest of the system: other items tend to become harder to access, and the cognitive system becomes less efficient, effectively slowing down [3].

Basic processing speed. Partly as a result of attempts to determine what causes the age-related decline in working memory capacity, researchers have discovered that there is a strong link between simple processing speed and working memory capacity [9].
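The three mechanisms above can be made concrete with a toy simulation. This is only an illustrative sketch: the exponential-decay form, the interference penalty, and all numeric constants are invented for demonstration and are not parameters from SPAN or from the cited literature.

```python
import math

# Toy illustration of three working-memory mechanisms.
# All constants are invented for demonstration purposes.
DECAY_RATE = 0.3       # exponential decay per time step (assumed)
INTERFERENCE = 0.15    # fractional cost to existing items per new entry (assumed)

def decay(activation, elapsed_steps):
    """Decay: an item's availability falls off with time since last use."""
    return activation * math.exp(-DECAY_RATE * elapsed_steps)

def displace(activations, n_new_items):
    """Displacement/interference: each newly entering item makes
    the items already in working memory harder to access."""
    penalty = (1.0 - INTERFERENCE) ** n_new_items
    return [a * penalty for a in activations]

def effective_speed(base_speed, activations):
    """Slowdown: total memory load reduces processing efficiency."""
    load = sum(activations)
    return base_speed / (1.0 + load)

items = [1.0, 0.8, 0.6]                   # current activations
items = [decay(a, 2) for a in items]      # two time steps pass
items = displace(items, 1)                # one new item enters
print(effective_speed(10.0, items))       # system is now slower than unloaded
```

The point of the sketch is only the qualitative behavior: older items fade, new entries degrade old ones, and total load slows the whole system down.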
Despite the centrality of working memory as a psychological construct and the amount we know about how it works, there is at present no formal/computational theory which integrates these mechanisms into a coherent whole and can be used to predict user performance. Providing such a theory is the central aim of this research.
SPAN (which stands for Speed, Parallelism, Activation, and eNvironment) is a production system designed to computationally realize the mechanisms described in the previous section. It has declarative elements, or chunks, which have activations that decay over time. In addition, an element's activation must be above a threshold level for the element to be matchable by productions. Productions (IF-THEN rules) have the primary function of propagating activation, and are limited by two factors: a global speed parameter (used to realize individual differences) and total system noise, which is a complex function of the activations of the individual elements. This yields a system with a functional limit to its processing bandwidth as a function of processing speed.
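To make the architecture concrete, the following sketch shows how decaying chunks, a matchability threshold, and a speed- and noise-limited production cycle might fit together. Every name, constant, and the noise function here are hypothetical placeholders; SPAN's actual formulations (in particular its noise term) are more complex.

```python
import math

THRESHOLD = 0.2  # minimum activation for a chunk to be matchable (assumed value)

class Chunk:
    """A declarative element whose activation decays over time."""
    def __init__(self, name, activation=0.0):
        self.name = name
        self.activation = activation

def noise(chunks):
    """Placeholder: system noise as a simple function of element activations."""
    return 0.05 * sum(c.activation for c in chunks.values())

def cycle(chunks, productions, speed, decay=0.1):
    """One cycle: decay all chunks, then fire productions whose condition
    chunk is above threshold, propagating activation to the action chunk.
    Propagation is scaled by the global speed parameter and attenuated
    by total system noise, giving a functional bandwidth limit."""
    for c in chunks.values():
        c.activation *= math.exp(-decay)
    gain = speed / (1.0 + noise(chunks))
    for condition, action in productions:
        if chunks[condition].activation >= THRESHOLD:
            chunks[action].activation += 0.1 * gain

chunks = {"goal": Chunk("goal", 1.0), "step": Chunk("step", 0.0)}
cycle(chunks, productions=[("goal", "step")], speed=1.0)
```

Note how a lower `speed` setting directly reduces how much activation each cycle can propagate, which is one simple way a single global parameter can stand in for individual differences in capacity.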
There is another aspect of the theory which is clearly relevant to HCI: external memory. Since memory elements in SPAN are fragile, the functional capacity of the system can be augmented by the use of memory elements that do not decay or contribute noise: those in the environment of the system. For example, it is impossible for most people to multiply two ten-digit numbers without the use of some kind of external memory store, such as a piece of paper. SPAN provides a way to formally model this important aspect of human memory use.
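The force of this point can be seen in a small extension of the same kind of sketch: mark some elements as external (e.g., a digit written on paper) and exempt them from decay. As before, the constants and names are illustrative assumptions, not SPAN's actual machinery.

```python
import math

DECAY = 0.3      # per-tick decay for internal elements (assumed)
THRESHOLD = 0.2  # assumed matchability threshold

class Element:
    def __init__(self, name, activation=1.0, external=False):
        self.name = name
        self.activation = activation
        self.external = external  # True for items in the environment, e.g. on paper

def tick(elements):
    """Internal elements decay; external ones persist unchanged."""
    for e in elements:
        if not e.external:
            e.activation *= math.exp(-DECAY)

def available(elements):
    """Names of elements still usable (at or above threshold)."""
    return [e.name for e in elements if e.activation >= THRESHOLD]

digits = [Element("carry", external=False),
          Element("partial_sum", external=True)]
for _ in range(10):
    tick(digits)
print(available(digits))  # → ['partial_sum']: the held carry is lost,
                          #   but the written-down sum survives
```

This captures, in miniature, why writing down intermediate products makes ten-digit multiplication feasible: the external store does not decay, so its functional capacity is added to the fragile internal one.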
Though SPAN is at this point primarily an exercise in theoretical cognitive psychology, its potential for application to issues in human-computer interaction (HCI) is great. GOMS models have proved themselves useful in understanding certain aspects of HCI, and it has been shown that the process of transforming a GOMS model into a set of productions is fairly straightforward [4]. However, SPAN provides advantages that GOMS and other production systems lack. The key advantage is that SPAN can predict working memory failures and therefore certain classes of user error. Other advantages include the ability to make predictions about individual differences in performance, as well as taking into account the impact of what information is and is not available to the human operator in the form of external memory. All of these are important aspects of the human-computer interface that are not handled particularly well by GOMS analysis or by other production systems such as Soar [8].
Detailed modeling of the memory ramifications of a human-machine system may be useful in predicting performance, both in terms of fluency of use and in the prediction of errors. SPAN models, augmented by appropriate device or interface models, should be able to predict at what points the information load in an interface may become problematic for particular users. In addition, such models may be able to suggest to designers what information ought to be available in the interface to help users avoid such errors. The inability of GOMS-based models to aid in the prediction of errors has often been taken as a key weakness (or at least incompleteness) in such models; SPAN offers a way to make model-based predictions about errors.
Currently, I am attempting to use SPAN to model the results of various psychological experiments in an attempt to test the particular mechanisms chosen for SPAN. In the future, I would like to extend SPAN by incorporating a more complete model of the perceptual-motor system such as the one found in EPIC [5] and using SPAN to evaluate real-world interface designs. I would also like to incorporate a learning mechanism into SPAN to better understand the effects of practice on human cognitive performance.
I would like to thank the National Science Foundation for financial support in the form of a graduate fellowship and also my dissertation committee (especially my advisor, Susan Bovair) for their support and guidance.
1. Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Lawrence Erlbaum.
2. Anderson, J. R., & Jeffries, R. (1985). Novice LISP errors: Undetected losses of information from working memory. Human-Computer Interaction, 1, 107-131.
3. Baddeley, A. D. (1986). Working memory. Oxford: Oxford University Press.
4. Bovair, S., Kieras, D. E., & Polson, P. G. (1990). The acquisition and performance of text editing skill: A cognitive complexity analysis. Human-Computer Interaction, 5, 1-48.
5. Kieras, D. E., Wood, S. D., & Meyer, D. E. (1995). Predictive engineering models using the EPIC architecture for a high-performance task. In Human Factors in Computer Systems: Proceedings of CHI'95, pp. 11-18. New York: Addison Wesley.
6. Kyllonen, P. C. & Christal, R. E. (1990). Reasoning is (little more than) working-memory capacity. Intelligence, 14, 389-433.
7. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
8. Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
9. Salthouse, T. A. (1991). Mediation of adult age differences in cognition by reductions in working memory and speed of processing. Psychological Science, 2, 179-183.