The Effects of Information Accuracy on
User Trust and Compliance
Jean E. Fox
George Mason University
Fairfax, Virginia 22030
jfox3@osf1.gmu.edu
ABSTRACT
Designers and manufacturers of new technology
must understand the factors that influence consumers'
decisions to purchase new high-tech products. One
important factor in the decision is how much users trust the
technology. Muir [5, 6] developed a theory of how people
develop trust in automated systems. Several studies have
supported her model. This proposed study will provide
additional data to test this theory. The application to be
studied is an Advanced Traveler Information System
(ATIS), which provides route navigation information to
automobile drivers. The study will evaluate how inaccurate
congestion information affects the users' trust in and
compliance with the system's advice. These results will be
important to ATIS developers, who need to know how
accurate the systems must be to facilitate user
acceptance.
KEYWORDS:
ATIS, Automated systems, Decision aids, Human-system trust, ITS, User acceptance
INTRODUCTION
With rapid developments in technology,
manufacturers of electronic products frequently release
new products and improve existing ones. The market
success of innovative
electronic consumer products clearly depends on consumer
acceptance and willingness to pay. Manufacturers must
therefore understand the factors that influence consumers'
decisions to purchase new technology.
One innovative technology currently under development is
the Advanced Traveler Information System (ATIS). One
component of ATIS is a computer-based technology that
uses information about current traffic conditions to provide
automobile drivers with the optimum route to their
destination. It is important for user acceptance of ATIS to
occur fairly quickly because the more people who own and
use ATIS, the more accurate the systems' predictions will be
(traffic management centers will gather data on traveling
speeds from all equipped cars). Thus, the slower the user
acceptance, the less accurate the congestion information
will be (there will be fewer data points). Developers will
therefore want to ensure that the accuracy of the congestion
information is sufficient for user acceptance before they
release their products.
HUMAN-SYSTEM TRUST
One issue that contributes to whether users purchase a new
product is how much they trust the technology. Muir [5, 6]
has developed a theory of how users develop trust in
automated systems. This theory is based on the work of
Barber [1] and Rempel, Holmes, and Zanna [7] on how trust
develops between people. Muir's hypotheses include:
- Initial trust will be high. Users will expect the system to
be accurate, so they will trust with blind faith.
- Trust is dynamic, and will change depending on the
users' experience with the system.
- The users' trust progresses with their experience from
the initial blind faith to a trust in the dependability of the
system in the current situation. This progresses to a
trust that the system will remain dependable, despite an
uncertain future.
- Users weigh each experience differently, depending on
the "risk" involved. Users place considerable
importance on situations where there is a high risk of
system error.
- Trust will affect allocation of function (whether users
choose manual or automated control) or, in the case of
decision aids, compliance with the system's advice.
- Trust can be rebuilt after it is broken, but rebuilding is a
difficult process.
- There is a criterion of minimum system performance
required for trust. If the system performs above the
criterion, users will trust it; if it performs below the
criterion, users will not.
Muir tested her theory in two studies (see [4]). The first
study supported the "progression of trust" aspect of her
theory, and the second study found a positive correlation
between trust and use. Lee and Moray [4] found that trust
in a system partially explained system use, but other factors
(such as the user's own ability to provide manual control)
also influenced system use. These three studies have
provided some support for Muir's theory, but more research
is needed to evaluate her hypotheses in more depth.
TRUST AND COMPLIANCE WITH ATIS ADVICE
A few studies have evaluated the effects of information
accuracy on user trust in and compliance with ATIS advice
regarding congestion [2, 3]. These studies found that as accuracy
decreases, both trust and compliance decline. However,
when accuracy improves, trust and compliance increase. In
addition, the studies reveal that there is a criterion level of
information accuracy that must be met to facilitate user trust
and compliance.
Although these studies provided important results, there are
still some gaps in ATIS research. For example, the studies
used either trust or compliance as a measure, but not both.
In addition, these initial studies have been performed in low-
fidelity simulators or with questionnaires. Thus, there is a
need to study this topic further.
OBJECTIVES OF THIS RESEARCH
There are two main goals for this study. The first goal is to
provide empirical data to evaluate part of Muir's theory of
user trust. Muir stated that trust affects allocation of
function. It is expected that this study will show a
significant positive correlation between user trust and
compliance. The study will also address Muir's hypothesis
that trust can be rebuilt, using three conditions. In the first
condition, accuracy will decrease over the trials, to see how
trust declines. In the second condition, accuracy will start
out low, leading to degraded trust. Accuracy will increase
over the trials to see how trust is re-established. In the third
condition, the accuracy will increase and decrease. Finally,
Muir's theory predicts that there will be a criterion level of
acceptable information accuracy. This prediction will also
be tested in the study by evaluating trust at a range of
accuracy levels.
The second goal is to fill in gaps in the research on trust in
ATIS by studying both trust and compliance. This study
will also use more accuracy levels than previous studies.
Further, this study will be conducted in a high-fidelity
driving simulator, which will allow a more realistic evaluation
of the types of choices people make. The results of the
study will be useful to developers to determine how
accurate ATIS must be to ensure rapid user acceptance.
APPARATUS
This study will be performed in the Highway Driving
Simulator (HYSIM) at the Federal Highway Administration's
Turner-Fairbank Highway Research Center in McLean,
Virginia. The HYSIM is a high-fidelity, fully interactive,
fixed-base driving simulator. It consists mainly of a car cab,
a large screen to display the computer-generated scenario,
and random-access slide projectors to project images of
signs. The steering wheel, accelerator pedal, and brake
pedal retain the feel of real controls. The ATIS will be
simulated on a computer and displayed on a small monitor
mounted in the dashboard of the car.
PROCEDURES
Each subject will drive a computer-generated
scenario. There will be one training trial and four experimental
trials. During each trial, there will be 10 decision points,
where subjects will choose one of two branches at a fork in
the road by steering the car down that branch. One branch
will have congestion, and the other will not. Subjects will
choose the branch they believe is not congested. Thus, the
task is to avoid congestion, not to navigate to a particular
destination. The ATIS will suggest which branch is not
congested, but in some cases this information will be wrong.
The accuracy of congestion information provided by ATIS
will be different in each trial. The HYSIM will record
whether subjects complied with the ATIS advice. In
addition, a verbal measure of trust will be taken at each
decision point and at the end of each trial.
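The advice-generation logic described above can be sketched as a small simulation. This is only an illustration of the procedure, not the actual HYSIM or ATIS software; the function and variable names are my own, and only the structure (10 forks per trial, advice correct with probability equal to the trial's accuracy level) comes from the text.

```python
import random

def run_trial(accuracy, n_points=10, seed=None):
    """Simulate one trial: at each of n_points forks, the simulated ATIS
    names the uncongested branch with probability `accuracy`; otherwise it
    names the congested branch. Returns a list of booleans indicating
    whether the advice at each decision point was correct."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_points):
        congested = rng.choice(["left", "right"])  # branch with congestion
        clear = "right" if congested == "left" else "left"
        # With probability `accuracy`, the advice names the clear branch
        advice = clear if rng.random() < accuracy else congested
        outcomes.append(advice == clear)
    return outcomes

# e.g. a 40%-accuracy trial: on average, about 4 of 10 advice points correct
print(sum(run_trial(0.40, seed=1)), "of 10 advice points correct")
```

In the study itself, compliance (whether the subject steered down the advised branch) and a verbal trust rating would be recorded alongside each outcome.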
EXPERIMENTAL DESIGN
The proposed study is a 3 (Accuracy Condition) x 2
(Age Group) between-subjects design. Accuracy Condition
refers to the order in which subjects will experience different
levels of accuracy of congestion information. One group
will experience the first trial at a low level (information will be
accurate only 40% of the time), then experience higher and
higher accuracy levels for the remaining three trials (60%,
80%, and 100%). The second group will experience the trials
in the reverse order. The third group will start at 60%, then
go to 40%, 100%, and end at 80%. The purpose of this
design is to examine the patterns of user trust and
compliance as the accuracy of ATIS congestion information
changes.
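The three accuracy orderings can be written down compactly. In this sketch the percentages are taken from the design above, but the condition labels are my own shorthand, not terminology from the study:

```python
# Trial-by-trial ATIS accuracy for each Accuracy Condition group.
# Labels ("increasing", etc.) are illustrative; percentages are from the design.
ACCURACY_CONDITIONS = {
    "increasing": [0.40, 0.60, 0.80, 1.00],  # group 1: low start, rising
    "decreasing": [1.00, 0.80, 0.60, 0.40],  # group 2: reverse order
    "mixed":      [0.60, 0.40, 1.00, 0.80],  # group 3: rise and fall
}

for name, trials in ACCURACY_CONDITIONS.items():
    print(name, [f"{a:.0%}" for a in trials])
```

Crossing these three orderings with the two age groups yields six cells, with every subject experiencing all four accuracy levels in a fixed order.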
SUMMARY
The goal of this research is to examine the effects of
inaccurate congestion information on user trust in and
compliance with ATIS advice and to test Muir's theory. The
results will be useful in both an applied and a theoretical
framework. Manufacturers of ATIS products will be
interested in the requirements for the accuracy level of their
products. The results are also relevant to the theoretical
relationship between trust and compliance and to the
establishment of user trust.
ACKNOWLEDGMENTS
This research is sponsored by the Federal Highway
Administration, Turner-Fairbank Highway Research Center,
McLean, VA, through Contract DTFH61-94-C-00003. I
would like to thank Deborah Boehm-Davis (GMU) and
Elizabeth Alicandri (FHWA) for their assistance on this
project.
REFERENCES
1. Barber, B. (1983). The Logic and Limits of Trust. New Brunswick, NJ: Rutgers University Press.
2. Bonsall, P.W. and Joint, M. (1991). Driver compliance with route guidance advice: The evidence and its implications. Proceedings of the IEEE-IEE Vehicle Navigation & Information Systems Conference, 47-59.
3. Kantowitz, B.H., Kantowitz, S.C., and Hanowski, R.J. (1994). Driver reliability demands for route guidance systems. Proceedings of the 12th Triennial Congress of the International Ergonomics Association, Vol. 4, 133-135.
4. Lee, J. and Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270.
5. Muir, B.M. (1987). Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies, 27, 527-539.
6. Muir, B.M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37(11), 1905-1922.
7. Rempel, J.K., Holmes, J.G., and Zanna, M.P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49, 95-112.
© Copyright of this material is held by the author.