User Autonomy: Who Should Control What and
When?
Batya Friedman*, Helen Nissenbaum**
*Mathematics and Computer Science
Colby College
Waterville, ME 04901, USA
+1 207 872 3572
b_friedm@colby.edu

**University Center for Human Values
Marx Hall
Princeton University
Princeton, NJ 08544, USA
+1 609 258 2879
helen@phoenix.princeton.edu
KEYWORDS
Autonomy, computer system design, design methods, ethics,
information systems, social computing, social impact.
INTRODUCTION
In this workshop we are concerned with understanding the
relationship between user autonomy, the user interface, and
computer system design. By autonomy we mean the capability to act
on the basis of one's own decisions, guided by one's own reasons, desires,
and goals. When actions are unduly constrained or restricted, autonomy may
be diminished or violated. Evaluating an interface and system design in
relation to user autonomy involves uncovering the extent to which the
system enhances or diminishes that autonomy.
A case in point: This past year, a colleague of ours (whom we will
call Jim) enthusiastically welcomed video-conferencing
into his office. With the addition of a video camera, a microphone, and a
few other components, Jim was poised for real-time interactions with
colleagues in faraway places. Jim got down to work and connected with his
colleagues. The technology was terrific. But when the first session was
over, Jim was horrified.
There was no on/off switch on the video camera. How
could he know if someone was looking in? There was no on/off
switch on the microphone. How could he know if someone
was listening in? In self-defense, Jim attached a 3X5 index card to
the top of his video camera; he can flip the card down to
cover the camera lens whenever he wants to insure visual privacy.
Gaining control over his microphone was a bit tougher;
Jim has sewn a small felt bag to cover the microphone and mute the
sound.
This example from video-conferencing begins to highlight how important it
is for users to have control over the technology they use. More generally,
it points to the larger issue of user autonomy.
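A software analogue of Jim's index card and felt bag makes the design point concrete: capture devices should expose an explicit on/off state that the user controls and can always inspect. The sketch below is purely illustrative and not part of Jim's setup; it is written in TypeScript against the modern browser MediaStream API (navigator.mediaDevices.getUserMedia), and the function names are our own.

// A minimal, hypothetical sketch: explicit software on/off controls
// for camera and microphone, with state the user can always inspect.

async function startConference(): Promise<MediaStream> {
  // The browser asks the user for permission before capture begins.
  return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
}

// Software equivalent of Jim's index card: disable every video track.
// A disabled video track transmits black frames instead of the scene.
function setCameraOn(stream: MediaStream, on: boolean): void {
  for (const track of stream.getVideoTracks()) {
    track.enabled = on;
  }
}

// Software equivalent of Jim's felt bag: disable every audio track.
// A disabled audio track transmits silence.
function setMicrophoneOn(stream: MediaStream, on: boolean): void {
  for (const track of stream.getAudioTracks()) {
    track.enabled = on;
  }
}

// The interface can report the true state, so the user knows whether
// anyone could be watching or listening.
function isCameraOn(stream: MediaStream): boolean {
  return stream.getVideoTracks().some((track) => track.enabled);
}

The essential property is not this particular API but where the control resides: the on/off state, and honest knowledge of it, rest with the user rather than with whoever happens to be at the other end of the connection.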
This workshop builds on the organizers' previous work on designing
computer systems for responsible computer use [1, 2,
3, 4]. In the workshop, we draw on this background and on participants'
research and design experiences (1) to identify designs that support user
autonomy as well as designs that abuse it, and (2) to generate design
principles for protecting user autonomy in future systems.
WORKSHOP GOALS
- To explore with colleagues the meaning and value of user
autonomy, the nature of the relationship between user
autonomy and control of computer systems, and the elements of
interface and system design that affect user
autonomy.
- To provide a forum for colleagues to discuss issues of user autonomy in
computer systems that have arisen from their own design experiences.
- To work with colleagues to identify both designs that support user
autonomy and designs that abuse it in computer systems.
- To work with colleagues to generate design principles for
protecting user autonomy in the design of future systems.
REFERENCES
1. Friedman, B., & Millett, L. (1995, May). "It's the computer's fault" --
Reasoning about computers as moral agents. Conference Companion of the
Conference on Human Factors in Computing Systems, CHI '95 (pp. 226-227).
New York: Association for Computing Machinery.
2. Friedman, B., & Nissenbaum, H. (in press). Bias in computer systems.
ACM Transactions on Information Systems.
3. Friedman, B., & Nissenbaum, H. (1995, May). Workshop at CHI '95:
Minimizing bias in computer systems. Conference Companion of the
Conference on Human Factors in Computing Systems, CHI '95 (p. 444).
New York: Association for Computing Machinery.
4. Nissenbaum, H. (1994). Computing and accountability. Communications of
the ACM, 37(1), 72-80.
(c) Copyright held by the authors.