Simon Penny

Fugitive

Abstract

Fugitive is an interactive artwork which (via a machine vision system) interprets gross bodily movement as an indicator of "mood". The user moves through an impressionistic visual experience drawn from a digital video database. The question of embodiment is central: we attempted to produce a system which "speaks the language of the body". Artworks should be immediately accessible to any spect-actor. The complexity of the interactive mode is low at the beginning of each user's experience, but increases as the user gains familiarity with the system. We call this idea of transparent training "auto-pedagogic".

Introduction

The question of embodiment has become critical in computer science and cognitive science discourses. We argue that a large part of human understanding is necessarily "embodied". In general, tracker systems for immersive virtual environments and spatial applications reduce the physical presence of the user to one (or a small number) of points. Rather than submit a human user to such erasure, or to the discipline of encoding her desire into an abstract iconic or symbolic language amenable to a computer system, we argue that a new generation of real-time machine-vision systems is required to create an interface which can interpret bodily dynamics. Fugitive, using one camera and simple image differencing, is the first of these experiments. The output of the system is completely free of textual, iconic, or mouse/buttons/menus type interaction. The forthcoming project (Traces) uses multiple cameras and builds real-time volumetric models.
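The single-camera image-differencing approach mentioned above can be sketched minimally as follows. This is an illustrative reconstruction, not the actual Fugitive vision code: the function name, the fixed threshold, and the centroid-of-change heuristic are all assumptions.

```python
import numpy as np

def estimate_position(prev_frame, curr_frame, threshold=30):
    """Estimate the user's position as the centroid of changed pixels.

    A minimal sketch of single-camera image differencing. Frames are
    2-D uint8 grayscale arrays; the threshold value is illustrative.
    Returns (x, y) in pixel coordinates, or None if no motion is seen.
    """
    # Signed difference (int16 avoids uint8 wrap-around), then threshold.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None  # no motion detected this frame
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())
```

In a real installation this estimate would be mapped from camera pixels to polar coordinates in the circular room before any further analysis.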

Physical Description and User Experience

Fugitive is a single-user spatial interactive artwork [1]. The arena for interaction is a dark circular room about 10m in diameter. A video image moves back and forth around the circular wall, tightly coupled to the movement of the user. This coupling is an instantaneous confirmation to the user that the system is indeed interactive. The user is unencumbered by any tracker hardware; sensing is done via machine vision using infra-red video [2]. From the user's point of view, there is zero latency between her actions and the response of the system. Over time the behavior of Fugitive becomes increasingly subtle and complex. A user must spend almost 15 minutes to experience the full seven chapters and elicit the most complex system responses.

User Behavior - System Behavior

The behavior of the system is evasive: the image, in general, "runs away" from the user, and the user pursues the image. In general, the image places itself diametrically opposite the user. The basic logic of interactive representation in Fugitive is that system responses are represented by image movement across the wall and camera movement within the image sequences. If a user is moving radially toward the image, the chosen image will be one in which the camera is moving along the z axis of the shot (a dolly or a zoom). The frame rate is controlled by the user's velocity. Likewise, if the user moves circumferentially in the space, the video will be a pan sequence, frame rate indexed per degree of circumferential movement, and the image will track around diametrically opposite the user. The output of the "Mood Analysis Engine" controls the flow of digitised video imagery in such a way that two people walking the same path in the installation are unlikely to produce the same video sequence, because their bodily dynamics are different. The system matches the "mood" of the user: frenetic behavior will produce a frenetic response; calm behavior, a calm response. The system responds not simply to raw position, but to changes in raw acceleration or velocity and, ideally, to kinesthetically meaningful but computationally complex parameters like directedness, wandering or hesitancy. [3]
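The mapping described above can be sketched schematically. All names and constants here are assumptions introduced for illustration: the image is placed diametrically opposite the user, radial movement selects a dolly/zoom sequence, circumferential movement selects a pan, and frame rate is indexed to speed.

```python
import math

def system_response(r, theta, dr, dtheta, base_fps=25.0):
    """Sketch of Fugitive's interaction logic (illustrative only).

    r, theta  -- user's polar position in the circular room
    dr, dtheta -- per-frame changes in that position
    Returns (projection angle, sequence class, playback frame rate).
    """
    # The image places itself diametrically opposite the user.
    image_angle = (theta + math.pi) % (2 * math.pi)
    # Compare radial speed against circumferential speed (arc length r*dtheta).
    radial_speed = abs(dr)
    circ_speed = abs(r * dtheta)
    if radial_speed >= circ_speed:
        sequence, speed = "dolly", radial_speed  # camera moves along the z axis
    else:
        sequence, speed = "pan", circ_speed      # camera pans with the user
    # Frame rate indexed to the user's velocity, capped at the base rate.
    fps = min(base_fps, base_fps * speed)
    return image_angle, sequence, fps
```

A user stepping quickly toward the wall would thus elicit a fast dolly sequence; a slow circumferential stroll would elicit a slow pan tracking opposite her.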

Kinesthetic Interaction and Response Representation

The "Mood Analysis Engine" interprets a video representation of the user's body [4]. It is an attempt to develop a computationally implementable system which truly reflects the kinesthetic feeling of the user. The goal is to draw the user's attention back to their own embodied experience, not to an illusory virtual space or hypertextual structure. The primary and structuring continuity in Fugitive is the deeply subjective continuity of embodied being through time.

Representation of the response of the system back to the user is key to persuasive interaction. Not only must one reduce human behavior to algorithmic functions, but one must be able to present to the user a response which can be meaningfully understood as relating to their current behavior. An artwork is by definition not literal or didactic; it is concerned with poetic and metaphoric associations. It would not be interesting if Fugitive told you: "you just moved two paces left". One strives for some poetic richness which is clear enough to orient the user but unclear enough to allow the generation of mystery and inquisitiveness.

Fugitive is not primarily a device for looking at pictures (or video); it is a behaving system in which the image content and its physical location are the "voice" of the system. The order of images displayed is not determined by any previous images, nor by the instantaneous position of the user. Many types of information can be extracted from a single still image, let alone a video sequence. In Fugitive it is difficult for the user to determine which aspects of the images signify the expression of the system. The aspect of the image which is the "voice" of the system is camera movement. Subject matter, objects, colors (etc.) do not carry meaning about the state of the system.

The Auto-pedagogic Interface

Although prior training has become a part of theme park amusements, nobody wants to do a tutorial or read a manual before they experience an artwork. Hence a central issue in interactive art is managing the learning curve of the user. Some works choose interactive modes so simple that they become boring; others choose complex modes which a user cannot distinguish from random. In avoiding these two undesirables, the designer can choose a well-known paradigm, or, if one desires a novel interface modality, then the user must be trained or the system must teach the user.

In my previous research I have learnt that pleasure is key. If the user has a desire to interact, learning occurs in an unimpeded and transparent way. In Fugitive, I attempted to formally produce this effect in a complex system. Such an "auto-pedagogic" interface must present itself as facile to a new user, but progressively and imperceptibly increase in complexity as the familiarity of the user increases.

Notes

1. Design for Fugitive was begun in May 1995. Construction began in May 1996. The MAE (Mood Analysis Engine) and PID motion control system were built at Carnegie Mellon University, Pittsburgh PA, USA by Simon Penny and Jamieson Schulte. Digital video editing, MAE2, the VSE (Video Selector Engine) and the full-scale installation were built at the Institut für Bildmedien, Zentrum für Kunst und Medientechnologie, Karlsruhe, Germany, March-May 1997, by Simon Penny and Andre Bernhardt. Fugitive was first shown in the exhibition "Current" at the opening of the ZKM in October 1997. It was shown again at the European Media Art Festival (EMAF), Osnabrück, Germany in May 1997.

2. The space is lit with smooth infra-red light. A custom vision system running on a PC receives video from a very wide-angle monochromatic video camera. Two streams of serial data are output. Simple angular position data is sent to the custom PID motor control board to drive the projector rotation system. Values for MAE calculations are sent to the MAE2 running on an SGI O2 computer. On the basis of this calculation, the VSE (Video Selector Engine, also resident on the O2) selects, loads and replaces digital video on a frame-by-frame basis. Video data is fed to the video projector in real time.

3. This is achieved in a multi-stage process of computationally building up the complexity of parameters. The input-level data from the vision system is limited to raw position in each frame. From this, simple values for velocity and acceleration are calculated. A third level of more complex parameters is then constructed: average acceleration over various time frames, and so on. Finally, values for various combinations of these parameters are used to determine the entry and exit points for "behaviors" which are matched to presorted classes of video sequences.

4. The term "Mood Analysis Engine" should be understood as a parody of such terms as Artificial Intelligence and Knowledge Engineering.
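The multi-stage construction described in note 3 (raw position, then velocity and acceleration, then time-averaged parameters, then behavior selection) can be sketched roughly as follows. The class name, the window size, the threshold, and the two-way calm/frenetic classification are all invented for illustration and are far simpler than the actual MAE.

```python
import math
from collections import deque

class MoodSketch:
    """Illustrative sketch of the multi-stage analysis in note 3.

    Raw per-frame positions come in; velocity and acceleration are
    derived from successive positions; a windowed average of the
    acceleration (a third-level parameter) then selects a behavior
    class. Thresholds and class names are assumptions.
    """

    def __init__(self, window=30):
        self.positions = deque(maxlen=3)   # last three raw positions
        self.accels = deque(maxlen=window) # recent acceleration magnitudes

    def update(self, x, y):
        self.positions.append((x, y))
        if len(self.positions) < 3:
            return "calm"  # not enough data yet; default behavior
        (x0, y0), (x1, y1), (x2, y2) = self.positions
        v1 = (x1 - x0, y1 - y0)  # velocity over the earlier interval
        v2 = (x2 - x1, y2 - y1)  # velocity over the later interval
        accel = math.hypot(v2[0] - v1[0], v2[1] - v1[1])
        self.accels.append(accel)
        avg = sum(self.accels) / len(self.accels)  # third-level parameter
        return "frenetic" if avg > 1.0 else "calm"
```

In the real system, entry and exit points for many such behaviors would be driven by combinations of these derived parameters rather than a single threshold.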