AUDRIE is a Robotic Art project that invites participant interaction by means of proximity and touch. The robots respond dynamically and differently to each individual and environment, leading participants to anthropomorphize or zoomorphize their behavior: assigning genders, naming them, and conversing with them, in spite of the robots' geometric, nonbiological physical appearance.
We expect stability from the machines, devices, and software applications we interact with. However, if a human or animal reacted to us in exactly the same manner every time we interacted with them, we would find that behavior unnatural.
In 2011, while working on an interactive installation project that incorporated Arduino and custom capacitive sensors, I noticed that the sensors' responses varied depending on the environment in which the installation was placed and the individuals interacting with it. The instructions for the Capacitive Sensing Library suggest improving stability and repeatability by adding a capacitor to ground. I elected to take the opposite path: to embrace the instability.
The resulting robots take the form of simple 3D-printed geometric blocks with hard corners, each driven by two servos and an Arduino Micro on a custom PCB, running a few dozen lines of code. They move in reaction to capacitive touch and proximity.
How would users react to a “finicky” machine, one that “chooses” how to react, or whether to respond at all?
In writing the code, questions arose, such as: How much control should I exert over the robots' reactions? What if I set up the installation and nothing moved at all? Should I be disappointed, or satisfied that the robot simply "decided" not to respond? Should I, then, tweak the robot's parameters to achieve a satisfactory response?
Participant responses ranged from initial trepidation and disbelief to, within a few minutes of first contact, warmth and even affection. Some likened the robots to puppies. Personalities were attributed to each robot, along with attempts to decipher their motion, to discern and understand their "language". What were they saying?
Part of what makes living things interesting is their unpredictable nature. In machines, however, predictability is expected. Can a middle ground be found, where robots perform their functions while maintaining the freedom to be spontaneous? How would individuals react to machines that respond differently to each individual or environment based on how they "feel" at any given moment?