Rudiments

This set of small, robot-like devices, called rudiments, investigates human-machine interaction in a variety of ways. Through careful iterations in their design, the rudiments are intended to provoke curiosity and discussion around the possibility of autonomy in interactive systems. They steer away from a humanoid approach to robot design and instead explore the possibilities of combining robotic devices with appliance-like characteristics.

Rudiment 1, the least sophisticated of the three, is made up of two modules connected via a long, flexible cable. One of the modules quite literally wanders around a magnetic surface, e.g., a fridge door. Its rounded wood-and-acrylic case encloses its magnetic wheels as well as a narrow-range IR sensor that detects nearby movement. Its speed and direction change randomly when the IR sensor is triggered. The second module, a switchbox, is magnetically affixed to the same surface as the moving module. It both powers the moving module and sends it signals whenever its own wide-range IR sensor detects peripheral movement. On receiving a signal, the moving module activates and moves for a random amount of time (bounded by set minimum and maximum durations). To prevent it from falling off a surface, two sensors protrude from the front of the moving module. Each sensor contains two switches, one to detect an obstacle and the other to detect when an edge is reached. If these sensors are triggered, the module backs up and changes direction. Both modules contain Arduino microcontroller boards to control the sensing, actuation, and communication functions.
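The moving module's control logic can be illustrated with a minimal Arduino-style sketch. The pin assignments, motor wiring, timing constants, and wake-signal protocol below are illustrative assumptions rather than the project's actual firmware:

```cpp
// A hedged sketch of Rudiment 1's moving module. Pins, the single
// drive-motor/steering abstraction, and all constants are assumed.

const int WAKE_PIN    = 2;       // activation signal from the switchbox
const int IR_PIN      = 3;       // narrow-range IR sensor (nearby movement)
const int BUMP_PINS[] = {4, 6};  // front switches: obstacle contact
const int EDGE_PINS[] = {5, 7};  // front switches: surface edge reached
const int DIR_PIN     = 8;       // drive direction (HIGH = forward)
const int PWM_PIN     = 9;       // drive speed
const int STEER_PIN   = 10;      // steering actuator, simplified

const unsigned long MIN_RUN_MS = 2000;   // assumed burst-length bounds
const unsigned long MAX_RUN_MS = 10000;

void setup() {
  pinMode(WAKE_PIN, INPUT);
  pinMode(IR_PIN, INPUT);
  for (int i = 0; i < 2; i++) {
    pinMode(BUMP_PINS[i], INPUT_PULLUP);
    pinMode(EDGE_PINS[i], INPUT_PULLUP);
  }
  pinMode(DIR_PIN, OUTPUT);
  pinMode(PWM_PIN, OUTPUT);
  pinMode(STEER_PIN, OUTPUT);
  randomSeed(analogRead(0));
}

bool frontTriggered() {
  for (int i = 0; i < 2; i++) {
    if (digitalRead(BUMP_PINS[i]) == LOW) return true;  // obstacle hit
    if (digitalRead(EDGE_PINS[i]) == LOW) return true;  // edge reached
  }
  return false;
}

void loop() {
  if (digitalRead(WAKE_PIN) == LOW) return;  // idle until the switchbox signals

  // Move for a random time, bounded by the minimum and maximum.
  unsigned long runFor = random(MIN_RUN_MS, MAX_RUN_MS);
  unsigned long start  = millis();
  digitalWrite(DIR_PIN, HIGH);
  analogWrite(PWM_PIN, 150);

  while (millis() - start < runFor) {
    if (digitalRead(IR_PIN) == HIGH) {
      // Nearby movement: pick a new random speed and heading.
      analogWrite(PWM_PIN, random(80, 255));
      analogWrite(STEER_PIN, random(0, 255));
    }
    if (frontTriggered()) {
      // Obstacle or edge: back up briefly, then turn away.
      digitalWrite(DIR_PIN, LOW);
      analogWrite(PWM_PIN, 120);
      delay(500);
      digitalWrite(DIR_PIN, HIGH);
      analogWrite(STEER_PIN, random(0, 255));
    }
  }
  analogWrite(PWM_PIN, 0);  // stop and wait for the next activation
}
```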

Rudiment 2 consists of a plywood servomotor and base, and two acrylic microphone cases, all three wirelessly connected using the Zigbee standard. The servomotor, with an articulated arm and pencil attached, is slotted into the middle of the wooden base. As well as its mechanical parts, the bespoke servomotor houses a customised Arduino microcontroller and an FIO board with an XBee module (the latter enabling wireless communication). The two encased microphones act as triggers for the servomotor and its arm/pencil attachment. The rotation and direction of the motor's arm are dictated by the level of sound input and by which of the two microphones detects the louder sound. In addition, a simple caching method varies the system's sensitivity: sustained or particularly loud noises make it increasingly sensitive, and consequently the frequency and degree of the motor arm's rotation increase. The intended effect is a machine that appears to draw in response to sounds but, to some degree, controls its own movements. The rudiment's output is drawn on removable paper sheets that, in effect, visually record a soundscape.
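The sound-to-movement mapping might look something like the following sketch. It assumes the XBee link runs in transparent serial mode and delivers the two microphone levels as comma-separated pairs; the wire format, thresholds, and caching constants are all assumptions:

```cpp
#include <Servo.h>

// A hedged sketch of Rudiment 2's drawing logic, not the actual firmware.

Servo arm;                       // servomotor with pencil attachment
const int ARM_PIN     = 9;
const int BASE_THRESH = 200;     // assumed baseline trigger level
float sensitivity     = 1.0;     // grows with sustained or loud noise
int   armAngle        = 90;

void setup() {
  Serial.begin(9600);            // XBee in transparent serial mode (assumed)
  arm.attach(ARM_PIN);
  arm.write(armAngle);
}

void loop() {
  if (!Serial.available()) return;

  // Expected wire format (assumed): "left,right\n" sound levels.
  int left  = Serial.parseInt();
  int right = Serial.parseInt();
  int level = max(left, right);

  // Simple "cache": sustained or loud input raises sensitivity,
  // which slowly decays back towards the baseline otherwise.
  if (level > BASE_THRESH) {
    sensitivity = min(sensitivity + 0.05, 3.0);
  } else {
    sensitivity = max(sensitivity - 0.01, 1.0);
    return;
  }

  // Louder sounds (scaled by sensitivity) swing the arm further;
  // the louder microphone decides the direction of rotation.
  int swing = constrain(int(level * sensitivity / 20), 2, 60);
  armAngle += (left >= right) ? -swing : swing;
  armAngle  = constrain(armAngle, 0, 180);
  arm.write(armAngle);

  delay(30);
}
```

Under this scheme a quiet room leaves the pencil largely still, while sustained noise produces increasingly frequent and sweeping marks, which is what gives the drawings their soundscape-recording quality.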

Rudiment 3 consists of an acrylic cog system and casement suspended on a horizontally extended, toothed belt of adjustable length. Using suction cups, the flexible belt can be mounted on any smooth vertical surface, e.g., a window. Beneath the cog system, the oval-shaped casement contains a video camera that monitors the environment. Actuated by a DC motor, the cog system moves the casement left and right along the belt's entire length. The camera can also be rotated left or right by up to 70 degrees. These movements effectively change the viewing direction of the camera. In addition, the opaque casement can display eight different colours using three integrated LEDs (red, green, and blue). The rudiment's movement and colour are controlled by an Arduino microcontroller, which in turn communicates with a small PC encased in a wooden box. The PC receives the video signal from the camera and triggers the rudiment's behaviour through a set of simple yet nondeterministic computer vision processes. The program, written in C++, searches for human faces in the video frame using an object detection algorithm based on the Haar classifier cascade. Each time a face is detected, the rudiment adjusts itself by either moving along the belt or turning the camera (randomly choosing between the two actions) so that the face remains centred. As this happens continuously, the camera appears to follow any movement of a detected face. When more than one face is detected, the rudiment randomly chooses one face to follow. In addition to faces, the rudiment also responds to gross motion in the camera view, momentarily turning towards it. Finally, the rudiment occasionally makes random movements, adding a degree of ambiguity to its behaviour.
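A condensed sketch of the PC-side vision loop might look like the following, assuming OpenCV for capture and Haar-cascade detection. The single-character serial commands sent to the Arduino ("M" to move along the belt, "T" to turn the camera) are invented for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
  cv::VideoCapture cam(0);                       // camera in the casement
  cv::CascadeClassifier faces;
  if (!faces.load("haarcascade_frontalface_default.xml")) return 1;

  cv::Mat frame, grey;
  while (cam.read(frame)) {
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> found;
    faces.detectMultiScale(grey, found, 1.1, 3);
    if (found.empty()) continue;

    // With several faces in view, pick one at random to follow.
    const cv::Rect& target = found[std::rand() % found.size()];
    int offset = (target.x + target.width / 2) - frame.cols / 2;

    // Re-centre the chosen face by randomly picking one of the two
    // available actions: move the casement along the belt, or rotate
    // the camera. (Hypothetical commands, forwarded to the Arduino.)
    if (std::rand() % 2 == 0)
      std::cout << "M " << offset << "\n";       // belt movement
    else
      std::cout << "T " << offset << "\n";       // camera rotation
  }
  return 0;
}
```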

In addition to moving, the rudiment changes to one of its eight possible colours when a face is detected. The colour is chosen by comparing the detected face with eight face categories, each consisting of three sample faces. The category containing the most similar face is chosen and its associated colour displayed. The face comparison is a straightforward pixel-level comparison, without leveraging any predefined knowledge of facial features. This results in a somewhat “machine-defined” similarity measure, which may or may not appear recognisable to users. With a small probability (0.05), the newly detected face replaces an old face sample in its category. As such, the face categories gradually evolve as the rudiment is exposed to more faces. This simple machine learning technique allows the rudiment to adapt to the people who interact with it and to present the same colour each time it recognises similar facial features.
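A sketch of this classification-and-update step, under stated assumptions (faces resized to a fixed 32×32 greyscale patch, similarity measured as summed absolute pixel difference), might look like this; the 0.05 replacement probability comes from the description above, everything else is illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <array>
#include <cstdlib>
#include <limits>

constexpr int kCategories = 8;   // one per LED colour
constexpr int kSamples    = 3;   // sample faces kept per category

// Each category stores up to three 32x32 greyscale face patches.
std::array<std::array<cv::Mat, kSamples>, kCategories> categories;

// Pixel-level similarity only: summed absolute difference, with no
// notion of facial features. Lower means more similar.
double distance(const cv::Mat& a, const cv::Mat& b) {
  return cv::norm(a, b, cv::NORM_L1);
}

// Returns the category index whose colour should be displayed.
int classify(const cv::Mat& greyFaceRoi) {
  cv::Mat probe;
  cv::resize(greyFaceRoi, probe, cv::Size(32, 32));  // assumed patch size

  int best = 0;
  double bestScore = std::numeric_limits<double>::max();
  for (int c = 0; c < kCategories; ++c) {
    for (const cv::Mat& sample : categories[c]) {
      if (sample.empty()) continue;
      double d = distance(probe, sample);
      if (d < bestScore) { bestScore = d; best = c; }
    }
  }

  // With probability 0.05 the new face overwrites a random stored
  // sample, so the categories slowly drift towards the faces the
  // rudiment actually encounters.
  if (std::rand() / double(RAND_MAX) < 0.05)
    categories[best][std::rand() % kSamples] = probe.clone();

  return best;
}
```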


Publications