I just got back from AAAI '93 and the robot building contest. My team's robot, JaCK, was the runner-up in the second event, "Coffee Pot". Surprisingly (to us at least), JaCK did well in this event even though it was running the code written for the first event. The only difference in code was made in 5 minutes between rounds and was never run until the competition. I like to think this shows JaCK's generality and intelligence, but maybe it just shows our luck.
Anyway, here is my report, notes, and code. I encourage other participants to report also, as my report is very JaCK-centered.
- Carl Kadie
The contest was very stressful but very wonderful. Thanks to the organizers: Lynn Andrea Stein and David Miller and to the staff: Matt Domsch, Karsten Ulland, Carol Lee, Mike Wessler, and Anne Wright.
Thanks to my teammates, Jalal Maceki and Karl Altenberg.
The anonymous FTP site for the contest is/was aeneas.mit.edu in pub/ACS/6.270/AAAI. This site has the manual and the rules.
AAAI '93 is the national artificial intelligence conference. On Monday, the day before the conference proper began, a big room full of us attended a tutorial on Autonomous Mobile Robots. For students who preregistered this cost $50. For others it cost much more. The tutorial lasted two hours and was pretty high-level. The low-level stuff we learned in the lab.
After the tutorial, 22 teams of about 3 folks each went to the robot building lab. It was in a roped off area of the main exhibition area. When the exhibition area was open to general conference attendees, they could come up to the rope and talk to us.
Some people preregistered as teams (I believe both winning teams did this). Others, like me, were assigned to teams. My teammates were Karl Altenberg from North Dakota State University and Jalal Maceki from Linkoping University in Sweden. We had never met before the conference, but we did exchange a bit of email when we learned we were on the same team. (We made a good team; we were lucky that we were able to develop a good working style quickly.)
The competition was to be run on Thursday at Noon, so we had about 72 hours to build a robot that would operate by itself. If we chose to, we could also use that 72 hours to sleep, eat, and go to conference talks. (I made it to Herb Simon's excellent talk and most of the Computational Learning Theory session.)
Each team was given use of a box of parts and a laptop computer. The box seemed to contain about every Lego building piece in the world. The structure of our robot was to be made of Lego bricks.
We were also given (use of) a 6.270 board. This is a microcontroller board designed at MIT for the computer science class "6.270". It is based on the Motorola 6811 chip and has 32K of memory: 16K for the operating system (multitasking C in 16K!) and 16K for programs. We had a lot of code, but had no trouble with memory. The board can control 6 or 7 motor outputs, and has 8 digital inputs, about a dozen analog inputs, several unswitched outputs, several "LED" outputs, a servo output, and modulated IR input/output. The only I/O ports we came close to running out of were digital inputs (but we could have used an analog input instead) and "LED" outputs (but we used some of the motor outputs instead).
We were given some prebuilt sensors and use of a soldering table to make more. The sensors included simple switches, photo cells, IR transmitter/receiver pairs (good for proximity sensors), and a modulated IR receiver for receiving the signal from the beacon (or the signal from the other robot). Very sadly, the bend sensors (from the Mattel Powerglove) did not arrive, so we could not use them.
We were given 3 motors: 2 DC motors and one servo motor that we could make turn, pretty reliably, to any angle from 0 to 180 degrees.
There were two events/tasks. Each was run two robots at a time. The contest was double elimination. So to be eliminated from an event, a robot had to lose twice.
Both events involved moving from a starting location, across the playing field, to a target that was sending out an identifying modulated IR signal. The playing fields were about 6 feet by 8 feet with 3 1/2" walls around the outside. In both events the robots started side by side, divided by a 3 1/2" wall. A robot could be told whether it was on the right or left side by having a tiny DIP switch flipped. Robots were arbitrarily assigned to the left or right side unless a team wanted to flip a coin to determine the side. Robots were started either facing the target, facing each other, facing away from each other, or facing away from the target. This was decided with a roll of the dice. The robots were not told which way they were facing.
In the first event, "Escape from the Office", the field looked like (not to scale):
______________B______________
|       |TTTTTTTTTTT|       |
|      _|           |_      |
|     | | _....._....._ |   |
|     |  |     |     |  |   |
|        .     |     .      |
|     .  L     |     R  .   |
|     |        |        |   |
-----------------------------
The field and walls were painted white; the target area was painted black. The starting points had lights under them. When the lights turned on, the robots were to start. 60 seconds later they were required to turn themselves off.
There were two movable walls/doors (one for each start location) covered with shiny metal. They would either be installed between the robot and the beacon (meaning that the robot would have to go out the side) or on the side (meaning that the robot would have a straight shot to the target area). Their installation location was determined by a roll of the dice.
The winner was the first robot to reach the target area in less than 60 seconds or the closest robot if neither reached it in 60 seconds.
The second event was called "Coffee Pot". It involved object finding and retrieval.
________________
|       A      |
|   ---------  |
|       B      |
|   ---------  |
|  CCCCCCCCCCC |
|   CCCCCCCCC  |
|   _________  |
|      |       |
|   L  |  R    |
|      |       |
----------------
Our goal was to have very fast hardware. Most of the competitors copied a prototype Lego robot. It had good power but poor speed. We (esp. Karl A.) thought we would have a competitive advantage by having a faster robot. In the end we had a robot that was probably first or second fastest.
The prototype robot had two drive wheels and 1 caster wheel. JaCK had 4 drive wheels. The left wheels were powered by one motor, the right wheels by another. The robot moved and turned like a tank. Like a tank, it scuffed when turning, making it hard to turn a preset angle just using the timer.
To sense the beacon, we had one modulated IR sensor facing the front of the robot. It had side blinders on it so that it would only see straight ahead. We also had the same sensors mounted facing left and right; these had no blinders because we wanted to use them as peripheral vision, telling us which way to turn to face the beacon.
Many of the other robots mounted something similar on the servo motor, giving them an "eye" that moved independently of the body. Given our development time constraints and the speed at which JaCK could spin, we decided it was easier just to spin the whole body.
I did most of JaCK's programming with my teammates figuring out what high-level strategy the robot should follow. (I've included the code at the end of this note).
The programming environment for the 6.270 board is wonderful. It uses IC ("Interactive C"), a multitasking C. Code is written on a "big" computer such as a Mac Powerbook or an MS Windows laptop. The big computer compiles it into p-code, which is downloaded to the 6.270 board. A 16K operating system on the board interprets the code quite quickly, even doing multitasking.
I tried to follow good programming practice by using lots of "#define"s and variables for things like sensor ports and threshold values. That way we could move sensors around without having to change much code.
I also wanted to make everything symmetric: one function call should let a program written for the robot sitting on the left side of the field work for the robot sitting on the right side of the field. I eventually did this by having a series of constants that gave each motor or sensor's true location, plus a variable that gave its virtual location. The virtual location could be changed simply by reassigning the variable.
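A sketch of that scheme (the port numbers and names here are hypothetical; the real assignments are not in this report):

```c
#include <assert.h>

/* Hypothetical port assignments -- the real ones are not in this report. */
#define TRUE_LEFT_MOTOR  0
#define TRUE_RIGHT_MOTOR 1

/* Virtual locations: all driving code refers only to these variables. */
int left_motor  = TRUE_LEFT_MOTOR;
int right_motor = TRUE_RIGHT_MOTOR;

/* Starting on the other side of the field just swaps the virtual
   locations; no other code changes. */
void set_side(int on_right_side)
{
    if (on_right_side) {
        left_motor  = TRUE_RIGHT_MOTOR;
        right_motor = TRUE_LEFT_MOTOR;
    } else {
        left_motor  = TRUE_LEFT_MOTOR;
        right_motor = TRUE_RIGHT_MOTOR;
    }
}
```

With something like this in place, the same program runs unchanged from either start position.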
The first code written defined functions for controlling actions with the two motors ("forward","backward","spin"). We didn't know if these functions should be given a speed parameter or if there should just be a global speed variable. We coded it up both ways, but in the end didn't use the global speed variable.
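A sketch of those primitives with the speed parameter, written against a stub of IC's motor call so it can run off the board (the stub just records what would have been sent to the motors):

```c
#include <assert.h>

/* Stub of IC's motor(port, power) call; on the 6.270 board this drives a
   motor output. Here it just records the last power sent to each port. */
static int last_power[2];
static void motor(int port, int power) { last_power[port] = power; }

/* Virtual motor locations (see the left/right symmetry scheme). */
int left_motor = 0, right_motor = 1;

void forward(int speed)  { motor(left_motor,  speed); motor(right_motor,  speed); }
void backward(int speed) { motor(left_motor, -speed); motor(right_motor, -speed); }
void spin(int speed)     { motor(left_motor,  speed); motor(right_motor, -speed); }
```

"spin" drives the two sides in opposite directions, which is how a tank-style drive turns in place.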
The first day all the code was in one file. I took our laptop (owned by Karl A.) back to my dorm room the first night and eventually reorganized it in a way that seemed to work well:
The best way I could think of for writing robot code was with finite state automata. The problem (as anyone who has looked at code produced by lex knows) is converting such code to a high-level language so that it is readable. I eventually came up with the erroneous idea of defining each state as a C function. This "state function" would then multitask a behavior function and some condition-testing functions, all running at the same time. When one of the conditions became true, it would tell the state function (via a call-by-reference parameter), and the state function would kill the processes, clean up anything that needed cleaning, and then call the next state function.
The problem? The call to the next state function would never return; it would just call another next state function, etc., etc. The computer's stack would soon overflow.
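One standard fix (a sketch, not the scheme we actually used) is to have each state function return the next state and let a single driver loop make the calls, so the stack never grows:

```c
#include <assert.h>

typedef enum { STATE_SEEK, STATE_TRACK, STATE_DONE } state_t;

/* Each state function does its work and then RETURNS the next state
   instead of calling it. (Real behaviors would run the motors here.) */
state_t do_seek(void)  { return STATE_TRACK; }
state_t do_track(void) { return STATE_DONE; }

/* The driver loop: there is only ever one state-function frame on the
   stack, no matter how long the robot runs. */
int run(void)
{
    int steps = 0;
    state_t s = STATE_SEEK;
    while (s != STATE_DONE) {
        s = (s == STATE_SEEK) ? do_seek() : do_track();
        steps++;
    }
    return steps;
}
```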
Here is the programming strategy we ended up with. It is not perfect, but given the few days we had to work, I think it is pretty good. It was based on what we called "abstract sensors" and "behaviors".
We created 2 abstract sensors. The first told whether the robot had bumped into anything in front (the only direction we ended up caring about). It was accessed with a global variable. This abstract sensor was run via a background process that looked at 7 real sensors, four of which were IR proximity sensors. The background process would take a reading with the LEDs off, turn the LEDs on, wait, and take another reading. It would then compute the difference between the before and after readings. If any of the differences was over a threshold, or if one of the contact switches closed, then "obstacle_p" was set to TRUE. (We also tried averaging and counting the IR differences but couldn't get that to work well.)
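The heart of that background process can be sketched as a pure function (the threshold value and the sign of the difference are assumptions; the tuned values aren't in this report):

```c
#include <assert.h>

#define NUM_IR       4
#define NUM_SWITCH   3
#define IR_THRESHOLD 10   /* hypothetical tuned value */

/* One pass of the obstacle check: an IR proximity reading that jumps when
   the emitter LEDs are switched on (reflection off something close), or
   any closed contact switch, means something is in front of the robot. */
int obstacle_check(const int ir_led_off[NUM_IR], const int ir_led_on[NUM_IR],
                   const int switches[NUM_SWITCH])
{
    int i;
    for (i = 0; i < NUM_IR; i++)
        if (ir_led_on[i] - ir_led_off[i] > IR_THRESHOLD)
            return 1;   /* reflection detected */
    for (i = 0; i < NUM_SWITCH; i++)
        if (switches[i])
            return 1;   /* bumped */
    return 0;
}
```

On the robot, the background process published this result in the global "obstacle_p".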
The other abstract sensor told whether the target beacon was left, right, ahead, or behind. It was also a background process. It compared the number of consecutive times the beacon was seen from each of the 3 modulated IR sensors.
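A sketch of the comparison (the exact decision rule here is an assumption; the report only says the consecutive-sighting counts were compared):

```c
#include <assert.h>

typedef enum { BEACON_AHEAD, BEACON_LEFT, BEACON_RIGHT, BEACON_BEHIND } heading_t;

/* Decide where the beacon is from the number of consecutive sightings on
   each of the three modulated-IR sensors. No sightings anywhere is taken
   to mean the beacon is behind the robot. */
heading_t beacon_heading(int front_count, int left_count, int right_count)
{
    if (front_count == 0 && left_count == 0 && right_count == 0)
        return BEACON_BEHIND;
    if (front_count >= left_count && front_count >= right_count)
        return BEACON_AHEAD;
    return (left_count > right_count) ? BEACON_LEFT : BEACON_RIGHT;
}
```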
As far as behaviors, we tried to get wall following working, but without the bend sensors, we couldn't. It was the better part of a day before my teammates told me to give up on this.
We were more successful with "lockin", a behavior in which the robot would spin for up to two seconds trying to get the beacon in front of it. The time limit was important: in the second event there was a chance the robot would be too far from the beacon to see it. Some of the other robots spent their whole turn spinning and spinning and spinning. The first version of "lockin" had two steps: 1) turn at full speed till the beacon was "ahead", then 2) turn slowly until it was "ahead" again. We eventually removed the second step and just called the first one twice.
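The surviving one-step "lockin" can be sketched against a toy spin model (the step count stands in for the real two-second limit; on the robot the "ahead" test came from the beacon abstract sensor):

```c
#include <assert.h>

#define LOCKIN_LIMIT 20   /* spin steps allowed; stands in for the 2-second limit */

/* Toy model: bearing is how many spin steps the beacon is from "ahead". */
int bearing = 7;

int beacon_ahead(void)   { return bearing == 0; }
void spin_one_step(void) { if (bearing > 0) bearing--; }

/* Spin until the beacon is ahead, but give up after the limit: the robot
   may simply be too far away to see the beacon at all. */
int lockin(void)
{
    int t;
    for (t = 0; t < LOCKIN_LIMIT; t++) {
        if (beacon_ahead())
            return 1;
        spin_one_step();
    }
    return 0;
}
```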
The other interesting behavior was "track". I thought we could move toward the beacon *while* having the robot correct its course with forward turns. It turned out that our robot could only really turn reliably by spinning (having the left and right wheels go in opposite directions). So "track" came to mean: go forward until 1) a bump, 2) some time limit expired, or 3) the beacon was no longer "ahead".
By combining "lockin" and "track", JaCK was able to find the beacon if it was not blocked by a wall or the other robot. To deal with blockage, we had JaCK back up and turn a bit when a "track" step was stopped by a bump.
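The combined loop, sketched with stubbed behaviors that just record what ran (on JaCK these drove the motors):

```c
#include <assert.h>
#include <string.h>

/* Stubbed behaviors that log what ran. */
static char trace[64] = "";
static int  bumped_next = 1;     /* in this toy run, the first track() ends in a bump */

static void lockin(void)          { strcat(trace, "L"); }
static void backup_and_turn(void) { strcat(trace, "B"); }

/* track() returns 1 if it stopped because of a bump, 0 otherwise
   (time limit expired or the beacon no longer "ahead"). */
static int track(void)
{
    int bumped = bumped_next;
    bumped_next = 0;
    strcat(trace, "T");
    return bumped;
}

/* The top-level loop: lock in, go; if blocked, back up and turn, retry. */
void seek_beacon(int rounds)
{
    int i;
    for (i = 0; i < rounds; i++) {
        lockin();
        if (track())
            backup_and_turn();
    }
}
```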
We had several interesting bugs along the way:
Our team, along with many of the teams, stayed up the whole last night before the contest. At about 6am (the contest started at Noon) we had lockin and track working well. It was fun to play "bull fight" with JaCK by moving the coffee pot away from him just as he was about to reach it and then watching him spin around and go for it again.
We decided we would have to program in a fixed strategy for Escape from the Office. The programming went fast because of all the low level commands we now had, but the program was not very robust (esp. because we didn't have time to test and tune it much). That program was:
This was the first chance to use my left/right symmetry code. I was very happy. My teammates had to go along because we didn't have time to write everything twice. I was also happy that it worked, since it had not been tested much.
Locking in on the RIGHT or LEFT sensor (rather than the forward one) turned out to be easy; we just changed the variables so that the lockin program thought it was locking in forward when it was really looking left or right.
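A sketch of the trick (sensor names and port numbers are hypothetical): because lockin only ever reads the virtual "front" sensor variable, retargeting that variable makes lockin stop with the beacon off to one side, with no change to lockin itself.

```c
#include <assert.h>

/* Hypothetical port assignments for the three modulated-IR sensors. */
#define TRUE_FRONT_BEACON 0
#define TRUE_LEFT_BEACON  1
#define TRUE_RIGHT_BEACON 2

/* The lockin code only ever reads front_beacon. */
int front_beacon = TRUE_FRONT_BEACON;

/* Retarget "ahead" at a side sensor or back at the real front sensor. */
void look_left(void)  { front_beacon = TRUE_LEFT_BEACON; }
void look_right(void) { front_beacon = TRUE_RIGHT_BEACON; }
void look_ahead(void) { front_beacon = TRUE_FRONT_BEACON; }
```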
We were tuning until about 30 minutes before the contest started.
About 30 minutes before the contest we knew we had to put in the contest-supplied code for turning on with the start light (perceived with a photocell) and turning off in 60 seconds (easily done with a background process). We installed the code but found to our horror that the LCD display on the 6.270 board started dimming. I thought it was probably a bug in the supplied code, but Karl suspected that he had wired the photocell in wrong. With 15 minutes to go, he went over to the soldering table and made and installed another sensor. It worked fine.
While waiting to compete, Karl A. suggested that for COFFEE we not just run the OFFICE program. He suggested that instead, every so often, the robot should lockin AND track on the left or right for a while. This would mean moving sideways to the beacon rather than straight for it. I coded this up, but couldn't even compile it because the robot and its board were competing. After the first round of OFFICE we ran with the robot to the laptop. Downloaded. Two syntax errors. Fixed. Downloaded. Ran back with the robot to the first round of COFFEE.
The front door was open. The easiest situation for us. JaCK zoomed forward, the crowd liked the speed. JaCK stopped, the crowd was worried. JaCK turned slightly and then zoomed again, turned, zoomed, entered the target area, the crowd liked the fast win. The other robot had not moved very far.
The target area was marked in black, and we had a white/black floor sensor on the robot, but we never saw the point of stopping just because JaCK thought he was in the target area. So JaCK kept hitting the back wall of the target area, backing up and turning a bit, and then turning back to the beacon and slamming into the wall again. The crowd thought this was very funny.
Talk about emergent behavior; we had no idea what JaCK would do. Here is what he did do. He zoomed, hit, locked, zoomed, hit, etc. Sometimes he was even behind walls, but with his bump-and-backup and "track sideways" he always got out. He reached the coffee pot first (1 point). Then he backed up and hit it again, etc. Every team had been given magnets that would stick to the coffee pot, but we had always figured that it was too hard to move the coffee pot, so we had never bothered to mount our magnet so JaCK would stick. Besides, backing up and hitting again seemed the best way to get the second point (for being the last to initiate contact).
The other robot eventually made it to the coffee pot. In the confusion that followed, JaCK (somehow) got between the coffee pot and the other robot, and they both just stayed stuck in a corner till the 60 seconds were up. So we got the second point.
We were now 1-0 in both events and had visions of a double win.
The open door was on the side. I was so nervous that I wasn't even in the arena; I was 30 feet away watching on a TV monitor. A teammate started JaCK, but forgot to flip the switch to tell him which side he was on. JaCK didn't even make it out of the start area.
1-1 on office.
We all double-checked everything this time (even if we were nervous). Something still went wrong during the beginning of the dead reckoning part; JaCK got stuck in a corner and could not get out. I'll be interested in seeing the video (if there is one) to see what happened.
1-2 on office, so we were eliminated.
The eventual winner was "Death Star 2000", by a team from AT&T. They had one secret weapon: they had found that they could detect the shiny metal of the closed door from the start position (with a red LED detector). They also had good turns, probably by using the servo to turn their beacon detector to any angle. Their robot was the same speed as most of the others (i.e., kind of slow), but it never made a mistake.
Our morale and hopes were very low at this point.
Using his tacking, JaCK found the coffee pot first (1 point). Several times the crowd thought that JaCK was stuck, but it was always just a matter of time before sideways tracking got him out. After he found the pot, he kept backing up and hitting it; the other robot wasn't close, so JaCK had a good chance of getting the second point.
Now, JaCK didn't have much power, but he did have speed and a running start, giving him momentum. So every time he hit the coffee pot, it moved. Miraculously and accidentally, JaCK hit the coffee pot into the painted cup. 3 points. We were very happy. I think most of the crowd thought JaCK hit the pot into the painted cup on purpose. He did not. It was a lucky accident (facilitated by not having the other robot near).
The other robot got there fast (1 point). JaCK eventually got there and started his tapping. Unfortunately, the other robot was also tapping and got the last touch (2 points). JaCK's first loss in COFFEE.
Because of 0-0 losses, only 3 robots were left. It was decided that they would compete round robin, and the one with the most points would win.
"Not Yet" and JaCK both went against the 3rd robot and both won 2-0.
So now it was "Not Yet" vs. JaCK.
"Not Yet" was created by Thomas Pendleton, Peter Bonasso, Linda Williams and Bob Ambose. "Not Yet" was very good at Coffee. It would lockin on the beacon, carefully turn 45 degrees (by time), move till it hit the side wall. Wall follow (using 2 wheels on the wall and a distance sensor) until it detected the Coffee Pot, then it would turn toward it. Grab it with a wonderful 2 fingered robot hand that was flexible and strong. Then it would reverse and try to drag the pot to the painting of the cup. Its only loss, I think, was in the first round when it tired to do wall following without the side distance sensor, hit the wall at too steep an angle, and got stuck).
Just as I suspected, "Not Yet" got to the pot first (because it went more directly) (1 point). JaCK got there before too long and started tapping. Because "Not Yet" never let go, JaCK got the last-to-start-touching point. Time ran out; tie.
Just like round 5, except that they were positioned so that every time JaCK pulled back to tap, "Not Yet" pulled the coffee pot forward a bit toward the painted cup. With time almost running out and with "Not Yet" almost missing the painting, it got the pot in and won the second point. 1st place to "Not Yet". Runner-up: JaCK.
I didn't feel bad, because I could tell that "Not Yet" would always get the first point and JaCK would usually get the 2nd point, but I didn't think JaCK could ever expect to get the 3rd point. So it was just a matter of time before "Not Yet" would have a round where it would get the pot to the cup area and win.
I had trouble sleeping the night before the contest, so I had a total of 13 hours of sleep over 4 days. The last night, when we stayed up, we amused ourselves by observing our own alertness rise and fall and rise and fall.
It was fun having a "robot lab" badge. It gave us 24-hour access to the exhibit hall and we could use a "staff-only" security door that was also a short cut to McDonald's.
Interactive C is great. We never had problems with memory or speed. I want a board that can run it.
Our 6.270 board twice failed because of hardware problems. Both times the problem was fixed with a soldering iron. I wonder how sturdy the boards are. I want a cheap, *reliable* board that I can run IC on.
At the end of the competition, they sold 17 robot kits to participants for $550 each. I don't know what happened to the other 5 kits.
What programmers call bugs, some robotists call "emergent behavior".
There was lots of within-team interaction as we worked and ate and relaxed (a bit) for the 3 days. There was very little interaction between teams.
I could have used my C manual. IC could have used (or I could have figured out how to use) "?", "#include", and "exec()".
A team without someone who knew C and someone who knew how to solder would have been in trouble.
After the contest, at about 6pm, I got back to the dorm. I had a 2-day-old message at the dorm to call my wife. I called a couple of times, no answer, took a walk, called, no answer, left a message saying I'd try later. I decided to set an alarm, sleep for an hour, and then call again. I fell asleep easily. I woke up on my own. It was dark; the clock said 9:40pm. The alarm must not have gone off. I went downstairs to the payphone and called home. My wife answered and we talked for about 30 minutes. As we were saying goodbye I said "see you tomorrow".
When setting my alarm clock I had apparently left it in "time set" mode rather than "alarm" mode; it apparently keeps time more slowly when in "time set" mode. If you asked the clock's designers, they'd probably claim that it was not a bug, but rather an "emergent behavior".
- Carl Kadie
The code is available if you are interested.