How do you teach computers to teach each other? One answer: Pac-Man.
That’s the testbed in Washington State University professor Matthew E. Taylor’s latest project, in which one piece of software teaches another to play the game.
In essence, there are two pieces of software: one programmed with the actions and variables that add up to winning Pac-Man play, and one programmed to “learn” those behaviors. “The ‘teacher’ can only influence the student by ‘suggesting’ what action to take,” Taylor explained to me via email. (The research included a similar experiment involving a version of StarCraft.)
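The teacher-suggests-an-action setup can be sketched in a few lines. This is a minimal illustration, not Taylor’s actual system: the task (a tiny corridor instead of Pac-Man), the fixed advice budget, and all names here are assumptions for the sake of the example. The student runs ordinary Q-learning; while the teacher’s budget lasts, the student follows the suggested action, and afterward it learns on its own.

```python
import random

# Hypothetical toy task standing in for Pac-Man: a 1-D corridor.
# The student starts at cell 0 and is rewarded only for reaching
# the goal at cell 4. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL, ACTIONS = 5, 4, (0, 1)

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def teacher_advice(state):
    """The 'teacher': a trained policy that knows to head right."""
    return 1

def train_student(advice_budget=50, episodes=200,
                  alpha=0.5, gamma=0.9, eps=0.2):
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # student's Q-table
    budget = advice_budget
    rng = random.Random(0)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if budget > 0:                       # teacher suggests an action
                a, budget = teacher_advice(s), budget - 1
            elif rng.random() < eps:             # otherwise explore...
                a = rng.choice(ACTIONS)
            else:                                # ...or exploit what's learned
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

The key point the sketch captures is that advice is cheap and temporary: the teacher never copies its internal values into the student, it only steers early experience, and the student’s own learning carries on after the budget runs out.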
Taylor’s research focuses on artificial intelligence, and he obviously has higher goals than training machines to master arcade-era classics. Games, with constrained rules and clearly observable outcomes, are a convenient starting point in learning how to teach machines to teach one another.
Down the road, if one robotic device can “train” another to perform specific tasks, it avoids the problems that arise when differing models or upgrades make it hard to simply copy information from one device to another — so instead of a data transfer, a “knowledge” transfer occurs.
Think of robotic vacuums here, not Terminator-esque robo-warriors. After all, Taylor’s subjects are just now mastering Pac-Man.
Specifically, Taylor’s research involved “two distinct pieces of software interacting,” he said. “[They’re] basically simulated robots.” One was, in effect, a Pac-Man ace, the other a novice.
So how did it go? Here’s how the newbie software fared at Pac-Man:
Pretty sad. But after interacting with software programmed with Pac-Man expertise, and with strategies for teaching it, things got better:
Eventually, the newbies actually surpassed their masters’ expertise. An impressive result for both “teacher” and “pupil.”
“This work is different from existing work because we not only allow very different types of robots to teach each other, but also because it’s applicable to humans,” Taylor added. “Thus far, we’ve conducted preliminary studies on teaching humans to play Pac-Man with a trained agent.”
And that’s partly where all this could lead: Scenarios involving robots training humans. (Please, no jokes about the HR department here.) As computers prove more adept at teaching increasingly complex tasks, it could turn out that we all have a lot to learn — from them.