Yesterday I had an amazing meeting with several of my friends and coworkers to discuss a new possible project coming down the pike, and although I can’t tell you what that project is yet, it wound up triggering some intense late-night thinking.
How do robots tell stories?
We’ve all seen robots as characters – C-3PO and R2-D2 in Star Wars, Data in Star Trek, the Cylons in Battlestar Galactica, Pixar’s WALL-E, Number 5 in Short Circuit, and the kid in Small Wonder are just a sampling from Western stories, and the list explodes if you incorporate Eastern stories like Voltron, Robotech, Transformers, Mega Man/Rockman, Astro Boy and so on. But what about robots as storytellers? That list is significantly smaller – we children of the 80s remember Teddy Ruxpin, of course, and Disney’s animatronic Hall of Presidents; newer models include the Robo-Murasaki from Japan’s Robo-Garage, which gives a performance of The Tale of Genji, and now Violet’s Nabaztag robot bunny is getting into the act with Book:z, RFID-enabled texts that apparently prompt the robot bunnies to read the stories aloud. (I haven’t tried this yet and the details remain sort of scant on the Violet site, so I may be getting this one wrong.) So far, the answer to “how do robots tell stories” appears, technically, to be “by playing MP3 or other audio files metatagged with particular triggers that activate limited motions and facial reactions at certain points of the story”.
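To make that model concrete, here’s a rough sketch in Python of what a trigger-tagged audio story might look like under the hood – a list of timestamped motion cues paired with the audio, which the robot polls as playback advances. All the names and cue values here are hypothetical, not anything Violet or the Ruxpin line actually ships:

```python
# Hypothetical sketch: story audio tagged with timestamped motion cues.
STORY_CUES = [
    # (seconds into the audio, motion to trigger)
    (0.0, "blink"),
    (4.5, "mouth_open"),
    (12.0, "tilt_head"),
    (30.0, "wave_ears"),
]

def cues_between(cues, last_time, now):
    """Return the motions whose timestamps fall in (last_time, now]."""
    return [motion for t, motion in cues if last_time < t <= now]

# As the audio plays, the robot periodically asks for newly due motions:
print(cues_between(STORY_CUES, 0.0, 5.0))   # cues due in the first 5 seconds
```

The point is just that the “storytelling” is entirely data-driven: swap in a new audio file and cue list and the same animatronics perform a new tale.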
But what if?
The new Nabaztag ‘ztag’ RFID chips enable the Nabaztag ‘mother robot’ to perform certain actions when a ztag is sensed nearby, such as the ones embedded in the new Nabaztag:nano mini-bunnies. I’ve written on toys and transmedia storytelling before (which led to the presentation I gave on a similar topic at the Toy Researchers Association in Greece last summer), suggesting a mechanic in which the presence of RFID-enabled action figures unlocks certain episodes inside a database, which could then be streamed via a wi-fi enabled playset hooked up to a screen of some sort – but what if the robot itself were the performer of the narrative? What if the playset were a Ruxpin-like character telling a story triggered by the presence of the RFID-enabled figures – or a new story downloaded each week via podcast or RSS – with the story chapters tagged with if/then branches dependent upon which action figures were in the presence of the playset?

A certain degree of marketing could be embedded in this, of course (“To hear how Stratos wrested the Emerald of Jun-ka away from Trap Jaw, order Stratos and Trap Jaw online at www.giveusyourmoney.com”…), but not enough to be crushingly over-commercialized. Educational components could be added to the system organically through optional educational characters, such as engineers, musicians, scientists and historians: parents who wanted their children to get a dose of education threaded through their narratives could add those figures to the collection and thus activate the educational mode of the story. Similarly, parents who wanted to deliver strong female role models could load the collection with strong female characters. And not every figure need have its chapters delivered in the same medium – one character might deliver its tale in comics each week, while another delivers its story in a downloadable game.
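The if/then branching here is simpler than it sounds. Here’s a toy sketch – every chapter title, figure name and the ordering rule are my own hypothetical examples – in which each chapter declares which tagged figures it needs, and the playset plays the richest chapter the child’s collection unlocks:

```python
# Hypothetical sketch of RFID-driven chapter branching.
# Chapters are ordered from most to least specific, so the richest
# unlocked branch wins; the last entry is a no-figures fallback.
CHAPTERS = [
    {"title": "Stratos and the Emerald of Jun-ka",
     "requires": {"stratos", "trap_jaw"}},
    {"title": "Stratos Flies Alone", "requires": {"stratos"}},
    {"title": "A Quiet Evening", "requires": set()},
]

def pick_chapter(detected_tags):
    """Return the first chapter whose required figures are all present."""
    for chapter in CHAPTERS:
        if chapter["requires"] <= detected_tags:  # subset test
            return chapter["title"]

print(pick_chapter({"stratos", "trap_jaw", "he_man"}))  # richest branch
print(pick_chapter(set()))                              # fallback story
```

The weekly podcast or RSS download would simply replace the chapter table; the branching logic in the playset never changes.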
In its ideal state, a full collection of figures could result in a rich transmedia educational experience, delivered in a fashion that adds an element of performance through the animatronics of the storytelling robot.
The components need not even be action figures – they could be diegetic artifacts placed in the hands of the storyteller bot, like an antique placed in the hands of a kindly grandfather. The robot’s eyes go up to the ceiling, one of its hands (the one not holding the artifact) lifts to its chin, the robot says “Let me see… My, this takes me back…” while the file is being wirelessly downloaded from a remote server, and then the storybot begins to unreel its tale. Taking a page from location-based entertainment, if the bot were wirelessly connected to other accessories in the room, it might transform the entire local space into a performance chamber by triggering those devices to come to life when appropriate, filling televisions and digital picture frames with images from the storyworld, or playing music and sound effects through wi-fi enabled radios or surround sound systems. Such performative actions might even be built into the story itself; imagine if the storybot were made to look like Gandalf or Dumbledore, using its magic to trigger these events in the child’s own living room. We already see similar technology in use in universal remotes; a storybot could be programmed to work with the devices in a living room (or playroom) in the same fashion as a Logitech Harmony, or an entire platform of devices could be created inside of the storybot’s parent brand. I’ve wired up my own living room with remote-controlled lighting using a simple Christmas tree infrared key fob I bought for around twenty bucks at Target; including dimmer switches in the system, or support for existing brands of home automation equipment, would not be overly complex.
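Architecturally, the “performance chamber” is just an event fan-out: named story events map to whatever device actions have been registered in the room. A minimal sketch, with every device name and event entirely hypothetical:

```python
# Hypothetical sketch: story events fan out to registered room devices.
handlers = {}

def on(event):
    """Register a device action to run when a named story event fires."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

@on("storm_scene")
def dim_lights():
    return "lights: dim to 20%"

@on("storm_scene")
def play_thunder():
    return "speakers: thunder.mp3"

def fire(event):
    """Trigger every device action tied to this story event."""
    return [fn() for fn in handlers.get(event, [])]

print(fire("storm_scene"))
```

A house with no wi-fi speakers simply registers fewer handlers – the story degrades gracefully rather than breaking.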
On a more personal note, this idea could also be mashed up with the Digital Storytelling Movement, using such performative recording devices to tell our own stories – the robot in question could be “haunted” by my ghost, telling personal stories of my time at MIT to my great-grandchildren, or telling such tales remotely to friends around the world. The digital picture frames in the home could keep the pictures from that particular story up for a week, reminding the child of the week’s lesson as they go about their daily lives. Recording such rich experiences may not be that complicated either – simple motion capture through Wiimotes could be used to ‘tag’ personally-recorded MP3s, encoding the digital performances to be delivered through such storybots, and tagging the MP3s with photos to deliver to the screens should theoretically be not much more complicated than creating a slideshow or Flickr album.
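The photo-tagging half really is slideshow-simple: a list of timestamps paired with image files, and a lookup for which photo belongs on the frames at any moment of playback. A sketch, with the filenames obviously hypothetical:

```python
# Hypothetical sketch: a personal recording tagged with a photo track,
# so the room's digital frames change as the story advances.
import bisect

PHOTO_TRACK = [
    (0.0, "mit_dome.jpg"),
    (45.0, "media_lab.jpg"),
    (120.0, "graduation.jpg"),
]

def photo_at(track, seconds):
    """Return the photo that should be showing at this point in playback."""
    times = [t for t, _ in track]
    i = bisect.bisect_right(times, seconds) - 1
    return track[max(i, 0)][1]

print(photo_at(PHOTO_TRACK, 60.0))  # mid-story photo
```

The Wiimote gesture tags would live in a parallel track of exactly the same shape, driving the robot’s motions instead of the frames.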
So here’s the question – this is possible, yes, but is it sound? That is, does storytelling through robots offer any kind of advantage over storytelling through a television screen? Would an episode of the newly-renewed (!) Dollhouse be improved by Joss Whedon’s voice narrating the whole thing, customized based on whether or not you had the figures of Boyd, Topher and Alpha? Or is this its own thing? Are we simply seeing the emergence of a new kind of storytelling, or – better yet – the re-emergence of personalized, one-on-one, performative storytelling?
Where do we go from here?