OK, so it’s a robot and not a roach. But it is a robot that *looks* a lot like a roach. Researchers at Bielefeld University are experimenting with emergent behavior on a robot platform they named Hector. Their software thus far has been reactive. The new software aims to give the robot “what if” capabilities to solve problems it has not been programmed for. This would imbue the robot with independent goal-directed behavior – i.e. robot intentions.
But beyond that, “they have now developed a software architecture that could enable Hector to see himself as others see him.” In other words, they gave it theory of mind, and their ultimate goal is for it to be able to sense the intentions of humans and take those into account when formulating responses and actions. They want it to be self-aware. Though the rest of the world will probably see in this the parallels to Skynet of Terminator fame, to me the more interesting part is the notion that it will sense human intention.
Perhaps this is because the current crop of “smart” devices seems very autistic to me. Though they have a wide range of apparent intelligence, they respond only to what they can directly sense, and only within a context of which they are the center. The inability to make inferences about humans, and in particular to understand their intentions, is a typically autistic cognitive deficit. While such inference can be emulated to some extent, the result is often perceived as inauthentic and creepy, which may be why I write about it so much.
The quest by the marketing industry to provide targeted messaging tailored to your specific interests and intentions very much parallels the autistic experience. Any given product or brand seeks to better understand how it is perceived by humans. Or to put it another way, products and brands lack theory of mind and the ability to infer human emotions and intentions from non-verbal communication. They lack cognitive empathy.
Like any autistic person, they attempt to mitigate their cognitive deficits by gathering data, observing reactions, forming a model of human behavior, calculating appropriate responses, then improving data sources and refining the model over time. When humans do this we call it vocational training and independence skills. When vendors do this we call it ad-tech. Both groups tend to wonder why people at large often perceive it as creepy.
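Stripped of context, that loop is the same whether the agent is a person or a pipeline. A minimal sketch of it in Python; the function names are illustrative placeholders of my own, not any real vendor’s API:

```python
def mitigation_loop(observe, model, respond, refine, rounds: int) -> None:
    """The loop described above: gather data, infer intent from a
    model, act on the inference, then refine the model from the
    reaction -- repeated over time."""
    for _ in range(rounds):
        data = observe()              # gather behavioral data
        inferred = model(data)        # infer intent / emotion from the model
        reaction = respond(inferred)  # deliver a calculated response
        refine(data, reaction)        # update the model from feedback
```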
But it is worth asking whether this approach is appropriate. It is certainly intuitive: in a world where as much as 90% of communication is non-verbal, the expectation is not that others accommodate those with cognitive deficits, but that the deficit-holders learn to overcome them.
If you are autistic or know someone who is, how often have you said or heard “why don’t people just say what they mean?” But in neurotypical society, the first rule about the rules is that you don’t talk about the rules. Among neurotypical people, telling someone what the rules are destroys authenticity: it creates the assumption that the person’s words and actions are merely a reflection of what you want from them.
But that need not be the case with commercial transactions. As Doc Searls explains in his book *The Intention Economy*, using brute-force computing power to analyze your behavior in order to guess your intentions is grossly inefficient. It would be easier to implement a system in which you simply broadcast them. Or “intentcast” them, as he dubbed it. Vendors are starting to embrace the concept and discovering that it actually works. Let us consumers tell you what we want, gather that information en masse to minimize sampling error, and then go produce and deliver it. That works. Who knew?
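To make intentcasting concrete, here is a minimal sketch of what a broadcast intent might look like. The message format and field names are my own illustration, not anything specified in Searls’ book:

```python
from dataclasses import dataclass, field
import json
import time

@dataclass
class Intentcast:
    """A consumer-authored statement of intent, broadcast to any
    vendor that cares to listen -- the inverse of ad-tech, where
    vendors infer intent from surveilled behavior."""
    want: str                              # what the consumer is looking for
    constraints: dict = field(default_factory=dict)
    expires: float = 0.0                   # intent is perishable; let it lapse

    def to_json(self) -> str:
        return json.dumps({
            "want": self.want,
            "constraints": self.constraints,
            "expires": self.expires,
        })

# The consumer states the intent once, explicitly...
cast = Intentcast(
    want="compact laptop",
    constraints={"max_price_usd": 900, "min_ram_gb": 16},
    expires=time.time() + 7 * 24 * 3600,   # valid for one week
)

# ...instead of vendors spending compute guessing it from clickstreams.
print(cast.to_json())
```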
Hector the robot at Bielefeld University is essentially autistic. With the addition of self-awareness and the ability to infer human intentions, Hector may cross the line to creepy. We’ll find out shortly. Much will depend on how he is architected and what our expectations are in terms of robot authenticity. Is that even a thing? Can a robot be “authentic” in the sense that humans are expected to be?
The consciousness of most of our iconic sci-fi robots like C-3PO and Robbie was modeled after that of humans – it was self-contained and part of the robot itself. Even though the Star Wars bots could access the networked world, they didn’t send their sensor data back to a central mother ship to be interpreted, processed, and turned into instructions for the robot to follow, then transmitted back. Everything happened locally. Contrast this with our real-world robots that use the mother ship architecture. Siri, Cortana, Alexa, Google [x], Jibo, Pepper, etc. all phone home more often than E.T. If you use these products, their vendors have access to all the data they send back to the mother ship. Because that data is potentially very valuable, it would be naive to believe that it will be discarded once its benefit to you, the user, has been realized.
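The two architectures are easy to caricature in code. A toy sketch; the class names and the 100 ms round-trip figure are illustrative assumptions, not measurements of any real product:

```python
import time

class MothershipAssistant:
    """Today's pattern: raw sensor data leaves the device, is
    interpreted remotely, and an instruction comes back."""
    def respond(self, sensor_data: bytes) -> str:
        time.sleep(0.1)  # stand-in for an assumed ~100 ms network round trip
        # The vendor now holds a copy of sensor_data indefinitely.
        return "instruction from the cloud"

class LocalAssistant:
    """The sci-fi pattern: everything is interpreted on-device;
    nothing leaves, and no third party retains the data."""
    def respond(self, sensor_data: bytes) -> str:
        # Local inference only; latency is bounded by local compute.
        return "instruction computed locally"

# Compare the response latency of the two architectures.
for bot in (MothershipAssistant(), LocalAssistant()):
    start = time.perf_counter()
    bot.respond(b"camera frame")
    print(type(bot).__name__, f"{time.perf_counter() - start:.3f}s")
```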
It remains to be seen how the software coming out of Bielefeld will work, but one hopes that some aspect of self-awareness will prove so intolerant of processing latency that it strongly favors local processing. If that is true, and the new robot architecture looks more like the science fiction of yesteryear than the science fact of today, there is some hope that someone, somewhere on the planet will finally use intention detection in a non-creepy way that primarily benefits the individual rather than the vendor. It might also yield insights that improve the lives of autistic people by helping us learn to infer human intentions in non-creepy ways.
On the other hand, if you ever read about Hector in Ad Age, we are all doomed. Skynet will have awoken. And it will have a really good deal for you.
This is cross-posted from my business blog at IoPT Consulting because of the autism connection, and edited slightly to explore more of the autism topics. The original is posted here.