by Tegan Moore
In “The Work of Wolves” [on sale now in our current issue] I posit a near future in which animals are genetically and physically modified to increase their intelligence and communication abilities in order to make them more useful to humans. A world like this is predicated on the belief that animals, with their innate intelligence, might be better at some jobs than anything artificially programmed. In many circumstances, dogs are better choices than robots. Here’s why I think this premise is plausible enough that we might choose to tinker with the brains of animals instead of creating our own intelligence from scratch, and why it inevitably leads to dystopia.
Reason one: Dogs are smarter than we thought.
Dogs aren’t smart in the way we commonly think they are. They don’t, for example, feel guilty when they do something wrong—it’s safer to say that they feel threatened by your emotional response and are throwing out appeasement gestures to make themselves look non-threatening. And while some might swear their dogs read their minds, what they’re really doing is reading body language and predicting what their human might be about to do.
But that right there is fascinating. Dogs are exceptionally good at reading human body language and facial expressions—better than any other species of animal. Even though bared teeth signal aggression or anxiety among canines, when you smile toothily at your dog he can interpret the gesture as friendly rather than threatening.
Why? Because our species shares a relationship with dogs that’s closer than any other. We didn’t domesticate dogs; dogs domesticated themselves, living at the edge of human society and gradually adapting more and more to live alongside us. Dogs and humans co-evolved, and they have an exceptional ability to interpret our emotional and physical cues.
Current research suggests dogs have roughly the innate intelligence of a two-year-old human. Not too impressive. However, some individual dogs are much more intelligent: Chaser, a Border Collie who died just this July, learned over one thousand nouns and could understand when they were combined in novel ways. She’s not unique.
[Above: The author’s dogs displaying their trained odor detection skills while competing in K9 Nosework.]
So dogs have evolved the ability to interpret our emotions and predict our behavior, and understanding human language is not beyond their grasp. How much genetic and physical change would be required to bring a dog’s intelligence to the level of an adult human? I don’t know—I’m just a science fiction writer.
Reason two: Robots aren’t as smart as we’d hoped—at least not yet.
AI is currently hilariously confused by reality. It’s fascinating to watch: Researchers feed an algorithm a problem to solve and let it go bat-shit naming cats and paint colors or writing movie scripts. The results are occasionally recognizable, but rarely natural. AI does okay picking out a book you might want to buy, but if asked to give the book a title it does a fairly abysmal job.
A self-driving car has already killed one person after its collision-prevention software decided a pedestrian was a “false positive.” Researchers have also shown that these cars can be stymied by graffiti on signs. These aren’t impossible problems, and of course a technology in its infancy will fail often before it becomes reliable, but it’s an illustration of the point: It’s hard to program every possible outcome, and software is not great at making logical jumps.
Research has already been done on creating robot “guide dogs” for the visually impaired. It is expensive and time-consuming to train a guide dog—at least two years of training, costing up to $60,000—and these dogs only work eight to ten years before they must retire. It seems like a robot would make an ideal replacement. All it needs to do is see, right? But what if, say, there’s a ground-hornet nest next to the sidewalk? Your robot guide doesn’t know to care about hornet stings.
This might be just a programming issue. Put it on the list: Robot guide dogs need to be programmed to avoid hornet nests. But there are a lot of these programming issues, and they’ll keep coming up for a long time, because the number of weird things possible in this world is not finite.
Intelligence evolved in the experiential world. AI did not. The world doesn’t make a lot of sense to AI. Current theories place the singularity—the point at which technological advancement leaves the realm of human control—thirty to fifty years in the future. It seems reasonable to guess that we might try to shortcut that timeline by merging biological systems with artificial ones.
Reason three: Robots are good specialists; dogs are generalist-specialists.
MIT researchers have been trying to create artificial olfaction—a robot nose. They have yet to build one that does a better job than any old dog from the pound. Dogs can be trained to indicate the presence of cancerous cells within just a few months. They do this with greater accuracy than any other system, and some can spontaneously recognize other kinds of cancer, even if it’s not the one they’ve been trained to detect. They’re the best detection system we have, not just because of their excellent noses, but also because of their observational skills.
Here is where dogs are unquestionably better than robots: In addition to being physically strong, mentally adaptable, easy to train, possessing a stellar sense of smell and excellent hearing, they are also generalists. They aren’t just a nose, something to sort data, or a sensor to detect obstacles. They evolved in this world, and it makes inherent sense to them. They co-evolved with us, and we make . . . well, at least some sense to them.
AI does best with limited data sets. Give an algorithm a data set and ask it to predict future iterations and you get some good responses. Give it multiple data sets and multiple parameters and the job gets harder. Ask it to write a recipe, and it will lose its shit.
A dog is already an expert at juggling multiple data sets, and in addition to being able to process multiple kinds of information at once, it can synthesize information in ways robot brains, so far, cannot.
Why reinvent the wheel?
But this is all beside the point because this whole idea is pretty much evil.
There are many, many reasons this is a terrible idea, even beyond those addressed in “The Work of Wolves.” The ethical problems with creating an ultra-intelligent animal are the precise reason this topic is an interesting subject for science fiction, and one I hope to grapple with more in the future.