If you follow tech news closely, you might vaguely remember reading about people who used cybernetic implants to send signals to each other during a card game. Discovery News has called human enhancements like this one “alarming,” and it would be difficult for many science fiction fans to disagree when they practically grew up on images of the Borg. When DARPA starts talking about giving soldiers a cortical modem that could relay signals to the optic nerves, it’s not a far stretch to imagine such implants being used to trick soldiers into seeing an enemy that doesn’t exist. How would a soldier react if he realized that he couldn’t trust anything he saw on the battlefield?
Some people might be tempted to suppress this kind of technology, figuring that it’s better not to have it at all than to see it misused. The problem is that suppression would only keep the technology out of the hands of responsible people who could use it ethically. It wouldn’t stop unethical smugglers from selling it on the black market at inflated prices, and of course the smugglers won’t care whether their products harm their customers as long as they sell. Nor would it stop rogue governments from developing cybernetic enhancements that could enslave their own populations in a way that makes the Borg look tame. Once people know that something is possible and there is a market for it, someone will find a way to make it happen, legal or not.
Kevin Warwick’s TED Talk on Cybernetics and Healthcare
So, obviously, the solution isn’t suppression. I like the way Isaac Asimov put it in the foreword to his robot novel The Naked Sun:
“Even as a youngster, though, I could not bring myself to believe that if knowledge presented danger, the solution was ignorance. To me, it always seemed that the solution had to be wisdom. You did not refuse to look at the danger, rather you learned how to handle it safely.”
He was referring to how his contribution to science fiction, most notably the Three Laws of Robotics, came about. At the time, science fiction authors thought of robots much the way cybernetic enhancements are seen now: as something dangerous that could turn against humanity without warning. In Asimov’s novels, the Three Laws could prevent a robot from harming a human because robotics experts with a sense of ethics had foreseen the danger. They did not refuse to design robots. Instead, they arranged matters so that a robotic “brain” would cease to function if it violated the First Law. They found a way to, as Asimov said, “handle it safely.”
That’s not to say there will be no danger, even when safeties are built into the technologies involved. The physicists working on the Manhattan Project were in a race with WWII-era Germany and knew what their work would be used for. It may even have stung Oppenheimer to learn that the President had dismissed his concerns about war strategies involving the nuclear bomb as the whining of a “crybaby scientist.” Yet nuclear technology can also take a more peaceful turn in the form of nuclear power, which can benefit humanity if used safely and responsibly. If we start building nuclear reactors and make a few improvements to our transportation infrastructure now, we could stop using fossil fuels by the time we would have run out of oil and coal anyway.
Human nature counts for a lot more than a completely neutral technology that just sits there until somebody does something with it. New technologies will usually be misused by the criminal element – and I don’t mean people who own guns but forgot to send in their FOID renewal application before the expiration date on their existing cards. In any sensible world, forgetting the paperwork wouldn’t make someone a criminal, and a simple reminder from the appropriate agency would suffice. I mean the kind of people who would order a hit on a rival and offer Bitcoin as payment. Even then, I don’t blame Bitcoin, because it is a “dumb” technology that makes no association between a stream of bits flowing over the network and the life of a human being. Neither does Bitcoin know that charitable miners have mined a little over 51 Bitcoin for a German nonprofit that helps children orphaned by AIDS. Bitcoin is no more good or evil than the people who use it.
I could see cybernetic enhancements helping a man walk again after he’s been paralyzed in a hit-and-run accident, or helping somebody who was born blind gain his vision. This is what technology can do in the hands of people who use it wisely. Tyrannical regimes will ignore ethical concerns and, if they have to, put their own scientists and engineers to work developing their own versions. That’s why I’d rather see cutting-edge technologies in the hands of good people who can use them to improve their everyday lives and the lives of those around them. It’s a way to flip off the criminal element and the tyrannical regimes without suppressing new technologies out of fear that they might be abused.
Books On Cybernetics