“Ye cannae change the laws of physics, Jim,” Chief Engineer Scott of the starship Enterprise was fond of saying, and that is true. The laws of physics precede the physical universe: the universe was formed according to those laws, our planet was formed according to those laws, our evolution happened in accordance with those laws, and our current existence is governed by those laws. They do not exist to benefit mankind specifically, but they aren’t biased against us, either. The laws of physics are entirely indifferent to the existence of mankind. We obey them, but there is no need to fear them.
Our natural world (the wind and the waves, the weather, the cycles of the seasons, day and night) is not trying to kill us; blizzards and hurricanes do not target specific people or communities. There is reason to be aware of these things, to respect them, but not to fear them.
Likewise, I do not fear artificial intelligence. Unless it is programmed otherwise, there is no reason to think it will be hostile to mankind. That could change, of course. If part of the definition of intelligence is a desire to continue one’s own existence, and intelligent machines come to see human beings as a threat to that existence, I suppose it’s possible they could go full pre-emptive, and then we’d be in trouble. But in that case it becomes a self-fulfilling (and thus easily avoidable) prophecy, like voodoo or other religions, which only have power if you believe in them.
That is, don’t threaten to shut them off and they won’t kill you.
I suppose there is also the possibility that they could be programmed to kill. Flesh-and-blood human soldiers are programmed to kill; presumably that would be even easier with androids, and I’m sure the U.S. military, and possibly the militaries of a few other countries, are working on it right now.
If we allow that, I suppose we will deserve what we get, and perhaps it’s time we legislated some version of Asimov’s Laws, which people quote as if they were actual principles and not just something Isaac Asimov made up. A simple law would do: “If you program a robot to kill somebody, you can be charged with murder.”
P.S. This happens sometimes. I set out to write a blog post about why we should not fear AI (because I’m really looking forward to AI; it has the potential to eliminate poverty, pollution, and bad design, making the world into a paradise for all of its children), and by the time the last word is written, I’m thinking something quite the opposite.
I still think we should proceed. But a bit of caution would not be a bad idea.