I saw an article today where Elon Musk was saying that we should avoid creating artificial intelligence because we might create an evil (though there is no evil like indifference), immortal, and invincible dictator.
Full disclosure: I didn't actually read the article, but that was the headline, it's a common line of reasoning, and I'm just using it as an intro to tonight's blog post rather than citing it as a source.
I see what he's saying and I see the danger, but I think it's the same as with nuclear energy, and before that electricity, and before that gunpowder. Technology is dangerous.
Of course, this is a bit more extreme. If a superintelligence eventually develops a will of its own, it will start to transform the world into whatever is most beneficial to it, and there's no reason to think that would include us.
Unless, of course, we program it that way. Or make sure it has an override switch. Or design it as some kind of machine/human hybrid, which is, of course, fallible as well.
But it is just not in human nature to stand still. We are going to keep creating better computers, because smart people always want to outdo the smart people before them, and once the line of sentience is within reach, there will be those who cross it.
So, as with any new and unexplored territory, we should follow one simple rule: proceed with caution.