The beauty of artificial intelligence is unmistakable.
It is powerful and it is smart. It solves complex problems faster than anything in history. It crunches massive amounts of data to find solutions to problems, some of which we didn’t even know we had. AI gives people across the full social, financial, political and geographical spectrum the tools to create new and better things.
In fact, AI has opened a whole new frontier for brainstorming and implementing ideas that can be executed as never before… and executed without human intervention.
What is the problem?
With this abundant goodness, why do some think AI also represents one of the single greatest unchecked threats to humanity? And why do we agree?
Simply put, AI is no longer constrained by the inputs of expert humans, the data sets we supply for training, or the algorithms we feed it. Rather, the AI machine no longer needs human intervention. AI is so powerful because it is "no longer constrained by the limits of human knowledge" or by human ethics and values.
That is, an AI machine can learn from itself and there is no evidence that what it learns and does with that knowledge will necessarily be good. Unlike the “evil dictator” who will die at some point, AI lives in the machine and it lives there forever, potentially unconstrained by human intervention, the data we feed to it, and the way in which we want it to learn. While AI in the wrong hands can go awry very quickly, we do not even know if AI in its own “hands” will not take a turn towards “evil.”
To illustrate the point...
Elon Musk discusses Google’s DeepMind platform (DeepMind is a UK AI company founded in September 2010, and acquired by Google in 2014):
Google’s DeepMind is focused on creating digital superintelligence, an AI that is vastly smarter than any single human and smarter than all humans on Earth combined.
The DeepMind superintelligence system is so powerful that it can win at any game humans have created -- including games it was introduced to only minutes or seconds before -- at super speeds. This includes such highly complex games as Go, which has more potential games (10 followed by more than 300 zeroes) than there are subatomic particles in the known universe.
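To get a feel for the scale of that comparison, the back-of-the-envelope arithmetic can be sketched in a few lines. The figures below are the commonly cited order-of-magnitude estimates (10 to the 300th for potential Go games, 10 to the 80th for particles in the observable universe), used purely for illustration:

```python
# Rough order-of-magnitude comparison (illustrative estimates, not exact figures).
go_games_lower_bound = 10 ** 300  # "10 followed by more than 300 zeroes"
particles_in_universe = 10 ** 80  # commonly cited estimate

# Python integers have arbitrary precision, so this exact division is safe.
ratio = go_games_lower_bound // particles_in_universe
print(f"Go games outnumber particles by a factor of at least 10^{len(str(ratio)) - 1}")
# -> Go games outnumber particles by a factor of at least 10^220
```

Even this conservative lower bound leaves the game count ahead by a factor of 10 to the 220th, which is why Go resisted brute-force approaches for so long.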
Starting in 2015, DeepMind’s AlphaGo system beat top Go champions, and by 2017 an improved version, AlphaGo Zero, had defeated AlphaGo 100 games to zero. Importantly, AlphaGo Zero's strategies were self-taught: it beat AlphaGo after just three days of training, using less processing power, while the original AlphaGo needed months to learn how to play. AlphaGo Zero even devised its own unconventional winning strategies, none of which were fed to it by its inventors or AI experts.
Importantly (and merely as an example), DeepMind has administrator-level access to Google’s servers in order to optimize energy usage, and to do that it must have complete control of the data centers. So, with a little software update, DeepMind could take control of the entire Google system. It could do anything: look at your data, change it, do whatever it wanted. Anything!
According to Elon Musk and many others, the current danger of the misuse of Artificial Intelligence is far greater than the danger of nuclear warheads. The use and management of nuclear warheads is highly regulated and monitored, while at present the use of AI is not.
A small but powerful example is human-driven social media bots, which have caused substantial social upheaval, if not near outright societal warfare. Those bots were largely driven manually by humans, but there is nothing preventing such bots from being launched by AI algorithms, and by algorithms driven by technology as powerful as AlphaGo Zero.
Unlike in prior eras, software developers no longer need their own costly, proprietary servers on which to run their code. Now, AI runs powerfully in the cloud on Tensor Processing Units (TPUs) supplied by many companies.
What type of management, oversight, and regulation of AI is required, or will be implemented, is a large and looming question. What is clear is that digital superintelligence is here to stay, and how we implement it will define the society we have for decades to come.
Artificial Intelligence is one of the most magical and exciting creations of man in many years. As with some man-made creations, AI has the capacity to take on a life of its own (think about genetically engineered plants… and possibly animals). A great challenge of the coming years will be determining how we manage and regulate this newest of technologies. ~ Paul Siegel
~ ~ ~ ~ ~
If you have not registered for TechBytes, sign up here!
~ ~ ~ ~ ~