The Brave New World of Artificial Intelligence

Experience has taught me to be skeptical about new technology.  Many years ago I anticipated that the tools of the Internet could create a virtual world of grief similar to what we see in the real world today.  I feel that my fears have been justified.

Some of these new technologies are presented to us in a somewhat idealistic manner when they first arrive on the scene. We were told that the Internet was going to open the world to the masses by providing universal connectivity and tools for communication, and thus level the playing field for everybody. Technologies like this were going to transform the world for the better. We just needed to let the technology loose to see all its life-changing benefits. We are so full of ourselves that we do not even pause sufficiently to think about the problems we may create. This is silly. We seem to be ignoring natural human behavior. The human tendency is to eventually find a way to destroy every good thing.

And now there is this thing called Artificial Intelligence.

The subject of Artificial Intelligence has been a topic of research for many years.  The name seems to imply that machines can be made to think like human beings (is that even a good thing?), and eventually they should be able to behave like humans.  I have been a skeptic, although I will admit to not having spent enough time to really understand what it all means.  I think the name itself is a turnoff for me, making it sound like it is more than it really is.

Artificial Intelligence, or AI, is becoming more mainstream these days, and the definition has undergone a little more refinement in my mind. Specifically, AI is not to be considered in too broad a sense today, but in a more focused manner. These days one primarily thinks about AI for particular functions. For example, AI might help to design an autonomous vehicle where the vehicle reacts as if a human were in control, but that does not mean the same machine can climb a tree, make a good cup of coffee, or plan a vacation. Implementations of “AI” are compartmentalized, be it for speech recognition, image classification, autonomous vehicles, question-answering systems, etc.

And what is basically happening is that we now have enough processing power in computing systems, and the ability to collect, store, and process (in some statistical manner) large amounts of historical data related to particular functions and features, to design systems in which computers can make decisions in a way similar to the human decision-making process that created the data in the first place, and to do so with a fair degree of confidence. I hear terms related to the general topic of AI – machine learning, neural networks, deep learning, data mining, pattern recognition, etc. – subjects that I know very little about, but in my mind they all seem to be about finding ways to process data to come up with algorithms to make decisions. (I understand that neural networks in particular are about algorithms that try to mimic the neural networks of the brain.)
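To make this idea of "learning decisions from historical data" a little more concrete, here is a toy sketch in Python of one of the simplest such techniques, a nearest-neighbor rule. Everything here, the data and the labels, is invented purely for illustration; real systems are vastly more sophisticated, but the basic shape is the same: past decisions in, imitated decisions out.

```python
# A toy illustration of "learning" a decision rule from historical data:
# a 1-nearest-neighbor classifier. All data here is invented for the example.

def nearest_neighbor(history, query):
    """Return the label of the historical example closest to the query."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(history, key=lambda example: distance(example[0], query))
    return best[1]

# Historical observations: (features, decision a human made at the time).
history = [
    ((1.0, 1.0), "approve"),
    ((1.2, 0.9), "approve"),
    ((5.0, 5.0), "reject"),
    ((4.8, 5.2), "reject"),
]

# The machine now "decides" new cases by imitating the closest past decision.
print(nearest_neighbor(history, (1.1, 1.0)))  # approve
print(nearest_neighbor(history, (5.1, 4.9)))  # reject
```

Note that the machine has no understanding of what "approve" or "reject" means; it is only echoing the statistics of whatever data it was given, which is exactly why the quality and provenance of that data matter so much.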

So things are moving along in this field, and I think it is because of the advancement of basic technologies related to data collection and processing. New algorithms and approaches are being invented to use all this capability. AI is becoming more fashionable as a technology concept. It is so enticing a concept, and the technology is moving ahead at such a fast pace, that not many people seem to be dwelling on the possible dangers. But this may also be changing, and people like Stephen Hawking and Elon Musk, and other experts, have spoken up on this topic in recent times. (You can see the letter that is referred to in the previous link here.) I myself am not sure that we can create a machine that is greater than the input that went into its design in the sense of decision making, a superintelligence if you will. But we could surely mess up when multiple decision-making processes are involved and they are not brought together properly, or if the learning processes themselves are not done properly. The results could be unexpected. Here are some simpler examples of unexpected results with AI in real life.

https://www.infoworld.com/article/3184205/technology-business/danger-danger-10-alarming-examples-of-ai-gone-wild.html#slide1

My concern with AI is something similar to what has happened in the world of universal networking and the Internet. It is about the innate human tendency to exploit systems for one's own benefit at the expense of others. Who would have imagined the kind of hacking that exists today on the Internet, with bad players easily able to access, slow down, steal from, and control systems that they do not own, for their own nefarious purposes? We were very naive in the initial design of the Internet. Security was not tackled as one of the fundamental requirements in the design of protocols for the Internet. The system is deliberately quite open. Security is only added on at the higher protocol levels when it is thought to be needed.

When it comes to AI, the one subject I have not read much about yet is the likelihood of AI design being motivated by the wrong reasons, for fundamentally bad purposes.  An extreme example would be the development of technology based on AI that could be the foundation of robot battlefields.  We seem to be part of the way there conceptually with the extensive use of remote drone technologies these days.

Since AI depends on a process where algorithms are developed based on data collection, what if some organization, or some person, decides to skew this learning process deliberately to reflect a thinking process that is geared towards destructive outcomes? And what if this kind of technology infiltrates the mainstream in a way that is difficult to contain (just as happens with hacking on the Internet these days)? Will human beings then be fated to try to build systems to contain this infestation, when it would have been easier and wiser to not even let it start in the first place? Is it possible that there are bad players who are already in the process of taking advantage of the new forces we are about to unleash with the easier availability of tools to enable AI?
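The worry above can be sketched concretely: if the learned rule is just a summary of its training data, then whoever controls the data controls the decisions. Here is a toy sketch in Python (all data and labels invented for illustration) using a simple majority-of-nearest-neighbors rule; researchers call this kind of attack "data poisoning".

```python
# Toy sketch of data poisoning: the "learned" decision is only a summary of
# the training data, so injecting skewed examples flips the outcome.
# All data here is invented for illustration.

def knn_decide(history, query, k=3):
    """Majority label among the k historical examples closest to the query."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda ex: distance(ex[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

clean = [
    ((1.0, 1.0), "safe"),
    ((1.1, 0.9), "safe"),
    ((0.9, 1.1), "safe"),
    ((5.0, 5.0), "harmful"),
]

query = (1.0, 1.0)
print(knn_decide(clean, query))  # safe -- honest data, sensible decision

# An attacker quietly adds mislabeled points around the region of interest.
poisoned = clean + [((1.0, 1.05), "harmful"), ((1.05, 1.0), "harmful")]
print(knn_decide(poisoned, query))  # harmful -- same query, skewed data
```

The unsettling part is that nothing in the algorithm changed between the two runs; only the data did, and the poisoned examples could easily hide among thousands of honest ones.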

I have a bad feeling about what is going to happen with the new level of technology that is being created. And I have the sense that we will try to muddle through the new problems we create, problems of our own doing. We will band-aid specific issues as they arise, when it would have been wiser to consider all the possible ramifications of what we are doing up front.

In the world of medicine and health, we always seem to be on the verge of having an epidemic of some kind that existing systems are incapable of handling, but for various reasons the human race has been fortunate enough to survive such episodes, even in more recent times. Sometimes, as in the case of the recent Ebola epidemic, it takes desperate measures and some luck. Will we always be so fortunate?

I wonder if it is possible to have similar scenarios for damage and destruction to humanity and its systems with technologies like AI.

Having written all this, I am hoping that somebody who reads this will tell me that my fears are unfounded, that my ignorance of AI extends even beyond what I have noted here, and that the foundations of the technology will not allow what I have speculated about to happen.  I would love to be pleasantly surprised.  Please, please, please….