The Brave New World of Artificial Intelligence

Experience has taught me to be skeptical about new technology.  Many years ago I anticipated that the tools of the Internet could create a virtual world of grief similar to what we see in the real world today.  I feel that my fears have been justified.

Some of these new technologies are presented to us in a somewhat idealistic manner when they first arrive on the scene.  We were told that the Internet was going to open the world to the masses by providing universal connectivity and tools for communication, and thus level the playing field for everybody.  Technologies like this were going to transform the world for the better.  We just needed to let the technology loose to see all its life-changing benefits.  We are so full of ourselves that we do not even pause long enough to think about the problems we may create.  This is silly.  We seem to be ignoring natural human behavior: the human tendency is to eventually find a way to destroy every good thing.

And now there is this thing called Artificial Intelligence.

Artificial Intelligence has been a topic of research for many years.  The name seems to imply that machines can be made to think like human beings (is that a good thing?), and that eventually they will be able to behave like humans.  I have been a skeptic, although I will admit to not having spent enough time to really understand what it all means.  I think the name itself is a turnoff for me, making the field sound like more than it really is.

Artificial Intelligence, or AI, is becoming more mainstream these days, and the definition has undergone a bit of refinement in my mind.  Specifically, AI is not to be considered in too broad a sense today, but in a more focused manner.  These days one primarily thinks about AI for particular functions.  For example, AI might help to design an autonomous vehicle that reacts as if a human were in control, but that does not mean that the same machine can climb a tree, make a good cup of coffee, or plan a vacation.  Implementations of “AI” are compartmentalized, be it for speech recognition, image classification, autonomous vehicles, question-answering systems, etc.

What is basically happening is that we now have enough processing power in computing systems, and the ability to collect, store, and process (in some statistical manner) large amounts of historical data related to particular functions and features.  This allows us to design systems in which decisions can be made by computers, with a fair degree of confidence, in a way similar to the human decision making that produced the collected data in the first place.  I hear terms related to the general topic of AI – machine learning, neural networks, deep learning, data mining, pattern recognition, etc. – subjects that I know very little about, but in my mind they all seem to be about finding ways to process data to come up with algorithms for making decisions.  (I understand that neural networks in particular are algorithms that try to mimic the neural networks in the brain.)
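To make that idea a little more concrete, here is a toy sketch in Python of what “learning a decision rule from historical data” can mean at its very simplest.  Everything in it – the gardening scenario, the numbers, the nearest-average rule – is invented purely for illustration; real systems are vastly more elaborate, but the shape is the same: summarize past human decisions statistically, then decide new cases by resemblance.

```python
# Toy sketch (invented scenario and data): learn a decision rule from
# historical examples of human decisions, then reuse it on new cases.

# Historical data: (hours_of_daylight, temperature_f) -> did a human
# decide to water the garden?  1 = yes, 0 = no.
history = [
    ((14.0, 85.0), 1),
    ((13.5, 90.0), 1),
    ((12.0, 75.0), 1),
    ((10.0, 60.0), 0),
    ((9.5, 55.0), 0),
    ((11.0, 65.0), 0),
]

def centroid(points):
    """Average of a list of feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# "Learning": summarize each past decision class by its average conditions.
yes_centroid = centroid([x for x, label in history if label == 1])
no_centroid = centroid([x for x, label in history if label == 0])

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def decide(conditions):
    """Make the decision a human probably would have made, by asking
    which class of past decisions the new conditions most resemble."""
    if distance(conditions, yes_centroid) < distance(conditions, no_centroid):
        return 1
    return 0

print(decide((13.0, 80.0)))  # -> 1: resembles the past "yes" decisions
print(decide((9.0, 50.0)))   # -> 0: resembles the past "no" decisions
```

The machine never understands gardening; it only echoes the statistics of the decisions that humans already made.  That is the sense in which the quality of the collected data becomes everything.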

So things are moving along in this field, and I think it is because of the advancement of basic technologies related to data collection and processing.  New algorithms and approaches are being invented to use all this capability.  AI is becoming more fashionable as a technology concept.  It is so enticing a concept, and the technology is moving ahead at such a fast pace, that not many people seem to be dwelling on the possible dangers.  But this may also be changing, and people like Stephen Hawking and Elon Musk, and other experts, have spoken up on this topic in recent times.  (You can see the letter that is referred to in the previous link here.)  I myself am not sure that we can create a machine that is greater, in the sense of decision making, than the input that went into its design: a superintelligence, if you will.  But we could surely mess up when multiple decision making processes are involved and they are not brought together properly, or if the learning processes themselves are not done properly.  The results could be unexpected.  Here are some simpler examples of unexpected results with AI in real life.

https://www.infoworld.com/article/3184205/technology-business/danger-danger-10-alarming-examples-of-ai-gone-wild.html#slide1

My concern with AI is something similar to what has happened in the world of universal networking and the Internet.  It is about the innate human tendency to exploit systems for one's own benefit at the expense of others.  Who would have imagined the kind of hacking that exists today on the Internet, with bad players easily able to access, slow down, steal from, and control systems that they do not own, for their own nefarious purposes?  We were very naive in the initial design of the Internet.  Security was not tackled as one of the fundamental requirements in the design of its protocols.  The system is deliberately quite open, and security is only added on at the higher protocol levels when it is thought to be needed.

When it comes to AI, the one subject I have not read much about yet is the likelihood of AI design being motivated by the wrong reasons, for fundamentally bad purposes.  An extreme example would be the development of technology based on AI that could be the foundation of robot battlefields.  We seem to be part of the way there conceptually with the extensive use of remote drone technologies these days.

Since AI depends on a process where algorithms are developed based on data collection, what if some organization, or some person, decides to skew this learning process deliberately to reflect a thinking process that is geared towards destructive outcomes?  And what if this kind of technology infiltrates the mainstream in a way that is difficult to contain (just as happens with hacking on the Internet these days)?  Will human beings then be fated to build systems to try to contain this infestation, when it would have been easier and wiser to not let it start in the first place?  Is it possible that there are bad players who are already in the process of taking advantage of the new forces we are about to unleash with the easier availability of tools to enable AI?
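To show how little it might take, here is a hypothetical sketch in the spirit of the toy example earlier.  The scenario, the scores, and the averaging rule are all invented for illustration; the point is only that the same learning procedure, fed deliberately mislabeled history, confidently produces the opposite decisions.

```python
# Hypothetical sketch (invented data): the same learning procedure,
# trained on deliberately mislabeled history, flips its decisions.

def learn(history):
    """Learn a decision rule from (score, decision) history by
    averaging the scores seen for each past decision."""
    ones = [s for s, label in history if label == 1]
    zeros = [s for s, label in history if label == 0]
    avg1 = sum(ones) / len(ones)
    avg0 = sum(zeros) / len(zeros)
    # Decide 1 when a new score sits closer to the past 1-decisions.
    return lambda score: 1 if abs(score - avg1) < abs(score - avg0) else 0

honest = [(0.9, 1), (0.8, 1), (0.7, 1), (0.3, 0), (0.2, 0), (0.1, 0)]
# The "skewed learning process": someone flips labels before training.
poisoned = [(score, 1 - label) for score, label in honest]

decide_honest = learn(honest)
decide_poisoned = learn(poisoned)

print(decide_honest(0.85))    # -> 1: learned from faithful history
print(decide_poisoned(0.85))  # -> 0: same algorithm, corrupted history
```

Nothing in the algorithm changed, and nothing in it can tell that the data was tampered with.  The corruption lives entirely in the training data, which is what makes this kind of abuse hard to detect from the outside.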

I have a bad feeling about what is going to happen with the new level of technology that is being created.  And I have the sense that we will simply try to muddle through the new problems of our own making.  We will band-aid specific issues as they arise, when it would have been wiser to consider all the possible ramifications of what we are doing up front.

In the world of medicine and health, we always seem to be on the verge of an epidemic of some kind that existing systems are incapable of handling, but as a human race we have been fortunate, for various reasons, to survive such episodes even in more recent times.  Sometimes, as in the case of the recent Ebola epidemic, it takes desperate measures and some luck.  Will we always be so fortunate?

I wonder if it is possible to have similar scenarios for damage and destruction to humanity and its systems with technologies like AI.

Having written all this, I am hoping that somebody who reads this will tell me that my fears are unfounded, that my ignorance of AI extends even beyond what I have noted here, and that the foundations of the technology will not allow what I have speculated about to happen.  I would love to be pleasantly surprised.  Please, please, please….

The more things change

The more they remain the same…

I read somebody’s blog article recently about Artificial Intelligence (AI) and about how human intelligence will, in the not-too-distant future, be surpassed by artificial intelligence, which will then fuel a pace of development that we have not seen in the past.  There was an interesting introductory section of the article that talked about how technology is basically developing exponentially.  It would have taken lifetimes in the past to see the kinds of changes that we have seen within our own lifetimes.  In fact, the changes today seem to happen rapidly enough that people are left behind.

But I have a hard time tying the rate of development to the topic of AI.  The logical capability that constitutes the core of a machine is very different from the core of a human brain, and I am not sure that the latter can be replicated.  One would have to have sufficient speed in the machine to build an emulation of the core of the human brain that works in real time.  The approach for developing AI capability in its limited form today is very focused and still limited in its ability to really learn.  Of course, one could put out new versions of software encompassing the lessons from the use of earlier versions of what one might call AI software, and call this an AI implementation, but this is still a development that is directly dependent on human intelligence.  So some additional big breakthrough in technology is needed, something that can apparently lead to the “super-intelligence” discussed in the article I mentioned.  Also, in addition to “learning” software, we perhaps need hardware that can self-propagate and grow in order to make this concept a reality.

So what, I think to myself.  While the changes in lifestyle during our own lifetimes have been astounding, where is this leading us?  We have developed tools to improve our efficiency of operation, we have created lots of functionality that simplifies life, we can communicate at speeds and across distances that would have been considered astounding even a couple of hundred years ago, we can cover vast distances in short periods of time, we have increased food production to levels that would have been unthinkable in the past, etc.  We regularly see new technologies come into place that quickly form the basis of our future experiences in all facets of life.  People are living longer, enjoying more comfort, etc., but so what?  We are still born, eat, sleep, and poop, and eventually die.  While lifetimes have increased, is this increase proportionate to the level of increase in technology?  Is somebody thinking that AI will eventually change the fundamental elements of the paradigm of life?

I am not saying that development is a bad thing.  I am just thinking that we have not thought through its impact at a fundamental level.  Every advantage that we appear to gain seems to be balanced by some negative advancement (including, sometimes, stupidity) at some level.  AI, even if it lives up to its hype, could turn out to be one of those things that adds to this unfocused sense of advancement and speeds it up.  I suppose the most dangerous possibility is that, if this concept really becomes a reality in its truest form, we will have found a way to speed up progress to such an extent that what we actually achieve turns out to be completely destructive.