I watched an entire Flat Earth Convention for my research – here’s what I learnt – The Conversation

The democratization of “science” and “information” by the Internet has enabled many strange things, including the acceptance of lines of thinking that reasonable people would once have scoffed at, and events that many would consider quite surprising in our times, such as the result of the US presidential election in 2016.

Despite early claims, going as far back as H.G. Wells’s “world brain” essays of 1936, that a worldwide shared resource of knowledge such as the Internet would create peace, harmony and a common interpretation of reality, it appears that quite the opposite has happened. With the increased voice afforded by social media, knowledge has become increasingly decentralised, and competing narratives have emerged.

via I watched an entire Flat Earth Convention for my research – here’s what I learnt

The Zuckerberg Strategy for Technology Development

I think I actually understand the Mark Zuckerberg strategy for developing technology and making a business of it.  It is an approach based on placing a product or a feature out there for the public with only a limited understanding of its broad impact.  You learn from the responses to the feature.  If changes or fixes are needed, they are made based on feedback, as the problems arise.  You experiment with new features.  If problems do arise for customers, you can respond by apologizing, and the apology could even be sincere, since you never took the trouble to dig more deeply into possible problem scenarios in the first place.

I think this is a valid approach in some business scenarios and applications, especially if the problems that can arise are likely to have limited impact on the customer and can be contained, and particularly if the service is free.  But Facebook has become too big for this kind of strategy to continue to work.  When too many people are affected, the government gets involved.

If I were to fault Facebook for the problems they have been having recently, it would be for not promptly recognizing the serious nature of the misuse of their system and responding to it.  They seem to have a policy of trying to buy time rather than addressing issues that are becoming obvious.  They allowed their system to be co-opted by others to spread misinformation as if it were the truth.  However, in this context, I am not sure what the authorities can hold them liable for.  I am not sure there is any basis in current law to prosecute them under.

The above problem should be separated from a second one that should not have happened.  There seems to have been a breakdown in Facebook’s security process that led to private data being exposed, a breakdown that should have legal repercussions.

Meanwhile, I am highly amused at all the outrage being directed Facebook’s way – as if people did not understand the risks they were taking by participating on this platform.  Any sensible person should realize that when you place your life story on the Internet, and when you do so using a free service, you are taking a big risk.  The service is free only because your information is being sold to advertisers.  You signed away your privacy.  Facebook in particular has pushed the boundaries of how to take advantage of the information you provide, and the platform seems designed to draw out more information about you than you might initially have been inclined to give.  Realize, too, that even when a vendor gives you privacy options, you are still at the mercy of that vendor.  You do not know what actually happens behind the button you have just pressed, or to the data you have just entered on the screen.  You can reasonably believe that they will not take the risk of breaking the law, but anything beyond that is a matter of “trust”.

Would you not naturally be suspicious of a non-philanthropic private organization that provides a free service, and ask yourself how it intends to make money?  Would you not read more carefully the user agreement you have with a company that is offering you that free service?

In this context, we are our own worst enemies.  We should be protecting ourselves better even without new regulations from government.  People are being manipulated very easily.

 

The Brave New World of Artificial Intelligence

Experience has taught me to be skeptical about new technology.  Many years ago I anticipated that the tools of the Internet could create a virtual world of grief similar to what we see in the real world today.  I feel that my fears have been justified.

Some of these new technologies are presented to us in a somewhat idealistic manner when they first arrive on the scene.  We were told that the Internet was going to open the world to the masses by providing universal connectivity and tools for communication, and thus level the playing field for everybody.  Technologies like this were going to transform the world for the better.  We just needed to let the technology loose to see all its life-changing benefits.  We are so full of ourselves that we do not even pause sufficiently to think about possible problems we may create.  This is silly.  We seem to be ignoring natural human behavior.  The human tendency is to eventually find a way to destroy every good thing.

And now there is this thing called Artificial Intelligence.

The subject of Artificial Intelligence has been a topic of research for many years.  The name seems to imply that machines can be made to think like human beings (is that a good thing?), and eventually they should be able to behave like humans.  I have been a skeptic, although I will admit to not having spent enough time to really understand what it all means.  I think the name itself is a turnoff for me, making it sound like it is more than it really is.

Artificial Intelligence, or AI, is becoming more mainstream these days, and the definition has undergone a bit more refinement in my mind.  Specifically, AI is not to be considered in too broad a sense today, but in a more focused manner.  These days one primarily thinks about AI for particular functions.  For example, AI might help to design an autonomous vehicle where the vehicle reacts as if a human were in control, but that does not mean that the same machine can climb a tree, make a good cup of coffee, or plan a vacation.  Implementations of “AI” are compartmentalized, be it for speech recognition, image classification, autonomous vehicles, question-answering systems, etc.

And what is basically happening is that we now have enough processing power in computing systems, and the ability to collect, store, and process (in some statistical manner) large amounts of historical data related to particular functions and features, to allow us to design systems in which computers make decisions in a manner similar to the human decision-making that created the data in the first place, and to do so with a fair degree of confidence.  I hear terms related to the general topic of AI – machine learning, neural networks, deep learning, data mining, pattern recognition, etc. – subjects that I know very little about, but in my mind they all seem to be about finding ways to process data to come up with algorithms that make decisions.  (I understand that neural networks in particular are algorithms that try to mimic the neural networks in the brain.)
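To make that idea concrete, here is a minimal sketch of learning a decision rule from historical human decisions.  Everything in it – the umbrella scenario, the features, and the simple perceptron-style update – is a toy assumption of mine for illustration, not a description of any real system:

```python
# Toy "historical data" of past human decisions.
# Each record: ((rain_forecast, sunshine), decision), decision 1 = took umbrella.
data = [((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.1, 0.9), 0),
        ((0.0, 0.8), 0), ((0.8, 0.3), 1), ((0.2, 0.7), 0)]

def train_perceptron(data, epochs=20, lr=0.1):
    """Fit a simple linear rule so the machine mimics the recorded decisions."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                    # 0 when the rule already agrees
            w[0] += lr * err * x[0]           # nudge the rule toward the
            w[1] += lr * err * x[1]           # human's recorded choice
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

w, b = train_perceptron(data)
print(predict(w, b, (0.95, 0.15)))  # → 1: a rainy-looking day gets "umbrella"
```

The machine never understands rain; it only reproduces, statistically, the pattern in the decisions people already made – which is exactly why the quality of the collected data matters so much.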

So things are moving along in this field, and I think it is because of the advancement of basic technologies related to data collection and processing.  New algorithms and approaches are being invented to use all this capability.  AI is becoming more fashionable as a technology concept.  It is so enticing a concept, and the technology is moving ahead at such a fast pace, that not many people seem to be dwelling on the possible dangers.  But this may also be changing, and people like Stephen Hawking and Elon Musk, and other experts, have spoken up on this topic in recent times.  (You can see the letter that is referred to in the previous link here.)  I myself am not sure that we can create a machine that is greater, in the sense of decision making, than the input that went into its design – a superintelligence, if you will.  But we could certainly mess up when multiple decision-making processes are involved and are not brought together properly, or if the learning processes themselves are not done properly.  The results could be unexpected.  Here are some simpler examples of unexpected results with AI in real life.

https://www.infoworld.com/article/3184205/technology-business/danger-danger-10-alarming-examples-of-ai-gone-wild.html#slide1

My concern with AI is something similar to what has happened in the world of universal networking and the Internet.  It is about the innate human tendency to exploit systems for one’s own benefit at the expense of others.  Who would have imagined the kind of hacking that exists today on the Internet, with bad players easily able to access, slow down, steal from, and control systems that they do not own, for their own nefarious purposes?  We were very naive in the initial design of the Internet.  Security was not tackled as a fundamental requirement in the design of the Internet’s protocols.  The system is deliberately quite open.  Security is only added on at the higher protocol levels, when it is thought to be needed.

When it comes to AI, the one subject I have not read much about yet is the likelihood of AI design being motivated by the wrong reasons, for fundamentally bad purposes.  An extreme example would be the development of technology based on AI that could be the foundation of robot battlefields.  We seem to be part of the way there conceptually with the extensive use of remote drone technologies these days.

Since AI depends on a process where algorithms are developed based on data collection, what if some organization, or some person, decides to skew this learning process deliberately to reflect a thinking process geared towards destructive outcomes?  And what if this kind of technology infiltrates the mainstream in a way that is difficult to contain (just as happens with hacking on the Internet these days)?  Will human beings then be fated to build systems to try to contain this infestation, when it would have been easier and wiser not to let it start in the first place?  Is it possible that there are bad players already in the process of taking advantage of the new forces we are about to unleash with the easier availability of tools that enable AI?
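As a toy illustration of how a deliberately skewed learning process produces skewed decisions, consider a hypothetical filter that learns a cutoff from labelled examples, where an attacker has flipped the training labels.  The scenario, the numbers, and the learning rule are all invented by me for illustration:

```python
# "Clean" history: feature = suspicious links in a message, label 1 = block it.
clean = [(0, 0), (1, 0), (4, 1), (5, 1), (6, 1), (0, 0)]

# Poisoned copy: an attacker relabels the link-heavy messages as harmless,
# so the system learns to wave exactly those messages through.
poisoned = [(x, 0) if x >= 4 else (x, y) for x, y in clean]

def learn_threshold(data):
    """Pick the cutoff that best reproduces the labels it was given."""
    best_t, best_acc = 0, -1.0
    for t in range(8):
        acc = sum((1 if x >= t else 0) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

print(learn_threshold(clean))     # low cutoff: link-heavy messages get blocked
print(learn_threshold(poisoned))  # cutoff pushed above every real example
```

The learner is working exactly as designed in both cases; the damage is done entirely through the data it was fed, which is what makes this kind of attack hard to spot from the outside.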

I have a bad feeling about what is going to happen with the new level of technology that is being created.  And I have the sense that we will try to muddle through the new problems that we create, problems that are of our own doing. We will band-aid specific issues as they arise, when it would have been wiser to consider all the possible ramifications of what we are doing up front.

In the world of medicine and health, we always seem to be on the verge of an epidemic of some kind that existing systems are incapable of handling, but the human race has been fortunate to survive such episodes, even in recent times, for various reasons.  Sometimes, as in the case of the recent Ebola epidemic, it takes desperate measures and some luck.  Will we always be so fortunate?

I wonder if it is possible to have similar scenarios for damage and destruction to humanity and its systems with technologies like AI.

Having written all this, I am hoping that somebody who reads this will tell me that my fears are unfounded, that my ignorance of AI extends even beyond what I have noted here, and that the foundations of the technology will not allow what I have speculated about to happen.  I would love to be pleasantly surprised.  Please, please, please….

We are Stardust

NASA presented some preliminary findings from their Twins Study earlier this year.  A complete paper from this study is to be released later this year.  For those who are not familiar with it, this is the first and only study done on twin astronauts, comparing one who spent 340 days in space (Scott Kelly) with his identical twin brother (Mark Kelly), who remained on earth during that time, to try to understand genetic changes due to long-term space travel.  The twins had identical genes when the experiment started.  The study found that the twin who had lived in space went through some genetic mutations during his time there, and that some changes in gene expression (which apparently is not the same as genetic change) seem to be long lasting.

Our living environment deeply impacts what we are as a species inhabiting the Universe: we are shaped by where we exist in the universe, and some process causes us to develop differently in different environments.  Scott Kelly spent less than a year in space before the changes in his body manifested themselves.  It is nearly certain that the differences that would arise in a species because of where it exists in the universe outweigh the differences that arise among us from living in different places and circumstances on this earth itself.  Why then are we bent on focusing on and exploiting our own relatively minor differences?  And do we really think we are the superior species?

The Simple Algorithm That Ants Use to Build Bridges | Quanta Magazine

(Picture from Quanta Magazine. Credit – Vaishakh Manohar.)

via The Simple Algorithm That Ants Use to Build Bridges | Quanta Magazine

I first learned about how ants work in a cooperative manner in a book that my daughter had bought me for Christmas. The book was all about trails.  (She had figured out the perfect book for my interests!)   There is a chapter in this book about how trails historically came into being, and how these have, over time, led to our modern day system of roads, railroad tracks, and other connections for human travel.

Trails have existed for ages. The concept is not the creation of humans.  Animals of different kinds, using different skills, and for different purposes, have created trails.   There was, and still is, no real planning involved (the way humans would define it) in the creation of animal trails. It is all tied to their inbuilt instinct to survive and exist.

Ants have been creating trails for a long time.  The notable thing about their behavior is that even though no individual ant has any significant level of intelligence, collectively they show a great deal of cooperative intelligence that lets them be effective at complex tasks.  (They do not even depend on the presence of an occasional “smart” ant to serve as a leader.)  The book describes how their processes work for creating very efficient trails.  (There is even a kind of blind ant that is still very effective at this.)  Humans are now trying to understand whether any of these processes are useful for our own existence.
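This kind of collective behavior is often modeled with pheromone reinforcement: each ant follows a purely local rule (“prefer the route with more pheromone”), pheromone evaporates, and shorter routes accumulate deposits faster, so the colony converges on the better trail without any ant knowing which route is shorter.  A minimal sketch, where the two routes, the evaporation rate, and the deposit rule are my own toy assumptions rather than anything from the article or the book:

```python
# Two candidate routes between nest and food; no ant knows which is shorter.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}  # start undecided
evaporation = 0.5

for step in range(30):
    total = sum(pheromone.values())
    for route, length in lengths.items():
        share = pheromone[route] / total   # fraction of ants picking this route
        deposit = share / length           # shorter route → more deposit per trip
        pheromone[route] = (1 - evaporation) * pheromone[route] + deposit

share_short = pheromone["short"] / sum(pheromone.values())
print(round(share_short, 2))  # nearly all traffic ends up on the short route
```

Each step uses only local quantities – the pheromone an ant can sense and the deposit it leaves – yet the feedback loop settles on the efficient trail, which is the essence of the cooperative intelligence described above.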

Anyway, the article I have linked to is fascinating.  Make sure to watch the videos!