Bioplastics

Some of us feel quite good about ourselves because we recycle our plastics at home.  We believe we are doing our little bit to save the environment.  But, as it turns out, very little of the plastic that we recycle is actually reused in a useful way.  As the article linked below points out, there are many challenges to achieving real, meaningful recycling.  Perhaps the solution is to use less plastic, or to use plastic in a more sustainable way.  (The author of the article, which you can reach by clicking on the image below, talks about “bioplastics”, something they are working on at their university.)  Whichever way you look at it, there are additional costs involved in getting things on the right path.  The article is a good read in the sense that it also gives you a sense of the bigger picture, and of the damage we are doing to ourselves over the longer run.

(Courtesy – The Conversation)

Here is a video from the article.

Steven F. Udvar-Hazy Center

The Udvar-Hazy Center is the Smithsonian National Air and Space Museum (NASM)’s annex at Washington Dulles International Airport in Fairfax County, Virginia.  The huge space hosts a whole lot of aircraft and other human-built flying objects, in all shapes and sizes, from the beginning of human flight onward.  There are just too many exhibits to remember, or even to go through in detail in a single day!  Here are a few pictures.

If you are fascinated by aeroplanes, just as I am, you can read more specific details about some of these aircraft, and see pictures of some of their journeys to the museum, at the following links provided by the Smithsonian.

Lockheed SR-71 Blackbird.

Space Shuttle Discovery.

The Enola Gay.

The Mustang.

The Concorde.

Dassault Falcon 20.

Global Flyer.

Super Constellation.

Cleaning up the Great Pacific Garbage Patch

A socially active friend of mine told me about the Great Pacific Garbage Patch a while back.  He is the type of person who is likely to latch on to out-of-the-mainstream causes, some of which require a lot of work to verify.  I only followed the story in the back of my mind for several years, not certain if there was any exaggeration in the statement of the problem.  The subject seems to have moved into the mainstream in more recent times.

We human beings do not realize the extent of the damage that we are doing to the planet just because we do not see a lot of it with our own eyes. We will also willingly deny the role that we play in the process of its destruction.

What is the Great Pacific Garbage Patch?  From Wikipedia:
“The patch is characterized by exceptionally high relative pelagic concentrations of plastic, chemical sludge, and other debris that have been trapped by the currents of the North Pacific Gyre.  Its low density (4 particles per cubic meter) prevents detection by satellite imagery, or even by casual boaters or divers in the area. It consists primarily of an increase in suspended, often microscopic, particles in the upper water column.”

How big is the Great Pacific Garbage Patch?  From Wikipedia:
“The findings from the two expeditions show that the patch is 1.6 million square kilometers and has a concentration of 10–100 kg per square kilometer. They estimate there to be 80,000 metric tonnes in the patch, with 1.8 trillion plastic pieces, out of which 92% of the mass is to be found in objects larger than 0.5 centimeters.”
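As a quick sanity check on the figures quoted above (my own back-of-the-envelope arithmetic, not part of the Wikipedia text), multiplying the quoted patch area by the quoted concentration range neatly brackets the 80,000-tonne estimate:

```python
# Sanity check on the quoted Great Pacific Garbage Patch figures.
area_km2 = 1.6e6                       # patch area, square kilometers
conc_low_kg, conc_high_kg = 10, 100    # concentration, kg per square kilometer

low_tonnes = area_km2 * conc_low_kg / 1000     # kg -> metric tonnes
high_tonnes = area_km2 * conc_high_kg / 1000

print(f"{low_tonnes:,.0f} to {high_tonnes:,.0f} tonnes")  # 16,000 to 160,000
# The quoted estimate of 80,000 metric tonnes falls inside this range.
```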

The reason for my posting of this blog was a mainstream news item that I saw on CNN regarding attempts to address the issue.  The project is called The Ocean Cleanup.  They believe they can clean up 50% of the Great Pacific Garbage Patch in five years.  Part of the solution is figuring out the best way to recycle the garbage that is captured.  I hope it all works, and that we can clean up the mess that we have all made!


A Twisted Path to Equation-Free Prediction: Quanta Magazine – About Empirical Dynamic Modeling

Empirical dynamic modeling, Sugihara said, can reveal hidden causal relationships that lurk in the complex systems that abound in nature.

This method of prediction throws out the equations, taking a different route to finding order in chaotic systems.  The process includes gathering enough historical data to make more reliable predictions.  To me, it sounds similar in some ways to some of the processes that feed into the field of AI, or Artificial Intelligence.
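To give a flavor of what “equation-free” prediction can look like, here is a toy sketch (my own illustration of the general idea, not Sugihara’s actual method): forecast the next value of a chaotic series using nothing but its own history, by finding the past states closest to the current one and averaging whatever followed them.

```python
# Equation-free forecasting sketch: predict the next value of a chaotic
# series from nearest neighbors in its own history (no model equations).

def logistic(n, r=3.9, x0=0.4):
    """Generate n points of the chaotic logistic map (our 'observed' data)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def predict_next(history, current, k=3):
    """Find the k past values closest to `current`; average what followed them."""
    # pairs of (past value, the value that came right after it)
    pairs = list(zip(history[:-1], history[1:]))
    pairs.sort(key=lambda p: abs(p[0] - current))
    return sum(nxt for _, nxt in pairs[:k]) / k

series = logistic(500)
history, current = series[:-1], series[-2]   # hold out the final point
forecast = predict_next(history, current)
actual = series[-1]
print(f"forecast={forecast:.3f}  actual={actual:.3f}")
```

The forecast lands close to the true next value even though the code never sees the logistic equation itself; all the “knowledge” is in the collected data, which is the spirit of the approach described in the article.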

https://www.quantamagazine.org/chaos-theory-in-ecology-predicts-future-populations-20151013/


Boeing and Airbus, the new ‘super duopoly’ – WP

The business of manufacturing and selling commercial aircraft is a good illustration of how cutthroat the world of commerce can be, where winners and losers are sometimes determined not by how innovative you are, or how good a product you have produced, but by how well you are able to manipulate the system.  The big guys do have an advantage in this regard.  I follow this business somewhat closely because of my love for aeroplanes in general (I destroyed many a balsa wood glider in my childhood), a love that has stayed with me for a very long time.

https://wapo.st/2qWj8Dc

The Zuckerberg Strategy for Technology Development

I think I actually understand Mark Zuckerberg’s strategy for developing technology and making a business of it.  It is an approach based on placing a product or a feature out there for the public with only a limited understanding of its broad impact.  You learn from the responses to the features.  If changes or fixes are needed, they are made based on feedback, as the problems arise.  You experiment with new features.  If problems do arise for customers, you can respond by apologizing, and the apology could even be sincere, since you never took the trouble to dig more deeply into the possible problem scenarios in the first place.

I think this is a valid approach in some business scenarios and applications, especially if the problems that can arise are likely to have limited impact on the customer and can be contained, and especially if the service is free.  But Facebook has become too big for this kind of strategy to continue to work.  If too many people are impacted, the government gets involved.

If I were to fault Facebook with regard to the problems they have been having recently, it would be for not promptly recognizing the serious nature of the misuse of their system and responding to it.  They seem to have a strategy of trying to buy time rather than promptly addressing issues that are becoming obvious.  They allowed their system to be co-opted by others to spread misinformation as if it were the truth.  However, in this context, I am not sure what the authorities can hold them liable for.  I am not sure there is any basis in current law on which to prosecute.

The above problem should be separated from a second one that should not have happened.  There seems to have been a breakdown in Facebook’s security process that led to private data being exposed, a breakdown that should have legal repercussions.

Meanwhile, I am highly amused at all the outrage that is being directed Facebook’s way – as if people did not understand the risks they were taking by participating on this platform.  Any sensible person should realize that when you place your life story on the Internet, and when you do so through a free service, you are taking a big risk.  It is a free service only because your information is being sold to advertisers.  You signed away your privacy.  Facebook in particular has pushed the boundaries of how to take advantage of the information you provide, and the platform also seems to be designed to draw out more information about you than you might first have been inclined to give.  Realize, too, that even when a vendor gives you privacy options, you are still at the mercy of the vendor.  You do not know what goes on behind the button you have just pressed, or what happens to the data you have entered on the screen.  You can reasonably believe that they will not take the risk of breaking the law, but anything beyond that is a matter of “trust”.

Would you not be naturally suspicious of a non-philanthropic private organization that provides a free service, and ask yourself how it intends to make money?  Would you not read more carefully the user agreement you have with the company that is offering you that free service?

In this context, we are our own worst enemies.  We should be protecting ourselves better even without new regulations from government.  People are being manipulated very easily.


The Brave New World of Artificial Intelligence

Experience has taught me to be skeptical about new technology.  Many years ago I anticipated that the tools of the Internet could create a virtual world of grief similar to what we see in the real world today.  I feel that my fears have been justified.

Some of these new technologies are presented to us in a somewhat idealistic manner when they first arrive on the scene.  We were told that the Internet was going to open the world to the masses by providing universal connectivity and tools for communication, and thus even up the playing field for everybody.  Technologies like this were going to transform the world for the better.  We just needed to let the technology loose to see all its life-changing benefits.  We are so full of ourselves that we do not even pause sufficiently to think about the possible problems we may create.  This is silly.  We seem to be ignoring natural human behavior.  The human tendency is to eventually find a way to destroy every good thing.

And now there is this thing called Artificial Intelligence.

The subject of Artificial Intelligence has been a topic of research for many years.  The name seems to imply that machines can be made to think like human beings (is that even a good thing?), and eventually they should be able to behave like humans.  I have been a skeptic, although I will admit to not having spent enough time to really understand what it all means.  I think the name itself is a turnoff for me, making it sound like it is more than it really is.

Artificial Intelligence, or AI, is becoming more mainstream these days, and its definition has undergone a bit of refinement in my mind.  Specifically, AI is not to be considered in too broad a sense today, but in a more focused manner.  These days one primarily thinks about AI for particular functions.  For example, AI might help to design an autonomous vehicle where the vehicle reacts as if a human were in control, but that does not mean that the same machine can climb a tree, make a good cup of coffee, or plan a vacation.  Implementations of “AI” are compartmentalized, be it for speech recognition, image classification, autonomous vehicles, question-answering systems, etc.

And what is basically happening is that we now have enough processing power in computing systems, and the ability to collect, store, and process (in some statistical manner) large amounts of historical data related to particular functions and features.  This allows us to design systems in which computers make decisions, with a fair degree of confidence, in a way similar to the human decision making that created the data in the first place.  I hear terms related to the general topic of AI (machine learning, neural networks, deep learning, data mining, pattern recognition, and so on), subjects that I know very little about, but in my mind they all seem to be about finding ways to process data to come up with algorithms for making decisions.  (I understand that neural networks in particular are algorithms that try to mimic the neural networks in the brain.)
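The idea of “processing historical data to come up with a decision rule” can be made concrete with a deliberately tiny sketch.  This is my own toy example, not any particular production technique: a simple perceptron, one of the oldest learning algorithms, that nudges its rule toward each past example it gets wrong until it can classify new points the same way the historical data would suggest.

```python
# Toy illustration of learning a decision rule from historical data:
# a perceptron that learns to separate two labeled groups of points.

# "Historical data": (feature1, feature2) -> label (+1 or -1)
data = [((2.0, 3.0), 1), ((3.0, 3.5), 1), ((4.0, 5.0), 1),
        ((-1.0, -2.0), -1), ((-2.0, -1.5), -1), ((-3.0, -4.0), -1)]

w = [0.0, 0.0]   # weights, adjusted as the algorithm "learns"
b = 0.0          # bias term

for _ in range(20):                                    # passes over the data
    for (x1, x2), label in data:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified?
            w[0] += label * x1                         # nudge the rule toward
            w[1] += label * x2                         # this example
            b += label

def decide(x1, x2):
    """The learned decision rule: +1 or -1 for a new, unseen point."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(decide(3.0, 4.0), decide(-2.0, -3.0))   # prints: 1 -1
```

Nothing here is “intelligent” in any deep sense; the rule is just a byproduct of the collected examples, which is exactly the point made above about decisions echoing the data that went in.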

So things are moving along in this field, and I think it is because of the advancement of the basic technologies related to data collection and processing.  New algorithms and approaches are being invented to use all this capability.  AI is becoming more fashionable as a technology concept.  It is such an enticing concept, and the technology is moving ahead at such a fast pace, that not many people seem to be dwelling on the possible dangers.  But this may be changing, and people like Stephen Hawking and Elon Musk, and other experts, have spoken up on this topic in recent times.  (You can see the letter that is referred to in the previous link here.)  I myself am not sure that we can create a machine that is greater, in the sense of decision making, than the input that went into its design, a superintelligence if you will.  But we could surely mess up when multiple decision-making processes are involved and they are not brought together properly, or if the learning processes themselves are not done properly.  The results could be unexpected.  Here are some simpler examples of unexpected results with AI in real life.

https://www.infoworld.com/article/3184205/technology-business/danger-danger-10-alarming-examples-of-ai-gone-wild.html#slide1

My concern with AI would be something similar to what has happened in the world of universal networking and the Internet.  It is about the innate human tendency to exploit systems for one’s own benefit at the expense of others.  Who would have imagined the kind of hacking that exists today on the Internet, with bad players easily able to access, slow down, steal from, and control systems that they do not own, for their own nefarious purposes?  We were very naive in the initial design of the Internet.  Security was not tackled as one of the fundamental requirements in the design of its protocols.  The system is deliberately quite open.  Security is only added on at the higher protocol levels when it is thought to be needed.

When it comes to AI, the one subject I have not read much about yet is the likelihood of AI design being motivated by the wrong reasons, for fundamentally bad purposes.  An extreme example would be the development of technology based on AI that could be the foundation of robot battlefields.  We seem to be part of the way there conceptually with the extensive use of remote drone technologies these days.

Since AI depends on a process where algorithms are developed based on data collection, what if some organization, or some person, decides to skew this learning process deliberately to reflect a thinking process geared towards destructive outcomes?  And what if this kind of technology infiltrates the mainstream in a way that is difficult to contain (just as happens with hacking on the Internet these days)?  Will human beings then be fated to build systems to try to contain this infestation, when it would have been easier and wiser not to let it start in the first place?  Is it possible that there are bad players who are already in the process of taking advantage of the new forces we are about to unleash with the easier availability of tools that enable AI?

I have a bad feeling about what is going to happen with the new level of technology that is being created.  And I have the sense that we will try to muddle through the new problems that we create, problems that are of our own doing. We will band-aid specific issues as they arise, when it would have been wiser to consider all the possible ramifications of what we are doing up front.

In the world of medicine and health, we always seem to be on the verge of an epidemic of some kind that existing systems are incapable of handling, but as a human race we have been fortunate to survive such episodes, even in more recent times, for various reasons.  Sometimes, as in the case of the recent Ebola epidemic, it takes desperate measures and some luck.  Will we always be so fortunate?

I wonder if it is possible to have similar scenarios for damage and destruction to humanity and its systems with technologies like AI.

Having written all this, I am hoping that somebody who reads this will tell me that my fears are unfounded, that my ignorance of AI extends even beyond what I have noted here, and that the foundations of the technology will not allow what I have speculated about to happen.  I would love to be pleasantly surprised.  Please, please, please….