Revolutionary Camera Technologies?

(Source – pixabay.com, used under CC0 license.)

I recently saw the following article in the online magazine Wired.

http://www.wired.com/2015/11/panasonic-cameras-get-a-shoot-now-and-focus-later-feature/

Panasonic has introduced a feature in some existing cameras, via a software download, that lets you take a single picture at multiple focal points almost simultaneously, so that you can pick the desired focal point for presentation to the viewer after the fact.  Some existing cameras have had this kind of feature in the sense of taking pictures at a few (two or three) focal points one after another, but this Panasonic feature apparently takes the capability to the next level.  Indeed, what is needed to implement this kind of feature in existing cameras is plenty of speed and a lot of storage.
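
To make the bracketing idea concrete, here is a minimal sketch of how "shoot now, focus later" can work with a burst of frames swept through focus: you pick the frame in which the region around a chosen point is sharpest.  This is only an illustration of the principle, not Panasonic's implementation; the function name, the variance-based focus measure, and the window size are all my own assumptions.

```python
import numpy as np

def pick_focused_frame(frames, y, x, win=8):
    """Return the index of the frame in which the window around
    (y, x) is sharpest, using local variance as a focus measure.
    frames: a list of 2D grayscale arrays swept through focus."""
    best_idx, best_score = 0, -1.0
    for i, frame in enumerate(frames):
        patch = frame[y - win:y + win, x - win:x + win].astype(float)
        score = patch.var()  # in-focus regions have higher local contrast
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```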

I somehow feel that this is a half-baked solution to a very interesting problem: capturing pictures in their truest form so that they are suitable for post-processing to any desired set of parameters for presentation.  In fact, this is the technology that will eventually revolutionize the field of photography and allow even devices like smartphones to take pictures whose presentations will be far superior to those generated by traditional cameras.  It will allow a much greater level of creativity than the existing optical technology does.

Welcome to the field of plenoptic, or light-field, technology!  There are experiments in this realm that are not yet mature or suitable for use by consumers, but I think that something along these lines will be coming some time in the future.
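
The core trick of a light-field camera, refocusing after capture, can be sketched in a few lines.  The sketch below follows the standard shift-and-sum formulation from the plenoptic literature: each sub-aperture view is shifted in proportion to its position in the aperture and the results are averaged.  The parameter names are my own illustrative choices.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-sum over sub-aperture views.

    light_field: 4D array indexed [u, v, y, x], one grayscale image per
    (u, v) position in the lens aperture.  alpha: refocus parameter
    (pixels of shift per unit of aperture offset); varying it moves the
    synthetic focal plane."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * alpha))
            dx = int(round((v - V // 2) * alpha))
            # Shift each view in proportion to its aperture position,
            # then average: points on the chosen plane align and sharpen.
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```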

http://petapixel.com/2010/09/23/the-first-plenoptic-camera-on-the-market/

http://www.wired.com/2015/11/lytro-refocuses-to-create-a-groundbreaking-vr-camera/

And then there is Wavefront coding….

Perhaps I was very naive about what it was all about when I took up photography, but years of experience have taught me that this hobby is not just about capturing the image as seen by an observer.  It is about creating the visual and mental impact that you desire with the picture that you present.  Towards this goal, you end up using all kinds of technology, in the camera and outside of it in post-processing, to create the impact that you want.  Even the most basic picture that you see today has probably undergone some kind of “processing”, either optically, electronically, or in software.  What we call artistry is the attempt to use the technology that is available to us, be it simple paint-brushes, or cameras, or electronic devices, or software, to create the impact we wish.  Of course, we will always argue about the amount of “reality” in the product being produced, based on the amount of creativity used in the presentation, but I think it is becoming more and more an argument about the degree of processing, not about the presence or absence of processing.

When new technologies for capturing images emerge and become part of the mainstream, they will open up the field of photography to new techniques for artistry in picture presentation.  We will have a new generation of artists, using newly invented image capturing and processing devices and techniques, who will call themselves photographers but will have no concept of what photography meant to the pioneers of this field.  Photoshopping is just the beginning.  Even the term “camera” may become passé.  Analog cameras, anybody?!

Battery Technologies

A while back I wrote about technologies being worked on for regenerating and saving energy that would otherwise be wasted.  This is the way hybrid vehicles work, and this is the way the London Underground is experimenting with powering an underground station, using the energy regenerated when trains brake to come to a stop in a station.

At that time I pointed out that one of the major issues with using regenerated energy was the need to store the energy for later use.  The batteries used to store electric energy today simply do not do a good enough job when it comes to saving significant amounts of energy efficiently for long periods of time.  This is one of the reasons that solar systems used to power homes in the US today do not in general use battery storage.  Instead of capturing the excess energy generated during the day in batteries and then powering the home from those batteries during the night, these systems send the excess power back into the electric grid during the day, and draw power from the grid during the night.

It turns out that there is actually quite a bit of work going on regarding new battery technologies.  A lot of this work is in the R&D stage.  I came upon an article recently about one such company.  As could be expected with R&D work, this technology seems to have been born in a university setting.  The more I read about the technology on that company’s website, the more fascinated I became.  There is quite a bit of innovation going on, and it is only a matter of time and investment before new technologies with far greater potential (no pun intended!) than today’s batteries become real.  From the perspective of this particular company, the improvement in performance comes from a fundamental change in the layout of the battery: thinking about the layout of the anode and the cathode in a true 3D sense rather than the traditional 2D manner.  This kind of layout is made possible by newer technologies that were not available in the past but are more common today (think nanotechnologies!).
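
To see why a 3D layout matters, consider the surface area available for the electrode reactions.  A rough geometric comparison shows how an array of electrode pillars multiplies the active area within the same footprint; all dimensions below are my own illustrative assumptions, not the company's figures.

```python
import math

# Rough geometric intuition for a 3D electrode layout: compared with a
# flat 2D film, an array of pillars multiplies the active surface area
# inside the same footprint.  All dimensions are illustrative guesses.
footprint_mm2 = 100.0                 # a 10 mm x 10 mm cell footprint
pillar_r_mm, pillar_h_mm = 0.01, 0.5  # 10 um radius, 0.5 mm tall
pitch_mm = 0.04                       # pillar-to-pillar spacing

n_pillars = footprint_mm2 / pitch_mm ** 2
pillar_side_area = 2 * math.pi * pillar_r_mm * pillar_h_mm
total_area = n_pillars * pillar_side_area
print(f"area gain over a flat electrode: ~{total_area / footprint_mm2:.0f}x")
# With these numbers, roughly a 20x gain in active area.
```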

All of this made me interested in further investigating the playing field of battery technologies, and I came upon a few articles, some of them not that recent.

https://gigaom.com/2013/01/14/13-battery-startups-to-watch-in-2013/

There is much other work going on in battery technology, some of it along more conventional lines.  A lot of this work is motivated by the real needs of today’s existing infrastructure, and also by newer areas of development, including the ongoing emergence of the electric-powered automobile as a real consumer product.

http://cleantechnica.com/2015/01/15/27-battery-storage-companies-watch/

What we will have to remember when some of these technologies mature is that unless they are used in the right context, they are likely to create additional problems that will need to be addressed and solved.  If quick recharging of high capacity batteries from the electric grid becomes a common need, the grid itself will have to change.

Types of Technology Initiatives That Make Sense

Environmentally friendly approaches to technology development are important because they ultimately impact the future of our planet and the quality of life for the generations that follow.

In this context, when it comes to technologies related to energy, we look for cleaner sources of energy, we look for technologies that generate energy more efficiently from these sources, and finally we try to design equipment that operates efficiently without wasting energy.

The article below pertains to technology that does not fall neatly into any one of the categories noted above.  It has to do with regenerating energy: saving energy that would otherwise have been wasted and reusing it in some way.

The principle used in the system described below is in some ways similar to that in hybrid cars.  In a hybrid vehicle, a traction battery provides power to a motor that supplements the gasoline engine as needed to move the vehicle.  This battery is recharged either when there is braking action or when the automobile is building up speed and momentum on a down-slope.  Essentially, the energy generated from braking, instead of being wasted as heat, is converted to electric power.  Kinetic energy gained on the downhills is also converted to electric power.  There is no additional external source of power.  We are basically saving energy in the battery when possible and then using that energy later when it makes sense.

In the system described below, the trains provide electric power in real time to the underground stations when they brake to come to a stop in the station.  The numbers from the article are an indication of the tremendous amount of energy that is available from this process, and also an indication of the tremendous amount of energy we are wasting today.  It would be great if this kind of philosophy – taking advantage of the unused energy from an inefficient process and reusing it for either the primary process itself or for a secondary purpose – were considered more widely in the design of all systems that consume energy, especially since many technologies in place today are still quite inefficient.  Towards this end, the ability to store energy efficiently on a large scale at a reasonable cost point is still a significant technological issue to be addressed and solved.
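
For a feel of the scale, here is a back-of-envelope estimate of the energy in a single braking event.  All the numbers below are my own assumptions for illustration, not figures from the article.

```python
# Back-of-envelope estimate of energy recoverable from one braking
# event.  The mass, speed and efficiency are illustrative assumptions.
mass_kg = 200_000          # assumed train mass (~200 tonnes)
speed_ms = 60 / 3.6        # assumed speed at start of braking: 60 km/h
efficiency = 0.65          # assumed recovery efficiency of the system

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
recovered_kwh = efficiency * kinetic_energy_j / 3.6e6  # J -> kWh

print(f"Kinetic energy at braking: {kinetic_energy_j / 1e6:.1f} MJ")
print(f"Recoverable per stop:      {recovered_kwh:.1f} kWh")
# ~27.8 MJ of kinetic energy, ~5 kWh recovered per stop; multiplied by
# thousands of stops a day, the waste without recovery adds up quickly.
```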

“London Tube’s ‘regenerative braking’ tech can power an entire station”.

Is There a Concept of Having Too Much Technology?

Some inventors from Airbus were recently granted the following patent.

If you follow the link you will see that the patent is essentially for the design of a passenger aircraft that can travel at speeds of up to Mach 4.5 using certain advanced technologies.  The invention contemplates an aircraft with three different kinds of engines for three different stages of flight.  The first engine type would be used for liftoff, the second would take the aircraft up to its cruising altitude, and the third would let it cruise at speeds that border on the hypersonic.  Although I have not read the patent, I suspect that the innovation being claimed here is the single piece of equipment (i.e., the aircraft) being designed to work with three engine technologies in three different stages of flight, and that the innovation is not in the engine types themselves, although there could be some optimization or modification of the engines contemplated for the application at hand.  There also ought to be some innovative ideas related to the shape of the aircraft and the placement of the engines.

Of course, filing patents is all about putting ideas that you consider implementable on the record and being acknowledged as the person who “owns” the idea, but it does not necessarily imply that the patents have actually been implemented, or are implementable in a practical sense in the near future.  I have been fortunate to have worked, in most cases with other people, on many concepts that have been patented, some of which have made it into real implementations, and many that have not.

In the case of this particular patent, I have serious doubts about the design becoming reality in any practical sense for the purpose of moving passengers.  Factors that make me a skeptic include the development costs, the cost of the aircraft itself, its efficiency in terms of the cost of moving each passenger per mile, and finally the real need in our world for this kind of technology today.  In many cases patents are filed purely as a defensive measure, to let people know that you got the idea first, or to serve as a negotiating tool with your competition.  That having been said, I cannot completely discount the possibility of somebody convincing a military organization somewhere to spend billions of dollars to build something based on this patent that improves our capability in the realm of waging war and killing people.  You do not have to look too far to see this kind of foolishness going on today.  There is also the emerging field of commercial space flight, where paying passengers can be given rides into space, to which some of this technology may be applicable.  But if the idea becomes successful in that realm, only the super rich, who can pay humongous amounts of money for one-time thrills, will be able to take part.

People might argue that my viewpoint regarding the practical use of this technology is typical of those who have no real vision for the future.  After all, most of the technology that keeps the world going today had a cost associated with it, and if people had not invested in these technologies, we would not be where we are today in terms of capabilities, lifestyles, convenience and comfort.  But how much convenience and comfort does a human being really need?  There is also the trickle-down factor to be considered, where technology developed for one limited scenario bleeds into more general usage.  This is particularly true of innovations that have come out of the space program and found their way into everyday use.  Fair enough!  But, at the same time, the innovation that comes from the space program is considered useful in itself, even if there were no immediate secondary benefits.  This is because we human beings want to know more about the Universe we live in.  We want to advance our knowledge.  Can a similar case be made for the benefits of developing a passenger aircraft as contemplated in the patent?

We know that the concept of a super-fast passenger aircraft did not work out from an economic perspective in the case of the Concorde (which was also a relatively much slower aircraft).  There is even the possibility that new aircraft technologies introduced recently will not be successful in the long run.  Aircraft such as the Boeing 787 and the Airbus A380 are huge risks for their manufacturers, and it is quite possible that the companies may not even recoup their expenses over the lifetimes of these aircraft.  The aircraft contemplated in the patent would cost much more to develop, purchase and operate.  All things considered, will Boeing or Airbus even attempt to build a passenger aircraft that travels this fast?

Regardless, even if there were enough motivation to develop an aircraft as contemplated in the invention, and even if there were enough people willing to pay to fly in it that a profit could be made in spite of the monumental development and manufacturing costs, what kind of real-world scenario really demands such a capability as far as speed is concerned?  Most leisure travelers are unlikely to be able to afford to fly in such an aircraft.  If anything, this could turn into a business tool, a military boondoggle, or a toy for rich people.  (I believe that when it comes to conducting business, we are definitely capable of coming up with some new reasoning for needing an aircraft of this type, finding a way to justify the cost based on what is likely to be some kind of hokey cost-benefit analysis.  After all, there are a lot of companies today that still think it makes sense to own and use private luxury jets.  This is how business works.)  In my mind the above scenarios would amount to the use of technology just because it can exist, and not because it is necessary.  Basically this would be about spending without having a good reason to do so.  What good will come out of any of it?

This scenario has some commonality with the story of a lot of the technology developed in recent years in the field of electronics and communications.  The significant driver for advancements in this field is entertainment (perhaps it actually all starts out with porn).  Companies want to outdo their competition in this business, so that people with money to burn (and sometimes even people who cannot afford it) will try to buy their product.  A lot of resources of all kinds are spent in this regard, and the primary motivation is creating wealth and putting money into the pockets of those involved.  This is also my story, having worked for many years in the industry to make a living by advancing technologies for the purpose of delivering entertainment.  I suppose there is nothing wrong with all of this.  This is the way capitalism works.

How much of the impact of new technologies really trickles down to the people whose lives really need to be improved?  I have my doubts in this regard about a lot of what is being worked on today.  As I grow older I have more and more difficulty coming to terms with the development and use of technology just for technology’s sake.  I hope that the aircraft described above remains just a concept in somebody’s mind.

Digital in an Analog World (March 21st, 2014)

The digital paradigm is a key element of the technology and the general thinking that drive our civilization today.  Information sharing in the electronic domain is for the most part achieved by breaking the information down into discrete, i.e., digital, levels for transmission and processing (with a few remaining exceptions).  What we may not realize is that we also tend to use the digital paradigm almost everywhere else in our lives, outside of the technology domain, and this is oftentimes the trigger for many of the issues we have in the world.  We tend to make absolute determinations about situations when in fact there are “levels” or grades of explanations and understandings of the realities, and a resulting ambiguity in a lot of what happens in the world.  By using the term “levels” I have already assumed a digital mode of thought, by postulating that there are thresholds involved in the thinking process, when in fact the range of ideas and opinions available is continuously variable – an analog process.

There are examples everywhere.  Consider definitions used in the political world.  We tend to use categorizations such as democratic, dictatorship, capitalist, socialist, etc., when in fact there is almost always more variation and ambiguity in the political systems of the particular countries being talked about, with mixed approaches to governing and addressing national issues.  But given a choice, the world will tend to categorize and compartmentalize.  When you do not want to think, labels can be substituted for thought, and can perhaps be used as a basis for conflict.  When you think about it, having nations with boundaries is a completely artificial digital concept in itself.

Social arguments also tend to follow the digital paradigm.  In many cases there is no compromise on topics ranging from religion to human rights (including women’s rights).  We are the owners of the one truth, we set our thresholds at one extreme, we cannot (or we refuse to) empathize with the other side, we want to set the rules, there is no room for compromise, and we are divided because of this.  It could be argued that some categorization is needed to provide structure in society, without which there would be chaos.  The challenge is to do this in a way that works fairly for everybody involved, so that consideration of variety, compromise, and ambiguity is part of the process.  Why is it that the only outcomes in a court of law are guilty and not guilty?  Surely there are situations that are not that clear cut.  But we hate ambiguity.

Take a practical example from everyday city life.  We have traffic lights that almost eliminate the need for drivers of automobiles to think when arriving at a traffic intersection.  Then we have speed limits (and other rules of the road) that are meant for safety (as if you are completely safe below a particular speed, and completely unsafe above it).  Perhaps we need some of these absolute rules because we cannot be trusted to deal properly with situations that are ambiguous, where we need to make judgment calls.  But rules still do not eliminate danger.  We are still capable of killing ourselves and others on the road.  And rules can also be applied in a manner that leads to inefficiencies, such as the need to sit at traffic intersections waiting for the light to change for long periods of time when there is no cross traffic at all.  Do we set more levels and boundary conditions and rules for decision making, or should we be smarter about our interactions at a traffic intersection?  Or perhaps we create autonomous vehicles and try to program the vehicles to respond to any possible scenario that can be thought of.  Is this even possible?

Categorization also provides us with a tool that can be used to simplify the teaching process.  For example, look up information about the height of the atmosphere.  You will find that it is defined as being layered, with names for the different layers.  In fact, the nature of the atmosphere changes continuously with height: there are no clear layers, nor is there a clear boundary between the atmosphere and outer space.  Creating layers makes it simpler to be organized and to speak a common language to get an idea across, but it is essentially a concept in our minds.  To truly understand something, perhaps you have to embrace ambiguity.  Consider the geographical construct of a shoreline.  Assumptions are made about clear lines delineating the land from the water so that we can try to make measurements, when in fact the delineation could be extremely complex and could be described beautifully using the concept of fractals.  Here are some great examples (including that of the shoreline).  (I actually think that the concept of fractals is intuitive and can be taught to kids.)
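
The shoreline point can be made concrete with a few lines of arithmetic.  For a fractal curve of dimension D, the measured length grows without bound as the measuring ruler shrinks; the value D = 1.26 below is the classic Koch-curve dimension, used here purely for illustration.

```python
# Richardson-style illustration of the coastline paradox: for a curve
# of fractal dimension D, measured length scales as ruler**(1 - D).
D = 1.26  # Koch-curve dimension, as a stand-in for a real shoreline
for ruler in [100.0, 10.0, 1.0, 0.1]:
    measured = ruler ** (1 - D)
    print(f"ruler = {ruler:6.1f}  measured length ~ {measured:6.2f}")
# The shorter the ruler, the longer the "shoreline": there is no single
# true length, only a length at a chosen scale.
```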

When the digital mode of thinking is taken to its extreme, no form of dissent, disagreement, or argument with the rules is allowed.  (You will also be failed in your exams. 🙂 )  Are there folks who will argue that this is a good thing?

Things can be ambiguous in physics.  The uncertainty principle asserts that one cannot accurately know both the position and the momentum of a particle at the same time.  We also learn in physics that light has properties of both particles and waves, and there are experiments to illustrate both behaviors.  But we most probably started out learning only one of the behaviors in a school environment, because it was more intuitive and easier to explain.  It is more difficult to comprehend things when you start talking about the subtleties.

In the world of digital communications, we find that communication becomes more efficient if we are able to define more levels (of modulation), but we also learn that creating these additional levels creates more uncertainty and requires much more powerful processing (error correction) to resolve that uncertainty, until at some point we approach Shannon’s capacity limit for the maximum possible information transmission rate in a noisy channel.  Perhaps there is a similar dynamic at play in our minds on other matters, where creating more levels of consideration may be equivalent to embracing more uncertainty, but the ability to deal with this uncertainty requires more powerful processing in our heads.  Dealing with ambiguity can lead to better solutions, but it is harder to do.
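
For reference, the limit in question is C = B · log2(1 + S/N).  A quick calculation (the bandwidth and SNR values below are my own illustrative choices) shows how capacity grows with signal quality, which is what makes denser modulation worthwhile only when the processing can keep up.

```python
import math

# Shannon capacity of a noisy channel: C = B * log2(1 + S/N).
bandwidth_hz = 6e6  # an assumed 6 MHz channel, for illustration
for snr_db in [10, 20, 30, 40]:
    snr = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    capacity_mbps = bandwidth_hz * math.log2(1 + snr) / 1e6
    print(f"SNR = {snr_db:2d} dB -> capacity ~ {capacity_mbps:5.1f} Mbit/s")
# Each extra modulation level buys more bits per symbol, but only if
# the SNR (and the error-correction "processing") can support it.
```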

You might say that there are moments in time when change is instantaneous.  Death is instantaneous.  A nuclear explosion happens instantly.  How about the Big Bang?  From the perspective of human experience today, is it always the case that the experience of real life ends at the moment of death?  What if you are incapacitated and incapable of doing or feeling anything while your heart is still beating?  Some people might feel that this is as good as being dead.  Who is to decide?  Consider also that time frames tend to be relative, or that the concept of time is itself relative.  There is actually a process behind a phenomenon such as the Big Bang, or even a nuclear reaction, and processes do take some time, even if that time might seem to be extremely short. (http://www.nytimes.com/2014/03/18/science/space/detection-of-waves-in-space-buttresses-landmark-theory-of-big-bang.html)

Looking at the concept of time frames from a different angle, consider that if one were to measure the lifetime of an individual human relative to the lifetime of the universe, our existence is of the order of 1 out of about 100,000,000 units of time (if we live that long)!  Homo sapiens have existed for much less than 1/10,000th of the lifetime of the universe, and “intelligent life” for much less than that.  Our individual existences, and even the existence of humanity, are but an instant if the observation is made from a particular perspective.  But we think we know that our real lives are not instantaneous.  It all depends on your perspective.
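
The arithmetic behind these ratios is simple enough to check.  The ages below are commonly cited round numbers, used here for order-of-magnitude purposes only.

```python
# Rough ratios using commonly cited ages (order of magnitude only).
universe_yr = 13.8e9     # approximate age of the universe
human_life_yr = 80       # a long human lifetime
homo_sapiens_yr = 300e3  # rough age of our species

print(f"one lifetime / universe: 1 / {universe_yr / human_life_yr:,.0f}")
print(f"Homo sapiens / universe: 1 / {universe_yr / homo_sapiens_yr:,.0f}")
# ~1/170,000,000 and ~1/46,000: the same orders of magnitude as in the
# text above.
```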

We can use absolutes to get concepts across, to try to organize the workings of our human society, and perhaps even to find ways to move humanity forward (using whatever definition of humanity works for us), but I think we are truly enlightened only if we are able to get beyond these “absolutes” and wrap our heads around the reality of the ambiguity of almost everything, and incorporate this concept into the principles that we all individually live by.  Life is analog!

The article below is somewhat related.  The argument is being made that nothing is truly alive.  But I think the actual issue here is that we are trying to fit a digital concept of life and death into a world that is really analog.  It is a hard argument to make that life and death are not real for humans.
http://www.nytimes.com/2014/03/13/opinion/why-nothing-is-truly-alive.html?ref=opinion
(Even if you do not read the article, click through to this website to see something unique.)

The Unending Battle to Protect Audio/Video Content in the Entertainment World

It is quite interesting, and even amusing, to see how the battle for content protection in the entertainment world continues even to this day.  It was not too long ago that the entertainment industry, including the content providers, the content distributors (cable, satellite, etc.), and the manufacturers of content viewing devices, i.e., TVs, came up with the strategy of making analog video interfaces in High Definition TV sets obsolete so that high quality video recordings could not be made on devices like VCRs.  Digital interfaces with content protection became the industry standard.  The content owners managed to force the issue so that you could not be a player in the business without following particular rules for protecting their content.  Content distributors had to toe the line with the content providers to be able to receive content, and the manufacturers of entertainment viewing devices depended in turn on the rules created by the content distributors in order to connect to their networks.

But eliminating analog interfaces does not in itself prevent the customer from making recordings.  The digital format is perfect for recording!  The key difference from the world of analog video interfaces was that the industry recognized that this time it was still in a position to create rules for making digital recordings, something it had not been successful in forcing on consumers during the time of the VCRs.  Strategies were devised to limit, and eventually to disallow, consumer video recordings via digital interfaces.  The initial strategy was to manage the process of making digital copies of content in the home: either managing the nature of the copying process allowed, or limiting the number of copies allowed, or disallowing copying completely.  This was enabled through rules that a manufacturer would have to agree to in order to be licensed to receive certain types of content.  It has now gotten to the point where a consumer finds it difficult to even make a decent quality copy of the content for his or her own use, or for archiving, when receiving content on a television set.
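
The copy-management rules can be thought of as a small state machine attached to the content.  The sketch below mirrors the general "copy freely / copy once / copy no more / copy never" model used in digital-interface protection schemes; the enforcement logic is my own simplified illustration, not any particular specification.

```python
from enum import Enum

class CCI(Enum):
    """Copy Control Information states carried with the content."""
    COPY_FREELY = 0
    COPY_ONCE = 1
    COPY_NO_MORE = 2
    COPY_NEVER = 3

def record(state: CCI):
    """Return (recording allowed?, CCI state to mark on the new copy)."""
    if state == CCI.COPY_FREELY:
        return True, CCI.COPY_FREELY
    if state == CCI.COPY_ONCE:
        return True, CCI.COPY_NO_MORE  # the copy cannot be copied again
    return False, None                 # COPY_NO_MORE / COPY_NEVER: refuse

print(record(CCI.COPY_ONCE))   # (True, <CCI.COPY_NO_MORE: 2>)
print(record(CCI.COPY_NEVER))  # (False, None)
```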

You would think that the gradual tightening of the screws by the content providers would make piracy more difficult, but the truth is that it is only changing the nature of the process.  While the industry comes up with technical approaches to make pirating more difficult, the only way it can really try to stop piracy is through non-technical means – licensing agreements, monitoring, regulations, and legal action.  But can it even keep up with the technology and continue to manage it successfully?

One of the fundamental issues with preventing piracy of audio/video content stems from the very nature of the product itself.  Video is meant to be viewed on a device, and all you have to do is point a recording device at the viewing device, and voilà, you have the ability to record what you are receiving.  Once you have this content, the world of the Internet allows you to share it with others, and applications such as YouTube further enable the process by making this functionality easy to use.  It used to be that the analog copies made by pointing a camera at a screen were not very good, but that technology is also improving.  Furthermore, the definition of the viewing device is also changing.  Entertainment can now be consumed on devices other than TV sets, such as PCs and smartphones, and the quality of the viewing experience on these devices is constantly improving.  Since these other devices are primarily meant for non-entertainment purposes, it is more challenging for the entertainment industry to force the issue of implementing content protection measures in them.

I heard recently that there are now some Internet vendors who have implemented applications that enable the live streaming of video from one consumer location to others.  As a practical matter this enables piracy to take place very easily; for example, high value Pay Per View (PPV) content being received by one paying subscriber can be streamed to a bunch of non-paying consumers.  This kind of capability parallels what YouTube did for recorded video content.  The content providers have a hard time shutting down these types of operations because it could be argued that the primary functionality these companies offer is not related to commercial content and piracy: they are primarily enabling sharing of content in general.  Shutting down this kind of service would be equivalent to creating an industry ban, or regulations, on the camcorders with which families make recordings of family events, because those devices can also be used for piracy.  What the content owners are limited to doing is trying to influence the operations of the companies that provide the services in question, or the devices used to generate the content.  As an example, in the case of YouTube, mechanisms have been instituted to try to identify pirated content within the network itself.  The content owners could even try to force the camcorder or camera manufacturers to include features in their products that would automatically prevent recording of protected content.  (It is actually technically possible using a technology called watermarking.)
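
To illustrate the watermarking idea at its very simplest, here is a toy sketch that hides a known bit pattern in the least-significant bits of an image and then checks for it in a copy.  Real content-protection watermarks are far more sophisticated, designed to survive re-encoding and even camcorder capture; treat this purely as a sketch of the concept.

```python
import numpy as np

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least-significant bits of a frame."""
    marked = frame.copy()
    flat = marked.reshape(-1)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return marked

def detect(frame: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the known bit pattern is present in the frame."""
    flat = frame.reshape(-1)
    return bool(np.all((flat[:bits.size] & 1) == bits))

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
mark = np.random.randint(0, 2, size=64, dtype=np.uint8)
print(detect(embed(frame, mark), mark))  # True: the mark is detectable
```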

But it could be argued that all of these efforts are for a losing cause.  With the development of cloud technology, including network storage and sharing capability via applications such as Dropbox and Google Drive, content sharing becomes more decentralized and difficult to track.  It is not just YouTube that the content providers need to focus on and deal with.  Unless network snooping and monitoring protocols are implemented and made legal for commercial purposes, it will be close to impossible to monitor piracy in such a scenario.  It can also be an expensive proposition to implement this kind of monitoring.  And with an abundance of bandwidth available to the consumer, and device capabilities improving in the home, such decentralized sharing will only become more and more practical.  To the extent that piracy is achieved through technologies that are becoming more and more common, and that are legitimately meant for general purposes other than piracy, the content providers will be at a loss to prevent it from happening from a technical perspective.  The only thing they can do is monitor content on the Internet, try to identify sources of pirated content, and shut down each of these sources (or put the fear of God into the common man, like the RIAA tried to do a few years back) by resorting to legal arguments and processes.  While it might be technically possible for the content owners to do all of this, the democratization of the piracy capability can make this a very daunting challenge for the industry going forward.  At some point they may actually have to depend on the goodwill of the average person, and it will be a fine line for them to walk between offering a product at a cost the consumer is willing to pay and making a “pirate” out of the consumer, with the attendant need to spend tons of money on enforcement.  At the end of the day, it is a cost-benefit tradeoff, and the analysis is based on the perceived value of the product that the industry is providing to the common man.  It is hard to imagine that all this fuss is over entertainment, something that does not seem essential for our survival.

The FCC Moves into Action on Net Neutrality

Historically, Internet Service Providers (ISPs) have been able to provide their services without any real regulation targeted specifically at the conduct of this business.  This situation is about to change.  The Federal Communications Commission (FCC) has decided that broadband Internet will be regulated going forward (link) under the umbrella of Title II of the Telecommunications Act.  The concept is called Net Neutrality.  The ISPs will not be allowed to manage their Internet resources in such a manner as to discriminate against users, whether those users are companies that use the Internet or consumers.  This is huge!  You might wonder why this is happening and what all the fuss is about.  Let me give you my own take on this.

For the past several years the growth of the Internet has been driven primarily by the Internet Service Providers who happened to have access to customers because they traditionally provided other services to those customers, such as cable TV and voice.  Service might be provided via a traditional landline connection to the home, or through a mobile phone connection using a wireless cellular network.  There have been a few exceptions, but the big players today, not surprisingly, happen to be companies like Verizon, AT&T, Comcast, etc.  In the case of services to the home, these companies started out by taking advantage of existing infrastructure, and then built on this infrastructure (coax or copper-based) or moved to new infrastructures (such as fiber optics) as the demand for greater levels of service grew.  The growth in the mobile area, on the other hand, is being spurred on by new technologies from the ground up, including the use of new wireless bandwidth resources assigned for this purpose, and the development of advanced transmission technologies that can support higher speed data service (as opposed to voice service) requirements more effectively.

In the early years, the applications that drove the use of the Internet were fairly simple, starting out with basic point-to-point requirements across the network infrastructure (think e-mail), and gradually evolving towards more client-server type interactions.  The amount of data being transferred across the Internet kept increasing as this was happening.  The flow of data that came with functions like browsing was primarily unidirectional – to the home.  The data movement patterns became, in a certain sense, less uniform than in the early days.  But the types of applications running in the home have evolved significantly further in the last few years, with applications that involve significant uploads of data from the home to network servers, and also significant peer-to-peer traffic.  People can now even stream videos from devices like smartphones to others, something that was unthinkable a few years ago.  The traffic patterns in the network are a constantly changing story because of the innovation that is going on.

New kinds of applications are also increasing the traffic in the Internet further in many different ways. Video streaming services such as Netflix dominate bandwidth usage to the consumer these days. (link)

These days, Internet traffic is not necessarily driven by the customer.  There is data being pushed to people who do not even know that something like this is going on.  There is advertising being pushed to customers based on data collected about them by network-connected devices that monitor their Internet behavior.  There are routers on the networks intercepting data traffic and taking actions based on what is seen, such actions including the sending of additional data to the customer.  In many cases, the Internet Service Providers are themselves affecting the amount of traffic in the system.  When customers interact with vendors across the Internet, such interaction can initiate further communication between these vendors and one or more third parties that now form a part of the transactions going on.  And then we have the data traffic from illegal or semi-illegal goings-on on the Internet, where entities unknown to end-users bury software in their computers that generates traffic to and from these computers without the user’s knowledge.  (Such is the danger of always being connected to the Internet.)

And all of this is a viewpoint from the consumer applications only.  All networking for commercial interactions also uses the same resource that the consumer applications are sharing.

Essentially, the Internet is the Wild West in terms of the nature of the data traffic.  This was the promise of the Internet, and I am not sure if it will also become its bane.  I am not certain exactly how the ISPs keep a handle on the traffic on the network links today.  Innovations in Internet applications happen constantly, and each one of these has the potential to change the nature of the traffic on the network and the manner in which ISPs manage their bandwidth resource.  In such a happy circumstance of innovation, one has to ask why anybody would think there is a need for regulation.  To answer that, one needs to look at all of this from a different angle.

First of all, here is something going on in the global picture that might be shocking to some people.  The US is far behind some other countries when it comes to the Internet.  If one were to just look at the average speed of Internet connectivity available to customers, several countries in Asia appear at the top of the list while the US is nowhere near the top.  Also, according to one study of Internet penetration done in 2013, the US ranked 29th in the world, with a penetration rate of 84.2 percent (link).

What is probably happening is that in a completely market-driven environment the ISPs are selectively focusing their efforts and attention where they can get the most bang for their buck.  At the end of the day they have to make money for the stockholders.  It turns out that there are still underserved and unserved areas in the US as far as mainstream broadband Internet access is concerned, and it would appear that this situation is not about to change on its own.  The other aspect to consider is that the Internet can no longer be considered a luxury for the common man.  It is becoming a basic necessity, just like any other public utility.  Our lifestyles have changed significantly during the last few years, and they will continue to change because of the Internet.  It is not just the new applications made available on the Internet that move us in this direction.  More and more of the traditional service providers are trying to adjust their operations so that more of their interactions with the customer happen through the Internet.  In fact, people who do not keep up with this rapid change in the way business is done are in danger of being left behind.  It now makes sense to consider the Internet a basic necessity.  This is the point at which government has to begin playing a role in what is going on, so that people are able to get what they need.  This aspect of the development and use of the Internet has already been recognized by more forward-looking countries, whose governments have taken a more active role in helping shape the development of this resource.

The ISPs have also become smart to the game and have determined that there might be additional money to be made not just by charging the customers, but also by selling access to their networks to the vendors that use them.  If they do so, they will be able to directly influence and control the customer’s experience of services from these vendors.  This is already happening (link)!  The ISPs can now have control over how businesses that use the Internet succeed or fail, and ISPs may even try to get into the business of providing such services to customers themselves while giving themselves an advantage (e.g., Comcast might stream NBC programs with better Quality of Service (QoS) simply to give Netflix, HBO, ESPN or ABC a bad name).  Companies like Google and Netflix support Net Neutrality while those like AT&T and Verizon do not.

But the tricky part about regulating the Internet is that it is still an evolving mess.  Any regulation that is put into place has to be done with a light hand.  If not done properly, it can stifle the industry.  There has to be room for the Internet to continue on a path of development.  ISPs will need to continue to improve their networks to support new capacities and capabilities that are yet to be determined, and applications and traffic patterns that continue to evolve.  There should really be no issue as long as network capacities are always beyond the data loads being carried.  But there does come a point where traffic needs to be managed, either when there are temporary bursts of traffic due to the nature of the applications running across the networks, or when the networks are not properly sized to support all the traffic that is allowed to connect into them.  The ISPs should be free to manage the data flow, even slowing it down as needed to manage the bottlenecks, but they must do it in a way that is fair.  But how does one define what is fair?  Should one type of traffic fundamentally have priority over others, or is all traffic equal?  For example, is Netflix streaming more important than other data such as a real-time Skype session, and, if so, what speed of Netflix streaming must be allowed?  Do different kinds of browser traffic have different priorities?  One has to try to find generic, non-specific answers to questions like these.  It can become quite messy and dirty if one tries to solve each of the problems individually by jumping into the weeds in each case.  If the FCC thinks it already knows the answer, it is fooling itself.  Hopefully the commissioners will keep an open mind and make sure they have some intelligent and experienced people working on this.  Regulators need to have insight into the global implications while dealing with the specifics of each element of regulation carefully.  And they have to do all of this even while they are being harangued by the lobbyists from various factions of the industry, who have their own differing interests at heart.
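
One concrete answer that network engineers often reach for is max-min fairness: give every flow an equal share of the bottleneck, and redistribute whatever the small flows do not use.  Here is a minimal sketch of the idea; the flow names, demands and link capacity are made-up numbers for illustration only.

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation: every flow gets an equal share, and
    whatever the small flows don't use is redistributed among the
    still-unsatisfied ones."""
    alloc = {flow: 0.0 for flow in demands}
    remaining = dict(demands)
    while remaining and capacity > 1e-9:
        share = capacity / len(remaining)
        for flow, demand in list(remaining.items()):
            grant = min(share, demand)
            alloc[flow] += grant
            capacity -= grant
            if demand - grant <= 1e-9:
                del remaining[flow]       # flow fully satisfied
            else:
                remaining[flow] = demand - grant
    return alloc

# Illustrative demands in Mbit/s sharing a 20 Mbit/s bottleneck:
alloc = max_min_fair({"video": 25, "browsing": 3, "voip": 1}, capacity=20)
print({flow: round(mbps, 2) for flow, mbps in alloc.items()})
# {'video': 16.0, 'browsing': 3.0, 'voip': 1.0}: the small flows are
# fully served, and the big flow absorbs whatever capacity is left.
```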

Some say that regulation will stifle innovation.  My take is different.  I believe that it will shape the nature of the innovation rather than stifle it.  It might even shape it in a very significant way.  It could affect which businesses end up being successful in the industry.  And what is wrong with that?  The truth of the matter is that a lot of the technological innovation in industries like this happens because of the rules that the industry lives by, not all of them related to improving service to the customer, and sometimes because of regulation.  The entertainment production and distribution system is a prime example of such an environment.  A fundamental element in the conduct of the entertainment distribution business is copyright protection, and the rules of the game come from both the government and the industry in this regard.  Many unique systems are in place from the perspective of content protection, not necessarily all for the benefit of the customer.  Regulation does not necessarily drive away innovation; sometimes it creates opportunities.

Will regulation lead to more cost to the customer?  Will regulation be such that ISPs and users of the Internet are able to continue to innovate, grow their businesses, and provide an adequate and fair level of service to their customers?  I think we do not need to be afraid in this regard, but only time will tell whether I am right or wrong.

http://business.time.com/2013/01/09/is-broadband-internet-access-a-public-utility/