CLOUD COMPUTING | THE EVOLUTION OF COMPUTATIONAL DNA

MARCH 16/MICHAEL BAYLOR

Let me begin by acknowledging that this discourse barely scratches the surface of a subject that deserves far more detail. It is a stream-of-consciousness exercise intended to help me flesh out some ideas for a series of treatises on Cloud computing that I am presently working on.

I have been looking for a proper place for this particular stream of thought and needed to reason through it a bit further. It seemed sufficiently interesting to stand alone as a post, if only for my own desperate need for intellectual exercise.

Onward then!

DNA serves as a mechanism for the long-term storage of information; it is essentially a set of blueprints containing the instructions needed to construct other cellular components. It has a significant influence on future generations, expressing itself in specifically identifiable physical (and, some claim, character) traits in its descendants.

The simple elegance of its double-helix structure belies the complexity lying beneath the surface. This is analogous to the abstracted layers of Cloud computing, which conceal much of the complexity (the DNA) of the underlying infrastructure necessary to deliver Cloud services.

I am a student of the history of computing and am fascinated with the evolution of computing models. Cloud computing is of particular interest to me because of its DNA: that is, the lineage of its ancestors, which is unmistakable when one examines the traits influenced by prior generations of computing models (e.g., Mainframe, Client-server).

There was once a study that examined the influence of DNA on one’s vocation. The investigation focused on Fishermen and their sons. After an exhaustive analysis of hundreds of Fishermen and their sons, the “scientists” who conducted the study made an astonishing proclamation: they concluded that the majority of sons of Fishermen would themselves also become Fishermen!

I hope that whoever commissioned the study did not pay a lot of money for that insightful information.

The reason for sharing that story is its analogy to the evolution of computing models. The “Cloud” is not (as Neil Ward-Dutton and others have already acknowledged) a revolutionary concept. That is not to trivialize the importance or significance of Cloud computing; it is, more accurately, evolutionary.

That is, it has evolved from earlier forms of computing models but contains unmistakable traits that it inherited from previous generations. Some good DNA, some bad.

Our computing DNA provides valuable insights into the evolution of future generations and what we should watch for in terms of risks to our future well-being. We need to understand it in order to identify weaknesses or flaws that could jeopardize future generations.

One of the most valuable lessons we can learn from the history of our Computational DNA is the mistakes that were made, and the most valuable lesson we can learn from those mistakes is not to repeat them or, at minimum, to identify alternative approaches.

However, for some reason we ignore genetic traits that may predispose us to fatal diseases. After all, we know much more today than our predecessors did, and history has little to teach us about the future.

Ego check!

Before we get too full of ourselves, check out the famous (or infamous) quotes below from past industry luminaries (from Stephen White’s “A Brief History of Computing”).

  • 1899 “Everything that can be invented has already been invented.”, Charles H. Duell, commissioner of the U.S. Patent Office
  • 1943 “I think there is a world market for maybe five computers.”, Thomas Watson, chairman of IBM.
  • 1949 “Computers in the future may weigh no more than 1.5 tons.”, Popular Mechanics, forecasting the relentless march of science.
  • 1957 “I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year.” The editor in charge of business books for Prentice Hall.
  • 1965 Moore’s law published by Gordon Moore in the 35th Anniversary edition of Electronics magazine. Originally suggesting that processor complexity would double every year, the law was revised in 1975 to suggest a doubling in complexity every two years (see the short sketch after this list).
  • 1968 “But what … is it good for?” Engineer at the Advanced Computing Systems Division of IBM commenting on the microchip.
  • 1977 “There is no reason anyone would want a computer in their home.” Ken Olson, president, chairman and founder of Digital Equipment Corp.
  • 1980 “DOS addresses only 1 Megabyte of RAM because we cannot imagine any applications needing more.” Microsoft on the development of DOS.
  • 1981 “640k ought to be enough for anybody.”, Bill Gates
  • 1992 “Windows NT addresses 2 Gigabytes of RAM which is more than any application will ever need.”, Microsoft on the development of Windows NT
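
Aside: to make the revised law concrete, below is a minimal Python sketch of what “a doubling in complexity every two years” implies. The 1971 Intel 4004 baseline of roughly 2,300 transistors is my own illustrative assumption, not part of White’s list.

    # Purely illustrative sketch of the 1975 revision of Moore's law:
    # transistor counts doubling roughly every two years.
    # Assumption: the 1971 Intel 4004 (~2,300 transistors) as the baseline.

    def transistor_estimate(year, base_year=1971, base_count=2300):
        """Estimate transistor count, assuming a doubling every two years."""
        return base_count * 2 ** ((year - base_year) / 2)

    for year in (1971, 1981, 1991, 2001):
        print(year, round(transistor_estimate(year)))

Run it and the count climbs from 2,300 in 1971 to roughly 75 million by 2001, which is the whole point of the “law”: modest-sounding doublings compound into staggering growth.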

In the 1950s we just “knew” that the mainframe was the future of computing, in the ’60s we just “knew” that mini-computers were the future of computing, in the ’70s we just “knew” that super-minis were the future of computing, in the ’80s we just “knew” that client-server was the future of computing, in the ’90s we just “knew” that the Internet was the future of computing, and now we just “know” that Clouds are the future of computing.

Just think of what we will “know” tomorrow!

At the end of the day we are all just sons of Fishermen, ourselves predestined to also become Fishermen (but with much cooler and faster boats)!