The term Artificial Intelligence (AI) was coined by John McCarthy, although he later considered it a misnomer. “I should have named it machine intelligence”, he said on more than one occasion. Since the 1950s, a great deal of the original optimism and unrealistic expectation has gone out of AI and has been replaced with a degree of realism. The aim of the study of AI is no longer to create a robot as intelligent as a human, but rather to use algorithms, heuristics, and methodologies based on the ways in which the human brain solves problems. With the rise of distributed computing, the power of supercomputers has been brought to the masses, enabling them to use machine intelligence in a fast and cost-effective way that was never possible before. More recently, Cloud computing has brought this enormous machine wisdom to various businesses, giving them the option to outsource their complex problems and get them solved in the Cloud. I call it “Outsourced Intelligence”.

Take any business problem. From supply-chain management systems to “what-if scenario” business simulations; from data trend analytics to data warehouse classification algorithms… just type the name of the problem, Google it, and you will get a bunch of solutions in the Cloud tailored for the particular problem you have at hand. In many cases these solutions will offer a seemingly complete package with end-to-end data integration. The code in the Cloud will provide the intelligence for your problem.

But will you ever know exactly what logic is going to be used to solve your particular problem? Just try to dig into this. Browse through the literature, query the sales representatives and dig deep through the provided documentation. Chances are that this key information will be missing.

But isn’t this the most important question: what algorithm lies behind the intelligence, and how accurately will it behave under your particular system and workload conditions? You may be led to believe that this information is the “secret sauce” which cannot be revealed. After all, these “secret sauces” are fiercely guarded business secrets which are not to be shared. But what about more general information about the algorithm… what about some information on the class of algorithm, its performance and accuracy, the compromises it makes and the constraints it relaxes, any insight into its performance upper and lower bounds, its comparison with the optimal, its objective function formulation if Integer Programming or Linear Programming is in place… and the proof that the heuristics will actually work, if any heuristics are being used?
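To make the stakes concrete, here is a toy example (entirely my own illustration, not any vendor’s actual algorithm): a tiny resource-allocation problem solved exactly as a Linear Program by enumerating the vertices of its feasible region, and then by a plausible greedy heuristic. The gap between the two answers is exactly the kind of number a provider should be able to quote.

```python
from itertools import combinations

# Maximize profit = 1*x + 2*y subject to x + y <= 4, x + 3*y <= 6, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def profit(x, y):
    return 1 * x + 2 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Exact LP solution: the optimum of a linear objective lies at a vertex of the
# feasible polygon, so enumerate all pairwise intersections of boundaries.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries never intersect
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

optimal = max(profit(x, y) for x, y in vertices)  # 5.0, attained at (3, 1)

# Greedy heuristic: pour everything into the highest-profit variable first.
y = min(4 / 1, 6 / 3)        # y is capped at 2 by the second constraint
x = min(4 - y, 6 - 3 * y)    # no slack left for x
greedy = profit(x, y)        # 4.0

print(f"optimal = {optimal}, greedy = {greedy}")  # optimal = 5.0, greedy = 4.0
```

The heuristic leaves 20% of the profit on the table, and nothing in its output hints at that. Without the provider disclosing this kind of analysis, the customer has no way of knowing.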

Recently I was surprised to learn that some of the biggest names in the IT industry are using this Outsourced Intelligence without any knowledge of the key metrics for their critical supply chain management solutions. No doubt the algorithms will be able to provide a solution that works in any case… but how optimal that solution will be is another story. Few seem to care, but beware: “harder work can offset lower IQ” is only true for human intelligence. For machine intelligence, even a few compromises in the algorithms may have huge unwanted implications once in a while.


Cloud computing is said to be a funny business. The term itself is sometimes derided by technical professionals as nothing more than a marketing term used to sell recycled technologies in a brand new package. So, to what extent is Cloud computing actually a marketing gambit, and what is truly innovative about it?

To analyze the innovation in Cloud computing, let’s analyze SaaS, PaaS and IaaS separately.

To be honest, the idea of SaaS isn’t actually new, but the term SaaS is. SaaS simply refers to software that is provided on demand for use. To some extent, IaaS isn’t conceptually new either. People have been colocating in data centers for as long as data centers have been around, and virtualized software infrastructures have been in use for years now. Although there is continuous innovation in SaaS and IaaS in hypervisors, scalability, elasticity, integration, load-balancing and dealing with multi-tenancy issues, let’s face it… a real skeptical critic may still label them old wine in a new bottle.

What about PaaS? Unlike IaaS and SaaS, PaaS is a much more abstract concept. PaaS providers offer a platform for others to use. Usually what is being provided is part operating system and part middleware. A proper PaaS provider takes care of everything needed to run some specific language or technology stack. Each PaaS offers a solution suitable for a particular environment and, in most cases, it’s a totally new approach to decade-old problems.

PaaS is a bright, promising star on the horizon of the Cloud, whose “innovation factor” cannot be easily challenged. This is the real disruptive-technology part of the Cloud stack that Gartner analysts refer to so often.

These days many PaaS providers are providing multi-tenant solutions. This means that not only is the physical hardware shared among multiple virtual machines, but the virtual machines themselves may host several different applications from several different customers. Not long ago, PaaS used to be synonymous with three things: Google App Engine, VMware’s Cloud Foundry/vFabric and Microsoft’s Windows Azure. Now a whole bunch of companies are launching new PaaS offerings to provide solutions in various domains.

Following are three examples.

1) Alcatel-Lucent’s CloudBand

Like many telcos, Alcatel-Lucent (ALU) found itself lagging behind in the Cloud computing race. In 2011, ALU had to get serious about its Cloud strategy and needed to offer something different and innovative. Recently it unveiled the roadmap of its Cloud infrastructure, named CloudBand, and PaaS is the cornerstone of it. ALU has always been proud of its SAM portal technology, a graphical portal that can be used to provision services on its routers. This is something that is not offered by competitors like Cisco. ALU has extended these router-provisioning services through a set of PaaS tools provided under the CloudBand offering. The use of PaaS in CloudBand should provide a much more powerful, flexible and extensible solution for the user. It is one of the really innovative ways of using the power of PaaS.

2) Uhuru

Uhuru is a PaaS technology that targets a specific segment of the developer community: those developers who have to use .NET, but do not want to take the “official” solution offered by Windows Azure. Uhuru’s product provides native Microsoft .NET extensions to VMware’s Cloud Foundry. By using Uhuru together with Cloud Foundry, Windows .NET developers are free to select the most appropriate cloud service from among the many competing providers. Both private and public clouds are supported with Uhuru .NET Services for Cloud Foundry.

3) AppScale

AppScale is an open-source PaaS technology that implements a number of popular APIs, including those of Google App Engine, MapReduce (via Hadoop), MPI and others. AppScale executes as a guest virtual machine (guestVM) over any virtualization layer that can host an Ubuntu Lucid image.

There are many other companies using PaaS in new and innovative ways, offering flexibility, power and extensibility that never existed before. But keep in mind that the dynamics of Cloud computing are changing all the time. Rapid innovation causes the lines between IaaS, PaaS, and SaaS to blur, sometimes significantly. Nonetheless, the promise of PaaS seems to be real and has already started to materialize.


Last week, while working on a project, I faced a seemingly simple question:

“So, what exactly is a Cloud?”

I started by quoting Ian Foster: a distributed system incorporating virtualization and providing scalability is a Cloud. To put things in perspective, I explained its typical attributes, such as elasticity, and then differentiated Cloud computing from cluster computing and grid computing.

After the meeting, I kept on thinking…. what exactly is a Cloud these days?

Can it be that clearly defined, or have we managed to cloud the exact meaning of Cloud terminology by its massive overuse?

From VMware’s hypervisor-based virtual infrastructure to clusters running Hadoop; from Google’s App Engine to ready-to-use CRM applications; from Eucalyptus-based enterprise computing to distributed analytics engines for supply chain management software… everything seems to be marketed as a Cloud. The concept of “private clouds” compounds the problem. Now it’s much easier to spin any on-premises technology as a Cloud.

And then age-old technologies have been re-packaged by marketing teams and sold as Cloud computing. For example, large data centers that have been in existence for the last decade have recently been re-branded as Clouds. In many cases, only the marketing brochures need to be reprinted with new price structures and the company is ready with its Cloud offering.

Gartner stated in 2011 that of the vendors who had briefed them on their Cloud computing strategies, very few actually managed to show how their strategies are really Cloud-centric.

But this overuse of the Cloud term is starting to have a clearing effect. As people and companies become more familiar with Clouds, they are digging down further. They are starting to ask what exactly is in the Cloud. I predict that, due to its massive overuse, the term “Cloud” may lose its “coolness” factor, and people will start to use terms named after the exact domain, like Business Analytics, Social Analytics, Context Enriching Services, Virtualized Offerings, Pay-as-you-go Computing, Compute Farms, Data Farms etc., instead of the large tent of the all-encompassing term “Cloud”.

In terms of Gartner’s hype cycle, Cloud computing technologies are settling down into the trough of disillusionment, it seems. And that’s not bad news.


There is something interesting about today’s announcement of the Kindle Fire by Amazon.

If you go through the announcement or the product’s webpage, unlike other tablets, there is no emphasis on the storage capacity of the device (16GB or 32GB etc.). In fact, the Cloud is to be used as the primary storage (not secondary or backup storage). It’s a bold move, and Apple cannot denigrate Amazon as a copycat, as it could with most of the other tablet manufacturers. The fundamental architecture of most other tablets was indeed a copy of the iPad’s architecture. The Kindle Fire is fundamentally a different animal. Amazon is one of the few companies with the infrastructure and expertise to use extensive Cloud services at the backend. This may translate into huge cost and scalability advantages.

But not so fast…

It is still risky. The use of Clouds in this form will have some technical implications that need to be looked into carefully. It remains to be seen how satisfactory the end-user experience will be, especially while using multimedia data. There will have to be some intelligent caching on the tablet to account for the unpredictable jitter and latency of WiFi connections. It may become a bottleneck, and hence a show-stopper, ruining all the superfast services and ultra-optimized infrastructure deployed by Amazon at the backend.
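To make the idea concrete, here is a minimal sketch (my own illustration, not Amazon’s actual design, and the `ChunkCache` name is hypothetical) of the kind of on-device LRU cache that could mask that jitter: recently used media chunks stay local, and only cache misses pay the slow, unpredictable network round-trip.

```python
from collections import OrderedDict

class ChunkCache:
    """Least-recently-used cache for media chunks fetched from the Cloud."""

    def __init__(self, capacity, fetch_from_cloud):
        self.capacity = capacity
        self.fetch = fetch_from_cloud   # fallback for misses (slow, jittery)
        self.store = OrderedDict()      # chunk_id -> data, oldest first

    def get(self, chunk_id):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)   # mark as most recently used
            return self.store[chunk_id]        # fast local hit, no network
        data = self.fetch(chunk_id)            # miss: pay the network round-trip
        self.store[chunk_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict the least recently used
        return data

# Usage: count how many requests actually go out over the network.
calls = []
cache = ChunkCache(capacity=2,
                   fetch_from_cloud=lambda cid: calls.append(cid) or f"data-{cid}")
for cid in ["a", "b", "a", "c", "a"]:
    cache.get(cid)
print(len(calls))  # 3 network fetches for 5 requests
```

Even this crude policy turns five requests into three network trips; a real device would layer prefetching and persistence on top, but the principle is the same: the less often the tablet has to wait on WiFi, the less the Cloud-first design shows through.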

The pricing also suggests a new business model for tablets, where the manufacturer is prepared to sell the device at a loss in the hope of making a profit from the sales of digital content (books, movies, games and apps).

It is potentially a game-changer in the nascent tablet market… whether it will actually change the game remains to be seen.

Let’s wait a bit. The 15th of November is not that far away, after all.

Imran Ahmad, PhD (Cloud Computing)
