Cloud as a technology has taken the world by storm, and adoption is increasing at a tremendous rate. Spending on public cloud (hardware and software) is predicted to grow from $38 billion in 2016 to $173 billion in 2026, a compound annual growth rate of roughly 16%. Worldwide public IT cloud service revenue in 2018 is predicted to be $127 billion. Although dollar values are harder to come by for private cloud, its adoption has gone up by 9% over 2015, with 31% of enterprises now running more than 1,000 virtual machines. Hybrid cloud adoption has increased by 13% over last year, and cloud users run around six cloud applications on average. Cost management, scalability and security are among the most important cloud challenges, and containerization has immense potential to solve the cost and scalability challenges.
So how does containerization fit into the cloud puzzle?
Virtualization and virtual machines are an essential part of the cloud. In a typical infrastructure-as-a-service (IaaS) architecture, compute is distributed across virtual machines that run on physical servers, and the virtual machines are not tied to any particular server. Each application running on one of these virtual machines, however, carries a complete copy of the operating system along with all the libraries the application needs. This duplication leads to a lot of unnecessary wastage of memory, bandwidth and storage.
Containerization eliminates this wastage. It deploys each application in its own “container”: essentially a “self-sustaining execution environment” that packages all the libraries and binaries the application needs, while sharing a single instance of the host operating system with every other container on the machine. This creates a kind of multi-tenancy at the operating system level.
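As a minimal sketch of this idea (assuming Docker as the container runtime, with its Python SDK installed via pip install docker), the snippet below starts a throwaway container that carries its own binaries and libraries yet reports the host’s kernel, showing that the operating system is shared rather than duplicated:

```python
import docker  # Python SDK for the Docker Engine: pip install docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# The alpine image ships only a few megabytes of userland binaries and
# libraries; the kernel is the host's, shared with every other container.
output = client.containers.run(
    "alpine:3",
    ["uname", "-r"],   # print the kernel release the container sees
    remove=True,       # clean up the container as soon as it exits
)
print(output.decode().strip())  # same kernel release as the host
```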
What does containerization bring?
Containerization takes IT automation to an entirely different level. It adds speed. Responding to dynamic fluctuations in demand becomes very effective because containers can be allocated and de-allocated in near real time (read: seconds). Containers also allow far more computing workloads to be packed onto a single server.
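As a hedged illustration of that near-real-time allocation (again assuming Docker and its Python SDK; the image and fleet size here are arbitrary), the sketch below spins up a small fleet of containers and tears it down again in seconds:

```python
import time
import docker

client = docker.from_env()
client.images.pull("nginx:alpine")  # pull once so timing excludes the download

# Allocate a small fleet of containers, then de-allocate it again.
start = time.perf_counter()
fleet = [client.containers.run("nginx:alpine", detach=True) for _ in range(5)]
print(f"started {len(fleet)} containers in {time.perf_counter() - start:.2f}s")

for container in fleet:
    container.remove(force=True)  # stop and delete in a single call
print(f"fleet gone after {time.perf_counter() - start:.2f}s total")
```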
Reported application density improvements range from 10 times to more than 100 times per physical server. That means less hardware to buy, less data center space to rent and fewer people needed to run it all; the end result is direct savings. At the scale at which physical servers and data center space are consumed, this means millions of dollars in savings for large data centers. To get an idea of the scale, consider some of the largest data centers in the world: Range International Information Hub, Langfang, China (6.3 million square feet); Switch SuperNAP, Las Vegas, Nevada (2.2 million square feet); the Utah Data Center, Bluffdale, Utah (over 1 million square feet); and the Lakeside Technology data center, Chicago (1.1 million square feet). A 10-fold efficiency increase in even one of them translates into a potential for hundreds of millions of dollars in savings.
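That claim is easy to sanity-check with back-of-envelope arithmetic. Every figure in the sketch below is a hypothetical assumption, not sourced data; only the 10x density ratio comes from the paragraph above:

```python
# Back-of-envelope density arithmetic; all figures are assumed, not sourced.
workloads = 100_000              # application instances an operator must host
vms_per_server = 10              # assumed VM density per physical server
containers_per_server = 100      # the 10x density improvement cited above
cost_per_server_usd = 10_000     # assumed all-in cost of one physical server

servers_with_vms = workloads / vms_per_server                 # 10,000 servers
servers_with_containers = workloads / containers_per_server   #  1,000 servers
savings = (servers_with_vms - servers_with_containers) * cost_per_server_usd
print(f"servers avoided: {servers_with_vms - servers_with_containers:,.0f}")
print(f"hardware savings: ${savings:,.0f}")  # ~$90M on hardware alone
```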
Containers can be extremely effective in dealing with scalability issues. A container can boot in as little as 0.05 seconds, and containers can be created far more quickly than virtual machines because they use the host server’s operating system kernel, avoiding the need to retrieve tens of gigabytes of operating system from storage. This lets a development team activate project code, test it in multiple ways and provide additional capacity very quickly. The biggest proof of the scalability of containers is Google’s search operation: to keep search running, Google reportedly launches around 7,000 containers every second. Demand for search is also extremely elastic, rising and falling in each data center with the time of day and the events of the moment.
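Boot speed is equally easy to measure. The sketch below (assuming Docker and its Python SDK) times a full create-run-exit round trip; the measured figure will exceed the raw 0.05-second process start because it includes Docker API and cleanup overhead:

```python
import time
import docker

client = docker.from_env()
client.images.pull("alpine:3")  # pull up front so timing excludes the download

# Time a full create -> run -> exit -> remove cycle. The raw container start
# is a fraction of this; the rest is Docker API and cleanup overhead.
start = time.perf_counter()
client.containers.run("alpine:3", ["true"], remove=True)
print(f"container round trip: {time.perf_counter() - start:.3f}s")
```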
Security is one aspect on which opinion is somewhat divided. The fact that containers are isolated and cannot talk to one another by default provides a sense of relief; however, experts do raise hypothetical concerns. Containers on the same host share CPU, memory and disk space in very close proximity. If one container can be made to talk to another, and either is infected with malware that snoops for encryption keys in all the data visible to it, security can be compromised.
Some experts are exploring containerization as a way to deal with the security problems arising from enterprise adoption of BYOD (Bring Your Own Device). The idea is to separate corporate mobile apps and data on the device from the employee’s personal apps and data. In theory, containerization makes it possible to separate out and manage corporate e-mail, applications and data to enforce security. Various container technologies promise authentication, encryption, data-leakage protection, cut-and-paste restrictions and selective content wiping. That promise, however, may be undermined by the inability of Apple iOS devices to prevent jailbreaking; experts believe containers cannot protect rooted devices.
Containers do show a lot of promise on cost and scalability, but they are still far from becoming an integral part of the enterprise technology stack. Experts expect that territory to remain the preserve of virtual machines for some time yet. As with every technology, only time will tell the exact extent and impact of widespread adoption. For the time being, enterprises are treading with caution, and there is no rush to adopt containers.
Goutham is a former Happiest Mind and this content was created and published during his tenure.