The state of containers: 5 things you need to know now

David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting

Docker adoption is up fivefold in one year, according to analyst reports. That's an amazing feat: One year ago, Docker had almost no market share. Now it's running on 6 percent of all hosts, according to a survey of 7,000 companies by Datadog, and that doesn't include hosts running CoreOS and other competing container technologies. Most companies that adopt Docker do so within 30 days of initial production usage, the survey reports, and almost all the remaining adopters convert within 60 days.

The most common technologies running in Docker, Datadog reports, are:

  • Registry: 25% of companies running Docker also use Registry, presumably instead of Docker Hub (see the sketch after this list).
  • NGINX: Docker is being used to contain a lot of HTTP servers, it seems. Interestingly, the Apache HTTP Server (httpd) didn't make the top 10.
  • Redis: This popular in-memory key/value data store is often used as an in-memory database, message queue, or cache.
  • Ubuntu: It's still the default to build images.
  • Logspout: For collecting logs from all containers on a host, and routing them to wherever they need to go.
  • MongoDB: The widely used NoSQL datastore.
  • Elasticsearch: For full-text search.
  • cAdvisor: Used by Kubernetes to collect metrics from containers.
  • MySQL: The most widely used open source database in the world.
  • Postgres: The second-most widely used open source database. Adding the Postgres and MySQL numbers together, it appears that using Docker to run relational databases is surprisingly common.
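
To make the Registry data point concrete, here's a minimal sketch of standing up a private registry and pushing an image to it instead of Docker Hub. The port, repository name, and tags are illustrative, not prescriptive.

    # Run a private registry container on the local host
    docker run -d -p 5000:5000 --name registry registry:2

    # Re-tag a local image so its name points at the private registry
    docker pull ubuntu:14.04
    docker tag ubuntu:14.04 localhost:5000/my-team/ubuntu:14.04

    # Push the image to the private registry instead of Docker Hub
    docker push localhost:5000/my-team/ubuntu:14.04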

The success of Docker and containers can be attributed to the following megatrends:

  • The rise of cloud. IT is looking to make cloud applications portable and scalable at the same time.
  • The rise of DevOps. Containerization meshes well with both DevOps processes and tools. Thus, those who are moving to DevOps are typically moving to containers as well.
  • The rise of the strategic use of data. Some of the applications that run on Docker (in the list above) are data-oriented.

DockerCon EU 2015 is over, and Docker has some more announcements to share. Here's a short summary:

  • Docker announced Project Nautilus, a new image scanning and vulnerability detection service for Official Repositories on Docker Hub. Upgrades to the Docker Hub automated build service, combined with Docker's recently acquired Tutum hosted platform, add up to an end-to-end containers-as-a-service offering that is available now. Also announced was a new release of Docker Trusted Registry that integrates with Docker Content Trust for image signing, integrity, and authenticity.
  • Other announcements relate to a partnership with Amazon Web Services. Docker Trusted Registry and Docker Engine with Business Day Support are now available in AWS European regions: Docker Subscription for AWS has expanded beyond the AWS US regions and is available on demand through the AWS Marketplace, with a 30-day free trial and hourly and annual subscription options.
  • Finally, Docker is claiming that a single Swarm manager can now handle 1,000 nodes running 30,000 containers. Clearly, this is a shot across the bow of Google. Docker recently took Swarm out of beta and released version 1.0. It's being used by organizations like O'Reilly for building authoring tools, the Distributed Systems Group at Eurecom for scientific research, and Rackspace, which built its new container service, Carina, on top of it.
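
For those who want to try Swarm themselves, here's a minimal sketch of standing up a standalone Swarm 1.0 cluster using the hosted token discovery backend. The port numbers, addresses, and token are placeholders.

    # Generate a cluster token via Docker Hub's hosted discovery
    docker run --rm swarm create
    # -> prints a cluster token; substitute it for <cluster_token> below

    # Start a Swarm manager that tracks the cluster
    docker run -d -p 4000:2375 swarm manage token://<cluster_token>

    # On each node, join the cluster (the address is the node's own)
    docker run -d swarm join --addr=192.168.0.11:2375 token://<cluster_token>

    # Point the Docker client at the manager and treat the cluster as one host
    docker -H tcp://<manager_ip>:4000 info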

On the CoreOS side of things, the company is introducing Clair, an open source vulnerability analysis tool for containers. Clair is an API-driven analysis engine that inspects containers layer by layer for known security flaws, letting you build services that provide continuous monitoring for container vulnerabilities.
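
As a rough illustration of that API-driven workflow, the sketch below submits a layer to a running Clair instance and then asks for its vulnerabilities. The endpoint paths and payload fields are based on Clair's v1 REST API and may differ between versions; the host names, registry URL, and layer digest are assumptions.

    # Submit an image layer for analysis (fields follow Clair's v1 API;
    # the registry URL and layer digest here are illustrative)
    curl -X POST http://clair:6060/v1/layers -d '{
      "Layer": {
        "Name": "<layer_digest>",
        "Path": "http://registry:5000/v2/my-team/app/blobs/<layer_digest>",
        "Format": "Docker"
      }
    }'

    # Retrieve the known vulnerabilities detected in that layer
    curl "http://clair:6060/v1/layers/<layer_digest>?vulnerabilities"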

October saw the release of Apache Mesos 0.25.0, which includes the following features and improvements: support for maintenance primitives; new master endpoints for dynamic reservations; and extended module APIs to enable per-container IP assignment, isolation, and resolution. In addition, more than 100 bug fixes and improvements made it into this release.
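
To see what the dynamic reservation endpoints look like in practice, here's a hedged sketch of reserving CPUs for a role through the master's new /reserve endpoint. The agent ID, role, principal, and credentials are placeholders.

    # Dynamically reserve two CPUs for the "analytics" role
    # (slaveId, role, and credentials are illustrative)
    curl -i -u <operator_principal>:<password> \
      -d slaveId=<slave_id> \
      -d resources='[
        {
          "name": "cpus",
          "type": "SCALAR",
          "scalar": { "value": 2 },
          "role": "analytics",
          "reservation": { "principal": "<operator_principal>" }
        }
      ]' \
      -X POST http://<master_ip>:5050/master/reserve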

How enterprises should approach containers now

So how should enterprises capitalize on this trend of containerization and successfully leverage containers such as Docker within their application development and operational infrastructure? As of today, there are five things to consider.

1. If you lead, you can bleed

While containers are relatively new, and thus come with more risks, enterprises need to balance those risks against the potential strategic advantages of leveraging containers. These days, containers are reasonably hardened for the applications they serve, so deployment is relatively safe. But there are security issues you'll need to address, and companies such as Docker are taking steps to close these gaps as quickly as they can. You still need to look at your own requirements and decide whether containers are a good fit for your situation in terms of security. Most enterprises can leverage them just fine.
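
One concrete, low-effort security step is Docker Content Trust, the image-signing mechanism mentioned in the Docker Trusted Registry announcement above. A minimal sketch, with an illustrative registry and repository name:

    # Opt in to Docker Content Trust; pulls and pushes now require signatures
    export DOCKER_CONTENT_TRUST=1

    # This pull succeeds only if the tag carries a valid signature
    docker pull ubuntu:14.04

    # Pushing a tag while trust is enabled signs it with your keys
    docker push my-registry.example.com/my-team/app:1.0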

2. Scaling is a matter of the application

While you can find orchestration and scheduling platforms such as Docker Swarm and Google's Kubernetes, enterprises should still consider the fact that scaling is specific to their applications.

Indeed, while some applications are easy to containerize, most are either too tightly coupled to, or too decoupled from, the data or other application components. Thus, it's a matter of how much work it will take to refactor the applications so that they can run as a set of containers and scale as a cluster of containers.

Part of your analysis of the use of containers should be to determine how much of a hassle it will be to actually turn an application into a container, or, more likely, into a set of containers. In some cases it's just not practical, while in others, it's relatively straightforward. This is the same problem you may have wrestled with during the analysis stage of porting applications to cloud platforms, so many of the same total cost of ownership metrics and models apply and can be reused here.
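
As a simple sketch of what "a set of containers that scales as a cluster" means in practice: the usual pattern is to push state out of the application tier so that tier can be cloned freely. The image names below are illustrative.

    # Shared state lives in its own container
    docker run -d --name redis redis

    # The now-stateless web tier can be cloned; because no fixed host port
    # is published, several copies can run side by side on one host
    for i in 1 2 3; do
      docker run -d --name web-$i --link redis:redis my-team/web-app
    done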

3. Consider your people

Most enterprises have not moved a significant number of workloads to the cloud, much less to containers running on clouds. In many respects, the technology is not the limiting factor; it's the IT staff's skillset.

As you move to containers, you'll need to make a significant investment in training and hiring to obtain the skillsets you need to build and deploy containerized applications. To make this even more complicated, many enterprises are adopting DevOps as a way to automate their agile approaches to application development, and so must consider the adoption of containers at the same time. In many enterprises, IT organizations have elected to adopt containers first and then undertake the DevOps transformation; they do this to reduce risk, and even to cut costs.

4. Do the strategy thing, again

Just as enterprises have created and begun implementing a cloud computing strategy, they now need to consider containers as a sub-strategy. Containerization is an enabling technology that forms one component of your overall cloud strategy; it does not replace or supersede it.

At the end of the day, we're deploying applications within containers, which is nothing new. The patterns of adoption we've seen with the rise of the Internet and the rise of app servers are pretty much the same; only the technology is new. If you look at this as a revolution, you'll be sorely disappointed. It's an evolution at best, but one that provides a better, more scalable, and more portable platform for cloud-based applications.

Containers will have other uses with IoT applications, big data, and even traditional, on-premises systems that will stay on premises. Thus, containers are not just for the cloud, but they are mostly for the cloud.

5. Test it to the limit

Enterprises moving to containers need to create test platforms to understand the real limitations of the technology (a simple load-test sketch follows the list below). At the core of this list should be:

  • Scalability. What types of workloads are able to scale, and how are you able to scale them?
  • Stability. What types of behaviors or loads make the containers unstable?
  • Data. What are the limitations when containerizing data, versus leveraging data using traditional interfaces?
  • Management. How do you manage clusters of containers over a long period of time?
  • Governance and security. How do you govern and secure containers, and how does that affect scalability, performance, and stability?
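
A test platform for the first two items can start as small as a script that launches containers until something breaks. A minimal sketch, using NGINX as a stand-in workload; the count and naming are arbitrary:

    # Launch identical containers until the host or daemon shows strain,
    # recording the point of first failure
    for i in $(seq 1 500); do
      docker run -d --name load-test-$i nginx || { echo "failed at $i"; break; }
    done

    # Snapshot per-container CPU and memory use across the fleet
    docker stats --no-stream

    # Tear down the test fleet
    docker rm -f $(docker ps -aq --filter name=load-test-)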

Containers are here to stay

The use of containers is growing like crazy within enterprises, and the vendors in this space have all of the funding they need to make the improvements that enterprises and cloud providers demand. That said, containers don't solve all problems. Enterprises need to make sure their eyes are wide open as they move to the technology.

Users of containers will learn a lot more about the technology as the space continues to grow and adoption spreads unabated. Along the way you should expect some pleasant surprises, and some not so pleasant. The good news is that you'll better understand the limits of this technology and how you can best leverage it to serve the business.
