DockerCon 2017 Recap: Everything You Need To Know

Written by JP La Torre

We recently closed the books on another amazing DockerCon, which took place in Austin, TX and wrapped up on April 20th. In case you were unable to attend this incredible three-day event, we’ve put together a wrap-up covering the highlights, relevant news, exciting announcements and more. So, without further ado, here’s the low-down on what you missed at this year’s conference.

Astronomical Growth

The first noteworthy thing about this year’s event is how much it has grown. There were over 5,500 attendees, and the number of contributors has grown from 400 to an incredible 3,300 in just three short years. Of the code base, 41.5% was contributed by individuals, 40.6% by Docker, 7.7% by Microsoft and 3.2% by IBM.

Docker Hub now hosts 900K apps, up from 15K in 2014, and some 14 million hosts are now running Docker. There have been 12 billion image pulls over the past four years, a number that represents a mind-boggling growth rate of 390,000%!

Another noteworthy takeaway from this year’s DockerCon is the number and diversity of use cases that were presented. For instance, Docker has now been documented for use in:

  • Keeping planes in the air
  • Powering the largest financial institutions
  • Processing billions of dollars in transactions per day
  • Healthcare, genome sequencing and disease research
  • Processing 25M tax returns this year

Furthermore, over 100 startups in the Docker ecosystem have received funding and the global community has grown to 287 cities with 170K+ members. Not too shabby!

Sacrifice to the ‘Demo Gods’

This year’s conference kicked off with an introduction by Solomon Hykes, Docker’s Founder and CTO. Hykes revealed quite a few intriguing things that really got the crowd amped up — specifically, better tools for developers and less friction in the development cycle. To demonstrate these changes, a few examples were provided based on common developer frustrations, including:

Example #1: “My container images are too big!”

This prompted the introduction of multi-stage builds, which let you build smaller images in multiple stages: a full build environment first, then a minimal runtime environment, all defined in a single Dockerfile.
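To make this concrete, here’s a minimal sketch of the multi-stage syntax (the Go program, image tags and file names are illustrative, not from the talk):

```dockerfile
# Stage 1: full build environment, named "builder"
FROM golang:1.8 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: minimal runtime image; only the compiled binary is copied over,
# so the final image ships without the Go toolchain
FROM alpine:3.5
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

The result of `docker build .` is a single small image based on the final stage; the earlier build stage is discarded automatically.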

Example #2: “I wish there was an easier way to take my app from desktop to the cloud.”

The solution proposed for this example was the new desktop-to-cloud feature, which provides the ability to add and connect Swarms running in supported cloud providers, including AWS, Azure and Google Cloud (beta). It also features built-in collaboration with Docker Cloud and Docker ID. To demonstrate, two Docker software engineers used multi-stage builds and desktop-to-cloud as a deployment mechanism for a Swarm cluster.

The multi-stage build and Desktop-to-Cloud features will be available in the next stable release which ships in June. These features are currently available in Docker’s edge channel, which is the experimental build. You can download it here: https://docker.com/getdocker

Security Improvements

Another exciting announcement that many have been anxiously anticipating was the secure orchestration with SwarmKit, a Docker open-source project that functions as a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.

There is now an automatic public key infrastructure that creates a cryptographic node identity for every node that joins the Swarm, along with automatic certificate rotation. The toolkit uses mutual TLS for authentication, authorization and encryption of all control-plane communication between nodes in the Swarm. SwarmKit also uses its Raft store to distribute secrets securely.

Use cases include PCI DSS compliance, healthcare and other sensitive or confidential workloads.
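For a feel of how little ceremony the mutual-TLS and secrets machinery requires, here is an illustrative command sequence (it assumes a Docker 1.13+ engine; the secret and service names are hypothetical):

```shell
# Initializing swarm mode gives this node a cryptographic identity
# and TLS certificates automatically
docker swarm init

# Store a secret; it is kept encrypted in the managers' Raft log
echo "s3cret-password" | docker secret create db_password -

# Grant a service access; the secret appears to the container
# as an in-memory file under /run/secrets/db_password
docker service create --name db --secret db_password postgres:9.6
```

No certificate files are generated or copied by hand at any point, which is the core of the “secure by default” pitch.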

Introducing LinuxKit

According to Hykes, one of the challenges to cross-platform containers has been the lack of a portable Linux subsystem across different hardware platforms and operating systems.

LinuxKit was developed as a solution to provide the container movement with a secure, lean and portable Linux subsystem. It is the result of a broad collaboration between Docker, IBM, the Linux Foundation, Microsoft, ARM, Hewlett Packard Enterprise and Intel. Now open source, the project is open to public contributions.

Specifications of the secure Linux subsystem are as follows:

  • Only works with containers
  • Smaller attack surface
  • Immutable infrastructure
  • Sandboxed system services
  • Specialized patches and configuration
  • Incubator for security innovations
  • Community-first security process

With a lean Linux subsystem, users can expect the following:

  • Minimal size, minimal boot time
  • All system services are containers
  • Everything can be removed or replaced

Portable Linux subsystems offer:

  • Desktop, server, IoT, mainframe
  • Intel and ARM
  • Bare metal and virtualized
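The “everything is a container” idea shows up directly in how a LinuxKit image is described: a single YAML file picks a kernel and lists the system services, each of which is itself a container image. A hypothetical minimal config might look like this (image names and tags are placeholders, not exact versions):

```yaml
# Illustrative linuxkit.yml: the kernel plus a handful of containerized
# system services is the entire operating system
kernel:
  image: linuxkit/kernel:4.9.x        # placeholder tag
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>               # minimal init
  - linuxkit/runc:<tag>               # container runtime
onboot:
  - name: dhcpcd                      # runs once at boot
    image: linuxkit/dhcpcd:<tag>
services:
  - name: sshd                        # long-running, sandboxed service
    image: linuxkit/sshd:<tag>
```

Anything not listed simply isn’t in the image, which is where the minimal size, small attack surface and “everything can be removed or replaced” properties come from.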

Running Containers on Windows

In a major announcement, Microsoft is bringing Linux kernels to Windows so that Linux-based containers can run on Windows Server. This is accomplished through a collaboration between Docker and Microsoft that provides a Linux subsystem running atop the Hyper-V hypervisor.

This essentially means that Linux containers will now run just as easily on Windows as any other OS and can now run alongside Windows containers on the same bare metal or virtual machines. Microsoft also announced that developers will be able to choose from a variety of Linux kernels from major Linux vendors.

LinuxKit is on the roadmap for future versions of Hyper-V and Microsoft Server.

 

A Platform Made of Components

Hykes made a specific point of talking about how Docker has grown from a monolithic open-source project to a system of distributed projects that fit together as components.

The overarching theme was that Docker has come a long way, and in order to facilitate a growing community with expanding needs, it needed to become a more holistic and distributed system.

As a result, users can now expect more projects, components and primitives to be released as open source in the not-too-distant future. Hykes also noted that one major challenge for the future of containers is dealing with increased scale as the technology serves increasingly specialized needs and use cases.

The open component model started to show its limits in servicing desktop, server, cloud and other environments. Examples discussed included:

  • Servers
  • Datacenters
  • Consumer IoT
  • Industrial IoT
  • Operating Systems
  • Mainframes
  • Mobile

Interestingly, the auto industry solved a similar problem through the use of common assemblies, and Docker is now scaling its production model the same way, with shared components and assemblies.

Meet Moby

The Moby project is an open-source framework developed to assemble specialized container systems without reinventing the wheel (music to our ears!). It features several noteworthy items, including:

  • Library of 80+ components
  • Ability to package your own components as containers
  • Reference assemblies deployed on millions of nodes
  • Ability to create your own assemblies or start from an existing one

Docker uses Moby as the framework for its open-source projects, helping it manage thousands of contributors and hundreds of patches per week. It’s ideal for:

  • Component development
  • Specialized assembly development
  • Integration tests
  • Architecture design
  • Integration with other projects
  • Experimentation and bleeding edge features

Moby is community-run and features open governance inspired by the Fedora project. It also plays well with existing projects. Most importantly, from a user perspective, Docker will better leverage the ecosystem to facilitate faster innovation. It’s also ideal for system builders because it allows innovation without being tied to Docker.

One of the most interesting parts of the Moby presentation was learning how it can turn multi-month R&D projects into weekend projects. Examples include a locked-down Linux with remote attestation, a custom CI/CD stack, a custom CI/CD stack with Debian and Terraform, RedisOS and a Kubernetes cluster on a Mac.

Overall, we’re happy to report that this year’s DockerCon was a resounding success and we’re super excited (as we’re sure you are) to experience many of the changes and improvements firsthand.

Getting Started with Docker Swarm

If you’re interested in getting started with Docker Swarm on AWS or Azure, consider checking out Caylent.

Caylent is a cloud-agnostic Container Management Platform that acts as a fully managed Docker Swarm solution and has lots of great features like one-click deployments, continuous delivery, user access management, and more.

Did you attend this year’s event? What key takeaways did we miss? Please share in the comments section.

The DevOps container management platform

  • Amazon AWS
  • Microsoft Azure
  • Google Cloud
  • Digital Ocean

Unlimited users. Unlimited applications. Unlimited stacks. Easily manage Docker containers across any cloud.

Get Started—Free!