It’s the end of another amazing DockerCon, held this year in Austin, TX. In case you were unable to attend the incredible 3-day event, here’s our wrap-up of the highlights, relevant news, exciting announcements and more: a full low-down on what you missed at this year’s conference.
DockerCon’s Astronomical Growth
The first noteworthy thing about this year’s event is how much it has grown. There were over 5,500 attendees, and the number of contributors has grown from 400 to an incredible 3,300 in just three short years. Of the code base, 41.5% was contributed by individuals, 40.6% by Docker, 7.7% by Microsoft and 3.2% by IBM.
Docker Hub now hosts 900K apps, up from 15K in 2014, and some 14 million hosts are now running Docker. There have been 12 billion image pulls over the past four years, a number that represents a mind-boggling growth rate of 390,000%!
A top takeaway from this year’s DockerCon is the number and diversity of use cases that were presented. Docker helps:
- Keep planes in the air
- Power large financial institutions
- Process billions of dollars in transactions per day
- Advance healthcare, genome sequencing, and disease prevention
- Process 25M tax returns this year
It doesn’t stop there: over 100 startups in the Docker ecosystem have received funding, and the global community has grown to 287 cities with 170K+ members. Not too shabby!
Sacrifice to the ‘Demo Gods’
This year’s conference kicked off with an introduction by Solomon Hykes, Docker’s Founder and CTO. Hykes got the crowd amped up by unveiling better tools for developers, aimed at reducing friction in the development cycle. To highlight the changes, he walked through a few common developer frustrations, including:
Example #1: “My container images are too big!”
This prompted the introduction of multi-stage builds, which let you build smaller images in multiple stages: first a full build environment, then a minimal runtime environment, all described in a single Dockerfile.
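To make this concrete, here’s a minimal sketch of a multi-stage Dockerfile (the image tags and the Go program are illustrative):

```dockerfile
# Stage 1: full build environment with the Go toolchain
FROM golang:1.8 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: minimal runtime environment; only the compiled
# binary is copied over from the build stage
FROM alpine:3.5
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the compiled binary and its base OS layer; the compiler, sources and build caches from the first stage are discarded.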
Example #2: “I wish there was an easier way to take my app from desktop to the cloud.”
The solution: the new desktop-to-cloud feature, which provides the ability to add and connect Swarms running in supported cloud providers, including AWS, Azure and Google Cloud (beta). Other features include built-in collaboration with Docker Cloud and Docker ID. To demonstrate, two Docker software engineers used multi-stage builds and desktop-to-cloud to deploy a Swarm cluster live on stage.
The multi-stage build and desktop-to-cloud features will be available in the next stable release, which ships in June. Find them first in Docker’s edge channel (the experimental build) at https://docker.com/getdocker
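The on-stage flow can be sketched as a short CLI session, assuming you’ve built an image with a multi-stage Dockerfile and already connected a cloud Swarm (the image and stack names here are illustrative):

```
# Build locally with a multi-stage Dockerfile, push, then
# deploy a stack to the connected remote Swarm
$ docker build -t myorg/myapp:1.0 .
$ docker push myorg/myapp:1.0
$ docker stack deploy -c docker-compose.yml myapp
```

The same commands work against a local Swarm or a cloud-hosted one, which is the point of the desktop-to-cloud workflow.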
Another exciting announcement that many had been anxiously anticipating was secure orchestration with SwarmKit, a Docker open-source project that functions as a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, Raft-based consensus, task scheduling and more.
Now, there is an automatic public key infrastructure that creates a cryptographic identity for every node that joins the Swarm, on top of automatic certificate rotation. The toolkit uses mutual TLS for authentication, authorization, and encryption of all control-plane communication between nodes in the Swarm. Using Raft, SwarmKit also shares secrets securely.
Use cases for this include PCI DSS, healthcare and other sensitive or confidential workloads.
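In Docker’s built-in Swarm mode, this security model surfaces as a few everyday commands, sketched below (the rotation interval is just an example):

```
# Initialize a swarm; the first manager becomes a certificate
# authority and issues itself a node certificate automatically
$ docker swarm init

# Print the cryptographic join token a worker must present;
# joining nodes get their own mutual-TLS identity
$ docker swarm join-token worker

# Shorten the automatic certificate rotation interval
# (the default is 90 days)
$ docker swarm update --cert-expiry 48h
```

No manual CA setup or certificate distribution is required, which is what makes the “secure by default” claim credible for regulated workloads.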
According to Hykes, one of the challenges to cross-platform containers has been the lack of a portable Linux subsystem across different hardware platforms and operating systems.
LinuxKit was developed to provide the container movement with a secure, lean and portable Linux subsystem. It is the result of a mass collaboration between Docker, IBM, the Linux Foundation, Microsoft, ARM, Hewlett Packard Enterprise and Intel, and is now open source and accepting public contributions.
Specifications of the secure Linux subsystem are as follows:
- Only works with containers
- Smaller attack surface
- Immutable infrastructure
- Sandboxed system services
- Specialized patches and configuration
- Incubator for security innovations
- Community-first security process
With a lean Linux subsystem, users can expect:
- Minimal size, minimal boot time
- All system services are containers
- Everything can be removed or replaced
Portable Linux subsystems offer:
- Desktop, server, IoT, mainframe
- Intel and ARM
- Bare metal and virtualized
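Those properties fall out of how a LinuxKit image is described: a single YAML file where every system service is a container and anything not listed simply isn’t in the image. A rough sketch (component names and tags are illustrative, not pinned versions):

```yaml
# Illustrative LinuxKit build config, consumed by the moby build tool
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
onboot:
  # One-shot containers that run at boot, in order
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
services:
  # Long-running system services, each sandboxed in a container
  - name: sshd
    image: linuxkit/sshd:latest
```

Because everything above is an explicit, replaceable container image, the attack surface stays small and the whole system is immutable and rebuildable.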
Running Containers on Windows
In a major DockerCon announcement, Microsoft introduced Linux kernels to Windows, meaning that Linux-based containers can now run on Windows Server. This was accomplished through a collaboration between Docker and Microsoft to provide a Linux subsystem that runs atop the Hyper-V hypervisor.
This essentially means that Linux containers will now run just as easily on Windows as on any other OS, and can run alongside Windows containers on the same bare metal or virtual machines. Microsoft’s second announcement was that developers may choose from a variety of Linux kernels from major Linux vendors.
LinuxKit is on the roadmap for future versions of Hyper-V and Microsoft Server.
A Platform Made of Components
Hykes made a specific point of talking about how Docker has grown from a monolithic open source project into a system of distributed projects that fit together as a platform of components.
The overarching theme was that Docker has come a long way. To facilitate a growing community with expanding needs, it should become a more holistic, distributed system.
Users can expect more open source projects, components, and primitives in the not-too-distant future. Hykes noted that a major challenge for the future of containers is scaling the technology to serve increasingly specialized needs and use cases.
The open component model has started to show its limits in serving desktop, server, cloud, and other environments. Examples discussed included:
- Consumer IoT
- Industrial IoT
- Operating Systems
Interestingly, the auto industry solved a similar problem through shared components and common assemblies, and Docker is scaling its production model the same way.
The Moby project is an open-source framework developed to assemble specialized container systems without reinventing the wheel (music to our ears!). It features several noteworthy items, including:
- Library of 80+ components
- Ability to package your own components as containers
- Reference assemblies deployed on millions of nodes
- Ability to create your own assemblies or start from an existing one
Docker uses Moby as a framework for its open-source projects to help manage thousands of contributors and hundreds of patches per week. It’s ideal for:
- Component development
- Specialized assembly development
- Integration tests
- Architecture design
- Integration with other projects
- Bleeding-edge features
Moby is community-run and features open governance inspired by the Fedora project. It plays well with existing projects. Most importantly, from a user perspective, Docker will better leverage the ecosystem to facilitate faster innovation. For system builders, it opens up innovation prospects beyond Docker itself.
One of the most interesting parts of the Moby portion of DockerCon was learning how it can transform multi-month R&D projects into weekend projects. Examples include: locked-down Linux with remote attestation, a custom CI/CD stack, a custom CI/CD stack + Debian + Terraform, RedisOS and a Kubernetes cluster on Mac.
We’re happy to report that this year’s DockerCon was a resounding success. We’re super excited (like you) to experience many of the changes and improvements firsthand.
Start with Docker Swarm Today
If you’re interested in getting started with Docker Swarm on AWS or Azure, consider checking out Caylent. Read our post on Creating a High-Availability WordPress cluster with Docker Swarm and EFS for a hands-on walkthrough.
Caylent is a cloud-agnostic Container Management Platform that acts as a fully managed Docker Swarm solution and has lots of great features like one-click deployments, continuous delivery, user access management, and more.
Did you attend this year’s event? What key takeaways did we miss? Please share in the comments section.