As OpenStack supporters and tire kickers convene in Mountain View next month at the Unlocked Infrastructure Conference (a.k.a. OpenStack Silicon Valley), it is Unlocked Performance that I will have on my mind. Lately I’ve become a zealous red-flag waver on this issue. In fact, I recently warned of a DevOps train wreck in an article in The New Stack, and I’d like to ring that warning bell again:
“The DevOps revolution, driven largely by containers, is placing new and mighty demands on the enterprise data center, demands that will break infrastructure and policies designed for a dying era of monolithic apps. If we don’t work together as a community to build a new generation of tooling for the DevOps era, performance won’t meet expectations, and then we’ll all get to read articles about how DevOps failed.”
Let me briefly explain why “Unlocking Performance” in the DevOps era requires new tools.
- DevOps requires tools that devs and ops can use simultaneously and collaboratively to see how software is being deployed, consumed and managed. Why? DevOps iteration cycles are far faster than waterfall's, so dev and ops must see the same monitoring information and make adjustments together for deployments to succeed at that pace. An automated, shared workspace is required.
- We need automated tools that work in real time, or nearly so. In agile environments, especially when using containers, application lifecycles may be only minutes long. Current monitoring tools can take longer than that to log and report, and they require human intervention, which takes still more time. Of what use is a tool that takes longer to run than the life of the thing it seeks to fix? None.
- We need to be able to monitor at the processor level. Current monitoring technology relies on data from the host operating system, but the host OS has a dangerously limited view of what is happening on the processor itself. Today's users need to see both top-down (from the application into the OS) and bottom-up (from the thread level up to containers and pods). That requires processor-level insight.
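To make the points above concrete, here is a minimal sketch (my own illustration, not AppFormix code) of sub-second, per-thread CPU sampling from Linux's /proc filesystem. It shows both halves of the argument: sampling every 250 ms is fast enough to catch a container that lives only minutes, yet this host-OS view still stops at scheduler accounting, so processor-level effects such as cache contention remain invisible here.

```python
import os
import time
from pathlib import Path

# Hedged sketch: the "top-down" host-OS view of per-thread CPU time.
# It exposes scheduler accounting only; cache occupancy and memory
# bandwidth contention on the processor are invisible at this layer.

def thread_cpu_ticks(pid: int) -> dict:
    """Map thread id -> utime+stime clock ticks, read from /proc/<pid>/task."""
    task_dir = Path(f"/proc/{pid}/task")
    if not task_dir.exists():  # non-Linux hosts have no /proc
        return {}
    ticks = {}
    for task in task_dir.iterdir():
        # Everything after the ") " delimiter follows the documented
        # /proc/<pid>/stat field order; utime and stime are fields 11/12 here.
        fields = (task / "stat").read_text().rsplit(") ", 1)[1].split()
        ticks[int(task.name)] = int(fields[11]) + int(fields[12])
    return ticks

# Sample every 250 ms -- far faster than report cycles measured in hours.
for _ in range(3):
    print(thread_cpu_ticks(os.getpid()))
    time.sleep(0.25)
```

A real tool would stream samples like these into a shared dev/ops workspace and correlate them with processor-level counters, which is exactly the gap the list above describes.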
In summary, unlocking performance in the DevOps era requires that developers and operators have new tools that understand the context of what is happening in the applications and the shared infrastructure, from both bottom-up and top-down views. Those tools must also dynamically automate the scheduling of containers and pods in real time.
At AppFormix, we’re working diligently on this challenge, with the help of key partners in the OpenStack community, especially Intel and Rackspace.
We recently launched ContainerFlow, the first scheduling and monitoring tool built specifically to address the unique needs of cloud-native applications and DevOps environments. ContainerFlow works out of the box for microservices built with Docker containers and orchestrated by Kubernetes, the open source system for managing containerized applications.
ContainerFlow integrates Intel® Resource Director Technology (Intel® RDT) to monitor and control the resource usage of containers and VMs, providing major improvements in application performance, especially for container-based applications. Only ContainerFlow, using Intel's game-changing RDT, offers operators and developers infrastructure transparency, down to the processor level, along with real-time monitoring and analytics.
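On recent Linux kernels, the cache-monitoring counters that Intel RDT provides surface through the kernel's resctrl filesystem. As a hedged illustration of what processor-level transparency looks like (this is my sketch, not ContainerFlow's actual integration, and it assumes a kernel with resctrl mounted at the conventional path), reading last-level-cache occupancy is a single file read:

```python
from pathlib import Path

RESCTRL = Path("/sys/fs/resctrl")  # conventional mount point on recent kernels

def llc_occupancy_bytes(domain: str = "mon_L3_00") -> "int | None":
    """Last-level-cache occupancy for the root resctrl group, in bytes.

    Returns None when resctrl is not available (older kernels, hardware
    without RDT, or monitoring disabled). A per-container measurement
    would assign the container's tasks to their own monitoring group
    and read that group's counter instead.
    """
    counter = RESCTRL / "mon_data" / domain / "llc_occupancy"
    if not counter.exists():
        return None
    return int(counter.read_text().strip())

print(llc_occupancy_bytes())
```

This is the bottom-up signal that the host OS alone cannot supply: how much of the shared cache each workload actually holds, which is what makes processor-aware scheduling decisions possible.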
Also, as announced at the OpenStack Summit Austin in April, Rackspace will use AppFormix cloud optimization software to help manage the scale and performance of its customers’ private clouds and enable customers to support VMs, bare metal deployment and containers with confidence.
What this means in general terms is that users of OpenStack private clouds managed by Rackspace will be able to improve application performance and cloud agility. Monitoring and analytics tools will give both application developers and cloud operators a shared, real-time view of performance right down to the hardware layer, supporting VM, bare metal and container deployments with confidence.
I look forward to discussing ContainerFlow and the need for automated, processor-level monitoring for DevOps when we meet in Mountain View. Also, it will be my honor to moderate a discussion between Brandon Philips, CTO of CoreOS, and Craig McLuckie, Product Manager for Kubernetes at Google, addressing the convergence of the Kubernetes and OpenStack communities. Join us in the Hahn Auditorium at the Computer History Museum at 11:40am on Tuesday, August 9.
I look forward to seeing you there.
Flickr photo courtesy of Tim Green.