How to Run Virtual Machines and Container Workloads on a Single Hybrid Cloud Platform

IT teams are increasingly required to support containerized applications. Heading into 2020, 84% of organizations were already using containers in production, and nearly 80% were using Kubernetes as an orchestration platform, according to the Cloud Native Computing Foundation.

But it’s not just new applications getting cloud-native capabilities. Developers and IT are modernizing entire application portfolios to meet urgent digital business demands. Nearly three out of four organizations plan to containerize existing applications as part of their modernization efforts. The top reasons are better agility, cited by 67% of respondents; better availability, 45%; and continuous improvement, 35%.

The last thing IT decision-makers would want is for developers and DevOps teams to be distracted by underlying infrastructure rather than focusing their efforts on their CI/CD pipeline and development sprints.

The infrastructure that supports containerized workloads is best managed by IT infrastructure pros. But the growth of containers doesn’t eliminate IT’s need to also support traditional virtual machine (VM) applications.

IT’s part of the application modernization mission is clear: to empower agile development and DevOps teams, and to manage the infrastructure that supports all types of workloads. And all of this must be accomplished without increasing complexity, which can undermine traditional IT goals of responsiveness, service quality, cost control, and security. IT teams today are ideally looking to adopt a cloud operating model that works across data centers, hosting providers, and public cloud environments.

So how do you get there from here?

The ideal solution should give both developers and IT what they need to be successful in their areas of responsibility:

  • Developers work through APIs. They want to specify and deploy infrastructure through integrated development and deployment tools.
  • IT teams manage infrastructure using tools. They want to monitor and optimize cluster utilization and performance with familiar processes that have been fine-tuned over years of running production.

So, the obvious—and optimal—answer is to use the same platform, tools, and skills the organization is already using for VM architectures and seamlessly add support for containerized applications and Kubernetes, without having to rip up everything already in place or add a separate platform for containerized workloads.

The concept of using a single platform that exposes APIs for developers and management tools for IT, and that supports both VMs and containers, is a relatively new one for both IT and DevOps teams. Not because it wouldn’t have made sense, but simply because it hasn’t been available.

Now, VMware offers a single platform that manages both VMs and containers. VMware Cloud Foundation™ with VMware Tanzu™ delivers a Kubernetes runtime embedded within vSphere, as well as software-defined storage, networking, and security that is uniquely suited for hosting both traditional workloads and modern cloud-native applications. And the full range of VMware infrastructure and management tools is also now optimized for Kubernetes.

Accelerate provisioning by 90% and improve operational efficiency by nearly 70%—while giving developers self-service access to Kubernetes with security and policy guardrails in place. Watch a demo of VMware Tanzu Mission Control.

VMware Cloud Foundation with Tanzu

App-focused Management | Dev & IT Ops Collaboration

This full-stack solution offers something for everyone:

  • For application developers – it is Kubernetes.
  • For infrastructure administrators – it is vSphere.
  • For the business – it is a single platform that supports the spectrum of application modernization changes designed to meet digital business needs.

With a single platform that supports both VM and container workloads, IT teams can handle any type of application modernization—rehosting, re-platforming, or refactoring—while preserving intrinsic security, business resiliency, software-defined networking, high availability, and the full range of enterprise-grade capabilities IT has come to expect from VMware.

VMware VP and CTO Kit Colbert explains the spectrum of application modernization options (Rehost, Replatform, and Refactor) in VMware: The Counterintuitively Fastest Path to App Modernization.

Here are answers to common questions IT decision-makers frequently ask about using a single platform for all application types.

Q: How are applications being modernized?

A: Most IT teams have a desired future state, which is typically composed of cloud-native containerized, microservices-based applications created and deployed with automated DevOps processes. The challenge is getting there. Not every application will be modernized at the same time or in the same way:

  • Rehost: Some legacy applications may simply need to be migrated “as is” to benefit from the flexibility and scale of the new cloud infrastructure.
  • Replatform: Other VM-based apps or app components may be deployed in containers to gain the uptime and flexibility of being managed by a container orchestration platform like Kubernetes.
  • Refactor: Still others will be rewritten, or written new, with a cloud-native microservices architecture. The programming language may change, and the build, deploy, and management processes will almost certainly change. The goal is to modernize each application according to business needs, incrementally if needed, with the least cost and disruption to the business.

Q: What are the biggest “gotchas” to avoid?

A: One is trying to do too much at one time. Many organizations envision their desired future state and want to get there as soon as possible for all applications. According to one survey, 98% of enterprises had active plans to move legacy applications to the cloud—yet 74% of those surveyed said they started a cloud migration project but failed to complete it. IDC once predicted that up to 85% of enterprises would repatriate public cloud workloads back to private clouds or on-premises infrastructure.

Another potential issue is increased complexity in using different platforms to host and manage different application types: VM-based applications on one stack; containerized applications orchestrated by Kubernetes on another. This increases cost and complexity, creates new challenges for both DevOps and IT, limits workload portability, and increases risk due to uneven implementation of application network and security policies. The complexity is multiplied if numerous platforms are used on-premises, in hosted environments, and multiple public clouds.

Q: What are the benefits of having containers and VMs both as “first-class citizens” on a single platform?

A. Instead of having two platforms—each optimized for a specific application architecture—IT can use a single platform designed and optimized for both VM and container workloads, wherever they are deployed. The single-platform approach gives the organization the flexibility of hyper-converged infrastructure (HCI) with full-stack agility delivered at enterprise scale—including the choice of how and when to modernize, whether rehosting, re-platforming, or refactoring. The best modernization strategy can be chosen for each application, and changes can be made incrementally.

From micro-segmentation and load balancing to service mesh, NSX enables enterprises to connect and protect their microservices and workloads running in containers and VMs.

This single platform minimizes complexity and risk, lowers costs, and accelerates time to value by:

  • Leveraging the existing skills and knowledge of IT administrators to manage infrastructure and enable service delivery for both VMs and Kubernetes.
  • Empowering developers with a cloud operating model everywhere while eliminating manual ticket-and-wait interaction with infrastructure and cloud operations teams.
  • Avoiding tool sprawl by eliminating the need for separate platforms, tools, and processes for VMs and containers.
  • Reducing run-time complexity with a single approach to deploying, monitoring, and troubleshooting applications, while optimizing capacity and utilization, whether in the data center, at the edge, or in the public cloud.
  • Reducing risk through centralized management as well as predictable policy enforcement of security, regulatory compliance and IT cost controls and governance.

Q: How important are integrated network, storage, and namespace capabilities for containers?

A. Kubernetes is integrated directly into the vSphere architecture and acts as an abstraction layer. The industry-standard Kubernetes API allows programmatic, on-demand consumption and control of infrastructure. However, open-source Kubernetes installation and operation can be challenging, and consumption of storage, network, and security services requires integrated network, storage, and namespace capabilities each time a Kubernetes cluster is deployed.
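
To illustrate what programmatic, on-demand consumption looks like, here is a minimal sketch using the official Kubernetes Python client. It assumes a kubeconfig already points at a conformant cluster (a Tanzu Kubernetes cluster or any other); nothing in it is specific to vSphere with Tanzu.

```python
from kubernetes import client, config

config.load_kube_config()              # reads credentials from ~/.kube/config
core = client.CoreV1Api()

# The same calls work wherever the conformant cluster runs: data center,
# hosted provider, or public cloud.
for ns in core.list_namespace().items:
    print("namespace:", ns.metadata.name)

for node in core.list_node().items:
    print("node:", node.metadata.name, node.status.node_info.kubelet_version)
```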

Network

Networking for containers is different than for VMs, and typically more complex. Containers are deployed in pods, and Kubernetes orchestrates the creation or deletion of pods based on changing workloads. One of the biggest advantages of using VMware Cloud Foundation for managing containers is that VMware NSX® provides full-stack networking and security for Kubernetes.

NSX is designed into vSphere with Tanzu from the ground up as the default pod networking and network security solution. NSX provides a rich set of networking capabilities including distributed switching and routing. It automates many of the behind-the-scenes steps when clusters are created, such as setting up distributed firewalls and load balancers. Integrations with Kubernetes enable context-aware and granular security policies that follow Kubernetes namespaces, especially useful for compliance use cases such as GDPR.
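
As an illustration of namespace-scoped network policy, the sketch below uses the standard Kubernetes NetworkPolicy API through the official Python client to restrict ingress within one namespace. In vSphere with Tanzu, NSX enforces policies like this behind the scenes; the namespace and label names here are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# Deny ingress to pods labeled app=payments in the "payments" namespace,
# except traffic coming from pods labeled app=frontend.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-only", "namespace": "payments"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}
        ],
    },
}
net.create_namespaced_network_policy(namespace="payments", body=policy)
```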

Storage

The files stored within a container are ephemeral, which means each time a container restarts, the data is lost. This is both an advantage and a disadvantage. If your application has persistent data, it must be stored in a persistent volume. There are many different types of volumes available to Kubernetes.

VMware vSAN™ has native container storage capabilities, allowing workloads to mount persistent volumes inside the VMware Cloud Foundation deployment. Cloud Native Storage in vSphere and vSAN can back Kubernetes persistent volumes with various types of vSphere storage, including vSAN, vSphere Virtual Volumes (vVols), VMFS, and NFS. Developers can provision and scale persistent volumes dynamically with the Kubernetes API, and IT admins can manage VMs and containers with consistent storage policies and operations.
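
As a hedged sketch of dynamic provisioning, the PersistentVolumeClaim below is created through the standard Kubernetes API using the official Python client. The storage class name is an assumption; in a vSphere with Tanzu environment it would map to whatever vSphere/vSAN storage policy the platform publishes.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data", "namespace": "orders"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "vsan-gold-policy",   # assumption: policy-backed class
        "resources": {"requests": {"storage": "20Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="orders", body=pvc)
```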

Namespace

Namespaces are the Kubernetes construct for managing resources and policies. They are a way to divide cluster resources and separate permissions between users, teams, or projects. When a namespace is created, CPU, memory, and storage limits are assigned to restrict the resources a workload can consume, not unlike a vSphere Resource Pool.

Where namespaces differ from Resource Pools is that they also incorporate security controls. Access controls include edit or read-only groups. And security policies can limit ports, audit changes, and force encryption of data. Encrypting all containers and/or VMs in a namespace can be done by setting one property rather than setting policies and encryption for each VM individually.
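
For illustration, the sketch below creates a namespace and applies a ResourceQuota through the standard Kubernetes API using the official Python client; the team name and limits are hypothetical, and this shows only the generic Kubernetes constructs, not the vSphere admin workflow.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Create a namespace, then cap the aggregate CPU, memory, and storage its
# workloads can request, similar in spirit to limits on a vSphere Resource Pool.
core.create_namespace(body={
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "team-payments"},
})

core.create_namespaced_resource_quota(
    namespace="team-payments",
    body={
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "team-payments-quota"},
        "spec": {"hard": {
            "requests.cpu": "8",
            "requests.memory": "32Gi",
            "requests.storage": "200Gi",
        }},
    },
)
```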

Q: What key infrastructure capabilities do developers need?

A. Developers want an abstraction layer between their work and the infrastructure. They want to consume infrastructure as a standardized service, not a bespoke creation. They don’t want to deal with help-desk ticket requests—they want a process that is automated, blueprinted, and consumed on-demand via APIs.

Most of all, developers want the freedom to write and deploy code with similar processes regardless of whether it ends up in a traditional VM or a container—and they want a consistent service in the data center or public cloud. The end goal is development and pipeline tooling that is independent of the underlying infrastructure technology.

To a developer, vSphere with Tanzu looks and acts like a standard Kubernetes cluster. Developers can consume infrastructure services through the industry-standard Kubernetes API the same way in the data center, hosted providers, and public cloud—everywhere vSphere with Tanzu-based infrastructure is offered.

Through API-driven automation, developers can define and consume application resources such as storage, networking, and even relationships and availability requirements that work consistently across implementations. By using the industry-standard Kubernetes syntax they don’t need direct access to, or knowledge of, the underlying vSphere infrastructure or cluster management tools.
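
As a concrete example of that consumption model, the sketch below declares a small Deployment through the standard Kubernetes API using the official Python client. The namespace, image, and registry are hypothetical, and the developer never touches the underlying vSphere objects.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# The developer declares the desired state; the platform decides where it runs.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web", "namespace": "team-payments"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{
                "name": "web",
                "image": "registry.example.com/web:1.4.2",  # hypothetical image
                "resources": {"requests": {"cpu": "250m", "memory": "256Mi"}},
            }]},
        },
    },
}
apps.create_namespaced_deployment(namespace="team-payments", body=deployment)
```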

vRealize Cloud Management is a single-pane-of-glass solution for day-two operations of hybrid cloud virtualization and container infrastructure, with templates for automation, AIOps for optimization, and full-stack observability.

Q: What key infrastructure capabilities do infrastructure operators need?

A. IT operations teams want the visibility and flexibility to monitor system and application performance, and the tools to respond to changing conditions to ensure system uptime, utilization, and security. They want to deploy and manage clusters of infrastructure resources in an integrated way, to optimize IT cost and operational efficiency. And they want to avoid the complexity that makes systems management more difficult than it needs to be.

To a vSphere admin, vSphere continues operating just as it has for decades, but now with integrated Kubernetes features. Management of vSphere is still done through the vSphere Client, PowerCLI, and vSphere APIs, as it has been done for years. But vSphere admins can also now deploy and manage Kubernetes clusters and namespaces, and seamlessly deploy and manage the network, storage, and security constructs consumed by developers and their modern applications.
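
For the IT-operations side, here is a minimal sketch of the long-standing vSphere API pattern using pyVmomi, the Python SDK for the vSphere API: connect to vCenter and list VMs with their power state. The hostname and credentials are placeholders, and the example is generic vSphere rather than anything Tanzu-specific.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only TLS settings; verify certificates in production.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                  user="administrator@vsphere.local",   # placeholder account
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.DestroyView()
finally:
    Disconnect(si)
```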

Q: What is “Developer-ready Infrastructure”?

A. Developers want to consume infrastructure services programmatically via the industry-standard Kubernetes API. While IT operations deploys the infrastructure and lifecycle-manages Kubernetes as the API layer for developer service consumption, there is an in-between role—managing namespaces and permissions—that may fall to either IT Ops or DevOps.

“Developer-ready Infrastructure” means that developers can assume the infrastructure cluster is deployed and accessible through the Kubernetes API, and that someone is available to manage namespaces and the other constructs that enable self-service consumption—delivered via a single operating model and consistent service that spans data center, hosted environment, or public cloud infrastructure.
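
A minimal sketch of that in-between role, assuming the built-in Kubernetes RBAC model: the RoleBinding below grants a hypothetical developer group the standard "edit" ClusterRole inside a single namespace, created through the official Python client.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Give the (hypothetical) "payments-devs" group edit rights in its namespace
# only; cluster-wide administration stays with IT Ops.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "payments-devs-edit", "namespace": "team-payments"},
    "subjects": [{"kind": "Group", "name": "payments-devs",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "ClusterRole", "name": "edit",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role_binding(namespace="team-payments", body=binding)
```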

Q: Can IT build and manage vSphere clusters that deliver IT service for both VMs and containers?

A. Yes. vSphere traditionally has been about the management of infrastructure and VMs, while being somewhat indifferent to the actual applications running on the VMs. For the VMware administrator, the introduction of Kubernetes as a control plane for vSphere opens possibilities for new workload management and orchestration in the future, while still protecting your investments and current processes today.

With vSphere with Tanzu, the VMware administrator can now easily create workflows and policies that govern containers, VMs, or both simultaneously. Both VM and container application workloads are now first-class citizens in a vSphere environment.

Q: How does VMware Cloud Foundation deliver enterprise-grade Kubernetes?

A. There are many ways to deploy Kubernetes: managed services, public cloud, on-premises virtual, and on-premises bare metal. Open-source tools such as Minikube can install and operate a Kubernetes cluster on a single host, which is great for training.

For enterprise use, though, open-source Kubernetes installation and operation can be challenging. Most deployments require extensive setup work, integration of networking and storage capabilities, and new processes and staff retraining to install, operate, and lifecycle-manage Kubernetes effectively.

However, VMware Cloud Foundation with Tanzu offers a solution unlike anything else in the industry. It offers unified storage, network, and security management that streamlines and optimizes the lifecycle management of Kubernetes. This is where the power of vSphere within VMware Cloud Foundation with Tanzu becomes apparent, combining automation and scale that fits naturally into modern IT infrastructure and processes.

VMware Cloud Foundation with Tanzu is how IT organizations can deploy Kubernetes at scale on-premises via private cloud, as well as on public cloud through hyperscale cloud and managed service providers. It automates deployments, infrastructure changes, and lifecycle operations, taking the complexity out of deploying and managing Kubernetes workloads.

Conclusion

Open-source Kubernetes installation and operation can be challenging for many reasons. VMware Cloud Foundation with Tanzu brings Kubernetes to the enterprise in a manner unlike anything else in the industry. The single platform approach builds on and multiplies your existing investments in infrastructure, people, processes, and existing workloads while providing a future-ready cloud solution for all applications.

Are you ready to take the next step in application modernization to maximize both VMs and containers?

Source: VMware
