Alan Zeichick | Senior Writer | September 5, 2025
Kubernetes is an open source platform for managing very large applications made up of huge numbers of containerized services. Developed by Google engineers and released as open source in 2014, Kubernetes is now a thriving ecosystem that’s supported by nearly every cloud provider. Kubernetes can be used to manage cloud native applications built with microservices as well as traditional applications running on-premises or in the cloud.
Part of the power of Kubernetes is its automation—it can significantly reduce the workload required to manage applications across a network.
Kubernetes is an open source system for deploying, managing, and scaling containerized applications, particularly cloud native applications written using microservices. Sometimes abbreviated as K8s, Kubernetes allows administrators to cluster containers together to make them easier to manage.
What does Kubernetes do? It starts by deploying containers that hold either a complete application or a component of an application, often called a service. These containers are deployed onto servers, which can be in a cloud, on-premises, spread across several clouds in a multicloud configuration, or in a hybrid cloud/on-premises configuration.
Once containers are deployed, Kubernetes provides service discovery, enabling an application or service to find the services it needs in other containers. Kubernetes directs traffic to the correct resource under its control. If a system is running multiple copies of a containerized application or service, usually to accommodate high demand, Kubernetes automatically balances the load.
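To make this concrete, here is a minimal sketch of service discovery using the official Kubernetes Python client (installed with pip install kubernetes). The Service name "web", the app label, the ports, and the "default" namespace are all illustrative assumptions, not anything prescribed by Kubernetes itself.

```python
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
v1 = client.CoreV1Api()

# A Service gives a stable name ("web") and virtual IP to whatever pods
# match its selector; other containers find it through cluster DNS.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # route traffic to pods labeled app=web
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```

Because clients address the Service name rather than individual containers, Kubernetes is free to add, remove, or move the underlying containers while traffic keeps flowing.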
Part of the power of Kubernetes is that it can group containers together in ways that make sense for the deployment. For example, it can group together several containers that share the same storage and network into a pod—that’s a word you’ll see often. You’ll also see references to Kubernetes nodes. These are individual machines, either physical servers or traditional virtual machines, that house containers. A collection of nodes running Kubernetes—that is, a set of physical or virtual machines—is referred to as a cluster.
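A quick sketch of how those terms fit together, again using the official Python client; it simply walks a cluster's nodes and pods, assuming cluster credentials are already configured.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Nodes are the physical or virtual machines that make up the cluster.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# Pods are groups of containers sharing storage and network,
# scheduled onto those nodes.
for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name,
          "on", pod.spec.node_name)
```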
For every container, pod, node, and cluster, Kubernetes manages storage resources, detects and restarts failed containers (a process called self-healing), and even implements security protocols across a distributed application. It can be configured to handle passwords, security tokens, and encryption keys, making it easier to protect critical assets.
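As a hedged illustration of that last point, here is how credentials might be stored as a Kubernetes Secret with the official Python client. The secret name, keys, and values below are placeholders, not values any real system should use.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Store credentials in the cluster instead of baking them into images.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    string_data={"username": "app", "password": "example-only"},  # placeholders
)
v1.create_namespaced_secret(namespace="default", body=secret)
# Pods can then mount this secret as a volume or inject it as
# environment variables at runtime.
```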
The Kubernetes platform’s development is overseen by the Cloud Native Computing Foundation (CNCF).
Containers often replace a different model of cloud deployment: virtual machines (VMs). Containers are more lightweight than VMs because they use the host server’s underlying operating system and device drivers. By contrast, VMs are larger and require more processing resources because each one contains its own operating system. Therefore, a server can run more containers than VMs, and more importantly, can dedicate more processor and memory resources to running applications rather than maintaining multiple operating system instances—one for each VM. While there are specialized cases where VMs are required, containers offer a far more efficient model for most cloud application deployments.
While Kubernetes and containers are related, they’re best considered complementary technologies. Kubernetes is a management platform for containers that’s often used for large-scale deployments numbering in the hundreds or thousands of containers. Containers themselves put all necessary code and dependencies for a capability—be it a microservice or complete application—into a single executable format.
Kubernetes is a tool for managing many containers at once, usually in the cloud. Sometimes referred to as the operating system for the cloud, Kubernetes lets organizations manage containers at scale.
Key Takeaways
Kubernetes is a platform that automates the deployment, scaling, and management of containers. Kubernetes also has self-healing capabilities—that is, it detects when containers are malfunctioning and then restarts or replaces them. Kubernetes is all about orchestration: Like a symphony conductor directing musicians, it knows what needs to be done, keeps all containers in place and working properly, and acts if something goes wrong.
And just as a symphony conductor works from a musical score that calls for a piano, violins, a few cellos, and a brass section, Kubernetes has a document explaining the desired state of an application’s containers. This document, called a configuration file, describes the functions required to make the application work and specifies which containers can provide those functions. The configuration file also lists the servers, storage devices, networks, and other physical machines available for the application’s containers.
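Configuration files are usually written in YAML, but the same desired state can be expressed programmatically. Below is a sketch using the official Python client; the "web" application, the nginx image, and the replica count are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Declare the desired state: three identical copies of one container.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes works to keep three copies running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Notice that the file (or object) says nothing about how to reach that state; Kubernetes decides which nodes to use and continually reconciles reality against the declaration.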
When an application is launched, Kubernetes loads the necessary containers onto the available servers according to the configuration file, then starts running the software within those containers. It monitors the resource utilization of each server (or node), making sure that the systems aren’t overloaded. If they are, it moves containers onto a less-loaded server by starting up a new container and then stopping the old one. If a container itself is overloaded, Kubernetes starts up an identical copy of that container on a different server and automatically configures a load balancer to split the workload between them. As demand increases, it starts a third container, and so on as needed. Later, if the workload decreases, Kubernetes shuts down any unnecessary containers to help reduce costs and free up server resources for other tasks.
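The scale-up, scale-down behavior just described can be declared rather than scripted. A sketch of a HorizontalPodAutoscaler via the Python client follows; the target deployment "web", the replica bounds, and the 70% CPU threshold are assumptions chosen for illustration.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV2Api()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,  # add copies under load, remove them when idle
        metrics=[client.V2MetricSpec(
            type="Resource",
            resource=client.V2ResourceMetricSource(
                name="cpu",
                target=client.V2MetricTarget(
                    type="Utilization", average_utilization=70)))],
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```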
When a container fails, Kubernetes quickly starts up a new container on another server and redirects network traffic away from the problem area, providing for rapid failover.
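Failure detection hinges on probes that Kubernetes runs against each container. Here is a hedged sketch of a container spec with liveness and readiness probes; the /healthz and /ready paths are assumptions about what the application exposes.

```python
from kubernetes import client

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    # If the liveness probe fails, Kubernetes restarts the container.
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        initial_delay_seconds=5, period_seconds=10),
    # If the readiness probe fails, Kubernetes stops sending it traffic.
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=80),
        period_seconds=5),
)
```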
Imagine you have an application that requires hundreds or thousands of containers, each providing services needed by the application. Systems administrators could manually deploy and manage the containers, possibly with the aid of automation tools for specific tasks such as load balancing or detecting faults. In fact, there are even tools for managing containers in small-scale deployments. These are most often used by software developers and DevOps teams when building and testing containerized software.
However, without a more complete orchestration system, the demands of system administration eventually become overwhelming.
The beauty of Kubernetes is that it’s a single platform that handles automation tasks from deploying containers to scaling them to resolving faults. In addition, Kubernetes is open source and widely supported, including by every major cloud provider. In short, it’s ubiquitous. That makes Kubernetes the preferred system for managing a large, containerized enterprise application.
When an organization decides to containerize its applications, adopting Kubernetes to orchestrate those containers just makes sense—especially given the wide-ranging payoffs.
The Google engineers who created Kubernetes chose the name based on an ancient Greek word for pilot or helmsman—the person who steers a ship—because it moves and manages a fleet of containers. And much like an actual container ship, Kubernetes depends on many components working together to get its data cargo where it needs to be. Terms such as pod, node, and cluster, defined above, are the ones you’ll encounter most often in discussions about containers and the Kubernetes orchestration platform.
Kubernetes is not merely a container management platform; it’s a sophisticated orchestration tool that automates and simplifies the entire application lifecycle, from design to deployment to production use. Its key features—automated deployment, scaling, load balancing, service discovery, self-healing, and secrets management—help teams efficiently manage complex, distributed applications.
While Kubernetes has been honed over many years, it comes with a steep learning curve, and adoption brings challenges of its own, including the cost of training and tooling and the work of keeping deployments secure, stable, and compliant. Still, it’s far better to take the time to learn Kubernetes than to rely on other methods for managing large, distributed applications.
Enterprises use Kubernetes for many types of applications; you’ll find it in ecommerce, manufacturing, research, finance, utilities—basically every industry. Many large distributed applications that use containers can benefit from Kubernetes orchestration and automation. Here are a few of the common scenarios where Kubernetes can really shine.
The intersection of Kubernetes with AI can be transformative for a business, since Kubernetes can play a pivotal role in managing and orchestrating AI workloads in the cloud. In particular, Kubernetes provides a robust and flexible platform for AI training and deployment, offering advantages such as automatic scaling for resource-intensive workloads, efficient use of compute resources, and portability across cloud and on-premises environments.
The widespread adoption of Kubernetes in the past decade has led to the emergence of a thriving ecosystem of tools, services, and supporting technologies, from managed Kubernetes services offered by cloud providers to monitoring, security, and developer tooling. This ecosystem further enhances Kubernetes’ capabilities, giving organizations diverse options to tailor their infrastructure and development practices.
Any discussion of the Kubernetes ecosystem would be incomplete without a mention of KubeCon, the annual conference for Kubernetes developers and users hosted by the Cloud Native Computing Foundation (CNCF). Since the first KubeCon convened in 2015 with 500 attendees, the event has grown substantially. In 2024, the Salt Lake City conference drew more than 9,000 developers and engineers.
Best practices for Kubernetes could fill a book—and in fact, many have been written. Make no mistake: Kubernetes is complex. However, following established best practices, such as starting small, using a managed service, and building team experience with upgrades, rollbacks, and monitoring, can help companies use the platform successfully.
OCI Kubernetes Engine (OKE) is an Oracle-managed container orchestration service that can reduce the time and cost of building cloud native applications. OKE helps simplify operations of enterprise-grade Kubernetes at scale, letting you easily deploy and manage resource-intensive workloads, such as AI, with automatic scaling, patching, and upgrades.
The ability of the Kubernetes platform to orchestrate and automate application deployment and management has revolutionized the way applications run in the cloud native era. As Kubernetes continues to evolve and gain momentum, it’s becoming even more significant. Businesses that embrace Kubernetes can gain a marked competitive advantage, so understanding this technology is vital for developers and business leaders alike.
Kubernetes is important to cloud native development—which is key to more resilient, scalable, and agile applications, whether they’re running in the cloud, on-premises, or in a hybrid or multicloud model.
Why is Kubernetes a critical component of enterprise cloud strategy?
Kubernetes is critical because it’s how enterprises deploy, scale, and manage their distributed applications, especially those running in the cloud. Kubernetes automation improves application reliability while also maximizing the utilization of resources, thereby keeping costs down.
What key factors should enterprises consider when adopting Kubernetes at scale?
There are two main factors to consider. The first is organizational readiness: Are your engineers and developers ready for this model of application development and deployment? The other is technical: Do you have the right architectural approach to design and deploy Kubernetes and containers in a way that’s secure, stable, and compliant with governance requirements?
What are the main cost considerations for enterprises running Kubernetes at scale?
Kubernetes can help reduce costs by maximizing the use of cloud resources and by releasing resources such as servers and storage when they’re not needed. However, there are costs involved in training, tooling, and optimizing your network and application models to take full advantage of available resources.
How can enterprises ensure a smooth transition to Kubernetes from traditional infrastructure?
It’s a big shift! Begin by introducing Kubernetes for a small application that may already be running in one or just a few containers. Consider starting with a cloud-based Kubernetes service that manages the control plane on your behalf rather than trying to learn, deploy, and operate all the different elements on your own. Experiment with upgrades, rollbacks, monitoring, deliberate failures, and more to help your team gain the experience needed to tackle bigger projects, such as refactoring a monolithic application into one based on microservices.
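One way to practice an upgrade and a rollback in a sandbox cluster, sketched with the official Python client; the deployment "web" and the nginx image tags are illustrative assumptions, and a real rollback would typically reuse a previously recorded spec.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def set_image(tag: str) -> None:
    # Patch only the container image; Kubernetes performs a rolling update,
    # starting new containers before stopping old ones.
    apps.patch_namespaced_deployment(
        name="web", namespace="default",
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "web", "image": f"nginx:{tag}"}]}}}})

set_image("1.25")  # roll forward to a new version
set_image("1.24")  # "roll back" by re-applying the previous image
```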