Kubernetes Engine Features

Node choice

Virtual nodes for serverless Kubernetes

Virtual nodes provide a serverless Kubernetes experience, delivering granular pod elasticity with per-pod pricing. Scale your deployments without needing to consider a cluster's capacity, thus simplifying operations.

Managed nodes

Managed nodes are worker nodes created within your tenancy and operated with shared responsibility between you and OCI. They are suitable for many configurations and compute shapes not supported by virtual nodes.

Self-managed nodes

Self-managed nodes offer the most customization and control for unique compute configurations and advanced setup not supported by managed nodes, such as RDMA-enabled bare metal HPC/GPU for AI workloads.

Comprehensive compute options

Optimize cost and performance by choosing the most appropriate compute shapes from a wide range of bare metal and virtual machine options, including high-powered NVIDIA GPUs and cost-effective Arm and AMD CPUs. Support multiarchitecture images with OCI Container Registry.

OKE simplifies your operations

On-demand node cycling

Streamline the task of updating managed worker nodes with on-demand node cycling, which eliminates the need for the time-consuming, manual rotation of nodes or the development of custom solutions. Effortlessly modify node pool properties including SSH keys, boot volume size, and custom cloud-init scripts.
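As a rough sketch, node cycling is enabled per node pool through the OCI CLI's container-engine commands; the JSON payload keys below (for example, maximumSurge and maximumUnavailable) are assumptions based on CLI conventions, and the OCIDs are placeholders:

```shell
# Illustrative sketch -- verify flag and key names against your OCI CLI version.
oci ce node-pool update \
  --node-pool-id ocid1.nodepool.oc1..<unique_id> \
  --node-pool-cycling-details '{
    "isNodeCyclingEnabled": true,
    "maximumSurge": "1",
    "maximumUnavailable": "0"
  }'
```

With cycling enabled, subsequent property changes (such as a new node image or boot volume size) are rolled out by replacing nodes gradually within the surge and unavailability limits.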

Add-ons lifecycle management

Easily expand and control the functionality of your OKE clusters with a curated collection of configurable add-on software. OKE manages add-on lifecycles—from initial deployment and configuration through ongoing operations including upgrades, patching, scaling, and rolling configuration changes.

Automatic Kubernetes upgrades

Trigger an upgrade of your Kubernetes version with one click. Virtual nodes automatically deliver seamless, on-the-fly updates and security patches for your worker nodes and underlying infrastructure while respecting the availability of your applications.

Autoscale with high availability

Increase the availability of applications using clusters that span multiple availability domains (data centers) in any commercial region or in Oracle Cloud Infrastructure (OCI) Dedicated Region. Scale pods both horizontally and vertically, and autoscale clusters as well.

Self-healing nodes

When node failure is detected, OKE automatically provisions new worker nodes to maintain cluster availability.

Safe node deletion

Automated cordon and drain options enable you to safely delete worker nodes without disrupting your applications.
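OKE performs the cordon-and-drain steps for you on deletion; for reference, they correspond to the standard kubectl commands below (node name is a placeholder):

```shell
# Mark the node unschedulable so no new pods are placed on it.
kubectl cordon <node-name>
# Evict existing pods gracefully, respecting PodDisruptionBudgets.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```

Because drain honors PodDisruptionBudgets, workloads with a budget configured keep their minimum replica count available throughout the eviction.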

Cluster observability

Monitor and secure your applications with observability tools from OCI, Datadog, Aqua Security, and more.

OKE makes things easy for developers

One-click cluster creation

Deploy Kubernetes clusters, including the underlying virtual cloud networks, internet gateways, and NAT gateways, with just one click.

Complete REST API and CLI support

Automate Kubernetes operations with REST APIs and a command-line interface (CLI) covering all actions, including cluster creation, scaling, and upgrades.
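For example, a cluster can be created from the CLI; the sketch below uses placeholder OCIDs, and the endpoint-subnet flag name is an assumption worth checking against `oci ce cluster create --help`:

```shell
# Illustrative sketch -- all OCIDs are placeholders.
oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --name demo-cluster \
  --kubernetes-version v1.29.1 \
  --vcn-id ocid1.vcn.oc1..<unique_id> \
  --endpoint-subnet-id ocid1.subnet.oc1..<unique_id>
```

The same operation is available through the REST API and SDKs, so cluster provisioning can be embedded in Terraform or CI/CD pipelines.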

Tight integration with other OCI services

OKE integrates seamlessly with OCI services including Container Registry, DevOps CI/CD, networking, storage, and more. Using OCI Service Operator for Kubernetes, you can directly manage your OCI services from your OKE cluster. Create, manage, and establish connections with resources such as Autonomous Database and MySQL Database using the Kubernetes API and tooling.

DevOps toolchain compatibility

OKE is built on open standards and is fully conformant with open source upstream Kubernetes, enabling you to leverage ecosystem solutions and integrate with your preferred dev tools such as Argo CD, GitHub, Jenkins, and many more.

Container Marketplace

Get prepackaged containerized solutions finely tuned for optimal performance on OKE via the OCI Container Marketplace.

Hybrid and multicloud app building

OKE uses unmodified open source Kubernetes that complies with the Cloud Native Computing Foundation (CNCF) and Open Container Initiative standards for easy application portability.

OKE facilitates security and privacy

Coming soon: Kubernetes governance

Oracle Cloud Guard offers out-of-the-box Kubernetes governance, delivering automated security and adherence to best practices when deploying resources on OKE. By automatically identifying configuration issues, it enables you to effortlessly secure and maintain compliance on your OKE clusters.

Encryption

OCI always encrypts block volumes, boot volumes, and volume backups at rest using the Advanced Encryption Standard (AES) algorithm with 256-bit keys. You can also encrypt Kubernetes secrets at rest with OCI Vault's Key Management Service and manage the lifecycle of your own encryption keys.
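As a sketch, a customer-managed Vault key is supplied at cluster creation so that etcd-stored Kubernetes secrets are encrypted with it; the --kms-key-id flag name is an assumption to verify against your CLI version, and all OCIDs are placeholders:

```shell
# Illustrative sketch -- verify flag names; OCIDs are placeholders.
oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --name secure-cluster \
  --kubernetes-version v1.29.1 \
  --vcn-id ocid1.vcn.oc1..<unique_id> \
  --kms-key-id ocid1.key.oc1..<unique_id>
```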

Compliance

OKE supports compliance with various regulatory frameworks such as HIPAA, PCI, SOC 2, and many others.

Private Kubernetes clusters and Bastion

With private clusters, you can restrict access to the Kubernetes API endpoint to your on-premises network or a bastion host, improving your security posture. Use OCI Bastion to easily access fully private clusters.

Strong pod-level isolation

OKE virtual nodes provide strong isolation for every Kubernetes pod: pods do not share any underlying kernel, memory, or CPU resources. This pod-level isolation enables you to run untrusted workloads, multitenant applications, and applications that handle sensitive data.

Network security groups

OKE supports network security groups (NSGs) for all cluster components. An NSG consists of a set of ingress and egress security rules that apply to virtual network interface cards (VNICs) in your virtual cloud network (VCN). With NSGs, you can separate your VCN architecture from your cluster components’ security requirements.
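For instance, an NSG can be attached to the load balancer that fronts a Kubernetes Service. The annotation name below follows the OCI cloud controller manager convention and should be verified against your provider version; the NSG OCID is a placeholder:

```yaml
# Hypothetical sketch: attach an NSG to a LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    oci.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup.oc1..<unique_id>"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8443
```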

Authentication and authorization

Control access and permissions using native OCI Identity and Access Management (IAM) and Kubernetes role-based access control. You can also configure OCI IAM multifactor authentication. OKE workload identity enables you to establish secure authentication at the pod level for OCI APIs and services. By implementing the principle of “least privilege” for workloads, you can ensure that each workload can access only the resources it needs.
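A workload identity grant is expressed as an IAM policy scoped to a specific cluster, namespace, and service account. The sketch below assumes the request.principal.* policy variable names used by OCI workload identity; the compartment, namespace, service account, and OCID are hypothetical:

```
Allow any-user to read secret-family in compartment team-a where all {
  request.principal.type = 'workload',
  request.principal.cluster_id = 'ocid1.cluster.oc1..<unique_id>',
  request.principal.namespace = 'app-ns',
  request.principal.service_account = 'app-sa'
}
```

Only pods running under that service account in that namespace of that cluster can read the secrets, which is how least privilege is enforced at the pod level.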

Container image scanning, signing, and verification

OKE supports container image scanning, signing, and verification so you can protect your application images against serious security vulnerabilities and preserve the integrity of the container images when deployed.

Easy auditability

All Kubernetes audit events are made available via the OCI Audit service.

OKE is ideal for AI workloads

Control over large fleets of compute

OKE makes it easy to deploy the massive amounts of GPU and CPU resources needed for sophisticated model training and highly responsive inferencing workloads. With OKE self-managed nodes, you can harness the power of NVIDIA H100 GPUs and OCI’s low-latency RDMA networking for maximum AI performance.

Automatic scaling

Using the built-in Horizontal Pod Autoscaler (HPA) feature of open source Kubernetes, OKE enables your AI-powered applications to quickly grow and shrink as user demand fluctuates. There’s no need to overprovision and pay for unused resources when OKE can automatically scale in and out as needed.
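Because OKE runs upstream Kubernetes, a standard autoscaling/v2 HorizontalPodAutoscaler manifest works as-is; the Deployment name and thresholds below are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-api   # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The HPA adds replicas when average CPU utilization across pods rises above the target and removes them as demand falls, down to the configured minimum.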

RDMA cluster networking

Use OKE self-managed nodes to tap into the power of OCI supercluster infrastructure, designed to scale to tens of thousands of NVIDIA GPUs without compromising performance. OCI's cluster network uses RDMA over Converged Ethernet (RoCE) on top of NVIDIA ConnectX RDMA NICs to support high-throughput and latency-sensitive workloads.