OKE streamlines operations for cloud native, enterprise-grade Kubernetes at any scale. Deploy, manage, and scale your most demanding workloads—including AI and microservices—with automated upgrades, intelligent scaling, and built-in security.
Learn how to accelerate development and simplify managing AI workloads in production.
Modular software offers major advantages—scalability, resilience, flexibility, and more. Discover why now is the perfect time to adopt cloud native development.
Explore how you can quickly build, run, and rearchitect dynamic applications using AI on a reliable, scalable cloud.
Discover how agentic AI, cloud native platforms, and robust infrastructure are driving enterprise AI adoption and how your organization can prepare. This session will be held on January 20 and 27.
See how 8x8 improved performance and TCO on OCI.
Learn how DMCC meets peak demand with elastic scaling.
Explore how Cohere improved serving efficiency on OCI.
See how CNCF moved Kubernetes workloads to OCI with minimal changes.
Find out how EZ Cloud streamlined deployment and Day 2 operations.
Read how B3 achieves stringent availability objectives on OCI.
See how Zimperium designs for regional failover and rapid recovery.
The AI model-building process starts with data preparation and experimentation, benefiting from secure, shared access to GPUs and centralized administration. OKE enables teams to:
– Maximize GPU utilization through secure, multitenant clusters
– Collaborate efficiently in centrally managed environments
– Integrate with Kubeflow for streamlined model development and deployment
Learn more about running applications on GPU-based nodes with OKE.
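The GPU sharing described above comes down to standard Kubernetes scheduling: a pod declares how many GPUs it needs, and the scheduler places it on a node with free GPUs. Below is a minimal sketch of such a pod manifest, built as a plain Python dictionary; the pod name and container image are illustrative assumptions, not values from OKE documentation.

```python
import json

# Illustrative sketch: a pod manifest that requests one NVIDIA GPU so
# Kubernetes schedules it onto a GPU worker node (for example, an OKE
# GPU node pool). The pod name and image are assumptions for illustration.
def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # "nvidia.com/gpu" is the extended resource exposed by the
                # NVIDIA device plugin; requesting it pins the pod to a node
                # with available GPUs, so teams share hardware safely.
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
        },
    }

manifest = gpu_pod_manifest("train-job", "nvcr.io/nvidia/pytorch:24.01-py3")
print(json.dumps(manifest, indent=2))
```

In practice you would write the same structure as YAML and apply it with `kubectl apply`; the dictionary form simply makes the scheduling contract explicit.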
Built on OCI's high-performance infrastructure, OKE gives you:
– Access to the latest NVIDIA GPUs (H100, A100, A10, and more)
– Ultrafast RDMA networking for maximum throughput and low latency
– Full control with managed or self-managed Kubernetes worker nodes
Explore how to create a Kubernetes cluster and install Kubeflow within it.
Data scientists rely on optimized scheduling to maximize resource use for training jobs. OKE supports advanced schedulers such as Volcano and Kueue to efficiently run parallel and distributed workloads.
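With Kueue, the hand-off to the scheduler is declarative: a training Job is labeled with a queue name, and Kueue admits it only when quota is available, so parallel jobs share the cluster fairly. The sketch below shows that shape as a plain Python dictionary, assuming Kueue is installed and a queue named `team-queue` exists; the queue, job, and image names are illustrative assumptions.

```python
import json

# Illustrative sketch: a batch Job submitted to a Kueue queue. Kueue
# gates admission on available quota, which is how parallel and
# distributed training jobs share GPU capacity. Names are assumptions.
def queued_training_job(name: str, queue: str, parallelism: int) -> dict:
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "name": name,
            # Kueue discovers Jobs through this label and manages their admission.
            "labels": {"kueue.x-k8s.io/queue-name": queue},
        },
        "spec": {
            "parallelism": parallelism,
            "completions": parallelism,
            "suspend": True,  # Kueue unsuspends the Job once it is admitted
            "template": {
                "spec": {
                    "containers": [{"name": "worker", "image": "my-training:latest"}],
                    "restartPolicy": "Never",
                },
            },
        },
    }

job = queued_training_job("distributed-train", "team-queue", parallelism=4)
print(json.dumps(job, indent=2))
```

Volcano follows a similar pattern with its own Job type and gang-scheduling semantics; in both cases the workload declares its queue and parallelism, and the scheduler decides when resources are free.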
Large-scale AI training requires fast, low-latency cluster networking. OCI’s RDMA-enabled infrastructure empowers OKE to move data directly to and from GPU memory, minimizing latency and maximizing throughput.
OKE, built on reliable OCI infrastructure, brings you:
– Access to NVIDIA GPUs (H100, A100, A10, and more)
– Ultrafast, RDMA-backed network connections
– The flexibility to run jobs on self-managed Kubernetes nodes
Learn more about running applications on GPU-based nodes with OKE.
Ready to run GPU workloads on OKE with NVIDIA A100 bare metal nodes? This tutorial can show you how.
OKE takes full advantage of Kubernetes to efficiently manage inference pods, automatically adjusting resources to meet demand. With the Kubernetes Cluster Autoscaler, OKE can automatically resize managed node pools based on real-time workload demands, enabling high availability and optimal cost management when scaling inference services.
OKE’s advanced scheduling and resource management enable you to set precise CPU and memory allocations for inference pods, supporting consistent and reliable performance as workloads fluctuate. Learn more about deploying and managing applications on OKE.
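The resource allocations described above take the form of per-container requests and limits on the inference Deployment: requests guide scheduling (and give the Cluster Autoscaler the signal it needs to resize node pools), while limits cap consumption so latency stays predictable. A minimal sketch follows; the Deployment name, image, and sizing values are illustrative assumptions.

```python
import json

# Illustrative sketch: an inference Deployment with explicit CPU/memory
# requests and limits. Requests drive scheduling and autoscaling
# decisions; limits bound consumption so noisy neighbors cannot degrade
# serving latency. Image name and sizes are assumptions for illustration.
def inference_deployment(name: str, image: str, replicas: int) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {
                            "requests": {"cpu": "2", "memory": "4Gi"},
                            "limits": {"cpu": "4", "memory": "8Gi"},
                        },
                    }],
                },
            },
        },
    }

dep = inference_deployment("model-serve", "ghcr.io/example/model-server:1.0", replicas=3)
print(json.dumps(dep, indent=2))
```

When demand rises and pending pods cannot be placed, the Cluster Autoscaler adds nodes to the managed node pool; when demand falls, underused nodes are drained and removed, which is where the cost savings come from.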
OKE offers robust options for scalable, cost-effective AI inference—including virtual nodes for rapid pod-level scaling and the flexibility to run on both GPU and Arm-based processors.
See how to deploy NVIDIA NIM inference microservices at scale with OCI Kubernetes Engine.
For more on running AI inference on GPU nodes, review the documentation for running applications on GPU-based nodes.
When you bring your applications to OKE, you get automated upgrades, intelligent scaling, and built-in security without taking on the operational overhead yourself.
Modernizing with OKE means you move faster and more securely—while Oracle handles the complex parts behind the scenes. That’s migration made easy, so you can focus on what matters most: your business.
Follow the step-by-step deployment guide on using OKE, OCI Bastion, and GitHub Actions for secure, automated migration.
For more on OKE features and management, see the official OKE documentation.
Building microservices on OKE lets your teams deliver services that are scalable, resilient, and flexible.
With OKE, you get the robust tooling and enterprise security Oracle is known for, plus the flexibility microservices require. Change the way you build, update, and scale apps—with fewer headaches and a lot more control.
For more information, see the resources on developing and managing microservices.
“Many OCI AI services run on OCI Kubernetes Engine (OKE), Oracle’s managed Kubernetes service. In fact, our engineering team experienced a 10X performance improvement with OCI Vision just by switching from an earlier platform to OKE. It’s that good.”
VP of OCI AI Services, Oracle Cloud Infrastructure
CIO magazine recognizes OCI for its expertise in delivering cutting-edge Kubernetes solutions, supporting scalable and efficient application development.
Deploy simple microservices packaged as Docker containers that communicate through a common API.
Discover best practices for deploying a serverless virtual node pool using the provided Terraform automation and reference architecture.
Find out how Tryg Insurance reduced its costs by 50% through dynamic rightsizing.
Gregory King, Senior Principal Product Manager
Oracle Cloud Infrastructure (OCI) Full Stack Disaster Recovery (Full Stack DR) announces native support for OCI Kubernetes Engine (OKE). OKE clusters are now a selectable OCI resource in Full Stack DR, just like virtual machines, storage, load balancers, and Oracle databases. This means Full Stack DR knows exactly how to validate, fail over, switch over, and test recovery for OKE, infrastructure, and databases—without your IT staff writing a single line of code or step-by-step instructions in a spreadsheet or text file.
Read the complete post.
Get 30 days of access to CI/CD tools, managed Terraform, telemetry, and more.
Explore deployable reference architectures and solutions playbooks.
Empower app development with Kubernetes, Docker, serverless, APIs, and more.
Reach our associates for sales, support, and other questions.
Scalable, Resilient, Flexible: Why Cloud Native’s Time Is Now
Cloud Native for High Performance Computing
Driving Enterprise Value: AI Trends in Cloud Infrastructure for 2026