Get answers to frequently asked questions about Oracle Cloud Infrastructure (OCI) Service Mesh.
OCI Service Mesh is a managed application infrastructure layer for microservice-to-microservice communication. It allows developers to focus on application development while the mesh handles service-to-service networking and security.
It simplifies the development and operation of cloud native applications by providing out-of-the-box capabilities for security, observability, and network traffic management, without requiring any application changes.
Organizations of all sizes will benefit from operational improvements and security enforcement. Adopting the Service Mesh service requires no additional effort from engineering teams because OCI manages it.
Service Mesh supports cloud native applications that run on Oracle Container Engine for Kubernetes.
There are no charges for using Service Mesh. Users pay only for the infrastructure required to run the proxy components that run alongside their applications.
Service Mesh adds a set of proxies to the architecture by injecting a proxy component in front of each microservice. With a proxy in front of each microservice, the mesh layer can control microservice-to-microservice communication and provide features for security, observability, and network traffic control.
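On Oracle Container Engine for Kubernetes, proxy injection is typically enabled by labeling the namespace (or pods) that should join the mesh. The sketch below shows the idea; the label key and value are assumptions based on Service Mesh conventions and should be confirmed against the current documentation.

```yaml
# Hedged sketch: opting a namespace into automatic proxy (sidecar) injection.
# The label key and value are assumptions; verify them against the current
# OCI Service Mesh documentation before applying.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                        # placeholder namespace
  labels:
    servicemesh.oci.oracle.com/sidecar-injection: enabled
```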
The best place to begin is with the documentation's Getting Started section, which details the prerequisites and requirements for Service Mesh enablement.
The documentation for Oracle Cloud Infrastructure Service Mesh is available in the Oracle Help Center.
OCI Service Mesh automatically pushes updates to proxy components. The application team can turn off automatic proxy updates; when they turn them back on, OCI Service Mesh pushes the new updates. Proxy updates may cause downtime, depending on the pod deployment strategy used. To avoid downtime during upgrades, configure Kubernetes pod disruption budgets for the application (an industry-recommended best practice).
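For example, a pod disruption budget keeps a minimum number of replicas available while proxies are restarted. This uses standard Kubernetes policy/v1; the names and label selector below are placeholders.

```yaml
# Standard Kubernetes PodDisruptionBudget (policy/v1); names and labels are placeholders.
# Guarantees at least one selected pod stays available during voluntary disruptions
# such as proxy updates or node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: my-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```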
Yes, it is turned on by default. Service Mesh will encrypt communication between microservices and authenticate their identities automatically, with no configuration required.
As businesses increasingly rely on microservices for mission-critical applications, the reliability and security of those microservices become critical. Using mTLS enforces mutual service authentication and data encryption. mTLS encrypts all messages sent between microservices, ensuring that messages remain private even if intercepted.
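For teams that want to require mutual TLS rather than merely allow it, the mesh-wide minimum mTLS level can be raised. The sketch below follows the general shape of the Service Mesh Kubernetes resources; the API version, field names, and OCIDs are assumptions and placeholders, so check the CRD reference before use.

```yaml
# Hedged sketch of a Mesh resource that sets the mesh-wide minimum mTLS level to STRICT,
# so only mutually authenticated, encrypted traffic is accepted between proxies.
# API version and field names are assumptions; OCIDs are placeholders.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: Mesh
metadata:
  name: my-mesh
  namespace: my-app
spec:
  compartmentId: ocid1.compartment.oc1..exampleuniqueid          # placeholder OCID
  certificateAuthorities:
    - id: ocid1.certificateauthority.oc1..exampleuniqueid        # placeholder OCID
  mtls:
    minimum: STRICT
```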
Service Mesh supports integration with Prometheus to automatically collect metrics from proxy components.
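A typical setup points a Prometheus scrape job at the proxy pods discovered through the Kubernetes API. The job below uses standard Prometheus service-discovery and relabeling syntax; it assumes the proxy pods carry the common prometheus.io/scrape, prometheus.io/port, and prometheus.io/path annotations, which may differ from what the Service Mesh proxies actually expose.

```yaml
# Hedged sketch of a Prometheus scrape job that discovers mesh proxy pods.
# Assumes the proxy pods are annotated with prometheus.io/scrape, prometheus.io/port,
# and prometheus.io/path; the actual annotations and metrics port may differ.
scrape_configs:
  - job_name: service-mesh-proxies
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods explicitly marked for scraping.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Use the port declared in the pod annotation.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      # Use the metrics path declared in the pod annotation, if any.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        regex: (.+)
        target_label: __metrics_path__
```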
Service Mesh integrates with OCI Logging to store and search logs generated by proxy components.
OCI Service Mesh's traffic management functionality deploys new microservice versions using canary deployments: it first routes a small percentage of traffic to the new version and gradually shifts the remaining traffic over time. If the application runs into any issues, the application team can quickly switch back to the stable version.
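Traffic splitting of this kind is expressed as weighted routes between the stable and canary versions of a service. The sketch below approximates the Service Mesh route-table resource for Kubernetes; the kind, field names, and references are assumptions, and the weights are illustrative.

```yaml
# Hedged sketch of weighted (canary) routing between two versions of a service.
# Kind and field names approximate the OCI Service Mesh Kubernetes CRDs; names,
# references, and weights are placeholders to be confirmed against the CRD docs.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualServiceRouteTable
metadata:
  name: my-service-routes
  namespace: my-app
spec:
  virtualService:
    ref:
      name: my-service
  routeRules:
    - httpRoute:
        destinations:
          - virtualDeployment:
              ref:
                name: my-service-v1
            weight: 90                 # stable version keeps most traffic
          - virtualDeployment:
              ref:
                name: my-service-v2
            weight: 10                 # canary receives a small share
```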
While an ingress gateway is not required, it gives more control over traffic entering the mesh. For traffic between external clients and the mesh edge, OCI already provides services such as Load Balancer and API Gateway.
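When an ingress gateway is used, it is defined at the mesh edge and bound to the hostnames it should accept. The sketch below approximates the Service Mesh ingress gateway resource; the field names, hostname, and TLS settings are assumptions and placeholders.

```yaml
# Hedged sketch of an IngressGateway terminating external traffic at the mesh edge.
# Field names approximate the OCI Service Mesh CRDs; hostname, port, and TLS mode
# are placeholders that should be adapted and verified.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: IngressGateway
metadata:
  name: my-ingress-gateway
  namespace: my-app
spec:
  mesh:
    ref:
      name: my-mesh
  hosts:
    - name: my-app-host
      hostnames:
        - myapp.example.com            # placeholder hostname
      listeners:
        - port: 8080
          protocol: HTTP
          tls:
            mode: DISABLED             # placeholder; enable TLS/mTLS as appropriate
```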
The access policies of OCI Service Mesh define egress rules that allow requests to specific hosts outside the mesh.
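As an illustration, an egress rule can allow all services in the mesh to call a specific external HTTPS host. The sketch below approximates the Service Mesh access policy resource; the field names and hostname are assumptions and placeholders.

```yaml
# Hedged sketch of an access policy rule allowing mesh workloads to reach an
# external HTTPS host. Field names approximate the OCI Service Mesh CRDs and the
# hostname is a placeholder; confirm the exact schema in the CRD reference.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: AccessPolicy
metadata:
  name: allow-external-api
  namespace: my-app
spec:
  mesh:
    ref:
      name: my-mesh
  rules:
    - action: ALLOW
      source:
        allVirtualServices: {}
      destination:
        externalService:
          httpsExternalService:
            hostnames:
              - api.example.com        # placeholder external host
            ports:
              - 443
```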