Container Instances FAQ

General Questions

When should I use OCI Container Instances versus OCI Kubernetes Engine?

Container Instances is great for operating isolated containers that don’t require a container orchestration platform, such as Kubernetes. It’s suitable for APIs, web apps, CI/CD jobs, automation tasks, data/media processing, DevTest environments, and other use cases. However, it doesn’t replace container orchestration platforms. For use cases that require container orchestration, use OKE.

What’s the difference between running containers on OCI Container Instances versus a standard virtual machine?

When running containers on Container Instances, you don’t have to provision or manage any VMs or servers yourself. You simply specify the container images and launch configuration, and OCI manages the underlying compute required to run your containers. If you run containers on a virtual machine instead, you’re responsible for managing the server and for installing and maintaining the container runtime on it.
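
As a rough sketch of what this looks like with the OCI Python SDK (the OCIDs, availability domain, subnet, and image URL below are placeholders, and the model and field names should be verified against the current SDK reference), creating a container instance comes down to a single create call:

```python
# Sketch only: create a container instance with the OCI Python SDK.
# The OCIDs, availability domain, subnet, and image URL are placeholders,
# and the model/field names should be checked against the SDK reference.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
client = oci.container_instances.ContainerInstanceClient(config)

models = oci.container_instances.models
details = models.CreateContainerInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    availability_domain="Uocm:PHX-AD-1",              # placeholder
    shape="CI.Standard.E4.Flex",
    shape_config=models.CreateContainerInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=8
    ),
    containers=[
        models.CreateContainerDetails(
            display_name="web",
            image_url="docker.io/library/nginx:latest",
        )
    ],
    vnics=[
        models.CreateContainerVnicDetails(
            subnet_id="ocid1.subnet.oc1.phx.example"  # placeholder
        )
    ],
)

response = client.create_container_instance(details)
print(response.data.id)  # OCID of the new container instance
```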

How is OCI Container Instances priced?

With OCI Container Instances, you only pay for the infrastructure resources used by the container instances. The price for CPU and memory resources allocated to a container instance is the same as the price of OCI Compute instances for the chosen shape. There are no additional charges for using Container Instances. Partial OCPU and gigabyte (memory) hours consumed are billed as partial hours with a one-minute minimum, and the usage is aggregated by the second. Each container instance gets 15 GB of ephemeral storage by default with no additional charges. For more details, see the Container Instances pricing page.
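
To illustrate how the metering works, the short example below uses made-up placeholder rates (not actual OCI prices) to show per-second aggregation with the one-minute minimum:

```python
# Rough illustration of per-second aggregation with a one-minute minimum.
# The rates below are made-up placeholders, not actual OCI prices.
OCPU_HOUR_RATE = 0.025   # placeholder USD per OCPU hour
GB_HOUR_RATE = 0.0015    # placeholder USD per GB of memory per hour

def estimate_cost(ocpus: float, memory_gb: float, runtime_seconds: float) -> float:
    """Estimate the cost of one container instance run."""
    billable_seconds = max(runtime_seconds, 60)  # one-minute minimum
    hours = billable_seconds / 3600              # usage aggregated by the second
    return ocpus * hours * OCPU_HOUR_RATE + memory_gb * hours * GB_HOUR_RATE

# A 90-second batch job on 2 OCPUs and 16 GB of memory:
print(round(estimate_cost(ocpus=2, memory_gb=16, runtime_seconds=90), 6))
```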

How many cores and how much memory can I allocate to a container instance?

When creating a container instance, you can select the underlying compute shape and allocate up to the maximum cores and memory resources provided by the shape. The regular cores limit is 8 cores on x86 (AMD) shapes and 16 cores on Arm (Ampere) shapes. You can allocate an extended number of cores for demanding workloads. For example, if you select an AMD E4 Flex shape, you can allocate up to 64 cores (128 vCPU) and 1,024 GB of memory to your container instance. A container instance with extended cores can take longer to create than a container instance without extended cores.
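
As a sketch using the same assumed OCI Python SDK model names as in the earlier example, an extended-core allocation is expressed through the shape configuration:

```python
# Sketch only: an extended-core allocation on an E4 Flex container instance
# (assumed SDK model names; verify against the SDK reference).
import oci

models = oci.container_instances.models

shape = "CI.Standard.E4.Flex"
shape_config = models.CreateContainerInstanceShapeConfigDetails(
    ocpus=64,            # extended cores: 64 OCPUs (128 vCPUs)
    memory_in_gbs=1024,  # up to 1,024 GB of memory on this shape
)
# Pass `shape` and `shape_config` to CreateContainerInstanceDetails as in the
# earlier sketch; creation can take longer when extended cores are requested.
```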

Can I run multiple containers on one container instance?

Yes. When creating a container instance, you can specify one or more containers, each with its own container image. Optionally, you can specify environment variables, resource limits, startup options, and more for each container.

A container instance should typically run a single application. However, your application container may require supporting containers, such as a logging sidecar or a database container, and you can run multiple containers of the same application on one container instance. Containers running on the same container instance share the CPU/memory resources, local network, and ephemeral storage. You can apply CPU/memory resource limits at the container level to restrict the amount of resources consumed by each container.
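
A hedged sketch of this, again assuming the OCI Python SDK model names used earlier (including a per-container CreateContainerResourceConfigDetails model with vcpus_limit and memory_limit_in_gbs fields, which you should verify against the SDK reference), might look like this:

```python
# Sketch only: two containers on one container instance with per-container limits.
# Model and field names (CreateContainerResourceConfigDetails, vcpus_limit,
# memory_limit_in_gbs) are assumptions to verify against the SDK docs.
import oci

models = oci.container_instances.models

containers = [
    models.CreateContainerDetails(
        display_name="app",
        image_url="docker.io/library/nginx:latest",
        resource_config=models.CreateContainerResourceConfigDetails(
            vcpus_limit=1.5, memory_limit_in_gbs=6.0
        ),
    ),
    models.CreateContainerDetails(
        display_name="log-sidecar",
        image_url="docker.io/fluent/fluent-bit:latest",
        environment_variables={"LOG_LEVEL": "info"},
        resource_config=models.CreateContainerResourceConfigDetails(
            vcpus_limit=0.5, memory_limit_in_gbs=2.0
        ),
    ),
]
# Pass `containers` to CreateContainerInstanceDetails; both containers share
# the instance's CPU/memory, local network, and ephemeral storage.
```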

What types of container registries are supported?

Any container registry compliant with the Open Container Initiative, including OCI Container Registry, is supported.

Can I run my applications on Arm-based processors using Container Instances?

Yes. With Arm-based processors, customers can run current workloads less expensively and build new applications with superior economics and performance. OCI Container Instances lets you run your containerized applications on Arm-based processors. To do so, select an Ampere shape, such as CI.Standard.A1.Flex, when creating your container instances, and use Arm-compatible or multi-architecture container images for your applications. You also get 3,000 OCPU hours and 18,000 GB hours of Free Tier usage per month with the Ampere A1 Flex shape. This Free Tier usage is shared across bare metal, VM, and container instances.
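
As a brief sketch with the same assumed SDK model names, targeting Ampere A1 is a matter of the shape choice and an Arm-compatible image:

```python
# Sketch only: run on Ampere A1 (Arm) with an Arm-compatible image
# (assumed SDK model names; verify against the SDK reference).
import oci

models = oci.container_instances.models

shape = "CI.Standard.A1.Flex"
shape_config = models.CreateContainerInstanceShapeConfigDetails(ocpus=2, memory_in_gbs=12)
containers = [
    models.CreateContainerDetails(
        display_name="api",
        # A multi-architecture image resolves to its arm64 variant on A1.
        image_url="docker.io/library/python:3.12-slim",
    )
]
```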

What are the available storage options for Container Instances?

Each container instance gets 15 GB of ephemeral storage by default. It’s used for a variety of purposes, such as storing container images and backing each container's root overlay file system. If the combined size of the container images on a container instance exceeds 7.5 GB, container instance creation may fail. It’s recommended that you use external databases to store application data that needs to persist independently of the container instance lifecycle. Options to attach persistent volumes with OCI Block Storage and OCI File Storage will be provided in the future.

How long-lived are container instances?

A container instance becomes inactive as soon as all containers within it stop, provided the restart policy isn’t enabled. This means that container instances used for ephemeral workloads, such as CI/CD pipelines, automation tasks for cloud operations, and data/media processing, stop once the workload completes. Customers are billed only for the duration of the job.

For container instances that need to stay up, such as those used for web applications, customers can configure restart policies to restart containers within a container instance in case of failure, ensuring that the application is always up. For high availability of such applications, it’s recommended to create multiple container instances across two availability domains or fault domains in a given region.
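
A sketch of this with the assumed OCI Python SDK model names (the container_restart_policy field and its values are assumptions to verify against the API reference) might look like this:

```python
# Sketch only: keep containers running with a restart policy.
# The container_restart_policy field and its values ("ALWAYS", "NEVER",
# "ON_FAILURE") are assumptions to verify against the API reference.
import oci

models = oci.container_instances.models
details = models.CreateContainerInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    availability_domain="Uocm:PHX-AD-1",              # placeholder
    shape="CI.Standard.E4.Flex",
    shape_config=models.CreateContainerInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=8
    ),
    container_restart_policy="ON_FAILURE",  # or "ALWAYS" for long-running services
    containers=[
        models.CreateContainerDetails(
            image_url="docker.io/library/nginx:latest"
        )
    ],
    vnics=[
        models.CreateContainerVnicDetails(
            subnet_id="ocid1.subnet.oc1.phx.example"  # placeholder
        )
    ],
)
```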