Object Storage FAQ

 

General Questions

What is Oracle Cloud Infrastructure Object Storage?

Oracle Object Storage is a scalable, fully programmable, durable cloud storage service. Developers and IT administrators can use this service to store and easily access an unlimited amount of data at low cost.

What can you do with Oracle Object Storage?

With Oracle Object Storage, you can safely and securely store and retrieve data directly from applications or from within the cloud platform, at any time. Oracle Object Storage is agnostic to the data content type and enables a wide variety of use cases. You can send backup and archive data offsite, design Big Data Analytics workloads to generate business insights, or build scale-out web applications. The elasticity of the service enables you to start small and scale applications as they evolve, and you always pay for only what you use.

What is unique about Oracle Cloud Infrastructure's approach to object storage?

Oracle Object Storage service is secure, easy to manage, strongly consistent, and scalable. When a read request is made, Oracle Object Storage serves the most recent copy of the data that was written to the system. Oracle Object Storage is connected to a high-performing, high-bandwidth network with compute and object storage resources co-located on the same network. This means that compute instances running in Oracle Cloud Infrastructure get low latency access to object storage.

What are the core components of the Oracle Object Storage service?

Objects: All data, regardless of content type, is stored as objects in Oracle Object Storage. For example, log files, video files, and audio files are all stored as objects.

Bucket: A bucket is a logical container that stores objects. Buckets can serve as a grouping mechanism to store related objects together.

Namespace: A namespace is the logical entity that lets you control a personal bucket namespace. Oracle Cloud Infrastructure Object Storage bucket names are not global. Bucket names need to be unique within the context of a namespace, but can be repeated across namespaces. Each tenant is associated with one default namespace (tenant name) that spans all compartments.
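
As a quick illustration of how these three concepts fit together, here is a minimal sketch using the Oracle Cloud Infrastructure Python SDK; the bucket name, object name, and contents are placeholders, not values taken from this FAQ.

import oci

# The SDK reads credentials from ~/.oci/config by default.
client = oci.object_storage.ObjectStorageClient(oci.config.from_file())

# The tenancy's namespace is the top-level container for all of its buckets.
namespace = client.get_namespace().data
bucket = "example-bucket"

# Store and read back an object; any content type is accepted.
client.put_object(namespace, bucket, "logs/app.log", b"2024-01-01 INFO started")
data = client.get_object(namespace, bucket, "logs/app.log").data.content
print(data)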

How do I get started with Oracle Cloud Infrastructure Object Storage?

Sign up for the Oracle Cloud Free Tier. You can build, test, and deploy applications on Oracle Cloud for free.


How durable is data stored in Oracle Cloud Infrastructure Object Storage?

Oracle Object Storage is designed to be highly durable, providing 99.999999999% (Eleven 9s) of annual durability. It achieves this by storing each object redundantly across three different availability domains for regions with multiple availability domains, and across three different fault domains in regions with a single availability domain. Data integrity is actively monitored using checksums, and corrupt data is detected and automatically repaired. Any loss in data redundancy is detected and remedied, without customer intervention or impact.

Do you use erasure coding in the Object Storage service?

Yes, OCI Object Storage uses a variety of storage schemes, including erasure coding. The storage scheme used for an object cannot be influenced by the customer, and the schemes utilized may change over time.

How reliable is Oracle Cloud Infrastructure Object Storage?

Oracle Object Storage is highly reliable. The service is designed for 99.9% availability. Multiple safeguards have been built into the platform to monitor the health of the service to guard against unplanned downtime.

Can I assign metadata tags to Objects?

Yes. Objects can be tagged with multiple user-specified metadata key-value pairs. See Managing Objects in the Object Storage documentation for more information.
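
For instance, with the Python SDK, user-defined metadata can be supplied at upload time through the opc_meta parameter and read back with a HEAD request; the bucket, object, and key names below are illustrative.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Attach user-defined key-value metadata when the object is uploaded.
client.put_object(
    namespace, "example-bucket", "report.csv", b"col1,col2\n1,2",
    opc_meta={"department": "finance", "retention": "1y"},
)

# User metadata is returned with an "opc-meta-" prefix on the response headers.
head = client.head_object(namespace, "example-bucket", "report.csv")
print(head.headers.get("opc-meta-department"))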

How much data can I store in Oracle Cloud Infrastructure Object Storage?

You can store an unlimited amount of data in Oracle Object Storage. You can create thousands of buckets per account and each bucket can host an unlimited number of objects. Stored objects can be as small as 0 bytes or as large as 10 TiB. Oracle recommends that you use multipart uploads to store objects larger than 100 MiB. For more information, see Service Limits in the Oracle Cloud Infrastructure documentation.

Is Oracle Cloud Infrastructure Object Storage specific to an Availability Domain or to a Region?

Oracle Object Storage is a regional service. It can be accessed through a dedicated regional API endpoint.

The Native Oracle Cloud Infrastructure Object Storage API endpoints use a consistent URL format of https://objectstorage.<region-identifier>.oraclecloud.com. For example, the Native OCI Object Storage API endpoint in US West (us-phoenix-1) is https://objectstorage.us-phoenix-1.oraclecloud.com.

The Swift API endpoints use a consistent URL format of https://swiftobjectstorage.<region-identifier>.oraclecloud.com. For example, the Swift API endpoint in US East (us-ashburn-1) is https://swiftobjectstorage.us-ashburn-1.oraclecloud.com.

The Region Identifier for all OCI regions can be found at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm.

Where is my Oracle Cloud Infrastructure Object Storage data stored?

Oracle Object Storage is available in all Oracle Cloud Infrastructure regions and data is stored within those regions. Customers have the flexibility to choose the specific region where data will reside. You can find more information on available regions and Availability Domains here.

How secure is my data in Oracle Cloud Infrastructure Object Storage?

Oracle Object Storage is highly secure. It is tightly integrated with Oracle Cloud Infrastructure Identity and Access Management. By default, only authenticated users that have explicitly been granted access to specific resources can access data stored in Oracle Object Storage. Data is uploaded and downloaded from Oracle Object Storage over SSL endpoints using the HTTPS protocol. All stored data is encrypted, by default. For an additional layer of security, you can encrypt objects prior to sending them to Oracle Object Storage. That gives you total control over not only your data, but also the encryption keys that are used to encrypt the data.

Does Oracle Cloud Infrastructure Object Storage support object-level permission controls?

OCI Object Storage supports object-level permissions in addition to compartment-level and bucket-level permissions. Object-level permissions protect data in shared buckets from unauthorized users, providing an extra level of security.

You will benefit from the following:

  • Granular-level control over an individual object or set of objects
  • The ability to restrict user access to a particular set of operations, for example, Get, Create, Delete, Rename, Copy

Our Identity and Access Management (IAM) offers a consistent set of policies across all OCI services, allowing you to create, apply, and centrally manage detailed permissions at various levels.

Can I use Oracle Cloud Infrastructure Object Storage as a primary data storage for big data?

Yes, you can use Oracle Object Storage as the primary data repository for big data. This means you can run big data workloads on Oracle Cloud Infrastructure. The object storage HDFS connector provides connectivity to multiple popular big data analytic engines. This connectivity enables the analytics engines to work directly with data stored in Oracle Cloud Infrastructure Object Storage. You can find more information on the HDFS connector here.

Can I access Oracle Cloud Infrastructure Object Storage from anywhere?

You can access Oracle Object Storage from anywhere as long as you have access to an internet connection and the required permissions to access the service. Object storage latency will vary depending on where you are accessing the service from, with higher latency when accessing across a longer distance, all else equal. For example, if data is stored in the US West Region, the latency for accessing data from Nevada will be lower than if the same data were being accessed from London or New York.

Can I recover deleted or overwritten data?

No, deleted and overwritten data cannot be recovered.

However, when Object Versioning is enabled on a bucket, data is not lost when an object is overwritten or when a versioning-unaware delete operation is performed. In both cases, the previous contents of the object are saved as a previous version of the object. Previous versions can be accessed or restored at any time and must be explicitly removed by a Lifecycle Policy or with a versioning-aware delete operation. Object Versioning must be enabled at the time of delete or overwrite to protect data.
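
If Object Versioning is enabled, a sketch like the following (using the Python SDK, with illustrative bucket and object names) lists the stored versions and reads back an older one.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# List every stored version of every object in a version-enabled bucket.
versions = client.list_object_versions(namespace, "versioned-bucket").data.items
for v in versions:
    print(v.name, v.version_id, v.is_delete_marker)

# Read back a specific older version of an object by its version ID
# (the index used here is purely illustrative).
old = client.get_object(namespace, "versioned-bucket", "config.json",
                        version_id=versions[-1].version_id)
print(old.data.content)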

Do I need to back up my object storage data?

No, you do not need to back up data stored in Oracle Cloud Infrastructure Object Storage. Oracle Object Storage is an inherently highly durable storage platform. All objects are stored redundantly on multiple storage servers, across multiple Availability Domains, within a region. Data integrity is constantly monitored using checksums, and corrupt data is self-healed. The native object storage durability characteristics virtually eliminate the need for traditional backups.

Can I use Oracle Cloud Infrastructure Object Storage as a destination for my on-premises backups?

You can use Oracle Object Storage as a destination for your backups, regardless of whether the backup originates in the cloud or in an on-premises data center. Oracle Cloud Infrastructure Block Volumes backups are stored by default in Oracle Cloud Infrastructure Object Storage.

You can also direct your Oracle RMAN backups to Object Storage via the Swift API integration. For Oracle RMAN, you need to use the correct Swift API endpoint. Swift API endpoints use a consistent URL format of https://swiftobjectstorage.<region-identifier>.oraclecloud.com. For example, the Swift API endpoint in US East (us-ashburn-1) is https://swiftobjectstorage.us-ashburn-1.oraclecloud.com.

The Region Identifier for all OCI regions can be found at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm.

Can Oracle Cloud Infrastructure Object Storage buckets be mounted as traditional NFS/SMB mount points on the bare metal compute instances?

Exposing buckets as NFS/SMB mount points on the bare metal compute instances is not supported. Currently you can access Oracle Object Storage using the native APIs, SDKs or the HDFS connector.

How will I be metered and billed for Oracle Cloud Infrastructure Object Storage usage?

Oracle Object Storage is available as a pay-as-you-go service and charged on the following usage elements:

  • Storage used per month, measured in Timed Storage-Byte Hours, aggregated per month.
  • Total number of requests received per month. Delete requests are free.
  • Outbound Internet Transfer. First 10TB of outbound transfer are free.

Full pricing details for Oracle Cloud Infrastructure Object Storage can be found here.

Where can I find Oracle Cloud Infrastructure Object Storage IP address ranges, to add to my on premises firewall or Oracle Cloud Infrastructure security list?

You can find Object Storage IP address ranges in the Object Storage product documentation.

Does Oracle Cloud Infrastructure Object Storage support CORS?

The Oracle Cloud Infrastructure Object Storage API and Swift API do return Cross-Origin Resource Sharing (CORS) headers; however, the returned headers are fixed and cannot be edited. The Amazon S3 Compatibility API does not return CORS headers.

Encryption

Does Oracle Cloud Infrastructure Object Storage support server-side encryption?

Yes. Oracle Object Storage supports server-side encryption. All data stored in Oracle Object Storage is automatically encrypted. Customers can also use Server-Side Encryption with Customer-Provided Keys (SSE-C) or a master encryption key from Vault if they choose.

How can I enable the Oracle Cloud Infrastructure Object Storage encryption capability?

Encryption is automatically enabled for all data with no action required on the part of customers.

Do I need to do any data decryption on the client?

There is nothing specific that you need to do to decrypt the data. You can continue making normal HTTPS GET requests to retrieve the data.

Are the encryption keys rotated?

Yes. The encryption keys are rotated frequently based on a rigorous internal policy.

Do you support client-side encryption?

Yes, we support client-side encryption. You can encrypt the data prior to sending it to Oracle Object Storage. Sending encrypted data enables you to have full control over your encryption keys and provides a second line of defense against unintended and unauthorized data access. To help in this area, Oracle has released SDK enhancements for Client-Side Encryption.

Do you encrypt both the object data and the user-defined metadata?

Yes. We encrypt both the object data and the user-defined metadata associated with the object.

Which encryption algorithm do you use to encrypt the data?

We use 256-bit Advanced Encryption Standard (AES-256) to encrypt all data and encryption keys. AES-256 is considered one of the strongest encryption algorithms that exist today.

Multipart Upload

I need to upload large objects to Oracle Cloud Infrastructure Object Storage. How can I optimize the upload process?

To upload large objects to Oracle Object Storage, consider using multipart upload. Multipart uploads transfer parts in parallel and are faster and more efficient than uploading a large object in a single request. If a multipart upload fails for any reason, instead of restarting the entire object upload, you only need to retry uploading the part that failed. Consider using multipart upload for all objects larger than 100 MiB.

The OCI Command Line Interface and OCI Console will perform multipart uploads for you automatically. More information about multipart uploads is available at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingmultipartuploads.htm.
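
If you are working with the Python SDK rather than the CLI or Console, its UploadManager helper provides similar behavior; the following sketch assumes an illustrative bucket name, local file path, and part size.

import oci
from oci.object_storage import UploadManager

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# UploadManager splits the file into parts, uploads them in parallel, and
# retries individual parts if they fail.
upload_manager = UploadManager(client, allow_parallel_uploads=True,
                               parallel_process_count=4)
upload_manager.upload_file(
    namespace, "example-bucket", "backup.tar", "/tmp/backup.tar",
    part_size=128 * 1024 * 1024,   # 128 MiB parts (illustrative)
)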

Can I associate customer-defined metadata with an object when uploading using multipart upload?

Yes. When you initiate the multipart upload, you can specify the metadata you want to associate with the object. When the object is committed, after all constituent parts are uploaded, the metadata will be associated with the composed object.

What is the maximum number of parts accepted for a multipart upload?

An object can be divided into a maximum of 10,000 parts. Each part must be at least 10 MiB in size. The upper size limit on an object part is 50 GiB. We recommend that you consider using multipart upload to upload objects greater than 100 MiB in size. Regardless of the total number of parts an object has been divided into, the total size of an object cannot exceed 10 TiB.

Can I retry uploading a part if the upload fails?

Yes, you can retry uploading a part when the upload fails for any reason. You must provide the correct upload ID and part number when reinitiating the upload.

Can I replace a part that has already been uploaded?

Yes, you can replace a part after it has been uploaded, but only if the object has not been committed yet. To replace an object part in a multipart upload, make sure that the correct upload ID and part number are used to reinitiate the upload.

Can I pause and resume an object upload?

Yes, you can pause and resume an object upload. If a multipart upload has been initiated for a constituent part, you must let Oracle Object Storage finish uploading the part. Oracle Object Storage does not support pausing and resuming in-progress part uploads.

Can I GET or LIST object parts after the object has been composed from its constituent parts and committed?

No, you cannot 'GET' or 'LIST' the uploaded parts of an object once the multipart upload is complete and the object has been committed. To retrieve a part of the object, you will need to use a Range GET request, which is distinct and separate from multipart upload functionality.
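
For example, with the Python SDK a byte range can be requested through the standard HTTP Range header; the bucket, object name, and range below are illustrative.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Fetch only the first MiB of the object instead of the whole thing.
resp = client.get_object(namespace, "example-bucket", "backup.tar",
                         range="bytes=0-1048575")
first_chunk = resp.data.content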

Can I determine the object part sizes used to upload an object after the multipart upload has been committed?

No. It is not possible to determine the part sizes used after a multipart upload has been committed and the parts have been assembled into an object.

Can I reorder parts of an object before composing the object?

No, the object parts cannot be reordered. The part number determines the sequential order in which parts are committed to the object.

Can I use parts of an object to compose another object?

No, you cannot re-purpose parts of an object to compose another object. An object can only be composed of object parts that share an upload ID.

What is the expected behavior if multiple uploaded parts have the same part number?

If multiple object parts are uploaded using the same part number, the part that was uploaded last takes precedence and is used to compose the object.

What happens to object parts if an object is never committed?

If an upload is initiated, but never completed, Oracle Object Storage maintains the parts in its inventory until you explicitly abort the multipart upload. Oracle Object Storage charges for storage of the object parts regardless of whether or not the object has been committed. You can list active uploads and then decide which uploads to abort. Deleting active uploads deletes all uploaded parts and frees storage space. You can also configure Object Lifecycle Management rules to automatically remove uncommitted or failed multipart uploads.

Can I abort a multipart upload and delete parts that have already been uploaded?

Yes, you can terminate an in-progress multipart upload by aborting the process. Aborting a multipart upload deletes all object parts associated with a specific upload ID.

Can I append a part to an object after the upload has been committed?

No, you cannot append parts to an object after the upload has been committed.

Can I skip part numbers when uploading parts in a multipart upload?

Yes, you can skip part numbers when uploading parts. Part numbers do not need to be contiguous.

Can I manually delete a part and exclude it from the object upload before committing the upload?

No, you cannot specifically delete uploaded parts associated with an active multipart upload. However, you can choose to exclude uploaded parts when committing the object. These excluded parts are automatically deleted.

How can I verify the integrity of an object uploaded using the multipart upload process?

Oracle Object Storage treats the upload of an object part as it would a normal object upload. You can verify that an object was not unintentionally corrupted by sending the MD5 hash of the object part or by capturing the MD5 hash that is returned in the response to the request. When the upload is committed, you will also receive an MD5 hash of the MD5 hashes of the individual parts that constitute the object. This MD5 hash can be used to validate the integrity of the object as a whole.
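
As a sketch of the single-upload case with the Python SDK (the same idea applies to each part of a multipart upload), you can send a locally computed, base64-encoded MD5 with the request and compare it with the value the service returns; the bucket, object name, and contents are illustrative.

import base64, hashlib
import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

part = b"example payload " * 1024                      # illustrative contents
md5_b64 = base64.b64encode(hashlib.md5(part).digest()).decode()

# Sending the MD5 lets the service reject a corrupted upload; the service's own
# computed MD5 comes back in the opc-content-md5 response header.
resp = client.put_object(namespace, "example-bucket", "part-check.bin", part,
                         content_md5=md5_b64)
print(resp.headers.get("opc-content-md5") == md5_b64)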

Which Oracle Cloud Infrastructure Object Storage clients support multipart uploads?

Multipart upload functionality is supported by Oracle Object Storage native API, the Oracle Cloud Infrastructure (OCI) Software Development Kits (SDKs), the OCI Command Line Interface (CLI), and the OCI Console.

Public Buckets

What is a public bucket?

A public bucket is a bucket type that enables you to freely share data stored in object storage. Anyone with knowledge of the public bucket name and associated namespace can anonymously read data, list objects, or get object metadata. Anonymous PUT operations to post data to a public bucket are not supported. Buckets are private by default; bucket properties must be explicitly set to make a bucket public.

Because public buckets support anonymous data access, be careful and deliberate when creating public buckets. We encourage you to err on the side of caution and use public buckets only when absolutely necessary. Though public buckets are a powerful means to widely share data, there is a security tradeoff. Since anyone can anonymously access data stored in a public bucket, there is no visibility or control over who is accessing your stored data. Often, Oracle Cloud Infrastructure Identity and Access Management rules or pre-authenticated requests can be a good substitute for public buckets.

How do I create a public bucket?

You can create public buckets using the API, SDK, CLI, or the Oracle Cloud Infrastructure Console. Public buckets are created like any other bucket; the difference is that you set the attribute 'publicAccessType' to 'ObjectRead'. By default, this attribute is set to 'NoPublicAccess'. You can set the value of this attribute when creating the bucket, or afterward by updating the bucket.
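
For example, with the Python SDK the attribute corresponds to the public_access_type field on the bucket details models; the bucket name and compartment OCID below are placeholders.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Create a bucket that allows anonymous reads.
details = oci.object_storage.models.CreateBucketDetails(
    name="public-downloads",
    compartment_id="ocid1.compartment.oc1..example",
    public_access_type="ObjectRead",        # default is "NoPublicAccess"
)
client.create_bucket(namespace, details)

# Or flip an existing bucket between public and private later.
client.update_bucket(
    namespace, "public-downloads",
    oci.object_storage.models.UpdateBucketDetails(public_access_type="NoPublicAccess"),
)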

What Oracle Identity and Access Management (IAM) permissions do I need to possess to create a public bucket?

You need to have been granted the IAM permissions BUCKET_CREATE and BUCKET_UPDATE to create a public bucket.

Can I make a public bucket private and vice versa?

Yes, you can make a public bucket private, and vice versa, by updating the bucket attribute 'publicAccessType'.

Private Buckets

What are private buckets?

When you create an Object Storage bucket, it's created as a private bucket by default. To share data stored in a private bucket with other groups of users, you need to define the pertinent IAM permission for the group.

Can I limit Object Storage buckets to be accessible only from specific virtual cloud networks or subnets?

Yes, you can define IAM policies on buckets such that requests are only authorized if they originate from a specific VCN or a CIDR block within that VCN. However, you will need to use Oracle Cloud Infrastructure Service Gateway or Private Endpoint to access Object Storage in addition to the IAM policy. Access to these buckets from instances with public IP addresses through Internet Gateway will be blocked.

Review Managing Network Resources for details on managing access and allowing only the resources in a specific VCN to read/write objects to a particular Object Storage bucket. For more information, review the Service Gateway documentation and Private Endpoint documentation.

Can I use a private IP address to access Object Storage buckets?

Yes, you can use a Private Endpoint to allow access to Object Storage data via a private IP address inside your VCN. You can also restrict which Object Storage bucket(s) can be accessed via the Private Endpoint.

Compared to OCI Service Gateway, OCI Private Endpoint offers the following benefits:

  • Private IP address for data privacy
  • Access limited to Object Storage service only
  • Ability to restrict access to one or more Object Storage buckets
  • Seamless Private Endpoint lifecycle management from Object Storage

For more information, review Managing Private Endpoints in Object Storage.

Pre-Authenticated Requests

What are pre-authenticated requests (PAR)?

Pre-authenticated requests (PARs) offer a mechanism by which you can share data stored in object storage with a third party. PARs eliminate the need to access the object storage data using programmatic interfaces, such as the API, SDK, or the CLI. PARs can be defined both on buckets and objects. Using tools such as cURL or wget on the PAR will enable you to access data stored in the object storage. You can also use PARs to receive data from anyone. The data received via PARs is posted to an object storage bucket, specified at the time of PAR creation.

When you create a PAR, a unique PAR URL is generated. Anyone with access to this URL can access the resources identified in the pre-authenticated request. PARs have an expiration date, which determines the length of time the PAR stays active. Once a PAR expires, it can no longer be used. PAR_MANAGE permissions are required to create and manage PARs. Read and/or write privileges are required for the object storage resource that you are creating a PAR on. Once created, you can list PARs per object storage bucket and delete them if necessary to preempt the PAR expiration date.

When should I use pre-authenticated requests?

You should use PARs when you need to share or receive data from a third party. PARs are useful when the third party cannot, or does not wish to, use normal object storage interfaces like the APIs, SDK, or the CLI to access data. They can use off-the-shelf HTTP tools like cURL.

Be careful when creating and sharing PARs. Once created, anyone who has access to the PAR URL can access the specified object storage resource. There is no obvious way to determine if the PAR usage is being driven by an authorized or unauthorized user.

How can I create pre-authenticated requests?

You can create a PAR using the Oracle Cloud Infrastructure service console or via the Oracle Cloud Infrastructure SDKs and/or CLI. When creating a PAR, you'll need to specify the object storage resource (object or bucket), actions the end user can take, and how long the PAR is valid.
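
A minimal sketch with the Python SDK follows; the bucket, object, PAR name, and region are illustrative. The service returns a relative access URI, which you append to your regional Object Storage endpoint to form the shareable URL.

import datetime
import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# A read-only PAR on a single object, valid for 7 days.
details = oci.object_storage.models.CreatePreauthenticatedRequestDetails(
    name="share-report",
    object_name="reports/q1.pdf",
    access_type="ObjectRead",
    time_expires=datetime.datetime.now(datetime.timezone.utc)
                 + datetime.timedelta(days=7),
)
par = client.create_preauthenticated_request(namespace, "example-bucket", details).data

# Share the full URL: regional endpoint + relative access URI.
print("https://objectstorage.us-phoenix-1.oraclecloud.com" + par.access_uri)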

Which object storage resources can I define pre-authenticated requests on?

You can define PARs on buckets and objects. PARs defined on a bucket can be used to receive data; PARs defined on objects can be used both to send and receive data.

What Oracle Identity and Access Management permissions do I need to possess in order to create and manage pre-authenticated requests?

You need to have PAR_MANAGE permissions to create and manage PARs. Additionally, you can only create PARs on resources you have permissions to access. For example, if you wish to create a PUT PAR on a bucket, you need permission to write data in that specific bucket. If you are creating a GET PAR on an object, you need permission to read the specific object you intend to share. If your object storage permissions are altered after the PAR was created and shared, the PAR will stop working, regardless of the expiration date associated with the PAR.

How many pre-authenticated requests can I create per bucket or object?

There is no limit on the number of PARs that can be created on a bucket or object.

Can I manage PARs after I generate the PAR URLs?

Yes, once created, PARs can easily be managed. You can list PARs created on buckets and objects. You can also delete PARs, regardless of whether the PAR is active or expired. Once a PAR is deleted, the PAR URL will immediately stop working. PAR URLs will also stop working if permissions of the user that created the PAR change such that they no longer have access to the specified target resource.
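
The following Python SDK sketch lists the PARs on a bucket and then deletes one of them; the bucket name is illustrative.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# List the PARs defined on a bucket.
pars = client.list_preauthenticated_requests(namespace, "example-bucket").data
for p in pars:
    print(p.id, p.name, p.access_type, p.time_expires)

# Delete a PAR before it expires; its URL stops working immediately.
client.delete_preauthenticated_request(namespace, "example-bucket", pars[0].id)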

Can I update pre-authenticated requests?

No, update operations on PARs are not supported. You cannot extend the expiration date of a PAR or modify the operation defined on it. You will need to create a new PAR if you wish to make any changes.

What happens to a previously created pre-authenticated request when the password of the user who created the PAR changes?

Nothing. One of the benefits of pre-authenticated requests is that they are decoupled from Oracle Cloud Infrastructure user account credentials. Changing passwords has no impact on the validity of the PAR.

How secure are pre-authenticated requests?

Pre-authenticated requests are generally a secure means of sharing data. Pre-authenticated requests can only be created by users who have permissions to create such requests. Furthermore, the user creating the request must be allowed to perform the action the request is permitting.

For example, a user generating a pre-authenticated request for uploading an object must have both OBJECT_CREATE and PAR_MANAGE permissions in the target compartment. If the user who created the request loses the OBJECT_CREATE permission after creating the request, then the request will no longer function.

Be careful when sharing a PAR URL. Make sure that only the intended user gains access to it. Anyone who has access to the PAR URL is automatically granted access to the object storage resource specified in the PAR. There is no obvious way to determine whether the PAR usage came from an authorized or unauthorized user.

Can I create a PAR on a public bucket?

Yes, you can create PARs on a public bucket.

If I create a PAR on a bucket that was initially private and was then updated to become a public bucket, will it continue to work as expected?

Yes, the PAR continues to work if a bucket transitions from private to public, and vice versa.

Can I retire PARs before they expire?

Yes. You can retire PARs before the expiration date by deleting the PAR. Once deleted, the PAR URL stops working immediately.

How can I create PARs that do not expire?

To create a PAR that theoretically does not expire, set a PAR expiration date that is far out in the future.

How can I track PAR operations?

All PAR create and manage operations are logged in the Audit service. Viewing audit logs provides visibility into all PAR management operations performed. PAR access operations can be logged by enabling the optional Object Storage service logs.

Object Lifecycle Management

What is Object Lifecycle Management?

Object lifecycle management lets you manage the lifecycle of your Object Storage data through automated archiving and deletion, reducing storage costs and saving time. Lifecycle management works by creating a set of rules for a bucket (a lifecycle policy) that archive or delete objects depending on their age. You can narrow the scope of individual lifecycle policy rules by using object name prefix matching criteria. This allows you to create a lifecycle policy that is customized for the needs of different objects within a bucket. For example, you can create a lifecycle policy that automatically migrates objects containing the name prefix "ABC" from standard Object Storage to Archive Storage 30 days after the data was created, and then delete the data 120 days after it was created. If you later decide to keep the archived data for a longer period, you can edit the individual lifecycle policy rule controlling the length of time that qualifying archived objects are retained, while leaving the other lifecycle policy rules unchanged.

How do I create lifecycle policies on my bucket?

You can define lifecycle policies on a bucket using the Oracle Cloud Infrastructure Service Console, CLI, SDK or the API. One lifecycle policy can be defined per bucket, and each lifecycle policy can have up to 1000 rules. Each rule corresponds to an action (archive or delete) that can be executed on objects in the bucket. You can create rules that apply to all objects in the bucket, or only to a subset of objects that use a specific name prefix pattern.
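
The following Python SDK sketch creates a policy matching the example described above (archive objects with the "ABC" prefix after 30 days, delete them after 120 days); the bucket name is illustrative, and, as noted below, later updates must resubmit the full rule list.

import oci
from oci.object_storage.models import (
    PutObjectLifecyclePolicyDetails, ObjectLifecycleRule, ObjectNameFilter,
)

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Two rules scoped to the "ABC" name prefix: archive at 30 days, delete at 120.
rules = [
    ObjectLifecycleRule(name="archive-abc", action="ARCHIVE", time_amount=30,
                        time_unit="DAYS", is_enabled=True,
                        object_name_filter=ObjectNameFilter(inclusion_prefixes=["ABC"])),
    ObjectLifecycleRule(name="delete-abc", action="DELETE", time_amount=120,
                        time_unit="DAYS", is_enabled=True,
                        object_name_filter=ObjectNameFilter(inclusion_prefixes=["ABC"])),
]
client.put_object_lifecycle_policy(
    namespace, "example-bucket", PutObjectLifecyclePolicyDetails(items=rules))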

Can I define a lifecycle policy on an Archive Storage bucket?

Yes, you can create lifecycle policies on an Archive Storage bucket. However, only 'Delete' rules are supported. Archived objects cannot be migrated from Archive Storage to standard Object Storage using a lifecycle policy. Note, when creating lifecycle policy rules, Archive Storage has a minimum retention requirement of 90 days. If your lifecycle policy deletes archived data that has not met the retention requirement, you may incur a deletion penalty. The deletion penalty is the prorated cost of storing the data for the full 90 days.

Can I disable rules defined in a lifecycle policy?

Yes, you can disable or re-enable rules defined in a lifecycle policy.

Can I add rules to the lifecycle policy after it was created?

Yes, you can add rules to an existing lifecycle policy. When adding, removing, or changing individual lifecycle policy rules using the CLI, SDK or API, you must provide an edited version of the entire lifecycle policy (including the unchanged rules) in your update. See the documentation for more details.

Do lifecycle policies apply to data uploaded to the bucket before the policy was created?

Yes, lifecycle policies apply to data uploaded to the Object Storage bucket before the policy was created. For example, if a lifecycle policy rule is implemented that archives all objects over 30 days old, and the bucket contains objects that are 40 days old, those objects will be identified immediately by the service as candidates for archiving, and the archiving process will begin.

How are conflicting lifecycle rules evaluated for execution?

Rules are evaluated for conflicts at runtime. Rules that delete objects always take priority over rules that would archive the same objects.

Cross-Region Copy

What is cross-region copy?

Cross-region copy lets you asynchronously copy objects to other buckets in the same region, to buckets in other regions, or to buckets in other tenancies (in the same region or in other regions). When copying an object, you can keep the same name or modify it. The object copied to the destination bucket is considered a new object with unique ETag values and MD5 hashes.

How does cross-region copy work?

You can use the Oracle Cloud Infrastructure service console, CLI, SDK, or Object Storage API to copy objects between regions. You must specify the source object name, destination namespace, destination region, and destination bucket to copy an object. The copy is asynchronous, meaning that Object Storage processes copy requests as resources become available, using a queue to manage your copy requests. When you submit a copy request, a work request ID is generated. You can query the work request to monitor the copy status of your object. Work requests can also be canceled using the API, CLI, or an SDK. A canceled work request aborts the copy operation.
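
A minimal sketch of a single cross-region copy with the Python SDK follows; the bucket and object names are illustrative, and the returned work request ID is used to track the asynchronous copy.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Copy one object from a Phoenix bucket to an Ashburn bucket.
details = oci.object_storage.models.CopyObjectDetails(
    source_object_name="data/2024-01.parquet",
    destination_region="us-ashburn-1",
    destination_namespace=namespace,
    destination_bucket="analytics-replica",
    destination_object_name="data/2024-01.parquet",
)
resp = client.copy_object(namespace, "analytics-primary", details)

# The copy is asynchronous; poll the work request to track its status.
work_request_id = resp.headers["opc-work-request-id"]
print(client.get_work_request(work_request_id).data.status)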

Can objects be copied to buckets in any Oracle Cloud Infrastructure region?

Yes, objects can be copied between any two available Oracle Cloud Infrastructure regions. However, the user initiating the copy must have the required IAM permissions to read and write data in both the source and the destination regions.

Will the copy operation preserve the custom metadata defined on the source object?

Yes, when you copy objects, by default the metadata of the source object is preserved. However, using the API, the CLI, or an SDK, you can optionally modify or delete the object metadata as a part of the copy operation.

Can I copy objects from a standard object storage bucket to an archive storage bucket and vice versa?

Yes, you can copy objects between standard object storage and archive storage buckets. However, before you can copy an object from an archive storage bucket, you must restore the object.

Can objects be copied between buckets in the same region?

Yes, objects can be copied between buckets in the same region.

When an object is copied, will the MD5 hashes of the source and destination objects match?

The MD5 hash of the destination object may not match the MD5 hash of the source object. This is because the Object Storage service may use a chunk size for the destination object that differs from the one used to originally upload the source object.

Can I use the cross-region copy functionality to copy multiple objects at once?

No, you can only use the cross-region copy feature to copy one object at a time. However, using the CLI or SDK, you can script bulk copy operations from the source to the destination bucket.
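
As a sketch of that kind of scripting, the same loop can be written with the Python SDK; the bucket names and prefix are illustrative, and a real script would also handle pagination of the listing.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Queue one copy request per object under a prefix (first page only).
objects = client.list_objects(namespace, "analytics-primary", prefix="data/").data.objects
for obj in objects:
    client.copy_object(namespace, "analytics-primary",
        oci.object_storage.models.CopyObjectDetails(
            source_object_name=obj.name,
            destination_region="us-ashburn-1",
            destination_namespace=namespace,
            destination_bucket="analytics-replica",
            destination_object_name=obj.name,
        ))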

Amazon S3 Compatibility API

What is the Amazon S3 Compatibility API?

The Amazon S3 Compatibility API is a set of Object Storage APIs that let you build products and services that interoperate with other storage services, such as Amazon S3.

What are the benefits of the Amazon S3 Compatibility API?

The benefits of the Amazon S3 API include:

  • Not being locked into a single vendor storage service
  • The ability to continue using your favorite client, application, or service that leverages the Amazon S3 API with Oracle Object Storage

Do you support all of the available Amazon S3 APIs?

No, not all of the available Amazon S3 APIs are supported. See the Amazon S3 Compatibility API documentation for a complete list of currently supported Amazon APIs.

Will Oracle Object Storage continue to support multiple APIs, or standardize on a single API?

Oracle Object Storage will continue to support both the native Object Storage API and the Amazon S3 Compatibility API. The Amazon S3 Compatibility API exists to promote interoperability with other cloud storage platforms. If you want to use all available Oracle Object Storage features, we recommend using the native Object Storage API.

When should I use the Amazon S3 Compatibility API?

You should consider using the Amazon S3 Compatibility API if you wish to use a specific client or application to access the Object Storage service, while leveraging Amazon S3-like APIs. You should also consider using the Amazon S3 Compatibility API if you need your product or service to interoperate with multiple Amazon S3-like object storage targets.

Will there be feature parity across the native Object Storage API and the Amazon S3 Compatibility API?

No, feature parity is not guaranteed across the two sets of APIs. All new Object Storage features will be supported with the native API first, and then opportunistically with the Amazon S3 Compatibility API.

If I write data using the Amazon S3 Compatibility API, can I read it back using the Object Storage Native API, and vice versa?

Yes, the two API sets are interoperable. If data is written to Oracle Object Storage using the Amazon S3 Compatibility API, it can be read back using the native Object Storage API, and the converse applies when writing with the native API and reading with the Amazon S3 Compatibility API.

How do the Amazon S3 compatibility APIs incorporate the concept of compartments, a concept unique to Oracle Cloud Infrastructure?

All buckets created using the Amazon S3 Compatibility API are created in the Oracle Cloud Infrastructure “root” compartment. However, if creating buckets in the root compartment is not acceptable, you can use the Console or the Command Line Interface (CLI) to create a bucket in a compartment of your choice. You can then operate on the bucket using the Amazon S3 Compatibility API.

How does authentication work with the Amazon S3 compatibility API?

To use the APIs, you need to create an Amazon S3 Compatibility Access Key/Secret Key pair using the Oracle Cloud Infrastructure console. This Access Key/Secret Key combination can then be used with a client of your choice. Note that Oracle Cloud Infrastructure only supports the Signature Version 4 signing mechanism. Each Oracle Identity and Access Management user can have up to two active Access Key/Secret Key pairs at a time.

Does the Amazon S3 Compatibility API support both the virtual hosted style and path style URLs?

No, the Amazon S3 Compatibility API supports only path style URLs.
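
Putting the last two answers together, a hedged boto3 (Python) sketch looks like the following; the namespace, region, bucket, and keys are placeholders, the endpoint URL follows the <namespace>.compat.objectstorage.<region>.oraclecloud.com pattern used by the S3 Compatibility API, and the client is configured for path-style addressing with Signature Version 4.

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    region_name="us-phoenix-1",
    endpoint_url="https://mynamespace.compat.objectstorage.us-phoenix-1.oraclecloud.com",
    aws_access_key_id="<access key from the OCI console>",
    aws_secret_access_key="<secret key from the OCI console>",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

# Standard S3-style calls then target Oracle Object Storage.
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello")
print(s3.get_object(Bucket="example-bucket", Key="hello.txt")["Body"].read())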

Can I reuse buckets I created using the native API or the Oracle Cloud Infrastructure console for work with the Amazon S3 Compatibility API?

Yes, you can reuse buckets created using the native Object Storage API or the console to work with the Amazon S3 Compatibility API.

How does the Oracle Object Storage service handle REST headers that are not supported by Amazon S3 Compatibility API?

If an Amazon S3 API call references unsupported REST headers or header values, those headers or values are ignored while processing the request.

For example, if the header x-amz-server-side-encryption is specified when calling the PUT Object API, the header is ignored because Oracle Object Storage encrypts all objects by default.

How is encryption supported with the Amazon S3 Compatibility API?

All data in Oracle Object Storage is encrypted by default. Encryption headers are ignored when processing the API calls.

Which clients are officially supported with the Amazon S3 Compatibility API?

We have tested the Amazon S3 Compatibility API with the AWS SDK for Java. However, other clients that integrate with an Amazon S3-like API should work with Oracle Object Storage, as long as only the supported APIs are referenced. See the Amazon S3 Compatibility API documentation for a complete list of the Amazon APIs that we currently support.

Can I assign object metadata using the Amazon S3 Compatibility API?

Yes. Object metadata can be assigned when objects are uploaded.

Are object metadata values validated by the Amazon S3 Compatibility API?

No. However, metadata values with trailing whitespace will have the trailing whitespace removed.

Replication

What is Replication?

Replication is an Object Storage feature that asynchronously replicates objects in an Object Storage bucket to another bucket in your tenancy. The destination bucket can be in a remote OCI region or in the same region as the source bucket. A replicated object in the destination bucket is an identical copy of the object in the source bucket with the same name, metadata, eTag, MD5, and version ID.
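
As a sketch, a replication policy can be configured on the source bucket with the Python SDK; the bucket names and destination region below are illustrative, and the IAM policies described later in this section must already be in place.

import oci

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
namespace = client.get_namespace().data

# Replicate new writes in a Phoenix bucket to an Ashburn bucket.
details = oci.object_storage.models.CreateReplicationPolicyDetails(
    name="phx-to-iad",
    destination_region_name="us-ashburn-1",
    destination_bucket_name="logs-replica",
)
client.create_replication_policy(namespace, "logs-primary", details)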

Is Replication performed in near real-time?

Yes. Object Storage constantly monitors Replication source buckets for changes. When changes are found, replication to the destination bucket begins immediately.

Does Replication use encryption?

Yes. Object Storage always encrypts objects when they are stored or transmitted. Replication reads encrypted source objects and transmits them over the network encrypted.

Are IAM policies required for Replication?

Yes. Replication will not work correctly unless the required IAM policies have been created. Additional information is available in the Replication documentation.

Is it possible to limit Replication destinations?

Yes, it is possible. Replication can replicate an object to any "unrestricted" OCI region globally. It is also possible to restrict Replication to particular source and destination regions.

As described in the documentation, the Object Storage service in the Replication source region must be given explicit access to the source and destination buckets to be used. Using existing functionality of OCI Identity and Access Management (IAM), it is possible to limit the permissions granted to the Object Storage service to specific source and destination regions, and optionally specific source and destination buckets. To limit Object Storage Replication to specific source and destination regions, configure a policy like the following example that allows any source bucket in us-phoenix-1 and any destination bucket in us-ashburn-1. When referring to regions in IAM policies, the three-letter region key must be used.

allow service objectstorage-us-phoenix-1 to manage object-family in tenancy where any {request.region='phx', request.region='iad'}

To limit Object Storage Replication from a source bucket called "source_bucket" in us-phoenix-1 to a destination bucket called "destination_bucket" in us-ashburn-1, create an IAM policy like the following:

allow service objectstorage-us-phoenix-1 to manage object-family in tenancy where any {all {request.region='phx', target.bucket.name='source_bucket'}, all {request.region='iad', target.bucket.name='destination_bucket'}}