Oracle Object Storage is a scalable, fully programmable, durable cloud storage service. Developers and IT administrators can use this service to store and easily access an unlimited amount of data at low cost.
With Oracle Object Storage, you can safely and securely store and retrieve data directly from applications or from within the cloud platform, at any time. Oracle Object Storage is agnostic to the data content type and enables a wide variety of use cases. You can send backup and archive data offsite, design Big Data Analytics workloads to generate business insights, or build scale-out web applications. The elasticity of the service enables you to start small and scale applications as they evolve, and you always pay for only what you use.
The Oracle Object Storage service is secure, easy to manage, strongly consistent, and scalable. When a read request is made, Oracle Object Storage serves the most recent copy of the data written to the system. Oracle Object Storage is connected to a high-performing, high-bandwidth network, with compute and Object Storage resources co-located on the same network. This means that compute instances running in Oracle Cloud Infrastructure get low-latency access to Object Storage.
Objects: All data, regardless of content type, is stored as objects in Oracle Object Storage. For example, log files, video files, and audio files are all stored as objects.
Bucket: A bucket is a logical container that stores objects. Buckets can serve as a grouping mechanism to store related objects together.
Namespace: A namespace is the logical entity that gives each tenancy its own private bucket namespace. Oracle Cloud Infrastructure Object Storage bucket names are not global: bucket names must be unique within a namespace, but can be repeated across namespaces. Each tenancy is associated with one default namespace (the tenancy name) that spans all compartments.
Oracle Object Storage is designed to be highly durable, providing 99.999999999% (Eleven 9s) of annual durability. It achieves this by storing each object redundantly across three different availability domains for regions with multiple availability domains, and across three different fault domains in regions with a single availability domain. Data integrity is actively monitored using checksums, and corrupt data is detected and automatically repaired. Any loss in data redundancy is detected and remedied, without customer intervention or impact.
Yes, OCI Object Storage uses a variety of storage schemes, including erasure coding. Customers cannot influence the storage scheme used for an object, and the schemes used may change over time.
Oracle Object Storage is highly reliable. The service is designed for 99.9% availability. Multiple safeguards have been built into the platform to monitor the health of the service to guard against unplanned downtime.
Yes. Objects can be tagged with multiple user-specified metadata key-value pairs. See Managing Objects in the Object Storage documentation for more information.
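For illustration, a minimal sketch of attaching user-defined metadata at upload time with the OCI Python SDK, assuming a valid ~/.oci/config; the bucket name, object name, and metadata keys are placeholders:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Attach user-defined metadata key-value pairs at upload time.
with open("report.csv", "rb") as f:
    client.put_object(
        namespace,
        "my-bucket",
        "report.csv",
        f,
        opc_meta={"project": "analytics", "owner": "data-team"},
    )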
You can store an unlimited amount of data in Oracle Object Storage. You can create thousands of buckets per account and each bucket can host an unlimited number of objects. Stored objects can be as small as 0 bytes or as large as 10 TiB. Oracle recommends that you use multipart uploads to store objects larger than 100 MiB. For more information, see Service Limits in the Oracle Cloud Infrastructure documentation.
Oracle Object Storage is a regional service. It can be accessed through a dedicated regional API endpoint.
The native Oracle Cloud Infrastructure Object Storage API endpoints use a consistent URL format of https://objectstorage.<region-identifier>.oraclecloud.com. For example, the native OCI Object Storage API endpoint in US West (us-phoenix-1) is https://objectstorage.us-phoenix-1.oraclecloud.com.
The Swift API endpoints use a consistent URL format of https://swiftobjectstorage.<region-identifier>.oraclecloud.com. For example, the Swift API endpoint in US East (us-ashburn-1) is https://swiftobjectstorage.us-ashburn-1.oraclecloud.com.
The region identifier for all OCI regions can be found at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm.
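If you use an SDK, you typically do not need to construct these URLs by hand. A minimal sketch with the OCI Python SDK, assuming a valid ~/.oci/config, showing the regional endpoint the client resolves:

import oci

config = oci.config.from_file()  # the region comes from the config profile
client = oci.object_storage.ObjectStorageClient(config)

print(client.base_client.endpoint)  # e.g. https://objectstorage.us-phoenix-1.oraclecloud.com
print(client.get_namespace().data)  # the Object Storage namespace for your tenancy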
Oracle Object Storage is available in all Oracle Cloud Infrastructure regions and data is stored within those regions. Customers have the flexibility to choose the specific region where data will reside. You can find more information on available regions and Availability Domains here.
Oracle Object Storage is highly secure. It is tightly integrated with Oracle Cloud Infrastructure Identity and Access Management. By default, only authenticated users that have explicitly been granted access to specific resources can access data stored in Oracle Object Storage. Data is uploaded and downloaded from Oracle Object Storage over SSL endpoints using the HTTPS protocol. All stored data is encrypted, by default. For an additional layer of security, you can encrypt objects prior to sending them to Oracle Object Storage. That gives you total control over not only your data, but also the encryption keys that are used to encrypt the data.
OCI Object Storage supports object-level permissions in addition to compartment-level and bucket-level permissions. Object-level permissions protect data in shared buckets from unauthorized users, providing an extra level of security.
Oracle Cloud Infrastructure Identity and Access Management (IAM) offers a consistent set of policies across all OCI services, allowing you to create, apply, and centrally manage detailed permissions at various levels.
Yes, you can use Oracle Object Storage as the primary data repository for big data, which means you can run big data workloads on Oracle Cloud Infrastructure. The Object Storage HDFS connector provides connectivity to multiple popular big data analytics engines, enabling them to work directly with data stored in Oracle Cloud Infrastructure Object Storage. You can find more information on the HDFS connector here.
You can access Oracle Object Storage from anywhere, as long as you have an internet connection and the required permissions to access the service. Latency varies depending on where you access the service from, increasing with distance, all else being equal. For example, if data is stored in the US West region, the latency for accessing data from Nevada will be lower than for accessing the same data from London or New York.
No, deleted and overwritten data cannot be recovered.
However, when Object Versioning is enabled on a bucket, data is not lost when an object is overwritten or when a versioning-unaware delete operation is performed. In both cases, the previous contents of the object are saved as a previous version of the object. Previous versions can be accessed or restored at any time and must be explicitly removed by a Lifecycle Policy or with a versioning-aware delete operation. Object Versioning must be enabled at the time of delete or overwrite to protect data.
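As a hedged sketch, Object Versioning can be enabled on an existing bucket with the OCI Python SDK by updating the bucket's versioning attribute; the bucket name below is a placeholder:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Turn on versioning so overwrites and deletes preserve previous versions.
client.update_bucket(
    namespace,
    "my-bucket",
    oci.object_storage.models.UpdateBucketDetails(versioning="Enabled"),
)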
No, you do not need to back up data stored in Oracle Cloud Infrastructure Object Storage. Oracle Object Storage is an inherently highly durable storage platform. All objects are stored redundantly on multiple storage servers, across multiple availability domains, within a region. Data integrity is constantly monitored using checksums, and corrupt data is self-healed. These native durability characteristics virtually eliminate the need for traditional backups.
You can use Oracle Object Storage as a destination for your backups, regardless of whether the backup originates in the cloud or in an on-premises data center. Oracle Cloud Infrastructure Block Volumes backups are stored by default in Oracle Cloud Infrastructure Object Storage.
You can also direct your Oracle RMAN backups to Object Storage via the Swift API integration. For Oracle RMAN, you need to use the correct Swift API endpoint. Swift API endpoints use a consistent URL format of https://swiftobjectstorage.<region-identifier>.oraclecloud.com. For example, the Swift API endpoint in US East (us-ashburn-1) is https://swiftobjectstorage.us-ashburn-1.oraclecloud.com.
The region identifier for all OCI regions can be found at https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm.
Exposing buckets as NFS/SMB mount points on bare metal compute instances is not supported. Currently, you can access Oracle Object Storage using the native APIs, SDKs, or the HDFS connector.
Oracle Object Storage is available as a pay-as-you-go service, charged based on usage. Full pricing details for Oracle Cloud Infrastructure Object Storage can be found here.
You can find Object Storage IP address ranges in the Object Storage product documentation.
The Oracle Cloud Infrastructure Object Storage API and Swift API do return Cross-Origin Resource Sharing (CORS) headers; however, the returned headers are fixed and cannot be edited. The Amazon S3 Compatibility API does not return CORS headers.
Yes. Oracle Object Storage supports server-side encryption. All data stored in Oracle Object Storage is automatically encrypted. Customers can also use Server-Side Encryption with Customer-Provided Keys (SSE-C) or a master encryption key from Vault if they choose.
Encryption is automatically enabled for all data with no action required on the part of customers.
There is nothing specific that you need to do to decrypt the data. You can continue making normal HTTPS GET requests to retrieve the data.
Yes. The encryption keys are rotated frequently based on a rigorous internal policy.
Yes, we support client-side encryption. You can encrypt the data prior to sending it to Oracle Object Storage. Sending encrypted data enables you to have full control over your encryption keys and provides a second line of defense against unintended and unauthorized data access. To help in this area, Oracle has released SDK enhancements for Client-Side Encryption.
Yes. We encrypt both the object data and the user-defined metadata associated with the object.
We use the 256-bit Advanced Encryption Standard (AES-256) to encrypt all data and encryption keys. AES-256 is considered one of the strongest encryption algorithms available today.
To upload large objects to Oracle Object Storage, consider using multipart upload. A multipart upload transfers an object's parts in parallel, which is faster and more efficient than uploading a large object in a single stream. If a multipart upload fails for any reason, instead of restarting the entire object upload, you only need to retry uploading the parts that failed. Consider using multipart upload for all objects larger than 100 MiB.
The OCI Command Line Interface and OCI Console will perform multipart uploads for you automatically. More information about multipart uploads is available at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingmultipartuploads.htm.
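For example, the OCI Python SDK provides an UploadManager that splits large files into parts and uploads them in parallel. A minimal sketch, assuming a valid ~/.oci/config; the bucket, object, and file names are placeholders:

import oci
from oci.object_storage import UploadManager

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# UploadManager switches to multipart upload for large files and
# uploads the parts in parallel.
manager = UploadManager(client, allow_parallel_uploads=True)
manager.upload_file(
    namespace,
    "my-bucket",
    "large-object.bin",
    "/path/to/large-object.bin",
    part_size=128 * 1024 * 1024,  # 128 MiB parts
)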
Yes. When you initiate the multipart upload, you can specify the metadata you want to associate with the object. When the object is committed, after all constituent parts are uploaded, the metadata will be associated with the composed object.
An object can be divided into a maximum of 10,000 parts. Each part must be at least 10 MiB in size. The upper size limit on an object part is 50 GiB. We recommend that you consider using multipart upload to upload objects greater than 100 MiB in size. Regardless of the total number of parts an object has been divided into, the total size of an object cannot exceed 10 TiB.
Yes, you can retry uploading a part when the upload fails for any reason. You must provide the correct upload ID and part number when reinitiating the upload.
Yes, you can replace a part after it has been uploaded, but only if the object has not been committed yet. To replace an object part in a multipart upload, make sure that the correct upload ID and part number are used to reinitiate the upload.
Yes, you can pause and resume an object upload. However, if the upload of a constituent part is in progress, you must let Oracle Object Storage finish uploading that part; pausing and resuming in-progress part uploads is not supported.
No, you cannot 'GET' or 'LIST' the uploaded parts of an object once the multipart upload is complete and the object has been committed. To retrieve a part of the object, you will need to use a Range GET request, which is distinct and separate from multipart upload functionality.
No. It is not possible to determine the part sizes used after a multipart upload has been committed and the parts have been assembled into an object.
No, the object parts cannot be reordered. The part number determines the sequential order in which parts are committed to the object.
No, you cannot re-purpose parts of an object to compose another object. An object can only be composed of object parts that share an upload ID.
If multiple object parts are uploaded using the same part number, the part that was uploaded last takes precedence and is used to compose the object.
If an upload is initiated, but never completed, Oracle Object Storage maintains the parts in its inventory until you explicitly abort the multipart upload. Oracle Object Storage charges for storage of the object parts regardless of whether or not the object has been committed. You can list active uploads and then decide which uploads to abort. Deleting active uploads deletes all uploaded parts and frees storage space. You can also configure Object Lifecycle Management rules to automatically remove uncommitted or failed multipart uploads.
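A hedged sketch of such a lifecycle rule with the OCI Python SDK; note that put_object_lifecycle_policy replaces the bucket's entire policy, and the bucket name and retention window below are placeholders:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Abort uncommitted multipart uploads older than 7 days and free their parts.
rule = oci.object_storage.models.ObjectLifecycleRule(
    name="abort-stale-multipart-uploads",
    target="multipart-uploads",
    action="ABORT",
    time_amount=7,
    time_unit="DAYS",
    is_enabled=True,
)
client.put_object_lifecycle_policy(
    namespace,
    "my-bucket",
    oci.object_storage.models.PutObjectLifecyclePolicyDetails(items=[rule]),
)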
Yes, you can terminate an in-progress multipart upload by aborting the process. Aborting a multipart upload deletes all object parts associated with a specific upload ID.
No, you cannot append parts to an object after the upload has been committed.
Yes, you can skip part numbers when uploading parts. Part numbers do not need to be contiguous.
No, you cannot specifically delete uploaded parts associated with an active multipart upload. However, you can choose to exclude uploaded parts when committing the object. These excluded parts are automatically deleted.
Oracle Object Storage treats the upload of an object part as it would a normal object upload. You can verify that an object was not unintentionally corrupted by sending the MD5 hash of the object part or by capturing the MD5 hash that is returned in the response to the request. When the upload is committed, you will also receive an MD5 hash of the MD5 hashes of the individual parts that constitute the object. This MD5 hash can be used to validate the integrity of the object as a whole.
Multipart upload functionality is supported by Oracle Object Storage native API, the Oracle Cloud Infrastructure (OCI) Software Development Kits (SDKs), the OCI Command Line Interface (CLI), and the OCI Console.
A public bucket is a bucket type that enables you to freely share data stored in Object Storage. Anyone with knowledge of the public bucket name and associated namespace can anonymously read data, list objects, or get object metadata. Anonymous PUT operations to post data to a public bucket are not supported. Buckets are private by default; bucket properties must be explicitly set to make a bucket public.
Because public buckets support anonymous data access, be careful and deliberate when creating them. We encourage you to err on the side of caution and use public buckets only when absolutely necessary. Though public buckets are a powerful means of widely sharing data, there is a security tradeoff: since anyone can anonymously access data stored in a public bucket, you have no visibility into, or control over, who is accessing your stored data. Oftentimes, Oracle Cloud Infrastructure Identity and Access Management rules or pre-authenticated requests are a good substitute for public buckets.
You can create public buckets using the API, SDK, CLI, or the Oracle Cloud Infrastructure console. A public bucket is created like any other bucket; the only difference is that you set the attribute 'publicAccessType' to 'ObjectRead'. By default, the value of this attribute is 'NoPublicAccess'. You can set the value when creating the bucket, or after the fact by updating the bucket.
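For illustration, a minimal sketch of creating a public bucket with the OCI Python SDK, assuming a valid ~/.oci/config; the bucket name and compartment OCID are placeholders:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

bucket = client.create_bucket(
    namespace,
    oci.object_storage.models.CreateBucketDetails(
        name="my-public-bucket",
        compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
        public_access_type="ObjectRead",  # default is NoPublicAccess
    ),
).data

To make an existing bucket private again, call update_bucket with UpdateBucketDetails(public_access_type="NoPublicAccess").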
You need to have been granted the IAM permissions BUCKET_CREATE and BUCKET_UPDATE to create a public bucket.
Yes, you can make a public bucket private, and vice versa, by updating the bucket attribute 'publicAccessType'.
When you create an Object Storage bucket, it's created as a private bucket by default. To share data stored in a private bucket with other groups of users, you need to define the pertinent IAM permission for the group.
Yes, you can define IAM policies on buckets such that requests are only authorized if they originate from a specific VCN or a CIDR block within that VCN. However, you will need to use Oracle Cloud Infrastructure Service Gateway or Private Endpoint to access Object Storage in addition to the IAM policy. Access to these buckets from instances with public IP addresses through Internet Gateway will be blocked.
Review Managing Network Resources for details on managing access and allowing only the resources in a specific VCN to read/write objects to a particular Object Storage bucket. For more information, review the Service Gateway documentation and Private Endpoint documentation.
Yes, you can use a Private Endpoint to allow access to Object Storage data via a private IP address inside your VCN. You can also restrict which Object Storage bucket(s) can be accessed via the Private Endpoint.
For more information on the benefits of OCI Private Endpoint compared to OCI Service Gateway, review Managing Private Endpoints in Object Storage.
Pre-authenticated requests (PARs) offer a mechanism by which you can share data stored in Object Storage with a third party. PARs eliminate the need to access Object Storage data using programmatic interfaces such as the API, SDK, or CLI; using tools such as cURL or wget with the PAR URL is enough to access the data. PARs can be defined on both buckets and objects. You can also use PARs to receive data from anyone: data received via a PAR is posted to an Object Storage bucket specified at the time of PAR creation.
When you create a PAR, a unique PAR URL is generated. Anyone with access to this URL can access the resources identified in the pre-authenticated request. PARs have an expiration date, which determines the length of time the PAR stays active. Once a PAR expires, it can no longer be used. PAR_MANAGE permissions are required to create and manage PARs. Read and/or write privileges are required for the object storage resource that you are creating a PAR on. Once created, you can list PARs per object storage bucket and delete them if necessary to preempt the PAR expiration date.
You should use PARs when you need to share or receive data from a third party. PARs are useful when the third party cannot, or does not wish to, use normal object storage interfaces like the APIs, SDK, or the CLI to access data. They can use off-the-shelf HTTP tools like cURL.
Be careful when creating and sharing PARs. Once created, anyone who has access to the PAR URL can access the specified object storage resource. There is no obvious way to determine if the PAR usage is being driven by an authorized or unauthorized user.
You can create a PAR using the Oracle Cloud Infrastructure service console or via the Oracle Cloud Infrastructure SDKs and/or CLI. When creating a PAR, you'll need to specify the object storage resource (object or bucket), actions the end user can take, and how long the PAR is valid.
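A minimal sketch with the OCI Python SDK, assuming PAR_MANAGE and read permission on the target object; the bucket, object, and PAR names and the expiration window are placeholders:

import oci
from datetime import datetime, timedelta, timezone

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

par = client.create_preauthenticated_request(
    namespace,
    "my-bucket",
    oci.object_storage.models.CreatePreauthenticatedRequestDetails(
        name="share-monthly-report",
        object_name="reports/2024-01.csv",
        access_type="ObjectRead",  # read-only access to a single object
        time_expires=datetime.now(timezone.utc) + timedelta(days=7),
    ),
).data

# The shareable URL is the regional endpoint plus the returned access URI.
print(client.base_client.endpoint + par.access_uri)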
You can define PARs on buckets and objects. PARs defined on a bucket can be used to receive data; PARs defined on objects can be used both to send and receive data.
You need to have PAR_MANAGE permissions to create and manage PARs. Additionally, you can only create PARs on resources you have permissions to access. For example, if you wish to create a PUT PAR on a bucket, you need permission to write data in that specific bucket. If you are creating a GET PAR on an object, you need permission to read the specific object you intend to share. If your object storage permissions are altered after the PAR was created and shared, the PAR will stop working, regardless of the expiration date associated with the PAR.
There is no limit on the number of PARs that can be created on a bucket or object.
Yes, once created, PARs can easily be managed. You can list PARs created on buckets and objects. You can also delete PARs, regardless of whether the PAR is active or expired. Once a PAR is deleted, the PAR URL will immediately stop working. PAR URLs will also stop working if permissions of the user that created the PAR change such that they no longer have access to the specified target resource.
No, update operations on PARs are not supported. You cannot extend the expiration date of a PAR or modify the operation it permits. If you wish to make changes, create a new PAR.
Nothing. One of the benefits of pre-authenticated requests is that they are decoupled from Oracle Cloud Infrastructure user account credentials. Changing passwords has no impact on the validity of the PAR.
Pre-authenticated requests are generally a secure means of sharing data. Pre-authenticated requests can only be created by users who have permissions to create such requests. Furthermore, the user creating the request must be allowed to perform the action the request is permitting.
For example, a user generating a pre-authenticated request for uploading an object must have both OBJECT_CREATE and PAR_CREATE permissions in the target compartment. If the user who created the request loses the OBJECT_CREATE permission after creating the request, then the request will no longer function.
Be careful when sharing a PAR URL. Make sure that only the intended user gains access to it. Anyone who has access to the PAR URL is automatically granted access to the object storage resource specified in the PAR. There is no obvious way to determine whether the PAR usage came from an authorized or unauthorized user.
Yes, you can create PARs on a public bucket.
Yes, the PAR continues to work if a bucket transitions from being private to public, and vice versa.
Yes. You can retire PARs before the expiration date by deleting the PAR. Once deleted, the PAR URL stops working immediately.
To create a PAR that theoretically does not expire, set a PAR expiration date that is far out in the future.
All PAR create and manage operations are logged to the Audit service. Viewing audit logs provides visibility into all PAR management operations performed. PAR access operations can be logged by enabling optional service logs for Object Storage.
Object lifecycle management lets you manage the lifecycle of your Object Storage data through automated archiving and deletion, reducing storage costs and saving time. Lifecycle management works by creating a set of rules for a bucket (a lifecycle policy) that archive or delete objects depending on their age. You can narrow the scope of individual lifecycle policy rules by using object name prefix matching criteria. This allows you to create a lifecycle policy that is customized for the needs of different objects within a bucket. For example, you can create a lifecycle policy that automatically migrates objects containing the name prefix "ABC" from standard Object Storage to Archive Storage 30 days after the data was created, and then delete the data 120 days after it was created. If you later decide to keep the archived data for a longer period, you can edit the individual lifecycle policy rule controlling the length of time that qualifying archived objects are retained, while leaving the other lifecycle policy rules unchanged.
You can define lifecycle policies on a bucket using the Oracle Cloud Infrastructure Service Console, CLI, SDK or the API. One lifecycle policy can be defined per bucket, and each lifecycle policy can have up to 1000 rules. Each rule corresponds to an action (archive or delete) that can be executed on objects in the bucket. You can create rules that apply to all objects in the bucket, or only to a subset of objects that use a specific name prefix pattern.
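A hedged sketch of the "ABC" example above using the OCI Python SDK; put_object_lifecycle_policy replaces the bucket's entire policy, so unchanged rules must be included, and the bucket name is a placeholder:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

abc_filter = oci.object_storage.models.ObjectNameFilter(inclusion_prefixes=["ABC"])
rules = [
    oci.object_storage.models.ObjectLifecycleRule(
        name="archive-ABC-after-30-days",
        action="ARCHIVE",
        time_amount=30,
        time_unit="DAYS",
        is_enabled=True,
        object_name_filter=abc_filter,
    ),
    oci.object_storage.models.ObjectLifecycleRule(
        name="delete-ABC-after-120-days",
        action="DELETE",
        time_amount=120,
        time_unit="DAYS",
        is_enabled=True,
        object_name_filter=abc_filter,
    ),
]
client.put_object_lifecycle_policy(
    namespace,
    "my-bucket",
    oci.object_storage.models.PutObjectLifecyclePolicyDetails(items=rules),
)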
Yes, you can create lifecycle policies on an Archive Storage bucket. However, only 'Delete' rules are supported. Archived objects cannot be migrated from Archive Storage to standard Object Storage using a lifecycle policy. Note, when creating lifecycle policy rules, Archive Storage has a minimum retention requirement of 90 days. If your lifecycle policy deletes archived data that has not met the retention requirement, you may incur a deletion penalty. The deletion penalty is the prorated cost of storing the data for the full 90 days.
Yes, you can disable or re-enable rules defined in a lifecycle policy.
Yes, you can add rules to an existing lifecycle policy. When adding, removing, or changing individual lifecycle policy rules using the CLI, SDK or API, you must provide an edited version of the entire lifecycle policy (including the unchanged rules) in your update. See the documentation for more details.
Yes, lifecycle policies apply to data uploaded to the Object Storage bucket before the policy was created. For example, if a lifecycle policy rule is implemented that archives all objects over 30 days old, and the bucket contains objects that are 40 days old, those objects will be identified immediately by the service as candidates for archiving, and the archiving process will begin.
Rules are evaluated for conflicts at runtime. Rules that delete objects always take priority over rules that would archive the same objects.
Cross-region copy lets you asynchronously copy objects to buckets in the same region, to buckets in other regions, or to buckets in other tenancies (in the same region or in other regions). When copying objects, you can keep the same name or modify the object name. The object copied to the destination bucket is considered a new object with unique ETag values and MD5 hashes.
You can use the Oracle Cloud Infrastructure service console, CLI, SDK, or Object Storage API to copy objects between regions. You must specify the source object name, destination namespace, destination region, and destination bucket to copy an object. The copy is asynchronous: Object Storage processes copy requests as resources become available, using a queue to manage them. When you submit a copy request, a work request ID is generated. You can query the work request to monitor the copy status of your object. Work requests can also be canceled using the API, CLI, or an SDK; a canceled work request aborts the copy operation.
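For illustration, a minimal copy sketch with the OCI Python SDK; the bucket, object, and region names are placeholders:

import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

response = client.copy_object(
    namespace,
    "source-bucket",
    oci.object_storage.models.CopyObjectDetails(
        source_object_name="logs/app.log",
        destination_namespace=namespace,
        destination_region="us-ashburn-1",
        destination_bucket="backup-bucket",
        destination_object_name="logs/app.log",
    ),
)

# The copy is asynchronous; poll the returned work request to track progress.
print(response.headers["opc-work-request-id"])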
Yes, objects can be copied between any two available Oracle Cloud Infrastructure regions. However, the user initiating the copy must have the required IAM permissions to read and write data in both the source and destination regions.
Yes, when you copy objects, by default the metadata of the source object is preserved. However, using the API, the CLI, or an SDK, you can optionally modify or delete the object metadata as a part of the copy operation.
Yes, you can copy objects between standard object storage and archive storage buckets. However, before you can copy an object from an archive storage bucket, you must restore the object.
Yes, objects can be copied between buckets in the same region.
The MD5 hash of the destination object may not match the MD5 hash of the source object. This is because the Object Storage service may use a chunk size for the destination object that differs from the one used to originally upload the source object.
No, you can use the cross-region copy feature to copy only one object at a time. However, using the CLI, you can script bulk copy operations from a source to a destination bucket.
The Amazon S3 Compatibility API is a set of Object Storage APIs that let you build products and services that interoperate with other storage services, such as Amazon S3.
The main benefit of the Amazon S3 Compatibility API is interoperability: clients and applications written against Amazon S3-like APIs can work with Oracle Object Storage with minimal change.
No, not all of the available Amazon S3 APIs are supported. See the Amazon S3 Compatibility API documentation for a complete list of currently supported Amazon APIs.
Oracle Object Storage will continue to support both the native Object Storage API and the Amazon S3 Compatibility API. The Amazon S3 Compatibility API exists to promote interoperability with other cloud storage platforms. If you want to use all available Oracle Object Storage features, we recommend using the native Object Storage API.
You should consider using the Amazon S3 Compatibility API if you wish to use a specific client or application to access the Object Storage service, while leveraging Amazon S3-like APIs. You should also consider using the Amazon S3 Compatibility API if you need your product or service to interoperate with multiple Amazon S3-like object storage targets.
No, feature parity is not guaranteed across the two sets of APIs. All new Object Storage features will be supported with the native API first, and then opportunistically with the Amazon S3 Compatibility API.
Yes, the two API sets are interoperable. If data is written to Oracle Object Storage using the Amazon S3 Compatibility API, it can be read back using the native Object Storage API, and vice versa.
All buckets created using the Amazon S3 Compatibility API are created in the Oracle Cloud Infrastructure root compartment. If creating buckets in the root compartment is not acceptable, you can use the console or the Command Line Interface (CLI) to create a bucket in a compartment of your choice, and then operate on that bucket using the Amazon S3 Compatibility API.
To use the APIs, you need to create an Amazon S3 Compatibility Access Key/Secret Key pair using the Oracle Cloud Infrastructure console. This Access Key/Secret Key combination can then be used with a client of your choice. Note that Oracle Cloud Infrastructure only supports the Signature Version 4 signing mechanism. You can simultaneously have two active Access Key/Secret Key pairs for each Oracle Identity and Access Management user.
No, the Amazon S3 Compatibility API supports only path style URLs.
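For illustration, a minimal boto3 (Python) sketch using such a key pair with path-style addressing and Signature Version 4; the namespace, region, credentials, and bucket name are placeholders, and the endpoint follows the documented S3-compat format https://<namespace>.compat.objectstorage.<region>.oraclecloud.com:

import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    region_name="us-phoenix-1",
    endpoint_url="https://<namespace>.compat.objectstorage.us-phoenix-1.oraclecloud.com",
    config=Config(signature_version="s3v4", s3={"addressing_style": "path"}),
)

s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
print(s3.get_object(Bucket="my-bucket", Key="hello.txt")["Body"].read())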
Yes, you can reuse buckets created using the native Object Storage API or the console to work with the Amazon S3 Compatibility API.
If an Amazon S3 API call references unsupported REST headers or header values, those headers or values are ignored while processing the request. For example, if you specify the x-amz-server-side-encryption header while calling the PUT Object API, the header is ignored because Oracle Object Storage encrypts all objects by default.
All data in Oracle Object Storage is encrypted by default. Encryption headers are ignored when processing the API calls.
We have tested the Amazon S3 Compatibility API with the AWS SDK for Java. However, other clients that integrate with an Amazon S3-like API should work with Oracle Object Storage, as long as only the supported APIs are referenced. See the Amazon S3 Compatibility API documentation for a complete list of the Amazon APIs that we currently support.
Yes. Object metadata can be assigned when objects are uploaded.
No. However, metadata values with trailing whitespace will have the trailing whitespace removed.
Replication is an Object Storage feature that asynchronously replicates objects in an Object Storage bucket to another bucket in your tenancy. The destination bucket can be in a remote OCI region or in the same region as the source bucket. A replicated object in the destination bucket is an identical copy of the object in the source bucket with the same name, metadata, eTag, MD5, and version ID.
Yes. Object Storage constantly monitors Replication source buckets for changes. When changes are found, replication to the destination bucket begins immediately.
Yes. Object Storage always encrypts objects when they are stored or transmitted. Replication reads encrypted source objects and transmits them over the network encrypted.
Yes. Replication will not work correctly unless the required IAM policies have been created. Additional information is available in the Replication documentation.
Yes, it is possible. Replication can replicate an object to any "unrestricted" OCI region globally. It is also possible to restrict replication to particular source and destination regions.
As described in the documentation, the Object Storage service in the Replication source region must be given explicit access to the source and destination buckets to be used. Using existing functionality of OCI Identity and Access Management (IAM), it is possible to limit the permissions granted to the Object Storage service to specific source and destination regions, and optionally specific source and destination buckets. To limit Object Storage Replication to specific source and destination regions, configure a policy like the following example that allows any source bucket in us-phoenix-1 and any destination bucket in us-ashburn-1. When referring to regions in IAM policies, the three-letter region key must be used.
allow service objectstorage-us-phoenix-1 to manage object-family in tenancy where any {request.region='phx', request.region='iad'}
To limit Object Storage Replication to a source bucket called "source_bucket" in us-phoenix-1 to a bucket called "destination_bucket" in us-ashburn-1 create an IAM policy like the following:
allow service objectstorage-us-phoenix-1 to manage object-family in tenancy where any {all {request.region='phx', target.bucket.name='source_bucket'}, all {request.region='iad', target.bucket.name='destination_bucket'}}