Object Storage limitations and basic usage

What is object storage?

OpenStack Object Storage (swift) provides redundant, scalable data storage on clusters of standardized servers that can hold petabytes of accessible data. It is a long-term storage system for large amounts of static data that can be retrieved and updated.
Object Storage uses a distributed architecture with no central point of control, providing greater scalability, redundancy, and permanence. Objects are written to multiple hardware devices, with the OpenStack software responsible for ensuring data replication and integrity across the cluster.
Storage clusters scale horizontally by adding new nodes. Should a node fail, OpenStack works to replicate its content from other active nodes. Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used in lieu of more expensive equipment.
Object Storage is ideal for cost-effective, scale-out storage. It provides a fully distributed, API-accessible storage platform that can be integrated directly into applications or used for backup, archiving, and data retention.
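
Because the platform is driven entirely by an HTTP API, applications can talk to it directly. As a rough illustration, here is a minimal sketch using the python-swiftclient library; the Keystone endpoint, credentials, and container/object names are placeholders, not values from this article.

    # Minimal sketch: store and retrieve an object with python-swiftclient.
    from swiftclient.client import Connection

    conn = Connection(
        authurl="https://keystone.example.com:5000/v3",  # placeholder endpoint
        user="demo", key="secret", auth_version="3",
        os_options={
            "project_name": "demo",
            "user_domain_name": "Default",
            "project_domain_name": "Default",
        },
    )

    conn.put_container("backups")                      # create a container
    conn.put_object("backups", "notes.txt",            # upload an object
                    contents=b"hello swift",
                    content_type="text/plain")
    headers, body = conn.get_object("backups", "notes.txt")  # read it back
    print(headers["content-type"], body)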


Features and Benefits

Each entry below pairs a feature with its benefit:

  • Leverages commodity hardware: no lock-in, lower price/GB.
  • HDD/node failure agnostic: self-healing and reliable; data redundancy protects from failures.
  • Unlimited storage: large and flat namespace, highly scalable read/write access, able to serve content directly from the storage system.
  • Multi-dimensional scalability: scale-out architecture that scales vertically and horizontally (distributed storage); backs up and archives large amounts of data with linear performance.
  • Account/container/object structure: no nesting and not a traditional file system; optimized for scale, it handles multiple petabytes and billions of objects.
  • Built-in replication (3✕ + data redundancy, compared with 2✕ on RAID): a configurable number of account, container, and object copies for high availability.
  • Easily add capacity (unlike RAID resize): elastic data scaling with ease.
  • No central database: higher performance, no bottlenecks.
  • RAID not required: handles many small, random reads and writes efficiently.
  • Built-in management utilities: account management (create, add, verify, and delete users); container management (upload, download, and verify); monitoring (capacity, host, network, log trawling, and cluster health).
  • Drive auditing: detects drive failures, preempting data corruption.
  • Expiring objects: users can set an expiration time or a TTL on an object to control access (see the sketch after this list).
  • Direct object access: enables direct browser access to content, such as for a control panel.
  • Realtime visibility into client requests: know what users are requesting.
  • Supports S3 API: use tools that were designed for the popular S3 API.
  • Restrict containers per account: limit access to control usage by user.
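
For instance, the expiring-objects feature corresponds to a single header set at upload time. A hedged sketch with python-swiftclient, reusing the conn connection from the earlier example (the names and the 24-hour TTL are placeholders):

    # Sketch: ask Swift to delete the object automatically after 24 hours.
    conn.put_object(
        "backups", "daily-report.csv",
        contents=b"...",                       # placeholder content
        headers={"X-Delete-After": "86400"},   # TTL in seconds; X-Delete-At takes a Unix timestamp instead
    )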

What is it not so good for?

  • It is not a filesystem.
  • There are no directory hierarchies; "directories" can only be simulated with naming prefixes and delimiters (see the sketch after this list).
  • It is not a database.
  • Swift is not a good option if the data needs to be updated in real time.
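
Although there are no real directories, object names may contain slashes, and container listings support prefix and delimiter parameters that present a directory-like view. A sketch of this convention with python-swiftclient, again reusing the conn connection from the first example (names are placeholders):

    # Sketch: emulate directories with "/" in object names plus prefix/delimiter listings.
    conn.put_object("media", "photos/2024/holiday.jpg", contents=b"...")
    conn.put_object("media", "photos/2024/birthday.jpg", contents=b"...")

    headers, entries = conn.get_container("media", prefix="photos/", delimiter="/")
    for entry in entries:
        # Objects are returned under "name"; pseudo-directories under "subdir".
        print(entry.get("name") or entry.get("subdir"))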


Any quota?

Object Storage (swift) uses segmentation to support the upload of large objects. By default, Object Storage limits the size of a single object to 5 GB.
Using segmentation, the size of a single uploaded object is virtually unlimited: the object is split into segments, and a manifest file is created that serves the segments back together as a single object. Segments can also be uploaded in parallel, which improves upload speed.
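
In practice the segmentation is handled by the client tooling. Below is a hedged sketch using python-swiftclient's SwiftService, which splits the file and uploads the manifest automatically; credentials are assumed to come from the usual OS_* environment variables, and the container and file names are placeholders.

    # Sketch: upload a large file in 1 GB segments; SwiftService creates the
    # segments and the manifest object automatically.
    from swiftclient.service import SwiftService, SwiftUploadObject

    with SwiftService(options={"segment_size": 1024 * 1024 * 1024}) as swift:
        for result in swift.upload("backups", [SwiftUploadObject("big-backup.tar")]):
            if not result["success"]:
                print("upload failed:", result.get("error"))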


Official documentation for reference:

Introduction to Object Storage

Features and benefits

Large object support


Basic Usage in the ELITS environment

Create and manage object containers

How to upload large objects?

How to share objects?


Creative Commons Attribution 3.0 License
Except where otherwise noted, this document is licensed under the Creative Commons Attribution 3.0 License.
Changes were made based on the original OpenStack Administrator Guides
