Storage is exciting again. Who would have predicted that 15 years ago? Data underpins everything from artificial intelligence (AI) to the more than 500 hours of video uploaded to YouTube every minute, and all of it has to be stored in a manageable, scalable way while remaining accessible.
Software-defined object storage has come to the rescue. It scales out with virtually no limit and no dependency on location or physical components. Object storage does away with the file hierarchy in favor of objects described by rich metadata, stored in a flat namespace rather than as files or blocks. This model is well suited to a continuous influx of data that must be managed for both real-time access and archival purposes.
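As a minimal illustration of that model, the sketch below stores an object with descriptive metadata through the S3 API. The endpoint, bucket name, and metadata fields are hypothetical placeholders, not any particular vendor's configuration:

```python
import boto3

# Connect to an S3-compatible object store. The endpoint here is a
# placeholder for whatever your deployment actually exposes.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# Instead of a path in a directory tree, the object lives in a flat
# namespace under a key, with descriptive metadata attached directly.
s3.put_object(
    Bucket="video-archive",            # hypothetical bucket
    Key="uploads/2024/clip-0001.mp4",  # key, not a filesystem path
    Body=b"...video bytes...",
    Metadata={
        "uploader": "studio-7",
        "duration-seconds": "312",
        "retention": "archive",
    },
)
```

Because the metadata travels with the object, the store can be searched and managed by what the data is, not by where it sits in a folder tree.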
This change in the storage model means changes to how the solution is architected into the wider IT environment. Data reliability and availability are critical for businesses in today's digitally transformed world. Load balancing technology is needed to support object storage solutions in much the same way it supports other applications like Microsoft Exchange and web servers.
In computer science, the CAP Theorem (also known as Brewer's Theorem) states that a distributed data system cannot guarantee all three of its key properties at once: consistency, availability, and partition tolerance. In the event of a failure, one must be sacrificed.
Usually, the sacrifice comes down to a tradeoff between consistency and availability. Software-defined object storage, combined with load balancing technology tuned to support the object storage application, can mitigate the impact of the CAP Theorem.
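To see the tradeoff in toy form (this models the theorem, not any particular product's behavior), consider a single replica that loses contact with its peers during a network partition:

```python
class Replica:
    """Toy model of one replica choosing sides during a partition."""

    def __init__(self, mode):
        self.mode = mode          # "CP" favors consistency, "AP" availability
        self.value = "v1"
        self.partitioned = False  # True when peers are unreachable

    def read(self):
        if self.partitioned and self.mode == "CP":
            # Consistency first: refuse to answer rather than risk
            # returning data a peer may have overwritten meanwhile.
            raise RuntimeError("unavailable during partition")
        # Availability first: always answer, accepting that the value
        # may be stale until the partition heals.
        return self.value
```

A CP system goes dark during the partition; an AP system keeps answering but may serve stale data. Neither escapes the choice entirely.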
Most object storage deployments use the S3 protocol. Load balancers that understand S3 provide availability through local load balancing across multiple nodes within a site, and across sites through global server load balancing (GSLB) technology, which transparently distributes traffic in either an active/standby disaster recovery (DR) scenario or an active/active one.
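Conceptually, the local half of that job looks like the sketch below: spreading incoming S3 requests across a pool of nodes within a site. The node addresses are hypothetical, and a real load balancer does far more than simple round-robin:

```python
import itertools

# Hypothetical pool of object storage nodes within one site.
NODES = [
    "https://s3-node1.site-a.example.com",
    "https://s3-node2.site-a.example.com",
    "https://s3-node3.site-a.example.com",
]

# Round-robin selection: each request goes to the next node in the
# pool, so no single node absorbs all of the traffic.
_rotation = itertools.cycle(NODES)

def pick_node():
    """Return the next node to receive an S3 request."""
    return next(_rotation)
```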
The load balancer also supports partition tolerance by performing advanced health checks against the individual nodes in the object storage solution. If a node does not return a proper response to the health check, it is taken out of service until it does.
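A simplified version of that health checking might look like the following sketch, which probes each node over HTTP and keeps only responsive nodes in rotation. The probe path and the pass/fail criterion are assumptions for illustration, not Kemp's actual checks:

```python
import requests

def healthy_nodes(nodes, probe_path="/", timeout=2.0):
    """Return only the nodes that answer the health probe.

    A node that fails the probe is left out of service; it rejoins
    the pool automatically on a later check once it responds again.
    """
    in_service = []
    for node in nodes:
        try:
            resp = requests.get(node + probe_path, timeout=timeout)
            # Treat any HTTP response below 500 as alive; a real
            # S3-aware check would also validate the response body.
            if resp.status_code < 500:
                in_service.append(node)
        except requests.RequestException:
            pass  # unreachable or timed out: keep it out of service
    return in_service
```

Run on a short interval, a check like this is what lets traffic flow around a failed or partitioned node without clients ever noticing.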
Scality ensures strong consistency of the data with its RING scalable storage solution. In conjunction with Kemp's advanced object storage load balancing solution, the combination comes one step closer to concurrently delivering the consistency, availability, and partition tolerance that the CAP Theorem says cannot all be guaranteed.
There are many examples of great pairings throughout history: Fred Astaire and Ginger Rogers, Mario and Luigi. Each is impressive alone, but together something special happens and the whole becomes greater than the sum of its parts. Kemp and Scality are working together because bringing these technologies together creates real value for the customer.
Overall, Kemp provides local load balancing of object storage nodes within a site, global server load balancing across sites, and advanced S3-aware health checking, all tuned for object storage solutions such as Scality RING.
To learn more about what this partnership can do for your data needs, check out the press release and additional resources.