New Generation Cloud Storage Architecture
Cloud Storage
More often than not, cloud storage is confusing because the word "cloud" suggests something intangible, up in the sky; in reality it is a tangible, emerging storage technology and one of many significant milestones in the development of storage technology. Before delving into cloud storage, we first need to understand the technical transformations in corporate data storage over the past five decades.
The first significant innovation in storage technology arrived in the mid-60s, when disks ceased to be merely an integral part of the host computer and began to exist as independent external storage devices. This was the era of Direct Attached Storage (DAS). In the DAS architecture, the external storage device is connected to the host computer over copper cables using the SCSI protocol, making the additional capacity of the external device available to the host computer.

In 1987, a group of UC Berkeley computer scientists consisting of David Patterson, Garth Gibson and Randy Katz published a report titled "A Case for Redundant Arrays of Inexpensive Disks (RAID)," which coined the acronym RAID, today commonly expanded as "Redundant Array of Independent Disks." With the birth of RAID technology and storage network protocols, the age of the Storage Area Network (SAN) flourished in the 1990s. In the SAN architecture, host computers connect to the external storage device through a dedicated switch using the Fibre Channel protocol, so multiple host computers can share one external storage device. Once the external device is partitioned into multiple Logical Unit Numbers (LUNs), each host computer can mount one of the LUNs as its own storage capacity.

Another significant network storage technology, Network Attached Storage (NAS), followed the launch of SAN. NAS devices provide file access services over Ethernet: a host computer can mount NAS storage as a local folder over the network using the NFS or CIFS protocol.

SAN technology itself evolved from pure Fibre Channel (FC) to heterogeneous combinations of Fibre Channel and IP switches, and eventually to IP SAN running over Ethernet.
Storage equipment development history
In the 1990s, corporate data storage was dominated by the two major camps of SAN and NAS, which remained the solutions of choice for typical businesses. With the rise of cloud computing, however, corporate data technology reached yet another significant milestone.
 
Amazon introduced two types of cloud storage services: the Simple Storage Service (S3), launched in 2006, and the Elastic Block Store (EBS), launched in 2008. As its name suggests, EBS is a block-level disk service: when customers rent Amazon's cloud web hosting solutions, they also rent storage space provided by EBS, which is mounted to the virtual host according to the rented capacity. Alongside it, Amazon's S3 is an object storage service.

Much like creating volumes for servers in the past, EBS volumes can be operated on through the operating system; S3 is completely different in that its object storage operations require an Application Programming Interface (API). When users store their website's image library in their rented S3 storage, for example, the S3 API is required to access it, as in the sketch below. Since then, a wide array of object-based cloud storage services has flourished, such as Dropbox, Apple iCloud Drive, Box and Google Drive, and so has object-based cloud storage technology.
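As a minimal illustration of this API-driven access pattern, the sketch below uses the official AWS boto3 SDK for Python. The bucket and key names are hypothetical, and credentials are assumed to be configured in the environment (e.g. via ~/.aws/credentials).

```python
import boto3

s3 = boto3.client("s3")

# Store an image as an object: there is no volume to mount and no file
# system path -- every read and write goes through the API.
s3.put_object(
    Bucket="example-website-assets",  # hypothetical bucket name
    Key="images/banner.jpg",          # object key, not a file path
    Body=open("banner.jpg", "rb").read(),
)

# Retrieve the same object by bucket and key.
resp = s3.get_object(Bucket="example-website-assets", Key="images/banner.jpg")
image_bytes = resp["Body"].read()
```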
Three Types of Data Storage as a Service (DaaS)
The Data Storage as a Service (DaaS) offerings currently provided by cloud storage vendors can be roughly divided into the following three types:
1. SAN- and NAS-style services, or a mix of both. Users connect via the Internet or a local network and consume block- or file-level storage. For example, a user may take a block service from the cloud storage vendor via IP SAN to add a drive to a host computer, or mount the vendor's file-level service via the CIFS/NFS protocol. This type of cloud storage lets users leverage their existing IT infrastructure and investment;
2. Database-oriented services. These offer operations identical to those of a database and feature excellent database scalability, though they often require dedicated APIs from specific software vendors for interfacing; and
3. Object-based cloud storage. This is a software-defined storage service in which every data entry is treated as an object and assigned a Uniform Resource Identifier (URI), to be accessed through a web-based interface or an API; Amazon S3, for instance, uses its API to access files. A sketch of URI-based access follows this list.
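Because each object has a URI, a publicly readable object can be fetched with a plain HTTP GET, with no volume, mount point or file system involved. A minimal sketch using Python's requests library; the URL below is a hypothetical example:

```python
import requests

# Hypothetical URI of a publicly readable object.
uri = "https://example-bucket.s3.amazonaws.com/images/banner.jpg"

resp = requests.get(uri, timeout=10)
resp.raise_for_status()
print(f"Fetched {len(resp.content)} bytes of type {resp.headers['Content-Type']}")
```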

Like cloud computing, the cloud storage service has the advantage of charging users by their rented capacity, even though it is consumed in a way totally different from existing IT technology. There is also a lot of add-on software targeting cloud storage, including data compression, data de-duplication and so on, to reduce the physical storage space used; after all, cloud services tend to bill users by usage.
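To illustrate why such add-ons matter under usage-based billing, here is a small sketch of client-side compression before upload, using only Python's standard library. The ratio achieved depends entirely on how compressible the data is.

```python
import zlib

# Highly repetitive data (e.g. logs) compresses very well; random or
# already-compressed data (e.g. JPEGs) barely shrinks at all.
data = ("2023-01-01 GET /index.html 200\n" * 10_000).encode()
compressed = zlib.compress(data, level=9)

print(f"raw: {len(data):,} bytes, compressed: {len(compressed):,} bytes "
      f"({len(compressed) / len(data):.1%} of original)")
```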

Currently, cloud storage services are developed to vendors' proprietary specifications, and the often-incompatible data formats between services pose a major obstacle to development. The Storage Networking Industry Association (SNIA) therefore laid the foundations for a cloud storage standard, the Cloud Data Management Interface (CDMI), completing the version 1.0.1 specification and definitions as early as 2012. The standard enables Software-Defined Storage (SDS) to resolve the compatibility issue on a gradual basis and assists in developing new storage technology.
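CDMI is essentially plain HTTP with standardized headers and JSON bodies, so any HTTP client can speak it. The rough sketch below reads a data object from a hypothetical CDMI endpoint; the host and path are placeholders:

```python
import requests

# CDMI-specific headers: the spec version and the object content type.
headers = {
    "X-CDMI-Specification-Version": "1.0.1",
    "Accept": "application/cdmi-object",
}

# Hypothetical endpoint; a real deployment supplies its own host and path.
resp = requests.get("https://cdmi.example.com/container/report.txt",
                    headers=headers, timeout=10)
resp.raise_for_status()

obj = resp.json()  # CDMI returns a JSON envelope with metadata and value
print(obj["objectName"], obj["mimetype"])
```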

With the above development history in mind, we can see that the cloud environment is already here. But what are the hardware requirements for SDS?

At Gigabyte, we worked with cloud storage developers to learn the required functionality and provide them with suitable SDS hardware configurations. From our analysis, we found the following (an illustrative sizing sketch follows the list):
● CPU: sized by core count (the number of processor cores is determined by the operating system and the requirements of the object-based storage software)
● Memory: sized according to the OS and the number of OSDs
● SDS OS boot drives: 2 × 240 GB independent disks (minimum)
● Cache disks: sized in proportion to OSD capacity
● Data disks: according to the required capacity
● NIC ports: according to the required network topology
● RAID card: according to the required topology and the ISV's software
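The sketch below turns the checklist above into a rough per-node estimate. Every constant is an assumption for illustration (for instance, a common Ceph-style rule of thumb of about 4 GB of RAM per OSD), not a Gigabyte or ISV requirement; always confirm against the storage software's own deployment guide.

```python
# Illustrative sizing helper; every constant here is an assumption.
OS_BASE_RAM_GB = 16   # assumed baseline memory for the SDS operating system
RAM_PER_OSD_GB = 4    # assumed per-OSD rule of thumb (Ceph-style)
CACHE_RATIO = 0.04    # assumed cache capacity as a fraction of data capacity

def size_node(osd_count: int, data_tb_per_osd: float) -> dict:
    """Return a rough sizing estimate for one object-storage SDS node."""
    data_tb = osd_count * data_tb_per_osd
    return {
        "memory_gb": OS_BASE_RAM_GB + osd_count * RAM_PER_OSD_GB,
        "boot_disks": "2 x 240 GB independent disks (minimum)",
        "cache_tb": round(data_tb * CACHE_RATIO, 2),
        "data_tb": data_tb,
    }

# Example: a node with 12 OSDs at 8 TB each.
print(size_node(osd_count=12, data_tb_per_osd=8))
```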
Comparison between Traditional Storage and SDS Architectures in terms of Expansion
The biggest difference between traditional storage and SDS architectures lies in how each handles system expansion. The advantages of SDS include:
● On-demand scale-out can be carried out at any time
● System operations are not affected by any single point of failure
● Stored data doesn’t need to be migrated during the expansion process

The disadvantages of traditional storage architectures include:
1. Management becomes very difficult due to different hardware brands offered by traditional storage vendors
  ● A complex mix of storage brands and hardware types
  ● Interfacing between different host systems and the virtual platform
2. Rapidly changing application requirements from Big Data are not met
  ● Low flexibility as restricted by the architecture
  ● New storage requirements set by users
  ● Storage connection and performance requirements from new business services
3. Resource utilization of traditional storage is imbalanced
  ● Imbalanced resource utilization due to different applications
  ● Rapid, urgent needs from a single application go unmet
 
The new SDS technology can solve the bottlenecks of traditional storage architectures.
Why is SDS better than traditional storage architectures?
1. x86 hosts connected over standard networks overcome the dedicated-connection restrictions of traditional storage architectures, eliminating storage islands and data migration issues.
2. Storage scale, capacity and performance can be adjusted far more flexibly than in traditional architectures.
3. The system can be deployed at the base layer of the virtualization platform, so applications and data storage can coexist and share resources.
4. A wide array of storage protocols is supported.
5. The OpenStack storage environment and mainstream virtual platforms, including VMware, Microsoft Hyper-V and Linux-based storage network architectures, are supported.

Recently, SDS has often been used as an independent part of HPC architecture planning, so it is foreseeable that it will meet the storage transmission and capacity requirements of supercomputer systems.
Gigabyte's Server Storage Solution Series
SDS is far superior to traditional storage technology, though it faces many challenges in corporate IT environments and private cloud architectures. While the SDS architecture is used to configure the data pools of supercomputer centers to meet their high-capacity, high-speed transmission requirements, it must be customized to each deployment, making it difficult to scale down for small projects.

Because SDS products are built on open-source software, their advantages are unique to the system providers and ISVs behind them: hardware optimized for SDS, together with ongoing updates and maintenance across software iterations, is what makes an SDS product stand out in the market.

Gigabyte has been engaged in the development and sales of server products for more than two decades. Through our long-term hardware development efforts and our observation of the market, we have learned its trends. That is why we have established the R (Rack), H (High Density), G (GPU) and S (Storage) categories for different product applications and designed servers to meet their requirements, in an effort to provide the optimal price/performance ratio for each application. We also work with storage software vendors from the design phase onward to offer customers one-stop hardware solutions, so that customers can acquire enterprise-level, commercial storage system products.
Bigtera VirtualStor Scaler
Bigtera is a brand owned by Silicon Motion Technology. The Bigtera VirtualStor Scaler product line focuses on highly cost-effective storage products with scale-out capabilities, so that users can adjust and expand flexibly based on their business needs. The scale-out architecture of the VirtualStor Scaler boasts the flexibility, data protection and other advantages inherent to the SDS architecture.

With Gigabyte's assistance, three standardized servers with different capacities were designed for the VirtualStor Scaler, providing storage solutions with a low initial configuration cost, flexible capacity expansion and optimized performance.
VMware vSAN
VMware's vSAN is the SDS product for the vSphere virtual architecture, letting users compute and store simultaneously on the VMware platform. vSAN runs on standard x86 servers, and the x86 servers designed and produced by Gigabyte can be configured according to the vSAN deployment guide. Gigabyte servers running vSAN's hyper-converged infrastructure are easy to plan and deploy as long as the specific configurations shown in the server tables below are used.

vSAN-configured servers are standardized hardware configurations whose vSAN ReadyNode product descriptions are listed in the vSAN VCG (VMware Compatibility Guide) to narrow down part selection. VMware has clearly described the options available for vSAN ReadyNode. (For details, log into the My VMware site and refer to the documentation at https://my.vmware.com/group/vmware/home)

Gigabyte offers solutions with the same configurations as those listed for vSAN ReadyNode, enabling customers to select the right products with the optimal price/performance ratio and the highest ROI.

Gigabyte configuration examples: AMD and Intel; VMware vSAN HY-6 and AF-8
Conclusion
With the launch of next-generation processors, Gigabyte will continue to optimize its server hardware designs and work with SDS partners to develop SDS products with on-demand capacity expansion, offering one-stop procurement and integrated storage resources that effectively eliminate storage islands.

As our customer base spans television broadcasting, education, national laboratories, semiconductor manufacturing and other industries, we will continue to develop partners from different industries in the future. We want to build a more comprehensive industry ecosystem and solve the storage, data security and computing performance issues faced by IT teams.
