
Storage Systems are Extremely Important for Business Continuity

by GIGABYTE
In an era of rapid technological advancement, a key issue for enterprises, and the key to maintaining business continuity, is how to prevent important data from being lost through human error, deliberate deletion, or outright theft.
Can Your Business Protect Itself Against System Outages?
According to a survey by the global research and advisory firm Gartner, 39% of small and 27% of medium-sized businesses have not conducted any risk analysis when formulating their long-term operational plans. If a disaster occurs, 45% of small and 37% of medium-sized businesses have no preparations or measures in place to ensure their intellectual property is protected. At the same time, 40% of businesses have encountered a major IT system outage in the last five years that disrupted their business.

Today, the most likely cause of an outage to a business’s IT systems is human negligence or sabotage: an employee accidentally deleting important data, a destructive virus invading the company’s network, thieves stealing business secrets from a server, or hackers launching an attack over the Internet. If an enterprise cannot access its important data because its IT systems have been severely damaged, how much short-term or long-term damage will this cause to the business? The key to success is a business continuity plan that can be executed at a reasonable cost and with minimal risk, and that can be adjusted to meet future growth needs.
What Goals Should be Set?
A company’s important IT systems include Enterprise Resource Planning (ERP), Supply Chain Management (SCM), and Customer Relationship Management (CRM) systems. These may utilize e-commerce as well as public, private, or hybrid cloud platforms, and run either on-premises or in the cloud, connected via Software-Defined Wide Area Networks (SD-WAN) and universal Customer Premises Equipment (uCPE). These systems, platforms, and applications all have different requirements, forcing businesses and organizations to pay closer attention to their capabilities for continuous, comprehensive data access and storage. Moreover, the highly interconnected nature of a company’s systems, both internally and externally, means that these virtual links can be very fragile.
Therefore, if a business does not want its operations interrupted by a system outage, it must guarantee business continuity, so that important systems and networks continue to provide service no matter the situation. In other words, it needs to think ahead and build data availability, security, and reliability planning into its operational processes from the beginning.
Who Should Be Responsible?
Every CIO (Chief Information Officer) realizes that implementing a business continuity plan is good for business, but there never seems to be enough time to think about it carefully, and the mantra of company executives towards developing such a plan is usually “not a priority, no budget, and no time”. However, a system outage is not a question of “if” but of “when”. Therefore, everyone responsible for managing data or business operations must plan ahead for any possible system outage or interruption. American business intelligence firm FIND/SVP Inc. has calculated that the average hourly loss caused by disk array downtime is $29,301 for the securities industry, $26,761 for manufacturing, $17,093 for banking, and $9,435 for transportation.

In addition to the immediate financial cost of a system outage, harder to estimate are the intangible losses a company may suffer: reduced employee morale and productivity, increased work pressure, delays to important projects, an unexpected diversion of company resources, a detailed review or audit by regulatory authorities, or even damage to the company’s public reputation. The procurement contracts of most companies stipulate that suppliers must be able to provide their goods or services under any circumstances, even when a preventable accident has occurred. More importantly, certain types of organizations, such as publicly listed companies, financial institutions, public utilities, medical organizations, and government agencies, are required by law to provide adequate data protection.
All the Money in the World Can’t Bring Back Lost Data
In a recent major case of cybercrime, the chairman of the targeted company asked his MIS staff: “Our files have been encrypted by a hacker. How much Bitcoin should we reasonably pay to unlock them?” If a good disaster recovery system had already been in place, the MIS staff could have answered with a straight face: “Boss, you don’t need to pay a single cent of ransom.” Having a good disaster recovery system is like having Dr. Strange’s Time Stone: you can use it to go back in time and retrieve the data from before it was hacked!

You may only realize the importance of certain files after they have been hacked and encrypted. Information security and storage are both paramount – all the money in the world can’t bring back data once it has been lost. 

For example, both Taiwan’s CPC (Chinese Petroleum Corporation) and Formosa Petrochemical Corporation recently suffered malware attacks. Therefore, in addition to network security, an important issue is verifying that data has not been modified, or being able to restore it to a point in time before it was encrypted or modified. This is also an important feature of a good data storage and backup system.

The solution described above sounds magically simple, but it is actually the result of a number of complex processes that integrate various technologies into a comprehensive data storage and backup system: from a basic on-premises High Availability (HA) cluster with rapid backup and recovery capabilities, to an on-premises CDP (Continuous Data Protection) system with CRR (Continuous Remote Replication) over the Internet or a WAN (Wide Area Network), to combinations of other continuous protection mechanisms such as CLR (Concurrent Local and Remote) data protection. Only by adopting this kind of data storage and backup system will you have your own Time Stone, able to travel back to the point in time just before your data was hacked and encrypted.
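The point-in-time recovery that CDP enables can be sketched in a few lines. The following is a toy illustration only, not any vendor’s implementation; the `CDPJournal` class and its methods are hypothetical names invented for this example. The core idea is simply that every write is journaled with a timestamp, so the volume’s contents can be reconstructed as of any moment before an attack:

```python
class CDPJournal:
    """Toy continuous-data-protection journal: every write is logged with a
    timestamp, so the volume can be rebuilt as of any chosen instant."""

    def __init__(self):
        self._log = []  # (timestamp, block_id, data), appended in time order

    def write(self, timestamp, block_id, data):
        # In a real CDP system this append happens on the I/O path, and is
        # often mirrored to a remote site at the same time (that is the CRR part).
        self._log.append((timestamp, block_id, data))

    def restore(self, as_of):
        """Replay all writes up to and including `as_of`, producing the
        volume contents at that point in time."""
        volume = {}
        for ts, block_id, data in self._log:
            if ts > as_of:
                break
            volume[block_id] = data
        return volume


journal = CDPJournal()
journal.write(100, "blk0", b"payroll v1")
journal.write(200, "blk0", b"payroll v2")
journal.write(300, "blk0", b"ENCRYPTED BY RANSOMWARE")

# Roll the volume back to just before the attack at t=300:
clean = journal.restore(as_of=250)
print(clean["blk0"])  # b'payroll v2'
```

Production systems journal at the block or I/O level and bound the journal’s size with periodic baseline snapshots, but the restore-by-replay principle is the same.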
Data Storage is an Extremely Important Part of Business Continuity
In addition to compute and networking, a good business continuity plan for a company’s on-premises or cloud IT systems must include adequate storage infrastructure. If data is not safely stored and protected, any disaster recovery plan will be useless. Different storage technologies can help companies greatly improve the availability and resilience of their data. These range from traditional SAN (Storage Area Network) and NAS (Network Attached Storage) platforms to the latest SDS (Software-Defined Storage) systems, which feature automatic load balancing, data healing and recovery, API support for open-source or Amazon S3-compatible cloud storage, and a cloud computing architecture natively compatible with various open-source operating systems as well as AI (Artificial Intelligence) and Big Data applications. Any storage system must also provide excellent read and write performance for real-time ingest and processing of large amounts of data, plus the flexibility to expand, so that administrators need not worry about services being interrupted while data is transferred.
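The “data healing” behavior mentioned above can be illustrated with a small sketch. This is a simplified, hypothetical example, not how any particular SDS product works: the function names, the hash-based placement scheme, and the replication factor of 3 are all assumptions made for illustration. An SDS layer places each object on several nodes, and when a node fails it re-replicates the lost copies onto the remaining healthy nodes:

```python
import hashlib

REPLICAS = 3  # assumed replication factor; real SDS products make this configurable


def place(object_id, nodes, replicas=REPLICAS):
    """Deterministically choose `replicas` distinct nodes for an object by
    hashing each (object, node) pair -- a simplified rendezvous-style placement."""
    ranked = sorted(
        nodes,
        key=lambda node: hashlib.sha256(f"{object_id}:{node}".encode()).hexdigest(),
    )
    return ranked[:replicas]


def heal(object_id, current_holders, live_nodes, replicas=REPLICAS):
    """After a node failure, return the live nodes that must receive a fresh
    copy so the object is back at full replication (the "data healing" step)."""
    target = place(object_id, live_nodes, replicas)
    return [node for node in target if node not in current_holders]


nodes = ["node-a", "node-b", "node-c", "node-d"]
holders = place("invoice-2020.db", nodes)        # 3 replicas across 4 nodes

failed = holders[0]                              # simulate losing one replica's node
live = [n for n in nodes if n != failed]
survivors = [n for n in holders if n != failed]
repairs = heal("invoice-2020.db", survivors, live)

print(f"lost {failed}, re-replicating to {repairs}")
```

Because placement is deterministic, every node can compute the same answer independently, which is what lets healing and load balancing happen automatically without a central coordinator.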
There are currently five different cloud infrastructure-based storage architectures, with SDS (Software-Defined Storage) already the most popular choice. Many businesses have gradually adopted SDS technology to centralize management of their existing SANs (Storage Area Networks), drawn by increased data protection capabilities, reduced TCO (Total Cost of Ownership), and strengthened data recovery procedures. Integrating all data into SDS, alongside an existing high-speed Fibre Channel SAN, under centralized management allows enterprises to achieve higher standards of availability, scalability, security, and resilience. SDS not only helps improve data availability and resilience, but also allows administrators to connect to online storage systems to protect data anywhere on the network.

Choosing the Right Partner is Extremely Important
Successful businesses understand the value of technology solution providers because they can assist in the creation, implementation, management and continuous development of a business continuity plan. A company that has limited resources, and would rather use those resources to promote revenue growth and increase shareholder value, can still consider using a technology solution provider to deliver a proprietary solution to meet some or all of their business continuity needs.

When seeking a business continuity solution, it is wise to find a partner that can supply the newest server hardware and software to ensure that your business continuity plan can be quickly and smoothly deployed.

Based on the above requirements, GIGABYTE has designed over 80 different server platforms for storage applications. In addition to both enterprise-grade x86 (AMD & Intel) and Arm server platforms, GIGABYTE has over 20 platforms that support the latest NVMe storage technology, in 1U, 2U, or 4U form factors. This range of platforms can satisfy customers’ needs for building SDS infrastructure as well as the various external and internal hardware and configuration requirements of ISVs (Independent Software Vendors).

On the enterprise section of GIGABYTE’s official website, you can see a range of servers available that provide customers with practical designs, flexibility, and rapid updates in response to their changing needs. GIGABYTE can even provide customized products on request – as long as customers put forth their AI or Big Data application requirements for computing performance, storage device type, management host or networking interface, GIGABYTE can give them a fast and satisfactory recommendation from our large family of server products.
GIGABYTE NVMe-Supported Server Model List
● CPU AMD Series (12 Models With NVMe Support)
- R152-Z31 (2 NVMe) & R152-Z32 (10 NVMe)
- R272-Z32 (24 NVMe)
- R162-Z10 (10 NVMe) & R162-Z11 (4 NVMe)
- R181-Z91 (2 NVMe) & R181-Z92 (10 NVMe)
- R281-Z91 (6 NVMe) & R281-Z92 (24 NVMe)
- R182-Z91 (2 NVMe) & R182-Z92 (10 NVMe)
- R282-Z92 (24 NVMe)

● CPU Intel Series (8 Models With NVMe Support)
- R181-N20 (2 NVMe) & R181-NA0 (10 NVMe)
- R281-N40 (4 NVMe) & R281-NO0 (24 NVMe)
- S260-NF0 (12 NVMe) & S260-NF1 (24 NVMe)
- S451-3R1 (6 NVMe + 36 x 3.5" HDD bays)
- S461-3T0 (6 NVMe + 60 x 3.5" bays + 2 x 2.5" SAS/SATA HDD)
You have the ideas, we can help make it happen.