MooseFS Enterprise

GIGABYTE has joined forces with Alcestor to offer a high-performance, fault-tolerant parallel file system turnkey solution: MooseFS Enterprise.
Introduction
MooseFS Enterprise is a fault-tolerant distributed parallel file system designed from the ground up for extremely demanding data workloads, offering exceptional performance, high availability, and scalability. MooseFS Enterprise features a SPOF-less configuration (no single point of failure), metadata redundancy, and distributed user data. Plus, it scales linearly with your data all the way up to 16 exabytes on a single cluster.
MooseFS Enterprise Use Case Scenarios
Digital Media Storage
Make sure your content is accessible from anywhere - without disruptions or data loss. MooseFS Enterprise provides massive storage for all your digital media needs, featuring a SPOF-less configuration, metadata redundancy, and distributed user data. Plus, it scales linearly with your data all the way up to 16 exabytes on a single cluster.
ADAS Data Storage
Get virtually limitless storage space for your ADAS (Advanced Driver-Assistance Systems) data. MooseFS Enterprise is a distributed parallel file system ideal for storing, processing, and running simulations with sensor data (LIDAR, RADAR, GPS, cameras, etc.), and it scales linearly all the way up to 16 exabytes on a single cluster.
Geospatial Data Storage
Get HPC-ready storage for the data fueling your operations. For geophysical processing, reservoir analysis, and all your other data-heavy workloads, we bring you MooseFS Enterprise. Our distributed parallel file system scales linearly together with your data all the way up to 16 exabytes on a single cluster.
Why MooseFS Enterprise?
Big Data presents a myriad of opportunities for every organization to gain insights, reduce costs, and generate more profit and revenue. Of course, Big Data also presents a big challenge: storage for the billions of files that are being generated, stored, and retrieved all the time. In many cases, the cloud is the answer, but sometimes you just want to use your own hardware solution.

Do you store a lot of mission-critical data? Does your data need to be stored more securely? Do you need an on-site storage solution? MooseFS Enterprise by GIGABYTE and Alcestor, an enterprise version of the open-source distributed file system MooseFS, could be the perfect solution.

MooseFS Enterprise intelligently assigns data to numerous servers, thereby creating one virtually unlimited storage space, available like an ordinary hard drive with a POSIX-compliant file system. MooseFS Enterprise keeps data redundant and safe for decades, with uninterrupted access, protecting data with redundant copies or 8+N erasure coding algorithms (an extension of the RAID 5 approach), and saves you time with minimal maintenance and parallel data access. MooseFS Enterprise also saves you money: erasure coding requires fewer hard disks, and different storage tiers are possible using different disk types.
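Because the cluster is exposed as a POSIX-compliant file system, existing applications can use it without modification. Below is a minimal sketch in Python; the mount point /mnt/mfs and the file name are assumptions for illustration, not part of the product.

```python
import os

# The MooseFS mount behaves like any local POSIX file system, so
# ordinary file APIs work unchanged. "/mnt/mfs" is an assumed mount
# point used here purely for illustration.
MOUNT = "/mnt/mfs"
path = os.path.join(MOUNT, "example.dat")

# Write a file: the cluster transparently splits it into chunks
# and stores them redundantly across chunk servers.
with open(path, "wb") as f:
    f.write(b"payload" * 1024)

# Read it back exactly as if it lived on a local disk.
with open(path, "rb") as f:
    data = f.read()

# Standard metadata calls (stat, listdir, etc.) work as well.
print(os.stat(path).st_size, len(data))
```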
How It Works
MooseFS Enterprise distributes data in parallel over a virtually unlimited number of servers that act as one disk to the client devices accessing it. A MooseFS Enterprise storage cluster is composed of Master Servers and Chunk Servers. The Master Servers hold file metadata, while the Chunk Servers redundantly store chunks of data. The Leader Master manages the Chunk Servers, ensuring data is always available or recovered should any Chunk Server fail. In the event the Leader Master fails, the Chunk Servers will “elect” a new Leader from the remaining Follower Master Servers. This is how the system self-heals and maintains data integrity.
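The toy Python model below illustrates the self-healing idea: every chunk is kept on a target number of distinct servers, and when a server fails, under-protected chunks are re-replicated from surviving copies. This is a conceptual sketch only; the server names and replication policy are illustrative, not MooseFS internals.

```python
import random

# Toy model of redundant chunk placement and self-healing.
GOAL = 2  # number of copies to keep of every chunk

servers = {"cs1": set(), "cs2": set(), "cs3": set(), "cs4": set()}

def place_chunk(chunk_id):
    """Store GOAL copies of a chunk on distinct chunk servers."""
    for name in random.sample(sorted(servers), GOAL):
        servers[name].add(chunk_id)

def heal(failed):
    """On failure, re-replicate under-protected chunks from survivors."""
    lost = servers.pop(failed)
    for chunk_id in lost:
        holders = [s for s, chunks in servers.items() if chunk_id in chunks]
        if holders and len(holders) < GOAL:
            spare = next(s for s in sorted(servers) if chunk_id not in servers[s])
            servers[spare].add(chunk_id)  # copied from a surviving holder

for i in range(8):
    place_chunk(i)
heal("cs2")  # every chunk is back at GOAL copies; no data lost
print({s: sorted(c) for s, c in servers.items()})
```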
MooseFS Enterprise Benefits
• High Performance
• High Availability and Reliability
• Competitive TCO
• Easy Deployment
• Security and Management
High Performance
• Store up to 16 exabytes and more than 2 billion files on a single cluster
• High throughput across multiple storage nodes
• Parallelism - performs all I/O operations in parallel threads of execution to deliver high read/write performance (see the client-side sketch after this list)
• Compute-on-nodes efficiently uses idle CPU, GPU, and memory resources for low-latency data processing and storage on the same machine
• Direct drive mounts mean a significant performance boost over traditional RAID arrays - each chunk server creates a mount point for each drive, allowing MooseFS to intelligently decide how data is written to individual drives
• Native clients and the MooseFS (MFS) protocol - dedicated client component for Linux, FreeBSD, and Mac OS X systems for maximum performance
• Pre-fetch and read-ahead algorithms enhance HDD cluster performance - improving stream-like access patterns such as storing and serving video files, logging, etc.
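As a client-side illustration of the parallelism point above: once the cluster is mounted, an application can read different regions of a large file concurrently and let the file system serve them from multiple chunk servers at once. The path, slice size, and worker count below are assumptions for illustration.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Illustrative client-side parallel I/O against a mounted file system.
# "/mnt/mfs/video.bin" is an assumed path, not part of the product.
PATH = "/mnt/mfs/video.bin"
SLICE = 64 * 1024 * 1024  # read in 64 MiB slices

def read_slice(offset):
    """Read one slice; on a parallel file system, slices held on
    different chunk servers can be served concurrently."""
    with open(PATH, "rb") as f:
        f.seek(offset)
        return f.read(SLICE)

size = os.path.getsize(PATH)
with ThreadPoolExecutor(max_workers=8) as pool:
    data = b"".join(pool.map(read_slice, range(0, size, SLICE)))
print(f"read {len(data)} bytes in parallel slices")
```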
High Availability and Reliability
• Runs as a single file system volume, no matter the cluster size
• SPOF-less configuration (no single point of failure)
• Metadata redundancy with two or more raw copies on physically redundant master servers for rapid recovery
• Distributed user data - data divided into “chunks” and redundantly spread across storage servers
• Transparent automatic failover - the system instantly initiates parallel data replication from redundant copies to other resources
• Fast disk recovery - as little as 15 minutes for a 1 TB drive
• Cyclic redundancy checks - each chunk of data in the cluster carries a CRC checksum to ensure consistency (illustrated after this list)
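Conceptually, per-chunk CRC verification works like the short Python sketch below; the actual on-disk format is internal to MooseFS, so this only illustrates the checksum idea.

```python
import zlib

# A chunk's checksum is computed when the chunk is written...
chunk = b"some chunk of user data"
stored_crc = zlib.crc32(chunk)

# ...and recomputed and compared on every read. A mismatch means this
# copy is corrupt, and a redundant copy is used (and re-replicated).
assert zlib.crc32(chunk) == stored_crc, "chunk corrupt: read another copy"
```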
Competitive TCO
• Completely hardware- and OS-agnostic - mix existing and new hardware for a cost-effective solution
• Erasure coding using error-correction code algorithms (with up to 9 parity sums) ensures redundancy on less raw space (see the worked example after this list)
• POSIX compliance - meets IEEE-specified standards that define uniform APIs for Unix-like operating systems, meaning no need to replace existing architecture
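To make the raw-space savings concrete, the arithmetic below compares 8+N erasure coding with plain replication. The figures follow directly from the 8+N scheme itself; they are not vendor benchmarks.

```python
# Raw space needed per unit of user data: erasure coding vs. replication.
def raw_per_usable(data_parts, parity_parts):
    return (data_parts + parity_parts) / data_parts

for parity in (1, 2, 4, 9):  # 8+N with up to 9 parity sums
    print(f"8+{parity} erasure coding: {raw_per_usable(8, parity):.3f}x raw space")

print("2-copy replication:  2.000x raw space")
print("3-copy replication:  3.000x raw space")
# 8+2, for example, survives any two simultaneous failures while using
# 1.25x raw space, versus 3x for keeping three full copies.
```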
Easy Deployment
• Software-defined storage solution that you can set up in less than 30 minutes
• Use turnkey appliances available from GIGABYTE
• On-the-fly upgrades - one node at a time - mean zero downtime
Security and Management
• Granular quota limits and storage policies enable full control over how data is stored
• Access control based on the standard Unix access control list (ACL) model
• Global trash makes it easy to recover accidentally deleted data
• Rich set of admin tools: command-line, web-based, and SNMP-based interfaces
• Advanced storage policies and tiering:
  - Tier between an unlimited number of storage pools, such as HPC and archive
  - Files are automatically moved between storage tiers based on triggers set by the admin
MooseFS Turnkey Appliances
A MooseFS storage cluster consists of Master Servers (meta-data servers) and Chunk Servers (storage servers). Two Master Servers are standard in any cluster.  There is a minimum of three Chunk Servers (of any combination) in a cluster and no maximum.
Master Server
R181-2A0 (rev. 100)
Small Chunk Server
R281-3C1 (rev. 300)
Raw Capacity 144TB / Usable Capacity 72TB
Large Chunk Server
S451-3R0 (rev. 100)
Raw Capacity 432TB / Usable Capacity 216TB
MooseFS Turnkey Appliance Example Configurations

Raw Capacity | Usable Capacity* | Required Rack Space | Hardware Configuration
432TB        | 216TB            | 8U                  | 2 x R181-2A0 + 3 x R281-3C1
720TB        | 360TB            | 10U                 | 2 x R181-2A0 + 1 x S451-3R0 + 2 x R281-3C1
1PB          | 500TB            | 12U                 | 2 x R181-2A0 + 2 x S451-3R0 + 1 x R281-3C1
1.296PB      | 648TB            | 14U                 | 2 x R181-2A0 + 3 x S451-3R0
2.592PB      | 1.296PB          | 26U                 | 2 x R181-2A0 + 6 x S451-3R0
3.456PB      | 1.728PB          | 34U                 | 2 x R181-2A0 + 8 x S451-3R0
*Default; usable capacity can be adjusted by the user according to data replication configuration
Related Technologies
Parallel File System
A parallel file system, also known as a clustered file system, is a type of storage system designed to store data across multiple networked servers and to facilitate high-performance access through simultaneous, coordinated input/output (I/O) operations between clients and storage nodes. A parallel file system breaks up a data set and distributes, or stripes, the blocks to multiple storage drives, which can be located in local and/or remote servers. Users do not need to know the physical location of the data blocks to retrieve a file, as the system uses a global namespace to facilitate data access. Data is read from and written to the storage drives concurrently over multiple I/O paths, providing a significant performance benefit. Storage capacity and bandwidth can be scaled to accommodate enormous quantities of data, and features may include high availability, mirroring, replication, and snapshots.
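Below is a minimal Python sketch of the striping idea, assuming four storage targets represented as in-memory lists (a real system stripes blocks to drives on networked servers and reads them back over multiple paths in parallel):

```python
# Illustrative round-robin striping of a data set across targets.
BLOCK = 4  # tiny block size so the example stays readable
targets = [[] for _ in range(4)]  # stand-ins for four storage nodes

def stripe(data):
    """Split data into blocks and distribute them round-robin."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    for i, block in enumerate(blocks):
        targets[i % len(targets)].append((i, block))
    return len(blocks)

def reassemble(n_blocks):
    """Collect blocks from all targets (in practice, in parallel)
    and reorder them by index into the original byte stream."""
    blocks = sorted(b for t in targets for b in t)
    return b"".join(block for _, block in blocks[:n_blocks])

n = stripe(b"parallel file systems stripe blocks")
assert reassemble(n) == b"parallel file systems stripe blocks"
```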
Software-Defined Storage
Software-Defined Storage ("SDS") refers to the virtualization of storage systems, whereby the underlying storage hardware is abstracted from the storage software that manages it. SDS can be an element within a Software-Defined Data Center, or it can function as a stand-alone technology. The software enabling the software-defined storage environment may also provide policy management for features such as data deduplication, replication, thin provisioning, snapshots, and backup.
Erasure Coding
Erasure coding is a method of data protection for storage systems. Data is broken into fragments, expanded and encoded with redundant data pieces, and then stored across a set of different locations or storage media within a distributed storage system. A subset of the fragments is enough to regenerate the original data, so data that becomes corrupted or lost at some point in the storage process can be reconstructed from information stored elsewhere in the system. Erasure coding is an alternative to replication (simply making multiple copies of data): it can achieve the same or better protection against data loss while reducing the total raw capacity required by 50% to 80%.
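The simplest erasure code is a single XOR parity fragment (the idea behind RAID 5). The Python sketch below loses one fragment and rebuilds it from the survivors, showing how a subset of the stored pieces is enough to regenerate the original data:

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Split data into k equal fragments and add one XOR parity fragment.
data = b"ABCDEFGHIJKL"
k = 3
size = len(data) // k
fragments = [data[i * size:(i + 1) * size] for i in range(k)]
parity = reduce(xor, fragments)

# Lose any one fragment: XOR of the survivors and parity restores it.
lost_index = 1
survivors = [f for i, f in enumerate(fragments) if i != lost_index]
rebuilt = reduce(xor, survivors + [parity])
assert rebuilt == fragments[lost_index]
# Raw space used: (k+1)/k = 1.33x here, versus 2x for a full copy.
```

Multi-parity schemes like the 8+N coding mentioned above extend the same principle, with N parity fragments protecting against up to N simultaneous failures.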