Designed for Big Data, the HUAWEI OceanStor 9000 storage system uses a symmetric distributed architecture to deliver cutting-edge performance, large-scale horizontal expansion, and an ultra-large single file system, providing shared storage for both structured and unstructured data. Applicable industries include broadcast media, satellite mapping, genetic research, energy exploration, scientific research, and education.
- Big data life cycle integration: One system handles data storage, analysis, and archiving, improving processing efficiency, simplifying the system, and reducing investment and costs.
- Management integration: One set of software centrally manages IT devices and provides analysis reports, simplifying management and improving operational efficiency.
- Industry-leading performance: The OceanStor 9000 ranks first in the SPECsfs2008 benchmark, delivering more than 5 million Network File System (NFS) operations per second (OPS).
- High-speed internal interworking: Nodes interconnect over a 10GE or 40 Gbit/s InfiniBand network, with support for TCP Offload Engine (TOE) and Remote Direct Memory Access (RDMA) technologies.
- SSD-based metadata access acceleration: SSDs are used to store metadata, improving metadata access efficiency.
- Global cache: A maximum of 55 TB global cache is provided, increasing the data access hit ratio.
- Dynamic storage tiering: Hotspot data is intelligently migrated, improving system performance.
- Intelligent load balancing: Cross-node client connection load balancing automatically strikes a balance between capacity and performance, optimizing cluster resources.
- Flexible expansion: Seamless expansion from 3 to 288 nodes enables linear expansion of capacity and performance.
- Scale-out architecture: The OceanStor 9000 adopts a scalable share-nothing symmetric distributed architecture.
- Global namespace: A single file system provides up to 40 PB capacity.
- Flexible space quota: Storage resource thin provisioning is implemented at the directory, user, and user group levels.
- Automatic statistics collection and analysis: Automatic performance statistics collection and analysis enable proper use of resources.
- Automatic deployment: The software platform is deployed and configured automatically, enabling one-click capacity expansion; a single node can be added within 60 seconds.
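The headline scale-out figures above can be cross-checked with simple arithmetic. The sketch below (an illustration only: the per-node disk counts are taken from the capacity-node entry in the specification table, protection overhead and SSD metadata drives are ignored, and decimal units are used, 1 PB = 1000 TB) shows how a full 288-node capacity-node cluster approaches the 40 PB single-file-system limit:

```python
# Back-of-envelope raw-capacity check for an OceanStor 9000 scale-out cluster.
# Assumptions (not from a sizing tool): each capacity node holds
# 35 x 4 TB SATA disks per the spec table; N+M protection overhead and
# SSD metadata drives are ignored.

MIN_NODES, MAX_NODES = 3, 288      # supported cluster size range
DISKS_PER_CAPACITY_NODE = 35       # SATA data disks per capacity node
DISK_TB = 4                        # TB per SATA disk

def raw_capacity_pb(nodes: int) -> float:
    """Raw (pre-protection) cluster capacity in PB for a given node count."""
    if not MIN_NODES <= nodes <= MAX_NODES:
        raise ValueError(f"cluster size must be {MIN_NODES} to {MAX_NODES} nodes")
    return nodes * DISKS_PER_CAPACITY_NODE * DISK_TB / 1000

print(raw_capacity_pb(3))      # smallest cluster: 0.42 PB raw
print(raw_capacity_pb(288))    # largest cluster: 40.32 PB raw, close to the 40 PB namespace limit
```

Because capacity and performance scale linearly with node count, the same multiplication applies at any cluster size between 3 and 288 nodes.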
Technical Specifications of the Big Data File Storage Subsystem
Node-specific specifications:

|Model|Performance Node|Capacity Node|Mini-Capacity Node|
|---|---|---|---|
|Cache per node|Standard configuration: 48 GB, expandable to 192 GB|Standard configuration: 32 GB, expandable to 192 GB|Standard configuration: 32 GB, expandable to 192 GB|
|Disk type|SSD and SAS|SSD, SATA, and NL-SAS|SSD, SATA, and NL-SAS|
|Number of disks per node|Standard configuration: 4 x 2.5-inch 200 GB SSDs + 21 x 2.5-inch 600 GB SAS disks (the SSD/HDD ratio can be adjusted to match actual performance requirements)|Standard configuration: 1 x 3.5-inch 200 GB SSD + 35 x 3.5-inch 4 TB SATA disks (the SSD/HDD ratio can be adjusted to match actual performance requirements)|Standard configuration: 12 x 3.5-inch 2 TB SATA disks (the configuration can be adjusted to match actual performance requirements)|
|Node dimensions (H x W x D)|2 U, 86.1 mm x 446 mm x 582 mm (3.39 in. x 17.56 in. x 22.91 in.)|4 U, 175 mm x 446 mm x 582 mm (6.89 in. x 17.56 in. x 22.91 in.)|2 U, 86.1 mm x 446 mm x 582 mm (3.39 in. x 17.56 in. x 22.91 in.)|
|Node weight|Fully loaded with 2.5-inch disks: ≤ 35 kg (77.18 lb)|Fully loaded with 3.5-inch disks: ≤ 70 kg (154.35 lb)|Fully loaded with 3.5-inch disks: ≤ 32 kg (70.56 lb)|
|Typical power|420 W|580 W|260 W|

Common specifications:

|Item|Specification|
|---|---|
|System architecture|Symmetric distributed architecture|
|Number of nodes|3 to 288|
|CPUs per node|2 x Intel E5 series|
|Front-end network type|10GE, 40 Gbit/s InfiniBand, and GE|
|Internal network type|10GE and 40 Gbit/s InfiniBand|
|Data protection level|N+1, N+2, N+3, and N+4|
|File system|Wushan distributed file system, which provides a global namespace and can be dynamically expanded to up to 40 PB|
|Value-added features|Dynamic storage tiering (InfoTier)<br>Automatic client connection load balancing (InfoEqualizer)<br>Space quota management (InfoAllocator)|
|Thin provisioning|Supported by default; no configuration required|
|Data self-healing|Automatic, concurrent, and fast data reconstruction at up to 1 TB per hour|
|System expansion|One-click online expansion; a single node can be added in less than 60 seconds|
|System management|Rights- and domain-based user management|
|Global cache|Up to 55 TB|
|Supported operating systems|Windows, Linux, and Mac OS|
|Supported protocols|NFS, CIFS, HDFS, NIS, Microsoft Active Directory, LDAP, and SNMP|
|Alarm notification|Email, SMS, SNMP, and Syslog|
|Maintenance-free disk replacement|Automatic faulty-disk detection, alarm notification, and centralized batch replacement of faulty disks, avoiding urgent disk replacement and reducing manual maintenance|
|Power supply|100 V to 127 V AC; 200 V to 240 V AC|
|Cabinet dimensions (H x W x D)|Maximum: 2000 mm x 600 mm x 1100 mm (78.74 in. x 23.62 in. x 43.31 in.)|
|Cabinet weight|Fully loaded with 2 U nodes: ≤ 576.7 kg (1271.62 lb)<br>Fully loaded with 4 U nodes: ≤ 725.5 kg (1599.73 lb)|
|Operating temperature|5°C to 35°C (41°F to 95°F) at altitudes from –60 m to +1800 m (–196.85 ft. to +5905.44 ft.)<br>Above 1800 m (5905.44 ft.) and up to 3000 m (9842.4 ft.), the maximum ambient temperature decreases by 0.6°C (1.08°F) for every 100 m (328.08 ft.) of altitude increment|
|Operating humidity|20% RH to 80% RH|
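The altitude-derating rule in the operating temperature specification can be expressed as a small helper function (a sketch for planning purposes only; the 35°C base limit, the 0.6°C per 100 m derating, and the 1800 m/3000 m bounds are taken directly from the table):

```python
# Maximum ambient operating temperature vs. altitude, per the spec table:
# 35 degC up to 1800 m, then minus 0.6 degC per 100 m, valid to 3000 m.

BASE_MAX_C = 35.0          # max ambient temperature at or below 1800 m
DERATE_C_PER_100M = 0.6    # temperature derating above 1800 m
BASE_ALT_M = 1800
MIN_ALT_M, MAX_ALT_M = -60, 3000  # supported altitude range from the table

def max_ambient_c(altitude_m: float) -> float:
    """Maximum supported ambient temperature (degC) at a given altitude."""
    if not MIN_ALT_M <= altitude_m <= MAX_ALT_M:
        raise ValueError("altitude outside the supported -60 m to 3000 m range")
    excess_m = max(0.0, altitude_m - BASE_ALT_M)
    return BASE_MAX_C - DERATE_C_PER_100M * excess_m / 100

print(max_ambient_c(1000))   # no derating below 1800 m: 35.0 degC
print(max_ambient_c(3000))   # 1200 m excess -> 7.2 degC derating -> ~27.8 degC
```

For example, a site at 3000 m is 1200 m above the base altitude, so the limit drops by 12 × 0.6°C = 7.2°C, giving a maximum ambient temperature of about 27.8°C.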
Technical Specifications of the Big Data Analysis Subsystem