Hardware Reference
In-Depth Information
maximum of 20.7 PB. The maximum capacity will continue to increase as
disk drives continue to get denser. The nodes available today fall into
several classes according to their function:
S-Series: IOPS-intensive applications,
X-Series: high-concurrency and throughput-driven workflows,
NL-Series: near-primary accessibility with near-tape value,
Performance Accelerator: independent scaling for ultimate performance,
Backup Accelerator: high-speed, scalable backup and restore, and
EX Capacity Extension: independent scaling of capacity.
11.3 Network
There are two types of networks associated with a cluster: internal and
external.
11.3.1 Back-End Network
All intra-cluster communication is performed using a proprietary,
unicast (node-to-node) protocol over a very fast, low-latency InfiniBand
network. This back-end network, which is configured with redundant
switches for high availability, acts as the backplane for the cluster,
enabling each node to act as a contributor in the cluster while keeping
node-to-node traffic on a private, high-speed, low-latency network. The
back-end network uses IP over InfiniBand (also called IPoIB or IP over
IB) for node-to-node communication.
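Because IPoIB presents the InfiniBand fabric to the operating system as an ordinary IP interface, node-to-node software can use standard IP sockets unchanged; the fabric's speed is gained without a special API. The sketch below is illustrative only, not Isilon's actual protocol: it echoes a hypothetical heartbeat message over a plain TCP socket, run here over loopback in place of a back-end IPoIB address.

```python
# Minimal sketch: over IPoIB, ordinary TCP sockets carry intra-cluster
# traffic; nothing InfiniBand-specific appears in the code. The
# "heartbeat" message and echo behavior are hypothetical examples.
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    """Accept one connection and echo back whatever was received."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))
    srv.close()

# Bind a listener; on a real cluster this would be the node's
# back-end (IPoIB) address rather than loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# Peer node's side: connect and exchange a message.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", port))
    client.sendall(b"heartbeat")
    reply = client.recv(1024)
t.join()
```

The point of the sketch is that migrating such code from Ethernet to the InfiniBand back end requires only a different IP address, which is exactly what IPoIB provides.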
11.3.2 Front-End Network
Clients connect to the cluster using Ethernet connections (1 GigE or 10
GigE) that are available on all nodes. Because each node provides its own
Ethernet ports, the amount of network bandwidth available to the cluster
scales linearly with performance and capacity. The Isilon cluster supports
standard network communication protocols to a customer network, including
NFS, CIFS, HTTP, iSCSI, and FTP.
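Since every node contributes its own front-end ports, aggregate network bandwidth grows in direct proportion to node count. A small sketch of that arithmetic, using hypothetical per-node figures (two 10 GigE ports per node; actual port counts and speeds vary by node model):

```python
def aggregate_bandwidth_gbps(node_count: int,
                             ports_per_node: int = 2,
                             port_speed_gbps: int = 10) -> int:
    """Front-end bandwidth scales linearly: each added node brings
    its own Ethernet ports, so the total is a simple product.
    Per-node figures here are illustrative, not a spec."""
    return node_count * ports_per_node * port_speed_gbps

# A 3-node cluster with 2 x 10 GigE ports per node:
print(aggregate_bandwidth_gbps(3))   # 60 Gb/s aggregate
# Doubling the node count doubles the available bandwidth:
print(aggregate_bandwidth_gbps(6))   # 120 Gb/s aggregate
```

This linearity is the contrast with a traditional filer, where a fixed head imposes a bandwidth ceiling regardless of how much capacity is added behind it.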
 