Friday, January 4th 2013 11:25:01 PM PST
There are currently three data storage systems on the Triton Resource:

- Home areas, located on Solaris-based NFS servers that use ZFS as the underlying file system.
- A small scratch space, available to all users.
- The Lustre Storage Area, a parallel file system (PFS) called Data Oasis, containing over 800 terabytes of shared scratch space available to all users.
This PFS is fully connected to both the Petascale Data Analysis Facility and the Triton Compute Cluster, providing exceptional data movement and data management throughput to users on either system. The currently available scratch space will remain the capacity of this resource until Data Oasis is complete.
In addition to the shared PFS storage, there are 36 terabytes of space in the Home Storage system for all users. This will be the maximum size for the home filesystem for the foreseeable future. There are also approximately 5 gigabytes per node of Local Temporary Storage available during job runs. This space is purged between jobs.
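Because the node-local temporary space is purged between jobs, a job that uses it must stage its input in at the start and copy results back to shared storage before it ends. A minimal sketch in shell, under stated assumptions: the helper name `stage_and_run`, the input file, and the `out.*` result pattern are all illustrative, not Triton-specific conventions.

```shell
# Sketch of staging work through node-local scratch (purged between jobs).
# All names here are illustrative assumptions, not Triton commands.
stage_and_run() {
    # $1: input file on shared storage, $2: command to run
    #     (the command receives the scratch directory as its argument)
    local scratch input="$1" cmd="$2"
    scratch="$(mktemp -d "${TMPDIR:-/tmp}/job.XXXXXX")" || return 1
    cp "$input" "$scratch/" || return 1
    "$cmd" "$scratch"                    # do the work against local disk
    cp "$scratch"/out.* . 2>/dev/null    # copy results back to shared storage
    rm -rf "$scratch"                    # clean up; /tmp is purged anyway
}
```

Staging this way keeps the heavy I/O on the local disk during the run, with only one copy in and one copy out crossing the network.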
|System|Local HD Redundancy|Connectivity|Space|Backup/Replication|Total Space for Files and Snapshots|Location|
|---|---|---|---|---|---|---|
|Home Area Storage|Tolerates a double-drive failure without data loss: Triton uses the RAID-Z2 variant of RAID 6|10GbE; delivers > 300 MB/sec to a single node; > 500 MB/sec aggregate|Each user is guaranteed at least 50 GB|Nightly snapshots retained 7 days minimum; see the snapshot recovery page|36 TB|Snapshots accessible at $HOME/.zfs/snapshot|
|Lustre Storage PFS (Data Oasis)|Tolerates a single-drive hardware failure through RAID 5 on the Lustre object storage|4 x 10GbE; delivers > 500 MB/sec to a single node; > 2.5 GB/sec aggregate|Min/max quotas currently undefined; Triton supports project special requests|No backup of this storage is performed|800 TB|Storage accessible at|
|Local Node Temporary Space|Tolerates a single-drive hardware failure via Linux software RAID 1 (mirroring)|Local HD; about 50 MB/sec per node; about 14 GB/sec aggregate|Generally about 5 GB per node; purged between jobs|No backup of this storage is performed|Depends on the number of nodes requested for the job|Storage accessible at /tmp from the local node only|
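Since home-area snapshots are exposed read-only under $HOME/.zfs/snapshot and mirror the layout of the home directory itself, recovering an accidentally deleted or overwritten file is just a copy. A hedged sketch; the helper name and the snapshot/file names in the usage note are made up for illustration.

```shell
# List the available snapshots of the home area with:
#   ls "$HOME/.zfs/snapshot"
# then copy the old version of a file back into place. The snapshot tree
# mirrors the home directory, so paths are relative to $HOME.
restore_from_snapshot() {
    # $1: snapshot root, $2: file path relative to home, $3: destination dir
    local snaproot="$1" relpath="$2" dest="$3"
    [ -f "$snaproot/$relpath" ] || { echo "not in snapshot: $relpath" >&2; return 1; }
    cp "$snaproot/$relpath" "$dest/"
}
```

For example (hypothetical snapshot name and file): `restore_from_snapshot "$HOME/.zfs/snapshot/nightly-2013-01-03" project/data.txt "$HOME/project"`. Because snapshots are read-only, the copy cannot damage the retained version.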
View a rack diagram of the complete Triton Resource.
View a complete system diagram of the Triton Resource.
Download the Gigabit Ethernet Network Diagram (PDF).