...
- Data is replicated across three disks on three nodes, so it survives the simultaneous failure of two full nodes in the EWFM cluster (see the verification sketch after this list).
- Ceph software performs self-healing (maintaining three replicas) if one or two replicas are lost due to disk or node failure.
- Ceph software performs daily and weekly data scrubbing to verify that replicas remain consistent and to guard against bit rot.
- Deleted data is NOT recoverable.
- Data is NOT protected against catastrophic cluster failures or loss of the EWFM datacenter.
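Ceph provides this level of durability through a replicated pool of size 3 whose replicas are placed on distinct nodes, as described above. The sketch below is a hypothetical illustration of how an operator with read access to the cluster could confirm those settings with the standard `ceph` CLI; the pool name `research-data` is a placeholder, not part of the actual service.

```python
import json
import subprocess

POOL = "research-data"  # hypothetical pool name; substitute your project's pool

def ceph_json(*args: str) -> dict:
    """Run a read-only `ceph` command and parse its JSON output."""
    out = subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# Replica count for the pool (expected: 3 copies kept, 2 required to stay writable).
size = ceph_json("osd", "pool", "get", POOL, "size")
min_size = ceph_json("osd", "pool", "get", POOL, "min_size")
print(f"replicas: {size['size']}, minimum to serve I/O: {min_size['min_size']}")

# Overall cluster health; HEALTH_OK means all placement groups hold
# their full complement of replicas (self-healing is complete).
health = ceph_json("health")
print(f"cluster health: {health.get('status', health)}")
```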
Research groups can opt to back up their data to a secondary Ceph cluster, in which case:
- Data is stored on two distinct clusters in two locations. The primary cluster is located in the EWFM datacenter, the backup in the Packard datacenter.
- Data is replicated across three disks on three nodes in each cluster, so it survives the simultaneous failure of two full nodes in either cluster, or of five nodes across both clusters.
- Ceph software performs self-healing (maintaining three replicas) if one or two replicas are lost due to disk or node failure.
- Ceph software performs daily and weekly data scrubbing to verify that replicas remain consistent and to guard against bit rot.
- Snapshots of the data are taken and stored weekly (see the snapshot sketch after this list).
- Data is protected in the event of catastrophic failure of the primary cluster or loss of the EWFM datacenter, so long as the Packard site remains operational.
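The weekly snapshots are managed by the storage team; the sketch below only illustrates the underlying mechanism, under the assumption that the project space is exposed as a CephFS directory. On CephFS, a read-only snapshot is created simply by making a new directory inside the hidden `.snap` folder. The path `/projects/mygroup` is a hypothetical placeholder.

```python
import datetime
import pathlib

# Hypothetical mount point for a project share; the actual path is
# assigned by the storage administrators.
PROJECT_DIR = pathlib.Path("/projects/mygroup")

def weekly_snapshot(project_dir: pathlib.Path) -> pathlib.Path:
    """Create a CephFS snapshot named after today's date by making a
    directory under the hidden `.snap` folder."""
    name = f"weekly-{datetime.date.today().isoformat()}"
    snap = project_dir / ".snap" / name
    snap.mkdir()  # CephFS turns this mkdir into a read-only snapshot
    return snap

if __name__ == "__main__":
    print(f"created snapshot at {weekly_snapshot(PROJECT_DIR)}")
```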
Ceph Charges
- All Ceph projects are purchased for a 5-year duration at a rate of $375/TB. No snapshots or backups are provided at this rate.
- PIs can request snapshots and backups to a secondary cluster for an additional $375/TB (5-year duration); see the cost sketch below.
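As a worked example of the pricing above, a hypothetical 10 TB allocation costs 10 × $375 = $3,750 for five years of primary storage, or 10 × ($375 + $375) = $7,500 with snapshots and backup added. The short sketch below simply applies those two rates.

```python
RATE_PRIMARY = 375  # $/TB for a 5-year term, primary cluster only
RATE_BACKUP = 375   # additional $/TB for a 5-year term with snapshots + backup

def quote(tb: float, with_backup: bool = False) -> float:
    """Return the total 5-year cost in dollars for a Ceph project."""
    rate = RATE_PRIMARY + (RATE_BACKUP if with_backup else 0)
    return tb * rate

# Example: 10 TB allocation
print(quote(10))                    # 3750.0  (primary only)
print(quote(10, with_backup=True))  # 7500.0  (primary + backup cluster)
```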
...