...

Data is replicated across three disks on three nodes in three racks with distinct power feeds and network paths, protecting against the simultaneous failure of two full nodes in the primary data center. With current connectivity, the cluster supports an aggregate read/write speed of 3.75GB/s, with the capability to increase bandwidth as needed. The Ceph software performs daily and weekly data scrubbing to ensure replicas remain consistent. An option for daily snapshots of data, with backups stored in a secondary data center on Lehigh’s campus, is also available.

NOTE: Ceph does not perform backups. If you need daily snapshots and a place to store them, you must purchase an additional block of Ceph storage. If you need backups, one alternative is to mount the Ceph project as a network drive and use CrashPlan to back up the contents of your Ceph project, as sketched below.
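CrashPlan itself is configured through its own desktop client rather than a script; the point of the mount-and-back-up approach is simply that a mounted Ceph project looks like an ordinary folder to any backup tool. The minimal Python sketch below illustrates that idea by mirroring a mounted project directory to another location. The mount point and destination paths are hypothetical placeholders, and this is an illustration only, not a replacement for a managed backup product.

```python
# Minimal sketch: mirror a mounted Ceph project directory to a backup target.
# Paths are hypothetical; substitute your actual mount point and destination.
import shutil
from pathlib import Path

CEPH_MOUNT = Path("/mnt/ceph-project")        # hypothetical network-drive mount point
BACKUP_TARGET = Path("/backup/ceph-project")  # hypothetical backup destination

def mirror_project(src: Path, dst: Path) -> None:
    """Copy everything under src into dst, preserving the directory layout."""
    if not src.is_dir():
        raise FileNotFoundError(f"Ceph project is not mounted at {src}")
    # dirs_exist_ok lets repeated runs refresh an existing backup tree (Python 3.8+).
    shutil.copytree(src, dst, dirs_exist_ok=True)

if __name__ == "__main__":
    mirror_project(CEPH_MOUNT, BACKUP_TARGET)
    print(f"Mirrored {CEPH_MOUNT} -> {BACKUP_TARGET}")
```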

System Configuration

  • 7 storage nodes

    • One 16-core AMD EPYC 7351P, 2.4GHz

    • 128GB 2666MHz DDR4 RAM

    • Three Micron 1.9TB SATA 2.5-inch Enterprise SSDs

      • Total Raw Storage: 5.7TB for CephFS (Fast Tier)

    • Two Intel 240GB DC S4500 Enterprise SSDs (OS only)

    • 13 Seagate 8TB SATA HDDs

      • Total Raw Storage: 104TB Ceph (Slow Tier)

    • 10 GbE and 1 GbE network interfaces

    • CentOS 7.x

  • Raw Storage: 728TB (Slow Tier) and 39.9TB (Fast Tier)

  • Available Storage: 206TB (Slow Tier) and 11.3TB (Fast Tier)
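For reference, the raw figures above follow directly from the per-node hardware, and the available figures reflect the three-way replication described earlier plus some operational headroom. The Python sketch below reproduces that arithmetic; the roughly 15% headroom factor is an assumption used only to show why the available capacity is lower than raw capacity divided by three, not a published cluster parameter.

```python
# Sketch of how the cluster-wide capacity figures relate to the per-node hardware.
# The replication factor of 3 comes from the service description above; the
# headroom factor is an assumption for illustration only.
NODES = 7
SLOW_RAW_PER_NODE_TB = 13 * 8.0    # 13 x 8TB SATA HDDs  -> 104 TB per node
FAST_RAW_PER_NODE_TB = 3 * 1.9     # 3 x 1.9TB SATA SSDs -> 5.7 TB per node
REPLICATION = 3                    # three copies on three nodes in three racks
HEADROOM = 0.85                    # assumed free-space reserve, not a published figure

slow_raw = NODES * SLOW_RAW_PER_NODE_TB          # 728 TB raw (slow tier)
fast_raw = NODES * FAST_RAW_PER_NODE_TB          # 39.9 TB raw (fast tier)

slow_usable = slow_raw / REPLICATION * HEADROOM  # ~206 TB available (slow tier)
fast_usable = fast_raw / REPLICATION * HEADROOM  # ~11.3 TB available (fast tier)

print(f"Slow tier: {slow_raw:.0f} TB raw, ~{slow_usable:.0f} TB available")
print(f"Fast tier: {fast_raw:.1f} TB raw, ~{fast_usable:.1f} TB available")
```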

...