...

LTS Research Computing provides a Ceph-based storage resource, also referred to simply as Ceph. In Fall 2018, a 768TB storage cluster was designed, built, and deployed to replace the original 1PB Ceph cluster. In Fall 2020, total storage was increased to 2019TB with the addition of 796TB from Hawk and a further 455TB investment from LTS.

How is Data Stored in Ceph?

...

  • 7 storage nodes

    • One 2.4GHz 16-core AMD EPYC 7351P

    • 128GB 2666MHz DDR4 RAM

    • Three Micron 1.9TB SATA 2.5-inch Enterprise SSD

      • Total Raw Storage: 5.7TB for CephFS (Fast Tier)

    • Two Intel 240GB DC S4500 Enterprise SSD (OS only)

    • 13 Seagate 8TB SATA HDD

      • Total Raw Storage: 104TB Ceph (Slow Tier)

    • 10 GbE and 1 GbE network interface

    • CentOS 7.x

  • 11 storage nodes

    • One 3.0GHz 16-core AMD EPYC 7302P

    • 128GB 2666MHz DDR4 RAM

    • Three Micron 1.9TB SATA 2.5-inch Enterprise SSD

      • Total Raw Storage: 5.7TB for CephFS (Fast Tier)

    • Two Intel 240GB DC S4510 Enterprise SSD (OS only)

    • 9 Seagate 12TB SATA HDD

      • Total Raw Storage: 108TB Ceph (Slow Tier)

    • 10 GbE and 1 GbE network interface

    • Debian 10

  • Raw Storage: 1916TB (Slow Tier) and 102.6TB (Fast Tier) (see the sketch below)

  • Available Storage: 543TB (Slow Tier) and 29TB (Fast Tier)
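
As a quick cross-check on the totals above, the short Python sketch below recomputes the raw tier capacities from the per-node disk counts listed for the 2018 (7-node) and 2020 (11-node) hardware. Disk sizes are the decimal (vendor) TB figures quoted on this page, and the variable names are illustrative only; the gap between raw and available storage comes from Ceph replication/erasure-coding and filesystem overhead, which the sketch does not model.

    # Recompute the raw Ceph tier capacities from the per-node specs above.
    # All sizes are vendor (decimal) terabytes, as quoted on this page.

    # 2018 cluster: 7 nodes, each with 13 x 8TB HDD and 3 x 1.9TB SSD
    nodes_2018 = 7
    slow_2018 = nodes_2018 * 13 * 8       # 728 TB  (HDD, Slow Tier)
    fast_2018 = nodes_2018 * 3 * 1.9      # 39.9 TB (SSD, Fast Tier)

    # 2020 expansion: 11 nodes, each with 9 x 12TB HDD and 3 x 1.9TB SSD
    nodes_2020 = 11
    slow_2020 = nodes_2020 * 9 * 12       # 1188 TB (HDD, Slow Tier)
    fast_2020 = nodes_2020 * 3 * 1.9      # 62.7 TB (SSD, Fast Tier)

    print(f"Slow Tier raw: {slow_2018 + slow_2020} TB")        # 1916 TB
    print(f"Fast Tier raw: {fast_2018 + fast_2020:.1f} TB")    # 102.6 TB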

Why two tiers of storage?

...