Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block-, and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.
...
NOTE: Ceph does not do backups. If you need backups, one alternative is to mount your Ceph project as a network drive and use CrashPlan to back up its contents. Users are responsible for their data. Ceph will not lose data due to mechanical failure, but we cannot protect against user error.
System Configuration
7 storage nodes
- One 2.4GHz 16-core AMD EPYC 7351P
- 128GB 2666MHz DDR4 RAM
- Three Micron 1.9TB SATA 2.5" Enterprise SSDs (Total Raw Storage: 5.7TB for CephFS, Fast Tier)
- Two Intel 240GB DC S4500 Enterprise SSDs (OS only)
- 13 Seagate 8TB SATA HDDs (Total Raw Storage: 104TB for Ceph, Slow Tier)

11 storage nodes
- One 3.0GHz 16-core AMD EPYC 7302P
- 128GB 2666MHz DDR4 RAM
- Three 1.9TB SATA SSDs (Total Raw Storage: 5.7TB for CephFS, Fast Tier)
- Two Intel 240GB DC S4510 Enterprise SSDs (OS only)
- Nine 12TB SATA HDDs (Total Raw Storage: 108TB for Ceph, Slow Tier)
- 10 GbE and 1 GbE network interfaces
- Debian 10
Raw Storage: 1916TB (Slow Tier) and 102.6TB (Fast Tier)
Available Storage: 543TB (Slow Tier) and 29TB (Fast Tier)
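For reference, the cluster-wide raw figures follow directly from the per-node inventory above (the gap between raw and available capacity is the space consumed by Ceph's data redundancy):

```latex
% Cluster-wide raw capacity implied by the node inventory above
\begin{align*}
  \text{Slow tier (raw)} &= 7 \times 104\,\text{TB} + 11 \times 108\,\text{TB} = 1916\,\text{TB}\\
  \text{Fast tier (raw)}  &= (7 + 11) \times 5.7\,\text{TB} = 102.6\,\text{TB}
\end{align*}
```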
...
Ceph storage projects are shared over CIFS (using cifs-utils on Linux) and can be mounted as a network drive on Windows, macOS, and Linux. Ceph projects are mounted on Sol and Hawk. Groups that use Ceph as their home directory have access to their projects when they log in to Sol. All others can access their Ceph projects at /share/ceph/projectname.
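The snippet below is a minimal sketch of mounting a project over CIFS on a Linux client. The server name ceph.example.edu, the credentials file path, and the mount point are placeholders rather than values from this documentation, so substitute the actual share details provided by the storage administrators. On Windows or macOS, use the native "Map network drive" / "Connect to Server" dialogs instead.

```python
#!/usr/bin/env python3
"""Minimal sketch: mount a Ceph project share over CIFS on a Linux client.

Assumptions (placeholders, not values from this documentation): the share
is exported as //ceph.example.edu/projectname, credentials are stored in
~/.ceph-cifs-credentials, and the mount point is /mnt/projectname.
Requires cifs-utils and root privileges (run via sudo).
"""
import getpass
import subprocess
from pathlib import Path

SERVER = "ceph.example.edu"                           # hypothetical CIFS server name
PROJECT = "projectname"                               # replace with your Ceph project name
MOUNT_POINT = Path("/mnt") / PROJECT                  # local mount point
CREDENTIALS = Path.home() / ".ceph-cifs-credentials"  # file with username=/password= lines


def mount_project() -> None:
    """Create the mount point and mount the share with mount.cifs."""
    MOUNT_POINT.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "mount", "-t", "cifs",
            f"//{SERVER}/{PROJECT}",
            str(MOUNT_POINT),
            "-o", f"credentials={CREDENTIALS},uid={getpass.getuser()},vers=3.0",
        ],
        check=True,  # raise CalledProcessError if the mount fails
    )


if __name__ == "__main__":
    mount_project()
```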
...