
We offer two High Performance Computing clusters and a Ceph-based storage cluster for the Lehigh Research Community.

  • Sol: an 89-node High Performance Computing Cluster. Sol is a condominium cluster and can be expanded by interested research groups by investing in the Condo Program.
  • Hawk: a 34-node High Performance Computing Cluster funded by the National Science Foundation's Campus Cyberinfrastructure Award 2019035.
  • Ceph: a combined 2019TB storage cluster based on the Ceph distributed storage system.

The computing and data storage resources are available for use in the classroom for registrar-scheduled courses. See Teaching Uses below.


HPC Accounts

All users of Lehigh's research computing systems must obtain an individual account, associated with their personal Lehigh University username, for their exclusive use.


Acknowledgement of Lehigh Research Computing Resources

In publications, reports, and presentations that utilize Research Computing Services, please acknowledge Lehigh University with the following statement:

"Portions of this research were conducted with research computing resources provided by Lehigh University"

We also strongly encourage you to provide the Research Computing group with a list of publications, graduated students and their thesis/dissertation titles, and grants received based on work done on Research Computing resources.

Research uses of Research Computing Systems

An HPC account provides access to Sol and Hawk. A faculty member may obtain access to Sol and Hawk by submitting an allocation proposal to the Research Computing Steering Committee (RCSC). There are several types of allocations, depending on your needs, as described on the Accounts & Allocation page.


Account Request: To obtain an HPC account, an allocation, or both on Sol, the faculty member sponsoring the user account should submit an allocation request using the forms available on the Accounts & Allocation page. If a faculty member already has an active allocation, please email the Manager of Research Computing to add additional users to the allocation.

Ceph Storage

In Fall 2018, Research Computing deployed a refreshed 768TB Ceph storage solution. Faculty, Departments, Centers, and Colleges can purchase a Ceph project in units of 1TB. In Fall 2020, Ceph was augmented with an additional 796TB from Hawk and 455TB from LTS, for a total of approximately 2PB of storage.

  • Data is replicated across three disks on three separate nodes, protecting against the simultaneous failure of two full nodes in the EWFM cluster.
  • Ceph software performs self-healing (maintaining three replicas) if one or two replicas are lost due to disk or node failure.
  • Ceph software performs daily and weekly data scrubbing to ensure replicas remain consistent and to guard against bit rot.
  • Data which is deleted is NOT recoverable.
  • Data is NOT protected against catastrophic cluster failures or loss of the EWFM datacenter.

Ceph Charges

  • There is no charge for a storage allocation requested from the RCSC. Storage allocations that are not renewed or approved annually will be deleted, and no backups will be kept.
  • Ceph projects can be purchased for a 5-year duration at a rate of $375/TB; no snapshots or backups are provided (see the cost example below).
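For quick budgeting, the cost of a purchased Ceph project follows directly from the rate above. The sketch below is illustrative only; the function name and example sizes are hypothetical and not part of any official tooling.

    # Cost sketch for a purchased Ceph project, based on the published
    # rate of $375 per TB for a 5-year term (no snapshots or backups).
    RATE_PER_TB_5YR = 375  # USD per TB for the full 5-year duration

    def ceph_project_cost(size_tb: int) -> dict:
        """Total and effective annual cost for a project of size_tb TB."""
        total = size_tb * RATE_PER_TB_5YR
        return {"size_tb": size_tb, "total_usd": total, "per_year_usd": total / 5}

    if __name__ == "__main__":
        for size in (1, 5, 10):  # illustrative project sizes
            print(ceph_project_cost(size))

For example, a 5TB project costs $1,875 for the 5-year term, or an effective $375 per year.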

HPC Research groups and Ceph

Each Principal Investigator is provided with 1TB of Ceph space for their research group. If additional space is needed, please include a justification in your compute allocation request or explicitly request a storage allocation. This storage exists as long as your allocation is active and will be deleted (no backups kept) a month after your allocation expires. If you purchase a Ceph project, your storage will exist for 5 years irrespective of your compute allocation status.

Account Request: To purchase a Ceph project, please contact the Manager of Research Computing for more information.

Condo Investments

Faculty, Departments, Centers, and Colleges can invest in Sol by purchasing additional compute nodes to support their research, thereby increasing the overall capacity of Sol. Such investors, Condo Investors, will be provided with an annual allocation proportional to their investment that can be shared with their collaborators. Condo Investors who need more computing time than their investment provides can purchase additional allocation, if available, in blocks of 10,000 SUs for $100 (see the SU example at the end of this section). These increments must be expended during the allocation cycle in which they were purchased and cannot be rolled over to the next cycle.

A Condo Investor can request additional accounts sharing their allocation for an annual charge of $50 (each).

Prospective Investors should review the Condo Program before contacting HPC about investing in Sol.

Additional Storage: Additional storage is available by purchasing a Ceph project volume at $375/TB for 5 years. To request additional home directory storage, please submit a request to http://lts.lehigh.edu/help.
Definition of One Core-hour/Service Unit/SU: 1 hour of computing on 1 core. The base compute node on Sol, with 20 cores, consumes 20 SUs per hour of computing.
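As a quick illustration of the SU arithmetic above, the sketch below converts node-hours on a 20-core base node into SUs and shows how many $100 blocks of 10,000 SUs would cover a given shortfall. The function names and usage figures are hypothetical and for illustration only.

    import math

    CORES_PER_BASE_NODE = 20   # base Sol compute node
    SUS_PER_BLOCK = 10_000     # additional SUs are sold in blocks of 10,000
    COST_PER_BLOCK = 100       # USD per block

    def sus_used(nodes: int, hours: float, cores_per_node: int = CORES_PER_BASE_NODE) -> float:
        """1 SU = 1 core-hour, so SUs = nodes * cores_per_node * hours."""
        return nodes * cores_per_node * hours

    def blocks_to_purchase(extra_sus_needed: float) -> tuple[int, int]:
        """Number of 10,000-SU blocks (and their total cost) covering a shortfall."""
        blocks = math.ceil(extra_sus_needed / SUS_PER_BLOCK)
        return blocks, blocks * COST_PER_BLOCK

    # Hypothetical usage: 2 base nodes for 24 hours = 960 SUs
    print(sus_used(nodes=2, hours=24))                   # 960
    print(blocks_to_purchase(extra_sus_needed=25_000))   # (3, 300): 3 blocks, $300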

Teaching uses of Research Computing Systems

Faculty members considering use of research computing facilities (including Kubernetes-based JupyterHub and RStudio Server) for teaching purposes should request an Education Allocation at least eight weeks prior to the class start date, with an anticipated enrollment count, a proposed syllabus, and details of their proposed use of HPC systems.

These accounts are typically associated with a rostered course and last for the duration of that course (up to one semester). Faculty should request these accounts for their students; requests by TAs or department coordinators will be denied. Faculty are encouraged to submit a final report at the end of the semester to help us improve the services provided for teaching.

A course allocation provides 1TB of Ceph space and a compute allocation based on the number of students in the course. Faculty can request storage-only education allocations (please justify storage > 1TB).

Instructors requiring assistance with estimating total or per-student SU requirements for the course should contact Research Computing Staff at least 4 weeks prior to the beginning of the semester.
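As a rough starting point for such an estimate, the sketch below multiplies enrollment by per-student usage, using the core-hour definition of an SU given above. All of the input numbers are hypothetical placeholders; Research Computing Staff can help refine them for a real course.

    def course_su_estimate(students: int,
                           assignments: int,
                           runs_per_assignment: int,
                           cores_per_run: int,
                           hours_per_run: float) -> float:
        """Rough total SUs for a course: 1 SU = 1 core-hour."""
        per_student = assignments * runs_per_assignment * cores_per_run * hours_per_run
        return students * per_student

    # Hypothetical course: 30 students, 4 assignments, 5 runs per assignment,
    # 4 cores per run, 2 hours per run -> 4,800 SUs for the semester.
    print(course_su_estimate(30, 4, 5, 4, 2))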

Usage Policy: Student accounts cannot be shared and will remain active until two weeks past the end of the semester. All compute-intensive tasks must be submitted via the batch scheduler. On request, LTS Research Computing staff will guest lecture on how to use the resource, write and submit job scripts, and monitor jobs. A compute-intensive task is defined as any operation on the HPC resource other than editing, copying, moving, or deleting files; submitting and monitoring jobs; and issuing simple commands such as ls, cp, mv, mkdir, rm, tail, tar, gzip/gunzip, more, cat, and less. All student data not saved in the Ceph project space will be purged when accounts are deactivated.
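For students and instructors new to batch computing, the sketch below shows what a minimal job script might look like, assuming the scheduler on Sol is Slurm and the script is submitted with sbatch. The job name, resource sizes, and placeholder workload are hypothetical; consult Research Computing Staff for values appropriate to your course allocation.

    #!/usr/bin/env python3
    #SBATCH --job-name=course-example       # hypothetical job name
    #SBATCH --nodes=1                       # one base compute node
    #SBATCH --ntasks=20                     # 20 cores -> up to 20 SUs per hour
    #SBATCH --time=01:00:00                 # 1-hour wall-clock limit
    #SBATCH --output=course-example.%j.out  # scheduler writes stdout here

    # Compute-intensive work belongs here, not on the login node.
    # This placeholder simply burns a little CPU as a stand-in for real work.
    import math

    total = sum(math.sqrt(i) for i in range(10_000_000))
    print(f"placeholder result: {total:.2f}")

Submitted with "sbatch course-example.py" (assuming Slurm), this job would consume up to 20 SUs for its hour on a 20-core base node.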


