Services

RCSS provides the following services:


Investment

Users who have not invested will find it increasingly difficult to run jobs that require significant resources. Please contact us at mudoitrcss@missouri.edu for help including your computational and storage needs in your next grant application. We will continue to provide grant-friendly mechanisms to invest (free of overhead/indirect costs).

Benefits of Investing

The primary benefit of investing is receiving "shares". Shares are used to calculate the percentage of the cluster owned by an investor. As long as an investor has used less than they own, they will receive higher priority on the queue. This is called "FairShare" and can be monitored by running sshare. A FairShare value above 0.5 indicates that an investor has used less than they own; conversely, a value below 0.5 indicates the investor has used more than they own. FairShare is by far the largest factor in queue placement and wait times.
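
A minimal sketch of monitoring FairShare with sshare (the account name is a placeholder for your own investor account):

sshare -u $USER                  # show shares, usage, and FairShare for your own associations
sshare -A <investor account>     # show the same information for a specific account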

In addition to this primary benefit, investors have access to other benefits:

  • First-time investors will receive 3TB of HPC Storage at no additional cost.
  • Access to the following Quality of Services (QOS):
    • manyjobs: to submit more than 2000 jobs (upon workflow approval)
    • long: to submit up to 500 jobs with run times of up to 7 days (upon workflow approval)

Investors in GPU nodes will have access to these QOS:

  • gpu4: to submit 2000 jobs on the gpu4 partition
  • gpu-investor-7d: to submit a single job of up to 7 days on a single card of the GPU partition (requires workflow approval)

Investors will be granted Slurm accounts to use in order to charge their investment (FairShare). These accounts can contain the same members as a POSIX (storage) group, or any other set of users, at the request of the investor.

To use an investor account in an sbatch script, use:

#SBATCH --account=<investor account>

To use a QOS in an sbatch script, use:

#SBATCH --qos=<qos>
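Putting the two directives together, a minimal sbatch script sketch (the account is a placeholder; the long QOS and its 7-day limit follow the QOS list above; the resource requests and program name are illustrative):

#!/bin/bash
#SBATCH --account=<investor account>   # charge the job to the investment
#SBATCH --qos=long                     # long QOS: jobs up to 7 days (upon workflow approval)
#SBATCH --time=7-00:00:00              # 7 days
#SBATCH --ntasks=1
#SBATCH --mem=8G                       # illustrative memory request

srun ./my_program                      # placeholder executable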

Non-Investor Policy

Non-investor users are able:

  • to submit 2000 jobs up to 2 days
  • to submit 2000 jobs up to 2 hours on gpu3 partition

While non-investors can submit the same number of jobs as investors by default, non-investors are limited to 24 simultaneously running jobs.

HPC Pricing

The HPC Service is available at any time at the following rates for the 2020-2021 year:

Service        Rate     Unit           Support
HPC Compute    $3,600   Per 12 Cores   5 Years
HPC Storage    $15.50   Per TB/Month   Month to Month
HTC Storage    $160     Per TB         5 Years
GPRS Storage   $7.00    Per TB/Month   Month to Month
  • With an initial HPC Compute investment, 3TB of HPC Storage is provided at no extra cost
  • New HTC Storage investments must be at least 50TB; upgrades to existing investments must be at least 10TB
  • HTC Storage is currently sold out; please contact mudoitrcss@missouri.edu for more information on when this service will be available again

Teaching Cluster

The teaching cluster is meant as a resource for students and instructors for computational coursework. It is a full HPC cluster, and students are allowed to run jobs on the head node.

Service Capabilities

  • 12 compute nodes (152 Cores)

Example Use Cases

  1. As a student, I want to learn how to login and run a simple program on an HPC cluster with minimal setup time and effort.
  2. As an instructor, I want a resource to teach my students about the different functions of high performance computing without needing to spend a lot of time to set up accounts and get the students logged in.

Service Policies

For Instructors:

  1. The teaching cluster is provided to all UM students; students must be official UM students with a PawPrint or UM SSO ID.
  2. The environment is "research grade": no backups or high availability. Students/TAs are responsible for backing up any data throughout the semester.
  3. Only infrastructure support is provided; there is no student/end-user support. All support requests should come through the instructor or TAs via mudoitrcss@missouri.edu. Support is best effort and provided during regular business hours.
  4. Software is limited to CentOS 7 packages (installed via yum) that require minimal configuration, plus a subset of Lewis scientific packages.
  5. We do not support a development environment/IDE. Users need to use sftp or console-based text editors. For Windows users, we have a site license for MobaXterm.
  6. We take security seriously. We upgrade the entire environment (including rebooting) on a regular basis and without notice. SELinux is enforced.
  7. Students must be made aware of the "Teaching Cluster Policy" and the limitations of the environment.

For Students:

  1. Use of this system is governed by the rules and regulations of the University of Missouri and the University of Missouri System.
  2. Users must be familiar with and abide by the UM System acceptable use policy and the UM System Data Classification System (DCL). Collected Rules and Regulations - Chapter 110, Data Classification System.
  3. Only DCL1 data is permitted on the cluster. See the Data Classification System - Definitions.
  4. This is a shared environment with limited storage, RAM, and CPU, and no quotas. Please be nice.
  5. Data is not backed up, and all data is deleted when students graduate. This policy may be revised.
  6. Students must contact instructors for course related questions and support.

GPU Service

Some scientific workflows are greatly accelerated when they run on one or more Graphics Processing Units (GPUs). The Lewis Cluster includes a partition dedicated to GPU processing to accommodate these workflows. There are two kinds of NVIDIA GPUs available for use on Lewis, namely GeForce and Tesla class GPUs.

GPU Capabilities

14 GPU Nodes Spanning 3 Generations:

  • GPU3 (10 Nodes)
    • Pascal Architecture
      • NVIDIA Tesla P100: 1
      • NVIDIA GeForce GTX 1080 Ti: 7
    • Kepler Architecture
      • NVIDIA Tesla K40m: 1
      • NVIDIA Tesla K20Xm: 6
  • GPU4 (4 Nodes)
    • Volta Architecture
      • NVIDIA Tesla V100: 12

Example Use Cases

  • As a researcher I want to train a neural network to classify images, but my project budget does not cover the cost of purchasing and managing the amount of GPU hardware that is required to complete this task.

Service Policies

  1. The GPU partition must only be used for GPU accelerated workflows. Jobs running on the GPU partition that are not utilizing the GPU are subject to cancellation and potential loss of GPU partition access.
  2. The use of srun for active development and debugging is permitted but is limited to allocations of 2 hours or less (see the examples after this list). Excessive srun session idle time or an excessive number of srun sessions is not permitted.
  3. GPU jobs that utilize only 1 GPU should be structured in a way to allow other jobs to share the node. The exclusive SLURM option should NOT be used and CPU cores, memory, and GPU resources need to be 'right-sized' to the workload. Resource requests should match the correct class and quantity of GPUs for the algorithm.
  4. No more than 50% of the partition resources will be available for concurrent use by any single user.
  5. Users who have not invested in gpu4 can run jobs on the Gpu partition (which allows access to all GPU nodes) for up to 2 hours and, by request, jobs up to 2 days on the gpu3 partition.
  6. Access to the gpu4 partition is limited to investors only. Jobs must be submitted directly to the gpu4 partition (--partition gpu4) with a gpu4 QOS (--qos gpu4) and the investor's GPU account to charge (for example, --account engineering-gpu). The account must not be 'general' or the investor's CPU investment account.
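
To illustrate policies 2 and 6 above, two hedged sketches (the resource sizes are illustrative; the account name repeats the example from policy 6, and train.sh is a placeholder script):

# short interactive session for development/debugging (2-hour limit, policy 2)
srun --partition Gpu --gres gpu:1 --cpus-per-task 4 --mem 16G --time 2:00:00 --pty /bin/bash

# gpu4 investor submission (policy 6)
sbatch --partition gpu4 --qos gpu4 --account engineering-gpu train.sh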

HPC Compute

The Lewis cluster is a High Performance Computing (HPC) cluster that currently consists of 217 compute nodes and nearly 8000 compute cores with around 3 PB of storage. The cluster serves on average 178 active users per month with about 3.5 million core hours of compute. Interested parties may invest in Lewis in increments of 12 cores. The investor purchases their slice of Lewis cores up front, and RCSS provides maintenance of the hardware, software, and other infrastructure for 5 years. 3 TB of group storage is included with the investment as well. Larger scale investments (such as at the rack level) are possible; interested parties should contact the RCSS team to discuss their specific requirements further.

HPC Compute Capabilities

217 Compute Nodes Spanning 4 Generations:

  • HPC3
    • 19 Nodes (456 Cores)
    • Intel Haswell
  • HPC4/HTC4/HPC4RC
    • 101 Nodes (2828 Cores)
    • Intel Broadwell
  • HPC5
    • 35 Nodes (1400 Cores)
    • Intel Skylake
  • HPC6
    • 62 Nodes (2976 Cores)
    • Intel Cascade Lake

HPC Compute Best Practices

  • Never run calculations on the home disk
  • Always use SLURM to schedule jobs (see the example after this list)
  • The login nodes are only for editing files and submitting jobs
  • Do not run calculations interactively on the login nodes
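
For example, instead of running a calculation interactively on a login node, wrap it in a batch script and submit it through SLURM (the script name is a placeholder):

sbatch myjob.sh     # submit the job to the SLURM scheduler
squeue -u $USER     # check the status of your queued and running jobs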

Example Use Cases

  1. I need to run a computational fluid dynamics simulation that requires very fast communication across different logical units.
  2. I need to analyze a large pool of gene expression data that far exceeds the processing capacity of my lab's PCs.
  3. I want to run a simulation on drug interaction and toxicity without involving live subjects.

Service Policies

General Use

  • Use of this system is governed by the rules and regulations of the University of Missouri and the University of Missouri System and by the requirements of any applicable granting agencies. Those found in violation of policy are subject to account termination.
  • Users will follow the UM System acceptable use policy.
  • Users are responsible for ensuring that only data classified as DCL1 or DCL2 is stored or processed on the system.

Accounts

  • Faculty of the University of Missouri – Columbia, Kansas City, St. Louis, and S&T may request user accounts for themselves, current students, and current collaborators. Account requests require the use of a UM System email address. The exception is for researchers of the ShowMeCI consortium who are not part of the University of Missouri System; they may apply for accounts for themselves and their students using their organization email address.

Collaborator Accounts

  • Faculty requesting accounts for collaborators must first apply for their collaborator to have a Courtesy Appointment through the faculty's department using the Personnel Action Form. After the Courtesy Appointment approval, faculty can submit an Account Request for their collaborator. Collaborators must submit account requests for their students using the student's university email address. Collaborators agree to abide by the External Collaborator policy.

External Collaborator Policy

External collaborators must agree to the following:

  1. Follow the University of Missouri's rules and policies listed above and your home institution's policies and rules.
  2. Data on the cluster is restricted to DCL1 or DCL2 as described above.
  3. Data storage and computation is for academic research purposes only; no personal, commercial, or administrative use.
  4. Follow the Research Computing cluster policy.
  5. Under no circumstances may access to the user account be shared or granted to third parties.
  6. As a collaborator you will be assigned different priorities and limits from the rest of the users in the cluster.
  7. Data is not backed up on the cluster, and Research Computing is not responsible for the integrity of the data, data loss, or the accuracy of the calculations performed.
  8. We ask that you give the University of Missouri, Division of IT, Research Computing Support Services acknowledgment for the use of the computing resources.
  9. We ask that you provide us with citations of any publications and/or products that utilized the computing resources.

Account Sharing

  • Direct sharing of account data on the cluster should only be done via a shared group folder. A shared group folder is set up by the faculty adviser or PI. This person is the group owner and can appoint other faculty to be co-owners. The owners and co-owners approve the members of the group and are responsible for all user additions and removals. The use of collaboration tools, such as Git, is encouraged for (indirect) sharing and backup of source data.
  • Sharing of accounts and ssh-keys is strictly prohibited. Sharing of ssh-keys will immediately result in account suspension.

Running Jobs

  • All jobs must be run using the SLURM job scheduler. Long term or resource heavy processes running on the login node are subject to immediate termination.
  • Normal jobs running on the cluster are limited to two days of running time. Jobs of up to 7 days may be run after consultation with the RCSS team. Long jobs may occasionally be extended upon request. The number of long jobs running on the cluster is limited to ensure that users can run jobs in a timely manner. All jobs are subject to termination for urgent security or maintenance work or for the stability of the cluster.

Investor Policy

  • Investors purchase nodes; in exchange for sharing idle cycles, Research Computing Support Services provides space, power, and cooling, as well as management of the hardware, operating system, security, and scientific applications, at no cost for five years. After 5 years, the nodes are placed in the Bonus pool for extended life and are removed at the discretion of RCSS based on operating conditions. Investors get prioritized access to their capacity via the SLURM FairShare scheduling policy, and unused cycles are shared with the community. Investors get 3TB of group storage and help migrating their computational research to the cluster. For large investments (rack scale), we will work with researchers and vendors to test and optimize configurations to maximize performance and value. Information on becoming an investor can be requested via mudoitrcss@missouri.edu.

Acknowledgements

  • We ask that when you cite any of the RCSS clusters in a publication to send an email to mudoitrcss@missouri.edu as well as share a copy of the publication with us. To cite the use of any of the RCSS clusters in a publication please use: "The computation for this work was performed on the high performance computing infrastructure provided by Research Computing Support Services and in part by the National Science Foundation under grant number CNS-1429294 at the University of Missouri, Columbia MO."

HPC Storage

HPC Storage is fastest when dealing with "large" files (>100MB). This is because files on HPC Storage are striped, which means they are split across multiple storage devices. In Lustre, file striping involves Object Storage Targets (OSTs). Much of the work Lustre does involves coordinating Object Storage Servers (OSSs) to reassemble a file from the various OSTs. Therefore, workloads that involve many small files create far more work for Lustre than workloads with the same amount of data that deal only with a few files.

The HPC Storage service should be used for loading large datasets into memory before processing them, or writing large output files. When data is too large to fit into memory, please use file streams with large chunk sizes to process files when possible. The HPC Storage service should not be used for millions of small files - such usage will impact performance for all users. To avoid inappropriate usage, consider using formats such as HDF5 or NetCDF to store large collections of related data. If you have any questions about your workflow, please contact us and we will be happy to help!
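
As a sketch of chunked streaming (the path, block size, and processing program are hypothetical):

# stream one large file in 64MB chunks instead of issuing many small reads
dd if=/storage/hpc/<group>/bigfile.dat bs=64M status=none | ./process_stream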

Our HPC Storage service (/storage/hpc, /data, /group, and /scratch) is a Lustre parallel filesystem ideal for storing data and results.

HPC Storage Capabilities

  • 595 TB of shared high speed parallel storage
  • 2 MDSs and 4 OSSs, serving 2 MDTs and 4 OSTs respectively

Example Use Cases

  1. I need to be able to get gigabytes of data from disk into memory as fast as possible
  2. I have many large files that need to be read by multiple nodes simultaneously. These files are too large to be stored on local scratch
  3. I need a place to store tarballs that will be extracted to local scratch for further processing

HPC Storage Best Practices

  • Store input and output that will be used in the near-term by batch jobs
  • Avoid large metadata operations, such as ls -la (see the example after this list)
  • Ensure your files are appropriate for Lustre
  • Tune your folders for your workload
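
For example, plain ls reads only directory entries, while ls -la forces per-file attribute lookups, which are expensive on Lustre; lfs find is a Lustre-aware alternative for recursive listings (paths are placeholders):

ls /storage/hpc/<group>/dataset                 # avoids the per-file stat calls of ls -la
lfs find /storage/hpc/<group>/dataset --type f  # Lustre-aware recursive file listing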

Policy

  1. There are no backups of any storage. The RCSS team is not responsible for data integrity and data loss. You are responsible for your own data and data backup. RCSS recommends the UMKC Researcher Managed Backup Storage.
  2. Groups are located at /storage/hpc/group/group_name, and all users belonging to a group have the same access permissions by default. The PI for the group is the only person who can approve additions and removals of users in groups.

Appropriate Use

HPC Storage Should Be Used For

  • Input and output, stored in an efficient container
    • binary formats are usually great
    • CSV and TSV are good if loaded using big chunks
    • other text formats are okay as long as they minimize random I/O
  • Read-Only Metadata

HPC Storage Should Not Be Used For

  • Executable files and source code*
  • Small text files for input
    • Attempt to concatenate as many of these files as possible
  • Files larger than RAM that require random I/O
    • Use HTC Storage, then copy the files to local scratch before processing
  • Log files*
  • Read/Write Metadata*
  • Files intended for human use*

*Use your home directory instead

HPC Storage Must Not Be Used For

  • Datasets stored as thousands of files under 1MB
    • Avoid this practice. Get in touch with us if you need help finding other solutions.
  • Files that require locks
    • The home filer supports file locks
  • Files that you are not prepared to lose in the event of a storage failure
    • NO RCSS STORAGE SERVICE SHOULD BE USED FOR THESE FILES

Tuning

Unlike many filesystems, Lustre can be tuned by users in userspace. The most important commands to know for Lustre tuning are lfs getstripe and lfs setstripe. These commands show and modify stripe settings on files and folders. The stripe count is the number of Object Storage Targets (OSTs) that a file is stored on. Large files benefit from larger stripe counts, while small files benefit from smaller stripe counts.

Examples

Note: these changes only affect new files.

To get the current stripe information of a file or directory:

lfs getstripe <path>

To set up a directory to be used for small files (mostly <128MB):

lfs setstripe -c 1 <dir>

To set up a directory to be used for both small and large files:

lfs setstripe -c 2 <dir>

To set up a directory to be used for large files:

lfs setstripe -c 4 <dir>
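
A typical tuning workflow sketch (the directory path is a placeholder): create a directory, set its stripe count, and verify the setting before writing new files into it:

mkdir /storage/hpc/<group>/large_outputs                 # placeholder path
lfs setstripe -c 4 /storage/hpc/<group>/large_outputs    # stripe new files across 4 OSTs
lfs getstripe /storage/hpc/<group>/large_outputs         # confirm the stripe settings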

HTC Storage

The High Throughput Computing Storage (HTC Storage) service has been designed for researchers who need large amounts of long-term storage for High Throughput Computing (HTC) with cost as a primary consideration. New investments must be at least 50TB, and all upgrades must be in amounts of 10TB or greater (in multiples of 10TB).

  • HTC Storage is currently sold out; please contact mudoitrcss@missouri.edu for more information on when this service will be available again

HTC Storage Capabilities

  • 1240 TB of low computational intensity project storage
  • Utilizes the ZFS file system

Example Use Case

  1. I have a large amount of research data that I may want to quickly analyze on Lewis at a later date. Instead of constantly moving my data between sources, it would be nice to be able to have access to cost effective storage.

Service Policies

  1. The storage is only internally accessible on the Lewis cluster compute nodes, and externally accessible on the Lewis login nodes and data transfer (DTN) nodes via rsync over ssh or sftp (see the example after this list).
  2. Storage is limited to DCL1 and DCL2 research data. Administrative, commercial, and personal data is prohibited.
  3. Storage is allocated in blocks and each storage block has its own quota mount point.
  4. Storage blocks are prepaid in full. New investments must be at least 50TB.
  5. Existing investments must be incremented in 10TB blocks.
  6. Storage nodes are expanded in large increments (100TB). Depending on available capacity, requests for storage blocks may be delayed until a storage node is expanded.
  7. The storage blocks expire after 5 years. Users must either purchase another storage block and transfer data to the new storage node; transfer data to another system; or do nothing and the data will be destroyed. Users may request a hardship waiver to the CI Council for temporary storage. No automatic data migration services are provided and users are responsible for the data integrity of moved data.
  8. Storage is based on ZFS and provided on a single node and single volume basis with no high availability (HA).
  9. There are no backups of the data. The data storage system is resilient to multiple disk failures (parity), and we do our best to protect the data, but we are not responsible for any data loss. Snapshots are available and count towards the storage block.
  10. Storage is transferable but is not refundable.
  11. By default, all users in a group have the same access permissions.
  12. The group PI is the only one able to approve user additions and removals.
  13. The path to HTC storage is: /storage/htc/group_name
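
Per policies 1 and 13, data moves in and out through the login or DTN nodes over rsync or sftp. A minimal sketch (the username, hostname, and group name are placeholders):

rsync -av ./my_dataset/ <user>@<lewis-dtn-host>:/storage/htc/group_name/my_dataset/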