Tag: allocations

  • Rivanna Queues

    Several queues (or “partitions”) are available to users for different types of jobs. One queue is restricted to single-node (serial or threaded) jobs; another is for multinode parallel programs; and others provide access to specialty hardware such as large-memory nodes or nodes offering GPUs. Per-job limits and charge rates for each partition:
      standard: max time 7 days, max 1 node, max 40 cores, max 9GB memory per core, max 375GB memory per node, SU charge rate 1.00
      parallel: max time 3 days, max 45 nodes, max 900 cores, max 9GB memory per core, max 120GB memory per node, SU charge rate 1.00
      largemem: max time 4 days, max 1 node, max 16 cores, max 64GB memory per core, max 975GB memory per node, SU charge rate 1.
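    The partition a job runs in is requested with standard SLURM directives in the batch script. The snippet below is a sketch for the parallel partition; the resource values are made up, chosen only to stay inside the limits listed above.
      # Illustrative directives for a multinode job in the parallel partition (example values, not recommendations)
      #SBATCH --partition=parallel
      #SBATCH --nodes=4                  # within the 45-node limit
      #SBATCH --ntasks-per-node=20
      #SBATCH --mem-per-cpu=5G           # 20 tasks x 5GB = 100GB, under the 120GB-per-node cap
      #SBATCH --time=2-00:00:00          # 2 days, under the 3-day maximum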
  • Allocations

    Time on Rivanna is allocated as Service Units (SUs). One SU corresponds to one core-hour. Multiple SUs make up what is called an allocation.
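    As a concrete example, a job that runs on 4 cores for 10 hours in a partition with a charge rate of 1.00 consumes 4 × 10 = 40 SUs from the allocation.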
  • Rivanna

    Rivanna is the University of Virginia’s High-Performance Computing (HPC) system. As a centralized resource, it has hundreds of pre-installed software packages available for computational research across many disciplines. The Rivanna supercomputer currently has over 8,000 cores and 8PB of storage of various types. All UVA faculty, staff, and postdoctoral associates are eligible to use Rivanna, as are students working on faculty research. The sections below contain important information for new and existing Rivanna users; please read each carefully. New users are invited to attend one of our free orientation sessions (“Introduction to the HPC System”) held throughout the year during office hours or by appointment.
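    The pre-installed packages are generally accessed through the environment module system; the commands below are the standard module commands, shown here as a sketch with an illustrative package name rather than a specific Rivanna module.
      module avail          # list the software modules installed on the system
      module load gcc       # load a package before using it ("gcc" is only an example name)
      module list           # show which modules are currently loaded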
  • FastX Web Portal

    FastX is a commercial solution that enables users to start an X11 desktop environment on a remote system. It is available on the Rivanna frontends, and using it is equivalent to logging in at the console of the frontend. We recommend that most users access FastX through its web interface. To connect, point a browser to https://rivanna-desktop.hpc.virginia.edu. After entering your computing ID and Netbadge password, you will see a launch screen. In this example we have no pre-existing sessions, so we must create one. Click the Launch Session button. This will bring up a screen showing the options.
  • Open OnDemand

    Open OnDemand is a graphical user interface that allows access to Rivanna via a web browser. Within the Open OnDemand environment, users have access to a file explorer; interactive applications such as JupyterLab, RStudio Server, and FastX Web; a command-line interface; and a job composer and job monitor. Rivanna is accessible through the Open OnDemand web client at https://rivanna-portal.hpc.virginia.edu. Your login is your UVA computing ID and your password is your Netbadge password. Some services, such as FastX Web, require the Eservices password. If you do not know your Eservices password, you must change it through ITS by changing your Netbadge password (see instructions).
  • Open OnDemand: File Explorer

    Open OnDemand provides an integrated file explorer to browse and manage small files. Rivanna has multiple locations to store your files, with different limits and policies. Specifically, each user has a relatively small amount of permanent storage in their home directory and a large amount of temporary storage (/scratch) where large data sets can be staged for job processing. Researchers can also lease storage that is accessible on Rivanna; contact Research Computing or visit the storage website for more information. The file explorer provides these basic functions: renaming files, viewing text and small image files, editing text files, and downloading and uploading small files. To see the storage locations that you have access to from within Open OnDemand, click on the Files menu.
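    As a rough sketch of the two personal storage areas described above, the commands below assume the common convention of a per-user home directory and a per-user directory under /scratch; the exact paths and quotas on Rivanna may differ.
      cd ~                      # home directory: small, permanent storage
      cd /scratch/$USER         # scratch: large, temporary space for staging job data (path convention assumed)
      du -sh /scratch/$USER     # check how much space the staged data currently uses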
  • Open OnDemand: Job Composer

    Open OnDemand allows you to submit SLURM jobs to the cluster without using shell commands. The job composer simplifies the process of creating a script, submitting a job, and downloading results. We will describe creating a job from a template provided by the system. Open the Job Composer tab from the Open OnDemand Dashboard. Go to the New Job tab and, from the dropdown, select From Template. You can choose the default template or you can select from the list. Click on Create New Job. You will need to edit the file that pops up, so click the light blue Open Editor button at the bottom.
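    For reference, the file created from a template is a short SLURM batch script. The sketch below shows the general shape of such a script; every directive value and the program name are illustrative assumptions, not the contents of an actual Rivanna template.
      #!/bin/bash
      #SBATCH --job-name=my_job        # name shown in the queue (illustrative)
      #SBATCH --partition=standard     # single-node partition
      #SBATCH --ntasks=1
      #SBATCH --cpus-per-task=4        # a small threaded job
      #SBATCH --time=02:00:00          # two hours
      #SBATCH --mem=16G

      module load gcc                  # load whatever software the job needs (example module)
      ./my_program                     # replace with the command you actually want to run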
  • Pricing

    Below is a schedule of prices for Research Computing resources.
    Rivanna allocations (type: SU limits; cost; SU lifetime):
      Standard: 100,000 SUs per application, renewable (400K SUs max per fiscal year); free; 12 months
      Deans’ Allocations: no limit; free; 12 months by default, negotiable
      Purchased: no limit; $0.015/SU (<1M SUs), $0.01/SU (≥1M SUs); never expires
      Instructional: 25,000 SUs; free; expires 2 weeks after the last teaching session
    Non-UVA personnel are charged at a rate of $0.07/SU.
    Storage (name: security; cost):
      Project: standard security; $60 per TB per year
      Value: standard security; $45 per TB per year
      ZFS: standard security; $30 per TB per year
      Ivy Central Storage: high security; $45 per TB per year
      Ivy NAS Storage: high security; $60 per TB per year
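    As a worked example of the purchased rates, buying 500,000 SUs falls in the $0.015/SU tier and would cost $7,500, while buying 2,000,000 SUs falls in the $0.01/SU tier and would cost $20,000.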
  • SLURM Job Manager

    Rivanna is a multi-user, managed environment. It is divided into frontends, which are directly accessible by users, and compute nodes, which must be accessed through the resource manager. We use the Simple Linux Utility for Resource Management (SLURM), an open-source tool that performs cluster management and job scheduling for Linux clusters. Jobs are submitted to the resource manager, which queues them until the system is ready to run them. SLURM selects which jobs to run, when to run them, and how to place them on the compute nodes, according to a predetermined site policy meant to balance competing user needs and to maximize efficient use of cluster resources.
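    In day-to-day use, jobs reach the resource manager through the standard SLURM commands; the script name and job ID below are placeholders.
      sbatch myjob.slurm        # submit a batch script to the scheduler (filename is a placeholder)
      squeue -u $USER           # list your pending and running jobs
      scancel 1234567           # cancel a job by its job ID (example ID)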