UVA Research Computing

Creating innovative solutions for researchers

Tag: queues

  • Rivanna

    Rivanna is the University of Virginia’s High-Performance Computing (HPC) system. As a centralized resource, it offers hundreds of pre-installed software packages for computational research across many disciplines. The Rivanna supercomputer currently provides over 8,000 cores and 8PB of storage. All UVA faculty, staff, and postdoctoral associates are eligible to use Rivanna; students are eligible when working as part of faculty research. The sections below contain important information for new and existing Rivanna users; please read each carefully. New users are invited to attend one of our free orientation sessions (“Introduction to the HPC System”), held throughout the year, during office hours, or by appointment.
  • FastX Web Portal

    FastX is a commercial solution that enables users to start an X11 desktop environment on a remote system. It is available on the Rivanna frontends, and using it is equivalent to logging in at the console of the frontend. We recommend that most users access FastX through its web interface. To connect, point a browser to https://rivanna-desktop.hpc.virginia.edu and enter your computing ID and NetBadge password; you will then see a launch screen. If you have no pre-existing sessions, you must create one: click the Launch Session button to bring up a screen showing the options.
  • Open OnDemand

    Open OnDemand is a graphical user interface that allows access to Rivanna via a web browser. Within the Open OnDemand environment, users have access to a file explorer; interactive applications such as JupyterLab, RStudio Server, and FastX Web; a command-line interface; and a job composer and job monitor. Rivanna is accessible through the Open OnDemand web client at https://rivanna-portal.hpc.virginia.edu. Your login is your UVA computing ID and your password is your NetBadge password. Some services, such as FastX Web, require the Eservices password. If you do not know your Eservices password, you must reset it through ITS by changing your NetBadge password.
  • Open OnDemand: File Explorer

    Open OnDemand provides an integrated file explorer for browsing and managing small files. Rivanna offers multiple locations to store your files, each with its own limits and policies. Specifically, each user has a relatively small amount of permanent storage in their home directory and a large amount of temporary storage (/scratch) where large data sets can be staged for job processing. Researchers can also lease storage that is accessible on Rivanna; contact Research Computing or visit the storage website for more information. The file explorer provides these basic functions: renaming files; viewing text and small image files; editing text files; and downloading and uploading small files. To see the storage locations that you have access to from within Open OnDemand, click on the Files menu. The same locations can also be reached from the command line, as sketched below.
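    As a minimal sketch only: the shell commands below illustrate working with the two storage locations described above. The /scratch/$USER layout and the file names are assumptions for illustration, not paths documented on this page.

    ```bash
    # Permanent (but small) storage: the home directory
    cd "$HOME"
    du -sh .                              # check how much space home currently uses

    # Temporary (but large) storage: stage a data set in scratch for a job
    cd "/scratch/$USER"                   # assumed per-user scratch directory
    mkdir -p my_dataset
    cp "$HOME/input.tar.gz" my_dataset/   # hypothetical input file
    ```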
  • Open OnDemand: Job Composer

    Open OnDemand allows you to submit SLURM jobs to the cluster without using shell commands. The job composer simplifies the process of creating a script, submitting a job, and downloading results. Here we describe creating a job from a template provided by the system. Open the Job Composer tab from the Open OnDemand Dashboard. Go to the New Job tab and, from the dropdown, select From Template. You can choose the default template or select from the list. Click on Create New Job. You will need to edit the file that pops up, so click the light blue Open Editor button at the bottom; the file is an ordinary SLURM batch script, as sketched below.
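    For orientation, here is a minimal sketch of what such a batch script might contain. The job name, partition, resources, module, and program are illustrative assumptions, not the contents of any actual Rivanna template.

    ```bash
    #!/bin/bash
    #SBATCH --job-name=example        # hypothetical job name
    #SBATCH --partition=standard      # single-node queue (see Rivanna Queues)
    #SBATCH --ntasks=1                # one task
    #SBATCH --time=01:00:00           # one hour of wall-clock time
    #SBATCH --mem=4G                  # memory for the job

    module purge
    module load gcc                   # assumed module; load what your program needs

    ./my_program input.dat            # hypothetical program and input file
    ```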
  • Rivanna Queues

    Several queues (or “partitions”) are available to users for different types of jobs. One queue is restricted to single-node (serial or threaded) jobs; another is for multinode parallel programs; and others provide access to specialty hardware such as large-memory nodes or nodes offering GPUs. You select a partition when you submit a job, as sketched below.

                 Max time   Max nodes   Max cores   Max memory   Max memory per   SU Charge
    Partition     per job     per job     per job     per core     node per job        Rate
    standard       7 days           1          40         12GB            375GB        1.00
    parallel       3 days          45         900          6GB            120GB        1.00
    largemem       4 days           1          16         62GB            975GB        1.
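    As an illustrative sketch, the directives below request one of the partitions from the table; the memory and time values are assumptions chosen to fall within the listed largemem limits.

    ```bash
    #!/bin/bash
    #SBATCH --partition=largemem      # large-memory nodes: max 4 days, 16 cores/job
    #SBATCH --ntasks=1
    #SBATCH --mem=500G                # assumed value, within the 975GB node limit
    #SBATCH --time=2-00:00:00         # two days, under the 4-day largemem limit

    ./memory_hungry_analysis          # hypothetical program
    ```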
  • SLURM Job Manager

    Rivanna is a multi-user, managed environment. It is divided into frontends, which are directly accessible by users, and compute nodes, which must be accessed through the resource manager. We use the Simple Linux Utility for Resource Management (SLURM), an open-source tool that performs cluster management and job scheduling for Linux clusters. Jobs are submitted to the resource manager, which queues them until the system is ready to run them. SLURM selects which jobs to run, when to run them, and how to place them on the compute nodes, according to a predetermined site policy meant to balance competing user needs and to maximize efficient use of cluster resources. Day-to-day interaction with SLURM happens through a few commands, sketched below.
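    A brief sketch of the standard SLURM commands run from a frontend; the script name and job ID are placeholders.

    ```bash
    sbatch myjob.slurm        # submit a batch script to the resource manager
    squeue -u "$USER"         # list your queued and running jobs
    scancel 12345678          # cancel a job by its (placeholder) job ID
    sacct -j 12345678         # view accounting info after the job finishes
    ```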