UVA Research Computing

Creating innovative solutions for researchers

  • Rivanna FAQs

    Categories: General Usage, Allocations, Applications, Job Management, Storage Management, Data Transfer, Other Questions.

    General Usage

    How do I gain access to Rivanna? A faculty or research staff member must first request an allocation on Rivanna. Full details can be found here.

    How do I log on to Rivanna? Use an SSH client from a campus-connected machine and connect to rivanna.hpc.virginia.edu. Instructions for using ssh and other login tools, as well as recommended clients for different operating systems, are here. You can also use FastX. If you are off Grounds you must use the UVA Anywhere VPN.

    How do I reset my current password / obtain a new password?
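
    As a quick illustration of the SSH route described above, the command below would be typed in a terminal on a campus-connected machine (or with the UVA Anywhere VPN active); mst3k is a placeholder for your own UVA computing ID:

        ssh mst3k@rivanna.hpc.virginia.edu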
  • Rivanna

    Rivanna is the University of Virginia’s High-Performance Computing (HPC) system. As a centralized resource it has hundreds of pre-installed software packages available for computational research across many disciplines. Currently the Rivanna supercomputer has over 8,000 cores and 8 PB of storage of various types. All UVA faculty, staff, and postdoctoral associates are eligible to use Rivanna, as are students working on faculty research projects. The sections below contain important information for new and existing Rivanna users. Please read each carefully. New users are invited to attend one of our free orientation sessions (“Introduction to the HPC System”) held throughout the year, or to get help during office hours or by appointment.
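
    The pre-installed software is typically reached through an environment-modules system. The sketch below assumes the standard module command is available on Rivanna's login nodes; the package name is only illustrative:

        # list the software packages installed on the cluster
        module avail

        # load a package into your environment before using it
        # ("gcc" here is just an example name)
        module load gcc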
  • Allocations

    Time on Rivanna is allocated as Service Units (SUs). One SU corresponds to one core-hour. Multiple SUs make up what is called an allocation (e.g., a new allocation = 100K SUs). Allocations are managed through MyGroups groups that are automatically created for Principal Investigators (PIs) when they submit an allocation request. All UVA faculty, staff, and postdoctoral associates are considered PIs and therefore eligible for an allocation on Rivanna. Students (both graduate and undergraduate) cannot request allocations, but they are allowed to use Rivanna as members of a MyGroups group controlled by a PI.

    Eligibility and Account Creation

    University of Virginia tenure stream and academic general faculty, research faculty, research scientists, and postdoctoral associates may request any type of allocation.
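
    To make the SU accounting above concrete: a job that runs on 4 cores for 25 hours is charged 4 × 25 = 100 SUs, so a new 100K-SU allocation would cover a thousand such jobs.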
  • Logging In

    Rivanna is accessible through a web portal, secure shell terminals, or a remote desktop environment. For all of these access points, your login is your UVA computing ID and your password is your Eservices password. If you do not know your Eservices password you must change it through ITS.

    Off Campus?

    All users who wish to access Rivanna while off Grounds must use the UVA Anywhere VPN client. Only Windows and Mac OS X operating systems are supported. Linux users should refer to these unsupported instructions to install and configure a VPN.

    Web-based Access

    Open OnDemand is a graphical user interface that allows access to Rivanna via a web browser.
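
    For terminal logins, an optional entry in the SSH configuration file on your own machine can shorten the command; this is only a convenience sketch, and mst3k is a placeholder for your UVA computing ID:

        # ~/.ssh/config  (on your local machine)
        Host rivanna
            HostName rivanna.hpc.virginia.edu
            User mst3k

    With this entry saved, "ssh rivanna" connects to the login nodes and prompts for your Eservices password.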
  • MobaXterm

    MobaXterm is the recommended login tool for Windows users. It bundles a tabbed ssh client, a graphical drag-and-drop sftp client, and an X11 window server for Windows, all in one easy-to-use package. It also includes a simple text editor with syntax coloring and several useful Unix utilities such as cd, ls, grep, and others, so that you can run a lightweight Linux environment on your local machine as well as use it to log in to a remote system.

    Download

    To download MobaXterm, click the link below and select the “Home” version, “Installer” edition.

    Download MobaXterm

    Run the installer as directed.
  • SLURM Job Manager

    Overview

    Rivanna is a multi-user, managed environment. It is divided into frontends, which are directly accessible by users, and compute nodes, which must be accessed through the resource manager. We use the Simple Linux Utility for Resource Management (SLURM), an open-source tool that performs cluster management and job scheduling for Linux clusters. Jobs are submitted to the resource manager, which queues them until the system is ready to run them. SLURM selects which jobs to run, when to run them, and how to place them on the compute nodes, according to a predetermined site policy meant to balance competing user needs and to maximize efficient use of cluster resources.
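
    As a minimal sketch of how a job reaches the compute nodes, the batch script below asks SLURM for one core for ten minutes and then runs a single command; the partition and account names are assumptions and should be replaced with the ones attached to your own MyGroups allocation:

        #!/bin/bash
        #SBATCH --job-name=hello          # name shown in the queue
        #SBATCH --ntasks=1                # a single task
        #SBATCH --cpus-per-task=1         # one core, i.e. one SU per hour
        #SBATCH --time=00:10:00           # wall-clock limit
        #SBATCH --partition=standard      # partition name is an assumption
        #SBATCH --account=mygroup         # placeholder MyGroups allocation name

        echo "Running on $(hostname)"

    The script would be submitted from a frontend with "sbatch hello.slurm", and "squeue -u $USER" shows where it sits in the queue.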