Microservices

Docker Containers

Microservice architecture is an approach to designing and running applications. Such applications typically run within containers, popularized in recent years by Docker.

Containers are portable, efficient, and disposable, packaging code and its dependencies together. A containerized microservice typically runs a single process, rather than an entire stack within the same computing environment. This allows portions of your application to be replaced or scaled independently as needed.

Research Computing runs microservices in an orchestration environment named DCOS (the Datacenter Operating System), built on Apache Mesos and Marathon. DCOS makes the deployment and management of many containers easy and scalable. This cluster has over 1,000 cores and roughly 1TB of memory allocated to running containerized services. DCOS also has over 300TB of cluster storage and can attach to Project storage.

Microservices Architecture

Basic Principles of Microservices

1. Microservice architecture is a design approach, or a way of building things.

2. The easiest and most common way to run microservices is inside of containers.

  • We teach workshops on containers and how to use them. Browse the course overview for Introduction to Containers at your own pace.
  • Docker provides an excellent Getting Started tutorial.
  • Katacoda offers a great hands-on Docker training series for free.
  • DCOS allows users to inject ENV environment variables and encrypted secrets into containers at runtime. This means sensitive information does not need to be written into your container.
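
Injected values arrive as ordinary environment variables, so your application code just reads them at startup. A minimal sketch of the consuming side (the `DB_HOST` and `DB_PASSWORD` variable names are hypothetical examples, not DCOS requirements):

```python
import os

def database_config():
    """Build database connection settings from environment variables.

    DB_HOST and DB_PASSWORD are hypothetical names; DCOS (or a local
    `docker run -e DB_PASSWORD=...`) would inject them at runtime, so
    the secret never needs to be written into the container image.
    """
    return {
        # Fall back to a sensible default for non-sensitive settings.
        "host": os.environ.get("DB_HOST", "localhost"),
        # Fail fast if the secret was not injected.
        "password": os.environ["DB_PASSWORD"],
    }
```

For local testing you can emulate the injection by passing `-e` flags to `docker run`; the code stays identical in both environments.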

Uses for Research

Microservices are typically used in computational research in one of two ways:

  1. Standalone microservices or small stacks, such as static HTML websites, interactive or data-driven web applications and APIs, databases, or scheduled task containers. Some examples:
    • Simple web container to serve Project files to the research community or as part of a publication.
    • Reference APIs can handle requests based either on static reference data or databases.
    • Shiny Server presents users with interactive plots to engage with your datasets.
    • A scheduled job to retrieve remote datasets, perform initial ETL processing, and stage them for analysis.
  2. Microservices in support of HPC jobs - Some workflows in HPC jobs require supplemental services to run, such as relational databases, key-value stores, or reference APIs. Some examples:
    • Cromwell/WDL pipelines rely on MySQL databases to track job status and state, so a pipeline can resume if a portion of it fails.
    • Key-value stores in Redis can track an index of values or a running count that is updated as part of a job.
    • A scheduled job to refresh a library of reference data from an external source, such as reference genomes or public datasets.
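
The Redis counter pattern above can be sketched in Python. The `incr` call mirrors the real redis-py API; the in-memory `FakeRedis` class below is only a stand-in so the sketch runs without a live server, and the host name in the comment is hypothetical:

```python
class FakeRedis:
    """Minimal in-memory stand-in for redis.Redis, so this sketch runs
    without a server. Real code would instead use the redis-py package:

        import redis
        r = redis.Redis(host="redis.example.internal")  # hypothetical host
    """
    def __init__(self):
        self._data = {}

    def incr(self, key, amount=1):
        # redis-py's incr is atomic on the server; this local stub is not.
        self._data[key] = int(self._data.get(key, 0)) + amount
        return self._data[key]

    def get(self, key):
        return self._data.get(key)


def record_chunk_done(r, job_id):
    """Each HPC task calls this when it finishes a chunk of work;
    the shared counter gives a running progress total for the job."""
    return r.incr(f"job:{job_id}:chunks_done")
```

Because every `incr` on a real Redis server is atomic, many Rivanna tasks can update the same counter concurrently without coordination.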

Common Deployments

Service | Accessibility | Description
NGINX Web Server | Public | A fast web server that can serve static HTML, Flask or Django apps, or RESTful APIs, or expose files in Project storage (demos available for each).
Apache Web Server | Public | An extremely popular web server that can run your static HTML, Flask or Django apps, RESTful APIs, or expose files stored in Project storage.
Shiny Server | Public | Runs R-based web applications and offers a dynamic, data-driven user interface. See a demo or try using LOLAweb.
MySQL Database | HPC networks | A stable, easy-to-use relational database. Run MySQL in support of your HPC projects in Rivanna or web containers.
mongoDB Database | HPC networks | A popular NoSQL database. Use mongoDB in support of your Rivanna jobs or web containers. Try Mongo.
Redis Database | HPC networks | An extremely fast, durable key-value database. Use Redis in support of Rivanna jobs or other processes you run. Try Redis.
Recurring Tasks | n/a | Schedule or automate tasks or data staging using the language of your choice (bash, Python, R, C, Ruby).
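
A recurring-task container is just a script plus a schedule. As a sketch of the data-staging case (the URL, destination path, and column names are hypothetical), a Python job might fetch a delimited dataset and reshape it before analysis:

```python
import csv
import io
import urllib.request

def transform(tsv_text):
    """Initial ETL step: parse tab-separated text into row dicts,
    dropping empty rows. The column layout is whatever the remote
    dataset provides; no specific schema is assumed here."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [row for row in reader if any(row.values())]

def stage_dataset(url, dest_path):
    """Fetch a remote dataset and write the transformed rows locally.
    In DCOS this script would run on a schedule (e.g. nightly)."""
    with urllib.request.urlopen(url) as resp:
        rows = transform(resp.read().decode("utf-8"))
    with open(dest_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

The same container image can be reused for many staging jobs by injecting the source URL and destination path as environment variables at schedule time.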

Eligibility


Pricing

Currently our microservices cluster is in beta testing. We welcome any single-container application for free, either as one of the deployments listed above or as a ready-to-run container that you bring.

Have a more complicated design? Contact us.


Singularity

Want to run your container within an HPC environment? It can be done with Singularity!

Singularity is a container application targeted to multi-user, high-performance computing systems. It interoperates well with SLURM and with the Lmod modules system. Singularity can be used to create and run its own containers, or it can import Docker containers.

Read more.


Contact Us

Submit a consultation request to discuss your microservice implementation.