OUR PRESENTATION IN THE SUSE BOOTH AT SC18: PART 1

TL;DR: SUSE booth presentation from SC18; video here. Singularity containers for compute-driven workloads in the enterprise on SLES.

If the measure of an event includes a metric for sustaining dialog, then SC18 continues to prove exemplary. About two months later, we’re still following up on this November 2018 event in substantial ways – and some of us feel like we’ve only recently recovered from the frenetic pace of activities that led up to, happened during, and followed SC18.

In this first of two posts, we showcase presentations made by Ian Lumb in the SUSE booth on the exhibits floor at SC18. Though Ian is a self-confessed ‘HPC veteran’ who describes himself as an avid fan of the event, the presentation shared below was made less than one month after Ian joined Sylabs as a technical writer.

The title and abstract for Ian’s presentation were, respectively, “Singularity Containers on SUSE Linux Enterprise Server: The Perfect Fit for Enterprise Performance Computing” and:

When it comes to compute-intensive applications and workflows, enterprise customers demand dedicated compute resources to maximize performance while minimizing overhead. Whereas bare-metal servers running an instance of SUSE Linux Enterprise Server (SLES) might then appear to present the optimal configuration for Enterprise Performance Computing (EPC), performance and overhead guarantees can hardly be provided for dedicated use by a single application when utilizing virtualized environments. Virtual machines (VMs), as well as traditional solutions for containerization, allow for dedicated resources in shared settings; however, they do so at the expense of overhead and simplicity – in particular, obfuscating access to key computational resources such as GPUs and interconnect fabrics. Each of these use cases manifests as a suboptimal fit for those shared computational infrastructures that organizations have built in their on-premise data centers or remotely hosted clouds. By making the choice to employ open source-based Singularity containers for all classes of applications and their workflows, however, organizations can maximize performance while minimizing overhead – in a simple, efficient, and secure fashion that includes access to special-purpose resources. Singularity takes a unique approach to EPC by allowing untrusted users to run untrusted containers in a trusted way – an approach well illustrated via concrete examples. Two use case examples, from the numerous set of contributions on the project’s GitHub site (https://github.com/sylabs/examples), will be showcased – whereas the first example considers a classic problem in traditional High Performance Computing (HPC) via OpenFOAM (https://www.openfoam.com/) for Computational Fluid Dynamics (CFD), the second will focus on distributed Deep Learning via Horovod (https://eng.uber.com/horovod/).
By making use of SLES in data centers or clouds, Singularity delivers the perfect blend of high performance and low overhead for the broadest spectrum of EPC use cases.
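For readers who haven’t tried Singularity yet, the workflow the abstract alludes to is simple from the command line. A minimal sketch follows, using the `lolcow` demo image from Singularity’s own documentation; the `tensorflow.sif` image and `train.py` script in the last step are hypothetical placeholders standing in for a Horovod-style GPU workload:

```shell
# Pull a container image from the Sylabs Container Library into a local
# Singularity Image Format (SIF) file.
singularity pull lolcow.sif library://sylabsed/examples/lolcow

# Execute the container's default runscript as an ordinary, unprivileged user.
singularity run lolcow.sif

# For GPU workloads (e.g. distributed Deep Learning with Horovod), the --nv
# flag binds the host's NVIDIA driver libraries into the container.
# (tensorflow.sif and train.py are illustrative names, not from the post.)
singularity exec --nv tensorflow.sif python train.py
```

Because the container runs as the invoking user rather than as root, this is one concrete sense in which "untrusted users run untrusted containers in a trusted way" on shared infrastructure.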

Ian proudly admits he wrote the above abstract himself, but shamelessly ‘repurposed’ slides (available here) developed originally by Sylabs CEO and Singularity founder Gregory Kurtzer. You can watch Ian’s presentation below or via the SUSE YouTube channel here.

Finally, a recommendation: during Ian’s presentation, integration with Kubernetes is identified as a significant item on the Sylabs roadmap for Singularity development – a roadmap item, by the way, for which significant progress has already been made (as this final update for 2018 indicates). If you aren’t already all jazzed up about Kubernetes, you should check out SUSE’s musical parody here. We can assure you that if you turn this up to 11, it’ll provide a jolt to your day that’s equivalent to a double-espresso or your go-to energy drink!

Kudos and thanks to our great friends at SUSE for recording booth presentations, making the industry’s best parody videos, and just being a whole lot of fun to work with. Please stay tuned for Part 2 in this short series of posts.
