One unnecessarily time-consuming task for HPC user support teams is installing software for users. Because of the advanced nature of a supercomputing system (think multiple modern multi-core microprocessors, possibly alongside co-processors such as GPUs; a high-performance network interconnect; bleeding-edge compilers and libraries; etc.), compiling software from source on the actual operating system and system architecture on which it will run is typically strongly preferred over using readily available binary packages that were built in a generic way. Combine this environment with software applications developed by research scientists, who typically lack an extensive background in software development and computer science (“it wouldn’t be called research if we knew what we were doing”) and whose applications require a multitude of (typically open-source) libraries and tools as dependencies, and you have a recipe for potential disaster.
Moreover, HPC user support teams typically need to provide a variety of builds and versions of most software packages. Since supercomputers are used simultaneously by many users from different scientific domains with conflicting needs, having just a single software version installed and updating software installations ‘in place’ is simply not good enough. For a variety of reasons, traditional well-established packaging tools fall short when dealing with scientific software. Until recently, HPC sites typically invested large amounts of time, manpower and money, possibly combined with crude in-house scripting, to tackle this tedious task while trying to provide a coherent software stack. Consequently, a huge amount of work was being duplicated across sites. Although each system has its own specific characteristics that warrant compilation from source whenever possible, the build and install procedures that need to be followed are usually very similar from site to site. Even though this was well known and recognised, tools to automate this ubiquitous burden on HPC user support teams were sorely lacking.