Many high-performance scientific and data analytics applications face the ever-growing complexity of their software stacks. We experienced this first hand in the experiments at the Large Hadron Collider (LHC) at CERN. LHC experiment applications allow hundreds of researchers to plug in their specific algorithms. The resulting software stacks comprise hundreds of thousands of small files and binaries, and they often change on a daily basis. Distributing such applications from a shared software area or through containers can be challenging, in particular in HPC environments tuned for parallel writing rather than for high-frequency (meta-)data reading.
This talk presents the status and strategic directions of the CernVM File System (CernVM-FS), a purpose-built file system that addresses the problem of software distribution. CernVM-FS emerged from the high-throughput and cloud computing environment. For several years, it has been a mission-critical system for the worldwide computing operations of the LHC experiments. Recent targeted developments have made it more tractable in pure HPC environments, such as Cori at NERSC in Berkeley and Piz Daint at CSCS in Lugano (#3 of the TOP500). The talk will outline the experience from these installations and future plans for CernVM-FS in HPC environments.