=== FreeBSD HPC Ports Modernization: Slurm 25.11 and Unbundled PMIx/PRRTE

Links: +
link:https://cgit.freebsd.org/ports/commit/?id=1536bac0dd26d81e315652929b8bfaff9c136089[sysutils/slurm-wlm: 23.11.7 → 25.11.0] +
link:https://www.freshports.org/net/pmix/[net/pmix: Process Management Interface for Exascale (PMIx)] +
link:https://www.freshports.org/net/prrte/[net/prrte: PMIx Reference RunTime Environment (PRRTE)] +
link:https://www.freshports.org/sysutils/py-clustershell/[sysutils/py-clustershell: Python framework for efficient cluster administration] +
link:https://kavocado.net/reports/[Kavocado Monthly Status Reports – FreeBSD HPC notes]

Contact: Generic Rikka

During this quarter, a significant amount of work went into making FreeBSD a more practical target for modern HPC clusters by bringing key components of the Slurm + PMIx + PRRTE stack up to date and making them available as first-class ports.

==== Work completed

* Updated package:sysutils/slurm-wlm[] from 23.11.7 to 25.11.0, tracking the latest upstream long-term series and drastically reducing the number of local patches required for FreeBSD.
* Refreshed the Slurm rc.d scripts so that `slurmctld` and `slurmd` integrate better with a typical FreeBSD deployment (configurable configuration and log directories, pidfiles, and status and cleanup helpers).
* Introduced package:net/pmix[] and package:net/prrte[] as standalone ports, and switched package:net/openmpi[] to use these unbundled runtimes instead of the copies shipped inside the OpenMPI distfile. This aligns FreeBSD more closely with how many Linux HPC distributions package the MPI runtime stack.
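To give a concrete picture of the refreshed rc.d integration, a minimal head-node configuration might look as follows. This is a sketch only: the knob names assume the usual `<service>_enable` rc.d convention for the port's `slurmctld` and `slurmd` scripts, so check the installed scripts for the authoritative names and any additional configurable knobs.

```shell
# Hypothetical /etc/rc.conf fragment -- knob names follow the standard
# FreeBSD rc.d <service>_enable convention assumed by the port's scripts.
slurmctld_enable="YES"   # Slurm controller daemon (head node only)
slurmd_enable="YES"      # Slurm compute-node daemon
```

After installing the updated package:net/openmpi[], the `ompi_info` utility can be used to inspect which PMIx the build was compiled against, confirming that the unbundled package:net/pmix[] runtime is in use.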
* Added package:sysutils/py-clustershell[], a Python framework widely used for scalable cluster administration, providing FreeBSD users with a familiar tool found on many production HPC systems.

==== Work in progress

* Iterating on additional Slurm integration improvements (plugins, defaults, documentation) to make it easier to deploy Slurm on FreeBSD in real clusters.
* Extending the HPC userland stack with further tools such as test frameworks and job-oriented utilities, so that FreeBSD can serve as a realistic development and validation platform for HPC software.
* Porting package:sysutils/mpifileutils[] and its dependencies (package:devel/libcircle[], package:devel/lwgrp[], package:devel/lwgrpd[]) to provide MPI-parallel file utilities commonly used on large HPC filesystems (currently under review).
* Adding and refining HPC-oriented Python tooling, including package:benchmarks/py-reframe[] (an HPC regression testing framework) and continued work around package:sysutils/py-clustershell[].
* Initial work on bringing package:devel/spack[] to FreeBSD as a complementary tool for HPC software development and experimentation, with the goal of improving compatibility with existing HPC workflows.

==== Future plans

* Continue tracking upstream Slurm, PMIx, and PRRTE releases closely so that FreeBSD remains a viable target for sites that expect a modern MPI/Slurm stack.
* Document a “reference” Slurm + OpenMPI + PMIx + PRRTE setup on FreeBSD, to lower the barrier for new sites that want to experiment with FreeBSD in an HPC context.
* Identify and address FreeBSD-specific gaps or regressions to ensure the software stack remains feature-complete and robust on FreeBSD.
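Since ClusterShell features in both the completed and in-progress work above, a brief illustration of the node-set syntax it is built around may help readers unfamiliar with it. The following is a self-contained pure-Python sketch of range expansion, not ClusterShell's own API or implementation:

```python
import re

def expand_nodeset(spec):
    """Expand a ClusterShell-style node set like 'node[1-3,7]' into node names.

    Illustrative sketch only: real ClusterShell also handles zero-padding,
    step syntax, nested brackets, and full set operations.
    """
    m = re.fullmatch(r"(\w+)\[([\d,\-]+)\]", spec)
    if m is None:
        return [spec]  # plain hostname, nothing to expand
    prefix, ranges = m.groups()
    names = []
    for part in ranges.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            names.extend(f"{prefix}{i}" for i in range(int(lo), int(hi) + 1))
        else:
            names.append(f"{prefix}{part}")
    return names

print(expand_nodeset("node[1-3,7]"))  # ['node1', 'node2', 'node3', 'node7']
```

The real package additionally ships the `nodeset` and `clush` command-line tools, which apply this syntax to parallel command execution and node-set arithmetic across a cluster.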