Molpro is a comprehensive toolkit for ab initio electronic structure calculations. See https://www.molpro.net/ for more details.
We have built Molpro 2022.1.1 using Global Arrays with the MPI-PR option. As a result, your Molpro job will run with one fewer compute task per node than you specify in your Slurm script. The remaining “progress rank” task on each node handles the asynchronous communication between processes and improves the parallel scaling of Molpro.
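The task arithmetic above can be sketched in shell. The node and task counts here are hypothetical (2 nodes, 36 tasks per node); in a real job Slurm sets these variables for you.

```shell
# Hypothetical values: 2 nodes, 36 tasks per node (set by Slurm in a real job)
SLURM_NNODES=2
SLURM_NTASKS_PER_NODE=36
SLURM_NTASKS=$(( SLURM_NNODES * SLURM_NTASKS_PER_NODE ))   # 72 MPI ranks total
# With MPI-PR, one rank per node becomes the progress rank,
# so only the remainder perform Molpro's actual computation:
COMPUTE_RANKS=$(( SLURM_NTASKS - SLURM_NNODES ))
echo "$COMPUTE_RANKS"   # 70 compute processes
```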
The scratch files generated by Molpro can often be larger than 16 GB and must be accessible by all the nodes in a job. Therefore, multi-node jobs must use the /lcrc/globalscratch filesystem, which has more than a petabyte of disk space available and will handle the scratch files of a sizeable Molpro job.
A single-node job can use the local /scratch filesystem as scratch space. We recommend running single-node jobs in the bdwd partition: /scratch on a bdwd node is a local 4 TB disk drive, which should be enough for most Molpro calculations. An example script, input, output, and logs can be found at /soft/molpro/2022.1.1/molpro_2022.1/example/1node.
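For orientation, a minimal single-node script might look like the sketch below. The input filename (my_job.mol) is hypothetical, and the module setup is omitted; consult the example at /soft/molpro/2022.1.1/molpro_2022.1/example/1node for the exact, tested form.

```shell
#!/bin/bash
#SBATCH -p bdwd
#SBATCH -N 1
#SBATCH -t 04:00:00

# Hypothetical input name; the node-local /scratch disk holds Molpro's scratch files.
molpro -n $SLURM_NTASKS -d /scratch/$SLURM_JOB_ID my_job.mol
rm -r /scratch/$SLURM_JOB_ID
```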
A multi-node job is a bit more complicated. Molpro needs a file that specifies the node on which each process will run; this file is generated by the command “srun hostname -s > goods”. An example script is shown below and can be found at /soft/molpro/2022.1.1/molpro_2022.1/example/2node/2N_mp_t.sh along with sample input, output, and log files.
#!/bin/bash
#SBATCH -p bdwall
#SBATCH -N 2
#SBATCH -t 04:00:00
module unload intel intel-mkl intel-mpi
module add gcc/9.2.0-pkmzczt
module add libiconv
source /gpfs/fs1/software/centos7/spack-latest/opt/spack/linux-centos7-x86_64/gcc-8.2.0/intel-mkl-2020.4.304-d6zw4xa/compilers_and_libraries_2020.4.304/linux/mkl/bin/mklvars.sh intel64
srun hostname -s > goods
molpro -n $SLURM_NTASKS/$SLURM_NTASKS_PER_NODE -d /lcrc/globalscratch/$SLURM_JOB_ID --nodefile ./goods tmp.mol
rm -r /lcrc/globalscratch/$SLURM_JOB_ID
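Note that if the job dies before reaching the final rm, the scratch directory is left behind. One defensive pattern (a sketch, not part of the shipped example) is to create the directory up front and remove it via a shell EXIT trap, which fires even when the script exits early:

```shell
# Stand-in path for illustration; in a job script this would be
# /lcrc/globalscratch/$SLURM_JOB_ID.
SCRATCH="${TMPDIR:-/tmp}/molpro_scratch.$$"
mkdir -p "$SCRATCH"
# The EXIT trap runs whether the script finishes normally or exits early,
# so the scratch directory is cleaned up either way.
trap 'rm -rf "$SCRATCH"' EXIT
```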