Configuration data for BG/L was not available at the time of the pmemd 9 release. We have since determined suitable config.h parameters for BG/L and are releasing sample config.h files, but will not incorporate BG/L configuration into the configure script infrastructure until the next Amber release. The two sample files are a standard BG/L configuration and a BG/L configuration that uses the MASSV libraries (see below).
The system administrator should take the appropriate file from above, place it under $AMBERHOME/src/pmemd as config.h, and run "make install" as usual to compile, link, and install pmemd. It may be necessary to modify the header and library search paths to match the local installation. The MASSV config.h is included for installations where users may want to use the generalized Born simulation method; the MASSV libraries significantly improve performance for this type of simulation. However, there is currently a bug in the BG/L MASSV libraries that can cause very large generalized Born runs to fail (observed at 25,086 atoms with no cutoff, though probably also at smaller sizes; more typically sized problems, such as myoglobin at roughly 2,000-3,000 atoms, are known to run without trouble). Installations may want to make a pmemd build compiled without MASSV available until this bug is fixed.
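The install steps above can be sketched as follows. This is only an illustration under assumptions: the sample file name (config.h.bgl here) is a stand-in for whichever sample is chosen, and a throwaway directory substitutes for the real Amber tree so the copy step can be exercised anywhere.

```shell
# Sketch only: config.h.bgl is a stand-in name, and mktemp -d creates a
# throwaway tree standing in for the real Amber installation.
AMBERHOME=$(mktemp -d)                            # on a real system, the Amber tree
mkdir -p "$AMBERHOME/src/pmemd"
echo '# sample BG/L settings' > config.h.bgl      # stand-in for the chosen sample file
cp config.h.bgl "$AMBERHOME/src/pmemd/config.h"   # install the sample as config.h
cd "$AMBERHOME/src/pmemd"
# make install    # then compile, link, and install pmemd as usual
```

Before running "make install", check the header and library paths inside config.h against the local installation, as noted above.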
To run pmemd on BG/L, always select "-mode VN" in the mpirun command. This selects "virtual node" mode rather than "coprocessor" mode: both CPUs on a BG/L node are then used as full-fledged compute CPUs, instead of one CPU being dedicated to communications.
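For illustration, a launch line might look like the following. Only "-mode VN" is prescribed by this note; the remaining flags, the CPU count, and the paths are assumptions about the site's BG/L mpirun wrapper and should be checked against local documentation.

```shell
# Only -mode VN comes from this note; -np, -exe, -args, and the paths
# are assumptions about the local BG/L mpirun wrapper. The command is
# built as a string here so its shape is visible without launching a job.
LAUNCH='mpirun -np 512 -mode VN -exe /path/to/pmemd -args "-O -i mdin -o mdout"'
echo "$LAUNCH"
```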
We have not studied interconnect geometry on BG/L in depth; we simply used the defaults. The interconnect geometry can be tuned for different applications, but it is likely to have little impact on pmemd runs at this point. The BG/L architecture is a bit unusual in that the individual CPUs are relatively slow by today's standards while the MPI interconnect is fast. The net effect on the current pmemd implementation is that, as the CPU count rises past some point, pmemd bottlenecks on the nodes that are each left with one FFT slab to process. We will be working on this issue to better support BG/L in the next release.
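The slab bottleneck can be illustrated with a toy calculation; the 64-point grid dimension below is an invented number, not a pmemd default. Because the 3D FFT is decomposed into whole slabs, the busiest node always owns at least one full slab, so once the task count passes the number of slabs the per-node FFT load stops shrinking.

```shell
# Toy illustration of the FFT slab ceiling (nfft=64 is an invented
# number, not a pmemd default). Slabs are indivisible, so the busiest
# node owns ceil(nfft/ncpus) slabs, never fewer than one.
nfft=64                                    # grid points along the slab axis
for ncpus in 16 64 256; do
  slabs=$(( (nfft + ncpus - 1) / ncpus ))  # slabs on the busiest node (ceiling)
  echo "ncpus=$ncpus  slabs on busiest node=$slabs"
done
```

At 16 tasks each node holds several slabs and adding CPUs still helps; past 64 tasks the busiest node still holds one full slab, and the reciprocal-space FFT time no longer improves.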
I would like to thank IBM's Blue Gene Capacity on Demand Center in Rochester, Minnesota for making resources available to facilitate configuration and benchmarking work. I would also specifically like to thank Carlos Sosa and Cindy Mestad of IBM for their help.
NIEHS and UNC-Chapel Hill
May 22, 2006