BerkeleyGW¶
The BerkeleyGW package is a set of computer codes that calculate the quasiparticle properties and optical responses of a wide variety of materials, from bulk periodic crystals to nanostructures such as slabs, wires, and molecules. The package takes as input mean-field results, such as Kohn-Sham DFT eigenvalues and eigenvectors, computed with electronic structure codes including Abinit, Octopus, PARSEC, Quantum ESPRESSO, RMGDFT, Siesta, and TPBW.
Availability and Supported Architectures at NERSC¶
BerkeleyGW is available at NERSC as a provided support level package. Version 4.x includes support for GPUs.
Versions Supported¶
| Perlmutter GPU | Perlmutter CPU |
|---|---|
| 4.x | 4.x |
Use the `module avail berkeleygw` command to see a full list of available sub-versions.
Application Information, Documentation, and Support¶
BerkeleyGW is freely available and can be downloaded from the BerkeleyGW home page. See the online documentation for the user manual, tutorials, examples, and links to previous workshops and literature articles. For troubleshooting, see the BerkeleyGW Help Forum. For help with issues specific to the NERSC module, please file a support ticket.
Using BerkeleyGW at NERSC¶
Use the `module avail` command to see which versions are available, and `module load <version>` to load the environment:
```console
nersc$ module avail berkeleygw
--------------------------- NERSC Modules ---------------------------
berkeleygw/4.0-gcc-12.3 berkeleygw/4.0-nvhpc-23.9 (g,D)
```

Where:

- `g`: built for GPU
- `D`: default module

```console
nersc$ module load berkeleygw/4.0-nvhpc-23.9
```
Sample Job Scripts¶
See the example jobs page for additional examples and information about jobs.
Perlmutter GPU

```shell
#!/bin/bash
#SBATCH -A <your account name> # e.g., m1111
#SBATCH -C gpu
#SBATCH -q regular
#SBATCH -N 2
#SBATCH -t 01:00:00

ml berkeleygw/4.0-nvhpc-23.9

export HDF5_USE_FILE_LOCKING=FALSE
export BGW_HDF5_WRITE_REDIST=1
ulimit -s unlimited
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=16

# Run 8 MPI processes on 2 nodes with 8 GPUs and 16 OpenMP threads per MPI process
srun -n 8 -c 32 -G 8 --cpu-bind=cores --gpu-bind=single:1 epsilon.cplx.x
```
Perlmutter CPU

```shell
#!/bin/bash
#SBATCH -A <your account name> # e.g., m1111
#SBATCH -C cpu
#SBATCH -q regular
#SBATCH -N 2
#SBATCH -t 01:00:00

ml berkeleygw/4.0-gcc-12.3

export HDF5_USE_FILE_LOCKING=FALSE
export BGW_HDF5_WRITE_REDIST=1
ulimit -s unlimited
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
export OMP_NUM_THREADS=16

# Run 16 MPI processes on 2 nodes with 16 OpenMP threads per MPI process
srun -n 16 -c 32 --cpu-bind=cores epsilon.cplx.x
```
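The `-c` value passed to `srun` follows from Perlmutter's node layout: CPU nodes expose 256 logical CPUs per node (128 physical cores with two hardware threads each). A minimal sketch of the arithmetic behind the CPU script above:

```shell
# Perlmutter CPU nodes: 128 physical cores = 256 logical CPUs per node.
# The CPU job above runs 16 MPI ranks on 2 nodes (8 ranks per node).
nodes=2
ranks=16
logical_cpus_per_node=256

# Logical CPUs available to each rank -- the value to pass to `srun -c`:
cpus_per_rank=$(( logical_cpus_per_node * nodes / ranks ))
echo "$cpus_per_rank"   # 32 logical CPUs, i.e. 16 physical cores
```

The 16 physical cores per rank match `OMP_NUM_THREADS=16` in the script, so each OpenMP thread gets its own physical core.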
Tip

In some cases the `epsilon` module will fail while trying to access a file in HDF5 format. To prevent this, add `export HDF5_USE_FILE_LOCKING=FALSE` to the job script.
Building BerkeleyGW from Source¶
Some users may be interested in modifying the BerkeleyGW build parameters and/or building BerkeleyGW themselves. BerkeleyGW can be downloaded as a tarball from the download page. Build instructions are included in the Makefile
and in README.md
in the BerkeleyGW main directory. Before building, one must load the appropriate modules and create a configuration file in the BerkeleyGW main directory. Sample configuration files, found in the config
directory, can be copied into the main directory, edited, and renamed as arch.mk
. Sample configuration file headers also contain recommendations of the modules to load.
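The configuration step can be sketched as follows. The directory layout below is a mock stand-in for an unpacked BerkeleyGW source tree, and `example.mk` is a hypothetical file name; the actual names under `config/` vary by release and target machine:

```shell
# Mock tree standing in for an unpacked BerkeleyGW tarball;
# "example.mk" is a hypothetical name, not an actual shipped config file.
mkdir -p BerkeleyGW-demo/config
printf '# module recommendations appear in this header\nCOMPFLAG = -DNVHPC\n' \
    > BerkeleyGW-demo/config/example.mk

cd BerkeleyGW-demo
# Copy a sample config into the main directory, rename it arch.mk,
# then edit it for the target machine before running make.
cp config/example.mk arch.mk
grep COMPFLAG arch.mk
```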
Building BerkeleyGW on Perlmutter for GPUs

The following `arch.mk` file may be used to build BerkeleyGW 4.0 targeting GPUs on Perlmutter:
```makefile
# arch.mk for NERSC Perlmutter GPU build using NVIDIA compilers
#
# Load the following modules before building:
#
# module reset
# ml PrgEnv-nvidia cray-fftw cray-hdf5-parallel python
#
# ml cpe/23.03 # <- NOTE: also include as of July 2025 (temporary workaround)
#
COMPFLAG = -DNVHPC -DNVHPC_API -DNVIDIA_GPU
PARAFLAG = -DMPI -DOMP
MATHFLAG = -DUSESCALAPACK -DUNPACKED -DUSEFFTW3 -DHDF5 -DOMP_TARGET -DOPENACC #-DUSEPRIMME -DUSEELPA -DUSEELPA_GPU

NVCC = nvcc
NVCCOPT = -O3 -use_fast_math
CUDALIB = -lcufft -lcublasLt -lcublas -lcudart -lcuda -lnvToolsExt

FCPP = /usr/bin/cpp -C -nostdinc
F90free = ftn -Mfree -acc -mp=multicore,gpu -gpu=cc80 -cudalib=cublas,cufft -traceback -Minfo=all,mp,acc -gopt
LINK = ftn -acc -mp=multicore,gpu -gpu=cc80 -cudalib=cublas,cufft -Minfo=mp,acc
FOPTS = -fast -Mfree -Mlarge_arrays
FNOOPTS = $(FOPTS)
MOD_OPT = -module
INCFLAG = -I

C_PARAFLAG = -DPARA -DMPICH_IGNORE_CXX_SEEK
CC_COMP = CC
C_COMP = cc
C_LINK = cc -lstdc++
C_OPTS = -fast -mp
C_DEBUGFLAG =

REMOVE = /bin/rm -f

FFTWLIB = $(FFTW_DIR)/libfftw3.so \
          $(FFTW_DIR)/libfftw3_threads.so \
          $(FFTW_DIR)/libfftw3_omp.so \
          ${CUDALIB} -lstdc++
FFTWINCLUDE = $(FFTW_INC)
PERFORMANCE =

SCALAPACKLIB =
LAPACKLIB =

HDF5_LDIR = ${HDF5_DIR}/lib/
HDF5LIB = $(HDF5_LDIR)/libhdf5hl_fortran.so \
          $(HDF5_LDIR)/libhdf5_hl.so \
          $(HDF5_LDIR)/libhdf5_fortran.so \
          $(HDF5_LDIR)/libhdf5.so -lz -ldl
HDF5INCLUDE = ${HDF5_DIR}/include/

ELPALIB =
ELPAINCLUDE =
PRIMMELIB =
PRIMMEINC =
```
Building BerkeleyGW on Perlmutter for CPUs

The following `arch.mk` file may be used to build BerkeleyGW 4.0 targeting CPUs on Perlmutter:
```makefile
# arch.mk for NERSC Perlmutter CPU build using GNU compiler
#
# Load the following modules before building:
# ('PrgEnv-gnu' must be pre-loaded!)
#
# module reset
# ml cpu cray-fftw cray-hdf5-parallel python
#
COMPFLAG = -DGNU
PARAFLAG = -DMPI -DOMP
MATHFLAG = -DUSESCALAPACK -DUNPACKED -DUSEFFTW3 -DHDF5 #-DUSEELPA -DUSEPRIMME

FCPP = /usr/bin/cpp -C -nostdinc
F90free = ftn -fopenmp -ffree-form -ffree-line-length-none -fno-second-underscore -fbounds-check -fbacktrace -Wall -fallow-argument-mismatch
LINK = ftn -fopenmp -ffree-form -ffree-line-length-none -fno-second-underscore -fbounds-check -fbacktrace -Wall -fallow-argument-mismatch -dynamic
FOPTS = -O1 -funsafe-math-optimizations -fallow-argument-mismatch
FNOOPTS = $(FOPTS)
MOD_OPT = -J
INCFLAG = -I

C_PARAFLAG = -DPARA -DMPICH_IGNORE_CXX_SEEK
CC_COMP = CC
C_COMP = cc
C_LINK = CC -dynamic
C_OPTS = -O1 -ffast-math
C_DEBUGFLAG =

REMOVE = /bin/rm -f

FFTWINCLUDE = $(FFTW_INC)
PERFORMANCE =

HDF5_LDIR = $(HDF5_DIR)/lib
HDF5LIB = $(HDF5_LDIR)/libhdf5hl_fortran.so \
          $(HDF5_LDIR)/libhdf5_hl.so \
          $(HDF5_LDIR)/libhdf5_fortran.so \
          $(HDF5_LDIR)/libhdf5.so -lz -ldl
HDF5INCLUDE = $(HDF5_DIR)/include

ELPALIB =
ELPAINCLUDE =
PRIMMELIB =
PRIMMEINC =
```
After creating `arch.mk` in the BerkeleyGW main directory, build using the following commands:

```console
nersc$ make cleanall
nersc$ make all-flavors
```
Note

As of July 2025, new GPU builds made using the current programming environment can fail at runtime with floating-point exception errors. To work around this, build BerkeleyGW with the `cpe/23.03` module loaded, and then load the `PrgEnv-nvidia`, `cray-fftw`, and `cpe/23.03` modules in your job script at runtime.
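Taken together, the runtime half of this workaround might look like the following fragment of a job script. This is a sketch based on the note above, not a tested recipe: the binary path `./bin/epsilon.cplx.x` and the `srun` options are illustrative assumptions, and module versions should match the ones used at build time.

```shell
# Runtime workaround sketch (as of July 2025): match the build environment
ml PrgEnv-nvidia cray-fftw cpe/23.03

# ./bin/epsilon.cplx.x is a hypothetical path to a self-built binary
srun -n 8 -c 32 -G 8 --cpu-bind=cores --gpu-bind=single:1 ./bin/epsilon.cplx.x
```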
Related Applications¶
User Contributed Information¶
Please help us improve this page
Users are invited to contribute helpful information and corrections through our GitLab repository.