To visualize the impact of an efficient installation, you are requested to carry it out in two different ways: the common-user way and the skilled way.
The two ways differ mainly in that the common user usually has no knowledge of the compute architecture, or of the possibility of using highly optimized HPC libraries, and would therefore install the package just as on a desktop or laptop. For this reason the Quantum ESPRESSO package includes an old version of all the needed libraries, so that the code compiles successfully in a variety of environments. By default, the configure script sets "-O3" as the compiler optimization.
A skilled user, on the other hand, is aware that more optimized libraries are available, as well as compiler optimizations for a given CPU architecture. For this particular case, the OpenBLAS library is provided. Moreover, "-O3 -mavx" is used to enhance the compiler's automatic vectorization for the Intel Sandy Bridge architecture. OpenBLAS itself was compiled with the same level of optimization.
Perform the following steps to complete the exercise:
1) Compile the package in the common user way:
$module load openmpi
$./configure
$make pw
2) Compile the package in the skilled user way:
$module load openmpi openblas
$./configure FFLAGS="-O3 -mavx" FCFLAGS="-O3 -mavx" CFLAGS="-O3 -mavx"
$make pw
3) Create a submission script to run the pw.x binary in parallel on a compute node of java2 (8 MPI processes). Check the "time to solution" with the provided input, using the following command:
mpirun $QE_DIR/bin/pw.x -input ausurf.in
4) Compare the obtained output.
NOTE:
a) Retrieve the software package and the provided input from NEXUS.
b) Steps 1) and 2), as well as the computations for the two cases, can be performed in parallel.
c) The computation has to be performed in the same directory in which the input files are stored.
d) In step 2), the optimization flag starts with the uppercase letter "O" ("-O3"), not the digit zero ("-03") and not the lowercase letter ("-o3").
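Step 4 gives no command for the comparison; one way is to grep the final timing line out of each run's output. A sketch (the output file names here are placeholders, and the exact label of the timing line varies between QE versions):

```shell
#!/bin/sh
# Print the final timing line that pw.x writes at the end of each run.
# (File names are assumptions; in the 5.x series the line looks like
#  "     PWSCF        :   ...s CPU   ...s WALL".)
for out in output_default.txt output_openblas.txt; do
    echo "== $out =="
    grep -E 'PWSCF *:' "$out" | tail -1
done
```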
STEP 1
term-62:~ muhammadsirojulmunir$ ssh hpc07@192.168.222.2
The authenticity of host '192.168.222.2 (192.168.222.2)' can't be established.
RSA key fingerprint is ad:82:fc:ed:1d:8b:00:0b:2e:47:8f:70:d0:b8:ae:00.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.222.2' (RSA) to the list of known hosts.
hpc07@192.168.222.2's password:
Last login: Fri Dec 6 10:23:52 2013 from 192.168.228.76
[hpc07@java2 ~]$ module load openmpi
[hpc07@java2 ~]$ ./configure
-bash: ./configure: No such file or directory
There is nothing here to compile.
[hpc07@java2 ~]$ ls
C_source Fortran_source_text_viz sub_script.sh
Fortran_source Lab-Day1.tar.gz transport_parallel.f90
[hpc07@java2 ~]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5
It turns out the files to be compiled have to be downloaded first.
[hpc07@java2 ~]$ wget http://nexus.lipi.go.id/lectures/day5/Lab-session-Day5.tar
--2013-12-06 10:47:15-- http://nexus.lipi.go.id/lectures/day5/Lab-session-Day5.tar
Resolving nexus.lipi.go.id... 192.168.228.11
Connecting to nexus.lipi.go.id|192.168.228.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104816640 (100M) [application/x-tar]
Saving to: “Lab-session-Day5.tar”
100%[======================================>] 104,816,640 3.79M/s in 25s
2013-12-06 10:47:40 (4.07 MB/s) - “Lab-session-Day5.tar” saved [104816640/104816640]
Check the downloaded file:
[hpc07@java2 ~]$ ls
C_source Lab-Day1.tar.gz transport_parallel.f90
Fortran_source Lab-session-Day5.tar
Fortran_source_text_viz sub_script.sh
Extract the downloaded file, Lab-session-Day5.tar:
[hpc07@java2 ~]$ tar xvfz Lab-session-Day5.tar
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
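This detour can be avoided by inspecting an archive before choosing tar flags; a quick sketch (GNU tar also auto-detects compression when reading, so plain "xf" handles both cases):

```shell
#!/bin/sh
# "file" reports the archive type, e.g. "POSIX tar archive" vs
# "gzip compressed data", so the right tar options can be chosen up front:
file Lab-session-Day5.tar
file espresso.tar.gz

# GNU tar detects compression itself on extraction, so "xf" alone
# extracts both plain and gzipped archives:
tar xf Lab-session-Day5.tar
```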
The z option was wrong: the file is a plain tar archive, not gzip-compressed.
Extract the downloaded Lab-session-Day5.tar again, this time without the z option:
[hpc07@java2 ~]$ tar xvf Lab-session-Day5.tar
espresso.tar.gz
input_test_QE.tar.gz
There are two files, which turn out to be compressed archives themselves.
List the extracted files that still need to be extracted:
[hpc07@java2 ~]$ ls
C_source Fortran_source_text_viz Lab-session-Day5.tar
espresso.tar.gz input_test_QE.tar.gz sub_script.sh
Fortran_source Lab-Day1.tar.gz transport_parallel.f90
Extract espresso.tar.gz:
[hpc07@java2 ~]$ tar xvfz espresso.tar.gz
espresso/
espresso/Makefile
espresso/GIPAW/
....
[hpc07@java2 ~]$ ls -l
total 204748
drwxr-xr-x 2 hpc07 ictp 4096 Dec 1 15:04 C_source
drwxr-xr-x 30 hpc07 ictp 4096 Dec 5 12:34 espresso
-rw-r--r-- 1 hpc07 ictp 104697385 Dec 5 13:33 espresso.tar.gz
drwxr-xr-x 2 hpc07 ictp 4096 Dec 2 17:28 Fortran_source
drwxr-xr-x 2 hpc07 ictp 4096 Dec 2 18:04 Fortran_source_text_viz
-rw-r--r-- 1 hpc07 ictp 108890 Dec 5 13:36 input_test_QE.tar.gz
-rw-r----- 1 hpc07 ictp 8031 Dec 2 15:59 Lab-Day1.tar.gz
-rw-r--r-- 1 hpc07 ictp 104816640 Dec 5 15:24 Lab-session-Day5.tar
-rwxr-xr-x 1 hpc07 ictp 124 Dec 2 15:53 sub_script.sh
-rw-r--r-- 1 hpc07 ictp 6304 Dec 2 15:33 transport_parallel.f90
[hpc07@java2 ~]$ cd espresso
[hpc07@java2 espresso]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5
[hpc07@java2 espresso]$ ./configure
checking build system type... x86_64-unknown-linux-gnu
detected architecture... x86_64
checking for ifort... no
checking for pgf90... no
checking for pathf95... no
checking for sunf95... no
checking for openf95... no
checking for gfortran... gfortran
configure: WARNING: using cross tools not prefixed with host triplet
checking whether the Fortran compiler works... yes
checking for Fortran compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU Fortran compiler... yes
checking whether gfortran accepts -g... yes
checking for Fortran flag to compile .f90 files... none
checking for mpif90... mpif90
checking whether we are using the GNU Fortran compiler... yes
checking whether mpif90 accepts -g... yes
checking version of mpif90... gfortran 4.4.7
setting F90... gfortran
setting MPIF90... mpif90
checking for cc... cc
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
setting CC... cc
checking how to run the C preprocessor... cc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking size of int *... 8
checking malloc.h usability... yes
checking malloc.h presence... yes
checking for malloc.h... yes
checking for struct mallinfo.arena... yes
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
setting F77... gfortran
using F90... gfortran
setting FFLAGS... -O3 -g
setting F90FLAGS... $(FFLAGS) -x f95-cpp-input
setting FFLAGS_NOOPT... -O0 -g
setting CFLAGS... -O3
setting CPP... cpp
setting CPPFLAGS... -P -C -traditional
setting LD... mpif90
setting LDFLAGS... -g -pthread
setting AR... ar
setting ARFLAGS... ruv
checking whether make sets $(MAKE)... yes
checking whether Fortran files must be preprocessed... no
checking host system type... x86_64-unknown-linux-gnu
checking how to get verbose linking output from gfortran... -v
checking for Fortran 77 libraries of gfortran... -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../.. -lgfortranbegin -lgfortran -lm
checking for dummy main to link with Fortran 77 libraries... none
checking for Fortran 77 name-mangling scheme... lower case, underscore, no extra underscore
checking for library containing dgemm... no
MKL not found
in /opt/intel/composer*/mkl/lib/intel64: checking for library containing dgemm... no
MKL not found
in /opt/intel/Compiler/*/*/mkl/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl/*/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl*/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl/lib: checking for library containing dgemm... no
MKL not found
in /home/software/openmpi/1.6.5/lib: checking for library containing dgemm... no
MKL not found
checking for library containing dgemm... no
in /usr/local/lib: checking for library containing dgemm... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dgemm... no
checking for library containing dgemm... no
in /usr/local/lib: checking for library containing dgemm... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dgemm... no
checking for library containing dspev... no
in /usr/local/lib: checking for library containing dspev... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dspev... no
setting BLAS_LIBS... /home/hpc07/espresso/BLAS/blas.a
setting LAPACK_LIBS... /home/hpc07/espresso/lapack-3.2/lapack.a
checking for library containing dfftw_execute_dft... no
in /usr/local/lib: checking for library containing dfftw_execute_dft... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dfftw_execute_dft... no
setting FFT_LIBS...
setting MASS_LIBS...
checking for library containing mpi_init... none required
setting MPI_LIBS...
checking for library containing mpi_init... (cached) none required
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
setting SCALAPACK_LIBS... -L/bgsys/local/scalapack/lib -lscalapack -L/bgsys/local/blacs/lib -lblacs -lblacsF77init -lblacs
setting DFLAGS... -D__GFORTRAN -D__STD_F95 -D__FFTW -D__MPI -D__PARA
setting IFLAGS... -I../include
setting FDFLAGS... $(DFLAGS)
checking for ranlib... ranlib
setting RANLIB... ranlib
checking for wget... wget -O
setting WGET... wget -O
configure: creating ./config.status
config.status: creating include/fft_defs.h
config.status: creating make.sys
config.status: creating configure.msg
config.status: creating install/make_wannier90.sys
config.status: creating install/make_blas.inc
config.status: creating install/make_lapack.inc
config.status: creating include/c_defs.h
--------------------------------------------------------------------
ESPRESSO can take advantage of several optimized numerical libraries
(essl, fftw, mkl...). This configure script attempts to find them,
but may fail if they have been installed in non-standard locations.
If a required library is not found, the local copy will be compiled.
The following libraries have been found:
BLAS_LIBS=/home/hpc07/espresso/BLAS/blas.a
LAPACK_LIBS=/home/hpc07/espresso/lapack-3.2/lapack.a
FFT_LIBS=
Please check if this is what you expect.
If any libraries are missing, you may specify a list of directories
to search and retry, as follows:
./configure LIBDIRS="list of directories, separated by spaces"
Parallel environment detected successfully.
Configured for compilation of parallel executables.
For more info, read the ESPRESSO User's Guide (Doc/users-guide.tex).
--------------------------------------------------------------------
configure: success
[hpc07@java2 espresso]$
---------------------------------------------------------------------------------
STEP 2
term-62:~ muhammadsirojulmunir$ ssh hpc07@192.168.222.2
hpc07@192.168.222.2's password:
Last login: Fri Dec 6 10:37:13 2013 from 192.168.228.62
[hpc07@java2 ~]$ ls
C_source Fortran_source_text_viz sub_script.sh
espresso input_test_QE.tar.gz transport_parallel.f90
espresso.tar.gz Lab-Day1.tar.gz
Fortran_source Lab-session-Day5.tar
[hpc07@java2 ~]$ cd espresso
[hpc07@java2 espresso]$ module list
No Modulefiles Currently Loaded.
[hpc07@java2 espresso]$ module load openmpi
[hpc07@java2 espresso]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5
[hpc07@java2 espresso]$ module load openblas
[hpc07@java2 espresso]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5 2) openblas/1.13
[hpc07@java2 espresso]$
[hpc07@java2 espresso]$ ./configure FFLAG="-o3 -mavx" FCFLAGS="-o3 -mavx" CFLAGS="-o3 -mavx"
checking build system type... x86_64-unknown-linux-gnu
detected architecture... x86_64
checking for ifort... no
checking for pgf90... no
checking for pathf95... no
checking for sunf95... no
checking for openf95... no
checking for gfortran... gfortran
configure: WARNING: using cross tools not prefixed with host triplet
checking whether the Fortran compiler works... no
configure: error: in `/home/hpc07/espresso':
configure: error: Fortran compiler cannot create executables
See `config.log' for more details
[hpc07@java2 espresso]$
[hpc07@java2 espresso]$ ./configure FFLAG="-O3 -mavx" FCFLAGS="-O3 -mavx" CFLAGS="-O3 -mavx"
checking build system type... x86_64-unknown-linux-gnu
detected architecture... x86_64
checking for ifort... no
checking for pgf90... no
checking for pathf95... no
checking for sunf95... no
checking for openf95... no
checking for gfortran... gfortran
configure: WARNING: using cross tools not prefixed with host triplet
checking whether the Fortran compiler works... yes
checking for Fortran compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU Fortran compiler... yes
checking whether gfortran accepts -g... yes
checking for Fortran flag to compile .f90 files... none
checking for mpif90... mpif90
checking whether we are using the GNU Fortran compiler... yes
checking whether mpif90 accepts -g... yes
checking version of mpif90... gfortran 4.4.7
setting F90... gfortran
setting MPIF90... mpif90
checking for cc... cc
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
setting CC... cc
checking how to run the C preprocessor... cc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking size of int *... 8
checking malloc.h usability... yes
checking malloc.h presence... yes
checking for malloc.h... yes
checking for struct mallinfo.arena... yes
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
setting F77... gfortran
using F90... gfortran
setting FFLAGS... -O3 -g
setting F90FLAGS... $(FFLAGS) -x f95-cpp-input
setting FFLAGS_NOOPT... -O0 -g
setting CFLAGS... -O3 -mavx
setting CPP... cpp
setting CPPFLAGS... -P -C -traditional
setting LD... mpif90
setting LDFLAGS... -g -pthread
setting AR... ar
setting ARFLAGS... ruv
checking whether make sets $(MAKE)... yes
checking whether Fortran files must be preprocessed... no
checking host system type... x86_64-unknown-linux-gnu
checking how to get verbose linking output from gfortran... -v
checking for Fortran 77 libraries of gfortran... -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/home/software/openblas/1.13/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../.. -lgfortranbegin -lgfortran -lm
checking for dummy main to link with Fortran 77 libraries... none
checking for Fortran 77 name-mangling scheme... lower case, underscore, no extra underscore
checking for library containing dgemm... no
MKL not found
in /opt/intel/composer*/mkl/lib/intel64: checking for library containing dgemm... no
MKL not found
in /opt/intel/Compiler/*/*/mkl/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl/*/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl*/lib/em64t: checking for library containing dgemm... no
MKL not found
in /opt/intel/mkl/lib: checking for library containing dgemm... no
MKL not found
in /home/software/openblas/1.13/lib: checking for library containing dgemm... no
MKL not found
in /home/software/openmpi/1.6.5/lib: checking for library containing dgemm... no
MKL not found
checking for library containing dgemm... no
in /usr/local/lib: checking for library containing dgemm... no
in /home/software/openblas/1.13/lib: checking for library containing dgemm... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dgemm... no
checking for library containing dgemm... -lopenblas
checking for library containing dspev... none required
setting BLAS_LIBS... -lopenblas
setting LAPACK_LIBS... -lopenblas
checking for library containing dfftw_execute_dft... no
in /usr/local/lib: checking for library containing dfftw_execute_dft... no
in /home/software/openblas/1.13/lib: checking for library containing dfftw_execute_dft... no
in /home/software/openmpi/1.6.5/lib: checking for library containing dfftw_execute_dft... no
setting FFT_LIBS...
setting MASS_LIBS...
checking for library containing mpi_init... none required
setting MPI_LIBS...
checking for library containing mpi_init... (cached) none required
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
checking for library containing pdgemr2d... no
setting SCALAPACK_LIBS... -L/bgsys/local/scalapack/lib -lscalapack -L/bgsys/local/blacs/lib -lblacs -lblacsF77init -lblacs
setting DFLAGS... -D__GFORTRAN -D__STD_F95 -D__FFTW -D__MPI -D__PARA
setting IFLAGS... -I../include
setting FDFLAGS... $(DFLAGS)
checking for ranlib... ranlib
setting RANLIB... ranlib
checking for wget... wget -O
setting WGET... wget -O
configure: creating ./config.status
config.status: creating include/fft_defs.h
config.status: creating make.sys
config.status: creating configure.msg
config.status: creating install/make_wannier90.sys
config.status: creating install/make_blas.inc
config.status: creating install/make_lapack.inc
config.status: creating include/c_defs.h
config.status: include/c_defs.h is unchanged
--------------------------------------------------------------------
ESPRESSO can take advantage of several optimized numerical libraries
(essl, fftw, mkl...). This configure script attempts to find them,
but may fail if they have been installed in non-standard locations.
If a required library is not found, the local copy will be compiled.
The following libraries have been found:
BLAS_LIBS= -lopenblas
LAPACK_LIBS= -lopenblas
FFT_LIBS=
Please check if this is what you expect.
If any libraries are missing, you may specify a list of directories
to search and retry, as follows:
./configure LIBDIRS="list of directories, separated by spaces"
Parallel environment detected successfully.
Configured for compilation of parallel executables.
For more info, read the ESPRESSO User's Guide (Doc/users-guide.tex).
--------------------------------------------------------------------
configure: success
[hpc07@java2 espresso]$
[hpc07@java2 espresso]$ make pw
test -d bin || mkdir bin
cd install ; make -f extlibs_makefile libiotk
make[1]: Entering directory `/home/hpc07/espresso/install'
if test ! -d ../S3DE; then \
(gzip -dc ../archive/iotk-1.2.beta.tar.gz | (cd ../; tar -xvf -)) ; \
if test -e Makefile_iotk; then \
(cp Makefile_iotk ../S3DE/iotk/src/Makefile); fi; \
if test -e iotk_config.h; then \
(cp iotk_config.h ../S3DE/iotk/include/iotk_config.h); fi; fi
S3DE/
S3DE/iotk/
S3DE/iotk/altsrc/
S3DE/iotk/TODO
S3DE/iotk/tmp/
S3DE/iotk/tmp/.cvsignore
S3DE/iotk/tmp/.touch
S3DE/iotk/CHANGES
S3DE/iotk/.cvsignore
... (long build output omitted) ...
mpif90 -g -pthread -o bands_FS.x bands_FS.o -lopenblas -lopenblas
( cd ../../bin ; ln -fs ../PW/tools/bands_FS.x . )
gfortran -O3 -g -c kvecs_FS.f
mpif90 -g -pthread -o kvecs_FS.x kvecs_FS.o -lopenblas -lopenblas
( cd ../../bin ; ln -fs ../PW/tools/kvecs_FS.x . )
make[2]: Leaving directory `/home/hpc07/espresso/PW/tools'
make[1]: Leaving directory `/home/hpc07/espresso/PW'
[hpc07@java2 espresso]$
---------------------------------------------------------------------------------
STEP 3
[hpc07@java2 espresso]$ mpirun $QE_DIR/bin/pw.x -input ausurf.in
--------------------------------------------------------------------------
mpirun was unable to launch the specified application as it could not access
or execute an executable:
Executable: /bin/pw.x
Node: java2
while attempting to start process rank 0.
--------------------------------------------------------------------------
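"Executable: /bin/pw.x" reveals the cause: $QE_DIR was never set in this shell, so it expanded to an empty string. A small sketch of the expansion (paths match where the tree was unpacked above; the echo lines stand in for the real mpirun invocation):

```shell
#!/bin/sh
# With QE_DIR unset, the path in the mpirun command collapses to /bin/pw.x:
unset QE_DIR
echo "mpirun ${QE_DIR:-}/bin/pw.x"    # -> mpirun /bin/pw.x

# Defining it first gives the intended executable:
QE_DIR=/home/hpc07/espresso
echo "mpirun ${QE_DIR}/bin/pw.x"      # -> mpirun /home/hpc07/espresso/bin/pw.x
```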
[hpc07@java2 espresso]$ ls
3 CPV GUI make.sys PW upftools
archive dev-tools GWW Modules PWCOND VdW
atomic Doc include NEB QHA XSpectra
bin environment_variables install PHonon README
clib EPW iotk PlotPhon S3DE
configure flib License PP TDDFPT
COUPLE GIPAW Makefile pseudo TODO
[hpc07@java2 espresso]$ cd bin
[hpc07@java2 bin]$ ls
band_plot.x generate_rVV10_kernel_table.x iotk.x pwi2xsf.x
bands_FS.x generate_vdW_kernel_table.x kpoints.x pw.x
dist.x iotk kvecs_FS.x
ev.x iotk_print_kinds.x manypw.x
[hpc07@java2 bin]$ pwd
/home/hpc07/espresso/bin
[hpc07@java2 bin]$ mpirun /home/hpc07/espresso/bin/pw.x -input ausurf.in
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 11:20:48
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 1 processors
Open_input_file: error opening ausurf.in
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine read_input (2):
opening input file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping ...
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 14664 on
node java2 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[hpc07@java2 bin]$
It turns out the input files are in the other tar file, input_test_QE.tar.gz.
[hpc07@java2 espresso]$ cd ..
[hpc07@java2 ~]$ ls -l
total 204748
drwxr-xr-x 2 hpc07 ictp 4096 Dec 1 15:04 C_source
drwxr-xr-x 32 hpc07 ictp 4096 Dec 6 11:23 espresso
-rw-r--r-- 1 hpc07 ictp 104697385 Dec 5 13:33 espresso.tar.gz
drwxr-xr-x 2 hpc07 ictp 4096 Dec 2 17:28 Fortran_source
drwxr-xr-x 2 hpc07 ictp 4096 Dec 2 18:04 Fortran_source_text_viz
-rw-r--r-- 1 hpc07 ictp 108890 Dec 5 13:36 input_test_QE.tar.gz
-rw-r----- 1 hpc07 ictp 8031 Dec 2 15:59 Lab-Day1.tar.gz
-rw-r--r-- 1 hpc07 ictp 104816640 Dec 5 15:24 Lab-session-Day5.tar
-rwxr-xr-x 1 hpc07 ictp 124 Dec 2 15:53 sub_script.sh
-rw-r--r-- 1 hpc07 ictp 6304 Dec 2 15:33 transport_parallel.f90
[hpc07@java2 ~]$ tar xvfz input_test_QE.tar.gz
PW-AUSURF54/
PW-AUSURF54/ausurf.in
PW-AUSURF54/Au.pbe-nd-van.UPF
[hpc07@java2 ~]$ mpirun /home/hpc07/espresso/bin/pw.x -input /home/hpc07/PW-AUSURF54/ausurf.in
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 11:26:35
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 1 processors
Reading input from /home/hpc07/PW-AUSURF54/ausurf.in
Warning: card &IONS ignored
Warning: card ION_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE' ignored
Warning: card / ignored
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine readpp (2):
file ./Au.pbe-nd-van.UPF not found
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping ...
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 16287 on
node java2 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
It cannot be run directly like this; a job script has to be created first.
Submit the job script, the file sub_script5.sh:
[hpc07@java2 ~]$ qsub sub_script5.sh
620.java2.grid.lipi.go.id
[hpc07@java2 ~]$ ls
CRASH Fortran_source_text_viz sub_script5.sh
C_source input_test_QE.tar.gz sub_script.sh
espresso Lab-Day1.tar.gz test_pbs.e620
espresso.tar.gz Lab-session-Day5.tar test_pbs.o620
Fortran_source PW-AUSURF54 transport_parallel.f90
[hpc07@java2 ~]$ more test_pbs.e620
/var/lib/torque/mom_priv/jobs/620.java2.grid.lipi.go.id.SC: line 6: mpirun: command not found
It looks like there is still an error.
[hpc07@java2 ~]$ which mpirun
/usr/bin/which: no mpirun in (/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/fujitsu/ServerViewSuite/UpdateManager/bin:/home/hpc07/bin)
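The empty result explains the job failure: a PBS batch job starts in a fresh shell, so modules loaded interactively are gone and mpirun is not on PATH until the script itself runs module load. A guard like this at the top of a job script fails fast with a clear message (sketch; the helper name is an invention, and the missing-command name below stands in for mpirun):

```shell
#!/bin/sh
# Fail early, with a clear message, if a required command is not on PATH.
require() {
    command -v "$1" >/dev/null 2>&1 ||
        { echo "error: $1 not on PATH - forgot 'module load'?" >&2; return 1; }
}

require sh && echo "sh found"                     # a command that exists
require surely_missing_cmd_xyz || echo "caught"   # what a missing mpirun looks like
```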
Edit the job script:
[hpc07@java2 ~]$ more sub_script5.sh
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd /home/hpc07/
mpirun ./espresso/bin/pw.x -input ./PW-AUSURF54/ausurf.in
Run the job script again:
[hpc07@java2 ~]$ qsub sub_script5.sh
751.java2.grid.lipi.go.id
[hpc07@java2 ~]$ cat test_pbs.e751
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 4 with PID 13465 on
node andalas25 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[andalas25:13460] 5 more processes have sent help message help-mpi-api.txt / mpi-abort
[andalas25:13460] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
There is still an error.
Edit the job script again:
[hpc07@java2 ~]$ more sub_script5.sh
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd $PBS_O_WORKDIR
mpirun ./espresso/bin/pw.x -input ./PW-AUSURF54/ausurf.in
Run the job script again:
[hpc07@java2 ~]$ qsub sub_script5.sh
771.java2.grid.lipi.go.id
[hpc07@java2 ~]$ more test_pbs.e771
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 7 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 5342 on
node bali72 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[bali72:05339] 5 more processes have sent help message help-mpi-api.txt / mpi-abort
[bali72:05339] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
There is still an error, presumably the same pseudopotential problem seen in the interactive run.
Copy the file Au.pbe-nd-van.UPF from /home/hpc07/PW-AUSURF54/ to /home/hpc07/:
[hpc07@java2 ~]$ cp PW-AUSURF54/Au.pbe-nd-van.UPF .
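The copy works because pw.x looks for pseudopotentials in pseudo_dir, which here falls back to the working directory. An alternative to copying the file around is to set the search path explicitly; a sketch (ESPRESSO_PSEUDO is the environment variable described in the QE user's guide; whether this input relies on the default is an assumption):

```shell
#!/bin/sh
# Alternative to copying Au.pbe-nd-van.UPF into $HOME: point pw.x at the
# directory that already holds it.
export ESPRESSO_PSEUDO=/home/hpc07/PW-AUSURF54
echo "pseudopotential search path: $ESPRESSO_PSEUDO"

# Or, equivalently, set it in the &CONTROL namelist of ausurf.in:
#   pseudo_dir = '/home/hpc07/PW-AUSURF54'
```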
Edit the job script again:
[hpc07@java2 ~]$ more sub_script5.sh
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd $PBS_O_WORKDIR
mpirun espresso/bin/pw.x -input PW-AUSURF54/ausurf.in | tee optoutput.txt
The previous version looked like this (note the "./" in the last line; the new version also pipes the output through tee into optoutput.txt):
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd $PBS_O_WORKDIR
mpirun ./espresso/bin/pw.x -input ./PW-AUSURF54/ausurf.in
Run the job script again:
[hpc07@java2 ~]$ qsub sub_script5.sh
782.java2.grid.lipi.go.id
Check the job status on the server:
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
714.java2 QE_test hpc12 03:21:04 R nogpu
719.java2 test_pbs hpc16 02:54:56 R nogpu
747.java2 test_pbs hpc01 01:35:23 R nogpu
761.java2 test_pbs hpc03 01:01:30 R nogpu
767.java2 test_pbs hpc27 00:29:24 R nogpu
768.java2 test_PBS_2 hpc06 00:26:01 R nogpu
772.java2 QE_test hpc12 00:24:10 R nogpu
775.java2 test_pbs hpc03 00:07:02 R nogpu
777.java2 test_pbs hpc03 00:09:04 R nogpu
782.java2 test_pbs hpc07 0 R nogpu
Delete job 782 in order to rerun it, not because of an error:
[hpc07@java2 ~]$ qdel 782
Run the job script again:
[hpc07@java2 ~]$ qsub sub_script5.sh
785.java2.grid.lipi.go.id
Check the job status on the server:
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
719.java2 test_pbs hpc16 03:18:45 R nogpu
747.java2 test_pbs hpc01 01:59:15 R nogpu
761.java2 test_pbs hpc03 01:19:18 R nogpu
768.java2 test_PBS_2 hpc06 00:49:49 R nogpu
772.java2 QE_test hpc12 00:48:00 R nogpu
777.java2 test_pbs hpc03 00:32:52 R nogpu
783.java2 test_pbs hpc16 00:18:36 R nogpu
784.java2 test_pbs hpc08 00:05:51 R nogpu
785.java2 test_pbs hpc07 0 R nogpu
Follow the output file optoutput.txt as it is being written:
[hpc07@java2 ~]$ tail -f optoutput.txt
Each subspace H/S matrix 30.94 Mb ( 1424, 1424)
Each
Arrays for rho mixing 15.79 Mb ( 129375, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -1.450934
starting charge 593.37296, renormalised to 594.00000
negative rho (up, down): 1.452E+00 0.000E+00
Starting wfc are 486 randomized atomic wfcs
total cpu time spent up to now is 62.3 secs
per-process dynamical memory: 102.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 3.0
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 4.67E-04, avg # of iterations = 2.0
negative rho (up, down): 1.438E+00 0.000E+00
total cpu time spent up to now is 174.1 secs
total energy = -5507.14530794 Ry
Harris-Foulkes estimate = -5509.64398749 Ry
estimated scf accuracy < 3.10048687 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
View the output file optoutput.txt:
[hpc07@java2 ~]$ more optoutput.txt
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 12:50: 5
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 8 processors
R & G space division: proc/nbgrp/npool/nimage = 8
Reading input from PW-AUSURF54/ausurf.in
Warning: card &IONS ignored
Warning: card ION_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
Subspace diagonalization in iterative solution of the eigenvalue problem:
a serial algorithm will be used
Found symmetry operation: I + ( 0.3333 0.0000 0.0000)
This is a supercell, fractional translations are disabled
Parallelization info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Min 532 266 70 58610 20700 2800
Max 533 267 71 58615 20720 2808
Sum 4257 2129 561 468901 165669 22421
Title:
DEISA pw benchmark
bravais-lattice index = 8
lattice parameter (alat) = 16.3533 a.u.
unit-cell volume = 9815.9181 (a.u.)^3
number of atoms/cell = 54
number of atomic types = 1
number of electrons = 594.00
number of Kohn-Sham states= 356
kinetic-energy cutoff = 25.0000 Ry
charge density cutoff = 200.0000 Ry
convergence threshold = 1.0E-06
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PW PBE PBE ( 1 4 3 4 0)
celldm(1)= 16.353258 celldm(2)= 1.000000 celldm(3)= 2.244492
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( 1.000000 0.000000 0.000000 )
a(2) = ( 0.000000 1.000000 0.000000 )
a(3) = ( 0.000000 0.000000 2.244492 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( 1.000000 0.000000 0.000000 )
b(2) = ( 0.000000 1.000000 0.000000 )
b(3) = ( 0.000000 0.000000 0.445535 )
PseudoPot. # 1 for Au read from file:
./Au.pbe-nd-van.UPF
MD5 check sum: deb5c07af10777505a79e28f5b4b4115
Pseudo is Ultrasoft + core correction, Zval = 11.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 985 points, 3 beta functions with:
l(1) = 1
l(2) = 2
l(3) = 2
Q(r) pseudized with 8 coefficients, rinner = 1.100 1.100 1.100
1.100 1.100
atomic species valence mass pseudopotential
Au 11.00 196.96000 Au( 1.00)
2 Sym. Ops. (no inversion) found
Cartesian axes
site n. atom positions (alat units)
1 Au tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
2 Au tau( 2) = ( 0.2222220 0.1111110 0.2721649 )
3 Au tau( 3) = ( 0.1111110 0.2222220 0.5443320 )
4 Au tau( 4) = ( 0.3333330 0.0000000 0.0000000 )
5 Au tau( 5) = ( 0.5555560 0.1111110 0.2721649 )
6 Au tau( 6) = ( 0.4444440 0.2222220 0.5443320 )
7 Au tau( 7) = ( 0.6666670 0.0000000 0.0000000 )
8 Au tau( 8) = ( 0.8888890 0.1111110 0.2721649 )
9 Au tau( 9) = ( 0.7777780 0.2222220 0.5443320 )
10 Au tau( 10) = ( 0.0000000 0.3333330 0.0000000 )
11 Au tau( 11) = ( 0.2222220 0.4444440 0.2721649 )
12 Au tau( 12) = ( 0.1111110 0.5555560 0.5443320 )
13 Au tau( 13) = ( 0.3333330 0.3333330 0.0000000 )
14 Au tau( 14) = ( 0.5555560 0.4444440 0.2721649 )
15 Au tau( 15) = ( 0.4444440 0.5555560 0.5443320 )
16 Au tau( 16) = ( 0.6666670 0.3333330 0.0000000 )
17 Au tau( 17) = ( 0.8888890 0.4444440 0.2721649 )
18 Au tau( 18) = ( 0.7777780 0.5555560 0.5443320 )
19 Au tau( 19) = ( 0.0000000 0.6666670 0.0000000 )
20 Au tau( 20) = ( 0.2222220 0.7777780 0.2721649 )
21 Au tau( 21) = ( 0.1111110 0.8888890 0.5443320 )
22 Au tau( 22) = ( 0.3333330 0.6666670 0.0000000 )
23 Au tau( 23) = ( 0.5555560 0.7777780 0.2721649 )
24 Au tau( 24) = ( 0.4444440 0.8888890 0.5443320 )
25 Au tau( 25) = ( 0.6666670 0.6666670 0.0000000 )
26 Au tau( 26) = ( 0.8888890 0.7777780 0.2721649 )
27 Au tau( 27) = ( 0.7777780 0.8888890 0.5443320 )
28 Au tau( 28) = ( 0.0000000 0.0000000 0.8164968 )
29 Au tau( 29) = ( 0.2222220 0.1111110 1.0886617 )
30 Au tau( 30) = ( 0.1111110 0.2222220 1.3608265 )
31 Au tau( 31) = ( 0.3333330 0.0000000 0.8164968 )
32 Au tau( 32) = ( 0.5555560 0.1111110 1.0886617 )
33 Au tau( 33) = ( 0.4444440 0.2222220 1.3608265 )
34 Au tau( 34) = ( 0.6666670 0.0000000 0.8164968 )
35 Au tau( 35) = ( 0.8888890 0.1111110 1.0886617 )
36 Au tau( 36) = ( 0.7777780 0.2222220 1.3608265 )
37 Au tau( 37) = ( 0.0000000 0.3333330 0.8164968 )
38 Au tau( 38) = ( 0.2222220 0.4444440 1.0886617 )
39 Au tau( 39) = ( 0.1111110 0.5555560 1.3608265 )
40 Au tau( 40) = ( 0.3333330 0.3333330 0.8164968 )
41 Au tau( 41) = ( 0.5555560 0.4444440 1.0886617 )
42 Au tau( 42) = ( 0.4444440 0.5555560 1.3608265 )
43 Au tau( 43) = ( 0.6666670 0.3333330 0.8164968 )
44 Au tau( 44) = ( 0.8888890 0.4444440 1.0886617 )
45 Au tau( 45) = ( 0.7777780 0.5555560 1.3608265 )
46 Au tau( 46) = ( 0.0000000 0.6666670 0.8164968 )
47 Au tau( 47) = ( 0.2222220 0.7777780 1.0886617 )
48 Au tau( 48) = ( 0.1111110 0.8888890 1.3608265 )
49 Au tau( 49) = ( 0.3333330 0.6666670 0.8164968 )
50 Au tau( 50) = ( 0.5555560 0.7777780 1.0886617 )
51 Au tau( 51) = ( 0.4444440 0.8888890 1.3608265 )
52 Au tau( 52) = ( 0.6666670 0.6666670 0.8164968 )
53 Au tau( 53) = ( 0.8888890 0.7777780 1.0886617 )
54 Au tau( 54) = ( 0.7777780 0.8888890 1.3608265 )
number of k points= 2 Marzari-Vanderbilt smearing, width (Ry)= 0.0500
cart. coord. in units 2pi/alat
k( 1) = ( 0.2500000 0.2500000 0.0000000), wk = 1.0000000
k( 2) = ( -0.2500000 0.2500000 0.0000000), wk = 1.0000000
Dense grid: 468901 G-vectors FFT dimensions: ( 75, 75, 180)
Smooth grid: 165669 G-vectors FFT dimensions: ( 54, 54, 120)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 14.13 Mb ( 2601, 356)
NL pseudopotentials 27.86 Mb ( 2601, 702)
Each V/rho on FFT grid 1.97 Mb ( 129375)
Each G-vector array 0.45 Mb ( 58614)
G-vector shells 0.14 Mb ( 18406)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 56.52 Mb ( 2601, 1424)
Each subspace H/S matrix 30.94 Mb ( 1424, 1424)
Each
Arrays for rho mixing 15.79 Mb ( 129375, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -1.450934
starting charge 593.37296, renormalised to 594.00000
negative rho (up, down): 1.452E+00 0.000E+00
Starting wfc are 486 randomized atomic wfcs
total cpu time spent up to now is 62.3 secs
per-process dynamical memory: 102.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 3.0
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 4.67E-04, avg # of iterations = 2.0
negative rho (up, down): 1.438E+00 0.000E+00
total cpu time spent up to now is 174.1 secs
total energy = -5507.14530794 Ry
Harris-Foulkes estimate = -5509.64398749 Ry
estimated scf accuracy < 3.10048687 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.434E+00 0.000E+00
total cpu time spent up to now is 298.5 secs
total energy = -5480.57093516 Ry
Harris-Foulkes estimate = -5525.11985817 Ry
estimated scf accuracy < 723.97866323 Ry
iteration # 3 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.446E+00 0.000E+00
total cpu time spent up to now is 417.5 secs
total energy = -5507.87863501 Ry
Harris-Foulkes estimate = -5510.20956162 Ry
estimated scf accuracy < 8.18583953 Ry
End of self-consistent calculation
convergence NOT achieved after 3 iterations: stopping
Writing output data file ausurf.save
init_run : 61.24s CPU 61.90s WALL ( 1 calls)
electrons : 349.70s CPU 355.61s WALL ( 1 calls)
Called by init_run:
wfcinit : 12.13s CPU 12.40s WALL ( 1 calls)
potinit : 2.81s CPU 2.86s WALL ( 1 calls)
Called by electrons:
c_bands : 187.41s CPU 191.52s WALL ( 4 calls)
sum_band : 41.62s CPU 41.97s WALL ( 4 calls)
v_of_rho : 1.64s CPU 1.67s WALL ( 4 calls)
newd : 160.63s CPU 161.87s WALL ( 4 calls)
mix_rho : 0.29s CPU 0.37s WALL ( 4 calls)
Called by c_bands:
init_us_2 : 0.89s CPU 0.94s WALL ( 18 calls)
cegterg : 185.20s CPU 189.26s WALL ( 8 calls)
Called by *egterg:
h_psi : 76.20s CPU 77.02s WALL ( 44 calls)
s_psi : 9.08s CPU 9.13s WALL ( 44 calls)
g_psi : 0.36s CPU 0.37s WALL ( 34 calls)
cdiaghg : 67.73s CPU 68.02s WALL ( 40 calls)
Called by h_psi:
add_vuspsi : 8.99s CPU 9.07s WALL ( 44 calls)
General routines
calbec : 13.16s CPU 13.31s WALL ( 52 calls)
fft : 3.30s CPU 3.32s WALL ( 66 calls)
ffts : 0.03s CPU 0.03s WALL ( 8 calls)
fftw : 44.33s CPU 44.48s WALL ( 21614 calls)
interpolate : 0.27s CPU 0.27s WALL ( 8 calls)
davcio : 0.01s CPU 0.69s WALL ( 8 calls)
Parallel routines
fft_scatter : 27.57s CPU 27.99s WALL ( 21688 calls)
PWSCF : 6m51.29s CPU 7m 0.03s WALL
This run was terminated on: 12:57: 5 6Dec2013
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Trying a modified job script:
[hpc07@java2 ~]$ more job5.sh
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd $PBS_O_WORKDIR
mpirun /home/hpc07/espresso/bin/pw.x -input /home/hpc07/PW-AUSURF54/ausurf.in
The previous version looked like this (relative path to ausurf.in, with output piped through tee to optoutput.txt):
#!/bin/bash
#PBS -N test_pbs
#PBS -l nodes=1:ppn=8
#PBS -l walltime=2:00:00
module load openmpi
cd $PBS_O_WORKDIR
mpirun espresso/bin/pw.x -input PW-AUSURF54/ausurf.in | tee optoutput.txt
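The "| tee optoutput.txt" in the older script is what made the earlier "tail -f optoutput.txt" possible: tee duplicates pw.x's stdout to both the named file and the job's standard output (which PBS collects into the test_pbs.oNNN file). A tiny illustration, with a placeholder file name and a sample line:

```shell
# tee copies its stdin to the named file AND to stdout, so the same text
# ends up in both places. "demo_output.txt" is a throwaway name.
echo "total cpu time spent up to now is 62.3 secs" | tee demo_output.txt
cat demo_output.txt   # same line again, read back from the file
```

Dropping tee, as job5.sh does, simply leaves the PBS .oNNN file as the only copy of the output.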
Run the job script:
[hpc07@java2 ~]$ qsub job5.sh
813.java2.grid.lipi.go.id
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
796.java2 test_pbs hpc24 02:29:54 R nogpu
804.java2 test_pbs hpc03 01:16:00 R nogpu
808.java2 test_pbs hpc03 00:48:11 R nogpu
809.java2 test_pbs hpc21 00:37:09 R nogpu
813.java2 test_pbs hpc07 00:17:57 R nogpu
After the job finishes, the output file test_pbs.o813 appears:
[hpc07@java2 ~]$ ls
Au.pbe-nd-van.UPF sub_script5.sh test_pbs.o620
C_source sub_script.sh test_pbs.o631
espresso test_pbs.e620 test_pbs.o647
espresso.tar.gz test_pbs.e631 test_pbs.o751
Fortran_source test_pbs.e647 test_pbs.o758
Fortran_source_text_viz test_pbs.e751 test_pbs.o771
input_test_QE.tar.gz test_pbs.e758 test_pbs.o782
job5.sh test_pbs.e771 test_pbs.o785
Lab-Day1.tar.gz test_pbs.e782 test_pbs.o798
Lab-session-Day5.tar test_pbs.e785 test_pbs.o813
optoutput.txt test_pbs.e798 tmp
PW-AUSURF54 test_pbs.e813 transport_parallel.f90
View the contents of the output file:
[hpc07@java2 ~]$ more test_pbs.o813
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 13:22:14
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 8 processors
R & G space division: proc/nbgrp/npool/nimage = 8
Reading input from /home/hpc07/PW-AUSURF54/ausurf.in
Warning: card &IONS ignored
Warning: card ION_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
Subspace diagonalization in iterative solution of the eigenvalue problem:
a serial algorithm will be used
Found symmetry operation: I + ( 0.3333 0.0000 0.0000)
This is a supercell, fractional translations are disabled
Parallelization info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Min 532 266 70 58610 20700 2800
Max 533 267 71 58615 20720 2808
Sum 4257 2129 561 468901 165669 22421
Title:
DEISA pw benchmark
bravais-lattice index = 8
lattice parameter (alat) = 16.3533 a.u.
unit-cell volume = 9815.9181 (a.u.)^3
number of atoms/cell = 54
number of atomic types = 1
number of electrons = 594.00
number of Kohn-Sham states= 356
kinetic-energy cutoff = 25.0000 Ry
charge density cutoff = 200.0000 Ry
convergence threshold = 1.0E-06
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PW PBE PBE ( 1 4 3 4 0)
celldm(1)= 16.353258 celldm(2)= 1.000000 celldm(3)= 2.244492
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( 1.000000 0.000000 0.000000 )
a(2) = ( 0.000000 1.000000 0.000000 )
a(3) = ( 0.000000 0.000000 2.244492 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( 1.000000 0.000000 0.000000 )
b(2) = ( 0.000000 1.000000 0.000000 )
b(3) = ( 0.000000 0.000000 0.445535 )
PseudoPot. # 1 for Au read from file:
./Au.pbe-nd-van.UPF
MD5 check sum: deb5c07af10777505a79e28f5b4b4115
Pseudo is Ultrasoft + core correction, Zval = 11.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 985 points, 3 beta functions with:
l(1) = 1
l(2) = 2
l(3) = 2
Q(r) pseudized with 8 coefficients, rinner = 1.100 1.100 1.100
1.100 1.100
atomic species valence mass pseudopotential
Au 11.00 196.96000 Au( 1.00)
2 Sym. Ops. (no inversion) found
Cartesian axes
site n. atom positions (alat units)
1 Au tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
2 Au tau( 2) = ( 0.2222220 0.1111110 0.2721649 )
3 Au tau( 3) = ( 0.1111110 0.2222220 0.5443320 )
4 Au tau( 4) = ( 0.3333330 0.0000000 0.0000000 )
5 Au tau( 5) = ( 0.5555560 0.1111110 0.2721649 )
6 Au tau( 6) = ( 0.4444440 0.2222220 0.5443320 )
7 Au tau( 7) = ( 0.6666670 0.0000000 0.0000000 )
8 Au tau( 8) = ( 0.8888890 0.1111110 0.2721649 )
9 Au tau( 9) = ( 0.7777780 0.2222220 0.5443320 )
10 Au tau( 10) = ( 0.0000000 0.3333330 0.0000000 )
11 Au tau( 11) = ( 0.2222220 0.4444440 0.2721649 )
12 Au tau( 12) = ( 0.1111110 0.5555560 0.5443320 )
13 Au tau( 13) = ( 0.3333330 0.3333330 0.0000000 )
14 Au tau( 14) = ( 0.5555560 0.4444440 0.2721649 )
15 Au tau( 15) = ( 0.4444440 0.5555560 0.5443320 )
16 Au tau( 16) = ( 0.6666670 0.3333330 0.0000000 )
17 Au tau( 17) = ( 0.8888890 0.4444440 0.2721649 )
18 Au tau( 18) = ( 0.7777780 0.5555560 0.5443320 )
19 Au tau( 19) = ( 0.0000000 0.6666670 0.0000000 )
20 Au tau( 20) = ( 0.2222220 0.7777780 0.2721649 )
21 Au tau( 21) = ( 0.1111110 0.8888890 0.5443320 )
22 Au tau( 22) = ( 0.3333330 0.6666670 0.0000000 )
23 Au tau( 23) = ( 0.5555560 0.7777780 0.2721649 )
24 Au tau( 24) = ( 0.4444440 0.8888890 0.5443320 )
25 Au tau( 25) = ( 0.6666670 0.6666670 0.0000000 )
26 Au tau( 26) = ( 0.8888890 0.7777780 0.2721649 )
27 Au tau( 27) = ( 0.7777780 0.8888890 0.5443320 )
28 Au tau( 28) = ( 0.0000000 0.0000000 0.8164968 )
29 Au tau( 29) = ( 0.2222220 0.1111110 1.0886617 )
30 Au tau( 30) = ( 0.1111110 0.2222220 1.3608265 )
31 Au tau( 31) = ( 0.3333330 0.0000000 0.8164968 )
32 Au tau( 32) = ( 0.5555560 0.1111110 1.0886617 )
33 Au tau( 33) = ( 0.4444440 0.2222220 1.3608265 )
34 Au tau( 34) = ( 0.6666670 0.0000000 0.8164968 )
35 Au tau( 35) = ( 0.8888890 0.1111110 1.0886617 )
36 Au tau( 36) = ( 0.7777780 0.2222220 1.3608265 )
37 Au tau( 37) = ( 0.0000000 0.3333330 0.8164968 )
38 Au tau( 38) = ( 0.2222220 0.4444440 1.0886617 )
39 Au tau( 39) = ( 0.1111110 0.5555560 1.3608265 )
40 Au tau( 40) = ( 0.3333330 0.3333330 0.8164968 )
41 Au tau( 41) = ( 0.5555560 0.4444440 1.0886617 )
42 Au tau( 42) = ( 0.4444440 0.5555560 1.3608265 )
43 Au tau( 43) = ( 0.6666670 0.3333330 0.8164968 )
44 Au tau( 44) = ( 0.8888890 0.4444440 1.0886617 )
45 Au tau( 45) = ( 0.7777780 0.5555560 1.3608265 )
46 Au tau( 46) = ( 0.0000000 0.6666670 0.8164968 )
47 Au tau( 47) = ( 0.2222220 0.7777780 1.0886617 )
48 Au tau( 48) = ( 0.1111110 0.8888890 1.3608265 )
49 Au tau( 49) = ( 0.3333330 0.6666670 0.8164968 )
50 Au tau( 50) = ( 0.5555560 0.7777780 1.0886617 )
51 Au tau( 51) = ( 0.4444440 0.8888890 1.3608265 )
52 Au tau( 52) = ( 0.6666670 0.6666670 0.8164968 )
53 Au tau( 53) = ( 0.8888890 0.7777780 1.0886617 )
54 Au tau( 54) = ( 0.7777780 0.8888890 1.3608265 )
number of k points= 2 Marzari-Vanderbilt smearing, width (Ry)= 0.0500
cart. coord. in units 2pi/alat
k( 1) = ( 0.2500000 0.2500000 0.0000000), wk = 1.0000000
k( 2) = ( -0.2500000 0.2500000 0.0000000), wk = 1.0000000
Dense grid: 468901 G-vectors FFT dimensions: ( 75, 75, 180)
Smooth grid: 165669 G-vectors FFT dimensions: ( 54, 54, 120)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 14.13 Mb ( 2601, 356)
NL pseudopotentials 27.86 Mb ( 2601, 702)
Each V/rho on FFT grid 1.97 Mb ( 129375)
Each G-vector array 0.45 Mb ( 58614)
G-vector shells 0.14 Mb ( 18406)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 56.52 Mb ( 2601, 1424)
Each subspace H/S matrix 30.94 Mb ( 1424, 1424)
Each
Arrays for rho mixing 15.79 Mb ( 129375, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -1.450934
starting charge 593.37296, renormalised to 594.00000
negative rho (up, down): 1.452E+00 0.000E+00
Starting wfc are 486 randomized atomic wfcs
total cpu time spent up to now is 40.6 secs
per-process dynamical memory: 102.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 3.0
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 4.67E-04, avg # of iterations = 2.0
negative rho (up, down): 1.438E+00 0.000E+00
total cpu time spent up to now is 130.1 secs
total energy = -5507.14530794 Ry
Harris-Foulkes estimate = -5509.64398749 Ry
estimated scf accuracy < 3.10048687 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.434E+00 0.000E+00
total cpu time spent up to now is 234.8 secs
total energy = -5480.57093516 Ry
Harris-Foulkes estimate = -5525.11985817 Ry
estimated scf accuracy < 723.97866323 Ry
iteration # 3 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.446E+00 0.000E+00
total cpu time spent up to now is 335.2 secs
total energy = -5507.87863501 Ry
Harris-Foulkes estimate = -5510.20956162 Ry
estimated scf accuracy < 8.18583953 Ry
End of self-consistent calculation
convergence NOT achieved after 3 iterations: stopping
Writing output data file ausurf.save
init_run : 39.77s CPU 40.33s WALL ( 1 calls)
electrons : 289.96s CPU 295.00s WALL ( 1 calls)
Called by init_run:
wfcinit : 10.38s CPU 10.62s WALL ( 1 calls)
potinit : 2.66s CPU 2.70s WALL ( 1 calls)
Called by electrons:
c_bands : 173.99s CPU 177.80s WALL ( 4 calls)
sum_band : 31.85s CPU 32.06s WALL ( 4 calls)
v_of_rho : 1.53s CPU 1.55s WALL ( 4 calls)
newd : 104.69s CPU 105.46s WALL ( 4 calls)
mix_rho : 0.21s CPU 0.21s WALL ( 4 calls)
Called by c_bands:
init_us_2 : 0.91s CPU 0.95s WALL ( 18 calls)
cegterg : 171.95s CPU 175.70s WALL ( 8 calls)
Called by *egterg:
h_psi : 60.97s CPU 61.58s WALL ( 44 calls)
s_psi : 9.63s CPU 9.66s WALL ( 44 calls)
g_psi : 0.43s CPU 0.44s WALL ( 34 calls)
cdiaghg : 67.68s CPU 67.95s WALL ( 40 calls)
Called by h_psi:
add_vuspsi : 9.38s CPU 9.44s WALL ( 44 calls)
General routines
calbec : 13.11s CPU 13.16s WALL ( 52 calls)
fft : 1.28s CPU 1.31s WALL ( 66 calls)
ffts : 0.02s CPU 0.02s WALL ( 8 calls)
fftw : 32.84s CPU 33.23s WALL ( 21614 calls)
interpolate : 0.22s CPU 0.22s WALL ( 8 calls)
davcio : 0.01s CPU 0.26s WALL ( 8 calls)
Parallel routines
fft_scatter : 14.89s CPU 15.25s WALL ( 21688 calls)
PWSCF : 5m30.03s CPU 5m37.77s WALL
This run was terminated on: 13:27:52 6Dec2013
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Unload the openblas module so that only openmpi remains loaded:
[hpc07@java2 ~]$ module unload openblas
[hpc07@java2 ~]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5
[hpc07@java2 ~]$ qsub job5.sh
825.java2.grid.lipi.go.id
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
796.java2 test_pbs hpc24 04:17:02 R nogpu
804.java2 test_pbs hpc03 03:03:24 R nogpu
808.java2 test_pbs hpc03 02:08:41 R nogpu
809.java2 test_pbs hpc21 02:24:13 R nogpu
817.java2 test_pbs hpc23 01:36:36 R nogpu
818.java2 QE_test hpc12 01:35:40 R nogpu
822.java2 test_pbs hpc08 00:44:06 R nogpu
823.java2 common hpc05 00:39:44 R nogpu
825.java2 test_pbs hpc07 00:32:37 R nogpu
828.java2 exp-kopi-22 hpc22 00:14:46 R nogpu
830.java2 exp-kopi-22 hpc22 0 R nogpu
831.java2 test_pbs hpc27 0 R nogpu
[hpc07@java2 ~]$ ls
Au.pbe-nd-van.UPF optoutput.txt test_pbs.e782 test_pbs.o771
C_source PW-AUSURF54 test_pbs.e785 test_pbs.o782
espresso sub_script5.sh test_pbs.e798 test_pbs.o785
espresso.tar.gz sub_script.sh test_pbs.e813 test_pbs.o798
Fortran_source test_pbs.e620 test_pbs.e825 test_pbs.o813
Fortran_source_text_viz test_pbs.e631 test_pbs.o620 test_pbs.o825
input_test_QE.tar.gz test_pbs.e647 test_pbs.o631 tmp
job5.sh test_pbs.e751 test_pbs.o647 transport_parallel.f90
Lab-Day1.tar.gz test_pbs.e758 test_pbs.o751
Lab-session-Day5.tar test_pbs.e771 test_pbs.o758
[hpc07@java2 ~]$ more test_pbs.o825
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 13:34:11
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 8 processors
R & G space division: proc/nbgrp/npool/nimage = 8
Reading input from /home/hpc07/PW-AUSURF54/ausurf.in
Warning: card &IONS ignored
Warning: card ION_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
Subspace diagonalization in iterative solution of the eigenvalue problem:
a serial algorithm will be used
Found symmetry operation: I + ( 0.3333 0.0000 0.0000)
This is a supercell, fractional translations are disabled
Parallelization info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Min 532 266 70 58610 20700 2800
Max 533 267 71 58615 20720 2808
Sum 4257 2129 561 468901 165669 22421
Title:
DEISA pw benchmark
bravais-lattice index = 8
lattice parameter (alat) = 16.3533 a.u.
unit-cell volume = 9815.9181 (a.u.)^3
number of atoms/cell = 54
number of atomic types = 1
number of electrons = 594.00
number of Kohn-Sham states= 356
kinetic-energy cutoff = 25.0000 Ry
charge density cutoff = 200.0000 Ry
convergence threshold = 1.0E-06
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PW PBE PBE ( 1 4 3 4 0)
celldm(1)= 16.353258 celldm(2)= 1.000000 celldm(3)= 2.244492
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( 1.000000 0.000000 0.000000 )
a(2) = ( 0.000000 1.000000 0.000000 )
a(3) = ( 0.000000 0.000000 2.244492 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( 1.000000 0.000000 0.000000 )
b(2) = ( 0.000000 1.000000 0.000000 )
b(3) = ( 0.000000 0.000000 0.445535 )
PseudoPot. # 1 for Au read from file:
./Au.pbe-nd-van.UPF
MD5 check sum: deb5c07af10777505a79e28f5b4b4115
Pseudo is Ultrasoft + core correction, Zval = 11.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 985 points, 3 beta functions with:
l(1) = 1
l(2) = 2
l(3) = 2
Q(r) pseudized with 8 coefficients, rinner = 1.100 1.100 1.100
1.100 1.100
atomic species valence mass pseudopotential
Au 11.00 196.96000 Au( 1.00)
2 Sym. Ops. (no inversion) found
Cartesian axes
site n. atom positions (alat units)
1 Au tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
2 Au tau( 2) = ( 0.2222220 0.1111110 0.2721649 )
3 Au tau( 3) = ( 0.1111110 0.2222220 0.5443320 )
4 Au tau( 4) = ( 0.3333330 0.0000000 0.0000000 )
5 Au tau( 5) = ( 0.5555560 0.1111110 0.2721649 )
6 Au tau( 6) = ( 0.4444440 0.2222220 0.5443320 )
7 Au tau( 7) = ( 0.6666670 0.0000000 0.0000000 )
8 Au tau( 8) = ( 0.8888890 0.1111110 0.2721649 )
9 Au tau( 9) = ( 0.7777780 0.2222220 0.5443320 )
10 Au tau( 10) = ( 0.0000000 0.3333330 0.0000000 )
11 Au tau( 11) = ( 0.2222220 0.4444440 0.2721649 )
12 Au tau( 12) = ( 0.1111110 0.5555560 0.5443320 )
13 Au tau( 13) = ( 0.3333330 0.3333330 0.0000000 )
14 Au tau( 14) = ( 0.5555560 0.4444440 0.2721649 )
15 Au tau( 15) = ( 0.4444440 0.5555560 0.5443320 )
16 Au tau( 16) = ( 0.6666670 0.3333330 0.0000000 )
17 Au tau( 17) = ( 0.8888890 0.4444440 0.2721649 )
18 Au tau( 18) = ( 0.7777780 0.5555560 0.5443320 )
19 Au tau( 19) = ( 0.0000000 0.6666670 0.0000000 )
20 Au tau( 20) = ( 0.2222220 0.7777780 0.2721649 )
21 Au tau( 21) = ( 0.1111110 0.8888890 0.5443320 )
22 Au tau( 22) = ( 0.3333330 0.6666670 0.0000000 )
23 Au tau( 23) = ( 0.5555560 0.7777780 0.2721649 )
24 Au tau( 24) = ( 0.4444440 0.8888890 0.5443320 )
25 Au tau( 25) = ( 0.6666670 0.6666670 0.0000000 )
26 Au tau( 26) = ( 0.8888890 0.7777780 0.2721649 )
27 Au tau( 27) = ( 0.7777780 0.8888890 0.5443320 )
28 Au tau( 28) = ( 0.0000000 0.0000000 0.8164968 )
29 Au tau( 29) = ( 0.2222220 0.1111110 1.0886617 )
30 Au tau( 30) = ( 0.1111110 0.2222220 1.3608265 )
31 Au tau( 31) = ( 0.3333330 0.0000000 0.8164968 )
32 Au tau( 32) = ( 0.5555560 0.1111110 1.0886617 )
33 Au tau( 33) = ( 0.4444440 0.2222220 1.3608265 )
34 Au tau( 34) = ( 0.6666670 0.0000000 0.8164968 )
35 Au tau( 35) = ( 0.8888890 0.1111110 1.0886617 )
36 Au tau( 36) = ( 0.7777780 0.2222220 1.3608265 )
37 Au tau( 37) = ( 0.0000000 0.3333330 0.8164968 )
38 Au tau( 38) = ( 0.2222220 0.4444440 1.0886617 )
39 Au tau( 39) = ( 0.1111110 0.5555560 1.3608265 )
40 Au tau( 40) = ( 0.3333330 0.3333330 0.8164968 )
41 Au tau( 41) = ( 0.5555560 0.4444440 1.0886617 )
42 Au tau( 42) = ( 0.4444440 0.5555560 1.3608265 )
43 Au tau( 43) = ( 0.6666670 0.3333330 0.8164968 )
44 Au tau( 44) = ( 0.8888890 0.4444440 1.0886617 )
45 Au tau( 45) = ( 0.7777780 0.5555560 1.3608265 )
46 Au tau( 46) = ( 0.0000000 0.6666670 0.8164968 )
47 Au tau( 47) = ( 0.2222220 0.7777780 1.0886617 )
48 Au tau( 48) = ( 0.1111110 0.8888890 1.3608265 )
49 Au tau( 49) = ( 0.3333330 0.6666670 0.8164968 )
50 Au tau( 50) = ( 0.5555560 0.7777780 1.0886617 )
51 Au tau( 51) = ( 0.4444440 0.8888890 1.3608265 )
52 Au tau( 52) = ( 0.6666670 0.6666670 0.8164968 )
53 Au tau( 53) = ( 0.8888890 0.7777780 1.0886617 )
54 Au tau( 54) = ( 0.7777780 0.8888890 1.3608265 )
number of k points= 2 Marzari-Vanderbilt smearing, width (Ry)= 0.0500
cart. coord. in units 2pi/alat
k( 1) = ( 0.2500000 0.2500000 0.0000000), wk = 1.0000000
k( 2) = ( -0.2500000 0.2500000 0.0000000), wk = 1.0000000
Dense grid: 468901 G-vectors FFT dimensions: ( 75, 75, 180)
Smooth grid: 165669 G-vectors FFT dimensions: ( 54, 54, 120)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 14.13 Mb ( 2601, 356)
NL pseudopotentials 27.86 Mb ( 2601, 702)
Each V/rho on FFT grid 1.97 Mb ( 129375)
Each G-vector array 0.45 Mb ( 58614)
G-vector shells 0.14 Mb ( 18406)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 56.52 Mb ( 2601, 1424)
Each subspace H/S matrix 30.94 Mb ( 1424, 1424)
Each
Arrays for rho mixing 15.79 Mb ( 129375, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -1.450934
starting charge 593.37296, renormalised to 594.00000
negative rho (up, down): 1.452E+00 0.000E+00
Starting wfc are 486 randomized atomic wfcs
total cpu time spent up to now is 40.6 secs
per-process dynamical memory: 102.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 3.0
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 4.67E-04, avg # of iterations = 2.0
negative rho (up, down): 1.438E+00 0.000E+00
total cpu time spent up to now is 123.6 secs
total energy = -5507.14530794 Ry
Harris-Foulkes estimate = -5509.64398749 Ry
estimated scf accuracy < 3.10048687 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.434E+00 0.000E+00
total cpu time spent up to now is 220.2 secs
total energy = -5480.57093516 Ry
Harris-Foulkes estimate = -5525.11985817 Ry
estimated scf accuracy < 723.97866323 Ry
iteration # 3 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.446E+00 0.000E+00
total cpu time spent up to now is 312.6 secs
total energy = -5507.87863501 Ry
Harris-Foulkes estimate = -5510.20956162 Ry
estimated scf accuracy < 8.18583953 Ry
End of self-consistent calculation
convergence NOT achieved after 3 iterations: stopping
Writing output data file ausurf.save
init_run : 39.54s CPU 40.26s WALL ( 1 calls)
electrons : 266.99s CPU 272.51s WALL ( 1 calls)
Called by init_run:
wfcinit : 10.25s CPU 10.69s WALL ( 1 calls)
potinit : 2.67s CPU 2.70s WALL ( 1 calls)
Called by electrons:
c_bands : 171.69s CPU 175.62s WALL ( 4 calls)
sum_band : 30.16s CPU 30.58s WALL ( 4 calls)
v_of_rho : 1.62s CPU 1.64s WALL ( 4 calls)
newd : 85.50s CPU 86.30s WALL ( 4 calls)
mix_rho : 0.19s CPU 0.19s WALL ( 4 calls)
Called by c_bands:
init_us_2 : 0.99s CPU 1.05s WALL ( 18 calls)
cegterg : 169.66s CPU 173.53s WALL ( 8 calls)
Called by *egterg:
h_psi : 60.42s CPU 61.32s WALL ( 44 calls)
s_psi : 9.56s CPU 9.63s WALL ( 44 calls)
g_psi : 0.36s CPU 0.37s WALL ( 34 calls)
cdiaghg : 66.61s CPU 66.88s WALL ( 40 calls)
Called by h_psi:
add_vuspsi : 9.84s CPU 9.96s WALL ( 44 calls)
General routines
calbec : 12.62s CPU 12.72s WALL ( 52 calls)
fft : 1.23s CPU 1.25s WALL ( 66 calls)
ffts : 0.02s CPU 0.02s WALL ( 8 calls)
fftw : 31.07s CPU 31.83s WALL ( 21614 calls)
interpolate : 0.20s CPU 0.20s WALL ( 8 calls)
davcio : 0.00s CPU 0.26s WALL ( 8 calls)
Parallel routines
fft_scatter : 12.97s CPU 13.46s WALL ( 21688 calls)
PWSCF : 5m 6.87s CPU 5m15.26s WALL
This run was terminated on: 13:39:26 6Dec2013
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Trying step 1
[hpc07@java2 ~]$ pwd
/home/hpc07
[hpc07@java2 ~]$ cd espresso
[hpc07@java2 espresso]$ module list
Currently Loaded Modulefiles:
1) openmpi/1.6.5
Recompile
[hpc07@java2 espresso]$ ./configure
checking build system type... x86_64-unknown-linux-gnu
detected architecture... x86_64
checking for ifort... no
checking for pgf90... no
checking for pathf95... no
checking for sunf95... no
checking for openf95... no
... and so on ....
espresso/lapack-3.2/lapack.a /home/hpc07/espresso/BLAS/blas.a
( cd ../../bin ; ln -fs ../PW/tools/kpoints.x . )
mpif90 -g -pthread -o pwi2xsf.x \
pwi2xsf.o ../src/libpw.a ../../Modules/libqemod.a ../../flib/ptools.a ../../flib/flib.a ../../clib/clib.a ../../iotk/src/libiotk.a /home/hpc07/espresso/lapack-3.2/lapack.a /home/hpc07/espresso/BLAS/blas.a
( cd ../../bin ; ln -fs ../PW/tools/pwi2xsf.x . )
make[2]: Leaving directory `/home/hpc07/espresso/PW/tools'
make[1]: Leaving directory `/home/hpc07/espresso/PW'
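The transcript above shows the step-1 (common user) build: plain `./configure` with only the openmpi module loaded, so the bundled reference BLAS/LAPACK get compiled and linked. For comparison, the step-2 (skilled user) rebuild described in the exercise would instead be driven like this (commands taken from the exercise text; note that "-O3" uses the capital letter O, not zero):

```shell
# Skilled-user rebuild (step 2): link against OpenBLAS and enable AVX
# auto-vectorization for the Sandy Bridge CPUs.
module load openmpi openblas
./configure FFLAGS="-O3 -mavx" FCFLAGS="-O3 -mavx" CFLAGS="-O3 -mavx"
make pw
```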
[hpc07@java2 espresso]$ cd ..
[hpc07@java2 ~]$ qsub job5.sh
863.java2.grid.lipi.go.id
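The contents of job5.sh are not shown in the log. A minimal sketch of such a PBS submission script, assuming the `nogpu` queue visible in the qstat output below and a `QE_DIR` variable pointing at the build tree, might look like:

```shell
#!/bin/bash
# Hypothetical sketch of job5.sh -- the actual script is not shown in the log.
#PBS -N test_pbs
#PBS -q nogpu
#PBS -l nodes=1:ppn=8

cd $PBS_O_WORKDIR          # run from the directory holding ausurf.in (note c)
module load openmpi        # same environment used for the build

QE_DIR=$HOME/espresso      # assumed install location
mpirun -np 8 $QE_DIR/bin/pw.x -input ausurf.in
```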
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
822.java2 test_pbs hpc08 03:30:19 R nogpu
828.java2 exp-kopi-22 hpc22 03:01:03 R nogpu
841.java2 ps hpc03 01:27:49 R nogpu
859.java2 test_pbs hpc23 00:27:25 R nogpu
861.java2 pt hpc03 00:05:20 R nogpu
863.java2 test_pbs hpc07 0 R nogpu
[hpc07@java2 ~]$ qstat
Job id Name User Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
863.java2 test_pbs hpc07 03:11:48 R nogpu
864.java2 ps hpc03 02:35:21 R nogpu
865.java2 common hpc05 01:58:19 R nogpu
866.java2 tes_pbs hpc13 00:21:38 R nogpu
[hpc07@java2 ~]$ ls
Au.pbe-nd-van.UPF optoutput.txt test_pbs.e782 test_pbs.o758
C_source PW-AUSURF54 test_pbs.e785 test_pbs.o771
espresso sub_script5.sh test_pbs.e798 test_pbs.o782
espresso.tar.gz sub_script.sh test_pbs.e813 test_pbs.o785
Fortran_source test_pbs.e620 test_pbs.e825 test_pbs.o798
Fortran_source_text_viz test_pbs.e631 test_pbs.e863 test_pbs.o813
input_test_QE.tar.gz test_pbs.e647 test_pbs.o620 test_pbs.o825
job5.sh test_pbs.e751 test_pbs.o631 test_pbs.o863
Lab-Day1.tar.gz test_pbs.e758 test_pbs.o647 tmp
Lab-session-Day5.tar test_pbs.e771 test_pbs.o751 transport_parallel.f90
[hpc07@java2 ~]$ more test_pbs.o863
Program PWSCF v.5.0.2 (svn rev. 10630) starts on 6Dec2013 at 13:59:38
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 8 processors
R & G space division: proc/nbgrp/npool/nimage = 8
Reading input from /home/hpc07/PW-AUSURF54/ausurf.in
Warning: card &IONS ignored
Warning: card ION_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE' ignored
Warning: card / ignored
Current dimensions of program PWSCF are:
Max number of different atomic species (ntypx) = 10
Max number of k-points (npk) = 40000
Max angular momentum in pseudopotentials (lmaxx) = 3
Subspace diagonalization in iterative solution of the eigenvalue problem:
a serial algorithm will be used
Found symmetry operation: I + ( 0.3333 0.0000 0.0000)
This is a supercell, fractional translations are disabled
Parallelization info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Min 532 266 70 58610 20700 2800
Max 533 267 71 58615 20720 2808
Sum 4257 2129 561 468901 165669 22421
Title:
DEISA pw benchmark
bravais-lattice index = 8
lattice parameter (alat) = 16.3533 a.u.
unit-cell volume = 9815.9181 (a.u.)^3
number of atoms/cell = 54
number of atomic types = 1
number of electrons = 594.00
number of Kohn-Sham states= 356
kinetic-energy cutoff = 25.0000 Ry
charge density cutoff = 200.0000 Ry
convergence threshold = 1.0E-06
mixing beta = 0.7000
number of iterations used = 8 plain mixing
Exchange-correlation = SLA PW PBE PBE ( 1 4 3 4 0)
celldm(1)= 16.353258 celldm(2)= 1.000000 celldm(3)= 2.244492
celldm(4)= 0.000000 celldm(5)= 0.000000 celldm(6)= 0.000000
crystal axes: (cart. coord. in units of alat)
a(1) = ( 1.000000 0.000000 0.000000 )
a(2) = ( 0.000000 1.000000 0.000000 )
a(3) = ( 0.000000 0.000000 2.244492 )
reciprocal axes: (cart. coord. in units 2 pi/alat)
b(1) = ( 1.000000 0.000000 0.000000 )
b(2) = ( 0.000000 1.000000 0.000000 )
b(3) = ( 0.000000 0.000000 0.445535 )
PseudoPot. # 1 for Au read from file:
./Au.pbe-nd-van.UPF
MD5 check sum: deb5c07af10777505a79e28f5b4b4115
Pseudo is Ultrasoft + core correction, Zval = 11.0
Generated by new atomic code, or converted to UPF format
Using radial grid of 985 points, 3 beta functions with:
l(1) = 1
l(2) = 2
l(3) = 2
Q(r) pseudized with 8 coefficients, rinner = 1.100 1.100 1.100
1.100 1.100
atomic species valence mass pseudopotential
Au 11.00 196.96000 Au( 1.00)
2 Sym. Ops. (no inversion) found
Cartesian axes
site n. atom positions (alat units)
1 Au tau( 1) = ( 0.0000000 0.0000000 0.0000000 )
2 Au tau( 2) = ( 0.2222220 0.1111110 0.2721649 )
3 Au tau( 3) = ( 0.1111110 0.2222220 0.5443320 )
4 Au tau( 4) = ( 0.3333330 0.0000000 0.0000000 )
5 Au tau( 5) = ( 0.5555560 0.1111110 0.2721649 )
6 Au tau( 6) = ( 0.4444440 0.2222220 0.5443320 )
7 Au tau( 7) = ( 0.6666670 0.0000000 0.0000000 )
8 Au tau( 8) = ( 0.8888890 0.1111110 0.2721649 )
9 Au tau( 9) = ( 0.7777780 0.2222220 0.5443320 )
10 Au tau( 10) = ( 0.0000000 0.3333330 0.0000000 )
11 Au tau( 11) = ( 0.2222220 0.4444440 0.2721649 )
12 Au tau( 12) = ( 0.1111110 0.5555560 0.5443320 )
13 Au tau( 13) = ( 0.3333330 0.3333330 0.0000000 )
14 Au tau( 14) = ( 0.5555560 0.4444440 0.2721649 )
15 Au tau( 15) = ( 0.4444440 0.5555560 0.5443320 )
16 Au tau( 16) = ( 0.6666670 0.3333330 0.0000000 )
17 Au tau( 17) = ( 0.8888890 0.4444440 0.2721649 )
18 Au tau( 18) = ( 0.7777780 0.5555560 0.5443320 )
19 Au tau( 19) = ( 0.0000000 0.6666670 0.0000000 )
20 Au tau( 20) = ( 0.2222220 0.7777780 0.2721649 )
21 Au tau( 21) = ( 0.1111110 0.8888890 0.5443320 )
22 Au tau( 22) = ( 0.3333330 0.6666670 0.0000000 )
23 Au tau( 23) = ( 0.5555560 0.7777780 0.2721649 )
24 Au tau( 24) = ( 0.4444440 0.8888890 0.5443320 )
25 Au tau( 25) = ( 0.6666670 0.6666670 0.0000000 )
26 Au tau( 26) = ( 0.8888890 0.7777780 0.2721649 )
27 Au tau( 27) = ( 0.7777780 0.8888890 0.5443320 )
28 Au tau( 28) = ( 0.0000000 0.0000000 0.8164968 )
29 Au tau( 29) = ( 0.2222220 0.1111110 1.0886617 )
30 Au tau( 30) = ( 0.1111110 0.2222220 1.3608265 )
31 Au tau( 31) = ( 0.3333330 0.0000000 0.8164968 )
32 Au tau( 32) = ( 0.5555560 0.1111110 1.0886617 )
33 Au tau( 33) = ( 0.4444440 0.2222220 1.3608265 )
34 Au tau( 34) = ( 0.6666670 0.0000000 0.8164968 )
35 Au tau( 35) = ( 0.8888890 0.1111110 1.0886617 )
36 Au tau( 36) = ( 0.7777780 0.2222220 1.3608265 )
37 Au tau( 37) = ( 0.0000000 0.3333330 0.8164968 )
38 Au tau( 38) = ( 0.2222220 0.4444440 1.0886617 )
39 Au tau( 39) = ( 0.1111110 0.5555560 1.3608265 )
40 Au tau( 40) = ( 0.3333330 0.3333330 0.8164968 )
41 Au tau( 41) = ( 0.5555560 0.4444440 1.0886617 )
42 Au tau( 42) = ( 0.4444440 0.5555560 1.3608265 )
43 Au tau( 43) = ( 0.6666670 0.3333330 0.8164968 )
44 Au tau( 44) = ( 0.8888890 0.4444440 1.0886617 )
45 Au tau( 45) = ( 0.7777780 0.5555560 1.3608265 )
46 Au tau( 46) = ( 0.0000000 0.6666670 0.8164968 )
47 Au tau( 47) = ( 0.2222220 0.7777780 1.0886617 )
48 Au tau( 48) = ( 0.1111110 0.8888890 1.3608265 )
49 Au tau( 49) = ( 0.3333330 0.6666670 0.8164968 )
50 Au tau( 50) = ( 0.5555560 0.7777780 1.0886617 )
51 Au tau( 51) = ( 0.4444440 0.8888890 1.3608265 )
52 Au tau( 52) = ( 0.6666670 0.6666670 0.8164968 )
53 Au tau( 53) = ( 0.8888890 0.7777780 1.0886617 )
54 Au tau( 54) = ( 0.7777780 0.8888890 1.3608265 )
number of k points= 2 Marzari-Vanderbilt smearing, width (Ry)= 0.0500
cart. coord. in units 2pi/alat
k( 1) = ( 0.2500000 0.2500000 0.0000000), wk = 1.0000000
k( 2) = ( -0.2500000 0.2500000 0.0000000), wk = 1.0000000
Dense grid: 468901 G-vectors FFT dimensions: ( 75, 75, 180)
Smooth grid: 165669 G-vectors FFT dimensions: ( 54, 54, 120)
Largest allocated arrays est. size (Mb) dimensions
Kohn-Sham Wavefunctions 14.13 Mb ( 2601, 356)
NL pseudopotentials 27.86 Mb ( 2601, 702)
Each V/rho on FFT grid 1.97 Mb ( 129375)
Each G-vector array 0.45 Mb ( 58614)
G-vector shells 0.14 Mb ( 18406)
Largest temporary arrays est. size (Mb) dimensions
Auxiliary wavefunctions 56.52 Mb ( 2601, 1424)
Each subspace H/S matrix 30.94 Mb ( 1424, 1424)
Each
Arrays for rho mixing 15.79 Mb ( 129375, 8)
Initial potential from superposition of free atoms
Check: negative starting charge= -1.450934
starting charge 593.37296, renormalised to 594.00000
negative rho (up, down): 1.452E+00 0.000E+00
Starting wfc are 486 randomized atomic wfcs
total cpu time spent up to now is 109.2 secs
per-process dynamical memory: 102.1 Mb
Self-consistent Calculation
iteration # 1 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 1.00E-02, avg # of iterations = 3.0
Threshold (ethr) on eigenvalues was too large:
Diagonalizing with lowered threshold
Davidson diagonalization with overlap
ethr = 4.67E-04, avg # of iterations = 2.0
negative rho (up, down): 1.438E+00 0.000E+00
total cpu time spent up to now is 508.6 secs
total energy = -5507.14530794 Ry
Harris-Foulkes estimate = -5509.64398749 Ry
estimated scf accuracy < 3.10048687 Ry
iteration # 2 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.434E+00 0.000E+00
total cpu time spent up to now is 1122.7 secs
total energy = -5480.57093516 Ry
Harris-Foulkes estimate = -5525.11985817 Ry
estimated scf accuracy < 723.97866323 Ry
iteration # 3 ecut= 25.00 Ry beta=0.70
Davidson diagonalization with overlap
ethr = 5.22E-04, avg # of iterations = 6.0
negative rho (up, down): 1.446E+00 0.000E+00
total cpu time spent up to now is 1618.9 secs
total energy = -5507.87863501 Ry
Harris-Foulkes estimate = -5510.20956162 Ry
estimated scf accuracy < 8.18583953 Ry
End of self-consistent calculation
convergence NOT achieved after 3 iterations: stopping
Writing output data file ausurf.save
init_run : 105.77s CPU 107.71s WALL ( 1 calls)
electrons : 1482.58s CPU 1510.20s WALL ( 1 calls)
Called by init_run:
wfcinit : 75.82s CPU 77.49s WALL ( 1 calls)
potinit : 3.29s CPU 3.37s WALL ( 1 calls)
Called by electrons:
c_bands : 1346.26s CPU 1370.46s WALL ( 4 calls)
sum_band : 71.82s CPU 73.24s WALL ( 4 calls)
v_of_rho : 1.53s CPU 1.58s WALL ( 4 calls)
newd : 84.15s CPU 85.80s WALL ( 4 calls)
mix_rho : 0.18s CPU 0.18s WALL ( 4 calls)
Called by c_bands:
init_us_2 : 0.53s CPU 0.56s WALL ( 18 calls)
cegterg : 1341.72s CPU 1365.86s WALL ( 8 calls)
Called by *egterg:
h_psi : 335.45s CPU 343.93s WALL ( 44 calls)
s_psi : 101.33s CPU 103.80s WALL ( 44 calls)
g_psi : 0.22s CPU 0.22s WALL ( 34 calls)
cdiaghg : 210.23s CPU 212.29s WALL ( 40 calls)
Called by h_psi:
add_vuspsi : 134.95s CPU 138.38s WALL ( 44 calls)
General routines
calbec : 201.32s CPU 205.65s WALL ( 52 calls)
fft : 1.49s CPU 1.51s WALL ( 66 calls)
ffts : 0.02s CPU 0.02s WALL ( 8 calls)
fftw : 34.99s CPU 36.53s WALL ( 21614 calls)
interpolate : 0.16s CPU 0.17s WALL ( 8 calls)
davcio : 0.01s CPU 0.29s WALL ( 8 calls)
Parallel routines
fft_scatter : 16.63s CPU 18.11s WALL ( 21688 calls)
PWSCF : 26m29.80s CPU 27m 1.53s WALL
This run was terminated on: 14:26:39 6Dec2013
=------------------------------------------------------------------------------=
JOB DONE.
=------------------------------------------------------------------------------=
Parallel Programming: http://www.citutor.org/login.php