| Views: 2788 | Replies: 10 |
wmy8802217, member (regular writer)
[Help] vasp 5.3.3 parallel build error (4 people have replied)
Parallel build with mpich2 fails. What should I do?

scala.o: In function `scala_mp_ppotrf_trtri_':
scala.f90:(.text+0x1ba): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1dc): undefined reference to `numroc_'
scala.f90:(.text+0x205): undefined reference to `numroc_'
scala.f90:(.text+0x63d): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x679): undefined reference to `pzpotrf_'
scala.f90:(.text+0x79b): undefined reference to `pztrtri_'
scala.f90:(.text+0x7ed): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x80f): undefined reference to `numroc_'
scala.f90:(.text+0x838): undefined reference to `numroc_'
scala.o: In function `scala_mp_pdssyex_zheevx_':
scala.f90:(.text+0x14cb): undefined reference to `pzheevx_'
scala.f90:(.text+0x191f): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1940): undefined reference to `numroc_'
scala.f90:(.text+0x1967): undefined reference to `numroc_'
scala.f90:(.text+0x1f50): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1f71): undefined reference to `numroc_'
scala.f90:(.text+0x1f95): undefined reference to `numroc_'
scala.o: In function `scala_mp_pssyex_cheevx_':
scala.f90:(.text+0x2d34): undefined reference to `pzheevx_'
scala.f90:(.text+0x318b): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x31ac): undefined reference to `numroc_'
scala.f90:(.text+0x31d3): undefined reference to `numroc_'
scala.f90:(.text+0x3923): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x3944): undefined reference to `numroc_'
scala.f90:(.text+0x396b): undefined reference to `numroc_'
scala.o: In function `scala_mp_pssyex_cheevx_single_':
scala.f90:(.text+0x4733): undefined reference to `pcheevx_'
scala.f90:(.text+0x4c44): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x4c65): undefined reference to `numroc_'
scala.f90:(.text+0x4c89): undefined reference to `numroc_'
scala.f90:(.text+0x5332): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x5353): undefined reference to `numroc_'
scala.f90:(.text+0x537a): undefined reference to `numroc_'
scala.o: In function `scala_mp_distri_':
scala.f90:(.text+0x5c32): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x5c50): undefined reference to `numroc_'
scala.f90:(.text+0x5c70): undefined reference to `numroc_'
scala.o: In function `scala_mp_distri_single_':
......

Below is the makefile:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS (libgoto) and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
# For Athlon we recommend
#  ) to link against libgoto (and mkl as a backup for missing routines)
#  ) odd enough link in libfftw3xf_intel.a (fftw interface for mkl)
# feedback is greatly appreciated
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
# - ifc.7.1  works stable, somewhat slow but reliably
# - ifc.8.1  fails to compile the code properly
# - ifc.9.1  recommended (both for 32 and 64 bit)
# - ifc.10.1 partially recommended (both for 32 and 64 bit)
#   tested build 20080312 Package ID: l_fc_p_10.1.015
#   the gamma only mpi version cannot be compiled
#   using ifc.10.1
# - ifc.11.1 partially recommended (some problems with Gamma only and intel fftw)
#   Build 20090630 Package ID: l_cprof_p_11.1.046
# - ifort.12.1 strongly recommended (we use this to compile vasp)
#   Version 12.1.5.339 Build 20120612
#
# it might be required to change some of the library paths, since
# LINUX installations vary a lot
#
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#    retrieve the lapackage from ftp.netlib.org
#    and compile the blas routines (BLAS/SRC directory)
#    please use g77 or f77 for the compilation. When I tried to
#    use pgf77 or pgf90 for BLAS, VASP hung up when calling
#    ZHEEV (however this was with lapack 1.1; now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 2a) Intel's own optimised BLAS (PIII, P4, PD, PC2, Itanium)
#     http://developer.intel.com/software/products/mkl/
#     this is really excellent, if you use Intel CPUs
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
#     around 30 GFlops on Quad core)
#     Kazushige Goto's BLAS
#     http://www.cs.utexas.edu/users/kgoto/signup_first.html
#     http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf      charge density   reduced in X direction
# wNGXhalf     gamma point only reduced in X direction
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------

#CPP    = $(CPP_) -DHOST=\"LinuxIFC\" \
          -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#          -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

FFLAGS = -FR -names lowercase -assume byterecl

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend to use mkl, that is simple and most likely
# fastest on Intel based machines
#-----------------------------------------------------------------------

# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t

# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64

MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/

# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide

# faster linking and available from at least version 11
BLAS= -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread

# LAPACK, use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o

# LAPACK from mkl, usually faster and contains scaLAPACK as well
LAPACK= /opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_intel_lp64.a

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!
#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------

#LIB = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o $(LAPACK) \
      $(BLAS)

# options for linking, nothing is required (usually)
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
#                 general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf      charge density   reduced in Z direction
# wNGZhalf     gamma point only reduced in Z direction
# scaLAPACK    use scaLAPACK (recommended if mkl is available)
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------

#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
      -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK \
      -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------

# usually simplest link in mkl scaLAPACK
#BLACS= -lmkl_blacs_openmpi_lp64
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------

LIB = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o \
      $(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------

BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o \
        radial.o pseudo.o gridq.o ebs.o \
        mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o \
        $(BASIC) nonl.o nonlr.o nonl_high.o dfast.o choleski2.o \
        mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o \
        constrmag.o cl_shift.o relativistic.o LDApU.o \
        paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o \
        mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o \
        dos.o elf.o tet.o tetweight.o hamil_rot.o \
        chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o \
        aedens.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o \
        mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
        nmr.o pead.o subrot.o subrot_scf.o \
        force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o \
        electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o \
        hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o \
        lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o \
        linear_optics.o \
        setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
        mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o \
        local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o \
        bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o \
        rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
        rm -f vasp
        $(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
        $(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
        $(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
        $(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
        $(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
        $(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
        -rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
        $(CPP)
        $(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
        $(CPP)
        $(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
        $(CPP)
$(SUFFIX).o:
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules have been tested for ifc.11 and ifc.12 only

fft3dlib.o : fft3dlib.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
        $(CPP)
        $(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
fftmpi.o : fftmpi.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftmpiw.o : fftmpiw.F
        $(CPP)
        $(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
wave_high.o : wave_high.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

# the following rules are probably no longer required (-O3 seems to work)
wave.o : wave.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
paw.o : paw.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
cl_shift.o : cl_shift.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
LDApU.o : LDApU.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
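The undefined `blacs_gridinfo_`, `numroc_`, `pzpotrf_`, etc. come from scala.o, which is compiled because CPP contains -DscaLAPACK, while the SCA/BLACS lines in the makefile above are still commented out, so no scaLAPACK or BLACS library ever reaches the link line. A minimal sketch of the section to enable, assuming MKL's usual library names (verify them against your install); note the BLACS layer must match the MPI that mpif90 wraps, and for MPICH2-family MPIs the `intelmpi` variant is normally the compatible one:

```makefile
# sketch only: uncomment/adjust in the SCALAPACK section of the makefile
# BLACS flavour must match the MPI implementation:
#   Open MPI        -> -lmkl_blacs_openmpi_lp64
#   MPICH2/Intel MPI -> -lmkl_blacs_intelmpi_lp64
BLACS= -lmkl_blacs_intelmpi_lp64
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

# LIB already references $(SCA); once SCA is defined, the blacs_gridinfo_,
# numroc_ and pz* symbols resolve at link time:
LIB = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o \
      $(SCA) $(LAPACK) $(BLAS)
```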
Hardcore member (noted writer)

At first glance I can't spot the error. It depends a lot on your libraries and environment variables; my guess is that errors like these come from using the wrong libraries. You'll have to track down the specifics yourself. I suggest you look at the makefile in this thread and check the paths of the library routines: http://www.gaoyang168.com/bbs/viewthread.php?tid=7255487
Honorary moderator (noted writer)
Banned member (professional writer)

[The content of this post has been hidden by moderation.]
Member (regular writer)

Does this count? I used mpich3 to run a serially compiled vasp in parallel... but it looks like the two cores each just run the whole calculation once. = =

wmy@wmy:~ /usr/local/mpich/bin/mpiexec -n 2 vasp
 vasp.5.2.2 15Apr09 complex
 vasp.5.2.2 15Apr09 complex
 POSCAR found : 2 types and 4 ions
 POSCAR found : 2 types and 4 ions
 LDA part: xc-table for Pade appr. of Perdew
 LDA part: xc-table for Pade appr. of Perdew
 WARNING: stress and forces are not correct
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: stress and forces are not correct
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...( 1 )
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...( 1 )
 reading WAVECAR
 reading WAVECAR
 charge-density read from file: BiH
 charge-density read from file: BiH
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.422598100495E+02    0.42260E+02   -0.20528E+03  1636   0.444E+02
DAV:   1     0.422598100495E+02    0.42260E+02   -0.20528E+03  1636   0.444E+02
DAV:   2    -0.711140028514E+01   -0.49371E+02   -0.46922E+02  2608   0.904E+01
DAV:   2    -0.711140028514E+01   -0.49371E+02   -0.46922E+02  2608   0.904E+01
DAV:   3    -0.129856833239E+02   -0.58743E+01   -0.58397E+01  2060   0.407E+01
DAV:   3    -0.129856833239E+02   -0.58743E+01   -0.58397E+01  2060   0.407E+01

top output:
Cpu(s): 95.4%us, 4.2%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3716592k total, 3262492k used, 454100k free, 148292k buffers
Swap: 3856380k total, 168k used, 3856212k free, 1258312k cached
  PID USER  PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
27817 wmy   20  0 478m 216m 8584 R  177  6.0 6:17.11 vasp
27816 wmy   20  0 478m 216m 8600 R  166  6.0 6:14.28 vasp
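The duplicated output lines above are the telltale sign that this is not a parallel run: a binary built without -DMPI never initializes MPI, so `mpiexec -n 2 vasp` simply starts two independent serial calculations in the same directory. For a genuinely parallel binary, the MPI section of the makefile has to be active (this mirrors the MPI section already shown in the makefile in the first post; exact CPP flags are an example):

```makefile
# Serial vasp under "mpiexec -n 2" runs twice independently.
# A parallel build needs the MPI wrapper compiler and -DMPI in CPP:
FC=mpif90
FCL=$(FC)

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
      -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK
```

After a clean rebuild, a parallel run prints each startup line once and shows "running on 2 nodes" style output instead of the doubled log.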
Member (regular writer)

When I compile I now get errors like this:

ld: k1om architecture of input file `/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_scalapack_lp64.a(cmmtadd.o)' is incompatible with i386:x86-64 output
ld: k1om architecture of input file `/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_scalapack_lp64.a(cmmtcadd.o)' is incompatible with i386:x86-64 output
.....
Member (somewhat known)

Change

#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

to:

SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a -lmkl_blacs_openmpi_lp64

You may also need:

INCS = -I$(MKLROOT)/include/fftw

I ran into this error before: vasp 5.3 needs the scaLAPACK from this math library. I suggest reading the official wiki: http://cms.mpi.univie.ac.at/wiki/index.php/Installing_VASP
Member (regular writer)

rm -f vasp
/usr/local/mpich/bin/mpif90 -mkl -o vasp main.o base.o mpi.o smart_allocate.o xml.o constant.o jacobi.o main_mpi.o scala.o asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o radial.o pseudo.o gridq.o ebs.o mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o symmetry.o symlib.o lattlib.o random.o nonl.o nonlr.o nonl_high.o dfast.o choleski2.o mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o constrmag.o cl_shift.o relativistic.o LDApU.o paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o dos.o elf.o tet.o tetweight.o hamil_rot.o chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o aedens.o wavpre.o wavpre_noio.o broyden.o dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o brent.o stufak.o fileio.o opergrid.o stepver.o chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o nmr.o pead.o subrot.o subrot_scf.o force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o linear_optics.o setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o -L../vasp.5.lib -ldmy ../vasp.5.lib/linpack_double.o /opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_scalapack_lp64.a /opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a -L/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -L/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64 -lmkl_blas95_lp64 -limf -lm

The above is the make.log. With

MKL_PATH=/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64
BLAS= -L$(MKL_PATH) -lmkl_blas95_lp64
LAPACK= -L$(MKL_PATH) -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(MKL_PATH)/libmkl_blacs_openmpi_lp64.a

it gets stuck at this point. What should I do? The errors are:

/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a(zgebs2d_.o): ../../../../scalapack/BLACS/SRC/MPI/zgebs2d_.c (.text+0x28e): more undefined references to `ompi_mpi_byte' follow
/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a(igebr2d_.o): In function `igebr2d_': ../../../../scalapack/BLACS/SRC/MPI/igebr2d_.c (.text+0x108): undefined reference to `ompi_mpi_int'

What can I do? 0.0
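`ompi_mpi_byte` and `ompi_mpi_int` are internal Open MPI symbols: `libmkl_blacs_openmpi_lp64` is compiled against Open MPI, but the link here is driven by an MPICH-based mpif90, which provides no such symbols. The BLACS layer has to match the MPI flavour of the wrapper compiler. A sketch, assuming MKL's usual naming (for MPICH2-family MPIs, the `intelmpi` BLACS is normally the ABI-compatible choice; verify against your MKL version):

```makefile
MKL_PATH=/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64
# mpif90 here wraps MPICH, so do not link the Open MPI BLACS:
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a \
     $(MKL_PATH)/libmkl_blacs_intelmpi_lp64.a
```

Alternatively, build vasp with an Open MPI mpif90 and keep `libmkl_blacs_openmpi_lp64`; the point is only that the two must agree.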