Views: 2494 | Replies: 9
zhang668 (forum member):
[Help] Same job, same number of nodes: VASP 4.6 runs, but VASP 5.2 does not
For a structure optimization of a 24-atom system on 256 nodes, VASP 4.6 runs fine, but VASP 5.2 (the spin-orbit-coupling build) dies with a segmentation fault:

forrtl: severe (174): SIGSEGV, segmentation fault occurred

Checking with `ulimit -a`, the stack size is unlimited.

When I tested small systems earlier, the two versions were close in efficiency. Now that the system and the node count have both grown, the difference is large: VASP 5.2 segfaults very easily. In moderator wuli8's thread on segmentation faults, the fix was adding `-heap-arrays 64` to the Fortran flags, but my VASP 5.2 build already includes that option:

FFLAGS = -FR -lowercase -assume byterecl -heap-arrays 64 -I/opt/intel/mkl/10.2.1.017/include/fftw

Other build settings:

FC=mpiifort
CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -Dkind8 -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
      -DMPI_BLOCK=8000 -DRPROMU_DGEMV -DRACCMU_DGEMV
SCA=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

Is there anything wrong with this build? I previously used this same VASP 5.2 for 10-atom spin-orbit-coupling calculations without any trouble, but my recent 20-atom runs keep failing. Can anyone help me figure out what the problem is?

[ Last edited by zhang668 on 2011-6-15 at 22:22 ]
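One thing worth ruling out (an aside, not from the thread): the `ulimit -a` above was presumably run in a login shell, but batch schedulers and MPI launchers do not always propagate interactive limits to the processes they spawn. A minimal check, assuming `bash`:

```shell
# Stack limit in the current shell vs. what a spawned child inherits.
# A finite child limit can segfault builds that keep large arrays on
# the stack, even when the login shell reports "unlimited".
echo "this shell:  $(ulimit -s)"
echo "child shell: $(bash -c 'ulimit -s')"

# If the child limit is finite, raise it in the job script before
# launching VASP (the launch line below is illustrative only):
#   ulimit -s unlimited
#   mpirun -np 256 vasp
```

If both report `unlimited` and the crash persists, the stack limit is likely not the culprit and attention can move to the compiler and link line.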
Reply from the original poster (forum member):
Not many k-points. Here is the situation: my 4.6 binary was copied over as-is (it had been compiled on this cluster), while the 5.2 spin-orbit build I compiled myself. This is the 5.2 makefile:

FC=ifort
# fortran linker
FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf       charge density   reduced in X direction
# wNGXhalf      gamma point only reduced in X direction
# avoidalloc    avoid ALLOCATE if possible
# PGF90         work around some for some PGF90 / IFC bugs
# CACHE_SIZE    1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV  use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV  use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
CPP    = $(CPP_) -DHOST=\"LinuxIFC\" \
         -Dkind8 -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc \
#         -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------
FFLAGS = -FR -lowercase -assume byterecl -heap-arrays 64 -I/opt/intel/mkl/10.2.1.017/include/fftw

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------
# ifc.9.1, ifc.10.1 recommended
OFLAG=-O3
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# VASP works fastest with the libgoto library
# so that's what we recommend
#-----------------------------------------------------------------------
# mkl.10.0
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
BLAS=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lpthread -lmkl_core

# even faster for VASP Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# parallel goto version requires sometimes -libverbs
#BLAS= /opt/libs/libgoto/libgoto.so

# LAPACK, simplest use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o

# use the mkl Intel lapack
LAPACK=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread

#-----------------------------------------------------------------------
#LIB = -L../vasp.5.lib -ldmy \
#      ../vasp.5.lib/linpack_double.o $(LAPACK) \
#      $(BLAS)

# options for linking, nothing is required (usually)
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------
#FFT3D = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

#=======================================================================
# MPI section, uncomment the following lines until
#  general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------
FC=mpiifort
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf     charge density   reduced in Z direction
# wNGZhalf    gamma point only reduced in Z direction
# scaLAPACK   use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
         -Dkind8 -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
         -DMPI_BLOCK=8000 -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave that section commented out
#-----------------------------------------------------------------------
#BLACS=$(HOME)/archives/SCALAPACK/BLACS/
#SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK
#SCA= $(SCA_)/libscalapack.a \
# $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
SCA=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB     = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slighly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /gpfssan1/apps/fftw32/lib/libfftw3f.a
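As a quick sanity check on a freshly linked binary (a sketch; `./vasp` is the assumed output name of the build above, adjust as needed), one can confirm which MKL/FFTW libraries the linker actually resolved:

```shell
# List the MKL and FFTW dependencies the binary resolved dynamically.
# Worth noting: libfftw3f.a (as in the FFT3D line above) is the
# single-precision FFTW build; the double-precision library is plain
# libfftw3.a.
BIN=./vasp   # assumed binary name; adjust to the actual build output
if [ -x "$BIN" ]; then
    ldd "$BIN" | grep -E 'mkl|fftw' || echo "no dynamic mkl/fftw entries (static link?)"
else
    echo "binary $BIN not found; run this in the vasp.5.2 build directory"
fi
```

A mismatch between the ScaLAPACK/BLACS variant on the link line and the MPI actually used at runtime is another classic source of crashes that this check can surface.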
Reply from a senior member:
1. Check the last few lines of OUTCAR: did the run enter the main loop at all?
2. `-heap-arrays 64` only addresses stack overflow; with so few atoms, even with spin included, that should not be the problem.
3. Check your compiler version. The Intel 10.x compilers themselves have bugs that can produce unexpected failures.
4. `FC=mpiifort`: did you set this yourself, or was it generated? An MPI installation usually creates an `mpif90` link to `ifort`, so even for a parallel build, `FC=mpif90` or `ifort` both work.
5. Are you benchmarking? Using that many nodes for such a small system gives terrible parallel efficiency; the time is all spent on communication between CPUs. Try fewer nodes, or a serial run, and see whether the problem persists.
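For point 1, a small guarded snippet (assuming it is run in the directory holding the failed job's OUTCAR) shows how far the calculation got before the segfault:

```shell
# Did the failed run reach the main electronic loop? If OUTCAR stops
# before any "Iteration" lines, the segfault happened during setup
# (e.g. array allocation) rather than mid-SCF.
if [ -f OUTCAR ]; then
    tail -n 20 OUTCAR
    grep -c 'Iteration' OUTCAR || echo "main loop never reached"
else
    echo "no OUTCAR here: $(pwd)"
fi
```

A crash during setup points toward memory layout or link-line issues; a crash mid-SCF after many iterations points more toward a compiler or library bug.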