[Discussion] Latest GAMESS build installed successfully
Views: 3103 | Replies: 6

Posted by xiaowu787:
The latest GAMESS installed and compiled successfully — I'm not sure whether I missed anything, but some of the test results are problematic. Could an expert please advise? Thanks. Several of the tests fail with:

    DDI Process 0: error code 911

and I don't know why.
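A quick way to see which exam jobs actually succeeded is to scan their output files for GAMESS's normal-termination banner. The sketch below is a hypothetical helper (the `check_exam` function and the fabricated log files are my own, not part of rungms); GAMESS prints "EXECUTION OF GAMESS TERMINATED NORMALLY" at the end of a successful run.

```shell
# Hypothetical helper: report PASS/FAIL for a GAMESS exam log
# based on the normal-termination banner GAMESS prints on success.
check_exam() {
    if grep -q "TERMINATED NORMALLY" "$1"; then
        echo "PASS: $1"
    else
        echo "FAIL: $1"
    fi
}

# Demo with two fabricated log files (stand-ins for real exam output):
printf 'EXECUTION OF GAMESS TERMINATED NORMALLY\n' > exam01.log
printf 'DDI Process 0: error code 911\n'           > exam02.log
check_exam exam01.log   # PASS: exam01.log
check_exam exam02.log   # FAIL: exam02.log
```

Running this over all `exam*.log` files narrows the problem down to the specific tests that hit the DDI error.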
Reply:

I compiled the program with MPICH2, but I don't really understand how the contents of rungms should be modified. Could an expert point out which parts need changing? rungms defaults to Intel MPI.

The comments near the top of rungms say: here we use two constant node names, compute-0-0 and compute-0-1, each of which is assumed to be SMP (ours are 8-ways). Each user must set up a file named ~/.mpd.conf containing a single line: "secretword=GiantsOverDodgers", which is set to user-only access permissions ("chmod 600 ~/.mpd.conf"). The secret word shouldn't be a login password, but can be anything you like: "secretword=VikingsOverPackers" is just as good.

The MPI section of rungms reads:

```csh
if ($TARGET == mpi) then
   #
   # Run outside of the batch scheduler Sun Grid Engine (SGE)
   # by faking SGE's host assignment file: $TMPDIR/machines.
   # This script can be executed interactively on the first
   # compute node mentioned in this fake 'machines' file.
   set TMPDIR=$SCR
   # perhaps SGE would assign us two node names...
   echo "compute-0-1" >  $TMPDIR/machines
   echo "compute-0-2" >> $TMPDIR/machines
   # or if you want to use these four nodes...
   #--echo "compute-0-0" >  $TMPDIR/machines
   #--echo "compute-0-1" >> $TMPDIR/machines
   #--echo "compute-0-2" >> $TMPDIR/machines
   #--echo "compute-0-3" >> $TMPDIR/machines
   #
   # besides the usual three arguments to 'rungms' (see top),
   # we'll pass in a "processors per node" value.  This could
   # be a value from 1 to 8 on our 8-way nodes.
   set PPN=$4
   #
   # Allow for compute processes and data servers (one pair per core)
   @ NPROCS = $NCPUS + $NCPUS
   #
   # MPICH2 kick-off is guided by two disk files (A and B).
   #
   # A. build HOSTFILE, saying which nodes will be in our MPI ring
   setenv HOSTFILE $SCR/$JOB.nodes.mpd
   if (-e $HOSTFILE) rm $HOSTFILE
   touch $HOSTFILE
   #
   if ($NCPUS == 1) then
      # Serial run must be on this node itself!
      echo `hostname` >> $HOSTFILE
      set NNODES=1
   else
      # Parallel run gets node names from SGE's assigned list,
      # which is given to us in a disk file $TMPDIR/machines.
      uniq $TMPDIR/machines $HOSTFILE
      set NNODES=`wc -l $HOSTFILE`
      set NNODES=$NNODES[1]
   endif
   # uncomment these if you are still setting up...
   #--echo '------------'
   #--echo HOSTFILE $HOSTFILE contains
   #--cat $HOSTFILE
   #--echo '------------'
   #
   # B. the next file forces explicit "which process on what node" rules.
   setenv PROCFILE $SCR/$JOB.processes.mpd
   if (-e $PROCFILE) rm $PROCFILE
   touch $PROCFILE
   #
   if ($NCPUS == 1) then
      @ NPROCS = 2
      echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
   else
      @ NPROCS = $NCPUS + $NCPUS
      if ($PPN == 0) then
         # when our SGE is just asked to assign so many cores from one
         # node, PPN=0, we are launching compute processes and data
         # servers within our own node...simple.
         echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
      else
         # when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
         # the $TMPDIR/machines contains the assigned node names
         # once and only once.  We want PPN compute processes on
         # each node, and of course, PPN data servers on each.
         # Although DDI itself can assign c.p. and d.s. to the
         # hosts in any order, the GDDI logic below wants to have
         # all c.p. names before any d.s. names in the $HOSTFILE.
         #
         # thus, lay down a list of c.p.
         @ PPN2 = $PPN + $PPN
         @ n=1
         while ($n <= $NNODES)
            set host=`sed -n -e "$n p" $HOSTFILE`
            set host=$host[1]
            echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
            @ n++
         end
      endif
   endif
   # uncomment these if you are still setting up...
   #--echo PROCFILE $PROCFILE contains
   #--cat $PROCFILE
   #--echo '------------'
   #
   echo "MPICH2 will be running GAMESS on $NNODES nodes."
   echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
   echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
   if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
   #
   # Next sets up MKL usage
   setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
   # force old MKL versions (version 9 and older) to run single threaded
   setenv MKL_SERIAL YES
   #
   setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
   set path=(/opt/mpich2/gnu/bin $path)
   #
   echo The scratch disk space on each node is $SCR
   chdir $SCR
   #
   # Now, at last, we can actually launch the processes, in 3 steps.
   #    a) bring up a 'ring' of MPI daemons
   set echo
   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
   #
   #    b) kick off the compute processes and the data servers
   mpiexec -configfile $PROCFILE < /dev/null
   #
   #    c) shut down the 'ring' of MPI daemons
   mpdallexit
   unset echo
   #
   # HOSTFILE is passed to the file erasing step below
   rm -f $PROCFILE
endif
```
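The per-user MPD setup that the rungms comments describe can be sketched in two commands. This demo uses a throwaway temporary HOME directory (my own assumption, so it never clobbers a real ~/.mpd.conf); on a real cluster you would write the file into your actual home directory on every node.

```shell
# Sketch of the per-user MPD configuration rungms requires.
# Sandbox: point HOME at a temp dir so a real ~/.mpd.conf is untouched.
export HOME="$(mktemp -d)"

# The secret word can be anything except a real login password.
echo "secretword=GiantsOverDodgers" > "$HOME/.mpd.conf"

# mpd refuses to start unless the file is user-only (mode 600).
chmod 600 "$HOME/.mpd.conf"

stat -c '%a' "$HOME/.mpd.conf"   # prints 600 on Linux
```

Without this file (or with looser permissions), mpdboot fails before any GAMESS process starts, which is one common cause of "forked process failed" errors.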
Reply:

Please advise on how to modify rungms for MPI, thanks. The mpd daemons in our lab are already running normally, so there is no need to start mpd processes again — I really don't know how this part should be changed.

```
[u06@pc07 tests]$ ../rungms exam01.inp
----- GAMESS execution script -----
This job is running on host pc07
under operating system Linux at Thu Sep 29 10:50:47 CST 2011
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem      1K-blocks       Used  Available Use% Mounted on
store:/data    2536545984 1253686304 1154010560  53% /home
cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
unset echo
setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
setenv EXTBAS /dev/null
setenv NUCBAS /dev/null
.......
setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
unset echo
Intel MPI (iMPI) will be running GAMESS on 1 nodes.
The binary to be kicked off by 'mpiexec' is gamess.00.x
iMPI will run 1 compute processes and 1 data servers.
The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
/home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf file
mpiexec_pc07 (__init__ 1208): forked process failed; status=255
----- accounting info -----
Files used on the master node pc07 were:
-rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
-rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
-rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
Thu Sep 29 10:50:49 CST 2011
0.204u 0.084s 0:01.71 16.3% 0+0k 0+0io 18pf+0w
[u06@pc07 tests]$
```

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]