Restart nodes in state down
After a power outage, my nodes went into state down:
sinfo -a
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
partMain up infinite 4 down* node[001-004]
part1* up infinite 3 down* node[002-004]
part2 up infinite 1 down* node001
I ran these commands:
/etc/init.d/slurm stop
/etc/init.d/slurm start
sinfo -a
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
partMain up infinite 4 down node[001-004]
part1* up infinite 3 down node[002-004]
part2 up infinite 1 down node001
How do I bring my nodes back up?
sinfo -R
REASON USER TIMESTAMP NODELIST
Not responding root 2019-07-23T08:40:25 node[001-004]
$ scontrol update nodename=node001 state=idle
$ scontrol update nodename=node[001-004] state=resume
# the state changes to idle*, but after a few seconds it returns to down*
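For reference, a quick way to watch the recorded state and reason while this happens (a sketch using node001 from the outputs above):

$ scontrol show node node001 | grep -iE 'State|Reason'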
$ service --status-all | grep 'slurm'
slurmctld (pid 24000) is running...
slurmdbd (pid 4113) is running...
$ systemctl status -l slurm
● slurm.service - LSB: slurm daemon management
Loaded: loaded (/etc/rc.d/init.d/slurm; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2019-07-24 13:45:38 CEST; 257ms ago
Docs: man:systemd-sysv-generator(8)
Process: 30094 ExecStop=/etc/rc.d/init.d/slurm stop (code=exited, status=1/FAILURE)
Process: 30061 ExecStart=/etc/rc.d/init.d/slurm start (code=exited, status=0/SUCCESS)
Main PID: 30069 (code=exited, status=1/FAILURE)
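One way to dig further into why the init script exits with status 1 is to read the unit's lines from the journal (a sketch; this assumes the journal captured the script's output):

$ journalctl -u slurm -n 50 --no-pager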
After starting the daemons, try this:
scontrol update nodename=node001 state=idle
Check the reason they were marked down with sinfo -R; they will most likely be listed as "unexpectedly rebooted". You can resume them with
scontrol update nodename=node[001-004] state=resume
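If that works, the nodes should show up without the down state again; re-running the command from the question verifies it:

sinfo -a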
The ReturnToService parameter in slurm.conf controls whether compute nodes become active again when they wake up from an unexpected reboot.
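For illustration, a minimal slurm.conf sketch (the value shown is an assumption; check the slurm.conf man page for the exact semantics of 0, 1 and 2):

# ReturnToService=1: a node that was set DOWN because it stopped
# responding becomes available again as soon as its slurmd registers
# with a valid configuration.
ReturnToService=1

After editing slurm.conf, propagate the change, for example with scontrol reconfigure or by restarting slurmctld.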