Emulating SLURM on Ubuntu 16.04
I want to emulate SLURM on Ubuntu 16.04. I don't need serious resource management; I just want to test a few simple examples, and I am wondering what my options are. Things I have tried so far:
A Docker image. Unfortunately,
docker pull agaveapi/slurm; docker run agaveapi/slurm
gives me errors:
/usr/lib/python2.6/site-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
  'Supervisord is running as root and it is searching '
2017-10-29 15:27:45,436 CRIT Supervisor running as root (no user in config file)
2017-10-29 15:27:45,437 INFO supervisord started with pid 1
2017-10-29 15:27:46,439 INFO spawned: 'slurmd' with pid 9
2017-10-29 15:27:46,441 INFO spawned: 'sshd' with pid 10
2017-10-29 15:27:46,443 INFO spawned: 'munge' with pid 11
2017-10-29 15:27:46,443 INFO spawned: 'slurmctld' with pid 12
2017-10-29 15:27:46,452 INFO exited: munge (exit status 0; not expected)
2017-10-29 15:27:46,452 CRIT reaped unknown pid 13)
2017-10-29 15:27:46,530 INFO gave up: munge entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,531 INFO exited: slurmd (exit status 1; not expected)
2017-10-29 15:27:46,535 INFO gave up: slurmd entered FATAL state, too many start retries too quickly
2017-10-29 15:27:46,536 INFO exited: slurmctld (exit status 0; not expected)
2017-10-29 15:27:47,537 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-10-29 15:27:47,537 INFO gave up: slurmctld entered FATAL state, too many start retries too quickly
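One way to dig into why munge and the SLURM daemons die here (a sketch only; the entrypoint override and the in-container commands are assumptions on my part, not documented behavior of agaveapi/slurm) is to start a shell in the container and launch the pieces by hand so the real error messages land in the terminal:
docker run -it --entrypoint /bin/bash agaveapi/slurm
# inside the container:
munged --foreground &    # munge must be up before the SLURM daemons
slurmctld -D &           # -D keeps the controller in the foreground
slurmd -D                # same for the compute daemon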
This guide to start a SLURM VM via Vagrant. I tried it, but copying my munge key timed out:
sudo scp /etc/munge/munge.key vagrant@server:/home/vagrant/
ssh: connect to host server port 22: Connection timed out
lost connection
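A possible workaround (just a sketch, assuming the VM is defined by the Vagrantfile in the current directory) is to skip scp and the hostname entirely and pipe the key through vagrant ssh, which uses Vagrant's own connection details:
sudo cat /etc/munge/munge.key | vagrant ssh -c 'cat > /home/vagrant/munge.key'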
I would still prefer to run SLURM natively, but I gave up and spun up a Debian 9.2 VM. See here for my efforts to troubleshoot a native installation. The directions here worked smoothly, but I needed to make the following changes to slurm.conf. Below, Debian64 is the hostname and wlandau is my user name.
ControlMachine=Debian64
SlurmUser=wlandau
NodeName=Debian64
Here is the full slurm.conf. An analogous slurm.conf did not work on my native Ubuntu 16.04.
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=Debian64
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=Debian64 CPUs=1 RealMemory=744 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=Debian64 Default=YES MaxTime=INFINITE State=UP
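When adapting this file to another machine, one quick sanity check (slurmd ships this option) is to print the hardware slurmd actually detects and make sure the NodeName line above does not claim more than that:
slurmd -C
# prints something like (values are illustrative):
# NodeName=Debian64 CPUs=1 Boards=1 SocketsPerBoard=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=744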
So ... we have an existing cluster here, but it runs an older Ubuntu release which is incompatible with my workstation running 17.04.
So on my workstation, I just made sure I had slurmctld (the backend) and slurmd installed, and then set up a trivial slurm.conf with
ControlMachine=mybox
# ...
NodeName=DEFAULT CPUs=4 RealMemory=4000 TmpDisk=50000 State=UNKNOWN
NodeName=mybox CPUs=4 RealMemory=16000
After that I restarted slurmctld and then slurmd. Now all is fine:
root@mybox:/etc/slurm-llnl$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
demo up infinite 1 idle mybox
root@mybox:/etc/slurm-llnl$
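As a quick follow-up check that jobs really run on this single-node setup (a sketch; the default partition from the sinfo output above is assumed, and the sleep duration is arbitrary):
srun -N1 hostname                      # run a trivial command through the scheduler
sbatch --wrap='sleep 10 && hostname'   # submit a tiny batch job
squeue                                 # the job should show up, then drain away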
This is a degenerate setup; our real one has a mix of development and production machines and appropriate partitions. But it should answer your "can backend really be client" question. Also, my machine is not really called mybox, but that is irrelevant to the question either way.
This uses Ubuntu 17.04, all stock, with munge for communication (which is the default anyway).
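If munge itself refuses to start (as in the Docker log in the question), regenerating the key and fixing its ownership usually sorts it out; a sketch assuming the stock Debian/Ubuntu munge package:
sudo /usr/sbin/create-munge-key             # writes a fresh /etc/munge/munge.key
sudo chown munge:munge /etc/munge/munge.key
sudo chmod 0400 /etc/munge/munge.key
sudo systemctl restart munge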
Edit: To wit:
me@mybox:~$ COLUMNS=90 dpkg -l '*slurm*' | grep ^ii
ii slurm-client 16.05.9-1ubun amd64 SLURM client side commands
ii slurm-wlm-basic- 16.05.9-1ubun amd64 SLURM basic plugins
ii slurmctld 16.05.9-1ubun amd64 SLURM central management daemon
ii slurmd 16.05.9-1ubun amd64 SLURM compute node daemon
me@mybox:~$
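For completeness, the rough equivalent on Ubuntu 16.04 would be something like the following (a sketch only; package and service names are my assumptions and may differ slightly between releases):
sudo apt-get update
sudo apt-get install slurm-wlm munge   # slurm-wlm pulls in slurmctld, slurmd and slurm-client
sudo systemctl restart munge slurmctld slurmd
sinfo                                  # should list the node defined in slurm.conf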