Running MPI on multiple machines

I can run my MPI program on a single machine with any number of processes, but I cannot run it on multiple machines. I have a "machines" file that specifies the number of processes per host, like this:

localhost:6
another_host:4

Below are three examples:

// When I run the program only on localhost, everything is OK.
mpirun -n 10 ./myMpiProg parameter1 parameter2

// In this case, everything is OK, too.
mpirun -f machinesFile -n 10 ./myMpiProg parameter1 parameter2

// This is also OK
mpirun -n 8 ./myMpiProg parameter1 parameter2

When I change the machines file as follows:

localhost:6
another_host:2

...

// But this does not work.
mpirun -f machinesFile -n 8 ./myMpiProg parameter1 parameter2

When I run the program in the distributed environment, the error below occurs. More interestingly, it always happens with certain distributions, such as 8 or 12 processes; it never happens with 10 processes.

terminate called after throwing an instance of 'std::length_error' what():  vector::reserve

So, is there any difference between running an MPI program on a single machine and running it on multiple machines?
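As a first sanity check, a minimal Boost.MPI program like the sketch below (the file name and output format are placeholders, not part of my real program) can confirm that processes are actually being started on another_host:

// sanity_check.cpp -- minimal sketch: print which host each rank runs on.
#include <boost/mpi.hpp>
#include <iostream>

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;
    std::cout << "rank " << world.rank() << " of " << world.size()
              << " on " << boost::mpi::environment::processor_name()
              << std::endl;
    return 0;
}

Running it with the same machines file (mpirun -f machinesFile -n 8 ./sanity_check) shows whether all 8 ranks land on the expected hosts, independent of my actual program.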

I stumbled upon what triggers the problem, but I still don't know why it happens. When I store the isend requests in a vector, everything works fine. But if I don't store them, the error occurs: sometimes it is std::length_error, sometimes the longer one shown below.

The relevant code is the following. If I change this line:

mpiSendRequest.push_back(world.isend(neighbors[j], 100, *p));

to:

world.isend(neighbors[j], 100, *p);

the error appears. This doesn't make sense to me, but maybe there is a reasonable explanation.
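My guess (I am not sure this is the right explanation) is that isend returns a boost::mpi::request, and if that request is thrown away, nothing ever completes the non-blocking send, so its buffers are left unmanaged. The pattern that works for me, sketched below with a placeholder payload and tag standing in for my real neighbors[j] and *p, is to collect the requests (as mpiSendRequest does above) and complete them with boost::mpi::wait_all:

// Sketch only (placeholder payload and tag, not my real code):
// keep every request and complete it before the buffers go away.
#include <boost/mpi.hpp>
#include <boost/mpi/nonblocking.hpp>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    std::vector<int> payload(100, world.rank());  // stand-in for *p
    std::vector<int> incoming;
    std::vector<mpi::request> requests;           // stand-in for mpiSendRequest

    if (world.rank() == 0) {
        // stand-in for the loop over neighbors[j]
        for (int dest = 1; dest < world.size(); ++dest)
            requests.push_back(world.isend(dest, 100, payload));
    } else {
        requests.push_back(world.irecv(0, 100, incoming));
    }

    // Without this, the requests are abandoned -- which is exactly
    // the situation that produces the errors for me.
    mpi::wait_all(requests.begin(), requests.end());
    return 0;
}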

The error message:

terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::mpi::exception> >'
what():  MPI_Alloc_mem: Unable to allocate memory for MPI_Alloc_mem, error stack:
MPI_Alloc_mem(115): MPI_Alloc_mem(size=1600614252, MPI_INFO_NULL, baseptr=0x7fffbb499e90) failed
MPI_Alloc_mem(96).: Unable to allocate memory for MPI_Alloc_mem
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::mpi::exception> >'
what():  MPI_Alloc_mem: Unable to allocate memory for MPI_Alloc_mem, error stack:
MPI_Alloc_mem(115): MPI_Alloc_mem(size=1699946540, MPI_INFO_NULL, baseptr=0x7fffdad0ee10) failed
MPI_Alloc_mem(96).: Unable to allocate memory for MPI_Alloc_mem
[proxy:0:1@mpi_notebook] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:928): assert (!closed) failed
[proxy:0:1@mpi_notebook] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:1@mpi_notebook] main (./pm/pmiserv/pmip.c:226): demux engine error waiting for event

=====================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 134
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
=====================================================================================
APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6)