How to free a boost::mpi::request?

I am trying to get MPI to disconnect a communicator, which is turning out to be tricky business - I put together a demo below. I have two versions of the same idea, listening for an int: one using MPI_Irecv directly and one using boost::mpi::request.

You will notice that when running this program with mpiexec -n 2, version A shuts down and exits successfully, but version B does not. Is there some trick to MPI_Request_free-ing a boost::mpi::request? There seems to be a difference here. If it matters, I am using MSVC, MS-MPI, and Boost 1.62.

#include "boost/mpi.hpp"
#include "mpi.h"

int main()
{
    MPI_Init(NULL, NULL);
    MPI_Comm regional;
    MPI_Comm_dup(MPI_COMM_WORLD, &regional); // duplicate MPI_COMM_WORLD into a separate communicator
    // Attach a Boost.MPI communicator to it; comm_attach means Boost.MPI does not take ownership.
    boost::mpi::communicator comm(regional, boost::mpi::comm_attach);
    if (comm.rank() == 1)
    {
        int q;

        //VERSION A:
//      MPI_Request n;
//      int j = MPI_Irecv(&q, 1, MPI_INT, 1, 0, regional, &n);
//      MPI_Cancel(&n);
//      MPI_Request_free(&n);

        //VERSION B:

//      boost::mpi::request z = comm.irecv<int>(1, 0, q);
//      z.cancel();

    }
    MPI_Comm_disconnect(&regional);
    MPI_Finalize();
    return 0;
}


Did I find a bug? I doubt it, since I'm not doing anything particularly deep here.



1 answer


Well, let's assume it's not a bug if it's documented: MPI_Request_free is not supported by Boost.MPI.

Now back to MPI itself:

A call to MPI_CANCEL marks for cancellation a pending, non-blocking communication operation (send or receive). The cancel call is local; it returns immediately, possibly before the communication is actually canceled. It is still necessary to call MPI_REQUEST_FREE, MPI_WAIT, or MPI_TEST (or any of the derived operations) with the canceled request as an argument after the call to MPI_CANCEL. If a communication is marked for cancellation, then the MPI_WAIT call for that communication is guaranteed to return, irrespective of the activities of other processes (i.e., MPI_WAIT behaves as a local function).

This means that simply:



z.cancel();
z.wait();


and everything should be fine.
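For completeness, here is a minimal sketch of how version B from your demo could look with that fix applied (same setup as in the question; only the rank-1 branch changes):

    if (comm.rank() == 1)
    {
        int q;
        boost::mpi::request z = comm.irecv<int>(1, 0, q); // post the non-blocking receive
        z.cancel();                                       // mark it for cancellation
        z.wait();                                         // complete the canceled request so MPI can release it
    }
    MPI_Comm_disconnect(&regional);
    MPI_Finalize();

With the wait() in place, the canceled request is completed before the communicator is disconnected, so the program can shut down cleanly.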

Now, IMHO, having to call wait() by hand like this rather defeats the nice RAII semantics that Boost.MPI otherwise provides.
