Hi, I used to compile with MPI + CUDA on a local cluster and then launch with either ./idefix or mpirun -np d ./idefix (with d >= 2), depending on the number of GPUs I wanted. On my local cluster, ./idefix no longer works to launch the job on a single GPU. Several recent changes could be responsible (updates to nvcc, Kokkos, OpenMPI). Here is the OpenMPI error message: I know it is not a big deal, but I wonder whether it is possible to go back to the way it was. Thanks,
Nothing has changed since v2.0 in how MPI is initialised in Idefix. As far as I know, running an Idefix binary compiled with MPI without mpirun still works (at least with Open MPI 4). There may be some MPI implementations for which this doesn't work, though. The error message suggests that your MPI installation is broken; you should check what's happening with your system administrator.