
github.com/onqtam/doctest.git
author    BerengerBerthoul <56256655+BerengerBerthoul@users.noreply.github.com>  2022-02-08 18:36:32 +0300
committer GitHub <noreply@github.com>  2022-02-08 18:36:32 +0300
commit    90ba0a95cebced29bdd4c9614391bd8a650a5171 (patch)
tree      d4caeb3431d6f56dc15293c94f591b52f1e4a853
parent    55e95707e0f5176776fef7cbc97fb4c85f6513b0 (diff)
Fix mpi subcase (#611)
* mpi extension: fixed a deadlock when a test contains multiple SUBCASEs. Also skip the test if there are not enough procs, and issue a warning (rather than failing the test)
* More explicit MPI failure messages
* Updated MPI example
* Fixed GCC warnings
-rw-r--r--  doc/markdown/extensions.md        |  46
-rw-r--r--  doctest/extensions/doctest_mpi.h  | 160
-rw-r--r--  doctest/extensions/mpi_reporter.h |  20
-rw-r--r--  doctest/extensions/mpi_sub_comm.h |  81
-rw-r--r--  examples/mpi/main.cpp             |   4
-rw-r--r--  examples/mpi/mpi.cpp              |  10
6 files changed, 248 insertions(+), 73 deletions(-)
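
For context, a minimal sketch of the pattern this commit fixes (it mirrors the test added to examples/mpi/mpi.cpp at the bottom of this diff): an MPI_TEST_CASE containing several SUBCASEs, which previously could deadlock in the MPI console reporter.

```c++
#include <doctest/extensions/doctest_mpi.h>

// Two processes, two subcases: before this commit, a test like this
// could deadlock; it now runs each SUBCASE and reports per rank.
MPI_TEST_CASE("Parallel tests with subcases",2) {
  SUBCASE("sub_case 0") {
    CHECK( test_nb_procs == 2 );
  }
  SUBCASE("sub_case 1") {
    CHECK( test_rank == 0 ); // should fail on proc 1
  }
}
```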
diff --git a/doc/markdown/extensions.md b/doc/markdown/extensions.md
index e543b03b..00d6b2cc 100644
--- a/doc/markdown/extensions.md
+++ b/doc/markdown/extensions.md
@@ -33,16 +33,16 @@ int my_function_to_test(MPI_Comm comm) {
MPI_TEST_CASE("test over two processes",2) { // Parallel test on 2 processes
int x = my_function_to_test(test_comm);
-
+
MPI_CHECK( 0, x==10 ); // CHECK for rank 0, that x==10
MPI_CHECK( 1, x==11 ); // CHECK for rank 1, that x==11
}
```
-An ```MPI_TEST_CASE``` is like a regular ```TEST_CASE```, except it takes a second argument, which is the number of processes needed to run the test. If the number of processes is less than 2, the test will fail. If the number of processes is greater than or equal to 2, it will create a sub-communicator over 2 processes, called ```test_comm```, and execute the test over these processes. Three objects are provided by ```MPI_TEST_CASE```:
+An ```MPI_TEST_CASE``` is like a regular ```TEST_CASE```, except it takes a second argument, which is the number of processes needed to run the test. If the number of processes is less than 2, the test will fail. If the number of processes is greater than or equal to 2, it will create a sub-communicator over 2 processes, called ```test_comm```, and execute the test over these processes. Three objects are provided by ```MPI_TEST_CASE```:
* ```test_comm```, of type ```MPI_Comm```: the mpi communicator on which the test is running,
* ```test_rank``` and ```test_nb_procs```, two ```int``` giving respectively the rank of the current process and the size of the communicator for ```test_comm```. These last two are just here for convenience and could be retrieved from ```test_comm```.
-
+
We always have:
```c++
@@ -57,12 +57,12 @@ It is possible to use regular assertions in an ```MPI_TEST_CASE```. MPI-specific
## The main entry points and mpi reporters
-You need to launch the unit tests with an ```mpirun``` command:
+You need to launch the unit tests with an ```mpirun``` or ```mpiexec``` command:
```
mpirun -np 2 unit_test_executable.exe
```
-```MPI_Init``` should be called before running the unit tests. Also, using the default console reporter will result in each process writing everything in the same place, which is not what we want. Two reporters are provided and can be enabled. A complete ```main()``` would be:
+```doctest::mpi_init_thread()``` must be called before running the unit tests, and ```doctest::mpi_finalize()``` at the end of the program. Also, using the default console reporter will result in each process writing everything in the same place, which is not what we want. Two reporters are provided and can be enabled. A complete ```main()``` would be:
```c++
@@ -71,7 +71,7 @@ mpirun -np 2 unit_test_executable.exe
#include "doctest/extensions/doctest_mpi.h"
int main(int argc, char** argv) {
- MPI_Init(&argc, &argv);
+ doctest::mpi_init_thread(argc,argv,MPI_THREAD_MULTIPLE); // Or any MPI thread level
doctest::Context ctx;
ctx.setOption("reporters", "MpiConsoleReporter");
@@ -81,7 +81,7 @@ int main(int argc, char** argv) {
int test_result = ctx.run();
- MPI_Finalize();
+ doctest::mpi_finalize();
return test_result;
}
@@ -103,7 +103,7 @@ std_e_mpi_unit_tests
[doctest] run with "--help" for options
===============================================================================
path/to/test.cpp:30:
-TEST CASE: my test case
+TEST CASE: my test case
On rank [2] : path/to/test.cpp:35: CHECK( x==-1 ) is NOT correct!
values: CHECK( 0 == -1 )
@@ -113,13 +113,37 @@ On rank [2] : path/to/test.cpp:35: CHECK( x==-1 ) is NOT correct!
[doctest] assertions: 2 | 2 passed | 0 failed |
[doctest] Status: SUCCESS!
===============================================================================
-[doctest] glob assertions: 5 | 4 passed | 1 failed |
+[doctest] assertions on all processes: 5 | 4 passed | 1 failed |
===============================================================================
-[doctest] fail on rank:
+[doctest] fail on rank:
-> On rank [2] with 1 test failed
[doctest] Status: FAILURE!
```
+If the test executable is launched with fewer processes than the number of processes required by one test, the test is skipped and marked as such in the MPI console reporter:
+
+
+```c++
+MPI_TEST_CASE("my_test",3) {
+ // ...
+}
+```
+
+```
+mpirun -np 2 unit_test_executable.exe
+```
+
+```
+===============================================================================
+[doctest] test cases: 1 | 1 passed | 0 failed | 1 skipped
+[doctest] assertions: 1 | 1 passed | 0 failed |
+[doctest] Status: SUCCESS!
+===============================================================================
+[doctest] assertions on all processes: 1 | 1 passed | 0 failed |
+[doctest] WARNING: Skipped 1 test requiring more than 2 MPI processes to run
+===============================================================================
+```
+
### MpiFileReporter
The ```MpiFileReporter``` will just print the result of each process in its own file, named ```doctest_[rank].log```. Only use this reporter as a debug facility if you want to know what is going on exactly when a parallel test case is failing.
@@ -135,8 +159,6 @@ This feature is provided to unit-test mpi-distributed code. It is **not** a way
* Pass ```s``` member variable of ```ConsoleReporter``` as an argument to member functions so we can use them with another object (would help to factorize ```MPIConsoleReporter```)
* Only MPI_CHECK tested. MPI_REQUIRE, exception handling: nothing tested
- * If the number of processes is not enough, prints the correct message, but then deadlocks (comes from ```MPI_Probe``` in ```MpiConsoleReporter```)
- * [[maybe_unused]] is C++17
* More testing, automatic testing
* Packaging: create a new target ```mpi_doctest```? (probably cleaner to depend explicitly on MPI for mpi/doctest.h)
* Later, maybe: have a general mechanism to represent assertions so we can separate the report format (console, xml, junit...) from the reporting strategy (sequential vs. MPI)
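
The "We always have:" code block in the documentation above is cut off by the diff context. As a hedged reconstruction (the test name here is hypothetical), the invariants it refers to are that ```test_comm``` contains exactly the requested number of processes and that ```test_rank``` enumerates them:

```c++
MPI_TEST_CASE("invariants",2) {    // hypothetical illustration
  CHECK( test_nb_procs == 2 );     // test_comm has exactly 2 processes
  MPI_CHECK( 0, test_rank == 0 );  // ranks 0..1 are the ranks within test_comm
  MPI_CHECK( 1, test_rank == 1 );
}
```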
diff --git a/doctest/extensions/doctest_mpi.h b/doctest/extensions/doctest_mpi.h
index 45f82ddb..152b92da 100644
--- a/doctest/extensions/doctest_mpi.h
+++ b/doctest/extensions/doctest_mpi.h
@@ -2,80 +2,126 @@
#ifdef DOCTEST_CONFIG_IMPLEMENT
-#include "doctest/doctest.h"
+#include "doctest/extensions/mpi_sub_comm.h"
#include "mpi_reporter.h"
-
-#else
-
-#include "mpi.h"
-#include <numeric>
-#include <vector>
-#include "doctest/doctest.h"
-#include <cassert>
-#include <string>
+#include <unordered_map>
namespace doctest {
-inline
-int mpi_world_nb_procs() {
- int n;
- MPI_Comm_size(MPI_COMM_WORLD, &n);
- return n;
+// Each time an MPI_TEST_CASE is executed on N procs,
+// we need a sub-communicator of N procs to execute it.
+// It is then registered here and can be re-used
+// by other tests that require a sub-comm of the same size
+std::unordered_map<int,mpi_sub_comm> sub_comms_by_size;
+
+// Record if at least one MPI_TEST_CASE was registered as "skipped"
+// because there are not enough procs to execute it
+int nb_test_cases_skipped_insufficient_procs = 0;
+
+// Record size of MPI_COMM_WORLD with mpi_comm_world_size()
+// so that it can be compared to the MPI_Comm_size value
+// once MPI_Init_thread has been called
+int world_size_before_init = -1;
+
+
+std::string thread_level_to_string(int thread_lvl);
+int mpi_init_thread(int argc, char *argv[], int required_thread_support);
+void mpi_finalize();
+
+
+// Can be safely called before MPI_Init()
+// This is needed for MPI_TEST_CASE because we use doctest::skip()
+// to prevent execution of tests where there are not enough procs,
+// but doctest::skip() is called during test registration, that is, before main(), and hence before MPI_Init()
+int mpi_comm_world_size() {
+ #if defined(OPEN_MPI)
+ const char* size_str = std::getenv("OMPI_COMM_WORLD_SIZE");
+ #elif defined(I_MPI_VERSION) || defined(MPI_VERSION) // Intel MPI + MPICH (at least)
+ const char* size_str = std::getenv("PMI_SIZE"); // see https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Environment-variables-defined-by-intel-mpirun/td-p/1096703
+ #else
+  #error "Unknown MPI implementation: please submit an issue or a PR to doctest. Meanwhile, you can look at the output of e.g. `mpirun -np 3 env` to search for an environment variable that contains the size of MPI_COMM_WORLD and extend this code accordingly"
+ #endif
+ if (size_str==nullptr) return 1; // not launched with mpirun/mpiexec, so assume only one process
+ world_size_before_init = std::stoi(size_str);
+ return world_size_before_init;
}
-struct mpi_sub_comm {
- int nb_procs;
- int rank;
- MPI_Comm comm;
-
- mpi_sub_comm( mpi_sub_comm const& ) = delete;
- mpi_sub_comm& operator=( mpi_sub_comm const& ) = delete;
-
- mpi_sub_comm(int nb_prcs) noexcept
- : nb_procs(nb_prcs)
- , rank(-1)
- , comm(MPI_COMM_NULL)
- {
- int comm_world_rank;
- MPI_Comm_rank(MPI_COMM_WORLD, &comm_world_rank);
- if (nb_procs>mpi_world_nb_procs()) {
- if (comm_world_rank==0) {
- MESSAGE(
- "Unable to run test: need ", std::to_string(nb_procs), " procs",
- " but program launched with only ", std::to_string(doctest::mpi_world_nb_procs()), "."
- );
- CHECK(nb_procs<=mpi_world_nb_procs());
- }
- } else {
- int color = MPI_UNDEFINED;
- if(comm_world_rank < nb_procs){
- color = 0;
- }
- MPI_Comm_split(MPI_COMM_WORLD, color, comm_world_rank, &comm);
-
- if(comm != MPI_COMM_NULL){
- MPI_Comm_rank(comm, &rank);
- assert(rank==comm_world_rank);
- }
- }
+std::string thread_level_to_string(int thread_lvl) {
+ switch (thread_lvl) {
+ case MPI_THREAD_SINGLE: return "MPI_THREAD_SINGLE";
+ case MPI_THREAD_FUNNELED: return "MPI_THREAD_FUNNELED";
+ case MPI_THREAD_SERIALIZED: return "MPI_THREAD_SERIALIZED";
+ case MPI_THREAD_MULTIPLE: return "MPI_THREAD_MULTIPLE";
+ default: return "Invalid MPI thread level";
+ }
+}
+int mpi_init_thread(int argc, char *argv[], int required_thread_support) {
+ int provided_thread_support;
+ MPI_Init_thread(&argc, &argv, required_thread_support, &provided_thread_support);
+
+ int world_size;
+ MPI_Comm_size(MPI_COMM_WORLD,&world_size);
+ if (world_size_before_init != world_size) {
+ DOCTEST_INTERNAL_ERROR(
+ "doctest found "+std::to_string(world_size_before_init)+" MPI processes before `MPI_Init_thread`,"
+ " but MPI_COMM_WORLD is actually of size "+std::to_string(world_size)+".\n"
+ "This is most likely due to your MPI implementation not being well supported by doctest. Please report this issue on GitHub"
+ );
}
- ~mpi_sub_comm() {
- if(comm != MPI_COMM_NULL){
- MPI_Comm_free(&comm);
- }
+ if (provided_thread_support!=required_thread_support) {
+ std::cout <<
+ "WARNING: " + thread_level_to_string(required_thread_support) + " was asked, "
+ + "but only " + thread_level_to_string(provided_thread_support) + " is provided by the MPI library\n";
}
-};
+ return provided_thread_support;
+}
+void mpi_finalize() {
+ // We need to destroy all created sub-communicators before calling MPI_Finalize()
+ doctest::sub_comms_by_size.clear();
+ MPI_Finalize();
+}
+
+} // doctest
+
+#else // DOCTEST_CONFIG_IMPLEMENT
+#include "doctest/extensions/mpi_sub_comm.h"
+#include <unordered_map>
+#include <exception>
+
+namespace doctest {
+
+extern std::unordered_map<int,mpi_sub_comm> sub_comms_by_size;
+extern int nb_test_cases_skipped_insufficient_procs;
+
+int mpi_comm_world_size();
+int mpi_init_thread(int argc, char *argv[], int required_thread_support);
+void mpi_finalize();
template<int nb_procs, class F>
void execute_mpi_test_case(F func) {
- mpi_sub_comm sub(nb_procs);
+ auto it = sub_comms_by_size.find(nb_procs);
+ if (it==end(sub_comms_by_size)) {
+ bool was_emplaced = false;
+ std::tie(it,was_emplaced) = sub_comms_by_size.emplace(std::make_pair(nb_procs,mpi_sub_comm(nb_procs)));
+ assert(was_emplaced);
+ }
+ const mpi_sub_comm& sub = it->second;
if (sub.comm != MPI_COMM_NULL) {
func(sub.rank,nb_procs,sub.comm,std::integral_constant<int,nb_procs>{});
};
}
+inline bool
+insufficient_procs(int test_nb_procs) {
+ bool insufficient = test_nb_procs>mpi_comm_world_size();
+ if (insufficient) {
+ ++nb_test_cases_skipped_insufficient_procs;
+ }
+ return insufficient;
+}
+
} // doctest
@@ -92,7 +138,7 @@ void execute_mpi_test_case(F func) {
#define DOCTEST_CREATE_MPI_TEST_CASE(name,nb_procs,func) \
static void func(DOCTEST_UNUSED int test_rank, DOCTEST_UNUSED int test_nb_procs, DOCTEST_UNUSED MPI_Comm test_comm, DOCTEST_UNUSED std::integral_constant<int,nb_procs>); \
- TEST_CASE(name * doctest::description("MPI_TEST_CASE")) { \
+ TEST_CASE(name * doctest::description("MPI_TEST_CASE") * doctest::skip(doctest::insufficient_procs(nb_procs))) { \
doctest::execute_mpi_test_case<nb_procs>(func); \
} \
static void func(DOCTEST_UNUSED int test_rank, DOCTEST_UNUSED int test_nb_procs, DOCTEST_UNUSED MPI_Comm test_comm, DOCTEST_UNUSED std::integral_constant<int,nb_procs> test_nb_procs_as_int_constant)
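
To make the new skip mechanism concrete, here is roughly what the updated DOCTEST_CREATE_MPI_TEST_CASE generates for a 3-process test (a hand-expanded sketch; the function name is hypothetical and the DOCTEST_UNUSED markers are omitted):

```c++
static void my_test_func(int test_rank, int test_nb_procs, MPI_Comm test_comm,
                         std::integral_constant<int,3>);

// insufficient_procs(3) is evaluated at test registration, i.e. before main()
// and before MPI is initialized -- which is why mpi_comm_world_size() reads
// the world size from environment variables set by mpirun/mpiexec.
TEST_CASE("my test" * doctest::description("MPI_TEST_CASE")
                    * doctest::skip(doctest::insufficient_procs(3))) {
  doctest::execute_mpi_test_case<3>(my_test_func);
}

static void my_test_func(int test_rank, int test_nb_procs, MPI_Comm test_comm,
                         std::integral_constant<int,3> test_nb_procs_as_int_constant) {
  // test body
}
```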
diff --git a/doctest/extensions/mpi_reporter.h b/doctest/extensions/mpi_reporter.h
index ed3614c2..1f199e6e 100644
--- a/doctest/extensions/mpi_reporter.h
+++ b/doctest/extensions/mpi_reporter.h
@@ -11,6 +11,9 @@
namespace doctest {
+extern int nb_test_cases_skipped_insufficient_procs;
+int mpi_comm_world_size();
+
namespace {
// https://stackoverflow.com/a/11826666/1583122
@@ -106,12 +109,25 @@ public:
if(rank == 0) {
separator_to_stream();
- s << Color::Cyan << "[doctest] " << Color::None << "glob assertions: " << std::setw(6)
+ s << Color::Cyan << "[doctest] " << Color::None << "assertions on all processes: " << std::setw(6)
<< g_numAsserts << " | "
<< ((g_numAsserts == 0 || anythingFailed) ? Color::None : Color::Green)
<< std::setw(6) << (g_numAsserts - g_numAssertsFailed) << " passed" << Color::None
<< " | " << (g_numAssertsFailed > 0 ? Color::Red : Color::None) << std::setw(6)
<< g_numAssertsFailed << " failed" << Color::None << " |\n";
+ if (nb_test_cases_skipped_insufficient_procs>0) {
+ s << Color::Cyan << "[doctest] " << Color::Yellow << "WARNING: Skipped ";
+ if (nb_test_cases_skipped_insufficient_procs>1) {
+ s << nb_test_cases_skipped_insufficient_procs << " tests requiring more than ";
+ } else {
+ s << nb_test_cases_skipped_insufficient_procs << " test requiring more than ";
+ }
+ if (mpi_comm_world_size()>1) {
+ s << mpi_comm_world_size() << " MPI processes to run\n";
+ } else {
+ s << mpi_comm_world_size() << " MPI process to run\n";
+ }
+ }
separator_to_stream();
if(g_numAssertsFailed > 0){
@@ -186,7 +202,7 @@ public:
}
bool is_mpi_test_case() const {
- return tc->m_description != nullptr
+ return tc->m_description != nullptr
&& std::string(tc->m_description) == std::string("MPI_TEST_CASE");
}
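
For reference, both reporters are opt-in from ```main()```; a minimal sketch of selecting one or the other, using the reporter names from the documentation above:

```c++
doctest::Context ctx;
ctx.setOption("reporters", "MpiConsoleReporter"); // aggregated report, driven by rank 0
// ctx.setOption("reporters", "MpiFileReporter"); // or: one doctest_[rank].log per process
```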
diff --git a/doctest/extensions/mpi_sub_comm.h b/doctest/extensions/mpi_sub_comm.h
new file mode 100644
index 00000000..5a77634e
--- /dev/null
+++ b/doctest/extensions/mpi_sub_comm.h
@@ -0,0 +1,81 @@
+#pragma once
+
+#include "mpi.h"
+#include "doctest/doctest.h"
+#include <cassert>
+#include <string>
+
+namespace doctest {
+
+inline
+int mpi_world_nb_procs() {
+ int n;
+ MPI_Comm_size(MPI_COMM_WORLD, &n);
+ return n;
+}
+
+struct mpi_sub_comm {
+ int nb_procs;
+ int rank;
+ MPI_Comm comm;
+
+ mpi_sub_comm( mpi_sub_comm const& ) = delete;
+ mpi_sub_comm& operator=( mpi_sub_comm const& ) = delete;
+
+ mpi_sub_comm(int nb_prcs) noexcept
+ : nb_procs(nb_prcs)
+ , rank(-1)
+ , comm(MPI_COMM_NULL)
+ {
+ int comm_world_rank;
+ MPI_Comm_rank(MPI_COMM_WORLD, &comm_world_rank);
+ if (nb_procs>mpi_world_nb_procs()) {
+ if (comm_world_rank==0) {
+ MESSAGE(
+ "Unable to run test: need ", std::to_string(nb_procs), " procs",
+ " but program launched with only ", std::to_string(doctest::mpi_world_nb_procs()), "."
+ );
+ CHECK(nb_procs<=mpi_world_nb_procs());
+ }
+ } else {
+ int color = MPI_UNDEFINED;
+ if(comm_world_rank < nb_procs){
+ color = 0;
+ }
+ MPI_Comm_split(MPI_COMM_WORLD, color, comm_world_rank, &comm);
+
+ if(comm != MPI_COMM_NULL){
+ MPI_Comm_rank(comm, &rank);
+ assert(rank==comm_world_rank);
+ }
+ }
+ }
+
+ void destroy_comm() {
+ if(comm != MPI_COMM_NULL){
+ MPI_Comm_free(&comm);
+ }
+ }
+
+ mpi_sub_comm(mpi_sub_comm&& x)
+ : nb_procs(x.nb_procs)
+ , rank(x.rank)
+ , comm(x.comm)
+ {
+ x.comm = MPI_COMM_NULL;
+ }
+ mpi_sub_comm& operator=(mpi_sub_comm&& x) {
+ destroy_comm();
+ nb_procs = x.nb_procs;
+ rank = x.rank;
+ comm = x.comm;
+ x.comm = MPI_COMM_NULL;
+ return *this;
+ }
+
+ ~mpi_sub_comm() {
+ destroy_comm();
+ }
+};
+
+} // doctest
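
The move constructor and move assignment above are what allow ```mpi_sub_comm``` to be stored in the ```sub_comms_by_size``` cache introduced in doctest_mpi.h; a minimal sketch of that usage (the map name mirrors the one in this diff):

```c++
#include "doctest/extensions/mpi_sub_comm.h"
#include <unordered_map>

std::unordered_map<int, doctest::mpi_sub_comm> sub_comms_by_size;

void demo() {
  // Move-construct a cached 2-process sub-communicator (created at most once
  // per size, then re-used by every 2-process test).
  sub_comms_by_size.emplace(2, doctest::mpi_sub_comm(2));
  // ... run all 2-process tests on sub_comms_by_size.at(2).comm ...
  sub_comms_by_size.clear(); // destroy_comm()/MPI_Comm_free before MPI_Finalize()
}
```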
diff --git a/examples/mpi/main.cpp b/examples/mpi/main.cpp
index 22db024c..6d080b76 100644
--- a/examples/mpi/main.cpp
+++ b/examples/mpi/main.cpp
@@ -3,7 +3,7 @@
#include <doctest/extensions/doctest_mpi.h>
int main(int argc, char** argv) {
- MPI_Init(&argc, &argv);
+ doctest::mpi_init_thread(argc,argv,MPI_THREAD_MULTIPLE);
doctest::Context ctx;
ctx.setOption("reporters", "MpiConsoleReporter");
@@ -13,7 +13,7 @@ int main(int argc, char** argv) {
int test_result = ctx.run();
- MPI_Finalize();
+ doctest::mpi_finalize();
return test_result;
}
diff --git a/examples/mpi/mpi.cpp b/examples/mpi/mpi.cpp
index 26356ead..54c7d458 100644
--- a/examples/mpi/mpi.cpp
+++ b/examples/mpi/mpi.cpp
@@ -34,3 +34,13 @@ MPI_TEST_CASE("Parallel test on 3 processes (failing)",3) {
MPI_CHECK( 1, x==11 ); // CHECK for rank 1, that x==11
MPI_CHECK( 2, x==-1 ); // CHECK for rank 2, that x==-1 (which is not the case -> will trigger a failure report)
}
+
+MPI_TEST_CASE("Parallel tests with subcases",2) {
+
+ SUBCASE("sub_case 0") {
+ CHECK( test_nb_procs == 2 );
+ }
+ SUBCASE("sub_case 1") {
+ CHECK( test_rank == 0 ); // should fail on proc 1
+ }
+}