Fix MPI broadcast of metadata to MPI_COMM_WORLD

Description

My solver works on models with multiple domains, with each domain partitioned among an arbitrary number of MPI ranks. A unique communicator is created for the solution of each domain. As an example, for 100 ranks working on a problem with two domains of different sizes, the communicators could be as follows (see the sketch after this list):

MPI_COMM_WORLD = ranks 0-99
DOMAIN_COMM_1 = ranks 0-25
DOMAIN_COMM_2 = ranks 26-99
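A minimal sketch of how such a layout might be built with MPI_Comm_split; the rank cutoff of 25 and the color values are illustrative assumptions, not taken from the actual solver:

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int world_rank;
    MPI_Comm domain_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Hypothetical split: ranks 0-25 solve domain 1, ranks 26-99 solve
     * domain 2. */
    int color = (world_rank <= 25) ? 1 : 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &domain_comm);

    /* domain_comm is then handed to the CGNS parallel I/O layer; rank 0
     * of domain_comm handles any serial writes for that domain. */

    MPI_Comm_free(&domain_comm);
    MPI_Finalize();
    return 0;
}
```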

When I need to write in serial I use rank 0 of the current domain communicator, and when writing in parallel I use the communicator for the domain. This leads to a situation where world rank 0 is not involved in the writing of domain 2.

That causes a problem in the call chain cgp_open >> cg_open >> cgio_open_file >> cgio_check_file. In cgio_check_file, world rank 0 reads header information from an existing CGNS file, which is then broadcast to all processes in MPI_COMM_WORLD; any rank outside world rank 0's domain still blocks on that broadcast even though world rank 0 never opens its file. As a temporary workaround I have each process read the header information directly. For a long-term solution it would probably be better to have rank 0 of the active I/O communicator read the data, and then limit the broadcast to that same communicator. What do you think?
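A hedged sketch of that proposal, assuming the header fits in a small fixed buffer. read_file_header(), HEADER_SIZE, and the file name "grid.cgns" are hypothetical stand-ins for whatever cgio_check_file actually reads; io_comm is the communicator the file was opened with, rather than MPI_COMM_WORLD:

```c
#include <mpi.h>
#include <stdio.h>

#define HEADER_SIZE 32  /* hypothetical header length in bytes */

/* Hypothetical stand-in for the header read inside cgio_check_file(). */
static int read_file_header(const char *filename, char *buf, int len)
{
    FILE *fp = fopen(filename, "rb");
    if (!fp) return 1;
    size_t n = fread(buf, 1, (size_t)len, fp);
    fclose(fp);
    return n == (size_t)len ? 0 : 1;
}

static int check_file_collective(const char *filename, MPI_Comm io_comm)
{
    char header[HEADER_SIZE];
    int io_rank, err = 0;

    MPI_Comm_rank(io_comm, &io_rank);

    /* Only rank 0 of the active I/O communicator touches the file ... */
    if (io_rank == 0)
        err = read_file_header(filename, header, HEADER_SIZE);

    /* ... and the read status and header are shared within that
     * communicator only, so world rank 0 need not participate in
     * another domain's open. */
    MPI_Bcast(&err, 1, MPI_INT, 0, io_comm);
    if (err)
        return err;
    MPI_Bcast(header, HEADER_SIZE, MPI_CHAR, 0, io_comm);

    /* ... every rank can now validate the header contents here ... */
    return 0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* In the solver this would be the domain communicator, e.g.
     * DOMAIN_COMM_2 from the example above. */
    int err = check_file_collective("grid.cgns", MPI_COMM_WORLD);
    printf("header check: %s\n", err ? "failed" : "ok");
    MPI_Finalize();
    return err;
}
```

With this pattern a cgp_open on DOMAIN_COMM_2 never involves world rank 0, so each domain can open its file independently; broadcasting the error code before the header lets the non-root ranks bail out cleanly if the read fails.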

This needs to be fixed.

Environment

None

Status

Assignee

Unassigned

Reporter

Scot Breitenfeld

Labels

None

Components

Fix versions

Affects versions

Priority

Blocker