
# 5/04/20 webex notes

Attending: Martin Schulz, Dan Holmes, Tom Herschberg, Derek Schafer, Howard Pritchard

Continued discussion of Sessions 2, aka Bubbles.

Martin has added some use cases to the slides (https://github.com/mpiwg-sessions/sessions-issues/wiki/SessionsV2-ideas.pptx) and has restructured them somewhat since the last webex.

Use Cases

- Individual applications with changing resources
- Coupled application(s) (components)
- (Partly) co-located independent applications - tools, viz, etc.
- System software to schedule independent applications - e.g. Slurm using Open MPI itself for IPC?

Martin has added slides for each of these use cases.

Individual applications with changing resources

Cooperative/selfish/coercive are the flavors discussed. Martin explains a figure illustrating the addition of a new process. How should we handle process set names - use versions? Dan brings up the late-arrival issue: should there be an option to get the most recent version of, say, the mpi://world process set? A feedback mechanism may be necessary, perhaps triggered when mpi_comm_from_group is invoked. What if the RM hands out resources in several installments? See the slide entitled "Consensus finding for New Process Sets". This discussion led to a TBD slide: the approach should be pro-active, during the negotiations, but how does the app negotiate? Martin gives an example from elastic MPI; see the "TBD: How to make the negotiations" slide. We also discuss cases where an application may not be able to use, or may not be interested in using, new resources made available by the RM. A rough sketch of the corresponding Sessions calls follows.
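For concreteness, a minimal sketch (in C, using the Sessions interface names as later standardized in MPI 4.0: MPI_Session_init, MPI_Group_from_session_pset, MPI_Comm_create_from_group) of an application picking up the mpi://world process set and building a communicator from it. How pset-name versioning and any RM feedback hook would actually be exposed are exactly the open questions above, so those appear only as comments:

```c
#include <mpi.h>

int main(void)
{
    MPI_Session session = MPI_SESSION_NULL;
    MPI_Group   wgroup  = MPI_GROUP_NULL;
    MPI_Comm    wcomm   = MPI_COMM_NULL;

    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

    /* Query the mpi://world process set.  Whether the runtime resolves
     * this name to "the most recent version" of the set, or whether the
     * app must ask for an explicit version, is one of the TBDs above. */
    MPI_Group_from_session_pset(session, "mpi://world", &wgroup);

    /* Collective communicator creation from the group.  This is one
     * natural place for a feedback mechanism to the RM to hook in, since
     * it tells the runtime which processes actually adopted the set. */
    MPI_Comm_create_from_group(wgroup, "org.mpiwg.sessions.webex-example",
                               MPI_INFO_NULL, MPI_ERRORS_RETURN, &wcomm);

    /* ... use wcomm ... */

    MPI_Comm_free(&wcomm);
    MPI_Group_free(&wgroup);
    MPI_Session_finalize(&session);
    return 0;
}
```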

Martin reviews another figure: growing, but selfishly - here the app itself requests the resources.

We continue on to the shrinking example from the application point of view (slide "Scenario: Shrinking (Selfish)"). Rather than an exit for the disappearing process, the idea is a "donate yourself" type of method, perhaps useful for workflow scenarios. See the orange block on this slide for the discussion of returning resources to the RM versus keeping them and using them for some other task. Could the RM reap these paused processes if needed? See the "Pausing/Lending/Sharing Resources" slide. A rough sketch of the group bookkeeping appears below.
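To make the group arithmetic concrete, here is a hedged sketch of how a process might decide whether it is one of the disappearing processes, assuming the new (smaller) group has already been delivered to the application somehow; the "donate yourself" step itself does not exist in MPI and is only marked as a comment:

```c
#include <mpi.h>

/* Sketch of the "Shrinking (Selfish)" scenario: given the group of the
 * current communicator and the group of a new, smaller process set
 * (how that group is obtained is assumed here), each process works out
 * whether it is one of the processes that has to go away. */
void handle_shrink(MPI_Comm current_comm, MPI_Group new_group)
{
    MPI_Group current_group, leaving_group;
    int my_leaving_rank;

    MPI_Comm_group(current_comm, &current_group);

    /* Processes in the current group but not in the new group are the
     * ones being shrunk away. */
    MPI_Group_difference(current_group, new_group, &leaving_group);
    MPI_Group_rank(leaving_group, &my_leaving_rank);

    if (my_leaving_rank != MPI_UNDEFINED) {
        /* Today this process would simply finalize and exit.  The idea
         * on the slide is a "donate yourself" style call instead,
         * handing the process back to the RM or to another task in a
         * workflow; no such MPI call exists, so it stays a comment. */
        MPI_Group_free(&leaving_group);
        MPI_Group_free(&current_group);
        /* donate_self_or_exit();  -- hypothetical */
        return;
    }

    MPI_Group_free(&leaving_group);
    MPI_Group_free(&current_group);
    /* Remaining processes would rebuild their communicator from
     * new_group, e.g. with MPI_Comm_create_from_group. */
}
```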

Decided to skip next week as several WG members won't be available.

TODOS:
