Message Passing Interface (MPI) is a standardized and portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. Several open-source MPI implementations exist, which fostered the development of a parallel software industry and encouraged the development of portable, scalable large-scale parallel applications.

The message-passing interface effort began in the summer of 1991, when a small group of researchers started discussions at a mountain retreat in Austria. Out of that discussion came a Workshop on Standards for Message Passing in a Distributed Memory Environment, held on April 29–30, 1992, in Williamsburg, Virginia. Attendees at Williamsburg discussed the basic features essential to a standard message-passing interface and established a working group to continue the standardization process. Walker put forward a preliminary draft proposal, "MPI1", in November 1992. Also in November 1992, a meeting of the MPI working group took place in Minneapolis and decided to place the standardization process on a more formal footing. The working group met every six weeks throughout the first nine months of 1993, and the draft MPI standard was presented at the Supercomputing '93 conference in November 1993. After a period of public comment, which resulted in some changes, version 1.0 of MPI was released in June 1994. These meetings and the email discussion together constituted the MPI Forum, membership of which has been open to all members of the high-performance-computing community. The most recent major version, MPI-4.0, was approved by the MPI Forum in June 2021.

The MPI effort involved about 80 people from 40 organizations, mainly in the United States and Europe. Most of the major vendors of concurrent computers participated, collaborating with researchers from universities, government laboratories, and industry. Rather than basing the standard on any single system, its designers incorporated the most useful features of several, including those designed by IBM, Intel, nCUBE, PVM, Express, P4, and PARMACS. The message-passing paradigm is attractive because of its wide portability: it can be used for communication on distributed-memory and shared-memory multiprocessors, networks of workstations, and combinations of these elements. MPI provides a simple-to-use, portable interface for the basic user, yet one powerful enough to let programmers exploit the high-performance message-passing operations available on advanced machines. It also gives parallel hardware vendors a clearly defined base set of routines that can be implemented efficiently, on which they can build higher-level routines for the distributed-memory communication environments supplied with their parallel machines.