A unified approach to the development of algorithms tailored to various classes of parallel computer architecture is presented. The central theme is to identify a small set of higher-order functions that can be implemented efficiently on the target architecture and that can be used to express parallel algorithms, in general via mechanised program transformation from some higher-level specification. Such higher-order functions enable generic programs to be written in which much of the parallelism may be made explicit. Although the analysis uses purely functional languages, it is the functional paradigm that is important, not the particular syntax. The proposed methodology is illustrated with a numerical problem that is solved directly by a non-recursive program. We also describe schemes that map programs onto both static and dynamic MIMD architectures, whose communication links are fixed and reconfigurable at run-time, respectively.
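As a minimal sketch of the idea, the following defines two candidate higher-order functions in a functional style. The names `par_map`, `par_reduce`, and `inner_product` are illustrative assumptions, not the function set proposed in the paper: the point is only that each function's applications are mutually independent, so an implementation on a parallel machine could evaluate them concurrently.

```python
from functools import reduce  # noqa: F401  (imported to mark the functional idiom)

def par_map(f, xs):
    """Apply f to every element. Each application is independent,
    so a parallel implementation may distribute them across processors.
    (Hypothetical skeleton, not the paper's actual function set.)"""
    return [f(x) for x in xs]

def par_reduce(op, xs):
    """Combine a non-empty list pairwise in a balanced tree.
    All combinations within one tree level are independent,
    so each level could run in parallel."""
    while len(xs) > 1:
        paired = [op(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # carry an unpaired last element upward
            paired.append(xs[-1])
        xs = paired
    return xs[0]

# A numerical example expressed entirely through the two skeletons:
# an inner product as a map of products followed by a tree reduction.
def inner_product(us, vs):
    return par_reduce(lambda a, b: a + b,
                      par_map(lambda p: p[0] * p[1], list(zip(us, vs))))
```

A generic program written against such skeletons stays the same whether the implementation behind `par_map` and `par_reduce` is sequential, or mapped onto a static or dynamic MIMD machine.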
Information from pubs.doc.ic.ac.uk/parallel-algorithms.