Workshop on High Performance Computing (WHPC'94): TUTORIAL PROGRAM
- To: zmailer@cs.toronto.edu
- Subject: Workshop on High Performance Computing (WHPC'94): TUTORIAL PROGRAM
- From: casimiro@lsi.usp.br (Casimiro de A. Barreto (kofuji))
- Date: Sat, 19 Mar 1994 01:39:51 +0200
- Approved: rick@sparky.sterling.com
- Expires: 31 Mar 1994 8:00:00 GMT
- Fake-Sender: rick@sparky.sterling.com
- Followup-To: poster
- Newsgroups: news.announce.conferences
- Organization: Sterling Software
- Reply-To: <casimiro@lsi.usp.br>
- Sender: news <news@smurf.noris.de>
TO CONTACT WHPC'94, PLEASE E-MAIL liria@lsi.usp.br or moreira@csrd.uiuc.edu
WHPC'94 - IEEE/USP INTERNATIONAL WORKSHOP ON
HIGH PERFORMANCE COMPUTING - COMPILERS AND TOOLS
===========================================================================
Title: Smart Parallelizing Compilers: Automatic Parallelization and
Adaptive Runtime Parallelism Management
Constantine D. Polychronopoulos
CSRD, University of Illinois
Abstract:
In this tutorial we shall address specific directions in the development
of powerful compilers and operating systems for large-scale multiprocessors,
undertaken by industry and universities during the last few years.
The tutorial will cover major aspects of parallelizing compiler
design and implementation, including data and control dependence analysis,
dataflow analysis, source level optimizations and restructuring,
backend optimizations, program partitioning and
scheduling, parallel code generation, granularity control, parallel
thread management and operating system support for parallel threads.
In the course of this tutorial we shall also discuss issues
pertaining to the design of powerful intermediate program representation
structures which capture both the hierarchy of computations and the
parallelism in a program.
Recent progress in control dependence analysis and optimization,
which will also be reviewed in this tutorial, provides the opportunity for
dynamic optimizations that were not previously possible. Particular
emphasis will be given to backend design, and in particular to
partitioning a program into threads, packaging the parallelism and
carrying out static and dynamic scheduling coupled with dynamic tuning
of the granularity of threads. Packaging instructions into schedulable
units can take place at several levels, from packing long instruction
words to partitioning computation tasks into independent threads.
Full automation of the process of thread packaging, creation and scheduling
is achieved via intelligent code embedded in user code by the
compiler, following extensive data and control dependence analysis.
This is a powerful new technique which will also be reviewed
in this tutorial.
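The idea of packaging parallel work into schedulable units with run-time
granularity control can be sketched as follows (a minimal Python
illustration, not the compiler-generated code the tutorial concerns; the
chunking policy and function names are invented for this sketch):

```python
from concurrent.futures import ThreadPoolExecutor

def pack_iterations(n_iters, n_workers, min_chunk=4):
    """Granularity control: package loop iterations into chunks
    (schedulable units) no smaller than min_chunk -- a simplified,
    purely illustrative policy."""
    chunk = max(min_chunk, n_iters // n_workers)
    return [(lo, min(lo + chunk, n_iters)) for lo in range(0, n_iters, chunk)]

def parallel_sum_of_squares(n, n_workers=4):
    """Each packaged chunk becomes one thread-level task; the pool
    performs the dynamic scheduling of chunks onto worker threads."""
    chunks = pack_iterations(n, n_workers)
    def work(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(work, chunks))
```

In the techniques the tutorial covers, this packaging and scheduling is
emitted automatically by the compiler rather than written by hand.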
Audience:
This tutorial is addressed to anyone involved in the development of, or
research on, compilers for parallel, VLIW, or superscalar computers:
compiler writers, architects and computer engineers, faculty and
graduate students working on research problems in parallel computing,
as well as project leaders and managers of compiler and architecture
design and development efforts for parallel machines.
Prerequisites:
The tutorial is organized as approximately 10% introductory material,
30% intermediate-level material, and 60% advanced material. Attendees are expected to
have basic knowledge of parallel processing concepts and be familiar with
basic compiler notions. A thorough but brief introduction to basics will be
given at the beginning of the tutorial.
===========================================================================
Compiler Techniques for Parallel Computing
David A. Padua
Center for Supercomputing Research and Development
University of Illinois at Urbana-Champaign
Programming difficulty is perhaps the main obstacle for
the widespread acceptance of parallel computers. Today,
effective parallel programming requires that both code and
data be mapped by hand, and the resulting programs are usually
not portable between machines with different organizations.
Powerful compiler techniques are necessary to facilitate
parallel programming and to make portability possible.
The tutorial will present an overview of compiler techniques
for parallel computers. Topics include: (1) techniques to
detect parallelism at compile time and run time; (2) code
generation techniques for fine-grain parallelism as found in
superscalar processors; and (3) techniques for the automatic
mapping of code and data onto massively parallel processors.
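One classical compile-time technique for detecting loop parallelism is
the GCD dependence test; a minimal sketch in Python (for illustration
only; the function name and interface are invented, and real compilers
combine this with stronger tests):

```python
from math import gcd

def gcd_test_may_depend(a, b, c, d):
    """GCD dependence test for array references A[a*i + b] and
    A[c*j + d] inside a loop: a cross-iteration dependence requires
    an integer solution of a*i - c*j = d - b, which exists only if
    gcd(a, c) divides (d - b).  Returns False when the test proves
    independence; True means a dependence cannot be ruled out."""
    g = gcd(a, c)
    if g == 0:                      # both coefficients zero: compare constants
        return b == d
    return (d - b) % g == 0
```

For example, A[2*i] and A[2*i + 1] touch disjoint (even vs. odd)
elements, so the test proves the loop's iterations independent.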
==========================================================================
Title: Parallel Programming Languages - an Overview
Instructor: Luiz A. De Rose
Center for Supercomputing Research and Development
University of Illinois at Urbana-Champaign
Course Outline:
Efficient exploitation of the new generation of massively parallel machines
requires specialized languages that are easy to program, debug, and
maintain, and that can express large amounts of parallelism.
This course will present an overview of the state of the art in programming
languages for massively parallel machines.
The following topics will be discussed:
Part I - Fortran Family Languages
- Fortran 90
- Fortran D
- High Performance Fortran (HPF)
- CM Fortran (Connection Machine 5)
- MPP Fortran (Cray T3D)
Part II - Object Oriented Languages
- pC++
- Charm++
- Concurrent Aggregates (CA)
Course Duration: 4 hours
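Languages such as HPF distribute array data across processors with
directives like BLOCK distribution; the idea can be sketched in plain
Python (an illustrative analogue only, not HPF; the helper names are
invented for this sketch):

```python
def block_distribute(n, p):
    """HPF-style BLOCK distribution: assign n array elements to p
    processors in contiguous blocks, spreading any remainder over
    the first processors."""
    base, extra = divmod(n, p)
    blocks, lo = [], 0
    for rank in range(p):
        hi = lo + base + (1 if rank < extra else 0)
        blocks.append(range(lo, hi))
        lo = hi
    return blocks

def owner(i, blocks):
    """Owner-computes rule: the processor that owns element i
    performs the computation that writes it."""
    for rank, r in enumerate(blocks):
        if i in r:
            return rank
    raise IndexError(i)
```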
===========================================================================
Autoscheduling and the Exploitation of Parallelism
Jose E. Moreira
Center for Supercomputing Research and Development
University of Illinois at Urbana-Champaign
Laboratorio de Sistemas Integraveis
Escola Politecnica da Universidade de Sao Paulo
Autoscheduling is a model of computation that provides efficient
support for multiprocessing and multiprogramming in a general purpose
multiprocessor, by exploiting parallelism at all levels of granularity.
The vehicle for implementing autoscheduling is the
Hierarchical Task Graph (HTG), an intermediate program representation
that encapsulates the information on control and data dependences at all
levels. Autoscheduling incorporates a data-flow execution model into an
underlying control-flow model, which allows us to exploit not only
loop-level parallelism, but also irregular (functional) parallelism across
procedure calls, loops, basic blocks, or even instructions.
In this course we will discuss specific issues on how to extract
and exhibit parallelism in a program, and once this parallelism
is made available, how it can be exploited efficiently. We will
present alternatives for the exploitation of autoscheduling on
existing and future systems. We will show how autoscheduling can
be implemented through software-only techniques, and how special
hardware can be used to exploit very fine-grain parallelism.
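The data-flow-on-control-flow idea can be sketched as a small task-graph
executor in which a task fires as soon as all its predecessors have
completed (a serial Python illustration only, not an actual HTG or
autoscheduling implementation; names are invented for this sketch):

```python
from collections import deque

def autoschedule(tasks, deps):
    """Execute a task graph in data-flow fashion.  `tasks` maps
    name -> zero-argument callable; `deps` maps name -> set of
    predecessor names.  Returns one valid firing order.  A real
    autoscheduler would dispatch ready tasks to parallel threads
    rather than run them serially as done here."""
    pending = {t: set(deps.get(t, ())) for t in tasks}
    ready = deque(t for t, d in pending.items() if not d)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                       # fire the task
        order.append(t)
        for u, d in pending.items():     # release successors of t
            if t in d:
                d.discard(t)
                if not d:
                    ready.append(u)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order
```

Any granularity of work, from a basic block to a whole procedure call,
can play the role of a task in this model.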
Prerequisites:
Attendees are expected to
have basic knowledge of parallel processing concepts and be familiar with
basic compiler notions.
=========================================================================
PARALLEL PROCESSING: A RESEARCH PERSPECTIVE
ISAAC D. SCHERSON
Department of Information and Computer Science
UNIVERSITY OF CALIFORNIA AT IRVINE - USA
Parallel processing emerged as an important approach for speeding up
computations that are too time consuming in standard uniprocessor
systems. The idea is to decompose a problem into a number of smaller ones
whose integrated solutions solve the original problem. This is, however,
easier said than done, and parallel processing is still struggling for
a place within the high performance community. This seminar will
survey the three main areas of research activity, namely
Architectures, Software and Algorithms. The state-of-the-art will be
assessed and new directions for future research will be identified.
Examples to illustrate ideas will be drawn from existing commercially
available parallel computers such as Cray's T3D, Kendall Square's KSR1,
MasPar's MP-2, and TMC's CM-5.