Advance Program



SYMPOSIUM ORGANIZATION

Workshops Index

IPPS '98 PROGRAM

OVERVIEW

MONDAY, MARCH 30

Workshops 1-8
Tutorial 1

TUESDAY, MARCH 31

Workshop 9
Technical Sessions 1-9

WEDNESDAY, APRIL 1

Technical Sessions 10-15
Industrial Track
Panel Discussion

THURSDAY, APRIL 2

Technical Sessions 16-21

FRIDAY, APRIL 3

Workshops 10-18
Tutorials 2-3

LOCATION

ORGANIZATION



GENERAL CO-CHAIRS
Viktor K. Prasanna, University of Southern California
Behrooz Shirazi, University of Texas at Arlington


PROGRAM CHAIR
Sartaj Sahni, University of Florida, Gainesville


PROGRAM VICE-CHAIRS
Algorithms: Oscar Ibarra, University of California at Santa Barbara
Applications: Paul Messina, California Institute of Technology
Architecture: Lionel Ni, Michigan State University
Software: David Padua, University of Illinois at Urbana-Champaign


INDUSTRIAL-COMMERCIAL CHAIR
John K. Antonio, Texas Tech University


WORKSHOPS CHAIR
Jose D.P. Rolim, University of Geneva


TUTORIALS CHAIR
D.K. Panda, Ohio State University


PROCEEDINGS CHAIR
Matt Mutka, Michigan State University


FINANCE CHAIR
Bill Pitts, Toshiba America Information Systems, Inc.


LOCAL ARRANGEMENTS CHAIR
Susamma Barua, California State University, Fullerton


PUBLICITY CHAIR
Sally Jelinek, Electronic Design Associates, Inc.


PUBLICITY COORDINATORS
Asia
Alexey Kalinov, Russian Academy of Sciences

Australia/New Zealand
Albert Y. Zomaya, University of Western Australia

Far East
Yu-Chee Tseng, National Central University, Taiwan

North America
Dan Watson, Utah State University

South America
Nelson Maculan, Universidade Federal do Rio de Janeiro, Brazil

Western Europe/Africa
Afonso Ferreira, CNRS, LIP-ENS Lyon, France

PROGRAM COMMITTEE
Dharma P. Agrawal, North Carolina State University
Rajive Bagrodia, University of California at Los Angeles
Henri E. Bal, Vrije University, Netherlands
Prithviraj Banerjee, Northwestern University
Silvano Barros, University of London
Laxmi Bhuyan, Texas A&M University
Jehoshua (Shuki) Bruck, California Institute of Technology
Wen-Tsuen Chen, National Tsing Hua University, Taiwan
Yookun Cho, Seoul National University, Korea
Dave Curkendall, NASA Jet Propulsion Laboratory
Robert Cypher, Johns Hopkins University
Sajal Das, University of North Texas
Tim Davis, University of Florida
Andre DeHon, University of California at Berkeley
Hossam ElGindy, University of Newcastle, Australia
William J. Feiereisen, NASA Ames Research Center
Robert Ferraro, NASA Jet Propulsion Laboratory
Afonso Ferreira, CNRS, LIP-ENS, France
K. Gopinath, Indian Institute of Science, India
Thomas Gross, Carnegie Mellon University/ETH Zurich, Switzerland
John Gustafson, Ames Laboratory
Reiner Hartenstein, Universitaet Kaiserslautern
Debra Hensgen, Naval Postgraduate School
Joseph Ja'Ja', University of Maryland
Vipin Kumar, University of Minnesota
Richard Linderman, Rome Laboratory
Evangelos P. Markatos, ICS-FORTH, Greece
Kathryn S. McKinley, University of Massachusetts
Reagan Moore, San Diego Supercomputer Center
David Nassimi, New Jersey Institute of Technology
Stephan Olariu, Old Dominion University
Michael A. Palis, Rutgers University
Keshav Pingali, Cornell University
Timothy Mark Pinkston, University of Southern California
Sanguthevar Rajasekaran, University of Florida
Sanjay Ranka, University of Florida
Arnold L. Rosenberg, University of Massachusetts
Larry Rudolph, Massachusetts Institute of Technology
P. Sadayappan, Ohio State University
Ambuj Singh, University of California at Santa Barbara
Jaswinder Pal Singh, Princeton University
Arun Somani, University of Washington
Sang Son, University of Virginia
Rick Stevens, Argonne National Laboratory
Kenji Toda, Electrotechnical Laboratory, Japan
Josep Torrellas, University of Illinois at Urbana-Champaign
Chau-Wen Tseng, University of Maryland
Mateo Valero, Universidad Politecnica de Catalunya, Spain
C. Eric Wu, IBM Watson Research Center
Sudhakar Yalamanchili, Georgia Institute of Technology
Tao Yang, University of California at Santa Barbara
Kathy Yelick, University of California at Berkeley
Chung-Kwong Yuen, National University of Singapore, Singapore


STEERING COMMITTEE CHAIR
George Westrom, Odetics, Inc. & FSEA


STEERING COMMITTEE
K. Mani Chandy, California Institute of Technology
Ali R. Hurson, Pennsylvania State University
Joseph Ja'Ja', University of Maryland
F. Tom Leighton, MIT
Viktor K. Prasanna, University of Southern California
Sartaj Sahni, University of Florida
Behrooz Shirazi, University of Texas at Arlington
H.J. Siegel, Purdue University
Hal Sudborough, University of Texas at Dallas
George Westrom, Odetics, Inc. & FSEA


ADVISORY COMMITTEE
Said Bettayeb, University of South Alabama (USA)
Michel Cosnard, Ecole Normale Superieure de Lyon (France)
Michael J. Flynn, Stanford University (USA)
Friedhelm Meyer auf der Heide, University of Paderborn (Germany)
Louis O. Hertzberger, University of Amsterdam (The Netherlands)
Richard Karp, University of Washington (USA)
Jan van Leeuwen, University of Utrecht (The Netherlands)
Kurt Mehlhorn, Max Planck Institute (Germany)
Gary Miller, Carnegie Mellon University (USA)
Juerg Nievergelt, ETH Zurich (Switzerland)
Charles L. Seitz, Myricom, Inc. (USA)
Ioannis Tollis, University of Texas, Dallas (USA)
Leslie Valiant, Harvard University (USA)
Paolo Zanella, E.B.I., Cambridge (UK)


SPONSORSHIP

IPPS/SPDP 1998 - the first merged event of the International Parallel Processing Symposium and the Symposium on Parallel and Distributed Processing - is sponsored by the IEEE Computer Society Technical Committee on Parallel Processing and is held in cooperation with the ACM Special Interest Group on Computer Architecture (SIGARCH). Several companies and organizations are providing commercial support. Some workshops have additional sponsorship as listed.

IPPS '98 Program



IPPS/SPDP '98 LOCATION

The first merged symposium of IPPS and SPDP will take place in Orlando, Florida, home of Disney World and a multitude of other attractions, including Epcot Center, the Kennedy Space Center, Splendid China, and Cypress Gardens. Centrally located with easy access to a broad menu of entertainment and recreation, the Delta Orlando Resort offers 25 acres of family fun and relaxation at discounted rates, guaranteeing an exceptionally family-friendly week at IPPS/SPDP '98. See the back of the program to learn more about the area. As usual, book early to ensure your choice of accommodations (see reservation form on the middle tear-out sheet).

IPPS/SPDP '98 PROGRAM SCHEDULE

The combined event will follow the customary format of previous IPPS meetings. Tuesday through Thursday will feature sessions for contributed papers, a panel discussion, industrial track presentations, and commercial exhibits, and each day will open with a keynote address. Workshops and tutorials will be held primarily on Monday and Friday.

CONTRIBUTED PAPERS

118 contributed technical papers will be presented in 21 technical sessions. The papers cover all areas of parallel processing and present previously unpublished research in the design, development, use, and analysis of parallel processing systems.


KEYNOTES/PANEL

Invited speakers will give keynote presentations on Tuesday, Wednesday, and Thursday, and on Wednesday there will be a panel discussion.

WORKSHOPS

Workshops will be held primarily on the first and last days of the symposium: the first eight on Monday, Workshop 9 on Tuesday, and the remaining nine on Friday. Open to all registrants, workshops are an opportunity to explore special topics. See the description of each workshop in the pages that follow.

1. Heterogeneous Computing Workshop

2. Workshop on Parallel and Distributed Real-Time Systems

3. Reconfigurable Architectures Workshop

4. Workshop on Run-Time Systems for Parallel Programming

5. Workshop on High-Level Parallel Programming Models and Supportive Environments

6. Workshop on Distributed Data and Structures

7. Workshop on Job Scheduling Strategies for Parallel Processing

8. Workshop on Biologically Inspired Solutions to Parallel Processing Problems

9. Workshop on High Performance Data Mining

10. Workshop on Randomized Parallel Computing

11. Workshop on Embedded HPC Systems and Applications

12. Workshop on Parallel Processing and Multimedia

13. Workshop on Solving Combinatorial Optimization Problems in Parallel

14. Workshop on Interconnection Networks and Communication Algorithms

15. Workshop on Personal Computer Based Networks of Workstations

16. Workshop on Fault-Tolerant Parallel and Distributed Systems

17. Workshop on Optics and Computer Science

18. Workshop on Formal Methods for Parallel Programming: Theory and Applications




TUTORIALS

There will be three tutorials. The first one will be held on Monday afternoon and the other two will be held on Friday.

1. A Tutorial Introduction to High Performance Data Mining

2. Structured Multithreaded Programming for Windows NT and UNIX/Pthreads

3. Parallel and Distributed Computing Using Java




INDUSTRIAL-COMMERCIAL TRACK

The Industrial Track and Commercial Exhibits portion of the symposium is designed to give manufacturers of commercial products the opportunity to both explain and demonstrate their wares. The Industrial Track consists of original technical papers authored and presented by representatives from the participating organizations, and the papers will be printed in the symposium proceedings. The Commercial Exhibits portion of the symposium complements the presentations of the Industrial Track by providing vendors with the opportunity to display and/or demonstrate their products in an informal "walk-up and talk" setting.


SOCIAL EVENTS

Early morning refreshments will start each day and there will be morning and afternoon coffee (tea & soda) breaks daily. On Thursday, poolside at the Delta Orlando (we always offer a "water view"), IPPS/SPDP '98 will host an evening buffet dinner party.

REGISTRATION

An online registration form is available here. Note that you must register by March 5, 1998, to obtain the "Early Bird" discount. Registrations after March 18, 1998, will be accepted on-site only.


IPPS/SPDP '98 PROCEEDINGS

The proceedings will be published by the IEEE Computer Society Press in both book and CD-ROM form and will be available to all registrants at the symposium. Extra copies and proceedings from previous symposia (including CD-ROMs) may be obtained by contacting the IEEE Computer Society.


WORKSHOP PROCEEDINGS

Proceedings from the workshops will vary in format and availability, and cannot be guaranteed to each IPPS/SPDP registrant. Most workshop organizers will put proceedings on the Web as well as having hard copies available at the symposium. To help distribute a limited supply fairly at the symposium, a ticket for proceedings from two workshops (one for Monday and one for Friday) will be issued to each registrant, and printed copies will be distributed at the beginning of each workshop with preference given to those participating in the workshop. Requests for additional printed copies should be made to the individual workshop chair(s).


FOR MORE INFORMATION

Additional information regarding IPPS/SPDP '98 may be obtained on the Web (URL: http://www.ippsxx.org) or by contacting one of the General Co-Chairs: Viktor K. Prasanna ((ipps98 or spdp98)@ganges.usc.edu) or Behrooz Shirazi ((ipps98 or spdp98)@zulu.uta.edu).



KEYNOTES/PANEL



Tuesday, March 31 - Keynote
8:30 AM - 9:30 AM
What's So Different About Cluster Architectures?
David Culler, University of California, Berkeley

David Culler leads the Berkeley Network of Workstations (NOW) project, which has been instrumental in putting clusters on the parallel computing map. He is well known for his work on fast communication (Active Messages), parallel programming systems (Split-C), and models for parallel execution (LogP). He has recently authored a graduate text on Parallel Computer Architecture.


Wednesday, April 1 - Keynote
8:30 AM - 9:30 AM
Parallel Data Access and Parallel Execution in a World of CyberBricks
Jim Gray, Microsoft Research

Dr. Gray is a specialist in database and transaction processing computer systems. At Microsoft, his research focuses on scalable computing: building super-servers and workgroup systems from commodity software and hardware. Prior to joining Microsoft, he worked at Digital, Tandem, IBM and AT&T on database and transaction processing systems. He is editor of the Performance Handbook for Database and Transaction Processing Systems, and co-author of Transaction Processing Concepts and Techniques. He is a Member of the National Academy of Engineering, Fellow of the ACM, a member of the National Research Council's Computer Science and Telecommunications Board, Editor in Chief of the VLDB Journal, Trustee of the VLDB Foundation, and Editor of the Morgan Kaufmann series on Data Management.


Thursday, April 2 - Keynote
8:30 AM - 9:30 AM
The Future of Scalable Systems
Greg Papadopoulos, Sun Microsystems

As vice president and chief technology officer for Sun Microsystems Computer Company (SMCC), Dr. Greg Papadopoulos is responsible for SMCC's core technology strategy and advanced development programs, key technology investments and external technology partnerships.
Previously, Dr. Papadopoulos was chief scientist for Server Systems engineering and then chief technology officer of the Enterprise Servers and Storage business unit within SMCC.
Before joining Sun in the fall of 1994, Dr. Papadopoulos was senior architect and director of product strategy for Thinking Machines Corporation (TMC) in Cambridge, Massachusetts, and an associate professor at MIT, where he taught electrical engineering and computer science. At MIT, he was the recipient of the National Science Foundation's Presidential Young Investigator Award. He also co-founded three companies: PictureTel (video conferencing), Ergo Computing (high-end PCs), and Exa Corporation (fluid dynamics). Dr. Papadopoulos received a B.A. in systems science from the University of California at San Diego, and an M.S. and Ph.D. in electrical engineering and computer science from the Massachusetts Institute of Technology.


Wednesday, April 1 - Panel Discussion
4:00 PM - 6:00 PM
Data Intensive vs. Scientific Computing: Will the Twain Meet for Parallel Processing?
Moderator:

Vipin Kumar, University of Minnesota

Panelists:

Tilak Agerwala, IBM
David Bailey, NASA
Jim Gray, Microsoft
Olaf Lubeck, Los Alamos National Lab
Bob Lucas, ARPA
Tom Sterling, Caltech
Rick Stevens, Argonne National Lab
Hans Zima, University of Vienna



OVERVIEW




GENERAL SCHEDULE

Sunday, March 29th
Registration Opens

Monday, March 30th
Workshops 1-8
Tutorial 1

Tuesday, March 31st
Keynote
Contributed Papers

Workshop 9
Workshop 2 continues

Wednesday, April 1st
Keynote
Contributed Papers
Industrial Track
Panel Discussion

Workshop 2 continues

Thursday, April 2nd
Keynote
Contributed Papers

Friday, April 3rd
Workshops 10-18
Tutorials 2 & 3

Please Note:
Workshop schedules will vary but will accommodate common morning and afternoon breaks. See the workshop descriptions, which include contact information. Also, check the registration area, where schedules for each workshop will be posted.

The registration desk will be open Sunday evening and each morning prior to the start of sessions. A program sheet detailing room assignments and other symposium details will be available at registration.


MONDAY, MARCH 30

WORKSHOPS

1. Heterogeneous Computing Workshop

2. Workshop on Parallel and Distributed Real-Time Systems

3. Reconfigurable Architectures Workshop

4. Workshop on Run-Time Systems for Parallel Programming

5. Workshop on High-Level Parallel Programming Models and Supportive Environments

6. Workshop on Distributed Data and Structures

7. Workshop on Job Scheduling Strategies for Parallel Processing

8. Workshop on Biologically Inspired Solutions to Parallel Processing Problems

TUTORIAL

1:30 PM - 5:30 PM
Tutorial 1
A Tutorial Introduction to High Performance Data Mining


TUESDAY, MARCH 31

8:30 AM - 9:30 AM
Keynote Address
What's So Different About Cluster Architectures?
David Culler, University of California, Berkeley

9:30 AM - 10:00 AM
Morning Break

10:00 AM - 12:00 noon
Session 1
Communication
Session 2
Compilers I
Session 3
Mathematical Applications

12:00 noon - 1:30 PM
Lunch Break

1:30 PM - 3:30 PM
Session 4
Networks
Session 5
Compilers II
Session 6
Signal and Image Processing

3:30 PM - 4:00 PM
Afternoon Break

4:00 PM - 6:00 PM
Session 7
Collective Communication
Session 8
Memory Hierarchy and I/O
Session 9
Algorithms I

All Day
9. Workshop on High Performance Data Mining
2. Workshop on Parallel and Distributed Real-Time Systems

COMMERCIAL EXHIBITS


WEDNESDAY, APRIL 1

8:30 AM - 9:30 AM
Keynote Address
Parallel Data Access and Parallel Execution in a World of CyberBricks
Jim Gray, Microsoft Research

9:30 AM - 10:00 AM
Morning Break

10:00 AM - 12:00 noon
Session 10
Routing
Session 11
Operating Systems and Scheduling
Session 12
Algorithms II
Industrial Track I
Environments, Tools, and Evaluation Methods

12:00 noon - 1:30 PM
Lunch Break

1:30 PM - 3:30 PM
Session 13
Multiprocessor Performance Evaluation
Session 14
Scheduling
Session 15
Databases and Sorting
Industrial Track II
Reconfigurable Systems

3:30 PM - 4:00 PM
Afternoon Break

4:00 PM - 6:00 PM
Panel Discussion

All Day
2. Workshop on Parallel and Distributed Real-Time Systems

COMMERCIAL EXHIBITS


THURSDAY, APRIL 2

8:30 AM - 9:30 AM
Keynote Address
The Future of Scalable Systems
Greg Papadopoulos, Sun Microsystems

9:30 AM - 10:00 AM
Morning Break

10:00 AM - 12:00 noon
Session 16
Performance Prediction and Evaluation
Session 17
Software Distributed Shared Memory
Session 18
Scientific Simulation

12:00 noon - 1:30 PM
Lunch Break

1:30 PM - 3:30 PM
Session 19
Fault Tolerance
Session 20
Performance and Debugging Tools
Session 21
Distributed Systems


COMMERCIAL EXHIBITS


Early Evening
IPPS/SPDP 1998
POOLSIDE PARTY
Buffet Dinner
Resort Attire

Tickets for guests may be purchased at registration desk. Details and time will be posted at registration desk and in program sheet.


FRIDAY, APRIL 3

WORKSHOPS

10. Workshop on Randomized Parallel Computing

11. Workshop on Embedded HPC Systems and Applications

12. Workshop on Parallel Processing and Multimedia

13. Workshop on Solving Combinatorial Optimization Problems in Parallel

14. Workshop on Interconnection Networks and Communication Algorithms

15. Workshop on Personal Computer Based Networks of Workstations

16. Workshop on Fault-Tolerant Parallel and Distributed Systems

17. Workshop on Optics and Computer Science

18. Workshop on Formal Methods for Parallel Programming: Theory and Applications

TUTORIALS

8:30 AM - 12:30 PM
Tutorial 2
Structured Multithreaded Programming for Windows NT and UNIX/Pthreads

1:30 PM - 5:30 PM
Tutorial 3
Parallel and Distributed Computing Using Java


IPPS/SPDP '98 ADJOURNS





MONDAY, MARCH 30



Workshop 1: All Day Monday
HCW '98
7th HETEROGENEOUS COMPUTING WORKSHOP

Co-sponsored by

IEEE Computer Society & Office of Naval Research


Proceedings published by
IEEE Computer Society Press





ADVANCE PROGRAM



HCW '98 Contents

In addition to eight refereed contributed papers, HCW '98 will feature four invited papers describing existing HC systems. The workshop will conclude with what is sure to be a lively panel discussion on the use of Java for programming HC systems.

HCW '98 Focus

Heterogeneous computing systems range from diverse elements within a single computer to coordinated, geographically distributed machines with different architectures. A heterogeneous computing system provides a variety of capabilities that can be orchestrated to execute multiple tasks with varied computational requirements. Applications in these environments achieve performance by exploiting the affinity of different tasks to different computational platforms or paradigms, while considering the overhead of inter-task communication and the coordination of distinct data sources and/or administrative domains.
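
To give a flavor of the matching and scheduling problems the workshop addresses (see Session 2 below), the following sketch shows a greedy minimum-completion-time mapping of independent tasks onto machines of different speeds. The machine count and the estimated-time-to-compute matrix are invented purely for illustration; real HC systems must also account for communication costs, data staging, and administrative constraints.

/*
 * Illustrative sketch (not part of the workshop program): greedy
 * minimum-completion-time mapping of tasks onto heterogeneous machines.
 * The cost matrix and problem sizes below are hypothetical.
 */
#include <stdio.h>

#define NTASKS    6
#define NMACHINES 3

int main(void)
{
    /* etc[t][m]: estimated time to compute task t on machine m (invented). */
    double etc[NTASKS][NMACHINES] = {
        { 4.0, 9.0, 2.5 },
        { 7.0, 3.0, 6.0 },
        { 1.5, 8.0, 5.0 },
        { 6.0, 2.0, 9.0 },
        { 3.0, 7.5, 4.0 },
        { 8.0, 5.0, 3.5 }
    };
    double ready[NMACHINES] = { 0.0, 0.0, 0.0 };  /* machine availability times */

    for (int t = 0; t < NTASKS; t++) {
        int best = 0;
        double best_ct = ready[0] + etc[t][0];
        for (int m = 1; m < NMACHINES; m++) {
            double ct = ready[m] + etc[t][m];   /* completion time of t on m */
            if (ct < best_ct) { best_ct = ct; best = m; }
        }
        ready[best] = best_ct;                  /* assign task t to machine best */
        printf("task %d -> machine %d (completes at %.1f)\n", t, best, best_ct);
    }

    double makespan = 0.0;
    for (int m = 0; m < NMACHINES; m++)
        if (ready[m] > makespan) makespan = ready[m];
    printf("makespan: %.1f\n", makespan);
    return 0;
}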

Organizing Committee:

General Chair: Vaidy Sunderam, Emory University
Vice-General Chair: Dan Watson, Utah State University
Program Chair: John K. Antonio, Texas Tech University

Steering Committee:

Francine Berman, University of California, San Diego
Jack Dongarra, University of Tennessee
Richard F. Freund, NRaD (Chair)
Debra Hensgen, Naval Postgraduate School
Paul Messina, Caltech
Jerry Potter, Kent State University
Viktor K. Prasanna, University of Southern California
H.J. Siegel, Purdue University
Vaidy Sunderam, Emory University

Program Committee:

John K. Antonio, Texas Tech University (Chair)
Francine Berman, University of California, San Diego
Steve J. Chapin, University of Virginia
Partha Dasgupta, Arizona State University
Mary Eshaghian, New Jersey Institute of Technology
Allan Gottlieb, New York University and NEC Research
Babak Hamidzadeh, University of British Columbia
Salim Hariri, Syracuse University
Taylor Kidd, Naval Postgraduate School
Domenico Laforenza, CNUCE - Institute of the Italian NRC
Yan Alexander Li, Intel Corporation
David J. Lilja, University of Minnesota
Noe Lopez-Benitez, Texas Tech University
Piyush Maheshwari, The University of New South Wales
Richard C. Metzger, Rome Laboratory
Viorel Morariu, Concurrent Technologies Corporation
Viktor K. Prasanna, University of Southern California
Ranga S. Ramanujan, Architecture Technology Corporation
Behrooz A. Shirazi, University of Texas at Arlington
H.J. Siegel, Purdue University
Min Tan, Cisco Systems, Inc.
Dan Watson, Utah State University
Charles C. Weems, University of Massachusetts, Amherst
Elizabeth Williams, Center for Computing Sciences
Albert Y. Zomaya, University of Western Australia


Opening Remarks
8:15 - 8:30

Session 1
8:30 - 10:10
Invited Case Studies and Status Reports on Existing Systems
Chair: John K. Antonio
Texas Tech University
Lubbock, TX, USA



Production Computing on Corporate Intranets with SPANR
Richard F. Freund, NRaD, San Diego, CA, USA

Prototyping an International Computational Grid: Globus and GUSTO
Ian Foster, Argonne National Laboratory, Argonne, IL, USA, and Carl Kesselman, Information Sciences Institute, University of Southern California, Los Angeles, CA, USA

NetSolve's Network Enabled Server: Examples and Applications
Henri Casanova, University of Tennessee, Knoxville, TN, USA, and Jack Dongarra, University of Tennessee, Knoxville and Oak Ridge National Laboratory, Oak Ridge, TN, USA

Implementing Large-Scale Distributed Synthetic Forces Simulations on top of Metacomputing Software Infrastructure
Paul Messina, Sharon Brunett, and Tom Gottschalk, Caltech, Pasadena, CA, USA
Carl Kesselman, Information Sciences Institute, University of Southern California, Los Angeles, CA, USA

Break
10:10 - 10:30


Session 2: 10:30 - 12:10
Resource Management, Matching, and Scheduling
Chair: Dan Watson
Utah State University
Logan, UT, USA


Resource Management in Networked HPC Systems
Axel Keller and Alexander Reinefeld, Paderborn Center for Parallel Computing, Paderborn, Germany

A Dynamic Matching and Scheduling Algorithm for Heterogeneous Computing Systems
Muthucumaru Maheswaran and H.J. Siegel, Purdue University, West Lafayette, IN, USA

Dynamic, Competitive Scheduling of Multiple DAGs in a Distributed Heterogeneous Environment
Michael Iverson and Fusun Ozguner, The Ohio State University, Columbus, OH, USA

The Relative Performance of Various Mapping Algorithms is Independent of Sizable Variances in Runtime Predictions
Robert Armstrong, Debra Hensgen, and Taylor Kidd, Naval Postgraduate School, Monterey, CA, USA

Lunch
12:10 - 2:00

Session 3: 2:00 - 3:40
Modeling Issues and Group Communications
Chair: David J. Lilja
University of Minnesota
Minneapolis, MN, USA


Modeling the Slowdown of Data-Parallel Applications in Homogeneous and Heterogeneous Clusters of Workstations
Silvia Figueira and Francine Berman, University of California, San Diego, CA, USA

Specification and Control of Cooperative Work in a Heterogeneous Computing Environment
Guillermo J. Hoyos-Rivera and Esther Martinez-Gonzalez, Universidad Veracruzana, Xalapa, Veracruz, Mexico; Homero V. Rios-Figueroa and Victor G. Sanchez-Arias, Laboratorio Nacional de Informatica Avanzada (LANIA), A.C.; Hector G. Acosta-Mesa, Universidad Tecnologica de la Mixteca, Huajuapan de Leon, Oaxaca, Mexico; Noe Lopez-Benitez, Texas Tech University, Lubbock, TX, USA

A Mathematical Model, Heuristic, and Simulation Study for a Basic Data Staging Problem in a Heterogeneous Networking Environment
Min Tan, Cisco Systems, Inc., San Jose, CA, USA; Mitchell D. Theys, H.J. Siegel, and Noah B. Beck, Purdue University, West Lafayette, IN, USA; Michael Jurcyzk, University of Missouri at Columbia, Columbia, MO, USA

An Efficient Group Communication Architecture over ATM Networks
Sung-Yong Park, Joohan Lee, and Salim Hariri, Syracuse University, Syracuse, NY, USA

Break
3:40 - 4:00


Panel

4:00 - 5:30
Is Java the Answer for Programming Heterogeneous Computing Systems?
Panel Chair: Gul A. Agha
University of Illinois, Urbana-Champaign, IL, USA


Panelists:
Lorenzo Alvisi
University of Texas, Austin, TX, USA

Hank Dietz
Purdue University, West Lafayette, IN, USA

Suresh Jagannathan
NEC Research, Princeton, NJ, USA

Doug Lea
SUNY Oswego, Oswego, NY, USA

Charles C. Weems
University of Massachusetts, Amherst, MA, USA

Panel Description:
One factor that complicates the programming of heterogeneous computing systems is the absence of a portable, high-performance programming language. The widespread interest and use of Java for remote execution on the Web has demonstrated the ease and practicality of using high-level programming for heterogeneous computing. This panel will discuss the use of Java for programming HC systems with a higher degree of both inter- and intra-machine concurrency. Representative questions that the panel will address include:

- Can Java deliver the potential of HC systems for high performance, availability, etc.?

- What additional constructs are needed for Java to effectively support concurrency and distribution?

- Does Java provide true portability and mobility?

- Will Java be accepted by application programmers of HC systems?

- What are the challenges in using machine-dependent libraries with Java?

Workshop 2:
All Day Monday, Tuesday, & Wednesday

JOINT WORKSHOP ON PARALLEL AND DISTRIBUTED REAL-TIME SYSTEMS
Sixth International Workshop on Parallel and Distributed Real-Time Systems & Second Workshop on Metrics and Measurement for Computer-based Systems

Workshop Co-Chairs:
David Andrews, UofA, USA
P.D.V. van der Stok, Eindhoven Univ Tech, Netherlands
Kenji Toda, Electro-tech Laboratory, Japan



Keynote Speaker:

Michael W. Masters, Chief Systems Engineer of the US Navy's Advanced Control-21st Century (AdCon-21) Program


This year's WPDRTS will feature papers presenting original, unpublished research on real-time systems that have parallel and/or distributed architectures. Of interest are experimental and commercial systems, their scientific and commercial applications, and theoretical foundations. The presentations will cover the following topics:

- Architecture
- Benchmarking
- Command and control systems
- Communications and networking
- Databases (real-time)
- Embedded systems
- Fault tolerance
- Formal methods
- Instrumentation
- Languages (real-time)
- Multimedia
- New paradigms
- Object orientation (real-time)
- Signal and image processing
- Reengineering
- Scheduling and resource management
- Software architectures
- Systems engineering
- Tools and environments
- Validation and simulation
- Visualization

WPDRTS will also feature a special problem session for solutions to a challenging real-time problem distributed with the call for papers.

Steering Committee:

Li Guan, Australia
Dieter K. Hammer, Eindhoven Univ Tech, Netherlands
Robert D. Harrison, NSWCDD, USA
Viktor K. Prasanna, USC, USA
John A. Stankovic, Univ Virginia, USA
Mario Tokoro, Keio Univ/Sony CSL, Japan
Hideyuki Tokuda, Keio Univ, Japan
Lonnie R. Welch, UTA, USA

Program Committee:

Emile Aarts, Eindhoven Univ Tech, Netherlands
Gul Agha, UofI, USA
Mehmet Aksit, Twente Univ Tech, Netherlands
Jorge Amador-Monteverdi, Euro. Space Tech, Netherlands
Mary A. Austin, United Tech, USA
Paul Austin, Xerox, USA
Keith Bromley, NRaD, USA
Jin-Young Choi, KU, Korea
Kyung-Hee Choi, AU, Korea
Ray Clark, Open Group, USA
Flaviu Cristian, UCSD, USA
John Drummond, NRaD, USA
Klaus Ecker, U of Claus, Germany
Doug Gensen, HP, USA
Jan Gustafsson, U of Mal, Sweden
Wolfgang Halang, U of Hagen, Germany
Constance Heitmeyer, NRL, USA
Kenji Ishida, Hiroshima Univ, Japan
Farnam Jahanian, UMich, USA
Joerg Kaiser, Univ of Ulm, Germany
Yoshiaki Kakuda, Osaka Univ, Japan
Christian Kelling, Fraun-ISST, Berlin, Germany
Gerard Le Lann, INRIA, France
Miroslaw Malek, Hum U, Berlin, Germany
David Marlow, NSWCDD, USA
Richard Metzger, Rome Lab, USA
Tatsuo Nakajima, JAIST, Japan
Hidenori Nakazato, Oki Elec, Japan
Edgar Nett, GMD, Germany
Diane Rover, Mich St Univ, USA
Bo Sanden, Col Tech Univ, USA
Karsten Schwan, GaTech, USA
Bran Selic, ObjecTime, Inc., USA
Behrooz Shirazi, UTA, USA
Sang Son, Univ Virginia, USA
Peter Steenkiste, CMU, USA
Hiroaki Takada, UT, Japan
Naohisa Takahashi, NTT, Japan
Kazunori Takashio, UCEC, Japan
Seiichi Takegaki, Mit Elec, Japan
Mitch Thornton, UA, USA
Brian Ujvary, Vista Bankcorp, USA
Farn Wang, Acad Sin, Taiwan, ROC
Paul Werme, NSWCDD, USA
J.B. Williams, LMC, USA
C.M. Woodside, Carl U, USA
Yoshinori Yamaguchi, Elec-tech Lab, Japan
Tomohiro Yoneda, TIT, Japan

Publication Chair:

Binoy Ravindran, UTA, USA

Publicity Chairs:

Tadashi Ae, Hiroshima University, Japan
Maarten Bodlaender, Eindhoven Tech Univ, Netherlands
Antonio L. Samuel, NSWCDD, USA


Sponsors:

Naval Surface Warfare Center Dahlgren Division (NSWCDD), U.S. Army MICOM, and Hollands Signaalapparaten B.V.

Information Requests:

David Andrews
CSEG Dept
Rm 331 Engineering Hall
University of Arkansas
Fayetteville, AR 72701
Internet: maarten@win.tue.nl

Workshop 3: All Day Monday

5th RECONFIGURABLE ARCHITECTURES WORKSHOP

Workshop Co-Chairs:
Peter M. Athanas, Virginia Tech, USA
Reiner W. Hartenstein, University of Kaiserslautern, Germany




Program Committee:

Peter Athanas, Virginia Tech, USA
Don Bouldin, University of Tennessee, USA
Klaus Buchenrieder, Siemens Research, Germany
Steven Casselman, Virtual Computer Corp., USA
Pak Chan, University of California, Santa Cruz, USA
Bernard Courtois, Univ. Grenoble, France
Hossam Elgindy, Univ. of Newcastle, Australia
Rolf Ernst, Univ. Braunschweig, Germany
Masahiro Fujita, Fujitsu Labs., USA
Manfred Glesner, TH Darmstadt, Germany
John Gray, Xilinx Corp., Great Britain
Reiner Hartenstein, Univ. Kaiserslautern, Germany
John McHenry, National Security Agency, USA
Toshiaki Miyazaki, NTT Laboratories, Japan
Brent Nelson, Brigham Young Univ., USA
Viktor Prasanna, Univ. of Southern California, USA
Hartmut Schmeck, Univ. Karlsruhe, Germany
Herman Schmitt, Carnegie Mellon Univ., USA
Michal Servit, Techn. Univ. Prague, Czech Republic
Takayuki Yanagawa, NEC, Tokyo, Japan
Hiroto Yasuura, Kyushu University, Japan

The past decade has witnessed enormous technological advances, a deeper appreciation of the power of reconfigurable technology platforms, and a better understanding of computing in time and in space. Because of the increasing interest in reconfigurable systems, the scope of this workshop has been substantially widened. It now deals with reconfigurability at all levels: reconfigurable processor network structures, reconfigurable processors, as well as reconfigurable components.

The topics of interest of the workshop include:

Reconfigurable systems
- Implementations
- Scalable programmable logic
- Reconfigurable custom computing machines
- Reconfigurable accelerators and their applications

Applications
- Problem solving paradigms
- Image processing
- Graphics and animation
- Algorithms
- Industrial applications and experiences

Bridging the gap
- Software to hardware migration for speed-up
- Run time to compile time migration for speed-up
- Hardware/software co-design using reconfigurable devices
- New paradigms and basic research aspects

Development tools and methods
- High-level development support
- Reconfiguration from programming language sources
- Compilation techniques
- Benchmarks for reconfigurable hardware

Curricula
- Introducing structural programming in CS curricula
- Introducing reconfigurable architectures and technology platforms in CS&E curricula
- Educational experiences on reconfigurable systems
- Experiences in hardware/software co-education

The primary goal of the workshop is to bridge the gap between hardware on one side, and HPC, parallel processing, or supercomputing on the other side. Reconfigurable systems can only be built by drawing on experience from these different areas. Close interaction between them is necessary to identify and solve important research problems. The workshop aims to provide an opportunity to intensify creative interaction between researchers actively involved in the fabrication, design, applications, and enabling technologies of reconfigurable architectures.

For more information, please see the RAW-98 homepage:

URL: http://xputers.informatik.uni-kl.de/RAW/RAW98.html

or contact any of the Workshop Co-Chairs:

Reiner W. Hartenstein
Universitaet Kaiserslautern, Germany
Fax: +49 (631) 205-2640
and +49 (7251) 14823 (please use both simultaneously)
Internet: hartenst@rhrk.uni-kl.de
and abakus@informatik.uni-kl.de (please use both simultaneously)

Peter M. Athanas
Virginia Polytechnic Institute and State University
The Bradley Department of Electrical and Computer Engineering
Blacksburg, VA 24061-0111
Internet: athanas@vt.edu


Workshop 4: All Day Monday

2nd WORKSHOP ON RUNTIME SYSTEMS FOR PARALLEL PROGRAMMING

Workshop Chairs:
Matthew Haines, University of Wyoming, USA
Koen Langendoen, Vrije Universiteit, The Netherlands
Greg Benson, University of California at Davis, USA



Invited Speakers:

Joel Saltz, University of Maryland at College Park, USA
Ian Foster, Argonne National Laboratory, USA
David Culler, University of California at Berkeley, USA

Program Committee:

Henri Bal, Vrije Universiteit, The Netherlands
Pete Beckman, Los Alamos National Laboratory, USA
Wim Bohm, Colorado State University, USA
Denis Caromel, University of Nice - INRIA Sophia Antipolis, France
Ian Foster, Argonne National Laboratory, USA
Dennis Gannon, Indiana University, USA
Laxmikant V. Kale, University of Illinois at Urbana Champaign, USA
David Lowenthal, University of Georgia, USA
Frank Mueller, Humboldt-Universitaet zu Berlin, Germany
Ron Olsson, University of California, Davis, USA
Klaus Schauser, University of California, Santa Barbara, USA
Alan Sussman, University of Maryland, USA


Runtime systems are critical to the implementation of parallel programming languages and libraries. They provide the core functionality of a particular programming model and the glue between the model and the underlying operating system. As such, runtime systems have a large impact on the performance and portability of parallel programming systems. Yet despite the importance of runtime systems, there are few forums in which practitioners can exchange their ideas, and those that exist typically focus on adjacent areas such as languages, operating systems, and parallel computing. RTSPP provides a forum for bringing together runtime system designers from various backgrounds to discuss the state of the art in designing and implementing runtime systems for parallel programming. This one-day workshop includes both a technical session of refereed papers and invited talks by leading researchers in the field.
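
As a minimal illustration of the "glue" role described above (and not code from any RTSPP contribution), the sketch below layers a toy task-spawn interface on top of POSIX threads; the names rt_spawn and rt_wait_all are hypothetical.

/*
 * Toy runtime-system layer: a "spawn a task" programming-model call
 * glued onto the operating system's POSIX threads.  Illustrative only.
 */
#include <pthread.h>
#include <stdio.h>

#define MAX_TASKS 16

static pthread_t rt_threads[MAX_TASKS];
static int rt_count = 0;

/* Programming-model call: run fn(arg) as an independent task. */
int rt_spawn(void *(*fn)(void *), void *arg)
{
    if (rt_count >= MAX_TASKS)
        return -1;
    return pthread_create(&rt_threads[rt_count++], NULL, fn, arg);
}

/* Programming-model call: wait for all spawned tasks to finish. */
void rt_wait_all(void)
{
    for (int i = 0; i < rt_count; i++)
        pthread_join(rt_threads[i], NULL);
    rt_count = 0;
}

/* Example task body. */
static void *hello(void *arg)
{
    printf("task %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    for (long i = 0; i < 4; i++)
        rt_spawn(hello, (void *)i);
    rt_wait_all();
    return 0;
}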

RTSPP is organized to provide a useful exchange of information in the area of runtime systems. To achieve this, the RTSPP program has three components: 1) original, peer-reviewed technical papers that illustrate the latest research ideas; 2) invited talks from established leaders in the field; and 3) question-and-answer sessions that encourage discussion among attendees. Topics for the technical papers include:

- Novel runtime system designs
- Performance evaluation of runtime systems
- Operating system support for runtime systems
- Design and implementation techniques for threads
- Runtime systems supporting distributed shared memory
- Tension between portability and efficiency in a runtime system


For additional information, please contact:

Matthew Haines
Department of Computer Science
University of Wyoming
Laramie, WY 82070, USA
Internet: haines@cs.uwyo.edu

Workshop 5: All Day Monday

3rd WORKSHOP ON HIGH-LEVEL PARALLEL PROGRAMMING MODELS AND SUPPORTIVE ENVIRONMENTS

Workshop Chair:
M. Gerndt, Research Centre Juelich, Germany



Program Committee:

Arndt Bode, Technische Universitaet Muenchen, Germany
Helmar Burkhart, Universitaet Basel, Switzerland
John Carter, University of Utah, USA
Michel Cosnard, Ecole Normale Superieur de Lyon, France
Karsten Decker, Swiss Center for Scientific Computing, Switzerland
Dennis Gannon, Indiana University, USA
Michael Gerndt, Research Centre Juelich, Germany
Hermann Hellwagner, Technische Universitaet Muenchen, Germany
Francois Irigoin, Ecole des Mines de Paris, France
Vijay Karamcheti, University of Illinois at Urbana-Champaign, USA
Pete Keleher, University of Maryland, USA
Ulrich Kremer, Rutgers University, USA
Ron Perrott, Queens University Belfast, United Kingdom
Thierry Priol, INRIA, France
Klaus E. Schauser, University of California at Santa Barbara, USA
Domenico Talia, ISI-CNR, Italy
Hans P. Zima, University of Vienna, Austria


HIPS'98 is a full-day workshop focusing on high-level programming of networks of workstations and of massively-parallel machines. Its goal is to bring together researchers working in the areas of applications, language design, compilers, system architecture, and programming tools to discuss new developments in programming such systems.

With the advent of the de facto standards Parallel Virtual Machine (PVM) and Message Passing Interface (MPI), parallel programming using the message-passing style has reached a certain level of maturity. However, in terms of convenience and productivity, this parallel programming style is often considered to correspond to assembler-level programming of sequential computers.
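
For readers unfamiliar with the style being contrasted here, the generic fragment below shows the explicit, rank-by-rank message passing that MPI programs spell out by hand; it is an illustrative sketch, not code from any HIPS '98 contribution.

/* Explicit message passing in MPI: buffers, ranks, tags, and types are
 * all managed by hand -- the "assembler level" of parallel programming
 * referred to above.  Generic example; run with at least two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            value = 42;
            /* sender names the destination rank, tag, and datatype */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receiver must match the source, tag, and datatype */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}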

One of the keys to a (commercial) breakthrough of parallel processing, therefore, is high-level programming models that still allow truly efficient code to be produced. Along this way, languages and packages have been established which are more convenient than explicit message passing and allow higher productivity in software development; examples are High Performance Fortran (HPF), thread packages for shared memory-based programming, and Shared Virtual Memory (SVM) environments.

Yet, current implementations of high-level programming models often suffer from low performance of the generated code, from the lack of corresponding high-level development tools, e.g. for performance analysis, and from restricted applicability, e.g. to the data-parallel programming style. This situation requires strong research efforts in the design of parallel programming models and languages that are both at a high conceptual level and implemented efficiently, in the development of supportive tools, and in the integration of languages and tools into convenient programming environments. Hardware and operating system support for high-level programming, e.g. distributed shared memory and monitoring interfaces, is a further area of interest.


Invited speaker:

John Carter, University of Utah, USA
"Distributed Shared Memory: Past, Present, and Future"

For further information, please contact:

Michael Gerndt
Research Centre Juelich
Central Institute for Applied Mathematics
D-52425 Juelich, Germany
Vox: +49-2461-616569
Fax: +49-2461 616656
Internet: hips98@fz-juelich.de


Workshop 6: All Day Monday

WORKSHOP ON DISTRIBUTED DATA AND STRUCTURES

Workshop Organizers:

Nicola Santoro, Carleton University, Ottawa, Canada
Peter Widmayer, ETH Zurich, Switzerland





As databases grow steadily, applications become more and more demanding, and distributed computer systems become readily available, the problem of how to efficiently maintain large data sets gains importance. An important aspect of this problem is the design, implementation, and operation of a data structure in a distributed system. Research on the systematic design and analysis of distributed data structures has just started: in the database literature, dynamic file structures for distributed object management have attracted some attention, and in the algorithms literature, data structures have been studied from a complexity-oriented point of view.
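
As a minimal illustration of the basic question such structures must answer (which node is responsible for which key?), the sketch below hashes keys onto a fixed set of server nodes. The hash function and data are chosen purely for illustration; real distributed data structures must additionally handle growth, splitting, and clients with out-of-date views.

/* Toy key-to-node mapping for a distributed data structure.
 * Purely illustrative; not drawn from any workshop contribution. */
#include <stdio.h>

/* Simple string hash (djb2). */
static unsigned long hash(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Map a key to one of nservers nodes. */
static int home_node(const char *key, int nservers)
{
    return (int)(hash(key) % (unsigned long)nservers);
}

int main(void)
{
    const char *keys[] = { "alpha", "bravo", "charlie", "delta" };
    int nservers = 4;

    for (int i = 0; i < 4; i++)
        printf("key %-8s -> node %d\n", keys[i], home_node(keys[i], nservers));
    return 0;
}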

This workshop is intended to bring together application oriented developers and theoretical researchers concerned with the maintenance of distributed data and the organization of the interaction between the computing nodes. The presentations will address the following topics:

- Design, implementation, and operation of distributed data structures in general
- Measures of efficiency for distributed data structures
- Complexity analysis (lower and upper bounds) for distributed data structures
- Particular data maintenance problems (distributed token; mutual exclusion; counter)
- Connection between distributed data maintenance and mobile data maintenance
- Connection between the interaction structure of the nodes and the data maintenance
- Experience with distributed data structures on current systems


For more information, contact:

Konrad Schlude
Institute of Theoretical Computer Science
ETH Zentrum
8092 Zurich
Switzerland
Vox: +41 1 632 7401 or 632 3832
Internet: wdas98@inf.ethz.ch


Workshop 7: All Day Monday

4th WORKSHOP ON JOB SCHEDULING STRATEGIES FOR PARALLEL PROCESSING

Workshop Co-Chairs:
Dror Feitelson, Hebrew University (feit@cs.huji.ac.il)
Larry Rudolph, MIT (rudolph@theory.lcs.mit.edu)




Program Committee:

Stephen Booth, EPCC
Allan Gottlieb, NYU
Atsushi Hori, RWCP
Phil Krueger, Sequent
Richard Lagerstrom, Cray Research
Miron Livny, University of Wisconsin
Virginia Lo, University of Oregon
Reagan Moore, SDSC
Bill Nitzberg, NASA Ames
Uwe Schwiegelshohn, University Dortmund
Ken Sevcik, University of Toronto
Mark Squillante, IBM Research
John Zahorjan, University of Washington
Songnian Zhou, Platform Computing

As large parallel computers become more popular, scheduling strategies become more important as a means of balancing the need for exclusive use of the machine's resources and the desire to make these resources readily available to many diverse users. Neither sign-up sheets, naive time-slicing, nor naive space-slicing are suitable solutions. Moreover, there appears to be a divergence between what is studied, modeled, and analyzed in academic circles and the actual, sometimes ad-hoc, scheduling schemes developed by vendors and large installations.

Continuing the tradition established at IPPS '95, the workshop is intended to attract people from academia, supercomputing centers, national laboratories, and parallel computer vendors to address resource management issues in multiuser parallel systems, and to attempt to resolve conflicting goals such as short response times for interactive work, minimal interference with batch jobs, fairness to all users, and high system utilization. We hope to achieve a balance between reports of current practices in large and heavily-used installations, proposals of novel schemes that have not yet been tested in a real environment, and realistic models and analysis. The emphasis will be on practical designs in the context of real parallel operating systems.

Part of this year's workshop will be dedicated to discussing how to come up with a "standard" benchmark workload for parallel job scheduling that takes into account response time and system resource needs.

Topics of interest include:

- Benchmarking and performance metrics to compare scheduling schemes
- Experience with scheduling policies on current systems
- Performance implications of scheduling strategies
- Fairness, priorities, and accounting issues
- Workload characterization and classification
- Support for different classes of jobs (e.g. interactive vs. batch)
- Static vs. dynamic partitioning
- Time slicing and gang scheduling
- Interaction of scheduling with the model of computation
- Interaction of scheduling with memory management and I/O
- Load estimation and load balancing
- Scheduling on heterogeneous nodes (e.g. with different amounts of memory)

Program:

The advance program will be available on the workshop home page toward the middle of February, and downloadable papers will be available in March.

Proceedings:

In previous years, post-workshop proceedings were published by Springer-Verlag in the Lecture Notes in Computer Science series, as volumes 949, 1162, and 1291.

For more information contact:

Dror Feitelson
Vox: +972 2 658 4115
Internet: feit@cs.huji.ac.il

URL: http://www.cs.huji.ac.il/~feit/parsched.html


Workshop 8: All Day Monday

FIRST WORKSHOP ON BIOLOGICALLY INSPIRED SOLUTIONS TO PARALLEL PROCESSING PROBLEMS

Workshop Co-Chairs:

Albert Y. Zomaya, The University of Western Australia
Fikret Ercal, University of Missouri-Rolla
Stephan Olariu, Old Dominion University




Steering Committee:

Peter Fleming, University of Sheffield, United Kingdom
Frank Hsu, Fordham University
Viktor Prasanna, University of Southern California
Sartaj Sahni, University of Florida - Gainesville
Hartmut Schmeck, University of Karlsruhe, Germany
H.J. Siegel, Purdue University
Les Valiant, Harvard University

Program Committee:

Ishfaq Ahmad, Hong Kong University of Science and Technology
David Andrews, University of Arkansas
Prithviraj Banerjee, Northwestern University
Juergen Branke, University of Karlsruhe, Germany
Jehoshua (Shuki) Bruck, California Institute of Technology
Sajal Das, University of North Texas
Hesham El-Rewini, University of Nebraska at Omaha
Afonso Ferreira, CNRS, LIP-ENS, France
Ophir Frieder, Florida Tech.
Eileen Kraemer, Washington University in St. Louis
Mohan Kumar, Curtin University of Technology, Australia
Richard Lipton, Princeton University
John H. Reif, Duke University
P. Sadayappan, Ohio State University
Assaf Schuster, Technion, Israel
Franciszek Seredynski, Polish Academy of Sciences, Poland
Peter M.A. Sloot, University of Amsterdam, The Netherlands
Ivan Stojmenovic, Ottawa University, Canada
Weiping Zhu, University of Queensland, Australia

Techniques based on biological paradigms can provide efficient solutions to a wide variety of problems in parallel processing. A vast literature exists on biology-inspired approaches to solving an impressive array of problems and, more recently, a number of studies have reported on the success of such techniques for solving difficult problems in all key areas of parallel processing.

Rather remarkably, most biologically-based techniques are inherently parallel. Thus, solutions based on such methods can be conveniently implemented on parallel architectures.
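
As a small illustration of this point (not drawn from any workshop submission), the sketch below evaluates the fitness of every individual in a genetic-algorithm population independently, which is exactly the kind of loop that parallelizes naturally; the population size and toy fitness function are invented for illustration.

/* Sketch of why population-based methods parallelize naturally: each
 * individual's fitness is evaluated independently, here with a single
 * OpenMP work-sharing loop.  The fitness function is a toy. */
#include <stdio.h>
#include <stdlib.h>

#define POP   1024
#define GENES 32

/* Toy fitness: fraction of 1-bits in the chromosome. */
static double fitness(const int *genes)
{
    int ones = 0;
    for (int g = 0; g < GENES; g++)
        ones += genes[g];
    return (double)ones / GENES;
}

int main(void)
{
    static int pop[POP][GENES];
    static double fit[POP];

    for (int i = 0; i < POP; i++)          /* random initial population */
        for (int g = 0; g < GENES; g++)
            pop[i][g] = rand() % 2;

    /* Independent evaluations: the natural unit of parallelism. */
    #pragma omp parallel for
    for (int i = 0; i < POP; i++)
        fit[i] = fitness(pop[i]);

    double best = 0.0;
    for (int i = 0; i < POP; i++)
        if (fit[i] > best) best = fit[i];
    printf("best fitness after evaluation: %.3f\n", best);
    return 0;
}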

This workshop seeks to provide an opportunity for researchers to explore the connection between biologically-based techniques and the development of solutions to problems that arise in parallel processing. Topics of interest include, but are not limited to:

- Biologically-based methods (e.g. ant algorithms, genetic algorithms, cellular automata, DNA and molecular computing, neural networks) for solving parallel processing problems (scheduling, data organization and partitioning, communication and routing, VLSI layout, etc.)

- Other methods based on natural phenomena, such as simulated annealing and other artificial life techniques, applied to problems in parallel processing

- Parallel/distributed platforms for biologically-based computations

- Techniques for integrating conventional parallel and biologically-based paradigms

- Tools and algorithms for parallelizing biologically-based techniques

- Applications and case studies combining traditional parallel and distributed computing and biologically-based techniques

- Theoretical work related to solution optimality, convergence issues, and time/space complexities of parallel algorithms that employ biologically-based methods

All papers will be reviewed, and proceedings will be distributed at the workshop site. The papers from the workshop will also be published in the Springer-Verlag Lecture Notes in Computer Science series. A selected group of papers accepted for the workshop will be published as a special issue of the Future Generation Computer Systems journal, co-guest-edited by the workshop chairs.


Albert Y. Zomaya
Parallel Computing Research Laboratory
Department of Electrical and Electronic Engineering
The University of Western Australia
Nedlands, Perth
Western Australia 6907
Australia
Vox: +61-8-9380-3875
Fax: +61-8-9380-1088
Internet: zomaya@ee.uwa.edu.au


Fikret Ercal
Department of Computer Science
University of Missouri
Rolla, MO 65409-0350
USA
Vox: +1-573-341-4857
Fax: +1-573-341-4501
Internet: ercal@umr.edu


Stephan Olariu
Department of Computer Science
Old Dominion University
Norfolk, VA 23529-0162
USA
Vox: +1-757-683-4417
Fax: +1-757-683-4900
Internet: olariu@cs.odu.edu

URL: http://www.cs.umr.edu/~ercal/biosp3/BioSP3.html


Tutorial 1
1:30 PM - 5:30 PM

A TUTORIAL INTRODUCTION TO HIGH PERFORMANCE DATA MINING

Robert Grossman, Center for Data Mining, University of Illinois at Chicago, and Magnify, Inc.




Who Should Attend:

The goal of the tutorial is to provide researchers, practitioners, and advanced students with an introduction to data mining. The focus will be on algorithms, software tools, and system architectures appropriate for mining massive data sets using techniques from high performance computing. There will be several running illustrations of practical data mining, including examples from science, business, and computing.

Course Description:

Data mining is the automatic discovery of patterns, associations, changes, anomalies, and statistically significant structures and events in data. Traditional data analysis is assumption-driven in the sense that a hypothesis is formed and validated against the data. Data mining, in contrast, is discovery-driven in the sense that patterns are automatically extracted from the data.
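
As a toy illustration of the discovery-driven view (and not material from the tutorial itself), the sketch below scans a small set of invented market-basket transactions and reports the item pairs that co-occur frequently, the counting step at the heart of association-rule mining.

/* Toy "discovery driven" analysis: instead of testing a preformed
 * hypothesis, scan the transactions and let frequent item pairs emerge
 * from the counts.  The transaction data below is invented. */
#include <stdio.h>

#define NITEMS 4   /* items 0..3 */
#define NTRANS 5

int main(void)
{
    /* basket[t][i] == 1 if transaction t contains item i (hypothetical). */
    int basket[NTRANS][NITEMS] = {
        { 1, 1, 0, 1 },
        { 1, 1, 0, 0 },
        { 0, 1, 1, 0 },
        { 1, 1, 0, 1 },
        { 1, 0, 0, 1 }
    };
    int count[NITEMS][NITEMS] = { { 0 } };
    int min_support = 3;

    /* Count co-occurrences of every item pair across all transactions. */
    for (int t = 0; t < NTRANS; t++)
        for (int i = 0; i < NITEMS; i++)
            for (int j = i + 1; j < NITEMS; j++)
                if (basket[t][i] && basket[t][j])
                    count[i][j]++;

    /* Report the pairs that meet the support threshold. */
    for (int i = 0; i < NITEMS; i++)
        for (int j = i + 1; j < NITEMS; j++)
            if (count[i][j] >= min_support)
                printf("items (%d,%d) co-occur in %d of %d transactions\n",
                       i, j, count[i][j], NTRANS);
    return 0;
}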

Lecturer:

Robert Grossman is Director of the Center for Data Mining at the University of Illinois at Chicago and a principal of Magnify, Inc., a company which provides data mining software and services. He has published over sixty papers in refereed journals and proceedings on data intensive computing and data mining, high performance data management, scientific computing, symbolic and numeric computing, hybrid systems, and related areas; lectured extensively at scientific conferences; and organized several international conferences and workshops. He received a Ph.D. from Princeton in 1985 and a B.A. from Harvard in 1980.




TUESDAY, MARCH 31



Workshop 9: All Day Tuesday

WORKSHOP ON HIGH PERFORMANCE DATA MINING

Workshop Chairs:

Vipin Kumar, University of Minnesota, kumar@cs.umn.edu
Sanjay Ranka, University of Florida, ranka@cise.ufl.edu
Vineet Singh, Hitachi America, vsingh@hitachi.com




Program Committee:

P. Chan, Florida Institute of Technology
G. Cybenko, Dartmouth College
A. Grama, Purdue University
R. Grossman, University of Chicago
J. Han, Simon Fraser University
C.T. Ho, IBM Almaden Research Center
T. Johnsson, AT&T
G. Karypis, University of Minnesota
S. Kasif, University of Chicago
M. Kitsuregawa, University of Tokyo
R. Kohavi, Silicon Graphics
R. Musick, Lawrence Livermore National Laboratory
R. Ramakrishnan, University of Wisconsin
J.C. Shafer, IBM Almaden Research Center
J.P. Singh, Princeton University

The last decade has seen explosive growth in database technology and in the amount of data collected. Advances in data collection, the use of bar codes in commercial outlets, and the computerization of business transactions have flooded us with data. We have an unprecedented opportunity to analyze this data to extract more intelligent and useful information. Data mining is the efficient supervised or unsupervised discovery of interesting, useful, and previously unknown patterns from this data. Due to the huge size of the data and the amount of computation involved in data mining, parallel processing is an essential component for any successful large-scale data mining application. This workshop will provide a forum for presentation of recent results in parallel computation for data mining, including applications, algorithms, software, and systems. Of special interest will be (i) the use of techniques originally developed in the high-performance scientific computing domain that have been found useful in serial and parallel algorithms for data mining, as well as (ii) work highlighting the differences and similarities in the hardware requirements of data-intensive algorithms and traditional algorithms used in the scientific computing domain.

Topics of interest include:

- Classification and regression
- Decision trees
- Association rules
- Dependency derivation
- Clustering
- Deviation detection
- Similarity search
- Bayesian networks

Important Dates:

Paper Submission: Feb. 7, 1998
Author Notification: Feb. 28, 1998
Camera Ready Due: Mar. 20, 1998

Paper Submission:

The workshop will feature contributed papers and invited papers in an informal setting. To submit a paper for consideration, send six copies of the manuscript to one of the workshop chairs. Electronic submissions (postscript versions printable on 8.5 x 11 paper only) are strongly encouraged. To guarantee consideration, manuscripts must be received by Feb. 7, 1998, and must be no more than 5 pages excluding figures, tables, and references. A hard copy of the proceedings will be available at the workshop. A copy of the proceedings will also be available on the Web.

Workshop URL: http://www.cise.ufl.edu/~ranka/hpdm.html



Keynote Address


8:30 AM - 9:30 AM
What's So Different About Cluster Architectures?
David Culler, University of California, Berkeley



10:00 AM - 12:00 PM
SESSION 1
Communication
Chair: Sanguthevar Rajasekaran
University of Florida

Nearly Optimal Algorithms for Broadcast on d-Dimensional All-Port and Wormhole-Routed Torus
Jyh-Jong Tsay and Wen-Tsong Wang, National Chung Cheng University

Minimizing Total Communication Distance of a Broadcast on Mesh and Torus Networks
Songluan Cang and Jie Wu, Florida Atlantic University

Hiding Communication Latency in Data Parallel Applications
Vivek Garg and David E. Schimmel, Georgia Institute of Technology

Protocols for Non-Deterministic Communication over Synchronous Channels
Erik D. Demaine, University of Waterloo

Broadcast-Efficient Algorithms on the Coarse-Grain Broadcast Communication Model with Few Channels
Koji Nakano, Nagoya Institute of Technology, Stephan Olariu and James L. Schwing, Old Dominion University

Optimal All-to-Some Personalized Communication on Hypercubes
Y. Charlie Hu, Rice University

10:00 AM - 12:00 PM
SESSION 2
Compilers I
Chair: David Padua
University of Illinois at Urbana-Champaign

Compiler Optimization of Implicit Reductions for Distributed Memory Multiprocessors
Bo Lu and John Mellor-Crummey, Rice University

Local Enumeration Techniques for Sparse Algorithms
Gerardo Bandera, Pablo P. Trabado, and Emilio L. Zapata, University of Malaga, Campus of Teatinos

Optimizing Data Scheduling on Processor-In-Memory Arrays
Yi Tian, Edwin H.-M. Sha, Chantana Chantrapornchai, and Peter M. Kogge, University of Notre Dame

An Expression-Rewriting Framework to Generate Communication Sets for HPF Programs with Block-Cyclic Distribution
Gwan-Hwan Hwang and Jenq Kuen Lee, National Tsing-Hua University

A Generalized Framework for Global Communication Optimization
M. Kandemir, Syracuse University, P. Banerjee and A. Choudhary, Northwestern University, J. Ramanujam, Louisiana State University, N. Shenoy, Northwestern University

Evaluation of Compiler and Runtime Library Approaches for Supporting Parallel Regular Applications
Dhruva R. Chakrabarti, Northwestern University, Antonio Lain, Hewlett Packard Labs, Prithviraj Banerjee, Northwestern University

10:00 AM - 12:00 PM
SESSION 3
Mathematical Applications
Chair: William J. Feiereisen
NASA Ames Research Center

Preliminary Results from a Parallel MATLAB Compiler
Michael J. Quinn, Alexey Malishevsky, Nagajagadeswar Seelam, and Yan Zhao, Oregon State University

Jacobi Orderings for Multi-Port Hypercubes
Dolors Royo, Antonio Gonzalez, and Miguel Valero-Garcia, Universitat Politecnica de Catalunya

Automatic Differentiation for Message-Passing Parallel Programs
Paul Hovland and Christian Bischof, Argonne National Laboratory

Processor Lower Bound Formulas for Array Computations and Parametric Diophantine Systems
Peter Cappello and Omer Egecioglu, University of California at Santa Barbara

Analysis of a Class of Parallel Matrix Multiplication Algorithms
John Gunnels, Calvin Lin, Greg Morrow, and Robert van de Geijn, University of Texas at Austin

Caching-Efficient Multithreaded Fast Multiplication of Sparse Matrices
Peter D. Sulatycke and Kanad Ghose, State University of New York, Binghamton

1:30 PM - 3:30 PM
SESSION 4
Networks
Chair: Timothy Mark Pinkston
University of Southern California

Permutation Capability of Optical Multistage Interconnection Networks
Yuanyuan Yang, University of Vermont, Jianchao Wang, GTE Laboratories, Yi Pan, University of Dayton

HIPIQS: A High-Performance Switch Architecture Using Input Queuing
Rajeev Sivaram, Ohio State University, Craig B. Stunkel, IBM T.J. Watson Research Center, Dhabaleswar K. Panda, Ohio State University

On the Bisection Width and Expansion of Butterfly Networks
Claudson F. Bornstein, Carnegie Mellon University, Ami Litman, Technion, Bruce M. Maggs, Carnegie Mellon University, Ramesh K. Sitaraman, University of Massachusetts, Tal Yatzkar, Technion

Multiprocessor Architectures Using Multi-Hop Multi-OPS Lightwave Networks and Distributed Control
D. Coudert and A. Ferreira, LIP ENS Lyon, X. Munoz, UPC

Distributed, Dynamic Control of Circuit-Switched Banyan Networks
Chuck Salisbury and Rami Melhem, University of Pittsburgh

A Case for Aggregate Networks
Raymond R. Hoare, Purdue University

1:30 PM - 3:30 PM
SESSION 5
Compilers II
Chair: Chau-Wen Tseng
University of Maryland

An Enhanced Co-Scheduling Method Using Reduced MS-State Diagrams
R. Govindarajan, Supercomputer Education and Research Center and Indian Institute of Science, N.S.S. Narasimha Rao, Indian Institute of Science, E.R. Altman, IBM T.J. Watson Research Center, Guang R. Gao, University of Delaware

Predicated Software Pipelining Technique for Loops with Conditions
Dragan Milicev and Zoran Jovanovic, University of Belgrade

The Generalized Lambda Test
Weng-Long Chang, Chih-Ping Chu, and Jesse Wu, National Cheng Kung University

Experimental Study of Compiler Techniques for Scalable Shared Memory Machines
Yunheung Paek, New Jersey Institute of Technology, David A. Padua, University of Illinois at Urbana-Champaign

Register-Sensitive Software Pipelining
Amod K. Dani, Indian Institute of Science, R. Govindarajan, Supercomputer Education and Research Center and Indian Institute of Science

Analyzing the Individual/Combined Effects of Speculative and Guarded Execution on a Superscalar Architecture
M. Srinivas, Silicon Graphics Inc., Alexandru Nicolau, University of California at Irvine

1:30 PM - 3:30 PM
SESSION 6
Signal and Image Processing
Chair: Dave Curkendall
NASA Jet Propulsion Laboratory

NOW Based Parallel Reconstruction of Functional Images
Frank Munz, T. Stephan, U. Maier, T. Ludwig, S. Ziegler, S. Nekolla, P. Bartenstein, M. Schwaiger, and A. Bode, Nuklearmedizinische Klinik und Poliklinik des Klinikums rechts der Isar

An Improved Output-size Sensitive Parallel Algorithm for Hidden-Surface Removal for Terrains
Neelima Gupta and Sandeep Sen, Indian Institute of Technology, New Delhi

Design, Implementation and Evaluation of Parallel Pipelined STAP on Parallel Computers
Alok Choudhary, Northwestern University, Wei-keng Liao, Donald Weiner, and Pramod Varshney, Syracuse University, Richard Linderman and Mark Linderman, Air Force Research Laboratory

The VEGA Moderately Parallel MIMD, Moderately Parallel SIMD, Architecture for High Performance Array Signal Processing
Mikael Taveniku, Ericsson Microwave Systems AB and Chalmers University of Technology, Anders Ahlander, Ericsson Microwave Systems AB and Halmstad University, Magnus Jonsson, Halmstad University, Bertil Svensson, Halmstad University and Chalmers University of Technology

Medical Image Processing and Visualization on Heterogeneous Clusters of Symmetric Multiprocessors Using MPI and POSIX Threads
Christoph Gieß, Achim Mayer, Harald Evers, and Hans-Peter Meinzer, Deutsches Krebsforschungszentrum

Quantitative Code Analysis of Scientific Systolic Programs: DSP Vs. Matrix Algorithms
R. Sernec, BIA D.o.o., M. Zajc and J.F. Tasic, University of Ljubljana

4:00 PM - 6:00 PM
SESSION 7
Collective Communication
Chair: Dhabaleswar K. Panda
Ohio State University

Tree-Based Multicasting in Wormhole-Routed Irregular Topologies
Ran Libeskind-Hadas, Dominic Mazzoni, and Ranjith Rajagopalan, Harvey Mudd College

NoWait-RPC: Extending ONC RPC to a Fully Compatible Message Passing System
Thomas Hopfner, Technische Universitat Munchen

Efficient Barrier Synchronization Mechanism for the BSP Model on Message-Passing Architectures
Jin-Soo Kim, Soonhoi Ha, and Chu Shik Jhon, Seoul National University

Performance and Experience with LAPI -- a New High-Performance Communication Library for the IBM RS/6000 SP
Gautam Shah, IBM Power Parallel Systems, Jarek Nieplocha, Pacific Northwest National Laboratory, Jamshed Mirza and Chulho Kim, IBM Power Parallel Systems, Robert Harrison, Pacific Northwest National Laboratory, Rama K. Govindaraju, Kevin Gildea, Paul DiNicola, and Carl Bender, IBM Power Parallel Systems

Total-Exchange on Wormhole k-ary n-cubes with Adaptive Routing
Fabrizio Petrini, International Computer Science Institute

Managing Concurrent Access for Shared Memory Active Messages
Steven S. Lumetta and David E. Culler, University of California at Berkeley

4:00 PM - 6:00 PM
SESSION 8
Memory Hierarchy and I/O
Chair: Behrooz Parhami
University of California at Santa Barbara

Design and Implementation of a Parallel I/O Runtime System for Irregular Applications
Jaechun No, Syracuse University, Sung-soon Park, Anyang University, Jesus Carretero, Universidad Politecnica de Madrid, Alok Choudhary, Northwestern University, Pang Chen, Sandia National Laboratory

Using PI/OT to Support Complex Parallel I/O
Ian Parsons, Jonathan Schaeffer, Duane Szafron, and Ron Unrau, University of Alberta

Cache Optimization for Multimedia Compilation on Embedded Processors for Low Power
C. Kulkarni, IMEC, F. Catthoor and H. De Man, IMEC and Katholieke Universiteit Leuven

Memory Hierarchy Management for Iterative Graph Structures
Ibraheem Al-Furaih, Syracuse University, Sanjay Ranka, University of Florida

High-Performance External Computations Using User-Controllable I/O
Jang Sun Lee, A.I. Section, ETRI, Sunghoon Ko, Syracuse University, Sanjay Ranka, University of Florida, Byung Eui Min, A.I. Section, ETRI

Pin-down Cache: A Virtual Memory Management Technique for Zero-copy Communication
Hiroshi Tezuka, Francis O'Carroll, Atsushi Hori, and Yutaka Ishikawa, Real World Computing Partnership

4:00 PM - 6:00 PM
SESSION 9
Algorithms I
Chair: Stephan Olariu
Old Dominion University

Synthesis of a Systolic Array Genetic Algorithm
G.M. Megson and I.M. Bland, University of Reading

Vector Reduction and Prefix Computation on Coarse-Grained, Distributed-Memory Parallel Machines
Seungjo Bae and Dongmin Kim, Syracuse University, Sanjay Ranka, University of Florida

Solving the Maximum Clique Problem Using PUBB
Yuji Shinano, Science University of Tokyo, Tetsuya Fujie, Tokyo Institute of Technology, Yoshiko Ikebe and Ryuichi Hirabayashi, Science University of Tokyo

A Scalable VLSI Architecture for Binary Prefix Sums
R. Lin, SUNY Geneseo, S. Olariu, Old Dominion University, M.C. Pinotti, I.E.I., C.N.R., K. Nakano, Nagoya Institute of Technology, J.L. Schwing, Old Dominion University, A.Y. Zomaya, University of Western Australia

Emulating Direct Products by Index-Shuffle Graphs
Bojana Obrenic, Queens College and Graduate Center of CUNY

A Comparative Study of Five Parallel Genetic Algorithms Using The Traveling Salesman Problem
Lee Wang, Anthony A. Maciejewski, Howard Jay Siegel, Purdue University, Vwani P. Roychowdhury, UCLA



WEDNESDAY, APRIL 1



Keynote Address

8:30 AM - 9:30 AM
Parallel Data Access and Parallel Execution in a World of CyberBricks
Jim Gray, Microsoft Research


10:00 AM - 12:00 noon
SESSION 10
Routing
Chair: David Nassimi
New Jersey Institute of Technology

A New Self-Routing Multicast Network
Yuanyuan Yang, University of Vermont, Jianchao Wang, GTE Laboratories

Optimal Contention-Free Unicast-Based Multicasting in Switch-Based Networks of Workstations
Ran Libeskind-Hadas, Dominic Mazzoni, and Ranjith Rajagopalan, Harvey Mudd College

Multicast Broadcasting in Large WDM Networks
Weifa Liang, University of Queensland, Hong Shen, Griffith University

Optimally Locating a Structured Facility of a Specified Length in a Weighted Tree Network
Shan-Chyun Ku and Biing-Feng Wang, National Tsing Hua University

Deterministic Routing of h-relations on the Multibutterfly
Andrea Pietracaprina, Universita di Padova

An Efficient Counting Network
Costas Busch, Brown University, Marios Mavronicolas, University of Cyprus

10:00 AM - 12:00 noon
SESSION 11
Operating Systems and Scheduling
Chair: Tao Yang
University of California at Santa Barbara

Partitioned Schedules for Clustered VLIW Architectures
Marcio Merino Fernandes, University of Edinburgh, Josep Llosa, Universitat Politecnica de Catalunya, Nigel Topham, University of Edinburgh

Dynamic Processor Allocation with the Solaris Operating System
Kelvin K. Yue, Sun Microsystems Inc., David J. Lilja, University of Minnesota

Thread-based vs. Event-based Implementation of a Group Communication Service
Shivakant Mishra and Rongguang Yang, University of Wyoming

Performance Sensitivity of Space-Sharing Processor Scheduling in Distributed-Memory Multicomputers
Sivarama P. Dandamudi and Hai Yu, Carleton University

Efficient Fine-Grain Thread Migration with Active Threads
Boris Weissman and Benedict Gomes, University of California at Berkeley and International Computer Science Institute, Jurgen W. Quittek, International Computer Science Institute, Michael Holtkamp, Technical University of Hamburg-Harburg

Clustering and Reassignment-Based Mapping Strategy for Message-Passing Architectures
M.A. Senar, A. Ripoll, A. Cortes, and E. Luque, Universitat Autonoma de Barcelona

10:00 AM - 12:00 noon
SESSION 12
Algorithms II
Chair: Afonso Ferreira
CNRS, LIP-ENS

Asymptotically Optimal Randomized Tree Embedding in Static Networks
Keqin Li, State University of New York, New Paltz

Resource Placements in 2D Tori
Bader Almohammad and Bella Bose, Oregon State University

An O((log log n)^2) Time Convex Hull Algorithm on Reconfigurable Meshes
Tatsuya Hayashi, Koji Nakano, Nagoya Institute of Technology, Stephan Olariu, Old Dominion University

Toward a Universal Mapping Algorithm for Accessing Trees in Parallel Memory Systems
Vincenzo Auletta, Universita di Salerno, Sajal K. Das, University of North Texas, Amelia De Vivo, Universita di Salerno, M. Cristina Pinotti, I.E.I., Consiglio Nazionale delle Ricerche, Vittorio Scarano, Universita di Salerno

Sharing Random Bits with No Process Coordination
Marius Zimand, Georgia Southwestern State University

Lower Bounds on Communication Loads and Optimal Placements in Torus Networks
M. Cemil Azizoglu and Omer Egecioglu, University of California at Santa Barbara

1:30 PM - 3:30 PM
SESSION 13
Multiprocessor Performance Evaluation
Chair: Paul Messina
California Institute of Technology

Impact of Switch Design on the Application Performance of Cache-Coherent Multiprocessors
Laxmi N. Bhuyan, H. Wang, and R. Iyer, Texas A&M University, A. Kumar, Intel Corporation

Parallel Tree Building on a Range of Shared Address Space Multiprocessors: Algorithms and Application Performance
Hongzhang Shan and Jaswinder Pal Singh, Princeton University

Configuration Independent Analysis for Characterizing Shared-Memory Applications
Gheith A. Abandah and Edward S. Davidson, University of Michigan

Experimental Validation of Parallel Computation Models on the Intel Paragon
Ben H.H. Juurlink, University of Paderborn

Comparing the Optimal Performance of Different MIMD Multiprocessor Architectures
Lars Lundberg and Hakan Lennerstad, University of Karlskrona/Ronneby

The Design of COMPASS: An Execution Driven Simulator for Commercial Applications Running on Shared Memory Multiprocessors
Ashwini K. Nanda, IBM T.J. Watson Research Center, Yiming Hu, University of Rhode Island, Moriyoshi Ohara, IBM Tokyo Research Lab, Caroline D. Benveniste, Mark E. Giampapa, and Maged Michael, IBM T.J. Watson Research Center

1:30 PM - 3:30 PM
SESSION 14
Scheduling
Chair: Michael A. Palis
Rutgers University

An Efficient RMS Admission Control and its Application To Multiprocessor Scheduling
Sylvain Lauzac, Rami Melhem, and Daniel Mosse, University of Pittsburgh

Guidelines for Data-Parallel Cycle-Stealing in Networks of Workstations
Arnold L. Rosenberg, University of Massachusetts

Low Memory Cost Dynamic Scheduling of Large Coarse Grain Task Graphs
Michel Cosnard, LORIA-INRIA Lorraine, Emmanuel Jeannot and Laurence Rougeot, LIP, ENS de Lyon

Benchmarking the Task Graph Scheduling Algorithms
Yu-Kwong Kwok and Ishfaq Ahmad, Hong Kong University of Science and Technology

A Performance Evaluation of CP List Scheduling Heuristics for Communication Intensive Task Graphs
Benjamin S. Macey and Albert Y. Zomaya, University of Western Australia

Utilization and Predictability in Scheduling the IBM SP2 with Backfilling
Dror G. Feitelson and Ahuva Weil, Hebrew University of Jerusalem

1:30 PM - 3:30 PM
SESSION 15
Databases and Sorting
Chair: Reagan Moore
San Diego Supercomputer Center

High Performance OLAP and Data Mining on Parallel Computers
Sanjay Goil and Alok Choudhary, Northwestern University

An Efficient Parallel Algorithm for High Dimensional Similarity Join
Khaled Alsabti, Syracuse University, Sanjay Ranka, University of Florida, Vineet Singh, Hitachi America, Ltd.

Sorting on Clusters of SMP's
David R. Helman and Joseph Ja'Ja', University of Maryland

An AT^2 Optimal Mapping of Sorting onto the Mesh Connected Array without Comparators
Ju-wook Jang, Sogang University

ScalParC: A New Scalable and Efficient Parallel Classification Algorithm for Mining Large Datasets
Mahesh V. Joshi, George Karypis, and Vipin Kumar, University of Minnesota

Improved Concurrency Control Techniques for Multi-dimensional Index Structures
K.V. Ravi Kanth, F. David Serena, and Ambuj K. Singh, University of California at Santa Barbara



PANEL DISCUSSION

4:00 PM - 6:00 PM
Data Intensive vs. Scientific Computing:
Will the Twain Meet for Parallel Processing?


Much of the recent research in practical parallel computing appears to be driven by the demands of the scientific and engineering domains. Grand Challenge Applications and, more recently, the DOE's ASCI project have provided a great impetus to the development of very high performance parallel architectures, programming environments, and applications for performing large-scale, floating-point-intensive simulations of physical phenomena.
But the market for such scientific applications appears small and rather limited. In contrast, the market for data intensive applications (e.g., data mining and decision support) is growing very rapidly, and such applications can potentially become the biggest consumers of parallel computing.

The panel will address this dichotomy between traditional scientific computing and data intensive applications in the context of parallel processing.
What are the algorithmic features that make data intensive applications different from scientific applications?
What demands do these features place on the architectures and environments of parallel computing?
Are the desired features of parallel architectures and environments for data intensive applications fundamentally different from those used for traditional scientific computing applications?
Are these two sets of applications impacted differently by the deep memory hierarchies in current and future generation architectures (serial and parallel)?


Moderator:

Vipin Kumar, University of Minnesota

Panelists:

Tilak Agerwala, IBM
David Bailey, NASA
Jim Gray, Microsoft
Olaf Lubeck, Los Alamos National Lab
Bob Lucas, ARPA
Tom Sterling, Caltech
Rick Stevens, Argonne National Lab
Hans Zima, University of Vienna



INDUSTRIAL TRACK: INVITED VENDOR PRESENTATIONS
Industrial Track Chair:
John K. Antonio, Texas Tech University



10:00 AM - 12:00 noon
INDUSTRIAL TRACK
SESSION-I
Environments, Tools, and Evaluation Methods
Chair: Viorel Morariu
Concurrent Technologies Corporation

Pacific-Sierra Research Corporation
Topic: DEEP: A Development Environment for Parallel Programs
Authors: Brian Brode, Vice President, Chris Warber, Senior Analyst, and James Bonang, Software Engineer

Integrated Sensors, Inc.
Topic: Rapid Development of Real-Time Systems Using RTExpress
Authors: Richard Besler, Senior Software Engineer, Diane Brassaw, Senior Software Engineer, Milissa Benincasa, Senior Software Engineer, and Ralph L. Kohler, Jr., Program Manager (Air Force Research Laboratory)

Northrop Grumman
Topic: Evaluating ASIC, DSP, and RISC Architectures for Embedded Applications
Author: Marc Campbell, Technical Lead, High Performance Computing

Tandem Computers, a Compaq Company
Topic: The Effect of the Router Arbitration Policy on ServerNet(tm)'s Scalability
Authors: Vladimir Shurbanov, Research Assistant (Boston University), Dimiter R. Avresky, Associate Professor (Boston University), Robert Horst, Technical Director

1:30 PM - 3:00 PM
INDUSTRIAL TRACK
SESSION-II
Reconfigurable Systems
Chair: Ralph L. Kohler, Jr.
Air Force Research Laboratory

Annapolis Micro Systems, Inc.
Topic: WILDFIRE(tm) Heterogeneous Adaptive Parallel Processing System
Authors: Bradley K. Fross, Senior WILDFIRE Application Engineer, Dennis M. Hawver, Principal Design Engineer, and James B. Peterson, Principal Design Engineer

TSI TelSys, Inc.
Topic: A High Performance Reconfigurable Processing Subsystem
Author: Don Davis, Manager, Strategic Engineering

Virtual Computer Corporation
Topic: Seeking Extreme Performance through Hardware Software Co-Design
Authors: Steve Casselman, President, and John Schewel, Vice President of Sales & Marketing

THURSDAY, APRIL 2



Keynote Address

8:30 AM - 9:30 AM
The Future of Scalable Systems
Greg Papadopoulos, Sun Microsystems


10:00 AM - 12:00 noon
SESSION 16
Performance Prediction and Evaluation
Chair: Ashwini K. Nanda
IBM T.J. Watson Research Center

A Clustered Approach to Multithreaded Processors
Venkata Krishnan and Josep Torrellas, University of Illinois at Urbana-Champaign

C++ Expression Templates Performance Issues in Scientific Computing
Federico Bassetti, New Mexico State University and Scientific Computing Group CIC-19, Kei Davis and Dan Quinlan, Scientific Computing Group CIC-19

Aggressive Dynamic Execution of Multimedia Kernel Traces
Benjamin Bishop, Robert Owens, and Mary Jane Irwin, Pennsylvania State University

Performance Prediction in Production Environments
Jennifer M. Schopf and Francine Berman, University of California, San Diego

Predicting the Running Time of Parallel Programs by Simulation
Radu Rugina and Klaus E. Schauser, University of California, Santa Barbara

10:00 AM - 12:00 noon
SESSION 17
Software Distributed Shared Memory
Chair: Prithviraj Banerjee
Northwestern University

Compile-time Synchronization Optimizations for Software DSMs
Hwansoo Han and Chau-Wen Tseng, University of Maryland

An Efficient Logging Scheme for Lazy Release Consistent Distributed Shared Memory System
Taesoon Park, Sejong University, Heon Y. Yeom, Seoul National University

Update Protocols and Iterative Scientific Applications
Pete Keleher, University of Maryland

Java Consistency = Causality + Coherency: Non-Operational Characterizations of the Java Memory Behavior
Alex Gontmakher and Assaf Schuster, Technion

Locality and Performance of Page- and Object-Based DSMs
Bryan Buck and Pete Keleher, University of Maryland

Optimistic Synchronization of Mixed-Mode Simulators
Peter Frey, Radharamanan Radhakrishnan, Harold W. Carter, and Philip A. Wilsey, University of Cincinnati

10:00 AM - 12:00 noon
SESSION 18
Scientific Simulation
Chair: John Gustafson
Ames Laboratory

Airshed Pollution Modeling: A Case Study in Application Development in an HPF Environment
Jaspal Subhlok, Peter Steenkiste, James Stichnoth, and Peter Lieu, Carnegie Mellon University

Design of a FEM Computation Engine for Real-Time Laparoscopic Surgery Simulation
Alex Rhomberg, Rolf Enzler, Markus Thaler, and Gerhard Troester, Eidgenossische Technische Hochschule

SIMD and Mixed-Mode Implementations of a Visual Tracking Algorithm
Mark B. Kulaczewski, Universitat Hannover, Howard Jay Siegel, Purdue University

The Implicit Pipeline Method
John B. Pormann, John A. Board, Jr., and Donald J. Rose, Duke University

Rendering Computer Animations on a Network of Workstations
Timothy A. Davis and Edward W. Davis, North Carolina State University

1:30 PM - 3:30 PM
SESSION 19
Fault Tolerance
Chair: Craig B. Stunkel
IBM T.J. Watson Research Center

Hyper-Butterfly Network: A Scalable Optimally Fault Tolerant Architecture
Wei Shi and Pradip K. Srimani, Colorado State University

Scheduling Algorithms Exploiting Spare Capacity and Tasks' Laxities for Fault Detection and Location in Real-time Multiprocessor Systems
K. Mahesh, G. Manimaran, and C. Siva Ram Murthy, Indian Institute of Technology, Arun K. Somani, University of Washington

The Robust-Algorithm Approach to Fault Tolerance on Processor Arrays: Fault Models, Fault Diameter, and Basic Algorithms
Behrooz Parhami and Chi-Hsiang Yeh, University of California at Santa Barbara

Fault-Tolerant Switched Local Area Networks
Paul LeMahieu, Vasken Bohossian, and Jehoshua Bruck, California Institute of Technology

1:30 PM - 3:30 PM
SESSION 20
Performance and Debugging Tools
Chair: Robert Ferraro
NASA Jet Propulsion Laboratory

Trace-Driven Debugging of Message Passing Programs
Michael Frumkin, Robert Hood, and Louis Lopez, NASA Ames Research Center

Predicate Control for Active Debugging of Distributed Programs
Ashis Tarafdar and Vijay K. Garg, University of Texas at Austin

VPPB - A Visualization and Performance Prediction Tool for Multithreaded Solaris Programs
Magnus Broberg, Lars Lundberg, and Hakan Grahn, University of Karlskrona/Ronneby

Parallel Performance Visualization Using Moments of Utilization Data
T.J. Godin, Michael J. Quinn, and C.M. Pancake, Oregon State University

1:30 PM - 3:30 PM
SESSION 21
Distributed Systems
Chair: Debra Hensgen
Naval Postgraduate School

Optimizing Parallel Applications for Wide-Area Clusters
Henri E. Bal, Aske Plaat, Mirjam G. Bakker, Peter Dozy, and Rutger F.H. Hofman, Vrije Universiteit

Prioritized Token-Based Mutual Exclusion for Distributed Systems
Frank Mueller, Humboldt-Universitat zu Berlin

Adaptive Quality Equalizing: High-Performance Load Balancing for Parallel Branch-and-Bound Across Applications and Computing Systems
Nihar R. Mahapatra, State University of New York at Buffalo, Shantanu Dutt, University of Illinois at Chicago

Data Collection and Restoration for Heterogeneous Network Process Migration
Kasidit Chanchio and Xian-He Sun, Louisiana State University




FRIDAY, APRIL 3



Workshop 10: All Day Friday

3rd WORKSHOP ON RANDOMIZED PARALLEL COMPUTING

Workshop Chairs:
Panos Pardalos
Sanguthevar Rajasekaran
University of Florida, Gainesville




Program Committee:

Pankaj K. Agarwal, Duke U
Susanne Albers, Max-Planck Institute
Sandeep N. Bhatt, Bellcore
Frank Hsu, Fordham U
Oscar Ibarra, U of California, SB
Tom Leighton, MIT
Bruce Maggs, CMU
Michael A. Palis, Rutgers U
Panos Pardalos, U of Florida
Greg Plaxton, U of Texas
Sanguthevar Rajasekaran, U of Florida
Abhiram Ranade, IIT Bombay
Sartaj Sahni, U of Florida
Paul Spirakis, U of Patras

Randomization has played a vital role in the domains of both sequential and parallel computing in the past two decades. This workshop is a forum for bringing together both theoreticians and practitioners who employ randomized techniques in parallel computing. Topics include but are not limited to:

- Network algorithms
- PRAM algorithms
- Architectures
- I/O systems
- Scheduling
- Network fault tolerance
- Reconfigurable networks
- Optical networks
- Various applications
- Programming models and languages
- Implementation experience

Papers of an experimental nature (describing implementation results) are especially sought. Authors are invited to submit previously unpublished original papers (that will not be submitted elsewhere) reflecting their current research results. All submitted papers will be refereed for quality and originality.

Accepted papers will appear in the workshop proceedings to be published by Springer-Verlag. We expect to have many internationally renowned invited speakers. Invited papers and selected contributed papers will appear in a book to be published by Kluwer Academic Publishers. Confirmed invited speakers are: Danny Krizanc, Jean-Claude Latombe, S. Muthukrishnan, Lata Narayanan, Stephan Olariu, Rajeev Raman, Bala Ravikumar, Assaf Schuster, Torsten Suel, Peter J. Varman, and David Wei.

Contact email address: raj@cise.ufl.edu
URL: http://www.cise.ufl.edu/~raj/WRPC98.html


Workshop 11: All Day Friday

3rd INTERNATIONAL WORKSHOP ON EMBEDDED HPC SYSTEMS AND APPLICATIONS

Workshop Co-Chairs:
Devesh Bhatt, Honeywell Technology Center, USA (bhatt@htc.honeywell.com)
Viktor Prasanna, Univ. of Southern California, USA (prasanna@usc.edu)




Program Committee:

Ashok Agrawala, Univ. of Maryland, USA
Bob Bernecky, NUWC, USA
Hakon O. Bugge, Scali Computer, Norway
Terry Fountain, University College London, UK
Richard Games, MITRE, USA
Farnam Jahanian, Univ. of Michigan, USA
Jeff Koller, USC/Information Sciences Institute, USA
Mark Linderman, USAF Rome Laboratory, USA
Craig Lund, Mercury Computer Systems, Inc., USA
David Martinez, MIT Lincoln Laboratory, USA
Stephen Rhodes, Advanced Systems Architectures Ltd., UK
Philip Sementilli, Hughes Missile Systems Co., USA
Anthony Skjellum, Mississippi State Univ., USA
Henk Spaanenburg, Lockheed Sanders, USA
Lothar Thiele, Swiss Federal Institute of Technology, Switzerland
Chip Weems, Univ. of Massachusetts, USA
Sudhakar Yalamanchili, Georgia Tech., USA

Advisory Committee:

Keith Bromley, NRaD, USA
Dieter Hammer, Eindhoven Univ. of Technology, The Netherlands
Jose Munoz, DARPA/Information Technology Office, USA
Clayton Stewart, SAIC, USA
Lonnie Welch, Univ. of Texas at Arlington, USA


The International Workshop on Embedded HPC Systems and Applications (EHPC'98) is a forum for the presentation and discussion of approaches, research findings, and experiences in the application of High Performance Computing (HPC) technology to embedded systems. Of interest are both the development of relevant technology (e.g., hardware, middleware, tools) and the embedded HPC applications built using such technology.

We hope to bring together industry, academia, and government researchers/users to explore the special needs and issues in applying HPC technologies to defense and commercial applications.

Topics of Interest:

- Algorithms and applications such as radar signal processing, surveillance, automated target recognition
- Programming environments and development tools
- Performance modeling/simulation, partitioning/mapping and architecture trade-offs
- Operating systems and middleware services, addressing real-time scheduling, fault-tolerance, and resource management
- Special-purpose architectures

EHPC'98 will feature technical papers, presentations, and an open discussion session. Proceedings will be published in a volume of Lecture Notes in Computer Science (LNCS) by Springer-Verlag.

For further information, please contact:

Devesh Bhatt
Honeywell Technology Center
3660 Technology Drive
Minneapolis, MN 55418, USA
Vox: (612) 951-7316
Internet: bhatt@htc.honeywell.com

Workshop 12: All Day Friday

2nd WORKSHOP ON PARALLEL PROCESSING AND MULTIMEDIA

Workshop Chair:
Argy Krikelis, Aspex Microsystems Ltd., UK




Program Committee:

Edward J. Delp, Purdue University, USA
Martin Goebel, GMD, Germany
Divyesh Jadav, IBM Research Center, Almaden, USA
Argy Krikelis, Aspex Microsystems Ltd., UK
Tosiyasu L. Kunii, The University of Aizu, Japan
Vasily Moshnyaga, Kyoto University, Japan
Eythymios D. Providas, University of Thessaly, Greece

In recent years multimedia technology has emerged as a key technology, mainly because of its ability to represent disparate forms of information as a bit-stream. This enables everything from text to video and sound to be stored, processed, and delivered in digital form. Much of the current research effort has emphasized the delivery of data as the central issue in multimedia technology. However, the creation, processing, and management of multimedia forms are the issues most likely to dominate scientific interest in the long run. The focus of this activity will be how multimedia technology deals with information, which is in general task-dependent and is extracted from data in a particular context by exercising knowledge. The desire to deal with information from forms such as video, text, and sound will result in a data explosion. This requirement to store, process, and manage large data sets naturally leads to the consideration of programmable parallel processing systems as strong candidates for supporting and enabling multimedia technology.

The workshop aims to act as a platform for topics related, but not limited, to:

- Parallel architectures for multimedia
- Parallel multimedia computing servers
- Mapping multimedia applications to parallel architectures
- System interfaces and programming tools to support multimedia applications on parallel processing systems
- Multimedia content creation, processing, and management using parallel architectures
- Parallel processing architectures of multimedia set-top boxes
- Multimedia agent technology and parallel processing
- `Proof of concept' implementations and case studies

The workshop proceedings will be published in the Springer-Verlag Lecture Notes in Computer Science series, in conjunction with other IPPS/SPDP 1998 workshops.

Workshop Chair
Argy Krikelis
Aspex Microsystems Ltd.
Brunel University
Uxbridge, UB8 3PH
United Kingdom
Vox: +44 1895 203184
Fax: +44 1895 203185
Internet: Argy.Krikelis@aspex.co.uk

Workshop 13: All Day Friday

2nd WORKSHOP ON SOLVING COMBINATORIAL OPTIMIZATION PROBLEMS IN PARALLEL

Workshop Chair:
Jens Clausen, Technical University of Denmark




Program Committee:

J. Clausen, Copenhagen (Chair)
R. Correa, Rio de Janeiro
A. de Bruin, Rotterdam
N. Deo, Florida
A. Ferreira, Sophia Antipolis
M. Gengler, Lyon
A. Grama, Purdue
G. Kindervater, Rotterdam
B. le Cun, Versailles
R. Lueling, Paderborn
G. Megson, Reading
S. Migdalas, Linkoping
J. Nievergelt, Zurich
P. Pardalos, Florida
M. Resende, AT&T
J. Rolim, Geneva
S. Tschoeke, Paderborn
A. Zomaya, Western Australia

Organizing Committee:

P. Pardalos, Florida (Chair)
J. Clausen, Copenhagen

The solution of optimization problems in real-world applications usually involves an enormous amount of work, in which the use of parallel computers may be of great value. Parallel computing makes it possible to solve problems faster, to make large-sized problems tractable, and to base the development of solution methods, even for large-scale problems, on an interplay between theory and experimentation.

The workshop aims to bring together experts in the field of parallel combinatorial computing. It will address both exact and approximate methods for scientific and practical hard optimization problems.

The program will consist of two keynote lectures and a number of short contributed presentations. The keynote lectures are:

N. Deo, Center for Parallel Computation, University of Central Florida, USA:
"Computing Constrained Minimum Spanning Trees in Parallel"
and
J. Nievergelt, Department of Computer Science, ETH Zurich, Switzerland:
"Exhaustive Search and Combinatorial Optimization: Exploring the Power of Parallel Computing"

Contributions reporting research on combinatorial optimization in parallel are solicited. The following are topics of interest (list not exclusive):

- Applications
- Approximation algorithms
- Branch and bound
- Continuous optimization
- Dynamic programming
- Financial applications
- Graph partitioning
- Libraries
- Load balancing
- Metaheuristics
- Network design
- Quadratic assignment problems
- Randomized algorithms
- Scheduling
- Tools
- Vehicle routing problems

Springer-Verlag will publish a volume in the LNCS series containing proceedings from a number of the workshops held in connection with IPPS/SPDP. In addition to full papers to be considered for the proceedings, one-page abstracts of talks are also welcome.

For further information please contact:

Jens Clausen
Department of Mathematical Modelling
Technical University of Denmark
DK 2800 Lyngby
Denmark
Vox: +45 45 25 33 87
Fax: +45 45 88 26 73
Internet: jc@imm.dtu.dk


Workshop 14: All Day Friday

WORKSHOP ON INTERCONNECTION NETWORKS AND COMMUNICATION ALGORITHMS

Workshop Chair:
David Nassimi, New Jersey Institute of Technology




Program Committee:

D.P. Agrawal, North Carolina State University
R. Boppana, Univ. of Texas, San Antonio
C.T. Ho, IBM, Almaden Research Center
O. Ibarra, Univ. of California, Santa Barbara
S. Latifi, University of Nevada
F.T. Leighton, MIT
P.K. McKinley, Michigan State University
D. Nassimi, NJIT
S. Olariu, Old Dominion University
Y. Oruc, University of Maryland
G. Plaxton, University of Texas, Austin
V. Prasanna, University of Southern California
C.S. Raghavendra, The Aerospace Corporation
S. Rajasekaran, University of Florida
S. Sahni, University of Florida
I. Scherson, University of California, Irvine
A. Schuster, Technion, Israel
A. Sohn, NJIT
T. Suel, Bell Laboratories, Murray Hill
S. Ziavras, NJIT

This workshop focuses on interconnection networks, routing algorithms, communication paradigms, and algorithmic studies of parallel computers with special emphasis on communication complexity. Recent communication models such as LogP and BSP, and their relations to the underlying architectures, are also of interest. The topics of interest include:

- Interconnection networks
- Reconfigurable architectures
- Optical interconnects
- Parallel sorting
- Sorting networks
- Permutation algorithms
- Broadcasting
- Routing algorithms
- Communication models and paradigms
- Parallel algorithms
- Communication complexity

For further information, please contact:

David Nassimi
CIS Department
New Jersey Institute of Technology
Newark, New Jersey 07102
Internet: nassimi@cis.njit.edu

URL: http://WWW.cis.njit.edu/netcomm


Workshop 15: All Day Friday

1st WORKSHOP ON PERSONAL COMPUTER BASED NETWORKS OF WORKSTATIONS

Workshop Co-Chairs:
Giovanni Chiola, DISI, University of Genoa, Italy
Gianni Conte, University of Parma, Italy




Program Committee:

- A. Barak, CSI, Hebrew University, Jerusalem
- G. Chiola, DISI, University of Genoa
- T.-C. Chiueh, CS Dept., SUNY S.B.
- G. Conte, CE, University of Parma
- H.G. Dietz, ECE, Purdue University
- W. Gentzsch, GENIAS Software GmbH
- A. Greiner, LIP6, University of Paris 6
- G. Iannello, DIS, University of Napoli
- K. Li, CS Dept, Princeton University
- L.V. Mancini, DSI, Uni. of Roma "La Sapienza"
- T.G. Mattson, Intel Microcomp. Research Lab.
- W. Rehm, Informatik, T.U. Chemnitz
- P. Rossi, ENEA, Bologna
- C. Szyperski, Queensland University of Tech.
- D. Tavangarian, Informatik, University of Rostock
- B. Tourancheau, LHPC/ENS, University of Lyon

Networks of Workstations (NOWs) composed of fast personal computers are becoming more and more attractive as cheap and efficient platforms for distributed and parallel applications. The main drawback of a standard NOW is the poor performance of the standard inter-process communication mechanisms based on RPC, sockets, TCP/IP, and Ethernet. These standard communication mechanisms perform poorly in terms of both throughput and message latency.

Recently, a few prototypes developed around the world have proved that, by revisiting the implementation of the communication layer of a standard operating system kernel, a low-cost hardware platform composed only of commodity components can scale up to a few tens of processing nodes and deliver communication and computation performance exceeding that of conventional high-cost parallel platforms.

Despite the importance of this breakthrough, which allows inexpensive hardware platforms to efficiently support large-, medium-, and fine-grain parallel computation in a NOW environment, few papers describing the design and implementation of such systems appear in the literature. The PC-NOW workshop will provide a forum where researchers and practitioners can discuss issues, results, and ideas related to the design of efficient NOWs based on commodity hardware and public-domain operating systems, as compared to custom hardware devices.

The PC-NOW program will include a number of contributed (25-minute) presentations and a panel discussion. The presentations will cover the following topics:

- Experience with low-cost, high-performance NOW
- Low-cost communication hardware for personal computers
- Performance and benchmarks
- Porting of significant applications on low-cost NOW
- Efficient implementation of message passing libraries for NOW
- Parallel application environment for NOW
- Communication architectures
- Communication paradigms
- Industrial relevance

The PC-NOW workshop proceedings will be published by Springer Verlag.

For further information, please contact:

Giovanni Chiola
DISI, University of Genoa
35 via Dodecaneso, 16146 Genoa, Italy
Vox: +39-10-353-6606
Fax: +39-10-353-6699
Internet: chiola@disi.unige.it

Workshop 16: All Day Friday

WORKSHOP ON FAULT-TOLERANT PARALLEL AND DISTRIBUTED SYSTEMS

Workshop Chair:
Dimiter Avresky, Boston University
Workshop Vice-Chair:
David R. Kaeli, Northeastern University




Theme

Increasingly large parallel computing systems present unique challenges to researchers in dependable computing, especially because of the high failure rates intrinsic to these systems. While commercial and scientific companies share the need for massive throughput and low latency, dependability of service is also a concern. In addition to providing uninterrupted service, commercial systems must be free from data corruption. Achieving dependability in highly scalable parallel and distributed systems poses a considerable challenge: as the number of components increases, so does the probability of a component failure. Therefore, improved fault-tolerance technology is required for highly scalable parallel and distributed systems.

The goal of this workshop is to provide a forum for researchers and practitioners to discuss these issues in fault-tolerant parallel and distributed systems. All aspects of the design, theory, and realization of parallel and distributed systems are of interest.

Topics of interest include, but are not limited to:

- Dependable distributed systems
- Fault-tolerance in clusters of workstations
- Fault-tolerant interconnection networks
- Reconfigurable fault-tolerant parallel and distributed systems
- Fault-tolerant parallel programming
- Scalable fault-tolerant architectures and algorithms
- Fault injection in parallel and distributed systems
- Dependability evaluation of fault-tolerant parallel and distributed systems

Program Committee:

J. Bruck, Caltech, USA
S. Budkowski, INT, France
B. Ciciani, University of Roma, Italy
F. Cristian, U.C. San Diego, USA
A. Goyal, IBM Watson Research Center, USA
J. Hauser, Sun Microsystems, USA
J. Hayes, University of Michigan, Ann Arbor, USA
R. Horst, Tandem Computers Inc., USA
M. Karpovsky, Boston University, USA
H. Levendel, Lucent Tech., Bell Labs Innovations, USA
Q. Li, Santa Clara University, CA, USA
F. Lombardi, Texas A&M, USA
E. Maehle, University of Lubeck, Germany
K. Makki, University of Nevada, Las Vegas, USA
M. Malek, Humboldt University, Germany
A. Nordsieck, Boeing, USA
N. Pissinou, CACS, USL, USA
M. Raynal, IRISA, France
R. Riter, Boeing, USA
K. Siu, MIT, USA
B. Smith, IBM Watson Research Center, USA
K. Trivedi, Duke University, USA
J. Wu, University of Florida, USA

The workshop is sponsored by the IEEE Computer Society Technical Committee on Parallel Processing. For further information, please contact one of the following:

D.R. Avresky
Department of Electrical and Computer Engineering
Boston University
8, Saint Mary's St.
Boston, MA 02215
Vox: (617) 353-9850
Fax: (617) 353-6440
Internet: avresky@bu.edu

D.R. Kaeli
Department of Electrical and Computer Engineering
Northeastern University
Boston, MA 02115
Vox: (617) 373-5413
Fax: (617) 373-8970
Internet: kaeli@ece.neu.edu

Workshop URL: http://netcom1.bu.edu/FTPDS98_workshop.html


Workshop 17: All Day Friday

3rd WORKSHOP ON OPTICS AND COMPUTER SCIENCE

Workshop Co-Chairs:
P. Berthomé, LRI, Orsay, France
P.J. Marchand, UCSD, USA




Steering Committee:

P. Chavel, IOTA, Orsay, France
T. Drabik, Georgia Tech, Metz, France and Atlanta, USA
S.C. Esener, UCSD, USA
A. Ferreira, CNRS INRIA, Nice Sophia-Antipolis, France
P. Spirakis, CTI, Patras, Greece

Program Committee:

J.-C. Bermond, INRIA CNRS, Nice Sophia-Antipolis, France
D. Blumenthal, Georgia Institute of Technology, Atlanta, USA
H.-A. Choi, George Washington University, USA
J. Collet, LAAS, Toulouse, France
M. Desmulliez, Heriot-Watt University, UK
P. Dowd, University of Maryland, New York, USA
A. Ferreira, CNRS INRIA, Nice Sophia-Antipolis, France
A. Krishnamoorthy, Lucent Bell-Labs, USA
Y. Li, NEC Research Institute, USA
A. Louri, University of Arizona, USA
M.A. Marsan, Torino, Italy
N. Mauduit, IEF, Orsay, France
F. McCormick, Call-Recall Inc., USA
R. Melhem, University of Pittsburgh, USA
H. Ozaktas, Bilkent University, Turkey
R. Paturi, UCSD, USA
D. Peleg, Weizmann Institute, Israel
T. Pinkston, USC, USA
T. Szymanski, McGill University, Montreal, Canada
J. Tanida, Osaka University, Japan

Optical means are now widely used in telecommunication networks, and the evolution of optical and optoelectronic technologies suggests that they could be successfully introduced in shorter-distance interconnection systems such as parallel computers. These technologies offer a wide range of techniques that can be used in interconnection systems. But introducing optics in interconnect systems also means that specific problems have yet to be solved, while some unique features of the technology must be taken into account in order to design optimal systems. Such problems and features include device characteristics, network topologies, packaging issues, compatibility with silicon processors, and system-level modeling.

The purpose of WOCS is two-fold. First, we hope to provide a good opportunity for the optical, architecture, and communication research communities to get together for a fruitful cross-fertilization and exchange of ideas. The goal is to bring optical interconnects research into the mainstream of parallel processing research, while at the same time providing the parallel processing community with a more comprehensive understanding of the advantages and limitations of optics as applied to high-speed communications. In addition, we intend to assemble a group of major research contributors to the field of optical interconnects to assess its current status and identify future directions. By its location, this workshop will provide the first real opportunity for European researchers to present their results in this field.

WOCS will feature invited speakers, several sessions of submitted papers, and a panel discussion. The presentations will cover the topics of:

- High-speed interconnections
- Optical interconnects
- Parallel optical architectures
- Reconfigurable optical interconnects and architectures
- Applications of optical interconnects
- Modeling of optical systems and applications
- Performance analysis and comparisons
- Packaging of optical interconnects
- System demonstrations
- Routing in optical networks

Invited Speaker:
Ted Szymanski, McGill University, Canada
"Intelligent Optical Network"

For further information, please contact:

Philippe J. Marchand
UCSD, ECE Department
9500 Gilman Drive
La Jolla CA 92093-0407, USA
Internet: pmarchand@ucsd.edu

Pascal Berthomé
LRI, Bat 490
91405 ORSAY Cedex, France
Internet: berthome@lri.fr

URL: http://soliton.ucsd.edu/wocs/wocs98


Workshop 18: All Day Friday

3rd WORKSHOP ON FORMAL METHODS FOR PARALLEL PROGRAMMING: THEORY AND APPLICATIONS

Workshop Chairs:
Dominique Mery, Universite Henri Poincare-Nancy 1 and IUF, France
Beverly Sanders, University of Florida, USA




Program Committee:

Flemming Andersen, Tele Danmark R&D, Denmark
Mani Chandy, Caltech, USA
Radhia Cousot, CNRS & Ecole Polytechnique, France
Pascal Gribomont, Liege, Belgium
Dominique Mery, Nancy, France (Co-Chair)
Lawrence Paulson, Cambridge, UK
Xu Qiwen, Macau
Catalin Roman, St Louis, USA
Beverly Sanders, Florida, USA (Co-Chair)
Ambuj Singh, California, USA
Mark Staskauskas, Bell Laboratories, Lucent Technologies, USA

Topics

Formal methods have been widely investigated in academic institutions, and more recently they have begun to be applied in industry. Systems and their properties can be described precisely using mathematical notations, offering a way to achieve higher reliability. Formal methods combine methodological aspects in a formal framework. Although they appear to be difficult to apply, they are the only means of ensuring that an implementation is correct with respect to a given specification. The development of an algorithmic solution from a (formal) specification is carried out with the help of mathematical techniques and tools. The objective of the workshop is to gather people, from both academia and industry, who use and/or develop formal methods for parallel programming. There are potentially many different approaches to improving the environment for parallel programming. The (proof) tools and their user interfaces are fundamental to the formalization of the parallel programming process. Since 1998 marks the 10th anniversary of UNITY, we especially encourage UNITY-related work, including:

- Foundations of UNITY: proof theory, temporal logics
- Paradigms of concurrency and distribution in UNITY
- Case studies
- Refinement techniques and mapping
- Implementations of UNITY principles: theorem provers, environment, model checking techniques, etc.
- UNITY versus other frameworks: TLA, VDM, Z, B, RAISE, Action Systems, DISCO, CCS, pi-calculus, OO, etc.
- Methods that extend or were motivated by UNITY

Invited Speakers

We are pleased to announce that UNITY inventors

K. Mani Chandy (Caltech) and Jay Misra (U Texas, Austin)
have both agreed to speak at FMPPTA'98.

Papers accepted for FMPPTA'98 are currently under review for publication as regular papers in the journal Parallel Processing Letters. Papers of FMPPTA'98 are available at the URL: http://www.loria.fr/~mery/FMPPTA98/. A special issue of a journal is planned for FMPPTA'98, and authors will receive further information during the workshop. Workshop proceedings will be available at the conference and will be included in a book in the LNCS series.

Information requests:

FMPPTA'98/Dominique Mery
Universite Henri Poincare-Nancy 1 IUF,
CRIN-CNRS,
Batiment LORIA, BP239
F-54506 Vandoeuvre-les-Nancy France
Vox: +33 3 83 59 20 14
Fax: +33 3 83 41 30 79
Internet: FMPPTA98@loria.fr or mery@loria.fr
URL: http://www.loria.fr/~mery/FMPPTA98



Tutorial 2
8:30 AM - 12:30 PM

STRUCTURED MULTITHREADED PROGRAMMING FOR WINDOWS NT AND UNIX/PTHREADS

John Thornley, California Institute of Technology




Who Should Attend:

Experienced programmers who are new to multithreaded programming. This includes programmers with experience on message-passing systems who want to learn how to take full advantage of shared-memory multiprocessors.

Course Description:
Introduction to shared-memory multiprocessors and multithreaded programming systems. Examples of commercially-available machines and programming systems, including multiprocessor PCs running Windows NT. Performance issues in multithreaded programming of shared-memory multiprocessors. Design, development, and debugging issues in high-performance multithreaded programming. Advantages of shared-memory multithreaded programming over message-passing. Introduction to Sthreads: a portable, high-level multithreaded programming library defined on top of Windows NT threads and UNIX/Pthreads. Examples of practical applications, with performance measurements for current multiprocessor PCs and workstations.

Lecturer:

John Thornley received a B.Sc. and M.S. from the University of Auckland in New Zealand. He was a lecturer in Computer Science at the University of Auckland for many years. He received an M.S. and Ph.D. from Caltech, where he is currently a research scientist in the Computer Science Department. He is also teaching a class at Caltech on structured programming for multiprocessors. His research interests include multithreaded programming and systematic programming methods.


Tutorial 3
1:30 PM - 5:30 PM

PARALLEL AND DISTRIBUTED COMPUTING USING JAVA

Ira Pramanick, Silicon Graphics Inc.




Who Should Attend:

Engineering students, researchers and industry professionals seeking to learn this extremely popular, portable programming language for use in parallel and distributed computing. Basic knowledge of object oriented concepts would be a plus, but is not essential.

Course Description:

Java takes the promise of parallel processing and distributed computing a step beyond other languages, not only by providing a portable programming language that can be used on any combination of different machines that support Java, but also by offering seamless integration with the web. Java is itself multithreaded, and has several models/layers of parallel and distributed computing, offering these layers via its well-defined object-oriented classes. This tutorial will touch upon the basics of Java that realize its promise of "Write Once, Run Anywhere," and then move on to cover its parallel and distributed programming aspects. It will cover Java's support for parallel processing using multiple CPUs of a single machine through its threads model, and then move on to the topic of parallel and distributed computing using several machines on a network. Programming examples will be provided for each major topic; a brief illustrative sketch of the threads model follows the topic list below.

The major topics to be covered are:

- Brief overview of Java basics
- Introduction to parallel & distributed computing in Java
- Applications and applets
- Parallel processing/multithreading in Java
- Distributed Java programs
- Java socket classes
- Java URL classes
- Remote method invocation
- Java native interface
- Miscellaneous Java goodies such as JavaBeans and JDBC.
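
As a flavor of the threads model listed above, here is a minimal sketch (not taken from the tutorial materials; the class name PartialSum and all other identifiers are hypothetical, illustrative choices) in which two Java threads sum the halves of an array concurrently on a shared-memory machine and the main thread combines the partial results:

    // Minimal sketch of Java's threads model (illustrative only, not tutorial code):
    // two threads each sum half of an array concurrently; the main thread waits
    // for both with join() and then combines the partial results.
    public class PartialSum extends Thread {
        private final int[] data;
        private final int lo, hi;   // half-open range [lo, hi)
        private long sum;           // partial result, read only after join()

        PartialSum(int[] data, int lo, int hi) {
            this.data = data;
            this.lo = lo;
            this.hi = hi;
        }

        public void run() {
            long s = 0;
            for (int i = lo; i < hi; i++) {
                s += data[i];
            }
            sum = s;
        }

        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1000000];
            for (int i = 0; i < data.length; i++) {
                data[i] = i % 10;
            }

            int mid = data.length / 2;
            PartialSum left = new PartialSum(data, 0, mid);
            PartialSum right = new PartialSum(data, mid, data.length);

            left.start();    // each run() executes concurrently, potentially
            right.start();   // on a different CPU of a multiprocessor
            left.join();     // wait for both workers to finish
            right.join();

            System.out.println("total = " + (left.sum + right.sum));
        }
    }

The same decomposition can then be spread across several machines using the socket, URL, or RMI classes listed above, which is the progression the tutorial description outlines.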

Lecturer:

Ira Pramanick received her B.Tech.(Hons.) in Electrical Engineering from IIT Kharagpur, India in 1985, and her Ph.D. in Electrical & Computer Engineering from the University of Iowa in 1991. She was an Assistant Professor in the ECE Department at the University of Alabama in Huntsville from 1991 to 1992. She worked for IBM Corporation as an Advisory Engineer from 1992 to 1995, and joined Silicon Graphics in 1995 where she currently works for the Strategic Software Division. Dr. Pramanick's research interests include parallel processing and distributed computing, highly available systems, algorithms, and applications. She has served on the program committee for the IEEE International Symposium on High Performance Distributed Computing for the past four years, and has chaired several technical sessions at HPDC, ICPP and SECON. She was awarded an IBM Invention Achievement Award for a patent filing in 1994.



LOCATION


Orlando is located in central Florida, once an area of sparkling lakes, pine forests, and citrus groves. With the Disney transformation of 43 square miles of swampland, the Orlando area is now a world tourist draw as well as one of the fastest-growing high-tech regions.

IPPS/SPDP'98 will be held at the Delta Orlando Resort, which offers 25 acres of family fun and relaxation, including 3 pools, 2 kids' pools, 3 hot tubs, saunas, and a mini-golf course. Wally's Kids Club provides supervised activities, and there is a children's playground and game room. The Delta Orlando is located at the Maingate of Universal Studios on Major Boulevard. It is less than 20 minutes from the airport and about the same distance from both downtown Orlando and Disney World.

Orlando is the site of an impressive list of world-renowned attractions, including: Walt Disney World, featuring the Magic Kingdom, the Epcot Center, and Disney-MGM Studios; Universal Studios (across from the resort); Sea World; and Cypress Gardens. In addition, there is the Orlando Science Center, Splendid China, the Flying Tigers Warbird Air Museum, and Green Meadows Petting Farm (to mention a few!), as well as downtown Orlando, which features Church Street Station, a shopping, dining, and nightclub complex.

Clearly, there is more to see and do than 1 week allows, so you may want to extend your stay. If your family is joining you in Orlando, you will want to plan ahead to make the best use of your time and resources.

HOW TO GET TO ORLANDO

The following airlines offer service to Orlando: Delta, America West, American, American Trans Air, British Airways, Continental, Midway, Northwest, SunJet, Transbrasil, TWA, United, US Airways, and Virgin Atlantic. As with all air travel, book early to obtain the best fare.

ACCOMMODATIONS

IPPS/SPDP'98 has been publicized as an especially "family friendly" event, and with good reason. The room rate is $79 for up to four people, there are 3 outdoor pools, and kids under 12 eat free with the purchase of adult meals in the 3 Delta Resort restaurants.

The IPPS/SPDP special discount rate at the Delta Resort is available March 27 to April 3, 1998. Reservations must be received by the hotel no later than March 2, 1998. See the reservation form on the middle tear-out sheet. Note that requests received after March 2nd are subject to space availability only.

LOCAL TRANSPORTATION

From the airport to the hotel, you can take a shuttle. The Mears Transportation Group and Transtar Shuttle vans may be boarded outside baggage claim. They depart every 15 to 20 minutes and cost $12 for a party of 1, more for a larger group.

You can take cabs to other destinations or arrange to catch a bus or shuttle. If you plan to do more than just visit Disney World (which has its own transportation network), you may want to consider renting a car which, if you shop around and utilize discounts, should be available at a weekly rate under $150.

CLIMATE

The March/April temperatures in Orlando range from 60 to 80 degrees F, and you can expect sunny days (bring your sunscreen). Casual attire is the norm, and when out enjoying the attractions, comfortable shoes are a must.

ATTRACTIONS

There is not space here to fully describe the various attractions and the details of cost, hours, and getting there. The best bet is to purchase (or borrow) a travel book or check the Web sites listed below. These sources detail various packages and special deals for saving dollars in Orlando. For example, the Frommer's 98 Guide has a coupon for obtaining a free Magicard, which offers over $1000 in savings.

A Vacation Value Pass may be purchased for the attractions Sea World, Wet 'n Wild, Universal Studios, and Busch Gardens in Tampa. Available at any one of these sites, the pass allows you five consecutive days of unlimited visits to all four attractions. The Disney World 4-Day Value Pass provides one day's admission each at the Magic Kingdom, Epcot Center, and Disney-MGM Studios, plus one additional day at any of the three.

DISNEY WORLD
http://www.disneyworld.com
UNIVERSAL STUDIOS
http://www.usf.com
SEA WORLD
DELTA ORLANDO RESORT
http://www.deltahotels.com