PMAA14
8th International Workshop on Parallel Matrix Algorithms and Applications
July 2-4, 2014 // Università della Svizzera italiana // Lugano, Switzerland

Overview

The scientific program of PMAA14 consists of plenary speakers, participant-organized minisymposia, and contributed presentations. The PMAA14 papers will be published in a special issue of the journal Parallel Computing.

Minisymposia

A minisymposium consists of four (or eight) talks with a common technical theme. Minisymposium organizers are responsible for securing the participation of their speakers and for collecting and forwarding the initial speaker and talk-title information on which minisymposium selection will be based. Once a minisymposium is approved, its speakers use the same web-based automated submission procedure as other presenters to enter their finalized titles, abstracts, author and affiliation data, and speaker designation (in the case of multiply-authored papers).

Please see these instructions for more details on minisymposium proposal submissions.

Contributed Presentations

A contributed talk is a 30-minute presentation; the organizers will group contributed talks into the two-hour parallel sessions held each morning and afternoon of the conference.

Presenters are asked to submit their titles, abstracts (600 words or less), author and affiliation data, and speaker designation (in the case of multiply-authored papers) using the abstract submission procedure.

The deadline for fullest consideration is April 14, 2014. Submitters will be notified of acceptance no later than April 20, 2014. If earlier notification is required for visa purposes for participants from countries with restricted access to Switzerland, please e-mail your submission to decanato.inf@usi.ch as soon as possible and it will be given special attention.

Keynote Speakers

Prof. Dr. Wolfgang Bangerth

Texas A&M, USA

Finite Element Methods at Realistic Complexities

Solving realistic, applied problems with the most modern numerical methods introduces many levels of complexity. In particular, one has to think not just about a single method, but about a whole collection of algorithms: a single code may combine fully adaptive, unstructured meshes; nonlinear, globalized solvers; algebraic multigrid and block preconditioners; and realistic material models, all running on 1,000 processors or more.

Codes at this level of complexity can no longer be written from scratch. However, over the past decade, many high-quality libraries have been developed that make writing advanced computational software simpler. In this talk, I will briefly introduce the deal.II finite element library (http://www.dealii.org), whose development I lead, and show how it has enabled us to develop the ASPECT code (http://aspect.dealii.org) for the simulation of convection in the Earth's mantle. I will discuss some of the results obtained with this code and comment on the lessons learned from developing this massively parallel code for the solution of a complex problem.

Prof. Dr. Eric Darve

Stanford University, USA

Fast direct linear solvers

In recent years there has been a resurgence of interest in direct methods for solving linear systems. These methods can have many advantages over iterative solvers; in particular, their accuracy and performance are less sensitive to the distribution of eigenvalues. However, they typically have a larger computational cost in cases where iterative solvers converge in few iterations. We will discuss recent methods that address this cost and can make direct solvers competitive. The techniques involved include hierarchical matrices, hierarchically semi-separable matrices, the fast multipole method, and related representations.
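The following minimal sketch (ours, not the speaker's; the kernel, sizes, and tolerance are illustrative assumptions) shows the observation these formats build on: an off-diagonal block of a matrix generated by a smooth kernel has rapidly decaying singular values, so it can be compressed to low rank with a truncated SVD.

import numpy as np

# Illustrative sketch: off-diagonal blocks of matrices arising from smooth
# kernels have rapidly decaying singular values, so they can be stored in
# low-rank form; that observation underlies H- and HSS-matrix solvers.

n = 512
x = np.linspace(0.0, 1.0, n)
left, right = x[: n // 2], x[n // 2:]          # two point clusters
block = 1.0 / (1.0 + np.abs(left[:, None] - right[None, :]))  # off-diagonal block

U, s, Vt = np.linalg.svd(block, full_matrices=False)
tol = 1e-8
rank = int(np.sum(s > tol * s[0]))             # numerical rank at tolerance tol
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(f"block {block.shape}, numerical rank {rank}, relative error {rel_err:.1e}")
# Storing the truncated factors costs O(n * rank) instead of O(n^2) for the
# dense block, which is what makes fast direct factorizations feasible.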

Prof. Dr. Jacek Gondzio

University of Edinburgh, UK

Parallel Matrix Computations in Optimization

I shall address recent challenges in optimization, including problems which originate from the “Big Data” buzz. The existing and well-understood methods (such as interior point algorithms) that are able to take advantage of parallelism in the matrix operations will be briefly discussed. A new class of methods that works in the matrix-free regime will then be introduced. These approaches employ iterative techniques, such as Krylov subspace methods, to solve the linear systems that arise when search directions are computed. Finally, I shall comment on the well-known drawbacks of the fashionable but unreliable first-order methods for optimization and suggest parallelisation-friendly remedies that rely on the use of (inexact) second-order information.
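As a rough illustration of the matrix-free idea (a sketch under assumed data; the matrix A, scaling d, and regularization delta below are placeholders, not the speaker's formulation): when a search direction requires solving a normal-equations-type system (A D A^T + delta I) dy = r, a Krylov method such as conjugate gradients only needs the action of the operator on a vector, so the product matrix is never formed or factorized.

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative matrix-free solve of a regularized normal-equations system
# (A D A^T + delta I) dy = r, as it arises for interior point search
# directions.  Only the action of the operator on a vector is needed.

rng = np.random.default_rng(0)
m, n = 200, 800
A = sparse_random(m, n, density=0.01, random_state=0, format="csr")
d = rng.uniform(0.5, 2.0, n)   # stand-in for the IPM scaling matrix D
delta = 1e-2                   # regularization, exaggerated here to keep the toy system well conditioned
r = rng.standard_normal(m)

def apply_operator(v):
    # v -> (A D A^T + delta I) v, using only sparse products and scalings.
    return A @ (d * (A.T @ v)) + delta * v

op = LinearOperator((m, m), matvec=apply_operator)
dy, info = cg(op, r, maxiter=1000)

residual = np.linalg.norm(apply_operator(dy) - r)
print(f"CG info = {info}, residual norm = {residual:.2e}")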

Prof. Dr. Wim Vanroose

University of Antwerp, Belgium

Communication avoiding and hiding in Krylov methods

The main algorithmic components of a preconditioned Krylov method are the dot product and the sparse matrix-vector product. On modern HPC hardware the performance of preconditioned Krylov methods is severely limited by two communication bottlenecks. First, each dot product incurs a large latency because of the synchronization and global reduction involved. Second, each sparse matrix-vector product is limited by memory bandwidth, since it performs only a few floating-point operations for each byte read from main memory. In this talk we discuss how Krylov methods can be redesigned to alleviate these two communication bottlenecks.
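For illustration (a serial, unpreconditioned sketch of the cost structure, not of the redesigned methods discussed in the talk; the model problem and tolerance are arbitrary), here is a textbook conjugate gradient loop with comments marking where the latency-bound global reductions and the bandwidth-bound sparse matrix-vector products would occur in a distributed-memory run:

import numpy as np
from scipy.sparse import diags

# Textbook CG on a 1-D Laplacian, annotated with the communication each step
# would require on distributed-memory hardware (illustration only).

n = 200
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x                  # SpMV: neighbour (halo) exchange, bandwidth-bound
p = r.copy()
rs_old = r @ r                 # dot product: global reduction / synchronization

for it in range(1000):
    Ap = A @ p                 # SpMV: few flops per byte loaded from memory
    alpha = rs_old / (p @ Ap)  # dot product: one global reduction ...
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r             # ... and a second one in the same iteration
    if np.sqrt(rs_new) < 1e-8:
        break
    p = r + (rs_new / rs_old) * p
    rs_old = rs_new

print(f"iterations: {it + 1}, final residual norm: {np.sqrt(rs_new):.2e}")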

Minisymposia

A minisymposium session consists of four 25-minute presentations, with an additional five minutes of discussion after each presentation.

Prospective minisymposium organizers should submit a short proposal for the minisymposium by e-mail to pmaa14@usi.ch (extended deadline for minisymposium proposals: April 08, 2014). Each accepted minisymposium speaker should then submit a 600-word abstract (as a text file) through the abstract submission web page (extended deadline for abstract submission: April 14, 2014).

These will be reviewed by the scientific committee. The number of minisymposia may be limited to retain an acceptable level of parallelism in the conference sessions.

Minisymposia proposals should be e-mailed to pmaa14@usi.ch by April 08, 2014 to receive fullest consideration. Selections will be announced within two days after this deadline.

Proposals consist of:

  1. over-arching title,
  2. organizer(s) with affiliations and e-mails,
  3. abstract (600 words or less) of the overall minisymposium,
  4. list of speakers with affiliations and e-mails,
  5. tentative title of each speaker's talk.

Proposals should be in uncompressed plaintext, pdf, or doc format. Minisymposia are allowed two or four hours each. This means four or eight 30-minute talks.

Accepted Minisymposia

  • Fault-tolerant, communication-avoiding and asynchronous matrix computations
    D. Goeddeke, S. Turek, M. A. Heroux
  • Parallel-in-time Methods
    P. Arbenz, R. Krause
  • Toward resiliency for extreme scale applications
    E. Agullo, L. Giraud, J. Roman
  • Task-based solvers over runtime systems
    E. Agullo, L. Giraud, J. Roman
  • Algorithmic Adaptivity in Multilevel Linear and Nonlinear Solvers
    M. Knepley, J. Brown
  • Modeling and Solver Techniques for Fluid-Structure Interaction Problems
    R. Krause, J. Steiner
  • Parallel eigenvalue solvers
    J. E. Roman, W. Wang
  • Advanced Parallel Algorithms for the Eigenvalue Problem
    E. Polizzi
  • Advanced algorithm and parallel implementation of next generation eigenvalue solver
    Z. Imamura, T. Sakurai
  • Jacobi-based Eigen- and SVD-Solvers
    M. Vajtersic, G. Oksa
  • Highly Scalable Preconditioners for Sparse Eigensolvers with Focus on Finding Interior Eigenpairs
    A. Basermann, J. Thies
  • Recent advances in Highly Parallel Preconditioners
    M. Bolten, L. Grigori, F. Nataf, M. Bollhoefer