Accelerated Prediction of the Polar Ice and Global Ocean (APPIGO)

Lead PI: Dr. Eric Chassignet, Florida State University

Arctic change and reductions in sea ice are affecting Arctic communities and leading to increased commercial activity in the region. Improved forecasts will be needed at a variety of timescales to support Arctic operations and infrastructure decisions, and increased resolution and ensemble forecasting will require significant computational capability. At the same time, high-performance computing architectures are changing in response to power and cooling limitations, adding more cores per chip and using Graphics Processing Units (GPUs) as computational accelerators. We describe here an effort to improve Arctic forecast capability by modifying component models to better utilize these new computational architectures. Specifically, we will focus on the Los Alamos Sea Ice Model (CICE), the HYbrid Coordinate Ocean Model (HYCOM), and Wavewatch III, and optimize each model on both GPU-accelerated and MIC-based architectures. These codes form the ocean and sea ice components of the Navy’s Arctic Cap Nowcast/Forecast System (ACNFS) and the Navy Global Ocean Forecasting System (GOFS), with the latter scheduled to include a coupled Wavewatch III by 2016. An incremental acceleration approach will begin by porting selected sections of each code with OpenACC and OpenCL, supplemented with CUDA Fortran where additional functionality is needed, and then expand those accelerated regions across the three application codes. This approach provides early successes and opportunities to test changes as they are made. A second approach will redesign the code infrastructure to incorporate multi-level parallelism by design. The modified codes will be validated both on a single-component basis and within the forecast systems. This work will contribute to improved Arctic forecasts and to the Arctic ice prediction demonstration project for the Earth System Prediction Capability (ESPC).
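
As an illustration of the incremental, directive-based approach, the minimal sketch below offloads a single stencil-style loop nest with OpenACC while leaving the surrounding program untouched; the field and the smoothing update are generic placeholders, not code from CICE, HYCOM, or Wavewatch III. Built without OpenACC support, the pragmas are simply ignored and the code runs serially, which is what makes this style of change easy to test as it is made.

```c
/* Sketch of incremental, directive-based acceleration: a generic 2-D
 * field update standing in for one hotspot loop in a model. Compiled
 * with an OpenACC compiler (e.g. nvc -acc) the loops run on the GPU;
 * otherwise the pragmas are ignored and the code stays serial. */
#include <stdio.h>

#define NX 512
#define NY 512

static double u[NX][NY], unew[NX][NY];

int main(void) {
    /* Initialize a placeholder field: a single spike at the center. */
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            u[i][j] = (i == NX / 2 && j == NY / 2) ? 1.0 : 0.0;

    /* Keep the arrays resident on the device across iterations, so only
     * this region of the code changes, not the rest of the model. */
    #pragma acc data copy(u) create(unew)
    for (int it = 0; it < 100; it++) {
        /* The accelerated hotspot: a five-point stencil update. */
        #pragma acc parallel loop collapse(2)
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                unew[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                     u[i][j - 1] + u[i][j + 1]);

        /* Copy the smoothed interior back for the next iteration. */
        #pragma acc parallel loop collapse(2)
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                u[i][j] = unew[i][j];
    }

    printf("center value after smoothing: %g\n", u[NX / 2][NY / 2]);
    return 0;
}
```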

Number of Years: 5

Start Year: 2014

End Year: 2019

Partners:

  • Florida State University
  • LANL
  • NRL Stennis Space Center
  • University of Miami

FY 2015 PI Report
FY 2016 PI Report


An Integration and Evaluation Framework for ESPC Coupled Models

Lead PI: Dr. Ben Kirtman, University of Miami

To realize its potential, a U.S. Earth system modeling and prediction capability must encompass a network of agencies and organizations that contribute model components, infrastructure, and scientific and technical expertise. The model component contributions must be integrated using coupling software, optimized as a whole, and the predictive skill of the resulting models assessed using standard metrics. We propose to provide these integrative functions for the Earth System Prediction Capability (ESPC), using as a reference application a version of the Community Earth System Model (CESM) running an optimized version of the HYbrid Coordinate Ocean Model (HYCOM). We will establish a Coupling Testbed as a resource available to ESPC modelers, led by Testbed leads who have developed infrastructure for major modeling systems and can foster interactions with modelers at NASA, NOAA, and other centers. We will advance and update a coupled CESM-HYCOM system, prototyped under other funding, by incorporating a version of HYCOM optimized for accelerator-based architectures. We plan to evaluate the computational performance and predictive skill of the CESM-HYCOM code using standard metrics. In particular, we will perform a suite of numerical experiments examining how the model simulates the current climate at both moderate and ocean-eddy-resolving scales, and we will carry out a set of retrospective forecasts following the North American Multimodel Ensemble (NMME) protocol, again at both scales.
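
To make "standard metrics" concrete, the sketch below computes an anomaly correlation coefficient between a set of retrospective forecasts and verifying observations. It is a minimal illustration only: the data are hypothetical, anomalies are taken about the sample means as a stand-in for a true climatology, and NMME-style verification involves additional steps such as lead-time stratification and spatial weighting.

```c
/* Sketch: anomaly correlation coefficient (ACC) over a set of
 * retrospective forecasts. All data here are hypothetical placeholders. */
#include <math.h>
#include <stdio.h>

/* ACC = sum(f'o') / sqrt(sum(f'^2) * sum(o'^2)), where f' and o' are
 * forecast and observed anomalies; means here stand in for climatology. */
double anomaly_correlation(const double *fcst, const double *obs, int n) {
    double fbar = 0.0, obar = 0.0;
    for (int i = 0; i < n; i++) { fbar += fcst[i]; obar += obs[i]; }
    fbar /= n; obar /= n;

    double cov = 0.0, fvar = 0.0, ovar = 0.0;
    for (int i = 0; i < n; i++) {
        double fa = fcst[i] - fbar, oa = obs[i] - obar;
        cov  += fa * oa;
        fvar += fa * fa;
        ovar += oa * oa;
    }
    return cov / sqrt(fvar * ovar);
}

int main(void) {
    /* Five hypothetical hindcast/verification pairs, e.g. one grid
     * point verified over five start dates. */
    double fcst[] = { 0.3, -0.1, 0.8, 0.4, -0.5 };
    double obs[]  = { 0.2, -0.3, 0.6, 0.5, -0.4 };
    printf("ACC = %.3f\n", anomaly_correlation(fcst, obs, 5));
    return 0;
}
```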

Number of Years: 5

Start Year: 2014

End Year: 2019

Partners:

  • University of Miami
  • NRL Stennis Space Center
  • University of Colorado
  • University of Chicago
  • George Mason University/COLA
  • University of Wisconsin
  • National Center for Atmospheric Research
  • Florida State University

FY 2014 PI Report
FY 2015 PI Report


NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere-Ocean Coupled Models on Many-Core Computer Architectures

Lead PI: Dr. Lucas Wilcox, Naval Postgraduate School

The goal of this project is threefold. The first goal is to identify the bottlenecks of the Nonhydrostatic Unified Model of the Atmosphere (NUMA) and then circumvent them through the use of: 1) analytical tools to identify the most computationally intensive parts of both the dynamics and the physics; 2) intelligent, performance-portable use of heterogeneous accelerator-based many-core machines, such as General Purpose Graphics Processing Units (GPGPUs, or GPUs for short) or Intel’s Many Integrated Core (MIC) processors, for the dynamics; and 3) intelligent use of accelerators for the physics. The second goal is to implement Earth System Modeling Framework (ESMF) interfaces for the accelerator-based computational kernels of NUMA, allowing the study of coupling many-core-based components. We will investigate whether the ESMF data structures can be used to streamline the coupling of models on these new architectures, whose memory access must be carefully orchestrated to maximize both cache hits and bus occupancy for out-of-cache requests. The third goal is to implement NUMA as an ESMF component, allowing NUMA to be used as the atmospheric component in a coupled Earth system application. A specific outcome of this goal will be a demonstration of a coupled air-ocean-wave-ice system involving NUMA, HYCOM, Wavewatch III, and CICE within the Navy ESPC. The understanding gained through this investigation will have a direct impact on the Navy ESPC currently under development. NUMA has already been shown to scale up to tens of thousands of processors on CPU-based distributed-memory platforms [18]; this impressive scalability has been achieved through the use of the Message Passing Interface to exchange data between processors. The work proposed here will further increase the performance of NUMA, especially for the most costly operations that currently take place on-processor. Examples of such operations include forming the right-hand-side (RHS) vectors via the continuous/discontinuous Galerkin (CG/DG) high-order spatial operators, the implicit time integration strategy, and the sub-grid-scale physics.
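
As a rough illustration of where that RHS cost comes from, the sketch below applies a small, dense reference-element differentiation matrix independently to every element: the element-local structure that makes CG/DG methods attractive for many-core hardware. Everything here (sizes, matrix entries, the 1-D advection-style sign) is a placeholder; this is not NUMA's actual kernel.

```c
/* Sketch: element-local part of a DG right-hand-side evaluation.
 * Each element applies a small dense differentiation matrix D to its
 * nodal values -- independent small matrix-vector products that map
 * naturally onto many-core hardware. Generic illustration only. */
#include <stdio.h>

#define NELEM 1024   /* number of elements (placeholder) */
#define NP    8      /* nodes per element (placeholder)  */

static double D[NP][NP];        /* reference-element differentiation matrix */
static double q[NELEM][NP];     /* solution nodal values   */
static double rhs[NELEM][NP];   /* right-hand-side storage */

int main(void) {
    /* Fill D and q with placeholder values; a real code builds D from
     * the polynomial basis and quadrature nodes. */
    for (int i = 0; i < NP; i++)
        for (int j = 0; j < NP; j++)
            D[i][j] = (i == j) ? 1.0 : 0.1;
    for (int e = 0; e < NELEM; e++)
        for (int i = 0; i < NP; i++)
            q[e][i] = (double)i / NP;

    /* The volume kernel: every (element, node) pair is independent, so
     * the loop nest exposes NELEM*NP-way parallelism to an accelerator. */
    #pragma acc parallel loop collapse(2) copyin(D, q) copyout(rhs)
    for (int e = 0; e < NELEM; e++) {
        for (int i = 0; i < NP; i++) {
            double acc = 0.0;
            for (int j = 0; j < NP; j++)
                acc += D[i][j] * q[e][j];
            rhs[e][i] = -acc;   /* sign per a simple advection RHS */
        }
    }

    printf("rhs[0][0] = %g\n", rhs[0][0]);
    return 0;
}
```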

Number of Years: 5

Start Year: 2014

End Year: 2019

Partners:

  • Naval Postgraduate School
  • Rice University
  • NRL Stennis Space Center
  • University of Illinois, Urbana-Champaign

FY 2013 PI Report
FY 2014 PI Report
FY 2015 PI Report
FY 2016 PI Report


RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

Lead PI: Dr. Eli Mlawer, Atmospheric and Environmental Research, Inc.

We propose to develop a high-performance broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured, modern version of the accurate RRTMG radiation code that has been implemented in many General Circulation Models (GCMs), including the Navy Global Environmental Model (NAVGEM), the NCAR Community Earth System Model (CESM), and NOAA’s Global Forecast System (GFS). The proposed development will significantly lessen a key bottleneck in these highly complex, coupled models: the large fraction of computational time currently required to calculate radiative fluxes and heating rates. This will allow the models to increase their resolution and/or the complexity of their other physical parameterizations, enabling a large potential increase in model performance. We will preserve the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, while rewriting the entire code to provide highly efficient computation across a range of architectures. We will pay special attention to highly parallel platforms, including Graphics Processing Units (GPUs) and Many Integrated Core (MIC) processors, on which preliminary experience shows that broadband radiation calculations can be accelerated by as much as a factor of fifty. Our redesign will include refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g., gas optics), optimizing the code to operate on multiple columns in parallel, simplifying the subroutine interface, and revisiting the existing gas-optics interpolation scheme to reduce branching. Our work will make extensive use of the lessons our team has learned in two recent related efforts: the development of a specialized GPU-accelerated version of RRTMG and a vectorized reimplementation of the code that includes a novel spectral sub-selection algorithm. Representatives of two global models that will benefit greatly from the proposed development, NAVGEM and CESM, are part of our collaboration, and each modeling group will participate in the redesign phase of the project so that the specifics of each computational environment are taken into account in the development of RRTMGP.
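
As a schematic of the column-parallel refactoring described above, the sketch below keeps the spectral (g-point) and layer loops on the outside and makes the column index the contiguous, branch-free inner loop, the layout that lets compilers vectorize across columns. The optical-depth and transmittance formulas are placeholders, not RRTMGP's actual gas-optics scheme.

```c
/* Sketch: a radiation-style kernel refactored to operate on many columns
 * at once. The column index varies fastest (stride-1) and the inner loop
 * is branch-free, illustrating the layout/vectorization pattern described
 * above. The physics here is a placeholder, not RRTMGP's gas optics. */
#include <stdio.h>

#define NCOL 128   /* columns processed together (placeholder) */
#define NLAY 60    /* model layers (placeholder)               */
#define NGPT 16    /* spectral g-points (placeholder)          */

static double tau[NGPT][NLAY][NCOL];     /* optical depth       */
static double trans[NGPT][NLAY][NCOL];   /* layer transmittance */
static const double kabs[NGPT] = {       /* fake absorption coefficients */
    0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
    0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6 };

int main(void) {
    /* Placeholder absorber amounts per layer and column. */
    static double uabs[NLAY][NCOL];
    for (int l = 0; l < NLAY; l++)
        for (int c = 0; c < NCOL; c++)
            uabs[l][c] = 0.01 * (l + 1);

    /* Kernel: g-point and layer loops outside, columns innermost so the
     * compiler can vectorize the contiguous, branch-free inner loop. */
    for (int g = 0; g < NGPT; g++)
        for (int l = 0; l < NLAY; l++)
            for (int c = 0; c < NCOL; c++) {
                tau[g][l][c]   = kabs[g] * uabs[l][c];
                /* 2nd-order series for exp(-tau): a branch-free stand-in. */
                trans[g][l][c] = 1.0 - tau[g][l][c]
                               + 0.5 * tau[g][l][c] * tau[g][l][c];
            }

    printf("trans[0][0][0] = %g\n", trans[0][0][0]);
    return 0;
}
```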

Number of Years: 5

Start Year: 2014

End Year: 2019

Partners:

  • Atmospheric and Environmental Research, Inc.
  • University of Colorado
  • NCAR
  • NRL Monterey

FY 2014 PI Report
FY 2015 PI Report
FY 2016 PI Report