Parallelization of unsteady adaptive mesh refinement for unstructured Navier-Stokes solvers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper explores the implementation of MPI parallelization in an unstructured Navier-Stokes solver that uses dynamic adaptive mesh refinement of hexahedral cells to increase grid density in regions with strong gradients. Both implicit and explicit time advancement methods are considered, and a distributed implementation of the Data-Parallel Line Relaxation implicit operator is discussed for grids with hanging nodes. Parallel performance of simulations of an unsteady, inviscid flow is examined for both adapted and unadapted meshes to provide a baseline for comparison, and the relative costs of adaptation and time stepping provide insight into computational bottlenecks. The flow solver and methods presented here are validated against data from a double cone experiment in hypersonic flow. For a given level of accuracy, adapted grids yield predictions at lower cost than those obtained on unadapted grids for this staple test problem, and unsteady adaptation provides considerable savings for all problems considered.
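The abstract only summarizes the approach, so as a rough, hypothetical illustration (not the authors' implementation; the Cell fields, threshold values, and use of a global reduction are all assumptions), the C++/MPI sketch below shows the kind of step an unsteady AMR cycle performs between time steps: flag hexahedral cells for refinement or coarsening from a gradient criterion on each rank, then sum the counts across ranks so every process can see how much the distributed mesh changed.

```cpp
// Hypothetical sketch: gradient-based refinement flagging on a distributed
// cell partition, followed by an MPI reduction of the adaptation counts.
#include <mpi.h>
#include <cmath>
#include <cstdio>
#include <vector>

struct Cell {
    double rho;       // cell-averaged density
    double grad_rho;  // magnitude of the density gradient (precomputed)
    int    level;     // current refinement level
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Synthetic local partition; in a real solver these cells come from the
    // distributed unstructured mesh and the gradients from the flow state.
    std::vector<Cell> cells(1000);
    for (std::size_t i = 0; i < cells.size(); ++i) {
        cells[i] = {1.0, std::fabs(std::sin(0.01 * (i + rank))), 0};
    }

    const double refine_threshold  = 0.8;  // assumed threshold values
    const double coarsen_threshold = 0.1;
    const int    max_level         = 3;

    long local_refine = 0, local_coarsen = 0;
    for (auto& c : cells) {
        if (c.grad_rho > refine_threshold && c.level < max_level) {
            ++c.level;  // flag for isotropic subdivision
            ++local_refine;
        } else if (c.grad_rho < coarsen_threshold && c.level > 0) {
            --c.level;  // flag for coarsening
            ++local_coarsen;
        }
    }

    // Global counts let every rank see how much the mesh changed this step,
    // e.g. to decide whether repartitioning is worthwhile.
    long global_refine = 0, global_coarsen = 0;
    MPI_Allreduce(&local_refine,  &global_refine,  1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(&local_coarsen, &global_coarsen, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0) {
        std::printf("refined %ld cells, coarsened %ld cells across %d ranks\n",
                    global_refine, global_coarsen, size);
    }
    MPI_Finalize();
    return 0;
}
```

This omits the parts the paper actually studies in detail, such as handling hanging nodes in the Data-Parallel Line Relaxation operator and the communication needed after cells are subdivided; it is only meant to indicate where per-rank adaptation work and global synchronization enter the time-stepping loop.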

Original language: English (US)
Title of host publication: AIAA AVIATION 2014 - 7th AIAA Theoretical Fluid Mechanics Conference
Publisher: American Institute of Aeronautics and Astronautics Inc.
ISBN (Print): 9781624102936
DOIs
State: Published - 2014
Event: AIAA AVIATION 2014 - 7th AIAA Theoretical Fluid Mechanics Conference 2014 - Atlanta, GA, United States
Duration: Jun 16, 2014 – Jun 20, 2014

Publication series

Name: AIAA AVIATION 2014 - 7th AIAA Theoretical Fluid Mechanics Conference

Other

Other: AIAA AVIATION 2014 - 7th AIAA Theoretical Fluid Mechanics Conference 2014
Country/Territory: United States
City: Atlanta, GA
Period: 6/16/14 – 6/20/14
