TNT

A solver for large dense least-squares problems that takes conjugate gradient from bad in theory, to good in practice

Joseph M. Myre, Erich Frahm, David J. Lilja, Martin O. Saar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Since its inception by Gauss, the least-squares problem has frequently arisen in science, mathematics, and engineering. Iterative methods, such as Conjugate Gradient Normal Residual (CGNR), have been popular for solving sparse least-squares problems, but have historically been regarded as undesirable for dense applications due to poor convergence. We contend that this traditional 'common knowledge' should be reexamined. Preconditioned CGNR, and perhaps other iterative methods, should be considered alongside standard methods when addressing large dense least-squares problems. In this paper we present TNT, a dynamite method for solving large dense least-squares problems. TNT implements a Cholesky preconditioner for the CGNR fast iterative method. The Cholesky factorization provides a preconditioner that, in the absence of round-off error, would yield convergence in a single iteration. Through this preconditioner and good parallel scaling, TNT provides improved performance over traditional least-squares solvers, allowing for accelerated investigations of scientific and engineering problems. We compare a parallel implementation of TNT to parallel implementations of other conventional methods, including the normal equations and the QR method. For the small systems tested (15000 × 15000 or smaller), TNT produces smaller solution errors and executes up to 16× faster than the other tested methods. We then apply TNT to a representative rock magnetism inversion problem, where it yields the best solution accuracy and execution time of all tested methods.
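
To make the preconditioning idea concrete, below is a minimal Python (NumPy/SciPy) sketch of Cholesky-preconditioned CGNR, written for this summary rather than taken from the paper; it is a sketch under stated assumptions, not the authors' implementation, and the function name tnt_solve and all tolerances are hypothetical.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def tnt_solve(A, b, tol=1e-12, max_iter=50):
    """Hypothetical sketch: solve min ||Ax - b||_2 with CGNR, using the
    Cholesky factorization of the normal-equations matrix A^T A as the
    preconditioner. In exact arithmetic this preconditioner would give
    convergence in one iteration; the CG loop only polishes round-off."""
    AtA = A.T @ A
    Atb = A.T @ b
    # Cholesky factors of A^T A serve as the preconditioner. (An
    # ill-conditioned A^T A may need a diagonal shift; omitted here.)
    c, low = cho_factor(AtA)
    x = cho_solve((c, low), Atb)   # direct solve; exact answer in exact arithmetic
    r = Atb - AtA @ x              # normal-equations residual (round-off only)
    z = cho_solve((c, low), r)     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(Atb):
            break                  # converged; also guards the divisions below
        Ap = AtA @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = cho_solve((c, low), r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative use on a random dense system:
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 500))
x_true = rng.standard_normal(500)
x = tnt_solve(A, A @ x_true)       # recovers x_true to near machine precision

In exact arithmetic the initial Cholesky solve already yields the least-squares solution, so the iterations exist purely to correct floating-point round-off; this is the single-iteration property the abstract refers to.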

Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 987-995
Number of pages: 9
ISBN (Print): 9781538655559
DOI: 10.1109/IPDPSW.2018.00153
State: Published - Aug 3, 2018
Event: 32nd IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018 - Vancouver, Canada
Duration: May 21, 2018 - May 25, 2018

Keywords

  • Least Squares
  • Preconditioned Conjugate Gradient
  • Rock Magnetism

Cite this

Myre, J. M., Frahm, E., Lilja, D. J., & Saar, M. O. (2018). TNT: A solver for large dense least-squares problems that takes conjugate gradient from bad in theory, to good in practice. In Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2018 (pp. 987-995). [8425520] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IPDPSW.2018.00153
