Experts@Minnesota
An ADMM-based interior-point method for large-scale linear programming
Tianyi Lin, Shiqian Ma, Yinyu Ye, Shuzhong Zhang
Industrial and Systems Engineering
Research output: Contribution to journal › Article › peer-review
40 Scopus citations
Fingerprint
Dive into the research topics of 'An ADMM-based interior-point method for large-scale linear programming'. Together they form a unique fingerprint.
Keyphrases
Alternating Direction Multiplier Method (100%)
Interior Point Method (100%)
Large-scale Linear Programming (100%)
Linear Programming (83%)
Newton's Method (50%)
System of Linear Equations (33%)
Penalty Function (33%)
Log-barrier (33%)
Numerical Experiments (16%)
Self-dual (16%)
Large Systems (16%)
Linear Systems (16%)
Self-dual Embedding (16%)
Path Following (16%)
Preconditioned Conjugate Gradient Method (16%)
Program Model (16%)
Experiment Testing (16%)
Machine Learning Applications (16%)
Conditioner (16%)
Well Structure (16%)
Overall Solution (16%)
Newton Step (16%)
Solution Efficiency (16%)
Engineering
Point Method (100%)
Alternating Direction Method of Multipliers (100%)
Interior Point (100%)
Linear Programming (100%)
Linear Program (83%)
Newton's Method (50%)
Penalty Function (33%)
Linear Equation (33%)
Numerical Experiment (16%)
Input Data (16%)
Subproblem (16%)
Iterative Procedure (16%)
Conjugate Gradient Method (16%)
Learning System (16%)
Mathematics
Interior Point (100%)
Alternating Direction Method of Multipliers (100%)
Linear Programming (100%)
Linear Program (83%)
Newton's Method (50%)
Minimizes (33%)
Systems of Linear Equations (33%)
Conjugate Gradient Method (16%)
Linear System (16%)
Newton Step (16%)
Numerical Experiment (16%)
Input Data (16%)
Subproblem (16%)
Chemical Engineering
Scalability (100%)
Large-scale Linear Programming (100%)
Learning System (50%)
Linear Systems (50%)