Unifying configuration management across High Performance Computing (HPC) and supporting infrastructure systems is a challenge commonly faced by research computing centers. System operators no longer have the luxury of managing one-off cases manually; automation is essential. Centralized tooling and site-wide configuration promise efficiencies, but the substantial differences between, for example, compute nodes and networking gear can dissuade operators from attempting to manage their entire fleet with a single tool. To solve this, the Minnesota Supercomputing Institute (MSI) at the University of Minnesota assembled an innovative collection of utilities for Institute-wide management of systems, with Puppet 5 as the centerpiece. This solution emerged from the need for more than a dozen system operators and administrators to coordinate changes across a medium- to large-scale data center with over 1500 nodes. A centralized base configuration ensures all systems are in line with University security policies and other compliance needs, while the Puppet infrastructure additionally enables per-cluster or even per-node customizations as needed. This document presents the architecture of MSI’s orchestration and management infrastructure, as well as the workflow followed by operators to provision disparate systems with Puppet and ensure that quality, accountability, and compliance requirements are met. The resulting system is actively used in the day-to-day management of the Institute, including persistent infrastructure services, cluster head and compute nodes, and even networking equipment. The workflow described herein is recommended for similar research computing institutions, including those with as few as five nodes and two operators.