We present a new method for online selection of the penalty parameter for the alternating direction method of multipliers (ADMM) algorithm. ADMM is a widely used method for solving a range of optimization problems, including those that arise in signal and image processing. In its standard form, ADMM includes a scalar hyperparameter, known as the penalty parameter, which typically must be tuned to achieve satisfactory empirical convergence. In this work, we develop a framework for analyzing the ADMM algorithm applied to a quadratic problem as an affine fixed-point iteration. Using this framework, we derive a new method for automatically tuning the penalty parameter by detecting when it has become too large or too small. We analyze this and several other methods with respect to their theoretical properties, specifically robustness to problem transformations, and their empirical performance on several optimization problems. Our proposed algorithm is based on a theoretical framework with clear, explicit assumptions and approximations, is theoretically covariant or invariant to problem transformations, is simple to implement, and exhibits competitive empirical performance.
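For concreteness, a minimal sketch of the standard scaled-form ADMM iterations for a generic problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ is given below; this generic form (with symbols $f$, $g$, $A$, $B$, $c$, $\rho$, and $u$ chosen here purely for illustration) only indicates where the penalty parameter $\rho$ enters the updates, and is not specific to the quadratic setting or the tuning method analyzed in this work.
\begin{align*}
x^{k+1} &= \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{\rho}{2}\,\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2, \\
z^{k+1} &= \operatorname*{arg\,min}_{z} \; g(z) + \tfrac{\rho}{2}\,\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{align*}
where $\rho > 0$ is the penalty parameter and $u^{k}$ is the scaled dual variable.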