One of the goals of data-intensive research, in any field of study, is to grow knowledge over time as additional studies contribute to collective knowledge and understanding. Two steps are critical to making such research cumulative: individual research results must be documented thoroughly and based on data made available to others (to allow replication and meta-analysis), and the research itself must be carried out correctly, following standards and best practices for coding, missing data, algorithm choices, algorithm implementations, metrics, and statistics. This work addresses a growing concern that the Recommender Systems research community (which is uniquely equipped to address many important challenges in electronic commerce, social networks, social media, and big-data settings) faces a crisis in which a significant number of research papers lack the rigor and evaluation needed to be properly judged and therefore contribute little to collective knowledge. We advocate addressing this issue through the development and dissemination (to authors, reviewers, and editors) of best-practice research methodologies, resulting in specific guidelines and checklists, as well as through tool development to support effective research. We also plan to assess the impact on the field, with an eye toward supporting similar efforts in other data-intensive specialties.