To improve reproducibility, listen to graduate students and postdocs

The National Institutes of Health (NIH) should implement a national exit interview portal to collect feedback from mentees on their experiences.

A recent Nature survey asked 1,500 scientists how best to address the irreproducibility crisis. One of the most common suggestions for improvement was “better mentoring.” This raises two important questions: How is the current mentor-mentee relationship contributing to irreproducibility? And how can we change things for the better?

The mentor-mentee relationship is central to modern scientific research. As science developed as a sector in its own right, a system to make more scientists had to evolve. Mentees (grad students and postdocs) perform experiments and analyze data while mentors evaluate results and come up with new ideas. Ideally, this division is not absolute — everyone should have the opportunity to contribute to the entire process.

In my opinion, this division has introduced a level of complexity that can shade into dysfunction. Dishonesty, poor communication, a lack of transparency and insufficient managerial training for scientists can create a toxic environment and exacerbate irreproducibility.

Something I often hear from other scientists is that misconduct mostly occurs when a postdoc or student confirms their boss’s pet theory. Some principal investigators rarely look at raw data, assuming that their attention is better directed elsewhere, while the mentee is driven by the need to generate flashy positive results. The temptation of these apparent win-win situations is too strong in many laboratories: one faked positive result can make the boss happy and advance the academic’s theory and career (temporarily or otherwise).

If a mentee’s scientific career is held hostage to a mentor’s success, science will suffer. Scientific inquiry must have a higher goal than advancing the career of academics. Mentees often accept poor working conditions, broken promises and the egocentricity of their bosses because of the need to move on to the next stage in the academic race. This breakdown in mentorship leads to the creation of a toxic social environment where science is no longer noble.

In my opinion, these scenarios and many like them are a direct consequence of the system’s indifference to giving mentees agency. If NIH leadership is serious about improving reproducibility, it should empower mentees. One way to accomplish this is to create an online portal where graduate students and postdocs working on NIH grants can provide confidential feedback about their experiences in labs after they leave.

Feedback should be standardized, with an emphasis on objective criteria and the achievement of career goals. The feedback collected should not be used to penalize labs. Instead, it should be used during the grant evaluation process to reward and prioritize top mentors. In my view, this simple addition would nudge scientists to take mentorship seriously.

This approach would also encourage mentors to keep their labs at a manageable size. Currently, departmental bulletin boards can be self-interested battlegrounds where professors compete over who has the largest group, which in their minds correlates with success. Professors are hesitant to remove faded photos of summer interns who left five years ago, but eager to add the postdoc who started yesterday.

A recent move by the NIH to cap the amount of research money any individual investigator receives would have been a great step in this direction, and could have led to smaller labs and improved mentorship and reproducibility. Unfortunately, this measure was reversed following pressure from top investigators.

Some might say that feedback channels already exist, such as mentor awards given by institutions. However, these awards are not sufficient, since they require letters of nomination that are usually written by trainees who directly benefit from their relationship with their mentors. Evidence gathered this way would not stand up during peer review.

Recently, the head of the postdoc affairs office at a reputable institution shared with me that a recipient of a university-wide mentorship award was known to be an unreliable mentor, a clear indication that the process does not work. News of the award is still advertised on the investigator’s website to lure in talent. Academic ‘street wisdom’ suggests that good mentors are those who list all the previous trainees who came through their labs, alongside their contact information, rather than their awards.

Creating a feedback system to collect information from employees on managers is not revolutionary. It is a common practice even in the worst of companies. Solving the problems of academia may require borrowing best practices already established in the private sector.