HPC software is almost the definition of research software. All too often, however, HPC research software becomes orphaned or lost over time. The work of building, maintaining, and delivering research software now carries a formal title: Research Software Engineer (RSE). The role combines two important skills: software expertise and an understanding of research.
Recently, well-attended conferences such as RSECon23 and USRSE23, along with the growth of an international community, indicate rising interest in this emerging field.
In the US, the US Research Software Sustainability Institute (URSSI) has been established to improve the recognition, development, and use of software for a more sustainable research enterprise. URSSI fosters collaboration in developing education, outreach, and software services that emphasize open, transparent, reproducible, and cooperative practices for research software.
The following is republished (with permission) from the URSSI website and provides some first-hand perspective on scientific software careers. As background, you can consult the two previous blog posts, Charting the Course: Policy and Planning for Sustainable Research Software and Elevate Research Software: Co-creating a Digital Roadmap. — HPCwire editorial staff
The URSSI Charting the Course project convened a focused participatory workshop at the IEEE eScience conference in October 2023 in Cyprus to gather international perspectives on key challenges in the research software field. At this workshop, a nuanced, in-depth discussion unfolded around the metrics and incentives that shape careers in research software internationally. This helped to clarify key issues and potential solutions that have been explored in different countries. Participating in this workshop were professionals involved in research software at different levels, including two individuals who manage large teams of research software engineers (RSEs) in different countries. The other participants were two early-career researchers working with research software and a social science PhD. These differences in participants’ profiles provided a diverse range of perspectives on the topic of advancing research software careers.
Beyond the “Fancy Digital Receipt”
One participant – an early-career researcher in computational biology – argued that insofar as software is a project “deliverable”, it often shares the limelight with academic papers as markers of successful investment by the funder. However, this researcher noted that academic papers serve as a “fancy digital receipt” of research software work—a formal yet detached record of the actual work, which is the software itself. This observation underscored the persistence of the perceived need for research software professionals to adapt their work to traditional academic metrics, despite high-level policies that indicate research software contributions should be valued in and of themselves.
Communicating Research Software's Contribution
Another participant – working on software for a major supercomputer project – emphasized the role of effective communication when navigating ambiguities and challenges affecting promotion and other key aspects of university life. While software is “a very critical piece,” its impact is amplified when disseminated through multiple channels—from academic papers to user group meetings. This multi-faceted approach to sharing knowledge highlights the evolving criteria for what constitutes “valuable” research output.
The Evaluation Gap and Importance of Social Skills
The conversation then shifted to the challenges of measuring software’s impact. A participant lamented the absence of a robust, evidence-based system, describing the current evaluation framework as “loosey-goosey.” Participants noted a reliance on self-reported claims rather than concrete metrics, arguing for more reliable methods, such as testimonials or third-party validation. One participant involved in annual reviews emphasized that technical prowess alone doesn’t define a senior research software engineer (RSE). The ability to “hold their own” in academic conversations and mentor junior team members is equally vital. This perspective could broaden the evaluation criteria to include social and communication capabilities that may not be easily quantifiable but are essential for career growth.
Operational Challenges in Scaling Teams
The workshop also explored the operational aspects of ensuring recognition of contributions in RSE teams, especially as they scale. This discussion emphasized the need to maintain transparency to document research software professionals’ accomplishments. “Our entire team’s working system lives on GitHub,” said one senior manager participant. This approach brings “continuity” to the team’s interactions, but shows its limitations as teams expand. “I’m already feeling the growing pains,” admitted another participant, emphasizing the need for more formalized systems of tracking research software accomplishments to ensure fair recognition and promotion for RSEs.
The RSE Hiring Puzzle
When it comes to hiring, the criteria for evaluation take an interesting turn. "The thing I prize above all else is intellectual curiosity," noted one senior team member, who looks not for the "mythical 10X coder" but rather for individuals who can engage effectively with academics from diverse domains. This senior participant noted that one of the most successful RSE hires ("absolutely phenomenal") had a fine arts background and a PhD focusing on a traditional craft.
The Role of Specialists in RSE Teams
As the conversation evolved, the role of specialists in growing teams was acknowledged. While smaller teams need generalists who can “pitch in with everything,” larger teams can afford specialists. “Maybe when we have a hundred people and I can afford those specialisms, maybe we hire more of those,” one participant noted.
Conclusion: A Field in Flux
The workshop offered a microcosm of the broader shifts taking place in computational research. It highlighted the need for a more nuanced, multi-faceted approach to evaluating research software contributions, one that goes beyond papers and citations to include direct software contributions and communication skills. As the field continues to evolve, these discussions will undoubtedly shape the frameworks that define success and impact in research software. The URSSI Charting the Course project is working to identify and promote ways of advancing these discussions. One way to get involved is to join the platform for collaborative engagement we have initiated on GitHub, where ideas for improving the status, sustainability, and impact of research software are being discussed.
Eric A. Jensen is on staff at URSSI and Daniel S. Katz is co-leader of URSSI; both are currently at NCSA at Illinois.
This article originally appeared in HPCwire.