Utilization of a New Customizable Scoring Tool to Recruit and Select Pulmonary/Critical Care Fellows
Abstract
Background: Finding the ideal candidate for a residency/fellowship program has always been difficult, and finding the "perfect" match is the ultimate goal; however, many factors affect obtaining that match. In the past, each attending physician reviewed roughly 20 to 50 Electronic Residency Application Service (ERAS) applications and ranked them into three categories: high, middle, or low. Depending on the ranking, the applicant would be invited for an interview. After the interview, the applicants' files (ERAS and interview) would be reviewed and ranked by the faculty as a group. This process was time-consuming, highly subjective, and minimally objective. We therefore sought a more objective and less time-consuming way to assess and rank applicants. By creating a customizable scoring tool, we were able to screen applicants to our pulmonary/critical care fellowship program efficiently and more objectively.

Objectives: We developed a customizable scoring tool that weights components of the ERAS application and the interview, allowing residency/fellowship programs to create a final rank list consistent with the program's desired applicants.

Methods: Two hundred sixty pulmonary/critical care fellowship applications were reviewed from 2013 to 2018. In 2018, we used our new scoring rubric to create a rank list and to rescore previous applicants. The traditional and new lists were compared with the final rank list submitted to the National Resident Matching Program (NRMP) for 2018 to ascertain which scoring method correlated best with it. We also obtained feedback from eight faculty members who had reviewed applicants with both scoring tools.

Results: The novel customizable scoring tool correlated positively with the final rank list submitted to the NRMP (r = 0.86) and showed a better correlation to the final rank list than the traditional method. Faculty (6/6, 100%) responded positively to the new tool.

Conclusions: Our new customizable tool has allowed us to create a final rank list efficiently and with greater focus on our faculty's desired applicants. We hope to assess and compare the quality of applicants matched through this scoring system and the traditional method by using faculty evaluations, milestones, and test scores.
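The approach described above can be sketched in miniature: a weighted sum over rubric components yields each applicant's score, and the resulting rank list can be compared to a reference list with a Pearson correlation, as in the r = 0.86 result. This is a hypothetical illustration only; the component names, weights, and score scale below are assumptions, not the authors' actual rubric.

```python
# Minimal sketch of a customizable weighted scoring rubric.
# Component names, weights, and the 0-10 score scale are illustrative
# assumptions, not the published tool's actual configuration.

def score_applicant(components, weights):
    """Return the weighted sum of rubric component scores."""
    return sum(weights[name] * components[name] for name in weights)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two numeric lists
    (e.g. two rank lists for the same applicants)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Example: one applicant scored on four hypothetical components.
weights = {"usmle": 0.2, "letters": 0.3, "research": 0.2, "interview": 0.3}
applicant = {"usmle": 8, "letters": 9, "research": 6, "interview": 7}
print(round(score_applicant(applicant, weights), 2))  # -> 7.6

# Identical rank lists correlate perfectly.
print(pearson_r([1, 2, 3, 4], [1, 2, 3, 4]))  # -> 1.0
```

Because the weights dictionary is the only configuration, a program can tune it to emphasize whichever components its faculty value most, which is the "customizable" aspect the abstract describes.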