A national study of neurosurgical residency competency development

In a retrospective observational cohort study using national milestone data from 2,478 neurosurgery residents across 120 U.S. programs (2018–2022), analyzed with descriptive statistics, Khalid et al. evaluate the progression of neurosurgical residents across the six ACGME core competencies and 20 subcompetencies, specifically assessing how many residents reach level 4 proficiency by the final year (PGY-7) and identifying patterns of co-occurring competency deficiencies. They conclude that neurosurgery residents demonstrate substantial milestone progression throughout training, but that gaps remain, particularly in specialized clinical skills and self-assessment (Reflective Practice). Nearly 45% fail to reach level 4 in at least one subcompetency by PGY-7, and these deficiencies are concentrated in areas often covered during fellowship training (e.g., epilepsy, pain, peripheral nerve). The authors therefore argue that residency programs may need to enhance exposure to these areas or redefine competency expectations, and they recommend targeted educational interventions and specialized procedural training to ensure that all residents achieve the competencies necessary for independent practice 1).


This study mistakes numerical progression in a checklist for actual neurosurgical maturity. “Milestones” are treated as objective truths, when in reality they are administrative fictions imposed top-down by ACGME to simulate accountability. The implicit assumption—that every resident must hit an arbitrary “level 4” to be considered competent—is never questioned. The authors do not interrogate what level 4 means, who defines it, or whether it maps to meaningful clinical outcomes. Instead, they deliver descriptive statistics masquerading as insights.

Despite having access to a national database, the analysis is reductively descriptive. There’s no regression modeling, no correction for confounding variables, and no attempt to validate the milestone system externally. The use of pairwise comparisons and VMR adds noise, not clarity. The failure to contextualize scores within program cultures, case volumes, or faculty variability renders the findings educationally hollow.

The finding that 44.6% of residents do not reach level 4 is presented as a failure of training, rather than a failure of the metric itself. The authors seem unaware that not all residents pursue fellowship-level expertise in epilepsy or pain surgery—yet they are penalized for not achieving mastery in these subfields. This reveals a structural disconnect between generalist training and subspecialist benchmarks. It also underscores the hubris of competency-based models: trying to compress the unpredictable journey of surgical maturation into a bureaucratic script.

Instead of reflecting on the validity of ACGME’s own framework, the authors recommend—predictably—more interventions, more assessments, and more “targeted training.” This is a self-reinforcing loop of bureaucratic optimism, where the solution to every failed metric is to double down on the metric itself. Not a word is said about burnout, moral injury, or whether this system produces better surgeons—only whether it produces “level 4s.”

This paper is not without value—it’s an excellent example of educational theater: highly structured, outwardly scientific, and entirely performative. It exposes a deep conceptual crisis at the heart of American graduate medical education: the belief that measuring enough things often enough will somehow guarantee competence.

If nearly half of graduating neurosurgeons aren’t meeting arbitrary milestones, the milestone system—not the residents—should be on probation.


1) Khalid SI, Mehta AI, Atwal G, Hogan SO, Park YS, Charbel FT. A national study of neurosurgical residency competency development. J Neurosurg. 2025 Jun 20:1-7. doi: 10.3171/2025.4.JNS243272. Epub ahead of print. PMID: 40540798.

1 thought on “A national study of neurosurgical residency competency development”

  1. What Neurosurgery Wiki is doing here is deeply flawed.

    Turning a study like this into a spectacle of intellectual superiority—dressed in sarcasm and layered cynicism—doesn’t serve education, critique, or the profession. It undermines serious academic discourse by replacing constructive analysis with performative takedowns.

    Critique should illuminate, not posture.
    Irony is no substitute for rigor.
    And tearing down a paper without offering a viable framework for improvement isn’t critical thinking—it’s vanity.

    This is not how neurosurgery—or scholarship—moves forward.