#KeyLIMEPodcast 355: Can we see into the brain of a trainee by looking at their notes?


Today’s paper takes a look at written notes as a way of assessing a trainee’s clinical reasoning. However, the authors point out that feedback on notes is infrequent, and that the validated instruments for rating them have varying degrees of evaluation of reasoning. Therefore, they set out “to develop a valid and reliable assessment tool for clinical reasoning documentation.”

Learn more about their work here.

———————————————————————–

KEYLIME SESSION 355

Listen to the podcast

Reference

Schaye et al. Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback. J Gen Intern Med. 2022 Feb;37(3):507-512.

Reviewer

Linda Snell (@LindaSMedEd)

Background

I am an internal medicine specialist – clinical reasoning is my bread and butter: doing it, teaching it, and yes, assessing it. I usually do this in discussion with learners or by looking at the consult note, but I do not have a useful concept map or model to help me. Written notes are a way of assessing a trainee’s clinical reasoning, and this has made its way into competency frameworks, e.g. the ACGME Internal Medicine milestone “appropriate utilization and completion of medical records including effective communication of clinical reasoning.” The authors say that feedback on notes is infrequent, with one barrier being the lack of a shared mental model for high-quality clinical reasoning and the variability of the assessment tools that exist. The validated instruments for rating notes evaluate reasoning to varying degrees.

The IDEA tool does have “a robust assessment of clinical reasoning documentation” but lacks descriptive anchors for each domain, which might threaten reliability. The IDEA domains are: (I) Interpretive summary, (D) Differential diagnosis with commitment to the most likely diagnosis, (E) Explanation of reasoning in choosing the most likely diagnosis, (A) Alternative diagnosis with explanation of reasoning.

Purpose

“…to develop a valid and reliable assessment tool for clinical reasoning documentation building off the IDEA assessment tool… the process of developing and validating the Revised-IDEA assessment tool with standard setting for item rating in order to … increase reliability and create a shared mental model for feedback on clinical reasoning documentation.”

Key Points on the Methods

Methods well described and mostly understandable:
Used four of the sources of evidence in Messick’s validity framework: content validation (do items reflect the construct they are intended to measure), response process (how well the answer reflects the observed performance), internal structure (the reliability of scores and the relationships among items), and consequences (the impact, beneficial or harmful, of the assessment itself and the decisions and actions that result).

content validation – panel of experts
response process – piloted the tool, gathered feedback, and revised it
internal structure – inter-rater reliability on 282 notes
consequences – cut-off for what constitutes good documentation

There are many methods for determining pass/fail. “Methods for standard setting are categorized as relative methods or absolute methods. Relative methods compare the students’ performance to each other, while absolute methods refer to an external reference point. Absolute methods are preferable.” Hofstee is a compromise method, between absolute and relative, that prevents failing too many candidates: an expert panel is polled on the minimum and maximum acceptable cut scores and failure rates, and these bounds are incorporated into the determination of the cut score, as sketched below.
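To make the Hofstee logic concrete, here is a minimal sketch in Python of how a compromise cut score can be computed. This is an illustration, not the authors’ procedure: the panel bounds, the 0–10 score range, and the simulated note scores are all hypothetical.

```python
# A minimal sketch of the Hofstee compromise method.
# All panel bounds and scores below are hypothetical illustrations.

import numpy as np

def hofstee_cut_score(scores, c_min, c_max, f_min, f_max, step=0.5):
    """Return the cut score where the observed fail-rate curve crosses the
    diagonal from (c_min, f_max) to (c_max, f_min)."""
    cuts = np.arange(c_min, c_max + step, step)
    # Observed fraction of examinees who would fail at each candidate cut.
    fail_rate = np.array([np.mean(scores < c) for c in cuts])
    # The Hofstee diagonal: the acceptable failure rate falls linearly from
    # f_max (at the minimum acceptable cut) to f_min (at the maximum).
    diagonal = f_max + (f_min - f_max) * (cuts - c_min) / (c_max - c_min)
    # Pick the candidate cut where the two curves are closest.
    return cuts[np.argmin(np.abs(fail_rate - diagonal))]

# Hypothetical note scores on a 0-10 scale.
rng = np.random.default_rng(0)
scores = rng.integers(0, 11, size=282)
cut = hofstee_cut_score(scores, c_min=4, c_max=8, f_min=0.2, f_max=0.6)
print(f"Hofstee cut score: {cut}")
```

The diagonal encodes the panel’s compromise: at the lowest acceptable cut score they will tolerate the highest failure rate, and vice versa; the observed score distribution then determines where along that diagonal the final cut score lands.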

In summary, they took the old IDEA, content-validated it to get a newer IDEA, then piloted and revised that to get the newest IDEA, for which they established inter-rater reliability and a cut-off for what constituted high- vs. low-quality documentation of clinical reasoning.

Key Outcomes

content validation – the newest ‘Revised-IDEA’ keeps the same four core domains as the original, with more detailed descriptions in the prompts and descriptive anchors for the Likert rating scale in each of the four domains, giving a total maximum score of 10.

\"\"

response process – minimal faculty training required
internal structure – inter-rater reliability 0.84; Cronbach α was 0.53, indicating moderate agreement between item scores. Agreement between the D, E, and A scores was higher, with a Cronbach α of 0.69.
consequences – a cut-off score of ≥6 was determined to indicate high-quality clinical reasoning, and 53% of notes were rated as high quality (a toy worked example follows below)
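As a toy worked example (not the authors’ code), the sketch below computes Cronbach’s α from domain scores and applies the ≥6 cut-off. The per-domain maxima are assumptions for illustration only; the post states only that the four Likert-anchored domains sum to a maximum of 10.

```python
# A minimal sketch of Cronbach's alpha and the >=6 cut-off.
# Domain maxima and all scores below are hypothetical illustrations.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_notes, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each domain
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical I, D, E, A domain scores for four admission notes.
scores = np.array([
    # I, D, E, A
    [3, 3, 2, 2],
    [1, 2, 1, 0],
    [2, 3, 2, 1],
    [0, 1, 0, 0],
])

print("alpha, all four domains:", round(cronbach_alpha(scores), 2))
print("alpha, D/E/A only:      ", round(cronbach_alpha(scores[:, 1:]), 2))

# Consequences: per the post, a total score >= 6 flags high-quality
# clinical reasoning documentation.
totals = scores.sum(axis=1)
print("rated high quality:", 100 * (totals >= 6).mean(), "%")
```

Note that α measures the internal consistency of the item scores, not rater agreement; the 0.84 inter-rater figure is a separate statistic comparing raters, which this sketch does not simulate.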

‘The ability to formulate a problem representation or interpretive summary (I) is in itself an essential skill and distinct from the ability to produce a prioritized differential diagnosis that is explained and justified (D, E, A).’
This means you can get 0 on the first and ace the rest and pass… is this right?

Half the trainees were rated lower than the cut-off – is this what was expected?

Key Conclusions

The authors conclude… ‘The Revised-IDEA assessment tool is a reliable and easy-to-use assessment tool for feedback on clinical reasoning documentation, with descriptive anchors that facilitate a shared mental model for feedback.’

Spare Keys – Other take home points for Clinician Educators

A good example of alignment among the sections of a paper: purpose, methods, results, and conclusions all line up.

Access KeyLIME podcast archives here

The views and opinions expressed in this post and podcast episode are those of the host(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.
