
Multidisciplinary Team Training


Deering S, Johnston LC, Colacchio K. Multidisciplinary Teamwork and Communication Training. Seminars in Perinatology. 2011 Apr;35(2):89-96.

This article highlights the evidence for and benefits of multidisciplinary team training using simulation to improve patient outcomes. The evidence-based curricula referenced include TeamSTEPPS (http://teamstepps.ahrq.gov), Anesthesia Crisis Resource Management (ACRM, http://med.stanford.edu/VAsimulator/acrm/), and Team-Oriented Medical Simulation. These focus on the key concepts of teamwork, which encompass the behavioral dimensions of leadership, situation monitoring, mutual support, and communication. The authors present a perinatal perspective on instituting teamwork training, including an overview of how to plan and implement a program. The advantages and disadvantages of in situ simulation (in the providers’ own clinical area) versus simulation center-based training are also discussed, as well as the necessary resources. Beginning with a site assessment, including a culture survey, and obtaining buy-in from leadership is imperative to the success of such a training program.


Eppich W, et al. Simulation-Based Team Training in Healthcare. Sim Healthcare. 2011;6:S14-S19.

This article is a call to arms. Paraphrasing what I think it is about: “don’t do team training just to do team training; make it more worthwhile by following a few key steps.”

1. Do a needs assessment of the individuals, teams, and organizations involved.  Make the effort worthwhile for everyone.

2. Have a well-designed scenario to test underlying KSAs (knowledge, skills, and attitudes) and targeted objectives and competencies.

3. Feedback remains important; debriefing remains important.

4. The simulation community should be rigorous about reporting specific training conditions and what is manipulated during a simulation. Simulation needs the same reporting standards as drug and device trials.

5. The relationship between simulation-based team training and clinically relevant task work needs more precise definition. As for interprofessional vs. discipline-specific team training: no one knows if one is better or worse, or if each has its own advantages in different settings.

6. We need to be able to link institutional outcomes to our simulation-based team training, which is much easier to type than to actually measure.


Assessing, assessments, and feedback…

Boulet JR, Jeffries PR, Hatala RA, Korndorffer JR, Feinstein DM, Roche JP. Research Regarding Methods of Assessing Learning Outcomes. Sim Healthcare. 2011;6:S48-S51.
Simulation-based assessments are used for both formative and summative assessment of healthcare providers. The objective of this article was to look at how their use is supported and to provide direction in the form of consensus recommendations for research. The authors delve into “the four components of Kane’s inferential chain – Scoring, Generalization, Extrapolation, and Decision/Interpretation.” The need for both reliability and validity of simulation-based assessments is emphasized within a brief review of existing research (as of 2011) and opportunities for additional work. Their recommendations target five areas for research: measurement error, developing scoring rubrics, supporting the underlying theories for learning with simulation, translation of simulation-based learning to the clinical environment, and the impact of implementing these assessments for healthcare.

Feedback, assessments and more…

van de Ridder JM, Stokking KM, McGaghie WC, ten Cate OTJ. What Is Feedback in Clinical Education? Medical Education. 2008;42:189-197.

The purpose of the research is to establish an operational definition of “feedback” for purposes of research and improved communication.  The authors reviewed the general, social science, and medical education literature for conceptual formulations of and approaches to feedback.  Three concepts dominate definitions of feedback in the literature: feedback as information, feedback as a reaction, and feedback as a cycle.

The authors construct a definition of feedback in clinical education as “specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance.”

Looking at the elements of this definition allows one to categorize feedback as weak or strong (this is different from effective or ineffective).  Strong feedback, for example, includes specific information gleaned from observations of tasks or elements for which there is an explicit standard.  It derives from personal observation by experts with the aim of performance improvement.

This paper was published 7 years ago.  During that time the feedback attributes defined as “strong” have become the foundation of how we think about performance feedback in simulation.

So that’s STRONG feedback.  And effective feedback has to be strong, but strong feedback is not necessarily effective.  What’s the difference?

Brydges R, et al. Linking Simulation-Based Educational Assessments and Patient-Related Outcomes: A Systematic Review and Meta-Analysis. Acad Med. 2015;90:246-256.

This is the first review of its kind to evaluate the evidence linking educational surrogates with corresponding assessments in the workplace. It examines the relationship between simulation-based and patient-related assessments, whether there is validity evidence for these outcomes, and the quality of methods and reporting in this body of research.

Almost 12,000 articles were screened and only 33 were included in the review; these articles included a total of 1,203 participants. Patient-related outcomes were defined as provider behaviors and patient outcomes.

This is a long article. If I think about what the message is: there seems to be a suggestion that provider behaviors, time behaviors, and patient outcomes each correlate with simulation outcomes to varying degrees (higher to lower, respectively). However, multiple instruments are used to define/rate simulation outcomes, and not all of these are validated.

The authors raise concerns about publication bias in the results, but there is no real way to exclude it.

Future ideas include recommendations to run these educational trials the way we would drug trials, with consistency in ratings/assessments, adequate sample sizes, etc.