
Multidisciplinary Team Training

 

Deering S, Johnston LC, Colacchio K. Multidisciplinary Teamwork and Communication Training. Seminars in Perinatology. 2011 Apr;35(2):89-96.

This article highlights the evidence for and benefits of multidisciplinary team training using simulation to improve patient outcomes. The evidence-based curricula referenced include TeamSTEPPS (http://teamstepps.ahrq.gov), Anesthesia Crisis Resource Management (ACRM, http://med.stanford.edu/VAsimulator/acrm/), and Team-Oriented Medical Simulation. They focus on the key concepts of teamwork, which encompass the behavioral dimensions of leadership, situation monitoring, mutual support, and communication. The authors present a perinatal perspective on instituting teamwork training, including an overview of how to plan and implement a program. The advantages and disadvantages of in situ simulation (in the providers' own clinical area) versus simulation center-based training are also discussed, as well as the necessary resources. Beginning with a site assessment, including a culture survey, and obtaining buy-in from leadership is imperative to the success of such a training program.

 

Simulation-Based Team Training in Healthcare. Eppich W, et al. Sim Healthcare 6:S14–S19, 2011

This article is a call to arms – paraphrasing what I think it is about: "don't do team training just to do team training; make it more worthwhile by following a few key steps."

1. Do a needs assessment – of the individuals, teams, and organizations.  Make the effort worthwhile for everyone.

2. Have a well-designed scenario to test underlying KSAs and targeted objectives and competencies.

3. Feedback remains important, debriefing remains important

4. The simulation community should be rigorous about reporting specific training conditions and what is manipulated during a simulation – simulation needs the same reporting standards as drug and device trials.

5. The relationship between simulation-based team training and clinically relevant task work needs more precise definition… Interprofessional vs. discipline-specific team training – no one knows if one is better or worse, or if they each have their own advantages in different settings.

6. We need to be able to link institutional outcomes to our simulation based team training – much easier to type this sentence than to actually measure.

 


Scenario Design – 2/27/15


Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. David A. Cook. Medical Teacher. 2013; 35: e867–e898

Simulation in healthcare education: A best evidence practical guide. AMEE Guide No. 82. Ivette Motola. Medical Teacher. 2013; 35: e1511–e1530
This best evidence practical guide for simulation in healthcare education is highly recommended reading for educators. It is part 2 of the Guide and is focused on "the educational principles that lead to effective learning." Curriculum integration, feedback and debriefing, deliberate practice, mastery learning, capturing clinical variation, individualized learning, and finally approaches to team training are discussed. Each topic is defined and explored in relation to its effective use in simulation, with practical implementation points and challenges that may be encountered.

 

Developing High-Fidelity Health Care Simulation Scenarios: A Guide for Educators and Professionals. Guillaume Alinier. Simulation & Gaming 2011; 42(1): 9–26

This is a conversational piece on important considerations in scenario design that concludes with a sample scenario design template. It’s a good article for prompting new simulation educators (or just disorganized people like myself (-:) to think through how we build scenarios to address specific objectives and challenge learners in specific ways. Figure 1 depicts general courses of action and prompts the educator to think about actions or conditions that result in a change in patient condition as the scenario unfolds. It’s a good synthesizing graphic.
The section on "Preparing Scenarios in an Organized Manner" is particularly helpful as a practical guide to planning a smooth-running scenario. The Salas article (see SMARTER, two entries down) and the TEACH Sim article resonate better with me and reflect the human factors and observational assessment orientation, but this one is easily digestible and says much of the same stuff in different ways.

The Template of Events for Applied and Critical Healthcare Simulation (TEACH Sim) A Tool for Systematic Simulation Scenario Design. Benishek & Salas et al. Sim Healthcare 10:21-30, 2015.

The authors review and compare five existing scenario design templates and then feature their own design, TEACH Sim. This is a great evidence-based tool for designing simulation scenarios. In fact, we were so impressed that we have already started using it (with permission from Eduardo Salas, PhD). It has a logical flow that is easy to understand and, with the addition of our own simulation operations flow chart for high-fidelity scenarios, includes all the components for effective scenario design. Instructions for their template are also included.

A Measurement Tool for Simulation-Based Training in Emergency Medicine: The Simulation Module for Assessment of Resident Targeted Event Responses (SMARTER) Approach. Rosen MA, et al. Sim Healthcare 3:170–179, 2008

An event-based approach to training, measurement, assessment, and feedback. It provides a systematic way to do several things: "1. Develop and maintain links between simulation scenario events, performance measures, and ACGME core competencies, 2. generate diagnostic measurement that feeds the processes of providing corrective feedback to accelerate skill acquisition and provide learning outcomes data rooted in the ACGME core competencies, 3. to provide opportunities to perform that are structured to maximize learning and assessment opportunities"

It's an 8-step process.

Each scenario is designed to sample a specific part of the core competencies' content (you can't measure everything at once — but you can do multiple sims, or use multiple different observations outside of sim, to triangulate on the components of the core competency you are trying to grade).

A practical guide, with well-delineated examples at multiple points in the process of scenario creation. The examples are focused on EM and perhaps critical care cases, but could be abstracted to other disciplines for the formation/generation of scenarios.

Personal thoughts: Why didn't I read this 7 years ago? Check out http://www.aliem.com/category/non-clinical/simulation/ for some nice examples from the Academic Life in Emergency Medicine crew of simulation cases linked to ACGME core competencies.

Three CHSE Special – Evaluating the Impact of Medical Simulation on Patient Outcomes: Translational Science?

3 up for CHSE and 3 certified — I’d like to think of myself as a nice Brie cheese perhaps….

Anyhow –  we started talking about this last week:

 

From McGaghie et al. – Science Translational Medicine, February 2010, Vol. 2, Issue 19

                      T1                        T2                       T3
Increased/improved    KSAs, professionalism     Patient care             Patient outcomes
Target                Individuals and teams     Individuals and teams    Individuals and public health
Setting               Simulation lab            Clinic and bedside       Clinic and community

Other views:

Kirkpatrick: Reaction/Learning (T1) – Behavior (T2) – Results (T3)

Gaba:
– T1: So you taught them something – can they do it when you give them the simulation?
– T2: You taught them something in simulation – when they go out on the wards, do they really do it the way you taught them? And can we measure this?
– T3: Does it really change patient outcomes? Does it save money?

Just because you show something works in one study, does it mean it can be disseminated and it can be done by others?

Does it really change the outcome of populations as a whole to do these things?

 

 

Constructivism and Case Based Learning

Simulation is a form of case based learning.

Case based learning grew out of the problem based learning discipline. PBL employs an open inquiry approach, in which students independently discover knowledge within a domain. Self directed learning is central to a PBL curriculum. The knowledge base is integrated through the discovery of its applicability across cases and problems. Barrows argues that the core of problem based learning lies in allowing students to “analyze and resolve the problem as far as possible before acquiring any information needed for better understanding.”(1)

PBL’s proponents have argued that it encourages lifelong learning, fosters the development of superior problem solving skills, and is firmly grounded in adult learning theory (self-directed, building on prior experience, relevant to the lives and work of learners). Critics of problem-based learning have argued that the open inquiry approach is inefficient, wastes faculty time, and leads to sometimes inaccurate or erroneous constructs which the learner establishes out of inexperience or lack of knowledge. Despite its theoretical strengths, learning outcomes in problem based learning curricula are mixed, leading educators to speculate on and attempt to mitigate the shortcomings of PBL and the reasons for the disconnect between theory and outcome.(2)

Case based learning leverages the theoretical underpinnings of PBL, but adopts a guided inquiry approach, in which the expertise of the instructor is used to guide the discussion toward relevant and accurate knowledge, and to mitigate group dysfunction that inhibits learning. Case based teaching requires both content expertise and expertise in group facilitation.

Case based learning has its critics as well, and they argue that the “guided” nature of CBL stifles creativity, and that the guidance may well be ineffective unless there is adequate attention to faculty development.

In a comparison of satisfaction and perceived value of PBL vs. CBL at two medical schools, Srinivasan et al reported that students and faculty overwhelmingly preferred CBL. Students felt that CBL was more efficient, provided better opportunity for applying skills learned, and provided valuable feedback.(3)

Simulation is grounded in the philosophies of case based teaching and learning, using the experience of the simulation to provide active engagement with the case, and guided inquiry to achieve the reflective observation necessary for learning.

In her primer on case-based teaching, Golich(4) reviews attributes and advantages of case-based teaching. At its best, simulation will share these attributes. Learners will make and implement decisions by sorting pertinent information from irrelevant information, they will apply prior knowledge to identify core problems, and they will formulate narratives about problems and strategies to address them. The learning outcomes (knowledge, skills and attitudes) match the ability of the case to challenge students in these areas.

Case based learning is a constructivist approach. Constructivism relies on the notions that learning is based on interactions with the environment, that cognitive puzzlement is a powerful stimulus for learning, and that social negotiation is an important contributor to knowledge acquisition.(5) Simulation is grounded in these ideas and in the principles of designing an authentic task, anchoring the learning in a larger problem, and providing opportunity for guided reflection. The idea of social negotiation is particularly interesting in its application to the desired learning for the simulation. Teaching learners to "speak up", to avoid assumptions, and to engage in error correction is a substantial challenge for medical educators. The collaborative learning process gives us an opportunity to reinforce the notion that lack of comment or question implies agreement.

1. Barrows HS. Problem-Based Learning in Medicine and Beyond: A Brief Overview. New Directions for Teaching and Learning 1996; 68: 3-12
2. Onyon C. Problem-based learning: a review of the educational and psychological theory. The Clinical Teacher 2012; 9: 22-26
3. Srinivasan M, et al. Comparing problem-based learning with case-based learning. Acad Med 2007; 82(1): 74-82
4. Golich C. The ABC's of Case Based Teaching. International Studies Perspectives 2000; 1: 11-29
5. Savery JR, Duffy TM. Problem based learning: An instructional model and its constructivist framework. In: Wilson BG, ed. Constructivist Learning Environments: Case Studies in Instructional Design. Englewood Cliffs, NJ: Educational Technology Publications; 1996

Cognitive Load and Overload… 4/10/2015

 

Interesting articles – trying to figure out how all of these ideas apply to medical simulation.

(truth be told we didn’t read all of them – just some…)

1. Manipulation of cognitive load variables and impact on auscultation test performance. Ruth Chen, Lawrence Grierson, Geoffrey Norman.

2. 4C/ID in medical education: How to design an educational program based on whole-task learning. AMEE Guide No. 93.

3. Mental load: helping clinical learners. Geoff White, Clinical Education and Professional Development Unit, School of Primary Health Care, Monash University, Australia.

4. Cognitive load theory in health professional education: design principles and strategies. Medical Education 2010; 44: 85–93.

5. Cognitive Load Theory: Implications for medical education. AMEE Guide No. 86 (more of an overview, with examples).

Extraneous Load – load not essential to the task.
Intrinsic Load – load inherent to the task itself.
Germane Load – the working memory available for learning and for dealing with the extraneous and intrinsic load; these are the elements that allow cognitive resources to be put towards learning/problem solving, i.e., they assist with information processing.

We should think about each of these as we tailor our learning experiences to different levels of learners. In health care – as opposed to PowerPoint slides – there is always going to be a lot of extraneous load, and we do need to teach learners how to deal with this part of the tax on our working memories.

Nice figures in article #4 show how overloading the intrinsic or extraneous load can leave no germane capacity left for learners to work with in a particular situation.
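A crude, back-of-the-envelope way to picture that budget (my own framing, not a formula from any of these articles) is to treat working memory as a fixed capacity that the loads have to share:

\[ \text{germane capacity} \;\approx\; C_{\text{working memory}} \;-\; L_{\text{intrinsic}} \;-\; L_{\text{extraneous}} \]

So if a novice has, say, 7 "chunks" of capacity, and a complex resuscitation case takes up 4 of them (intrinsic) while alarms, unfamiliar equipment, and confederates take another 3 (extraneous), essentially nothing is left over for actual learning. Stripping extraneous load, or simplifying the task, is what frees up germane capacity.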

http://mathewmitchell.net/multimedia/cogload/

A nice alternative way to think about cognitive load in reference to PowerPoint slides (but I can abstract this to similar "noise" in a simulation scenario):

http://theelearningcoach.com/learning/what-is-cognitive-load/

http://theelearningcoach.com/learning/novice-versus-expert-design-strategies/

You have to vary your instructional strategies for novices vs. experts…

Article #2 provides a detailed look at how to provide whole-task learning, using cognitive load theory as its backbone. This is not just simulation but multiple ways of learning, taking a scaffolding approach from single or simple tasks to more complex or multiple tasks, while at the same time (if I am interpreting this right) allowing for more direct coaching at the beginning and more hands-off coaching at the end, as learners go from novice to expert, or at least more advanced. Contained within this schema are continual assessment, feedback, and reflection.

Interesting approach – start with a worked example ("show them what you want them to do"), then break this down into sizable chunks that progressively get more difficult as the learning continues.

I had to read this article several times. I will probably need to read it a few more times.

Article #4 talks about strategies to minimize extraneous load and minimize intrinsic load for novice learners, and how to increase intrinsic load (and thus germane load) as learner expertise increases.

Minimizing extraneous load for novices:
– start with goal-free strategies (generate a list of as many diagnostic possibilities as you can) and move to goal-directed ones (what is the most likely diagnosis).
– start with worked examples (i.e., demonstrations of how to do the task), then move to having the learner complete more and more of the task. I liken learning without worked examples to figuring out new computer software by trial and error – clicking button sequences until you finally get it right. It's inefficient and frustrating. Much better to show the completed task and then let the learner complete a similar one.
– reinforce material by using multimodal presentation.

A deconstructed roadmap for moving from unconscious incompetence to unconscious competence.

Assessing, assessments, and feedback…

Boulet JR, Jeffries PR, Hatala RA, Korndorffer JR, Feinstein DM, Roche JP. Research Regarding Methods of Assessing Learning Outcomes. Sim Healthcare 6:S48–S51, 2011
Simulation-based assessments are used for both formative and summative assessment of healthcare providers. The objective of this article was to look at how their use is supported and to provide direction in the form of consensus recommendations for research. The authors delve into "the four components of Kane's inferential chain – Scoring, Generalization, Extrapolation, and Decision/Interpretation." The need for reliability as well as validity of simulation-based assessments is emphasized within a brief review of existing research (as of 2011) and opportunities for additional work. Their recommendations target five areas for research: measurement error, developing scoring rubrics, supporting the underlying theories for learning with simulation, translation of simulation-based learning to the clinical environment, and the impact of implementing these assessments on healthcare.

Feedback, assessments and more…

What is feedback in clinical education? van de Ridder JM, Stokking KM, McGaghie WC, ten Cate OTJ. Medical Education 2008; 42: 189-97

The purpose of the research is to establish an operational definition of “feedback” for purposes of research and improved communication.  The authors reviewed the general, social science and medical education literature for conceptual formulations of and approaches to feedback.   Three concepts dominating definitions of feedback in the literature are:  feedback as information, feedback as a reaction, and feedback as a cycle.

The authors construct a definition of feedback in clinical education as "specific information about the comparison between a trainee's observed performance and a standard, given with the intent to improve the trainee's performance."

Looking at the elements of this definition allows one to categorize feedback as weak or strong (this is different from effective or ineffective). Strong feedback, for example, includes specific information gleaned from observation of tasks or elements for which there is an explicit standard. It derives from personal observation by experts with the aim of performance improvement.

This paper was published 7 years ago.  During that time the feedback attributes defined as “strong” have become the foundation of how we think about performance feedback in simulation.

So that’s STRONG feedback.  And effective feedback has to be strong, but strong feedback is not necessarily effective.  What’s the difference?

Linking Simulation Based Educational Assessments and Patient Related Outcomes: A systematic review and meta-analysis. Brydges R, et al. Acad Med 2015; 90: 246-256

This is the first review of its kind to evaluate the evidence linking educational surrogates with corresponding assessments in the workplace – examining the relationship between simulation-based and patient-related assessments, whether there is validity evidence for these outcomes, and what the quality of methods and reporting is in this body of research.

Almost 12 thousand articles were screened and only 33 were included in the review; these articles in total included 1,203 participants. Patient-related outcomes were defined as provider behaviors and patient outcomes.

This is a long article. If I think about what the message is, there seems to be a suggestion that provider behaviors, time behaviors, and patient outcomes each correlate with simulation outcomes to varying degrees (higher to lower, respectively). However, there are multiple instruments used to define/rate simulation outcomes, and not all of these are validated.

The authors raise concerns about publication bias in the results, but there is no real way to exclude this…

Future ideas include recommendations to run these educational trials the way we would drug trials, with consistency in ratings/assessments, adequate sample sizes, etc.

Debriefing and Simulation Trail Mix – a little bit of a bunch of things


Alternative Educational Models for Interdisciplinary Student Teams. Judy L. LeFlore. Sim Healthcare 4:135–142, 2009

13 teams were randomized to self-directed learning (described in this paper as allowing students to proceed through the scenario with little or no input from the instructor until after the scenario) with facilitated debriefing vs. instructor-modeled learning (defined in this paper as instructors modeling the appropriate responses and interventions during a simulated team clinical scenario while the students observe, with the experts verbalizing what they are doing and why it works in the scenario, followed by the students participating in a simulation themselves) with modified debriefing afterwards. Team composition: all seemingly novice students – NP, RN, RT, and social work. The debriefing was reportedly performed by someone trained through a comprehensive simulation workshop and debriefing workshop; the exact question styles used were not stated.

Results: Instructor-modeled learning (IML) made no difference on the knowledge test vs. self-directed learning with facilitated debriefing. However, the IML groups had higher satisfaction scores and higher overall teamwork and CRM scores (behavioral assessment checklist); the IML teams also had quicker times to intervention based on the Technical Evaluation Tool used.

Questions from me: People are generally happier when they explicitly know what is expected of them. Did the IML just teach to the test/evaluation? Is that necessarily a bad thing? Does this work better for novice vs. expert learners? How do we model what we want students to do? Video, other approaches, simprov. Personally, I have had our residents watch videos of teams prior to coming into our simulations – crisiscode.org (shoutout! to the Stanford AIM Lab http://aim.stanford.edu/).

Directed self-regulated learning versus instructor-regulated learning in simulation training. Ryan Brydges. Medical Education 2012; 46: 648–656

First – what is Directed Self-Regulated Learning? Well: "It requires a knowledgeable educator to design practice conditions using validated learning principles. A trainee then steps into this structured setting and is given a limited control of a specific aspect of practice and therefore is metacognitively, behaviorally, and motivationally active in the learning experience."

Instructor-regulated learning, as described in their paper, is similar to what many centers do for procedural group training: 1 instructor, 4 residents, everybody gets a turn/gets to demonstrate the skill – then you move on.

45 PGY1 internal medicine residents received a standard pre-training curriculum and standard testing/questionnaires before and after the educational intervention, and were randomly assigned to directed self-regulated learning (DSRL: a solo participant could choose to progress from easy to hard LP simulator models on their own within a 35-minute time frame and could play the pre-training video ad lib during the session, then took the post test, then got 15 minutes of feedback) vs. instructor-regulated learning (IRL) for teaching lumbar puncture using simulation (instructor ratio 1:4; no access to the video during the session, as the instructor was the resource; instructor and students collectively decided when to progress from easy to hard, still within a 35-minute time frame shared by the 4 students rather than 1; then the post test, then 15 minutes of feedback as a group). Everybody then got a 3-month retention test.

23 residents completed the study. Trained/blinded experts rated videotaped procedures.

Pre and post test findings were similar (although the IRL post test trended higher), but at the 3-month retention test the DSRL group showed a trend toward better retention on the checklist score and a significant score increase on the GRS (global rating score).

Maybe this speaks to something about being responsible adult learners who are invested in their own learning… Multiple moving parts here, small study, food for thought.

Comparison of Postsimulation Debriefing Versus In-Simulation Debriefing in Medical Simulation. Jon N. Van Heukelom, MD; Sim Healthcare 5:91–97, 2010

Both post-simulation and in-simulation debriefing have been used in medical simulation; the question remains which method is more effective. The authors compared these two styles of debriefing with 161 third-year medical students participating in ACLS simulations. A retrospective pre-post assessment was completed by the students on "self reported confidence, level of knowledge related to medical resuscitation and the simulation itself."

The post simulation debrief group gave higher rankings for “effectiveness of the debriefing style, debriefing leading to effective learning, and understanding of correct and incorrect actions.” These results were statistically significant, and students rated this method as more effective overall.

The discussion noted advantages and disadvantages of each method and its effect on the quality of learning. Their students however “did not feel that interruptions during a simulation significantly altered the realism of the simulation.”

Noted limitations of the study are that it involved one level of learner in one type of simulation, so the results cannot be generalized to other levels of learners or other types of simulations. It did not include clinical outcome data, nor did it specify the amount of time spent in simulation versus debriefing for each group (total length of time = 20 min).

When Things Do Not Go as Expected: Scenario Life Savers: Dieckmann, et al. Sim Healthcare 5:219–225, 2010

Dieckmann et al. discuss the need to build into scenarios the possibility that learners will not proceed down the paths that the instructor considers possible or likely. This might result from altered learner comprehension of the meanings and clues within a scenario, inability to accept the scenario as plausible, or a mismatch between the scenario difficulty and the learner's ability. These situations might compromise the potential for learning in a simulation.

“Life savers” attempt to rescue the scenario and make it meaningful and relevant for the learner. These life savers can be brought into the scenario from within or outside the scenario. Examples of altering the scenario from within would include having a confederate administer a drug that is necessary to the progress of the scenario when the learners have failed to do it, or manipulating vital signs on the fly from the control room to make manikin status changes more or less obvious. Examples of altering the scenario from outside would include speaking via the overhead speaker into a scenario to stop participants from doing an action that would be harmful to themselves or to the manikin, or stopping and restarting a scenario to recover from control room or technology mishaps.

Life savers that are employed by a role player from within a scenario must follow the logic of the scenario; they must make sense to the learner in the context of the situation. By contrast, life savers brought in from outside the scenario do not have to, and in some cases they intentionally disrupt the scenario.

This has implications for both scenario design and prebriefing. During design of scenarios, instructors should consider the potential need for life savers, and how they might be implemented if necessary. The learners should be prepared for this possibility during the prebrief.