Marketing Data Management Tools and Services to Faculty – ASERL Summertime Summit – Breakout Session 2

For the afternoon breakout session I chose “Marketing Data Management Tools and Services to Faculty”, presented by Mandy Swygart-Hobaugh from Georgia State U, and Jen Doty from Emory U.

Jen Doty gave us an overview of what Emory is doing in the data area. They set up an RDM LibGuide based on other institutions' guides, offered workshops and consulting, and customized DMPTool. She stressed that making sure DMPTool had local information and contacts was very important. A survey was sent to all faculty to assess data needs. Of note was the comment that no referrals have come from the LibGuide; they come mostly by word of mouth. They also do some work with the grad school in the area of scholarly integrity related to data.

Jen Doty is one of three staff members of the Electronic Data Center at Emory Libraries http://edc.library.emory.edu/. I mention this because the next speaker, Mandy Swygart-Hobaugh, described a totally different data services model.

Georgia State U set up a team of existing staff who fold data services into their existing work. You can see their LibGuide here: http://research.library.gsu.edu/datamgmt. The team includes people from the institutional repository, cataloging/metadata, subject liaisons, and others with an interest in data. If storage is needed, they refer people to university IT. The guide is careful to indicate that their main services are to assist and connect, because of the limited time the team has to work on data projects. The Georgia State University Library also did a survey and found that the main reason people had for setting up a DMP was that it was required! They are hoping to do a follow-up survey, changing some of the wording to get more in-depth responses.

One suggestion Mandy had was to get permission to use the stories of people you help when presenting to groups about data services. And try to tailor the subject of the example if you can, since different groups have different data management needs.

Discussions covered a number of areas: for example, using a Purdue Data Curation Profile http://datacurationprofiles.org/completed as a case study to work through the DMPTool https://dmp.cdlib.org/, and making sure you take the stage of research into account when holding workshops. Depending on the stage, a researcher may not be ready for a practical session or might not care about theory; hands-on work is good in all cases. A required ethics class for students might be a good place to start their data management education.

As with the opening keynote and the morning session, the presentations and the discussions were very helpful. After all, I’ll be starting as Director, Research Data Management for VCU Libraries on Monday!


Other posts about ASERL Summertime Summit

Today I thought I’d check if anyone else had written about ASERL’s Summertime Summit in Atlanta. I found two blog posts, a slide deck, and a video.

Wake Forest U Librarians wrote about the summit.

http://cloud.lib.wfu.edu/blog/pd/2013/08/21/aserl-summertime-summit-2013-liaison-roles-in-open-access-data-management-equal-parts-inspiration-perspiration/

And Robin Sinn from JHU also did a summary. http://jhulibrariestravel.blogspot.com/2013/08/i-attended-aserl-association-of.html

Kathy Crowe, UNC at Greensboro, spoke in a breakout session I didn't attend, "Library staffing/responsibility models for data management and open access". Her talk, entitled 'Models for Liaison Services', is up on SlideShare. http://www.slideshare.net/kmcrowe/liaison-staffing-models

And Georgia Tech Library has Sayeed Choudhury’s opening keynote in their repository. You can get video, audio, or the pdf of the slides. http://hdl.handle.net/1853/48696

 


ASERL Summertime Summit – Breakout Session 1 – Data Management Tools

After a rousing keynote, we had the choice of 4 topics for the first breakout session. I chose "Practical Data Management Tools – Step-By-Step Guide, DMPTool, DataBib", presented by Aaron Trehub (Auburn U) and Lizzy Rolando (Georgia Tech). To help us gauge what the various tools did (they actually covered more than the session title mentions), Aaron gave us a quote to frame our thinking:

“I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.”
Rudyard Kipling
http://members.optusnet.com.au/~charles57/Creative/Techniques/elephants_child.htm

As we looked at the descriptions of the various tools provided on their web sites, filtering with what, why, when, how, where, and who helped us get to the meat of each tool.

We were all given a print copy of "A Step-By-Step Guide to Data Management" from ASERL/SURA http://sura.org/news/docs/RDMStepGuide101512.pdf. It is an easy-to-use handout that covers best practices and should be an excellent tool to use during data interviews. The guide is based on the DataONE life cycle best practices http://www.dataone.org/best-practices.

DataONE was also mentioned because of its Investigator Toolkit http://www.dataone.org/investigator-toolkit and its education modules http://www.dataone.org/education-modules, which are CC0 and can be used and adapted any way you want. Of course, DataONE is also a major repository network for environmental science data.

DMPTool https://dmp.cdlib.org/ is being used by several of the libraries represented in the session, so we discussed the need to customize the tool for your institution. Most people found that researchers leave data management plans until the end of the grant-writing process, so having correct and complete information in the tool is important. Researchers will often share DMPs with each other, which just propagates incorrect information. The webinar series on DMPTool was recommended for those who use or are thinking of using the tool http://blog.dmptool.org/webinar-series/.

Tools to find repositories to store and share data were covered as well.

DataBib http://databib.org/  is a collaborative, annotated bibliography of primary research data repositories.

OpenDOAR – The Directory of Open Access Repositories http://www.opendoar.org/index.html

Simmons list of data repositories http://oad.simmons.edu/oadwiki/Data_repositories

Before depositing, researchers will need to add metadata.

DCC Disciplinary Metadata listing http://www.dcc.ac.uk/resources/metadata-standards

Science Data Literacy Project (Syracuse U) listing of metadata http://sdl.syr.edu/?page_id=32

Since the meeting was about liaisons' roles in data (and scholarly communication), some training sites were suggested.

MANTRA http://www.ed.ac.uk/schools-departments/information-services/about/organisation/edl/data-library-projects/mantra/about

DataONE modules http://www.dataone.org/education-modules

UK Data Archive http://data-archive.ac.uk/create-manage/advice-training. Also look over the rest of the site for other data needs.

RDMRose http://www.sheffield.ac.uk/is/research/projects is specific to information professionals.

Another useful ASERL/SURA document is the Model Language for RDM Policies http://www.aserl.org/wp-content/uploads/2013/02/NEWS__ASERL-SURA_Model_RDM_Policy_Language.pdf

You can't mention policies without mentioning the OSTP policy that will require public access to federally funded research http://www.whitehouse.gov/blog/2013/02/22/expanding-public-access-results-federally-funded-research and the upcoming August 22 deadline for agencies to outline how they will comply. Will it be CHORUS, SHARE, or something else? (This blog post has good information: http://blogs.library.duke.edu/scholcomm/2013/06/10/better-than-joining-the-chorus/)

Of course this led to a discussion of funding. Data storage is an unfunded mandate in most cases, so policies that require deposit and access are problematic for everyone. It was mentioned that there is a difference between open and available when looking at the NSF mandate, so not all data needs to be open access.

We also discussed how much liaisons need to know about data. One person suggested that they need to know enough that they don't get a deer-in-the-headlights look when a faculty member brings up data, so communication to keep liaisons up to date on trends is important.

All in all, it was a great session.  I learned about some new tools and even though there weren’t many answers about funding, it was a good discussion.


ASERL Summertime Summit – Opening Keynote – Sayeed Choudhury

I was lucky enough to attend my first ASERL (Association of Southeastern Research Libraries) event this week.  It was a very timely Summertime Summit titled “Liaison Roles in Open Access & Data Management: Equal Parts Inspiration & Perspiration” held in Atlanta. I am a liaison to our School of Medicine and I’ve also been working with an image database and NCBI data so the summit was a good fit with my work.  It was also exciting to meet with people outside of medical libraries.  Not that I don’t love MLA meetings and #medlibs chats on Twitter, but it is always good to get a new perspective on the workings of libraries.

Sayeed Choudhury from Johns Hopkins was the opening keynote speaker. He talked about setting up the research data management program at the Sheridan Libraries in his talk "Open Access & Data Management Are Do-Able Through Partnerships". Choudhury suggested that we need to ask why we are starting data services before things get going. Not that there is a right answer, but it helps the process to know whether the motivation is PIs/faculty wanting a service or only a mandate to comply with.

My favourite part was when Choudhury mentioned that the "reference interview" is still needed even when using DMPTool. He said that there were times when a researcher started out thinking one thing about their data and ended up in a totally different place once they had an interview with him. Since I think the skill of interviewing is one of the great super powers of librarians, I couldn't agree more. (I consider this the best book on the subject – my reference instructor was Dr. Ross years ago at UWO SLIS: Conducting the Reference Interview: A How-To-Do-It Manual for Librarians, 2nd ed. Catherine Sheldrick Ross, Kirsti Nilsen, Marie L. Radford. http://www.amazon.com/dp/155570655X)

Another point Choudhury made, which resonated with many people, is that data management shouldn’t be seen as a library service, it should be research support provided by the Library.  This point was also mentioned in a new article about the JHU data services initiative I had been reading on the plane going to Atlanta:

Yi Shen, Virgil E. Varvel Jr., Developing Data Management Services at the Johns Hopkins University, The Journal of Academic Librarianship, Available online 11 July 2013 http://dx.doi.org/10.1016/j.acalib.2013.06.002 .

One last take away from Choudhury’s keynote was a talking point he uses with researchers who are uncertain about RDM.  He asks if they can find their data after 5 years.  Usually they can’t, and this question opens a dialogue about planning.

It was a great start to the meeting.  Hopefully I can write up my notes for the 2 breakout sessions I attended and the closing speaker in the next week or so.

update 8/22/2013 Georgia Tech Library has the presentation video online: http://hdl.handle.net/1853/48696


Using the Socratic Method in a Flipped Classroom

There has been discussion lately about the use of the flipped classroom in medical education, and last night it came up in the context of #medlibs continuing education because of a blog post by Eric Schnell – Flipping the MLA Conference. Since Twitter is not the ideal medium for me to explain how the flipped method was used when I took my MLIS classes back in the 1980s, I thought I'd write a blog post about it.

At the time I took classes at the University of Western Ontario SLIS, now FIMS, they used a method I heard called Socratic, or dialectic (http://en.wikipedia.org/wiki/Socratic_method or http://www.socraticmethod.net/). The major method of teaching was to have students research various aspects of a topic ahead of the class and write a paper on the topic. Ideally, the professors would read all the reports before class and lead the class by asking specific students questions about things they had written, gradually revealing the material the class was supposed to cover. So it was basically a flipped class, where students learn some material before class and discuss it in class.

Using the Socratic method requires students to have some knowledge of a subject beforehand so they can have opinions about it to debate in class. And it requires a teacher who can lead the discussion through the topic to make sure the learning objectives are met. I must admit that lots of students complained about having a weekly paper in every class, so there were some variations in class structure. For me it wasn't much different than the weekly lab reports due in my science classes, so I managed.

There is a difference between my old classes and the medical education model, where material is learned ahead of time and case studies or follow-up tests are given in class: the teacher has to read all the reports so the class discussion can be directed, through questioning, to cover the topic completely. It was quite a bit of work, and many teachers did not always follow the method; other flipped classroom approaches were used instead. Some professors had one group of students learn the material ahead of time and make a presentation, which was then discussed. Or the class was broken into groups, each given a part of the topic to research ahead of time, and then the groups shared what they knew in class to bring the whole topic together.

A Librarian’s Guide to NCBI is an example of a course with pre-learning.  We had 3 weeks of videos and exercises before going to NCBI for one week to help make sure everyone was prepared for an intense week of lectures and exercises. This type of model ensures that  the group is all starting on the same page, so to speak, so the experts are then able to give the advanced content, knowing everyone has the basics.

Recently I've attended two sessions where a variation of these methods was used. The speaker or instructor gives a short lecture and then breaks the audience/class into small groups. In the team science workshop I attended, the groups traded elevator speeches and discussed a short case study. In the Data Curation for Information Professionals CE I just attended at MLA in Boston, we learned about aspects of data management and then looked at a case study with leading questions to help us learn more about how to help with data. In both cases, we were all enjoying our discussions about the cases so much that it was hard to stop talking and get back to the speaker. And sharing our main points with the whole group afterwards was also a great learning experience.

After reviewing all this, it seems to me there are many ways we could incorporate a flipped or semi-flipped model into MLA conferences, CEs, and our teaching. I'd love to hear from anyone who wants to take the next step and try it out at a future MLA meeting.

 

 


The Lifespan of a Fact – book review and some observations

Last week I finished reading ‘The Lifespan of a Fact.’ John D’Agata, author. Jim Fingal, fact-checker. The core of the book is an essay by D’Agata about the suicide of Levi Presley in Las Vegas in 2002. But wrapped around the essay are comments by Fingal, who was asked by an editor to check the facts of the story. The final product is a discussion/argument between an essayist who wants to massage the story to make it more artistic, readable, and compelling, and a fact-checker who feels non-fiction, i.e. an essay, should be all true because readers are expecting things to be real and factual.

As a biomedical sciences librarian who has done some writing and teaches evidence-based medicine, I tend to be on the side of the fact-checker. Which could be why I have so much trouble writing more than a 175-word book review. I don't like to write anything that isn't my opinion unless I have a reference to back it up. But I can understand why someone might want to simplify a situation a bit to make a point.

I especially have a problem with science and medicine journalism that glosses over facts or outright lies to create an attention grabbing headline. Or doesn’t report on the full results of a study because they have an agenda. I think Fingal is right in the book when he writes that most readers expect the facts in any sort of non-fiction work to be correct, and most won’t investigate further to find the truth.

And I think this acceptance of a story as truth goes further – it includes things like Shakespeare's plays. As I was finishing this book, I was also watching Shakespeare Uncovered http://www.pbs.org/wnet/shakespeare-uncovered/, specifically the Richard II and Henry IV and V episodes. Shakespeare's history plays are taken as fact by most people, but there are inaccuracies, mainly to do with not wanting to anger the people in power at the time. Yet how many people assume that Shakespeare is fact? This especially interests me because I love 'The Daughter of Time' by Josephine Tey, which refutes the picture of Richard III most people have. And of course, now that Richard's body has been identified, maybe a new picture will emerge http://www.bbc.co.uk/news/uk-england-leicestershire-21063882.


Reference Interviews, Data Curation, and “To Sell is Human”

I’ve had the strangest convergence of ideas that have been bouncing around in my brain the last couple of weeks.

It started with research on data curation or management as a role for Librarians. I have been on a work team investigating the feasibility of hiring a new Librarian in that sort of a position here at VCU Libraries.  It is a subject near and dear to my heart because I did a practicum with the Biomedical Informatics Core recently and became very interested in all the data and analytics coming from hospital EHR systems.  There is a real need to help with use and reuse of data from all sources, and if we think of data as an information resource like a book or journal, it makes sense to me that it is the job of a librarian to help manage data. (see David Stuart’s book Facilitating Access to the Web of Data: a Guide for Librarians)

Last week I was thinking of writing a post – "Whither the Reference Interview?" The idea came to me because, in discussions with colleagues, I realized that so many of our questions come via email, directly or through our general reference email account, and it is hard to conduct a reference interview via email. It made me think of the reference courses I took way back in the mid-1980s at SLIS (UWO). I loved my instructor, Dr. Catherine Sheldrick Ross, who has, coincidentally, written an excellent book on the subject (Conducting the Reference Interview: A How-To-Do-It Manual for Librarians), and I loved learning about open, closed, and neutral questions and Brenda Dervin's sense-making research. Being a fan of mystery novels, I find the idea of figuring out what people really want to know an exciting challenge. Of course we still get the chance for meaty searching from our contacts and referrals, but those email questions are often a problem to deal with.

Then, I came back to my office yesterday morning to find a Twitter conversation, amongst a group of #medlibs I know virtually, about data curation and the skills we need for that role. Are library schools teaching students what they need to know to take on new roles?

Finally it all came together yesterday afternoon as I was reading Daniel Pink’s new book To Sell is Human. I got so much out of Daniel Pink’s talk at MLA in 2010 and his earlier book, A Whole New Mind, so I purchased his latest book.  Plus, it seems to me that we need to learn how to sell ourselves as data managers and teachers and team members or whatever else we want to do.

I was reading the chapter "Clarity" – "the capacity to help others see their situations in fresh and more revealing ways and to identify problems they didn't realize they had" (p. 127) – and there on page 128, while reading the section "Finding the Right Problems to Solve," it hit me: our reference skills are perfect for finding out what problems people need to solve. Pink goes on to say that in order to sell something we need to curate information, sort it and present the relevant data, and ask questions to uncover possibilities. So when you look at it this way, helping with data really is an extension of our skills. That said, we don't know or understand every article we find for a person, and while some librarians can do bibliometric analysis, not every librarian has that skill. So we shouldn't all expect to be data managers or curators.

So we have a place in the non-sales selling process of all of the people we help, plus we have the non-sales selling of our reference and curation skills.  A double reason to read Pink’s book.

Of course we still need to keep our skills up to date and expand the specifics of our skill set (the Health Informatics Forum Massive Open Online Course (MOOC) http://www.healthinformaticsforum.com/MOOC or free SPSS classes, anyone?), but the theoretical framework is there.


The Power of Social Media

Last night I ran a bit of an experiment on the #medlibs chat on Twitter. I noticed quite a few anti-circumcision tweets on the AAMC feed during the 2012 meeting and I thought it quite odd. After all, I'm not sure a general meeting on medical education is the best place to push those concerns; maybe a pediatrics meeting would be better. Anyway, I mentioned the meeting hashtag, and within 30 minutes I had 3 responses from anti-circumcision accounts, despite the meeting having ended more than 6 days earlier. I already had a sense of how far things travel on the Internet, but this is one more anecdotal account of how far our words can travel.

But I wonder if maybe the groups in question should be a bit more careful about how they push their message.  Saying that circumcision is an important issue and people want to stifle opposition is not going to change anyone’s mind.  Listing evidence to support your viewpoint when talking with medical professionals or medical librarians is more likely to get your point across.

Back in the 1990s, when the Internet and email were young, I learned I had to be careful about what I wrote. Somebody forwarded a private email to a group, and I had been a little too honest about another manager at my workplace. Then I discovered that listservs have vendors subscribing as well as librarians. More recently, a couple of my tweets have been picked up by the journal and the software program I mentioned. So it behooves us to be careful as we tweet and post and comment.

 


Librarians Connecting EHR Data – a paper presented at MLAQUAD 2012

I had a fantastic time at MLAQUAD in Baltimore.  It was great to see some old friends and meet some new ones.  I had crab cakes every day, and they were all good.  There were a couple of queries about my presentation, plus some interest from a couple of people on #medlibs, so here it is.  It is very much a work in progress so any suggestions are welcome.

This link goes to my slide presentation in SlideShare: ConnectingEHRdatamlaquad12 from Margaret Henderson

And here is my script, also in SlideShare. This presentation was my first paper at a meeting. I've done quite a few posters and a couple of lightning posters (5-minute talks), but this was new. I had also lost most of my voice due to a cold, so I couldn't actually speak my presentation aloud to practice it. So I thought the best way to get my timing right was to write it all out. In future, I will only note the things I really want to remember, like names, and then let the rest just flow, since I generally have no trouble babbling on about things I'm interested in.

Librarians Connecting EHR Data

  1. Good morning everyone.  I would like to share with you a new project I am working on – using the data from our hospital EHR system as part of the curriculum of our medical school.
  2. With all the concern about new librarian roles for the future and how we can embed librarians, I thought I would first tell you about the connections that led me to this work. In the spring of 2010 I attended the NLM/Woods Hole MBL biomedical informatics course. Here is half of my group after our tour of Martha's Vineyard. I loved the course. On Thursday of that week, Joan Ash, a professor in the Department of Medical Informatics and Clinical Epidemiology at Oregon Health & Science University (and also a librarian and member of MLA), did a role-playing session on the use of technology in hospitals, but she also mentioned that OHSU had grant funding (HITECH/ARRA) for a graduate certificate in biomedical informatics. I immediately decided to apply, but I couldn't get my old Canadian university transcripts in time, so I had to wait a year. But I did get into the program, starting in Sept. 2011. After taking classes on informatics, evidence-based practice, clinical information systems, health information management, statistics, SQL programming, and IT in healthcare, I had to decide on a practicum project.
  3. While reading a blog post http://informaticsprofessor.blogspot.com/2012/04/from-implementation-to-analytics-future.html by the chair of the OHSU DMICE department, Dr. Bill Hersh (who has also taught at Woods Hole), I became really interested in the idea of analytics as described in the report that was mentioned (see quote on slide). I wanted to do a practicum that would use analytics in some way to improve patient outcomes.
  4. After asking around at my institution, I discovered that our Center for Clinical and Translational Research, part of the CTSA consortium, had a Biomedical Informatics Core (BIC) that was working on the analytics I was interested in. They are actually setting up quite a few different resources, but the main focus has been on REDCap for surveys, i2b2 for cohort discovery, a clinical data warehouse for hospital EHR data from Cerner, and Health Facts, a large Cerner-supplied database with data from 160 hospitals, which can be used for research.
  5. REDCap is used for collecting and analyzing survey data, but to get the patient sets (which need IRB approval), researchers use i2b2 to see if there is a large enough cohort in the system.
  6. i2b2 is another open-source software package for research, but it was specifically designed for translational research and for relating patients to genomic information. At VCU, researchers don't need IRB approval to use this database, and they can play around with the patient characteristics they need for their research to find out if there are enough patients in our community for a study. When I learned about i2b2, I felt there must be more we could do with it. One Core staff member said he had taught REDCap to a medical informatics class so they could learn about databases, but I thought using i2b2 would be even better.
  7. So I started to learn more about the system and think of ways to use it for teaching. First off, there are 4 ontologies used for various parts of the system: ICD-9-CM for diagnoses and procedures, LOINC for lab tests, NDC for medications, and SNOMED CT for microbiology. The important thing to realize about this database is that the diagnosis codes are for billing purposes, so there can be other diagnoses that aren't billed, and the numbers may not reflect what a researcher expects based on their experience.
  8. The actual interface looks like this. On the left: a box to search for terms to use, a workplace to share searches (especially if you need help), and a previous queries area that stores what I have done. On the right: a query tool for setting up terms into a search statement, and a query status box to show how your search is running.
  9. In the navigate terms area, the ontology folder opens up to show all the areas that can be searched.
  10. And you can keep opening the various hierarchies to get to what you need. While browsing is fine, it may take a while.
  11. So, you can switch to the Find Terms tab and do a name or code search. You'll notice, though, that things become a lot more complicated and specific. Luckily, you can just choose the base term to get the whole group. But if you needed secondary diabetes mellitus, you'd need to add more codes. And the terms list includes things from all of the ontologies.
  12. Once the search criteria have been determined, you can add date limits and run the search. The number of patients returned is ± 3 to keep the set deidentified (see the sketch after this list for the general idea).
  13. Once you have a set, you can switch to Analysis Tools and do basic Demographics on the set.
  14. The basic demographics can give you some idea of the type of community you will be working with. So you can see, it is not too hard a search process, but you do need to think about the ontologies – which is a good skill for medical students who will be using EHRs.
  15. It is easy to say medical students should learn how to use these analytic databases, but it is another thing to convince faculty. So I've been combing through various objectives reports and finding relevant reasons to learn to use an EHR database. The Medical School Objectives Project of the Association of American Medical Colleges has put out several reports, with many objectives. This one, Physicians must be dutiful, specifically mentions retrieving biomedical information – not just literature.
  16. This MSOP report looks at the medical informatics skills needed for various roles – Life-Long Learner, Clinician, Educator/Communicator, Manager, and in this objective, the role of Researcher.
  17. And in this report, a newer one, which is nice since the MSOP reports are from 1998, Utilizing Informatics is a core competency for working in interprofessional teams.
  18. Using i2b2 to find information to educate students about their patient populations can help direct their studies.  Also, learning to use databases can help with future quality analysis efforts.
  19. i2b2 can only do so much, and the provider information, especially on the hospital side as opposed to the clinic side, is based on the admitting provider, not necessarily the provider/resident who treated the patient. So I am learning about the Data Warehouse that is being set up at VCU. Right now Bob, the BIC specialist, is using a Business Objects program to run searches, similar to the i2b2 interface where you move folders and terms into different boxes. He almost has the more functional Data Warehouse ready. It will smooth out some of the variations in the EHR records and allow searching of the terms entered, not just the codes. And it can be accessed through any hospital terminal. We will create a template search for the residents so they just need to put in their name and then they can find out their patient loads for self-assessments.
  20. I’ve touched on the problems a couple of times, but like any research work, there are problems with the data.  As long as you are aware of the issues, you can temper your conclusions properly.
  21. You can actually search PubMed for i2b2, clinical data warehouse, and Cerner Health Facts and find articles that use the databases for research.  As you search, you will find that there are also all sorts of specialized health databases that are also being used.  EHRs are generating huge amounts of data.
  22. Which leads me to a suggestion for those of you who don't have access to hospital data: learn about health data – not just health statistics. This relates well to our speaker this morning, who discussed open data (link to a blog post about the keynote: https://macmla.wordpress.com/2012/10/16/quad-meeting-keynote-big-data/). There are large amounts of open data on the web, and some of it is medical. Learn about these resources and the tools needed to use the data. According to David Stuart, data is the new book. In his book, Stuart paraphrases Ranganathan's laws, starting with "Every user his data".
  23. So look for data.  Data.gov is a good starting point.
  24. visualizing.org has quite a few categories of health data.
  25. And of course, don’t forget all the genetic data in NCBI.
  26. Yesterday, when Bart Ragon, http://www.hsl.virginia.edu/bio/bart, spoke about the BioConnector room they have created at UVa, he ended with a slide of flowers growing as an analogy for their growing program. I've ended with flowers but for a slightly different reason. This piece of embroidery is a reproduction of an Elizabethan coif. It will be made up next year and used by interpreters at Agecroft Hall, an historical house in Richmond, VA. I think this coif is like the projects that I have been working on. It requires many people working as a team, lots of different skills, and lots of time. But in the end it will be a wonderful thing.
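For anyone curious about what the cohort queries in points 11 and 12 boil down to conceptually, here is a minimal, hypothetical Python sketch: filter a toy table of observations by a diagnosis-code prefix and date limits, count the distinct patients, and then blur that count by a small random offset so the reported set stays deidentified. The table layout, field names, and fuzzing details here are my own illustrative assumptions, not the actual i2b2 schema or obfuscation algorithm.

import random
from datetime import date

# Toy "observation" rows: (patient_id, icd9_code, encounter_date).
observations = [
    (1, "250.00", date(2011, 3, 14)),   # diabetes mellitus, no complication
    (2, "250.02", date(2012, 1, 9)),
    (3, "401.9", date(2011, 7, 2)),     # essential hypertension
    (2, "250.00", date(2012, 6, 21)),
    (4, "250.60", date(2010, 11, 5)),
]

def cohort_count(code_prefix, start, end):
    """Count distinct patients with a code under the chosen part of the
    hierarchy (prefix match) and an encounter inside the date limits."""
    patients = {pid for pid, code, when in observations
                if code.startswith(code_prefix) and start <= when <= end}
    return len(patients)

def deidentified_count(true_count, spread=3):
    """Blur the count by a small random offset (never below zero) so the
    reported number is only approximate."""
    return max(0, true_count + random.randint(-spread, spread))

n = cohort_count("250", date(2011, 1, 1), date(2012, 12, 31))
print("Diabetes cohort, 2011-2012: about", deidentified_count(n), "patients")

The real system does all of this for you behind the query tool; the point of the sketch is simply to show why the counts come back as approximations rather than exact patient numbers.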

EHRs and Patient Safety – a reply to a #medlibs chat

Last Thursday Jon Goodell (@jonspoke) mentioned that EHRs can worsen patient safety and posted a JAMA article from 2005 http://jama.jamanetwork.com/article.aspx?articleid=200498.
I knew we had used newer sources in a class presentation earlier this year in my Clinical Information Systems class. While our talk was mainly about EHR costs, safety was a big part of that, since poor safety records are costly. There are a few places below where I refer to figures from the presentation; we pulled them from the articles cited, so you should be able to find them there. My teammates, Michelle Tellez and Spoorthi Velagapalli, were great to work with, and it was an enlightening project.
I hope the following from our presentation is helpful:
How can an Electronic Health Record system reduce expenditures?
There have been many reasons given for why EHRs will reduce expenditures and increase profits for those who use them.
• A good system adds value to the organization by allowing the organization to do things it could not do before. It expands the business possibilities and recognizes that data and information are an organizational asset.
• Quality based interventions should improve outcomes that translate into savings.
• Worker productivity gains – computers are fast and accurate, so workers using them are expected to be more productive.
• Billing optimization – more complete documentation of patient encounters potentially allows visits to be billed at a higher level of service.
• Storage of other encounter data – lots of information can be stored and retrieved, decreasing redundant tests and studies. It also decreases the storage space needed for paper files, and records are easy to retrieve in an emergency such as Katrina.
EHRs can help improve the culture of safety
• Malpractice reduction because there are fewer errors and better documentation.
• Medication error avoidance through alerts, decision support and CPOE.
• Medical mistake avoidance – decision support would decrease the errors of omission (forgetting to do something like vaccines) and those of commission (doing the wrong thing like the wrong medication).
• Impact on outcomes – outcomes research savings in chart reviews and timely updating of charts.
• Increasing number of patients enrolled in research – EHRs can compile lists of patients eligible to participate in clinical research to improve treatments.
• Provider profile and reputational incentives – EMRs can be used to track how well providers adhere to quality standards proposed in Meaningful Use requirements. Those providers who do better will have better reputations with patients and the institutions they work for.
However, the reality is not quite what was predicted…
• Quality based interventions that measure “quality of life” or “years of life” have not translated into actual cost savings.
• Clerks may become more efficient, but some may lose their jobs as clinicians do most of the data entry. MDs may be less efficient, as it takes longer to record encounters; many cannot type well or do not feel comfortable with computers (maybe this is an age factor).
• Studies are suggesting that lawsuits are more likely associated with bed-side manner than with the errors themselves, so malpractice is not reduced.
• Medication error avoidance works in some cases but new sets of errors (unintended consequences) have appeared.
• Medical mistake avoidance – AHRQ did not include EHRs as one of its 20 tips to decrease errors… it did not pass the threshold.
There are many important unintended consequences, such as:
• more/new work for clinicians;
• unfavorable workflow issues;
• never ending system demands;
• problems related to paper persistence;
• changes in communication patterns and practices;
• negative emotions;
• generation of new kinds of errors;
• unexpected changes in the power structure; and
• overdependence on the technology.
Others have found that, as currently implemented, hospital computing might modestly improve process measures of quality but does not reduce administrative or overall costs (Himmelstein DU, Wright A, Woolhandler S. 2010).
Sidorov suggests that there is too much bias in the research.
Renner, in a 1996 article, predicted that the benefits and the costs often include intangibles that are difficult to quantify, i.e., the qualitative benefits associated with computerizing clinical information (Renner, 1996).
Wang et al. attribute the conflict to underlying assumptions in the calculations, which can cause significant variation in whether EMRs result in a net expense or a net profit (Wang et al., 2003).
One of the common comments when looking at costs is that EHRs have not actually been proven to have positive outcomes. A recent systematic review showed that most articles on health information technology do show improvement for various measures. Some of these measures cannot be factored into cost calculations, but they must always be an underlying consideration when looking at total costs of setting up EHRs and other computerized health systems.
But there are newer studies that show benefits over time.
A recent article by Colene Byrne and a group from The Center for IT Leadership looked at the value of the investment the Department of Veterans Affairs has made in information technology. While the VA spent proportionately more on IT than the private health care sector spent, it achieved higher levels of IT adoption and quality of care. This graph from the article shows that a higher percentage of diabetic patients in the VA system received the tests necessary for their chronic condition, and a smaller percentage had their HbA1c levels under poor control, compared to the private sector comparisons with similar IT in place.
This graph from the same article shows the costs and benefits. The gross value of the VA's investments in VistA applications was projected to be $7.16 billion. Cumulative reductions in unnecessary care attributable to the prevention of adverse drug event-related hospitalizations and outpatient visits as a result of VistA were the largest source of benefit in the authors' projections, with an estimated value of $4.64 billion, or 65 percent of the total estimated value.
Zlabek, Wickus, and Mathiason found that the number of laboratory tests per week per hospitalization declined from 13.9 pre-EHR to 11.4 in the 9 months after CPOE implementation, a decrease of 18.0% (p<0.001). There was also a decline in radiology examinations and medication errors. They concluded "Implementation of a commercially available inpatient EHR with CPOE appears to have quickly reduced cost of care and improved safety in our hospital."
Lapointe, Mignerat, and Vedel (2011) did a literature review and found that a very limited number of EHR effects are being explored and that confounding factors in HIT cost research are not being controlled for (e.g., measurement errors, time lags, financial benefit redistribution, and management characteristics).
A wider range of variables needs to be included and measured in the cost models, implementation characteristics need to be accounted for, and multiple levels of perspective (individual, group, and organizational) and multiple stakeholder perspectives (managers, health professionals, and patients) must be included in the analysis.
The authors feel that too many studies don't look at all the areas where there are costs and benefits from EHRs. Costs need to be considered in context and in relation to efficiency, quality, outcomes, access, accessibility, compliance, and overall success for research findings to be really meaningful to clinical practice.
This figure from Lapointe, Mignerat, and Vedel shows the stakeholders in the overall success of the EHR. Each group of stakeholders has a different set of needs and a different perspective on what constitutes a benefit or a cost.
As an example, we could say that:
• Administrators may focus on efficiency of reporting and billing cost reduction;
• Health care professionals are more focused on quality of care, flexible workflows, the user interface, etc.;
• Patients want to be empowered, maintain privacy, and receive high-quality care.
From your stakeholder perspective, what are the costs and benefits of the widespread use of EHRs?
Some studies are showing better patient care using techniques that are only possible with EHRs. Jackson, Cashy, Frieder, and Schaeffer found that data mining-derived algorithms improved empirical antimicrobial therapy in outpatients with urinary tract infections.
It is hard to find exact costs for some factors that impact the bottom line, like patient satisfaction, but they need to be considered as well. Kazley, Diana, Ford, and Menachemi found that EHRs improved 3 of 10 measures of patient satisfaction, including how the patient rated the hospital and whether they would recommend the hospital, both of which are hard to place a value on.
Final thought: "My own personal experience in switching my practice from paper to EHRs showed that the change requires some initial effort; however, it did not interrupt work flow in the clinic. The results are better care for patients and new opportunities for the physician and staff to improve quality outcomes." – Surgeon General Regina Benjamin, M.D.
Dr. Benjamin switched to EHRs in her Gulf Coast Alabama family practice after two hurricanes and a fire destroyed the clinic's paper records.
(Quote from a March 8, 2011, press release: http://www.hhs.gov/news/press/2011pres/03/20110308a.html)
References
1. Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Aff (Millwood). 2011;30:464-471.
2. Byrne CM, Mercincavage LM, Pan EC, Vincent AG, Johnston DS, Middleton B. The Value From Investments In Health Information Technology At The US Department Of Veterans Affairs. Health Aff. 2010;29:629-638.
3. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13:547-556.
4. Gardner E. Trial runners. A two-physician practice goes on a roll with its EHR, generating extra revenue from clinical trials. Health Data Manag. 2009;17:44, 46.
5. Hillestad R, Bigelow J, Bower A, et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff (Millwood). 2005;24:1103-1117.
6. Himmelstein DU, Wright A, Woolhandler S. Hospital Computing and the Costs and Quality of Care: A National Study. Am J Med. 2010;123:40-46.
7. Jackson HA, Cashy J, Frieder O, Schaeffer AJ. Data mining derived treatment algorithms from the electronic medical record improve theoretical empirical therapy for outpatient urinary tract infections. J Urol. 2011;186:2257-2262.
8. Kazley AS, Diana ML, Ford EW, Menachemi N. Is electronic health record use associated with patient satisfaction in hospitals? Health Care Manage Rev. 2012;37:23-30.
9. Lapointe L, Mignerat M, Vedel I. The IT productivity paradox in health: A stakeholder's perspective. Int J Med Inf. 2011;80:102-115.
