
Friday, July 16, 2010

The New York Times Reports A University President's Conflict of Interest

Three months ago, we discussed the controversy at the University of Michigan over the university president's seat on the board of directors of Johnson and Johnson, the big pharmaceutical, medical device, and medical supply company, as a potential conflict of interest that could have influenced her decision to make the campus smoke-free.  (Johnson and Johnson makes drugs to aid in smoking cessation.)  I argued that by the Institute of Medicine definition, President Coleman did have a conflict of interest, and that while it was not possible to tell whether it influenced the smoke-free decision, the problem with conflicts is that they constantly raise the possibility of undue influence on decisions.

Now this issue has made it to the big time.  New York Times reporter Duff Wilson wrote in the Times' Prescriptions Blog:
The University of Michigan medical school became the first in the nation last month to say it would refuse any funding from drug companies for its continuing medical education classes. The decision could cost it as much as $1 million a year, but it was worth it, the medical school dean said, for education to be free from potential bias.

At the same time, Mary Sue Coleman, president of the entire University of Michigan, sits on the board of directors for the pharmaceutical giant Johnson & Johnson. Last year, the company paid her $229,978 — roughly half in stock and half in cash — for attending a limited number of meetings, corporate filings show.

Conflict of interest? Conflict of policies? If the med school and mere professors could be tainted by drug money, what about the university president?

She says no. Responding to questions on Ms. Coleman’s behalf Monday, Kelly E. Cunningham, a spokeswoman for the university, said the president satisfied the policy by disclosing her outside work. Ms. Coleman has never had to recuse herself from any discussion or action at the university because medical purchasing and investment decisions are so remote from her, Ms. Cunningham said.

'The same is true at J&J,' she added. 'There has never been a discussion or decision at the board level that involved something related to the UM. But, of course, if there were, she would recuse herself.'

The story was picked up by the Detroit Free Press, which reiterated the official line that President Coleman's role on the Johnson and Johnson board did not pose a conflict:
A student group at the University of Michigan is calling on President Mary Sue Coleman to resign from her seat on the Johnson & Johnson board of directors, saying it's a conflict of interest.

But Coleman has no plans to resign, and university officials say her role on the board is not in conflict with university operations. Last year, she earned nearly $230,000 for her board duties. Coleman's U-M salary is about $550,000.

'It's essential that U-M have a voice and interact with the business world,' said Rick Fitzgerald, a U-M spokesman. 'She thinks it's her duty to understand what the commercial world is doing.'

So, as I did last time, let us turn to the Institute of Medicine's definition of conflict of interest (in a health care context) found in its report, Conflict of Interest in Medical Research, Education, and Practice.
Conflicts of interest are defined as circumstances that create a risk that professional judgments or actions regarding a primary interest will be unduly influenced by a secondary interest. Primary interests include promoting and protecting the integrity of research, the quality of medical education, and the welfare of patients. Secondary interests include not only financial interests....

I asserted then that President Coleman has a conflict of interest. Her primary interests as President of a university are to uphold the university's academic mission, and, as President of a university that includes a medical school, a school of public health, and an academic medical center, also to uphold the integrity of patient care and public health practice. Her secondary interest as a member of the board of directors of a public, for-profit corporation is her fiduciary duty to that corporation and its stockholders, which means she must "demonstrate unyielding loyalty to the company's shareholders" [per Monks RAG, Minow N. Corporate Governance, 3rd edition. Malden, MA: Blackwell Publishing, 2004. P. 200]. Such unyielding loyalty to the shareholders of a pharmaceutical and medical device company clearly creates a risk of influencing judgments or actions that could affect the corporation's sales or operations, economic or health policy, or the general environment in which it operates. Many of the judgments of or actions performed by the leader of a medical school, public health school, and academic medical center could do so, and are thus at risk of being so unduly influenced.

As the IOM report said, though,
a judgment that someone has a conflict of interest does not imply that the person is unethical. Such judgments assume only that some situations are generally recognized to pose an unacceptable risk that decisions may be unduly influenced by considerations that should be irrelevant.

However, note that the sorts of decisions that may be influenced by a conflict of interest go beyond just those that involve the specific secondary interest causing the conflict. So the University spokesperson's statement that the president would recuse herself from any decision at the university that directly involved Johnson and Johnson, but that no such decision has ever been necessary, missed the point.

Meanwhile, the university's insistence that the president's part-time position at Johnson and Johnson is justified by the need to "have a voice and interact with the business world" rings hollow. There are many ways a president could do that which do not involve taking corporate pay (and owing the corporation "unyielding loyalty"). It rings especially hollow at a university that has identified corporate funding of continuing medical education as an unacceptable conflict of interest.

But then again, conflicts of interest are known to create confused thinking, and such confused thinking is likely to be prevalent at an institution that has one set of rules for the little people, and another for the top leaders.

Maybe this story in the New York Times will lead to some discussion about whether it is good for academic medical institutions to tolerate this "new species of conflict of interest" (as we termed it in 2006).

FDA MAUDE Database: Patient Outcome - Death

I present another health IT problem case from the FDA's voluntary MAUDE (Manufacturer and User Facility Device Experience) database below.

From FDA's description of MAUDE:

  • MAUDE data represents reports of adverse events involving medical devices. The data consists of voluntary reports since June 1993, user facility reports since 1991, distributor reports since 1993, and manufacturer reports since August 1996. MAUDE may not include reports made according to exemptions, variances, or alternative reporting requirements granted under 21 CFR 803.19.
  • The on-line search allows you to search CDRH database information on medical devices which may have malfunctioned or caused a death or serious injury. MAUDE is scheduled to be updated monthly and the search page reflects the date of the most recent update. FDA seeks to include all reports received prior to the update. However, the inclusion of some reports may be delayed by technical or clerical difficulties.
  • MAUDE data is not intended to be used either to evaluate rates of adverse events or to compare adverse event occurrence rates across devices. Please be aware that reports regarding device trade names may have been submitted under different manufacturer names. Searches only retrieve records that contain the search term(s) provided by the requester.

I somehow missed the following case when I wrote the Oct. 2009 post 'Our Policy Is To Always Have Unabashed Faith In The Computer ... Except When It Screws Up, And Then It's The Doctor's Fault' but I have added it there as well:

http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/detail.cfm?mdrfoi__id=1656460
CERNER MILLENIUM POWERCHART CPOE
Event Date 11/19/2006
Event Type: Death
Patient Outcome: Death

The medication review screen of the subject device does not specify the exact dose in milligrams of combination medications. For example, narcotics are combined with tylenol in at least two strengths. Liquid narcotic tylenol-oxycodone combination is reported in ml, not mg. The exact dose of tylenol is not specified and requires knowledge of the combination medication dose in the volume specified.

Certain fields of the grid do not specify the volume, but rather state "date/time," requiring another click or pop-up screen. The immediate knowledge of tylenol dosage in mg is directly related to understanding and preventing excessive doses. In the subject, 10 ml of acetaminophen-oxycodone is indicated as having been given 3 times over 4 hours. That means that 1950 mg of tylenol was administered in 4 hours while the patient was in a state of starvation and receiving other medications that increase the effects of tylenol.

This dose would equate to 11,700 mg of tylenol over 24 hours, nearly 3 times the maximum daily dose in otherwise healthy people. In the ensuing days, the patient developed acute renal failure, presumably acute tubular necrosis, and died. In the absence of other etiology, the excess tylenol was the culprit. This was not considered as etiology ante-mortem. The counterintuitive screen impaired the professionals. The pharmacist did not recognize and stop the medication, the nurses administered it, and the excessive dose, clinically meaninglessly listed as a volume of 10 ml -given 3 times in 4 hours- of acetaminophen-oxycodone, was missed by the physicians. Adverse events have been ascribed to "user error" by vendors.

The device offers a potent propensity to life endangering oversights. There are other screens on this device which present information that interferes with clinically useful visualization of data.
[Who designed these screens, I ask? Clinicians, or business IT personnel used to designing inventory systems for widget control? - ed.] The data does not flow to the professionals. It is not represented in a meaningfully useful manner.

The professionals need to hunt for it. As such, the user unfriendly screens [see this link on mission hostile HIT - ed.] impair safe medical care consistent with the impediment to expedient professional understanding of what, exactly, is the dose of medication and how much was administered to the patient. This sentinel case of death is directly attributed to user unfriendly screens on this device.
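As an aside, the report's cumulative-dose arithmetic is easy to verify. Here is a minimal sketch, assuming the common concentration of 325 mg acetaminophen per 5 ml for oxycodone-acetaminophen oral solution (an assumption on my part, though it is the concentration the report's own figures imply):

```python
# Verify the MAUDE report's acetaminophen arithmetic.
# Assumed concentration: 325 mg acetaminophen per 5 ml, a common
# strength for oxycodone-acetaminophen oral solution.
MG_PER_ML = 325 / 5            # 65 mg acetaminophen per ml

dose_ml = 10                   # each administered dose
doses = 3                      # given 3 times...
hours = 4                      # ...over 4 hours

mg_in_4_hours = doses * dose_ml * MG_PER_ML
mg_per_24_hours = mg_in_4_hours * (24 / hours)

max_daily_mg = 4000            # usual adult maximum daily dose

print(mg_in_4_hours)           # 1950.0 mg, as the report states
print(mg_per_24_hours)         # 11700.0 mg at that rate
print(mg_per_24_hours / max_daily_mg)  # 2.925, "nearly 3 times" the max
```

The numbers check out exactly as reported: the volume-only display hid a cumulative acetaminophen load approaching three times the daily maximum.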

How many cases like this, as well as "near misses" related to health IT go unreported, nationwide and worldwide?

As I noted in my paper "Remediating an Unintended Consequence of Healthcare IT: A Dearth of Data on Unintended Consequences of Healthcare IT," nobody really knows; these devices are unregulated, with no requirements for reporting.

However, let's roll it out nationally anyway, because HIT will deterministically "revolutionize" medicine. Just ignore the spoil-the-party, man-behind-the-curtain prattle from writers like these.

We can safely ignore all contrarian research and literature, of course, as we all know HIT will revolutionize medicine from the definitive certainty of HHS in "The 'Meaningful Use' Regulation for Electronic Health Records", NEJM, Blumenthal and Tavenner (10.1056/NEJMp1006114, July 13, 2010):

The widespread use of electronic health records (EHRs) in the United States is inevitable. EHRs will improve caregivers’ decisions and patients’ outcomes. Once patients experience the benefits of this technology, they will demand nothing less from their providers. Hundreds of thousands of physicians have already seen these benefits in their clinical practice.

[Except for those who haven't - ed.]


And our government's called BP Energy Company cavalier?

I offer no additional comments.

-- SS

Thursday, July 15, 2010

NORCAL Mutual Insurance Company: "Electronic Health Records: Recognizing and Managing the Risks"

The insurance industry is catching on that EMRs and other clinical IT are not exactly the cybernetic miracles they are sometimes held out to be, as implied, for example, in this statement from HHS in the July 13, 2010 NEJM:

... EHRs will improve caregivers’ decisions and patients’ outcomes. Once patients experience the benefits of this technology, they will demand nothing less from their providers.

NORCAL Mutual Insurance Company, for example, produces a near-monthly publication entitled "Claims Rx." Its purpose:

The goal of Claims Rx is to help physicians better recognize their medical professional liability risks and implement strategies to minimize those risks. The topics addressed here are derived from numerous sources including closed malpractice claims analyses, emerging liability trends and current medical literature. Each issue is meticulously reviewed by an editorial board consisting of physicians, nurses, risk management specialists and attorneys with a mind toward optimal patient safety and proactive risk management.

The October 2009 issue, found at this link in PDF, is entitled "Electronic Health Records: Recognizing and Managing the Risks." It contains advice to physicians on limiting their liability and reducing errors, and presents de-identified examples of HIT-related horror stories.

Some of the advice offered includes:

Electronic health records (EHRs) hold great promise of improving patient safety and decreasing medical liability exposure, but their use is creating a variety of new risk management and patient safety issues.

For some reason, the "great promises" made over the past fifty years still seem elusive...but we'll get it right some day...(perhaps when these issues, among others, are solved).

Some of these issues are directly associated with EHRs (e.g., providers disregard warnings generated by the EHR), but many of the risk concerns associated with EHRs are analogous to problems that currently exist in paper documentation systems.

The marketing puffery that today's EMR's are vastly superior to paper might be just that - marketing puffery.

In this month’s Claims Rx we present a number of shorter-than-usual case studies that exemplify various aspects of unsafe EHR documentation and communication practices. The scenarios are based on NORCAL closed claims, facts presented in appellate opinions, research findings and the observations of NORCAL Risk Management Specialists.

What many of the examples show is that EHRs do not eliminate many of the dangerous documentation and communication practices that have historically led to patient injury and malpractice lawsuits.

I would add that "EHRs in their present form do not eliminate dangerous practices," designed as they are like clinical data inventory systems by business-IT eggheads rather than as clinical tools "of, by and for clinicians." (One might ask why we are about to spend $100+ billion on them in a national rollout over the next few years.)

Consequently, while it is important to address new issues that arise with EHRs, many of the risk management recommendations that apply to a paper-based documentation system remain valid.

This Claims Rx will discuss the risks associated with various aspects of EHRs and will provide guidance for instituting policies and procedures designed to enhance the quality and safety of patient care, while diminishing professional liability risk.

A number of case examples then follow. I will reproduce two, but all can be read via downloading the PDF at the link above.

Saving Images in the Wrong Patient’s Chart

Just as an image can be misfiled or lost in a paper system, it can be misfiled in an electronic one. However, as the following case shows, it can be less obvious that an image has been misfiled in an electronic system.

Case Study

Patient #1 and patient #2 both presented to the Emergency Department (ED) complaining of abdominal pain. CT scans of the abdomen and pelvis were completed for both patients. A radiology tech mistakenly gave patient #2’s images the identification number assigned to patient #1 and uploaded the images into the Picture Archiving Computer System (PACS).

A short time later, the tech realized his mistake and called the on-duty teleradiologist to tell him about the mistake and request that the mislabeled images be deleted from the system. However, the on-duty teleradiologist did not have access to delete images from the PACS; this had to be done by the PACS administrator. The tech then corrected the labeling problem and sent the images out to the teleradiology service for a preliminary review and resent the correctly labeled images to the PACS.

Patient #1’s PACS file now contained both his own and patient #2’s images. A few days later the tech told his supervisor about the mislabeling, and assumed that the supervisor would remedy the problem. Pursuant to hospital policy, the tech should have immediately contacted the PACS administrator.

The teleradiology service reported that patient #1’s CT scan was normal. Patient #2’s CT scan, however, showed a large tumor (about the size of a grapefruit) on the patient’s kidney. The service faxed the reports to the radiology department at the hospital.

The next morning, the on-duty radiologist reviewed the PACS images from the night before. He disregarded the teleradiology service reports because they did not correspond to what he saw in the PACS. Because patient #2’s scan had been completed before patient #1’s, patient #2’s images were the first series in his file.

The on-duty radiologist noted the large tumor and dictated a note. Because patient #2’s images still carried patient #1’s identification number, the radiologist’s report was assigned to patient #1.

Patient #1 was subsequently seen by a number of specialists for the supposed tumor on his kidney. Seven days after the CT scan, he underwent a nephrectomy [lovely- ed.]. During the surgery, no mass could be positively identified on his kidney by his surgeons. Postoperatively, no tumor was identified in the removed kidney and pathology returned benign.

(Please note, once the filing mistake was recognized, patient #2 was notified and underwent a timely and successful nephrectomy.)

[In other words, through serendipity only one person was harmed, not two - ed.]

One wonders how many "cases" like this will arise out of the major-vendor EHR upgrade flaw that is claimed to have caused patient data to go into the wrong charts at not one, but several hospitals:

HIStalk Monday Morning Update 7/12/10

From Holy Smoke: “Re: Cerner. Misidentification incidents have been reported with Cerner PowerChart and Millenium in hospitals in Indiana, Michigan, and others after a Cerner upgrade. Entries are placed in the wrong electronic chart and reviewed data is for the wrong patient.” Unverified. I saw nothing in the FDA’s Maude database, so if it’s happening, customers should file an experience report.


(See my July 11, 2010 post 'Health IT and "High Regulatory Standards": Criminal Negligence for Implementing Defective Systems That Put Data in the Wrong Charts?').

Next from NORCAL:

Checking the Wrong Box

In the following case, the appearance of the computer screen probably played a role in the medication error.

Case Study

A patient presented to his primary care physician (PCP) for the treatment of headaches and episodes of altered consciousness. The PCP prescribed amitriptyline at 10 mg nightly. The PCP told the patient to escalate the dosage by 10 mg every three to four days until the pain was relieved, but not to exceed 50 mg without consulting him.

When creating the prescription, the PCP intended to check off the 10-mg box in the computerized physician order entry (CPOE), but inadvertently checked the 100-mg box, which was right above it. In the medication instructions section, he indicated that five pills could be taken per night, so the patient would not have to return to the pharmacy and pay an additional co-pay if he ultimately needed the larger dose.

The pharmacist had noticed that the dose seemed high and requested that a call be made to the PCP prior to it being dispensed. A nurse at the PCP’s office picked up the call, and because she was very busy that day, told the pharmacy to dispense the medication as it had been ordered — she did not check the dose. Three days later, the patient took five of the 100-mg pills together. Early the next morning, the PCP was contacted by an emergency department (ED) physician who reported that the patient was in the ED reporting dizziness, an altered state of consciousness, an inability to coordinate his movements and a rapid heartbeat.

He was further informed by the ED physician that the patient had taken five 100-mg amitriptyline tablets. The PCP then checked the patient’s record and realized his mistake.

[It's serendipitous that the patient was not in the morgue when the mistake was realized - ed.]
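A simple order-entry cross-check could have caught this before the prescription ever reached the pharmacy. Here is a hypothetical sketch: the 50 mg ceiling and the 100-mg-times-five-pills figures come from the case above, but the function itself is purely illustrative, not any vendor's actual API:

```python
# Hypothetical CPOE sanity check: compare the worst-case dose the
# instructions allow (strength x units per dose) against the
# prescriber's stated ceiling. All names here are illustrative.
def check_order(strength_mg: float, max_units_per_dose: int,
                ceiling_mg: float) -> list[str]:
    warnings = []
    worst_case_mg = strength_mg * max_units_per_dose
    if worst_case_mg > ceiling_mg:
        warnings.append(
            f"Instructions allow up to {worst_case_mg:g} mg per dose, "
            f"exceeding the stated ceiling of {ceiling_mg:g} mg")
    return warnings

# The case as reported: the 100-mg box checked instead of 10 mg,
# instructions allowing 5 pills per night, prescriber ceiling 50 mg.
print(check_order(strength_mg=100, max_units_per_dose=5, ceiling_mg=50))

# The intended order (10 mg, up to 5 pills) stays within the ceiling
# and produces no warning.
print(check_order(strength_mg=10, max_units_per_dose=5, ceiling_mg=50))
```

The erroneous order permits 500 mg per dose, ten times the prescriber's own stated limit; a check this trivial would have flagged the adjacent-checkbox error the moment it was made.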

Read the rest of the cases and the explanations at the Electronic Health Records: Recognizing and Managing the Risks "Claims Rx" document from NORCAL here (PDF).

In these cases, both technological and "people" issues were responsible for the malpractice events.

However, healthcare is an extremely complex endeavor requiring exquisite attention to detail (or your patient's dead).

These IT systems could have been designed to provide cognitive, ease-of-use, and known-error revision (or at least known-error flagging) support to clinicians, which could have helped prevent these errors.
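By way of illustration only, the known-error flagging I have in mind might work like this: resolve each combination product (ordered in ml) into its ingredients in mg, keep a rolling 24-hour total per ingredient, and alert when a ceiling is crossed. The product table and thresholds below are hypothetical examples I supply for the sketch, not any vendor's data:

```python
# Illustrative sketch of ingredient-level cumulative-dose flagging.
# Converts combination products ordered by volume into ingredient mg
# and alerts when a trailing-24-hour total exceeds a ceiling.

# mg of each ingredient per ml of product (hypothetical table)
PRODUCTS = {
    "oxycodone-acetaminophen solution": {"acetaminophen": 65.0,
                                         "oxycodone": 1.0},
}
DAILY_CEILING_MG = {"acetaminophen": 4000.0}

def flag_administrations(admins):
    """admins: list of (hour, product, ml) tuples. Returns alerts."""
    alerts = []
    for i, (hour, product, ml) in enumerate(admins):
        for ingredient in PRODUCTS[product]:
            ceiling = DAILY_CEILING_MG.get(ingredient)
            if ceiling is None:
                continue
            # total mg of this ingredient in the trailing 24 hours
            total = sum(v * PRODUCTS[p][ingredient]
                        for h, p, v in admins[:i + 1]
                        if hour - h < 24 and ingredient in PRODUCTS[p])
            if total > ceiling:
                alerts.append(f"hour {hour}: {ingredient} "
                              f"{total:g} mg exceeds {ceiling:g} mg/24h")
    return alerts

# Three 10-ml doses in 4 hours (as in the MAUDE report), continued at
# the same rate: the flag fires well before the day is out.
doses = [(h, "oxycodone-acetaminophen solution", 10.0)
         for h in range(0, 24, 2)]
print(flag_administrations(doses))
```

The point of the sketch is that the computation is elementary; what was missing in the case reported to MAUDE was not computing power but a design decision to surface the ingredient dose in mg rather than a clinically meaningless volume.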

HIT designers need to do their part if they want to be considered part of the clinical team.

If they don't do their part voluntarily and stop producing mission-hostile, 1970s-paradigm "inventory system of widgets" health IT, as I've written before, they will increasingly find themselves part of that team in an involuntary manner - as defendants in litigation.

Finally, due to issues such as: an utter lack of governmental regulation of the HIT industry and a lack of defect and error reporting requirements; contractual gag and hold-harmless clauses (as raised by Penn researchers Koppel and Kreda in JAMA here); physician fear of hospital retaliation such as sham peer review for HIT whistleblowing especially now that more physicians are becoming hospital employees; and other causes, there is no reliable data on the incidence of EHR-related medical errors and malpractice.

As in my paper "Remediating an Unintended Consequence of Healthcare IT: A Dearth of Data on Unintended Consequences of Healthcare IT", there is also no reliable data on "near misses" or potential IT related mishaps that were averted by serendipity. These near misses expose patients to risk. With no reporting, there is no systematic data that can be used for remediation and prevention.


I thus present once again the following thoughts from my post Science or Politics? The New England Journal and "The 'Meaningful Use' Regulation for Electronic Health Records":

I believe we should hold off national Health IT roll outs until we:

  • learn sufficiently from failures such as the UK CfH and our own military's AHLTA debacle on how to avoid same, which can injure and kill patients and waste massive money and resources healthcare can ill afford, and more importantly that can be better used elsewhere - such as care of the poor;
  • improve the technology's usability, safety and efficacy through the years of Medical Informatics and other disciplinary research needed, that was short circuited through the invention of the ONC office by Bush (although national HIT then remained a goal, not a mandate), and the 'militarization' of ONC under Obama whereby HIT was unilaterally declared a proven technology and mandated for national rollout;
  • end the contractual and fear-based censorship of information on health IT problems, and patient injuries and deaths related to the devices; and
  • meaningfully regulate these devices that have increasingly become governors of care delivery.
I have written extensively on these topics at this blog, at my academic website on health IT failure, and other sources (see list at end of my bio).

When there are significant doubts about a medication or medical device, we ought not push for national rollout.


Health IT devices have gotten special accommodation, and it's not on the basis of any rigorous science I am familiar with.


-- SS

Wednesday, July 14, 2010

Eli Lilly CEO on “America’s Growing Innovation Gap”

In “America’s Growing Innovation Gap”, WSJ, July 9, 2010, Eli Lilly CEO John C. Lechleiter, Ph.D. writes that:


“…the most important elements are the seeds of innovation, which equate to talented people and their ideas.”


He then suggests these people are “highly skilled immigrants” abroad.


In my own circle of friends, I know American pharma industry cast-offs who are both brilliant and talented. One with dual MS degrees in mathematics and computer science from a major university, one a skilled bioinformaticist I've had teach my healthcare informatics students as guest lecturer, one a brilliant programmer who could be considered the grandfather of computer image manipulation, another with years of expertise in pharma knowledge discovery.


Then there's me – former Director of a Merck R&D support group and of The Merck Index - with degrees in medicine and post-doctoral specialization in biomedical informatics and information science, plus I'm an extra-class amateur radio licensee who understands complex technology at a level far beyond that of the usual pharmaceutical company worker.

Yet no donuts for us. In recent years the pharmaceutical industry won’t grant any of us the courtesy even of an interview.

However, in Mar. 2009 as I documented here, I did receive an email solicitation from Lilly that read as follows ("sic's" are mine):

“Your Help Is Requested for a Eli Lilly Career Opportunity! (sic) I am a member of the Staffing Team at Eli Lilly. I were referred to me (sic) as person who specializes in pharmaceutical based informatics. I wanted to reach out to me (sic), to see if you maybe able (sic) to recommend anyone that could qualify for the below position (sic)."

I was not exactly inspired by this solicitation, perhaps written by one of the "highly skilled immigrants" Lechleiter covets.

Nor was I inspired by the earlier solicitation I documented at my Jan. 2009 post "What, Me Worry? Lilly Fined Over Zyprexa, Should Be Fined For eRecruitment Inanity As Well?"

I suggest that if Dr. Lechleiter wishes to close America’s purported "innovation gap," he spend some time away from the executive castle and perhaps review some resumes – and the job solicitations his company proffers – in his HR department.

A cause of the "innovation gap" may be leadership xenophilia, at the expense of the American born-and-raised scientists the pharma industry is so fond of discarding.

-- SS

Science or Politics? The New England Journal and "The 'Meaningful Use' Regulation for Electronic Health Records"

In the NEJM article "The 'Meaningful Use' Regulation for Electronic Health Records", David Blumenthal, M.D., M.P.P. (ONC Chair) and Marilyn Tavenner, R.N., M.H.A. (10.1056/NEJMp1006114, July 13, 2010) available at this link, the opening statement is (emphases mine):

The widespread use of electronic health records (EHRs) in the United States is inevitable. EHRs will improve caregivers’ decisions and patients’ outcomes. Once patients experience the benefits of this technology, they will demand nothing less from their providers. Hundreds of thousands of physicians have already seen these benefits in their clinical practice.

I think it fair to say those are grandiose statements and predictions presented with a tone of utmost certainty in one of the world's most respected scientific medical journals.


Even though it is a "perspectives" article, I long ago learned that in writing for esteemed scientific journals of worldwide impact, statements of certainty are best avoided, or, if made, should be exceptionally well referenced.

I note the lack of footnotes showing the source(s) of these statements.

I also note the lack of mention of literature refuting or potentially refuting these statements of certainty. I can think of more than a few examples of the latter just off the top of my head [ref. 1-15 below, certainly not a comprehensive list but merely skimming the surface].

In politics, however, no such sourcing is necessary. It's easy for a politician to say "Free markets will not give us the healthcare system we want" or, conversely, "I never heard about the DOJ's selective dismissal of charges against people intimidating voters at a voting site in Philadelphia."

So, did the NEJM publish fact, or political platitude?

Can someone provide a list of peer reviewed, rigorous studies that back the assertions of certainty in 10.1056/NEJMp1006114, and override the body of literature that could cast doubt on these assertions of certainty?

Since it's people's lives at stake, not an inventory of widgets, I've promoted the idea of holding off on national roll outs until we:

  • learn sufficiently from failures such as the UK's NPfIT (National Programme for IT) in the NHS and our own military's AHLTA debacle on how to avoid same, which can injure and kill patients and waste massive money and resources healthcare can ill afford, and more importantly that can be better used elsewhere - such as care of the poor;
  • improve the technology's usability, safety and efficacy through the years of Medical Informatics and other disciplinary research needed, that was short circuited through the invention of the ONC office by Bush (although national HIT then remained a goal, not a mandate), and the 'militarization' of ONC under Obama whereby HIT was unilaterally declared a proven technology and mandated for national rollout;
  • end the contractual "hold vendor harmless clauses" (see Koppel and Kreda's 2009 JAMA article here), and fear-based censorship of information on health IT problems, patient injuries and deaths related to the devices; and
  • meaningfully regulate these devices that have increasingly become governors of care delivery.

I have written extensively on these topics at this blog, at my academic website on health IT failure, and other sources (see list at end of my bio).

When there are significant doubts about a medication or medical device, we ought not push for national rollout.


Health IT devices have gotten special accommodation, and it's not on the basis of any rigorous science I am familiar with.

-- SS

References: (hyperlinks to these and others can be found at my medical informatics teaching sites here and here):

1. "Health IT Project Success and Failure: Recommendations from Literature and an AMIA Workshop," Bonnie Kaplan and Kimberly D. Harris-Salamone, JAMIA, May/June 2009.

2. "E-Health Hazards: Provider Liability and Electronic Health Record Systems," Hoffman and Podgurski's follow-up paper on EHR medical and legal risks.

3. "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors," Ross Koppel, PhD, et al., Journal of the American Medical Association 2005;293:1197-1203.

4. "Electronic Health Record Use and the Quality of Ambulatory Care in the United States," Arch Intern Med 2007;167:1400-1405. The authors examined electronic health record (EHR) use throughout the U.S. and the association of EHR use with 17 basic quality indicators. They concluded that "as implemented, EHRs were not associated with better quality ambulatory care."

5. "Pessimism, Computer Failure, and Information Systems Development in the Public Sector," Shaun Goldfinch, University of Otago, New Zealand, Public Administration Review 67;5:917-929, Sept/Oct 2007.

6. Bad Health Informatics Can Kill. This site contains summaries of a number of reported incidents in healthcare where IT was the cause or a significant factor. It comes from the Working Group for Assessment of Health Information Systems of the European Federation for Medical Informatics (EFMI).

7. The U.S. National Research Council report "Current Approaches to U.S. Health Care Information Technology Are Insufficient."

8. The UK Public Accounts Committee report on disastrous problems in their £12.7 billion national EMR program.

9. Gateway reviews of the UK National Programme for IT from the Office of Government Commerce (OGC), released under the UK's Freedom of Information Act.

10. A report on the serious problems with the Department of Defense's AHLTA system, "Electronic Records System Unreliable, Difficult to Use, Service Officials Tell Congress." (This system, as I wrote here, is slated for abandonment. I cannot imagine it was greatly improving outcomes.)

11. A New York Times report, "Little Benefit Seen, So Far, in Electronic Patient Records," on Jha's research at the Harvard School of Public Health, which compared 3,000 hospitals at various stages in the adoption of computerized health records and found little difference in the cost and quality of care.

12. An American Journal of Medicine paper, "Hospital Computing and the Costs and Quality of Care: A National Study," by Himmelstein and Woolhandler at Harvard Medical School, which also concluded that "as currently implemented, hospital computing might [very] modestly improve process measures of quality but not administrative or overall costs."

13. A Milbank Quarterly article, "Tensions and Paradoxes in Electronic Patient Record Research: A Systematic Literature Review Using the Meta-narrative Method," by Greenhalgh, Potts, Wong, Bark and Swinglehurst at University College London.

14. "Electronic Health Records' Limited Successes Suggest More Targeted Uses," Catherine M. DesRoches et al., Health Affairs 29, no. 4 (2010): 639-646.

15. NORCAL Mutual Insurance Company, "Electronic Health Records: Recognizing and Managing the Risks" (PDF here).

Addendum 7/14:

I think this statement at "The Road to Hellth" blog in a post entitled "Meaningful Ruse" that cites my posts is apropos:

... Meaningful use entered our vocabulary in early 2009 as part of a $20+ billion gift from doctors, hospitals and the taxpayers to the needy folks at Cerner, GE, Siemens, Allscripts, Epic and other purveyors of complex, expensive and difficult-to-use and potentially even dangerous medical software products.

-- SS

Selasa, 13 Juli 2010

New CMS Chief Donald Berwick: a Trojan Horse for Quackery?

On July 7, President Obama appointed Dr. Donald Berwick as Administrator of the Centers for Medicare and Medicaid Services (CMS). Dr. Berwick, a pediatrician, is well known as the CEO of the non-profit Institute for Healthcare Improvement (IHI), which "exists to close the enormous gap between the health care we have and the health care we should have — a gap so large in the US that the Institute of Medicine (IOM) in 2001 called it a 'quality chasm'.” Dr. Berwick was one of the authors of that IOM report. His IHI has been a major player in the patient safety movement, most notably with its "100,000 Lives Campaign" and, more recently, its "5 Million Lives Campaign."

Berwick's CMS gig is a "recess appointment": it was made during the Senate's July 4th recess period, without a formal confirmation hearing---although such a hearing must take place before the end of this Senate term, if he is to remain in the position. A recent story suggested that Obama made the recess appointment in order to avoid a reprise of "last year's divisive health care debate." The president had originally nominated Berwick for the position in April, and Republicans have opposed "Berwick's views on rationing of care," claiming that he "would deny needed care based on cost."

A "Patient-Centered Extremist"

If there is a problem with the appointment, it is likely to be roughly the opposite of what Republicans might suppose: Dr. Berwick is a self-described "Patient-Centered Extremist." He favors letting patients have the last word in decisions about their care even if that means, for example, choosing to have unnecessary and expensive hi-tech studies. In an article for Health Affairs published about a year ago, he explicitly argued against the "professionally dominant view of quality of health care":



I think it wrong for the profession of medicine—or any other health care profession, for that matter—to “reserve to itself the authority to judge the quality of its work.” I eschew compromise words like “partnership.” For better or worse, I have come to believe that we—patients, families, clinicians, and the health care system as a whole—would all be far better off if we professionals recalibrated our work such that we behaved with patients and families not as hosts in the care system, but as guests in their lives. I suggest that we should without equivocation make patient-centeredness a primary quality dimension all its own, even when it does not contribute to the technical safety and effectiveness of care.

A new definition. My proposed definition of “patient-centered care” is this: The experience (to the extent the informed, individual patient desires it) of transparency, individualization, recognition, respect, dignity, and choice in all matters, without exception, related to one’s person, circumstances, and relationships in health care.

Does this mean that Dr. Berwick would also eschew professional, i.e., expert, judgment in favor of patients' wishes? In a word, yes:

Evidence-based medicine sometimes must take a back seat. First, leaving choice ultimately up to the patient and family means that evidence-based medicine may sometimes take a back seat. One e-mail correspondent asked me, “Should patient ‘wants’ override professional judgment about whether an MRI is needed?” My answer is, basically, “Yes.” On the whole, I prefer that we take the risk of overuse along with the burden of giving real meaning to the phrase “a fully informed patient.”

Dr. Berwick is not so naive as this opinion might suggest. He envisions a "mature dialogue" in such a case, and argues that "if, over time, a pattern emerges of scientifically unwise or unsubstantiated choices...then we should seek to improve our messages..." He also admits that there might be an occasional patient whose demands are so unreasonable that "it is time to say, 'No'." That exception, he argues, should not dictate the rule.

There are situations in which most civilized people would agree with Dr. Berwick's view of 'patient-centeredness'. In both the Health Affairs article and in his recent address to the 2010 graduating class of the Yale School of Medicine, he offered real examples of petty, arbitrary hospital rules causing unnecessary sorrow for patients and their loved ones. It is in such contexts that he makes a convincing case that health professionals ought to behave "as guests in their lives." In an interview for the New York Times, he argued:


We don’t have a standard of services or processes that are comfortable for patients. We have built a technocratic castle, and when people come into it, they are intimidated.

Nothing to disagree with there. To create that standard, moreover, would not undermine settled medical practice ethics---it would celebrate them, even as it rightly embarrasses the profession for having taken so long to do so.


Enter the Woo

Eschewing the scientific basis for modern medical practice, however, is another matter. In February of 2009, Dr. Berwick gave a 'keynote' address at the IOM and Bravewell Collaborative-sponsored Summit on Integrative Medicine and the Health of the Public. He shared the podium with Mehmet Oz, Dean Ornish, Senator Tom Harkin, and other advocates of pseudoscientific health claims. I wrote about the conference at the time, mainly to call attention to its misleading use of the term "integrative medicine": literature emanating from the Summit characterized it as "preventive" and "patient-centered," whereas the only characteristic that actually distinguishes it from modern medicine is its inclusion of various forms of pseudomedicine.


I noticed that Dr. Berwick was on the speaker roster, which I found disappointing: I imagined that he had either gone over to the Dark Side or, perhaps, was sufficiently naive about the topic to have been duped; or, more likely, that he had cynically accepted the offer to further his ambitions. I didn't bother to listen to his speech until the CMS appointment was announced a few days ago.

It is troubling, to say the least. Dr. Berwick did not argue, as he had in the NYT piece, that "If we doctors feel a person is going to make unwise choices, we have to take on the responsibility of being teachers, educators and informers." Rather, he praised his fellow speakers, most of whom were spouting nonsense, for their "reach" and "eloquence." He praised the IOM for its "glorious record...in pursuit of better designs in health care...traditional, allopathic curative care and now migrating into this distinguished and important new arena." He mentioned homeopathy and acupuncture, not to wonder why they should be promoted as effective, but merely to warn that they will fail---presumably in some economic sense---if they try to compete with each other for reimbursement.

Such language, and Dr. Berwick's very presence at the Summit, were a far cry from advocating "patient-centeredness." What they amounted to was a generous endorsement of pseudoscientific practices and of the socio-political movement that promotes them. Even granting some naivete on his part (he called himself "an amateur at this topic"), he must have known this. Such an endorsement, unlike tearing down the "technocratic castle," has ethical implications at least as profound as those that Dr. Berwick tacitly or explicitly relies upon to support his arguments for patient-centeredness.

"Physicians have no Immunity to Moral or Ethical Constraints"

The relevant medical ethics treatises (reviewed here) are in substantial agreement that it is unethical for physicians to prescribe scientifically implausible methods or to refer patients to other practitioners for the same purpose. They are also in agreement that it is unethical to prescribe a placebo to a patient while claiming that the treatment has specific biologic activity---a point that has been vigorously argued in the UK this year, with regard to homeopathy. These ethical tenets are not mere odes to nerdy, sciency thinking; they are matters of honesty and integrity---fundamental bases for ethical interactions between physicians and patients.

In 1983, philosophers Clark Glymour and Douglas Stalker published an article in the New England Journal of Medicine titled “Engineers, cranks, physicians, magicians.” They framed modern medicine as follows, comparing it to what was then called "holistic medicine" (the article is quoted extensively here):



Medicine in industrialized nations is scientific medicine. The claim tacitly made by American or European physicians, and tacitly relied on by their patients, is that their palliatives and procedures have been shown by science to be effective. Although the physician’s medical practice is not itself science, it is based on science and on training that is supposed to teach physicians to apply scientific knowledge to people in a rational way.

The practice of medicine in the United States and in other industrialized nations is a form of consultant engineering...

That statement is just as accurate now---even more so, in this era of Evidence-Based Medicine---as it was nearly 30 years ago, even if some might find the likening of medicine to engineering displeasing. Nor is it at odds with almost any definition of "patient-centeredness," other than one that presumes that the patient's desires trump the physician's ethics:



A physician engineer can act as consoler; nothing in either logic or social psychology forbids it. But certain combinations are impossible or extraordinarily unlikely. A physician engineer cannot honestly claim powers of magic or occult knowledge. The principles governing scientific reasoning and belief are negative as well as positive, and they imply that occult doctrines are not worthy of belief. Moreover, physician engineers have no immunity to moral or ethical constraints. On the contrary, they are by training and by culture enmeshed in a tradition of rational thought about the obligations and responsibilities of their profession.

Dr. Berwick---if he really believes what his presence and words at the "Integrative Medicine" Summit imply---is playing with ethical fire. (If, as I hope, he doesn't really believe those things, he's playing with ethics of another kind.) Will we begin to see pseudomedicine "integrated" into Medicare and Medicaid? That is certainly the expectation of those who observed Dr. Berwick's performance at the Summit, and who appear intent on holding him to his word.

KA

Two other blogs that have addressed this issue are:

Dr. RW: Not only evidence based medicine but science based medicine may take a back seat in Donald Berwick's vision for patient centered care

Dr. David Gorski: Dr. Donald Berwick and “patient-centered” medicine: Letting the woo into the new health care law?

Meaningful Use Final Rule: Have the Administration and ONC Put the Cart Before the Horse on Health IT?

Meaningful use before meaningful usability?

The Dept. of HHS today has released the final version of "Meaningful Use" rules on HIT, which can be seen here: Meaningful Use – Final Version Full Text.

By what standard of diligence were the rules for "meaningful use" finalized on the very day that NIST is holding a conference on health IT "usability" ("Usability in Health IT: Technical Strategy, Research, and Implementation", http://www.nist.gov/itl/usability_hit.cfm), a conference whose very premise implies there is a problem with the usability of the experimental devices physicians are supposed to "meaningfully use"?

Don't take my word on the issue of usability problems...

The National Research Council of the National Academies (considered the highest scientific authority in the U.S.) issued a 2009 report on HIT. That report, whose study was presided over by noted HIT pioneers G. Octo Barnett (Harvard/MGH) and William Stead (Vanderbilt), found that current HIT does not support clinicians' cognitive needs:

CURRENT APPROACHES TO U.S. HEALTH CARE INFORMATION TECHNOLOGY ARE INSUFFICIENT

WASHINGTON -- Current efforts aimed at the nationwide deployment of health care information technology (IT) will not be sufficient to achieve medical leaders' vision of health care in the 21st century and may even set back the cause, says a new report from the National Research Council. The report, based partially on site visits to eight U.S. medical centers considered leaders in the field of health care IT, concludes that greater emphasis should be placed on information technology that provides health care workers and patients with cognitive support, such as assistance in decision-making and problem-solving.

How about the HIT industry trade/"educational" group HIMSS itself? I think reasonable people might conclude from its mid-2009 report that the technology is not ready for "meaningful use" on a national scale:

Defining and Testing EMR Usability: Principles and Proposed Methods of EMR Usability Evaluation and Rating (PDF)
HIMSS EHR Usability Task Force
June 2009

EXECUTIVE SUMMARY
Electronic medical record (EMR) adoption rates have been slower than expected in the United States, especially in comparison to other industry sectors and other developed countries. A key reason, aside from initial costs and lost productivity during EMR implementation, is lack of efficiency and usability of EMRs currently available. Achieving the healthcare reform goals of broad EMR adoption and “meaningful use” will require that efficiency and usability be effectively addressed at a fundamental level.

These "usability" problems require long term solutions. There are no quick fix, plug and play solutions. Years of research are needed, and years of system migrations as well for existing installations.

Yet we now have an HHS Final Rule on "meaningful use" of experimental, unregulated medical devices that the industry itself admits have major usability problems, along with a growing body of literature on the risks entailed.

For crying out loud, talk about putting the cart before the horse...

Something's very wrong here...

However, this situation is anything but humorous.

How much more "cart before the horse" can government get?


Poor usability promotes medical error. Medical error puts patients at risk of iatrogenic injury and death.

Are we entering an era of cybernetic medical assault on our patients (and perhaps criminal negligence and manslaughter, a term I do not use lightly) through irrational exuberance in computing -- and through exuberance about the profits to be made by the HIT industry?

Unless we slow down in our exuberance and recklessness on HIT diffusion, my fear is that we very well might be.

-- SS

Addendum:

Also see my follow-up July 14, 2010 post "Science or Politics? The New England Journal and the 'Meaningful Use' Regulation for Electronic Health Records."

-- SS


Minggu, 11 Juli 2010

Health IT and 'High Regulatory Standards': Criminal Negligence for Implementing Defective Systems That Put Data in the Wrong Charts?

Over at the HIStalk blog (a blog whose owner remains anonymous, and who uses an ISP that does not reveal information that could be used to identify him, apparently out of fear of retaliation for controversial stories he posts), the following appeared:

Monday Morning Update 7/12/10

From Holy Smoke: “Re: Cerner. Misidentification incidents have been reported with Cerner PowerChart and Millenium in hospitals in Indiana, Michigan, and others after a Cerner upgrade. Entries are placed in the wrong electronic chart and reviewed data is for the wrong patient.” Unverified. I saw nothing in the FDA’s Maude database, so if it’s happening, customers should file an experience report.

While the reports are "unverified", I can add that the FDA MAUDE database would not show any data if this problem were recent, as I believe MAUDE contributions are reviewed by FDA before posting.

(7/21/10 addendum: various sources confirm this occurred at a religious-denomination hospital chain headquartered in the Great Lakes region of the U.S.)

However, as I wrote in Oct. 2009 at "Our Policy Is To Always Have Unabashed Faith In The Computer ... Except When It Screws Up, And Then It's The Doctor's Fault", the MAUDE database does contain some error reports from this vendor (one of the very few HIT vendors who actually file such reports) such as:

http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfRES/res.cfm?id=64345
Cerner Millennium RadNet Auto Launch Study and Auto Launch Report software functionalities. Defects in the Auto Launch functionality make it possible for a mismatch of patient data.

http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfMAUDE/Detail.CFM?MDRFOI__ID=946706
Patient care delay. The issue involves functionality in cerner millennium powerchart office and powerchart core and affects users that utilize the powerchart inbox and message center inbox. In results to endorse or sign and review, if the user clicks ok and next multiple times in quick succession while attempting to sign a result or a document, the display could lag behind the system's processing of the action, and multiple results or documents could be signed without the user's review. In message center, when clicking ok and next or accept and next, or when deleting or completing messages and moving to the next task, a document could be signed or a message could be deleted without the user's review. Results could be endorsed or documents could be signed without physician review, which could impact patient care. Cerner received communication that a patient's follow-up care was delayed as a result of this issue.

http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfMAUDE/Detail.CFM?MDRFOI__ID=753029
Microbiology set up a program within the cerner computer system to automate the reporting system for hsv (herpes simplex virus)testing. The system was tested with the assistance of cerner and found to be working appropriately. The new system was operational for approximately 3 weeks when it was determined that the first word of the sentence, "no" was inappropriately dropping off of the following sentence: "no herpes simplex virus type 1 or herpes simplex virus type 2 detected by dna amplification. " as such, two of five patients were incorrectly informed that they had hsv before the error was detected. One had started an antiviral creme treatment. The other three did not have follow-up visits until after the correct results were determined. Cerner has looked at the program and has not provided an answer for the system issue. In the interim, the previous manual review and entry process is being used.
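The HSV incident above deserves a closer look: a single dropped token ("no") silently inverted a negative result into a positive one. The sketch below is a hypothetical reconstruction (the actual defect's internals were never made public) of how a one-character slip in report assembly can do exactly that, and how trivial an automated release check for it would have been:

```python
# Canned comment lines as the lab might store them, per the MAUDE report.
CANNED_COMMENT = [
    "no",
    "herpes simplex virus type 1 or herpes simplex virus",
    "type 2 detected by dna amplification.",
]

def render_report(lines):
    """Buggy assembly: the slice starts at index 1, silently dropping
    the first canned line -- here the single word 'no' -- and thereby
    inverting the result's clinical meaning."""
    return " ".join(lines[1:])   # off-by-one: should be lines, not lines[1:]

def render_report_fixed(lines):
    """Correct assembly: every stored line reaches the report."""
    return " ".join(lines)

print(render_report(CANNED_COMMENT))        # reads as a POSITIVE result
print(render_report_fixed(CANNED_COMMENT))  # correct NEGATIVE result
```

A pre-release regression test as simple as asserting that the rendered text still begins with the negation present in the stored comment would have stopped this defect before any patient was wrongly told she had HSV.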

Assuming the current reports from anonymous whistleblower "Holy Smoke" are true, I note the following.

My observations apply to any vendor and/or healthcare organization that puts defective HIT into use in patient care--

At my April 2010 post "Healthcare IT Corporate Ethics 101: 'A Strategy for Cerner Corporation to Address the HIT Stimulus Plan'", I'd written:

A profoundly disappointing lesson in the ethics of the healthcare IT sector (and the B-schools as well) can be gleaned from the following, a paper written by a Cerner employee and two health industry colleagues for a Duke Fuqua School of Business course.

The course is "Health Economics & Strategy (HLTHMGMT 326), Distance Executive MBA" (syllabus here in PDF) ... The paper is entitled "A STRATEGY FOR CERNER CORPORATION TO ADDRESS THE HIT STIMULUS PLAN."

The paper was scrubbed from the Duke Fuqua School of Business Site on or around April 16, 2010 but a cached copy is available here. In that paper what I believe is a combination in restraint of trade was suggested:

This paper seeks to clarify these implications [of the the economic 'stimulus' package - ed.], understand the strengths and weaknesses of various players in the industry and recommend a strategy for Cerner Corporation to maximize its profit from the stimulus package and thereby secure a dominant position in the HIT industry.

... We recommend that Cerner collaborate with other incumbent vendors to establish high regulatory standards, effectively creating a barrier to new firm entry.

High standards? I have some suggestions regarding "high regulatory standards."

I agree that high, in fact, the highest regulatory standards should be upheld.

I think I can safely state that a common regulatory standard in healthcare is that those involved in patient care, even peripherally, act with sound judgment and with patient well-being as a foremost concern. Those acting recklessly and dangerously might be found negligent in a civil sense, or, if acting recklessly in a willful and knowing manner, might be found criminally negligent.

Two descriptions of criminal negligence:

Criminal negligence - (law) recklessly acting without reasonable caution and putting another person at risk of injury or death (or failing to do something with the same consequences).

Criminal negligence is conduct which is such a departure from what would be that of an ordinary prudent or careful person in the same circumstance as to be incompatible with a proper regard for human life or an indifference to consequences. Criminal negligence is negligence that is aggravated, culpable or gross. (PDF)

It is damn well clear that electronic medical records systems must function without unpredictable data errors that put data into the wrong patients' charts, thus producing two errors and two possibilities for patient harm: an erroneous absence of appropriate data in one patient's chart, and an erroneous presence of inappropriate data in another's.

This is not a theoretical argument open to debate, and this is not a drill.

A recent IT-related data error involving one single medication nearly killed my mother.

In addition, the "learned intermediary" excuse used to punt liability onto physicians and other clinicians for patient harm due to IT errors does not apply here, and this is also not open to debate. Physicians, even the most learned, are not clairvoyant; they should not be expected to know which chambers are empty and which chambers are loaded in a game of cybernetic Russian Roulette with the data on their patients.

Having an EMR maintain fundamental relational integrity, i.e., not place clinical data entered in good faith by trusting clinicians into another patient's chart, is not rocket science.
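To put that claim in perspective: referential integrity between clinical results and patient identities is something any mainstream database engine has enforced automatically for decades. A minimal sketch using SQLite (an illustrative toy schema; real EMR data models are vastly larger, but the principle is identical) shows the engine itself rejecting a result aimed at a nonexistent patient:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only on request

conn.execute("CREATE TABLE patient (patient_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE lab_result (
    result_id  INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    value      TEXT NOT NULL)""")

conn.execute("INSERT INTO patient VALUES (1, 'Doe, Jane')")

# A result tied to a real patient is accepted...
conn.execute("INSERT INTO lab_result VALUES (100, 1, 'K 4.1 mmol/L')")

# ...but a result pointing at a patient who does not exist is refused
# by the engine itself, before any clinician ever opens a chart.
try:
    conn.execute("INSERT INTO lab_result VALUES (101, 99, 'K 6.8 mmol/L')")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

print("orphan result rejected:", orphan_rejected)
```

Note the limits of the sketch: a foreign key only rejects data aimed at a nonexistent identifier. Data filed under the wrong existing patient -- the failure mode reported above -- can only be caught by the end-to-end upgrade testing this post calls for, e.g., round-trip tests that store a result under a known identifier and verify it is retrievable from that chart and no other.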

Those who design, those who implement, and those who put into production (i.e., for use by physicians, nurses and other clinicians in the care of patients) any health IT "upgrade" without the extensive testing, testing and more testing necessary to prove proper operation on such a fundamental point as maintenance of relational integrity (i.e., correct patient identity in data storage and retrieval) knew, should have known, or should have made it their business to know that doing so puts patients at risk of injury or death.

Putting an "upgraded" software application with such fundamental defects into actual use in real, live patients care environments - for whatever reason, e.g., finances, vendor marketing pressures, meeting planned objectives and numbers, obtaining a bonus, etc. - reflects in my view:

"... a departure from what would be that of an ordinary prudent or careful person in the same circumstance as to be incompatible with a proper regard for human life or an indifference to consequences."

Thus:

In upholding the highest regulatory standards, if patients are harmed or die as a result of this type of HIT snafu, criminal charges against the responsible IT, clinical and administrative personnel would be an appropriate remedy to this type of negligence.

As I wrote at "$4 Billion Military EMR "AHLTA" to be Put Out of Its Misery?", in my view as of 2010, legal action is the only way that the domain of healthcare IT can be returned to a field "of, by and for" clinicians, instead of "of, by and for" those who live off the hard work of clinicians and their patients.

-- SS