https://youtu.be/R16GXtgSzUE
https://www.politico.com/story/2019/02/03/health-risk-scores-opioid-abuse-1139978
Companies are starting to sell “risk scores” to doctors, insurers and hospitals to identify patients at risk of opioid addiction or overdose, without patient consent and with little regulation of the kinds of personal information used to create the scores.
While the data collection is aimed at helping doctors make more informed decisions on prescribing opioids, it could also lead to blacklisting of some patients and keep them from getting the drugs they need, according to patient advocates.
Over the past year, powerful companies such as LexisNexis have begun hoovering up the data from insurance claims, digital health records, housing records, and even information about a patient’s friends, family and roommates, without telling the patient they are accessing the information, and creating risk scores for health care providers and insurers. Health insurance giant Cigna and UnitedHealth’s Optum are also using risk scores.
There’s no guarantee of the accuracy of the algorithms and “really no protection” against their use, said Sharona Hoffman, a professor of bioethics at Case Western Reserve University. Overestimating risk might lead health systems to focus their energy on the wrong patients; a low risk score might cause a patient to fall through the cracks.
No law prohibits collecting such data or using it in the exam room. Congress hasn’t taken up the issue of intrusive big data collection in health care. It’s an area where technology is moving too fast for government and society to keep up.
“Consumers, clinicians and institutions need to understand that personalized health is a type of surveillance,” says Harvard University professor Eric Perakslis. “There is no way around it, so it needs to be recognized and understood.”
The justification for risk scoring is the terrible opioid epidemic, which kills about 130 Americans a day and is partly fueled by the overprescribing of legal painkillers. The Trump administration and Congress have focused billions on fighting the epidemic, and haven’t shied from intrusive methods to combat it. In its national strategy, released Thursday, the White House Office of National Drug Control Policy urged requiring doctors to look up each patient in a prescription drug database.
Health care providers legitimately want to know whether a patient in pain can take opioids safely, in what doses, and for how long — and which patients are at high risk of addiction or overdose. Data firms are pitching their predictive formulas, or algorithms, as tools that can help make the right decisions.
The practice scares some health care safety advocates. While the scoring is aimed at helping doctors figure out whether to prescribe opioids to their patients, it might pigeonhole people without their knowledge and give doctors an excuse to keep them from “getting the drugs they need,” says a critic, Lorraine Possanza of the ECRI Institute.
The algorithms assign each patient a number on a scale from zero to 1, showing their risk of addiction if prescribed opioids. The risk predictions sometimes go directly into patients’ health records, where clinicians may use them, for example, to turn down or limit a patient’s request for a painkiller.
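For readers curious what a zero-to-1 score looks like mechanically, the sketch below shows one common way such a number can be produced: a logistic function applied to a weighted sum of patient attributes. The feature names and weights here are invented purely for illustration; the vendors do not disclose what actually goes into their formulas.

```python
import math

# Hypothetical feature weights -- purely illustrative. Vendors treat their
# real inputs and coefficients as proprietary "secret sauce."
WEIGHTS = {
    "prior_substance_use_dx": 1.8,      # prior substance-use diagnosis (0 or 1)
    "overlapping_benzo_rx": 0.9,        # concurrent benzodiazepine prescription (0 or 1)
    "days_supplied_last_90": 0.02,      # opioid days supplied in the past 90 days
    "household_member_opioid_rx": 0.6,  # household member with an opioid prescription (0 or 1)
}
INTERCEPT = -3.5

def risk_score(patient: dict) -> float:
    """Map a weighted sum of patient attributes to a number between 0 and 1."""
    z = INTERCEPT + sum(weight * patient.get(name, 0) for name, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Example: a patient with a prior diagnosis and 30 days of opioids supplied.
print(round(risk_score({"prior_substance_use_dx": 1, "days_supplied_last_90": 30}), 2))  # ~0.25
```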
Doctors can share the patients’ scores with them — if they want to, the data mongers say. “We stop really short of trying to advocate a particular opinion,” said Brian Studebaker from one of the risk scoring companies, the actuarial firm Milliman.
According to addiction experts, however, predicting who’s at risk is an inexact science. Past substance abuse is about the only clear red flag when a doctor is considering prescribing opioid painkillers.
But several companies POLITICO spoke with already are selling the predictive technology. None would name customers. Nor would they disclose exactly what goes into the mathematical formulas they use to create their risk scores — because that information is the “secret sauce” they’re selling.
Congress has shown some interest in data privacy; a series of hearings last year looked into thefts of data or suspect data sharing processes by big companies like Facebook. But it hasn’t really delved into the myriad health care and health privacy implications of data crunching.
Consumers have a “basic expectation” that the data they provide to websites and apps “won’t be used against them,” said Sen. Brian Schatz (D-Hawaii), who co-sponsored legislation last year barring companies from using individuals’ data in harmful ways. The HIPAA privacy law of the late 1990s restricted how doctors share patient information, and Schatz says “online companies should be required to do the same.”
A bill from Sen. Ed Markey (D-Mass.), S. 1815 (115), would require data brokers to be more transparent about what they collect, but neither his bill nor Schatz’s specifically addresses data in health care, a field in which separating the harmful from the benign may prove especially delicate.
The use of big data in this arena impinges on human rights beyond simple violation of privacy, says data governance expert Martin Tisne. He argues in a recent issue of Technology Review for a Bill of Data Rights that includes the right to be secure against “unreasonable surveillance” and unfair discrimination on the basis of data.
Risk scores may be ‘the way of the future’
Research into opioid risk factors is nascent. The University of Pittsburgh was awarded an NIH grant last year to determine whether computer programs incorporating Medicaid claims and clinical data are more accurate than ones based on claims alone.
Risk scores could be helpful if they help clinicians begin candid conversations about the unique circumstances that could make a patient more vulnerable to opioid use disorder, said Yngvild Olsen, a board member at the American Society of Addiction Medicine.
But the algorithms could be relying on inaccurate public data, and they may disempower patients, leaving them in the dark about the Big Brotherish systems rating them. Another key challenge, says Case Western’s Hoffman, is ensuring that the predictions don’t override a clinician’s instinct or reinforce biases.
It’s difficult to imagine what a robust safeguard against the misuse of predictive algorithms would even look like, she said. One approach might be to revise health care privacy law to prohibit groups from profiting from health data or algorithms that crunch it. But that won’t keep tech companies from making predictions based on whatever they can access.
Algorithms predicting health risk are likely “the way of the future,” she said. “I’m afraid we need to learn to live with them … but get more education.”
The companies using predictive analytics to address the opioid crisis include insurer Cigna, which announced last year it was expanding a program flagging patients likely to overdose. The insurer has a “number of tools that enable further insights,” Cigna’s Gina Papush said. Optum has also begun stratifying patients by opioid-related risk. The company said a spokesperson was unavailable to comment.
Milliman won an FDA innovation challenge to create an artificial intelligence-based algorithm that predicts whether patients will receive an opioid use disorder diagnosis in the next six months. The company offers to provide a list of high-risk patients to payers, who can hand the relevant information to clinicians.
Milliman has signed early-stage contracts with some accountable care organizations. It assigns patients a risk score from zero to 1, and also compares them to other patients.
Another company, called HBI Solutions, uses a mathematical formula that learns from deidentified claims data, said senior vice president Laura Kanov. Payers or providers can run the formula on their own patient data. Unlike some companies, HBI displays the reasoning behind each risk score, she said.
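HBI’s claim of showing the reasoning behind each score resembles what data scientists call per-feature contributions: breaking the number down into the pieces each input added. The sketch below illustrates that idea with invented attributes and weights; it is not HBI’s actual method.

```python
# Invented weights, for illustration only; this is not HBI's actual model.
WEIGHTS = {
    "prior_substance_use_dx": 1.8,
    "overlapping_benzo_rx": 0.9,
    "days_supplied_last_90": 0.02,
}

def explain_score(patient: dict) -> list[tuple[str, float]]:
    """Rank each attribute's contribution to the score, largest first."""
    contributions = [(name, weight * patient.get(name, 0)) for name, weight in WEIGHTS.items()]
    return sorted(contributions, key=lambda item: item[1], reverse=True)

print(explain_score({"prior_substance_use_dx": 1, "days_supplied_last_90": 30}))
# [('prior_substance_use_dx', 1.8), ('days_supplied_last_90', 0.6), ('overlapping_benzo_rx', 0.0)]
```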
LexisNexis sells health plans a tool that flags patients who may already have opioid use disorder. Someone could be at greater risk if their relatives or roommates abuse opioids, or if they use a pharmacy known for filling high volumes of pills, said LexisNexis’s Shweta Vyas. LexisNexis can draw “relatively strong connections” between people based on public records showing they live at the same address, she said. If both parties are enrolled in the same health plan, the software can find patterns “in the aggregate behavior of those two people.”
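Record linkage of the kind LexisNexis describes, connecting people who appear at the same address in public records, can be sketched very roughly as grouping records by a normalized address string. The toy example below uses made-up member IDs and a deliberately crude normalizer; the company’s actual matching is undisclosed and certainly more sophisticated.

```python
from collections import defaultdict

def normalize(address: str) -> str:
    """Crude normalization so '123 Main St.' and '123 main st' compare equal."""
    return " ".join(address.lower().replace(".", "").replace(",", "").split())

def household_links(records: list[dict]) -> dict[str, list[str]]:
    """Group member IDs by normalized address; a shared address suggests a household link."""
    groups = defaultdict(list)
    for record in records:
        groups[normalize(record["address"])].append(record["member_id"])
    return {address: ids for address, ids in groups.items() if len(ids) > 1}

records = [
    {"member_id": "A1", "address": "123 Main St."},
    {"member_id": "B2", "address": "123 main st"},
    {"member_id": "C3", "address": "9 Oak Ave"},
]
print(household_links(records))  # {'123 main st': ['A1', 'B2']}
```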
Sally Satel, an American Enterprise Institute fellow and psychiatrist, warned that risk scores could reinforce what she sees as a mistaken idea that doctors who overprescribe are the key drivers of the opioid crisis. A patient who’s been in a serious car accident could exceed the recommended duration of opioid use because of their mental and emotional state, not just because a doctor gave them too much, she said.
“I don’t know how much an algorithm can examine all those much more personal dimensions,” she said. “I’d love to see this being studied more, instead of being sold.”
Just imagine how accurate these “risk scores” are going to be when a fair but unknown number of fake IDs used by serious substance abusers and/or diverters are thrown into the mix.
Fake ID packages are readily available, and we have been told that fake credit accounts are being opened and debt run up under these fake IDs.
We already have companies, like https://www.lifelock.com, that will sell you a service to watch your credit score. Are we going to see a new industry evolve to watch and help straighten out these risk scores?
Filed under: General Problems
One of the supposed red flags is if you live with or near anyone who takes opioids. By that logic, they should instantly take away the driver’s licenses of anyone who lives with or near anyone who drinks alcohol… alcohol kills way more people than all opioids do, legal plus illegal.
Oh wait. Those are facts. Stupid me. Facts mean nothing these days.
And again, why the ACL-Useless refuses to get involved with such incredibly blatant violations of civil rights is one of the mysteries of the ages. Or one of the crimes of the ages.
All of this is just WRONG, WRONG, WRONG, WRONG, WRONG!! And Ms. Sharona Hoffman says we just need to learn to live with it?!?! Also, if a person hasn’t abused their medication in the last 6 months, there’s a good chance they won’t abuse it in the following 6 months. And it isn’t a customer’s fault what a pharmacy fills. This is just a blatant invasion of privacy and discrimination. As if the “War on Opioid” medication wasn’t bad enough. Kids, it just got a whole hell of a lot hairier.