
DeepMind touts predictive healthcare AI ‘breakthrough’ trained on heavily skewed data

DeepMind, the Google-owned UK AI research firm, has published a research letter in the journal Nature in which it discusses the performance of a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury (AKI).

The company says its model is able to accurately predict that a patient will develop AKI “within a clinically actionable window” up to 48 hours in advance.
In a blog post trumpeting the research, DeepMind couches it as a breakthrough — saying the paper demonstrates artificial intelligence can predict “one of the leading causes of avoidable patient harm” up to two days before it happens.
“This is our team’s biggest healthcare research breakthrough to date,” it adds, “demonstrating the ability to not only spot deterioration more effectively, but actually predict it before it happens.”
Even a surface read of the paper raises some major caveats, though.
Not least that the data used to train the model skews overwhelmingly male: 93.6%. This is because DeepMind’s AI was trained using patient data provided by the US Department of Veterans Affairs (VA).
The research paper states that females comprised just 6.38% of patients in the training dataset. “Model performance was lower for this demographic,” it notes, without saying how much lower.
A summary of dataset statistics also included in the paper indicates that 18.9% of patients were black, although there is no breakout for the proportion of black women in the training dataset. (Logic suggests it’s likely to be less than 6.38%.) No other ethnicities are broken out.
Asked about the model’s performance capabilities across genders and different ethnicities, a DeepMind spokeswoman told us: “In women, it predicted 44.8% of all AKI early, in men 56%, for those patients where gender was known. The model performance was higher on African American patients — 60.4% of AKIs detected early compared to 54.1% for all other ethnicities in aggregate.”
“This research is just the first step,” she confirmed. “For the model to be applicable to a general population, future research is needed, using a more representative sample of the general population in the data that the model is derived from.

“The data set is representative of the VA population, and we acknowledge that this sample is not representative of the US population. As with all deep learning models it would need further, representative data from other sources before being used more widely.

“Our next step would be to work closely with [the VA] to safely validate the model through retrospective and prospective observational studies, before hopefully exploring how we might conduct a prospective interventional study to understand how the prediction might impact care outcomes in a clinical setting.”

“To do this kind of work, we need the right kind of data,” she added. “The VA uses the same EHR [electronic health records] system (widely recognized as one of the most comprehensive EHRs) in all its hospitals and sites, which means the dataset is also very comprehensive, clean, and well-structured.”
So what DeepMind’s ‘breakthrough’ research paper neatly underlines is the reflective relationship between AI outputs and training inputs.
In a healthcare setting, where instructive outputs could be the difference between life and death, it’s not the technology that’s king; it’s access to representative datasets — that’s where the real value lies.
This suggests there’s huge opportunity for countries with taxpayer-funded public healthcare systems to structure and unlock the value contained in medical data they hold on their populations to develop their own publicly owned healthcare AIs.
Indeed, that was one of the recommendations of a 2017 industrial strategy review of the UK’s life sciences sector.
Oxford University’s Sir John Bell, who led the review, summed it up in comments to the Guardian newspaper, when he said: “Most of the value is the data. The worst thing we could do is give it away for free.”

Streams app evaluation

DeepMind has also been working with healthcare data in the UK.
Reducing the time it takes for clinicians to identify when a patient develops AKI has been the focus of an app development project it’s been involved with since 2015 — co-developing an alert and clinical task management app with doctors working for the country’s National Health Service (NHS).
That app, called Streams, which makes use of an NHS algorithm for detecting AKI, has been deployed in several NHS hospitals. And, also today, DeepMind and its NHS trust app development partner are releasing an evaluation of Streams’ performance, led by University College London.
The results of the evaluation have been published in two papers, in Nature Digital Medicine and the Journal of Medical Internet Research.
In its blog DeepMind claims the evaluations show the app “improved the quality of care for patients by speeding up detection and preventing missed cases”, further claiming clinicians “were able to respond to urgent AKI cases in 14 minutes or less” — and suggesting that using existing systems “might otherwise have taken many hours”.
It also claims a reduction in the cost of care to the NHS — from £11,772 to £9,761 for a hospital admission for a patient with AKI.
Though it’s worth emphasizing that under its current contracts with NHS trusts DeepMind provides the Streams service for free. So any cost reduction claims also come with some major caveats.
Simply put: We don’t know the future costs of data-driven, digitally delivered healthcare services — because the business models haven’t been defined yet. (Although DeepMind has previously suggested pricing could be based on clinical outcomes.)

“According to the evaluation, the app has improved the experience of clinicians responsible for treating AKI, saving them time which would previously have been spent trawling through paper, pager alerts and multiple desktop systems,” DeepMind also writes now of Streams.
However, again, the discussion in the evaluation papers contains rather more caveats than DeepMind’s PR does — flagging a long list of counter-considerations, such as training costs and the risk of information overload (and over-alerting) making it more difficult to triage and manage care needs, as well as concluding that more studies are needed to determine the wider clinical impacts of the app’s use.
Here’s the conclusion to one of the papers, entitled A Qualitative Evaluation of User Experiences of a Digitally Enabled Care Pathway in Secondary Care:

Digital technologies allow early detection of adverse events and of patients at risk of deterioration, with the potential to improve outcomes. They may also increase the efficiency of health care professionals’ working practices. However, when planning and implementing digital information innovations in health care, the following factors should also be considered: the provision of clinical training to effectively manage early detection, resources to cope with additional workload, support to manage perceived information overload, and the optimization of algorithms to minimize unnecessary alerts.

A second paper, looking at Streams’ impact on clinical outcomes and associated healthcare costs, concludes that “digitally enabled clinical intervention to detect and treat AKI in hospitalized patients reduced health care costs and possibly reduced cardiac arrest rates”.
“Its impact on other clinical outcomes and identification of the active components of the pathway requires clarification through evaluation across multiple sites,” it adds.
To be clear, the current Streams app for managing AKI alerts does not include AI as a predictive tool. The evaluations being published today are of clinicians using the app as a vehicle for task management and push notification-style alerts powered by an NHS algorithm.
But the Streams app is a vehicle that DeepMind and its parent company Google want to use to drive AI-powered diagnosis and prediction onto hospital wards.
Hence DeepMind also working with US datasets to try to develop a predictive AI model for AKI. (It backed away from an early attempt to use Streams patient data to train AI, after realizing it would need to gain additional clearances from UK regulators.)
Every doctor now carries a smartphone. So an app is clearly the path of least resistance for transforming a service that’s been run on paper and pagers for longer than Google’s existed.
The wider intent behind DeepMind’s app collaboration with London’s Royal Free NHS Trust was stated early on — to build “powerful general-purpose learning algorithms”, an ambition expressed in a Memorandum of Understanding between the pair that has since been cancelled following a major data governance scandal.
The background to the scandal — which we covered extensively in 2016 and 2017 — is that the medical records of around 1.6 million Royal Free NHS Trust patients were quietly passed to DeepMind during the development phase of Streams. Without, as it subsequently turned out, a valid legal basis for the data-sharing.
Patients had not been asked for their consent to their sensitive medical data being shared with the Google-owned company. The regulator concluded they would not have had a reasonable expectation of their medical data ending up there.
The trust was ordered to audit the project — though not the original data-sharing arrangement that had caused the controversy in the first place. It was not ordered to remove DeepMind’s access to the data.
Nor were NHS patients whose data passed through Streams during the app evaluation phase asked for their consent to participate in the UCL/DeepMind/Royal Free study; a note on ‘ethical approval’ in the evaluation papers says UCL judged it fell under the remit of a service evaluation (rather than research) — hence “no participant consent was required”.
It’s an unfortunate echo of the foundational consent failure attached to Streams, to say the very least.

Despite all this, the Royal Free and DeepMind have continued to press on with their data-sharing app collaboration. Indeed, DeepMind is pressing on the accelerator — with its push to go beyond the NHS’ AKI algorithm.

Commenting in a statement included in DeepMind’s PR, Dr Chris Streather, Royal Free London’s chief medical officer and deputy chief executive, enthuses: “The findings of the Streams evaluation are incredibly encouraging and we are delighted that our partnership with DeepMind Health has improved the outcomes for patients.

“Digital technology is the way forward for the NHS. In the same way as we can receive transport and weather alerts on our mobile devices, doctors and nurses should benefit from tools which put potentially life-saving information directly into their hands.
“In the coming months, we will be introducing the app to clinicians at Barnet Hospital as well as exploring the potential to develop solutions for other life-threatening conditions like sepsis.”

Scramble for NHS data

The next phase of Google-DeepMind’s plan for Streams may hit more of a blocker, though.
Last year DeepMind announced the app would be handed off to its parent — to form part of Google’s own digital health push. That contradicted DeepMind’s own claims, made during the unfolding scandal, that Google would not have access to people’s medical records.
More like: ‘No access until Google owns all the data and IP’, then…
As we said at the time, it was quite the trust shock.

Since then the Streams app hand-off from DeepMind to Google appears to have been on pause.
Last year the Royal Free Trust said it could not happen without its approval.
Asked now whether it will be signing new contracts for Streams with Google a spokesperson told us: “At present, the Royal Free London’s contract with DeepMind remains unchanged. As with all contractual agreements with suppliers, any changes or future contracts will follow information governance and data protection regulations. The trust will continue to be the data controller at all times, which means it is responsible for all patient information.”
The trust declined to answer additional questions — including whether it intends to deploy a version of Streams that includes a predictive AI model at NHS hospitals; and whether or not patients will be given an opt out of their data being shared with Google.
It’s not clear what the trust’s plans are. DeepMind’s and Google’s plan, though, is clearly for Streams to be the conduit for predictive AIs to be pushed onto NHS wards. Its blog aggressively pushes the case for adding AI to Streams.
To the point of talking down the latter in order to hype the former. The DeepMind Health sales pitch is evolving from ‘you need this app’ to ‘you need this AI’… with the follow-on push to ‘give us your data’.
“Critically, these early findings from the Royal Free suggest that, in order to improve patient outcomes even further, clinicians need to be able to intervene before AKI can be detected by the current NHS algorithm — which is why our research on AKI is so promising,” it writes. “These results comprise the building blocks for our long-term vision of preventative healthcare, helping doctors to intervene in a proactive, rather than reactive, manner.
“Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”
In its blog DeepMind also makes a point of reiterating that Streams will be folded into Google — writing: “As we announced in November 2018, the Streams team, and colleagues working on translational research in healthcare, will be joining Google in order to make a positive impact on a global scale.”
“The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally,” it adds, ending with its customary ‘hope’ that its technology will save lives — yet still without any hard data to prove all the big claims it makes for AI-powered predictive healthcare’s potential.
As we’ve said before, for its predictive AI to deliver anything of value Google really needs access to data the NHS holds. Hence the big PR push. And the consent-overriding scramble for NHS data.

Responding to DeepMind’s news, Sam Smith, coordinator at health data privacy advocacy group medConfidential, told us: “The history of opportunists using doctors to take advantage of patients to further their own interests is as long as it is sordid. Some sagas drag on for years. Google has used their international reach to do with data on the US military what they said they’d do in the UK, before it became clear they misled UK regulators and broke UK law.”
In a blog post the group added: “In recent weeks, Google & YouTube, Facebook & Instagram, and other tech companies have come under increasing pressure to accept they have a duty of care to their users. Can Google DeepMind say how its project with the Royal Free respects the Duty of Confidence that every NHS body has to its patients? How does the VA patient data they did use correspond to the characteristics of patients the RFH sees?
“Google DeepMind received the RFH data – up to 10 years of hospital treatments – of 1.6 million patients. We expect its press release to confirm how many of those 1.6 million people actually had their data displayed in the app, and whether they were used for testing alongside the US military data.”

Source: TechCrunch
