Artificial intelligence firm DeepMind and a London hospital trust, the Royal Free London NHS Foundation Trust, have signed a five-year deal to develop a clinical app called Streams. The deal extends the already controversial partnership between the London-based startup, which was bought by Google in 2014, and the healthcare trust.
The Streams app is for healthcare professionals. According to the Financial Times, it will trigger mobile alerts when a patient’s vital signs or blood results become abnormal so that a doctor can intervene quickly and prevent the problem escalating.
The trust said that Streams has, thus far, been using algorithms to detect acute kidney injury, and added that it would
alert doctors to [a] patient in need “within seconds”, rather than hours [and] free up doctors from paperwork, creating more than half a million hours of extra direct care
The aim is to use Streams as a diagnostic support tool for a far wider range of illnesses, including sepsis and organ failure.
OK, so that’s the what. Now for the controversial bit: the how…
The app quite obviously relies on access to patient data.
A story in New Scientist earlier this year raised concerns that the partnership had given DeepMind access to “a wide range of healthcare data on the 1.6 million patients … from the last five years”, and noted that the data will be stored in the UK by a third party and that DeepMind is obliged to delete its copy of the data when the agreement expires.
In a follow-up story published this week, New Scientist revealed that the UK’s Information Commissioner’s Office began investigating the data-sharing agreement following its revelations. A statement from the office says that it is “working to ensure that the project complies with the Data Protection Act”.
But is that enough?
Privacy campaigners have raised concerns that medical records are being collected on a massive scale without the explicit consent of patients. Phil Booth, coordinator of medConfidential, questioned both the scale of the data sharing and the app's reliability:
Our concern is that Google gets data on every patient who has attended the hospital in the last five years and they’re getting a monthly report of data … [but] because the patient history is up to a month old, [it] makes the entire process unreliable and makes the fog of unhelpful data potentially even worse.
Academics have also raised concerns. Speaking to the Financial Times, Julia Powles, a University of Cambridge lawyer who specializes in technology law and policy, highlighted that:
We do not know – and have no power to find out – what Google and DeepMind are really doing with NHS patient data, nor the extent of Royal Free’s meaningful control over what DeepMind is doing.
Give Google a chance?
When Natasha Loder asked:
@juliapowles if Google Deep Mind has the potential to save lives, ought we not try and see if it is possible to do it?
— Natasha Loder 🐋 (@natashaloder) November 22, 2016
Powles responded:
https://twitter.com/juliapowles/status/801208839173525504
That’s exactly it, isn’t it? The issue is not with what Google is trying to achieve, but the fact that it is Google doing it.
Doing it right
I have no issues with technologies being used to improve patient outcomes … provided the right people are doing it, for the right reasons and that it’s done in the right way.
Here we have Google creating an app that really needs real-time data to be useful. It could put patients at risk if the data are not up to the minute when you’re talking about things like organ failure and sepsis. Won’t the doctor need to know what’s been happening with the patient in the last weeks, days, hours and even minutes?
On my second point, the right reasons: Google says it is not doing the work for profit. Mustafa Suleyman, head of DeepMind Health and DeepMind’s co-founder, told the FT:
We get a modest service fee to supply the software. Ultimately, we could get reimbursed [by the NHS] for improved outcomes.
So you have to ask why. For access to data? To gain a foothold in health analytics? To test possibilities? To build a proof of concept it can sell in the future?
I suspect all of those are near the truth.
Does Google really need to be given this data at all? Wouldn’t it have been a lot safer if the NHS Trust had trialled the app on Google’s behalf, keeping the data safely in-house? After all, if you wanted to test-drive a piece of technology, wouldn’t you ask for the technology to test rather than hand over your data?
Or is this something that can only be accessed as a service, in other words, where data need to sit on the service provider’s machines? If that’s the case, we need to seriously look at how organizations access cloud-based third-party services that require a local copy of data. If we don’t, we risk finding copies of patient, student, citizen and other very personal data here, there and everywhere in the future.
anon
I found the same thing happening as a federal government employee nearly twenty years ago. HRO decided it would be far more efficient to deliver all employee payroll data to a contractor to provide employment verification services. Employees were not given an opt-in choice even though many of us would never have a need to use or benefit from such services. Pure risk, and not from just one company but several more, as the work was competed for contract renewals over the years to come. When asked about their authority for doing this without notifying employees for comment or permission, the HRO response was that it was done within existing employer legal rights and in compliance with privacy laws.
Jason Scott (@jlscott_77)
Presumably the data is patient identifiable or at least pseudonymised, allowing any alerts produced by DeepMind to be linked back within Trust systems allowing doctors to intervene. This is for direct patient care, but it’s going to a third party so the patients should be informed about where their information is going and who (besides the Trust) is processing it. They should also have the ability to opt out.
Mike
Sadly the only way to opt out is to opt out of healthcare.
BeingMulticellular
This is very scary. This is not the health care system I want to enter into; patients should be the priority, not money.
Anon
Let’s make one thing clear – Google is using this as a test case for making future profits. They have no interest in saving lives, other than the monetization of people’s misfortune. The data gathered could be sold to employers trying to work out whether a potential employee is likely to cost their company money through illness. It could be sold to banks for mortgage purposes – will they likely die before paying off the loan? Then there are insurance companies who would love to get their hands on that data. Lastly, drugs trials, medical treatment and consultant referrals could all be sold: your misfortune, their profit. As for Deep Mind learning, hmmm, sorry, not buying that PR crap – it’s a computer, not a person. I doubt it could have the same intuition that a real living person has. What happens if it gets a diagnosis wrong and ends up ruining someone’s career prospects? Will you be able to hold Google accountable? Who will take responsibility? We have already seen artificial intelligence offer contraceptive treatment to rape victims, potty-mouthed chat bots, people being falsely declared dead… What happens with misdiagnosis? Or worse, too much faith in AI and not spotting the symptoms? As far as I’m concerned, I would never trust an advertising company with my personal medical details – what happens if they get hacked? Will I end up with a barrage of crooks promising to sell me magic beans to cure my illness? No way would I ever consent to this.
simon neighbour
Look, we’re stuck with the snoopers’ charter and all the fun that that entails; we may as well get some health benefits as a by-product. I’m in.
David Emeny
Trust a private firm? The only thing a private firm has to do by law is to maximise profits.
Jason
Well if successive governments get their way the NHS will be privatised and then you’ll have no choice but to trust private firms.
Keithanthony
Let’s imagine what a group like the Nazis could have achieved with such data. Naturally they’d only be saving lives, so we all have nothing to worry about, do we? We can trust Google…
can’t we?