Pictures of Macron and Putin sitting 15 meters apart may seem a topical way to open an article about cybersecurity in the health care sector, but they serve a serious purpose. The reason given for the distance was Macron’s refusal to take a COVID test because he did not want the Russians to have access to his DNA. Both sides may have stretched the episode for political purposes, yet it illustrates on a grand scale something we all intuitively feel: that our health data must be kept secure, whoever we are, whether for reasons of national security or simply the instinct that nothing could be more private or intimate.
This concern has been built into medicine since the 5th century BC in the form of the Hippocratic Oath, which, though modernized, still embeds equal care and confidentiality in the bedrock of treatment. Yet these privacy concerns are in many ways an impediment: they lock away the full potential of medical data, both for delivering immediate care and for aggregating data to improve diagnosis, treatment, and recovery.
A report notes that “Every second, an exponential amount of healthcare data is generated and mined for valuable insights. Today, approximately 30% of the world’s data volume is being generated by the healthcare industry. By 2025, the compound annual growth rate of data for healthcare will reach 36%. That’s 6% faster than manufacturing, 10% faster than financial services, and 11% faster than media & entertainment”.
These are impressive figures, but given privacy concerns it is fair to assume that the growth of medical data will not be matched by the technological capacity to extract maximal insight from it. Those concerns can only grow in an increasingly unstable world. Squaring this circle is therefore vital for the future of the global health ecosystem, and it becomes even more urgent as the use of IoT in medical provision undergoes massive expansion.
Consider that US hospitals have an average of 10-15 connected medical devices per bed. These devices allow doctors to make informed, accurate decisions and provide early-warning systems for immediate care. They can be matched to the patient’s exact medical history, treatment, timings, responses, and much more. But without express consent, that data cannot be fed into the AI systems that could deliver insights drawn from a data sample far larger than any individual, or even any institution, could assemble. Even within a hospital, transferring a scan can take many minutes when the patient may need a response immediately.
Cybersecurity in the health sector is literally a matter of life and death. As we have seen, privacy is as vital to a patient as a bitcoin wallet is to a crypto investor, and the further development of both follows the same path: a new architecture for the systems that govern the collection, use, and analysis of data. That architecture must not only be secure but be understood to be secure, and it must comply with the myriad regulations that presently give patients confidence, at least in wealthier countries.
As an industry leader in the field, HUB Security advocates confidential computing as the architecture that can deliver the revolutionary change required. It will immensely increase the speed at which information reaches the time and place where its use is critical. Our vision for improving global healthcare, however, is also to create AI that can not only be trusted in practice but be perceived as trustworthy by a wider society already suspicious of the role of algorithms.
HUB’s secure compute platform combines our proprietary hardware and software to protect any application and its data, customizable for the specific client or organization. It leverages a range of security mechanisms, within the paradigm of confidential computing, to isolate each application and its associated elements within its own secure execution environment.
The platform protects data in all three states – in transit, in storage, and in use – across the entire compute and network stack. This covers the application, the data, the AI models, the hypervisor, access policies, audit trails, and key management for all cryptographic keys.
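To make the three states concrete, here is a toy sketch of the pattern. The "enclave" is just a Python function standing in for a hardware-isolated execution environment, and the XOR keystream cipher is a teaching stand-in, not production cryptography or HUB's actual implementation: ciphertext sits on disk (at rest), the same ciphertext crosses the network (in transit), and plaintext appears only inside the enclave boundary (in use).

```python
# Toy illustration of data protected in its three states.
# The "enclave" function and XOR cipher are hypothetical stand-ins,
# NOT production cryptography and NOT HUB Security's real platform.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256 counter keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def enclave_average(ciphertext: bytes, key: bytes) -> float:
    """Data in use: plaintext exists only inside this 'enclave' boundary."""
    readings = [int(x) for x in keystream_xor(key, ciphertext).decode().split(",")]
    return sum(readings) / len(readings)

key = secrets.token_bytes(32)

# Data at rest: readings are stored encrypted.
stored = keystream_xor(key, b"72,75,71,78")  # heart-rate samples

# Data in transit: the same ciphertext travels over the network unchanged.
received = stored

# Data in use: decrypted and processed only inside the enclave function.
print(enclave_average(received, key))  # -> 74.0
```

The point of the pattern is that nothing outside `enclave_average` ever holds the plaintext, which is the property confidential computing enforces in hardware rather than by convention.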
Multi-party AI unlocks the enhanced insights that were previously restricted by the security concerns discussed above. These enhancements draw on data that has passed through the de-identification process. Each organization’s data, including that of the group controlling the environment, is isolated from every other organization’s data. Neither the data providers nor the AI model operator has access to the raw data, yet all can make use of the insights their combined data can provide.
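The multi-party pattern above can be sketched in a few lines. In this hypothetical example (the function names, the salted-hash de-identification, and the sum/count aggregation are illustrative assumptions, not HUB's API), each hospital de-identifies its records locally and shares only aggregates, so a coordinator can compute a combined statistic without any party seeing another's raw rows.

```python
# Hypothetical sketch of multi-party analytics on de-identified data.
# All names and the aggregation scheme are illustrative, not HUB's product.
import hashlib

SALT = b"shared-study-salt"  # illustrative; real systems negotiate keys securely

def de_identify(records: list[dict]) -> list[dict]:
    """Replace direct identifiers with a salted one-way hash."""
    return [
        {"pid": hashlib.sha256(SALT + r["name"].encode()).hexdigest()[:12],
         "glucose": r["glucose"]}
        for r in records
    ]

def local_aggregate(records: list[dict]) -> tuple[int, int]:
    """Each party computes only (sum, count); raw rows never leave its site."""
    values = [r["glucose"] for r in de_identify(records)]
    return sum(values), len(values)

# Two hospitals contribute aggregates; the coordinator combines them.
hospital_a = [{"name": "Alice", "glucose": 95}, {"name": "Bob", "glucose": 110}]
hospital_b = [{"name": "Carol", "glucose": 105}]

totals = [local_aggregate(hospital_a), local_aggregate(hospital_b)]
combined_mean = sum(s for s, _ in totals) / sum(n for _, n in totals)
print(combined_mean)  # (95 + 110 + 105) / 3
```

In a confidential-computing deployment the `local_aggregate` step would run inside each party's secure execution environment, so even the coordinator hosting the computation cannot inspect the inputs.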