
Healthcare Data Issues – Part 2 (Data Transfer)

A series of articles penned by Dr. Navin Ramachandran, Dr. Wai Keong Wong and Dr. Ian McNicoll. We examine the current approaches to data modelling which have led to the issues we face with interoperability. Main image from: Grandjean, Martin (2014). “La connaissance est un réseau”. Les Cahiers du Numérique 10 (3): 37–54. Retrieved 2014-10-15.

The other parts of the article can be found here: Part 1, Part 3.

The majority of current approaches to resolving data transfer issues revolve around either:

  • Replacing all the current systems in a region with a single vendor, using a single data model across the region. This circumvents the need for interoperability of systems.
  • Developing common datasets and messaging standards to maximise interoperability of systems.

Interoperability is a term that is often used in medical informatics, exalted as the answer to all our woes. However, this interoperability exists on a spectrum from partial to complete. Unfortunately, we believe that even the most recent attempts at interoperability will achieve only partial success, and we explore the underlying reasons in this section.

 

Building a bigger silo

Also known as the monolithic model. In its most basic form, you would replace all the disparate systems in one centre with a single vendor solution:
[Figure: Monolith 1 – a single vendor system replacing all the disparate systems within one institution]
This has happened at many institutions, using solutions such as Cerner, Epic, Allscripts and Intersystems. However, this does not resolve the issue of data transfer between hospitals or with community systems, so in some regions all the institutions have moved to the same vendor, using a similar or identical data model:
[Figure: Monolith 2 – the same vendor system rolled out across all institutions in a region]
However, this approach is ultimately problematic because:

  • It may be politically and technologically difficult to transition all the institutions to the same provider.
  • One system will not be able to satisfy the varied needs of all the different institutions. As we have identified previously, the documentation accuracy of smaller “best of breed” solutions is often better than that of the equivalent module in the monolithic solution, as they are more responsive to local needs. Therefore we often find multiple smaller systems being run alongside the monolithic solution (e.g. cancer management systems separate from the Cerner EHR at The Royal Free Hospital), which breaks the monolithic model.
  • What happens outside this region? e.g. how do San Francisco’s hospitals communicate with those in Los Angeles? The complexity of the healthcare environment means that there are always more boundaries, and the logical end-point of the single-system approach must always end in one single-system for an entire nation.
  • How will this system speak to social care, police and government systems? This and the previous issue are partially overcome by using common datasets and messaging standards, but that brings in the added problems of that solution too (see below).
  • The adverse effects of monopoly. The power of the apps revolution lies in innovation and the ability to “fail fast”, both of which a single dominant vendor discourages.

“The complexity of the healthcare environment means that there are always more boundaries, and the logical end-point of the single-system approach must always end in one single-system for an entire nation.”

 

Using common datasets, messaging standards and APIs (e.g. using FHIR)

This method works in the following manner. A central body defines the dataset for a particular medical condition or procedure and publishes it. Each individual institution then implements the dataset within its system, usually by mapping it to existing fields in existing tables, or adding new fields.

[Figure: Central Dataset – a centrally defined dataset mapped into each institution’s local system]
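To make the mapping step concrete, here is a minimal Python sketch of how a centrally defined item might be mapped onto an institution’s existing tables or onto newly added fields. All dataset item names, table names and column names below are invented purely for illustration; real datasets and local schemas will differ at every institution.

# Hypothetical central dataset: a handful of defined items.
CENTRAL_DATASET = {
    "psa_level": {"units": "ng/ml", "type": "decimal"},
    "gleason_primary": {"type": "integer"},
    "date_of_diagnosis": {"type": "date"},
}

# Each hospital maintains its own mapping from central item to a local (table, column) pair.
LOCAL_MAPPING = {
    "psa_level": ("lab_results", "psa_value"),           # existing field, reused
    "gleason_primary": ("pathology", "gleason_1"),       # existing field, reused
    "date_of_diagnosis": ("oncology_extras", "dx_date"), # new field added for the dataset
}

def to_local_cells(central_record: dict) -> dict:
    """Translate a record expressed in the central dataset into local (table, column) cells."""
    return {LOCAL_MAPPING[item]: value
            for item, value in central_record.items()
            if item in LOCAL_MAPPING}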

The central body then defines messaging datasets for transmission of data between organisations – these may be the same as the initial dataset or a subset of it. The messaging dataset defines how and what data will be transmitted and received in different scenarios (e.g. for a referral form, or for a discharge summary).
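Continuing the same invented item names, here is a minimal sketch of the messaging step: the messaging dataset simply names which items are sent in each scenario, and anything the central body has not named is never transmitted. The message types and field lists below are hypothetical.

# Hypothetical messaging datasets: named subsets of the central dataset.
MESSAGE_DATASETS = {
    "referral": ["psa_level", "date_of_diagnosis"],
    "discharge_summary": ["psa_level", "gleason_primary", "date_of_diagnosis"],
}

def build_message(message_type: str, central_record: dict) -> dict:
    """Keep only the items defined for this scenario; everything else captured locally is not sent."""
    fields = MESSAGE_DATASETS[message_type]
    return {"message_type": message_type,
            "payload": {f: central_record.get(f) for f in fields}}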

For example, the Royal College of Surgeons has released a dataset for the National Prostate Cancer Audit: http://www.npca.org.uk/wp-content/uploads/2013/07/NPCA_MDS-specification_-V2-0_-17-12-14.pdf

It is important to note that this is only a small portion of all the data collected during a patient’s care; the remainder has yet to be defined properly and will naturally vary between hospitals. Therefore, if everything works well, the data defined in the central dataset will be available for analysis and transfer, but the peripheral data that has not been defined will never be meaningfully stored or transmitted.

This is the major problem with this model: only a very small dataset can be defined centrally – the lowest common denominator – which means that other potentially valuable data (which may provide great insight into causes of, or treatments for, the disease) will be lost to analysis.

Furthermore, many of these central bodies are not versed in informatics and are still very much thinking in the paper paradigm. Their datasets are often no more sophisticated than a digital representation of a paper form, and are not derived from an overarching data model. Sadly, this leads to a situation where the paper form is the dataset, and the dataset is the data model!

“Only a very small dataset can be defined centrally – the lowest common denominator – which means that other potentially valuable data (which may provide great insight into causes of, or treatments for the disease) will be lost to analysis.”

Other significant drawbacks to this model are:

  • Loss of meaning at each stage.
    • When the central dataset is deployed locally, it has to be translated into the local data model. When it is transmitted, it is translated into the messaging model and then, finally, into the model of the receiving institution’s system. At each translation, some meaning is likely to be lost (see the sketch after this list).

[Figure: Transfer Message – data translated through the local, messaging and receiving systems’ models]

  •  Updating pathways / datasets across hospitals, while maintaining functioning APIs / messaging standards.
    • An updated data model can take anywhere from 1 to 12 months to deploy in a hospital (this is the reality on the ground), because it has to be translated into the hospital systems’ data models, and any system interfaces and user interfaces updated, without breaking the underlying tables.
    • In the app world, there can be many updates in a single month. But even if we very conservatively assume two updates to an information model per year, some hospitals may not have deployed the first change by the end of the year. Some hospitals would still be on version 1, others on version 2 and the most efficient on version 3, so three different models would be in operation at the same time across different hospitals – which common API / messaging standard would now be the correct one to use?
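The two drawbacks above can be sketched together in a few lines of Python. The field names and mapping tables are invented for illustration; the point is that any item without a mapping at a given hop is silently dropped, and that hospitals on different model versions cannot even agree which mapping table applies.

# Sender's local model -> messaging model (version 1), then messaging model -> receiver's local model.
SENDER_TO_MESSAGE_V1 = {"psa_value": "psa_level", "dx_date": "date_of_diagnosis"}
MESSAGE_V1_TO_RECEIVER = {"psa_level": "tumour_marker_psa"}  # no target defined for date_of_diagnosis

def translate(record: dict, mapping: dict) -> dict:
    """Carry across only the fields the mapping knows about; the rest are lost."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

sender_record = {"psa_value": 8.2, "dx_date": "2015-03-01", "local_note": "Discussed at MDT"}
message = translate(sender_record, SENDER_TO_MESSAGE_V1)   # 'local_note' is lost here
received = translate(message, MESSAGE_V1_TO_RECEIVER)      # 'date_of_diagnosis' is lost here

# If the sender has moved to version 2 of the model while the receiver still expects
# version 1, the two sites must additionally agree which mapping table is in force.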

 

The emergence of the granular API – HL7 FHIR

For the last decade, cross-system interoperability in the US has largely focused on efforts to use the HL7v3 and CDA messaging standards. In spite of significant investment via the “Meaningful Use” program, useful interoperability has remained elusive, in part due to the complexity of the HL7v3 standard. In response, the HL7 community developed FHIR (Fast Healthcare Interoperability Resources), adopting modern software approaches such as granular APIs and a RESTful interface, in line with influential industry opinion such as the JASON report.
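As a flavour of what a granular, RESTful API means in practice, the sketch below reads a single FHIR Patient resource over HTTP. The read interaction and the application/fhir+json media type are defined by the FHIR specification; the base URL and patient id are hypothetical.

# Minimal sketch of a FHIR 'read' interaction: each resource type (Patient, Observation, ...)
# is individually addressable over HTTP.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/fhir"  # hypothetical endpoint

def read_patient(patient_id: str) -> dict:
    """Fetch one Patient resource as JSON."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    return resp.json()  # a resource with "resourceType": "Patient"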

FHIR has found favour amongst most of the major US system vendors, represented by the Argonaut Project, and is without doubt a major step forward in reducing the barriers to data exchange between systems.

http://www.hl7.org/documentcenter/public_temp_EF13C1F7-1C23-BA17-0CB343BB6F370C99/pressreleases/HL7_PRESS_20141204.pdf

We strongly support the industry-led adoption of FHIR; nevertheless, in our view, its scope falls short of the changes required to support a vibrant eHealth industry.

  • FHIR is expressly engineered to support data exchange, not data persistence and querying.
  • Support for each new FHIR resource needs to be programmed by each vendor, unlike openEHR, where new archetype models are consumed automatically (i.e. data conforming to these new models can be immediately stored and queried without any re-programming – see the query sketch after this list).
  • FHIR is expressly designed to meet only key, common interoperability areas. It does have a capacity for extension, but this remains immature and beyond the core scope of the project.
  • FHIR resources remain technical artefacts, whilst archetypes are designed to be authored and maintained by clinicians.
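To illustrate the point about archetypes above, the sketch below submits an AQL query to an openEHR clinical data repository via the openEHR REST API’s ad-hoc query endpoint: once a new archetype or template has been loaded into the repository, conforming data can be stored and queried this way without vendor re-programming. The base URL is hypothetical and the archetype id and query are purely illustrative.

# Minimal sketch of querying an openEHR CDR with AQL over REST.
import requests

CDR_BASE = "https://cdr.example-hospital.org/openehr/v1"  # hypothetical endpoint

AQL = """
SELECT c/uid/value AS composition_id, o/name/value AS observation_name
FROM EHR e CONTAINS COMPOSITION c
     CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.blood_pressure.v2]
"""

def run_aql(query: str) -> dict:
    """Submit an ad-hoc AQL query and return the result set as JSON."""
    resp = requests.post(f"{CDR_BASE}/query/aql", json={"q": query})
    resp.raise_for_status()
    return resp.json()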

This assessment may seem to put FHIR directly in opposition to openEHR, but our view is that the two, each playing to its strengths, make for a compelling combination. openEHR can provide the methodology and toolset to build the clinical content definitions needed to underpin FHIR – this is the approach being used by NHS England.

“openEHR can provide the methodology and toolset to build the clinical content definitions needed to underpin FHIR.”
