Clinical data is still sitting in departmental silos, and yet technology is moving forward rapidly. AJ de Montjoie considers the current landscape.
Since I started working in clinical research back in 2007, the world has changed dramatically. For starters, my mobile phone was a Nokia 3310. There were no apps, and I think the only game I could play on it was ‘Snake’. I certainly couldn’t install social media apps like Twitter, and taking photos was not an option. Sending text messages was also fun, as at that point they were free – no mobile phone company had imagined that anyone would use text messaging.
In less than 20 years, we have become attached to our smartphones in a way that very few people could have imagined. The phone now acts as a camera, encyclopaedia, health recorder, entertainment system, television, radio, news source and much more. It’s not just a way to call home.
Technology for Clinical Data?
And where are we today with technological solutions for our common data challenges in clinical research? For patients and doctors, wearables like smartwatches that can record health data are increasingly common. Interconnected devices in hospital settings are increasingly deployed to make a physician’s work faster and easier. But what about clinical research data after it has left the patient setting? We have faster IT systems than ever, but our clinical data is still locked in different departments, preventing a seamless data flow. The current solution is to link the various systems that take the data from protocol to submission, but this is not always efficient. Manual processes are common, data traceability is tricky, and yet an audit trail is fundamental. It also means that simultaneous access to data is hard and errors are difficult to spot. There are many reasons to stop using spreadsheets and move on, but human beings don’t like change and neither do large organisations. Two years ago, Forbes interviewed Vas Narasimhan, CEO of Novartis; in that interview he talked about the lack of change. His frustration is clear, but how do you make an industry both agile and compliant?
TransCelerate and CDISC
The Digital Data Flow initiative from TransCelerate involves the industry’s biggest players, bringing together vendors and sponsors to look at how we can use automation from study set-up to submission. CDISC has incredibly driven people trying to solve the digital data workflow, and vendors have been doing the same. Imaginative IT engineers in many pharmaceutical companies have explored new processes and ideas with linked and graph data. However, as technology advances, it doesn’t necessarily end up in the place we need it. The issue is that companies are so busy trying to do the day job that they simply cannot afford the time to change. And so, we remain stuck.
We have huge potential at our fingertips. Metadata repositories should have reduced the siloed approach to data. However, the challenge seems to be that standardised metadata, advocated by CDISC and industry thought leaders, is still not a reality. Some of the issues with metadata that I noted in 2007, when I wrote the first CDISC primer, are still the same today.
As we see the range of technological solutions available to us, decision making and change management become increasingly complex. There are now so many stakeholders involved in the data flow that changing one aspect can have a dramatic impact elsewhere.
In the last 12 months, we have seen some amazing changes at CDISC. The EU Interchange (which I wrote about back in May) saw the launch of several initiatives and support from powerful global organisations focused on improving the way that we work with our data. The CDISC Open Rules Engine (CORE) is being developed with Microsoft; the aim is to execute machine-readable conformance rules. Biomedical Concepts have been highlighted as central to our future, and with the help of volunteers and the CDISC technical team, these are going to play a wider role in standards development. We have also seen the launch of COSA, the CDISC Open Source Alliance, formed to promote open-source software projects created to implement CDISC standards.
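To give a feel for what a machine-readable conformance rule means in practice, here is a deliberately toy sketch in Python. The rule format, rule ID and function below are purely illustrative assumptions for this article – they are not CDISC’s CORE rule schema or engine API. The idea is simply that a rule is expressed as data (which dataset, which variable, which values are allowed) and a generic engine executes it against records:

```python
# Illustrative sketch only: a toy "machine-readable conformance rule".
# The rule structure and names here are hypothetical, NOT the CDISC
# CORE schema or API.

# A rule expressed as data rather than code: the dataset and variable
# it applies to, and the set of permitted values.
rule = {
    "id": "TOY-001",          # hypothetical rule identifier
    "dataset": "DM",
    "variable": "SEX",
    "allowed": {"M", "F", "U"},  # hypothetical controlled terminology
}

def run_rule(rule, records):
    """Return one finding per record that violates the rule."""
    findings = []
    for i, record in enumerate(records):
        value = record.get(rule["variable"])
        if value not in rule["allowed"]:
            findings.append({"rule": rule["id"], "row": i, "value": value})
    return findings

dm_records = [
    {"USUBJID": "001", "SEX": "M"},
    {"USUBJID": "002", "SEX": "X"},  # non-conformant value
]

print(run_rule(rule, dm_records))  # flags row 1, value "X"
```

Because the rule is just structured data, the same engine can execute any number of rules without new code – which is the attraction of the machine-readable approach: rules can be published, versioned and run identically by everyone.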
CDISC Interchange and Next Steps
The CDISC US Interchange is now a week away and it will be interesting to hear the latest developments. In the last 18 months, we have shown that a vaccine that normally takes years to develop can be turned around quickly in a health emergency, but what about other long-term health issues that need our attention? How can we share and reuse data to develop faster, safer therapies? I don’t have the answers, but I know many of my colleagues in the industry are trying to use advanced machine learning and AI to speed up what we do. But it’s not only about speed; it’s about quality. We use Qlik tools at S-cubed to give us advanced views of the data. This improves accuracy and the detection of SAEs, and is an elegant solution for pharmacovigilance. Ultimately, improvements in quality throughout the clinical data lifecycle will lead to better, faster therapies.
‘We have the technology’
When I was a kid, there was a TV programme called ‘The Six Million Dollar Man’, and famously the lines at the start of each episode were, ‘Steve Austin, astronaut. A man barely alive. Gentlemen, we can rebuild him. We have the technology.’ Other than the clearly lacking female influence (although we did eventually get the Bionic Woman), the dream that technology could fix humans was around in the 1970s, and some of that science fiction has moved to science fact: computerised prosthetic limbs, retinal implants, exoskeletons. Things have certainly moved on thanks to some fantastic science. And yet … our data is still in silos, often being reviewed and analysed in spreadsheets. Surely we can do better. We have the technology.