CDISC standards are the cornerstone of any successful clinical trial submission to regulatory agencies. While they are “standards,” the structure and the nuances of applying them are continually evolving, making it important to stay up to date in order to provide quality submissions. Here at PROMETRIKA, a full-service CRO, our statistical programmers and biostatisticians engage in continuing education by attending webinars, conferences, and training sessions specifically related to CDISC standards in clinical trials.
In support of this effort, I recently had the privilege of attending the CDISC Europe Interchange conference in person in Copenhagen, Denmark. It was the first in-person conference since 2019 and boasted the largest number of attendees ever at a CDISC European conference. I had dialed into the virtual 2020 conference, which had offered a beacon of hope in the early, dark days of the pandemic, but in 2023 the joy of finally meeting in person again was palpable. Attendees travelled from around the globe. On the last day of the conference, I met one of the farthest travelled: a young woman living in New Zealand. It turned out that she was an American, we had attended the same college, and we were both neuroscience majors! CDISC standards were new to her, as she currently works in an academic setting, and while she was a bit overwhelmed by the depth of detail, she could see the value and was excited to put what she had learned to use. This was the prevailing mood of the conference: everyone was excited to be there, everyone was excited to see people, and everyone was excited to learn.
There were many interesting presentations offering useful information, ideas, and suggestions, as well as different perspectives on using CDISC standards and on conducting clinical trials in general. The conference started off with keynote speeches from members of the EMA (European Medicines Agency), PMDA (Japan’s Pharmaceuticals and Medical Devices Agency), and US FDA (United States Food and Drug Administration). One might have expected dry, official presentations from all three, but the presenters were genuine and entertaining while speaking about very serious topics.
A representative from the Norwegian Medicines Agency (NoMA, a name that prompted a joke about not being mistaken for NOMA, the three-Michelin-star Copenhagen restaurant) focused on the differences between big data and large data, data quality and structure, and the challenges of coming to a common understanding on basic terminology. They related their experience of receiving individual study data to review, all nicely organized and laid out in tables and listings. Because datasets are not required for submissions in the EU, they are not provided, so these reviewers never see the raw data from those studies. The speaker imagined there must be magic in the work the programmers do in converting that raw data into the finished product (i.e., kind of like elves or nerds in a cellar).
A representative from the PMDA spoke about trends in the submission packages they have received over the past few years, now that they are receiving the datasets. They explained that their organization analyzes the study data using the study Statistical Analysis Plan (SAP) and protocol to see if they can reproduce the results. Sometimes it is very difficult to do, so it is helpful to get detailed information on the format of the standardized data and the study analysis with the submission.
A representative from the US FDA explained the difference between Level 1 and Level 2 Guidances for Industry issued by the FDA. Level 1 guidances set forth initial interpretations of statutory or regulatory requirements, changes in interpretation or policy that are more than minor, complex scientific issues, or highly controversial issues. For this reason, they take a long time to draft and review. Level 2 guidance documents set forth existing practices or minor changes in interpretation or policy. While no guidance legally binds the FDA or the public, guidances represent the agency’s current thinking, and FDA employees may depart from a guidance document only with appropriate justification and supervisory concurrence. Why, then, the distinction? Level 2 guidances are easier to update because they do not need the same level of review and approval as Level 1 guidances.
And to round things out, a representative of the EMA gave an overview of DARWIN EU, a long-term project to pool all European EHR data and deliver Real-World Evidence (RWE) on diseases, populations, and the uses and performance of medicines. Real-World Data (RWD) are data relating to patient health status and/or the delivery of health care, routinely collected from a variety of sources, while RWE is the clinical evidence about the usage and potential benefits or risks of a medical product, derived from analysis of RWD. For this project, all the data stay local, structured primarily according to the OMOP Common Data Model. Currently, the project pools data from about 26 million patients, and four studies using the data are ongoing or have been completed. The team is using SAS JMP to consolidate the data and R for statistical analysis. The next step is to work out a data standardization strategy, for which the EMA is looking for help from an additional 10 data partners (anyone interested?). The team would like to engage with the US FDA and PMDA regarding the use of raw data in regulatory decisions.
The EMA rep went on to note that giving regulatory reviewers access to raw data visualization could cut down on questions and requests for additional outputs. But RWD tends to be unstructured, and there are still many open questions about its use, such as, “Is historical randomized trial data RWD? Who owns it?” The FDA rep chimed in that the US FDA is still assessing standards and pointed out that when RWD is submitted, it becomes clinical trial data. The FDA has its own definitions of RWD and RWE, which may differ from other agencies’. All agreed that regulators assess the benefits and risks of controlled trials, so they don’t necessarily need RWD, though it can supplement trial data. It’s all about making a better decision: “What we need vs. what we use.” The FDA rep then explained that the FDA regulates how the data are submitted, not how they are used.
During the main portion of the conference, an interesting presentation was given on a project hosted by ClinFocus that aims to build clinical trial awareness and involvement in Africa. The presenters explained that sub-Saharan Africa carries 50% of the global burden of disease for 17.5% of the global population, while only 2.5%-10% of clinical trials are run in Africa, primarily in South Africa or Egypt. Africa is also one of the most genetically diverse continents on the planet, which makes it important to include these populations in clinical trials so that safety and efficacy are understood before access to treatments broadens through emigration or wider distribution. Smartphones are widely available on the continent, but a big challenge for running clinical trials is that many health records are still paper based. A discussion ensued on the need for an overall market assessment, as well as better conduits for drug supply, and a suggestion was made that the European RWD/RWE initiative should help drive more trials in Africa. Since CDISC use is driven by regulatory authorities, it was also suggested that the regulatory authority in South Africa reach out for help.
Another standout presentation was on CORE (CDISC Open Rules Engine), a system being built by CDISC to house rules for validating SDTM and ADaM datasets. The team has started with CDISC and FDA rules and will move on to PMDA rules, defining cross-checks, gathering input from users, adding Pinnacle 21 checks, and, finally, creating best-practice checks. The idea is to have one validation tool that collates every possible check into unambiguous, executable rules. The engine can be integrated with any system and will be available in Pinnacle 21. The project needs real study data to test the engine, which CDISC doesn’t currently have, so the team is looking for volunteers to help with the programming (no experience necessary) and for users to download the engine and test the checks on their own data.
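To make the idea of “unambiguous, executable rules” more concrete, here is a minimal sketch in Python of what such dataset checks might look like. The rule IDs (EX0001, EX0002) and the check logic are purely illustrative, not actual CORE rules, and the code is not taken from the CORE engine itself.

```python
import re

# Partial ISO 8601 dates (YYYY, YYYY-MM, YYYY-MM-DD) are allowed in SDTM.
ISO8601_DATE = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")

def check_dm(records):
    """Run two example checks against DM-like records (a list of dicts).

    Returns a list of (rule_id, row_index, message) findings.
    Rule IDs below are hypothetical, for illustration only.
    """
    findings = []
    for i, rec in enumerate(records):
        # Example rule EX0001: USUBJID must be present and non-empty.
        if not rec.get("USUBJID"):
            findings.append(("EX0001", i, "USUBJID is missing"))
        # Example rule EX0002: RFSTDTC, if populated, must be ISO 8601.
        rfstdtc = rec.get("RFSTDTC", "")
        if rfstdtc and not ISO8601_DATE.match(rfstdtc):
            findings.append(("EX0002", i, f"RFSTDTC '{rfstdtc}' is not ISO 8601"))
    return findings

dm = [
    {"USUBJID": "STUDY1-001", "RFSTDTC": "2023-04-12"},
    {"USUBJID": "", "RFSTDTC": "2023-04"},
    {"USUBJID": "STUDY1-003", "RFSTDTC": "12APR2023"},  # SAS-style date, not ISO 8601
]
print(check_dm(dm))
```

The appeal of expressing each rule as a small, deterministic function like this is that every tool running the same rule must produce the same finding, which is exactly the ambiguity problem CORE is meant to solve.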
All of the presentation slide decks will be made available on the CDISC website in the near future. I am looking forward to working with my colleagues (fellow elves) to assess and implement these suggestions and ideas. It was an honor, and eye-opening, to interact with, and learn from, my international colleagues. This experience will help me to broaden the horizons and knowledge of the entire statistical programming team here at PROMETRIKA, and in turn help the PROMETRIKA team continue to provide current and quality deliverables to our clients.