March 11, 2015 | By Márcio Barra
(Note: all references used are in the comments)
The modern clinical trial is a significant undertaking, and one that requires a multidisciplinary team(1). The current research team includes the principal investigator, sub-investigators, data managers, statisticians, clinical research coordinators (CRC), and the monitor or clinical research associate (CRA).
The International Conference on Harmonization (ICH) E6 Good Clinical Practice guideline defines monitoring as “The act of overseeing the progress of a clinical trial, and of ensuring that it is conducted, recorded, and reported in accordance with the protocol, Standard Operating Procedures (SOPs), Good Clinical Practice (GCP), and the applicable regulatory requirement(s)”. CRAs are responsible for monitoring the trial, a task which ensures the integrity of the data being collected. It also allows the sponsor to closely follow the study centres, evaluate their conduct, and identify bottlenecks in patient recruitment(3).
In the past, sponsors would send their own personnel to research sites to carry out monitoring activities. This changed in the early nineties, with the emergence of contract research organizations (CROs). Like many other aspects of trial conduct, monitoring began to be outsourced(4).
The ICH guidelines state that data should be accurate, complete, legible, and timely. However, they do not provide instructions on how to approach data monitoring, nor individualized approaches for different types of trials. This leaves room for the industry to experiment with different strategies for trial monitoring, and with the advent of electronic data capture (EDC), new ways to monitor clinical trial data are becoming more and more common.
On-site monitoring is the current industry standard, and one that has accompanied clinical trials for decades (6). Here, the CRA is assigned to monitor a clinical trial or an observational study in a study centre or group of study centres, and carries out an in-person site evaluation (6). The most significant activities carried out by a CRA include: (1) the study initiation visit, where the CRA visits the centre to prepare the study staff for conducting the study; (2) several monitoring visits to a centre during the course of the trial; and (3) the study closure visit.
During monitoring visits, the CRA reviews the study documentation and source documents with the goal of identifying data entry errors and any missing data in the patient records or Case Report Forms (CRF). The CRA should also provide study documentation to the centre and evaluate the site’s staff familiarity and compliance with the protocol and study procedures. Throughout the entire process, the CRA essentially acts as an interface between the centre and the sponsor, giving the latter feedback on how the trial is being conducted.
According to a recent systematic literature review of on-site monitoring, the strategy holds both benefits and disadvantages (39). Potential benefits include staff training, better adherence to the protocol and GCP, and improved communication between staff. Disadvantages include financial costs and the time spent on on-site visits. One important point raised by the review is that very little evidence was found of the clinical and cost-effectiveness of on-site monitoring activities(39).
Source Data Verification (SDV), in which CRAs verify the data recorded in the trial’s CRFs against source documents, is the cornerstone of clinical trial on-site monitoring. CRAs spend a great deal of time going through both critical data (i.e. data that, if inaccurate, would threaten the protection of the participants or the integrity of the study results), such as inclusion-exclusion criteria, efficacy and safety endpoint-related documentation, written informed consent, blinding procedures, and drug accountability, as well as non-critical data points, which vary from study to study. Typically, most, if not all, data collected by a centre is verified by the CRA (100% or near 100% SDV). The SDV process can be labour intensive, depending on the complexity of the trial, especially when near 100% SDV is employed(3, 34).
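The field-by-field comparison at the heart of SDV can be sketched in code. The following Python sketch is purely illustrative: the field names, the critical-field set, and the data are hypothetical, and real SDV is carried out by a CRA against the actual source documents, not by a script.

```python
# Hypothetical sketch of an SDV-style check: compare eCRF entries against
# transcribed source-document values and flag discrepancies, distinguishing
# critical from non-critical fields. All names and values are invented.

CRITICAL_FIELDS = {"informed_consent", "inclusion_met", "primary_endpoint"}

def verify_record(ecrf: dict, source: dict) -> list[dict]:
    """Return a list of discrepancy findings between eCRF and source data."""
    findings = []
    for field, source_value in source.items():
        ecrf_value = ecrf.get(field)
        if ecrf_value != source_value:
            findings.append({
                "field": field,
                "ecrf": ecrf_value,
                "source": source_value,
                "critical": field in CRITICAL_FIELDS,
            })
    return findings

ecrf = {"informed_consent": "yes", "systolic_bp": 128, "primary_endpoint": 1}
source = {"informed_consent": "yes", "systolic_bp": 182, "primary_endpoint": 1}
for finding in verify_record(ecrf, source):
    print(finding)
```

In a risk-based plan, findings on critical fields would drive follow-up while non-critical discrepancies might only be sampled, which is precisely the prioritization discussed later in this article.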
The 100% or near 100% SDV approach stems from a notion shared by both sponsors and researchers that more is better, and on the safer side of regulations (34, 85). Phase III and phase IV trials often carry enormous financial implications for sponsors, and a high level of vigilance is thus presumed to help keep bias contained and the results trustworthy(23). However, there appear to be differing perceptions about the likelihood of substantial biases occurring without near 100% SDV, and about the potential magnitude of those biases affecting the perceived treatment effects and the study end results. The ICH E6 guideline does not set upper or lower limits for SDV, and many trials not sponsored by the pharmaceutical industry undergo limited rather than 100% SDV, yet are reported and interpreted in the literature as contributing equally reliable evidence of clinical effects(23).
On-site monitoring is an expensive activity, as CRAs have to travel to each research centre, incurring travel and hotel expenses as well as the cost of the time spent on-site (34). The on-site monitoring generally performed in industry trials adds about 25–35% to the overall costs of a typical Phase 3 trial(30). Moreover, sponsors are nowadays faced with an increasingly complex research environment, including interactive voice response systems for randomization, imaging centres, central laboratories, CROs, regulatory pressure, and so forth, which in turn significantly increases the amount of collected data and, consequently, the monitoring workload (3). The costly nature of on-site monitoring has made some studies too expensive to perform, while others are slowed down and the overall progress of research is impaired(23). These financial and logistical hurdles to the design and conduct of affordable clinical trials are especially limiting for small, independent clinical trials(30).
Centralized Monitoring and EDC
Centralized monitoring, or off-site monitoring, is where the CRA, or any assigned sponsor personnel, conducts a remote evaluation of the study centre. Centralized monitoring was made possible by the evolution of modern information technologies and the advent of EDC, most notably the electronic case report form (eCRF). Historically, CRFs were paper-based, and there was a considerable time lag between the point of data collection and the time when the data was entered into the computer system at the sponsor’s facility(68). EDC and eCRFs brought a new dynamic to the clinical trial enterprise: most modern clinical trials rely on EDC for their CRFs, with direct data entry being an option in many research centres. Some protocols even allow direct data entry by subjects(44).
Central monitoring has its own set of strengths. Central monitoring technologies allow a greater degree of flexibility and additional statistical monitoring strategies, and allow errors and discrepancies to be identified earlier, since the data can be monitored in real time by a data manager(88, 89). Central statistical monitoring techniques include checks for missing or invalid data, calendar checks, checks for unusual data patterns, assessment of rates of reporting, checks of performance indicators, and comparisons with external sources(29). Central monitoring can, in some cases, detect data fabrication: in a recent large multicenter trial, the data of 438 patients was fabricated at one site, and this was only detected through analysis of statistical anomalies(29).
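One of the pattern checks listed above can be illustrated with a toy example. This Python sketch flags sites whose measurements are suspiciously uniform, a crude stand-in for the statistical anomaly analysis described; the threshold, site names, and data are all invented for illustration.

```python
# Illustrative central statistical monitoring check: flag sites whose
# values show unusually low variability, a pattern sometimes associated
# with data fabrication. Real central monitoring uses far more
# sophisticated multivariate methods; this is a minimal sketch.
from statistics import pstdev

def flag_low_variability(site_data: dict[str, list[float]],
                         min_sd: float = 2.0) -> list[str]:
    """Return site IDs whose standard deviation falls below min_sd."""
    return [site for site, values in site_data.items()
            if len(values) > 1 and pstdev(values) < min_sd]

sites = {
    "site_01": [118, 126, 131, 122, 140, 115],   # plausible spread
    "site_02": [120, 120, 121, 120, 120, 121],   # suspiciously uniform
}
print(flag_low_variability(sites))  # ['site_02']
```

Because checks like this run on data already in the EDC database, they can be repeated continuously between visits rather than waiting for the next on-site review.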
It should be pointed out that central monitoring might not be adequate for all trials. If, for example, the burden of paperwork to be reviewed centrally is excessive, as may be the case for some large trials enrolling a considerable number of participants, central monitoring might not be appropriate. It might be more suitable for large trials involving more sites with fewer participants per site, and for double-blind superiority trials using an objective endpoint such as mortality.
While EDC holds considerable potential, both the methodologies and the tools it requires are complex. Welker identified some of the barriers to the implementation of EDC systems in clinical research. These include user input, motivation and communication, regulatory requirements, timing of implementation, software installation, a proper graphical user interface, identification of early technology adopters, patient participation, availability of technology and, last but not least, the costs associated with implementing and maintaining EDC technologies in all of a trial’s research centres. While implementation costs are high, substantial cost reductions have been demonstrated when compared to traditional paper-based data capture methods (76).
On-Site Monitoring versus Centralized Monitoring approaches and EDC
Over the years, several studies have been published highlighting the benefits and potential limitations of centralized monitoring and EDC, comparing them to traditional paper-based data collection.
The STARBRITE study implemented a single-source EDC system in a clinical trial, in parallel with the ongoing data collection procedures. While small in scope and not directly comparing central monitoring to on-site monitoring, the study gave preliminary evidence that single-source EDC can ease the burden of validating the source of clinical research data and improve quality and efficiency by eliminating manual and redundant data entry(47).
Walther et al. (75) compared a series of EDC technologies with traditional paper-based data collection with regard to duration of data capture and accuracy. Results showed a considerable reduction in the time from data collection to database lock, and the study concluded that EDC can be more time-effective than the standard paper-based data capture process followed by double entry and verification. However, successful implementation of EDC was shown to require adjustment of work processes and reallocation of resources. Whether EDC is more accurate than the paper-based method could not be confirmed in this pilot study.
Weiler et al. (70) found no discernible difference between a paper-based data capture instrument and an EDC device in a randomized crossover trial of 87 adults with allergic rhinitis, and advised a sensible, trial-adjusted approach when selecting the tools for clinical trial data collection. In another randomised controlled trial, clinical research coordinators were asked to collect patient information on a handheld computer in parallel with a paper-based CRF. High error rates occurred with the handheld computers compared with paper-based CRFs – 67.5 errors per 1000 fields, against the accepted error rate of 10 per 10,000 fields for paper-based CRF data entry – highlighting the need for proper staff training in EDC before it can be used in clinical research(71).
The European Childhood Obesity Project implemented EDC for its data collection procedures, with notebooks being used for 73.0% of the visits. EDC was shown to significantly reduce the need for after-trial data checks, but the planning and implementation of the technologies were deemed more time consuming(60). The most recent study evaluated pre-visit remote SDV compared with traditional on-site SDV in five hospitals from two NIH-sponsored networks, ARDS and ChiLDREN. The results showed that 99.5% of the ARDS network data values and 100% of the ChiLDREN network data values could be verified remotely, demonstrating that central monitoring is a viable option for SDV(90).
Bakobaki et al.(85) were the first to directly compare on-site monitoring with central monitoring. They evaluated the extent to which findings identified during half of a phase III trial could have been identified through central monitoring. The results showed that over 90% of the findings identified from the review of site monitoring reports (the four site visits took 31 person-days to conduct) could have been found by central monitoring. Of the roughly 5% of findings that would be unlikely to be identified centrally, only two were major, and those were unlikely to have a direct impact on the results.
Eisenstein et al.(6) argued, by simulating the costs of a hypothetical mega-trial with an assumed 24 monitoring visits per site and a $10,000 per-patient site payment, that source document verification should be centralized where appropriate. By limiting on-site monitoring to a previously selected set of records, while ensuring some form of personal contact with the study centre, the simulation showed savings of more than 20% of clinical trial costs.
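The shape of such a cost simulation can be sketched with back-of-the-envelope arithmetic. Only the 24-visits-per-site figure comes from the text above; the number of sites, the per-visit cost, and the reduced visit count are invented assumptions, and the resulting percentage refers to monitoring costs alone, not the >20% total-trial savings Eisenstein et al. report.

```python
# Toy cost model comparing intensive on-site monitoring with a mostly
# centralized strategy. N_SITES, COST_PER_VISIT, and the reduced visit
# count are hypothetical assumptions, not figures from the cited study.

def monitoring_cost(n_sites: int, visits_per_site: int,
                    cost_per_visit: float) -> float:
    """Total on-site monitoring cost across all sites."""
    return n_sites * visits_per_site * cost_per_visit

N_SITES = 200                      # assumed site count for a mega-trial
COST_PER_VISIT = 3_000.0           # assumed travel + CRA time per visit

intensive = monitoring_cost(N_SITES, 24, COST_PER_VISIT)  # 24 visits/site
reduced = monitoring_cost(N_SITES, 4, COST_PER_VISIT)     # assumed minimum

print(f"intensive: ${intensive:,.0f}  reduced: ${reduced:,.0f}")
print(f"on-site monitoring cost reduction: {1 - reduced / intensive:.0%}")
```

Even a crude model like this makes the lever obvious: visit count scales monitoring cost linearly, so every visit replaced by a central review is a direct saving.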
One of the biggest ongoing projects seeking to understand whether a reduced on-site monitoring strategy is equivalent to full, intensive on-site monitoring regarding the occurrence of serious or critical flaws is the ADAMON project (ADAMON, for ADAptiertes MONitoring)(91). This initiative, which began in 2008 and is funded by the German Ministry of Education and Research, includes twelve different clinical trials, spanning phase II, III, and IV as well as non-commercial trials, with a total of 3200 patients. Each trial is randomized to one of two monitoring approaches: either a trial-specific, centralized monitoring approach with minimal on-site monitoring, or a traditional, intensive on-site monitoring approach. The main outcome of the ADAMON project is the occurrence of serious or critical findings concerning patient safety and data validity. The project is funded until December 2014, and results are expected in 2014. A similar, completed project is the French OPTIMON (OPTIMON, for OPTImisation of MONitoring), which compared these two monitoring approaches; its results are still to be published.
In 2015, the UK’s Medical Research Council’s Clinical Trials Unit is expected to release the results of its TEMPER study (Targeted Monitoring, Prospective Evaluation and Refinement). In this trial, a triggered on-site monitoring strategy for a multi-centre cancer clinical trial will be compared to a non-triggered, non-risk sensitive approach.
The CTTI, and the FDA’s and EMA’s guidance documents
In 2007, the United States Food and Drug Administration (FDA) and Duke University created a public-private partnership, the Clinical Trials Transformation Initiative (CTTI), with the purpose of identifying practices to help address two questions: do clinical trials have to be so demanding of time and resources, and what can be done so that randomized comparisons can answer more clinical questions?
The CTTI made monitoring the focus of its first project, with the goal of identifying best practices to help sponsors select the most appropriate monitoring methods for a trial, through an evidence-based methodology. The preliminary recommendations of the CTTI placed emphasis on the creation of an efficient monitoring plan alongside the protocol, training of individuals regarding data collection, the use of risk-based approaches and the allocation of resources to more critical data points in lieu of less critical ones, while encouraging sponsors to discuss their monitoring plans with the FDA(92).
Following the inception of the CTTI, the FDA released in 2011 a draft guidance entitled “Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring”, with the final version released in 2013(93). The guidance advocates the use of risk-based monitoring approaches, driven by centralized and off-site monitoring methods, and the use of modern EDC systems to collect and analyse data, with less emphasis on on-site monitoring. In 2013, the European Medicines Agency (EMA) also released a draft for consultation entitled “Reflection paper on risk based quality management in clinical trials”(94).
Both the FDA and the EMA acknowledge in these documents that traditional 100% SDV-based monitoring approaches are not always the most efficient, and are not always warranted. There is evidence that errors in non-critical data points, such as those related to concomitant medications, demographic data, or any other collected information that does not concern study endpoints, do not affect a clinical trial’s outcome(29, 85), and thus may not need such tight monitoring oversight. Both documents encourage sponsors to conduct a risk assessment of their clinical trials: to identify events that could affect the quality of the clinical trial data or the performance of clinical trial processes, determine the probability of such events occurring, their potential impact on human safety and trial integrity, and the extent to which these risks can be detected. Following this evaluation, sponsors are advised to plan a monitoring strategy proportional to the risk and focused on critical data.
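The risk-assessment step the agencies describe, weighing probability, impact, and detectability, can be sketched as a toy prioritization exercise. The scoring formula, the 1-to-5 scales, and the example risks below are hypothetical illustrations, not drawn from the FDA or EMA documents.

```python
# Minimal sketch of a risk-based prioritization: score each identified
# risk by likelihood, impact, and detectability, then rank risks so that
# monitoring effort can be focused on the most critical data. The scheme
# is invented for illustration.

def risk_score(likelihood: int, impact: int, detectability: int) -> int:
    """Higher likelihood/impact and lower detectability raise the score.
    All inputs are on a 1 (low) to 5 (high) scale."""
    return likelihood * impact * (6 - detectability)

# Hypothetical risks: (likelihood, impact, detectability)
risks = {
    "informed consent missing": (2, 5, 4),
    "endpoint mistranscribed": (3, 5, 2),
    "concomitant meds incomplete": (4, 2, 3),
}

ranked = sorted(risks, key=lambda r: risk_score(*risks[r]), reverse=True)
for name in ranked:
    print(name, risk_score(*risks[name]))
```

Note how the hard-to-detect endpoint error outranks the more frequent but low-impact concomitant medication gap, which is the behaviour a risk-proportionate monitoring plan is meant to produce.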
A key point seems to be finding the balance between central and on-site monitoring. When designing the monitoring plan, it is paramount to have an experienced team who can define the important checks at the time of CRF design and who are aware of the risks (3, 41, 95). This should help guarantee that a proper number of on-site monitoring visits is scheduled and that a robust central monitoring approach is in place, all adjusted to risk.
Authors who have previously advocated the implementation of risk-based and risk-adapted monitoring in clinical trials include Brosteanu et al.(41). Through a survey of existing monitoring practices and experts’ opinions, they developed a risk assessment score and analysis procedure for use in clinical trials.
Clinical trial monitoring seems to be standing at a crossroads. On one hand, sponsors and regulatory agencies are pressing for a more focused, risk-adapted monitoring approach to clinical trials; on the other, considerable bottlenecks prevent these strategies from being used to their full potential. Not all clinical trial centres are ready to implement the needed EDC technologies, staff training is still an issue, and some clinical trials would benefit more from these approaches than others. But the fact remains that, as costs keep rising and sponsors employ cost containment measures, it comes as no surprise that alternatives to costly on-site monitoring are becoming more and more attractive to stakeholders.