

Data collection methods
Item 25a: Plans for assessment and collection of trial data, including any related processes to promote data quality (e.g., duplicate measurements, training of assessors) and a description of trial instruments (e.g., questionnaires, laboratory tests) along with their reliability and validity, if known. Reference to where data collection forms can be accessed, if not in the protocol.
Explanation
The validity and reliability of trial data depend on the quality of the data collection methods. The processes of acquiring and recording data often benefit from attention to training of study personnel and use of standardised, pilot-tested methods. These should be identical for all study groups, unless precluded by the nature of the intervention.
The choice of methods for outcome assessment can affect study conduct and results. Substantially different responses can be obtained for certain outcomes (e.g., harms) depending on who answers the questions (e.g., the participant or investigator) and how the questions are presented (e.g., discrete options or open-ended questions).(382-385) Also, when compared with paper-based data collection, the use of mobile devices and electronic data capture systems has the potential to improve protocol adherence, data accuracy, user acceptability, and timeliness of receiving data.(386)
The quality of data also depends on the reliability, validity, and responsiveness of data collection instruments such as questionnaires or laboratory instruments.(387, 388) Instruments with low inter-rater reliability will reduce statistical power, while those with low validity will not accurately measure the intended outcome variable. Modified versions of validated instruments may no longer be considered validated, and use of unpublished measurement scales can introduce bias and inflate treatment effect sizes.(389) Routinely collected data, e.g., from administrative databases, are increasingly used for outcome assessment in randomised trials, though their use may be associated with reduced effect estimates.(390)
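To illustrate why low reliability matters for power (an illustrative sketch under classical test theory assumptions, not a formula from the guidance itself): if $R$ denotes the reliability of an outcome measure (the proportion of observed variance attributable to true scores), measurement error inflates the observed outcome variance, so the observed standardised effect size is attenuated approximately as
\[ d_{\text{obs}} \approx d_{\text{true}} \sqrt{R}, \qquad \frac{n_{\text{required}}}{n_{\text{perfect}}} \approx \frac{1}{R}. \]
For example, a reliability of $R = 0.7$ implies roughly $1/0.7 \approx 1.4$ times the sample size to retain the planned power, all else being equal.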
Standard processes are often implemented by local study personnel to enhance data quality and reduce bias by detecting and reducing the amount of missing or incomplete data, inaccuracies, and excessive variability in measurements.(391) Examples include standardised training and testing of outcome assessors to promote consistency; tests of the validity or reliability of study instruments; and duplicate data measurements.
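As a brief quantitative aside (an illustrative formula from classical test theory, not part of the recommendation itself), duplicate measurements improve reliability in a predictable way: averaging $k$ independent measurements, each with reliability $R$, gives a composite reliability of approximately
\[ R_k = \frac{kR}{1 + (k-1)R} \]
(the Spearman-Brown formula). For instance, averaging two measurements with $R = 0.7$ yields $R_2 = 1.4/1.7 \approx 0.82$, which is one rationale for specifying duplicate measurements in the protocol.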
A review of United Kingdom government-funded trials found that among 75 protocols with patient-reported outcomes, only 37% described the instrument’s measurement properties.(392) Among protocols from 2016, 32% to 54% described the personnel who would be collecting data.(9, 10)
A clear protocol description of the data collection process, including the personnel, methods, instruments, and measures to promote data quality, can facilitate implementation and help protocol reviewers assess their appropriateness. If the data collection forms are not included in the protocol, a reference to where they can be accessed should be provided. Any pilot testing and assessment of the reliability and validity of the forms, if performed, should also be described.
Summary of key elements to address
● Who will assess the outcome (e.g., participant, doctor, nurse, caregiver)
● Who will collect the data (e.g., participant, doctor, nurse, caregiver)
● Mode of data collection (e.g., paper-based data collection, mobile devices)
● Description of data collection instruments (e.g., validated questionnaires, laboratory instruments), including reliability and validity
● Processes to promote quality of data collection (e.g., duplicate measurements, training of assessors)
● Where the data collection form can be accessed (e.g., appendix, link to repository)
● Any pilot testing and assessment of reliability and validity of the forms, if performed