Data collection methods

Item 25a: Plans for assessment and collection of trial data, including any related processes to promote data quality (e.g., duplicate measurements, training of assessors) and a description of trial instruments (e.g., questionnaires, laboratory tests) along with their reliability and validity, if known. Reference to where data collection forms can be accessed, if not in the protocol

Example

“The Psychotic Symptoms Rating Scales-Auditory Hallucinations subscale [PSYRATS-AH] is an interviewer-assessed measure tapping different dimensions of auditory hallucinations, e.g. frequency, duration and distress … The PSYRATS-AH (the primary outcome of the study) has been shown to have high inter-rater reliability (ranging from 0.79 to 1.00) as well as good validity [References]. Assessments will be conducted by trained psychologists at each treatment site. The main author of this article (LCS) is a psychologist with a specialisation in psychiatry. LCS is responsible for the training of all assessors. In the training process, supervised co-interviews are conducted. During the data collection phase, videotaped assessments are distributed among assessors for monthly consensus meetings. A random selection of video-taped interviews is scored individually by all assessors. ICC [Intraclass correlation coefficient] scores will subsequently be calculated and reported” [380].

 

“All data are recorded in paper form. Data collection forms comprise a patient questionnaire, a GP [general practitioner] case report form, a patient diary, an optional dual-energy CT [computed tomography] case report form and a telephone interview questionnaire. All data collection forms are available from the corresponding author on request.

 

As this is a pragmatic study conducted in general practices, blood samples are analysed in the affiliated laboratories of the practices.

 

The dual energy examination takes place in the three university hospitals by trained staff. The procedure of the examination and the measurement parameters are defined in a standard operating procedure. To assess the quality of the reading, two trained radiologists will review a subset of the images independently. Inter- and intra-observer reliability will be determined” [381].

Explanation

The validity and reliability of trial data depend on the quality of the data collection methods. The processes of acquiring and recording data often benefit from attention to training of study personnel and use of standardised, pilot-tested methods. These should be identical for all study groups, unless precluded by the nature of the intervention.

 

The choice of methods for outcome assessment can affect study conduct and results. Substantially different responses can be obtained for certain outcomes (e.g., harms) depending on who answers the questions (e.g., the participant or investigator) and how the questions are presented (e.g., discrete options or open-ended) [382-385]. Also, when compared to paper-based data collection, the use of mobile devices and electronic data capture systems has the potential to improve protocol adherence, data accuracy, user acceptability, and timeliness of receiving data [386].
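As an illustration of one way electronic capture can support data accuracy, the minimal Python sketch below applies simple range checks at the point of entry so that implausible values trigger a query immediately rather than surfacing during later data cleaning. The field names, plausibility limits, and query wording are assumptions for illustration only and are not tied to any specific data capture system.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EditCheck:
    field: str
    message: str
    is_valid: Callable[[float], bool]  # returns True when the entered value passes

# Illustrative checks only; real limits would come from the trial's data validation plan.
CHECKS = [
    EditCheck("systolic_bp_mmHg", "outside plausible range 60-260 mmHg",
              lambda v: 60 <= v <= 260),
    EditCheck("weight_kg", "outside plausible range 30-250 kg",
              lambda v: 30 <= v <= 250),
]

def validate_record(record: dict) -> list[str]:
    """Return query messages for missing or implausible values in one record."""
    queries = []
    for check in CHECKS:
        value = record.get(check.field)
        if value is None:
            queries.append(f"{check.field}: value missing")
        elif not check.is_valid(value):
            queries.append(f"{check.field}: {check.message} (entered {value})")
    return queries

print(validate_record({"systolic_bp_mmHg": 510, "weight_kg": 72}))
# -> ['systolic_bp_mmHg: outside plausible range 60-260 mmHg (entered 510)']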

 

The quality of data also depends on the reliability, validity, and responsiveness of data collection instruments such as questionnaires or laboratory instruments [387, 388]. Instruments with low inter-rater reliability will reduce statistical power, while those with low validity will not accurately measure the intended outcome variable. Modified versions of validated instruments may no longer be considered validated, and use of unpublished measurement scales can introduce bias and inflate treatment effect sizes [390].
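To make the link between reliability and statistical power concrete, a standard classical measurement-error approximation (illustrative only, not drawn from the cited references) treats the reliability R of an outcome measure as the proportion of observed-score variance that is true-score variance. Measurement error then inflates the outcome's variance, attenuates the standardised effect size, and increases the required sample size in inverse proportion to R:

\sigma^2_{\text{obs}} = \frac{\sigma^2_{\text{true}}}{R},
\qquad
d_{\text{obs}} = \frac{\Delta}{\sigma_{\text{obs}}} = d_{\text{true}}\sqrt{R},
\qquad
n \propto \frac{1}{d_{\text{obs}}^{2}} = \frac{1}{R}\cdot\frac{1}{d_{\text{true}}^{2}}.

Under this approximation, for example, an instrument with reliability 0.7 would require roughly 1/0.7 ≈ 1.4 times as many participants as a perfectly reliable instrument to detect the same true effect.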

 

Standard processes are often implemented by local study personnel to enhance data quality and reduce bias by detecting and reducing the amount of missing or incomplete data, inaccuracies, and excessive variability in measurements [391]. Examples include standardised training and testing of outcome assessors to promote consistency; tests of the validity or reliability of study instruments; and duplicate data measurements.
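As a concrete illustration of such reliability checks (compare the first example above, where ICC scores are calculated from interviews scored independently by all assessors), the following minimal Python sketch computes a single-rater, absolute-agreement ICC(2,1) from a matrix of duplicate ratings. The scores, number of interviews, and number of assessors are hypothetical; a real analysis would follow the trial's statistical analysis plan.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an n_targets x n_raters matrix of scores (fully crossed design)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per target (e.g., per interview)
    col_means = ratings.mean(axis=0)   # per rater

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between targets
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical rating-scale totals: 5 interviews, each scored by 3 assessors.
scores = np.array([
    [28, 27, 29],
    [15, 16, 15],
    [33, 31, 32],
    [22, 24, 23],
    [ 9, 10,  9],
])
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")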

 

A review of United Kingdom government-funded trials found that among 75 protocols with patient-reported outcomes, only 37% described the instrument’s measurement properties [392]. Among protocols from 2016, 32% to 54% described the personnel who would be collecting data [9, 10].

 

A clear protocol description of the data collection process, including the personnel, methods, instruments, and measures to promote data quality, can facilitate implementation and help protocol reviewers assess their appropriateness. If not included in the protocol, then a reference to where the data collection forms can be accessed should be provided. If performed, pilot testing and assessment of reliability and validity of the forms should also be described.

 

Summary of key elements to address

  • Who will assess the outcome (e.g., participant, doctor, nurse, caregiver)

  • Who will collect the data (e.g., participant, doctor, nurse, caregiver)

  • Mode of data collection (e.g., paper-based data collection, mobile devices)

  • Description of data collection instruments (e.g., validated questionnaires, laboratory instruments), including reliability and validity

  • Processes to promote quality of data collection (e.g., duplicate measurements, training of assessors)

  • Where the data collection form can be accessed (e.g., appendix, link to repository)

  • Any pilot testing and assessment of reliability and validity of the forms, if performed


The 2025 update of SPIRIT and CONSORT, and this website, are funded by the MRC-NIHR: Better Methods, Better Research [MR/W020483/1]. The views expressed are those of the authors and not necessarily those of the NIHR, the MRC, or the Department of Health and Social Care.
