

Harms
Item 15: How harms were defined and assessed (eg, systematically, non-systematically)
Examples
“Adverse events (AE) were assessed clinically and analytically at each monthly follow-up visit. The severity of AE were classified according to the National Cancer Institute Common Toxicity Criteria version 4.0. Following the onset of the first cases, the criteria for considering the presence of tenosynovitis were established as spontaneous pain that increased with movement in any tendon insertion with tenderness at that level and observation of localized inflammatory signs of at least 72 hours in duration . . . Patients were considered to have hepatotoxicity when they presented alanine transaminase, aspartate aminotransferase, or bilirubin elevations more than 2 times the upper limit of the normal range. Toxicity was considered severe, and therefore the drug was discontinued when symptomatic elevations were more than 3 times or asymptomatic elevations were more than 5 times the normal levels. All AE were recorded and additional information was required in case of serious adverse events” [229].
“Immediate adverse events were assessed by monitoring participants 30 min after injection in the trial centre. All participants were required to report all local and systemic adverse reactions and adverse events after the injection using the trial’s mobile application. Solicited adverse reactions were defined as any events that occurred from day zero to day seven after each injection. Unsolicited adverse reactions were defined as any adverse reactions which occurred from day eight to day 28 after each injection. The severity of adverse reactions was defined using the Food and Drug Administration guidance for industry and toxicity grading scale for healthy adult and adolescent volunteers enrolled in preventive vaccine clinical trial” [230].
Explanation
Evaluation and reporting of harms in randomised trials can be useful to inform decision makers on the benefit-risk balance of an intervention [20, 36]. Randomised trials usually lack power and sufficient follow-up to adequately estimate harms [231]; nevertheless, they can provide data about harms that can be synthesised in meta-analyses if adequately reported [20, 36]. For example, the Women’s Health Initiative trials on hormone therapy provided important data on the cardiovascular risk of hormone replacement therapy [232].
Harm relates to the unwanted effects of an intervention. Depending on the specific context, a given event could be considered when assessing the harm (eg, myocardial infarction in a trial assessing non-steroidal anti-inflammatory drugs in patients with osteoarthritis) or the benefit (eg, myocardial infarction in a trial assessing aspirin in patients with cardiovascular risk factors) of an intervention [232]. The use of the term “harms” is preferred over “safety” to better reflect the negative effect of interventions [20].
Despite the importance of having access to data on harms, reporting of this information is poor [127, 128, 233, 234]. A review of 184 drug trials published between 2015 and 2016 in four medical journals with high impact factors showed that 28% did not provide any details on how harm data were collected and 89% did not report who decided whether the harm was attributable to the study drug [128]. An overview of 13 reviews assessing the reporting of harms in randomised trials using the 2004 CONSORT extension for harms showed that only 40% of the included trials addressed harms outcomes with definitions for each and 44% clarified how harms-related information was collected [127].
How harms are defined and assessed will affect the results and effect estimates. Harms can be prespecified or not. They can be systematically assessed or rely on spontaneous reporting (ie, non-systematically assessed). To increase the study’s power, harms can be aggregated into a composite outcome (eg, cardiovascular diseases). Some trials implement a procedure to determine whether harms could be attributed to the intervention (ie, causality).
For each systematically assessed harm, authors should report the definition, the measurement variable (eg, name of a validated questionnaire), and, where appropriate, the analysis metric for each participant (eg, time to event), the summary measure for each trial group (eg, proportion), and the time point of interest for analysis. They should describe the procedures for harm assessment, including who did the assessment, whether they were blinded to the treatment allocated, the assessment time points, and the overall time period for recording harms. For non-systematically assessed harms, authors should report the mode of data collection, with the time point and overall time period for recording harms. Non-systematically assessed harms can be difficult to analyse, however, and their results should be interpreted with caution. To overcome this issue, trialists can code and group events into specific categories. Nevertheless, the lack of standardisation in data collection could result in selective and incomplete reporting [231]. Access to individual participant data may be needed to adequately synthesise this information [231, 235, 236].
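As a purely illustrative sketch, the following Python code (using pandas) shows one way a systematically assessed harm could be summarised as the proportion of participants with at least one event in each trial group, the summary measure mentioned above. The data frame, column names, and events are hypothetical and are not taken from the examples or the CONSORT guidance.

# Minimal sketch (hypothetical data): summary measure per trial group as the
# proportion of participants with at least one recorded adverse event.
import pandas as pd

# Hypothetical participant-level adverse event records; column names are assumptions.
ae = pd.DataFrame({
    "participant_id": [1, 2, 2, 3, 4, 5],
    "group": ["intervention", "intervention", "intervention", "control", "control", "control"],
    "event": ["tenosynovitis", "hepatotoxicity", "nausea", "nausea", None, None],
})

# Flag whether each participant had at least one recorded event.
ae["has_event"] = ae["event"].notna()
per_participant = ae.groupby(["group", "participant_id"])["has_event"].any().reset_index()

# Summary measure for each trial group: proportion of participants with >=1 event.
summary = per_participant.groupby("group")["has_event"].agg(n="size", events="sum")
summary["proportion"] = summary["events"] / summary["n"]
print(summary)

In practice, the analysis metric (eg, time to first event rather than a simple proportion) and the denominator would follow the prespecified definitions reported for each harm.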
Where appropriate, the process for coding each harm and grading its severity should be described, including who did the coding and severity grading, and whether they were blinded to the allocated trial group.
If harms outcomes are aggregated (eg, cardiovascular events, serious events, severe events, withdrawals due to harms, harms imputed to treatment), authors should describe the process for classifying harms, including the grouping system used (eg, grading system to define severity), who did the grouping, and whether they were blinded to the treatment allocated [237, 238].
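To make the idea of aggregation concrete, the sketch below (again in Python with pandas, and again using hypothetical data) shows how coded events might be mapped to aggregate categories and severity grades and then tabulated by trial group. The term-to-category mapping and the severity cut-off are assumptions for illustration only; a real trial would typically apply a standard coding dictionary and a published grading scale, with the details reported as described above.

# Minimal sketch (hypothetical categories): grouping coded adverse event terms into
# aggregate harm outcomes and tabulating them by trial group and severity.
import pandas as pd

# Hypothetical mapping of reported terms to body-system groups (a stand-in for a
# standard dictionary applied by coders who may be blinded to the allocated group).
body_system = {
    "myocardial infarction": "cardiovascular",
    "stroke": "cardiovascular",
    "tenosynovitis": "musculoskeletal",
    "hepatotoxicity": "hepatic",
}

ae = pd.DataFrame({
    "group": ["intervention", "intervention", "control", "control"],
    "event": ["myocardial infarction", "tenosynovitis", "stroke", "hepatotoxicity"],
    "grade": [3, 1, 2, 3],  # hypothetical severity grades
})

ae["category"] = ae["event"].map(body_system)
ae["severe"] = ae["grade"] >= 3  # assumed cut-off for "severe" in this sketch

# Counts of events per aggregate category and trial group, and severe events per group.
print(ae.pivot_table(index="category", columns="group", values="event", aggfunc="count", fill_value=0))
print(ae.groupby("group")["severe"].sum())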
Box 3 summarises the essential information to be reported related to harms. More detailed information can be found in the CONSORT extension for harms, which was updated in 2022 [20].
Box start
Box 3: Reporting harms in randomised controlled trials
Systematically assessed harms
• Definition and instrument used (eg, name of a validated questionnaire)
• Analysis metric for each participant (eg, time to event); summary measure for each trial group (eg, proportion); time point of interest for analysis, where appropriate
Procedures for harm assessment, including:
• Who did the assessment
• Whether the assessors were blinded to the allocated trial group
• Assessment time points and the overall time period for recording harms
Non-systematically assessed harms
• How data were collected
• Assessment time points and overall time period for recording harms
Coding harms and grading severity
Where appropriate, process for coding each harm and grading its severity, including:
• Who did the coding and severity grading, and whether they were blinded to the allocated trial group
• Which coding and severity grading systems were used, if any
• Assessment time points and overall time period for recording harms
Grouping of harms
For grouping of harms by body system, seriousness, severity, withdrawals (due to harms), and causality:
• Definitions of grouping categories
• Who did the grouping, and whether they were blinded to the allocated trial group
Box end