Introduction
Randomized controlled trials are the optimal study design to assess the effectiveness of clinical treatments and can have a compelling and direct impact on patient care.1 However, the quality of reporting of veterinary RCTs is suboptimal, with frequent failure to report randomization procedures, primary outcomes, sample size calculations, and other key methodological items.2,3,4,5,6
The abstract of an RCT is the first source of information readers encounter. Many readers will base their assessment of a trial on the information contained in the abstract and then decide whether to obtain the full report on the basis of that information.7 In other instances, when the abstract is the only portion of the article available, readers may use this stand-alone information to make clinical decisions.8 Therefore, it is important that the information in abstracts of RCTs be as scientifically transparent as possible.
Until the establishment of CONSORT in 1996, no standardized reporting recommendations existed for RCTs or their abstracts.7 The CONSORT Statement was first revised in 2001,9 with a further revision in 2010.10,11 In 2008, an extension to the CONSORT Statement was published with the aim of improving the quality of reporting in abstracts.12 In this extension, the CONSORT group provided a CONSORT for Abstracts checklist detailing 17 items in 8 categories as a “minimum list of essential items, which authors should consider when reporting the results of an RCT in any journal or conference abstract.”12 The 8 sections listed in the extension and captured in the checklist comprise recommendations on the title, author details (specific to conference abstracts), trial design, methods (including participants, interventions, study objective, defined primary outcome, randomization, and blinding [masking]), results (numbers randomized, recruitment, numbers analyzed, outcome, and harms), conclusions, trial registration, and funding.12
Several studies have been conducted to evaluate the quality of abstract reporting in various human medical fields. However, our scoping searches of the literature in Medline (PubMed) and CAB Abstracts databases revealed that reporting quality of abstracts of veterinary RCTs has been systematically assessed only for trials of preharvest food safety interventions13 and not for trials of effectiveness of veterinary interventions. Therefore, the objectives of the study reported here were to evaluate the adherence of abstracts of veterinary RCTs to the recommendations for minimum abstract information captured in the CONSORT for Abstracts checklist, to identify characteristics associated with the number of checklist items reported, and to evaluate changes in reporting over a 5-year period.
Materials and Methods
Study design and outcome of interest
A cross-sectional evaluation of RCT abstracts published in 5 general veterinary journals in 2013 and in 2018 was performed. The primary outcome of interest for this study was the total number of CONSORT checklist items reported in each abstract.
Inclusion criteria
Journals—To be eligible for inclusion in the study, scientific journals were required to be published in the English language, to cover a wide range of animal species and topics, and to have an impact factor > 1 according to the ISI Journal Citation Reports (2013).14 Journals were also required to have been in publication since 1999 or earlier. Aims and scopes of the journals meeting these criteria were evaluated on their websites until the first 5 broad-scope journals were identified: The Veterinary Journal, The Veterinary Record, Journal of Veterinary Internal Medicine, JAVMA, and American Journal of Veterinary Research. These same journals were included for the year 2018.
RCTs—Studies were classified as RCTs as defined by the US National Library of Medicine MeSH publication type15 and the Cochrane glossary.16 Studies were considered RCTs when at least 2 interventions were compared (1 of which may have been standard care or a placebo) and randomization of participants to interventions was mentioned. All studies of the effectiveness of interventions in animals (excluding humans) for which randomization was reported were included,17 regardless of whether nonrandom allocation of participants to interventions (eg, alternation of interventions between participants or certain dates of admission) had actually been used. Studies based on previous RCTs such as those involving subgroup analyses or longer-term outcomes were also considered to be RCTs if randomization had been maintained. Crossover studies (including Latin square design studies) were considered to be RCTs if the participants were described as randomized to treatment order. In vitro studies were not included.
Identification of RCTs was performed by scrutinizing the title and abstract of the associated report. When randomization was not mentioned in the title or abstract, the full text was electronically searched for the word random. Full texts were retrieved for all studies classified as RCTs. The process of RCT identification was performed by 2 investigators (NDG identified RCTs for the year 2013; REM identified RCTs for the year 2018), both of whom had received specific training in study design.
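The electronic search step can be illustrated with a short script. The sketch below is purely illustrative of the screening logic described above and is not the authors' tooling; the directory name and the assumption that full texts are available as plain-text files are hypothetical.

```python
# Illustrative sketch of the screening step described above: when the title and
# abstract do not mention randomization, search the full text for the word "random".
# The directory name and plain-text extraction step are hypothetical.
import re
from pathlib import Path

pattern = re.compile(r"random", re.IGNORECASE)  # matches random, randomized, randomisation, etc.

candidates = []
for path in Path("fulltexts_2013").glob("*.txt"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    if pattern.search(text):
        candidates.append(path.name)  # flag for manual confirmation as an RCT

print(f"{len(candidates)} articles mention randomization and require manual review")
```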
Data extraction
The group responsible for creating the CONSORT for Abstracts checklist12 identified 17 items as “essential items that authors should consider when reporting the main … results of a RCT in any journal or conference abstract.” The second item, authors’ details, concerns contact details of the corresponding author and is specific to conference abstracts; consequently, that item was not assessed in this study. The 16th item, trial registration, was also not assessed because a veterinary trial registry had become available only recently, with the launch of the AVMA Animal Health Studies Database in 2016.18
Two investigators (REM and AKP) independently examined each abstract and assigned a yes or no value to each of the remaining 15 checklist items on the basis of whether the item was reported in the abstract. Scores were recorded in an electronic spreadsheet.a The explanation and elaboration document associated with the CONSORT for Abstracts checklist12 was available for consultation during the scoring process. After both investigators had finished scoring all abstracts, they compared their scores. For any abstract with at least 1 item interpreted differently, the 2 investigators reread the abstract and discussed their interpretations until they agreed. A third investigator (NDG) was present during this process and acted as arbiter in situations of persistent disagreement.
In addition to the 15 CONSORT items, information was extracted for each abstract regarding the journal title, volume, and issue; country of the first author and primary language of that country (English or other); word count; type of study participants (clinical patients or other); type of intervention (surgical or nonsurgical); species group (dogs, cats, equids, ruminants or swine, or birds or exotics); and structured (vs unstructured) abstract. Official languages of each country were determined by means of an online encyclopedia search.19 If a country had > 1 official language, the first listed language was used.
Clinical patient was defined as a representative of the population that the investigated intervention or interventions were intended to benefit (eg, those with a certain disease or condition). Animals with experimentally induced disease, animals maintained in laboratory conditions (except when the investigated intervention was intended to benefit laboratory animals), and healthy animals (except when the investigated intervention was intended to benefit healthy animals, such as a particular diet) were classified as other participants.
Statistical analysis
Statistical analyses were performed with commercial software.b Proportions of abstracts containing various CONSORT for Abstracts checklist items or with other characteristics were reported as percentages with 95% CIs. Values for total number of items and word count were reported as median (range). Generalized linear mixed models were built to determine whether certain variables were associated with the total number of checklist items reported per abstract (the unit of analysis). Gamma regression models were built in which the total number of checklist items reported was the dependent variable. Because abstracts published in the same journal were expected to have less variability than abstracts published in different journals, journal identity was included in the models as a random effect. The random-effect block included the intercept and used variance components as the covariance type. The variables word count and structured (vs unstructured) abstract were included as fixed effects in every model, regardless of their significance, because they were hypothesized a priori to be potential confounders. Additional models were built in a stepwise manner by adding year of publication, primary language of the first author's country, type of participant, species group, and type of intervention as fixed effects. Goodness of fit was compared across models by means of Akaike information criterion values. Values of P < 0.05 were considered significant.
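Although the analysis described above was performed with commercial software (footnote b), the general modeling approach can be illustrated in open-source code. The following Python sketch uses a generalized estimating equation with a gamma family and log link to account for clustering of abstracts within journals; this is a stand-in for the random-intercept gamma model described above, not a reproduction of it, and the file name and all column names are hypothetical.

```python
# Illustrative sketch only; the published analysis was performed with commercial
# software (footnote b). This stand-in accounts for clustering of abstracts within
# journals with a GEE (exchangeable working correlation) rather than the random
# intercept described above, and uses a gamma family with a log link. The file name
# and all column names (items_reported, word_count, structured, year, language,
# participant_type, species_group, intervention, journal) are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

abstracts = pd.read_csv("abstract_scores.csv")

model = smf.gee(
    "items_reported ~ word_count + structured + year + language"
    " + participant_type + species_group + intervention",
    groups="journal",  # abstracts published in the same journal are correlated
    data=abstracts,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```

Exponentiated coefficients from such a log-link fit play the same role as the adjusted ratios reported in Table 4, although the estimates would not be numerically identical to those from a random-intercept model.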
Results
Characteristics of included abstracts
Of the 1,090 and 799 full-length articles published in the 5 veterinary journals in 2013 and 2018, respectively, 114 (10.5%) and 98 (12.3%) were reports of RCTs (Table 1). A total of 212 RCT abstracts were assessed.
Table 1—Number (%) of abstracts of RCTs published in 5 general veterinary journals in 2013 and 2018.

| Journal | Total No. of articles, 2013 | No. (%) of all included RCTs, 2013 | Total No. of articles, 2018 | No. (%) of all included RCTs, 2018 |
|---|---|---|---|---|
| The Veterinary Journal | 369 | 30 (26.3) | 107 | 14 (14.3) |
| JAVMA | 213 | 18 (15.8) | 168 | 13 (13.3) |
| American Journal of Veterinary Research | 194 | 34 (29.8) | 145 | 37 (37.8) |
| Journal of Veterinary Internal Medicine | 181 | 19 (16.7) | 229 | 25 (25.5) |
| The Veterinary Record | 133 | 13 (11.4) | 150 | 9 (9.2) |
| Total | 1,090 | 114 (100) | 799 | 98 (100) |
Abstracts contained a median of 244 words (range, 132 to 321 words) overall, 251 words (range, 132 to 321 words) in 2013, and 240 words (range, 141 to 292 words) in 2018. Abstracts by first authors from English-speaking countries accounted for 150 of 212 (70.8%) abstracts. The United States was the most common country of origin (n = 118 [55.7%]), followed by the United Kingdom (14 [6.6%]), Canada (9 [4.2%]), and Brazil (9 [4.2%]; Table 2).
Table 2—Number (%) of abstracts from first authors of various countries for the RCTs of Table 1, by year of publication.

| Country | 2013 (n = 114) | 2018 (n = 98) |
|---|---|---|
| United States | 60 (52.6) | 58 (59.2) |
| United Kingdom | 8 (7.0) | 6 (6.1) |
| Canada | 4 (3.5) | 5 (5.1) |
| Brazil | 2 (1.8) | 7 (7.1) |
| Germany | 5 (4.4) | 3 (3.1) |
| Italy | 4 (3.5) | 2 (2.0) |
| Finland | 1 (0.9) | 5 (5.1) |
| Spain | 3 (2.6) | 2 (2.0) |
| Netherlands | 4 (3.5) | 1 (1.0) |
| Japan | 2 (1.8) | 1 (1.0) |
| France | 2 (1.8) | 1 (1.0) |
| Other countries* | 19 (16.7) | 7 (7.1) |

*Countries represented by ≤ 2 RCTs were grouped together as other countries.
Participants consisted of clinical patients in 56 of 114 (49.1%) RCTs in 2013 and 30 of 98 (30.6%) RCTs in 2018. Dogs were the most common species group (n = 82 [38.7%]), followed by equids (41 [19.3%]), ruminants or swine (36 [16.9%]), cats (28 [13.2%]), and birds or exotics (25 [11.8%]). Overall, 146 of 212 (68.9%) RCTs had a structured abstract and 66 (31.1%) had an unstructured (narrative) abstract, in accordance with the respective journal guidelines. Characteristics of RCTs and abstracts were summarized by year (Table 3). Nonsurgical (medical) interventions were far more common than surgical interventions.
Table 3—Number (%) of abstracts with various characteristics for the RCTs of Table 1, by year of publication.

| Characteristic | 2013 (n = 114) | 2018 (n = 98) |
|---|---|---|
| Type of study participants | | |
| Clinical patients | 56 (49.1) | 30 (30.6) |
| Other | 58 (50.9) | 68 (69.4) |
| Primary language of first author's country | | |
| English* | 79 (69.3) | 71 (72.4) |
| Other | 35 (30.7) | 27 (27.6) |
| Type of abstract | | |
| Structured | 71 (62.3) | 75 (76.5) |
| Unstructured | 43 (37.7) | 23 (23.5) |
| Type of intervention | | |
| Surgical | 4 (3.5) | 2 (2.0) |
| Nonsurgical | 110 (96.5) | 96 (98.0) |
| CONSORT checklist item | | |
| Identification of the trial as randomized in the title | 9 (7.9) | 11 (11.2) |
| Description of trial design | 63 (55.3) | 58 (59.2) |
| Eligibility criteria for participants and the setting where data were collected | 25 (21.9) | 7 (7.1) |
| Interventions intended for each group | 111 (97.4) | 98 (100) |
| Specific objective or hypothesis | 110 (96.5) | 97 (99.0) |
| Clearly defined primary outcome | 5 (4.4) | 6 (6.1) |
| Randomization and how participants were allocated to interventions | 0 (0) | 2 (2.0) |
| Blinding (masking) | 9 (7.9) | 8 (8.2) |
| Numbers of participants randomized to each group | 48 (42.1) | 64 (65.3) |
| Recruitment or trial status | 0 (0) | 0 (0) |
| Numbers of participants analyzed in each group | 12 (10.5) | 15 (15.3) |
| Results for each group for the primary outcome and the estimated effect size | 2 (1.8) | 9 (9.2) |
| Important adverse events or adverse effects | 11 (9.6) | 11 (11.2) |
| Conclusion or general interpretation of the results | 114 (100) | 96 (98.0) |
| Funding source | 0 (0) | 0 (0) |

*English-speaking countries included the United States, Canada, the United Kingdom, Australia, the Netherlands, Saint Kitts and Nevis, and New Zealand.
Reporting of CONSORT items
None of the 212 abstracts reported all 15 of the evaluated CONSORT items. The median number of items reported was 5 (range, 2 to 10; mean ± SD, 4.7 ± 1.3). Reporting of individual items was summarized by year (Table 3).
Overall, the study was identified in the title as randomized in 20 of 212 (9.4%; 95% CI, 6.2% to 14.1%) abstracts, and the trial design was clearly explained in 121 (57.1%; 95% CI, 50.4% to 63.6%). Eligibility criteria for participants and the setting where data were collected were reported in 32 (15.1%; 95% CI, 10.9% to 20.5%) abstracts, and the interventions intended for each group were reported in 209 (98.6%; 95% CI, 95.9% to 99.5%). A specific objective or hypothesis was explicitly stated in 207 (97.6%; 95% CI, 94.6% to 99%) abstracts. A primary outcome was explicitly reported in 11 (5.2%; 95% CI, 3% to 9.1%) abstracts, and information regarding randomization was reported in 2 (0.9%; 95% CI, 0.3% to 3.4%). Information on blinding was reported in 17 (8.0%; 95% CI, 5.1% to 12.5%) abstracts.
The number of participants randomized to each group was reported in 112 (52.8%; 95% CI, 46.1% to 59.4%) abstracts. The number of participants included in the analysis was reported in 27 (12.7%; 95% CI, 8.9% to 17.9%) abstracts. Important adverse effects were described in 22 (10.4%; 95% CI, 7.0% to 15.2%) abstracts. Results for the primary outcome with the estimated treatment effect size were reported in 11 abstracts (5.2%; 95% CI, 3% to 9.1%).
A conclusion statement was included in 210 (99.1%; 95% CI, 96.6% to 99.7%) abstracts. None of the abstracts included funding information.
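The 95% CIs reported for these proportions are consistent with Wilson score intervals for a binomial proportion (an assumption; the Methods section does not name the interval procedure). A minimal sketch reproducing the first value reported above:

```python
# Minimal sketch: reproduce a reported proportion and 95% CI using a Wilson score
# interval (assumed method; the Methods section does not name the CI procedure).
from statsmodels.stats.proportion import proportion_confint

count, nobs = 20, 212  # abstracts identifying the study as randomized in the title
lower, upper = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"{count / nobs:.1%} (95% CI, {lower:.1%} to {upper:.1%})")
# Prints: 9.4% (95% CI, 6.2% to 14.1%), matching the value reported above
```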
Factors associated with the number of items reported
The final generalized linear mixed model of factors associated with the total number of CONSORT items reported in abstracts included word count, primary language of the first author's country (English or other), year of publication (2013 or 2018), type of participants (clinical patients or other), type of intervention (surgical or nonsurgical), structured abstract (yes or no), and species group as fixed effects and journal identity as a random effect (Table 4). A significant (P < 0.001) association was identified between a higher number of items reported and the inclusion of clinical patients (OR, 1.13; 95% CI, 1.05 to 1.22; Figure 1). Abstracts for RCTs that included clinical patients contained a mean of 0.8 (95% CI, 0.4 to 1.1) more items than did abstracts for RCTs involving other participants.
Table 4—Results of a multivariable generalized linear mixed model to evaluate associations between various characteristics and the number of CONSORT for Abstracts checklist items reported for the RCTs of Table 1.

| Characteristic | Median (range) No. of items | Adjusted OR (95% CI) | P value |
|---|---|---|---|
| Year of publication | | | |
| 2013 (n = 114) | 4 (2–9) | 1.07 (0.99–1.14) | 0.055 |
| 2018 (n = 98) | 5 (2–10) | Referent | — |
| Type of study participants | | | |
| Clinical patients (n = 86) | 5 (3–9) | 1.13 (1.05–1.22) | < 0.001 |
| Other (n = 126) | 4 (2–10) | Referent | — |
| Primary language of first author's country | | | |
| English (n = 150) | 5 (2–10) | 0.99 (0.92–1.08) | 0.93 |
| Other (n = 62) | 5 (2–8) | Referent | — |
| No. of words (n = 212) | — | 0.99 (0.99–1.00) | 0.03 |
| Type of abstract | | | |
| Structured (n = 146) | 5 (3–10) | 1.13 (0.94–1.36) | 0.18 |
| Unstructured (n = 66) | 4.5 (2–7) | Referent | — |
| Type of intervention | | | |
| Surgical (n = 6) | 4.5 (2–7) | 0.93 (0.76–1.13) | 0.45 |
| Medical (n = 206) | 5.0 (2–10) | Referent | — |
| Species group | | | |
| Cats (n = 28) | 5 (2–8) | 0.95 (0.86–1.06) | 0.38 |
| Ruminants or swine (n = 36) | 5 (2–7) | 0.96 (0.87–1.06) | 0.43 |
| Equids (n = 41) | 4 (3–7) | 0.99 (0.90–1.09) | 0.91 |
| Birds or exotics (n = 25) | 4 (2–6) | 0.99 (0.89–1.11) | 0.93 |
| Dogs (n = 82) | 5 (3–10) | Referent | — |

— = Not applicable.
Controlling for other variables, a significant (P = 0.03) but slight negative association was identified between word count and the number of items reported (OR, 0.99; 95% CI, 0.99 to 1.00), indicating that the odds of reporting 1 additional item increased by approximately 1% for each 1-word decrease in abstract length. Type of abstract (structured vs unstructured), type of intervention (surgical vs medical), species group, and year of publication were not associated with the number of items reported (Table 4). The covariance of the included random effect (journal) was not significant (P = 0.29). Other models that included fewer fixed effects had higher Akaike information criterion values and yielded comparable results.
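Assuming the gamma model used a log link (implied by the ratio-style estimates in Table 4), each reported adjusted ratio acts multiplicatively per unit change in its covariate, so the effect of a Δ-unit change compounds as

$$
\text{ratio for a } \Delta\text{-unit change} \;=\; \exp(\beta\,\Delta) \;=\; r^{\Delta}, \qquad r = \exp(\beta),
$$

where r is the per-unit ratio reported in Table 4 and β is the corresponding model coefficient. Because the per-word ratio is reported to only 2 decimal places (95% CI upper bound, 1.00), the compounded effect across large word-count differences cannot be recovered precisely from the table.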
Discussion
In the present study, abstracts of all RCTs published in 5 general veterinary journals during 2013 and 2018 were assessed to evaluate adherence to the CONSORT for Abstracts checklist. Item reporting was generally deficient, with a median of only 5 of the 15 evaluated checklist items reported. Key information, such as details on participants, the primary outcome, randomization, blinding, numbers analyzed, results for the primary outcome, and harms, was lacking in a high percentage of abstracts. The study was identified as randomized in the title for only 7.9% of abstracts in 2013 and 11.2% of abstracts in 2018. Details on recruitment and funding were not reported in any abstract. Only the items pertaining to the study objective, interventions, and conclusion were reported in most (> 95%) abstracts.
The median number of reported checklist items increased, albeit nonsignificantly, from 4 in 2013 to 5 in 2018. Even if this increase were significant from a statistical perspective, it would still be unremarkable given that 10 other items were still missing. In 2016, a study20 was conducted to evaluate the quality of reporting of 891 RCT abstracts in 3 pediatrics journals before and after the launch of the CONSORT for Abstracts checklist and associated documentation. Investigators found that different journals had different degrees of improvement over time; for example, JAMA Pediatrics had an increase over the study period from a mean of 8 items reported in RCT abstracts to a mean of 10 items reported, suggesting some but suboptimal improvement. Although all abstracts included in the present study were published after the CONSORT recommendations became available, our findings were also suboptimal, highlighting the need for continued improvement in the quality of reporting of veterinary RCT abstracts.
As in other studies,21,22,23 all CONSORT for Abstracts checklist items were weighted equally for the purposes of this study. It should be noted, however, that not all items carry the same weight and that some items have a greater impact on the transparency of reporting. For example, failure to report the actual numbers analyzed may have a more important impact on reporting quality than failure to include a conclusion statement. Without knowledge of the actual numbers analyzed and the participant dropout rate during a trial, a reader's ability to appropriately interpret the findings may be substantially impaired. Similarly, the lack of reporting of harm associated with the intervention may have a noteworthy clinical impact.24 In the present study, most abstracts failed to properly disclose the presence or absence of harm or adverse events associated with the intervention. In a previous systematic review,25 adverse events were included in the title, abstract, or introduction of the reports for 108 of 168 (63.1%) trials of cancer treatment in companion animals. The difference in findings between that study and the present study may be related to differences in study design and the nature of the included clinical trials, among other factors.
In veterinary medicine, registration of clinical trials is not mandatory as it is in human medicine; therefore, this checklist item was not evaluated in the present study. Failure to register clinical trials or publish their protocols has been shown to result in publication bias,26,27 and registration in a trial registry is associated with better transparency.28 The AVMA Animal Health Studies Database was launched in June 2016 as a resource for researchers seeking animals to participate in clinical studies and for veterinarians and animal owners exploring options for treatment.18 We believe that it is important to enforce trial registration in veterinary medicine and to report such registration in the abstracts of clinical trials. Other than the item for trial registration, all of the CONSORT for Abstracts checklist items apply similarly to veterinary medicine as to human medicine and should therefore be reported properly in the abstracts of RCTs. Guidelines for reporting of veterinary RCTs such as the REFLECT Statement29 (recommendations for a minimum set of items for trials reporting production, health, and food-safety outcomes) have been published; however, such guidelines currently lack a specific extension for abstracts of RCTs, are generally limited to specific fields within veterinary medicine, and are not necessarily applicable to RCTs in the clinical setting.
Our findings indicated a lack of transparent reporting in RCT abstracts in veterinary medicine, as has been observed in human medicine. For example, in the abstracts of RCTs on age-related macular degeneration, the median number of CONSORT items reported was 7.30 In the abstracts of phase III oncology trials, the median number of items reported was 9.9,31 which is considerably higher than our results. Interestingly, the abstract items that were adequately reported for veterinary RCTs in the present study (objective, interventions, and conclusion) were the same as those most frequently reported for human RCTs in other studies.32,33 The least frequently reported items, such as funding and randomization, were also similar to those reported for human RCTs.33
The factor most strongly associated (ie, the highest adjusted OR) with a higher number of CONSORT checklist items reported in the abstracts of the present study was the inclusion of clinical patients, translating to a mean of approximately 1 more item than in abstracts for RCTs involving other types of participants. This was in line with our previous finding that reports of veterinary RCTs involving nonclinical participants were of overall worse reporting quality than those involving clinical patients.2 Although research in human medicine30 has shown a significant positive association between word count or structured abstracts and reporting quality, we found no association between structured (vs unstructured) abstracts and the number of checklist items reported. Furthermore, we found a slight negative association between abstract word count and the number of items reported. In developing the CONSORT for Abstracts checklist, the CONSORT group found that 250 to 300 words were sufficient to address all items in the checklist.12 We also did not find a significant association between English- versus non–English-speaking countries of first authors and the number of CONSORT items reported. These 2 findings were important because they may indicate that incomplete reporting is not likely caused by word limits or language barriers.
A potential limitation of the study reported here, as in other similar studies, was that the investigators who scored the abstracts were not blinded to the journal of origin or to the authors; therefore, the possibility of assessor bias could not be ruled out. We recommend that, for future studies, abstracts be extracted into a plain document and any information that could result in a biased assessment be removed, facilitating blinding. The findings reported here pertained to 5 general veterinary journals and may not apply to other journals. Owing to the nature of the study, we were also unable to determine whether the authors of each abstract had consulted the CONSORT for Abstracts checklist while writing the abstract.
In conclusion, our findings indicated that the reporting quality of RCT abstracts in 5 general veterinary journals was suboptimal. Because abstracts may be the only information available in some countries or in certain veterinary settings, we believe an urgent need exists to improve reporting in abstracts of RCTs. Authors, reviewers, and journal editors should be aware of these findings to improve adherence to the CONSORT for Abstracts checklist.
Acknowledgments
No third-party funding or support was received in connection with this study or the writing or publication of the manuscript. Ms. Maranville performed part of this research project under honorarium for the 2019 Summer Research Training Program at Oklahoma State University. Dr. Di Girolamo receives reimbursement for his editorial role from 2 veterinary journals not included in this study.
The authors declare that there were no conflicts of interest.
Footnotes
a. Excel, Microsoft Corp, Redmond, Wash.
b. SPSS Statistics, version 22.0, IBM, Chicago, Ill.
Abbreviations
CONSORT | Consolidated Standards of Reporting Trials |
RCT | Randomized controlled trial |
References
1. Byar DP, Simon RM, Friedewald WT, et al. Randomized clinical trials. Perspectives on some recent ideas. N Engl J Med 1976;295:74–80.
2. Di Girolamo N, Meursinge Reynders R. Deficiencies of effectiveness of intervention studies in veterinary medicine: a cross-sectional survey of ten leading veterinary and medical journals. PeerJ 2016;4:e1649.
3. Sargeant JM, Thompson A, Valcour J, et al. Quality of reporting of clinical trials of dogs and cats and associations with treatment effects. J Vet Intern Med 2010;24:44–50.
4. Sargeant JM, Elgie R, Valcour J, et al. Methodological quality and completeness of reporting in clinical trials conducted in livestock species. Prev Vet Med 2009;91:107–115.
5. Sargeant JM, Saint-Onge J, Valcour J, et al. Quality of reporting in clinical trials of preharvest food safety interventions and associations with treatment effect. Foodborne Pathog Dis 2009;6:989–999.
6. Di Girolamo N, Giuffrida MA, Winter AL, et al. In veterinary trials reporting and communication regarding randomisation procedures is suboptimal. Vet Rec 2017;181:195.
8. Saint S, Christakis DA, Saha S, et al. Journal reading habits of internists. J Gen Intern Med 2000;15:881–884.
9. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. BMC Med Res Methodol 2001;1:2.
10. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.
11. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC Med 2010;8:18.
12. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med 2008;5:e20.
13. Snedeker KG, Canning P, Totton SC, et al. Completeness of reporting in abstracts from clinical trials of pre-harvest interventions against foodborne pathogens. Prev Vet Med 2012;104:15–22.
14. Clarivate Web of Science. Journal citation reports. Available at: clarivate.com/webofsciencegroup/solutions/journal-citation-reports/. Accessed Dec 3, 2013.
15. National Center for Biotechnology Information. Randomized controlled trial [publication type]. Available at: www.ncbi.nlm.nih.gov/mesh/68016449. Accessed Jun 15, 2020.
16. Cochrane Community. Glossary. Randomized controlled trial. Available at: https://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/SURE-Guides-v2.1/Collectedfiles/source/glossary.html. Accessed Jun 15, 2020.
17. Schulz KF, Chalmers I, Grimes DA, et al. Assessing the quality of randomization from reports of controlled trials published in obstetrics and gynecology journals. JAMA 1994;272:125–128.
18. Burns K. AVMA launches database of clinical studies. Available at: www.avma.org/News/JAVMANews/Pages/160715a.aspx. Accessed Oct 22, 2019.
20. Chhapola V, Tiwari S, Brar R, et al. An interrupted time series analysis showed suboptimal improvement in reporting quality of trial abstract. J Clin Epidemiol 2016;71:11–17.
21. Bigna JJ, Noubiap JJ, Asangbeh SL, et al. Abstracts reporting of HIV/AIDS randomized controlled trials in general medicine and infectious diseases journals: completeness to date and improvement in the quality since CONSORT extension for abstracts. BMC Med Res Methodol 2016;16:138.
22. Chen Y, Li J, Ai C, et al. Assessment of the quality of reporting in abstracts of randomized controlled trials published in five leading Chinese medical journals. PLoS One 2010;5:e11926.
23. Berwanger O, Ribeiro RA, Finkelsztejn A, et al. The quality of reporting of trial abstracts is suboptimal: survey of major general medical journals. J Clin Epidemiol 2009;62:387–392.
24. Bernal-Delgado E, Fisher ES. Abstracts in high profile journals often fail to report harm. BMC Med Res Methodol 2008;8:14.
25. Giuffrida MA. A systematic review of adverse event reporting in companion animal clinical trials evaluating cancer treatment. J Am Vet Med Assoc 2016;249:1079–1087.
26. Hetherington J, Dickersin K, Chalmers I, et al. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84:374–380.
27. Abaid LN, Grimes DA, Schulz KF. Reducing publication bias of prospective clinical trials through trial registration. Contraception 2007;76:339–341.
28. Reveiz L, Cortés-Jofré M, Lobos CA, et al. Influence of trial registration on reporting quality of randomized trials: study from highest ranked journals. J Clin Epidemiol 2010;63:1216–1222.
29. Sargeant JM, O'Connor AM, Gardner IA, et al. The REFLECT statement: reporting guidelines for randomized controlled trials in livestock and food safety: explanation and elaboration. J Food Prot 2010;73:579–603.
30. Baulig C, Krummenauer F, Geis B, et al. Reporting quality of randomised controlled trial abstracts on age-related macular degeneration health care: a cross-sectional quantification of the adherence to CONSORT abstract reporting recommendations. BMJ Open 2018;8:e021912.
31. Ghimire S, Kyung E, Lee H, et al. Oncology trial abstracts showed suboptimal improvement in reporting: a comparative before-and-after evaluation using CONSORT for Abstract guidelines. J Clin Epidemiol 2014;67:658–666.
32. Hua F, Walsh T, Glenny AM, et al. Reporting quality of randomized controlled trial abstracts presented at European Orthodontic Society congresses. Eur J Orthod 2016;38:584–592.
33. Faggion CM, Giannakopoulos NN. Quality of reporting in abstracts of randomized controlled trials published in leading journals of periodontology and implant dentistry: a survey. J Periodontol 2012;83:1251–1256.