Document Type : Review Article
Authors
1 Assistant Professor, Department of Periodontics, Mashhad School of Dentistry, Mashhad University of Medical Sciences, Mashhad, Iran
2 Professor, Department of Periodontics, Mashhad School of Dentistry, Mashhad University of Medical Sciences, Mashhad, Iran
3 Department of Endodontics, School of Dentistry, Tabriz University of Medical Sciences, Tabriz, Iran
Abstract
Keywords
Introduction
Ensuring the scientific validity of epidemiological studies plays a significant role in how their results are relied upon (1). According to previous research, a high percentage of studies should be questioned in terms of accuracy and validity, which highlights the importance of the researcher's task of recognizing standard and rigorous studies (2). Recent publications presented at seminars and congresses have often not been formally critiqued. The solution to this problem is to carry out critical appraisal studies, a method that has been used to assess evidence-based research in various fields.
Unfortunately, despite the growing number of evidence-based studies in the medical sciences, critical appraisal studies in the dental field are limited. Critical appraisal studies assess the validity of research and determine the relevance of its results to reality; they also evaluate the applicability of study information in clinical settings. Critical appraisal is a proper filter for the validity and reliability of every study. Critical Appraisal (CA) studies evaluate methodology and statistics as a type of research in which shortcomings and quality are assessed against standard checklists (3).
In this regard, Moskouchi compiled and categorized the articles indexed by Iranian dental schools in four databases: Web of Knowledge, Scopus, PubMed, and IranMedex. In 2005, Dixon et al. critically evaluated published meta-analyses in general surgery. They searched MEDLINE and reference lists from 1997 to 2002 and consulted general surgery specialists to identify relevant meta-analyses. With this search strategy, 51 meta-analysis articles out of 487 studies were eligible for inclusion. Two researchers independently assessed the quality of these meta-analyses using the 10-item Overview Quality Assessment Questionnaire. The overall agreement between the two researchers was >81%, which was considered favorable. Of the 51 articles, 38 were published in surgical journals. Most studies had significant methodological shortcomings (a mean score of 3.3 on a scale of 1–7). The critical assessment of meta-analysis articles published in general surgery journals thus showed frequent methodological errors; the quality of these reports limits the validity of the findings and of comparisons among the primary studies. Their article presented guidelines for improving the quality of meta-analysis studies (4). In 2003, Mahshid and Ansari examined the role of evidence-based dentistry in the conduct and presentation of research papers. Their study indicated that the scientific and clinical application of research results depends on the strength of the evidence, or the ranking of the research information. Only articles that have been examined in long-term clinical trials can influence scientific and clinical judgment; such papers are later reviewed on a larger scale in systematic reviews (overviews, meta-analyses) (5). The present study aimed to critically appraise studies conducted in the Department of Periodontics of Mashhad School of Dentistry over the last twenty years (1994-2014) in order to identify existing deficiencies and provide a report that can improve future studies.
Materials and Methods
In this study, all articles published by faculty members of the Department of Periodontics at Mashhad School of Dentistry from 1994 to 2014 were evaluated. A total of 81 articles were divided into five categories: observational, diagnostic, clinical, and animal studies, and meta-analyses/systematic reviews. The distribution of these 81 articles was as follows:
• Observational studies: 54%
• Diagnostic studies: 29%
• Clinical studies: 12%
• Animal studies: 3.7%
• Meta-analysis and systematic review studies: 1.2%
The collected studies were then evaluated by using the relevant checklists.
STROBE checklist: (guidelines for reporting observational studies) (6)
This checklist, which is used to evaluate observational studies, plays a significant role in promoting knowledge and in recognizing relevant factors in diseases. The STROBE statement considers three main types of observational studies: cohort studies, case-control studies, and cross-sectional studies.
STARD Checklist: (A complete and accurate report of diagnostic accuracy of studies) (7)
STARD stands for Standards for Reporting Diagnostic Accuracy Studies. According to the statement on the "http://www.stard-statement.org" website, the STARD checklist contains 22 items used for studies of diagnostic accuracy. The STARD statement also includes a flow diagram that shows how the index test results are compared with the reference standard results.
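As a brief illustration (these definitions are not part of the STARD checklist itself), the accuracy measures reported in such studies are typically derived from a 2×2 cross-tabulation of the index test against the reference standard, where TP, FP, FN, and TN denote the true positive, false positive, false negative, and true negative counts:

\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}
\]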
CONSORT checklist: (Reporting parallel-group randomized trials) (8)
One of the most significant types of studies is the randomized clinical trial (RCT). The CONSORT statement provides a list of 25 items used for reporting clinical trials and is maintained on a permanent website, http://www.consort-statement.org.
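For context only (CONSORT item 7a asks how the sample size was determined but does not prescribe a particular method), a commonly used calculation for comparing two group means is sketched below; here σ is the assumed common standard deviation, Δ the minimal clinically important difference, α the significance level, and 1−β the desired power:

\[
n_{\text{per group}} = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}}
\]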
ARRIVE checklist: (Animal Research: Reporting of In Vivo Experiments) (9)
This checklist is used for evaluating animal studies.
PRISMA checklist: (systematic reviews and meta-analyses) (10)
Meta-analyses and systematic reviews are regarded as the final word in clinical decision-making; therefore, the accuracy of the results of this group of studies should be examined with particular care. The PRISMA statement addresses the standard reporting of systematic reviews and meta-analyses, although its main focus is limited to reviews of randomized trials.
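For reference, the measure of consistency cited in PRISMA item 14 is usually quantified with the I² statistic, where Q is Cochran's heterogeneity statistic and df is the number of included studies minus one:

\[
I^{2} = \max\!\left(0,\ \frac{Q - df}{Q}\right) \times 100\%
\]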
Results
According to the surveys, eighty-one articles published between 1994 and 2014 were collected: forty-four observational articles, 24 diagnostic articles, nine clinical articles, three animal articles, and one meta-analysis/systematic review. The most important results of this study are as follows:
According to Table I, for the STROBE checklist, 88.63% of the studies stated specific objectives, while 11.36% referred only to a general purpose. Inclusion and exclusion criteria were stated in 90.9% of the studies. All studies referred to the type of statistical method used. Only 22.72% of the studies discussed their limitations.
Table I: Results of STROBE checklist
Section | Item No | Recommendation | Mention (%) | Not mention (%)
Title and abstract | 1 | (a) Indicate the study’s design with a commonly used term in the title or the abstract | 56.81 | 43.18
 | | (b) Provide in the abstract an informative and balanced summary of what was done and what was found | 100 | 0
Introduction | | | |
Background/rationale | 2 | Explain the scientific background and rationale for the investigation being reported | 100 | 0
Objectives | 3 | State specific objectives, including any prespecified hypotheses | 88.63 | 11.36
Methods | | | |
Study design | 4 | Present key elements of study design early in the paper | 77.77 | 22.22
Setting | 5 | Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection | 86.36 | 13.63
Participants | 6 | (a) Cohort study—Give the eligibility criteria, and the sources and methods of selection of participants. Describe methods of follow-up. Case-control study—Give the eligibility criteria, and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls. Cross-sectional study—Give the eligibility criteria, and the sources and methods of selection of participants | 90.9 | 9.1
 | | (b) Cohort study—For matched studies, give matching criteria and number of exposed and unexposed. Case-control study—For matched studies, give matching criteria and the number of controls per case | 87.23 | 12.77
Variables | 7 | Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers. Give diagnostic criteria, if applicable | 77.27 | 22.72
Data sources/measurement | 8* | For each variable of interest, give sources of data and details of methods of assessment (measurement). Describe comparability of assessment methods if there is more than one group | 86.36 | 13.63
Bias | 9 | Describe any efforts to address potential sources of bias | 34.1 | 65.9
Study size | 10 | Explain how the study size was arrived at | 0 | 100
Quantitative variables | 11 | Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen and why | 22.72 | 77.27
Statistical methods | 12 | (a) Describe all statistical methods, including those used to control for confounding | 100 | 0
 | | (b) Describe any methods used to examine subgroups and interactions | 100 | 0
 | | (c) Explain how missing data were addressed | 90.22 | 9.78
 | | (e) Describe any sensitivity analyses | 74.45 | 25.55
Results | | | |
Participants | 13* | (a) Report numbers of individuals at each stage of study—eg numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analysed | 40.9 | 59.1
 | | (b) Give reasons for non-participation at each stage | 47.78 | 52.22
 | | (c) Consider use of a flow diagram | 38.89 | 61.11
Descriptive data | 14* | (a) Give characteristics of study participants (eg demographic, clinical, social) and information on exposures and potential confounders | 22.72 | 77.27
 | | (b) Indicate number of participants with missing data for each variable of interest | 0 | 100
 | | (c) Cohort study—Summarise follow-up time (eg, average and total amount) | 63.63 | 36.36
Outcome data | 15* | Cohort study—Report numbers of outcome events or summary measures over time | 100 | 0
 | | Case-control study—Report numbers in each exposure category, or summary measures of exposure | 100 | 0
 | | Cross-sectional study—Report numbers of outcome events or summary measures | 100 | 0
Main results | 16 | (a) Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (eg, 95% confidence interval). Make clear which confounders were adjusted for and why they were included | 100 | 0
 | | (b) Report category boundaries when continuous variables were categorized | 100 | 0
 | | (c) If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period | 100 | 0
Other analyses | 17 | Report other analyses done—eg analyses of subgroups and interactions, and sensitivity analyses | 0 | 100
Discussion | | | |
Key results | 18 | Summarise key results with reference to study objectives | 65.9 | 34.1
Limitations | 19 | Discuss limitations of the study, taking into account sources of potential bias or imprecision. Discuss both direction and magnitude of any potential bias | 22.72 | 77.27
Interpretation | 20 | Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence | 100 | 0
Generalisability | 21 | Discuss the generalisability (external validity) of the study results | 56.81 | 43.18
Other information | | | |
Funding | 22 | Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based | 64.23 | 35.77
According to Table II, for the STARD checklist, the selection of participants was explained in 95.83% of the studies. Blinding of the readers was reported in only 25% of the studies, and the clinical and demographic characteristics of the study population were mentioned in only 29.16% of the studies.
Table II: Results of STARD checklist
Section and Topic | Item # | Checklist item | Mention (%) | Not mention (%)
TITLE/ABSTRACT/KEYWORDS | 1 | Identify the article as a study of diagnostic accuracy (recommend MeSH heading ’sensitivity and specificity’). | 95.83 | 4.16
INTRODUCTION | 2 | State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups. | 100 | 0
METHODS | | | |
Participants | 3 | Describe the study population: the inclusion and exclusion criteria, setting and locations where the data were collected. | 91.6 | 8.33
 | 4 | Describe participant recruitment: Was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard? | 95.83 | 4.16
 | 5 | Describe participant sampling: Was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected. | 87.5 | 12.5
 | 6 | Describe data collection: Was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)? | 100 | 0
Test methods | 7 | Describe the reference standard and its rationale. | 45.83 | 54.16
 | 8 | Describe technical specifications of material and methods involved including how and when measurements were taken, and/or cite references for index tests and reference standard. | 91.6 | 8.33
 | 9 | Describe the number, training and expertise of the persons executing and reading the index tests and the reference standard. | 16.66 | 83.33
 | 10 | Describe whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test and describe any other clinical information available to the readers. | 25 | 75
Statistical methods | 11 | Describe methods for calculating or comparing measures of diagnostic accuracy, and the statistical methods used to quantify uncertainty (e.g. 95% confidence intervals). | 87.5 | 12.5
RESULTS | | | |
Participants | 12 | Report when study was done, including beginning and ending dates of recruitment. | 0 | 100
 | 13 | Report clinical and demographic characteristics of the study population (e.g. age, sex, spectrum of presenting symptoms, comorbidity, current treatments, recruitment centers). | 29.16 | 70.83
 | 14 | Report the number of participants satisfying the criteria for inclusion that did or did not undergo the index tests and/or the reference standard; describe why participants failed to receive either test (a flow diagram is strongly recommended). | 8.33 | 91.66
Test results | 15 | Report time interval from the index tests to the reference standard, and any treatment administered between. | 25 | 75
 | 16 | Report distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition. | 0 | 100
 | 17 | Report a cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of the test results by the results of the reference standard. | 91.66 | 8.33
Estimates | 18 | Report estimates of diagnostic accuracy and measures of statistical uncertainty (e.g. 95% confidence intervals). | 8.1 | 91.9
 | 19 | Report how indeterminate results, missing responses and outliers of the index tests were handled. | 20.83 | 79.16
 | 20 | Report estimates of variability of diagnostic accuracy between subgroups of participants, readers or centers, if done. | 75 | 25
 | 21 | Report estimates of test reproducibility, if done. | 45.83 | 54.16
DISCUSSION | 22 | Discuss the clinical applicability of the study findings. | 100 | 0
According to Table III, for the CONSORT checklist, 88.88% of the studies identified the trial as randomized in the title, and 100% stated the eligibility criteria for participants. Unfortunately, no study reported how the sample size was determined, and only 44.44% of the studies stated whether blinding had been performed.
Table III: Results of CONSORT checklist
Section/Topic | Item No | Checklist item | Mention (%) | Not mention (%)
Title and abstract | 1a | Identification as a randomised trial in the title | 88.88 | 11.11
 | 1b | Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts) | 100 | 0
Introduction | | | |
Background and objectives | 2a | Scientific background and explanation of rationale | 100 | 0
 | 2b | Specific objectives or hypotheses | 88.88 | 11.11
Methods | | | |
Trial design | 3a | Description of trial design (such as parallel, factorial) including allocation ratio | 77.77 | 22.22
 | 3b | Important changes to methods after trial commencement (such as eligibility criteria), with reasons | 11.11 | 88.88
Participants | 4a | Eligibility criteria for participants | 100 | 0
 | 4b | Settings and locations where the data were collected | 100 | 0
Interventions | 5 | The interventions for each group with sufficient details to allow replication, including how and when they were actually administered | 66.66 | 33.33
Outcomes | 6a | Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed | 0 | 100
 | 6b | Any changes to trial outcomes after the trial commenced, with reasons | 0 | 100
Sample size | 7a | How sample size was determined | 0 | 100
 | 7b | When applicable, explanation of any interim analyses and stopping guidelines | 0 | 100
Randomisation | | | |
Sequence generation | 8a | Method used to generate the random allocation sequence | 0 | 100
 | 8b | Type of randomisation; details of any restriction (such as blocking and block size) | 44.44 | 55.55
Allocation concealment mechanism | 9 | Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned | 33.33 | 66.66
Implementation | 10 | Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions | 0 | 100
Blinding | 11a | If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how | 44.44 | 55.55
 | 11b | If relevant, description of the similarity of interventions | 0 | 100
Statistical methods | 12a | Statistical methods used to compare groups for primary and secondary outcomes | 100 | 0
 | 12b | Methods for additional analyses, such as subgroup analyses and adjusted analyses | 0 | 100
Results | | | |
Participant flow (a diagram is strongly recommended) | 13a | For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome | 55.55 | 44.44
 | 13b | For each group, losses and exclusions after randomisation, together with reasons | 22.22 | 77.77
Recruitment | 14a | Dates defining the periods of recruitment and follow-up | 0 | 100
 | 14b | Why the trial ended or was stopped | 0 | 100
Baseline data | 15 | A table showing baseline demographic and clinical characteristics for each group | 0 | 100
Numbers analysed | 16 | For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups | 44.44 | 55.55
Outcomes and estimation | 17a | For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) | 77.77 | 22.22
 | 17b | For binary outcomes, presentation of both absolute and relative effect sizes is recommended | 0 | 100
Ancillary analyses | 18 | Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory | 0 | 100
Harms | 19 | All important harms or unintended effects in each group (for specific guidance see CONSORT for harms) | 0 | 100
Discussion | | | |
Limitations | 20 | Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses | 22.22 | 77.77
Generalisability | 21 | Generalisability (external validity, applicability) of the trial findings | 66.66 | 33.33
Interpretation | 22 | Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence | 100 | 0
Other information | | | |
Registration | 23 | Registration number and name of trial registry | 0 | 100
Protocol | 24 | Where the full trial protocol can be accessed, if available | 0 | 100
Funding | 25 | Sources of funding and other support (such as supply of drugs), role of funders | 33.33 | 66.66
According to Table IV, for the ARRIVE checklist, 66.66% of the studies addressed the ethical aspects of the animal research. All studies described the details and characteristics of the animals used. Unfortunately, none of the studies mentioned how the animals were selected.
Table IV: Results of ARRIVE checklist
Section | Item No | Recommendation | Mention (%) | Not mention (%)
Title | 1 | Provide as accurate and concise a description of the content of the article as possible. | 100 | 0
Abstract | 2 | Provide an accurate summary of the background, research objectives, including details of the species or strain of animal used, key methods, principal findings and conclusions of the study. | 100 | 0
Introduction | | | |
Background | 3 | a. Include sufficient scientific background (including relevant references to previous work) to understand the motivation and context for the study, and explain the experimental approach and rationale. | 100 | 0
 | | b. Explain how and why the animal species and model being used can address the scientific objectives and, where appropriate, the study’s relevance to human biology. | 33.33 | 66.66
Objectives | 4 | Clearly describe the primary and any secondary objectives of the study, or specific hypotheses being tested. | 66.66 | 33.33
Methods | | | |
Ethical statement | 5 | Indicate the nature of the ethical review permissions, relevant licences (e.g. Animal [Scientific Procedures] Act 1986), and national or institutional guidelines for the care and use of animals, that cover the research. | 66.66 | 33.33
Study design | 6 | For each experiment, give brief details of the study design including: a. The number of experimental and control groups. | 66.66 | 33.33
 | | b. Any steps taken to minimise the effects of subjective bias when allocating animals to treatment (e.g. randomisation procedure) and when assessing results (e.g. if done, describe who was blinded and when). | 33.33 | 66.66
 | | c. The experimental unit (e.g. a single animal, group or cage of animals). A time-line diagram or flow chart can be useful to illustrate how complex study designs were carried out. | 66.66 | 33.33
Experimental procedures | 7 | For each experiment and each experimental group, including controls, provide precise details of all procedures carried out. For example: a. How (e.g. drug formulation and dose, site and route of administration, anaesthesia and analgesia used [including monitoring], surgical procedure, method of euthanasia). Provide details of any specialist equipment used, including supplier(s). | 100 | 0
 | | b. When (e.g. time of day). | 0 | 100
 | | c. Where (e.g. home cage, laboratory, water maze). | 33.33 | 66.66
 | | d. Why (e.g. rationale for choice of specific anaesthetic, route of administration, drug dose used). | 0 | 100
Experimental animals | 8 | a. Provide details of the animals used, including species, strain, sex, developmental stage (e.g. mean or median age plus age range) and weight (e.g. mean or median weight plus weight range). | 100 | 0
 | | b. Provide further relevant information such as the source of animals, international strain nomenclature, genetic modification status (e.g. knock-out or transgenic), genotype, health/immune status, drug or test naïve, previous procedures, etc. | 0 | 100
Housing and husbandry | 9 | Provide details of: a. Housing (type of facility e.g. specific pathogen free [SPF]; type of cage or housing; bedding material; number of cage companions; tank shape and material etc. for fish). | 0 | 100
 | | b. Husbandry conditions (e.g. breeding programme, light/dark cycle, temperature, quality of water etc. for fish, type of food, access to food and water, environmental enrichment). | 0 | 100
 | | c. Welfare-related assessments and interventions that were carried out prior to, during, or after the experiment. | 0 | 100
Sample size | 10 | a. Specify the total number of animals used in each experiment, and the number of animals in each experimental group. | 100 | 0
 | | b. Explain how the number of animals was arrived at. Provide details of any sample size calculation used. | 0 | 100
 | | c. Indicate the number of independent replications of each experiment, if relevant. | 0 | 100
Allocating animals to experimental groups | 11 | a. Give full details of how animals were allocated to experimental groups, including randomisation or matching if done. | 0 | 100
 | | b. Describe the order in which the animals in the different experimental groups were treated and assessed. | 33.33 | 66.66
Experimental outcomes | 12 | Clearly define the primary and secondary experimental outcomes assessed (e.g. cell death, molecular markers, behavioural changes). | 100 | 0
Statistical methods | 13 | a. Provide details of the statistical methods used for each analysis. | 100 | 0
 | | b. Specify the unit of analysis for each dataset (e.g. single animal, group of animals, single neuron). | 0 | 100
 | | c. Describe any methods used to assess whether the data met the assumptions of the statistical approach. | 0 | 100
Results | | | |
Baseline data | 14 | For each experimental group, report relevant characteristics and health status of animals (e.g. weight, microbiological status, and drug or test naïve) prior to treatment or testing. (This information can often be tabulated). | 66.66 | 33.33
Numbers analysed | 15 | a. Report the number of animals in each group included in each analysis. Report absolute numbers (e.g. 10/20, not 50%†). | 33.33 | 66.66
 | | b. If any animals or data were not included in the analysis, explain why. | 0 | 100
Outcomes and estimation | 16 | Report the results for each analysis carried out, with a measure of precision (e.g. standard error or confidence interval). | 100 | 0
Adverse events | 17 | a. Give details of all important adverse events in each experimental group. | 0 | 100
 | | b. Describe any modifications to the experimental protocols made to reduce adverse events. | 0 | 100
Discussion | | | |
Interpretation/scientific implications | 18 | a. Interpret the results, taking into account the study objectives and hypotheses, current theory and other relevant studies in the literature. | 100 | 0
 | | b. Comment on the study limitations including any potential sources of bias, any limitations of the animal model, and the imprecision associated with the results†. | 33.33 | 66.66
 | | c. Describe any implications of your experimental methods or findings for the replacement, refinement or reduction (the 3Rs) of the use of animals in research. | 33.33 | 66.66
Generalisability/translation | 19 | Comment on whether, and how, the findings of this study are likely to translate to other species or systems, including any relevance to human biology. | 33.33 | 66.66
Funding | 20 | List all funding sources (including grant number) and the role of the funder(s) in the study. | 100 | 0
According to Table V, for the PRISMA checklist, which included only one study, features such as the PICO elements of the study and the duration of follow-up were identified. Unfortunately, a risk of bias assessment was not performed in this study. The limitations of the study and a general interpretation of the evidence were mentioned.
Table V: Results of PRISMA checklist
Section/topic | Item # | Checklist item | Reported (Y/N)
TITLE | | |
Title | 1 | Identify the report as a systematic review, meta-analysis, or both. | Y
ABSTRACT | | |
Structured summary | 2 | Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number. | Y
INTRODUCTION | | |
Rationale | 3 | Describe the rationale for the review in the context of what is already known. | Y
Objectives | 4 | Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS). | Y
METHODS | | |
Protocol and registration | 5 | Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number. | N
Eligibility criteria | 6 | Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale. | Y
Information sources | 7 | Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched. | N
Search | 8 | Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated. | Y
Study selection | 9 | State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis). | Y
Data collection process | 10 | Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators. | Y
Data items | 11 | List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made. | Y
Risk of bias in individual studies | 12 | Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis. | N
Summary measures | 13 | State the principal summary measures (e.g., risk ratio, difference in means). | Y
Synthesis of results | 14 | Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis. | Y
Discussion
Epidemiological studies are a primary source of information for public health in society. Therefore, it is important that the quality of these studies is high enough for them to be cited in health policy. Critical appraisal uses defined standards and criteria to evaluate the quality of epidemiological studies effectively. Given the scarcity of studies performed under critical appraisal standards, particularly in Iran, such studies are a necessity.
Ninety percent of the observational studies and 95% of the diagnostic studies explained the characteristics of the participants, the inclusion and exclusion criteria, and the methods of following up the samples; these are considered strengths of such articles. Although 86.36% of the observational studies used a standard measurement criterion for assessing their outcomes, many of them did not accurately describe the name and classification of the measurement procedure. Only 34.1% of the observational articles described the measures taken to address potential sources of bias.
The articles generally did not clarify how much of the implementation and testing process was performed blinded, or how many individuals were involved; based on the reported data, only 44.44% of the clinical studies stated that blinding was performed. Attention to confidence intervals, which convey the confidence level and range of the estimates, is another important point; they were reported in only 8.1% of the diagnostic articles. Many studies did not report unbiased random sampling: although 33.33% of the clinical papers mentioned concealment from researchers and participants, most articles were limited to the term "random sampling" and did not clearly explain how it was carried out.
Since no critical appraisal of periodontal studies had previously been carried out, it was not possible to compare the present results with a similar study.
In this regard, Moskouchi compiled and categorized the articles indexed by Iranian dental schools in the four databases mentioned above (Web of Science, Scopus, PubMed, IranMedex). In 2003, Mahshid and Ansari examined the role of evidence-based dentistry in the conduct and presentation of research papers. Their study indicated that the scientific and clinical application of study results depends on the strength of the evidence, or the ranking of the research information. Only articles that have been examined in long-term clinical trials can influence scientific and clinical judgment; such papers are subsequently reviewed and evaluated on a larger scale in systematic reviews (overviews, meta-analyses) (5).
Conclusion
The purpose of the current study was to critically evaluate studies conducted in the Department of Periodontics of Mashhad School of Dentistry. This evaluation, performed using the relevant checklists, showed that most checklist items were observed. However, the studies also had a number of weaknesses, such as insufficient measures to reduce bias, lack of blinding, and failure to report confidence intervals.
The main conclusion of this study is that many of the evaluated studies were conducted in line with the relevant checklists and met the required quality. In future studies, researchers should adopt a standard checklist from the initial stage of study design and prepare their methodology in a principled, scientific way; in this way, reliable results can be obtained regarding the prevalence and occurrence of various diseases. In addition, annual evaluation of methodology in reputable Iranian journals would improve quality assessment and the peer-review process. This would make the results of such studies more reliable for future researchers and for health policymakers in community health planning.
Conflict of Interest
The authors declare that there was no conflict of interest in this study.
Acknowledgment
The authors would like to acknowledge the Vice-Chancellor for Research of Mashhad University of Medical Sciences.