
Issues to consider about study design

Study design is an important aspect to consider when assessing if claims are supported by acceptable evidence. The evidence required for a therapeutic claim will depend on the specific claim made in the advertisement.

A well-conducted systematic review of relevant randomised controlled trials (RCTs) represents the highest level of evidence: it considers all studies on a given topic, follows a systematic and reproducible method, and is representative of all the evidence. Where a systematic review is unavailable, it is important that all relevant sources of evidence are considered, not a selective, unrepresentative sample.

Specific issues need to be considered when judging the quality of each study design. More information on assessing three common study designs can be found below:

The individual papers used within a systematic review cannot be included as additional evidence for your advertising claim.

Systematic reviews

Did the review address a clearly focused question?

A systematic review must address a clearly stated research question and should include a clear statement of the study population, intervention and outcome of interest. The best place to find this information is in the methods section of the paper and the abstract.

Did the authors look for the right type of papers?

The papers should address the review’s research question and use an appropriate study design, such as RCTs for evaluating interventions.

Did the authors include important relevant studies?

Which bibliographic databases did they use? Did they follow up articles identified from reference lists? Did they contact subject experts for suggestions? Was the search limited to English-language studies only? The more sources used to locate studies, the higher the quality of the review is likely to be.

Did the authors assess the quality of the papers?

The paper should include an assessment of the rigour of each study included in the systematic review using a standardised protocol such as PRISMA. Papers that do not include an assessment of the rigour of each study are not of sufficient quality to support an advertising claim.

If the results of the review were combined (meta-analysed), was it reasonable to do so?

Were the results of different studies similar? Differences between studies (heterogeneity) need to be discussed. Failure to do so indicates that the study is not of sufficient quality to support an advertising claim.
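As a minimal sketch of how heterogeneity is often quantified, the following computes Cochran’s Q and the I² statistic under a fixed-effect model; the effect estimates and standard errors are hypothetical, not drawn from any real review.

```python
# Hypothetical per-study effect estimates and standard errors,
# for illustration only.
effects = [0.30, 0.25, 0.90, 0.28]
ses = [0.10, 0.12, 0.15, 0.11]

# Inverse-variance weights and the fixed-effect pooled estimate.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I-squared: the share of variability attributable to heterogeneity
# rather than chance (conventionally, above 75% is 'considerable').
i_squared = max(0.0, (q - df) / q) * 100

print(f"Pooled effect: {pooled:.2f}, Q = {q:.1f}, I^2 = {i_squared:.0f}%")
```

In this example the third study is an outlier and I² comes out around 80 per cent, which is exactly the kind of heterogeneity a sound review must discuss before pooling.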

What were the overall results of the review?

The overall results should be clearly stated and relate to the research question. What are the results and how are they expressed?

How precise were the results?

Precision refers to how close repeated measurements are to one another. Results are typically reported with a 95 per cent confidence interval, which means that the reader can be 95 per cent confident that the true value lies between the upper and lower boundaries. The narrower the interval between the upper and lower boundaries, the more precise the result.
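To make the link between sample size and precision concrete, here is a minimal sketch using simulated data; the underlying mean of 5.0 and standard deviation of 2.0 are arbitrary assumptions.

```python
import math
import random
import statistics

random.seed(1)

def ci_95(sample):
    """Approximate 95% CI for the mean, using the normal critical value."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean - 1.96 * sem, mean + 1.96 * sem

# Larger samples give narrower (more precise) intervals around the
# same underlying mean.
for n in (20, 200, 2000):
    sample = [random.gauss(5.0, 2.0) for _ in range(n)]
    lo, hi = ci_95(sample)
    print(f"n = {n:>4}: 95% CI ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")
```

The interval narrows roughly with the square root of the sample size: tenfold more participants makes the interval about three times narrower.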

Were all the important outcomes considered?

Was there any other information that you would have liked to see?

Is the evidence clinically significant?

Statistical significance is not the same as clinical significance. You need to assess whether the intervention makes a difference in a clinical setting.
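A minimal simulation can make this distinction concrete. The blood pressure figures below are invented: a 0.5 mmHg reduction is assumed to be too small to matter clinically, yet with 50,000 participants per group it will generally be highly statistically significant.

```python
import math
import random
import statistics

random.seed(42)

# Simulated systolic blood pressure (mmHg): the treatment lowers the
# mean by only 0.5 mmHg.
n = 50_000
control = [random.gauss(140.0, 15.0) for _ in range(n)]
treatment = [random.gauss(139.5, 15.0) for _ in range(n)]

diff = statistics.mean(control) - statistics.mean(treatment)
se = math.sqrt(statistics.variance(control) / n
               + statistics.variance(treatment) / n)
z = diff / se

# Two-sided p-value from the normal approximation.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Mean difference: {diff:.2f} mmHg, z = {z:.1f}, p = {p:.2g}")
# A tiny p-value here does not make a 0.5 mmHg reduction clinically
# meaningful.
```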

Randomised controlled trials

Did the trial address a clearly focused issue?

The research question should include the population studied, the intervention and comparator groups, and the outcome of interest. The research question should be directly related to your advertising claim.

Was the assignment of participants to the intervention and control groups randomised?

  • Randomisation is used to reduce selection bias in intervention studies. How was it carried out? Computer-generated randomisation represents the gold standard; a simple version is sketched after this list. Other methods, such as alternate allocation to intervention and control groups, are less rigorous and are known as pseudo-randomisation.
  • The allocation sequence should have been concealed from the researchers and participants.
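The following is a minimal sketch of computer-generated randomisation; the participant IDs are hypothetical, and the seed is fixed only so the example is reproducible, not as a recommendation for real trials.

```python
import random

random.seed(2024)  # fixed seed so this illustration is reproducible

# Hypothetical participant IDs. A balanced allocation list is shuffled
# so that every ordering is equally likely (simple permuted allocation).
participants = [f"P{i:02d}" for i in range(1, 11)]
groups = ["intervention"] * 5 + ["control"] * 5
random.shuffle(groups)

for participant, group in zip(participants, groups):
    print(participant, "->", group)
```

In a real trial, the resulting allocation sequence would be generated and held by someone independent of recruitment so that it stays concealed from researchers and participants.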

Were all the participants who entered the trial accounted for at its conclusion?

  • Was the timeframe used appropriate to assess the effects of the intervention? There needs to be enough time for the potentially good and bad effects to reveal themselves. Was the trial stopped early?
  • Were participants analysed in the groups to which they were randomised? The term for this is intention to treat (ITT).

Were the participants and study personnel ‘blind’ to treatment?

To prevent bias, the researchers collecting and analysing the findings should not know which study participants underwent the intervention. In a ‘double-blinded’ study, participants also do not know whether they are receiving the treatment. Although this is the gold standard, it may not be feasible or ethical in some cases.

Were the characteristics of the participants in the intervention and control groups similar at the start of the study?

Characteristics such as age distribution, gender, or social class might affect the outcome. The only difference between the two groups should be that one received the intervention of interest and the other did not. If this is not the case, the study is not of sufficiently high quality to be used to support an advertising claim.

How large was the treatment effect?

  • Were all the research findings used in reaching the conclusion? Selecting results that support the study’s hypothesis while ignoring those that do not is known as ‘cherry-picking’. If the researchers ignored some of the findings in reaching their conclusions or did not explain the conflicting data, the study should not be used to make an advertising claim.
  • Did the authors find a difference between the intervention and control groups? Was the difference statistically significant? Common ways of expressing the size of the effect are sketched after this list.
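The sketch below computes three common effect measures from hypothetical trial counts; none of the numbers come from a real study.

```python
# Hypothetical trial counts, for illustration only.
events_treatment, n_treatment = 30, 1000  # events in the intervention group
events_control, n_control = 50, 1000      # events in the control group

risk_treatment = events_treatment / n_treatment  # 0.03
risk_control = events_control / n_control        # 0.05

relative_risk = risk_treatment / risk_control            # 0.60
absolute_risk_reduction = risk_control - risk_treatment  # 0.02
number_needed_to_treat = 1 / absolute_risk_reduction     # 50

print(f"Relative risk: {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat: {number_needed_to_treat:.0f}")
```

Note how a 40 per cent relative risk reduction corresponds to an absolute reduction of only two percentage points; both perspectives matter when judging how large the effect really is.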

How precise were the results?

Precision refers to how close repeated measurements are to one another. Results are reported with a 95 per cent confidence interval, which means that the reader can be 95 per cent confident that the true value lies between the upper and lower boundaries. The narrower the interval between the upper and lower boundaries, the more precise the result.

Were all the important outcomes considered?

Was there any other information that you would have liked to see?

Is the evidence clinically significant?

Statistical significance is not the same as clinical significance. You need to assess whether the intervention makes a difference in a clinical setting.

Cohort and case control studies

Did the study address a clearly focused issue?

The research question should include the population studied, the intervention and comparator groups, and the outcome of interest. For these study designs, the research question is often found at the end of the paper’s introduction. The research question should be directly related to your advertising claim.

Did the authors use an appropriate method to answer the question?

Cohort and case control study designs are rarely an appropriate method of assessing the efficacy of an intervention. These designs are hypothesis generating: they are used to explore whether there may be an association between an exposure and an outcome. Cohort studies are also used to examine the natural history of a condition or to determine its likely prognosis. In addition, both designs are highly susceptible to bias, chance and confounding.

Page reviewed 14/12/2020