About the ethics review process

Our Human Research Ethics Committee (HREC) reviews proposed research projects in our hospitals and health service. We can also review ethics applications for research outside of Metro South Health.

When you apply for ethics approval, we’ll assess your project against the National Statement on Ethical Conduct in Human Research.

You'll need to be able to show that:

  • your research has scientific merit
  • any benefits outweigh any associated risks
  • the people who’ll work on your project are properly qualified
  • you’ll treat the participants with respect, especially vulnerable groups
  • you’ve considered the ethical issues associated with the methodology you'll use.

Watch our latest video about the ethics review process.

Navigating ethics in research: tips and tricks for your ethics application

Good afternoon, everyone, and welcome to today's research education session. My name is Tarren Schneider, and I’m the Research Project Coordinator for Metro South Addiction and Mental Health Services.

First, I’d like to acknowledge the Yuggera and Turrbal peoples as the traditional custodians of the land upon which we meet today in Brisbane, and I pay respect to Elders past, present, and emerging. We have one presentation today, and there will be time for questions at the end. The session will be recorded and made available on the Metro South Research website. There’s also a feedback survey linked in the chat, so we’d appreciate it if you could fill it out for us.

Our presenter today is Dr Michelle Delaney, Acting Deputy Chair of Metro South Health HREC. Dr Delaney has a PhD in psychology and 25 years of research experience, including academic roles at Australian Catholic University, the University of Queensland, and the University of Sheffield. She has published in the fields of developmental, clinical, and health psychology and has secured over $2 million in competitive grant funding throughout her career. She is currently a Senior Research Development Officer at West Moreton Health and an Adjunct Research Fellow at the PA Southside Clinical Unit. Dr Delaney is also the Deputy Chair of the West Moreton Health Human Research Ethics Committee and Acting Deputy Chair of Metro South Health HREC.

Dr Delaney believes that ethics applications and statistics should not be daunting or dreaded. She also has a fondness for flat-faced and so-called “ugly” dogs, and she’ll tie this into her discussion of statistical power today. Dr Delaney will be presenting on navigating research ethics, so I’ll hand over to her now.

Thank you, Tarren. As mentioned, today we’ll be covering two key topics: the importance of sample size and statistical power in research, and how to write a high-quality ethics application that avoids common mistakes. The first topic will take up the majority of the session, and we’ll get into ethics applications in the second half.

I’d also like to acknowledge and respect the traditional custodians of the lands and waters on which we are meeting today, and to pay respect to Elders past, present, and emerging. I also want to recognise those with lived experience of mental illness and those who provide care to people with mental illness.

As Tarren mentioned, I work across several areas, including research and ethics. To start, let’s talk about statistical power and sample size. I’d love to get a sense of how familiar everyone is with calculating statistical power. Could you please comment in the chat whether you consider yourself a novice, intermediate, or expert? Don’t be shy!

Great! It looks like we have a range of experience levels, with most people identifying as novices. That’s perfect because today’s session is pitched at an introductory level.

So, why is sample size important? It impacts your hypothesis and study design. Increasingly, journals require a justification of sample size. There are several ways to calculate power, and it’s not a one-size-fits-all approach. However, using an incorrect sample size can lead to inaccurate results, wasting time, funding, and participants' efforts. I’ll guide you through why sample size matters and how it relates to effect sizes and statistical significance.

If you take away just one thing today, I hope it’s a better understanding of the importance of power. If you also feel confident enough to start calculating power for your own studies, that’s a bonus!

For those of you who’ve completed an undergraduate psychology degree, this might remind you of basic stats lectures. But don’t worry if you haven’t—this will be a back-to-basics session.

Your hypothesis is essentially your best guess as to what the research results will be, based on existing literature. We test this hypothesis using the data we collect from our sample, which represents the wider population. Since we can’t study every individual in a population, we use a sample to draw conclusions.

You may have heard of the null and alternative hypotheses. The null hypothesis states there’s no statistically significant relationship between variables. For example, “dogs show no preference for food based on shape.” The alternative hypothesis predicts a significant relationship, like “dogs prefer food shaped like sticks over other shapes.”

Before we get into hypothesis testing, we need to cover a few basics. Let’s start with the concept of alpha. In statistics, the alpha level represents the threshold for determining statistical significance. Typically, researchers use an alpha of 0.05, meaning we’re willing to take a 5% chance that we’ll incorrectly reject the null hypothesis. However, this can vary depending on the type of study. For example, a drug trial might use a stricter alpha level of 0.001 to minimise the risk of false conclusions about the drug’s effectiveness.

This leads us to the potential errors in hypothesis testing. A Type 1 error occurs when we falsely conclude that there is an effect when there isn’t (a false positive). A Type 2 error occurs when we fail to reject a false null hypothesis, meaning we miss a real effect (a false negative). The most common cause of Type 2 errors is a sample size that is too small, especially when the effect size is also small. Both types of errors can skew your results, so careful planning of your study’s sample size is crucial.

A fun analogy for these concepts is the old fable of the boy who cried wolf. The boy falsely raises the alarm about a wolf several times, and the villagers believe him: that’s a Type 1 error, acting on an effect that isn’t there. When a real wolf finally does appear, the villagers dismiss it as just another false alarm and do nothing, so the wolf eats all the sheep: that’s a Type 2 error, missing an effect that really is there. We can summarise these types of errors in a table.
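The table being described is the standard two-by-two summary of hypothesis-testing outcomes:

                      Null is actually true      Null is actually false
  Reject the null     Type 1 error               Correct decision
                      (false positive)           (power)
  Fail to reject      Correct decision           Type 2 error
                                                 (false negative)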

This links to power. Power is the probability of rejecting a false null hypothesis. A simpler way to put this is that we are correctly identifying a significant relationship or difference. Power is calculated as 1 minus the Type 2 error probability, with 80% power being quite common, meaning that 20% of the time we might miss a real difference.

When powering a study, we focus on our primary outcomes, typically just one or two, without overcomplicating things. Secondary outcomes aren’t powered. To calculate power, understanding effect sizes is crucial. Effect sizes tell us the strength or importance of the relationship between variables or the difference between groups; they measure the practical significance of a research outcome.

To contrast with statistical significance, p-values tell us if an effect exists, but effect sizes show how meaningful that effect is. Statistical significance alone can be misleading, especially as increasing the sample size can make even very small effects statistically significant. However, these may not be clinically meaningful. Effect sizes are valuable because they aren’t influenced by sample size.

For example, let’s consider a study comparing two depression interventions. With a large sample size—say, 10,000 participants in both the control and intervention groups—you might find a statistically significant result, but a difference in depression scores of 9.8 (control) and 9.5 (intervention) is not practically significant. In this case, effect sizes help highlight that such a small difference isn't clinically meaningful.
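To make this concrete, here’s a minimal Python sketch (not from the talk; the standard deviation of 4 is an assumption added for illustration). With 10,000 participants per group, a 0.3-point difference is very likely to come out statistically significant even though Cohen’s d is only about 0.075:

```python
# Hypothetical illustration: a tiny mean difference becomes statistically
# significant at large n, while the effect size stays negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=9.8, scale=4.0, size=10_000)       # assumed SD of 4
intervention = rng.normal(loc=9.5, scale=4.0, size=10_000)

t_stat, p_value = stats.ttest_ind(control, intervention)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((control.var(ddof=1) + intervention.var(ddof=1)) / 2)
d = (control.mean() - intervention.mean()) / pooled_sd

print(f"p = {p_value:.4f}, Cohen's d = {d:.3f}")  # p very likely < .05; d ~ 0.075
```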

However, small effect sizes can sometimes be important. For instance, during the COVID-19 pandemic, population studies on mental health showed small effect sizes. Yet, even a small shift in mental health scores can indicate a large increase in actual cases when scaled up to a population level.

Conventional benchmarks for common effect size measures, like Cohen's d, classify effects as small (0.2), medium (0.5), and large (0.8 or greater). The magnitude of Cohen's d can range from 0 to infinity, where a larger value indicates a stronger effect. Pearson's r works differently, ranging from -1 to 1, with values closer to these extremes indicating a stronger positive or negative relationship.

For example, comparing the height of Chihuahuas and Bull Arabs would show a large effect size because breed is a good predictor of height. In contrast, comparing Boston Terriers and Pugs, which have similar heights, would show a small effect size, as breed isn’t a strong predictor of height here.

Understanding effect sizes is important when planning a study. Larger effect sizes allow for a smaller sample size to detect a significant effect, while smaller effect sizes require a larger sample size. Power is the probability of detecting a true effect, and a more powerful test reduces the likelihood of a Type 2 error. Without sufficient power, even significant effects might go undetected.

Performing a power analysis before starting a study helps determine the sample size needed to achieve the desired power level. This ensures the study is adequately powered to detect meaningful effects without wasting resources or missing important findings.
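As a sketch of what an a priori calculation can look like in code (a hypothetical example using Python’s statsmodels package, which the talk doesn’t cover; G*Power is discussed below), suppose we assume a medium effect of d = 0.5 for a two-group comparison:

```python
# Hypothetical a priori power analysis for a two-group comparison
# (independent-samples t-test) using statsmodels.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # 5% Type 1 error threshold
    power=0.80,       # 80% chance of detecting a true effect
)
print(round(n_per_group))  # about 64 participants per group for these inputs
```

Because the required sample size scales roughly with 1 over the square of the effect size, halving the assumed effect to 0.25 roughly quadruples the participants needed per group.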

Effect sizes are also important after the study. Once data is collected, you can calculate and report the effect sizes in your paper, which helps others understand the practical significance of your findings. Including effect sizes allows your research to contribute to meta-analyses, helping to advance the field as a whole.

In summary, it’s vital to calculate an appropriate sample size for your study, and there are different ways to justify this. Ideally, you measure the entire population, but that’s rarely possible. Instead, an a priori power analysis, using predetermined effect sizes or those from similar studies, is the next best option. Just ensure that the previous studies or meta-analyses you reference are comparable in terms of population, study design, and interventions.

When determining sample size, it needs to be credible and evidence-based; that's essential. There are rules of thumb in statistics regarding the number of participants needed for specific tests. For example, in regression analysis, a minimum of 10 participants per predictor is recommended, although 20 is better: a model with five predictors would need at least 50 participants, and ideally 100. These guidelines can provide a starting point for sample size justification.

Another way to justify sample size is to consider resourcing. Often, we're constrained by available resources, and limited resources are a common reason for choosing smaller sample sizes. If a study is unfunded or a small project, a smaller sample size is inevitable. Researchers must balance the cost of data collection—whether in time, money, or clinical workload—against the value of the information being gathered. This justification focuses on what can realistically be done within the time and budget available.

However, we must also evaluate whether collecting a smaller sample size is worthwhile. We want to avoid conducting research with limited data that isn't meaningful. Even small amounts of data can reduce error rates and inform decision-making better than having no data at all.

In some cases, a small sample size is unavoidable due to the rarity of a condition. For example, Goodpasture syndrome, a rare autoimmune disease, affects fewer than one in a million people. In Australia, this might mean you have access to only 26 potential participants. In such cases, the small sample size is all that's available, and the research proceeds with that limitation in mind.

On the other hand, having no justification for a sample size is not acceptable. An underpowered study that fails to detect a significant result when one exists wastes time and resources. It also makes it challenging to publish the study. We aim to avoid this.

For qualitative studies, the process is simpler. You collect data until you reach saturation—meaning new data replicates what has already been observed, and no new information emerges. The goal is not to achieve a representative sample size but to reach saturation through a diverse enough sample.

For quantitative studies, there are various software tools available to calculate power and sample size. The most commonly used is G*Power, which is free to download. It calculates the sample size based on the alpha level, power, and effect size you choose. For example, an F-test comparing five groups with an effect size of 0.326, an alpha of 0.05, and 95% power would require a total sample size of 180 participants.
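If you prefer to script the calculation rather than use G*Power’s interface, a hypothetical equivalent in Python’s statsmodels (an assumption on my part, not shown in the talk) looks like this, where the effect size is Cohen’s f:

```python
# Hypothetical cross-check of the G*Power example in Python (statsmodels):
# a priori sample size for a one-way ANOVA (F-test) across five groups.
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.326,  # Cohen's f, as quoted above
    alpha=0.05,         # significance threshold
    power=0.95,         # 95% chance of detecting a true effect
    k_groups=5,         # number of groups being compared
)
print(round(total_n))  # total N across all groups; should land near 180
```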

The Human Research Ethics Committee (HREC) wants to ensure that your study has enough participants to detect a significant effect, making the research worthwhile. A good write-up for sample size justification would include details of how G*Power was used, the chosen power level, alpha, effect size, and total sample size. A poor example would simply state the number of participants without explanation.

If you're unsure about calculating power and sample size, or if you're time-poor, Metro South Health offers a Biostatistics service. You can book an appointment to discuss sample size, experimental design, or statistical analysis, ensuring your study is set up correctly from the start.

Now, moving on to tips and tricks for ethics applications and study protocols. This will focus on common pitfalls rather than a broad overview, with examples of what works and what doesn't.

Firstly, always begin with your protocol. This should be a standalone document that anyone could use to run the study if you were unavailable. It must include enough detail for replication and adhere to good research practice.

For retrospective studies, you'll be looking at existing data, while prospective studies involve collecting new data. Your protocol should clearly specify your study type, whether it's retrospective or prospective, and detail your research settings and sites.

When it comes to recruitment, explain exactly how participants will be contacted—whether by email, phone, or another method—and who will be responsible for recruitment. If it's a retrospective study, state where you're getting your data, such as from hospital records or databases.

You also need to provide inclusion and exclusion criteria, and if it's a multi-site study, ensure you have an investigator at each site. Without the correct team, accessing data from multiple sites may be problematic.

Finally, for prospective studies, include a participant information sheet and consent form. These documents must provide enough information for potential participants to make an informed decision about joining the study, particularly if the research involves sensitive topics. Ensure the information sheet comes before the consent form in your submission.

Remember to complete your Good Clinical Practice (GCP) training and any required research integrity modules before submitting your study for ethics approval.

PhD students cannot be the Principal Investigator (PI) on a project. The PI needs to have sufficient research expertise to manage the study. There might be cases where an overarching project has sub-studies, and a student could lead one of those if they have enough expertise. However, as a rule, the PI must have research experience.

With regard to waiver of consent, the criteria are drawn from the National Statement on Ethical Conduct in Human Research, and we evaluate responses to each point. All criteria, from A to I, must be satisfied for approval. I'll walk through a common issue: either waiver of consent is not mentioned at all, or only some criteria are addressed without specifying which ones. For example, a submission might say, "All data will be non-identifiable and stored on a secure Queensland Health server," which is good information but not enough for a waiver of consent. Another common mistake is listing the criteria and saying, "We've done that," without linking it specifically to the study. We need to see how your study addresses each criterion.

A good example would reference the National Statement. First, confirm that the activity aligns with Section 2.3.10. For instance, if the involvement in research carries no more than low risk, you could say something like, "This is a retrospective audit of routinely collected clinical data. There’s no potential for revealing any information the patients or clinical teams haven’t already been aware of." You also need to show that the benefits of the research justify any risks, and it's crucial that obtaining consent is impracticable. For example, if you have a large number of records or patients who are difficult to contact, that may justify a waiver. We also need to ensure there are no reasons to believe the participants would have objected to being included in the study.

For privacy, we need to know that data is non-identifiable, analysed securely, and managed according to Queensland Health guidelines. You should provide detailed information on how you’re protecting confidentiality, such as using password-protected spreadsheets or REDCap on a secure Queensland Health network. We need to know that identifiable data will be securely deleted after seven years, in line with guidelines.

Sometimes retrospective studies might uncover concerning data. If that happens, you need to have a plan for how to manage it. There’s also a need to consider whether the research could result in commercial exploitation, which is usually not a concern in our context, but it still needs to be stated.

For data collection, we need to know when, how, and what data you’ll be collecting. Is it a survey, an interview, or existing data? Will there be any data linkage? For data analysis, provide a brief plan of the statistical analysis, or consult with a biostatistician for assistance. If you’re doing a qualitative study, include your theoretical framework and analysis strategy.

We are tightening up on data management requirements in line with the updated National Statement. For retrospective studies, we need to know the source of your data (such as a registry or database) and which variables are being collected. Please avoid using vague terms like "such as"; instead, provide a complete list of variables. Additionally, we need specific dates for the data you're accessing, not just general date ranges.

Avoid using the term "de-identified" if possible. We need to know if the data is "re-identifiable," meaning it could be linked back to participants, or if it’s truly non-identifiable. Specify who will make the data non-identifiable and how this will be done. Ensure that only Metro South Health employees handle identifiable data, and store everything on a Queensland Health server. Data retention is typically for five to seven years, post-publication, and should be securely disposed of after that period.

Include a statement in your study protocol under ethical considerations. Even if you’re taking all the necessary precautions, there are still potential risks to privacy and confidentiality. Acknowledge these risks and explain how they will be minimised. We also want to know how you’ll disseminate research results. Whether the findings are shared within departmental meetings, at conferences, or through publications, ensure no individually identifiable data is shared.

Remember to evaluate the level of risk in your research. Low-risk research means the only foreseeable risk is discomfort, while higher-risk research involves more than discomfort. For example, accessing mental health data retrospectively might be low risk, but asking patients about their mental health could pose greater risks.

Ultimately, having a detailed protocol makes your Human Research Ethics Application much easier. A thorough protocol means you can simply copy the relevant information into the ethics application.

One common issue with the ethics application is question 1.19, which asks about participants in dependent or unequal relationships. Sometimes people answer this as "no," but if patients are recruited by their clinician, or if the PI is interviewing their team, this needs to be flagged as an issue.

We’re not here to block your research; we want to enable you to do high-quality work. The ethics team is here to protect participants and the health service, but also to support you. Feel free to contact us before submitting your application. It’s much easier to resolve issues at the front end, before submission.

Elizabeth Bais: Thank you, Michelle. That was a great presentation. It’s a good reminder to go back to basics, especially when guiding our novice researchers.

Elizabeth Bais: I have a question about novice researchers being PIs. From an ethics perspective, do you prefer that the PI is someone with experience, while the novice researcher is listed as an Associate Investigator (AI)?

Michelle Delaney: Great question. Generally, we prefer the PI to be someone with expertise, especially for ethics applications. It’s important that the lead researcher can navigate all aspects of the study, though we do encourage novice researchers to be actively involved.

Lower risk projects

Lower risk projects, such as clinical audits or quality assurance activities, may not need a full ethics review.

Find out if your project needs a full review by reading the:

If your project doesn’t need a full review, you can use our Exemption: project description template [DOCX 284.16 KB]. You’ll also need to submit the QLD Exemption Form in the Ethical Review Manager (ERM).

There's no closing date for exemption applications. You can apply at any time, but you must do so before starting your clinical audit or quality assurance activity. We do not provide retrospective approvals, so if you think you might like to publish, whether in a journal or at a conference, make sure you submit an exemption for HREC consideration before you start the work.

How to apply for an ethics review

Step 1: Prepare your research protocol

Before you can apply for an ethics review you'll need to create a research protocol.

Use our templates to help prepare your research protocol. These are a guide, so some sections may not be relevant to your project and can be changed or deleted.

Step 2: Know your project's level of risk

The next step is to work out your project's level of risk. There are 2 types of research project:

  • lower risk research – when the only risk to a person is discomfort
  • higher risk research – when the risk to a person is more serious than discomfort.

The following documents and training will help you understand, assess and plan for risks in your research project.

Include your project's overall risk rating in your research protocol.

Our HREC can help you understand your project's level of risk. Email MSH-Ethics@health.qld.gov.au and attach a copy of your draft research protocol.

Step 3: Prepare your supporting documents

Our Ethical review guidance document and checklist [DOCX 593.7 KB] explains what supporting documents to submit when you apply for an ethics review.

Our HREC prefers our researchers use InFORMed participant information and consent form (PICF) templates.

Requirements for supporting documents are different for new research projects, retrospective reviews and clinical trials.

Step 4: Submit your ethics application

You must use the online Ethical Review Manager (ERM) to apply for an ethics review. If you're doing research in more than one place (multi-site research), you only need to submit one form.

Read the ERM how-to guides. If you need support using Ethical Review Manager (ERM), email ERMhelpdesksupport@health.qld.gov.au or call 07 3082 0629. You can also check out the ERM help and frequently asked questions on the QLD ERM website.

After we've reviewed your application, we'll send you an email to let you know the outcome.

Application closing dates

Exemption and lower risk applications

You can apply for an exemption from ethics review or submit a lower risk application at any time. There's no closing date.

Higher risk applications

Our HREC meets on the first Tuesday of each month. You must submit your forms and supporting documents by 12 noon on a closing date (see below).

If you miss the deadline, you can use the Queensland HREC submission dates database to find another certified HREC to review your application.

2024 HREC dates

Closing dates          Meeting dates
14 November 2024       3 December 2024

2025 HREC dates

Closing dates          Meeting dates
16 January 2025        4 February 2025
13 February 2025       4 March 2025
13 March 2025          1 April 2025
17 April 2025          6 May 2025
15 May 2025            3 June 2025
12 June 2025           1 July 2025
17 July 2025           5 August 2025
14 August 2025         2 September 2025
18 September 2025      7 October 2025
16 October 2025        4 November 2025
13 November 2025       2 December 2025

Fees

You may need to pay an administration fee [PDF 166.05 KB] when you submit your application. Make sure you include any fees as part of your research budget. Invoice details must be provided on the administration fees form [DOCX 684.16 KB].

Apply for a site specific assessment (SSA)

You can complete your ethics application at the same time as your SSA in ERM. After you apply for an ethics review, you'll need to submit your SSA application form to apply for governance authorisation.

Contact us

If you have any questions about the ethics review process, you can contact us by emailing MSH-Ethics@health.qld.gov.au.

Last updated: November 2024