Tackling experimental design in your funding proposal
In recent years, grant panels from many funders have placed increasing importance on the methodology and experimental design in applications. Well-designed and correctly analysed experiments not only reduce animal use but also increase the scientific validity of results.
To help applicants for NC3Rs funding identify and provide the detail needed in their application, we have gathered resources and advice from our funding and experimental design teams and from members of the NC3Rs Grant Assessment Panel. You will find a table listing all the resources referred to at the end of this post.
Good experimental design sets a solid foundation for a grant application and demonstrates to the Panel your ability to do rigorous science. We have identified a number of common pitfalls; use this post to make sure you have not fallen foul of any of them and give your proposal the best chance of success.
While many resources discussing experimental design refer to in vivo experiments, it is equally important for applicants planning in vitro experiments to demonstrate good experimental design. You need to consider your experimental unit and sample sizes, where you can apply blinding and randomisation, and what your controls will be. Our webinar with Dr Natasha Karp, Associate Director of Biostatistics at AstraZeneca, describes best practice in experimental design, including these concepts, and gives in vitro examples.
Another point to consider is that your in vitro experiment may not represent the entirety of the animal, disease or organ you are modelling. It is likely not supposed to, but it is important to describe how your in vitro system represents the aspects you are trying to model.
A large focus of our funding supports researchers developing and validating new 3Rs approaches. The experiments validating the approach are arguably the most important to be well designed, robust and reproducible as they are essential for building confidence in 3Rs approaches and encouraging uptake.
You must make sure your proposal sets out to answer a scientific question and that this is clearly stated for the Panel. Each objective and experiment throughout the application should contribute to answering this question. Reminders of the scientific question when describing experiments later in the proposal can also help demonstrate to the Panel how your research fits together.
Make sure your descriptions are concise and clear. Even if you include all the experimental and methodological details needed, the Panel cannot be certain they are present if your writing is difficult to understand. Our blog post titled Eleven ways your funding application could be failing can help.
Describe why your method is the most appropriate to use, as there are many ways you could test a hypothesis and many different experiments that may be relevant. Including these details in your application highlights its relevance to the Panel.
This is particularly key for proposals using animal cells or tissue. The Panel will want to know why human cells or tissue are not suitable. There are likely good reasons: perhaps the tissue is rare, or you are validating against an animal-based gold standard used in your field. Including these details in your application shows you have selected your methods carefully.
Be as detailed as you can about the methodology, making sure to include the outcomes you will be measuring and how each experiment will be analysed. Don’t forget as well to describe your control(s). It is possible these will vary between your objectives, in which case be clear about which controls and comparisons you are using in each. The MRC have provided some worked examples for different types of experiments, which may help formulate your writing.
The Experimental Design Assistant (EDA) can help ensure you have considered all experimental details. Any animal experiment can be represented as a machine-readable diagram in the system, and the EDA will then provide bespoke feedback on your plans via the ‘critique’ function. Feedback includes identifying missing information, helping to identify possible sources of bias and highlighting strategies that should be used to address them, such as randomisation and blinding. You can then amend your experimental plans and critique again, repeating the cycle of feedback and edits until you are happy with your design. The EDA can also recommend appropriate statistical analysis methods for your experiment. Once your design is complete, the EDA can produce a downloadable report containing key details about the internal validity of the experiment, which can be included in the experimental design and methodology appendix; your funder might also encourage submitting the EDA report as an appendix.
Experiments that are properly powered have the maximum chance of detecting a true effect. A formal method should be used wherever possible to determine sample size; typically, this will be a power calculation. At our workshop on experimental design for Panel members, Dr Kate Button discussed statistical power and the dangers of low-powered experiments, including missing real effects, incorrect estimates of effect size and the risk of wasting animals in inconclusive research. These are discussed in more detail on the ARRIVE website. Dr Simon Bate from Statistical Sciences, GSK, has also written guidance covering questions he regularly receives about sample sizes and power calculations.
A power calculation relies on having certain parameters for the experiment and uses these to calculate the number of samples or experimental units needed to detect a true effect. These parameters will be specific to your experiment and/or your field and tailored to the statistical analysis you will be performing. The parameters needed for a power calculation are discussed on the EDA website, which also has a sample size calculator for analyses using paired and unpaired t-tests. Often it is the effect size that requires the most input from the researcher. Professor Hazel Inskip describes in this video some key considerations for determining an effect size for a study.
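To make the arithmetic concrete, here is a minimal sketch of a sample size calculation for comparing two group means, using the standard normal approximation. The effect size, significance level and power shown are illustrative assumptions only; your own calculation should be tailored to your planned analysis, for example using the EDA's calculator or advice from a statistician.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sided comparison of
    two group means, using the normal approximation.
    effect_size is Cohen's d: (mean difference) / (pooled SD)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Illustrative values: a medium effect (d = 0.5), 5% significance, 80% power
print(n_per_group(0.5))  # → 63 per group (an exact t-test calculation gives 64)
```

Note how strongly the result depends on the effect size: halving d roughly quadruples the number of animals needed, which is why the effect size deserves the most careful justification.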
However, there are times when a power calculation is not appropriate, such as in a pilot study. It is still important in these instances to provide justification for your sample size. The number will depend on the specific objective of the study and may be based on operational capacity and constraints, such as the size of a litter or the amount of tissue that can be harvested per animal.
The statistical analysis you choose must be appropriate for the conclusions you are trying to draw from your experiment. This is best discussed with your local statistician early in your application process. The EDA can also provide advice on appropriate statistical analysis for your experiment.
One of the more common points raised by our Panel members is to consider how repeated measures factor into your experimental design and analysis. These involve multiple measures of the same outcome taken under different conditions or over time, such as in a longitudinal study. Repeated measures add a level of complexity to analysis as the responses measured in the same individual are related. Many statistical analyses assume measures are independent and so would not be appropriate for these designs.
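As a hypothetical illustration of handling this non-independence, the sketch below analyses before/after measurements taken on the same animals by computing within-animal differences (a paired analysis), rather than pooling all ten values as if they were independent. The data are invented for illustration only.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical before/after measurements on the same five animals
before = [10.1, 12.3, 9.8, 11.5, 10.9]
after  = [11.0, 13.1, 10.5, 12.6, 11.6]

# Because both measurements come from the same animal, analyse the
# within-animal differences rather than treating the ten values as
# independent observations.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic, df = n - 1
print(t)
```

More complex repeated-measures designs (e.g. several time points per animal) generally need mixed-effects or repeated-measures models; this is exactly the kind of design worth discussing with a statistician early on.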
You should also consider how variability or heterogeneity between animals or samples will be accounted for in your analysis, for example through a randomisation strategy. This is particularly important in both in vivo and ex vivo studies using animal or human tissue.
Failing to correctly identify the experimental unit can artificially inflate the sample size of your study and lead to incorrectly drawn conclusions. This is known as pseudo-replication.
The experimental unit is the entity that receives the treatment. For example, if your experiment uses slice cultures to identify changes in the brain after a mouse has been treated with a compound, the mouse, not the brain slice, is the experimental unit. Each slice in this set-up cannot be treated independently, since the treatment was administered to the animal, so using the brain slice as your experimental unit could lead to incorrect conclusions. Another common error concerns tanks or cages: if a treatment is added to the water of a tank of zebrafish, the experimental unit is the tank, not the individual fish.
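A small sketch of the brain-slice example above, with invented numbers: slice-level measurements are collapsed to one value per mouse before analysis, so the sample size used in the statistics matches the number of experimental units rather than the number of technical replicates.

```python
from statistics import mean

# Hypothetical slice-level measurements: each mouse (the experimental
# unit) contributes several brain slices (technical replicates).
slices_by_mouse = {
    "mouse_1": [4.2, 4.5, 4.1],
    "mouse_2": [5.0, 4.8, 5.2],
    "mouse_3": [3.9, 4.0, 4.3],
}

# Collapse technical replicates to one value per experimental unit;
# the sample size is the number of mice, not the number of slices.
per_mouse = {m: mean(vals) for m, vals in slices_by_mouse.items()}
n = len(per_mouse)
print(n)  # → 3 experimental units, not 9 pseudo-replicates
```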
The concept of experimental units is described further in a publication by Lazic et al in PLOS Biology, with examples for both in vivo and in vitro experiments.
Often when performing an experiment one expects a certain result, and these expectations can unintentionally influence the outcome. It is important to minimise the ways such biases can affect the experiment and to describe to the Panel how you will limit the impact of bias while designing, performing and analysing the experiment.
There are two main ways of avoiding bias the Panel will look for: randomisation and blinding. Randomisation ensures every experimental unit has an equal chance of receiving a treatment. The EDA website describes how experimental units can be allocated randomly and can also generate a randomisation sequence to share with colleagues to assist the process. Make sure to consider any nuisance variables such as marked differences in animals’ body weight or age, cage/tank grouping, timing of the experiment or sample position on a plate.
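As a hypothetical sketch of one way to handle a nuisance variable, the snippet below block-randomises treatments within each cage, so cage cannot be confounded with treatment. The cage layout and group names are invented for illustration, and a fixed seed makes the allocation reproducible so it can be shared with colleagues.

```python
import random

def block_randomise(units_by_cage, treatments, seed=0):
    """Randomly allocate treatments within each cage (block), so the
    nuisance variable 'cage' is balanced across treatment groups.
    Assumes each cage holds a multiple of len(treatments) animals."""
    rng = random.Random(seed)  # fixed seed: the allocation is reproducible
    allocation = {}
    for cage, units in units_by_cage.items():
        labels = treatments * (len(units) // len(treatments))
        rng.shuffle(labels)  # random order of treatments within the cage
        allocation.update(dict(zip(units, labels)))
    return allocation

# Hypothetical layout: two cages of four mice, two treatment groups
cages = {"cage_A": ["m1", "m2", "m3", "m4"],
         "cage_B": ["m5", "m6", "m7", "m8"]}
alloc = block_randomise(cages, ["control", "treated"])
print(alloc)  # each cage contains exactly two 'control' and two 'treated' mice
```

The same idea extends to other nuisance variables such as plate position or time of day; the EDA can generate and document a randomisation sequence for you.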
Blinding ensures investigators are unaware of which treatment group the animals or samples are assigned to, further reducing the chance of unintentional bias. Sometimes this is not possible at every stage of the experiment, so it is important to identify for the Panel when and how blinding will be achieved. The British Pharmacological Society has published a video explaining more about the concepts of blinding and how it reduces bias. The ARRIVE website also describes where blinding can be applied in an experiment, including examples of good practice from the literature.
For further information on the application process for NC3Rs funding schemes please refer to the NC3Rs Applicant and Grant Holder Handbook.
| Pitfall to avoid | Resources |
| --- | --- |
| You’ve assumed experimental design is only important for in vivo experiments. | Webinar: Best practice in experimental design (Dr Natasha Karp, Associate Director of Biostatistics, AstraZeneca) |
| Your scientific question is unclear or poorly defined. | Guidance: Eleven ways your funding application could be failing |
| You haven’t described your experiments in enough detail. | Worked examples: Different types of experiments written up by the MRC; The Experimental Design Assistant (EDA) |
| Your sample size is poorly justified. | Guidance: Conducting a pilot study |
| Your statistical analysis is unclear or inappropriate. | EDA: Statistical Analysis |
| You’ve incorrectly identified the experimental unit. | The ARRIVE guidelines: Experimental units |
| You’ve not explained how you are avoiding bias. | EDA: Nuisance variables; The ARRIVE guidelines: Blinding |