SLAS

Assay Optimization: A Statistical Design of Experiments Approach

From LabAutopedia

A Tutorial from the Journal of the Association for Laboratory Automation

Originally appearing in JALA 2006;11:33–41.


Authored by: Maneesha Altekar1, Carol A. Homon2, Mohammed A. Kashem3, Steven W. Mason3, Richard M. Nelson3, Lori A. Patnaude3, Jeffrey Yingling4, Paul B. Taylor3

 

With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. With the use of an integrated approach called automated assay optimization developed in collaboration with Beckman Coulter (Fullerton, CA), the process of conducting these experiments has been greatly facilitated. This approach imports an experimental design from a commercial statistical package and converts it into robotic methods. The data from these experiments are fed back into the statistical package and analyzed, resulting in empirical models for determining optimum assay conditions. The optimized assays are then progressed into HTS. This tutorial will focus on the use of statistically designed experiments in assay optimization.


Introduction

With the transition from manual to robotic HTS in the last several years, there has been a significant increase in the number of targets screened and hence throughputs required in the screening environment. Recent advances in miniaturization have resulted in ultra-high-throughput capability of >100,000 wells/day. As a result, assay optimization has become a significant bottleneck and there is ever-increasing pressure to reduce the time and cost associated with assay development.

Developing assays that are robust for automated screening environments is a challenge facing the pharmaceutical industry today. In the past, the assay optimization process has typically taken between 4–12 months, using traditional one-factor-at-a-time experiments or, occasionally, simple two-factor checkerboard experiments. Statistically designed experiments have been employed for over half a century across numerous industries and were first described in The Design of Experiments by the British statistician and biologist Sir Ronald Fisher.[1] More recent texts[2][3][4] provide an excellent introduction to the application of this methodology. Other reports include applications in HTS,[5] assay,[6][7] PCR[8] and enzyme-linked immunoassay[9] optimizations, the analysis of a 2⁹ full factorial combinatorial library,[10] optimizing conditions for the growth of Lactobacillus casei,[11] cell expression levels,[12] cDNA-microarray protocols,[13] drug delivery systems,[14] mass spectrometry,[15] and evaluation of dosing for combination therapy.[16]

Automated assay optimization (AAO) is a process that combines experimental design, robotics, and statistics. It has greatly facilitated the use of complex statistically designed experiments and typically arrives at an optimized solution after running one to three experimental iterations over a 2-week period. Improved performance facilitates miniaturization, provides a higher likelihood of finding tractable hits, and allows greater flexibility in allocating resources to screening systems. This tutorial will focus on statistics, in particular Design of Experiments (DOE), as a tool of choice for optimizing assays and will include examples of case studies.

An introduction to DOE

Statistically designed experiments are a powerful tool for improving the efficiency of experimentation. Through an iterative process, they allow us to gain knowledge about the system being studied with a minimum number of experiments. Inclusion of replicate test conditions allows the estimation of random, experimental variation. Statistical analysis of data generated from the experiment clearly establishes the relationship between the measured parameter of interest (response) and the process parameters (input factors or factors) being studied. The factors may have individual, simple effects on the response (referred to as main effects) or may have effects that are interdependent (referred to as interaction effects). Since the designed experiments are generated on the basis of statistical theory, confidence in the results obtained and the conclusions drawn is clearly defined.

Different types of designs are available; their choice is determined by the objectives of the experiment and the current state of knowledge about the experimental environment. They can be categorized as follows:

  • Screening
  • Fractional & full factorial
  • Response surface

Screening designs (also referred to as Resolution III) are used for scouting the experimental space when little is known about the target. Main effect information can be derived for each factor, but interactions cannot be interpreted. At this stage, there may be a very large number of potentially important factors, but little is known about whether or how they may impact the response. A brief experiment is run to separate the important factors from the unimportant ones so that further investigation can focus on the former. These are two-level designs: each factor is run only at the high and low levels defined by its range. The number of factors can be as high as 15.

Fractional (Resolutions IV and V) and full factorial designs are used when there is prior information about which factors are important but the precise nature and magnitude of their impact on the response is not well understood. As the complexity of the experimental design increases, so does the ability to distinguish between main effects and interactions. The number of factors under study is typically between two and six. These are also two-level designs, and factors are varied simultaneously at various combinations of their high and low levels. These designs allow us to estimate linear and two-factor interaction effects of the factors. They also allow testing for the presence of nonlinear behavior in one or more factors by running replicated experiments at the midpoint condition (where all factors are simultaneously held at their midlevel). Fractional factorial designs reduce the number of experiments with little loss of information. Figure 1 shows a three-factor full factorial design with center point.

DOE Fig 1.jpg
DOE Fig 2.jpg
 
Figure 1. Three-factor full factorial design with center point.
Figure 2. Graphic representations of central composite, face-centered cube, and Box–Behnken designs.
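In code, the two-level layouts just described can be generated directly. The sketch below (Python, illustrative only and not part of the original work) builds a 2³ full factorial with replicated center points, plus a 2⁴⁻¹ Resolution IV fraction in coded units in which the fourth factor is deliberately aliased with the three-factor interaction ABC:

```python
from itertools import product

def full_factorial(k):
    """All 2^k combinations of coded low (-1) and high (+1) levels."""
    return [list(run) for run in product((-1, +1), repeat=k)]

def fractional_factorial_2_4_1():
    """A 2^(4-1) Resolution IV design: the fourth factor D is set to
    the product ABC (defining relation I = ABCD), halving the runs."""
    runs = []
    for a, b, c in product((-1, +1), repeat=3):
        runs.append([a, b, c, a * b * c])  # D = ABC
    return runs

# Three-factor full factorial with three replicated center points,
# as in Figure 1 (center replicates estimate random variation and
# test for nonlinearity).
design = full_factorial(3) + [[0, 0, 0]] * 3
print(len(design))  # 8 corner runs + 3 center replicates = 11 runs
```

The coded ±1 units are a convention; each column is later mapped back to the real high/low level of its factor (e.g., mM of salt).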


Response surface designs are used to obtain precise information about factor effects including magnitude and direction. The number of factors is typically between two and six. These are three-level designs that allow us to estimate linear, two-factor interaction and nonlinear effects of all factors under study. They are used when there is prior indication of nonlinear behavior or when a factorial experiment reveals the presence of nonlinear behavior. They provide precise prediction of responses within the experimental region and are useful in identifying optimum conditions. Assay optimization in particular frequently produces responses that are nonlinear. Figure 2 shows various response surface designs using three factors for illustration. The first is a central composite design (CCD), where experiments are added to the factorial design after nonlinear behavior is detected. The second is a modified CCD, called a face-centered cube design, where the added experiments lie on the faces of the space formed by the factorial design. The third is a Box–Behnken design, which is run when there is a priori information about the existence of nonlinear effects. The experiments are located on the edges of the experimental space. Box–Behnken and CCDs involving up to 10 numerical and 1–3 categorical factors are fast becoming popular because of nonlinear responses common in assay development.
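The response surface layouts in Figure 2 differ only in where their points sit in the experimental space. As a minimal sketch (an illustration, not part of the original work), a face-centered cube design can be assembled from the factorial corners, axial points lying on the cube faces, and replicated center points:

```python
from itertools import product

def face_centered_ccd(k, n_center=3):
    """Face-centered central composite design in k coded factors:
    2^k factorial corners, 2k axial (face-center) points at +/-1,
    and replicated center points."""
    corners = [list(run) for run in product((-1, +1), repeat=k)]
    axial = []
    for i in range(k):
        for level in (-1, +1):
            point = [0] * k
            point[i] = level  # on the face of the cube
            axial.append(point)
    centers = [[0] * k for _ in range(n_center)]
    return corners + axial + centers

design = face_centered_ccd(3)
print(len(design))  # 8 corners + 6 face points + 3 centers = 17 runs
```

A standard (rotatable) CCD would instead place the axial points outside the cube at ±α; the face-centered variant keeps every run inside the original factor ranges, which matters when levels beyond the tested range are impractical.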

Multilevel full factorial designs are occasionally used instead of response surface designs if there is a need to explore a particular region within the experimental space in more detail, all within a single set of experiments. We will illustrate the application of DOE with case studies in assay optimization.

Automated assay optimization

AAO leverages more out of the traditional biochemistry of assay development by combining it with statistics and robotics for speedier and more precise information. The process imports a statistical design from a commercial statistical package (e.g., Design Expert, Minitab, JMP, Statistica), converts it into a robotic protocol with randomizations, carries out the liquid handling, and then parses the data for export back to the statistical package. Data are typically acquired from a detector readout in standard plate format before statistical optimization of assay conditions. There are no restrictions on detection modality; typical readouts include fluorescence, absorbance, and luminescence. While AAO has been developed as a commercial product, many organizations have developed the capability of unifying DOE, robotics, and statistical analysis with in-house software solutions.
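The import–randomize–parse loop can be illustrated with a short sketch. The function below is hypothetical (it is not the AAO software): it shuffles the run order of an imported design and maps each run to a well on one or more 384-well plates:

```python
import random
import string

def assign_wells(design_rows, plate_rows=16, plate_cols=24, seed=0):
    """Randomize run order and map each design row to a well on
    384-well plates (16 rows x 24 columns), spilling onto extra
    plates as needed. Randomization guards against positional bias
    such as edge effects."""
    rng = random.Random(seed)
    order = list(range(len(design_rows)))
    rng.shuffle(order)  # randomized delivery pattern
    wells_per_plate = plate_rows * plate_cols
    layout = []
    for position, run_index in enumerate(order):
        plate, well = divmod(position, wells_per_plate)
        r, c = divmod(well, plate_cols)
        well_id = f"{string.ascii_uppercase[r]}{c + 1:02d}"  # A01..P24
        layout.append((plate + 1, well_id, design_rows[run_index]))
    return layout

# Hypothetical four-condition design fragment
layout = assign_wells([{"NaCl_mM": v} for v in (0, 25, 50, 100)])
```

After the plates are read, the same mapping is inverted to pair each measured response with its design row for export back to the statistical package.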

The three facets of AAO can be described as follows.

Biochemistry

This is the science behind AAO. Therapeutic teams identify targets, generate experimental methods built around factors of biological relevance (reagent concentrations; buffer components such as salts and reducing agents; temperature; pH; osmotic/viscosity regulators), and provide information on factors detrimental to the assay. For each experiment, assay steps including dispense volumes, order of addition, and incubation times are programmed in as elements of the robotic methods.

Robotics

A Biomek FX Liquid Handling Workstation from Beckman Coulter handles the robotic needs. Before AAO was developed, programming was manual, and it took up to 2 weeks to generate proof of concept data. Using AAO, it is now feasible in a single day to import the statistical design, create plate maps and liquid-handling methods, run the assay, and collect the data. (See also Automatic Programming) Typically, statistical designs are selected so that an experiment can be run in one to six 384-well plates. Figure 3 shows what the graphic interface looks like when running liquid-handling methods.

DOE Fig 3.jpg
Figure 3. Graphic interface showing source, destination, and randomized delivery patterns on a Biomek FX.
Statistics

A statistically designed experiment is used to efficiently gain information on the factors affecting the assay response (signal, S/B, Z′, stability, etc.). This provides information on the relationship between factors and response, interactions among factors, random experimental variation, and conditions that optimize the assay. A partial example of factors and levels as constructed by a statistical package for export to AAO is shown in Figure 4.

DOE Fig 4.jpg
Figure 4. A partial list of the distribution of levels across four factors as constructed by a statistical package. The table is imported into AAO as a text file as the first step to robotic programming.


Data analysis, model development, and interpretation

After an experiment has been run, the following steps are typically implemented:

  • The data set is validated, ensuring good agreement across replicates and identification of spurious outliers.
  • The data are reviewed to ensure that the expected range of values has been obtained. The best test results are identified.
  • Data are imported into the statistical package.
  • The best model is identified from the summary of fit (adjusted R-square), analysis of variance (ANOVA), and lack-of-fit tests. This includes checking the validity of the best-fit model through various diagnostic tests.
  • Model predictive values are compared with the best test results.
  • Factors/levels are identified for confirmation or further experimentation.

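The model-identification step above leans on the adjusted R-square summary of fit, which penalizes models for every extra term. A minimal sketch, using a hypothetical two-factor coded design and invented responses:

```python
import numpy as np

def adjusted_r2(y, y_hat, n_params):
    """Summary-of-fit statistic used to compare candidate models:
    R-square adjusted for the number of fitted parameters."""
    y = np.asarray(y, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n = len(y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)

def fit_linear_model(X, y):
    """Ordinary least squares fit of y = b0 + b1*x1 + b2*x2 + ..."""
    A = np.column_stack([np.ones(len(X)), X])  # intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# Hypothetical coded 2x2 design with a center point, made-up responses
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], float)
y = np.array([2.0, 3.1, 4.0, 5.2, 3.6])
coef, y_hat = fit_linear_model(X, y)
print(round(adjusted_r2(y, y_hat, n_params=3), 3))
```

In practice the statistical package compares linear, two-factor-interaction, and quadratic candidate models this way before the diagnostic checks.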
Graphic Visualization Tools

Graphical tools form an integral part of evaluating the experimental data. These include plots that aid in identifying significant effects, verifying validity of the model, and interpreting the model.

A half normal plot is a plot of the absolute value of the effect estimates against their cumulative normal probabilities. Figure 5 illustrates that NaCl, Tween-20, 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), glycerol, and sucrose produced significant effects and should be included in the statistical model. Note that no distinction is made between whether an effect is positive or negative.


DOE Fig 5.jpg
Figure 5. Example of a half normal plot used for identifying all significant effects.
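The coordinates of a half normal plot are straightforward to compute: the ordered absolute effect estimates are paired with half-normal quantiles, and points that depart from the line through the near-zero effects flag significant factors. A small illustration with invented effect estimates:

```python
from statistics import NormalDist

def half_normal_coordinates(effects):
    """Pair sorted |effect| values with half-normal quantiles.
    Effects falling far above the line through the small-|effect|
    points are candidates for the statistical model."""
    magnitudes = sorted(abs(e) for e in effects)
    n = len(magnitudes)
    nd = NormalDist()
    # Half-normal quantile for the i-th ordered |effect|
    quantiles = [nd.inv_cdf(0.5 + 0.5 * (i - 0.5) / n)
                 for i in range(1, n + 1)]
    return list(zip(quantiles, magnitudes))

# Hypothetical effect estimates: most near zero, two standing out
effects = [0.1, -0.2, 0.15, -0.05, 3.2, -2.7, 0.08]
for q, m in half_normal_coordinates(effects):
    print(f"{q:.2f}  {m:.2f}")
```

As the text notes, the absolute value discards the sign of each effect; the normal plot (Figure 6) is used when the direction matters.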


A normal plot is useful for distinguishing between positive (upper right) and negative (lower left) effects; an example is shown in Figure 6.


DOE Fig 6.jpg
Figure 6. Example of a normal plot for distinguishing between positive and negative effects.


Once a model has been generated, its validity can be crosschecked against the actual data set by using an actual versus predicted plot as shown in Figure 7.

DOE Fig 7.jpg
Figure 7. An actual versus predicted plot enabling visual comparison of model predictions with the actual data.


Contour plots can be two- or three-dimensional and show the behavior of responses as a function of two significant factors while holding other significant factors constant. These are used to facilitate model interpretation. In the example given in Figure 8, the yellow surface represents a contour plot with its three-dimensional surface above it.


DOE Fig 8.jpg
Figure 8. A two- and three-dimensional contour plot showing the interaction between CHAPS and MgCl2.


Model Development

The statistical analysis results in the development of an empirical first- or second-order model that describes the mathematical relationship between the response and the factors.  A general form of the model is as follows:

Y = β₀ + Σᵢ βᵢXᵢ + Σᵢ<ⱼ βᵢⱼXᵢXⱼ + Σᵢ βᵢᵢXᵢ² + ε

where Y is the response, the X's are the significant factors, the β coefficients represent the magnitude and direction of each term's effect, and ε is the random error. This model can be used to predict the response for any condition within the experimental space.
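Evaluating such a second-order model at a given condition is a direct computation. The coefficients below are invented purely for illustration:

```python
def predict(x, b0, linear, interactions, quadratic):
    """Evaluate a second-order empirical model
    Y = b0 + sum(bi*Xi) + sum(bij*Xi*Xj) + sum(bii*Xi^2)
    at the coded condition x. Coefficient dictionaries are keyed
    by factor index (or index pair for interactions)."""
    y = b0
    y += sum(b * x[i] for i, b in linear.items())
    y += sum(b * x[i] * x[j] for (i, j), b in interactions.items())
    y += sum(b * x[i] ** 2 for i, b in quadratic.items())
    return y

# Hypothetical fitted coefficients for two coded factors
y = predict([0.5, -1.0], b0=3.6,
            linear={0: 1.0, 1: 0.6},
            interactions={(0, 1): 0.2},
            quadratic={0: -0.4})
print(round(y, 2))
```

Sweeping `x` over the experimental region with this function is exactly what the contour plots of the previous section visualize.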

Model Interpretation

The ANOVA table generated with the model is used to interpret the model. It shows which effects are significant and whether they are linear or nonlinear. Contour plots generated from the model greatly facilitate this interpretation by graphically showing the behavior of the response over the experimental region.

Assay optimization—case studies

Example 1: Enzymatic Assay 1—The Importance of Level Setting

In the optimization of an enzymatic assay, a Box–Behnken design was chosen using 10 factors each with three levels. The response surface design allowed for predictions of response curvature and resulted in 170 buffers being assembled in 85min by a Biomek FX. One of the factors chosen was calcium, and its levels were set at 0, 12.5, and 25mM. The first iteration of AAO failed to produce anything superior to existing conditions, and subsequent bench experiments revealed a concentration dependence (250μM optimum) that was much lower than the levels set in the AAO experiment (Fig. 9). This served as confirmation that preliminary experimentation on individual factors can be key in ensuring success of AAO. In the example given, experiments involving three to four factors with a broader range of levels would have had a higher likelihood of identifying the specific calcium concentration required. This experience confirmed that expert knowledge, wherever possible, is a component vital to the success of this approach and that the lack of it can lead to misinterpretation of test results.


DOE Fig 9.jpg
Figure 9. A demonstration of how a significant calcium effect was missed with levels set in an AAO experiment.


Example 2: Enzymatic Assay 2, Iteration 1


The experiment was run as a general factorial design with five factors and up to four levels (Fig. 10). The design generated 1788 test wells across five 384-well plates including totals, backgrounds in duplicate, and quality control (QC) wells with existing best conditions.


DOE Fig 10.jpg
Figure 10. Factors and levels for enzymatic assay 2, iteration 1.


The total number of test buffers made was 432, and these were assembled in 1.5h in deep-well polypropylene blocks. The base buffer composition was 25mM 3-(N-morpholino)propanesulfonic acid (MOPS), pH 7.3, 3.75mM dithiothreitol (DTT), and 2% dimethylsulfoxide (DMSO). Variation across randomized replicates was generally low, and comparison of the best test results with QC wells indicated improvements as shown in Figure 11.


DOE Fig 11.jpg
Figure 11. Comparison of QC (T/B=4.07) and test data (best T/B=6.30) for enzymatic assay 2, iteration 1.


In general, when inspecting the top 10 test results, a clear effect results in common level values, and random values indicate little or no effect. A statistical model was generated using a quadratic fit and produced an adjusted R-square of 0.89. The response surfaces in Figure 12 show that potassium glutamate was detrimental, MgCl2 had an optimum of 10mM, and sucrose was detrimental. Numerical prediction of an optimum condition resulted in a buffer with the composition 25mM MOPS, pH 7.3, 1mM NaCl, 10mM MgCl2, 2.5% polyethylene glycol (PEG), 3.75mM DTT, and 2% DMSO. This is compared to an original buffer composition of 25mM MOPS, pH 7.3, 25mM NaCl, 8mM MgCl2, 2.5% PEG, 3.75mM DTT, and 2% DMSO.


DOE Fig 12.jpg
Figure 12. Response surfaces showing effects of MgCl2, potassium glutamate (nonlinear), and sucrose (linear).
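Numerical prediction of an optimum condition, as used above, amounts to maximizing the fitted model over the experimental space. A simple grid-search sketch with a made-up response model (the NaCl penalty and 10mM MgCl2 optimum are chosen only to echo the text, not taken from the actual fit):

```python
from itertools import product

def grid_optimize(model, ranges, steps=11):
    """Evaluate a fitted model over a regular grid spanning each
    factor's experimental range; return the best condition found."""
    axes = [[lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
            for lo, hi in ranges]
    best = max(product(*axes), key=model)
    return best, model(best)

# Hypothetical response model: NaCl detrimental, MgCl2 optimum at 10mM
def response(point):
    nacl, mgcl2 = point
    return 6.0 - 0.05 * nacl - 0.02 * (mgcl2 - 10.0) ** 2

best, value = grid_optimize(response, ranges=[(0.0, 25.0), (0.0, 20.0)])
print(best)  # expect NaCl at its low end, MgCl2 near 10mM
```

Commercial packages use more refined numerical optimizers and desirability functions, but the principle of searching only within the tested region is the same.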


Example 3: Enzymatic Assay 2, Iteration 2

A second iteration was performed using the design shown in Figure 13.


DOE Fig 13.jpg
Figure 13. Factors and level settings for enzymatic assay 2, iteration 2.


A total of 192 buffers were made in 35min. One interesting observation was the effect of ZnCl2, shown in Figure 14. Without inspection of the response surface, it could be concluded that it would be best to omit zinc, but the data suggest that exploration of concentrations higher than 0.25mM could potentially lead to higher desirabilities.

DOE Fig 14.jpg
Figure 14. Response surface indicating potential beneficial effect by increasing the concentration of ZnCl2.


The total time taken to run both iterations on enzymatic target 2 was 5 days, a significant increase in speed over traditional bench methods.

Example 4: Enzymatic Assay 3—Example of AAO Producing Substantial Improvement

The best existing condition had the following buffer composition: 25mM HEPES, pH 7.5, 10mM MgCl2, 50mM KCl, 0.01% CHAPS, 0.2% bovine serum albumin, and 200μM Tris(2-carboxyethyl)phosphine (TCEP). A general factorial design was created to explore four factors (Fig. 15) and resulted in the assembly of 256 buffers in 75min.

DOE Fig 15.jpg
Figure 15. Factors and levels for enzymatic assay 3.


Comparison of the top 10 test results with best existing QC wells indicated substantial improvements in total/background ratios (Fig. 16).

DOE Fig 16.jpg
Figure 16. Comparison of QC (T/B=1.61) and test data (best T/B=30.04) for enzymatic assay 3.


Initial inspection of the statistical model revealed that NaCl was detrimental, sucrose was insignificant, and glycerol and CHAPS were beneficial. Numerical optimization predicted an optimum of 0mM NaCl, 10% glycerol, 138mM sucrose, and 0.1% CHAPS. Inspection of the sucrose response revealed that while a maximum was identified, its contribution was minimal. The predominant enhancement of catalytic activity derived from a glycerol and CHAPS interaction is shown in Figure 17.

DOE Fig 17.jpg
Figure 17. Response surface interactions between sucrose/NaCl and glycerol/CHAPS in enzymatic assay 3.


Conclusion

The assay optimization process has been greatly enhanced by the use of statistically designed experiments. Traditional methods will frequently find an optimum solution, but not as efficiently as a designed experiment and, most notably, without uncovering important interaction effects. In addition, advances in robotics now allow us to exploit the power of statistically designed experiments by using more complex designs that offer much greater insight into the behavior of factors, the interactions among them, and their impact on the responses. With the robotic capabilities available today, several factors can be examined simultaneously, at multiple levels, easily and quickly. As automated solutions become more prevalent across drug discovery environments, their application in this context will provide a substantial benefit.


Acknowledgments

The authors would like to thank Christine Martens for providing enzyme and John Snider for critical review of the manuscript.

Related Articles

Automatic_Programming

References

  1. Fisher RA. The Design of Experiments. Edinburgh: Oliver and Boyd; 1935–1966
  2. Montgomery DC. Design and Analysis of Experiments. New York: John Wiley and Sons, Inc.; 1997
  3. Haaland PD. Experimental Design in Biotechnology. Marcel Dekker, Inc.; 1989
  4. Myers RH, Montgomery DC. Response Surface Methodology. New York: John Wiley and Sons, Inc.; 2002
  5. Lutz MW, Menius JA, Choi TD, Gooding Laskody R, Domanico PL, Goetz AS, et al. Experimental design for high throughput screening. Drug Discov. Today. 1996;1(7):277–286.
  6. Taylor PB, Stewart FP, Quinn ST, Schulz CK, Vaidya KS, Kurali E, et al. Automated assay optimization with integrated statistics and smart robotics. J. Biomol. Screen. 2000;5(4):213–225. MEDLINE
  7. Taylor PB. Optimizing assays for automated platforms. Mod. Drug Discov. 2002;5(12):37–39.
  8. Boleda MD, Briones P, Farres J, Tyfield L, Pi R. Experimental design: a useful tool for PCR optimization. Biotechniques. 1996;21:134–140
  9. Reiken SR, Van Wie BJ, Sutisna H, Kurdikar DL, Davis WC. Efficient optimization of ELISAs. J. Immunol. Methods. 1994;177:199–206
  10. Young SS, Hawkins DM. Analysis of a 2⁹ full factorial chemical library. J. Med. Chem. 1995;38:2784–2788. MEDLINE
  11. Oh S, Rheem S, Sim J, Kim S, Baek Y. Optimizing conditions for the growth of Lactobacillus casei YIT 9018 in tryptone–yeast extract–glucose medium by using response surface methodology. Appl. Environ. Microbiol. 1995;61(11):3809–3814
  12. Ganne V, Mignot G. Application of statistical design of experiments to the optimization of factor VIII expression by CHO cells. Cytotechnology. 1991;6(3):233–240
  13. Wrobel G, Schlingemann J, Hummerich L, Kramer H, Lichter P, Hahn M. Optimization of high-density cDNA-microarray protocols by design of experiments. Nucleic Acids Res. 2003;31(12):e67.
  14. Singh B, Kumar R, Ahuja N. Optimizing drug delivery systems using systematic design of experiments. Part I: fundamental aspects. Crit. Rev. Ther. Drug Carrier Syst. 2005;22(1):27–105.
  15. Riter LS, Vitek O, Gooding KM, Hodge BD, Julian RK. Statistical design of experiments as a tool in mass spectrometry. J. Mass Spectrom. 2005;40(5):565–579.
  16. Pool JL, Cushman WC, Saini RK, Nwachuku CE, Battikha JP. Use of the factorial design and quadratic response surface models to evaluate the Fosinopril and Hydrochlorothiazide combination therapy in hypertension. Am. J. Hypertens. 1997;10:117–123.


Authors

1Statistical Consultant
2Boehringer Ingelheim Pharmaceuticals, Ridgefield, CT, USA (retired)
3Boehringer Ingelheim Pharmaceuticals, Ridgefield, CT, USA
4Aerie Pharmaceuticals, Research Triangle Park, NC

