This article is about the topic in the design of experiments. For the topic in optimal control theory, see shape optimization.
[Image: a man taking measurements with a theodolite in a frozen environment.]

The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance matrix of the estimator. Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs. Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors. Experimental designs are evaluated using statistical criteria.
One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix; this results in minimizing the average variance of the estimates of the regression coefficients. A popular criterion is D-optimality, which seeks to maximize the determinant of the information matrix; this results in maximizing the differential Shannon information content of the parameter estimates. Another criterion is E-optimality, which maximizes the minimum eigenvalue of the information matrix.
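As a sketch of how these criteria are evaluated in practice, the following NumPy snippet computes the A-, D-, and E-criterion values from the information matrix of a candidate design. The design matrix here (a 2x2 full factorial for a two-factor first-order model) is purely illustrative:

```python
import numpy as np

# Hypothetical design: 4 runs of a two-factor model y = b0 + b1*x1 + b2*x2,
# with each row of X being (1, x1, x2) for one experimental run.
X = np.array([
    [1, -1, -1],
    [1, -1,  1],
    [1,  1, -1],
    [1,  1,  1],
], dtype=float)

M = X.T @ X  # information matrix (up to the error variance sigma^2)

A_value = np.trace(np.linalg.inv(M))   # A-optimality: minimize trace of M^-1
D_value = np.linalg.det(M)             # D-optimality: maximize det(M)
E_value = np.linalg.eigvalsh(M).min()  # E-optimality: maximize smallest eigenvalue

print(A_value, D_value, E_value)
```

For this balanced factorial, M is diagonal (4 times the identity), so the criterion values can be checked by hand: trace of the inverse is 3/4, the determinant is 64, and the smallest eigenvalue is 4.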
One criterion on prediction variance is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix; this has the effect of minimizing the maximum variance of the predicted values. A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space. A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points. In many applications, the statistician is most concerned with a “parameter of interest” rather than with “nuisance parameters”. Catalogs of optimal designs occur in books and in software libraries.
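The prediction-variance criteria can be sketched in the same way: evaluate the scaled prediction variance x' M^-1 x over the design space and take its maximum (G) or its average (I). The grid and design below are illustrative assumptions, not a prescribed method:

```python
import numpy as np

# Same hypothetical 2x2 factorial design as before.
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
M_inv = np.linalg.inv(X.T @ X)

# Evaluate the scaled prediction variance x' M^-1 x on a grid over the
# square design space [-1, 1] x [-1, 1].
grid = np.linspace(-1, 1, 21)
points = np.array([[1, a, b] for a in grid for b in grid])
pred_var = np.einsum('ij,jk,ik->i', points, M_inv, points)

G_value = pred_var.max()   # G-optimality: minimize the maximum prediction variance
I_value = pred_var.mean()  # I-optimality: minimize the average prediction variance

print(G_value, I_value)
```

V-optimality would follow the same pattern, but with `points` restricted to the m specific locations of interest rather than a grid over the whole design space.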
In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user’s specification. Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments. Since the optimality criterion of most optimal designs is based on some function of the information matrix, the ‘optimality’ of a given design is model-dependent: while an optimal design is best for that model, its performance may deteriorate on other models. The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Indeed, there are several classes of designs for which all the traditional optimality criteria agree, according to the theory of “universal optimality” of Kiefer.
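Benchmarking candidate designs against several criteria at once can be sketched as below. The two four-run designs for a straight-line model are hypothetical examples; for this simple case, concentrating runs at the endpoints happens to win on all three criteria, in the spirit of Kiefer's universal optimality:

```python
import numpy as np

def criteria(X):
    """Return (A, D, E) criterion values for a design matrix X."""
    M = X.T @ X
    return (np.trace(np.linalg.inv(M)),  # A: smaller is better
            np.linalg.det(M),            # D: larger is better
            np.linalg.eigvalsh(M).min()) # E: larger is better

# Two hypothetical 4-run designs for the straight-line model y = b0 + b1*x.
endpoints = np.array([[1, -1], [1, -1], [1, 1], [1, 1]], dtype=float)
spread = np.array([[1, -1], [1, -1/3], [1, 1/3], [1, 1]], dtype=float)

for name, X in [("endpoints", endpoints), ("spread", spread)]:
    A, D, E = criteria(X)
    print(f"{name}: A={A:.3f}  D={D:.2f}  E={E:.2f}")
```

A real benchmarking exercise would also vary the assumed model, since a design that is optimal for a straight line may perform poorly if the true response is curved.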