Abstract: Autotuning, the problem of finding the parameters of a piece of software that optimize a given performance criterion (computation time, memory usage, energy consumption, etc.), can be cast as a black-box optimization problem. Neither an analytical formulation of the objective function nor any information about its derivatives is available. Given a set of parameter values, the objective function can be evaluated only through a costly run of the software.
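As a minimal sketch of this black-box setting, the snippet below wraps a toy workload in an objective function whose only observable output is the wall-clock time of a run; the parameter name `block_size` and the matrix workload are hypothetical stand-ins for a real tunable routine:

```python
import time
import numpy as np

def objective(block_size):
    """Hypothetical autotuning objective: run the computation with the
    given parameter value and return the wall-clock time of the run.
    The blocked matrix products stand in for a costly software execution;
    no formula or gradient for this function is available."""
    a = np.random.default_rng(0).standard_normal((256, 256))
    start = time.perf_counter()
    # Process the matrix in panels of the requested size (toy workload).
    for i in range(0, 256, block_size):
        _ = a[i:i + block_size] @ a.T
    return time.perf_counter() - start
```

Each call is a full run, so an autotuner must treat evaluations as expensive and choose them sparingly.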
Several algorithms in the literature target this problem, among which are the surrogate-based black-box optimization algorithms. Their main idea is to build a surrogate model of the objective function and to optimize this model instead of the original function. One such surrogate is Kriging.
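A minimal sketch of the surrogate idea, assuming a zero-mean Gaussian process with an RBF kernel as the Kriging model and a lower-confidence-bound rule for choosing the next evaluation (the toy objective `f`, the kernel length scale, and the candidate grid are all illustrative choices, not the paper's method):

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, X_new, length_scale=0.5, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP (Kriging)."""
    K = rbf(X, X, length_scale) + noise * np.eye(len(X))
    K_s = rbf(X, X_new, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v * v, axis=0)      # RBF prior variance is 1
    return mu, np.sqrt(np.clip(var, 0.0, None))

# Toy expensive objective (stands in for a costly software run).
f = lambda x: np.sin(3 * x) + 0.5 * x

X = np.array([0.0, 0.5, 1.0])              # initial design
y = f(X)
candidates = np.linspace(0.0, 1.0, 101)

for _ in range(5):                          # surrogate-driven loop
    mu, sigma = gp_posterior(X, y, candidates)
    # Evaluate where the model is low or uncertain (lower confidence bound).
    x_next = candidates[np.argmin(mu - 2.0 * sigma)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

x_best = X[np.argmin(y)]
```

The cheap model is queried over the whole candidate grid at each step, while the expensive function is evaluated only at the single point the acquisition rule selects.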
Our work is a generalization of Kriging to the case in which not one but several similar optimization problems must be solved simultaneously. It lies within the transfer- and multitask-learning frameworks, as the knowledge acquired while solving one problem is used to help solve the others.
A similar approach is known as co-Kriging. In that setting, however, the hypothesis is that there exist data from inexpensive-to-evaluate functions that are highly correlated with the main (and unique) objective function of interest.
In our case, by contrast, all objective functions have the same cost and importance, none is supposed to be under- or oversampled relative to the others, and all of them, not just one, are to be optimized.
The autotuning of the QR factorization routine of ScaLAPACK serves as the running example throughout our presentation.