What is: Base Boosting?
Source | Boosting on the shoulders of giants in quantum device calibration |
Year | 2020 |
Data Source | CC BY-SA - https://paperswithcode.com |
In the setting of multi-target regression, base boosting permits us to incorporate prior knowledge into the learning mechanism of gradient boosting (or Newton boosting, etc.). Namely, from the vantage of statistics, base boosting is a way of building the following additive expansion in a set of elementary basis functions: \begin{equation} h_{j}(X ; \{ \alpha_{j}, \theta_{j} \}) = X_{j} + \sum_{k=1}^{K_{j}} \alpha_{j,k} b(X ; \theta_{j,k}), \end{equation} where $X$ is an example from the domain, $\{\alpha_{j}, \theta_{j}\}$ collects the expansion coefficients and parameter sets, $X_{j}$ is the image of $X$ under the $j$th coordinate function (a prediction from a user-specified model), $K_{j}$ is the number of basis functions in the linear sum, and $b(X ; \theta_{j,k})$ is a real-valued function of the example $X$ characterized by a parameter set $\theta_{j,k}$.
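A minimal sketch of the idea for a single target $j$, assuming squared-error loss and depth-one regression trees (stumps) as the basis functions $b$; the function names and the crude linear base model are illustrative, not from the paper:

```python
import numpy as np

def fit_stump(X, r):
    """Least-squares depth-1 regression tree fit to residuals r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]  # (feature index, threshold, left value, right value)

def stump_predict(X, stump):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def base_boost(X, y, base_predict, n_stages=50, lr=0.1):
    """Gradient boosting whose offset is a user-specified model's
    prediction (the X_j term above) rather than the optimal constant."""
    pred = base_predict(X)           # prior knowledge seeds the loop
    stumps = []
    for _ in range(n_stages):
        residual = y - pred          # negative gradient of squared error
        stump = fit_stump(X, residual)
        pred = pred + lr * stump_predict(X, stump)
        stumps.append(stump)
    return stumps

def boosted_predict(X, base_predict, stumps, lr=0.1):
    pred = base_predict(X)
    for s in stumps:
        pred = pred + lr * stump_predict(X, s)
    return pred
```

The stages only ever model what the base model gets wrong, so a physically motivated but imperfect base model is refined rather than discarded.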
The aforementioned additive expansion differs from the standard additive expansion: \begin{equation} h_{j}(X ; \{ \alpha_{j}, \theta_{j} \}) = \alpha_{j, 0} + \sum_{k=1}^{K_{j}} \alpha_{j,k} b(X ; \theta_{j,k}), \end{equation} as it replaces the constant offset $\alpha_{j,0}$ with a prediction from a user-specified model. In essence, this modification permits us to incorporate prior knowledge into the for loop of gradient boosting, as the for loop proceeds to build the linear sum by computing residuals that depend upon predictions from the user-specified model instead of the optimal constant model: \begin{equation} \alpha_{j,0} = \operatorname*{arg\,min}_{\gamma} \sum_{i=1}^{N} L(y_{i,j}, \gamma), \end{equation} where $N$ denotes the number of training examples, $L$ denotes a single-target loss function, and $\gamma$ denotes a real number, e.g., for squared-error loss the minimizer is the mean of the $j$th training targets.