Title: | Calibration for Computer Experiments with Binary Responses |
---|---|
Description: | Performs the calibration procedure proposed by Sung et al. (2018+) <arXiv:1806.01453>. This calibration method is particularly useful when the outputs of both computer and physical experiments are binary and the estimation for the calibration parameters is of interest. |
Authors: | Chih-Li Sung |
Maintainer: | Chih-Li Sung <[email protected]> |
License: | GPL-2 | GPL-3 |
Version: | 0.1 |
Built: | 2024-10-23 05:14:40 UTC |
Source: | https://github.com/cran/calibrateBinary |
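The package is available on CRAN, so a standard installation is all that is needed before running the examples below:

install.packages("calibrateBinary")  # install once from CRAN
library(calibrateBinary)             # then load the package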
The function performs the L2 calibration method for binary outputs.
calibrateBinary(Xp, yp, Xs1, Xs2, ys, K = 5,
                lambda = seq(0.001, 0.1, 0.005),
                kernel = c("matern", "exponential")[1],
                nu = 1.5, power = 1.95,
                rho = seq(0.05, 0.5, 0.05),
                sigma = seq(100, 20, -1),
                lower, upper, verbose = TRUE)
Xp | a design matrix with dimension np by d for the physical experiment.
yp | a response vector of length np. The values in the vector are 0 or 1.
Xs1 | a design matrix with dimension ns by d for the computer experiment, whose columns correspond to the columns of Xp.
Xs2 | a design matrix with dimension ns by q for the computer experiment, whose columns correspond to the calibration parameters.
ys | a response vector of length ns. The values in the vector are 0 or 1.
K | a positive integer specifying the number of folds for fitting the kernel logistic regression and the generalized Gaussian process. The default is 5.
lambda | a vector specifying the lambda values at which the CV curve will be computed for fitting the kernel logistic regression. See cv.KLR.
kernel | input for fitting the kernel logistic regression. See KLR.
nu | input for fitting the kernel logistic regression. See KLR.
power | input for fitting the kernel logistic regression. See KLR.
rho | a vector specifying the rho values at which the CV curve will be computed for fitting the kernel logistic regression. See cv.KLR.
sigma | a vector specifying the values of the tuning parameter sigma at which the CV curve will be computed for fitting the generalized Gaussian process.
lower | a vector of length d+q specifying the lower bounds of the input space. If not given, the default is the column-wise minimum of rbind(Xp,Xs1) and Xs2.
upper | a vector of length d+q specifying the upper bounds of the input space. If not given, the default is the column-wise maximum of rbind(Xp,Xs1) and Xs2.
verbose | logical. If TRUE, additional diagnostics are printed. The default is TRUE.
The function performs the L2 calibration method for computer experiments with binary outputs. The input and output of the physical data are assigned to Xp and yp, and the input and output of the computer data are assigned to cbind(Xs1,Xs2) and ys. Note that the input of the computer data is separated into Xs1 and Xs2, where Xs1 is the input shared with Xp and Xs2 is the calibration input. The idea of L2 calibration is to find the calibration parameter that minimizes the discrepancy, measured by the L2 distance, between the underlying probability functions of the physical and computer data. That is,

$$\hat{\theta} = \arg\min_{\theta} \|\hat{\eta}(\cdot) - \hat{p}(\cdot, \theta)\|_{L_2(\Omega)},$$

where $\hat{\eta}(\cdot)$ is the fitted probability function for the physical data, and $\hat{p}(\cdot, \theta)$ is the fitted probability function for the computer data.

In this L2 calibration framework, $\hat{\eta}(\cdot)$ is fitted by kernel logistic regression using the input Xp and the output yp. The tuning parameter for the kernel logistic regression is chosen by k-fold cross-validation, where k is assigned by K and the candidate values are given by the vector lambda. The kernel function is specified by kernel, which can be either the Matern kernel or the power exponential kernel. The arguments power, nu, and rho are the tuning parameters of the kernel functions. See KLR.

For the computer data, the probability function $\hat{p}(\cdot, \theta)$ is fitted by the Bayesian Gaussian process of Williams and Barber (1998) using the input cbind(Xs1,Xs2) and the output ys, where the Gaussian correlation function

$$R_{\sigma}(\mathbf{x}_i, \mathbf{x}_j) = \exp\Big\{-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{\sigma}\Big\}$$

is used here. The vector sigma gives the candidate values of the tuning parameter $\sigma$, which is also chosen by k-fold cross-validation. More details can be found in Sung et al. (2018+) <arXiv:1806.01453>.

The arguments lower and upper are the lower and upper bounds of the input space, which are used for scaling the inputs and in the optimization over $\theta$. If they are not given, the default is the range of each column of rbind(Xp,Xs1) and of Xs2.
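To make the objective concrete, here is a minimal sketch that plots the L2 distance as a function of $\theta$, using the true probability functions from the example below in place of the fitted $\hat{\eta}$ and $\hat{p}$ (in practice the function estimates both from data):

## Illustration only: the true eta and p from the example below stand in for
## the fitted probability functions that calibrateBinary estimates from data.
eta_fun  <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x) - 1)
p_xtheta <- function(x, theta)
  eta_fun(x) - abs(theta - 0.3)*exp(-0.5*x)*cos(3.5*pi*x)

## Approximate the L2 distance over [0,1] on a fine grid.
L2.dist <- function(theta, grid = seq(0, 1, length.out = 1000))
  sqrt(mean((eta_fun(grid) - p_xtheta(grid, theta))^2))

theta.grid <- seq(0, 1, 0.01)
plot(theta.grid, sapply(theta.grid, L2.dist), type = "l",
     xlab = expression(theta), ylab = "L2 distance")  # minimum at theta = 0.3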
The function returns a matrix with q+1 columns. The first q columns contain the local minimizers (the first row is the global minimizer), which are the potential estimates of the calibration parameters; the (q+1)-th column contains the corresponding L2 distance.
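Given this return format, the estimate of the calibration parameter can be read off the first row. A minimal sketch, assuming the result matrix from the example below (q = 1 calibration parameter):

## Hypothetical extraction of the estimate from the returned matrix (q = 1).
q <- 1
theta.hat <- calibrate.result[1, 1:q]    # global minimizer: the estimate
L2.min    <- calibrate.result[1, q + 1]  # the corresponding L2 distance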
Chih-Li Sung <[email protected]>
See also KLR for performing a kernel logistic regression with given lambda and rho, and cv.KLR for performing cross-validation to estimate the tuning parameters.
library(calibrateBinary)
set.seed(1)

##### data from physical experiment #####
np <- 10
xp <- seq(0, 1, length.out = np)
eta_fun <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x)-1)  # true probability function
eta_x <- eta_fun(xp)
yp <- rep(0, np)
for(i in 1:np) yp[i] <- rbinom(1, 1, eta_x[i])

##### data from computer experiment #####
ns <- 20
xs <- matrix(runif(ns*2), ncol = 2)  # the first column corresponds to the column of xp
p_xtheta <- function(x, theta) {  # true probability function
  exp(exp(-0.5*x)*cos(3.5*pi*x)-1) - abs(theta-0.3)*exp(-0.5*x)*cos(3.5*pi*x)
}
ys <- rep(0, ns)
for(i in 1:ns) ys[i] <- rbinom(1, 1, p_xtheta(xs[i,1], xs[i,2]))

##### check the true parameter #####
curve(eta_fun, lwd = 2, lty = 2, from = 0, to = 1)
curve(p_xtheta(x, 0.3), add = TRUE, col = 4)  # true value = 0.3: L2 dist = 0
curve(p_xtheta(x, 0.9), add = TRUE, col = 3)  # other value

##### calibration: true parameter is 0.3 #####
calibrate.result <- calibrateBinary(xp, yp, xs[,1], xs[,2], ys)
print(calibrate.result)
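The bounds can also be supplied explicitly. A minimal sketch continuing the example above, where the shared input and the calibration input both lie in [0,1], so lower and upper have length d + q = 2:

## Optionally supply the input-space bounds (d = 1 shared input, q = 1
## calibration input); values match the example's input ranges.
calibrate.result2 <- calibrateBinary(xp, yp, xs[,1], xs[,2], ys,
                                     lower = c(0, 0), upper = c(1, 1))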
The function performs k-fold cross-validation for kernel logistic regression to estimate the tuning parameters.
cv.KLR(X, y, K = 5, lambda = seq(0.001, 0.2, 0.005),
       kernel = c("matern", "exponential")[1],
       nu = 1.5, power = 1.95, rho = seq(0.05, 0.5, 0.05))
X | input for KLR. See KLR.
y | input for KLR. See KLR.
K | a positive integer specifying the number of folds. The default is 5.
lambda | a vector specifying the lambda values at which the CV curve will be computed.
kernel | input for KLR. See KLR.
nu | input for KLR. See KLR.
power | input for KLR. See KLR.
rho | a vector specifying the rho values at which the CV curve will be computed.
This function performs k-fold cross-validation for a kernel logistic regression. The CV curve is computed at the values of the tuning parameters given by lambda and rho. The number of folds is given by K.
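For intuition, here is a minimal sketch of what the K-fold search over the (lambda, rho) grid looks like, assuming the held-out Brier score (mean squared error of the predictive probabilities) as the CV criterion; cv.KLR's internal criterion may differ.

## A minimal sketch of K-fold CV over the (lambda, rho) grid; the CV
## criterion here (held-out Brier score) is an assumption for illustration.
cv.sketch <- function(X, y, K = 5,
                      lambda = seq(0.001, 0.2, 0.005),
                      rho = seq(0.05, 0.5, 0.05)) {
  X <- as.matrix(X)
  fold <- sample(rep(1:K, length.out = length(y)))  # random fold labels
  grid <- expand.grid(lambda = lambda, rho = rho)
  grid$cv <- apply(grid, 1, function(g) {
    mean(sapply(1:K, function(k) {
      phat <- KLR(X[fold != k, , drop = FALSE], y[fold != k],
                  X[fold == k, , drop = FALSE],
                  lambda = g["lambda"], rho = g["rho"])
      mean((y[fold == k] - phat)^2)  # held-out Brier score
    }))
  })
  grid[which.min(grid$cv), c("lambda", "rho")]  # pair with minimum CV error
}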
The function returns a list with two components:

lambda | the value of lambda that gives the minimum CV error.
rho | the value of rho that gives the minimum CV error.
Chih-Li Sung <[email protected]>
See also KLR for performing a kernel logistic regression with given lambda and rho.
library(calibrateBinary)
set.seed(1)

np <- 10
xp <- seq(0, 1, length.out = np)
eta_fun <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x)-1)  # true probability function
eta_x <- eta_fun(xp)
yp <- rep(0, np)
for(i in 1:np) yp[i] <- rbinom(1, 1, eta_x[i])

x.test <- seq(0, 1, 0.001)
etahat <- KLR(xp, yp, x.test)

plot(xp, yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)

##### cross-validation with K=5 #####
##### to determine the parameter rho #####
cv.out <- cv.KLR(xp, yp, K = 5)
print(cv.out)

etahat.cv <- KLR(xp, yp, x.test, lambda = cv.out$lambda, rho = cv.out$rho)
plot(xp, yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)
lines(x.test, etahat.cv, col = 3)
The function performs a kernel logistic regression for binary outputs.
KLR(X, y, xnew, lambda = 0.01,
    kernel = c("matern", "exponential")[1],
    nu = 1.5, power = 1.95, rho = 0.1)
X | a design matrix with dimension n by d.
y | a response vector of length n. The values in the vector are 0 or 1.
xnew | a testing matrix with dimension n_new by d in which each row corresponds to a predictive location.
lambda | a positive value specifying the tuning parameter for KLR. The default is 0.01.
kernel | "matern" or "exponential", specifying the Matern kernel or the power exponential kernel. The default is "matern".
nu | a positive value specifying the order of the Matern kernel if kernel == "matern". The default is 1.5.
power | a positive value (between 1.0 and 2.0) specifying the power of the power exponential kernel if kernel == "exponential". The default is 1.95.
rho | a positive value specifying the scale parameter of the Matern and power exponential kernels. The default is 0.1.
This function performs a kernel logistic regression, where the kernel can be set to the Matern kernel or the power exponential kernel via the argument kernel. The arguments power and rho are the tuning parameters of the power exponential kernel, and nu and rho are the tuning parameters of the Matern kernel. The power exponential kernel has the form

$$K(\mathbf{x}_i, \mathbf{x}_j) = \exp\Big\{-\frac{\sum_{l}|x_{il} - x_{jl}|^{p}}{\rho}\Big\},$$

where $p$ is given by power and $\rho$ by rho, and the Matern kernel has the form

$$K(\mathbf{x}_i, \mathbf{x}_j) = \prod_{l} \frac{1}{\Gamma(\nu)2^{\nu-1}}\Big(2\sqrt{\nu}\,\frac{|x_{il} - x_{jl}|}{\rho}\Big)^{\nu} \kappa_{\nu}\Big(2\sqrt{\nu}\,\frac{|x_{il} - x_{jl}|}{\rho}\Big),$$

where $\kappa_{\nu}$ is the modified Bessel function of the second kind of order $\nu$. The argument lambda is the tuning parameter that controls the smoothness of the fitted function.
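To make the two kernel forms concrete, here is a minimal sketch of single-pair kernel evaluations matching the formulas above (illustrative only; the package's internal implementation may differ):

## Illustrative single-pair kernel evaluations; x1 and x2 are numeric vectors.
pow.exp.kernel <- function(x1, x2, power = 1.95, rho = 0.1)
  exp(-sum(abs(x1 - x2)^power) / rho)

matern.kernel <- function(x1, x2, nu = 1.5, rho = 0.1) {
  r <- 2 * sqrt(nu) * abs(x1 - x2) / rho
  k <- ifelse(r == 0, 1,  # limiting value at zero distance
              r^nu * besselK(r, nu) / (gamma(nu) * 2^(nu - 1)))
  prod(k)                 # product over input dimensions
}

pow.exp.kernel(0.2, 0.5)  # e.g., one-dimensional inputs
matern.kernel(0.2, 0.5)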
The function returns the predictive probabilities at the given locations xnew.
Chih-Li Sung <[email protected]>
Zhu, J. and Hastie, T. (2005). Kernel logistic regression and the import vector machine. Journal of Computational and Graphical Statistics, 14(1), 185-205.
See also cv.KLR for performing cross-validation to choose the tuning parameters.
library(calibrateBinary)
set.seed(1)

np <- 10
xp <- seq(0, 1, length.out = np)
eta_fun <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x)-1)  # true probability function
eta_x <- eta_fun(xp)
yp <- rep(0, np)
for(i in 1:np) yp[i] <- rbinom(1, 1, eta_x[i])

x.test <- seq(0, 1, 0.001)
etahat <- KLR(xp, yp, x.test)

plot(xp, yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)

##### cross-validation with K=5 #####
##### to determine the parameter rho #####
cv.out <- cv.KLR(xp, yp, K = 5)
print(cv.out)

etahat.cv <- KLR(xp, yp, x.test, lambda = cv.out$lambda, rho = cv.out$rho)
plot(xp, yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)
lines(x.test, etahat.cv, col = 3)