dc.description | Kernel methods, which are based on positive definite kernels (PDKs), play an increasingly prominent role in solving various problems in statistical machine learning, such as web design,
pattern recognition, human action recognition for robots, computational protein function
prediction, remote sensing data analysis, and many other research fields. Thanks to the kernel trick and the reproducing property, linear techniques can be applied in feature spaces without
knowing the explicit form of either the feature map or the feature space. This offers versatile tools to
process, analyze, and compare many types of data, and delivers state-of-the-art performance.
Nowadays, PDKs have become a popular tool in most branches of statistical machine learning, e.g., supervised learning, unsupervised learning, reinforcement learning,
non-parametric inference, and so on. Many kernel methods have been proposed, including the support vector machine (SVM, Boser et al., 1992), kernel ridge regression (KRR, Saunders et al., 1998), kernel principal component analysis (kernel PCA,
Schölkopf et al., 1998), kernel canonical correlation analysis (kernel CCA, Akaho, 2001;
Bach and Jordan, 2002), Bayesian inference with positive definite kernels (kernel Bayes'
rule, Fukumizu et al., 2013), gradient-based kernel dimension reduction for regression
(gKDR, Fukumizu and Leng, 2014), the kernel two-sample test (Gretton et al., 2012), and so on.
During the last decade, unsupervised learning has become an important application area
of kernel methods. Two of the most powerful unsupervised kernel methods are kernel principal component analysis (kernel PCA) and kernel canonical correlation
analysis (kernel CCA) (Schölkopf et al., 1998; Akaho, 2001). | en_US |
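The kernel trick mentioned above can be made concrete with a small sketch. For the degree-2 homogeneous polynomial kernel in two dimensions, k(x, y) = (x·y)², an explicit feature map is φ(x) = (x₁², √2·x₁x₂, x₂²), and the kernel value equals the inner product of the mapped points; in general the feature map never needs to be computed. This is an illustrative example, not taken from the record itself:

```python
import numpy as np

def poly_kernel(x, y):
    # Degree-2 homogeneous polynomial kernel: k(x, y) = (x . y)^2
    return np.dot(x, y) ** 2

def feature_map(x):
    # Explicit feature map for this kernel in 2D:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

lhs = poly_kernel(x, y)                        # kernel evaluation, no feature map needed
rhs = np.dot(feature_map(x), feature_map(y))   # inner product in the feature space
print(lhs, rhs)  # both equal (1*3 + 2*0.5)^2 = 16.0
```

Because the two quantities always agree, any algorithm that touches the data only through inner products (PCA, CCA, SVMs, ridge regression) can be "kernelized" by replacing those inner products with kernel evaluations.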
dc.description.abstract | In kernel methods, choosing a suitable kernel is indispensable for favorable results.
While cross-validation is a useful method for kernel and parameter choice in supervised learning, such as with support vector machines, no well-founded methods
have been established in general for unsupervised learning. We focus on kernel principal
component analysis (kernel PCA) and kernel canonical correlation analysis (kernel CCA),
which are nonlinear extensions of principal component analysis (PCA) and canonical
correlation analysis (CCA), respectively. Both methods have been used effectively
for extracting nonlinear features and reducing dimensionality. | en_US |
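As a minimal sketch of the kernel PCA procedure the abstract refers to (not the author's implementation): form the Gram matrix of a chosen kernel, center it in feature space, and project onto the leading eigenvectors. The Gaussian-kernel bandwidth and the toy data here are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # toy data: 50 samples, 3 features

# Gram matrix of a Gaussian (RBF) kernel, k(x, y) = exp(-gamma * ||x - y||^2)
gamma = 0.5                        # assumed bandwidth parameter
sq = np.sum(X ** 2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

# Center the Gram matrix in feature space: Kc = H K H with H = I - (1/n) 11^T
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# Leading nonlinear components: top eigenvectors of Kc, scaled by 1/sqrt(eigenvalue)
eigvals, eigvecs = np.linalg.eigh(Kc)
idx = np.argsort(eigvals)[::-1][:2]             # two leading components
alphas = eigvecs[:, idx] / np.sqrt(eigvals[idx])
scores = Kc @ alphas                            # projections of the training points
print(scores.shape)  # (50, 2)
```

The quality of `scores` depends entirely on the kernel and its parameter (here `gamma`), which is exactly the choice problem the abstract says lacks a well-founded method in the unsupervised setting.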