
dc.contributor.advisor    Prof. Satoshi Kuriki
dc.contributor.author    Alam, Md. Ashad
dc.date.accessioned    2022-04-21T04:15:09Z
dc.date.available    2022-04-21T04:15:09Z
dc.date.issued    2014-09
dc.identifier.uri    http://localhost:8080/xmlui/handle/123456789/352
dc.description    Methods based on positive definite kernels (PDKs), known as kernel methods, play an increasingly prominent role in solving problems in statistical machine learning, such as web design, pattern recognition, human action recognition for robots, computational protein function prediction, remote sensing data analysis, and many other research fields. Thanks to the kernel trick and the reproducing property, linear techniques can be applied in feature spaces without knowing the explicit form of either the feature map or the feature space. Kernel methods thus offer versatile tools to process, analyze, and compare many types of data, and they achieve state-of-the-art performance. PDKs have become a popular tool in most branches of statistical machine learning, e.g., supervised learning, unsupervised learning, reinforcement learning, and non-parametric inference. Many kernel methods have been proposed, including the support vector machine (SVM, Boser et al., 1992), kernel ridge regression (KRR, Saunders et al., 1998), kernel principal component analysis (kernel PCA, Schölkopf et al., 1998), kernel canonical correlation analysis (kernel CCA, Akaho, 2001; Bach and Jordan, 2002), Bayesian inference with positive definite kernels (kernel Bayes' rule, Fukumizu et al., 2013), gradient-based kernel dimension reduction for regression (gKDR, Fukumizu and Leng, 2014), and the kernel two-sample test (Gretton et al., 2012). During the last decade, unsupervised learning has become an important application area of kernel methods. Two of the most powerful unsupervised kernel methods are kernel principal component analysis (kernel PCA) and kernel canonical correlation analysis (kernel CCA) (Schölkopf et al., 1998; Akaho, 2001).    en_US
dc.description.abstract    In kernel methods, choosing a suitable kernel is indispensable for favorable results. While cross-validation is a useful method for kernel and parameter choice in supervised learning such as support vector machines, no well-founded methods have been established in general for unsupervised learning. We focus on kernel principal component analysis (kernel PCA) and kernel canonical correlation analysis (kernel CCA), the nonlinear extensions of principal component analysis (PCA) and canonical correlation analysis (CCA), respectively. Both methods have been used effectively for extracting nonlinear features and reducing dimensionality.    en_US
dc.language.iso    en    en_US
dc.publisher    The Graduate University for Advanced Studies    en_US
dc.subject    Designing kernel for unsupervised kernel methods    en_US
dc.subject    Feature space and its drawback    en_US
dc.subject    Kernel and positive definite kernel    en_US
dc.title    Kernel Choice for Unsupervised Kernel Methods    en_US
dc.type    Thesis    en_US
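
The description above rests on the kernel trick: a positive definite kernel satisfies k(x, y) = <Phi(x), Phi(y)>, so a linear algorithm can operate on the Gram matrix K with entries K[i, j] = k(x_i, x_j) without ever forming the feature map Phi. As a minimal sketch of how this works in kernel PCA, the thesis's central example, here is a NumPy implementation assuming a Gaussian (RBF) kernel; the bandwidth sigma is a hypothetical stand-in for the kernel choice the thesis studies, not a value taken from the work itself.

    import numpy as np

    def rbf_gram(X, sigma=1.0):
        # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)),
        # computed without ever constructing an explicit feature map.
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kernel_pca(X, n_components=2, sigma=1.0):
        n = X.shape[0]
        K = rbf_gram(X, sigma)
        # Center the data in feature space: Kc = H K H with H = I - (1/n) 11^T
        H = np.eye(n) - np.ones((n, n)) / n
        Kc = H @ K @ H
        # eigh returns eigenvalues in ascending order; keep the largest ones
        vals, vecs = np.linalg.eigh(Kc)
        vals = vals[::-1][:n_components]
        vecs = vecs[:, ::-1][:, :n_components]
        # Projections of the mapped points onto the leading principal
        # directions equal sqrt(lambda_k) * v_k for eigenpairs of Kc.
        return vecs * np.sqrt(np.maximum(vals, 0.0))

    # Different sigma values yield very different nonlinear features,
    # which is why a principled, unsupervised kernel-choice criterion
    # is needed in the absence of labels for cross-validation.
    X = np.random.default_rng(0).normal(size=(100, 3))
    scores = kernel_pca(X, n_components=2, sigma=0.5)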

