Performance comparison of linear and nonlinear SVM methods under different datasets
Conference: CAIBDA 2022 - 2nd International Conference on Artificial Intelligence, Big Data and Algorithms
06/17/2022 - 06/19/2022 at Nanjing, China
Proceedings: CAIBDA 2022
Pages: 5
Language: English
Type: PDF
Authors:
Su, Mingxi (College of Science and Engineering, University of Minnesota, Twin Cities, Minneapolis, MN, USA)
Zhang, Chenxi (Advanced Technology School, Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu Province, China)
Abstract:
The support vector machine (SVM) is one of the best-known supervised machine learning algorithms. SVM offers a variety of flexible kernel methods to suit datasets from different fields and conveniently supports both classification and regression. The performance of an SVM depends on how well the kernel method matches the characteristics of the dataset. Choosing an inappropriate kernel can waste substantial computing resources, especially as datasets continue to grow. This article investigates how efficiency changes when linear and nonlinear datasets are analyzed with a linear-kernel SVM and a radial basis function (RBF) kernel SVM, and examines whether the performance of a poorly matched kernel can be improved by tuning its parameters. Overall, the results show that applying a linear-kernel SVM to a nonlinear dataset yields poor efficiency: on the chess game dataset, which is nonlinear, the linear SVM took thirteen times as long as the RBF SVM to reach the same result. However, the performance of a linear-kernel SVM on a nonlinear dataset can be improved roughly linearly by reducing one parameter, the regularization constant C, and the RBF-kernel SVM shows similar performance and accuracy on both linear and nonlinear datasets.
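The kind of comparison the abstract describes can be sketched with scikit-learn. The snippet below is illustrative only: it uses the synthetic two-moons generator as a stand-in for a nonlinear dataset (the paper's chess game dataset is not reproduced here), and the specific timings and accuracies will differ from the paper's results.

```python
# Illustrative sketch: fit a linear-kernel and an RBF-kernel SVM on a
# nonlinear dataset and compare training time and test accuracy.
# make_moons is a hypothetical stand-in for the paper's nonlinear data.
import time

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Generate a nonlinearly separable toy dataset.
X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

results = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)  # C is the regularization constant
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    results[kernel] = (elapsed, clf.score(X_test, y_test))

for kernel, (elapsed, acc) in results.items():
    print(f"{kernel}: fit time {elapsed:.3f}s, test accuracy {acc:.3f}")
```

On data like this, the RBF kernel typically reaches higher accuracy than the linear kernel, consistent with the paper's observation that kernel choice should match the dataset's structure; lowering `C` loosens the margin constraint, which is one way to trade accuracy for training speed.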