The support-vector network implements the following idea: it maps the input vectors into some high dimensional feature space Z through some non-linear mapping chosen a priori. In this space a linear decision surface is constructed with special properties that ensure high generalization ability of the network.
(Cortes and Vapnik, 1995)
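The quote describes a two-step construction concrete enough to sketch in code: pick a non-linear map into a feature space Z a priori, then fit a linear separator there. Below is a minimal illustration using scikit-learn; the toy dataset, the hand-written degree-2 feature map, and the LinearSVC settings are this sketch's assumptions, not details from the paper.

```python
# Minimal sketch of the two-step idea in the quote: a non-linear map
# into a feature space Z chosen a priori, then a linear decision
# surface fitted in Z. The degree-2 map and the toy data are
# illustrative assumptions, not details from Cortes and Vapnik (1995).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data that no straight line separates in the input plane:
# label 1 inside a circle, label 0 outside it.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

def feature_map(X):
    """Non-linear map z = (x1, x2, x1^2, x2^2, x1*x2), fixed in advance."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

# A linear decision surface in Z ...
clf = LinearSVC(C=1.0, max_iter=10_000).fit(feature_map(X), y)

# ... is a quadratic, hence curved, boundary back in the input space.
print("training accuracy:", clf.score(feature_map(X), y))
```

The hyperplane's weights in Z are the coefficients of a conic section in the input plane, which is why a flat surface in the feature space can classify the circular pattern.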
Decision boundaries change in two ways in support vector machines. They blur and bend, again affecting what counts as pattern.
(Mackenzie, 2017, p. 141)
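One way to make the two changes concrete is to read "blur" as the soft margin (the C parameter, which tolerates margin violations) and "bend" as the kernel choice (which curves the boundary). That pairing of metaphor to parameter is an editorial gloss, and the dataset below is invented for illustration.

```python
# Sketch pairing the metaphors with SVM parameters: "blur" read as the
# soft margin (C trades margin width against violations) and "bend" as
# the kernel choice. This pairing is an editorial gloss; the circular
# toy data is invented for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# "Bend": an RBF kernel curves the boundary; a linear kernel cannot
# separate points inside a circle from points outside it.
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X, y)
    print(kernel, "training accuracy:", round(clf.score(X, y), 2))

# "Blur": shrinking C widens the soft margin, so more points fall
# inside it and become support vectors.
for C in (100.0, 0.01):
    clf = SVC(kernel="rbf", C=C).fit(X, y)
    print(f"rbf, C={C}: {clf.n_support_.sum()} support vectors")
```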
The extravagant dimensionality realized in the shift from 30 to 40,000 variables vastly expands the number of possible decision surfaces or hyperplanes that might be instituted in the vector space. The support vector machine, however, corrals and manages this massive and sometimes infinite generation of differences at the same time by only allowing this expansion to occur along particular lines marked out by kernel functions.
(Mackenzie, 2017, p. 147)
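The corralling Mackenzie points at is what is usually called the kernel trick: the high-dimensional expansion is never enumerated coordinate by coordinate, because the kernel returns the feature-space inner product directly. Below is a small numerical check with a hand-built feature map for the degree-2 polynomial kernel; the map is an assumption for illustration, and with the Gaussian (RBF) kernel the implicit space is infinite-dimensional and no finite explicit map exists.

```python
# Numerical check that a kernel evaluates an inner product in a
# high-dimensional feature space without ever constructing it.
# The explicit map phi below is built for the degree-2 polynomial
# kernel (x . z + 1)**2; it is an illustration, not library internals.
import numpy as np

rng = np.random.default_rng(2)
x, z = rng.normal(size=3), rng.normal(size=3)

def phi(v):
    """10-dimensional map whose inner products equal (v . w + 1)**2."""
    v1, v2, v3 = v
    s = np.sqrt(2.0)
    return np.array([1.0,
                     s * v1, s * v2, s * v3,
                     v1 ** 2, v2 ** 2, v3 ** 2,
                     s * v1 * v2, s * v1 * v3, s * v2 * v3])

kernel_value = (x @ z + 1.0) ** 2   # one dot product and a square
explicit_value = phi(x) @ phi(z)    # ten coordinates, never required
print(np.isclose(kernel_value, explicit_value))  # True

# With a Gaussian (RBF) kernel the implicit feature space is
# infinite-dimensional, yet the kernel value is still one expression:
rbf = np.exp(-np.sum((x - z) ** 2))
print("rbf kernel value:", rbf)
```

Training an SVM only ever touches such kernel values K(x_i, x_j), which is how the "massive and sometimes infinite generation of differences" stays computationally manageable: the expansion happens only along the lines the kernel marks out.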
Cortes, C. and Vapnik, V. (1995) ‘Support-vector networks’, Machine Learning, 20(3), pp. 273–297. Springer. [link]
Mackenzie, A. (2017) Machine Learners: Archaeology of a Data Practice. MIT Press.