SVMs can classify both linear and nonlinear data. (1) SVMs work by transforming the training dataset into a higher dimension, which is then inspected for the optimal separating boundary, or boundaries, between classes. In SVMs, these boundaries are called hyperplanes. A hyperplane is identified by locating its support vectors, the instances that most essentially define the classes, and its margins, the lines parallel to the hyperplane at the shortest distance between the hyperplane and its support vectors.
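The idea of a maximum-margin boundary can be sketched with a minimal linear SVM trained by hinge-loss subgradient descent. This is an illustrative toy, not the algorithm from the cited article: the dataset, learning rate, epoch count, and regularization strength are all assumed values chosen so the example runs quickly.

```python
# Minimal sketch: a linear SVM fit with hinge-loss subgradient descent
# on a tiny, linearly separable 2-D toy dataset. All hyperparameters
# here are illustrative assumptions, not values from the source text.

def train_linear_svm(X, y, lam=0.01, epochs=2000, lr=0.01):
    """Fit weights w and bias b by minimizing hinge loss + L2 penalty."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (w[0] * xi[0] + w[1] * xi[1] + b)
            if margin < 1:
                # Point is misclassified or inside the margin:
                # step toward it while shrinking the weights.
                w[0] += lr * (yi * xi[0] - lam * w[0])
                w[1] += lr * (yi * xi[1] - lam * w[1])
                b += lr * yi
            else:
                # Point is safely outside the margin: only regularize.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, x):
    """Classify a point by which side of the hyperplane it falls on."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Two well-separated clusters (toy data).
X = [(1, 1), (2, 1), (1, 2), (5, 5), (6, 5), (5, 6)]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

The points whose margin constraint stays active during training play the role of the support vectors: they alone determine where the final hyperplane sits.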

The grand idea with SVMs is that, with a high enough number of dimensions, a hyperplane separating two classes can always be found, thereby delineating the dataset's member classes. Repeated a sufficient number of times, this process generates enough hyperplanes to separate all classes in n-dimensional space.
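The "higher dimension" idea can be illustrated with a tiny worked example. The 1-D dataset and the quadratic feature map below are assumptions made for illustration: the classes alternate along the line, so no single threshold separates them, but after lifting each point x to (x, x²) a straight line does.

```python
# Hedged sketch of lifting data into a higher dimension so a linear
# boundary exists. Dataset and feature map are illustrative choices.

def feature_map(x):
    """Lift a 1-D point into 2-D, where a linear boundary can exist."""
    return (x, x * x)

# 1-D dataset: class +1 near the origin, class -1 far from it.
# No single cut point on the line separates these two classes.
points = [-3, -2, 2, 3, -0.5, 0.0, 0.5]
labels = [-1, -1, -1, -1, 1, 1, 1]

# In the lifted space, the horizontal line x2 = 2 separates the classes:
# every +1 point has x**2 < 2 and every -1 point has x**2 > 2.
lifted = [feature_map(x) for x in points]
preds = [1 if x2 < 2 else -1 for (_, x2) in lifted]
print(preds)
```

Real SVM implementations get this effect without computing the lifted coordinates explicitly, via kernel functions, but the geometric intuition is the same.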

**Sources**

Mayo, Matthew. “Machine Learning Key Terms, Explained.” KDnuggets, 2016, https://www.kdnuggets.com/2016/05/machine-learning-key-terms-explained.html (1)