IBM SPSS Decision Trees software includes four established tree-growing algorithms. Find the best fit for your data by trying different algorithms, or let IBM SPSS Decision Trees software suggest the most appropriate algorithm.
CHAID (Chi-squared Automatic Interaction Detection): A statistical multiway tree algorithm that explores data quickly and builds segments and profiles with respect to the desired outcome
Exhaustive CHAID: A modification of CHAID that examines all possible splits for each predictor (independent) variable
C&RT (Classification and Regression Trees): A comprehensive binary tree algorithm that partitions data into increasingly homogeneous subsets
QUEST (Quick, Unbiased, Efficient Statistical Tree): A statistical algorithm that selects variables without bias and builds accurate binary trees quickly and efficiently
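For intuition about how a binary algorithm such as C&RT partitions data, the sketch below searches one predictor for the threshold that leaves the two resulting subsets as homogeneous as possible, measured by Gini impurity. This is an illustrative stand-in written from scratch, not IBM SPSS code; the toy data and function names are invented for the example.

```python
# Minimal illustration of a CART-style binary split search (not IBM SPSS code):
# pick the threshold on one predictor that minimizes weighted Gini impurity.

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Return (threshold, weighted_impurity) of the best binary split on xs."""
    n = len(xs)
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # skip degenerate splits that leave one side empty
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: the outcome separates cleanly at x <= 2.
xs = [1, 2, 3, 4]
ys = ["no", "no", "yes", "yes"]
print(best_split(xs, ys))  # -> (2, 0.0): a perfect split, impurity 0
```

A multiway algorithm like CHAID instead uses chi-squared tests to merge and split predictor categories, so a node can have more than two children.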
As you create classification trees, you can use the results to segment and group cases directly within your data. You can also generate selection, classification and prediction rules in the form of IBM SPSS Statistics syntax, SQL statements or plain text. Display these rules in the viewer and save them to an external file, then apply them later to score new cases.
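The same idea, turning a fitted tree into human-readable rules, can be illustrated outside SPSS with scikit-learn's `export_text` helper, which prints each branch of a fitted tree as an if/else rule. This is an analogy only; the output is not IBM SPSS Statistics syntax or SQL, and the training data here is hypothetical.

```python
# Analogous rule extraction with scikit-learn (illustration only; not the
# IBM SPSS rule format). Requires scikit-learn to be installed.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical toy data: predict "buy" from income (thousands) and age.
X = [[20, 25], [30, 35], [60, 45], [80, 50]]
y = ["no", "no", "yes", "yes"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Render the fitted tree as text rules, one line per branch.
rules = export_text(tree, feature_names=["income", "age"])
print(rules)
```

Saving such rules to a file serves the same purpose the datasheet describes: the rules can be re-applied later to classify cases that were not in the training data.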
The decision tree procedure creates a tree-based classification model. It classifies cases into groups or predicts values of a dependent (target) variable based on values of independent (predictor) variables. The procedure provides validation tools for exploratory and confirmatory classification analyses.
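One common form of the validation the procedure offers is split-sample validation: hold out part of the data, grow the tree on the rest, and check accuracy on the held-out cases. A minimal sketch of that workflow, using scikit-learn as a stand-in for the SPSS procedure and invented data:

```python
# Sketch of split-sample validation for a tree classifier (scikit-learn used
# as a stand-in; the IBM SPSS procedure provides its own validation options).
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: two predictors, a categorical target driven by the first.
X = [[i, i % 5] for i in range(100)]
y = ["high" if i >= 50 else "low" for i in range(100)]

# Hold out 30% of the sample to estimate how well the tree generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"holdout accuracy: {accuracy:.2f}")
```

Scoring on cases the tree never saw guards against reading too much into splits that merely memorize the training sample.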
Use the model results to select cases or assign predictions directly in IBM SPSS Statistics Base software, or export the rules for later use.
Use the highly visual trees to discover relationships that are currently hidden in your data. IBM SPSS Decision Trees diagrams, tables and graphs are easy to interpret.