stores the children of each node (as a map from attribute values to node ids)
stores the attribute that each node splits on; can be None for leaf nodes
for each node, stores a map of categories to their frequency of appearance at that node (i.e. how many times a training vector with that category makes it to this node during classification)
All attributes used in the decision tree.
Map versions of the argument parameters (included only because Spray/JSON was not working with maps, for reasons unclear to me).
Classifies an instance.
instance to classify
predicted label
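Classification can be sketched against the indexed-sequence layout as follows. All names here (`classify`, `children`, `splittingAttribute`, `categoryCounts`) are illustrative, not necessarily the real API: the sketch descends from the root until no child covers the instance, then predicts the most frequent category at that node.

```scala
def classify(
    children: IndexedSeq[Map[Int, Int]],         // node id -> (attribute value -> child id)
    splittingAttribute: IndexedSeq[Option[Int]], // None for leaf nodes
    categoryCounts: IndexedSeq[Map[Int, Int]],   // category frequencies per node
    instance: IndexedSeq[Int]                    // integer-valued feature vector
): Int = {
  // descend until no child covers the instance (the "decision point")
  var node = 0
  var next: Option[Int] =
    splittingAttribute(node).flatMap(a => children(node).get(instance(a)))
  while (next.isDefined) {
    node = next.get
    next = splittingAttribute(node).flatMap(a => children(node).get(instance(a)))
  }
  // predict the most frequent category at the decision point
  categoryCounts(node).maxBy(_._2)._1
}

// Example: the root (node 0) splits on attribute 1; value 0 -> node 1, value 1 -> node 2.
val label = classify(
  children = IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty, Map.empty),
  splittingAttribute = IndexedSeq(Some(1), None, None),
  categoryCounts = IndexedSeq(Map(0 -> 5, 1 -> 5), Map(0 -> 5), Map(1 -> 5)),
  instance = IndexedSeq(7, 1, 3)
)
```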
The probability distribution over categories for each node of the decision tree.
If this tree was trained with DecisionTreeTrainer, then the distribution is Laplacian-smoothed assuming one count for each label in the training data.
Gets the probability distribution over labels.
instance to find the label distribution of
probability distribution over labels, according to the training data
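The Laplacian smoothing mentioned above can be sketched as add-one smoothing over a node's category counts, with one pseudo-count for each label seen anywhere in the training data. The names `outcomeDistribution`, `countsAtNode`, and `trainingLabels` are illustrative assumptions, not the actual implementation.

```scala
def outcomeDistribution(
    countsAtNode: Map[Int, Int], // category frequencies recorded at this node
    trainingLabels: Set[Int]     // all labels observed in the training data
): Map[Int, Double] = {
  // add one pseudo-count per training label, then normalize
  val smoothed = trainingLabels.iterator
    .map(label => label -> (countsAtNode.getOrElse(label, 0) + 1.0))
    .toMap
  val total = smoothed.values.sum
  smoothed.map { case (label, count) => (label, count / total) }
}

// A node that saw 3 training vectors of label 0 and none of label 1:
// smoothed counts are (3 + 1) and (0 + 1), so the distribution is (0.8, 0.2).
val dist = outcomeDistribution(Map(0 -> 3), Set(0, 1))
```

The smoothing ensures that no label ever gets zero probability at a node, even if it never reached that node during training.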
Finds the "decision point" of the specified instance. This is the node for which no child covers the instance.
instance to classify
the node the instance is classified into
The most probable category at each node of the decision tree.
Prints the decision tree to stdout.
From a particular node, chooses the correct child according to the instance and its splitting attribute (if there is one).
the id of the node
the instance
the node id of the correct child (if there is one)
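This child-selection step can be sketched as a single lookup; the result is `None` when the node is a leaf (no splitting attribute) or no child matches the instance's value for that attribute. The name `selectChild` and its parameters are illustrative assumptions.

```scala
def selectChild(
    children: IndexedSeq[Map[Int, Int]],         // node id -> (attribute value -> child id)
    splittingAttribute: IndexedSeq[Option[Int]], // None for leaf nodes
    nodeId: Int,
    instance: IndexedSeq[Int]
): Option[Int] =
  splittingAttribute(nodeId).flatMap { attr =>
    // look up the child keyed by the instance's value for the splitting attribute
    children(nodeId).get(instance(attr))
  }

// Node 0 splits on attribute 0; an instance with value 0 there goes to node 1.
val chosen = selectChild(
  children = IndexedSeq(Map(0 -> 1)),
  splittingAttribute = IndexedSeq(Some(0)),
  nodeId = 0,
  instance = IndexedSeq(0)
)
```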
Immutable decision tree for integer-valued features and categories.
Each data structure is an indexed sequence of properties. The ith element of each sequence is the property of node i of the decision tree.
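The indexed-sequence layout described above can be sketched as a small case class; `DecisionTreeSketch` and its field names are illustrative, not the actual implementation's.

```scala
case class DecisionTreeSketch(
    child: IndexedSeq[Map[Int, Int]],            // node i -> (attribute value -> child id)
    splittingAttribute: IndexedSeq[Option[Int]], // node i -> attribute split on (None for leaves)
    categoryCounts: IndexedSeq[Map[Int, Int]]    // node i -> category frequencies
)

// A three-node tree: node 0 splits on attribute 2, with value 0 leading to
// leaf node 1 and value 1 leading to leaf node 2.
val tree = DecisionTreeSketch(
  child = IndexedSeq(Map(0 -> 1, 1 -> 2), Map.empty, Map.empty),
  splittingAttribute = IndexedSeq(Some(2), None, None),
  categoryCounts = IndexedSeq(Map(0 -> 6, 1 -> 4), Map(0 -> 6), Map(1 -> 4))
)
```

Keeping the three sequences in lockstep, indexed by node id, makes the tree cheap to serialize and traverse without any pointer-based node objects.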