1. Instances are represented by attribute-value pairs.
“Instances are described by a fixed set of attributes (e.g., Temperature) and their values (e.g., Hot). The easiest situation for decision tree learning is when each attribute takes on a small number of disjoint possible values (e.g., Hot, Mild, Cold). However, extensions to the basic algorithm allow handling real-valued attributes as well (e.g., representing Temperature numerically).”
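For concreteness, a minimal sketch (the data and attribute names are illustrative, echoing the post's Temperature example) of an instance as attribute-value pairs, plus the real-valued variant:

```python
# An instance described by a fixed set of attributes and their values.
instance = {
    "Outlook": "Sunny",
    "Temperature": "Hot",   # small, disjoint set of values: Hot, Mild, Cold
    "Humidity": "High",
    "Wind": "Weak",
}

# The real-valued extension replaces a discrete value with a number;
# the learner then searches for a threshold split such as Temperature >= 25.
numeric_instance = {**instance, "Temperature": 29.4}
```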
2. The target function has discrete output values.
“Decision tree learning is typically applied to Boolean classification tasks (e.g., yes or no). Decision tree methods easily extend to learning functions with more than two possible output values. A more substantial extension allows learning target functions with real-valued outputs, though the application of decision trees in this setting is less common.”
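To make the three kinds of targets concrete, here is a short sketch using scikit-learn (our library choice, not the post's) on toy data:

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy, already-encoded attributes

DecisionTreeClassifier().fit(X, [0, 1, 1, 0])           # Boolean target
DecisionTreeClassifier().fit(X, ["a", "b", "c", "a"])   # more than two output values
DecisionTreeRegressor().fit(X, [0.1, 0.9, 0.8, 0.2])    # real-valued target (regression tree)
```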
3. Disjunctive descriptions may be required.
Decision trees naturally represent disjunctive expressions: each path from the root to a positive leaf encodes a conjunction of attribute tests, and the tree as a whole is the disjunction of those conjunctions (see the sketch below).
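A minimal sketch, assuming a PlayTennis-style target (the attribute names are illustrative, echoing the post's Humidity example), of a tree that encodes (Outlook = Sunny AND Humidity = Normal) OR (Outlook = Overcast):

```python
# Each root-to-"yes" path is one conjunct; together the paths form the OR.
def play_tennis(outlook: str, humidity: str) -> str:
    if outlook == "Sunny":             # conjunct: Sunny AND Normal
        return "yes" if humidity == "Normal" else "no"
    if outlook == "Overcast":          # conjunct with a single test
        return "yes"
    return "no"                        # e.g., Outlook = Rain (further tests omitted)

assert play_tennis("Sunny", "Normal") == "yes"
assert play_tennis("Overcast", "High") == "yes"
assert play_tennis("Rain", "Normal") == "no"
```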
4. The training data may contain errors.
“Decision tree learning methods are robust to errors, both errors in classifications of the training examples and errors in the attribute values that describe these examples.”
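The usual mechanism behind this robustness is post-pruning, which cuts back branches that merely fit noise. A hedged sketch using scikit-learn's cost-complexity pruning (the library, data, and noise rate are our illustrative choices, not the post's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] > 0.5).astype(int)
flip = rng.random(200) < 0.10            # corrupt ~10% of the class labels
y_noisy = np.where(flip, 1 - y, y)

unpruned = DecisionTreeClassifier(random_state=0).fit(X, y_noisy)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y_noisy)

# The pruned tree is far smaller, so it is less able to memorize flipped labels.
print(unpruned.get_n_leaves(), pruned.get_n_leaves())
```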
5. The training data may contain missing attribute values.
“Decision tree methods can be used even when some training examples have unknown values (e.g., if the Humidity of the day is known for only some of the training examples).”
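One classic strategy is to substitute the most common observed value of the attribute among the training examples. A minimal sketch with made-up data and a hypothetical helper name:

```python
from collections import Counter

def impute_most_common(examples, attribute):
    """Replace unknown (None) values of `attribute` with its most common value."""
    known = [ex[attribute] for ex in examples if ex[attribute] is not None]
    most_common = Counter(known).most_common(1)[0][0]
    for ex in examples:
        if ex[attribute] is None:
            ex[attribute] = most_common
    return examples

days = [{"Humidity": "High"}, {"Humidity": "High"}, {"Humidity": None}]
print(impute_most_common(days, "Humidity"))  # the unknown Humidity becomes "High"
```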