Neural Network Ensembles
• Neural network drawback
• Neural networks are nonlinear models with high variance: the same network trained on the same data can produce different predictions on each run.
• Training Data
• Vary the choice of data used to train each model in the ensemble.
• Ensemble Models
• Vary the choice of models used in the ensemble.
• Combinations
• Vary the way that predictions from ensemble members are combined.
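The simplest combination is to average the members' predicted class probabilities and take the most probable class. A minimal sketch, assuming each member exposes an (n_samples, n_classes) probability array (the member outputs below are made-up illustrative values):

```python
import numpy as np

def average_ensemble(member_probs):
    """Combine class-probability predictions from ensemble members
    by averaging, then pick the highest-probability class."""
    # member_probs: list of (n_samples, n_classes) arrays, one per member
    mean_probs = np.mean(member_probs, axis=0)
    return np.argmax(mean_probs, axis=1)

# three hypothetical members predicting 2 samples over 3 classes
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.4, 0.4, 0.2], [0.1, 0.7, 0.2]])
p3 = np.array([[0.5, 0.2, 0.3], [0.3, 0.3, 0.4]])
labels = average_ensemble([p1, p2, p3])  # class per sample
```

Averaging probabilities (a "soft vote") usually behaves better than majority voting on hard labels, because it keeps each member's confidence.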
Varying Training Data
1. K-fold split
2. Bootstrap sampling
3. Model stacking
Varying Training Data (Chapter 22)
– k-fold Cross-Validation Ensemble.
– Bootstrap Aggregation (bagging) Ensemble.
– Random Training Subset Ensemble.
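The bootstrap and random-subset variants above can be sketched with plain NumPy resampling; each draw would train one ensemble member (the k-fold variant instead trains one member per fold's training split). Function names here are illustrative, not from the source:

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Bagging draw: sample len(X) rows WITH replacement; roughly
    63% of the original rows appear at least once in each draw."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

def random_subset(X, y, frac, rng):
    """Random training subset: sample a fraction of rows WITHOUT
    replacement, so each member sees a smaller, unique-row set."""
    n = int(len(X) * frac)
    idx = rng.choice(len(X), size=n, replace=False)
    return X[idx], y[idx]

rng = np.random.default_rng(1)
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
Xb, yb = bootstrap_sample(X, y, rng)       # same size, repeats allowed
Xs, ys = random_subset(X, y, 0.5, rng)     # half size, no repeats
```

Each member is then trained on its own draw, and their predictions are combined as in the averaging sketch.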
Sensitivity of Variance
Chapter 21
Contribute Proportional to Trust with Weighted Average Ensemble
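"Contribute proportional to trust" means each member's prediction is weighted by how much we trust it, commonly its accuracy on a hold-out set. A minimal sketch (the weights below stand in for hypothetical validation accuracies):

```python
import numpy as np

def weighted_average_ensemble(member_probs, weights):
    """Combine member class probabilities with per-member weights,
    e.g. weights proportional to hold-out accuracy."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so weights sum to 1
    stacked = np.stack(member_probs)     # (n_members, n_samples, n_classes)
    mean = np.tensordot(w, stacked, axes=1)
    return np.argmax(mean, axis=1)

# two hypothetical members; the first is trusted four times as much
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.2, 0.8], [0.3, 0.7]])
labels = weighted_average_ensemble([p1, p2], weights=[0.8, 0.2])
```

With equal weights this reduces to the plain model averaging ensemble; in practice the weights can also be tuned by grid search or optimization on the hold-out set.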
Random Split
Bootstrap Aggregation
[Diagram: one model saved per training epoch — Epoch(N) → Model, Epoch(N+2) → Model]
Chapter 24
Learn to Combine Predictions with Stacked Generalization Ensemble
Snapshot Ensembles
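Snapshot ensembles collect several members from a single training run by cycling the learning rate: within each cycle the rate anneals from its maximum toward zero, and a model snapshot is saved at the cycle's end. A sketch of the cosine-annealed cyclic schedule (an assumption based on the usual snapshot-ensemble recipe, not code from the source):

```python
import math

def snapshot_lr(epoch, total_epochs, n_cycles, lr_max):
    """Cosine-annealed cyclic learning rate: restarts at lr_max at the
    start of each cycle and decays to near zero by the cycle's end,
    where a model snapshot would be saved."""
    epochs_per_cycle = total_epochs // n_cycles
    pos = epoch % epochs_per_cycle       # position inside current cycle
    return lr_max / 2 * (math.cos(math.pi * pos / epochs_per_cycle) + 1)

# 100 epochs, 5 cycles: the rate restarts at 0.1 every 20 epochs
rates = [snapshot_lr(e, 100, 5, 0.1) for e in range(100)]
```

The snapshots taken at each cycle's low point sit in different local minima, so their predictions are diverse enough to combine by averaging, at the cost of only one training run.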
Stacked Model
• Alternate Meta-Learner
• Single Level 0 Model
• Vary Level 0 Models
• Cross-Validation Stacking Ensemble
• Use Raw Input in Meta-Learner
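In stacked generalization, the level-0 members' outputs become input features for a level-1 meta-learner that learns how to combine them. A minimal sketch using a linear least-squares fit as a stand-in meta-learner (a real stack would more typically use logistic regression; all data below is made up):

```python
import numpy as np

def fit_meta_learner(member_scores, y_onehot):
    """Level 1: fit a linear meta-model on the stacked level-0 outputs.
    Least squares stands in here for a proper meta-learner."""
    X = np.hstack(member_scores)         # (n_samples, n_members * n_classes)
    coef, *_ = np.linalg.lstsq(X, y_onehot, rcond=None)
    return coef

def meta_predict(member_scores, coef):
    X = np.hstack(member_scores)
    return np.argmax(X @ coef, axis=1)

# hypothetical level-0 output scores for 4 hold-out samples, 2 classes
p1 = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.3], [0.2, 0.9]])
p2 = np.array([[0.6, 0.5], [0.4, 0.3], [0.5, 0.6], [0.3, 0.4]])
y_onehot = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])

coef = fit_meta_learner([p1, p2], y_onehot)
preds = meta_predict([p1, p2], coef)
```

The "use raw input in meta-learner" variant from the list above simply appends the original features to the stacked member outputs, e.g. `np.hstack([X_raw] + member_scores)`. The meta-learner must be fit on predictions for data the level-0 models did not train on (hence the cross-validation stacking variant), or it learns to trust overfit members.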
Chapter 26
Combine Model Parameters with Average Model Weights Ensemble
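Unlike the ensembles above, which combine predictions, this approach combines the model parameters themselves: the weights from several checkpoints (e.g. the last n epochs of one run) are averaged into a single final model. A minimal sketch, treating each checkpoint as a list of NumPy arrays (layer weights); the checkpoint values are illustrative:

```python
import numpy as np

def average_weights(checkpoints, weights=None):
    """Average the parameters of several checkpoints into one set of
    weights; equal weighting is shown, a linear decay favoring recent
    checkpoints is a common alternative."""
    if weights is None:
        weights = [1.0 / len(checkpoints)] * len(checkpoints)
    avg = [np.zeros_like(layer) for layer in checkpoints[0]]
    for w, ckpt in zip(weights, checkpoints):
        for i, layer in enumerate(ckpt):
            avg[i] = avg[i] + w * layer
    return avg

# three hypothetical checkpoints, each one 2x2 weight matrix + bias
c1 = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.0, 1.0])]
c2 = [np.array([[2.0, 3.0], [4.0, 5.0]]), np.array([1.0, 2.0])]
c3 = [np.array([[3.0, 4.0], [5.0, 6.0]]), np.array([2.0, 3.0])]
avg = average_weights([c1, c2, c3])
```

The payoff is that inference costs a single forward pass instead of one pass per ensemble member; the averaged checkpoints must come from the same run (or otherwise share a parameterization) for layer-wise averaging to be meaningful.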