
Ensemble Learning for AI Developers

52,99 €

Available immediately, delivery time: immediate delivery

Select format

Ensemble Learning for AI Developers, Apress
Learn Bagging, Stacking, and Boosting Methods with Use Cases
By Alok Kumar, Mayank Jain, available in digital form in the heise Shop

Product information "Ensemble Learning for AI Developers"

Use ensemble learning techniques and models to improve your machine learning results.

ENSEMBLE LEARNING FOR AI DEVELOPERS starts you at the beginning with a historical overview and explains key ensemble techniques and why they are needed. You will then learn how to change training data using bagging (bootstrap aggregating), random forest models, and cross-validation methods. Authors Kumar and Jain provide best practices to guide you in combining models and using tools to boost the performance of your machine learning projects. They teach you how to effectively implement ensemble concepts such as stacking and boosting and how to utilize popular libraries such as Keras, Scikit-Learn, TensorFlow, PyTorch, and Microsoft LightGBM. Tips are presented for applying ensemble learning to different data science problems, including time series data, imaging data, and NLP. Recent advances in ensemble learning are discussed. Sample code is provided in the form of scripts and IPython notebooks.
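
As a quick, hedged illustration of the core idea the book teaches, namely combining models to improve predictions, the following minimal scikit-learn sketch (not code from the book; the models and synthetic dataset are illustrative choices) compares three base classifiers with a soft-voting ensemble:

```python
# Minimal illustration (not from the book): combining three classifiers with
# soft voting and comparing them against the individual models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic dataset, used only to keep the example self-contained.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base_models = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = VotingClassifier(estimators=base_models, voting="soft")

# Cross-validated accuracy of each base model versus the voting ensemble.
for name, model in base_models + [("voting", ensemble)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```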

WHAT YOU WILL LEARN

* Understand the techniques and methods utilized in ensemble learning
* Use bagging, stacking, and boosting to improve performance of your machine learning projects by combining models to decrease variance, improve predictions, and reduce bias
* Enhance your machine learning architecture with ensemble learning



WHO THIS BOOK IS FOR

Data scientists and machine learning engineers keen on exploring ensemble learning

ALOK KUMAR is an AI practitioner and innovation lead at Publicis Sapient. He has extensive experience in leading strategic initiatives and driving cutting-edge, fast-paced innovations. He has won several awards and is passionate about democratizing AI knowledge. He manages multiple non-profit learning and creative groups in NCR.

MAYANK JAIN currently works as a Technology Manager and AI/ML expert at the Publicis Sapient Innovation Lab Kepler. He has more than 10 years of industry experience working on cutting-edge projects to make computers see and think using techniques such as deep learning, machine learning, and computer vision. He has written several international publications, holds patents in his name, and has been awarded multiple times for his contributions.

Chapter 1: An Introduction to Ensemble Learning

Chapter Goal: This chapter will give you a brief overview of ensemble learning

No of pages: 10

Sub-Topics:

* Need for ensemble techniques in machine learning
* Historical overview of ensemble learning
* A brief overview of various ensemble techniques



Chapter 2: Varying Training Data

Chapter Goal: In this chapter we will talk in detail about ensemble techniques where the training data is changed. (A short illustrative code sketch follows the subtopic list.)

No of pages: 30

Sub-Topics:

* Use of bagging, or bootstrap aggregating, for making ensemble models
* Code samples
* Popular library support for bagging, and best practices
* Introduction to random forest models
* Hands-on code examples for using random forest models
* Introduction to cross-validation methods in machine learning
* Introduction to K-fold cross-validation ensembles, with code samples
* Other examples of varying-data ensemble techniques
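
As a hedged sketch of these Chapter 2 topics (not the book's own code; the dataset and parameters are illustrative), bagging, a random forest, and K-fold cross-validation can all be exercised in a few lines of scikit-learn:

```python
# Illustrative sketch: bagging, a random forest, and K-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Bagging (bootstrap aggregating): models fit on bootstrap samples of the data
# (BaggingClassifier uses a decision tree as its default base estimator).
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Random forest: bagging of trees plus random feature subsets at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# K-fold cross-validation estimates how well each ensemble generalizes.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("bagging", bagging), ("random forest", forest)]:
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```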

Chapter 3: Varying Combinations

Chapter Goal: In this chapter we will talk in detail about techniques where models are used in combination with one another to get an ensemble learning boost. (A short illustrative code sketch follows the subtopic list.)

No of pages: 40

Sub-Topics:

* Boosting: We will talk in detail about various boosting techniques, with historical examples
* Introduction to AdaBoost, with code examples, industry best practices, and useful state-of-the-art libraries for AdaBoost
* Introduction to gradient boosting, with hands-on code examples, useful libraries, and industry best practices for gradient boosting
* Introduction to XGBoost, with hands-on code examples, useful libraries, and industry best practices for XGBoost
* Stacking: We will talk in detail about how various stacking techniques are used in the machine learning world
* Stacking in practice: How stacking is used by Kagglers to improve winning entries
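
As a hedged sketch of the Chapter 3 topics (not the book's code; models and data are illustrative), AdaBoost, gradient boosting, and stacking are available directly in scikit-learn, and XGBoost exposes a similar scikit-learn-style estimator (XGBClassifier) if installed:

```python
# Illustrative sketch: boosting (AdaBoost, gradient boosting) and stacking.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "adaboost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    # Stacking: base models' predictions become the features of a final model.
    "stacking": StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```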

Chapter 4: Varying Models

Chapter Goal: In this chapter we will talk about how ensemble learning models can lead to better performance in your machine learning projects. (A short illustrative code sketch follows the subtopic list.)

No of pages: 30

Sub-Topics:

* Training multiple-model ensembles, with code examples
* Hyperparameter tuning ensembles, with code examples
* Horizontal voting ensembles
* Snapshot ensembles and their variants; introduction to the cyclic learning rate
* Code examples
* Use of ensembles in the deep learning world
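
As a hedged sketch of the "varying models" idea in this chapter (not the book's code; the deep-learning variants such as snapshot and horizontal voting ensembles apply the same averaging to checkpoints of a single training run), the probabilities of models trained with different hyperparameters and seeds can simply be averaged:

```python
# Illustrative sketch: average the predicted probabilities of several models
# trained with different hyperparameters and random seeds (soft voting).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Vary hyperparameters and seeds; every member sees the same training data.
members = [
    GradientBoostingClassifier(max_depth=depth, random_state=seed).fit(X_train, y_train)
    for depth, seed in [(2, 0), (3, 1), (4, 2)]
]

# Ensemble prediction: mean class probability across the members.
mean_proba = np.mean([m.predict_proba(X_test) for m in members], axis=0)
ensemble_accuracy = (mean_proba.argmax(axis=1) == y_test).mean()

print(f"ensemble accuracy: {ensemble_accuracy:.3f}")
for i, member in enumerate(members):
    print(f"member {i} accuracy: {member.score(X_test, y_test):.3f}")
```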

Chapter 5: Ensemble Learning Libraries and How to Use Them

Chapter Goal: In this chapter we will go into detail about some very popular libraries used by data science practitioners and Kagglers for ensemble learning. (A short illustrative code sketch follows the subtopic list.)

No of pages: 25

Sub-Topics:

* Ensembles in Scikit-Learn
* Learning how to use ensembles in TensorFlow
* Implementing and using ensembles in PyTorch
* Boosting using Microsoft LightGBM
* Boosting using XGBoost
* Stacking using the H2O library
* Ensembles in R
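
As a hedged sketch of how two of the libraries listed above are typically used (not the book's code; it assumes the lightgbm and xgboost packages are installed), both LightGBM and XGBoost provide scikit-learn-compatible estimators, so swapping boosting libraries largely means swapping the class:

```python
# Illustrative sketch: LightGBM and XGBoost via their scikit-learn-style APIs.
# Assumes `pip install lightgbm xgboost`.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for name, model in [
    ("LightGBM", LGBMClassifier(n_estimators=200, random_state=0)),
    ("XGBoost", XGBClassifier(n_estimators=200, random_state=0)),
]:
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```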

Chapter 6: Tips and Best Practices

Chapter Goal: In this chapter we will learn about best practices around ensemble learning, with real-world examples. (A short illustrative code sketch follows the subtopic list.)

No of pages: 25

Sub-Topics:

* How to build a state-of-the-art image classifier using ensembles
* How to use ensembles in NLP, with real-world examples
* Use of ensembles for structured data analysis
* Using ensembles for time series data
* Useful tips and pitfalls
* How to leverage ensemble learning in Kaggle competitions
* Useful examples and case studies
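
One practice this chapter touches on, popular in Kaggle competitions, is blending: taking a weighted average of different models' predicted probabilities. A hedged, minimal sketch (not the book's code; the weights and models are illustrative and would normally be chosen on a validation set):

```python
# Illustrative sketch: blend two models with a weighted probability average.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Blend: 70% tree model, 30% linear model (illustrative weights only).
blended = 0.7 * rf.predict_proba(X_valid) + 0.3 * lr.predict_proba(X_valid)
accuracy = (blended.argmax(axis=1) == y_valid).mean()
print(f"blended accuracy: {accuracy:.3f}")
```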

Chapter 7 : The Path Forward

Chapter Goal: In this chapter we will cover recent advances in ensemble learning.

No of pages: 10

Sub-Topics:

* Recent trends and research in ensembles
* Use of ensembles in memory-constrained environments
* Use of ensembles with an eye on efficiency
* Useful resources

Article details

Provider:
Apress
Author:
Alok Kumar, Mayank Jain
Article number:
9781484259405
Published:
18.06.20