Hyperparameter Optimization in Machine Learning
€56.99
Available now; delivery time: immediate
Hyperparameter Optimization in Machine Learning, Apress
Make Your Machine Learning and Deep Learning Models More Efficient
By Tanay Agrawal, available in digital form from the heise Shop
Product information: "Hyperparameter Optimization in Machine Learning"
Dive into hyperparameter tuning of machine learning models and focus on what hyperparameters are and how they work. This book discusses different techniques of hyperparameter tuning, from the basics to advanced methods.
This is a step-by-step guide to hyperparameter optimization, starting with what hyperparameters are and how they affect different aspects of machine learning models. It then goes through some basic (brute-force) algorithms of hyperparameter optimization. Further, the author addresses the problem of time and memory constraints using distributed optimization methods. Next, you'll learn about Bayesian optimization for hyperparameter search, which learns from its previous history.
The book discusses different frameworks, such as Hyperopt and Optuna, which implement sequential model-based global optimization (SMBO) algorithms. During these discussions, you'll focus on different aspects, such as the creation of search spaces and distributed optimization with these libraries.
Hyperparameter Optimization in Machine Learning creates an understanding of how these algorithms work and how you can use them in real-life data science problems. The final chapter summarizes the role of hyperparameter optimization in automated machine learning and ends with a tutorial for creating your own AutoML script.
Hyperparameter optimization is a tedious task, so sit back and let these algorithms do the work for you.
WHAT YOU WILL LEARN
* Discover how changes in hyperparameters affect the model's performance
* Apply different hyperparameter tuning algorithms to data science problems
* Work with Bayesian optimization methods to create efficient machine learning and deep learning models
* Distribute hyperparameter optimization using a cluster of machines
* Approach automated machine learning using hyperparameter optimization
WHO THIS BOOK IS FOR
Professionals and students working with machine learning.
Tanay is a deep learning engineer and researcher who received his Bachelor of Technology degree from SMVDU, Jammu and Kashmir, in 2019. He is currently working at Curl Hg on SARA, an OCR platform. He is also an advisor to Witooth Dental Services and Technologies. He started his career at MateLabs, working on Mateverse, an AutoML platform. He has worked extensively on hyperparameter optimization and has delivered talks on the topic at conferences including PyData Delhi and PyCon India.
Chapter 1: Hyperparameters
Chapter Goal: To introduce what hyperparameters are and how they affect model training. Also gives an intuition of how hyperparameters affect common machine learning algorithms and what values to choose for a given training dataset. An illustrative sketch follows the sub-topics below.
Sub-Topics:
1. Introduction to hyperparameters
2. Why we need to tune hyperparameters
3. Specific algorithms and their hyperparameters
4. Cheat sheet for choosing hyperparameters of specific algorithms
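To make the chapter's premise concrete, here is a minimal sketch, using scikit-learn (the dataset and the max_depth values are illustrative choices, not the book's own example), of how a single hyperparameter changes a model's cross-validated performance:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Same algorithm, three settings of one hyperparameter: deeper trees
    # fit the training data more closely but risk overfitting.
    for max_depth in (1, 5, 20):
        model = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"max_depth={max_depth}: mean CV accuracy = {score:.3f}")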
Chapter 2: Brute Force Hyperparameter Tuning
Chapter Goal: To understand the commonly used classical hyperparameter tuning methods, implement them from scratch, and apply them with the scikit-learn library. A minimal example follows the sub-topics below.
Sub-Topics:
1. Hyperparameter tuning
2. Exhaustive hyperparameter tuning methods
3. Grid search
4. Random search
5. Evaluation of models while tuning hyperparameters
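As a taste of the exhaustive methods covered here, a grid search with scikit-learn might look like the following minimal sketch (the estimator and parameter grid are illustrative assumptions):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Grid search exhaustively cross-validates every combination in the
    # grid and keeps the best-scoring one; random search would instead
    # sample a fixed number of combinations from the same space.
    param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)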
Chapter 3: Distributed Hyperparameter Optimization
Chapter Goal: To handle bigger datasets and large numbers of hyperparameters with continuous search spaces, using distributed algorithms and distributed hyperparameter optimization methods built on the Dask library. A minimal sketch follows the sub-topics below.
Sub-Topics:
1. Why we need distributed tuning
2. Dask DataFrames
3. IncrementalSearchCV
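A minimal sketch of this approach, assuming the dask-ml library and a local Dask cluster (the estimator and parameter range are illustrative):

    import numpy as np
    from dask.distributed import Client
    from dask_ml.datasets import make_classification
    from dask_ml.model_selection import IncrementalSearchCV
    from sklearn.linear_model import SGDClassifier

    client = Client()  # local cluster; point at a scheduler address to scale out

    # A Dask array chunked so the data never has to fit in memory at once.
    X, y = make_classification(n_samples=100_000, chunks=10_000, random_state=0)

    # IncrementalSearchCV trains candidates via partial_fit on successive
    # chunks and adaptively stops the poorly performing ones early.
    params = {"alpha": np.logspace(-5, -1, 10)}
    search = IncrementalSearchCV(SGDClassifier(), params, n_initial_parameters=10)
    search.fit(X, y, classes=[0, 1])
    print(search.best_params_)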
Chapter 4: Sequential Model-Based Global Optimization and Its Hierarchical Methods
Chapter Goal: A detailed theoretical chapter about SMBO methods, which use Bayesian techniques to optimize hyperparameters. Unlike grid search or random search, they learn from their previous iterations. An illustrative SMBO loop follows the sub-topics below.
Sub-Topics:
1. Sequential Model-Based Global Optimization
2. Gaussian process approach
3. Tree-structured Parzen Estimator (TPE)
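The core SMBO loop is compact enough to sketch. The following illustrative Python version uses a Gaussian-process surrogate with a simple lower-confidence-bound acquisition, rather than the expected-improvement and TPE variants the chapter treats properly; the toy objective and all constants are assumptions:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def objective(x):
        # Toy stand-in for an expensive cross-validation loss.
        return np.sin(3 * x) + 0.1 * x ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(3, 1))  # a few random initial evaluations
    y = objective(X).ravel()

    for _ in range(20):
        gp = GaussianProcessRegressor().fit(X, y)   # 1. fit surrogate to history
        cand = rng.uniform(-3, 3, size=(500, 1))    # 2. propose candidate points
        mu, sigma = gp.predict(cand, return_std=True)
        # 3. evaluate the candidate with the best optimistic estimate
        x_next = cand[np.argmin(mu - 1.96 * sigma)].reshape(1, -1)
        X = np.vstack([X, x_next])                  # 4. extend the history
        y = np.append(y, objective(x_next).ravel())

    print("best x:", X[np.argmin(y)].item(), "best value:", y.min())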
Chapter 5: Using HyperOpt
Chapter Goal: A chapter focusing on the Hyperopt library, which implements the TPE algorithm discussed in the previous chapter. The goal is to use TPE to optimize hyperparameters and to show the reader how it improves on the earlier methods. MongoDB is used to parallelize the evaluations. Hyperopt-Sklearn and Hyperas are discussed with examples. A minimal Hyperopt example follows the sub-topics below.
Sub-Topics:
1. Defining an objective function
2. Creating a search space
3. Running Hyperopt
4. Using MongoDB Trials to make parallel evaluations
5. Hyperopt-Sklearn
6. Hyperas
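A minimal sketch of that workflow with the Hyperopt API (the objective is a toy function; the rest follows the library's documented usage):

    from hyperopt import fmin, tpe, hp, Trials

    # Hyperopt minimizes the value returned by the objective.
    def objective(params):
        x = params["x"]
        return (x - 2) ** 2

    space = {"x": hp.uniform("x", -10, 10)}  # a continuous search space

    # Trials records every evaluation; MongoTrials (backed by MongoDB)
    # drops in here to distribute evaluations across worker processes.
    trials = Trials()
    best = fmin(fn=objective, space=space, algo=tpe.suggest,
                max_evals=100, trials=trials)
    print(best)  # e.g. {'x': 2.0003}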
Chapter 6: Hyperparameter Generating Conditional Generative Adversarial Neural Networks (HG-cGANs) and So Forth
Chapter Goal: This chapter is based on a hypothesis: given certain properties of a dataset, one can train neural networks on metadata and generate hyperparameters for new datasets. It also summarizes how these newer methods of hyperparameter tuning can help AI develop further.
Sub-Topics:
1. Generating Metadata
2. Training HG-cGANs
3. AI and hyperparameter tuning
Product Details
- Publisher: Apress
- Author: Tanay Agrawal
- Item number: 9781484265796
- Published: November 28, 2020