Tired of spending long hours and compute resources searching for the best hyper-parameters? In this talk you will learn how to take your optimization to the next level with XGBoost and Optuna, using a case study with a highly imbalanced dataset.
Optimizing machine learning models is a challenging task that can greatly impact their performance. Traditional methods such as grid search are time-consuming and often insufficient for finding the best set of hyper-parameters. XGBoost and Optuna are libraries that enable more efficient and effective optimization of machine learning models. In this talk, we will discuss how to use these libraries to push your XGBoost models to peak performance on a real-life, highly imbalanced dataset, going beyond the simple datasets often found in tutorials. We will walk through an end-to-end ML pipeline and solve a challenging scenario: predicting customer renewals from product usage data.
William has worked in different roles and positions over the last decade, bringing experience from Intel, Oracle, Broadcom, the Czech University of Economics, and GitLab, where he has participated in numerous projects involving software, hardware design, education, and innovation across different industries. He leverages his experience as an engineer and educator to create proofs of concept and actively shares knowledge as a public speaker at open source and technology events worldwide.