Note that this article was submitted to and accepted by KDnuggets, the most popular blog site about machine learning and knowledge discovery.
I have been using Lean Six Sigma (LSS) to improve business processes for the past 10+ years and am very satisfied with its benefits. Recently, I worked with a consulting firm and a software vendor to implement a machine learning (ML) model to predict the remaining useful life (RUL) of service parts. What frustrates me most is the low accuracy of the resulting model. As shown below, if we measure the deviation as the absolute difference between the actual part life and the predicted one, the model shows average deviations of 127, 60, and 36 days for the three selected parts. I could not understand why the deviations from machine learning are so large.
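For reference, the deviation described here corresponds to the mean absolute error (MAE) between actual and predicted part life. Below is a minimal sketch in Python of how such a deviation would be computed; the variable names and sample values are illustrative only and are not taken from the actual project.

```python
import numpy as np

# Hypothetical example: actual vs. predicted remaining useful life, in days
actual_life = np.array([820, 640, 710, 590])      # observed part lifetimes
predicted_life = np.array([700, 560, 820, 640])   # model predictions

# Deviation as described above: mean of the absolute differences
mean_abs_deviation = np.mean(np.abs(actual_life - predicted_life))
print(f"Average deviation: {mean_abs_deviation:.0f} days")
```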
After working with the consultants and data scientists, it appears they can improve the deviation by only 10%. This puzzles me. I thought machine learning was a great new tool that makes forecasting simple and quick, but I did not expect it to produce such large deviations. To me, these deviations, even after the 10% improvement, still render the forecast useless to the business owners.
Read more at How can Lean Six Sigma help Machine Learning?
Leave your comments below and subscribe to get updates in your inbox.