Utilize the real power of AI and ML by choosing the right algorithm

By – Vikrant Shimpi

(Associate Consultant AI.WorkloadManagement | Digitate)

Applying theory in practice presents several ‘rubber meets the road’ challenges. In the world of machine learning, we routinely face questions such as: Which algorithm should we use? How do we tune its parameters? How do we learn and adapt to changes? The answers to these questions have a significant impact on the effectiveness of algorithms. Each dataset has its own characteristics such as noise, trend, bias, and variance, and no single algorithm works best on all datasets.

The traditional method involves running all relevant algorithms and picking the one with the minimum error. This can be costly in both compute and time. The problem becomes even more challenging because most systems keep evolving with business and technology changes. There is a need for an Algorithm of Algorithms (AoA) suite that automatically selects the right algorithm, with self-learning and self-tuned parameters, to give the best results over a period of time.

Let us take the example of forecasting, where the objective is to take a historical time-series as input and predict future values. Business needs for forecasting include inventory planning, product strategy planning, creating sales pipelines, and so on. The literature offers many forecasting algorithms, and different algorithms work well in different situations. Selecting the right forecasting algorithm depends heavily on the properties of the time-series, such as seasonality, trend, cycle, randomness, number of observations, inter-arrival time, coefficient of variation and, most importantly, the forecasting horizon. Choosing the correct algorithm from such a wide variety is thus a challenge.

Different approaches for algorithm selection:

Given that plenty of algorithms are available for a defined problem, it becomes important, and challenging, to select the best one from a set of candidates. Algorithm selection may also refer to the problem of selecting a few representative algorithms from a large set of computational algorithms for the purpose of decision-making or optimization under uncertainty. When all candidate algorithms have similar predictive or explanatory power, the simplest algorithm is most likely the best choice. Below we discuss a few techniques that can help choose the best algorithm:

1. Based on rules of data types of outcome variables:

This can be the first criterion for algorithm selection. Let us take the example of regression models; a minimal selection sketch in code follows the list below.

  • When the data type is ‘continuous’, linear regression models are best suited
  • When the data type is ‘binary’ or ‘categorical’, we use logistic, probit, or multinomial models
  • When the data type is ‘binomial’, we use binomial or logistic regression
  • When the data type is ‘count’, we use Poisson or negative binomial regression
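
Here is a minimal sketch of such rule-based selection, using the open-source statsmodels library. The mapping and the helper name select_model are illustrative assumptions, not a fixed standard:

```python
# Illustrative rule-based model selection by outcome data type.
# The mapping and the helper name are assumptions, not a fixed standard.
import statsmodels.api as sm

def select_model(y, X, outcome_type):
    """Return an unfitted statsmodels model matched to the outcome type."""
    X = sm.add_constant(X)  # add an intercept column
    if outcome_type == "continuous":
        return sm.OLS(y, X)              # linear regression
    if outcome_type == "binary":
        return sm.Logit(y, X)            # logistic regression (or sm.Probit)
    if outcome_type == "categorical":
        return sm.MNLogit(y, X)          # multinomial logit
    if outcome_type == "count":
        return sm.Poisson(y, X)          # Poisson (or sm.NegativeBinomial)
    raise ValueError(f"unsupported outcome type: {outcome_type}")

# Usage: fitted = select_model(y, X, "count").fit(); print(fitted.summary())
```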

2. Based on rules of data properties:

Data properties play an important role in algorithm selection. Here, the first step is to extract these properties from the raw data. In the forecasting example, regression models work best when only a trend is present, whereas smoothing models work best when we observe both trend and seasonality in the data.
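
As a hedged sketch, one way to extract such properties is to decompose the series and measure trend and seasonality strength. The 0.5 thresholds and the choose_forecaster helper below are illustrative assumptions, not a prescribed rule set:

```python
# Illustrative property-based selection for forecasting. The thresholds
# and the helper name are assumptions made for the sake of the example.
import pandas as pd
from statsmodels.tsa.seasonal import STL

def choose_forecaster(series: pd.Series, period: int) -> str:
    res = STL(series, period=period).fit()
    # Trend/seasonality strength in [0, 1], following Hyndman & Athanasopoulos
    trend_strength = max(
        0.0, 1 - res.resid.var() / (res.trend + res.resid).var())
    seasonal_strength = max(
        0.0, 1 - res.resid.var() / (res.seasonal + res.resid).var())
    if seasonal_strength > 0.5:
        return "smoothing model with seasonality (e.g., Holt-Winters)"
    if trend_strength > 0.5:
        return "regression on time or trend-only smoothing (e.g., Holt)"
    return "simple exponential smoothing"
```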

3. Based on optimal output values:

Many algorithms have quantifiable accuracy measures that can be used to select the best algorithm. For example, Root Mean Squared Error (RMSE) and the coefficient of determination (R²) can be used for regression models, silhouette distance can be used to compare clustering techniques, and the F1 score or the ROC curve can be used to select the best classification technique, and so on.
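
For instance, here is a minimal sketch of selecting a regression model by cross-validated RMSE using scikit-learn; the candidate set and the 5-fold split are illustrative choices:

```python
# Illustrative selection by cross-validated RMSE using scikit-learn.
# The candidates dictionary is an assumption; extend it as needed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def best_by_rmse(X, y):
    candidates = {
        "linear": LinearRegression(),
        "random_forest": RandomForestRegressor(random_state=0),
    }
    rmse = {}
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5,
                                 scoring="neg_root_mean_squared_error")
        rmse[name] = -np.mean(scores)  # flip sign back to a positive RMSE
    return min(rmse, key=rmse.get), rmse
```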

4. Based on user feedback:

At times, opinions on the outputs of certain algorithms are subjective, and it becomes difficult to say which output is correct. For example, consider change detection: for the same time-series, different change detection algorithms report different change points. In such cases, user feedback helps to choose the right algorithm. In addition, based on the feedback, one can learn how much significance or persistence the user is looking for.
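
A toy sketch of feedback-driven selection follows; it is purely illustrative (the class, its scoring scheme, and the candidate names are assumptions, not a description of any product's internals):

```python
# Toy feedback-driven selector: purely illustrative, not any product's
# internals. Ratings are +1 (user accepted the output) or -1 (rejected).
from collections import defaultdict

class FeedbackSelector:
    def __init__(self, algorithms):
        self.scores = {name: 0.0 for name in algorithms}  # running mean rating
        self.counts = defaultdict(int)

    def pick(self):
        # Choose the algorithm with the best average feedback so far.
        return max(self.scores, key=self.scores.get)

    def record(self, algorithm, rating):
        # Incrementally update the mean rating for this algorithm.
        self.counts[algorithm] += 1
        n = self.counts[algorithm]
        self.scores[algorithm] += (rating - self.scores[algorithm]) / n

# Usage sketch:
# selector = FeedbackSelector(["cusum", "bayesian_online", "pelt"])
# chosen = selector.pick()
# selector.record(chosen, +1)  # the user confirmed the detected change
```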

Trends in AutoAI/AutoML and open issues:

Data science has become the backbone of most businesses. While implementing machine-learning algorithms, data scientists choose a method that works best for a business case. However, such implementations are prone to human error and bias. AutoML tools can automate this process and run a broader set of machine learning algorithms to select the best one, including algorithms the data scientist might not have considered. AutoML/AutoAI tools simplify the automation of various machine learning steps such as data pre-processing, feature engineering, feature extraction, feature selection, algorithm selection, and hyperparameter optimization. They are usually used for solving problems in spaces such as classification, regression, and time-series forecasting. AutoML is an active area of research in both academia and industry. Below we discuss some of the AutoML tools:

  • Various open-source AutoML libraries (e.g., Auto-WEKA, MLBox, auto-sklearn) and commercial AutoML systems (e.g., DataRobot, DarwinAI, H2O.ai, OneClick.ai) have been developed; a minimal usage sketch of auto-sklearn follows this list.
  • Today, Facebook trains around 300,000 machine learning models to improve its machine learning processes, and it has even created its own AutoML engineer, named “Asimo”, to generate improved versions of existing models automatically.
  • DataRobot is probably the best-known commercial solution for AutoML and one of the unicorns in the AI space. DataRobot’s offering is composed of four independent products (Automated Machine Learning, Automated Time Series, MLOps and Paxata).
  • Dataiku has focused on AutoML techniques for a long time. It provides a visual tool that can train a model, selecting the best models, features, and so on, with a single button click. Google, Amazon, and Microsoft now offer a host of AutoML products and services.
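
As a concrete taste of the open-source route, here is a minimal sketch using auto-sklearn; the dataset and time budgets are illustrative:

```python
# Illustrative auto-sklearn run: the library searches over algorithms and
# hyperparameters within the given time budget and returns an ensemble.
import autosklearn.classification
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # total search budget in seconds (assumed)
    per_run_time_limit=30,        # per-candidate budget in seconds (assumed)
)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))  # accuracy of the selected ensemble
```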

Initially, academic researchers developed AutoML solutions whose primary focus was on automating the model selection and model learning steps of the ML workflow. These solutions have matured considerably and now offer improved scalability, flexibility, versatility, and transparency/trust. AutoML algorithms work best when direct measures of model accuracy, such as model fit error or silhouette distance, are available. However, two main areas still need attention. First, solving complex ML problems that involve high-dimensional data and many, or highly imbalanced, classes; in such cases, results tend to be inaccurate. Second, most existing solutions focus on supervised learning, with little or no attention paid to unsupervised or reinforcement learning problems.

ignio’s analytics fabric has also been designed around the concept of Algorithm of Algorithms (AoA). This AoA layer performs three key functions:

1. Select the right algorithm for the right data
2. Self-tune, and
3. Self-learn

To cater to a wide variety of datasets, this AoA layer is designed to select the right algorithm for the right data. Another big challenge with most algorithms is that they work best only when tuned with the right parameters; hence, the AoA layer self-tunes these parameters based on data properties. As the system and the related data undergo frequent changes, it is important to continuously adapt to them, so the AoA layer self-learns to adapt its algorithm selection and parameter tuning accordingly. Over the years, this AoA layer has grown to support a variety of algorithms such as change detection, forecasting, anomaly detection, and pattern mining, among others.
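
To make the three functions concrete, here is a toy skeleton of an AoA-style loop. This is purely an illustration of the concept, not ignio's actual implementation; all interfaces and names are hypothetical:

```python
# Toy AoA-style loop: select, tune, learn. Purely conceptual; the
# interfaces and scoring are assumptions, not ignio's internals.
class AoALayer:
    def __init__(self, algorithms):
        self.algorithms = algorithms            # name -> algorithm object
        self.performance = {a: [] for a in algorithms}

    def select(self, data_properties):
        # 1. Select: prefer algorithms whose declared sweet spot matches
        #    the extracted data properties; break ties by past performance.
        matches = [a for a, alg in self.algorithms.items()
                   if alg.supports(data_properties)] or list(self.algorithms)
        return max(matches, key=lambda a: self._avg(self.performance[a]))

    def tune(self, name, data_properties):
        # 2. Self-tune: derive parameters from the data properties.
        return self.algorithms[name].default_params(data_properties)

    def learn(self, name, score):
        # 3. Self-learn: record observed accuracy to bias future selection.
        self.performance[name].append(score)

    @staticmethod
    def _avg(xs):
        return sum(xs) / len(xs) if xs else 0.0
```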

Conclusion

Over the last few years, AutoML/AutoAI tools and solutions have gained significant excitement and traction in industry and academia. However, accountability for automated decisions is a critical aspect that needs to be considered, especially when these solutions are deployed for business-critical decisions such as in financial services or healthcare. For better adoption, these solutions need to come with inherent explainability and should not be treated as black boxes that are impossible to interpret. Another important aspect is trustworthiness: in the end, these models are used by humans, who need to trust them, understand the errors they make, and follow the reasoning behind their predictions. Collaborative learning, where AutoML solutions and business users work together to speed up the machine learning process, can be a promising direction to achieve that objective and to utilize the real power of AI and ML.
