A daily task for data scientists is to analyze the data they’ve been given and derive new insights from it, aiming to answer specific questions. They typically wonder: Do I have enough data attributes to answer the question I’m interested in? Too few attributes? Too many? Hence, a lot of data pre-processing revolves around a concept called “dimensionality.”
This term may sound complicated. But dimensionality is relevant to a lot of decisions we make in daily life, like how long a road trip will take, just as much as it is to the kinds of business decisions that many Digitate customers make.
In the fields of data science, data analytics, and big data, data scientists spend a sizable share of their time on data preparation – primarily working with the features in the data. Features are the attributes or columns present in a data set, and the number of such features is known as the data set’s dimensionality.
Consider a data set containing details about employees. It might have columns such as role, department, location, tenure, address, and so forth. These columns are considered features. They play a vital role in tasks such as finding user segments, detecting anomalies, and predicting future events.
Features act as inputs to machine learning algorithms, forming the backbone of machine learning and artificial intelligence models. These models work by establishing a relationship between features (inputs) and the target values (outputs) they aim to predict. A core part of this process is the training data – a subset used to teach machine learning models how to accurately establish the relationship between these features and the desired outcomes. Naturally, training on relevant and adequate features increases the algorithm’s predictive power.
Suppose we want to train an ML algorithm to predict the time it will take for a user to travel by road from one city to another – let’s say Pune to Mumbai. There could be a multitude of determining factors, such as distance, road conditions, weather, or the vehicle used. Each of these factors could be a feature for our algorithm. For example, the greater the distance, the longer the trip takes; the better the road conditions, the faster we travel. At the same time, the Hughes phenomenon illustrates a critical point: for a fixed amount of training data, adding more features (moving into higher dimensions) can actually degrade a classifier’s performance due to overfitting, underscoring the complexity of managing dimensions in data science.
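To make the relationship between features and a target concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the column names, the values, and the choice of a linear model are invented purely for illustration.

```python
# Hypothetical toy example: each column is a feature (input),
# and travel time is the target (output) the model learns to predict.
import pandas as pd
from sklearn.linear_model import LinearRegression

trips = pd.DataFrame({
    "distance_km":  [150, 148, 155, 160, 152],
    "road_quality": [4, 3, 5, 2, 4],   # 1 (poor) to 5 (excellent)
    "rain_mm":      [0, 12, 0, 30, 5],
    "travel_hours": [2.8, 3.4, 2.5, 4.1, 3.0],
})

features = trips.drop(columns="travel_hours")  # the inputs
target = trips["travel_hours"]                 # the output to predict

model = LinearRegression().fit(features, target)

# Estimate hours for a 150 km trip on good roads with no rain.
new_trip = pd.DataFrame([[150, 4, 0]], columns=features.columns)
print(model.predict(new_trip))
```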
What is the curse of dimensionality?
In ML algorithms, dealing with high-dimensional data introduces challenges such as data sparsity, where the vastness of dimensions leads to scattered and less informative data points. On one hand, having very few features may limit our understanding of the problem and, in turn, our ability to predict the outcome. On the other, too many dimensions introduce complexity. This is where dimensionality reduction techniques become crucial to balance the scales.
But this is where the curse of dimensionality and the risk of overfitting kick in! Operating in a high-dimensional space with a large number of features makes the data too complex to visualize and work with effectively. It can lead to increased execution time, and it confuses the algorithm, reducing accuracy.
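One way to see the curse in action is to watch how distances behave as dimensions grow. The short sketch below, using purely random data, shows that in high dimensions the nearest and farthest points from a query end up almost equally far away, so distance-based reasoning loses its contrast – data sparsity in a nutshell:

```python
# Illustrative sketch: as dimensionality grows, pairwise distances
# between random points concentrate, making "nearest" and "farthest"
# neighbors nearly indistinguishable.
import numpy as np

rng = np.random.default_rng(42)

for d in (2, 10, 100, 1000):
    points = rng.random((500, d))  # 500 random points in d dimensions
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    # A ratio close to 1.0 means the distances carry little contrast.
    print(f"d={d:5d}  min/max distance ratio = {dists.min() / dists.max():.3f}")
```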
Let’s look at both cases to understand this dilemma better.
Low dimensionality: Consider predicting the travel time between two cities with just two features: the start point and end point of the journey. Naturally, this would result in inadequate prediction accuracy because it does not take traffic, weather, road conditions, or other factors into consideration. If we do not have enough features in a data set, any algorithm we use will have an incomplete picture of the entire problem, resulting in low accuracy of the predictions.
High dimensionality: The natural course of action to fix our problem might be to add more features! So we add temperature, humidity, wind speed, terrain, and road conditions. And we could add even more, such as distance in miles, distance in kilometers, color of the vehicle, registration number, or name of the driver! As you can guess, too many features not only introduce additional noise into the data, they also carry the risk of inaccurate predictions. Imagine a model operating in this high-dimensional space erroneously deriving a pattern that all passengers driving red cars with even-numbered registration plates tend to drive too slowly!
Other factors also play a role in determining the accuracy of the predictions, such as the nature of the test and training data sets, any underlying bias in the data, and the classification and regression algorithms used. Feature engineering is one of the most effective tools in a data scientist’s kit for overcoming these issues. Dimensionality reduction techniques become vital for finding the right balance and picking the most impactful features, ensuring that the data set is optimized for predictive accuracy without unnecessary complexity. As we explore more advanced modeling techniques, neural networks emerge as powerful tools for handling high-dimensional data, offering an alternative to traditional algorithms by automatically detecting intricate patterns and relationships within the dataset.
The cure: Sift out irrelevant features
Let’s first address the scenario where we’re dealing with a high number of dimensions in our dataset. Having too many features can overwhelm the models, making it imperative to assess which dimensions offer value and which merely add noise. Often many of the features are either irrelevant or redundant to the problem at hand. For instance, in our example, the color of a vehicle might be irrelevant, while redundant features present the same information in different formats, such as distance in miles and in kilometers.
Here are some tricks of the trade to winnow them down.
Use domain knowledge: A domain expert can spot irrelevant and repetitive features by just looking at them. This can be very effective. In fact, it’s generally the first trick to try. But given the huge number of features and complexities in some data sets, and the challenge of finding experts, manual clean-up is often not scalable. In such cases, creative algorithmic solutions can be deployed.
Use statistics to remove irrelevant features: We can use various statistical properties to eliminate features. (A short sketch after this list illustrates a few of these checks.)
- In many scenarios, features with the same value for all records, or unique values for each record, are not of much use for classification. Such features can be identified by quantifying their degree of randomness with metrics such as entropy or the Gini index.
- Correlation is also a very effective lever. Redundant metrics such as distance in miles and distance in kilometers will show a very high correlation. A high correlation implies that dropping one of the two metrics will not lead to much information loss.
- Classification and regression algorithms such as decision trees, random forest, and lasso regression internally use techniques to identify features of interest. These algorithms can be used for feature selection.
- Techniques such as PCA (Principal Component Analysis) reduce dimensionality by projecting the data onto a small number of directions that capture the most variance, allowing us to transform a large set of features into a smaller set while still retaining most of the critical information.
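As a rough illustration of the first two checks combined with PCA, here is a hedged sketch; the tiny data set, the column names, and the 0.95 correlation threshold are arbitrary choices for demonstration, not fixed rules:

```python
# Sketch of statistical feature pruning on a toy table of numeric features.
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame({
    "distance_km": [150, 148, 155, 160, 152],
    "distance_mi": [93.2, 92.0, 96.3, 99.4, 94.4],  # redundant with distance_km
    "fleet_code":  [7, 7, 7, 7, 7],                 # constant: zero information
    "rain_mm":     [0, 12, 0, 30, 5],
})

# 1. Drop features with a single value for every record
#    (an entropy or Gini score of zero would flag the same columns).
df = df.loc[:, df.nunique() > 1]

# 2. Drop one feature from any highly correlated pair.
corr = df.corr().abs()
to_drop = set()
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if corr.loc[a, b] > 0.95 and b not in to_drop:
            to_drop.add(b)
df = df.drop(columns=to_drop)

# 3. Project what remains onto fewer principal components.
components = PCA(n_components=2).fit_transform(df)
print(df.columns.tolist(), components.shape)
```

Tree-based selection from the third bullet works along similar lines in practice: fit a decision tree or random forest and read its feature_importances_ attribute to rank features.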
Use brute-force methods: These methods train a model over multiple iterations, each with a different group of features, and pick the best-performing set. Some approaches use elimination, where we start with all the available features, then remove one at a time until the accuracy stops improving. Alternatively, some approaches adopt a selection strategy, where we start with the minimum and keep adding features until no improvement is observed. These approaches can deliver good results, but they are computationally expensive and hence often not suitable for huge data sets. Both directions are sketched below.
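Scikit-learn ships wrappers for both directions. The sketch below uses synthetic data so it runs as-is, but the estimator and the number of features to keep are illustrative choices:

```python
# Sketch of the two brute-force directions using scikit-learn's wrappers.
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE, SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=10, n_informative=4,
                       random_state=0)

# Elimination: start from all features and discard the weakest one at a time.
rfe = RFE(LinearRegression(), n_features_to_select=4).fit(X, y)

# Selection: start from nothing and greedily add the most helpful feature.
sfs = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=4, direction="forward"
).fit(X, y)

print("kept by elimination:", rfe.support_)
print("kept by selection:  ", sfs.get_support())
```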
Dig deep in the toolbox when you have too few features
In contrast to the many ways of reducing an abundance of features, it’s tricky to work with too few. Your options are more limited.
1. Decompose features: A popular solution is to derive additional information by decomposing the available features. For instance, a timestamp can be decomposed to derive features such as day of the month, hour of the day, day of the week, and so forth. (In our road trip example, it could take much longer to drive during morning commute hours than the middle of the night.)
2. Derive additional information: We can derive additional features from current ones. For instance, BMI can be derived from weight and height, or volume from length, breadth, and height.
3. Make use of publicly available data sets (data augmentation): Current features can be joined with various publicly available data sets to expand the feature set. For instance, if we have zip codes as a feature, those can be joined with demographic data to add features such as city, state, country, population, and traffic. We can also add weather data such as temperature and humidity. With a person’s name, social media profiles can be explored to augment the data with other information such as interests, education, and occupation. (All three techniques are sketched below.)
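Here is a combined sketch of all three techniques on a toy trips table; the column names and the demographics table are made up for illustration, and in practice the augmentation join would hit a real public data set:

```python
# Sketch: decompose, derive, and augment features on a toy trips table.
import pandas as pd

trips = pd.DataFrame({
    "start_time": pd.to_datetime(["2024-03-04 08:15", "2024-03-04 23:40"]),
    "zip_code": ["411001", "400001"],
    "distance_km": [150.0, 152.0],
    "duration_hr": [3.4, 2.6],
})

# 1. Decompose: split the timestamp into coarser, model-friendly parts.
trips["hour"] = trips["start_time"].dt.hour
trips["day_of_week"] = trips["start_time"].dt.dayofweek
trips["is_rush_hour"] = trips["hour"].between(7, 10)

# 2. Derive: compute a new feature from existing ones.
trips["avg_speed_kmh"] = trips["distance_km"] / trips["duration_hr"]

# 3. Augment: join against an external (here, made-up) demographic table.
demographics = pd.DataFrame({
    "zip_code": ["411001", "400001"],
    "city": ["Pune", "Mumbai"],
    "population": [3_124_000, 12_442_000],
})
trips = trips.merge(demographics, on="zip_code", how="left")
print(trips.columns.tolist())
```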
Digitate’s take
Digitate data scientists work with a variety of use cases, such as process prediction, transaction anomaly detection, and change management, among others. All these use cases involve a variety of data sets such as time series, events, and sets. Almost all of these data sets require some sort of feature engineering in order to derive useful insights. Our award-winning AIOps suite, ignio, excels in data mining and works with a wide range of machine learning and artificial intelligence problems across multiple domains. This has helped ignio “learn” how to excel at both ends of the dimensionality spectrum. It leverages a unique blend of domain-specific knowledge paired with generalized data-wrangling tricks and techniques.
In some time series prediction use cases, ignio sees data with the bare minimum of features, such as entity name, timestamp, and value. In such situations, it uses feature decomposition and feature derivation to expand the feature set. At the other end, ignio may have to manage a copious number of features in business transaction data. In such cases, it uses statistical methods to eliminate redundant features and brute-force methods to pick relevant ones.
Conclusion
Playing with data dimensions offers a wide range of possibilities. The quality and usability of analytics depend heavily on the maturity of feature engineering. Today, the analytics pipeline is becoming fairly standardized. However, data preparation and feature engineering are still an art, and they are often what determines success for machine learning algorithms. Great results are often a function of how experienced you are and how bold and creative you can get with the data!