
The curse and cure of dimensionality

By Parag Agrawal, Data Scientist at Digitate

A daily task for data scientists is to answer questions or derive new insights from a body of data they’ve been given. They typically wonder: Do I have sufficient data attributes to answer the question I’m interested in? Too few attributes? Too many? Hence, a lot of data pre-processing revolves around a concept called “dimensionality.”  

This term may sound complicated. But dimensionality is relevant to a lot of decisions we make in daily life, like how long a road trip will take, just as much as it is to the kinds of business decisions that many Digitate customers make. 

Data scientists spend a sizable share of their time on data preparation – primarily working with the features in the data. Features refer to attributes or columns present in the data set. The number of such features is known as the dimensionality of a data set.  

Consider a data set containing details about employees. It might have columns such as role, department, location, tenure, and address. These columns are its features. They play a vital role in tasks such as segmenting users, detecting anomalies, and predicting future events.

Features act as inputs to a machine learning (ML) algorithm. Most of these algorithms work by establishing a relationship between the features and the target values that we wish to predict. Naturally, relevant and adequate features help to increase the algorithm’s predictive power.

Suppose we want to train an ML algorithm to predict the time it will take for a user to travel by road from one city to another – let’s say Pune to Mumbai. There could be a multitude of determining factors such as distance, road conditions, weather, or vehicle used. Every one of these determining factors could be a feature for our algorithm. For example, the larger the distance, the longer it takes. Or the better the road conditions, the faster we travel. 
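
To make this concrete, here is a minimal sketch of how such determining factors become feature columns that a regression model maps to travel time. The column names and numbers are hypothetical stand-ins, not real trip data:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Each row is one past trip; each column except the target is a feature.
trips = pd.DataFrame({
    "distance_km":    [148, 152, 150, 149],   # Pune-Mumbai is roughly 150 km
    "road_condition": [3, 4, 2, 4],           # 1 (poor) to 5 (excellent)
    "rain_mm":        [0.0, 12.5, 3.2, 0.0],
    "travel_hours":   [2.8, 4.1, 3.5, 2.7],   # target we want to predict
})

X = trips[["distance_km", "road_condition", "rain_mm"]]  # features
y = trips["travel_hours"]                                # target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Predict travel time for a new trip described by the same features.
new_trip = pd.DataFrame({"distance_km": [150], "road_condition": [4], "rain_mm": [0.0]})
print(model.predict(new_trip))
```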

The curse of dimensionality

In ML algorithms, having very few features may limit our understanding of the problem and in turn limit our ability to predict the outcome. That might make you think the more features we have, the better our accuracy will be.  

But this is where the curse of dimensionality kicks in! If you have a large number of features in your data (i.e., high dimensionality), the data becomes difficult to visualize, execution time increases, and the extra noise can confuse the algorithm, reducing accuracy.

Let’s look at both cases to understand this dilemma better. 

Low dimensionality: Consider predicting the travel time between two cities with just two features: the start point and end point of the journey. Naturally, this would result in inadequate prediction accuracy because it does not take traffic, weather, road conditions, or other factors into consideration. If we do not have enough features in a data set, any algorithm we use will have an incomplete picture of the entire problem, resulting in low accuracy of the predictions. 

High dimensionality: The natural course of action to fix our problem might be to add more features! So we add temperature, humidity, wind speed, terrain, and road conditions. And we could add even more, such as distance in miles, distance in kilometers, color of the vehicle, registration number, or name of the driver! As you can guess, too many features not only introduce additional noise into the data, they also carry the risk of inaccurate predictions. Imagine the algorithm deriving a spurious pattern that all passengers driving red cars with even-numbered registration plates tend to drive too slowly!

Other factors also play a role in determining the accuracy of the predictions, such as the nature of the test and training data sets, any underlying bias in the data, and the classification and regression algorithms used. Feature engineering is one of the most effective tools in a data scientist’s kit to overcome these issues. Finding the right balance and picking the right features becomes a very important first step.

The cure: Sift out irrelevant features

Let’s first discuss the case when we have too many features. Often a lot of the features are either irrelevant or redundant to the problem at hand. For instance, in our example, the color of a vehicle might be irrelevant. Meanwhile, redundant features present the same information in different formats such as distance in miles and in kilometers.  

Here are some tricks of the trade to winnow them down. 

  1. Use domain knowledge: A domain expert can spot irrelevant and repetitive features just by looking at them. This can be very effective; in fact, it’s generally the first trick to try. But given the huge number of features and complexities in some data sets, and the challenge of finding experts, manual clean-up is often not scalable. In such cases, creative algorithmic solutions can be deployed.
  2. Use statistics to remove irrelevant features: We can use various statistical properties to eliminate features (see the combined sketch after this list).
  • In many scenarios, features with the same value for all records, or a unique value for each record, are not of much use for classification. Such features can be flagged by measuring their degree of randomness using measures such as entropy or the Gini index.
  • Correlation is also a very effective lever. Redundant metrics such as distance in miles and distance in kilometers will show a very high correlation. A high correlation implies that dropping one of the two will cause little information loss.
  • Classification and regression algorithms such as decision trees, random forests, and lasso regression internally use techniques to identify features of interest. These algorithms can therefore double as feature selectors.
  • Techniques such as principal component analysis (PCA) can transform a large set of features into a smaller set while still retaining most of the information.
  3. Use brute-force methods: These methods train the model over multiple iterations with different groups of features and pick the best-performing set. Some approaches use elimination, where we start with all the features present, then remove one at a time until accuracy stops improving. Alternatively, some adopt a selection approach, starting with a minimal set and adding features until no improvement is observed. These approaches can deliver good results, but they are computationally expensive and hence often unsuitable for huge data sets.
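
Here is a minimal sketch of these pruning techniques using pandas and scikit-learn, run on a small synthetic frame. All column names and numbers are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, VarianceThreshold

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "distance_miles": rng.uniform(10.0, 200.0, n),
    "road_condition": rng.integers(1, 6, n).astype(float),  # 1 (poor) to 5
    "rain_mm":        rng.uniform(0.0, 20.0, n),
    "vehicle_color":  np.zeros(n),               # constant: zero information
})
df["distance_km"] = df["distance_miles"] * 1.60934  # redundant duplicate
y = ((df["distance_km"] / 80 + df["rain_mm"] / 10) > 2).astype(int)  # "slow trip?"

# 1. The degenerate low-entropy case: drop zero-variance (constant) columns.
vt = VarianceThreshold(threshold=0.0).fit(df)
df = df.loc[:, vt.get_support()]

# 2. Drop one of any highly correlated pair (miles vs. km here).
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
df = df.drop(columns=[c for c in upper.columns if (upper[c] > 0.95).any()])

# 3. Let a tree ensemble rank the survivors by importance...
forest = RandomForestClassifier(random_state=0).fit(df, y)
print(dict(zip(df.columns, forest.feature_importances_.round(3))))

# 4. ...or compress them with PCA instead of selecting.
compressed = PCA(n_components=2).fit_transform(df)

# 5. Brute force: recursive feature elimination retrains repeatedly,
#    dropping the weakest feature each round.
rfe = RFE(RandomForestClassifier(random_state=0), n_features_to_select=2).fit(df, y)
print(list(df.columns[rfe.support_]))
```

In practice the cheap statistical filters (steps 1 and 2) typically run first, reserving the expensive brute-force search for the features that survive.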

Dig deep in the toolbox when you have too few features

In contrast to the many ways of reducing an abundance of features, it’s tricky to work with too few. Your options are more limited. 

  1. Decompose features: A popular solution is to derive additional information by decomposing the available features. For instance, a timestamp can be decomposed into features such as day of the month, hour of the day, and day of the week. (In our road trip example, it could take much longer to drive during morning commute hours than in the middle of the night.)
  2. Derive additional information: We can derive new features by combining current ones. For instance, BMI can be derived from weight and height, or volume from length, breadth, and height.
  3. Make use of publicly available data sets (data augmentation): Current features can be joined with publicly available data sets to expand the feature set. For instance, if we have zip codes as a feature, they can be joined with demographic data to add features such as city, state, country, population, and traffic. We can also add weather data, such as temperature and humidity. With a person’s name, social media profiles can be explored to augment the data with interests, education, and occupation. (A sketch of all three techniques follows.)
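
A minimal pandas sketch of all three techniques. The column names and the demographics table are hypothetical stand-ins for real data:

```python
import pandas as pd

df = pd.DataFrame({
    "trip_start": pd.to_datetime(["2023-01-23 08:15", "2023-01-23 23:40"]),
    "weight_kg":  [70.0, 85.0],
    "height_m":   [1.75, 1.80],
    "zip_code":   ["94035", "95054"],
})

# 1. Decompose: one timestamp becomes several features.
df["hour_of_day"]  = df["trip_start"].dt.hour       # rush hour vs. midnight
df["day_of_week"]  = df["trip_start"].dt.dayofweek
df["day_of_month"] = df["trip_start"].dt.day

# 2. Derive: combine existing columns into a new feature.
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

# 3. Augment: join an external data set keyed on an existing feature.
demographics = pd.DataFrame({                       # stand-in for public data
    "zip_code":   ["94035", "95054"],
    "population": [3100, 22000],
})
df = df.merge(demographics, on="zip_code", how="left")
print(df.head())
```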

Digitate’s take

Digitate data scientists work with a variety of use cases, such as process prediction, transaction anomaly detection, and change management. These use cases involve a variety of data sets, such as time series, events, and sets, and almost all of them require some feature engineering before useful insights can be derived.

Our award-winning AIOps suite, ignio, works with a wide range of ML problems across multiple domains. This has helped ignio “learn” how to excel at both ends of the dimensionality spectrum, leveraging a blend of domain-specific knowledge and generalized data-wrangling techniques.

In some time series prediction use cases, it sees data with the bare minimum of features such as entity name, timestamp, and value. In such situations, ignio uses feature decomposition and feature derivation to expand the feature set. On the other end, ignio may have to manage a copious number of features in business transaction data. In such cases, ignio uses statistical methods to eliminate redundant features and brute-force methods to pick relevant features. 

Conclusion

Playing with data dimensions offers a wide range of possibilities. The quality and usability of analytics depend heavily on the maturity of feature engineering. Today the analytics pipeline is becoming fairly standardized, but data preparation and feature engineering are still an art, and they often determine the success of machine learning algorithms. Great results are often a function of how experienced you are and how bold and creative you can get with the data!
