I work in an IT firm, and one question I regularly hear from many reputable experts is "What do I do with all this data?"

Have you been thinking along similar lines?


Have you ever been in a situation where you were handed a mammoth Excel or CSV file and asked to share your understanding of it?

Have you ever tried googling a step-by-step process that would help you solve a data-related problem?

Or are you a young technology enthusiast trying to get into the fascinating field of data?

If your answer to any of these questions is a yes, you are at the right place. But before getting our hands dirty with data, let's first take in some motivation, and what's better for that than a quote?

"Every day we go over data and use science and data to drive policy and decision-making."
– Deborah Birx, American Physician and Diplomat

Data is King: Really?

In 2020, there is no denying that data is king. Data, which once seemed intrinsic to computer science and mathematics, has now spread its wings to nearly every sector of the contemporary world: from medicine to clinical research, from laws to policies, from banking to share markets, and, not to overlook, from modern applications to smart applications.

Data is copious in today's world; it is easily available and obtainable in diverse formats. With the advent of technologies like the Internet of Things (IoT), smart devices and smart applications, data is now the new buzzword in Internet technology and data analytics. As per the IBM Big Data and Analytics Hub, 40 zettabytes of data will have been created by the end of 2020; an estimated 2.5 quintillion bytes of data are produced every day; roughly 6 million users would be online through cell phones globally; more than 4 billion hours of video are viewed on YouTube every month; and an estimated 400 million tweets are sent across Twitter every day.

With this ever-accumulating data size and the bombardment of data on servers, it becomes puzzling, and occasionally frightening, to make sense of the data. It is at this juncture that data mining tools and techniques come into play: they not only give us a refined way to dive deep into the data and carve out vital facts, but also give us a framework, called the data mining process, for working with data, so that the daunting task of understanding it becomes methodical and easy. By the time you finish reading this, I promise you will be persuaded that data really is king in 2020, and I aim to do it by walking you step by step through what makes data the real king of 2020.

I will be talking about some key concepts linked to data mining, with an emphasis on the data mining process and on data mining tools and techniques. I will start with understanding the data mining process, followed by the data mining tools and techniques presently available to help us with the avalanche of data, which in turn gives decision takers and policy makers the key insights they need so that correct decisions can be made and sound policies can be formulated.

From Chunks of Numbers to Digital Transformation: What is the Data Mining Process?

Before we get into the realm of data mining, let us first answer the question: what is data mining? Put in simple terms, data mining is the process of extracting information from data. Note that information and data are two distinct entities; data is a set of raw values describing different features, whereas qualifying the data turns it into information. For example, 75 is data, but when I say I weigh 75 kg, it becomes information. Hence, data mining is merely adding qualification to data so that one can make some sense out of it.


In other words, data mining is all about explaining the data in a language we can understand, one that also allows us to take the right decisions. With this explanation in mind, let's now start with the data mining process.

The data mining process follows a methodology called CRISP-DM (Cross Industry Standard Process for Data Mining), which divides the data mining problem into six phases:

  • Business Understanding
  • Data Understanding
  • Data Preparation
  • Modelling
  • Evaluation
  • Deployment

The CRISP-DM methodology is agnostic to industry, data, process and business, which is why it is possibly the most successful methodology, followed extensively across many industry sectors. Let us now go through these steps of the data mining process one by one in detail.


1. Business Understanding

The first stage of the CRISP-DM data mining process is understanding the business needs. The purpose of this stage is to comprehend which features influence a specific decision or process, and what end result would be acceptable to the business once the data mining activity is complete.

For example, in the banking sector, if the business need is to spot customers who are likely to become defaulters if a loan is sanctioned, the purpose of this step is to pinpoint all the considerations that influence repayment.

This step also includes the development of a project plan in which every step of the data analytics process is documented in detail, including the data mining tools and techniques to be used.

The final milestone in this step is to define the acceptance criteria, i.e. the accuracy with which the process must meet the business goal. In the example above, of finding the defaulters, the acceptance criterion might be a cut-off accuracy of, say, 85%; that is, if the model is able to correctly identify at least 85% of the customers who have defaulted on payments in the past, the model is accepted. This has to be done right at the beginning of the project so that tracking progress becomes easier.
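
To make this concrete, here is a minimal sketch in Python of how such an acceptance criterion might be recorded and checked. The names and the 85% threshold come from the hypothetical banking example above, not from any particular tool.

```python
# A minimal sketch of recording the acceptance criteria agreed upon
# in the Business Understanding stage (all names are illustrative).
acceptance_criteria = {
    "target": "predict loan defaulters before sanctioning",
    "metric": "accuracy",
    "threshold": 0.85,  # model accepted only if accuracy >= 85%
}

def is_model_accepted(measured_accuracy: float) -> bool:
    """Return True when the model meets the agreed acceptance criteria."""
    return measured_accuracy >= acceptance_criteria["threshold"]

print(is_model_accepted(0.87))  # True: 87% clears the 85% cut-off
```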

In a nutshell, the business understanding step encapsulates the following:

  • Setting the target to be achieved, expressed in business language
  • Defining the data analytics process, including the data mining tools and techniques
  • Setting the acceptance criteria for the model

2. Data Understanding

The second stage of the CRISP-DM data mining process is understanding the data. We already have an insight into the parameters that influence a particular business process, and once we are in this stage, we start off by collecting data for these parameters so that it can all be loaded into the data mining tool chosen in the business understanding stage. In the banking example above, the data might be composed of repayment dates, the monthly earnings of the customers, the EMI details, the insurances, any deposits in the bank, and so on.

Once the data is acquired, the next step is to explore it. This can include finding relationships between various attributes, computing aggregations such as sums and averages, examining the data distribution, and performing simple statistical analysis such as finding the mean, median, mode and standard deviation of the data set.

Following the data exploration, the immediate next step is to determine the quality of the data. In data science, the quality of the data refers to whether the data can be trusted to yield a fruitful model. This can be assessed in three ways: whether the data is complete, which is to say whether it covers the whole population or only a part of it; whether the data is correct, or whether it contains too many outliers that might skew the results; and finally whether there are missing values in the data, and what strategies can be adopted to cover those gaps in the dataset. In our example, if the data collected covers only a month or two, it is not sufficient to say whether a customer is likely to default or not. To predict defaults correctly, one would need to analyze the repayment trends for at least a year or so.
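
As an illustration, here is a minimal sketch of this exploration and quality check using pandas; the file name and column names are hypothetical, stand-ins for the banking data described above.

```python
import pandas as pd

# Hypothetical repayment data; the file and column names are illustrative.
df = pd.read_csv("loan_repayments.csv")

# Explore: distributions and simple statistics.
print(df[["monthly_income", "emi_amount"]].describe())  # count, mean, std, quartiles
print(df["monthly_income"].median(), df["monthly_income"].mode().iloc[0])

# Verify quality: missing values per column and the period the data covers.
print(df.isna().sum())
print(df["repayment_date"].min(), "to", df["repayment_date"].max())
```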

In a nutshell, the data understanding step encapsulates the following:

  • Collecting the relevant data
  • Exploring the collected data
  • Verifying the quality of the data

3. Data Preparation

The third stage of the CRISP-DM data mining process is data preparation. Data preparation is the final step before the data is fed as input to the data mining tools and techniques. The key purpose of this step is to ensure that the data is free of outliers, does not contain too many blank values, and is relevant to the business requirement defined in the business understanding step. This is called data selection.

In our familiar banking example, customer attributes such as address, date of birth or blood group are very unlikely to play any role in determining whether a customer is likely to default. Such attributes should be removed from the data set right away.

Once the data is selected, it has to be cleansed: outliers, if any, are cleared out, and strategies are employed to fill in the missing values so that the modelling can proceed without any hiccup. This is called data cleansing.

Following the data cleansing, the immediate next step is to transform the data into a suitable format. In general, the raw data extracted from the database does not serve any purpose as-is. Continuing with our example, the extracted data might contain a table of customer details, tables of the transactions they have made over the years, a table of fixed-deposit or recurring-deposit accounts, and so on. Often, all of these are transformed into a new table that contains one row per customer, with the details in the corresponding attributes. This is done by merging and aggregating data following a set of strategies that depend on the type of data, the table attributes and the business needs.
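
Putting the three sub-steps together, a minimal pandas sketch of selection, cleansing and transformation might look like the following; the file names, column names and thresholds are all hypothetical.

```python
import pandas as pd

customers = pd.read_csv("customers.csv")        # hypothetical input files
transactions = pd.read_csv("transactions.csv")

# Data selection: drop attributes unlikely to influence default risk.
customers = customers.drop(columns=["address", "date_of_birth", "blood_group"])

# Data cleansing: fill missing incomes with the median, trim extreme outliers.
median_income = customers["monthly_income"].median()
customers["monthly_income"] = customers["monthly_income"].fillna(median_income)
customers = customers[
    customers["monthly_income"] < customers["monthly_income"].quantile(0.99)
]

# Data transformation: aggregate transactions to one row per customer,
# then merge with the customer attributes.
per_customer = transactions.groupby("customer_id", as_index=False).agg(
    total_spend=("amount", "sum"),
    avg_spend=("amount", "mean"),
)
prepared = customers.merge(per_customer, on="customer_id", how="left")
```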

In a nutshell, the data preparation step encapsulates the following:

  • Data Selection
  • Data Cleansing
  • Data Transformation

4. Modelling

The fourth stage of the CRISP-DM data mining process is modelling. The prepared data from the last step is now fed as input to a specific data mining algorithm, which can be a regression algorithm, a classification algorithm, a neural network, and so on. Though the tools are selected during the business understanding phase, the actual algorithm is selected here, depending on the type of data gathered and prepared.

The data set is then split into train and test data sets. This is accomplished by generating random subsets, using strategies such as shuffling the data points or using dedicated train-test split routines. The model is built on the train data set, and its quality is assessed on the test data set.

The model is then built by fitting and fine-tuning various parameters inherent to the chosen data mining algorithm. For example, in a typical linear regression problem, the parameters that are fitted are the weights assigned to each feature of the data set. Such parameters are tuned to obtain an overall model for the prepared data.

Different models are generated with the same algorithm by tuning its parameters and other attributes; these models are then ranked by their performance on the test data set and by how close their results come to the acceptance criteria agreed upon in the business understanding stage.
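
As a sketch of this stage, the snippet below splits a data set, builds two candidate models and ranks them on the test set, using scikit-learn. The synthetic data and the choice of logistic regression and a decision tree are purely illustrative, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the prepared banking data
# (X: customer attributes, y: defaulted or not).
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)

# Split into train and test sets; the model never sees the test set while learning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Build candidate models and rank them by accuracy on the held-out test set.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
}
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # best-performing model first
```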

In a nutshell, the modelling step encapsulates the following:

  • Selecting the Data Mining Algorithm
  • Splitting the data set into train and test data sets
  • Model building
  • Model Ranking

5. Evaluation

The fifth stage of the CRISP-DM data mining process is evaluation. Evaluation is the in-depth analysis of the performance of the model built in the last step. Where the test data set tells us about the accuracy and generality of the models, the evaluation stage tells us whether the model is good enough to solve the typical business scenarios, and verifies that it meets the initial business objective. This stage is also used to check whether the model relies on attributes that would no longer be available in the near future, or on data that might have potential problems, for example data whose availability depends on whether a particular user decides to share it or not.

In our banking example, if a user decides not to share the details of the family income, the bank cannot do much, and this might impair the model's predictions. Strategies should be prepared to handle such situations.
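
Here is a minimal sketch of this check, comparing a model's measured accuracy against the 85% cut-off from the business understanding stage; the labels and predictions below are made up for illustration.

```python
from sklearn.metrics import accuracy_score, classification_report

# Hypothetical held-out labels and model predictions (1 = defaulter).
y_test = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

accuracy = accuracy_score(y_test, y_pred)
print(classification_report(y_test, y_pred))  # per-class precision and recall

# Compare against the cut-off agreed upon in the business understanding stage.
ACCEPTANCE_THRESHOLD = 0.85
print("model accepted" if accuracy >= ACCEPTANCE_THRESHOLD else "model rejected")
```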

In a nutshell, the evaluation step encapsulates the following:

  • Evaluating whether a particular model is good enough to meet the specific business requirements.
  • Developing the course of action in case there are attributes which might not be available in the future.

6. Deployment

The sixth and final stage of the CRISP-DM data mining process is deployment. This stage is responsible for taking the best model that comes out of the evaluation phase and moving it into the production system, where data mining happens in real time as the transactions occur. This has to be done with utmost care so as not to disrupt the existing processes and systems.

Deployment also involves planning for the operational support activities and for the maintenance of the model embedded within the system. Additionally, plans for monitoring the system have to be developed to ensure continued service.
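
As one possible sketch of this hand-off, the snippet below persists a trained model with joblib and reloads it on the production side to score an incoming application. The model, file name and data are illustrative, and real deployments would wrap this in monitoring and support processes.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Training side: persist the accepted model for the production system.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "default_risk_model.joblib")

# Production side: load the model and score incoming loan applications
# in real time as the transactions happen.
deployed = joblib.load("default_risk_model.joblib")
risk = deployed.predict_proba(X[:1])[0][1]  # probability of default
print(f"Predicted default risk: {risk:.2%}")
```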

In a nutshell, the deployment step encapsulates the following:

  • Deployment of the model into the production system.
  • Developing strategies for operational support, monitoring and maintenance of the model.

This concludes the data mining process as per the CRISP-DM methodology. There might be industries or sectors that do not follow the CRISP-DM methodology as religiously and have their own procedures for such projects. Having said that, even the derived data mining processes in those industries often implement the CRISP-DM methodology alongside other measures or plans.

Helping Hands to Digital Transformation: What are the Latest Data Mining Tools and Techniques?

Having understood the data mining process in detail, it is now time to turn our attention to the latest data mining tools and techniques. If you recollect the business understanding step of the data mining process, one of its major aspects was to determine which data mining tool to use; later in the process, during modelling, the appropriate data mining technique is selected to best suit the business needs. This is how the data mining tools and techniques relate to the data mining process.

Let us start with a bird's-eye view of the data mining techniques, followed by the data mining tools that are in trend in 2020.

What are some of the latest data mining techniques currently being employed in 2020?

Data mining techniques are sets of logical operations performed by a computer on a data set so that the raw data can be processed into a better understanding and thus facilitate better decision making. Below are some of the data mining techniques currently ruling the data analytics space in 2020:

  • Pattern Recognition in data sets: Pattern recognition is the technique of figuring out patterns in data sets. For example, in our familiar banking application, there might be patterns where loan demand soars in the months when university admissions are at their peak, or goes up during festivals when families try to buy a house, a car or a two-wheeler. Conversely, loan demand might go down in certain months, depending on the geography and demography of the location. These patterns have to be understood, and data mining helps us do that. Pattern recognition helps decision makers better optimize their operations so that customers' requirements are met.
  • Data Classification: The classification problem is the classic problem where data points are grouped based on certain attributes. In our banking application, depending on the salary, the repayment history, property details, etc., a loan application can be designated as 'Low Risk', 'Moderate Risk' or 'High Risk'. This is a classification problem: data mining experts first classify the data points into one of the categories and then apply the business policies suited to those classes.
  • Data Association: Data association, better known as association rule mining, finds association rules among data points. To understand it, think about your last purchase on your favorite e-commerce website. While checking out a product, if one scrolls down, one generally sees a section entitled "customers also bought", listing products similar to the one currently being viewed. This is done via data association, or association rule mining.
  • Outlier Detection: One of the most interesting data mining techniques is perhaps outlier detection. An outlier in a data set is any anomaly in the data. For example, in our banking application, let us assume that, depending on the demography, the salary of the customers varies from 10L to 20L per annum. Now imagine a wealthy businessperson applying for a loan in the same bank with an annual income of about 10 Cr. This is an outlier which, if it goes unnoticed or undetected during the mining process, can enormously skew the results, causing the entire model to fail the business requirement (a minimal sketch of one common detection rule follows this list). If there are too many outliers in the data set, it calls for an evaluation of the data collection mechanism, since too many outliers create confusion and also call into question the quality of the data.
  • Cluster Formation: Clustering, or cluster formation, is a data mining technique that focuses on grouping the data set into clusters based on certain attributes of the data. Logically, it is very similar to classification, but technically the two differ: clustering is an unsupervised data mining technique, whereas classification is a supervised data mining problem.
  • Regression: Regression is a data mining technique that works with a set of data to predict the outcome of a target variable. For example, in our banking application, regression might be used to predict the amount of loan requirements in different months or seasons of the year. In mathematical terms, regression models the relationship of a variable or attribute with other variables or attributes.
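
To illustrate one of these techniques concretely, here is the minimal outlier detection sketch promised above, using the common interquartile range (IQR) rule on hypothetical incomes. The rule is a standard statistical heuristic, not something specific to any one tool.

```python
import pandas as pd

# Hypothetical annual incomes (in lakhs): most customers earn 10L-20L,
# while 1000L (10 Cr) is the wealthy applicant from the example above.
incomes = pd.Series([12, 15, 11, 18, 14, 19, 13, 16, 1000])

# Classic IQR rule: flag points beyond 1.5 * IQR from the quartiles.
q1, q3 = incomes.quantile(0.25), incomes.quantile(0.75)
iqr = q3 - q1
outliers = incomes[(incomes < q1 - 1.5 * iqr) | (incomes > q3 + 1.5 * iqr)]
print(outliers)  # the 10 Cr applicant is flagged
```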

Now that we understand some of the data mining techniques popular in 2020, let us look at the data mining tools available in 2020, to complete our picture of data mining tools and techniques.

What are some of the data mining tools available in 2020?

  • RapidMiner: RapidMiner is a data science platform used for data analytics, predictive modelling, text analytics, machine learning and deep learning. RapidMiner also supports the majority of the data mining process, including data visualization, data preparation, model building, model evaluation and model optimization. Thanks to this wide range of functionality, it is arguably the most popular data mining tool in 2020.
  • SAS: SAS is a closed-source programming language that provides a rich library of functions and tools to aid data mining experts in statistical modelling. SAS is one of the best programming tools in the market, but it comes with the disadvantage of being exceptionally expensive; as a result it is mostly used by large firms, which also makes it a little less popular in the market.
  • R Programming: R is perhaps one of the most loved tools among data mining professionals. R provides a very rich library for data analytics, data visualization and data preparation, which makes it very user-friendly. Its success comes from a wide range of visualization tools that are easy to use, intuitive and easily animated, which makes the visualizations a lot simpler and easier to understand.
  • Apache Spark: Apache Spark is an improvement on Hadoop that can process data even faster than Hadoop's MapReduce functionality. Spark ships with an extensive machine learning API (MLlib) that enables data scientists and data mining experts to make strong predictions from raw data.
  • Python Programming: When it comes to data mining and data analytics, Python is perhaps the most popular choice in terms of how easy it is to learn and to implement complex data mining algorithms with. A decent software developer can pick up Python quickly and start building sophisticated data mining pipelines to make sense of the data. Python not only supports data mining and data analytics but also has tremendous support for software development, testing, internet technologies, etc., which makes it more complete and robust compared to other tools in the market.
  • IBM SPSS Modeler: IBM SPSS Modeler and its workbench are perhaps one of a kind among data mining tools, since they allow users to build sophisticated data mining flows with little or absolutely no programming knowledge.
  • Tableau: Tableau is a data visualization tool that brings intuitive and user-friendly graphics to the table. Its popularity comes from the ease with which it can interface with databases, servers and OLAP (Online Analytical Processing) systems.

This brings me to the end of the article, which has focused on the data mining process and on data mining tools and techniques in 2020.

Data is King: Really!

Throughout the article, we have been working on the banking application to predict which customers would be likely to default on repayments once a loan is sanctioned. Remember when you were given the mammoth Excel or CSV file containing big blocks of numbers and were asked to give your views on the data? You went from having no idea what to do with it to actually developing a smart application that can predict defaulters and save millions by not sanctioning high-risk loans; congratulations, you just saved a bank from going bankrupt. Now do you believe data is king? Because I do. Without data, none of this would have been possible, no matter how powerful and how stringent our algorithms are. Data really is the king in 2020.

A Quick Wrap Up: Thanking our digital heroes

We have seen the various steps involved in a data science project. This is called the CRISP-DM methodology of the data mining process, and it comprises the following steps:

  • Business Understanding
  • Data Understanding
  • Data Preparation
  • Modelling
  • Evaluation
  • Deployment

We also took a brief look at the data mining tools and techniques, starting with the techniques and then the tools currently in the market. Let's recollect the data mining techniques, which are as follows:

  • Pattern recognition in data sets
  • Data classification
  • Data Association
  • Outlier Detection
  • Cluster Formation
  • Regression

Let us also have a quick look at the data mining tools that are in trend in 2020. They are as follows:

  • RapidMiner
  • SAS
  • R Programming
  • Apache Spark
  • Python Programming
  • IBM SPSS Modeler
  • Tableau

Data mining is perhaps the most exciting and interesting aspect of living in the data-driven world around us. With the right choice of questions, tools and techniques, data has the power to bring digital transformation to an entire business.

Check out our page for more such interesting blogs. Happy Reading!
