Bioinformatics Programming

How to preprocess data for clustering in MATLAB?

Dr. Muniba Faiza


Data preprocessing is an essential first step in machine-learning-based clustering. It removes noise and leads to better results. In this article, we discuss the steps involved in data preprocessing using MATLAB [1].

We are using sample data in the form of a CSV file consisting of 4 columns as features and 10 rows. The data contains information about the employees of a company, including their location, age, salary, and experience. It has some missing values, which we will deal with in this tutorial. The complete code is provided at the end.

Steps in data preprocessing

  1. Missing values

    First, you have to deal with the missing values in your data. There are three ways to do that:

    i) Delete rows/columns containing missing values. You can either delete them all or delete them based on the relative number of missing values. For example, delete a row/column only if it contains more than 3 missing values.
    ii) Using mean. You can replace the missing values with the mean of the row/column.
    iii) Using median. You can also fill in the missing values based on the median of the column/row.

  2. Deal with non-numeric data

    Non-numeric data must be handled before clustering. For example, our sample data has ‘Location’ as a non-numeric feature, which we will have to convert into numeric values.

  3. Deal with outliers

    The next step is to remove outliers from the dataset so that they cannot affect the final clustering. As with missing values, this can be done in two ways: either by deleting the complete row/column or by replacing the outlier values with the mean or median.

  4. Feature scaling

    Feature scaling is important because the dataset may contain features measured on very different scales. For instance, our features include location, age in years, salary in rupees, and experience in years. There are two ways to do the feature scaling:
    i) Standardization
    ii) Normalization
    If your values are on fairly similar scales, standardization works well; otherwise, go for normalization. However, this is not a strict rule, and you can choose either method.

  5. Store preprocessed data

    The preprocessed data is stored as a table in a variable. This data will be used later for clustering.

Now, let’s perform these steps on our sample data one by one.

Preparing the code

If you look at the provided script, the first command is ‘clear’. It clears the workspace before running the script.

Reading data

Read the data table using the ‘readtable’ command and store it in a variable.

clear;

data = readtable('data.csv')

Dealing with missing values

Mean Method

As explained above, in this method we replace the missing values using the mean of the column. For example, for the Age column, we can do the following.

M_Age = mean(data.Age, 'omitnan');
U_Age = fillmissing(data.Age, 'constant',M_Age);
data.Age = U_Age;

Here, the missing values are replaced with the mean of Age.

Similarly, you can do for the rest of the columns except the ‘Location’. We will deal with it later.
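Rather than repeating these lines for each column, the mean fill can be applied to every numeric column in a loop. A minimal sketch, assuming the sample data.csv with numeric columns such as Age, Salary, and Experience:

```matlab
% Fill missing values in every numeric column with that column's mean.
data = readtable('data.csv');
isNum = varfun(@isnumeric, data, 'OutputFormat', 'uniform');
for name = data.Properties.VariableNames(isNum)
    col = data.(name{1});
    data.(name{1}) = fillmissing(col, 'constant', mean(col, 'omitnan'));
end
```

This automatically skips non-numeric columns such as ‘Location’, which we handle separately below.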

Median Method

As explained above, in this method we replace the missing values using the median of the column. For example, for the Age column, we can do the following.

M_Age = median(data.Age, 'omitnan');
U_Age = fillmissing(data.Age, 'constant',M_Age);
data.Age = U_Age;

Here, the missing values are replaced with the median of Age. We are using the mean method for missing values in our code.

Delete Method

You can delete rows or columns containing missing values. This method is generally not used, especially with a small dataset. Therefore, we use the mean method in our code. However, you can use the following commands if you prefer the delete method for your data.

missing_data = rmmissing(data);   % removes rows containing missing data (default)
missing_data = rmmissing(data,2); % removes columns containing missing data
missing_data = rmmissing(data,1); % removes rows (explicit dimension)
data = missing_data;

Deleting based on the relative percentage

We can also delete the rows or columns based on the relative percentage of missing values.

missing_data = rmmissing(data,'MinNumMissing',n); % removes rows with at least n missing values
data = missing_data;

Here, n is the minimum number of missing values. For example, if n is 3, any row containing 3 or more missing values is removed.

Dealing with non-numeric values

For non-numeric data such as ‘Location’ in this example, we can encode the country names as indicator (dummy) variables using the dummyvar function. It splits the country names into separate columns, placing a 1 in the column corresponding to each row’s country and 0 elsewhere.

location_data = categorical(data.Location);
D = dummyvar(location_data);
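The dummy columns can then be joined back to the table, replacing the original text column. A sketch, assuming the sample data.csv with a Location column (removevars requires R2018a or later):

```matlab
data = readtable('data.csv');
location_data = categorical(data.Location);
D = dummyvar(location_data);           % one 0/1 indicator column per country
names = categories(location_data);     % column order matches the categories
% drop the text column and append the numeric indicator columns
data = [removevars(data, 'Location'), array2table(D, 'VariableNames', names')];
```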

Outliers

Now, let’s remove outliers from our data. You can either delete the rows containing outliers or fill them in.

Deleting rows with outliers

outlier = isoutlier(data.Age);
data = data(~outlier,:);

Filling outliers

Age = filloutliers(data.Age,'clip','mean');
data.Age = Age;

Here, the 'mean' option detects outliers as values more than three standard deviations from the mean, and 'clip' replaces them with the nearest non-outlier threshold value.

Similarly, you can do this for all columns in your data.
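As with missing values, the outlier filling can be wrapped in a loop over the numeric columns. A sketch (run it after the missing values have been filled, so that NaNs do not interfere with outlier detection):

```matlab
data = readtable('data.csv');
isNum = varfun(@isnumeric, data, 'OutputFormat', 'uniform');
for name = data.Properties.VariableNames(isNum)
    % detect outliers as values more than 3 standard deviations from the
    % mean, then clip them to the nearest threshold
    data.(name{1}) = filloutliers(data.(name{1}), 'clip', 'mean');
end
```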

Feature Scaling

For feature scaling, you can either do standardization or normalization.

Standardization

stand_age = (data.Age - mean(data.Age))/std(data.Age);
data.Age = stand_age;

Similarly, do that for all columns in your data.

Normalization

normalize_age = (data.Age - min(data.Age)) / (max(data.Age) - min(data.Age));
data.Age = normalize_age;

It depends on you how you would like to scale your data.
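If you prefer built-in functions over the explicit formulas, MATLAB’s normalize function (R2018a or later) covers both methods:

```matlab
data = readtable('data.csv');
stand_age = normalize(data.Age);          % z-score standardization (default)
norm_age  = normalize(data.Age, 'range'); % min-max normalization to [0, 1]
```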

Storing preprocessed data

The last step is to store your preprocessed data in a variable.

writetable(data,'preprocessed_data.csv');

Now, let’s look at the code with different methods.

clear;
%% ---------------- Importing the dataset ----------------
data = readtable('data.csv');

%% ---------------- Data Preprocessing -------------------
% ---------------- Handling Missing Values ---------------
% ---------------- Method: Using Mean --------------------
M_Age = mean(data.Age, 'omitnan');
U_Age = fillmissing(data.Age, 'constant', M_Age);
data.Age = U_Age;

% ---------------- Method: Using Median ------------------
M_Age = median(data.Age, 'omitnan');
U_Age = fillmissing(data.Age, 'constant', M_Age);
data.Age = U_Age;

% ---------------- Method: Deleting rows or columns ------
missing_data = rmmissing(data);   % remove rows with missing values
missing_data = rmmissing(data,2); % remove columns with missing values
data = missing_data;

% ------- Method: Deleting rows or columns based on relative number -------
missing_data = rmmissing(data,'MinNumMissing',3);   % rows with at least 3 missing values
missing_data = rmmissing(data,2,'MinNumMissing',2); % columns with at least 2 missing values
missing_data = rmmissing(data,2,'MinNumMissing',3); % columns with at least 3 missing values
data = missing_data;

%% ---------------- Handling Outliers --------------------
% ---------------- Method: Deleting Rows -----------------
outlier = isoutlier(data.Age);
data = data(~outlier,:);

% ---------------- Method: Filling Outliers --------------
Age = filloutliers(data.Age,'clip','mean');
data.Age = Age;

%% ---------------- Feature Scaling ----------------------
% ---------------- Method: Standardization ---------------
stand_age = (data.Age - mean(data.Age))/std(data.Age);
data.Age = stand_age;

% ---------------- Method: Normalization -----------------
normalize_age = (data.Age - min(data.Age)) / (max(data.Age) - min(data.Age));
data.Age = normalize_age;

%% ---------------- Storing preprocessed data ------------
writetable(data,'D:\preprocessed_data.csv');

We have demonstrated each step for one column only; you can include the remaining columns in the respective sections of the code. Save this file as preprocessing.m, keep data.csv in the same directory (or provide the full path in the script), and then run it.


References

  1. MATLAB and Statistics Toolbox Release 2022b, The MathWorks, Inc., Natick, Massachusetts, United States.

Dr. Muniba is a bioinformatician based in New Delhi, India. She completed her PhD in Bioinformatics at the South China University of Technology, Guangzhou, China, and has cutting-edge knowledge of bioinformatics tools, algorithms, and drug design.
