Survey of Drug Discovery Techniques

Divesh Ramesh Kubal surveys techniques used in drug discovery and compares them with one another.

This study surveys techniques applied in the field of drug discovery, focusing on neural networks, active machine learning, and support vector regression, and compares them with other machine learning methods. The central problem is to search a very large collection of compounds for those that bind to a given target molecule. The results are promising, and these techniques can be applied throughout the drug discovery process to find new treatments for human diseases.

Keywords—Active machine learning, multitask neural networks, drug discovery

I. INTRODUCTION

Discovering a new treatment for a particular human disease may sound easy, but it is an immensely complicated challenge. Finding a candidate treatment is not enough: the newly discovered treatment must also attack the source of the illness while staying within predefined metabolic and toxicity constraints. Drug discovery is essentially an iterative process of finding compounds that are active against a biological target, and in each iteration compounds must be selected from some accessible collection or dataset. The first step is ‘hit finding’, in which pharmaceutical companies screen millions of compounds against a given druggable target. A major disadvantage of this screening process is its expense, since it is automated by robots. A low hit rate is one of the challenges that has to be overcome when machine learning methods are applied to virtual screening.

Computational time depends on how the data is selected. Traditional machine learning methods train on static data. Here, the data gathered in each round is analyzed by the model, and after every round a new model is constructed, which in turn selects the compounds for the next round. There are several strategies for selecting the compounds to be used for training: with a random strategy, the number of ‘hits’ grows only linearly, while selecting compounds ‘closest to some previously found active compounds’ is another option. All of these methods are discussed in the sections that follow.

 

II. TECHNIQUES

A. Active Machine Learning

Active machine learning differs from traditional machine learning in that it uses dynamic rather than static training data, thereby increasing the performance of the trained model. Rather than fixing the training data in advance, we let it grow with each round. There are several strategies for selecting the new training data, as sketched in the example below. The simplest strategy is to select new compounds at random; it is not very effective, because most compounds are inactive and the number of hits grows only linearly with the total number of compounds tested. Another strategy is to select compounds that are ‘closest’ to some previously found active compounds; its disadvantage is that it searches only locally and will not find actives that are remote from the previously known ones. We prefer the largest-positive-score strategy, which selects the compounds the current model scores as most likely to be active.
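Below is a minimal Python sketch of how one round of such selection might look, assuming compounds are already encoded as fixed-length descriptor vectors and using scikit-learn's LinearSVC as a stand-in scorer; the function and variable names (select_batch, X_pool, and so on) are illustrative, not from the original study.

import numpy as np
from sklearn.svm import LinearSVC

def select_batch(model, X_pool, X_known_actives, strategy, batch_size, rng):
    """Pick the next batch of compounds to screen from the unlabeled pool."""
    if strategy == "random":
        # Baseline: the number of hits grows only linearly with compounds tested.
        return rng.choice(len(X_pool), size=batch_size, replace=False)
    scores = model.decision_function(X_pool)
    if strategy == "largest_positive":
        # Preferred strategy: compounds the current model scores as most likely active.
        return np.argsort(-scores)[:batch_size]
    if strategy == "near_boundary":
        # Uncertainty sampling: smallest absolute score.
        return np.argsort(np.abs(scores))[:batch_size]
    if strategy == "closest_to_active":
        # Local search: smallest distance to any previously found active compound.
        dists = np.linalg.norm(
            X_pool[:, None, :] - X_known_actives[None, :, :], axis=-1
        ).min(axis=1)
        return np.argsort(dists)[:batch_size]
    raise ValueError(f"unknown strategy: {strategy}")

# One round: retrain on everything screened so far, then select the next batch.
rng = np.random.default_rng(0)
X_screened = rng.normal(size=(200, 16))        # descriptors of screened compounds
y_screened = rng.integers(0, 2, size=200)      # 1 = active, 0 = inactive
X_pool = rng.normal(size=(5000, 16))           # untested compounds
model = LinearSVC().fit(X_screened, y_screened)
batch = select_batch(model, X_pool, X_screened[y_screened == 1],
                     "largest_positive", batch_size=250, rng=rng)

In a full active learning run, the compounds in batch would be screened, their labels added to the training set, and the model retrained before the next round.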

The figures show the result of each selection strategy in each round, starting from round 0. They plot the total fraction of hits (in 5% test batches) for round 0 (left) and round 1 (right) of the Thrombin data set. In each case, all four selection strategies are plotted as a function of the fraction of compounds tested: random (black “x”), closest to an active (green circle), largest positive (red box), and near boundary (blue plus). For round 0, the total number of actives is less than 5%. For round 1, the magenta curve shows the optimal strategy, which picks only actives in each test batch until all actives have been selected.

B. Multitask Neural Networks

An artificial neural network is an interconnected group of nodes, inspired by the neurons in a brain. A neural network is a non-linear classifier: it performs repeated linear and non-linear transformations of the given input. We first train the network on some training data, choosing the number of hidden layers, to obtain a trained model that then produces an appropriate output for any input supplied by the user.

The transformation performed at each layer is

x_{i+1} = \sigma(W_i x_i + b_i)

Where,

x_i: Input to the ith layer of the neural network

W_i: Weight matrix for the ith layer

b_i: Bias vector for the ith layer
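A minimal numpy sketch of this per-layer transformation, assuming the sigmoid function for σ (the text does not say which non-linearity is used) and purely illustrative layer sizes:

import numpy as np

def sigma(z):
    # Non-linearity applied at each layer (sigmoid assumed here).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x0, weights, biases):
    """Apply x_{i+1} = sigma(W_i x_i + b_i) for each layer in turn."""
    x = x0
    for W, b in zip(weights, biases):
        x = sigma(W @ x + b)
    return x

# Illustrative shapes: a 1024-dimensional input and two hidden layers.
rng = np.random.default_rng(0)
dims = [1024, 512, 128]
weights = [rng.normal(scale=0.01, size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
x_n = forward(rng.normal(size=dims[0]), weights, biases)   # output of the last layer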

After ‘n’ such transformations, the output x_n of the neural network is fed to a simple linear (softmax) classifier, which gives the probability that the user-supplied input belongs to each class:

P(y = j \mid x_0) = \frac{e^{(W x_n + b)_j}}{\sum_{m=1}^{M} e^{(W x_n + b)_m}}

Where,

M: Number of class labels (here M = 2, as we have only two class labels, active and inactive)

x_0: Input to the network

j: Class label

In the final step, our trained multitask network attaches N linear classifiers, one for each task, to the final shared layer.
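A self-contained sketch of the full architecture described here: a shared trunk of the transformations above, with a wide first hidden layer and a narrower second one (the pyramidal shape mentioned in the Conclusion), feeding N task-specific softmax classifiers with M = 2 outputs each. The layer widths, number of tasks, and sigmoid non-linearity are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))            # per-layer non-linearity

def softmax(z):
    e = np.exp(z - z.max())                    # numerically stable softmax
    return e / e.sum()

# Shared pyramidal trunk: wide first hidden layer, narrower second layer.
dims = [1024, 2000, 100]
trunk = [(rng.normal(scale=0.01, size=(dims[i + 1], dims[i])), np.zeros(dims[i + 1]))
         for i in range(len(dims) - 1)]

# N task-specific linear classifiers attached to the final shared layer,
# each giving probabilities over M = 2 labels (active / inactive).
n_tasks, M = 10, 2
heads = [(rng.normal(scale=0.01, size=(M, dims[-1])), np.zeros(M))
         for _ in range(n_tasks)]

def predict(x0):
    x = x0
    for W, b in trunk:                         # x_{i+1} = sigma(W_i x_i + b_i)
        x = sigma(W @ x + b)
    return [softmax(W @ x + b) for W, b in heads]

per_task_probs = predict(rng.normal(size=dims[0]))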

 

III. CONCLUSION

The question raised by the techniques above is: do massively multitask networks provide a performance boost over simple machine learning methods?

In the multitask neural networks we have designed, we use pyramidal architectures, i.e., a very wide first hidden layer followed by a narrower second hidden layer. We have observed that as the number of tasks and the amount of data increase, the performance of the multitask neural network also increases.

The main goal of this study is to help discover new treatments for human diseases. Applying these techniques in the real world could yield low-cost alternatives to expensive drugs, so that everyone can obtain appropriate treatment within their means, which would be a boon to society.

 
