Introduction
Data science is one of the fastest-growing fields in today's world. It is an interdisciplinary field that combines statistics, mathematics, computer science, domain knowledge, artificial intelligence, machine learning, and more. In short, data science is the practice of applying analytical techniques and methods to extract meaningful insights from data for use in strategic planning, decision-making, and similar purposes.
In this article, we briefly cover the top 30 data science interview questions and answers.
- What are the differences between supervised and unsupervised learning?
Supervised learning - It uses labeled (known) data as input and relies on a feedback mechanism to train on instances. The most commonly used supervised machine learning algorithms are logistic regression, decision trees, and support vector machines.
Unsupervised learning - It uses unlabeled data as input and has no feedback mechanism. The most commonly used unsupervised learning algorithms are k-means clustering, hierarchical clustering, and the Apriori algorithm.
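To make the contrast concrete, here is a minimal sketch using scikit-learn; the synthetic data set and parameter choices are made up purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised: the labels y drive training (the feedback mechanism).
clf = LogisticRegression().fit(X, y)
print("Supervised accuracy:", clf.score(X, y))

# Unsupervised: only X is used; the algorithm discovers structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster labels:", km.labels_[:10])
```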
- Explain the steps in making a decision tree.
- Take the entire data set as input
- Calculate the entropy of the target variable, as well as the predictor attributes
- Calculate the information gain of all attributes (information gain measures how well an attribute sorts objects into different classes)
- Choose the attribute with the highest information gain as the root node
- Repeat the same procedure on every branch until the decision node of each branch is finalized.
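As an illustration of steps 2–4, here is a minimal NumPy sketch of the entropy and information-gain calculations; the toy labels are made up for demonstration:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    """Entropy of the parent minus the weighted entropy of the child splits."""
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

# Toy example: a split that separates the classes well has high gain.
parent = ['yes'] * 5 + ['no'] * 5
left = ['yes'] * 4 + ['no']
right = ['no'] * 4 + ['yes']
print(information_gain(parent, [left, right]))  # > 0: the split is informative
```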
- How can you select k for k-means?
We use the elbow method to select k for k-means clustering. The idea of the elbow method is to run k-means clustering on the data set for a range of values of k and, for each value, compute the within-cluster sum of squares (WSS), defined as the sum of the squared distances between each member of a cluster and its centroid. Plotting WSS against k, the "elbow" point where the curve starts to flatten indicates a good choice of k.
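A minimal sketch of the elbow method with scikit-learn, whose fitted model exposes the WSS as the `inertia_` attribute; the blob data here is synthetic:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# inertia_ is the within-cluster sum of squares (WSS) for the fitted model.
wss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
       for k in range(1, 10)]

for k, w in zip(range(1, 10), wss):
    print(k, round(w, 1))
# Plot k vs. WSS and pick the 'elbow' where the curve flattens (here, k = 4).
```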
- What is the significance of the p-value?
p-value ≤ 0.05 - This indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
p-value > 0.05 - This indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-value close to the 0.05 cutoff - This is considered marginal, meaning it could go either way.
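For instance, a two-sample t-test with SciPy produces a p-value that can be read against the 0.05 cutoff; the samples below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.5, scale=1.0, size=50)

# Null hypothesis: the two samples have equal means.
t_stat, p_value = stats.ttest_ind(a, b)
print(f"p-value = {p_value:.4f}")
if p_value <= 0.05:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```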
- How can time-series data be declared stationary?
A time series is considered stationary when its mean and variance are constant over time. In real-world scenarios, we generally do not find stationary time-series data.
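One common way to check this in Python is the augmented Dickey-Fuller test; a minimal sketch, assuming statsmodels is installed (the two series are synthetic):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
stationary = rng.normal(size=500)      # white noise: mean/variance constant
random_walk = np.cumsum(stationary)    # non-stationary: variance grows over time

for name, series in [("white noise", stationary), ("random walk", random_walk)]:
    # adfuller returns (test statistic, p-value, ...); a small p-value
    # lets us reject the null hypothesis of non-stationarity.
    stat, p_value = adfuller(series)[:2]
    print(f"{name}: ADF p-value = {p_value:.4f}")
```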
- How can you calculate accuracy using a confusion matrix?
The formula for accuracy from the confusion matrix is:
Accuracy = (True Positives + True Negatives) / Total Observations
For example, with 262 true positives and 347 true negatives out of 650 total observations:
Accuracy = (262 + 347) / 650
= 609 / 650
≈ 0.937
As a result, we get an accuracy of roughly 94 percent.
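A minimal sketch of the same calculation with scikit-learn, using hypothetical labels from a small binary classifier:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical true and predicted labels for a binary classifier.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy:", (tp + tn) / (tp + tn + fp + fn))  # from the matrix
print("Accuracy:", accuracy_score(y_true, y_pred))   # same value directly
```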
- 'People who bought this also bought…' recommendations seen on Amazon are a result of which algorithm?
The recommendation engine is accomplished with collaborative filtering. Collaborative filtering draws on the behavior of other users and their purchase history, in terms of ratings, selections, etc.
The engine predicts what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
- Which of the following machine learning algorithms can be used for imputing missing values of both categorical and continuous variables?
- K-means clustering
- Linear regression
- K-NN (k-nearest neighbor)
- Decision trees
The k-nearest neighbor (k-NN) algorithm can be used: when a value is missing, it finds the nearest neighbors based on all the other features and fills the gap from their values, which works for both categorical and continuous variables.
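A minimal sketch using scikit-learn's `KNNImputer`; the matrix and neighbor count are made up for illustration, and note that categorical columns would first need a numeric encoding:

```python
import numpy as np
from sklearn.impute import KNNImputer

# KNNImputer fills a missing entry with the mean of that feature across
# the k nearest rows (nearness measured on the non-missing features).
X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
# Categorical columns must be encoded numerically (e.g., ordinal codes)
# before k-NN imputation can be applied to them.
```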
- What is the ROC curve?
The ROC (receiver operating characteristic) curve is the graph of the True Positive Rate on the y-axis against the False Positive Rate on the x-axis, and it is used to evaluate binary classification.
The False Positive Rate (FPR) is calculated by taking the ratio between False Positives and the total number of negative samples, and the True Positive Rate (TPR) is calculated by taking the ratio between True Positives and the total number of positive samples.
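A minimal sketch computing the ROC curve and its AUC with scikit-learn; the data is synthetic and the model choice is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# roc_curve needs scores (probabilities), not hard class predictions.
scores = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC:", roc_auc_score(y_te, scores))  # area under the ROC curve
```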
- What is a Confusion Matrix?
The confusion matrix is a summary of the prediction results for a classification problem: a table that describes the performance of a model by comparing actual classes with predicted classes. For a classification model with n classes, it is an n×n matrix.
- How can one assess the Normal Distribution?
There are several methods to check the Normality of a Distribution. Some of the methods are:
- Histogram
- Kernel Density Estimation (KDE)
- Q-Q (quantile-quantile) plot
- Skewness
- Kurtosis
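A minimal sketch of some of these checks with SciPy; the sample is synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0, scale=1, size=1000)

# Near-zero skewness and near-zero excess kurtosis are consistent with normality.
print("Skewness:", stats.skew(sample))
print("Excess kurtosis:", stats.kurtosis(sample))

# Shapiro-Wilk: a formal normality test (null hypothesis = data are normal).
stat, p_value = stats.shapiro(sample)
print("Shapiro-Wilk p-value:", p_value)

# stats.probplot(sample, plot=ax) would draw the Q-Q plot with matplotlib.
```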
- What do you understand about Random Forest?
Random Forest is one of the most widely used machine learning algorithms, classified under the supervised learning technique. The forest is built by combining multiple decision trees, which together solve complicated problems and improve the performance of the model. With a large number of trees, the risk of overfitting is reduced and accuracy increases.
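A minimal sketch of training a random forest with scikit-learn; the data is synthetic and the parameters are illustrative defaults:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble of 100 decision trees, each trained on a bootstrap sample,
# with a random subset of features considered at every split.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Test accuracy:", rf.score(X_te, y_te))
```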
- Explain the term overfitting.
Overfitting, in simple terms, occurs when a statistical model fits its training data too closely. When this happens, the model starts learning from the noise and inaccurate data entries, so it performs well on the training set but poorly on new data. It is like tailoring a garment so tightly to one person that it fits no one else.
- How to avoid overfitting?
There are many ways to avoid the overfitting of statistical models. The most common ways are:
- Cross-validation
- Train with more data to help the model detect the right signals.
- By removing irrelevant features from the model.
- Overfitting can also be avoided by stopping training early: monitor the model's performance on held-out data at each iteration and stop once it stops improving.
- Through regularization, overfitting can be avoided. In this solution, techniques that artificially force the model to be simpler are used (see the sketch after this list).
- Ensembling is another way to avoid overfitting data.
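As a rough illustration of the regularization point above, here is a sketch in which an L2 (ridge) penalty improves the cross-validated score of an over-flexible polynomial model; all data and parameters are made up for demonstration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=30)

# A high-degree polynomial overfits; an L2 penalty (Ridge) reins it in.
for name, reg in [("no penalty", LinearRegression()),
                  ("ridge", Ridge(alpha=1.0))]:
    model = make_pipeline(PolynomialFeatures(degree=12), reg)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```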
- Define what is bias?
In statistics, bias is a systematic error in the estimation of a parameter: the expected value of the estimate differs from the true value. Biased results can be either underestimates or overestimates.
- List the types of biases that can happen during the sampling process
Some of the biases that occur during the sampling process are:
- Selection Bias
- Self-Selection Bias
- Observer Bias
- Survivorship Bias
- Pre-Screening or Advertising Bias
- Undercoverage Bias
- What do you mean by prior probability and likelihood?
Prior probability is the probability of an event computed before new data is collected; it expresses one's belief before the evidence is taken into account.
The likelihood, on the other hand, is the probability of obtaining the observed data given a particular value of the parameter.
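A worked example of how a prior and a likelihood combine via Bayes' rule; all the numbers are hypothetical:

```python
# Hypothetical example: a test for a condition affecting 1% of a population.
prior = 0.01                 # P(condition), before seeing any evidence
likelihood = 0.95            # P(positive test | condition)
false_positive_rate = 0.05   # P(positive test | no condition)

# Bayes' rule: posterior = likelihood * prior / evidence
evidence = likelihood * prior + false_positive_rate * (1 - prior)
posterior = likelihood * prior / evidence
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.161
```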
- What is backpropagation?
Backpropagation is short for "backward propagation of errors" and is also known as backprop or BP. It is an algorithm that tunes the weights of a neural network using the delta rule, i.e., gradient descent. By reducing the error rate, backpropagation helps to improve the generalization of the model.
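A minimal NumPy sketch of backpropagation on a tiny two-layer network learning XOR; the architecture, learning rate, and iteration count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network trained on XOR with plain gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the error and apply the delta rule
    d_out = (out - y) * out * (1 - out)       # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)        # error pushed back to hidden layer
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # should be close to [[0], [1], [1], [0]]
```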
- Explain Deep learning in your own words
Deep Learning comes under the rubric of Machine Learning. It is used to create models that predict and solve problems with relatively few lines of code, built on neural networks whose design is loosely based on the structure and functioning of the brain. Thanks to their efficiency and accuracy, deep learning systems can even surpass human performance on specific cognitive tasks.
- Define collaborative filtering
Collaborative Filtering is a technique used to filter items a user might like based on the interactions and data collected from other users.
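A minimal sketch of user-based collaborative filtering with cosine similarity; the ratings matrix is made up for illustration:

```python
import numpy as np

# Rows = users, columns = items; entries are ratings (0 = not rated).
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# User-based CF: find the user most similar to user 0 ...
target = 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]
sims[target] = -1                      # exclude the user themselves
neighbor = int(np.argmax(sims))

# ... and recommend items that the neighbor rated but the target has not.
recommend = np.where((ratings[target] == 0) & (ratings[neighbor] > 0))[0]
print("Most similar user:", neighbor, "-> recommend items:", recommend)
```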
- What do you mean by recommender systems?
Recommender systems are systems used to predict and recommend things a user might be interested in based on various factors. They anticipate the products a user is most likely to be interested in or purchase based on the user's browsing and purchase history. Companies that use recommender systems include Netflix and Amazon.
- What do you understand about the true-positive rate and false-positive rate?
TRUE-POSITIVE RATE - The true-positive rate gives the proportion of actual positives that are correctly predicted as positive: TPR = TP / (TP + FN).
FALSE-POSITIVE RATE - The false-positive rate gives the proportion of actual negatives that are incorrectly predicted as positive: FPR = FP / (FP + TN). A false positive is a case where the model predicts something is true when it is actually false.
- How is Data Science different from traditional application programming?
The primary and vital difference between data science and traditional application programming is that in traditional programming, one has to handcraft the rules that translate input to output, whereas in data science, the rules are learned automatically from the data.
- Why is Python used for Data Cleaning in DS?
Data scientists and technical analysts must convert huge amounts of raw data into a usable form. Data cleaning includes removing malformed or corrupted records, outliers, inconsistent values, redundant formatting, etc. Pandas and Matplotlib are among the most used Python libraries for cleaning and inspecting data.
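A minimal pandas sketch of a few of these cleaning steps; the data frame and the thresholds are made up for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":  [25, 31, 31, np.nan, 230],   # a missing value and an outlier
    "city": ["NY", "LA ", "LA ", "SF", "NY"],
})

df = df.drop_duplicates()                        # remove redundant rows
df["city"] = df["city"].str.strip()              # fix inconsistent formatting
df["age"] = df["age"].fillna(df["age"].median()) # impute missing values
df = df[df["age"].between(0, 120)]               # drop impossible outliers
print(df)
```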
- What is variance in Data Science?
Variance is a value that describes how the individual figures in a set of data are distributed about the mean: it is the average of the squared differences of each value from the mean. Data scientists use variance to understand the spread of a data set.
- What is pruning in a decision tree algorithm?
In data science and machine learning, pruning is a technique related to decision trees. Pruning simplifies a decision tree by removing branches (rules) that contribute little predictive power, which reduces complexity and improves accuracy on unseen data. Reduced-error pruning and cost-complexity pruning are among the different types of pruning.
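A minimal sketch of cost-complexity pruning via scikit-learn's `ccp_alpha` parameter; the data set and alpha values are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ccp_alpha > 0 applies cost-complexity pruning: subtrees whose complexity
# penalty outweighs their contribution to accuracy are collapsed.
for alpha in [0.0, 0.01, 0.05]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    print(f"alpha={alpha}: {tree.get_n_leaves()} leaves, "
          f"test accuracy={tree.score(X_te, y_te):.3f}")
```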
- What is entropy in a decision tree algorithm?
Entropy is a measure of randomness or disorder in a group of observations. It determines how a decision tree chooses to split data, and it is also used to check the homogeneity of the given data. If the entropy is zero, the sample is entirely homogeneous; if the entropy is one (for a binary target), the sample is equally divided between the classes.
- What is information gain in a decision tree algorithm?
Information gain is the expected reduction in entropy, and it decides how the tree is built. For a parent node and a set of training examples, it is calculated as the difference between the entropy before the split and the weighted entropy after the split; choosing the attribute with the highest gain guides the tree toward more informative splits.
- What is k-fold cross-validation in machine learning?
K-fold cross-validation is a procedure used to estimate a model's skill on new data. The data set is split into k folds, so every observation appears in the test set exactly once and in the training set k − 1 times. K-fold cross-validation estimates accuracy but does not by itself improve it.
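A minimal sketch of 5-fold cross-validation with scikit-learn; the data is synthetic and the model choice is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)

# Each of the 5 folds serves as the test set exactly once.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=kf)
print("Fold accuracies:", scores.round(3))
print("Mean accuracy:", scores.mean().round(3))
```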
- What is an RNN (recurrent neural network)?
An RNN is a neural network that operates on sequential data. RNNs are used in language translation, voice recognition, image captioning, etc. There are different types of RNN architectures, such as one-to-one, one-to-many, many-to-one, and many-to-many. RNNs are used in Google's Voice Search and Apple's Siri.
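A minimal NumPy sketch of a single vanilla RNN cell stepping through a short sequence; all sizes and weights are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# One step of a vanilla (Elman) RNN cell: the hidden state carries
# information from earlier elements of the sequence forward in time.
input_size, hidden_size = 3, 4
W_xh = rng.normal(size=(input_size, hidden_size))   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden (recurrence)
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                    # initial hidden state
sequence = rng.normal(size=(5, input_size))  # a sequence of 5 input vectors

for x_t in sequence:
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)  # same weights reused at each step
print("Final hidden state:", h.round(3))
```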