Deep Learning Questions

1. Why CNN?

  • Translation invariance: a feature can be detected regardless of where it appears in the image.
  • Preserves spatial information: neighbouring pixels are processed together, so local structure is retained.
  • Shared weights: the same kernel is reused across the whole image, reducing memory requirements and computation time (see the sketch after this list).
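A minimal sketch of the weight-sharing point, comparing the parameter count of a fully connected layer against a 3x3 convolution on the same input; the layer sizes here are hypothetical, chosen only for the comparison:

```python
# Weight sharing: for a 28x28 grayscale input, compare the parameter
# count of one fully connected layer against one 3x3 convolution that
# produces the same number of output channels (sizes are illustrative).
h, w = 28, 28
out_channels = 32

# Dense layer: every input pixel connects to every output unit
dense_params = (h * w) * (h * w * out_channels)

# Conv layer: one 3x3 kernel per output channel, reused at every position
conv_params = 3 * 3 * 1 * out_channels

print(f"dense: {dense_params:,} weights")  # 19,668,992
print(f"conv:  {conv_params:,} weights")   # 288
```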

2. Batch Normalization

  • Batch normalization is a technique used in deep learning to improve the performance and stability of neural networks.
  • It normalizes the activations of the neurons in a layer over each mini-batch of training data. This reduces internal covariate shift, the change in the distribution of a layer's inputs caused by the weights updating during training. Normalization scales each feature activation of the layer to zero mean and unit standard deviation (a minimal sketch follows this list).
  • Batch normalization can improve the convergence of the training process. It can also regularize the model, which can improve its generalization performance on unseen data.
  • Batch normalization is typically applied before the activation function of a layer.
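A minimal NumPy sketch of the normalization step described above; gamma and beta stand in for the learned scale and shift parameters, and the input values are arbitrary toy data:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch to zero mean and unit variance per feature,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                 # per-feature mean over the batch
    var = x.var(axis=0)                   # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy mini-batch: 4 samples, 3 features on very different scales
x = np.array([[1.0, 200.0, -3.0],
              [2.0, 180.0, -1.0],
              [3.0, 220.0,  0.0],
              [4.0, 210.0,  2.0]])
out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0).round(6))  # ~0 for every feature
print(out.std(axis=0).round(6))   # ~1 for every feature
```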

3. Commonly Used Activation Functions

  • Sigmoid
  • Hyperbolic Tangent Function (tanh(x))
  • Rectified linear activation (ReLU)
  • Leaky rectified linear activation (Leaky ReLU)
  • Swish
  • Mish
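Reference implementations of these activations, sketched in plain NumPy; the leaky-ReLU slope and Swish beta below are common default choices, not values from the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small non-zero slope for negative inputs avoids "dead" units
    return np.where(x > 0, x, alpha * x)

def swish(x, beta=1.0):
    return x * sigmoid(beta * x)              # Swish: x * sigmoid(beta * x)

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))   # Mish: x * tanh(softplus(x))

x = np.linspace(-3, 3, 7)
for f in (sigmoid, tanh, relu, leaky_relu, swish, mish):
    print(f.__name__, f(x).round(3))
```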

4. Bias and Variance

  • In machine learning, bias and variance are two sources of error that affect a model's accuracy. Bias is the difference between a model's predictions and the true values of the underlying data. It arises when a model is overly simplified or makes incorrect assumptions about the data. High bias leads to underfitting, where the model cannot capture the patterns in the data.
  • Variance, on the other hand, is the variability of a model's predictions for a given input. It arises when a model is so complex that it fits the noise in the data rather than the underlying signal. High variance leads to overfitting, where the model is too specific to the training data and fails to generalize to new data.
  • In general, a model with low bias and low variance is desirable because it can accurately capture the underlying patterns in the data and make reliable predictions on new data. Finding the right balance between bias and variance is an important aspect of model training and evaluation (see the sketch below).
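A small sketch of the trade-off, fitting polynomials of increasing degree to noisy samples of a sine curve; the degrees, noise level, and sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 30)  # noisy training data

x_test = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_test)                 # the true signal

# Degree 1 underfits (high bias); degree 10 chases the noise (high variance)
for degree in (1, 4, 10):
    coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    y_pred = np.polyval(coeffs, x_test)
    mse = np.mean((y_pred - y_true) ** 2)    # error against the true signal
    print(f"degree {degree:2d}: test MSE = {mse:.3f}")
```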
