Artificial Neural Networks: A Simple Introduction

  • Published: 13 Jan 2025

Comments • 19

  • @VaibhavSingh-iz4bf
    @VaibhavSingh-iz4bf 3 years ago +2

    What was the motivation behind choosing sigmoid- and ReLU-type functions as activation functions?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      The motivation for the sigmoid in an ANN is the same as in Logistic Regression: it makes the log-odds a linear function of the input parameters/features. ReLU is chosen for its piece-wise linearity, since linear functions are easy to handle and fast to compute.
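
      A minimal NumPy sketch of both activations (the function names are illustrative, not from the video):

      ```python
      import numpy as np

      def sigmoid(z):
          # Squashes any real input into (0, 1); as in Logistic Regression,
          # it makes the log-odds log(p / (1 - p)) linear in the inputs.
          return 1.0 / (1.0 + np.exp(-z))

      def relu(z):
          # Piece-wise linear: 0 for z < 0, identity for z >= 0.
          # Cheap to compute, and its gradient is simply 0 or 1.
          return np.maximum(0.0, z)

      z = np.array([-2.0, 0.0, 2.0])
      print(sigmoid(z))  # approximately [0.119 0.5 0.881]
      print(relu(z))     # [0. 0. 2.]
      ```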

  • @NaimishSharma-jp9yu
    @NaimishSharma-jp9yu 3 years ago +3

    Is it possible that, for a particular dataset, the accuracy we get with ANNs (even after optimization) is less than with Logistic Regression/SVMs, or do ANNs always give better accuracy than Logistic Regression/SVMs?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      In theory, an ANN is always at least as good as any other ML algorithm, due to the universal approximation theorem. But in practice it is quite possible for ANN accuracy to be lower than that of other algorithms, since training an ANN is not easy for every dataset. And always remember the "no free lunch" theorem!

    • @sidchaini
      @sidchaini 3 years ago +1

      To add to what @EvolutionaryIntelligence said: if we have enough data available, ANNs are as powerful as any other method. But if your dataset is small, an ANN might underperform compared to Logistic Regression. Choosing the right algorithm is thus an art in itself.
      I'd encourage you to look at this link:
      scikit-learn.org/stable/tutorial/machine_learning_map/index.html
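
      A minimal scikit-learn sketch of such a comparison (the dataset and hyperparameters are arbitrary choices for illustration):

      ```python
      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      for name, model in [
          ("Logistic Regression", LogisticRegression(max_iter=1000)),
          ("ANN (MLP)", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                      random_state=0)),
      ]:
          clf = make_pipeline(StandardScaler(), model)  # scaling helps both models
          clf.fit(X_train, y_train)
          print(name, clf.score(X_test, y_test))        # test accuracy
      ```

      On a small dataset like this, the simpler model can easily match or beat the ANN, which is the point made above.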

  • @karthiknambiar8434
    @karthiknambiar8434 3 years ago +1

    Does implementing residual connections in fully connected neural networks (they are usually used in CNNs) help increase the performance of ANNs?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago

      Residual (ResNet-style) connections are used in Deep Learning (CNNs, RNNs, etc.) because a large number of hidden layers aggravates the problem of vanishing gradients. In ordinary ANNs I have not seen residual connections used, since these networks are usually not very deep.
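
      For the curious, a minimal PyTorch sketch of a residual block for a fully connected network (the class name and sizes are illustrative):

      ```python
      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          # A fully connected block with a skip connection: out = x + f(x).
          def __init__(self, dim):
              super().__init__()
              self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))

          def forward(self, x):
              # The identity path lets gradients flow even if self.fc
              # saturates, which is what mitigates vanishing gradients
              # in very deep stacks.
              return x + self.fc(x)

      x = torch.randn(8, 64)
      print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64])
      ```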

  • @HrichaAcharya
    @HrichaAcharya 3 years ago +1

    How do we determine the number of times we need to apply the backpropagation step?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago

      Typically it's a good idea to iterate until your error (or test accuracy) stabilises at some value.
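
      A minimal scikit-learn sketch of this stopping rule (the dataset, patience, and tolerance are arbitrary illustrative choices):

      ```python
      import numpy as np
      from sklearn.datasets import make_moons
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
      X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

      clf = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)
      best_err, stale, patience = np.inf, 0, 10
      for epoch in range(1000):
          clf.partial_fit(X_train, y_train, classes=[0, 1])  # one more round of backprop updates
          err = 1.0 - clf.score(X_val, y_val)                # error on held-out data
          if err < best_err - 1e-4:
              best_err, stale = err, 0   # still improving
          else:
              stale += 1                 # no meaningful improvement
          if stale >= patience:
              break                      # error has stabilised; stop iterating
      print(epoch + 1, best_err)
      ```

      MLPClassifier also has a built-in early_stopping=True option that implements essentially this rule.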

  • @TirthajitBaruah
    @TirthajitBaruah 3 years ago +1

    Does increasing the number of hidden layers beyond a certain optimum decrease the accuracy of the algorithm? If so, can there be an instance where, as we keep increasing the number of hidden layers after accuracy has started to decrease, it suddenly starts to increase again?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      For a given classification problem there is generally a certain optimum number of hidden layers, in whose vicinity you get the best training and test accuracy. With fewer hidden layers your accuracy will most likely go down. With more layers, either your training accuracy will drop or you may run into overfitting, depending on various factors.
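
      A minimal scikit-learn sketch of this effect (the dataset and layer widths are arbitrary illustrative choices):

      ```python
      from sklearn.datasets import make_moons
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # Sweep the number of hidden layers and compare train vs. test accuracy;
      # past the optimum, the gap between them typically widens (overfitting).
      for depth in (1, 2, 4, 8):
          clf = MLPClassifier(hidden_layer_sizes=(16,) * depth, max_iter=2000,
                              random_state=0)
          clf.fit(X_train, y_train)
          print(depth, clf.score(X_train, y_train), clf.score(X_test, y_test))
      ```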

  • @Parth-vn5xj
    @Parth-vn5xj 3 years ago +1

    Sir, does an ANN suffer from the overfitting problem because it almost exactly approximates the function implied by the training dataset?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      Yes, an ANN can definitely overfit; this is usually handled with regularisation techniques such as dropout. However, ANNs are not as prone to overfitting as Decision Trees.
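
      A minimal PyTorch sketch of dropout as a regulariser (the layer sizes are illustrative):

      ```python
      import torch.nn as nn

      # nn.Dropout randomly zeroes activations during training (here 50%),
      # discouraging co-adaptation of units and thereby reducing overfitting.
      net = nn.Sequential(
          nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5),
          nn.Linear(64, 2),
      )
      net.train()  # dropout active while fitting
      net.eval()   # dropout disabled at inference time
      ```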

  • @NamanJoshi-pi4bn
    @NamanJoshi-pi4bn 3 years ago +1

    Are the number of hidden layers in an ANN and the number of nodes per hidden layer affected by the dataset we have, or are they independent of the dataset?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      Of course it is affected by the dataset we have, and this choice is an important part of the design process. Theoretically, a single-hidden-layer ANN can approximate any arbitrary function, but such an ANN is very difficult to train, which is why we use multi-layer ANNs in practice.
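
      A minimal scikit-learn sketch of the wide-versus-deep trade-off (the dataset and sizes are arbitrary illustrative choices):

      ```python
      from sklearn.datasets import make_circles
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # One very wide hidden layer versus two narrow ones: both are universal
      # approximators in principle, but they can train very differently.
      for sizes in [(256,), (16, 16)]:
          clf = MLPClassifier(hidden_layer_sizes=sizes, max_iter=3000,
                              random_state=0)
          clf.fit(X_train, y_train)
          print(sizes, clf.score(X_test, y_test))
      ```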

  • @RahulSharma-tf6zk
    @RahulSharma-tf6zk 3 years ago +1

    Does the universal approximation theorem also work for discontinuous functions?

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +2

      A discontinuous function can be approximated by a steep continuous function, and all an ANN does is approximation.
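
      A minimal NumPy sketch of that idea: as the steepness k grows, sigmoid(k * x) approaches the discontinuous unit step function.

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      x = np.array([-0.1, -0.01, 0.01, 0.1])
      for k in (1, 10, 100, 1000):
          # Larger k makes the transition at x = 0 arbitrarily sharp.
          print(k, np.round(sigmoid(k * x), 3))
      ```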

  • @sujitvj90
    @sujitvj90 3 years ago

    It would be helpful if you could also demonstrate this with Matlab, sir.

    • @EvolutionaryIntelligence
      @EvolutionaryIntelligence 3 years ago +1

      Matlab does have its own advantages, but I prefer Python for various reasons. Matlab is useful mainly if you wish to integrate your Machine Learning code with some of its toolboxes.