Google Interview Question on Deep Learning

  • Published: 10 Dec 2024

Comments • 34

  • @pranavpandey2965
    @pranavpandey2965 3 years ago +2

    I believe using any kind of regularization doesn't guarantee that the constraint will be satisfied exactly; it depends on the problem you are optimizing for.

  • @shubham_chime
    @shubham_chime 4 years ago +4

    Very good question and a very nice explanation. One follow-up question: when would we want to put such constraints on the W matrix? Are there any practical applications of doing this?

    • @shubham_chime
      @shubham_chime 4 years ago

      I just read your comments for other questions. Seems like this question has already been answered. Thanks.

  • @prithviramg
    @prithviramg 4 years ago +3

    Wow... adding this constraint as a regularisation term is a nice idea.

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +5

      Even L2 and L1 regularisation can be interpreted as adding a constraint on the weights and then applying Lagrange multipliers; a rough sketch is below.
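
      A rough sketch of that correspondence (my paraphrase, not a derivation from the video; c is the constraint level and λ ≥ 0 the multiplier):

        \min_{w} \; L(w) \quad \text{s.t.} \quad \|w\|_2^2 \le c
        \;\iff\;
        \min_{w} \, \max_{\lambda \ge 0} \; L(w) + \lambda \left( \|w\|_2^2 - c \right)

      For a fixed λ the inner objective is just the familiar L2-regularised loss L(w) + λ‖w‖²; the L1 case works the same way with ‖w‖₁.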

  • @manojg5280
    @manojg5280 4 years ago +6

    Doesn't W^T W = I imply that W W^T = I? If so, can we drop the second constraint, W W^T - I, from the loss?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +5

      Here W is a matrix, not a vector. The rows being unit vectors need not mean that the columns are also unit vectors, so we need both constraints; a quick numerical check is below.
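
      A quick numerical check of that claim (a made-up 2x2 example, nothing from the video):

        import numpy as np

        # Both rows are unit vectors...
        W = np.array([[1.0, 0.0],
                      [1.0, 0.0]])

        print(np.linalg.norm(W, axis=1))  # row norms:    [1. 1.]
        print(np.linalg.norm(W, axis=0))  # column norms: [1.41421356 0.]
        # ...but the columns are not unit vectors, so constraining the rows
        # does not automatically constrain the columns.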

  • @SHIVAMSHARMA-sj2yw
    @SHIVAMSHARMA-sj2yw 4 years ago +2

    Can we do it like this: first get the optimum W, and then apply the Gram-Schmidt method to W and W^T?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +3

      The Gram-Schmidt process needs the original set of vectors to be linearly independent, which is not guaranteed for the rows or columns of a W obtained without the additional constraints.

  • @venkateshmunagala8089
    @venkateshmunagala8089 3 years ago

    Loved the explanation

  • @AbhishekGupta-dy8vv
    @AbhishekGupta-dy8vv 4 years ago +1

    So the final loss means the existing regularisation term plus the new constraint? Let me know if this is correct.
    And if, in future, we are making some assumptions about the weights, can we add a regularisation term for that too?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +1

      Yes, that's right. We can add any constraints we want into the loss function itself.

  • @tagoreji2143
    @tagoreji2143 2 years ago

    Thank you Sir

  • @surajthapa4160
    @surajthapa4160 3 years ago

    Instead of the Frobenius norm, can we use another matrix norm?

    • @AppliedAICourse
      @AppliedAICourse 3 years ago

      Yes, you can. The Frobenius norm is one of the simplest norms on matrices.
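
      For reference, the Frobenius norm used in the penalty is just the square root of the sum of squared entries (a standard definition, not specific to the video):

        \|A\|_F = \sqrt{\sum_{i}\sum_{j} a_{ij}^2} = \sqrt{\operatorname{trace}(A^\top A)}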

  • @anandmadgeri2081
    @anandmadgeri2081 3 years ago

    Do we get these weights when we define the architecture? I mean, while defining the architecture we have to name the weights too, right? Otherwise, how do we get these weights so that we can use them in the constraint?

    • @AppliedAICourse
      @AppliedAICourse 3 years ago

      After we define the architecture, we initialise the weights to random values and, using gradient-descent-based approaches, tune the weights to minimise the desired objective function. The constraint is added to that objective function; a small sketch of getting hold of the weights is below.
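
      A minimal sketch of pulling the weight matrix out of a defined architecture (assuming tf.keras; the layer sizes and the name "hidden" are made up):

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(128,), name="hidden"),
            tf.keras.layers.Dense(10, name="output"),
        ])

        # Once the model is built, each Dense layer exposes its weight matrix as
        # `layer.kernel`; that tensor is what goes into the constraint term.
        W = model.get_layer("hidden").kernel
        print(W.shape)  # (128, 64)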

  • @vinitsutar4048
    @vinitsutar4048 3 years ago

    When you said that r_i should be perpendicular to r_j, and c_i should be perpendicular to c_j, did you mean that r^T r and c^T c should form identity matrices? I.e. the diagonal elements (the dot product of each row or column with itself) should be 1, and the off-diagonal elements (the dot products of row_i with row_j and col_i with col_j for i != j) should be 0, correct?

  • @codewithme6499
    @codewithme6499 4 years ago +1

    Is this just an interview question, or can such a scenario occur in the real world?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago

      Orthogonal matrix constraints are commonly encountered in Matrix Factorization problems in the real world.

  • @pranjalsett7191
    @pranjalsett7191 4 years ago

    Where can we actually apply this concept in a real-world scenario? And how do we code it?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +5

      Orthogonal constraints are popular in Matrix Factorisation in the real world. This interview question tests your depth of understanding of adding new constraints to DL/ML optimisation problems. This can be implemented in TF 2.0 by creating a custom loss and using the GradientTape functionality; a rough sketch is below.
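
      A rough sketch of that, assuming a single square Dense layer whose kernel W should satisfy W^T W ≈ I and W W^T ≈ I; the penalty weight lambda_orth, the layer sizes, and the MSE task are placeholders, not the exact setup from the video:

        import tensorflow as tf

        model = tf.keras.Sequential([tf.keras.layers.Dense(32, input_shape=(32,))])
        optimizer = tf.keras.optimizers.Adam(1e-3)
        mse = tf.keras.losses.MeanSquaredError()
        lambda_orth = 0.1  # placeholder strength of the orthogonality penalty

        def orthogonality_penalty(W):
            n, m = W.shape
            wtw = tf.matmul(W, W, transpose_a=True) - tf.eye(m)  # W^T W - I
            wwt = tf.matmul(W, W, transpose_b=True) - tf.eye(n)  # W W^T - I
            # squared Frobenius norms of both constraint violations
            return tf.reduce_sum(wtw ** 2) + tf.reduce_sum(wwt ** 2)

        @tf.function
        def train_step(x, y):
            with tf.GradientTape() as tape:
                y_pred = model(x, training=True)
                W = model.layers[0].kernel
                loss = mse(y, y_pred) + lambda_orth * orthogonality_penalty(W)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            return loss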

  • @hardikvagadia1959
    @hardikvagadia1959 4 years ago

    How do we add this constraint to the code?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +1

      That's a good follow-up question. We can define custom loss functions and use GradientTape in TF 2.0 to achieve this (see the sketch above); a related sketch using a custom kernel regularizer follows below.
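
      A related alternative (my sketch, not from the video): the same penalty expressed as a custom Keras kernel regularizer. The 0.1 strength is a placeholder, and only the W^T W term is shown for brevity:

        import tensorflow as tf

        class OrthogonalPenalty(tf.keras.regularizers.Regularizer):
            def __init__(self, strength=0.1):
                self.strength = strength

            def __call__(self, W):
                m = W.shape[-1]
                gram = tf.matmul(W, W, transpose_a=True)  # W^T W
                # squared Frobenius distance from the identity;
                # the W W^T term could be added analogously
                return self.strength * tf.reduce_sum((gram - tf.eye(m)) ** 2)

        layer = tf.keras.layers.Dense(32, kernel_regularizer=OrthogonalPenalty())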

  • @clivefernandes5435
    @clivefernandes5435 4 years ago

    Lagrange multipliers

  • @eshashidharreddy8929
    @eshashidharreddy8929 4 years ago

    How much will the fee for the PGP be for appliedaicourse students? When will it start?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago

      We will launch the application portal next week. The fee would be around 78K INR for the 1-year PGD program.

    • @eshashidharreddy8929
      @eshashidharreddy8929 4 years ago

      @AppliedAICourse Any discount for students already enrolled who just bought in September?

    • @kumar13677
      @kumar13677 4 years ago

      @AppliedAICourse Is the syllabus for the 1-year PGD available on the site?

    • @AppliedAICourse
      @AppliedAICourse 4 years ago

      Yes, please contact us on +91 8106-920-029.

    • @AppliedAICourse
      @AppliedAICourse 4 years ago +1

      This is the tentative syllabus. More details will be provided at the launch of the program.
      Semester I:
      1. Essentials of AI (6 credits): Python, SQL, Linear Algebra, Basics of Probability
      2. Data Analysis and Visualisation (6 credits): Plotting, Statistics for Data Analysis, Dimensionality reduction, Visualising high-dimensional data, Real-world end-to-end case studies
      3. Machine Learning (6 credits): Calculus and Numerical Optimisation, Classification, Regression and Clustering algorithms, Real-world end-to-end case studies
      Semester II:
      1. Advanced ML (with Deep Learning) (6 credits): Recommender Systems, Matrix Factorization, Neural Networks, MLPs, Advanced Optimisation methods, Real-world end-to-end case studies
      2. Deep Learning-II (6 credits): CNNs, RNNs, Transformers, TensorFlow and PyTorch, Real-world end-to-end case studies
      3. Thesis (8 credits): Industry- or Research-focussed Thesis.