The cs Underdog
  • 113 videos
  • 3,394 views

Videos

Queue | Data Structures Lecture 21 | The cs Underdog
1 view • 4 hours ago
This lecture explains the queue data structure, gives an overview of the different types of queues, and discusses when you should use them.
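The FIFO behaviour described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture; the class and method names are my own, and `collections.deque` is used so both operations run in O(1).

```python
from collections import deque

class Queue:
    """Minimal FIFO queue sketch backed by collections.deque,
    so enqueue and dequeue both run in O(1)."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, value):
        self._items.append(value)        # add at the rear

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()     # remove from the front

    def is_empty(self):
        return not self._items

q = Queue()
q.enqueue(1)
q.enqueue(2)
print(q.dequeue())  # -> 1 (first in, first out)
```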
Peek, IsEmpty & IsFull in Stack | Data Structures Lecture 20 | The cs Underdog
5 views • 7 hours ago
This lecture explains the peek, isEmpty, and isFull operations on a stack, and describes how to implement them using arrays as well as linked lists so as to achieve O(1) time complexity.
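As a rough sketch of the array-backed variant described above (names and capacity handling are my own, not the lecture's), each of these operations only inspects the current state, so all run in O(1):

```python
class ArrayStack:
    """Fixed-capacity stack sketch over a Python list; peek,
    is_empty and is_full each run in O(1)."""

    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity

    def push(self, value):
        if self.is_full():
            raise OverflowError("stack is full")
        self._items.append(value)

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self._items[-1]           # look at the top without removing it

    def is_empty(self):
        return len(self._items) == 0

    def is_full(self):
        return len(self._items) == self._capacity

s = ArrayStack(capacity=3)
s.push(10)
print(s.peek(), s.is_empty(), s.is_full())  # -> 10 False False
```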
Push & Pop in Stack | Data Structures Lecture 19 | The cs Underdog
9 views • 9 hours ago
This lecture explains the push and pop operations on a stack. It goes through how they are implemented using arrays as well as linked lists so as to maintain O(1) time complexity.
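The linked-list implementation mentioned above can be sketched as follows (an illustrative sketch, not the lecture's code): pushing and popping at the head keeps both operations O(1), with no resizing needed.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedStack:
    """Stack sketch over a singly linked list; push and pop
    work at the head, so both are O(1)."""

    def __init__(self):
        self._top = None

    def push(self, value):
        self._top = Node(value, self._top)  # new node becomes the head

    def pop(self):
        if self._top is None:
            raise IndexError("pop from empty stack")
        value = self._top.value
        self._top = self._top.next          # unlink the old head
        return value

s = LinkedStack()
s.push("a")
s.push("b")
print(s.pop())  # -> b (last in, first out)
```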
Stack | Data Structures Lecture 18 | The cs Underdog
8 views • 12 hours ago
This lecture explains the stack data structure and its uses.
Delete in Circular Linked List | Data Structures Lecture 17 | The cs Underdog
1 view • 14 hours ago
This lecture covers all the sub-cases of deleting an element from a circular linked list, along with time and space complexity analysis.
Insert in Circular Linked List | Data Structures Lecture 16 | The cs Underdog
12 views • 16 hours ago
This lecture covers all the sub-cases of inserting a new element into a circular linked list, with time and space complexity analysis.
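One of the sub-cases mentioned above, inserting at the end while keeping a pointer to the tail, can be sketched like this (illustrative only; the helper name is my own). Tracking the tail makes both the empty-list sub-case and the general case O(1):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def insert_end(tail, value):
    """Insert at the end of a circular singly linked list given its
    tail; returns the new tail. O(1) time, O(1) extra space."""
    node = Node(value)
    if tail is None:           # sub-case: empty list, node points to itself
        node.next = node
        return node
    node.next = tail.next      # new node follows the old tail...
    tail.next = node           # ...and precedes the head
    return node                # new node becomes the new tail

tail = None
for v in (1, 2, 3):
    tail = insert_end(tail, v)
# traverse once around, starting from the head (tail.next)
cur, out = tail.next, []
for _ in range(3):
    out.append(cur.value)
    cur = cur.next
print(out)  # -> [1, 2, 3]
```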
Search in Circular Linked List | Data Structures Lecture 15 | The cs Underdog
18 views • 19 hours ago
This lecture explains how to search for a particular value in a circular linked list.
Circular Linked List | Data Structures Lecture 14 | The cs Underdog
13 views • 21 hours ago
This lecture describes the circular linked list data structure and its use cases.
Delete in Doubly Linked List | Data Structures Lecture 13 | The cs Underdog
9 views • 1 day ago
This lecture explains the delete operation in a doubly linked list and all of its sub-cases, with time and space complexities.
Insert in Doubly Linked List | Data Structures Lecture 12 | The cs Underdog
16 views • 1 day ago
This lecture explains the insert operation in a doubly linked list, with all sub-cases and their time and space complexities.
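A sketch of one such sub-case, inserting right after a given node (names are illustrative, not from the lecture): four pointers get rewired, and the "new node becomes the tail" sub-case needs a guard.

```python
class DNode:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def insert_after(node, value):
    """Insert a new node right after `node` in a doubly linked list.
    O(1) time, O(1) extra space."""
    new = DNode(value)
    new.prev = node
    new.next = node.next
    if node.next is not None:   # sub-case: not inserting at the tail
        node.next.prev = new
    node.next = new
    return new

a, b = DNode(1), DNode(3)
a.next, b.prev = b, a
insert_after(a, 2)              # list is now 1 <-> 2 <-> 3
print(a.next.value)  # -> 2
```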
Search in Doubly Linked List | Data Structures Lecture 11 | The cs Underdog
2 views • 1 day ago
This lecture describes the search operation in a doubly linked list.
Singly Linked List vs Doubly Linked List | Data Structures Lecture 10 | The cs Underdog
13 views • 1 day ago
A detailed comparison between singly linked list and doubly linked list
Delete in Linked List | Data Structures Lecture 9 | The cs Underdog
8 views • 14 days ago
An explanation of how to delete an element from a linked list, covering its various sub-cases.
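The sub-cases mentioned above (deleting the head, a middle node, or nothing when the value is absent) can be sketched as follows; this is an illustrative sketch with names of my own choosing, not the lecture's code.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def delete_value(head, target):
    """Delete the first node holding `target` from a singly linked
    list and return the (possibly new) head."""
    if head is None:
        return None
    if head.value == target:      # sub-case: deleting the head
        return head.next
    prev, cur = head, head.next
    while cur is not None:
        if cur.value == target:   # sub-case: middle or tail node
            prev.next = cur.next  # bypass the node to delete it
            break
        prev, cur = cur, cur.next
    return head                   # sub-case: value absent, list unchanged

head = Node(1, Node(2, Node(3)))
head = delete_value(head, 2)      # middle-node sub-case
head = delete_value(head, 1)      # head sub-case
print(head.value)  # -> 3
```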
Insert in Linked List | Data Structures Lecture 8 | The cs Underdog
15 views • 14 days ago
An explanation of how to insert a new element into a linked list, with an analysis of time and space complexity for each sub-case.
Search in Linked List | Data Structures Lecture 7 | The cs Underdog
16 views • 14 days ago
Linked List | Data Structures Lecture 6 | The cs Underdog
10 views • 14 days ago
Delete in Array | Data Structures Lecture 5 | The cs Underdog
7 views • 14 days ago
Insert in Array | Data Structures Lecture 4 | The cs Underdog
11 views • 21 days ago
Search in Array | Data Structures Lecture 3 | The cs Underdog
10 views • 21 days ago
Array | Data Structures Lecture 2 | The cs Underdog
8 views • 21 days ago
Data Structures | Data Structures Lecture 1 | The cs Underdog
32 views • 21 days ago
Multiclass Logistic Regression | Machine Learning Lecture 65 | The cs Underdog
108 views • 3 months ago
Softmax Function | Machine Learning Lecture 64 | The cs Underdog
3 views • 3 months ago
Logit Function | Machine Learning Lecture 63 | The cs Underdog
11 views • 3 months ago
Logistic Regression | Machine Learning Lecture 62 | The cs Underdog
78 views • 3 months ago
Sigmoid Function | Machine Learning Lecture 61 | The cs Underdog
24 views • 4 months ago
The Perceptron Algorithm | Machine Learning Lecture 60 | The cs Underdog
63 views • 4 months ago
Fisher's Linear Discriminant | Machine Learning Lecture 59 | The cs Underdog
449 views • 4 months ago
Eigenvalues and Eigenvectors | Machine Learning Lecture 58 | The cs Underdog
33 views • 4 months ago

Comments

  • @baibhavpratapbhatt1377 · 9 days ago

    You are doing a great job man, keep it up

  • @ramdafale · 22 days ago

    Good Video 👌

  • @momscookbook2222 · 3 months ago

    Thanks

  • @skiritijayadev2932 · 3 months ago

    Keep it up bro

  • @saidineshmuthyala5439 · 4 months ago

    What are rewards exactly and why aren't they used in SL & USL when none of them is perfect?

    • @The_cs_Underdog · 8 days ago

      Rewards are a way of providing feedback to a reinforcement learning model on how well or poorly it performed at each step; the model uses this feedback to learn and improve. We don't use rewards in supervised learning because we use a loss function as the criterion that makes the model understand its shortcomings and improve. You can think of loss as a negative reward; both serve the same purpose.
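The loss-as-negative-reward analogy in the reply above can be sketched numerically. This is an illustrative toy, not code from the channel; the function names and the choice of squared error are my own.

```python
def squared_error_loss(prediction, target):
    """Supervised-learning feedback: a quantity the model minimizes."""
    return (prediction - target) ** 2

def reward(prediction, target):
    """The same feedback viewed as a reward to maximize: reward = -loss."""
    return -squared_error_loss(prediction, target)

# a worse prediction means higher loss, i.e. lower (more negative) reward
print(squared_error_loss(3, 1), reward(3, 1))  # -> 4 -4
```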

  • @kaushikichatterjee5643 · 4 months ago

    This video is very helpful. Can you please explain how the Maximum Fisher Discriminant works? Can we select features by calculating the maximum Fisher discriminant ratio? And what is the difference between the Maximum Fisher Discriminant ratio and Fisher's Linear Discriminant? I have these doubts; please help me with them.

    • @The_cs_Underdog · 8 days ago

      Both refer to the same concept. Fisher's linear discriminant finds the projection direction that maximizes the Fisher criterion (i.e., the ratio of between-class variance to within-class variance). The same idea is informally referred to as the Maximum Fisher Discriminant.
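The direction described in the reply above is the classical closed form w ∝ Sw⁻¹(m₁ − m₂), where Sw is the within-class scatter matrix. A small self-contained sketch for 2-D data (pure Python, names my own, 2x2 inverse done by hand):

```python
def mean(xs):
    """Component-wise mean of a list of 2-D points."""
    n = len(xs)
    return [sum(x[i] for x in xs) / n for i in range(2)]

def scatter(xs, m):
    """2x2 within-class scatter: sum of (x - m)(x - m)^T."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:
        d = [x[0] - m[0], x[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(class1, class2):
    """Direction w proportional to Sw^{-1} (m1 - m2), which maximizes
    the ratio of between-class to within-class variance."""
    m1, m2 = mean(class1), mean(class2)
    s1, s2 = scatter(class1, m1), scatter(class2, m2)
    sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]
    # invert the 2x2 within-class scatter matrix by the cofactor formula
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[ sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det,  sw[0][0] / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

class1 = [(1, 1), (2, 1), (1, 2), (2, 2)]
class2 = [(4, 4), (5, 4), (4, 5), (5, 5)]
print(fisher_direction(class1, class2))  # -> [-1.5, -1.5]
```

With these symmetric clusters the direction comes out along (1, 1) (up to sign), as expected for classes separated along the diagonal.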

  • @PUNITHAV-o1r · 4 months ago

    Your explanations were amazing, but if you explained all the concepts with real-time examples using Python implementations, it would be really useful. Please try to do that.