Zardoua Yassir
  • Videos: 98
  • Views: 193,725
Demo: A Siamese Object Tracker Adapted for 360° Videos
This video demonstrates the result of a single-object tracking algorithm that I adapted from regular 2D tracking to work on 360° videos. The original 2D tracking algorithm is described in the paper Joint Classification and Regression for Visual Tracking with Fully Convolutional Siamese Networks.
The result at 1:14 is a projection of the original 360° video, centered on the target of interest. The tracking result in this video is based on material from the original 360° video linked below:
• Title: 2019 04 28 vr360m dog walk ford lake park
• Thanks to Channel Owner: @wenelson1968 for allowing video reuse.
Views: 38

Videos

10- NE555: diodes de protection (partie1/2) · 797 views · 2 years ago
1- Comparateur à un seul seuil partie (1/3) · 4.6K views · 2 years ago
11- NE555: diodes de protection (partie2/2) · 609 views · 2 years ago
3- Comparateur à un seul seuil (partie3/3) · 1.2K views · 2 years ago
2- Comparateur à un seul seuil partie (2/3) · 1.8K views · 2 years ago
7- Astable à base d'amplificateur opérationnel partie (2/2) · 1.7K views · 2 years ago
13- NE 555: circuit astable partie (2/2) · 2.3K views · 2 years ago
14- Oscillateurs électroniques: rappel sur fonction de transfert réelle et complexe · 722 views · 2 years ago
15- Oscillateurs électroniques: Gain en boucle · 427 views · 2 years ago
12- NE 555: circuit astable partie (1/2) · 10K views · 2 years ago
9- NE 555 : Trigger de Schmitt · 1.6K views · 2 years ago
6- Astable à base d'amplificateur opérationnel partie (1/2) · 2.8K views · 2 years ago
8- NE555: principe de fonctionnement · 3.4K views · 2 years ago
4- Comparateur à deux seuils : Trigger de Schmitt - partie (1/2) · 6K views · 2 years ago
16- Oscillateurs électroniques: bruit thermique · 625 views · 2 years ago
17- Oscillateur à pont de Wien : circuit et fonctionnement · 956 views · 2 years ago
5- Comparateur à deux seuils : Trigger de Schmitt - partie (2/2) · 2.2K views · 2 years ago
3- Alimentation de l'amplificateur opérationnel (rappel pratique) · 2.2K views · 2 years ago
1- Carte de test (Test Board) · 1.3K views · 2 years ago
4- Signaux sinusoïdaux (rappel) · 433 views · 2 years ago
6- Oscilloscope numérique · 994 views · 2 years ago
2- Alimentation de l'amplificateur opérationnel (rappel théorique) · 747 views · 2 years ago
5- GBF (Générateur Basses Fréquences) · 934 views · 2 years ago
(12/12) MobileNets: MobileNetV2 (Part5) · 3.3K views · 3 years ago
(11/12) MobileNets: MobileNetV2 (Part4) · 2.9K views · 3 years ago
(10/12) MobileNets: MobileNetV2 (Part3) · 3.7K views · 3 years ago
(9/12) MobileNets: MobileNetV2 (Part2) · 6K views · 3 years ago
(8/12) MobileNets: MobileNetV2 (Part1) · 22K views · 3 years ago
(7/12) MobileNets: MobileNetV1: Resolution Multiplier and Key Results · 910 views · 3 years ago

Comments

  • @none-hr6zh (28 days ago)

    Thanks for the nice lecture. I have a doubt. You said that a low-dimensional activation alone can solve the problem, so we don't need many channels; the whole point is to reduce the number of channels, which is why we go from d to d'. But the problem is that we cannot apply ReLU on a low-dimensional activation, hence we first increase the dimension, apply ReLU, and then convert back to the lower dimension. But in the lecture you say that d is small. How can we get d small? Are we using fewer kernels to get a smaller d?

    • @zardouayassir7359 (28 days ago)

      @@none-hr6zh I don't remember the specific notation, but I see your question is basically how we get fewer channels (i.e., how we compress a feature map). Yes, we use fewer kernels, because each kernel outputs one channel: more kernels means more output channels, and fewer kernels means fewer output channels. Let me know if you have more doubts.
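The point in this reply can be checked numerically; a minimal sketch (assuming NumPy; the shapes and names are illustrative) showing that the number of 1×1 kernels directly sets the number of output channels:

```python
import numpy as np

# Feature map with d = 8 input channels (height 4, width 4, channels-last).
fmap = np.random.rand(4, 4, 8)

def pointwise_conv(x, kernels):
    # kernels has shape (d_out, d_in): one row per 1x1 kernel.
    # Each kernel mixes the input channels into exactly one output channel.
    return np.einsum('hwc,oc->hwo', x, kernels)

kernels = np.random.rand(3, 8)        # 3 kernels over 8 input channels
out = pointwise_conv(fmap, kernels)
print(out.shape)                      # (4, 4, 3): d' equals the kernel count
```

Using fewer rows in `kernels` compresses the map to fewer channels; using more rows expands it.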

  • @wb7779 (a month ago)

    Very very nice. I went through hell to find you.

    • @zardouayassir7359 (a month ago)

      @@wb7779 Feel free to post any question if you need help with the next videos. Good luck

  • @nfs5693 (3 months ago)

    Great video, really helped.

  • @hungoverlofi5003 (4 months ago)

    Great series so far. Loving it

  • @gunasekhar8440 (4 months ago)

    Great explanation, but I have a doubt. My dataset has three features, and I'm interested in increasing its dimension. Let's say my old dataset is x. If I use the polynomial kernel with degree 2, that means (1 + x.(x.T))**2, right? My question is: if we do the dot product (my dataset's shape is 200x3; if I transpose it, it becomes 3x200, and x.T*x gives 3x3), where are the dimensions increased?

    • @zardouayassir7359 (4 months ago)

      Your dimensions will not be increased anywhere, because you're doing the wrong math. The dot product takes two input vectors, not matrices; x should denote a feature vector, not a dataset.
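A minimal sketch of the point in this reply (pure NumPy; the vectors and the explicit degree-2 feature map are illustrative): the polynomial kernel takes two feature vectors, and its value equals an inner product in a higher-dimensional space, which is where the dimension "increase" happens, implicitly:

```python
import numpy as np

x = np.array([1.0, 2.0])    # one sample (a feature vector), NOT the dataset
z = np.array([0.5, -1.0])   # another sample

# Degree-2 polynomial kernel computed directly on the 2-D inputs:
k = (1 + x @ z) ** 2

# Explicit degree-2 feature map phi: R^2 -> R^6
def phi(v):
    a, b = v
    return np.array([1, np.sqrt(2)*a, np.sqrt(2)*b, a*a, np.sqrt(2)*a*b, b*b])

# Same value, but as an inner product in the expanded 6-D space:
print(np.isclose(k, phi(x) @ phi(z)))   # True
```

The kernel never forms phi(x) explicitly; it only ever sees pairs of feature vectors.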

    • @gunasekhar8440 (4 months ago)

      @@zardouayassir7359 But my dataset is non-linear. I have already implemented soft-margin SVM from scratch, but the soft margin allows more misclassifications due to the type of data. Now I'm interested in increasing the dimension. If I want to use RBF or any other kernel, what should I do technically to increase the dimension?

    • @zardouayassir7359 (4 months ago)

      @@gunasekhar8440 "But my dataset is non-linear": my answer applies regardless of your dataset's properties. The kernel trick I'm explaining here is exactly what is used for non-linear boundaries. What you need technically is first to get your math right. Good luck.

  • @amersaidi6968 (5 months ago)

    V-=Vc ?

  • @lethnisoff (5 months ago)

    Thank you for the clear explanation, this is great.

  • @vartikagupta9998 (5 months ago)

    Thanks for the sorted explanation... all my doubts were cleared by your video.

  • @charanteja1136 (5 months ago)

    Your explanation is awesome, bro! Can you please tell me where you got all these points (not in the paper)? Can you refer me to any resources?

    • @zardouayassir7359 (5 months ago)

      Happy to help. "Where did u get all these points": as far as I remember, I did the following while creating this series:
      • Elaborated the key points in the paper by reading relevant references (such as the idea of depthwise separable convolution).
      • Consulted books to double-check (I didn't find any book discussing this matter).
      • Looked up concepts I was hearing about for the first time (like the Manifold of Interest concept in MobileNetV2).
      • Once I grasped a concept, I tried to convey it with my own examples (such as the knobs controlling the pixels of a TV).
      • I even asked a question on ResearchGate and contacted some AI engineers on FB to discuss or double-check my understanding.
      The original paper does not contain all of this. In fact, the authors did a terrible job of explaining their idea, and sometimes they use expressions that are not clear. For instance, if my memory is correct, the authors of MobileNetV2 said that ReLU "collapses" the activations. What does "collapse" mean exactly? All I could do was guess that it means clipping negative values (in ReLU). The authors had multiple chances to clearly explain their idea, but they didn't. This happens in a lot of scientific papers: once I read the entire paper, I realize it would have been easy to explain the core idea in the abstract alone, but the authors just don't do it, letting you waste your time going over their entire flowchart and algorithm description to infer the key idea. Hope this helps.

  • @buh357 (6 months ago)

    This is my second time watching this series, thank you so much. I wish you would explain more interesting deep-learning architectures like this.

  • @hridaydesai5175 (7 months ago)

    Very useful. Thanks a lot for this content, Sir.

  • @NoorAmer-k9h (7 months ago)

    I went over the entire web and didn't find an explanation of MobileNetV2 except your videos. Your explanation is great, detailed, and easy to understand. Thank you very much.

  • @tonleubrice2182 (7 months ago)

    No need, you already have a series mode that connects the terminals internally.

    • @zardouayassir7359 (7 months ago)

      A student's initial need is not saving connections but clarity. We provide two independent DC power supplies to some students, and then you won't be able to choose the SERIES mode.

  • @rizvee3977 (8 months ago)

    Thank you so much for making this topic understandable so easily.

  • @skauddy755 (8 months ago)

    Thanks so much for the series. A very clean explanation of the MobileNetV2 paper.

  • @hajasb4056 (8 months ago)

    Thank you very much.

  • @hajasb4056 (8 months ago)

    Thank you very muchhh ❤

  • @Fawaed_salafia (8 months ago)

    May Allah reward you with good.

  • @afafkrouit443 (8 months ago)

    Thanks.

  • @yeahno2466 (9 months ago)

    Can you please make a video like this but for MobileNetV3? 🥺🙏

    • @zardouayassir7359 (9 months ago)

      I appreciate that you wanted a video from my channel. My time is quite limited at the moment, but I'll definitely consider your request. Thanks for your understanding

  • @user-wr4yl7tx3w (a year ago)

    How do we choose a kernel, and can we expect to find a kernel that arrives at the correct inner product within the feature space?

    • @zardouayassir7359 (a year ago)

      You can identify the kernel for your SVM through empirical testing. It's better to start with simpler kernels first, then move to more complicated ones if needed.
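A minimal sketch of such empirical testing (assuming scikit-learn is available; the toy dataset and parameter grid are illustrative), trying kernels from simple to complex and letting cross-validation pick:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# A small non-linearly-separable toy dataset.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Try simpler kernels first, then more complicated ones, and let
# 5-fold cross-validation pick the best performer.
grid = GridSearchCV(
    SVC(),
    param_grid={'kernel': ['linear', 'poly', 'rbf'], 'C': [0.1, 1, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)   # the kernel/C pair that cross-validated best
```

On data like this, the non-linear kernels typically outperform the linear one, which is exactly the empirical signal the reply describes.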

  • @dl569 (a year ago)

    the invertible pt s(k), summation range should be u instead of k

  • @logx2536 (a year ago)

    It is better to consider that the dimension of the pointwise kernel is 1×1×d×dt, where the quantity "dt" is the number of kernels and the "t" in "dt" is again the expansion factor.

    • @zardouayassir7359 (a year ago)

      Please read my response to the pinned comment; I believe I answered the same question there. Let me know if it's not the answer you want.

  • @AgustinaDiaz-v1o (a year ago)

    I watched all the MobileNet videos. Thanks so much for making them, you are really clear!

  • @mohamed-rayanelakehal1324 (a year ago)

    Hi, but is the 0.784 here the accuracy per class? Was the 0.7 the overall accuracy, or only for class A?

    • @zardouayassir7359 (a year ago)

      Your question is already answered in the video: 0.7 is the probability that each of the three classifiers (C1, C2, C3) would produce the right classification. 0.784 is the probability that the predictions of the three classifiers contain at least two correct classifications. Since the correct prediction is class A, this is the probability of getting 2 A's and 1 B plus the probability of getting 3 A's.
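The 0.784 value can be reproduced with a quick sketch (plain Python; the variable names are illustrative): majority vote is right when at least two of the three predictions are correct.

```python
from math import comb

p = 0.7   # probability that each of the 3 independent classifiers is correct

# Exactly two correct (2 A's, 1 B) plus all three correct (3 A's):
p_two_correct = comb(3, 2) * p**2 * (1 - p)   # 3 * 0.49 * 0.3 = 0.441
p_all_correct = p**3                          # 0.343

print(round(p_two_correct + p_all_correct, 3))   # 0.784
```

This is the standard binomial "at least k of n" computation, so the ensemble beats any single 0.7-accurate classifier.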

    • @mohamed-rayanelakehal1324 (a year ago)

      @@zardouayassir7359 thank you

    • @zardouayassir7359 (a year ago)

      @@mohamed-rayanelakehal1324 Feel free to ask other questions. Good luck.

  • @mimiamine8117 (a year ago)

    Thank you very much, a good explanation.

  • @valdesnoualah5794 (a year ago)

    I don't understand why ve' = ve given the presence of R.

    • @zardouayassir7359 (a year ago)

      The presence of R cannot cause a voltage drop without a current flowing through it. For example, if both diodes D1 and D2 are blocked, no current passes through R. With both diodes blocked, you might think a current would flow into pin 6 and pin 2 of the NE555, but these two inputs (pins 2 and 6) have infinite resistance, preventing any current from passing through the resistor R. So the voltage drop across R is zero, and thus ve = ve'.

  • @user-gp3ul4cj5d (a year ago)

    Here at the start, 00:20, you said it was explained in an earlier video how this came about, but in the previous video you just wrote that we have to find the max of 2/||w||?

  • @mirabirhossain1842 (a year ago)

    Thank you very much. You've clearly explained everything, and I really enjoyed the series. It was a great helping hand while reading the papers. I especially wasn't able to understand the V2 paper, which is difficult to understand for a lot of reasons, and your video really worked like magic. Thank you again.

  • @samymostefai7644 (a year ago)

    Thank you so much for this wonderful course, you don't know how much you helped me. I have a question about the gamma parameter: could you tell us about its effect on our model, as you did with the C parameter?

  • @samymostefai7644 (a year ago)

    At 15:00, I don't understand why we have two different indices for w when it is the same.

    • @zardouayassir7359 (a year ago)

      Hi Samy, please excuse my late response. Your question comes down to a math convention. Assume we have the expression (Σ n) and we multiply it by itself: (Σ n)(Σ n). To specify the starting and ending index of each sigma, I'll write (Σ_i^m n), where the first letter ("i" here) and second letter ("m" here) are the starting and ending index of the sigma, respectively. With this in mind, (Σ n)·(Σ n) becomes (Σ_i^m n)·(Σ_i^m n). By the properties of sums, the two sigmas can be merged into a double sum: Σ_i^m Σ_j^m n·n. Why did the index switch from i to j in the second sigma after merging? Because when evaluating the merged sum, we must start from the inner sigma: we increment its index to the end, accumulating after each increment. During this computation of the inner sigma, the outer sigma must keep a fixed index, which we increment by one only when the inner sigma is finished (i.e., its index reaches the final value m). Once we increment the outer index, we repeat the same process: re-initialize j and recompute the inner sigma. The computation is complete when both indices have reached the maximum value m. This mechanism is impossible if both sigmas share the same index i, because incrementing the inner index would increment the outer index along with it, violating the process just described. To avoid this and get the desired behavior, we must rename the inner sigma's variable ("j" in our case). Hope this helps.
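The index-renaming argument above can be verified numerically; a short sketch in plain Python (the sequence is illustrative):

```python
# (sum over i of a_i) * (sum over j of a_j) equals the double sum over
# two INDEPENDENT indices; the inner index must be renamed so it can
# run freely while the outer index stays fixed.
a = [1.0, 2.0, 3.0]

product_of_sums = sum(a) * sum(a)   # 6.0 * 6.0 = 36.0

double_sum = sum(
    a[i] * a[j]                     # outer index i is fixed while j runs
    for i in range(len(a))
    for j in range(len(a))
)

print(product_of_sums == double_sum)   # True
```

Writing `a[i] * a[i]` for both factors would instead sum only the diagonal terms (1 + 4 + 9 = 14), which is exactly the mistake the reply warns against.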

  • @ulrichlaurennnananyamsi568 (a year ago)

    Superb video, thank you very much 👌👌👌👌👌

  • @satokoentertainment10 (a year ago)

    Thank God you made a video on this, thank you.

  • @sanjivanirajput9874 (a year ago)

    Good explanation, but you might want to use a darker pen for writing; we can't see what you are writing.

  • @EarthFlow_erPower (a year ago)

    MEOWING.....yes, she's meowing alright! I'd think she's a hungry hungry kitten! She's beautiful...I love animals that are vocal💖💖💖💖

  • @owyx09 (a year ago)

    so cute ❤

  • @SOMEONE-jg6jg (a year ago)

    Just finished the playlist, it cleared up a lot of things. Thanks!

  • @bhavanarebba9718 (a year ago)

    Great video, but I did not understand the part where d2 = 1.5 d1 + 0.5 d1^2. Is this just a random expression that you considered, or am I missing something? Your response will be much appreciated! Thanks in advance! :)

    • @zardouayassir7359 (a year ago)

      The original feature space is x, which has d1 dimensions. The transformed (expanded) feature space is phi(x). The transformation phi can take several forms; in this video, I considered a quadratic transformation, in which case the number of dimensions of phi(x) is d2, and the relation between d2 and d1 is d2 = 1.5 d1 + 0.5 d1^2. But do not forget that this relation holds only if the transformation phi is quadratic. Given the nature of the transformation phi, the relation between d1 and d2 can be derived mathematically. Hope this helps.
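The relation in this reply can be checked by counting monomials; a short sketch in plain Python (assuming the quadratic map keeps the d1 original features plus all degree-2 monomials, with no bias term):

```python
from itertools import combinations_with_replacement

def quadratic_dim(d1):
    # The quadratic map phi(x) keeps the d1 original features plus every
    # degree-2 monomial x_i * x_j with i <= j (squares and cross terms).
    degree2_terms = len(list(combinations_with_replacement(range(d1), 2)))
    return d1 + degree2_terms

for d1 in (2, 3, 10):
    # d1 + d1*(d1+1)/2 simplifies to 1.5*d1 + 0.5*d1**2
    assert quadratic_dim(d1) == 1.5 * d1 + 0.5 * d1**2
```

For d1 = 3, this gives d2 = 9 (3 originals, 3 squares, 3 cross terms), matching the formula.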

  • @zardouayassir7359 (a year ago)

    At 8:30, I forgot to pad the lower boundary of the input with zeros. However, this has no effect on the formulas derived or the ideas explained.

  • @TheDeepLearningChannel (a year ago)

    At 3:35, when you describe the computational cost of the depthwise convolution, what happens to the N? How did it just disappear?

    • @zardouayassir7359 (a year ago)

      N did not disappear, because it wasn't there in the first place. The depthwise separable convolution has two steps: a depthwise convolution plus a pointwise convolution. N is the number of kernels used in the pointwise step; in the depthwise step, the number of kernels is M, not N. I suggest you carefully review the previous video. Good luck.
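To make the reply concrete, a small sketch of the cost accounting (plain Python; the MobileNet-style symbols Dk, Df, M, N and the numbers are illustrative): N appears only in the pointwise term.

```python
# Multiply-accumulate counts, MobileNet-style notation:
# Dk = kernel size, Df = feature-map size, M = input channels, N = output channels.
Dk, Df, M, N = 3, 112, 32, 64

standard  = Dk * Dk * M * N * Df * Df   # standard convolution: N kernels of size Dk x Dk x M
depthwise = Dk * Dk * M * Df * Df       # depthwise step: M kernels, one per input channel (no N)
pointwise = M * N * Df * Df             # pointwise step: N kernels of size 1 x 1 x M
separable = depthwise + pointwise

print(separable / standard)             # equals 1/N + 1/Dk**2
```

With Dk = 3 and N = 64, the separable version costs roughly an eighth of the standard convolution, which is the headline saving of the MobileNet design.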

  • @kyrilcouda (a year ago)

    Please don't use auto-focus... Set the camera's focus at the beginning and don't let it change during the video.

  • @shardulkaranjekar5802 (a year ago)

    Simply explained!!!

  • @possibleplus2859 (a year ago)

    Next time, can you please set your camera to fixed manual focus instead of auto-focus, since the distance between your camera and the sheet is constant?

    • @zardouayassir7359 (a year ago)

      My camera has this option for photo capture but not for video footage. I fixed this issue in the videos I posted after this series. Sorry for the inconvenience.

  • @MaccaBeats (a year ago)

    The two videos on the kernel trick might be two of the best videos I've ever watched on YouTube. You are clearly a very clever and capable person, and these videos make you appreciate what a gift free internet access is. Thanks a lot, you are amazing!

    • @zardouayassir7359 (a year ago)

      Your kind comment is also one of the internet's gifts. I'm really happy with your feedback. Thanks, Macca.

  • @prattipatimanojsai (a year ago)

    Nice explanation. Thanks 👍

  • @Idzi005 (a year ago)

    This is a really great video. Thanks!

  • @tejasvinnarayan2887 (a year ago)

    Zardoua Yassir made it so clear! Went through the entire playlist! Thanks!!!!

  • @rashid_____ (a year ago)

    Thank you very much 🙏🏿❤️✨

  • @huveja9799 (a year ago)

    One question: how do you go from MAX{2/||w||} to MIN{(1/2)||w||^2}?

    • @huveja9799 (a year ago)

      Well, I found the answer: go to Cross Validated (stats.stackexchange.com) and search for "Convert maximization problem to minimization".
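For readers landing here, the equivalence can be sketched directly: for w ≠ 0, maximizing 2/||w|| is the same as minimizing ||w||, and since t ↦ t²/2 is strictly increasing on nonnegative values, squaring and the 1/2 factor (kept for differentiation convenience) leave the minimizer unchanged:

```latex
\max_{w \neq 0} \frac{2}{\lVert w \rVert}
\;\Longleftrightarrow\;
\min_{w} \lVert w \rVert
\;\Longleftrightarrow\;
\min_{w} \tfrac{1}{2}\lVert w \rVert^{2}
```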