Google BERT Architecture Explained 1/3 - (BERT, Seq2Seq, Encoder Decoder)
- Published: 1 Oct 2024
- Google BERT (Bidirectional Encoder Representations from Transformers), a machine learning model for NLP, has been a breakthrough. In this video series I explain the architecture and help reduce the time it takes to understand this complex architecture.
Paper reference: Attention Is All You Need
References used in this part of the video:
ai.google/rese...
rajpurkar.gith...
google.github....
All References:
arxiv.org/pdf/...
github.com/hug...
mlexplained.com...
towardsdatasci...
towardsdatasci...
ai.google/rese...
rajpurkar.gith...
google.github....
lilianweng.git...
stats.stackexc...
Thanks to our training partner TechieGlobus: www.techieglobu...
Finally someone made a good video on BERT. Hope more details will follow
Next time I recommend removing the background music. It just creates noise in the presentation.
Sure, will take care of that. Hope the content and delivery were relevant.
At 5:00 you say the language is German? It's Chinese :p; thanks for the video!
Thanks.. I will learn both languages now 😉😊
Japanese actually :)
True, it’s Japanese.
It's not German, it's Chinese :), "Sehr gut" ("very good"). My comment is in German :)
I hope the content was useful to you.
A bit confusing and insecure... many words but few takeaways.
What do you mean by insecure?
Very well explained, sir. Awesome.
Can you please explain building BERT for a summarization task?
It is like clustering on BERT embeddings. Did you face issues with the links on Google?
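As a rough illustration of the "clustering on BERT embeddings" idea above, here is a minimal sketch of extractive summarization: embed each sentence, cluster the embeddings, and keep the sentence nearest each cluster centroid. It assumes the sentence-transformers and scikit-learn packages, and the model name all-MiniLM-L6-v2 is just one common public choice, not something from the video.

```python
# Minimal sketch: extractive summarization by clustering BERT-style
# sentence embeddings (assumptions: sentence-transformers + scikit-learn;
# the checkpoint name is one common public choice, not the video's).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def summarize(sentences, n_summary_sentences=3):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(sentences)          # shape: (n_sentences, dim)

    kmeans = KMeans(n_clusters=n_summary_sentences, n_init=10)
    kmeans.fit(embeddings)

    # For each cluster, keep the sentence closest to the centroid.
    picked = []
    for centroid in kmeans.cluster_centers_:
        distances = np.linalg.norm(embeddings - centroid, axis=1)
        picked.append(int(np.argmin(distances)))

    # Return the picked sentences in their original order.
    return [sentences[i] for i in sorted(set(picked))]
```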
You have an Indian flavor and a good way of explaining. Looking forward to your videos.
Thanks
Any specific content you are interested in?
@SandeepBhutani I saw your playlist, and most of the videos are on NLP. I think you could build a playlist that educates people in applying machine learning and deep learning to NLP problems. If you structure it well with an incremental storyline, I think that will have more value than university degrees/research work. Most people are scared away just by the equations.
Great video.
I would also love to see a sequel to this video in which @Sandeep Bhutani practically implements BERT in a use case/project/example.
Keep up the good work.
Sure, will create a video on this.
Hi Sandeep, I liked your explanation. Would you please make a video on how to implement this for a Q&A task? Please explain the code. Thank you.
Sure. There is another video on QnA using AllenNLP, in case you are interested.
Thank you, I will go through it
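For anyone who wants to try a Q&A task while waiting for that video, below is a minimal sketch using the HuggingFace transformers question-answering pipeline rather than the AllenNLP route mentioned above; the checkpoint name is an assumption (one publicly available SQuAD-fine-tuned model).

```python
# Minimal Q&A sketch with HuggingFace transformers (not the AllenNLP demo
# from the other video); the model name is an assumed public checkpoint.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What does BERT stand for?",
    context="BERT (Bidirectional Encoder Representations from Transformers) "
            "is a language model pretrained on large text corpora.",
)
print(result["answer"], result["score"])  # extracted answer span + confidence
```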
Good overview. Thanks for the video.
Not many technical aspects there; also, try to improve your presentation style - it's very confusing. Thumbs up for the effort.
Thanks for the feedback. What are you looking for? Code? Data values? Values at different layers?
How can I feed multiple inputs to a seq2seq model?
The question is not clear. Can you please elaborate on your use case?
In a seq2seq model we just feed a single input and get a single output... but is it possible to pass multiple inputs and get a single output?
You can check a caption generator. There, multiple inputs are being fed.
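To make the caption-generator pointer concrete, here is a hedged sketch of a two-input Keras model in that style: one branch takes precomputed image features, the other a partial word sequence, and the merged representation predicts a single output (the next word). All layer and vocabulary sizes are illustrative assumptions, not values from the video.

```python
# Sketch of a two-input ("multiple inputs, single output") Keras model,
# in the style of an image-caption generator. Sizes are assumptions.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_len, feat_dim = 5000, 30, 2048   # illustrative values

image_input = Input(shape=(feat_dim,))           # e.g. CNN image features
img = Dense(256, activation="relu")(image_input)

text_input = Input(shape=(max_len,))             # partial caption so far
txt = Embedding(vocab_size, 256, mask_zero=True)(text_input)
txt = LSTM(256)(txt)

merged = add([img, txt])                         # combine both inputs
next_word = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[image_input, text_input], outputs=next_word)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```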
Fake video.
Thanks for your expert comment. It would be helpful if you could point out where and what went wrong in the video. Will take care in future.