Your English is very pleasing. Fellow Italian physicist here, beginning my PhD studies in astrophysical techniques. Will benefit from these videos for sure, thank you!
This is a very fine introductory lecture on deep learning. Really looking forward to watching the next ones. Thanks a lot for sharing, Professor!
Thanks for uploading this content, and for everything else on your channel.
1:16:55 Professor, I have one question: you said the vertical axis corresponds to y2 and the horizontal axis to y1. In my opinion, the code fills the grid in row-major order in the double for loop, so the vertical axis should correspond to y1 and the horizontal axis to y2. Am I right? Please advise.
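A minimal sketch of what I mean, assuming the grid is filled as Z[i][j] = f(y1[i], y2[j]) in a row-major double loop (the names y1, y2 and the function here are just placeholders, not the lecture's actual code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder coordinate arrays standing in for the lecture's y1, y2.
y1 = np.linspace(-1.0, 1.0, 50)
y2 = np.linspace(-1.0, 1.0, 50)

# Row-major double loop: the first index i (over y1) selects the row.
Z = np.empty((len(y1), len(y2)))
for i in range(len(y1)):
    for j in range(len(y2)):
        Z[i, j] = y1[i] ** 2 + y2[j]  # placeholder function

# With imshow, rows run along the vertical axis, so y1 ends up vertical
# and y2 horizontal unless the array is transposed before plotting.
plt.imshow(Z, origin="lower",
           extent=[y2.min(), y2.max(), y1.min(), y1.max()])
plt.xlabel("y2 (columns, index j)")
plt.ylabel("y1 (rows, index i)")
plt.colorbar()
plt.show()
```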
Thanks a lot, Prof.
I am interested in this field. I am an applied physics undergraduate student currently studying ML, and I was looking to integrate these two fields with each other.
ReLU is a switch: f(x)=x means connect, f(x)=0 means disconnect. A dot product of a number of dot products is still a dot product. So what is actually happening in a ReLU network? Do you see?
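To make the "switch" point concrete, here is a minimal sketch with random weights (my own toy example, not the lecture's network): once the on/off pattern of the ReLUs is fixed for a given input, a two-layer ReLU network collapses into a single matrix, i.e. one system of dot products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network: x -> relu(W1 x) -> W2 (.)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))
x = rng.standard_normal(4)

pre = W1 @ x                         # pre-activations
switches = (pre > 0).astype(float)   # ReLU as a 0/1 switch per unit
y_network = W2 @ np.maximum(pre, 0.0)

# For this input's switch pattern, the whole network is one linear map:
# W_eff = W2 @ diag(switches) @ W1, a single system of dot products.
W_eff = W2 @ (np.diag(switches) @ W1)
y_linear = W_eff @ x

print(np.allclose(y_network, y_linear))  # True
```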
Also, fast transforms like the FFT and the fast Walsh-Hadamard transform can be viewed as fixed systems of dot products that are obviously very quick to compute. You can think of ways to include them in neural networks. I think there is a blog, AI624 maybe.
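A rough sketch of that idea (my own toy code): the fast Walsh-Hadamard transform computes the same fixed +/-1 dot products as multiplying by a Hadamard matrix, but in O(n log n) butterfly steps, so it could stand in for a fixed, non-learned mixing layer.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; length must be a power of 2."""
    x = x.astype(float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly step
        h *= 2
    return x

def hadamard(n):
    """Explicit Hadamard matrix (Sylvester construction), for comparison."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

x = np.random.default_rng(1).standard_normal(8)
# The fast transform matches the dense fixed-weight matrix multiply.
print(np.allclose(fwht(x), hadamard(8) @ x))  # True
```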