Came here after I saw a picture of the poster from NeurIPS. I wish more deep learning researchers made videos explaining their papers in brief. Great use of Manim.
Impressive.
Some ideas look so natural, so useful, that we ask ourselves why we haven't tried them before.
Congrats!
Logic gates are an interesting idea for a model to learn. And the inference time being in nanoseconds is insane!!
We are working on the same topic, except that I'm treating them as superpositions rather than differentiable parts. Congrats on beating me to it!
Wow! This is what real research is about. What a cool idea!! Keep up the great work.
Just awesome! I've been wondering about how logic and neural networks could be combined for a long time, and this seems exactly the kind of thing I was hoping was possible! Amazing work, guys!
Mind-blowingly amazing, and it reduces computation too 👏!
Amazing work!
Damn, this is impressive. Where did you get the idea?
this is extremely cool
I love this. Thought about this a while ago, great work!
This idea seems like it could be a genius one
this is very interesting. cheers 🎉
Nice! Just want to know, what pushed you guys to use gates?
Wow, the implications here for running embedded models with such a speedup are amazing!
Can you elaborate on how you create logic gates for real-valued inputs? After passing through any of the weighted functions, the inputs will no longer be binary, right?
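For anyone wondering the same thing: as I understand it, the gates themselves are relaxed to real-valued surrogates that agree with the Boolean gate on {0, 1}, so activations can live anywhere in [0, 1] during training and nothing has to stay strictly binary. A minimal sketch, with illustrative function names:

```python
# Minimal sketch of real-valued ("soft") relaxations of two-input logic
# gates: inputs are treated as probabilities in [0, 1], and each gate is
# replaced by a differentiable surrogate that matches the Boolean gate
# exactly on {0, 1}.

def soft_and(a, b):
    return a * b          # AND(a, b) -> a * b

def soft_or(a, b):
    return a + b - a * b  # OR(a, b) -> a + b - a*b

def soft_xor(a, b):
    return a + b - 2 * a * b

# On binary inputs these match the exact gates...
assert soft_and(1, 0) == 0 and soft_or(1, 0) == 1 and soft_xor(1, 1) == 0

# ...and on real-valued activations they stay smooth and differentiable:
print(soft_and(0.9, 0.7))  # 0.63 -- a "mostly true" AND
```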
Could you please share the manim code for this video? Thank you!
Are DLGNs interpretable (or at least more interpretable) than traditional neural networks? AFAIK, the field of digital design is "very well understood," and HW designers have been using synthesis tools for 40+ years to map HDL code (i.e., operations) to real logic circuits. I am wondering if any of this could be reused to understand what a logic gate (or a group of them) at a given layer is doing to perform the task, something that is arguably not really possible in traditional DNNs.
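One angle on this: once training is done, each neuron's distribution over gate types can be hardened to a single discrete gate, which yields an ordinary Boolean netlist that standard synthesis/analysis tooling could, in principle, chew on. A toy sketch (the logits, gate names, and wiring below are made up purely for illustration):

```python
# Hypothetical sketch: "harden" each neuron by taking the argmax over
# its learned gate-type scores, then print a netlist-style description.
import numpy as np

GATE_NAMES = ["FALSE", "AND", "A_AND_NOT_B", "A", "NOT_A_AND_B", "B",
              "XOR", "OR", "NOR", "XNOR", "NOT_B", "A_OR_NOT_B",
              "NOT_A", "NOT_A_OR_B", "NAND", "TRUE"]

rng = np.random.default_rng(0)
gate_logits = rng.normal(size=(4, 16))     # 4 neurons x 16 gate types
wiring = [(0, 1), (1, 2), (2, 3), (0, 3)]  # fixed input pair per neuron

for neuron, (i, j) in enumerate(wiring):
    gate = GATE_NAMES[int(np.argmax(gate_logits[neuron]))]
    print(f"n{neuron} = {gate}(x{i}, x{j})")  # netlist-style line
```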
Their differentiable gates can be completely replaced by others during backpropagation, which makes this unsuitable for hardware unless you re-route the prior layer to routes whose behavior we know a priori. Thus, in hardware you need to create a lot of redundancy, which is not a bad thing: the whole brain is redundant, starting from the fact of having two separately functioning hemispheres. The idea is amazing. I'm working on something similar, but the gates are in probabilistic superposition, so they don't have to be replaced with others but rather turned partially on and off. Certainly, I have a lot of inherent redundancy inside my model. And since each gate is a single CPU operation, this is extremely fast.
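If it helps, the "superposition" view is essentially how these networks are trained anyway: each neuron holds learnable logits over the 16 two-input gates, and its output is the softmax-weighted mixture of all of them, so mass shifts between gates smoothly rather than gates being swapped discretely. A minimal PyTorch-style sketch (shapes and names are illustrative, not any official API):

```python
# A gate in "probabilistic superposition": the output is a
# softmax-weighted mixture of all 16 two-input gate relaxations.
import torch

def all_gates(a, b):
    # The 16 real-valued relaxations of the two-input Boolean functions.
    return torch.stack([
        torch.zeros_like(a), a * b, a - a * b, a,
        b - a * b, b, a + b - 2 * a * b, a + b - a * b,
        1 - (a + b - a * b), 1 - (a + b - 2 * a * b), 1 - b,
        1 - b + a * b, 1 - a, 1 - a + a * b, 1 - a * b,
        torch.ones_like(a),
    ])

logits = torch.randn(16, requires_grad=True)   # learnable gate choice
a, b = torch.tensor(0.9), torch.tensor(0.2)
out = (torch.softmax(logits, dim=0) * all_gates(a, b)).sum()
out.backward()   # gradients flow into the gate-choice logits
```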
I have been very interested in this topic and have been trying to get my hands dirty with it; however, I have a question: is this scalable? Can we increase the number of inputs per gate beyond 2?
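In principle there is nothing stopping wider gates, but the gate count explodes with fan-in: a k-input gate has 2^k truth-table rows and therefore 2^(2^k) possible Boolean functions, and that is what you would have to parameterize per neuron. A quick check of the arithmetic:

```python
# Number of distinct Boolean functions of k inputs: 2 ** (2 ** k).
# This is the size of the per-neuron gate "menu" at each fan-in.
for k in (2, 3, 4):
    print(k, 2 ** (2 ** k))   # 2 -> 16, 3 -> 256, 4 -> 65536
```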
Interesting!