It's so amazing.
Looking up information in the brain is way more efficient than searching through a database.
The brain checks the information in parallel.
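A rough way to see what parallel, content-addressable lookup means is a minimal numpy sketch (this is a software stand-in, not a claim about how the brain actually does it; the sizes, the 50 flipped bits, and the seed are all arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, stored = 2048, 20000            # bits per pattern, number of stored patterns

memory = rng.integers(0, 2, size=(stored, N), dtype=np.uint8)  # stored patterns
cue = memory[42].copy()
noise = rng.choice(N, size=50, replace=False)
cue[noise] ^= 1                    # corrupt 50 bits of pattern 42

# One vectorized pass compares the cue against EVERY stored pattern at once,
# instead of walking a database index record by record.
distances = (memory ^ cue).sum(axis=1)   # Hamming distance to each pattern
best = int(distances.argmin())
print(best, distances[best])             # -> 42 50: right memory despite noise
```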
severely underrated approach to AI
A good way to build intuition for the properties of SDR/SDM is to use a SPHERE as an illustration.
If every binary vector represents a point on the sphere (and we place our chosen point at one of the poles), then the majority of the other points lie around the equator of the sphere, i.e. far away from this particular point. What this means is that two points (binary vectors) have a very small chance of coinciding. (Distance here is Hamming distance, i.e. the number of bits in which two vectors differ.)
Another visual intuition is the x-y graph of a normal distribution, where the binary vector/point we pick sits at position (0,0): again, the majority of binary vectors lie at a distance around the mean, which is N/2 (N is the number of bits). The longer the vector (e.g. 2048 bits), the smaller the chance of two vectors coinciding.
And both of these intuitions hold for every single binary vector/point you pick.
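Both intuitions are easy to check numerically: the Hamming distance between two random N-bit vectors follows Binomial(N, 1/2), with mean N/2 and standard deviation sqrt(N)/2 (about 22.6 bits for N = 2048), which is why everything piles up at the "equator". A minimal sketch (the sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, pairs = 2048, 10000

a = rng.integers(0, 2, size=(pairs, N), dtype=np.uint8)
b = rng.integers(0, 2, size=(pairs, N), dtype=np.uint8)
d = (a ^ b).sum(axis=1)            # Hamming distance of each random pair

print(d.mean())                    # ~1024, i.e. N/2: the "equator"
print(d.std())                     # ~22.6, i.e. sqrt(N)/2: a very thin band
print(d.min(), d.max())            # all 10000 pairs hug the mean;
                                   # none lands anywhere near 0 (coincidence)
```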
The important thing is that this provides a natural self-tuning/fidelity/dexterity by the vector space itself.
The idea is that if two binary vectors differ by a small number of bits (Hamming distance), they fall within a radius of roughly N/4 to N/3 around the selected vector. So we can count them as SIMILAR if they are less than N/3 bits away, and NOT-SIMILAR if they are more than N/3 bits away; and we already saw that the MAJORITY of binary vectors lie beyond N/3 Hamming distance, concentrated around N/2.
Similar binary vectors are a small fraction of the whole, but still numerous enough to represent variations of a semantic feature. The rest, the non-similar binary vectors, are still a humongous amount, i.e. a large capacity to represent an almost unlimited number of different features once the vector is longer than about 1000 bits.
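Putting rough numbers on that SIMILAR/NOT-SIMILAR split with a plain-Python back-of-the-envelope (taking the N/3 cutoff from the comment above; the exact threshold is a design choice, and the counts are exact big-integer arithmetic):

```python
import math

N = 2048
threshold = N // 3                 # 682 bits: the SIMILAR cutoff from above

# Exact count of vectors within Hamming distance <= threshold of a fixed vector
similar = sum(math.comb(N, k) for k in range(threshold + 1))
total = 2 ** N

print(len(str(similar)))           # ~566 digits: a "small" slice, still humongous
print(similar / total)             # ~1e-51: a vanishing fraction of the space
print(len(str(total - similar)))   # ~617 digits: capacity left for other features
```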
The beauty of simplicity. 👍
RESPECT!
Can neurons tween Bayes factors?