I agree with Bill. This is the best explanation of the convolution formula I've seen on YouTube.
This is a *GREAT* explanation of the Convolution equation and the Convolution calculation process, the best I have seen.
@MIT OpenCourseWare - I suggest you include the words "Convolution Equation" in your lesson title; I believe many others are searching for this and will find it beneficial. *Thank you!*
This will remain a hidden gem, available only to the most persistent fellows.
I was skimming through various videos from this course hoping to find convolution among them, but had given up. Fortunate to have stumbled upon it.
The flipping is a very good and intuitive explanation for convolution. Thanks so much.
Great video, thank you very much, Γιάννη!
Direct and clear! Thanks a lot!
i wanna cry 🥲🥲🥲🥲🥲🥲🥲🥲
good stuff
Here is the final boss!!!
I'm a little confused as to *why* that flip-and-shift scheme works. 🤔 Could someone help clear that up for me?
I'll give it a shot. You're looking for a mapping. In this example, the mapping rule is that each x maps to the y such that their sum is 3. He starts drawing the mapped pairs at 4:50.
The mapped values are:
1 --> 2
4 --> -1
"Flipping" along the y-axis is the same as multiplying the domain by -1.
"Shifting" along the x-axis is the same as adding to the domain.
We can generalize this mapping with a single transformation: -1*(right-hand value) + 3 = left-hand value
Thus,
-1(2) + 3 = 1
-1(-1) + 3 = 4
This vertically aligns the corresponding probabilities you want to multiply together; it's simply a convenience for addressing the right pairs. The transformation yields two distributions with the same support, so if you wanted to code this in Python you could iterate through just one of the two (now identical) supports, and the same iterator variable could address both distributions. I didn't actually run this, but I think the idea would work.
# Assuming rv_X and rv_Y_transformed are dicts mapping value -> probability.
accum_var = 0
for i in rv_X:  # iterate over X's support
    accum_var += rv_X[i] * rv_Y_transformed.get(i, 0)
That i iterator variable addresses the correct probability for each random variable.
If you're doing it in code, maybe you don't need to transform the random variable; you could transform the iterator variable instead, so that it maps into the Y distribution. Which is faster, or clearer, or more maintainable? I think both could work.
# Assuming rv_X and rv_Y are dicts mapping value -> probability.
accum_var = 0
for i in rv_X:
    i_map = -i + 3  # flip and shift: the y that pairs with this x
    accum_var += rv_X[i] * rv_Y.get(i_map, 0)  # 0 if i_map is outside Y's support
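If it helps, here's a runnable version of the sketch above with made-up PMFs (the dicts and probabilities below are purely illustrative, not the numbers from the video); it computes P(Z = 3) both ways and they should agree.

# Hypothetical PMFs (value -> probability); the probabilities here are made up.
rv_X = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
rv_Y = {-1: 0.5, 2: 0.5}
z = 3

# Way 1: flip and shift Y's support, then iterate one domain for both PMFs.
rv_Y_transformed = {-y + z: p for y, p in rv_Y.items()}
p_way1 = sum(rv_X[i] * rv_Y_transformed.get(i, 0) for i in rv_X)

# Way 2: leave Y alone and transform the iterator instead.
p_way2 = sum(rv_X[i] * rv_Y.get(-i + z, 0) for i in rv_X)

print(p_way1, p_way2)  # both give the same P(Z = 3)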
@jsn1900 'y becomes -y', but because that's a transformation, shouldn't p(y) change too? I mean, P(Y=-1) and P(Y=1) may not be equal. Perhaps you still get the right result this way, but it's not clear. I think he should label what each x and y axis represents. Could someone please explain it to me?
We want the values y of Y that satisfy y = z - x, where x ranges over the values of X (equivalently, x = z - y). To line the altered P_Y graph up with P_X vertically, we flip the P_Y graph and then shift it by z units.
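For what it's worth, doing this for every value of z gives the full PMF of Z, i.e. P_Z(z) = sum over x of P_X(x)*P_Y(z - x). A minimal sketch, assuming X and Y are independent and their PMFs are stored as dicts (the numbers are made up):

# Full discrete convolution of two assumed PMF dicts (illustrative numbers only).
P_X = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
P_Y = {-1: 0.5, 2: 0.5}

P_Z = {}
for x, px in P_X.items():
    for y, py in P_Y.items():
        P_Z[x + y] = P_Z.get(x + y, 0) + px * py  # accumulate P_X(x) * P_Y(z - x) at z = x + y

print(P_Z)  # P_Z[3] matches the single-z flip-and-shift sum above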
0:45 You took the example z = x + y = 3. But in this case x and y are not independent: if we take x = 1 then we are restricted to y = 2, and so P(x=1 and y=2) will not equal P(x=1)*P(y=2).
If I'm wrong let me know.
X and Y are independent; Z is just built from them. Fixing z = 3 only picks out which (x, y) pairs belong to the event {Z = 3}; it doesn't change the fact that P(X=1 and Y=2) = P(X=1)*P(Y=2).
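A tiny numerical check of that point (with the same made-up PMFs as above): under independence every joint probability factors, including the pairs that happen to satisfy x + y = 3; restricting to those pairs is just conditioning on the event {Z = 3}, not a dependence between X and Y.

# Assumed independent PMFs (illustrative numbers only).
P_X = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
P_Y = {-1: 0.5, 2: 0.5}

# Joint PMF under independence: P(X=x, Y=y) = P_X(x) * P_Y(y) for every pair.
joint = {(x, y): px * py for x, px in P_X.items() for y, py in P_Y.items()}

print(joint[(1, 2)] == P_X[1] * P_Y[2])        # True: independence is not broken
print(sum(p for (x, y), p in joint.items() if x + y == 3))  # P(Z = 3)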
Lmao, professor Andrew really made us search for tutorials everywhere.
If we have two dependent random variables, how do we calculate the distribution of Z?