Thank you very much for the talk. This was super interesting. I'm also building a Media Mix model at my company, and I have a question. How sensitive is this modeling framework to the scale of the data? That is, should one apply a max scaling or a standardization scaler? Should one scale the spend and revenue time series independently? Are there any best practices for this that you can link to or elaborate on?
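(For reference, a minimal sketch of one common approach: scaling each series independently to [0, 1] with sklearn's MaxAbsScaler. This is an illustration of the question's "max scaling" option, not the speaker's stated recommendation, and the array names are made up.)

import numpy as np
from sklearn.preprocessing import MaxAbsScaler

# Hypothetical weekly spend and revenue series (illustrative values only).
spend = np.array([[120.0], [340.0], [80.0], [410.0]])
revenue = np.array([[1500.0], [2900.0], [900.0], [3600.0]])

# Scale each series independently by its own maximum, so both land in
# [0, 1] while zero spend stays exactly zero.
spend_scaled = MaxAbsScaler().fit_transform(spend)
revenue_scaled = MaxAbsScaler().fit_transform(revenue)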
Number one, Thomas, number one!
Great talk! Thanks for sharing
Is the hierarchical time series model available anywhere?
Does anyone know the name of the function he uses to model saturation? (at 23:00)
23:20 - the most infuriating part of the talk; I wish I had a link to the notebook where he did that.
What is so infuriating about it?
@pavellogacev94 He drastically improves the model with a "3-line code change" but doesn't show which three lines he changed.
@steeperdrip9188 It's like, "I want to tell people that I am smart, but I don't exactly want to share it."
import aesara.tensor as at
import pymc as pm

def tanh_saturation(x, b, c):
    # Saturating response curve: approaches the ceiling b as spend x grows;
    # c controls how quickly the curve flattens out.
    return b * at.tanh(x / (b * c))

with pm.Model() as model:
    # parameter = prior specification
    baseline = pm.Normal("baseline", mu=200, sigma=300)
    cac = pm.Normal("cac", mu=2.5, sigma=5)
    saturation = pm.Normal("saturation", mu=500, sigma=80)

    # regression: saturating response to ad spend plus a baseline
    # (ad_spend is the observed spend series, assumed defined earlier)
    pred = tanh_saturation(ad_spend, saturation, 1 / cac) + baseline
    noise = pm.HalfNormal("noise", 100)

    # likelihood
    # (customers is the observed outcome series, assumed defined earlier)
    obs = pm.Normal("customers",
                    mu=pred,
                    sigma=noise,
                    observed=customers)

    # inference button(TM)!
    idata = pm.sample()
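(A minimal sketch of how one might inspect the fit afterwards, assuming the standard ArviZ library; this step was not part of the pasted snippet.)

import arviz as az

# Summarize posterior means and credible intervals for each parameter.
print(az.summary(idata, var_names=["baseline", "cac", "saturation", "noise"]))

# Visual check of the posterior distributions.
az.plot_posterior(idata, var_names=["baseline", "cac", "saturation"])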