Good one. Waiting for your next project video :)
Thank you, very soon :)
Great video brother! Looking forward to the upcoming videos. Thanks for the efforts 🎉
Thank you so much :)
Very nice explanation. Very good.
Waiting on your full data engineer tutorial video..
Thank you so much, very soon :)
What is the difference between doing it in PySpark and SQL? I think in SQL it's much easier:
%sql
drop table if exists test;
CREATE TABLE test (
id int,
total int
);
INSERT INTO test (id, total)
VALUES
(1,10),
(1,20),
(1,30),
(1,40),
(2,20),
(2,40),
(2,60),
(2,80);
SELECT *,
SUM(total) OVER (PARTITION BY id ORDER BY total) as emp_run
FROM test;
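For comparison, here is a minimal PySpark sketch of the same running total (this assumes an active SparkSession called spark; the column names match the table above):

from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import col, sum as sum_

spark = SparkSession.builder.getOrCreate()

data = [(1, 10), (1, 20), (1, 30), (1, 40),
        (2, 20), (2, 40), (2, 60), (2, 80)]
df = spark.createDataFrame(data, ["id", "total"])

# Same window as the SQL version: partition by id, order by total
win = Window.partitionBy("id").orderBy("total")

# Running sum per id, named emp_run to match the SQL query
df.withColumn("emp_run", sum_(col("total")).over(win)).show()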
Thank you for the dedication on your videos man, they are very helpful for hands-on projects and learning
Thank you so much :)
Your explanation skill is too good ❤️ hoping for more videos on topics such as projects, Airflow, dbt, Snowflake :)
Thank you so much, very soon :)
Waiting on your full data engineer tutorial video Mr KT.... Some months ago you said you were working on something, I hope you are still working on it, I really look forward to that video as it will help me a lot.
Thank you for this too.
Working on an end-to-end project, will try to upload soon :)
@mr.ktalkstech looking forward to seeing it.... Thank you for your amazing work.
Sir please upload scenario-based questions for ADF, Key Vault, etc.
They're asked in interviews
Even if we don't add rowsBetween, it works the same way, right? I mean it's the default, right?
Yup, the main intention is to explain what's happening under the hood :)
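A small sketch to make that default visible (reusing the df and column names from the example above, which is an assumption): once the window has an orderBy, Spark's default frame runs from unbounded preceding to the current row (as a range frame), so writing rowsBetween out explicitly gives the same running sum as long as the ordering column has no ties within a partition.

from pyspark.sql import Window
from pyspark.sql.functions import col, sum as sum_

ordered_win = Window.partitionBy("id").orderBy("total")

# Frame spelled out explicitly: unbounded preceding up to the current row
explicit_win = ordered_win.rowsBetween(Window.unboundedPreceding, Window.currentRow)

# Both columns agree here because total has no duplicate values within an id
(df.withColumn("run_default", sum_(col("total")).over(ordered_win))
   .withColumn("run_explicit", sum_(col("total")).over(explicit_win))
   .show())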
Don't we also need to add order by col(total) in the window spec? That would make the code deterministic
Good point, it's better to add it, you are absolutely right :)
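A quick sketch of that point (again assuming the df from the example above): with only partitionBy and a rows frame, the order of rows inside each partition is not guaranteed, so the intermediate running-sum values can change between runs; adding orderBy pins the order.

from pyspark.sql import Window
from pyspark.sql.functions import col, sum as sum_

# Partition-only window with a rows frame: row order is not guaranteed,
# so the running sum is not deterministic
ambiguous_win = Window.partitionBy("id").rowsBetween(Window.unboundedPreceding, Window.currentRow)

# Adding orderBy(col("total")) pins the row order and makes it deterministic
deterministic_win = Window.partitionBy("id").orderBy(col("total")).rowsBetween(Window.unboundedPreceding, Window.currentRow)

df.withColumn("emp_run", sum_(col("total")).over(deterministic_win)).show()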
unboundedPreceding and currentRow are the default, right?
Yup, the main intention is to explain what's happening under the hood :)
Right :)
You are doing great work.
I learned Azure projects only from your videos.
Thank you so much🤙
God bless you🙏
Explain in Tamil also
@mr.ktalkstech do videos on a regular basis, your subject is awesome
Sure, I am trying my best :)
@mr.ktalkstech can you please do a video on Unity Catalog
Sure, will upload that soon :)