Build An Airflow Data Pipeline To Download Podcasts [Beginner Data Engineer Tutorial]

  • Published: 23 Aug 2024

Comments • 44

  • @manyes7577
    @manyes7577 2 years ago +17

    I think you’re the best data science lecturer so far. Keep going, thanks for your hard work.

  • @devanshsharma5159
    @devanshsharma5159 1 year ago +1

    Beautiful explanation and a great project to get me started! Many thanks vik!!
    One thing to add from my experience: I installed Airflow on my Mac M1 and it was working fine, but I couldn't run any of the tasks we performed here (not even the get_episodes task). To solve that, I made an EC2 instance, and with some tweaks everything ran :D

  • @mahmoodhossainfarsim6292
    @mahmoodhossainfarsim6292 1 year ago +1

    It was very useful. Thank you. It would be really helpful if you also covered Apache Hadoop, Spark, MLflow, Flink, Flume, Pig, Hive, etc. Thank you.

  • @HieuLe-tw7qm
    @HieuLe-tw7qm 1 year ago +1

    Thank you very much for this amazing tutorial :D

  • @demohub
    @demohub 1 year ago

    This video was a great resource. Thanks for the tutelage and your take on it.

  • @lightman2130
    @lightman2130 2 years ago +1

    What an amazing tutorial! Thanks a lot

  • @dataprofessor_
    @dataprofessor_ 1 year ago +2

    Can you make more advanced Apache Airflow tutorials too?

  • @Funkykeyzman
    @Funkykeyzman 2 years ago +4

    Debug tip #1: If you run into the error "conn_id isn't defined", use the Airflow browser interface to create the connection instead: select Admin --> Connections --> +
    Debug tip #2: If your Airflow runs fail, try logging out of the Airflow UI and restarting the Airflow server by pressing Ctrl + C and then running airflow standalone.

  • @kiish8571
    @kiish8571 2 years ago +1

    This is very educational, thanks a lot. I was wondering if you would be making a video on the automatic transcriptions?

    • @Dataquestio
      @Dataquestio  2 years ago

      Yes, I will be doing a webinar for this tomorrow, and the video should be live next week on YouTube. -Vik

  • @diasfetti8393
    @diasfetti8393 1 year ago

    👍👍👍 Excellent tutorial. Thanks a lot

  • @lolujames7668
    @lolujames7668 2 years ago +1

    Nice one @Vik

  • @Maheshwaripremierleague
    @Maheshwaripremierleague 3 months ago

    If you are facing an issue where your DAG runs when creating the database but never completes, put this line after importing the packages: os.environ['NO_PROXY'] = '*' . It will work then for sure.
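    A minimal sketch of the workaround above. Placing the line right after the imports at the top of the DAG file is an assumption based on this comment; the setting disables proxy resolution for the process and anything it forks, which is a known fix for hanging tasks on macOS.

    ```python
    import os

    # Reported workaround for DAG tasks that start but never complete:
    # disable proxy lookups before Airflow forks any task processes.
    # Put this immediately after the imports in the DAG file.
    os.environ["NO_PROXY"] = "*"
    ```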

  • @lalumutawalli9497
    @lalumutawalli9497 1 year ago

    Thank you for your tutorials. Could you let me know which Airflow version you used in the tutorial, so I can practice with it?

  • @parikshitchavan2211
    @parikshitchavan2211 1 year ago

    Hello Vik, thanks for such a great tutorial; everything you made was smooth like butter. Just one question: whenever we make a new DAG, will we have to add docker-compose-CeleryExecutor, docker-compose-LocalExecutor, and a config for that particular DAG?

  • @rohitpandey9920
    @rohitpandey9920 1 year ago +2

    I am stuck at 14:50, where you try to run the task in Airflow. You simply switched the screen from the PyCharm terminal to the git master terminal without any explanation, and I am unable to connect SQLite to the PyCharm terminal, nor could I establish a connection with Airflow. Please guide me through this.

  • @vish949
    @vish949 11 months ago +1

    Whenever I run airflow standalone (or even airflow webserver) I get a ModuleNotFoundError for pwd. I'm running it on Windows; how do I solve this?

  • @OBGynKenobi
    @OBGynKenobi 1 year ago

    So where is the dependency chain where you set the actual task flow? I would have expected something like task1 >> task2, etc., at the bottom of the DAG.
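    A likely answer, sketched below under the assumption that the tutorial uses Airflow's TaskFlow API: when one task's return value is passed into the next task, Airflow infers the dependency from the data flow, so no explicit `task1 >> task2` lines are needed. All names here are illustrative, not taken from the video.

    ```python
    # Sketch: TaskFlow API infers dependencies from data passing.
    from airflow.decorators import dag, task
    import pendulum

    @dag(schedule_interval="@daily",
         start_date=pendulum.datetime(2022, 5, 30),
         catchup=False)
    def podcast_summary():
        @task()
        def get_episodes():
            return [{"link": "/ep1"}]

        @task()
        def load_episodes(episodes):
            print(f"Got {len(episodes)} episodes")

        # Passing the output creates an implicit get_episodes >> load_episodes
        episodes = get_episodes()
        load_episodes(episodes)

    podcast_summary()
    ```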

  • @user-vy9in2xs6c
    @user-vy9in2xs6c 1 year ago

    At 33:48, how did we get the 'Done loading. Loaded a total of 0 rows' message? We haven't used this text in our code anywhere. Is this the work of hook.insert_rows?
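    A plausible answer, sketched below: the message is printed by the load task itself after it filters out episodes whose links are already stored in the database; hook.insert_rows only performs the insert. The function name, variable names, and exact message text here are assumptions based on the video.

    ```python
    # Sketch of the logic behind "Done loading. Loaded a total of N rows":
    # compare freshly fetched episodes against links already in the table,
    # keep only the new ones, and print the count from the task itself.
    def filter_new_episodes(episodes, stored_links):
        new_episodes = []
        for episode in episodes:
            if episode["link"] not in stored_links:
                new_episodes.append((episode["link"], episode["title"]))
        print(f"Done loading. Loaded a total of {len(new_episodes)} rows")
        return new_episodes

    # On a repeat run every link is already stored, so 0 rows are loaded:
    rows = filter_new_episodes(
        [{"link": "/ep1", "title": "Episode 1"}],
        {"/ep1"},
    )
    # rows == []
    ```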

  • @thangnguyen3786
    @thangnguyen3786 9 months ago

    Hi everyone. I have configured Airflow with Docker in a folder which includes a docker yaml file. Now I want to use Airflow in another folder; how can I do that without the docker yaml file? Must I configure it again in that folder?

  • @investandcyclecheap4890
    @investandcyclecheap4890 2 years ago +1

    Really liked this tutorial. The download episodes task is freezing on me: the task is "running", but it appears to be getting held up and has not actually downloaded the first episode for some reason.

    • @Dataquestio
      @Dataquestio  2 years ago

      That's strange - can you access the podcast site in your browser? It may be blocking you for some reason. It's also possible that the airflow task executor isn't running properly.

    • @sungwonbyun5683
      @sungwonbyun5683 10 months ago

      @Dataquestio I ran into the same issue, except on the very first task: with "get_episodes" nothing happens and it eventually times out. I tested the script in a Python console and it returned the list of episodes just fine.

    • @sungwonbyun5683
      @sungwonbyun5683 10 months ago +2

      Fix for me was to start the Airflow server as the root user: "sudo airflow standalone"

  • @yousufm.n2515
    @yousufm.n2515 1 year ago

    When I change the 'dags_folder' path, everything breaks in Airflow. What could be the reason?

  • @DayOneCricket
    @DayOneCricket 7 months ago

    Your first bit on setting the environment variable didn't work.

  • @user-tm9ng7if3t
    @user-tm9ng7if3t 9 months ago

    Hello. I did everything as shown, but it fails and no logs are visible.

  • @bryancapulong147
    @bryancapulong147 1 year ago

    My download_episodes task succeeds, but I cannot see the mp3 files.

  • @user-vy9in2xs6c
    @user-vy9in2xs6c 1 year ago

    I need help. For the final task it showed audio_path: no such file or directory. So I used 'os.makedirs(audio_path, exist_ok=True)'. The DAG was a success, but I couldn't find any files in my episodes folder. Please help.

    • @Maheshwaripremierleague
      @Maheshwaripremierleague 3 months ago

      def download_episodes(episodes):
          for episode in episodes:
              filename = f"{episode['link'].split('/')[-1]}.mp3"
              audio_path = os.path.join("episodes", filename)
              if not os.path.exists("episodes"):
                  os.makedirs("episodes")
              if not os.path.exists(audio_path):
                  print(f"Downloading (unknown)")
                  audio = requests.get(episode["enclosure"]["@url"])
                  with open(audio_path, "wb+") as f:
                      f.write(audio.content)

    • @Maheshwaripremierleague
      @Maheshwaripremierleague 3 months ago

      It will create the episodes folder if it doesn't already exist.

    • @Maheshwaripremierleague
      @Maheshwaripremierleague 3 months ago

      It happens because your Airflow working directory might not be pointing at the folder you expect, so it creates the episodes folder somewhere else. You can then search for the folder to find out where it was created.

  • @dolamuoludare4383
    @dolamuoludare4383 1 year ago

    Please kindly help: when I write my DAG in VS Code, it doesn't show up in the web UI and I keep getting a DagNotFound error.

    • @youssefelfhayel7078
      @youssefelfhayel7078 1 year ago

      Add these lines to your airflow.cfg config file:
      min_file_process_interval = 0
      dag_dir_list_interval = 30
      and then the DAGs will be updated automatically.
      P.S.: Be sure that your DAG is in the dags folder.

  • @omarhossam285
    @omarhossam285 1 year ago

    How did you change your terminal to show git:(master)?

    • @Dataquestio
      @Dataquestio  1 year ago

      I use a shell called zsh. There is a plugin for zsh that can show you your git branch.

    • @omarhossam285
      @omarhossam285 1 year ago

      @Dataquestio Thanks, man

  • @aminatlawal21
    @aminatlawal21 1 year ago

    How did he get the web page metadata in XML?

    • @HJesse88
      @HJesse88 1 year ago

      Look at the link in the video, type that link into a Chromium browser, and it should appear. Voilà.

    • @rohitpandey9920
      @rohitpandey9920 1 year ago

      @HJesse88 It didn't appear for me.

  • @parkuuu
    @parkuuu 2 years ago

    Awesome tutorial!
    Just had some confusion on the transform and load function, particularly this code:
    stored = hook.get_pandas_df('SELECT * FROM episodes;')
    I thought you were querying the episode list returned from the extract function, but then realized it is the same as the table name in the database lol.

    • @Dataquestio
      @Dataquestio  2 years ago

      Hi Park - that code is selecting from the SQLite database that we create. It makes sure the podcast hasn't been stored in the database yet (if it has, we don't need to store it again).
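      To make the distinction concrete, here is a minimal stand-in for that step, using plain sqlite3 in place of the Airflow SqliteHook (table and column names are assumptions): the SELECT reads the database *table* named episodes, and its result is used to skip episodes that are already stored.

      ```python
      # Sketch: the query hits the "episodes" database table, not the
      # Python list of episodes returned by the extract task.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE episodes (link TEXT PRIMARY KEY, title TEXT)")
      conn.execute("INSERT INTO episodes VALUES ('/ep1', 'Episode 1')")

      # Links already stored in the database table:
      stored_links = {row[0] for row in conn.execute("SELECT link FROM episodes")}

      # Freshly fetched episodes; only the unseen one should be inserted:
      fetched = [{"link": "/ep1"}, {"link": "/ep2"}]
      new = [e for e in fetched if e["link"] not in stored_links]
      print(new)  # -> [{'link': '/ep2'}]
      ```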