12 Understand Spark UI, Read CSV Files and Read Modes | Spark InferSchema Option | Drop Malformed

  • Published: 1 Dec 2024

Comments • 25

  • @rahulpanda9256
    @rahulpanda9256 2 months ago

    No words, just awesome. Please cover more such concepts. We are with you!

    • @easewithdata
      @easewithdata 2 months ago

      If you like my content, please make sure to share it with your network on LinkedIn 👍

  • @pradishpranam9108
    @pradishpranam9108 2 months ago

    Highly underrated series. Keep up the good work!

    • @easewithdata
      @easewithdata 2 months ago

      Thank you so much for your lovely comment! ❤️ I hope my playlist made it easier for you to learn PySpark.
      To help me grow, please share it with your network on LinkedIn 👍

  • @manishkumar1450
    @manishkumar1450 7 months ago +1

    Crisp and clear 👌

    • @easewithdata
      @easewithdata 7 months ago

      Thanks ❤️ Please make sure to share with your network on LinkedIn

  • @sambatammavarapu2280
    @sambatammavarapu2280 1 year ago

    Really good sessions.

    • @easewithdata
      @easewithdata 1 year ago

      Glad you like them! Please make sure to share with your network on LinkedIn ❤️

  • @bidyasagarpradhan2751
    @bidyasagarpradhan2751 11 months ago

    Lots of new things learned today 👍

  • @yo_793
    @yo_793 11 months ago

    AWESOME!

  • @Rakesh-q7m8r
    @Rakesh-q7m8r 10 months ago

    Hi Shubham,
    Great content. I am following your series in a Databricks environment. When we read a file, it generates a job to get the metadata. When I check the execution metrics in the Databricks UI, it does not show input size/records, but in your Docker container it does. Where can we check that info in Databricks?
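
    (A minimal sketch of why that metadata job appears, assuming an illustrative file path data/emp.csv that is not from the video: with inferSchema enabled, Spark must scan the file eagerly to work out the column datatypes, so a job runs before any action is called.)

        from pyspark.sql import SparkSession

        # A standalone session for illustration; a Databricks notebook
        # already provides one as `spark`.
        spark = SparkSession.builder.appName("metadata-job-demo").getOrCreate()

        # inferSchema forces an eager scan of the file to determine the
        # column datatypes; that scan is the extra job visible in the
        # Spark UI's Jobs tab, even though no action has been called yet.
        df = spark.read.csv("data/emp.csv", header=True, inferSchema=True)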

  • @vineethreddy.s
    @vineethreddy.s 6 months ago

    3:00 What do you mean by identifying the metadata? What's the use of it in this context?

    • @easewithdata
      @easewithdata 6 months ago +1

      Metadata means the information about the column names and their datatypes.
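
      (A minimal sketch of the difference, again assuming the illustrative data/emp.csv: without inferSchema only the column names are read from the header and every datatype defaults to string; with it, Spark also infers the datatypes.)

          from pyspark.sql import SparkSession

          spark = SparkSession.builder.appName("infer-schema-demo").getOrCreate()

          # Header only: column names are picked up, but every column's
          # datatype defaults to string.
          df_default = spark.read.csv("data/emp.csv", header=True)
          df_default.printSchema()   # all columns shown as string

          # inferSchema additionally scans the data and infers each
          # column's datatype (integer, double, etc.): the "metadata"
          # in question.
          df_inferred = spark.read.csv("data/emp.csv", header=True, inferSchema=True)
          df_inferred.printSchema()  # typed columns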

  • @abdulraheem2874
    @abdulraheem2874 1 year ago +1

    Can you make some videos about PySpark interview questions?

    • @easewithdata
      @easewithdata 1 year ago

      Sure, I will definitely create some. Make sure to share this with your network.

    • @yo_793
      @yo_793 11 months ago

      PySpark Interview Series for the top companies
      ruclips.net/p/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0&si=m82ejHBVkhSLWFET

    • @abdulraheem2874
      @abdulraheem2874 11 months ago

      @@yo_793 Thank you!

  • @BnfHunterr
    @BnfHunterr 1 year ago +1

    Please make a video on how to write production-grade code and do unit testing. These things are not available on YouTube. Can you please make it?

    • @easewithdata
      @easewithdata 1 year ago

      Will surely make a video on that. Thanks for following ❤️

    • @yo_793
      @yo_793 11 months ago

      PySpark Interview Series of Top Companies
      ruclips.net/p/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0&si=m82ejHBVkhSLWFET

  • @omkarm7865
    @omkarm7865 1 year ago

    Can you please do it in Databricks?

    • @easewithdata
      @easewithdata 1 year ago +1

      Hello,
      You can lift and shift the same code into Databricks and it will work. The only difference: you don't need to create a SparkSession in a Databricks notebook, because it generates one for you.
      Hope this helps.
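
      (A minimal sketch of that difference, using the same illustrative file path as above:)

          from pyspark.sql import SparkSession

          # Locally (e.g. in the Docker container used in the series)
          # you create the session yourself.
          spark = SparkSession.builder.appName("csv-read-demo").getOrCreate()

          # In a Databricks notebook this builder step is unnecessary:
          # a session named `spark` already exists, and the rest of the
          # code is identical.
          df = spark.read.csv("data/emp.csv", header=True, inferSchema=True)
          df.show()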

  • @yo_793
    @yo_793 11 months ago

    PySpark Interview Series for the Top Companies
    ruclips.net/p/PLqGLh1jt697zXpQy8WyyDr194qoCLNg_0&si=m82ejHBVkhSLWFET

  • @omkarm7865
    @omkarm7865 1 year ago

    Such a long gap 😅

    • @easewithdata
      @easewithdata 1 year ago

      The series has now resumed. New videos are being published every 3 days. Thanks for following ❤️