Spark Performance Tuning | Memory Architecture | Interview Question

  • Published: 7 Sep 2024
  • #Apache #Spark #Performance #Memory
    In this video, we discuss Spark performance optimisation for efficient memory management.
    Please join my channel as a member to get additional benefits such as materials on Big Data and Data Science, live streams for members, and more.
    Click here to subscribe : / @techwithviresh
    About us:
    We are a technology consulting and training provider, specializing in areas such as Machine Learning, AI, Spark, Big Data, NoSQL, graph databases, Cassandra, and the Hadoop ecosystem.
    Mastering Spark : • Spark Scenario Based I...
    Mastering Hive : • Mastering Hive Tutoria...
    Spark Interview Questions : • Cache vs Persist | Spa...
    Mastering Hadoop : • Hadoop Tutorial | Map ...
    Visit us :
    Email: techwithviresh@gmail.com
    Facebook : / tech-greens
    Twitter : @TechViresh
    Thanks for watching
    Please Subscribe!!! Like, share and comment!!!!
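
As a rough companion to the video's topic, Spark's unified memory model (Spark 1.6+) can be sketched in plain Python. This assumes the standard defaults spark.memory.fraction = 0.6 and spark.memory.storageFraction = 0.5; the 300 MB reserved region is a fixed constant in Spark's source. The numbers are illustrative, not a substitute for the video's explanation.

```python
RESERVED_MEMORY = 300 * 1024 * 1024  # 300 MB reserved for Spark internals

def unified_memory(heap_bytes, memory_fraction=0.6, storage_fraction=0.5):
    """Return (execution_bytes, storage_bytes) for a given executor heap."""
    usable = heap_bytes - RESERVED_MEMORY
    unified = usable * memory_fraction      # shared execution + storage pool
    storage = unified * storage_fraction    # soft boundary: execution can evict
    execution = unified - storage           # storage, and storage can borrow
    return execution, storage

# Example: a 4 GB executor heap
execution, storage = unified_memory(4 * 1024**3)
print(f"execution ≈ {execution / 1024**2:.0f} MB, storage ≈ {storage / 1024**2:.0f} MB")
```

Because the storage fraction is only a soft boundary, either side can grow into the other's half of the unified pool at runtime; the split above is just the starting point.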

Comments • 15

  • @ajeepp06
    @ajeepp06 4 years ago +4

    The videos are awesome, with good content and great explanations, but a summary at the end would be a great addition to recap all the points covered. Thanks for the video.

    • @TechWithViresh
      @TechWithViresh 4 years ago +1

      Thanks, sure, we will take care of that :)

  • @rockngelement
    @rockngelement 4 years ago +1

    Nice, I saw this at Spark Summit, but you explained it well. Kudos to you.

  • @sanjitkhasnobis9202
    @sanjitkhasnobis9202 2 years ago +1

    very nice!!

  • @sumitgandhi628
    @sumitgandhi628 3 years ago +2

    Awesome explanation, thanks :)

  • @dipanjansaha6824
    @dipanjansaha6824 4 years ago +2

    Thank you for the awesome videos..

  • @RAVIC3200
    @RAVIC3200 4 years ago +2

    Nice explanation once again, thank you!!!!

  • @BTVinfo
    @BTVinfo 4 years ago +2

    Nice sir, keep making more videos 😊

  • @shitalchikhalikar981
    @shitalchikhalikar981 4 years ago +1

    Nice video sir

  • @vamshi878
    @vamshi878 4 years ago +2

    Hi, thanks for the video. I have a question:
    Suppose I have 10 GB of CSV data. If I read it using spark.read.csv, how many partitions will it create by default? Will it consider every block as one partition and create 80 partitions?

    • @TechWithViresh
      @TechWithViresh 4 years ago

      CSV as a data format in Hadoop cannot be partitioned; when you read it in Spark, the number of partitions should be equal to the number of nodes in the cluster.

    • @vamshi878
      @vamshi878 4 years ago

      @@TechWithViresh Number of nodes means we get very few partitions, right? Won't that degrade performance?
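
For reference, the commenter's estimate can be checked against a simplified sketch of the partition-planning formula Spark's DataFrame file source applies to splittable files such as uncompressed CSV. This sketch ignores multi-file bin-packing and assumes the default values of spark.sql.files.maxPartitionBytes (128 MB) and spark.sql.files.openCostInBytes (4 MB):

```python
import math

MB = 1024 * 1024

def num_partitions(total_bytes, default_parallelism,
                   max_partition_bytes=128 * MB,
                   open_cost_in_bytes=4 * MB):
    """Approximate input-partition count for one splittable file."""
    bytes_per_core = total_bytes / default_parallelism
    # Spark caps each split at maxPartitionBytes, but never goes below
    # openCostInBytes when spreading a small input across cores.
    max_split_bytes = min(max_partition_bytes,
                          max(open_cost_in_bytes, bytes_per_core))
    return math.ceil(total_bytes / max_split_bytes)

# A single 10 GB uncompressed CSV on a cluster with default parallelism 8
print(num_partitions(10 * 1024 * MB, default_parallelism=8))  # → 80
```

With the 128 MB default split size, a single 10 GB uncompressed CSV therefore yields roughly 80 input partitions, in line with the question's estimate; a gzipped CSV, by contrast, is not splittable and would load as a single partition per file.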