BigData Thoughts
  • Videos: 97
  • Views: 442,264
What is AI and data science
What is AI
Evolution of AI
What is ML
What is Data Science
Real world application
Views: 50

Videos

All about spark tuning
Views: 270 · 1 month ago
All about spark tuning
All you need to know about Spark Monitoring
Views: 467 · 2 months ago
All you need to know about Spark Monitoring - Ways to Monitor - WebUI - History Server - REST API - External Instrumentation
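As a quick illustration of the REST API route mentioned above, here is a minimal Python sketch that polls a running application's monitoring endpoints; the host/port localhost:4040 and the use of the requests library are assumptions, while the /api/v1 paths are Spark's standard monitoring endpoints:

    # Poll the monitoring REST API exposed by a running Spark driver.
    # Assumes the application UI is reachable at localhost:4040.
    import requests

    base = "http://localhost:4040/api/v1"
    apps = requests.get(f"{base}/applications").json()
    for app in apps:
        stages = requests.get(f"{base}/applications/{app['id']}/stages").json()
        print(app["id"], app["name"], "stages:", len(stages))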
Google Gemini vs ChatGPT
Views: 95 · 4 months ago
Google Gemini vs ChatGPT
What is generative AI
Views: 229 · 5 months ago
What is AI - What is generative AI - Large language model (LLM) - Use cases - Challenges
Stream Processing Fundamentals
Views: 248 · 5 months ago
Stream Processing Fundamentals - What is stream processing - Stream and batch combination - Benefits - Challenges - Design considerations
Evolution of Data Architectures in last 40 years
Views: 433 · 6 months ago
Evolution of Data Architectures - The Landscape - RDBMS - Data warehouse - Data lake - Why data lakes? - Data lakehouse
Spark low level API Distributed variables
Views: 372 · 9 months ago
Different APIs offered by Spark - What are low-level APIs? - Why are they needed? - Types of low-level API - What are distributed variables? - Distributed variable types - Broadcast variables - Why are broadcast variables better? - Accumulators
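A minimal PySpark sketch of the two distributed-variable types listed above, broadcast variables and accumulators (illustrative only; the app name and sample data are made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("distributed-variables-demo").getOrCreate()
    sc = spark.sparkContext

    # Broadcast variable: a read-only lookup shipped once per executor
    # instead of being serialized with every task.
    lookup = sc.broadcast({"IN": "India", "US": "United States"})

    # Accumulator: a write-only counter the driver can read after an action.
    unknown_codes = sc.accumulator(0)

    def to_country(code):
        if code not in lookup.value:
            unknown_codes.add(1)
            return "unknown"
        return lookup.value[code]

    codes = sc.parallelize(["IN", "US", "XX"])
    print(codes.map(to_country).collect())   # ['India', 'United States', 'unknown']
    print("unknown codes seen:", unknown_codes.value)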
Spark low level API - RDDs
Views: 447 · 9 months ago
Different APIs offered by Spark - What are low-level APIs? - Why are they needed? - Types of low-level API - What is an RDD? - Internals of RDD - RDD API - Types of RDD - Creating RDDs - Transformations on RDDs - Actions on RDDs
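A minimal PySpark sketch of the RDD lifecycle covered above: creating an RDD, chaining lazy transformations, and triggering execution with actions (illustrative only):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
    sc = spark.sparkContext

    # Create an RDD from a local collection.
    numbers = sc.parallelize(range(10))

    # Transformations are lazy; nothing executes yet.
    evens_squared = numbers.filter(lambda x: x % 2 == 0).map(lambda x: x * x)

    # Actions trigger the actual computation.
    print(evens_squared.collect())   # [0, 4, 16, 36, 64]
    print(evens_squared.count())     # 5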
Spark structured API - Dataframe and Datasets
Views: 908 · 10 months ago
Spark structured API - Dataframe and Datasets - Structured and unstructured APIs - Dataframe and Datasets - Row Object - Schema - Column - Column as logical tree - Dataset - when to use Dataset
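A minimal PySpark sketch of the structured-API pieces named above: Row objects, an explicit schema, and column expressions as a lazily evaluated logical plan (the Dataset API itself is Scala/Java-only, so only the DataFrame side is shown; names and data are made up):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col
    from pyspark.sql.types import IntegerType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("structured-api-demo").getOrCreate()

    # Explicit schema instead of relying on inference.
    schema = StructType([
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True),
    ])
    df = spark.createDataFrame([("Ada", 36), ("Linus", 54)], schema=schema)

    # Records come back as Row objects; column expressions only run
    # when an action (show, collect, ...) is called.
    print(df.first())                                        # Row(name='Ada', age=36)
    df.select(col("name"), (col("age") + 1).alias("age_plus_one")).show()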
Spark structured API - Dataframe
Views: 857 · 11 months ago
This video explains: - The high-level structured API DataFrame - How Spark executes user code - All the steps needed to create a DAG
Spark Architecture in Depth Part2
Views: 2.2K · 11 months ago
Spark Architecture in Depth Part 2 - Spark Architecture - Spark APIs - Transformations vs actions with examples - End-to-end example to explain Spark execution
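A minimal PySpark sketch of the transformation-versus-action distinction the video walks through: transformations only build up a DAG, and a job runs when an action is called (illustrative only):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("transformations-vs-actions").getOrCreate()

    df = spark.range(1_000_000)                        # DataFrame with an "id" column
    filtered = df.filter(col("id") % 2 == 0)           # narrow transformation, lazy
    grouped = filtered.groupBy((col("id") % 10).alias("bucket")).count()   # wide transformation (shuffle)

    # Only the action below triggers a job: Spark turns the DAG of
    # transformations into stages split at the shuffle boundary.
    grouped.show()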
Spark Architecture in Depth Part1
Views: 3.8K · 1 year ago
Spark Architecture in Depth - Driver - Executor - Cluster Manager - Data frame - Partition - Transformations - Narrow - Wide
All About Continuous Integration
Views: 426 · 1 year ago
All About Continuous Integration
Top 3 file formats frequently used in bigdata world
Views: 672 · 1 year ago
Top 3 file formats frequently used in bigdata world
Understanding Spark Execution
Views: 2K · 1 year ago
Understanding Spark Execution
Structured Streaming in spark
Views: 1.1K · 1 year ago
Structured Streaming in spark
All about Debugging Spark
Views: 3.4K · 1 year ago
All about Debugging Spark
What is Machine Learning in a nutshell
Views: 271 · 1 year ago
What is Machine Learning in a nutshell
What are Metadata Driven Architectures ?
Views: 1.8K · 1 year ago
What are Metadata Driven Architectures ?
What is Quantum Computing?
Views: 162 · 1 year ago
What is Quantum Computing?
All about Blockchains
Views: 99 · 1 year ago
All about Blockchains
All about Data Vaults
Views: 455 · 1 year ago
All about Data Vaults
All you need to know about chatGPT
Views: 505 · 1 year ago
All you need to know about chatGPT
Top 8 Bigdata Trends
Views: 901 · 1 year ago
Top 8 Bigdata Trends
How to build efficient Data lakes
Views: 570 · 1 year ago
How to build efficient Data lakes
All about stream processing
Views: 1.4K · 1 year ago
All about stream processing
All about partitions in spark
Views: 5K · 1 year ago
All about partitions in spark
How to crack Bigdata Engineer Interviews
Views: 1.8K · 1 year ago
How to crack Bigdata Engineer Interviews
What is Kubernetes
Views: 633 · 1 year ago
What is Kubernetes

Comments

  • @isaackodera9441
    @isaackodera9441 1 day ago

    Wonderful explanation

  • @Themotivationstationpower
    @Themotivationstationpower 4 days ago

    Really appreciate your hard work. Thank you for the great explanation.

  • @shubhamdaundkar8327
    @shubhamdaundkar8327 8 days ago

    Hello Shreya, can you make a hands-on video on data ingestion into AWS S3?

  • @yashawanthraj8872
    @yashawanthraj8872 22 days ago

    Can a node/thread have more partitions than the number of executors? If yes, where is the partition-count information stored?

  • @gvnreddy2244
    @gvnreddy2244 27 days ago

    Very good session, ma'am. If it had been shown practically it would be very useful. Thank you for your efforts.

  • @nishchaysharma5904
    @nishchaysharma5904 28 days ago

    Thank you for this video.

  • @vaibhavjoshi6853
    @vaibhavjoshi6853 1 month ago

    Getting confidence in Spark only because of you. Thanks so much!

  • @ambar752
    @ambar752 1 month ago

    To summarize, what data marts are to a data warehouse, a data mesh is to a data lake.

  • @rovashri566
    @rovashri566 1 month ago

    How did you make such a good visual explanation? Which tool did you use to draw the sketches? Please guide 🙏

  • @muralichiyan
    @muralichiyan 1 month ago

    Are data mesh and Snowflake the same? Are data mesh and Microsoft Fabric the same?

  • @Learn2Share786
    @Learn2Share786 1 month ago

    Thanks, appreciate it. Is there a plan to post practical videos around Spark performance tuning?

  • @user-zb9hm5yh1m
    @user-zb9hm5yh1m 1 month ago

    Thank you for sharing your thoughts.

  • @BishalKarki-pe8hs
    @BishalKarki-pe8hs 1 month ago

    This is not exactly the answer.

  • @ranyasri1092
    @ranyasri1092 2 months ago

    Please do videos with sample datasets so that they help with hands-on practice.

  • @mindwithcuriosity5347
    @mindwithcuriosity5347 2 months ago

    It seems it is PaaS, as mentioned on the Microsoft website.

  • @sanketdhamane5941
    @sanketdhamane5941 2 months ago

    Really, thanks for the good and in-depth explanation.

  • @sindhuchowdary572
    @sindhuchowdary572 2 months ago

    Let's say there is no change in records for the next day. Then does the data get overwritten again, with the same records?

    • @BigDataThoughts
      @BigDataThoughts 2 months ago

      No, we only take the new differential data when we do CDC.

  • @sunnyd9878
    @sunnyd9878 2 months ago

    This is excellent and valuable knowledge sharing. One can easily tell these trainings come from deep personal hands-on experience and not mere theory. Great work.

  • @Learn2Share786
    @Learn2Share786 2 months ago

    Thank you, please also post some practical videos around the same topic.

  • @user-zb9hm5yh1m
    @user-zb9hm5yh1m 2 months ago

    Thank you for sharing thoughts

  • @KiranKumar-cg3yg
    @KiranKumar-cg3yg 2 months ago

    First one to monitor the notification from you

  • @ahmedaly6999
    @ahmedaly6999 3 months ago

    How do I join a small table with a big table but fetch all the data of the small table? The small table is 100k records and the large table is 1 million records.
    df = smalldf.join(largedf, smalldf.id==largedf.id, how='left_outerjoin')
    It runs out of memory, and I can't broadcast the small df, I don't know why. What is the best approach here? Please help.
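    For context, a minimal PySpark sketch of the broadcast-hint pattern the comment refers to; the DataFrames here are synthetic stand-ins, and this shows the hint mechanics rather than a guaranteed fix for the memory issue described:

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import broadcast

        spark = SparkSession.builder.getOrCreate()
        small = spark.range(100_000).withColumnRenamed("id", "key")      # stand-in for smalldf
        large = spark.range(10_000_000).withColumnRenamed("id", "key")   # stand-in for largedf

        # Explicit broadcast hint: ship the small side to every executor so the
        # large side is not shuffled. Note that for outer joins Spark can only
        # broadcast the non-preserved side (e.g. the right side of a LEFT join),
        # and the valid join-type name is 'left' / 'left_outer'.
        joined = large.join(broadcast(small), on="key", how="left")
        print(joined.count())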

  • @harigovindk
    @harigovindk 3 months ago

    18/april/2024

  • @karthikeyanr1171
    @karthikeyanr1171 3 months ago

    your videos on spark are hidden gems

  • @mdatasoft1525
    @mdatasoft1525 3 months ago

  • @rupaghosh6251
    @rupaghosh6251 3 months ago

    Nice explanation

  • @RameshKumar-ng3nf
    @RameshKumar-ng3nf 3 months ago

    At the start of the video I was so happy seeing all the diagrams. Later I got fully confused, it felt complicated, and I didn't understand well 😢

  • @nahomg.4191
    @nahomg.4191 3 months ago

    I wish I could give 1000 likes. You’re an excellent teacher!

  • @user-eg9ed5nr8z
    @user-eg9ed5nr8z 3 months ago

    Nice explanation

  • @amitgupta3
    @amitgupta3 3 months ago

    Found it helpful. You could go slower though; I had to stop and rewind a few times.

  • @husnabanu4370
    @husnabanu4370 4 months ago

    What a wonderful explanation, to the point... thank you

  • @sumonmal009
    @sumonmal009 4 months ago

    Good playlist for Spark: ruclips.net/p/PL1RS9FR9qIPEAtSWX3rKLVcRWoaBDqVBV

  • @mohnishverma87
    @mohnishverma87 4 months ago

    Just wow, a very simple explanation of a complex cluster overview. Thanks.

  • @masoom002
    @masoom002 4 months ago

    Best explanation I have come across on RUclips. Watching all the parts... Thank you for explaining it so smoothly.

  • @user-zb9hm5yh1m
    @user-zb9hm5yh1m 4 months ago

    Thank you for sharing thoughts!

  • @utsavchanda4190
    @utsavchanda4190 4 months ago

    That was very well explained. Thank you for putting this together. One question though: do you really think data modelling should be done on the Gold layer? I don't think so, because Gold datasets are just business-level aggregates suited to particular business consumption needs, whereas the Silver layer is the warehouse in a Lakehouse. That is where modelling should be done, if needed.

  • @shrabanti84
    @shrabanti84 4 months ago

    Thank you so much. All the videos are very clear and effective.

  • @user-zb9hm5yh1m
    @user-zb9hm5yh1m 5 months ago

    Thank you for sharing your thoughts.

  • @deepalirathod4929
    @deepalirathod4929 5 months ago

    It finally became clear to me after reading here and there. Thank you.

  • @himanshupandey8576
    @himanshupandey8576 5 months ago

    One of the most helpful sessions!

  • @Learn2Share786
    @Learn2Share786 5 months ago

    Nicely explained, thank you. Looking forward to learning more around this topic.

  • @srinivas123j
    @srinivas123j 5 months ago

    well explained!!!

  • @srinivas123j
    @srinivas123j 5 months ago

    Well explained!!!

  • @srinivas123j
    @srinivas123j 5 months ago

    well explained!!

  • @srinivas123j
    @srinivas123j 5 months ago

    well explained

  • @user-zm2me1gc5z
    @user-zm2me1gc5z 5 months ago

    Nicely explained, thanks. It's helping a lot.

  • @hlearningkids
    @hlearningkids 5 months ago

    Kindly do a similarly simple one for Dataproc and also BigQuery.

  • @user-fz4in8bf1y
    @user-fz4in8bf1y 5 months ago

    Thank you for the detailed explanation. However, the problem I faced with reading dates prior to 1900 is not resolved even after setting all the mentioned properties. Does anyone have a working example that solves the issue of reading dates prior to 1900? Below is the code I added, but it did not work:
        conf = sparkContext.getConf()
        conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED")
        conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")
        conf.set("spark.sql.datetime.java8API.enabled", "true")
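    For reference, a minimal sketch of how these SQL properties are typically set on the SparkSession itself (runtime conf or builder) rather than on a SparkConf fetched from an already-running SparkContext; this is an illustration under those assumptions, not a verified fix for the pre-1900 date issue described above:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("legacy-parquet-dates").getOrCreate()

        # SQL runtime properties belong on spark.conf (or .config() on the builder);
        # changes made to a SparkConf after the context exists are not picked up.
        spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED")
        spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")
        spark.conf.set("spark.sql.datetime.java8API.enabled", "true")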

  • @hlearningkids
    @hlearningkids 5 months ago

    Very good information 🎉