AWS Hands-On: ETL with Glue and Athena

  • Published: Nov 18, 2024

Comments • 41

  • @ChimDashi • 1 month ago +1

    Great content. Clear, concise, and informative.

  • @rockyrocks2049 • 1 year ago +2

    Greatly explained video. I tried to follow other videos and ended up with errors, because most of them don't explain what IAM role and permissions need to be created before jumping into the crawler and the Glue job, but thanks a lot for explaining everything from scratch. If you could also explain a little about when we need to take care of VPCs, subnets, internet access, and routing before creating a Glue job, that would be really great; in some videos I've seen people setting these up and I don't know whether they're actually required or not. Also, please explain custom policy creation, custom PySpark code to develop an SCD Type 2 job, and a static lookup from a lookup table to source table data mapping, because in Azure SCD Type 2 job development is quite easy with readily available transformations like NotExist & Create Key. Thanks a lot @Cumulus.
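
    Since Glue has no ready-made NotExist / Create Key style transforms, SCD Type 2 is typically hand-written. A rough sketch in Glue PySpark (an illustration, not code from the video; every database, table, and column name below, including customer_id, address, and the effective_date/end_date/is_current tracking columns, is hypothetical):

        from awsglue.context import GlueContext
        from pyspark.context import SparkContext
        from pyspark.sql import functions as F

        glue_context = GlueContext(SparkContext.getOrCreate())

        # Existing dimension (with history) and the fresh extract
        current = glue_context.create_dynamic_frame.from_catalog(
            database="etl_demo", table_name="customers_dim").toDF()
        stage = glue_context.create_dynamic_frame.from_catalog(
            database="etl_demo", table_name="customers_stage").toDF()

        history = current.filter(~F.col("is_current"))   # closed-out versions
        open_rows = current.filter(F.col("is_current"))  # live versions

        # Keys whose tracked attribute changed in the new extract
        changed_keys = (open_rows.alias("c")
            .join(stage.alias("s"), "customer_id")
            .where(F.col("c.address") != F.col("s.address"))
            .select("customer_id"))
        # Keys never seen before
        new_keys = stage.join(open_rows, "customer_id", "left_anti").select("customer_id")

        # Close the superseded versions...
        closed = (open_rows.join(changed_keys, "customer_id", "left_semi")
            .withColumn("end_date", F.current_date())
            .withColumn("is_current", F.lit(False)))
        still_open = open_rows.join(changed_keys, "customer_id", "left_anti")

        # ...and open new versions for changed and brand-new keys
        inserts = (stage.join(changed_keys.union(new_keys), "customer_id", "left_semi")
            .withColumn("effective_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True)))

        result = (history.unionByName(still_open)
                  .unionByName(closed).unionByName(inserts))
        result.write.mode("overwrite").parquet("s3://my-demo-bucket/customers_scd2/")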

  • @dfelton316 • 11 months ago +1

    What if there are multiple data sources? Are there separate databases for each source? Can multiple data sources be placed into the same database?
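
    On the last question: yes, one Glue database can hold tables from several sources, and a single crawler can even take multiple targets. A minimal boto3 sketch, with all names hypothetical:

        import boto3

        glue = boto3.client("glue", region_name="us-east-1")

        glue.create_database(DatabaseInput={"Name": "etl_demo"})

        glue.create_crawler(
            Name="multi-source-crawler",
            Role="arn:aws:iam::123456789012:role/GlueDemoRole",  # hypothetical role
            DatabaseName="etl_demo",  # both sources land in the same database
            Targets={"S3Targets": [
                {"Path": "s3://demo-source-a/orders/"},
                {"Path": "s3://demo-source-b/customers/"},
            ]},
        )
        glue.start_crawler(Name="multi-source-crawler")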

  • @sgyakkala • 3 months ago

    @cumuluscycles Thanks for your video. I followed it and generated the output, but I see multiple partitioned output files instead of a single one. I want to generate a single output file only, and I'm totally clueless about where the mistake is. Is there a config setting I'm missing? Please help me.
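
    A sketch of one common fix (an assumption, not something confirmed in the video): Spark writes one file per partition, so coalescing to a single partition before the write yields one output file, at the cost of pushing all data through one worker. Names are hypothetical.

        from awsglue.context import GlueContext
        from awsglue.dynamicframe import DynamicFrame
        from pyspark.context import SparkContext

        glue_context = GlueContext(SparkContext.getOrCreate())

        dyf = glue_context.create_dynamic_frame.from_catalog(
            database="etl_demo", table_name="source_table")

        # Collapse to one partition so the sink emits one file
        single = DynamicFrame.fromDF(dyf.toDF().coalesce(1), glue_context, "single")

        glue_context.write_dynamic_frame.from_options(
            frame=single,
            connection_type="s3",
            connection_options={"path": "s3://my-demo-bucket/output/"},
            format="csv",
        )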

  • @shaktiman-x7y • 5 months ago

    Thank you sir, a pretty good demo with a clear and effective explanation.

  • @heisenberg0121 • 5 months ago +1

    Thank you!! It helped clarify AWS Glue for me.

  • @RicardoPorteladaSilva • 8 months ago +2

    totally excellent! thank you!

  • @nicknick65 • 6 months ago +1

    brilliant: very well explained and easy to understand, thank you

  • @venkateshnekarakanti3268 • 1 day ago

    Thank you sir

  • @ARATHI2000 • 8 months ago

    @Cumulus, great tutorial. Thank you so much. In my case, I noticed that the generated schema is in array form, not individual column names; the columns are wrapped into an array. Any thoughts? Thanks again!

    • @cumuluscycles • 8 months ago

      I'm glad you found the video useful. I just ran through the process again and my schema was generated with columns, so I'm really not sure why yours was in array form. Maybe someone else will comment if they experience the same.
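
      If anyone hits the same array-shaped schema, a custom CSV classifier is a plausible fix (an assumption, not tested against this exact case): declaring the delimiter and header often forces the crawler to emit per-column schema. Names are hypothetical.

          import boto3

          glue = boto3.client("glue", region_name="us-east-1")

          glue.create_classifier(CsvClassifier={
              "Name": "demo-csv-classifier",
              "Delimiter": ",",
              "QuoteSymbol": '"',
              "ContainsHeader": "PRESENT",  # first row holds column names
          })

          glue.update_crawler(
              Name="demo-crawler",                  # hypothetical crawler name
              Classifiers=["demo-csv-classifier"],  # checked before the built-ins
          )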

  • @rubulroy55 • 1 year ago

    If we want to use S3 in Glue, shouldn't the IAM role have been for the S3 service, since the IAM role is used in Glue? I'm confused, am I missing something 😕
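
    A likely source of the confusion: the role is created for Glue because Glue is the service that assumes it, while S3 access is granted separately inside the role's permissions policy. A minimal sketch of the trust side, with a hypothetical role name:

        import json
        import boto3

        iam = boto3.client("iam")

        # Trust policy: who may assume the role (Glue, not S3)
        trust = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "glue.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }],
        }
        iam.create_role(RoleName="GlueDemoRole",
                        AssumeRolePolicyDocument=json.dumps(trust))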

  • @fifthnail • 1 year ago +1

    10:46 I had a similar issue. I followed what you were doing with the compression type: I selected GZIP and everything was zipped as GZIP; however, when I tried switching the compression type back to "None", it defaulted back to GZIP. My guess is that you were NOT using GZIP originally, THEN for your tutorial you started using GZIP, and then it defaulted back to "None". To resolve it, I needed to delete the original data target S3 bucket and set up the target from scratch. My guess is the script code was not updating for some reason when changing the setting (see the sketch after this thread for where the compression option sits in the script).

    • @cumuluscycles • 1 year ago +1

      Thanks for this, I'll have to go and test it out!
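
      A sketch of where the compression choice lands in the generated script (an assumption about the generated code; names are hypothetical). If the visual editor stops syncing the script, an old value can linger here:

          from awsglue.context import GlueContext
          from pyspark.context import SparkContext

          glue_context = GlueContext(SparkContext.getOrCreate())
          dyf = glue_context.create_dynamic_frame.from_catalog(
              database="etl_demo", table_name="source_table")

          glue_context.write_dynamic_frame.from_options(
              frame=dyf,
              connection_type="s3",
              connection_options={
                  "path": "s3://my-demo-bucket/output/",
                  "compression": "gzip",  # edit or remove this entry to change compression
              },
              format="csv",
          )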

  • @rockyrocks2049 • 1 year ago

    Also @Cumulus, while creating a job for the prod environment, what prerequisites do we need to take care of in terms of the job, policy, and crawler? Please explain that as well. I mean, for the policy we have now added PowerUser, but in prod I think we need to narrow down our access. Please explain that if possible... Thanks once again.
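
    A least-privilege sketch for prod (an assumption, not guidance from the video): swap the broad PowerUser attachment for an inline policy scoped to the exact buckets and catalog objects the job touches. All names and ARNs are hypothetical.

        import json
        import boto3

        iam = boto3.client("iam")

        least_privilege = {
            "Version": "2012-10-17",
            "Statement": [
                {   # read the source, write the target, nothing else
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::prod-etl-source",
                                 "arn:aws:s3:::prod-etl-source/*",
                                 "arn:aws:s3:::prod-etl-target",
                                 "arn:aws:s3:::prod-etl-target/*"],
                },
                {   # catalog access limited to the one database the job uses
                    "Effect": "Allow",
                    "Action": ["glue:GetDatabase", "glue:GetTable",
                               "glue:GetTables", "glue:GetPartitions"],
                    "Resource": ["arn:aws:glue:us-east-1:123456789012:catalog",
                                 "arn:aws:glue:us-east-1:123456789012:database/etl_demo",
                                 "arn:aws:glue:us-east-1:123456789012:table/etl_demo/*"],
                },
            ],
        }
        iam.put_role_policy(RoleName="GlueProdJobRole",
                            PolicyName="GlueJobLeastPrivilege",
                            PolicyDocument=json.dumps(least_privilege))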

  • @khaledabouelella • 1 year ago +1

    Excellent explanation, Thank you

  • @mazharulboni5419 • 9 months ago +1

    well explained. thank you

  • @jgojiz • 2 months ago

    brilliant!

  • @krishj8011 • 5 months ago +1

    nice tutorial

  • @mejiger • 1 year ago +1

    clean explanation; thanks

  • @aabbassp • 2 years ago +1

    Thanks for the video.

  • @mackshonayi943 • 2 years ago +1

    Great tutorial thank you so much

    • @cumuluscycles • 2 years ago

      Thanks for the comment. I'm glad it was helpful!

  • @nagrotte • 8 months ago

    Great content🙏

  • @ulhaqz • 1 year ago

    Hi! Great video.
    Can you please help me with the following:
    I am stuck at 7:28 where you create a job. For output I am selecting an empty S3 bucket, similar to you, but I am prompted to pick an object. I have tried uploading CSV and TXT files, but they are not recognized as objects, and I get an error and cannot proceed any further. Thanks a lot!

    • @cumuluscycles • 1 year ago +1

      Hmmm... That's odd: since you're specifying an output bucket, you shouldn't need to specify an object in the bucket. The only thing I can think of is that, when specifying the path to some buckets, I've had to add a slash at the end of the bucket name. I know I didn't have to do that in the video, but it may be worth a try. If you figure it out, can you post here in case others run into this?

    • @ulhaqz • 1 year ago +1

      @@cumuluscycles Thanks for the reply. What worked for me was to create a folder in the bucket and select it... And there is a new GUI in place too, though I switched to the old one to match the instructions in the video.

  • @sags3112 • 1 year ago

    awesome video... great one

  • @AvaneeshThakurRana • 1 year ago

    Thank you for this video. Will I also be able to use Glue to run an ETL job on data in AWS RDS, then save the data in S3 and use Athena to query it?

    • @cumuluscycles • 1 year ago

      Hi. You should be able to get data from RDS using a Glue Connection. Give this a read: docs.aws.amazon.com/glue/latest/dg/connection-properties.html (see the sketch after this thread)

    • @MrDottyrock • 1 year ago

      @@cumuluscycles Can you connect to an on-prem database to run ETL outside AWS?

    • @cumuluscycles • 1 year ago

      @@MrDottyrock Give the following a read and see if it helps: aws.amazon.com/blogs/big-data/how-to-access-and-analyze-on-premises-data-stores-using-aws-glue/
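
      A minimal RDS-to-S3 sketch building on the connection docs linked above (an assumption; all names are hypothetical): read an RDS table through a catalog table that a crawler registered via a JDBC connection, write it to S3 as Parquet, and the S3 copy is then queryable from Athena.

          from awsglue.context import GlueContext
          from pyspark.context import SparkContext

          glue_context = GlueContext(SparkContext.getOrCreate())

          # Catalog table created by a crawler that used a JDBC (RDS) connection
          rds_dyf = glue_context.create_dynamic_frame.from_catalog(
              database="etl_demo", table_name="rds_orders")

          glue_context.write_dynamic_frame.from_options(
              frame=rds_dyf,
              connection_type="s3",
              connection_options={"path": "s3://my-demo-bucket/rds-extract/"},
              format="parquet",  # columnar output keeps Athena scans cheap
          )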

  • @suryatejasingasani256 • 1 year ago

    Hi bro, I have a doubt: I have a DataStage job exported as an XML file, and I want to convert that XML file into a Glue job. How can I do that?

    • @cumuluscycles • 1 year ago

      Hi. I haven’t done this before, but this info may help you: docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-xml-home.html
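
      A sketch following the XML-format docs linked above (an assumption; names are hypothetical). Note this only parses the XML as data; translating DataStage job logic into Glue transforms remains a manual step.

          from awsglue.context import GlueContext
          from pyspark.context import SparkContext

          glue_context = GlueContext(SparkContext.getOrCreate())

          xml_dyf = glue_context.create_dynamic_frame.from_options(
              connection_type="s3",
              connection_options={"paths": ["s3://my-demo-bucket/datastage-export/"]},
              format="xml",
              format_options={"rowTag": "Record"},  # hypothetical repeating element
          )
          xml_dyf.printSchema()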

  • @AliTwaij • 1 year ago

    excellent, thank you