AWS Tutorials - Data Quality Check using AWS Glue DataBrew

  • Published: 20 Nov 2021
  • The code link - github.com/aws-dojo/analytics...
    Maintaining data quality is critical for a data platform. Bad data can break ETL jobs, crash dashboards and reports, and degrade the accuracy of machine learning models by introducing bias and error. AWS Glue DataBrew Data Profile jobs can be used for data quality checks: you define data quality rules and validate the data against them. This tutorial shows how to use Data Quality Rules in AWS Glue DataBrew to validate data quality.
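    For readers who prefer the API to the console, below is a minimal boto3 sketch of the same idea: one data quality rule in a ruleset, attached to a profile job through a validation configuration. The dataset, ruleset ARN, role, and bucket names are placeholders, not values from the video.

```python
import boto3

databrew = boto3.client("databrew")

# Placeholder names/ARNs; replace with your own dataset, ruleset, role, and bucket.
DATASET_NAME = "orders"
DATASET_ARN = "arn:aws:databrew:us-east-1:111122223333:dataset/orders"
RULESET_ARN = "arn:aws:databrew:us-east-1:111122223333:ruleset/orders-dq-rules"
ROLE_ARN = "arn:aws:iam::111122223333:role/DataBrewServiceRole"

# Data quality rule: every value in `order_amount` must be greater than 0.
databrew.create_ruleset(
    Name="orders-dq-rules",
    TargetArn=DATASET_ARN,
    Rules=[{
        "Name": "order_amount_positive",
        "CheckExpression": ":col > :val",
        "SubstitutionMap": {":col": "`order_amount`", ":val": "0"},
    }],
)

# Profile job that validates the dataset against the ruleset on each run.
databrew.create_profile_job(
    Name="orders-dq-profile",
    DatasetName=DATASET_NAME,
    RoleArn=ROLE_ARN,
    OutputLocation={"Bucket": "my-databrew-output"},
    ValidationConfigurations=[{"RulesetArn": RULESET_ARN, "ValidationMode": "CHECK_ALL"}],
)

# Start a run; validation results are written to the job's S3 output location.
databrew.start_job_run(Name="orders-dq-profile")
```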

Comments • 33

  • @smmike · 2 years ago +1

    Thanks, a very comprehensive overview of quality checking in DataBrew.

  • @MahmoudAtef · 2 years ago +1

    That was extremely helpful, thank you!

  • @Rawnauk

    Very nicely explained.

  • @ds12v123 · 2 years ago +1

    Nice explanation and details

  • @jeety5 · 2 years ago +3

    Very impressive. I have been looking at data validation frameworks and think this would be a great fit. The two open-source libraries I checked are:

  • @shokyeeyong6469 · 2 years ago +1

    Thank you for the tutorial; it gives a good overall understanding of the DQ part. Is it possible to view the detailed records that succeeded or failed?
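    One hedged way to answer this for yourself: a profile job that has a validation configuration writes a data-quality validation report to the job's S3 output location, and that report is rule-level (pass/fail and statistics per rule) rather than row-level. The bucket, prefix, and report file naming below are assumptions, so inspect your own job output first.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholders: the bucket/prefix configured as OutputLocation on the profile job.
BUCKET = "my-databrew-output"
PREFIX = "orders-dq-profile/"

# Scan the job output for validation-report JSON files and print each rule's result.
for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", []):
    key = obj["Key"]
    if "validation" in key.lower() and key.endswith(".json"):
        report = json.loads(s3.get_object(Bucket=BUCKET, Key=key)["Body"].read())
        print(key)
        print(json.dumps(report, indent=2))
```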

  • @scotter · 1 year ago +1

    I'm looking for the most code-light way (a short Python Lambda function is fine and assumed) to set up a process so that when a CSV file is dropped into my S3 bucket's incoming folder, it is automatically validated using a DQ Ruleset I would build manually in the console beforehand. For any given Lambda call (triggered, I assume, by a file dropped into our S3 bucket), I'd like the Lambda to start the DQ Ruleset run but not wait for it to finish (a Step Function?). I then want to output a log file of which rows/columns failed to my S3 bucket's reports folder (using some kind of trigger that fires when a DQ Ruleset finishes execution?). Again, the process must be fully automated, because hundreds of files with hundreds of thousands of rows will be dropped into our incoming folder every day by a different automated process. The end goal is merely to let the client know if their file does not fit the rules; there is no need to save or clean the data. I realize I may be asking a lot, so feel free to only share the best high-level path of which AWS services to use in which order. Thank you!
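    A minimal sketch of the fire-and-forget half of that flow, assuming a profile job named incoming-csv-dq-profile already exists with the DQ ruleset attached (the job name is an assumption, and a DataBrew job runs against its configured dataset, so the dataset would typically need to point at a parameterized S3 path that picks up new files). For the reporting half, an EventBridge rule on the DataBrew Job State Change event can invoke a second Lambda that copies the validation report into the reports folder.

```python
import urllib.parse
import boto3

databrew = boto3.client("databrew")

# Assumed name of a profile job whose DQ ruleset was built earlier in the console.
PROFILE_JOB_NAME = "incoming-csv-dq-profile"

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events on the incoming/ prefix.

    start_job_run returns as soon as the run is queued, so the Lambda
    does not wait for validation to finish."""
    for record in event.get("Records", []):
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if not key.lower().endswith(".csv"):
            continue
        run = databrew.start_job_run(Name=PROFILE_JOB_NAME)
        print(f"Started DQ run {run['RunId']} for {key}")
    return {"statusCode": 200}
```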

  • @spandans2049 · 2 years ago +2

    This was very nicely explained! Thank you so much :)

  • @ladakshay · 2 years ago +1

    This is perfect. We have thousands of datasets where we need to perform DQ checks and send reports. Is it possible to automate this, or create the rules programmatically instead of using the console? Something like creating the rules in a YAML/CSV file?
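    A hedged sketch of that idea: keep the rules in a YAML file and create the ruleset programmatically with boto3's create_ruleset. The YAML layout below is invented for illustration; DataBrew itself does not define a YAML rule format.

```python
import boto3
import yaml  # PyYAML

# rules.yaml (hypothetical layout):
# ruleset_name: orders-dq-rules
# dataset_arn: arn:aws:databrew:us-east-1:111122223333:dataset/orders
# rules:
#   - name: order_amount_positive
#     expression: ":col > :val"
#     substitutions:
#       ":col": "`order_amount`"
#       ":val": "0"

with open("rules.yaml") as f:
    spec = yaml.safe_load(f)

boto3.client("databrew").create_ruleset(
    Name=spec["ruleset_name"],
    TargetArn=spec["dataset_arn"],
    Rules=[
        {
            "Name": rule["name"],
            "CheckExpression": rule["expression"],
            "SubstitutionMap": rule.get("substitutions", {}),
        }
        for rule in spec["rules"]
    ],
)
```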

  • @vishalchavhan6731 · 2 years ago +1

    Great! Do you have any plans to make a video on AWS Glue and Apache Hudi integration?

  • @sergiozavota7099 · 2 years ago +2

    Thanks for the clear explanation!

  • @user-pt5wy3mf1y · 1 year ago

    Where have you placed this code, and how is it connected with this DataBrew profile job?

  • @veerachegu · 2 years ago +1

    It's a nice explanation. Do you offer any training? I am looking for training, please help me...

  • @BounceBackTrader · 1 year ago

    Please make a video on PyDeequ with Glue, without using EMR.

  • @veerachegu · 2 years ago

    Can you please give training on AWS Glue? We are 5 members looking for training.