Great video! I have two questions: 1. Why is the table in Redshift not in the same order as in the CSV file? 2. Why does the ETL job configuration choose the Data Catalog table as the S3 source type instead of the S3 location? Does that mean we could do the incremental data load from S3 to Redshift by just choosing the S3 location and not using the crawler? Thanks in advance.
Thank you for watching my videos.
1. That's purely down to how the ETL job extracts the data from the S3 bucket.
2. We can use the S3 bucket directly as the source when you have a single CSV file. But when the bucket has multiple CSV files, it's better to use the Data Catalog so that you can map the schema from source to destination.
@@cloudquicklabs Regarding point 2, even if I have multiple CSV files with the same schema, I think there should be no reason to use the crawler, since we have a fixed schema in Redshift.
Can I use one crawler for one bucket that contains data with multiple schemas?
Can you please explain why I need to run the crawler again if there are no changes to file location or fields / structure of the source (S3 file).
Thank you for watching my videos.
Indeed, you don't need to re-run it if all files and folders have already been crawled.
For the incremental load to work, do we need to enable the job bookmark in Glue? Or is it not required?
Thank you for watching my videos.
It's not required; what's important is choosing the right option while declaring the destination in the ETL job. The configuration shown should be fair enough here.
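For reference, the Redshift target with the merge option generates roughly this kind of write. This is a simplified sketch; the table, column and connection names below are hypothetical, and it assumes glueContext plus an upstream DynamicFrame called mapped (e.g. the output of ApplyMapping) already exist:

# New rows land in a temp table first, then a MERGE upserts them into the
# real table, so re-running the job does not insert duplicates.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="redshift",
    connection_options={
        "redshiftTmpDir": "s3://my-temp-bucket/redshift-tmp/",
        "useConnectionProperties": "true",
        "connectionName": "my-redshift-connection",
        "dbtable": "public.testreport_stage",
        "preactions": (
            "DROP TABLE IF EXISTS public.testreport_stage; "
            "CREATE TABLE public.testreport_stage (id INT, status VARCHAR(50));"
        ),
        "postactions": (
            "MERGE INTO public.testreport USING public.testreport_stage "
            "ON public.testreport.id = public.testreport_stage.id "
            "WHEN MATCHED THEN UPDATE SET status = public.testreport_stage.status "
            "WHEN NOT MATCHED THEN INSERT VALUES (public.testreport_stage.id, public.testreport_stage.status); "
            "DROP TABLE public.testreport_stage;"
        ),
    },
    transformation_ctx="write_target",
)

With that merge in place, job bookmarks mainly save you from re-reading files that were already processed.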
Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?
Thank you for watching my videos.
The crawler would still work (it scans all data underneath the defined folder).
Great content !
I have a question on this. Can we run the merge operation on an external database as well, like Oracle, Snowflake, etc.?
Thank you for watching my videos.
Yes, we can do this.
I shall create a new video on this topic soon.
@@cloudquicklabs Hi, were you able to create this video already? Thanks
Insightful content. I am trying to load from S3 to PostgreSQL, but in the visual ETL the target PostgreSQL node is not showing the table name to store into. The Connections part is connected to RDS and attached the same. Please let me know what the issue could be. Thanks.
Thank you for watching my videos.
This scenario is being labbed at my channel. I am working on it, and a video on this topic will be uploaded soon.
Thank you for the great explanation. I have 1 question:
Q. What changes do I have to make in the above solution to maintain the history of records in Redshift? I mean, I want to capture every change that happens for an Id, like SCD2. In that case, what should be my approach?
Thank you for watching my videos.
Redshift doesn’t natively support full CDC, but you can implement it using DMS, third-party ETL tools, or by building custom solutions with Kinesis, Lambda, or Redshift Spectrum. The choice depends on your specific requirements for latency, complexity, and integration needs.
I shall explore it and try to make a video on this soon.
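In the meantime, a rough SCD2-style sketch could look like the below, run against Redshift after the Glue job has loaded the latest file into a staging table. All table, column and workgroup names here are hypothetical, and it assumes both tables share an id key plus a row_hash column for change detection:

import boto3

rsd = boto3.client("redshift-data")

# Instead of updating rows in place, close the current version of any changed
# record and insert the new version with fresh effective dates.
rsd.batch_execute_statement(
    WorkgroupName="my-workgroup",
    Database="dev",
    Sqls=[
        # 1. Expire the currently active version of records that changed.
        """
        UPDATE public.testreport_history h
        SET is_current = FALSE, end_date = GETDATE()
        FROM public.testreport_staging s
        WHERE h.id = s.id AND h.is_current = TRUE AND h.row_hash <> s.row_hash;
        """,
        # 2. Insert new and changed records as the new current version.
        """
        INSERT INTO public.testreport_history (id, payload, row_hash, start_date, end_date, is_current)
        SELECT s.id, s.payload, s.row_hash, GETDATE(), NULL, TRUE
        FROM public.testreport_staging s
        LEFT JOIN public.testreport_history h
          ON h.id = s.id AND h.is_current = TRUE
        WHERE h.id IS NULL OR h.row_hash <> s.row_hash;
        """,
    ],
)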
Why did you run the crawler after the Glue job finished and before running it for the second time? Is that required in order for the MERGE to succeed?
Thank you for watching my videos.
I wanted to show that when your source data gets incremented, the ETL pipeline copies only the incremental data, not duplicates.
@@cloudquicklabs Yes, I understand that, but before running the job for the second time, to illustrate that merge works, you run the crawler. Is that needed/recommended?
It's to identify any change in the schema, so we run the crawler every time before running the job to keep the latest changes from the source.
Indeed, it's needed to load the data from the file. Note it's the same file in the same path.
But it is a data change, not a schema change... We can verify by running an Athena query on the catalog table. Anyway, nice use case explanation.
In case of SCD, how can we define the SCD logic for the merge and load? Which option provides this?
Thank you for watching my videos.
In terms of SCD, I believe the process would remain the same as long as the schema of the table remains the same.
I want to know why the row data comes out when you choose the Data Catalog table as the S3 source in your Glue job, because I thought the crawler only copies metadata from the S3 CSV to the Data Catalog, not the data, right?
When I try it like you did, in my case it says there is no data to display, because only the schema information is updated in the Data Catalog table. Please let me know.
Thank you for watching my videos.
It's a default verification from Glue which displays some sample data when you are mapping tables.
In your case you must be missing a step.
Please watch the video again and do the steps as mentioned.
This video helped a lot. Is there a way you can add trust relationships during IAM role creation in the repo? Thank you
Thank you for watching my videos.
Did you see a dedicated video here ruclips.net/video/sw-a8nexTY8/видео.html
I have two questions:
1. Why have you run the crawler twice?
2. If files arrive continuously (with different names), how can we use the Glue job for incremental load?
Thank you for watching my videos.
I ran it twice to demo the incremental load on the second run. When file names keep changing, you depend on the folder path of the files; also note that at least the columns should be the same, and in the job you need to map them to the destination target table.
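If it helps, a minimal sketch of a bookmarked read is below (database, table and job names are hypothetical). With the job's "Job bookmark" option set to Enable, each run only picks up files that were not processed before, regardless of their names:

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)   # bookmark state is tracked per job name

# Reads everything under the crawled folder; with bookmarks enabled only
# new, unprocessed files come back.
src = glueContext.create_dynamic_frame.from_catalog(
    database="my_catalog_db",
    table_name="testreport_folder",
    transformation_ctx="src",        # required for the bookmark to track this read
)

# ... mapping and the Redshift write go here ...

job.commit()                         # commits the bookmark so the next run skips these files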
Hello, please... Do you hold classes anywhere, and do you provide project support?
Thank you for watching my videos.
I don't provide any classes.
But I do provide project support and project work.
@@cloudquicklabs How can I go about getting project help?
Do we need to run the crawler every time new data comes into S3?
Thank you for watching my videos.
Do you mean adding a Lambda trigger when objects are created in the S3 bucket, or just scheduling the Lambda?
@@cloudquicklabs In this video, at minute 33, the crawler was run again to fetch the incremental data. Do we really need to run the crawler if there is no schema change but only new data?
Indeed, I ran it a second time to fetch the data from the source, but only the incremental data gets updated on the destination side.
Nice video. I have a question: I am sending my data from a Raspberry Pi to S3, and it updates whenever I run it on the Raspberry Pi. Now, after my job succeeded and that S3 data was updated, when I run the same job again I get an error. So what's the process?
Thank you for watching my videos.
You should not get any error when you run it twice. Did you check the logs to see what error message appears there?
@@cloudquicklabs How do I check its logs? In the CloudWatch log groups, when I look at the live tail it says it is 100% displayed. How do I resolve it?
Is incremental loading available with SQL Server and Postgres as the target?
Thank you for watching my videos.
I shall create a new video on incremental loading from RDS (SQL Server/Postgres) to Amazon Redshift.
If not, we still have a workaround here, i.e. first load to an S3 bucket and then load to Amazon Redshift.
@@cloudquicklabs She is referring to RDS as the target, loading from S3.
What if the primary key is not in increasing or decreasing order in the data? Will the incremental data load still work?
Thank you for watching my videos.
Indeed, it should load, as the primary key is unique in the table and records are identified by it.
Can you please tell me how to add a filter like age > 25 as a Glue step before loading into Redshift?
Thank you for watching my videos.
I did not get this question.
Could you please provide more information here.
@cloudquicklabs Hi, I have a requirement to transfer data from S3 > AWS Glue > Redshift. In this I have to add a filter on the data from the S3 table, and the filter is on the age column, age > 25, so only the rows that pass the filter shall transfer to Redshift.
Also, can you please share a video link for the below type of task:
Machine Learning: SageMaker -> grab data from S3 -> build the model -> push the model data to S3.
Thank you for watching my videos.
Did you check my video ruclips.net/video/O0GZVsGfHdo/видео.html
There I have explained how we can do advanced data transformation with the ETL job.
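For the age filter specifically, you can add a Filter node between the source and the target in the visual editor; the generated code is roughly this (assuming the source DynamicFrame is called src and the column is named age):

from awsglue.transforms import Filter

# Keep only the rows that satisfy the condition; everything else is dropped
# before the data reaches the Redshift target node.
filtered = Filter.apply(
    frame=src,
    f=lambda row: row["age"] > 25,
    transformation_ctx="filtered",
)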
Thank you for providing this input. I shall create a video on this topic soon.
A few doubts. 1) In a real scenario, TestReport.csv won't be updated in place, right? Suppose we get TestReport_1.csv and TestReport_2.csv at particular intervals, and in these files we have a few old but updated records and a few new records. How do we handle such situations?
Please redirect me if you have already explained such situations in any of your videos!
Thank you for watching my videos.
I haven't covered these scenarios in these videos.
I shall create a new version of this video soon where I cover a demo of this scenario.
appreciate ur efforts 🙏
Thank you for watching my videos.
Glad that it helped you.
Hello, the data loaded from S3 to Redshift is in a zig-zag manner. We have the data in the source in 1, 2, 3, 4 order, but in the target it is 1, 4, 11 and so on. How do we get the data into Redshift in serial order?
Thank you for watching my videos.
That should not be the case here; I believe only the columns may appear in a different order, while the row data should be intact.
Please watch this video again.
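Also note that Redshift, like most SQL databases, does not guarantee the order of rows returned by a query; if you need them back in a fixed sequence, add an ORDER BY on the key column when you query, for example SELECT * FROM public.testreport ORDER BY id; (the table and column names here are just examples).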
While testing the connection in Glue I encountered a few errors like "StatusLogger unrecognized format specifiers". Can you help me with how to get past those problems?
Thank you for watching my videos.
Have you followed the video directions as expected?
Thank you for this video , I was able to load the incremental data
Thank you for watching my videos.
Glad that it helped you.
Hi. While creating the connection I am getting an error. In this video there is no option shown to select the IAM role, but when I tried now, we need to choose the IAM role while creating the connection. Can you help with what policies are required?
I am having the same question and issue when creating the connection: "Create connection failed during validating credentials. Please validate connection inputs and VPC connectivity to Security Token Service, Secrets Manager and REDSHIFT."
Thank you for watching my videos.
I have an IAM role with admin access, trusted by the AWS Glue and Redshift services, but you could try with least privilege like below.
IAM role with the below policies:
1. S3 BUCKET full access.
2. AWS Glue full access.
3. Amazon Redshift full access.
4. Amazon Cloudwatch full access.
Trust policy.
1. aws glue service.
2. aws redshift services.
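If you prefer to script it, a rough sketch with boto3 is below (the role name is hypothetical; the managed policies roughly match the list above):

import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing both Glue and Redshift to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["glue.amazonaws.com", "redshift.amazonaws.com"]},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="glue-redshift-etl-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

for policy_arn in [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AWSGlueConsoleFullAccess",
    "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess",
    "arn:aws:iam::aws:policy/CloudWatchFullAccess",
]:
    iam.attach_role_policy(RoleName="glue-redshift-etl-role", PolicyArn=policy_arn)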
@@cloudquicklabs Thanks for the reply, but it does not seem to work. I have opened a support ticket after spending hours on this.
If it is not an IAM role issue, it must be a VPC or security group related issue. Please check the VPC endpoint and the security group inbound rules, and check that the security group has itself as the source in one inbound rule as well.
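A self-referencing inbound rule can be added like this if it's missing (the security group id below is a placeholder for the one attached to your Redshift/Glue connection):

import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the security group used by the connection

# Glue connections need the security group to allow all inbound TCP traffic
# from itself (a self-referencing rule).
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 0,
        "ToPort": 65535,
        "UserIdGroupPairs": [{"GroupId": SG_ID}],
    }],
)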
I'm not getting the endpoint, JDBC URL, or ODBC URL in the workgroup. How can we get them?
Thank you for watching my videos.
Please check the workgroup details page; it should be there.
Amazing content!!!
When I'm trying to read data from S3 in AWS Glue, it is giving me an error.
Can you please guide me?
Thank you for watching my videos.
Could you please tell me what error you are getting?
Can you please upload videos on T15 Wings-1 hands-on for TCS?
Thank you for watching my videos.
This channel covers only cloud hands-on, especially Azure and AWS in general.
With this knowledge you can crack Cloud DevOps roles at any company.
I have multiple large tables in CSV format in AWS S3, should I:
1. Load them from S3 to RDS (MySQL) => do my queries using MySQL Workbench => export the expected data to S3 => sync the expected data to SageMaker for visualization and other analysis using a Notebook instance?
2. Load them from S3 to Redshift => do queries using Redshift. Actually, I'm not quite sure what to do next in this direction; the goal is to have some filtered data for visualization and analysis.
Thank you
Thank you for watching my videos.
Is your requirement here just to visualize the data, or do you want to run ML on it (since you said you want to run SageMaker, which is an ML-oriented and costly tool)?
You could check out Amazon QuickSight if you are looking for visualization.
Thank you,@@cloudquicklabs. I actually want to do some ML, but that part can be done on a local machine. The primary goal now is to load data from S3 to something I can query to analyze the data better.
@@hoangng16 once you catalog it you can access it from Athena as well.
How do we ensure the primary key sequence stays intact with the source?
Thank you for watching my videos.
I believe you need to use Data Quality rules here to check whether your primary keys are in sequence. Maybe you should watch my latest video here: ruclips.net/video/DMQRFwbeYMc/видео.htmlsi=mu-t_cNUIvXzHIXv, which might give you ideas.
Hi, I'm getting an invalid input exception error. Can you please help me resolve it?
Thank you for watching my videos.
Could you please watch the video again.
Thank you... very useful... great video 👍 A small query: why is the table in Redshift not in the same order as in the CSV file?
Thank you for watching my videos.
Glad that it helped you.
If you mean the columns of the table, then I would say it does not matter as long as the proper records are getting updated in the respective columns.
Which IAM role is given to the Redshift workgroup?
Thank you for watching my videos.
I am using admin permissions as it is a demo.
Did you check reference documents shared at : github.com/RekhuGopal/PythonHacks/tree/main/AWS_ETL_Increamental_Load_S3_to_RedShift
@@cloudquicklabs
Which IAM role is given to the Redshift workgroup?
Please create and upload a basic video on it.
I want to build this project but don't understand which IAM role to give to the Redshift workgroup.
How much cost is expected if I try this with my personal AWS account?
Thank you for watching my videos.
All the resources involved in this solution are costly. Please make use of the AWS pricing calculator to estimate your cost here. As this was a lab session for me, I set it up and cleaned it up once the lab was completed, so it cost me very little.
I am importing the S3 data to Redshift Serverless. While the COPY query is running, it keeps on loading/running. I am loading data which has only 100 rows. What could be the problem?
Thank you for watching my videos.
Did you check if the connection to Amazon Redshift is working as expected?
And check if the source path is correct in the ETL job.
Could you please make a video on how to build fact & dimension tables in Redshift? It would be great if you made that video too: S3 to Redshift using Glue.
Thank you for watching my videos.
I shall work on this concept and create a video.
Very good.. but you could trim it to 15 minutes.
Thank you for watching my videos.
Glad that it helped you.
I shall take this input in my next videos.
Much-awaited video, thank you so much. Could you attach the code here which was generated after the Glue job configuration?
Thank you for watching my videos.
Glad that it's useful video to you.
I have not copied the Glue job code, but I have attached the relevant files in the description. Please use those and follow the steps; it should work for you.
So my question is: in this video, why did you not schedule it? It is incremental data and it can land in S3 at any time, so the incremental load should be scheduled so that when data is uploaded the job runs and executes automatically. So why did you default to running it manually, and how many times do you run the crawler and the job?
Thank you for watching my videos.
Indeed, it should be scheduled; I have mentioned that in the video.
As this is a demo, I have shown it via a manual trigger.
I am working on v3 of this video, where I will cover the missing points.
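For anyone who wants to schedule it right away, a rough sketch with boto3 is below (the job and trigger names are hypothetical):

import boto3

glue = boto3.client("glue")

# Runs the ETL job every hour; a second trigger (or a Glue workflow) can be
# set up the same way to run the crawler before the job.
glue.create_trigger(
    Name="hourly-incremental-load",
    Type="SCHEDULED",
    Schedule="cron(0 * * * ? *)",
    Actions=[{"JobName": "s3-to-redshift-incremental"}],
    StartOnCreation=True,
)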
@@cloudquicklabs OK, is this uploaded?
Not yet I am still working on it.
This is what I wanted.... Thank u so much 🙏
Thank you for watching my videos.
Glad that it helped you.
Can you make a whole use case: Oracle DB (source with CDC) --> S3 --> Glue (some transformation) --> Redshift warehouse, please?
Thank you for watching my videos.
Indeed, I shall add this to my to-do list and make a video on it.
Yes, I am also waiting for this type of project with e-commerce data.
I am getting "Target node is not supported".
What should I do?
Did you follow the video here ?
@@cloudquicklabs yes
@@cloudquicklabs I am using DMS to transfer data from RDS MySQL to S3.
Then, using Glue, I am transferring data from S3 to Redshift, and in Glue I am getting the error.
I tried the Redshift test connection but it failed with this error:
ERROR StatusLogger Unrecognized format specifier [d]
Thank you for watching my videos.
Are you getting the error while creating the connection or during execution of the ETL pipeline?
While testing the connection to Redshift.
Nice video! Can I just use the crawler to crawl new folders only and then, in the Glue job, just use a bookmark and append the data instead of merging? If the table is large, merging will be a very expensive operation, no?
Thank you for watching my videos.
I have not created a video on dynamic folder structures in AWS Glue yet; I shall create one soon. There should not be any extra cost here.
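If your new files only ever contain new records (no updates to old ones), appending with bookmarks enabled avoids the merge entirely. A rough sketch of the append write (all names hypothetical, assuming glueContext and an upstream DynamicFrame called mapped):

# Plain append into Redshift: no preactions/postactions MERGE. With job
# bookmarks enabled, each run only writes newly arrived files, so nothing
# is duplicated as long as old records are never re-sent.
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="redshift",
    connection_options={
        "redshiftTmpDir": "s3://my-temp-bucket/redshift-tmp/",
        "useConnectionProperties": "true",
        "dbtable": "public.testreport",
        "connectionName": "my-redshift-connection",
    },
    transformation_ctx="append_target",
)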
Thank you so much, this gave me a good revision and good ideas.
Thank you for watching my videos.
Glad that it helped you.
Very useful.. thanks... subscribed now for more interesting content
Thank you for watching my videos.
Happy learning.
@@cloudquicklabs Some issue with glueContext.write_dynamic_frame.from_catalog, whereas glueContext.write_dynamic_frame.from_jdbc_conf is working perfectly fine...
Getting the below error while writing to the Redshift catalog table:
Error Category: UNCLASSIFIED_ERROR; An error occurred while calling o130.pyWriteDynamicFrame. Exception thrown in awaitResult:
SQLException thrown while running COPY query; will attempt to retrieve more information by querying the STL_LOAD_ERRORS table
Could you please guide me?
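The real cause is usually recorded in STL_LOAD_ERRORS; you can pull the latest entries with the Redshift Data API, roughly like this (the workgroup and database names are placeholders):

import boto3

rsd = boto3.client("redshift-data")

# Fetch the most recent COPY failures; err_reason usually points at the
# offending column, file and line.
resp = rsd.execute_statement(
    WorkgroupName="my-workgroup",
    Database="dev",
    Sql=(
        "SELECT starttime, filename, line_number, colname, err_reason "
        "FROM stl_load_errors ORDER BY starttime DESC LIMIT 20;"
    ),
)
# Results can be read later with rsd.get_statement_result(Id=resp["Id"]).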
How do I load incremental data using a Python script?
Thank you for watching my videos.
I shall do one video using Python scripts for the ETL pipeline.
Great video! 🤓
Thank you for watching my videos.
Glad that it helped you.
It should automatically trigger based on an S3 event.
Thank you for watching my videos.
I shall create a new version of this video where this will be considered.
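Agreed; a rough sketch of a Lambda that an S3 "ObjectCreated" event notification would invoke is below (the job name is hypothetical):

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Invoked by an S3 ObjectCreated notification on the landing prefix.
    for record in event.get("Records", []):
        print("New object:", record["s3"]["object"]["key"])
    # Kick off the incremental Glue job; the crawler could be started the same
    # way with glue.start_crawler(Name=...) if a schema refresh is needed first.
    glue.start_job_run(JobName="s3-to-redshift-incremental")
    return {"status": "started"}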
Jai Sevalal Bhai
Thank you for watching my videos.
Glad that it helped you.
Jai Sevalal.!
Easy and smooth.
Thank you for watching my videos.
Glad that it helped you.
It is extremely useless!
Thank you for watching my videos.
Glad that it helped you.
What 😮 do you mean extremely useful?
This is great video!! Thank you for this :)
My job failed with the error: Error Category: UNCLASSIFIED_ERROR; Failed Line Number: 20; An error occurred while calling o113.pyWriteDynamicFrame. Exception thrown in awaitResult::
Thank you for watching my videos.
The error could be due to many reasons. 1. Check if the data format is as expected on the source side; this kind of error can happen due to a syntactical issue in the data.
How can I contact you?
You could reach me over email vrchinnarathod@gmail.com
Thanks for the valuable inputs. Small query: what if my incremental file lands in a different folder in the same bucket? How is the crawler going to handle that?
Thank you for watching my videos.
While defining the crawler you give it a path; choose the right path accordingly, and the crawler will scrape all data from all files and folders present underneath the defined folder.
@@cloudquicklabs Thanks for the reply. What if my folder structure is like the below:
input_bucket/year={current_year}/month={current_month}/day={current_date}/file.txt
How can I define my crawler to check the files based on date and load the data on an incremental basis, not as a full refresh... any idea?
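One option (a sketch, assuming the crawler has already registered year/month/day as partition keys on the catalog table) is to keep the crawler pointed at the bucket root and limit each job run to the current day's partition with a pushdown predicate:

from datetime import datetime, timezone

today = datetime.now(timezone.utc)

# Only today's year=/month=/day= partition is read from the catalog table,
# so each run processes the latest folder instead of doing a full refresh.
# Adjust the zero-padding below to match the actual folder names.
src = glueContext.create_dynamic_frame.from_catalog(
    database="my_catalog_db",            # hypothetical names
    table_name="input_bucket",
    push_down_predicate=(
        f"year == '{today.year}' and month == '{today.month:02d}' and day == '{today.day:02d}'"
    ),
    transformation_ctx="src",
)

Job bookmarks on the same read are another way to get incremental behaviour without managing dates yourself.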