- 60 videos
- 220,601 views
Dustin Vannoy
United States
Joined Oct 26, 2011
This channel is a resource for you to learn about modern data technologies and practices from a Data Engineer's perspective, from kickstart tutorials to tips for going to the next level.
Dustin Vannoy is a data engineer and analytics solutions architect. His specialties are modern data pipelines, data lakes, and data warehouses. He loves to share knowledge with the data engineering and science community.
Unity Catalog OSS Spotlight
Unity Catalog Open Source Software (OSS) is a compelling project and there are some key benefits to working with it locally. In this video I share my reasons for using the open source project Unity Catalog (UC) and walk through some of the setup and testing I did to create and write to tables from Apache Spark.
*All thoughts and opinions are my own*
Links:
Blog post - dustinvannoy.com/2025/01/30/oss-spotlight-unity-catalog
Roadmap 2024Q4 - github.com/unitycatalog/unitycatalog/discussions/411
Events - lu.ma/unity-catalog
More from Dustin:
Website: dustinvannoy.com
LinkedIn: www.linkedin.com/in/dustinvannoy/
Github: github.com/datakickstart
Outline:
0:00 Overview
4:50 Quickstart walkthrough
8:43 Apache Spa...
Views: 230
Videos
Developer Best Practices on Databricks: Git, Tests, and Automated Deployment
2.1K views · 1 month ago
Data engineers and data scientists benefit from using best practices learned from years of software development. This video walks through 3 of the most important practices to build quality analytics solutions. It is meant to be an overview of what following these practices looks like for a Databricks developer. This video covers: - Version control basics and demo of Git integration with Databri...
7 Best Practices for Development and CICD on Databricks
1K views · 1 month ago
In this video I share why developer experience and best practices are important and why I think Databricks offers the best developer experience for a data platform. I'll cover high level developer lifecycle and 7 ways to improve your team's development process with a goal of better quality and reliability. Stay tuned for follow up videos that cover some of the key topics discussed here. Blog po...
Databricks VS Code: Multiple Projects In VS Code Workspace
643 views · 3 months ago
In this video I cover a specific question about working with the Databricks Visual Studio Code extension: what if I have many project folders, each as its own bundle, but I want to work in the same VS Code workspace? I talk through a couple of ways to handle this and show how to switch the active project folder in order to run files from different bundles. You may need this if: - VS Code is only opening on...
Databricks VS Code Extension v2: Upgrade steps
437 views · 4 months ago
In this short video I show you how to upgrade a project from using Databricks Visual Studio Code version 1 to using the new version. There are a few key setup steps included and a quick glimpse at the new Databricks run button. For a more complete view of using the Databricks Visual Studio Code extension, see this video: ruclips.net/video/o4qMWHgT1zM/видео.html * All thoughts and opinions are m...
Databricks VS Code Extension v2: Setup and Feature Demo
3.9K views · 4 months ago
Databricks Visual Studio Code Extension v2, the next major release, is now generally available. In this video I walk through the initial setup and the main ways you will run code and deploy resources using this extension. I also provide some key tips to make sure you don't get stuck along the way. * All thoughts and opinions are my own * References: Databricks blog: www.databricks.com/blog/simp...
Databricks CI/CD: Azure DevOps Pipeline + DABs
9K views · 5 months ago
Many organizations choose Azure DevOps for automated deployments on Azure. When deploying to Databricks you can take similar deploy pipeline code that you use for other projects but use it with Databricks Asset Bundles. This video shows most of the steps involved in setting this up by following along with a blog post that shares example code and steps. * All thoughts and opinions are my own * B...
Databricks Asset Bundles: Advanced Examples
9K views · 7 months ago
Databricks Asset Bundles is now GA (Generally Available). As more Databricks users start to rely on Databricks Asset Bundles (DABs) for their development and deployment workflows, let's look at some advanced patterns people have been asking for examples to help them get started. Blog post with these examples: dustinvannoy.com/2024/06/25/databricks-asset-bundles-advanced Intro post: dustinvannoy...
Introducing DBRX Open LLM - Data Engineering San Diego (May 2024)
321 views · 8 months ago
A special event presented by Data Engineering San Diego, Databricks User Group, and San Diego Software Engineers. Presentation: Introducing DBRX - Open LLM by Databricks By: Vitaliy Chiley, Head of LLM Pretraining for Mosaic at Databricks DBRX is an open-source LLM by Databricks which when recently released outperformed established open-source models on a set of standard benchmarks. Join us to ...
Monitoring Databricks with System Tables
3.9K views · 11 months ago
In this video I focus on a different side of monitoring: What do the Databricks system tables offer me for monitoring? How much does this overlap with the application logs and Spark metrics? Databricks System Tables are a public preview feature that can be enabled if you have Unity Catalog on your workspace. I introduce the concept in the first 3 minutes then summarize where this is most helpfu...
Databricks Monitoring with Log Analytics - Updated for DBR 11.3+
4.1K views · 1 year ago
In this video I show the latest way to set up and use Log Analytics for storing and querying your Databricks logs. My prior video covered the steps for earlier Databricks Runtime versions (prior to 11.0). This video covers using the updated code for Databricks Runtime 11.3, 12.2, or 13.3. There are various options for monitoring Databricks, but since Log Analytics provides a way to easily query l...
Databricks CI/CD: Intro to Databricks Asset Bundles (DABs)
22K views · 1 year ago
Databricks Asset Bundles provide a way to use the command line to deploy and run a set of Databricks assets - like notebooks, Python code, Delta Live Tables pipelines, and workflows. This is useful both for running jobs that are being developed locally and for automating CI/CD processes that will deploy and test code changes. In this video I explain why Databricks Asset Bundles are a good optio...
Data + AI Summit 2023: Key Takeaways
618 views · 1 year ago
Data + AI Summit key takeaways from a Data Engineer's perspective. Which features coming to Apache Spark and to Databricks are most exciting for data engineering? I cover that plus a decent amount of AI and LLM talk in this informal video. See the blog post for more thought-out summaries and links to many of the keynote demos related to the features I am excited about. Blog post: dustinvanno...
PySpark Kickstart - Read and Write Data with Apache Spark
953 views · 1 year ago
Every Spark pipeline involves reading data from a data source or table and often ends with writing data. In this video we walk through some of the most common formats and cloud storage used for reading and writing with Spark. Includes some guidance on authenticating to ADLS, OneLake, S3, Google Cloud Storage, Azure SQL Database, and Snowflake. Once you have watched this tutorial, go find a free...
Spark SQL Kickstart: Your first Spark SQL application
1K views · 1 year ago
PySpark Kickstart - Your first Apache Spark data pipeline
4.4K views · 1 year ago
Spark Environment - Azure Databricks Trial
535 views · 1 year ago
Spark Environment - Databricks Community Edition
1.2K views · 1 year ago
Apache Spark DataKickstart - Introduction to Spark
1.7K views · 1 year ago
Unity Catalog setup for Azure Databricks
16K views · 1 year ago
Visual Studio Code Extension for Databricks
18K views · 1 year ago
Parallel Load in Spark Notebook - Questions Answered
2.5K views · 2 years ago
Delta Change Feed and Delta Merge pipeline (extended demo)
2.2K views · 2 years ago
Data Engineering SD: Rise of Immediate Intelligence - Apache Druid
252 views · 2 years ago
Azure Synapse integration with Microsoft Purview data catalog
2.3K views · 2 years ago
Adi Polak - Chaos Engineering - Managing Stages in a Complex Data Flow - Data Engineering SD
202 views · 2 years ago
Azure Synapse Spark Monitoring with Log Analytics
5K views · 2 years ago
Parallel table ingestion with a Spark Notebook (PySpark + Threading)
14K views · 2 years ago
SQL Server On Docker + deploy DB to Azure
4.8K views · 2 years ago
Michael Kennedy - 10 tips for developers and data scientists - Data Engineering SD
220 views · 2 years ago
radical
Nice! Any chance you could demo working with R notebooks on VS code in Databricks?
I'd love to see a video from yourself on how you automate schema deployments in databricks if possible! We have a process just now but I would say it is compute heavy as it checks the schema vs our ddl each time the pipeline runs and applies changes if a difference is found, which is maybe once a quarter, so quite a bit of wasted compute.
I had learnt that Databricks open sourced Unity, but now I know what that means. Great stuff. Thanks for this.
Hi Dustin, thank you for the amazing explanation. I have a few questions. 1. What is the industry practice - to have one job per repo or multiple jobs per repo? 2. In case I have around 5 to 6 jobs, is there a way I can have each job in a separate YAML file, keep the cluster information in a common file, say commons/cluster-info, and refer to them in the individual files just like a YAML anchor, but across files?
This is helpful Dustin, thank you. I have a question for you. It's related to your topic, since I'm trying to figure out testing as well as the branching + deployment strategy (e.g. CI vs abbreviated trunk-based dev + PR + CI for automated testing), but I have perhaps 3 different types of "work" that are related but different, possibly in the same repo. Setting up CI/CD pipelines and testing is downstream of the repo structure, or of whether there are multiple repos. So I'm trying to decide on repo structure and I'm a bit torn on whether there should be a single repository or separate repositories.
I'm working as an embedded end-to-end ML/AI/Data engineer on a product team and we have 3 primary "functions of work" as I like to call them:
1. Automated ETL pipelines (e.g. Databricks Workflows)
2. LLM tools, utilities, etc. -- we use or plan to use a lot of LLMs
3. POCs for businesses using GenAI/ML/etc. that may NEVER need an updated data set, so no DBX Workflow needed
I don't think these should all be in the same repo. The main question is whether #3, which likely also does not have unit tests or integration tests, should live in the same repo as #1 (DBX Workflows), perhaps in a separate top-level directory called "Sandbox", or whether it should be its own GitHub repository. And if/when any one of these POCs gets the go-ahead for regular data ingestion and/or "PROD usage", perhaps it is then ported over to "Workflows" (either the directory, if it lives in the same repo, or the Workflows repo, if it's a separate repo). I feel this video applies very well to #1, perhaps to #2, and I should then keep #3 to itself.
I have another question about CI/CD as it pertains to doing both a medallion architecture + DEV/TEST/PROD environments. There seems to be a ton of redundancy if I am using a CI pipeline for testing my code in DEV on a pull request to then merge the code to MAIN in the DEV environment. At this point, the code is "tested" but, environment-wise, still lives in DEV and has not yet been pushed up through UAT or PROD.
Oh man, you have just saved me hours! I didn't know that I can copy the YAML config by switching the UI to the code version. Thanks a lot Dustin. 🙏🙏🙏🙏
Is there a way to avoid authentication via PAT? When I use "databricks auth login --host x" and then log in with my account, if I try to run a databricks command I get some permission errors. What could this mean? Could you make a video about Databricks authentication?
Thank you for the awesome videos, those are very helpful. When I do bundle run it runs the workflow, and after the run is done it deletes the workflow completely and I get a message saying my workflow is deleted. I tried removing the bundle run command and it still deletes the workflow, so there is no workflow. What is causing this?
Question: the notebooks are not defined in a YAML fashion? If not, how do they get deployed to the target workspace? Are they deployed just by defining them in the jobs?
All the files and folders that are not ignored because of .gitignore are synced to the workspace on bundle deploy by default. You can control which ones are included or excluded by adding a sync -> include or sync -> exclude block. Search in these docs if you need more examples of how to control that: docs.databricks.com/en/dev-tools/bundles/settings.html
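For reference, here is a minimal sketch of that sync block in a databricks.yml (the paths are placeholders, not taken from the video):
sync:
  include:
    - extra_configs/*.yml   # hypothetical extra files to sync on bundle deploy
  exclude:
    - scratch/**            # hypothetical local-only folder to keep out of the workspace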
Thank you for the reply, the file is in the root, I will take a look at your suggestion. Thank you again and blessings!
Excellent. Thank you so much Dustin.
Thank you very much! I have an issue in the step "Validate bundle for dev environment", I am getting the error message: Error: unable to locate bundle root: databricks.yml not found
The databricks.yml should be in the current working directory. Perhaps you need to set the working directory or add a change directory step before trying to run the deploy.
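In an Azure DevOps pipeline, setting the working directory for the deploy step could look roughly like this (a sketch, assuming the bundle lives in a subfolder called my_bundle and that host/token variables are configured):
- script: |
    databricks bundle validate -t dev
    databricks bundle deploy -t dev
  displayName: Deploy bundle to dev
  workingDirectory: my_bundle             # hypothetical folder that contains databricks.yml
  env:
    DATABRICKS_HOST: $(DATABRICKS_HOST)   # assumed pipeline variables/secrets
    DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)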
Like deployed. thanks for sharing
Superb video, very detailed, thanks @dustin.... One question: do we need to create the job in each Databricks environment (QA, Prod) after the deployment, or is it automatically created while deploying from dev to the different environments?
You would either have a separate CI pipeline for each environment and swap out variables, or you can make this one more dynamic based on branch name or other factors.
Couldn't we just use complex variables instead of using the YAML anchors?
Nevermind, it looks like complex variables came out a day after this video was released lol
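For anyone comparing the two approaches, a rough sketch of a complex variable standing in for a YAML anchor (cluster values and names are just placeholders):
variables:
  default_cluster:
    description: Shared job cluster settings
    type: complex
    default:
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_DS3_v2
      num_workers: 2

resources:
  jobs:
    example_job:                          # hypothetical job
      name: example_job
      job_clusters:
        - job_cluster_key: main
          new_cluster: ${var.default_cluster}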
Excellent material
At 2:35 you mention getting into some new databricks features like 'jump to code/definition' in a different video. Could you add a link to that video? The option to see where code is defined, especially 'intellisense-like-behaviour' is something I miss a lot, most of all when using the magic %run command to import functions from different notebooks.
@@ExplainedbyAI-q2n I haven’t made one myself to show off that feature. If you have active session and right-click on a Python function call you should see the option. This video shares a lot of the newer features that make developing in the notebook a good experience: ruclips.net/video/YgAtny2qoqQ/видео.html
Really good stuff! Thank you so much for these posts! It is very inspiring and I have some work to do to reach this level. We are very heavy on using SQL code for transformations, using temporary views and CTEs. Is that a bad strategy in the sense that it makes it really hard to test? So for example, instead of having a CASE statement you would instead use a UDF that is more easily testable? How do you test big SQL transformations?
The downside of using SQL directly is it makes testing harder, but there are some different options. Certain ways of doing ETL with SQL like dbt or DLT have testing capabilities built in. Are you using SQL files that are then orchestrated via Databricks Workflows? If so you could follow similar ideas to what I show in the integration testing. Using UDF can make some of your unit testing easier but can have performance implications, so isn't always a best practice. I'm hoping to share more about SQL based ETL options this year but please let me know more about your design so it can influence some of the examples I build on this topic.
Thanks for the video; I had been looking into how to set this up for a while and this video finally got me to a place of having a working process. I just about have all of this setup in my environment but the most recent issue I'm running into is if I develop locally and run a DLT pipeline from VSCode it sets everything up with my credentials. When I merge to my dev branch that triggers the CICD pipeline (running as the service principal), the step that runs the job throws an error that the tables defined in the DLT pipeline are managed by another pipeline (the one with my credentials). If I use DLT do I just never test from vscode, or do I need to go clean those up each time? Is there a better way to manage this?
In case anyone else was running into this, I found a solution. First a little more context. I have a dev and prod workspace. Data engineers can pull the project down into VS Code, and using either the Databricks CLI or the VS Code Databricks extension, when they deploy bundles to the dev target it will go to their home folder. I have a release pipeline that is triggered when I merge into the dev branch. This will deploy the project using a service principal account. It re-uses the dev workspace and code is deployed to its home folder. All jobs and pipelines get named with the environment and the username as well.
The part that I was missing is that everyone was using the same schemas. After talking to our Databricks expert it was suggested to use dynamic schemas as well. This is all configured with variables in the databricks.yml file. In the dev target they are overridden to use the user name as part of the schema name. So, when I deploy the bundle to the dev target the workflows and pipelines get prefixed with [dev myusername] and the catalog is my dev_bronze catalog, shared by all in the dev workspace. But the schemas that are created and written to will be prefixed with myusername_. This was the key so that each dev has their own area in the workspace and we don't have multiple DLT pipelines trying to own the same underlying tables. Staging uses the service principal and I have it set to use the default values to mimic how the process will look in the prod workspace.
So, when developer1 deploys a bundle for ProjectA to dev it will create the [dev developer1] ProjectA Job and the [dev developer1] ProjectA DLT Pipeline, the source is stored in the home folder of developer1, and the table it writes to will be in dev_bronze.developer1_schemaName.tableName. When developer2 deploys ProjectA, they get [dev developer2] ProjectA Job and [dev developer2] ProjectA DLT Pipeline and write to dev_bronze.developer2_schemaName.tableName. When the change gets merged to the dev branch it will get deployed as [dev servicePrincipal] ProjectA Job and [dev servicePrincipal] ProjectA DLT Pipeline and write to dev_bronze.schemaName.tableName.
@@BrianMurrays Thanks for sharing these details about how you got this working for your setup.
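A rough databricks.yml sketch of the per-developer schema prefix idea described above (variable and target names are illustrative, not the actual config from that comment):
variables:
  schema_prefix:
    description: Prefix added to schema names so each developer gets their own area
    default: ""                     # empty for staging/prod deployed by the service principal

targets:
  dev:
    mode: development
    variables:
      schema_prefix: ${workspace.current_user.short_name}_   # e.g. developer1_
  prod:
    mode: production

# Pipelines and jobs can then write to ${var.schema_prefix}schemaName inside the shared dev catalog.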
Great content.
@DustinVannoy Will this work with a Databricks Personal Access Token instead of Service Connection/Service Principle?
Yes, but for deploying DABs to Staging/Prod you want to use the same user every time so they are consistently the owner. For Github Actions I use a token in a secret. I think you could pull from key vault in dev ops pipeline, not positive on the best practice there.
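As a sketch, a GitHub Actions job using a token stored in repository secrets might look like this (the secret names and workflow layout are assumptions, not from the video):
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main        # installs the Databricks CLI
      - run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}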
Hi! Thanks for the content! Can you describe more in detail how you run automated tests. What do you test? Etc... Could be a topic for a future video? Real examples. Thanks again
Yes, editing that one to release in January
Thanks, Dustin! You help me a lot for a production pipeline!
Still confused on how exactly you populated your Bundle Resources explorer?
So you need to have a databricks.yml file in the root folder and it has to have some workflows or pipelines defined. Check out my videos on intro to Databricks Asset Bundles if you aren't sure how to get jobs created. The short answer is you can create a starting project using `databricks bundle init` or find examples like what I have in the resources folder and modify as needed for your project.
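For anyone in the same spot, a minimal databricks.yml sketch that would make a job appear in the Bundle Resources panel (all names, paths, and ids below are placeholders):
bundle:
  name: my_project

resources:
  jobs:
    sample_job:
      name: sample_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/sample_notebook.py   # placeholder notebook path
          existing_cluster_id: 1234-567890-abcdefgh   # placeholder cluster id

targets:
  dev:
    mode: development
    default: true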
Thank you! I have a few processes that are not related to each other. Do I need to create a separate DAB for each one? How can I make the process more dynamic?
The general guidance is if you want to deploy together and code can be versioned together, then put it in the same bundle (all using the same databricks.yml). If you want to keep things separate then it's fine to have separate bundles, and you can either deploy in separate CD pipelines or the same one by calling `databricks bundle deploy` multiple times, once from each directory with a databricks.yml. For making it more dynamic I suggest variables, especially complex variables, but usually that is just to change values based on the target environment. Using the SDK to create workflows is an alternative to DABs, and other things have been discussed which might be more of a blend between the two options eventually.
@ Thank you very much!
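If you do keep things as separate bundles, the deploy pipeline can simply run the CLI once per folder. A sketch of such a step (the bundle folder names are hypothetical):
- script: |
    for dir in bundle_a bundle_b; do              # each folder has its own databricks.yml
      (cd "$dir" && databricks bundle deploy -t prod)
    done
  displayName: Deploy all bundles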
Great video!
Is there a way to define policies as a resource and deploy them? I have some 15 to 20 policies which my jobs can use. If there is a way to manage these policies and apply policy changes, it would be very convenient.
Is it possible to use DAB with a Standard Databricks cluster?
Yes, meaning not a job cluster but an all-purpose cluster? You can either reference one with existing_cluster_id or define one in resources under the `clusters` section. docs.databricks.com/en/dev-tools/bundles/settings.html#specification
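A quick sketch showing both options side by side (ids, names, and paths are placeholders):
resources:
  clusters:
    shared_all_purpose:                      # option 1: define an all-purpose cluster in the bundle
      cluster_name: shared-all-purpose
      spark_version: 15.4.x-scala2.12
      node_type_id: Standard_DS3_v2
      num_workers: 1
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebooks/main.py
          # option 2: point the task at a cluster that already exists in the workspace,
          # or reference the one defined above via ${resources.clusters.shared_all_purpose.id}
          existing_cluster_id: 1234-567890-abcdefgh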
I tried to test this extension and Databricks Connect, but when I run a *.py file with Databricks Connect the spark session variable is not initialized. I got this error:
pyspark.errors.exceptions.base.PySparkRuntimeError: [CONNECT_URL_NOT_SET] Cannot create a Spark Connect session because the Spark Connect remote URL has not been set. Please define the remote URL by setting either the 'spark.remote' option or the 'SPARK_REMOTE' environment variable.
I didn't configure SPARK_REMOTE, but I added explicit session creation:
config = Config(
    host=dbs_host,
    token=access_token,
    cluster_id=cluster_id,
)
spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()
I use Profile auth type. Databricks Connect is enabled. "Upload and run file" works. Databricks Runtime is 15.4.x, Databricks Connect 15.4.3.
Great video, learned a lot! I do have a question; would it make sense to define a base environment for serverless notebooks and jobs, and in the bundle reference said default environment? Ideally it would be in one spot, so upgrading the package versions would be simple and easy to test. This way developers could be sure that any package they get used to, is available across the whole bundle.
The idea makes sense but the way environments interact with workflows is still different depending on what task type you use. Plus you can't use them with standard clusters at this point. So it depends on how much variety you have in your jobs which is why I don't really include that in my repo yet.
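For context, this is roughly what a shared environment spec on a serverless job looks like in a bundle today (a sketch only; the job name, path, and package pin are examples):
resources:
  jobs:
    serverless_job:
      name: serverless_job
      environments:
        - environment_key: default
          spec:
            client: "1"
            dependencies:
              - my_package==0.1.0              # hypothetical dependency pinned in one place
      tasks:
        - task_key: main
          environment_key: default             # task opts into the shared environment
          spark_python_task:
            python_file: ./src/main.py         # placeholder path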
Hi Dustin, nice video! Any plans to do the same but for Microsoft Fabric?
@@derkachm No, I am not doing enough with Fabric yet to add anything new there.
Just great!!! I'm still using only the workspace UI and setups, but I really want to start using this VS Code extension. I'm going to test some features and do some basic commands. But anyway, just great! Thank you very much
Can't custom logs be written via Diagnostic Settings? Is that the reason for using spark-monitoring?
Is the memory spill bad? @3:48
@@thevision-y1b yes, it’s not ideal. Indicates I either want to 1) change to a worker VM type with more memory per core or 2) split into more tasks since the Input Size for my median and max tasks is a bit too high. By the way, these days that input size is usually ok for me but I use different node types now.
Thank you for the video 👌. I have a question: how can I convert existing Databricks jobs (ML, Python, SQL, etc.) that were initiated manually in the past to start running through the pipeline? That is, our data engineering team was running these jobs manually, but now we want to use DABs and Azure DevOps to run the jobs automatically. Thank you
Great video!! Much appreciate the effort put in to add join section, Dustin.
Hey Dustin, if we want to add more resources, like ADLS bronze, silver, and gold storage, do we need to add them to the environment variables?
You can deploy schemas within unity catalog but for external storage locations or volumes I would expect those to either happen from Terraform or as notebooks/scripts that you run in the deploy pipeline. Jobs to populate the storage would be defined in DABs, but not the creation of storage itself unless it's built into a job you trigger with bundle run.
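For the Unity Catalog schema part, a small sketch of a schema defined as a bundle resource (catalog and schema names are placeholders):
resources:
  schemas:
    bronze_schema:
      catalog_name: dev_bronze               # placeholder catalog
      name: my_project_bronze                # placeholder schema name
      comment: Bronze tables created by this bundle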
Very helpful. Thanks for sharing!!
My cluster is taking so much time to deploy, any ideas?
If you view the event log you might see some things. Sometimes a message will show that compute couldn't be retrieved from Azure which may be a quota limit (very common in trial accounts). If you added init scripts or libraries that can slow it down. Otherwise you can try posting more details (like event log info) in Databricks community. If you are really stuck and that doesn't help, message me more details through LinkedIn.
Now do AWS 😂
Meaning AWS account with GitHub Actions? If not, what combo of tools are you curious about for the deployment?
Any way to see a plan like you would with terraform?
Not really, using databricks bundle validate is best way to see things. There are some options to view as debug but I haven't found something that works quite like Terraform plan. When you try to run destroy it does show what will be destroyed before you confirm.
Still needs work. Issues I found so far:
1. The authentication can still clobber your CLI auth, causing lots of confusion.
2. The file sync needs a full refresh option. The only way to currently do so is to delete the sync folder in the .databricks folder.
3. Sync needs to be 2-way. Databricks/Spark Connect is still not feature complete, so you unfortunately have to use the notebook in some cases.
4. The overwrite job cluster feature installs your Python whl onto your all-purpose cluster, but if you make any changes to the package it doesn't remove the old whl and update it with a new whl with your changes, causing confusing errors.
For number 2, I agree. For number 3, I disagree; I think using a git provider to push/pull from various environments is the right way to handle it. This is based on my belief that it's too confusing to sync two ways without git, and often a team of people may be working together anyway. For number 4, if you append an incrementing version number or timestamp it will update on an all-purpose cluster that already has it installed. Not really an IDE thing, but it is all sort of related.
We need PyCharm and DataGrip support!
blog.jetbrains.com/pycharm/2024/08/introducing-the-pycharm-databricks-integration/
Hey Dustin, we're currently implementing Databricks Asset Bundles using Azure DevOps to deploy workflows. The bundles we are using are stored in GitHub. Can you please help me with the YAML script for it?
How do I remove the [dev my_user_name] prefix? Please suggest.
Change from mode: development to mode: production (or just remove that line). This will remove prefix and change default destination. However, for dev target I recommend you keep the prefix if multiple developers will be working in the same workspace. Production target is best deployed as a service principal from CICD pipeline (like Azure DevOps Pipeline) to avoid different people deploying the same bundle and having conflicts with resource owner and code version.
@@DustinVannoy Thank you Vannoy!! Worked fine now !!
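In databricks.yml the difference comes down to the mode set on each target, roughly like this (hosts and paths are placeholders):
targets:
  dev:
    mode: development        # adds the [dev username] prefix and pauses schedules
    default: true
    workspace:
      host: https://adb-1111111111111111.11.azuredatabricks.net
  prod:
    mode: production         # no prefix; deploy from CI/CD as a service principal
    workspace:
      host: https://adb-2222222222222222.22.azuredatabricks.net
      root_path: /Shared/.bundle/prod/${bundle.name}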
Hi Dustin, a basic question: how is this method different from configuring Azure portal -> Databricks workspace home page -> Diagnostic Settings -> exporting logs to Log Analytics?
The things that are logged are different. I've never written it up but we had some logs enabled that way plus we used this. There are other options to get logs, of course, but I found this one to be useful in the past for Azure focused environments.
Thanks Dustin for the video. Is there a way I can specify a subset of resources (workflows, DLT pipelines) to run in a specific env? For example, I would like to deploy only the unit test job in DEV and not in the PROD env.
You would need to define the job in the targets section of only the targets you want it in. If it needs to go to more than one environment, use a YAML anchor to avoid code duplication. I would normally just let a testing job get deployed to prod without a schedule, but others can't allow that or prefer not to do it that way.
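A sketch of that pattern, defining the test job only under the targets that should run it and reusing the definition with a YAML anchor (job details are placeholders):
targets:
  dev:
    resources:
      jobs:
        unit_test_job: &unit_test_job          # anchor defined on first use
          name: unit_tests
          tasks:
            - task_key: run_tests
              notebook_task:
                notebook_path: ./tests/run_unit_tests.py   # placeholder path
  staging:
    resources:
      jobs:
        unit_test_job: *unit_test_job          # alias reuses the same definition
  # prod: the test job is simply not defined for that target, so it never deploys there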
Hi Dustin, have you tried to configure and deploy a single node cluster by using Databricks Bundle?
Yes, it is possible. It looks something like this:
job_clusters:
  - job_cluster_key: job_cluster
    new_cluster:
      spark_version: 14.3.x-scala2.12
      node_type_id: m6gd.xlarge
      num_workers: 0
      data_security_mode: SINGLE_USER
      spark_conf:
        spark.master: "local[*, 4]"
        spark.databricks.cluster.profile: singleNode
      custom_tags: {"ResourceClass": "SingleNode"}
@@DustinVannoy Thanks a lot! This cannot be found in the Databricks documentation.
Hi Dustin, is there any way to monitor the compute event log from Log Analytics?
Not that I'm aware of.