Excellent summary, Simon! I'm looking forward to LakeFlow. 😀
Nice recap of the key announcements! ABAC demo was 🎉
I think most of those changes are going to have a big effect on the way we manage data. Databricks is set up to be the single tool right up to the point you visualize the end result. I wonder how MS feel about the fact they might end up serving instances of the very platform that makes Fabric a bit redundant :P especially if the pricing is clear and competitive :)
Amazing Simon, thanks for this update
Nice summary, thanks Simon
Was eagerly waiting for your video Simon! I think if Lakeflow turns out to be as good as a dedicated replication tool, that would be the biggest disruptor to how we currently do lakehouse. Data acquisition has always been the sore point in the data platform.
The other one, as you mentioned: tag-based access control is something I have been waiting on for almost 2 years!
I agree that the Tabular acquisition will lead to improved interoperability for Delta Lake & Iceberg users. For me this signals the broader trend of reducing data movement and ETL so that people can use data where it is, with all access control managed by Unity Catalog.
Lakeflow seems great if you can source-control and deploy it. Hopefully it's also somewhat testable.
What do you know about the realtime mode? Do you think it's just a rename of the experimental spark continuous mode?
I need to dig into what's been announced publicly so I don't break NDAs - but I can say that what I've seen has come a fair way from the old continuous mode. It's more than just the Spark engine change that's driving the performance increase.
I liked your point about Serverless and what we do. Hopefully by the time the transition is done I’ll have retired into leadership 😂
And I'm here still waiting for that for_each task xD
It's on the roadmap, it was on one of the keynote slides and everything! 😅
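For anyone wondering what a for_each task might look like once it lands, here's a rough jobs-YAML sketch. The field names, paths, and job names below are my assumptions about the shape, not copied from official docs:

```yaml
# Hypothetical sketch of a Databricks Workflows for_each task
# (job name, notebook path and parameter names are made up)
resources:
  jobs:
    nightly_load:
      tasks:
        - task_key: load_all_layers
          for_each_task:
            inputs: '["bronze", "silver", "gold"]'  # JSON list to iterate over
            concurrency: 3                          # run iterations in parallel
            task:
              task_key: load_one_layer
              notebook_task:
                notebook_path: /Workspace/etl/load_layer
                base_parameters:
                  layer: "{{input}}"                # current iteration value
```

The idea being one nested task definition fanned out over the inputs list, rather than hand-copying near-identical tasks.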
Hoping we get branches in Delta for write-audit-publish, as that's a pretty useful feature in Iceberg.
Come on, not a single word about DuckDB? It was everywhere in the keynote :)
Haha, it's true - there was the segment from Hannes himself. But the update is largely that DuckDB can now natively read Delta, right? Nothing I saw is directly Databricks functionality. That said, I'm waaaay overdue a separate video spinning up DuckDB on a single node and showing how fast it is!
Currently only the SQL Warehouse can be serverless, and it supports SQL only. Does that mean Python is not recommended for new projects?
That's what the announcements were all about - they're rolling out Serverless for Workflows/Notebooks which means full serverless python support. Python is thoroughly recommended for any engineering/automation workloads (with embedded SQL for transformations as necessary)
@AdvancingAnalytics The biggest problem is the price. Serverless is the most expensive workload option in Databricks. For many companies it can be a blocker, especially when cheaper options exist. I've heard of situations where companies ask developers not to use the serverless SQL warehouse because of that.
@AdvancingAnalytics Hopefully this means serverless will be supported in more regions!
Isn't Genie just hitting the openAI endpoint?
Nope - the original Databricks Assistant was using OpenAI; this new iteration is a flavour of DBRX, with the context of your own data (Unity Catalog, recent activity/queries, etc.). It should have far, far more context than just hitting an open endpoint.
I agree with the points about serverless making things easier and doing it better than a person would. However, I would still want to know what it is doing, so I could replicate it elsewhere (self-hosted, another future vendor, etc.). Otherwise this is another type of vendor lock-in.
I.e., if I'm too reliant on the platform optimising stuff for me, then I'm effectively locked in.