Advancing Spark - Data + AI Summit 2024 Key Announcements
- Published: 7 Feb 2025
- The dust is settling now that the Data + AI Summit has come to an end, so it's time to reflect on the huge number of announcements we saw over just a couple of days! We had the massive open sourcing of Unity Catalog, new products such as AI/BI and compound AI applications, and then big teasers about LakeFlow - a complete rethink of how we approach ETL!
In this video, Simon runs through a series of clips from the keynotes, pulling out the key announcements you should be aware of if you're in the data/AI space!
For full YouTube keynote replays, see:
Day 1 - • Data + AI Summit Keyno...
Day 2 - • Data + AI Summit 2024 ...
And as always, Advancing Analytics can help you get the most out of your Data Intelligence Platform (or build one if you're not there yet), so give us a call if you need that extra boost.
Excellent summary, Simon! I'm looking forward to LakeFlow. 😀
Was eagerly waiting for your video, Simon! I think if LakeFlow turns out to be as good as any other replication tool, it would be the biggest disruptor to how we currently do lakehouse. Data acquisition has always been the sore point in the data platform.
The other one, as you mentioned: tag-based access control is something I have been waiting on for almost 2 years!
Nice recap of the key announcements! ABAC demo was 🎉
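For anyone who hasn't touched this yet, here's a minimal sketch of what tag-driven governance looks like in Unity Catalog today, run from a Databricks notebook where `spark` is defined. The table, column, and group names are hypothetical; `SET TAGS`, `SET MASK`, and `is_account_group_member()` are existing Unity Catalog / Databricks SQL, while the announced ABAC feature should let a policy bind to the tag itself rather than to each column.

```python
# Hypothetical names throughout (hr.employees, salary, hr_admins).

# Tag the column as PII so governance tooling (and, once ABAC lands,
# tag-driven policies) can discover it.
spark.sql("""
    ALTER TABLE hr.employees
    ALTER COLUMN salary SET TAGS ('sensitivity' = 'pii')
""")

# A masking function: only members of hr_admins see the real value.
spark.sql("""
    CREATE OR REPLACE FUNCTION hr.mask_salary(salary DECIMAL(10, 2))
    RETURNS DECIMAL(10, 2)
    RETURN CASE WHEN is_account_group_member('hr_admins')
                THEN salary ELSE NULL END
""")

# Today the mask is attached column by column; the announced ABAC
# feature aims to drive this from the tag instead.
spark.sql("ALTER TABLE hr.employees ALTER COLUMN salary SET MASK hr.mask_salary")
```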
Amazing Simon, thanks for this update
I think most of those changes are going to have a big effect on the way we manage data. Databricks is set up to be the single tool right up to the point where you visualise the end result. Wonder how MS feel about the fact they might end up serving instances of the very platform that makes Fabric a bit redundant :P especially if the pricing is clear and competitive :)
Nice summary, thanks Simon
I agree that the Tabular acquisition will lead to improved interoperability for Delta Lake & Iceberg users. For me this signals the broader trend of reducing data movement and ETL so that people can use data where it is, with all access control managed by Unity Catalog.
I liked your point about Serverless and what we do. Hopefully by the time the transition is done I’ll have retired into leadership 😂
LakeFlow seems great if you can source control and deploy it. Hopefully it's also somewhat testable.
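On testability: LakeFlow Pipelines was presented as building on Delta Live Tables, so today's DLT pattern is a reasonable hint at what this could look like. A minimal sketch, assuming a hypothetical raw.orders source table - the point being that the pipeline is plain Python that lives in git, with the transformation logic factored out so it can be unit-tested against an ordinary local SparkSession:

```python
import dlt
from pyspark.sql import DataFrame, functions as F

# Pure transformation logic, kept separate from the pipeline decorator
# so a unit test can call it directly with a small in-memory DataFrame.
def clean_orders(raw: DataFrame) -> DataFrame:
    return (raw
            .filter(F.col("order_id").isNotNull())
            .withColumn("amount", F.col("amount").cast("decimal(10,2)")))

@dlt.table(comment="Orders with nulls dropped and amounts typed")
def orders_clean():
    # 'raw.orders' is a hypothetical upstream table; `spark` is
    # provided by the pipeline runtime.
    return clean_orders(spark.read.table("raw.orders"))
```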
What do you know about the real-time mode? Do you think it's just a rename of the experimental Spark continuous mode?
I need to dig into what's been announced publicly so I don't break NDAs - but I can say that what I've seen has come a fair way from the old continuous mode, it's more than just the spark engine change behind what's driving the performance increase.
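For context, the experimental continuous mode that has been in open-source Structured Streaming since Spark 2.3 looks like the sketch below, using the built-in rate test source. Per the reply above, the new real-time mode is apparently more than just this trigger; continuous mode only supports map-like operations (no aggregations or joins) and a small set of sources/sinks.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("continuous-demo").getOrCreate()

# Built-in test source that emits one row per tick.
events = spark.readStream.format("rate").load()

query = (events.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/continuous-demo")
         .trigger(continuous="1 second")  # checkpoint interval, not a micro-batch interval
         .start())
query.awaitTermination()
```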
And I'm here still waiting for that for_each task xD
It's on the roadmap, it was on one of the keynote slides and everything! 😅
Hoping we get branches in Delta for write-audit-publish, as that's a pretty useful feature in Iceberg.
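In the meantime, a rough approximation of write-audit-publish on Delta uses a staging table plus a swap. A sketch, assuming a hypothetical sales table and an `incoming` DataFrame; `SHALLOW CLONE` and `CREATE OR REPLACE` are existing Databricks/Delta SQL. Note that unlike Iceberg branches, the publish step here rewrites data rather than being a metadata-only operation.

```python
# 1. WRITE: stage the current table plus the new batch, out of readers' sight.
spark.sql("CREATE OR REPLACE TABLE sales_staging SHALLOW CLONE sales")
incoming.write.mode("append").saveAsTable("sales_staging")

# 2. AUDIT: validate the staged result before anyone can query it.
bad = spark.sql(
    "SELECT count(*) AS n FROM sales_staging WHERE amount < 0"
).first()["n"]
assert bad == 0, "audit failed: negative amounts in staged batch"

# 3. PUBLISH: replace the live table with the audited data.
spark.sql("CREATE OR REPLACE TABLE sales AS SELECT * FROM sales_staging")
```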
Currently only SQL Warehouses can be serverless, and they support SQL only. Does that mean Python is not recommended for new projects?
That's what the announcements were all about - they're rolling out serverless for Workflows/Notebooks, which means full serverless Python support. Python is thoroughly recommended for any engineering/automation workloads (with embedded SQL for transformations as necessary)
@AdvancingAnalytics The biggest problem is the price. Serverless is the most expensive workload option in Databricks. For many companies that can be a blocker, especially when cheaper options exist. I've heard of situations where companies ask developers not to use serverless SQL warehouses because of that
@AdvancingAnalytics Hopefully this means serverless will be supported in more regions!
Come on, not a single word about DuckDB? It was everywhere in the keynote :)
Haha, it's true - there was the segment from Hannes himself. But the update is largely that DuckDB can now natively read Delta, right? Nothing I saw is directly Databricks functionality. That said, I'm waaaay overdue a separate video spinning up DuckDB on a single node and showing how fast it is!
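For anyone who wants to try that capability now: DuckDB's delta extension exposes a `delta_scan` table function that reads a Delta table directly. A minimal sketch in Python, with a hypothetical table path:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL delta")  # one-time download of the extension
con.execute("LOAD delta")

# '/data/lake/sales' is a hypothetical Delta table path.
con.sql("""
    SELECT order_date, sum(amount) AS total
    FROM delta_scan('/data/lake/sales')
    GROUP BY order_date
    ORDER BY order_date
""").show()
```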
Isn't Genie just hitting the OpenAI endpoint?
Nope - the original Databricks Assistant was using OpenAI; this new iteration is a flavour of DBRX, with the context of your own data (Unity Catalog, recent activity/queries, etc.). It should have far, far more context than just hitting an open endpoint.
I agree with the points about serverless making things easier and doing it better than a person would. However, I would still want to know what it is doing, so I could replicate it elsewhere (self-hosted, another vendor in future, etc). Otherwise this is another type of vendor lock-in.
I.e., if I'm too reliant on the platform optimising stuff for me, then I'm effectively locked in.