Cloud Data Warehouse Benchmark Redshift vs Snowflake vs BigQuery | Fivetran
- Published: 11 Jul 2024
- Get the slides: www.datacouncil.ai/talks/clou...
ABOUT THE TALK:
Benchmarks are all about making choices: what kind of data will I use? How much? What kind of queries will users run? How you make these choices matters a lot: change your assumptions and the fastest warehouse can become the slowest.
As a data pipeline provider that supports all three warehouses as destinations, Fivetran conducted an independent benchmark that is representative of real-world users. In this talk, we'll dive into our methodology and the results, and compare them to other similar benchmarks.
ABOUT THE SPEAKER:
George Fraser is Co-founder and CEO of Fivetran, a fully managed data pipeline built for analysts. Fivetran is a Y Combinator-backed company with over 300 customers relying on it to centralize their data. When Fivetran began in 2012, they realized that ETL tools were ill-equipped for modern companies that rely on cloud applications and databases.
To meet the needs of analysts in this new era, Fivetran built the only zero-maintenance data pipeline on the market and is now part of a growing ecosystem of cloud infrastructure that gives organizations control of their data without heavy engineering.
ABOUT DATA COUNCIL:
Data Council (www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers. Make sure to subscribe to our channel for more videos, including DC_THURS, our series of live online interviews with leading data professionals from top open source projects and startups.
FOLLOW DATA COUNCIL:
Twitter: / datacouncilai
LinkedIn: / datacouncil-ai
Facebook: / datacouncilai
Eventbrite: www.eventbrite.com/o/data-cou...
Good presentation. Also nice to see that Jimmi Simpson is expanding his horizons.
Scan speed is extremely important when the data set is huge and it cannot all fit in memory. On a large warehouse, the time spent scanning will usually dwarf the compute time on queries. So I agree that on a tiny 100GB benchmark, complex queries are more meaningful, but on a larger size warehouse scan speed and re-distribution speed are the differentiator.
While most comparisons only focus on speed or cost, you covered a number of parameters in detail. Thanks for sharing.
Excellent video. I really like the detailed approach to pricing calculations (20:00 onwards), e.g. BigQuery being actually more expensive than it appears to be.
Thank you very much for this presentation. It was very well done and I appreciate the explanation of your choices.
Great comparison, thanks!
I first used Sybase IQ in 1996. It was a hugely successful implementation. I would say this was the first columnar DB, which stemmed from an MIT group if I recall.
I joined Sybase in 1992 having been a Sybase customer since 1988.
OLTP vs OLAP @2:30 👍
Great, great video!
I think the OLTP vs. OLAP distinction is better framed as insert/update/delete architectural optimization vs. query (select) optimization. The select example you gave seems to be more of a difference between operational reports and analytical reports. BUT - good stuff!
Great comparison & presentation!
How about Databricks? Or using SparkSQL to query data stored in Parquet files, either in HDFS or in S3 via a connector?
Great presentation
Nicely presented.
Wonderful video. It should have Azure as well.
nice talk!
Next time you do a benchmark test, please include Teradata as well.
Another big cloud data warehouse provider is Alibaba Cloud MaxCompute; will this product be included?
The latest version of our warehouse benchmark is at fivetran.com/blog/warehouse-benchmark
Legends are still waiting to receive the presentation by email a day after registering at the link.
I don't want to throw a spanner in the works but ... why remove the best-performing aspects of a data warehouse in order to perform a benchmark test? Removing distribution, clustering, and sort/partition keys doesn't, in my opinion, produce a usable test, because you removed the best and most important parts. Data can and should be distributed and redistributed as copies in a warehouse. Re-sorting/restructuring has been used for variable data requirements for decades, and the most effective way is to create multiple copies (which can also be a materialized view). Isn't disk space cheap relative to CPU+RAM? And will a complex data model (no indexing) cause problems, coupled with filtering and no partitioning or distribution? And why test with a small data set on a platform that is built for very large data sets?
I'm going to guess that these data warehouses are becoming so broadly available and cheap that they're edging out traditional data storage platforms, and are becoming more frequently used by smaller organizations. So a benchmark like this, while not necessarily helpful for large companies that would fully leverage the capabilities of a cloud storage architecture, is still extremely useful for a larger number of small companies looking to use agile storage services at a competitive price.
Excellent video. Please include SQL Warehouse (Azure Synapse Analytics)
Partitioning is a huge part of Snowflake's architectural magic... Isn't that a silly thing to exclude from the benchmark testing??
Is he the same person from the Fivetran ETL company?
Sybase IQ appeared as a column-store database in the '90s and is still in use today. Yet sadly nobody knows about it. Neither Sybase nor SAP (which acquired Sybase) bothered to market it.
That's really interesting - how did you first encounter it? I'd never heard of it, but I'll be checking it out.
Richard Mei cc
I loved Sybase as a customer and employee. But we could not market our way out of a paper bag. In 1996 Oracle was 3 times our size in revenue, but in new license sales we were nearly even. We were the 6th largest independent software company in the world. Traveling on a plane, I often had the following conversation (I like to talk to people). Me: "... I work for Sybase, a major database company." Other passenger: "I never heard of them, but I don't know anything about technology." Me: "I bet you a dollar you've heard of my competitor, Oracle." Them: "Oh, yes I have."
I think BigQuery is better for me.
Why no Azure SQL Data Warehouse?
It's in the latest version: fivetran.com/blog/warehouse-benchmark
The really big problem with BigQuery is data governance. Permissions in BigQuery are horrible: it only has dataset-level permission granularity.
Is this still the case or has BigQuery security improved since last year? Thx
Yes, a lot has changed since last year; there are already ACLs for BigQuery in beta.
Yuri Soares, thanks, will research this as we are considering BigQuery.
@Chekmate99 Good! BigQuery integrates well with GCP products, but nowadays the best value-for-money data warehouse is Snowflake. If you want to do some crazy ML stuff, BigQuery could be the way to go; otherwise Snowflake is much better.
Yuri Soares, thanks! We are also looking at Snowflake and Azure solutions. Our situation is similar to this video's case study: we are consolidating data from several key OLTP systems into a warehouse. Years ago we used Cisco's Data Virtualization tool to accomplish something similar, but now we want to leverage the cloud. The biggest challenge we've had in the past was getting the business user community on board and using these solutions (getting away from Excel spreadsheets, etc.).
Good comparison, very informative.
However, I don't believe this CEO has in-depth knowledge of each technology to answer questions like the one at @34:27.
"My instinct is that in general ahhhh..." Really? If you don't know for a fact you should just admit that you don't know. Period.
1 minute 30 seconds to get in 20 "uh"s. You failed the "um" game.
Poor guy is nervous
ahh ... like .. ahhh ... like .... way way way.. ahh ... like ... 28 min. Content 6 min.
Agreed. Don't want to listen through. Who won?
Troll