
Alternatives to ClickHouse

Cassandra, Elasticsearch, MySQL, InfluxDB, and Druid are the most popular alternatives and competitors to ClickHouse.

What is ClickHouse and what are its top alternatives?

ClickHouse allows analysis of data that is updated in real time. It offers instant results in most cases: the data is processed faster than it takes to create a query.
ClickHouse is a tool in the Databases category of a tech stack.
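
ClickHouse ships an HTTP interface (port 8123 by default), so that claim is easy to poke at with nothing but an HTTP client. A minimal sketch in Python; the host and query here are assumptions, not something from this page:

import requests  # a plain HTTP client is enough; no ClickHouse driver needed

CLICKHOUSE_URL = "http://localhost:8123"  # assumed local server; 8123 is the default HTTP port

# The HTTP interface takes SQL in the POST body and returns
# tab-separated text by default.
query = "SELECT name, engine FROM system.tables LIMIT 5"

resp = requests.post(CLICKHOUSE_URL, data=query, timeout=10)
resp.raise_for_status()
print(resp.text)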

Top Alternatives to ClickHouse

  • Cassandra

Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Elasticsearch

    Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack). ...

  • MySQL

    The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. ...

  • InfluxDB

    InfluxDB is a scalable datastore for metrics, events, and real-time analytics. It has a built-in HTTP API so you don't have to write any server side code to get up and running. InfluxDB is designed to be scalable, simple to install and manage, and fast to get data in and out. ...

  • Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations. ...

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Vertica

Vertica provides a best-in-class, unified analytics platform that will forever be independent from underlying infrastructure. ...

  • Snowflake

    Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn. ...

ClickHouse alternatives & related posts

Cassandra

A partitioned row store. Rows are organized into tables with a required primary key.
PROS OF CASSANDRA
  • Distributed (119)
  • High performance (98)
  • High availability (81)
  • Easy scalability (74)
  • Replication (53)
  • Reliable (26)
  • Multi datacenter deployments (26)
  • Schema optional (10)
  • OLTP (9)
  • Open source (8)
  • Workload separation (via MDC) (2)
  • Fast (1)
CONS OF CASSANDRA
  • Reliability of replication (3)
  • Size (1)
  • Updates (1)
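
The data model described above (tables with a required primary key, queried through CQL) looks like this in practice. A minimal sketch using the DataStax cassandra-driver package; the keyspace, table, and replication settings are illustrative assumptions:

from cassandra.cluster import Cluster

# Connect to one local node; the driver discovers the rest of the cluster.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Hypothetical keyspace/table. The partition key (user_id) decides
# which nodes store a row, which is what makes repartitioning transparent.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.events (
        user_id text,
        event_time timestamp,
        payload text,
        PRIMARY KEY (user_id, event_time)
    )
""")

# CQL reads like SQL, but queries should filter on the partition key.
session.execute(
    "INSERT INTO demo.events (user_id, event_time, payload) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("user-1", "signup"),
)
for row in session.execute("SELECT * FROM demo.events WHERE user_id = %s", ("user-1",)):
    print(row.user_id, row.event_time, row.payload)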

related Cassandra posts

Thierry Schellenbach shared insights on Golang, Python, and Cassandra:

After years of optimizing our existing feed technology, we decided to make a larger leap with Stream 2.0. While the first iteration of Stream was powered by Python and Cassandra, for Stream 2.0 we switched our infrastructure to Go.

The main reason why we switched from Python to Go is performance. Certain features of Stream such as aggregation, ranking and serialization were very difficult to speed up using Python.

We’ve been using Go since March 2017 and it’s been a great experience so far. Go has greatly increased the productivity of our development team. Not only has it improved the speed at which we develop, it’s also 30x faster for many components of Stream. Initially we struggled a bit with package management for Go. However, using Dep together with the VG package contributed to creating a great workflow.

Go as a language is heavily focused on performance. The built-in pprof tool is amazing for finding performance issues. Uber's go-torch library is great for visualizing pprof data and will be bundled into pprof in Go 1.10.

The performance of Go greatly influenced our architecture in a positive way. With Python we often found ourselves delegating logic to the database layer purely for performance reasons. The high performance of Go gave us more flexibility in terms of architecture. This led to a huge simplification of our infrastructure and a dramatic improvement of latency. For instance, we saw a 10 to 1 reduction in web-server count thanks to the lower memory and CPU usage for the same number of requests.

#DataStores #Databases

Thierry Schellenbach also shared insights on Redis, Cassandra, and RocksDB:

Stream 1.0 leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle its rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook's data engineering team. RocksDB started as a fork of Google's LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it's fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it's much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

#InMemoryDatabases #DataStores #Databases

Elasticsearch

Open Source, Distributed, RESTful Search Engine
PROS OF ELASTICSEARCH
  • Powerful API (327)
  • Great search engine (315)
  • Open source (230)
  • RESTful (214)
  • Near real-time search (199)
  • Free (97)
  • Search everything (84)
  • Easy to get started (54)
  • Analytics (45)
  • Distributed (26)
  • Fast search (6)
  • More than a search engine (5)
  • Highly available (3)
  • Awesome, great tool (3)
  • Great docs (3)
  • Easy to scale (3)
  • Fast (2)
  • Easy setup (2)
  • Great customer support (2)
  • Intuitive API (2)
  • Great piece of software (2)
  • Reliable (2)
  • Potato (2)
  • NoSQL DB (2)
  • Document store (2)
  • Not stable (1)
  • Scalability (1)
  • Open (1)
  • GitHub (1)
  • Elasticsearch (1)
  • Actively developing (1)
  • Responsive maintainers on GitHub (1)
  • Ecosystem (1)
  • Easy to get hot data (1)
  • Community (0)
CONS OF ELASTICSEARCH
  • Resource hungry (7)
  • Difficult to get started (6)
  • Expensive (5)
  • Hard to keep stable at large scale (4)
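
Because everything in Elasticsearch goes over its REST API, the "RESTful" and "near real-time search" pros above are easy to demonstrate with nothing but an HTTP client. A minimal sketch in Python; the index name and document are hypothetical, and the explicit _refresh is only there to make the new document searchable immediately:

import requests

ES = "http://localhost:9200"  # default Elasticsearch HTTP port, assumed local

# Index a document; the index is created on the fly with a dynamic mapping.
requests.put(
    f"{ES}/articles/_doc/1",
    json={"title": "Alternatives to ClickHouse", "tags": ["databases", "olap"]},
).raise_for_status()

# Force a refresh so the document is searchable immediately
# (near real time normally means visible within about a second).
requests.post(f"{ES}/articles/_refresh").raise_for_status()

# Full-text search through the query DSL.
resp = requests.post(
    f"{ES}/articles/_search",
    json={"query": {"match": {"title": "clickhouse"}}},
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["title"])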

related Elasticsearch posts

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults, to bad query plans which required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And then after that, we've just gotten a ton of value out of postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.

Tymoteusz Paul
DevOps guy at X20X Development LTD · 23 upvotes · 8M views

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing a dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN, you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things; and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has quite a dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:

  1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
  2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
  3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
  4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I'm also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

MySQL

The world's most popular open source database
PROS OF MYSQL
  • SQL (800)
  • Free (679)
  • Easy (562)
  • Widely used (528)
  • Open source (489)
  • High availability (180)
  • Cross-platform support (160)
  • Great community (104)
  • Secure (78)
  • Full-text indexing and searching (75)
  • Fast, open, available (25)
  • SSL support (16)
  • Reliable (15)
  • Robust (14)
  • Enterprise version (8)
  • Easy to set up on all platforms (7)
  • NoSQL access to JSON data type (2)
  • Relational database (1)
  • Easy, light, scalable (1)
  • Sequel Pro (best SQL GUI) (1)
  • Replica support (1)
CONS OF MYSQL
  • Owned by a company with their own agenda (16)
  • Can't roll back schema changes (3)

related MySQL posts

Nick Rockwell
SVP, Engineering at Fastly · 46 upvotes · 3.2M views

When I joined NYT there was already broad dissatisfaction with the LAMP (Linux Apache HTTP Server MySQL PHP) stack, and the front-end framework in particular. So I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember, from being outside the company when that stack (then called MIT FIVE) had launched, observing it and thinking: you guys took so long to do that, and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?

So we're moving quickly away from LAMP, I would say. So, right now, the new front end is React based and using Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.

React is now talking to GraphQL as a primary API. There's a Node.js back end to the front end as well, which is mainly for server-side rendering.

Behind that, the main repository for the GraphQL server is a big-table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.

Tim Abbott's post above (under the Elasticsearch section) on Zulip's move from MySQL to PostgreSQL is just as relevant here.
InfluxDB

An open-source distributed time series database with no external dependencies
PROS OF INFLUXDB
  • Time-series data analysis (59)
  • Easy setup, no dependencies (30)
  • Fast, scalable & open source (24)
  • Open source (21)
  • Real-time analytics (20)
  • Continuous Query support (6)
  • Easy Query Language (5)
  • HTTP API (4)
  • Out-of-the-box, automatic Retention Policy (4)
  • Offers Enterprise version (1)
  • Free Open Source version (1)
CONS OF INFLUXDB
  • Instability (4)
  • Proprietary query language (1)
  • HA or Clustering is only in paid version (1)

related InfluxDB posts

Hi everyone. I'm trying to create my personal syslog monitoring.

  1. To get the logs, I am unsure which way to choose:
     1.1 Use Logstash as a TCP server.
     1.2 Implement a Go TCP server.

  2. To store and plot the data:
     2.1 Use Elasticsearch tools.
     2.2 Use InfluxDB and Grafana.

I would like to know: which is the cheaper and more scalable solution?

Or even if there is a better way to do it.
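
For a feel of how small option 1.2 can be, here is a minimal sketch of a line-oriented TCP syslog listener. It is written in Python rather than Go (to keep one language across the sketches on this page), and it prints each record where a real collector would forward it to storage:

import socketserver

class SyslogTCPHandler(socketserver.StreamRequestHandler):
    """Handles one client connection carrying newline-delimited syslog messages."""

    def handle(self):
        for raw in self.rfile:  # iterate lines until the client disconnects
            message = raw.decode("utf-8", errors="replace").rstrip("\n")
            if message:
                # A real collector would parse the priority/timestamp here and
                # forward the record to Elasticsearch or InfluxDB.
                print(f"{self.client_address[0]}: {message}")

if __name__ == "__main__":
    # Port 5140 is an unprivileged stand-in for the syslog-over-TCP port.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5140), SyslogTCPHandler) as server:
        server.serve_forever()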

Shared insights on InfluxDB and JSON:

Hi all, I am trying to decide on a database for time-series data. The data could be tracking some simple series like statistics over time, or could be nested JSON (multi-level nested). I have been experimenting with InfluxDB for the former case of a simple list of variables over time. The continuous queries are powerful too. But the complexity arises in the latter case, where InfluxDB requires flattening out a nested JSON before saving it into the database. The nested JSON could be objects, or a list of objects, or objects under objects, for which a complete flattening doesn't leave the data in a state suitable for the queries I have in mind.

[
  {
    "timestamp": "2021-09-06T12:51:00Z",
    "name": "Name1",
    "books": [
      { "title": "Book1", "page": 100 },
      { "title": "Book2", "page": 280 }
    ]
  },
  {
    "timestamp": "2021-09-06T12:52:00Z",
    "name": "Name2",
    "books": [
      { "title": "Book1", "page": 320 },
      { "title": "Book2", "page": 530 },
      { "title": "Book3", "page": 150 }
    ]
  }
]

Sample query: within a time range, for name xyz, find all the book titles for which the page count is < 400.

If I flatten it completely, it will result in fields like books_0_title, books_0_page, books_1_title, books_1_page, ... And by losing the nested context it will be hard to return one field (title) where some condition on another field (page) is satisfied.

Appreciate any suggestions. Even a piece of generic advice on handling the time-series and choosing the database is welcome!
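
One common way around full flattening is to explode the list instead: write one point per (name, book) pair, with name and title as tags and page as a field, so the title-to-page association survives. A minimal sketch that emits InfluxDB line protocol; the measurement name books is an assumption:

import json
from datetime import datetime

def to_line_protocol(records):
    """Emit one line-protocol point per (person, book) pair."""
    lines = []
    for rec in records:
        # Line protocol wants a nanosecond epoch timestamp.
        dt = datetime.fromisoformat(rec["timestamp"].replace("Z", "+00:00"))
        ts = int(dt.timestamp() * 1_000_000_000)
        for book in rec["books"]:
            # name and title become indexed tags; page stays a field.
            # Assumes tag values contain no spaces or commas (escape them otherwise).
            lines.append(
                f'books,name={rec["name"]},title={book["title"]} page={book["page"]}i {ts}'
            )
    return lines

records = json.loads("""
[
  {"timestamp": "2021-09-06T12:51:00Z", "name": "Name1",
   "books": [{"title": "Book1", "page": 100}, {"title": "Book2", "page": 280}]}
]
""")
for line in to_line_protocol(records):
    print(line)
# books,name=Name1,title=Book1 page=100i 1630932660000000000
# books,name=Name1,title=Book2 page=280i 1630932660000000000

With that layout, the sample query becomes an ordinary filter over a time range, roughly SELECT title, page FROM books WHERE name = 'xyz' AND page < 400 in InfluxQL, instead of a positional hunt through books_0_page-style fields.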

Druid

Fast column-oriented distributed data store
PROS OF DRUID
  • Real-time aggregations (15)
  • Batch and real-time ingestion (6)
  • OLAP (5)
  • OLAP + OLTP (3)
  • Combining stream and historical analytics (2)
  • OLTP (1)
CONS OF DRUID
  • Limited SQL support (3)
  • Joins are not supported well (2)
  • Complexity (1)

related Druid posts

Shared insights on Druid and MongoDB:

My background is in data analytics in the telecom domain. I have to build a database for analyzing large volumes of CDR data; so far the data is maintained on a file server and the application queries data from the files. It's consuming a lot of resources and queries are taking a long time, so now I have been asked to come up with an approach. I plan to rewrite the app, so which database should be used? I am torn between MongoDB and Druid.

So please advise me on picking between these two, and why.


My process is like this: I get data once a month, either from Google BigQuery or as parquet files from Azure Blob Storage. I have a script that does some cleaning and then stores the result as partitioned parquet files, because the following process cannot load all the data into memory.

The next process does a heavy computation in a parallel fashion (per partition) and stores three intermediate versions as parquet files: two are used for statistics, and the third is filtered to create the final files.

I make a report based on the two statistics files in a Jupyter notebook and convert it to HTML.

  • Everything is done with vanilla Python and Pandas.
  • Sometimes I may get data in a different format.
  • The cloud service is Microsoft Azure.

What I'm considering is the following:

Get the data with Kafka or with native Python, do the first processing, and store the data in Druid; the second processing step will be done with Apache Spark, reading the data from Druid.

The intermediate states can be stored in Druid too, and visualization would be with Apache Superset.
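
For the heavy per-partition step, a process pool keeps the memory ceiling at roughly one partition per worker while using all cores. A minimal sketch with pandas and concurrent.futures; the paths and the toy transform are stand-ins for the real ones:

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

import pandas as pd

def process_partition(path: Path) -> Path:
    """Load one partition, apply the heavy computation, write the result."""
    df = pd.read_parquet(path)
    df["value_scaled"] = df["value"] * 2  # hypothetical stand-in for the real computation
    out = path.with_name(f"processed_{path.name}")
    df.to_parquet(out, index=False)
    return out  # only the path crosses the process boundary, not the data

if __name__ == "__main__":
    partitions = sorted(Path("data/partitioned").glob("*.parquet"))  # hypothetical layout
    # One worker per CPU core by default; each worker holds one partition in memory.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(process_partition, partitions):
            print("wrote", result)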

MongoDB

The database for giant ideas
PROS OF MONGODB
  • Document-oriented storage (827)
  • No SQL (593)
  • Ease of use (553)
  • Fast (464)
  • High performance (410)
  • Free (257)
  • Open source (218)
  • Flexible (180)
  • Replication & high availability (145)
  • Easy to maintain (112)
  • Querying (42)
  • Easy scalability (39)
  • Auto-sharding (38)
  • High availability (37)
  • Map/reduce (31)
  • Document database (27)
  • Easy setup (25)
  • Full index support (25)
  • Reliable (16)
  • Fast in-place updates (15)
  • Agile programming, flexible, fast (14)
  • No database migrations (12)
  • Easy integration with Node.js (8)
  • Enterprise (8)
  • Enterprise support (6)
  • Great NoSQL DB (5)
  • Support for many languages through different drivers (4)
  • Drivers support is good (3)
  • Aggregation Framework (3)
  • Schemaless (3)
  • Fast (2)
  • Managed service (2)
  • Easy to scale (2)
  • Awesome (2)
  • Consistent (2)
  • Good GUI (1)
  • ACID compliant (1)
CONS OF MONGODB
  • Very slow for connected models that require joins (6)
  • Not ACID compliant (3)
  • Proprietary query language (1)
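
The flexible-schema point above is concrete: documents in one collection need not share a structure. A minimal sketch with the pymongo driver; the database, collection, and documents are hypothetical:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
movies = client["demo"]["movies"]  # hypothetical database and collection

# Documents in the same collection need not share a schema.
movies.insert_one({"title": "Alien", "year": 1979, "rating": 9})
movies.insert_one({"title": "Arrival", "year": 2016, "tags": ["sci-fi", "drama"]})

# Indexes and queries still work over whatever fields documents do share.
movies.create_index("year")
for doc in movies.find({"year": {"$gte": 2000}}, {"_id": 0, "title": 1}):
    print(doc)  # {'title': 'Arrival'}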

related MongoDB posts

Shared insights on Node.js, GraphQL, and MongoDB:

I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and to rate the movies you already saw. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:

  1. Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have

  2. GraphQL, because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option, as it feels more natural to write APIs, it improves development velocity, and by design it fixes the over-fetching and under-fetching problems that are so common in REST APIs. On top of that, the community is getting bigger and bigger.

  3. MongoDB was my choice for the database, as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, such as the one I needed for my website.

Vaibhav Taunk
Team Lead at Technovert · 31 upvotes · 3.6M views

I am starting to become a full-stack developer by choosing and learning .NET Core for API development, Angular CLI / React for UI development, MongoDB as the database (since it is a NoSQL DB), and Flutter / React Native for mobile app development. I am using Postman, Markdown and Visual Studio Code for development.

Vertica

Storage platform designed to handle large volumes of data
PROS OF VERTICA
  • Shared-nothing or shared-everything architecture (3)
  • Reduced costs, as less hardware is required (1)
  • Offers users the freedom to choose deployment mode (1)
  • Flexible architecture suits nearly any project (1)
  • End-to-end ML workflow support (1)
  • All you need for IoT, clickstream or geospatial (1)
  • Freedom from underlying storage (1)
  • Pre-aggregation for cubes (LAPS) (1)
  • Automatic data marts (flattened tables) (1)
  • Near-real-time analytics in a pure column store (1)
  • Fully automated Database Designer tool (1)
  • Query-optimized storage (1)
  • Vertica is the only product which offers partition pruning (1)
  • Partition pruning and predicate push-down on Parquet (1)
CONS OF VERTICA
  • None listed yet

related Vertica posts

(No posts yet.)

Snowflake

The data warehouse built for the cloud
PROS OF SNOWFLAKE
  • Public and private data sharing (7)
  • Multicloud (4)
  • Good performance (4)
  • User friendly (4)
  • Great documentation (3)
  • Serverless (2)
  • Economical (1)
  • Usage-based billing (1)
  • Innovative (1)
CONS OF SNOWFLAKE
  • None listed yet

related Snowflake posts

I'm wondering if any Cloud Firestore users might be open to sharing some input on challenges encountered when trying to create a low-cost, low-latency data pipeline to their analytics warehouse (e.g. Google BigQuery, Snowflake, etc...).

I'm working with a platform by the name of Estuary.dev, an ETL/ELT platform, and we are conducting some research on the pain points here to see if there are drawbacks to the Firestore->BQ extension and/or if users are seeking easy ways of getting NoSQL data into fine-grained tabular form.

Please feel free to drop some knowledge/wish-list stuff on me for a better pipeline here!

Shared insights on Google BigQuery and Snowflake:

I use Google BigQuery because it makes it super easy to query and store data for analytics workloads. If you're using GCP, you're likely using BigQuery. However, running data-viz tools directly connected to BigQuery will run pretty slow. They recently announced BI Engine, which will hopefully compete well against big players like Snowflake when it comes to concurrency.

What's nice too is that it has SQL-based ML tools, and it has great GIS support!
