

dbt helps data teams work like software engineers—to ship trusted data, faster.

What is dbt?

dbt is a transformation workflow that lets teams deploy analytics code following software engineering best practices like modularity, portability, CI/CD, and documentation. Now anyone who knows SQL can build production-grade data pipelines.
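As a sketch of what that workflow looks like, a dbt model is just a SQL SELECT statement saved as a file in the project; dbt compiles it (resolving the Jinja templating) and materializes it in the warehouse as a view or table. The model, source, and column names below are hypothetical:

```sql
-- models/staging/stg_orders.sql (hypothetical model)
-- dbt materializes this SELECT as a view in the warehouse.
{{ config(materialized='view') }}

select
    id as order_id,
    customer_id,
    amount,
    created_at
from {{ source('shop', 'raw_orders') }}  -- assumes a source declared in a sources .yml file
```

Running `dbt run` compiles the Jinja and executes the resulting SQL against the configured warehouse.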
dbt is a tool in the Database Tools category of a tech stack.

Who uses dbt?

66 companies reportedly use dbt in their tech stacks, including Shopify and Primer.

198 developers on StackShare have stated that they use dbt.

dbt Integrations

PostgreSQL, Apache Spark, Amazon Redshift, Google BigQuery, and Materialize are some of the popular tools that integrate with dbt. Here's a list of all 19 tools that integrate with dbt.

Pros of dbt

  • Easy for SQL programmers to learn
  • Modularity, portability, CI/CD, and documentation
  • Faster integrated testing
  • Reusable macros
  • Job scheduling
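The "reusable macros" point refers to dbt's Jinja macros: SQL snippets defined once and called from any model. A minimal sketch (the macro, column, and model names are illustrative):

```sql
-- macros/cents_to_dollars.sql (hypothetical macro)
{% macro cents_to_dollars(column_name) %}
    ({{ column_name }} / 100.0)
{% endmacro %}

-- usage inside any model:
select
    {{ cents_to_dollars('amount_cents') }} as amount_dollars
from {{ ref('stg_payments') }}
```

dbt expands the macro at compile time, so the warehouse only ever sees plain SQL.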
Decisions about dbt

Here are some stack decisions, common use cases and reviews by companies and developers who chose dbt in their tech stack.

Shared insights
dbt · Google BigQuery

I used dbt over manually setting up python wrappers around SQL scripts because it makes managing transformations within Google BigQuery much easier. This saves future Sung dozens of hours maintaining plumbing code to run a couple SQL queries. Check out my tutorial in the link!

I haven't seen any other tool make it as easy to run dependent SQL DAGs directly in a data warehouse.
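For context, the dependency DAG the reviewer mentions comes from dbt's `ref()` function: referencing one model from another tells dbt the build order, with no hand-written orchestration. A hypothetical two-model chain:

```sql
-- models/stg_orders.sql
select id, customer_id, amount
from raw.orders

-- models/customer_totals.sql
-- ref() makes this model depend on stg_orders, so dbt builds it second.
select
    customer_id,
    sum(amount) as total_amount
from {{ ref('stg_orders') }}
group by customer_id
```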


dbt's Features

  • Code compiler
  • Package management
  • Seed file loader
  • Data snapshots
  • Understand raw data sources
  • Tests
  • Documentation
  • CI/CD
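The "data snapshots" feature records how rows in a mutable source table change over time (slowly changing dimensions). A sketch of a snapshot block following dbt's documented syntax, with hypothetical names:

```sql
-- snapshots/orders_snapshot.sql (hypothetical snapshot)
{% snapshot orders_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',
      strategy='timestamp',
      updated_at='updated_at'
    )
}}

select * from {{ source('shop', 'raw_orders') }}

{% endsnapshot %}
```

Each `dbt snapshot` run compares current rows against the snapshot table and appends a new versioned row whenever `updated_at` has changed.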

dbt Alternatives & Comparisons

What are some alternatives to dbt?
act
Rather than having to commit/push every time you want to test out the changes you are making to your .github/workflows/ files (or any changes to embedded GitHub Actions), you can use this tool to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
Airflow
Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
Looker
We've built a unique data modeling language, connections to today's fastest analytical databases, and a service that you can deploy on any infrastructure and explore on any device. Plus, we'll help you every step of the way.
Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
Slick
It is a modern database query and access library for Scala. It allows you to work with stored data almost as if you were using Scala collections, while at the same time giving you full control over when a database access happens and which data is transferred.

dbt's Followers
276 developers follow dbt to keep up with related blogs and decisions.