Alternatives to Presto

Apache Spark, Stan, Apache Impala, Snowflake, and Apache Drill are the most popular alternatives and competitors to Presto.

What is Presto and what are its top alternatives?

Distributed SQL Query Engine for Big Data
Presto is a tool in the Big Data Tools category of a tech stack.
Presto is an open source tool; its source repository is hosted on GitHub.
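
Presto is queried with ordinary SQL from any client that speaks its protocol. As a rough sketch, here is how a query might be run from Python, assuming the PyHive client and a coordinator on localhost:8080 (the catalog, schema, and table names are placeholders):

    from pyhive import presto  # assumes the PyHive package is installed

    # Connect to a hypothetical Presto coordinator; host, port, catalog, and
    # schema below are placeholders for a default-style setup.
    conn = presto.connect(host="localhost", port=8080, catalog="hive", schema="default")
    cur = conn.cursor()

    # Run an ordinary SQL aggregate; the orders table is hypothetical.
    cur.execute("SELECT order_status, count(*) AS n FROM orders GROUP BY order_status")
    for row in cur.fetchall():
        print(row)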

Top Alternatives to Presto

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Stan

    A state-of-the-art platform for statistical modeling and high-performance statistical computation. Used for statistical modeling, data analysis, and prediction in the social, biological, and physical sciences, engineering, and business. ...

  • Apache Impala

    Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. ...

  • Snowflake

    Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn. ...

  • Apache Drill

    Apache Drill is a distributed MPP query layer that supports SQL and alternative query languages against NoSQL and Hadoop data storage systems. It was inspired in part by Google's Dremel. ...

  • Druid

    Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations. ...

  • Splunk

    It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

  • Amazon Athena

    Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. ...

Presto alternatives & related posts

Apache Spark

Fast and general engine for large-scale data processing

PROS OF APACHE SPARK
  • Open-source (60)
  • Fast and flexible (48)
  • Great for distributed SQL-like applications (8)
  • One platform for every big data problem (8)
  • Easy to install and use (6)
  • Works well for most data science use cases (3)
  • In-memory computation (2)
  • Interactive query (2)
  • Machine learning libraries, streaming in real time (2)
CONS OF APACHE SPARK
  • Speed (3)
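
The pros above center on Spark handling batch, SQL, streaming, and machine learning on one engine. A minimal PySpark sketch of the batch and SQL side, assuming a local Spark installation (the Parquet file and column names are hypothetical):

    from pyspark.sql import SparkSession, functions as F

    # Start (or reuse) a local Spark session.
    spark = SparkSession.builder.appName("presto-alternatives-demo").getOrCreate()

    # Batch-style processing with the DataFrame API (file and columns are hypothetical).
    events = spark.read.parquet("events.parquet")
    daily = events.groupBy("event_date").agg(F.count("*").alias("events"))
    daily.show()

    # The same data can also be queried with plain SQL.
    events.createOrReplaceTempView("events")
    spark.sql("SELECT event_date, count(*) AS events FROM events GROUP BY event_date").show()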

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.7M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.3M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink, leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Stan

A Probabilistic Programming Language
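
Stan models are written in Stan's own modeling language and driven from a host language. A minimal sketch from Python, assuming the cmdstanpy package and a working CmdStan install (the model and data below are illustrative only):

    from cmdstanpy import CmdStanModel  # assumes cmdstanpy and CmdStan are installed

    # A tiny Bernoulli model written in Stan's modeling language.
    stan_program = """
    data { int<lower=0> N; array[N] int<lower=0, upper=1> y; }
    parameters { real<lower=0, upper=1> theta; }
    model { theta ~ beta(1, 1); y ~ bernoulli(theta); }
    """
    with open("bernoulli.stan", "w") as f:
        f.write(stan_program)

    model = CmdStanModel(stan_file="bernoulli.stan")      # compiles the model
    fit = model.sample(data={"N": 4, "y": [0, 1, 1, 1]})  # runs MCMC
    print(fit.summary())                                  # posterior summary table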

related Stan posts

Apache Impala

Real-time Query for Hadoop

PROS OF APACHE IMPALA
  • Super fast (11)
  • Load balancing (1)
  • Replication (1)
  • Scalability (1)
  • Distributed (1)
  • High performance (1)
  • Massively parallel processing (1)
  • Open source (1)
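
Impala exposes a standard SQL interface over data in HDFS or HBase. A minimal sketch of querying it from Python, assuming the impyla client and an impalad reachable on the usual port 21050 (host and table names are placeholders):

    from impala.dbapi import connect  # assumes the impyla package is installed

    # Connect to a hypothetical impalad; host, port, and database are placeholders.
    conn = connect(host="impala-host.example.com", port=21050, database="default")
    cur = conn.cursor()

    # Ordinary SQL (joins, aggregates) runs directly against HDFS/HBase-backed tables.
    cur.execute("SELECT year, count(*) AS flights FROM flight_data GROUP BY year")
    for row in cur.fetchall():
        print(row)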

related Apache Impala posts

I have been working on a Java application to demonstrate the latency of select/insert/update operations on Kudu storage using the Apache Kudu API (Java client). I have a few queries about using the Apache Kudu API:

1. Do we have a JDBC wrapper to use the Apache Kudu API for getting connections to Kudu masters, with a connection pool mechanism and all DB operations?

2. Does the Apache Kudu API support order by, group by, and aggregate functions? If yes, how do we implement these functions using the Kudu APIs?

3. How can we add Kudu predicates to a Kudu update operation?

4. Does the Apache Kudu API support batch insertion (executing the Kudu insert for multiple rows in one go instead of row by row), e.g. Kudusession.apply(List)?

5. Does the Apache Kudu API support joins on tables?

6. Which tool is preferred over the other (Apache Impala or the Kudu API) for read and update/insert DB operations?

Snowflake

The data warehouse built for the cloud

PROS OF SNOWFLAKE
  • Public and private data sharing (6)
  • Good performance (3)
  • Multicloud (3)
  • Great documentation (2)
  • User friendly (2)
  • Serverless (2)
  • Innovative (1)
  • Economical (1)
  • Usage-based billing (1)
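
Snowflake is used entirely through SQL, with no servers to manage on your side. A minimal sketch, assuming the snowflake-connector-python package (the account, credentials, and warehouse below are placeholders):

    import snowflake.connector  # assumes the snowflake-connector-python package

    # Connect with placeholder credentials; compute and storage live in Snowflake's service.
    conn = snowflake.connector.connect(
        user="ANALYST",                # hypothetical user
        password="***",
        account="myorg-myaccount",     # hypothetical account identifier
        warehouse="ANALYTICS_WH",      # hypothetical virtual warehouse
    )
    cur = conn.cursor()
    cur.execute("SELECT current_version()")
    print(cur.fetchone())
    cur.close()
    conn.close()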

related Snowflake posts

I'm wondering if any Cloud Firestore users might be open to sharing some input and challenges encountered when trying to create a low-cost, low-latency data pipeline to their analytics warehouse (e.g. Google BigQuery, Snowflake, etc.).

I'm working with a platform by the name of Estuary.dev, an ETL/ELT platform, and we are conducting some research on the pain points here to see if there are drawbacks of the Firestore->BQ extension and/or if users are seeking easy ways of getting NoSQL data into fine-grained tabular form.

Please feel free to drop some knowledge/wish-list stuff on me for a better pipeline here!

Shared insights on Google BigQuery and Snowflake

I use Google BigQuery because it makes it super easy to query and store data for analytics workloads. If you're using GCP, you're likely using BigQuery. However, running data viz tools directly connected to BigQuery will run pretty slow. They recently announced BI Engine, which will hopefully compete well against big players like Snowflake when it comes to concurrency.

What's nice too is that it has SQL-based ML tools, and it has great GIS support!

Apache Drill

Schema-Free SQL Query Engine for Hadoop and NoSQL

PROS OF APACHE DRILL
  • NoSQL and Hadoop (4)
  • Free (3)
  • Lightning speed and simplicity in the face of a data jungle (3)
  • Well documented for fast install (2)
  • SQL interface to multiple data sources (1)
  • Nested data support (1)
  • Reads structured and unstructured data (1)
  • V1.10 released - https://drill.apache.org/ (1)
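
Drill runs SQL directly against files and NoSQL stores without requiring a schema up front, and it can be reached over its REST API. A minimal sketch, assuming a local drillbit with the web port on 8047 (the dfs file path is a placeholder):

    import requests  # assumes the requests package is installed

    # Drill's REST API accepts plain SQL; the payload shape follows the documented
    # /query.json interface, and the file path below is a placeholder.
    payload = {
        "queryType": "SQL",
        "query": "SELECT name, count(*) AS n FROM dfs.`/data/events.json` GROUP BY name",
    }
    resp = requests.post("http://localhost:8047/query.json", json=payload)
    resp.raise_for_status()
    for row in resp.json().get("rows", []):
        print(row)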

related Apache Drill posts

Druid

Fast column-oriented distributed data store

PROS OF DRUID
  • Real-time aggregations (15)
  • Batch and real-time ingestion (6)
  • OLAP (4)
  • OLAP + OLTP (3)
  • Combining stream and historical analytics (2)
  • OLTP (1)
CONS OF DRUID
  • Limited SQL support (3)
  • Joins are not supported well (2)
  • Complexity (1)
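
Druid answers aggregate queries over its SQL endpoint as well as native JSON queries. A minimal sketch against the SQL API, assuming a quickstart-style router on localhost:8888 (the wikipedia datasource and column names are placeholders):

    import requests  # assumes the requests package is installed

    # Druid's SQL API takes a JSON body with the query text; port 8888 is the router
    # in a default quickstart, and the datasource here is hypothetical.
    sql = "SELECT channel, COUNT(*) AS edits FROM wikipedia GROUP BY channel ORDER BY edits DESC LIMIT 10"
    resp = requests.post("http://localhost:8888/druid/v2/sql", json={"query": sql})
    resp.raise_for_status()
    for row in resp.json():
        print(row)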

related Druid posts

My process is like this: I would get data once a month, either from Google BigQuery or as parquet files from Azure Blob Storage. I have a script that does some cleaning and then stores the result as partitioned parquet files, because the following process cannot handle loading all the data into memory.

The next process performs a heavy computation in a parallel fashion (per partition), and stores 3 intermediate versions as parquet files: two used for statistics, and the third is filtered to create the final files.

I make a report based on the two files in a Jupyter notebook and convert it to HTML.

  • Everything is done with vanilla Python and Pandas.
  • Sometimes I may get a different format of data.
  • The cloud service is Microsoft Azure.

What I'm considering is the following:

Get the data with Kafka or with native Python, do the first processing, and store the data in Druid; the second processing step will be done with Apache Spark, reading data from Apache Druid.

The intermediate states can be stored in Druid too, and visualization would be done with Apache Superset.

Umair Iftikhar
Technical Architect at ERP Studio · 3 upvotes · 336.6K views

Developing a solution that collects telemetry data from different devices: nearly 1,000 devices minimum and 12,000 maximum, each sending 2 packets per second. This is time-series data, and the data definitions and different reports (building information, maintenance records, etc.) are saved in PostgreSQL. I want to know the best solution. This data is required for math and ML to run different algorithms. Also, the data is raw, without the definitions and information stored in PostgreSQL. Initially, I went with TimescaleDB due to its PostgreSQL support, but as the number of sites increased, I started facing many issues with TimescaleDB in terms of flexibility of storing data.

My major requirement is also replication of the database for reporting and other purposes. You may also suggest options other than Druid and Cassandra, but an open source solution is appreciated.

Splunk

Search, monitor, analyze and visualize machine data

PROS OF SPLUNK
  • Ability to style search results into reports (2)
  • Alert system based on custom query results (2)
  • API for searching logs, running reports (2)
  • Query engine supports joining, aggregation, stats, etc. (2)
  • Query any log as key-value pairs (1)
  • Splunk language supports string, date manipulation, math, etc. (1)
  • Granular scheduling and time window support (1)
  • Custom log parsing as well as automatic parsing (1)
  • Dashboarding on any log contents (1)
  • Rich GUI for searching live logs (1)
CONS OF SPLUNK
  • Splunk query language is rich, so there is a lot to learn (1)
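
One of the pros above is Splunk's API for searching logs. A minimal sketch that submits a one-shot search over the REST API, assuming a local instance with the management port on 8089 (credentials and the index name are placeholders):

    import requests  # assumes the requests package is installed

    # Submit a one-shot search to Splunk's search/jobs REST endpoint; host,
    # credentials, and the index below are placeholders for a local install.
    resp = requests.post(
        "https://localhost:8089/services/search/jobs",
        auth=("admin", "changeme"),
        data={
            "search": "search index=main error | stats count by host",
            "exec_mode": "oneshot",   # run synchronously and return results directly
            "output_mode": "json",
        },
        verify=False,                 # default installs ship a self-signed certificate
    )
    resp.raise_for_status()
    for result in resp.json().get("results", []):
        print(result)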

related Splunk posts

Shared insights on Kibana, Splunk, and Grafana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk, however it is light years above grepping through log files. We previously used Grafana but found it to be annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Shared insights on Splunk and Elasticsearch

We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.

Amazon Athena

Query S3 Using SQL

PROS OF AMAZON ATHENA
  • Use SQL to analyze CSV files (15)
  • Glue crawlers give an easy data catalogue (8)
  • Cheap (7)
  • Query all my data without running servers 24x7 (5)
  • No database servers, yay (4)
  • Easy integration with QuickSight (3)
  • Query and analyse CSV, Parquet, and JSON files in SQL (2)
  • Glue and Athena use the same data catalog (2)
  • No configuration required (1)
  • Ad hoc checks on data made easy (0)
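
Because Athena is serverless, running a query is just an API call plus an S3 location for the results. A minimal sketch with boto3, assuming configured AWS credentials (the database, table, and bucket names are hypothetical):

    import time

    import boto3  # assumes boto3 and AWS credentials are configured

    athena = boto3.client("athena", region_name="us-east-1")

    # Start a query; all names below are placeholders.
    start = athena.start_query_execution(
        QueryString="SELECT status, count(*) AS n FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
    )
    query_id = start["QueryExecutionId"]

    # Poll until the query finishes, then print the result rows.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        results = athena.get_query_results(QueryExecutionId=query_id)
        for row in results["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])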

related Amazon Athena posts

I use Amazon Athena because, similar to Google BigQuery, you can store and query data easily. Especially since you can define the data schema in the Glue data catalog, there's a central way to define data models.

However, I would not recommend it for batch jobs. I typically use this to check intermediary datasets in data engineering workloads. It's good for getting a look and feel of the data along its ETL journey.

Hi all,

Currently, we need to ingest data from Amazon S3 into a DB, either Amazon Athena or Amazon Redshift. But the problem with the data is that it is in .PSV (pipe-separated values) format and the size is above 200 GB. The query performance (and timeouts) in Athena/Redshift is not up to the mark: too slow compared to Google BigQuery. How would I optimize the performance and query result time? Can anyone please help me out?