
Alternatives to Apache Kylin

Apache Spark, Presto, Druid, Apache Impala, and AtScale are the most popular alternatives and competitors to Apache Kylin.

What is Apache Kylin and what are its top alternatives?

Apache Kylin™ is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark, supporting extremely large datasets. It was originally contributed by eBay Inc.
Apache Kylin is a tool in the Big Data Tools category of a tech stack.
Apache Kylin is an open source tool with 3.6K GitHub stars and 1.5K GitHub forks. Here's a link to Apache Kylin's open source repository on GitHub.
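Kylin exposes that SQL interface over JDBC, ODBC, and a REST API. As a minimal sketch of the REST route, assuming a default installation on port 7070 and the `learn_kylin` sample project that ships with Kylin (the host and credentials below are placeholders):

```python
import requests

# Placeholder host; ADMIN/KYLIN are Kylin's shipped default credentials.
KYLIN_QUERY_URL = "http://kylin-host.example.com:7070/kylin/api/query"

payload = {
    "sql": "SELECT part_dt, SUM(price) AS revenue "
           "FROM kylin_sales GROUP BY part_dt",
    "project": "learn_kylin",  # sample project bundled with Kylin
    "limit": 50,
    "offset": 0,
}

resp = requests.post(KYLIN_QUERY_URL, json=payload, auth=("ADMIN", "KYLIN"))
resp.raise_for_status()
for row in resp.json()["results"]:
    print(row)
```

Because Kylin answers such queries from pre-built cubes rather than scanning raw data, this is the core trade-off to keep in mind when comparing it with the query-on-raw-data engines below.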

Top Alternatives to Apache Kylin

  • Apache Spark
    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Presto
    Distributed SQL Query Engine for Big Data

  • Druid
    Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations. ...

  • Apache Impala
    Impala is a modern, open source, MPP SQL query engine for Apache Hadoop. Impala is shipped by Cloudera, MapR, and Amazon. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. ...

  • AtScale
    AtScale's Virtual Data Warehouse delivers performance, security and agility to exceed the demands of modern-day operational analytics. ...

  • Clickhouse
    Clickhouse allows analysis of data that is updated in real time. It offers instant results in most cases: the data is processed faster than it takes to create a query. ...

  • Kyvos
    Kyvos is a BI acceleration platform that helps users analyze big data on the cloud with exceptionally high performance using any BI tool they like. You can accelerate your cloud analytics while optimizing your costs with Kyvos. ...

  • Splunk
    Splunk provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

Apache Kylin alternatives & related posts

Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
  • Open-source (61)
  • Fast and flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (8)
  • Easy to install and use (6)
  • Works well for most data science use cases (3)
  • Interactive query (2)
  • Machine learning libraries, streaming in real time (2)
  • In-memory computation (2)
CONS OF APACHE SPARK
  • Speed (4)

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 6.1M views

The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

#DataScience #DataStack #Data
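As a rough sketch of the storage/compute decoupling the post describes (the bucket, paths, and column names are hypothetical), a Spark job on one of those YARN clusters can read a snapshot straight from S3, aggregate it, and write the result back without the cluster ever owning the data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Bucket, paths, and column names here are hypothetical placeholders.
spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Because storage (S3) is decoupled from compute (YARN), the cluster can
# be scaled up, down, or torn down without touching the data itself.
orders = spark.read.parquet("s3a://example-warehouse/snapshots/orders/")

daily = (
    orders
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3a://example-warehouse/marts/daily_revenue/")
```

Because the output lands back in S3, any of the elastic YARN clusters (or Presto, for ad hoc queries) can pick it up next.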

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 2.9M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse it to any sink, leveraging Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)
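Marmaray's real plug-in interfaces live in the repo above; purely to illustrate the any-source-to-any-sink idea (the function names and paths here are invented for the sketch, not Marmaray's API), a generic Spark job can be parameterized by a source reader and a sink writer:

```python
from pyspark.sql import SparkSession, DataFrame

# Illustrative only: these helpers are stand-ins, not Marmaray classes.
spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

def read_source(kind: str, path: str) -> DataFrame:
    """Pluggable 'source': any format Spark can read can be swapped in."""
    return spark.read.format(kind).load(path)

def write_sink(df: DataFrame, kind: str, path: str) -> None:
    """Pluggable 'sink': the dispersal target is just another Spark writer."""
    df.write.format(kind).mode("append").save(path)

# e.g. ingest JSON events and disperse them as Parquet
events = read_source("json", "hdfs:///raw/events/")
write_sink(events, "parquet", "hdfs:///warehouse/events/")
```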

Presto

Distributed SQL Query Engine for Big Data
PROS OF PRESTO
  • Works directly on files in S3 (no ETL) (18)
  • Open-source (13)
  • Join multiple databases (12)
  • Scalable (10)
  • Gets ready in minutes (7)
  • MPP (6)
CONS OF PRESTO
  Be the first to leave a con

related Presto posts

Ashish Singh
Tech Lead, Big Data Platform at Pinterest · 38 upvotes · 2.9M views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters are comprised of a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, more than 1,000 monthly active users (out of 1,600+ Pinterest employees) use Presto, running about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency for bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on Kubernetes is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

#BigData #AWS #DataScience #DataEngineering
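For the interactive-querying use case described here, a query can be issued from Python with the presto-python-client package; a minimal sketch, with host, catalog, and table names as placeholders:

```python
import prestodb  # pip install presto-python-client

# Connection details are placeholders for an example Presto coordinator.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()
cur.execute("SELECT ds, COUNT(*) FROM events GROUP BY ds ORDER BY ds DESC LIMIT 7")
for row in cur.fetchall():
    print(row)
```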

Druid

Fast column-oriented distributed data store
PROS OF DRUID
  • Real-time aggregations (15)
  • Batch and real-time ingestion (6)
  • OLAP (5)
  • OLAP + OLTP (3)
  • Combining stream and historical analytics (2)
  • OLTP (1)
CONS OF DRUID
  • Limited SQL support (3)
  • Joins are not supported well (2)
  • Complexity (1)

related Druid posts

Shared insights on Druid and MongoDB

My background is in data analytics in the telecom domain. I have to build a database for analyzing large volumes of CDR data. So far the data has been maintained on a file server and the application queries data from the files; this consumes a lot of resources and queries take a long time, so I have now been asked to come up with a new approach. I plan to rewrite the app, so which database should be used? I am torn between MongoDB and Druid.

Please advise me on which of the two to pick, and why.
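If Druid ends up being the pick, aggregate queries over CDR-style data can be sent to its SQL endpoint over plain HTTP. A minimal sketch, assuming a Druid router on its default port and a hypothetical `cdr` datasource with `caller_region` and `duration_sec` columns:

```python
import requests

# Hypothetical Druid router; the datasource and columns are placeholders.
DRUID_SQL = "http://druid-router.example.com:8888/druid/v2/sql"

query = """
SELECT caller_region, COUNT(*) AS calls, SUM(duration_sec) AS total_secs
FROM cdr
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY caller_region
ORDER BY calls DESC
"""

resp = requests.post(DRUID_SQL, json={"query": query})
resp.raise_for_status()
print(resp.json())
```

MongoDB suits document-style lookups better; Druid's column-oriented storage and real-time aggregations are what make scans like this fast on large CDR volumes.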


My process is like this: I get data once a month, either from Google BigQuery or as parquet files from Azure Blob Storage. I have a script that does some cleaning and then stores the result as partitioned parquet files, because the following process cannot handle loading all the data into memory.

The next step performs a heavy computation in a parallel fashion (per partition) and stores three intermediate versions as parquet files: two are used for statistics, and the third is filtered to create the final files.

I make a report based on the two statistics files in a Jupyter notebook and convert it to HTML.

  • Everything is done with vanilla Python and Pandas.
  • Sometimes I may get data in a different format.
  • The cloud service is Microsoft Azure.

What I'm considering is the following:

Get the data with Kafka or with native Python, do the first processing, and store the data in Druid; the second processing step would be done with Apache Spark, reading data from Apache Druid.

The intermediate states could be stored in Druid too, and visualization would be done with Apache Superset.
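For the current pandas stage, the partitioned-parquet trick described above looks roughly like this (file paths and column names are made up for the sketch):

```python
import pandas as pd

# Paths and columns are illustrative placeholders.
df = pd.read_parquet("monthly_dump.parquet")

# ... cleaning steps ...
df = df.dropna(subset=["account_id"])

# Partitioning by a column lets the downstream job load one
# partition at a time instead of the whole dataset in memory.
df.to_parquet("cleaned/", partition_cols=["region"])

# Downstream: process a single partition independently.
part = pd.read_parquet("cleaned/region=EU/")
```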

Apache Impala

Real-time Query for Hadoop
PROS OF APACHE IMPALA
  • Super fast (11)
  • Massively parallel processing (1)
  • Load balancing (1)
  • Replication (1)
  • Scalability (1)
  • Distributed (1)
  • High performance (1)
  • Open source (1)
CONS OF APACHE IMPALA
  Be the first to leave a con

related Apache Impala posts

I have been working on a Java application to demonstrate the latency of select/insert/update operations on Kudu storage using the Apache Kudu API (Java-based client). I have a few questions about using the Apache Kudu API:

1. Is there a JDBC wrapper around the Apache Kudu API for getting connections to Kudu masters, with a connection pool mechanism and all DB operations?

2. Does the Apache Kudu API support ORDER BY, GROUP BY, and aggregate functions? If yes, how are these implemented using the Kudu APIs?

3. Can we add Kudu predicates to a Kudu update operation? If yes, how?

4. Does the Apache Kudu API support batch insertion (executing the Kudu insert for multiple rows in one go instead of row by row), e.g. something like KuduSession.apply(List)?

5. Does the Apache Kudu API support joins on tables?

6. Which is preferred (Apache Impala or the Kudu API) for read and update/insert DB operations?
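Not an answer to all six, but on the batch-insertion question (4): Kudu clients batch by applying many operations to a session and flushing once. A sketch with the Python client, which mirrors the Java KuduSession pattern (the master address, table name, and columns are hypothetical):

```python
import kudu  # pip install kudu-python

# Master address, table name, and columns are hypothetical placeholders.
client = kudu.connect(host="kudu-master.example.com", port=7051)
table = client.table("latency_test")
session = client.new_session()

# Apply many inserts to the session, then flush once: the client
# sends the buffered operations in batches instead of row by row.
for i in range(1000):
    op = table.new_insert({"id": i, "payload": "row-%d" % i})
    session.apply(op)
session.flush()
```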

AtScale

The virtual data warehouse for the modern enterprise
PROS OF ATSCALE
  Be the first to leave a pro
CONS OF ATSCALE
  Be the first to leave a con

related AtScale posts

Clickhouse

A column-oriented database management system
PROS OF CLICKHOUSE
  • Fast, very very fast (19)
  • Good compression ratio (11)
  • Horizontally scalable (6)
  • Great CLI (5)
  • Utilizes all CPU resources (5)
  • RESTful (5)
  • Buggy (4)
  • Open-source (4)
  • Great number of SQL functions (4)
  • Server crashes, it's normal :( (3)
  • Has no transactions (3)
  • Flexible connection options (2)
  • Highly available (2)
  • ODBC (2)
  • Flexible compression options (2)
  • In IDEA, data import via the HTTP interface not working (1)
CONS OF CLICKHOUSE
  • Slow insert operations (5)

related Clickhouse posts

Kyvos

Cloud BI Acceleration Platform
PROS OF KYVOS
  Be the first to leave a pro
CONS OF KYVOS
  Be the first to leave a con

related Kyvos posts

Which of the two, Kyvos and Azure Analysis Services, should be used to build a semantic layer?

I have to build a semantic layer for our data warehouse platform and use Power BI for visualisation; the data lies in an Azure Managed Instance. I need to analyse the two platforms and find which suits this best.

Splunk

Search, monitor, analyze and visualize machine data
PROS OF SPLUNK
  • API for searching logs, running reports (3; see the sketch after this list)
  • Alert system based on custom query results (3)
  • Dashboarding on any log contents (2)
  • Custom log parsing as well as automatic parsing (2)
  • Ability to style search results into reports (2)
  • Query engine supports joining, aggregation, stats, etc. (2)
  • Splunk language supports string, date manip, math, etc. (2)
  • Rich GUI for searching live logs (2)
  • Query any log as key-value pairs (1)
  • Granular scheduling and time window support (1)
CONS OF SPLUNK
  • Splunk query language is rich, so there's lots to learn (1)
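As a rough illustration of that search API (the host and credentials below are placeholders for an example Splunk instance), the official splunk-sdk package for Python can run a one-shot SPL search:

```python
import splunklib.client as client    # pip install splunk-sdk
import splunklib.results as results

# Placeholder host/credentials; Splunk's management API listens on 8089.
service = client.connect(
    host="splunk.example.com",
    port=8089,
    username="admin",
    password="changeme",
)

# One-shot search: blocks until the SPL query finishes, then streams results.
stream = service.jobs.oneshot("search index=main error | head 5")
for event in results.ResultsReader(stream):
    print(event)
```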

related Splunk posts

Shared insights on Kibana, Splunk, and Grafana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Shared insights on Splunk and Elasticsearch

We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.
