What is Apache Solr?
It uses familiar tools to make application building a snap. Built on the battle-tested Apache ZooKeeper, it makes it easy to scale up and down.
Apache Solr is a tool in the Search Engines category of a tech stack.
Who uses Apache Solr?
23 companies reportedly use Apache Solr in their tech stacks, including Walmart, doubleSlash, and AlphaSense.
70 developers on StackShare have stated that they use Apache Solr.
Apache Solr's Features
- Advanced full-text search capabilities
- Optimized for high volume traffic
- Standards-based open interfaces: XML, JSON, and HTTP
- Comprehensive administration interfaces
- Easy monitoring
- Highly scalable and fault tolerant
- Flexible and adaptable with easy configuration
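Because Solr exposes standards-based HTTP interfaces with JSON output, a search is just a URL. The sketch below builds a query URL for Solr's standard /select handler and parses a response of the typical shape; the host, port, and core name ("techproducts") are illustrative assumptions, not part of this page.

```python
import json
from urllib.parse import urlencode

def build_select_url(base_url, core, query, rows=10):
    """Build a URL for Solr's /select search handler, requesting JSON output.

    base_url, core: hypothetical values for illustration.
    """
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"{base_url}/solr/{core}/select?{params}"

url = build_select_url("http://localhost:8983", "techproducts", "name:solr")
# url -> "http://localhost:8983/solr/techproducts/select?q=name%3Asolr&rows=10&wt=json"

# A JSON response from /select typically has this shape (abridged sample):
sample = json.loads(
    '{"responseHeader": {"status": 0},'
    ' "response": {"numFound": 1, "docs": [{"id": "1", "name": "Apache Solr"}]}}'
)
docs = sample["response"]["docs"]  # the matching documents
```

In a real deployment you would fetch `url` with any HTTP client and read the `response.docs` array from the returned JSON.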
Apache Solr Alternatives & Comparisons
What are some alternatives to Apache Solr?
- Splunk: provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze, and visualize machine data.
- Lucene Core: the flagship sub-project of Apache Lucene, it provides Java-based indexing and search technology, as well as spellchecking, hit highlighting, and advanced analysis/tokenization capabilities.
- Elasticsearch: a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats, and Logstash together form the Elastic Stack (sometimes called the ELK Stack).
- MongoDB: stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
- Apache Spark: a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or in Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads like streaming, interactive queries, and machine learning.