What is Dgraph and what are its top alternatives?
Top Alternatives to Dgraph
- Neo4j
Neo4j stores data in nodes connected by directed, typed relationships with properties on both, also known as a Property Graph. It is a high-performance graph store with all the features expected of a mature and robust database, like a friendly query language and ACID transactions. ...
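As a concrete illustration of the property-graph model and the Cypher query language, here is a minimal sketch using the official Python driver; the Bolt URI and credentials are placeholder assumptions for a local instance:

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Two nodes joined by a directed, typed relationship (the property-graph model).
    session.run(
        "MERGE (a:Person {name: $a})-[:KNOWS]->(b:Person {name: $b})",
        a="Alice", b="Bob",
    )
    for record in session.run("MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name"):
        print(record["a.name"], "knows", record["b.name"])

driver.close()
```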
- Titan
Titan is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. Titan is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. ...
- ArangoDB
A distributed, free, and open-source database with a flexible data model for documents, graphs, and key-values. Build high-performance applications using a convenient SQL-like query language or JavaScript extensions. ...
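A minimal sketch of the multi-model idea using the python-arango driver (an assumption; any driver works): documents go into a collection, and AQL, the SQL-like language mentioned above, queries them. Host and credentials are local-default placeholders:

```python
from arango import ArangoClient  # python-arango driver, assumed installed

client = ArangoClient(hosts="http://localhost:8529")
db = client.db("_system", username="root", password="")  # local defaults

if not db.has_collection("cities"):
    db.create_collection("cities")
db.collection("cities").insert({"name": "Berlin", "population": 3_600_000})

# AQL with a bind variable: the "convenient SQL-like query language".
cursor = db.aql.execute(
    "FOR c IN cities FILTER c.population > @p RETURN c.name",
    bind_vars={"p": 1_000_000},
)
print(list(cursor))
```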
- Cayley
Cayley is an open-source graph database inspired by the graph database behind Freebase and Google's Knowledge Graph. Its goal is to be a part of the developer's toolbox wherever Linked Data and graph-shaped data (semantic webs, social networks, etc.) are concerned. ...
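A sketch of querying Cayley over its HTTP API with a Gizmo query; the default port and the v1 query endpoint follow Cayley's documentation, but treat both as assumptions for your version:

```python
import requests

# Gizmo is Cayley's JavaScript-flavored query dialect; this emits every node.
gizmo = "g.V().All()"

# Default port and v1 query endpoint per Cayley's docs; both are assumptions.
resp = requests.post("http://localhost:64210/api/v1/query/gizmo", data=gizmo)
print(resp.json())
```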
- GraphQL
GraphQL is a data query language and runtime designed and used at Facebook to request and deliver data to mobile and web apps since 2012. ...
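GraphQL queries are usually shipped as JSON over an HTTP POST. A minimal sketch, with a hypothetical endpoint and schema:

```python
import requests

# The endpoint and the user/friends schema are hypothetical.
query = """
{
  user(id: "1") {
    name
    friends {
      name
    }
  }
}
"""

# Per the common GraphQL-over-HTTP convention, the query travels as JSON in a POST.
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json())
```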
- MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...
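A quick sketch of the dynamic schema in practice with pymongo: two documents of different shapes land in the same collection. The connection string assumes a local mongod:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local mongod
movies = client["catalog"]["movies"]

# Documents in one collection can vary in structure: no migration needed.
movies.insert_one({"title": "Alien", "year": 1979, "tags": ["sci-fi"]})
movies.insert_one({"title": "Heat", "cast": ["Pacino", "De Niro"]})

print(movies.find_one({"year": 1979}))
```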
- JanusGraph
It is a scalable graph database optimized for storing and querying graphs containing hundreds of billions of vertices and edges distributed across a multi-machine cluster. It is a transactional database that can support thousands of concurrent users executing complex graph traversals in real time. ...
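JanusGraph speaks Apache TinkerPop's Gremlin, so a client sketch looks like the following (the same pattern applied to Titan, its predecessor). This assumes the gremlinpython package and a Gremlin Server endpoint on the local default port:

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Gremlin Server endpoint on the local default port (an assumption).
conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Two vertices, one edge, then a traversal from one to the other.
alice = g.addV("person").property("name", "alice").next()
bob = g.addV("person").property("name", "bob").next()
g.V(alice).addE("knows").to(__.V(bob)).iterate()

print(g.V(alice).out("knows").values("name").toList())  # -> ['bob']
conn.close()
```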
- Neptune
It brings organization and collaboration to data science projects. All experiment-related objects are backed up and organized, ready to be analyzed, reproduced, and shared with others. It works with all common technologies and integrates with other tools. ...
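This description matches the Neptune experiment tracker (neptune.ai). A minimal logging sketch with its Python client, assuming version 1.x of the neptune package; the project name and token are placeholders:

```python
import neptune  # neptune.ai client, assumed >= 1.0

# Project and token are placeholders from your Neptune workspace.
run = neptune.init_run(project="my-workspace/my-project", api_token="...")

run["parameters"] = {"lr": 0.001, "batch_size": 32}
for epoch in range(3):
    run["train/loss"].append(1.0 / (epoch + 1))  # a logged series, browsable in the UI

run.stop()  # flushes and backs up the run for later analysis and sharing
```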
Dgraph alternatives & related posts
Neo4j
Pros of Neo4j
- Cypher – graph query language (70)
- Great graph DB (61)
- Open source (33)
- REST API (31)
- High-performance native API (27)
- ACID (24)
- Easy setup (21)
- Great support (17)
- Clustering (11)
- Hot backups (9)
- Great web admin UI (8)
- Mature (7)
- Powerful, flexible data model (7)
- Embeddable (6)
- Easy to use and model (5)
- Best graph DB (4)
- Highly available (4)
- Great onboarding process (2)
- It's awesome, I wanted to try it (2)
- Used by Crunchbase (2)
- Great query language and built-in data browser (2)
Cons of Neo4j
- Comparably slow (9)
- Can't store a vertex as JSON (4)
- Doesn't have a managed cloud service at low cost (1)
related Neo4j posts
We have an in-house-built experiment management system. We produce samples as input to the next step, which could then produce one sample (1-to-1) or many samples (1-to-many). There are many steps like this. So far, we have been tracking genealogy (limited tracking) in the MySQL database, and it is becoming hard to trace back to the original material or sample (I can give more details if required). So, we are considering a graph database. I am requesting advice from the experts.
- Is a graph database the right choice, or can we manage with an RDBMS?
- If an RDBMS, which RDBMS, and which feature or approach could make this manageable or sustainable?
- If a graph database (Neo4j, OrientDB, Azure Cosmos DB, Amazon Neptune, ArangoDB), which one is good, and what are the best practices?
I am sorry that this might be a loaded question.
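For readers weighing the RDBMS option in this question: sample genealogy can be walked with a recursive CTE, which MySQL supports since 8.0. A self-contained sketch using SQLite's identical syntax, with a hypothetical schema:

```python
import sqlite3

# Sketch of the RDBMS option: a self-referencing lineage table plus a recursive
# CTE that walks back to the original material. Schema and names are hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sample (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE derived_from (child INTEGER, parent INTEGER);  -- 1-to-1 or 1-to-many steps
INSERT INTO sample VALUES (1, 'raw material'), (2, 'extract'), (3, 'aliquot A'), (4, 'aliquot B');
INSERT INTO derived_from VALUES (2, 1), (3, 2), (4, 2);
""")

ancestry = db.execute("""
WITH RECURSIVE lineage(id) AS (
    SELECT parent FROM derived_from WHERE child = ?
    UNION
    SELECT d.parent FROM derived_from d JOIN lineage l ON d.child = l.id
)
SELECT s.label FROM lineage JOIN sample s ON s.id = lineage.id
""", (4,)).fetchall()

print(ancestry)  # ancestors of 'aliquot B', back to 'raw material'
```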
I'm evaluating the use of RedisGraph vs. Microsoft SQL Server 2019 graph features to build a social graph. One of the key criteria is high availability and cross-data-center replication of data. While Neo4j is a much more mature solution in general, I'm not considering it due to the cost and the introduction of a new stack into the ecosystem. Also, due to the nature of the data and org policies, a cloud-based solution won't be a viable choice.
We currently use Redis as a cache and SQL Server 2019 as our RDBMS.
I'm leaning towards SQL Server 2019 graph, as we already use SQL Server extensively as our relational database and have all the HA and cross-data-center replication setup readily available. I still need to evaluate whether it fulfills our needs as a graph DB, though; I have also learned that SQL Server 2019 is still a new player in the market and attempts to fit graph-like queries on top of a relational model (with node and edge tables). RedisGraph seems very promising. However, I'm not totally sure about its HA, graph data backup, and cross-data-center support.
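For reference, RedisGraph is driven through module commands on a plain Redis connection, so an evaluation spike can be small. A sketch with redis-py; the graph name and schema are hypothetical:

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # an existing Redis, with the module loaded

# GRAPH.QUERY takes a graph name and a Cypher-flavored query string.
r.execute_command(
    "GRAPH.QUERY", "social",
    "CREATE (:User {name:'a'})-[:FOLLOWS]->(:User {name:'b'})",
)
reply = r.execute_command(
    "GRAPH.QUERY", "social",
    "MATCH (u:User)-[:FOLLOWS]->(v:User) RETURN u.name, v.name",
)
print(reply)
```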
related Titan posts
ArangoDB
Pros of ArangoDB
- Graphs and documents in one DB (37)
- Intuitive and rich query language (26)
- Good documentation (25)
- Open source (25)
- Joins for collections (21)
- Foxx is a great platform (15)
- Great out-of-the-box web interface with API playground (14)
- Good driver support (6)
- Low maintenance effort (6)
- Clustering (6)
- Easy microservice creation with Foxx (5)
- You can write true backendless apps (4)
- Managed solution available (2)
- Performance (0)
Cons of ArangoDB
- Web UI still has room for improvement (3)
- No support for the Blueprints standard, uses custom AQL (2)
related ArangoDB posts
Hello all, I'm building an app that will enable users to create documents using the CKEditor or TinyMCE editor. The data is then stored in a database and retrieved to display to the user; these docs can also contain image data. The number of pages generated for a single document can go up to 1000, so by design each page is stored as a separate JSON document. I'm wondering which database is the right one to choose between ArangoDB and PostgreSQL. Your thoughts and advice, please. Thanks, Kashyap
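For comparison while evaluating, the PostgreSQL side of this design could look like the following sketch with psycopg2: one row per page with a JSONB body, mirroring the page-per-JSON layout described above. Table and connection details are hypothetical:

```python
import psycopg2
from psycopg2.extras import Json

# Assumes a local database named "docs"; one row per page keeps each JSON payload small.
conn = psycopg2.connect("dbname=docs user=app")
cur = conn.cursor()

cur.execute("""CREATE TABLE IF NOT EXISTS page (
    doc_id  INT,
    page_no INT,
    body    JSONB,
    PRIMARY KEY (doc_id, page_no))""")

cur.execute("INSERT INTO page VALUES (%s, %s, %s)",
            (1, 1, Json({"html": "<p>hello</p>", "images": []})))

cur.execute("SELECT body->>'html' FROM page WHERE doc_id = %s AND page_no = %s", (1, 1))
print(cur.fetchone())
conn.commit()
```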
Cayley
Pros of Cayley
- Full open source (7)
related Cayley posts
GraphQL
Pros of GraphQL
- Schemas defined by the requests made by the user (74)
- Will replace RESTful interfaces (62)
- The future of APIs (60)
- The future of databases (48)
- Self-documenting (12)
- Get many resources in a single request (11)
- Query language (5)
- Ask for what you need, get exactly that (5)
- Fetch different resources in one request (3)
- Evolve your API without versions (3)
- Type system (3)
- Easy setup (2)
- GraphiQL (2)
- Ease of client creation (2)
- Good for apps that query at build time (SSR/Gatsby) (1)
- Backed by Facebook (1)
- Easy to learn (1)
- "Open" document (1)
- Better versioning (1)
- Standard (1)
- Describe your data (1)
- Fast prototyping (1)
Cons of GraphQL
- Hard to migrate from GraphQL to another technology (4)
- More code to type (4)
- Takes longer to build compared to schemaless (2)
- All the pros sound like NFT pitches (1)
- Works just like any other API at runtime (1)
related GraphQL posts
I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog where you can save the movies you want to see and rate the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.
For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:
Node.js has a huge community, so it will always be a safe choice in terms of libraries and finding solutions to problems you may have.
GraphQL because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option as it feels more natural for writing APIs, it improves development velocity, and by definition it fixes the over-fetching and under-fetching problems that are so common in REST APIs; on top of that, the community is getting bigger and bigger.
MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite the bad reputation it has acquired in recent months, I still believe it is a powerful database for a very long list of use cases, such as the one I needed for my website.
When I joined NYT there was already broad dissatisfaction with the LAMP (Linux, Apache HTTP Server, MySQL, PHP) stack, and the front-end framework in particular. So, I wasn't passing judgment on it. I mean, LAMP's fine, you can do good work in LAMP. It's a little dated at this point, but it's not ... I didn't want to rip it out for its own sake, but everyone else was like, "We don't like this, it's really inflexible." And I remember from being outside the company when that was called NYT5, when it had launched. And having observed it from the outside, I was like, you guys took so long to do that and you did it so carefully, and yet you're not happy with your decisions. Why is that? That was more the impetus. If we're going to do this again, how are we going to do it in a way that we're gonna get a better result?
So we're moving quickly away from LAMP, I would say. Right now, the new front end is React-based and uses Apollo. And we've been in a long, protracted, gradual rollout of the core experiences.
React is now talking to GraphQL as the primary API. There's a Node.js back end to the front end, which is mainly for server-side rendering as well.
Behind that, the main repository for the GraphQL server is a big table repository that we call Bodega, because it's a convenience store. And that reads off of a Kafka pipeline.
MongoDB
Pros of MongoDB
- Document-oriented storage (829)
- NoSQL (594)
- Ease of use (553)
- Fast (465)
- High performance (410)
- Free (257)
- Open source (218)
- Flexible (180)
- Replication & high availability (145)
- Easy to maintain (112)
- Querying (42)
- Easy scalability (39)
- Auto-sharding (38)
- High availability (37)
- Map/reduce (31)
- Document database (27)
- Easy setup (25)
- Full index support (25)
- Reliable (16)
- Fast in-place updates (15)
- Agile programming, flexible, fast (14)
- No database migrations (12)
- Easy integration with Node.js (8)
- Enterprise (8)
- Enterprise support (6)
- Great NoSQL DB (5)
- Support for many languages through different drivers (4)
- Driver support is good (3)
- Schemaless (3)
- Aggregation framework (3)
- Fast (2)
- Managed service (2)
- Easy to scale (2)
- Awesome (2)
- Consistent (2)
- Good GUI (1)
- ACID compliant (1)
Cons of MongoDB
- Very slow for connected models that require joins (6)
- Not ACID compliant (3)
- Proprietary query language (1)
related MongoDB posts
Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.
We set ourselves the following criteria for the optimal tool that would do this job:
- The data replication must be near real-time, yet it should NOT impact the production database
- The data replication must be horizontally scalable (based on the load), asynchronous, and crash-resilient
Based on the above criteria, we selected the following tools to perform the end-to-end data replication:
We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using Stitch Triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.
We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB Stitch offers integration with AWS services.
In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.
Next, we wrote a minimal microservice in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures with SQLAlchemy. We deployed this microservice as an AWS Lambda with Zappa. With Zappa, deploying your service as an event-driven, horizontally scalable Lambda service is dead easy.
In the end, we got to implement a highly scalable, near-real-time change data replication service that "works", and deployed it to production in a matter of a few days!
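A minimal sketch of what such an SQS consumer can look like with boto3; the queue URL and payload shape are hypothetical, and the warehouse write is stubbed out:

```python
import json
import boto3  # assumed configured with AWS credentials

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"  # placeholder

def mirror_change(change: dict) -> None:
    # In the real service this translated the payload via SQLAlchemy models and
    # applied the insert / update / delete / replace to the target warehouse.
    print("applying", change.get("operationType"), "to", change.get("ns"))

while True:
    # Long polling keeps the consumer cheap while staying near real-time.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        mirror_change(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```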
We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.
As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).
When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
related JanusGraph posts
Neptune
Pros of Neptune
- AWS managed service (1)
- Supports both Gremlin and openCypher query languages (1)
Cons of Neptune
- Doesn't have much support for openCypher clients (1)
- Doesn't have proper clients for different languages (1)
- Doesn't have much community support (1)