Needs advice on ArangoDB and PostgreSQL

Hello all, I'm building an app that will enable users to create documents using the CKEditor or TinyMCE editor. The data is then stored in a database and retrieved for display to the user; these docs can also contain image data. The number of pages generated for a single document can go up to 1,000, so by design each page is stored as a separate JSON document. I'm wondering which database is the right one to choose between ArangoDB and PostgreSQL. Your thoughts and advice, please. Thanks, Kashyap

Replies (2)
Recommends MongoDB

Try MongoDB first.

gitgkk · October 27th 2021 at 8:32PM

I wouldn't go the MongoDB route due to past bad experience and licensing restrictions compared to an open source db.

Xiaoming Deng · November 3rd 2021 at 5:59AM

Me too.

Founder at Vanilo

Which Graph DB features are you planning to use?

Jean Arnaud · October 26th 2021 at 8:00PM

It depends on the rest of your application/infrastructure. First, would you actually use the features provided by graph storage?

If not, PostgreSQL is very good in terms of performance (even better than most NoSQL databases) for storing static JSON. If your JSON documents have to be updated frequently, MongoDB could be an option as well.
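For illustration, a minimal PostgreSQL sketch of the page-per-JSON model described in the question; the document_pages table and its column names are hypothetical, not anything from the original post:

  -- one row per page; the editor output is stored as binary JSON (jsonb)
  CREATE TABLE document_pages (
      doc_id   bigint  NOT NULL,
      page_no  integer NOT NULL,
      body     jsonb   NOT NULL,  -- CKEditor/TinyMCE output, may embed image references
      PRIMARY KEY (doc_id, page_no)
  );

  -- optional GIN index so queries can filter on keys inside the JSON
  CREATE INDEX document_pages_body_idx ON document_pages USING gin (body);

  -- fetch a single page of a document
  SELECT body FROM document_pages WHERE doc_id = 42 AND page_no = 7;

Large images are often kept outside the JSON (object storage or a separate table), with the page JSON holding only references, but that is a design choice rather than a requirement.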

gitgkk · October 27th 2021 at 8:32PM

Hello Jean, the application's main utility is to create and update documents, hence the choice of a database that supports JSON. I wouldn't go the MongoDB route due to past bad experience and licensing restrictions compared to an open-source DB.

Needs advice on Azure Cosmos DB, Neo4j, and OrientDB

We have an in-house-built experiment management system. We produce samples as input to the next step, which could then produce one sample (1-to-1) or many samples (1-to-many). There are many steps like this. So far, we are tracking genealogy (limited tracking) in the MySQL database, and it is becoming hard to trace back to the original material or sample (I can give more details if required). So, we are considering a graph database. I am requesting advice from the experts.

  1. Is a graph database the right choice, or can we manage with an RDBMS?
  2. If an RDBMS, which RDBMS, which feature, or which approach could make this manageable or sustainable?
  3. If a graph database (Neo4j, OrientDB, Azure Cosmos DB, Amazon Neptune, ArangoDB), which one is good, and what are the best practices?

I am sorry that this might be a loaded question.

Replies (1)
Recommends ArangoDB

You have not given much detail about the data generated, the depth of such a graph, and the access patterns (queries). However, it is very easy to track all samples and materials if you traverse this graph using a graph database. Here you can use any of the databases mentioned. OrientDB and ArangoDB are also multi-model databases where you can still query the data in a relational way using joins - you retain full flexibility.

In SQL, you can use Common Table Expressions (CTEs) to write a recursive query that reads all parent nodes of a tree.
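As a rough sketch of that approach (the table and column names here are hypothetical, not from your system), a recursive CTE can walk from one sample up through all of its ancestors via an edge table sample_edges(child_id, parent_id):

  -- all ancestors of sample 123, assuming an edge table sample_edges(child_id, parent_id)
  WITH RECURSIVE ancestors AS (
      SELECT parent_id, 1 AS depth
      FROM sample_edges
      WHERE child_id = 123
    UNION ALL
      SELECT e.parent_id, a.depth + 1
      FROM sample_edges e
      JOIN ancestors a ON e.child_id = a.parent_id
  )
  SELECT parent_id, depth FROM ancestors;

The same query with the join reversed (on parent_id) lists all descendants instead.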

I would recommend ArangoDB if your samples also have disparate or nested attributes, so that the document model (JSON) fits, and you have many complex graph queries that should be performed as efficiently as possible. If not, stay with an RDBMS.

Michael Staub · August 6th 2020 at 4:53PM

Another reason I recommend ArangoDB is that the storage engine does not limit your data model. For example, you cannot create a geo-index on a 'user.location' field in any of the Gremlin-compatible stores, as their JSON documents can only have one level of properties.

Thiru Medampalli · August 7th 2020 at 9:00PM

Hey @ifcologne,

Thanks for your response. We would explore ArangoDB.

Here are some more details, if you are wondering:

An operation produces many samples (output) from other samples (input). We are tracking both operations and samples (two graphs, i.e. one for operations and another for samples). Typical depth is 10 to 20 for both operations and samples, but some are even deeper (> 20). Operations could be a few million records (2-3 million) and samples 10 to 20 million records so far over the years. We are using the closure-table data model in the DBMS to represent the tree/graph data.
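For context, a closure table in this sense typically stores one row per ancestor/descendant pair; the names below are illustrative only, not the actual schema:

  -- one row per (ancestor, descendant) pair, including self-pairs at depth 0
  CREATE TABLE sample_closure (
      ancestor_id   bigint  NOT NULL,
      descendant_id bigint  NOT NULL,
      depth         integer NOT NULL,
      PRIMARY KEY (ancestor_id, descendant_id)
  );

  -- all upstream samples (ancestors) of sample 123
  SELECT ancestor_id, depth
  FROM sample_closure
  WHERE descendant_id = 123 AND depth > 0
  ORDER BY depth;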

Access pattern:

An API and some power users directly access the data via specific SQL (stored procedures and/or special SQL scripts). We are open to restricting or enhancing the access patterns further.

We are finding it hard to go upstream/downstream and also to merge the two tree structures (operations and samples) as depth increases.

We are finding it hard to data-mine based on sample or process attributes (some are nested).

It is hard to represent multiple parents for one child.
