What is Hadoop?
Who uses Hadoop?
Here are some stack decisions, common use cases, and reviews by companies and developers who chose Hadoop in their tech stack.
I've been going over the documentation and couldn't find answers to a few questions:
Apache Hive is built on top of Hadoop, meaning that if I wanted to scale it up I could do either horizontal or vertical scaling. But if I want to scale up OpenRefine to handle more data, how can that be achieved? The only option I could find was to allocate more memory, say 2 or 4 GB, but with that approach we would eventually run out of memory to allot. So, thoughts on this?
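For context on the memory question: OpenRefine's heap size is set once at startup rather than scaled out across nodes. A sketch of the two documented knobs, with the 4 GB value purely illustrative:

```sh
# On Linux/macOS, the refine launcher accepts a heap-size option:
./refine -m 4096m

# On Windows, the equivalent setting is a line in refine.ini:
# REFINE_MEMORY=4096M
```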
Secondly, Hadoop has MapReduce, meaning a task is split across many mappers running in parallel, which increases processing speed. Is there a similar mechanism in OpenRefine, or does it have only a single processing unit (since it runs locally)? Thoughts?
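To make the contrast concrete, here is a minimal sketch of the map/reduce pattern using only Python's standard library; the word-count task and the input chunks are hypothetical stand-ins for data split across a cluster:

```python
# Map phase fans work out to parallel workers; reduce phase merges
# the partial results. This is the pattern Hadoop distributes across
# machines; a single-process tool has no equivalent fan-out.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(chunk: str) -> Counter:
    # Each mapper independently counts words in its own chunk.
    return Counter(chunk.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    # The reducer merges partial counts into one final result.
    return a + b

if __name__ == "__main__":
    chunks = ["big data big", "data lake data", "big lake"]
    with Pool() as pool:
        partials = pool.map(map_phase, chunks)  # mappers run in parallel
    total = reduce(reduce_phase, partials, Counter())
    print(total)  # Counter({'big': 3, 'data': 3, 'lake': 2})
```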
For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We're trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions or best practices for when to use each, and what data to store in MarkLogic versus Snowflake versus Hadoop? Or are all three of these platforms redundant with one another?
I am looking for the best tool to orchestrate #ETL workflows in non-Hadoop environments, mainly for regression testing use cases. Would Airflow or Apache NiFi be a good fit for this purpose?
For example, I want to run an Informatica ETL job and then run an SQL task as a dependency, followed by another task from Jira. What tool is best suited to set up such a pipeline?
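A dependency chain like that maps naturally onto an Airflow DAG. Below is a minimal sketch; the task names, commands, and operator choices are hypothetical stand-ins, since the real Informatica and Jira steps depend on which provider packages are installed:

```python
# A sketch of the described pipeline as an Airflow DAG:
# Informatica job -> SQL task -> Jira task.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def update_jira():
    # Placeholder for a Jira call (e.g. via the `jira` Python client).
    print("update Jira ticket")

with DAG(
    dag_id="regression_etl",
    start_date=datetime(2024, 1, 1),
    schedule=None,   # triggered manually for regression runs
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_informatica_job",
        # Hypothetical invocation of Informatica's pmcmd CLI.
        bash_command="pmcmd startworkflow -sv IntService -d Domain -f Folder wf_regression",
    )
    run_sql = BashOperator(
        task_id="run_sql_check",
        # A real deployment would likely use a SQL operator instead.
        bash_command="psql -f validate_results.sql",
    )
    jira_task = PythonOperator(
        task_id="update_jira",
        python_callable=update_jira,
    )

    # The >> operator declares the dependency chain.
    run_etl >> run_sql >> jira_task
```

NiFi could run the same steps, but it is oriented toward continuous dataflow rather than this kind of discrete, dependency-ordered job graph, which is where Airflow tends to fit better.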