
Alternatives to NLTK

SpaCy, Gensim, TensorFlow, PyTorch, and scikit-learn are the most popular alternatives and competitors to NLTK.

What is NLTK and what are its top alternatives?

NLTK (Natural Language Toolkit) is a widely used platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. One of its key features is the extensive range of language processing tools available, making it a go-to choice for many developers. However, NLTK can be slow for large datasets and lacks advanced deep learning capabilities compared to newer tools in the market.
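
To make the trade-off concrete, here is a minimal, hedged NLTK sketch covering tokenization, part-of-speech tagging, and stemming. It assumes the relevant NLTK data packages have been downloaded; resource names can vary slightly between NLTK versions, and the example sentence is illustrative only.

```python
# Minimal NLTK sketch: tokenize, POS-tag, and stem a sentence.
# Assumes the tokenizer and tagger data have been fetched with nltk.download();
# resource names may differ slightly between NLTK versions.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "NLTK makes it easy to experiment with human language data."
tokens = nltk.word_tokenize(text)              # ['NLTK', 'makes', 'it', 'easy', ...]
tagged = nltk.pos_tag(tokens)                  # [('NLTK', 'NNP'), ('makes', 'VBZ'), ...]
stems = [nltk.PorterStemmer().stem(t) for t in tokens]

print(tokens)
print(tagged)
print(stems)
```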

  1. spaCy: spaCy is a fast and efficient NLP library for Python that offers pre-trained models for text processing tasks such as named entity recognition, part-of-speech tagging, and dependency parsing. It is known for its high performance and scalability, making it a popular choice for building production-ready applications. However, spaCy may have a steeper learning curve compared to NLTK (see the short spaCy sketch after this list).
  2. Gensim: Gensim is a Python library for topic modeling, document indexing, and similarity retrieval with large text collections. It offers implementations of popular algorithms like word2vec and doc2vec for word and document embeddings, making it a powerful tool for semantic analysis. Compared to NLTK, Gensim is more focused on unsupervised learning tasks (see the word2vec sketch after this list).
  3. Stanford NLP: Stanford NLP provides a suite of NLP tools developed by the Stanford NLP Group, including named entity recognition, sentiment analysis, and dependency parsing. It is known for its accuracy and robustness, especially in tasks like entity linking and coreference resolution. However, setting up and integrating Stanford NLP can be more complex compared to NLTK.
  4. Flair: Flair is a simple and powerful tool for NLP in Python, offering state-of-the-art embeddings and pre-trained models for text classification, named entity recognition, and part-of-speech tagging. It also provides easy-to-use APIs for training custom models on new datasets. Compared to NLTK, Flair focuses more on deep learning techniques for NLP tasks.
  5. TextBlob: TextBlob is a user-friendly NLP library for processing textual data in Python, offering simple APIs for common NLP tasks like sentiment analysis, part-of-speech tagging, and noun phrase extraction. It also provides access to WordNet for semantic analysis. TextBlob is easier to learn and use compared to NLTK, making it suitable for beginners in NLP.
  6. AllenNLP: AllenNLP is a deep learning framework for NLP tasks built on top of PyTorch, providing modular components for building state-of-the-art models in areas like text classification, question answering, and language modeling. It offers easy experimentation with different architectures and datasets, but may require more expertise in deep learning compared to NLTK.
  7. Hugging Face Transformers: Transformers by Hugging Face is a popular library for pre-trained NLP models, including BERT, GPT-2, and RoBERTa, that can be easily fine-tuned for downstream tasks like text classification and language generation. It offers a wide range of models and tools for working with transformers, making it a cutting-edge alternative to NLTK for deep learning-based NLP (see the pipeline sketch after this list).
  8. FastText: FastText is a library for efficient learning of word embeddings and text classification, developed by Facebook Research. It provides fast training and inference for word representations and text categorization tasks, especially for large-scale datasets. Compared to NLTK, FastText is optimized for performance and scalability in NLP applications.
  9. BERT: BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained NLP model developed by Google AI that achieved state-of-the-art results across various NLP benchmarks. It offers fine-tuning capabilities for downstream tasks like question answering and named entity recognition, making it a powerful alternative to NLTK for advanced NLP projects.
  10. spaCy Transformers: spaCy Transformers is an integration of spaCy and transformers models for easy usage of pre-trained transformer-based models in spaCy pipelines. It allows for seamless integration of transformer models for tasks like text classification, entity recognition, and summarization, offering a modern approach to NLP compared to the more traditional methods available in NLTK.
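
For item 1, a minimal spaCy sketch; it assumes the small English pipeline en_core_web_sm has been installed (python -m spacy download en_core_web_sm), and the example sentence is illustrative only.

```python
# Minimal spaCy sketch: run a pre-trained pipeline and inspect tokens and entities.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for token in doc:
    print(token.text, token.pos_, token.dep_)   # token, part-of-speech tag, dependency label

for ent in doc.ents:
    print(ent.text, ent.label_)                 # named entities, e.g. Apple / ORG
```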
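
For item 2, a minimal Gensim word2vec sketch; parameter names assume Gensim 4.x (vector_size was called size in 3.x), and the toy corpus is far too small for meaningful embeddings.

```python
# Minimal Gensim sketch: train a toy Word2Vec model and query it.
# Parameter names follow Gensim 4.x; real use needs a much larger corpus.
from gensim.models import Word2Vec

sentences = [
    ["natural", "language", "processing", "with", "python"],
    ["topic", "modelling", "and", "document", "similarity"],
    ["word", "embeddings", "capture", "semantic", "similarity"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)
print(model.wv["similarity"][:5])                   # first dimensions of one word vector
print(model.wv.most_similar("similarity", topn=3))  # nearest neighbours in the toy space
```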
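
And for item 7, the Hugging Face pipeline API in a hedged sketch; the default checkpoint is downloaded on first use and may differ between transformers versions.

```python
# Minimal Hugging Face Transformers sketch: sentiment analysis via the pipeline API.
# The default model is fetched on first use; pin a specific checkpoint in real projects.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("NLTK is great for teaching, but I needed something faster in production.")
print(result)   # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```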

Top Alternatives to NLTK

  • SpaCy

    It is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. It comes with pre-trained statistical models and word vectors, and currently supports tokenization for 49+ languages. ...

  • Gensim

    It is a Python library for topic modelling, document indexing and similarity retrieval with large corpora. Target audience is the natural language processing (NLP) and information retrieval (IR) community. ...

  • TensorFlow

    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. ...

  • PyTorch

    PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use numpy / scipy / scikit-learn etc.; a short sketch of this NumPy-like style follows this list. ...

  • scikit-learn

    scikit-learn is a Python module for machine learning built on top of SciPy and distributed under the 3-Clause BSD license. ...

  • Keras

    Deep Learning library for Python. Convnets, recurrent neural networks, and more. Runs on TensorFlow or Theano. https://keras.io/ ...

  • JavaScript

    JavaScript is most known as the scripting language for Web pages, but it is used in many non-browser environments as well, such as node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic, and supports object-oriented, imperative, and functional programming styles. ...

  • Git

    Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...
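
Because the PyTorch entry above stresses its NumPy-like feel, here is a minimal, hedged sketch of that style; shapes and values are illustrative only.

```python
# Minimal PyTorch sketch: tensors behave much like NumPy arrays, with autograd on top.
import torch

x = torch.randn(3, 4)                       # random matrix, like numpy.random.randn(3, 4)
w = torch.randn(4, 2, requires_grad=True)   # parameters we want gradients for

y = x @ w                                   # same @ matrix-multiplication operator
loss = (y ** 2).mean()                      # scalar objective
loss.backward()                             # autograd fills w.grad

print(y.shape)                              # torch.Size([3, 2])
print(w.grad.shape)                         # torch.Size([4, 2])
print(x.numpy().shape)                      # bridge back to NumPy for CPU tensors
```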

NLTK alternatives & related posts


SpaCy

Industrial-Strength Natural Language Processing in Python
PROS OF SPACY
  • Speed (12)
  • No vendor lock-in (2)
CONS OF SPACY
  • Requires creating a training set and managing training (1)


Gensim

A python library for Topic Modelling
PROS OF GENSIM
  • None listed yet
CONS OF GENSIM
  • None listed yet

      related Gensim posts

      Biswajit Pathak, Project Manager at Sony:

      Can you please advise which one to choose FastText Or Gensim, in terms of:

      1. Operability with ML Ops tools such as MLflow, Kubeflow, etc.
      2. Performance
      3. Customization of Intermediate steps
      4. FastText and Gensim both have the same underlying libraries
      5. Use cases each one tries to solve
      6. Unsupervised Vs Supervised dimensions
      7. Ease of Use.

      Please mention any other points that I may have missed here.


      TensorFlow

      Open Source Software Library for Machine Intelligence
      PROS OF TENSORFLOW
        • High Performance (32)
        • Connect Research and Production (19)
        • Deep Flexibility (16)
        • Auto-Differentiation (12)
        • True Portability (11)
        • Easy to use (6)
        • High level abstraction (5)
        • Powerful (5)
      CONS OF TENSORFLOW
        • Hard (9)
        • Hard to debug (6)
        • Documentation not very helpful (2)

      related TensorFlow posts

      Tom Klein

      Google Analytics is a great tool to analyze your traffic. To debug our software and ask questions, we love to use Postman and Stack Overflow. Google Drive helps our team to share documents. We're able to build our great products through the APIs by Google Maps, CloudFlare, Stripe, PayPal, Twilio, Let's Encrypt, and TensorFlow.

      Shared insights on TensorFlow, Django, and Python:

      Hi, I have an LMS application, currently developed in Python-Django.

      It works all very well, students can view their classes and submit exams, but I have noticed that some students are sharing exam answers with other students and let's say they already have a model of the exams.

      With the help of artificial intelligence, I want the exams to have different questions, in a different order, for each student. What technology should I learn to develop something like this? I am a Python-Django developer, but my focus is on web development; I have never touched anything related to A.I.

      What do you think about TensorFlow?

      Please, I would appreciate all your ideas and opinions, thank you very much in advance.


      PyTorch

      A deep learning framework that puts Python first
      PROS OF PYTORCH
        • Easy to use (15)
        • Developer Friendly (11)
        • Easy to debug (10)
        • Sometimes faster than TensorFlow (7)
      CONS OF PYTORCH
        • Lots of code (3)
        • It eats poop (1)

      related PyTorch posts

      Eric Colson, Chief Algorithms Officer at Stitch Fix:

      The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (s3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for adhoc queries and dashboards.

      Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

      At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

      For more info:

      #DataScience #DataStack #Data


      Server side

      We decided to use Python for our backend because it is one of the industry standard languages for data analysis and machine learning. It also has a lot of support due to its large user base.

      • Web Server: We chose Flask because we want to keep our machine learning / data analysis and the web server in the same language. Flask is easy to use and we all have experience with it. Postman will be used for creating and testing APIs due to its convenience.

      • Machine Learning: We decided to go with PyTorch for machine learning since it is one of the most popular libraries. It is also known to have an easier learning curve than other popular libraries such as Tensorflow. This is important because our team lacks ML experience and learning the tool as fast as possible would increase productivity.

      • Data Analysis: Some common Python libraries will be used to analyze our data. These include NumPy, Pandas, and matplotlib. These tools combined will help us learn the properties and characteristics of our data. Jupyter notebook will be used to help organize the data analysis process and improve code readability.

      Client side

      • UI: We decided to use React for the UI because it helps organize the data and variables of the application into components, making it very convenient to maintain our dashboard. Since React is one of the most popular front end frameworks right now, there will be a lot of support for it as well as a lot of potential new hires that are familiar with the framework. CSS 3 and HTML5 will be used for the basic styling and structure of the web app, as they are the most widely used front end languages.

      • State Management: We decided to use Redux to manage the state of the application since it works naturally with React. Our team also already has experience working with Redux, which gave it a slight edge over the other state management libraries.

      • Data Visualization: We decided to use the React-based library Victory to visualize the data. They have very user friendly documentation on their official website which we find easy to learn from.

      Cache

      • Caching: We decided between Redis and memcached because they are two of the most popular open-source cache engines. We ultimately decided to use Redis to improve our web app performance mainly due to the extra functionalities it provides such as fine-tuning cache contents and durability.

      Database

      • Database: We decided to use a NoSQL database over a relational database because of its flexibility from not having a predefined schema. The user behavior analytics data has to be flexible since the data we plan to store may change frequently. We decided on MongoDB because it is lightweight and we can easily host the database with MongoDB Atlas. Everyone on our team also has experience working with MongoDB.

      Infrastructure

      • Deployment: We decided to use Heroku over AWS, Azure, Google Cloud because it is free. Although there are advantages to the other cloud services, Heroku makes the most sense to our team because our primary goal is to build an MVP.

      Other Tools

      • Communication: Slack will be used as the primary means of communication. It provides all the features needed for basic discussions. For more interactive meetings, Zoom will be used for its video calls and screen sharing capabilities.

      • Source Control: The project will be stored on GitHub, and all code changes will be made through pull requests. This will help us keep the codebase clean and make it easy to revert changes when we need to.


      scikit-learn

      Easy-to-use and general-purpose machine learning in Python
      PROS OF SCIKIT-LEARN
        • Scientific computing (25)
        • Easy (19)
      CONS OF SCIKIT-LEARN
        • Limited (2)

      related scikit-learn posts

      Should I continue learning Django or take this Spring opportunity? I have been coding in Python for about 2 years. I am currently learning Django and I am enjoying it. I also have some knowledge of data science libraries (Pandas, NumPy, scikit-learn, PyTorch). I am currently enhancing my web development and software engineering skills and may shift later into data science, since I come from a medical background. The issue is that I have now been offered a very trustworthy 9-month program teaching Java/Spring. The graduates of this program work directly in well-known tech companies. Although I have been planning to continue with Python, the other opportunity makes me hesitant, since it will put me on a specific roadmap with deadlines and mentors. I also found on Glassdoor that there are far more Spring jobs than Django jobs. Should I apply for this program or continue my journey?


      Hi, I wanted to jump into Machine Learning.

      I first tried brain.js, but its capabilities are very limited and it abstracts most concepts of ML away. I've tried TensorFlow, but it's very hard for me to understand the concepts.

      Now I am thinking about trying NumPy or scikit-learn, but I don't really know much about ML and still want to use the full power of ML.

      What do you recommend me to use as a beginner in ML?

      Also, do you know any good tutorials which explain how ML works and how to implement it in a given framework (ideally in German)?

      Thanks for your attention & help :D


      Keras

      Deep Learning library for Theano and TensorFlow
      PROS OF KERAS
        • Quality Documentation (8)
        • Supports Tensorflow and Theano backends (7)
        • Easy and fast NN prototyping (7)
      CONS OF KERAS
        • Hard to debug (4)

      related Keras posts

      Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber:

      Why we built an open source, distributed training framework for TensorFlow, Keras, and PyTorch:

      At Uber, we apply deep learning across our business; from self-driving research to trip forecasting and fraud prevention, deep learning enables our engineers and data scientists to create better experiences for our users.

      TensorFlow has become a preferred deep learning library at Uber for a variety of reasons. To start, the framework is one of the most widely used open source frameworks for deep learning, which makes it easy to onboard new users. It also combines high performance with an ability to tinker with low-level model details—for instance, we can use both high-level APIs, such as Keras, and implement our own custom operators using NVIDIA’s CUDA toolkit.

      Uber has introduced Michelangelo (https://eng.uber.com/michelangelo/), an internal ML-as-a-service platform that democratizes machine learning and makes it easy to build and deploy these systems at scale. In this article, we pull back the curtain on Horovod, an open source component of Michelangelo’s deep learning toolkit which makes it easier to start—and speed up—distributed deep learning projects with TensorFlow:

      https://eng.uber.com/horovod/

      (Direct GitHub repo: https://github.com/uber/horovod)


      I am going to send my website to a venture capitalist for inspection. If I succeed, I will get funding for my startup! This website is based on Django and uses a Keras and TensorFlow model to make predictions on medical imaging. Should I use Heroku or PythonAnywhere to deploy my website? Best Regards, Adarsh.


      JavaScript

      Lightweight, interpreted, object-oriented language with first-class functions
      PROS OF JAVASCRIPT
        • Can be used on frontend/backend (1.7K)
        • It's everywhere (1.5K)
        • Lots of great frameworks (1.2K)
        • Fast (897)
        • Light weight (745)
        • Flexible (425)
        • You can't get a device today that doesn't run js (392)
        • Non-blocking i/o (286)
        • Ubiquitousness (237)
        • Expressive (191)
        • Extended functionality to web pages (55)
        • Relatively easy language (49)
        • Executed on the client side (46)
        • Relatively fast to the end user (30)
        • Pure Javascript (25)
        • Functional programming (21)
        • Async (15)
        • Full-stack (13)
        • Setup is easy (12)
        • Future Language of The Web (12)
        • Its everywhere (12)
        • Because I love functions (11)
        • JavaScript is the New PHP (11)
        • Like it or not, JS is part of the web standard (10)
        • Expansive community (9)
        • Everyone use it (9)
        • Can be used in backend, frontend and DB (9)
        • Easy (9)
        • Most Popular Language in the World (8)
        • Powerful (8)
        • Can be used both as frontend and backend as well (8)
        • For the good parts (8)
        • No need to use PHP (8)
        • Easy to hire developers (8)
        • Agile, packages simple to use (7)
        • Love-hate relationship (7)
        • Photoshop has 3 JS runtimes built in (7)
        • Evolution of C (7)
        • It's fun (7)
        • Hard not to use (7)
        • Versatile (7)
        • Its fun and fast (7)
        • Nice (7)
        • Popularized Class-Less Architecture & Lambdas (7)
        • Supports lambdas and closures (7)
        • It lets me use Babel & Typescript (6)
        • Can be used on frontend/backend/Mobile/create PRO Ui (6)
        • 1.6K Can be used on frontend/backend (6)
        • Client side JS uses the visitors CPU to save Server Res (6)
        • Easy to make something (6)
        • Clojurescript (5)
        • Promise relationship (5)
        • Stockholm Syndrome (5)
        • Function expressions are useful for callbacks (5)
        • Scope manipulation (5)
        • Everywhere (5)
        • Client processing (5)
        • What to add (5)
        • Because it is so simple and lightweight (4)
        • Only Programming language on browser (4)
        • Test (1)
        • Hard to learn (1)
        • Test2 (1)
        • Not the best (1)
        • Easy to understand (1)
        • Subskill #4 (1)
        • Easy to learn (1)
        • Hard 彤 (0)
      CONS OF JAVASCRIPT
        • A constant moving target, too much churn (22)
        • Horribly inconsistent (20)
        • Javascript is the New PHP (15)
        • No ability to monitor memory utilization (9)
        • Shows Zero output in case of ANY error (8)
        • Thinks strange results are better than errors (7)
        • Can be ugly (6)
        • No GitHub (3)
        • Slow (2)
        • HORRIBLE DOCUMENTS, faulty code, repo has bugs (0)

      related JavaScript posts

      Zach Holman

      Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.

      But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.

      But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.

      Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot- I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.

      Conor Myhrvold, Tech Brand Mgr, Office of CTO at Uber:

      How Uber developed the open source, end-to-end distributed tracing Jaeger , now a CNCF project:

      Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

      Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

      https://eng.uber.com/distributed-tracing/

      (GitHub Pages : https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

      Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark


      Git

      Fast, scalable, distributed revision control system
      PROS OF GIT
        • Distributed version control system (1.4K)
        • Efficient branching and merging (1.1K)
        • Fast (959)
        • Open source (845)
        • Better than svn (726)
        • Great command-line application (368)
        • Simple (306)
        • Free (291)
        • Easy to use (232)
        • Does not require server (222)
        • Distributed (27)
        • Small & Fast (22)
        • Feature based workflow (18)
        • Staging Area (15)
        • Most wide-spread VCS (13)
        • Role-based codelines (11)
        • Disposable Experimentation (11)
        • Frictionless Context Switching (7)
        • Data Assurance (6)
        • Efficient (5)
        • Just awesome (4)
        • Github integration (3)
        • Easy branching and merging (3)
        • Compatible (2)
        • Flexible (2)
        • Possible to lose history and commits (2)
        • Rebase supported natively; reflog; access to plumbing (1)
        • Light (1)
        • Team Integration (1)
        • Fast, scalable, distributed revision control system (1)
        • Easy (1)
        • Flexible, easy, Safe, and fast (1)
        • CLI is great, but the GUI tools are awesome (1)
        • It's what you do (1)
        • Phinx (0)
      CONS OF GIT
        • Hard to learn (16)
        • Inconsistent command line interface (11)
        • Easy to lose uncommitted work (9)
        • Worst documentation ever possibly made (7)
        • Awful merge handling (5)
        • Nonexistent preventive security flows (3)
        • Rebase hell (3)
        • When --force is disabled, cannot rebase (2)
        • Ironically even die-hard supporters screw up badly (2)
        • Doesn't scale for big data (1)

      related Git posts

      Simon Reymann, Senior Fullstack Developer at QUANTUSflow Software GmbH:

      Our whole DevOps stack consists of the following tools:

      • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
      • Respectively Git as revision control system
      • SourceTree as Git GUI
      • Visual Studio Code as IDE
      • CircleCI for continuous integration (automatize development process)
      • Prettier / TSLint / ESLint as code linter
      • SonarQube as quality gate
      • Docker as container management (incl. Docker Compose for multi-container application management)
      • VirtualBox for operating system simulation tests
      • Kubernetes as cluster management for docker containers
      • Heroku for deploying in test environments
      • nginx as web server (preferably used as facade server in production environment)
      • SSLMate (using OpenSSL) for certificate management
      • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
      • PostgreSQL as preferred database system
      • Redis as preferred in-memory database/store (great for caching)

      The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

      • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
      • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
      • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
      • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
      • Scalability: All-in-one framework for distributed systems.
      • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
      Tymoteusz Paul, Devops guy at X20X Development LTD:

      Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share with the world this way, and send people to read it instead ;). I will explain it on "live-example" of how the Rome got built, basing that current methodology exists only of readme.md and wishes of good luck (as it usually is ;)).

      It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment to produce a proper, production-grade product.

      I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of few debugging-friendly setting. This way you avoid the discrepancy between how production work vs how development works, which almost always causes major pains in the back of the neck, and with use of proper tools should mean no more work for the developers. That's why we start with Vagrant as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible which will do meat of the work and can be applied to almost anything: AWS, bare metal, docker, LXC, in open net, behind vpn - you name it.

      We must also give proper consideration to monitoring and logging hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which as I've mentioned earlier, are at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

      If we are happy with the state of the Ansible it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like quality REST API which comes built-in with TeamCity). It also comes with all the common-handy plugins like Slack or Apache Maven integration.

      The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides development environment must be sources from individual Vault instances. Keys to those containers should exist only on the CI/CD box and accessible by a few people (the less the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that appropriate security must be present. TeamCity shines in this department with excellent secrets-management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all developer has to do it is grab appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identifying and tagging the author (nothing like automated regression testing!).

      Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but also constantly peeking at the loads and do we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and into bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to use cloud providers and getting out is expensive. Here to embrace bare-metal hosting all you need is a help of some container-based self-hosting software, my personal preference is with Proxmox and LXC. Following that all you must write are ansible scripts to manage hardware of Proxmox, similar way as you do for Amazon EC2 (ansible supports both greatly) and you are good to go. One does not exclude another, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
