What is Google Compute Engine and what are its top alternatives?
Google Compute Engine is a flexible and scalable Infrastructure as a Service (IaaS) offering that lets users create virtual machines and run a wide range of workloads on Google's infrastructure. Key features include customizable virtual machine configurations, global load balancing, automatic scaling, and integration with other Google Cloud services. Limitations include a complex pricing structure, lack of Windows support on some machine types, and the need for expertise in managing virtual machine instances.
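To make "customizable virtual machine configurations" concrete, here is a minimal, hedged sketch of provisioning a Compute Engine VM from Python by shelling out to the gcloud CLI. It assumes gcloud is installed and authenticated; the project ID, zone, machine type, and image family below are placeholders to replace with your own values.

```python
import subprocess

def create_instance(name: str, project: str, zone: str = "us-central1-a") -> None:
    """Create a small Debian VM on Compute Engine via the gcloud CLI (sketch)."""
    subprocess.run(
        [
            "gcloud", "compute", "instances", "create", name,
            "--project", project,
            "--zone", zone,
            "--machine-type", "e2-small",      # placeholder; custom machine types are also supported
            "--image-family", "debian-12",     # placeholder image family
            "--image-project", "debian-cloud",
        ],
        check=True,  # raise CalledProcessError if gcloud reports a failure
    )

if __name__ == "__main__":
    create_instance("demo-vm", project="my-project-id")  # hypothetical project ID
```

The google-cloud-compute client library exposes the same operations for code that should not shell out to the CLI.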
- Amazon EC2: Amazon Elastic Compute Cloud (EC2) offers a wide range of instance types, flexible pricing options, and integration with other AWS services. Pros include a vast global infrastructure, pay-as-you-go pricing model, and a wide selection of instance types. Cons may include complex pricing and additional costs for data transfer.
- Microsoft Azure Virtual Machines: Azure VMs provide on-demand computing power with various sizes and configurations to meet different workload requirements. Key features include hybrid cloud connectivity, auto-scaling, and support for Windows and Linux environments. Pros include easy integration with other Azure services, while cons may include potential downtime during updates.
- DigitalOcean Droplets: DigitalOcean offers Droplets as simple, scalable virtual machines with SSD storage and global data centers. Features include easy-to-use control panel, fixed pricing, and seamless integration with other DigitalOcean services. Pros include straightforward pricing and quick deployment, while cons may include limited instance types and services compared to bigger cloud providers.
- Vultr: Vultr provides high-performance cloud compute instances with multiple locations, SSD storage, and flexible configurations. Key features include hourly billing, fast provisioning, and a user-friendly control panel. Pros include competitive pricing and worldwide data centers, while cons may include fewer advanced features compared to major cloud providers.
- IBM Cloud Virtual Servers: IBM Cloud offers Virtual Servers with customizable configurations, scalable resources, and high availability. Features include integrated security, backup options, and support for various operating systems. Pros include IBM's enterprise-level security and compliance, while cons may include higher pricing for advanced features.
- Oracle Cloud Infrastructure Compute: Oracle's Compute service provides customizable virtual machines with high performance, security, and reliability. Key features include bare metal instances, advanced networking options, and integration with Oracle Cloud services. Pros include strong security features and reliability, while cons may include limited global presence compared to other cloud providers.
- Alibaba Cloud Elastic Compute Service: Alibaba's ECS offers scalable virtual servers with burstable instances, flexible billing options, and a vast global network. Features include auto-scaling, load balancing, and integrated security services. Pros include competitive pricing and extensive support for global customers, while cons may include lower brand recognition outside Asia.
- UpCloud: UpCloud provides high-performance cloud servers with SSD storage, private networking, and customizable configurations. Key features include fast deployment, hourly billing, and an intuitive control panel. Pros include superior performance and reliability, while cons may include limited global presence and fewer data center locations compared to larger providers.
- Linode: Linode offers cloud hosting with simple pricing, SSD storage, and a variety of instance types. Features include fast networking, API access, and a rich library of tutorials and guides. Pros include affordable pricing and excellent customer support, while cons may include limited managed services and smaller global reach compared to major players.
- Scaleway: Scaleway provides virtual cloud servers with ARM architecture, flexible configurations, and bare metal options. Key features include high-performance computing, private networks, and pay-as-you-go billing. Pros include innovative ARM-based servers and competitive pricing, while cons may include fewer cloud services and a smaller customer base compared to larger providers.
Top Alternatives to Google Compute Engine
- Google App Engine
Google has a reputation for highly reliable, high performance infrastructure. With App Engine you can take advantage of the 10 years of knowledge Google has in running massively scalable, performance driven systems. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. ...
- DigitalOcean
We take the complexities out of cloud hosting by offering blazing fast, on-demand SSD cloud servers, straightforward pricing, a simple API, and an easy-to-use control panel. ...
- Google Cloud Platform
It helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning. It is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search and YouTube. ...
- Amazon EC2
It is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers (a minimal provisioning sketch follows this list). ...
- Microsoft Azure
Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing IT environment. ...
- Kubernetes
Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...
- JavaScript
JavaScript is best known as the scripting language for Web pages, but it is also used in many non-browser environments such as Node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic, and supports object-oriented, imperative, and functional programming styles. ...
- Git
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...
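As noted in the Amazon EC2 item above, a minimal provisioning sketch with boto3 looks roughly like this. It assumes AWS credentials are already configured; the AMI ID and instance type are placeholder values.

```python
import boto3

def launch_instance(ami_id: str, instance_type: str = "t3.micro") -> str:
    """Launch a single EC2 instance and return its ID (sketch)."""
    ec2 = boto3.resource("ec2")
    instances = ec2.create_instances(
        ImageId=ami_id,              # placeholder AMI ID for your region
        InstanceType=instance_type,  # "resizable compute capacity": pick a larger type as needed
        MinCount=1,
        MaxCount=1,
    )
    return instances[0].id

if __name__ == "__main__":
    print(launch_instance("ami-0123456789abcdef0"))  # hypothetical AMI ID
```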
Google Compute Engine alternatives & related posts
Google App Engine
- Easy to deploy145
- Auto scaling106
- Good free plan80
- Easy management62
- Scalability56
- Low cost35
- Comprehensive set of features32
- All services in one place28
- Simple scaling22
- Quick and reliable cloud servers19
- Granular Billing6
- Easy to develop and unit test5
- Monitoring gives comprehensive set of key indicators4
- Really easy to quickly bring up a full stack3
- Create APIs quickly with cloud endpoints3
- Mostly up2
- No Ops2
related Google App Engine posts
Uploadcare has built an infinitely scalable infrastructure by leveraging AWS. Building on top of AWS allows us to process 350M daily requests for file uploads, manipulations, and deliveries. When we started in 2011 the only cloud alternative to AWS was Google App Engine which was a no-go for a rather complex solution we wanted to build. We also didn’t want to buy any hardware or use co-locations.
Our stack handles receiving files, communicating with external file sources, managing file storage, managing user and file data, processing files, file caching and delivery, and managing user interface dashboards.
At its core, Uploadcare runs on Python. The EuroPython 2011 conference in Florence really inspired us, and that, coupled with the fact that Python was general enough to solve all of our challenges, informed this decision. Additionally, we had prior experience working in Python.
We chose to build the main application with Django because of its feature completeness and large footprint within the Python ecosystem.
All the communications within our ecosystem occur via several HTTP APIs, Redis, Amazon S3, and Amazon DynamoDB. We decided on this architecture so that our system could be scalable in terms of storage and database throughput. This way we only need Django running on top of our database cluster. We use PostgreSQL as our database because it is considered an industry standard when it comes to clustering and scaling.
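Not Uploadcare's actual configuration, just a generic sketch of what a Django application backed by a PostgreSQL cluster with Redis caching can look like in settings.py; the hostnames and credentials below are hypothetical placeholders.

```python
# settings.py (generic sketch; hosts and credentials are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "pg-cluster.internal",   # hypothetical PostgreSQL cluster endpoint
        "PORT": "5432",
        "NAME": "app",
        "USER": "app",
        "PASSWORD": "change-me",
    }
}

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",  # built-in Redis backend in Django 4.0+
        "LOCATION": "redis://redis.internal:6379/0",  # hypothetical Redis host
    }
}
```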
So, the shift from Amazon EC2 to Google App Engine and generally #AWS to #GCP was a long decision and in the end, it's one that we've taken with eyes open and that we reserve the right to modify at any time. And to be clear, we continue to do a lot of stuff with AWS. But, by default, the content of the decision was, for our consumer-facing products, we're going to use GCP first. And if there's some reason why we don't think that's going to work out great, then we'll happily use AWS. In practice, that hasn't really happened. We've been able to meet almost 100% of our needs in GCP.
So it's basically Google Kubernetes Engine; we're mostly running stuff on Kubernetes right now.
#AWStoGCPmigration #cloudmigration #migration
DigitalOcean
- Great value for money560
- Simple dashboard364
- Good pricing362
- Ssds300
- Nice ui250
- Easy configuration191
- Great documentation156
- Ssh access138
- Great community135
- Ubuntu24
- Docker13
- IPv6 support12
- Private networking10
- 99.99% uptime SLA8
- Simple API7
- Great tutorials7
- 55 Second Provisioning6
- One Click Applications5
- Dokku4
- LAMP4
- Debian4
- CoreOS4
- Node.js4
- 1Gb/sec Servers3
- Word Press3
- Mean3
- LEMP3
- Simple Control Panel3
- Ghost3
- Runs CoreOS2
- Quick and no nonsense service2
- Django2
- Good Tutorials2
- Speed2
- Ruby on Rails2
- GitLab2
- Hex Core machines with dedicated ECC Ram and RAID SSD s2
- CentOS1
- Spaces1
- KVM Virtualization1
- Amazing Hardware1
- Transfer Globally1
- Fedora1
- FreeBSD1
- Drupal1
- FreeBSD Amp1
- Magento1
- ownCloud1
- RedMine1
- My go to server provider1
- Ease and simplicity1
- Nice1
- Find it superfitting with my requirements (SSD, ssh.1
- Easy Setup1
- Cheap1
- Static IP1
- It's the easiest to get started for small projects1
- Automatic Backup1
- Great support1
- Quick and easy to set up1
- Servers on demand - literally1
- Reliability1
- Variety of services0
- Managed Kubernetes0
- No live support chat3
- Pricing3
related DigitalOcean posts
This week, we finally released NurseryPeople.com. In the end, I chose to provision our server on DigitalOcean. So far, I am SO happy with that decision. Although setting everything up was a challenge, and I learned a lot, DigitalOcean's blogs helped in so many ways. I was able to set up nginx and the Laravel web app pretty smoothly. I am also using Buddy for deploying changes made in git, which is super awesome. All I have to do in order to deploy is push my code to my private repo, and Buddy transfers everything over to DigitalOcean. So far, we haven't had any downtime, and DigitalOcean's prices are quite fair for the power under the hood.
Coming from a non-web-development background, I was a bit lost at first and bewildered by all the varying tools and platforms, and spent much too long evaluating before eventually deciding on Laravel as the main core of my development.
But as I started development with Laravel, that led me to discover Vue.js for creating beautiful front-end components that were easy to configure and extend, so I decided to standardise on Vue.js for most of my front-end development.
During my search for additional Vue.js components, a chance comment in a @laravel forum led me to discover Quasar Framework, initially for its wide range of in-built components ... but once I realised that Quasar Framework allowed me to use the same codebase to create apps for SPA, PWA, iOS, Android, and Electron, I was hooked.
So, I'm now using mainly just Quasar Framework for all the front-end, with Laravel providing a backend API service to the front-end apps.
I'm deploying this all to DigitalOcean droplets via a service called Moss.sh, which deploys my private GitHub repositories directly to DigitalOcean in real time.
Google Cloud Platform
- Good app Marketplace for Beginner and Advanced User5
- 1 year free trial credit USD3004
- Premium tier IP address3
- Live chat support3
- Cheap3
related Google Cloud Platform posts
My days of using Firebase are over! I want to move to something scalable and possibly less cheap. In the past seven days I have done my research on what type of DB best fits my needs, and have chosen to go with a non-relational DB: MongoDB. Although I understand it, I need help understanding how to set up the architecture. I have the client app (Flutter/Dart) that would make HTTP requests to the web server (Node/Express), and from there the web server would query data from MongoDB.
How should I go about hosting the web server and MongoDB; do they have to be hosted together (this is where a lot of my confusion is)? Based on the research I've done, it seems like the standard practice would be to host on a VM provided by services such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, etc. If there are better ways, such as possibly self-hosting (more responsibility), should I? Anyways, I just want to confirm with a community (you guys) to make sure I do this right; all input is highly appreciated.
I want to make an application like Zomato or #Foodpanda.
Which stack is best for this? I have expertise in Java and Angular. What stack would you recommend?
Web: microservices or monolith? Angular or React? Amazon Web Services (AWS) or Google Cloud Platform? DB: SQL or NoSQL?
Mobile cross-platform: React Native or Flutter?
Note: We are a team of 5. What languages do you recommend if I go with microservices?
Thanks
Amazon EC2
- Quick and reliable cloud servers647
- Scalability515
- Easy management393
- Low cost277
- Auto-scaling271
- Market leader89
- Backed by amazon80
- Reliable79
- Free tier67
- Easy management, scalability58
- Flexible13
- Easy to Start10
- Elastic9
- Web-scale9
- Widely used9
- Node.js API7
- Industry Standard5
- Lots of configuration options4
- GPU instances2
- Simpler to understand and learn1
- Extremely simple to use1
- Amazing for individuals1
- All the Open Source CLI tools you could want.1
- Ui could use a lot of work13
- High learning curve when compared to PaaS6
- Extremely poor CPU performance3
related Amazon EC2 posts
To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges like supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.
Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.
We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters comprise a fleet of 450 r4.8xl EC2 instances. Together, the Presto clusters have over 100 TB of memory and 14K vCPU cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ Pinterest employees in total) using Presto, who run about 400K queries on these clusters per month.
Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest, and we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.
Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency on bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.
#BigData #AWS #DataScience #DataEngineering
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
- Respectively Git as revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (automatize development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:
- Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services); see the sketch after this list.
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
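To illustrate the deployment model referenced in the list above, here is a minimal, hedged sketch using the official Kubernetes Python client. It assumes a reachable cluster and a local kubeconfig; the names, labels, and image are placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (the same one kubectl uses).
config.load_kube_config()

# Declare the desired state: three replicas of a single-container pod.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]  # placeholder image
            ),
        ),
    ),
)

# Submit it; the control plane then schedules pods so the cluster matches this declared state.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

An equivalent YAML manifest applied with kubectl does the same thing; the point is that you declare the desired state and Kubernetes keeps the workloads matching it.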
Microsoft Azure
- Scales well and quite easy114
- Can use .Net or open source tools96
- Startup friendly81
- Startup plans via BizSpark73
- High performance62
- Wide choice of services38
- Low cost32
- Lots of integrations32
- Reliability31
- Twillio & Github are directly accessible19
- RESTful API13
- PaaS10
- Enterprise Grade10
- Startup support10
- DocumentDB8
- In person support7
- Free for students6
- Service Bus6
- Virtual Machines6
- Redis Cache5
- It rocks5
- Storage, Backup, and Recovery4
- Infrastructure Services4
- SQL Databases4
- CDN4
- Integration3
- Scheduler3
- Preview Portal3
- HDInsight3
- Built on Node.js3
- Big Data3
- BizSpark 60k Azure Benefit3
- IaaS3
- Backup2
- Open cloud2
- Web2
- SaaS2
- Big Compute2
- Mobile2
- Media2
- Dev-Test2
- Storage2
- StorSimple2
- Machine Learning2
- Stream Analytics2
- Data Factory2
- Event Hubs2
- Virtual Network2
- ExpressRoute2
- Traffic Manager2
- Media Services2
- BizTalk Services2
- Site Recovery2
- Active Directory2
- Multi-Factor Authentication2
- Visual Studio Online2
- Application Insights2
- Automation2
- Operational Insights2
- Key Vault2
- Infrastructure near your customers2
- Easy Deployment2
- Enterprise customer preferences1
- Documentation1
- Security1
- Best cloud platfrom1
- Easy and fast to start with1
- Remote Debugging1
- Confusing UI7
- Expensive plesk on Azure2
related Microsoft Azure posts
I've heard that I have the ability to write well, at times. When it flows, it flows. I decided to start blogging in 2013 on Blogger. I started a company and joined BizSpark with the Microsoft Azure allotment. I created a WordPress blog and did a migration at some point. A lot happened in the time after that migration, but I stopped coding and changed cities during tumultuous times that taught me many lessons concerning mental health and productivity. I eventually graduated from BizSpark and outgrew the credit allotment. That killed the WordPress blog.
I blogged about writing again on the existing Blogger blog but it didn't feel right. I looked at a few options where I wouldn't have to worry about hosting cost indefinitely and Jekyll stood out with GitHub Pages. The Importer was fairly straightforward for the existing blog posts.
Todo:
- Set up redirects for all posts on Blogger. The URI format is different so a complete redirect wouldn't work. Although, there may be something in Jekyll that could manage the redirects. I did notice the old URLs were stored in the front matter. I'm working on a command-line Ruby gem for the current plan.
- I did find some of the lost WordPress posts on archive.org that I downloaded with the waybackmachinedownloader. I think I might write an importer for that.
- I still have a few Disqus comment threads to map
I'm planning to create a web application and also a mobile application to provide a very good shopping experience to end customers. In short, my application will aggregate product details from different sources and give the user a clear picture of when and where to buy a product with the best quality and cost.
I have planned to develop this in multiple milestones, adding features over time, and I have picked the core part first (aggregating the product details from different sources).
Based on my work experience and knowledge, I have chosen the following stacks for this mission.
UI: I would like to develop this application using React, React Router and React Native, since I'm a little familiar with these and, most importantly, they will help in developing both web and mobile apps. In addition, I'm going to use JavaScript, jQuery, jQuery UI, jQuery Mobile, and Bootstrap wherever required.
Service: I have planned to use Java as the main business-layer language, as I have 7+ years of experience with it and believe I can do better work in Java than in other languages. In addition, I'm thinking of using Node.js.
Database and ORM: I'm going to pick MySQL as the DB and Hibernate as the ORM, since I have good knowledge of and work experience with this combination.
Search Engine: I need to deal with a large amount of product data and its detailed info to provide enough detail to the end user, while also focusing on performance. So I have decided to use Solr as the search engine for product search and suggestions. In addition, I'm thinking of replacing Solr with Elasticsearch once I have explored/reviewed Elasticsearch enough.
Host: As of now, my plan is to complete the application with decent features first and deploy it in a free hosting environment like Docker and Heroku, and then, once it is stable, use the AWS products Amazon S3, EC2, Amazon RDS and Amazon Route 53. I'm not sure what Microsoft Azure's specialty is compared to Heroku and Amazon EC2 Container Service. Anyhow, I will explore these once again and pick the best-suited one for my requirements once I reach that level.
Build and Repositories: I have decided to choose Apache Maven and Git, as these are my favorites and are also very popular for builds and repositories, respectively.
Additional Utilities :) - I would like to choose Codacy for code review, as their Startup plan will be very helpful for this application. I'm already experienced with Google Checkstyle and SonarQube; even so, I'm looking at Codacy.
Happy Coding! Suggestions are welcome! :)
Thanks, Ganesa
Kubernetes
- Leading docker container management solution166
- Simple and powerful129
- Open source107
- Backed by google76
- The right abstractions58
- Scale services25
- Replication controller20
- Permission managment11
- Supports autoscaling9
- Simple8
- Cheap8
- Self-healing6
- Open, powerful, stable5
- Reliable5
- No cloud platform lock-in5
- Promotes modern/good infrascture practice5
- Scalable4
- Quick cloud setup4
- Custom and extensibility3
- Captain of Container Ship3
- Cloud Agnostic3
- Backed by Red Hat3
- Runs on azure3
- A self healing environment with rich metadata3
- Everything of CaaS2
- Gke2
- Golang2
- Easy setup2
- Expandable2
- Sfg2
- Steep learning curve16
- Poor workflow for development15
- Orchestrates only infrastructure8
- High resource requirements for on-prem clusters4
- Too heavy for simple systems2
- Additional vendor lock-in (Docker)1
- More moving parts to secure1
- Additional Technology Overhead1
related Kubernetes posts
How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages : https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark
JavaScript
- Can be used on frontend/backend1.7K
- It's everywhere1.5K
- Lots of great frameworks1.2K
- Fast897
- Light weight745
- Flexible425
- You can't get a device today that doesn't run js392
- Non-blocking i/o286
- Ubiquitousness237
- Expressive191
- Extended functionality to web pages55
- Relatively easy language49
- Executed on the client side46
- Relatively fast to the end user30
- Pure Javascript25
- Functional programming21
- Async15
- Full-stack13
- Setup is easy12
- Its everywhere12
- Future Language of The Web12
- Because I love functions11
- JavaScript is the New PHP11
- Like it or not, JS is part of the web standard10
- Expansive community9
- Everyone use it9
- Can be used in backend, frontend and DB9
- Easy9
- Most Popular Language in the World8
- Powerful8
- Can be used both as frontend and backend as well8
- For the good parts8
- No need to use PHP8
- Easy to hire developers8
- Agile, packages simple to use7
- Love-hate relationship7
- Photoshop has 3 JS runtimes built in7
- Evolution of C7
- It's fun7
- Hard not to use7
- Versitile7
- Its fun and fast7
- Nice7
- Popularized Class-Less Architecture & Lambdas7
- Supports lambdas and closures7
- It let's me use Babel & Typescript6
- Can be used on frontend/backend/Mobile/create PRO Ui6
- Client side JS uses the visitors CPU to save Server Res6
- Easy to make something6
- Clojurescript5
- Promise relationship5
- Stockholm Syndrome5
- Function expressions are useful for callbacks5
- Scope manipulation5
- Everywhere5
- Client processing5
- What to add5
- Because it is so simple and lightweight4
- Only Programming language on browser4
- Test1
- Hard to learn1
- Test21
- Not the best1
- Easy to understand1
- Subskill #41
- Easy to learn1
- Hard 彤0
- A constant moving target, too much churn22
- Horribly inconsistent20
- Javascript is the New PHP15
- No ability to monitor memory utilitization9
- Shows Zero output in case of ANY error8
- Thinks strange results are better than errors7
- Can be ugly6
- No GitHub3
- Slow2
- HORRIBLE DOCUMENTS, faulty code, repo has bugs0
related JavaScript posts
Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.
But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.
But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.
Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot- I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.
Git
- Distributed version control system1.4K
- Efficient branching and merging1.1K
- Fast959
- Open source845
- Better than svn726
- Great command-line application368
- Simple306
- Free291
- Easy to use232
- Does not require server222
- Distributed27
- Small & Fast22
- Feature based workflow18
- Staging Area15
- Most wide-spread VSC13
- Role-based codelines11
- Disposable Experimentation11
- Frictionless Context Switching7
- Data Assurance6
- Efficient5
- Just awesome4
- Github integration3
- Easy branching and merging3
- Compatible2
- Flexible2
- Possible to lose history and commits2
- Rebase supported natively; reflog; access to plumbing1
- Light1
- Team Integration1
- Fast, scalable, distributed revision control system1
- Easy1
- Flexible, easy, Safe, and fast1
- CLI is great, but the GUI tools are awesome1
- It's what you do1
- Phinx0
- Hard to learn16
- Inconsistent command line interface11
- Easy to lose uncommitted work9
- Worst documentation ever possibly made7
- Awful merge handling5
- Unexistent preventive security flows3
- Rebase hell3
- When --force is disabled, cannot rebase2
- Ironically even die-hard supporters screw up badly2
- Doesn't scale for big data1
related Git posts
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), and only stop once a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing a dev environment to produce a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and logging hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which as I've mentioned earlier, are at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible roles, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.