What is OpenResty?
Who uses OpenResty?
Here are some stack decisions, common use cases, and reviews by companies and developers who chose OpenResty in their tech stack.
We use nginx and OpenResty as our API proxy running on EC2 for auth, caching, and some rate limiting for our dozens of microservices. Since OpenResty supports embedded Lua, we were able to write a custom access module that calls out to our authentication service with the resource, method, and access token. If that check succeeds, critical account info is passed down to the underlying microservice. This proxy approach keeps all authentication and authorization in one place and provides a unified CX for our API users. Nginx is fast and cheap to run, though we are always exploring alternatives that are also economical. What do you use?
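A proxy-level auth check like the one described above might be sketched as an `access_by_lua_block` that issues a subrequest to the authentication service and forwards account info via headers. This is a minimal illustration, not their actual module; the `/internal-auth` location, the `auth_service` upstream, and the `X-Account-Id` header are all hypothetical names.

```nginx
# Internal-only location that proxies auth checks to the
# (hypothetical) auth service, forwarding resource, method, and token.
location /internal-auth {
    internal;
    proxy_pass http://auth_service/check;
    proxy_set_header X-Resource    $request_uri;
    proxy_set_header X-Method      $request_method;
    proxy_set_header Authorization $http_authorization;
}

location /api/ {
    access_by_lua_block {
        -- Subrequest to the auth service; runs in the access phase,
        -- before the request is proxied to any microservice.
        local res = ngx.location.capture("/internal-auth")
        if res.status ~= 200 then
            ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
        -- Pass critical account info down to the underlying
        -- microservice (header name is illustrative).
        ngx.req.set_header("X-Account-Id", res.header["X-Account-Id"])
    }
    proxy_pass http://microservices;
}
```

Keeping the check in the access phase means every route behind the proxy gets the same auth behavior without any per-service code.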
At Kong while building an internal tool, we struggled to route metrics to Prometheus and logs to Logstash without incurring too much latency in our metrics collection.
We replaced nginx with OpenResty on the edge of our tool, which allowed us to use the lua-nginx-module to run Lua code that captures metrics and records telemetry data during every request's log phase. Our code then pushes the metrics to a local aggregator process (written in Go), which in turn exposes them in the Prometheus Exposition Format for consumption by Prometheus. This solution reduced the number of components we needed to maintain and is fast thanks to NGINX and LuaJIT.
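The log-phase capture described above could look roughly like the sketch below. One wrinkle worth noting: the cosocket API is disabled in the log phase, so a common pattern is to record the sample there and ship it from a zero-delay `ngx.timer.at` callback. The aggregator address (a local UDP listener) and the wire format are assumptions for illustration.

```nginx
location / {
    proxy_pass http://backend;

    log_by_lua_block {
        -- Capture per-request telemetry in the log phase, after the
        -- response has been sent, so it adds no client-visible latency.
        local latency = tonumber(ngx.var.request_time) or 0
        local status  = ngx.status

        -- Cosockets are unavailable in the log phase, so push the
        -- sample from a zero-delay timer, where they are allowed.
        ngx.timer.at(0, function(premature)
            if premature then return end
            local sock = ngx.socket.udp()
            -- Hypothetical local Go aggregator listening on UDP 9125.
            sock:setpeername("127.0.0.1", 9125)
            sock:send(string.format("latency:%f|status:%d", latency, status))
            sock:close()
        end)
    }
}
```

The aggregator then exposes the accumulated samples on an HTTP endpoint in the Prometheus Exposition Format for Prometheus to scrape.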
I use OpenResty because it combines a high-performance, battle-tested network/protocol handler with the facilities to write both prototype- and production-grade code in a performant runtime. We can easily test and prove complex business logic in a highly performant environment (on the scale of hundreds of thousands of requests per second) without worrying about maintaining a lot of plumbing code.