Existing configuration: we have 100 local, expensive Docker-pod-based decryption services (input = encrypted, output = decrypted).
To reduce the load, 8 bare-metal Varnish servers sit between the client and the Docker pods. Varnish performance drops as the connection count to it grows.
-- The client C code is also integrated with Memcached for other types of data.
What would be my steps to compare Varnish vs. Memcached? The relevant numbers (a rough sizing sketch follows the list):
- TTL is 12 hours
- QPS is 500k across all Varnish servers combined
- connection count to each Varnish server is 360,000
- data size per query is 5k to 14k
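Back-of-envelope from those numbers (my own rough math, assuming the 5k-14k response size means kilobytes and that load spreads evenly over the 8 Varnish boxes):

/* Rough sizing sketch; the constants mirror the bullets above, and the
 * kilobyte reading of "5k to 14k" is an assumption. */
#include <stdio.h>

int main(void) {
    const double qps_total = 500e3;  /* 500k queries/sec across the tier */
    const int    servers   = 8;      /* bare-metal Varnish count */
    const double min_kb    = 5.0;    /* smallest response, KB (assumed) */
    const double max_kb    = 14.0;   /* largest response, KB (assumed) */

    double qps_per_server = qps_total / servers;
    printf("qps per server:    %.0f\n", qps_per_server);
    printf("egress per server: %.0f - %.0f MB/s\n",
           qps_per_server * min_kb / 1024.0,
           qps_per_server * max_kb / 1024.0);
    printf("aggregate egress:  %.1f - %.1f GB/s\n",
           qps_total * min_kb / (1024.0 * 1024.0),
           qps_total * max_kb / (1024.0 * 1024.0));
    return 0;
}

That works out to roughly 62.5k qps and 300-850 MB/s of egress per Varnish box, on top of 360k open connections each, which matches the connection-count symptom described above.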
thanks all!
You are mixing concerns. Varnish is a caching HTTP reverse proxy; memcached is an in-memory key-value cache for arbitrary data. If you are pulling from a backend that speaks HTTP, you can treat Varnish as something akin to memcached for pull-through caching, but keep in mind that in both cases your biggest concern will be cache invalidation rather than getting the cache working in the first place (famously one of the two "hard problems in CS").
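If you still want to compare the two from your C client, here is a minimal sketch of the two client-side paths you would be timing, assuming libmemcached and libcurl are available. The hostnames, port, key, and URL are placeholders, and a single request is obviously not a benchmark, so drive this from your real load generator.

/* Build (Linux): cc compare.c -lmemcached -lcurl -o compare */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <libmemcached/memcached.h>
#include <curl/curl.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Discard the HTTP body; only the timing matters here. */
static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata) {
    (void)ptr; (void)userdata;
    return size * nmemb;
}

int main(void) {
    /* Path 1: direct memcached lookup via libmemcached. */
    memcached_st *mc = memcached_create(NULL);
    memcached_server_add(mc, "memcached.example.internal", 11211); /* placeholder host */

    const char *key = "decrypted:some-object-id";                  /* placeholder key */
    size_t value_len = 0;
    uint32_t flags = 0;
    memcached_return_t rc;

    double t0 = now_sec();
    char *value = memcached_get(mc, key, strlen(key), &value_len, &flags, &rc);
    double t1 = now_sec();
    printf("memcached get: %s, %zu bytes, %.3f ms\n",
           memcached_strerror(mc, rc), value_len, (t1 - t0) * 1e3);
    free(value);
    memcached_free(mc);

    /* Path 2: HTTP GET through Varnish via libcurl. */
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    curl_easy_setopt(h, CURLOPT_URL,
                     "http://varnish.example.internal/decrypt/some-object-id"); /* placeholder URL */
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, discard);

    double t2 = now_sec();
    CURLcode cc = curl_easy_perform(h);
    double t3 = now_sec();
    printf("varnish get:   %s, %.3f ms\n", curl_easy_strerror(cc), (t3 - t2) * 1e3);

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}

Whatever harness you wrap around this, compare latency distributions and server-side CPU and connection counts under the same sustained load rather than single-request timings, since your stated problem is behaviour at high connection counts, not raw hit latency.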