As mentioned in my previous articles, we are working on an application that processes around 25,000 requests per minute. We receive all of these requests on a single 2-core CPU machine and push them to Kafka, and we are confident that a 2-core machine is enough to handle this load. That gives us roughly a 5 ms window to respond to each request (2 cores × 60,000 ms ÷ 25,000 requests ≈ 4.8 ms).
In other words, we will be in the soup if we make even one small mistake in our API server (which receives each request, processes it, and pushes it to Kafka).
Everything was going great until last week, because our API server used to handle around 20,000 requests per minute. But last week we started getting alerts about request failures: our newest customer went live with our product, and we are now receiving around 25,000 requests per minute.
So where we previously had a 6 ms window to process each request, we now have around 5 ms. We could add another server, but before doing that we wanted to make sure we really need one, or whether we can improve what we have. When we looked at our app's performance in New Relic, we saw that a noticeable share of our application time was being spent in `stat` calls.
After a little googling, we understood that `stat` is related to I/O (the file system). That meant we were doing I/O operations while processing requests. Apart from logging, nothing came to mind, and we weren't doing any unnecessary logging. I realized the time had arrived to dig deeper.
After spending some more time with our great friend (Google), I found this super article. Reading it made me realize that we unknowingly use a lot of middleware, and if we don't know exactly what those middlewares are doing, our application's performance is in great danger.
Usually, our typical app.js contains our routes, paths & middlewares (including request logging & static content serving). So I wrote a function to measure how much time each middleware consumes while processing a request.
This is our typical App.js. Let me explain it section by section first, and then explain the problems and their solutions as I understand them.
Section 1: The usual imports: the Express framework, body-parser, an event emitter for our profiling purposes, and morgan for request logging. Then we create the Express app and event emitter objects.
Section 2: The event-handling code. It prints which middleware or function ran and how long it took to do its job.
Section 3: Our magic sauce. This function takes another function as input and implants our event emitter into it, so we can measure how long the input function takes to do its job. Middlewares are basically three types of function: the first type takes request & response as arguments, the second takes request, response & next, and the third (an error handler) takes error, request, response & next. For now, we handle the first two types.
Section 4: Our middlewares.
- morgan for request logging. Here `dev` is the logging format; for more details, see the morgan middleware GitHub repository.
- body-parser, for extracting the entire body of an incoming request stream and exposing it on `req.body`. Here we use the JSON format.
- The static-content-serving middleware, for serving static files over HTTP.
Section 5: The route/path that handles the user request. We put in a 2 ms timeout, assuming our processing takes about 2 ms per request.
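Putting Sections 4 and 5 together, the middleware wiring described above would look roughly like this. It is a sketch under stated assumptions: the `/check` route and port 3000 come from the article, while the response payload and the exact mounting calls are my reconstruction.

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const morgan = require('morgan');

const app = express();

// Section 4: middlewares, in the original (problematic) order.
app.use(morgan('dev'));            // request logging, 'dev' format
app.use(bodyParser.json());        // parse JSON bodies onto req.body
app.use(express.static('public')); // serve static files over HTTP

// Section 5: the /check route, simulating ~2 ms of processing.
app.get('/check', (req, res) => {
  setTimeout(() => res.json({ status: 'ok' }), 2);
});

app.listen(3000);
```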
After running our App.js (the application) and hitting http://localhost:3000/check, I got the output below.
You can observe two problems here:
- The logger takes noticeable time on some requests.
- serveStatic takes time on every request, even though it isn't needed at all for the /check route/path.
Both problems are related to I/O operations. If we solve these two problems, our `stat` performance issue will be resolved.
- We don't need request logging in the app: nginx is already doing that in its access.log, and it does so more efficiently than our logging.
- The placement of the serveStatic middleware (express.static('public')) is wrong. We should mount it last, so that it isn't invoked for every GET request.
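With both fixes applied (morgan removed, since nginx's access.log already covers request logging, and express.static mounted after the routes), the wiring could look like this. Again, a sketch rather than the exact production code:

```javascript
const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// morgan is gone: nginx already logs every request in access.log.
app.use(bodyParser.json());

// Routes first, so requests to /check never touch the file system.
app.get('/check', (req, res) => {
  setTimeout(() => res.json({ status: 'ok' }), 2);
});

// Static serving last: it only runs for requests no route matched.
app.use(express.static('public'));

app.listen(3000);
```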
Now our application's APIs process a request and respond in around 4 ms.
Peace. Happy Coding.