When it comes to configuring a web server like Nginx, there are dozens of parameters and an almost endless number of configuration profiles. There are obvious places to start:

worker_processes 1;  
worker_connections 1024;  

Setting worker_processes equal to the number of CPU cores is usually a good start, and any experienced ops engineer will probably do just that. There is no real guide, however, to setting worker_connections - I have read countless blog posts in which this number has been argued over.
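As a reference point, a common starting configuration looks something like this (note that worker_connections must live inside the events block, and `auto` asks Nginx to match the worker count to the core count for you - the values here are illustrative, not a recommendation):

```nginx
# Let Nginx size its worker pool to the available CPU cores.
worker_processes auto;

events {
    # Maximum simultaneous connections per worker process.
    worker_connections 1024;
}
```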

There are then other, perhaps less obvious configuration options like:

gzip_comp_level 6;  

This number has been debated extensively too. Once you add something like PHP-FPM into the equation, along with its own configuration options, there are thousands of adjustments that could be made.
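For context, that directive sits alongside the other gzip settings; a typical block might look like this (the level and MIME types here are illustrative):

```nginx
gzip on;
# Compression level 1-9: higher saves bandwidth but costs more CPU.
gzip_comp_level 6;
# Only compress text-like responses; already-compressed formats gain little.
gzip_types text/plain text/css application/json application/javascript;
```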

I have started a project to create an auto-optimising version of Nginx. What began as a way to monitor the health of a server in real time using Nginx, Lua and Redis has turned into an AI-based learning service. The idea is to monitor all aspects of the application in the backend - latencies, requests per second, errors and other potential bottlenecks like disk I/O - and use this data to auto-optimise Nginx, increasing the performance of the web server while providing valuable insight back to the developer regarding the state of their application.

Nginx with the Lua and Redis modules tracking web server performance is already in production. This has been relatively simple to set up with Lua rules. Inside the http block of Nginx, we simply define the lua_package_path and state the function to apply:

...
lua_package_path '/path/to/lua/5.1/?.lua;;';  
...
log_by_lua '  
  local alc = require "alc"
  alc.log()
';  

Our Lua file takes the shape:

local alc = {  
    _VERSION     = 'alc-dev',
    _DESCRIPTION = 'Alchemy Nginx Lua Rules',
}

...

function alc.log()  
  -- Assumes the redis-lua client library is on the package path.
  local redis = require 'redis'
  local client = redis.connect('127.0.0.1', 6379)
  -- Bucket all counters into per-minute keys; the leading "!" makes
  -- os.date use UTC, matching the "Z" suffix in the timestamp.
  local datetime = os.date("!%Y-%m-%dT%H:%M:00Z")
  -- Total request latency (ms) accumulated for this minute.
  local latencykey = 'req-lat:' .. datetime
  local endtime = ngx.now() * 1000
  local starttime = ngx.req.start_time() * 1000
  -- INCRBY requires an integer, so floor the millisecond delta.
  client:incrby(latencykey, math.floor(endtime - starttime))
  -- Count of responses per status code for this minute.
  local codekey = 'resp-code:' .. datetime .. ':' .. ngx.status
  client:incr(codekey)
  -- Total bytes sent for this minute.
  local byteskey = 'resp-size:' .. datetime
  client:incrby(byteskey, ngx.var.bytes_sent)
end

...

return alc  

A simpler hello-world example would look something like:

function alc.hello()  
  ngx.say("hello")
end  

Lua has access to most of Nginx in terms of variable support, so we are able to keep track of what is happening on the server in real time. A separate Python service runs on the server and reports on aggregated data regarding the health of the web server.
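The aggregation itself is straightforward once the counters exist. As a minimal sketch - the key layout follows the Lua rules above, but the function name and the exact metrics are illustrative, not the production service:

```python
def minute_summary(latency_total_ms, status_counts, bytes_total):
    """Summarise one minute of the counters written by the Lua log() rule.

    latency_total_ms: value of the req-lat:<minute> key (summed latency, ms)
    status_counts:    {status_code: count} from the resp-code:<minute>:<code> keys
    bytes_total:      value of the resp-size:<minute> key
    """
    requests = sum(status_counts.values())
    if requests == 0:
        return {"requests": 0, "avg_latency_ms": 0.0,
                "error_rate": 0.0, "bytes_sent": bytes_total}
    # 5xx responses are counted as errors for the health report.
    errors = sum(n for code, n in status_counts.items() if code >= 500)
    return {
        "requests": requests,
        "avg_latency_ms": latency_total_ms / requests,
        "error_rate": errors / requests,
        "bytes_sent": bytes_total,
    }

# e.g. three requests in a minute: two 200s, one 500, 3000 ms total latency
summary = minute_summary(3000, {200: 2, 500: 1}, 4096)
```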

With the data collected, the next phase is to train a model to learn what is and is not working, operating within a set of guidelines to improve overall performance.