How Does NGINX Work?
NGINX uses a predictable process model that is tuned to the available hardware resources:
- The master process performs the privileged operations such as reading configuration and binding to ports, and then creates a small number of child processes (the next three types).
- The cache loader process runs at startup to load the disk‑based cache into memory, and then exits. It is scheduled conservatively, so its resource demands are low.
- The cache manager process runs periodically and prunes entries from the disk caches to keep them within the configured sizes.
- The worker processes do all of the work! They handle network connections, read and write content to disk, and communicate with upstream servers.
- Each request is handled from start to finish by a single worker process.
Configuration files
sites-enabled and sites-available
sites-available: This directory contains individual configuration files for each website or application that NGINX can serve.
sites-enabled: This directory contains symbolic links (or sometimes actual copies) to the configuration files in the sites-available directory.
NGINX only reads configuration files from the sites-enabled directory when it starts or reloads. This separation allows administrators to easily enable or disable sites without deleting or moving configuration files.
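The mechanism behind this is just an include in the main configuration file. A minimal sketch, assuming the conventional Debian/Ubuntu layout (the paths are illustrative):

# /etc/nginx/nginx.conf (excerpt)
http {
    # Only files (or symlinks) present in sites-enabled are loaded;
    # configs that exist solely in sites-available are ignored.
    include /etc/nginx/sites-enabled/*;
}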
cd /etc/nginx → contains the configuration files
The way nginx and its modules work is determined in the configuration file. By default, the configuration file is named nginx.conf and placed in the directory /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.
[http {}, events {}] → these blocks are called contexts
Example
- Context: determines the scope and inheritance of directives, helping organize configuration settings and control their application at different levels
Main Context
├── Events Context
│   ├── worker_connections
│   ├── use
│   └── multi_accept
├── HTTP Context
│   ├── log_format
│   ├── access_log
│   ├── default_type
│   ├── include
│   └── Server Context
│       ├── listen
│       ├── server_name
│       ├── ssl_certificate
│       └── Location Context
│           ├── root
│           ├── proxy_pass
│           ├── try_files
│           └── rewrite
└── Stream Context
    ├── log_format
    └── Server Context
        ├── listen
        └── upstream
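As a concrete illustration, here is a minimal sketch of how these contexts nest in an actual nginx.conf (the names, paths, and values are illustrative, not recommendations):

# main context
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    server {
        listen 80;
        server_name example.com;

        location / {
            root /var/www/html;
            try_files $uri $uri/ =404;
        }
    }
}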
- worker_processes: defaults to 1; set it to auto or to the number of CPU cores available.
- worker_rlimit_nofile: the number of files a worker process may hold open for connections. A common rule of thumb is 2 × worker_connections, since each connection needs a file descriptor; when NGINX is used as a reverse proxy, allow roughly 3 ×, because each proxied request needs one descriptor for the client connection plus descriptors for the upstream connection and for storing the response from the proxy.
- worker_connections: the maximum number of simultaneous connections each worker process can handle (not per second); once the threshold is reached, connections above the count are not accepted. The note's advice is to set it to about 2 × the number of CPUs when NGINX is not used as a reverse proxy. (A combined sketch follows this list.)
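A tuning sketch combining the directives above (the numbers are illustrative placeholders, not tuned values):

# main context
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 8192;      # per-worker open-file limit

events {
    worker_connections 4096;    # simultaneous connections per worker
    multi_accept on;            # accept as many pending connections as possible at once
}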
How nginx processes a request
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port.
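For example, with name-based virtual servers like the sketch below (hostnames and paths are illustrative), a request with "Host: example.org" is routed to the second server, while a request with an unknown or missing Host header falls back to the server marked default_server:

server {
    listen 80 default_server;   # receives requests whose Host matches no server_name
    server_name example.net;
    root /var/www/default;
}

server {
    listen 80;
    server_name example.org www.example.org;
    root /var/www/example.org;
}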
Reloading the server with zero downtime
Updating NGINX configuration is a very simple, lightweight, and reliable operation. It typically just means running the nginx -s reload command, which checks the configuration on disk and sends the master process a SIGHUP signal.
When the master process receives a SIGHUP, it does two things:
- Reloads the configuration and forks a new set of worker processes. These new worker processes immediately begin accepting connections and processing traffic (using the new configuration settings).
- Signals the old worker processes to gracefully exit. The worker processes stop accepting new connections. As soon as each current HTTP request completes, the worker process cleanly shuts down the connection (that is, there are no lingering keepalives). Once all connections are closed, the worker processes exit.
This reload process can cause a small spike in CPU and memory usage, but it’s generally imperceptible compared to the resource load from active connections.
Logs
access_log
Contains a log entry for every request that came to the server.
http {
    ...
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    ...
}
Error Logs
http {
    ...
    error_log /var/log/nginx/error.log;
    ...
}
Caching
When nginx reads the response from an upstream server, the content is first written to a temporary file outside of the cache directory structure. When nginx finishes processing the request it renames the temporary file and moves it to the cache directory. If the temporary files directory for proxying is on another file system, the file will be copied, thus it’s recommended to keep both temporary and cache directories on the same file system. It is also quite safe to delete files from the cache directory structure when they need to be explicitly purged. There are third-party extensions for nginx which make it possible to control cached content remotely, and more work is planned to integrate this functionality in the main distribution.
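A minimal caching sketch (the paths, zone name, and sizes are illustrative). Note use_temp_path=off, which writes temporary files directly inside the cache directory, keeping them on the same file system as the paragraph above recommends:

http {
    # 10 MB of shared memory for cache keys, up to 1 GB of cached responses on disk
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
            proxy_pass http://127.0.0.1:8080;
        }
    }
}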
The ngx_http_limit_conn_module module is used to limit the number of connections per defined key, in particular, the number of connections from a single IP address.
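A sketch using that module to cap concurrent connections per client IP (the zone name, size, and limit are illustrative):

http {
    # one entry per client address, tracked in a 10 MB shared memory zone
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /download/ {
            limit_conn addr 1;   # at most one concurrent connection per IP
        }
    }
}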
cache manager → activated periodically to check the state of the cache and prune entries based on the configured constraints
Cache loader → runs once after nginx starts and loads metadata about previously cached data
Load balancer
NGINX can act as a load balancer, distributing incoming requests across a group of upstream servers (see the sketch below).
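A minimal setup (the addresses are illustrative). Round-robin is the default; switching the balancing method is a one-line change such as least_conn:

http {
    upstream app_servers {
        # round-robin by default; uncomment for least-connections balancing
        # least_conn;
        server 10.0.0.11;
        server 10.0.0.12 weight=2;   # receives twice as many requests
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }
}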
Socket sharding
When we run a server, its IP and port are bound to a socket, and trying to bind the same IP and port again normally throws an error. With socket sharding, however, multiple app instances can each bind a listening socket to the same IP and port, and the kernel load-balances the incoming requests across them.
Configuring Socket Sharding
To enable the SO_REUSEPORT socket option, include the new reuseport parameter in the listen directive for HTTP or TCP (stream module) traffic:
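For example (the host and port are illustrative):

http {
    server {
        listen 80 reuseport;   # each worker gets its own listening socket
        server_name example.com;
    }
}

stream {
    server {
        listen 12345 reuseport;
    }
}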
Rate limiting → https://www.nginx.com/blog/rate-limiting-nginx/
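A minimal sketch of the leaky-bucket rate limiting described in that article (the zone name, rate, and burst values are illustrative):

http {
    # track clients by IP; allow a steady 10 requests per second
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

    server {
        location /api/ {
            # absorb bursts of up to 20 extra requests without delaying them
            limit_req zone=mylimit burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}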
Optimization
When a client sends an HTTP request to the NGINX server, it typically establishes a TCP connection to send and receive data. This connection can be reused for multiple requests, especially if keepalive connections are enabled.
Use the keepalive_requests and keepalive_timeout directives to alter the number of requests that can be made over a single connection and the time idle connections can stay open:
http {
    keepalive_requests 320;
    keepalive_timeout 300s;
}
The keepalive_requests directive defaults to 100, and the keepalive_timeout directive defaults to 75 seconds.
Keeping Connections Open Upstream
When opening connections to an upstream server, we can enable keepalive so that connections stay open and are reused for further requests; that way NGINX does not open a new connection for every request.
upstream backend {
    server 10.0.0.42;
    server 10.0.2.56;
    keepalive 32;   # keep up to 32 idle connections to the upstream group per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header so upstream connections stay open
    }
}
Response buffering
With buffering disabled, nginx passes the response to the client synchronously, immediately as it is received, and does not try to read the whole response from the proxied server. Enabling buffering changes this:
When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on the disk.
server {
    proxy_buffering on;
    proxy_buffer_size 8k;
    proxy_buffers 8 32k;
    proxy_busy_buffers_size 64k;
    ...
}
Note: **Proxy buffering is enabled by default in NGINX**
Buffering Access Logs
Buffer log writes to reduce the chance of blocking the NGINX worker process when the system is under load. The buffer parameter sets how much log data is held in memory before being written, and flush sets the longest time buffered entries may wait before being flushed to disk:
http {
    access_log /var/log/nginx/access.log main buffer=32k flush=1m;
}
CMDS
nginx -t → tests the configuration files and reports whether everything is OK
Security
This stops NGINX from sending its version number in response headers and error pages:
http {
    # Turn off server tokens
    server_tokens off;
}
- Prevents MIME type sniffing: Browsers sometimes try to guess the MIME type of a file based on its content, which can lead to security vulnerabilities. For example, a file with a misleading extension might be interpreted as a different type of file than it actually is, potentially leading to XSS (Cross-Site Scripting) attacks.
- Forces the browser to respect the declared content type: By sending the X-Content-Type-Options: nosniff header, you’re instructing the browser to trust the MIME type provided by the server and not try to infer it.
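Setting the header in nginx is a single directive, shown here in the http context so it applies to all responses:

http {
    # instruct browsers not to second-guess the declared Content-Type
    add_header X-Content-Type-Options nosniff;
}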
ModSecurity
- ModSecurity is an open-source, cross-platform web application firewall (WAF) engine for Apache, IIS, and Nginx. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging, and real-time analysis.
Alternatives
Envoy
A layer 7 proxy that uses YAML files for configuration.
- Downstream → the side requests come from (the client)
- Upstream → the side responses come from (the server)
- Cluster → a named group of upstream hosts; each cluster has its own load-balancing policy
- e.g., cluster app1 → 2 hosts; cluster app2 → 2 hosts
- Connection pool → each host in a cluster gets one or more connection pools; connection pools are per worker thread, each connection is bound to a single thread, and there is no coordination between threads
Pingora [Cloudflare]
Resources
- Inside NGINX: How We Designed for Performance & Scale
- Tuning NGINX
- AOSA Book on NGINX
- Cloudflare’s Pingora Proxy
- NGINX Documentation
- Optimizing NGINX for High Traffic Loads