TCP/IP Sockets Vs. Unix Domain Sockets

Synopsis

This benchmark attempts to determine the relative performance of Internet (TCP/IP) sockets and Unix domain sockets between an Nginx web server frontend and a Thin/Rack/Sinatra Ruby application backend. The request size was 75 bytes and the reply size was about 4 KB.

The client and the server resided on the same Gigabit switch. Three consecutive load tests were run for each test condition and the results averaged. The client initiated 1,000 sequential requests to the server.
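
The exact command line is not recorded here, but a single sequential run would look roughly like the following httperf invocation (the host name and URI are placeholders):

    httperf --server app.example.com --port 80 --uri / \
            --num-conns 1000 --num-calls 1

With --num-conns 1000, --num-calls 1, and no --rate given, httperf opens 1,000 connections one after another and issues a single request on each, which matches the sequential workload described above.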

Concurrent requests were also tested. However, the results were not consistent because httperf often core dumped as concurrency increased, so this document presents only the results of sequential requests to the server.

In addition to comparing TCP and Unix domain sockets, the number of backend applications and the HTTP version used for proxy communication were also varied. The HTTP/1.1 proxy setup used keepalive connections while HTTP/1.0 did not.

Server Machine

Client Machine

Results Disclaimer

Unfortunately, this benchmark is inaccurate because the client machine is underpowered compared to the server. During testing, the client ran at nearly 100% CPU utilization while the server barely broke a sweat at less than 20%. Nonetheless, the client was similarly saturated for every test condition, so the results should still hold as relative comparisons. Even so, the client bottleneck could be hiding the real story.

The httperf man page warns against falling prey to measuring client performance instead of server performance. That is most likely what happened here.

8 Test Conditions

    Condition  Backends  Socket Type  HTTP Version
    #1         1         TCP          1.1
    #2         1         TCP          1.0
    #3         4         TCP          1.0
    #4         4         TCP          1.1
    #5         1         UNIX         1.1
    #6         1         UNIX         1.0
    #7         4         UNIX         1.0
    #8         4         UNIX         1.1

Nginx Configurations

The nginx.conf below includes the settings used across all tests. To create the 8 test conditions, selected settings were omitted and nginx was reloaded. To create test condition #6, for example, the proxy_http_version directive, the 4 TCP upstream servers, the last 3 Unix socket upstream servers, and the upstream keepalive setting were omitted from the configuration (a trimmed example for condition #6 appears after the full configuration).

    server {
        location @app {
            # Enables HTTP/1.1
            proxy_http_version 1.1;

            # Used for all test conditions
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://app;
        }
    }

    upstream app {

        # Internet TCP Sockets
        server 127.0.0.1:3020;
        server 127.0.0.1:3021;
        server 127.0.0.1:3022;
        server 127.0.0.1:3023;

        # Unix Domain Sockets
        server unix:/app/tmp/thin.0.sock;
        server unix:/app/tmp/thin.1.sock;
        server unix:/app/tmp/thin.2.sock;
        server unix:/app/tmp/thin.3.sock;

        # Only used with HTTP/1.1
        keepalive 48;
    }
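
To make the omission process concrete, the trimmed configuration for test condition #6 (1 backend, Unix domain socket, HTTP/1.0) might look like the sketch below. The proxy_http_version directive is left out because nginx defaults to HTTP/1.0 for proxying, and the keepalive setting is dropped along with it:

    server {
        location @app {
            # No proxy_http_version: nginx proxies with HTTP/1.0 by default
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://app;
        }
    }

    upstream app {
        # Single Unix domain socket backend; no keepalive without HTTP/1.1
        server unix:/app/tmp/thin.0.sock;
    }

The numbered socket paths (thin.0.sock through thin.3.sock) and ports (3020 through 3023) are consistent with Thin's convention of numbering sockets and ports sequentially when started with multiple server instances.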

Results

The results were a bit surprising. I was sure Unix domain sockets would perform better than Internet sockets.

Keep in mind that the client requests to the server were sequential (i.e., no concurrent connections), which is not typical. Additionally, this benchmark tests the socket implementations of both the Nginx and Thin servers. Because both are Internet servers, they may be better tuned for TCP/IP sockets than for Unix domain sockets.

The performance of the two socket types is similar. TCP/IP delivers only a 5-12% performance increase over Unix domain sockets for the same backend count and HTTP version. In aggregate, TCP/IP sockets performed ~9% better than Unix domain sockets, averaging 450 versus 411 requests per second, respectively.

The following results are ordered by performance. The 3-run output of httperf for each test condition is linked to its condition number.

    Condition  Backends  Socket Type  HTTP Version  req/s  ms/req
    #1         1         TCP          1.1           497    2.0
    #2         1         TCP          1.0           481    2.1
    #5         1         UNIX         1.1           441    2.3
    #6         1         UNIX         1.0           429    2.3
    #4         4         TCP          1.1           421    2.4
    #3         4         TCP          1.0           402    2.5
    #8         4         UNIX         1.1           390    2.6
    #7         4         UNIX         1.0           383    2.6

Concurrency

Concurrency was tested, but httperf core dumped as concurrent connections approached 24. Before it failed, concurrent testing did extract more throughput from the server, especially with the multi-backend conditions, which outperformed the single-backend conditions. This is the opposite of the sequential tests and is to be expected.

As with sequential requests, HTTP/1.1 performed marginally better than HTTP/1.0 for the same number of backends and socket type.

Conclusion

When testing sequentially, the single-backend configurations outperformed the multi-backend ones. This is most likely because a single backend barely exercises the proxy's load-balancing logic, while multiple backends add management overhead that sequential requests cannot take advantage of. The results are reversed with concurrent request testing, which amortizes that overhead across simultaneous requests.

HTTP/1.1 with keepalive connections always edged out the equivalent HTTP/1.0 configuration without keepalive connections. It is safe to say that opening a new connection for every request burns CPU cycles.

The fact remains that the client simply didn't have the muscle to push the server to its limit, so these numbers may still be lying.

The bottom line is to use the simplest configuration that works. Of course, different hardware and software may produce different results. Nonetheless, it's fun to see some numbers.