Python Sanic vs GoLang Mux

Yogesh Sharma
5 min read · Sep 11, 2020

In this blog I will compare the Python Sanic and Golang Mux web frameworks.

Golang Mux (gorilla/mux) is a high-performance request router and dispatcher for matching incoming requests to their respective handlers.

On the other hand, Python Sanic is a Python 3.6+ web server and web framework that’s written to go fast. It allows the usage of the async/await syntax added in Python 3.5, which makes your code non-blocking and speedy.

Test Metrics

I added a Prometheus client to both web frameworks to visualize the metrics and evaluate performance while running a load test against a basic GET REST API that returns a sample JSON response.

Metrics to evaluate are:

  1. Max throughput achieved, i.e., requests per second (RPS)
  2. Latency

wrk was used to load-test both implementations (the exact commands appear in the results below).

Setup

Go Implementation

package main

import (
	"net/http"

	"github.com/gorilla/mux"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus"
	"github.com/zephinzer/ezpromhttp"
)

var (
	namespace = "ysharma"

	// Counter of all HTTP requests; promauto registers it with the
	// default registry automatically.
	httpRequestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "ysharma_go_http_requests_total",
		Help: "Count of all HTTP requests",
	}, []string{"app", "name", "method", "status"})

	// Endpoint latency histogram. Created via promauto so it is actually
	// registered (prometheus.NewHistogramVec alone never registers the
	// collector, so the metric would not be exposed).
	httpLatency = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: namespace,
		Name:      "ysharma_go_http_latency",
		Help:      "Time taken to execute endpoint.",
	}, []string{"app", "name", "method", "status"})
)

// get handles GET /v1/get and returns the sample JSON response.
func get(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(`{"message": "get called"}`))
}

func notFound(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusNotFound)
	w.Write([]byte(`{"message": "not found"}`))
}

// loggingMiddleware logs each request; it is left disabled during the load test.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		requestLogger := log.WithFields(log.Fields{
			"method":    r.Method,
			"authority": r.Host,
			"uri":       r.RequestURI,
			"alpn":      r.Proto,
		})
		requestLogger.Info()
		next.ServeHTTP(w, r)
	})
}

func main() {
	log.SetLevel(log.InfoLevel)
	router := mux.NewRouter().StrictSlash(true)
	// router.Use(loggingMiddleware) // disabled for the load test
	router.Handle("/metrics", promhttp.Handler())
	apis := router.PathPrefix("/v1").Subrouter()
	apis.HandleFunc("/get", get).Methods(http.MethodGet)
	log.Fatal(http.ListenAndServe(":8080", ezpromhttp.InstrumentHandler(router)))
}
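
To run the Go server locally, something along these lines should work (a sketch; the module path example.com/mux-bench and file name main.go are illustrative, not from the original):

$ go mod init example.com/mux-bench
$ go get github.com/gorilla/mux github.com/prometheus/client_golang github.com/sirupsen/logrus github.com/zephinzer/ezpromhttp
$ go run main.go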

Python Sanic Implementation

import logging
import multiprocessing
import os
import pathlib
import sys
import time

# The multiprocess directory must be set before prometheus_client is
# imported, because the client selects its value backend at import time.
os.environ['prometheus_multiproc_dir'] = f"{pathlib.Path().absolute()}/prometheus_multiproc_dir"

import prometheus_client
import sanic
from prometheus_client import (CollectorRegistry, multiprocess,
                               CONTENT_TYPE_LATEST, core)
from prometheus_client import Counter, Histogram
from prometheus_client.exposition import generate_latest
from sanic.response import json
from sanic.response import raw


LOGGING_CONFIG_DEFAULTS = dict(
    version=1,
    disable_existing_loggers=False,
    loggers={
        "sanic.root": {"level": "INFO", "handlers": ["console"]},
    },
    handlers={
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "generic",
            "stream": sys.stdout,
        }
    },
    formatters={
        "generic": {
            "format": "%(asctime)s [%(process)d] [%(levelname)s] %(message)s",
            "class": "logging.Formatter",
        }
    },
)


class MonitorSetup:
    def __init__(self, app_name, multiprocess_on=False, metrics_path="/metrics"):
        self._app = app_name
        self._multiprocess_on = multiprocess_on
        self._metrics_path = metrics_path

    def expose_endpoint(self):
        """
        Expose /metrics endpoint on the same Sanic server.
        This may be useful if Sanic is launched from a container
        and you do not want to expose more than one port for some
        reason.
        """

        @self._app.route(self._metrics_path, methods=['GET'])
        async def expose_metrics(request):
            return raw(self._get_metrics_data(),
                       content_type=CONTENT_TYPE_LATEST)

    def _get_metrics_data(self):
        if not self._multiprocess_on:
            _registry = core.REGISTRY
        else:
            _registry = CollectorRegistry()
            # Collect from the fresh registry (the original passed the
            # module-level `registry` here, a bug).
            multiprocess.MultiProcessCollector(_registry)
        data = generate_latest(_registry)
        return data


def setup_metrics(app_name):
    """
    Attach request/response middleware that records metrics, and
    expose the /metrics endpoint on the same Sanic server.
    """

    @app_name.middleware('request')
    async def before_request(request):
        before_request_handler(request)

    @app_name.middleware('response')
    async def before_response(request, response):
        after_request_handler(request, response, request.path)

    @app_name.route("/metrics")
    async def expose_metrics(request):
        return raw(prometheus_client.generate_latest(registry),
                   content_type=CONTENT_TYPE_LATEST)


if "prometheus_multiproc_dir" not in os.environ:
registry = core.REGISTRY
else:
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)


REQUEST_COUNT = Counter(
    'sanic_ysharma_request_count', 'App Request Count',
    ['app_name', 'method', 'endpoint', 'http_status'],
    registry=registry,
)

REQUEST_LATENCY = Histogram(
    'sanic_ysharma_latency_seconds', 'Request latency',
    ['app_name', 'endpoint'],
    registry=registry,
)


def before_request_handler(request):
    # Dict-style storage on the request object (supported by the Sanic
    # version used here; newer Sanic releases use request.ctx instead).
    request['__START_TIME__'] = time.time()


def after_request_handler(request, response, endpoint):
    try:
        lat = time.time() - request['__START_TIME__']
        response_status = response.status if response else 200
        REQUEST_LATENCY.labels("ysharma_sanic_app", endpoint).observe(lat)
        REQUEST_COUNT.labels("ysharma_sanic_app", request.method, endpoint, response_status).inc()
    except KeyError:
        pass


def monitor(app_name):
    @app_name.middleware('request')
    async def before_request(request):
        before_request_handler(request)

    @app_name.middleware('response')
    async def before_response(request, response):
        after_request_handler(request, response, request.path)

    return MonitorSetup(app_name)

# Sanic App

app = sanic.Sanic(__name__, log_config=LOGGING_CONFIG_DEFAULTS)
logger = logging.getLogger("sanic.root")
setup_metrics(app)


@app.route("/get")
async def test(request):
    return json({"message": "welcome to sanic web server"})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, access_log=False, auto_reload=True,
            workers=multiprocessing.cpu_count())
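
To run the Sanic server, the multiprocess directory referenced by prometheus_multiproc_dir must exist before start-up; assuming the file is saved as server.py (an illustrative name):

$ mkdir -p prometheus_multiproc_dir
$ python server.py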

Prometheus Config for Metric Collection

global:
  scrape_interval: 5s
  evaluation_interval: 5s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:8080', 'localhost:5000']
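
Assuming the config above is saved as prometheus.yml, Prometheus can be started against it with its standard flag:

$ prometheus --config.file=prometheus.yml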

Grafana was used to visualize the collected metrics.
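
The original panel queries are not shown, but queries along these lines (a sketch using the Sanic metric names defined above; the Go side would use whatever series its instrumentation exposes) reproduce the two panels:

# Requests per second, averaged over 1 minute
sum(rate(sanic_ysharma_request_count[1m]))

# P95 latency derived from the histogram buckets
histogram_quantile(0.95, sum by (le) (rate(sanic_ysharma_latency_seconds_bucket[5m])))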

The Test

  1. Both frameworks were tested for 300 seconds with 12 threads and 400 connections.
  2. Logging was disabled.

Go Mux Results

$ wrk -t12 -c400 -d300s http://localhost:8080/v1/get
Running 5m test @ http://localhost:8080/v1/get
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.70ms    2.10ms   67.94ms   67.40%
    Req/Sec     2.98k     0.95k     8.20k    76.38%
  10677802 requests in 5.00m, 1.32GB read
  Socket errors: connect 157, read 104, write 0, timeout 0
Requests/sec:  35584.01
Transfer/sec:      4.51MB

[Grafana charts: Max Throughput (Avg of 5 mins) and P95 Latency]

Go Mux achieved a maximum average throughput of 35K+ requests/sec with a P95 latency of ~9 ms.

Python Sanic Results

$ wrk -t12 -c400 -d300s http://localhost:5000/get
Running 5m test @ http://localhost:5000/get
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.06ms    4.45ms  146.34ms   86.57%
    Req/Sec     3.36k     1.20k     9.42k    62.32%
  12040994 requests in 5.00m, 1.69GB read
  Socket errors: connect 157, read 162, write 0, timeout 0
Requests/sec:  40123.00
Transfer/sec:      5.78MB

[Grafana charts: Max Throughput (Avg of 5 mins) and P95 Latency]

Python Sanic achieved a maximum average throughput of 40K+ requests/sec with a P95 latency of ~4.7 ms.

Conclusion

In this test, Python Sanic outperformed Go Mux in both areas, i.e., maximum average throughput and P95 latency: it served about 4.5K more requests per second (40,123 vs. 35,584) at roughly half the P95 latency (~4.7 ms vs. ~9 ms).

Though the results may change for a complex, production-grade application under different load profiles and HTTP methods, it is fair to say that Python Sanic is comparably scalable and fast, and can be trusted in a production environment.
