Creating a full-fledged API requires resources, both time and money. You need to think about the model, the design, the REST principles, etc., before writing a single line of code. Most of the time, you don't know whether it's worth it: you'd like to offer a Minimum Viable Product and iterate from there. In this post, I want to show how to achieve this without writing any code.
The solution
The solution's main requirement is to use a PostgreSQL database: a well-established, Open Source SQL database.
Instead of writing our REST API, we use the PostgREST component:
PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.
-- PostgREST
Let's apply it to a simple use case. Here's a product table that I want to expose via a CRUD API:
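The exact DDL lives in the repository; as a rough sketch, with column names and types inferred from the JSON responses shown later (not copied from the actual source), the table could look like this:

```sql
-- Hypothetical schema: columns inferred from the API responses below,
-- not taken verbatim from the repository.
CREATE TABLE product (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT,
    price       NUMERIC(10, 2) NOT NULL,
    hero        BOOLEAN NOT NULL DEFAULT FALSE  -- flags the featured product
);
```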
Note that you can find the whole source code on GitHub to follow along.
PostgREST's Getting Started guide is pretty complete and works out of the box. Yet, I didn't find any ready-made Docker image, so I created my own:
FROM debian:bookworm-slim #1

ARG POSTGREST_VERSION=v10.1.1 #2
ARG POSTGREST_FILE=postgrest-$POSTGREST_VERSION-linux-static-x64.tar.xz #2

RUN mkdir postgrest
WORKDIR postgrest
ADD https://github.com/PostgREST/postgrest/releases/download/$POSTGREST_VERSION/$POSTGREST_FILE \
    . #3
RUN apt-get update && \
    apt-get install -y libpq-dev xz-utils && \
    tar xvf $POSTGREST_FILE && \
    rm $POSTGREST_FILE #4
- Start from the latest Debian
- Parameterize the build
- Get the archive
- Install dependencies and unarchive
The Docker image contains a postgrest executable in the /postgrest folder. We can "deploy" the architecture via Docker Compose:
version: "3"
services:
  postgrest:
    build: ./postgrest #1
    volumes:
      - ./postgrest/product.conf:/etc/product.conf:ro #2
    ports:
      - "3000:3000"
    entrypoint: ["/postgrest/postgrest"] #3
    command: ["/etc/product.conf"] #4
    depends_on:
      - postgres
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: "root"
    volumes:
      - ./postgres:/docker-entrypoint-initdb.d:ro #5
- Build the above Dockerfile
- Share the configuration file
- Run the postgrest executable
- With the configuration file
- Initialize the schema, the permissions, and the data
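The initialization scripts are in the repository; as a sketch, the permissions part typically creates a read-only role for anonymous access. The role name web_anon below follows PostgREST's own tutorial and is an assumption here, not copied from the project:

```sql
-- Assumption: web_anon is the role referenced by the db-anon-role
-- setting in product.conf. It can read the product table but not write to it.
CREATE ROLE web_anon NOLOGIN;
GRANT USAGE ON SCHEMA public TO web_anon;
GRANT SELECT ON public.product TO web_anon;
```

product.conf then wires everything together via PostgREST's db-uri, db-schemas, and db-anon-role settings.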
At this point, we can query the product table:
curl localhost:3000/product
We immediately get the results:
[{"id":1,"name":"Stickers pack","description":"A pack of rad stickers to display on your laptop or wherever you feel like. Show your love for Apache APISIX","price":0.49,"hero":false},
 {"id":2,"name":"Lapel pin","description":"With this \"Powered by Apache APISIX\" lapel pin, support your favorite API Gateway and let everybody know about it.","price":1.49,"hero":false},
 {"id":3,"name":"Tee-Shirt","description":"The classic geek product! At a conference, at home, at work, this tee-shirt will be your best friend.","price":9.99,"hero":true}]
That was a quick win!
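And it's more than a plain dump: PostgREST's generated API understands query parameters for filtering and column selection. For instance, using operators from PostgREST's query syntax (the responses are live, so I show no expected output here):

```shell
# Horizontal filtering: only products with a price greater than 1
curl "localhost:3000/product?price=gt.1"

# Vertical filtering: return only the name and price columns
curl "localhost:3000/product?select=name,price"
```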
Improving the solution
Though the solution works, it leaves a lot of room for improvement. For example, the database user cannot change the data, but anybody can read all of it without any authentication. That might not be a big issue for product-related data, but what about medical data?
The PostgREST documentation acknowledges this and explicitly advises putting a reverse proxy in front:
PostgREST is a fast way to construct a RESTful API. Its default behavior is great for scaffolding in development. When it’s time to go to production it works great too, as long as you take precautions. PostgREST is a small sharp tool that focuses on performing the API-to-database mapping. We rely on a reverse proxy like Nginx for additional safeguards.
-- Hardening PostgREST
Instead of nginx, we would benefit from a full-fledged API Gateway: enter Apache APISIX. Let's add it to our Docker Compose file:
version: "3"
services:
  apisix:
    image: apache/apisix:2.15.0-alpine #1
    volumes:
      - ./apisix/config.yml:/usr/local/apisix/conf/config.yaml:ro
    ports:
      - "9080:9080"
    restart: always
    depends_on:
      - etcd
      - postgrest
  etcd:
    image: bitnami/etcd:3.5.2 #2
    environment:
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2397"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2397"
- Use Apache APISIX
- APISIX stores its configuration in etcd

We shall first configure APISIX to proxy calls to postgrest:
curl http://apisix:9080/apisix/admin/upstreams/1 -H 'X-API-KEY: 123xyz' -X PUT -d ' #1-2
{
  "type": "roundrobin",
  "nodes": {
    "postgrest:3000": 1 #3
  }
}'
curl http://apisix:9080/apisix/admin/routes/1 -H 'X-API-KEY: 123xyz' -X PUT -d ' #4
{
  "uri": "/*",
  "upstream_id": 1
}'
- Should be run from one of the Docker containers, so use the Docker service's name. Alternatively, use localhost, but be sure to expose the ports
- Create a reusable upstream
- Point to the PostgREST node
- Create a route to the created upstream
We can now query the endpoint via APISIX:
curl localhost:9080/product
It returns the same result as above.
DDoS protection
We haven't added any safeguard yet, but we're ready to start the work. Let's first protect our API from DDoS attacks. Apache APISIX is designed around a plugin architecture; to protect against DDoS, we shall use the limit-count plugin. We can set plugins on a specific route when it's created, or on every route: in the latter case, it's a global rule. Since we want to protect every route by default, we shall use a global rule.
curl http://apisix:9080/apisix/admin/global_rules/1 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "plugins": {
    "limit-count": {
      "count": 1,
      "time_window": 60,
      "rejected_code": 429
    }
  }
}'

Here, we allow a single call per client every 60 seconds and reject additional ones with an HTTP 429 status code; tune count and time_window to your own traffic.
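Assuming a rule of one request per minute, we can verify it with two quick calls; the exact JSON error body varies with the APISIX version, so only the status code matters here:

```shell
# First call within the time window goes through to PostgREST
curl -i localhost:9080/product

# An immediate second call is rejected by the limit-count plugin with HTTP 429
curl -i localhost:9080/product
```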