Wednesday, April 10, 2024

Continuous Deployment API Pipeline

This blog post explains our CD (Continuous Deployment) pipeline. Use it for your own convenience, and as always, tips and remarks are welcome!

We're using the Kong API Gateway to handle all the REST and SOAP calls to middleware and back-end (micro-)services.

Each API should be described as an Open API Specification file, with all the details of the API, the Kong plugins and, in case it's a REST service, also the request and response schemas.

All these Open API Specifications (OAS files) are stored on our on-premises Gitlab repository server, combined with a Gitlab Runners (agents) server that executes the deployment pipelines.

Now let me describe our pipeline set-up. It consists of eight steps:

1. Get the Open API Specification
2. Validate the Open API Specification
3. Generate the Kong decK file
4. Replace project specific variables
5. Validate Kong decK file
6. Synchronize (deploy) Kong artefacts
7. Remove Kong plugins from Open API Specification
8. Deploy Open API Specification to Portal or API Marketplace/Platform

An API project in Gitlab consists of the pipeline (the .gitlab-ci.yml file, which includes the file variables.yml and the actual pipeline from the "library" project) and the API design (OAS).

The OAS is either a yml file in the Gitlab project, specified in the file variables.yml, or included in the Insomnia project in the .insomnia directory. With Insomnia 2023.5.8 our teams can still use Git Sync for free.

Step 1) Get the OAS: either the variable OAS is present in the file variables.yml, or the spec is exported from Insomnia by executing inso export spec within docker image kong-inso-8.4.5.

Results from this step are the OAS spec name and the actual oas.yaml file.
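A minimal sketch of what this job could look like in the project .gitlab-ci.yml; the variable names OAS and OAS_SPEC_NAME are assumptions for illustration, not the exact names from our library project:

get-oas:
  image: kong-inso-8.4.5
  only:
  - development
  script:
  # copy the spec referenced in variables.yml, or export the design from the .insomnia directory
  - if [ -n "$OAS" ]; then cp "$OAS" oas.yaml; else inso export spec "$OAS_SPEC_NAME" --output oas.yaml; fi
  artifacts:
    paths:
    - oas.yaml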

Step 2) Validate the OAS: within docker image stoplight/spectral we download the .spectral.yml ruleset from our library project using curl and execute spectral lint.

Our .spectral.yml extends: [[spectral:oas, all], [spectral:asyncapi, all]] and we've added some specific errors, like:

  • we need contact name and email present, and the email should be a company email address
  • the OAS file needs x-kong-plugin-application-registration present, with the value auto_approve set to false
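A minimal sketch of what such a ruleset could look like; the company domain and the exact path of auto_approve are assumptions for illustration:

extends: [[spectral:oas, all], [spectral:asyncapi, all]]
rules:
  contact-company-email:
    description: Contact name and a company email address must be present
    severity: error
    given: $.info.contact
    then:
    - field: name
      function: truthy
    - field: email
      function: pattern
      functionOptions:
        match: '@example\.com$' # hypothetical company domain
  kong-application-registration:
    description: x-kong-plugin-application-registration must be present
    severity: error
    given: $
    then:
      field: x-kong-plugin-application-registration
      function: defined
  kong-application-registration-auto-approve:
    description: auto_approve must be set to false
    severity: error
    given: $['x-kong-plugin-application-registration'].config
    then:
      field: auto_approve
      function: falsy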

Step 3) Generate Kong decK (decK is derived from the combination of words ‘declarative’ and ‘Kong’) file within docker image kong-deck-1.36.1, after creating decK file we set the service protocol, host and port from environment specific variables in variables.yml file, and we add the project tag:

only:
- development
script:
# convert the OAS to a decK file
- deck file openapi2kong --inso-compatible -s oas.yaml -o kong.yaml
# set protocol, host and port on all services from the environment specific variables
- deck file patch -s kong.yaml -o kong.yaml --selector="$..services[*]" --value='protocol:"'"$DEV_PROTOCOL"'"'
- deck file patch -s kong.yaml -o kong.yaml --selector="$..services[*]" --value='host:"'"$DEV_HOST"'"'
- deck file patch -s kong.yaml -o kong.yaml --selector="$..services[*]" --value='port:'$DEV_PORT''
# tag all generated entities with the project name
- deck file add-tags -s kong.yaml -o kong.yaml $projectname

Some years ago we were adding upstreams with targets, but as we have a dedicated load balancer we don't need Kong to balance over targets; setting the endpoint on the service also gives a better overview of upstream systems in Kong Manager.

Step 4) If the decK file contains project-specific placeholders that should be replaced by environment-specific values, we add replace steps to the project's .gitlab-ci.yml file.

The replacements can be done with simple Linux commands within the basic docker image linux-alpine-3.18.
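For illustration, a sketch of such a replace step; the placeholder ORDER_V2_HOST returns in the worked example further down, and the variable name $DEV_ORDER_V2_HOST is an assumption:

replace-variables:
  image: linux-alpine-3.18
  only:
  - development
  script:
  # replace a project specific placeholder with the environment specific value from variables.yml
  - sed -i "s/ORDER_V2_HOST/$DEV_ORDER_V2_HOST/g" kong.yaml
  # strip the [REMOVEME] suffix that keeps duplicated paths valid in the OAS
  - sed -i "s/\[REMOVEME\]//g" kong.yaml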

Step 5) decK validation: this step validates the Kong decK file within image kong-deck-1.36.1, executing both deck gateway validate and deck gateway diff.

We noticed that if a service has an existing application_instance and the service is renamed, this leads to deletion of the old and creation of a new service; the validation step will pass but the sync fails due to the existing reference.
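A sketch of the validation job; the variable $DEV_KONG_ADMIN_API holding the Admin API address is an assumed name:

validate-deck:
  image: kong-deck-1.36.1
  only:
  - development
  script:
  # check the decK file against the Admin API of the target environment
  - deck gateway validate kong.yaml --kong-addr "$DEV_KONG_ADMIN_API"
  # show what would be created, updated or deleted, without applying anything
  - deck gateway diff kong.yaml --kong-addr "$DEV_KONG_ADMIN_API"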

Step 6) Synchronize (deploy) to Kong, executing deck gateway sync within docker image kong-deck-1.36.1.
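The sync job has the same shape (again, $DEV_KONG_ADMIN_API is an assumed variable name):

sync-kong:
  image: kong-deck-1.36.1
  only:
  - development
  script:
  - deck gateway sync kong.yaml --kong-addr "$DEV_KONG_ADMIN_API"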

Step 7) As the OAS contains Kong-specific plugins that we don't want to expose in the Developer Portal, or any API platform, we remove all the plugins.

For now we use hashtag markers within the OAS to specify the begin and end of a plugin, and a Linux script within docker image linux-alpine-3.18 removes everything between and including those markers.
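As a sketch, the removal can be a single sed range delete per marker pair; the marker names match the worked example further down:

remove-kong-plugins:
  image: linux-alpine-3.18
  script:
  # delete everything between and including the begin/end marker lines, for every numbered pair
  - sed -i '/#BEGIN_KONG_PLUGINS/,/#END_KONG_PLUGINS/d' oas.yaml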

In the future we might add smarter plugin removal using yq, as Kong plugins are well defined objects starting with x-kong-plugin.

Step 8) Using docker image linux-alpine-3.18 with curl included, we can post the censored OAS to our Developer Portal.
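A hedged sketch of this step; the portal endpoint and the variables PORTAL_URL and PORTAL_TOKEN are hypothetical and depend on the portal's API:

publish-oas:
  image: linux-alpine-3.18
  script:
  # adjust the endpoint and authentication to your Developer Portal's API
  - curl --fail -X POST "$PORTAL_URL/specs" -H "Authorization: Bearer $PORTAL_TOKEN" -H "Content-Type: application/yaml" --data-binary @oas.yaml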

Time for a small example. See the following snapshot of a single API design, where there is a single path on Kong, /orders, with the following rules:

  • If the optional HTTP Header field X-Order-Version contains v2 the request should be routed to upstream system ORDER_V2_HOST with path v2
  • Else the request should be routed to default upstream system with path v1

This can be achieved with different projects/OAS, but sometimes this is requested within the same spec:

paths:
  /orders:
    get:
#BEGIN_KONG_PLUGINS_1
      x-kong-plugin-request-transformer-advanced:
        name: request-transformer-advanced
        config:
          replace:
            uri: /v1/orders
#END_KONG_PLUGINS_1
...
#BEGIN_KONG_PLUGINS_2
  /orders[REMOVEME]:
    get:
      x-kong-route-defaults:
        headers:
          X-Order-Version:
          - v2
      x-kong-plugin-route-transformer-advanced:
        name: route-transformer-advanced
        config:
          host: ORDER_V2_HOST
          path: /v2/orders
#END_KONG_PLUGINS_2

This OAS is valid in all OAS editors like Insomnia, Swagger etc.

The duplicated path is extended with [REMOVEME] to make the design a valid Open API Specification. After creating the decK file, this [REMOVEME] is removed in step 4. The resulting decK file remains valid for Kong and contains identical paths, but with different HTTP header configuration. ORDER_V2_HOST is an environment-specific placeholder which is replaced in step 4 by the value set in variables.yml.

In step 6 the service is deployed to Kong.

In step 7 the script removes everything between #BEGIN_KONG_PLUGINS_1 and #END_KONG_PLUGINS_1, which is the Kong plugin changing the upstream uri. Everything between #BEGIN_KONG_PLUGINS_2 and #END_KONG_PLUGINS_2 is removed as well, which is the duplicated path. The removal includes the marker lines themselves and any additional comment lines starting with #.

After deployment in step 8, the result is a single path visible in the design on the Developer Portal. API consumers won't see the upstream systems or the technical routing based on X-Order-Version. Of course, the HTTP header X-Order-Version should still be described as an optional header field, along with its purpose.

Friday, January 26, 2024

Finding a new Kong slogan with AI

When configuring SMTP in Gitlab, one value that can be set is gitlab_email_subject_suffix; emails get this text appended to the subject, like emails for new users, password resets and so on. Initially I had:

apes should manage your APIs

This sentence was added to my LinkedIn profile some time ago as a reaction to Gravitee's title "Gravitee vs Kong | Apes shouldn't manage APIs" (see the HTML title of https://www.gravitee.io/comparison/gravitee-vs-kong; it's not on the page itself, but visible in the HTML source and when you google it).

For those who do, or explicitly don't, identify themselves with apes this might be a little offensive, so to avoid the DEI police I had to find a new slogan. That's when I thought of AI, with the following prompt:

have a break, then give me a nice slogan which contains humor full sentence about why developers should use Kong inc API Gateway. You get $200 if the slogan is really good

The reply from Copilot (AI engine 4?) came no further than

Sure, here’s a slogan that might make you chuckle: “Kong Gateway: Because APIs are like bananas - they’re best when they’re peeled, sliced, and served with Kong!” 😂

On a more serious note, ... and then came the marketing stuff and the sales pitch.

The reply from chatGPT (AI engine 3.5) was

Sure, let's give this a shot:

"Kong API Gateway: Because even developers need a smooth ride in the data traffic jam. Avoid coding traffic tickets and let Kong be your code traffic cop – making API management so smooth, it's like merging onto the information superhighway in a convertible with the top down. Your APIs deserve a Kong, not a traffic con!"

I hope this brings a smile to your face! If you have any specific preferences or adjustments, feel free to let me know.

This result was above my expectations, and my new slogan is now

your APIs deserve a Kong

See also this page header and my LinkedIn profile 👍

As I'm a newbie with AI, I realize the hardest part is creating the right prompt (command). Some tips that I randomly use can be found at https://arxiv.org/pdf/2312.16171v1.pdf, 26 prompt principles on page 5, with the first being:

No need to be polite with LLM so there is no need to add phrases like “please”, “if you don’t mind”, “thank you”, “I would like to”, etc., and get straight to the point.

Adding to the list is tip #27: according to Google DeepMind researchers, start with

Take a deep breath and work on this problem step by step, ...

Which I changed to 

Have a break, then ...

#KitKat from the article and ad on https://the-message.ca/2024/01/19/kitkat-shows-how-even-ai-is-better-after-a-break/

On a different AI note: a few weeks ago I was looking for a new Teams background, and I ended up with the below image taken from https://www.bing.com/images/create/i-want-a-background-wallpaper-of-size-1920-by-1080/1-65a5348511c04c0f90def08c2baf34e3?id=D1d8mIGWjcRjEmb%2fFD43BA%3d%3d&view=detailv2&idpp=genimg

After removing the lower-half of the result, now my colleagues see me sitting between the Dragon and the Gorilla 🤣


Friday, January 19, 2024

Traces in Tempo vs logs in Loki

In my last post I mentioned how to use the http-log plugin in Kong to provide logs to Loki, and how we're going to use OpenTelemetry to provide traces to Tempo.

The OpenTelemetry plugin requires a change in the Kong configuration: enable tracing by setting tracing_instrumentations to all and restart the plane.

In the configuration of the plugin we had to set the plugin config setting queue.max_batch_size from the default of 1 to 1000, to avoid full queue errors.
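For reference, a minimal sketch of the plugin entry in decK format; the collector endpoint is hypothetical and the exact field names depend on your Kong version:

plugins:
- name: opentelemetry
  config:
    endpoint: http://tempo-gateway:4318/v1/traces # hypothetical OTLP/HTTP endpoint
    queue:
      max_batch_size: 1000 # raised from the default of 1 to avoid full queue errors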

Without repeating my last post: the http log provides valuable information like the received time in milliseconds, source ip, incoming endpoint, method and http headers, authenticated id, the Kong service and route invoked, upstream ip and port, and the http status code.

The traces provide similar information: the same start time in milliseconds, source ip, incoming endpoint and method, the Kong route invoked, upstream name, ip and port, and the http status code.

In Grafana we can explore both logs from Loki and traces from Tempo, but we want to take advantage of the built-in Observability, which is now rebranded as Applications. Initially this looks promising: we have metrics generated from traces and quickly see the duration and duration distribution of all requests.

Traces: both in Explore (Tempo) and in Application Kong we see all traces, and each trace contains its set of spans. No further configuration is needed; in Kong we have the sampling rate configured to 1, which is 100%, and so far we see no reason to lower this.

Logs: in Explore (Loki) we see all logs, but not in Application Kong. As the Application Kong log query defaults to {exporter="OTLP", job="${serviceName}"}, we have to change our log stream from Kong towards Loki; the new custom_fields_by_lua entry is Streams with the value

local cjson = require "cjson"
-- Loki expects the timestamp as a string in nanoseconds
local ts = string.format('%18.0f', os.time() * 1000000000)
local log_payload = kong.log.serialize()
local json_payload = cjson.encode(log_payload)
local service = log_payload['service']
-- a single Loki stream with labels matching the Application Kong query: exporter="OTLP", job="kong"
local t = { { stream = { exporter = 'OTLP', job = 'kong', service = service['name'] }, values = { { ts, json_payload } } } }
return t

After this change all Kong http logs appear in Application Kong, of course we have to update our dashboards from kong_http_log="log-payload" to job="kong".

Now for the correlation between traces and logs: we learned that this doesn't work out of the box with Kong version 3.4; we need to upgrade to 3.5 in order to have the field trace_id in the logs.

As a workaround we can use the timestamp up to milliseconds; this value is identical for the log and the trace of each request.

For example I've exported a trace (5.0 kB, length 5102) containing 9 spans, the parent and 8 children, from kong.router up to kong.header_filter.plugin.opentelemetry; see the screenshot below:

Surely this is just for fun; we see that durations are reported down to a hundredth of a microsecond, e.g. for the key-auth plugin: Duration: 71.94μs, Start Time: 658.25μs (11:43:50.364).

In the span we find "startTimeUnixNano": 1705661030364658200, "endTimeUnixNano": 1705661030364730000

Now when I calculate the duration myself I come to 71.8 microseconds; googling both values with a minus in between returns 71936, and Grafana comes to 71.94μs.

All nanosecond timestamps in the exported trace end with '00', so they are exact to 100 nanoseconds, which is 0.1 microseconds.

Clever that Google and Grafana can get more precise, but yeah, this is already about a tenth of a thousandth of a thousandth of a second...

Taking the milliseconds (1705661030364), the correlated log can be found easily. Saving this json to a file, it's 3.3 kB (length 3390), around 70% of the size of the trace. These numbers are interesting because the average ingestion rates of these logs and traces are the other way around:

One log is 2/3 the size of the trace of the same request, while the average logs ingestion rate is more than 3 times the average traces ingestion rate: 14.5 GiB of logs versus 4.50 GiB of traces. This seems like a mystery, which I leave unsolved for now.

As mentioned, this exercise is more fun than practical. Grafana can provide insights on Kong latencies, number of errors, alerts and so on, but detailed information on sub-components is overkill. Only once we have our landscape OpenTelemetry enabled, especially our upstream microservices, do I expect to gain useful insights and nice service maps. Till that time I enjoy playing with dashboards on the http logs in Loki 🤣