Friday, January 26, 2024

Finding a new Kong slogan with AI

When configuring SMTP in GitLab one value you can set is gitlab_email_subject_suffix; this text is appended to the subject of every email GitLab sends, like the emails for new users, password resets and so on. Initially I had:

apes should manage your APIs

This sentence comes from my LinkedIn profile, where I added it some time ago as a reaction to Gravitee's title "Gravitee vs Kong | Apes shouldn't manage APIs" (see the HTML title of https://www.gravitee.io/comparison/gravitee-vs-kong: not shown on the page itself, but visible in the HTML source and when you google).

For those who do, or explicitly don't, identify themselves with apes this might be a little offensive, so to stay clear of the DEI police I had to find a new slogan. That's when I thought of AI, and used the following prompt:

have a break, then give me a nice slogan which contains humor full sentence about why developers should use Kong inc API Gateway. You get $200 is the slogan is really good

The reply from Copilot (AI engine 4?) didn't get any further than

Sure, here’s a slogan that might make you chuckle: “Kong Gateway: Because APIs are like bananas - they’re best when they’re peeled, sliced, and served with Kong!” 😂

On a more serious note, ... and then came the marketing stuff and the sales pitch

The reply from ChatGPT (AI engine 3.5) was

Sure, let's give this a shot:

"Kong API Gateway: Because even developers need a smooth ride in the data traffic jam. Avoid coding traffic tickets and let Kong be your code traffic cop – making API management so smooth, it's like merging onto the information superhighway in a convertible with the top down. Your APIs deserve a Kong, not a traffic con!"

I hope this brings a smile to your face! If you have any specific preferences or adjustments, feel free to let me know.

This result was above my expectations, and my new slogan is now

your APIs deserve a Kong

See also this page header and my LinkedIn profile 👍

As I'm a newbie with AI I realize the hardest part is creating the right prompt (command). Some tips that I randomly use can be found at https://arxiv.org/pdf/2312.16171v1.pdf: 26 prompt principles on page 5, with the first being:

No need to be polite with LLM so there is no need to add phrases like “please”, “if you don’t mind”, “thank you”, “I would like to”, etc., and get straight to the point.

Adding to the list is tip #27, according to Google DeepMind researchers: start with 

Take a deep breath and work on this problem step by step, ...

Which I changed to 

Have a break, then ...

#KitKat from the article and ad on https://the-message.ca/2024/01/19/kitkat-shows-how-even-ai-is-better-after-a-break/

On a different AI note: a few weeks ago I was looking for a new Teams background and ended up with the image below, taken from https://www.bing.com/images/create/i-want-a-background-wallpaper-of-size-1920-by-1080/1-65a5348511c04c0f90def08c2baf34e3?id=D1d8mIGWjcRjEmb%2fFD43BA%3d%3d&view=detailv2&idpp=genimg

After removing the lower half of the result, my colleagues now see me sitting between the Dragon and the Gorilla 🤣


Friday, January 19, 2024

Traces in Tempo vs logs in Loki

In my last post I mentioned how to use the http-log plugin in Kong to provide logs to Loki, and how we're going to use OpenTelemetry to provide traces to Tempo.

The OpenTelemetry plugin requires a change in the Kong configuration: enable tracing by setting tracing_instrumentations to all and restart the plane.

In the plugin configuration we had to raise queue.max_batch_size from its default of 1 to 1000, to avoid full-queue errors.
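
What that boils down to, as a minimal sketch against the Admin API (the OTLP endpoint URL and the local Admin API address are placeholders; in reality we manage this through our CI/CD pipelines):

import requests

# Assumes kong.conf already has tracing_instrumentations = all and the gateway was restarted.
# The OTLP/HTTP traces endpoint below is a placeholder for the Grafana agent / Tempo.
OTLP_ENDPOINT = "http://grafana-agent.example:4318/v1/traces"

resp = requests.post("http://localhost:8001/plugins", json={
    "name": "opentelemetry",
    "config": {
        "endpoint": OTLP_ENDPOINT,
        "queue": {"max_batch_size": 1000},  # raised from the default of 1 to avoid full-queue errors
    },
})
print(resp.status_code, resp.json())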

Without repeating my last post: the http log provides valuable information like the received time in milliseconds, source IP, incoming endpoint, method and HTTP headers, authenticated ID, the Kong service and route invoked, the upstream IP and port, and the HTTP status code.

The traces provide similar information: the same start time in milliseconds, source IP, incoming endpoint and method, the Kong route invoked, the upstream name, IP and port, and the HTTP status code.

In Grafana we can explore both logs from Loki and traces from Tempo, but we want to take advantage of the built-in Observability, which is now rebranded to Applications. Initially this looks promising: we get metrics generated from traces and can quickly see the duration and duration distribution of all requests.

Traces: both in Explore (Tempo) and in Application Kong we see all traces, and each trace contains its set of spans. No further configuration is needed; in Kong we have the sampling rate configured to 1, which is 100%, and so far we see no reason to lower it.

Logs: in Explore (Loki) we see all logs, but not in Application Kong. As the Application Kong log query defaults to {exporter="OTLP", job="${serviceName}"}, we have to change our log stream from Kong towards Loki; the new custom_fields_by_lua field is Streams with the following value:

local cjson = require "cjson"
-- timestamp in nanoseconds (seconds * 1e9), as Loki expects for stream values
local ts = string.format('%18.0f', os.time() * 1000000000)
local log_payload = kong.log.serialize()
local json_payload = cjson.encode(log_payload)
local service = log_payload['service']
-- one Loki stream carrying the labels that the Application Kong log query expects
local t = { { stream = { exporter = 'OTLP', job = 'kong', service = service['name'] },
              values = { { ts, json_payload } } } }
return t

After this change all Kong http logs appear in Application Kong; of course we have to update our dashboards from {kong_http_log="log-payload"} to {job="kong"}.

Now for the correlation between traces and logs: we learned that this doesn't work out of the box with Kong version 3.4, we need to upgrade to 3.5 in order to get the field trace_id in the logs.

As a workaround we can use the timestamp up to milliseconds; this value is identical for the log and the trace of each request.
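
A minimal sketch of that correlation, using the start time of the example trace shown further below and the job="kong" label configured above:

# Correlate a trace with its http-log entry via the shared millisecond timestamp.
start_time_unix_nano = 1705661030364658200   # startTimeUnixNano from the exported trace below

start_ms = start_time_unix_nano // 1_000_000  # 1705661030364, also present in the log payload

# A LogQL line filter on that value finds the matching log entry in Loki.
logql = f'{{job="kong"}} |= "{start_ms}"'
print(start_ms, logql)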

For example I've exported a trace (5.0 kB, length 5102) containing 9 spans, the parent and 8 children from kong.router up to kong.header_filter.plugin.opentelemetry, see the screenshot below:

Surely this is just for fun; we see that durations are reported with a precision of up to a hundredth of a microsecond, e.g. for the key-auth plugin: Duration: 71.94μs, Start Time: 658.25μs (11:43:50.364)

In the span we find "startTimeUnixNano": 1705661030364658200, "endTimeUnixNano": 1705661030364730000

Now when I subtract these values myself I come to 71.8 microseconds; googling both values with a minus in between returns 71936, and Grafana comes to 71.94μs.
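
The subtraction itself, using the values as exported:

# Duration of the key-auth span from the exported start/end timestamps (nanoseconds).
start_ns = 1705661030364658200
end_ns = 1705661030364730000

duration_ns = end_ns - start_ns
print(duration_ns, duration_ns / 1000)  # 71800 ns, i.e. 71.8 microseconds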

All nano timestamps in the exported trace end with '00', exact to 100 nanoseconds, which is 0.1 microseconds.

Clever that Google and Grafana can get more precise, but yeah, this is already about a tenth of a thousandth of a thousandth of a second...

Taking the milliseconds (1705661030364), the correlated log can be found easily. Saving this JSON to a file it's 3.3 kB (length 3390), around two-thirds of the size of the trace. These numbers are interesting because the average ingestion rates of these logs and traces are the other way around:

One log is 2/3 the size of the trace of the same request, while the average logs ingestion rate is more than 3 times the average traces ingestion rate: 14.5 GiB of logs versus 4.50 GiB of traces. This seems like a mystery, which I leave unsolved for now.
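
Just putting the numbers from this post side by side, a back-of-the-envelope check and nothing more:

# Single exported example (bytes) versus average ingestion volumes (GiB) from Grafana.
trace_size, log_size = 5102, 3390
traces_gib, logs_gib = 4.50, 14.5

print(round(log_size / trace_size, 2))   # ~0.66: one log is about two-thirds of its trace
print(round(logs_gib / traces_gib, 2))   # ~3.22: yet more than 3 times as much log volume is ingested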

As mentioned, this exercise is more fun than practical: Grafana can provide insights on Kong latencies, number of errors, alerts and so on, but detailed information on sub-components is overkill. Only once our landscape is OpenTelemetry enabled, especially our upstream microservices, do I expect to gain useful insights and nice service maps. Till that time I enjoy playing with dashboards on the http logs in Loki 🤣


Monday, December 18, 2023

Monitoring Kong with Grafana

After being quiet for a decade I'd love to start sharing some experiences again!


My new posts will be focused on Kong API Gateway Enterprise Edition. We started years ago with Oracle API Gateway, switched two years ago, and migrated all our APIs to Kong 2.3.

This year I've had many interesting journeys with Kong: I successfully migrated the datastore from Cassandra to PostgreSQL, upgraded Kong to 3.4, performed some PoCs with Grafana and Dynatrace, and updated and improved our CI/CD pipelines.

 

This post is about monitoring: how to monitor Kong API Gateway with Grafana Cloud. Grafana is the dashboard to visualize all metrics (Mimir), logs (Loki) and traces (Tempo).

 

Metrics:

Prometheus: Kong offers the Prometheus plugin which exposes metrics on the Kong /metrics endpoint, to be scraped by an agent, like the Grafana agent.
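
As a minimal sketch, assuming a local node with the Admin API on its default port 8001 (in practice the plugin and the scraping are of course managed by the agent and our pipelines):

import requests

# Enable the Prometheus plugin globally; Kong then exposes metrics on its /metrics endpoint.
requests.post("http://localhost:8001/plugins", json={"name": "prometheus"})

# This is the endpoint the Grafana agent scrapes; print a few kong_ metrics as a sanity check.
metrics = requests.get("http://localhost:8001/metrics").text
print("\n".join(line for line in metrics.splitlines() if line.startswith("kong_"))[:500])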

 

StatsD: StatsD with Kong 2.8 didn't work smoothly and required a lot of field mappings; in 2.8 there were both a statsd and a statsd-advanced plugin. StatsD should work better from 3.x onwards, see https://konghq.com/blog/engineering/how-to-use-prometheus-to-monitor-kong-gateway

 

Logs:

Http-log: with the http-log plugin Kong offers the possibility to send the log of each request to Grafana; a minimal sketch of enabling it follows after the list below.

  • Advantages: all request meta-information is available in Grafana, from latencies to upstream IP. Minimal performance impact as the HTTP Log plugin uses internal queues to decouple the production of log entries from their transmission to the upstream log server.
  • Disadvantages: No out-of-the-box dashboards available
  • Links:
    https://docs.konghq.com/hub/kong-inc/http-log/
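
Enabling the plugin itself is a single call against the Admin API; a minimal sketch, with a placeholder Loki push URL:

import requests

# Hypothetical Grafana Cloud Loki push endpoint; replace with your own URL and API key.
LOKI_PUSH = "https://<user>:<api-key>@logs-example.grafana.net/loki/api/v1/push"

# Enable http-log globally so the log of every request is shipped to Loki.
resp = requests.post("http://localhost:8001/plugins", json={
    "name": "http-log",
    "config": {
        "http_endpoint": LOKI_PUSH,
        "content_type": "application/json",
    },
})
print(resp.status_code, resp.json())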

Note that this plugin works nicer with a custom field by Lua added: Streams, with the following value:

local cjson = require "cjson"
-- timestamp in nanoseconds (seconds * 1e9), as Loki expects for stream values
local ts = string.format('%18.0f', os.time() * 1000000000)
local log_payload = kong.log.serialize()
local json_payload = cjson.encode(log_payload)
local service = log_payload['service']
-- one Loki stream labelled with kong_http_log and the Kong service name
local t = { { stream = { kong_http_log = 'log-payload', service = service['name'] },
              values = { { ts, json_payload } } } }
return t

When using cjson the following should be added to kong.conf: untrusted_lua_sandbox_requires=cjson
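
For context: the Lua above shapes each log entry into the body that Loki's push API (/loki/api/v1/push) expects, roughly equivalent to this structure:

import json, time

# One stream with its labels plus one [timestamp-in-nanoseconds, log-line] pair,
# matching what the Streams field built by the Lua snippet returns.
ts_ns = str(int(time.time()) * 1_000_000_000)
payload = {
    "streams": [{
        "stream": {"kong_http_log": "log-payload", "service": "ASW_Europe_Standards_API"},
        "values": [[ts_ns, json.dumps({"service": {"name": "ASW_Europe_Standards_API"}})]],
    }]
}
print(json.dumps(payload, indent=2))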

Now we can explore the logs in Grafana more easily, see:

  • all requests as they come in, simply select log-payload: {kong_http_log="log-payload"} |= ``
  • parsed as json: {kong_http_log="log-payload"} | json
  • filter on service, as Kong sent the service in the custom field by lua Streams: {service="ASW_Europe_Standards_API"} | json

Some example dashboards:

  • Workspace request per minute: sum by(workspace_name) (count_over_time({kong_http_log="log-payload"} | json [1m]))
  • Response status per minute: sum by(response_status) (count_over_time({kong_http_log="log-payload"} | json [1m]))
  • Service per minute: sum by(service) (count_over_time({kong_http_log="log-payload"} | json [1m]))
  • Service status per minute: sum by(service, response_status) (count_over_time({kong_http_log="log-payload"} | json [1m]))

File logs: the Grafana agent can also monitor the access and error logs. Especially the access logs provide useful information about total latency and upstream path, but the whole set of information is less than what the http-log provides. Also, the format of the log lines needs to be defined in order to get them parsed...

 

Traces:

OpenTelemetry: OTel, a cornerstone of o11y (observability), is a framework to send spans and traces from Kong to Grafana, or any other OTLP-enabled application.

Currently I'm exploring this feature to see what kind of extra insights this gives us when e.g. our upstream microservices also enable OpenTelemetry.

Monday, November 12, 2012

ESBs rated by analysts

After my post last year [ref: SOA Suite rated by analysts], where I looked into the evaluation of the Oracle SOA Suite by Gartner and Forrester, it's time to take a new look at what they say about ESBs (Enterprise Service Buses).
For this I took a look at some newer reports from Forrester and Gartner:
  • The Forrester Wave: Enterprise Service Bus, Q2 2011
    April 25, 2011 by Ken Vollmer
  • Magic Quadrant for Application Infrastructure for Systematic Application Integration Projects
    June 20, 2012 by Jess Thompson, Yefim Natis, Massimo Pezzini, Daniel Sholler, Ross Altman and Kimihiko Iijima

As shown in the graphs the leaders are clearly IBM, Oracle, Software AG and Tibco.
Software AG - worked with webMethods for many years, good product.
Oracle - worked the last couple of years with SOA Suite; the 11g and 12c are built upon the WebLogic application server, hence the good score.
IBM - will be working with IBM in the near future; the multiple ESB offerings from IBM make me wonder.
Now let's see what is said about IBM's multiple ESB offering:
Forrester - IBM offers three ESBs: WebSphere Enterprise Service Bus (WESB), WebSphere Enterprise Service Bus Registry Edition (WESBRE) and WebSphere Message Broker (WMB). The funny thing in the Forrester Wave is that the actual ESB from IBM, the WebSphere ESB, scored somewhat lower than the other two.
Gartner - Some caution for IBM as despite plans to rationalize and simplify the product portfolio (e.g., in ESB), the fine-grained differences, functional overlaps and product integration challenges — for example, among WMB, WESB, WebSphere Cast Iron and the WebSphere DataPower integration appliances — make it difficult for potential users to determine the best fit for their requirements.
IBM - the FAQs on the IBM website mention three different ESBs: IBM WebSphere ESB, IBM WebSphere Message Broker and IBM WebSphere DataPower Integration Appliance XI50.

Here a selection from IBM [ref: faq] on when to use which ESB:

When to Use WebSphere ESB?
  • You use WebSphere Application Server and/or your team has skills with WAS Administration and Java coding
  • You are focused on standards based interactions using XML, SOAP, and WS
  • Reliability and extensive transactional support are key requirements
When to Use WebSphere Message Broker?
  • You are currently using WebSphere Message Broker but not as an ESB
  • You are using Industry formats such as SWIFT, EDI, HL7
  • You are implementing a wide range of messaging and integration patterns
  • You have very complex transformation needs
  • Reliability and extensive transactional support are key requirements
  • To achieve very high-performance with horizontal and vertical scaling
When To Use WebSphere DataPower?
  • Ease of use is a predominant consideration
  • You are transforming between XML-and-XML or XML-and-any other format
  • Your interaction patterns are relatively simple
  • You are using XML-based or WS-Security extensively
  • You require use of advanced Web services standards
  • You need to minimize message latency when adding an ESB layer
  • Your ESB must be in production very quickly
What if you require an ESB from IBM for standards based integration, with complex integration needs, high performance, advanced web service standards and want to move to production quickly?

Tuesday, November 6, 2012

FOTY0001 and logfile rotation

A certain person named, or using the alias, Vivek wrote some interesting articles a few years ago on his blog [ref: OracleD] about Oracle SOA Suite 10g. He experienced many flaws in the Oracle OC4J Application Server and recommended the OTN discussion "Oracle BPEL + Oc4j + Jdeveloper = brain damage". Clearly he was not so happy with the features in the 10g version of Oracle SOA Suite...

He also referred to the so-called FOTY0001 errors; these often occur during XRef calls and XSL transformations. More information about a FOTY0001 can be found in the OPMN log files. For example, a typical error is the following:
subLanguageExecutionFault - XPathExecutionError
  XPath expression failed to execute.
  Error while processing xpath expression, the expression is "ora:processXSLT('myTransformation.xsl',bpws:getVariableData('myInputMessage'))", the reason is FOTY0001: type error.
  Please verify the xpath query.

This error occurs e.g. when the transform activity in JDeveloper is opened and closed immediately: there was no time for the message part to be loaded, so it will be missing in the code, leading to this FOTY0001 error at runtime. The proper syntax in the code is ora:processXSLT('myTransformation.xsl',bpws:getVariableData('myInputMessage','myMessagePart'))
To prevent this either click cancel in JDeveloper or wait for the message parts to load completely.

The obvious way to view the FOTY0001 errors in detail is using Enterprise Manager, but an easier way is to open the logfiles directly on the filesystem. Depending on the logging level the server logfiles can quickly become pretty huge. In the OPMN configuration the rotation can be configured as follows: open $SOA_HOME/opmn/conf/opmn.xml and make the following changes:
<ias-component id="soa_group" status="enabled">
  <process-type id="oc4j_soa" module-id="OC4J" status="enabled">
    <module-data>
      <category id="start-parameters">
        <data id="java-options"
          value="-server

Add here the following parameters to the java-options value:
          -Dstdstream.filesize=10     (file size in megabytes)
          -Dstdstream.filenumber=50   (number of files)
To separate the output and error messages add the following data element with oc4j-options inside the same category:
<data id="oc4j-options" value="-out $ORACLE_HOME/opmn/logs/oc4j_soa.out -err $ORACLE_HOME/opmn/logs/oc4j_soa.err"/>
To view the FOTY0001 error details simply open (in a good text-editor) the *.err file containing the timestamp of the error.
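
If you'd rather not open them by hand, a small script can do the searching; a quick sketch, with a hypothetical log directory and the file names from the oc4j-options configured above:

import glob

# Scan the OPMN error logs for FOTY0001 occurrences and print file, line number and line.
LOG_DIR = "/u01/app/oracle/product/soa/opmn/logs"  # hypothetical $ORACLE_HOME/opmn/logs

for path in sorted(glob.glob(f"{LOG_DIR}/oc4j_soa*.err")):
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if "FOTY0001" in line:
                print(f"{path}:{lineno}: {line.rstrip()}")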

Monday, November 5, 2012

ORABPEL dehydration store purge scripts

By default all BPEL instances (messages) in the SOA Suite are persisted in the internal database, the so-called dehydration store. Good practice is to purge older messages, to avoid database sizing problems and to increase performance.

Unfortunately the original Oracle scripts were not sufficient for the task, so many (consulting) companies created their own purge scripts. The good news is that Oracle realized the need and offers improved purge scripts, which are available to download from Oracle. Please take a look at the Oracle 10G FMW purge strategy whitepaper [ref: note ID 1286265.1]; this document also contains the BPEL database schema.

For example, the second option from the whitepaper, the multi-threaded purge, can be found in the Oracle note New BPEL 10g Purge Scripts From 10.1.3.5 MLR#2 [ref: note ID 1110833.1]. This script can be scheduled with e.g. crontab, or any other scheduling tool. The start parameters can be configured like:
  • P_OLDER_THAN := sysdate-21; (purge instances older than 21 days)
  • P_ROWNUM := 10000000; (purge up to 10 million instances)
  • P_DOP := 3; (use three threads in parallel)
  • P_CHUNKSIZE := 1000; (commit per 1000 rows)
This last note contains all three scripts mentioned in the whitepaper:
  • SINGLE THREADED LOOPED PURGE PROCEDURE
  • MULTI THREADED LOOPED PURGE PROCEDURE
  • CTAS (Create Table As Select) PROCEDURE
Afterwards if needed you can fine-tune this script a little.
Tips:
  • Create a script for all state instances to purge dev and test environments, modify INSERT INTO temp_cube_instance, change WHERE state >= 5 into WHERE state >= 0
  • When purging all instances older than the configured days, it might be handy to keep the process history a little longer, like one year, modify DELETE FROM process_log, change WHERE event_date < p_older_than into WHERE event_date < SYSDATE - 365
  • When sensor data is used you might want to include this data in the purge script:
    DELETE FROM activity_sensor_values WHERE creation_date < p_older_than;
    DELETE FROM fault_sensor_values WHERE creation_date < p_older_than;
    DELETE FROM variable_sensor_values WHERE creation_date < p_older_than;
  • When using the AIA ErrorHandler you might want to prevent the Cartesian product by adding a max to the script. The AIA Error Handler doesn't use unique conversation IDs, so modify INSERT INTO temp_invoke_message, change FROM temp_cube_instance tci into FROM (SELECT MAX (cikey) cikey, conversation_id FROM temp_cube_instance GROUP BY conversation_id) tci
For the ORAESB schema the scripts can simply be found at $SOA_HOME/integration/esb/sql/other.

Tuesday, June 7, 2011

SOA Suite rated by analysts

As my focus moved in the last decade from BizTalk to webMethods to SAP PI to nowadays Oracle integration, it's interesting for me to see how Oracle's SOA Suite is positioned by analyst firms. Of course the evaluation of Oracle's position is based on the functionality provided by the Oracle Fusion Middleware (OFM) 11gR1 family of products, as the 10g version is at its end, especially 10.1.3.4 [ref: note ID 1128203.1]:
This document intends to inform customers using SOA Suite 10.1.3.4.x components about the forthcoming end of the error correction period after August 31st, 2010
Let's see what Gartner says. Gartner uses Magic Quadrants to position the competing players in a specific technology market; based on Completeness of Vision and Ability to Execute, Gartner rates vendors as Challengers, Leaders, Visionaries and Niche Players. The main Magic Quadrant I looked at is the Magic Quadrant for Application Infrastructure for Systematic Application Integration Projects, dated October 2010. The same month Gartner also published three other Magic Quadrants where Oracle 11g was evaluated: Application Infrastructure for Systematic SOA-Style Application Projects, Business Process Management Suites and Shared SOA Interoperability Infrastructure Projects. In all these Magic Quadrants Oracle is positioned as a Leader.

SOA Suite strengths:
  • Oracle Fusion Middleware (OFM) is a large and fast-growing business that positions Oracle as the second-largest application infrastructure middleware vendor in the market. The technology is supported by a vast network of partners, and thousands of organizations in virtually every geography and in multiple vertical industries have successfully deployed the current or previous versions of OFM, in a large number of cases to support large and business-critical application integration scenarios.
  • Synergies with large Oracle DBMSs and packaged application businesses could potentially create plenty of opportunities for cross-selling OFM technologies to support application integration projects.
  • OFM provides a comprehensive, integrated, and feature-rich application infrastructure offerings, also providing leading technologies to support application integration requirements.
  • The OFM road map addresses key integration technologies (e.g., more-powerful mapping and transformation and new unified adapter architecture) and emerging requirements (e.g., support for integration of mobile applications).
SOA Suite cautions:
  • The relentless pace of Oracle's acquisitions in the packaged applications and application integration middleware markets (e.g., BEA Systems, Sun Microsystems and AmberPoint) requires further technology integration work, and poses migration and upgrade challenges for preacquisition product users.
  • Despite significant adoption, the OFM 11gR1 product set requires more proof points about its use in complex and large-scale, real-life deployments.
  • Oracle's campaign management features are weak. Oracle doesn't offer integration as a service, although it has partnerships in place for this market.
  • The migration path from prior-generation application integration technologies coming from acquisitions to the strategic Oracle SOA Suite 11gR1 is still onerous for some clients.
  • Some Oracle clients are experiencing licensing and pricing issues when upgrading from previous versions to SOA Suite 11gR1, due to the change in the underlying application server (from Oracle Internet Application Server to Oracle WebLogic Suite) that may imply higher licensing costs.

Let's take a look at another analyst firm, Forrester Research, Inc. Forrester evaluated 15 leading comprehensive integration solution (CIS) vendors against 137 criteria that reflect the requirements of application development and delivery professionals. This resulted in the Forrester Wave: Comprehensive Integration Solutions (CIS) dated November 2010:
Oracle delivers a well-integrated CIS solution. Oracle has been identified as a Leader. The Oracle solution, which enables rapid development of integration-related functionality, includes Oracle SOA Suite and Oracle BPM Suite as its key components. The vendor has the second-largest base of CIS customers (approximately 6,000) and has consistently achieved leadership status in this software category over the past five years. Oracle achieved very strong scores in four out of five product evaluation areas (architecture, integration server, application development framework, and business process management) and achieved an above-average score for its B2B features.

So the Oracle SOA Suite 11g is rated as a Leader by both Gartner and Forrester, that's nice. I wonder how the 10g version would have been rated by them...

As I migrated a few integrations from 10g to 11g I can say that such a migration is pretty much doable; the 11g WebLogic server is straightforward and easy to use, at least compared to the 10g version. Just take some time to learn about the MDS (Metadata Services): it's like a version management system used all across the platform, which you can use to share common artifacts at design time and runtime. This MDS is a huge improvement, just like the central GUI with the whole integration scenario in one single overview instead of divided over the BPEL and ESB consoles. When migrating the components, plan some time to clean up the source code as well, by deleting obsolete files and applying advancing insights and new best practices that were not known at the time of the original developments.

Just one small point of attention when migrating the AIA EBO (Enterprise Business Object) library. The AIA EBO customizations (every EBO has a custom xsd file for your own customizations) should be upgrade-safe, that's the whole purpose of these custom xsd's. But some have been extended by Oracle in 11g: a few AIA 11g custom xsd files contain more types than were present in AIA v2.5, thus in a migration the following files should be merged manually: CustomCommonComponents.xsd, CustomCustomerPartyEBO.xsd and CustomSalesOrderEBO.xsd.

As I mentioned Gartner's Magic Quadrant and Forrester's Wave, here they are: