Ricky Moorhouse


Monitoring API Connect - 2020 update

Even though a lot has changed within the API Connect product, and in the types and numbers of stacks we're running, since I first posted an overview of monitoring API Connect, the main areas we monitor haven't.

We are still using Grafana as a central location for dashboarding and analysing data across different data sources, but some of the tools we use to collect the data have changed. Having access to all the data in a single UI is really powerful, especially when troubleshooting or investigating events across the systems: being able to identify correlations between data from external load balancing, response times parsed from logs and pod utilisation metrics can really help narrow in on specific components and how they impact the wider solution.

Metrics

Metrics flow

For metrics we're making use of IBM Cloud Monitoring with Sysdig to gather metrics from across the Kubernetes deployment, including metrics from Kubernetes itself and from recognisable container applications such as nginx. We supplement this with our own custom metrics exporter, Trawler, which we built for API Connect to extract key application-specific data and expose it to a Prometheus-compatible monitoring tool or send it to Graphite. Examples of the data gathered are counts of objects within API Manager and DataPower, and analytics call counts. For endpoint and availability monitoring we continue to use Hem, a simple Python application that calls HTTP(S) endpoints and sends the metrics to our Graphite stack. All of these then come together in our Grafana dashboards, and in new exploratory dashboards built whilst problem solving as needed.
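As a sketch of how the Prometheus side of this flow can fit together, a scrape job along the following lines would pick up a custom exporter such as Trawler running in the cluster. The job name, namespace and pod label here are illustrative assumptions, not values from our deployment:

scrape_configs:
  - job_name: trawler
    kubernetes_sd_configs:
      # Discover pods in the (assumed) namespace where Trawler runs
      - role: pod
        namespaces:
          names:
            - apiconnect
    relabel_configs:
      # Keep only pods carrying an assumed app=trawler label
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: trawler
        action: keep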

Logging

Logging flow

For our logging infrastructure we continue to use the Elastic stack: the filebeat agent runs within our clusters to gather and tag the container logs, and some custom parsing in logstash extracts the significant elements from the different logs so that we can easily correlate them with events going on in the system. A lot of the time this data is viewed in timeseries graphs within Grafana, but it is also linked to Kibana views to dig deeper into the logs themselves.
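For illustration, a filebeat configuration along these lines gathers container logs and enriches them with Kubernetes metadata before shipping them to logstash; the paths and the logstash host are assumptions for this sketch rather than our exact setup:

filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Attach pod, namespace and container metadata to each event
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

output.logstash:
  # Assumed in-cluster logstash service
  hosts: ["logstash.logging.svc:5044"]

The Kubernetes metadata is what makes it easy to trace a log line back to a specific pod when viewing the data in Grafana or Kibana.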

Trawler - Metric gathering for API Connect

As part of our work running and monitoring our API Connect cloud deployments we've built some of our own tooling to help us see what is going on within them. Trawler is one of these tools: it gathers metrics from a Kubernetes-based deployment of API Connect.

Trawler runs within Kubernetes alongside API Connect, identifies the API Connect components and exposes metrics to Prometheus (or other compatible monitoring tooling).

This data can then be used to feed into dashboards such as this one in Grafana:

Grafana dashboard

Trawler is open-source and available on GitHub and Docker Hub - see the installation guide for more information on using Trawler yourself.

The metrics that Trawler currently collects are as follows (an example alert rule using them follows the lists):

Management subsystem:

  • API Connect version information (apiconnect_build_info)
  • Total users (apiconnect_users_total)
  • Number of provider orgs (apiconnect_provider_orgs_total)
  • Number of consumer orgs (apiconnect_consumer_orgs_total)
  • Number of catalogs (apiconnect_catalogs_total)
  • Number of draft products / APIs (apiconnect_draft_products_total / apiconnect_draft_apis_total)
  • Number of products / APIs (apiconnect_products_total / apiconnect_apis_total)
  • Number of subscriptions (apiconnect_subscriptions_total)

DataPower subsystem:

  • TCP connection stats (datapower_tcp...)
  • Log target stats: events processed, dropped, pending (datapower_logtarget...)
  • Object counts e.g. SSLClientProfile, APICollection, APIOperation etc. (datapower_{object}_total)
  • HTTP Stats (datapower_http_tenSeconds/oneMinute/tenMinutes/oneDay)

Analytics subsystem:

  • Cluster health status (analytics_cluster_status)
  • Number of nodes in the cluster (analytics_data_nodes_total/analytics_nodes_total)
  • Number of shards in each state - active, relocating, initialising, unassigned (analytics_{state}_shards_total)
  • Number of pending tasks (analytics_pending_tasks_total)
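
As an example of putting these metrics to work, a minimal Prometheus alerting rule could watch the analytics shard states listed above; the threshold and duration here are assumptions, not recommendations:

groups:
  - name: apiconnect-trawler
    rules:
      - alert: AnalyticsShardsUnassigned
        # analytics_{state}_shards_total with state=unassigned, as listed above
        expr: analytics_unassigned_shards_total > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Analytics cluster has unassigned shards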

Automatically publish your API when you push to GitHub

Updated 11th October 2016 for API Connect

In less than half an hour I was able to update my project to automatically publish my API to IBM API Connect. Here are the steps...

Sign up for API Connect through Bluemix by creating an API Connect service instance. If you don't already have a Bluemix account, you can sign up for a free trial account.

Install and configure the new toolkit CLI, replacing eu with au or us if you chose a different Bluemix region:

npm install -g apiconnect 
apic config:set server=eu.apiconnect.ibmcloud.com
apic login

Create a product definition for your API:

apic create --type product --title "Travel Information" --apis product.yaml

Adjust the product definition as needed in your favourite editor.
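
For reference, the product definition will look roughly like the sketch below; the plan and rate limit shown are placeholder values to adjust for your own API rather than exactly what apic generates:

product: 1.0.0
info:
  name: travel-information
  title: Travel Information
  version: 1.0.0
apis:
  travel-information:
    # The API definition pushed later in this post
    $ref: swagger.yaml
plans:
  default:
    title: Default Plan
    description: Default plan with a basic rate limit
    rate-limit:
      value: 100/hour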

Add the x-ibm-configuration extensions to your swagger document to configure what happens when someone calls the API. In my case this invokes the backend API:

x-ibm-configuration:
  enforced: true
  phase: realized
  testable: true
  cors:
    enabled: true
  assembly:
    execute:
      - invoke:
          title: invoke
          target-url: '<backend url>'

Now switch over to your CodeShip account, load your project and go to its Deployment section.

Add a custom script option and configure the following script (adding your details as needed):

npm install -g apiconnect
apic config:set server=eu.apiconnect.ibmcloud.com
apic login -u <username> -p <password>
apic config:set organization=<org>
apic push docs/swagger.yaml
apic stage --catalog=sb docs/travel-information.yaml
apic publish --catalog=sb docs/travel-information.yaml

Commit and push to your repository, and your updated API will be published to API Connect! Here is my example API.

If you don't already have a CodeShip account, you can sign up for CodeShip with your GitHub account and link it to your GitHub repository. You can then set up the test and deployment steps in the project settings.