Ricky Moorhouse

Global Deployment with API Connect

If you have customers around the world you might want to serve them from a global API footprint, so that they can call the API from the location closest to them, reducing latency.

In this example I'm deploying my APIs to the 6 current regions of the API Connect Multi-tenant SaaS service on AWS, which at the time of writing include N. Virginia, Frankfurt, London, Sydney, and Mumbai. I will use N. Virginia as the initial source location.

API Connect Global Deployment

Automatically Deploy APIs and Products to all locations

Create a pipeline that deploys APIs and products to all regions - you could base this around the sample GitHub Action available or build it using the example CLI scripts. This then enables me to publish an API or product once and have it rolled out to every region automatically.

Pipeline

Create an API Connect instance in each region

Create an API Connect instance in each of the regions you want to deploy to. You can use the same image as your source location, but make sure to specify a different hostname for each instance. Each paid subscription for API Connect SaaS includes up to 3 instances which can be distributed as you wish across the available regions.

Configure the portal for the source location

In my simple example I opted to use the new Consumer Catalog as I don't need to configure any custom branding or anything like that. However I did enable approval flows for sign ups so that I can manage who has access to my APIs and products.

Portal

Configure ConfigSync to push configuration changes to all regions

As config sync runs from a source to a target region, you need to run it for each target region in turn. In my case this is done with a loop through the hostnames and a lookup for the appropriate API Key to use for each region.

Config Sync

The script I'm using looks like this:

#!/bin/bash

# US East is always the source catalog
export SOURCE_ORG=ibm
export SOURCE_CATALOG=production
export SOURCE_REALM=provider/default-idp-2
export SOURCE_TOOLKIT_CREDENTIALS_CLIENTID=599b7aef-8841-4ee2-88a0-84d49c4d6ff2
export SOURCE_TOOLKIT_CREDENTIALS_CLIENTSECRET=0ea28423-e73b-47d4-b40e-ddb45c48bb0c

export SOURCE_MGMT_SERVER=https://platform-api.us-east-a.apiconnect.automation.ibm.com/api
export SOURCE_ADMIN_APIKEY=$(grep 'us-east-a\:' ~/.apikeys.cfg | awk '{print $2}')


# Set common properties for all targets - in SaaS the toolkit credentials are common across regions.
export TARGET_ORG=ibm
export TARGET_CATALOG=production
export TARGET_REALM=provider/default-idp-2
export TARGET_TOOLKIT_CREDENTIALS_CLIENTID=599b7aef-8841-4ee2-88a0-84d49c4d6ff2
export TARGET_TOOLKIT_CREDENTIALS_CLIENTSECRET=0ea28423-e73b-47d4-b40e-ddb45c48bb0c

# Loop through the other regions to use as sync targets
stacklist="eu-west-a eu-central-a ap-south-a ap-southeast-a ap-southeast-b"
for stack in $stacklist 
do
    export TARGET_MGMT_SERVER=https://platform-api.$stack.apiconnect.automation.ibm.com/api
    export TARGET_ADMIN_APIKEY=$(grep "$stack\:" ~/.apikeys.cfg | awk '{print $2}')
    ./apic-configsync
done

For handling the API keys for each region, I have a file ~/.apikeys.cfg in which each line contains a pair of values in the form region: apikey.
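
For illustration, the file might look something like this - the keys shown here are placeholders rather than real values:

us-east-a: 00000000-aaaa-bbbb-cccc-111111111111
eu-west-a: 00000000-aaaa-bbbb-cccc-222222222222
ap-south-a: 00000000-aaaa-bbbb-cccc-333333333333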

Verify that everything works as expected

  • In the source region, register a consumer org in the portal and subscribe to a product that contains an API to use.
  • Use the "Try now" section to invoke the API
  • Ensure the configsync job has had time to complete successfully for each region
  • Call the same API across the other regions to validate that everything is working as expected - a simple script like the sketch below can help with this.
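
To make that last check repeatable, a short script can call the same API through each regional gateway and report the status code. This is only a minimal sketch - the gateway hostnames, API path and client ID below are placeholders to substitute with your own values:

# Hypothetical sketch: call the same API via each regional gateway and report status.
# The gateway URLs, API path and client ID are placeholders - substitute your own.
import requests

CLIENT_ID = "<consumer app client id>"
API_PATH = "/<provider-org>/production/<api-path>"   # example path - adjust to your catalog

GATEWAYS = [
    "https://<gateway-host-us-east-a>",
    "https://<gateway-host-eu-west-a>",
    "https://<gateway-host-eu-central-a>",
    "https://<gateway-host-ap-south-a>",
    "https://<gateway-host-ap-southeast-a>",
    "https://<gateway-host-ap-southeast-b>",
]

for gateway in GATEWAYS:
    try:
        response = requests.get(
            gateway + API_PATH,
            headers={"X-IBM-Client-Id": CLIENT_ID},
            timeout=10,
        )
        print(f"{gateway}: {response.status_code}")
    except requests.RequestException as err:
        print(f"{gateway}: request failed - {err}")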

Possible next steps

  • Configure global load balancing to route customers to the closest location automatically.
  • Configure each location to use local replicas of backend applications through catalog properties

Product Academy for Teams - San Jose

Last week I had the opportunity to attend the three-day Product Academy for Teams course at the IBM Silicon Valley Lab in San Jose.

This brought together members of our team from across different disciplines - design, product management, user research, and engineering. It was fantastic to spend time face to face with other members of the team who we usually only work with remotely, and to go through the education together, learning from each other's approaches and ideas. The API Connect team attendees were split into three smaller teams to work on separate items, and each was joined by a facilitator to help us work through the exercises.

We spent time together learning about the different phases of the product development lifecycle, looking in each at the process, some of the best practices, and ways to apply them to our product. It was particularly effective to use real examples from our roadmap in the exercises, so we could collaboratively apply the new approaches and see how they relate directly to our product plan.

Each day of the course looked at a different phase of the product development lifecycle - Discovery, Delivery and Launch & Scale:

Discovery - Are we building the right product? - looking at and assessing opportunities and possible solutions we could offer for them, using evidence to build confidence and reviewing the impact this would have on our North Star Metric.

Delivery - Are we building it right? - ensuring we have a clear understanding of the outcomes we're looking for, how we can achieve them and how we can measure success.

Launch & Scale - Are customers getting value? - ensuring we enable customers to be successful in their use of the product and that we are able to get feedback and data to measure this and improve.

Each of these phases takes an iterative approach, and we looked at how we could apply them to our product plan. We also looked at some of the tools and techniques that can help us do this, and members from the different product teams attending shared how they are using these today.

On the final day of the course I also had the opportunity to share some of our journey with instrumentation, how this has evolved and some of the lessons we learnt along the way - such as the benefits of having a data scientist on the team. I am looking forward to sharing this with the wider team and seeing how we apply some of the learning to improve our systems going forward. For example, better validation of decisions through measuring and improving our use of data.

API Connect Quick Check

This script originated as part of a much wider framework of tests that we put together when I was in the API Connect SRE team. However, I've found this set of functions useful as something I can run quickly from time to time in different contexts to give a high-level answer to 'Is it working?'

The steps this script takes are as follows:

  • Authenticate to the API Manager Platform API and retrieve an access token
  • Take a templated API and Product and modify it to return a unique value
  • Publish this API to a nominated catalog
  • Invoke the API through the gateway (looping until a successful response is seen)
  • Query the Analytics APIs to find the event logged for this invocation (again looping until found)

Whilst the entire test framework makes a lot of assumptions around how our environments are built and deployed, this test was relatively standalone - I just needed to make a couple of updates for it to work outside of our cloud deployments: adding support for turning off certificate validation and for username/password authentication instead of API key based authentication. Take a look at the script on GitHub.
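
As an illustration of the first of those steps, here's a rough sketch of how authenticating to the Platform API and retrieving an access token might look in Python - the hostname, credentials and realm values are placeholders, and the actual script may handle this differently:

# Hypothetical sketch of the first step: authenticate to the API Manager Platform API
# and retrieve an access token. Hostname, realm and credentials are placeholders.
import requests

PLATFORM_API = "https://<platform-api-hostname>/api"

token_request = {
    "realm": "provider/default-idp-2",                  # realm for a local user registry
    "username": "<apim_user>",
    "password": "<apim_password>",
    "client_id": "<client_id from credentials.json>",
    "client_secret": "<client_secret from credentials.json>",
    "grant_type": "password",
}

response = requests.post(
    PLATFORM_API + "/token",
    json=token_request,
    headers={"Accept": "application/json"},
)
response.raise_for_status()
access_token = response.json()["access_token"]

# The token is then sent on subsequent Platform API calls
headers = {"Authorization": f"Bearer {access_token}"}
print("Authenticated - token acquired")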

If you want to try this in your own environment you can follow these steps:

  1. Clone the repository

    git clone https://github.com/ibm-apiconnect/quick-check.git
    
  2. Install the required Python dependencies

    pip install -r requirements.txt
    
  3. Identify the appropriate credentials to use for your stack - either username/password, or an API key if you are using an OIDC registry - and set these as environment variables (APIC_API_KEY, or APIC_REALM, APIC_USERNAME & APIC_PASSWORD) or use the command line parameters - either of:

    • -u <apim_user> -p <apim_password> -r provider/default-idp-2 (Local user registry or LDAP)
    • -a <apim_api_key> (OIDC e.g. SaaS)
  4. Download the credentials.json file from the toolkit download page to identify the client id and client secret for your environment - these can either be set as environment variables (CLIENT_ID / CLIENT_SECRET) or as command line parameters (--client_id / --client_secret)

  5. Run the script according to the usage examples

    python api-deploy-check.py -s <platform-api-hostname> -o <provider_org_name> -c <catalog_name> [credential parameters]
    

If successful, you should see output like this:

Example output from the script

I'd be interested to hear if you find this useful or if you have other similar utilities you use already - let me know!

Originally posted on the IBM API Connect Community Blog

Gilbert White walk, Selborne

This walk came from the In Their Footsteps app from the South Downs National Park. The app is made in conjunction with historic venues and has a series of guides for walks across the South Downs. Each of the walks has a map, a text guide and audio clips along the walk to tell you more about what you are seeing and walking through.

We headed for Selborne for an early start before the day got too hot and arrived at the car park in the village for 7:15am. The first part of the walk was a fairly steep route up through Selborne Common from the car park, taking the zig-zag path that Gilbert White and his brother had cut into the hillside. At the top the route took us along the top edge of Selborne Hanger, a woodland of mostly beech trees. When there were gaps in the trees we could see through to Selborne and Gilbert White's House. Nova seemed to have a great time exploring and played in the leaves.

It then took us down Gracious Street to the sunken lane, Old Alton Road - a fascinating path cut into the landscape by many centuries of walking, with the sides of the path seemingly held back by the roots of the trees. It was interesting to hear on the audio guide how in the past this flooded, and how in the winter icicles formed from the trees.

After this the route took us across meadows, past ponds to Coombe woods, another peaceful shaded woodland where we could enjoy listening to the sounds of the birds along our way. Finally we headed back up to the church and into Selborne to finish the route at Gilbert White's house.

Overall this was a lovely walk with a good variety of things to see - I thoroughly enjoyed it and was glad we made the decision to head out first thing before it got too hot, as some of the parts would likely have been tough going later in the day! It was also a great walk for dogs, with sections where Nova could have a good explore off-lead as well as those with livestock where she had to be on lead.

Status Light

Whilst working from home it's useful for the family to know when I am on a video call or just a voice call. I was first looking at Busy Bar, which I came across via Hiro Report, but couldn't justify the cost of it, and as I had a Raspberry Pi Zero and a Blinkt! sitting around, I decided to build something simple myself. Maybe one day I could make it more complex and build a version using the Pico 2 W Galactic Unicorn?

Robot with Pi Zero

Hardware

Software Part 1 - LED Control Server

This is the application that runs on the Raspberry Pi and controls the lights based on a simple API call. I deploy this to my Raspberry Pi through Balena for ease of management and updates - then I can just push a new copy of the code into git and balena will automatically deploy it to my Raspberry Pi.

The API Call is very simple and currently accepts a GET request with parameters for the red, green and blue values (between 0 and 255) - really this should probably be a PUT request but using GET made testing with a browser simpler. The light is switched off with a DELETE call (or another GET with 0 for each parameter).
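
As a rough sketch of what that server might look like - assuming Flask and the Pimoroni blinkt library, and noting that the actual code in the light directory may differ:

# Minimal sketch of the LED control server - the real implementation lives in the
# light directory and may differ. Assumes Flask and the Pimoroni blinkt library.
from flask import Flask, request
import blinkt

app = Flask(__name__)

@app.route("/", methods=["GET"])
def set_light():
    # Read red/green/blue values (0-255) from the query string, defaulting to off
    red = int(request.args.get("red", 0))
    green = int(request.args.get("green", 0))
    blue = int(request.args.get("blue", 0))
    blinkt.set_all(red, green, blue)
    blinkt.show()
    return {"red": red, "green": green, "blue": blue}

@app.route("/", methods=["DELETE"])
def clear_light():
    # Switch all LEDs off
    blinkt.clear()
    blinkt.show()
    return {"status": "off"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)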

You can see the code for this in the light directory.

Software Part 2 - Webcam detection tool

This part runs on my laptop and detects when the webcam is in use through monitoring the system log - if a change in state is detected, it then sends an API call to the Raspberry Pi to switch the light on or off as appropriate.

The tricky part here was the detection of the webcam - I found a few different samples and a useful Reddit thread (which I can't find now - will add the link later!) on ways to detect the webcam being operational on macOS, and it seems it is liable to change between macOS versions. Looking for eventMessages containing AVCaptureSessionDidStartRunningNotification, AVCaptureSessionDidStopRunningNotification or stopRunning seems to work for the things I've tested on Sequoia.
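
A rough sketch of that detection loop, using the macOS log stream command and the notification strings above - the status light URL is a placeholder, and the real code in the cli directory may differ:

# Rough sketch of webcam detection on macOS by watching the unified log.
# The status light URL is a placeholder; the real code is in the cli directory.
import subprocess
import requests

LIGHT_URL = "http://<raspberry-pi-host>:8080/"

# Stream log entries that mention the AVCaptureSession start/stop notifications
predicate = (
    'eventMessage CONTAINS "AVCaptureSessionDidStartRunningNotification" '
    'OR eventMessage CONTAINS "AVCaptureSessionDidStopRunningNotification" '
    'OR eventMessage CONTAINS "stopRunning"'
)
process = subprocess.Popen(
    ["log", "stream", "--predicate", predicate],
    stdout=subprocess.PIPE,
    text=True,
)

for line in process.stdout:
    if "DidStartRunningNotification" in line:
        # Webcam started - switch the light on (red as an example colour)
        requests.get(LIGHT_URL, params={"red": 255, "green": 0, "blue": 0})
    elif "DidStopRunningNotification" in line or "stopRunning" in line:
        # Webcam stopped - switch the light off
        requests.delete(LIGHT_URL)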

The alternative route I was considering was to use OverSight to trigger a CLI app and leave the detection to them - but having the CLI detect it was more interesting to build.

You can see the code for this part in the cli directory.

The code for this project lives at github.com/rickymoorhouse/status-light.