Global Deployment with API Connect - Serving APIs Worldwide

Why Global API Deployment Matters

If you have customers around the world, serving your APIs from a global footprint significantly improves their experience by reducing latency and increasing reliability. With API Connect's multi-region capabilities, you can ensure users call your APIs from locations closest to them, providing faster response times and better resilience against regional outages.

In this guide, I'll walk through deploying APIs to the 6 current regions of the API Connect Multi-tenant SaaS service on AWS. At the time of writing, these regions are:

  • North America: N. Virginia
  • Europe: Frankfurt, London
  • Asia-Pacific: Sydney, Mumbai, Jakarta

I'll use N. Virginia as the initial source location and demonstrate how to synchronize configuration across all regions.

[Image: API Connect global deployment]

Automatically Deploy APIs and Products to all locations

To maintain consistency across regions, you'll need a reliable deployment pipeline. This pipeline should handle the deployment of APIs and products to all regions whenever changes are made to your source code repository.

You can build this pipeline with the CI/CD tooling of your choice.

This automation ensures that whenever you merge changes to your main branch, your APIs and products are consistently deployed across all regions without manual intervention.
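
As a minimal sketch (not the exact pipeline behind this post), the publish step might look like the following, assuming the apic toolkit CLI is installed and the server, credentials, and product file name (all placeholders here) are supplied by your CI system:

#!/bin/bash
# Minimal CI publish step - all variable values are placeholders to be
# supplied from your pipeline's secret store.
set -euo pipefail

# Authenticate to the management server for the target region
apic login --server "$MGMT_SERVER" \
    --username "$APIC_USERNAME" --password "$APIC_PASSWORD" \
    --realm provider/default-idp-2

# Publish the product (and the APIs it references) to the catalog
apic products:publish my-product.yaml \
    --server "$MGMT_SERVER" --org "$PROVIDER_ORG" --catalog "$CATALOG"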

[Image: pipeline architecture]

Create an API Connect instance in each region

Before configuring your global deployment, you'll need to:

  1. Create an API Connect instance in each target region
  2. Use the same configuration as your source location (N. Virginia in this example)
  3. Specify a unique hostname for each regional instance

Pro Tip: Each paid subscription for API Connect SaaS includes up to 3 instances, which can be distributed across the available regions as needed. For a truly global footprint covering all 6 regions, you'll need two subscriptions.

Configure the portal for the source location

For developer engagement, you'll need a portal where API consumers can discover and subscribe to your APIs. In my implementation, I chose the new Consumer Catalog for its simplicity and ease of setup.

While I didn't need custom branding for this example, I did enable approval workflows for sign-ups. This allows me to:

  • Review all registration requests
  • Control access to sensitive APIs
  • Manage who can subscribe to which products

[Image: portal configuration]

Configure ConfigSync to push configuration changes to all regions

The key to maintaining consistency across regions is ConfigSync, which pushes configuration changes from your source region to all target regions. Since ConfigSync operates on a source-to-target basis, you'll need to run it for each target region individually.

My implementation uses a bash script that:

  1. Sets the source region (N. Virginia)
  2. Defines common properties for all target regions
  3. Loops through each target region, setting region-specific properties
  4. Runs the ConfigSync tool for each region

[Image: ConfigSync architecture]

Here's the script I use:

#!/bin/bash

# US East is always the source catalog
export SOURCE_ORG=ibm
export SOURCE_CATALOG=production
export SOURCE_REALM=provider/default-idp-2
export SOURCE_TOOLKIT_CREDENTIALS_CLIENTID=599b7aef-8841-4ee2-88a0-84d49c4d6ff2
export SOURCE_TOOLKIT_CREDENTIALS_CLIENTSECRET=0ea28423-e73b-47d4-b40e-ddb45c48bb0c

# Set the management server URL and retrieve the API key for the source region
export SOURCE_MGMT_SERVER=https://platform-api.us-east-a.apiconnect.automation.ibm.com/api
export SOURCE_ADMIN_APIKEY=$(grep 'us-east-a\:' ~/.apikeys.cfg | awk '{print $2}')


# Set common properties for all targets - in SaaS the toolkit credentials are common across regions.
export TARGET_ORG=ibm
export TARGET_CATALOG=production
export TARGET_REALM=provider/default-idp-2
export TARGET_TOOLKIT_CREDENTIALS_CLIENTID=599b7aef-8841-4ee2-88a0-84d49c4d6ff2
export TARGET_TOOLKIT_CREDENTIALS_CLIENTSECRET=0ea28423-e73b-47d4-b40e-ddb45c48bb0c

# Loop through the other regions to use as sync targets
# Format: eu-west-a (London), eu-central-a (Frankfurt), ap-south-a (Mumbai), 
# ap-southeast-a (Sydney), ap-southeast-b (Jakarta)
stacklist="eu-west-a eu-central-a ap-south-a ap-southeast-a ap-southeast-b"
for stack in $stacklist 
do
    # Set the target management server URL for the current region
    export TARGET_MGMT_SERVER=https://platform-api.$stack.apiconnect.automation.ibm.com/api
    # Retrieve the API key for the current region from the config file
    export TARGET_ADMIN_APIKEY=$(grep "$stack\:" ~/.apikeys.cfg | awk '{print $2}')
    # Run the ConfigSync tool to synchronize configuration from source to target
    ./apic-configsync
done

For managing API keys, I store them in a configuration file at ~/.apikeys.cfg, where each line contains a region-key pair in the format region: apikey. This approach keeps sensitive credentials out of the script itself; in a more production-ready version, this API key handling would be handed off to a secrets manager.
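
For reference, the file is just one line per region - the keys below are placeholders:

    us-east-a: <api-key-for-us-east-a>
    eu-west-a: <api-key-for-eu-west-a>
    eu-central-a: <api-key-for-eu-central-a>
    ap-south-a: <api-key-for-ap-south-a>
    ap-southeast-a: <api-key-for-ap-southeast-a>
    ap-southeast-b: <api-key-for-ap-southeast-b>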

Verify that everything works as expected

After setting up your global deployment, it's crucial to verify that everything works correctly across all regions. Follow these steps:

  1. Test the source region first:

    • Register a consumer organization in the portal
    • Subscribe to a product containing an API you want to test
    • Use the "Try now" feature to invoke the API and verify it works
  2. Verify ConfigSync completion:

    • Check logs to ensure the ConfigSync job has completed successfully for each region
    • Verify that all configuration changes have been properly synchronized
  3. Test each target region:

    • Call the same API from each region using the appropriate regional endpoint (see the sketch after this list)
    • Verify that response times, behavior, and results are consistent
    • Check analytics to confirm that traffic is being properly recorded in each region
  4. Monitor for any issues:

    • Watch for any synchronization failures or configuration discrepancies
    • Address any region-specific issues that might arise
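
For step 3, a loop like this sketch works well - the gateway hostname pattern, client ID, and API path are placeholders for your own values:

#!/bin/bash
# Placeholder gateway hostnames, client ID and API path - substitute your own.
for stack in us-east-a eu-west-a eu-central-a ap-south-a ap-southeast-a ap-southeast-b
do
    # Report the status code and total response time for each region
    curl -s -o /dev/null \
        -w "$stack: HTTP %{http_code} in %{time_total}s\n" \
        -H "X-IBM-Client-Id: $CLIENT_ID" \
        "https://my-gateway.$stack.example.com/ibm/production/my-api/ping"
done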

Possible next steps

Once your global API deployment is working, consider these enhancements:

  • Implement global load balancing to automatically route customers to the closest region based on their location
  • Set up cross-region monitoring to track performance and availability across all regions
  • Implement disaster recovery procedures to handle regional outages gracefully

Conclusion

A global API deployment strategy with API Connect provides significant benefits for organizations with worldwide customers. By following the approach outlined in this guide, you can:

  • Reduce latency for API consumers regardless of their location
  • Improve reliability through geographic redundancy
  • Maintain consistent configuration across all regions
  • Simplify management through automation

While setting up a global footprint requires some initial configuration, the long-term benefits for your API consumers make it well worth the effort.

Product Academy for Teams - San Jose

Last week I had the opportunity to attend the three-day Product Academy for Teams course at the IBM Silicon Valley Lab in San Jose.

This brought together members of our team from across different disciplines - design, product management, user research, and engineering. It was fantastic to spend time face to face with other members of the team that we usually only work with remotely, and to all go through the education together, learning from each other's approaches and ideas. The API Connect team attendees were split into three smaller teams to work on separate items, and each was joined by a facilitator to help us work through the exercises.

We spent time together learning about the different phases of the product development lifecycle, looking in each phase at the process, some of the best practices, and ways to apply them to our product. It was particularly effective to use real examples from our roadmap in the exercises, so we could collaboratively apply the new approaches and see how they apply directly to our product plan.

Each day of the course looked at a different phase of the product development lifecycle - Discovery, Delivery and Launch & Scale:

Discovery - Are we building the right product? - looking at and assessing opportunities and possible solutions we could offer for them, using evidence to build confidence and reviewing the impact this would have on our North Star Metric.

Delivery - Are we building it right? - ensuring we have a clear understanding of the outcomes we're looking for, how we can achieve them and how we can measure success.

Launch & Scale - Are customers getting value? - ensuring we enable customers to be successful in their use of the product and that we are able to get feedback and data to measure this and improve.

Each of these phases takes an iterative approach, and we looked at how we could apply them to our product plan. We also looked at some of the tools and techniques that can help, and members from the different product teams attending shared how they are using these today.

On the final day of the course I also had the opportunity to share some of our journey with instrumentation, how this has evolved, and some of the lessons we learnt along the way - such as the benefits of having a data scientist on the team. I am looking forward to sharing this with the wider team and seeing how we apply some of the learning to improve our systems going forward - for example, better validation of decisions through measuring and improving our use of data.

API Connect Quick Check

This script originated as part of a much wider framework of tests that we put together when I was in the API Connect SRE team. However, I've found this set of functions useful to run quickly from time to time in different contexts to give a high-level answer to 'Is it working?'

The steps this script takes are as follows:

  • Authenticate to the API Manager Platform API and retrieve an access token
  • Take a templated API and Product and modify it to return a unique value
  • Publish this API to a nominated catalog
  • Invoke the API through the gateway (looping until a successful response is seen)
  • Query the Analytics APIs to find the event logged for this invocation (again looping until found)

Whilst the entire test framework makes a lot of assumptions about how our environments are built and deployed, this test was relatively standalone - I just needed to make a couple of updates for it to work outside of our cloud deployments: adding support for turning off certificate validation and for username/password authentication instead of API key based authentication. Take a look at the script on GitHub.

If you want to try this in your own environment you can follow these steps:

  1. Clone the repository

    git clone https://github.com/ibm-apiconnect/quick-check.git
    
  2. Install the required Python dependencies

    pip install -r requirements.txt
    
  3. Identify the appropriate credentials to use for your stack - either username/password, or an API key if you are using an OIDC registry. Set these as environment variables (APIC_API_KEY, or APIC_REALM, APIC_USERNAME & APIC_PASSWORD) or use the command line parameters - either of:

    • -u <apim_user> -p <apim_password> -r provider/default-idp-2 (Local user registry or LDAP)
    • -a <apim_api_key> (OIDC e.g. SaaS)
  4. Download the credentials.json file from the toolkit download page to identify the client id and client secret for your environment - these can either be set as environment variables (CLIENT_ID / CLIENT_SECRET) or as command line parameters (--client_id / --client_secret)

  5. Run the script according to the usage examples

    python api-deploy-check.py -s <platform-api-hostname> -o <provider_org_name> -c <catalog_name> [credential parameters]
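
For example, a run against the SaaS US East region might look like this - the org and catalog names are placeholders, and the client credentials come from the credentials.json downloaded in step 4:

    # Placeholder org, catalog and credential values - substitute your own
    export CLIENT_ID=<client-id-from-credentials.json>
    export CLIENT_SECRET=<client-secret-from-credentials.json>
    python api-deploy-check.py -s platform-api.us-east-a.apiconnect.automation.ibm.com -o myorg -c production -a <your-api-key>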
    

If successful, you should see output like this:

[Image: example output from the script]

I'd be interested to hear if you find this useful or if you have other similar utilities you use already - let me know!

Originally posted on the IBM API Connect Community Blog

Time with the team in Kochi

I was fortunate to finally get a chance to visit my team in Kochi and had a fantastic few days with them - it was so great to be able to spend some time face to face after working together remotely for several years. I travelled to Kochi via Chennai with British Airways, which meant I arrived through the international terminal in Chennai and then had to clear immigration and head over to the domestic terminal next door. Immigration went smoothly, but at the domestic terminal they didn't recognise my boarding pass for the Indigo codeshare that British Airways had given me, so I had to get help at the check-in desks. Once this was all resolved I passed through security again without issue, and after efficient boarding and a short flight I arrived in Kochi around 9 in the morning, where Akhil and Midhun met me at the airport to take me to the office.

I didn't have very much time to explore Kochi, arriving on Monday morning and heading back over to Chennai on the Thursday night to spend a day with the team there on Friday. We did manage to get away early one afternoon and head for the coast - on the way it poured with rain, so we ended up at a nice spot, Old Lighthouse Lounge, overlooking the coast, where we could get some food and drink.

I tried a lot of different foods (whilst sticking to the less spicy options), heading to different places with the team each day I was there. I think my favourite was the local fish wrapped with spices and baked in a banana leaf, followed closely by parotta (a flaky layered flatbread) and some of the different paneer-based dishes.

There was a real buzz in the office and it was fantastic to see the sense of community and the collaboration that went on within the teams. I managed to have a lot of conversations with different groups across the team, a few one-on-ones and a couple of full team meetings, but there's lots more that could easily have taken another week, as the time flew by too quickly. I hope to see them all again soon!

Remote Gateway on OpenShift

This post guides you through the steps to deploy a DataPower gateway for use as a remote gateway with API Connect Reserved Instance, optionally routing the inbound management traffic through an IBM Cloud Satellite Connector.

[Image: overview diagram]

Installing the Operators

To install the operators in your cluster, follow the steps described in the documentation on how to install the operators.

Set up certificates

Again, follow the steps for creating certificates in the documentation.

Deploy the Gateway Cluster

Deploying the gateway cluster into OpenShift is just a case of creating the GatewayCluster CR - you can start from this template. NB: you will need to ensure that spec.license.use is set to production if you are using the RI-provided image registry, as we don't provide the non-production images.

  • Create a pull secret with access to download the DataPower images and reference it under imagePullSecrets (see the sketch after this list). You can download the image to mirror to your registry from the 'Download Gateway' button in the Reserved Instance Config Manager.

  • Create a secret containing the password for the DataPower admin user and ensure it is referenced under adminUser - you can use the following command to create it:

    oc create secret generic admin-secret --from-literal=password={SET-PASSWORD-HERE!}

  • Update imageRegistry to point to your image registry.

  • Update the jwksUrl for your reserved instance. This needs to be the platform API endpoint for the reserved instance followed by /api/cloud/oauth2/certs - you can find the platform API endpoint URL from the 'Download Clients' link in the API Manager interface.

  • Select and configure the appropriate profile for your cluster.

  • Create a secret containing the CA that the reserved instance endpoints are signed by - the Let's Encrypt X2 Root CA (download it from the Let's Encrypt site) - and ensure the secretName for mgmtPlatformEndpointCASecret points to this:

    curl https://letsencrypt.org/certs/isrg-root-x2.pem -o isrg-root-x2.pem
    oc create secret generic isrg-root-x2 --from-file=ca.crt=isrg-root-x2.pem

  • [Optional] If you are routing the inbound traffic through IBM Cloud Satellite Connector, you will need to configure the hostname for the gatewayManagerEndpoint to match the private cloud endpoint hostname for your connector - typically c-01.private.{region}.link.satellite.cloud.ibm.com.

  • Apply the gateway cluster yaml

  • You can check the status of the cluster using oc get gatewaycluster. If you see any issues you can use oc describe gatewaycluster for more details.
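
Pulling those steps together, the cluster-side commands look something like this - the registry details, credentials, namespace, and file names are all placeholders to adjust for your environment:

    # Placeholder registry, credentials, namespace and file names
    oc create secret docker-registry datapower-pull-secret \
        --docker-server=registry.example.com \
        --docker-username=myuser --docker-password=mytoken \
        -n apicri-gateway
    # Apply the GatewayCluster CR and check its status
    oc apply -f gatewaycluster.yaml -n apicri-gateway
    oc get gatewaycluster -n apicri-gateway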

Set up Satellite Connector [optional]

Optionally, deploy a Satellite Connector agent on the same cluster as the remote gateway:

  • Create a Satellite Connector
  • Deploy the agent
  • Create a Connector Endpoint for the gateway management interface (see the docs). For the gateway management endpoint you will need the following details:
    • Destination FQDN: {gateway-cluster-name}-datapower.{namespace}.svc e.g. api-gateway-datapower.apicri-gateway.svc
    • Destination Port: 3000
    • TCP

Register gateway with Reserved Instance

Create a TLS Client Profile so that the manager can trust the CA that signs the certificate for the gateway management endpoint. This can be done through the RI Config Manager under TLS.

  • Create a Trust Store containing the CA certificate, which can be obtained by copying the ca.crt out of the gateway-manager-endpoint-secret and putting it into a file named ca.pem, as shown below (API Connect needs the .pem extension for the upload to be accepted)
  • Create a TLS Client Profile referencing the Trust Store created
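
A quick way to extract that CA certificate, assuming the gateway is in the apicri-gateway namespace used earlier:

    # Decode the ca.crt entry from the secret into the ca.pem file
    oc get secret gateway-manager-endpoint-secret -n apicri-gateway \
        -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.pem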

Create TLS Server profiles to present to clients invoking the APIs:

  • Create TLS Key Store containing the certificate and private key to present - typically these would be obtained through an external Certificate Authority.
  • Create a Server profile referencing the key store created.

To register the gateway on the 'Gateways' tab you will need the following details:

  • URL of management endpoint: If using Satellite, this is the link endpoint URL including the port number. If not, this will be the hostname from the gatewayManagerEndpoint in the gateway cluster CR.
  • TLS Client Profile: profile created above
  • Base URL of API Invocation endpoint: the host from the gatewayEndpoint in the CR that inbound clients will use
  • TLS Server Profile: profile created above