
API Definitions News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just their APIs, but also their schema, and the other moving parts of their API operations.

What Are Your Enterprise API Capabilities?

I spend a lot of time helping enterprise organizations discover their APIs. All of the organizations I talk to have trouble knowing where all of their APIs are–even the most organized of them. Development and IT groups have just been moving too fast over the last decade to know where all of their web services and APIs are, resulting in large organizations not fully understanding all of the capabilities they possess, even when those capabilities are actively operated, and may drive existing web or mobile applications.

Each individual API within the enterprise represents a single capability: the ability to accomplish a specific enterprise task that is valuable to the business. While each individual engineer might be aware of the capabilities present on their team, without group-wide, comprehensive API discovery across an organization, the extent of the enterprise's capabilities is rarely known. If architects, business leadership, and other stakeholders can't browse, list, search, and quickly get access to all of the APIs that exist, the enterprise's capabilities will never be quantified or articulated as part of regular business operations.

In 2018, the capabilities of any individual API are articulated by its machine readable definition. Most likely OpenAPI, but it could also be something like API Blueprint, RAML, or another specification. For these definitions to speak to not just the technical capabilities of each individual API, but also the business capabilities, they will have to be complete, utilizing a higher level strategic set of tags that help label and organize each API into a meaningful set of business capabilities that best describe what each API delivers. This provides a sort of business capabilities taxonomy that can be applied to each API's definition, and used across the rest of the API lifecycle, but most importantly as part of API discovery, and the enterprise digital product catalog.
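
As a rough sketch, assuming OpenAPI 3.0 and a purely hypothetical capability taxonomy, this kind of business-level tagging might look something like the following:

```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "Customer Account API",
    "version": "1.0.0"
  },
  "tags": [
    {
      "name": "Customer Onboarding",
      "description": "Business capability: acquiring and activating new customers."
    },
    {
      "name": "Account Management",
      "description": "Business capability: maintaining customer account data."
    }
  ],
  "paths": {
    "/accounts": {
      "get": {
        "summary": "List customer accounts",
        "tags": ["Account Management"],
        "responses": {
          "200": { "description": "A list of customer accounts." }
        }
      }
    }
  }
}
```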

One of the first things I ask any enterprise organization I'm working with upon arriving is, "do you know where all of your APIs are?" The answer is always no. Many will have a web services or API catalog, but it is almost always out of date, and not used religiously across all groups. Even when there are OpenAPI definitions present in a catalog, they rarely contain the metadata needed to truly understand the capabilities of each API. This leaves developer and IT operations existing as black holes when it comes to enterprise capabilities, sucking up resources, but letting very little light out when it comes to what is happening on the inside. It becomes very difficult for developers, architects, and business users to articulate what their enterprise capabilities are, and they often end up reinventing the wheel when it comes to what the enterprise delivers on the ground each day.


The Layers Of Completeness For An OpenAPI Definition

Everyone wants their OpenAPIs to be complete, but what that really means will depend on who you are, what your knowledge of OpenAPI is, and your motivation for having an OpenAPI in the first place. I wanted to take a crack at articulating a complete (enough) definition for the OpenAPIs I create, based upon what I'm needing them to do (a sketch pulling these layers together follows the list).

Info & Base - Give the basic information I need to understand who is behind an API, and where I can access it.

Paths - Provide an entry for every path that is available for an API and should be included in this definition.

Parameters - Provide a complete list of all path, query, and header parameters that can be used as part of an API. https://gist.github.com/kinlane/29d0247d6ff4aaa39db4dc793df4a2f9

Descriptions - Flesh out descriptions for all of the paths and parameters, helping describe what an API does.

Enums - Publish a list of all the enumerated values that are possible for each parameter used as part of an API. https://gist.github.com/kinlane/444731f0214cab5efcc3ae77011823ba

Definitions - Document the underlying schema being returned by creating a JSON schema definition for the API.

Responses - Associate the definition for the API with the path using a response reference, connecting the dots regarding what will be returned.

Tags - Tag each path with a meaningful set of tags, describing what resources are available in short, concise terms and phrases.

Contacts - Provide contact information for whoever can answer questions about an API, and provide a URL to any support resources.

Create Security Definitions - Define the security for accessing the API, providing details on how each API request will be authenticated.

Apply Security Definitions - Apply the security definition to each individual path, associating common security definitions across all paths.

Complete(enough) - That should give us a complete (enough) API description.
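
Pulling these layers together, here is a minimal sketch of what a complete (enough) definition might look like, using a hypothetical Swagger 2.0 API (every name, path, and value here is illustrative):

```json
{
  "swagger": "2.0",
  "info": {
    "title": "Example Products API",
    "description": "A hypothetical API used to illustrate the layers of completeness.",
    "version": "1.0.0",
    "contact": {
      "name": "API Support",
      "url": "https://example.com/support",
      "email": "api@example.com"
    }
  },
  "host": "api.example.com",
  "basePath": "/v1",
  "schemes": ["https"],
  "securityDefinitions": {
    "apiKey": {
      "type": "apiKey",
      "name": "x-api-key",
      "in": "header"
    }
  },
  "paths": {
    "/products": {
      "get": {
        "summary": "Search products",
        "description": "Returns a list of products matching the query.",
        "tags": ["Products"],
        "security": [{ "apiKey": [] }],
        "parameters": [
          {
            "name": "category",
            "in": "query",
            "type": "string",
            "description": "Limit results to a single product category.",
            "enum": ["books", "music", "film"]
          }
        ],
        "responses": {
          "200": {
            "description": "A list of products.",
            "schema": {
              "type": "array",
              "items": { "$ref": "#/definitions/Product" }
            }
          }
        }
      }
    }
  },
  "definitions": {
    "Product": {
      "type": "object",
      "properties": {
        "id": { "type": "string" },
        "name": { "type": "string" },
        "category": { "type": "string" }
      }
    }
  }
}
```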

Obviously there is more we can do to make an OpenAPI even more complete and precise as a business contract, hopefully speaking to both developers and business people. Having OpenAPI definitions is important, and having them be up to date, complete (enough), and useful is even more important. OpenAPIs provide much more than documentation for an API. They provide all the technical details an API consumer will need to successfully work with an API.

While there are obvious payoffs for having an OpenAPI, like being able to publish documentation and generate code libraries, there are many other uses, like loading it into Postman, Stoplight, and the many other API services and tooling that help developers understand what an API does, and reduce friction when they integrate with an API, and have to maintain their applications. Having an OpenAPI available is becoming a default mode of operation, and something every API provider should have.


Please Refer The Engineer From Your API Team To This Story

I reach out to API providers on a regular basis, asking them if they have an OpenAPI or Postman Collection available behind the scenes. I am adding these machine readable API definitions to my index of APIs that I monitor, while also publishing them out to my API Stack research, the API Gallery, and APIs.io, working to get them published in the Postman Network, and syndicating them as part of my wider work as an OpenAPI member. However, even beyond my own personal need for API providers to have a machine readable definition of their API, and helping them get more syndication and exposure for their API, having a definition present significantly reduces friction when on-boarding with their APIs, at almost every stop along a developer's API integration journey.

One of the API providers I reached out to recently responded with this, "I spoke with one of our engineers and he asked me to refer you to https://developer.[company].com/". Ok. First, I spent over 30 minutes there just the other day, learning about what you do, reading through documentation, and thinking about what was possible–which I referenced in my email. At this point I'm guessing that the engineer in question doesn't know what an OpenAPI or Postman Collection is, doesn't understand the impact these specifications are having on the wider API ecosystem, and lastly, I'm guessing has no idea who I am (ego taking control). All of which provides me with the signals I need to make an assessment of where any API is in their overall journey. It demonstrates to me that they have a long way to go when it comes to understanding the wider API landscape in which they are operating, and that they are too busy to really come out of their engineering box and help their API consumers truly be successful in integrating with their platform.

I see this a lot. It isn't that I expect everyone to understand what OpenAPI and Postman Collections are, or even know who I am. However, I do expect people doing APIs to come out of their boxes a little bit, and be willing to maybe Google a topic before responding to a question, or maybe Google the name of the person they are responding to. I don't use a gmail.com address to communicate, I am using apievangelist.com, and if you are using a solution like Clearbit, or another business intelligence solution, you should always be retrieving some basic details about who you are communicating with, before you ever respond. That is, you do all of this kind of stuff if you are truly serious about operating your API, helping your API consumers be more successful, and taking the time to provide them with the resources they need along the way–things like an OpenAPI, or Postman Collections.

Ok, so why was this response so inadequate?

  • No API Team Present - It shows me that your company doesn't have any humans there to support the humans that will be using your API. My email went from general support, to a backend engineer who doesn't care who I am, or what I need. This is a sign of what the future will hold if I actually bake their API into my applications–I don't need my questions lost between support and engineering, with no dedicated API team to talk to.
  • No Business Intelligence - It shows me that your company has put zero thought into the API business model, on-boarding, and support process. Which means you do not have a feedback loop established for your platform, and your API will always be deficient of the nutrients it needs to grow. Always make sure you conduct a lookup based upon the domain, or Twitter handle, of your consumers to get the context you need to understand who you are talking to.
  • Stuck In Your Bubble - You aren't aware of the wider API community, or the impact OpenAPI and Postman are having on on-boarding, documentation, and other stops along the API lifecycle. Which means you probably aren't going to keep your platform evolving with where things are headed.

Ok, so why should you have an OpenAPI and Postman Collection?

  • Reduce Onboarding Friction - As a developer I won't always have the time to spend absorbing your documentation. Let me import your OpenAPI or Postman Collection into my client tooling of choice, register for a key, and begin making API calls in seconds or minutes (see the sketch after this list). Make learning about your API a hands-on experience, something I'm not going to get from your static documentation.
  • Interactive API Documentation - Having a machine readable definition for your API allows you to easily keep your documentation up to date, and make it a more interactive experience. Rather than just reading your API documentation, I should be able to make calls, see responses, errors, and the other elements I will need to truly understand what you do. There are plenty of open source interactive API documentation solutions that are driven by OpenAPI and Postman, but you'd know this if you were aware of the wider landscape.
  • Generate SDKs, and Other Code - Please do not make me hand code the integration with each of your API endpoints, crafting each request and response manually. Allow me to autogenerate the most mundane aspects of integration, letting the OpenAPI or Postman Collection act as the integration contract.
  • Discovery - Please don't expect your potential consumers to always know about your company, and regularly return to your developer.[company].com portal. Please make your APIs portable so that they can be published in any directory, catalog, gallery, marketplace, and platform that I'm already using, and frequent as part of my daily activities. If you are in my Postman client, I'm more likely to remember that you exist in my busy world.
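
For a sense of how little it takes, here is a minimal sketch of a Postman Collection (v2.1.0 schema) for a hypothetical API. Import a file like this, set your key, and you are making calls:

```json
{
  "info": {
    "name": "Example API",
    "description": "A hypothetical collection used to illustrate onboarding.",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Search products",
      "request": {
        "method": "GET",
        "header": [
          { "key": "x-api-key", "value": "{{apiKey}}" }
        ],
        "url": "https://api.example.com/v1/products?category=books"
      }
    }
  ]
}
```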

These are just a few of the basics of why this type of response to my question was inadequate, and why you'd want to have an OpenAPI and Postman Collections available. My experience on-boarding will be similar to that of other developers, it just happens that the applications I'm developing are outside the normal range of web and mobile applications you have probably been thinking about when publishing your API. But this is why we do APIs, to reach the long tail of users, and encourage innovation around our platforms. I just stepped up and gave 30 minutes of my time (now 60 minutes with this story) to learning about your platform, and pointing me to your developer.[company].com page was all you could muster in return?

Just like other developers will, if I can't onboard with your API without friction, and I can't tell if there is anyone home willing to give me the time of day when I have questions, I'm going to move on. There are other platforms that will accommodate me. The other downside of your response, and me moving on to another platform, is that now I'm not going to write about your API on my blog. Oh well? After eight years of blogging on APIs, and getting 5-10K page views per day, I can write about a topic or industry, and usually dominate the SEO landscape for that API search term(s) (ego still has control). But…I am moving on, no story to be told here. The best part of my job is there are always stories to be told somewhere else, and I get to just move on, and avoid the friction wherever possible when learning how to put APIs to work.

I just needed a single link to a machine readable definition in response to my email, before I moved on!


Some Ideas For API Discovery Collections That Students Can Use

This is a topic I've wanted to set in motion for some time now. I had a new university professor cite my work as part of one of their courses recently, something that floated this concept to the top of the pile again–API discovery collections meant just for students. Helping K-12, community college, and university students quickly understand where to find the most relevant APIs for whatever they are working on. Providing human, but also machine readable collections that can help jumpstart their API education.

I use the API discovery format APIs.json to profile individual APIs, as well as collections of APIs. I'm going to kickstart a couple of project repos, helping me flesh out a handful of interesting collections that might help students better understand the world of APIs (a rough sketch of one collection follows the list):

  • Social - The popular social APIs like Twitter, Facebook, Instagram, and others.
  • Messaging - The main messaging APIs like Slack, Facebook, Twitter, Telegram, and others.
  • Rock Star - The cool APIs like Twitter, Stripe, Twilio, YouTube, and others.
  • Amazon Stack - The core AWS Stack like EC2, S3, RDS, DynamoDB, Lambda, and others.
  • Backend Stack - The essential App stack like AWS S3, Twilio, Flickr, YouTube, and others.
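
As a rough sketch, assuming the APIs.json format, the social collection might start out something like this (the property URLs are placeholders until each API is profiled):

```json
{
  "name": "Social APIs For Students",
  "description": "A collection of popular social APIs to help students get started.",
  "specificationVersion": "0.14",
  "apis": [
    {
      "name": "Twitter",
      "description": "The API behind the popular microblogging platform.",
      "humanURL": "https://developer.twitter.com",
      "properties": [
        { "type": "x-openapi", "url": "https://example.com/twitter/openapi.json" }
      ]
    },
    {
      "name": "Facebook",
      "description": "The Graph API behind the popular social network.",
      "humanURL": "https://developers.facebook.com",
      "properties": [
        { "type": "x-openapi", "url": "https://example.com/facebook/openapi.json" }
      ]
    }
  ]
}
```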

I am going to start there. I am trying to provide some simple, usable collections of relevant APIs for students who are just getting started. If there are any other categories, or stacks of APIs, that you think would be relevant for students to learn from, I'd love to hear your thoughts. I've done a lot of writing about educational and university based APIs, but I've only lightly touched upon which APIs students should be learning about in the classroom.

Providing ready to go API collections will be an important aspect of the implementation of any API training and curriculum effort. Having the technical details of each API readily available, as well as the less technical aspects like signing up, pricing, terms of service, privacy policies, and other relevant building blocks, should also be front and center. I'll get to work on these five API discovery collections for students. I will get the title, description, and list of each API stack published as a README, then get to work on publishing the machine, and human readable details for the technology, business, and politics of using APIs.


Searching For APIs That Possess Relevant Company Information

I'm evolving the search for the Streamdata.io API Gallery I've been working on lately. I'm looking to move beyond the basic keyword search that covers the API name and description, as well as the API path, summary, and description, and begin searching parameters in a meaningful way. Each of the APIs in the Streamdata.io API Gallery has an OpenAPI definition, which is how I render each of the individual API paths using Jekyll and Github Pages. These parameters give me another dimension of data which I can index, and use as a facet in my API gallery search.

I am developing different sets of vocabulary to help me search against the parameters used across APIs, with one of them being focused on company related information. I'm trying to find APIs that provide the ability to add, update, and search against company related data and content, and execute algorithms that help make sense of company resources. There is no perfect way to search for API parameters that touch on company resources, but right now I'm looking for a handful of fields: company, organization, business, enterprise, agency, ticker, corporate, and employer. I return APIs that have a parameter with any of those words in the path or summary, and weight them differently if the word shows up in the description or tags for each API path.
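
Here is a hedged sketch of how this vocabulary and weighting might be captured as a machine readable search configuration. The structure, field names, and weights are my own, purely illustrative:

```json
{
  "vocabulary": "company",
  "terms": [
    "company", "organization", "business", "enterprise",
    "agency", "ticker", "corporate", "employer"
  ],
  "weights": {
    "parameter.name": 1.0,
    "path": 0.8,
    "summary": 0.8,
    "description": 0.5,
    "tags": 0.5
  }
}
```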

Next, I'm also tagging each API path that has a URL field, because this will allow me to connect the dots to a company, organization, or other entity via their domain. That is all I'm trying to do: connect the dots using the parameter structure of an API. I find that there is an important story being told at the API design layer, and API search and discovery is how we are going to bring this story out. Connecting the dots at the corporate level is just one of many interesting stories out there, waiting to be told, and pushing forward the conversation around how we understand the corporate digital landscape, and what resources companies have available.

You can do a basic API search at the bottom of the Streamdata.io API Gallery main page. I do not have my parameter search available publicly yet. I want to spend more time refining my vocabularies, and also look at searching the request and response bodies for each path–I'm guessing this won't be as straightforward as parameters have been. Right now I'm immersed in understanding the words we use to design our APIs, and craft our API documentation. It is fascinating to see how people describe their resources, and how they think (or don't think) about making these resources available to other people. OpenAPI definitions provide a fascinating way to look at how APIs are opening up access to company information, establishing the digital vocabulary for how we exchange data and content, and applying algorithms to help us better understand the business world around us.


Identifying The Different Types Of APIs

APIs come in many shapes and sizes. Even when APIs share a common resource, the likelihood that they are similar in functionality is slim. Even after eight years of studying APIs, I still struggle with understanding the differences, and putting APIs into common buckets. Think of the differences between two image APIs like Flickr and Instagram, but then also think about the difference between Twitter and Twilio–the differences are many, and a challenge to articulate.

I'm pushing forward my API Stack and API Gallery work, and I need to better organize APIs into meaningful groups that I can add to the search functionality for each of my API discovery services. To help me establish a handful of new buckets, I'm thinking more critically about the different types of API functionality I'm coming across, establishing seven new buckets (a sketch of how they might be applied follows the list):

  • General Data - You can get at data across the platform, users, and resources.
  • Relative Data - You can get at data that is relative to a user, company, or specific account.
  • Static Data - The data doesn’t change too often, and will always remain fairly constant.
  • Evolving Data - The data changes on a regular basis, providing a reason to come back often.
  • Historical Data - Provides access to historical data, going back X number of years.
  • Service - The API is offered as a service, or is provided to extend a specific service.
  • Algorithmic - The API provides some sort of algorithmic functionality like ML, or otherwise.
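
As a hypothetical sketch, a gallery entry might carry these buckets as simple facets alongside its other metadata (the entry, field names, and values are illustrative, not a final schema):

```json
{
  "name": "Example Market Data API",
  "resource": "stock prices",
  "buckets": ["general-data", "evolving-data", "historical-data"],
  "provider": "example.com"
}
```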

Understanding the type of data an API provides is important to the work I’m doing. Streamdata.io caters to the needs of financial organizations, and they are looking for data to help them with their investment portfolio, but also have very particular opinions around the type of data they want. This first version of my API type list is heavily weighted towards data, but as I evolve in my thinking, I’m guessing the service and algorithmic buckets will expand and evolve as well.

The APIs I am cataloging within this work sprint fit into one or many of these buckets. They are meant to transcend the resource being made available, and the provider behind the service. I want to be able to search, filter, and organize APIs across many of the usual characteristics we track on, but I'm wanting to go beyond the obvious resource focused characteristics, and move beyond the technology being applied. I'm looking to understand what you can do with an API, and be able to stack hundreds, or thousands, of similar APIs side by side, providing a new view of the landscape.


Algolia Kindly Provides A Hacker News Search API

I was working on a serverless app for Streamdata.io that takes posts made to Hacker News and streams them into an Amazon S3 data lake, and I came across the Algolia powered Hacker News search API. After being somewhat frustrated with the simplicity of the official Hacker News API, I was pleased to find the search kindly provided by Algolia.

There is no search API available for the core Hacker News API, and the design leaves a lot to be desired, so the simplicity of Algolia’s API solution was refreshing. There is a lot of data flowing into Hacker News on a regular day, so providing a search API is pretty critical. Additionally, Algolia’s ability to deliver such a simple, usable, yet powerful API on top of a relevant data source like Hacker News demonstrates the utility of what Algolia offers as a search solution–something I wanted to take a moment to point out here on the blog.

I consider search to be an essential ingredient for any API. Every API should have a search element to their stack, allowing the indexing and searching of all API resources through a single path. This makes Algolia a relevant API service provider in this area, enabling API providers to outsource the indexing and searching of their resources, and the delivery of a dead simple search API for their consumers to tap into. This path forward is probably not for every API, as many weave specialized search throughout their API design, but for teams who are lacking resources, and can afford to outsource this element, Algolia makes sense.

Seeing Algolia in action for a specific API I was integrating with helped bring their service front and center for me. I tend to showcase Elastic for deploying API search solutions, but it is good to receive a regular reminder that Algolia does the same thing as a service. Their work on the Hacker News Search API provides a good example of what they can do for you–sure, we can all build our own search solutions, but honestly, do you have the time? I'll make sure to regularly highlight what Algolia is doing as part of my search API research. Thanks Algolia! I really appreciate what you did for the Hacker News API, it made my work a lot easier.


Kicking The Tires On The SAP API Business Hub

I told the folks over at SAP that I would take a look at their API Business Hub. It isn’t paid work, just helping provide feedback on another addition to the API discovery front, something I’m pretty committed to helping push forward in any way that I can. They’ve pulled together a pretty clean, OpenAPI driven catalog of useful APIs for the enterprise, so I wanted to make sure I kick the tires and size it up alongside the other API discovery work I am doing.

The SAP API Business Hub is a pretty simple and clean catalog for searching and browsing applications, integrations, as well as APIs–I am going to focus in on the API section. At first glance it looks to have about 70 separate APIs, but then you notice each of them is just an umbrella for an API platform, and some of them contain many different API endpoints. Some of the APIs are simple language translation and text extraction resources, while others provide robust access to SAP S/4HANA Cloud, SAP Ariba, and other SAP systems. You see a lot of SAP focused solutions, but then you also see a handful of partner solutions added via their platform partner program.

I see the beginnings of a useful API catalog getting going over at the SAP API Business Hub. Each API is well documented, and provides an OpenAPI definition, complete with interactive documentation you can play with in a sandbox environment. That is more than most API catalogs, marketplaces, and directories I profile have available, allowing you to kick the tires and see what is going on, before working with the production version. They also provide a Java SDK to download for each API, something that could easily be expanded to support many different platforms, programming languages, and continuous integration cycles with solutions like APIMATIC, making it more of a discovery, as well as integration, marketplace.

Like any API marketplace effort, SAP needs to drum up activity within their catalog. They need more partners signing up to add their APIs, as well as consumers being made aware of the resources published there–something that takes a lot of work, evangelism, and storytelling. Next, I'm going to go through their partner signup and see what I can do to add some of my API resources there, and tell some stories about how they might improve upon the partner flow. I like that their marketplace is OpenAPI driven, and I'm curious how much of the API publishing process is machine readable, allowing API providers to easily add their resources, without a lot of manual form work–something most are not going to have the time and resources for. I'll keep evaluating how the SAP API Business Hub overlaps with my other API discovery work on the API Stack, the Streamdata.io API Gallery, the Postman Network, and partnerships with APIs.guru, APIs.io, and others–continuing to push forward the API discovery conversation after almost 8 years.


Discover, Profile, Quantify, Rank, And Publish New APIs To The Streamdata.io API Gallery

About 60% of my work these days is building upon the last five years of my API Stack research, with a focus on building out the Streamdata.io API Gallery. We are fine tuning our approach for discovering new API-driven resources from across the landscape, while also profiling, quantifying, ranking, and publishing them to the Streamdata.io API Gallery, The API Stack, and potentially other locations like the Postman Network, APIs.guru, and the other API discovery destinations I am working with. This helps us make sense of the increasingly noisy API landscape, while identifying the most valuable resources, and then profiling them to help reduce friction when it comes to potentially on-boarding and streaming data from each resource.

Discover New API-Driven Resources

Finding new APIs isn't too difficult, you just have to Google for them. Finding new APIs in an automated way, with minimal human interaction, becomes a little more difficult, but there are some proven ways to get the job done. There is no single place to go to find new APIs, so I've refined a list of the common places I use to discover new APIs:

  • Search Engines - Using search engine APIs to look for APIs based upon the vocabulary we’ve developed.
  • Github - Github provides a wealth of signals when it comes to APIs, and we use the Github API to discover interesting sources using our vocabulary.
  • Stack Overflow - Using the Stack Exchange API, we are able to keep an eye out for developers talking about different types of interesting APIs.
  • Twitter - The social network still provides some interesting signals when it comes to discussions about interesting APIs.
  • Reddit - There are many developers who still use Reddit to discuss technical topics, and ask questions about the APIs they are using.

Using the topic and entity vocabulary we've been developing, we can automate the discovery of new APIs across these sources using their APIs. This helps us track signals for the existing APIs we are keeping an eye on, but also quickly identify new APIs that we can add to the queue, giving us the URLs of the companies, organizations, institutions, and government agencies who are doing interesting things with APIs.
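
A rough sketch of how these sources and this vocabulary might be wired together as a machine readable discovery job configuration. The structure is hypothetical, and the endpoints shown are simplified versions of each platform's search API:

```json
{
  "vocabulary": ["api", "openapi", "swagger", "webhooks"],
  "sources": [
    {
      "name": "github",
      "endpoint": "https://api.github.com/search/repositories",
      "query": "{term} in:name,description,readme"
    },
    {
      "name": "stackoverflow",
      "endpoint": "https://api.stackexchange.com/2.2/search",
      "query": "{term}"
    },
    {
      "name": "twitter",
      "endpoint": "https://api.twitter.com/1.1/search/tweets.json",
      "query": "{term}"
    },
    {
      "name": "reddit",
      "endpoint": "https://www.reddit.com/search.json",
      "query": "{term}"
    }
  ],
  "output": "discovered-urls.json"
}
```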

Profile New Domains That Come In

Our API discovery engine produces a wealth of URLs for us to look at to understand the potential for new data, content, and algorithmic API resources. Our profiling process begins with a single URL, which we then use as the seed for a series of automated jobs that help us understand what an entity is all about:

  • Description - Develop the most informative and concise description of what an entity does, including a set of rich meta tags.
  • Developer - Identify where their developer and API program exists, for quantifying what they do.
  • Blog - Find their blog, and supporting RSS feed so we can tune into what they are saying.
  • Press - Also find their press section, and RSS feed so we can tune into the press about them.
  • Twitter - Find their Twitter account so that we can tune into their social stream.
  • LinkedIn - Find their LinkedIn account so that we can tune into their social stream.
  • Github - Find their Github account so we can find more about what they are building.
  • Contact - Establish a way to contact each entity, in case we have any questions or need support.
  • Other - Identify other common building blocks like support, pricing, and terms of service, that help us understand what is going on.

The profiling process provides us with a framework to understand what an entity is all about, and where they fit into the bigger picture of the API landscape. Most of the sources of information we profile have some sort of machine readable component, allowing us to further quantify the entity, and better understand the value they bring to the table.
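
To give a feel for the output, a profile produced by this process might be captured as a simple machine readable record like the following. This is a hypothetical structure, loosely modeled on APIs.json properties, with placeholder URLs:

```json
{
  "name": "Example, Inc.",
  "url": "https://example.com",
  "description": "A hypothetical entity profiled by our discovery engine.",
  "properties": [
    { "type": "developer", "url": "https://developer.example.com" },
    { "type": "blog", "url": "https://example.com/blog" },
    { "type": "blog-rss", "url": "https://example.com/blog/rss" },
    { "type": "press", "url": "https://example.com/press" },
    { "type": "twitter", "url": "https://twitter.com/example" },
    { "type": "linkedin", "url": "https://www.linkedin.com/company/example" },
    { "type": "github", "url": "https://github.com/example" },
    { "type": "contact", "url": "https://example.com/contact" }
  ]
}
```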

Quantify Each Entity

Next up, we want to quantify each of the entities we've profiled, to give us a better understanding of the scope of their operations, and further define where they fit into the API landscape. We are looking for as much detail about what they are up to as we can find, so we know where we should be investing our time and energy reaching out and developing deeper relationships.

  • API - We profile their APIs, generating an OpenAPI definition that describes the entire surface area of their APIs.
  • Applications - Define approximately how many applications are running on an API, and how many developers are actively using it.
  • Blog - Pull all of their blog posts, including the history, and actively pull on a daily basis.
  • Press - Pull all of their press releases, including the history, and actively pull on a daily basis.
  • Twitter - Pull all of their Tweets and mentions, including the history, and actively pull on a daily basis.
  • Github - Pull all of their repos, stars, followers, and commit history, to understand more about what they are building.
  • Other - Pull other relevant signals from Reddit, Stack Overflow, AngelList, CrunchBase, SEC, Alexa Rank, ClearBit, and other important platforms.

By pulling all the relevant signals for any entity we've profiled, we can better understand the scope of their operations, and assess the reach of their network. This helps us further quantify the value and opportunity that exists with each entity we are profiling, before we spend much more time on integrating.

Ranking Each Entity

After we've profiled and quantified an entity, we like to rank them, and put them into different buckets, so that we can prioritize which ones we reach out to, and which ones we invest more resources in monitoring, tracking, and integrating with. We currently rank them on a handful of criteria, using our own vocabulary and ranking formula.

  • Provider Signals - Rank their activity and relevance based upon signals within their control.
  • Community Signals - Rank their activity based upon signals the community generates about them.
  • Analyst Signals - Rank their activity based upon signals from the analyst community.
  • StreamRank - Rank the activity of their data, content, and API-driven resources.
  • Topically - Understand the value of the activity based upon the topics that are available.

Our ranking of each entity gives us an overall score derived from several different dimensions. Helping us understand the scope, as well as the potential value for each set of APIs, allowing us to further prioritize which entities we invest more time and resources into, maximizing our efforts when it comes to deeper, more technical integrations, and streaming of data into any potential data lake.
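
A hypothetical sketch of what one of these rank records might look like once scored. The fields and scale are illustrative only, not our actual formula:

```json
{
  "entity": "example.com",
  "signals": {
    "provider": 7,
    "community": 5,
    "analyst": 3,
    "streamRank": 8,
    "topically": 6
  },
  "overallScore": 5.8
}
```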

Once an entity has been profiled, quantified, and ranked, we publish the profile to the gallery for discovery. Some of the more interesting APIs we hold back on a little bit, and share with partners and customers who are looking for interesting data sources via landscape analysis reports, but once we are ready we publish the entity to a handful of potential locations:

  • Streamdata.io API Gallery - The distributed gallery owned and operated by Streamdata.io
  • The API Stack - My own research area for profiling APIs that I’ve run for five years.
  • APIs.guru - We are working on the best way to submit OpenAPI definitions to our friends here.
  • Postman Network - For APIs that we validate, and generate working Postman Collections.
  • APIs.io - Publishing to the machine readable API search engine for indexing.
  • Other - We have a network of other aggregation, discovery, and related sites we are working with.

Because each entity is published to its own Github repository, with an APIs.json, OpenAPI, and Postman Collection defining its operations, once published, each entity becomes forkable. Making each gallery entry something anyone can fork, download and directly integrate into their existing systems and applications.

Keep Discovering, Profiling, Quantifying, and Publishing

This work is never ending. We'll just keep discovering, profiling, quantifying, and publishing useful APIs to the gallery, and beyond. Since we benchmark APIs, we'll be monitoring APIs that go away, and we'll archive them in the listings. We'll also be actively quantifying each entity, tuning into their blogs, press, Twitter, and Github accounts, looking for interesting activity about what they are doing. Keeping our finger on the pulse of what each entity is up to, as well as what the scope and activity within their community is all about.

This project began as an API Evangelist project to understand how to keep up with the changing API space, then evolved into a landscape analysis and lead generation tool for Streamdata.io, and has now become an engine for identifying valuable data and content resources. It provides a powerful discovery engine for finding valuable data sources, but when combined with what Streamdata.io does, it also allows you to tune into the most important signals across all of the entities being profiled, and stream the resulting data and signals into data lakes within your own existing cloud infrastructure, for use in training machine learning models, powering dashboards, and other relevant applications.


If A Search For Swagger Or OpenAPI Does Not Yield Results I Try For A Postman Collection Next

While profiling any company, a couple of the Google searches I will execute right away are for "[Company Name] Swagger" and "[Company Name] OpenAPI", hoping that a provider is progressive enough to have published an OpenAPI definition–saving me hours of work understanding what their API does. I've added a third search to my toolbox: if these first two searches do not yield results, I search for "[Company Name] Postman", revealing whether or not a company has published a Postman Collection for their API–another sign of a progressive, outward thinking API provider in my book.

A machine readable definition for an API tells me more about what a company, organization, institution, or government agency does than anything else I can dig up on their website, or social media profiles. An OpenAPI definition or Postman Collection is a much more honest view of what an organization does than the marketing blah blah that is often available on a website. This makes machine readable definitions something I look for almost immediately, and I prioritize profiling, reviewing, and understanding the entities I come across that have a machine readable definition, over those that do not. I only have so much time in a day, and I will prioritize an entity with an OpenAPI or Postman Collection, over those without.

The presence of an OpenAPI and / or Postman Collection isn't just about believing in the tooling benefits these definitions provide. It is about API providers thinking externally about their API consumers. I've met a lot of API providers who are dismissive of these machine readable definitions as trends, which demonstrates they aren't paying attention to the wider API space, and aren't thinking about how they can make their API consumers' lives easier–they are focused on doing what they do. In my experience these API programs tend to not grow as fast, fail to focus on the needs of their integrators and consumers, and often get shut down after they don't get the results they thought they'd see. APIs are all about having that outward focus, and the presence of an OpenAPI and Postman Collection is a sign that a provider is looking outward.

While I'm heavily invested in OpenAPI (I am a member), I'm also invested in Postman. More importantly, I'm invested in supporting well defined APIs that provide solutions for developers. When an API has an OpenAPI for delivering mocks, documentation, testing, monitoring, and other solutions, and provides a Postman Collection that allows you to get up and running making API calls in seconds or minutes, instead of hours or days–it is an API I want to know more about. These searches have become the deciding factor between whether I will continue profiling and reviewing an API, or just flag it for future consideration, and move on to the next API in the queue. I can't keep up with the number of APIs I have in my queue, and it is signals like this that help me prioritize my world, and get my work done on a regular basis.


People Do Not Use Tags In Their OpenAPI Definitions

I import and work with a number of OpenAPI definitions that I come across in the wild. When I come across a version 1.2, 2.0, or 3.0 OpenAPI, I import it into my API monitoring system for publishing as part of my research. After the initial import of any OpenAPI definition, the first thing I look for is consistency in the naming of paths, and the availability of summaries, descriptions, as well as tags. The naming conventions used in paths are all over the place, some cleaner than others. Most have a summary, fewer have descriptions, but I'd say about 80% of them do not have any tags available for each API path.

Tags for each API path are essential to labeling the value a resource delivers. I'm surprised that API providers don't see the need for applying these tags. I'm guessing it is because they don't have to work with many external APIs, and really haven't put much thought into other people working with their OpenAPI definition beyond it just driving their own documentation. Many people still see OpenAPI as simply a driver of API documentation on their portal, and not as an API discovery, or complete lifecycle solution that is portable beyond their platform. They are not considering how tags applied to each API resource will help others index, categorize, and organize APIs based upon the value they deliver.

I have a couple of algorithms that help me parse the path, summary, and description to generate tags for each path, but it is something I'd love for API providers to think more deeply about. It goes beyond just the resources available via each path, and the tags should reflect the overall value an API delivers. If it is a product, event, messaging, or other resource, I can extract a tag from the path, but the path doesn't always provide a full picture, and I regularly find myself adding more tags to each API (if I have the time). This means that many of the APIs I'm profiling, and adding to my API Stack, API Gallery, and other work, aren't as complete with metadata as they possibly could be. This is something API providers should be more aware of, and helping define, as part of their hand crafting, or auto-generation, of OpenAPI definitions.

It is important for API providers to see their OpenAPI definitions as more than just a localized, static feature of their platforms, and instead as a portable definition that will be used by 3rd party API service providers, as well as their API consumers. They should be linking their OpenAPI prominently from their API documentation, not hiding it behind the JavaScript voodoo that generates their docs. They should be making sure their OpenAPI definitions are as complete as they possibly can be, with as much metadata as possible, describing the value each API delivers. They should be loading their OpenAPI definitions into a variety of API design, documentation, discovery, testing, and other tooling, to see what they look like and how they behave. API providers will find that tags are beginning to be used for much more than just the grouping of paths in API documentation. They are how gateways are organizing resources, how management solutions are defining monetization and billing, and what API discovery solutions are using to drive their API search solutions–to just point out a couple of the ways in which they are used.

Tag your APIs as part of your OpenAPI definitions! I know that many API providers are still auto-generating them from a system, but once you have published the latest copy, make sure you load it up in one of the leading API design tools, and give it that last little bit of polish. Think of it as the last bit of API editorial workflow that ensures your API definitions speak to the widest possible audience, and are as coherent as they possibly can be. Your API definitions tell a story about the resources you are making available, and the tags provide a much more precise way to programmatically interpret what APIs actually deliver. Without them, APIs might not properly show up in search engine and Github searches, or render coherently in other API services and tooling. OpenAPI tags are an essential part of defining and organizing your API resources–give them the attention they deserve.


How Should We Be Organizing All Of Our Microservices?

A question I get regularly in my API workshops is, "how should we be organizing all of our microservices?" To which I always recommend they tune into what the API Academy team is up to, and then I dance around giving a long winded answer about how hard it is for me to answer that. I think, in response, I'm going to start asking for a complete org chart for their operations, a list of all their database schema, and a list of all their clients and the industries they are operating in. It will still be a journey for them, or me, to answer that question, but maybe this response will help them understand the scope of what they are asking.

I wish I could provide simple answers for folks when it comes to how they should be naming, grouping, and organizing their microservices. I just don't have enough knowledge about their organization, clients, and the domains in which they operate to provide a simple answer. It is another one of those API journeys an organization will have to embark on, and find their own way forward on. It would take so much time for me to get to know an organization, its culture, resources, and how they are being put to use, that I hesitate to even provide any advice, short of pointing them to the books the API Academy team publishes, and the talks they provide. They are the only guidance I know of that goes beyond the hyped definition of microservices, and actually gets at the root of how you do it within specific domains, while tackling the cultural side of the conversation.

A significant portion of my workshops lately have been about helping groups think about delivering services using a consistent API lifecycle, and showing them the potential for API governance if they can achieve this consistency. Clearly I need to back up a bit, and address some of the prep work involved: making sure they have an organizational chart, all of the schema they can possibly bring to the table, the existing architecture and services in play, as well as as much detail as possible on the clients, industries, and domains in which they operate. In most of my workshops I'm going in blind, not knowing who will all be there, but I think I need a section dedicated to the business side of doing microservices, before I ever begin talking about the technical details of delivering microservices, otherwise I will keep getting questions like this that I can't answer.

Another area that is picking up momentum for me in these discussions is a symptom of the lack of API discovery, and directly related to the other areas I just mentioned. You need to be able to deliver APIs along a lifecycle, but more importantly you need to be able to find the services, schema, and people behind them, as well as coherently speak to who will be consuming them. Without comprehensive discovery, and the ability to understand all of these dependencies, organizations will never be able to find the success they desire with microservices. They won't be any better off than with the monolithic way many organizations have been doing things to date–it will just be much more distributed complexity, which will achieve the same results as the monolithic systems that are in place today.


API Discovery is for Internal or External Services

The topic of API discovery has been picking up momentum in 2018. It is something I've worked on for years, but with the number of microservices emerging out there, it is something I'm seeing become a concern amongst providers. I'm also seeing more potential vendor chatter, looking to provide more services and tooling to help alleviate API discovery pain. Even with all this movement, there is still a lot of education and discussion that needs to occur on the subject, to help bring people up to speed on what API discovery is.

The most common view of API discovery is when you need to find an API for developing an application. You have a need for a resource in your application, and you need to look across your internal and partner resources to find what you are looking for. Beyond that, you will need to search for publicly available API resources, using Google, Github, ProgrammableWeb, and other common ways to find popular APIs. This is definitely the most prominent perspective when it comes to API discovery, but it isn't the only dimension of this problem. There are several dimensions to this stop along the API lifecycle that I'd like to flesh out further, so that I can better articulate them across the conversations I am having.

Another area that gets lumped in with API discovery is the concept of service discovery, or how your APIs will find the backend services that they use to make the magic happen. Service discovery focuses on the initial discovery, connectivity, routing, and circuit breaker patterns involved with making sure an API is able to communicate with any service it depends on. With the growth of microservices, a number of solutions like Consul have emerged, and cloud providers like AWS are evolving their own service discovery mechanisms. This provides one dimension to the API discovery conversation, but one that is different from, and often confused with, front-end API discovery, and how developers and applications find services.
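
To ground the service discovery side of this, here is a minimal sketch of a Consul service definition, registering a hypothetical service with a health check so that dependent services can find and route to it (the service name, port, and health endpoint are illustrative):

```json
{
  "service": {
    "name": "accounts-api",
    "port": 8080,
    "tags": ["api", "v1"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```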

One of the least discussed areas of API discovery, but one that is picking up momentum, is finding APIs while you are developing APIs, to make sure you aren't building something that has already been developed. I come across many organizations who have duplicate and overlapping APIs that do similar things, due to a lack of communication and a central directory of APIs. I'm getting asked by more groups about how they can conduct API discovery by default across their organizations, sniffing out APIs from log files, on Github, and in other channels used by existing development teams. Many groups just haven't been good at documenting and communicating what has been developed, and begin new projects without seeing what already exists–something that will only become a greater problem as the number of microservices grows.

The other dimension of API discovery I'm seeing emerge is discovery in the service of governance: understanding what APIs exist across teams, so that definitions, schema, and other elements can be aggregated, measured, secured, and governed. EVERY organization I work with is unaware of all the data sources, web services, and APIs that exist across their teams. Few want to admit it, but it is a reality. The reality is that you can't govern or secure what you don't know you have. Things get developed so rapidly, and baked into web, mobile, desktop, network, and device applications so regularly, that you just can't see everything. Before companies, organizations, institutions, and government agencies are going to be able to govern anything, they are going to have to begin addressing the API discovery problem that exists across their teams.

API discovery is a discipline that is well over a decade old. It is one I've been actively working on for over 5 years. It is something that is only now getting the discussion it needs, because it is a growing concern. It will become a major concern with each passing day of the microservice evolution. People are jumping on the microservices bandwagon without any coherent way to organize schema, vocabulary, or API definitions, let alone any strategy for indexing, cataloging, sharing, communicating, and registering services. I'm continuing my work on APIs.json and the API Stack, as well as pushing forward my usage of OpenAPI, Postman, and AsyncAPI, which all contribute to API discovery. I'm going to continue thinking about how we can publish open source directories, catalogs, and search engines, and even some automated scanning of logs and other ways to conduct discovery in the background. Eventually, we will begin to find more solutions that work–it will just take time.


Machine Readable API Regions For Use At Discovery And Runtime

I wrote about Werner Vogels of Amazon's post considering the impact of cloud regions a couple of weeks back. I feel that his post captured an aspect of doing business in the cloud that isn't discussed enough, and one that will continue to drive not just the business of APIs, but increasingly the politics of APIs as well. Amidst increasing digital nationalism, and growing regulation of not just the pipes, but also the platforms, understanding where your APIs are operating, and which networks you are using, will become very important to doing business at a global scale.

It is an area I'm adding to my list of machine readable API definitions I'd like to add to the APIs.json stack. The goal with APIs.json is to provide a single index where we can link to all the essential building blocks of an API's operations, with OpenAPI being the first URI, providing a machine readable definition of the surface area of the APIs. Shortly after establishing the APIs.json specification, we also created API Commons, which is designed to be a machine readable specification for describing the licensing applied to an API, in response to the Oracle v Google API copyright case. Beyond that, there haven't been many other machine readable resources, beyond some existing API driven solutions used as part of API operations, like Github and Twitter. There are other API definitions like Postman Collections and API Blueprint that I reference, but they are in the same silo that OpenAPI operates within.

Most of the resources we link to are still human-centered URLs like documentation, pricing, terms of service, support, and other essential building blocks of API operations. However, the goal is to evolve as many of these as possible towards being more machine readable. I'd like to see pricing, terms of service, and aspects of support become machine readable, allowing them to become more automated and understood not just at discovery, but also at runtime. I'm envisioning that regions should be added to this list of currently human readable building blocks that should eventually become machine readable. I feel like we are going to need to make runtime decisions regarding API regions, and we will need major cloud providers like Amazon, Azure, and Google to describe their regions in a machine readable way–something that API providers can then reflect in their own API definitions, depending on which regions they operate in.

At the application and client level, we are going to need to be able to quantify, articulate, and switch between different regions, depending on the user, the type of resources being consumed, and the business being conducted. While this can continue being manual for a while, at some point we are going to need it to become machine readable, so it can become part of the API discovery, as well as integration, layers. I do not know what this machine readable schema will look like, but I'm sure it will be defined based upon what AWS, Azure, and Google are already up to. However, it will quickly need to become a standard that is owned by some governing body, and overseen by the community, not just the vendors. I just wanted to plant the seed. It is something I'm hoping will grow over time, but I'm sure it will take 5-10 years before something takes root, based upon my experience with OpenAPI, APIs.json, and API Commons.
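
To plant that seed a little deeper, here is a purely speculative sketch of what a machine readable regions property might look like as part of an APIs.json index. Every property name here is made up, and any real standard would need to be hammered out by providers and a governing body:

```json
{
  "name": "Example API",
  "humanURL": "https://example.com/api",
  "properties": [
    { "type": "x-regions", "url": "https://example.com/api/regions.json" }
  ],
  "x-regions": [
    { "provider": "aws", "region": "eu-west-1", "country": "IE" },
    { "provider": "azure", "region": "westeurope", "country": "NL" }
  ]
}
```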


The ClosedAPI Specification

You've heard of OpenAPI, right? It is the API specification for defining the surface area of your web API, and the schema you employ–making your public API more discoverable, and consumable in a variety of tools and services. OpenAPI is the API definition for documenting your API when you are just getting started with your platform, and you are looking to maximize the availability and access of your platform API(s). After you've acquired all the users, content, investment, and other value, ClosedAPI is the format you will want to switch to, abandoning OpenAPI for something a little more discreet.

Collect As Much Data As You Possibly Can

Early on you wanted to be defining the schema for your platform using OpenAPI, and even offering up a GraphQL layer, allowing your data model to rapidly scale, adding as many data points as you possibly can. You really want to just ingest any data you can get your hands on from the browser, mobile phones, and any other devices you come into contact with. You can just dump it all into a big data lake, and sort it out later. Adding to your platform schema when possible, and continuing to establish new data points that can be used in the advertising and targeting of your platform users.

Turn The Firehose On To Drive Activity

Early on you wanted your APIs to be 100% open. You've provided a firehose to partners. You've made your garden hose free to EVERYONE. OpenAPI was all about providing scalable access to as many users as possible, through the streaming APIs, as well as the lower volume transactional APIs you offer. Don't rate limit too heavily. Just keep the APIs operating at full capacity, generating data and value for the platform. ClosedAPI is for defining your API as you begin to turn off this firehose, and begin restricting access to your garden hose APIs. You've built up the capacity of the platform, and you really don't need your digital sharecroppers anymore. They were necessary early on in your business, but they are no longer needed when it comes to squeezing as much revenue as you can from your platform.

The ClosedAPI Specification

We've kept the specification as simple as possible, allowing you to still say you have API(s), while also helping make sure you do not disclose too much about what you actually have going on. It provides you with the following fields to describe your APIs:

  • Name
  • Description
  • Email

That is it. You can still have hundreds of APIs. Issue press releases. Everyone will just have to email you to get access to your APIs. It is up to you to decide who actually gets access to your APIs, which emails you respond to, or whether the email account is ever even checked in the first place. The objective is just to appear as if you have APIs, and will entertain requests to access them.
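To show just how simple this is, here is a complete, hypothetical ClosedAPI definition in YAML:

    # A complete ClosedAPI definition. Yes, this is all of it.
    name: Platform API
    description: We have APIs. They are great. Trust us.
    email: api@example.com  # may, or may not, ever be checked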

Maintain Control Over Your Platform

You’ve worked hard to get your platform to where it is. Well, not really, but you’ve worked hard to ensure that others do the work for you. You’ve managed to convince a bunch of developers to work for free building out the applications and features of your platform. You’ve managed to get the users of those applications to populate your platform with a wealth of data, making your platform exponentially more valuable than you could have made it on your own. Now that you’ve achieved your vision, and people are increasingly using your APIs to extract value that belongs to you, you need to turn off the firehose, and the garden hose, and kill off the applications that you do not directly control.

The ClosedAPI specification will allow you to still say that you have APIs, but no longer actually be responsible for your APIs being publicly available. Now all you have to worry about is generating as much revenue as you possibly can from the data you have. You might lose some of your users, and some of your applications, because you do not have publicly available APIs anymore, but that is ok. Most of your users are now trapped, locked-in, and dependent on your platform–continuing to generate data, content, and value for your platform. Stay in tune with the specification using the road map below.

Roadmap:

  • Remove Description – The description field seems extraneous.

OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and from it being primarily used to create API documentation, often times using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy by helping them better understand how they can work in concert, by focusing in on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract

There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and begin either handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
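To give a sense of what this looks like in practice, here is a minimal, hand written first draft of a contract for a hypothetical service–every value is a placeholder for illustration:

    swagger: "2.0"
    info:
      title: Account Service
      description: Manages customer accounts for the platform.
      version: "0.1.0"
    host: api.example.com
    basePath: /v1
    paths:
      /accounts/{id}:
        get:
          summary: Retrieve a single account
          parameters:
            - name: id
              in: path
              required: true
              type: string
          responses:
            '200':
              description: The requested account
              schema:
                $ref: '#/definitions/Account'
    definitions:
      Account:
        type: object
        properties:
          id:
            type: string
          name:
            type: string

A handful of lines like this is enough to start mocking, documenting, and iterating with consumers, long before anyone writes code.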

Mocking Of Each Microservice To Hammer Out The Contract

Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tools available that take an OpenAPI, and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.

OpenAPI Documentation Always Available In Repository

Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI has changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like Github Pages, or locally via localhost. API documentation isn’t just for the microservice owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.

Bringing An API To Life Using Its OpenAPI Contract

Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out part of an API gateway solution, or configure common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code while an API is still changing is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance

A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.
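As a purely illustrative sketch, here is how the details of a contract might translate into assertions–the format below is made up for this post, and not any specific vendor’s:

    # Hypothetical monitor derived from an OpenAPI contract
    monitor: account-service
    checks:
      - request: GET /v1/accounts/{id}
        assertions:
          - status equals 200                      # from the documented responses
          - body validates against Account schema  # from #/definitions/Account
          - response time under 500ms              # from the service level agreement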

Empowering Security To Be Directed By The OpenAPI Contract

An OpenAPI provides the full details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There are a growing number of services and tools emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts. Taking the paths, parameters, definitions, security, and authentication, and using them as actionable details for ensuring security across not just an individual service, but potentially hundreds, or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.

OpenAPI Provides API Discovery By Default

An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage for all the interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across the team Github, Gitlab, Bitbucket, or Team Foundation Server (TFS), and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.

OpenAPIs Deliver The Integration Contract For Clients

OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for quick setup of environments, and collaboration on integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens, or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams

Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performance tested, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and then becomes the measure for how well each service is delivering at scale across not just tens, but hundreds, or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is also the bottom up contract that lets the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.

I can’t stress enough the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, and don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, and spend time editing it by hand, and loading it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


An OpenAPI Service Dependency Vendor Extension

I’m working on a healthcare related microservice project, and I’m looking for a way to help developers express their service dependencies within the OpenAPI or some other artifact. At this point I’m feeling like the OpenAPI is the place to articulate this, adding a vendor extension to the specification that can allow for the referencing of one or more other services any particular service is dependent on. Helping make service discovery more machine readable at discovery and runtime.

To help not reinvent the wheel, I am looking at using the Schema.org WebAPI type, including the extensions put forth by Mike Ralphson and team. I’d like the x-service-dependencies collection to adopt a standardized schema that is flexible enough to reference different types of services. I’d like to see the following elements be present for each dependency:

  • versions (OPTIONAL array of Thing -> Property -> softwareVersion) - It is RECOMMENDED that APIs be versioned using semver.
  • entryPoints (OPTIONAL array of Thing -> Intangible -> EntryPoint)
  • license (OPTIONAL, CreativeWork or URL) - the license for the design/signature of the API
  • transport (enumerated Text: HTTP, HTTPS, SMTP, MQTT, WS, WSS etc)
  • apiProtocol (OPTIONAL, enumerated Text: SOAP, GraphQL, gRPC, Hydra, JSON API, XML-RPC, JSON-RPC etc)
  • webApiDefinitions (OPTIONAL array of EntryPoints) containing links to machine-readable API definitions
  • webApiActions (OPTIONAL array of potential Actions)

Using the Schema.org WebAPI type would allow for a pretty robust way to reference dependencies between services in a machine readable way, that can be indexed, and even visualized in services and tooling. When it comes to evolving and moving services forward, having dependency details baked in by default makes a lot of sense, and ideally each dependency definition would have all the details of the dependency, as well as potential contact information, to make sure everyone is connected regarding the service road map. Anytime a service is being deprecated, versioned, or impacted in any way, we have all the dependency details needed to make an educated decision regarding how to progress with as little friction as possible.
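While the draft specification is still ahead of me, here is a speculative sketch of what an x-service-dependencies collection could look like within an OpenAPI, using the elements listed above–all of the values are placeholders:

    x-service-dependencies:
      - name: Patient Record Service    # the service being depended upon
        versions:
          - "1.2.0"                     # semver, per the softwareVersion property
        entryPoints:
          - https://patients.example.internal/v1
        transport: HTTPS
        apiProtocol: JSON API
        webApiDefinitions:
          - https://patients.example.internal/v1/openapi.yaml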

I’m going to go ahead and create a draft OpenAPI vendor extension specification for x-service-dependencies, and use the Schema.org WebAPI type, complete with the added extensions. Once I start using it, and have successfully implemented it for a handful of services I will publish and share a working example. I’m also on the hunt for other examples of how teams are doing this. I’m not looking for code dependency management solutions, I am specifically looking for API dependency management solutions, and how teams are making these dependencies discoverable in a machine readable way. If you know of any interesting approaches, please let me know, I’d like to hear more about it.


The API Stack Profiling Checklist

I just finished a narrative around my API Stack profiling, telling the entire story of how APIs get profiled for inclusion in the stack. To help encourage folks to get involved, I wanted to distill the process down into a single checklist that could be implemented by anyone.

The Github Base

Everything begins as a Github repository, and it can exist under any user or organization. Once ready, I can fork and publish it as part of the API Stack, or sync with an existing repository project.

  • Create Repo - Create a single repository with the name of the API provider in plain language.
  • Create README - Add a README for the project, articulating what the target API is, and who the author is.

OpenAPI Definition

Profile the API surface area using OpenAPI, providing a definition of the request and response structure for all APIs. Head over to their repository if you need to learn more about OpenAPI. Ideally, there is an existing OpenAPI you can start with, or another machine readable definition you can use as a base–look around within their developer portal, because sometimes you can find an existing definition to start with. Next look on Github, as you never know where there might be something existing that will save you time and energy. However you approach it, I’m looking for complete details on the following:

  • info - Provide as much information about the API as possible.
  • host - Provide a host, or variables describing host.
  • basePath - Document the basePath for the API.
  • schemes - Provide any schemes that the API uses.
  • produces - Document which media types the API uses.
  • paths - Detail the paths including methods, parameters, enums, responses, and tags.
  • definitions - Provide schema definitions used in all requests and responses.
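Pulled together, a skeletal profile covering these elements looks something like the following, with every value being a made up placeholder:

    swagger: "2.0"
    info:
      title: Acme Market Data API
      description: Provides end of day and intraday market data.
      version: "1.0.0"
    host: api.acme-example.com
    basePath: /v2
    schemes:
      - https
    produces:
      - application/json
    paths:
      /quotes/{symbol}:
        get:
          summary: Get the latest quote for a symbol
          tags:
            - Quotes
          parameters:
            - name: symbol
              in: path
              required: true
              type: string
          responses:
            '200':
              description: The latest quote
              schema:
                $ref: '#/definitions/Quote'
    definitions:
      Quote:
        type: object
        properties:
          symbol:
            type: string
          price:
            type: number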

To help accomplish this, I often will scrape, and use any existing artifacts I can possibly find. Then you just have to roll up your sleeves and begin copying and pasting from the existing API documentation, until you have a complete definition. There is never any definitive way to make sure you’ve profiled the entire API, but do your best to profile what is available, including all the details the provider has shared. There will always be more that we can do later, as the API gets used more, and integrated by more providers and consumers.

Postman Collection

Once you have an OpenAPI definition available for the API, import it into Postman. Make sure you have a key, and the relevant authentication environment settings you need. Then begin making API calls to each individual API path, making sure your API definition is as complete as it possibly can be. This can be the quickest, or the most time consuming part of the profiling, depending on the complexity and design of the API. The goal is to certify each API path, and make sure it truly reflects what has been documented. Once you are done, export a Postman Collection for the API, complementing the existing OpenAPI, but with a more run-time ready API definition.

Merging the Two Definitions

Depending on how many changes occurred within the Postman portion of the profiling, you will have to sync things up with the OpenAPI. Sometimes it is a matter of making minor adjustments, sometimes you are better off generating an entirely new OpenAPI from the Postman Collection using APIMATIC’s API Transformer. The goal is to make sure the OpenAPI and Postman are in sync, and working the same way as expected. Once they are in sync, they can be uploaded to the Github repository for the project.

Managing the Unknown Unknowns

There will be a lot of unknowns along the way. A lot of compromises, and shortcuts that can be taken. Not every definition will be perfect, and sometimes it will require making multiple definitions because of the way an API provider has designed their API and used multiple subdomains. Document it all as Github issues in the repository. Use the Github issues for each API as the journal for what happened, and where you document any open questions, or unfinished work. Making the repository the central truth for the API definition, as well as the conversation around the profiling process.

Providing A Central APIs.json Index

The final step of the process is to create an APIs.json index for the API. You can find a sample one over at the specification website. When I profile an API using APIs.json I am always looking for as much detail as I possibly can find, but for the purposes of API Stack profiling, I’m looking for these essential building blocks:

  • Website - The primary website for an entity owning the API.
  • Portal - The URL to the developer portal for an API.
  • Documentation - The direct link to the API documentation.
  • OpenAPI - The link to the OpenAPI I created on Github.
  • Postman - The link to the Postman Collection I created on Github.
  • Sign Up - Where to sign up for the API.
  • Pricing - A link to the plans, tiers, and pricing for an API.
  • Terms of Service - A URL to the terms of service for an API.
  • Twitter - The Twitter account for the API provider – ideally, API specific.
  • Github - The Github account or organization for the API provider.

If you create multiple OpenAPIs, and Postman Collections, you can add an entry for each API. If you break a larger API provider into several entity provider repositories, you can link them together using the include property of the APIs.json file. I know the name of the specification is JSON, but feel free to do them in YAML if you feel more comfortable–I do. ;-) The goal of the APIs.json is to provide a complete profile of the API operations, going beyond what OpenAPI and Postman Collections deliver.
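Since I just gave myself permission to use YAML, here is a stripped down, hypothetical index showing these building blocks for a single provider. Note that the property type names here are illustrative, so check the APIs.json specification for the current vocabulary:

    name: Acme Example
    description: Market data APIs from Acme Example, Inc.
    specificationVersion: "0.14"
    apis:
      - name: Acme Market Data API
        humanURL: https://developer.acme-example.com
        baseURL: https://api.acme-example.com/v2
        properties:
          - type: Documentation
            url: https://developer.acme-example.com/docs
          - type: OpenAPI
            url: https://github.com/acme-example/market-data/openapi.yaml
          - type: PostmanCollection
            url: https://github.com/acme-example/market-data/postman.json
          - type: SignUp
            url: https://developer.acme-example.com/signup
          - type: Pricing
            url: https://developer.acme-example.com/pricing
          - type: TermsOfService
            url: https://acme-example.com/terms
    include: []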

Including In The API Stack

You should keep all your work in your own Github organization or user account. Once you’ve created a repository you would like to include in the API Stack, and syndicate the work to the Streamdata.io API Gallery, APIs.io, APIs.guru, Postman Network, and others, then just submit it as a Github issue on the main repository. I’m still working on the details of how to keep repositories in sync with contributors, then reward and recognize them for their work, but for now I’m relying on Github to track all contributions, and we’ll figure this part out later. The API Stack is just the workbench for all of this, and I’m using it as a place to merge the work of many partners, from many sources, and then work to sensibly syndicate out validated API profiles to all the partner galleries, directories, and search engines.


Defining The Smallest Unit Possible For Use At API Runtime

I’m thinking a lot about what is needed at API runtime lately. How we document and provide machine readable definitions for APIs, and how we provide authentication, pricing, and even terms of service to help reduce friction. As Mike Amundsen (@mamund) puts it, to enable “find and bind”. This goes beyond simple API on-boarding, and getting started pages, and looks to make APIs executable within a single click, allowing us to put them to use as soon as we find them.

The most real world example of this in action can be found with the Run in Postman button, which won’t always deal with the business and politics of APIs at runtime, but will deal with just about everything else. Allowing API providers to publish Run in Postman buttons, defined using a Postman Collection, which include authentication environment details, that API consumers can use to quickly fire up an API in a single click. One characteristic I’ve come across that contributes to Postman Collections being truly executable is that they reflect the smallest unit possible for use at API runtime.

You can see an example of this in action over at Peachtree Data, who like many other API providers have crafted Run in Postman buttons, but instead of doing this for the entire surface area of their API, they have done it for a single API path. Making the Run in Postman button much more precise, and executable. Taking it beyond just documentation, to actually being more of an API runtime executable artifact. This is a simple shift in how Postman Collections can be used, but a pretty significant one. Now instead of wading through all of Peachtree’s APIs in my Postman, I can just do an address cleanse, zip code lookup, or email validation–getting down to business in a single click.

This is an important aspect of on-boarding developers. I may not care about wading through and learning about all your APIs right now. I’m just looking for the API solution to a particular problem. Why clutter up my journey with a whole bunch of other resources? Just give me what I need, and get out of my way. Most other API providers I have looked at in Postman’s API Network have provided a single Run in Postman button for all of their APIs, whereas Peachtree has opted to provide a Run in Postman button for each of their APIs. Distinguishing themselves, and the value of each of their API resources, in a pretty significant way.

I asked the question the other week: how big, or how small, is an API? I’m struggling with this question in my API Stack work, as part of an investment by Streamdata.io to develop an API gallery. Do people want to find Amazon Web Services APIs? Amazon EC2 APIs? Or the single path for firing up an instance of EC2? What is the smallest unit of compute we should be documenting, and generating OpenAPIs and Postman Collections for? I feel like this is an important API discovery conversation to be having. I think depending on the circumstances, the answer will be different. It is a question I’ll keep asking in different scenarios, to help me better understand how I can document, publish, and make APIs not just more discoverable, but usable at runtime.


The Postman API Network

The Postman API Network is one of the recent movements in the API discovery space I’ve been working to get around to covering. As Postman continues its expansion from being just an API client to a full lifecycle API development solution, they’ve added a network for discovering existing APIs that you can begin using within Postman in a single click. Postman Collections make it ridiculously easy to get up and running with an API. So easy, I’m confounded why ALL API providers aren’t publishing Postman Collections, with Run in Postman buttons, in their API docs.

The Postman API Network provides a catalog of APIs in over ten categories, with links to each API’s documentation. All of the APIs in the network have a Run in Postman button available as part of their documentation, which includes them in the Postman API Network. It is a pretty sensible approach to building a network of valuable APIs, who all have invested in there being a runtime-ready, machine readable Postman Collection for their APIs. It is one of the more interesting approaches to solving the API discovery problem that I’ve seen introduced in the eight years I’ve been doing API Evangelist.

I’ve been talking to Abhinav Asthana (@a85) about the Postman API Network, and working to understand how I can contribute, and help grow the catalog as part of my work as the API Evangelist. I’m a fan of Postman, and an advocate of it as an API lifecycle development solution, but I’m also really keen on bringing comprehensive API discovery solutions to the table. With the Postman API Network, and other API discovery solutions I’m seeing emerge recently, I’m finding renewed energy for this area of my work. Something I’ll be brainstorming and writing about more frequently in coming months.

Streamdata.io has been investing in me moving forward the API discovery conversation, to build out their vision of a Streamdata.io API Gallery, but also to contribute to the overall API discovery conversation. I’m in the middle of understanding how this aligns with my existing API Stack work, APIs.json and APIs.io effort, as well as with APIs.guru, AnyAPI, and the wider OpenAPI Initiative. If you have thoughts you’d like to share, feel free to ping me, and I’m happy to talk more about the API discovery, network, and run-time work I’m contributing to, and better understand how your work fits into the picture.


Thoughts On The Schema.Org WebAPI Type Extension

I’m putting some thought into the Schema.Org WebAPI Type Extension proposal by Mike Ralphson (Mermade Software) and Ivan Goncharov (APIs.guru), to “facilitate better automatic discovery of WebAPIs and associated machine and human-readable documentation”. It’s an interesting evolution in how we define APIs, not just at discovery time, but potentially also at “execute time”.

Here is what a base WebAPI type schema could look like:

{ "@context": "http://schema.org/", "@type": "WebAPI", "name": "Google Knowledge Graph Search API", "description": "The Knowledge Graph Search API lets you find entities in the Google Knowledge Graph. The API uses standard schema.org types and is compliant with the JSON-LD specification.", "documentation": "https://developers.google.com/knowledge-graph/", "termsOfService": "https://developers.google.com/knowledge-graph/terms", "provider": { "@type": "Organization", "name": "Google Inc." } }

Then the proposed extensions could include the following:

The webApiDefinitions (EntryPoint) contentType property contains a reference to one of the following content types:

  • OpenAPI / Swagger in JSON - application/openapi+json or application/x-openapi+json
  • OpenAPI / Swagger in YAML - application/openapi
  • RAML - application/raml+yaml
  • API Blueprint in markdown - text/vnd.apiblueprint
  • API Blueprint parsed in JSON - application/vnd.refract.parse-result+json
  • API Blueprint parsed in YAML - application/vnd.refract.parse-result+yaml

Then the webApiActions property brings a handful of actions to the table, with the following being suggested:

  • apiAuthentication - Links to a resource detailing authentication requirements. Note this is a human-readable resource, not an authentication endpoint
  • apiClientRegistration - Links to a resource where a client may register to use the API
  • apiConsole - Links to an interactive console where API calls may be tested
  • apiPayment - Links to a resource detailing pricing details of the API

I fully support extending the Schema.org WebAPI vocabulary in this way. It adds all the bindings needed to make the WebAPI type executable at runtime, as well as more descriptive at discovery time. I like the transport and protocol additions, helping ensure the WebAPI vocabulary is as robust as it possibly can be. webApiDefinitions provides all the technical details regarding the surface area of the API we need to actually engage with it at runtime, and webApiActions begins to get at some of the business of APIs friction that exists at runtime. Making for an interesting vocabulary that can be used to describe web APIs, which also becomes more actionable by providing everything you need to get up and running.

The suggestions are well thought out and complete. If I was to add any elements, I’d say it also needs a support link. There will be contact information embedded within the API definitions, but having a direct link along with registration, documentation, terms of service, authentication, and payment would help out significantly. I would also say that the content type coverage is a bit deficient compared to the transport and protocol coverage–meaning you have SOAP as a protocol, but no content type referencing WSDL. I know that there isn’t a direct definition covering every transport and protocol, but eventually it should be as comprehensive as it can be (i.e. adding AsyncAPI, etc. in the future).

The WebAPI type extensions reflect what we have been trying to push forward with our APIs.json work, but come at it from a different direction. I feel there are significant benefits to having all these details as part of the Schema.org vocabulary, expanding on what you can describe in a common way, which can then also be used as part of each API’s requests, responses, and messages. I don’t see APIs.json as part of a formal vocabulary like this–I see it more as the agile format for indexing APIs that exist, and building versatile collections of APIs, which could also contain a WebAPI reference.

I wish I had more constructive criticism or feedback, but I think it is a great first draft of suggestions for evolving the WebAPI type. There are other webApiActions properties I’d like to see based upon my APIs.json work, but I think this represents an excellent first step. There will be some fuzziness between documentation and apiConsole, as well as gaps in actionability between apiAuthentication and apiClientRegistration–things like application creation (to get keys), and opportunities to have Github, Twitter, and other OpenID/OAuth authentication, but these things can be worked out down the road. Sadly there isn’t much standardization at this layer currently, and I see this extension as a first step towards making it happen. It is a good start, and we have lots of work ahead as we see more adoption.

Nice work Mike and Ivan! Let me know how I can continue to amplify and get the word out. We need to help make sure folks are describing their APIs using Schema.org. I’d love to be able to automate the discovery of APIs, using existing search engines and tooling–I know that you two would like to see this as well. API discovery is a huge problem, which there hasn’t been much movement on in the last decade, and having a common vocabulary that API providers can use to describe their APIs, which search engines can tune into would help move us further down the road when it comes to having more robust API discovery.


An Observable Industry Level Directory Of API Providers And Consumers

I’ve been breaking down the work on banking APIs coming out of Open Banking in the UK lately. I recently took all their OpenAPI definitions and published them as a demo API developer portal, bringing the definitions out of the shadows a little bit, and showing what is possible with the specification. Pushing the project forward some more today, I published the Open Banking API Directory specification to the project, showing the surface area of this very interesting, and important component of open banking APIs in the UK.

The Open Banking Directory provides a pretty complete, albeit rough and technical approach to delivering observability for the actor layer of the UK banking industry API ecosystem. Everyone involved in the banking API ecosystem in the UK has to be registered in the directory. It provides profiles of the banks, as well as any third party players. It really provides an unprecedented, industry level look at how you can make API ecosystems more transparent and observable. This kind of thing doesn’t exist at the startup level, because nobody wants to be open about the number of developers they have, or much else regarding the operation of their APIs–leaving individual, and even industry level API ecosystems operating as black boxes, even when they claim to be an “open API”.

Could you imagine if API providers didn’t handle their own API management layer, and an industry level organization handled the registration, certification, directory, and dispute resolution between API providers and API consumers? Could you imagine if we could see the entire directory of Facebook and Twitter developers, and understand what businesses and individuals were behind the bots and other applications? Imagine if API providers couldn’t lie about the number of active developers, and we knew how many different APIs each application developer used? And it was all public data? An entirely different API landscape would exist, with entirely different incentive models around providing and consuming APIs.

The Open Banking Directory is an interesting precedent. It’s not just an observable API authentication and management layer. It also is an API. Making the whole thing something that can be baked in at the industry level, as well as into each individual application. I’m going to have to simmer on this concept some more. I’ve thought a lot about collective API developer and client solutions, but never anything like this. I’m curious to see how this plays out in a heavily regulated country and industry, but also eager to think about how something like this might work (or not) in government API circles, or even in the private sector, within smaller, less regulated industries.


What We Need To Be Machine Readable At API Run Time

I had breakfast with Mike Amundsen (@mamund) and Matt McLarty (@MattMcLartyBC) of the CA API Academy team in midtown this morning. As we were sharing stories of what we were each working on, the topic of what is needed to execute an API call came up. Not the time consuming version where you find an API, sign up for an account, and figure out the terms of service and pricing, but all of this condensed into something that can happen in a split second within applications and systems.

How do we distill down the essential ingredients of API consumption into a single, machine readable unit that can be automated into what Mike Amundsen calls “find and bind”? This is something I’ve been thinking a lot about lately as I work on my API discovery research, and there are a handful of elements that need to be present:

  • Authentication - Having keys to be able to authenticate.
  • Surface Area - What the host, base url, path, headers, and parameters are for a request.
  • Terms of Service - What are the legal terms of service for consumption.
  • Pricing - How much does each API request cost me?
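Nothing like this exists as a standard yet, but to make the conversation more concrete, a single machine readable unit bundling these four elements might look something like this sketch, where every field name is invented:

    # A hypothetical "find and bind" unit, bundling the four elements above
    api: Acme Address Cleanse
    surface:
      method: POST
      url: https://api.acme-example.com/v2/addresses/cleanse
      definition: https://api.acme-example.com/openapi.yaml
    authentication:
      type: apiKey
      registration: https://developer.acme-example.com/signup
    termsOfService: https://acme-example.com/terms
    pricing:
      perRequest: 0.01
      currency: USD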

We need these elements to be machine readable and easily accessible at discovery and runtime. Currently the surface area of the API can be described using OpenAPI–that isn’t a problem. The authentication details can be included in this, but it means you already have to have an application setup, with keys. It doesn’t factor new users into the equation, meaning discovering, registering, and obtaining keys. I have a draft specification I call “API plans” for the pricing portion of it, but it is something that still needs a lot of work. So, in short, we are nowhere near having this layer ready for automation–which we will need to scale all of this API stuff.

This is all stuff I’ve been beating a drum about for years, and I anticipate it is a drum I’ll be beating for a number of years more before we see it come into focus. I’m eager to see Mike’s prototype on “find and bind”, because it is the only research into automated discovery, registration, and execution at runtime I’ve come across that isn’t some proprietary magic. I’m going to be investing more cycles into my API plans research, as well as the terms of service work I started way back when alongside my API Commons project. Hopefully, moving all of this forward another inch or two.


What Is The Streamdata.io API Gallery?

As I prepare to launch the Streamdata.io API Gallery, I am doing a handful of presentations to partners. As part of this process I am looking to distill down the objectives behind the gallery, and the opportunity it delivers to just a handful of talking points I can include in a single slide deck. Of course, as the API Evangelist, the way I do this is by crafting a story here on the blog. To help me frame the conversation, and get to the core of what I needed to present, I wanted to just ask a couple questions, so that I can answer them in my presentation.

What is the Streamdata.io API Gallery? It is a machine readable, continuously deployed collection of OpenAPI definitions, indexed using APIs.json, with a user friendly interface which allows for the browsing, searching, and filtering of individual APIs that deliver value within specific industries and topical areas.

What are we looking to accomplish with the Streamdata.io API Gallery? Discover and map out interesting and valuable API resources, then quantify what value they bring to the table while also ranking, categorizing, and making them available in a search engine friendly way that allows potential Streamdata.io customers to discover and understand what is possible.

What is the opportunity around the Streamdata.io API Gallery? Identify the best of breed APIs out there, and break down the topics that they deliver within, while also quantifying the publish and subscribe opportunities available–mapping out the event-driven opportunity that has already begun to emerge, while demonstrating Streamdata.io’s role in helping get existing API providers from where they are today, to where they need to be tomorrow.

Why is this relevant to Streamdata.io, and their road map? It provides a wealth of research that Streamdata.io can use to understand the API landscape, and feed its own sales and marketing strategy, but does it in a way that generates valuable search engine and social media exhaust which potential customers might find interesting–bringing them new API consumers, while also opening their eyes up to the event-driven opportunity that exists out there.

Distilling Things Down A Bit More

Ok, that answers the general questions about what the Streamdata.io API Gallery is, and why we are building it. Now I want to distill down a little bit more to help me articulate the gallery as part of a series of presentations, existing as just a handful of bullet points. Helping get the point across in hopefully 60 seconds or less.

  • What is the Streamdata.io API Gallery?
    • API directory, for finding individual units of compute within specific topics.
    • OpenAPI (fka Swagger) driven, making each unit of value usable at run-time.
    • APIs.json indexed, making the collections of resources easy to search and use.
    • Github hosted, making it forkable and continuously deployable and integrate(able).
  • Why is the Streamdata.io Gallery relevant?
    • It maps out the API universe with an emphasis on the value each individual API path possesses.
    • Categorizes, tags, and indexes APIs into collections which are published to Github.
    • Provides a human and machine friendly view of the existing publish and subscribe landscape.
    • Begins to organize the API universe in context of a real time event-driven messaging world.
  • What is the opportunity around the Streamdata.io API Gallery?
    • Redefining the API landscape from an event-driven perspective.
    • Quantify, qualify, and rank APIs to understand what is the most interesting and highest quality.
    • Help API providers realize events occurring via their existing platforms.
    • Begin moving beyond a request and response model to an event-driven reality.

There is definitely a lot more going on within the Streamdata.io API Gallery, but I think this captures the essence of what we are trying to achieve. A lot of what we’ve done is building upon my existing API Stack work, where I have worked to profile and index public APIs using OpenAPI and APIs.json, but this round of work is taking things to a new level. With API Stack I ended up with lists of companies and organizations, each possessing a list of APIs. The Streamdata.io API Gallery is a list of API resources, broken down by the unit of value they bring to the table, which is further defined by whether it is a GET, POST, or PUT–essentially a publish or subscribe opportunity.

Additionally, I am finally finding traction with the API rating system(s) I have been developing for the last five years. Profiling and measuring the companies behind the APIs I’m profiling, and making this knowledge available not just at discovery time, but potentially at event and run time. Basically being able to understand the value of an event when it happens in real time, and make programmatic decisions regarding whether we care about the particular event or not. Eventually, allowing us to subscribe only to the events that truly matter to us, and are of the highest value–then tuning out the rest. Delivering API ratings in an increasingly crowded and noisy event-driven API landscape.

We have the prototype for the Streamdata.io API Gallery ready to go. We are still adding APIs, and refining how they are tagged and organized. The rating system is very basic right now, but we will be lighting up different dimensions of the rating(s) algorithm, and hopefully delivering on different angles of how we quantify the value of the events that are occurring. I’m guessing we will be doing a soft launch in the next couple of weeks to little fanfare, and it will be something that builds, and evolves over time as the API index gets refined and used more heavily.


The Importance of the API Path Summary, Description, and Tags in an OpenAPI Definition

I am creating a lot of OpenAPI definitions right now. Streamdata.io is investing in me pushing forward my API Stack work, where I profile APIs using OpenAPI, and index their operations using APIs.json. From the resulting indexes, we are building out the Streamdata.io API Gallery, which shows the possibilities of providing streaming APIs on top of existing web APIs available across the landscape. The OpenAPI definitions I’m creating aren’t 100% complete, but they are “good enough” for what we need to do with them, and are allowing me to catalog a variety of interesting APIs, and automate the proxying of them using Streamdata.io.

I’m finding the most important part of doing this work is making sure there is a rich summary, description, and set of tags for each API. While the actual path, parameters, and security definitions are crucial to programmatically executing the API, the summary, description, and tags are essential so that I can understand what the API does, and make it discoverable. As I list out different areas of my API Stack research, like the financial market data APIs, it is critical that I have a title, and description for each provider, but the summary, description, and tags are what provide the heart of the index for what is possible with each API.

When designing an API, as a developer, I tend to just fly through writing the summaries, descriptions, and tags for my APIs. I’m focused on the technical details, not this “fluff”. However, this represents one of the biggest disconnects in the API lifecycle, where the developer is so absorbed with the technical details that we forget, neglect, or just don’t care to articulate what we are doing to other humans. The summary, description, and tags are the outlines in the API contract we are providing. These details are much more than just the fluff for the API documentation. They actually describe the value being delivered, and allow this value to be communicated, and discovered throughout the life of an API–they are extremely important.
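The difference is easy to see in a single path. Here is a hypothetical example of the level of description I’m talking about, where the summary, description, and tags do the heavy lifting for discovery:

    paths:
      /markets/{symbol}/quotes:
        get:
          summary: Get the latest quote for a stock symbol
          description: Returns the most recent price, volume, and daily change
            for a single stock symbol, refreshed throughout the trading day.
          tags:
            - Stocks
            - Quotes
            - Real Time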

As I’m doing this work, I realize just how important these descriptions and tags are to the future of these APIs. Whenever it makes sense I’m translating these APIs into streaming APIs, and I’m taking the tags I’ve created and using them to define the events, topics, and messages that are being transacted via the API I’m profiling. I’m quantifying how real time these APIs are, and mapping out the meaningful events that are occurring. This represents the event-driven shift we are seeing emerge across the API landscape in 2018. However, I’m doing this on top of API providers who may not be aware of this shift in how the business of APIs is getting done, and are just working hard on their current request / response API strategy. These summaries, descriptions, and tags, represent how we are going to begin mapping out the future that is happening around them, and begin to craft a road map that they can use to understand how they can keep evolving, and remain competitive.


The Growing Importance of Github Topics For Your API SEO

When you are operating an API, you are always looking for new ways to be discovered. I study this aspect of operating APIs from the flip-side–how do I find new APIs, and stay in tune with what API providers are up to? Historically we find APIs using ProgrammableWeb, Google, and Twitter, but increasingly Github is where I find the newest, coolest APIs. I do a lot of searching via Github for API related topics, but increasingly Github topics themselves are becoming more valuable within search engine indexes, making them an easy way to uncover interesting APIs.

I was profiling the market data API Alpha Vantage today, and one of the things I always do when I am profiling an API is conduct a Google, and then secondarily a Github, search for the API’s name. Interestingly, I found a list of Github Topics while Googling for the Alpha Vantage API, uncovering some interesting SDKs, CLIs, and other open source solutions that have been built on top of the financial data API. Showing the importance of operating your API on Github, but also of working to define a set of standard Github Topic tags across all your projects, and helping encourage your API community to use the same set of tags, so that their projects will surface as well.

I consider Github to be the most important tool in an API provider’s toolbox these days. I know as an API analyst, it is where I learn the most about what is really going on. It is where I find the most meaningful signals that allow me to cut through the noise that exists on Google, Twitter, and other channels. Github isn’t just for code. As I mention regularly, 100% of my work as API Evangelist lives within hundreds of separate Github repositories. Sadly, I don’t spend as much time as I should tagging, and organizing projects into meaningful topic areas, but it is something I’m going to be investing in more. Conveniently, I’m doing a lot of profiling of APIs for my partner Streamdata.io, which involves establishing meaningful tags for use in defining real time data stream topics that consumers can subscribe to–making me think a little more about the role Github topics can play.

One of these days I will do a fresh roundup of the many ways in which Github can be used as part of API operations. I’m trying to curate and write stories about everything I come across while doing my work. The problem is there isn’t a single place I can send my readers to when it comes to applying this wealth of knowledge to their operations. The first step is probably to publish Github as its own research area on Github (mind blown), as I do with my other projects. It has definitely risen up in importance, and can stand on its own feet alongside the other areas of my work. Github plays a central role in almost every stop along the API life cycle, and deserves its own landing page when it comes to my API research, and priority when it comes to helping API providers understanding what they should be doing on the platform to help make their API operations more successful.


You Have to Know Where All Your APIs Are Before You Can Deliver On API Governance

I wrote an earlier article about how basic API design guidelines are your first step towards API governance, but I wanted to introduce another first step you should be taking even before basic API design guides–cataloging all of your APIs. I’m regularly surprised by the number of companies I’m talking with who don’t even know where all of their APIs are. Sometimes, but not always, there is some sort of API directory or catalog in place, but often times it is out of date, and people just aren’t registering their APIs, or following any common approach to delivering APIs within the organization–hence the need for API governance.

My recommendation is that even before you start thinking about what your governance will look like, or even mention the word to anyone, you take inventory of what is already happening. Develop an org chart, and begin having conversations. Identify EVERYONE who is developing APIs, and start tracking how they do what they do. Sure, you want to get an inventory of all the APIs each individual or team is developing or operating, but you should also be documenting all the tooling, services, and processes they employ as part of their workflow. Ideally, there is some sort of continuous deployment workflow in place, but this isn’t a reality in many of the organizations I work with, so mapping out how things get done is often the first order of business.

One of the biggest failures of API governance I see is that the strategy has no plan for how we get from where we are to where we want to be–it simply focuses on where we want to be. This type of approach contributes significantly to pissing people off right out of the gate, making API governance a lot more difficult. Stop focusing on where you want to be for a moment, and focus on where you are. Build a map of where people are, including the tools, services, skills, and best and worst practices in play. Develop a comprehensive map of where the organization is today, and then sit down with all stakeholders to evaluate what can be improved upon, and streamlined. Beginning the hard work of building a bridge between your existing teams and what might end up being a future API governance strategy.

API design is definitely the first logical step of your API governance strategy, standardizing how you design your APIs, but this shouldn’t be developed from the outside-in. It should be developed from what already exists within your organization, and then begin mapping to healthy API design practices from across the industry. Make sure you are involving everyone you’ve reached out to as part of the inventory of APIs, tools, services, and people. Make sure they have a voice in crafting that first draft of API design guidelines you bring to the table. Without buy-in from everyone involved, you are going to have a much harder time ever reaching the point where you can call what you are doing governance, let alone seeing the results you desire across your API operations.


I Created An OpenAPI For The Hashicorp Consul API

I was needing an OpenAPI (fka Swagger) definition for the Hashicorp Consul API, so that I could use it in a federal government project I’m advising on. We are using the solution for the microservices discovery layer, and I wanted to be able to automate using the Consul API, publish documentation within our project Github, import it into Postman across the team, as well as several other aspects of API operations. I’m working to assemble at least a first draft OpenAPI for the entire technology stack we’ve opted to use for this project.

First thing I did was Google, “Consul API OpenAPI”, then “Consul API Swagger”, which didn’t yield any results. Then I Githubbed “Consul API Swagger”, and came across a Github Issue where a user had asked for “improved API documentation”. The resulting response from Hashicorp was, “we just finished a revamp of the API docs and we don’t have plans to support Swagger at this time.” Demonstrating they really don’t understand what OpenAPI (fka Swagger) is, something I’ll write about in future stories this week.

One of the users on the thread had created an API Blueprint for the Consul API, and published the resulting documentation to Apiary. Since I wanted an OpenAPI, instead of an API Blueprint, I headed over to APIMATIC API Transformer to see if I could get the job done. After trying to transform the API Blueprint to OpenAPI 2.0 I got some errors, which forced me to spend some time this weekend trying to hand-craft / scrape the static API docs and publish my own OpenAPI. The process was so frustrating I ended up pausing the work, and writing two blog posts about my experiences. Then this morning I received an email from the APIMATIC team that they had caught the errors, and updated the API Blueprint, allowing me to continue transforming it into an OpenAPI definition. Benefits of being the API Evangelist? No, benefits of using APIMATIC!

Anyways, you can find the resulting OpenAPI on Github. I will be refining it as I use it in my project. Ideally, Hashicorp would take ownership of their own OpenAPI, providing a machine readable API definition that consumers could use in tooling, and other services. However, they are stuck where many other API service providers, API providers, and API consumers are–thinking OpenAPI is still Swagger, which is just about API documentation. ;-( I try not to let this frustrate me, and will write about it each time I come across it, until things change. OpenAPI (fka Swagger) is so much more than just API documentation, and is such an enabler for me as an API consumer when I’m getting up and running with a project. If you are doing APIs, please take the time to understand what it is–it could be the difference between me using your API, or moving on to find another solution. It is that much of a timesaver for me.


API Discovery Is Mostly About You Sharing Stories About The APIs You Use

I do a lot of thinking about API discovery, and how I can help people find the APIs they need. As part of this thinking I’m always curious why API discovery hasn’t evolved much in the last decade. You know, no Google for APIs. No magical AI, ML, AR, VR, or Blockchain for distributed API mining. As I’m thinking, I ask myself, “how is it that the API Evangelist finds most of his APIs?” Well, word of mouth. Storytelling. People talking about the APIs they are using to solve a real world business problem.

That is it! API storytelling is API discovery. If people aren’t talking about your API, it is unlikely it will be found. Sure people still need to be able to Google for solutions, but really that is just Googling, not API discovery. It is likely they are just looking for a company that does what they need, and the API is a given. We really aren’t going to discover new APIs. I don’t know many people who spend time looking for new APIs (except me, and I have a problem). People are going to discover new APIs by hearing about what other people are using, through storytelling on the web and in person.

In my experience as the API Evangelist I see three forms of this in action:

1) API providers talking about their API use cases on their blog.
2) Companies telling stories about their infrastructure on their blog.
3) Individuals telling stories about the APIs they use in their jobs, side projects, and elsewhere.

This represents the majority of ways in which I discover new APIs. Sure, as the API Evangelist I will occasionally discover new APIs by scouring Github, Googling, and harvesting social media, but I am an analyst. These three ways will be how the average person discovers new APIs. Which means, if you want your API to be discovered, you need to be telling stories about it. If you want the APIs you depend on to be successful and find new users, you need to be telling stories about them.

Sometimes in all of this techno hustle, good old fashioned storytelling is the most important tool in our toolbox. I’m sure we’ll keep seeing waves of API directories, search engines, and brain wave neural networks emerge to help us find APIs over the next couple of years. However, I’m predicting that API discovery will continue to be defined by human beings talking to each other, telling stories on their blogs, via social media, and occasionally through brain interfaces.


API Discovery Will Be About Finding Companies Who Do What You Need And API Is Assumed

While I'm still investing in defining the API discovery space, and I'm seeing some improvements from API service and tooling providers when it comes to finding, sharing, indexing, and publishing API definitions, I honestly don't think API discovery will ever be a top-level concern. While API design, deployment, management, and even testing and monitoring have floated to the top as primary discussion areas for API providers and consumers, the area of API discovery has never quite become a priority. There is always lots of talk about API discovery, mostly about what is broken, rarely about what is needed to fix it, with regular waves of directories, marketplaces, and search solutions emerging to attempt to fix the problem, but always falling short.

As I watch more mainstream businesses on-board with the world of APIs, and banks, healthcare, insurance, automobile, and other staple industries work to find their way forward, I'm thinking that the mainstreamification of APIs will outpace API discovery. Meaning that people will be looking for companies who do the thing that they want, and the API is just assumed. Every business will need to have an API, just like every business is assumed to have a website. Sure, there will be search engines, directories, and marketplaces to help us find what we are looking for, but we just won't always be looking for APIs, we will be looking for solutions. The presence of an API will be assumed, and if it doesn't exist we will move on, looking for other companies, organizations, institutions, and agencies who do what we need.

I feel like this is one of the reasons API discovery really became a thing. It doesn’t need to be. If you are selling products and services online you need a website, and as the web has matured, you need the same data, content, media, and algorithms available in a machine readable format so they can be distributed to other websites, used within a variety of mobile applications, and available in voice, bot, device, and other applications. This is just how things will work. Developers won’t be searching for APIs, they’ll be searching for the solution to their problem, and the API is just one of the features that have to be present for them to actually become a customer. I’ll keep working to evolve my APIs.json discovery format, and incentivize the development of client, IDE, CI/CD, and other tooling, but I think these things will always be enablers, and not ever a primary concern in the API lifecycle.


OpenAPI 3.0 Tooling Discovery On Github And Social Media

I’ve been setting aside time to browse through and explore tagged projects on Github each week, learning about what is new and trending out there on the Githubz. It is a great way to explore what is being built, and what is getting traction with users. You have to wade through a lot of useless stuff, but when I come across the gems it is always worth it. I’ve been providing guidance to all my customers that they should be publishing their projects to Github, as well as tagging them coherently, so that they come up as part of tagged searches via the Github website, and the API (I do a lot of discovery via the API).

When I am browsing API projects on Github I usually have a couple of orgs and users I tend to peek in on, and my friend Mike Ralphson (@PermittedSoc) is always one. Except, I usually don't have to remember to peek in on Mike's work, because he is really good at tagging his work, and building interesting projects, so his stuff is usually coming up as I'm browsing tags. His is the first repository I've come across that is organizing OpenAPI 3.0 tooling, and on his project he has some great advice for project owners: "Why not make your project discoverable by using the topic openapi3 on GitHub and using the hashtag #openapi3 on social media?" « Great advice Mike!!
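To show how little work is involved in tuning into this, here is a rough sketch in Python of the kind of script I use to pull repositories tagged openapi3 from the Github search API (at the time of writing, topic search requires the mercy-preview media type):

import requests

# Search Github for repositories tagged with the openapi3 topic
url = "https://api.github.com/search/repositories"
headers = {"Accept": "application/vnd.github.mercy-preview+json"}
params = {"q": "topic:openapi3", "sort": "updated"}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

for repo in response.json().get("items", []):
    print(repo["full_name"], "-", repo.get("description"))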

As I said, I regularly monitor Github tags, and I also monitor a variety of hashtags on Twitter for API chatter. If you aren’t tagging your projects, and Tweeting them out with appropriate hashtags, the likelihood they are going to be found decreases pretty significantly. This is how Mike will find your OpenAPI 3.0 tooling for inclusion in his catalog, and it is how I will find your project for inclusion in stories via API Evangelist. It’s a pretty basic thing, but it is one that I know many of you are overlooking because you are down in the weeds working on your project, and even when you come up for air, you probably aren’t always thinking about self-promotion (you’re not a narcissist like me, or are you?)

Twitter #hashtags have long been a discovery mechanism on social media, but tagging on Github is quickly picking up steam when it comes to coding project discovery. Also, with the myriad of ways in which Github repos are being used beyond code, Github tagging makes it a discovery tool in general. When you consider how API providers are publishing their API portals, documentation, SDKs, definitions, schema, guides, and much more, it makes Github one of the most important API discovery tools out there, moving well beyond what ProgrammableWeb or Google brings to the table. I'll continue to turn up the volume on what is possible with Github, as it is no secret that I'm a fan. Everything I do runs on Github, from my website, to my APIs, and supporting tooling--making it a pretty critical part of what I do in the API sector.


Cloud Marketplace Becoming The New Wholesale API Discovery Platform

I'm keeping an eye on the AWS Marketplace, as well as what Azure and Google are up to, looking for growing signs of anything API. I'd have to say that, while Azure is a close second, AWS is growing faster when it comes to the availability of APIs in their marketplace. What I find interesting about this growth is it isn't just about the cloud, it is about wholesale APIs, and as it grows it quickly becomes about API discovery as well.

The API conversation on AWS Marketplace has for a while been dominated by API service providers, and specifically the API management providers who have pioneered the space:

After management, we see some of the familiar faces from the API space doing API aggregation, database to API deployment, security, integration platform as a service (iPaaS), real time, logging, authentication, and monitoring with Runscope.

All rounding off the API lifecycle, providing a growing number of tools that API providers can deploy into their existing AWS infrastructure to help manage API operations. This is how API service providers should be operating, offering retail SaaS versions of their APIs, but also cloud deployable, wholesale versions of their offerings that run in any cloud, not just AWS.

The portion of this aspect of API operations that is capturing my attention is that individual API providers are moving to offer their APIs up via the AWS Marketplace, moving things beyond just API service providers selling their tools to the space. Most notable are the API rockstars from the space:

After these well known API providers there are a handful of other companies offering up wholesale editions of their APIs, so that potential customers can bake them into their existing infrastructure, alongside their own APIs, or possibly other 3rd party APIs.

These APIs offer a variety of services; at a quick glance I noticed location, machine learning, video editing, PDFs, healthcare, payments, SMS, and other API driven solutions. It is a pretty impressive start to what I see as the future of API discovery and deployment, as well as any other stop along the lifecycle, with all the API service providers offering their warez in the marketplace.

I'm going to set up a monitoring script to alert me of any new API focused additions to the AWS Marketplace, using, of course, the AWS Marketplace API. I've seen enough growth here to warrant the extra work, and the added monitoring channel. I'm feeling like this will grow beyond my earlier thoughts about wholesale API deployment, potentially pushing forward the API discovery conversation, and changing how we will be finding the APIs we use across our infrastructure. I will also keep an eye on Azure and Google in this area, as well as startup players like Algorithmia who are specializing in areas like machine learning and artificial intelligence.


Link Relation Types for APIs

I have been reading through a number of specifications lately, trying to get more up to speed on what standards are available for me to choose from when designing APIs. Next up on my list is Link Relation Types for Web Services, by Erik Wilde. I wanted to take this informational specification and repost it here on my site, partially because I find it easier to read that way, and partially because the process of breaking things down and publishing as posts helps me digest the specification and absorb more of what it contains.

I’m particularly interested in this one, because Erik captures what I’ve had in my head for APIs.json property types, but haven’t been able to always articulate as well as Erik does, let alone published as an official specification. I think his argument captures the challenge we face with mapping out the structure we have, and how we can balance the web with the API, making sure as much of it becomes machine readable as possible. I’ve grabbed the meat of Link Relation Types for Web Services and pasted here, so I can break down, and reference across my storytelling.


1. Introduction

One of the defining aspects of the Web is that it is possible to interact with Web resources without any prior knowledge of the specifics of the resource. Following Web Architecture by using URIs, HTTP, and media types, the Web's uniform interface allows interactions with resources without the more complex binding procedures of other approaches.

Many resources on the Web are provided as part of a set of resources that are referred to as a “Web Service” or a “Web API”. In many cases, these services or APIs are defined and managed as a whole, and it may be desirable for clients to be able to discover this service information.

Service information can be broadly separated into two categories: One category is primarily targeted for human users and often uses generic representations for human readable documents, such as HTML or PDF. The other category is structured information that follows some more formalized description model, and is primarily intended for consumption by machines, for example for tools and code libraries.

In the context of this memo, the human-oriented variant is referred to as “documentation”, and the machine-oriented variant is referred to as “description”.

These two categories are not necessarily mutually exclusive, as there are representations that have been proposed that are intended for both human consumption, and for interpretation by machine clients. In addition, a typical pattern for service documentation/description is that there is human-oriented high-level documentation that is intended to put a service in context and explain the general model, which is complemented by a machine-level description that is intended as a detailed technical description of the service. These two resources could be interlinked, but since they are intended for different audiences, it can make sense to provide entry points for both of them.

This memo places no constraints on the specific representations used for either of those two categories. It simply allows providers of a Web service to make the documentation and/or the description of their services discoverable, and defines two link relations that serve that purpose.

In addition, this memo defines a link relation that allows providers of a Web service to link to a resource that represents status information about the service. This information often represents operational information that allows service consumers to retrieve information about “service health” and related issues.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

3. Web Services

"Web Services" or "Web APIs" (sometimes also referred to as "HTTP API" or "REST API") are a way to expose information and services on the Web. Following the principles of Web architecture, they expose URI-identified resources, which are then accessed and transferred using a specific representation. Many services use representations that contain links, and often these links are typed links.

Using typed links, resources can identify relationship types to other resources. RFC 5988 [RFC5988] establishes a framework of registered link relation types, which are identified by simple strings and registered in an IANA registry. Any resource that supports typed links according to RFC 5988 can then use these identifiers to represent resource relationships on the Web without having to re-invent registered relation types.

In recent years, Web services as well as their documentation and description languages have gained popularity, due to the general popularity of the Web as a platform for providing information and services. However, the design of documentation and description languages varies with a number of factors, such as the general application domain, the preferred application data model, and the preferred approach for exposing services.

This specification allows service providers to use a unified way to link to service documentation and/or description. This link should not make any assumptions about the provided type of documentation and/or description, so that service providers can choose the ones that best fit their services and needs.

3.1. Documenting Web Services

In the context of this specification, "documentation" refers to information that is primarily intended for human consumption. Typical representations for this kind of documentation are HTML and PDF. Documentation is often structured, but the exact kind of structure depends on the structure of the service that is documented, as well as on the specific way in which the documentation authors choose to document it.

3.2. Describing Web Services

In the context of this specification, "description" refers to information that is primarily intended for machine consumption. Typical representations for this are dictated by the technology underlying the service itself, which means that in today's technology landscape, description formats exist that are based on XML, JSON, RDF, and a variety of other structured data models. Also, in each of those technologies, there may be a variety of languages that are defined to achieve the same general purpose of describing a Web service.

Descriptions are always structured, but the structuring principles depend on the nature of the described service. For example, one of the earlier service description approaches, the Web Services Description Language (WSDL), uses "operations" as its core concept, which are essentially identical to function calls, because the underlying model is based on that of the Remote Procedure Call (RPC) model. Other description languages for non-RPC approaches to services will use different structuring approaches.

3.3. Unified Documentation/Description

If service providers use an approach where there is no distinction between service documentation (Section 3.1) and service description (Section 3.2), then they may not feel the need to use two separate links. In such a case, an alternative approach is to use the "service" link relation type, which has no indication of whether it links to documentation or description, and thus may be a better fit if no such differentiation is required.

4. Link Relations for Web Services

In order to allow Web services to represent the relation of individual resources to service documentation or description, this specification introduces and registers two new link relation types.

4.1. The service-doc Link Relation Type

The "service-doc" link relation type is used to represent the fact that a resource is part of a bigger set of resources that are documented at a specific URI. The target resource is expected to provide documentation that is primarily intended for human consumption.

4.2. The service-desc Link Relation Type

The "service-desc" link relation type is used to represent the fact that a resource is part of a bigger set of resources that are described at a specific URI. The target resource is expected to provide a service description that is primarily intended for machine consumption. In many cases, it is provided in a representation that is consumed by tools, code libraries, or similar components.

5. Web Service Status Resources

Web services providing access to a set of resources often are hosted and operated in an environment for which status information may be available. This information may be as simple as confirming that a service is operational, or may provide additional information about different aspects of a service, and/or a history of status information, possibly listing incidents and their resolution.

The “status” link relation type can be used to link to such a status resource, allowing service consumers to retrieve status information about a Web service’s status. Such a link may not be available from all resources provided by a Web service, but from key resources such as a Web service’s home resource.

This memo does not restrict the representation of a status resource in any way. It may be primarily focused on human or machine consumption, or a combination of both. It may be a simple “traffic light” indicator for service health, or a more sophisticated representation conveying more detailed information such as service subsystems and/or a status history.

6. IANA Considerations

The link relation types below have been registered by IANA per Section 6.2.1 of RFC 5988 [RFC5988]:

6.1. Link Relation Type: service-doc

Relation Name: service-doc
Description: Linking to service documentation that is primarily intended for human consumption.
Reference: [[ This document ]]

6.2. Link Relation Type: service-desc

Relation Name: service-desc
Description: Linking to service description that is primarily intended for consumption by machines.
Reference: [[ This document ]]

6.3. Link Relation Type: status

Relation Name: status
Description: Linking to a resource that represents the status of a Web service or API.
Reference: [[ This document ]]
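To make these relations concrete, here is how an API response might advertise them using HTTP Link headers (hypothetical example.com URLs, with the description pointing at an OpenAPI):

Link: <https://example.com/docs>; rel="service-doc"
Link: <https://api.example.com/openapi.json>; rel="service-desc"
Link: <https://status.example.com>; rel="status"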


Adding Some Of My Own Thoughts Beyond The Specification

This specification provides a more coherent approach with service-doc and service-desc than I think we did with humanURL, and with support for multiple API definition formats (swagger, api blueprint, raml) as properties for any API. This specification provides a clear solution for human consumption, as well as one intended for consumption by machines. Another interesting link relation it provides is status, helping articulate the current state of an API.

It makes me happy to see this specification pushing forward and formalizing the conversation. I see the evolution of link relations for APIs as an important part of the API discovery and definition conversations in coming years. Processing this specification has helped jumpstart some conversation around APIs.json, as well as other specifications like JSON Home and Pivio.

Thanks for letting me build on your work Erik! - I am looking forward to contributing.


Embeddable API Tooling Discovery With JSON Home

I have been studying JSON Home, trying to understand how it sizes up to APIs.json, and other formats I’m tracking on like Pivio. JSON Home has a number of interesting features, and I thought one of their examples was also interesting, and was relevant to my API embeddable research. In this example, JSON Home was describing a widget that was putting an API to use as part of its operation.

Here is a snippet along the lines of the JSON Home example, providing the details of how it works. Since I'm paraphrasing, the version below is my own sketch of their widget example, with hypothetical URLs:
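{
  "resources": {
    "https://example.com/rels/widgets": {
      "href": "/widgets/",
      "hints": {
        "allow": ["GET", "POST"],
        "formats": {
          "application/json": {}
        },
        "docs": "https://example.com/docs/widgets"
      }
    }
  }
}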

JSON Home seems very action oriented. Everything about the format leads you towards taking some sort of API driven action, something that makes a lot of sense when it comes to widgets and other embeddables. I could see JSON Home being used as a definition for button or widget generation tooling, providing a machine readable definition for the embeddable tool, and what is possible with the API(s) behind it.

I've been working towards embeddable directories and API stacks using APIs.json, providing distributed and embeddable tooling that API providers and consumers can publish anywhere. I will be spending more time thinking about how this world of API discovery can overlap with the world of API embeddables, providing not just a directory of buttons, badges, and widgets, but one that describes what is possible when you engage with any embeddable tool. I'm beginning to see JSON Home as similar to how I see Postman Collections, something that is closer to runtime, or at least deploy time. Where APIs.json is much more about indexing, search, and discovery--maybe some detail about where the widgets are, or maybe more detail about what embeddable resources are available.


API Discovery Using JSON Home

I have finally dedicated some time to learning more about Home Documents for HTTP APIs, or simply JSON Home. I see JSON Home as a nice way to bring together the technical components for an API, very similar to what I've been trying to accomplish with APIs.json. One of the biggest differences I see is that I'd say APIs.json was born out of the world of open data and APIs, where JSON Home is born of the web (which actually makes better sense).

I think the JSON Home description captures the specifications origins very well:

The Web itself offers one way to address these issues, using links [RFC3986] to navigate between states. A link-driven application discovers relevant resources at run time, using a shared vocabulary of link relations [RFC5988] and internet media types [RFC6838] to support a “follow your nose” style of interaction - just as a Web browser does to navigate the Web.

JSON Home provides any potential client with a machine readable set of instructions it can follow, involving one, or many APIs–providing a starting page for APIs which also enables:

  • Extensibility - Because new server capabilities can be expressed as link relations, new features can be layered in without introducing a new API version; clients will discover them in the home document.
  • Evolvability - Likewise, interfaces can change gradually by introducing a new link relation and/or format while still supporting the old ones.
  • Customisation - Home documents can be tailored for the client, allowing different classes of service or different client permissions to be exposed naturally.
  • Flexible deployment - Since URLs aren’t baked into documentation, the server can choose what URLs to use for a given service.

JSON Home is a home page specification which uses JSON to provide APIs with a launching point for the interactions they offer, by providing a coherent set of links, all wrapped in a single machine readable index. Each JSON Home document begins with a handful of values:

  • title - a string value indicating the name of the API
  • links - an object value, whose member names are link relation types [RFC5988], and values are URLs [RFC3986].
  • author - a suitable URL (e.g., mailto: or https:) for the author(s) of the API
  • describedBy - a link to documentation for the API
  • license - a link to the legal terms for using the API

Once you have the general details about the JSON Home API index, you can provide a collection of resource objects, possessing links that can be indicated using an href property with a URI value, or template links which use a URI template. Just like a list of links on a home page, but instead of a browser, it can be used in any client, for a variety of different purposes.
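Pulled together, a minimal JSON Home document using these values might look something like this (my own sketch, with hypothetical URLs; note that author, describedBy, and license show up as link relations within the links object):

{
  "title": "Example API",
  "links": {
    "author": "mailto:api@example.com",
    "describedBy": "https://example.com/docs",
    "license": "https://example.com/legal"
  },
  "resources": {
    "https://example.com/rels/orders": {
      "hrefTemplate": "/orders/{id}",
      "hrefVars": {
        "id": "https://example.com/params/id"
      }
    }
  }
}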

Each of the resources allows for resource hints, which allow clients to obtain relevant information about interacting with a resource beforehand, as a means of optimizing communications, as well as sharing which behaviors will be available for an API. Here are the default hints available for JSON Home:

  • allow - Hints the HTTP methods that the current client will be able to use to interact with the resource; equivalent to the Allow HTTP response header.
  • formats - Hints the representation types that the resource makes available, using the GET method.
  • acceptPatch - Hints the PATCH [RFC5789] request formats accepted by the resource for this client; equivalent to the Accept-Patch HTTP response header.
  • acceptPost - Hints the POST request formats accepted by the resource for this client.
  • acceptPut - Hints the PUT request formats accepted by the resource for this client.
  • acceptRanges - Hints the range-specifiers available to the client for this resource; equivalent to the Accept-Ranges HTTP response header [RFC7233].
  • acceptPrefer - Hints the preferences [RFC7240] supported by the resource. Note that, as per that specification, a preference can be ignored by the server.
  • docs - Hints the location for human-readable documentation for the relation type of the resource.
  • preconditionRequired - Hints that the resource requires state-changing requests (e.g., PUT, PATCH) to include a precondition, as per [RFC7232], to avoid conflicts due to concurrent updates.
  • authSchemes - Hints that the resource requires authentication using the HTTP Authentication Framework [RFC7235].
  • status - Hints the status of the resource.

These hints provide you with a base set of the most commonly used information, but there is also an HTTP resource hint registry where all hints are registered. Hints can be added, allowing for custom defined hints, providing additional information beforehand about what can be expected from a resource link included as part of a JSON Home index. It is a much more sophisticated approach to describing the behaviors of links than we included in APIs.json, with the formal hint registry being very useful and well-defined.

I'd say that JSON Home has all the features for defining a single API, or collections of APIs, but really reflects its roots in the web, and possesses a heavy focus on enabling action with each link. While this is part of the linking structure of APIs.json, I feel like the detail and the mandate for action around each resource in a JSON Home index is much stronger. I feel like JSON Home is in the same realm as Postman Collections, but when it comes to API discovery. I always feel like a Postman Collection is more transactional than OpenAPI is by default. There is definitely overlap, but Postman Collections always feel one or two steps closer to some action being taken than OpenAPI does--I am guessing it is because of its client roots, similar to the web roots of JSON Home, and OpenAPI's roots in documentation.

Ok. Yay! I have Pivio, and now JSON Home, both loaded in my brain. I have a feel for what they are trying to accomplish, and have found some interesting layers I hadn't considered while doing my APIs.json centered API discovery work. Now I can step back and consider the features of all three of these API discovery formats, establish a rough Venn diagram of their features, and consider how they overlap and complement each other. I feel like we are moving towards an important time for API discovery, and with the growing number of APIs available we will see more investment in API discovery specifications, as well as services and tooling that help us with API discovery. I'll keep working to understand what is going on, establish at least a general understanding of each API discovery specification, and report back here about what is happening when I can.


Different Search Engines For API Discovery

I was learning about the microservices discovery specification Pivio, which is a schema for framing the conversation, but also an uploader, search, and web interface for managing a collection of microservices. I found their use of ElasticSearch as the search engine for their tooling worth thinking about more. When we first launched APIs.json, we created APIs.io as the search engine–providing a custom developed public API search engine. I hadn’t thought of using ElasticSearch as an engine for searching APIs.json treated as a JSON document.

Honestly, I have been relying on the Github API as the search engine for my API discovery. Using it to uncover not just APIs.json, but OpenAPI, API Blueprint, and other API specification formats. This works well for public discovery, but I could see ElasticSearch being a quick and dirty way to launch a private or public engine for an API discovery catalog, directory, or other type of collection. I will add ElasticSearch, and other platforms I track on as part of my API deployment research, as an API discovery building block, evolving the approaches I'm tracking on.
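As a rough sketch of what I mean, assuming a local ElasticSearch instance and the official Python client (the index name and URLs here are hypothetical), indexing and searching APIs.json documents looks something like this:

import requests
from elasticsearch import Elasticsearch

# Connect to a local ElasticSearch instance
es = Elasticsearch("http://localhost:9200")

# Pull down an APIs.json index and store it as-is, treating it as a plain JSON document
apis_json = requests.get("https://example.com/apis.json").json()
es.index(index="api-discovery", document=apis_json)

# Full text search across everything that has been indexed
results = es.search(index="api-discovery", query={"match": {"apis.description": "images"}})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["name"])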

It is easy to think of API discovery as directories like ProgrammableWeb, or marketplaces like Mashape, and public API search engines like APIs.io–someone else’s discovery vehicle, which you are allowed to drive when you need. However, when you begin to consider other types of API discovery search engines, you realize that a collection of API discovery documents like JSON Home, Pivio, and APIs.json can quickly become your own personal API discovery vehicle. I’m going to write a separate piece on how I use Github as my API discovery engine, then I think I’ll step back and look at other approaches to searching JSON or YAML documents to see if I can find any search engines that might be able to be fine tuned specifically for API discovery.



Enhancing Your API SEO

One question I'm regularly getting from my readers is how you can increase the search engine optimization (SEO) for your APIs--yes, API SEO (acronyms rule)! While we should be investing in API discoverability by embracing hypermedia early on, in its absence we should also be indexing our entire API operations with APIs.json, and making sure we describe individual APIs using OpenAPI. The world of web APIs is still very hitched to the web, making SEO very relevant when it comes to API discoverability.

While I was diving deeper into "The API Platform", a VERY forward leaning API deployment and management solution, I was pleased to see another mention of API SEO using JSON-LD (scroll down on the page). While I wish every API would adopt JSON-LD for their overall design, I feel we are going to have to piece SEO and discoverability together for our sites, as The API Platform demonstrates. They provide a nice example of how you can paste a JSON-LD script into the page of your API documentation, helping amplify some of the meaning and intent behind your API using JSON-LD + Schema.org.

I have been thinking about Schema.org’s relationship to API discovery for some time now, which is something I’m hoping to get more time to invest in further during 2017. I’d like to see Schema.org get more baked into API design, deployment, and documentation, as well as JSON-LD as part of underlying schema. To help build a bridge from where we are at, to where we need to be going, I’m going to explore how I can leverage OpenAPI tags to help autogenerate JSON-LD Schema.org tags as part of API documentation. While I’d love for everyone to just get the benefits of JSON-LD, I’m afraid many folks won’t have the bandwidth, and could use an assist from the API documentation solutions they are already using–making APIs more SEO friendly by default.

If you are starting a new API I recommend playing with “The API Platform”, as you get the benefits of Schema.org, JSON-LD, and MANY other SIGNIFICANT API concepts by default. Out of all of the API frameworks I’ve evaluated as part of my API deployment research, “The API Platform” is by far the most advanced when it comes to leading by example, and enabling healthy API design practices by default–something that will continue to bring benefits across all stops along the life cycle if you put to work in your operations.


The Open Service Broker API

Jerome Louvel from Restlet introduced me to the Open Service Broker API the other day, a project that "allows developers, ISVs, and SaaS vendors a single, simple, and elegant way to deliver services to applications running within cloud-native platforms such as Cloud Foundry, OpenShift, and Kubernetes. The project includes individuals from Fujitsu, Google, IBM, Pivotal, RedHat and SAP."

Honestly, I only have so much cognitive capacity to understand everything I come across, so I pasted the link into my super secret Slack group for API super heroes to get additional opinions. My friend James Higginbotham (@launchany) quickly responded with, “if I understand correctly, this is a standard that would be equiv to Heroku’s Add-On API? Or am I misunderstanding? The Open Service Broker API is a clean abstraction that allows ‘services’ to expose a catalog of capabilities, as well as the ability to create, use and delete those services. Sounds like add-on support to me, but I could be wrong[…]But seems very much like vendor-to-vendor. Will be interesting to track.”

At first glance, I thought it was more of an aggregation and/or discovery solution, but I think James is right. It is an API scaffolding that SaaS platforms can plug into their platforms to broker other 3rd party API services. It allows any platform to offer an environment for extending your platform like Heroku does, as James points out. It is something that adds an API discovery dimension to the concept of offering up plugins, or I guess what could be an embedded API marketplace within your platform. Opening up wholesale and private label opportunities for API providers to sell their warez directly on other people’s platforms.
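To ground this a bit: the heart of the specification is a catalog endpoint that platforms query to learn what services a broker offers. A minimal exchange, with hypothetical service names and IDs, looks something like this:

GET /v2/catalog HTTP/1.1
Host: broker.example.com
X-Broker-API-Version: 2.13

{
  "services": [
    {
      "id": "5f3b63ec-7494-4a7d-8c54-1ff67d40d1b5",
      "name": "example-image-api",
      "description": "Image resizing and processing services",
      "bindable": true,
      "plans": [
        {
          "id": "9d0dcfcf-47f7-4a5d-90cb-1b2f2e5a1d3c",
          "name": "basic",
          "description": "Up to 10,000 requests per month"
        }
      ]
    }
  ]
}

The platform then provisions an instance with a PUT to /v2/service_instances/{instance_id}, and binds it to an application, which is where the brokered credentials get handed over.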

The concept really isn't anything new. I remember developing document print plugins for Box back when I worked with the Mimeo print API in 2011. The Open Service Broker API is just looking to standardize this approach so that API providers could bake a set of 3rd party partner APIs directly into their platform. I've recently added a plugin area to my API research. I will add the Open Service Broker API as an organization within this research. I'm probably also going to add it to my API discovery research, and I'm even considering expanding it into an API marketplace section of my research. I can see add-on, plugin, marketplace, and API brokering like this growing into its own discipline, with a growing number of definitions, services, and tools to support it.


Patent US9639404: API Matchmaking Using Feature Models

Here is another patent in my series of API related patents. I'd file this in the same category as the other similar one from IBM--Patent US 8954988: Automated Assessment of Terms of Service in an API Marketplace. It is a good idea. I just don't feel it is a good patent idea.

Title: API matchmaking using feature models
Number: 09454409
Owner: International Business Machines Corporation
Abstract: Software that uses machine logic based algorithms to help determine and/or prioritize an application programming interface's (API) desirability to a user based on how closely the API's terms of service (ToS) meet the users' ToS preferences. The software performs the following steps: (i) receiving a set of API ToS feature information that includes identifying information for at least one API and respectively associated ToS features for each identified API; (ii) receiving ToS preference information that relates to ToS related preferences for a user; and (iii) evaluating a strength of a match between each respective API identified in the API ToS feature information set and the ToS preference information to yield a match value for each API identified in the API ToS feature information set. The ToS features include at least a first ToS field. At least one API includes multiple, alternative values in its first ToS field.

Honestly, I don’t have a problem with a company turning something like this into a feature, and even charging for it. I just wish IBM would help us solve the problem of making terms of service machine readable, so something like this is even possible. Could you imagine what would be possible if everybody’s terms of service were machine readable, and could be programmatically evaluated? We’d all be better off, and matchmaking services like this would become a viable service.

I just wish more of the energy I see go into these patents would be spent actually doing things in the API space. Providing low cost, innovative API services that businesses can use, instead of locking up ideas, filing them away with the government, so that they can be used at a later date in litigation and backdoor dealings.


Publishing Your API In The AWS Marketplace

I've been watching the conversation around how APIs are discovered since 2010, and I have been working to understand where things might be going beyond ProgrammableWeb, to the Mashape Marketplace, and even investing in my own API discovery format, APIs.json. It is a layer of the API space that feels very bipolar to me, with highs and lows, and a lot of meh in the middle. I do not claim to have "the solution" when it comes to API discovery and prefer just watching what is happening, and contributing where I can.

A number of interesting signals for API deployment, as well as API discovery, are coming out of the Amazon Marketplace lately. I find myself keeping a closer eye on the almost 350 API related solutions in the marketplace, and today I'm specifically taking notice of the Box API availability in the AWS Marketplace. I find this marketplace approach very interesting, not just for API discovery via an API marketplace, but also for API deployment. AWS isn't just a marketplace of APIs, where you find what you need and integrate directly with that provider. It is where you find your API(s) and then spin up an instance within your AWS infrastructure that facilitates that API integration--a significant shift.

I'm interested in the coupling between API providers and AWS. AWS and Box have entered into a partnership, but their approach provides a possible blueprint for how this approach to API integration and deployment can scale. How tightly coupled each API provider chooses to be, looser (a proxy calling the API), or tighter (deploying the API as an AMI), will vary from implementation to implementation, but the model is there. The Box AWS Marketplace instance's dependencies on the Box platform aren't evident to me, but I'm sure they can easily be quantified, and it is something I can get other API providers to articulate when publishing their API solutions to the AWS Marketplace.

AWS is moving towards earlier visions I’ve had of selling wholesale editions of an API, helping you manage the on-premise and private label API contracts for your platform, and helping you explore the economics of providing wholesale editions of your platforms, either tightly or loosely coupled with AWS infrastructure. Decompiling your API platform into small deployable units of value that can be deployed within a customer’s existing AWS infrastructure, seamlessly integrating with existing AWS services.

I like where Box is going with their AWS partnership. I like how it is pushing forward the API conversation when it comes to using AWS infrastructure, and specifically the marketplace. I’ll keep an eye on where things are going. Box seems to be making all the right moves lately by going all in on the OpenAPI Spec, and decompiling their API platform making it deployable and manageable from the cloud, but also much more modular and usable in a serverless way. Providing us all with one possible blueprint for how we handle the technology and business of our API operations in the clouds.


The APIs.json For Trade.gov

There are a growing number of API providers who have published an APIs.json for their API operations, providing a machine-readable index of not just their API, but their entire API operations. My favorite example to use in my talks and conversations when I'm showcasing the API discovery format is the one for the International Trade Administration at developer.trade.gov.

The International Trade Administration (ITA) is the government agency that "strengthens the competitiveness of U.S. industry, promotes trade and investment, and ensures fair trade through the rigorous enforcement of our trade laws and agreements". Their APIs.json provides an index of where you can find their developer portal, documentation, terms of service, as well as a machine readable OpenAPI for their trade APIs.
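To illustrate the structure, an APIs.json along these lines looks something like this (my own abbreviated sketch, not the live file, with approximated URLs):

{
  "name": "Trade.gov",
  "description": "APIs from the International Trade Administration",
  "url": "https://developer.trade.gov/apis.json",
  "apis": [
    {
      "name": "ITA Trade APIs",
      "humanURL": "https://developer.trade.gov",
      "baseURL": "https://api.trade.gov",
      "properties": [
        { "type": "x-documentation", "url": "https://developer.trade.gov/docs" },
        { "type": "x-terms-of-service-page", "url": "https://developer.trade.gov/terms" },
        { "type": "x-openapi-spec", "url": "https://developer.trade.gov/openapi.json" }
      ]
    }
  ]
}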

I couldn't think of a more shining example of APIs when it comes to talking about the API economy. I am pleased to have helped influence their API efforts, helping them see the importance of providing a machine readable index of their API operations with APIs.json, as well as their APIs using OpenAPI. If you need a well maintained, and meaningful example of how APIs.json works, head over to developer.trade.gov and take a look.


The List Of API Signals I Track On In My API Stack Research

I keep an eye on several thousand companies as part of my research into the API space, and publish over a thousand of these profiles in my API Stack project. Across the over 1,100 companies, organizations, institutions, and government agencies, I'm regularly running into a growing number of signals that tune me into what is going on with each API provider, or service provider.

Here are the almost 100 types of signals I am tuning into as I keep an eye on the world of APIs, each contributing to my unique awareness of what is going on with everything API.

  • Account Settings (x-account-settings) - Does an API provider allow me to manage the settings for my account?
  • Android SDK (x-android-sdk) - Is there an Android SDK present?
  • Angular (x-angularjs) - Is there an Angular SDK present?
  • API Explorer (x-api-explorer) - Does a provider have an interactive API explorer?
  • Application Gallery (x-application-gallery) - Is there a gallery of applications built on an API available?
  • Application Manager (x-application-manager) - Does the platform allow me to manage my applications?
  • Authentication Overview (x-authentication-overview) - Is there a page dedicated to educating users about authentication?
  • Base URL for API (x-base-url-for-api) - What is the base URL(s) for the API?
  • Base URL for Portal (x-base-url-for-portal) - What is the base URL for the developer portal?
  • Best Practices (x-best-practices) - Is there a page outlining best practices for integrating with an API?
  • Billing history (x-billing-history) - As a developer, can I get at the billing history for my API consumption?
  • Blog (x-blog) - Does the API have a blog, either at the company level, but preferably at the API and developer level as well?
  • Blog RSS Feed (x-blog-rss-feed) - Is there an RSS feed for the blog?
  • Branding page (x-branding-page) - Is there a dedicated branding page as part of API operations?
  • Buttons (x-buttons) - Are there any embeddable buttons available as part of API operations?
  • C# SDK (x-c-sharp) - Is there a C# SDK present?
  • Case Studies (x-case-studies) - Are there case studies available, showcasing implementations on top of an API?
  • Change Log (x-change-log) - Does a platform provide a change log?
  • Chrome Extension (x-chrome-extension) - Does a platform offer up open-source or white label chrome extensions?
  • Code builder (x-code-builder) - Is there some sort of code generator or builder as part of platform operations?
  • Code page (x-code-page) - Is there a dedicated code page for all the samples, libraries, and SDKs?
  • Command Line Interface (x-command-line-interface) - Is there a command line interface (CLI) alongside the API?
  • Community Supported Libraries (x-community-supported-libraries) - Is there a page or section dedicated to code that is developed by the API and developer community?
  • Compliance (x-compliance) - Is there a section dedicated to industry compliance?
  • Contact form (x-contact-form) - Is there a contact form for getting in touch?
  • Crunchbase (x-crunchbase) - Is there a Crunchbase profile for an API or its company?
  • Dedicated plans pricing page (x-dedicated-plans--pricing-page) - Is there a dedicated plans and pricing page for the API?
  • Deprecation policy (x-deprecation-policy) - Is there a page dedicated to deprecation of APIs?
  • Developer Showcase (x--developer-showcase) - Is there a page that showcases API developers?
  • Documentation (x-documentation) - Where is the documentation for an API?
  • Drupal (x-drupal) - Is there Drupal code, SDK, or modules available for an API?
  • Email (x-email) - Is an email address available for a platform?
  • Embeddable page (x-embeddable-page) - Is there a page of embeddable tools available for a platform?
  • Error response codes (x-error-response-codes) - Is there a listing or page dedicated to API error responses?
  • Events (x-events) - Is there a calendar of events related to platform operations?
  • Facebook (x-facebook) - Is there a Facebook page available for an API?
  • Faq (x-faq) - Is there an FAQ section available for the platform?
  • Forum (x-forum) - Does a provider have a forum for support and asynchronous conversations?
  • Forum rss (x-forum-rss) - If there is a forum, does it have an RSS feed?
  • Getting started (x-getting-started) - Is there a getting started page for an API?
  • Github (x-github) - Does a provider have a Github account for the API or company?
  • Glossary (x-glossary) - Is there a glossary of terms available for a platform?
  • Heroku (x-heroku) - Are there Heroku SDKs, or deployment solutions?
  • How-To Guides (x-howto-guides) - Does a provider offer how-to guides as part of operations?
  • Interactive documentation (x-interactive-documentation) - Is there interactive documentation available as part of operations?
  • iOS SDK (x-ios-sdk) - Is there an iOS SDK for Objective-C or Swift?
  • Issues (x-issues) - Is there an issue management page or repo for the platform?
  • Java SDK (x-java) - Is there a Java SDK for the platform?
  • JavaScript API (x-javascript-api) - Is there a JavaScript SDK available for a platform?
  • Joomla (x-joomla) - Is there a Joomla plugin for the platform?
  • Knowledgebase (x-knowledgebase) - Is there a knowledgebase for the platform?
  • Labs (x-labs) - Is there a labs environment for the API platform?
  • Licensing (x-licensing) - Is there licensing for the API, schema, and code involved?
  • Message Center (x-message-center) - Is there a messaging center available for developers?
  • Mobile Overview (x-mobile-overview) - Is there a section or page dedicated to mobile applications?
  • Node.js (x-nodejs) - Is there a Node.js SDK available for the API?
  • Oauth Scopes (x-oauth-scopes) - Does a provider offer details on the available OAuth scopes?
  • Openapi spec (x-openapi-spec) - Is there an OpenAPI available for the API?
  • Overview (x-overview) - Does a platform have a simple, concise description of what they do?
  • Paid support plans (x-paid-support-plans) - Are there paid support plans available for a platform?
  • Postman Collections (x-postman) - Are there any Postman Collections available?
  • Partner (x-partner) - Is there a partner program available as part of API operations?
  • Phone (x-phone) - Does a provider publish a phone number?
  • PHP SDK (x-php) - Is there a PHP SDK available for an API?
  • Privacy Policy (x-privacy-policy-page) - Does a platform have a privacy policy?
  • PubSub (x-pubsubhubbub) - Does a platform provide a PubSub feed?
  • Python SDK (x-python) - Is there a Python SDK for an API?
  • Rate Limiting (x-rate-limiting) - Does a platform provide information on API rate limiting?
  • Real Time Solutions (x-real-time-page) - Are there real-time solutions available as part of the platform?
  • Road Map (x-road-map) - Does a provider share their roadmap publicly?
  • Ruby SDK (x-ruby) - Is there a Ruby SDK available for the API?
  • Sandbox (x-sandbox) - Is there a sandbox for the platform?
  • Security (x-security) - Does a platform provide an overview of security practices?
  • Self-Service registration (x-self-service-registration) - Does a platform allow for self-service registration?
  • Service Level Agreement (x-service-level-agreement) - Is an SLA available as part of platform integration?
  • Slideshare (x-slideshare) - Does a provider publish talks on Slideshare?
  • Stack Overflow (x-stack-overflow) - Does a provider actively use Stack Overflow as part of platform operations?
  • Starter Projects (x-starter-projects) - Are there starter projects available as part of platform operations?
  • Status Dashboard (x-status-dashboard) - Is there a status dashboard available as part of API operations?
  • Status History (x-status-history) - Can you get at the history involved with API operations?
  • Status RSS (x-status-rss) - Is there an RSS feed available as part of the platform status dashboard?
  • Support Page (x-support-overview-page) - Is there a page or section dedicated to support?
  • Terms of Service (x-terms-of-service-page) - Is there a terms of service page?
  • Ticket System (x-ticket-system) - Does a platform offer a ticketing system for support?
  • Tour (x-tour) - Is a tour available to walk a developer through platforms operations?
  • Trademarks (x-trademarks) - Is there details about trademarks, and how to use them?
  • Twitter (x-twitter) - Does a platform have a Twitter account dedicated to the API or even company?
  • Videos (x-videos) - Is there a page, YouTube, or other account dedicated to videos about the API?
  • Webhooks (x-webhook) - Are there webhooks available for an API?
  • Webinars (x-webinars) - Does an API conduct webinars to support operations?
  • White papers (x-white-papers) - Does a platform provide white papers as part of operations?
  • Widgets (x-widgets) - Are there widgets available for use as part of integration?
  • Wordpress (x-wordpress) - Are there WordPress plugins or code available?

There are hundreds of other building blocks I track on as part of API operations, but this list represents the most common, that often have dedicated URLs available for exploring, and have the most significant impact on API integrations. You'll notice there is an x- representation for each one, which I use as part of APIs.json indexes for all the APIs I track on. Some of these signal types are machine readable like OpenAPIs or a blog RSS feed, with others machine readable because there is another API behind them, like Twitter or Github, but most of them are just static pages, where a human (me) can visit and stay in tune with signals.
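Within an APIs.json index, these signals show up as simple type and url property pairs on each API entry, along these lines (hypothetical URLs):

"properties": [
  { "type": "x-openapi-spec", "url": "https://example.com/openapi.json" },
  { "type": "x-blog", "url": "https://example.com/blog" },
  { "type": "x-blog-rss-feed", "url": "https://example.com/blog/rss" },
  { "type": "x-status-dashboard", "url": "https://status.example.com" },
  { "type": "x-twitter", "url": "https://twitter.com/example" },
  { "type": "x-github", "url": "https://github.com/example" }
]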

I have two primary objectives with this work: 1) identify the important signals that impact integration, and will keep me and my readers in tune with what is going on, and 2) identify the common channels, and help move the more important ones to be machine-readable, allowing us to scale the monitoring of important signals like pricing and terms of service. My API Stack research provides me with a nice listing of APIs, as well as more individualized stacks like Google, Microsoft, and Facebook, or even industry stacks like SMS, Email, and News. It also provides me with a wealth of signals we can tune into to better understand the scope and health of the API sector, and any individual business vertical that is being touched by APIs.


Expressing What An API Does As Well As What Is Possible Using OpenAPI

I am working to update my OpenAPI definitions for AWS, Google, and Microsoft using some other OpenAPIs I've discovered on Github. When a new OpenAPI has entirely new paths available, I just insert them, but when it has an existing path I have to think more critically about what is next. Sometimes I dismiss the metadata about an API path as incomplete or lower quality than what I already have. Other times the content is actually superior to mine, and I incorporate it into my work. Now I'm also finding that in some cases I want to keep my representation, as well as the one I discovered, side by side--both having value.

This is one reason I'm not 100% sold on the idea that just API providers should be crafting their own OpenAPIs--sure, the API space would be waaaaaay better if ALL API providers had machine readable OpenAPIs for all their services, but I wouldn't want it to end there. You see, API providers are good (sometimes) at defining what their API does, but they often suck at telling you what is possible--which is why they are doing APIs. I have a lot of people who push back on me creating OpenAPIs for popular APIs, telling me that API providers should be the ones doing the hard work, otherwise it doesn't matter. I'm just not sold that this is the case, and there is an opportunity for evolving the definition of an API by external entities using OpenAPI.

To help me explore this idea, and push the boundaries of how I use OpenAPI in my API storytelling, I wanted to frame this in the context of the Amazon EC2 API, which allows me to deploy a single unit of compute into the cloud using an API, a pretty fundamental component of our digital worlds. To make any call against the Amazon EC2 API, I send all my calls to a single base URL:

ec2.amazonaws.com

With this API call I pass in the "action" I'd like to be taken:

?Action=RunInstances

Along with this base action parameter, I pass in a handful of other parameters to further define things:

&ImageId=ami-60a54009&MaxCount=1&KeyName=my-key-pair&Placement.AvailabilityZone=us-east-1d
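Putting those pieces together (and leaving out the required authentication signature parameters), the full request looks something like this:

https://ec2.amazonaws.com/?Action=RunInstances&ImageId=ami-60a54009&MaxCount=1&KeyName=my-key-pair&Placement.AvailabilityZone=us-east-1d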

Amazon has never been known for superior API design, but it gets the job done. With this single API call I can launch a server in the clouds. When I was first able to do this with an API is when the light really went on in my head regarding the potential of APIs. However, back to my story on expressing what an API does, as well as what is possible using OpenAPI. AWS has done an OK job at expressing what the Amazon EC2 API does, however they suck at expressing what is possible. This is where API consumers like me step up with OpenAPI and provide some alternative representations of what is possible with this highly valuable API.

When I define the Amazon EC2 API using the OpenAPI specification I use the following:

swagger: '2.0'
info:
  title: Amazon EC2
  version: '2016-11-15' # a version is required by Swagger 2.0; this is the EC2 API version
host: ec2.amazonaws.com
paths:
  /:
    get:
      summary: The Amazon EC2 service
      operationId: ec2API
      parameters:
        - in: query
          name: Action
          type: string
          required: true

The AWS API design pattern doesn't lend itself to reuse when it comes to documentation and storytelling, but I'm always looking for an opportunity to push the boundaries, and I'm able to better outline all available actions, as individual API paths by appending the action parameter to the path:

swagger: '2.0'
info:
  title: Amazon EC2
  version: '2016-11-15'
host: ec2.amazonaws.com
paths:
  '/?Action=RunInstances':
    get:
      summary: Run a new Amazon EC2 instance
      operationId: runInstance

Now I'm able to describe all 228 actions you can take with the single Amazon EC2 API path as separate paths in any OpenAPI generated API documentation and tooling. I can give them unique summaries, descriptions, and operationIds. OpenAPI allows me to describe what is possible with an API, going well beyond what the API provider was able to define. I've been using this approach to better quantify the surface area of APIs like Amazon, Flickr, and others who use this pattern for a while now, but as I was looking to update my work, I wanted to take this concept even further.

While appending query parameters to the path definition has allowed me to expand how I describe the surface area of an API using OpenAPI, I'd rather keep these parameters defined properly using the OpenAPI specification, and define an alternative way to make the path unique. To do this, I am exploring the usage of #bookmarks, to help make duplicate API paths more unique in the eyes of the schema validators, but invisible to the server side of things--something like this:

swagger: '2.0'
info:
  title: Amazon EC2
  version: '2016-11-15'
host: ec2.amazonaws.com
paths:
  '/#RunInstance':
    get:
      summary: Run a new Amazon EC2 instance
      operationId: runInstance
      parameters:
        - in: query
          name: Action
          type: string
          default: RunInstances

I am considering how we can further make the path unique by predefining other parameters using default or enum values:

swagger: '2.0'
info:
  title: Amazon EC2
  version: '1.0.0'
host: ec2.amazonaws.com
paths:
  '/#RunWebSiteInstance':
    get:
      summary: Run a new Amazon EC2 website instance
      description: The ability to launch a new website running on its own Amazon EC2 instance, from a predefined AWS AMI.
      operationId: runWebServerInstance
      parameters:
        - in: query
          name: Action
          type: string
          default: RunInstances
        - in: query
          name: ImageId
          type: string
          default: ami-60a54009

I am still drawing within the lines of what the API provider has given me, but I'm now augmenting it with a better summary and description of what is possible using OpenAPI, which can then be reflected in documentation and other tooling that is OpenAPI compliant. I can even prepopulate the default values, or available options using enum settings, tailoring each path to my team, company, or other specific needs. This takes an existing API definition beyond its provider's interpretation of what it does, and gets to work on being more creative around what is possible.
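
For example, here is a hypothetical fragment constraining the ImageId parameter to a short list of approved AMIs using enum--the second AMI identifier is a made up placeholder:

      parameters:
        - in: query
          name: ImageId
          type: string
          enum: # only these AMIs would validate, and would show as options in generated docs
            - ami-60a54009
            - ami-1a2b3c4d
          default: ami-60a54009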

Let me know how incoherent this is. I can't tell sometimes. Maybe I need more examples of this in action. I feel like it might be a big piece of the puzzle that has been missing for me regarding how we tell stories about what is possible with APIs. When it comes to API definitions, documentation, and discovery I feel like we are chained to a provider's definition of what is possible, when in reality this shouldn't be what drives the conversation. There should be definitions, documentation, and discovery documents created by API providers that help articulate what an API does, but more importantly, there should be a wealth of definitions, documentation, and discovery documents created by API consumers that help articulate what is possible. 


Thinking About Schema.org's Relationship To API Discovery

I was following the discussion around adding a WebAPI class to Schema.org's core vocabulary, and it got me thinking more about the role Schema.org has to play with not just our API definitions, but also in significantly influencing API discovery. Meaning that we should be using Schema.org as part of our OpenAPI definitions, providing us with a common vocabulary for communicating around our APIs, but also empowering the discovery of APIs.

When I describe the relationship of Schema.org to API discovery, I'm talking about using the pending WebAPI class, but I'm also talking about using common Schema.org types within API definitions--something that will open up the definitions to discovery because they employ a common schema. I am also talking about how we leverage this vocabulary in our HTML pages, helping search engines like Google understand there is an API service available.
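
Here is a minimal sketch of what that JSON-LD markup might look like, using the pending WebAPI class--the API name, URLs, and provider are hypothetical, and the exact property names will depend on where the Schema.org proposal lands:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebAPI",
  "name": "Example Blog API",
  "description": "A simple API for managing blog posts.",
  "documentation": "https://developer.example.com/docs",
  "provider": {
    "@type": "Organization",
    "name": "Example, Inc."
  }
}
</script>

Dropped into any HTML page, a snippet like this gives search engine crawlers a machine readable signal that an API service exists, without changing anything a human visitor sees.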

I will also be exploring how I can better leverage Schema.org in my APIs.json format, applying a common vocabulary to describing API operations, not just an individual API. I'm looking to expand the opportunities for discovery, not limit them. I would love all APIs to take a page from the hypermedia playbook, and have a machine readable index for each API, with a set of links present with each response, but I also want folks to learn about APIs through Google, ensuring they are indexed in a way that search engines can comprehend.

When it comes to API discovery I am primarily invested in APIs.json (because it's my baby) to describe API operations, and OpenAPI to describe the surface area of an API, but I also want this to map to the very SEO driven world we operate in right now. I will keep investing time in helping folks use Schema.org in their API definitions (APIs.json & OpenAPI), but I will also start investing in folks employing JSON-LD and Schema.org as part of their search engine strategies (like above), making our APIs more discoverable to humans as well as other systems.


Mapping Github Topics To My API Evangelist Research

I was playing around with the new Github topics, and found that it provides an interesting look at the API space, one that I'm hoping will continue to evolve, and maybe I can influence.

I typed 'api-' into Github's topic tagging tool for my repository, and after I tagged each of my research areas with the appropriate tags, I set out exploring these layers of Github by clicking on each one--something that became quite a wormhole of API exploration.

I had to put it down, as I could spend hours looking through the repositories, but I wanted to create a machine-readable mapping of these topics to my existing API research areas that I could use to regularly keep an eye on these slices of the Github pie--in an automated way. A sketch of that mapping follows below.
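
Something like this hypothetical YAML fragment is what I have in mind--the keys mirror the research areas discussed below, and the topic names are illustrative examples of what is showing up on Github:

research:
  definitions:       # Github topics mapped to my API definitions research
    - openapi
    - swagger
    - api-blueprint
    - raml
  design:
    - api-design
  deployment:
    - serverless
    - api-gateway
  management:
    - api-management
  documentation:
    - api-documentation
    - api-console
  sdk:
    - sdk
  portal:
    - api-portal
  discovery:
    - api-discovery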

Definitions - These are the topics I'm adding to my monitoring of the API space when it comes to API definitions. I thought it was interesting how folks are using Github to manage their API definitions.

I like how OpenAPI is starting to branch out into separate areas, as well as how this area touches on almost every other area listed here. I am going to work to help shape the tags present based on the definitions, templates, and tooling I find on Github in my work.

Design - There was only one API design related item, but it is something I expect to expand rapidly as I dive into this area further.

I know of a number of projects that should be tagged and added to the area of API design, as well as have a number of sub-areas I'd like to see included as relevant API design tags.

Deployment - Deployment was a little tougher to get a handle on. There are many different ways to deploy an API, but these are the ones I've identified so far.

I know I will be adding other topics to this area quickly, tracking on database, containerized, and serverless approaches to API deployment.

Management - There were two topics that jumped out to me for inclusion in my API management research.

As with all the other areas, I will be harassing some of the common API management providers I know to tag their repositories appropriately, so that they show up in these searches.

Documentation - There are always a number of different perspectives on what constitutes API documentation, but these are a few of these I've found so far.

I think that API console overlaps with API clients, but it works here as well. I will work to find a way to separate out the documentation tools, from the documentation implementations.

SDK - It is hard to identify what is an SDK. It is a sector of the space where I've seen renewed innovation, as well as a bending of the definition of what a development kit is.

I will be looking to identify language-specific variations as part of this mapping to API SDKs available on Github, making them discoverable through topic searches.

API Portal - It was good to see wicked.haufe.io as part of an API portal topic search. I know of a couple of other implementations that should be present, helping people see this growing area of API deployment and management.

This approach to providing Github driven API templates is the future of both the technical and business side of API operations. It is the seed for continuous integration across all stops along the API lifecycle.

API Discovery - Currently it is just my research in the API discovery topic search, but it is where I'm putting this area of my work down. I was going to add all my research areas, but I think that will make for a good story in the future.

API discovery is one of the areas I'm looking to stimulate with this Github topics work. I'm going to be publishing separate repositories for each of the APIs I've profiled as part of my monitoring of the API space, and highlighting those providers who do it as well. We need more API providers to publish their API definitions to Github, making them available to be applied at every other stop along the API lifecycle.

I've long used Github as a discovery tool. Tracking on the Github accounts of companies, organizations, institutions, agencies, and individuals is the best way to find the meaningful things going on with APIs. Github topics just adds another dimension to this discovery process, where I don't always have to do the discovery myself--other people can tag their repositories, and they'll float up on the radar. Github repo activity, stars, and forks add yet another dimension to this conversation.

I will have to figure out how to harass people I know about properly tagging their repos. I may even submit a Github issue for some of the ones I think are important enough. Maybe Github will allow users to tag other people's projects, adding another dimension to the conversation, while giving consumers a voice as well. I will update the YAML mapping for this project as I find new Github topics that should be mapped to my existing API research.


Discovering New APIs Through Security Alerts

I tune into a number of different channels looking for signs of individuals, companies, organizations, institutions, and government agencies doing APIs. I find APIs using Google Alerts, monitoring Twitter and Github, and via press releases and patent filings. Another way I am learning to discover APIs is via alerts and notifications about security events.

An example of this can be found via the Industrial Control Systems Cyber Emergency Response Team out of the U.S. Department of Homeland Security (@icscert), with the recently issued advisory ICSA-16-287-01, OSIsoft PI Web API 2015 R2 Service Account Permissions Vulnerability, on the ICS-CERT website, leading me to the OSIsoft website. They aren't very forthcoming with their API operations, but this is something I am used to, and in my experience, companies who aren't very public with their operations tend to also cultivate an environment where security issues go unnoticed.

I am looking to aggregate API related security events and vulnerabilities like the feed coming out of Homeland Security. This information needs to be shared more often, opening up further discussion around API security issues, and even possibly providing an API for sharing real-time updates and news. I wish more companies, organizations, institutions, and government agencies would be more public with their API operations and be more honest about the dangers of providing access to data, content, and algorithms via HTTP, but until this is the norm, I'll continue using API related security alerts and notifications to find new APIs operating online.


What Is APIs.json? And What Is Next For the API Discovery Format?

As part of a renewed focus on the API discovery definition format APIs.json, I wanted to revisit the proposed machine readable API discovery specification, and see what is going on. First, what is APIs.json? It is a machine readable JSON specification that anyone can use to define their API operations. APIs.json does not describe your APIs like OpenAPI Spec and API Blueprint do--it describes your surrounding API operations, with entries that can reference your OpenAPI Spec, API Blueprint, or any other format that you desire.

APIs.json Is An Index For API Operations
APIs.json provides a machine readable way for API providers to describe their API operations, similar to how website providers describe their sites using sitemap.xml. Here are the API providers who are describing their operations using APIs.json (a minimal example of what such a file looks like follows the list):

APIStrat Austin API
API Evangelist
Acuity Scheduling
BreezoMeter
CheckMarket
Clarify
Data Validation
DNS Check
Email Hunter
FeedbackHub
Fitbit
Gavagai
Kin Lane
Link Creation Studio
OneMusicAPI
Pandorabots API
Qalendra
RiteTag
Singlewire
SiteCapt
Social Searcher API
Super Monitoring
Timekit
Trade.gov
Twitch Bot Directory
EnClout
frAPI
Section.io
Spoonacular
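
To make this more concrete, here is a stripped down sketch of the kind of APIs.json file that would live in the root of a provider's domain--the API, URLs, and maintainer are all hypothetical, and the property types shown are common conventions, so treat it as illustrative rather than authoritative:

{
  "name": "Example API Operations",
  "description": "An index of the APIs and supporting resources for example.com.",
  "url": "http://example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [
    {
      "name": "Example Blog API",
      "description": "Create, read, update, and delete blog posts.",
      "humanURL": "http://developer.example.com",
      "baseURL": "http://api.example.com",
      "properties": [
        { "type": "Swagger", "url": "http://example.com/swagger.json" },
        { "type": "X-documentation", "url": "http://developer.example.com/docs" },
        { "type": "X-pricing", "url": "http://developer.example.com/pricing" }
      ]
    }
  ],
  "maintainers": [
    { "FN": "Jane Doe", "email": "jane@example.com" }
  ]
}

The properties collection is where everything hangs--both the human focused links like documentation and pricing, and the machine readable definitions like OpenAPI.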

APIs.json Indexes Can Be Created By 3rd Parties
One important thing to add is that these APIs.json files can also be crafted and published by external parties. An example of this is with the Trade.gov APIs. I originally created that APIs.json file, and coordinated with them to eventually get it published under their own domain, making it an authoritative APIs.json file. Many APIs.json files will be born outside of the API operations they describe, something you can see in my API Stack project:

  • The API Stack - Provides almost 1000 APIs.json files that describe the API operations of many leading public API platforms. There are also around 300 OpenAPI specifications for some of the platforms described.

APIs.json Can Be Used To Describe API Collections
Beyond describing a single API within a single domain, APIs.json can also be used to describe entire collections of APIs, providing a machine readable way to organize and share valuable collections of API resources. Here are a few examples of projects that are producing APIs.json driven collections.

APIs.json Can Be Used To Describe Collections of Collections
Then taking things another rung up the chain, APIs.json can also provide a collection of collections, something I do with my own APIs. Each Github organization on my network has a master APIs.json, providing include links to all other APIs.json files within the organization. In this scenario I have over 30 other APIs.json files indexed, which can all operate independently of each other, but can also be considered a collection of API collections.

  • Master - A master collection of API collections I maintain as part of the API Evangelist network operations.

The First Open Source Tooling For APIs.json
Up until now, this post is all about APIs.json, when in reality the format is useless without there being any tooling built on top of the specification, bringing value to the table. This is why the 3Scale team got to work building an open source APIs.json driven search engine:

  • APIs.io as an open source tool dedicated to APIs.json
  • APIs.io as a public API search engine, with APIs.json as index.
  • APIs.io as a private API search engine, with APIs.json as index.

APIs.json Driving Other Open Tooling
APIs.io is just the beginning. It won't be enough to convince all API providers that they should be producing an APIs.json index of their operations, just for the API discovery boost. We are going to need APIs.json driven tooling that will service every other stop along the life cycle, including:

  • HTTP Client / Hub / Workbenches
  • Documentation
  • Testing
  • Monitoring
  • Virtualization
  • Visualization

APIs.json Integrated Into Existing Platforms
What areas would you like to see served? Personally, I would like to have the ability to load / unload my APIs.json collections into any service that I use, allowing me to organize the internal, public, and 3rd party APIs I depend on within any platform out there that is servicing the API space. Here are a handful of those types of integrations that are already happening:

APIs.json Linking To The Human Aspects Of API Operations
APIs.json is just the scaffolding to hang links to the essential aspects of your operations on--it doesn't care what you link to. You can start by referencing essential links for your API operations like:

  • Signup - How to signup for a service.
  • Support - Where to get support. 
  • Terms of Service - Where are the terms of service.
  • Pricing - Where to find the pricing for a service.

APIs.json Linking to Machine Readable Aspects of API Operations
These do not have to be machine readable links; they can reference the important things humans will need first. However, the ultimate goal is to make as much of the APIs.json index as machine readable as possible, using a variety of existing API definition formats, available for a variety of purposes.

Defining New, Machine Readable Property Elements For APIs.json
While the APIs.json spec will evolve, something I talk about below, its real strength lies in its ability to incentivize the development of entirely new, machine readable API definitions, bringing even more value to the API discovery process. Here are a few of the additional specs being crafted independent of, but inspired by APIs.json:

  • API Plans, for pricing, plans & rate limits.
  • API Monitoring, for monitoring & testing.
  • API Changelog, for operational monitoring.
  • API SDK, for SDK reference.
  • API Conversations, for the stream around API operations.

Roadmap for Version 0.16 of APIs.json
That is the 100K view of what APIs.json is now, and the short term plan for the future. Most of the change within the universe APIs.json is mapping will occur at the individual API level, and within the machine readable specs that describe them, like OpenAPI Spec, API Blueprint, and Postman. Secondarily, there will be additional machine readable API types being defined and added into the spec.

Even with this reality, we do have a handful of changes planned for the 0.16 version of APIs.json:

  • commons - Establish a top level collection of common property elements that apply to ALL APIs being referenced in an APIs.json
  • country - Adding a top level country reference using ISO 3166.
  • New Property Elements - Suggesting a handful of new property elements to reference common API operation building blocks
    • Registration
    • Blog
    • Github
    • Twitter

I doubt we will see many new additions like commons and country. In the future, most of the structural changes to APIs.json will be derived from first class property elements (i.e. adding documentation or Github), making this the proving ground for defining what are truly the most important aspects of API operations, and what should be machine readable vs human readable.
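
To illustrate, here is a purely speculative sketch of how the commons and country additions might look in a 0.16 file--none of this is finalized, so treat the element names and structure as placeholders:

{
  "name": "Example API Operations",
  "specificationVersion": "0.16",
  "country": "US",
  "commons": [
    { "type": "TermsOfService", "url": "http://example.com/tos" },
    { "type": "X-pricing", "url": "http://example.com/pricing" }
  ]
}

The idea is that anything listed in commons would apply to every API referenced in the file, instead of being repeated in each API's own properties collection.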

The Hard Work That Lies Ahead for APIs.json
That concludes defining what APIs.json is, and what is next for APIs.json. Now we really have to get to work, doing the heavy lifting around:

  • Getting more API providers to describe their API operations using APIs.json, and publish it in the root of the domain for their API ecosystem.
  • Encourage more API evangelists, brokers & analysts to describe their collections using APIs.json, building more meaningful indexes and directories of high value APIs.
  • Encourage platforms to build APIs.json into their operations, as a storage and organization schema, but also as an import / export format.
  • Incentivize the development of more meaningful tooling that employs APIs.json, and uses it to better serve the API life cycle.
  • Continue to add new API property elements, making sure as many of them as possible evolve to be machine readable, as well as first class citizens in the APIs.json specification.

You can stay involved with what we are up to via the APIs.json website, and the APIs.json Github repository. You can also stay in tune with what is going on with APIs.io via its website, and its Github repository. If you are doing something with APIs.json, ranging from using it as an index for your API operations, to platform integrations, please let me know. Also, if you envision some interesting tooling you'd like to see happen, make sure and submit a Github issue letting us know.

While we still have huge amounts of work to do when it comes to delivering meaningful API discovery solutions that the industry can put to work, I am pretty stoked with what we have managed to do over the last two years of work on the APIs.json specification, and supporting tooling--momentum that I feel is picking up in 2016.


Solution Discovery Instead of API Discovery Via API Aggregation and Reciprocity Providers

During my API discovery session talk at @APIStrat Austin this last November, I talked about what I see as an added dimension to the concept of API discovery, one that will become increasingly important when it comes to actually moving things forward--discovering solutions that are API driven, versus API discovery, where a developer is looking for an API.

It might not seem that significant to developers, but SaaS services like Zapier, DataFire, and API hubs like Cloud Elements, bring this critical new dimension to how people actually will find your APIs. As nice as ProgrammableWeb has been for the last 10 years, we have to get more sophisticated about how we get our APIs in front of would-be consumers. We just can't depend on everyone who will put our API to work, immediately thinking that they need an API--most likely they are just going to need a solution to their problem, and secondarily need to understand there is an API driving things behind the scenes.

One of many examples of this in the wild could be in the area of tech support for your operations. Maybe you use Jira currently, because this is what your development team uses, but with your latest release you need something a little more public facing. When you are exploring what is possible with API reciprocity services like Zapier, and API hubs like Cloud Elements, you get introduced to other API driven solutions like Zendesk, or Desk.com from SalesForce.

This is just one example of how APIs can make an impact on the average business user, and will be the way API discovery happens in the future. In this scenario, I didn't set out looking for an API, but because I use API enabled service providers, I am introduced to other alternative solutions that might also help me tackle the problem I have. I may never have even known SalesForce had a help desk solution if I wasn't already exploring the solutions Cloud Elements brings to the table.

As an API provider, you need to make sure your APIs are available via the growing number of API aggregation and reciprocity providers, and make sure the solutions they bring to the table are easily discoverable. You need to think beyond the classic developer focused version of API discovery, and make sure and think about API driven solution discovery meant for the average business or individual user.

Disclosure: Cloud Elements is an API Evangelist partner.


Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information

My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I published APIs.json indexes for the almost 1000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission with the project so far has been to create an open source, machine readable repo for the API space.

I have had two recent occurrences that are pushing me to expand on my API Stack work. First, I have other entities who want to contribute monitoring data and other elements I would like to see collected, but haven't had time for. The other is that I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes, so that others can access them.

Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially make monitoring, testing, and performance data available.

Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo for some of the top APIs out there, and start playing with the best way to store the JSON and other files, providing an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked on, let me know, and I'll add it to the road map.

My goal is to go for quality and completeness of the data there, before I look to scale and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.


An Overview Of API Discovery From @APIStrat

I am delivering my API discovery talk from @APIStrat in Austin tomorrow AM. It will be via a Google Hangout, beginning at 8:00 AM PST. Jim Laredo of IBM's API Harmony, Jerome Louvel of Restlet, and Nicolas Grenie of 3Scale and APIs.io will be coming together for the hangout, with Natalie Kerns of Cloud Elements helping moderate again. 

To prepare for the hangout, I wanted to revisit my talk--a perfect opportunity to build off the momentum from the event, and share the story on API Evangelist. You can find the slide deck from my original talk on my talks.kinlane.com project site, and I will post the video from the Google Hangout here on this blog when we are done tomorrow.

 

Other Emerging Solutions

  • API Discovery via Relationships
  • Discovery Via Integrated Development Environments (IDE)
  • API Discovery At Client Runtime
  • What APIs Exist Publicly?
  • What APIs Exist Privately?
  • What APIs Should Exist?
  • How Are APIs Being Used?
  • Focus On Finding Just The Right API
  • The Aggregation, Integration, and Interoperability of APIs
  • Meaningful Processes and Reciprocity Via APIs
  • Discovery Goes Well Beyond Just Developers Finding APIs


Providing APIs.json As A Discovery Media Type For Every One Of My API Endpoints

It can be easy to stumble across the base URL for one of my APIs out on the open Internet. I design my APIs to be easily distributed, shared, and as accessible as possible, based upon what I feel the needs for the resource might be. You can find most of my APIs as part of my master stack, but there are other APIs, like my screen capture API, or my image manipulation API, that are often orphaned--and I know some people could use help identifying more of the resources behind my API operations.

To help support discovery across my network of APIs, I'm going to be supporting requests for the application/apis+json media type on each endpoint, as well as providing an apis.json file in the root of each API and its supporting portal. An example of this in action can be seen with my blog API, where you can look in the root of the portal for the API (kin-lane.github.io/blog/apis.json), and in the root of the base URL for the API (blog.api.kinlane.com/apis.json), and for each individual endpoint, like the (blog.api.kinlane.com/blog/) endpoint, you can request the application/apis+json media type and get a view of the APIs.json discovery file.
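
Here is a rough sketch of what that negotiation looks like on the wire, using the blog endpoint--the Accept header is the standard way for a client to ask for an alternate representation, and the response body would be the APIs.json discovery document for that API (truncated here):

GET /blog/ HTTP/1.1
Host: blog.api.kinlane.com
Accept: application/apis+json

HTTP/1.1 200 OK
Content-Type: application/apis+json

{ "name": "Blog API", "apis": [ ... ] }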

It will take me a while to get this rolled out across all of my APIs, but I have worked out the details with my blog and API APIs. Providing discovery at the portal, API, and endpoint level just works. It provides not just access to documentation, but also the other critical aspects of API operations, in a machine readable way, wherever you need it. It is nice to be on the road to having APIs.json exist as the media type (application/apis+json), something that isn't formal yet, but we are getting much closer with the latest release, and planned releases.

Next, I will push this out across all my APIs, and do another story to capture what things look like at that point. Hopefully it is something I can encourage others to do eventually, making API discovery a little more ubiquitous across API operations.


My API Discovery Research

I am giving each of my primary API research sites a refresh, and first up is the home page of my API discovery research. As I update each home page, I'm going to publish here on API Evangelist to help bring more awareness to each of the main areas I'm studying.

This is one of my API research sites, focused specifically on API discovery. My name is Kin Lane, and I am the API Evangelist, working as hard as I can to understand the world of Application Programming Interfaces, widely called APIs. This network of API research projects all runs on Github, and is my real-time workbench, which means there is a lot of finished work present, but occasionally you will also come across projects that are unfinished. You have stumbled into my API discovery research--you will find the main API Evangelist site over here, with links to the rest of my work.

This site is where I publish news that I have read, stories I've published, companies I've profiled, and valuable tools I've stumbled across while researching API discovery. Finding APIs, and having your APIs found, is a pretty significant pain point, and in the last ten years little has been done to provide adequate solutions. API discovery is an area I've been monitoring, and ultimately, unsatisfied with what I've seen, I've taken matters into my own hands and created APIs.json, a machine readable API discovery format any API provider can use to describe their APIs.

There are several API management companies, including Apigee, Mashery, and Mashape, who have put forth API discovery solutions, and while these offerings are valuable, I feel they lack the openness necessary to truly move the API space, and the API discovery conversation, forward. APIs.json was created to help make sense of the API space, in partnership with 3Scale API management infrastructure CEO Steve Willmott (@njyx). Together we are working to define as much of the public API space as possible using APIs.json, and encourage the development of open tooling around the format, with the first major addition being an open source API search engine called APIs.io.

Next I am working with other providers like Socrata, WSO2, and others to develop additional tooling, to serve all levels of the API sector. I do not feel API discovery is something that simply happens via an API directory like ProgrammableWeb, or even via API search engines like APIs.io--in the future, API discovery will also occur in the browser, via our IDEs, and seamlessly within modern clients. In 2015, you'll see many more APIs.json driven efforts to help alleviate API discovery pain, and we hope the format, and supporting open tooling, will stimulate other complementary, or even competing efforts. API discovery is long overdue for some investment by the community, helping us get past some of our PTSD from the SOA era--we can do it!

All my research is openly licensed CC-BY, and is meant to help grow the awareness around healthy API discovery practices. I try to be as fair as I can when covering companies, individuals, and the tools they provide, but ultimately you will notice I have my favorites, and there are some areas I only touch on lightly, for a variety of personal reasons. I try to stay as neutral as I can when it comes to technological dogma, and company allegiance, but after almost five years, I have some pretty strong opinions, and can’t help but try and steer, and influence things in my own unique way. ;-)


Using APIs.json For My Microservice Navigation And Discovery

I’m rebuilding my underlying architecture using microservices and docker containers, and the glue I’m using to bind it all together is APIs.json. I’m not just using APIs.json to deliver on discoverability for all of my services, I am also using it to navigate around my stack. Right now I only have about 10 microservices running, but I have a plan to add almost 50 in total by the time I’m done with this latest sprint.

Each microservice lives as its own Github repository, within a specific organization. I give each one its own APIs.json, indexing all the APIs of that specific microservice. APIs.json has two main collections, "apis" and "include". For each microservice's APIs.json, I list all the properties for its API, and I use the include element to document the URLs of the other microservice APIs.json files in the collection.

All the Github repositories for this microservice stack live within a single Github organization, which I give a "master" repo, acting as a single landing page for the entire stack. It has its own APIs.json file, but rather than having any API collections, it just uses includes, referencing the APIs.json for each microservice in the stack--a minimal sketch of this follows below.
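
Here is a stripped down, hypothetical sketch of what such a master APIs.json can look like--the organization and service names are made up, and the apis collection is left empty since the master repo has no APIs of its own:

{
  "name": "Master Microservice Stack",
  "description": "An index of all the microservices in this stack.",
  "specificationVersion": "0.14",
  "apis": [],
  "include": [
    { "name": "Blog Microservice", "url": "http://example.github.io/blog/apis.json" },
    { "name": "Image Microservice", "url": "http://example.github.io/image/apis.json" }
  ]
}

Each included file is itself a complete APIs.json, so any client that understands the format can crawl from the master index down to every service in the stack.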

APIs.json acts as an index for each microservice, but through the include collection it also provides links to other related microservices within its own stack, which I use to navigate, in a circular way, between all supporting services. All of that sounds very dizzying to write out, and I’m sure you are like WTF? You can browse my work on Github, some of it is public, but much of it you have to have oAuth access to see. The public elements all live in the gh-pages branch, while the private aspects live within the private master branch.

This is all a living workbench for me, so expect broken things. If you have questions, or would like more access to better understand, let me know. I’m happy to consider adding you to the Github organization as collaborator so that you can see more of it in action. I will also chronicle my work here on the blog, as I have time, and have anything interesting things to share.


Do You Know That Hypermedia Is A Better Solution For Discovery Than APIs.json?

I spend a lot of time fielding questions from people about APIs.json. This is something I expect to be doing for the next 10 years, and I'm happy to field questions about exactly what it is all about, and help educate folks about exactly where APIs.json fits into the overall API landscape.

A regular comment I get from technologists, and API savvy folks, is "you know that hypermedia is a better solution for discovery than APIs.json?" To which I reply, "yes I know, but hypermedia is a solution for the API world we want, and APIs.json is a solution for the world we have". I would love it if everyone understood hypermedia, and designed and deployed their APIs with this knowledge in mind—something I've been spending a lot more time on in 2014, and will continue to push in 2015.

In the meantime, I intend to continue stitching together the thousands of APIs available out there using APIs.json, allowing them to be discovered via open source public search engines like APIs.io, private internal search engines, and IDE solutions like Codenvy. We don’t just need discovery solutions to find new APIs, we need ways to build collections of high value APIs, something APIs.json excels at.

Another thing that APIs.json does, that I don’t feel hypermedia provides a solution for, is the discovery of the other technical, business, and political building blocks, like the documentation, code libraries, SDKs, terms of service, and other critical elements of API operations. This is an area APIs.json was specifically defined for, allowing the discovery of these vital areas, in addition to the actual API interfaces.

As I do with many of my posts, this one is crafted to be a cookie cutter response to these comments I get regularly about hypermedia vs. APIs.json, allowing me to just reply with a link to this story on my blog. Hopefully we can shift the tide of API deployments in favor of hypermedia, but for the APIs where we can’t, APIs.json is a healthy alternative, and I even recommend using it alongside hypermedia to provide easy access to the other critical building blocks of API operations that technologists often overlook.


API Discovery Continues Its Move Into The IDE With Eclipse Che

Another layer to the release of Codenvy this week was the announcement of the Eclipse project Che, an open source "project to create a platform for SAAS developer environment that contains all of the tools, infrastructure, and processes necessary for a developer to edit, build, test, and debug an application”. Che represents the next generation IDE that runs in the cloud, which coincides with other signs I've seen of API discovery moving into the IDE, with signals from API pioneers like SalesForce and Google, or from Microsoft in Visual Studio.

I’m still learning about Che, but I’m beginning to see two distinct ways Che and APIs can be employed. First, let's start with the environment:

You can build extensions for Che, and when those extensions are compiled into the kernel, Che creates server-side microservices, a RESTful API, and cross-browser optimized JavaScript, which is then pluggable into a browser IDE. With Che, developers are given the pluggable structure of Eclipse, but accessible through a cloud environment.

This means you can orchestrate development workflows with APIs. You can predefine, deploy, customize, maintain, and orchestrate a very modular development environment, where everything can be controlled via API. What a perfect environment for orchestrating the next generation of application development.

Next I see APIs leading the development lifecycle as well, not just defining the development environment and process. Earlier stories I’ve done on API discovery via SDKs showcase SalesForce and Google providing native discovery of their APIs directly in Eclipse. In an Eclipse Che driven development environment, you could define pre-built, or custom, API stacks, bringing exactly the API resources developers will need and baking them directly into their IDE—meaning APIs find the developers, developers don’t have to find their own APIs.

This approach to API discovery via the IDE provides some interesting opportunities for marrying with earlier thoughts I’ve had around being an API broker. I can envision a future where API evangelism evolves to a point where you don’t just represent one API, you represent many APIs, and configure API fueled developer environments for building any type of application. Think of what Backend as a Service (BaaS) providers like Parse and Kinvey have been doing since 2011, but now think of pre-configured, or custom tailored, IDE environments with exactly the resources you need.

I’m just getting started with Che, and Codenvy, so it will take me a while to work through my thoughts on using it as an API brokerage platform. The one thing you can count on me for, is that I will tell stories all along the way as I figure it out.

Disclosure: API Evangelist is an advisor to Codenvy.


An API Evangelism Strategy To Map The Global Family Tree

In my work every day as the API Evangelist, I get to have some very interesting conversations, with a wide variety of folks, about how they are using APIs, as well as brainstorming other ways they can approach their API strategy, allowing them to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.

I’m so lucky I get to speak with all of these folks about the benefits, and perils, of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak to such a diverse number of implementations, push my understanding of what is possible with APIs, while also sharpening my critical eye, and my understanding of where APIs can help, or where they can possibly go wrong. Personally, I find a couple of things very intriguing about the Family Search API story:

  1. Mapping the world's genealogical history using a publicly available API — Going Big!!
  2. Potential from participation by not just big partners, but the long tail of genealogical geeks
  3. Transparency, openness, and collaboration shining through as the solution beyond just the technology
  4. The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
  5. They have an existing developer area, APIs, and the seemingly necessary building blocks, but have failed to achieve platform level

I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story, which turned into more than an hour of discussion with the Family Search team. See, Family Search already has an API, they have the technology in order, and they even have many of the essential business building blocks as well, but where they are falling short is when it comes to dialing in both the business and politics of their developer ecosystem to discover the right balance that will help them truly become a platform—which is my specialty. ;-)

This brings us to the million dollar question: How does one become a platform?

All of this makes Family Search an interesting API story. Given the scope of the API, to take something this big to the next level, Family Search has to become a platform, and not a superficial “platform” where they are just catering to three partners, but one nourishing a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. Family Search is at an important inflection point--they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up, and take things to the next level.

First, let’s quantify the company. What is FamilySearch? “For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide”, believing that “learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future”.

FamilySearch has 1.2 billion total records, with 108 million completed so far in 2014, 24 million awaiting, as well as 386 active genealogical projects going on. Family Search provides the ability to manage photos, stories, documents, people, and albums—allowing people to be organized into a tree, knowing the branch everyone belongs to in the global family tree.

FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894, and is dedicated to preserving the records of the family of mankind, looking to "help people connect with their ancestors through easy access to historical records”. FamilySearch is a mission-driven, non-profit organization, run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving concern for others—making for a pretty unique formula for an API driven platform, one that doesn’t quite have a model anywhere else.

FamilySearch considers what they deliver as a set of record custodian services:

  • Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
  • Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
  • Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
  • Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
  • Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.

FamilySearch provides a proven set of services that users can take advantage of via web applications, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond their basic web and mobile application services, elevated to the software as a service (SaaS) level by having a pretty robust developer center and API stack.

Developer Center
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.

FamilySearch provides access to over 100 API resources in twenty separate groups: Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies, connecting you to the core FamilySearch genealogical engine.

The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:

To support developers, FamilySearch provides a fairly standard support setup:

To augment support efforts there are also some other interesting building blocks:

Setting the stage for FamilySearch evolving to being a platform, they even posses some necessary partner level building blocks:

There is even an application gallery showcasing what web, mac & windows desktop, and mobile applications developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.

Many of the ingredients of a platform exist within the current FamilySearch developer hub, at least the technical elements, and some of the common business, and political building blocks of a platform, but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist—that all of this API stuff only works when the right blend of technical, business, and politics exists.

Establishing A Rich Partnership Environment

FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn’t exist without the partnerships they’ve established, but how do you take it to the next level, and grow into a much larger, organic, API driven ecosystem where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?

FamilySearch knows the time has come to make a shift to being an open platform, but is not entirely sure what needs to happen to actually stimulate not just the core FamilySearch partners, but also establish a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need, it is a hub where business development occurs at all levels, in both synchronous, and asynchronous ways, in a 24/7 global environment.

FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:

  • “Platform” means their core set of large partners
  • Not treating all partners like first class citizens
  • Competing with some of their partners
  • Don’t use their own API, creating a gap in perspective

FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service to a genealogical platform. If they do this they can scale beyond what they’ve been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees, while also contributing to the larger global tree. With a proper API driven platform this process doesn’t have to occur via the FamilySearch website and mobile app--it can happen in any web, desktop, or mobile application anywhere.

FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API--they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.

A Good Dose Of API Evangelism To Shift Culture A Bit

A healthy API evangelism strategy brings together a mix of business, marketing, sales, and technology disciplines into a new approach to doing business for FamilySearch--something that, if done right, can open up FamilySearch to outside ideas, and with the right framework, allow the platform to move beyond just certification and partnering, to the investment in, and acquisition of, data, content, talent, applications, and partners via the FamilySearch developer platform.

Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.

First, let's kick off with setting some objectives for why we are doing this--what are we trying to accomplish?

  • Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing to the larger goals of mapping the global family tree.
  • Growth in New Users - Growing the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
  • Growth In Active Apps - Increase not just new users, but the number of actual apps being built and used--not just counting people kicking the tires.
  • Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to work. Educate about new features, increase adoption.
  • Brand Awareness - One of the top reasons for designing, deploying, and managing an active API is to increase awareness of the FamilySearch brand.
  • What else?

What does developer engagement look like for the FamilySearch platform?

  • Active User Engagement - How do we reach out to existing, active users to find out what they need? How do we profile them, and continue to understand who they are and what they need? Is there a direct line to the CRM?
  • Fresh Engagement - How is FamilySearch contacting new developers who have registered weekly to see what their immediate needs are, while their registration is fresh in their minds.
  • Historical Engagement - How are historical active and / or inactive developers being engaged to better understand what their needs are and would make them active or increase activity.
  • Social Engagement - Is FamilySearch profiling the URL, Twitter, Facebook, LinkedIn, and Github profiles of developers, and then actively engaging via these active channels?

Establish a Developer Focused Blog For Storytelling

  • Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples and Github code libraries should be taking place, establishing a large volume of exhaust via the FamilySearch platform.
  • Stories - FamilySearch is great at writing public, and partner facing content, but there is a need to be writing, editing and posting of stories derived from the technically focused projects, with SEO and API support by design.
  • Syndication - Syndicate the best of the content to Tumblr, Blogger, Medium, and other relevant blogging sites on a regular basis.

Mapping Out The Genealogy Landscape

  • Competition Monitoring - Evaluation of regular activity of competitors via their blog, Twitter, Github and beyond.
  • Alpha Players - Who are the vocal people in the genealogy space with active Twitter, blog, and Github accounts?
  • Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
  • Social - Mapping the social landscape for genealogy, who is who, and who should the platform be working with.
  • Keywords - Establish a list of keywords to use when searching for topics at search engines, Q&A sites, forums, social bookmarking, and social networks. (This should already be done by marketing folks.)
  • Cities & Regions - Target specific markets in cities that make sense for the evangelism strategy. What are the local tech meetups, organizations, schools, and other gatherings? Who are the tech ambassadors for FamilySearch in these spaces?

Adding To Feedback Loop From Forum Operations

  • Stories - Deriving stories for the blog from forum activity, and the actual needs of developers.
  • FAQ Feed - Is this being updated regularly with stuff?
  • Streams - Are there other streams giving the platform a heartbeat?

Being Social About Platform Code and Operations With Github

  • Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team under a team umbrella as part of it.
  • Github Relationships - Managing of followers, forks, downloads and other potential relationships via Github, which has grown beyond just code, and is social.
  • Github Repositories - Managing of code sample Gists, official code libraries and any samples, starter kits or other code samples generated through projects.

Adding To The Feedback Loop From The Bigger FAQ Picture

  • Quora - Regular trolling of Quora and responding to relevant FamilySearch or industry related questions.
  • Stack Exchange - Regular trolling of Stack Exchange / Stack Overflow and responding to relevant FamilySearch or industry related questions.
  • FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for referencing locally.

Leverage Social Engagement And Bring In Developers Too

  • Facebook - Consider setting up a new API specific Facebook page. Posting of all API evangelism activities and management of friends.
  • Google Plus - Consider setting up a new API specific Google+ page. Posting of all API evangelism activities and management of friends.
  • LinkedIn - Consider setting up a new API specific LinkedIn profile page that will follow developers and other relevant users for engagement. Posting of all API evangelism activities.
  • Twitter - Consider setting up a new API specific Twitter account. Tweeting of all API evangelism activity and relevant industry landscape activity, discovering new followers and engaging with followers.

Sharing Bookmarks With the Social Space

  • Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
  • Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
  • Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.

Communicate Where The Roadmap Is Going

  • Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
  • Changelog - Make sure the change log always reflects the roadmap communication or there could be backlash.

Establish A Presence At Events

  • Conferences - What are the top conferences occurring that we can participate in or attend? Pay attention to calls for papers of relevant industry events.
  • Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
  • Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
  • Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
  • Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?

Measuring All Platform Efforts

  • Activity By Group - Summary and highlights from weekly activity within each area of the API evangelism strategy.
  • New Registrations - Historical and weekly accounting of new developer registrations across APIs.
  • Volume of Calls - Historical and weekly accounting of API calls per API.
  • Number of Apps - How many applications are there?

Essential Internal Evangelism Activities

  • Storytelling - Telling stories about an API isn’t just something you do externally; what stories need to be told internally to make sure an API initiative is successful?
  • Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons, to get people involved.
  • Participation - It is very healthy to include other people from across the company in API operations. How can we include people from other teams in API evangelism efforts? Bring them to events and conferences, and potentially expose them to local, platform focused events.
  • Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smoothly. What reports can we produce? Make them meaningful.

All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we bring it home to the most important aspect of platform evangelism: the internal outreach. A lack of internal evangelism is the number one reason APIs fail--failing to educate top and mid-level management, as well as lower level staff, to get buy-in and direct hands-on involvement with the platform, and to justify budget costs for the resources needed to make a platform successful.

Top-Down Change At FamilySearch

The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lockstep with actual platform operations. When regular projects developed via the FamilySearch platform are showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening via the FamilySearch platform. This will provide a much more ongoing, deeper message for the rest of the company, and partners, around what the priorities of the platform are, making it not just a meaningless top down mandate.

An example of this in action is the recent mandate from President Obama that all federal agencies should go “machine readable by default”, which includes using APIs and open data outputs like JSON, instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but in reality has little effect on the ground at federal agencies. It has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a reality at over 20 federal agencies.

Top down change is a piece of the overall platform evolution at FamilySearch, but is only a piece. Without proper bottom-up, and outside-in change, FamilySearch will never evolve beyond just being a genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.

Bottom-Up Change At FamilySearch

One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change of culture brought when APIs move beyond just a technical IT effort, and become about making resources available across an organization, and enabling people to do their job better. Without an awareness, buy-in, and in some cases evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.

If a company as a whole is unaware of APIs, both within the organization and out in the larger world of popular platforms like Twitter, Instagram, and others, it is extremely unlikely it will endorse, let alone participate in, moving from being a digital service to a platform. Employees need to see the benefits of a platform to their everyday job, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making this happen--even if it originates from a top-down mandate.

Top bookseller Amazon was already on the path to being a platform with its set of commerce APIs when, after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted and exchanged resources using web APIs, resulting in one of the most successful API platforms--Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn’t a meaningless top-down mandate; it made employees’ lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in, and execution from Amazon employees, what we know as the cloud would never have occurred.

Change at large enterprises, organizations, institutions, and agencies can be expedited with the right top-down leadership, and with the right platform evangelism strategy--one that includes internal stakeholders not just as targets of outreach efforts, but as participants in operations--it can result in sweeping, transformational changes. This type of change at a single organization can affect how an entire industry operates, similar to what we’ve seen from the ultimate API platform pioneer, Amazon.

Outside-In Change At FamilySearch

The final layer of change that needs to occur to bring FamilySearch from being just a service to a true platform, is opening up the channels to outside influence when it comes not just to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, and expectation of the process, and adherence to the mission is strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.

FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond the notion that the best ideas come from inside the organization, or just from the leading partners in the space. There are opportunities for innovation and transformation in the long tail, but you have to have a platform set up to encourage, participate in, and identify value in the long-tail stream of an API platform.

Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider Twitter today was built in the platform ecosystem, from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance, regarding how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Listening to outside ideas goes well beyond the fact that Twitter is a publicly available social platform. With such a large ecosystem of API developers it is impossible to let in all ideas, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.

Having a public facing platform doesn’t mean the flood gates are open for ideas and thoughts to just flow in; this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies, and individuals can be established--enabling a healthy, sustainable amount of change from the outside world.

Knowing All Of Your Platform Partners

The hallmark of any mature online platform is a well established partner ecosystem. If you’ve made the transition from service to platform, you’ve established a pretty robust approach not just to certifying and onboarding your partners, but also to knowing and understanding who they are, what their needs are, and investing in them throughout the lifecycle.

First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don’t be pushy, but understand who they are and what they might be looking for, and make sure you have a well defined track for this type of user.

Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and see examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.

Keep your friends close, and your competition closer. Open the door wide for your competition to become platform users, and potentially partners. 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform; after it launched, they realized their competitors were now their customers, and partners in this new approach to doing business online for JCI.

When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency's developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, and a huge benefit for the private sector, but in some cases opening up APIs, or limiting access to APIs, could help or hurt the larger economy, as well as the Department of Energy developer ecosystem--there are lots of considerations when opening up API resources, and they vary from industry to industry.

There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who your platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform, if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.

Meaningful Monetization For Platform

All of this will take money to make happen. Designing and executing on the technical and evangelism aspects I’m laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform and its participants make ends meet?

This conversation is a hard one for startups and established businesses, let alone a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper but are still a significant platform cost; and sustaining sales, bizdev, and evangelism will not be cheap either. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.

The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of. These are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks, with sometimes small fees for usage, but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets; as with physical assets, there will be much debate in coming years around what is acceptable, and what is not.

Woven into the tiers of partner access, there should always be provisions for applying costs, overhead, and even the generation of a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and other hard facts of delivering a scalable platform around any valuable digital assets, whether they are privately or publicly held.

Platform monetization doesn’t end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web, or mobile application will take discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring to the table some monetization opportunities for consumers, providing guidance as part of the certification process regarding best practices for monetization, and even some direct opportunities for advertising, in-app purchases, and other common approaches to application monetization and sustainment.

Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Pro-active steps need to be taken to ensure both the platform provider and its consumers are monetized in the healthiest way possible, bringing as much benefit to the overall platform community as possible.

Open & Transparent Operations & Communications

How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform, contributing to the feedback loop that will be necessary to take the service to the next level.

On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand to become a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, including blogs, forums, social streams, images, and video content. These communication channels cannot all be one way; they need to include forum and social conversations, as well as showcase platform activity by API consumers.

Platform communication isn’t just about getting direct messages answered. It is about public conversation, so everyone shares in the answer, and public storytelling to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding, and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything consumers will need, while also bringing a fresh flow of ideas and insight within the FamilySearch firewall.

Truly Shifting The FamilySearch Culture

Top-down, bottom-up, and outside-in, with a constant flow of oxygen via a vibrant feedback loop, and the nourishing, sanitizing sunlight of platform transparency--this is how, week by week and month by month, change can occur. It won’t all be good; there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.

One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I email to them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.

When internal staff, large external partners, and long tail genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider that a shift to platform has occurred for FamilySearch. The real question is how do we get there?

Executing On Evangelism

This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community--one that includes internal, partner, and public developers--and some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.

  • Defining Current Platform - Where are we now? In detail.
  • Mapping the Landscape - What does the world of genealogy look like?
  • Identifying Projects - What are the existing projects being developed via the platform?
  • Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
    • Projects
    • Storytelling
    • Syndication
    • Social
    • Channels
      • External Public
      • External Partner
      • Internal Stakeholder
      • Internal Company-Wide
  • Identify Resources - What resources currently exist? What is needed?
    • Evangelist
    • Content / Storytelling
    • Development
  • Execute - What does execution of an API evangelist strategy look like?
  • Iterate - What does iteration look like for an API evangelism strategy?
    • Weekly
    • Review
    • Repeat

As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle and execute in 12-18 months?

  • Invest In Evangelist Resources - It takes a team of evangelists to build a platform
    • External Facing
    • Partner Facing
    • Internal Facing
  • Development Resources - We need to step up the number of resources available for platform integration.
    • Code Samples & SDKs
    • Embeddable Tools
  • Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
    • Short Form (Blog)
    • Long Form (White Paper & Case Study)
  • Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
    • Meetups
    • Hackathons
    • Conferences

There is nothing easy about this. It takes time and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said, it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.

While as the API Evangelist I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, as well as regular check-ins with the team on how things are going, and help plan the roadmap. The two things I bring to the table, reflected in this proposal, are an understanding of where the FamilySearch API effort currently is, and of what is missing to help get FamilySearch to the next stage of its platform evolution.

When operating within the corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategies, and miss important pieces of how you need to shift your own. This is one of the biggest inhibitors of API efforts at large organizations, and one of the biggest imperatives for companies to invest in their API strategy, and begin the process of breaking operations out of their silo.

What FamilySearch is facing demonstrates that APIs are much more than the technical endpoint most believe them to be; it takes many other business and political building blocks to truly go from API to platform.


Low Hanging Fruit For API Discovery In The Federal Government

I looked through 77 of the developer areas for federal agencies, resulting in reviewing approximately 190 APIs. While the presentation of 95% of the federal government developer portals is crap, it makes me happy that about 120 of the 190 APIs (over 60%) are actually consumable web APIs that didn't make me hold my nose and run out of the API area.

Of the 190, only 13 actually made me happy for one reason or another.

Don't get me wrong, there are other nice implementations in there. I like the simplicity and consistency in APIs coming out of GSA and SBA, but overall federal APIs reflect what I see a lot in the private sector: some developer makes a decent API, but the follow-through and launch severely lack what it takes to make the API successful. People wonder why nobody uses their APIs? Hmmmm....

A little minimalist simplicity in a developer portal, a simple explanation of what an API does, interactive documentation w/ Swagger, code libraries, and terms of service (TOS) would go a looooooooooooong way in making sure these government resources were found, and put to use.

Ok, so where the hell do I start? Let's look through these 123 APIs and see where the real low hanging fruit is for demonstrating the potential of APIs.json, when it comes to API discovery in the federal government.

Let's start with the White House (http://www.whitehouse.gov/developers):

Only one API made it out of the USDA:

Department of Commerce (http://www.commerce.gov/developer):

  • Census Bureau API - http://www.census.gov/developers/ - Yes, a real developer area with supporting building blocks (Updates, News, App Gallery, Forum, Mailing List). Really could use interactive documentation though. There are URLs, but not active calls. Would be way easier if you could play with the data before committing. (B)
  • Severe Weather Data Inventory - http://www.ncdc.noaa.gov/swdiws/ - Fairly basic interface; wouldn’t take much to turn into a modern web API. Right now it's just a text file, with spec style documentation explaining what to do. Looks high value. (B)
  • National Climatic Data Center Climate Data Online Web Services - http://www.ncdc.noaa.gov/cdo-web/webservices/v2 - Oh yeah, now we are talking. That is an API. No interactive docs, but nice clean ones; it would be some work, but could be done. (A)
  • Environmental Research Division's Data Access Program - http://coastwatch.pfeg.noaa.gov/erddap/rest.html - Looks like a decent web API. Wouldn’t be too much to generate a machine readable definition and make it into a better API area. (B)
  • Space Physics Interactive Data Resource Web Services - http://spidr.ngdc.noaa.gov/spidr/docs/SPIDR.REST.WSGuide.en.pdf - Well, it's a PDF, but looks like a decent web API. It would be some work, but could turn into a decent API with Swagger specs. (B)
  • Center for Operational Oceanographic Products and Services - http://tidesandcurrents.noaa.gov/api/ - Fairly straightforward, simple API. Wouldn’t be hard to generate interactive docs for it. Spec needed. (B)

Arlington Cemetery:

Department of Education:

  • Department of Education - http://www.ed.gov/developers - Lots of high value datasets. Says API, but is JSON file. Wouldn’t be hard to generate APIs for it all and make machine readable definitions. (B)

Energy:

  • Energy Information Administration - http://www.eia.gov/developer/ - Nice web API, simple clean presentation. Needs interactive docs. (B)
  • National Renewable Energy Laboratory - http://developer.nrel.gov/ - Close to a modern Developer area with web APIs. Uses standardized access (umbrella). Some of them have Swagger specs, the rest would be easy to create. (A)
  • Office of Scientific and Technical Information - http://www.osti.gov/XMLServices - Interfaces are pretty well designed, and Swagger specs would be straightforward. But docs are all PDF currently. (B)

Department of Health and Human Services (http://www.hhs.gov/developer):

Food and Drug Administration (http://open.fda.gov):

Department of Homeland Security (http://www.dhs.gov/developer):

Two loose cannons:

Department of Interior (http://www.doi.gov/developer):

Department of Justice (http://www.justice.gov/developer):

Labor:

  • Department of Labor - http://developer.dol.gov/ - I love their developer area. They have a great API, easy to generate API definitions. (A)
  • Bureau of Labor Statistics - http://www.bls.gov/developers/ - Web APIs in there. Complex, and lots of work, but can be done. API Definitions Needed. (B)

Department of State (http://www.state.gov/developer):

Department of Transportation (http://www.dot.gov/developer):

Department of the Treasury (http://www.treasury.gov/developer):

Veterans Affairs (http://www.va.gov/developer):

Consumer Financial Protection Bureau:

Federal Communications Commission (http://www.fcc.gov/developers):

Lone bank:

  • Federal Reserve Bank of St. Louis - http://api.stlouisfed.org/ - Good API and area, would be easy to generate API definitions. (B)

General Services Administration (http://www.gsa.gov/developers/):

National Aeronautics and Space Administration http://open.nasa.gov/developer:

Couple more loose cannons:

Recovery Accountability and Transparency Board (http://www.recovery.gov/arra/FAQ/Developer/Pages/default.aspx):

Small Business Administration (http://www.sba.gov/about-sba/sba_performance/sba_data_store/web_service_api):

Last but not least.

That is a lot of potentially valuable API resources to consume. From my perspective, I think that what has come out of GSA, SBA, and the White House Petition API represent probably the simplest, most consistent, and highest value targets for me. Next, maybe the wealth of APIs out of Interior and FDA. After that I'll cherry pick from the list, and see which are easiest.

I'm looking to create a Swagger definition for each of these APIs, and publish them as a Github repository, allowing people to play with each API. If I have to, I'll create a proxy for each one, because CORS is not common across the federal government. I'm hoping to not spend too much time on proxies, because once I get in there I always want to improve the interface, and evolve a facade for each API, and I don't have that much time on my hands.


Applying APIs.json To API Discovery In The Federal Government

I recently updated my APIs.json files for all my API Evangelist network domains to use version 0.14, which is getting pretty close to a stable version. While I await APIs.io being updated to use this version, I wanted to spend some time publishing APIs.json files, but this time across federal government APIs.

The thing I like most about APIs.json is that you can do one for anybody else’s APIs. In the case of our federal government, I don't anticipate any agency getting on board with APIs.json anytime soon, but I can do it for them! There are a lot of APIs in the federal government, so where do I get started?

To help me understand the scope of API discovery in our federal government, I looked through 77 developer portals, outlined by 18F. While browsing these developer portals for federal government agencies, I looked at almost 190 APIs--with the goal of identifying the low hanging fruit when it came to API discovery across hundreds of government APIs.

Out of the 190 APIs, around 120 of them were actual web APIs that I felt I could work with. I settled on a handful of APIs out of the GSA, hosted at www.usa.gov and explore.data.gov, and got to work creating APIs.json files for their APIs.

Before I could generate an APIs.json at each of the two domains (www.usa.gov and explore.data.gov), I needed machine readable API definitions for the four APIs. I purposely picked federal agency APIs that were REST(ful), and were something I could easily generate a Swagger definition for.

The federal agency domain API at explore.data.gov was pretty easy, only taking me a few minutes to handcraft a Swagger definition. Then I moved on to the Federal Agency Directory API at www.usa.gov, and I was happy to see there was already a Swagger definition for the API. After that I tackled the Social Media Registry API and Mobile App Gallery API, both of which I had to handcraft a Swagger definition for. The Mobile App Gallery API has a CORS issue, but I'm moving on and will set up a proxy to handle it later.
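
For anyone who hasn't handcrafted one, a Swagger definition of this era is just a small JSON file. Here is a minimal sketch in the Swagger 1.2 style--the resource path, operation, and parameter shown are hypothetical, not pulled from any of the four APIs above:

    {
      "swaggerVersion": "1.2",
      "apiVersion": "1.0",
      "basePath": "http://api.example.gov",
      "resourcePath": "/agencies",
      "apis": [
        {
          "path": "/agencies",
          "operations": [
            {
              "method": "GET",
              "nickname": "listAgencies",
              "summary": "Hypothetical operation returning a list of agencies.",
              "parameters": [
                {
                  "name": "keyword",
                  "paramType": "query",
                  "type": "string",
                  "required": false,
                  "description": "Hypothetical filter parameter."
                }
              ]
            }
          ]
        }
      ]
    }

Multiply that by a few endpoints and you have a machine readable definition that interactive documentation, code generation, and APIs.json can all hang off of.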

Now I have four machine readable API definitions for some pretty valuable government APIs, and I got to work creating the APIs.json files that would act as a directory for these API resources. APIs.json files are designed to go into the root domain of any API provider, and the four GSA APIs I selected run under two separate domains (www.usa.gov and explore.data.gov), so I needed two separate APIs.json files.
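
To give a sense of what one of these looks like, here is a minimal sketch of the www.usa.gov index--the structure follows the 0.14-era format as I understand it, but the property URLs shown are placeholders rather than the live locations:

    {
      "name": "USA.gov",
      "description": "APIs.json index for the GSA APIs running under www.usa.gov.",
      "url": "http://www.usa.gov/apis.json",
      "specificationVersion": "0.14",
      "tags": ["government", "GSA"],
      "apis": [
        {
          "name": "Federal Agency Directory API",
          "description": "Directory of United States federal agencies.",
          "humanURL": "http://www.usa.gov/developer",
          "baseURL": "http://www.usa.gov/api",
          "tags": ["directory", "agencies"],
          "properties": [
            {
              "type": "Swagger",
              "url": "http://www.usa.gov/developer/swagger.json"
            }
          ]
        }
      ]
    }

The explore.data.gov file works the same way, just with its own API entries.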

When APIs.io gets updated to the latest version of APIs.json, I will submit both of these APIs.json files for indexing. Even though I’m not the owner of these domains, I can still submit them for inclusion in the API search engine. It would be better if the GSA published the APIs.json and Swagger API definitions at the actual domains, and submitted them themselves--the listings would then show as being authoritative, and hold more weight in API searches.

I still have about 115 more federal government APIs to create machine readable API definitions for, and the resulting APIs.json files that will enable discovery of these APIs--this isn’t something that will happen overnight, and will take a shit-ton of work.

My goal is to help harden the APIs.json format, while making APIs in the federal government more accessible, and part of the larger API discovery conversation that is going on in the private sector. One of the powerful features of APIs.json is that external actors can craft APIs.json collections from the outside. You don’t have to be the owner of a domain where an API is hosted to publish an APIs.json on its behalf--I think this represents the potential of the public sector and private sector working together.

APIs.json files are meant to work as collections of APIs, or virtual stacks of valuable API resources. Sometimes these virtual stacks are defined internally, within a domain, and sometimes they are mashups of multiple API resources, across numerous domains, by outside actors or API curators. I will keep curating federal government APIs, generating machine readable API definitions and APIs.json files for their supporting domains--helping define this fun new world of API discovery.


The Power In API Discovery For APIs.json Will Be In The API URL Type

An APIs.json file lives in the root of any domain, or subdomain, and provides references to a collection of API resources. APIs.json is meant to be a lightweight framework, where someone can build a collection of APIs, give it a name, description, and some tags, and the API collection points you where you need to go to get more information about those APIs.

For each API, you can define a list of URLs, each with a defining “type”, letting you know what to expect when you visit the URL. Right now, most of those URLs are just for humans, pointing to the developer portal, documentation, and terms of service (TOS). We are adding other API URL types to the next version of APIs.json, like code samples and application galleries, that API search engines like APIs.io can expose in their search interfaces.

These human API URL types provide references that API search engines can use to guide human users who are searching for APIs. However, the real power of APIs.json comes in when an API URL type references a machine readable source, like a Swagger definition or an API Commons manifest. When it comes to API discovery, we need as many meaningful locations as we can point human API consumers to, but also machine readable locations that will help make API discovery much more rich, automated, and precise.
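
As a rough sketch of how the two kinds of URL types sit side by side in a single API entry (the type labels and URLs here are illustrative, not a definitive list):

    {
      "name": "Example API",
      "humanURL": "http://developer.example.com",
      "baseURL": "http://api.example.com",
      "properties": [
        { "type": "X-Documentation", "url": "http://developer.example.com/docs" },
        { "type": "X-TermsOfService", "url": "http://developer.example.com/tos" },
        { "type": "Swagger", "url": "http://api.example.com/swagger.json" },
        { "type": "API Commons", "url": "http://api.example.com/api-commons-manifest.json" }
      ]
    }

The first two types point a human somewhere to read; the last two give a machine something to parse, which is what opens the door to the richer searches described below.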

Imagine when I can do more than just search names, descriptions, and tags by keyword, which is how APIs.io works currently. Imagine when you can specify that you only want APIs that are in the API Commons, and openly licensed. Imagine when I can search for APIs that allow me to use all my HTTP verbs, not just GET. Now, go even further into the future, when I can search for APIs that have a specific allowance in their terms of service, with a machine readable TOS type for APIs.json.

This is where I want to take APIs.json, and ultimately API discovery. Machine readable API definitions like API Blueprint, RAML, and Swagger are the very first API URL type that helps automate API discovery, and the API Commons manifest is the latest. My goal is to tackle terms of service, pricing, and other critical aspects of API integration, and push forward a machine readable definition, that can ultimately be baked into the API discovery process--in all API search engines.

What is important to me is that the API URL types in each APIs.json remain independent, and only loosely coupled with the APIs.json format, using a simple label and URL. I’m not directly invested in the evolution of each Swagger version; however, I am with API Commons, and potentially other future APIs.json API URL type definitions. I want anyone to be able to step up and suggest API URL types for the APIs.json spec, making APIs.json URL types a community driven API discovery layer for the API economy.


Expanding The Layer Of API Discovery From Within The Developers IDE

Much like API design and integration, the world of API discovery is heating up in 2014. We are moving beyond the API directory as our primary mode of API search, in favor of a distributed approach using APIs.json, and supporting open source search engines like APIs.io. Another area of API discovery I’ve been watching for a while, and predict will become an important layer of API discovery, will be via the Integrated Development Environment (IDE) plugin.

Open Source SalesForce API IDE Plugin
SalesForce just announced they have open sourced their API IDE plugin on Github, after developing on it since 2007, when APEX was born. The plugin is old, but is very much in use in the SalesForce ecosystem, something I’ve written about before. They will be accepting pull requests on the main branch, looking to improve on the codebase, while also maintaining a community branch, as well as encouraging you to establish your own branch.

Does Your API Have An IDE Plugin?
How far along are you on your own API's Eclipse plugin? Are you trying to reach enterprise developers with your API resources? You should probably look at the pros and cons of providing your API developers with a plugin for leading IDEs. With the open sourcing of the SalesForce API IDE plugin, you can reverse engineer their approach and see what you can use for your own API's IDE plugin--smells like a good opportunity to me.

Opportunity For General Or Niche API IDE Plugins
Not that the SalesForce open source IDE plugin would be the place to start for this kind of project, but I think there is a huge opportunity to develop API focused IDE plugins for top development environments, across many popular APIs. Developers shouldn’t have to leave their development environments to find the resources they need; they should be able to have quick access to the APIs they depend on the most, and discover new API resources right from their local environment, making IDE plugins an excellent API discovery opportunity.

Native Opportunities For IDE Platforms
I’ve seen a lot of new development environments emerge, many of them web-based, with varying degrees of being “integrated”. I think that IDE developers can take a lead from Backend as a Service (BaaS) providers and build in the ability to define an integrated stack of API resources, right into a developer's web, mobile, or IoT development environment. If you are building a platform for developers to produce code, you should begin baking API discovery and integration directly into your environment.

All I do as the API Evangelist is shed light on what API pioneers like SalesForce are up to, and expand on their ideas using my knowledge of the industry--resulting in these stories. SalesForce has been doing APIs for 14 years now, and the IDE has been part of their API driven ecosystem for the last seven years. I think their move to open source the technology is an opportunity for the wider API space to run with, by helping improve the community SalesForce API IDE plugin, but also by applying their experience and legacy code to help evolve and improve on this layer of API discovery, available within the IDE.


Contributing To The Discovery Lifecycle

One of the newest incentives for API providers to develop machine readable API definitions is API discovery. After many years with no search solution for APIs beyond the standard API directory, we are just now beginning to see a new generation of API discovery tooling and services.

Together with 3Scale, I recently launched the APIs.json discovery format, looking to create a single API discovery framework that can live within the root of any domain. Our goal is to allow API providers to describe their APIs, providing machine readable pointers to the common building blocks of a company's API like signup, machine readable definitions, documentation, terms of service, and much, much more.

As we were developing APIs.json, we recognized that without a proper, distributed search engine, any machine readable API discovery formats would not be successful, and with this in mind 3Scale launched an open source API search engine, with the first implementation being APIs.io. As the number of APIs rapidly grows, more search solutions like APIs.io will be needed to make sure the most valuable APIs are discoverable, in real-time.

The future of API discovery will need more than just basic metadata to find APIs; we will need machine readable definitions that describe the API, as well as its supporting building blocks. API definitions will help automate the discovery and understanding of what value an API delivers, helping API consumers find just the API resources they need to make their applications successful.


Multiple Types of APIs.json For Discovery

I’m working through thoughts around a suggestion for future versions of the APIs.json API discovery format, and as I do with other things I’m trying to make sense of, I wanted to write a blog post on API Evangelist. If you aren't familiar with the format, APIs.json is meant to be a machine readable JSON file that provides an overview and listing of the APIs available within a specific domain.

Authoritative APIs.json
This is an APIs.json that is made available in the root of a domain, providing detail on APIs that are managed within the same domain. This use case is for API providers to list the APIs that they offer publicly.

Tribute APIs.json
There is an API you use, and want to see it indexed in an API search engine like APIs.io—so you create a tribute APIs.json. This APIs.json index is not done by the owner of the API, but by a fan, or outside contributor. Tributes will weave together the world of APIs, when providers do not have the time.

Facade APIs.json
There is an API you use, but it doesn’t have exactly the interface you would want. Some resourceful API architects will be building facades to existing, external API resources. In the API economy you do not settle if an API isn't exactly what you need. The remixable nature of APIs allows for extending them by designing facades that transform and extend existing API resources.

Cache APIs.json
If I learned anything working in the federal government last year, it is that APIs can go away at any point. In the future there will be a need for cached versions of existing APIs, providing redundancy and access to some important data and content.

Aggregate APIs.json
In fast growing areas of the API economy, we are seeing API aggregation trends, with sectors like social, media, cloud, financial, analytics, and other areas that have matured, where users are depending on potentially multiple API platforms.
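
A minimal sketch of what an aggregate collection might look like, assuming the include mechanism for referencing other APIs.json files, and using purely illustrative domains:

    {
      "name": "Social API Stack",
      "description": "An aggregate collection of social platform APIs, curated across multiple domains.",
      "specificationVersion": "0.14",
      "tags": ["social", "aggregate"],
      "apis": [],
      "include": [
        { "name": "Twitter", "url": "https://twitter.com/apis.json" },
        { "name": "Facebook", "url": "https://facebook.com/apis.json" }
      ]
    }

The same pattern could back tribute, cache, or derived collections--each is just a different editorial stance on whose APIs get listed.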

Derived APIs.json
I envision derived APIs.json as just an evolution of the tribute or facade, stating that I started with a certain API design, but have evolved it beyond what it once was. Not acknowledging where we got our API patterns is a darker side of the API space that needs to go away--let’s be honest about where we learned our patterns and give a nod to these sources.

In the API economy, I think there will be multiple types of APIs.json that are deployed. As APIs proliferate, if the industry focuses on interoperability and reuse, there will be more types of APIs.json than just the single API provider's listing, and APIs.json will help us keep tabs on all of this.

Not all API providers will have the time, desire, or access to resources to publish their own APIs.json, and augmentation, replication, and other possible derivatives will emerge to organically expand on existing patterns.

Right now we are working on just stabilizing the latest release of APIs.json, so that people can get to work on publishing their own APIs.json files. My goal with these thoughts is just to explore what is possible, and maybe, if successful, some of these thoughts will be incorporated into future versions.


Solving The Problem Of API Discovery

API discovery has not changed much since 2005, when John Musser launched ProgrammableWeb, the API directory we've all come to know and love. In 2014 (9 years later), we have Mashape and a handful of other API directory and discovery tools, but we have not made progress on truly being able to discover the best APIs possible, in a distributed, machine-readable way.

Steve Willmott of 3Scale, and Kin Lane of API Evangelist, are at it again, looking to provide a possible solution, that we are calling APIs.json—a machine readable listing of your APIs and supporting building blocks that lives in the root of your domain.

The objective of APIs.json is to help fix this problem by making it easy for people to signpost where the APIs on a given domain are and provide information on how they work. The format is simple and extensible and can be put into any web root domain for discovery.

We are just getting started, so make sure and get involved via the Github repository or via the Google Group we've set up to facilitate discussion.


Adding API Rating Agency To Discovery Stack

I’m adding the API Rating Agency to my stack of companies who deliver in the area of API discovery. The API Rating Agency helps API consumers understand each API provider on a whole list of metrics, ranging from terms of service to platform uptime.

Rating of APIs has been a nut I’ve been trying to crack for a couple of years now, resulting in a ranking system that is more human than algorithm--so I know how hard it is to go through hundreds of APIs and develop some sort of coherent ranking system.

The API Rating Agency is a work in progress. I know they are hard at work reviewing API providers, so if you have an API make sure and contact them and see how you can put together a package that will help them understand your API.

To achieve the scale we need in the API economy, we will have to have an unbiased, 3rd party ranking, kind of sorta like Moody’s or Standard & Poor’s, but hopefully more neutral. Developers have to be able to understand which platforms they can depend on.

You can catch Jonathan Bourguignon (@jon_bou) of the API Rating Agency at API Strategy & Practice next week in Amsterdam, participating in the discovery and trust session—I hope to see you there!


API Discovery and Trust At #APIStrat in Amsterdam Next Week

I'm continuing my journey through the session line-up at API Strategy & Practice next week in Amsterdam. Next up is the API discovery and trust session, where the conversation will be about API directories, service descriptions, and trust and rating systems for APIs.

Speakers for the API discovery and trust session are:

It will be tough for me to decide between the API design and development session and this one in the first slot on Day 1. Thankfully everything is recorded!

API discovery is an area where we have a lot of work to do to prepare for the explosive growth in the space. It is tough to find quality APIs, understand the differences between APIs, and know who you can really depend on after you’ve integrated them into your applications. This will be an important session.


API Discovery - Tools

Tools For API Discovery

There are definitely not enough tools for enabling API providers to offer discovery solutions around their APIs. As we approach 10K public APIs and an untold number of private APIs, the problem will grow, and more tools and solutions will emerge.

These are a handful of the tools and services that can be used in API discovery, but unfortunately they are mostly from the perspective of the API provider, which is where the problem lies right now and needs solving by providers.


API Discovery - Overview

Overview of API Discovery

In the early days of the web API movement (2005-2010), to find APIs, you went to ProgrammableWeb, which was the only site on the web that was exclusively dedicated to web APIs. Discovery happened via the PW directory and constant stream of news and analysis from across the space.

ProgrammableWeb is still relevant in 2013, but as the number of APIs grows, the directory model is not meeting the demand for finding the best of breed APIs that developers are needing to build the next generation of web and mobile apps.

Among the API tech sector there is always discussion around the need for programmatic discovery in the API space--something that uses a WADL-like approach to describing APIs, so that the next generation of API directories, IDEs, and other systems can discover, understand, monitor, and integrate with APIs, with less human involvement. In short, this vision hasn't been achieved.

While technologists would love for there to be the holy grail of API discovery, in reality the space is taking baby steps from API directories to API hubs or marketplaces where you can not just discover APIs, but interact and even manage API integration and usage.

The need for programmatic discovery, and more meaningful indexes of API resources, is growing alongside the growing number of public APIs, and more importantly, the number of private APIs.

API Discovery - Companies

Companies In The API Discovery Space

There are just a handful of companies who are focusing on solving the API discovery problem. We are just moving beyond the directory model, exploring hub and marketplace solutions. However, there are new providers focusing on API discovery that is more about integration, testing, and monitoring, and about finding quality APIs.


API Discovery - Closing

Finding The APIs You Need

The API discovery research for this paper is much more sparse than my other areas of research. This is a new area I'm tracking, trying to make sense of what is already happening, and attempting to formulate some thoughts on where things might go--maybe even steer them in a particular direction.

I really don't think API discovery is going to be as difficult as many claim it will be. There are some very smart people out there, some with experience indexing the web, so I don't think indexing and providing discovery layers for APIs will be too big of a challenge.

Where I believe the biggest value will come is in the niche API discovery areas, or the long tail of APIs--discovery for specialty government resources, sensors, commercial fleets, and other more obscure areas that the TechCrunches of the world will overlook because they aren't as sexy.

In my opinion, API discovery will be half programmatic, with the creation of API resource definitions using tools like Swagger or I/O Docs, allowing directories, hubs, marketplaces, IDEs, and other discovery layers to find the best available APIs. The other half will be humans verifying that APIs offer value, and are usable, reliable, and meaningful when applied to real world situations. Whoever can blend these two approaches will win.

Much like we saw Google dominate with its PageRank, we will also see ranking solutions evolve for API discovery. API discovery ranking algorithms, evolved from traditional approaches to SEO and recent approaches to social influence, will emerge. There won't be a one-size-fits-all API ranking solution; many will emerge, but they will consolidate, as has happened with web and social.

This paper will rapidly iterate, as the API discovery arena does. We are close to reaching peak API production, where there are too many APIs available, making proper discovery solutions much more critical.  

API Discovery - Building Blocks

API Discovery Building Blocks

As with other areas of providing and consuming APIs, I'm trying to define the common building blocks of API discovery. Even with the passionate discussions in the API space, there are not that many innovative approaches to enabling API discovery.

For years, API discovery was purely about going to ProgrammableWeb and searching through the directory for the API(s) you need. In the last year we've seen new players evolve the paradigm with a new API hub or marketplace model, but ultimately there have been no new approaches to empowering developers to find the right API(s).

Here are a few of the common building blocks I am tracking on when it comes to API discovery.


API Evangelism

API Evangelism

An API is useless if nobody knows about it. Evangelism has emerged as the approach to selling, marketing, and supporting an API platform. While the intent of evangelism can be sales and marketing, the philosophy that has proved successful is to find a balance that focuses more on API support and engagement with consumers than on sales.

A healthy API evangelism strategy brings together a mix of business, marketing, sales and technology disciplines into a new approach to doing business.

Goals
Healthy API evangelism is centered around clear goals. Goals usually start with targets like new user registration, but need to be set higher: around active API consumers, expanding how your existing users consume your API resources, all the way to a clear definition of how your API will extend and expand your brand.

Consumer Engagement
While it may seem obvious, actively engaging API consumers often gets lost in the shuffle. Having a strategic approach to reaching out to new users in a meaningful way, and establishing healthy practices for reaching out to existing developers at various stages of integration, is essential to growing an API initiative. Without planned engagement of API consumers, a canyon will grow between API provider and API consumer, one that may never be reversed.

Blogging
An active blog, with an RSS feed has the potential to be the face of an API and developer evangelism campaign. A blog will be the channel you tell the stories that help consumers understand the value that an API delivers, how other developers are integrating with it, ultimately leaving an SEO exhaust that will bring in new consumers. If comments are in place, a blog can also provide another channel for opening up conversation with API consumers and the public. 

Landscape
Without an understanding of the industry an API is operating in, an API will not effectively serve any business sector. By establishing and maintaining a relevant keyword list, you can monitor competitors and companies that complement your platform, and establish an active understanding of the business sector you are trying to serve. Regular monitoring and analysis of the business landscape is necessary to tailor a meaningful API evangelism campaign.

Support
When it comes to evangelism, support is one of the most critical elements. There is no better word of mouth for an API than an existing consumer talking about how good the API, and its support, are. Engage and support all API consumers. This will drive other vital parts of API evangelism, including creating positive stories for the blog, healthy conversations on social networks, and potentially creating evangelists within a community.

GitHub 
I recommend a lot of online services and tools for API providers and consumers to put to use, but there is not any single platform that delivers as much value to the API space as Github. I would put AWS as a close second, but Github provides a wealth of resources you can tap when both providing APIs and building applications around them. Github is a critical piece of any API strategy, allowing social relationships with developers that are centered around code samples, libraries, or even documentation and resources for an API.

Social Networking
Twitter, Facebook, LinkedIn, Google+ and Github are essential to all API evangelism strategies. If an API does not have a presence on these platforms, it will miss out on a large segment of potential API consumers. Depending on the business sector an API is targeting, the preferred social network will vary. Providing an active, engaging social support presence when operating an API is vital to any API ecosystem. 

Social Bookmarking
Discovery and curation of bookmarks to relevant news and information via social bookmarking platforms is essential to an active API evangelism strategy. Using Reddit, Hacker News and StumbleUpon will provide discovery and access to a wealth of resources for understanding the API space, but also provide an excellent channel for broadcasting blog posts, news and other resources about API operations, keeping consumers informed, while also opening up other opportunities for discovery. 

Road-map
API providers, and API consumers are constantly building trust and establishing a long term relationship with each other. One key facet of this trust, and the foundation for the relationship is sharing a common road-map. API providers need to actively involve API consumers with where the API resources are going, so that consumers can prepare, adjust and even give feedback that may, or may not, influence the road-map. Nothing will piss off API consumers faster than keeping them in the dark about what is coming down the pipes, and surprising them with changes or breaks in their applications. 

Events
A healthy online presence is critical to any successful API strategy, but giving attention to a strong in-person presence at events is also a proven tactic of successful API providers. Evangelism involves a coordinated presence at relevant conferences, hackathons, and local meetups. Events are necessary for building personal relationships with partners and API consumers that can be reinforced online.

Reporting
Measuring every aspect of API operations is necessary to understand what is happening. Reporting on every aspect of API operations is how you visualize and make sense of often very fast moving API activity. It is important to quantify API operations, and develop reports that are crafted to inform key stakeholders about an API initiative.

Internal
External facing activities will dominate any active API operations. However, an essential aspect of sustainable API programs is internal evangelism. Making sure co-workers across all departments are aware and intimate with API operations, while also informing management, leadership and budget decision makers is critical to keeping API doors open, healthy and active. 

Repeat
API and developer evangelism is an iterative cycle. Successful API operations will measure, assess and plan for the road-map in an ongoing fashion, often repeating on a weekly and monthly basis to keep cycles small, reducing the potential for friction in operations and minimizing failures when they happen.

A healthy API evangelism strategy will be something that is owned partially by all departments in a company. IT was a silo; APIs are about interoperability, internally and externally.


API Discovery


In the early days of the web API movement (2005-2010), to find APIs, you went to ProgrammableWeb (PW), which was the only site on the web that was exclusively dedicated to web APIs. Discovery happened via the PW directory and constant stream of news and analysis about the space.

ProgrammableWeb is still relevant in 2013, but as the number of APIs grows, the directory model is not meeting the demand for finding the best-of-breed APIs that developers need to build web and mobile apps.

Within the API tech sector there has always been discussion around the need for programmatic discovery of APIs--something that uses a machine readable approach to describing APIs, so that the next generation of API directories, integrated development environments (IDEs) and other systems can discover, understand, monitor and integrate with APIs, with less human involvement. So far, this vision has never been realized.

API discovery has two sides, finding the APIs you need and having your API be found. While there has been an evolution in the API discovery game, much of it is still done manually, with a lot of legwork by both API providers and consumers.

As we move beyond 10K public APIs in the original API directory at ProgrammableWeb, the need for solid approaches to API discovery will emerge. API discovery is the bridge between API provider and consumer.


One API Discovery Definition to Rule Them All

When I talk about API discovery, in person at events or on my blog(s), I notice people automatically default to thinking I mean a universal API discovery language that will work for all web APIs. I think the technologists that operate in the API space are always striving for technical perfection--resulting in the kinds of discussions you see around REST, HATEOAS and OAuth, and similarly this one about API discovery.

I'm thankful for the passion and dedication of the technologists in this space, but when it comes to API discovery, I'm never talking about a universal language or approach. I personally just don't believe there can be one definition to rule them all. When I reference API discovery, I'm focusing on API discovery at the provider level, and providing information and resources that allow people who launch APIs to be successful. I have no interest in defining or supporting a world-wide or industry level definition for API discovery. I leave those conversations to all y'all tech pundits.

I am a fan of supporting API providers to do something, anything! Sure, it should be as standardized as you feel necessary. I hope you use something that already exists like WADL, Swagger or I/O Docs (don't reinvent the wheel), and make sure to look at the approach Google is taking with their API discovery service--they have some experience in the field.

In reality though, your motivation to develop JSON or XML definitions for your API will probably be to provide interactive documentation or allow for easy generation of code libraries for your API--not discovery. With the API discovery conversation automatically defaulting to a universal definition by the tech pundits, API providers will often avoid these discussions, leaving discovery a lower priority when planning and implementing an API. Much like with HATEOAS, without concrete examples of value, API providers won't see value in providing JSON or XML definitions of their APIs. Interactive docs and auto-generation of code libraries are clear value propositions, and show potential for bringing discovery back to the forefront.

Once you have API definitions for all of your API endpoints, it's pretty easy to publish a single manifest of all of your APIs in a single JSON (or XML) file in the root of your developer area. Sure, I would love all of these definitions to be the same, but I prefer a more pragmatic approach and will accept whatever an API provider deems suitable for their APIs, with the resources they have available.
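To make this concrete, here is a minimal sketch of what such a manifest might look like--the filename, fields and URLs are all my own illustration, not any standard:

    {
      "name": "Example Company APIs",
      "url": "https://developer.example.com",
      "apis": [
        {
          "name": "Products API",
          "baseUrl": "https://api.example.com/products",
          "definition": "https://developer.example.com/products/swagger.json"
        },
        {
          "name": "Orders API",
          "baseUrl": "https://api.example.com/orders",
          "definition": "https://developer.example.com/orders/orders.wadl"
        }
      ]
    }

Anything that lets a consumer (or a directory crawler) start at the root of your developer area and programmatically find every API, along with its machine readable definition, would do the job.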

Think about how web page discovery came together in the late 1990s with Yahoo, then the solutions provided by Google, and even newer approaches from providers like DuckDuckGo. When it comes to API search and discovery, we are at circa 1997 compared with web page discovery. You have directories like ProgrammableWeb, but you also have newer vendors emerging like APIhub, who potentially bring a new perspective to the table.

Since APIs are about “programmatic discovery”, I think how developers discover APIs will vary, occurring via these directories and hubs, but also via their chosen PaaS platform like Drupal, Heroku or Salesforce, with BaaS providers like Parse or Kinvey, as well as via popular IDEs like Eclipse that allow for plugins.

It will be up to PaaS, BaaS or other 3rd party platform providers to assemble resource stacks that are meaningful to their community. They will do the legwork to find best-of-breed API resources, which will be made easier if API providers provide JSON or XML definitions of their API resources--but it won't be a requirement.

I believe that, similar to website sitemaps, API discovery will have wider definitions that some follow, with successful vendor specific implementations as well, but ultimately it will remain largely imperfect--some API providers will do well, and others will implement poorly. The markets will decide! (cringe)

My objective is to help the average API provider hear stories of other successful approaches, and identify the benefits, in hopes that they will implement something, anything! Allowing us to take baby steps forward in API discovery--rather than defining one definition to rule them all that nobody gives a shit about, and not moving forward at all.


The Next Generation of API Discovery

For the last seven years, when you wanted to find an API you went to ProgrammableWeb. It has been the definitive way to discover new APIs, and is responsible for much of the buzz that has gotten the industry to where it is.

Now that ProgrammableWeb is at 8,400 APIs in its directory, and adding 50-100 each week, it will only get more difficult to discover APIs. Even for someone like me who has looked at thousands of APIs, it can be very difficult and time consuming to find the API or APIs you are looking for.

In 2013 there are even more ways to find APIs--new approaches that are looking to define the next generation of API discovery and consumption. Currently I'm tracking 4 API directories in addition to ProgrammableWeb:

  • APIhub - APIhub is the best way to publish, discover and consume APIs. Search our database or browse through our most popular APIs
  • APIs.io - APIS.io is an open source and free API registry service that allows developers to publish and discover REST APIs and interact with them online
  • Exicon API Directory - Exicon helps marketers and enterprises find qualified developers through our online platform and advisory services
  • Mashape - Mashape provides a world-class marketplace to manage, distribute and consume both private and public APIs by developers from all over the world

APIs.io and Exicon have the fewest APIs available, while Mashape and APIhub are currently leading, with Mashape possessing over 1,500 APIs and APIhub over 13,000.

In addition to providing the ability to search APIs and browse by category, this new generation of API directories is providing sophisticated tools like interactive documentation, code samples and ways to follow, share and like--providing social interactions for API developers with API publishers.

Beyond these new bells and whistles, what's next for API discovery? To make developers' lives easier they need programmatic ways to discover and understand APIs, as well as some sort of ranking to tell which APIs are good and which are bad.

To provide interactive documentation, these directories possess JSON definitions of each API interface, using formats such as Swagger from Wordnik, which opens up the door for more sophisticated discovery in a programmatic way, and potentially directly from within your IDE.

With the data from the sharing, liking, following, page views and other signals generated via these API directories, there is potential to develop some sort of ranking. But we need more data signals from the space to truly develop a meaningful ranking. I've developed my own API ranking to help me discover which APIs are trending based upon internal and external signals, allowing me to establish my API Stack. But it's not enough either. We need a lot more to be able to establish a meaningful way to rank APIs that truly benefits developer efforts.

As we switch from showcasing the quantity of APIs to better understanding the value and quality of APIs, we are going to need a new breed of directories. I'm excited by what I'm seeing from these new API directories, and hopeful for what is coming in 2013.

Disclosure: Both ProgrammableWeb and APIhub are API Evangelist partners.


Mulesoft Launches API Discovery Hub

API discovery is becoming an increasingly troubling problem. As an analyst, I see a dizzying number of APIs each month. When I get asked to find a particular type of API, or a group of APIs in a particular industry, it gets difficult to produce meaningful results for any query.

My option for API discovery has historically been ProgrammableWeb, the OG API directory.

Today there is a new player on the block, APIhub. APIhub is a fresh attempt at solving API discovery, with over 13K APIs organized by category, type, protocol, format and security.

APIhub is looking to provide a solution for two distinct groups:

  • Developers - Developers need an ecosystem to discover, learn, test and use APIs
  • Publishers - Publishers require a platform to publish, manage, engage customers, and monetize APIs

With its first release, APIhub doesn't offer much out of the gate that is different from ProgrammableWeb, except for a much cleaner layout and search tools that are not cluttered by news, mashups and advertising.

Once you explore APIhub further, you start seeing early signs of deeper features. When you add or claim your API, you're given the option to upload an API spec in Swagger or WADL format, with more formats coming soon. If implemented for all 13K APIs, this could be a powerful discovery engine.

Beyond programmatic discovery, I see hints of ranking on the platform. Right now it's just a 5-star rating, but I have hopes for more sophisticated ways for developers, publishers and analysts to rank APIs in the future. Allowing consumers to find new and existing APIs in meaningful ways--beyond a 5-star rating, or the number of followers and mashups built on an API--would be extremely valuable.

An API hub, marketplace or directory is nothing new. You see folks like APIs.io trying to step up with a directory, and startups like Mashape providing a hub of their own, in a similar attempt to create an API marketplace.

I think several things will decide whether APIhub is successful, centering around its ranking and discovery algorithm, as well as its ability to attract API owners to come, claim and enhance their listings. But more importantly, can APIhub and Mulesoft get developers to care and participate in curating, ranking and consuming APIs via the new APIhub? Just like the web API movement, developers will make or break APIhub. If it adds value to developers' worlds, they'll embrace it--if not, APIhub will have a hard time staying relevant.


Google Deploys a Single, Centralized Terms of Use for APIs

Google has taken another step towards a more common API infrastructure, in line with their API Discovery Service, API Explorer, and API Console, by launching a single terms of service for all Google APIs.

Google has rewritten their terms from the ground up with the goal of making them easier to understand for application developers.

At the moment it seems as though most of the APIs that use the central terms of service are content and data related APIs, like Google Tasks, Google Moderator, Google Charts and Blogger, while more complex APIs like YouTube, Google Analytics, Google AdWords and Google Latitude still use their own terms of service. Over time, more APIs will be migrated to the new, centralized terms of service format.

With almost 100 APIs now, it makes sense for Google to reduce the complexity of terms of use across APIs, increasing the chances a developer will “legally” build an application or business around one or multiple Google APIs. 

Google provides tools for developers to easily discover and explore their APIs, while also providing centralized management of billing, usage reporting and terms of use. I predict we will see other API service providers offering tools for managing terms of use, branding and other legal aspects of API management in 2012.


Can Swagger Deliver a RESTful API Discovery Service?

There is a lot of discussion around the growth of APIs, and what the future will look like. How will we discover and make sense of the number of available APIs, and quickly get to work integrating with the APIs that bring the most value to our apps and businesses?

One technology that comes up in every conversation I’ve had is Swagger. What is Swagger?

Swagger is a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services.

The goal of Swagger is to enable client and documentation systems to update at the same pace as the server. The documentation of methods, parameters and models are tightly integrated into the server code, allowing APIs to always stay in sync.

Swagger was born out of initiatives at Wordnik, developed for Wordnik's own use during the development of developer.wordnik.com. Swagger development began in early 2010, and the framework being released is used by Wordnik's APIs, which power both internal and external API clients.

Swagger provides a declarative resource specification, allowing users to understand and consume services without knowledge of server implementation, enabling both developers and non-developers to interact with the API, providing clear insight into how the API responds to parameters and options.
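As a rough sketch of what that declarative specification looks like--with field names recalled from the early Swagger 1.x resource declarations, so treat this as illustrative rather than authoritative--an API describing a single word lookup method might be declared like this:

    {
      "apiVersion": "1.0",
      "swaggerVersion": "1.1",
      "basePath": "http://api.example.com/v1",
      "apis": [
        {
          "path": "/word/{word}/definitions",
          "description": "Look up the definitions for a word",
          "operations": [
            {
              "httpMethod": "GET",
              "nickname": "getDefinitions",
              "summary": "Returns the definitions for the supplied word",
              "parameters": [
                {
                  "paramType": "path",
                  "name": "word",
                  "dataType": "string",
                  "required": true
                }
              ]
            }
          ]
        }
      ]
    }

Because a document like this lives alongside the server code, the tools listed below can render it as interactive documentation or turn it into client libraries, without a human rewriting anything.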

I’m familiarizing myself with the specification more and playing with the various tools they provide:

  • Swagger UI - A dependency-free collection of HTML, Javascript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API
  • Swagger Core - Defines Java annotations and required logic to generate a Swagger server or client.
  • Swagger CodeGen - Contains a template-driven engine to generate client code in different languages by parsing your Swagger Resource Declaration.
  • Swagger Node.js Sample App - A fully-functioning, stand-alone Swagger server written in Javascript which uses Node.js and the Express framework.
  • Swagger Scala Sample App - A fully-functioning, stand-alone Swagger server written in Scala which demonstrates how to enable Swagger in your API.
  • Swagger Java Sample App - A fully-functioning, stand-alone Swagger server written in Java which demonstrates how to enable Swagger in your API.

I understand that Swagger is not the one specification to rule all APIs, and it won't make all the religious API fanatics happy. But I want to start somewhere. I see three main benefits for API owners who adopt Swagger:

  • Automated, consistent generation of clean, beautiful, interactive API documentation
  • Generation of client code and SDK in multiple languages
  • Feeding into an industry wide API discovery language that both developers and non-developers can use

I believe strongly that consistent documentation and code samples ensure an API will get used, but as the number of APIs grows, a system like the Google APIs Discovery Service will be essential for API adoption across industries and around the globe. I'm hoping to learn more about Swagger, and see if it can help deliver on this vision.


With Seevl Music Discovery, the Website is the API

Seevl, a music discovery service that provides a new way to explore the cultural and musical universe of various artists, just launched an API with the assistance of 3Scale.

When deploying the API, Seevl approached it a little differently than most. Instead of providing a separate API to access data, Seevl relies on content negotiation principles to deliver alternative representations of web pages.

This means the entire Seevl website is the API and you can get JSON representations of almost every page in the site.

Seevl relies on HTTP headers to let developers request data using a particular content-type, and to authenticate, using three headers:
  • Accept - The content-type required
  • X_APP_ID - The developer application ID
  • X_APP_KEY - The developer application key
Here is what an example search for "beatles" might look like using curl:
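This is a sketch assuming a hypothetical search endpoint at seevl.net--the path and query parameter are my own illustration, while the three headers are the ones documented above:

    # hypothetical endpoint; consult the Seevl docs for the real path
    curl "http://seevl.net/search?q=beatles" \
      -H "Accept: application/json" \
      -H "X_APP_ID: YOUR_APP_ID" \
      -H "X_APP_KEY: YOUR_APP_KEY"

Swap the Accept header for text/html and the same URL returns the regular web page--which is the whole point of the content negotiation approach.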

This approach is meant to make it easier for developers to focus on development and let the Seevl client libraries handle the content negotiation.

The Seevl API provides several methods to search and pull specific data about individual bands and artists, and related information.

While this approach is nothing new, it's an interesting way to provide users with HTML views and developers with JSON representations of the same information stored in a database.

Google APIs Discovery Service

The Google APIs Discovery Service provides a set of web APIs for discovering metadata across Google APIs.

The discovery service delivers a JSON-based API that provides a directory of supported Google APIs, and a machine-readable "discovery document" for each of the supported APIs that includes:
  • List of API resource schemas based on JSON Schema
  • List of API methods and available parameters for each method
  • Available OAuth 2.0 scopes for each API
  • Inline documentation of methods, parameters and available parameter values
Developers can use the Google APIs Discovery Service to build client libraries, IDE plugins and other tools that interact with supported Google APIs.

The Google APIs Discovery Service delivers two things for each supported API:
  • APIs Directory Resource
    • Identification and description information, including name, version, title, and description.
    • Documentation information, including icons and a documentation link.
    • Status information, including status labels, and an indication as to whether or not this is the preferred version of the API.
    • Discovery document link, the URI of the discovery document for this API
  • Discovery Document Resource
    • Schemas, which is a list of API resource schemas that describe the data you have access to in each API
    • Methods, including a list of API methods and available parameters for each method.
    • OAuth scopes, which identifies the list of OAuth scopes available for this API.
    • Inline documentation, which provides brief descriptions of schemas, methods, parameters and available parameter values.
The Google APIs Discovery Service is part of a larger effort by Google to get a handle on their growing number of APIs. Developers find themselves potentially using multiple Google APIs across many applications and client projects.

The Google APIs Discovery Service allows developers to find new APIs available in the directory, and programmatically discover how each API authenticates, and what methods and parameters are available.
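If I remember the endpoints correctly (check Google's documentation for the current paths), you can kick the tires on the service with nothing more than curl:

    # list the directory of supported Google APIs
    curl "https://www.googleapis.com/discovery/v1/apis"

    # fetch the machine-readable discovery document for a single API
    curl "https://www.googleapis.com/discovery/v1/apis/urlshortener/v1/rest"

The first call returns the directory resource described above, and the second returns the full discovery document for one API, with its schemas, methods, parameters and OAuth scopes.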

Not many companies have the number of APIs that Google has, or would need an API discovery service of their own. But with the recent growth in web APIs, there will be more of a need for API discovery services within specific areas or industries.

Discovery Services for Common APIs

I just wrote about the potential of open source API billing and traffic control building blocks, if Google would open source their Google API Console, like they did with the Google API Explorer.

I started bundling thoughts on the Google APIs Discovery Service into that post, but then realized it is a separate issue, needing its own blog post.

First, Google isn't about to open source the Google APIs Discovery Service. They issue developers of the Google APIs Discovery Service a patent license. I couldn't find any more details tonight, but I'm assuming it puts this service in a different category than the rest.

I wrote an overview of the Google APIs Discovery Service, but essentially it is a way to discover and describe Google APIs using an API. Think Web Application Description Language (WADL), with an API to access it, and focused only on supported Google APIs.

As the number of web APIs grows, the need for discovery APIs and a standard description language will only grow. We will see services like the Google APIs Discovery Service pop up within specific industries and API areas.

You see potential for this with Mashery's API Network, and even more evidence with Mashape's Marketplace.

There aren't many companies with the number of APIs that Google has, so you won't see any single company building an API discovery service like that.

However, I'd like to see API discovery services for cloud computing, social media, location-based, and telecommunication APIs appear, making it easier for developers to discover, build code libraries, and integrate with common APIs.

Network Device Discovery with Fing

I was looking for a network device discovery tool--something that will run at the command line on multiple platforms. I came across Fing from Overlook.

Fing is a command line tool for network and service discovery. It can detect wireless and wired devices, and it saves network device discovery results to a log file or to a CSV format. Fing runs on Windows, Linux and Mac, and you can download it from the Overlook site.
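If memory serves, the simplest invocation just scans the local network--verify the exact options against the tool's own help output, since this is recalled rather than checked:

    # discover devices on the local network (may require root for ARP scanning)
    sudo fing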

If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.