These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring to understand the details of each request and response.

14 Feb 2018
When you are operating an API, you are always looking for new ways to be discovered. I study this aspect of operating APIs from the flip-side: how do I find new APIs, and stay in tune with what APIs are up to? Historically we found APIs using ProgrammableWeb, Google, and Twitter, but increasingly Github is where I find the newest, coolest APIs. I do a lot of searching via Github for API related topics, but increasingly Github topics themselves are becoming more valuable within search engine indexes, making them an easy way to uncover interesting APIs.
I was profiling the market data API Alpha Vantage today, and one of the things I always do when I am profiling an API is conduct a Google search, and then secondarily a Github search, for the API's name. Interestingly, I found a list of Github topics while Googling for the Alpha Vantage API, uncovering some interesting SDKs, CLIs, and other open source solutions that have been built on top of the financial data API. This shows the importance of operating your API on Github, but also of defining a set of standard Github topic tags across all your projects, and encouraging your API community to use the same set of tags, so that their projects will surface as well.
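This discovery workflow is easy to script. Below is a minimal sketch of how repositories can be pulled by topic via the Github search API; the topic, the sample repository name, and the helper names are my own illustrations, and at the time of writing topic search still required Github's preview Accept header.

```python
import json
import urllib.parse
import urllib.request

GITHUB_SEARCH = "https://api.github.com/search/repositories"

def topic_search_url(topic, per_page=10):
    """Build a GitHub repository search URL scoped to a single topic."""
    query = urllib.parse.urlencode({"q": "topic:" + topic, "per_page": per_page})
    return GITHUB_SEARCH + "?" + query

def repo_names(search_response):
    """Pull the full repository names out of a search API response payload."""
    return [item["full_name"] for item in search_response.get("items", [])]

def fetch_topic_repos(topic):
    """Call the search API live (topic search required a preview media type
    at the time of writing)."""
    request = urllib.request.Request(
        topic_search_url(topic),
        headers={"Accept": "application/vnd.github.mercy-preview+json"},
    )
    with urllib.request.urlopen(request) as response:
        return repo_names(json.loads(response.read().decode("utf-8")))

# Offline demonstration with a trimmed-down, illustrative response payload
sample = {"items": [{"full_name": "example/alpha-vantage-sdk"}]}
print(topic_search_url("alpha-vantage"))
print(repo_names(sample))
```

The same `repo_names` parsing works whether you are polling the API on a schedule or just poking around by hand.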
I consider Github to be the most important tool in an API provider's toolbox these days. I know as an API analyst, it is where I learn the most about what is really going on. It is where I find the most meaningful signals that allow me to cut through the noise that exists on Google, Twitter, and other channels. Github isn’t just for code. As I mention regularly, 100% of my work as API Evangelist lives within hundreds of separate Github repositories. Sadly, I don’t spend as much time as I should tagging, and organizing projects into meaningful topic areas, but it is something I’m going to be investing in more. Conveniently, I’m doing a lot of profiling of APIs for my partner Streamdata.io, which involves establishing meaningful tags for use in defining real time data stream topics that consumers can subscribe to–making me think a little more about the role Github topics can play.
One of these days I will do a fresh roundup of the many ways in which Github can be used as part of API operations. I’m trying to curate and write stories about everything I come across while doing my work. The problem is there isn’t a single place I can send my readers to when it comes to applying this wealth of knowledge to their operations. The first step is probably to publish Github as its own research area on Github (mind blown), as I do with my other projects. It has definitely risen up in importance, and can stand on its own feet alongside the other areas of my work. Github plays a central role in almost every stop along the API life cycle, and deserves its own landing page when it comes to my API research, and priority when it comes to helping API providers understand what they should be doing on the platform to help make their API operations more successful.
I wrote an earlier article that basic API design guidelines are your first step towards API governance, but I wanted to introduce another first step you should be taking even before basic API design guides–cataloging all of your APIs. I’m regularly surprised by the number of companies I’m talking with who don’t even know where all of their APIs are. Sometimes, but not always, there is some sort of API directory or catalog in place, but often it is out of date, and people just aren’t registering their APIs, or following any common approach to delivering APIs within an organization–hence the need for API governance.
My recommendation is that even before you start thinking about what your governance will look like, or even mention the word to anyone, you take inventory of what is already happening. Develop an org chart, and begin having conversations. Identify EVERYONE who is developing APIs, and start tracking how they do what they do. Sure, you want to get an inventory of all the APIs each individual or team is developing or operating, but you should also be documenting all the tooling, services, and processes they employ as part of their workflow. Ideally, there is some sort of continuous deployment workflow in place, but this isn’t a reality in many of the organizations I work with, so mapping out how things get done is often the first order of business.
One of the biggest failures of API governance I see is that the strategy has no plan for how we get from where we are to where we want to be, it simply focuses on where we want to be. This type of approach contributes significantly to pissing people off right out of the gate, making API governance a lot more difficult. Stop focusing on where you want to be for a moment, and focus on where you are. Build a map of where people are: tools, services, skills, best and worst practices. Develop a comprehensive map of where the organization is today, and then sit down with all stakeholders to evaluate what can be improved upon and streamlined. This begins the hard work of building a bridge between your existing teams and what might end up being a future API governance strategy.
API design is definitely the first logical step of your API governance strategy, standardizing how you design your APIs, but this shouldn’t be developed from the outside-in. It should be developed from what already exists within your organization, and then begin mapping to healthy API design practices from across the industry. Make sure you are involving everyone you’ve reached out to as part of your inventory of APIs, tools, services, and people. Make sure they have a voice in crafting that first draft of API design guidelines you bring to the table. Without buy-in from everyone involved, you are going to have a much harder time ever reaching the point where you can call what you are doing governance, let alone seeing the results you desire across your API operations.
I was needing an OpenAPI (fka Swagger) definition for the Hashicorp Consul API, so that I could use it in a federal government project I’m advising on. We are using the solution for the microservices discovery layer, and I wanted to be able to automate using the Consul API, publish documentation within our project Github, import into Postman across the team, as well as several other aspects of API operations. I’m working to assemble at least a first draft OpenAPI for the entire technology stack we’ve opted to use for this project.
First thing I did was Google, “Consul API OpenAPI”, then “Consul API Swagger”, which didn’t yield any results. Then I Githubbed “Consul API Swagger”, and came across a Github issue where a user had asked for “improved API documentation”. The resulting response from Hashicorp was, “we just finished a revamp of the API docs and we don’t have plans to support Swagger at this time.” This demonstrates they really don’t understand what OpenAPI (fka Swagger) is, something I’ll write about in future stories this week.
One of the users on the thread had created an API Blueprint for the Consul API, and published the resulting documentation to Apiary. Since I wanted an OpenAPI, instead of an API Blueprint, I headed over to APIMATIC API Transformer to see if I could get the job done. After trying to transform the API Blueprint to OpenAPI 2.0 I got some errors, which forced me to spend some time this weekend trying to hand-craft / scrape the static API docs and publish my own OpenAPI. The process was so frustrating I ended up pausing the work, and writing two blog posts about my experiences, and then this morning I received an email from the APIMATIC team that they caught the errors, updated the API Blueprint, allowing me to continue transforming it into an OpenAPI definition. Benefits of being the API Evangelist? No, benefits of using APIMATIC!
Anyways, you can find the resulting OpenAPI on Github. I will be refining it as I use it in my project. Ideally, Hashicorp would take ownership of their own OpenAPI, providing a machine readable API definition that consumers could use in tooling, and other services. However, they are stuck where many other API service providers, API providers, and API consumers are–thinking OpenAPI is still Swagger, which is just about API documentation. ;-( I try not to let this frustrate me, and will write about it each time I come across it, until things change. OpenAPI (fka Swagger) is so much more than just API documentation, and is such an enabler for me as an API consumer when I’m getting up and running with a project. If you are doing APIs, please take the time to understand what it is, because it could be the difference between me using your API, or moving on to find another solution. It is that much of a timesaver for me.
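To give a sense of what this machine readable definition looks like, here is a minimal OpenAPI 2.0 fragment covering a single Consul endpoint, hand-crafted as a sketch along the lines of what I'm assembling, and not anything official from Hashicorp:

```yaml
swagger: "2.0"
info:
  title: Consul API
  version: "v1"
host: localhost:8500
basePath: /v1
schemes:
  - http
paths:
  /catalog/services:
    get:
      summary: Lists all services registered in the catalog
      produces:
        - application/json
      responses:
        "200":
          description: A map of service names to their tags
```

From this single file I can generate documentation, import the surface area into Postman, and hand tooling a complete map of the API, which is exactly the enabler I'm describing above.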
I do a lot of thinking about API discovery, and how I can help people find the APIs they need. As part of this thinking I’m always curious why API discovery hasn’t evolved much in the last decade. You know, no Google for APIs. No magical AI, ML, AR, VR, or Blockchain for distributed API mining. As I’m thinking, I ask myself, “how is it that the API Evangelist finds most of his APIs?” Well, word of mouth. Storytelling. People talking about the APIs they are using to solve a real world business problem.
That is it! API storytelling is API discovery. If people aren’t talking about your API, it is unlikely it will be found. Sure people still need to be able to Google for solutions, but really that is just Googling, not API discovery. It is likely they are just looking for a company that does what they need, and the API is a given. We really aren’t going to discover new APIs. I don’t know many people who spend time looking for new APIs (except me, and I have a problem). People are going to discover new APIs by hearing about what other people are using, through storytelling on the web and in person.
In my experience as the API Evangelist I see three forms of this in action:
1) API providers talking about their API use cases on their blog
2) Companies telling stories about their infrastructure on their blog
3) Individuals telling stories about the APIs they use in their job, side projects, and elsewhere.
This represents the majority of ways in which I discover new APIs. Sure, as the API Evangelist I will discover new APIs occasionally by scouring Github, Googling, and harvesting social media, but I am an analyst. These three ways will be how the average person discovers new APIs. Which means, if you want your API to be discovered, you need to be telling stories about it. If you want the APIs you depend on to be successful and find new users, you need to be telling stories about them.
Sometimes in all of this techno hustle, good old fashioned storytelling is the most important tool in our toolbox. I’m sure we’ll keep seeing waves of API directories, search engines, and brain wave neural networks emerge to help us find APIs over the next couple of years. However, I’m predicting that API discovery will continue to be defined by human beings talking to each other, telling stories on their blogs, via social media, and occasionally through brain interfaces.
While I’m still investing in defining the API discovery space, and I’m seeing some improvements from other API service and tooling providers when it comes to finding, sharing, indexing, and publishing API definitions, I honestly don’t think API discovery will ever end up a top-level concern. While API design, deployment, management, and even testing and monitoring have floated to the top as primary discussion areas for API providers and consumers, the area of API discovery has never quite become a priority. There is always lots of talk about API discovery, mostly about what is broken, rarely about what is needed to fix it, with regular waves of directories, marketplaces, and search solutions emerging to attempt to fix the problem, but always falling short.
As I watch more mainstream businesses on-board with the world of APIs, and banks, healthcare, insurance, automobile, and other staple industries work to find their way forward, I’m thinking that the mainstreamification of APIs will surpass API discovery. Meaning that people will be looking for companies who do the thing that they want, and the API is just assumed. Every business will need to have an API, just like every business is assumed to have a website. Sure, there will be search engines, directories, and marketplaces to help us find what we are looking for, but we just won’t always be looking for APIs, we will be looking for solutions. The presence of an API will be assumed, and if it doesn’t exist we will move on, looking for other companies, organizations, institutions, and agencies who do what we need.
I feel like this is one of the reasons API discovery never really became a thing. It doesn’t need to be. If you are selling products and services online you need a website, and as the web has matured, you need the same data, content, media, and algorithms available in a machine readable format so they can be distributed to other websites, used within a variety of mobile applications, and available in voice, bot, device, and other applications. This is just how things will work. Developers won’t be searching for APIs, they’ll be searching for the solution to their problem, and the API is just one of the features that has to be present for them to actually become a customer. I’ll keep working to evolve my APIs.json discovery format, and incentivize the development of client, IDE, CI/CD, and other tooling, but I think these things will always be enablers, and never a primary concern in the API lifecycle.
I’ve been setting aside time to browse through and explore tagged projects on Github each week, learning about what is new and trending out there on the Githubz. It is a great way to explore what is being built, and what is getting traction with users. You have to wade through a lot of useless stuff, but when I come across the gems it is always worth it. I’ve been providing guidance to all my customers that they should be publishing their projects to Github, as well as tagging them coherently, so that they come up as part of tagged searches via the Github website, and the API (I do a lot of discovery via the API).
When I am browsing API projects on Github I usually have a couple of orgs and users I tend to peek in on, and my friend Mike Ralphson (@PermittedSoc) is always one. Except, I usually don’t have to remember to peek in on Mike’s work, because he is really good at tagging his work, and building interesting projects, so his stuff is usually coming up as I’m browsing tags. His is the first repository I’ve come across that is organizing OpenAPI 3.0 tooling, and on his project he has some great advice for project owners: “Why not make your project discoverable by using the topic openapi3 on GitHub and using the hashtag #openapi3 on social media?” Great advice, Mike!!
As I said, I regularly monitor Github tags, and I also monitor a variety of hashtags on Twitter for API chatter. If you aren’t tagging your projects, and Tweeting them out with appropriate hashtags, the likelihood they are going to be found decreases pretty significantly. This is how Mike will find your OpenAPI 3.0 tooling for inclusion in his catalog, and it is how I will find your project for inclusion in stories via API Evangelist. It’s a pretty basic thing, but it is one that I know many of you are overlooking because you are down in the weeds working on your project, and even when you come up for air, you probably aren’t always thinking about self-promotion (you’re not a narcissist like me, or are you?)
Twitter #hashtags have long been a discovery mechanism on social media, but tagging on Github is quickly picking up steam when it comes to coding project discovery. Also, with the myriad of ways in which Github repos are being used beyond code, Github tagging makes it a discovery tool in general. When you consider how API providers are publishing their API portals, documentation, SDKs, definitions, schema, guides, and much more, it makes Github one of the most important API discovery tools out there, moving well beyond what ProgrammableWeb or Google brings to the table. I’ll continue to turn up the volume on what is possible with Github, as it is no secret that I’m a fan. Everything I do runs on Github, from my website, to my APIs, and supporting tooling–making it a pretty critical part of what I do in the API sector.
I’m keeping an eye on the AWS Marketplace, as well as what Azure and Google are up to, looking for growing signs of anything API. I’d have to say that, while Azure is a close second, AWS is growing faster when it comes to the availability of APIs in their marketplace. What I find interesting about this growth is it isn’t just about the cloud, it is about wholesale APIs, and as it grows it quickly becomes about API discovery as well.
The API conversation on AWS Marketplace has for a while been dominated by API service providers, and specifically the API management providers who have pioneered the space:
After management, we see some of the familiar faces from the API space doing API aggregation, database to API deployment, security, integration platform as a service (iPaaS), real time streaming, authentication, logging, and monitoring:
- Cloud Elements (Aggregation)
- SlashDB (Database)
- Runscope (Monitoring)
- Zapier (iPaaS)
- Peach API Security (Security)
- Streamdata (Real Time)
- Auth0 (Authentication)
- Okta (Authentication)
- LogEntries (Logging)
All of these round out the API lifecycle, providing a growing number of tools that API providers can deploy into their existing AWS infrastructure to help manage API operations. This is how API providers should be operating, offering retail SaaS versions of their APIs, but also cloud deployable, wholesale versions of their offerings that run in any cloud, not just AWS.
The portion of this aspect of API operations that is capturing my attention is that individual API providers are moving to offer their APIs up via the AWS Marketplace, moving things beyond just API service providers selling their tools to the space. Most notable are the API rockstars from the space:
After these well known API providers there are a handful of other companies offering up wholesale editions of their APIs, so that potential customers can bake them into their existing infrastructure, alongside their own APIs, or possibly other 3rd party APIs.
These APIs offer a variety of services, but at a quick glance I noticed location, machine learning, video editing, PDFs, health care, payments, SMS, and other API driven solutions. It is a pretty impressive start to what I see as the future of API discovery and deployment, as well as any other stop along the lifecycle, with all the API service providers offering their warez in the marketplace.
I’m going to setup a monitoring script to alert me of any new API focused additions to the AWS marketplace, using of course, the AWS Marketplace API. I’ve seen enough growth here to warrant the extra work, and added monitoring channel. I’m feeling like this will grow beyond my earlier thoughts about wholesale API deployment, and potentially pushing forward the API discovery conversation, and changing how we will be finding the APIs we use across our infrastructure. I will also keep an eye on Azure and Google in this area, as well as startup players like Algorithmia who are specializing in areas like machine learning and artificial intelligence.
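The marketplace polling itself is just scheduled requests; the part worth sketching is the diff that flags new API focused listings between runs. The listing shape below is entirely hypothetical, standing in for whatever the marketplace returns:

```python
def new_api_listings(previous, current, keywords=("api",)):
    """Return listings that appear in the current snapshot but not the
    previous one, and whose titles mention one of the keywords."""
    seen = {item["id"] for item in previous}
    matches = []
    for item in current:
        title = item.get("title", "").lower()
        if item["id"] not in seen and any(k in title for k in keywords):
            matches.append(item)
    return matches

# Hypothetical snapshots of marketplace listings from two polling runs
previous = [{"id": "prod-1", "title": "Cloud Elements API Aggregation"}]
current = previous + [
    {"id": "prod-2", "title": "SlashDB Database to API"},
    {"id": "prod-3", "title": "Some Unrelated AMI"},
]

for listing in new_api_listings(previous, current):
    print(listing["id"], listing["title"])
```

Anything this diff surfaces gets pushed to my monitoring channel, which is all the alerting I need for now.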
I have been reading through a number of specifications lately, trying to get more up to speed on what standards are available for me to choose from when designing APIs. Next up on my list is Link Relation Types for Web Services, by Erik Wilde. I wanted to take this informational specification and repost it here on my site, partially because I find it easier to read, and because the process of breaking things down and publishing as posts helps me digest the specification and absorb more of what it contains.
I’m particularly interested in this one, because Erik captures what I’ve had in my head for APIs.json property types, but haven’t always been able to articulate as well as Erik does, let alone publish as an official specification. I think his argument captures the challenge we face with mapping out the structure we have, and how we can balance the web with the API, making sure as much of it becomes machine readable as possible. I’ve grabbed the meat of Link Relation Types for Web Services and pasted it here, so I can break it down, and reference it across my storytelling.
- Introduction
One of the defining aspects of the Web is that it is possible to interact with Web resources without any prior knowledge of the specifics of the resource. Following Web Architecture by using URIs, HTTP, and media types, the Web’s uniform interface allows interactions with resources without the more complex binding procedures of other approaches.
Many resources on the Web are provided as part of a set of resources that are referred to as a “Web Service” or a “Web API”. In many cases, these services or APIs are defined and managed as a whole, and it may be desirable for clients to be able to discover this service information.
Service information can be broadly separated into two categories: One category is primarily targeted for human users and often uses generic representations for human readable documents, such as HTML or PDF. The other category is structured information that follows some more formalized description model, and is primarily intended for consumption by machines, for example for tools and code libraries.
In the context of this memo, the human-oriented variant is referred to as “documentation”, and the machine-oriented variant is referred to as “description”.
These two categories are not necessarily mutually exclusive, as there are representations that have been proposed that are intended for both human consumption, and for interpretation by machine clients. In addition, a typical pattern for service documentation/description is that there is human-oriented high-level documentation that is intended to put a service in context and explain the general model, which is complemented by a machine-level description that is intended as a detailed technical description of the service. These two resources could be interlinked, but since they are intended for different audiences, it can make sense to provide entry points for both of them.
This memo places no constraints on the specific representations used for either of those two categories. It simply allows providers of a Web service to make the documentation and/or the description of their services discoverable, and defines two link relations that serve that purpose.
In addition, this memo defines a link relation that allows providers of a Web service to link to a resource that represents status information about the service. This information often represents operational information that allows service consumers to retrieve information about “service health” and related issues.
- Terminology
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 [RFC2119].
- Web Services
“Web Services” or “Web APIs” (sometimes also referred to as “HTTP API” or “REST API”) are a way to expose information and services on the Web. Following the principles of Web architecture, they expose URI-identified resources, which are then accessed and transferred using a specific representation. Many services use representations that contain links, and often these links are typed links.
Using typed links, resources can identify relationship types to other resources. RFC 5988 [RFC5988] establishes a framework of registered link relation types, which are identified by simple strings and registered in an IANA registry. Any resource that supports typed links according to RFC 5988 can then use these identifiers to represent resource relationships on the Web without having to re-invent registered relation types.
In recent years, Web services as well as their documentation and description languages have gained popularity, due to the general popularity of the Web as a platform for providing information and services. However, the design of documentation and description languages varies with a number of factors, such as the general application domain, the preferred application data model, and the preferred approach for exposing services.
This specification allows service providers to use a unified way to link to service documentation and/or description. This link should not make any assumptions about the provided type of documentation and/or description, so that service providers can choose the ones that best fit their services and needs.
3.1. Documenting Web Services
In the context of this specification, “documentation” refers to information that is primarily intended for human consumption. Typical representations for this kind of documentation are HTML and PDF. Documentation is often structured, but the exact kind of structure depends on the structure of the service that is documented, as well as on the specific way in which the documentation authors choose to document it.
3.2. Describing Web Services
In the context of this specification, “description” refers to information that is primarily intended for machine consumption. Typical representations for this are dictated by the technology underlying the service itself, which means that in today’s technology landscape, description formats exist that are based on XML, JSON, RDF, and a variety of other structured data models. Also, in each of those technologies, there may be a variety of languages that are defined to achieve the same general purpose of describing a Web service.
Descriptions are always structured, but the structuring principles depend on the nature of the described service. For example, one of the earlier service description approaches, the Web Services Description Language (WSDL), uses “operations” as its core concept, which are essentially identical to function calls, because the underlying model is based on that of the Remote Procedure Call (RPC) model. Other description languages for non-RPC approaches to services will use different structuring approaches.
3.3. Unified Documentation/Description
If service providers use an approach where there is no distinction between service documentation (Section 3.1) and service description (Section 3.2), then they may not feel the need to use two separate links. In such a case, an alternative approach is to use the “service” link relation type, which has no indication of whether it links to documentation or description, and thus may be a better fit if no such differentiation is required.
- Link Relations for Web Services
In order to allow Web services to represent the relation of individual resources to service documentation or description, this specification introduces and registers two new link relation types.
4.1. The service-doc Link Relation Type
The “service-doc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are documented at a specific URI. The target resource is expected to provide documentation that is primarily intended for human consumption.
4.2. The service-desc Link Relation Type
The “service-desc” link relation type is used to represent the fact that a resource is part of a bigger set of resources that are described at a specific URI. The target resource is expected to provide a service description that is primarily intended for machine consumption. In many cases, it is provided in a representation that is consumed by tools, code libraries, or similar components.
- Web Service Status Resources
Web services providing access to a set of resources often are hosted and operated in an environment for which status information may be available. This information may be as simple as confirming that a service is operational, or may provide additional information about different aspects of a service, and/or a history of status information, possibly listing incidents and their resolution.
The “status” link relation type can be used to link to such a status resource, allowing service consumers to retrieve status information about a Web service’s status. Such a link may not be available from all resources provided by a Web service, but from key resources such as a Web service’s home resource.
This memo does not restrict the representation of a status resource in any way. It may be primarily focused on human or machine consumption, or a combination of both. It may be a simple “traffic light” indicator for service health, or a more sophisticated representation conveying more detailed information such as service subsystems and/or a status history.
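To make those three relations concrete, here is what they might look like as standard HTTP Link headers on an API's home resource. This is my own illustration with made up URLs, not an example from the memo:

```
HTTP/1.1 200 OK
Link: <https://example.com/docs>; rel="service-doc"
Link: <https://example.com/openapi.json>; rel="service-desc"
Link: <https://example.com/status>; rel="status"
```

A client hitting any resource that carries these headers can find the human documentation, the machine readable description, and the operational status without any prior knowledge of the API.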
- IANA Considerations
The link relation types below have been registered by IANA per Section 6.2.1 of RFC 5988 [RFC5988]:
6.1. Link Relation Type: service-doc
Relation Name: service-doc
Description: Linking to service documentation that is primarily intended for human consumption.
Reference: [[ This document ]]
6.2. Link Relation Type: service-desc
Relation Name: service-desc
Description: Linking to service description that is primarily intended for consumption by machines.
Reference: [[ This document ]]
6.3. Link Relation Type: status
Relation Name: status
Description: Linking to a resource that represents the status of a Web service or API.
Reference: [[ This document ]]
Adding Some Of My Own Thoughts Beyond The Specification
This specification provides a more coherent service-doc and service-desc than I think we did with humanURL and our support for multiple API definition formats (swagger, api blueprint, raml) as properties for any API. This specification provides a clear solution for human consumption, as well as one intended for consumption by machines. Another interesting link relation it provides is status, helping articulate the current state of an API.
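For comparison, here is roughly how that same split looks in APIs.json today, where humanURL plays the documentation role and a Swagger property plays the description role. All values here are made up for illustration:

```json
{
  "name": "Example API",
  "apis": [
    {
      "name": "Example API",
      "humanURL": "https://example.com/docs",
      "baseURL": "https://api.example.com",
      "properties": [
        {
          "type": "Swagger",
          "url": "https://example.com/swagger.json"
        }
      ]
    }
  ]
}
```

The service-doc / service-desc link relations formalize this same human versus machine distinction at the level of the web itself, rather than in a separate index file.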
It makes me happy to see this specification pushing forward and formalizing the conversation. I see the evolution of link relations for APIs as an important part of the API discovery and definition conversations in coming years. Processing this specification has helped jumpstart some conversation around APIs.json, as well as other specifications like JSON Home and Pivio.
Thanks for letting me build on your work, Erik! I am looking forward to contributing.
I have been studying JSON Home, trying to understand how it sizes up to APIs.json, and other formats I’m tracking, like Pivio. JSON Home has a number of interesting features, and I thought one of their examples was also interesting, and relevant to my API embeddable research. In this example, JSON Home was describing a widget that was putting an API to use as part of its operation.
Here is the snippet from the JSON Home example, providing all details of how it works:
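Reconstructed along the lines of the widgets example in the JSON Home draft (this is my paraphrase of the spec's example, with illustrative relation URIs and paths, not a verbatim copy):

```json
{
  "resources": {
    "https://example.org/rel/widgets": {
      "href": "/widgets/",
      "hints": {
        "allow": ["GET"],
        "formats": {
          "application/json": {}
        }
      }
    },
    "https://example.org/rel/widget": {
      "hrefTemplate": "/widgets/{widget_id}",
      "hrefVars": {
        "widget_id": "https://example.org/param/widget"
      },
      "hints": {
        "allow": ["GET", "PUT", "DELETE"],
        "formats": {
          "application/json": {}
        }
      }
    }
  }
}
```

Each resource is keyed by a link relation, with hints telling the client which methods and formats it can expect before it ever makes a request.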
JSON Home seems very action oriented. Everything about the format leads you towards taking some sort of API driven action, something that makes a lot of sense when it comes to widgets and other embeddables. I could see JSON Home being used as some sort of definition for button or widget generation and building tooling, providing a machine readable definition for the embeddable tool, and what is possible with the API(s) behind.
I’ve been working towards embeddable directories and API stacks using APIs.json, providing distributed and embeddable tooling that API providers and consumers can publish anywhere. I will be spending more time thinking about how this world of API discovery can overlap with the world of API embeddables, providing not just a directory of buttons, badges, and widgets, but one that describes what is possible when you engage with any embeddable tool. I’m beginning to see JSON Home similar to how I see Postman Collections, something that is closer to runtime, or at least deploy time. Where APIs.json is much more about indexing, search, and discovery–maybe some detail about where the widgets are, or maybe more detail about what embeddable resources are available.
I have finally dedicated some time to learning more about Home Documents for HTTP APIs, or simply JSON Home. I see JSON Home as a nice way to bring together the technical components for an API, very similar to what I’ve been trying to accomplish with APIs.json. One of the biggest differences I see is that I’d say APIs.json was born out of the world of open data and APIs, where JSON Home is born of the web (which actually makes better sense).
I think the JSON Home description captures the specification’s origins very well:
The Web itself offers one way to address these issues, using links [RFC3986] to navigate between states. A link-driven application discovers relevant resources at run time, using a shared vocabulary of link relations [RFC5988] and internet media types [RFC6838] to support a “follow your nose” style of interaction - just as a Web browser does to navigate the Web.
JSON Home provides any potential client with a machine readable set of instructions it can follow, involving one, or many APIs–providing a starting page for APIs which also enables:
- Extensibility - Because new server capabilities can be expressed as link relations, new features can be layered in without introducing a new API version; clients will discover them in the home document.
- Evolvability - Likewise, interfaces can change gradually by introducing a new link relation and/or format while still supporting the old ones.
- Customisation - Home documents can be tailored for the client, allowing different classes of service or different client permissions to be exposed naturally.
- Flexible deployment - Since URLs aren’t baked into documentation, the server can choose what URLs to use for a given service.
JSON Home is a home page specification which uses JSON to provide APIs with a launching point for the interactions they offer, providing a coherent set of links, all wrapped in a single machine readable index. Each JSON Home document begins with a handful of values:
- title - a string value indicating the name of the API
- links - an object value, whose member names are link relation types [RFC5988], and values are URLs [RFC3986].
- author - a suitable URL (e.g., mailto: or https:) for the author(s) of the API
- describedBy - a link to documentation for the API
- license - a link to the legal terms for using the API
Once you have the general details about the JSON Home API index, you can provide a collection of resource objects, possessing links that can be indicated using an href property with a URI value, or template links which use a URI template. Just like a list of links on a home page, but instead of a browser, it can be used in any client, for a variety of different purposes.
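Pulling those values together with a couple of resources, a bare bones JSON Home document might look like this (the names and URLs are illustrative, not from a real API):

```json
{
  "api": {
    "title": "Example API",
    "links": {
      "author": "mailto:api-admin@example.com",
      "describedBy": "https://example.com/api-docs/",
      "license": "https://example.com/api-license.html"
    }
  },
  "resources": {
    "tag:example.com,2018:orders": {
      "href": "/orders/"
    },
    "tag:example.com,2018:order": {
      "hrefTemplate": "/orders/{order_id}",
      "hrefVars": {
        "order_id": "https://example.com/param/order_id"
      }
    }
  }
}
```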
Each of the resources allow for resource hints, which allow clients to obtain relevant information about interacting with a resource beforehand, as a means of optimizing communications, as well as sharing which behaviors will be available for an API. Here are the default hints available for JSON Home:
- allow - Hints the HTTP methods that the current client will be able to use to interact with the resource; equivalent to the Allow HTTP response header.
- formats - Hints the representation types that the resource makes available, using the GET method.
- accept-Patch - Hints the PATCH [RFC5789] request formats accepted by the resource for this client; equivalent to the Accept-Patch HTTP response header.
- acceptPost - Hints the POST request formats accepted by the resource for this client.
- acceptPut - Hints the PUT request formats accepted by the resource for this client.
- acceptRanges - Hints the range-specifiers available to the client for this resource; equivalent to the Accept-Ranges HTTP response header [RFC7233].
- acceptPrefer - Hints the preferences [RFC7240] supported by the resource. Note that, as per that specification, a preference can be ignored by the server.
- docs - Hints the location for human-readable documentation for the relation type of the resource.
- preconditionRequired - Hints that the resource requires state-changing requests (e.g., PUT, PATCH) to include a precondition, as per [RFC7232], to avoid conflicts due to concurrent updates.
- authSchemes - Hints that the resource requires authentication using the HTTP Authentication Framework [RFC7235].
- status - Hints the status of the resource.
These hints provide you with a base set of the most commonly used sets of information, but there is also an HTTP resource hint registration where all hints are registered. Hints can be added, allowing for custom defined hints, providing additional information beforehand about what can be expected from a resource link included as part of a JSON Home index. It is a much more sophisticated approach to describing the behaviors of links than we included in APIs.json, with the formal hint registry being very useful and well-defined.
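Here is roughly what a few of those hints look like attached to a single resource (again, the names and URLs are illustrative):

```json
{
  "resources": {
    "tag:example.com,2018:orders": {
      "href": "/orders/",
      "hints": {
        "allow": ["GET", "POST"],
        "formats": {
          "application/json": {}
        },
        "acceptPost": ["application/json"],
        "docs": "https://example.com/api-docs/orders"
      }
    }
  }
}
```

Before ever making a request, a client knows it can GET and POST to this resource, what formats to expect, and where the human-readable documentation lives.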
I’d say that JSON Home has all the features for defining a single API, or collections of APIs, but really reflects its roots in the web, and possesses a heavy focus on enabling action with each link. While this is part of the linking structure of APIs.json, I feel like the detail and the mandate for action around each resource in a JSON Home index is much stronger. I feel like JSON Home is in the same realm as Postman Collections, but when it comes to API discovery. I always feel like a Postman Collection is more transactional than OpenAPI is by default. There is definitely overlap, but Postman Collections always feel one or two steps closer to some action being taken than OpenAPI does–I am guessing it is because of its client roots, similar to the web roots of JSON Home, and OpenAPI’s roots in documentation.
Ok. Yay! I have Pivio, and now JSON Home both loaded in my brain. I have a feel for what they are trying to accomplish, and have found some interesting layers I hadn’t considered while doing my APIs.json centered API discovery work. Now I can step back, and consider the features of all three of these API discovery formats, establish a rough Venn diagram of their features, and consider how they overlap, and complement each other. I feel like we are moving towards an important time for API discovery, and with the growing number of APIs available we will see more investment in API discovery specifications, as well as services and tooling that help us with API discovery. I’ll keep working to understand what is going on, establish at least a general understanding of each API discovery specification, and report back here about what is happening when I can.
I was learning about the microservices discovery specification Pivio, which is a schema for framing the conversation, but also an uploader, search, and web interface for managing a collection of microservices. I found their use of ElasticSearch as the search engine for their tooling worth thinking about more. When we first launched APIs.json, we created APIs.io as the search engine–providing a custom developed public API search engine. I hadn’t thought of using ElasticSearch as an engine for searching APIs.json treated as a JSON document.
Honestly, I have been relying on the Github API as the search engine for my API discovery. Using it to uncover not just APIs.json, but OpenAPI, API Blueprint, and other API specification formats. This works well for public discovery, but I could see ElasticSearch being a quick and dirty way to launch a private or public search engine for an API catalog, directory, or other type of collection. I will add ElasticSearch, and other platforms I track on as part of my API deployment research, as an API discovery building block, evolving the approaches I’m tracking on.
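As a sketch of what that might look like, once a collection of APIs.json documents is loaded into ElasticSearch as JSON documents, discovery becomes a simple query against their fields (the index and field names here are hypothetical):

```json
{
  "query": {
    "match": {
      "apis.name": "trade"
    }
  }
}
```

POSTed to something like /apis-json/_search, a query like this would surface any indexed APIs.json whose API names mention trade, without any custom search engine development.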
It is easy to think of API discovery as directories like ProgrammableWeb, or marketplaces like Mashape, and public API search engines like APIs.io–someone else’s discovery vehicle, which you are allowed to drive when you need. However, when you begin to consider other types of API discovery search engines, you realize that a collection of API discovery documents like JSON Home, Pivio, and APIs.json can quickly become your own personal API discovery vehicle. I’m going to write a separate piece on how I use Github as my API discovery engine, then I think I’ll step back and look at other approaches to searching JSON or YAML documents to see if I can find any search engines that might be able to be fine tuned specifically for API discovery.
One question I’m regularly getting from my readers is regarding how you can increase the search engine optimization (SEO) for your APIs–yes, API SEO (acronyms rule)! While we should be investing in API discoverability by embracing hypermedia early on, in its absence I feel we should also be indexing our entire API operations with APIs.json, and making sure we describe individual APIs using OpenAPI. The world of web APIs is still very hitched to the web, making SEO very relevant when it comes to API discoverability.
While I was diving deeper into “The API Platform”, a VERY forward leaning API deployment and management solution, I was pleased to see another mention of API SEO using JSON-LD (scroll down on the page). While I wish every API would adopt JSON-LD for their overall design, I feel we are going to have to piece SEO and discoverability together for our sites, as The API Platform demonstrates. They provide a nice example of how you can paste a JSON-LD script into the page of your API documentation, helping amplify some of the meaning and intent behind your API using JSON-LD + Schema.org.
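To give a feel for the approach, here is a rough sketch of the kind of JSON-LD + Schema.org markup you would paste into a documentation page, wrapped in a script tag with type="application/ld+json" (the API name and URLs are made up for illustration):

```json
{
  "@context": "http://schema.org",
  "@type": "WebAPI",
  "name": "Example Books API",
  "description": "An API for searching a catalog of books.",
  "documentation": "https://example.com/api/docs",
  "provider": {
    "@type": "Organization",
    "name": "Example, Inc."
  }
}
```

Search engines that understand Schema.org can then pick up the name, description, and documentation location for the API directly from the page.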
I have been thinking about Schema.org’s relationship to API discovery for some time now, which is something I’m hoping to get more time to invest in further during 2017. I’d like to see Schema.org get more baked into API design, deployment, and documentation, as well as JSON-LD as part of underlying schema. To help build a bridge from where we are at, to where we need to be going, I’m going to explore how I can leverage OpenAPI tags to help autogenerate JSON-LD Schema.org tags as part of API documentation. While I’d love for everyone to just get the benefits of JSON-LD, I’m afraid many folks won’t have the bandwidth, and could use an assist from the API documentation solutions they are already using–making APIs more SEO friendly by default.
If you are starting a new API I recommend playing with “The API Platform”, as you get the benefits of Schema.org, JSON-LD, and MANY other SIGNIFICANT API concepts by default. Out of all of the API frameworks I’ve evaluated as part of my API deployment research, “The API Platform” is by far the most advanced when it comes to leading by example, and enabling healthy API design practices by default–something that will continue to bring benefits across all stops along the life cycle if you put it to work in your operations.
Jerome Louvel from Restlet introduced me to the Open Service Broker API the other day, a project that “allows developers, ISVs, and SaaS vendors a single, simple, and elegant way to deliver services to applications running within cloud-native platforms such as Cloud Foundry, OpenShift, and Kubernetes. The project includes individuals from Fujitsu, Google, IBM, Pivotal, RedHat and SAP.”
Honestly, I only have so much cognitive capacity to understand everything I come across, so I pasted the link into my super secret Slack group for API super heroes to get additional opinions. My friend James Higginbotham (@launchany) quickly responded with, “if I understand correctly, this is a standard that would be equiv to Heroku’s Add-On API? Or am I misunderstanding? The Open Service Broker API is a clean abstraction that allows ‘services’ to expose a catalog of capabilities, as well as the ability to create, use and delete those services. Sounds like add-on support to me, but I could be wrong[…]But seems very much like vendor-to-vendor. Will be interesting to track.”
At first glance, I thought it was more of an aggregation and/or discovery solution, but I think James is right. It is an API scaffolding that SaaS platforms can plug into their platforms to broker other 3rd party API services. It allows any platform to offer an environment for extending your platform like Heroku does, as James points out. It is something that adds an API discovery dimension to the concept of offering up plugins, or I guess what could be an embedded API marketplace within your platform. Opening up wholesale and private label opportunities for API providers to sell their warez directly on other people’s platforms.
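The heart of the specification is the service catalog each broker exposes, so a platform can discover what it offers–a response that looks something like this (the names and identifiers are illustrative):

```json
{
  "services": [
    {
      "id": "a1b2c3d4-0000-0000-0000-000000000000",
      "name": "example-email-service",
      "description": "A hypothetical transactional email service.",
      "bindable": true,
      "plans": [
        {
          "id": "e5f6a7b8-0000-0000-0000-000000000000",
          "name": "free",
          "description": "Up to 100 emails per day."
        }
      ]
    }
  ]
}
```

A platform pulls this catalog (GET /v2/catalog in the specification), then provisions and binds the services it finds there on behalf of its users.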
The concept really isn’t anything new. I remember developing document print plugins for Box back when I worked with the Mimeo print API in 2011. The Open Service Broker API is just looking to standardize this approach so that API providers could bake a set of 3rd party partner APIs directly into their platform. I’ve recently added a plugin area to my API research. I will add the Open Service Broker API as an organization within this research. I’m probably also going to add it to my API discovery research, and I’m even considering expanding it into an API marketplace section of my research. I can see add-on, plugin, marketplace, and API brokering like this growing into its own discipline, with a growing number of definitions, services, and tools to support it.
Here is another patent in my series of API related patents. I’d file this in the same category as the other similar one from IBM–Patent US 8954988: Automated Assessment of Terms of Service in an API Marketplace. It is a good idea. I just don’t feel it is a good patent idea.
- Title: API matchmaking using feature models
- Number: 09454409
- Owner: International Business Machines Corporation
- Abstract: Software that uses machine logic based algorithms to help determine and/or prioritize an application programming interface’s (API) desirability to a user based on how closely the API’s terms of service (ToS) meet the users’ ToS preferences. The software performs the following steps: (i) receiving a set of API ToS feature information that includes identifying information for at least one API and respectively associated ToS features for each identified API; (ii) receiving ToS preference information that relates to ToS related preferences for a user; and (iii) evaluating a strength of a match between each respective API identified in the API ToS feature information set and the ToS preference information to yield a match value for each API identified in the API ToS feature information set. The ToS features include at least a first ToS field. At least one API includes multiple, alternative values in its first ToS field.
Honestly, I don’t have a problem with a company turning something like this into a feature, and even charging for it. I just wish IBM would help us solve the problem of making terms of service machine readable, so something like this is even possible. Could you imagine what would be possible if everybody’s terms of service were machine readable, and could be programmatically evaluated? We’d all be better off, and matchmaking services like this would become a viable service.
I just wish more of the energy I see go into these patents would be spent actually doing things in the API space. Providing low cost, innovative API services that businesses can use, instead of locking up ideas, filing them away with the government, so that they can be used at a later date in litigation and backdoor dealings.
I’ve been watching the conversation around how APIs are discovered since 2010, and I have been working to understand where things might be going beyond ProgrammableWeb, to the Mashape Marketplace, and even investing in my own API discovery format, APIs.json. It is a layer of the API space that feels very bipolar to me, with highs and lows, and a lot of meh in the middle. I do not claim to have “the solution” when it comes to API discovery and prefer just watching what is happening, and contributing where I can.
A number of interesting signals for API deployment, as well as API discovery, are coming out of Amazon Marketplace lately. I find myself keeping a closer eye on the almost 350 API related solutions in the marketplace, and today I’m specifically taking notice of the Box API availability in the AWS Marketplace. I find this marketplace approach to not just API discovery via an API marketplace, but also API deployment very interesting. AWS isn’t just a marketplace of APIs, where you find what you need and integrate directly with that provider. It is where you find your API(s) and then spin up an instance within your AWS infrastructure that facilitates that API integration–a significant shift.
I’m interested in the coupling between API providers and AWS. AWS and Box have entered into a partnership, but their approach provides a possible blueprint for how this approach to API integration and deployment can scale. How tightly coupled each API provider chooses to be, looser (proxy calling the API), or tighter (deploying the API as an AMI), will vary from implementation to implementation, but the model is there. The Box AWS Marketplace instance’s dependencies on the Box platform aren’t evident to me, but I’m sure they can easily be quantified, and they are something I can get other API providers to articulate when publishing their API solutions to the AWS Marketplace.
AWS is moving towards earlier visions I’ve had of selling wholesale editions of an API, helping you manage the on-premise and private label API contracts for your platform, and helping you explore the economics of providing wholesale editions of your platforms, either tightly or loosely coupled with AWS infrastructure. Decompiling your API platform into small deployable units of value that can be deployed within a customer’s existing AWS infrastructure, seamlessly integrating with existing AWS services.
I like where Box is going with their AWS partnership. I like how it is pushing forward the API conversation when it comes to using AWS infrastructure, and specifically the marketplace. I’ll keep an eye on where things are going. Box seems to be making all the right moves lately by going all in on the OpenAPI Spec, and decompiling their API platform making it deployable and manageable from the cloud, but also much more modular and usable in a serverless way. Providing us all with one possible blueprint for how we handle the technology and business of our API operations in the clouds.
There are a growing number of API providers who have published an APIs.json for their API operations, providing a machine-readable index of not just their APIs, but their entire API operations. My favorite example to use in my talks and conversations when I’m showcasing the API discovery format is the one for the International Trade Administration at developer.trade.gov.
The International Trade Administration (ITA) is the government agency that “strengthens the competitiveness of U.S. industry, promotes trade and investment, and ensures fair trade through the rigorous enforcement of our trade laws and agreements”. Their APIs.json provides an index of where you can find their developer portal, documentation, terms of service, as well as a machine readable OpenAPI for their trade APIs.
I couldn’t think of a more shining example of APIs when it comes to talking about the API economy. I am pleased to have helped influence their API efforts, helping them see the importance of providing a machine readable index of their API operations with APIs.json, as well as their APIs using OpenAPI. If you need a well maintained, and meaningful example of how APIs.json works, head over to developer.trade.gov and take a look.
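To give a feel for the format, here is a simplified sketch of the kind of index you will find there (the API entry and property URLs are illustrative, not copied from their live index):

```json
{
  "name": "International Trade Administration",
  "description": "APIs from the International Trade Administration (ITA).",
  "url": "https://developer.trade.gov/apis.json",
  "apis": [
    {
      "name": "Trade APIs",
      "humanURL": "https://developer.trade.gov",
      "properties": [
        { "type": "x-documentation", "url": "https://developer.trade.gov/docs" },
        { "type": "x-terms-of-service-page", "url": "https://developer.trade.gov/terms" },
        { "type": "x-openapi-spec", "url": "https://developer.trade.gov/openapi.json" }
      ]
    }
  ]
}
```

Everything a consumer needs–portal, documentation, terms of service, and the OpenAPI itself–hangs off one machine readable document.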
I keep an eye on several thousand companies as part of my research into the API space and publish over a thousand of these profiles in my API Stack project. Across the over 1,100 companies, organizations, institutions, and government agencies, I'm regularly running into a growing number of signals that tune me into what is going on with each API provider, or service provider.
Here are the almost 100 types of signals I am tuning into as I keep an eye on the world of APIs, each contributing to my unique awareness of what is going on with everything API.
- Account Settings (x-account-settings) - Does an API provider allow me to manage the settings for my account?
- Android SDK (x-android-sdk) - Is there an Android SDK present?
- Angular (x-angularjs) - Is there an Angular SDK present?
- API Explorer (x-api-explorer) - Does a provider have an interactive API explorer?
- Application Gallery (x-application-gallery) - Is there a gallery of applications built on an API available?
- Application Manager (x-application-manager) - Does the platform allow me to manage my applications?
- Authentication Overview (x-authentication-overview) - Is there a page dedicated to educating users about authentication?
- Base URL for API (x-base-url-for-api) - What is the base URL(s) for the API?
- Base URL for Portal (x-base-url-for-portal) - What is the base URL for the developer portal?
- Best Practices (x-best-practices) - Is there a page outlining best practices for integrating with an API?
- Billing history (x-billing-history) - As a developer, can I get at the billing history for my API consumption?
- Blog (x-blog) - Does the API have a blog, either at the company level, but preferably at the API and developer level as well?
- Blog RSS Feed (x-blog-rss-feed) - Is there an RSS feed for the blog?
- Branding page (x-branding-page) - Is there a dedicated branding page as part of API operations?
- Buttons (x-buttons) - Are there any embeddable buttons available as part of API operations?
- C# SDK (x-c-sharp) - Is there a C# SDK present?
- Case Studies (x-case-studies) - Are there case studies available, showcasing implementations on top of an API?
- Change Log (x-change-log) - Does a platform provide a change log?
- Chrome Extension (x-chrome-extension) - Does a platform offer up open-source or white label chrome extensions?
- Code builder (x-code-builder) - Is there some sort of code generator or builder as part of platform operations?
- Code page (x-code-page) - Is there a dedicated code page for all the samples, libraries, and SDKs?
- Command Line Interface (x-command-line-interface) - Is there a command line interface (CLI) alongside the API?
- Community Supported Libraries (x-community-supported-libraries) - Is there a page or section dedicated to code that is developed by the API and developer community?
- Compliance (x-compliance) - Is there a section dedicated to industry compliance?
- Contact form (x-contact-form) - Is there a contact form for getting in touch?
- Crunchbase (x-crunchbase) - Is there a Crunchbase profile for an API or its company?
- Dedicated plans pricing page (x-dedicated-plans--pricing-page) - Is there a dedicated plans and pricing page?
- Deprecation policy (x-deprecation-policy) - Is there a page dedicated to deprecation of APIs?
- Developer Showcase (x--developer-showcase) - Is there a page that showcases API developers?
- Documentation (x-documentation) - Where is the documentation for an API?
- Drupal (x-drupal) - Is there Drupal code, SDK, or modules available for an API?
- Email (x-email) - Is an email address available for a platform?
- Embeddable page (x-embeddable-page) - Is there a page of embeddable tools available for a platform?
- Error response codes (x-error-response-codes) - Is there a listing or page dedicated to API error responses?
- Events (x-events) - Is there a calendar of events related to platform operations?
- Facebook (x-facebook) - Is there a Facebook page available for an API?
- Faq (x-faq) - Is there an FAQ section available for the platform?
- Forum (x-forum) - Does a provider have a forum for support and asynchronous conversations?
- Forum rss (x-forum-rss) - If there is a forum, does it have an RSS feed?
- Getting started (x-getting-started) - Is there a getting started page for an API?
- Github (x-github) - Does a provider have a Github account for the API or company?
- Glossary (x-glossary) - Is there a glossary of terms available for a platform?
- Heroku (x-heroku) - Are there Heroku SDKs, or deployment solutions?
- How-To Guides (x-howto-guides) - Does a provider offer how-to guides as part of operations?
- Interactive documentation (x-interactive-documentation) - Is there interactive documentation available as part of operations?
- iOS SDK (x-ios-sdk) - Is there an iOS SDK for Objective-C or Swift?
- Issues (x-issues) - Is there an issue management page or repo for the platform?
- Java SDK (x-java) - Is there a Java SDK for the platform?
- Joomla (x-joomla) - Is there a Joomla plugin for the platform?
- Knowledgebase (x-knowledgebase) - Is there a knowledgebase for the platform?
- Labs (x-labs) - Is there a labs environment for the API platform?
- Licensing (x-licensing) - Is there licensing for the API, schema, and code involved?
- Message Center (x-message-center) - Is there a messaging center available for developers?
- Mobile Overview (x-mobile-overview) - Is there a section or page dedicated to mobile applications?
- Node.js (x-nodejs) - Is there a Node.js SDK available for the API?
- OAuth Scopes (x-oauth-scopes) - Does a provider offer details on the available OAuth scopes?
- OpenAPI Spec (x-openapi-spec) - Is there an OpenAPI available for the API?
- Overview (x-overview) - Does a platform have a simple, concise description of what they do?
- Paid support plans (x-paid-support-plans) - Are there paid support plans available for a platform?
- Postman Collections (x-postman) - Are there any Postman Collections available?
- Partner (x-partner) - Is there a partner program available as part of API operations?
- Phone (x-phone) - Does a provider publish a phone number?
- PHP SDK (x-php) - Is there a PHP SDK available for an API?
- PubSub (x-pubsubhubbub) - Does a platform provide a PubSub feed?
- Python SDK (x-python) - Is there a Python SDK for an API?
- Rate Limiting (x-rate-limiting) - Does a platform provide information on API rate limiting?
- Real Time Solutions (x-real-time-page) - Are there real-time solutions available as part of the platform?
- Road Map (x-road-map) - Does a provider share their roadmap publicly?
- Ruby SDK (x-ruby) - Is there a Ruby SDK available for the API?
- Sandbox (x-sandbox) - Is there a sandbox for the platform?
- Security (x-security) - Does a platform provide an overview of security practices?
- Self-Service registration (x-self-service-registration) - Does a platform allow for self-service registration?
- Service Level Agreement (x-service-level-agreement) - Is an SLA available as part of platform integration?
- Slideshare (x-slideshare) - Does a provider publish talks on Slideshare?
- Stack Overflow (x-stack-overflow) - Does a provider actively use Stack Overflow as part of platform operations?
- Starter Projects (x-starter-projects) - Are there starter projects available as part of platform operations?
- Status Dashboard (x-status-dashboard) - Is there a status dashboard available as part of API operations?
- Status History (x-status-history) - Can you get at the history involved with API operations?
- Status RSS (x-status-rss) - Is there an RSS feed available as part of the platform status dashboard?
- Support Page (x-support-overview-page) - Is there a page or section dedicated to support?
- Terms of Service (x-terms-of-service-page) - Is there a terms of service page?
- Ticket System (x-ticket-system) - Does a platform offer a ticketing system for support?
- Tour (x-tour) - Is a tour available to walk a developer through platforms operations?
- Trademarks (x-trademarks) - Is there details about trademarks, and how to use them?
- Twitter (x-twitter) - Does a platform have a Twitter account dedicated to the API or even company?
- Videos (x-videos) - Is there a page, YouTube, or other account dedicated to videos about the API?
- Webhooks (x-webhook) - Are there webhooks available for an API?
- Webinars (x-webinars) - Does an API conduct webinars to support operations?
- White papers (x-white-papers) - Does a platform provide white papers as part of operations?
- Widgets (x-widgets) - Are there widgets available for use as part of integration?
- Wordpress (x-wordpress) - Are there WordPress plugins or code available?
There are hundreds of other building blocks I track on as part of API operations, but this list represents the most common, that often have dedicated URLs available for exploring, and have the most significant impact on API integrations. You'll notice there is an x- representation for each one, which I use as part of APIs.json indexes for all the APIs I track on. Some of these signal types are machine readable like OpenAPIs or a Blog RSS, with others machine readable because there is another API behind them, like Twitter or Github, but most of them are just static pages, where a human (me) can visit and stay in tune with the signals.
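Inside an APIs.json index, these show up as property objects on each API entry–here is a small sample of how I record them (the URLs are illustrative):

```json
"properties": [
  { "type": "x-documentation", "url": "https://example.com/docs" },
  { "type": "x-blog-rss-feed", "url": "https://example.com/blog/rss" },
  { "type": "x-openapi-spec", "url": "https://example.com/openapi.json" },
  { "type": "x-twitter", "url": "https://twitter.com/exampleapi" }
]
```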
I have two primary objectives with this work: 1) identify the important signals that impact integration, and will keep me and my readers in tune with what is going on, and 2) identify the common channels, and help move the more important ones to be machine-readable, allowing us to scale the monitoring of important signals like pricing and terms of service. My API Stack research provides me with a nice listing of APIs, as well as more individualized stacks like Microsoft, Google, and Facebook, or even industry stacks like SMS, Email, and News. It also provides me with a wealth of signals we can tune into to better understand the scope and health of the API sector, and any individual business vertical that is being touched by APIs.
I am working to update my OpenAPI definitions for AWS, Google, and Microsoft using some other OpenAPIs I've discovered on Github. When a new OpenAPI has entirely new paths available, I just insert them, but when it has an existing path I have to think more critically about what is next. Sometimes I dismiss the metadata about the API path as incomplete or lower quality than what I already have. Other times the content is actually superior to mine, and I incorporate it into my work. Now I'm also finding that in some cases I want to keep my representation, as well as the one I discovered, side by side--both having value.
This is one reason I'm not 100% sold on the idea that just API providers should be crafting their own OpenAPIs--sure, the API space would be waaaaaay better if ALL API providers had machine readable OpenAPIs for all their services, but I wouldn't want it to end there. You see, API providers are good (sometimes) at defining what their API does, but they often suck at telling you what is possible--which is why they are doing APIs. I have a lot of people who push back on me creating OpenAPIs for popular APIs, telling me that API providers should be the ones doing the hard work, otherwise it doesn't matter. I'm just not sold that this is the case, and there is an opportunity for evolving the definition of an API by external entities using OpenAPI.
To help me explore this idea, and push the boundaries of how I use OpenAPI in my API storytelling, I wanted to frame this in the context of the Amazon EC2 API, which allows me to deploy a single unit of compute into the cloud using an API, a pretty fundamental component of our digital worlds. To make any call against the Amazon EC2 I send all my calls to a single base URL:
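That single endpoint is the EC2 Query API host:

```
https://ec2.amazonaws.com/
```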
With this API call I pass in the "action" I'd like to be taken:
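The action rides along as a query parameter--for example, launching a new instance:

```
?Action=RunInstances
```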
Along with this base action parameter, I pass in a handful of other parameters to further define things:
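Putting it together, a minimal RunInstances call looks something like this, with parameters like ImageId, MinCount, and MaxCount rounding out the request (the AMI ID here is a placeholder):

```
https://ec2.amazonaws.com/?Action=RunInstances
  &ImageId=ami-1234abcd
  &MaxCount=1
  &MinCount=1
```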
Amazon has never been known for superior API design, but it gets the job done. With this single API call I can launch a server in the clouds. When I was first able to do this with APIs was when the light really went on in my head regarding the potential of APIs. However, back to my story on expressing what an API does, as well as what is possible, using OpenAPI. AWS has done an OK job at expressing what the Amazon EC2 API does, however they suck at expressing what is possible. This is where API consumers like me step up with OpenAPI and provide some alternative representations of what is possible with this highly valuable API.
When I define the Amazon EC2 API using the OpenAPI specification I use the following:
```yaml
swagger: '2.0'
info:
  title: Amazon EC2
host: ec2.amazonaws.com
paths:
  /:
    get:
      summary: The Amazon EC2 service
      parameters:
        - in: query
          name: Action
          required: true
          type: string
```
The AWS API design pattern doesn't lend itself to reuse when it comes to documentation and storytelling, but I'm always looking for an opportunity to push the boundaries, and I'm able to better outline all available actions as individual API paths by appending the action parameter to the path:
```yaml
info:
  title: Amazon EC2
paths:
  /?Action=RunInstances:
    get:
      summary: Run a new Amazon EC2 instance
```
Now I'm able to describe all 228 actions you can take with the single Amazon EC2 API path as separate paths in any OpenAPI generated API documentation and tooling. I can give them unique summaries, descriptions, and operationIds. OpenAPI allows me to describe what is possible with an API, going well beyond what the API provider was able to define. I've been using this approach to better quantify the surface area of APIs like Amazon, Flickr, and others who use this pattern for a while now, but as I was looking to update my work, I wanted to take this concept even further.
While appending query parameters to the path definition has allowed me to expand how I describe the surface area of an API using OpenAPI, I'd rather keep these parameters defined properly using the OpenAPI specification, and define an alternative way to make the path unique. To do this, I am exploring the usage of #bookmarks, to help make duplicate API paths unique in the eyes of the schema validators, but invisible to the server side of things--something like this:
```yaml
info:
  title: Amazon EC2
paths:
  /#RunInstances:
    get:
      summary: Run a new Amazon EC2 instance
      parameters:
        - in: query
          name: Action
          required: true
          type: string
          default: RunInstances
```
I am considering how we can further make the path unique, by predefining other parameters using default or enum:
```yaml
info:
  title: Amazon EC2
paths:
  /#RunWebSiteInstance:
    get:
      summary: Run a new Amazon EC2 website instance
      description: The ability to launch a new website running on its own
        Amazon EC2 instance, from a predefined AWS AMI.
      parameters:
        - in: query
          name: Action
          required: true
          type: string
          default: RunInstances
        - in: query
          name: ImageId
          type: string
          enum:
            - ami-xxxxxxxx
```
I am still drawing within the lines of what the API provider has given me, but I'm now augmenting with a better summary and description of what is possible, which can be reflected in documentation and other tooling that is OpenAPI compliant. I can even prepopulate the default values, or constrain the available options using enum settings, tailoring to my team, company, or other specific needs. Taking an existing API definition beyond its provider's interpretation of what it does, and getting to work on being more creative around what is possible.
Let me know how incoherent this is. I can't tell sometimes. Maybe I need more examples of this in action. I feel like it might be a big piece of the puzzle that has been missing for me regarding how we tell stories about what is possible with APIs. When it comes to API definitions, documentation, and discovery I feel like we are chained to a provider's definition of what is possible, when in reality this shouldn't be what drives the conversation. There should be definitions, documentation, and discovery documents created by API providers that help articulate what an API does, but more importantly, there should be a wealth of definitions, documentation, and discovery documents created by API consumers that help articulate what is possible.
I was following the discussion around adding a WebAPI class to Schema.org's core vocabulary, and it got me thinking more about the role Schema.org has to play in not just our API definitions, but also in significantly influencing API discovery. Meaning that we should be using Schema.org as part of our OpenAPI definitions, providing us with a common vocabulary for communicating around our APIs, but also empowering the discovery of APIs.
When I describe the relationship between Schema.org and API discovery, I'm talking about using the pending WebAPI class, but I'm also talking about using the common Schema.org vocabulary within API definitions--something that will open the definitions up to discovery because they employ a common schema. I am also talking about how we can leverage this vocabulary in our HTML pages, helping search engines like Google understand there is an API service available.
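A sketch of what this might look like embedded in an HTML page--the name and URLs are placeholders, and the WebAPI class is still a pending addition at Schema.org:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebAPI",
  "name": "Example API",
  "description": "An API providing programmatic access to example data.",
  "documentation": "https://example.com/api/docs",
  "termsOfService": "https://example.com/api/terms",
  "provider": {
    "@type": "Organization",
    "name": "Example, Inc."
  }
}
</script>
```

Because WebAPI extends the Service class, properties like provider and termsOfService come along for free, while documentation is part of the pending WebAPI proposal itself.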
I will also be exploring how I can better leverage Schema.org in my APIs.json format, applying a common vocabulary to describing API operations, not just an individual API. I'm looking to expand the opportunities for discovery, not limit them. I would love all APIs to take a page from the hypermedia playbook, and have a machine readable index for each API, with a set of links present in each response, but I also want folks to learn about APIs through Google, ensuring they are indexed in a way that search engines can comprehend.
When it comes to API discovery I am primarily invested in APIs.json (because it's my baby) for describing API operations, and OpenAPI for describing the surface area of an API, but I also want this to map to the very SEO driven world we operate in right now. I will keep investing time in helping folks use Schema.org in their API definitions (APIs.json & OpenAPI), but I will also start investing in folks employing JSON-LD and Schema.org as part of their search engine strategies (like above), making our APIs more discoverable to humans as well as other systems.
I was playing around with the new Github topics, and found that it provides an interesting look at the API space, one that I'm hoping will continue to evolve, and maybe I can influence.
I typed 'api-' into Github's topic tagging tool for my repository, and after I tagged each of my research areas with appropriate tags, I set out exploring these layers of Github by clicking on each tag. It is something that became quite a wormhole of API exploration.
I had to put it down, as I could spend hours looking through the repositories, but I wanted to create a machine-readable mapping to my existing API research areas that I could use to regularly keep an eye on these slices of the Github pie--in an automated way.
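A minimal sketch of what that machine-readable mapping might look like in YAML--the structure here is just my own illustration, not a finished schema:

```yaml
# Mapping of my API research areas to the Github topics I monitor
research:
  - area: definitions
    topics:
      - api-definition
      - openapi
      - postman-collection
  - area: design
    topics:
      - api-design
  - area: deployment
    topics:
      - api-deployment
      - api-gateway
```

With a file like this checked into a repository, a scheduled script could walk each topic via the Github API and surface new repositories as they get tagged.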
Definitions - These are the topics I'm adding to my monitoring of the API space when it comes to API definitions. I thought it was interesting how folks are using Github to manage their API definitions.
- api-definition (search via Github topics)
- api-description (search via Github topics)
- api-specs (search via Github topics)
- api-blueprint (search via Github topics)
- api-transformer (search via Github topics)
- openapi (search via Github topics)
- openapi-specification (search via Github topics)
- openapi-spec (search via Github topics)
- openapi-validation (search via Github topics)
- openapi-documentation (search via Github topics)
- openapi-sampler (search via Github topics)
- postman-apps (search via Github topics)
- postman-collection (search via Github topics)
- api-json (search via Github topics)
- api-linter (search via Github topics)
I like how OpenAPI is starting to branch out into separate areas, as well as how this area touches on almost every other area listed here. I am going to work to help shape the tags present based on the definitions, templates, and tooling I find on Github in my work.
Design - There was only one API design related item, but it is something I expect to expand rapidly as I dive into this area further.
- api-design (search via Github topics)
I know of a number of projects that should be tagged and added to the area of API design, as well as have a number of sub-areas I'd like to see included as relevant API design tags.
Deployment - Deployment was a little tougher to get a handle on. There are many different ways to deploy an API, but these are the ones I've identified so far.
- api-deployment (search via Github topics)
- api-server (search via Github topics)
- api-generator (search via Github topics)
- api-middleware (search via Github topics)
- api-gateway (search via Github topics)
- api-proxy (search via Github topics)
- api-gateways (search via Github topics)
I know I will be adding other topics to this area quickly, tracking database, containerized, and serverless approaches to API deployment.
Management - There were two topics that jumped out to me for inclusion in my API management research.
As with all the other areas, I will be harassing some of the common API management providers I know to tag their repositories appropriately, so that they show up in these searches.
Documentation - There are always a number of different perspectives on what constitutes API documentation, but these are a few of those I've found so far.
- api-console (search via Github topics)
- api-documentation (search via Github topics)
- api-error (search via Github topics)
I think that API console overlaps with API clients, but it works here as well. I will work to find a way to separate out the documentation tools from the documentation implementations.
SDK - It is hard to pin down exactly what constitutes an SDK. It is a sector of the space where I've seen renewed innovation, as well as a bending of the definition of what a development kit is.
- api-sdk (search via Github topics)
- api-wrapper (search via Github topics)
- api-client (search via Github topics)
- api-bindings (search via Github topics)
I will be looking to identify language-specific variations as part of this mapping of the API SDKs available on Github, making them discoverable through topic searches.
API Portal - It was good to see wicked.haufe.io as part of an API portal topic search. I know of a couple of other implementations that should be present, helping people see this growing area of API deployment and management.
- api-portal (search via Github topics)
This approach to providing Github driven API templates is the future of both the technical and business side of API operations. It is the seed for continuous integration across all stops along the API lifecycle.
API Discovery - Currently it is just my research showing up in the API discovery topic search, but it is where I'm putting this area of my work down. I was going to add all my research areas, but I think that will make for a good story in the future.
API discovery is one of the areas I'm looking to stimulate with this Github topics work. I'm going to be publishing separate repositories for each of the APIs I've profiled as part of my monitoring of the API space, and highlighting those providers who do it as well. We need more API providers to publish their API definitions to Github, making them available to be applied at every other stop along the API lifecycle.
I've long used Github as a discovery tool. Tracking the Github accounts of companies, organizations, institutions, agencies, and individuals is the best way to find the meaningful things going on with APIs. Github topics just add another dimension to this discovery process, where I don't always have to do the discovery myself--other people can tag their repositories, and they'll float up on the radar. Github repo activity, stars, and forks add yet another dimension to this conversation.
I will have to figure out how to harass people I know about properly tagging their repos. I may even submit a Github issue for some of the ones I think are important enough. Maybe Github will allow users to tag other people's projects, adding another dimension to the conversation, while giving consumers a voice as well. I will update the YAML mapping for this project as I find new Github topics that should be mapped to my existing API research.
I tune into a number of different channels looking for signs of individuals, companies, organizations, institutions, and government agencies doing APIs. I find APIs using Google Alerts, monitoring Twitter and Github, using press releases and via patent filings. Another way I am learning to discover APIs is via alerts and notifications about security events.
An example of this can be found via the Industrial Control Systems Cyber Emergency Response Team out of the U.S. Department of Homeland Security (@icscert), with the recently issued advisory ICSA-16-287-01, OSIsoft PI Web API 2015 R2 Service Acct Permissions Vulnerability, posted to the ICS-CERT website, leading me to the OSIsoft website. They aren't very forthcoming with their API operations, but this is something I am used to, and in my experience, companies who aren't very public with their operations tend to also cultivate an environment where security issues go unnoticed.
I am looking to aggregate API related security events and vulnerabilities like the feed coming out of Homeland Security. This information needs to be shared more often, opening up further discussion around API security issues, and even possibly providing an API for sharing real-time updates and news. I wish more companies, organizations, institutions, and government agencies would be more public with their API operations and be more honest about the dangers of providing access to data, content, and algorithms via HTTP, but until this is the norm, I'll continue using API related security alerts and notifications to find new APIs operating online.
As part of a renewed focus on the API discovery definition format APIs.json, I wanted to revisit the proposed machine readable API discovery specification, and see what is going on. First, what is APIs.json? It is a machine readable JSON specification that anyone can use to define their API operations. APIs.json does not describe your APIs like OpenAPI Spec and API Blueprint do, it describes your surrounding API operations, with entries that can reference your OpenAPI Spec, API Blueprint, or any other format that you desire.
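A bare bones example of what an APIs.json index might look like--the names and URLs here are placeholders:

```json
{
  "name": "Example API Operations",
  "description": "The index for API operations at example.com.",
  "url": "https://example.com/apis.json",
  "specificationVersion": "0.15",
  "apis": [
    {
      "name": "Example API",
      "humanURL": "https://developer.example.com",
      "baseURL": "https://api.example.com",
      "properties": [
        {
          "type": "Swagger",
          "url": "https://example.com/openapi.json"
        }
      ]
    }
  ]
}
```

Each entry in the apis collection points off to the human and machine readable elements of an API, with the properties collection doing the referencing of OpenAPI Spec, API Blueprint, or whatever format you choose.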
APIs.json Is An Index For API Operations
APIs.json provides a machine readable way for API providers to describe their API operations, similar to how website providers describe their sites using sitemap.xml, and a growing number of API providers are already publishing indexes this way.
APIs.json Indexes Can Be Created By 3rd Parties
One important thing to add is that these APIs.json files can also be crafted and published by external parties. An example of this is with the Trade.gov APIs. I originally created that APIs.json file, and coordinated with them to eventually get it published under their own domain, making it an authoritative APIs.json file. Many APIs.json files will be born outside of the API operations they describe, something you can see in my API Stack project:
- The API Stack - Provides almost 1000 APIs.json files that describe the API operations of many leading public API platforms. There are also around 300 OpenAPI specifications for some of the platforms described.
APIs.json Can Be Used To Describe API Collections
Beyond describing a single API within a single domain, APIs.json can also be used to describe entire collections of APIs, providing a machine readable way to organize and share valuable collections of API resources. Here are a few examples of projects that are producing APIs.json driven collections.
- Defining APIs that you depend on for organizational operation.
- Defining a specific category of API operations, using the format.
- SMS - http://sms.stack.network/
- MMS - http://mms.stack.network/
- Email - http://email.stack.network/
- News - http://news.stack.network/
APIs.json Can Be Used To Describe Collections of Collections
Taking things up another rung in the chain, APIs.json can also provide a collection of collections, something I do with my own APIs. Each Github organization on my network has a master APIs.json, providing include links to all other APIs.json files within the organization. In this scenario I have over 30 other APIs.json files indexed, which can all operate independently of each other, but can also be considered a collection of API collections.
- Master - A master collection of API collections I maintain as part of the API Evangelist network operations.
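The include property is what makes this possible--a master index simply references other APIs.json files, something like this (the master URL is a placeholder, while the collection URLs come from the stack.network examples above):

```json
{
  "name": "Master Collection",
  "url": "https://example.com/apis.json",
  "specificationVersion": "0.15",
  "include": [
    {
      "name": "SMS Collection",
      "url": "http://sms.stack.network/apis.json"
    },
    {
      "name": "Email Collection",
      "url": "http://email.stack.network/apis.json"
    }
  ]
}
```

Because each included file is itself a complete APIs.json, every collection can operate on its own, while still being crawlable from the master index.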
The First Open Source Tooling For APIs.json
Up until now, this post has been all about APIs.json, when in reality the format is useless without tooling built on top of the specification, bringing value to the table. This is why the 3Scale team got to work building an open source APIs.json driven search engine:
- APIs.io as an open source tool dedicated to APIs.json
- APIs.io as a public API search engine, with APIs.json as index.
- APIs.io as a private API search engine, with APIs.json as index.
APIs.json Driving Other Open Tooling
APIs.io is just the beginning. It won't be enough to convince all API providers that they should be producing an APIs.json index of their site operations, just for the API discovery boost. We are going to need APIs.json driven tooling that will service every other stop along the life cycle, including:
- HTTP Client / Hub / Workbenches
APIs.json Integrated Into Existing Platforms
What areas would you like to see served? Personally, I would like to have the ability to load / unload my APIs.json collections into any service that I use, allowing me to organize the internal, public, and 3rd party APIs I depend on within any platform out there that is servicing the API space. Here are a handful of those types of integrations that are already happening:
- WarewolfESB - ESB integration and API discovery.
- SwaggerHub - Public and private API hub discovery.
- API Management - In Progress w/ 3Scale...
- API Monitoring - In Progress with API Science...
- API Change Log - In Progress with API ChangeLog...
- SmartBear - API discovery for monitoring, testing, virtualization, and security.
- API Evangelist - API analyst operations.
- Kin Lane - API factory operations (not organic)
- Adopta.Agency - Government open data publishing.
APIs.json Linking To The Human Aspects Of API Operations
APIs.json is just the scaffolding on which to hang links to essential aspects of your operations--it doesn't care what you link to. You can start by referencing essential links for your API operations like:
- Signup - How to signup for a service.
- Support - Where to get support.
- Terms of Service - Where are the terms of service.
- Pricing - Where to find the pricing for a service.
APIs.json Linking to Machine Readable Aspects of API Operations
These do not have to be machine readable links--they can reference the important things humans will need first. However, ultimately the goal is to make as much of the APIs.json index as machine readable as possible, using a variety of existing API definition formats, available for a variety of purposes.
- OpenAPI Spec, for API description.
- API Blueprint, for API description.
- API Common, for API licensing.
- Postman, for run-time.
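Within an APIs.json index, these human and machine readable links all live side by side in the properties collection of each API entry--something like the fragment below, where the URLs are placeholders and the type labels follow the spec's open ended convention:

```json
"properties": [
  { "type": "Signup", "url": "https://example.com/signup" },
  { "type": "Pricing", "url": "https://example.com/pricing" },
  { "type": "TermsOfService", "url": "https://example.com/terms" },
  { "type": "Swagger", "url": "https://example.com/openapi.json" },
  { "type": "Postman", "url": "https://example.com/postman-collection.json" }
]
```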
Defining New, Machine Readable Property Elements For APIs.json
While the APIs.json spec will evolve, something I talk about below, its real strength lies in its ability to incentivize the development of entirely new, machine readable API definitions, bringing even more value to the API discovery process. Here are a few of the additional specs being crafted independent of, but inspired by APIs.json:
- API Plans, for pricing, plans & rate limits.
- API Monitoring, for monitoring & testing.
- API Changelog, for operational monitoring.
- API SDK, for SDK reference.
- API Conversations - for the stream around API operations
Roadmap for Version 0.16 of APIs.json
That is the 100K view of what APIs.json is now, and the short term plan for the future. Most of the change within the universe APIs.json is mapping will occur at the individual API level, and within the machine readable specs that describe them, like OpenAPI Spec, API Blueprint, and Postman. Secondarily, there will be additional machine readable API types being defined and added into the spec.
Even with this reality, we do have a handful of changes planned for the 0.16 version of APIs.json:
- commons - Establish a top level collection of common property elements that apply to ALL APIs being referenced in an APIs.json
- country - Adding a top level country reference using ISO 3166.
- New Property Elements - Suggesting a handful of new property elements to reference common API operation building blocks.
I doubt we will see many new additions like commons and country. In the future most of the structural changes to APIs.json will be derived from first class property elements (i.e. adding documentation or Github), making this the proving ground for defining what are truly the most important aspects of API operations, and what should be machine readable vs human readable.
The Hard Work That Lies Ahead for APIs.json
That concludes defining what APIs.json is, and what is next for it. Now we really have to get to work, doing the heavy lifting around:
- Getting more API providers to describe their API operations using APIs.json, and publish in the root of the domain for their API ecosystem.
- Encourage more API evangelists, brokers & analysts to describe their collections using APIs.json, building more meaningful indexes and directories of high value APIs.
- Encourage platforms to build APIs.json into their operations, as a storage and organization schema, but also as import / export format.
- Incentivize the development of more meaningful tooling that employs APIs.json, and uses it to better serve the API life cycle.
- Continue to add new API property elements, making sure as many of them as possible evolve to be machine readable, as well as first class citizens in the APIs.json specification.
You can stay involved with what we are up to via the APIs.json website, and the APIs.json Github repository. You can also stay in tune with what is going on with APIs.io via its website, and its Github repository. If you are doing something with APIs.json, ranging from using it as an index for your API operations, to platform integrations, please let me know. Also, if you envision some interesting tooling you'd like to see happen, make sure and submit a Github issue letting us know.
While we still have huge amounts of work to do, when it comes to delivering meaningful API discovery solutions that the industry can put to work, I am pretty stoked with what we have managed to do over the last two years of work on the APIs.json specification, and supporting tooling--momentum that I feel picking up in 2016.
During my API discovery session talk at @APIStrat Austin this last November, I talked about what I see as an added dimension to the concept of API discovery, one that will become increasingly important when it comes to actually moving things forward --- discovering solutions that are API driven vs. API discovery, where a developer is looking for an API.
It might not seem that significant to developers, but SaaS services like Zapier, DataFire, and API hubs like Cloud Elements, bring this critical new dimension to how people actually will find your APIs. As nice as ProgrammableWeb has been for the last 10 years, we have to get more sophisticated about how we get our APIs in front of would-be consumers. We just can't depend on everyone who will put our API to work, immediately thinking that they need an API--most likely they are just going to need a solution to their problem, and secondarily need to understand there is an API driving things behind the scenes.
One of many examples of this in the wild could be in the area of tech support for your operations. Maybe you use Jira currently, because this is what your development team uses, but with the latest release you need something a little more public facing. When you are exploring what is possible with API reciprocity services like Zapier, and API hubs like Cloud Elements, you get introduced to other API driven solutions like Zendesk, or Desk.com from SalesForce.
This is just one example of how APIs can make an impact on the average business user, and it will be the way API discovery happens in the future. In this scenario, I didn't set out looking for an API, but because I use API enabled service providers, I am introduced to other alternative solutions that might also help me tackle the problem at hand. I may never have even known SalesForce had a help desk solution, if I wasn't already exploring the solutions Cloud Elements brings to the table.
As an API provider, you need to make sure your APIs are available via the growing number of API aggregation and reciprocity providers, and make sure the solutions they bring to the table are easily discoverable. You need to think beyond the classic developer focused version of API discovery, and make sure to think about API driven solution discovery meant for the average business or individual user.
Disclosure: Cloud Elements is an API Evangelist partner.
Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information (01 Dec 2015)
My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I published APIs.json indexes for the almost 1000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission around the project so far has been to create an open source, machine readable repo for the API space.
I have had two recent occurrences that are pushing me to expand on my API Stack work. First, I have other entities who want to contribute monitoring data and other elements I would like to see collected, but haven't had time for. The other is that I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes, so that others can access them.
Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially make monitoring, testing, and performance data available.
Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo, for some of the top APIs out there, and start playing with the best way to store the JSON and other files, and provide an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked on, let me know, and I'll add it to the road map.
My goal is to go for quality and completeness of data there, before I look to scale, and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.
If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.