10 Jul 2018
I’m doing a lot of thinking regarding how JSON PATCH can be applied because of my work with Streamdata.io. When you proxy an existing JSON API with Streamdata.io, after the initial response, every update sent over the wire is articulated as a JSON PATCH update, showing only what has changed. It is a useful way to show what has changed with any JSON API response, while being very efficient about what you transmit with each update, reducing polling, and taking advantage of HTTP caching.
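To make the mechanics concrete, here is a minimal sketch (Python, standard library only) of what a client on the receiving end of such a stream might do: hold a cached copy of the last response, and apply each incoming JSON PATCH (RFC 6902) update to it. It only covers the add, replace, and remove operations, and the stock-quote document is entirely made up for illustration.

```python
import json

def apply_patch(doc, patch):
    """Apply a minimal subset of RFC 6902 (add, replace, remove)
    to a cached JSON document, returning an updated copy."""
    doc = json.loads(json.dumps(doc))  # deep copy via round-trip
    for op in patch:
        # split the JSON Pointer path, unescaping ~1 then ~0 per the RFC
        parts = [p.replace("~1", "/").replace("~0", "~")
                 for p in op["path"].lstrip("/").split("/")]
        target = doc
        for key in parts[:-1]:
            target = target[int(key)] if isinstance(target, list) else target[key]
        last = parts[-1]
        if isinstance(target, list):
            idx = len(target) if last == "-" else int(last)
            if op["op"] == "add":
                target.insert(idx, op["value"])
            elif op["op"] == "replace":
                target[idx] = op["value"]
            elif op["op"] == "remove":
                del target[idx]
        else:
            if op["op"] in ("add", "replace"):
                target[last] = op["value"]
            elif op["op"] == "remove":
                del target[last]
    return doc

# a hypothetical cached response, and the patch the stream delivers
cached = {"stock": {"symbol": "ACME", "price": 100}, "history": [99]}
update = [{"op": "replace", "path": "/stock/price", "value": 101},
          {"op": "add", "path": "/history/-", "value": 101}]
latest = apply_patch(cached, update)
```

Only the two small patch operations cross the wire, instead of the full response body on every poll.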
As I’m writing an OpenAPI diff solution, helping me understand the differences between OpenAPI definitions I’m importing, and allowing me to understand what has changed over time, I can’t help but think that JSON PATCH would be a great way to articulate change to the surface area of an API over time–that is, if everyone loyally used OpenAPI as their API contract. Providing an OpenAPI diff using JSON PATCH would be a great way to articulate an API road map, and tooling could be developed around it to help API providers publish their road map to their portal, and push out communications with API consumers. Helping everyone understand exactly what is changing in a way that could be integrated into existing services, tooling, and systems–making change management a more real time, “pipelinable” (making this word up) affair.
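A hedged sketch of the generation side: given two versions of an OpenAPI definition loaded as plain dictionaries, recursively compare them and emit RFC 6902 operations describing what changed. Arrays and scalar values are compared wholesale here, and the sample pet-store paths are hypothetical; a real diff tool would also need to handle ordering, $refs, and more.

```python
def diff_to_patch(old, new, path=""):
    """Recursively compare two JSON-like structures and emit
    RFC 6902 operations describing what changed between them."""
    ops = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key in old:
            # escape ~ then / per the JSON Pointer rules
            escaped = key.replace("~", "~0").replace("/", "~1")
            if key not in new:
                ops.append({"op": "remove", "path": f"{path}/{escaped}"})
            else:
                ops += diff_to_patch(old[key], new[key], f"{path}/{escaped}")
        for key in new:
            if key not in old:
                escaped = key.replace("~", "~0").replace("/", "~1")
                ops.append({"op": "add", "path": f"{path}/{escaped}",
                            "value": new[key]})
    elif old != new:
        # arrays and scalars are replaced wholesale in this sketch
        ops.append({"op": "replace", "path": path, "value": new})
    return ops
```

Feeding two iterations of a definition through this yields a machine readable change log entry that could be published alongside the road map.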
I feel like this could help API providers better understand and articulate what might be breaking changes. There could be tooling and services that help quantify the scope of changes during the road map planning process, and teams could submit OpenAPI definitions before they ever get to work writing code, helping them better see how changes to the API contract will impact the road map. Then the same tooling and services could be used to articulate the road map to consumers, as the road map becomes approved, developed, and ultimately rolled out. With each OpenAPI JSON PATCH moving from road map to change log, keeping all stakeholders up to speed on what is happening across all API resources they depend on–documenting everything along the way.
I am going to think more about this as I evolve my open API lifecycle. How I can iterate a version of my OpenAPI definitions, evaluate the difference, and articulate each update using JSON PATCH. Since more of my API lifecycle is machine readable, I’m guessing I’m going to be able to use this approach beyond just the surface area of my API. I’m going to be able to use it to articulate the changes in my API pricing and plans, as well as licensing, terms of service, and other evolving elements of my operations. It is a concept that will take some serious simmering on the back burners of my platform, but a concept I haven’t been able to shake. So I might as well craft some stories about the approach, and see what I can move forward as I continue to define, design, and iterate on the APIs that drive my platform and API research forward.
I have several projects right now that need a baseline for what is expected of microservices developers when it comes to the README for their Github repository. Each microservice should be a self-contained entity, with everything needed to operate the service within a single Github repository, making the README the front door for the service, and something that anyone engaging with the service will depend on to help them understand what the service does, and where to get at anything needed to operate it.
Here is a general outline of the elements that should be present in a README for each microservice, providing as much of an overview as possible for each service:
- Title - A concise title for the service that fits the pattern identified and in use across all services.
- Description - Less than 500 words that describe what a service delivers, providing an informative, descriptive, and comprehensive overview of the value a service brings to the table.
- Documentation - Links to any documentation for the service including any machine readable definitions like an OpenAPI definition or Postman Collection, as well as any human readable documentation generated from definitions, or hand crafted and published as part of the repository.
- Requirements - An outline of the other services, tooling, and libraries needed to make a service operate, providing a complete list of EVERYTHING required for it to work properly.
- Setup - A step by step outline, from start to finish, of what is needed to setup and operate a service, providing as much detail as possible for any new user to be able to get up and running with a service.
- Testing - Providing details and instructions for mocking, monitoring, and testing a service, including any services or tools used, as well as links or reports that are part of active testing for a service.
- Configuration - An outline of all configuration and environmental variables that can be adjusted or customized as part of service operations, including as much detail as possible on default values, and options that would produce different known results for a service.
- Road Map - An outline broken into three groups, 1) planned additions, 2) current issues, 3) change log. Providing a simple, descriptive outline of the road map for a service with links to any Github issues that support what the plan is for a service.
- Discussion - A list of relevant discussions regarding a service with title, details, and any links to relevant Github issues, blog posts, or other updates that tell a story behind the work that has gone into a service.
- Owner - The name, title, email, phone, or other relevant contact information for the owner, or owners, of a service, providing anyone with the information they need to reach out to the person who is responsible for a service.
That represents the ten things each service should contain in its README, providing what is needed for ANYONE to understand what a service does. At any point in time someone should be able to land on the README for a service and understand what is happening without having to reach out to the owner. This is essential for delivering individual services, but also for delivering services at scale across tens, or hundreds, of services. If you want to know what a service does, or what the team behind the service is thinking, you just have to READ the README to get EVERYTHING you need.
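As a reference point, here is one way the ten elements above might be laid out in an actual README. All names, links, and values are placeholders for illustration, not a prescribed format:

```markdown
# notifications-service

## Description
Less than 500 words describing what this service delivers, and the
value it brings to the table.

## Documentation
- [OpenAPI definition](openapi.yaml)
- [Postman Collection](postman-collection.json)
- [Human readable docs](docs/)

## Requirements
- PostgreSQL 10.x
- Redis 4.x
- The `accounts` service (for authentication)

## Setup
1. Clone this repository.
2. Copy `config.example.yml` to `config.yml` and adjust values.
3. Run `make install && make run`.

## Testing
How the service is mocked, monitored, and tested, with links to any
services, tools, or reports used.

## Configuration
| Variable    | Default | Notes                    |
|-------------|---------|--------------------------|
| `PORT`      | `8080`  | HTTP listener            |
| `LOG_LEVEL` | `info`  | `debug`, `info`, `error` |

## Road Map
1. Planned additions
2. Current issues
3. Change log

## Discussion
Relevant Github issues, blog posts, and other updates that tell the
story behind the work on this service.

## Owner
Jane Doe, Service Owner - jane.doe@example.com
```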
It is important to think outside your bubble when crafting and maintaining a README for each microservice. If it is not up to date, or lacking relevant details, it means the service will potentially be out of sync with other services, and introduce problems into the ecosystem. The README is a simple, yet crucial aspect of delivering services, and it should be something any service stakeholder can read and understand without asking questions. Every service owner should be stepping up to the plate and owning this aspect of their service development, and professionally owning this representation of what their service delivers. In a microservices world each service doesn’t hide in the shadows, it puts its best foot forward and proudly articulates the value it delivers, or it should be deprecated and go away.
My Response To The VA Microconsulting Work Statement On API Landscape Analysis and Near-term Roadmap
17 Apr 2018
The Department of Veterans Affairs (VA) is listening to my advice around how to execute their API strategy, adopting a micro-approach to not just delivering services, but also to the business of moving the platform forward at the federal agency. I’ve responded to round one and round two of the RFIs, and now they have submitted a handful of work statements on Github, so I wanted to provide an official response, share my thoughts on each of the work statements, and actually bid for the work.
First, Here is The VA Background
The Lighthouse program is moving VA towards an Application Programming Interface (API) first driven digital enterprise, which will establish the next generation open management platform for Veterans and accelerate transformation in VA’s core functions, such as Health, Benefits, Burial and Memorials. This platform will be a system for designing, developing, publishing, operating, monitoring, analyzing, iterating and optimizing VA’s API ecosystem.
Next, What The VA Considers The Play
As the VA Lighthouse Product Owner I have identified two repositories of APIs and publicly available datasets containing information with varying degrees of usefulness to the public. If there are other publicly facing datasets or APIs that would be useful they can be included. I need an evaluation performed to identify and roadmap the five best candidates for public facing APIs.
The two known sources of data are:
- Data.gov - https://catalog.data.gov/organization/va-gov and www.va.gov/data
- Vets.gov - https://github.com/department-of-veterans-affairs/vets-api
Then, What The VA Considers The Deliverable
- Rubric/logical analysis for determining prioritization of APIs
- Detail surrounding how the top five were prioritized based on the rubric/analysis (Not to exceed 2 pages unless approved by the Government)
- Deliverables shall be submitted to VA’s GitHub Repo.
Here Is My Official Response To The Statement
To describe my approach to this work statement, and describe what I’ve already been doing in this area for the last five years, I’m going to go with a more logical analysis, as opposed to a rubric. There is already too much structured data in all of these efforts, and I’m a big fan of making the process more human. I’ve been working to identify the most valuable data, content, and other resources across the VA, as well as other federal agencies, since I worked in DC back in 2013, something that has inspired my Knight Foundation funded work with Adopta.Agency.
Always Start With Low Hanging Fruit
When it comes to understanding what data and content should be turned into APIs, you always begin with the public website for a government agency. If the data and content are already published to the website, it means there has been demand for accessing the information, and it has already been through a rigorous process to be made publicly available. My well developed low hanging fruit process involves spidering all VA web properties, identifying any page containing links to XML, CSV, and other machine readable formats, as well as any page with a table containing over 25 rows, and any forms providing search, filtering, and access to data. The low hanging fruit portion of this will help identify potentially valuable sources of data beyond what is already available at va.gov/data, data.gov, and on Github, expanding the catalog we should be considering when it comes to which APIs should be published.
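The spidering pass described above can be approximated with very little code. A rough illustration (Python standard library only, page content invented for the example) of flagging a single page that links to machine readable files or contains a sizable HTML table:

```python
from html.parser import HTMLParser

MACHINE_READABLE = (".csv", ".xml", ".json", ".xls", ".xlsx")

class LowHangingFruit(HTMLParser):
    """Scan a single page for signs of publishable data: links to
    machine readable files, and the row count of any HTML tables."""
    def __init__(self):
        super().__init__()
        self.data_links = []
        self.table_rows = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().endswith(MACHINE_READABLE):
                self.data_links.append(href)
        elif tag == "tr":
            self.table_rows += 1

# a made-up page standing in for a fetched VA web property
page = """<html><body>
<a href="/data/facilities.csv">Facilities</a>
<a href="/about.html">About</a>
<table><tr><th>Name</th></tr><tr><td>A</td></tr><tr><td>B</td></tr></table>
</body></html>"""

parser = LowHangingFruit()
parser.feed(page)
# flag the page when it links to data files or has a table over 25 rows
flagged = bool(parser.data_links) or parser.table_rows > 25
```

A full crawl would repeat this across every page of every property, accumulating a catalog of candidate datasets.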
Tap Existing Government Metrics
To help understand what data is available at va.gov/data, data.gov, on Github, as well as across all VA web properties, you need to reach out to the existing players who are working to track activity across the properties. analytics.usa.gov is a great place to start to understand what people are looking at and searching for across the existing VA web properties. While talking with the folks at the GSA, I would also ask them about any available statistics for relevant data on data.gov, understanding what is being downloaded and accessed when it comes to VA data, as well as any other related data from other agencies. Then I’d stop and look at the metrics available on Github for any data stored within repositories, taking advantage of the metrics built into the social coding platform. Tapping into as many of the existing metrics being tracked across the VA, as well as other federal agencies.
Talk To Veterans And Their Supporters
Next, I’d spend time on Twitter, Facebook, Github, and LinkedIn talking to veterans, veteran service organizations (VSO), and people who advocate for veterans. This work would go a long way towards driving the other platform outreach and evangelism described in the other statement of work I responded to. I’d spend a significant amount of time asking veterans and their supporters what data, content, and other resources they use most, and what would be most valuable if it were made more available in web, mobile, and other applications. Social media would provide a great start to help understand what information should be made available as APIs, then you could also spend time reviewing the web and mobile properties of organizations that support veterans, taking note of what statistics, data, and other information they are publishing to help them achieve their mission, and better serve veterans.
Start Doing The Work
Lastly, I’d start doing the work. Based upon what I’ve learned from my Adopta.Agency work, for each dataset I identified I’d publish my profiling to an individual Github repository, publishing a README file describing the data, where it is located, and a quick snapshot of what is needed to turn the dataset into an API. No matter where the data ended up being in the list of priorities there would be a seed planted, which might attract its own audience, stewards, and interested parties that might want to move the conversation forward–internally, or externally to VA operations. EVERY public dataset at the VA should have its own Github repository seed, and then eventually its corresponding API to help make sure the latest version of the data is available for integration in applications. There is no reason that the work to deploy APIs cannot be started as part of the identification and prioritization of data across the VA. Let’s plant the seed to ensure all data at the VA eventually becomes an API, no matter what its priority level is this year.
Some Closing Thoughts On This Project
It is tough to understand where to begin with an API effort at any large institution. Trust me, I tried to do it at the VA. I was the person who originally set up the VA Github organization, and began taking inventory of the data now available on data.gov, va.gov/data, and on Github. To get the job done you have to do a lot of legwork to identify where all the valuable data is. Then you have to do the hard work of talking to internal data stewards, as well as individuals and organizations across the public landscape who are serving veterans. This is a learning process unto itself, and will go a long way towards helping you understand which datasets truly matter, and should be a priority for API deployment, helping the VA better achieve its mission of serving veterans.
This is a post that has been in my API notebook for quite a while. I feel it is important to keep showcasing the growing number of API providers who are not just using OpenAPI, but also managing them on Github, so I had to make the time to talk about the email API provider SendGrid managing their OpenAPI using Github. Adding to the stack of top tier API providers managing their API definitions in this way.
SendGrid is managing announcements around their OpenAPI definition using Github, allowing developers to signup for email notifications around releases and breaking changes. You can use the Github repository to stay in tune with the API roadmap, and contribute feature requests, submit bug reports, and even submit a pull request with the changes you’d like to see. Going beyond just an API definition being about API documentation, and actually driving the API road map and feedback loop.
This approach to managing an API definition, while also making it a centerpiece of the feedback loop with your community, is why I keep beating this drum, and showcasing API providers who are doing this. It is a way to manage the central API contract, where the platform provider and API consumers both have a voice, producing a machine readable artifact that can then be used across the API lifecycle, from design to deprecation. Elevating the API definition beyond just a byproduct of creating API docs, and making it a central actor in everything that occurs as part of API operations.
I’m still struggling with convincing API providers that they should be adopting OpenAPI. That it is much more than an artifact driving interactive documentation. Showcasing companies like SendGrid using OpenAPI, as well as their usage of Github, is an important part of me convincing other API providers to do the same. If you want me to write about your API, and what you are working to accomplish with your API resources, then publish your OpenAPI definition to Github, and engage with your community there. It may take me a few months, but eventually I will get around to writing it up, and incorporating your story into my toolbox for changing behavior in our API community.
I fell down the rabbit hole of the latest Facebook version release, trying to understand the deprecation of their User Insights API. The story of the deprecation of the API isn’t told accurately as part of the regular release process, so I found myself thinking more deeply about how we tell stories (or don’t) around each step forward for our APIs. I have dedicated areas of my API research for the road map, issues, and change log for API operations, because their presence tells a lot about the character of an API, and I feel their usage paints an accurate picture of each moment in time for an API.
Facebook has a dedicated change log for their API platform, as well as active status and issues pages, but they do not share much about what their road map looks like. They provide a handful of elements with each release’s change log:
- New Features — New products or services, including new nodes, edges, and fields.
- Changes — Changes to existing products or services (not including Deprecations).
- Deprecations — Existing products or services that are being removed.
- 90-Day Breaking Changes — Changes and deprecations that will take effect 90 days after the version release date.
The presence, or lack of presence, of a road map, change log, status and issue pages for an API paints a particular picture of a platform in my mind. Also, the stories they tell, or do not tell, with each release paint an evolving picture of where a platform is headed, and whether or not we want to participate in the journey. Facebook does better than most platforms I track on when it comes to storytelling, by also releasing a blog post telling the story of each release, providing separate posts for the Graph API, as well as the Marketing API. It is too bad that they omitted the deprecation of the Audience Insight API, which occurred at the time of this story.
While I consider the presence of building blocks like a change log, road map, issues and status page a positive sign for platforms, it still always requires reading between the lines, and staying in tune with each release, to really get a feel for how well a platform puts these building blocks to work. Regardless, I think these building blocks do adequately paint a picture of the current state of a platform–it just usually happens to be the picture the platform wants you to see, not necessarily the picture platform consumers would like to see.
I am increasingly tracking the OpenAPI definitions published to Github by leading API providers. Platforms like Stripe, Box, and the New York Times are actively managing their OpenAPI definitions using Github, making them well suited for integration into their platform operations, API consumer scenarios, and even analyst systems like what I have going on as the API Evangelist.
Once I have an authoritative source of an OpenAPI, meaning a public URI for an OpenAPI that is actively being maintained by the API provider, I have a pretty valuable feed into the roadmap, as well as change log for an API. I feel like we are getting to the point where there are enough authoritative OpenAPIs that we can start using as a machine readable notification and narrative tool for helping us stay in tune with one or many APIs across the landscape. Helping us stay in tune with APIs in real-time, and giving APIs an effective tool for communicating out changes to the platform–we just need more OpenAPIs, and some new tooling to emerge.
I’m envisioning an OpenAPI client that regularly polls OpenAPIs and caches them. Anytime there is a change it does a diff, and isolates anything new. Think of an RSS reader, but for OpenAPIs, one that goes well beyond new entries, and actually creates a narrative based upon the additions and changes. Tell me about the new paths added, and any new headers, parameters, or maybe how the schema has grown. Provide me insights on what has changed, and possibly what has been removed, or will be removed in future editions. As an API analyst, I’d like to have an OpenAPI-enabled approach to receiving push notifications when an API changes, with a short, concise summary of what has changed in my inbox, via Twitter, or as a Github notification.
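The narrative step of such a client could start as simply as comparing the paths and schemas of the cached definition against a freshly polled one. A rough sketch, with the two sample definitions invented purely for illustration:

```python
def summarize_changes(old_spec, new_spec):
    """Turn the difference between two cached OpenAPI definitions
    into short, human readable notification lines."""
    notes = []
    old_paths = set(old_spec.get("paths", {}))
    new_paths = set(new_spec.get("paths", {}))
    for p in sorted(new_paths - old_paths):
        notes.append(f"New path added: {p}")
    for p in sorted(old_paths - new_paths):
        notes.append(f"Path removed: {p}")
    old_schemas = set(old_spec.get("components", {}).get("schemas", {}))
    new_schemas = set(new_spec.get("components", {}).get("schemas", {}))
    for s in sorted(new_schemas - old_schemas):
        notes.append(f"Schema has grown: new definition {s}")
    return notes
```

Each run of the poller would feed its output to whatever notification channel a consumer prefers: email, Twitter, or a Github issue.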
OpenAPI already provides API discovery features through the documentation it generates, and I’m increasingly using Github to find new APIs after providers publish their OpenAPIs there, but this type of API discovery and notification at the granular level would be something new. If there was such tooling out there, it would provide yet another incentive for API providers to publish and maintain an active, up to date OpenAPI definition. This is a concept I’d also like to see expanded to the API operational level using APIs.json, where we can receive notifications about changes to documentation, pricing, SDKs, and other critical aspects of API integration, beyond just the surface area of the API. All of this stuff will take many years to unfold. It has taken over five years for a critical mass of OpenAPI definitions to emerge, and I suspect it will take another five to ten years for robust tooling to emerge at this level, which also depends on many more API definitions being available.
There are many reasons you want to have a road map for your API. It helps you communicate with your API community about where you are going with your API. It also helps you have a plan in place for the future, which increases the chances you will be moving things forward in a predictable and stable way. When I’m reviewing an API and I don’t see a public API road map available, I tend to give them a ding on reliability and communication for their operations. One of the reasons we do APIs is to help us focus externally with our digital resources, in which communication plays an important role, and when API providers aren’t communicating effectively with their community, there are almost always other issues lurking behind the scenes.
A road map for your API helps you plan, and think through how and what you will be releasing for the foreseeable future. Communicating this plan externally helps force you to think about your road map in the context of your consumers. Having a road map, and successfully communicating about it via a blog, on Twitter, and other channels, helps keep your API consumers in tune with what you are doing. In my opinion, an API road map is an essential building block for all API providers, not only because it has direct value for the health of API operations, but because it also provides an external sign of the overall health of a platform.
Beyond the direct value of having an API road map, there are other reasons for having one that go beyond just your developer community. In a story in Search Engine Land about Google Posts, the author directly references the road map as part of their storytelling. “In version 4.0 of the API, Google noted that “you can now create Posts on Google directly through the API.” The changelog includes a bunch of other features, but Google Posts is the most notable.” Adding another significant dimension to the road map conversation, and helping out with SEO, and other API marketing efforts.
As you craft your road map you might not be thinking about the fact that people might reference it as part of stories about your platform. I wouldn’t sweat it too much, but you should at least make sure you are having a conversation about it with your team, and maybe add an editorial cycle to your road map publishing process. Making sure what you publish will speak to your API consumers, but also make for quotable nuggets that other folks can use when referencing what you are up to. This is all just a thought I am having as I monitor the space, and read about what other APIs are up to. I find road maps to be a critical piece of API communication and support, and I depend on them to do what I do as the API Evangelist. I just wanted to let you know how important your API road map is, so you don’t forget to give it some energy on a regular basis.
I consider a road map for any API to be an essential building block, whether it is a public API or not. You should be in the business of planning the next steps for your API in an organized way, and you should be sharing that with your API consumers so that they can stay up to speed on what is right around the corner. If you want to really go the extra mile I recommend following what Tyk is up to, with their public road map using Trello.
With the API management platform Tyk, you don’t just see a listing of their API road map, you see all the work and conversation behind the road map, using the visual collaboration platform Trello. Using their road map you can see proposed features, which is great for seeing if something you want has already been suggested, and you can get at a list of what the next minor releases will contain. Plus, using the menu bar you can get at a history of the changes the Tyk team has made to the platform, going back through the entire history of the Trello board.
Using Trello you can subscribe to, or vote up, any of the message boards. If you want to submit something you need to sign up and post it to the Tyk community, then they’ll consider adding it to the proposed road map features. It is a pretty low cost, easy to make public approach to delivering a road map. Sometimes this stuff doesn’t need a complex solution, just one that provides some transparency, and helps your customers understand what is next. Tyk provides a nice example of a road map that any other API provider, or service provider, can follow.
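Because Trello boards can be exported as JSON, pulling a road map view out of one takes very little code. This sketch assumes the common board export shape (top-level "lists" and "cards", with each card referencing its list by "idList"); the board data shown is made up, not Tyk’s actual board:

```python
def roadmap_from_board(board):
    """Group the open cards from a Trello board export into a simple
    road map view keyed by list name (e.g. Proposed, Next Release)."""
    lists = {l["id"]: l["name"] for l in board["lists"]}
    roadmap = {name: [] for name in lists.values()}
    for card in board["cards"]:
        if not card.get("closed"):  # skip archived cards
            roadmap[lists[card["idList"]]].append(card["name"])
    return roadmap
```

The resulting dictionary is easy to render on a portal page, or diff between polls to notice when a feature moves from proposed to scheduled.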
Another interesting approach to delivering an API road map that I can add to my research. I’m a big fan of having many different ways of delivering the essential building blocks of API operations, using a variety of simple free or paid SaaS tools. You’d be surprised at how useful an open road map can be for your API. Even if you aren’t adding too many new features, or have a huge number of people participating, it provides an easy reference showing what is next for an API. It also shows someone is home behind the scenes, and that an API is actually active, alive, and something you should be using.
I saw a blog post come across my feeds from the analysis and visualization API provider Qlik, about their Qlik Sense API Insights. It is a pretty interesting approach to trying to visualize the change log and road map for an API. I like it because it is an analysis and visualization API provider who has used their own platform to help visualize the evolution of their API.
I find the visualization for Qlik Sense API Insights to be a little busy, and not as interactive as I’d like to see it be, but I like where they are headed. It tries to capture a ton of data, showing the road map and changes across multiple versions of sixteen APIs, something that can’t be easy to wrap your head around, let alone capture in a single visualization. I really like the direction they are going with this, even though it doesn’t fully bring it home for me.
Qlik Sense API Insights is the first approach I’ve seen like this attempt to quantify the API road map and change log–it makes sense that it is something being done by a visualization platform provider. With a little usability and user experience (UX) love I think the concept of analysis, visualizations, and hopefully insights around the road map, change log, and even open issues and status could be significantly improved upon. I could see something like this expand and begin to provide an interesting view into the forever changing world of APIs, keeping consumers better informed, and in sync with what is going on.
In a world where many API providers still do not even share a road map or change log, I’m always looking for examples of providers going the extra mile to provide more details, especially if they are innovating like Qlik is with visualizations. I see a lot of conversations about how to version an API, but very few conversations about how to communicate each version of your API. It is something I’d like to keep evangelizing, helping API providers understand they should at least be offering the essentials like a road map, issues, change log, and status page, but the possibility for innovation and pushing the conversation forward is within reach too!
I have been learning more about Apicurio, which is the open source API design editor I have been waiting for. There are a number of things I’m interested in when it comes to Apicurio, but one side element that caught my attention was their road map.
I am a big fan of encouraging folks to share their roadmap. It is an important part of helping establish a shared future between API provider and API consumer. Apicurio is an API tool, without any APIs (yet), but the roadmap purpose remains the same. I like how Apicurio shares their tech preview, beta, and 1.x plan, in a coherent and organized way–you do not have to be a developer to understand what they are planning.
As I was using Apicurio I had a lot of questions about what it didn’t do. I had a lot of ideas about what it should do, and before I set out writing these ideas on my blog I spent some time with their road map, syncing items on my list with items on their roadmap. After I was done, I had reduced my list of questions and ideas to just a handful of items–which I will write about shortly. Their proactive, coherent, and complete road map saved me time, but also will save them time when it comes to listening to my feedback (if they do).
Complete, coherent, and plain English road maps are another one of those super simple, captain obvious ideas that over 50% of the APIs I review DO NOT HAVE. Which is why I’m writing about it and showcasing a positive example like Apicurio. Please do not forget to share your road map with your community–it will save us both time.
I am stepping back today and thinking about a pretty long list of API design considerations for the Human Services Data API (HSDA), providing guidance for the municipal 211 operators who are implementing an API. I’m making simple API design decisions, from how I define query parameters all the way to hypermedia decisions, for version 1.1 of the HSDA API.
There are a ton of things I want to do with this API design. I really want folks involved with municipal 211 operations to be adopting it, helping ensure their operations are interoperable, and I can help incentivize developers to build some interesting applications. As I think through the laundry list of things I want, I keep coming back to my audience of practitioners, you know the people on the ground with 211 operations that I want to adopt an API way of doing things.
My target audience isn’t steeped in APIs. They are regular folks trying to get things done on a daily basis. This move from v1.0 to v1.1 is incremental. It is not about anything big. Primarily this move was to make sure the API reflected 100% of the surface area of the Human Services Data Specification (HSDS), keeping in sync with the schema’s move from v1.0 to v1.1, and not much more. I need to onboard folks with the concept of HSDS, and what access looks like using the API–I do not have much more bandwidth to do much else.
I want to avoid moving too fast for my API audience. I can see versions 2, 3, even 4 in my head as the API architect and designer, but am I thinking of me, or of my potential consumers? I'm going to seize this opportunity to educate my target audience about APIs using the road map for the API specification. I have a huge amount of education to do with 211 operators, as well as the existing vendors who sell them software, when it comes to APIs. Stepping back has pulled the reins in on my API design strategy, encouraging me to keep things as simple as possible for this iteration, and helping me think more thoughtfully about what I will release as part of each incremental, as well as major, release the future holds.
I have had a number of requests from folks lately to write more about Github, and how they can use the social coding platform as part of their API operations. As I work with more companies outside of the startup echo chamber on their API strategies I am encountering more groups that aren't Github fluent and could use some help getting started. It has also been a while since I've thought deeply about how API providers should be using Github so it will allow me to craft some fresh content on the subject.
Github As Your Technical Social Network
Think of Github as a more technical version of Facebook, but instead of the social interactions being centered around wall posts, news links, photos, and videos, it is focused on engagement with repositories. A repository is basically a file folder that you can make public or private, and put anything you want into it. While code is the most common thing put into Github repositories, they often contain data files, presentations, and other content, providing a beneficial way to manage many aspects of API operations.
The Github Basics
When putting Github to use as part of your API operations, start small. Get your profile setup, define your organization, and begin using it to manage documentation or other simple areas of your operations--until you get the hang of it. Set aside any pre-conceived notions about Github being about code, and focus on the handful of services it offers to enable your API operations.
- Users - Just like other online services, Github has the notion of a user, where you provide a photo, description, and other relevant details about yourself. Avoid creating a user account for your API itself, making sure you show the humans involved in API operations. It can make sense to have a testing account, or other generic platform user accounts, but make sure each member of your API team has their own user profile, providing a snapshot of everyone involved.
- Organizations - You can use Github organizations to group your API operations under a single umbrella. Each organization has a name, logo, and description, and then you can add specific users as collaborators, and build your team under a single organization. Start with a single organization for your entire API operations, then you can consider additional organizations to further organize your efforts, such as partner programs, or other aspects of internal API operations.
- Repositories - A repository is just a folder. You can create a repository, clone (check out) a repository using the Github desktop client, manage its content locally, and commit changes back to Github whenever you are ready. Repositories are designed for collaborative, version controlled engagements, allowing many people to work together, while still providing centralized governance and control by the designated gatekeeper for whatever project is being managed via a repository--the most common usage is managing open source software.
- Topics - Recently Github added the ability to label your repositories using what they call topics. Topics are used as part of Github discovery, allowing users to search using common topics, as well as search for users, organizations, and repositories by keyword. Topics provide another way for developers to find interesting APIs using search, browsing, and Github trends.
- Gists - A Github service for managing code snippets, allowing them to be embedded in other websites and documentation--great for use in blog posts, and communication around API operations.
- Pages - Use Github Pages for your project websites. It is the quickest way to stand up a web page to host API documentation, code samples, or the entire portal for your API effort.
- API - Everything on the Github platform is available through the Github API, making all aspects of your API operations available via an API--which is the way it should be.
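As a minimal sketch of that last point, here is how you might use the GitHub REST API to inventory an organization's public repositories. The organization name is a placeholder, and the `summarize_repos` helper is my own illustration, not part of any official SDK:

```python
# Sketch: inventory an organization's public repositories via the GitHub
# REST API (GET /orgs/{org}/repos). The org name below is a placeholder.
import json
from urllib.request import Request, urlopen

def fetch_org_repos(org):
    """Fetch the raw repository list for an organization."""
    req = Request(
        f"https://api.github.com/orgs/{org}/repos",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def summarize_repos(repos):
    """Reduce repository records to simple (name, description) pairs."""
    return [(r["name"], r.get("description")) for r in repos]

if __name__ == "__main__":
    for name, desc in summarize_repos(fetch_org_repos("my-api-org")):
        print(f"{name}: {desc}")
```

The same pattern works for users, issues, gists, and pages, since everything described above has a corresponding endpoint.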
Managing API Operations With Github
There are a handful of ways I encourage API providers to consider using Github as part of their operations. I prefer to use Github for all aspects of API operations, but not every organization is ready for that--I encourage you to focus on these areas when you are just getting going:
- Developer Portal - You can use Github Pages to host your API developer portal--I recommend taking a look at my minimum viable API portal definition to see an example of this in action.
- Documentation - Whether as part of the entire portal or just as a single repository, it is common for API providers to publish API documentation to Github. Using solutions like ReDoc, it is easy to make your API documentation look good, while also easily keeping it up to date.
- Code Samples w/ Gists - It is easy to manage all samples for an API using Github Gists, allowing them to be embedded in the documentation, and other communication and storytelling conducted as part of platform operations.
- Software Development Kits (SDK) Repositories - If you are providing complete SDKs for API integrations in a variety of languages you should be using Github to manage their existence, allowing API consumers to fork and integrate as they need, while also staying in tune with changes.
- OpenAPI Management - Publish your APIs.json or OpenAPI definitions to Github, allowing the YAML or JSON to be versioned, and managed in a collaborative environment where API consumers can fork and integrate them into their own operations.
- Issues - Use Github issues for managing the conversation around integration and operational issues.
- Road Map - Also use Github Issues to help aggregate, collaborate, and evolve the road map for API operations, encouraging consumers to be involved.
- Change Log - When anything on the roadmap is achieved flag it for inclusion in the change log, providing a list of changes to the platform that API consumers can use as a reference.
Github is essential to API operations. There is no requirement for Github users to possess developer skills. Many types of users put Github to use in managing the technical aspects of projects to take advantage of the network effect, as well as the version control and collaboration introduced by the social platform. It's common for non-technical folks to be intimidated by Github, and developers often encourage this, but in reality, Github is as easy to use as any other social network--it just takes some time to get used to and familiar with.
If you have questions about how to use Github, feel free to reach out. I'm happy to focus on specific uses of Github for API operations in more detail. I have numerous examples of how it can be used, I just need to know where I should be focusing next. Remember, there are no stupid questions. I am an advocate for everyone taking advantage of Github and I fully understand that it can be difficult to understand how it works when you are just getting going.
Many of the core areas of my API research, and the common building blocks of the API life cycle that I talk about regularly, often seem trivial to the technically inclined, or the purely business focused segments of my audience. To many, a road map might be something you have when developing and deploying an API, but whether you share it publicly doesn't really matter. I'd say the technical folks often don't even think of it, while the more business focused individuals often deliberately choose not to share, seeing it as giving away too much information to the competition.
I think Slack nails the reason why you want to open up and share your API road map. I can talk about this kind of stuff until I'm blue in the face, but people just don't listen -- they need leadership like Slack brings to the table. In their post today, they open with:
We know that being a developer is hard, and building on a platform is not a decision to be made lightly. Many platforms have burned developers and we frequently see that risk highlighted. This is our response.
They nail the (often elusive) promise of the API ecosystem:
An ecosystem, a real platform, is shared. We are growing fast, but no one company alone could grow fast enough to meet the amount of potential in front of us. We are working hard, but no one team can work hard enough to meet the demand that lies before us all. Instead, we are building a platform where this potential, this demand, is shared. As we grow, developers are able to succeed with us; sharing our customers and joining us in changing the way people do work. In turn, our customers are delighted, new customers have even more reason to use Slack, and the cycle continues.
They nail the reality of the API ecosystem, and how it is a shared experience:
An ecosystem in its healthiest form creates a virtuous cycle. Platforms do fail. We’ll make mistakes, but we’re building for something much greater. We are building for a future where Slack is dwarfed by the aggregate value of the companies built on top of it. This is our success as a platform--when the value of the businesses built on top of us is, in sum, larger than we can ever be.
They connect the reality of an ecosystem with having a public road map:
So, today we’re sharing our platform product roadmap. It is a small step in equipping you to claim this opportunity with us. There are three major platform product themes to highlight: app discovery, interactivity and developer experience--you can see more on this card.
Slack doesn't stop there, and throws in an idea showcase to help as well:
We’re also sharing what we’re calling an Ideaboard--a list of useful ideas that could be built into Slack apps per conversations we’ve had with our customers. [...] The goal of the Ideaboard is to continue this momentum by creating a bridge between our developer community and our customers’ needs.
As Slack mentions, this won't be perfect. They don't have all the answers. A public road map and idea board might not ultimately make or break Slack as a platform, or ensure the community continues to evolve as a full blown ecosystem. Shit can go wrong at any point, but having a public road map, and an active, truly contributing idea board, will go a long way toward positive outcomes.
If you aren't allowing input from your API community into your road map, truly considering that input as you craft it, and then sharing the resulting plan with your community, how can you ever expect them to stay in sync? It doesn't mean that Slack will listen to everything, and every developer's opinion will be included in the road map. It will, however, put Slack into a more open state of mind, and go a long way toward setting the right tone in the API ecosystem, building trust with the individuals and businesses who are building on their platform.
What tone has been set in the community around the APIs you provide? What is the tone in the communities you exist in as an API consumer?
Each of the Adopta.Agency projects runs as a Github repository, with the main project site running on Github Pages. There are many reasons I do this, but one of the primary ones is that it provides me with most of what I need to support the project.
Jekyll running on Github Pages gives me the ability to have a blog, and manage the pages for the project, which is central to support. Next, I use Github Issues for everything else. If anyone needs to ask a question, make a contribution, or report a bug, they can do so using Github Issues for the repository.
I even drive the project road map, and change log using Github Issues. If I tag something as roadmap, it shows up on the road map, and if I tag something as changelog, and it is a closed issue, it will show up on the change log--this is the feedback loop that will help me move the clinical trials API forward.
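The tagging convention above is simple enough to sketch in a few lines. This is a rough illustration under my own assumptions about the issue fields involved (title, state, and a list of label names, mirroring a simplified shape of what the GitHub Issues API returns), not an official implementation:

```python
# Sketch: derive a road map and change log from GitHub issues using the
# labeling convention described above--issues labeled "roadmap" feed the
# road map, and closed issues labeled "changelog" feed the change log.
def roadmap_and_changelog(issues):
    """issues: dicts with 'title', 'state', and 'labels' (label names)."""
    roadmap = [i["title"] for i in issues if "roadmap" in i["labels"]]
    changelog = [i["title"] for i in issues
                 if "changelog" in i["labels"] and i["state"] == "closed"]
    return roadmap, changelog
```

Running this against a repository's issues on a schedule is all it takes to keep a road map page and change log page current without any extra tooling.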
A road map for your API is one of those essential building blocks that can go a long way in building trust with your API consumers. Sharing your plans helps developers prepare for the future, and better plan their own road maps, keeping everything potentially in sync.
From my vantage point, a simple, up to date, easily found road map is a building block that benefits both API provider and consumer. Road maps help providers organize their plans, and figure out how they are going to communicate them to the ecosystem, setting the overall tone for how API consumers will engage with a platform--both good and bad.
During my regular monitoring of the space, I saw that Microsoft released a pretty slick "Cloud Platform Roadmap", which is yet another sign for me of how Microsoft is stepping up their cloud (API) game. Their interactive road map allows you to browse the roadmap by areas like cloud infrastructure or Internet of Things (IoT), and see features that are recently available, in public preview, in development, cancelled, or archived.
Beyond the core functionality of the road map, I'm impressed that they published a blog post explaining why they did it, acknowledging that it will bring visibility into platform operations. Additionally, there is a prominent button providing an email address where you can send your suggestions for the road map, evidence of how a roadmap can contribute to the overall feedback loop for an API platform.
Overall, I like the simplicity of the Microsoft cloud platform road map, and it is definitely a healthy building block I'd encourage other API providers to consider offering as well. I'm going to make sure Microsoft's roadmap is one of the examples I cite along the API management line, as part of my overall API life cycle map.
As I work to create Swagger API definitions for the almost 1000 companies in my API Stack, I'm chasing an elusive definition of what a complete Swagger definition is for any API. Only a small portion of the APIs I am defining using Swagger will reach an 80-90% completion status, and in reality most will never be complete, for reasons that are totally out of my control.
When it comes to crafting API definitions for public APIs, I may be able to get the API definition complete in the sense that it has all the paths, parameters, responses, definitions, and security configurations, but if the API itself is missing pieces--the API definition will always be incomplete (in perpetuity).
As I work to automate the questions I ask of Swagger definitions, helping me get to a more coherent definition of what a complete Swagger definition is, I can't help feeling the need to offer API design tips to the API providers I am profiling. Honestly, I have never considered myself a high quality API designer--I've been more business than tech--but I can't help pointing out the obvious stuff.
I am still using my API definition tooling for my own goals around defining the API Stack, but because I'm building it all as APIs, I intend to open it up to others for analyzing their own API definitions, determining how complete they are, and potentially offering up tips on what could be done to improve them. A kind of self-service, API design tip API, that helps with some of the most common elements that are missing from API designs.
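To make that concrete, here is a rough sketch of the kind of automated completeness check I have in mind. The specific checks and the scoring are my own illustration of the idea, not a standard:

```python
# Sketch: score how "complete" a Swagger 2.0 definition is by checking for
# a few of the elements most commonly missing from API definitions.
def completeness_report(swagger):
    """Return (score out of 100, list of missing elements)."""
    checks = {
        "info.description": bool(swagger.get("info", {}).get("description")),
        "paths": bool(swagger.get("paths")),
        "definitions": bool(swagger.get("definitions")),
        "securityDefinitions": bool(swagger.get("securityDefinitions")),
    }
    # Every operation should document at least one response and a summary.
    for path, ops in swagger.get("paths", {}).items():
        for verb, op in ops.items():
            checks[f"{verb.upper()} {path} responses"] = bool(op.get("responses"))
            checks[f"{verb.upper()} {path} summary"] = bool(op.get("summary"))
    score = 100 * sum(checks.values()) // len(checks)
    return score, [name for name, ok in checks.items() if not ok]
```

Wrapped in a simple API, a report like this could hand any provider a prioritized list of design tips in a self-service way.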
In 2015, API definitions are opening up a buffet of services courting API providers, ranging from mocking and deployment to monitoring and security. I don't see why API definitions couldn't also allow professional API designers to help API providers evolve their APIs in more constructive ways, from the outside-in. Using Swagger or API Blueprint, professional designers could easily review existing API providers' designs, then give them suggestions they could use in their API design road map.
Anyhoo, food for thought!
I needed a place to keep track of all the things I will be working on around the FAFSA API project. To help me keep track as well as communicate to you, I've added a roadmap page to this project.
If you have anything you'd like to see get done, you can submit it as an issue and I'll either add it as a milestone or find another way to make sure the work gets done.
A roadmap is an essential part of a healthy API ecosystem. The transparency and communication that come with providing a roadmap for your API and open data initiative will go a long way in building trust with your community.
The City of Philadelphia is sharing its roadmap of data sets that it intends to open in the coming months, by providing a release calendar using cloud project management service Trello. In addition to seeing the datasets the city intends to release, you can also see any data sets that have already been published.
According to the open data roadmap, Philadelphia is releasing data on street closures, energy consumption, evacuation routes, campaign finance, bike racks, budgets, expenditures and city employee salaries to name just a few.
This type of transparency doesn’t just build trust with citizens and developers, it provides an incentive for the city government to deliver high quality public data, in a meaningful and timely manner. There is still a lot of work to be done once this data is available, in order to develop quality analysis, visualizations, other APIs and tools that can be used in mobile and web applications. But what Philly is doing is a great start.
The City of Philadelphia’s approach to its open data roadmap is good to see. It is something all cities should have, as well as all API owners. Your roadmap will set expectations within your community, guide your own initiatives and provide a healthy blueprint for moving your efforts forward.
When you operate an API, you need to make sure to communicate with your developer community about your plans for the future, so developers can plan their own roadmap, and keep in sync with the coming changes to the API.
In addition to issue management using Github, API providers can use Github to manage their roadmap using the milestones feature of the social coding platform. Milestones can be directly related to issues, or can be larger goals around where you intend to take your API.
Milestones give developers insight into what the plans are for the future of an API, using just a title, description, and target date. Milestones are one of the simplest aspects of an API, but can go a long way in creating goodwill with your developers.
API providers have to balance transparency versus giving away information to competitors, something that will take regular consideration. A transparent roadmap for your API will let developers prepare properly, while also letting them know what features are coming in the future--eliminating the possibility of competing with your own ecosystem.
Consider what you can include in your Github roadmap, and using Github for issue management.
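Since a milestone is just a title, description, target date, and a count of open and closed issues, rendering a road map view from milestones is straightforward. A minimal sketch, assuming milestone dicts trimmed to the fields the GitHub API returns for each milestone:

```python
# Sketch: render GitHub milestones as a simple road map summary. The dicts
# mirror milestone fields from the GitHub API (title, due_on, open_issues,
# closed_issues), trimmed to what this view needs.
def roadmap_view(milestones):
    """Return one human-readable progress line per milestone."""
    view = []
    for m in milestones:
        total = m["open_issues"] + m["closed_issues"]
        # Progress is just the share of issues closed against the milestone.
        pct = 100 * m["closed_issues"] // total if total else 0
        view.append(f"{m['title']} (due {m['due_on']}): {pct}% complete")
    return view
```

A page like this, generated from live milestone data, gives developers the insight into your plans described above without any extra bookkeeping on your part.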
It will be 3 months since White House CIO Steven VanRoekel released a federal API strategy, entitled "Digital Government: Building a 21st Century Platform to Better Serve the American People", part of Executive Order 13571, directing all federal departments and agencies to make open data, content, and web APIs the new default.
This Thursday, August 23rd 2012 will be the first major milestone for departments and agencies, where they should have met two goals:
- 2.1 - Engage with customers to identify at least two existing major customer-facing services that contain high-value data or content as first-move candidates to make compliant with new open data, content, and web API policy
- 7.1 - Engage with customers to identify at least two existing priority customer-facing services to optimize for mobile use
I’ve been monitoring 246 departments and agencies and so far three have released drafts of their strategy:
- Department of Commerce
- Department of Education (ED)
- United States Agency for International Development (USAID)
While the Department of Commerce and Department of Education have only published paragraphs discussing how they are engaging with users, the United States Agency for International Development (USAID) has identified three datasets for the web API portion and two optimized for mobile use. All three have published their digital strategy in HTML, XML and JSON formats, meeting the requirements of "machine readable by default".
I anticipate we’ll see many more agencies releasing their digital strategies this week, but there is no way to tell how many agencies will be able to get on board with the CIO’s strategy in time. No matter what, the next 3 months will be a critical time for APIs in Washington DC, not just because of the presidential election, but because the end of November is the next milestone, where agencies are expected to have established an agency-wide governance structure for developing and delivering digital services via web APIs.
If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, depending on my network to help me know what is going on.