Last week I thought I would sit down and learn how to write a Visual Studio Code extension - what better way is there to test the documentation your company ships and give yourself the best holiday present of the year?
I will say right away how easy it is to work on the extension across two platforms - part of it was written on a Windows machine, and another part on a Mac. There was absolutely zero friction, as everything is done within the same environment and with the same cross-platform toolchain.
Getting started - setting up the environment
It’s a little meta, but you will be developing extensions for Visual Studio Code in Visual Studio Code. Of course, that doesn’t mean you can’t use another editor, but it certainly makes the workflow easier.
I got started by just downloading the Hello World example, provisioning npm and Yeoman on both developer machines, and then starting to introduce modifications to the scaffolding. Visual Studio Code was already installed on my machines, as it’s by far the most used app in my toolbox, but in case you need to download it - you can get it here.
How the extension works
The idea behind the extension is fairly simple - when a developer writes an application in one of the languages we support on docs.microsoft.com, they can get some reference material by leveraging a key combination within the editor.
To do that, the extension will take the user’s selection and run it against the search service on docs.microsoft.com, and also do some light parsing to extract content from rendered HTML pages.
To do all the above, I thought I would take advantage of functionality exposed in the following packages:
- superagent - allows performing HTTP requests. One of the great things about it is that it supports promises.
- xpath - allows performing XPath queries against the content that we take from docs.microsoft.com.
- xmldom - allows the construction of the DOM from the string we download when we get the documentation page.
- file-url - helps convert a relative path to a file URL (of the `file://` form).
- js-htmlencode - helper package that allows me to encode raw string content in a render-friendly format that does not break general markup conventions (e.g. loose tags - you’ll see more about this later).
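To illustrate the kind of work js-htmlencode does for me, here is a minimal hand-rolled equivalent (a sketch for illustration, not the package’s actual implementation):

```javascript
// Minimal HTML encoder, illustrating what js-htmlencode handles:
// replace markup-significant characters with entities so raw
// documentation strings can be dropped into generated HTML safely.
function htmlEncode(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(htmlEncode('List<string> x = new List<string>();'));
// → List&lt;string&gt; x = new List&lt;string&gt;();
```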
TL;DR: You can see the entire code file in the GitHub repo.
One of the important items to define in a Visual Studio Code extension is the command registration - when the extension is activated, any custom commands have to be integrated into the environment. That is typically done through `vscode.commands.registerCommand`.
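Wired up in the extension’s entry point, that registration looks roughly like this (a sketch - the command id `extension.showDocs` is a placeholder, and this code only runs inside the VS Code extension host):

```javascript
// extension.js - minimal sketch of command registration.
const vscode = require('vscode');

function activate(context) {
    const disposable = vscode.commands.registerCommand('extension.showDocs', () => {
        // Grab the current selection and kick off the docs lookup here.
        vscode.window.showInformationMessage('Looking up documentation...');
    });

    // Ensure the command is disposed of when the extension is deactivated.
    context.subscriptions.push(disposable);
}

exports.activate = activate;
```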
Remember how earlier I mentioned that I am already using the Hello World sample? In that case, the `registerCommand` call is already provisioned for you. You should be able to just set a different command name - but also make sure to update it in `package.json`.
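The relevant portion of `package.json` looks roughly like this (the command id, title, and keys here are placeholders rather than the extension’s actual values):

```json
{
  "contributes": {
    "commands": [
      {
        "command": "extension.showDocs",
        "title": "Docs: Look Up Selection"
      }
    ],
    "keybindings": [
      {
        "command": "extension.showDocs",
        "key": "ctrl+f1",
        "mac": "cmd+f1",
        "when": "editorTextFocus"
      }
    ]
  }
}
```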
There are two things happening here:
The `commands` section determines the command itself, with a helpful title that will be shown in the Command Palette.
The `keybindings` section determines the key combinations by which the user can trigger the command.
In my case, I only have one command, so I decided to bind it to a dedicated key combination (Cmd+F1 on a Mac).
If we look back at the command trigger, notice that I am triggering the execution of the command with the `previewHtml` parameter. You can read up more on that on the Complex Commands page of the official documentation - it’s used to render custom HTML in a WebView alongside the main content that the user is editing.
Before the command registration, I am also declaring a custom document content provider that will generate the preview:
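Stripped to its essentials, the provider looks something like this (a simplified sketch - in the extension it is registered with `vscode.workspace.registerTextDocumentContentProvider`, and the real class performs the request and parsing described below):

```javascript
// Simplified sketch of a text document content provider. In the
// extension, an instance is registered against a custom URI scheme,
// and the previewHtml command is pointed at a URI with that scheme.
class DocsContentProvider {
    // Called by VS Code whenever the preview document needs (re)rendering.
    provideTextDocumentContent(uri) {
        // The real implementation queries docs.microsoft.com and builds
        // the API card; here we just return placeholder markup.
        return '<html><body><p>Documentation for: ' + String(uri) + '</p></body></html>';
    }
}
```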
This class processes the data in several steps (refer to the full source):
Confirm that the language used in the editor is one of the languages we support.
Depending on the active language and the user selection, a request is performed to get the basic API information from the docs.microsoft.com servers.
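The request itself can be sketched as follows - note that the exact endpoint and query parameters shown here are assumptions for illustration, not the documented contract of the search service:

```javascript
// Build the search request URL from the user's selection and the active
// language. Endpoint and parameter names are illustrative placeholders.
function buildSearchUrl(selection, language) {
    const query = encodeURIComponent(selection);
    return 'https://docs.microsoft.com/api/search?search=' + query +
           '&locale=en-us&category=' + encodeURIComponent(language);
}

// In the extension this URL is then fetched with superagent, which
// returns a promise for the JSON payload.
console.log(buildSearchUrl('String.Format', 'dotnet'));
```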
Parsing additional data
The search service doesn’t return all the information that we need - I also need to get the API signature and some sample code that demonstrates how specific API entities work, where available. For that, there are two XPath queries:
There is also an exception - in some cases, API documentation is structured in a way where API entities are grouped on the same page, instead of having a dedicated page. Luckily for us, the search service already accounts for that and gives us a hint - the pound sign (`#`). Depending on it, we can adjust the XPath query:
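That adjustment boils down to something like this (the XPath expressions are illustrative placeholders, not the extension’s exact queries):

```javascript
// Pick an XPath query depending on whether the search result URL points
// at a dedicated page or at an anchor (#fragment) inside a grouped page.
function getSignatureQuery(resultUrl) {
    if (resultUrl.includes('#')) {
        // Grouped page: scope the lookup to the element with the anchor id.
        const anchor = resultUrl.split('#')[1];
        return "//*[@id='" + anchor + "']/following::pre[1]";
    }
    // Dedicated page: grab the first signature block on the page.
    return '//pre[1]';
}
```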
Last but not least, in certain cases the signature fragment is wrapped in a DIV that throws off the XPath lookup. We can eliminate it with some regex:
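A sketch of that cleanup (the exact pattern in the extension may differ):

```javascript
// Strip any wrapping <div ...> / </div> tags from the signature fragment
// so they don't throw off the lookup. Blunt, but fine for a prototype.
function stripDivWrapper(html) {
    return html.replace(/<\/?div[^>]*>/gi, '').trim();
}

console.log(stripDivWrapper('<div class="codeHeader"><pre>int Foo();</pre></div>'));
// → <pre>int Foo();</pre>
```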
Once the data is processed, I pass the associated object back into the processing pipeline.
When the JSON is received from the search service, the data is wrapped in custom HTML. I’ve decided to use Materialize CSS to present the data in the view, so that there is a little API entity card with two key tabs - the API signature and a sample.
Here, I am manually constructing the HTML - granted, there are better ways to do that, but for a prototype this should do the trick (by the way, I accept PRs - I would be happy to learn the best way to do this).
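A simplified sketch of that construction (the card and tab class names come from Materialize CSS; the exact layout here is illustrative, and the real page also needs the Materialize stylesheet linked in its head):

```javascript
// Manually assemble the preview markup: a Materialize card with two
// tabs, one for the API signature and one for a code sample.
function buildPreviewHtml(api) {
    return [
        '<div class="card">',
        '  <div class="card-content">',
        '    <span class="card-title">' + api.title + '</span>',
        '    <ul class="tabs">',
        '      <li class="tab"><a href="#signature">Signature</a></li>',
        '      <li class="tab"><a href="#sample">Sample</a></li>',
        '    </ul>',
        '    <div id="signature"><pre>' + api.signature + '</pre></div>',
        '    <div id="sample"><pre>' + api.sample + '</pre></div>',
        '  </div>',
        '</div>'
    ].join('\n');
}
```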
We end up with an experience like this:
Building and Publishing
To build the extension, I am using VSTS with a Hosted Linux Agent. There is already a Gulp-compatible build template that you can leverage. For that, create a `.npmrc` file in the root of your extension folder, with the following content:
unsafe-perm = true
When npm runs, it takes configuration settings either directly from the command line, or from `.npmrc` files. Since in VSTS we can’t directly control the command line, I need to pass the configuration through the file - and in this case, to avoid access errors (especially common when building VS Code automation), you need to allow unsafe permissions.
In addition, you will need a `gulpfile.js` that will describe how the build will happen:
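A minimal sketch of such a gulpfile - the task contents here are illustrative (a real build will likely chain install, compile, and test steps, and `vsce`, the VS Code extension packaging tool, must be available on the agent):

```javascript
// gulpfile.js - minimal build description sketch.
const gulp = require('gulp');
const exec = require('child_process').exec;

// Package the extension into a .vsix; the agent invokes the default task.
gulp.task('default', (done) => {
    exec('vsce package', done);
});
```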