Compatibility/System Addon/Interventions
What are WebCompat interventions?
WebCompat interventions (aka site patches) are workarounds added to desktop and Android Firefox for specific websites or web services. They are used as a pressure-release valve to improve the situation for users more quickly, before proper fixes can be developed. They can be shipped to users within a matter of hours if desired, without requiring a Firefox dot release, a full update, or a browser restart. They are listed, and may be toggled, at about:compat.
Overview of how interventions work
Interventions are bundled into a Web Extension which is shipped as part of Firefox, but may also be independently updated with "train-hopping" (aka "out of band") releases. This extension uses standard Web Extension APIs and features, with some extra experimental APIs for advanced fixes and feature-detection.
Interventions affect web pages in simple ways, such as by altering the user-agent string, the web APIs being presented, the CSS being applied, or the HTTP headers or responses. A test framework confirms that interventions are still working and necessary on the live pages, and can also be useful during the development of interventions.
Interventions are defined in interventions.json. Each definition lists related Bugzilla bug numbers, the affected URLs, the rough type of breakage resolved, and the actual interventions which should be applied under which circumstances.
If any CSS or JS changes are also required, they must be placed in separate files bundled with the extension (JS here, CSS here), with the names of all such files then also listed in the content_scripts sections of the appropriate interventions in the definition. These content scripts behave like any other Web Extension content scripts, and may be loaded in all frames as necessary.
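For a rough idea of the shape of a definition, here is a sketch. Note that all field names, file names, and values below are illustrative approximations only; consult existing definitions in interventions.json for the authoritative schema:

```json
{
  "label": "bug1234567-example.com-breakage-fix",
  "bugs": {
    "1234567": { "matches": ["*://*.example.com/*"] }
  },
  "interventions": [
    {
      "platforms": ["android"],
      "content_scripts": {
        "js": ["example_fix.js"],
        "css": ["example_fix.css"],
        "all_frames": true
      }
    }
  ]
}
```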
Do not forget to rebuild your local instance of Firefox if you add or remove any files in the extension, or alter interventions.json, as these files require special processing before being picked up by the build. However, changes made only within existing CSS and JS files should be picked up by re-running without rebuilding (on desktop).
Before landing a patch, it's advised to run the extension's tests, which will ensure that the JSON and related JS/CSS changes match expectations: mach test --headless browser/extensions/webcompat (these tests may otherwise fail in CI when you try to land your patch, causing a back-out).
The final requirement for patches is that the version number of the WebCompat extension must be incremented any time a patch-stack lands which alters the add-on with any user-facing (non-test) changes. Otherwise there is a possibility that Firefox's extension-updating mechanism will not reload the extension correctly. Simply bumping the middle version number component in the extension's manifest.json is sufficient (please do not change the other components; it will cause frowns and possibly headaches). To avoid having to rebase your patch because other simultaneous patches also bumped the version number, it's worth reaching out to the WebCompat engineers on Slack or Matrix to help coordinate the landing of your patch. (We hope to find a way to remove this frustration in the future.)
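Such a version bump in the extension's manifest.json might look like the following (the version numbers here are purely illustrative; only the middle component changes):

```diff
-  "version": "139.2.0",
+  "version": "139.3.0",
```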
For reference, here is a complete patch for a simple user-agent override, and here is one for a slightly-more-involved CSS fix.
Speeding up development of common interventions
Artifact builds (Android or desktop) should of course be used to save a lot of time when working on interventions and their tests.
It is possible to add or update an intervention definition in a running copy of Firefox, by visiting the console in about:debugging for the Web Compatibility Interventions extension and calling interventions.updateInterventions(). This is limited to alterations which could be made to interventions.json, so adding new JS or CSS is not possible (but it is possible to apply existing CSS/JS, for instance to spoof window.chrome, navigator.userAgentData, etc). These changes will not survive a Firefox restart, but once a working JSON configuration is found, it may be copy-and-pasted into a final patch against interventions.json (with strict JSON quoting, of course).
It may also be desirable to use our test framework's capabilities to speed up development (discussed below).
Many interventions are already implemented which do a range of interesting things, so it is good to search and ask around before possibly re-inventing one. For instance, here are some particularly interesting examples which may be useful:
- re-writing a response header with a regexp
- fixing a site with issues with the FastClick JS library (or one with legacy FastClick)
- spoofing navigator.userAgentData
- spoofing window.chrome and navigator.vendor
- running late window.load event listeners
- loading a particular script sync instead of async
- hiding element.style.mozTransform
- overriding a site's own JS property
- ignoring a specific setTimeout call
Live automated tests
When and Why
It is not a hard requirement to land interventions with live test coverage, but interventions without tests may not be routinely checked, and so will likely become obsolete or broken over time without anyone noticing. By contrast, automated tests are run by QA and/or the WebCompat team at least once per Firefox release cycle, letting us reliably detect broken or obsolete interventions quite quickly. So please do add tests in either the initial patch, or ASAP in a follow-up.
Of course not all interventions can be tested this way. Our test framework uses WebDriver, which is sometimes detected and completely blocked by sites. Issues may also be too intermittent to be reliably tested with automated tools. Similarly, login requirements, captchas, two-factor authentication, or VPN requirements can make it tricky (or impossible) to add tests, although some can be handled relatively simply with a semi-automated test, which still automates the complicated steps-to-reproduce while leaving the initial captcha or VPN setup to be done manually (see below for examples).
How
Two basic test cases are generally written for each intervention: one which confirms that the intervention is still needed (run with the intervention disabled), and one which confirms that the intervention still works (run with it enabled). More may be added as desired, such as cases targeted at a specific platform.
In general, test cases will open a tab to the given website and proceed to mimic the user's behavior to perform the steps to reproduce. They then check the HTML, CSS, JS and other conditions to verify that the page is working as expected. Tests often need to do very little beyond loading a page's URL and checking if a specific HTML element or bit of text is displayed. However, advanced chrome JS and WebDriver APIs may also be used to check HTTP requests, whether specific event listeners are added or fired, and so forth (see some examples below).
The tests are simple Python scripts which will seem familiar to those who have worked with Selenium. Simply add a new file to the test directory using the same filename convention. It will likely be easiest to copy-and-paste one of the existing tests as a template.
Tests are run with mach test-interventions --bug 99999. They may be run in --headless/-H mode, though if a test somehow hangs, it can be much less of a hassle to close the hanging test's browser window in non-headless mode than it is to find and kill any stale geckodriver processes and such with commands like lsof -i:9222.
Note that screenshots are taken by default on each test failure, unless --no-failure-screenshots/-s is added to the command line (these options help when running the tests in the future, and in bulk). Screenshots default to being saved to the current working directory.
Tests may be annotated to indicate when they require logins, are only needed on specific platforms or versions of Firefox, require specific preferences to be set, and other such conditions. The current list of such annotations is here. A summary of all tests skipped, and the reasons, will be output along with the pass/fail summary when running the tests in bulk.
If you wish to run multiple specific tests -- for instance if a change to an intervention might affect several of them -- it's possible to list them all in the --bug/-b parameter. Note that an actual bug number isn't necessary, as the framework runs every test whose filename matches one of the given strings, so the following will match and run multiple tests: ./mach test-interventions -b nintendo axisbank 1819450 1925.
Speeding up development of common interventions, part 2
When running tests on an Android device or in the emulator, an APK re-build will trigger each time you run mach test-interventions. This adds some possibly-pointless overhead when only the test itself needs to be changed and re-run, and not the actual intervention. To skip that step and save the time, you may run the tests with a specific environment variable prepended to the command line: MOZ_DISABLE_ADB_INSTALL=True ./mach test-interventions ...
Note that Android-specific interventions do not usually require a full Fenix build to test, so the test framework defaults to using the GeckoView demo browser for efficiency. Tests may be annotated to use Fenix where the demo browser does not suffice. Moreover, it is generally possible to develop and test Android interventions in responsive design mode, without even using an emulator or a live device.
(Coming soon!) To test interventions which have only soft requirements for running on a specific platform, pass the --platform-override/-P option to the test framework (with a value of android, linux, mac, or windows). This runs the tests as though on that platform, complete with the user-agent string and key navigator properties set to their anticipated values, and responsive design mode started up for android. Since interventions only rarely need to run on the actual platform they are designed for, this can really save time (though do still test on the device in question, as RDM in particular might not mimic the real thing well enough!).
This platform-overriding capability can also be very useful while actually developing an intervention. It's possible to work on a test-case while developing an intervention, having that test-case start up a debugging environment for another platform for you, with or without interventions active, and even perform some of the steps to reproduce for you. By simply adding a long-stalling line of code at an appropriate point in the test (like client.await_css("will never match anything", timeout=4000)), it's possible to save quite a bit of time while prototyping an intervention for another platform, especially Android (where the WebCompat issue reproduces in responsive design mode, at least).
Examples
Here are some select tests with useful code examples for your perusal:
- an archetypical simple test for a user-agent override for a browser block (with a platform requirement)
- an archetypical simple check for a window property like meta viewport
- demonstrating slightly more involved interactions with a site
- waiting for elements with slightly more advanced conditions
- performing slightly more involved user interactions
- checking for redirects
- handling keyboard input
- testing scrolling
- testing scrolling another way
- interacting with a subtle slider
- detecting multiple possibilities, including the site being down
- detecting that a VPN is required
- handling captchas and logins
- handling login with manual 2FA
- handling a site which causes redirect loops
- detecting a page reload cycle
- checking for superfluous scrollbars while keeping the site from playing a video
- setting the window size, checking if an element is blank (all one color), and more
- comparing before and after 'screenshots' of an element
- calling out to chrome JS to activate picture-in-picture on Android
- a deeper dive handling logins and subtly different layouts in Android vs desktop
- a very deep rabbit-hole with subtle site-interactions, for the brave
More Info
- Automated Testing - more info on running automated tests.