Read the Docs: Documentation Simplified

Read the Docs simplifies software documentation by automatically building, versioning, and hosting your docs. This enables many “docs like code” workflows, keeping your code and documentation as close together as possible.

Never out of sync 🔄

Whenever you push code to your favorite version control system, whether that is Git or Mercurial, Read the Docs will automatically build your docs so your code and documentation are always up-to-date. Read more about VCS Integrations.

Multiple versions 🗂️

Read the Docs can host and build multiple versions of your docs so having a 1.0 version of your docs and a 2.0 version of your docs is as easy as having a separate branch or tag in your version control system. Read more about Versioned Documentation.

Open Source and User Focused 💓

Our code is free and open source. Our company is bootstrapped and 100% user focused. Read the Docs Community hosts documentation for over 100,000 large and small open source projects, in almost every human and computer language. Read the Docs for Business supports hundreds of organizations with product and internal documentation.

You can find out more about all the Main Features in these pages.

First steps

Are you new to software documentation or are you looking to use your existing docs with Read the Docs? Learn about documentation authoring tools such as Sphinx and MkDocs to help you create fantastic documentation for your project.

Read the Docs tutorial

In this tutorial you will create a documentation project on Read the Docs by importing a Sphinx project from a GitHub repository, tailoring its configuration, and exploring several useful features of the platform.

The tutorial is aimed at people interested in learning how to use Read the Docs to host their documentation projects. You will fork a fictional software library similar to the one developed in the official Sphinx tutorial. No prior experience with Sphinx is required, and you can follow this tutorial without having done the Sphinx one.

The only things you will need to follow along are a web browser, an Internet connection, and a GitHub account (you can register for a free account if you don’t have one). You will use Read the Docs Community, which means that the project will be public.

Getting started

Preparing your project on GitHub

To start, sign in to GitHub and navigate to the tutorial GitHub template, where you will see a green Use this template button. Click it to open a new page that will ask you for some details:

  • Leave the default “Owner”, or change it to something better suited to a tutorial project.

  • Enter an appropriate “Repository name”, for example rtd-tutorial.

  • Make sure the project is “Public”, rather than “Private”.

After that, click on the green Create repository from template button, which will generate a new repository under your personal account (or the account of your choosing). This is the repository you will import into Read the Docs, and it contains the following files:

README.rst

Basic description of the repository; you will leave it untouched.

pyproject.toml

Python project metadata that makes it installable. Useful for automatic documentation generation from sources.

lumache.py

Source code of the fictional Python library.

docs/

Directory holding all the Sphinx documentation sources, including some required dependencies in docs/requirements.txt, the Sphinx configuration docs/source/conf.py, and the root document docs/source/index.rst written in reStructuredText.

GitHub template for the tutorial


Sign up for Read the Docs

To sign up for a Read the Docs account, navigate to the Sign Up page and choose the option Sign up with GitHub. On the authorization page, click the green Authorize readthedocs button.

GitHub authorization page


Note

Read the Docs needs elevated permissions to perform certain operations that ensure that the workflow is as smooth as possible, like installing webhooks. If you want to learn more, check out Permissions for connected accounts.

After that, you will be redirected to Read the Docs, where you will need to confirm your e-mail and username. Clicking the Sign Up » button will create your account and redirect you to your dashboard.

By now, you should have two email notifications:

  • One from GitHub, telling you that “A third-party OAuth application … was recently authorized to access your account”. You don’t need to do anything about it.

  • Another one from Read the Docs, prompting you to “verify your email address”. Click on the link to finalize the process.

With that, you have created your account on Read the Docs and are ready to import your first project.

Welcome!

Read the Docs empty dashboard


Note

Our commercial site offers some extra features, like support for private projects. You can learn more about our two different sites.

First steps

Importing the project to Read the Docs

To import your GitHub project to Read the Docs, first click on the Import a Project button on your dashboard (or browse to the import page directly). You should see your GitHub account under the “Filter repositories” list on the right. If the list of repositories is empty, click the 🔄 button; after that, all your repositories will appear in the center.

Import projects workflow


Locate your rtd-tutorial project (clicking next ›› at the bottom if you have several pages of projects), and then click on the ➕ button to the right of the name. The next page will ask you to fill in some details about your Read the Docs project:

Name

The name of the project. It has to be unique across the whole service, so it is better to prepend your username, for example {username}-rtd-tutorial.

Repository URL

The URL that contains the sources. Leave the automatically filled-in value.

Repository type

The version control system used; leave it as “Git”.

Default branch

The name of the default branch of the project; leave it as main.

Edit advanced project options

Leave it unchecked; we will make some changes later.

After hitting the Next button, you will be redirected to the project home. You just created your first project on Read the Docs! 🎉

Project home


Checking the first build

Read the Docs will try to build the documentation of your project right after you create it. To see the build logs, click on the Your documentation is building link on the project home, or alternatively navigate to the “Builds” page, then open the one on top (the most recent one).

If the build has not finished by the time you open it, you will see a spinner next to an “Installing” or “Building” indicator, meaning that it is still in progress.

First successful documentation build


When the build finishes, you will see a green “Build completed” indicator, the completion date, the elapsed time, and a link to see the corresponding documentation. If you now click on View docs, you will see your documentation live!

HTML documentation live on Read the Docs


Note

Advertisement is one of our main sources of revenue. If you want to learn more about how we fund our operations and explore options to go ad-free, check out our Sustainability page.

If you don’t see the ad, you might be using an ad blocker. Our EthicalAds network respects your privacy, doesn’t target you, and tries to be as unobtrusive as possible, so we would like to kindly ask you not to block us ❤️

Basic configuration changes

You can now proceed to make some basic configuration adjustments. Navigate back to the project page and click on the ⚙ Admin button, which will open the Settings page.

First of all, add the following text in the description:

Lumache (/lu’make/) is a Python library for cooks and food lovers that creates recipes mixing random ingredients.

Then set the project homepage to https://world.openfoodfacts.org/, and write food, python in the list of tags. All this information will be shown on your project home.

After that, configure your email so you get a notification if the build fails. To do so, click on the Notifications link on the left, type the email where you would like to get the notification, and click the Add button. After that, your email will be shown under “Existing Notifications”.

Trigger a build from a pull request

Read the Docs allows you to trigger builds from GitHub pull requests and gives you a preview of how the documentation will look with those changes.

To enable that functionality, first click on the Advanced Settings link on the left under the ⚙ Admin menu, check the “Build pull requests for this project” checkbox, and click the Save button at the bottom of the page.

Next, navigate to your GitHub repository, locate the file docs/source/index.rst, and click on the ✏️ icon on the top-right with the tooltip “Edit this file” to open a web editor (more information on their documentation).

File view on GitHub before launching the editor


In the editor, add the following sentence to the file:

docs/source/index.rst
Lumache has its documentation hosted on Read the Docs.

Write an appropriate commit message, and choose the “Create a new branch for this commit and start a pull request” option, typing a name for the new branch. When you are done, click the green Propose changes button, which will take you to the new pull request page, and there click the Create pull request button below the description.

Read the Docs building the pull request from GitHub


After opening the pull request, a Read the Docs check will appear, indicating that it is building the documentation for that pull request. If you click on the Details link while it is building, you will access the build logs; otherwise, it will take you directly to the documentation. When you are satisfied, you can merge the pull request!

Customizing the build process

The Settings page of the project home allows you to change some global configuration values of your project. In addition, you can further customize the building process using the .readthedocs.yaml configuration file. This has several advantages:

  • The configuration lives next to your code and documentation, tracked by version control.

  • It can be different for every version (more on versioning in the next section).

  • Some configurations are only available using the config file.

Read the Docs works without this configuration file by making some decisions on your behalf, such as which Python version to use and how to install the requirements.

Tip

Settings that apply to the entire project are controlled in the web dashboard, while settings that are version- or build-specific belong in the YAML file.

Upgrading the Python version

For example, to explicitly use Python 3.8 to build your project, navigate to your GitHub repository, click on the Add file button, and add a .readthedocs.yaml file with these contents to the root of your project:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.8"

The purpose of each key is:

version

Mandatory; specifies version 2 of the configuration file.

build.os

States the name of the base image; required in order to specify the Python version.

build.tools.python

Declares the Python version to be used.

After you commit these changes, go back to your project home, navigate to the “Builds” page, and open the new build that just started. You will notice that one of the lines contains python3.8: if you click on it, you will see the full output of the corresponding command, stating that it used Python 3.8.6 to create the virtual environment.

Read the Docs build using Python 3.8


Making warnings more visible

If you navigate to your HTML documentation, you will notice that the index page looks correct, but actually the API section is empty. This is a very common issue with Sphinx, and the reason is stated in the build logs. On the build page you opened before, click on the View raw link on the top right, which opens the build logs in plain text, and you will see several warnings:

WARNING: [autosummary] failed to import 'lumache': no module named lumache
...
WARNING: autodoc: failed to import function 'get_random_ingredients' from module 'lumache'; the following exception was raised:
No module named 'lumache'
WARNING: autodoc: failed to import exception 'InvalidKindError' from module 'lumache'; the following exception was raised:
No module named 'lumache'

To spot these warnings more easily and allow you to address them, you can add the sphinx.fail_on_warning option to your Read the Docs configuration file. For that, navigate to GitHub, locate the .readthedocs.yaml file you created earlier, click on the ✏️ icon, and add these contents:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.8"

sphinx:
  fail_on_warning: true

At this point, if you navigate back to your “Builds” page, you will see a Failed build, which is exactly the intended result: the Sphinx project is not properly configured yet, and instead of rendering an empty API page, now the build fails.

The reason sphinx.ext.autosummary and sphinx.ext.autodoc fail to import the code is that it is not installed. Luckily, the .readthedocs.yaml file also allows you to specify which requirements to install.

To install the library code of your project, go back to editing .readthedocs.yaml on GitHub and add a python key to it:

.readthedocs.yaml
python:
  # Install our python package before building the docs
  install:
    - method: pip
      path: .

With this change, Read the Docs will install the Python code before starting the Sphinx build, which will finish seamlessly. If you go now to the API page of your HTML documentation, you will see the lumache summary!

Enabling PDF and EPUB builds

Sphinx can build several other formats in addition to HTML, such as PDF and EPUB. You might want to enable these formats for your project so your users can read the documentation offline.

To do so, add this extra content to your .readthedocs.yaml:

.readthedocs.yaml
sphinx:
  fail_on_warning: true

formats:
  - pdf
  - epub

After this change, PDF and EPUB downloads will be available both from the “Downloads” section of the project home and from the flyout menu.

Downloads available from the flyout menu

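For reference, combining all the changes made in this section, the complete .readthedocs.yaml should now look like this:

```yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.8"

sphinx:
  fail_on_warning: true

formats:
  - pdf
  - epub

python:
  # Install our python package before building the docs
  install:
    - method: pip
      path: .
```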

Versioning documentation

Read the Docs allows you to have several versions of your documentation, in the same way that you have several versions of your code. By default, it creates a latest version that points to the default branch of your version control system (main in the case of this tutorial), and that’s why the URLs of your HTML documentation contain the string /latest/.

Creating a new version

Let’s say you want to create a 1.0 version of your code, with a corresponding 1.0 version of the documentation. For that, first navigate to your GitHub repository, click on the branch selector, type 1.0.x, and click on “Create branch: 1.0.x from ‘main’” (more information on their documentation).

Next, go to your project home, click on the Versions button, and under “Active Versions” you will see two entries:

  • The latest version, pointing to the main branch.

  • A new stable version, pointing to the origin/1.0.x branch.

List of active versions of the project


Right after you created your branch, Read the Docs created a new special version called stable pointing to it, and started building it. When the build finishes, the stable version will be listed in the flyout menu and your readers will be able to choose it.

Note

Read the Docs follows some rules to decide whether to create a stable version pointing to your new branch or tag. To simplify, it will check if the name resembles a version number like 1.0, 2.0.3 or 4.x.

Now you might want to set stable as the default version, rather than latest, so that users see the stable documentation when they visit the root URL of your documentation (while still being able to change the version in the flyout menu).

For that, go to the Advanced Settings link under the ⚙ Admin menu of your project home, choose stable in the “Default version*” dropdown, and hit Save at the bottom. Done!

Modifying versions

Both latest and stable are now active, which means that they are visible for users, and new builds can be triggered for them. In addition to these, Read the Docs also created an inactive 1.0.x version, which will always point to the 1.0.x branch of your repository.

List of inactive versions of the project


Let’s activate the 1.0.x version. To do that, go to the “Versions” page on your project home, locate 1.0.x under “Activate a version”, and click on the Activate button. This will take you to a new page with two checkboxes, “Active” and “Hidden”. Check only “Active”, and click Save.

After you do this, 1.0.x will appear on the “Active Versions” section, and a new build will be triggered for it.

Note

You can read more about hidden versions in our documentation.

Show a warning for old versions

When your project matures, the number of versions might increase. Sometimes you will want to warn your readers when they are browsing an old or outdated version of your documentation.

To showcase how to do that, let’s create a 2.0 version of the code: navigate to your GitHub repository, click on the branch selector, type 2.0.x, and click on “Create branch: 2.0.x from ‘main’”. This will trigger the following:

  • Since 2.0.x is your newest branch, stable will switch to tracking it.

  • A new 2.0.x version will be created on your Read the Docs project.

  • Since you already have an active stable version, 2.0.x will be activated.

From this point on, the 1.0.x version is no longer the most up-to-date one. To display a warning to your readers, go to the ⚙ Admin menu of your project home, click on the Advanced Settings link on the left, enable the “Show version warning” checkbox, and click the Save button.

If you now browse the 1.0.x documentation, you will see a warning on top encouraging you to browse the latest version instead. Neat!

Warning for old versions


Getting insights from your projects

Once your project is up and running, you will probably want to understand how readers are using your documentation, addressing some common questions like:

  • What pages are visited most?

  • What search terms are used most frequently?

  • Are readers finding what they are looking for?

Read the Docs offers you some analytics tools to find out the answers.

Browsing Traffic Analytics

The Traffic Analytics view shows the top viewed documentation pages of the past 30 days, plus a visualization of the daily views during that period. To generate some artificial views on your newly created project, you can first click around the different pages of your project; these visits will immediately be counted in the current day’s statistics.

To see the Traffic Analytics view, go back to the project page, click on the ⚙ Admin button, and then click on the Traffic Analytics section. You will see the list of pages in descending order of visits, as well as a plot similar to the one below.

Traffic Analytics plot


Note

The Traffic Analytics view explained above gives you a simple overview of how your readers browse your documentation. It has the advantage that it stores no identifying information about your visitors, and therefore it respects their privacy. However, you might want to get more detailed data by enabling Google Analytics. Notice though that we take some extra measures to respect user privacy when they visit projects that have Google Analytics enabled, and this might reduce the number of visits counted.

Finally, you can also download this data for closer inspection. To do that, scroll to the bottom of the page and click on the Download all data button. That will prompt you to download a CSV file that you can process any way you want.

Browsing Search Analytics

Apart from traffic analytics, Read the Docs also offers the possibility to inspect what search terms your readers use on your documentation. This can inform decisions on what areas to reinforce, or what parts of your project are less understood or more difficult to find.

To generate some artificial search statistics on the project, go to the HTML documentation, locate the Sphinx search box on the left, type ingredients, and press the Enter key. You will be redirected to the search results page, which will show two entries.

Next, go back to the ⚙ Admin section of your project page, and then click on the Search Analytics section. You will see a table with the most searched queries (including the ingredients one you just typed), how many results each query returned, and how many times it was searched. Below the queries table, you will also see a visualization of the daily number of search queries during the past 30 days.

Most searched terms


As with Traffic Analytics, you can also download the whole dataset in CSV format by clicking on the Download all data button.

Where to go from here

This is the end of the tutorial. You started by forking a GitHub repository and importing it into Read the Docs, built its HTML documentation, and then went through a series of steps to customize the build process, tweak the project configuration, and add new versions.

Here are some resources to continue learning about documentation and Read the Docs:

Happy documenting!

Getting Started with Sphinx

Sphinx is a powerful documentation generator that has many great features for writing technical documentation including:

  • Generate web pages, printable PDFs, documents for e-readers (ePub), and more all from the same sources

  • Write documentation in reStructuredText or Markdown

  • An extensive system of cross-referencing code and documentation

  • Syntax highlighted code samples

  • A vibrant ecosystem of first and third-party extensions

If you want to learn more about how to create your first Sphinx project, read on. If you are interested in exploring the Read the Docs platform using an already existing Sphinx project, check out Read the Docs tutorial.

Quick start

See also

If you already have a Sphinx project, check out our Importing Your Documentation guide.

Assuming you have Python already, install Sphinx:

pip install sphinx

Create a directory inside your project to hold your docs:

cd /path/to/project
mkdir docs

Run sphinx-quickstart in there:

cd docs
sphinx-quickstart

This quick start will walk you through creating the basic configuration; in most cases, you can just accept the defaults. When it’s done, you’ll have an index.rst, a conf.py and some other files. Add these to revision control.

Now, edit your index.rst and add some information about your project. Include as much detail as you like (refer to the reStructuredText syntax or this template if you need help). Build them to see how they look:

make html

Your index.rst has been built into index.html in your documentation output directory (typically _build/html/index.html). Open this file in your web browser to see your docs.


Your Sphinx project is built

Edit your files and rebuild until you like what you see, then commit your changes and push to your public repository. Once you have Sphinx documentation in a public repository, you can start using Read the Docs by importing your docs.

Warning

We strongly recommend pinning the Sphinx version used to build your project’s docs, to avoid potential future incompatibilities.
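One common way to pin is with a requirements file that your documentation build installs; the file path and version number shown here are illustrative:

```text
# docs/requirements.txt — version number illustrative
sphinx==4.5.0
```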

Using Markdown with Sphinx

You can use Markdown (via MyST) and reStructuredText in the same Sphinx project. We support this natively on Read the Docs, and you can set it up locally:

pip install myst-parser

Then in your conf.py:

extensions = ['myst_parser']

You can now continue writing your docs in .md files and they will work with Sphinx. Read the Getting started with MyST in Sphinx docs for additional instructions.
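As a minimal illustration (the file name is hypothetical), a Markdown page parsed by MyST can use Sphinx roles inline alongside regular Markdown:

```markdown
# Usage

This page is written in Markdown and parsed by MyST.
Sphinx roles work inline too, for example {doc}`index`.
```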

Get inspired!

You might learn more and find the first ingredients for starting your own documentation project by looking at Example projects - view live example renditions and copy & paste from the accompanying source code.

External resources

Here are some external resources to help you learn more about Sphinx.

Getting Started with MkDocs

MkDocs is a documentation generator that focuses on speed and simplicity. It has many great features including:

  • Preview your documentation as you write it

  • Easy customization with themes and extensions

  • Writing documentation with Markdown

Note

MkDocs is a great choice for building technical documentation. However, Read the Docs also supports Sphinx, another tool for writing and building documentation.

Quick start

See also

If you already have an MkDocs project, check out our Importing Your Documentation guide.

Assuming you have Python already, install MkDocs:

pip install mkdocs

Set up your MkDocs project:

mkdocs new .

This command creates mkdocs.yml which holds your MkDocs configuration, and docs/index.md which is the Markdown file that is the entry point for your documentation.
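The generated mkdocs.yml starts out minimal; a lightly customized version might look like this (the site name and theme shown are illustrative):

```yaml
site_name: My Project
theme: readthedocs
nav:
  - Home: index.md
```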

You can edit this index.md file to add more details about your project and then you can build your documentation:

mkdocs serve

This command builds your Markdown files into HTML and starts a development server to browse your documentation. Open up http://127.0.0.1:8000/ in your web browser to see your documentation. You can make changes to your Markdown files and your docs will automatically rebuild.


Your MkDocs project is built

Once you have your documentation in a public repository such as GitHub, Bitbucket, or GitLab, you can start using Read the Docs by importing your docs.

Warning

We strongly recommend pinning the MkDocs version used to build your project’s docs, to avoid potential future incompatibilities.
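As with Sphinx, a requirements file is one common way to pin; the path and version number here are illustrative:

```text
# docs/requirements.txt — version number illustrative
mkdocs==1.4.2
```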

Get inspired!

You might learn more and find the first ingredients for starting your own documentation project by looking at Example projects - view live example renditions and copy & paste from the accompanying source code.

External resources

Here are some external resources to help you learn more about MkDocs.

Importing Your Documentation

To import a public documentation repository, visit your Read the Docs dashboard and click Import. For private repositories, please use Read the Docs for Business.

Automatically import your docs

If you have connected your Read the Docs account to GitHub, Bitbucket, or GitLab, you will see a list of your repositories that we are able to import. To import one of these projects, just click the import icon next to the repository you’d like to import. This will bring up a form that is already filled with your project’s information. Feel free to edit any of these properties, and then click Next to build your documentation.


Importing a repository

Manually import your docs

If you do not have a connected account, you will need to select Import Manually and enter the information for your repository yourself. You will also need to manually configure the webhook for your repository. When importing your project, you will be asked for the repository URL, along with some other information for your new project. The URL is normally the URL or path name you’d use to checkout, clone, or branch your repository. Some examples:

  • Git: https://github.com/ericholscher/django-kong.git

  • Mercurial: https://bitbucket.org/ianb/pip

  • Subversion: http://varnish-cache.org/svn/trunk

  • Bazaar: lp:pasta

Add an optional homepage URL and some tags, and then click Next.

Once your project is created, you’ll need to manually configure the repository webhook if you would like to have new changes trigger builds for your project on Read the Docs. Go to your project’s Admin > Integrations page to configure a new webhook, or see our steps for webhook creation for more information on this process.

Note

The Admin page can be found at https://readthedocs.org/dashboard/<project-slug>/edit/. You can access all of the project settings from the admin page sidebar.


Building your documentation

Within a few seconds of completing the import process, your code will automatically be fetched from your repository, and the documentation will be built. Check out our Build process page to learn more about how Read the Docs builds your docs, and to troubleshoot any issues that arise.

Some documentation projects require additional configuration to build such as specifying a certain version of Python or installing additional dependencies. You can configure these settings in a .readthedocs.yaml file. See our Configuration File docs for more details.

It is also important to note that the default version of Sphinx is v1.8.5. We recommend setting the version your project uses explicitly.
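For example, a .readthedocs.yaml that installs pinned documentation dependencies from a requirements file might look like this (the file path is an assumption about your project layout):

```yaml
version: 2

python:
  install:
    - requirements: docs/requirements.txt
```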

Read the Docs will host multiple versions of your code. You can read more about how to use this well on our Versioned Documentation page.

If you have any more trouble, don’t hesitate to reach out to us. The Site Support page has more information on getting in touch.

Choosing Between Our Two Platforms

Users often ask what the differences are between Read the Docs Community and Read the Docs for Business.

While many of our features are available on both of these platforms, there are some key differences between our two platforms.

Read the Docs Community

Read the Docs Community is exclusively for hosting open source documentation. We support open source communities by providing free documentation building and hosting services, for projects of all sizes.

Important points:

  • Open source project hosting is always free

  • All documentation sites include advertising

  • Only supports public VCS repositories

  • All documentation is publicly accessible to the world

  • Less build time and fewer build resources (memory & CPU)

  • Email support included only for issues with our platform

  • Documentation is organized by projects

You can sign up for an account at https://readthedocs.org.

Read the Docs for Business

Read the Docs for Business is meant for companies and users who have more complex requirements for their documentation project. This can include commercial projects with private source code, projects that can only be viewed with authentication, and even large scale projects that are publicly available.

Important points:

  • Hosting plans require a paid subscription plan

  • There is no advertising on documentation sites

  • Allows importing private and public repositories from VCS

  • Supports private versions that require authentication to view

  • Supports team authentication, including SSO with Google, GitHub, GitLab, and Bitbucket

  • More build time and more build resources (memory & CPU)

  • Includes 24x5 email support, with 24x7 SLA support available

  • Documentation is organized by organization, giving more control over permissions

You can sign up for an account at https://readthedocs.com.

Questions?

If you have a question about which platform would be best, email us at support@readthedocs.org.

Read the Docs feature overview

Learn more about configuring your automated documentation builds and some of the core features of Read the Docs.

Main Features

Read the Docs offers a number of platform features that are possible because we both build and host documentation for you.

Automatic Documentation Deployment

We integrate with GitHub, Bitbucket, and GitLab. We automatically create webhooks in your repository, which tell us whenever you push a commit. We will then build and deploy your docs every time you push a commit. This enables a workflow that we call Continuous Documentation:

Once you set up your Read the Docs project, your users will always have up-to-date documentation.

Learn more about VCS Integrations.

Custom Domains & White Labeling

When you import a project to Read the Docs, we assign you a URL based on your project name. You are welcome to use this URL, but we also fully support custom domains for all our documentation projects.

Learn more about Custom Domains.

Versioned Documentation

We support multiple versions of your documentation, so that users can find the exact docs for the version they are using. We build this on top of the version control system that you’re already using. Each version on Read the Docs is just a tag or branch in your repository.

You don’t need to change how you version your code; we work with whatever process you are already using. If you don’t have a process, we can recommend one.

Learn more about Versioned Documentation.

Downloadable Documentation

Read the Docs supports building multiple formats for Sphinx-based projects:

  • PDF

  • ePub

  • Zipped HTML

This means that every commit that you push will automatically update your PDFs as well as your HTML.

This feature is great for users who are about to get on a plane and want offline docs, and it also lets you ship your entire set of documentation as a single file.

Learn more about Downloadable Documentation.

Open Source and Customer Focused

Read the Docs cares deeply about our customers and our community. As part of that commitment, all of the source code for Read the Docs is open source. This means there’s no vendor lock-in, and you are welcome to contribute the features you want or run your own instance.

Our bootstrapped company is owned and controlled by the founders, and fully funded by our customers and advertisers. That allows us to focus 100% of our attention on building the best possible product for you.

Learn more About Read the Docs.

Configuration File

In addition to using the admin panel of your project to configure your project, you can use a configuration file in the root of your project. The configuration file should be named .readthedocs.yaml.

Note

Some other variants, like readthedocs.yaml, .readthedocs.yml, etc., are deprecated.

The main advantages of using a configuration file over the web interface are:

  • Settings are per version rather than per project.

  • Settings live in your VCS.

  • They enable reproducible build environments over time.

  • Some settings are only available using a configuration file.

Tip

Using a configuration file is the recommended way of using Read the Docs.

Configuration File V2

Read the Docs supports configuring your documentation builds with a YAML file. The configuration file must be in the root directory of your project and be named .readthedocs.yaml.

All options are applied to the version containing this file. Below is an example YAML file which shows the most common configuration options:

# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"
    # You can also specify other tool versions:
    # nodejs: "16"
    # rust: "1.55"
    # golang: "1.17"

# Build documentation in the docs/ directory with Sphinx
sphinx:
   configuration: docs/conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#    - pdf

# Optionally declare the Python requirements required to build your docs
python:
   install:
   - requirements: docs/requirements.txt
Supported settings

Read the Docs validates every configuration file. Any configuration option that isn’t supported will make the build fail. This is to avoid typos and provide feedback on invalid configurations.

Warning

When using a v2 configuration file, the local settings from the web interface are ignored.

version
Required

true

Example:

version: 2

Warning

If you don’t provide the version, v1 will be used.

formats

Additional formats of the documentation to be built, apart from the default HTML.

Type

list

Options

htmlzip, pdf, epub, all

Default

[]

Example:

version: 2

# Default
formats: []
version: 2

# Build PDF & ePub
formats:
  - epub
  - pdf

Note

You can use the all keyword to indicate all formats.

version: 2

# Build all formats
formats: all

Warning

At the moment, only Sphinx supports additional formats. pdf, epub, and htmlzip output is not yet supported when using MkDocs.

python

Configuration of the Python environment to be used.

version: 2

python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .
      extra_requirements:
        - docs
    - method: setuptools
      path: another/package
  system_packages: true
python.version

Warning

This option is now deprecated and replaced by build.tools.python. See python.version (legacy) for the description of this option.
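As a hedged sketch of the migration (the versions shown are just examples from the supported options), a config that previously pinned the interpreter with the legacy python.version key would move that pin under build.tools:

```yaml
# Before (legacy) -- shown as comments for comparison:
# python:
#   version: "3.7"

# After: the interpreter is pinned under build.tools
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.9"
```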

python.install

List of installation methods for packages and requirements. You can combine several of the following methods.

Type

list

Default

[]

Requirements file

Install packages from a requirements file.

The path to the requirements file, relative to the root of the project.

Key

requirements

Type

path

Required

true

Example:

version: 2

python:
  version: "3.7"
  install:
    - requirements: docs/requirements.txt
    - requirements: requirements.txt

Warning

If you are using a Conda environment to manage the build, this setting will not have any effect. Instead add the extra requirements to the environment file of Conda.
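For instance, extra requirements can be declared directly in the Conda environment file, including pip-installed packages under a pip entry. This is a hypothetical environment.yml; the package names are illustrative:

```yaml
# environment.yml -- illustrative example
name: docs
channels:
  - conda-forge
dependencies:
  - python=3.9
  - sphinx
  - pip
  - pip:
      - sphinx-rtd-theme
```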

Packages

Install the project using python setup.py install or pip install.

The path to the package, relative to the root of the project.

Key

path

Type

path

Required

true

The installation method.

Key

method

Options

pip, setuptools

Default

pip

Extra requirements section to install in addition to the package dependencies.

Warning

You need to install your project with pip to use extra_requirements.

Key

extra_requirements

Type

list

Default

[]

Example:

version: 2

python:
  version: "3.7"
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs
    - method: setuptools
      path: package

With the previous settings, Read the Docs will execute the following commands:

pip install .[docs]
python package/setup.py install
python.system_packages

Give the virtual environment access to the global site-packages directory.

Type

bool

Default

false

Warning

If you are using a Conda environment to manage the build, this setting will not have any effect, since the virtual environment creation is managed by Conda.

conda

Configuration for Conda support.

version: 2

conda:
  environment: environment.yml
conda.environment

The path to the Conda environment file, relative to the root of the project.

Type

path

Required

true

build

Configuration for the documentation build process. This allows you to specify the base Read the Docs image used to build the documentation, and control the versions of several tools: Python, Node.js, Rust, and Go.

version: 2

build:
  os: ubuntu-20.04
  tools:
    python: "3.9"
    nodejs: "16"
    rust: "1.55"
    golang: "1.17"
build.os

The Docker image used for building the docs. Image names refer to the Ubuntu version used as the base operating system for the build.

Note

Arbitrary Docker images are not supported.

Type

string

Options

ubuntu-20.04, ubuntu-22.04

Required

true

build.tools

Version specifiers for each tool. It must contain at least one tool.

Type

dict

Options

python, nodejs, rust, golang

Required

true

build.tools.python

Python version to use. You can choose from several interpreters and versions: CPython, PyPy, Miniconda, and Mamba.

Note

If you use Miniconda3 or Mambaforge, you can select the Python version using the environment.yml file. See our Conda Support guide for more information.

Type

string

Options
  • 2.7

  • 3 (last stable CPython version)

  • 3.6

  • 3.7

  • 3.8

  • 3.9

  • 3.10

  • 3.11

  • pypy3.7

  • pypy3.8

  • pypy3.9

  • miniconda3-4.7

  • mambaforge-4.10
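As a sketch of the Conda-based workflow mentioned in the note above, a Miniconda interpreter is selected under build.tools.python and the Python version itself is then pinned inside the environment file referenced by conda.environment (the file name is the conventional one, assumed here):

```yaml
version: 2

build:
  os: ubuntu-20.04
  tools:
    # The Python version itself is selected in environment.yml
    python: "miniconda3-4.7"

conda:
  environment: environment.yml
```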

build.tools.nodejs

Node.js version to use.

Type

string

Options
  • 14

  • 16

  • 18

build.tools.rust

Rust version to use.

Type

string

Options
  • 1.55

  • 1.61

build.tools.golang

Go version to use.

Type

string

Options
  • 1.17

  • 1.18

build.apt_packages

List of APT packages to install. Our build servers run Ubuntu 18.04, with the default set of package repositories installed. We don’t currently support PPAs or other custom repositories.

Type

list

Default

[]

version: 2

build:
  apt_packages:
    - libclang
    - cmake

Note

When possible, avoid installing Python packages using apt (python3-numpy, for example); use pip or Conda instead.

build.jobs

Commands to be run before or after Read the Docs’ pre-defined build jobs. This allows you to run custom commands at a particular moment in the build process. See Build customization for more details.

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  jobs:
    pre_create_environment:
      - echo "Command run at 'pre_create_environment' step"
    post_build:
      - echo "Command run at 'post_build' step"
      - echo `date`

Note

Each key under build.jobs must be a list of strings. build.os and build.tools are also required to use build.jobs.

Type

dict

Allowed keys

post_checkout, pre_system_dependencies, post_system_dependencies, pre_create_environment, post_create_environment, pre_install, post_install, pre_build, post_build

Required

false

Default

{}

build.commands

Specify a list of commands that Read the Docs will run during the build process. When build.commands is used, none of the pre-defined build jobs will be executed (see Build customization for more details). This allows you to run custom commands and control the build process completely. The _readthedocs/html directory (relative to the checkout’s path) will be uploaded and hosted by Read the Docs.

Warning

This feature is in a beta phase and could change incompatibly, or even be removed completely, in the near future. It does not yet support some of the Read the Docs integrations like the flyout menu, search, and ads. However, integrating all of them is part of the plan. Use it at your own risk.

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  commands:
    - pip install pelican
    - pelican --settings docs/pelicanconf.py --output _readthedocs/html/ docs/

Note

build.os and build.tools are also required when using build.commands.

Type

list

Required

false

Default

[]

sphinx

Configuration for Sphinx documentation (this is the default documentation type).

version: 2

sphinx:
  builder: html
  configuration: conf.py
  fail_on_warning: true

Note

If you want to pin Sphinx to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before October 2020.

sphinx.builder

The builder type for the Sphinx documentation.

Type

string

Options

html, dirhtml, singlehtml

Default

html

Note

The htmldir builder option was renamed to dirhtml to match the naming used by Sphinx. Configurations using the old name will continue working.

sphinx.configuration

The path to the conf.py file, relative to the root of the project.

Type

path

Default

null

If the value is null, Read the Docs will try to find a conf.py file in your project.

sphinx.fail_on_warning

Turn warnings into errors (-W and --keep-going options). This means the build fails if there is a warning and exits with exit status 1.

Type

bool

Default

false

mkdocs

Configuration for MkDocs documentation.

version: 2

mkdocs:
  configuration: mkdocs.yml
  fail_on_warning: false

Note

If you want to pin MkDocs to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before March 2021.

mkdocs.configuration

The path to the mkdocs.yml file, relative to the root of the project.

Type

path

Default

null

If the value is null, Read the Docs will try to find a mkdocs.yml file in your project.

mkdocs.fail_on_warning

Turn warnings into errors. This means that the build stops at the first warning and exits with exit status 1.

Type

bool

Default

false

submodules

VCS submodules configuration.

Note

Only Git is supported at the moment.

Warning

You can’t use include and exclude settings for submodules at the same time.

version: 2

submodules:
  include:
    - one
    - two
  recursive: true
submodules.include

List of submodules to be included.

Type

list

Default

[]

Note

You can use the all keyword to include all submodules.

version: 2

submodules:
  include: all
submodules.exclude

List of submodules to be excluded.

Type

list

Default

[]

Note

You can use the all keyword to exclude all submodules. This is the same as include: [].

version: 2

submodules:
  exclude: all
submodules.recursive

Do a recursive clone of the submodules.

Type

bool

Default

false

Note

This is ignored if there aren’t submodules to clone.

Schema

You can see the complete schema here.

Legacy build specification

The legacy build specification used a different set of Docker images, and only allowed you to specify the Python version. It remains supported for backwards compatibility. See build above for a more flexible alternative method.

version: 2

build:
  image: latest
  apt_packages:
    - libclang
    - cmake

python:
  version: "3.7"

The legacy build specification also supports the apt_packages key described above.

Warning

When using the new specification, the build.image and python.version options cannot be used. Doing so will cause the build to fail.

build (legacy)
build.image (legacy)

The Docker image used for building the docs.

Type

string

Options

stable, latest

Default

latest

Each image supports different Python versions and has different packages installed, as defined here:

  • stable: 2, 2.7, 3, 3.5, 3.6, 3.7, pypy3.5

  • latest: 2, 2.7, 3, 3.5, 3.6, 3.7, 3.8, pypy3.5

python.version (legacy)

The Python version (this depends on build.image (legacy)).

Type

string

Default

3

Note

Make sure to use quotes (") to make it a string. We previously supported using numbers here, but that approach is deprecated.

Warning

If you are using a Conda environment to manage the build, this setting will not have any effect, as the Python version is managed by Conda.

Migrating from v1
Changes
  • The version setting is required. See version.

  • The default value of the formats setting has changed to [] and it doesn’t include the values from the web interface.

  • The top-level setting requirements_file was moved to python.install, and we no longer try to find a requirements file if the option isn’t present. See Requirements file.

  • The setting conda.file was renamed to conda.environment. See conda.environment.

  • The build.image setting has been replaced by build.os. See build.os. Alternatively, you can use the legacy build.image that now has only two options: latest (default) and stable.

  • The settings python.setup_py_install and python.pip_install were replaced by python.install, which now accepts a path to the package. See Packages.

  • The setting python.use_system_site_packages was renamed to python.system_packages. See python.system_packages.

  • The build will fail if there are invalid keys (strict mode).
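Putting these changes together, here is a hedged before/after sketch of a v1 file migrated to v2 (the file contents are illustrative):

```yaml
# v1 (legacy) -- shown as comments for comparison:
# requirements_file: docs/requirements.txt
# python:
#   version: 3.7

# v2 equivalent
version: 2

build:
  os: ubuntu-20.04
  tools:
    python: "3.7"

python:
  install:
    - requirements: docs/requirements.txt
```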

Warning

Some values from the web interface are no longer respected; please see Migrating from the web interface if you have settings there.

New settings
Migrating from the web interface

This should be pretty straightforward: go to Admin > Advanced settings and find the respective setting here.

Some settings in the web interface are per project rather than per version, and these aren’t supported via the configuration file:

  • Name

  • Repository URL

  • Repository type

  • Language

  • Programming language

  • Project homepage

  • Tags

  • Single version

  • Default branch

  • Default version

  • Show versions warning

  • Privacy level

  • Analytics code

VCS Integrations

Read the Docs provides integrations with several VCS providers to detect changes to your documentation and versions, mainly using webhooks. Integrations are configured with your repository provider, such as GitHub, Bitbucket or GitLab, and with each change to your repository, Read the Docs is notified. When we receive an integration notification, we determine if the change is related to an active version for your project, and if it is, a build is triggered for that version.

You’ll find a list of configured integrations on your project’s Admin dashboard, under Integrations. You can select any of these integrations to see the integration detail page. This page has additional configuration details and a list of HTTP exchanges that have taken place for the integration, including the Payload URL needed by the repository provider such as GitHub, GitLab, or Bitbucket.

Integration Creation

If you have connected your Read the Docs account to GitHub, Bitbucket, or GitLab, an integration will be set up automatically for your repository. However, if your project was not imported through a connected account, you may need to manually configure an integration for your project.

To manually set up an integration, go to the Admin > Integrations > Add integration page and select the integration type you’d like to add. After you have added the integration, you’ll see a link to information about it.

As an example, the URL pattern looks like this: https://readthedocs.org/api/v2/webhook/<project-name>/<id>/.

Use this URL when setting up a new integration with your provider – these steps vary depending on the provider.

Note

If your account is connected to the provider, we’ll try to set up the integration automatically. If something fails, you can still set up the integration manually.

GitHub
  • Go to the Settings page for your project

  • Click Webhooks > Add webhook

  • For Payload URL, use the URL of the integration on Read the Docs, found on the project’s Admin > Integrations page. You may need to prepend https:// to the URL.

  • For Content type, both application/json and application/x-www-form-urlencoded work

  • Leave the Secrets field blank

  • Select Let me select individual events, and mark Branch or tag creation, Branch or tag deletion, Pull requests and Pushes events

  • Ensure Active is enabled; it is by default

  • Finish by clicking Add webhook. You may be prompted to enter your GitHub password to confirm your action.

You can verify if the webhook is working at the bottom of the GitHub page under Recent Deliveries. If you see a Response 200, then the webhook is correctly configured. For a 403 error, it’s likely that the Payload URL is incorrect.

Note

The webhook token, intended for the GitHub Secret field, is not yet implemented.

Bitbucket
  • Go to the Settings > Webhooks > Add webhook page for your project

  • For URL, use the URL of the integration on Read the Docs, found on the Admin > Integrations page

  • Under Triggers, Repository push should be selected

  • Finish by clicking Save

GitLab
  • Go to the Settings > Webhooks page for your project

  • For URL, use the URL of the integration on Read the Docs, found on the Admin > Integrations page

  • Leave the default Push events selected; additionally, mark Tag push events and Merge request events

  • Finish by clicking Add Webhook

Gitea

These instructions apply to any Gitea instance.

Warning

This isn’t officially supported, but using the “GitHub webhook” is an effective workaround, because Gitea uses the same payload as GitHub. The generic webhook is not compatible with Gitea. See issue #8364 for more details. Official support may be implemented in the future.

On Read the Docs:

  • Manually create a “GitHub webhook” integration (this will show a warning about the webhook not being correctly set up, which will go away once the webhook is configured in Gitea)

On your Gitea instance:

  • Go to the Settings > Webhooks page for your project on your Gitea instance

  • Create a new webhook of type “Gitea”

  • For URL, use the URL of the integration on Read the Docs, found on the Admin > Integrations page

  • Leave the default HTTP Method as POST

  • For Content type, both application/json and application/x-www-form-urlencoded work

  • Leave the Secret field blank

  • Select Choose events, and mark Branch or tag creation, Branch or tag deletion and Push events

  • Ensure Active is enabled; it is by default

  • Finish by clicking Add Webhook

  • Test the webhook with Delivery test

Finally, on Read the Docs, check that the warnings have disappeared and the delivery test triggered a build.

Using the generic API integration

For repositories that are not hosted with a supported provider, we also offer a generic API endpoint for triggering project builds. Similar to webhook integrations, this integration has a specific URL, which can be found on the project’s Integrations dashboard page (Admin > Integrations).

Token authentication is required to use the generic endpoint; you will find this token on the integration details page. The token should be passed as a request parameter, either as form data or as part of JSON data input.

Parameters

This endpoint accepts the following arguments during an HTTP POST:

branches

The names of the branches to trigger builds for. This can either be an array of branch name strings, or just a single branch name string.

Default: latest

token

The integration token found on the project’s Integrations dashboard page (Admin > Integrations).

For example, the cURL command to build the dev branch, using the token 1234, would be:

curl -X POST -d "branches=dev" -d "token=1234" https://readthedocs.org/api/v2/webhook/example-project/1/

A command like the one above could be called from a cron job or from a hook inside Git, Subversion, Mercurial, or Bazaar.

Authentication

This endpoint requires authentication. If authenticating with an integration token, a check will determine if the token is valid and matches the given project. If instead an authenticated user is used to make this request, a check will be performed to ensure the authenticated user is an owner of the project.

Debugging webhooks

If you are experiencing problems with an existing webhook, you may be able to use the integration detail page to help debug the issue. Each project integration, such as a webhook or the generic API endpoint, stores the HTTP exchange that takes place between Read the Docs and the external source. You’ll find a list of these exchanges in any of the integration detail pages.

Resyncing webhooks

It might be necessary to re-establish a webhook if you are noticing problems. To resync a webhook from Read the Docs, visit the integration detail page and follow the directions for re-syncing your repository webhook.

Payload validation

If your project was imported through a connected account, we create a secret for every integration that offers a way to verify that a webhook request is legitimate. Currently, GitHub and GitLab offer a way to check this.

Troubleshooting

Webhook activation failed. Make sure you have the necessary permissions

If you see this error, make sure your user has permissions on the repository. In the case of GitHub, check that you have granted the Read the Docs OAuth App access to your organization.

My project isn’t automatically building

If your project isn’t automatically building, you can check your integration on Read the Docs to see the payload sent to our servers. If there is no recent activity on your Read the Docs project webhook integration, then it’s likely that your VCS provider is not configured correctly. If there is payload information on your Read the Docs project, you might need to verify that your versions are configured to build correctly.

Either way, it may help to either resync your webhook integration (see Resyncing webhooks for information on this process), or set up an entirely new webhook integration.

Custom Domains

Custom domains allow you to serve your documentation from your own domain. This is great for maintaining a consistent brand for your documentation and application.

By default, your documentation is served from a Read the Docs subdomain using the project’s slug:

  • <slug>.readthedocs.io for Read the Docs Community

  • <slug>.readthedocs-hosted.com for Read the Docs for Business.

For example, if you import your project and it gets the slug example-docs, it will be served from https://example-docs.readthedocs.io.

Adding a custom domain

To set up your custom domain, follow these steps:

  1. Go to the Admin tab of your project.

  2. Click on Domains.

  3. Enter your domain.

  4. Mark the Canonical option if you want to use this domain as your canonical domain.

  5. Click on Add.

  6. At the top of the next page you’ll find the value of the DNS record that you need to point your domain to. For Read the Docs Community this is readthedocs.io, and for Read the Docs for Business the record is in the form of <hash>.domains.readthedocs.com.

    Note

    For a subdomain like docs.example.com add a CNAME record, and for a root domain like example.com use an ANAME or ALIAS record.
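As a sketch of the two cases in the note above (domain names are illustrative, and ALIAS/ANAME support depends on your DNS provider), the records might look like:

```
; subdomain: CNAME to Read the Docs Community
docs.example.com.    CNAME   readthedocs.io.

; root domain: ANAME/ALIAS record, where the provider supports it
example.com.         ALIAS   readthedocs.io.
```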

By default, we provide a validated SSL certificate for the domain, managed by Cloudflare. The SSL certificate issuance should happen within a few minutes, but might take up to one hour. See SSL certificate issue delays for more troubleshooting options.

As an example, our blog’s DNS record looks like this:

dig +short CNAME blog.readthedocs.com
 readthedocs.io.

Warning

We don’t support pointing subdomains or root domains to a project using A records. DNS A records require a static IP address and our IPs may change without notice.

Removing a custom domain

To remove a custom domain:

  1. Go to the Admin tab of your project.

  2. Click on Domains.

  3. Click the Remove button next to the domain.

  4. Click Confirm on the confirmation page.

Warning

Once a domain is removed, your previous documentation domain is no longer served by Read the Docs, and any request for it will return a 404 Not Found!

Strict Transport Security (HSTS) and other custom headers

By default, we do not return a Strict Transport Security header (HSTS) for user custom domains. This is a conscious decision, as HSTS can be misconfigured in a way that is not easily reversible. For both Read the Docs Community and Read the Docs for Business, HSTS and other custom headers can be set upon request.

We always return the HSTS header with a max-age of at least one year for our own domains including *.readthedocs.io, *.readthedocs-hosted.com, readthedocs.org and readthedocs.com.

Please contact Site Support if you want to add a custom header to your domain.

Multiple documentation sites as sub-folders of a domain

You may host multiple documentation repositories as sub-folders of a single domain. For example, docs.example.org/projects/repo1 and docs.example.org/projects/repo2. This is a way to boost the SEO of your website.

See Subprojects for more information.

Troubleshooting

SSL certificate issue delays

The status of your domain validation and certificate can always be seen on the details page for your domain under Admin > Domains > YOURDOMAIN.TLD (details).

Domains are usually validated and a certificate issued within minutes. However, if you set up the domain in Read the Docs without provisioning the necessary DNS changes and then update DNS hours or days later, validation can be delayed because there is an exponential back-off in validation.

Tip

Loading the domain details in the Read the Docs dashboard and saving the domain again will force a revalidation.

Migrating from GitBook

If your custom domain was previously used in GitBook, contact GitBook support (via live chat on their website) to remove the domain name from their DNS zone so that your domain name works with Read the Docs; otherwise it will always redirect to GitBook.

Versioned Documentation

Read the Docs supports multiple versions of your repository. On initial import, we will create a latest version. This will point at the default branch defined in your VCS (by default, main in Git and default in Mercurial).

If your project has any tags or branches with a name following semantic versioning, we also create a stable version, tracking your most recent release. If you want a custom stable version, create either a tag or branch in your project with that name.

When you have VCS Integrations configured for your repository, we will automatically build each version when you push a commit.

How we envision versions working

In the normal case, the latest version will always point to the most up-to-date development code. If you develop on a branch that is different from the default for your VCS, you should set the Default Branch to that branch.

You should push a tag for each version of your project. These tags should be numbered in a way that is consistent with semantic versioning. This will map to your stable branch by default.

Note

In fact, we parse your tag names against the rules given by PEP 440. This spec allows “normal” version numbers like 1.4.2 as well as pre-releases. Alpha versions and release candidates are examples of pre-releases, and they look like this: 2.0a1.

We only consider non pre-releases for the stable version of your documentation.

If you have documentation changes on a long-lived branch, you can build those too. This allows you to see how the new docs will be built on that branch of the code. Generally you won’t have more than one active branch over a long period of time. The main exception here is release branches, which are branches that are maintained over time for a specific release number.

Version States

States define the visibility of a version across the site. You can change the states of a version from the Versions tab of your project.

Active
  • Active

    • Docs for this version are visible

    • Builds can be triggered for this version

  • Inactive

    • Docs for this version aren’t visible

    • Builds can’t be triggered for this version

When you deactivate a version, its docs are removed.

Hidden
  • Not hidden and Active

    • This version is listed on the flyout menu on the docs site

    • This version is shown in search results on the docs site

  • Hidden and Active

    • This version isn’t listed on the flyout menu on the docs site

    • This version isn’t shown in search results from another version on the docs site (like on search results from a superproject)

Hiding a version doesn’t make it private; any user with a link to its docs can see it. This is useful when:

  • You no longer support a version, but you don’t want to remove its docs.

  • You have a work-in-progress version and don’t want to publish its docs just yet.

Note

Active versions that are hidden will be listed as Disallow: /path/to/version/ in the default robots.txt file created by Read the Docs.
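A hedged sketch of the resulting default robots.txt, where 2.0 is an active but hidden version (the /en/2.0/ path is illustrative):

```
User-agent: *
Disallow: /en/2.0/
```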

Privacy levels

Note

Privacy levels are only supported on Read the Docs for Business.

Public

Public versions and their documentation are visible to everyone.

Private

Private versions are available only to people who have permissions to see them. They will not display on any list view, and will 404 when you link them to others. If you want to share your docs temporarily, see Sharing.

In addition, if you want other users to view the build page of your public versions, you’ll need to set the privacy level of your project to public.

Logging out

When you log in to a documentation site, you will stay logged in until you close your browser. To log out, click on the Log out link in your documentation’s flyout menu. This is usually located in the bottom right or bottom left, depending on the theme design. This will log you out from the current domain, but not end any other session that you have active.


Tags and branches

Read the Docs supports two workflows for versioning: based on tags or branches. If you have at least one tag, tags will take preference over branches when selecting the stable version.

Version Control Support Matrix

             git       hg        bzr      svn
Tags         Yes       Yes       Yes      No
Branches     Yes       Yes       Yes      No
Default      master    default   N/A      trunk

Version warning

This is a banner that appears at the top of every page of versions that aren’t stable or latest. The banner contains a link redirecting users to the latest version of your docs.

This feature is disabled by default on new projects; you can enable it in the admin section of your docs (Admin > Advanced Settings).

Note

The banner will be injected in an HTML element with the main role or in the main tag. For example:

<div role="main">
  <!-- The banner would be injected here -->
  ...
</div>
<main>
  <!-- The banner would be injected here -->
  ...
</main>

Redirects on root URLs

When a user hits the root URL for your documentation, for example https://pip.readthedocs.io/, they will be redirected to the Default version. This defaults to latest, but could also point to your latest released version.

Downloadable Documentation

Read the Docs supports building multiple formats for Sphinx-based projects:

  • PDF

  • ePub

  • Zipped HTML

This means that every commit that you push will automatically update your PDFs as well as your HTML.

This is enabled by the formats key in our config file. A simple example is here:

# Build PDF & ePub
formats:
  - epub
  - pdf
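Zipped HTML can be enabled as well; its config value is htmlzip. A sketch enabling all three downloadable formats:

```yaml
# Build zipped HTML, ePub & PDF
formats:
  - htmlzip
  - epub
  - pdf
```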

If you want to see an example, you can download the Read the Docs documentation in the following formats:

Use cases

This functionality is great for anyone who needs documentation when they aren’t connected to the internet. Users who are about to get on a plane can grab a PDF and have the entire doc set ready for their trip.

The other value is having the entire docset in a single file. You can send a user an email with a single PDF or ePub and they’ll have all the docs in one place.

Deleting downloadable content

The entries in the Downloads section of your project dashboard reflect the formats specified in your config file for each active version.

This means that if you wish to remove downloadable content for a given version, you can do so by removing the matching formats key from your config file.

Documentation Hosting Features

The main way that users interact with your documentation is via the hosted HTML that we serve. We support a number of important features that you would expect for a documentation host.

Subdomain support

Every project has a subdomain that is available to serve its documentation based on its slug. If you go to <slug>.readthedocs.io, it should show you the latest version of your documentation, for example https://docs.readthedocs.io. For Read the Docs for Business the subdomain looks like <slug>.readthedocs-hosted.com.

See also

Custom Domains.

Content Delivery Network (CDN)

A CDN is used for making documentation pages faster for your users. This is done by caching the documentation page content in multiple data centers around the world, and then serving docs from the data center closest to the user.

We support CDNs on both of our sites, as described below.

On Read the Docs Community, we are able to provide a CDN to all the projects that we host. This service is graciously sponsored by CloudFlare.

We bust the cache on the CDN when the following actions happen:

  • Your Project is saved

  • Your Domain is saved

  • A new version is built

Sitemaps

Sitemaps allow us to inform search engines about URLs that are available for crawling, and to communicate additional information about each URL of the project:

  • when it was last updated,

  • how often it changes,

  • how important it is in relation to other URLs in the site, and

  • what translations are available for a page.

Read the Docs automatically generates a sitemap for each project it hosts to improve results when performing a search on these search engines. This allows us to prioritize results based on the version number, for example to show stable as the top result, followed by latest and then all the project’s versions sorted following semantic versioning.

Custom Not Found (404) Pages

If you want your project to use a custom page for not found pages instead of the “Maze Found” default, you can put a 404.html at the top level of your project’s HTML output.

When a 404 is returned, Read the Docs checks if there is a 404.html in the root of your project’s output corresponding to the current version and uses it if it exists. Otherwise, it tries to fall back to the 404.html page corresponding to the default version of the project.

We recommend the sphinx-notfound-page extension, which Read the Docs maintains. It automatically creates a 404.html page for your documentation, matching the theme of your project. See its documentation for how to install and customize it.

Custom robots.txt Pages

robots.txt files allow you to customize how your documentation is indexed in search engines. We automatically generate one for you, which automatically hides versions which are set to Hidden.

The robots.txt file will be served from the default version of your Project. This is because the robots.txt file is served at the top-level of your domain, so we must choose a version to find the file in. The default version is the best place to look for it.

Sphinx and MkDocs have different ways of outputting static files in the build:

Sphinx

Sphinx uses the html_extra_path option to add static files to the output. You need to create a robots.txt file and put it under the path defined in html_extra_path.
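For example, in conf.py (a sketch; it assumes your robots.txt lives in an extra/ directory next to your Sphinx sources):

```python
# conf.py (sketch): everything in extra/, including robots.txt,
# is copied to the root of the HTML output at build time.
html_extra_path = ["extra"]
```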

MkDocs

MkDocs needs the robots.txt file to be in the directory defined by the docs_dir configuration value.
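For example, with the default MkDocs layout (a sketch; docs is the default docs_dir value), placing the file at docs/robots.txt is enough:

```yaml
# mkdocs.yml
docs_dir: docs  # robots.txt goes at docs/robots.txt
```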

Traffic Analytics

Traffic Analytics lets you see which documents your users are reading. This allows you to understand how your documentation is being used, so you can focus on expanding and updating parts people are reading most.

To see a list of the top pages from the last month, go to the Admin tab of your project, and then click on Traffic Analytics.

Traffic analytics demo

You can also access analytics data from search results.

Note

The amount of analytics data stored for download depends on which site you’re using:

  • On the Community site, the last 90 days are stored.

  • On the Commercial one, storage ranges from 30 days to unlimited

    (check out the pricing page).

Enabling Google Analytics on your Project

Read the Docs has native support for Google Analytics. You can enable it by:

  • Go to Admin > Advanced Settings in your project.

  • Fill in the Analytics code field with your Google Tracking ID (for example UA-123456674-1).

Options to manage Google Analytics

Once your documentation rebuilds it will include your Analytics tracking code and start sending data. Google Analytics usually takes 60 minutes, and sometimes can take up to a day before it starts reporting data.

Note

Read the Docs takes some extra precautions with analytics to protect user privacy. As a result, users with Do Not Track enabled will not be counted for the purpose of analytics.

For more details, see the Do Not Track section of our privacy policy.

Disabling Google Analytics on your project

Google Analytics can be completely disabled on your own projects. To disable Google Analytics:

  • Go to Admin > Advanced Settings in your project.

  • Check the box Disable Analytics.

Your documentation will need to be rebuilt for this change to take effect.

Preview Documentation from Pull Requests

Your project can be configured to build and host documentation for every new pull request. Previewing changes to your documentation during review makes it easier to catch documentation formatting and display issues introduced in pull requests.

Features

Build on pull request events

We create and build a new version when a pull request is opened, and rebuild the version whenever a new commit is pushed.

Build status report

Your project’s pull request build status will show as one of your pull request’s checks. This status will update as the build is running, and will show a success or failure status when the build completes.

GitHub build status reporting for pull requests.

Warning banner

A warning banner is shown at the top of documentation pages to let readers know that this version isn’t the main version for the project.

Note

Warning banners are available only for Sphinx projects.

Configuration

To enable this feature for your project, your Read the Docs account needs to be connected to an account with a supported VCS provider. See Limitations for more information.

If your account is already connected:

  1. Go to your project dashboard

  2. Go to Admin, then Advanced settings

  3. Enable the Build pull requests for this project option

  4. Click on Save

Privacy levels

Note

Privacy levels are only supported on Read the Docs for Business.

By default, all docs built from pull requests are private. To change their privacy level:

  1. Go to your project dashboard

  2. Go to Admin, then Advanced settings

  3. Select your option in Privacy level of builds from pull requests

  4. Click on Save

Privacy levels work the same way as normal versions.

Limitations

  • Only available for GitHub and GitLab currently. Bitbucket is not yet supported.

  • To enable this feature, your Read the Docs account needs to be connected to an account with your VCS provider.

  • Builds from pull requests have the same memory and time limitations as regular builds.

  • Additional formats like PDF and EPUB aren’t built, to reduce build time.

  • Search queries will default to the default experience for your tool. This is a feature we plan to add, but don’t want to overwhelm our search indexes used in production.

  • The built documentation is kept for 90 days after the pull request has been closed or merged.

Troubleshooting

No new builds are started when I open a pull request

The most common cause is that your repository’s webhook is not configured to send Read the Docs pull request events. You’ll need to re-sync your project’s webhook integration to reconfigure the Read the Docs webhook.

To re-sync your project’s webhook, go to your project’s admin dashboard, Integrations, and then select the webhook integration for your provider. Follow the directions to re-sync the webhook, or create a new webhook integration.

You may also notice this behavior if your Read the Docs account is not connected to your VCS provider account, or if it needs to be reconnected. You can (re)connect your account by going to your Username dropdown, Settings, then to Connected Services.

Build status is not being reported to your VCS provider

If opening a pull request does start a new build, but the build status is not being updated with your VCS provider, then your connected account may have outdated or insufficient permissions.

Make sure that you have granted access to the Read the Docs OAuth App for your personal or organization GitHub account. You can also try reconnecting your account with your VCS provider.

Build Notifications and Webhooks

Note

Currently we don’t send notifications or trigger webhooks on builds from pull requests.

Email notifications

Read the Docs allows you to configure emails that can be sent on failing builds. This makes sure you know when your builds have failed.

Take these steps to enable build notifications using email:

  • Go to Admin > Notifications in your project.

  • Fill in the Email field under the New Email Notifications heading

  • Submit the form

You should now get notified by email when your builds fail!

Build Status Webhooks

Read the Docs can also send webhooks when builds are triggered, succeed, or fail.

Take these steps to enable build notifications using a webhook:

  • Go to Admin > Webhooks in your project.

  • Fill in the URL field and select what events will trigger the webhook

  • Modify the payload or leave the default (see below)

  • Click on Save

URL and events for a webhook

Every time one of the checked events triggers, Read the Docs will send a POST request to your webhook URL. The default payload will look like this:

{
    "event": "build:triggered",
    "name": "docs",
    "slug": "docs",
    "version": "latest",
    "commit": "2552bb609ca46865dc36401dee0b1865a0aee52d",
    "build": "15173336",
    "start_date": "2021-11-03T16:23:14",
    "build_url": "https://readthedocs.org/projects/docs/builds/15173336/",
    "docs_url": "https://docs.readthedocs.io/en/latest/"
}

When a webhook is sent, a new entry will be added to the “Recent Activity” table. By clicking on each individual entry, you will see the server response, the webhook request, and the payload.

Activity of a webhook

Custom payload examples

You can customize the payload of the webhook to suit your needs, as long as it is valid JSON. Below are a couple of examples, and in the following section you will find all the available variables.

Custom payload

Slack
{
  "attachments": [
    {
      "color": "#db3238",
      "blocks": [
        {
          "type": "section",
          "text": {
            "type": "mrkdwn",
            "text": "*Read the Docs build failed*"
          }
        },
        {
          "type": "section",
          "fields": [
            {
              "type": "mrkdwn",
              "text": "*Project*: <{{ project.url }}|{{ project.name }}>"
            },
            {
              "type": "mrkdwn",
              "text": "*Version*: {{ version.name }} ({{ build.commit }})"
            },
            {
              "type": "mrkdwn",
              "text": "*Build*: <{{ build.url }}|{{ build.id }}>"
            }
          ]
        }
      ]
    }
  ]
}

More information on the Slack Incoming Webhooks documentation.

Discord
{
  "username": "Read the Docs",
  "content": "Read the Docs build failed",
  "embeds": [
    {
      "title": "Build logs",
      "url": "{{ build.url }}",
      "color": 15258703,
      "fields": [
        {
          "name": "*Project*",
          "value": "{{ project.url }}",
          "inline": true
        },
        {
          "name": "*Version*",
          "value": "{{ version.name }} ({{ build.commit }})",
          "inline": true
        },
        {
          "name": "*Build*",
          "value": "{{ build.url }}"
        }
      ]
    }
  ]
}

More information on the Discord webhooks documentation.

Variable substitutions reference
{{ event }}

Event that triggered the webhook, one of build:triggered, build:failed, or build:passed.

{{ build.id }}

Build ID.

{{ build.commit }}

Commit corresponding to the build, if present (otherwise "").

{{ build.url }}

URL of the build, for example https://readthedocs.org/projects/docs/builds/15173336/.

{{ build.docs_url }}

URL of the documentation corresponding to the build, for example https://docs.readthedocs.io/en/latest/.

{{ build.start_date }}

Start date of the build (UTC, ISO format), for example 2021-11-03T16:23:14.

{{ organization.name }}

Organization name (Commercial only).

{{ organization.slug }}

Organization slug (Commercial only).

{{ project.slug }}

Project slug.

{{ project.name }}

Project name.

{{ project.url }}

URL of the project dashboard.

{{ version.slug }}

Version slug.

{{ version.name }}

Version name.

Validating the payload

After you add a new webhook, Read the Docs generates a secret key for it and uses the key to compute a hash signature (HMAC-SHA256) for each payload, which is included in the X-Hub-Signature header of the request.

Webhook secret

We highly recommend using this signature to verify that the webhook is coming from Read the Docs. To do so, you can add some custom code on your server, like this:

import hashlib
import hmac
import os


def verify_signature(payload, request_headers):
    """
    Verify that the signature of payload is the same as the one coming from request_headers.
    """
    digest = hmac.new(
        key=os.environ["WEBHOOK_SECRET"].encode(),
        msg=payload.encode(),
        digestmod=hashlib.sha256,
    )
    expected_signature = digest.hexdigest()

    return hmac.compare_digest(
        request_headers["X-Hub-Signature"].encode(),
        expected_signature.encode(),
    )
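As a sketch of what the verification looks like end to end (the secret and payload below are made up for illustration):

```python
import hashlib
import hmac

# Hypothetical secret (as shown in your webhook's admin page) and raw payload.
secret = "my-webhook-secret"
payload = '{"event": "build:passed", "slug": "docs"}'

# Read the Docs signs the payload with HMAC-SHA256 and sends the hex digest
# in the X-Hub-Signature header:
signature = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
headers = {"X-Hub-Signature": signature}

# On your server, recompute the digest from the raw payload and compare it
# using a constant-time check to avoid timing attacks:
expected = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
assert hmac.compare_digest(headers["X-Hub-Signature"], expected)
```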
Legacy webhooks

Webhooks created before the custom payloads functionality was added to Read the Docs send a payload with the following structure:

{
    "name": "Read the Docs",
    "slug": "rtd",
    "build": {
        "id": 6321373,
        "commit": "e8dd17a3f1627dd206d721e4be08ae6766fda40",
        "state": "finished",
        "success": false,
        "date": "2017-02-15 20:35:54"
    }
}

To migrate to the new webhooks and keep a similar structure, you can use this payload:

{
    "name": "{{ project.name }}",
    "slug": "{{ project.slug }}",
    "build": {
        "id": "{{ build.id }}",
        "commit": "{{ build.commit }}",
        "state": "{{ event }}",
        "date": "{{ build.start_date }}"
    }
}

Security Log

Security logs allow you to see what has happened recently in your organization or account. We store the IP address and the browser’s User-Agent on each event, so that you can confirm this access was from the intended person.

User security log

We store user security logs from the last 90 days, and track authentication events on your account. Authentication failures and successes are both tracked.

To access your logs:

  • Click on Username dropdown

  • Click on Settings

  • Click on Security Log

Organization security log

Note

This feature exists only on Read the Docs for Business.

We store logs according to your plan; check our pricing page for more details. We track the following events:

  • Authentication on documentation pages from your organization

  • User access to every documentation page from your organization (Enterprise plans only)

Authentication failures and successes are both tracked.

To access your organization logs:

  • Click on Organizations from your user dropdown

  • Click on your organization

  • Click on Settings

  • Click on Security Log

Connecting Your VCS Account

If you are going to import repositories from GitHub, Bitbucket, or GitLab, you should connect your Read the Docs account to your repository host first. Connecting your account allows for:

  • Easier importing of your repositories

  • Automatic configuration of your repository VCS Integrations, which allow Read the Docs to build your docs on every change to your repository

  • Logging in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials

If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, you’re all done. Your account is connected.

To connect a social account, go to your Username dropdown > Settings > Connected Services. From here, you’ll be able to connect to your GitHub, Bitbucket or GitLab account. This process will ask you to authorize a connection to Read the Docs, which allows us to read information about and clone your repositories.

Permissions for connected accounts

Read the Docs does not generally ask for write permission to your repositories’ code (with one exception detailed below), and since we only connect to public repositories we don’t need special permissions to read them. However, we do need permission to authorize your account so that you can log in to Read the Docs with your connected account credentials, and to set up VCS Integrations, which allow us to build your documentation on every change to your repository.

GitHub

Read the Docs requests the following permissions (more precisely, OAuth scopes) when connecting your Read the Docs account to GitHub.

Read access to your email address (user:email)

We ask for this so you can create a Read the Docs account and log in with your GitHub credentials.

Administering webhooks (admin:repo_hook)

We ask for this so we can create webhooks on your repositories when you import them into Read the Docs. This allows us to build the docs when you push new commits.

Read access to your organizations (read:org)

We ask for this so we know which organizations you have access to. This allows you to filter repositories by organization when importing repositories.

Repository status (repo:status)

Repository statuses allow Read the Docs to report the status (e.g. passed, failed, pending) of pull requests to GitHub. This is used for a feature currently in beta testing that builds documentation on each pull request, similar to a continuous integration service.

Note

Read the Docs for Business asks for one additional permission (repo) to allow access to private repositories and to allow us to set up SSH keys to clone your private repositories. Unfortunately, this is the permission for read/write control of the repository, but there isn’t a more granular permission that only allows setting up SSH keys for read access.

GitHub permission troubleshooting

Repositories not in your list to import.

Many organizations require approval for each OAuth application that is used, or you might have disabled it in the past for your personal account. This can happen at the personal or organization level, depending on where the project you are trying to access has permissions from.

You need to make sure that you have granted access to the Read the Docs OAuth App to your personal GitHub account. If you do not see Read the Docs in the OAuth App settings, you might need to disconnect and reconnect the GitHub service.

See also

GitHub docs on requesting access to your personal OAuth for step-by-step instructions.

Bitbucket

For similar reasons to those above for GitHub, we request permissions for:

  • Reading your account information including your email address

  • Read access to your team memberships

  • Read access to your repositories

  • Read and write access to webhooks

GitLab

Like the others, we request permissions for:

  • Reading your account information (read_user)

  • API access (api) which is needed to create webhooks in GitLab

Build process

Once a project has been imported and a build is triggered, Read the Docs executes specific pre-defined jobs to build the project’s documentation and update the hosted content. This page explains in detail what happens behind the scenes, and an overview of how you can change this process.

Understanding what’s going on

Understanding how your content is built helps with debugging the problems that may appear in the process. It also allows you to customize the steps of the build process.

Note

All the steps are run inside a Docker container with the image the project defines in build.os, and all the Environment Variables defined are exposed to them.

The following are the pre-defined jobs executed by Read the Docs:

checkout

Checks out your project’s code from the repository URL defined for the project. It will use git clone, hg clone, etc., depending on the version control system you choose.

system_dependencies

Installs operating system & system-level dependencies. This includes specific versions of languages (e.g. Python, Node.js, Go, Rust) and also apt packages.

At this point, build.tools can be used to define a language version, and build.apt_packages to define apt packages.
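For instance, a config file sketch that selects a Python version and installs an apt package (graphviz here is just an illustrative choice):

```yaml
# .readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  apt_packages:
    - graphviz
```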

create_environment

Creates a Python environment to install all the dependencies in an isolated and reproducible way. Depending on what’s defined by the project, a virtualenv or a conda environment (conda) will be used.

install

Installs the default common dependencies.

If the project has extra Python requirements, python.install can be used to specify them.
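For example, a sketch pointing python.install at a requirements file (docs/requirements.txt is an assumed path):

```yaml
# .readthedocs.yaml
version: 2
python:
  install:
    - requirements: docs/requirements.txt
```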

Tip

We strongly recommend pinning all the versions required to build the documentation to avoid unexpected build errors.

build

Runs the main command to build the documentation for each of the formats declared (formats). It will use Sphinx (sphinx) or MkDocs (mkdocs) depending on the project.

upload

Once the build process finishes successfully, the resulting artifacts are uploaded to our servers, and the CDN is purged so the newer version of the documentation is served.

See also

If there are extra steps required to build the documentation, or you need to execute additional commands to integrate with other tools, it’s possible to run user-defined commands and customize the build process.

Build resources

Every build has limited resources to avoid misuse of the platform. Currently, these build limits are:

  • 15 minutes build time

  • 3GB of memory

  • 2 concurrent builds

We can increase build limits on a per-project basis. Send an email to support@readthedocs.org providing a good reason why your documentation needs more resources.

If your business is hitting build limits hosting documentation on Read the Docs, please consider Read the Docs for Business which has much higher build resources.

Build customization

Read the Docs has a well-defined build process that works for many projects, but we offer additional customization to support more uses of our platform. This page explains how to extend the build process using user-defined build jobs to execute custom commands, and also how to override the build process completely:

Extend the build process

If you are using Sphinx or MkDocs and need to execute additional commands.

Override the build process

If you want full control over your build. This option supports any tool that generates HTML as part of the build.

Extend the build process

In the normal build process, the pre-defined jobs checkout, system_dependencies, create_environment, install, build and upload are executed. However, Read the Docs exposes extra jobs to users so they can customize the build process by running shell commands. These extra jobs are:

Step                  Customizable jobs
Checkout              post_checkout
System dependencies   pre_system_dependencies, post_system_dependencies
Create environment    pre_create_environment, post_create_environment
Install               pre_install, post_install
Build                 pre_build, post_build
Upload                There are no customizable jobs currently

Note

Currently, the pre-defined jobs (checkout, system_dependencies, etc) executed by Read the Docs cannot be overridden or skipped.

These jobs can be declared by using a Configuration File with the build.jobs key. Let’s say the project requires commands to be executed before installing any dependency into the Python environment and after the build has finished. In that case, a config file similar to this one can be used:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_install:
      - bash ./scripts/pre_install.sh
    post_build:
      - curl -X POST \
        -F "project=${READTHEDOCS_PROJECT}" \
        -F "version=${READTHEDOCS_VERSION}" https://example.com/webhooks/readthedocs/

There are some caveats to know when using user-defined jobs:

  • The current working directory is at the root of your project’s cloned repository

  • Environment variables are expanded in the commands (see Environment Variables)

  • Each command is executed in a new shell process, so modifications done to the shell environment do not persist between commands

  • Any command returning non-zero exit code will cause the build to fail immediately

  • build.os and build.tools are required when using build.jobs

build.jobs examples

We’ve included some common examples where using build.jobs will be useful. These examples may require some adaptation for each project’s use case; we recommend you use them as a starting point.

Unshallow clone

Read the Docs does not perform a full clone in the checkout job, in order to reduce network data and speed up the build process. Because of this, extensions that depend on the full Git history will fail. To avoid this, it’s possible to unshallow the clone done by Read the Docs:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    post_checkout:
      - git fetch --unshallow
Generate documentation from annotated sources with Doxygen

It’s possible to run Doxygen as part of the build process to generate documentation from annotated sources:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
    # Note that this HTML won't be automatically uploaded,
    # unless your documentation build includes it somehow.
      - doxygen
Use MkDocs extensions with extra required steps

There are some MkDocs extensions that require specific commands to be run to generate extra pages before performing the build. For example, pydoc-markdown:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
      - pydoc-markdown --build --site-dir "$PWD/_build/html"
Avoid having a dirty Git index

Read the Docs needs to modify some files before performing the build to be able to integrate with some of its features. Because of this, the Git index may become dirty (modified files will be detected). If the project uses any kind of extension that generates a version based on Git metadata (like setuptools_scm), this could cause an invalid version number to be generated. In that case, the Git index can be updated to ignore the files that Read the Docs has modified.

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_install:
      - git update-index --assume-unchanged environment.yml docs/conf.py
Support Git LFS (Large File Storage)

In case the repository contains large files that are tracked with Git LFS, some extra steps are required to be able to download their content. It’s possible to use the post_checkout user-defined job for this.

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    post_checkout:
      # Download and uncompress the binary
      # https://git-lfs.github.com/
      - wget https://github.com/git-lfs/git-lfs/releases/download/v3.1.4/git-lfs-linux-amd64-v3.1.4.tar.gz
      - tar xvfz git-lfs-linux-amd64-v3.1.4.tar.gz
      # Modify LFS config paths to point where git-lfs binary was downloaded
      - git config filter.lfs.process "`pwd`/git-lfs filter-process"
      - git config filter.lfs.smudge  "`pwd`/git-lfs smudge -- %f"
      - git config filter.lfs.clean "`pwd`/git-lfs clean -- %f"
      # Make LFS available in current repository
      - ./git-lfs install
      # Download content from remote
      - ./git-lfs fetch
      # Make local files to have the real content on them
      - ./git-lfs checkout
Install Node.js dependencies

It’s possible to install Node.js together with the required dependencies by using user-defined build jobs. To set it up, you need to define the version of Node.js to use and install the dependencies by using build.jobs.post_install:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.9"
    nodejs: "16"
  jobs:
    post_install:
      # Install dependencies defined in your ``package.json``
      - npm ci
      # Install any other extra dependencies to build the docs
      - npm install -g jsdoc
Install dependencies with Poetry

Projects managed with Poetry can use the post_create_environment user-defined job to install Python dependencies with Poetry. Take a look at the following example:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    post_create_environment:
      # Install poetry
      # https://python-poetry.org/docs/#osx--linux--bashonwindows-install-instructions
      - curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
      # Tell poetry to not use a virtual environment
      - $HOME/.poetry/bin/poetry config virtualenvs.create false
      # Install project's dependencies
      - $HOME/.poetry/bin/poetry install

sphinx:
  configuration: docs/conf.py

Override the build process

Warning

This feature is in a beta phase and could suffer incompatible changes, or even be removed completely, in the near future. It does not yet support some of Read the Docs’ features, like the flyout menu and ads. However, we do plan to support these features in the future. Use this feature at your own risk.

If your project requires full control of the build process, and extending the build process is not enough, all the commands executed during builds can be overridden using the build.commands configuration file key.

As Read the Docs does not have control over the build process, you are responsible for running all the commands required to install requirements and build your project properly. Once the build process finishes, the contents of the _readthedocs/html/ directory will be hosted.

build.commands examples

This section contains some examples that showcase what is possible with build.commands. Note that you may need to modify and adapt these examples depending on your needs.

Pelican

Pelican is a well-known static site generator that’s commonly used for blogs and landing pages. If you are building your project with Pelican you could use a configuration file similar to the following:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  commands:
    - pip install pelican[markdown]
    - pelican --settings docs/pelicanconf.py --output _readthedocs/html/ docs/
Docsify

Docsify generates documentation websites on the fly, without the need to build static HTML. These projects can be built using a configuration file like this:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    nodejs: "16"
  commands:
    - mkdir --parents _readthedocs/html/
    - cp --recursive docs/* _readthedocs/html/
Search support

Read the Docs will automatically index the content of all your HTML files, respecting the search options from your config file.

You can access the search results from the Search tab of your project, or by using the search API.

Note

In order for Read the Docs to index your HTML files correctly, they should follow some of the conventions described at Server Side Search Integration.

Environment Variables

Read the Docs supports two types of environment variables in project builds:

Both are merged together during the build process and are exposed to all of the executed commands. There are two exceptions for user-defined environment variables however:

  • User-defined variables are not available during the checkout step of the build process

  • User-defined variables that are not marked as public will not be available in pull request builds

Default environment variables

Read the Docs builders set the following environment variables automatically for each documentation build:

READTHEDOCS

Whether the build is running inside Read the Docs.

Default

True

READTHEDOCS_VERSION

The slug of the version being built, such as latest, stable, or a branch name like feature-1234. For pull request builds, the value will be the pull request number.

READTHEDOCS_VERSION_NAME

The verbose name of the version being built, such as latest, stable, or a branch name like feature/1234.

READTHEDOCS_VERSION_TYPE

The type of the version being built.

Values

branch, tag, external (for pull request builds), or unknown

READTHEDOCS_PROJECT

The slug of the project being built. For example, my-example-project.

READTHEDOCS_LANGUAGE

The locale name, or the identifier for the locale, for the project being built. This value comes from the project’s configured language.

Examples

en, it, de_AT, es, pt_BR
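As a sketch of how these variables can be consumed, a conf.py might branch on them like this (the fallback values below are assumptions that only apply to local builds):

```python
import os

# Set automatically by Read the Docs during a build; the fallbacks
# below only apply when building locally.
on_rtd = os.environ.get("READTHEDOCS") == "True"
version_slug = os.environ.get("READTHEDOCS_VERSION", "latest")
version_type = os.environ.get("READTHEDOCS_VERSION_TYPE", "unknown")
language = os.environ.get("READTHEDOCS_LANGUAGE", "en")

if version_type == "external":
    # Pull request build: READTHEDOCS_VERSION holds the PR number.
    print(f"Building a preview for pull request #{version_slug}")
```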

User-defined environment variables

If extra environment variables are needed in the build process (like an API token), you can define them from the project’s settings page:

  1. Go to your project’s Admin > Environment Variables

  2. Click on Add Environment Variable

  3. Fill the Name and Value

  4. Check the Public option if you want to expose this environment variable to builds from pull requests.

    Warning

    If you mark this option, any user that can create a pull request on your repository will be able to see the value of this environment variable.

  5. Click on Save

Note

Once you create an environment variable, you won’t be able to see its value anymore.

After adding an environment variable, you can read it from your build process, for example in your Sphinx’s configuration file:

conf.py
import os
import requests

# Access to our custom environment variables
username = os.environ.get('USERNAME')
password = os.environ.get('PASSWORD')

# Request a username/password protected URL
response = requests.get(
    'https://httpbin.org/basic-auth/username/password',
    auth=(username, password),
)

You can also use any of these variables from user-defined build jobs in your project’s configuration file:

.readthedocs.yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: 3.10
  jobs:
    post_install:
      - curl -u ${USERNAME}:${PASSWORD} https://httpbin.org/basic-auth/username/password

Badges

Badges let you show the state of your documentation to your users. They are great for embedding in your README, or putting inside your actual doc pages.

Status Badges

They will display in green for passing, red for failing, and yellow for unknown states.


You can see it in action in the Read the Docs README. They will link back to your project’s documentation page on Read the Docs.

Style

You can pass the style GET argument to get custom-styled badges, the same as you would with shields.io. If no argument is passed, flat is used as the default.

The available styles are:

  • flat (the default)

  • flat-square

  • for-the-badge

  • plastic

  • social

Project Pages

You will now see badges embedded in your project page. The default badge will be pointed at the default version you have specified for your project. The badge URLs look like this:

https://readthedocs.org/projects/pip/badge/?version=latest&style=plastic

You can replace the version argument with any version that you want to show a badge for. If you click on the badge icon, you will be given snippets for RST, Markdown, and HTML to make embedding it easier.

If you leave the version argument off, it will default to your latest version. This is probably best to include in your README, since it will stay up to date with your Read the Docs project:

https://readthedocs.org/projects/pip/badge/
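As a sketch, the Markdown snippet for embedding such a badge in a README looks roughly like this (using pip’s project slug as a stand-in for your own):

```
[![Documentation Status](https://readthedocs.org/projects/pip/badge/?version=latest)](https://pip.readthedocs.io/en/latest/)
```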

Site Support

Usage Questions

If you have questions about how to use Read the Docs, or have an issue that isn’t related to a bug, Stack Overflow is the best place to ask. Tag questions with read-the-docs so other folks can find them easily.

Good questions for Stack Overflow would be:

  • “What is the best way to structure the table of contents across a project?”

  • “How do I structure translations inside of my project for easiest contribution from users?”

  • “How do I use Sphinx to use SVG images in HTML output but PNG in PDF output?”

You might also find the answers you are looking for in our documentation guides. These provide step-by-step solutions to common user requirements.

User Support

If you have a specific request for your project or account, like more resources or a change to the project’s slug or username, please fill out the form at https://readthedocs.org/support/, and we will reply as soon as possible.

Bug Reports

If you have an issue with the actual functioning of the site, you can file bug reports on our GitHub issue tracker. You can also contribute to Read the Docs, as the code is open source.

Priority Support

We offer priority support with Read the Docs for Business and we have a dedicated team that responds to support requests during business hours.

Frequently Asked Questions

My project isn’t building correctly

First, you should check out the Builds tab of your project. That records all of the build attempts that RTD has made to build your project. If you see ImportError messages for custom Python modules, see our section on My documentation requires additional dependencies.

If you are still seeing errors because of C library dependencies, please see I get import errors on libraries that depend on C modules.

Help, my build passed but my documentation page is 404 Not Found!

This often happens because no index.html file is being generated. Make sure one of the following files exists at the top level of your built documentation, otherwise we aren’t able to serve a “default” index page:

  • index.rst

  • index.md

To test if your docs actually built correctly, you can navigate to a specific page (/en/latest/README.html for example).

My documentation requires additional dependencies

For most Python dependencies, you can specify a requirements file which details your dependencies. See our guide on Using a configuration file. You can also set your project documentation to install your project itself as a dependency.
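For example, assuming your documentation dependencies live in docs/requirements.txt (the path here is an assumption), the configuration file can point pip at them:

```yaml
# .readthedocs.yaml
version: 2
python:
  install:
    - requirements: docs/requirements.txt
```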

Your build may depend on extensions that require additional system packages to be installed. If you are using a Configuration File, you can add libraries with apt to the Ubuntu-based builder.
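A minimal sketch of the apt approach; the package names below are placeholders for whatever system libraries your extensions need:

```yaml
# .readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  apt_packages:
    - libmysqlclient-dev
    - graphviz
```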

If your project or its dependencies rely on C libraries that cannot be installed this way, see I get import errors on libraries that depend on C modules.

My project requires some additional settings

If this is just a dependency issue, see My documentation requires additional dependencies.

Read the Docs offers some settings which can be used for a variety of purposes. To enable these settings, please send an email to support@readthedocs.org and we will change the settings for the project. Read more about these settings here.

I get import errors on libraries that depend on C modules

Note

Another use case for this is when you have a module with a C extension.

This happens because the build system does not have the dependencies for building your project, such as C libraries needed by some Python packages (e.g. libevent or mysql). For libraries that cannot be installed via apt in the builder there is another way to successfully build the documentation despite missing dependencies.

With Sphinx you can use the built-in autodoc_mock_imports for mocking. If such libraries are installed via setup.py, you will also need to remove all the C-dependent libraries from your install_requires in the RTD environment.
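A minimal conf.py sketch; the module names listed here are placeholders for whatever C-dependent libraries your project imports:

```python
# conf.py
extensions = [
    "sphinx.ext.autodoc",
]

# These modules are replaced with mock objects while autodoc imports
# your code, so the build succeeds even though they aren't installed
# in the Read the Docs environment.
autodoc_mock_imports = [
    "numpy",
    "mysqlclient",
]
```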

How do I change behavior when building with Read the Docs?

When RTD builds your project, it sets the READTHEDOCS environment variable to the string 'True'. So within your Sphinx conf.py file, you can vary the behavior based on this. For example:

import os
on_rtd = os.environ.get('READTHEDOCS') == 'True'
if on_rtd:
    html_theme = 'default'
else:
    html_theme = 'nature'

The READTHEDOCS variable is also available in the Sphinx build environment, and will be set to True when building on RTD:

{% if READTHEDOCS %}
Woo
{% endif %}

How do I host multiple projects on one custom domain?

We support the concept of subprojects, which allows multiple projects to share a single domain. If you add a subproject to a project, that documentation will be served under the parent project’s subdomain or custom domain.

For example, Kombu is a subproject of Celery, so you can access it on the celery.readthedocs.io domain:

https://celery.readthedocs.io/projects/kombu/en/latest/

This also works the same for custom domains:

http://docs..org/projects/kombu/en/latest/

You can add subprojects in the project admin dashboard.

For details on custom domains, see our documentation on Custom Domains.

Where do I need to put my docs for RTD to find it?

Read the Docs will crawl your project looking for a conf.py. Where it finds the conf.py, it will run sphinx-build in that directory. So as long as you only have one set of sphinx documentation in your project, it should Just Work.

You can specify an exact path to your documentation using a Read the Docs Configuration File.

I want to use the Blue/Default Sphinx theme

We think that our theme is badass, and better than the default for many reasons. Some people don’t like change though 😄, so there is a hack that will let you keep using the default theme. If you set the html_style variable in your conf.py, it should default to using the default theme. The value of this doesn’t matter, and can be set to /default.css for default behavior.

I want to use the Read the Docs theme locally

There is a repository for that: https://github.com/readthedocs/sphinx_rtd_theme. Simply follow the instructions in the README.

Image scaling doesn’t work in my documentation

Image scaling in docutils depends on PIL. PIL is installed in the system that RTD runs on. However, if you are using the virtualenv building option, you will likely need to include PIL in your requirements for your project.
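A minimal sketch: add Pillow (the maintained fork of PIL) to the requirements file your project already uses (the filename here is an assumption):

```
# requirements.txt
Pillow
```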

I want comments in my docs

RTD doesn’t have explicit support for this. That said, a tool like Disqus (and the sphinxcontrib-disqus plugin) can be used for this purpose on RTD.

How do I support multiple languages of documentation?

See the section on Localization of Documentation.

Does Read the Docs work well with “legible” docstrings?

Yes. One criticism of Sphinx is that its annotated docstrings are too dense and difficult for humans to read. In response, many projects have adopted customized docstring styles that are simultaneously informative and legible. The NumPy and Google styles are two popular docstring formats. Fortunately, the default Read the Docs theme handles both formats just fine, provided your conf.py specifies an appropriate Sphinx extension that knows how to convert your customized docstrings. Two such extensions are numpydoc and napoleon. Only napoleon is able to handle both docstring formats. Its default output more closely matches the format of standard Sphinx annotations, and as a result, it tends to look a bit better with the default theme.

Note

To use these extensions you need to specify the dependencies on your project by following this guide.

Can I document a Python package that is not at the root of my repository?

Yes. The most convenient way to access a Python package in your documentation, for example via Sphinx’s autoapi, is to use the “Install your project inside a virtualenv using setup.py install” option in the admin panel of your project. However, this assumes that your setup.py is in the root of your repository.

If you want to place your package in a different directory or have multiple Python packages in the same project, then create a pip requirements file. You can specify the relative path to your package inside the file. For example, if you want to keep your Python package in the src/python directory, create a requirements.txt file with the following contents:

src/python/

Please note that the path must be relative to the working directory where pip is launched, rather than the directory where the requirements file is located. Therefore, even if you move the requirements file to a requirements/ directory, the example path above still works.

You can customize the path to your requirements file and any other installed dependency using a Read the Docs Configuration File.

I need to install a package in a environment with pinned versions

To ensure proper installation of a Python package, the pip install method will automatically upgrade every dependency to its most recent version in case they aren’t pinned by the package definition. If instead you’d like to pin your dependencies outside the package, you can add this line to your requirements or environment file (if you are using Conda).

In your requirements.txt file:

# path to the directory containing setup.py relative to the project root
-e .

In your Conda environment file (environment.yml):

# path to the directory containing setup.py relative to the environment file
-e ..

Can I use Anaconda Project and anaconda-project.yml?

Yes. With anaconda-project>=0.8.4 you can use the Anaconda Project configuration file anaconda-project.yaml (or anaconda-project.yml) directly in place of a Conda environment file by using dependencies: as an alias for packages:.

I.e., your anaconda-project.yaml file can be used as a conda.environment config in the .readthedocs.yaml config file if it contains:

dependencies:
  - python=3.9
  - scipy
  ...

How can I avoid search results having a deprecated version of my docs?

If readers search something related to your docs in Google, it will probably return the most relevant version of your documentation. It may happen that this version is already deprecated and you want to stop Google indexing it as a result, and start suggesting the latest (or newer) one.

To accomplish this, you can add a robots.txt file to your documentation’s root so it ends up served at the root URL of your project (for example, https://yourproject.readthedocs.io/robots.txt). We have documented how to set this up in our Custom robots.txt Pages docs.
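For example, a robots.txt that hides a hypothetical deprecated 1.0 version from crawlers while leaving everything else indexable could look like:

```
User-agent: *
Disallow: /en/1.0/
```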

Can I remove advertising from my documentation?

See Opting out of advertising.

How do I change my project slug (the URL your docs are served at)?

We don’t support allowing folks to change the slug for their project. You can update the name which is shown on the site, but not the actual URL that documentation is served.

The main reason for this is that all existing URLs to the content will break. You can delete and re-create the project with the proper name to get a new slug, but you really shouldn’t do this if you have existing inbound links, as it breaks the internet.

If that isn’t enough, you can request the change sending an email to support@readthedocs.org.

How do I change the version slug of my project?

We don’t support allowing folks to change the slug for their versions. But you can rename the branch/tag to achieve this. If that isn’t enough, you can request the change sending an email to support@readthedocs.org.

What commit of Read the Docs is in production?

We deploy readthedocs.org from the rel branch in our GitHub repository. You can see the latest commits that have been deployed by looking on GitHub: https://github.com/readthedocs/readthedocs.org/commits/rel

We also keep an up-to-date changelog.

How can I deploy Jupyter Book projects on Read the Docs?

According to its own documentation,

Jupyter Book is an open source project for building beautiful, publication-quality books and documents from computational material.

Even though Jupyter Book leverages Sphinx “for almost everything that it does”, it purposely hides Sphinx conf.py files from the user, and instead generates them on the fly from its declarative _config.yml. As a result, you need to follow some extra steps to make Jupyter Book work on Read the Docs.

As described in the official documentation, you can manually convert your Jupyter Book project to Sphinx with the following configuration:

.readthedocs.yaml
build:
  jobs:
    pre_build:
      # Generate the Sphinx configuration for this Jupyter Book so it builds.
      - "jupyter-book config sphinx docs/"

How-to Guides

These guides will help walk you through specific use cases related to Read the Docs itself, documentation tools like Sphinx and MkDocs and how to write successful documentation.

Guides for documentation authors

These guides offer some tips and tricks to author documentation with the tools supported on Read the Docs. Only reStructuredText or Markdown knowledge and minimal configuration tweaking are needed.

For an introduction to Sphinx and MkDocs, have a look at our Getting Started with Sphinx and Getting Started with MkDocs.

Cross-referencing with Sphinx

When writing documentation you often need to link to other pages of your documentation, other sections of the current page, or sections from other pages.

An easy way is just to use the raw URL that Sphinx generates for each page/section. This works, but it has some disadvantages:

  • Links can change, so they are hard to maintain.

  • Links can be verbose and hard to read, so it is unclear what page/section they are linking to.

  • There is no easy way to link to specific sections like paragraphs, figures, or code blocks.

  • URL links only work for the html version of your documentation.

Instead, Sphinx offers a powerful way of linking to the different elements of the document, called cross-references. Some advantages of using them:

  • Use a human-readable name of your choice, instead of a URL.

  • Portable between formats: html, PDF, ePub.

  • Sphinx will warn you of invalid references.

  • You can cross reference more than just pages and section headers.

This page describes some best-practices for cross-referencing with Sphinx with two markup options: reStructuredText and MyST (Markdown).

  • If you are not familiar with reStructuredText, check reStructuredText Primer for a quick introduction.

  • If you want to learn more about the MyST Markdown dialect, check out Core Syntax.

Getting started
Explicit targets

Cross referencing in Sphinx uses two components, references and targets.

  • references are pointers in your documentation to other parts of your documentation.

  • targets are where the references can point to.

You can manually create a target in any location of your documentation, allowing you to reference it from other pages. These are called explicit targets.

For example, one way of creating an explicit target for a section is:

.. _My target:

Explicit targets
~~~~~~~~~~~~~~~~

Reference `My target`_.

Then the reference will be rendered as My target.

You can also add explicit targets before paragraphs (or any other part of a page).

Another example, add a target to a paragraph:

.. _target to paragraph:

An easy way is just to use the final link of the page/section.
This works, but it has :ref:`some disadvantages <target to paragraph>`:

Then the reference will be rendered as: some disadvantages.

You can also create in-line targets within an element on your page, allowing you to, for example, reference text within a paragraph.

For example, an in-line target inside a paragraph:

You can also create _`in-line targets` within an element on your page,
allowing you to, for example, reference text *within* a paragraph.

Then you can reference it using `in-line targets`_, that will be rendered as: in-line targets.

Implicit targets

You may also reference some objects by name without explicitly giving them one by using implicit targets.

When you create a section, a footnote, or a citation, Sphinx will create a target with the title as the name:

For example, to reference the previous section
you can use `Explicit targets`_.

The reference will be rendered as: Explicit targets.

Cross-referencing using roles

All targets seen so far can be referenced only from the same page. Sphinx provides some roles that allow you to reference any explicit target from any page.

Note

Since Sphinx will make all explicit targets available globally, all targets must be unique.

You can see the complete list of cross-referencing roles at Cross-referencing syntax. Next, you will explore the most common ones.

The ref role

The ref role can be used to reference any explicit targets. For example:

- :ref:`my target`.
- :ref:`Target to paragraph <target to paragraph>`.
- :ref:`Target inside a paragraph <in-line targets>`.

That will be rendered as:

The ref role also allows us to reference code blocks:

.. _target to code:

.. code-block:: python

   # Add the extension
   extensions = [
      'sphinx.ext.autosectionlabel',
   ]

   # Make sure the target is unique
   autosectionlabel_prefix_document = True

We can reference it using :ref:`code <target to code>`, that will be rendered as: code.

The doc role

The doc role allows us to link to a page instead of just a section. The target name can be relative to the page where the role exists, or relative to your documentation’s root folder (in both cases, you should omit the extension).

For example, to link to a page in the same directory as this one you can use:

- :doc:`intersphinx`
- :doc:`/guides/intersphinx`
- :doc:`Custom title </guides/intersphinx>`

That will be rendered as:

Tip

Using paths relative to your documentation root is recommended, so you avoid changing the target name if the page is moved.

The numref role

The numref role is used to reference numbered elements of your documentation. For example, tables and images.

To activate numbered references, add this to your conf.py file:

# Enable numref
numfig = True

Next, ensure that an object you would like to reference has an explicit target.

For example, you can create a target for the next image:


.. _target to image:

.. figure:: /img/logo.png
   :alt: Logo
   :align: center
   :width: 240px

   Link me!

Finally, reference it using :numref:`target to image`, which will be rendered as Fig. N. Sphinx will number the image automatically.

Automatically label sections

Manually adding an explicit target to each section and making sure it is unique is a big task! Fortunately, Sphinx includes an extension to help us with that problem: autosectionlabel.

To activate the autosectionlabel extension, add this to your conf.py file:

# Add the extension
extensions = [
   'sphinx.ext.autosectionlabel',
]

# Make sure the target is unique
autosectionlabel_prefix_document = True

Sphinx will create explicit targets for all your sections; the name of the target has the form {path/to/page}:{title-of-section}.

For example, you can reference the previous section using:

- :ref:`guides/cross-referencing-with-sphinx:explicit targets`.
- :ref:`Custom title <guides/cross-referencing-with-sphinx:explicit targets>`.

That will be rendered as:

Invalid targets

If you reference an invalid or undefined target, Sphinx will warn you. You can use the -W option when building your docs to fail the build if there are any invalid references. On Read the Docs you can use the sphinx.fail_on_warning option.

Finding the reference name

When you build your documentation, Sphinx will generate an inventory of all explicit and implicit links called objects.inv. You can list all of these targets to explore what is available for you to reference.

List all targets for built documentation with:

python -m sphinx.ext.intersphinx <link>

Where <link> is either a URL or a local path that points to your inventory file (usually in _build/html/objects.inv). For example, to see all targets from the Read the Docs documentation:

python -m sphinx.ext.intersphinx https://docs.readthedocs.io/en/stable/objects.inv
Cross-referencing targets in other documentation sites

You can reference to docs outside your project too! See Link to Other Projects’ Documentation With Intersphinx.

How to use Jupyter notebooks in Sphinx

Jupyter notebooks are a popular tool to describe computational narratives that mix code, prose, images, interactive components, and more. Embedding them in your Sphinx project allows using these rich documents as documentation, which can provide a great experience for tutorials, examples, and other types of technical content. There are a few extensions that allow integrating Jupyter and Sphinx, and this document will explain how to achieve some of the most commonly requested features.

Including classic .ipynb notebooks in Sphinx documentation

There are two main extensions that add support for Jupyter notebooks as source files in Sphinx: nbsphinx and MyST-NB. They have similar intent and basic functionality: both can read notebooks in .ipynb and additional formats supported by jupytext, and are configured in a similar way (see Existing relevant extensions for more background on their differences).

First of all, create a Jupyter notebook using the editor of your liking (for example, JupyterLab). For example, source/notebooks/Example 1.ipynb:

Example Jupyter notebook created on JupyterLab

Next, you will need to enable one of the extensions, as follows:

conf.py
extensions = [
    "nbsphinx",
]

Finally, you can include the notebook in any toctree. For example, add this to your root document:

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   notebooks/Example 1

The notebook will render as any other HTML page in your documentation after doing make html.

Example Jupyter notebook rendered on HTML by nbsphinx

To further customize the rendering process among other things, refer to the nbsphinx or MyST-NB documentation.

Rendering interactive widgets

Widgets are eventful Python objects that have a representation in the browser and that you can use to build interactive GUIs for your notebooks. Basic widgets using ipywidgets include controls like sliders, textboxes, and buttons, while more complex widgets include interactive maps, like the ones provided by ipyleaflet.

You can embed these interactive widgets on HTML Sphinx documentation. For this to work, it’s necessary to save the widget state before generating the HTML documentation, otherwise the widget will appear as empty. Each editor has a different way of doing it:

  • The classical Jupyter Notebook interface provides a “Save Notebook Widget State” action in the “Widgets” menu, as explained in the ipywidgets documentation. You need to click it before exporting your notebook to HTML.

  • JupyterLab provides a “Save Widget State Automatically” option in the “Settings” menu. You need to leave it checked so that widget state is automatically saved.

  • In Visual Studio Code it’s not possible to save the widget state at the time of writing (June 2021).

JupyterLab option to save the interactive widget state automatically

For example, if you create a notebook with a simple IntSlider widget from ipywidgets and save the widget state, the slider will render correctly in Sphinx.

Interactive widget rendered in HTML by Sphinx

To see more elaborate examples:

Warning

Although widgets themselves can be embedded in HTML, events require a backend (kernel) to execute. Therefore, @interact, .observe, and related functionalities relying on them will not work as expected.

Note

If your widgets need some additional JavaScript libraries, you can add them using add_js_file().

Using notebooks in other formats

For example, this is how a simple notebook looks in MyST Markdown format:

Example 3.md
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.10.3
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---

# Plain-text notebook formats

This is an example of a Jupyter notebook stored in MyST Markdown format.

```{code-cell} ipython3
import sys
print(sys.version)
```

```{code-cell} ipython3
from IPython.display import Image
```

```{code-cell} ipython3
Image("http://sipi.usc.edu/database/preview/misc/4.2.03.png")
```

To render this notebook in Sphinx you will need to add this to your conf.py:

conf.py
nbsphinx_custom_formats = {
    ".md": ["jupytext.reads", {"fmt": "mystnb"}],
}

Notice that the Markdown format does not store the outputs of the computation. Sphinx will automatically execute notebooks without outputs, so they will appear complete in your HTML documentation.
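nbsphinx exposes a configuration value to control this execution behavior; a conf.py sketch:

```python
# conf.py
# "auto" (the default) executes only notebooks that have no stored
# outputs; "always" and "never" force or disable execution entirely.
nbsphinx_execute = "auto"
```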

Creating galleries of examples using notebooks

nbsphinx has support for creating thumbnail galleries from a list of Jupyter notebooks. This functionality relies on Sphinx-Gallery and extends it to work with Jupyter notebooks rather than Python scripts.

To use it, you will need to install both nbsphinx and Sphinx-Gallery, and modify your conf.py as follows:

conf.py
extensions = [
   'nbsphinx',
   'sphinx_gallery.load_style',
]

After doing that, there are two ways to create the gallery:

  • From a reStructuredText source file, using the .. nbgallery:: directive, as showcased in the documentation.

  • From a Jupyter notebook, adding a "nbsphinx-gallery" tag to the metadata of a cell. Each editor has a different way of modifying the cell metadata (see figure below).

Panel to modify cell metadata in JupyterLab

For example, this reST markup would create a thumbnail gallery with generic images as thumbnails, thanks to the Sphinx-Gallery default style:

Thumbnails gallery
==================

.. nbgallery::
   notebooks/Example 1
   notebooks/Example 2

Simple thumbnail gallery created using nbsphinx

To see some examples of notebook galleries in the wild:

Background
Existing relevant extensions

In the first part of this document we have seen that nbsphinx and MyST-NB are similar. However, there are some differences between them:

  • nbsphinx uses pandoc to convert the Markdown from Jupyter notebooks to reStructuredText and then to docutils AST, whereas MyST-NB uses MyST-Parser to convert the Markdown text directly to docutils AST. Therefore, nbsphinx assumes pandoc-flavored Markdown, whereas MyST-NB uses MyST-flavored Markdown. The two flavors are mostly equivalent, but they have some differences.

  • nbsphinx executes each notebook during the parsing phase, whereas MyST-NB can execute all notebooks up front and cache them with jupyter-cache. With MyST-NB, this can result in shorter build times when only some notebooks have been modified.

  • nbsphinx provides functionality to create thumbnail galleries, whereas MyST-NB does not have such functionality at the moment (see Creating galleries of examples using notebooks for more information about galleries).

  • MyST-NB allows embedding Python objects coming from the notebook in the documentation (read their “glue” documentation for more information) and provides more sophisticated error reporting than the one nbsphinx has.

  • The visual appearance of code cells and their outputs is slightly different: nbsphinx renders the cell numbers by default, whereas MyST-NB doesn’t.

Deciding which one to use depends on your use case. As general recommendations:

Alternative notebook formats

Jupyter notebooks in .ipynb format (as described in the nbformat documentation) are by far the most widely used for historical reasons.

However, to compensate for some of the disadvantages of the .ipynb format (like cumbersome integration with version control systems), jupytext offers other formats based on plain text rather than JSON.

As a result, there are three modes of operation:

  • Using classic .ipynb notebooks. It’s the most straightforward option, since all the tooling is prepared to work with them, and does not require additional pieces of software. It is therefore simpler to manage, since there are fewer moving parts. However, it requires some care when working with Version Control Systems (like git), by doing one of these things:

    • Clear outputs before commit. Minimizes conflicts, but might defeat the purpose of notebooks themselves, since the computation results are not stored.

    • Use tools like nbdime (open source) or ReviewNB (proprietary) to improve the review process.

    • Use a different collaboration workflow that doesn’t involve notebooks.

  • Replace .ipynb notebooks with a text-based format. These formats behave better under version control and they can also be edited with normal text editors that do not support cell-based JSON notebooks. However, text-based formats do not store the outputs of the cells, and this might not be what you want.

  • Pairing .ipynb notebooks with a text-based format, and putting the text-based file in version control, as suggested in the jupytext documentation. This solution has the best of both worlds. In some rare cases you might experience synchronization issues between both files.

These approaches are not mutually exclusive, nor do you have to use a single format for all your notebooks. For the examples in this document, we have used the MyST Markdown format.

If you are using alternative formats for Jupyter notebooks, you can include them in your Sphinx documentation using either nbsphinx or MyST-NB (see Existing relevant extensions for more information about the differences between them).

Migrating from reStructuredText to MyST Markdown

Sphinx is usually associated with reStructuredText, the markup language designed for the CPython project in the early ’00s. However, for quite some time Sphinx has been compatible with Markdown as well, thanks to a number of extensions.

The most powerful of such extensions is MyST-Parser, which implements a CommonMark-compliant, extensible Markdown dialect with support for the Sphinx roles and directives that make it so useful.

In this guide, you will find how you can start writing Markdown in your existing reStructuredText project, or migrate it completely.

If, instead of migrating, you are starting a new project from scratch, have a look at Get Started. If you are starting a project for Jupyter, you can begin with Jupyter Book, which uses MyST-Parser; see the official Jupyter Book tutorial: Create your first book.

Writing your content both in reStructuredText and MyST

It is useful to ask whether a migration is necessary in the first place. Doing bulk migrations of large projects with lots of work in progress will create conflicts for ongoing changes. On the other hand, your writers might prefer to have some files in Markdown and some others in reStructuredText, for whatever reason. Luckily, Sphinx supports reading both types of markup at the same time without problems.

To start using MyST in your existing Sphinx project, first install the `myst-parser` Python package and then enable it in your configuration:

conf.py
extensions = [
    # Your existing extensions
    ...,
    "myst_parser",
]

Your reStructuredText documents will keep rendering, and you will be able to add MyST documents with the .md extension that will be processed by MyST-Parser.

As an example, this guide is written in MyST while the rest of the Read the Docs documentation is written in reStructuredText.

Note

By default, MyST-Parser registers the .md suffix for MyST source files. If you want to use a different suffix, you can do so by changing your source_suffix configuration value in conf.py.
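For example, a conf.py sketch that keeps .rst for reStructuredText while mapping an additional suffix to the Markdown parser (the .txt entry is purely illustrative) could look like this:

```python
# conf.py -- sketch: map source file suffixes to parsers
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",    # handled by MyST-Parser once it is enabled
    ".txt": "markdown",   # illustrative: also treat .txt files as MyST
}
```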

Converting existing reStructuredText documentation to MyST

To convert existing reST documents to MyST, you can use the rst2myst CLI script shipped by RST-to-MyST. The script supports converting the documents one by one, or scanning a series of directories to convert them in bulk.

After installing `rst-to-myst`, you can run the script as follows:

$ rst2myst convert docs/source/index.rst  # Converts index.rst to index.md
$ rst2myst convert docs/**/*.rst  # Convert every .rst file under the docs directory

This will create a .md MyST file for every .rst source file converted.

Advanced usage of rst2myst

The rst2myst script accepts several flags to modify its behavior. All of them have sensible defaults, so you don’t have to specify any unless you want to.

These are a few options you might find useful:

-d, --dry-run

Only verify that the script would work correctly, without actually writing any files.

-R, --replace-files

Replace the .rst files with their .md equivalents, rather than writing a new .md file next to each old .rst one.

You can read the full list of options in the `rst2myst` documentation.

Enabling optional syntax

Some reStructuredText syntax requires you to enable certain MyST plugins. For example, to write definition lists, you need to add a myst_enable_extensions variable to your Sphinx configuration, as follows:

conf.py
myst_enable_extensions = [
    "deflist",
]

You can learn more about other MyST-Parser plugins in their documentation.

Writing reStructuredText syntax within MyST

There is a small chance that rst2myst does not properly understand a piece of reST syntax, either because there is a bug in the tool or because that syntax does not have a MyST equivalent yet. For example, as explained in the documentation, the sphinx.ext.autodoc extension is incompatible with MyST.

Fortunately, MyST supports an eval-rst directive that will parse the content as reStructuredText, rather than MyST. For example:

```{eval-rst}
.. note::

   Complete MyST migration.

```

will produce the following result:

Note

Complete MyST migration.

As a result, this allows you to conduct a gradual migration, at the expense of having heterogeneous syntax in your source files. In any case, the HTML output will be the same.

Guides for project administrators

These guides cover common use cases relevant for managing documentation projects, using the Read the Docs web interface, and making changes to the configuration files.

For an introduction to Read the Docs, have a look at our Read the Docs tutorial.

Technical Documentation Search Engine Optimization (SEO) Guide

This guide will help you optimize your documentation for search engines with the goal of increasing traffic to your docs. While you optimize your docs to make them more crawler friendly for search engine spiders, it’s important to keep in mind that your ultimate goal is to make your docs more discoverable for your users. You’re trying to make sure that when a user types a question into a search engine and your documentation can answer it, your docs appear in the results.

This guide isn’t meant to be your only resource on SEO, and there are many SEO topics not covered here. For additional reading, please see the external resources section.

While many of the topics here apply to all forms of technical documentation, this guide will focus on Sphinx, which is the most common documentation authoring tool on Read the Docs, as well as improvements provided by Read the Docs itself.

SEO Basics

Search engines like Google and Bing crawl through the internet following links in an attempt to understand and build an index of what various pages and sites are about. This is called “crawling” or “indexing”. When a person sends a query to a search engine, the search engine evaluates this index using a number of factors and attempts to return the results most likely to answer that person’s question.

How search engines “rank” sites based on a person’s query is part of their secret sauce. While some search engines publish the basics of their algorithms (see Google’s published details on PageRank), few search engines give all of the details in an attempt to prevent users from gaming the rankings with low value content which happens to rank well.

Both Google and Bing publish a set of guidelines to help make sites easier to understand for search engines and rank better. To summarize some of the most important aspects as they apply to technical documentation, your site should:

  • Use descriptive and accurate titles in the HTML <title> tag. For Sphinx, the <title> comes from the first heading on the page.

  • Ensure your URLs are descriptive. They are displayed in search results. Sphinx uses the source filename without the file extension as the URL.

  • Make sure the words your readers would search for to find your site are actually included on your pages.

  • Avoid low content pages or pages with very little original content.

  • Avoid tactics that attempt to increase your search engine ranking without actually improving content.

  • Google specifically warns about automatically generated content, although this applies primarily to keyword stuffing and low-value content. High quality documentation generated from source code (e.g. auto-generated API documentation) seems OK.

While both Google and Bing discuss site performance as an important factor in search result ranking, this guide is not going to discuss it in detail. Most technical documentation that uses Sphinx or Read the Docs generates static HTML and the performance is typically decent relative to most of the internet.

Optimizing your docs for search engine spiders

Once a crawler or spider finds your site, it will follow links and redirects in an attempt to find any and all pages on your site. While there are a few ways to guide the search engine in its crawl, for example by using a sitemap or a robots.txt file (both discussed shortly), the most important thing is making sure the spider can follow links on your site and get to all your pages.

Avoid orphan pages

Sphinx calls pages that don’t have links to them “orphans” and will throw a warning while building documentation that contains an orphan unless the warning is silenced with the orphan directive:

$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.8.5
...
checking consistency... /path/to/file.rst: WARNING: document isn't included in any toctree
done
...
build finished with problems, 1 warning.

You can make all Sphinx warnings into errors during your build process by setting SPHINXOPTS = -W --keep-going in your Sphinx Makefile.

Avoid uncrawlable content

While typically this isn’t a problem with technical documentation, try to avoid content that is “hidden” from search engines. This includes content hidden in images or videos which the crawler may not understand. For example, if you do have a video in your docs, make sure the rest of that page describes the content of the video.

When using images, make sure to set the image alt text or set a caption on figures. For Sphinx, the image and figure directives support this:

.. image:: your-image.png
    :alt: A description of this image

.. figure:: your-image.png

    A caption for this figure
Redirects

Redirects tell search engines when content has moved. For example, if this guide moved from guides/technical-docs-seo-guide.html to guides/sphinx-seo-guide.html, there will be a time period where search engines will still have the old URL in their index and will still be showing it to users. This is why it is important to update your own links within your docs as well as redirecting. If the hostname moved from docs.readthedocs.io to docs.readthedocs.org, this would be even more important!

Read the Docs supports a few different kinds of user defined redirects that should cover all the different cases such as redirecting a certain page for all project versions, or redirecting one version to another.

Canonical URLs

Anytime very similar content is hosted at multiple URLs, it is important to set a canonical URL. The canonical URL tells search engines where the original version of your documentation is, even if you have multiple versions on the internet (for example, incomplete translations or deprecated versions).

Read the Docs supports setting the canonical URL if you are using a custom domain under Admin > Domains in the Read the Docs dashboard.

Use a robots.txt file

A robots.txt file is readable by crawlers and lives at the root of your site (e.g. https://docs.readthedocs.io/robots.txt). It tells search engines which pages to crawl or not to crawl, and can allow you to control how a search engine crawls your site. For example, you may want to request that search engines ignore unsupported versions of your documentation while keeping those docs online in case people need them.

By default, Read the Docs serves a robots.txt for you. To customize this file, you can create a robots.txt file that is written to your documentation root on your default branch/version.

See the Google’s documentation on robots.txt for additional details.

Use a sitemap.xml file

A sitemap is a file readable by crawlers that contains a list of pages and other files on your site and some metadata or relationships about them (e.g. https://docs.readthedocs.io/sitemap.xml). A good sitemap provides information such as how frequently a page or file is updated and any alternate language versions of a page.

Read the Docs generates a sitemap for you that contains the last time your documentation was updated as well as links to active versions, subprojects, and translations your project has. We have a small separate guide on sitemaps.

See the Google docs on building a sitemap.

Use meta tags

Using a meta description allows you to customize how your pages look in search engine result pages.

Typically search engines will use the first few sentences of a page if no meta description is provided. In Sphinx, you can customize your meta description using the following reStructuredText:

.. meta::
    :description lang=en:
        Adding additional CSS or JavaScript files to your Sphinx documentation
        can let you customize the look and feel of your docs or add additional functionality.

Google search engine results showing a customized meta description

Moz.com, an authority on search engine optimization, makes the following suggestions for meta descriptions:

  • Your meta description should have the most relevant content of the page. A searcher should know whether they’ve found the right page from the description.

  • The meta description should be between 150 and 300 characters, and it may be truncated down to around 150 characters in some situations.

  • Meta descriptions are used for display but not for ranking.

Search engines don’t always use your customized meta description if they think a snippet from the page is a better description.

Measure, iterate, & improve

Search engines (and soon, Read the Docs itself) can provide useful data that you can use to improve your docs’ ranking on search engines.

Search engine feedback

Google Search Console and Bing Webmaster Tools are tools for webmasters to get feedback about the crawling of their sites (or docs, in our case). Some of the most valuable feedback these tools provide includes:

  • Google and Bing will show pages that were previously indexed that now give a 404 (or more rarely a 500 or other status code). These will remain in the index for some time but will eventually be removed. This is a good opportunity to create a redirect.

  • These tools will show any crawl issues with your documentation.

  • Search Console and Webmaster Tools will highlight security issues found or if Google or Bing took action against your site because they believe it is spammy.

Analytics tools

A tool like Google Analytics can give you feedback about the search terms people use to find your docs, your most popular pages, and lots of other useful data.

Search term feedback can be used to help you optimize content for certain keywords or for related keywords. For Sphinx documentation, or other technical documentation that has its own search features, analytics tools can also tell you the terms people search for within your site.

Knowing your popular pages can help you prioritize where to spend your SEO efforts. Optimizing your already popular pages can have a significant impact.

External resources

Here are a few additional resources to help you learn more about SEO and rank better with search engines.

Manage Translations for Sphinx projects

This guide walks through the process needed to manage translations of your documentation. Once this work is done, you can set up your project under Read the Docs to build each language of your documentation by reading Localization of Documentation.

Overview

There are many different ways to manage documentation in multiple languages by using different tools or services. All of them have their pros and cons depending on the context of your project or organization.

In this guide we will focus on two different methods: doing it manually and using Transifex.

In both methods, we need to follow these steps to translate our documentation:

  1. Create translatable files (.pot and .po extensions) from source language

  2. Translate the text on those files from source language to target language

  3. Build the documentation in target language using the translated texts

Besides these steps, once we have published our first translated version of our documentation, we will want to keep it updated from the source language. At that time, the workflow would be:

  1. Update our translatable files from source language

  2. Translate only new and modified texts in source language into target language

  3. Build the documentation using the most up to date translations

Create translatable files

To generate the .pot files, run this command from your docs/ directory:

sphinx-build -b gettext . _build/gettext

Tip

We recommend setting gettext_uuid to True and gettext_compact to False in your Sphinx configuration when generating .pot files.
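In conf.py, that recommendation translates to:

```python
# conf.py -- gettext settings recommended above for generating .pot files
gettext_uuid = True      # add a unique id to each message, so tools can track changes
gettext_compact = False  # generate one .pot file per source document, not per directory
```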

This command will leave the generated files under _build/gettext.

Translate text from source language
Manually

We recommend using the sphinx-intl tool for this workflow.

First, you need to install it:

pip install sphinx-intl

As a second step, we want to create a directory with one translated file per target language (in this example we are using Spanish (Argentina) and Portuguese (Brazil)). This can be achieved with the following command:

sphinx-intl update -p _build/gettext -l es_AR -l pt_BR

This command will create a directory structure similar to the following (with one .po file per .rst file in your documentation):

locale
├── es_AR
│   └── LC_MESSAGES
│       └── index.po
└── pt_BR
    └── LC_MESSAGES
        └── index.po

Now, you can just open those .po files with a text editor and translate them, taking care not to break the reST notation. Example:

# b8f891b8443f4a45994c9c0a6bec14c3
#: ../../index.rst:4
msgid ""
"Read the Docs hosts documentation for the open source community."
"It supports :ref:`Sphinx <sphinx>` docs written with reStructuredText."
msgstr ""
"FILL HERE BY TARGET LANGUAGE FILL HERE BY TARGET LANGUAGE FILL HERE "
"BY TARGET LANGUAGE :ref:`Sphinx <sphinx>` FILL HERE."
Using Transifex

Transifex is a platform that simplifies the manipulation of .po files and offers many useful features to make the translation process as smooth as possible. These features include a great web-based UI, Translation Memory, collaborative translation, and more.

You need to create an account on their service and a new project before starting.

After that, you need to install the transifex-client tool, which will help you upload source files, update them, and download translated files. To do this, run this command:

pip install transifex-client

After installing it, you need to configure your account. For this, you need to create an API Token for your user to access this service through the command line. This can be done under your User’s Settings.

Now, you need to configure it to use this token:

tx init --token $TOKEN --no-interactive

The next step is to map every .pot file you have created in the previous step to a resource under Transifex. To achieve this, you need to run this command:

tx config mapping-bulk \
    --project $TRANSIFEX_PROJECT \
    --file-extension '.pot' \
    --source-file-dir docs/_build/gettext \
    --source-lang en \
    --type PO \
    --expression 'locale/<lang>/LC_MESSAGES/{filepath}/{filename}.po' \
    --execute

This command will generate a file at .tx/config with all the information needed by the transifex-client tool to keep your translations synchronized.

Finally, you need to upload these files to Transifex platform so translators can start their work. To do this, you can run this command:

tx push --source

Now, you can go to your Transifex’s project and check that there is one resource per .rst file of your documentation. After the source files are translated using Transifex, you can download all the translations for all the languages by running:

tx pull --all

This command will leave the .po files needed for building the documentation in the target language under locale/<lang>/LC_MESSAGES.

Warning

It’s important to always use the same method to translate the documentation and not mix them. Otherwise, it’s very easy to end up with inconsistent translations or lose already translated text.

Build the documentation in target language

Finally, to build our documentation in Spanish (Argentina) we need to tell the Sphinx builder the target language with the following command:

sphinx-build -b html -D language=es_AR . _build/html/es_AR

Note

There is no need to create a new conf.py to redefine the language for the Spanish version of this documentation, but you need to set locale_dirs to ["locale"] for Sphinx to find the translated content.
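A minimal conf.py sketch for the layout used in this guide:

```python
# conf.py -- sketch: point Sphinx at the translated message catalogs
locale_dirs = ["locale"]  # Sphinx looks for locale/<lang>/LC_MESSAGES/*.po here
# "language" is intentionally not hardcoded: it is passed on the command
# line with -D language=es_AR, as shown above
```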

After running this command, the Spanish (Argentina) version of your documentation will be under _build/html/es_AR.

Summary
Update sources to be translated

Once you have made changes to your documentation, you may want to make these additions/modifications available for translators so they can update them:

  1. Create the .pot files:

    sphinx-build -b gettext . _build/gettext
    
  2. Push new files to Transifex

    tx push --source
    
Build documentation from up to date translation

When translators have finished their job, you may want to update the documentation by pulling the changes from Transifex:

  1. Pull up to date translations from Transifex:

    tx pull --all
    
  2. Commit and push these changes to your repo

    git add locale/
    git commit -m "Update translations"
    git push
    

The last git push will trigger a build per translation defined as part of your project under Read the Docs and make it immediately available.

Using advanced search features

Read the Docs uses Server Side Search to power our search. This guide explains how to add a “search as you type” feature to your documentation, and how to use advanced query syntax to get more accurate results.

Enable “search as you type” in your documentation

readthedocs-sphinx-search is a Sphinx extension that integrates your documentation more closely with the search implementation of Read the Docs. It adds a clean and minimal full-page search UI that supports a search as you type feature.

To try this feature, you can press / (forward slash) and start typing or just visit these URLs:

Search query syntax

Read the Docs uses the Simple Query String feature from Elasticsearch. This means that as the search query becomes more complex, the results yielded become more specific.

Exact phrase search with slop value

~N (tilde N) after a phrase signifies a slop amount. It can be used to match words that are near one another.

Example queries:

Prefix query

* (asterisk) at the end of any term signifies a prefix query. It returns results containing words with the specified prefix.

Example queries:

Fuzzy query

~N after a word signifies edit distance (fuzziness). This type of query is helpful when the exact spelling of the keyword is unknown. It returns results that contain terms similar to the search term, as measured by Levenshtein edit distance.

Example queries:

Build complex queries

The search query syntaxes described in the previous sections can be used with one another to build complex queries.

For example:

Hide a Version and Keep its Docs Online

If you manage a project with a lot of versions, the version (flyout) menu of your docs can easily become overwhelming and hard to navigate.

Overwhelmed flyout menu

You can deactivate a version to remove its docs, but removing the docs isn’t always an option. To keep a version out of the flyout menu while keeping its docs online, you can mark it as hidden. Go to the Versions tab of your project, click Edit, and mark the Hidden option.

Users that have a link to your old version will still be able to see your docs. And new users can see all your versions (including hidden ones) in the versions tab of your project at https://readthedocs.org/projects/<your-project>/versions/

Check the docs about versions’ states for more information.

Deprecating Content

When you deprecate a feature from your project, you may want to deprecate its docs as well, and stop your users from reading that content.

Deprecating content may sound as easy as deleting it, but doing that will break existing links, and you don’t necessarily want to make the content inaccessible. Here you’ll find some tips on how to use Read the Docs to deprecate your content progressively and in non-harmful ways.

Deprecating versions

If you have multiple versions of your project, it makes sense to have its documentation versioned as well. For example, say you have the following versions and want to deprecate v1:

  • https://project.readthedocs.io/en/v1/

  • https://project.readthedocs.io/en/v2/

  • https://project.readthedocs.io/en/v3/

For cases like this you can hide a version. Hidden versions won’t be listed in the versions menu of your docs, and they will be listed in a robots.txt file to stop search engines from showing results for that version.

Users can still see all versions in the dashboard of your project. To hide a version go to your project and click on Versions > Edit, and mark the Hidden option. Check Version States for more information.

Note

If the versions of your project follow the semver convention, you can activate the Version warning option for your project. A banner with a warning and linking to the stable version will be shown on all versions that are lower than the stable one.

Deprecating pages

You may not always want to deprecate a whole version, but only some pages. For example, if you have documentation about two APIs and you want to deprecate v1:

  • https://project.readthedocs.io/en/latest/api/v1.html

  • https://project.readthedocs.io/en/latest/api/v2.html

A simple way is to add a warning at the top of the page. This will warn users visiting that page, but it won’t stop users from being redirected to it from search results. You can add an entry for that page in a custom robots.txt file to stop search engines from showing those results. For example:

# robots.txt

User-agent: *

Disallow: /en/latest/api/v1.html # Deprecated API

But your users will still see search results from that page if they use the search from your docs. With Read the Docs you can set a custom rank per page. For example:

# .readthedocs.yaml

version: 2
search:
   ranking:
      api/v1.html: -1

This won’t hide results from that page, but it will give priority to results from other pages.

Tip

You can make use of Sphinx directives (like warning, deprecated, versionchanged) or MkDocs admonitions to warn your users about deprecated content.
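For example, a sketch of such a warning on the v1 API page, using Sphinx’s deprecated directive (the version number and wording are illustrative):

.. deprecated:: 2.0
    The v1 API is deprecated and will eventually be removed.
    Use the v2 API instead.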

Moving and deleting pages

After you have deprecated a feature for a while, you may want to get rid of its documentation. That’s OK: you don’t have to maintain that content forever. But be aware that users may have links to that page saved, and it will be frustrating and confusing for them to get a 404.

To solve that problem you can create a redirect to a page with similar content, like redirecting users who visit the deleted v1 docs to the docs for v2 of your API: a page redirect from /api/v1.html to /api/v2.html. See User-defined Redirects.

Sphinx PDFs with Unicode

Sphinx offers different LaTeX engines that have better support for Unicode characters and non-European languages like Japanese or Chinese. By default Sphinx uses pdflatex, which does not have good support for Unicode characters and may make the PDF builder fail.

To build your documentation in PDF format, you need to configure Sphinx properly in your project’s conf.py. Read the Docs will execute the proper commands depending on these settings. There are several settings that can be defined (all of them starting with latex_) to modify Sphinx and Read the Docs behavior and make your documentation build properly.

If your docs are not written in Chinese or Japanese and your build fails with a Unicode error, try xelatex as the latex_engine instead of the default pdflatex in your conf.py:

latex_engine = 'xelatex'

When Read the Docs detects that your documentation is in Chinese or Japanese, it automatically adds some defaults for you.

For Chinese projects, it appends these settings to your conf.py:

latex_engine = 'xelatex'
latex_use_xindy = False
latex_elements = {
    'preamble': '\\usepackage[UTF8]{ctex}\n',
}

And for Japanese projects:

latex_engine = 'platex'
latex_use_xindy = False

Tip

You can always override these settings by defining them yourself in your conf.py file.

Note

xindy is currently not supported by Read the Docs, but we plan to support it in the near future.

Manually Importing Private Repositories

Warning

This guide is for users of Read the Docs for Business. If you are using GitHub, GitLab, or Bitbucket, we recommend connecting your account and importing your project from https://readthedocs.com/dashboard/import instead of importing it manually.

If you are using an unsupported integration, or don’t want to connect your account, you’ll need a few extra steps to get your project working:

  1. Manually import your project using an SSH URL

  2. Allow access to your project using an SSH key

  3. Set up a webhook to build your documentation on every commit

Importing your project
  1. Go to https://readthedocs.com/dashboard/import/manual/

  2. Fill the Repository URL field with the SSH form of your repository’s URL, e.g. git@github.com:readthedocs/readthedocs.org.git

  3. Fill the other required fields

  4. Click Next

Giving access to your project with an SSH key

After importing your project, the build will fail because Read the Docs doesn’t have access to clone your repository. To give it access, you’ll need to add your project’s public SSH key to your VCS provider.

Copy your project’s public key

To find the public SSH key of your Read the Docs project:

  1. Go to the Admin tab of your project

  2. Click on SSH Keys

  3. Click on the fingerprint of the SSH key (it looks like 6d:ca:6d:ca:6d:ca:6d:ca)

  4. Copy the text from the Public key section

Note

The private part of the SSH key is kept secret.

Add the public key to your project
GitHub

For GitHub, you can use deploy keys with read only access.

  1. Go to your project on GitHub

  2. Click on Settings

  3. Click on Deploy Keys

  4. Click on Add deploy key

  5. Put a descriptive title and paste the public SSH key from your Read the Docs project

  6. Click on Add key

GitLab

For GitLab, you can use deploy keys with read only access.

  1. Go to your project on GitLab

  2. Click on Settings

  3. Click on Repository

  4. Expand the Deploy Keys section

  5. Put a descriptive title and paste the public SSH key from your Read the Docs project

  6. Click on Add key

Bitbucket

For Bitbucket, you can use access keys with read only access.

  1. Go to your project on Bitbucket

  2. Click on Repository Settings

  3. Click on Access keys

  4. Click on Add key

  5. Put a descriptive label and paste the public SSH key from your Read the Docs project

  6. Click on Add SSH key

Azure DevOps

For Azure DevOps, you can use SSH key authentication.

  1. Go to your Azure DevOps page

  2. Click on User settings

  3. Click on SSH public keys

  4. Click on New key

  5. Put a descriptive name and paste the public SSH key from your Read the Docs project

  6. Click on Add

Others

If you are not using any of the above providers, Read the Docs will still generate a pair of SSH keys. You’ll need to add the public SSH key from your Read the Docs project to your repository. Refer to your provider’s documentation for the steps required to do this.

Webhooks

To build your documentation on every commit, you’ll need to manually add a webhook; see VCS Integrations. If you are using an unsupported integration, you may need to set up a custom integration using our generic webhook.

Guides for developers and designers

These guides are helpful for developers and designers seeking to extend the authoring tools or customize the documentation appearance.

Installing Private Python Packages

Warning

This guide is for Read the Docs for Business.

Read the Docs uses pip to install your Python packages. If you have private dependencies, you can install them from a private Git repository or a private repository manager.

From a Git repository

Pip supports installing packages from a Git repository using the URI form:

git+https://gitprovider.com/user/project.git@{version}

Or if your repository is private:

git+https://{token}@gitprovider.com/user/project.git@{version}

Here version can be a tag, a branch, or a commit, and token is a personal access token with read-only permissions from your provider.

To install the package, you need to add the URI to your requirements file. Pip will automatically expand environment variables in the URI, so you don’t have to hard-code the token. See using environment variables in Read the Docs for more information.

Note

You have to use the POSIX format for variable names (only uppercase letters and _ are allowed), and include a dollar sign and curly braces around the name (${API_TOKEN}) so that pip can recognize them.
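As an illustration only (this is not pip’s actual implementation), the expansion rule amounts to substituting ${NAME} patterns whose names follow that POSIX format:

```python
import os
import re

# Illustrative sketch of the ${VAR} expansion rule described above;
# pip's real logic lives in its requirements-file parser.
_PATTERN = re.compile(r"\$\{([A-Z_]+)\}")

def expand(uri: str) -> str:
    """Replace ${NAME} with the environment variable's value, if set."""
    return _PATTERN.sub(lambda m: os.environ.get(m.group(1), m.group(0)), uri)

os.environ["API_TOKEN"] = "s3cr3t"  # demo value only
print(expand("git+https://${API_TOKEN}@gitprovider.com/user/project.git"))
# -> git+https://s3cr3t@gitprovider.com/user/project.git
```

Names that don’t follow the POSIX format (for example, lowercase names) are left untouched, which matches the behavior described in the note above.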

Below you can find how to get a personal access token from our supported providers. We will be using environment variables for the token.

GitHub

You need to create a personal access token with the repo scope. Check the GitHub documentation on how to create a personal token.

URI example:

git+https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/user/project.git@{version}

Warning

GitHub doesn’t support tokens per repository. A personal token will grant read and write access to all repositories the user has access to. You can create a machine user to give read access only to the repositories you need.

GitLab

You need to create a deploy token with the read_repository scope for the repository you want to install the package from. Check the GitLab documentation on how to create a deploy token.

URI example:

git+https://${GITLAB_TOKEN_USER}:${GITLAB_TOKEN}@gitlab.com/user/project.git@{version}

Here GITLAB_TOKEN_USER is the user from the deploy token you created, not your GitLab user.

Bitbucket

You need to create an app password with Read repositories permissions. Check the Bitbucket documentation on how to create an app password.

URI example:

git+https://${BITBUCKET_USER}:${BITBUCKET_APP_PASSWORD}@bitbucket.org/user/project.git@{version}

Here BITBUCKET_USER is your Bitbucket user.

Warning

Bitbucket doesn’t support app passwords per repository. An app password will grant read access to all repositories the user has access to.

From a repository manager other than PyPI

Pip installs packages from PyPI by default. If you are using a repository manager like pypiserver or Nexus Repository, you need to set the --index-url option. There are two ways to set it: export the PIP_INDEX_URL environment variable, or add an --index-url line to your requirements file.

Note

Check your repository manager’s documentation to obtain the appropriate index URL.

Using Private Git Submodules

Warning

This guide is for Read the Docs for Business.

Read the Docs uses SSH keys (with read-only permissions) to clone private repositories. An SSH key is automatically generated and added to your main repository, but not to your submodules. To give Read the Docs access to clone your submodules, you’ll need to add the project’s public SSH key to each submodule repository.

Note

  • You can manage which submodules Read the Docs should clone using a configuration file. See submodules.

  • Make sure you are using SSH URLs for your submodules (git@github.com:readthedocs/readthedocs.org.git for example) in your .gitmodules file, not http URLs.
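For reference, a .gitmodules entry using an SSH URL looks like this (the submodule path and repository are illustrative):

```ini
[submodule "docs/theme"]
    path = docs/theme
    url = git@github.com:example/theme.git
```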


GitHub

Since GitHub doesn’t allow you to reuse a deploy key across different repositories, you’ll need to use machine users to give read access to several repositories using only one SSH key.

  1. Remove the SSH deploy key that was added to the main repository on GitHub

    1. Go to your project on GitHub

    2. Click on Settings

    3. Click on Deploy Keys

    4. Delete the key added by Read the Docs Commercial (readthedocs.com)

  2. Create a GitHub user and give it read-only permissions to all the necessary repositories, for example by adding the account as a collaborator or to a team with read access

  3. Attach the public SSH key from your project on Read the Docs to the GitHub user you just created

    1. Go to the user’s settings

    2. Click on SSH and GPG keys

    3. Click on New SSH key

    4. Put a descriptive title and paste the public SSH key from your Read the Docs project

    5. Click on Add SSH key

Azure DevOps

Azure DevOps does not have per-repository SSH keys, but keys can be added to a user instead. As long as this user has access to your main repository and all its submodules, Read the Docs can clone all the repositories with the same key.

Others

GitLab and Bitbucket allow you to reuse the same SSH key across different repositories. Since Read the Docs already added the public SSH key on your main repository, you only need to add it to each submodule repository.

Adding Custom CSS or JavaScript to Sphinx Documentation

Adding additional CSS or JavaScript files to your Sphinx documentation can let you customize the look and feel of your docs or add additional functionality. For example, with a small snippet of CSS, your documentation could use a custom font or have a different background color.

If your custom stylesheet is _static/css/custom.css, you can add that CSS file to the documentation using the Sphinx option html_css_files:

## conf.py

# These folders are copied to the documentation's HTML output
html_static_path = ['_static']

# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
    'css/custom.css',
]

A similar approach can be used to add JavaScript files:

html_js_files = [
    'js/custom.js',
]

Note

The Sphinx HTML options html_css_files and html_js_files were added in Sphinx 1.8. Unless you have a good reason to use an older version, you are strongly encouraged to upgrade. Sphinx is almost entirely backwards compatible.

Overriding or replacing a theme’s stylesheet

The above approach is preferred for adding additional stylesheets or JavaScript, but it is also possible to completely replace a Sphinx theme’s stylesheet with your own stylesheet.

If your replacement stylesheet exists at _static/css/yourtheme.css, you can replace your theme’s CSS file by setting html_style in your conf.py:

## conf.py

html_style = 'css/yourtheme.css'

If you only need to override a few styles on the theme, you can include the theme’s normal CSS using the CSS @import rule.

/** css/yourtheme.css **/

/* This line is theme specific - it includes the base theme CSS */
@import '../alabaster.css';  /* for Alabaster */
/* @import 'theme.css'; */   /* for the Read the Docs theme */

body {
    /* ... */
}

See also

You can also add custom classes to your HTML elements. See Docutils Class and this related Sphinx footnote for more information.

Reproducible Builds

Your docs depend on tools and other dependencies to be built. If your builds aren’t reproducible, an update in a dependency can break them when least expected, or make your docs look different from your local version. This guide will help you keep your builds working over time, in a reproducible way.

Building your docs

To test your build process, build your docs locally in a clean environment (that is, without any dependencies installed). Then make sure you are running those same steps on Read the Docs.

You can configure how your project is built from the web interface (Admin tab), or by using a configuration file (recommended). If you aren’t familiar with these tools, check our docs:

Note

You can see the exact commands that are run on Read the Docs by going to the Builds tab of your project.

Using a configuration file

If you use the web interface to configure your project, the options are applied to all versions and builds of your docs, and can be lost as you change them over time. A configuration file gives you per-version settings, and those settings live in your repository.

A configuration file with explicit dependencies looks like this:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.9"

# Build from the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Explicitly set the version of Python and its requirements
python:
  install:
    - requirements: docs/requirements.txt
docs/requirements.txt
# Defining the exact version will make sure things don't break
sphinx==4.2.0
sphinx_rtd_theme==1.0.0
readthedocs-sphinx-search==0.1.1
Don’t rely on implicit dependencies

By default Read the Docs will install the tool you chose to build your docs, along with some other dependencies. This is done so new users can build their docs without much configuration.

We highly recommend not assuming these dependencies will always be present or that their versions won’t change. Always declare your dependencies explicitly using a configuration file. For example:

✅ Good:

Your project is declaring the Python version explicitly, and its dependencies using a requirements file.

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.9"

sphinx:
  configuration: docs/conf.py

python:
  install:
    - requirements: docs/requirements.txt
❌ Bad:

Your project is relying on the default Python version and default installed dependencies.

.readthedocs.yaml
version: 2

sphinx:
   configuration: docs/conf.py
Pinning dependencies

As you shouldn’t rely on implicit dependencies, you shouldn’t rely on undefined versions of your dependencies. Some examples:

✅ Good:

The specified versions will be used for all your builds, in all platforms, and won’t be updated unexpectedly.

docs/requirements.txt
sphinx==4.2.0
sphinx_rtd_theme==1.0.0
readthedocs-sphinx-search==0.1.1
docs/environment.yaml
name: docs
channels:
  - conda-forge
  - defaults
dependencies:
  - sphinx==4.2.0
  - nbsphinx==0.8.1
  - pip:
    - sphinx_rtd_theme==1.0.0
❌ Bad:

The latest or any other already installed version will be used, and your builds can fail or change unexpectedly any time.

docs/requirements.txt
sphinx
sphinx_rtd_theme
readthedocs-sphinx-search
docs/environment.yaml
name: docs
channels:
  - conda-forge
  - defaults
dependencies:
  - sphinx
  - nbsphinx
  - pip:
    - sphinx_rtd_theme

Check the pip user guide for more information about requirements files, or our Conda docs about environment files.

Tip

Remember to update your docs’ dependencies from time to time to get new improvements and fixes. It also makes things easier to manage when a version reaches its end of support.

Pinning transitive dependencies

Once you have pinned your own dependencies, the next things to worry about are the dependencies of your dependencies. These are called transitive dependencies, and they can upgrade without warning if you do not pin these packages as well.

We recommend pip-tools to help address this problem. It allows you to specify a requirements.in file with your first-level dependencies, and it generates a requirements.txt file with the full set of transitive dependencies.

✅ Good:

All your transitive dependencies will stay defined, which ensures new package releases will not break your docs.

docs/requirements.in
sphinx==4.2.0
docs/requirements.txt
# This file is autogenerated by pip-compile with python 3.7
# To update, run:
#
#    pip-compile docs/requirements.in
#
alabaster==0.7.12
    # via sphinx
babel==2.10.1
    # via sphinx
certifi==2021.10.8
    # via requests
charset-normalizer==2.0.12
    # via requests
docutils==0.17.1
    # via sphinx
idna==3.3
    # via requests
imagesize==1.3.0
    # via sphinx
importlib-metadata==4.11.3
    # via sphinx
jinja2==3.1.2
    # via sphinx
markupsafe==2.1.1
    # via jinja2
packaging==21.3
    # via sphinx
pygments==2.11.2
    # via sphinx
pyparsing==3.0.8
    # via packaging
pytz==2022.1
    # via babel
requests==2.27.1
    # via sphinx
snowballstemmer==2.2.0
    # via sphinx
sphinx==4.2.0
    # via -r docs/requirements.in
sphinxcontrib-applehelp==1.0.2
    # via sphinx
sphinxcontrib-devhelp==1.0.2
    # via sphinx
sphinxcontrib-htmlhelp==2.0.0
    # via sphinx
sphinxcontrib-jsmath==1.0.1
    # via sphinx
sphinxcontrib-qthelp==1.0.3
    # via sphinx
sphinxcontrib-serializinghtml==1.1.5
    # via sphinx
typing-extensions==4.2.0
    # via importlib-metadata
urllib3==1.26.9
    # via requests
zipp==3.8.0
    # via importlib-metadata

Embedding Content From Your Documentation

Read the Docs allows you to embed content from any of the projects we host, plus a specific set of allowed external domains (currently docs.python.org, docs.scipy.org, docs.sympy.org, and numpy.org). This allows reuse of content across sites, making sure the content is always up to date.

There are a number of use cases for embedding content, so we’ve built our integration in a way that enables users to build on top of it. This guide will show you some of our favorite integrations:

Contextualized tooltips on documentation pages

Tooltips on your own documentation are really useful for adding context to the page the user is currently reading. You can embed any content that is available via reference in Sphinx, including:

  • Python object references

  • Full documentation pages

  • Sphinx references

  • Term definitions

We built a Sphinx extension called sphinx-hoverxref on top of our Embed API that you can install in your project with minimal configuration.

Here is an example showing a tooltip when you hover over a reference:


Tooltip shown when hovering on a reference using sphinx-hoverxref.

You can find more information about this extension, and how to install and configure it, in the hoverxref documentation.

Inline help on application website

This allows us to keep the official documentation as the single source of truth, while also having great inline help in our application website. On the “Automation Rules” admin page we could embed the content of our Automation Rules documentation page and be sure it will always be up to date.

Note

We recommend you point at tagged releases instead of latest. Tags don’t change over time, so you don’t have to worry about the content you are embedding disappearing.

The following example fetches the section “Creating an automation rule” from the page automation-rules.html in our own docs and places its content into the #help-container div element.

<script type="text/javascript">
var params = {
  'url': 'https://docs.readthedocs.io/en/latest/automation-rules.html%23creating-an-automation-rule',
  // 'doctool': 'sphinx',
  // 'doctoolversion': '4.2.0',
};
var url = 'https://readthedocs.org/api/v3/embed/?' + $.param(params);
$.get(url, function(data) {
  $('#help-container').html(data['content']);
});
</script>

<div id="help-container"></div>

You can modify this example to subscribe to the onclick JavaScript event and show a modal when the user clicks a “Help” link.

Tip

Note that if the section title changes, your section argument will break. To avoid that, you can manually define Sphinx references above the sections you don’t want to break. For example,

.. in your .rst document file

.. _unbreakable-section-reference:

Creating an automation rule
---------------------------

This is the text of the section.

To link to the section “Creating an automation rule” you can send section=unbreakable-section-reference. If you change the title, the embedded content won’t break because the label for that title will still be unbreakable-section-reference.

Take a look at the Sphinx :ref: role documentation for more information about how to create references.

Calling the Embed API directly

The Embed API lives at https://readthedocs.org/api/v3/embed/ and accepts the URL of the content you want to embed. Take a look at its own documentation to find out more details.
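For instance, the request URL from the earlier jQuery example can be built with Python’s standard library (the page and section follow that example; the actual fetch requires network access, so it is left out here):

```python
from urllib.parse import urlencode

# Page and section to embed, as in the example earlier in this guide.
params = {
    "url": ("https://docs.readthedocs.io/en/latest/"
            "automation-rules.html#creating-an-automation-rule"),
    "doctool": "sphinx",
}

# urlencode percent-encodes the values, including '#' as '%23'.
embed_url = "https://readthedocs.org/api/v3/embed/?" + urlencode(params)
print(embed_url)
```

Passing the raw `#` fragment and letting urlencode escape it avoids double-encoding an already-escaped `%23`.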

You can click on the following links and check a live response directly in the browser as examples:

Note

All relative links to pages contained in the remote content will continue to point at the remote page.

Conda Support

Read the Docs supports Conda as an environment management tool, along with Virtualenv. Conda support is useful for people who depend on C libraries, and need them installed when building their documentation.

This work was funded by Clinical Graphics – many thanks for their support of open source.

Activating Conda

Conda support is available using a Configuration File, see conda.

Our Docker images use Miniconda, a minimal conda installer. After specifying your project requirements using a conda environment.yml file, Read the Docs will create the environment (using conda env create) and add the core dependencies needed to build the documentation.

Creating the environment.yml

There are several ways of exporting a conda environment:

  • conda env export will produce a complete list of all the packages installed in the environment with their exact versions. This is the best option to ensure reproducibility, but can create problems if done from a different operating system than the target machine, in our case Ubuntu Linux.

  • conda env export --from-history will only include packages that were explicitly requested in the environment, excluding the transitive dependencies. This is the best option for maximizing cross-platform compatibility; however, it may include packages that are not needed to build your docs.

  • And finally, you can also write it by hand. This allows you to pick exactly the packages needed to build your docs (which also results in faster builds) and overcomes some limitations in the conda exporting capabilities.

For example, using the second method for an existing environment:

$ conda activate rtd38
(rtd38) $ conda env export --from-history | tee environment.yml
name: rtd38
channels:
  - defaults
  - conda-forge
dependencies:
  - rasterio==1.2
  - python=3.8
  - pytorch-cpu=1.7
prefix: /home/docs/.conda/envs/rtd38

Read the Docs will override the name and prefix of the environment when creating it, so they can have any value, or not be present at all.

Tip

Bear in mind that rasterio==1.2 (double ==) will install version 1.2.0, whereas python=3.8 (single =) will fetch the latest 3.8.* version, which is 3.8.8 at the time of writing.
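That distinction can be sketched roughly as follows. This is an illustration of the == versus = behavior described in the tip only, not conda’s real MatchSpec logic:

```python
def matches(version: str, spec: str) -> bool:
    """Rough sketch: '==X' is an exact match (trailing zeros ignored),
    while '=X' matches any version starting with X (like 'X.*')."""
    def parts(v: str) -> list:
        p = [int(x) for x in v.split(".")]
        while p and p[-1] == 0:   # treat 1.2 and 1.2.0 as equal
            p.pop()
        return p

    if spec.startswith("=="):
        return parts(version) == parts(spec[2:])
    if spec.startswith("="):
        want = spec[1:].split(".")
        return version.split(".")[: len(want)] == want
    return False

print(matches("1.2.0", "==1.2"))   # rasterio==1.2 installs 1.2.0 -> True
print(matches("3.8.8", "=3.8"))    # python=3.8 matches any 3.8.* -> True
print(matches("3.9.0", "=3.8"))    # -> False
```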

Warning

Pinning Sphinx and other Read the Docs core dependencies is not yet supported by default when using conda (see this GitHub issue for discussion). If your project needs it, request that we enable the CONDA_APPEND_CORE_REQUIREMENTS feature flag.

Effective use of channels

Conda packages are usually hosted on https://anaconda.org/, a registration-free artifact archive maintained by Anaconda Inc. Contrary to what happens with the Python Package Index, different users can potentially host the same package in the same repository, each of them using their own channel. Therefore, when installing a conda package, conda also needs to know which channels to use, and which ones take precedence.

If not specified, conda will use defaults, the channel maintained by Anaconda Inc. and subject to Anaconda Terms of Service. It contains well-tested versions of the most widely used packages. However, some packages are not available on the defaults channel, and even if they are, they might not be on their latest versions.

As an alternative, there are channels maintained by the community that have a broader selection of packages and more up-to-date versions of them, the most popular one being conda-forge.

To use the conda-forge channel when specifying your project dependencies, include it in the list of channels in environment.yml, and conda will rank them in order of appearance. To maximize compatibility, we recommend putting conda-forge above defaults:

name: rtd38
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  # Rest of the dependencies

Tip

If you want to opt out of the defaults channel completely, replace it with nodefaults in the list of channels. See the relevant conda docs for more information.

Making builds faster with mamba

One important caveat: when the conda-forge channel is enabled, the conda dependency solver requires a large amount of RAM and long solve times. This is a known issue due to the sheer number of packages available in conda-forge.

As an alternative, you can instruct Read the Docs to use mamba, a drop-in replacement for conda that is much faster and reduces the memory consumption of the dependency solving process.

To do that, add a .readthedocs.yaml configuration file with these contents:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "mambaforge-4.10"

conda:
  environment: environment.yml

You can read more about the build.tools.python configuration in our documentation.

Mixing conda and pip packages

There are valid reasons to use pip inside a conda environment: some dependency might not be available yet as a conda package in any channel, or you might want to avoid precompiled binaries entirely. In either case, it is possible to specify the subset of packages that will be installed with pip in the environment.yml file. For example:

name: rtd38
channels:
  - conda-forge
  - defaults
dependencies:
  - rasterio==1.2
  - python=3.8
  - pytorch-cpu=1.7
  - pip>=20.1  # pip is needed as dependency
  - pip:
    - black==20.8b1

In their best practices, the conda developers recommend installing as many requirements as possible with conda, and only then using pip, to minimize conflicts and interoperability issues.

Warning

Notice that conda env export --from-history does not include packages installed with pip, see this conda issue for discussion.

Compiling your project sources

If your project contains extension modules written in a compiled language (C, C++, FORTRAN) or server-side JavaScript, you might need special tools to build it from source that are not readily available on our Docker images, such as a suitable compiler, CMake, Node.js, and others.

Luckily, conda is a language-agnostic package manager, and many of these development tools are already packaged on conda-forge or more specialized channels.

For example, this conda environment contains the required dependencies to compile Slycot on Read the Docs:

name: slycot38
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - cmake
  - numpy
  - compilers
Troubleshooting

If you have problems on the environment creation phase, either because the build runs out of memory or time or because some conflicts are found, you can try some of these mitigations:

  • Reduce the number of channels in environment.yml, even leaving only conda-forge and opting out of defaults by adding nodefaults.

  • Constrain the package versions as much as possible to reduce the solution space.

  • Use mamba, an alternative package manager fully compatible with conda packages.

  • And, if all else fails, request more resources.

Custom Installs

If you are running a custom installation of Read the Docs, you will need the conda executable installed somewhere on your PATH. Because of the way conda works, we can’t safely install it as a normal dependency into the normal Python virtualenv.

Warning

Installing conda into a virtualenv will override the activate script, making it so you can’t properly activate that virtualenv anymore.

Specifying your dependencies with Poetry

Declaring your project metadata

Poetry is a PEP 517-compliant build backend, which means that it can generate your project metadata using a standardized interface that can be consumed directly by pip. To do that, first make sure that the build-system section of your pyproject.toml declares the build backend as follows:

pyproject.toml
[build-system]
requires = ["poetry_core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

Then, you will be able to install it on Read the Docs just using pip, with a configuration like this:

.readthedocs.yaml
version: 2

build:
  os: ubuntu-20.04
  tools:
    python: "3.9"

python:
  install:
    - method: pip
      path: .

For example, the rich Python library uses Poetry to declare its library dependencies and installs itself on Read the Docs with pip.

Locking your dependencies

With your pyproject.toml file you are free to specify the dependency versions that are most appropriate for your project, either by leaving them unpinned or by setting some constraints. However, to achieve Reproducible Builds it is better to lock your dependencies, so that the decision to upgrade any of them is yours. Poetry does this using poetry.lock files that contain the exact versions of all your transitive dependencies (that is, all the dependencies of your dependencies).

The first time you run poetry install in your project directory Poetry will generate a new poetry.lock file with the versions available at that moment. You can then commit your poetry.lock to version control so that Read the Docs also uses these exact dependencies.
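As a sketch of what the lock file buys you, here is a minimal, hypothetical poetry.lock excerpt and a small script that lists the pinned versions. Real lock files record more metadata per package, such as file hashes:

```python
import re

# Minimal, hypothetical excerpt of a poetry.lock file: each [[package]]
# table records one dependency with its exact locked version.
LOCK_EXCERPT = """
[[package]]
name = "sphinx"
version = "4.2.0"

[[package]]
name = "jinja2"
version = "3.1.2"
"""

# A real lock file is TOML; a simple regex is enough for this sketch.
pinned = dict(re.findall(r'name = "([^"]+)"\nversion = "([^"]+)"', LOCK_EXCERPT))
print(pinned)   # {'sphinx': '4.2.0', 'jinja2': '3.1.2'}
```

Because every transitive dependency is recorded with one exact version, two builds from the same committed lock file install the same set of packages.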

Removing “Edit on …” Buttons from Documentation

When building your documentation, Read the Docs automatically adds buttons at the top of your documentation and in the versions menu that point readers to your repository to make changes. For instance, if your repository is on GitHub, a button that says “Edit on GitHub” is added in the top-right corner of your documentation to make it easy for readers to author new changes.

Remove “On …” section from versions menu

This section can be removed with a custom CSS rule to hide them. Follow the instructions under Adding Custom CSS or JavaScript to Sphinx Documentation and put the following content into the .css file:

/* Hide "On GitHub" section from versions menu */
div.rst-versions > div.rst-other-versions > div.injected > dl:nth-child(4) {
    display: none;
}

Warning

You may need to change the number 4 in dl:nth-child(4) in case your project has more sections in the versions menu. For example, if your project has translations into different languages, you will need to use the number 5 there.

Now when you build your documentation, your documentation won’t include an edit button or links to the page source.

My Build is Using Too Many Resources

We limit build resources to make sure that users don’t overwhelm our build systems. If you are running into this issue, there are a couple of fixes you might try.

Note

The current build limits can be found on our Build process page.

Reduce formats you’re building

You can change the formats of docs that you’re building with our Configuration File, see formats.

In particular, the htmlzip format takes a decent amount of memory and time to build, so disabling it might solve your problem.
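For example, with the v2 configuration file you can build only the PDF alongside the default HTML output and skip htmlzip (format names follow the configuration file reference; adjust to your needs):

```yaml
# .readthedocs.yaml
version: 2

# Build only the PDF in addition to the default HTML output
formats:
  - pdf
```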

Reduce documentation build dependencies

A lot of projects reuse their requirements file for their documentation builds. If there are extra packages that you don’t need for building docs, you can create a custom requirements file just for documentation. This should speed up your documentation builds, as well as reduce your memory footprint.

Use mamba instead of conda

If you need conda packages to build your documentation, you can use mamba as a drop-in replacement for conda, which requires less memory and is noticeably faster.

Document Python modules API statically

If you are installing a lot of Python dependencies just to document your Python modules’ API with sphinx.ext.autodoc, you can try the sphinx-autoapi extension instead. It aims to produce the same output while running statically, which can drastically reduce the memory and bandwidth required to build your docs.

Request more resources

If you still have problems building your documentation, we can increase build limits on a per-project basis. Send an email to support@readthedocs.org explaining why your documentation needs more resources.

Read the Docs for Science

Documentation and technical writing are broad fields. Their tools and practices have grown relevant to most scientific activities. This includes building publications, books, educational resources, interactive data science, resources for data journalism and full-scale websites for research projects and courses.

Let’s explore the overlap of features for software documentation and academic writing. Here’s a brief overview of some features that people in science and academic writing love about Read the Docs:

🪄 Easy to use

Documentation code doesn’t have to be written by a programmer. In fact, documentation coding languages are designed and developed so you don’t have to be a programmer, and there are many writing aids that make it easy to abstract away from code and focus on content.

Getting started is easy, too.

🔋 Batteries included: Graphs, computations, formulas, maps, diagrams and more

Take full advantage of the richness of Jupyter Notebook combined with Sphinx and the giant ecosystem of extensions for both.

Here are some examples:

  • Use symbols familiar from math and physics, build advanced proofs. See also: sphinx-proof

  • Present results with plots, graphs, images and let users interact directly with your datasets and algorithms. See also: Matplotlib, Interactive Data Visualizations

  • Graphs, tables etc. are computed when the latest version of your project is built and published as a stand-alone website. All code examples on your website are validated each time you build.

📚 Bibliographies and external links

Maintain bibliography databases directly as code and have external links automatically verified.

Using Sphinx extensions such as the popular sphinxcontrib-bibtex extension, you can maintain your bibliography with Sphinx directly or refer to entries in .bib files, as well as generate entire Bibliography sections from those files.
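A minimal sketch of wiring sphinxcontrib-bibtex into conf.py (the file name refs.bib is an assumption):

```python
# conf.py -- minimal sphinxcontrib-bibtex setup (sketch)
extensions = [
    "sphinxcontrib.bibtex",
]

# One or more BibTeX databases kept in the repository; entries
# can then be cited with :cite:`key` and rendered with the
# "bibliography" directive.
bibtex_bibfiles = ["refs.bib"]
```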

📜 Modern themes and classic PDF outputs
_images/screenshot_rtd_downloads.png

Use the latest state-of-the-art themes for web and have PDFs and e-book formats automatically generated.

New themes are improving every day, and when you write documentation based on Jupyter Book and Sphinx, you will separate your contents and semantics from your presentation logic. This way, you can keep up with the latest theme updates or try new themes.

Another example of the benefits from separating content and presentation logic: Your documentation also transforms into printable books and eBooks.

📐 Widgets, widgets and more widgets

Design your science project’s layout and components with widgets from a rich ecosystem of open-source extensions built for many purposes. Special widgets help users display and interact with graphs, maps and more. Several extensions are built by the science community itself.

⚙️ Automatic builds

Build and publish your project for every change made through Git (GitHub, GitLab, Bitbucket, etc.). Preview changes via pull requests. Receive notifications when something is wrong.

💬 Collaboration and community
_images/screenshot_edit_on_github.png

Science and academia have a strong kinship with software developers: we ❤️ community. Our solutions and projects become better when we foster inclusivity and active participation. Read the Docs makes it easy for readers to suggest changes via your git platform (GitHub, GitLab, Bitbucket, etc.). And not just unqualified feedback: the code and all the tools are available for your community to forge qualified contributions.

Your readers can become your co-authors!

Discuss changes via pull request and track all changes in your project’s version history.

Using git does not mean that anyone can go and change your code and your published project. The full ownership and permission handling remains in your hands. Project and organization owners on your git platform govern what is released and who has access to approve and build changes.

🔎 Full search and analytics

Read the Docs comes with a number of features bundled in that you would have to configure if you were hosting documentation elsewhere.

Super-fast text search

Your documentation is automatically indexed and gets its own search function.

Traffic statistics

Have full access to your traffic data and have quick access to see which of your pages are most popular.

Search analytics

What are people searching for and do they get hits? From each search query in your documentation, we collect a neat little statistic that can help to improve the discoverability and relevance of your documentation.

SEO - Don’t reinvent Search Engine Optimization

Use built-in SEO best-practices from Sphinx, its themes and Read the Docs hosting. This can give you a good ranking on search engines as a direct outcome of simply writing and publishing your documentation project.

🌱 Grow your own solutions

The ecosystem is open source, making it accessible for anyone with Python skills to build their own extensions.

We want science communities to use Read the Docs and to be part of the documentation community 💞

Getting started: Jupyter Book

Jupyter Book on Read the Docs brings you the rich experience of computed Jupyter documents built together with a modern documentation tool. The results are beautiful, automatically deployed websites, built with Sphinx and Executable Book plus all the extensions available in this ecosystem.

Here are some popular activities that are well-supported by Jupyter Book:

  • Publications and books

  • Course and research websites

  • Interactive classroom activities

  • Data science software documentation

Visit the gallery of solutions built with Jupyter Book »

Ready to get started?
Examples and users

The Read the Docs community for science is already big and keeps growing. The Jupyter Project itself and many of its sub-projects are built and published with Read the Docs.

  • Jupyter Project Documentation

  • Chainladder - Property and Casualty Loss Reserving in Python

  • Feature-engine - A Python library for Feature Engineering and Selection

Example projects

  • Need inspiration?

  • Want to bootstrap a new documentation project?

  • Want to showcase your own solution?

The following example projects show a rich variety of uses of Read the Docs. You can use them for inspiration, for learning and as recipes to start your own documentation projects. View the rendered version of each project and then head over to the Git source to see how it’s done and reuse the code.

Sphinx and MkDocs examples

Topic         Framework                Links             Description
Basic Sphinx  Sphinx                   [Git] [Rendered]  Sphinx example with versioning and Python doc autogeneration
Basic MkDocs  MkDocs                   [Git] [Rendered]  Basic example of using MkDocs
Jupyter Book  Jupyter Book and Sphinx  [Git] [Rendered]  Jupyter Book with popular integrations configured

Real-life examples


We maintain an Awesome List where you can contribute new shiny examples of using Read the Docs. Please refer to the instructions on how to submit new entries on Awesome Read the Docs Projects.

Contributing an example project

We would love to add more examples that showcase features of Read the Docs or great tools or methods to build documentation projects.

We require that an example project:

  • is hosted and maintained by you in its own Git repository, example-<topic>.

  • contains a README.

  • uses a .readthedocs.yaml configuration.

  • is added to the above list by opening a PR targeting examples.rst.

We recommend that your project:

  • has continuous integration and PR builds.

  • is versioned as a real software project, i.e. using git tags.

  • covers your most important scenarios, but references external real-life projects whenever possible.

  • has a minimal tech stack – or whatever you feel comfortable about maintaining.

  • copies from an existing example project as a template to get started.

We’re excited to see what you come up with!

Advanced features of Read the Docs

Read the Docs offers many advanced features and options. Learn more about these integrations and how you can get the most out of your documentation and Read the Docs.

Subprojects

Projects can be configured in a nested manner, by configuring a project as a subproject of another project. This allows for documentation projects to share a search index and a namespace or custom domain, but still be maintained independently.

For example, a parent project, Foo is set up with a subproject, Bar. The documentation for Foo will be available at:

https://foo.readthedocs.io/en/latest/

The documentation for Bar will be available under this same path:

https://foo.readthedocs.io/projects/bar/en/latest/

Adding a subproject

In the admin dashboard for your project, select “Subprojects” from the menu. From this page you can add a subproject by typing in the project slug.

Subproject aliases

You can use an alias for the subproject when it is created. This allows you to override the URL that is used to access it, giving more configurability to how you want to structure your projects.

Sharing a custom domain

Projects and subprojects can also be used to share a custom domain with a number of projects. To configure this, one project should be established as the parent project. This project will be configured with a custom domain. Projects can then be added as subprojects to this parent project.

If the example project Foo was set up with a custom domain, docs.example.com, the URLs for projects Foo and Bar would respectively be at: https://docs.example.com/en/latest/ and https://docs.example.com/projects/bar/en/latest/

Custom domain on subprojects

Adding a custom domain to a subproject is not allowed, since your documentation will always be served from the domain of the parent project.

Single Version Documentation

Single Version Documentation lets you serve your docs at a root domain. By default, all documentation served by Read the Docs has a root of /<language>/<version>/. But, if you enable the “Single Version” option for a project, its documentation will instead be served at /.

Warning

This means you can’t have translations or multiple versions for your documentation.

You can see a live example of this at http://www.contribution-guide.org

Enabling

You can toggle the “Single Version” option on or off for your project in the Project Admin page. Check your dashboard for a list of your projects.

Effects

Links pointing to the root URL of the project will now point to the proper URL. For example, if pip was set as a “Single Version” project, then links to its documentation would point to https://pip.readthedocs.io/ rather than redirecting to https://pip.readthedocs.io/en/latest/.

Warning

Documentation at /<language>/<default_version>/ will stop working. Remember to set canonical URLs to tell search engines like Google what to index, and to create User-defined Redirects to avoid broken incoming links.

Flyout Menu

When you are using a Read the Docs site, you will likely notice that we embed a menu on all the documentation pages we serve. This is a way to expose the functionality of Read the Docs on the page, without having to have the documentation theme integrate it directly.

Functionality

The flyout menu provides access to the following bits of Read the Docs functionality:

  • A version switcher that shows users all of the active, unhidden versions they have access to.

  • Downloadable formats for the current version, including HTML & PDF downloads that are enabled by the project.

  • Links to the Read the Docs dashboard for the project.

  • Links to your VCS provider that allow the user to quickly find the exact file that the documentation was rendered from.

  • A search bar that gives users access to our Server Side Search of the current version.

Closed
_images/flyout-closed.png

The flyout when it’s closed

Open
_images/flyout-open.png

The opened flyout

Information for theme authors

People who are making custom documentation themes often want to specify where the flyout is injected, and also what it looks like. We support both of these use cases for themes.

Defining where the flyout menu is injected

The flyout menu injection looks for a specific selector (#readthedocs-embed-flyout) to decide where to place the flyout. You can add <div id="readthedocs-embed-flyout"> in your theme, and our JavaScript code will inject the flyout there. In all other themes except sphinx_rtd_theme, the flyout is appended to the <body>.
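If you are writing a theme, a minimal placement hook might look like this (the id is the selector named above; where you put the element in your layout is up to you):

```html
<!-- Place this wherever the flyout should appear in your theme's template -->
<div id="readthedocs-embed-flyout"></div>
```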

Styling the flyout

HTML themes can style the flyout to make it match the overall style of the HTML. By default the flyout has its own CSS file, which you can look at to see the basic CSS class names.

The example HTML that the flyout uses is included here, so that you can style it in your HTML theme:

<div class="injected">
   <div class="rst-versions rst-badge shift-up" data-toggle="rst-versions">
      <span class="rst-current-version" data-toggle="rst-current-version">
      <span class="fa fa-book">&nbsp;</span>
      v: 2.1.x
      <span class="fa fa-caret-down"></span>
      </span>
      <div class="rst-other-versions">
         <!-- "Languages" section (``dl`` tag) is not included if the project does not have translations -->
         <dl>
            <dt>Languages</dt>
            <dd class="rtd-current-item">
               <a href="https://flask.palletsprojects.com/en/2.1.x">en</a>
            </dd>
            <dd>
               <a href="https://flask.palletsprojects.com/es/2.1.x">es</a>
            </dd>
         </dl>

         <!-- "Versions" section (``dl`` tag) is not included if the project is single version -->
         <dl>
            <dt>Versions</dt>
            <dd>
               <a href="https://flask.palletsprojects.com/en/latest/">latest</a>
            </dd>
            <dd class="rtd-current-item">
               <a href="https://flask.palletsprojects.com/en/2.1.x/">2.1.x</a>
            </dd>
         </dl>

         <!-- "Downloads" section (``dl`` tag) is not included if the project does not have artifacts to download -->
         <dl>
            <dt>Downloads</dt>
            <dd>
               <a href="//flask.palletsprojects.com/_/downloads/en/2.1.x/pdf/">PDF</a>
             </dd>
            <dd>
               <a href="//flask.palletsprojects.com/_/downloads/en/2.1.x/htmlzip/">HTML</a>
             </dd>
         </dl>

         <dl>
            <dt>On Read the Docs</dt>
            <dd>
               <a href="//readthedocs.org/projects/flask/">Project Home</a>
            </dd>
            <dd>
               <a href="//readthedocs.org/projects/flask/builds/">Builds</a>
            </dd>
            <dd>
               <a href="//readthedocs.org/projects/flask/downloads/">Downloads</a>
            </dd>
         </dl>

         <dl>
            <dt>On GitHub</dt>
            <dd>
               <a href="https://github.com/pallets/flask/blob/2.1.x/docs/index.rst">View</a>
            </dd>
            <dd>
               <a href="https://github.com/pallets/flask/edit/2.1.x/docs/index.rst">Edit</a>
            </dd>
         </dl>

         <dl>
            <dt>Search</dt>
            <dd>
               <div style="padding: 6px;">
                  <form id="flyout-search-form" class="wy-form" target="_blank" action="//readthedocs.org/projects/flask/search/" method="get">
                     <input type="text" name="q" aria-label="Search docs" placeholder="Search docs">
                  </form>
               </div>
            </dd>
         </dl>

         <hr>
         <small>
         <span>Hosted by <a href="https://readthedocs.org">Read the Docs</a></span>
         <span> &middot; </span>
         <a href="https://docs.readthedocs.io/page/privacy-policy.html">Privacy Policy</a>
         </small>
      </div>
   </div>
</div>

Feature Flags

Read the Docs offers some additional flag settings which are disabled by default for every project and can only be enabled by contacting us through our support form or reaching out to the administrator of your service.

Available Flags

CONDA_APPEND_CORE_REQUIREMENTS: Append Read the Docs core requirements to environment.yml file

Makes Read the Docs install all the requirements at once in the conda create step. This helps users pin dependencies on conda and improves build time.

DONT_OVERWRITE_SPHINX_CONTEXT: Do not overwrite context vars in conf.py with Read the Docs context

DONT_CREATE_INDEX: Do not create index.md or README.rst if the project does not have one.

When Read the Docs detects that your project doesn’t have an index.md or README.rst, it auto-generates one for you with instructions on how to proceed.

If you are using a static HTML page as your index or generating the index from code, this behavior could be a problem. This feature flag lets you disable it.

Localization of Documentation

Note

This feature only applies to Sphinx documentation. We are working to bring it to our other documentation backends.

Read the Docs supports hosting your docs in multiple languages. There are two different things that we support:

  • A single project written in another language

  • A project with translations into multiple languages

Single project in another language

It is easy to set the Language of your project. On the project Admin page (or Import page), simply select your desired Language from the dropdown. This tells Read the Docs that your project is in that language, and the language will be reflected in your project’s URL.

For example, a project that is in Spanish will have a default URL of /es/latest/ instead of /en/latest/.

Note

You must commit the .po files for Read the Docs to translate your documentation.

Project with multiple translations

This situation is a bit more complicated. To support this, you will have one parent project and a number of projects marked as translations of that parent. Let’s use phpmyadmin as an example.

The main phpmyadmin project is the parent for all translations. Then you must create a project for each translation, for example phpmyadmin-spanish. You will set the Language for phpmyadmin-spanish to Spanish. In the parent project’s Translations page, you then mark phpmyadmin-spanish as a translation of your project.

This results in serving:

  • phpmyadmin at http://phpmyadmin.readthedocs.io/en/latest/

  • phpmyadmin-spanish at http://phpmyadmin.readthedocs.io/es/latest/

It also gets included in the Read the Docs flyout:

_images/translation_bar.png

Note

The default language of a custom domain is determined by the language of the parent project that the domain was configured on. See Custom Domains for more information.

Note

You can include multiple translations in the same repository, with the same conf.py and .rst files, but each project must specify the language to build for those docs.

Note

You can read Manage Translations for Sphinx projects to understand the whole process for documentation with multiple languages in the same repository and how to keep the translations up to date.

User-defined Redirects

You can set up redirects for a project in your project dashboard’s Redirects page.

Quick summary

  • Go to the Admin tab of your project.

  • From the left navigation menu, select Redirects.

  • In the form box “Redirect Type” select the type of redirect you want. See below for detail.

  • Depending on the redirect type you select, enter From URL and/or To URL as needed.

  • When finished, click the Add button.

Your redirects will be effective immediately.

Features

  • By default, redirects are followed only if the requested page doesn’t exist (404 File Not Found error). If you need to apply a redirect to files that exist, mark the Force redirect option. This option is only available on some plan levels. Please ask support if you need it for some reason.

  • Page redirects and Exact redirects can redirect to URLs outside Read the Docs, just include the protocol in To URL, e.g. https://example.com.

Redirect types

We offer a few different types of redirects based on what you want to do.

Prefix redirects

The most common and requested use of redirects is when migrating to Read the Docs from an old host. Your docs were served at a previous URL, and that URL breaks once you move them. Read the Docs includes a language and version slug in your documentation URLs, but not all documentation is hosted this way.

Say you previously had your docs hosted at https://docs.example.com/dev/, and you move docs.example.com to point at Read the Docs. Users will have bookmarks saved to pages like https://docs.example.com/dev/install.html.

You can now set a Prefix Redirect that will redirect all 404’s with a prefix to a new place. The example configuration would be:

Type: Prefix Redirect
From URL: /dev/

Your users’ requests would now redirect in the following manner:

docs.example.com/dev/install.html ->
docs.example.com/en/latest/install.html

Where en and latest are the default language and version values for your project.

Note

If you were hosting your docs without a prefix, you can create a / Prefix Redirect, which will prepend /$lang/$version/ to all incoming URLs.

Page redirects

A more specific case is when you move a page around in your docs. The old page will start 404’ing, and your users will be confused. Page Redirects let you redirect a specific page.

Say you move the example.html page into a subdirectory of examples: examples/intro.html. You would set the following configuration:

Type: Page Redirect
From URL: /example.html
To URL: /examples/intro.html

Page Redirects apply to all versions of your documentation. Because of this, the / at the start of the From URL doesn’t include the /$lang/$version prefix (e.g. /en/latest), but just the version-specific part of the URL. If you want to set redirects only for some languages or some versions, you should use Exact redirects with the fully-specified path.

Exact redirects

Exact Redirects are for redirecting a single URL, taking into account the full URL (including language and version).

You can also redirect a subset of URLs by including the $rest keyword at the end of the From URL.

Exact redirects examples
Redirecting a single URL

Say you’re moving docs.example.com to Read the Docs and want to redirect traffic from an old page at https://docs.example.com/dev/install.html to a new URL of https://docs.example.com/en/latest/installing-your-site.html.

The example configuration would be:

Type: Exact Redirect
From URL: /dev/install.html
To URL:   /en/latest/installing-your-site.html

Your users’ requests would now redirect in the following manner:

docs.example.com/dev/install.html ->
docs.example.com/en/latest/installing-your-site.html

Note that you should substitute your desired language for en and version for latest to achieve the desired redirect.

Redirecting a whole sub-path to a different one

Exact Redirects can also be useful for redirecting a whole sub-path to a different one by using the special $rest keyword in the “From URL”. Let’s say you want to redirect readers of version 2.0 of your documentation under /en/2.0/, because it’s deprecated, to the newest 3.0 version at /en/3.0/.

This example would be:

Type: Exact Redirect
From URL: /en/2.0/$rest
To URL: /en/3.0/

The readers of your documentation will now be redirected as:

docs.example.com/en/2.0/dev/install.html ->
docs.example.com/en/3.0/dev/install.html

Similarly, if you maintain several branches of your documentation (e.g. 3.0 and latest) and decide to move pages in latest but not the older branches, you can use Exact Redirects to do so.

Migrating your documentation to another domain

You can use an exact redirect to migrate your documentation to another domain, for example:

Type: Exact Redirect
From URL: /$rest
To URL: https://newdocs.example.com/
Force Redirect: True

Then all pages will redirect to the new domain, for example https://docs.example.com/en/latest/install.html will redirect to https://newdocs.example.com/en/latest/install.html.

Sphinx redirects

We also support redirects for changing the type of documentation Sphinx is building. If you switch between HTMLDir and HTML, your URLs will change. A page at /en/latest/install.html will be served at /en/latest/install/, or vice versa. The built-in redirects for this will handle redirecting users appropriately.

Automatic Redirects

Read the Docs supports redirecting certain URLs automatically. This is an overview of the set of redirects that are fully supported and will work into the future.

Redirecting to a Page

You can link to a specific page and have it redirect to your default version. This is done with the /page/ URL prefix. You can reach this page by going to https://docs.readthedocs.io/page/automatic-redirects.html.

This allows you to create links that are always up to date.

Another way to handle this is the latest version. You can set your latest version to a specific version and just always link to latest. You can read more about this in our versions page.

Root URL

A link to the root of your documentation will redirect to the default version, as set in your project settings. For example:

docs.readthedocs.io -> docs.readthedocs.io/en/latest/
www.pip-installer.org -> www.pip-installer.org/en/latest/

This only works for the root URL, not for internal pages. It’s designed to redirect people from http://pip.readthedocs.io/ to the default version of your documentation, since serving up a 404 here would be a pretty terrible user experience. (If your “develop” branch was designated as your default version, then it would redirect to http://pip.readthedocs.io/en/develop.) But, it’s not a universal redirecting solution. So, for example, a link to an internal page like http://pip.readthedocs.io/usage.html doesn’t redirect to http://pip.readthedocs.io/en/latest/usage.html.

The reasoning behind this is that RTD organizes the URLs for docs so that multiple translations and multiple versions of your docs can be organized logically and consistently for all projects that RTD hosts. For the way that RTD views docs, http://pip.readthedocs.io/en/latest/ is the root directory for your default documentation in English, not http://pip.readthedocs.io/. Just like http://pip.readthedocs.io/en/develop/ is the root for your development documentation in English.

Among all the multiple versions of docs, you can choose which is the “default” version for RTD to display, which usually corresponds to the git branch of the most recent official release from your project.

rtfd.io

Links to rtfd.io are treated the same way as above. They redirect the root URL to the default version of the project. They are intended to be easy and short for people to type.

You can reach these docs at https://docs.rtfd.io.

Supported Top-Level Redirects

Note

These “implicit” redirects are supported for legacy reasons. We will not be adding support for any more magic redirects. If you want additional redirects, they should live at a prefix like Redirecting to a Page

The main challenge of URL routing in Read the Docs is handling redirects correctly. Both in the interest of redirecting older URLs that are now obsolete, and in the interest of handling “logical-looking” URLs (leaving out the lang_slug or version_slug shouldn’t result in a 404), the following redirects are supported:

/          -> /en/latest/
/en/       -> /en/latest/
/latest/   -> /en/latest/

The language redirect will work for any of the defined LANGUAGE_CODES we support. The version redirect will work for supported versions.

Automation Rules

Automation rules allow project maintainers to automate actions on new branches and tags on repositories.

Creating an automation rule

  1. Go to your project dashboard

  2. Click Admin > Automation Rules

  3. Click on Add Rule

  4. Fill in the fields

  5. Click Save

How do they work?

When a new tag or branch is pushed to your repository, Read the Docs creates a new version.

All rules are evaluated for this version, in the order they are listed. If the version matches the version type and the pattern in the rule, the specified action is performed on that version.

Note

Versions can match multiple rules, and all matching actions will be performed on the version.

Predefined matches

Automation rules support several predefined version matches:

  • Any version: All new versions will match the rule.

  • SemVer versions: All new versions that follow semantic versioning will match the rule.

User defined matches

If none of the above predefined matches meet your use case, you can use a Custom match.

The custom match should be a valid Python regular expression. Each new version will be tested against this regular expression.
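As an illustration (a sketch; the exact matching semantics on the Read the Docs side are assumed here to behave like a standard regular expression search), this is how a custom match would filter incoming version names:

```python
import re

# Hypothetical custom match: versions starting with "v" or "V"
pattern = re.compile(r"^[vV]")

new_versions = ["v1.0", "V2.1", "1.0", "latest"]
matching = [v for v in new_versions if pattern.search(v)]
print(matching)  # ['v1.0', 'V2.1']
```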

Actions

When a rule matches a new version, the specified action is performed on that version. Currently, the following actions are available:

  • Activate version: Activates and builds the version.

  • Hide version: Hides the version. If the version is not active, activates it and builds the version. See Version States.

  • Make version public: Sets the version’s privacy level to public. See Privacy levels.

  • Make version private: Sets the version’s privacy level to private. See Privacy levels.

  • Set version as default: Sets the version as default, i.e. the version of your project that / redirects to. See more in Root URL. It also activates and builds the version.

  • Delete version: When a branch or tag is deleted from your repository, Read the Docs deletes the corresponding version only if it isn’t active. This action allows you to also delete active versions when their branch or tag is removed.

    Note

    The default version isn’t deleted even if it matches a rule. You can use the Set version as default action to change the default version before deleting the current one.

Note

If your versions follow PEP 440, Read the Docs activates and builds the version if it’s greater than the current stable version. The stable version is also automatically updated at the same time. See more in Versioned Documentation.

Order

The order your rules are listed in Admin > Automation Rules matters. Each action is performed in that order, so rules listed first have higher priority.

You can change the order using the up and down arrow buttons.

Note

New rules are added at the end (lower priority).

Examples

Activate all new tags
  • Match: Any version

  • Version type: Tag

  • Action: Activate version

Activate only new branches that belong to the 1.x release
  • Custom match: ^1\.\d+$

  • Version type: Branch

  • Action: Activate version

Delete an active version when a branch is deleted
  • Match: Any version

  • Version type: Branch

  • Action: Delete version

Set as default new tags that have the -stable or -release suffix
  • Custom match: -(stable|release)$

  • Version type: Tag

  • Action: Set version as default

Note

You can also create two rules: one to match -stable and another to match -release.

Activate all new tags and branches that start with v or V
  • Custom match: ^[vV]

  • Version type: Tag

  • Action: Activate version

  • Custom match: ^[vV]

  • Version type: Branch

  • Action: Activate version

Activate all new tags that don’t contain the -nightly suffix
  • Custom match: .*(?<!-nightly)$

  • Version type: Tag

  • Action: Activate version
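The negative lookbehind in the last example can be checked quickly with Python's re module, which the custom match syntax is based on (a sketch with illustrative tag names):

```python
import re

# Excludes anything ending in "-nightly" via a negative lookbehind
pattern = re.compile(r".*(?<!-nightly)$")

tags = ["1.0", "2.0-nightly", "2.0"]
matching = [t for t in tags if pattern.search(t)]
print(matching)  # ['1.0', '2.0']
```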

Canonical URLs

A canonical URL allows you to specify the preferred version of a web page to prevent duplicated content. Canonical URLs are mainly used by search engines to link users to the correct version and domain of your documentation.

If canonical URLs aren’t used, it’s easy for outdated documentation to become the top search result for various pages in your documentation. Canonical URLs are not a perfect solution to this problem, but people finding outdated documentation is a common issue, and this is one of the ways search engines suggest solving it.

How Read the Docs generates canonical URLs

The canonical URL takes into account:

  • The default version of your project (usually “latest” or “stable”).

  • The canonical custom domain if you have one, otherwise the default subdomain will be used.

For example, if you have a project named example-docs with a custom domain https://docs.example.com, then your documentation will be served at https://example-docs.readthedocs.io and https://docs.example.com. Without specifying a canonical URL, a search engine like Google will index both domains.

You’ll want to use https://docs.example.com as your canonical domain. This means that when Google indexes a page like https://example-docs.readthedocs.io/en/latest/, it will know that it should really point at https://docs.example.com/en/latest/, thus avoiding duplicating the content.

Note

If you want your custom domain to be set as the canonical one, you need to enable the Canonical: This domain is the primary one where the documentation is served from option in the Admin > Domains section of your project settings.

Implementation

The canonical URL is set in HTML with a link element. For example, this page has a canonical URL of:

<link rel="canonical" href="https://docs.readthedocs.io/en/stable/canonical-urls.html" />
Sphinx

If you are using Sphinx, Read the Docs will set the value of the html_baseurl setting (if it isn’t already set) to your canonical domain. If you already have html_baseurl set, you need to ensure that the value is correct.
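For illustration, a conf.py can set html_baseurl explicitly; docs.example.com is a placeholder domain here, and leaving the setting unset lets Read the Docs fill it in automatically as described above.

```python
# conf.py (Sphinx) -- sketch with a placeholder domain.
# If html_baseurl is left unset, Read the Docs sets it to your
# canonical domain at build time.
html_baseurl = "https://docs.example.com/en/stable/"
```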

MkDocs

For MkDocs this isn’t done automatically, but you can use the site_url setting to set a similar value.

Warning

If you change your default version or canonical domain, you’ll need to re-build all your versions in order to update their canonical URL to the new one.

Public API

This section of the documentation details the public API usable to get details of projects, builds, versions and other details from Read the Docs.

API v3

The Read the Docs API uses REST. All API responses, including errors, return JSON, and HTTP status codes designate success or failure.

Authentication and authorization

Requests to the Read the Docs public API can return both public and private information. All endpoints require authentication.

Token

The Authorization HTTP header can be set to Token <your-access-token> to authenticate as a user and get the same permissions as the user itself.
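As a minimal sketch using only the standard library, the header can be attached to a request like this; <your-access-token> is a placeholder for the token from your profile settings, and authed_request is a hypothetical helper, not part of any Read the Docs client.

```python
from urllib.request import Request

def authed_request(url, token):
    """Return a urllib Request carrying the Token auth header."""
    return Request(url, headers={"Authorization": f"Token {token}"})

req = authed_request("https://readthedocs.org/api/v3/projects/", "abc123")
# urllib.request.urlopen(req) would perform the actual network call;
# here we only inspect what would go on the wire.
assert req.get_header("Authorization") == "Token abc123"
```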

Note

On Read the Docs Community, you will find your access token under your profile settings.

Session

Warning

Authentication via session is not enabled yet.

Session authentication is allowed only on very specific endpoints, to allow hitting the API while reading documentation.

When a user tries to authenticate via session, a CSRF check is performed.

Resources

This section shows all the resources that are currently available in APIv3. There are some URL attributes that apply to all of these resources:

?fields=

Specify which fields are going to be returned in the response.

?omit=

Specify which fields are going to be omitted from the response.

?expand=

Some resources allow you to expand/add extra fields in their responses (see Project details for an example).
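These attributes are ordinary query string parameters, so they can be composed with the standard library; build_url below is a hypothetical helper for illustration only.

```python
from urllib.parse import urlencode

def build_url(base, resource, **params):
    """Compose an API v3 URL with optional ?fields=/?omit=/?expand= params."""
    url = f"{base}/{resource}/"
    if params:
        url += "?" + urlencode(params)
    return url

url = build_url(
    "https://readthedocs.org/api/v3",
    "projects/pip",
    expand="active_versions",
    fields="slug,urls",
)
# urlencode percent-encodes the comma separating multiple field names.
assert url == ("https://readthedocs.org/api/v3/projects/pip/"
               "?expand=active_versions&fields=slug%2Curls")
```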

Tip

You can browse the full API by accessing its root URL: https://readthedocs.org/api/v3/

Note

If you are using Read the Docs for Business, note that you will need to replace https://readthedocs.org/ with https://readthedocs.com/ in all the URLs used in the following examples.

Projects
Projects list
GET /api/v3/projects/

Retrieve a list of all the projects for the currently logged-in user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/?limit=10&offset=10",
    "previous": null,
    "results": [{
        "id": 12345,
        "name": "Pip",
        "slug": "pip",
        "created": "2010-10-23T18:12:31+00:00",
        "modified": "2018-12-11T07:21:11+00:00",
        "language": {
            "code": "en",
            "name": "English"
        },
        "programming_language": {
            "code": "py",
            "name": "Python"
        },
        "repository": {
            "url": "https://github.com/pypa/pip",
            "type": "git"
        },
        "default_version": "stable",
        "default_branch": "master",
        "subproject_of": null,
        "translation_of": null,
        "urls": {
            "documentation": "http://pip.pypa.io/en/stable/",
            "home": "https://pip.pypa.io/"
        },
        "tags": [
            "distutils",
            "easy_install",
            "egg",
            "setuptools",
            "virtualenv"
        ],
        "users": [
            {
                "username": "dstufft"
            }
        ],
        "active_versions": {
            "stable": "{VERSION}",
            "latest": "{VERSION}",
            "19.0.2": "{VERSION}"
        },
        "_links": {
            "_self": "/api/v3/projects/pip/",
            "versions": "/api/v3/projects/pip/versions/",
            "builds": "/api/v3/projects/pip/builds/",
            "subprojects": "/api/v3/projects/pip/subprojects/",
            "superproject": "/api/v3/projects/pip/superproject/",
            "redirects": "/api/v3/projects/pip/redirects/",
            "translations": "/api/v3/projects/pip/translations/"
        }
    }]
}
Query Parameters
  • language (string) – language code as en, es, ru, etc.

  • programming_language (string) – programming language code as py, js, etc.

The results field in the response is an array of project data, which is the same as returned by GET /api/v3/projects/(string:project_slug)/.
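Listings like this one are paginated via the next link, as shown in the example response above. A small sketch of a pager follows; the fetch callable is injected so the logic can be shown without real HTTP calls (in practice it would perform an authenticated GET and decode the JSON).

```python
def iter_results(fetch, first_path):
    """Yield items from a paginated listing, following `next` links.

    `fetch` is any callable mapping an API path to a decoded JSON dict.
    """
    path = first_path
    while path is not None:
        page = fetch(path)
        yield from page["results"]
        path = page["next"]

# Fake fetcher standing in for real authenticated HTTP requests:
pages = {
    "/api/v3/projects/": {"next": "/api/v3/projects/?offset=10",
                          "results": [{"slug": "pip"}]},
    "/api/v3/projects/?offset=10": {"next": None,
                                    "results": [{"slug": "sphinx"}]},
}
slugs = [p["slug"] for p in iter_results(pages.get, "/api/v3/projects/")]
assert slugs == ["pip", "sphinx"]
```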

Note

Read the Docs for Business also accepts:

Query Parameters
  • expand (string) – with organization and teams.

Project details
GET /api/v3/projects/(string: project_slug)/

Retrieve details of a single project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/

Example response:

{
    "id": 12345,
    "name": "Pip",
    "slug": "pip",
    "created": "2010-10-23T18:12:31+00:00",
    "modified": "2018-12-11T07:21:11+00:00",
    "language": {
        "code": "en",
        "name": "English"
    },
    "programming_language": {
        "code": "py",
        "name": "Python"
    },
    "repository": {
        "url": "https://github.com/pypa/pip",
        "type": "git"
    },
    "default_version": "stable",
    "default_branch": "master",
    "subproject_of": null,
    "translation_of": null,
    "urls": {
        "documentation": "http://pip.pypa.io/en/stable/",
        "home": "https://pip.pypa.io/"
    },
    "tags": [
        "distutils",
        "easy_install",
        "egg",
        "setuptools",
        "virtualenv"
    ],
    "users": [
        {
            "username": "dstufft"
        }
    ],
    "active_versions": {
        "stable": "{VERSION}",
        "latest": "{VERSION}",
        "19.0.2": "{VERSION}"
    },
    "_links": {
        "_self": "/api/v3/projects/pip/",
        "versions": "/api/v3/projects/pip/versions/",
        "builds": "/api/v3/projects/pip/builds/",
        "subprojects": "/api/v3/projects/pip/subprojects/",
        "superproject": "/api/v3/projects/pip/superproject/",
        "redirects": "/api/v3/projects/pip/redirects/",
        "translations": "/api/v3/projects/pip/translations/"
    }
}
Query Parameters
  • expand (string) – allows adding/expanding some extra fields in the response. Allowed values are active_versions, active_versions.last_build and active_versions.last_build.config. Multiple fields can be passed separated by commas.

Note

Read the Docs for Business also accepts:

Query Parameters
  • expand (string) – with organization and teams.

Project create
POST /api/v3/projects/

Import a project under the authenticated user.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "name": "Test Project",
    "repository": {
        "url": "https://github.com/readthedocs/template",
        "type": "git"
    },
    "homepage": "http://template.readthedocs.io/",
    "programming_language": "py",
    "language": "es"
}

Example response:

See Project details

Note

Read the Docs for Business also accepts:

Request JSON Object
  • organization (string) – required; slug of the organization under which the project will be imported.

  • teams (string) – optional; slugs of the teams the project will belong to.

Project update
PATCH /api/v3/projects/(string: project_slug)/

Update an existing project.

Example request:

$ curl \
  -X PATCH \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "name": "New name for the project",
    "repository": {
        "url": "https://github.com/readthedocs/readthedocs.org",
        "type": "git"
    },
    "language": "ja",
    "programming_language": "py",
    "homepage": "https://readthedocs.org/",
    "default_version": "v0.27.0",
    "default_branch": "develop",
    "analytics_code": "UA000000",
    "analytics_disabled": false,
    "single_version": false,
    "external_builds_enabled": true
}
Versions

Versions are different versions of the same project documentation.

The versions for a given project can be viewed in a project’s version page. For example, here is the Pip project’s version page. See Versioned Documentation for more information.

Versions listing
GET /api/v3/projects/(string: project_slug)/versions/

Retrieve a list of all versions for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/versions/?limit=10&offset=10",
    "previous": null,
    "results": ["VERSION"]
}
Query Parameters
  • active (boolean) – return only active versions

  • built (boolean) – return only built versions

  • privacy_level (string) – return versions with specific privacy level (public or private)

  • slug (string) – return versions with matching slug

  • type (string) – return versions with specific type (branch or tag)

  • verbose_name (string) – return versions with matching version name

Version detail
GET /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/

Retrieve details of a single version.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/stable/

Example response:

{
    "id": 71652437,
    "slug": "stable",
    "verbose_name": "stable",
    "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914",
    "ref": "19.0.2",
    "built": true,
    "active": true,
    "hidden": false,
    "type": "tag",
    "last_build": "{BUILD}",
    "downloads": {
        "pdf": "https://pip.readthedocs.io/_/downloads/pdf/pip/stable/",
        "htmlzip": "https://pip.readthedocs.io/_/downloads/htmlzip/pip/stable/",
        "epub": "https://pip.readthedocs.io/_/downloads/epub/pip/stable/"
    },
    "urls": {
        "dashboard": {
            "edit": "https://readthedocs.org/dashboard/pip/version/stable/edit/"
        },
        "documentation": "https://pip.pypa.io/en/stable/",
        "vcs": "https://github.com/pypa/pip/tree/19.0.2"
    },
    "_links": {
        "_self": "/api/v3/projects/pip/versions/stable/",
        "builds": "/api/v3/projects/pip/versions/stable/builds/",
        "project": "/api/v3/projects/pip/"
    }
}
Response JSON Object
  • ref (string) – the version slug that the stable version points to; null when this is not the stable version.

  • built (boolean) – the version has at least one successful build.

Query Parameters
  • expand (string) – allows adding/expanding some extra fields in the response. Allowed values are last_build and last_build.config. Multiple fields can be passed separated by commas.

Version update
PATCH /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/

Update a version.

Example request:

$ curl \
  -X PATCH \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/0.23/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "active": true,
    "hidden": false
}
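The same request can be assembled with Python’s standard library; version_update below is a hypothetical helper that builds the PATCH request shown in the curl example, without sending it.

```python
import json
from urllib.request import Request

def version_update(project, version, token, **fields):
    """Build the PATCH request that updates a version's settings."""
    url = f"https://readthedocs.org/api/v3/projects/{project}/versions/{version}/"
    return Request(
        url,
        data=json.dumps(fields).encode(),
        method="PATCH",
        headers={"Authorization": f"Token {token}",
                 "Content-Type": "application/json"},
    )

req = version_update("pip", "0.23", "abc123", active=True, hidden=False)
# urllib.request.urlopen(req) would send it; here we inspect the payload.
assert req.get_method() == "PATCH"
assert json.loads(req.data) == {"active": True, "hidden": False}
```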
Builds

Builds are created by Read the Docs whenever a project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.

Builds can be viewed in the build page for a project. For example, here is Pip’s build page. See Build process for more information.

Build details
GET /api/v3/projects/(str: project_slug)/builds/(int: build_id)/

Retrieve details of a single build for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/8592686/?expand=config

Example response:

{
    "id": 8592686,
    "version": "latest",
    "project": "pip",
    "created": "2018-06-19T15:15:59+00:00",
    "finished": "2018-06-19T15:16:58+00:00",
    "duration": 59,
    "state": {
        "code": "finished",
        "name": "Finished"
    },
    "success": true,
    "error": null,
    "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30",
    "config": {
        "version": "1",
        "formats": [
            "htmlzip",
            "epub",
            "pdf"
        ],
        "python": {
            "version": 3,
            "install": [
                {
                    "requirements": ".../stable/tools/docs-requirements.txt"
                }
            ],
            "use_system_site_packages": false
        },
        "conda": null,
        "build": {
            "image": "readthedocs/build:latest"
        },
        "doctype": "sphinx_htmldir",
        "sphinx": {
            "builder": "sphinx_htmldir",
            "configuration": ".../stable/docs/html/conf.py",
            "fail_on_warning": false
        },
        "mkdocs": {
            "configuration": null,
            "fail_on_warning": false
        },
        "submodules": {
            "include": "all",
            "exclude": [],
            "recursive": true
        }
    },
    "_links": {
        "_self": "/api/v3/projects/pip/builds/8592686/",
        "project": "/api/v3/projects/pip/",
        "version": "/api/v3/projects/pip/versions/latest/"
    }
}
Response JSON Object
  • created (string) – The ISO-8601 datetime when the build was created.

  • finished (string) – The ISO-8601 datetime when the build has finished.

  • duration (integer) – The length of the build in seconds.

  • state (string) – The state of the build (one of triggered, building, installing, cloning, finished or cancelled)

  • error (string) – An error message if the build was unsuccessful

Query Parameters
  • expand (string) – allows adding/expanding some extra fields in the response. The only allowed value is config.

Builds listing
GET /api/v3/projects/(str: project_slug)/builds/

Retrieve a list of all the builds for this project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/

Example response:

{
    "count": 15,
    "next": "/api/v3/projects/pip/builds?limit=10&offset=10",
    "previous": null,
    "results": ["BUILD"]
}
Query Parameters
  • commit (string) – commit hash to filter the builds returned by commit

  • running (boolean) – filter the builds that are currently building/running

Build triggering
POST /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/builds/

Trigger a new build for the version_slug version of this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/latest/builds/

Example response:

{
    "build": "{BUILD}",
    "project": "{PROJECT}",
    "version": "{VERSION}"
}
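After triggering a build, its progress can be followed with the Build details endpoint, using the state codes listed in that section (triggered, building, installing, cloning, finished, cancelled). A sketch of a poller follows; fetch_build is injected so the loop can be shown without real HTTP calls, and in real use each iteration would GET the build and sleep between polls.

```python
def wait_for_build(fetch_build, terminal=("finished", "cancelled")):
    """Poll a build until it reaches a terminal state.

    `fetch_build` returns a build-details dict like the one in
    Build details; in real use it would perform an authenticated GET
    (and the loop would time.sleep between calls).
    """
    while True:
        build = fetch_build()
        if build["state"]["code"] in terminal:
            return build

# Fake sequence of responses standing in for repeated HTTP polls:
states = iter(["triggered", "cloning", "building", "finished"])
fake = lambda: {"state": {"code": next(states)}, "success": True}
final = wait_for_build(fake)
assert final["state"]["code"] == "finished"
```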
Subprojects

Projects can be configured in a nested manner, by configuring a project as a subproject of another project. This allows for documentation projects to share a search index and a namespace or custom domain, but still be maintained independently. See Subprojects for more information.

Subproject details
GET /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/

Retrieve details of a subproject relationship.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/

Example response:

{
    "alias": "subproject-alias",
    "child": ["PROJECT"],
    "_links": {
        "parent": "/api/v3/projects/pip/"
    }
}
Subprojects listing
GET /api/v3/projects/(str: project_slug)/subprojects/

Retrieve a list of all subprojects for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/subprojects/?limit=10&offset=10",
    "previous": null,
    "results": ["SUBPROJECT RELATIONSHIP"]
}
Subproject create
POST /api/v3/projects/(str: project_slug)/subprojects/

Create a subproject relationship between two projects.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "child": "subproject-child-slug",
    "alias": "subproject-alias"
}

Note

child must be a project that you have access to. If you are using Read the Docs for Business, the project must also be under the same organization as the parent project.

Example response:

See Subproject details

Request JSON Object
  • child (string) – slug of the child project in the relationship.

  • alias (string) – optional slug alias to be used in the URL (e.g. /projects/<alias>/en/latest/). If not provided, the child project’s slug is used as the alias.

Subproject delete
DELETE /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/

Delete a subproject relationship.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/
Translations

Translations are the same version of a Project in a different language. See Localization of Documentation for more information.

Translations listing
GET /api/v3/projects/(str: project_slug)/translations/

Retrieve a list of all translations for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/translations/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/translations/?limit=10&offset=10",
    "previous": null,
    "results": [{
        "id": 12345,
        "name": "Pip",
        "slug": "pip",
        "created": "2010-10-23T18:12:31+00:00",
        "modified": "2018-12-11T07:21:11+00:00",
        "language": {
            "code": "en",
            "name": "English"
        },
        "programming_language": {
            "code": "py",
            "name": "Python"
        },
        "repository": {
            "url": "https://github.com/pypa/pip",
            "type": "git"
        },
        "default_version": "stable",
        "default_branch": "master",
        "subproject_of": null,
        "translation_of": null,
        "urls": {
            "documentation": "http://pip.pypa.io/en/stable/",
            "home": "https://pip.pypa.io/"
        },
        "tags": [
            "distutils",
            "easy_install",
            "egg",
            "setuptools",
            "virtualenv"
        ],
        "users": [
            {
                "username": "dstufft"
            }
        ],
        "active_versions": {
            "stable": "{VERSION}",
            "latest": "{VERSION}",
            "19.0.2": "{VERSION}"
        },
        "_links": {
            "_self": "/api/v3/projects/pip/",
            "versions": "/api/v3/projects/pip/versions/",
            "builds": "/api/v3/projects/pip/builds/",
            "subprojects": "/api/v3/projects/pip/subprojects/",
            "superproject": "/api/v3/projects/pip/superproject/",
            "redirects": "/api/v3/projects/pip/redirects/",
            "translations": "/api/v3/projects/pip/translations/"
        }
    }]
}

The results field in the response is an array of project data, which is the same as returned by GET /api/v3/projects/(string:project_slug)/.

Redirects

Redirects allow the author to redirect an old URL of the documentation to a new one. This is useful when pages are moved around in the structure of the documentation set. See User-defined Redirects for more information.

Redirect details
GET /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Retrieve details of a single redirect for a project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/

Example response

{
    "pk": 1,
    "created": "2019-04-29T10:00:00Z",
    "modified": "2019-04-29T12:00:00Z",
    "project": "pip",
    "from_url": "/docs/",
    "to_url": "/documentation/",
    "type": "page",
    "_links": {
        "_self": "/api/v3/projects/pip/redirects/1/",
        "project": "/api/v3/projects/pip/"
    }
}
Redirects listing
GET /api/v3/projects/(str: project_slug)/redirects/

Retrieve a list of all the redirects for this project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/

Example response

{
    "count": 25,
    "next": "/api/v3/projects/pip/redirects/?limit=10&offset=10",
    "previous": null,
    "results": ["REDIRECT"]
}
Redirect create
POST /api/v3/projects/(str: project_slug)/redirects/

Create a redirect for this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "from_url": "/docs/",
    "to_url": "/documentation/",
    "type": "page"
}

Note

type can be one of prefix, page, exact, sphinx_html or sphinx_htmldir.

Depending on the type of the redirect, some fields may not be needed:

  • prefix type does not require to_url.

  • page and exact types require from_url and to_url.

  • sphinx_html and sphinx_htmldir types do not require from_url and to_url.

Example response:

See Redirect details
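The per-type field requirements listed in the note above can be sketched as a small client-side validator; check_redirect is a hypothetical helper for illustration, not part of the API.

```python
# Fields each redirect type requires, per the note above.
REQUIRED = {
    "prefix": {"from_url"},
    "page": {"from_url", "to_url"},
    "exact": {"from_url", "to_url"},
    "sphinx_html": set(),
    "sphinx_htmldir": set(),
}

def check_redirect(payload):
    """Verify a redirect body carries the fields its type requires."""
    missing = REQUIRED[payload["type"]] - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

check_redirect({"type": "page", "from_url": "/docs/", "to_url": "/documentation/"})
check_redirect({"type": "prefix", "from_url": "/docs/"})
```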

Redirect update
PUT /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Update a redirect for this project.

Example request:

$ curl \
  -X PUT \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "from_url": "/docs/",
    "to_url": "/documentation.html",
    "type": "page"
}

Example response:

See Redirect details

Redirect delete
DELETE /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Delete a redirect for this project.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/
Environment Variables

Environment variables are variables that you can define for your project. They are used in the build process when building your documentation, and are useful, for example, to safely define secrets that your documentation needs in order to build. Environment variables can also be made public, allowing them to be used in pull request builds. See Environment Variables.
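During a build, a defined variable is exposed as an ordinary process environment variable, so build tooling can read it with os.environ. MYVAR is a hypothetical variable name here, matching the create example below; the fallback makes local builds work without it.

```python
import os

# MYVAR is a hypothetical project environment variable; during a
# Read the Docs build it appears in the process environment, so e.g.
# a conf.py or build script can read it like this:
api_secret = os.environ.get("MYVAR", "fallback-for-local-builds")
```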

Environment Variable details
GET /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/

Retrieve details of a single environment variable for a project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/

Example response

{
    "_links": {
        "_self": "https://readthedocs.org/api/v3/projects/project/environmentvariables/1/",
        "project": "https://readthedocs.org/api/v3/projects/project/"
    },
"created": "2019-04-29T10:00:00Z",
"modified": "2019-04-29T12:00:00Z",
"pk": 1,
"project": "project",
"public": false,
"name": "ENVVAR"
}
Environment Variables listing
GET /api/v3/projects/(str: project_slug)/environmentvariables/

Retrieve a list of all the environment variables for this project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/

Example response

{
    "count": 15,
    "next": "/api/v3/projects/pip/environmentvariables/?limit=10&offset=10",
    "previous": null,
    "results": ["ENVIRONMENTVARIABLE"]
}
Environment Variable create
POST /api/v3/projects/(str: project_slug)/environmentvariables/

Create an environment variable for this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json is similar to:

{
    "name": "MYVAR",
    "value": "My secret value"
}

Example response:

See Environment Variable details

Status Codes
  • 201 Created – Environment variable created successfully

Environment Variable delete
DELETE /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/

Delete an environment variable for this project.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/
Organizations

Note

The /api/v3/organizations/ endpoint is currently only available in Read the Docs for Business. We plan to have organizations on Read the Docs Community in the near future and will add support for this endpoint at the same time.

Organizations list
GET /api/v3/organizations/

Retrieve a list of all the organizations for the currently logged-in user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [
        {
            "_links": {
                "_self": "https://readthedocs.com/api/v3/organizations/pypa/",
                "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/"
            },
            "created": "2019-02-22T21:54:52.768630Z",
            "description": "",
            "disabled": false,
            "email": "pypa@psf.org",
            "modified": "2020-07-02T12:35:32.418423Z",
            "name": "Python Package Authority",
            "owners": [
                {
                    "username": "dstufft"
                }
            ],
            "slug": "pypa",
            "url": "https://github.com/pypa/"
        }
    ]
}
Organization details
GET /api/v3/organizations/(string: organization_slug)/

Retrieve details of a single organization.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/

Example response:

{
    "_links": {
        "_self": "https://readthedocs.com/api/v3/organizations/pypa/",
        "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/"
    },
    "created": "2019-02-22T21:54:52.768630Z",
    "description": "",
    "disabled": false,
    "email": "pypa@psf.com",
    "modified": "2020-07-02T12:35:32.418423Z",
    "name": "Python Package Authority",
    "owners": [
        {
            "username": "dstufft"
        }
    ],
    "slug": "pypa",
    "url": "https://github.com/pypa/"
}
Organization projects list
GET /api/v3/organizations/(string: organization_slug)/projects/

Retrieve a list of projects under an organization.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/projects/

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [
        {
            "_links": {
                "_self": "https://readthedocs.com/api/v3/projects/pypa-pip/",
                "builds": "https://readthedocs.com/api/v3/projects/pypa-pip/builds/",
                "environmentvariables": "https://readthedocs.com/api/v3/projects/pypa-pip/environmentvariables/",
                "redirects": "https://readthedocs.com/api/v3/projects/pypa-pip/redirects/",
                "subprojects": "https://readthedocs.com/api/v3/projects/pypa-pip/subprojects/",
                "superproject": "https://readthedocs.com/api/v3/projects/pypa-pip/superproject/",
                "translations": "https://readthedocs.com/api/v3/projects/pypa-pip/translations/",
                "versions": "https://readthedocs.com/api/v3/projects/pypa-pip/versions/"
            },
            "created": "2019-02-22T21:59:13.333614Z",
            "default_branch": "master",
            "default_version": "latest",
            "homepage": null,
            "id": 2797,
            "language": {
                "code": "en",
                "name": "English"
            },
            "modified": "2019-08-08T16:27:25.939531Z",
            "name": "pip",
            "programming_language": {
                "code": "py",
                "name": "Python"
            },
            "repository": {
                "type": "git",
                "url": "https://github.com/pypa/pip"
            },
            "slug": "pypa-pip",
            "subproject_of": null,
            "tags": [],
            "translation_of": null,
            "urls": {
                "builds": "https://readthedocs.com/projects/pypa-pip/builds/",
                "documentation": "https://pypa-pip.readthedocs-hosted.com/en/latest/",
                "home": "https://readthedocs.com/projects/pypa-pip/",
                "versions": "https://readthedocs.com/projects/pypa-pip/versions/"
            }
        }
    ]
}
Remote Organizations

Remote Organizations are the VCS organizations connected via GitHub, GitLab and Bitbucket.

Remote Organization listing
GET /api/v3/remote/organizations/

Retrieve a list of all Remote Organizations for the authenticated user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/organizations/

Example response:

{
    "count": 20,
    "next": "api/v3/remote/organizations/?limit=10&offset=10",
    "previous": null,
    "results": [
        {
            "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4",
            "created": "2019-04-29T10:00:00Z",
            "modified": "2019-04-29T12:00:00Z",
            "name": "Organization Name",
            "pk": 1,
            "slug": "organization",
            "url": "https://github.com/organization",
            "vcs_provider": "github"
        }
    ]
}

The results field in the response is an array of remote organization data.

Query Parameters
  • name (string) – return remote organizations whose name contains the given string

  • vcs_provider (string) – return remote organizations for a specific VCS provider (github, gitlab or bitbucket)

Remote Repositories

Remote Repositories are the importable repositories connected via GitHub, GitLab and Bitbucket.

Remote Repository listing
GET /api/v3/remote/repositories/

Retrieve a list of all Remote Repositories for the authenticated user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/repositories/?expand=projects,remote_organization

Example response:

{
    "count": 20,
    "next": "api/v3/remote/repositories/?expand=projects,remote_organization&limit=10&offset=10",
    "previous": null,
    "results": [
        {
            "remote_organization": {
                "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4",
                "created": "2019-04-29T10:00:00Z",
                "modified": "2019-04-29T12:00:00Z",
                "name": "Organization Name",
                "pk": 1,
                "slug": "organization",
                "url": "https://github.com/organization",
                "vcs_provider": "github"
            },
            "project": [{
                "id": 12345,
                "name": "project",
                "slug": "project",
                "created": "2010-10-23T18:12:31+00:00",
                "modified": "2018-12-11T07:21:11+00:00",
                "language": {
                    "code": "en",
                    "name": "English"
                },
                "programming_language": {
                    "code": "py",
                    "name": "Python"
                },
                "repository": {
                    "url": "https://github.com/organization/project",
                    "type": "git"
                },
                "default_version": "stable",
                "default_branch": "master",
                "subproject_of": null,
                "translation_of": null,
                "urls": {
                    "documentation": "http://project.readthedocs.io/en/stable/",
                    "home": "https://readthedocs.org/projects/project/"
                },
                "tags": [
                    "test"
                ],
                "users": [
                    {
                        "username": "dstufft"
                    }
                ],
                "_links": {
                    "_self": "/api/v3/projects/project/",
                    "versions": "/api/v3/projects/project/versions/",
                    "builds": "/api/v3/projects/project/builds/",
                    "subprojects": "/api/v3/projects/project/subprojects/",
                    "superproject": "/api/v3/projects/project/superproject/",
                    "redirects": "/api/v3/projects/project/redirects/",
                    "translations": "/api/v3/projects/project/translations/"
                }
            }],
            "avatar_url": "https://avatars3.githubusercontent.com/u/test-organization?v=4",
            "clone_url": "https://github.com/organization/project.git",
            "created": "2019-04-29T10:00:00Z",
            "description": "This is a test project.",
            "full_name": "organization/project",
            "html_url": "https://github.com/organization/project",
            "modified": "2019-04-29T12:00:00Z",
            "name": "project",
            "pk": 1,
            "ssh_url": "git@github.com:organization/project.git",
            "vcs": "git",
            "vcs_provider": "github",
            "default_branch": "master",
            "private": false,
            "admin": true
        }
    ]
}

The results key in the response is an array of remote repository objects.

Query Parameters
  • name (string) – return remote repositories containing the name

  • vcs_provider (string) – return remote repositories for specific vcs provider (github, gitlab or bitbucket)

  • organization (string) – return remote repositories for specific remote organization (using remote organization slug)

  • expand (string) – add or expand extra fields in the response. Allowed values are projects and remote_organization; multiple fields can be passed separated by commas.
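A common use of this endpoint is finding which connected repositories a user could still import. A hedged sketch, using a canned page built from the expanded example response above (field names project and admin taken from that example; the selection criteria are an assumption of the sketch, not API behavior):

```python
# One page from /api/v3/remote/repositories/?expand=projects,remote_organization,
# reduced to the fields this sketch inspects.
page = {
    "results": [
        {"full_name": "organization/project", "admin": True, "project": []},
        {"full_name": "organization/internal", "admin": False, "project": []},
        # Already linked to a Read the Docs project, so not a candidate:
        {"full_name": "organization/docs", "admin": True,
         "project": [{"slug": "docs"}]},
    ]
}

# Candidates: repositories the user administers that have no linked project yet.
importable = [
    repo["full_name"]
    for repo in page["results"]
    if repo["admin"] and not repo["project"]
]
print(importable)
```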

Request Headers
Embed
GET /api/v3/embed/

Retrieve HTML-formatted content from a documentation page or section. Read Embedding Content From Your Documentation to learn more about how to use this endpoint.

Example request:

curl https://readthedocs.org/api/v3/embed/?url=https://docs.readthedocs.io/en/latest/features.html%23read-the-docs-features

Example response:

{
    "url": "https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features",
    "fragment": "read-the-docs-features",
    "content": "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs ...",
    "external": false
}
Response JSON Object
  • url (string) – URL of the document.

  • fragment (string) – fragment part of the URL used to query the page.

  • content (string) – HTML content of the section.

  • external (boolean) – whether the page is hosted externally rather than on Read the Docs.

Query Parameters
  • url (string) – full URL of the document (with optional fragment) to fetch content from.

  • doctool (string) – optional documentation tool key name used to generate the target documentation (currently, only sphinx is accepted)

  • doctoolversion (string) – optional documentation tool version used to generate the target documentation (e.g. 4.2.0).

Note

Passing ?doctool= and ?doctoolversion= may improve the response, since the endpoint will know more about the exact structure of the HTML and can make better decisions.
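Because the target page URL carries its own #fragment, it must be percent-encoded when passed as the url query parameter (the curl example above encodes # as %23). A small sketch of assembling the request URL with the standard library:

```python
from urllib.parse import urlencode

# Target section: page URL plus fragment, as in the example above.
target = "https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features"

params = {
    "url": target,
    "doctool": "sphinx",        # optional hint; currently only "sphinx" is accepted
    "doctoolversion": "4.2.0",  # optional version of the tool that built the docs
}

# urlencode percent-encodes the embedded "#" so it is not lost as a
# fragment of the API request itself.
request_url = "https://readthedocs.org/api/v3/embed/?" + urlencode(params)
print(request_url)
```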

Additional APIs

API v2

The Read the Docs API uses REST. All API responses, including errors, return JSON, and HTTP response status codes designate success and failure.

Warning

API v2 is planned to be deprecated, though we have not yet set a time frame for deprecation. We will alert users of our plans when we do.

For now, API v2 is still used by some legacy application operations, but we highly recommend that Read the Docs users use API v3 instead.

Some improvements in API v3 are:

  • Token based authentication

  • Easier to use URLs which no longer use numerical ids

  • More common user actions are exposed through the API

  • Improved error reporting

See its full documentation at API v3.

Authentication and authorization

Requests to the Read the Docs public API are for public information only and do not require any authentication.

Resources
Projects

Projects are the main building block of Read the Docs. Projects are built when there are changes to the code and the resulting documentation is hosted and served by Read the Docs.

As an example, this documentation is part of the Docs project which has documentation at https://docs.readthedocs.io.

You can always view your Read the Docs projects in your project dashboard.

Project list
GET /api/v2/project/

Retrieve a list of all Read the Docs projects.

Example request:

curl https://readthedocs.org/api/v2/project/?slug=pip

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [PROJECTS]
}
Response JSON Object
  • next (string) – URI for next set of Projects.

  • previous (string) – URI for previous set of Projects.

  • count (integer) – Total number of Projects.

  • results (array) – Array of Project objects.

Query Parameters
  • slug (string) – Narrow the results by matching the exact project slug

Project details
GET /api/v2/project/(int: id)/

Retrieve details of a single project.

{
    "id": 6,
    "name": "Pip",
    "slug": "pip",
    "programming_language": "py",
    "default_version": "stable",
    "default_branch": "master",
    "repo_type": "git",
    "repo": "https://github.com/pypa/pip",
    "description": "Pip Installs Packages.",
    "language": "en",
    "documentation_type": "sphinx_htmldir",
    "canonical_url": "http://pip.pypa.io/en/stable/",
    "users": [USERS]
}
Response JSON Object
  • id (integer) – The ID of the project

  • name (string) – The name of the project.

  • slug (string) – The project slug (used in the URL).

  • programming_language (string) – The programming language of the project (eg. “py”, “js”)

  • default_version (string) – The default version of the project (eg. “latest”, “stable”, “v3”)

  • default_branch (string) – The default version control branch

  • repo_type (string) – Version control repository of the project

  • repo (string) – The repository URL for the project

  • description (string) – An RST description of the project

  • language (string) – The language code of this project

  • documentation_type (string) – The type of documentation built for the project (eg. “sphinx_htmldir”, “mkdocs”)

  • canonical_url (string) – The canonical URL of the default docs

  • users (array) – Array of User IDs who are maintainers of the project.

Status Codes
Project versions
GET /api/v2/project/(int: id)/active_versions/

Retrieve a list of active versions (eg. “latest”, “stable”, “v1.x”) for a single project.

{
    "versions": [VERSION, VERSION, ...]
}
Response JSON Object
  • versions (array) – Version objects for the given Project

See the Version detail call for the format of the Version object.

Versions

Versions are different versions of the same project documentation.

The versions for a given project can be viewed in a project’s version screen. For example, here is the Pip project’s version screen.

Version list
GET /api/v2/version/

Retrieve a list of all Versions for all projects.

{
    "count": 1000,
    "previous": null,
    "results": [VERSIONS],
    "next": "https://readthedocs.org/api/v2/version/?limit=10&offset=10"
}
Response JSON Object
  • next (string) – URI for next set of Versions.

  • previous (string) – URI for previous set of Versions.

  • count (integer) – Total number of Versions.

  • results (array) – Array of Version objects.

Query Parameters
  • project__slug (string) – Narrow to the versions for a specific Project

  • active (boolean) – Pass true or false to show only active or inactive versions. By default, the API returns all versions.
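A sketch of combining these two filters, assuming booleans are passed as the literal strings "true"/"false" in the query string, matching the description above:

```python
from urllib.parse import urlencode

# List only the active versions of the "pip" project.
params = {"project__slug": "pip", "active": "true"}
url = "https://readthedocs.org/api/v2/version/?" + urlencode(params)
print(url)
```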

Version detail
GET /api/v2/version/(int: id)/

Retrieve details of a single version.

{
    "id": 1437428,
    "slug": "stable",
    "verbose_name": "stable",
    "built": true,
    "active": true,
    "type": "tag",
    "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914",
    "privacy_level": "public",
    "downloads": {
        "pdf": "//readthedocs.org/projects/pip/downloads/pdf/stable/",
        "htmlzip": "//readthedocs.org/projects/pip/downloads/htmlzip/stable/",
        "epub": "//readthedocs.org/projects/pip/downloads/epub/stable/"
    },
    "project": {PROJECT},
}
Response JSON Object
  • id (integer) – The ID of the version

  • verbose_name (string) – The name of the version.

  • slug (string) – The version slug.

  • built (boolean) – Whether this version has been built

  • active (boolean) – Whether this version is still active

  • type (string) – The type of this version (typically “tag” or “branch”)

  • identifier (string) – A version control identifier for this version (eg. the commit hash of the tag)

  • downloads (object) – URLs to downloads of this version’s documentation

  • project (object) – Details of the Project for this version.

Status Codes
Builds

Builds are created by Read the Docs whenever a Project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.

Builds can be viewed in the build screen for a project. For example, here is Pip’s build screen.

Build list
GET /api/v2/build/

Retrieve details of builds, ordered by most recent first.

Example request:

curl https://readthedocs.org/api/v2/build/?project__slug=pip

Example response:

{
    "count": 100,
    "next": null,
    "previous": null,
    "results": [BUILDS]
}
Response JSON Object
  • next (string) – URI for next set of Builds.

  • previous (string) – URI for previous set of Builds.

  • count (integer) – Total number of Builds.

  • results (array) – Array of Build objects.

Query Parameters
  • project__slug (string) – Narrow to builds for a specific Project

  • commit (string) – Narrow to builds for a specific commit

Build detail
GET /api/v2/build/(int: id)/

Retrieve details of a single build.

{
    "id": 7367364,
    "date": "2018-06-19T15:15:59.135894",
    "length": 59,
    "type": "html",
    "state": "finished",
    "success": true,
    "error": "",
    "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30",
    "docs_url": "http://pip.pypa.io/en/latest/",
    "project": 13,
    "project_slug": "pip",
    "version": 3681,
    "version_slug": "latest",
    "commands": [
        {
            "description": "",
            "start_time": "2018-06-19T20:16:00.951959",
            "exit_code": 0,
            "build": 7367364,
            "command": "git remote set-url origin git://github.com/pypa/pip.git",
            "run_time": 0,
            "output": "",
            "id": 42852216,
            "end_time": "2018-06-19T20:16:00.969170"
        },
        ...
    ],
    ...
}
Response JSON Object
  • id (integer) – The ID of the build

  • date (string) – The ISO-8601 datetime of the build.

  • length (integer) – The length of the build in seconds.

  • type (string) – The type of the build (one of “html”, “pdf”, “epub”)

  • state (string) – The state of the build (one of “triggered”, “building”, “installing”, “cloning”, or “finished”)

  • success (boolean) – Whether the build was successful

  • error (string) – An error message if the build was unsuccessful

  • commit (string) – A version control identifier for this build (eg. the commit hash)

  • docs_url (string) – The canonical URL of the build docs

  • project (integer) – The ID of the project being built

  • project_slug (string) – The slug for the project being built

  • version (integer) – The ID of the version of the project being built

  • version_slug (string) – The slug for the version of the project being built

  • commands (array) – Array of commands for the build with details including output.

Status Codes

Some fields primarily used for UI elements in Read the Docs are omitted.

Embed
GET /api/v2/embed/

Retrieve HTML-formatted content from a documentation page or section.

Example request:

curl https://readthedocs.org/api/v2/embed/?project=docs&version=latest&doc=features&path=features.html

or

curl https://readthedocs.org/api/v2/embed/?url=https://docs.readthedocs.io/en/latest/features.html

Example response:

{
    "content": [
        "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs..."
    ],
    "headers": [
        {
            "Read the Docs features": "#"
        },
        {
            "Automatic Documentation Deployment": "#automatic-documentation-deployment"
        },
        {
            "Custom Domains & White Labeling": "#custom-domains-white-labeling"
        },
        {
            "Versioned Documentation": "#versioned-documentation"
        },
        {
            "Downloadable Documentation": "#downloadable-documentation"
        },
        {
            "Full-Text Search": "#full-text-search"
        },
        {
            "Open Source and Customer Focused": "#open-source-and-customer-focused"
        }
    ],
    "url": "https://docs.readthedocs.io/en/latest/features",
    "meta": {
        "project": "docs",
        "version": "latest",
        "doc": "features",
        "section": "read the docs features"
    }
}
Response JSON Object
  • content (string) – HTML content of the section.

  • headers (array) – the section’s headers in the document, each mapping a header title to its anchor.

  • url (string) – URL of the document.

  • meta (object) – meta data of the requested section.

Query Parameters
  • project (string) – Read the Docs project’s slug.

  • doc (string) – document to fetch content from.

  • version (string) – optional Read the Docs version’s slug (default: latest).

  • section (string) – optional section within the document to fetch.

  • path (string) – optional full path to the document including extension.

  • url (string) – full URL of the document (and section) to fetch content from.

Note

You can call this endpoint by sending at least the project and doc parameters, or the url parameter.
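The two equivalent addressing styles from the examples above can be sketched side by side:

```python
from urllib.parse import urlencode

base = "https://readthedocs.org/api/v2/embed/?"

# Style 1: address the document by project slug, version, and doc name.
by_slug = base + urlencode({
    "project": "docs", "version": "latest",
    "doc": "features", "path": "features.html",
})

# Style 2: address the document by its full URL.
by_url = base + urlencode({
    "url": "https://docs.readthedocs.io/en/latest/features.html",
})

print(by_slug)
print(by_url)
```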

Undocumented resources and endpoints

There are some undocumented endpoints in the API. These should not be used and could change at any time. These include:

  • The search API (/api/v2/search/)

  • Endpoints for returning footer and version data to be injected into docs. (/api/v2/footer_html)

  • Endpoints used for advertising (/api/v2/sustainability/)

  • Any other endpoints not detailed above.

Read the Docs for Business

Read the Docs has a commercial offering with improved support and additional features.

Read the Docs for Business

Read the Docs is our community solution for open source projects at readthedocs.org and we offer Read the Docs for Business for building and hosting commercial documentation at readthedocs.com. Features in this section are specific to Read the Docs for Business.

Private repositories and private documentation

The largest difference between the community solution and our commercial offering is the ability to connect to private repositories, restrict documentation access to certain users, and share private documentation via private hyperlinks.

Additional build resources

Do you have a complicated build process that uses large amounts of CPU, memory, disk, or networking resources? Our commercial offering has much higher default resources that result in faster documentation build times, and we can increase them further for very demanding projects.

Priority support

We have a dedicated support team that responds to support requests during business hours. If you need a quick turnaround, please sign up for readthedocs.com.

Advertising-free

All commercially hosted documentation is always ad-free.

Organizations

Note

This feature only exists on Read the Docs for Business.

Organizations allow you to segment who has access to which projects in your company. Your company will be represented as an Organization; let’s use ACME Corporation as our example.

ACME has a few people inside their organization, some who need full access and some who just need access to one project.

Member Types
  • Owners – Get full access to both view and edit the Organization and all Projects

  • Members – Get access to a subset of the Organization projects

  • Teams – Groups through which you give Members access to a set of projects.

The best way to think about this relationship is:

Owners will create Teams to assign permissions to all Members.

Warning

Owners, Members and Teams behave differently if you are using SSO with a VCS provider (GitHub, Bitbucket or GitLab).

Team Types

You can create two types of Teams:

  • Admins – These teams have full access to administer the projects in the team. They are allowed to change all of the settings, set notifications, and perform any action under the Admin tab.

  • Read Only – These teams are only able to read and search inside the documents.

Example

ACME would set up Owners of their organization, for example Frank Roadrunner would be an owner. He has full access to the organization and all projects.

Wile E. Coyote is a contractor, and will just have access to the new project Road Builder.

Roadrunner would set up a Team called Contractors. That team would have Read Only access to the Road Builder project. Then he would add Wile E. Coyote to the team. This would give him access to just this one project inside the organization.

Single Sign-On

Note

This feature only exists on Read the Docs for Business.

Single sign-on is supported on Read the Docs for Business for all users. SSO allows you to easily grant permissions to your organization’s projects.

Currently, we support two different types of single sign-on:

  • Authentication and authorization are managed by the identity provider (e.g. GitHub, Bitbucket or GitLab)

  • Authentication (only) is managed by the identity provider (e.g. an active Google Workspace account with a verified email address)

Users can log out by using the Log Out link in the RTD flyout menu.

SSO with VCS provider (GitHub, Bitbucket or GitLab)

Using an identity provider that supports authentication and authorization allows you to manage who has access to projects on Read the Docs, directly from the provider itself. If a user needs access to your documentation project on Read the Docs, that user just needs to be granted permissions in the VCS repository associated with the project.

You can enable this feature in your organization by going to your organization’s detail page > Settings > Authorization and selecting GitHub, GitLab or Bitbucket as provider.

Note that users created under Read the Docs must have their GitHub, Bitbucket or GitLab account connected in order for SSO to work. You can read more about granting permissions on GitHub.

Warning

Once you enable this option, your existing Read the Docs teams will not be used.

Grant access to read the documentation

By granting read (or higher) permissions to a user in the VCS repository, you give that user access to read the documentation of the associated project on Read the Docs.

Grant access to administer a project

By granting write permission to a user in the VCS repository, you give that user access to read the documentation and to administer the associated project on Read the Docs.

Grant access to import a project

When SSO with a VCS provider is enabled, only owners of the Read the Docs organization can import projects. Adding users as owners of your organization will give them permissions to import projects.

Note that to be able to import a project, the user must have admin permissions in the associated VCS repository.

Revoke access to a project

If a user should not have access anymore to a project, for any reason, a VCS repository’s admin (e.g. user with Admin role on GitHub for that specific repository) can revoke access to the VCS repository and this will be automatically reflected in Read the Docs.

The same process is followed if you need to remove admin access but still want that user to be able to read the documentation. Instead of revoking access completely, just lower their permissions to read only.

SSO with Google Workspace

Note

Google Workspace was formerly called G Suite

Using your company’s Google email address (e.g. employee@company.com) allows you to manage authentication for your organization’s members. As this identity provider does not provide per-repository or per-project authorization, permissions are managed by Read the Docs’ internal team authorization system.

By default, users that sign up with a Google account do not have any permissions over any project. However, you can define which teams users matching your company’s domain email address will auto-join when they sign up. Read the following sections to learn how to grant read and admin access.

You can enable this feature in your organization by going to your organization’s detail page > Settings > Authorization and selecting Google as provider and specifying your Google Workspace domain in the Domain field.

Grant access to read a project

You can add a user under a read-only team to grant read permissions to all the projects under that team. This can be done under your organization’s detail page > Teams > Read Only > Invite Member.

To avoid repeating this task for each employee of your company, the owner of the Read the Docs organization can mark one or more teams so that users matching the company’s email domain join them automatically when they sign up.

For example, you can create a team with the projects that all employees of your company should have access to and mark it as Auto join users with an organization’s email address to this team. Then all users that sign up with their employee@company.com email will automatically join this team and have read access to those projects.

Grant access to administer a project

You can add a user under an admin team to grant admin permissions to all the projects under that team. This can be done under your organization’s detail page > Teams > Admins > Invite Member.

Grant users access to import a project

By making the user a member of any admin team under your organization (as mentioned in the previous section), they will be granted access to import a project.

Note that to be able to import a project, the user must have admin permissions in the associated GitHub, Bitbucket or GitLab repository, and their social account must be connected with Read the Docs.

Revoke user’s access to a project

To revoke access to a project for a particular user, you should remove that user from the team that contains that project. This can be done under your organization’s detail page > Teams > Read Only, clicking Remove next to the user whose access you want to revoke.

Revoke user’s access to all the projects

By disabling the Google Workspace account with email employee@company.com, you revoke access to all the projects that user had access to and disable their login on Read the Docs completely.

Sharing

Note

This feature only exists on Read the Docs for Business.

You can share your project with users outside of your company:

  • by sending them a secret link,

  • by giving them a password.

These methods will allow them to view specific projects or versions of a project inside your organization.

Additionally, you can use an HTTP Authorization header. This is useful for access from a script.

Enabling Sharing
  • Go into your project’s Admin page and click on Sharing.

  • Click on New Share

  • Select the access type (secret link, password, or HTTP header token), then add an expiration date and a Description so you remember who you’re sharing it with.

  • Check Allow access to all versions? if you want to grant access to all versions, or uncheck it and select the specific versions you want to grant access to.

  • Click Save.

  • Get the info needed to share your documentation with other users:

    • If you have selected secret link, copy the link that is generated

    • In case of password, copy the link and password

    • For HTTP header token, you need to pass the Authorization header in your HTTP request.

  • Give that information to the person you want to give access to.

Note

You can always revoke access in the same panel.

Users can log out by using the Log Out link in the RTD flyout menu.

Sharing Methods
Password

Once the person you send the link to clicks on the link, they will see an Authorization required page asking them for the password you generated. When the user enters the password, they will have access to view your project.

Tip

This is useful for when you have documentation you want users to bookmark. They can enter a URL directly and enter the password when prompted.

HTTP Authorization Header

Tip

This approach is useful for automated scripts. It only allows access to a page when the header is present, so it doesn’t allow browsing docs inside of a browser.

Token Authorization

You need to send the Authorization header with the token on each HTTP request. The header has the form Authorization: Token <ACCESS_TOKEN>. For example:

curl -H "Authorization: Token 19okmz5k0i6yk17jp70jlnv91v" https://docs.example.com/en/latest/example.html
Basic Authorization

You can also use basic authorization, with the token as user and an empty password. For example:

curl --url https://docs.example.com/en/latest/example.html --user '19okmz5k0i6yk17jp70jlnv91v:'
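For scripts not written in shell, the same two header styles can be built by hand. A sketch using Python's standard library, constructing the requests without sending them (the token and URL are the examples from above):

```python
import base64
import urllib.request

token = "19okmz5k0i6yk17jp70jlnv91v"  # example token from the docs
url = "https://docs.example.com/en/latest/example.html"

# Token authorization: "Authorization: Token <ACCESS_TOKEN>"
req_token = urllib.request.Request(
    url, headers={"Authorization": f"Token {token}"}
)

# Basic authorization: the token as the user name, with an empty password,
# base64-encoded as "token:" per the Basic scheme.
credentials = base64.b64encode(f"{token}:".encode()).decode()
req_basic = urllib.request.Request(
    url, headers={"Authorization": f"Basic {credentials}"}
)

print(req_token.get_header("Authorization"))
print(req_basic.get_header("Authorization"))
```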

Project Privacy Level

Note

This feature only exists on Read the Docs for Business.

By default, only users that belong to your organization can see the dashboard of your project and its builds. If you want users outside your organization and anonymous users to be able to see your project’s dashboard and the build output of public versions, you can set the privacy level of your project to Public.

  • Go to the Admin tab of your project.

  • Click on Advanced Settings.

  • Change the Privacy Level to Public.

Note

To control access to the documentation itself, see Privacy levels.

The Read the Docs project and organization

Learn about Read the Docs, the project and the company, and find out how you can get involved and contribute to the development and success of Read the Docs and the larger software documentation ecosystem.

Security

Security is very important to us at Read the Docs. We follow generally accepted industry standards to protect the personal information submitted to us, both during transmission and once we receive it. In the spirit of transparency, we are committed to responsible reporting and disclosure of security issues.

Account security

  • All traffic is encrypted in transit so your login is protected.

  • Read the Docs stores only one-way hashes of all passwords. Nobody at Read the Docs has access to your passwords.

  • Account login is protected from brute force attacks with rate limiting.

  • While most projects and docs on Read the Docs are public, we treat your private repositories and private documentation as confidential and Read the Docs employees may only view them with your explicit permission in response to your support requests, or when required for security purposes.

  • You can read more about account privacy in our Privacy Policy.

Reporting a security issue

If you believe you’ve discovered a security issue at Read the Docs, please contact us at security@readthedocs.org (optionally using our PGP key). We request that you not publicly disclose the issue until it has been addressed by us.

You can expect:

  • We will respond acknowledging your email typically within one business day.

  • We will follow up if and when we have confirmed the issue with a timetable for the fix.

  • We will notify you when the issue is fixed.

  • We will add the issue to our security issue archive.

PGP key

You may use this PGP key to securely communicate with us and to verify signed messages you receive from us.

Security issue archive

Version 5.19.0

Version 5.19.0 fixes an issue that allowed a malicious user to fetch internal and private information from a logged-in user on readthedocs.org/readthedocs.com by creating a malicious site hosted on readthedocs.io/readthedocs-hosted.com, or from any custom domain registered on the platform.

It would have required the attacker to get a logged-in user to visit an attacker-controlled web page, which could then have made GET API requests on behalf of the user. This vulnerability was found by the Read the Docs team as part of a routine security audit, and there is no indication it was exploited.

Version 5.14.0

Version 5.14.0 fixes an issue that affected new code that removed multiple slashes in URL paths. The issue allowed the creation of hyperlinks that looked like they would go to a documentation domain on Read the Docs (either *.readthedocs.io or a custom docs domain) but instead went to a different domain.

This issue was reported by Splunk after it was discovered during a security audit.

Version 3.5.1

Version 3.5.1 fixed an issue that affected projects with “prefix” or “sphinx” user-defined redirects. The issue allowed the creation of hyperlinks that looked like they would go to a documentation domain on Read the Docs (either *.readthedocs.io or a custom docs domain) but instead went to a different domain.

This issue was reported by Peter Thomassen and the desec.io DNS security project and was funded by SSE.

Version 3.2.0

Version 3.2.0 resolved an issue where a specially crafted request could result in a DNS query to an arbitrary domain.

This issue was found by Cyber Smart Defence who reported it as part of a security audit to a firm running a local installation of Read the Docs.

Release 2.3.0

Version 2.3.0 resolves a security issue with translations on our community hosting site that allowed users to modify the hosted path of a target project by adding it as a translation project of their own project. A check was added to ensure project ownership before adding the project as a translation.

To add a project as a translation now, users must first be granted ownership in the translation project.

DMCA Takedown Policy

These are the guidelines that Read the Docs follows when handling DMCA takedown requests and takedown counter requests. If you are a copyright holder wishing to submit a takedown request, or an author that has been notified of a takedown request, please familiarize yourself with our process. You will be asked to confirm that you have reviewed information if you submit a request or counter request.

We aim to keep this entire process as transparent as possible. Our process is modeled after GitHub’s DMCA takedown process, which we appreciate for its focus on transparency and fairness. All requests and counter requests will be posted to this page below, in the Request Archive. These requests will be redacted to remove all identifying information, except for Read the Docs user and project names.

Takedown Process

Here are the steps Read the Docs follows in the takedown request process:

Copyright holder submits a request

This request, if valid, will be posted publicly on this page, down below. The author affected by the takedown request will be notified with a link to the takedown request.

For more information on submitting a takedown request, see: Submitting a Request

Author is contacted

The author of the content in question will be asked to make changes to the content specified in the takedown request. The author will have 24 hours to make these changes. The copyright holder will be notified if and when this process begins.

Author acknowledges changes have been made

The author must notify Read the Docs that changes have been made within 24 hours of receiving a takedown request. If the author does not respond to this request, the default action will be to disable the Read the Docs project and remove any hosted versions.

Copyright holder review

If the author has made changes, the copyright holder will be notified of these changes. If the changes are sufficient, no further action is required, though copyright holders are welcome to submit a formal retraction. If the changes are not sufficient, the author’s changes can be rejected. If the takedown request requires alteration, a new request must be submitted. If Read the Docs does not receive a review response from the copyright holder within 2 weeks, the default action at this step is to assume the takedown request has been retracted.

Content may be disabled

If the author does not respond to a request for changes, or if the copyright holder has rejected the author’s changes during the review process, the documentation project in question will be disabled.

Author submits a counter request

If the author believes their content was disabled as a result of a mistake, a counter request may be submitted. Authors are advised to seek legal counsel before continuing. If the submitted counter request is sufficiently detailed, this counter will also be added to this page. The copyright holder will be notified, with a link to this counter request.

For more information on submitting a counter request, see: Submitting a Counter

Copyright holder may file legal action

At this point, if the copyright holder wishes to keep the offending content disabled, the copyright holder must file for legal action ordering the author to refrain from infringing activities on Read the Docs. The copyright holder will have 2 weeks to supply Read the Docs with a copy of a valid legal complaint against the author. The default action here, if the copyright holder does not respond to this request, is to re-enable the author’s project.

Submitting a Request

Your request must:

Acknowledge this process

You must first acknowledge you are familiar with our DMCA takedown request process. If you do not acknowledge that you are familiar with our process, you will be instructed to review this information.

Identify the infringing content

You should list URLs to each piece of infringing content. If you allege that the entire project is infringing on copyrights you hold, please specify the entire project as infringing.

Identify infringement resolution

You will need to specify what a user must do in order to avoid having the rest of their content disabled. Be as specific as possible. Specify whether this means adding attribution, identify specific files or content that should be removed, or, if you allege the entire project is infringing, be specific as to why it is infringing.

Include your contact information

Include your name, email, physical address, and phone number.

Include your signature

This can be a physical or electronic signature.

Please complete this takedown request template and send it to: support@readthedocs.com

Submitting a Counter

Your counter request must:

Acknowledge this process

You must first acknowledge you are familiar with our DMCA takedown request process. If you do not acknowledge that you are familiar with our process, you will be instructed to review this information.

Identify the infringing content that was removed

Specify URLs in the original takedown request that you wish to challenge.

Include your contact information

Include your name, email, physical address, and phone number.

Include your signature

This can be a physical or electronic signature.

Requests can be submitted to: support@readthedocs.com

Request Archive

For better transparency into copyright ownership and the DMCA takedown process, Read the Docs maintains this archive of previous DMCA takedown requests. This is modeled after GitHub’s DMCA archive.

The following DMCA takedown requests have been submitted:

2022-06-07

Note

The project maintainer was notified about this report and instructed to submit a counter if they believed this request was invalid. The user removed the project manually, and no further action was required.

Are you the copyright owner or authorized to act on the copyright owner’s behalf?

Yes

What work was allegedly infringed? If possible, please provide a URL:

https://www.dicomstandard.org/current

What files or project should be taken down? You should list URLs to each piece of infringing content. If you allege that the entire project is infringing on copyrights you hold, please specify the entire project as infringing:

https://dicom-standard.readthedocs.io/en/latest/index.html

Is the work licensed under an open source license?

No

What would be the best solution for the alleged infringement?

Complete Removal.

Do you have the alleged infringer’s contact information? Yes. If so, please provide it:

[private]

Type (or copy and paste) the following statement: “I have a good faith belief that use of the copyrighted materials described above on the infringing web pages is not authorized by the copyright owner, or its agent, or the law. I have taken fair use into consideration.”

I have a good faith belief that use of the copyrighted materials described above on the infringing web pages is not authorized by the copyright owner, or its agent, or the law. I have taken fair use into consideration.

Type (or copy and paste) the following statement: “I swear, under penalty of perjury, that the information in this notification is accurate and that I am the copyright owner, or am authorized to act on behalf of the owner, of an exclusive right that is allegedly infringed.”

I swear, under penalty of perjury, that the information in this notification is accurate and that I am the copyright owner, or am authorized to act on behalf of the owner, of an exclusive right that is allegedly infringed.

Please confirm that you have read our Takedown Policy: https://docs.readthedocs.io/en/latest/dmca/index.html

Yes

So that we can get back to you, please provide either your telephone number or physical address:

[private]

Please type your full legal name below to sign this request:

[private]

Policy for Abandoned Projects

This policy describes the process by which a Read the Docs project name may be changed.

Rationale

Conflicts between the current use of a name and a different suggested use of the same name occasionally arise. This document aims to provide general guidelines for resolving the most typical cases of such conflicts.

Specification

The main idea behind this policy is that Read the Docs serves the community. Every user is invited to upload content under the Terms of Use, understanding that it is at the sole risk of the user.

While Read the Docs is not a backup service, the core team of Read the Docs does their best to keep that content accessible indefinitely in its published form. However, in certain edge cases the greater community’s needs might outweigh the individual’s expectation of ownership of a project name.

The use cases covered by this policy are:

Abandoned projects

Renaming a project so that the original project name can be used by a different project

Active projects

Resolving disputes over a name

Implementation

Reachability

The user of Read the Docs is solely responsible for being reachable by the core team for matters concerning projects that the user owns. In every case where contacting the user is necessary, the core team will try to do so at least three times, using the following means of contact:

  • E-mail address on file in the user’s profile

  • E-mail addresses found in the given project’s documentation

  • E-mail address on the project’s home page

The core team will stop trying to reach the user after six weeks and the user will be considered unreachable.

Abandoned projects

A project is considered abandoned when ALL of the following are met:

  • Owner is unreachable (see Reachability)

  • The project has no proper documentation being served (no successful builds) or does not have any releases within the past twelve months

  • No activity from the owner on the project’s home page (or no home page found).

All other projects are considered active.

Renaming of an abandoned project

Projects are never renamed solely on the basis of abandonment.

An abandoned project can be renamed (by appending -abandoned and a uniquifying integer if needed) for purposes of reusing the name when ALL of the following are met:

  • The project has been determined abandoned by the rules described above

  • The candidate is able to demonstrate their own failed attempts to contact the existing owner

  • The candidate is able to demonstrate that the project suggested to reuse the name already exists and meets notability requirements

  • The candidate is able to demonstrate why a fork under a different name is not an acceptable workaround

  • The project has fewer than 100 monthly pageviews

  • The core team does not have any additional reservations.
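The renaming scheme described above (appending -abandoned and a uniquifying integer if needed) can be sketched as a small helper. This is only an illustration of the policy wording, not actual Read the Docs code; the function and parameter names are hypothetical:

```python
def abandoned_name(slug, taken):
    """Return a new slug for an abandoned project by appending
    '-abandoned', adding a uniquifying integer when that name is
    already taken (illustrative sketch, not Read the Docs code)."""
    candidate = f"{slug}-abandoned"
    n = 2
    while candidate in taken:
        candidate = f"{slug}-abandoned-{n}"
        n += 1
    return candidate
```

For example, if "docs-abandoned" were already taken, `abandoned_name("docs", {"docs-abandoned"})` would produce "docs-abandoned-2".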

Name conflict resolution for active projects

The core team of Read the Docs is not an arbiter in disputes around active projects. The core team recommends that users get in touch with each other and solve the issue through respectful communication.

Prior art

The Python Package Index (PyPI) policy for claiming abandoned packages (PEP-0541) heavily influenced this policy.

Changelog

Version 8.4.0

Date

August 16, 2022

Version 8.3.7

Date

August 09, 2022

  • @stsewd: Sphinx domain: change type of ID field (#9482)

  • @humitos: Build: unpin Pillow for unsupported Python versions (#9473)

  • @humitos: Release 8.3.6 (#9465)

  • @stsewd: Redirects: check only for hostname and path for infinite redirects (#9463)

  • @benjaoming: Fix missing indentation on reStructuredText badge code (#9404)

  • @stsewd: Embed JS: fix incompatibilities with sphinx 6.x (jquery removal) (#9359)

Version 8.3.6

Date

August 02, 2022

Version 8.3.5

Date

July 25, 2022

Version 8.3.4

Date

July 19, 2022

Version 8.3.3

Date

July 12, 2022

Version 8.3.2

Date

July 05, 2022

Version 8.3.1

Date

June 27, 2022

Version 8.3.0

Date

June 20, 2022

Version 8.2.0

Date

June 14, 2022

Version 8.1.2

Date

June 06, 2022

Version 8.1.1

Date

Jun 1, 2022

Version 8.1.0

Date

May 24, 2022

Version 8.0.2

Date

May 16, 2022

Version 8.0.1

Date

May 09, 2022

Version 8.0.0

Date

May 03, 2022

Note

We are upgrading to Ubuntu 22.04 LTS and also to Python 3.10.

Projects using Mamba with the old feature flag CONDA_USES_MAMBA, which has now been removed, must update their .readthedocs.yaml file to use build.tools.python: mambaforge-4.10 to continue using Mamba to create their environment. See more about build.tools.python at https://docs.readthedocs.io/en/stable/config-file/v2.html#build-tools-python
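A minimal .readthedocs.yaml along these lines selects Mamba as the environment tool; the operating system choice and the environment file name are illustrative and should be adapted to your project:

```yaml
# .readthedocs.yaml -- illustrative sketch; adjust names to your project
version: 2

build:
  os: ubuntu-22.04
  tools:
    # Replaces the removed CONDA_USES_MAMBA feature flag
    python: mambaforge-4.10

conda:
  # Path to your Conda environment file (name is hypothetical)
  environment: environment.yml
```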

Version 7.6.2

Date

April 25, 2022

Version 7.6.1

Date

April 19, 2022

Version 7.6.0

Date

April 12, 2022

Version 7.5.1

Date

April 04, 2022

Version 7.5.0

Date

March 28, 2022

Version 7.4.2

Date

March 14, 2022

Version 7.4.1

Date

March 07, 2022

  • @humitos: Upgrade common submodule (#9001)

  • @humitos: Build: RepositoryError message (#8999)

  • @humitos: Requirements: remove django-permissions-policy (#8987)

  • @stsewd: Archive builds: avoid filtering by commands__isnull (#8986)

  • @humitos: Build: cancel error message (#8984)

  • @humitos: API: validate RemoteRepository when creating a Project (#8983)

  • @humitos: Celery: trigger archive_builds frequently with a lower limit (#8981)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 09 (#8977)

  • @stsewd: MkDocs: allow None on extra_css/extra_javascript (#8976)

  • @stsewd: CDN: avoid cache tags collision (#8969)

  • @stsewd: Docs: warn about custom domains on subprojects (#8945)

  • @humitos: Code style: format the code using darker (#8875)

  • @dogukanteber: Use django-storages’ manifest files class instead of the overriden class (#8781)

  • @nienn: Docs: Add links to documentation on creating custom classes (#8466)

  • @stsewd: Integrations: allow to pass more data about external versions (#7692)

Version 7.4.0

Date

March 01, 2022

Version 7.3.0

Date

February 21, 2022

Version 7.2.1

Date

February 15, 2022

Version 7.2.0

Date

February 08, 2022

Version 7.1.2

Date

January 31, 2022

Version 7.1.1

Date

January 31, 2022

Version 7.1.0

Date

January 25, 2022

Version 7.0.0

This is our 7th major version! The major bump is because we are upgrading to Django 3.2 LTS.

Date

January 17, 2022

Version 6.3.3

Date

January 10, 2022

Version 6.3.2

Date

January 04, 2022

Version 6.3.1

Date

December 14, 2021

Version 6.3.0

Date

November 29, 2021

Version 6.2.1

Date

November 23, 2021

Version 6.2.0

Date

November 16, 2021

Version 6.1.2

Date

November 08, 2021

Version 6.1.1

Date

November 02, 2021

Version 6.1.0

Date

October 26, 2021

Version 6.0.0

Date

October 13, 2021

This release includes the upgrade of some base dependencies:

  • Python version from 3.6 to 3.8

  • Ubuntu version from 18.04 LTS to 20.04 LTS

Starting from this release, all the Read the Docs code will be tested and QAed on these versions.

Version 5.25.1

Date

October 11, 2021

Version 5.25.0

Date

October 05, 2021

Version 5.24.0

Date

September 28, 2021

Version 5.23.6

Date

September 20, 2021

Version 5.23.5

Date

September 14, 2021

  • @humitos: Organization: only mark artifacts cleaned as False if they are True (#8481)

  • @astrojuanlu: Fix link to version states documentation (#8475)

  • @stsewd: OAuth models: increase avatar_url lenght (#8472)

  • @pzhlkj6612: Docs: update the links to the dependency management content of setuptools docs (#8470)

  • @stsewd: Permissions: avoid using project.users, use proper permissions instead (#8458)

  • @humitos: Docker build images: update design doc (#8447)

  • @astrojuanlu: New Read the Docs tutorial, part I (#8428)

Version 5.23.4

Date

September 07, 2021

Version 5.23.3

Date

August 30, 2021

Version 5.23.2

Date

August 24, 2021

Version 5.23.1

Date

August 16, 2021

Version 5.23.0

Date

August 09, 2021

Version 5.22.0

Date

August 02, 2021

Version 5.21.0

Date

July 27, 2021

Version 5.20.3

Date

July 19, 2021

Version 5.20.2

Date

July 13, 2021

Version 5.20.1

Date

June 28, 2021

Version 5.20.0

Date

June 22, 2021

Version 5.19.0

Warning

This release contains a security fix to our CSRF settings: https://github.com/readthedocs/readthedocs.org/security/advisories/GHSA-3v5m-qmm9-3c6c

Date

June 15, 2021

Version 5.18.0

Date

June 08, 2021

Version 5.17.0

Date

May 24, 2021

Version 5.16.0

Date

May 18, 2021

  • @stsewd: QuerySets: check for .is_superuser instead of has_perm (#8181)

  • @humitos: Build: use is_active method to know if the build should be skipped (#8179)

  • @humitos: APIv2: disable listing endpoints (#8178)

  • @stsewd: Project: use IntegerField for remote_repository from project form. (#8176)

  • @stsewd: Docs: remove some lies from cross referencing guide (#8173)

  • @stsewd: Docs: add space to bash code (#8171)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 19 (#8170)

  • @stsewd: Querysets: include organizations in is_active check (#8163)

  • @stsewd: Querysets: remove private and for_project (#8158)

  • @davidfischer: Disable FLOC by introducing permissions policy header (#8145)

  • @stsewd: Build: allow to install packages with apt (#8065)

Version 5.15.0

Date

May 10, 2021

  • @stsewd: Ads: don’t load script if a project is marked as ad_free (#8164)

  • @stsewd: Querysets: include organizations in is_active check (#8163)

  • @stsewd: Querysets: simplify project querysets (#8154)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 18 (#8153)

  • @stsewd: Search: default to search on default version of subprojects (#8148)

  • @stsewd: Remove protected privacy level (#8146)

  • @stsewd: Embed: fix paths that start with / (#8139)

  • @humitos: Metrics: run metrics task every 30 minutes (#8138)

  • @humitos: web-celery: add logging for OOM debug on suspicious tasks (#8131)

  • @agjohnson: Fix a few style and grammar issues with SSO docs (#8109)

  • @stsewd: Embed: don’t fail while querying sections with bad id (#8084)

  • @stsewd: Design doc: allow to install packages using apt (#8060)

Version 5.14.3

Date

April 26, 2021

Version 5.14.2

Date

April 20, 2021

Version 5.14.1

Date

April 13, 2021

  • @stsewd: OAuth: protection against deleted objects (#8081)

  • @cocobennett: Add page and page_size to server side api documentation (#8080)

  • @stsewd: Version warning banner: inject on role=”main” or main tag (#8079)

  • @stsewd: OAuth: avoid undefined var (#8078)

  • @stsewd: Conda: protect against None when appending core requirements (#8077)

  • @humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)

  • @agjohnson: Add separate version create view and create view URL (#7595)

Version 5.14.0

Date

April 06, 2021

This release includes a security update which was done in a private branch PR. See our security changelog for more details.

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 14 (#8071)

  • @astrojuanlu: Clarify ad-free conditions (#8064)

  • @humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)

  • @stsewd: Build environment: allow to run commands with a custom user (#8058)

  • @humitos: Design document for new Docker images structure (#7566)

Version 5.13.0

Date

March 30, 2021

Version 5.12.2

Date

March 23, 2021

Version 5.12.1

Date

March 16, 2021

Version 5.12.0

Date

March 08, 2021

Version 5.11.0

Date

March 02, 2021

Version 5.10.0

Date

February 23, 2021

Version 5.9.0

Date

February 16, 2021

Last Friday we migrated our site from Azure to AWS (read the blog post). This is the first release into our new AWS infra.

Version 5.8.5

Date

January 18, 2021

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 03 (#7840)

  • @humitos: Speed up concurrent builds by limited to 5 hours ago (#7839)

  • @humitos: Match Redis version with production (#7838)

  • @saadmk11: Add Option to Enable External Builds Through Project Update API (#7834)

  • @stsewd: Docs: mention the version warning is for sphinx only (#7832)

  • @stsewd: Tests: make PRODUCTION_DOMAIN explicit (#7831)

  • @stsewd: Docs: make it easy to copy/pasta examples (#7829)

  • @stsewd: PR preview: pass PR and build urls to sphinx context (#7828)

  • @agjohnson: Hide design docs from documentation (#7826)

  • @stsewd: Footer: add cache tags (#7821)

  • @humitos: Log Stripe Resource fallback creation in Sentry (#7820)

  • @humitos: Register MetricsTask to send metrics to AWS CloudWatch (#7817)

  • @saadmk11: Add management command to Sync RemoteRepositories and RemoteOrganizations (#7803)

  • @stsewd: Mkdocs: default to “docs” for docs_dir (#7766)

Version 5.8.4

Date

January 12, 2021

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 02 (#7818)

  • @stsewd: List SYNC_VERSIONS_USING_A_TASK flag in the admin (#7802)

  • @ericholscher: Update build concurrency numbers for Business (#7794)

  • @stsewd: Sphinx: use html_baseurl for setting the canonical URL (#7540)
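The html_baseurl change above refers to a standard Sphinx configuration value set in a project’s conf.py. A minimal sketch, with a hypothetical project name and URL:

```python
# conf.py -- illustrative sketch; project name and URL are hypothetical
project = "example-project"

# Sphinx emits <link rel="canonical"> tags based on html_baseurl, and
# Read the Docs now uses this value when setting the canonical URL.
html_baseurl = "https://example-project.readthedocs.io/en/stable/"
```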

Version 5.8.3

Date

January 05, 2021

Version 5.8.2

Date

December 21, 2020

Version 5.8.1

Date

December 14, 2020

  • @humitos: Register ShutdownBuilder task (#7749)

  • @saadmk11: Use “path_with_namespace” for GitLab RemoteRepository full_name Field (#7746)

  • @stsewd: Features: remove USE_NEW_PIP_RESOLVER (#7745)

  • @stsewd: Version sync: exclude external versions when deleting (#7742)

  • @stsewd: Search: limit number of sections and domains to 10K (#7741)

  • @stsewd: Traffic analytics: don’t pass context if the feature isn’t enabled (#7740)

  • @stsewd: Analytics: move page views to its own endpoint (#7739)

  • @stsewd: FeatureQuerySet: make check for date inclusive (#7737)

  • @stsewd: Typo: date -> data (#7736)

  • @saadmk11: Use remote_id and vcs_provider Instead of full_name to Get RemoteRepository (#7734)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 49 (#7730)

  • @saadmk11: Update parts of code that were using the old RemoteRepository model fields (#7728)

  • @stsewd: Builds: don’t delete them when a version is deleted (#7679)

  • @stsewd: Sync versions: create new versions in bulk (#7382)

  • @humitos: Use mamba under a feature flag to create conda environments (#6815)

Version 5.8.0

Date

December 08, 2020

Version 5.7.0

Date

December 01, 2020

Version 5.6.5

Date

November 23, 2020

Version 5.6.4

Date

November 16, 2020

Version 5.6.3

Date

November 10, 2020

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 43 (#7602)

Version 5.6.2

Date

November 03, 2020

Version 5.6.1

Date

October 26, 2020

Version 5.6.0

Date

October 19, 2020

Version 5.5.3

Date

October 13, 2020

Version 5.5.2

Date

October 06, 2020

Version 5.5.1

Date

September 28, 2020

Version 5.5.0

Date

September 22, 2020

Version 5.4.3

Date

September 15, 2020

Version 5.4.2

Date

September 09, 2020

Version 5.4.1

Date

September 01, 2020

Version 5.4.0

Date

August 25, 2020

Version 5.3.0

Date

August 18, 2020

Version 5.2.3

Date

August 04, 2020

Version 5.2.2

Date

July 29, 2020

Version 5.2.1

Date

July 14, 2020

Version 5.2.0

Date

July 07, 2020

Version 5.1.5

Date

July 01, 2020

Version 5.1.4

Date

June 23, 2020

Version 5.1.3

Date

June 16, 2020

Version 5.1.2

Date

June 09, 2020

Version 5.1.1

Date

May 26, 2020

Version 5.1.0

Date

May 19, 2020

This release includes one major new feature which is Pageview Analytics. This allows projects to see the pages in their docs that have been viewed in the past 30 days, giving them an idea of what pages to focus on when updating them.

This release also has a few small search improvements, doc updates, and other bugfixes as well.

Version 5.0.0

Date

May 12, 2020

This release includes two large changes, one that is breaking and requires a major version upgrade:

  • We have removed our deprecated doc serving code that used core/views, core/symlinks, and builds/syncers (#6535). All doc serving should now be done via proxito. In production this has been the case for over a month; we have now removed the deprecated code from the codebase.

  • We did a large documentation refactor that should make things nicer to read and highlights more of our existing features. This is the first of a series of new documentation additions we have planned.

  • @ericholscher: Fix the caching of featured projects (#7054)

  • @ericholscher: Docs: Refactor and simplify our docs (#7052)

  • @stsewd: Mention using ssh URLs when using private submodules (#7046)

  • @ericholscher: Show project slug in Version admin (#7042)

  • @stsewd: List apiv3 first (#7041)

  • @stsewd: Remove CELERY_ROUTER flag (#7040)

  • @stsewd: Search: remove unused taxonomy field (#7033)

  • @agjohnson: Use a high time limit for celery build task (#7029)

  • @ericholscher: Clean up build admin to make list display match search (#7028)

  • @stsewd: Task Router: check for None (#7027)

  • @stsewd: Implement repo_exists for all VCS backends (#7025)

  • @stsewd: Mkdocs: Index pages without anchors (#7024)

  • @agjohnson: Move docker limits back to setting (#7023)

  • @humitos: Fix typo (#7022)

  • @stsewd: Fix linter (#7021)

  • @ericholscher: Release 4.1.8 (#7020)

  • @ericholscher: Cleanup unresolver logging (#7019)

  • @stsewd: Document about next when using a secret link (#7015)

  • @stsewd: Remove unused field project.version_privacy_level (#7011)

  • @ericholscher: Add proxito headers to redirect responses (#7007)

  • @stsewd: Make hidden field not null (#6996)

  • @humitos: Show a list of packages installed on environment (#6992)

  • @eric-wieser: Ensure invoked Sphinx matches importable one (#6965)

  • @ericholscher: Add an unresolver similar to our resolver (#6944)

  • @KengoTODA: Replace “PROJECT” with project object (#6878)

  • @humitos: Remove code replaced by El Proxito and stateless servers (#6535)

Version 4.1.8

Date

May 05, 2020

This release adds a few new features and bugfixes. The largest change is the addition of hidden versions, which allows docs to be built but not shown to users on the site. This will keep old links from breaking but not direct new users there.

We’ve also expanded the CDN support to make sure we’re passing headers on 3xx and 4xx responses. This will allow us to expand the timeout on our CDN.

We’ve also updated and added a good amount of documentation in this release, and we’re starting a larger refactor of our docs to help users understand the platform better.

Version 4.1.7

Date

April 28, 2020

As of this release, most documentation on Read the Docs Community is now behind Cloudflare’s CDN. It should be much faster for people further from US East. Please report any issues you experience with stale cached documentation (especially CSS/JS).

Another change in this release relates to how custom domains are handled. Custom domains will now redirect HTTP -> HTTPS if the Domain’s “HTTPS” flag is set. Also, the subdomain URL (eg. <project>.readthedocs.io/...) should redirect to the custom domain if the Domain’s “canonical” flag is set. These flags are configurable in your project dashboard under Admin > Domains.

Many of the other changes related to improvements for our infrastructure to allow us to have autoscaling build and web servers. There were bug fixes for projects using versions tied to annotated git tags and custom user redirects will now send query parameters.

Version 4.1.6

Date

April 21, 2020

Version 4.1.5

Date

April 15, 2020

Version 4.1.4

Date

April 14, 2020

Version 4.1.3

Date

April 07, 2020

Version 4.1.2

Date

March 31, 2020

Version 4.1.1

Date

March 24, 2020

Version 4.1.0

Date

March 17, 2020

Version 4.0.3

Date

March 10, 2020

Version 4.0.2

Date

March 04, 2020

Version 4.0.1

Date

March 03, 2020

Version 4.0.0

Date

February 25, 2020

This release upgrades our codebase to run on Django 2.2. This is a breaking change, so we have released it as our 4th major version.

Version 3.12.0

Date

February 18, 2020

This version has two major changes:

Version 3.11.6

Date

February 04, 2020

Version 3.11.5

Date

January 29, 2020

Version 3.11.4

Date

January 28, 2020

Version 3.11.3

Date

January 21, 2020

Version 3.11.2

Date

January 08, 2020

Version 3.11.1

Date

December 18, 2019

Version 3.11.0

Date

December 03, 2019

Version 3.10.0

Date

November 19, 2019

Version 3.9.0

Date

November 12, 2019

Version 3.8.0

Date

October 09, 2019

Version 3.7.5

Date

September 26, 2019

Version 3.7.4

Date

September 05, 2019

Version 3.7.3

Date

August 27, 2019

Version 3.7.2

Date

August 08, 2019

Version 3.7.1

Date

August 07, 2019

Version 3.7.0

Date

July 23, 2019

Version 3.6.1

Date

July 17, 2019

Version 3.6.0

Date

July 16, 2019

Version 3.5.3

Date

June 19, 2019

Version 3.5.2

This is a quick hotfix to the previous version.

Date

June 11, 2019

Version 3.5.1

This version contained a security fix for an open redirect issue. The problem has been fixed and deployed on readthedocs.org. Users who depend on the Read the Docs codebase for a private instance of Read the Docs are encouraged to update to 3.5.1 as soon as possible.

Date

June 11, 2019

Version 3.5.0

Date

May 30, 2019

Version 3.4.2

Date

April 22, 2019

Version 3.4.1

Date

April 03, 2019

Version 3.4.0

Date

March 18, 2019

Version 3.3.1

Date

February 28, 2019

Version 3.3.0

Date

February 27, 2019

Version 3.2.3

Date

February 19, 2019

Version 3.2.2

Date

February 13, 2019

Version 3.2.1

Date

February 07, 2019

Version 3.2.0

Date

February 06, 2019

Version 3.1.0

This version greatly improves our search capabilities, thanks to the Google Summer of Code. We’re hoping to have another version of search coming soon after this, but this is already a large upgrade, moving to the latest Elasticsearch.

Date

January 24, 2019

Version 3.0.0

Read the Docs now only supports Python 3.6+. This affects people running the software on their own servers; builds continue to work across all supported Python versions.

Date

January 23, 2019

Version 2.8.5

Date

January 15, 2019

Version 2.8.4

Date

December 17, 2018

Version 2.8.3

Date

December 05, 2018

Version 2.8.2

Date

November 28, 2018

Version 2.8.1

Date

November 06, 2018

Version 2.8.0

Date

October 30, 2018

The major change is an upgrade to Django 1.11.

Version 2.7.2

Date

October 23, 2018

Version 2.7.1

Date

October 04, 2018

Version 2.7.0

Date

September 29, 2018

Reverted, do not use

Version 2.6.6

Date

September 25, 2018

Version 2.6.5

Date

August 29, 2018

Version 2.6.4

Date

August 29, 2018

Version 2.6.3

Date

August 18, 2018

Release to Azure!

Version 2.6.2

Date

August 14, 2018

Version 2.6.1

Date

July 17, 2018

Version 2.6.0

Date

July 16, 2018

Version 2.5.3

Date

July 05, 2018

Version 2.5.2

Date

June 18, 2018

Version 2.5.1

Date

June 14, 2018

Version 2.5.0

Date

June 06, 2018

Version 2.4.0

Date

May 31, 2018

Version 2.3.14

Date

May 30, 2018

Version 2.3.13

Date

May 23, 2018

Version 2.3.12

Date

May 21, 2018

Version 2.3.11

Date

May 01, 2018

Version 2.3.10

Date

April 24, 2018

Version 2.3.9

Date

April 20, 2018

Version 2.3.8

Date

April 20, 2018

  • @agjohnson: Give TaskStep class knowledge of the underlying task (#3983)

  • @humitos: Resolve domain when a project is a translation of itself (#3981)

Version 2.3.7

Date

April 19, 2018

Version 2.3.6

Date

April 05, 2018

Version 2.3.5

Date

April 05, 2018

Version 2.3.4

  • Release for static assets

Version 2.3.3

Version 2.3.2

This version adds a hotfix branch that adds model validation to the repository URL to ensure strange URL patterns can’t be used.

Version 2.3.1

Version 2.3.0

Warning

Version 2.3.0 includes a security fix for project translations. See Release 2.3.0 for more information.

Version 2.2.1

Version 2.2.1 is a bug fix release for several issues found in production during the 2.2.0 release.

Version 2.2.0

Version 2.1.6

Version 2.1.5

Version 2.1.4

Version 2.1.3

Date

Dec 21, 2017

Version 2.1.2

Version 2.1.1

Release information missing

Version 2.1.0

Version 2.0

Previous releases

Starting with version 2.0, we will be incrementing the Read the Docs version based on semantic versioning principles, and will be automating the update of our changelog.

Below are some historical changes from when we tried to add information here in the past.

July 23, 2015
  • Django 1.8 Support Merged

Code Notes
  • Updated Django from 1.6.11 to 1.8.3.

  • Removed South and ported the South migrations to Django’s migration framework.

  • Updated django-celery from 3.0.23 to 3.1.26 as django-celery 3.0.x does not support Django 1.8.

  • Updated Celery from 3.0.24 to 3.1.18 because we had to update django-celery. We need to test this extensively and might need to think about using the new Celery API directly and dropping django-celery. See release notes: https://docs.celeryproject.org/en/3.1/whatsnew-3.1.html

  • Updated tastypie from 0.11.1 to current master (commit 1e1aff3dd4dcd21669e9c68bd7681253b286b856) as 0.11.x is not compatible with Django 1.8. No surprises expected but we should ask for a proper release, see release notes: https://github.com/django-tastypie/django-tastypie/blob/master/docs/release_notes/v0.12.0.rst

  • Updated django-oauth from 0.16.1 to 0.21.0. No surprises expected, see release notes in the docs and finer grained in the repo

  • Updated django-guardian from 1.2.0 to 1.3.0 to gain Django 1.8 support. No surprises expected, see release notes: https://github.com/lukaszb/django-guardian/blob/devel/CHANGES

  • Using django-formtools instead of the removed django.contrib.formtools now. Based on the Django release notes, these modules are the same except for the package name.

  • Updated pytest-django from 2.6.2 to 2.8.0. No changes required, but we ran the test suite :smile:

  • Updated psycopg2 from 2.4 to 2.4.6 as 2.4.5 is required by Django 1.8. No trouble expected as Django is the layer between us and psycopg2. Also it’s only a minor version upgrade. Release notes: http://initd.org/psycopg/docs/news.html#what-s-new-in-psycopg-2-4-6

  • Added django.setup() to conf.py to load django properly for doc builds.

  • Added migrations for all apps with models in the readthedocs/ directory
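The django.setup() call mentioned above might look roughly like this at the top of a Sphinx conf.py (a sketch only; the exact settings module path is an assumption):

```python
# conf.py -- Sphinx configuration (illustrative sketch)
import os

import django

# Point Django at the project settings before autodoc imports any models;
# "readthedocs.settings.dev" is an assumed module path, not necessarily the real one.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "readthedocs.settings.dev")
django.setup()  # populate the app registry so model imports succeed during doc builds
```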

Deployment Notes

After you have updated the code and installed the new dependencies, you need to run these commands on the server:

python manage.py migrate contenttypes
python manage.py migrate projects 0002 --fake
python manage.py migrate --fake-initial

In a local test environment, pip did not update to the specified commit of tastypie. It might be necessary to use pip install -U -r requirements/deploy.txt during deployment.

Development Update Notes

The readthedocs developers need to execute these commands when switching to this branch (or when this got merged into main):

  • Before updating please make sure that all migrations are applied:

    python manage.py syncdb
    python manage.py migrate
    
  • Update the codebase: git pull

  • You need to update the requirements with pip install -r requirements.txt

  • Now you need to fake the initial migrations:

    python manage.py migrate contenttypes
    python manage.py migrate projects 0002 --fake
    python manage.py migrate --fake-initial
    

About Read the Docs

Read the Docs is a C Corporation registered in Oregon. Our bootstrapped company is owned and fully controlled by the founders, and fully funded by our customers and advertisers. This allows us to focus 100% on our users.

We have two main sources of revenue:

  • Read the Docs for Business - where we provide a valuable paid service to companies.

  • Read the Docs Community - where we provide a free service to the open source community, funded via EthicalAds.

We believe that having both paying customers and ethical advertising is the best way to create a sustainable platform for our users. We have built something that we expect to last a long time, and we are able to make decisions based only on the best interest of our community and customers.

All of the source code for Read the Docs is open source. You are welcome to contribute the features you want or run your own instance. We should note that we generally only support our hosted versions as a matter of our philosophy.

We owe a great deal to the open source community that we are a part of, so we provide free ads via our community ads program. This allows us to give back to the communities and projects that we support and depend on.

We are proud about the way we manage our company and products, and are glad to have you on board with us in this great documentation journey.

Read the Docs Team

readthedocs.org is the largest open source documentation hosting service. Today we:

  • Serve over 55 million pages of documentation a month

  • Serve over 40 TB of documentation a month

  • Host over 80,000 open source projects and support over 100,000 users

Read the Docs is provided as a free service to the open source community, and we hope to maintain a reliable and stable hosting platform for years to come.

Staff

The members of the Staff work full time on the service, and we are also honored to have several external contributors.

We mainly fund our operations through advertising and corporate-hosted documentation with Read the Docs for Business, and we are supported by a number of generous sponsors.

  • Eric Holscher - All teams

  • Anthony Johnson - All teams

  • Manuel Kaufmann - Backend, Operations, Support

  • Santos Gallegos - Backend, Operations, Support

  • Benjamin Balder Bach - Backend, Operations, Support

Teams
  • The Backend Team folks develop the Django code that powers the backend of the project.

  • The members of the Frontend Team care about UX, CSS, HTML, and JavaScript, and they maintain the project UI as well as the Sphinx theme.

  • As part of operating the site, members of the Operations Team maintain a 24/7 on-call rotation. This means that folks have to be available and have their phone in service.

  • The members of the Advocacy Team spread the word about all the work we do, and seek to understand users’ priorities and feedback.

  • The Support Team helps our thousands of users of the service, addressing tasks like resetting passwords, enabling experimental features, or troubleshooting build errors.

Note

Please don’t email us personally for support on Read the Docs. You can use our support form for any issues you may have.

Major Contributors

The code that powers the Read the Docs platform, as well as many other related projects in our GitHub organization, are open source, and therefore anybody can contribute.

Our platform code has over a hundred contributors, which makes us extremely proud and thankful. In addition, a number of contributors have performed ongoing maintenance on several subprojects over the years:

We know that we’re missing a large number of people who have contributed in major ways to our various projects. Please let us know if you feel that you should be on this list, and aren’t!

Read the Docs Open Source Philosophy

Read the Docs is open source software. We have licensed the code base as MIT, which provides almost no restrictions on the use of the code.

However, as a project there are things that we care about more than others. We built Read the Docs to support documentation in the open source community. The code is open for people to contribute to, so that they may build features into https://readthedocs.org that they want. We also believe sharing the code openly is a valuable learning tool, especially for demonstrating how to collaborate and maintain an enormous website.

Official Support

The time of the core developers of Read the Docs is limited. We provide official support for the following things:

Unsupported

There are use cases that we don’t support, because they don’t further our goal of promoting documentation in the open source community.

We do not support:

  • Specific usage of Sphinx and MkDocs that doesn’t affect our hosting

  • Custom installations of Read the Docs at your company

  • Installation of Read the Docs on other platforms

  • Any installation issues outside of the Read the Docs Python Code

Rationale

Read the Docs was founded to improve documentation in the open source community. We fully recognize and allow the code to be used for internal installs at companies, but we will not spend our time supporting it. Our time is limited, and we want to spend it on the mission that we set out to originally support.

If you feel strongly about installing Read the Docs internal to a company, we will happily link to third party resources on this topic. Please open an issue with a proposal if you want to take on this task.

The Story of Read the Docs

Documenting projects is hard, hosting them shouldn’t be. Read the Docs was created to make hosting documentation simple.

Read the Docs was started with a couple main goals in mind. The first goal was to encourage people to write documentation, by removing the barrier of entry to hosting. The other goal was to create a central platform for people to find documentation. Having a shared platform for all documentation allows for innovation at the platform level, allowing work to be done once and benefit everyone.

Documentation matters, but it’s often overlooked. We think that we can help a documentation culture flourish. Great projects, such as Django and SQLAlchemy, and projects from companies like Mozilla, are already using Read the Docs to serve their documentation to the world.

The site has grown quite a bit over the past year. Our look back at 2013 shows some numbers that show our progress. The job isn’t anywhere near done yet, but it’s a great honor to be able to have such an impact already.

We plan to keep building a great experience for people hosting their docs with us, and for users of the documentation that we host.

Advertising

Advertising is the single largest source of funding for Read the Docs. It allows us to:

  • Serve over 35 million pages of documentation per month

  • Serve over 40 TB of documentation per month

  • Host over 80,000 open source projects and support over 100,000 users

  • Pay a small team of dedicated full-time staff

Many advertising models involve tracking users around the internet, selling their data, and privacy intrusion in general. Instead of doing that, we built an Ethical Advertising model that respects user privacy.

We recognize that advertising is not for everyone. You may opt out of paid advertising although you will still see community ads. You can go ad-free by becoming a Gold member or a Supporter of Read the Docs. Gold members can also remove advertising from their projects for all visitors.

For businesses looking to remove advertising, please consider Read the Docs for Business.

EthicalAds

Read the Docs is a large, free web service. There is one proven business model to support this kind of site: Advertising. We are building the advertising model we want to exist, and we’re calling it EthicalAds.

EthicalAds respect users while providing value to advertisers. We don’t track you, sell your data, or anything else. We simply show ads to users, based on the content of the pages you look at. We also give 10% of our ad space to community projects, as our way of saying thanks to the open source community.

We talk a bit below about our worldview on advertising, if you want to know more.

Are you a marketer?

We built a whole business around privacy-focused advertising. If you’re trying to reach developers, we have a network of hand-approved sites (including Read the Docs) where your ads are shown.

Feedback

We’re a community, and we value your feedback. If you ever want to reach out about this effort, feel free to shoot us an email.

You can opt out of having paid ads on your projects, or seeing paid ads if you want. You will still see community ads, which we run for free that promote community projects.

Our Worldview

We’re building the advertising model we want to exist:

  • We don’t track you

  • We don’t sell your data

  • We host everything ourselves, no third-party scripts or images

We’re doing newspaper advertising, on the internet. For a hundred years, newspapers put an ad on the page, some folks would see it, and advertisers would pay for this. This is our model.

So much ad tech has been built to track users. Following them across the web, from site to site, showing the same ads and gathering data about them. Then retailers sell your purchase data to try and attribute sales to advertising. Now there is an industry in doing fake ad clicks and other scams, which leads the ad industry to track you even more intrusively to know more about you. The current advertising industry is in a vicious downward spiral.

As developers, we understand the massive downsides of the current advertising industry. This includes malware, slow site performance, and huge databases of your personal data being sold to the highest bidder.

The trend in advertising is to have larger and larger ads. They should run before your content, they should take over the page, the bigger, weirder, or flashier the better.

We opt out
  • We don’t store personal information about you.

  • We only keep track of views and clicks.

  • We don’t build a profile of your personality to sell ads against.

  • We only show high quality ads from companies that are of interest to developers.

We are running a single, small, unobtrusive ad on documentation pages. The products should be interesting to you. The ads won’t flash or move.

We run the ads we want to have on our site, in a way that makes us feel good.

Additional details
  • We have additional documentation on the technical details of our advertising including our Do Not Track policy and our use of analytics.

  • We have an advertising FAQ written for advertisers.

  • We have gone into more detail about our views in our blog post about this topic.

  • Eric Holscher, one of our co-founders talks a bit more about funding open source this way on his blog.

  • After proving our ad model as a way to fund open source and building our ad serving infrastructure, we launched the EthicalAds network to help other projects be sustainable.

Join us

We’re building the advertising model we want to exist. We hope that others will join us in this mission:

  • If you’re a developer, talk to your marketing folks about using advertising that respects your privacy.

  • If you’re a marketer, vote with your dollars and support us in building the ad model we want to exist. Get more information on what we offer.

Community Ads

There are a large number of projects, conferences, and initiatives that we care about in the software and open source ecosystems. A large number of them operate like we did in the past, with almost no income. Our Community Ads program will highlight some of these projects.

There are a few qualifications for our Community Ads program:

  • Your organization and the linked site should not be trying to entice visitors to buy a product or service. We make an exception for conferences around open source projects if they are run not for profit and soliciting donations for open source projects.

  • A software project should have an OSI approved license.

  • We will not run a community ad for an organization tied to one of our paid advertisers.

We’ll show 10% of our ad inventory each month to support initiatives that we care about. Please complete an application to be considered for our Community Ads program.

Opting Out

We have added multiple ways to opt out of the advertising on Read the Docs.

  1. You can go completely ad-free by becoming a Gold member or a Supporter. Additionally, Gold members may remove advertising from their projects for all visitors.

  2. You can opt out of seeing paid advertisements on documentation pages:

    • Go to the drop-down user menu in the top right of the Read the Docs dashboard and click Settings (https://readthedocs.org/accounts/edit/).

    • On the Advertising tab, you can deselect See paid advertising.

    You will still see community ads for open source projects and conferences.

  3. Project owners can also opt out of paid advertisements for their projects. You can change these options:

    • Go to your project page (/projects/<slug>/)

    • Go to Admin > Advertising

    • Change your advertising settings

  4. If you are part of a company that uses Read the Docs to host documentation for a commercial product, we offer Read the Docs for Business that offers a completely ad-free experience, additional build resources, and other great features like CDN support and private documentation.

  5. If you would like to completely remove advertising from your open source project, but our commercial plans don’t seem like the right fit, please get in touch to discuss alternatives to advertising.

Advertising Details

Read the Docs largely funds our operations and development through advertising. However, we aren’t willing to compromise our values, document authors, or site visitors simply to make a bit more money. That’s why we created our ethical advertising initiative.

We get a lot of inquiries about our approach to advertising which range from questions about our practices to requests to partner. The goal of this document is to shed light on the advertising industry, exactly what we do for advertising, and how what we do is different. If you have questions or comments, send us an email or open an issue on GitHub.

Other ad networks’ targeting

Some ad networks build a database of user data in order to predict the types of ads that are likely to be clicked. In the advertising industry, this is called behavioral targeting. This can include data such as:

  • sites a user has visited

  • a user’s search history

  • ads, pages, or stories a user has clicked on in the past

  • demographic information such as age, gender, or income level

Typically, getting a user’s page visit history is accomplished by the use of trackers (sometimes called beacons or pixels). For example, if a site uses a tracker from an ad network and a user visits that site, the site can now target future advertising to that user – a known past visitor – with that network. This is called retargeting.

Other ad predictions are made by grouping similar users together based on user data using machine learning. Frequently this involves an advertiser uploading personal data on users (often past customers of the advertiser) to an ad network and telling the network to target similar users. The idea is that two users with similar demographic information and similar interests would like the same products. In ad tech, this is known as lookalike audiences or similar audiences.

Understandably, many people have concerns about these targeting techniques. The modern advertising industry has built enormous value by centralizing massive amounts of data on as many people as possible.

Our targeting details

Read the Docs doesn’t use the above techniques. Instead, we target based solely upon:

  • Details of the page where the advertisement is shown including:

    • The name, keywords, or programming language associated with the project being viewed

    • Content of the page (eg. H1, title, theme, etc.)

    • Whether the page is being viewed from a mobile device

  • General geography

    • We allow advertisers to target ads to a list of countries or to exclude countries from their advertising. For ads targeting the USA, we also support targeting by state or by metro area (DMA specifically).

    • We geolocate a user’s IP address to a country when a request is made.
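The targeting rules above amount to simple filters on page content and geography. A minimal sketch of that kind of check might look like this (the field names and structure are assumptions for illustration, not Read the Docs' actual ad server code):

```python
def ad_matches(ad, request_country, page_keywords):
    """Decide whether an ad may be shown, based only on the page's
    keywords and the request's country -- no user profile involved.
    """
    # Country include list: if present, the request must come from one of them
    include = ad.get("include_countries")
    if include and request_country not in include:
        return False
    # Country exclude list: drop the ad for excluded geographies
    if request_country in ad.get("exclude_countries", set()):
        return False
    # Keyword targeting: ad keywords must overlap the page's keywords
    keywords = ad.get("keywords")
    if keywords and not keywords & page_keywords:
        return False
    return True

ad = {"include_countries": {"US", "CA"}, "keywords": {"python"}}
print(ad_matches(ad, "US", {"python", "django"}))  # True
print(ad_matches(ad, "FR", {"python"}))            # False
```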

Where ads are shown

We can place ads in:

  • the sidebar navigation

  • the footer of the page

  • on search result pages

  • a small footer fixed to the bottom of the viewport

  • on 404 pages (rare)

We show no more than one ad per page so you will never see both a sidebar ad and a footer ad on the same page.

Do Not Track Policy

Read the Docs supports Do Not Track (DNT) and respects users’ tracking preferences. For more details, see the Do Not Track section of our privacy policy.

Ad serving infrastructure

Our entire ad server is open source, so you can inspect how we’re doing things. We believe strongly in open source, and we practice what we preach.

Analytics

Analytics are a sensitive enough issue that they require their own section. In the spirit of full transparency, Read the Docs uses Google Analytics (GA). We go into a bit of detail on our use of GA in our Privacy Policy.

GA is a contentious issue inside Read the Docs and in our community. Some users are very sensitive and privacy conscious to usage of GA. Some authors want their own analytics on their docs to see the usage their docs get. The developers at Read the Docs understand that different users have different priorities and we try to respect the different viewpoints as much as possible while also accomplishing our own goals.

We have taken steps to address some of the privacy concerns surrounding GA. These steps apply both to analytics collected by Read the Docs and when authors enable analytics on their docs.

  • Users can opt-out of analytics by using the Do Not Track feature of their browser.

  • Read the Docs instructs Google to anonymize IP addresses sent to them.

  • The cookie set by GA is a session (non-persistent) cookie rather than the default 2 years.

  • Project maintainers can completely disable analytics on their own projects. Follow the steps in Disabling Google Analytics on your project.
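The Do Not Track opt-out above boils down to checking a request header before loading any analytics. A hypothetical helper (not Read the Docs' actual implementation) could look like:

```python
def analytics_allowed(headers):
    """Return False when the browser sends Do Not Track (DNT: 1),
    so the analytics snippet is never loaded for that request.
    Illustrative sketch only.
    """
    return headers.get("DNT", "0") != "1"

print(analytics_allowed({"DNT": "1"}))  # False -> skip analytics
print(analytics_allowed({}))            # True  -> analytics may load
```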

Why we use analytics

Advertisers ask us questions that are easily answered with an analytics solution like “how many users do you have in Switzerland browsing Python docs?”. We need to be able to easily get this data. We also use data from GA for some development decisions such as what browsers to support (or not) or how much usage a particular page or feature gets.

Alternatives

We are always exploring our options with respect to analytics. There are alternatives but none of them are without downsides. Some alternatives are:

  • Run a different cloud analytics solution from a provider other than Google (eg. Parse.ly, Matomo Cloud, Adobe Analytics). We priced a couple of these out based on our load and they are very expensive. They also just substitute one problem of data sharing with another.

  • Send data to GA (or another cloud analytics provider) on the server side and strip or anonymize personal data such as IPs before sending them. This would be a complex solution and involve additional infrastructure, but it would have many advantages. It would result in a loss of data on “sessions” and new vs. returning visitors which are of limited value to us.

  • Run a local JavaScript based analytics solution (eg. Matomo community). This involves additional infrastructure that needs to be always up. Frequently there are very large databases associated with this. Many of these solutions aren’t built to handle Read the Docs’ load.

  • Run a local analytics solution based on web server log parsing. This has the same infrastructure problems as above while also not capturing all the data we want (without additional engineering) like the programming language of the docs being shown or whether the docs are built with Sphinx or something else.
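The second alternative above mentions anonymizing IPs server-side before sending data to an analytics provider. A sketch of that masking step, using the standard library (the mask widths /24 and /48 are assumptions, not a stated policy):

```python
import ipaddress

def anonymize_ip(ip):
    """Zero out the host portion of an IP address before it leaves
    our servers: /24 for IPv4, /48 for IPv6. Illustrative sketch.
    """
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("203.0.113.42"))  # 203.0.113.0
```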

Ad blocking

Ad blockers fulfill a legitimate need to mitigate the significant downsides of advertising from tracking across the internet, security implications of third-party code, and impacting the UX and performance of sites.

At Read the Docs, we specifically didn’t want those things. That’s why we built our EthicalAds initiative with only relevant, unobtrusive ads that respect your privacy and don’t do creepy behavioral targeting.

Advertising is the single largest source of funding for Read the Docs. To keep our operations sustainable, we ask that you either allow our EthicalAds or go ad-free.

Allowing EthicalAds

If you use AdBlock or AdBlockPlus and you allow acceptable ads or privacy-friendly acceptable ads then you’re all set. Advertising on Read the Docs complies with both of these programs.

If you prefer not to allow acceptable ads but would consider allowing ads that benefit open source, please consider subscribing to either the wider Open Source Ads list or simply the Read the Docs Ads list.

Note

Because of the way Read the Docs is structured, with docs hosted on many different domains, adding a normal ad block exception will only allow ads on that single domain, not on Read the Docs as a whole.

Going ad-free

Users can go completely ad-free when logged in by becoming a Gold member or a Supporter. Gold members may also completely remove advertising for all visitors to their projects. Thank you for supporting Read the Docs.

Statistics and data

It can be really hard to find good data on ad blocking. In the spirit of transparency, here is the data we have on ad blocking at Read the Docs.

  • 32% of Read the Docs users use an ad blocker

  • Of those, a little over 50% allow acceptable ads

  • Read the Docs users running ad blockers click on ads at about the same rate as those not running an ad blocker.

  • Comparing with our server logs, roughly 28% of our hits did not register a Google Analytics (GA) pageview due to an ad blocker, privacy plugin, disabling JavaScript, or another reason.

  • Of users who do not block GA, about 6% opt out of analytics on Read the Docs by enabling Do Not Track.

Sponsors of Read the Docs

Running Read the Docs isn’t free, and the site wouldn’t be where it is today without generous support of our sponsors. Below is a list of all the folks who have helped the site financially, in order of the date they first started supporting us.

Current sponsors

  • AWS - They cover all of our hosting expenses every month. This is a pretty large sum of money, averaging around $5,000/mo.

  • Cloudflare - Cloudflare is providing us with an enterprise plan of their SSL for SaaS Providers product that enables us to provide SSL certificates for custom domains.

  • Chan Zuckerberg Initiative - Through their “Essential Open Source Software for Science” programme, they fund our ongoing efforts to improve scientific documentation and make Read the Docs a better service for scientific projects.

  • You? (Email us at hello@readthedocs.org for more info)

Past sponsors

Sponsorship Information

As part of increasing sustainability, Read the Docs is testing out promoting sponsors on documentation pages. We have more information about this in our blog post about this effort.

Glossary

dashboard

Main page where you can see all your projects with their build status and import a new project.

flyout menu

Menu displayed on the documentation, readily accessible for readers, containing the list of active versions, links to static downloads, and other useful links. Read more in our Flyout Menu page.

pre-defined build jobs

Commands executed by Read the Docs when performing the build process. They cannot be overwritten by the user.

profile page

Page where you can see the projects of a certain user.

project home

Page where you can access all the features of Read the Docs, from having an overview to browsing the latest builds or administering your project.

project page

Another name for project home.

slug

A unique identifier for a project or version. This value comes from the project or version name, which is reduced to lowercase letters, numbers, and hyphens. You can retrieve your project or version slugs from our API.
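The reduction described above can be sketched in a few lines of Python (a simplified illustration, not the exact algorithm Read the Docs uses):

```python
import re

def make_slug(name):
    """Reduce a project or version name to lowercase letters,
    numbers, and hyphens. Simplified sketch of slug generation.
    """
    slug = name.lower()
    # Collapse any run of other characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

print(make_slug("My Project v2.0"))  # my-project-v2-0
```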

root URL

Home URL of your documentation without the /<lang> and /<version> segments. For projects without custom domains, the one ending in .readthedocs.io/ (for example, https://docs.readthedocs.io as opposed to https://docs.readthedocs.io/en/latest).

user-defined build jobs

Commands defined by the user that Read the Docs will execute when performing the build process.

Google Summer of Code

Warning

Read the Docs will not be participating in the Google Summer of Code in 2020. We hope to return to the program in the future, and appreciate the interest everyone has shown.

Thanks for your interest in Read the Docs! Please follow the instructions in Getting Started; it is a good place to start. Contacting us will not increase your chance of being accepted, but opening pull requests with docs and tests will.

You can see our Projects from previous years for the work that students have done in the past.

Skills

Incoming students will need the following skills:

  • Intermediate Python & Django programming

  • Familiarity with Markdown, reStructuredText, or some other plain text markup language

  • Familiarity with git, or some other source control

  • Ability to set up your own development environment for Read the Docs

  • Basic understanding of web technologies (HTML/CSS/JS)

  • An interest in documentation and improving open source documentation tools!

We’re happy to help you get up to speed, but the more you are able to demonstrate ability in advance, the more likely we are to choose your application!

Getting Started

The Development Installation doc is probably the best place to get going. It will walk you through getting a basic environment for Read the Docs setup.

Then you can look through our Contributing to Read the Docs doc for information on how to get started contributing to RTD.

People who have a history of submitting pull requests will be prioritized in our Summer of Code selection process.

Want to get involved?

If you’re interested in participating in GSoC as a student, you can apply during the normal process provided by Google. We are currently overwhelmed with interest, so we are not able to respond individually to each person who is interested.

Mentors

Currently we have a few folks signed up:

  • Eric Holscher

  • Manuel Kaufmann

  • Anthony Johnson

  • Safwan Rahman

Warning

Please do not reach out directly to anyone about the Summer of Code. It will not increase your chances of being accepted!

Project Ideas

We have written out some loose ideas for projects to work on here. We are also open to any other ideas that students might have.

These projects are sorted by priority. We will consider the priority on our roadmap as a factor, along with the skill of the student, in our selection process.

Collections of Projects

This project involves building a user interface for groups of projects in Read the Docs (Collections). Users would be allowed to create, publish, and search a Collection of projects that they care about. We would also allow for automatic creation of Collections based on a project’s setup.py or requirements.txt.

Once a user has a Collection, we would allow them to do a few sets of actions on them:

  • Search across all the projects in the Collection with one search dialog

  • Download all the project’s documentation (PDF, HTMLZip, Epub) for offline viewing

  • Build a landing page for the collection that lists out all the projects, and could even have a user-editable description, similar to our project listing page.

There are likely other ideas that could be explored with Collections over time.

Integration with OpenAPI/Swagger

Integrate the existing tooling around OpenAPI & Swagger into Sphinx and Read the Docs. This will include building some extensions that generate reStructuredText, and backend Django code that powers the frontend Javascript.

This could include:

  • Building a live preview for testing an API in the documentation

  • Taking a swagger YAML file and generating HTML properly with Sphinx

  • Integration with our existing API to generate Swagger output

Build a new Sphinx theme

Sphinx v2 will introduce a new format for themes, supporting HTML5 and new markup. We are hoping to build a new Sphinx theme that supports this new structure.

This project would include:

  • A large amount of design, including working with CSS & SASS

  • Iterating with the community to build something that works well for a number of use cases

This is not as well defined as the other tasks, so would require a higher level of skill from an incoming student.

Better MkDocs integration

Currently we don’t have as good an integration with MkDocs as we do with Sphinx, and it’s hard to maintain compatibility with new versions.

This project would include:

  • Support the latest version of MkDocs

  • Support downloads (#1939)

  • Write a plugin to allow us to have more control over the build process (#4924)

  • Support search (#1088)
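For the search item, one approach is to build a client-side search index from the rendered HTML after a build. This sketch extracts visible page text with the standard library's `html.parser` and serializes a URL-to-text index as JSON; a real MkDocs integration would walk the build output directory instead of taking pages in memory:

```python
import json
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def build_search_index(pages):
    """Build a JSON search index mapping URL -> extracted page text.

    `pages` is {url: html_source}; purely illustrative input shape.
    """
    index = {}
    for url, html in pages.items():
        parser = TextExtractor()
        parser.feed(html)
        index[url] = " ".join(parser.chunks)
    return json.dumps(index)
```

The JSON blob could then be served alongside the docs and queried from JavaScript in the browser.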

Integrated Redirects

Right now it’s hard for users to rename files. We support redirects, but we don’t create them automatically when files are renamed, and our redirect code is brittle.

We should rebuild how we handle redirects across a number of cases:

  • Detecting a file change in git/hg/svn and automatically creating a redirect

  • Support redirecting an entire domain to another place

  • Support redirecting versions

There will also be a good number of things that spawn from this, including version aliases and other related concepts, if this task doesn’t take the whole summer.
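For the rename-detection piece, Git can already report renames: `git diff --name-status -M` emits lines like `R100<TAB>old.rst<TAB>new.rst`. This sketch turns that output into a simple old-path-to-new-path mapping; translating source paths into built documentation URLs, and persisting the redirects, is left out, and the function name is illustrative:

```python
def redirects_from_git(diff_output):
    """Build redirect rules from `git diff --name-status -M` output.

    Rename lines have the form "R<score>\told_path\tnew_path"; all
    other status lines (A/M/D) are ignored. A sketch only -- mapping
    source files to built URLs is a separate step.
    """
    redirects = {}
    for line in diff_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].startswith("R"):
            old_path, new_path = parts[1], parts[2]
            redirects[old_path] = new_path
    return redirects
```

Run after each build against the previously built commit, this would let the platform create redirects automatically instead of asking users to configure them by hand.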

Improve Translation Workflow

Currently we have our documentation & website translated on Transifex, but we don’t have a management process for it. This means that translations will often sit for months before making it back into the site and being available to users.

This project would include putting together a workflow for translations:

  • Communicate with existing translators and see what needs they have

  • Help formalize the process that we have around Transifex to make it easier to contribute to

  • Improve our tooling so that integrating new translations is easier

Support for additional build steps for linting and testing

Currently we only build documentation on Read the Docs, but we’d also like to add additional build steps that let users perform more actions. This would likely take the form of wrapping some of the existing Sphinx builders, and giving folks a nice way to use them inside Read the Docs.

It would be great to have wrappers for the following as a start:

The goal would also be to make it quite easy for users to contribute third party build steps for Read the Docs, so that other useful parts of the Sphinx ecosystem could be tightly integrated with Read the Docs.
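A third-party build step could be as simple as a named command run in the build environment. The registry below is a sketch under assumptions: the step names, the command layout, and the `run_step` helper are all hypothetical, not Read the Docs configuration:

```python
import subprocess
import sys

# Hypothetical registry of extra build steps. A contributed step would
# just add an entry mapping a name to the command that runs it.
BUILD_STEPS = {
    "linkcheck": [sys.executable, "-m", "sphinx", "-b", "linkcheck",
                  ".", "_build/linkcheck"],
}

def run_step(command):
    """Run one wrapped build step; True if it exited successfully."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0
```

Keeping steps this declarative is what would make it easy for the community to contribute wrappers for other parts of the Sphinx ecosystem.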

Additional Ideas

We have some medium-sized projects sketched out in our issue tracker with the tag Feature. Looking through these issues is a good place to start. You might also look through our milestones on GitHub, which outline the larger tasks that we’re hoping to accomplish.

Projects from previous years

Thanks

This page was heavily inspired by Mailman’s similar GSOC page. Thanks for the inspiration.