Infrastructure

This page gives an overview of the conda-forge infrastructure, that is, an account of the various pieces maintained by the conda-forge contributors as well as third-party providers that collectively form the basis for the operation of conda-forge.

We start with the different Github repositories maintained by conda-forge itself, then describe the administrative commands available for use in those repositories, the so-called Admin web services, followed by the CI services, i.e. the third-party providers used for building and maintaining packages together. After that, we turn to a description of some aspects of the build environment for packages in Compilers and Runtimes, together with details about the upload to the package server.

Then, we see how the process of building a package interacts with different parts of the infrastructure.

We close out with a brief listing of involved entities.

Repositories

Staged-recipes

This repository is the gateway to conda-forge and where users can submit new recipes which, once reviewed and accepted, will generate a new feedstock and team. You can find the detailed guide for submitting new package recipes in The staging process.

Anatomy of staged-recipes

recipes/ contains one or more subdirectories with user-submitted recipes. Most PRs submit only one recipe at a time, but if several subdirectories are present, the build_all.py script will build them in the right order so that dependencies are satisfied.

.ci_support contains the conda-build YAML configuration files but, unlike in feedstocks, you will also find some scripts:

  • build_all.py: Calls conda-build in the right (topologically sorted) order.
  • compute_build_graph.py: Supports build_all.py by providing the job graph with all the submitted recipes.

The YAML files included in .ci_support are minimal and not rendered like the ones you find in feedstocks. Instead, conda-build will take these and combine them with the pinnings from conda-forge-pinning at runtime. Also note that staged-recipes only builds for x64. Support for additional architectures can only be done once a feedstock has been provided.

  • Linux: linux64.yaml plus the CUDA (10.2, 11.0, 11.1 and 11.2) variants.
  • macOS: osx64.yaml.
  • Windows: win64.yaml.

The directory .scripts contains roughly the same shell scripts that would be used in a feedstock for the CI pipelines. However, since staged-recipes does not support rerendering, these are kept in sync manually and it is common to see some differences.

Workflows

The main job run on staged-recipes is the conda-build job that runs on every PR (and push to main) to check whether the recipes build packages correctly. These jobs run on Azure Pipelines defined in .azure-pipelines/.

The actual creation of the feedstock is run in conda-forge/admin-requests.

Additional workflows help users set up their recipes correctly. They react to events in PRs:

  • automate-review-labels: Updates PR labels to streamline reviews and requests for help.
  • linter: General recipe linting plus some staged-recipes specific checks, like not editing the examples.

External services connect to staged-recipes too:

  • The @conda-forge-admin bot (deployed at webservices) will lint and provide hints in PRs based on the contents of the recipe.

Feedstocks

  • ⚙️ Deployed in Github repositories
  • 🔒 Has access to Azure Pipelines, Github Actions, Anaconda.org (cf-staging)
  • 🔐 Might have access to Travis CI, Cirun via admin-requests (WIP)
  • 🤖 Integrated with admin-migrations, admin-requests, the autotick-bot, and webservices.

Conda-forge has thousands of feedstocks. Each feedstock hosts a recipe plus the required pipelines, supporting scripts and configuration metadata.

The contents of a feedstock are well specified. Only two locations are user-managed:

  • recipe/: Contains the conda-build instructions to build packages. At a minimum it needs a meta.yaml file, and this is also where the optional conda_build_config.yaml usually goes (a minimal sketch appears after the warning below).
  • conda-forge.yml: This is the feedstock configuration file.
warning

You should never manually edit files not listed above! Changes will be overridden in the next feedstock rerender.
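
For orientation, here is a minimal sketch of what recipe/meta.yaml can look like for a simple Python project. The package name, URL, checksum and maintainer handle below are placeholders, not a real recipe:

{% set version = "1.2.3" %}

package:
  name: mypackage
  version: {{ version }}

source:
  url: https://pypi.org/packages/source/m/mypackage/mypackage-{{ version }}.tar.gz
  sha256: 0000000000000000000000000000000000000000000000000000000000000000  # placeholder checksum

build:
  number: 0
  script: {{ PYTHON }} -m pip install . -vv

requirements:
  host:
    - python
    - pip
  run:
    - python

test:
  imports:
    - mypackage

about:
  home: https://example.com/mypackage
  license: MIT
  license_file: LICENSE
  summary: Placeholder example package

extra:
  recipe-maintainers:
    - some-github-handle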

Combining these two sources with some external components, conda-smithy will generate (render) the contents of the feedstock. Many of the directories are named the way they are because that is what the external service (e.g. Azure) expects. However, some conda-smithy-specific directories are worth discussing:

  • .ci_support/: Contains the rendered conda_build_config.yaml files, passed to conda-build via the -m flag. Each file here corresponds to one job in the CI build matrix (a simplified sketch follows this list).
  • .ci_support/migrations/: Special YAML files that instruct conda-smithy how to update the .ci_support/*.yaml files. These migration files are usually put here by the autotick-bot infrastructure, and removed once the migration is considered finished.
  • .scripts/: Common logic and code supporting the steps you can find in the CI pipelines and local debugging tools.
  • build-locally.py: A Python script to debug recipes on your machine, roughly equivalent to what's done in the CI pipelines.
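
To illustrate the first item in the list above, a rendered .ci_support file is just a plain conda_build_config.yaml variant. The keys and values below are illustrative only; real files contain more entries and change as migrations move through:

channel_sources:
  - conda-forge
channel_targets:
  - conda-forge main
python:
  - 3.12.* *_cpython
target_platform:
  - linux-64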
Learn more (WIP)
  • Rerendering a feedstock
  • Recommended workflow

feedstocks monorepo

A single repository containing all feedstocks as submodules.

feedstock-outputs

This repository is a registry of feedstock names and the packages (artifacts) they produce.

Its main purpose is to provide an allow-list for the validation server to prevent malicious cross-feedstock builds, although it's also an informative map of feedstocks <-> packages that is exposed in the packages section of the website.

cdt-builds

This special repository builds Core Dependency Tree packages for conda-forge (Linux only). It doesn't use the feedstock automated machinery. Instead, it has its own Azure Pipelines workflow and a well-documented README.

msys2-recipes

This is a fork of the old community recipes repository at Anaconda, which includes the msys2 recipes under the msys2/ directory. Note also the supporting scripts in the common-scripts/ folder.

Website

The current conda-forge.org is a statically generated website published to Github Pages.

The documentation is built with Docusaurus and the source files are located in the docs/ directory of the repository.

If you find any typos, errors, unclear explanations, or new topics that can be covered, you can suggest changes to the documentation. For more details, please refer to Improve the documentation.

In addition to the static documentation, the website also offers information on the current status of conda-forge as well as a mapping of packages to feedstocks.

Metadata repositories

These are repositories that primarily hold metadata used by other parts of the conda-forge ecosystem.

conda-forge pinning

Hosts the global pinnings for conda-forge, and the ongoing migrations.

Package-wide dependency pins are defined in conda_build_config.yaml in the conda-forge/conda-forge-pinning-feedstock.
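
For illustration, entries in that file map package names to the version series that feedstocks build against. The excerpt below is hypothetical and does not reflect the current pins:

libxml2:
  - "2.12"
numpy:
  - "1.26"
zlib:
  - "1.3"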

For more information on conda-forge wide package pins, please refer to Globally pinned packages.

Please open a PR and/or an issue there if you think a pin needs to be advanced. For more information on updating globally pinned packages, please refer to Updating package pins.

conda-forge-repodata-patches

This repository creates the repodata.json patches used by Anaconda.org to amend the metadata coming from the published packages.

conda-forge-ci-setup

This special feedstock provides a package that defines the logic to install and configure a common CI setup across providers.

regro/cf-graph-countyfair

This is the graph data used by autotick-bot.

The logic to build the graph is provided by cf-scripts.

docker-images

This repository builds the Docker images used to provide a unified system on all Linux builds.

Code repositories

These repositories hold programs and other code that defines behavior. However, their actions are usually not triggered here; the code is instead used by other parts of the conda-forge ecosystem.

Smithy

This is the main feedstock creation and maintenance tool.

Most of its usage is automated by our infrastructure:

  • Feedstock creation and services registration at staged-recipes
  • Regeneration (rerendering), linting and hinting in PRs done by conda-forge-admin on webservices

However, you can also use it locally or in your own forge-like deployments. For local debugging, you will find these commands useful:

  • conda-smithy rerender
  • conda-smithy recipe-lint

Smithy contains maintenance code for conda-forge, which is used by the conda-smithy command line tool and the Admin web services.

conda-forge/conda-smithy is the right repository to report bugs for:

  • The rerendering process
  • The recipe linter
  • CI support utils

conda-smithy also contains the command line tool that you should use if you rerender manually from the command line (see Rerendering feedstocks).

Smithy can be used beyond conda-forge's purposes. For example, it can be used to set up self-hosted Azure agents for non-conda-forge infrastructures. (You could also consider using Azure virtual machine scale set agents, which could be less expensive to run than permanently active agents.)

Web services

The Heroku app providing the conda-forge web services lives in conda-forge/conda-forge-webservices. Please note that the code logic provided by the app is in the Smithy repository.

Bugs or suggestions regarding the service functionality should therefore be opened in conda-forge/conda-smithy's bug tracker.

regro/cf-scripts

The code and logic behind autotick-bot.

Automated maintenance

These components perform actions in automated ways, either triggered by a specific event or continuously as part of a loop.

admin-migrations

This repository hosts workflows that run 24/7. It provides an automation loop to which maintenance tasks can be added. Its main user is the core team.

admin-requests

This repository hosts workflows that mainly run when triggered by a user-initiated action. This is usually done via a PR that, once approved, is merged and triggers the requested action (mark a package as broken, archive a feedstock, etc).

It also does the job of creating new feedstocks for recipes that have been merged in conda-forge/staged-recipes. The create_feedstocks workflow runs several times per hour to create the new feedstock repositories on the conda-forge organization. The core logic is defined in the Python script .github/workflows/scripts/create_feedstocks.py.

autotick-bot

note

The older repo regro/autotick-bot is no longer in use; the bot now runs directly in regro/cf-scripts.

webservices

This web application powers several services, like:

  • the @conda-forge-admin bot and its @conda-forge-admin, please ... commands
  • the cf-staging to conda-forge validation (plus copy)
  • status monitoring

Admin web services

conda-forge is running a webservice on Heroku called conda-forge-webservices.

The following services are run by default on a feedstock:

  • It will lint the recipes in the PRs and report back whether the recipe is in excellent condition or not.
  • When maintainers are added to a recipe, each of the maintainers will be added to the team and given push access to the feedstock.

The webservice also listens to issues and PR comments, so that you can ask for the following services to be done:

@conda-forge-admin, please rerender

Entering the above phrase in a PR of a feedstock will rerender the feedstock and push the changes to your PR. Make sure to tick the Allow edits from maintainers checkbox at the bottom of the right sidebar of the PR. If you enter this phrase in an issue comment, the bot will create a new pull request containing the requested rerender.

@conda-forge-admin, please add noarch: python

Entering the above phrase in a PR or an issue of a feedstock will add noarch: python to the build and rerender the feedstock for you.
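
In meta.yaml terms, noarch: python is a key in the build section; an illustrative excerpt (the build number is an example value):

build:
  noarch: python
  number: 0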

@conda-forge-admin, please lint

Entering the above phrase in a PR of a feedstock will lint the PR again.

@conda-forge-admin, please update team

Entering the above phrase in an issue will update the team for the feedstock. This is usually done automatically.

@conda-forge-admin, please restart ci

Entering this command in the PR of a feedstock or staged-recipes will close and then open the PR, causing all of the CI builds to restart.

@conda-forge-admin, please ping team

Entering this command in the PR of a feedstock or staged-recipes will have the admin bot @-mention the team associated with the repo. This command can be useful for people who are not yet members of conda-forge and so cannot @-mention the staged-recipes team for PR reviews.

@conda-forge-admin, please ping conda-forge/

Entering this command in the PR of a feedstock or staged-recipes will have the admin bot @-mention the respective team. This command can be useful for people who are not yet members of conda-forge and so cannot @-mention someone due to the general GitHub limitations.

@conda-forge-admin, please rerun bot

Entering this command in a PR comment will add the bot-rerun label to that PR. This label will cause the auto-tick bot that issues migration and version updates to close the current PR and reissue it. Adding this label to non-bot issued PRs will have no effect.

@conda-forge-admin, please add bot automerge

Entering this command in the title or comment of an issue will instruct the admin bot to open a PR enabling the automatic merging of passing PRs from the auto-tick bot. This functionality is currently experimental. You can find more details here. Please open an issue on regro/cf-scripts for any feedback, bugs, and/or questions!

@conda-forge-admin, please remove bot automerge

Entering this command in the title or comment of an issue will instruct the admin bot to open a PR to disable automerge, undoing the please add bot automerge command.

@conda-forge-admin, please add user @username

Entering the above phrase in the title of an issue on a feedstock will make a PR that adds the given user to the feedstock. A maintainer or member of core can then merge this PR to add the user. Please do not modify this PR or adjust the commit message. This PR is designed to skip building the package.

Using this command

This is not the recommended way to start to help with a feedstock. If you are interested in helping with a particular recipe, it is better to start by submitting a PR with a new version or a fix. This way, you can get feedback on your work and learn about some of the historical context of the feedstock.

Once you have established a working relationship with the maintainers, you can ask them to add you to the feedstock team. They can then use this command to add you to the team. There isn't any official requirement for adding new maintainers, so it may take a while for consensus to be reached. Do not let this discourage you from contributing!

PRs are free to be opened by anyone!!! Thank you for your time and effort!!!

@conda-forge-admin, please update version

Entering the above phrase in the title of an issue on a feedstock will request the bot to check if there are any new versions available. If there are, it will open a PR with the needed changes. Note that the bot might start by opening a PR with only partial changes. The rest of the contents will be added in a subsequent commit after a few minutes.

CI build services

Here we describe the CI services that conda-forge uses to build packages, along with some common issues.

Azure Pipelines

Azure is used to build packages for Windows (native x86_64), macOS (native x86_64), Linux (native x86_64, emulated ARMv8 and IBM Power8+). The build queue on Azure is substantially larger than on all the other providers. Azure builds have a maximum duration of 6 hours.

To see all builds on Azure, visit https://dev.azure.com/conda-forge/feedstock-builds/_build.

Restarting builds

Presently, Azure does not sync GitHub users, so to restart a build you need to use the GitHub checks interface. If that doesn't work, a close/open of the PR will kick off a new build. You can also use the web services command @conda-forge-admin, please restart ci.

TravisCI (IBM Power 8+, ARM)

TravisCI is used to build packages for IBM Power 8+ and ARM. After merging a staged-recipes pull request, it might be necessary to force sync your repositories in TravisCI to see the reload and cancel buttons. To do this please visit https://app.travis-ci.com/account/repositories and click the "Sync accounts" button.

Enabling Travis

TravisCI should only be needed to build recipes on native Linux aarch64 and ppc64le.

Enable a build by adding the corresponding line from the following to conda-forge.yml in the root of the feedstock.

provider:
  osx: travis
  linux_ppc64le: travis
  linux_aarch64: travis

For IBM Power 8+ and/or ARM builds, add the name of your feedstock to the list here via a pull request.

GitHub Actions

We use GitHub Actions to rerender feedstocks and also to run our pull request automerge service. We do not currently support builds on GitHub Actions.

Automerge

The automerge service uses the GitHub action in this repo. This action runs out of a Docker container on the prod tag. See the repo README.md for more details. PRs are automatically merged if they satisfy either of the two following sets of conditions:

  1. are from the regro-cf-autotick-bot, have [bot-automerge] in the title, all statuses are passing, and the feedstock allows automerge
  2. have the automerge label and all statuses are passing.

For PRs from the regro-cf-autotick-bot, it can be useful to remove the [bot-automerge] slug from the PR title if you are making edits to the PR.

Rerendering

The rerendering service is triggered by the Heroku app. It uses the GitHub action in this repo. This action runs out of a Docker container on the prod tag. See the repo README.md for more details.

Skipping CI builds

To skip a CI build for a given commit, put [ci skip] ***NO_CI*** in the commit message.


Third-party Use of Our CI Services

Due to its stature in the open-source community, conda-forge has enhanced access to certain CI services. This access is a community resource entrusted to conda-forge for use in building packages. We thus cannot support third-party or "off-label" CI jobs in our feedstocks on any of our CI services. If we find such use, we will politely ask the maintainers to rectify the situation. We may take more serious actions, including archiving feedstocks or removing maintainers from the organization, if the situation cannot be rectified.

Compilers and Runtimes

conda-forge builds and maintains its own set of compilers for various languages and/or systems (e.g., C, Fortran, C++, CUDA, etc.). These are used in all of our CI builds to build essentially all artefacts published by conda-forge.

This compiler infrastructure has a critical role beyond building everything, which is to ensure that packages stay compatible with each other. This is due to how compiled packages have a so-called Application Binary Interface (ABI), and how changes in the compiler infrastructure may break this ABI, leading to crashes, miscalculations, etc. Generally speaking, using a consistent compiler version greatly reduces the risk of ABI breaks.

Compilers generally strive to maintain ABI-compatibility across versions, meaning that combining artefacts for the same target produced by different versions of the same compiler will work together without issue. Due to the nature of the ABI (i.e. a vast interface between software and hardware, with innumerable corner cases), it still happens that unintentional changes for some specific aspect are introduced across compiler versions, though in practice this does not lead to wide-spread issues.

In contrast, when compilers do intentionally change the ABI (as MSVC did with each release before the vc14 series currently covering VS2015-VS2022), every compiled package needs to be rebuilt for that new ABI, and cannot be mixed with builds for the old ABI. While less likely nowadays, in principle it's also possible that a major infrastructural overhaul in the compiler stack similarly forces a complete rebuild.

Such large-scale changes, which require more or less all of conda-forge to be rebuilt, take a lot of effort. Thankfully, in recent years such full rebuilds have not been necessary and we have managed to do less disruptive compiler upgrades.

However, large-scale ABI breaks remain a possibility (e.g. MSVC is planning a vNext after vc14), and so we keep our policies for such a scenario in place. While we do not have any formal promises of support for a generation of ABI-compatible compilers, we have historically maintained them according to the following (non-binding) principles.

  • The authoritative source of the current compilers and versions for various languages and platforms is the conda_build_config.yaml in the conda-forge/conda-forge-pinning-feedstock as described in Globally pinned packages.
  • We provide no support of any kind in terms of the long-term stability/support of a given compiler generation.
  • We upgrade them in an ad-hoc manner on a periodic basis as we have the time and energy to do so. Note that because of the way we enforce runtime constraints, these compiler upgrades will not break existing packages. However, if you are using the compilers outside of conda, then you may find issues.
  • We generally provide notice in the form of an announcement when an ABI-incompatible compiler change is going to happen. Note that these changes take a bit of time to complete, so you will generally have time to prepare should you need to.
  • Some of the criteria we think about when considering a compiler migration include:
    • the degree of disruption to the ecosystem,
    • the amount of work for the core team,
    • the amount of time it will cost our (volunteer) feedstock maintainers.

These compiler generations may or may not have some unofficial names for our internal use (e.g. comp7). We note again that the existence of these names does not imply any level of support or stability for the compilers that form the given stack.

For the cases that do not require a complete rebuild of conda-forge (i.e. if the ABI of a new compiler remains compatible, up to rare corner cases), we can just increase the version in our global pinning, and it will slowly roll out to the ecosystem as feedstocks get rerendered.

For such ABI-compatible upgrades, similar but looser principles apply:

  • The pins are similarly defined in the global pinning, see Globally Pinned Packages.
  • We provide no support of any kind in terms of the long-term availability of a given compiler version.
  • We generally provide notice in the form of an announcement when a compiler is going to be upgraded.
  • Without promising any timelines, our compilers on Linux and OSX are normally very recent; on Windows, we generally use the last supported VS version.

Despite the lack of explicit support, we try to keep the compilers in their various versions working also outside of conda-forge, and even provide an easy way to install them (through the compilers feedstock).

More specifically, each compiler uses an activation package that makes the difference between it being merely present in a build environment, and it being used by default. These will be installed when using {{ compiler('xyz') }} in meta.yaml, where 'xyz' is one of 'c', 'cxx', 'fortran', 'cuda', 'rust', 'go-cgo', 'go-nocgo'.
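
For example, a recipe compiling C and C++ code requests the default compilers (and their activation packages) in its build requirements. Illustrative meta.yaml excerpt:

requirements:
  build:
    - {{ compiler('c') }}
    - {{ compiler('cxx') }}
    - {{ compiler('fortran') }}  # only if the project also contains Fortran sources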

Our default compiler stack is made up very differently on each platform; each platform has its own default compiler, with its own set of feedstocks that provide them. Due to historical reasons (the way compilers are integrated with their OS, and the amount of software written in them, etc.), the most impactful languages are C & C++ (though Fortran is considered part of the default, not least because GCC compiles all three).

Linux (GCC):

OSX (Clang):

Windows (MSVC):

There exists an alternative, MinGW-based, compiler stack on Windows, which is available with a m2w64_ prefix (e.g. {{ compiler('m2w64_c') }}). However, it is falling out of use now that most projects will natively support compilation also with MSVC, in addition to several complications arising from mixing compiler stacks.

Additionally, there is a possibility to use clang as a compiler on Linux & Windows:

Aside from the main C/C++/Fortran compilers, these are the feedstocks for the other compilers:

To upgrade the compiler version of our default compilers in the global pinning for Linux or OSX, ensure that the respective above-mentioned feedstocks have been rebuilt for the new major version, that all interrelated versions are lifted at the same time, and obviously that the compilers work (e.g. by testing them on some feedstocks by specifying the new version through the feedstock-local conda_build_config.yaml). You should also check the compiler release notes for warnings about ABI incompatibilities, and mention any such notices in the discussion about the upgrade.

For Windows, we stay on older compilers for longer, because using a newer toolchain would force everyone wanting to locally develop with conda-forge artefacts to use a toolchain that's at least as new. You can find more details about this topic in this issue about updating to the vc142 toolchain.

CentOS sysroot for linux-* Platforms

We currently repackage the sysroot from the appropriate version of CentOS for use with our compilers. These sysroot files are available in the sysroot_linux-* packages. These packages have version numbers that match the version of glibc they package. These versions are 2.12 for CentOS 6 and 2.17 for CentOS 7.

For gcc/gxx/gfortran versions prior to 8.4.0 on ppc64le and 7.5.0 on aarch64/x86_64, we had been building our own versions of glibc. This practice is now deprecated in favor of the CentOS-based sysroots. Additionally, as of the same compiler versions above, we have removed the cos* part of the sysroot path. The new sysroot path has in it simply conda as opposed to conda_cos6 or conda_cos7.

Output Validation and Feedstock Tokens

As of writing, anaconda.org does not support generating API tokens that are scoped to allow uploads for some packages but not others. In order to secure feedstock uploads, so that, e.g., the maintainers of the numpy feedstock cannot push a python package, we use a package staging process and issue secret tokens, unique to each feedstock. This process works as follows.

  1. When a CI job on a feedstock is building packages to be uploaded to anaconda.org, it first uploads them to a staging channel, cf-staging.
  2. Then the feedstock CI job makes an API call to our admin webservices server with its secret token and some information about the package it is trying to upload.
  3. The webservices server validates the secret token, the integrity of the package, and that the package is allowed for the given feedstock.
  4. If all of the validation passes, the package is then copied to the conda-forge channel.

We attempt to report errors in this process to users via comments on commits/issues in the feedstocks. Sometimes these reports fail. If you think you are having trouble with uploads, make sure to check/try the following things:

  • Ensure that conda_forge_output_validation: true is set in your conda-forge.yml (see the snippet after this list).
  • Retry the package build and upload by pushing an empty commit to the feedstock.
  • Rerender the feedstock in a PR from a fork of the feedstock and merge.
  • Request a feedstock token reset via our admin-requests repo.
  • Request that any new packages be added to the allowed outputs for the feedstock via our admin-requests repo.
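
The first item above refers to a single top-level key in the feedstock's conda-forge.yml:

conda_forge_output_validation: true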

New packages that are added to existing feedstocks are not registered automatically in order to prevent typo squatting and other malicious activities. Package outputs are added during feedstock creation. If you move a package from one feedstock to another, add an output package, or change the name of a package, you will need to request that the new package name be added to your feedstock via the admin-requests repo.

In rare cases, the package name may change regularly in a well-defined way (e.g., libllvm18, libllvm19, etc.). In this case, you can use our admin-requests repo to add a glob pattern that matches the new package name pattern. We use the Python fnmatch module syntax. Output packages that match these patterns will be automatically registered for your feedstock.

Stages of package building and involved infrastructure

Packages in conda-forge are almost¹ always built through CI. However, when a new package enters conda-forge for the first time, it does so via a pull request in the staged-recipes repository, whereas every new build of the package after that is built in its repository, the so-called feedstock. Both places use slightly different CI setups and interact with the infrastructure accordingly. Hence, we first describe the interaction at the start of a new package and then for existing packages in their respective feedstocks.

Initial submission to staged-recipes

The conda-forge/staged-recipes repository uses several pieces of infrastructure.

On pull requests:

  • Package building pipelines. These are slightly different than the ones running in feedstocks (they are not automatically generated by conda-smithy, but they do use the same underlying components).
  • The linter is provided by conda-smithy recipe-lint, run by @conda-forge-admin.
  • Auto-labeling logic, run by Github Actions workflows.

Authenticated services involved:

  • Github, with permissions for:
    • PR labeling
  • Azure Pipelines

The conversion of new recipes in staged-recipes to their respective feedstocks happens in a cron job run by admin-requests. For more details see admin-requests. As part of the feedstock creation, the new feedstock receives a webhook connecting it with the webservices.

Feedstock changes

A feedstock can receive changes for several reasons.

Pushes to main or other branches:

  • The automated initialization commits following approval in staged-recipes. These are generated by conda-smithy and pushed by the automation in admin-requests.
  • Automated maintenance commits triggered from admin-migrations.
  • Rerender requests are handled by instances of conda-forge/webservices-dispatch-action and triggered by the webservices.

Automatic pull requests can be opened by...

  • @conda-forge-admin, responding to some issues with titles like @conda-forge-admin, please....
  • @regro-cf-autotick-bot, handling migrations and new versions being available.

...and closed by:

  • conda-forge/automerge-action, if labeled accordingly.

On an open pull request:

  • The building pipelines (more below).
  • The linter is provided by conda-smithy recipe-lint, run by @conda-forge-admin.
  • The @conda-forge-admin, please... command comments, answered by @conda-forge-admin.

On issues:

  • @conda-forge-admin, please... command issues, handled by @conda-forge-admin.

Package building

The pipelines that build conda packages are used for both pull requests and push events on main and other branches. The only difference is that the packages built during a pull request are not uploaded to the staging channel. Keeping these pipelines up to date across all feedstocks involves several repositories:

  • conda-smithy is in charge of generating the CI pipelines themselves, together with the supporting scripts and configuration files. These pipelines and scripts can rely on code and data defined in the repositories below.
  • conda-forge-ci-setup-feedstock provides the code needed to prepare and homogenize the CI runners across providers. It also does some checks before the artifacts are uploaded to cf-staging.
  • conda-forge-pinning-feedstock defines which versions are supported for a number of runtimes and libraries, as well as the compilers used for certain languages and platforms.
  • docker-images builds the standardized container images for Linux runners. This repository has additional authentication needs for Docker Hub, Quay.io.

The pipelines can run on several CI providers supported by conda-smithy, including:

  • Azure DevOps Pipelines
  • Travis CI
  • Circle CI
  • Appveyor
  • Self-hosted Github Actions runners

Registration of hooks and triggers is also done by the conda-smithy app.

tip

conda-smithy supports more CI providers. Check its repository for more details.

Authenticated services involved:

  • Anaconda.org uploads to cf-staging

Package validation and publication

Once built on main (or other branches), the conda packages are uploaded to an intermediary channel named cf-staging. From there, our webservices (conda-forge/conda-forge-webservices) does the following:

  • The logic checks the feedstock token to authenticate a legitimate request.
  • The logic checks the hash sum of the package on cf-staging against the value computed in the CI to ensure the artifact to be copied is the same.
  • The logic checks that the feedstock is allowed to push the package using the conda-forge/feedstock-outputs repo.
  • If all three checks pass, the webservices copies the artifacts from cf-staging to conda-forge.

Authenticated services involved:

  • Anaconda.org uploads to conda-forge and cf-staging
  • The conda-forge-webservices app deployment itself (currently at Heroku)

Post-publication

Once uploaded to anaconda.org/conda-forge, packages are not immediately available to CLI clients. They have to be replicated in the Content Distribution Network (CDN). This step should ideally take around 15 minutes. In some circumstances, longer delays are possible. Check conda-forge.org/status in case of doubt.

After CDN replication, most packages available on anaconda.org/conda-forge won't suffer any further modifications. However, in some cases, maintainers might need to perform some actions on the published packages:

  • Patching their repodata
  • Marking them as broken

Repodata patch

The metadata for conda packages is initially contained in each package archive (under info/). conda index iterates over the published conda packages, extracts the metadata and consolidates all the found JSON blobs into a single JSON file: repodata.json. This is where the hashes and file sizes are added too. This is the metadata file that the CLI clients download initially to solve the environment.

Since the metadata is external to the package files, some details can be modified without rebuilding packages, which notably simplifies some maintenance tasks.

Repodata patches are created in conda-forge/conda-forge-repodata-patches-feedstock, which generates (and uploads) a regular conda package as a result: conda-forge-repodata-patches. Each of these timestamped packages contains the patch instructions for each channel subdir on conda-forge. The Anaconda infrastructure takes the JSON files from these packages and applies them on top of the vanilla repodata.json (which remains available for download as repodata_from_packages.json).

Since conda-forge-repodata-patches-feedstock operates as a regular feedstock for package publication, there are no further infrastructural details to cover.

Mark a package as broken

Sometimes a package is faulty in ways that a repodata patch cannot amend (e.g. a bad binary). In these cases, conda-forge does not remove packages from Anaconda.org. Instead, it marks them with the broken label, which has a special meaning: packages labeled as such will be removed from the repodata via automated patches. This action is reversible and doesn't change the direct URL of the artifact, which can still be downloaded, e.g. from a lockfile.

The main repository handling this is conda-forge/admin-requests, which features different Github Actions workflows running every 15 minutes.

For this task, the Github Action workflow needs access to:

  • Anaconda.org, to add (or remove) labels
  • Github, to modify and commit the input files after success

Inventory of services & providers

Github resources

In addition to the thousands of repositories, conda-forge uses several other Github services.

Organizations

info

These organizations exist but they are not in active use anymore:

Teams

The conda-forge Github organization has thousands of teams. Most of them are associated with a feedstock, but there are a few special ones that are not!

Configuration

Bot accounts

Apps

  • conda-forge-curator
  • conda-forge-webservices
info

These apps exist but are not in active usage anymore:

  • conda-forge drone instance

Workflows

Continuous integration

See also

Refer to the conda-forge.yml documentation to learn how to configure your CI providers.

Azure Pipelines

conda-forge benefits from the generously offered Microsoft-hosted runners.

Travis CI

Cirun

  • ๐ŸŒ https://cirun.io
  • ๐Ÿ“ Available on selected feedstocks only
  • ๐Ÿ›  Provides several architectures (depending on feedstock configuration)
  • ๐Ÿ”’ Needs access to Anaconda.org (cf-staging) and the configured backend

Configured with @conda-forge-daemon.

Organization-wide configuration can be found in the .cirun repository.

info

This allows, for example, access to GPU enabled runners for selected feedstocks as described in https://github.com/Quansight/open-gpu-server.

Github Actions

Retired services

Delivery and distribution

Anaconda.org

Docker Hub

Github Packages

Github Releases

Quay

Servers

Heroku

Other services

Footnotes

  1. Very few packages cannot be built through CI due to special resource requirements. These packages may be built and uploaded manually following the rules laid out in CFEP-3.