Contributing packages

To submit a package to the conda-forge channel, add its recipe and license to the staged-recipes repository and create a pull request. Once the pull request is merged, the package becomes available on the conda-forge channel. Note that contributing a package makes you the maintainer of that package.

A maintainer is responsible for maintaining the feedstock repository and its packages, including future versions, and has push access only to the feedstock repositories of the packages they maintain. You can learn more about the roles of a maintainer here.

The sections below provide detailed instructions on contributing packages to conda-forge.

The staging process

Getting Started

There are multiple ways to get started:

  1. Look at the example recipe in the staged-recipes repository and modify it as necessary.

  2. If it is an R package from CRAN, start by using the conda-forge helper script for R recipes instead. Then, if necessary, you can make manual edits to the recipe.

  3. If it is a Python package, you can generate the recipe as a starting point with grayskull. Use conda install -c conda-forge grayskull to install grayskull, followed by grayskull pypi your_package_name to generate the recipe. Note that you do not have to use grayskull, and recipes produced by grayskull may need to be reviewed and edited. Read more about grayskull and how to use it here.

Your final recipe should have no comments (unless they’re actually relevant to the recipe, and not generic instruction comments), and follow the order in the example.
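
For orientation, here is a rough sketch of that section order (all names and values below are placeholders, not the actual example recipe):

    package:
      name: your-package
      version: "1.2.3"

    source:
      url: https://example.com/your-package-1.2.3.tar.gz
      sha256: <checksum of the archive>

    build:
      number: 0

    requirements:
      host:
        - python
        - pip
      run:
        - python

    test:
      imports:
        - your_package

    about:
      home: https://example.com
      license: MIT
      license_file: LICENSE
      summary: One-line description of the package.

    extra:
      recipe-maintainers:
        - your-github-username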


If there are any details you are not sure about, please create a pull request anyway. The conda-forge team will review it and help you make changes to it.

If you are building your first recipe for conda-forge, the step-by-step instructions and checklist below will help you achieve a successful build.

Step-by-step Instructions

  1. Ensure your source code can be downloaded as a single file. Source code should be downloadable as an archive (.tar.gz, .zip, .tar.bz2, .tar.xz) or tagged on GitHub, to ensure that it can be verified. (For further detail, see Build from tarballs, not repos).

  2. Fork the staged-recipes repository.

  3. Create a new branch from the staged-recipes master branch.

  4. Within your forked copy, create a new folder in the recipes subdirectory and copy meta.yaml from the example directory into it. Please leave the example directory unchanged!

  5. Edit the copied recipe (meta.yaml) as needed. For details, see The recipe meta.yaml.

  6. Generate the SHA256 checksum of your source code archive, as described in the example recipe, using the openssl tool. Alternatively, you can go to the package description on PyPI, from which you can directly copy the SHA256.

  7. Be sure to fill in the test section. The simplest test merely checks that the module can be imported, as described in the example (a minimal sketch follows this list).

  8. Remove all irrelevant comments in the meta.yaml file.
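
For step 7, a minimal test section for a hypothetical Python package (your_package_name is a placeholder for the import name) could look like:

    test:
      imports:
        - your_package_name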


Be sure not to checksum the redirection page. Therefore, use, for example:

curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256


  • Ensure that the license and license family descriptors (optional) have the right case and that the license is correct. Note that the input is case sensitive (e.g. Apache-2.0 rather than APACHE 2.0). Using SPDX identifiers for the license field is recommended. (see SPDX Identifiers and Expressions)

  • Ensure that you have included a license file if your license requires one – most do. (see here)

  • In case your project has tests included, you need to decide if these tests should be executed while building the conda-forge feedstock.

  • Make sure that all tests pass successfully at least on your development machine.

  • Recommended: run the test locally on your source code to ensure the recipe works locally (see Running tests locally for staged recipes).

  • Make sure that your changes do not interfere with other recipes that are in the recipes folder (e.g. the example recipe).

Feedback and revision

Once you have finished your PR, all you have to do is wait for feedback from our review team.

The review team will assist you by pointing out improvements and answering questions. Once the package is ready, the reviewers will approve and merge your pull request.

After merging the PR, our CI infrastructure will build the package and make it available on the conda-forge channel.


If you have questions or have not heard back for a while, you can notify us by including @conda-forge/staged-recipes in your GitHub message.

Post staging process

  • After the PR is merged, our CI services will create a new git repository automatically. For example, the recipe for a package named pydstool will be moved to a new repository, conda-forge/pydstool-feedstock. This process is automated through a CI job on the conda-forge/staged-recipes repo. It sometimes fails due to API rate limits and will automatically retry itself. If your feedstock has not been created after a day or so, please get in touch with the conda-forge/core team for help.

  • CI services will be enabled and a build will be triggered automatically. This build creates the conda package and uploads it to the conda-forge channel on anaconda.org.

  • If this is your first contribution, you will be added to the conda-forge team and given access to the CI services so that you can stop and restart builds. You will also be given commit rights to the new git repository.

  • If you want to make a change to the recipe, send a PR to the git repository from a fork. Branches of the main repository are used for maintaining different versions only.

Maintainer role

The maintainer’s job is to:

  • Keep the feedstock updated by merging any maintenance PRs from conda-forge’s bots.

  • Keep the feedstock on par with new releases of the source package by

    • bumping the version number and checksum,

    • making sure that the feedstock’s requirements stay accurate,

    • making sure that the test requirements match those of the updated package.

  • Answer questions about the package on the feedstock issue tracker.

Adding multiple packages at once

If you would like to add more than one related package, they can be added to staged-recipes in a single pull request (in separate directories). If the packages are interdependent (i.e. one package being added lists one or more of the other packages being added as a requirement), the build script will be able to locate the dependencies that are only present within staged-recipes, as long as the builds finish in dependency order. Using a single pull request allows you to quickly get packages set up without waiting for each package in a dependency chain to be reviewed, built, and added to the conda-forge channel before starting the process over with the next recipe in the chain.


When PRs with multiple interdependent recipes are merged, there may be an error if a build finishes before its dependency is built. If this occurs, you can trigger a new build by pushing an empty commit.

git commit --allow-empty -m "Retrigger CI" && git push

Synchronizing fork for future use

If you would like to add additional packages in the future, you will need to reset your fork of staged-recipes before creating a new branch on your fork, adding the new package directory/recipe, and creating a pull request. This step ensures you have the most recent version of the tools and configuration files contained in the staged-recipes repository and makes the pull request much easier to review. The following steps will reset your fork of staged-recipes and should be executed from within a clone of your forked staged-recipes directory.

  1. Check out your master branch:

    git checkout master
  2. Define the conda-forge/staged-recipes repository as upstream (if you have not already done so):

    git remote add upstream https://github.com/conda-forge/staged-recipes.git
  3. Pull all of the commits from the upstream master branch:

    git pull --rebase upstream master
  4. Push all of the changes to your fork on GitHub (make sure there are no changes on GitHub that you need, because they will be overwritten):

    git push origin master --force

Once these steps are complete, you can continue with the steps in Step-by-step Instructions to stage your new package recipe using your existing staged-recipes fork.

The recipe meta.yaml

The meta.yaml file in the recipe directory is at the heart of every conda package. It defines everything that is required to build and use the package.

meta.yaml is written in YAML, augmented with Jinja templating.

A full reference of the structure and fields of meta.yaml file can be found in the Defining metadata (meta.yaml) section in the conda-build documentation.

In the following, we highlight particularly important and conda-forge specific information and guidelines, ordered by section in meta.yaml.


Build from tarballs, not repos

Packages should be built from tarballs using the url key, not from repositories directly by using e.g. git_url.

There are several reasons behind this rule:

  • Repositories are usually larger than tarballs, draining shared CI time and bandwidth.

  • Repositories are not checksummed. Thus, using a tarball has a stronger guarantee that the download that is obtained to build from is in fact the intended package.

  • On some systems, you may not have permission to remove a repository once it has been created.
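
For illustration, a sketch of a source section following this rule (the project URL and checksum are placeholders for a hypothetical project):

    source:
      url: https://github.com/example-org/example/archive/v1.2.3.tar.gz
      sha256: <sha256 of the archive>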

Populating the hash field

If your package is on PyPI, you can get the SHA256 hash from your package’s files page; look for the SHA256 link next to the download link, e.g. https://pypi.org/project/<your-project>/#files.

You can also generate a hash from the command line on Linux (and Mac if you install the necessary tools below).

To generate the SHA256 hash, run: openssl sha256 your_sdist.tar.gz

You may need the openssl package, available on conda-forge: conda install openssl -c conda-forge.


Be sure not to checksum the redirection page. Therefore, use, for example:

curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256

Downloading extra sources and data files

conda-build 3 supports multiple sources per recipe. Examples are available in the conda-build docs.
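
For instance, a sketch of a recipe pulling a source archive plus an extra data archive into separate directories (URLs, hashes, and folder names are placeholders; the folder key selects the extraction directory):

    source:
      - url: https://example.com/pkg-1.0.tar.gz
        sha256: <sha256 of the package archive>
        folder: pkg
      - url: https://example.com/testdata-1.0.tar.gz
        sha256: <sha256 of the data archive>
        folder: testdata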


Skipping builds

Use the skip key in the build section along with a selector:

For example, you can specify not to build …

  • on specific architectures:

    skip: true  # [win]
  • for specific python versions:

    skip: true  # [py<35]

A full description of selectors is in the conda docs.
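
Combining both, a sketch of a build section that skips Windows and Python versions below 3.5 (the build number is illustrative):

    build:
      number: 0
      skip: true  # [win or py<35]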

Optional: bld.bat and/or build.sh

In many cases, bld.bat and/or build.sh files are not required. Pure Python packages almost never need them.

If the build can be executed with one line, you may put this line in the script entry of the build section of the meta.yaml file: script: "{{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation".

Remember to always add pip to the host requirements.

Use pip

Normally Python packages should use this line:

  script: "{{ PYTHON }} -m pip install . -vv"

as the installation script in the meta.yaml file or the bld.bat/build.sh script files, while adding pip to the host requirements:

  requirements:
    host:
      - pip

These options should be used to ensure a clean installation of the package without its dependencies. This helps make sure that we’re only including this package, and not accidentally bringing any dependencies along into the conda package.

Usually pure-Python packages only require python, setuptools and pip as host requirements; the real package dependencies are only run requirements.
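
A sketch of such requirements for a hypothetical pure-Python package (requests stands in for a real runtime dependency):

    requirements:
      host:
        - python
        - pip
        - setuptools
      run:
        - python
        - requests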


Build, host and run

Conda-build distinguishes three different kinds of dependencies. The following paragraphs give a very short overview of which packages go where. For a detailed explanation, please refer to the conda-build documentation.


Build dependencies are required in the build environment and contain all tools that are not needed on the host of the package.

The following packages are typical examples of build dependencies:

  • compilers (e.g. {{ compiler('c') }}, see Jinja templating below)

  • build tools such as cmake, make, and pkg-config


Host dependencies are required during the build phase but, in contrast to build dependencies, have to be present on the host.

The following packages are typical examples of host dependencies:

  • shared libraries (c/c++)

  • python/r libraries that link against c libraries (see e.g. Building Against NumPy)

  • python, r-base

  • setuptools, pip (see Use pip)


Run dependencies are only required at run time of the package. Run dependencies typically include:

  • most python/r libraries
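
To summarize, a sketch of how the three kinds of dependencies could be split for a hypothetical compiled Python extension (libawesome is a placeholder library):

    requirements:
      build:
        - {{ compiler('c') }}  # tool that runs on the build machine
      host:
        - python               # present in the host environment at build time
        - pip
        - libawesome           # shared library linked against
      run:
        - python               # needed when the package is used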

Avoid external dependencies

As a general rule, all dependencies have to be packaged by conda-forge as well. This is necessary to ensure ABI compatibility for all our packages.

There are only a few exceptions to this rule:

  1. Some dependencies have to be satisfied with CDT packages (see Core Dependency Tree Packages (CDTs)).

  2. Some packages require root access (e.g. device drivers) that cannot be distributed by conda-forge. These dependencies should be avoided whenever possible.


Linking shared C/C++ libraries makes the package depend on the ABI of the library that was used at build time. The exposed interface changes when previously exposed symbols are deleted or modified in a newer version.

It is therefore crucial to ensure that only library versions with a compatible ABI are used after linking.

In the best case, the shared library you depend on is ABI stable between releases or defines run_exports, in which case ABI-compatible pins are applied automatically. In these cases you do not have to worry about version requirements:

  requirements:
    # [...]
    host:
      - readline
      - libpng

In other cases, you have to specify ABI-compatible versions manually:

  requirements:
    # [...]
    host:
      - libawesome 1.1.*

For more information on pinning, please refer to Pinned dependencies.

Constraining packages at runtime

The run_constrained section allows defining restrictions on packages at runtime without depending on them. It can be used to restrict the allowed versions of optional dependencies and to declare incompatible packages.

Defining non-dependency restrictions

Imagine a package that can be used together with version 1 of awesome-software when it is present, but does not strictly depend on it. You would therefore like to let users choose whether to use the package with or without awesome-software. Let’s assume further that the package is incompatible with version 2 of awesome-software.

In this case, run_constrained can be used to restrict awesome-software to version 1.*, if the user chooses to install it:

  requirements:
    # [...]
    run_constrained:
      - awesome-software 1.*

Here run_constrained acts as a means to protect users from incompatible versions without introducing an unwanted dependency.

Defining conflicts

Sometimes packages interfere with each other and therefore only one of them can be installed at any time. In combination with an unsatisfiable version, run_constrained can define blockers:

  package:
    name: awesome-db

  requirements:
    # [...]
    run_constrained:
      - amazing-db ==9999999999

In this example, awesome-db cannot be installed together with amazing-db, as there is no version 9999999999 of amazing-db.


Tests

All recipes need tests. Here are some tips, tricks, and justifications. How you should test depends on the type of package (Python, C library, command-line tool, …) and on what tests are available for that package. But every conda package must have at least some tests.

Simple existence tests

Sometimes defining tests seems hard, e.g. because:

  • tests for the underlying code base may not exist.

  • test suites may take too long to run on limited CI infrastructure.

  • tests may take too much bandwidth.

In these cases, conda-forge may not be able to execute the prescribed test suite.

However, this is no reason for the recipe to not have tests. At the very least, we want to verify that the package has installed the desired files in the desired locations. This is called existence testing.

Existence testing can be accomplished in the commands block of the test section in the meta.yaml file.

On POSIX systems, use the test utility and the $PREFIX variable.

On Windows, use the exist command. See below for an example.

Simple existence testing example:

  test:
    commands:
      - test -f $PREFIX/lib/libboost_log$SHLIB_EXT  # [unix]
      - if not exist %LIBRARY_LIB%\\boost_log-vc140-mt.lib exit 1  # [win]

Testing Python packages

For the best information about testing, see the conda build docs test section.

Testing importing

The minimal test of a Python package should make sure that the package can be successfully imported. This can be accomplished with this stanza in the meta.yaml:

  test:
    imports:
      - package_name

Note that package_name is the name imported by Python; it is not necessarily the name of the conda package (they are sometimes different).
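
For example, the conda package scikit-learn is imported as sklearn, so its import test reads:

  test:
    imports:
      - sklearn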

Testing for an import will catch the bulk of the packaging errors, generally including the presence of dependencies. However, it does not assure that the package works correctly. In particular, it doesn’t test if it works correctly with the versions of dependencies used.

It is good to run some other tests of the code itself (the test suite) if possible.

Running unit tests

The trick here is that there are multiple ways to run unit tests in Python, including nose, pytest, etc.

Also, some packages install the tests with the package, and thus they can be run in place, while others keep the tests with the source code, and thus cannot be run straight from an installed package.

Test requirements

Sometimes there are packages required to run the tests that are not required to simply use the package. This is usually a test-running framework, such as nose or pytest. You can ensure that it is included by adding it to the requires section of the test stanza:

  test:
    imports:
      - package_name
    requires:
      - pytest

Copying test files

Often, test files are not installed alongside the package. conda-build executes the test stage in a fresh working copy that does not contain the files of the source package.

You can include files required for testing with the source_files section:

  test:
    imports:
      - package_name
    requires:
      - pytest
    source_files:
      - tests
    commands:
      - pytest tests

The source_files section works for files and directories.

Built-in tests

Some packages have testing built-in. In this case, you can put a test command directly in the test stanza:

  test:
    commands:
      - python -c "import package_name; package_name.tests.runall()"

Alternatively, you can add a file called run_test.py in the recipe that will be run at test time. This allows an arbitrarily complicated test script.

pytest tests

If the tests are installed with the package, pytest can find and run them for you with the following command:

  test:
    requires:
      - pytest
    commands:
      - pytest --pyargs package_name

Command Line Utilities

If a python package installs command line utilities, you probably want to test that they were properly installed:

  test:
    commands:
      - util_1 --help

If the utility has a dedicated test mode, great. Otherwise, simply invoking --help or --version will at least test that it is installed and can run.

Tests outside of the package

Note that conda-build runs the tests in an isolated environment after installing the package – thus, at this point it does not have access to the original source tarball. This is to ensure that the test environment is as close as possible to what an end-user will see.

This makes it very hard to run tests that are not installed with the package.

Running tests locally for staged recipes

If you want to build and test packages from the staged-recipes repository locally, go to the root repository directory and run the .scripts/run_docker_build.sh script. This requires that you have Docker installed on your machine.

You need to define an environment variable named CONFIG. Its value must be the name of one of the three YAML configuration files in the .ci_support directory (either linux64, osx64, or win64). As an example, you can invoke the command as follows.

$ cd staged-recipes
$ CONFIG=linux64 ./.scripts/run_docker_build.sh


Packaging the license manually

Sometimes upstream maintainers do not include a license file in their tarball, even though the license requires one.

If this is the case, you can add the license to the recipe directory (here named LICENSE.txt) and reference it inside the meta.yaml:

  about:
    license_file: LICENSE.txt

In this case, please also notify the upstream developers that the license file is missing.


The license should only be shipped along with the recipe if there is no license file in the downloaded archive. If there is a license file in the archive, please set license_file to the path of the license file in the archive.

SPDX Identifiers and Expressions

For the about: license entry in the recipe meta.yaml, using an SPDX identifier or expression is recommended.

See SPDX license identifiers for the licenses. See SPDX license exceptions for license exceptions. See SPDX specification Appendix IV for the specification on expressions.

Some example license expressions:

Apache-2.0 WITH LLVM-exception
BSD-3-Clause OR MIT
LGPL-2.0-only OR GPL-2.0-only
MIT AND BSD-2-Clause
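
Such an expression goes in the license field of the about section; a sketch (the license file name is illustrative):

  about:
    license: Apache-2.0 WITH LLVM-exception
    license_file: LICENSE.txt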


Activate scripts

Recipes are allowed to have activate scripts, which will be sourced or called when the environment is activated. It is generally recommended to avoid using activate scripts when another option is possible because people do not always activate environments the expected way and these packages may then misbehave.

When using them in a recipe, feel free to name them activate.bat, activate.sh, deactivate.bat, and deactivate.sh in the recipe. The installed scripts are recommended to be prefixed by the package name and a separator (the samples below use _). Below is some sample code for Unix and Windows that will make this install process easier. Please feel free to lift it.


In build.sh:

# Copy the [de]activate scripts to $PREFIX/etc/conda/[de]activate.d.
# This will allow them to be run on environment activation.
for CHANGE in "activate" "deactivate"
do
    mkdir -p "${PREFIX}/etc/conda/${CHANGE}.d"
    cp "${RECIPE_DIR}/${CHANGE}.sh" "${PREFIX}/etc/conda/${CHANGE}.d/${PKG_NAME}_${CHANGE}.sh"
done

In build.bat:

setlocal EnableDelayedExpansion

:: Copy the [de]activate scripts to %PREFIX%\etc\conda\[de]activate.d.
:: This will allow them to be run on environment activation.
for %%F in (activate deactivate) DO (
    if not exist %PREFIX%\etc\conda\%%F.d mkdir %PREFIX%\etc\conda\%%F.d
    copy %RECIPE_DIR%\%%F.bat %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat
    :: Copy unix shell activation scripts, needed by Windows Bash users
    copy %RECIPE_DIR%\%%F.sh %PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.sh
)

Jinja templating

The recipe meta.yaml can contain expressions that are evaluated during build time. These expressions are written in Jinja syntax.

Jinja expressions serve the following purposes in meta.yaml:

  • They allow defining variables to avoid code duplication. Using a variable for the version, for example, means the version needs to be changed in only one place with every update.

    {% set version = "3.7.3" %}

    package:
      name: python
      version: {{ version }}

    source:
      url: https://www.python.org/ftp/python/{{ version }}/Python-{{ version }}.tar.xz
      sha256: da60b54064d4cfcd9c26576f6df2690e62085123826cff2e667e72a91952d318
  • They can call conda-build functions for automatic code generation. Examples are the compilers, cdt packages or the pin_compatible function.

        requirements:
          build:
            - {{ compiler('c') }}
            - {{ compiler('cxx') }}
            - {{ cdt('xorg-x11-proto-devel') }}  # [linux]
            - {{ cdt('libx11-devel') }}          # [linux]

        requirements:
          build:
            - {{ compiler('c') }}
            - {{ compiler('cxx') }}
          host:
            - python
            - numpy
          run:
            - python
            - {{ pin_compatible('numpy') }}

For more information please refer to the Templating with Jinja section in the conda-build docs.