(Q) A package I am looking for is not on conda-forge, what can I do?
We have an overview and step-by-step instructions on contributing packages in the section Contributing packages.
(Q) The feedstock for a package from conda-forge is updated, how long should it take to update on Anaconda Cloud?
It depends on the queue, but a good rule of thumb is to wait 30 minutes to 2 hours. If you don't see the update after 24 hours, please raise an issue.
(Q) A package from conda-forge is outdated or broken, where can I report the issue?
You can open an issue in the package's feedstock repository on GitHub. Search for the repository conda-forge/<package-name>-feedstock. There you can also suggest fixes or even become a maintainer. Please refer to Maintaining packages for details.
(Q) I have a question/suggestion. How can I contact you?
Please join us on our Gitter channel. We are always happy to answer questions and help beginners.
(Q) I have a set of related packages, how do I create a conda-forge team?
Conda-forge GitHub teams are a useful means of adding a common set of maintainers to a group of related packages. For example, most R packages are co-maintained by the conda-forge/R team. To create a new team, you can simply reuse one of the existing feedstocks for your packages: each feedstock automatically has a team assigned, formed from the maintainers of that feedstock. For example, the conda-forge R team comes from the r-feedstock. You can then add - conda-forge/r to the maintainers section of a new package's recipe to make all maintainers of the r-feedstock maintainers of the new package as well.
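As a sketch, a new R package's recipe could list the team alongside individual maintainers (the user name below is illustrative):

```yaml
# recipe/meta.yaml (illustrative excerpt)
extra:
  recipe-maintainers:
    # every maintainer of the r-feedstock becomes a maintainer here
    - conda-forge/r
    # individual maintainers can still be listed as usual
    - some-github-user   # hypothetical user name
```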
(Q) Installing and updating takes a long time, what can I do?
Enabling strict channel priority may help. You can do this via conda config --set channel_priority strict
You can also try using a package called mamba, a conda-compatible package manager that can be used in place of conda. It employs a faster solver implemented in C. It can be installed via conda install mamba
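Put together, a shell session applying both suggestions might look like this (a sketch, assuming conda is installed and on PATH):

```shell
# Prefer packages from the highest-priority channel consistently;
# this speeds up and stabilizes dependency solves
conda config --set channel_priority strict

# Verify the setting took effect
conda config --show channel_priority

# Install mamba into the base environment
conda install -n base -c conda-forge mamba

# Use mamba as a drop-in replacement for conda
mamba install numpy
```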
(Q) Why is Travis-CI failing on my feedstock?
Travis CI builds should be enabled or disabled via the conda-forge.yml configuration. Nevertheless, sometimes Travis CI ignores this for whatever reason (probably a bug somewhere). In such a case, please disregard failing builds. Note that travis-ci.org builds are soon being phased out and replaced by travis-ci.com builds.
(Q) How can I install a C/C++ compiler in my environment?
You can use our convenient meta-packages c-compiler and cxx-compiler to install a compiler stack that fits your platform. Error messages such as x86_64-apple-darwin13.4.0-clang: No such file or directory are a telltale sign that you are lacking compilers.
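As a sketch (assuming an active conda environment), installing the meta-packages and checking the resulting compilers looks like this:

```shell
# Install C and C++ compiler stacks matched to the current platform
conda install -c conda-forge c-compiler cxx-compiler

# After re-activating the environment, the activation scripts export
# the (prefixed) compiler names via $CC and $CXX
echo "$CC"
echo "$CXX"
```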
(Q) Why don’t the C/C++ compilers automatically know how to find libraries installed by conda?
All of our toolchains are built as cross-compilers (even when they are built to run on the same architecture that they are targeting). We do this because it makes it possible to then install them anywhere like any other conda package. As a result, the builtin search path for the compilers only contains the sysroot they were built with. The compiler binary names are also ‘prefixed’ with more complete information about the architecture and ABI they target. So, instead of
gcc, the actual binary will be named something like x86_64-conda-linux-gnu-cc.
The conda-forge infrastructure provides activation scripts which are run when you conda activate an environment that contains the compiler toolchain. Those scripts set many environment variables that are typically used by GNU standard (i.e. builtin) build rules. For example, you would see the variable CC set to the long compiler name x86_64-conda-linux-gnu-cc. The activation scripts also set a CMAKE_ARGS variable with many arguments the conda-forge community finds helpful for configuring cmake build flows. Of particular note, the activation scripts add the CONDA_PREFIX/lib paths to the appropriate FLAGS environment variables (LDFLAGS, etc.) so that many build systems will pick them up correctly.
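You can inspect what the activation scripts set yourself (a sketch; the environment name myenv is illustrative and must contain the compiler packages):

```shell
conda activate myenv

# CC holds the long, prefixed compiler name,
# e.g. x86_64-conda-linux-gnu-cc on Linux
echo "$CC"

# Flags pointing builds at $CONDA_PREFIX are added automatically
echo "$CFLAGS"
echo "$LDFLAGS"

# cmake arguments the conda-forge community finds helpful;
# typically used as: cmake $CMAKE_ARGS ..
echo "$CMAKE_ARGS"
```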
What do you do if you have custom FLAGS that your project requires for its build, or you can't build with some of the flags supplied by conda-forge? What if you are building something that is set up for cross-compiling and expects CC to contain the name of the target toolchain, but wants to be able to build some things for the build host to use during the build by just calling gcc?
The compiler metapackages mentioned above also install packages that create symlinks from the short names (like gcc) to the actual toolchain binary names (like x86_64-conda-linux-gnu-cc) for toolchains that target the system they are running on.
A new optional package called conda-gcc-specs can also be installed that adds:
- -include $CONDA_PREFIX/include to compile commands
- -rpath $CONDA_PREFIX/lib -rpath-link $CONDA_PREFIX/lib -disable-new-dtags -L $CONDA_PREFIX/lib to link commands
Using the compiler metapackage with conda-gcc-specs, you can include and link libraries installed in CONDA_PREFIX without having to provide any conda-specific command-line arguments.
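A minimal sketch of that workflow (hello.c and the -lz example are illustrative; conda-gcc-specs is the package named above):

```shell
# Install the compiler metapackage plus the optional specs package
conda install -c conda-forge gcc conda-gcc-specs

# With the specs installed, the short gcc name finds headers and
# libraries under $CONDA_PREFIX without extra -I/-L/-rpath arguments
gcc hello.c -o hello -lz   # e.g. links against conda's zlib
```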
(Q) How can I make conda gcc use my system libraries?
First, the conda-forge infrastructure tries very hard to avoid using any of the system-provided libraries; otherwise the dependencies between packages quickly become incomplete and nothing works.
However, as an end user, when not building something that will be packaged and distributed via conda-forge, you may need to link against libraries on your system instead of libraries in your conda environment. This can be accomplished (for gcc) by passing --sysroot=/ on the command line.
(Q) How can I compile CUDA (host or device) codes in my environment?
Unfortunately, this is not possible with conda-forge's current infrastructure (cudatoolkit, etc.) if there is no local CUDA Toolkit installation. In particular, the nvcc package provided on conda-forge is a wrapper package that exposes the actual nvcc compiler to our CI infrastructure in a conda-friendly way; it does not contain the full nvcc compiler toolchain. One of the reasons is that CUDA headers like cuda_runtime.h, which are needed at compile time, are not redistributable according to NVIDIA's EULA. Likewise, the cudatoolkit package only contains CUDA runtime libraries and not the compiler toolchain.
If you need to compile CUDA code, even if it involves only CUDA host APIs, you will still need a valid CUDA Toolkit installed locally and must use that. Please refer to NVCC's documentation for CUDA compiler usage and the CUDA Programming Guide for general CUDA programming.
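To check whether a usable local CUDA Toolkit is visible, you can look for its compiler (a sketch; /usr/local/cuda is a common but not universal install prefix):

```shell
# The toolkit's compiler should be on PATH, or under the install prefix
which nvcc || ls /usr/local/cuda/bin/nvcc

# Report the locally installed compiler version
nvcc --version
```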