All Systems Mostly Operational
Repos and Bots
documentation status:
autotick bot status:
staged-recipes status:
admin web services status: No Status Available
admin migrations status:
libcfgraph status:
CDN cloning status: operational
Long-Term Migrations
Short-term migrations and recently closed migrations are listed below.
Version Updates
Azure Pipelines Usage
Azure Pipelines usage is not available.
GitHub Actions Usage
GitHub Actions usage is not available.
Travis CI Usage
Travis CI usage is not available.
Cloud Services
GitHub status: No Status Available
Travis CI status: No Status Available
Quay.io status: No Status Available
Anaconda status: No Status Available
Azure DevOps: No Status Available
Short-Term Migrations
Recently Closed Migrations
Nothing here! Yay!
Incidents
2023/09/15 09:13:10 UTC
resolved
Windows pipelines are failing #155
The "Install conda-build" step fails with a solver conflict due to a missing package.
2023/08/02 03:41:16 UTC
resolved
Package uploads are failing #153
It seems to affect most (all?) feedstocks: for a CI run on `main`, the built artefacts fail during publication:

```
ERROR getting output validation information from the webservice: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
copy results: {}
Failed to upload due to copy from staging to production channel failed. Trying again in 10 seconds
```
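For context, `JSONDecodeError('Expecting value: line 1 column 1 (char 0)')` is what Python's `json` module raises when it is asked to parse an empty or otherwise non-JSON body, which is consistent with the validation webservice returning nothing usable. Below is a minimal sketch of that failure mode and a more forgiving caller; the `fetch_validation_info` helper and its URL parameter are hypothetical, purely for illustration:

```python
import json

import requests


def fetch_validation_info(url: str) -> dict:
    """Hypothetical helper: parse the webservice response, tolerating empty bodies."""
    resp = requests.get(url, timeout=30)
    try:
        # json.loads("") raises JSONDecodeError('Expecting value: line 1 column 1 (char 0)'),
        # matching the error seen in the upload logs.
        return json.loads(resp.text)
    except json.JSONDecodeError:
        # Surface the raw body instead of crashing the copy-to-production step.
        raise RuntimeError(
            f"webservice returned non-JSON response "
            f"(status {resp.status_code}): {resp.text[:200]!r}"
        )
```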
2023/07/05 19:38:54 UTC
resolved
All macOS builds are failing #149
Investigating at https://github.com/conda-forge/python-libarchive-c-feedstock/issues/35
2023/06/29 08:39:26 UTC
resolved
Migration statuses not updating correctly #148
The `r_base43` migration is not getting status updates. Most of the feedstocks listed as "In PR" were merged days ago.
2023/06/27 08:39:10 UTC
resolved
Builds on osx & win failing #147
It appears there's some widespread breakage in osx & windows builds that started about 20 minutes ago; it happens on every PR I've looked at that started since then. The failure looks like:

```
  File "/Users/runner/miniforge3/lib/python3.10/site-packages/conda_index/index/__init__.py", line 347, in _get_resolve_object
    sd._process_raw_repodata(repodata_copy)  # type: ignore
TypeError: SubdirData._process_raw_repodata() missing 1 required positional argument: 'state'
```

This happens before the actual build step of a given feedstock even runs, during the "Attempting to finalize metadata for" phase.
While it's pure speculation, I have a suspicion that it might be related to https://github.com/conda-forge/miniforge/pull/466 (at least, no other relevant packages that I checked seemed to have been updated recently).
CC @conda-forge/core
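The `TypeError` above is the classic symptom of a caller built against an older API: `conda_index` calls `_process_raw_repodata(repodata_copy)` with one argument, while the installed `conda` apparently expects an additional required `state` parameter. A toy reproduction (not the real conda classes), just to show how that exact message arises:

```python
# Toy stand-ins, not the real conda API: they only illustrate how adding a
# required positional parameter breaks callers built against the old signature.
class SubdirDataOld:
    def _process_raw_repodata(self, repodata):
        return repodata


class SubdirDataNew:
    def _process_raw_repodata(self, repodata, state):  # new required 'state' argument
        return repodata, state


SubdirDataOld()._process_raw_repodata({})  # works

try:
    SubdirDataNew()._process_raw_repodata({})
except TypeError as exc:
    # e.g. "_process_raw_repodata() missing 1 required positional argument: 'state'"
    print(exc)
```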
2023/06/24 21:49:14 UTC
resolved
Migrations error in bot-bot job #145
Monitoring the R 4.3 migration, I noticed we stopped getting PRs over the last couple of days, despite fixing up the things I had identified as blocking. A look at recent `bot-bot` jobs shows they run for ~5 minutes, as opposed to the typical ~1 hour runtimes from a few days back. The `run migrations` section of the `bot-bot` logs shows the following error:

```bash
Run export START_TIME=$(date +%s)
export START_TIME=$(date +%s)
export TIMEOUT=7200
export CIRCLE_BUILD_URL="https://github.com/regro/cf-scripts/actions/runs/${RUN_ID}"
export CIRCLE_BUILD_NUM="actually-actions-${RUN_ID}"
pushd cf-graph
conda-forge-tick auto-tick
popd
shell: /usr/bin/bash -l {0}
env:
  MAMBA_ROOT_PREFIX: /home/runner/micromamba
  MAMBA_EXE: /home/runner/micromamba-bin/micromamba
  CONDARC: /home/runner/micromamba-bin/.condarc
  CI_SKIP:
  USERNAME: regro-cf-autotick-bot
  PASSWORD: ***
  RUN_ID: 5354311023
  MEMORY_LIMIT_GB: 7
  CF_TICK_GRAPH_DATA_BACKENDS: mongodb:file
  MONGODB_CONNECTION_STRING: ***
~/work/cf-scripts/cf-scripts/cf-graph ~/work/cf-scripts/cf-scripts
limit read as 7.0 GB
Setting memory limit to 6.0 GB
collapsing closed PR json
processing bot-rerun labels
Traceback (most recent call last):
  File "/home/runner/micromamba/envs/cf-scripts/bin/conda-forge-tick", line 8, in <module>
    sys.exit(main())
  File "/home/runner/work/cf-scripts/cf-scripts/cf-scripts/conda_forge_tick/cli.py", line 77, in main
    func(args)
  File "/home/runner/work/cf-scripts/cf-scripts/cf-scripts/conda_forge_tick/auto_tick.py", line 1489, in main
    _update_graph_with_pr_info()
  File "/home/runner/work/cf-scripts/cf-scripts/cf-scripts/conda_forge_tick/auto_tick.py", line 1471, in _update_graph_with_pr_info
    _update_nodes_with_bot_rerun(gx)
  File "/home/runner/work/cf-scripts/cf-scripts/cf-scripts/conda_forge_tick/auto_tick.py", line 1341, in _update_nodes_with_bot_rerun
    with node["payload"] as payload, payload["pr_info"] as pri, payload[
  File "/home/runner/work/cf-scripts/cf-scripts/cf-scripts/conda_forge_tick/lazy_json_backends.py", line 636, in __getitem__
    return self._data[item]
KeyError: 'pr_info'
~/work/cf-scripts/cf-scripts
```
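The traceback bottoms out in the lazy-JSON backend's `__getitem__`, which indexes the underlying dict directly, so a single graph node whose payload lacks a `pr_info` entry aborts the whole `bot-bot` run. Below is a minimal sketch of that failure mode and a defensive alternative; the `LazyPayload` class is a stand-in, not the real cf-scripts `LazyJson` implementation:

```python
# Stand-in for the lazy-JSON mapping used by the bot; not the real LazyJson class.
class LazyPayload:
    def __init__(self, data: dict):
        self._data = data

    def __getitem__(self, item):
        return self._data[item]  # raises KeyError if the key is missing

    def get(self, item, default=None):
        return self._data.get(item, default)


payload = LazyPayload({"version": "1.2.3"})  # node payload without a "pr_info" entry

try:
    payload["pr_info"]
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'pr_info'

# A defensive caller can fall back to an empty mapping instead of crashing the run:
pr_info = payload.get("pr_info", {})
```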