2016-07-22: General discussion
Time: 14:00 UTC
Hangout link: https://hangouts.google.com/call/v5olhwzpfzgzpoq5i3wthjpqpie
Attendees
Jonathan Helmus
Phil Elson
Filipe
Standing items
- How many repos?
- How many contributors?
- New core devs?
Agenda
- Governance/mechanism for formally proposing and deciding on enhancements.
* Motivation: Without a formal governance model it is difficult for the conda-forge community to reach final decisions. There is no designated place to propose changes to, e.g., compiler infrastructure or whether or not to run a package's unit tests, so these discussions end up scattered across pull requests and issues.
- Governance models:
* The Python model: BDFL + PEPs
* The Jupyter model: BDFL + Steering Council + JEPs: https://github.com/jupyter/governance
* The astropy model: Coordinating Committee + APEs: https://github.com/astropy/astropy-APEs and http://www.astropy.org/about.html
* IPEP: https://github.com/ipython/ipython/wiki/IPEP-0:-IPEP-Template
* numpy governance: http://docs.scipy.org/doc/numpy-dev/dev/governance/governance.html
- All of these models have a mechanism for enhancement proposals, so how about creating github.com/conda-forge/conda-forge-enhancement-proposals?
- SciPy sprint: https://trello.com/b/KURmGkly/conda-forge-scipy-sprint
- conda-forge code of conduct doc: https://docs.google.com/document/d/10dxX0Zse0Rx1HqsxC73Wfsghmy5m8PP8cHuBIOhWKpc/edit
- Discuss some guidelines for contacting authors
- Feedstocks philosophy: Explicit vs implicit / reproducible vs redundant
- OSX - getting back to a usable, coherent stack
* libc++ (clang) vs libstdc++ (gcc/g++)
- Minimum OSX required for clang (10.8, I think?)
- Actually, clang is usable beginning in 10.7, so this would be viable given your compatibility constraints.
- Also, all the references I have seen suggest that this will still have C++11 support.
- Compatibility with defaults (built on 10.7, uses gcc) - where will people break? I think only if mixing packages - how do we ensure that we have all the ones we need?
- Improving infrastructure
* Travis CI API issues
- Finish out GitHub API issues
- Better workflows with staged-recipes
- Low-level packaging
- Basic community practices when PR-ing to staged-recipes.
* No need to re-discuss this. I am still writing the docs and, if ready, I will send the link tomorrow (or after SciPy ;-)
- NetCDF (also curl/ca-certificates and Perl packages) - Done?
* curl and ca-certificates are done and available.
- Perl is no longer relevant as part of this process.
- Notifications (how do we stay on top of them?)
- Standardizing installs
* Mention [`toolchain`](https://github.com/conda-forge/toolchain-feedstock).
* Discuss rollout to feedstocks.
* Get feedback on [`python-toolchain`](https://github.com/conda-forge/staged-recipes/pull/642)
- MSYS2
* Available on defaults - was in conda 4.1.7, but that was pulled. Coming in 4.1.8.
- Discussing Ray Donnelly's work on MSYS2 packages and how we want to use and integrate these into conda-forge.
- Some use cases to consider: OpenBLAS, FFTW, build tools, others?
- Binary data
* Do we include it in recipes?
- What kinds, if any, do we allow (e.g. icons)?
- How do we verify the licensing?
- How do we verify that they are safe? (see the checksum sketch after this list)
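To make the safety question concrete, here is a minimal sketch of checksum-based verification, assuming a hypothetical `checksums.txt` manifest committed next to the recipe (an illustration, not an existing conda-forge convention):

```python
# Hypothetical sketch: verify binary files committed to a recipe against a
# manifest of expected SHA-256 checksums. "checksums.txt" is an assumed name,
# not a conda-forge convention. Each manifest line: "<sha256>  <path>".
import hashlib
import sys
from pathlib import Path

def sha256sum(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest="checksums.txt"):
    """Exit non-zero if any listed binary file is missing or has drifted."""
    failures = []
    for line in Path(manifest).read_text().splitlines():
        if not line.strip():
            continue
        expected, _, relpath = line.partition("  ")
        path = Path(relpath)
        if not path.exists() or sha256sum(path) != expected:
            failures.append(relpath)
    if failures:
        sys.exit("checksum mismatch: " + ", ".join(failures))

if __name__ == "__main__":
    verify()
```

Run in CI, this would fail the build whenever a committed binary drifts from its recorded checksum, though it does not by itself answer the licensing question.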
- OpenBLAS (on Windows)
- Dev releases: where do they happen?
* Do we do them at conda-forge?
* Maybe add a label.
* Do we let others do them with a feedstock on their own repo?
- How do we enforce whatever we decide?
- conda-forge installer
* We have Python 3.5 and 3.4 now. Would be nice to have 2.7.
- Have everything, though conda-build needs some work.
- Repo for the installer exists, but many questions remain open. ( https://github.com/conda-forge/conda-forge-anvil )
- Channel mirroring
* Can this point be explained a bit more? I have thought about this as well and would like to contribute.
* Eric Dill has put together a script for copying a package from one channel to another: [conda-forge/conda-forge.github.io#134](https://github.com/conda-forge/conda-forge.github.io/pull/134)
* I have a really, really crude script that copies all of the packages in one channel to another, at: [https://gist.github.com/mwcraig/8473cf840f6d29236d6d8af699404555](https://gist.github.com/mwcraig/8473cf840f6d29236d6d8af699404555)
* conda-build-all can copy from one channel to another: `conda build-all --inspect-channels conda-forge --upload-channels astropy some_package_recipe` will copy `some_package` from the conda-forge channel to astropy if it can, or build it if it doesn't exist on conda-forge. Discussion about the desired behavior has started at: [SciTools/conda-build-all#46](https://github.com/SciTools/conda-build-all/issues/46) (a minimal mirroring sketch follows this list)
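For reference, a minimal sketch of the "copy everything" approach the scripts above take, assuming the public anaconda.org URL layout (a repodata.json per subdir, tarballs alongside it) and an authenticated `anaconda` client on the PATH; the channel names are placeholders:

```python
# Minimal channel-mirroring sketch. Assumes the public anaconda.org layout
# and an authenticated anaconda-client; channel names are placeholders.
import json
import subprocess
import urllib.request

SOURCE, TARGET = "conda-forge", "my-mirror"  # hypothetical channels
SUBDIRS = ["linux-64", "osx-64", "win-64"]

def package_index(channel, subdir):
    """Fetch the set of package filenames a channel serves for one subdir."""
    url = "https://conda.anaconda.org/{}/{}/repodata.json".format(channel, subdir)
    with urllib.request.urlopen(url) as fh:
        return set(json.loads(fh.read().decode()).get("packages", {}))

for subdir in SUBDIRS:
    missing = package_index(SOURCE, subdir) - package_index(TARGET, subdir)
    for filename in sorted(missing):
        # Download the tarball from the source channel...
        url = "https://conda.anaconda.org/{}/{}/{}".format(SOURCE, subdir, filename)
        urllib.request.urlretrieve(url, filename)
        # ...and push it to the target channel with anaconda-client.
        subprocess.check_call(["anaconda", "upload", "--user", TARGET, filename])
```

This only covers the brute-force copy case; the conda-build-all issue linked above is the place for the "copy if possible, else build" behavior.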
- Feedstock history
* Is it sacred?
- Do we rebase/force-push?
* If so, under what conditions?
- How do we avoid multiple people doing this simultaneously?
* I don't think you can.
* IMHO, if it's just one author in staged-recipes, sure. For feedstocks, no force-pushing - only to PRs against feedstocks. If people don't mind merges in PRs, it sure is a lot simpler not to rebase. I have messed up rebasing a few times recently... =(
- Docker hosting solution
* Docker Hub builds were broken for a week and a half.
- Have switched to quay.io for now.
- Mirroring the quay.io image on Docker Hub.
- Thoughts about quay.io? Thoughts about hosting in general?
- Continuum metadata request: can we add these to the linter?
* example metadata: https://github.com/ContinuumIO/anaconda-recipes/blob/master/anaconda-build/meta.yaml#L36-L44
- Also, distinguish summary (limit of 77 or 80 chars) from description (unlimited).
- anaconda-verify: would be nice to meet in the middle rather than diverge. conda-build may integrate anaconda-verify; it would be nice if conda-forge added metadata here. (a lint-check sketch follows this list)
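If the linter were to grow such checks, a rough sketch of what they might look like, with field names taken from the example meta.yaml above and the summary limit from these notes (parsing meta.yaml as plain YAML ignores Jinja2 templating, so this is illustrative only):

```python
# Rough sketch of metadata lint checks. Field names follow the Continuum
# example meta.yaml linked above; treating the file as plain YAML ignores
# Jinja2 templating, so this is an illustration rather than a real linter.
import yaml

REQUIRED_ABOUT = ["home", "license", "summary"]
SUMMARY_LIMIT = 80  # the notes mention a 77- or 80-char limit

def lint_metadata(path="meta.yaml"):
    with open(path) as fh:
        meta = yaml.safe_load(fh)
    about = meta.get("about") or {}
    problems = []
    for field in REQUIRED_ABOUT:
        if not about.get(field):
            problems.append("missing about/{}".format(field))
    summary = about.get("summary", "")
    if len(summary) > SUMMARY_LIMIT:
        problems.append("summary exceeds {} chars".format(SUMMARY_LIMIT))
    # description has no length limit, but flag it if it merely repeats summary
    if summary and about.get("description") == summary:
        problems.append("description duplicates summary")
    return problems

if __name__ == "__main__":
    for problem in lint_metadata():
        print("LINT:", problem)
```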
- Google Hangouts has a max capacity of 10. Is it worth considering other methods of communication so everyone who wants to participate can?
* Maybe this ( http://www.freeconferencecalling.com/ ) is an option.
- Bluejeans
- Continuum has WebEx. Past experience is that some Linux platforms had trouble connecting.
- Drop numpy 1.10 and reduce our build matrix. (Numba now works with numpy 1.11.)
- This comment from the PR for graphviz is the best summary I've seen: https://github.com/conda-forge/staged-recipes/pull/568#issuecomment-225315370
* Thanks for pointing this out. The described solution looks reasonable and is preferable to prefixing package names. Great!
- What is the benefit?
- Will we distinguish between libs and standalone tools, similar to Debian? I would strongly suggest doing this, because it is (1) established and (2) more accessible for the user (someone who wants to use a library knows the language; someone who wants a standalone tool doesn't care). ( https://www.debian.org/doc/packaging-manuals/python-policy/ch-module_packages.html#s-package_names )
- Will there be an orchestrated move? If not, how do we deal with inconsistencies and potential conflicts (installing both python-h5py and h5py)?
* We will probably go with meta-packages for conflicting packages.
- Signing packages
* Should be easy to do. ( http://conda.pydata.org/docs/signed-packages.html )
- There has been some interest previously.
- HTTPError: 503 Server Error: Service Unavailable: Back-end server is at capacity for url...
* Seems we are regularly running into this issue under normal usage conditions.
- Had previously discussed caching packages on AppVeyor and trying to reuse those to start.
- Maybe we need to consider caching on all CIs. (a retry sketch follows this list)
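As a stopgap while caching is sorted out, CI scripts could retry transient 503s with exponential backoff; a minimal sketch (the retry budget and URL are arbitrary placeholders):

```python
# Stopgap sketch: retry transient 5xx responses from anaconda.org with
# exponential backoff. The retry budget and URL are arbitrary placeholders.
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=5, base_delay=2.0):
    """GET url, backing off on 5xx responses; raise after the last attempt."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as fh:
                return fh.read()
        except urllib.error.HTTPError as err:
            if err.code < 500 or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 2s, 4s, 8s, ...

data = fetch_with_retries(
    "https://conda.anaconda.org/conda-forge/linux-64/repodata.json")
```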
- Building our own Miniconda-like self-extracting scripts with packages via `constructor`.
- There have been improvements on Continuum's side that should help this. In short, repodata (the package index for a given channel) was being regenerated for each anaconda.org query. This was unnecessarily costly, and some caching schemes have been implemented.
- Handling removal of unpinned/improperly pinned packages.
* Has been done manually thus far.
- This doesn't scale well, though.
- Should we (semi-)automate removal? (a scanning sketch follows this list)
- Should we hot-fix broken packages? ( conda-forge/conda-forge.github.io#170 )
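A semi-automated approach could start by scanning repodata for suspicious dependencies and handing the list to a human before anything is removed; a hedged sketch (the channel and the watch-list of pin-sensitive libraries are placeholders):

```python
# Sketch: flag packages in a channel whose dependencies on ABI-sensitive
# libraries carry no version pin. The channel and watch-list are placeholders,
# and a human should review the output before any package is removed.
import json
import urllib.request

CHANNEL = "conda-forge"
WATCHED = {"numpy", "openblas", "hdf5"}  # hypothetical pin-sensitive deps

url = "https://conda.anaconda.org/{}/linux-64/repodata.json".format(CHANNEL)
with urllib.request.urlopen(url) as fh:
    packages = json.loads(fh.read().decode()).get("packages", {})

for filename, info in sorted(packages.items()):
    for dep in info.get("depends", []):
        parts = dep.split()
        # A bare name like "numpy" (no version spec) is an unpinned dependency.
        if parts[0] in WATCHED and len(parts) == 1:
            print("unpinned {} in {}".format(parts[0], filename))
```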
- Travis CI API unreliability
Notes:
- 873 repositories, 171 people
- Discussion of adding new core-devs and onboarding new contributors
- Governance/enhancement proposals
* https://github.com/ipython/ipython/wiki/IPEPs:-IPython-Enhancement-Proposals
- Want a place to move longer technical discussions that will eventually lead to a decision
- Use cases for enhancement proposals from the past
* compiler decisions (one per OS)
- CentOS 5 vs 6
- Enhancements vs how decisions are made
* core group which votes on the issue? Others from the community?
- proposal should provide evidence to help others understand the issue
- Enhancement proposals get merged regularly
* "pending" status on issue where no decision has been made
- No BDFL; a committee instead (astropy has a coordinating committee, numpy has a steering council)
- Pull Request proposing the enhancement-proposal process -- Jonathan
- Iterate toward numpy-like governance
- Blog post on conda-forge sprint -- Filipe
- Code of conduct
* Filipe has a draft; please review
- How do we handle those who misbehave? (specified in the document)
- Submit as an enhancement proposal; review for ~1 week after submission
- Committee which will sit on the code-of-conduct panel to act as nanny (perhaps with some external members)
- Contacting authors -- ping 4/5 active contributors to inform them and ask if they want to contribute
* Do not add people to the list of maintainers without permission; let them add themselves in a pull request
- Add common snippets to the docs so they are easier to find and can be used by others
- John will add a generic comment to the guidelines for contacting contributors via PR.
- Lots of mentions of, and excitement about, conda-forge at SciPy
* Time-series on big packages mentioned at SciPy?
- Some questions on Nathan's whl talk
- Split gdal into libgdal and gdal, as defaults has done; this seems to have fixed the issue
- Next meeting: 3 weeks from today, Aug 12th