13. Contributor’s guide
We use a reasonably standard open source workflow somewhat resembling Scott Chacon’s “GitHub Flow”, though we use GitLab:
main is stable (hopefully)
organize plans with prioritized, labeled issues
work on short-lived topic branches
use peer-reviewed merge requests (MRs) to manage the flow of changes into main
This section explains the current workflow in more detail. The level of approximation and completeness here varies, but it’s a best-effort description of what actually happens, and it seemed useful to write down what we see as best practices. Some of this is automatically validated with CI. There have been previous workflows, and we have not updated closed issues and MRs.
Related wiki pages:
Important
“Imperfect” help is better than no help at all. While it does make things easier and faster if external contributors follow the workflow, that’s a lot to expect. Just do your best and it will be much appreciated. We’ll wrangle logistics and/or help you figure it out.
13.1. Issues (work planning)
We use issues to organize planned work. The issue body should describe the problem or desired feature, ideally following a “steps to reproduce, expected behavior, actual behavior, comments” format. Include the exact Charliecloud version in use; relevant logs (e.g., -v or -vv output) are also very helpful.
The issue fields we use are listed below. Charliecloud team members should manage their own issues accordingly. The general public are more than welcome to label their issues if they like, but in practice this is rare, which is fine. Whoever triages the incoming issue should update the fields as needed. We do not use issue fields not listed below.
13.1.1. Important workflow notes
Triaging issues. Issues and MRs submitted from outside should be acknowledged promptly: move them out of incoming, update titles/descriptions if needed, and set fields appropriately.
Closing issues. We close issues when the issue is actually resolved. It is OK for “stale” issues to sit around indefinitely awaiting this. Unlike many projects, we do not automatically close issues just because they’re old. You are not going to have some bot yelling at you every 30 days.
The reasoning is that we want a listing of deficiencies that’s as comprehensive as we can practically make it, and that includes issues we know about that haven’t had much or any recent attention. That is, Charliecloud is small enough (326 open issues as of this writing) to prune moot issues manually rather than assuming some level of arbitrary “staleness” implies mootness.
We aren’t the only project that does this. For example, the currently oldest
bug in Debian is #2297, a race condition
in xterm(1)
reported in 1996.
Re-opening issues. Closed issues can be re-opened if new information
arises, for example a worksforme
issue with new reproduction steps.
13.1.2. Assignee
This is the person responsible for issue logistics, who may or may not be the person who actually did the work. For internal implementations, this is typically the person who did most of the coding; for external submissions, this is typically the person who wrangled the contribution.
While GitLab allows multiple assignees, we almost always have just one.
13.1.3. Status
We use the GitLab status field to describe where an issue falls in its lifecycle.
Open issues have decisions or work remaining. Within that:
incoming
Awaiting triage by a Charliecloud team member. This is the default status for new issues.
blocked
We can’t do this yet because something else needs to happen first. If that something is other issue(s), link those using GitLab’s Linked items → is blocked by. Otherwise, the description should explain; use the phrase “blocked by” or “waiting on”.
uncertain
The course of action is unclear. For example: is the feature a good idea? What is a good approach to solve the bug? Is additional information needed to understand the issue? The description should explain; use the phrase “uncertain because”.
open
We plan to do the thing but nobody has started on it.
inprogress
Somebody is actively working on it. Importantly, issues should not stay here for too long (a week? a fortnight?) if work has stalled; in that case, return the issue to open.
Closed issues have nothing left to do. The “cancelled” statuses (everything except done) may not have a milestone.
done
Work is complete. These issues must have a milestone assigned.
duplicate
Same as some other issue. In addition to this status, duplicates should refer to the other issue in a comment to record the link. Of the duplicates, the better one should stay open (e.g., the one with clearer reproduction steps); if they are roughly equal in quality, the older one should stay open.
moot
No longer relevant. Examples: withdrawn by reporter, can’t be reproduced any more and we don’t know what changed (i.e., duplicate does not apply), obsoleted by change in plans.
noaction
The issue is not something we can solve by modifying Charliecloud. Common examples: problems with other software, problems with containers in general that we can’t work around, not actionable due to clarity or other reasons, or someone asks a question rather than making a request for some change. (Formerly cantfix.)
Warning
Note the conspicuous absence of “user error” in the examples. While it’s true that user error does happen, rarely is there really nothing to do. Much more frequently, there is a documentation or usability bug that contributed to the “user error”.
wontfix
We are not going to do this, even if someone else provides an MR. Sometimes you’ll want to set this status and then wait a few days before closing, to allow further discussion to catch mistaken classifications.
worksforme
We cannot reproduce the issue, and it seems unlikely this will change given available information. Typically you’ll want to set this status, then wait a few days for clarification before closing. Bugs closed with this status that do gain a reproducer later should be re-opened.
For some bugs, it really feels like they should be reproducible but we’re missing it somehow; such bugs should be left open in hopes of new insight arising.
GitLab does have subcategories of Open (Triage, Open, and In progress) and Closed (Done and Canceled [sic]). We don’t think about them much, but this is what the tiny status icons mean.
13.1.4. Labels
Each issue should be labeled along three dimensions.
13.1.4.1. Change type
Choose one type from:
bug
Something doesn’t work; e.g., it doesn’t work as intended or it was mis-designed. This includes usability and documentation problems.
enhancement
Things work, but it would be better if something was different. For example, a new feature proposal, an improvement in how a feature works, or clarifying an error message.
refactor
Change that will improve Charliecloud but does not materially affect user-visible behavior. Note this doesn’t mean “invisible to the user”; even user-facing documentation or logging changes could feasibly be this type, if they are more cleanup-oriented.
13.1.4.2. Component
This describes what part of Charliecloud is affected. Choose one or more from:
runtime
The container runtime itself; largely ch-run.
image
Image building and interaction with image registries; largely ch-image. (Not to be confused with image management tasks done by glue code.)
glue
The “glue” that ties the runtime and image management (ch-image or another builder) together. In general, these are the other executables in bin.
install
Charliecloud build & install system, configure, packaging, etc. (Not to be confused with image building.)
doc
Documentation and log messages, including internal docs.
test
Test suite, examples, and/or CI.
misc
Everything else. Do not combine with another component.
13.1.4.3. Priority
We prioritize bugs on the two dimensions of the “Eisenhower Matrix”, namely urgency and importance, though we call the latter impact. This lets us think separately about the benefit of doing something and when that thing needs to be done in order to get the benefit.
Note that this view of priority does not address the cost of doing the thing, i.e., the difficulty of a bug fix does not affect its priority.
This is all subjective and imprecise, so work does not follow a strict priority queue, i.e., we do work on lower-priority issues while higher-priority ones are still open. Related to this, issues do often move between priority levels. In particular, if you think we picked the wrong priority level, please say so.
Relevant other perspectives:
Debian bug severity levels (one-dimensional).
Firefox priority and severity.
Bugzilla priority and severity, as documented until version 3.7 (2010), when the detailed priority descriptions were deleted, as well as a few modifications:
Eclipse Web Tools Platform priority and severity.
GCC importance.
Gentoo severity circa 2009.
Impact describes the cost of not doing the thing and/or the benefit of doing the thing. We have five levels. The table below describes factors to consider but is not comprehensive. Generally, the “worst” factor defines the impact, but it’s a judgement call. “Normal” is a good default.
Issues of impact important and higher should probably explain why.
| | | | | | |
|---|---|---|---|---|---|
| use case(s) affected [1] | any | notable minority | many or high-value | many or high-value | many or high-value |
| impact on use case(s) | inconvenience | moderate impairment | moderate impairment | significant impairment | unusable |
| security vulnerability | no | no | trivial | minor | yes |
| impact on project sustainability [2] | trivial | average | notable | significant | major |
| embarrassment level [3] | none | none | minor | moderate | high |
| issue types | any | any | any | features and bugs | bugs only |
| how many open (goal) | many | many | few | none | none |
Notes:
Charliecloud development is considered a high-value use case. Use of another container implementation is considered an unsatisfactory workaround.
Project sustainability is things like funding, reputation, etc.
Embarrassment level refers to questions like “how embarrassed are we to make a release with this issue open?” or “how embarrassed are we if it’s reported in a release?”.
While these are easiest to interpret in terms of bugs, they apply just as well to feature requests and refactoring (“unless we implement feature X, Charliecloud’s usability for high-value use case Y is significantly impaired because ...”).
Urgency refers to timing. How soon must the issue be resolved to gain its benefit?
panic
Highest urgency; “drop everything else and skip all your meetings”. Maybe someone has a crazy deadline, or Charliecloud is just spectacularly broken. Often the work is mostly complete before someone gets around to submitting an issue. Typical deadline: days or less.
immediately
Pre-emptively urgent; often becomes your top priority. Typical deadline: weeks.
soon
A reasonable default. Someone is waiting but there is plenty of time to plan ahead. Typical, often vague deadline: months.
eventually
Limited urgency but we would like to get to it; completed as time permits. No one is actively waiting. Many of these issues will unfortunately persist for the life of the project due to resource constraints.
deferred
One can hope. We keep these issues open because we want to retain the record that the issue was filed and the reasoning for its indefinite deferral. MRs are still welcome but will likely benefit from an argument that their future maintenance load, if merged, will be low.
13.1.4.4. Deprecated labels
You might see these on old issues, but they are no longer in use. They have a
tilde (~
) prefix to push them to the bottom of the alphabetical list,
because GitLab has no way to mark a label as deprecated.
blocked: Replaced by the blocked status above.
disp::*: Replaced by the “Canceled” statuses above.
help wanted: This tended to get stale and wasn’t generating any leads.
hpc: Related specifically to HPC and HPC scaling considerations. This tended not to get used, and the definition was fuzzy.
key issue: Replaced by the impact and urgency labels.
n00b: We used to go occasionally through the issues and mark those that seemed good for new Charliecloud developers to work on, but these never stayed up to date and didn’t seem to attract volunteers.
pri::*: Replaced by the impact and urgency labels.
question: Replaced by GitHub Discussions, which then went away when we moved to GitLab.
uncertain: Replaced by the uncertain status above.
usability: Affects usability of any part of Charliecloud, including documentation and project organization. This information was rarely acted on.
13.1.5. Milestone
We use one milestone per release, so this says which release the issue corresponds to. Specifically:
Open statuses: Planned to be included in the specified milestone. A milestone should be added to an Open issue only when we’re reasonably confident or hopeful, but even so, expect some churn.
Closed statuses:
done: The issue is in the specified milestone. All done issues must have a milestone.
others, i.e. all the Canceled [sic] statuses with a red slash icon: May not have a milestone.
13.2. Workflow
Charliecloud’s standard coding workflow is intended to be fairly straightforward and normal; in summary:
Do the work in a feature branch. At some point, make a merge request (MR) for the branch.
Pass CI testing.
Have someone on the Charliecloud core team review the code.
Iterate steps 1–3 until the MR passes both CI and code review.
Project lead or designee merges the MR to main.
13.2.1. Branches
13.2.1.1. Naming convention
Name the branch with a brief summary of the issue being fixed (just a couple of words). Separate words with hyphens, then append an underscore and the number of the issue being addressed.
For example, issue #1773 is titled “ch-image build: --force=fakeroot outputs to stderr despite -q”; the corresponding branch (for MR !1812) is called fakeroot-quiet-rhel_1773. Something even shorter, such as fakeroot_1773, would have been fine too.
It’s okay if the branch name misses a little. For example, if you discover during work on an MR that you should close a second issue in the same MR, it’s not necessary to add the second issue number to the branch name.
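The steps to start such a branch can be sketched with plain Git. The repository setup below is a throwaway demo; the branch name and issue number are the illustrative ones from the example above:

```shell
# Demo setup: a throwaway repository (illustrative only).
d=$(mktemp -d) && cd "$d"
git init -q -b main
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m 'initial commit'

# Topic branch: hyphen-separated summary, then underscore and issue number.
git switch -c fakeroot-quiet_1773 main
git branch --show-current   # prints: fakeroot-quiet_1773
```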
13.2.1.2. Document first
Best practice for significant changes is to (1) draft documentation, key comments, and possibly some tests first, (2) get feedback on that, and then (3) write the code. Reasons for this include:
Writing the docs helps you understand the necessary code changes better, because you’re forced to clearly articulate their effects, including corner cases.
Reviews of the form “you need a completely different approach” are no fun.
A good description of the intended behavior helps the reviewer better understand and evaluate the work.
13.2.1.3. Granularity
Ideally, one branch addresses one concern, i.e., a branch comprises one self-contained change corresponding to one issue. If there are multiple concerns, make separate issues and/or MRs.
For example, branches should not tidy unrelated code, and non-essential complications should be split into a follow-on issue.
However, in practice, branches often address several related issues, which is fine.
13.2.1.4. Keeping branches up-to-date
If main progresses significantly while a branch is open, you will want those changes to also be reflected in your feature branch so it’s easier to merge into main when the time comes, or because main includes bug fixes you need. Updating your branch can be done in multiple ways, most commonly rebase or merging main into your branch.
We generally prefer the merge approach because (1) conflicts need be resolved only once and (2) it doesn’t alter branch history (which can cause confusion for others). Conversely, it does yield a messier branch history. Example procedure:
$ git stash # if you have un-committed changes
$ git switch main
$ git fetch # update origin/main
$ git merge # merge origin/main to your main
$ git switch mybranch
$ git merge --no-ff --no-commit main
$ emacs ... # resolve merge conflicts
$ git gui # sanity-check merge diff
$ git commit -m 'merge from main'
$ git stash pop
If you prefer rebase, that’s fine too.
Warning
This is a common source of Git problems, so be careful. A typical symptom is commits from main showing up in the branch’s history; another is the GitLab diff showing changes you didn’t make. If you find yourself in this situation and feel lost, ask for help sooner than later because it’s very easy to tangle Git further if you make mistakes attempting to untangle.
13.2.1.5. Commit history
Best practice is to keep your branch history tidy, but not to the extent that it interferes with getting things done. Commit frequently at semantically relevant times. Commit messages that are brief, technically relevant, and quick to write are what you want on feature branches. It is not necessary to rebase or squash to keep branch history tidy.
We squash-merge branches, so the branch history has limited visibility; the audience is approximately whoever looks at your merge request.
Non-best-practices include consistently well-proofread commit messages as well as (much more commonly) content-free ones like “try 2”, “does this work?”, “please pass CI”, “ugh”, and/or “foo”.
Note
Do not commit and push just to run the test suite. Run tests locally until
you really need remote CI (see the ch-test(1)
man page). Your local dev box is set up to pass the test suite, right?
😉
13.2.1.6. Local repository hygiene
Because we delete branches in the main repository on merge, it’s easy for your
local repository to accumulate a clutter of dangling branch pointers (how long
is your git branch -a
?). Remove these with the script
misc/branches-tidy
.
13.2.2. Merge requests (MRs)
13.2.2.1. Draft marker
GitLab merge requests have a “draft” toggle that can be true or false. We only use this to indicate to code reviewers whether the MR is ready to merge, given an approved code review. (In fact, GitLab will not merge a draft MR.)
Draft MR: A review request means intermediate feedback is desired. The branch author plans to continue work after the review cycle.
Non-draft MR: A review request is a request to merge the work into main. Note that un-checking the draft box does not request review; you have to do that separately.
Draft MRs are indicated with a title prefix of “Draft:” in the web UI.
13.2.2.2. Linking with other GitLab artifacts
Closed issues. We use GitLab’s issue close notation to mark issues that should be closed (as done) when the MR is merged. Put this at the end of the MR description and update it as needed.
Include the title-inclusion plus notation (i.e., append + so GitLab renders each issue’s title) so readers can immediately see a comprehensible list of issues, rather than hovering over or clicking through opaque issue numbers. This does break the built-in syntax of a single Closes: keyword. We recommend, for example:
Closes: #1414+ \
Closes: #2135+ \
Closes: #6237+
Note that a line break within a paragraph is indicated by escaping the newline with a backslash.
Milestone. MRs should also be linked (using the Milestone menu) with the milestone at which they were merged. This should be the same milestone as the linked-to-close issues.
Other artifacts. MRs can use the standard GitLab notation to “mention” arbitrary artifacts. Use this as appropriate.
13.2.2.3. No stand-alone MRs
Each merge request must be linked to one or more issues, i.e., we do not use stand-alone MRs. This is new as of August 2025.
We changed the procedure because issues and merge requests are quite distinct in GitLab, unlike GitHub. GitLab work planning and analysis features only look at issues (e.g., milestone burn-down and burn-up charts).
Another motivation is issue and MR numbering. GitHub uses the same ID sequence for issues and pull requests, e.g., #XXXX can refer to either issue XXXX or pull request XXXX, and only one of those can exist. However, GitLab issues and merge requests each use an independent sequence, e.g., #YYYY refers only to issue YYYY and !ZZZZ only to merge request ZZZZ, and possibly YYYY = ZZZZ even though they are different objects.
Note, however, that issues and merge requests imported from the old GitHub project do retain their numbers.
13.2.2.4. Avoid stale MRs
Unlike issues, we do try to avoid retaining stale or stalled MRs due to the merge conflicts they inevitably accumulate. Ideally an MR would be either merged or rejected in a timely manner, but occasionally we do just close them for staleness. This isn’t automated, however.
13.2.2.5. Merge procedures
We merge to main using the GitLab web UI, typically with squash-merge. This is because the meaningful unit of work once something gets to main is the merge request, and it lets us avoid the effort of lots of branch rebasing and curation of branch commits. After merge, we delete the branch, to avoid accumulating stale branches.
The merge commit messages are formulaic using a template defined in GitLab, and importantly they mention the merge request in the first line. This lets readers find relevant discussion about the merge using only command-line Git. For example:
b07acfdb reidpr MR !1902: add CDI support, big refactor
85098b46 reidpr merge PR #268 from @j-ogas: remove ch-docker-run
Note that the specific notation has changed over time. Notably, merges from
when we were on GitHub say PR #XXXX
instead of the now-correct
MR !XXXX
, but the numbers are still valid.
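For example, tracing a change back to its review discussion might look like this. The repository below is a throwaway demo; the function name is illustrative, and the commit message reuses the sample above:

```shell
# Demo setup: a throwaway repository with one squash-merge-style commit.
d=$(mktemp -d) && cd "$d" && git init -q
git config user.email you@example.com && git config user.name you
echo 'int some_function;' > file.c && git add file.c
git commit -q -m 'MR !1902: add CDI support, big refactor'

# Find the commit that introduced "some_function"; the first line of its
# message names the merge request to look up in the GitLab web UI.
git log --oneline -S some_function -- file.c
```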
13.2.3. Code review
Peer review of code within Charliecloud is important because we are all human and thus make mistakes. All merge requests must pass review, even (especially!) those from senior team members. We use the GitLab MR web UI for code review.
13.2.3.1. Developer expectations
Seek code review early and often. Intermediate-stage review is often useful, as it helps you make sure you’re on the right track before writing a bunch of code. In particular, anything that requires non-trivial documentation updates or complex code design should have that documentation or design reviewed before proceeding to significant code.
When your MR is ready for review, do this:
Double-check that your MR is marked draft or not, as appropriate (see above).
Verify that CI has passed, or explain why there should be an exception to the usual CI-before-review policy.
Choose a reviewer. Generally, you want to find a reviewer with available time and appropriate expertise. Feel free to ask if you’re not sure. Note that the project lead must approve any MRs before merge, so they are typically a reasonable choice if you don’t have someone else in mind.
Parallel reviews (i.e., selecting more than one person) are generally not needed but are sometimes appropriate. In this case, explain in a comment why you are selecting each person.
You can select yourself or the branch author as reviewer. This is useful when multiple people are working on a branch or the branch received tidying that the original author should look at.
Request review in the GitLab MR UI. This notifies the reviewer that you need their help.
When the review is done, examine the feedback and deal with it appropriately.
Once you request review, the person from whom you requested it now owns the branch, and you should stop work on it unless and until you get it back (modulo other communication, of course). This is so they can make tidy commits if needed without collision.
It is a best practice to ask the original bug reporter to also review, to make sure it solved their problem. This is done with a comment, since GitLab will not request a normal review from external people.
Important
Resolve all threads. If the resolution is a code change, replying in GitLab is usually not necessary, even if the feedback takes the form of a question; just click resolve.
Even if there are no changes, omit a text reply unless it’s really needed. For example, if a thread requests a code change but incorrectly (reviewers are human too), it’s probably good to explain why you’re not doing it. On the other hand, threads of the form “consider approach XYZ” or “do you know about related skill ABC” do not need any reply. Just resolving the thread says “OK” or whatever simple acknowledgement is needed.
These best practices avoid cluttering the reviewer’s inbox.
13.2.3.2. Reviewer expectations
Your job as a code reviewer is to catch the errors and possible improvements that CI doesn’t. Guidelines include:
Be timely. The work is blocked until you review it, and delays mean progressive loss of context as whoever wrote the code forgets what was in it. We have a robot that e-mails dev@lists.charliecloud.io nightly listing the open reviews and who they are waiting on.
As part of this, pay attention to your review requests. It is a best practice to see them when they appear, without need for manual notification or reminders.
Be critical and thorough. Do your best to find errors, even if you are a junior person reviewing a more senior’s work and they have a track record of good work. If you can’t find problems on a non-trivial patch, you probably haven’t looked hard enough.
Be constructive and kind. It’s easy to be mean when the entire purpose of your work is to find mistakes. Assume good faith and that the work was done by a good person doing their best available at the time. Criticize the work, not the person. Remember we’re all on the same team, working together to make the best product we can.
Don’t test the code (usually). Sometimes you do need to do this to check something specific, but remember you should complement rather than duplicate CI. If there’s a test needed but not added, describe the test in a request for changes, rather than just trying it and being satisfied if it worked.
Use multi-comment reviews rather than individual separate comments, to avoid cluttering inboxes. Write a meaningful summary comment with your judgement.
The review can also include actual code changes committed to the branch. This is called “tidying”.
Tip
To enter a review without commenting on any individual diff lines, click Your review at upper right. If the button doesn’t appear, you need to request review from yourself.
To see your pending review requests, use GitLab’s “To-Do List”. I have not yet found a way in the merge requests view to see MRs where a review request is pending.
13.2.3.3. Decision/recommendation
We use two of the three possible review decisions, somewhat oddly described collectively by GitLab as “review approval”. When you are done with your review, write a summary comment and choose one of the following. The key difference is whether you (the reviewer) want to re-review after the feedback is considered.
- Approve
The work is ready to proceed (more coding if draft, merging if not). The MR’s developer should respond to and/or implement all the feedback, but re-review of those changes is not needed (e.g., developers with the merge bit can immediately self-merge after dealing with all feedback).
- Request changes
The work is not yet ready to proceed. Re-review is needed after the feedback is responded to and/or implemented.
In both cases, all types of feedback are appropriate: Recommendations for change, questions, other comments, dad jokes, etc.
We are constrained by GitLab behavior here, specifically enforcement of re-review, which is why the meanings are not orthogonal. There is also a Comment decision, which behaves the same as Request changes except with a friendlier (?) color; it seemed redundant.
13.3. Test suite
13.3.1. Continuous integration (CI) testing
We use GitLab CI testing, driven by YAML files in test/gitlab.com. This is moderately well documented there. Additional guidelines include:
MRs will usually not be merged until they pass CI, with exceptions if the failures are clearly unconnected and we are confident they aren’t masking a real issue. If appropriate, tests should also pass on relevant supercomputers.
Be judicious with CI cycles. The resources are there for you to use, so take advantage of them, but be mindful of their usage costs. For example: use [skip ci] commit messages, cancel pipelines or jobs that are no longer useful, and don’t push every single commit (CI tests only the most recent commit). Avoid making commits merely to trigger CI.
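The [skip ci] marker goes in the commit message itself; for example (throwaway demo repository, illustrative message):

```shell
# Demo setup: a throwaway repository (illustrative only).
d=$(mktemp -d) && cd "$d" && git init -q
git config user.email you@example.com && git config user.name you
echo fix > README && git add README

# GitLab skips the pipeline when the pushed head commit's message
# contains "[skip ci]".
git commit -q -m 'docs: fix typo in README [skip ci]'
git log -1 --format=%s
```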
13.3.2. Timing the tests
The ts utility from moreutils is quite handy. The following prepends each line with the elapsed time since the previous line:
$ ch-test -s quick | ts -i '%M:%.S'
Note: a skipped test isn’t free; I see ~0.15 seconds to do a skip.
13.3.3. ch-test complains about inconsistent versions
There are multiple ways to ask Charliecloud for its version number. These
should all give the same result. If they don’t, ch-test
will fail.
Typically, something needs to be rebuilt. Recall that configure
contains the version number as a constant, so a common way to get into this
situation is to change Git branches without rebuilding it.
Charliecloud is small enough to just rebuild everything with:
$ ./autogen.sh && ./configure && make clean && make
13.3.4. Special images
For images not needed after completion of a test, tag them tmpimg
.
This leaves only one extra image at the end of the test suite.
13.3.5. Writing a test image using the standard workflow
13.3.5.1. Summary
The Charliecloud test suite has a workflow that can build images by two methods:
From a Dockerfile, using ch-image or another builder (see common.bash:build_()).
By running a custom script.
To create an image that will be built and unpacked and/or mounted, create a
file in examples
(if the image recipe is useful as an example) or
test
(if not) called {Dockerfile,Build}.foo
. This will create
an image tagged foo
. Additional tests can be added to the test suite
Bats files.
To create an image with its own tests, documentation, etc., create a directory
in examples
. In this directory, place
{Dockerfile,Build}[.foo]
to build the image and test.bats
with
your tests. For example, the file examples/foo/Dockerfile
will create
an image tagged foo
, and examples/foo/Dockerfile.bar
tagged
foo-bar
. These images also get the build and unpack/mount tests.
Additional directories can be symlinked into examples
and will be
integrated into the test suite. This allows you to create a site-specific test
suite. ch-test
finds tests at any directory depth; e.g.
examples/foo/bar/Dockerfile.baz
will create a test image tagged
bar-baz
.
Image tags in the test suite must be unique.
Order of processing; within each item, alphabetical order:
Dockerfiles in test.
Build files in test.
Dockerfiles in examples.
Build files in examples.
The purpose of doing Build
second is so they can leverage what has
already been built by a Dockerfile, which is often more straightforward.
13.3.5.2. How to specify when to include and exclude a test image
Each of these image build files must specify its scope for building and running, which must be greater than or equal to the scope of all tests in any corresponding .bats files. Exactly one of the following strings must appear:
ch-test-scope: skip
ch-test-scope: quick
ch-test-scope: standard
ch-test-scope: full
Other stuff on the line (e.g., comment syntax) is ignored.
Optional test modification directives are:
ch-test-arch-exclude: ARCH
If the output of uname -m matches ARCH, skip the file.
ch-test-builder-exclude: BUILDER
If using BUILDER, skip the file.
ch-test-builder-include: BUILDER
If specified, run only if using BUILDER. Can be repeated to include multiple builders. If specified zero times, all builders are included.
ch-test-need-sudo
Run only if user has sudo.
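Putting the pieces together, a hypothetical test/Dockerfile.foo might begin like this (the base image, scope, and directives are illustrative, not a recommendation):

```dockerfile
# ch-test-scope: quick
# ch-test-builder-exclude: docker
FROM almalinux:8
RUN echo hello > /hello.txt
```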
13.3.5.3. How to write a Dockerfile recipe
It’s a standard Dockerfile.
13.3.5.4. How to write a Build recipe
This is an arbitrary script or program that builds the image. It gets three command line arguments:
$1: Absolute path to directory containing Build.
$2: Absolute path and name of output image, without extension. This can be either:
Tarball compressed with gzip or xz; append .tar.gz or .tar.xz to $2. If ch-test --pack-fmt=squash, then this tarball will be unpacked and repacked as a SquashFS. Therefore, only use tarball output if the image build process naturally produces it and you would have to unpack it to get a directory (e.g., docker export).
Directory; use $2 unchanged. The contents of this directory will be packed without any enclosing directory, so if you want an enclosing directory, include one. Hidden (dot) files in $2 will be ignored.
$3
: Absolute path to temporary directory for use by the script. This can be used for whatever and need no be cleaned up; the test harness will delete it.
Other requirements:

- The script may write only in two directories: (a) the parent directory of
  $2 and (b) $3. Specifically, it may not write to the current working
  directory. Everything written to the parent directory of $2 must have a
  name starting with $(basename $2).

- The first entry in $PATH will be the Charliecloud under test, i.e., bare
  ch-* commands will be the right ones.

- Any programming language is permitted. To be included in the Charliecloud
  source code, a language already in the test suite dependencies is
  required.

- The script must test for its dependencies and fail with an appropriate
  error message and exit code if something is missing. To be included in
  the Charliecloud source code, all dependencies must be something we are
  willing to install and test.
Exit codes:

- 0: Image successfully created.
- 65: One or more dependencies were not met.
- 126 or 127: No interpreter available for the script language (the shell
  takes care of this).
- else: An error occurred.
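For illustration, here is a sketch of a Build recipe’s logic, written as a Python function so it can be exercised inline; a real recipe is a standalone executable that receives the three paths as command line arguments, and all names and the copied content here are invented:

```python
import os
import shutil
import sys
import tempfile

def build(src_dir, img_path, tmp_dir):
   # Dependency check: fail with the documented exit code if missing.
   if shutil.which("sh") is None:
      sys.exit(65)
   # Directory output form: use the image path ($2) unchanged, no extension.
   shutil.copytree(os.path.join(src_dir, "rootfs"), img_path)

# Demo fixture standing in for the three harness-provided arguments.
top = tempfile.mkdtemp()
os.makedirs(os.path.join(top, "src", "rootfs", "bin"))
build(os.path.join(top, "src"), os.path.join(top, "img"), os.path.join(top, "tmp"))
print(os.path.isdir(os.path.join(top, "img", "bin")))  # True
```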
13.4. Building RPMs
We maintain .spec files and infrastructure for building RPMs in the
Charliecloud source code. This is for two purposes:

1. We maintain our own Fedora RPMs (see packaging guidelines).
2. We want to be able to build an RPM of any commit.

Item 2 is tested; i.e., if you break the RPM build, the test suite will
fail.
This section describes how to build the RPMs and the pain we’ve hopefully abstracted away.
13.4.1. Dependencies
- Charliecloud
- Python 3.6+
- either:
  - the provided example centos_7ch or almalinux_8ch images, or
  - a RHEL/CentOS 7 or newer container image with the following packages
    (note there are different Python version names for the listed packages
    in RHEL 8 and derivatives):
    - autoconf
    - automake
    - gcc
    - make
    - python36
    - python36-sphinx
    - python36-sphinx_rtd_theme
    - rpm-build
    - rpmlint
    - rsync
13.4.2. rpmbuild wrapper script
While building the Charliecloud RPMs is not too weird, we provide a script to
streamline it. The purpose is to (a) make it easy to build versions not
matching the working directory, (b) use an arbitrary rpmbuild
directory, and (c) build in a Charliecloud container for non-RPM-based
environments.
The script must be run from the root of a Charliecloud Git working directory.
Usage:
$ packaging/fedora/build [OPTIONS] IMAGE VERSION
Options:

--install
   Install the RPMs into the build environment after building.

--rpmbuild=DIR
   Use RPM build directory root DIR (default: ~/rpmbuild).
For example, to build a version 0.9.7 RPM from the CentOS 7 image provided
with the test suite, on any system, and leave the results in
~/rpmbuild/RPMS
(note the test suite would also build the
necessary image directory):
$ bin/ch-image build -f ./examples/Dockerfile.centos_7ch ./examples
$ bin/ch-convert centos_7ch $CH_TEST_IMGDIR/centos_7ch
$ packaging/fedora/build $CH_TEST_IMGDIR/centos_7ch 0.9.7-1
To build a pre-release RPM of Git HEAD using the CentOS 7 image:
$ bin/ch-image build -f ./examples/Dockerfile.centos_7ch ./examples
$ bin/ch-convert centos_7ch $CH_TEST_IMGDIR/centos_7ch
$ packaging/fedora/build ${CH_TEST_IMGDIR}/centos_7ch HEAD
13.4.3. Gotchas and quirks
13.4.3.1. RPM versions and releases
If VERSION
is HEAD
, then the RPM version will be the content
of VERSION.full
for that commit, including Git gobbledygook, and the
RPM release will be 0
. Note that such RPMs cannot be reliably upgraded
because their version numbers are unordered.
Otherwise, VERSION
should be a released Charliecloud version followed
by a hyphen and the desired RPM release, e.g. 0.9.7-3
.
Other values of VERSION
(e.g., a branch name) may work but are not
supported.
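The two accepted forms of VERSION can be sketched as follows (the regex and helper are ours for illustration, not from the build script):

```python
import re

def rpm_version_release(version):
   if version == "HEAD":
      # RPM version comes from VERSION.full at that commit; release is 0.
      return ("(contents of VERSION.full)", "0")
   # Released Charliecloud version, hyphen, RPM release, e.g. "0.9.7-3".
   m = re.fullmatch(r"(\d+\.\d+\.\d+)-(\d+)", version)
   if m is None:
      raise ValueError("unsupported VERSION: %s" % version)
   return (m.group(1), m.group(2))

print(rpm_version_release("0.9.7-3"))  # ('0.9.7', '3')
```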
13.4.3.2. Packaged source code and RPM build config come from different commits
The spec file, build
script, .rpmlintrc
, etc. come from the
working directory, but the package source is from the specified commit. This
is what enables us to make additional RPM releases for a given Charliecloud
release (e.g. 0.9.7-2).
Corollaries of this policy are that RPM build configuration can be any or no commit, and it’s not possible to create an RPM of uncommitted source code.
13.4.3.3. Changelog maintenance
The spec file contains a manually maintained changelog. Add a new entry for each new RPM release; do not include the Charliecloud release notes.
For released versions, build
verifies that the most recent changelog
entry matches the given VERSION
argument. The timestamp is not
automatically verified.
For other Charliecloud versions, build
adds a generic changelog entry
with the appropriate version stating that it’s a pre-release RPM.
13.5. Style hints
We haven’t written down a comprehensive style guide. Generally, follow the style of the surrounding code, think in rectangles rather than lines of code or text, and avoid CamelCase.
Note that Reid is very picky about style, so don’t feel singled out if he complains (or even updates this section based on your patch!). He tries to be nice about it.
13.5.1. Writing English
When describing what something does (e.g., your PR or a command), use the imperative mood, i.e., write the orders you are giving rather than describe what the thing does. For example, do:
- Inject files from the host into an image directory.
- Add --join-pid option to ch-run.

Do not (indicative mood):

- Injects files from the host into an image directory.
- Adds --join-pid option to ch-run.

Use sentence case for titles, not title case.
If it’s not a sentence, start with a lower-case character.
Use spell check. Keep your personal dictionary updated so your editor is not filled with false positives.
13.5.2. Documentation
Heading underline characters:
1. Asterisk, *, e.g. “13. Contributor’s guide”
2. Equals, =, e.g. “13.7. OCI technical notes”
3. Hyphen, -, e.g. “13.7.1. Gotchas”
4. Tilde, ~, e.g. “13.7.1.1. Namespaces” (try to avoid)
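In the documentation source (reStructuredText), the four levels look like this; the titles are just examples:

```rst
Contributor's guide
*******************

OCI technical notes
===================

Gotchas
-------

Namespaces
~~~~~~~~~~
```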
13.5.3. Dependency policy
Specific dependencies (prerequisites) are stated elsewhere in the documentation. This section describes our policy on which dependencies are acceptable.
13.5.3.1. Generally
All dependencies must be stated and justified in the documentation.
We want Charliecloud to run on as many systems as practical, so we work hard to keep dependencies minimal. However, because Charliecloud depends on new-ish kernel features, we do depend on standards of similar vintage.
Core functionality should be available even on small systems with basic Linux distributions, so dependencies for run-time and build-time are only the bare essentials. Exceptions, to be used judiciously:
- Features that add convenience rather than functionality may have
  additional dependencies that are reasonably expected on most systems
  where the convenience would be used.

- Features that only work if some other software is present can add
  dependencies on that other software (e.g., ch-convert depends on Docker
  to convert to/from Docker image storage).
The test suite is tricky, because we need a test framework and to set up complex test fixtures. We have not yet figured out how to do this at reasonable expense with dependencies as tight as run- and build-time, so there are systems that do support Charliecloud but cannot run the test suite.
Building the RPMs should work on RPM-based distributions with a kernel new enough to support Charliecloud. You might need to install additional packages (but not from third-party repositories).
13.5.3.2. curl vs. wget
For URL downloading in shell code, including Dockerfiles, use wget -nv
.
Both work fine for our purposes, and we need to use one or the other
consistently. According to Debian’s popularity contest, 99.88% of reporting
systems have wget
installed, vs. about 44% for curl
. On the
other hand, curl
is in the minimal install of CentOS 7 while
wget
is not.
For now, Reid just picked wget
because he likes it better.
13.5.4. Variable conventions in shell scripts and .bats files
- Separate words with underscores.

- User-configured environment variables: all uppercase, CH_TEST_ prefix.
  Do not use these in individual .bats files; instead, provide an
  intermediate variable.

- Variables local to a given file: lower case, no prefix.

- Bats variables set in common.bash and then used in .bats files: lower
  case, ch_ prefix.

- Surround lower-case variables expanded in strings with curly braces,
  unless they’re the only thing in the string. E.g.:

  "${foo}/bar"   # yes
  "$foo"         # yes
  "$foo/bar"     # no
  "${foo}"       # no

- Don’t quote variable assignments or other places where quoting is not
  needed (e.g., case statements). E.g.:

  foo=${bar}/baz    # yes
  foo="${bar}/baz"  # no
13.5.5. Statement ordering within source files
In general, we order things alphabetically.
13.5.5.1. Python
The module as a whole, and each class, comprise a sequence of ordering units
separated by section header comments surrounded by two or more hashes, e.g.
## Globals ##
. Sections with the following names must be in this order
(omissions are fine). Other section names may appear in any order. There is
also an unnamed zeroth section.
1. Enums
2. Constants
3. Globals
4. Exceptions
5. Main
6. Functions
7. Supporting classes
8. Core classes
9. Classes
Within each section, statements occur in the following order:

1. imports
   a. standard library
   b. external imports not in the standard library
   c. import charliecloud
   d. other Charliecloud imports
2. assignments
3. class definitions
4. function definitions
   a. __init__
   b. static methods
   c. class methods
   d. other double-underscore methods (e.g. __str__)
   e. properties
   f. “normal” functions (instance methods)
Within each group of statements above, identifiers must occur in
alphabetical order. Exceptions:

- Classes must appear after their base class.
- Assignments may appear in any order.
- Statement types not listed above may appear in any order.
A statement that must be out of order is exempted with a comment on its first line containing 👻, because a ghost says “OOO”, i.e. “out of order”.
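A skeleton module following these rules might look like the following (all identifiers are invented for illustration):

```python
## Constants ##

BAR = 1  # assignments may appear in any order
FOO = 2

## Functions ##

def apple():
   pass

def cherry():  # 👻 pretend this one must precede banana()
   pass

def banana():
   pass

## Core classes ##

class Zebra:

   def __init__(self):
      self.noise = "neigh?"

   @classmethod
   def of_noise(cls, noise):
      z = cls()
      z.noise = noise
      return z

   def __str__(self):
      return self.noise

   @property
   def loud(self):
      return self.noise.upper()

   def speak(self):
      return self.noise

class Aardvark(Zebra):  # after its base class, despite alphabetical order
   pass
```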
13.5.6. Python code
13.5.6.1. Indentation width
3 spaces per level. No tab characters.
13.5.7. C code
13.5.7.1. Wrapper/substitute functions
We have a number of functions that wrap or substitute for library functions, to provide error handling, a better interface, etc. The specific reasoning and usage variations should be documented for each.
Sometimes these are renamed when clarity demands (e.g. strcmp(3) vs
streq()); otherwise, they have a _ch suffix (e.g. realpath(3) vs
realpath_ch()).
Calling the original functions is generally disallowed with preprocessor
magic. To do so, use the FN_BLOCKED
macro from misc.h
, e.g.:
#define strcmp FN_BLOCKED
If you really need to use a blocked function, e.g. in order to actually implement the wrapper, un-block and re-block it with the preprocessor, e.g.:
#undef strcmp
strcmp("a", "b");
#define strcmp FN_BLOCKED
This could also be done with linker symbol wrapping (e.g. GNU ld’s --wrap
option), but that seems to mostly be a GNU thing.
13.5.7.2. Memory management
TL;DR: Charliecloud does not free any memory. You can enable garbage
collection with libgc
if you want, and this is the default, but it may
not be necessary, i.e. simply leaking all allocated memory could still be
smaller than the overhead of trying to clean up.
How-To: (1) Use Charliecloud wrappers for all library functions that
allocate memory, e.g. ch_malloc()
instead of malloc(3)
.
Importantly, this includes things like strdup(3)
and
asprintf(3)
. (2) Don’t call free(3)
or any other library
functions that free memory.
ch-run.c
has, since very nearly the beginning, carried the notice
that it “does not bother to free memory allocations, since they are modest and
the program is short-lived”. Explicit memory management is difficult and
time-consuming, and it didn’t seem worth the effort.
Eventually, we grew a long-running process to serve a
SquashFUSE filesystem, and the short-lived justification became obsolete. The
rough goal became: convert to proper memory management, freeing everything
that we allocated. Various free(3)
crept in here and there, but a full
refactor was never a priority.
Then PR #1919 came to be and grew in scope until it was a significant refactor. We tried to Do It Right on memory management everywhere this PR touched, and we did, until Reid got fed up writing comments about whose problem it was to free this or that and copying data simply so those comments could be tractable.
So now we’re back full circle. Memory management is not worth Charliecloud
developers’ time. We gleefully malloc(3)
and realloc(3)
without a care in the world, sinning every time. But now you have options. You
can either:
1. YOLO, i.e. simply never free anything, i.e. leak like a sieve. But
   Charliecloud is still a small program and it’s unlikely to be an actual
   problem. Our quick-and-dirty tests with a small “hello world” Alpine
   image running true(1) show a main ch-run process using 350 KiB just
   before it executes the user program, and the SquashFUSE process the
   same just before forking and 1,600 KiB upon exit.

2. Link with libgc, i.e. the Boehm-Demers-Weiser conservative garbage
   collector. The idea is that garbage collection scans the stack, heap,
   and other pointer sources for integers that look like pointers and
   assumes they are pointers. Apparently it works quite well and can even
   be faster than explicit memory management in some cases. The
   quick-and-dirty tests show 900 KiB by the main process, and the
   SquashFUSE process the same just before forking (after an explicit
   garbage collection) and 2,200 KiB upon exit.
ch-run
logs memory usage to syslog, and also stderr with -vv
,
so you can analyze your specific situation.
13.5.7.3. const
The const keyword is used to indicate that variables are read-only. It has
a variety of uses; in Charliecloud, we use it for pointer-type function
arguments to state whether or not the object pointed to will be altered by
the function. For example:
void foo(const char *in, char *out)
is a function that will not alter the string pointed to by in
but may
alter the string pointed to by out
. (Note that char const
is
equivalent to const char
, but we use the latter order because that’s
what appears in GCC error messages.)
We do not use const
on local variables or function arguments passed by
value. One could do this to be more clear about what is and isn’t mutable, but
it adds quite a lot of noise to the source code, and in our evaluations didn’t
catch any bugs. We also do not use it on double pointers (e.g., char
**out
used when a function allocates a string and sets the caller’s pointer
to point to it), because so far those are all out-arguments and C has
confusing rules about double
pointers and const
.
13.5.7.4. #include order and gotchas
Implementation files (.c
) should normally use this order:
#define _GNU_SOURCE
#include "config.h"
#include <grp.h> // standard library headers in alphabetical order
#include <pwd.h>
// ...
#include <sys/syscall.h>
#include <time.h>
#include <libfoo.h> // non-standard library headers in alpha-order
#include "all.h"
config.h
needs to come first because it defines things that alter
library headers.
all.h
is an auto-generated header file that includes all the other
Charliecloud headers (except config.h
). It needs to come last because
it disallows (at compile time) some library functions, which the library
headers must use. Implementation files should include it rather than the
individual headers to simplify dependency management, and also to consistently
enforce the disallowed functions.
Header files (.h
) should normally use this order:
#define _GNU_SOURCE
#pragma once
#include "config.h"
#include <stdbool.h> // stdlib headers in alpha-order
#include <libfoo.h> // non-std library headers
#include "core.h" // individual Charliecloud headers in alpha-order
#include "misc.h"
Because all.h
includes the header being defined, including it rather
than the individual headers seemed excessively circular. Also, the set of
other Charliecloud headers tends to be fairly short because it’s basically
only types that tend to be needed.
Note
This approach invokes some C preprocessor warts that I couldn’t figure out
how to work around in a way I liked. #include
is a simple text
operation that substitutes the contents of the specified file for the line.
Therefore:
- Any library headers included by our own headers are also available to
  the implementation files. For example, log.h includes errno.h, so none
  of the C files also need errno.h. An ordering convention that put our
  headers before the libraries would not change this.

- Conversely, library headers included by implementation files are also
  available to our headers. For example, if all the .c files included
  errno.h, log.h wouldn’t need to do so. Putting our headers before the
  library headers would avoid this, at the cost of working around the
  disallowed functions somehow.
This can lead to both over-including, e.g. an implementation file includes
errno.h even though errno is already available, as well as
under-including, e.g. an implementation file does not include errno.h
(relying on getting it via our header) but confusingly has access to
errno anyway. Clarity would have us include everything actually needed by
a file, but the compiler won’t enforce that, so it would be unlikely to be
reliable.
Generally we try to minimize the number of explicit headers included by
implementation files, i.e. in this case don’t #include <errno.h>
in
the .c
file, but we also don’t worry too hard about that.
We could also keep the same order and include nothing in our headers, which would solve the above two problems but require every implementation file to include the library headers needed by our headers, which seemed like a weird dependency.
13.6. Debugging
13.6.1. Python printf(3)-style debugging
Consider ch.ILLERI()
. This uses the same mechanism as the standard
logging functions (ch.INFO()
, ch.VERBOSE()
, etc.) but it
(1) cannot be suppressed and (2) uses a color that stands out.
All ch.ILLERI()
calls must be removed before a PR can be merged.
13.6.2. seccomp(2) BPF
ch-run --seccomp -vv
will log the BPF instructions as they are
computed, but it’s all in raw hex and hard to interpret, e.g.:
$ ch-run --seccomp -vv alpine:3.17 -- true
[...]
ch-run[62763]: seccomp: arch c00000b7: found 13 syscalls (core.c:582)
ch-run[62763]: seccomp: arch 40000028: found 27 syscalls (core.c:582)
[...]
ch-run[62763]: seccomp(2) program has 156 instructions (core.c:591)
ch-run[62763]: 0: { op=20 k= 4 jt= 0 jf= 0 } (core.c:423)
ch-run[62763]: 1: { op=15 k=c00000b7 jt= 0 jf= 17 } (core.c:423)
ch-run[62763]: 2: { op=20 k= 0 jt= 0 jf= 0 } (core.c:423)
ch-run[62763]: 3: { op=15 k= 5b jt=145 jf= 0 } (core.c:423)
[...]
ch-run[62763]: 154: { op= 6 k=7fff0000 jt= 0 jf= 0 } (core.c:423)
ch-run[62763]: 155: { op= 6 k= 50000 jt= 0 jf= 0 } (core.c:423)
ch-run[62763]: note: see FAQ to disassemble the above (core.c:676)
ch-run[62763]: executing: true (core.c:538)
You can instead use seccomp-tools to disassemble and pretty-print the BPF code in a far easier format, e.g.:
$ sudo apt install ruby-dev
$ gem install --user-install seccomp-tools
$ export PATH=~/.gem/ruby/3.1.0/bin:$PATH
$ seccomp-tools dump -c 'ch-run --seccomp alpine:3.19 -- true'
line CODE JT JF K
=================================
0000: 0x20 0x00 0x00 0x00000004 A = arch
0001: 0x15 0x00 0x11 0xc00000b7 if (A != ARCH_AARCH64) goto 0019
0002: 0x20 0x00 0x00 0x00000000 A = sys_number
0003: 0x15 0x91 0x00 0x0000005b if (A == aarch64.capset) goto 0149
[...]
0154: 0x06 0x00 0x00 0x7fff0000 return ALLOW
0155: 0x06 0x00 0x00 0x00050000 return ERRNO(0)
Note that the disassembly is not perfect; e.g. if an architecture is not in your kernel headers, the system call name is wrong.
13.7. OCI technical notes
This section describes our analysis of the Open Container Initiative (OCI)
specification and its implications for our implementations of ch-image and
ch-run-oci. Anything relevant for users goes in the respective man page;
here is for technical details. The main goals are to guide Charliecloud
development and provide an opportunity for peer review of our work.
13.7.1. ch-run-oci
Currently, ch-run-oci
is only tested with Buildah. These notes
describe what we are seeing from Buildah’s runtime expectations.
13.7.1.1. Gotchas
13.7.1.1.1. Namespaces
Buildah sets up its own user and mount namespaces before invoking the runtime, though it does not change the root directory. We do not understand why. In particular, this means that you cannot see the container root filesystem it provides without joining those namespaces. To do so:
1. Export CH_RUN_OCI_LOGFILE with some logfile path.

2. Export CH_RUN_OCI_DEBUG_HANG with the step you want to examine (e.g.,
   create).

3. Run ch-build -b buildah.

4. Make note of the PID in the logfile.

5. Join the namespaces:

   $ nsenter -U -m -t $PID bash
13.7.1.1.2. Supervisor process and maintaining state
OCI (and thus Buildah) expects a process that exists throughout the life of the container. This conflicts with Charliecloud’s lack of a supervisor process.
13.7.1.2. Bundle directory
OCI documentation (very incomplete): https://github.com/opencontainers/runtime-spec/blob/master/bundle.md
The bundle directory defines the container and is used to communicate between
Buildah and the runtime. The root filesystem (mnt/rootfs
) is mounted
within Buildah’s namespaces, so you’ll want to join them before examination.
ch-run-oci
has restrictions on bundle directory path so it can be
inferred from the container ID (see the man page). This lets us store state in
the bundle directory instead of maintaining a second location for container
state.
Example:
# cd /tmp/buildah265508516
# ls -lR . | head -40
.:
total 12
-rw------- 1 root root 3138 Apr 25 16:39 config.json
d--------- 2 root root 40 Apr 25 16:39 empty
-rw-r--r-- 1 root root 200 Mar 9 2015 hosts
d--x------ 3 root root 60 Apr 25 16:39 mnt
-rw-r--r-- 1 root root 79 Apr 19 20:23 resolv.conf
./empty:
total 0
./mnt:
total 0
drwxr-x--- 19 root root 380 Apr 25 16:39 rootfs
./mnt/rootfs:
total 0
drwxr-xr-x 2 root root 1680 Apr 8 14:30 bin
drwxr-xr-x 2 root root 40 Apr 8 14:30 dev
drwxr-xr-x 15 root root 720 Apr 8 14:30 etc
drwxr-xr-x 2 root root 40 Apr 8 14:30 home
[...]
Observations:

- The weird permissions on empty (000) and mnt (100) persist within the
  namespaces, so you’ll want to be namespace root to look around.

- hosts and resolv.conf are identical to the host’s.

- empty is still an empty directory within the namespaces. What is this
  for?

- mnt/rootfs contains the container root filesystem. It is a tmpfs. No
  other new filesystems are mounted within the namespaces.
13.7.1.3. config.json
OCI documentation:
This is the meat of the container configuration. Below is an example
config.json
along with commentary and how it maps to ch-run
arguments. This was pretty-printed with jq . config.json
, and we
re-ordered the keys to match the documentation.
There are a number of additional keys that appear in the documentation but not
in this example. These are all unsupported, either by ignoring them or
throwing an error. The ch-run-oci
man page documents comprehensively
what OCI features are and are not supported.
{
"ociVersion": "1.0.0",
We validate that this is “1.0.0”.
"root": {
"path": "/tmp/buildah115496812/mnt/rootfs"
},
Path to root filesystem; maps to NEWROOT
. If key readonly
is
false
or absent, add --write
.
"mounts": [
{
"destination": "/dev",
"type": "tmpfs",
"source": "/dev",
"options": [
"private",
"strictatime",
"noexec",
"nosuid",
"mode=755",
"size=65536k"
]
},
{
"destination": "/dev/mqueue",
"type": "mqueue",
"source": "mqueue",
"options": [
"private",
"nodev",
"noexec",
"nosuid"
]
},
{
"destination": "/dev/pts",
"type": "devpts",
"source": "pts",
"options": [
"private",
"noexec",
"nosuid",
"newinstance",
"ptmxmode=0666",
"mode=0620"
]
},
{
"destination": "/dev/shm",
"type": "tmpfs",
"source": "shm",
"options": [
"private",
"nodev",
"noexec",
"nosuid",
"mode=1777",
"size=65536k"
]
},
{
"destination": "/proc",
"type": "proc",
"source": "/proc",
"options": [
"private",
"nodev",
"noexec",
"nosuid"
]
},
{
"destination": "/sys",
"type": "bind",
"source": "/sys",
"options": [
"rbind",
"private",
"nodev",
"noexec",
"nosuid",
"ro"
]
},
{
"destination": "/etc/hosts",
"type": "bind",
"source": "/tmp/buildah115496812/hosts",
"options": [
"rbind"
]
},
{
"destination": "/etc/resolv.conf",
"type": "bind",
"source": "/tmp/buildah115496812/resolv.conf",
"options": [
"rbind"
]
}
],
This says what filesystems to mount in the container. It is a mix; it has tmpfses, bind-mounts of both files and directories, and other non-device-backed filesystems. The docs suggest a lot of flexibility, including stuff that won’t work in an unprivileged user namespace (e.g., filesystems backed by a block device).
The things that matter seem to be the same as Charliecloud defaults. Therefore, for now we just ignore mounts.
"process": {
"terminal": true,
This says that Buildah wants a pseudoterminal allocated. Charliecloud does not currently support that, so we error in this case.
However, Buildah can be persuaded to set this false
if you redirect
its standard input from /dev/null
, which is the current workaround.
Things work fine.
"cwd": "/",
Maps to --cd
.
"args": [
"/bin/sh",
"-c",
"apk add --no-cache bc"
],
Maps to COMMAND [ARG ...]
. Note that we do not run ch-run
via
the shell, so there aren’t worries about shell parsing.
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"https_proxy=http://proxyout.lanl.gov:8080",
"no_proxy=localhost,127.0.0.1,.lanl.gov",
"HTTP_PROXY=http://proxyout.lanl.gov:8080",
"HTTPS_PROXY=http://proxyout.lanl.gov:8080",
"NO_PROXY=localhost,127.0.0.1,.lanl.gov",
"http_proxy=http://proxyout.lanl.gov:8080"
],
Environment for the container. The spec does not say whether this is the complete environment or whether it should be added to some default environment.
We treat it as a complete environment, i.e., place the variables in a file and
then --unset-env='*' --set-env=FILE
.
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 1048576,
"soft": 1048576
}
]
Process limits Buildah wants us to set with setrlimit(2)
. Ignored.
"capabilities": {
...
},
Long list of capabilities that Buildah wants. Ignored. (Charliecloud provides security by remaining an unprivileged process.)
"user": {
"uid": 0,
"gid": 0
},
},
Maps to --uid=0 --gid=0
.
"linux": {
"namespaces": [
{
"type": "pid"
},
{
"type": "ipc"
},
{
"type": "mount"
},
{
"type": "user"
}
],
Namespaces that Buildah wants. Ignored; Charliecloud just does user and mount.
"uidMappings": [
{
"hostID": 0,
"containerID": 0,
"size": 1
},
{
"hostID": 1,
"containerID": 1,
"size": 65536
}
],
"gidMappings": [
{
"hostID": 0,
"containerID": 0,
"size": 1
},
{
"hostID": 1,
"containerID": 1,
"size": 65536
}
],
Describes the identity map between the namespace and host. Buildah wants it much larger than Charliecloud’s single entry and asks for container root to be host root, which we can’t do. Ignored.
"maskedPaths": [
"/proc/acpi",
"/proc/kcore",
...
],
"readonlyPaths": [
"/proc/asound",
"/proc/bus",
...
]
Spec says to “mask over the provided paths ... so they cannot be read” and
“set the provided paths as readonly”. Ignored. (Unprivileged user
namespace protects us.)
}
}
End of example.
13.7.1.4. State
The OCI spec does not say how the JSON document describing state should be given to the caller. Buildah is happy to get it on the runtime’s standard output.
ch-run-oci provides an OCI-compliant state document. Status creating will
never be returned, because the create operation is essentially a no-op,
and annotations are not supported, so the annotations key will never be
given.
13.7.1.5. Additional sources
- buildah man page: https://github.com/containers/buildah/blob/master/docs/buildah.md
- buildah bud man page: https://github.com/containers/buildah/blob/master/docs/buildah-bud.md
- runc create man page: https://raw.githubusercontent.com/opencontainers/runc/master/man/runc-create.8.md
- https://github.com/opencontainers/runtime-spec/blob/master/runtime.md
13.7.2. ch-image
13.7.2.1. pull
Images pulled from registries come with OCI metadata, i.e. a “config blob”.
This is stored verbatim in /ch/config.pulled.json
for debugging.
Charliecloud metadata, which includes a translated subset of the OCI config,
is kept up to date in /ch/metadata.json
.
13.7.2.2. push
Image registries expect a config blob at push time. This blob consists of both OCI runtime and image specification information.
OCI run-time and image documentation:
Since various OCI features are unsupported by Charliecloud, we push only
what is necessary to satisfy general image registry requirements.
The pushed config is created on the fly, referencing the image’s metadata and layer tar hash. For example, including commentary:
{
"architecture": "amd64",
"charliecloud_version": "0.26",
"comment": "pushed with Charliecloud",
"config": {},
"container_config": {},
"created": "2021-12-10T20:39:56Z",
"os": "linux",
"rootfs": {
"diff_ids": [
"sha256:607c737779a53d3a04cbd6e59cae1259ce54081d9bafb4a7ab0bc863add22be8"
],
"type": "layers"
},
"weirdal": "yankovic"
The fields above are expected by the registry at push time, with the exception
of charliecloud_version
and weirdal
, which are Charliecloud
extensions.
"history": [
{
"created": "2021-11-17T02:20:51.334553938Z",
"created_by": "/bin/sh -c #(nop) ADD file:cb5ed7070880d4c0177fbe6dd278adb7926e38cd73e6abd582fd8d67e4bbf06c in / ",
"empty_layer": true
},
{
"created": "2021-11-17T02:20:51.921052716Z",
"created_by": "/bin/sh -c #(nop) CMD [\"bash\"]",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:08Z",
"created_by": "FROM debian:buster",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "RUN ['/bin/sh', '-c', 'apt-get update && apt-get install -y bzip2 wget && rm -rf /var/lib/apt/lists/*']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "WORKDIR /usr/local/src",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "ARG MC_VERSION='latest'",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "ARG MC_FILE='Miniconda3-latest-Linux-x86_64.sh'",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:21Z",
"created_by": "RUN ['/bin/sh', '-c', 'wget -nv https://repo.anaconda.com/miniconda/$MC_FILE']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'bash $MC_FILE -bf -p /usr/local']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'rm -Rf $MC_FILE']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'which conda && conda --version']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:34Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda config --set auto_update_conda False']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:34Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda config --add channels conda-forge']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:07Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda install --yes obspy']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:07Z",
"created_by": "WORKDIR /",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "RUN ['/bin/sh', '-c', 'wget -nv http://examples.obspy.org/RJOB_061005_072159.ehz.new']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "COPY ['hello.py'] -> '.'",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "RUN ['/bin/sh', '-c', 'chmod 755 ./hello.py']"
}
],
}
The history section is collected from the image’s metadata, and
empty_layer is added to all entries except the last to represent a
single-layer image. This is needed because Quay checks that the number of
non-empty history entries matches the number of pushed layers.
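That marking step can be sketched as follows (our illustration, not the actual ch-image code):

```python
def mark_single_layer(history):
   # Every entry except the last gets empty_layer, so the count of
   # non-empty entries equals the single pushed layer.
   for entry in history[:-1]:
      entry["empty_layer"] = True
   return history

h = mark_single_layer([{"created_by": "FROM debian:buster"},
                       {"created_by": "RUN chmod 755 ./hello.py"}])
print([e.get("empty_layer", False) for e in h])  # [True, False]
```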
13.8. Miscellaneous notes
13.8.1. Updating bundled Lark parser
In order to change the version of the bundled Lark parser, you must modify
multiple files. To find them, e.g. for version 1.1.9 (the regex is hairy
to catch both dot notation and tuples, but not the list of filenames in
lib/Makefile.am):
$ misc/grep -E '1(\.|, )1(\.|, )9($|\s|\))'
What to do in each location should either be obvious or commented.