User:Juippis/The ultimate testing system with lxd

The ultimate way of testing ebuild contributions.

Summary: You keep a base container updated and in a tidy condition. You make copies of the base container, use the copies to test ebuild contributions, and delete the copies when you're done. This is automated as much as possible; in an ideal situation, all you ever have to do is give one command.

You can of course also use this build environment to test your own ebuild modifications before pushing. For more information, please read the pros & cons list below to get a better idea.

Installation
We're going to need LXD installed on the host system.
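On Gentoo, this is a short sequence; a minimal sketch, assuming OpenRC (the package lives in app-containers in the current tree):

```shell
# Install LXD and start its daemon at boot (OpenRC assumed)
emerge --ask app-containers/lxd
rc-update add lxd default
rc-service lxd start
# Run the interactive first-time setup (storage pool, network bridge)
lxd init
```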

You should expect to need at least 2-4 GB of space per container, especially if they pull in rust-bin, gentoo-kernel-bin, etc. Therefore it might be wise to symlink the storage directory somewhere with more space available. Using an SSD/NVMe drive is heavily suggested, as it makes all operations happen in an instant compared to HDDs.
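For example, assuming LXD stores its data under /var/lib/lxd and /mnt/big is the roomier filesystem (both paths are examples; do this before first use):

```shell
# Relocate LXD's data directory onto a larger disk
mkdir -p /mnt/big/lxd
ln -s /mnt/big/lxd /var/lib/lxd
```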

Configuring lxd
Add your user to the newly created lxd group, and make sure your chosen lxd directory has the proper permissions for this user.

We will use a binhost on localhost, shared between the host and the container. Set up subuid & subgid so the binpkg repository can be accessed both from inside and outside the container.
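A sketch of the idea, assuming your user is larry with UID/GID 1000 (both are examples; adjust to your system):

```shell
# Let your user manage containers (log out and back in afterwards)
usermod -aG lxd larry
# Allow LXD to map your UID/GID into containers...
echo "root:1000:1" >> /etc/subuid
echo "root:1000:1" >> /etc/subgid
# ...and map your user onto the container's user, so the shared
# binpkg directory is writable from both sides
# (run this once the container exists)
lxc config set my-test-container raw.idmap "both 1000 1000"
```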

Getting the correct container image
Make sure to choose an image suited for your testing needs.

This will download a default gentoo-x86_64 container image, set it up, name it my-test-container, and start it. You can check that it works:
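A sketch of the commands; the image alias is an assumption, so browse what's available with `lxc image list images:gentoo` first:

```shell
# Download and start a Gentoo container (image alias is an example)
lxc launch images:gentoo/openrc my-test-container
lxc list
# Quick sanity check from inside the container
lxc exec my-test-container -- emerge --info
```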

But for now, we'll want to turn it off so we can configure it properly.
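For instance:

```shell
lxc stop my-test-container
```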

default profile
You can edit the 'default' profile which, by default, gets used by all containers and contains some necessary options.
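For reference, a freshly initialized default profile typically looks something like this (yours may differ depending on your `lxd init` answers):

```shell
lxc profile show default
# Typical output:
# config: {}
# description: Default LXD profile
# devices:
#   eth0:
#     name: eth0
#     network: lxdbr0
#     type: nic
#   root:
#     path: /
#     pool: default
#     type: disk
# name: default
lxc profile edit default
```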

Note especially the network part; LXD manages its own network bridge (lxdbr0 by default).

Accessing display from container to test GUI runtime
Edit the config file of your container.
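One way to do it, sketched with lxc commands instead of editing the file by hand (the X11 socket path and DISPLAY value are assumptions for a typical X11 host):

```shell
# Share the host's X11 socket with the container
lxc config device add my-test-container xsocket disk \
    source=/tmp/.X11-unix path=/tmp/.X11-unix
# Point clients inside the container at the host's display
lxc config set my-test-container environment.DISPLAY :0
# On the host, allow local connections (coarse; tighten as needed)
xhost +local:
```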

Sharing host's disk to container
With the example above, we're sharing our host's distfiles and binpkg directories with the containers.
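A sketch using lxc disk devices, assuming the default Gentoo locations /var/cache/distfiles and /var/cache/binpkgs:

```shell
lxc config device add my-test-container distfiles disk \
    source=/var/cache/distfiles path=/var/cache/distfiles
lxc config device add my-test-container binpkgs disk \
    source=/var/cache/binpkgs path=/var/cache/binpkgs
```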

Launching the container
Now that we've configured the relevant parts, we can launch the container. We'll construct the base container image that is kept up-to-date and clean. We'll use that to create discardable copies, where the testing happens.
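Assuming the container is named my-test-container as above:

```shell
lxc start my-test-container
# Get a shell inside the container
lxc exec my-test-container -- /bin/bash
```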

Set up git repositories
Or, alternatively, the latest portage tree from any of the mirrors.
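Inside the container, either of these works; a sketch, with the repository at the default Gentoo location:

```shell
# Git checkout of the main tree:
git clone --depth=1 https://github.com/gentoo/gentoo.git /var/db/repos/gentoo
# ...or a snapshot from a mirror:
emerge-webrsync
```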

Edit relevant /etc/portage files
Edit where relevant.

This will install rust-bin instead of dev-lang/rust if a dependency pulls it.
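One way to get that effect is to mask the from-source compiler so only the binary package can satisfy the dependency (an assumption on my part, not necessarily the exact setup used here):

```shell
# /etc/portage/package.mask/rust
dev-lang/rust
```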

Edit where relevant. These are just "common requirements" I run into all the time. Some packages have especially nasty REQUIRED_USE constraints that stop the automated tests.

Optional. I find that ~ruby causes a lot of problems for automation to deal with.

Always find the latest here.

Binhost / binpkgs
With the above settings we enable binpkgs to be generated, and used, for everything except the packages we're actually testing. This will speed up the testing process immensely. Make sure your user has the correct permissions on the host to be able to write to the shared binpkg directory.
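The relevant make.conf bits might look like this (a sketch; pkg-testing-tool itself builds the packages under test from source):

```shell
# /etc/portage/make.conf (container) - binpkg-related parts only
FEATURES="buildpkg getbinpkg"
PKGDIR="/var/cache/binpkgs"
EMERGE_DEFAULT_OPTS="--usepkg=y --binpkg-respect-use=y"
```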

Update your container
Start the update.
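Inside the container, the usual world update:

```shell
emerge --sync
emerge --verbose --update --deep --newuse @world
emerge --depclean
```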

Let's use the latest gcc.
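For example, with gcc-config (the profile number is an example; pick the newest one from the list):

```shell
gcc-config --list-profiles
gcc-config 2          # select the newest profile from the list above
source /etc/profile
```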

Set up pkg-testing-tool
Our testing depends heavily on https://github.com/slashbeast/pkg-testing-tools.
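One way to install it inside the container, straight from the repository (pip usage is an assumption; check the project's README for the preferred method):

```shell
git clone https://github.com/slashbeast/pkg-testing-tools
cd pkg-testing-tools
pip install --user .
```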

Finishing touches for your container
Run it once to submit the initial package list; from then on, running it from a snapshot container or the main container will only submit the list of packages modified compared to the base image.
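The underlying idea can be sketched like this; qlist comes from app-portage/portage-utils, and the file paths are hypothetical:

```shell
# In the base container, record the installed-package list once:
#   qlist -I | sort > /var/tmp/pkgs-base.txt
# In a copy, print only packages added relative to the base:
#   qlist -I | sort | comm -13 /var/tmp/pkgs-base.txt -
# Demo of the comparison step with sample data:
printf 'app-misc/a\napp-misc/b\n' > base.txt
printf 'app-misc/a\napp-misc/b\napp-misc/c\n' > now.txt
comm -13 base.txt now.txt    # → app-misc/c
```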

Log out and turn off the container.

(Ctrl+d works too)

Automatic maintain scripts
Write a new script to your user's  that ideally gets run as a cron job 1-4 times a day, depending on your needs. Note that running this script requires your main container to be turned off, as it should be unless you're doing manual maintenance. If you test packages more sporadically, the script can obviously be called manually whenever you begin working with PRs. If you work with your containers daily and update the base image daily, this process is relatively fast. It will delete all inactive copies, as that's the desired behavior.
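A hypothetical sketch of such a script (container name, script path, and emerge options are all examples):

```shell
#!/bin/bash
# lxd-maintain.sh - delete leftover copies, then update the base container.
set -eu
BASE="my-test-container"

# Delete all inactive (stopped) copies left over from previous testing.
for c in $(lxc list -c ns --format csv | grep ',STOPPED' | cut -d, -f1); do
    [ "$c" = "$BASE" ] && continue
    lxc delete "$c"
done

# Update the base container, then shut it down again.
lxc start "$BASE"
lxc exec "$BASE" -- emerge --sync
lxc exec "$BASE" -- emerge --quiet --update --deep --newuse @world
lxc exec "$BASE" -- emerge --quiet --depclean
lxc stop "$BASE"
```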

Optional: If you'd like, now's a good time to make a snapshot of your base image. Depending on your workflow and use cases, this is most likely not needed though.

You can use the snapshot should you accidentally mess with the base image, or if you want a single image to work on. But it requires you to keep the snapshot updated too.

cron job
Use your desired cron system to make the maintenance script a cron job.
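For example, with a classic crontab (script path and times are examples):

```shell
# crontab -e
0 6,18 * * * /home/larry/bin/lxd-maintain.sh
```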

PR testing script
Intended to be used with GitHub pull requests, but it can be modified to work with any git-format patches, such as some Bugzilla attachments.

By default the script creates a new discardable snapshot for each PR, but you can obviously do all testing in a single discardable snapshot image to save space and possibly time.
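A heavily simplified, hypothetical sketch of the flow (GitHub serves any PR as a git-format patch by appending .patch to its URL):

```shell
#!/bin/bash
# test-pr.sh - test Gentoo PR number $1 in a throwaway container copy.
set -eu
PR="$1"
BASE="my-test-container"
COPY="pr-${PR}"

# Create and start a discardable copy of the base container.
lxc copy "$BASE" "$COPY"
lxc start "$COPY"

# Fetch the PR as a git-format patch and apply it to the tree inside.
curl -fsSL "https://github.com/gentoo/gentoo/pull/${PR}.patch" \
    | lxc exec "$COPY" -- git -C /var/db/repos/gentoo am -

# ... run pkg-testing-tool against the changed packages here ...

lxc stop "$COPY"
```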

Testing that everything works!
Choose any PR to your liking from https://github.com/gentoo/gentoo/pulls. Do note that it can't be CLOSED, in other words already merged into the main tree, as that will cause a patch merging collision. It should also modify an .ebuild file, ideally adding a new .ebuild to the tree to be tested.

You can always, at any given time, log in to the container and inspect or test everything manually. This is especially handy if you spot a mistake in the .ebuild file and need to verify that your modification works.
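For example (the container name is an example):

```shell
lxc exec pr-12345 -- /bin/bash
```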

Examples
A couple of real-life examples with full output below.

Example 1: Simple PR with no deps. Gets run with  and .

All looks good. Proceed to merge using pram (or, if you share your ::gentoo repository between the container and the host, just merge directly).

Example 2: PR bumping a package with some USE flags, and a test phase available.

All looks good. Proceed to merge using pram (or, if you share your ::gentoo repository between the container and the host, just merge directly).

Example 3: A PR with multiple commits / packages:

Example 4: What it looks like when emerging a contribution fails.

Acknowledgements

 * The scripts are most likely awful, but after I got them to work, I haven't looked back. https://imgs.xkcd.com/comics/is_it_worth_the_time.png
 * The  itself can be used to build a tinderbox checking all commits, but an automated bug-reporting system would be needed then. However, it might be good for overlays. You'd need to store the git checkout of a previous run, then compare the new one against it.
 * Could run  before applying patches, but it's not needed for my own workflow since I usually do all testing once a day.
 * I'm making this public to hear feedback!