User:Maffblaster/Drafts/Advanced ebuild testing guide

From Gentoo Wiki

This guide provides instructions for creating chroot test environments for ebuild development. Although not the only form of testing, chroots are one viable option for testing ebuilds in a bare stage 3 environment. This helps the ebuild developer determine whether a package's dependency graph has been defined appropriately.

Introduction

Chroots have been around for a long time; the first chroot system call was introduced in Version 7 Unix in 1979.[1] They are essential to the Gentoo installation process (those who have followed the Gentoo Handbook have already worked within a chroot environment). During Gentoo's early development, the term "stage tarball" was coined to help the Gentoo Release Engineering team define which tasks still needed completing in a chroot environment. Read up on the four levels of stage tarballs in the stage tarball article.

Configuration

Official Gentoo stage 3 tarballs

The officially generated stage 3 tarballs from the Release Engineering project are perfect specimens to use for creating new chroots. They can generally be obtained from the following links:

Architecture Profile Init system Lib feature Download link
amd64 17.1 OpenRC Multilib https://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-openrc/
amd64 17.1 systemd Multilib https://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3-amd64-systemd/
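
To script the download, Release Engineering also publishes a latest-stage3-*.txt pointer file alongside the autobuilds. The sketch below assumes the file's usual layout (comment lines starting with #, then a line whose first field is the tarball path); check it against the real file before relying on it:

```shell
# Sketch: resolve the newest amd64/openrc stage 3 tarball path from the
# latest-stage3 pointer file. The pointer file's layout is an assumption.
MIRROR="https://distfiles.gentoo.org/releases/amd64/autobuilds"
POINTER="latest-stage3-amd64-openrc.txt"

# Skip comment lines, take the first field of the first data line.
resolve_stage3() {
    grep -v '^#' | awk 'NF >= 1 { print $1; exit }'
}

# Example with a stand-in pointer file; a real run would instead use:
#   wget -qO- "${MIRROR}/${POINTER}" | resolve_stage3
path=$(printf '%s\n' \
    '# Latest as of an example date' \
    '20240101T000000Z/stage3-amd64-openrc-20240101T000000Z.tar.xz 250000000' \
    | resolve_stage3)
echo "${MIRROR}/${path}"
```

The resulting URL can then be handed to wget or curl for the actual download.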
Tip
Depending on how many chroots are created, they can start eating up disk space quickly. It is a good idea to enable deduplication when using a filesystem that supports it.

For the remainder of this guide, it will be presumed the reader is using the btrfs filesystem for the partition containing the ebuild test chroots. If btrfs cannot be used, any Linux-friendly filesystem will work. Space is cheap these days!
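
As one example of deduplication on btrfs, an out-of-band tool such as sys-fs/duperemove can be pointed at the chroot tree. The tool choice and flags here are a suggestion, not part of the original workflow; the command is printed as a dry run for review:

```shell
# Dry-run sketch: print the deduplication command for review.
# Assumes sys-fs/duperemove is installed; -d submits the dedupe requests,
# -h prints human-readable sizes, -r recurses. Drop 'echo' to execute as root.
CHROOT_ROOT="/srv/chroots"
CMD="duperemove -dhr ${CHROOT_ROOT}"
echo "${CMD}"
```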

After the current stage 3 files have been downloaded, create a nicely laid out directory structure and a couple of subvolumes as a target for the tarballs to be extracted:

user $mkdir -p /srv/chroots/base/amd64/
user $sudo btrfs subvolume create /srv/chroots/base/amd64/openrc
user $sudo btrfs subvolume create /srv/chroots/base/amd64/systemd

Now extract the tarballs to the appropriate directory. Be careful to preserve extended filesystem attributes and access control lists:

root #tar --extract --xz --preserve-permissions --xattrs --acls --numeric-owner --verbose --file /path/to/downloaded/stage3-amd64-openrc*.tar.xz --directory /srv/chroots/base/amd64/openrc
root #tar --extract --xz --preserve-permissions --xattrs --acls --numeric-owner --verbose --file /path/to/downloaded/stage3-amd64-systemd*.tar.xz --directory /srv/chroots/base/amd64/systemd

Now that the base chroots are created, this may be a stopping point for some readers. If the ebuild(s) to be tested are not specific to a certain desktop profile, only a few more steps are needed. Jump down to Mounting the Gentoo ebuild repository.

Snapshotting system profiles

Those who are testing ebuilds with graphical components have a bit more work to do in order to prepare a sound test environment. It is time to create snapshots for the relevant system profiles. Suppose the ebuild(s) being tested run on the GTK framework. It would be wise at this point to create a couple more base snapshots, this time with the GNOME desktop in mind:

user $mkdir -p /srv/chroots/base/desktop/gnome
user $sudo btrfs subvolume snapshot /srv/chroots/base/amd64/openrc /srv/chroots/base/desktop/gnome/openrc
user $sudo btrfs subvolume snapshot /srv/chroots/base/amd64/systemd /srv/chroots/base/desktop/gnome/systemd

Mounting the Gentoo ebuild repository

Next, these snapshots will need to be updated, but before that can be done the host machine's main Gentoo repository must be shared to them:

user $sudo mkdir -p /srv/chroots/base/desktop/gnome/openrc/var/db/repos/gentoo
user $sudo mkdir -p /srv/chroots/base/desktop/gnome/systemd/var/db/repos/gentoo
user $sudo mount --rbind /var/db/repos/gentoo /srv/chroots/base/desktop/gnome/openrc/var/db/repos/gentoo
user $sudo mount --rbind /var/db/repos/gentoo /srv/chroots/base/desktop/gnome/systemd/var/db/repos/gentoo

Chroot into each location using pychroot (dev-python/pychroot) and make sure each is up to date:

user $sudo pychroot /srv/chroots/base/desktop/gnome/openrc
user $sudo pychroot /srv/chroots/base/desktop/gnome/systemd

Be sure to source /etc/profile after the chroot!

root #source /etc/profile

Finally, use eselect to set the appropriate profile (base this on the snapshot's location) and rebuild the @world set. For example, for the desktop/gnome/systemd snapshot:

root #eselect profile set default/linux/amd64/17.1/desktop/gnome/systemd
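
The "rebuild the @world set" step can look like the following; the exact emerge options are a matter of preference, and the command is printed as a dry run for review:

```shell
# Dry-run sketch: the world rebuild to run inside the chroot after
# switching profiles. --changed-use could be substituted for --newuse.
# Drop the 'echo' wrapper to execute.
CMD="emerge --ask --verbose --update --deep --newuse @world"
echo "${CMD}"
```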

Mounting the test ebuild repository

The final step before complete testing is to share the test ebuild repository into the chroot. This is done quickly and easily by recursively bind mounting the repository and then creating a repos.conf entry for the test repository within the chroot.

Open a terminal outside the chroot and run:

user $sudo mkdir --parents /srv/chroots/base/desktop/gnome/systemd/var/db/repos/test_repo
user $sudo mount --rbind /path/to/test/repo/directory /srv/chroots/base/desktop/gnome/systemd/var/db/repos/test_repo
FILE /etc/portage/repos.conf/test_repo.conf
Create a simple test_repo.conf ebuild repository entry inside the chroot:
[test]
location = /var/db/repos/test_repo
sync-type = git
sync-uri = <ENTER_REPO_GIT_URI>
auto-sync = no

Start testing ebuilds!
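
As a sketch, an ebuild from the test repository can then be installed by restricting the atom to that repository. The atom app-misc/hello below is a placeholder, and ::test matches the section name in the repos.conf entry above:

```shell
# Dry-run sketch: build a hypothetical package from the test repository.
# 'app-misc/hello' is a placeholder atom; drop the 'echo' to execute.
ATOM="app-misc/hello"
CMD="emerge --ask --verbose ${ATOM}::test"
echo "${CMD}"
```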

Custom stage 3 tarball

Have a currently running system that would be nice to use as a test environment? Use the following tar command on the root filesystem to compress it into a stage 3 (or stage 4) tarball. Be sure to name it appropriately:

root #Coming soon...
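
Until that command lands, here is a rough, untested sketch of what such a tar invocation could look like. The excluded paths and output location are assumptions; review the printed command before running it as root:

```shell
# Dry-run sketch: archive the running system into a stage-4-style tarball.
# Pseudo-filesystems and volatile paths are excluded; adjust to taste.
STAMP=$(date +%Y%m%d)
CMD="tar --create --xz --preserve-permissions --xattrs --acls --numeric-owner \
--one-file-system \
--exclude=./proc --exclude=./sys --exclude=./dev --exclude=./run --exclude=./tmp \
--file=/srv/stage4-amd64-${STAMP}.tar.xz --directory=/ ."
echo "${CMD}"
```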

Snapshottable filesystems (btrfs, ZFS)

When using a filesystem that has the capability to create snapshots, it is possible to quickly generate chroot test environments.

Btrfs

To make a 'chroot' snapshot of the currently running system with btrfs, issue:

root #btrfs subvolume snapshot <root subvolume> <destination location>

Then simply run the mount commands for the appropriate virtual filesystems.
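
The "appropriate virtual filesystems" are typically the same set the Handbook mounts before chrooting (pychroot handles these automatically when it is used). The target path below is an example; the commands are printed for review rather than executed:

```shell
# Sketch of the usual pseudo-filesystem mounts for a chroot snapshot.
# TARGET is an example path from this guide; replace 'echo "${cmd}"'
# with 'eval "${cmd}"' (as root) to actually perform the mounts.
TARGET="/srv/chroots/base/amd64/openrc"
for cmd in \
    "mount --types proc /proc ${TARGET}/proc" \
    "mount --rbind /sys ${TARGET}/sys" \
    "mount --make-rslave ${TARGET}/sys" \
    "mount --rbind /dev ${TARGET}/dev" \
    "mount --make-rslave ${TARGET}/dev" \
    "mount --bind /run ${TARGET}/run"
do
    echo "${cmd}"
done
```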

ZFS

To take a snapshot of a single dataset in ZFS, run:

root #zfs snapshot <dataset>@<label>

To snapshot every dataset under a particular dataset, add -r:

root #zfs snapshot -r <dataset>@<label>

ZFS snapshots are read-only. To create a writable dataset, use the clone command on a snapshot. For example, given a tank/gentoo/chroot dataset to clone from, first take a snapshot to preserve a read-only copy, then create as many clones as desired:

root #zfs snapshot tank/gentoo/chroot@testingBase
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container1
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container2
root #zfs clone tank/gentoo/chroot@testingBase tank/gentoo/container3

To later delete the chroot, all of its snapshots, and all of its dependent clones, a single command suffices:

root #zfs destroy -R tank/gentoo/chroot

Containers

LXC

  • https://github.com/globalcitizen/lxc-gentoo
  • https://github.com/specing/lxc-gentoo

See also

  • Stable request — the procedure for moving an ebuild from testing to stable.

References