TAPPER is an infrastructure that consists of 3 important layers:
The layers work completely autonomously, though they can also be connected together.
To fully exploit the system, the tasks you need to learn are:
Person in charge: Steffen Schwigon
This chapter describes what you need to do in order to get a new machine into the Tapper test rotation.
In the osrc network this means attaching it to osrc_rst, which is the reset switch tool: a physical device plus the software to trigger the reset.
Person in charge: Jan Krocker
title Automatic test
tftpserver 165.204.15.71
configfile (nd)/tftpboot/FOOBAR.lst
The IP address is that of our application server bancroft.
option configfile "/tftpboot/cfgs/FOOBAR.lst";
kill -HUP $(pidof dhcpd)
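For orientation, a hedged sketch of how that option might be embedded in a dhcpd.conf host entry (host name, MAC address and fixed address are placeholders; only the configfile option is taken from above, and its option declaration is assumed to exist elsewhere in the config):

host foobar {
    hardware ethernet 00:11:22:33:44:55;             # placeholder MAC
    fixed-address 165.204.15.99;                     # placeholder IP
    option configfile "/tftpboot/cfgs/FOOBAR.lst";   # as shown above
}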
Person in charge: Maik Hentsche
If not already listed at http://bancroft.amd.com/hardwaredb/, contact Jan Krocker.
Person in charge: Jan Krocker
The steps up to here are generally enough to put ‘preconditions’ for this host into the Tapper database and thus use the host for tests.
Additionally, you can register the host in ‘temare’.
temare is the Test Matrix Replacement program that schedules tests according to our test plan. If you want tests scheduled for the new machine then follow these steps:
Set PYTHONPATH to include the temare src directory:
export PYTHONPATH=$PYTHONPATH:/home/tapper/temare/src
Add the host to temare:
/home/tapper/temare/temare hostadd $hostname $memory $cores $bitness
/home/tapper/temare/xentest.pl
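A hedged example invocation with made-up values (a 64-bit host named bullock with 4 cores, assuming memory is given in MB; adjust to the real machine):

export PYTHONPATH=$PYTHONPATH:/home/tapper/temare/src
/home/tapper/temare/temare hostadd bullock 8192 4 64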
Person in charge: Maik Hentsche, Frank Arnold
Example:
1..3
ok
ok
not ok
Example:
1..3
ok 1
ok 2
not ok 3
Example:
1..3
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line
Example:
1..3
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line # TODO
Example:
1..3
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line # TODO just specced
Example:
1..3
ok 1 - input file opened
ok 2 - file content
ok 3 - last line # SKIP missing prerequisites
Example:
1..3
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line # TODO just specced
# Failed test 'last line'
# at t/data_dpath.t line 410.
#          got: 'foo'
#     expected: 'bar'
Example:
1..3
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line # TODO just specced
  ---
  message: Failed test 'last line' at t/data_dpath.t line 410.
  severity: fail
  data:
    got: 'foo'
    expect: 'bar'
  ...
Example:
1..3
# Tapper-Suite-Name: Foo-Bar
# Tapper-Suite-Version: 2.010013
ok 1 - input file opened
ok 2 - file content
not ok 3 - last line # TODO just specced
These are the headers that apply to the whole report:
# Tapper-suite-name:               -- suite name
# Tapper-suite-version:            -- suite version
# Tapper-machine-name:             -- machine/host name
# Tapper-machine-description:      -- more details to machine
# Tapper-starttime-test-program:   -- start time for complete test (including guests)
# Tapper-endtime-test-program:     -- end time for complete test (including guests)
Example:
1..2
# Tapper-section: arithmetics
ok 1 add
ok 2 multiply
1..1
# Tapper-section: string handling
ok 1 concat
1..3
# Tapper-section: benchmarks
ok 1
ok 2
ok 3
These are the headers that apply to single sections:
# Tapper-ram:                      -- memory
# Tapper-cpuinfo:                  -- what CPU
# Tapper-uname:                    -- kernel information
# Tapper-osname:                   -- OS information
# Tapper-uptime:                   -- uptime, maybe the test run time
# Tapper-language-description:     -- for software tests, like "Perl 5.10", "Python 2.5"
# Tapper-xen-version:              -- Xen version
# Tapper-xen-changeset:            -- particular Xen changeset
# Tapper-xen-dom0-kernel:          -- the kernel version of the dom0
# Tapper-xen-base-os-description:  -- more verbose OS information
# Tapper-xen-guest-description:    -- description of a guest
# Tapper-xen-guest-test:           -- the started test program
# Tapper-xen-guest-start:          -- start time of test
# Tapper-xen-guest-flags:          -- flags used for starting the guest
# Tapper-kvm-module-version:       -- version of KVM kernel module
# Tapper-kvm-userspace-version:    -- version of KVM userland tools
# Tapper-kvm-kernel:               -- version of kernel
# Tapper-kvm-base-os-description:  -- more verbose OS information
# Tapper-kvm-guest-description:    -- description of a guest
# Tapper-kvm-guest-test:           -- the started test program
# Tapper-kvm-guest-start:          -- start time of test
# Tapper-kvm-guest-flags:          -- flags used for starting the guest
# Tapper-flags:                    -- flags that were used to boot the OS
# Tapper-reportcomment:            -- freestyle comment
Such TAP output can also be run through the standard prove tool:
$ prove t/*.t
t/00-load.........ok
t/boilerplate.....ok
t/pod-coverage....ok
All tests successful.
Files=4, Tests=6,  0 wallclock secs ( 0.05 usr  0.00 sys +  0.28 cusr  0.05 csys =  0.38 CPU)
Result: PASS
# TODO/SKIP
In a Xen environment there are many guests, each running some test suite, but they don't know about each other.
The only thing that combines them is a common testrun-id. If each suite just reports this testrun-id as the group id, then the receiving side can combine all those autonomously reporting suites back together by that id.
So each suite should simply output
# Tapper-reportgroup-testrun: 1234
with 1234 being a testrun ID that is available via the environment variable $TAPPER_TESTRUN. This variable is provided by the automation layer.
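A minimal sketch of how a suite wrapper could emit that header, assuming the automation layer has set $TAPPER_TESTRUN:

# emit the report group header so the receiver can regroup this report
echo "# Tapper-reportgroup-testrun: $TAPPER_TESTRUN"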
If the grouping id is not a testrun id, e.g., because you have set up a Xen environment without the TAPPER automation layer, then generate one random value once in dom0 by yourself and use that same value inside all guests with the following header:
TAPPER_REPORT_GROUP=`date | md5sum | awk '{print $1}'`
# Tapper-reportgroup-arbitrary: $TAPPER_REPORT_GROUP
How that value gets from dom0 into the guests is left as an exercise, e.g., via preparing the init scripts in the guest images before starting them. That's not the problem of the test suite wrappers; they should only evaluate the environment variable TAPPER_REPORT_GROUP.
Person in charge: Frank Becker
This section is about the test suites and wrappers around existing suites. These wrappers are part of our overall test infrastructure.
It's basically about the middle part in the following picture:
[[image:tapper_architecture_overview.png | 800px]]
We have wrappers for existing test and benchmark suites.
Wrappers just run the suites as a user would manually run them but additionally extract results and produce TAP (Test Anything Protocol).
We have some specialized, small test suites that complement the general suites, e.g. for extracting meta information or parsing logs for common problems.
If the environment variables TAPPER_REPORT_SERVER and TAPPER_REPORT_PORT are set, the wrappers report their results by piping their TAP output there; otherwise they print to STDOUT.
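A minimal sketch of that reporting logic in a wrapper, assuming a hypothetical run_suite helper that prints the suite's TAP on stdout and that netcat is available:

# hypothetical helper: runs the suite and prints its TAP on stdout
run_suite() { /opt/tapper/bin/my_suite.sh; }

if [ -n "$TAPPER_REPORT_SERVER" ] && [ -n "$TAPPER_REPORT_PORT" ]; then
    # report to the TAP receiver
    run_suite | netcat -w1 "$TAPPER_REPORT_SERVER" "$TAPPER_REPORT_PORT"
else
    # no server configured, print to STDOUT
    run_suite
fi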
See also http://www.bitmover.com/lmbench/.
See also http://freshmeat.net/projects/kernbench/.
See also http://sourceforge.net/projects/va-ctcs/.
See also http://ltp.sourceforge.net/.
The TAPPER automation layer provides some environment variables that the wrappers can use:
TAPPER_REPORT_SERVER
TAPPER_REPORT_PORT
These variables should be used in the TAP of the suite as Tapper headers. Important use-case is "report groups", see next chapter.
Person in charge: Frank Becker
The central thing that is needed before a test is run is a so-called precondition. Creating these preconditions is the main task when using the automation framework.
Most of the preconditions describe packages that need to be installed. Other preconditions describe how subdirectories should be copied or scripts executed.
A precondition can depend on other preconditions, leading to a tree of preconditions that will be installed from the leaves to the top.
We store preconditions in the database and assign testruns to them (also in the database).
Usually preconditions are developed in a (temporary) file and then entered into the database with a tool. After that the temporary file can be deleted.
Preconditions can be kept in files to re-use them when creating testruns but that's not needed for archiving purposes, only for creation purposes.
There is, however, another mechanism on top of normal preconditions: Macro Preconditions. These allow bundling several preconditions into a common use-case with placeholders in them; see section Macro Preconditions.
These macro preconditions should be archived, as they are only template files which are rendered into final preconditions. Only the final preconditions are stored in the database.
Macro preconditions can be stored in /data/bancroft/tapper/live/repository/macropreconditions/.
Some precondition types can contain other, simpler precondition types. To distinguish them we call them Highlevel preconditions and Action preconditions, accordingly.
The following action precondition types are allowed:
Currently only the following high level precondition type is allowed:
High level preconditions both define things themselves and can also contain other preconditions.
They are handled with some effort to Do The Right Thing, i.e., a root image defined in the high level precondition is always installed first. All other preconditions are installed in the order defined by their tree structure (depth-first).
We describe preconditions in YAML files (http://www.yaml.org/).
All preconditions have at least a key
precondition_type: TYPE
and optionally
name: VERBOSE DESCRIPTION
shortname: SHORT DESCRIPTION
The remaining keys depend on the TYPE.
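For illustration, a hedged minimal sketch combining these common keys with a type-specific key (the filename is taken from the package example below; name and shortname are made up):

precondition_type: package
name: Linux kernel 2.6.27.7 source package
shortname: linux-2.6.27.7
filename: /data/bancroft/tapper/live/repository/packages/linux/linux-2.6.27.7.tar.bz2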
---
precondition_type: installer_stop
---
precondition_type: grub
config: |
  title Linux
  root $grubroot
  kernel /boot/vmlinuz root=$root
---
filename: /data/bancroft/tapper/live/repository/packages/linux/linux-2.6.27.7.tar.bz2
precondition_type: package
---
precondition_type: copyfile
protocol: nfs
source: osko:/export/image_files/official_testing/README
dest: /usr/local/share/tapper/perl510
---
precondition_type: fstab
line: "165.204.85.14:/vol/osrc_vol0 /home nfs auto,defaults 0 0"
This is usually the root image that is unpacked to a partition (in contrast to a guest file that is just put in place).
---
precondition_type: image
mount: /
partition: testing
image: /data/bancroft/tapper/live|development/repository/images/rhel-5.2-rc2-32bit.iso,.tar.gz,.tgz,.tar,.tar.bz2 (OPTIONAL)
---
precondition_type: repository
type: git
url: git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm.git
target: kvm
revision: c192a1e274b71daea4e6dd327d8a33e8539ed937
It is typically contained implicitly in the abstract precondition virt, but it can also be defined explicitly, e.g., for kernel tests.
Creates config for PRC. This config controls what is to be run and started when the machine boots.
Use the guest number if it is a guest; for the host system use 0.
Started after boot by the PRC.
The wanted runtime in seconds, i.e., how long the test should run. This value will be used to set the environment variable TAPPER_TS_RUNTIME, which is used by the test suite wrappers.
The maximum time the test program is given to run; after that it is killed (SIGINT, then SIGKILL).
Only used for virtualization tests. Contains an array with one entry per guest that defines how the guest is started. This can be an SVM file for Xen or an executable for KVM.
precondition_type: prc
config:
  runtime: 30
  test_program: /bin/uname_tap.sh
  timeout_after_testprogram: 90
  guests:
    - svm: /xen/images/..../foo.svm
    - svm: /xen/images/..../bar.svm
    - exec: /xen/images/..../start_a_kvm_guest.sh
Defines which program to run at the installation phase.
precondition_type: exec
filename: /bin/some_script.sh
options:
  - -v
  - --foo
  - --bar="hot stuff"
Requests a reboot test and states how often to reboot.
Note: Reboot count of 1 actually means boot two times since the first boot is always counted as number 0.
precondition_type: reboot
count: 2
A virtualization environment.
name: automatically generated Xen test
precondition_type: virt
host:
  preconditions:
    - filename: /data/bancroft/tapper/live/repository/packages/xen/builds/x86_64/xen-3.3-testing/xen-3.3-testing.2009-03-20.18614_f54cf790ffc7.x86_64.tgz
      precondition_type: package
    - filename: /data/bancroft/tapper/live/repository/packages/tapperutils/sles10/xen_installer_suse.tar.gz
      precondition_type: package
    - filename: /bin/xen_installer_suse.pl
      precondition_type: exec
  root:
    precondition_type: image
    partition: testing
    image: /data/bancroft/tapper/live/repository/images/suse/suse_sles10_64b_smp_raw.tar.gz
    mount: /
    arch: linux64
  testprogram:
    execname: /opt/tapper/bin/tapper_testsuite_dom0_meta.sh
    timeout_testprogram: 10800
guests:
  - config:
      precondition_type: copyfile
      protocol: nfs
      name: bancroft:/data/bancroft/tapper/live/repository/configs/xen/001-sandschaki-1237993266.svm
      dest: /xen/images/
      svm: /xen/images/001-sandschaki-1237993266.svm
    root:
      precondition_type: copyfile
      protocol: nfs
      arch: linux64
      name: osko:/export/image_files/official_testing/redhat_rhel4u7_64b_up_qcow.img
      dest: /xen/images/
      mountfile: /xen/images/001-sandschaki-1237993266.img
      mounttype: raw
    testprogram:
      execname: /opt/tapper/bin/py_ltp
      timeout_after_testprogram: 10800
The following options are possible in each precondition. With them you can execute the precondition inside guest images:
mountfile: ...
mountpartition: ...
mounttype:
TODO: is this the same as mountfile, mountpartition?
1. Only mountfile: e.g. a raw image, the file is loop-mounted.
2. Only mountpartition: mount that partition.
3. Image file with partitions: mount the image file and from that only the given partition.
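As a hedged sketch of the third case, reusing the guest image paths from the Xen example later in this document:

mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
mountpartition: p1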
Person in charge: Maik Hentsche
This section describes macro precondition files as they are stored in /data/bancroft/tapper/live/repository/macropreconditions/.
A macro precondition is Perl code.
It contains exactly one hash ref.
The hashref must have a key preconditions pointing to an arrayref with strings in it. Each of these strings is a precondition which is preprocessed with Template-Toolkit.
The hash can contain a key mandatory_fields pointing to an arrayref of fieldnames that are validated when the macro precondition is evaluated.
Macro preconditions are not stored in the database. They are only a tool to ease the creation of preconditions. Only the resulting preconditions are stored in database.
Because the preconditions key contains just an array, a macro precondition can only create a linear list of preconditions, not a tree (as would be possible via pre_preconditions). Therefore you need to order them as a tree would have been walked.
The values for the placeholders can be filled via
tapper-testrun new [all usual options] \
  --macroprecond=FILENAME \
  -DPLACEHOLDER1=VALUE1 \
  -DPLACEHOLDER2=VALUE2 \
  -DPLACEHOLDER3=VALUE3
The FILENAME is a complete filename with absolute path.
The format of a macro precondition is basically just a Perl hashref where the "preconditions" are just an array reference of strings:
preconditions => [
    'macro content 1',
    'macro content 2',
    'macro content 3',
],
mandatory_fields => [ $placeholders ],
which will get eval'ed in Perl.
You can quote the strings with Perl quote operators.
The string content of the preconditions can be any string with placeholders in Template-Toolkit syntax. Here is the same example, more verbose, with the two placeholders "image_file" and "xen_package" in it:
preconditions => [
    '
precondition: foobar1
name: A nice description
dom0:
  root:
    precondition_type: image
    mount: /
    image: [% image_file %]
    partition: /dev/sda2
  preconditions:
    - precondition_type: package
      filename: [% xen_package %]
      path: tapperutils/
      scripts: ~
',
    'macro content 2',
    'macro content 3',
],
mandatory_fields => [ qw(image_file xen_package) ],
The appropriate testrun creation looks like this:
tapper-testrun new ... \
  --macroprecond=FILENAME \
  -Dimage_file=suse/suse_sles10_64b_smp_raw.tar.gz \
  -Dxen_package=xen-3.2_20080116_1546_16718_f4a57e0474af__64bit.tar.gz
mandatory_fields => [ qw(kernelpkg) ],
preconditions => [
    '
arch: linux64
image: suse/suse_sles10_64b_smp_raw.tar.gz
mount: /
partition: testing
precondition_type: image
',
    '
precondition_type: copyfile
name: /data/bancroft/tapper/live/repository/testprograms/uname_tap/uname_tap.sh
dest: /bin/
protocol: local
',
    '
precondition_type: package
filename: [% kernelpkg %]
path: kernel/
',
    '
precondition_type: prc
config:
  runtime: 30
  test_program: /bin/uname_tap.sh
  timeout_after_testprogram: 90
  timeouts:
    - 90
',
]
The test script uname_tap.sh to which the macro precondition refers is just a shell script that examines uname output:
#! /bin/sh
echo "1..2"
echo "# Tapper-Suite-Name: Kernel-Boot"
echo "# Tapper-Suite-Version: 1.00"
echo "# Tapper-Machine-Name: " `hostname`
if [ x`uname` != xLinux ] ; then echo -n "not " ; fi
echo "ok - We run on Linux"
if uname -a | grep -vq x86_64 ; then echo -n "not " ; fi
echo "ok - Looks like x86_64"
Once you have written the macro precondition and the test script, all you need is this command line:
tapper-testrun new \
  --hostname=dickstone \
  --macroprecond=/data/bancroft/tapper/live/repository/macropreconditions/kernel/kernel_boot.mpc \
  -Dkernelpkg=perfmon-682-x86_64.tar.gz
or with some more information (owner, topic):
tapper-testrun new \
  --owner=mhentsc3 \
  --topic=Kernel \
  --hostname=dickstone \
  --macroprecond=/data/bancroft/tapper/live/repository/macropreconditions/kernel/kernel_boot.mpc \
  -Dkernelpkg=perfmon-682-x86_64.tar.gz
Person in charge: Steffen Schwigon
The Web User Interface is a frontend to the Reports database. It provides an overview of reports that came in from several machines and test suites.
It can filter the results by date, machine or test suite, gives a colorful (RED/YELLOW/GREEN) overview of success/failure ratios, and allows zooming into the details of single reports.
To evaluate reported test results in a more programmatic way, have a look into the DPath Query Language that is part of the Reports::API.
The main URL is
http://osrc.amd.com/tapper
TODO
Yet another daemon, the so-called Tapper::Reports::API, runs on the same host as the TAP Receiver. This ‘Reports API’ is meant for everything that needs more than just dropping TAP reports to a port, e.g., some interactive dialog or parameters.
This Tapper::Reports::API listens on port 7358. Its API is modeled after the classic Unix script look & feel, with a first line containing a description of how to interpret the rest of the lines. The first line consists of a shebang (#!), an API command and command parameters. The rest of the file is the payload for the API command.
The syntax of the ‘command params’ varies depending on the ‘api command’ to make each command intuitively usable. Sometimes they are just positional parameters, sometimes they look like the start of a HERE document (i.e., they are prefixed with << as you can see below).
Person in charge: Steffen Schwigon
In this section the raw API is described. That's the way you can use it without any dependencies, except for the minimal ability to talk to a port, e.g., via netcat.
See section Client Utility tapper-api for how to use a dedicated command line utility that makes talking to the reports API easier, but is a dependency that might not be available in your personal test environment.
This API command lets you upload files, aka attachments, to reports. These files are available later through the web interface. Use this to attach log files, config files or console output.
#! upload REPORTID FILENAME [ CONTENTTYPE ]
payload
REPORTID: The id of the report to which the file is assigned.
FILENAME: The name of the file.
CONTENTTYPE: Optional MIME type; defaults to plain; use application/octet-stream to make it downloadable later in a browser.
payload: The raw content of the file to upload.
Just echo the first api-command line and then immediately cat the file content:
$ ( echo "#! upload 552 xyz.tmp" ; cat xyz.tmp ) | netcat -w1 bascha 7358
To query report results you send templates to the API, in which you can use a query language to get report details. This API command is named after the template engine, so that we can provide other template engines as well.
#! mason debug=0 <<ENDMARKER
payload
ENDMARKER
If ‘debug’ is specified and set to 1, any error message that might occur is reported as the result content. If debug is omitted or false and an error occurs, the result is just empty.
You can choose any word instead of ENDMARKER to mark the end of the input, like in HERE documents; usually some word that is not contained in the template payload.
A Mason template. Mason is a template language, see http://masonhq.com. Inside the template we provide a function reportdata to access report data via a query language. See section Reports Query Language for details about this.
This is a raw Mason template:
% my $world = "Mason World";
Hello <% $world %>!
% my @res = reportdata '{ "suite.name" => "perfmon" } :: //tap/tests_planned';
Planned perfmon tests:
% foreach (@res) {
<% $_ %>
% }
If you want to submit such a Mason template you can add the api-command line and the EOF marker like this:
$ EOFMARKER="MASONTEMPLATE".$$
$ payload_file="perfmon_tests_planned.mas"
$ ( echo "#! mason <<$EOFMARKER" ; cat $payload_file ; echo "$EOFMARKER" ) \
    | netcat -w1 bascha 7358
The output of this is the rendered template. You can extend the line to save the rendered result into a file:
$ ( echo "#! mason <<$EOFMARKER" ; cat $payload_file ; echo "$EOFMARKER" ) \
    | netcat -w1 bascha 7358 > result.txt
The answer for this looks like this:
Hello Mason World!
Planned perfmon tests:
3
4
17
The query language is the argument to reportdata as used embedded in the ‘mason’ examples above:
reportdata '{ "suite.name" => "perfmon" } :: //tap/tests_planned'
It consists of two parts, divided by the ‘::’.
We call the first part in braces reports filter and the second part data filter.
The reports filter selects which reports to look at. The expression inside the braces is actually a complete SQL::Abstract expression (http://search.cpan.org/~mstrout/SQL-Abstract/) working internally as a select in the context of the object relational mapper, which targets the table Report with an active JOIN to the table Suite.
All the matching reports are then taken to build a data structure for each one, consisting of the table data and the parsed TAP part, which is turned into a data structure via TAP::DOM (http://search.cpan.org/~schwigon/TAP-DOM/).
The data filter works then on that data structure for each report.
The filter expressions are best described by example:
{ 'id' => 1234 }
{ 'suite_name' => 'oprofile' }
{ 'machine_name' => 'bascha' }
Here the value that you want to select is a structure by itself, consisting of the comparison operator and a time string:
{ 'created_at' => { '<', '2009-04-09 10:00' } }
The data structure that is created for each report can be evaluated using the data filter part of the query language, i.e., everything after the ::. This part is passed through to Data::DPath (http://search.cpan.org/~schwigon/Data-DPath/).
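Putting both parts together, a hedged sketch that combines the filter keys shown above into one query (keys and DPath expression as in the examples; the actual result depends on your data):

reportdata '{ "machine_name" => "bascha", "created_at" => { "<", "2009-04-09 10:00" } } :: //tap/tests_planned'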
tapper-api
There is a command line utility tapper-api that helps with using the API without the need to talk the protocol and fiddle with netcat by yourself.
You can acquire a help page for each sub command:
$ /home/tapper/perl510/bin/tapper-api help upload
prints
tapper-api upload --reportid=s --file=s [ --contenttype=s ]
    --verbose        some more informational output
    --reportid       INT; the testrun id to change
    --file           STRING; the file to upload, use '-' for STDIN
    --contenttype    STRING; content-type, default 'plain', use 'application/octed-stream' for binaries
Use it from the Tapper path, like:
$ /home/tapper/perl510/bin/tapper-api upload \
    --file /var/log/messages \
    --reportid=301
You can also use the special filename - to read from STDIN, e.g., if you need to pipe the output of tools like dmesg:
$ dmesg | /home/tapper/perl510/bin/tapper-api upload \
    --file=- \
    --filename dmesg \
    --reportid=301
TODO
In this chapter we describe how the single features are put together into whole use-cases.
This is a description of how to run Xen tests with Tapper, using SLES10 with one RHEL5.2 guest (64 bit) as an example.
The following mainly applies to manually assigning Xen tests. The SysInt team uses temare to automatically create the steps described here.
We use suse/suse_sles10_64b_smp_raw.tar.gz as Dom0 and osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img as the only guest.
The SuSE image is of precondition type image. Thus its path is relative to /mnt/images, which has bancroft:/data/bancroft/tapper/live/repository/images/ mounted.
The root partition is named in the section ‘root’ of the Xen precondition. Furthermore, you need to define the destination partition to be the Dom0 root. We use /dev/sda2 as an example. The partition could also be named using its UUID or partition label. Thus you need to add the following to the dom0 part of the Xen precondition:
root:
  precondition_type: image
  mount: /
  image: suse/suse_sles10_64b_smp_raw.tar.gz
  partition: /dev/sda2
The RedHat image is of type ‘copyfile’. It is copied from osko:/export/image_files/official_testing/raw_img/, which is mounted to /mnt/nfs beforehand.
This mounting is done automatically because the protocol type nfs is given. The image file is copied to the destination named as dest in the ‘copyfile’ precondition. We use /xen/images/ as an example. To allow the System Installer to install preconditions into the guest image, the file to mount and the partition to mount need to be named. Note that even though in some cases the mountfile can be determined automatically, in other cases this is not possible (e.g. when you get it from a tar.gz package). The resulting root section for this guest is:
root:
  precondition_type: copyfile
  name: osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img
  protocol: nfs
  dest: /xen/images/
  mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
  mountpartition: p1
PRC (Program Run Control) is responsible for starting guests and test suites.
Making PRC able to start Xen guests is very simple. Every guest entry needs to have a section named "config". In this section, a precondition describing how the config file is installed and a filename have to be given. As with guest images, the file name is needed because it can't be determined automatically in some cases. We use 001.svm installed via copyfile to /xen/images/001.svm. The resulting config section is:
config:
  precondition_type: copyfile
  name: /usr/share/tapper/packages/mhentsc3/001.svm
  protocol: local
  dest: /xen/images/
  filename: /xen/images/001.svm
You need to define where you want which test suite to run. This can be done in every guest and in Dom0. In this example, Dom0 and the single guest will run different test suites. This chapter only describes the Dom0 test program. See the summary at the end for details on the guest test program.
The section testprogram consists of a precondition definition describing how the test suite is installed. In our example we use a precondition type package with a relative path name. This path is relative to /data/bancroft/tapper/live/repository/packages/. Since bancroft:/data/bancroft/ is mounted to /data/bancroft/ in the install system, this directory can be accessed at bancroft:/data/bancroft/tapper/live/repository/packages/.
Beside the precondition you need to define an execname, which is the full path name of the file to be executed (remember, it can't be determined automatically). This file is called in the root directory (/) of the test system, so if you need to use relative paths inside your test suite, they need to be relative to this. The program may take parameters, which are named in the optional array parameters and passed as is. Another parameter is timeout_after_testprogram, which allows you to define that your test suite shall be killed (and an error reported) after that many seconds. Even though this parameter is optional, leaving it out will result in Tapper waiting forever if your test doesn't send finish messages. The resulting testprogram section looks like this:
testprogram:
  precondition_type: package
  filename: tapper-testsuite-system.tar.gz
  path: mhentsc3/
  timeout_after_testprogram: ~
  execname: /opt/system/bin/tapper_testsuite_system.sh
  parameters:
    - --report
Usually your images will not have all software needed for your tests installed. In fact the example images now do, but for the purpose of better explanation we assume that we need to install dhcp, python-xml and bridge-utils in Dom0. Furthermore we need a script to enable network and console. At last we install the Xen package and a Xen installer package. These two are still needed on our test images. Package preconditions may have a scripts array attached that names a number of programs to be executed after the package was installed. This is used in our example to call the Xen installer script after the Xen package and the Xen installer package were installed. See the summary at the end for the resulting precondition section. The guest image only needs a DHCP client. Since this precondition is appended to the precondition list of the appropriate guest entry, the System Installer will automatically know that the guest image has to be mounted and the precondition needs to be installed inside, relative to this mount.
After all this information is gathered, put the following YAML text into a file. We use /tmp/xen.yml as an example.
precondition_type: xen
name: SLES 10 Xen with RHEL5.2 guest (64 bit)
dom0:
  root:
    precondition_type: image
    mount: /
    image: suse/suse_sles10_64b_smp_raw.tar.gz
    partition: /dev/sda2
  testprogram:
    precondition_type: package
    filename: tapper-testsuite-system.tar.gz
    path: mhentsc3/
    timeout_after_testprogram: 3600
    execname: /home/tapper/x86_64/bin/tapper_testsuite_ctcs.sh
    parameters:
      - --report
  preconditions:
    - precondition_type: package
      filename: dhcp-3.0.3-23.33.x86_64.rpm
      path: mhentsc3/sles10/
    - precondition_type: package
      filename: dhcp-client-3.0.3-23.33.x86_64.rpm
      path: mhentsc3/sles10/
    - precondition_type: package
      filename: python-xml-2.4.2-18.7.x86_64.rpm
      path: mhentsc3/sles10/
    - precondition_type: package
      filename: bridge-utils-1.0.6-14.3.1.x86_64.rpm
      path: mhentsc3/sles10/
    # has to come BEFORE xen because config done in here is needed for xens initrd
    - precondition_type: package
      filename: network_enable_sles10.tar.gz
      path: mhentsc3/sles10/
      scripts:
        - /bin/network_enable_sles10.sh
    - precondition_type: package
      filename: xen-3.2_20080116_1546_16718_f4a57e0474af__64bit.tar.gz
      path: mhentsc3/
      scripts: ~
    - precondition_type: package
      filename: xen_installer_suse.tar.gz
      path: mhentsc3/sles10/
      scripts:
        - /bin/xen_installer_suse.pl
    # only needed for debug purpose
    - precondition_type: package
      filename: console_enable.tar.gz
      path: mhentsc3/
      scripts:
        - /bin/console_enable.sh
guests:
  - root:
      precondition_type: copyfile
      name: osko:/export/image_files/official_testing/raw_img/redhat_rhel5u2_64b_smp_up_small_raw.img
      protocol: nfs
      dest: /xen/images/
      mountfile: /xen/images/redhat_rhel5u2_64b_smp_up_small_raw.img
      mountpartition: p1
      # mountpartition: /dev/sda3 # or label or uuid
    config:
      precondition_type: copyfile
      name: /usr/share/tapper/packages/mhentsc3/001.svm
      protocol: local
      dest: /xen/images/
      filename: /xen/images/001.svm
    testprogram:
      precondition_type: copyfile
      name: /usr/share/tapper/packages/mhentsc3/testscript.pl
      protocol: local
      dest: /bin/
      timeout_after_testprogram: 100
      execname: /bin/testscript.pl
    preconditions:
      - precondition_type: package
        filename: dhclient-4.0.0-6.fc9.x86_64.rpm
        path: mhentsc3/fedora9/
For Xen to run correctly, the default grub configuration is not sufficient. You need to add another precondition to your test. The System Installer will replace $root with the /dev/ notation of the root partition and $grubroot with the grub notation (including parentheses) of the root partition. Put the resulting precondition into a file. We use /tmp/grub.yml as an example. This file may read like this:
precondition_type: grub
config: |
  serial --unit=0 --speed=115200
  terminal serial
  timeout 3
  default 0
  title XEN-test
  root $grubroot
  kernel /boot/xen.gz com1=115200,8n1 console=com1
  module /boot/vmlinuz-2.6.18.8-xen root=$root showopts console=ttyS0,115200
  module /boot/initrd-2.6.18.8-xen
To order your test run with the previously defined preconditions you need to put them into the database. Fortunately there are command line tools to help you with this job. They can be found at /home/tapper/perl510/bin/. The production server for Tapper is bancroft.amd.com. Log in to this server (as root, since user login hasn't been thoroughly tested). Make sure that $TAPPER_LIVE is set to 1 and /home/tapper/perl510/bin/ is at the beginning of your $PATH (so the correct perl will always be found). For each precondition you want to put into the database you need to define a short name. Call /home/tapper/perl510/bin/tapper-testrun newprecondition with the appropriate options, e.g. in our example:
/home/tapper/perl510/bin/tapper-testrun newprecondition --shortname=grub --condition_file=/tmp/grub.yml
/home/tapper/perl510/bin/tapper-testrun newprecondition --shortname=xen --condition_file=/tmp/xen.yml
C<tapper-testrun> will return a precondition ID in each case. You will need those soon, so please keep them in mind. In the example the precondition id for grub is 4 and for Xen it's 5.
You can now put your test run into the database using /home/tapper/perl510/bin/tapper-testrun new. This expects a hostname, a test program and all preconditions. The test program is never evaluated and is only there for historical reasons; put in anything you like. root is not yet known to the database as a valid user, thus you need to add --owner with an appropriate user. The resulting call looks like this:
/home/tapper/perl510/bin/tapper-testrun new \
  --hostname=bullock --precondition=4 --precondition=5 \
  --test_program=whatever --owner=mhentsc3
C<tapper-testrun new> has more optional arguments, one of them being --earliest. This option defines when to start the test at the earliest. It defaults to "now". When the requested time has arrived, Tapper will set up the system you requested and execute your test run. Stay tuned. When everything went well, you'll see test output soon. For more information on what is going on with Tapper, see /var/log/tapper-debug.
Person in charge: Maik Hentsche
This chapter is dedicated not to end users but to Tapper development.
Tapper is developed using git. There is one central repository for participating in the development:
ssh://gituser@wotan/srv/gitroot/Tapper
and one mirrored public one:
git://osrc.amd.com/tapper.git
This chapter assumes all services are deployed, as described in Deployment.
The live environment is based on the host bancroft for all the server applications, like the mysql db, Reports::Receiver, Reports::API, Web User Interface, and MCP.
The application is configured inside the Apache config and therefore only needs Apache to be (re)started. /home/tapper must be mounted.
$ ssh root@bancroft
$ rcapache2 restart
$ ssh root@bancroft
$ /etc/init.d/tapper_reports_receiver_daemon restart
$ ssh root@bancroft
$ /etc/init.d/tapper_reports_api_daemon restart
The development environment is somewhat distributed.
On host bascha there are the mysql db, Reports::Receiver, Reports::API, and the Web User Interface.
The MCP is usually running on host siegfried, with a test target machine bullock.
The application is running with its own webserver on bascha:
$ ssh ss5@bascha
# kill running process
$ kill `ps auxwww|grep tapper_reports_web_server | grep -v grep | awk '{print $2}' | sort | head -1`
# restart
$ sudo /etc/init.d/tapper_reports_web
$ ssh ss5@bascha
$ sudo /etc/init.d/tapper_reports_receiver_daemon restart
$ ssh ss5@bascha
$ sudo /etc/init.d/tapper_reports_api_daemon restart
The applications write logfiles to these places:
/var/log/tapper-debug
/var/log/tapper_reports_receiver_daemon_stdout.log
/var/log/tapper_reports_receiver_daemon_stderr.log
/var/log/tapper_reports_api_daemon_stdout.log
/var/log/tapper_reports_api_daemon_stderr.log
This chapter is a collection of instructions on how to build the Tapper toolchain.
The whole deployment process should be supported by a common build system; however, that is not yet completed, so it is done via several self-written build steps.
This is usually done by a developer on some working state that is worth installing in the development or live environment.
$ cd Tapper/src/TestSuite-LmBench-Python
$ INCREMENT_VERSION=1 python setup.py sdist
The setup.py increments the version number in source if INCREMENT_VERSION is set. This is needed before publicly uploading a new version. For local development this can be omitted.
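For local development, a hedged sketch of the same step without the version bump:

$ python setup.py sdist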
$ ./scripts/dist_upload_wotan.sh
$ cd Tapper/src/Tapper-Reports-API
Module::Install driven:
$ perl Makefile.PL
$ make
$ make test
$ make dist
Module::Build driven:
$ perl Build.PL
$ ./Build
$ ./Build test
$ ./Build dist
Version numbers are not incremented automatically (as it can be done with the Python wrappers). The VERSION upgrade needs to be done manually before publicly uploading a new version.
$ ./scripts/dist_upload_wotan.sh
Following are the steps to create an opt-tapper.tar.gz package in a mounted and chrooted image. It compiles Perl and Python, installs them under /opt/tapper and installs the Tapper libraries. For the Perl part it also installs all CPAN dependencies from a local mirror.
The resulting /opt/tapper subdir can be used to continuously upgrade the Tapper libs as described in the following sections.
$ ssh ss5@bascha
Copy the build image to /tmp:
64bit:
$ cp .../redhat_rhel4u7_64b_smp_qcow.img /tmp/
32bit:
$ cp .../redhat_rhel4u7_32b_smp_qcow.img /tmp/
$ sudo losetup /dev/loop1 /tmp/redhat_rhel4u7_64b_smp_raw.img
$ sudo kpartx -a /dev/loop1
$ sudo mount /dev/mapper/loop1p2 /mnt
$ sudo mount -o loop /tmp/one_partion_image_raw.img /mnt
In this example we need ~ss5 as the source for the Perl and Python builders, and a bind-mounted /dev to get a random seed for ssh (used to fetch the source).
$ sudo mkdir -p /mnt/home/ss5
$ sudo mount -o bind /2home/ss5 /mnt/home/ss5
$ sudo mount -o bind /dev/ /mnt/dev
$ sudo mkdir /mnt/home/tapper
$ sudo mount loge:/tapper /mnt/home/tapper
$ sudo mount -t proc proc /mnt/proc
64bit:
$ sudo chroot /mnt bash -l
32bit:
$ linux32 sudo chroot /mnt bash -l
64bit:
$ rpm -ivh \
    ftp://ftp.tu-chemnitz.de/pub/linux/fedora-epel/4/x86_64/git-core-1.5.3.6-2.el4.x86_64.rpm \
    ftp://ftp.tu-chemnitz.de/pub/linux/fedora-epel/4/x86_64/perl-Git-1.5.3.6-2.el4.x86_64.rpm
32bit:
$ rpm -ivh \
    ftp://ftp.tu-chemnitz.de/pub/linux/fedora-epel/4/i386/git-core-1.5.3.6-2.el4.i386.rpm \
    ftp://ftp.tu-chemnitz.de/pub/linux/fedora-epel/4/i386/perl-Git-1.5.3.6-2.el4.i386.rpm
$ cp -r /home/ss5/.ssh/ /root/
$ cd /home/ss5/tapper-perl
$ ./bootstrap_tapper_perl.sh
bootstrap_tapper_perl.sh needs the user password for sudo, so type it in when asked.
This creates /opt/tapper/ but without any current Tapper code. Therefore sync the current Tapper libraries into /opt/tapper:
$ rsync -r /home/tapper/perl510/lib/site_perl/5.10.0/Tapper/ \
    /opt/tapper/lib/perl5/site_perl/5.10.0/Tapper/
Go to /home/tapper/PYTHONREPO and execute build_python.sh there.
$ cd /home/tapper/PYTHONREPO
$ ./wrapper_install_opt.sh
$ cd /mnt/mnt/tapper
$ sudo tar -czf /tmp/opt-tapper64_rh4.7.tar.gz opt
Copy the resulting package to /data/bancroft:
$ sudo cp \
    /tmp/opt-tapper64_rh4.7.tar.gz \
    /data/bancroft/tapper/live/repository/packages/tapperutils/opt-tapper64_rh4.7.tar.gz
This is usually done on an official Tapper release by the release manager.
ssh tapper@bancroft
sudo chroot /opt/tapper/mnt64/
Create /tmp/wrapper if it doesn't exist:
mkdir -p /tmp/wrapper
cd /tmp/wrapper
Do this as user ‘tapper’ because ‘root’ can't read all packages from NFS.
sudo -u tapper cp -rf /home/tapper/PYTHONREPO/* .
Install into /opt/tapper:
./wrapper_install_opt.sh
exit
cd /opt/tapper/mnt64
sudo tar -czf \
    /data/bancroft/tapper/live/repository/packages/tapperutils/opt-tapper64_rh4.7.tar.gz \
    opt/
Basically the same steps, but with another mounted build image.
ssh tapper@bancroft
sudo chroot /opt/tapper/mnt32/
mkdir -p /tmp/wrapper
cd /tmp/wrapper
sudo -u tapper cp -rf /home/tapper/PYTHONREPO/* .
./wrapper_install_opt.sh
exit
cd /opt/tapper/mnt32
sudo tar -czf \
    /data/bancroft/tapper/live/repository/packages/tapperutils/opt-tapper32_rh4.7.tar.gz \
    opt/
Person in charge: Maik Hentsche, Conny Seidel, Steffen Schwigon
The web application itself is available via the NFS-mounted /home/tapper. On the application server, in the Apache webserver, you only need to configure a Location for the path /tapper.
bancroft$ cat /etc/apache2/conf.d/tapper_reports_web.conf
Alias / /home/tapper/perl510/bin/tapper_reports_web_fastcgi_live.pl/
<LocationMatch /tapper[.]*>
    Options ExecCGI
    Order allow,deny
    Allow from all
    AddHandler fcgid-script .pl
</LocationMatch>
Additionally there is a reverse proxy configured on osrc.amd.com that points to the application server:
osrc$ cat /etc/apache2/conf.d/tapper_reverse_proxy.conf
ProxyRequests Off
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
ProxyPass        /tapper      http://bancroft/tapper
ProxyPassReverse /tapper      http://bancroft/tapper
ProxyPass        /hardwaredb  http://bancroft/hardwaredb
ProxyPassReverse /hardwaredb  http://bancroft/hardwaredb
Person in charge: Steffen Schwigon
The database schema is maintained as description for the Object Relational Mapper DBIx::Class using some versioning and upgrading features.
Those features are accessible via the command line tool tapper-db-deploy. The basic principle is:
(We show it here for the ReportsDB schema. The same applies for TestrunDB.)
src/Tapper-Schema/lib/Tapper/Schema/ReportsDB/Result/*.pm
src/Tapper-Schema/lib/Tapper/Schema/ReportsDB.pm
src/Tapper-Schema/lib/Tapper/Schema.pm
cd src/Tapper-Schema/
tapper-db-deploy makeschemadiffs \
    --db=ReportsDB \
    --fromversion=2.010021 \
    --upgradedir=./upgrades/
In this example the version 2.010021 is the existing version. Do this for every version that you will later upgrade. Usually that's just the last one, but maybe you have several machines with different versions and want to upgrade them to this new version; then you need to call the above line with all those old --fromversion values.
Of course you also can upgrade them in single steps from any old version via all intermediate versions to the latest. This is probably the best solution anyway.
git add ./upgrades/
git commit -m 'Schema: db upgrade files'
For some reason, the --upgradedir option does not work on the upcoming upgrade command, which only uses the default subdir /var/tmp, so you always need to copy the upgrade files there; you only need to discriminate between the development machine and the live machine.
rsync --progress -rc ./upgrades/ /var/tmp/
or
rsync --progress -rc ./upgrades/ bancroft:/var/tmp/
Which config context and therefore which db connection is used depends on the environment. On the development machine I have set:
export TAPPER_DEVELOPMENT=1
Then you just call
tapper-db-deploy upgrade --db=ReportsDB
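Putting the pieces together, a hedged sketch of the complete upgrade sequence for the live machine (host and paths as used in this document; assumes /home/tapper/perl510/bin is in $PATH on bancroft):

rsync --progress -rc ./upgrades/ bancroft:/var/tmp/
ssh root@bancroft
tapper-db-deploy upgrade --db=ReportsDB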
There are some environment variables used in several contexts. Some of them are set from the automation layer to support the testsuites, some of them are used to discriminate between development and live context and some are just auxiliary variables to switch features.
Keep in mind that the variable needs to be visible where the actual component is running, which is sometimes not obvious in the client/server infrastructure.
Set by the automation layer for the test suites, which in turn should use it in their reports.
Set by the automation layer for the test suites. Specifies the controlling host which initiated the testrun.
Set by the automation layer for the test suites. Specifies to which server the reports should be sent.
Set by the automation layer for the test suites. Specifies to which port the reports should be sent.
Set by the automation layer for the test suites. Specifies on which port the reports interface is listening, which is used, for instance, for uploading files.
Set by the automation layer for the test suites. Specifies the expected time that the testsuite should run. (Some suites, although only taking 1 hour, are re-run again and again for a given timespan.)
Set by the automation layer for the test suites. Specifies the NTP server that the suite should use as reference for time shifting tests.
Set by the automation layer for the test suites inside guests. Specifies which number the guest is, so the suite can report it and later this number helps sorting out results and context.
Set and used by the automation layer. Specifies where the automation layer stores files like, e.g., console logs which are later uploaded. Can be used by the test suites; all files that they store there are automatically uploaded at the end of the testrun.
Used by the database layer. If set to a true value, the cached values of the TAP evaluation are thrown away and regenerated. This might be necessary when Tapper::TAP::Harness or used sub-parts like TAP::DOM changed.
Used by Tapper::Config to switch the config space. By this the whole context of every module that is accessing the config in the same environment is switched to either “live” or “development”. If not set to a true value the LIVE context is used by default.
Used by the web user interface Tapper::Reports::Web. Specifies the port on which it is running, usually only important for development mode. Else it is accessed via the usual Apache HTTP port 80.
Used by the web user interface Tapper::Reports::Web. Specifies whether the web application restarts if it recognizes changes in its source files.
Used by the web user interface Tapper::Reports::Web. Specifies whether the config context is “live” or “development”. It is therefore similar to TAPPER_DEVELOPMENT, but the web user interface is disconnected from the automation layer, even in the used config, and therefore uses its own mechanism.