{% extends "dashboard_app/_content_with_sidebar.html" %} {% load i18n %} {% load humanize %} {% block content %}
# | {% trans "Test case" %} | {% trans "Result" %} | {% trans "Measurement" %}
---|---|---|---
{{ test_result.relative_index }} | {{ test_result.test_case|default_if_none:"Not specified" }} | ![]() | {{ test_result.measurement|default_if_none:"Not specified" }} {{ test_result.units }}
You can navigate to this test run, regardless of the bundle stream it is located in, by using this permalink.
This is the identifier of the test that was invoked. A test is a collection of test cases; it is also the smallest unit of code that lava-test can invoke.
This is a globally unique identifier that was assigned by the log analyzer. Running the same test multiple times results in different values of this identifier. The dashboard uses this identifier to refer to a particular test run. It is preserved across different LAVA installations, that is, if you pull test results (as bundles) from one system to another, this identifier remains intact.
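As an illustration of this behaviour (a sketch, not LAVA's actual implementation), a random UUID such as Python's `uuid4` has the same two properties: each new test run gets a distinct identifier, but once assigned the value round-trips through serialization unchanged.

```python
import uuid

# Two invocations of the same test are assigned two different identifiers.
run_id_first = uuid.uuid4()
run_id_second = uuid.uuid4()
assert run_id_first != run_id_second

# Each identifier is stable once assigned: serializing it (e.g. into a
# bundle) and parsing it back on another system yields the same value.
assert uuid.UUID(str(run_id_first)) == run_id_first
```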
This is the SHA1 hash of the bundle that contains this test run.
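Because the SHA1 digest is computed from the bundle's content, the same bundle always hashes to the same 40-character value, which is what makes it usable as a content identifier. A minimal sketch (the bundle payload below is hypothetical):

```python
import hashlib

def bundle_sha1(payload: bytes) -> str:
    """Return the hex SHA1 digest of a serialized bundle."""
    return hashlib.sha1(payload).hexdigest()

# Hypothetical serialized bundle payload, for illustration only.
payload = b'{"test_runs": []}'
digest = bundle_sha1(payload)

# SHA1 digests are 40 hex characters and deterministic.
assert len(digest) == 40
assert digest == bundle_sha1(payload)
```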
LAVA can store attachments associated with a particular test run. Those attachments can be used to store log files, crash dumps, screenshots or other useful test artifacts.
{{ tag }}
LAVA can store tags associated with a particular test run. Tags are simple strings like `project-foo-prerelease-testing` or `linaro-image-2011-09-27`. Tags can be used by the testing effort feature to group results together.
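Grouping by tag works like an inverted index: each tag maps to the set of runs that carry it. A small sketch (the run identifiers and tag strings below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical (run_id, tags) pairs, as a testing effort might collect them.
runs = [
    ("run-1", ["project-foo-prerelease-testing"]),
    ("run-2", ["linaro-image-2011-09-27"]),
    ("run-3", ["project-foo-prerelease-testing"]),
]

# Build tag -> [run_id, ...] so all results sharing a tag group together.
by_tag = defaultdict(list)
for run_id, tags in runs:
    for tag in tags:
        by_tag[tag].append(run_id)

print(by_tag["project-foo-prerelease-testing"])  # ['run-1', 'run-3']
```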
LAVA keeps track of all the software packages (such as Debian packages managed with dpkg) that were installed prior to running a test. This information can help you track down errors caused by a particular buggy dependency.
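The typical use of such a snapshot is to diff it against the snapshot from an earlier run and see which package versions changed between a passing and a failing test. A sketch with made-up package names and versions:

```python
# Hypothetical package snapshots (name -> version) recorded before two runs.
before_failing_run = {"libc6": "2.13-20", "busybox": "1:1.18.5-1"}
before_passing_run = {"libc6": "2.13-18", "busybox": "1:1.18.5-1"}

# Packages whose version changed between the two runs are upgrade suspects.
changed = {
    name: (before_passing_run[name], version)
    for name, version in before_failing_run.items()
    if name in before_passing_run and before_passing_run[name] != version
}
print(changed)  # {'libc6': ('2.13-18', '2.13-20')}
```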
LAVA can track more than just the package name and version: it can record precise software details such as the version control branch or repository, the revision or tag name, and more.
LAVA keeps track of the hardware that was used for testing. This can help cross-reference benchmarks and identify hardware-specific issues.
LAVA can store arbitrary key-value attributes associated with each test run (and, separately, with each test result).
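Such attributes act as free-form metadata that you can later filter on. A sketch (the attribute keys and values below are hypothetical, not LAVA's schema):

```python
# Hypothetical key-value attributes attached to two test runs.
runs = {
    "run-1": {"target": "panda", "image": "nano-2011-09-27"},
    "run-2": {"target": "beagle", "image": "nano-2011-09-27"},
}

# Because attributes are arbitrary key-value pairs, selecting runs by any
# piece of metadata is a simple dictionary lookup.
panda_runs = [run_id for run_id, attrs in runs.items()
              if attrs.get("target") == "panda"]
print(panda_runs)  # ['run-1']
```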
There are three different timestamps associated with each test run. They are explained below.
This is the moment that this test run's artifacts (such as log files and other output) were processed by the log analyzer. Typically the analyzer is part of the lava-test framework and the output is analyzed right on the device, so this timestamp may not be trustworthy; see below for a description of the time check that is performed.
The value *no* indicates that the log analyzer was not certain that the time and date were accurate.
This is the moment this test run entry was created in the LAVA database. It can differ from upload date if there were any initial deserialization problems and the data was deserialized later.
This is the moment this data was first uploaded to LAVA (as a serialized bundle).
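The three timestamps therefore form a natural order: the run is analyzed on the device, its bundle is uploaded to LAVA, and the entry is created in the database at deserialization time, possibly much later. A sketch with invented times:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for one test run, illustrating the usual ordering.
analyzed_on = datetime(2011, 9, 27, 10, 0, 0)        # set on the device
uploaded_on = analyzed_on + timedelta(minutes=5)     # bundle reaches LAVA
imported_on = uploaded_on + timedelta(hours=2)       # deserialized later

# Upload always precedes import; the analysis time should precede both,
# but it comes from the device clock and may not be trustworthy.
assert uploaded_on <= imported_on
assert analyzed_on <= uploaded_on
```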