testing.tools.safetynet_conclusions.ComparisonConclusions Class Reference

Public Member Functions

 __init__ (self, threshold_significant)
 
 ProcessCase (self, case_name, before, after)
 
 GetSummary (self)
 
 GetCaseResults (self)
 
 GetOutputDict (self)
 

Public Attributes

 threshold_significant = threshold_significant
 
float threshold_significant_negative = (1 / (1 + threshold_significant)) - 1
 
dict params = {'threshold': threshold_significant}
 
 summary = ComparisonSummary()
 
dict case_results = {}
 

Detailed Description

All conclusions drawn from a comparison.

This is initialized empty and then processes pairs of results for each test
case, determining the rating for that case, which can be one of the following
(a code sketch of this classification follows the list):
"failure" if either or both runs for the case failed.
"regression" if there is a significant increase in time for the test case.
"improvement" if there is a significant decrease in time for the test case.
"no_change" if the time for the test case did not change at all.
"small_change" if the time for the test case changed but within the threshold.

Definition at line 34 of file safetynet_conclusions.py.

Constructor & Destructor Documentation

◆ __init__()

testing.tools.safetynet_conclusions.ComparisonConclusions.__init__ ( self,
threshold_significant )
Initializes an empty ComparisonConclusions.

Args:
  threshold_significant: Float with the tolerance beyond which changes in
      measurements are considered significant.

      The change is considered as a multiplication rather than an addition
      of a fraction of the previous measurement, that is, a
      threshold_significant of 1.0 will flag test cases that became over
      100% slower (> 200% of the previous time measured) or over 100% faster
      (< 50% of the previous time measured).

      threshold_significant 0.02 -> 98.04% to 102% is not significant
      threshold_significant 0.1 -> 90.9% to 110% is not significant
      threshold_significant 0.25 -> 80% to 125% is not significant
      threshold_significant 1 -> 50% to 200% is not significant
      threshold_significant 4 -> 20% to 500% is not significant
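
As a worked check (an illustration added here, not part of the original docstring), these ranges follow from treating the threshold multiplicatively: the upper bound of the "not significant" band is 1 + threshold_significant and the lower bound is 1 / (1 + threshold_significant).

  # Reproduces the "not significant" ranges listed above.
  for t in (0.02, 0.1, 0.25, 1.0, 4.0):
    lower = 1.0 / (1.0 + t)  # e.g. t = 0.02 -> 0.9804, i.e. 98.04%
    upper = 1.0 + t          # e.g. t = 0.02 -> 1.02, i.e. 102%
    print('threshold_significant %s -> %.2f%% to %.0f%% is not significant'
          % (t, lower * 100, upper * 100))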

Definition at line 46 of file safetynet_conclusions.py.

Member Function Documentation

◆ GetCaseResults()

testing.tools.safetynet_conclusions.ComparisonConclusions.GetCaseResults ( self)
Gets a dict mapping each test case identifier to its CaseResult.
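
A small usage sketch; the per-case attribute names (rating, ratio) are assumptions inferred from the entries shown under GetOutputDict(), not confirmed by this page.

  def PrintRegressions(conclusions):
    # `conclusions` is a ComparisonConclusions whose cases have already been
    # fed in via ProcessCase() (see the sketch further down).
    for case_name, result in conclusions.GetCaseResults().items():
      # The 'rating' and 'ratio' attribute names are assumptions.
      if result.rating == 'regression':
        print('%s got slower: ratio %+.2f' % (case_name, result.ratio))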

Definition at line 112 of file safetynet_conclusions.py.

References testing.tools.safetynet_conclusions.ComparisonConclusions.case_results.

Referenced by testing.tools.safetynet_conclusions.ComparisonConclusions.GetOutputDict().


◆ GetOutputDict()

testing.tools.safetynet_conclusions.ComparisonConclusions.GetOutputDict ( self)
Returns a conclusions dict with all the conclusions drawn.

Returns:
  A serializable dict with the format illustrated below:
  {
    "version": 1,
    "params": {
      "threshold": 0.02
    },
    "summary": {
      "total": 123,
      "failure": 1,
      "regression": 2,
      "improvement": 1,
      "no_change": 100,
      "small_change": 19
    },
    "comparison_by_case": {
      "testing/resources/new_test.pdf": {
        "before": None,
        "after": 1000,
        "ratio": None,
        "rating": "failure"
      },
      "testing/resources/test1.pdf": {
        "before": 100,
        "after": 120,
        "ratio": 0.2,
        "rating": "regression"
      },
      "testing/resources/test2.pdf": {
        "before": 100,
        "after": 2000,
        "ratio": 19.0,
        "rating": "regression"
      },
      "testing/resources/test3.pdf": {
        "before": 1000,
        "after": 1005,
        "ratio": 0.005,
        "rating": "small_change"
      },
      "testing/resources/test4.pdf": {
        "before": 1000,
        "after": 1000,
        "ratio": 0.0,
        "rating": "no_change"
      },
      "testing/resources/test5.pdf": {
        "before": 1000,
        "after": 600,
        "ratio": -0.4,
        "rating": "improvement"
      }
    }
  }
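
Since the dict is serializable, one typical follow-up (an illustration, with an arbitrary file name) is to dump it to JSON:

  import json

  def WriteConclusions(conclusions, json_path):
    # GetOutputDict() returns plain dicts, numbers and strings, so it can be
    # written out directly; the destination path is arbitrary here.
    with open(json_path, 'w') as f:
      json.dump(conclusions.GetOutputDict(), f, indent=2)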

Definition at line 116 of file safetynet_conclusions.py.

References testing.tools.safetynet_conclusions.ComparisonConclusions.GetCaseResults(), testing.tools.safetynet_conclusions.ComparisonConclusions.summary, and testing.tools.safetynet_conclusions.ComparisonConclusions.threshold_significant.


◆ GetSummary()

testing.tools.safetynet_conclusions.ComparisonConclusions.GetSummary ( self)
Gets the ComparisonSummary with consolidated totals.

Definition at line 108 of file safetynet_conclusions.py.

References testing.tools.safetynet_conclusions.ComparisonConclusions.summary.

◆ ProcessCase()

testing.tools.safetynet_conclusions.ComparisonConclusions.ProcessCase ( self,
case_name,
before,
after )
Feeds one test case's results to the ComparisonConclusions.

Args:
  case_name: String identifying the case.
  before: Measurement for the "before" version of the code.
  after: Measurement for the "after" version of the code.
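
A minimal end-to-end sketch of feeding cases and reading back the conclusions. The import path, the measurement values, and the use of None to represent a failed run are illustrative assumptions (None matches the "failure" entry shown under GetOutputDict()).

  from safetynet_conclusions import ComparisonConclusions  # import path is an assumption

  # Hypothetical measurements: case name -> (before, after) times.
  measurements = {
      'testing/resources/test1.pdf': (100, 120),       # slower
      'testing/resources/test5.pdf': (1000, 600),      # faster
      'testing/resources/new_test.pdf': (None, 1000),  # "before" run failed
  }

  conclusions = ComparisonConclusions(threshold_significant=0.02)
  for case_name, (before, after) in measurements.items():
    conclusions.ProcessCase(case_name, before, after)

  summary = conclusions.GetSummary()    # consolidated totals per rating
  output = conclusions.GetOutputDict()  # full serializable dict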

Definition at line 73 of file safetynet_conclusions.py.

References testing.tools.safetynet_conclusions.ComparisonConclusions.case_results, testing.tools.safetynet_conclusions.ComparisonConclusions.summary, testing.tools.safetynet_conclusions.ComparisonConclusions.threshold_significant, and testing.tools.safetynet_conclusions.ComparisonConclusions.threshold_significant_negative.

Member Data Documentation

◆ case_results

dict testing.tools.safetynet_conclusions.ComparisonConclusions.case_results = {}

◆ params

dict testing.tools.safetynet_conclusions.ComparisonConclusions.params = {'threshold': threshold_significant}

Definition at line 69 of file safetynet_conclusions.py.

◆ summary

testing.tools.safetynet_conclusions.ComparisonConclusions.summary = ComparisonSummary()

◆ threshold_significant

testing.tools.safetynet_conclusions.ComparisonConclusions.threshold_significant = threshold_significant

◆ threshold_significant_negative

testing.tools.safetynet_conclusions.ComparisonConclusions.threshold_significant_negative = (1 / (1 + threshold_significant)) - 1

The documentation for this class was generated from the following file:
testing/tools/safetynet_conclusions.py