author     Matt Clay <matt@mystile.com>  2021-05-06 01:52:32 +0200
committer  Matt Clay <matt@mystile.com>  2021-05-06 02:21:14 +0200
commit     c48f80d0629e3469f3535bba76de1242a5455ff9 (patch)
tree       11b21eab7fbcb3ef1422cd8754656ad98bd6bed3 /hacking/azp
parent     Fix ansible-test imports and paths after refactor. (diff)
Rename hacking/shippable to hacking/azp.
References to Shippable were changed to Azure Pipelines. Also remove rebalance.py as it does not work with Azure Pipelines due to the required data not being present.
Diffstat (limited to 'hacking/azp')
-rw-r--r--  hacking/azp/README.md                    123
-rwxr-xr-x  hacking/azp/download.py                  227
-rwxr-xr-x  hacking/azp/get_recent_coverage_runs.py  107
-rwxr-xr-x  hacking/azp/incidental.py                465
-rwxr-xr-x  hacking/azp/run.py                        94
5 files changed, 1016 insertions, 0 deletions
diff --git a/hacking/azp/README.md b/hacking/azp/README.md
new file mode 100644
index 0000000000..5784848228
--- /dev/null
+++ b/hacking/azp/README.md
@@ -0,0 +1,123 @@
+# Azure Pipelines Scripts
+
+## Scripts
+
+This directory contains the following scripts:
+
+- download.py - Download results from CI.
+- get_recent_coverage_runs.py - Retrieve CI URLs of recent coverage test runs.
+- incidental.py - Report on incidental code coverage using data from CI.
+- run.py - Start new runs on CI.
+
+## Incidental Code Coverage
+
+### Background
+
+Incidental testing and code coverage occur when a test covers one or more portions of code as an unintentional side effect of testing another portion of code.
+
+For example, the ``yum`` integration test intentionally tests the ``yum`` Ansible module.
+However, in doing so it also uses, and thus unintentionally tests, the ``file`` module.
+
+As part of the process of migrating modules and plugins into collections, integration tests were identified that provided exclusive incidental code coverage.
+That is, tests due to be migrated out of the repository that covered code no remaining tests would cover.
+
+These integration test targets were preserved as incidental tests with the ``incidental_`` prefix prior to migration.
+The plugins necessary to support these tests were also preserved in the ``test/support/`` directory.
+
+The long-term goal for these incidental tests is to replace them with tests that intentionally cover the relevant code.
+As additional intentional tests are added, the exclusive coverage provided by incidental tests will decline, permitting them to be removed without loss of test coverage.
+
+### Reducing Incidental Coverage
+
+Reducing incidental test coverage, and eventually removing incidental tests, involves the following process:
+
+1. Run the entire test suite with code coverage enabled.
+ This is done automatically each day on Azure Pipelines.
+ The URLs and statuses of the most recent such test runs can be found with:
+ ```shell
+ hacking/azp/get_recent_coverage_runs.py <optional branch name>
+ ```
+ The branch name defaults to `devel`.
+2. Download code coverage data from Azure Pipelines for local analysis.
+ Example:
+ ```shell
+ # download results to ansible/ansible directory under cwd
+ # substitute the correct run number for the Azure Pipelines coverage run you want to download
+ hacking/azp/download.py 14075 --artifacts --run-metadata -v
+ ```
+3. Analyze code coverage data to see which portions of the code are covered by each test.
+ Example:
+   ```shell
+ # make sure ansible-test is in $PATH
+ source hacking/env-setup
+ # run the script using whichever directory results were downloaded into
+ hacking/azp/incidental.py 14075/
+ ```
+4. Create new intentional tests, or extend existing ones, to cover code that is currently covered by incidental tests.
+ Reports are created by default in a ``test/results/.tmp/incidental/{hash}/reports/`` directory.
+ The ``{hash}`` value is based on the input files used to generate the report.
+
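The ``{hash}`` directory name can also be derived ahead of time. Below is a minimal sketch of the scheme ``incidental.py`` uses, a sha256 digest of the newline-joined input result paths; the path listed here is a hypothetical example:

```python
import hashlib

# hypothetical list of downloaded coverage result files used as report input
paths = ['14075/Coverage Analyze Targets/coverage-analyze-targets.json']

# incidental.py names the per-report output directory after the sha256 hex
# digest of the newline-joined input paths
path_hash = hashlib.sha256(b'\n'.join(p.encode() for p in paths)).hexdigest()
print(path_hash)
```

The digest is a 64-character hex string, so each distinct set of input files gets its own report directory.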
+Over time, as the above process is repeated, exclusive incidental code coverage will decline.
+When incidental tests no longer provide exclusive coverage they can be removed.
+
+> CAUTION: Only one incidental test should be removed at a time, as doing so may cause another test to gain exclusive incidental coverage.
+
+#### Incidental Plugin Coverage
+
+Incidental test coverage is not limited to ``incidental_`` prefixed tests.
+For example, code left uncovered by a filter plugin's own tests may still be covered incidentally by an unrelated test.
+The ``incidental.py`` script can be used to identify these gaps as well.
+
+Follow steps 1 and 2 as outlined in the previous section.
+For step 3, add the ``--plugin-path {path_to_plugin}`` option.
+Repeat step 3 for as many plugins as desired.
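Each report is named after an integration test target derived from the plugin path. A minimal sketch of that mapping, mirroring the ``get_target_name_from_plugin_path`` helper in ``incidental.py``:

```python
import os

def get_target_name_from_plugin_path(path):
    """Return the integration test target name for the given plugin path."""
    parts = os.path.splitext(path)[0].split(os.path.sep)
    plugin_name = parts[-1]

    if path.startswith('lib/ansible/modules/'):
        plugin_type = None  # modules are referred to by their bare name
    elif path.startswith('lib/ansible/plugins/'):
        plugin_type = parts[3]  # e.g. 'filter', 'lookup', 'connection'
    elif path.startswith('lib/ansible/module_utils/'):
        plugin_type = parts[2]
    elif path.startswith('plugins/'):
        plugin_type = parts[1]
    else:
        raise ValueError('Cannot determine plugin type from plugin path: %s' % path)

    if plugin_type is None:
        return plugin_name

    return '%s_%s' % (plugin_type, plugin_name)

print(get_target_name_from_plugin_path('lib/ansible/plugins/filter/core.py'))
```

For example, ``lib/ansible/plugins/filter/core.py`` maps to the target name ``filter_core``.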
+
+To report on multiple plugins at once, such as all ``filter`` plugins, the following command can be used:
+
+```shell
+find lib/ansible/plugins/filter -name '*.py' -not -name __init__.py -exec hacking/azp/incidental.py 14075/ --plugin-path '{}' ';'
+```
+
+Each report will show the incidental code coverage missing from the plugin's own tests.
+
+> NOTE: The report does not identify where the incidental coverage comes from.
+
+### Reading Incidental Coverage Reports
+
+Each covered line of code is included in the report.
+The left column contains the line number of the source line shown.
+If the coverage is for Python code, a comment on the right side indicates the coverage arcs involved.
+
+Below is an example of a report:
+
+```
+Target: incidental_win_psexec
+GitHub: https://github.com/ansible/ansible/blob/6994ef0b554a816f02e0771cb14341a421f7cead/test/integration/targets/incidental_win_psexec
+
+Source: lib/ansible/executor/task_executor.py (2 arcs, 3/1141 lines):
+GitHub: https://github.com/ansible/ansible/blob/6994ef0b554a816f02e0771cb14341a421f7cead/lib/ansible/executor/task_executor.py
+
+ 705 if 'rc' in result and result['rc'] not in [0, "0"]: ### (here) -> 706
+ 706 result['failed'] = True ### 705 -> (here) ### (here) -> 711
+
+ 711 if self._task.until: ### 706 -> (here)
+```
+
+The report indicates the test target responsible for the coverage and links to the target's source on GitHub at the commit matching the coverage data.
+
+Each file covered in the report indicates the lines affected, and in the case of Python code, arcs.
+A link to the source file on GitHub using the appropriate commit is also included.
+
+The left column includes the line number for the source code found to the right.
+In the case of Python files, the rightmost comment indicates the coverage arcs involved.
+
+``### (here) -> 706`` for source line 705 indicates that execution flowed from line 705 to line 706.
+Multiple outbound line numbers can be present.
+
+``### 706 -> (here)`` for source line 711 indicates that execution flowed from line 706 to line 711.
+Multiple inbound line numbers can be present.
+
+In both cases ``(here)`` is simply a reference to the current source line.
+
+Arcs are only available for Python code.
+PowerShell code only reports covered line numbers.
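Internally an arc is just a ``from:to`` pair of line numbers. A minimal sketch of how ``incidental.py`` parses one (the script later normalizes endpoints with ``abs()``, since coverage data can record negative line numbers for entry and exit arcs):

```python
def parse_arc(value):
    """Parse an arc string such as '705:706' into a (from_line, to_line) tuple."""
    return tuple(int(v) for v in value.split(':'))

# the two arcs shown in the example report above
print(parse_arc('705:706'))  # (705, 706)
print(parse_arc('706:711'))  # (706, 711)
```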
diff --git a/hacking/azp/download.py b/hacking/azp/download.py
new file mode 100755
index 0000000000..c573e0a7e1
--- /dev/null
+++ b/hacking/azp/download.py
@@ -0,0 +1,227 @@
+#!/usr/bin/env python
+# PYTHON_ARGCOMPLETE_OK
+
+# (c) 2016 Red Hat, Inc.
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+"""CLI tool for downloading results from Azure Pipelines CI runs."""
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+# noinspection PyCompatibility
+import argparse
+import json
+import os
+import re
+import sys
+import io
+import zipfile
+
+import requests
+
+try:
+ import argcomplete
+except ImportError:
+ argcomplete = None
+
+# Following changes should be made to improve the overall style:
+# TODO use new style formatting method.
+# TODO use requests session.
+# TODO type hints.
+# TODO pathlib.
+
+
+def main():
+ """Main program body."""
+
+ args = parse_args()
+ download_run(args)
+
+
+def run_id_arg(arg):
+    """Validate a run ID or URL argument and return the run ID."""
+    m = re.fullmatch(r"(?:https://dev\.azure\.com/ansible/ansible/_build/results\?buildId=)?(\d+)", arg)
+    if not m:
+        raise ValueError("run does not seem to be a URL or an ID")
+    return m.group(1)
+
+
+def parse_args():
+ """Parse and return args."""
+
+ parser = argparse.ArgumentParser(description='Download results from a CI run.')
+
+ parser.add_argument('run', metavar='RUN', type=run_id_arg, help='AZP run id or URI')
+
+ parser.add_argument('-v', '--verbose',
+ dest='verbose',
+ action='store_true',
+ help='show what is being downloaded')
+
+ parser.add_argument('-t', '--test',
+ dest='test',
+ action='store_true',
+ help='show what would be downloaded without downloading')
+
+ parser.add_argument('-p', '--pipeline-id', type=int, default=20, help='pipeline to download the job from')
+
+ parser.add_argument('--artifacts',
+ action='store_true',
+ help='download artifacts')
+
+ parser.add_argument('--console-logs',
+ action='store_true',
+ help='download console logs')
+
+ parser.add_argument('--run-metadata',
+ action='store_true',
+ help='download run metadata')
+
+ parser.add_argument('--all',
+ action='store_true',
+ help='download everything')
+
+ parser.add_argument('--match-artifact-name',
+ default=re.compile('.*'),
+ type=re.compile,
+                        help='only download artifacts whose names match this regex')
+
+ parser.add_argument('--match-job-name',
+ default=re.compile('.*'),
+ type=re.compile,
+                        help='only download artifacts from jobs whose names match this regex')
+
+ if argcomplete:
+ argcomplete.autocomplete(parser)
+
+ args = parser.parse_args()
+
+ if args.all:
+ args.artifacts = True
+ args.run_metadata = True
+ args.console_logs = True
+
+ selections = (
+ args.artifacts,
+ args.run_metadata,
+ args.console_logs
+ )
+
+ if not any(selections):
+ parser.error('At least one download option is required.')
+
+ return args
+
+
+def download_run(args):
+ """Download a run."""
+
+ output_dir = '%s' % args.run
+
+ if not args.test and not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+
+ if args.run_metadata:
+ run_url = 'https://dev.azure.com/ansible/ansible/_apis/pipelines/%s/runs/%s?api-version=6.0-preview.1' % (args.pipeline_id, args.run)
+ run_info_response = requests.get(run_url)
+ run_info_response.raise_for_status()
+ run = run_info_response.json()
+
+ path = os.path.join(output_dir, 'run.json')
+ contents = json.dumps(run, sort_keys=True, indent=4)
+
+ if args.verbose:
+ print(path)
+
+ if not args.test:
+ with open(path, 'w') as metadata_fd:
+ metadata_fd.write(contents)
+
+ timeline_response = requests.get('https://dev.azure.com/ansible/ansible/_apis/build/builds/%s/timeline?api-version=6.0' % args.run)
+ timeline_response.raise_for_status()
+ timeline = timeline_response.json()
+ roots = set()
+ by_id = {}
+ children_of = {}
+ parent_of = {}
+ for r in timeline['records']:
+ thisId = r['id']
+ parentId = r['parentId']
+
+ by_id[thisId] = r
+
+ if parentId is None:
+ roots.add(thisId)
+ else:
+ parent_of[thisId] = parentId
+ children_of[parentId] = children_of.get(parentId, []) + [thisId]
+
+ allowed = set()
+
+ def allow_recursive(ei):
+ allowed.add(ei)
+ for ci in children_of.get(ei, []):
+ allow_recursive(ci)
+
+ for ri in roots:
+ r = by_id[ri]
+ allowed.add(ri)
+ for ci in children_of.get(r['id'], []):
+ c = by_id[ci]
+ if not args.match_job_name.match("%s %s" % (r['name'], c['name'])):
+ continue
+ allow_recursive(c['id'])
+
+ if args.artifacts:
+ artifact_list_url = 'https://dev.azure.com/ansible/ansible/_apis/build/builds/%s/artifacts?api-version=6.0' % args.run
+ artifact_list_response = requests.get(artifact_list_url)
+ artifact_list_response.raise_for_status()
+ for artifact in artifact_list_response.json()['value']:
+ if artifact['source'] not in allowed or not args.match_artifact_name.match(artifact['name']):
+ continue
+ if args.verbose:
+ print('%s/%s' % (output_dir, artifact['name']))
+ if not args.test:
+ response = requests.get(artifact['resource']['downloadUrl'])
+ response.raise_for_status()
+ archive = zipfile.ZipFile(io.BytesIO(response.content))
+ archive.extractall(path=output_dir)
+
+ if args.console_logs:
+ for r in timeline['records']:
+ if not r['log'] or r['id'] not in allowed or not args.match_artifact_name.match(r['name']):
+ continue
+ names = []
+ parent_id = r['id']
+ while parent_id is not None:
+ p = by_id[parent_id]
+ name = p['name']
+ if name not in names:
+ names = [name] + names
+ parent_id = parent_of.get(p['id'], None)
+
+ path = " ".join(names)
+ log_path = os.path.join(output_dir, '%s.log' % path)
+ if args.verbose:
+ print(log_path)
+            if not args.test:
+                log = requests.get(r['log']['url'])
+                log.raise_for_status()
+                with open(log_path, 'wb') as log_fd:
+                    log_fd.write(log.content)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/hacking/azp/get_recent_coverage_runs.py b/hacking/azp/get_recent_coverage_runs.py
new file mode 100755
index 0000000000..6a7fdae71f
--- /dev/null
+++ b/hacking/azp/get_recent_coverage_runs.py
@@ -0,0 +1,107 @@
+#!/usr/bin/env python
+
+# (c) 2020 Red Hat, Inc.
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+from ansible.utils.color import stringc
+import requests
+import sys
+import datetime
+
+# Following changes should be made to improve the overall style:
+# TODO use argparse for arguments.
+# TODO use new style formatting method.
+# TODO use requests session.
+# TODO type hints.
+
+BRANCH = 'devel'
+PIPELINE_ID = 20
+MAX_AGE = datetime.timedelta(hours=24)
+
+if len(sys.argv) > 1:
+ BRANCH = sys.argv[1]
+
+
+def get_coverage_runs():
+ list_response = requests.get("https://dev.azure.com/ansible/ansible/_apis/pipelines/%s/runs?api-version=6.0-preview.1" % PIPELINE_ID)
+ list_response.raise_for_status()
+
+ runs = list_response.json()
+
+ coverage_runs = []
+ for run_summary in runs["value"][0:1000]:
+ run_response = requests.get(run_summary['url'])
+ run_response.raise_for_status()
+ run = run_response.json()
+
+ if run['resources']['repositories']['self']['refName'] != 'refs/heads/%s' % BRANCH:
+ continue
+
+        if 'finishedDate' in run:
+            age = datetime.datetime.now() - datetime.datetime.strptime(run['finishedDate'].split(".")[0], "%Y-%m-%dT%H:%M:%S")
+ if age > MAX_AGE:
+ break
+
+ artifact_response = requests.get("https://dev.azure.com/ansible/ansible/_apis/build/builds/%s/artifacts?api-version=6.0" % run['id'])
+ artifact_response.raise_for_status()
+
+ artifacts = artifact_response.json()['value']
+        if any(a["name"].startswith("Coverage") for a in artifacts):
+ # TODO wrongfully skipped if all jobs failed.
+ coverage_runs.append(run)
+
+ return coverage_runs
+
+
+def pretty_coverage_runs(runs):
+ ended = []
+ in_progress = []
+ for run in runs:
+ if run.get('finishedDate'):
+ ended.append(run)
+ else:
+ in_progress.append(run)
+
+ for run in sorted(ended, key=lambda x: x['finishedDate']):
+ if run['result'] == "succeeded":
+ print('🙂 [%s] https://dev.azure.com/ansible/ansible/_build/results?buildId=%s (%s)' % (
+ stringc('PASS', 'green'),
+ run['id'],
+ run['finishedDate']))
+ else:
+ print('😢 [%s] https://dev.azure.com/ansible/ansible/_build/results?buildId=%s (%s)' % (
+ stringc('FAIL', 'red'),
+ run['id'],
+ run['finishedDate']))
+
+ if in_progress:
+ print('The following runs are ongoing:')
+ for run in in_progress:
+ print('🤔 [%s] https://dev.azure.com/ansible/ansible/_build/results?buildId=%s' % (
+ stringc('FATE', 'yellow'),
+ run['id']))
+
+
+def main():
+ pretty_coverage_runs(get_coverage_runs())
+
+
+if __name__ == '__main__':
+ main()
diff --git a/hacking/azp/incidental.py b/hacking/azp/incidental.py
new file mode 100755
index 0000000000..10729299dd
--- /dev/null
+++ b/hacking/azp/incidental.py
@@ -0,0 +1,465 @@
+#!/usr/bin/env python
+# PYTHON_ARGCOMPLETE_OK
+
+# (c) 2020 Red Hat, Inc.
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+"""CLI tool for reporting on incidental test coverage."""
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+# noinspection PyCompatibility
+import argparse
+import glob
+import json
+import os
+import re
+import subprocess
+import sys
+import hashlib
+
+try:
+ # noinspection PyPackageRequirements
+ import argcomplete
+except ImportError:
+ argcomplete = None
+
+# Following changes should be made to improve the overall style:
+# TODO use new style formatting method.
+# TODO type hints.
+# TODO pathlib.
+
+
+def main():
+ """Main program body."""
+ args = parse_args()
+
+ try:
+ incidental_report(args)
+ except ApplicationError as ex:
+ sys.exit(ex)
+
+
+def parse_args():
+ """Parse and return args."""
+ source = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
+ parser = argparse.ArgumentParser(description='Report on incidental test coverage downloaded from Azure Pipelines.')
+
+ parser.add_argument('result',
+ type=directory,
+ help='path to directory containing test results downloaded from Azure Pipelines')
+
+ parser.add_argument('--output',
+ type=optional_directory,
+ default=os.path.join(source, 'test', 'results', '.tmp', 'incidental'),
+ help='path to directory where reports should be written')
+
+ parser.add_argument('--source',
+ type=optional_directory,
+ default=source,
+ help='path to git repository containing Ansible source')
+
+ parser.add_argument('--skip-checks',
+ action='store_true',
+ help='skip integrity checks, use only for debugging')
+
+ parser.add_argument('--ignore-cache',
+ dest='use_cache',
+ action='store_false',
+ help='ignore cached files')
+
+ parser.add_argument('-v', '--verbose',
+ action='store_true',
+ help='increase verbosity')
+
+ targets = parser.add_mutually_exclusive_group()
+
+ targets.add_argument('--targets',
+ type=regex,
+ default='^incidental_',
+ help='regex for targets to analyze, default: %(default)s')
+
+ targets.add_argument('--plugin-path',
+ help='path to plugin to report incidental coverage on')
+
+ if argcomplete:
+ argcomplete.autocomplete(parser)
+
+ args = parser.parse_args()
+
+ return args
+
+
+def optional_directory(value):
+ if not os.path.exists(value):
+ return value
+
+ return directory(value)
+
+
+def directory(value):
+ if not os.path.isdir(value):
+ raise argparse.ArgumentTypeError('"%s" is not a directory' % value)
+
+ return value
+
+
+def regex(value):
+ try:
+ return re.compile(value)
+ except Exception as ex:
+ raise argparse.ArgumentTypeError('"%s" is not a valid regex: %s' % (value, ex))
+
+
+def incidental_report(args):
+ """Generate incidental coverage report."""
+ ct = CoverageTool()
+ git = Git(os.path.abspath(args.source))
+ coverage_data = CoverageData(os.path.abspath(args.result))
+
+ try:
+ git.show([coverage_data.result_sha, '--'])
+ except subprocess.CalledProcessError:
+ raise ApplicationError('%s: commit not found: %s\n'
+ 'make sure your source repository is up-to-date' % (git.path, coverage_data.result_sha))
+
+ if coverage_data.result != "succeeded":
+ check_failed(args, 'results indicate tests did not pass (result: %s)\n'
+ 're-run until passing, then download the latest results and re-run the report using those results' % coverage_data.result)
+
+ if not coverage_data.paths:
+ raise ApplicationError('no coverage data found\n'
+ 'make sure the downloaded results are from a code coverage run on Azure Pipelines')
+
+ # generate a unique subdirectory in the output directory based on the input files being used
+ path_hash = hashlib.sha256(b'\n'.join(p.encode() for p in coverage_data.paths)).hexdigest()
+ output_path = os.path.abspath(os.path.join(args.output, path_hash))
+
+ data_path = os.path.join(output_path, 'data')
+ reports_path = os.path.join(output_path, 'reports')
+
+ for path in [data_path, reports_path]:
+ if not os.path.exists(path):
+ os.makedirs(path)
+
+ # combine coverage results into a single file
+ combined_path = os.path.join(output_path, 'combined.json')
+ cached(combined_path, args.use_cache, args.verbose,
+ lambda: ct.combine(coverage_data.paths, combined_path))
+
+ with open(combined_path) as combined_file:
+ combined = json.load(combined_file)
+
+ if args.plugin_path:
+ # reporting on coverage missing from the test target for the specified plugin
+ # the report will be on a single target
+ cache_path_format = '%s' + '-for-%s' % os.path.splitext(os.path.basename(args.plugin_path))[0]
+ target_pattern = '^%s$' % get_target_name_from_plugin_path(args.plugin_path)
+ include_path = args.plugin_path
+ missing = True
+ target_name = get_target_name_from_plugin_path(args.plugin_path)
+ else:
+ # reporting on coverage exclusive to the matched targets
+ # the report can contain multiple targets
+ cache_path_format = '%s'
+ target_pattern = args.targets
+ include_path = None
+ missing = False
+ target_name = None
+
+ # identify integration test targets to analyze
+ target_names = sorted(combined['targets'])
+ incidental_target_names = [target for target in target_names if re.search(target_pattern, target)]
+
+ if not incidental_target_names:
+ if target_name:
+ # if the plugin has no tests we still want to know what coverage is missing
+ incidental_target_names = [target_name]
+ else:
+ raise ApplicationError('no targets to analyze')
+
+ # exclude test support plugins from analysis
+ # also exclude six, which for an unknown reason reports bogus coverage lines (indicating coverage of comments)
+ exclude_path = '^(test/support/|lib/ansible/module_utils/six/)'
+
+ # process coverage for each target and then generate a report
+ # save sources for generating a summary report at the end
+ summary = {}
+ report_paths = {}
+
+ for target_name in incidental_target_names:
+ cache_name = cache_path_format % target_name
+
+ only_target_path = os.path.join(data_path, 'only-%s.json' % cache_name)
+ cached(only_target_path, args.use_cache, args.verbose,
+ lambda: ct.filter(combined_path, only_target_path, include_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
+
+ without_target_path = os.path.join(data_path, 'without-%s.json' % cache_name)
+ cached(without_target_path, args.use_cache, args.verbose,
+ lambda: ct.filter(combined_path, without_target_path, exclude_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
+
+ if missing:
+ source_target_path = missing_target_path = os.path.join(data_path, 'missing-%s.json' % cache_name)
+ cached(missing_target_path, args.use_cache, args.verbose,
+ lambda: ct.missing(without_target_path, only_target_path, missing_target_path, only_gaps=True))
+ else:
+ source_target_path = exclusive_target_path = os.path.join(data_path, 'exclusive-%s.json' % cache_name)
+ cached(exclusive_target_path, args.use_cache, args.verbose,
+ lambda: ct.missing(only_target_path, without_target_path, exclusive_target_path, only_gaps=True))
+
+ source_expanded_target_path = os.path.join(os.path.dirname(source_target_path), 'expanded-%s' % os.path.basename(source_target_path))
+ cached(source_expanded_target_path, args.use_cache, args.verbose,
+ lambda: ct.expand(source_target_path, source_expanded_target_path))
+
+ summary[target_name] = sources = collect_sources(source_expanded_target_path, git, coverage_data)
+
+ txt_report_path = os.path.join(reports_path, '%s.txt' % cache_name)
+ cached(txt_report_path, args.use_cache, args.verbose,
+ lambda: generate_report(sources, txt_report_path, coverage_data, target_name, missing=missing))
+
+ report_paths[target_name] = txt_report_path
+
+ # provide a summary report of results
+ for target_name in incidental_target_names:
+ sources = summary[target_name]
+ report_path = os.path.relpath(report_paths[target_name])
+
+ print('%s: %d arcs, %d lines, %d files - %s' % (
+ target_name,
+ sum(len(s.covered_arcs) for s in sources),
+ sum(len(s.covered_lines) for s in sources),
+ len(sources),
+ report_path,
+ ))
+
+ if not missing:
+ sys.stderr.write('NOTE: This report shows only coverage exclusive to the reported targets. '
+ 'As targets are removed, exclusive coverage on the remaining targets will increase.\n')
+
+
+def get_target_name_from_plugin_path(path): # type: (str) -> str
+ """Return the integration test target name for the given plugin path."""
+ parts = os.path.splitext(path)[0].split(os.path.sep)
+ plugin_name = parts[-1]
+
+ if path.startswith('lib/ansible/modules/'):
+ plugin_type = None
+ elif path.startswith('lib/ansible/plugins/'):
+ plugin_type = parts[3]
+ elif path.startswith('lib/ansible/module_utils/'):
+ plugin_type = parts[2]
+ elif path.startswith('plugins/'):
+ plugin_type = parts[1]
+ else:
+ raise ApplicationError('Cannot determine plugin type from plugin path: %s' % path)
+
+ if plugin_type is None:
+ target_name = plugin_name
+ else:
+ target_name = '%s_%s' % (plugin_type, plugin_name)
+
+ return target_name
+
+
+class CoverageData:
+ def __init__(self, result_path):
+ with open(os.path.join(result_path, 'run.json')) as run_file:
+ run = json.load(run_file)
+
+ self.result_sha = run["resources"]["repositories"]["self"]["version"]
+ self.result = run['result']
+
+ self.github_base_url = 'https://github.com/ansible/ansible/blob/%s/' % self.result_sha
+
+ # locate available results
+ self.paths = sorted(glob.glob(os.path.join(result_path, '*', 'coverage-analyze-targets.json')))
+
+
+class Git:
+ def __init__(self, path):
+ self.git = 'git'
+ self.path = path
+
+ try:
+ self.show()
+ except subprocess.CalledProcessError:
+ raise ApplicationError('%s: not a git repository' % path)
+
+ def show(self, args=None):
+ return self.run(['show'] + (args or []))
+
+ def run(self, command):
+ return subprocess.check_output([self.git] + command, cwd=self.path)
+
+
+class CoverageTool:
+ def __init__(self):
+ self.analyze_cmd = ['ansible-test', 'coverage', 'analyze', 'targets']
+
+ def combine(self, input_paths, output_path):
+ subprocess.check_call(self.analyze_cmd + ['combine'] + input_paths + [output_path])
+
+ def filter(self, input_path, output_path, include_targets=None, exclude_targets=None, include_path=None, exclude_path=None):
+ args = []
+
+ if include_targets:
+ for target in include_targets:
+ args.extend(['--include-target', target])
+
+ if exclude_targets:
+ for target in exclude_targets:
+ args.extend(['--exclude-target', target])
+
+ if include_path:
+ args.extend(['--include-path', include_path])
+
+ if exclude_path:
+ args.extend(['--exclude-path', exclude_path])
+
+ subprocess.check_call(self.analyze_cmd + ['filter', input_path, output_path] + args)
+
+ def missing(self, from_path, to_path, output_path, only_gaps=False):
+ args = []
+
+ if only_gaps:
+ args.append('--only-gaps')
+
+ subprocess.check_call(self.analyze_cmd + ['missing', from_path, to_path, output_path] + args)
+
+ def expand(self, input_path, output_path):
+ subprocess.check_call(self.analyze_cmd + ['expand', input_path, output_path])
+
+
+class SourceFile:
+ def __init__(self, path, source, coverage_data, coverage_points):
+ self.path = path
+ self.lines = source.decode().splitlines()
+ self.coverage_data = coverage_data
+ self.coverage_points = coverage_points
+ self.github_url = coverage_data.github_base_url + path
+
+ is_arcs = ':' in dict(coverage_points).popitem()[0]
+
+ if is_arcs:
+ parse = parse_arc
+ else:
+ parse = int
+
+        self.covered_points = set(parse(v) for v in coverage_points)
+        self.covered_arcs = self.covered_points if is_arcs else None
+
+        if is_arcs:
+            self.covered_lines = set(abs(p[0]) for p in self.covered_points) | set(abs(p[1]) for p in self.covered_points)
+        else:
+            # line coverage (e.g. PowerShell) records plain line numbers, not arcs
+            self.covered_lines = self.covered_points
+
+
+def collect_sources(data_path, git, coverage_data):
+ with open(data_path) as data_file:
+ data = json.load(data_file)
+
+ sources = []
+
+ for path_coverage in data.values():
+ for path, path_data in path_coverage.items():
+ sources.append(SourceFile(path, git.show(['%s:%s' % (coverage_data.result_sha, path)]), coverage_data, path_data))
+
+ return sources
+
+
+def generate_report(sources, report_path, coverage_data, target_name, missing):
+ output = [
+ 'Target: %s (%s coverage)' % (target_name, 'missing' if missing else 'exclusive'),
+ 'GitHub: %stest/integration/targets/%s' % (coverage_data.github_base_url, target_name),
+ ]
+
+ for source in sources:
+ if source.covered_arcs:
+ output.extend([
+ '',
+ 'Source: %s (%d arcs, %d/%d lines):' % (source.path, len(source.covered_arcs), len(source.covered_lines), len(source.lines)),
+ 'GitHub: %s' % source.github_url,
+ '',
+ ])
+ else:
+ output.extend([
+ '',
+ 'Source: %s (%d/%d lines):' % (source.path, len(source.covered_lines), len(source.lines)),
+ 'GitHub: %s' % source.github_url,
+ '',
+ ])
+
+ last_line_no = 0
+
+ for line_no, line in enumerate(source.lines, start=1):
+ if line_no not in source.covered_lines:
+ continue
+
+ if last_line_no and last_line_no != line_no - 1:
+ output.append('')
+
+ notes = ''
+
+ if source.covered_arcs:
+ from_lines = sorted(p[0] for p in source.covered_points if abs(p[1]) == line_no)
+ to_lines = sorted(p[1] for p in source.covered_points if abs(p[0]) == line_no)
+
+ if from_lines:
+ notes += ' ### %s -> (here)' % ', '.join(str(from_line) for from_line in from_lines)
+
+ if to_lines:
+ notes += ' ### (here) -> %s' % ', '.join(str(to_line) for to_line in to_lines)
+
+ output.append('%4d %s%s' % (line_no, line, notes))
+ last_line_no = line_no
+
+ with open(report_path, 'w') as report_file:
+ report_file.write('\n'.join(output) + '\n')
+
+
+def parse_arc(value):
+ return tuple(int(v) for v in value.split(':'))
+
+
+def cached(path, use_cache, show_messages, func):
+ if os.path.exists(path) and use_cache:
+ if show_messages:
+ sys.stderr.write('%s: cached\n' % path)
+ sys.stderr.flush()
+ return
+
+ if show_messages:
+ sys.stderr.write('%s: generating ... ' % path)
+ sys.stderr.flush()
+
+ func()
+
+ if show_messages:
+ sys.stderr.write('done\n')
+ sys.stderr.flush()
+
+
+def check_failed(args, message):
+ if args.skip_checks:
+ sys.stderr.write('WARNING: %s\n' % message)
+ return
+
+ raise ApplicationError(message)
+
+
+class ApplicationError(Exception):
+ pass
+
+
+if __name__ == '__main__':
+ main()
diff --git a/hacking/azp/run.py b/hacking/azp/run.py
new file mode 100755
index 0000000000..00a177944f
--- /dev/null
+++ b/hacking/azp/run.py
@@ -0,0 +1,94 @@
+#!/usr/bin/env python
+# PYTHON_ARGCOMPLETE_OK
+
+# (c) 2016 Red Hat, Inc.
+#
+# This file is part of Ansible
+#
+# Ansible is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Ansible is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+
+"""CLI tool for starting new CI runs."""
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+# noinspection PyCompatibility
+import argparse
+import json
+import os
+import sys
+import requests
+import requests.auth
+
+try:
+ import argcomplete
+except ImportError:
+ argcomplete = None
+
+# TODO: Dev does not have a token for AZP, somebody please test this.
+
+# Following changes should be made to improve the overall style:
+# TODO use new style formatting method.
+# TODO type hints.
+
+
+def main():
+ """Main program body."""
+
+ args = parse_args()
+
+ key = os.environ.get('AZP_TOKEN', None)
+ if not key:
+        sys.stderr.write("please set your AZP token in the AZP_TOKEN environment variable\n")
+ sys.exit(1)
+
+ start_run(args, key)
+
+
+def parse_args():
+ """Parse and return args."""
+
+ parser = argparse.ArgumentParser(description='Start a new CI run.')
+
+ parser.add_argument('-p', '--pipeline-id', type=int, default=20, help='pipeline to download the job from')
+ parser.add_argument('--ref', help='git ref name to run on')
+
+ parser.add_argument('--env',
+ nargs=2,
+ metavar=('KEY', 'VALUE'),
+ action='append',
+ help='environment variable to pass')
+
+ if argcomplete:
+ argcomplete.autocomplete(parser)
+
+ args = parser.parse_args()
+
+ return args
+
+
+def start_run(args, key):
+ """Start a new CI run."""
+
+ url = "https://dev.azure.com/ansible/ansible/_apis/pipelines/%s/runs?api-version=6.0-preview.1" % args.pipeline_id
+ payload = {"resources": {"repositories": {"self": {"refName": args.ref}}}}
+
+    resp = requests.post(url, auth=requests.auth.HTTPBasicAuth('user', key), json=payload)
+ resp.raise_for_status()
+
+ print(json.dumps(resp.json(), indent=4, sort_keys=True))
+
+
+if __name__ == '__main__':
+ main()