 docs/clustering.md              | 56 --------------------------------------
 docs/container_groups.md        | 14 ++++++--------
 docs/container_groups/README.md | 14 ++++++--------
 docs/tasks.md                   | 11 -----------
 4 files changed, 12 insertions(+), 83 deletions(-)
diff --git a/docs/clustering.md b/docs/clustering.md
index 93c2d55ec0..c65882b46e 100644
--- a/docs/clustering.md
+++ b/docs/clustering.md
@@ -71,62 +71,6 @@ Recommendations and constraints:
- Do not name any instance the same as a group name.
-### Security-Isolated Rampart Groups
-
-In Tower versions 3.2+, customers may optionally define isolated groups inside security-restricted networking zones from which to run jobs and ad hoc commands. Instances in these groups will _not_ have a full install of Tower, but will have a minimal set of utilities used to run jobs. Isolated groups must be specified in the inventory file with a name prefixed by `isolated_group_`. An example inventory file is shown below:
-
-```
-[tower]
-towerA
-towerB
-towerC
-
-[instance_group_security]
-towerB
-towerC
-
-[isolated_group_govcloud]
-isolatedA
-isolatedB
-
-[isolated_group_govcloud:vars]
-controller=security
-```
-
-In the isolated rampart model, "controller" instances interact with "isolated" instances via a series of Ansible playbooks over SSH. At installation time, a randomized RSA key is generated and distributed as an authorized key to all "isolated" instances. The private half of the key is encrypted and stored within Tower, and is used to authenticate from "controller" instances to "isolated" instances when jobs are run.
-
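-The commands below are not the installer's actual implementation; they are a minimal sketch of the idea, assuming standard OpenSSH tooling and the illustrative hosts from the inventory above:
-
-```
-# Illustrative only: generate a dedicated keypair for controller -> isolated SSH.
-ssh-keygen -t rsa -b 4096 -N '' -f ./isolated_job_key
-
-# Distribute the public half as an authorized key on each isolated instance;
-# Tower would encrypt and store the private half rather than leave it on disk.
-for host in isolatedA isolatedB; do
-    ssh-copy-id -i ./isolated_job_key.pub "$host"
-done
-```
-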
-When a job is scheduled to run on an "isolated" instance:
-
-* The "controller" instance compiles metadata required to run the job and copies it to the "isolated" instance via `rsync` (any related project or inventory updates are run on the controller instance). This metadata includes:
-
- - the entire SCM checkout directory for the project
- - a static inventory file
- - pexpect passwords
- - environment variables
- - the `ansible`/`ansible-playbook` command invocation, _i.e._, `ansible-playbook -i /path/to/inventory /path/to/playbook.yml -e ...`
-
-* Once the metadata has been `rsync`ed to the isolated host, the "controller" instance starts a process on the "isolated" instance which consumes the metadata and starts running `ansible`/`ansible-playbook`. As the playbook runs, job artifacts (such as `stdout` and job events) are written to disk on the "isolated" instance.
-
-* While the job runs on the "isolated" instance, the "controller" instance periodically copies job artifacts (`stdout` and job events) from the "isolated" instance using `rsync`, and processes them until the job finishes running there (a sketch of this loop follows below).
-
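-The loop below is not AWX's code; it is a minimal sketch of the dispatch-and-sync flow described above, with hypothetical paths and the illustrative host `isolatedA`:
-
-```
-# Illustrative only: push the compiled job metadata to the isolated host,
-# start the run, then pull artifacts back until the job process exits.
-rsync -az /tmp/awx_job_42/ isolatedA:/tmp/awx_job_42/
-ssh isolatedA 'cd /tmp/awx_job_42 && ansible-playbook -i inventory playbook.yml' &
-
-while kill -0 $! 2>/dev/null; do
-    rsync -az isolatedA:/tmp/awx_job_42/artifacts/ /tmp/awx_job_42/artifacts/
-    sleep 5
-done
-```
-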
-Isolated groups are architected such that they may exist inside a VPC with security rules that _only_ permit the instances in their `controller` group to access them; only ingress SSH traffic from "controller" instances to "isolated" instances is required.
-
-Recommendations for system configuration with isolated groups:
- - Do not create a group named `isolated_group_tower`.
- - Do not put any isolated instances inside the `tower` group or other ordinary instance groups.
 - Define the `controller` variable as either a group var or as a hostvar on all the instances in the isolated group. Please _do not_ allow isolated instances in the same group to have different values for this variable; the behavior in that case cannot be predicted.
- - Do not put an isolated instance in more than one isolated group.
-
-
-Isolated Instance Authentication
---------------------------------
-At installation time, by default, a randomized RSA key is generated and distributed as an authorized key to all "isolated" instances. The private half of the key is encrypted and stored within Tower, and is used to authenticate from "controller" instances to "isolated" instances when jobs are run.
-
-For users who wish to manage SSH authentication from "controller" instances to "isolated" instances via some system _outside_ of Tower (such as externally-managed, passwordless SSH keys), this behavior can be disabled by unsetting two Tower API settings values:
-
-`HTTP PATCH /api/v2/settings/jobs/ {'AWX_ISOLATED_PRIVATE_KEY': '', 'AWX_ISOLATED_PUBLIC_KEY': ''}`
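-
-For example, with `curl` (the host and credentials here are illustrative):
-
-```
-# Illustrative only: clear both settings so Tower no longer manages the keypair.
-curl -k -u admin:password \
-     -H 'Content-Type: application/json' \
-     -X PATCH https://tower.example.com/api/v2/settings/jobs/ \
-     -d '{"AWX_ISOLATED_PRIVATE_KEY": "", "AWX_ISOLATED_PUBLIC_KEY": ""}'
-```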
-
-
### Provisioning and Deprovisioning Instances and Groups
* **Provisioning** - Provisioning Instances after installation is supported by updating the `inventory` file and re-running the setup playbook. It's important that this file contain all passwords and information used when installing the cluster, or other instances may be reconfigured (this can be done intentionally).
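
For example (the `setup.sh` installer wrapper and its `-i` flag are assumptions based on standard Tower installs, not taken from this document):

```
# Illustrative only: after adding the new instances to the inventory file,
# re-run the setup playbook against it.
./setup.sh -i inventory
```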
diff --git a/docs/container_groups.md b/docs/container_groups.md
index 5a9d88e58c..30f1f869be 100644
--- a/docs/container_groups.md
+++ b/docs/container_groups.md
@@ -1,13 +1,11 @@
# Container Groups
-In a traditional AWX installation, jobs (ansible-playbook runs) are executed
-either directly on a member of the cluster or on a pre-provisioned "isolated"
-node.
-
-The concept of a Container Group (working name) allows for job environments to
-be provisioned on-demand as a Pod that exists only for the duration of the
-playbook run. This is known as the ephemeral execution model and ensures a clean
-environment for every job run.
+In a traditional AWX installation, jobs (ansible-playbook runs) are
+executed directly on a member of the cluster. The concept of a
+Container Group (working name) allows for job environments to be
+provisioned on-demand as a Pod that exists only for the duration of
+the playbook run. This is known as the ephemeral execution model and
+ensures a clean environment for every job run.
In some cases it is desirable to have the execution environment be "always-on";
this is done by manually creating an instance through the AWX API or UI.
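
As a rough illustration of the ephemeral model, a per-job Pod along these lines could be created and torn down for each run (the image, names, and paths below are hypothetical, not what AWX actually submits):

```
# Illustrative only: a short-lived pod that exists for one playbook run.
kubectl run awx-job-42 --rm -i --restart=Never \
    --image=example/awx-runner \
    -- ansible-playbook -i /runner/inventory /runner/playbook.yml
```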
diff --git a/docs/container_groups/README.md b/docs/container_groups/README.md
index a13644abb2..0949379bed 100644
--- a/docs/container_groups/README.md
+++ b/docs/container_groups/README.md
@@ -1,13 +1,11 @@
# Container Groups
-In a traditional AWX installation, jobs (ansible-playbook runs) are executed
-either directly on a member of the cluster or on a pre-provisioned "isolated"
-node.
-
-The concept of a Container Group (working name) allows for job environments to
-be provisioned on-demand as a Pod that exists only for the duration of the
-playbook run. This is known as the ephemeral execution model and ensures a clean
-environment for every job run.
+In a traditional AWX installation, jobs (ansible-playbook runs) are
+executed directly on a member of the cluster. The concept of a
+Container Group (working name) allows for job environments to be
+provisioned on-demand as a Pod that exists only for the duration of
+the playbook run. This is known as the ephemeral execution model and
+ensures a clean environment for every job run.
## Configuration
diff --git a/docs/tasks.md b/docs/tasks.md
index f2e29ec777..7ff5847052 100644
--- a/docs/tasks.md
+++ b/docs/tasks.md
@@ -157,17 +157,6 @@ One of the most important tasks in a clustered AWX installation is the periodic
If a node in an AWX cluster discovers that one of its peers has not updated its heartbeat within a certain grace period, it is assumed to be offline, and its capacity is set to zero to avoid scheduling new tasks on that node. Additionally, jobs allegedly running or scheduled to run on that node are assumed to be lost, and "reaped", or marked as failed.
-#### Isolated Tasks and Their Heartbeats
-
-AWX reports as much status as it can via the browsable API at `/api/v2/ping` in order to provide validation of the health of an instance, including the timestamps of the last heartbeat. Since isolated nodes don't have access to the AWX database, their heartbeats are performed by controller nodes instead. A periodic task, `awx_isolated_heartbeat`, is responsible for periodically connecting from a controller to each isolated node and retrieving its capacity (via SSH).
-
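-For example, an instance's reported health can be inspected directly (the host below is illustrative):
-
-```
-# Illustrative only: fetch cluster health, including last-heartbeat timestamps.
-curl -k https://awx.example.com/api/v2/ping/ | python -m json.tool
-```
-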
-When a job is scheduled to run on an isolated instance, the controller instance compiles the metadata required to run the job and transfers it to the isolated instance. Once the metadata has been synchronized to the isolated host, the controller instance starts a process on the isolated instance, which consumes the metadata and begins running `ansible`/`ansible-playbook`. As the playbook runs, job artifacts (such as `stdout` and job events) are written to disk on the isolated instance.
-
-While the job runs on the isolated instance, the controller instance periodically checks for and copies the job artifacts (_e.g._, `stdout` and job events) that it produces, and processes them until the job finishes running.
-
-To read more about Isolated Instances, refer to the [Isolated Instance Groups](https://docs.ansible.com/ansible-tower/latest/html/administration/clustering.html#isolated-instance-groups) section of the Clustering page in the Ansible Tower Administration guide.
-
-
## AWX Jobs
### Unified Jobs