ocs_ci.deployment package

Subpackages

Submodules

ocs_ci.deployment.acm module

All ACM related deployment classes and functions should go here.

class ocs_ci.deployment.acm.Submariner

Bases: object

Submariner configuration and deployment

create_acm_brew_icsp()

This is a prereq for downstream unreleased Submariner

deploy()
deploy_downstream()
deploy_upstream()
download_binary()
get_default_gateway_node()

Return the default node to be used as submariner gateway

Returns:

Name of the gateway node

Return type:

str

get_primary_cluster_index()

Return the list index (in the config list) of the primary cluster. A cluster is primary from the DR perspective.

Returns:

Index of the cluster designated as primary

Return type:

int

get_subctl_version()

Run the ‘subctl version’ command and return a Version object

Returns:

semantic version object

Return type:

vers (Version)

submariner_configure_upstream()

Deploy and configure upstream Submariner

Raises:

DRPrimaryNotFoundException – If there is no designated primary cluster found

ocs_ci.deployment.acm.run_subctl_cmd(cmd=None)

Run subctl command

Parameters:

cmd – subctl command to be executed

ocs_ci.deployment.acm.run_subctl_cmd_interactive(cmd, prompt, answer)

Handle interactive prompts with answers during subctl command

Parameters:
  • cmd (str) – Command to be executed

  • prompt (str) – Expected prompt during the command run which needs to be answered

  • answer (str) – Answer for the prompt

Raises:

InteractivePromptException – in case something goes wrong
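
As a usage sketch (the command, prompt and answer strings below are illustrative, not taken from this module):

from ocs_ci.deployment.acm import run_subctl_cmd, run_subctl_cmd_interactive

# Plain subctl invocation
run_subctl_cmd("show versions")

# Interactive invocation: answer an expected confirmation prompt
run_subctl_cmd_interactive(
    cmd="deploy-broker",
    prompt="Do you want to continue?",
    answer="yes",
)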

ocs_ci.deployment.assisted_installer module

This module implements functionality for deploying OCP cluster via Assisted Installer

class ocs_ci.deployment.assisted_installer.AssistedInstallerCluster(name, cluster_path, existing_cluster=False, openshift_version=None, base_dns_domain=None, api_vip=None, ingress_vip=None, ssh_public_key=None, pull_secret=None, cpu_architecture='x86_64', high_availability_mode='Full', image_type='minimal-iso', static_network_config=None)

Bases: object

create_cluster()

Create (register) new cluster in Assisted Installer console

create_infrastructure_environment()

Create new Infrastructure Environment for the cluster

create_kubeadmin_password_file()

Export password for kubeadmin to auth/kubeadmin-password file in cluster path

create_kubeconfig_file()

Export kubeconfig to auth directory in cluster path.

create_metadata_file()

Create metadata.json file.

create_openshift_install_log_file()

Create .openshift_install.log file containing URL to OpenShift console. It is used by our CI jobs to show the console URL in build description.

delete_cluster()

Delete the cluster

delete_infrastructure_environment()

Delete the Infrastructure Environment

download_discovery_iso(local_path)

Download the discovery iso image

Parameters:

local_path (str) – path where to store the discovery iso image

download_ipxe_config(local_path)

Download the ipxe config for discovery boot

Parameters:

local_path (str) – path where to store the ipxe config

Returns:

path to the downloaded ipxe config file

Return type:

str

get_host_id_mac_mapping()

Prepare mapping between host ID and mac addresses

Returns:

host id to mac mapping ([[host1_id, mac1], [host1_id, mac2], [host2_id, mac3],…])

Return type:

list of lists

install_cluster()

Trigger cluster installation

load_existing_cluster_configuration()

Load configuration from existing cluster

prepare_pull_secret(original_pull_secret)

Combine the original pull secret with the pull secret for the Assisted Installer console user. We have to replace the cloud.openshift.com credentials in the original pull-secret with the credentials for the current user, otherwise Assisted Installer will complain that the pull secret belongs to a different user.

Parameters:

original_pull_secret (str or dict) – content of pull secret
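
A minimal sketch of the merging logic described above, assuming the standard .dockerconfigjson layout; merge_pull_secrets and ai_cloud_auth are hypothetical names used only for illustration:

import json

def merge_pull_secrets(original_pull_secret, ai_cloud_auth):
    # Accept either a JSON string or an already-parsed dict
    if isinstance(original_pull_secret, str):
        original_pull_secret = json.loads(original_pull_secret)
    # Replace the cloud.openshift.com credentials with the Assisted Installer
    # console user's credentials, so the service does not complain that the
    # pull secret belongs to a different user
    original_pull_secret["auths"]["cloud.openshift.com"] = ai_cloud_auth
    return json.dumps(original_pull_secret)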

update_hosts_config(mac_name_mapping, mac_role_mapping)

Update host names and roles.

Parameters:
  • mac_name_mapping (dict) – host mac address to host name mapping

  • mac_role_mapping (dict) – host mac address to host role mapping

verify_validations_info_for_discovered_nodes()

Check and verify validations info for the discovered nodes.

wait_for_discovered_nodes(expected_nodes)

Wait for expected number of nodes to appear in the Assisted Installer infra/cluster

Parameters:

expected_nodes (int) – number of expected nodes
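
A sketch of a typical deployment flow with this class; only the constructor signature and method names above are taken from this module, all argument values are illustrative:

from ocs_ci.deployment.assisted_installer import AssistedInstallerCluster

# Illustrative mappings; in real runs these are built from the environment
mac_name_mapping = {"52:54:00:aa:bb:01": "master-0"}
mac_role_mapping = {"52:54:00:aa:bb:01": "master"}

ai = AssistedInstallerCluster(
    name="mycluster",
    cluster_path="/clusters/mycluster",
    openshift_version="4.14",
    base_dns_domain="example.com",
    pull_secret="<pull-secret JSON>",  # see prepare_pull_secret(...)
)
ai.create_cluster()
ai.create_infrastructure_environment()
ai.download_discovery_iso("/tmp/discovery.iso")
# ... boot the machines from the discovery image ...
ai.wait_for_discovered_nodes(expected_nodes=1)
ai.update_hosts_config(mac_name_mapping, mac_role_mapping)
ai.verify_validations_info_for_discovered_nodes()
ai.install_cluster()
ai.create_kubeconfig_file()
ai.create_kubeadmin_password_file()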

ocs_ci.deployment.aws module

This module contains platform specific methods and classes for deployment on AWS platform

class ocs_ci.deployment.aws.AWSIPI

Bases: AWSBase

A class to handle AWS IPI specific deployment

class OCPDeployment

Bases: IPIOCPDeployment

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for cloud IPI here.

sts_setup()

Perform setup procedure for STS Mode deployments.

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster specific to AWS IPI

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

class ocs_ci.deployment.aws.AWSUPI

Bases: AWSBase

A class to handle AWS UPI specific deployment

class OCPDeployment

Bases: OCPDeployment

deploy(log_cli_level='DEBUG')

Exact deployment will happen here

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for AWSUPI here.

add_rhel_workers()

Add RHEL worker nodes to the existing cluster

build_ansible_inventory(hosts)

Build the ansible hosts file from jinja template

Parameters:

hosts (list) – list of private host names

Returns:

path of the ansible file created

Return type:

str

check_connection(rhel_pod_obj, host, pem_dst_path)
create_rhel_instance()

This function does the following:
  1. Create RHEL worker instances
  2. Copy required AWS tags from existing worker instances to the new RHEL instances
  3. Copy the IAM role from an existing worker to the new RHEL workers

deploy_ocp(log_cli_level='DEBUG')

OCP deployment specific to AWS UPI

Parameters:

log_cli_level (str) – openshift installer’s log level (default: ‘DEBUG’)

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster for AWS UPI

Parameters:

log_level (str) – log level for openshift-installer ( default:DEBUG)

gather_worker_data(suffix='no0')

Gather various info like vpc, iam role, subnet, security group, cluster tag from existing RHCOS workers

Parameters:

suffix (str) – suffix to get resource of worker node, ‘no0’ by default

get_kube_tag(tags)

Fetch the kubernetes.io tag from a worker instance

Parameters:

tags (dict) – AWS tags from existing worker

Returns:

key looks like “kubernetes.io/cluster/<cluster-name>” and value looks like “shared” OR “owned”

Return type:

tuple
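
For illustration, the lookup can be pictured as follows (a sketch, not the module’s actual implementation):

def get_kube_tag(tags):
    # Find the kubernetes.io/cluster/<cluster-name> tag among the AWS tags
    for key, value in tags.items():
        if key.startswith("kubernetes.io/cluster/"):
            return key, value  # value is "shared" or "owned"
    raise KeyError("kubernetes.io cluster tag not found")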

get_ready_status(node_ent)

Get the node ‘Ready’ status

Parameters:

node_ent (dict) – Node info which includes details

Returns:

True if node is Ready else False

Return type:

bool

get_rhcos_workers()

Returns a list of rhcos worker names

Returns:

list of rhcos worker nodes

Return type:

rhcos_workers (list)

get_worker_resource_id(resource)

Get the resource ID

Parameters:

resource (dict) – a dictionary of stack resource

Returns:

ID of worker stack resource

Return type:

str

remove_rhcos_workers()

After RHEL workers are added, remove RHCOS workers from the cluster

Raises:

FailedToRemoveNodeException – if RHCOS worker removal fails

run_ansible_playbook()

Bring up a helper pod (RHEL) to run openshift-ansible playbook

verify_nodes_added(hosts)

Verify RHEL workers are added

Parameters:

hosts (list) – list of aws private hostnames

Raises:

FailedToAddNodeException – if node addition failed

ocs_ci.deployment.azure module

This module contains platform specific methods and classes for deployment on Azure platform.

class ocs_ci.deployment.azure.AZUREIPI

Bases: AZUREBase

A class to handle Azure IPI specific deployment.

OCPDeployment

alias of IPIOCPDeployment

ocs_ci.deployment.baremetal module

class ocs_ci.deployment.baremetal.BAREMETALAI

Bases: BAREMETALBASE

A class to handle Bare metal Assisted Installer specific deployment

class OCPDeployment

Bases: BMBaseOCPDeployment

configure_ipxe_on_helper()

Configure iPXE on helper node

create_config()

Create the OCP deploy config.

create_dns_records()

Configure DNS records for api and ingress

create_pxe_file(template='ocp-deployment/pxelinux.cfg.ipxe.j2', **kwargs)

Prepare content of the PXE file for chain-loading into iPXE

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – not used for Assisted Installer deployment

deploy_prereq()

Pre-Requisites for Bare Metal AI Deployment

destroy()

Cleanup cluster related resources.

set_pxe_boot_and_reboot(machine)

Set PXE boot via IPMI and restart the machine

Parameters:

machine (str) – Machine Name

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster specific to Baremetal - Assisted installer deployment

Parameters:

log_level (str) – this parameter is not used here

class ocs_ci.deployment.baremetal.BAREMETALBASE

Bases: Deployment

A common class for Bare metal deployments

class ocs_ci.deployment.baremetal.BAREMETALUPI

Bases: BAREMETALBASE

A class to handle Bare metal UPI specific deployment

class OCPDeployment

Bases: BMBaseOCPDeployment

configure_storage_for_image_registry(kubeconfig)

Configures storage for the image registry

create_config()

Creates the OCP deploy config for the Bare Metal

create_ignitions()

Creates the ignition files

create_manifest()

Creates the Manifest files

create_pxe_files(ocp_version, role, disk_path)

Create PXE file for the given role

Parameters:
  • ocp_version (float) – OCP version

  • role (str) – Role of node eg:- bootstrap,master,worker

Returns:

temp file path

Return type:

str

deploy(log_cli_level='DEBUG')

Deploy

deploy_prereq()

Pre-Requisites for Bare Metal UPI Deployment

destroy(log_level='')

Destroy OCP cluster specific to BM UPI

set_pxe_boot_and_reboot(machine)

Set PXE boot via IPMI and restart the machine

Parameters:

machine (str) – Machine Name

class ocs_ci.deployment.baremetal.BMBaseOCPDeployment

Bases: OCPDeployment

check_bm_status_exist()

Check if BM cluster already exists

Returns:

response status

Return type:

str

configure_dnsmasq_common_config()

Prepare common configuration for dnsmasq

configure_dnsmasq_hosts_config()

Prepare hosts configuration for dnsmasq DHCP

configure_dnsmasq_on_helper_vm()

Install and configure dnsmasq and other required packages for DHCP and PXE boot server on helper VM

configure_dnsmasq_pxe_config()

Prepare PXE configuration for dnsmasq

deploy_prereq()

Pre-Requisites for Bare Metal deployments

destroy(log_level='')

Destroy OCP cluster

get_locked_username()

Get name of user who has locked baremetal resource

Returns:

username

Return type:

str

property helper_node_handler

Create connection to helper node hosting httpd, tftp and dhcp services for PXE boot

restart_dnsmasq_service_on_helper_vm()

Restart dnsmasq service providing DHCP and TFTP services for UPI deployment

start_dnsmasq_service_on_helper_vm()

Start dnsmasq service providing DHCP and TFTP services for UPI deployment

stop_dnsmasq_service_on_helper_vm()

Stop dnsmasq service providing DHCP and TFTP services for UPI deployment

update_bm_status(bm_status)

Update BM status when cluster is deployed/teardown

Parameters:

bm_status (str) – Status to be updated

Returns:

response message

Return type:

str

class ocs_ci.deployment.baremetal.BaremetalPSIUPI

Bases: Deployment

All the functionality related to Baremetal PSI UPI deployment lives here

class OCPDeployment

Bases: OCPDeployment

deploy(log_level='')

Implement ocp deploy in specific child class

deploy_prereq()

Instantiate proper flexy class here

destroy(log_level='')

Destroy volumes attached if any and then the cluster

ocs_ci.deployment.baremetal.clean_disk(worker)

Perform disk cleanup

Parameters:

worker (object) – worker node object

ocs_ci.deployment.cert_manager module

This module contains functions needed for installing cert-manager operator from Red Hat. More information about cert-manager can be found at https://github.com/openshift/cert-manager-operator and https://cert-manager.io/

ocs_ci.deployment.cert_manager.deploy_cert_manager()

Installs cert-manager

ocs_ci.deployment.cloud module

This module contains common code and a base class for any cloud platform deployment.

class ocs_ci.deployment.cloud.CloudDeploymentBase

Bases: Deployment

Base class for deployment on a cloud platform (such as AWS, Azure, …).

check_cluster_existence(cluster_name_prefix)

Check cluster existence according to cluster name prefix

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

class ocs_ci.deployment.cloud.IPIOCPDeployment

Bases: OCPDeployment

Common implementation of IPI OCP deployments for cloud platforms.

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for cloud IPI here.

ocs_ci.deployment.cnv module

This module contains functionality required for CNV installation.

class ocs_ci.deployment.cnv.CNVInstaller

Bases: object

CNV Installer class for CNV deployment

check_virtctl_compatibility()

Check if the virtctl binary is compatible with the current system.

Raises:

exceptions.ArchitectureNotSupported – If virtctl is not compatible.

create_cnv_catalog_source()

Creates a nightly catalogsource manifest for CNV operator deployment from quay registry.

create_cnv_namespace()

Creates the namespace for CNV resources

Raises:

CommandFailed – If the ‘oc create’ command fails.

create_cnv_operatorgroup()

Creates an OperatorGroup for CNV

create_cnv_subscription()

Creates subscription for CNV operator

deploy_cnv()

Installs CNV enabling software emulation.

deploy_hyper_converged()

Deploys the HyperConverged CR.

Raises:

TimeoutExpiredError – If the HyperConverged resource does not become available within the specified time.

download_and_extract_virtctl_binary(bin_dir=None)

Download and extract the virtctl binary to bin_dir

Parameters:

bin_dir (str) – The directory to store the virtctl binary.

enable_software_emulation()

Enable software emulation. This is needed on a cluster where the nodes do not support hardware emulation.

Note that software emulation, when enabled, is only used as a fallback when hardware emulation is not available. Hardware emulation is always attempted first, regardless of the value of useEmulation.

Get all the URL links from virtctl specification links.

Returns:

A list of virtctl download URLs.

Return type:

List[str]

Raises:

exceptions.ResourceNotFoundError – If no URL entries are found.

Retrieve the specification links for the virtctl client.

Returns:

A list of dictionaries containing specification links.

Return type:

List[dict]

Raises:

exceptions.ResourceNotFoundError – If virtctl ConsoleCLIDownload is not found.

get_virtctl_download_url(os_type, os_machine_type)

Get the virtctl download URL based on the specified platform and architecture.

Parameters:
  • os_type (str) – The operating system.

  • os_machine_type (str) – The operating system machine architecture.

Returns:

The virtctl download URL if found, otherwise None.

Return type:

Optional[str]
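
Example call (a sketch assuming a no-argument constructor; the os_type and os_machine_type values are illustrative):

from ocs_ci.deployment.cnv import CNVInstaller

installer = CNVInstaller()  # constructor arguments, if any, are not shown in this doc
url = installer.get_virtctl_download_url(os_type="linux", os_machine_type="x86_64")
if url is None:
    # no matching download link was found for this platform/architecture
    ...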

post_install_verification()

Performs CNV post-installation verification.

wait_for_the_resource_to_discover(kind, namespace, resource_name)

Waits for the specified resource to be discovered.

Parameters:
  • kind (str) – The type of the resource to wait for.

  • namespace (str) – The namespace in which to wait for the resource.

  • resource_name (str) – The name of the resource to wait for.

ocs_ci.deployment.deployment module

This module provides base class for different deployment platforms like AWS, VMWare, Baremetal etc.

class ocs_ci.deployment.deployment.Deployment

Bases: object

Base for all deployment platforms

CUSTOM_STORAGE_CLASS_PATH = None

filepath of yaml file with custom storage class if necessary

For some platforms, one has to create a custom storage class for OCS to make sure Ceph uses disks of the expected type and parameters (e.g. OCS requires SSD). This variable is either None (meaning that such a custom storage class is not needed), or points to a yaml file with the custom storage class.

Type:

str

DEFAULT_STORAGECLASS = None
DEFAULT_STORAGECLASS_LSO = 'localblock'
class OCPDeployment

Bases: OCPDeployment

This class has to be implemented in child class and should overload methods for platform specific config.

add_node()

Implement platform-specific add_node in child class

cleanup_pgsql_db()

Perform cleanup for noobaa external pgsql DB in case external pgsql is enabled.

deploy_acm_hub()

Handle ACM HUB deployment

deploy_acm_hub_released()

Handle ACM HUB released image deployment

deploy_acm_hub_unreleased()

Handle ACM HUB unreleased image deployment

deploy_cluster(log_cli_level='DEBUG')

We are handling both OCP and OCS deployment here based on flags

Parameters:

log_cli_level (str) – log level for installer (default: DEBUG)

deploy_gitops_operator(switch_ctx=None)

Deploy GitOps operator

Parameters:

switch_ctx (int) – The cluster index by the cluster name

deploy_lvmo()

deploy lvmo for platform specific (for now only vsphere)

deploy_ocp(log_cli_level='DEBUG')

Base deployment steps, the rest should be implemented in the child class.

Parameters:

log_cli_level (str) – log level for installer (default: DEBUG)

deploy_ocs()

Handle OCS deployment, since OCS deployment steps are common to any platform, implementing OCS deployment here in base class.

deploy_ocs_via_operator(image=None)

Method to deploy OCS via the OCS operator

Parameters:

image (str) – Image of ocs registry.

deploy_odf_addon()

This method deploys the ODF addon.

deploy_with_external_mode()

This function handles the deployment of OCS on an external/independent RHCS cluster

deployment_with_ui()

Deploy OCS Operator via the OpenShift Console

destroy_cluster(log_level='DEBUG')

Base destroy cluster method, for more platform specific stuff please overload this method in child class.

Parameters:

log_level (str) – log level for installer (default: DEBUG)

do_deploy_cert_manager()

Installs cert-manager operator

do_deploy_fusion()

Install IBM Fusion operator

do_deploy_lvmo()

Call lvm deploy

do_deploy_ocp(log_cli_level)

Deploy OCP

Parameters:

log_cli_level (str) – log level for the installer

do_deploy_ocs()

Deploy OCS/ODF and run verification as well

do_deploy_rdr()

Call Regional DR deploy

do_deploy_submariner()

Deploy Submariner operator

do_gitops_deploy()

Deploy GitOps operator

external_post_deploy_validation()

This function validates successful deployment of OCS in external mode; some of the steps overlap with converged mode

get_arbiter_location()

Get arbiter mon location for storage cluster

get_rdr_conf()

Aggregate important Regional DR parameters into a dictionary

Returns:

Dictionary of Regional DR config parameters

Return type:

dict

label_and_taint_nodes()

Label and taint worker nodes to be used by OCS operator

patch_default_sc_to_non_default()

Patch storage class which comes as default with installation to non-default

post_ocp_deploy()

Function does post OCP deployment stuff we need to do.

set_rook_log_level()
subscribe_ocs()

This method creates the subscription manifest and subscribes to the OCS operator.

wait_for_csv(csv_name, namespace=None)

Wait for the CSV to appear

Parameters:
  • csv_name (str) – CSV name pattern

  • namespace (str) – Namespace where CSV exists

wait_for_subscription(subscription_name, namespace=None)

Wait for the subscription to appear

Parameters:
  • subscription_name (str) – Subscription name pattern

  • namespace (str) – Namespace where to check the subscription; if None, the default from ENV_data is used

class ocs_ci.deployment.deployment.MDRMultiClusterDROperatorsDeploy(dr_conf)

Bases: MultiClusterDROperatorsDeploy

A class for Metro-DR deployments

backup_pod_status_check()
build_bucket_name()
create_dpa()

Create DPA. OADP will already be installed when we enable the backup flag. Here we will create the dataprotection application and update the bucket name and s3 storage link.

create_generic_credentials()
create_s3_bucket()
deploy()

deploy ODF multicluster orchestrator operator

deploy_dr_policy()

Deploy dr policy with MDR perspective, only on active ACM

deploy_multicluster_orchestrator()
enable_cluster_backup()

Set cluster-backup to True in the mch resource. Note: changing this flag automatically installs the OADP operator.

enable_managed_serviceaccount()

Update MultiClusterEngine:

  • enabled: true
    name: managedserviceaccount-preview

validate_dpa()

Validate:
  1. 3 restic pods
  2. 1 velero pod
  3. backupstoragelocation resource in “Available” phase

class ocs_ci.deployment.deployment.MultiClusterDROperatorsDeploy(dr_conf)

Bases: object

Implement the Multicluster DR operators deploy part here, mainly:
  1. ODF Multicluster Orchestrator operator
  2. Metadata object stores (s3 OR MCG)
  3. ODF Hub operator
  4. ODF Cluster operator

configure_mirror_peer()
deploy()

deploy ODF multicluster orchestrator operator

deploy_dr_multicluster_orchestrator()

Deploy multicluster orchestrator

deploy_dr_policy()
class mcg_meta_obj_store

Bases: object

class s3_meta_obj_store(conf=None)

Bases: object

Internal class to handle aws s3 metadata obj store

deploy_and_configure()
get_meta_access_secret_keys()

Get aws_access_key_id and aws_secret_access_key. By default we go with AWS; in case of noobaa it should be implemented in the mcg_meta_obj_store class.

get_participating_regions()

Get all the participating regions in the DR scenario

Returns:

List of participating regions

Return type:

list of str

get_ramen_resource()
get_s3_profiles()

Get names of s3 profiles from hub configmap resource

get_s3_secret_names()

Get secret resource names for s3

s3_configure()
update_config_map_commit(config_map_data, prefix=None)

Merge the config and update the resource

Parameters:
  • config_map_data (dict) – base dictionary which will be later converted to yaml content

  • prefix (str) – Used to identify temp yaml

update_ramen_config_misc()
validate_mirror_peer(resource_name)

Validate mirror peer. Begins with CTX: ACM

  1. Check phase: if RDR then state = ‘ExchangedSecret’

    if MDR then state = ‘S3ProfileSynced’

  2. Check token-exchange-agent pod in ‘Running’ phase

Raises:

ResourceWrongStatusException – If pod is not in expected state

verify_dr_hub_operator()
class ocs_ci.deployment.deployment.RBDDRDeployOps

Bases: object

All RBD specific DR deployment operations

configure_rbd()
deploy()
validate_csi_sidecar()

Validate sidecar containers for RBD mirroring on each of the ODF clusters

validate_mirror_peer(resource_name)

Validate mirror peer. Begins with CTX: ACM

  1. Check initial phase of ‘ExchangingSecret’

  2. Check token-exchange-agent pod in ‘Running’ phase

Raises:

ResourceWrongStatusException – If pod is not in expected state

class ocs_ci.deployment.deployment.RDRMultiClusterDROperatorsDeploy(dr_conf)

Bases: MultiClusterDROperatorsDeploy

A class for Regional-DR deployments

deploy()

RDR specific steps for deploy

ocs_ci.deployment.deployment.create_catalog_source(image=None, ignore_upgrade=False)

This prepares the catalog source manifest for deploying the OCS operator from the quay registry.

Parameters:
  • image (str) – Image of ocs registry.

  • ignore_upgrade (bool) – Ignore upgrade parameter.

ocs_ci.deployment.deployment.create_external_pgsql_secret()

Creates secret for external PgSQL to be used by Noobaa

ocs_ci.deployment.deployment.create_fusion_catalog_source()

Create catalog source for fusion operator

ocs_ci.deployment.deployment.create_ocs_secret(namespace)

Function for creation of pull secret for OCS. (Mostly for ibmcloud purpose)

Parameters:

namespace (str) – namespace where to create the secret

ocs_ci.deployment.deployment.get_multicluster_dr_deployment()
ocs_ci.deployment.deployment.setup_persistent_monitoring()

Change monitoring backend to OCS

ocs_ci.deployment.deployment.validate_acm_hub_install()

Verify the ACM MultiClusterHub installation was successful.

ocs_ci.deployment.disconnected module

This module contains functionality required for disconnected installation.

ocs_ci.deployment.disconnected.get_csv_from_image(bundle_image)

Extract clusterserviceversion.yaml file from operator bundle image.

Parameters:

bundle_image (str) – OCS operator bundle image

Returns:

loaded yaml from CSV file

Return type:

dict

ocs_ci.deployment.disconnected.mirror_images_from_mapping_file(mapping_file, icsp=None, ignore_image=None)

Mirror images based on mapping.txt file.

Parameters:
  • mapping_file (str) – path to mapping.txt file

  • icsp (dict) – ImageContentSourcePolicy used for mirroring (workaround for stage images, which are pointing to different registry than they really are)

  • ignore_image – image which should be ignored when applying icsp (mirrored index image)

ocs_ci.deployment.disconnected.mirror_index_image_via_oc_mirror(index_image, packages, icsp=None)

Mirror all images required for ODF deployment and testing to mirror registry via oc-mirror tool and create relevant imageContentSourcePolicy. https://github.com/openshift/oc-mirror

Parameters:
  • index_image (str) – index image which will be pruned and mirrored

  • packages (list) – list of packages to keep

  • icsp (dict) – ImageContentSourcePolicy used for mirroring (workaround for stage images, which are pointing to different registry than they really are)

Returns:

mirrored index image

Return type:

str

ocs_ci.deployment.disconnected.mirror_ocp_release_images(ocp_image_path, ocp_version)

Mirror OCP release images to mirror registry.

Parameters:
  • ocp_image_path (str) – OCP release image path

  • ocp_version (str) – OCP release image version or checksum (starting with sha256:)

Returns:

tuple with four strings:
  • mirrored image path,

  • tag or checksum

  • imageContentSources (for install-config.yaml)

  • ImageContentSourcePolicy (for running cluster)

Return type:

tuple (str, str, str, str)
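
Example of consuming the returned tuple (the release image path and version are illustrative):

from ocs_ci.deployment.disconnected import mirror_ocp_release_images

(
    mirrored_image_path,
    tag_or_checksum,
    image_content_sources,
    image_content_source_policy,
) = mirror_ocp_release_images(
    "quay.io/openshift-release-dev/ocp-release", "4.14.8"
)
# image_content_sources goes into install-config.yaml;
# image_content_source_policy is applied on a running cluster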

ocs_ci.deployment.disconnected.prepare_disconnected_ocs_deployment(upgrade=False)

Prepare disconnected ocs deployment:
  • mirror required images from redhat-operators
  • get related images from OCS operator bundle csv
  • mirror related images to mirror registry
  • create imageContentSourcePolicy for the mirrored images
  • disable the default OperatorSources

Parameters:

upgrade (bool) – is this fresh installation or upgrade process (default: False)

Returns:

mirrored OCS registry image prepared for disconnected installation

or None (for live deployment)

Return type:

str

ocs_ci.deployment.disconnected.prune_and_mirror_index_image(index_image, mirrored_index_image, packages, icsp=None)

Prune the given index image and push it to the mirror registry, mirror all related images to the mirror registry and create the relevant imageContentSourcePolicy. This uses the opm index prune command, which supports only sqlite-based catalogs (<= OCP 4.10); for >= OCP 4.11 use the oc-mirror tool implemented in the mirror_index_image_via_oc_mirror(…) function.

Parameters:
  • index_image (str) – index image which will be pruned and mirrored

  • mirrored_index_image (str) – mirrored index image which will be pushed to mirror registry

  • packages (list) – list of packages to keep

  • icsp (dict) – ImageContentSourcePolicy used for mirroring (workaround for stage images, which are pointing to different registry than they really are)

Returns:

path to generated catalogSource.yaml file

Return type:

str
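
Based on the note above, a sketch of choosing between the two mirroring helpers by OCP version (mirror_index is a hypothetical wrapper; version handling is illustrative):

from packaging.version import Version

from ocs_ci.deployment.disconnected import (
    mirror_index_image_via_oc_mirror,
    prune_and_mirror_index_image,
)

def mirror_index(index_image, mirrored_index_image, packages, ocp_version, icsp=None):
    if Version(ocp_version) >= Version("4.11"):
        # file-based catalogs: use the oc-mirror tool
        return mirror_index_image_via_oc_mirror(index_image, packages, icsp=icsp)
    # sqlite-based catalogs (<= OCP 4.10): use opm index prune
    return prune_and_mirror_index_image(
        index_image, mirrored_index_image, packages, icsp=icsp
    )

Note that the two helpers return different artifacts (a mirrored index image vs. a path to the generated catalogSource.yaml), so a real caller would handle the results differently.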

ocs_ci.deployment.factory module

class ocs_ci.deployment.factory.DeploymentFactory

Bases: object

A factory class to get specific platform object

get_deployment()

Get the exact deployment class based on ENV_DATA. Example: deployment_platform may look like ‘aws’, ‘vmware’, ‘baremetal’; deployment_type may be ‘ipi’, ‘upi’ or ‘ai’.
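
Example usage (a sketch; the concrete class returned depends on the loaded ocs-ci configuration):

from ocs_ci.deployment.factory import DeploymentFactory

factory = DeploymentFactory()
deployer = factory.get_deployment()  # e.g. AWSIPI, VSPHEREUPI, ... per ENV_DATA
deployer.deploy_cluster(log_cli_level="INFO")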

ocs_ci.deployment.flexy module

All the flexy related classes and functionality lives here

class ocs_ci.deployment.flexy.FlexyAWSUPI

Bases: FlexyBase

A specific implementation of AWS UPI installation using flexy

class ocs_ci.deployment.flexy.FlexyBaremetalPSI

Bases: FlexyBase

A specific implementation of Baremetal with PSI using flexy

class ocs_ci.deployment.flexy.FlexyBase

Bases: object

A base class for all types of flexy installs

build_container_args(purpose='')

Builds most commonly used arguments for flexy container

Parameters:

purpose (str) – purpose for which we are building these args, e.g. destroy, debug. By default it is an empty string, which turns into ‘deploy’ mode for flexy

Returns:

List of flexy container args

Return type:

list

build_destroy_cmd()

Build flexy command line for ‘destroy’ operation

build_install_cmd()

Build flexy command line for ‘deploy’ operation

clone_and_unlock_ocs_private_conf()

Clone ocs_private_conf (flexy env and config) repo into flexy_host_dir

deploy(log_level='')

Build and invoke the flexy deployer here

Parameters:

log_level (str) – log level for flexy container

deploy_prereq()

Common flexy prerequisites like cloning the private-conf repo locally and updating the contents with user supplied values

destroy()

Invokes flexy container with ‘destroy’ argument

flexy_backup_work_dir()

Perform copying of flexy-dir to cluster_path.

flexy_post_processing()

Perform a few actions required after flexy execution:
  • update global pull-secret
  • login to mirror registry (disconnected cluster)
  • configure proxy server (disconnected cluster)
  • configure ntp (if required)

flexy_prepare_work_dir()

Prepare Flexy working directory (flexy-dir):
  • copy flexy-dir from cluster_path to data dir (if available)

  • set proper ownership

get_installer_payload(version=None)

A proper installer payload url required for flexy, based on DEPLOYMENT[‘installer_version’]. If ‘nightly’ is present, then we will use registry.svc to get the latest nightly; else if ‘-ga’ is present, then we will look for ENV_DATA[‘installer_payload_image’].

merge_flexy_env()

Update the Flexy env file with the user supplied values. This function assumes that the flexy_env_file is available (e.g. flexy-ocs-private repo has been already cloned).

run_container(cmd_string)

Actual container run happens here, a thread will be spawned to asynchronously print flexy container logs

Parameters:

cmd_string (str) – Podman command line along with options

class ocs_ci.deployment.flexy.FlexyVSPHEREUPI

Bases: FlexyBase

A specific implementation of vSphere UPI installation using flexy

ocs_ci.deployment.fusion module

This module contains functions needed to install IBM Fusion

ocs_ci.deployment.fusion.deploy_fusion()

Installs IBM Fusion

ocs_ci.deployment.fusion_aas module

This module contains platform specific methods and classes for deployment on Fusion aaS

class ocs_ci.deployment.fusion_aas.FUSIONAAS

Bases: ROSA

Deployment class for Fusion aaS.

OCPDeployment

alias of FUSIONAASOCP

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_ocs()

Deployment of ODF Managed Service addon on Fusion aaS.

destroy_ocs()

Uninstall ODF Managed Service addon via rosa cli.

class ocs_ci.deployment.fusion_aas.FUSIONAASOCP

Bases: ROSAOCP

Fusion aaS deployment class.

deploy(log_level='')

Deployment specific to OCP cluster on a Fusion aaS platform.

Parameters:

log_level (str) – openshift installer’s log level that is expected from inherited class

ocs_ci.deployment.gcp module

This module contains platform specific methods and classes for deployment on Google Cloud Platform (aka GCP).

class ocs_ci.deployment.gcp.GCPIPI

Bases: GCPBase

A class to handle GCP IPI specific deployment

CUSTOM_STORAGE_CLASS_PATH = '/home/docs/checkouts/readthedocs.org/user_builds/ocs-ci/checkouts/latest/ocs_ci/templates/ocs-deployment/storageclass.gcp.yaml'

filepath of yaml file with custom storage class if necessary

For some platforms, one has to create a custom storage class for OCS to make sure Ceph uses disks of the expected type and parameters (e.g. OCS requires SSD). This variable is either None (meaning that such a custom storage class is not needed), or points to a yaml file with the custom storage class.

Type:

str

OCPDeployment

alias of IPIOCPDeployment

ocs_ci.deployment.ibm module

This module implements the OCS deployment for the IBM Power platform. Base code in deployment.py contains the required changes to keep code duplication to a minimum. Only destroy_ocs is retained here.

class ocs_ci.deployment.ibm.IBMDeployment

Bases: Deployment

Implementation of Deploy for IBM Power architecture

destroy_lso()
destroy_ocs()

Handle OCS destruction. Remove storage classes, PVCs, Storage Cluster, Openshift-storage namespace, LocalVolume, unlabel worker-storage nodes, delete ocs CRDs, etc.

ocs_ci.deployment.ibmcloud module

This module contains platform specific methods and classes for deployment on IBM Cloud Platform.

class ocs_ci.deployment.ibmcloud.IBMCloud

Bases: CloudDeploymentBase

Deployment class for IBM Cloud

DEFAULT_STORAGECLASS = 'ibmc-vpc-block-10iops-tier'
OCPDeployment

alias of IBMCloudOCPDeployment

check_cluster_existence(cluster_name_prefix)

Check cluster existence based on a cluster name prefix.

Parameters:

cluster_name_prefix (str) – name prefix which identifies a cluster

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

class ocs_ci.deployment.ibmcloud.IBMCloudIPI

Bases: CloudDeploymentBase

A class to handle IBM Cloud IPI specific deployment

DEFAULT_STORAGECLASS = 'ibmc-vpc-block-10iops-tier'
OCPDeployment

alias of IPIOCPDeployment

static check_cluster_existence(cluster_name_prefix)

Check cluster existence based on a cluster name prefix.

Parameters:

cluster_name_prefix (str) – name prefix which identifies a cluster

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

delete_leftover_resources(resource_group)

Delete leftovers from IBM Cloud.

Parameters:

resource_group (str) – Resource group in IBM Cloud that contains the cluster resources.

Raises:

LeftoversExistError – In case leftovers remain after the attempt to clean them out.

delete_resource_group(resource_group)

Delete the resource group that contained the cluster assets.

Parameters:

resource_group (str) – Resource group in IBM Cloud that contains the cluster resources.

delete_volumes(resource_group)

Delete the pvc volumes created in IBM Cloud that the openshift installer doesn’t remove.

Parameters:

resource_group (str) – Resource group in IBM Cloud that contains the cluster resources.

deploy_ocp(log_cli_level='DEBUG')

Perform IBMCloudIPI OCP deployment.

Parameters:

log_cli_level (str) – log level for installer (default: DEBUG)

destroy_cluster(log_level='DEBUG')

Destroy the OCP cluster.

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

static export_api_key()

Exports the IBM Cloud API key as an environment variable.

get_load_balancers()

Gets the load balancers

Returns:

load balancers in json format

Return type:

json

get_load_balancers_count()

Gets the number of load balancers

Returns:

number of load balancers

Return type:

int

get_resource_group(return_id=False)

Retrieve and set the resource group being utilized for the cluster assets.

Parameters:

return_id (bool) – If True, it will return ID instead of name.

Returns:

Name or ID of the resource group if found, None in case no RG is found.

Return type:

str

manually_create_iam_for_vpc()

Manually specify the IAM secrets for the cloud provider

ocs_ci.deployment.install_ocp_on_rhel module

This module will install OCP on RHEL nodes

class ocs_ci.deployment.install_ocp_on_rhel.OCPINSTALLRHEL(rhel_worker_nodes)

Bases: object

Class to install OCP on RHEL nodes

create_inventory()

Creates the inventory file

Returns:

Path to inventory file

Return type:

str

create_inventory_for_haproxy()

Creates the inventory file for haproxy

Returns:

Path to inventory file for haproxy

Return type:

str

execute_ansible_playbook()

Run ansible-playbook on pod

prepare_rhel_nodes()

Prepare RHEL nodes for OCP installation

upload_helpers(ocp_repo)

Upload helper files to the pod for OCP installation on RHEL.

Helper files:

- ssh_key pem
- ocp repo
- ocp pem
- kubeconfig
- pull secret
- inventory yaml

Parameters:

ocp_repo (str) – OCP repo to upload

ocs_ci.deployment.multicluster_deployment module

class ocs_ci.deployment.multicluster_deployment.OCPDeployWithACM

Bases: Deployment

When we instantiate this class, the assumption is we already have an OCP cluster with ACM installed and current context is ACM

deploy_cluster(log_cli_level='INFO')

We deploy new OCP clusters using ACM. Note: importing a cluster through ACM has been implemented as part of the Jenkins pipeline.

destroy_cluster(log_cli_level=None)

Teardown OCP clusters deployed through ACM

do_deploy_ocp(log_cli_level='INFO')

This function overrides the parent’s function in order to accommodate ACM-based OCP cluster deployments

do_rdr_acm_ocp_deploy()

Specific to regional DR OCP cluster deployments

post_deploy_ops()

  1. Install ingress certificates on OCP clusters deployed through ACM

  2. Run post_ocp_deploy on OCP clusters

post_destroy_ops(cluster_list)

Post-destroy ops mainly include IP cleanup and DNS cleanup

Parameters:

cluster_list (list[ACMOCPClusterDeploy]) – list of platform specific instances

wait_for_all_cluster_async_destroy(destroy_cluster_list)
wait_for_all_clusters_async()

ocs_ci.deployment.netsplit module

ocs_ci.deployment.netsplit.get_netsplit_mc(tmp_path, master_zones, worker_zones, enable_split=True, x_addr_list=None, arbiter_zone=None, latency=None)

Generate machineconfig with network split scripts and configuration, tailored for the current cluster state.

Parameters:
  • tmp_path (pathlib.Path) – Directory where a temporary yaml file will be created. In test context, use pytest fixture tmp_path.

  • master_zones (list[str]) – zones where master nodes are placed

  • worker_zones (list[str]) – zones where worker nodes are placed

  • x_addr_list (list[str]) – IP addresses of external services (zone x)

  • arbiter_zone (str) – name of arbiter zone if arbiter deployment is used

  • latency (int) – additional latency in milliseconds, which will be introduced among zones

Returns:

mc (dict with MachineConfig) to deploy via deploy_machineconfig()

Raises:
  • UnexpectedDeploymentConfiguration – in case of invalid cluster configuration, which prevents deployment of network split scripts

  • ValueError – in case given zone configuration doesn’t make any sense
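
Example call (zone names and addresses are illustrative; deploy_machineconfig() is referenced by the docstring above and lives outside this module):

import pathlib

from ocs_ci.deployment.netsplit import get_netsplit_mc

mc = get_netsplit_mc(
    pathlib.Path("/tmp/netsplit"),
    master_zones=["a", "b", "c"],
    worker_zones=["a", "b", "c"],
    x_addr_list=["198.51.100.10"],
    latency=100,
)
# mc is a dict with a MachineConfig, to be deployed via deploy_machineconfig()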

ocs_ci.deployment.ocp module

This module provides base class for OCP deployment.

class ocs_ci.deployment.ocp.OCPDeployment

Bases: object

create_config()

Create the OCP deploy config, if something needs to be changed for specific platform you can overload this method in child class.

deploy(log_cli_level='DEBUG')

Implement ocp deploy in specific child class

deploy_prereq()

Perform generic prereq before calling openshift-installer. This method performs all the basic steps necessary before invoking the installer.

destroy(log_level='DEBUG')

Destroy OCP cluster (platform specific)

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

download_installer()

Method to download installer

Returns:

path to the installer

Return type:

str

get_pull_secret()

Load pull secret file

Returns:

content of pull secret

Return type:

str

get_ssh_key()

Loads public ssh to be used for deployment

Returns:

public ssh key or empty string if not found

Return type:

str

test_cluster()

Test if the OCP cluster installed successfully

ocs_ci.deployment.on_prem module

This module contains common code and a base class for any on-premise platform deployment.

class ocs_ci.deployment.on_prem.IPIOCPDeployment

Bases: OCPDeployment

Common implementation of IPI OCP deployments for on-premise platforms

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster for on-prem platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for on-premise IPI here

class ocs_ci.deployment.on_prem.OnPremDeploymentBase

Bases: Deployment

Base class for deployment in on-premise platforms

check_cluster_existence(cluster_name_prefix)

Check cluster existence according to cluster name prefix

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster in on-premise platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

ocs_ci.deployment.openshift_dedicated module

This module contains platform specific methods and classes for deployment on Openshift Dedicated Platform.

class ocs_ci.deployment.openshift_dedicated.OpenshiftDedicated

Bases: CloudDeploymentBase

Deployment class for Openshift Dedicated.

OCPDeployment

alias of OpenshiftDedicatedOCP

check_cluster_existence(cluster_name_prefix)

Check cluster existence based on a cluster name prefix.

Parameters:

cluster_name_prefix (str) – name prefix which identifies a cluster

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

class ocs_ci.deployment.openshift_dedicated.OpenshiftDedicatedOCP

Bases: OCPDeployment

Openshift Dedicated deployment class.

deploy(log_level='')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_level (str) – openshift installer’s log level

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for Openshift Dedicated deployment.

destroy(log_level='DEBUG')

Destroy OCP cluster (platform specific)

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

ocs_ci.deployment.rhv module

This module contains platform specific methods and classes for deployment on Red Hat Virtualization (RHV) platform

class ocs_ci.deployment.rhv.RHVIPI

Bases: RHVBASE

A class to handle RHV IPI specific deployment

OCPDeployment

alias of IPIOCPDeployment

ocs_ci.deployment.rosa module

This module contains platform specific methods and classes for deployment on Openshift Dedicated Platform.

class ocs_ci.deployment.rosa.ROSA

Bases: CloudDeploymentBase

Deployment class for ROSA.

OCPDeployment

alias of ROSAOCP

check_cluster_existence(cluster_name_prefix)

Check cluster existence based on a cluster name prefix. A cluster in Uninstalling phase is not considered to be existing.

Parameters:

cluster_name_prefix (str) – name prefix which identifies a cluster

Returns:

True if a cluster with the same name prefix already exists,

False otherwise

Return type:

bool

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on a cloud platform.

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_ocs()

Deployment of ODF Managed Service addon on ROSA.

destroy_ocs()

Uninstall ODF Managed Service addon via rosa cli.

host_network_update()

Update security group rules for HostNetwork

class ocs_ci.deployment.rosa.ROSAOCP

Bases: OCPDeployment

ROSA deployment class.

cluster_present(cluster_name)

Check if the cluster is present in the cluster list, regardless of its state.

Parameters:

cluster_name (str) – name which identifies the cluster

Returns:

True if a cluster with the given name exists,

False otherwise

Return type:

bool

deploy(log_level='')

Deployment specific to OCP cluster on a ROSA Managed Service platform.

Parameters:

log_level (str) – openshift installer’s log level

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for Openshift Dedicated deployment.

destroy(log_level='DEBUG')

Destroy OCP cluster (platform specific)

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

ocs_ci.deployment.terraform module

This module contains terraform specific methods and classes needed for deployment on vSphere platform

class ocs_ci.deployment.terraform.Terraform(path, bin_path=None, state_file_path=None)

Bases: object

Wrapper for terraform

apply(tfvars, bootstrap_complete=False, module=None, refresh=True)

Apply the changes required to reach the desired state of the configuration

Parameters:
  • tfvars (str) – path to terraform.tfvars file

  • bootstrap_complete (bool) – Removes bootstrap node if True

  • module (str) – Module to apply e.g: constants.COMPUTE_MODULE

  • refresh (bool) – If True, updates the state for each resource prior to planning and applying

change_statefile(module, resource_type, resource_name, instance)

Remove the records from the state file so that terraform will no longer be tracking the corresponding remote objects.

Note: terraform state file should be present in the directory from where the commands are initiated

Parameters:
  • module (str) – Name of the module e.g: compute_vm, module.control_plane_vm etc.

  • resource_type (str) – Resource type e.g: vsphere_virtual_machine, vsphere_compute_cluster etc.

  • resource_name (str) – Name of the resource e.g: vm

  • instance (str) – Name of the instance e.g: compute-0.j-056vu1cs33l-a.qe.rh-ocs.com

Examples:

terraform = Terraform(os.path.join(upi_repo_path, "upi/vsphere/"))
terraform.change_statefile(
    module="compute_vm", resource_type="vsphere_virtual_machine",
    resource_name="vm", instance="compute-0.j-056vu1cs33l-a.qe.rh-ocs.com"
)

destroy(tfvars, refresh=True)

Destroys the cluster

Parameters:

tfvars (str) – path to terraform.tfvars file

destroy_module(tfvars, module)

Destroys the particular module/node

Parameters:
  • tfvars (str) – path to terraform.tfvars file

  • module (str) – Module to destroy e.g: constants.BOOTSTRAP_MODULE

static get_terraform_version()
initialize(upgrade=False)

Initialize a working directory containing Terraform configuration files

Parameters:

upgrade (bool) – True in case installing modules needs upgrade from previously-downloaded objects, False otherwise

output(tfstate, module, json_format=True)

Extracts the value of an output variable from the state file

Parameters:
  • tfstate (str) – path to terraform.tfstate file

  • module (str) – module to extract

  • json_format (bool) – True to format output as json

Returns:

output from tfstate

Return type:

str
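
A typical lifecycle with this wrapper might look as follows (paths are illustrative):

from ocs_ci.deployment.terraform import Terraform

terraform = Terraform("/path/to/upi/vsphere")
terraform.initialize()
terraform.apply(tfvars="/path/to/terraform.tfvars")
# ... once bootstrap is complete, remove the bootstrap node:
terraform.apply(tfvars="/path/to/terraform.tfvars", bootstrap_complete=True)
# teardown
terraform.destroy(tfvars="/path/to/terraform.tfvars")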

ocs_ci.deployment.vmware module

This module contains platform specific methods and classes for deployment on vSphere platform

class ocs_ci.deployment.vmware.VSPHEREAI

Bases: VSPHEREBASE

A class to handle vSphere Assisted Installer specific deployment

class OCPDeployment

Bases: OCPDeployment

assign_api_ingress_ips()

Request API and Ingress IPs from IPAM server

create_config()

Creates the OCP deploy config for the vSphere - not required for Assisted installer deployment

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – not used for Assisted Installer deployment

deploy_prereq()

Pre-Requisites for vSphere Assisted installer deployment

generate_terraform_vars()

Generates the terraform.tfvars.json file

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on vSphere platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster specific to vSphere Assisted installer

Parameters:

log_level (str) – this parameter is not used here

class ocs_ci.deployment.vmware.VSPHEREIPI

Bases: VSPHEREBASE

A class to handle vSphere IPI specific deployment

class OCPDeployment

Bases: OCPDeployment

create_config()

Creates the OCP deploy config for the vSphere

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_prereq()

Overriding deploy_prereq from parent. Perform all necessary prerequisites for VSPHEREIPI here.

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster specific to vSphere IPI

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

post_destroy_checks(template_folder)

Post destroy checks on vSphere IPI cluster

Parameters:

template_folder (str) – template folder for the cluster

class ocs_ci.deployment.vmware.VSPHEREUPI

Bases: VSPHEREBASE

A class to handle vSphere UPI specific deployment

class OCPDeployment

Bases: OCPDeployment

change_ignition_ip_and_hostname(ip_address)

Embed the IP address and hostname (sno-edge-0) into iso.ign

Parameters:

ip_address (str) – IP address we got from IPAM to embed inside the iso

configure_storage_for_image_registry(kubeconfig)

Configures storage for the image registry

create_config()

Creates the OCP deploy config for the vSphere

create_ignitions()

Creates the ignition files

create_sno_iso_and_upload()

Create the iso file with values for SNO deployment

deploy(log_cli_level='DEBUG')

Deployment specific to OCP cluster on this platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

deploy_prereq()

Pre-Requisites for vSphere UPI Deployment

generate_manifests()

Generates manifest files

wait_for_sno_second_boot_change_ip_and_hostname(ip_address)

After the second boot, OCP boots with the right IP address, but after a while the IP is changed to DHCP. We monitor the IP address and when it changes, we ssh to the node and change the IP address and hostname back.

Parameters:

ip_address (str) – The IP address given from the IPAM server

Raises:

ConnectivityFail – In case the ip_address doesn’t reply to ping after the change, we raise.

deploy_ocp(log_cli_level='DEBUG')

Deployment specific to OCP cluster on vSphere platform

Parameters:

log_cli_level (str) – openshift installer’s log level (default: “DEBUG”)

destroy_cluster(log_level='DEBUG')

Destroy OCP cluster specific to vSphere UPI

Parameters:

log_level (str) – log level openshift-installer (default: DEBUG)

destroy_scaleup_nodes(scale_up_terraform_data_dir, scale_up_terraform_var)

Destroy the scale-up nodes

Parameters:
  • scale_up_terraform_data_dir (str) – Path to scale-up terraform data directory

  • scale_up_terraform_var (str) – Path to scale-up terraform.tfvars file

ocs_ci.deployment.zones module

ocs_ci.deployment.zones.are_zone_labels_missing()

Check that there are no nodes with zone labels.

Returns:

True if all nodes are missing the zone label, False otherwise.

Return type:

Bool

ocs_ci.deployment.zones.are_zone_labels_present()

Check that there are no nodes without zone labels.

Returns:

True if all nodes have a zone label, False otherwise.

Return type:

Bool

ocs_ci.deployment.zones.assign_dummy_zones(zones, nodes, overwrite=False)

Assign node labels to given nodes based on given zone lists. Zones are assigned so that there is the same number of nodes in each zone.

Parameters:
  • zones (list[str]) – list of k8s zone names

  • nodes (list[str]) – list of node names to label

  • overwrite (bool) – if True, labeling will not fail on already defined zone labels (False by default)

Raises:

ValueError – when number of nodes is not divisible by number of zones
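
Example (node names are illustrative): six nodes over two zones yields three nodes per zone; five nodes over two zones would raise ValueError:

from ocs_ci.deployment.zones import assign_dummy_zones

assign_dummy_zones(
    zones=["zone-a", "zone-b"],
    nodes=["node-1", "node-2", "node-3", "node-4", "node-5", "node-6"],
)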

ocs_ci.deployment.zones.create_dummy_zone_labels()

Create dummy zone labels on cluster nodes: try to label all master and worker nodes based on values of worker_availability_zones and master_availability_zones options, but only if there are no zone labels already defined.

Raises:

UnexpectedDeploymentConfiguration – when either cluster or ocs-ci config file are in conflict with dummy zone labels.

Module contents