ocs_ci.ocs package
Subpackages
- ocs_ci.ocs.acm package
- ocs_ci.ocs.must_gather package
- Submodules
- ocs_ci.ocs.must_gather.const_must_gather module
- ocs_ci.ocs.must_gather.must_gather module
MustGather
MustGather.check_pod_name_pattern()
MustGather.cleanup()
MustGather.collect_must_gather()
MustGather.compare_running_pods()
MustGather.log_type
MustGather.print_invalid_files()
MustGather.print_must_gather_debug()
MustGather.search_file_path()
MustGather.validate_expected_files()
MustGather.validate_file_size()
MustGather.validate_must_gather()
MustGather.verify_ceph_file_content()
MustGather.verify_noobaa_diagnostics()
- Module contents
- ocs_ci.ocs.resources package
- Submodules
- ocs_ci.ocs.resources.backingstore module
- ocs_ci.ocs.resources.bucket_policy module
- ocs_ci.ocs.resources.bucketclass module
- ocs_ci.ocs.resources.cache_drop module
- ocs_ci.ocs.resources.catalog_source module
- ocs_ci.ocs.resources.cloud_manager module
- ocs_ci.ocs.resources.cloud_uls module
- ocs_ci.ocs.resources.csv module
- ocs_ci.ocs.resources.deployment module
- ocs_ci.ocs.resources.drpc module
- ocs_ci.ocs.resources.fips module
- ocs_ci.ocs.resources.install_plan module
- ocs_ci.ocs.resources.job module
- ocs_ci.ocs.resources.machineconfig module
- ocs_ci.ocs.resources.mcg module
MCG
MCG.access_key
MCG.access_key_id
MCG.check_backingstore_state()
MCG.check_data_reduction()
MCG.check_if_mirroring_is_done()
MCG.check_ns_resource_validity()
MCG.cli_create_bucketclass()
MCG.cli_get_all_bucket_names()
MCG.cli_verify_bucket_exists()
MCG.create_connection()
MCG.create_namespace_resource()
MCG.data_to_mask
MCG.delete_ns_connection()
MCG.delete_ns_resource()
MCG.exec_mcg_cmd()
MCG.get_admin_default_resource_name()
MCG.get_bucket_info()
MCG.get_default_bc_backingstore_name()
MCG.get_mcg_cli_version()
MCG.get_noobaa_admin_credentials_from_secret()
MCG.mgmt_endpoint
MCG.namespace
MCG.noobaa_password
MCG.noobaa_token
MCG.noobaa_user
MCG.oc_create_bucketclass()
MCG.oc_verify_bucket_exists()
MCG.ocp_resource
MCG.read_system()
MCG.region
MCG.request_aws_credentials()
MCG.reset_admin_pw()
MCG.reset_core_pod()
MCG.retrieve_nb_token()
MCG.s3_endpoint
MCG.s3_get_all_bucket_names()
MCG.s3_get_all_buckets()
MCG.s3_internal_endpoint
MCG.s3_list_all_objects_in_bucket()
MCG.s3_resource
MCG.s3_verify_bucket_exists()
MCG.send_rpc_query()
MCG.status
MCG.update_s3_creds()
- ocs_ci.ocs.resources.mcg_lifecycle_policies module
- ocs_ci.ocs.resources.mcg_params module
- ocs_ci.ocs.resources.mcg_replication_policy module
- ocs_ci.ocs.resources.mockup_bucket_logger module
- ocs_ci.ocs.resources.namespacestore module
- ocs_ci.ocs.resources.objectbucket module
- ocs_ci.ocs.resources.objectconfigfile module
- ocs_ci.ocs.resources.ocs module
- ocs_ci.ocs.resources.osd_resize module
- ocs_ci.ocs.resources.packagemanifest module
- ocs_ci.ocs.resources.pod module
Pod
Pod.add_role()
Pod.copy_from_pod_oc_exec()
Pod.copy_to_pod_cat()
Pod.copy_to_pod_rsync()
Pod.copy_to_server()
Pod.exec_ceph_cmd()
Pod.exec_cmd_on_node()
Pod.exec_cmd_on_pod()
Pod.exec_s3_cmd_on_pod()
Pod.exec_sh_cmd_on_pod()
Pod.fillup_fs()
Pod.get_fio_results()
Pod.get_labels()
Pod.get_memory()
Pod.get_node()
Pod.get_storage_path()
Pod.install_packages()
Pod.labels
Pod.name
Pod.namespace
Pod.restart_count
Pod.roles
Pod.run_git_clone()
Pod.run_io()
Pod.wait_for_pod_delete()
Pod.workload_setup()
cal_md5sum()
calculate_md5sum_of_pod_files()
check_ceph_cmd_execute_successfully()
check_file_existence()
check_pods_after_node_replacement()
check_pods_in_running_state()
check_pods_in_statuses()
check_safe_to_destroy_status()
check_toleration_on_pods()
delete_all_osd_removal_jobs()
delete_deploymentconfig_pods()
delete_osd_removal_job()
delete_pods()
download_file_from_pod()
exit_osd_maintenance_mode()
get_admin_key_from_ceph_tools()
get_alertmanager_managed_ocs_alertmanager_pods()
get_all_pods()
get_ceph_daemon_id()
get_ceph_tools_pod()
get_cephfsplugin_provisioner_pods()
get_containers_names_by_pod()
get_crashcollector_pods()
get_csi_provisioner_pod()
get_csi_snapshoter_pod()
get_debug_pods()
get_deployment_name()
get_deployments_having_label()
get_device_path()
get_file_path()
get_fio_rw_iops()
get_lvm_operator_pod()
get_lvm_vg_manager_pod()
get_mds_pods()
get_mgr_pods()
get_mon_label()
get_mon_pod_by_pvc_name()
get_mon_pod_id()
get_mon_pods()
get_noobaa_core_pod()
get_noobaa_db_pod()
get_noobaa_endpoint_pods()
get_noobaa_operator_pod()
get_noobaa_pods()
get_not_running_pods()
get_ocs_operator_pod()
get_ocs_osd_controller_manager_pod()
get_ocs_provider_server_pod()
get_odf_operator_controller_manager()
get_operator_pods()
get_osd_deployments()
get_osd_pod_id()
get_osd_pods()
get_osd_pods_having_ids()
get_osd_prepare_pods()
get_osd_removal_pod_name()
get_plugin_pods()
get_plugin_provisioner_leader()
get_pod_ceph_daemon_type()
get_pod_count()
get_pod_ip()
get_pod_logs()
get_pod_node()
get_pod_obj()
get_pod_objs()
get_pod_restarts_count()
get_pods_having_label()
get_pods_in_statuses()
get_prometheus_managed_ocs_prometheus_pod()
get_prometheus_operator_pod()
get_pvc_name()
get_rbdfsplugin_provisioner_pods()
get_rgw_pods()
get_rook_ceph_pod_names()
get_running_state_pods()
get_topolvm_controller_pod()
get_topolvm_node_pod()
get_used_space_on_mount_point()
list_ceph_images()
list_of_nodes_running_pods()
pod_resource_utilization_raw_output_from_adm_top()
restart_pods_having_label()
restart_pods_in_statuses()
run_io_and_verify_mount_point()
run_io_in_bg()
run_osd_removal_job()
search_pattern_in_pod_logs()
set_osd_maintenance_mode()
upload()
validate_pods_are_respinned_and_running_state()
verify_data_integrity()
verify_data_integrity_after_expansion_for_block_pvc()
verify_data_integrity_for_multi_pvc_objs()
verify_md5sum_on_pod_files()
verify_node_name()
verify_osd_removal_job_completed_successfully()
verify_pods_upgraded()
wait_for_ceph_cmd_execute_successfully()
wait_for_change_in_pods_statuses()
wait_for_dc_app_pods_to_reach_running_state()
wait_for_new_osd_pods_to_come_up()
wait_for_noobaa_pods_running()
wait_for_osd_pods_having_ids()
wait_for_pods_by_label_count()
wait_for_pods_deletion()
wait_for_pods_to_be_in_statuses()
wait_for_pods_to_be_running()
wait_for_storage_pods()
- ocs_ci.ocs.resources.pv module
- ocs_ci.ocs.resources.pvc module
PVC
PVC.backed_pv
PVC.backed_pv_obj
PVC.backed_sc
PVC.create_reclaim_space_cronjob()
PVC.create_reclaim_space_job()
PVC.create_snapshot()
PVC.get_attached_pods()
PVC.get_pv_volume_handle_name
PVC.get_pvc_access_mode
PVC.get_pvc_vol_mode
PVC.get_rbd_image_name
PVC.image_uuid
PVC.provisioner
PVC.reclaim_policy
PVC.resize_pvc()
PVC.size
PVC.status
create_pvc_clone()
create_pvc_snapshot()
create_restore_pvc()
delete_pvcs()
get_all_pvc_objs()
get_all_pvcs()
get_all_pvcs_in_storageclass()
get_deviceset_pvcs()
get_deviceset_pvs()
get_pvc_objs()
get_pvc_size()
scale_down_pods_and_remove_pvcs()
- ocs_ci.ocs.resources.rgw module
- ocs_ci.ocs.resources.storage_cluster module
StorageCluster
add_capacity()
add_capacity_lso()
basic_verification()
ceph_config_dump()
ceph_mon_dump()
check_custom_storageclass_presence()
get_all_storageclass()
get_ceph_clients()
get_consumer_storage_provider_endpoint()
get_device_class()
get_deviceset_count()
get_in_transit_encryption_config_state()
get_osd_count()
get_osd_replica_count()
get_osd_size()
get_rook_ceph_mon_per_endpoint_ip()
get_storage_cluster()
get_storage_cluster_state()
get_storage_size()
get_storageclass_names_from_storagecluster_spec()
in_transit_encryption_verification()
mcg_only_install_verification()
ocs_install_verification()
osd_encryption_verification()
patch_storage_cluster_for_custom_storage_class()
resize_osd()
set_deviceset_count()
set_in_transit_encryption()
setup_ceph_debug()
validate_serviceexport()
verify_backing_store()
verify_consumer_resources()
verify_consumer_storagecluster()
verify_device_class_in_osd_tree()
verify_kms_ca_only()
verify_managed_secrets()
verify_managed_service_networkpolicy()
verify_managed_service_resources()
verify_managedocs_security()
verify_max_openshift_version()
verify_mcg_only_pods()
verify_multus_network()
verify_networks_in_ceph_pod()
verify_noobaa_endpoint_count()
verify_ocs_csv()
verify_osd_tree_schema()
verify_provider_resources()
verify_provider_storagecluster()
verify_sc_images()
verify_storage_cluster()
verify_storage_cluster_images()
verify_storage_cluster_version()
verify_storage_device_class()
verify_storage_system()
wait_for_consumer_rook_ceph_mon_endpoints_in_provider_wnodes()
wait_for_consumer_storage_provider_endpoint_in_provider_wnodes()
- ocs_ci.ocs.resources.storageclassclaim module
- ocs_ci.ocs.resources.storageconsumer module
- ocs_ci.ocs.resources.stretchcluster module
StretchCluster
StretchCluster.cephfs_failure_checks()
StretchCluster.cephfs_log_file_map
StretchCluster.cephfs_logreader_pods
StretchCluster.cephfs_logwriter_pods
StretchCluster.cephfs_old_log
StretchCluster.check_ceph_accessibility()
StretchCluster.check_for_data_corruption()
StretchCluster.check_for_data_loss()
StretchCluster.check_for_read_pause()
StretchCluster.check_for_write_pause()
StretchCluster.get_logfile_map()
StretchCluster.get_logwriter_reader_pods()
StretchCluster.get_nodes_in_zone()
StretchCluster.get_ocs_nodes_in_zone()
StretchCluster.get_out_of_quorum_nodes()
StretchCluster.get_workload_pvc_obj()
StretchCluster.post_failure_checks()
StretchCluster.rbd_failure_checks()
StretchCluster.rbd_log_file_map
StretchCluster.rbd_logwriter_pods
StretchCluster.rbd_old_log
StretchCluster.reset_conn_score()
StretchCluster.validate_conn_score()
- ocs_ci.ocs.resources.test_packagemanifest module
- ocs_ci.ocs.resources.topology module
- Module contents
- ocs_ci.ocs.tests package
- Submodules
- ocs_ci.ocs.tests.test_defaults module
- ocs_ci.ocs.tests.test_fiojob module
ceph_df_json_output_clusterempty()
fio_json_output()
fio_json_output_broken()
fio_json_output_with_error()
test_fio_to_dict_empty()
test_fio_to_dict_output_broken()
test_fio_to_dict_with_error()
test_fio_to_dict_without_error()
test_get_sc_name_default()
test_get_storageutilization_size_empty100percent()
test_get_timeout_basic()
- Module contents
- ocs_ci.ocs.ui package
- Subpackages
- ocs_ci.ocs.ui.page_objects package
- Submodules
- ocs_ci.ocs.ui.page_objects.alerting module
- ocs_ci.ocs.ui.page_objects.backing_store_tab module
- ocs_ci.ocs.ui.page_objects.block_and_file module
- ocs_ci.ocs.ui.page_objects.block_pools module
- ocs_ci.ocs.ui.page_objects.bucket_class_tab module
- ocs_ci.ocs.ui.page_objects.data_foundation_tabs_common module
- ocs_ci.ocs.ui.page_objects.edit_label_form module
- ocs_ci.ocs.ui.page_objects.namespace_store_tab module
- ocs_ci.ocs.ui.page_objects.object_bucket_claims_tab module
- ocs_ci.ocs.ui.page_objects.object_buckets_tab module
- ocs_ci.ocs.ui.page_objects.object_storage module
- ocs_ci.ocs.ui.page_objects.odf_topology_tab module
- ocs_ci.ocs.ui.page_objects.overview_tab module
- ocs_ci.ocs.ui.page_objects.page_navigator module
- ocs_ci.ocs.ui.page_objects.resource_list module
- ocs_ci.ocs.ui.page_objects.resource_page module
- ocs_ci.ocs.ui.page_objects.searchbar module
- ocs_ci.ocs.ui.page_objects.storage_system_details module
- ocs_ci.ocs.ui.page_objects.storage_system_tab module
- Module contents
- Submodules
- ocs_ci.ocs.ui.acm_ui module
ACMOCPClusterDeployment
ACMOCPClusterDeployment.acm_bailout_if_failed()
ACMOCPClusterDeployment.acm_cluster_status_creating()
ACMOCPClusterDeployment.acm_cluster_status_failed()
ACMOCPClusterDeployment.acm_cluster_status_ready()
ACMOCPClusterDeployment.click_next_button()
ACMOCPClusterDeployment.click_platform_and_credentials()
ACMOCPClusterDeployment.create_cluster()
ACMOCPClusterDeployment.create_cluster_prereq()
ACMOCPClusterDeployment.destroy_cluster()
ACMOCPClusterDeployment.download_cluster_conf_files()
ACMOCPClusterDeployment.download_kubeconfig()
ACMOCPClusterDeployment.fill_multiple_textbox()
ACMOCPClusterDeployment.get_deployment_status()
ACMOCPClusterDeployment.get_destroy_status()
ACMOCPClusterDeployment.goto_cluster_details_page()
ACMOCPClusterDeployment.navigate_create_clusters_page()
ACMOCPClusterDeployment.post_destroy_ops()
ACMOCPClusterDeployment.wait_for_cluster_create()
ACMOCPDeploymentFactory
ACMOCPPlatformVsphereIPI
ACMOCPPlatformVsphereIPI.add_different_pod_network()
ACMOCPPlatformVsphereIPI.create_cluster()
ACMOCPPlatformVsphereIPI.create_cluster_prereq()
ACMOCPPlatformVsphereIPI.fill_cluster_details_page()
ACMOCPPlatformVsphereIPI.fill_network_info()
ACMOCPPlatformVsphereIPI.get_ocp_release_img()
ACMOCPPlatformVsphereIPI.post_destroy_ops()
AcmPageNavigator
AcmPageNavigator.navigate_applications_page()
AcmPageNavigator.navigate_automation_page()
AcmPageNavigator.navigate_bare_metal_assets_page()
AcmPageNavigator.navigate_clusters_page()
AcmPageNavigator.navigate_credentials_page()
AcmPageNavigator.navigate_data_services()
AcmPageNavigator.navigate_from_ocp_to_acm_cluster_page()
AcmPageNavigator.navigate_governance_page()
AcmPageNavigator.navigate_infrastructure_env_page()
AcmPageNavigator.navigate_overview_page()
AcmPageNavigator.navigate_welcome_page()
- ocs_ci.ocs.ui.add_replace_device_ui module
- ocs_ci.ocs.ui.base_ui module
BaseUI
BaseUI.check_element_presence()
BaseUI.check_element_text()
BaseUI.check_number_occurrences_text()
BaseUI.choose_expanded_mode()
BaseUI.clear_input_gradually()
BaseUI.clear_with_ctrl_a_del()
BaseUI.copy_dom()
BaseUI.deep_get()
BaseUI.do_clear()
BaseUI.do_click()
BaseUI.do_click_by_id()
BaseUI.do_click_by_xpath()
BaseUI.do_send_keys()
BaseUI.find_an_element_by_xpath()
BaseUI.get_checkbox_status()
BaseUI.get_element_attribute()
BaseUI.get_element_text()
BaseUI.get_elements()
BaseUI.is_expanded()
BaseUI.navigate_backward()
BaseUI.page_has_loaded()
BaseUI.refresh_page()
BaseUI.scroll_into_view()
BaseUI.select_checkbox_status()
BaseUI.take_screenshot()
BaseUI.wait_for_element_attribute()
BaseUI.wait_for_element_to_be_present()
BaseUI.wait_for_element_to_be_visible()
BaseUI.wait_for_endswith_url()
BaseUI.wait_until_expected_text_is_found()
SeleniumDriver
close_browser()
copy_dom()
garbage_collector_webdriver()
login_ui()
navigate_to_all_clusters()
navigate_to_local_cluster()
proceed_to_login_console()
screenshot_dom_location()
take_screenshot()
wait_for_element_to_be_clickable()
wait_for_element_to_be_visible()
- ocs_ci.ocs.ui.block_pool module
BlockPoolUI
BlockPoolUI.check_pool_avail_capacity()
BlockPoolUI.check_pool_compression_eligibility()
BlockPoolUI.check_pool_compression_ratio()
BlockPoolUI.check_pool_compression_savings()
BlockPoolUI.check_pool_compression_status()
BlockPoolUI.check_pool_existence()
BlockPoolUI.check_pool_replicas()
BlockPoolUI.check_pool_status()
BlockPoolUI.check_pool_used_capacity()
BlockPoolUI.check_pool_volume_type()
BlockPoolUI.check_storage_class_attached()
BlockPoolUI.create_pool()
BlockPoolUI.delete_pool()
BlockPoolUI.edit_pool_parameters()
BlockPoolUI.reach_pool_limit()
- ocs_ci.ocs.ui.deployment_ui module
DeploymentUI
DeploymentUI.configure_data_protection()
DeploymentUI.configure_encryption()
DeploymentUI.configure_in_transit_encryption()
DeploymentUI.configure_osd_size()
DeploymentUI.configure_performance()
DeploymentUI.create_storage_cluster()
DeploymentUI.enable_taint_nodes()
DeploymentUI.install_internal_cluster()
DeploymentUI.install_local_storage_operator()
DeploymentUI.install_lso_cluster()
DeploymentUI.install_mcg_only_cluster()
DeploymentUI.install_ocs_operator()
DeploymentUI.install_ocs_ui()
DeploymentUI.install_storage_cluster()
DeploymentUI.refresh_popup()
DeploymentUI.search_operator_installed_operators_page()
DeploymentUI.verify_disks_lso_attached()
DeploymentUI.verify_operator_succeeded()
- ocs_ci.ocs.ui.helpers_ui module
- ocs_ci.ocs.ui.mcg_ui module
BucketClassUI
BucketClassUI.create_namespace_bucketclass_ui()
BucketClassUI.create_standard_bucketclass_ui()
BucketClassUI.delete_bucketclass_ui()
BucketClassUI.set_cache_namespacestore_policy()
BucketClassUI.set_multi_namespacestore_policy()
BucketClassUI.set_namespacestore_policy
BucketClassUI.set_single_namespacestore_policy()
NamespaceStoreUI
- ocs_ci.ocs.ui.odf_topology module
OdfTopologyHelper
OdfTopologyHelper.busy_box_depl
OdfTopologyHelper.delete_busybox()
OdfTopologyHelper.deploy_busybox()
OdfTopologyHelper.file_stream
OdfTopologyHelper.get_busybox_depl_name()
OdfTopologyHelper.get_deployment_names_from_node_df_cli()
OdfTopologyHelper.get_deployment_obj_from_node_df_cli()
OdfTopologyHelper.get_node_names_df_cli()
OdfTopologyHelper.get_pod_obj_from_node_and_depl_df_cli()
OdfTopologyHelper.get_pod_status_df_cli()
OdfTopologyHelper.get_topology_cli_str()
OdfTopologyHelper.read_topology_cli_all()
OdfTopologyHelper.reload_depl_data_obj_from_node_df_cli()
OdfTopologyHelper.reload_pod_obj_from_node_and_depl_df_cli()
OdfTopologyHelper.set_busybox_depl_name()
OdfTopologyHelper.set_resource_obj_of_node_df_cli()
OdfTopologyHelper.topology_cli_df
OdfTopologyHelper.wait_busy_box_scaled_up()
TopologyCliStr
TopologyUiStr
get_creation_ts_with_offset()
get_deployment_details_cli()
get_node_details_cli()
get_node_names_of_the_pods_by_pattern()
- ocs_ci.ocs.ui.pvc_ui module
- ocs_ci.ocs.ui.storageclass module
- ocs_ci.ocs.ui.validation_ui module
ValidationUI
ValidationUI.check_capacity_breakdown()
ValidationUI.odf_console_plugin_check()
ValidationUI.odf_overview_ui()
ValidationUI.odf_storagesystems_ui()
ValidationUI.refresh_web_console()
ValidationUI.validate_storage_cluster_ui()
ValidationUI.validate_unprivileged_access()
ValidationUI.verification_ui()
ValidationUI.verify_object_service_page()
ValidationUI.verify_ocs_operator_tabs()
ValidationUI.verify_odf_without_ocs_in_installed_operator()
ValidationUI.verify_page_contain_strings()
ValidationUI.verify_persistent_storage_page()
- ocs_ci.ocs.ui.views module
- ocs_ci.ocs.ui.workload_ui module
PvcCapacityDeployment
PvcCapacityDeploymentList
PvcCapacityDeploymentList.add_instance()
PvcCapacityDeploymentList.delete_deployment()
PvcCapacityDeploymentList.delete_pvc()
PvcCapacityDeploymentList.get_deployment_names_list()
PvcCapacityDeploymentList.get_pvc_capacity_deployment()
PvcCapacityDeploymentList.get_pvc_capacity_deployment_by_deployment()
PvcCapacityDeploymentList.get_pvc_names_list()
PvcCapacityDeploymentList.set_pvc_capacity_deployment()
SingletonMeta
WorkloadUi
compare_mem_usage()
divide_capacity()
fill_attached_pv()
wait_for_container_status_ready()
- Module contents
Submodules
ocs_ci.ocs.amq module
AMQ class to run AMQ-specific tests
- class ocs_ci.ocs.amq.AMQ(**kwargs)
Bases:
object
Workload operation using AMQ
- cleanup(kafka_namespace='myproject', tiller_namespace='tiller')
Cleanup function: deletes the AMQ cluster operator first, then the AMQ connector, persistent cluster, and bridge, and finally deletes the created namespaces
- Parameters:
kafka_namespace (str) – Created namespace for amq
tiller_namespace (str) – Created namespace for benchmark
- create_consumer_pod(num_of_pods=1, value='10000')
Creates consumer pods
- Parameters:
num_of_pods (int) – Number of consumer pods to be created
value (str) – Number of messages to be received
Returns: consumer pod object
- create_kafka_topic(name='my-topic', partitions=1, replicas=1)
Creates kafka topic
- Parameters:
name (str) – Name of the kafka topic
partitions (int) – Number of partitions
replicas (int) – Number of replicas
Return: kafka_topic object
- create_kafka_user(name='my-user')
Creates kafka user
- Parameters:
name (str) – Name of the kafka user
Return: kafka_user object
- create_kafkadrop(wait=True)
Create kafkadrop pod, service and routes
- Parameters:
wait (bool) – If true waits till kafkadrop pod running
- Returns:
Contains objects of kafkadrop pod, service and route
- Return type:
tuple
- create_messaging_on_amq(topic_name='my-topic', user_name='my-user', partitions=1, replicas=1, num_of_producer_pods=1, num_of_consumer_pods=1, value='10000')
Creates workload using Open Messaging tool on amq cluster
- Parameters:
topic_name (str) – Name of the topic to be created
user_name (str) – Name of the user to be created
partitions (int) – Number of partitions of topic
replicas (int) – Number of replicas of topic
num_of_producer_pods (int) – Number of producer pods to be created
num_of_consumer_pods (int) – Number of consumer pods to be created
value (str) – Number of messages to be sent and received
- create_namespace(namespace)
Creates a namespace for AMQ
- Parameters:
namespace (str) – Namespace for amq pods
- create_producer_pod(num_of_pods=1, value='10000')
Creates producer pods
- Parameters:
num_of_pods (int) – Number of producer pods to be created
value (str) – Number of messages to be sent
Returns: producer pod object
- export_amq_output_to_gsheet(amq_output, sheet_name, sheet_index)
Collect amq data to google spreadsheet
- Parameters:
amq_output (dict) – amq output in dict
sheet_name (str) – Name of the sheet
sheet_index (int) – Index of sheet
- is_amq_pod_running(pod_pattern, expected_pods, namespace='myproject')
Checks whether pods matching pod_pattern exist and whether they are in Running state
- Parameters:
pod_pattern (str) – the pattern for pod
expected_pods (int) – Number of pods
namespace (str) – Namespace for amq pods
- Returns:
True if the matching pods are found and running
- Return type:
bool
- run_amq_benchmark(benchmark_pod_name='benchmark', kafka_namespace='myproject', tiller_namespace='tiller', num_of_clients=8, worker=None, timeout=1800, amq_workload_yaml=None, run_in_bg=False)
Run benchmark pod and get the results
- Parameters:
benchmark_pod_name (str) – Name of the benchmark pod
kafka_namespace (str) – Namespace where kafka cluster created
tiller_namespace (str) – Namespace where tiller pod needs to be created
num_of_clients (int) – Number of clients to be created
worker (str) – Comma-separated list of benchmark worker endpoints to load, e.g. http://benchmark-worker-0.benchmark-worker:8080,http://benchmark-worker-1.benchmark-worker:8080
timeout (int) – Time to complete the run
amq_workload_yaml (dict) – Contains AMQ workload information with the following keys:
name (str): Name of the workload
topics (int): Number of topics created
partitions_per_topic (int): Number of partitions per topic
message_size (int): Message size
payload_file (str): Load to run on the workload
subscriptions_per_topic (int): Number of subscriptions per topic
consumer_per_subscription (int): Number of consumers per subscription
producers_per_topic (int): Number of producers per topic
producer_rate (int): Producer rate
consumer_backlog_sizegb (int): Consumer backlog size in GB
test_duration_minutes (int): Time to run the workload
run_in_bg (bool) – On true the workload will run in background
- Returns:
- Returns benchmark run information if run_in_bg is False.
Otherwise a thread of the amq workload execution
- Return type:
result (str/Thread obj)
- run_amq_workload(command, benchmark_pod_name, tiller_namespace, timeout)
Runs the AMQ workload in the background
- Parameters:
command (str) – Command to run on pod
benchmark_pod_name (str) – Pod name
tiller_namespace (str) – Namespace of pod
timeout (int) – Time to complete the run
- Returns:
Returns benchmark run information
- Return type:
result (str)
- run_in_bg(namespace='myproject', value='10000', since_time=1800)
Validates that messages are produced and consumed in the background
- Parameters:
namespace (str) – Namespace of the pod
value (str) – Number of messages to be sent and received
since_time (int) – Window in seconds within which the messages should be sent and received
- setup_amq_cluster(sc_name, namespace='myproject', size=100, replicas=3)
Creates amq cluster with persistent storage.
- Parameters:
sc_name (str) – Name of sc
namespace (str) – Namespace for amq cluster
size (int) – Size of the storage
replicas (int) – Number of kafka and zookeeper pods to be created
- setup_amq_cluster_operator(namespace='myproject')
Sets up the AMQ cluster operator. The deployment file is pulled from GitHub, and the function verifies that the cluster-operator pod is running
- Parameters:
namespace (str) – Namespace for AMQ pods
- setup_amq_kafka_bridge()
Sets up the AMQ Kafka bridge. The YAML file is pulled from GitHub; the function creates a resource of kind: KafkaBridge and verifies that the pod is running
Return: kafka_bridge object
- setup_amq_kafka_connect()
Sets up amq-kafka-connect. The YAML file is pulled from GitHub; the function creates a resource of kind: KafkaConnect and verifies that its status is running
Returns: kafka_connect object
- setup_amq_kafka_persistent(sc_name, size=100, replicas=3)
Sets up amq-kafka-persistent. The YAML file is pulled from GitHub; the function creates a resource of kind: Kafka and verifies that its status is running
- Parameters:
sc_name (str) – Name of sc
size (int) – Size of the storage in Gi
replicas (int) – Number of kafka and zookeeper pods to be created
Return: kafka_persistent object
- validate_amq_benchmark(result, amq_workload_yaml, benchmark_pod_name='benchmark')
Validates amq benchmark run
- Parameters:
result (str) – Benchmark run information
amq_workload_yaml (dict) – AMQ workload information
benchmark_pod_name (str) – Name of the benchmark pod
- Returns:
Returns the dict output on success, otherwise None
- Return type:
res_dict (dict)
- validate_messages_are_consumed(namespace='myproject', value='10000', since_time=1800)
Validates if all messages are received in consumer pod
- Parameters:
namespace (str) – Namespace of the pod
value (str) – Number of messages expected to be received
since_time (int) – Window in seconds within which the messages should be received
Raises an exception on failure
- validate_messages_are_produced(namespace='myproject', value='10000', since_time=1800)
Validates if all messages are sent in producer pod
- Parameters:
namespace (str) – Namespace of the pod
value (str) – Number of messages expected to be sent
since_time (int) – Window in seconds within which the messages should be sent
Raises an exception on failure
- validate_msg(pod, namespace='myproject', value='10000', since_time=1800)
Validate if messages are sent or received
- Parameters:
pod (str) – Name of the pod
namespace (str) – Namespace of the pod
value (str) – Number of messages expected to be sent or received
since_time (int) – Window in seconds within which the messages should be sent or received
- Returns:
True if all messages are sent/received
- Return type:
bool
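The validate_messages_are_* helpers above boil down to reading the pod log and checking the reported message count against the expected value. A minimal, self-contained sketch of that pattern (the log line format shown is an assumption, not the actual AMQ producer/consumer output):

```python
import re

def messages_complete(pod_log, expected="10000"):
    """Return True if the log reports at least `expected` messages.

    Assumes log lines like 'Messages sent: 10000' -- a hypothetical
    format; the real producer/consumer logs may differ.
    """
    match = re.search(r"Messages (?:sent|received):\s*(\d+)", pod_log)
    return bool(match) and int(match.group(1)) >= int(expected)
```

The real helpers additionally bound the log search with since_time and raise an exception on failure rather than returning False.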
ocs_ci.ocs.api_client module
A module implementing specific API clients for interacting with an OpenShift or Kubernetes cluster
APIClientBase is an abstract base class that imposes a contract on the methods to be implemented in derived, api-client-specific classes
- class ocs_ci.ocs.api_client.APIClientBase
Bases:
object
Abstract base class for all api-client classes
This is an abstract base class; api-client-specific classes should implement all of its methods for interacting with the OpenShift cluster
- abstract api_create()
- abstract api_delete()
- abstract api_get()
- abstract api_patch()
- abstract api_post()
- abstract create_service(**kw)
Create an openshift service
- Parameters:
**kw – e.g. body="content", namespace='ocnamespace'
- Returns:
Response from api server
Note: the return value can be tricky because it varies from client to client; for oc-cli, extra work needs to be done in the specific implementation.
- Return type:
dict
- abstract get_labels(pod_name, pod_namespace)
Get the openshift labels on a given pod
- Parameters:
pod_name (str) – Name of pod for which labels to be fetched
pod_namespace (str) – Namespace where the pod lives
- Raises:
NotImplementedError – if function not implemented by client
- Returns:
Labels associated with pod
- Return type:
dict
- abstract get_pods(**kwargs)
Because each client API's I/O format differs, this is left to be implemented by the specific client
- Parameters:
**kwargs – e.g. namespace='namespace', the OpenShift namespace from which pods are needed
- Raises:
NotImplementedError – if client has not implemented this function.
- Returns:
A list of pod names
- Return type:
pod_names (list)
- abstract property name
Concrete class will have respective api-client name
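The contract described above can be sketched with Python's abc machinery; the class and method names below are illustrative stand-ins, not the actual ocs_ci implementation:

```python
from abc import ABC, abstractmethod

class APIClientSketch(ABC):
    """Illustrative contract mirroring APIClientBase (names are hypothetical)."""

    @abstractmethod
    def get_pods(self, **kwargs):
        """Return a list of pod names."""

    @property
    @abstractmethod
    def name(self):
        """Each concrete client reports its own api-client name."""

class RESTClientSketch(APIClientSketch):
    def get_pods(self, **kwargs):
        # A real client would query the API server; stub data here
        return ["rook-ceph-mon-a", "rook-ceph-osd-0"]

    @property
    def name(self):
        return "oc-rest"
```

Instantiating the abstract class directly raises TypeError, which is how the base class enforces the contract on derived clients.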
- class ocs_ci.ocs.api_client.KubeClient
Bases:
APIClientBase
All activities using upstream kubernetes python client
- property name
Concrete class will have respective api-client name
- class ocs_ci.ocs.api_client.OCCLIClient
Bases:
APIClientBase
All activities using oc-cli.
This implements all functionalities like create, patch, delete using oc commands.
- property name
Concrete class will have respective api-client name
- class ocs_ci.ocs.api_client.OCRESTClient
Bases:
APIClientBase
All activities using openshift REST client
- api_create(**kw)
- api_delete(**kw)
- api_get(**kw)
- api_patch(**kw)
- api_post(**kw)
- create_service(**kw)
- Parameters:
kw – e.g. body={body} for a request carrying the service spec
- Returns:
ResourceInstance
- get_labels(pod_name, pod_namespace)
Get labels from a specific pod
- Parameters:
pod_name (str) – Name of the pod
pod_namespace (str) – namespace where this pod lives
- Raises:
NotFoundError – If resource not found
- Returns:
All the labels on a pod
- Return type:
dict
- get_pods(**kwargs)
Get pods in specific namespace or across oc cluster
- Parameters:
**kwargs – e.g. namespace=rook-ceph, label_selector='x==y'
- Returns:
List of pod names; if no namespace is provided, this function
returns all pods across the OpenShift cluster
- Return type:
list
- property name
Concrete class will have respective api-client name
- ocs_ci.ocs.api_client.get_api_client(client_name)
Get instance of corresponding api-client object with given name
- Parameters:
client_name (str) – name of the api client to be instantiated
- Returns:
api client object
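get_api_client is a factory: it maps a client name to the matching class and returns an instance. A self-contained sketch of that dispatch (the registry keys and classes here are hypothetical; the real module maps names to KubeClient, OCCLIClient, and OCRESTClient):

```python
class _KubeClientSketch:
    name = "kube"

class _OCRESTClientSketch:
    name = "oc-rest"

# Hypothetical registry of name -> class; not the actual ocs_ci mapping
_CLIENTS = {cls.name: cls for cls in (_KubeClientSketch, _OCRESTClientSketch)}

def get_api_client_sketch(client_name):
    """Return an instance of the api-client registered under client_name."""
    try:
        return _CLIENTS[client_name]()
    except KeyError:
        raise ValueError(f"no api client registered for {client_name!r}")
```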
ocs_ci.ocs.awscli_pod module
Methods used in awscli_pod fixtures in tests/conftest.py
- ocs_ci.ocs.awscli_pod.awscli_pod_cleanup(namespace=None)
Delete AWS cli pod resources.
- Parameters:
namespace (str) – Namespace for aws cli pod
- ocs_ci.ocs.awscli_pod.create_awscli_pod(scope_name=None, namespace=None, service_account=None)
Create AWS cli pod and its resources.
- Parameters:
scope_name (str) – The name of the fixture’s scope
namespace (str) – Namespace for aws cli pod
- Returns:
awscli_pod_obj
- Return type:
object
ocs_ci.ocs.benchmark_operator module
- Benchmark Operator Class to run various workloads, performance and scale tests
- previously known as Ripsaw and implemented from :
This operator can be used as an object or as a fixture
- class ocs_ci.ocs.benchmark_operator.BenchmarkOperator(**kwargs)
Bases:
object
Workload operation using Benchmark-Operator
- cleanup()
Clean up the cluster from the benchmark operator project
- deploy()
Deploy the benchmark-operator
- get_uuid(benchmark)
- Get the UUID of the test.
When benchmark-operator runs benchmark tests, each run gets its own UUID so that the results in the Elasticsearch server can be sorted.
- Parameters:
benchmark (str) – the name of the main pod in the test
- Returns:
the UUID of the test, or '' if no UUID is found in the benchmark pod
- Return type:
str
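Since each run is tagged with a UUID for sorting in Elasticsearch, get_uuid amounts to pulling a UUID out of the benchmark pod's output, with the '' fallback the docstring describes. A self-contained sketch of that extraction (the output format is an assumption):

```python
import re

# Standard 8-4-4-4-12 hex UUID shape
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
    re.IGNORECASE,
)

def extract_uuid(pod_output):
    """Return the first UUID found in the pod output, or '' if none."""
    match = UUID_RE.search(pod_output)
    return match.group(0) if match else ""
```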
- ocs_ci.ocs.benchmark_operator.benchmark_operator(request)
Use the benchmark operator as a fixture
ocs_ci.ocs.benchmark_operator_fio module
- class ocs_ci.ocs.benchmark_operator_fio.BenchmarkOperatorFIO
Bases:
object
Benchmark Operator FIO Class
- calc_number_servers_file_size()
Calculate the number of FIO servers based on the file size
- cleanup()
Clean up the cluster from the benchmark operator project
- clone_benchmark_operator()
Clone benchmark-operator
- create_benchmark_operator()
Create benchmark-operator
- deploy()
Deploy the benchmark-operator
- label_worker_nodes()
Label Worker nodes for cache-dropping enable
- pods_expected_status(pattern, expected_num_pods, expected_status)
- run_fio_benchmark_operator(is_completed=True)
Run FIO on benchmark-operator
- Parameters:
is_completed (bool) – if True, verify that the client pod moves to Completed state.
- setup_benchmark_fio(total_size=2, jobs='read', read_runtime=30, bs='4096KiB', storageclass='ocs-storagecluster-ceph-rbd', timeout_completed=2400)
Setup of benchmark fio
- Parameters:
total_size (int) – total size for the FIO workload
jobs (str) – FIO job types to run, for example the readwrite option
read_runtime (int) – Amount of time in seconds to run read workloads
bs (str) – the block size to use for the prefill
storageclass (str) – StorageClass to use for the PVC of each server pod
timeout_completed (int) – timeout for the client pod to move to the Completed state
- wait_for_wl_to_complete()
Wait for the client pod to move to the Completed state
- wait_for_wl_to_start()
Wait for the FIO server pods to move to the Running state
- ocs_ci.ocs.benchmark_operator_fio.get_file_size(expected_used_capacity_percent)
Get the file size based on expected used capacity percent
- Parameters:
expected_used_capacity_percent (int) – expected used capacity percent
ocs_ci.ocs.bucket_utils module
Helper functions file for working with object buckets
- ocs_ci.ocs.bucket_utils.abort_all_multipart_upload(s3_obj, bucketname, object_key)
Abort all Multipart Uploads for this Bucket
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
- Returns:
List of aborted upload ids
- Return type:
list
- ocs_ci.ocs.bucket_utils.abort_multipart(s3_obj, bucketname, object_key, upload_id)
Aborts a Multipart Upload for this Bucket
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
upload_id (str) – Multipart Upload-ID
- Returns:
aborted upload id
- Return type:
str
- ocs_ci.ocs.bucket_utils.bucket_read_api(mcg_obj, bucket_name)
Fetches bucket metadata such as size, tiers, etc.
- Parameters:
mcg_obj (obj) – MCG object
bucket_name (str) – Name of the bucket
- Returns:
Bucket policy response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.bulk_s3_put_bucket_lifecycle_config(mcg_obj, buckets, lifecycle_config)
This method applies a lifecycle configuration to multiple buckets
- Args:
mcg_obj: An MCG object containing the MCG S3 connection credentials
buckets (list): list of bucket names to apply the lifecycle rule to
lifecycle_config (dict): a dict following the expected AWS JSON structure of a config file
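For reference, a minimal lifecycle_config following the AWS S3 lifecycle JSON structure might look like this (the rule ID, prefix, and expiration period are illustrative):

```python
# A minimal lifecycle configuration in the AWS S3 JSON structure.
# Rule ID, prefix, and expiration period are illustrative values.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-logs",            # arbitrary rule identifier
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # only keys under logs/
            "Expiration": {"Days": 1},
        }
    ]
}
```

With an initialized MCG object, bulk_s3_put_bucket_lifecycle_config(mcg_obj, buckets, lifecycle_config) would then apply this rule to every bucket in the list.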
- ocs_ci.ocs.bucket_utils.change_objects_creation_date_in_noobaa_db(bucket_name, object_keys=[], new_creation_time=0)
Change the creation date of objects at the noobaa-db.
- Parameters:
bucket_name (str) – The name of the bucket where the objects reside
object_keys (list, optional) – A list of object keys to change their creation date Note: If object_keys is empty, all objects in the bucket will be changed.
new_creation_time (int) – The new creation time in unix timestamp in seconds
- Example usage:
# Change the creation date of objects obj1 and obj2 in bucket my-bucket to one minute back
change_objects_creation_date_in_noobaa_db(“my-bucket”, [“obj1”, “obj2”], time.time() - 60)
- ocs_ci.ocs.bucket_utils.check_cached_objects_by_name(mcg_obj, bucket_name, expected_objects_names=None)
Check if the names of the cached objects in a cache bucket are as expected, using an RPC call
- Parameters:
mcg_obj (MCG) – An MCG object containing the MCG S3 connection credentials
bucket_name (str) – Name of the cache bucket
expected_objects_names (list) – Expected objects to be cached
- Returns:
True if all the objects exist in the cache as expected, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.check_if_objects_expired(mcg_obj, bucket_name, prefix='')
Checks whether the objects in the bucket have expired
- Parameters:
mcg_obj (MCG) – MCG object
bucket_name (str) – Name of the bucket
prefix (str) – Objects prefix
- Returns:
True if the objects have expired, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.check_objects_in_bucket(bucket_name, objects_list, mcg_obj, s3pod, timeout=60)
Checks the object list present in the bucket and compares it with the list of uploaded objects
- ocs_ci.ocs.bucket_utils.check_pv_backingstore_status(backingstore_name, namespace=None, desired_status=['OPTIMAL', 'LOW_CAPACITY'])
Check whether an existing PV backing store is in the OPTIMAL state
- Parameters:
backingstore_name (str) – backingstore name
namespace (str) – backing store’s namespace
desired_status (list) – desired states for the backing store; if None is given, the desired status is Healthy
- Returns:
True if backing store is in the desired state
- Return type:
bool
- ocs_ci.ocs.bucket_utils.cli_create_aws_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with aws underlying storage using noobaa cli command
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.cli_create_aws_sts_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore of type aws-sts-s3 with aws underlying storage and the role-ARN
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds roleARN for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.cli_create_azure_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with Azure underlying storage using a NooBaa CLI command
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.cli_create_google_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with GCP underlying storage using a NooBaa CLI command
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.cli_create_ibmcos_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with IBM COS underlying storage using a NooBaa CLI command
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.cli_create_pv_backingstore(mcg_obj, backingstore_name, vol_num, size, storage_class, req_cpu=None, req_mem=None, lim_cpu=None, lim_mem=None)
Create a new backingstore with pv underlying storage using noobaa cli command
- Parameters:
backingstore_name (str) – backingstore name
vol_num (int) – number of pv volumes
size (int) – each volume size in GB
storage_class (str) – which storage class to use
req_cpu (str) – requested cpu value
req_mem (str) – requested memory value
lim_cpu (str) – limit cpu value
lim_mem (str) – limit memory value
- ocs_ci.ocs.bucket_utils.cli_create_rgw_backingstore(mcg_obj, cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with RGW underlying storage using a NooBaa CLI command
- Parameters:
mcg_obj (MCG) – Used for execution for the NooBaa CLI command
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.compare_bucket_object_list(mcg_obj, first_bucket_name, second_bucket_name, timeout=600)
Compares the object lists of two given buckets
- Parameters:
mcg_obj (MCG) – An initialized MCG object
first_bucket_name (str) – The name of the first bucket to compare
second_bucket_name (str) – The name of the second bucket to compare
timeout (int) – The maximum time in seconds to wait for the buckets to be identical
- Returns:
True if both buckets contain the same object names in all objects, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.compare_directory(awscli_pod, original_dir, result_dir, amount=2, pattern='ObjKey-', result_pod=None)
Compares object checksums on original and result directories
- Args:
awscli_pod (pod): A pod running the AWS CLI tools
original_dir (str): original directory name
result_dir (str): result directory name
amount (int): Number of test objects to create
pattern (str): object naming pattern
result_pod (pod, optional): A second pod containing files for comparison
- ocs_ci.ocs.bucket_utils.compare_object_checksums_between_bucket_and_local(io_pod, mcg_obj, bucket_name, local_dir, amount=1, pattern='ObjKey-')
Compares the checksums of the objects in a bucket and a local directory
- Parameters:
io_pod (ocs_ci.ocs.ocp.OCP) – The pod object in which the check will take place
mcg_obj (MCG) – An MCG class instance
bucket_name (str) – The name of the bucket to compare the objects from
local_dir (str) – A string containing the path to the local directory
amount (int, optional) – The amount of objects to use for the verification. Defaults to 1.
pattern (str, optional) – A string defining the object naming pattern. Defaults to “ObjKey-“.
- Returns:
True if the checksums are the same, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.complete_multipart_upload(s3_obj, bucketname, object_key, upload_id, parts)
Completes the Multipart Upload
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
upload_id (str) – Multipart Upload-ID
parts (list) – List containing the uploaded parts which includes ETag and part number
- Returns:
Dictionary containing the completed multipart upload details
- Return type:
dict
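The parts argument is typically accumulated while uploading: each entry pairs a part number with the ETag returned for that part. A sketch of building it (the ETag values are placeholders):

```python
# Build the `parts` list expected by complete_multipart_upload():
# one dict per uploaded part, pairing the part number with the ETag
# returned by the corresponding upload-part call. ETags are placeholders.
uploaded_etags = ['"etag-1"', '"etag-2"', '"etag-3"']

parts = [
    {"PartNumber": number, "ETag": etag}
    for number, etag in enumerate(uploaded_etags, start=1)
]
```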
- ocs_ci.ocs.bucket_utils.copy_objects(podobj, src_obj, target, s3_obj=None, signed_request_creds=None, recursive=False, timeout=600, **kwargs)
Copies an object to a bucket using the s3 cp command
- Parameters:
podobj – Pod object that is used to perform the copy operation
src_obj – full path to the object
target – target bucket
s3_obj – OBC/MCG object
signed_request_creds – a dictionary containing S3 credentials for a signed request
recursive – If True, copy directories and their contents/files; False otherwise
timeout – timeout for exec_oc_cmd, defaults to 600 seconds
- Returns:
None
- ocs_ci.ocs.bucket_utils.copy_random_individual_objects(podobj, file_dir, target, amount, pattern='test-obj-', s3_obj=None, **kwargs)
Generates random objects and then copies them individually one after the other
- Parameters:
podobj – Pod object used to perform the operation
file_dir – file directory name where the generated objects are placed
pattern – pattern to follow for object naming
target – target bucket name
amount – number of objects to generate
s3_obj – MCG/OBC object
- Returns:
None
- ocs_ci.ocs.bucket_utils.craft_s3_command(cmd, mcg_obj=None, api=False, signed_request_creds=None)
Crafts the AWS CLI S3 command, including the login credentials and the command to be run
- Parameters:
mcg_obj – An MCG class instance
cmd – The AWSCLI command to run
api – True if the call is for s3api, false if s3
signed_request_creds – a dictionary containing AWS S3 creds for a signed request
- Returns:
The crafted command, ready to be executed on the pod
- Return type:
str
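As a rough sketch of what the crafted command looks like, a simplified reimplementation might prepend the credentials and endpoint to the CLI invocation. This is an assumption for illustration, not the helper's actual code, which also handles signed requests and TLS options:

```python
def craft_s3_command_sketch(cmd, access_key, secret_key, endpoint, api=False):
    """Simplified sketch: export S3 credentials inline, then invoke
    `aws s3` (or `aws s3api`) against a custom endpoint. Hypothetical,
    not the real craft_s3_command() implementation."""
    base = "s3api" if api else "s3"
    return (
        f"AWS_ACCESS_KEY_ID={access_key} "
        f"AWS_SECRET_ACCESS_KEY={secret_key} "
        f"aws {base} {cmd} --endpoint={endpoint}"
    )
```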
- ocs_ci.ocs.bucket_utils.craft_s3cmd_command(cmd, mcg_obj=None, signed_request_creds=None)
Crafts the s3cmd CLI command, including the login credentials and the command to be run
- Parameters:
mcg_obj – An MCG class instance
cmd – The s3cmd command to run
signed_request_creds – a dictionary containing S3 creds for a signed request
- Returns:
The crafted command, ready to be executed on the pod
- Return type:
str
- ocs_ci.ocs.bucket_utils.create_aws_bs_using_cli(mcg_obj, access_key, secret_key, backingstore_name, uls_name, region)
Create an AWS backingstore through the CLI using access_key and secret_key
- Parameters:
mcg_obj – MCG object
access_key – access key
secret_key – secret key
backingstore_name – unique name for the backingstore
uls_name – underlying storage name
region – region
- Returns:
None
- ocs_ci.ocs.bucket_utils.create_multipart_upload(s3_obj, bucketname, object_key)
Initiates Multipart Upload
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket on which multipart upload to be initiated on
object_key (str) – Unique object Identifier
- Returns:
Multipart Upload-ID
- Return type:
str
- ocs_ci.ocs.bucket_utils.del_objects(uploaded_objects_paths, awscli_pod, mcg_obj)
Deletes objects from a bucket
- Parameters:
uploaded_objects_paths (list) – List of object paths
awscli_pod (pod) – A pod running the AWSCLI tools
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
- ocs_ci.ocs.bucket_utils.delete_all_noobaa_buckets(mcg_obj, request)
Deletes all the buckets in noobaa and restores the first.bucket after the current test
- Parameters:
mcg_obj – MCG object
request – pytest request object
- ocs_ci.ocs.bucket_utils.delete_bucket_policy(s3_obj, bucketname)
Deletes bucket policy
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
- Returns:
Delete Bucket policy response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.delete_object_tags(io_pod, mcg_obj, bucket, object_keys, prefix='')
Delete tags of objects in a bucket via the AWS CLI
- Parameters:
io_pod (pod) – The pod that will execute the AWS CLI commands
mcg_obj (MCG) – An MCG class instance
bucket (str) – The name of the bucket to delete the tags from
object_keys (list) – A list of object keys to delete the tags from
prefix (str) – The prefix of the objects to delete the tags from
- ocs_ci.ocs.bucket_utils.delete_objects_from_source_and_wait_for_deletion_sync(mcg_obj, source_bucket, target_bucket, mockup_logger, timeout)
Deletes all objects from the source bucket, logs the operations, and waits for the deletion sync to complete.
- ocs_ci.ocs.bucket_utils.download_objects_using_s3cmd(podobj, src, target, s3_obj=None, recursive=False, signed_request_creds=None, **kwargs)
Download objects from bucket to target directories using s3cmd utility
- Parameters:
podobj (OCS) – The pod on which to execute the commands and download the objects to
src (str) – Fully qualified object source path
target (str) – Fully qualified object target path
s3_obj (MCG, optional) – The MCG object to use in case the target or source are in an MCG
signed_request_creds (dictionary, optional) – the access_key, secret_key, endpoint and region to use when sending signed AWS S3 requests
- ocs_ci.ocs.bucket_utils.expire_objects_in_bucket(bucket_name, object_keys=[], prefix='')
Expire objects in a bucket by changing their creation date to one year back.
Note that this is a workaround for the fact that the shortest expiration time that expiration policies allow is one day, which is too long for the tests to wait.
- Parameters:
bucket_name (str) – The name of the bucket where the objects reside
object_keys (list) – A list of object keys to expire. Note: if object_keys is empty, all objects in the bucket will be expired.
prefix (str) – The prefix of the objects to expire
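The one-year back-dating the helper performs boils down to subtracting a fixed offset from the current Unix time, for example:

```python
import time

# Compute a creation timestamp one year in the past, the kind of value
# written into the noobaa-db to make objects appear expired.
ONE_YEAR_IN_SECONDS = 365 * 24 * 60 * 60
one_year_back = time.time() - ONE_YEAR_IN_SECONDS
```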
- ocs_ci.ocs.bucket_utils.get_bucket_available_size(mcg_obj, bucket_name)
Get the available size of a bucket
- Parameters:
mcg_obj (obj) – MCG object
bucket_name (str) – Name of the bucket
- Returns:
Available size in the bucket
- Return type:
int
- ocs_ci.ocs.bucket_utils.get_bucket_policy(s3_obj, bucketname)
Gets bucket policy from a bucket
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
- Returns:
Get Bucket policy response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.get_full_path_object(downloaded_files, bucket_name)
Gets the full paths of the objects in the bucket
- Parameters:
downloaded_files (list) – List of downloaded files
bucket_name (str) – Name of the bucket
- Returns:
List of full paths of objects
- Return type:
list
- ocs_ci.ocs.bucket_utils.get_nb_bucket_stores(mcg_obj, bucket_name)
Query the noobaa-db for the backingstores/namespacestores that a given bucket is using for its data placement
- Parameters:
mcg_obj – MCG object
bucket_name – name of the bucket
- Returns:
list of backingstores/namespacestores names
- Return type:
list
- ocs_ci.ocs.bucket_utils.get_object_count_in_bucket(io_pod, bucket_name, prefix='', s3_obj=None)
Get the total number of objects in a bucket
- Parameters:
io_pod (pod) – The pod which should handle all needed IO operations
bucket_name (str) – The name of the bucket to count the objects in
prefix (str) – The prefix to start the count from
s3_obj (MCG or OBJ) – An MCG or OBC class instance
- Returns:
The total number of objects in the bucket
- Return type:
int
- ocs_ci.ocs.bucket_utils.get_object_to_tags_dict(io_pod, mcg_obj, bucket, object_keys)
Get tags of objects in a bucket via the AWS CLI
- Parameters:
io_pod (pod) – The pod that will execute the AWS CLI commands
mcg_obj (MCG) – An MCG class instance
bucket (str) – The name of the bucket to get the tags from
object_keys (list) – A list of object keys to get the tags from
- Returns:
- A dictionary from object keys to their list of tag dicts
- For example:
{“objA”: [{“key1”: “value1”}, {“key2”: “value2”}], “objB”: [{“key3”: “value3”}, {“key4”: “value4”}]}
- Return type:
dict
- ocs_ci.ocs.bucket_utils.get_rgw_restart_counts()
Gets the restart count of the RGW pods
- Returns:
restart counts of RGW pods
- Return type:
list
- ocs_ci.ocs.bucket_utils.list_multipart_upload(s3_obj, bucketname)
Lists the multipart upload details on a bucket
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
- Returns:
Dictionary containing the multipart upload details
- Return type:
dict
- ocs_ci.ocs.bucket_utils.list_objects_from_bucket(pod_obj, target, prefix=None, s3_obj=None, signed_request_creds=None, timeout=600, recursive=False, **kwargs)
Lists objects in a bucket using s3 ls command
- Parameters:
pod_obj (Pod) – Pod object that is used to perform copy operation
target (str) – target bucket
prefix (str, optional) – Prefix to perform the list operation on
s3_obj (MCG, optional) – The MCG object to use in case the target or source
signed_request_creds (dictionary, optional) – the access_key, secret_key, endpoint and region to use when sending signed AWS S3 requests
timeout (int) – timeout for the exec_oc_cmd
recursive (bool) – If True, list objects recursively using the --recursive option
- Returns:
List of objects in a bucket
- ocs_ci.ocs.bucket_utils.list_uploaded_parts(s3_obj, bucketname, object_key, upload_id)
Lists uploaded parts and their ETags
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
upload_id (str) – Multipart Upload-ID
- Returns:
Dictionary containing the multipart upload details
- Return type:
dict
- ocs_ci.ocs.bucket_utils.namespace_bucket_update(mcg_obj, bucket_name, read_resource, write_resource)
Edits MCG namespace bucket resources
- Parameters:
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
bucket_name (str) – Name of the bucket
read_resource (list) – Resource dicts or names to provide read access
write_resource (str or dict) – Resource dict or name to provide write access
- ocs_ci.ocs.bucket_utils.obc_io_create_delete(mcg_obj, awscli_pod, bucket_factory)
Running IOs on the OBC interface
- Parameters:
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
awscli_pod (pod) – A pod running the AWSCLI tools
bucket_factory – Calling this fixture creates a new bucket(s)
- ocs_ci.ocs.bucket_utils.oc_create_aws_backingstore(cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with aws underlying storage using oc create command
- Parameters:
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.oc_create_azure_backingstore(cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with Azure underlying storage using oc create command
- Parameters:
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.oc_create_google_backingstore(cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with GCP underlying storage using oc create command
- Parameters:
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.oc_create_ibmcos_backingstore(cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with IBM COS underlying storage using oc create command
- Parameters:
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.oc_create_pv_backingstore(backingstore_name, vol_num, size, storage_class)
Create a new backingstore with pv underlying storage using oc create command
- Parameters:
backingstore_name (str) – backingstore name
vol_num (int) – number of pv volumes
size (int) – each volume size in GB
storage_class (str) – which storage class to use
- ocs_ci.ocs.bucket_utils.oc_create_rgw_backingstore(cld_mgr, backingstore_name, uls_name, region)
Create a new backingstore with RGW underlying storage using oc create command
- Parameters:
cld_mgr (CloudManager) – holds secret for backingstore creation
backingstore_name (str) – backingstore name
uls_name (str) – underlying storage name
region (str) – which region to create backingstore (should be the same as uls)
- ocs_ci.ocs.bucket_utils.patch_replication_policy_to_bucket(bucket_name, rule_id, destination_bucket_name, prefix='')
Patches replication policy to a bucket
- Parameters:
bucket_name (str) – The name of the bucket to patch
rule_id (str) – The ID of the replication rule
destination_bucket_name (str) – The name of the replication destination bucket
- ocs_ci.ocs.bucket_utils.patch_replication_policy_to_bucketclass(bucketclass_name, rule_id, destination_bucket_name)
Patches a replication policy to a bucketclass
- Parameters:
bucketclass_name (str) – The name of the bucketclass to patch
rule_id (str) – The ID of the replication rule
destination_bucket_name (str) – The name of the replication destination bucket
- ocs_ci.ocs.bucket_utils.put_bucket_policy(s3_obj, bucketname, policy)
Adds bucket policy to a bucket
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
policy (str) – Bucket policy in JSON format
- Returns:
Bucket policy response
- Return type:
dict
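The policy string follows the standard S3 bucket-policy JSON format; an illustrative example (the bucket name, statement ID, and principal are made up):

```python
import json

# Illustrative bucket policy granting anonymous read access to objects
# in "my-bucket"; put_bucket_policy() expects the policy as a JSON string.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",   # arbitrary statement ID
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-bucket/*"],
        }
    ],
})
```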
- ocs_ci.ocs.bucket_utils.random_object_round_trip_verification(io_pod, bucket_name, upload_dir, download_dir, amount=1, pattern='RandomObject-', prefix=None, wait_for_replication=False, second_bucket_name=None, mcg_obj=None, s3_creds=None, cleanup=False, result_pod=None, result_pod_path=None, **kwargs)
Writes random objects in a pod, uploads them to a bucket, downloads them from the bucket and then compares them.
- Parameters:
io_pod (ocs_ci.ocs.ocp.OCP) – The pod object in which the files should be generated and written
bucket_name (str) – The bucket name to perform the round trip verification on
upload_dir (str) – A string containing the path to the directory where the files will be generated and uploaded from
download_dir (str) – A string containing the path to the directory where the objects will be downloaded to
amount (int, optional) – The amount of objects to use for the verification. Defaults to 1.
pattern (str, optional) – A string defining the object naming pattern. Defaults to “RandomObject-“.
wait_for_replication (bool, optional) – A boolean defining whether the replication should be waited for. Defaults to False.
second_bucket_name (str, optional) – The name of the second bucket in case of waiting for object replication. Defaults to None.
mcg_obj (MCG, optional) – An MCG class instance. Defaults to None.
s3_creds (dict, optional) – A dictionary containing S3-compatible credentials for writing objects directly to buckets outside of the MCG. Defaults to None.
cleanup (bool, optional) – A boolean defining whether the files should be cleaned up after the verification.
result_pod (ocs_ci.ocs.ocp.OCP, optional) – A second pod containing files for comparison
result_pod_path (str, optional) – A string containing the path to the directory where the files reside on the result pod
- ocs_ci.ocs.bucket_utils.retrieve_anon_s3_resource()
Returns an anonymous boto3 S3 resource by creating one and disabling signing
Disabling signing isn’t documented anywhere, and this solution is based on a comment by an AWS developer: https://github.com/boto/boto3/issues/134#issuecomment-116766812
- Returns:
An anonymous S3 resource
- Return type:
boto3.resource()
- ocs_ci.ocs.bucket_utils.retrieve_test_objects_to_pod(podobj, target_dir)
Downloads all the test objects to a given directory in a given pod.
- Parameters:
podobj (OCS) – The pod object to download the objects to
target_dir – The fully qualified path of the download target folder
- Returns:
A list of the downloaded objects’ names
- Return type:
list
- ocs_ci.ocs.bucket_utils.retrieve_verification_mode()
- ocs_ci.ocs.bucket_utils.rm_object_recursive(podobj, target, mcg_obj, option='')
Remove bucket objects with the --recursive option
- ocs_ci.ocs.bucket_utils.s3_copy_object(s3_obj, bucketname, source, object_key)
Boto3 client based copy object
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
source (str) – Source object key, e.g. ‘<bucket>/<key>’
object_key (str) – Unique object Identifier for copied object
- Returns:
Copy object response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_delete_bucket_website(s3_obj, bucketname)
Boto3 client based Delete bucket website function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
- Returns:
DeleteBucketWebsite response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_delete_object(s3_obj, bucketname, object_key, versionid=None)
Simple Boto3 client based Delete object
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
versionid (str) – Unique version number of an object
- Returns:
Delete object response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_delete_objects(s3_obj, bucketname, object_keys)
Boto3 client based delete objects
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_keys (list) – The objects to delete. Format: {‘Key’: ‘object_key’, ‘VersionId’: ‘’}
- Returns:
delete objects response
- Return type:
dict
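For illustration, the object_keys argument mirrors the Objects list of the S3 DeleteObjects API; the key names and version ID below are placeholders:

```python
# Keys formatted for s3_delete_objects(). VersionId can be omitted
# (or left empty) on unversioned buckets; values are placeholders.
object_keys = [
    {"Key": "ObjKey-0"},
    {"Key": "ObjKey-1", "VersionId": "example-version-id"},
]
```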
- ocs_ci.ocs.bucket_utils.s3_get_bucket_versioning(s3_obj, bucketname, s3_client=None)
Boto3 client based Get Bucket Versioning function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
s3_client – Any s3 client resource
- Returns:
GetBucketVersioning response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_get_bucket_website(s3_obj, bucketname)
Boto3 client based Get bucket website function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
- Returns:
GetBucketWebsite response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_get_object(s3_obj, bucketname, object_key, versionid='')
Simple Boto3 client based Get object
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
versionid (str) – Unique version number of an object
- Returns:
Get object response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_get_object_acl(s3_obj, bucketname, object_key)
Boto3 client based get_object_acl operation
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier for copied object
- Returns:
get object acl response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_head_object(s3_obj, bucketname, object_key, if_match=None)
Boto3 client based head_object operation to retrieve only metadata
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier for copied object
if_match (str) – Return the object only if its entity tag (ETag) is the same as the one specified
- Returns:
head object response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_io_create_delete(mcg_obj, awscli_pod, bucket_factory)
Running IOs on an S3 bucket
- Parameters:
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
awscli_pod (pod) – A pod running the AWSCLI tools
bucket_factory – Calling this fixture creates a new bucket(s)
- ocs_ci.ocs.bucket_utils.s3_list_object_versions(s3_obj, bucketname, prefix='')
Boto3 client based list object versions function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
prefix (str) – Object key prefix
- Returns:
List object version response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_list_objects_v1(s3_obj, bucketname, prefix='', delimiter='', max_keys=1000, marker='')
Boto3 client based list objects (version 1)
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
prefix (str) – Limits the response to keys that begin with the specified prefix.
delimiter (str) – Character used to group keys.
max_keys (int) – Maximum number of keys returned in the response. Default 1,000 keys.
marker (str) – key to start with when listing objects in a bucket.
- Returns:
list object v1 response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_list_objects_v2(s3_obj, bucketname, prefix='', delimiter='', max_keys=1000, con_token='', fetch_owner=False)
Boto3 client based list objects (version 2)
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
prefix (str) – Limits the response to keys that begin with the specified prefix.
delimiter (str) – Character used to group keys.
max_keys (int) – Maximum number of keys returned in the response. Default 1,000 keys.
con_token (str) – Token used to continue the list
fetch_owner (bool) – If True, include object owner information in the response
- Returns:
list object v2 response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_put_bucket_versioning(s3_obj, bucketname, status='Enabled', s3_client=None)
Boto3 client based Put Bucket Versioning function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
status (str) – ‘Enabled’ or ‘Suspended’. Default ‘Enabled’
s3_client – Any s3 client resource
- Returns:
PutBucketVersioning response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_put_bucket_website(s3_obj, bucketname, website_config)
Boto3 client based Put bucket website function
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
website_config (dict) – Website configuration info
- Returns:
PutBucketWebsite response
- Return type:
dict
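The website_config dict follows the S3 WebsiteConfiguration structure; a minimal example (the document names are illustrative):

```python
# Minimal website configuration for s3_put_bucket_website();
# the index and error document names are illustrative.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}
```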
- ocs_ci.ocs.bucket_utils.s3_put_object(s3_obj, bucketname, object_key, data, content_type='', content_encoding='')
Simple Boto3 client based Put object
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
object_key (str) – Unique object Identifier
data (str) – string content to write to a new S3 object
content_type (str) – Type of object data, e.g. html, txt, etc.
content_encoding (str) – Content encoding of the object data
- Returns:
Put object response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.s3_upload_part_copy(s3_obj, bucketname, copy_source, object_key, part_number, upload_id)
Boto3 client based upload_part_copy operation
- Parameters:
s3_obj (obj) – MCG or OBC object
bucketname (str) – Name of the bucket
copy_source (str) – Name of the source bucket and key name. {bucket}/{key}
part_number (int) – Part number
upload_id (str) – Upload Id
object_key (str) – Unique object Identifier for copied object
- Returns:
upload_part_copy response
- Return type:
dict
- ocs_ci.ocs.bucket_utils.sample_if_objects_expired(mcg_obj, bucket_name, prefix='', timeout=600, sleep=30)
Sample whether the objects in a bucket have expired, using TimeoutSampler
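The TimeoutSampler pattern referenced here boils down to polling a condition until it holds or a deadline passes. A minimal sketch (names are illustrative, not the ocs-ci implementation):

```python
import time

def sample_until(condition, timeout=600, sleep=30):
    """Poll `condition` every `sleep` seconds until it returns a truthy
    value or `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(sleep)
```

sample_if_objects_expired would pass a "are all objects under this prefix gone?" check as the condition, with timeout=600 and sleep=30 as the defaults above suggest.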
- ocs_ci.ocs.bucket_utils.setup_base_objects(awscli_pod, original_dir, amount=2)
Creates a directory and populates it with random objects
- Parameters:
awscli_pod (Pod) – A pod running the AWS CLI tools
original_dir (str) – original directory name
amount (int) – Number of test objects to create
- ocs_ci.ocs.bucket_utils.sync_object_directory(podobj, src, target, s3_obj=None, signed_request_creds=None, **kwargs)
Syncs objects between a target and source directories
- Parameters:
podobj (OCS) – The pod on which to execute the commands and download the objects to
src (str) – Fully qualified object source path
target (str) – Fully qualified object target path
s3_obj (MCG, optional) – The MCG object to use in case the target or source are in an MCG
signed_request_creds (dictionary, optional) – the access_key, secret_key, endpoint and region to use when sending signed AWS S3 requests
- ocs_ci.ocs.bucket_utils.tag_objects(io_pod, mcg_obj, bucket, object_keys, tags, prefix='')
Apply tags to objects in a bucket via the AWS CLI
- Parameters:
io_pod (pod) – The pod that will execute the AWS CLI commands
mcg_obj (MCG) – An MCG class instance
bucket (str) – The name of the bucket to tag the objects in
object_keys (list) – A list of object keys to tag
tags (dict or list of dicts) –
A dictionary of key-value pairs,
or a list of single-pair tag dicts (closer to the AWS CLI format)
- E.g.: {“key1”: “value1”, “key2”: “value2”}
[{“key1”: “value1”}, {“key2”: “value2”}]
prefix (str) – The prefix of the objects to tag
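Since tag_objects accepts both tag formats, they presumably get normalized before being handed to the AWS CLI. A hypothetical sketch of that normalization into the TagSet structure that `aws s3api put-object-tagging` accepts (normalize_tags is an illustrative name):

```python
def normalize_tags(tags):
    """Convert either accepted tag format (dict of key-value pairs, or a
    list of single-pair dicts) into the TagSet structure the AWS CLI expects."""
    if isinstance(tags, dict):
        pairs = list(tags.items())
    else:
        # Flatten a list of single-pair dicts into (key, value) tuples
        pairs = [(k, v) for d in tags for k, v in d.items()]
    return {"TagSet": [{"Key": str(k), "Value": str(v)} for k, v in pairs]}
```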
- ocs_ci.ocs.bucket_utils.update_replication_policy(bucket_name, replication_policy_dict)
Updates the replication policy of a bucket
- Parameters:
bucket_name (str) – The name of the bucket to update
replication_policy_dict (dict) – A dictionary containing the new replication policy
- ocs_ci.ocs.bucket_utils.upload_bulk_buckets(s3_obj, buckets, amount=1, object_key='obj-key-0', prefix=None)
Upload given amount of objects with sequential keys to multiple buckets
- Parameters:
s3_obj – obc/mcg object
buckets (list) – list of bucket names to upload to
amount (int, optional) – number of objects to upload per bucket
object_key (str, optional) – base object key
prefix (str, optional) – prefix for the upload path
- ocs_ci.ocs.bucket_utils.upload_objects_with_javasdk(javas3_pod, s3_obj, bucket_name, is_multipart=False)
Performs upload operation using java s3 pod
- Parameters:
javas3_pod – java s3 sdk pod session
s3_obj – MCG object
bucket_name – bucket on which objects are uploaded
is_multipart – By default False, set to True if you want to perform multipart upload
- Returns:
Output of the command execution
- ocs_ci.ocs.bucket_utils.upload_parts(mcg_obj, awscli_pod, bucketname, object_key, body_path, upload_id, uploaded_parts)
Uploads individual parts to a bucket
- Parameters:
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
awscli_pod (pod) – A pod running the AWSCLI tools
bucketname (str) – Name of the bucket to upload parts on
object_key (str) – Unique object identifier
body_path (str) – Path of the directory on the aws pod which contains the parts to be uploaded
upload_id (str) – Multipart Upload-ID
uploaded_parts (list) – list containing the name of the parts to be uploaded
- Returns:
List containing the ETag of the parts
- Return type:
list
- ocs_ci.ocs.bucket_utils.upload_test_objects_to_source_and_wait_for_replication(mcg_obj, source_bucket, target_bucket, mockup_logger, timeout)
Upload a set of objects to the source bucket, log the operations, and wait for the replication to complete.
- ocs_ci.ocs.bucket_utils.verify_s3_object_integrity(original_object_path, result_object_path, awscli_pod, result_pod=None)
Verifies checksum between original object and result object on an awscli pod
- Parameters:
original_object_path (str) – The Object that is uploaded to the s3 bucket
result_object_path (str) – The Object that is downloaded from the s3 bucket
awscli_pod (pod) – A pod running the AWSCLI tools
- Returns:
True if checksum matches, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.wait_for_cache(mcg_obj, bucket_name, expected_objects_names=None, timeout=60)
Wait for an existing cache bucket to cache all required objects
- Parameters:
mcg_obj (MCG) – An MCG object containing the MCG S3 connection credentials
bucket_name (str) – Name of the cache bucket
expected_objects_names (list) – Expected objects to be cached
- ocs_ci.ocs.bucket_utils.wait_for_object_count_in_bucket(io_pod, expected_count, bucket_name, prefix='', s3_obj=None, timeout=60, sleep=3)
Wait for the total number of objects in a bucket to reach the expected count
- Parameters:
io_pod (pod) – The pod which should handle all needed IO operations
expected_count (int) – The expected number of objects in the bucket
bucket_name (str) – The name of the bucket to count the objects in
prefix (str) – The prefix to start the count from
s3_obj (MCG or OBJ) – An MCG or OBC class instance
timeout (int) – The maximum time in seconds to wait for the expected count
sleep (int) – The time in seconds to wait between each count check
- Returns:
True if the expected count was reached, False otherwise
- Return type:
bool
- ocs_ci.ocs.bucket_utils.wait_for_pv_backingstore(backingstore_name, namespace=None)
Wait for an existing PV backing store to reach OPTIMAL state
- Parameters:
backingstore_name (str) – backingstore name
namespace (str) – backing store’s namespace
- ocs_ci.ocs.bucket_utils.write_individual_s3_objects(mcg_obj, awscli_pod, bucket_factory, downloaded_files, target_dir, bucket_name=None)
Writes objects one by one to an s3 bucket
- Parameters:
mcg_obj (obj) – An MCG object containing the MCG S3 connection credentials
awscli_pod (pod) – A pod running the AWSCLI tools
bucket_factory – Calling this fixture creates a new bucket(s)
downloaded_files (list) – List of downloaded object keys
target_dir (str) – The fully qualified path of the download target folder
bucket_name (str) – Name of the bucket (default: none)
- ocs_ci.ocs.bucket_utils.write_random_objects_in_pod(io_pod, file_dir, amount, pattern='ObjKey-', bs='1M')
Uses /dev/urandom to create and write random files in a given directory in a pod
- Parameters:
io_pod (ocs_ci.ocs.ocp.OCP) – The pod object in which the files should be generated and written
file_dir (str) – A string describing the path in which to write the files
amount (int) – The amount of files to generate
pattern (str) – The file name pattern to use
bs (str) – The size of each file (dd block size). Defaults to 1M.
- Returns:
A list with the names of all written objects
- Return type:
list
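Since the docstring names /dev/urandom and a dd-style block size, the generation presumably runs one dd command per file inside the pod. A sketch that only builds the command strings (build_dd_commands is an illustrative helper, not part of ocs-ci):

```python
def build_dd_commands(file_dir, amount, pattern="ObjKey-", bs="1M"):
    """Build the shell commands a pod would run to create `amount` random
    files named <pattern><index> in `file_dir`, each one dd block of size `bs`."""
    return [
        f"dd if=/dev/urandom of={file_dir}/{pattern}{i} bs={bs} count=1"
        for i in range(amount)
    ]
```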
- ocs_ci.ocs.bucket_utils.write_random_test_objects_to_bucket(io_pod, bucket_to_write, file_dir, amount=1, pattern='ObjKey-', prefix=None, bs='1M', mcg_obj=None, s3_creds=None)
Write files generated by /dev/urandom to a bucket
- Parameters:
io_pod (ocs_ci.ocs.ocp.OCP) – The pod which should handle all needed IO operations
bucket_to_write (str) – The bucket name to write the random files to
file_dir (str) – The path to the folder where all random files will be
from (generated and copied) –
amount (int, optional) – The amount of random objects to write. Defaults to 1.
pattern (str, optional) – The pattern of the random files’ names. Defaults to ObjKey.
bs (str, optional) – The size of the random files in bytes. Defaults to 1M.
mcg_obj (MCG, optional) – An MCG class instance
s3_creds (dict, optional) – A dictionary containing S3-compatible credentials
None. (for writing objects directly to buckets outside of the MCG. Defaults to) –
- Returns:
A list containing the names of the random files that were written
- Return type:
list
ocs_ci.ocs.ceph_debug module
- class ocs_ci.ocs.ceph_debug.CephObjectStoreTool(deployment_name=None, data_path='/var/lib/ceph/osd/', *args, **kwargs)
Bases:
RookCephPlugin
This is to perform ceph-objectstore-tool (COT) related operations on an OSD debug pod. This class can be extended in the future to perform various other ceph-objectstore-tool operations.
- run_cot_list_pgs(deployment_name)
Run COT list PG operation
- Parameters:
deployment_name – Name of the original deployment that is in debug mode
- Returns:
List of PGs
- Return type:
list
- class ocs_ci.ocs.ceph_debug.MonStoreTool(deployment_name=None, store_path='/var/lib/ceph/mon/', *args, **kwargs)
Bases:
RookCephPlugin
This is to perform MonStoreTool related operations on a Mon debug pod. This class can be extended in the future to perform various other ceph-monstore-tool operations.
- run_mot_get_monmap(deployment_name)
Runs the MonStoreTool get monmap operation
- Parameters:
deployment_name (str) – deployment name
- Returns:
output for get monmap command
- Return type:
str
- class ocs_ci.ocs.ceph_debug.RookCephPlugin(namespace='openshift-storage', operator_namespace='openshift-storage', alternate_image=None, *args, **kwargs)
Bases:
object
This helps you put Mon/OSD deployments in debug mode using the krew rook-ceph plugin, without scaling down the rook-operator or the other steps otherwise involved. It also takes care of installing the plugin if it is not already installed. Debug pods are generally used to perform debug operations, but they can also be used for maintenance purposes. Offline tools like ceph-objectstore-tool and ceph-monstore-tool can be used from debug pods. E.g. list all PGs: $ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs
- check_for_rook_ceph()
Checks if rook-ceph plugin is installed
- Returns:
True if installed, False otherwise
- Return type:
bool
- check_krew_installed()
Checks if krew is installed already
- Returns:
True if installed, False otherwise
- Return type:
bool
- debug_start(deployment_name, alternate_image=None, timeout=800)
This starts the debug mode for the deployment
- Parameters:
deployment_name (str) – Name of the deployment (either Mon or OSD) that you want to be in debug mode
alternate_image (str) – Alternate image that you want to pass
- Returns:
True if debug start is successful
- Return type:
bool
- debug_stop(deployment_name, alternate_image=None, timeout=800)
This stops the debug mode for the deployment
- Parameters:
deployment_name (str) – Name of the deployment to take out of debug mode
alternate_image (str) – Alternate image that you want to pass
- Returns:
True if debug stop is successful
- Return type:
bool
- install_krew()
Install krew
- install_rook_ceph_plugin()
Install rook-ceph plugin
- krew_install_cmd = 'sh /home/docs/checkouts/readthedocs.org/user_builds/ocs-ci/checkouts/latest/ocs_ci/templates/krew_plugin/krew_install.sh'
- rookceph_install_cmd = 'sh /home/docs/checkouts/readthedocs.org/user_builds/ocs-ci/checkouts/latest/ocs_ci/templates/krew_plugin/rookcephplugin_install.sh'
ocs_ci.ocs.cephfs_workload module
- class ocs_ci.ocs.cephfs_workload.LogReaderWriterParallel(project, tmp_path, storage_size=2)
Bases:
object
Write and read logfile stored on cephfs volume, from all worker nodes of a cluster via k8s Deployment, while fetching content of the stored data via oc rsync to check the data locally.
TODO: Update the test after issue https://github.com/red-hat-storage/ocs-ci/issues/5724 is completed.
- fetch_and_validate_data()
While the workload is running, try to validate the data from the cephfs volume of the workload.
- Raises:
NotFoundError – When the given volume is not found in given spec
Exception – When the data verification job failed
- log_reader_writer_parallel()
Write and read logfile stored on cephfs volume, from all worker nodes of a cluster via k8s Deployment.
- Raises:
NotFoundError – When given volume is not found in given spec
UnexpectedBehaviour – When an unexpected problem with starting the workload occurred
ocs_ci.ocs.clients module
- class ocs_ci.ocs.clients.WinNode(**kw)
Bases:
object
- check_disk(number)
- connect_to_target(ip, username, password)
- create_disk(number)
- create_fio_job_options(job_options)
- create_new_target(ip, port=3260)
- delete_target()
- disconnect_from_target()
- get_iscsi_initiator_name()
- run_fio_test()
- start_iscsi_initiator()
- win_exec(ps_command, timeout=180)
ocs_ci.ocs.cluster module
A module for all rook functionalities and abstractions.
This module has rook related classes, support for functionalities to work with rook cluster. This works with assumptions that an OCP cluster is already functional and proper configurations are made for interaction.
- class ocs_ci.ocs.cluster.CephCluster
Bases:
object
Handles all cluster related operations from ceph perspective
This class has depiction of ceph cluster. Contains references to pod objects which represents ceph cluster entities.
- Attributes:
pods (list) – A list of ceph cluster related pods
cluster_name (str) – Name of the ceph cluster
namespace (str) – OpenShift namespace where this cluster lives
- calc_trim_mean_throughput(samples=8)
Calculate the cluster average throughput out of a few samples
- Parameters:
samples (int) – The number of samples to take
- Returns:
The average cluster throughput
- Return type:
float
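A trimmed mean like the one calc_trim_mean_throughput computes drops the extreme samples before averaging, so one pathological reading does not skew the result. A minimal sketch (the 12.5% trim proportion, i.e. one of eight samples from each end, is an illustrative assumption, not necessarily what ocs-ci uses):

```python
def trim_mean(samples, proportion=0.125):
    """Mean of `samples` after dropping `proportion` of the values
    from each end of the sorted list."""
    ordered = sorted(samples)
    cut = int(len(ordered) * proportion)
    kept = ordered[cut:len(ordered) - cut] if cut else ordered
    return sum(kept) / len(kept)
```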
- check_ceph_pool_used_space(cbp_name)
Check the used space of a pool in the cluster
- Returns:
used_in_gb (float): Amount of used space in pool (in GBs)
- Raises:
UnexpectedBehaviour: If used size keeps varying in Ceph status
- cluster_health_check(timeout=None)
Check overall cluster health. Relying on health reported by CephCluster.get()
- Parameters:
timeout (int) – in seconds. By default the timeout value will be scaled based on the number of ceph pods in the cluster. This is just a crude number. It’s been observed that as the number of pods increases, it takes more time for the cluster to reach HEALTH_OK.
- Returns:
True if “HEALTH_OK” else False
- Return type:
bool
- Raises:
CephHealthException – if cluster is not healthy
- property cluster_name
- create_new_blockpool(pool_name)
Create a new RBD pool to use in the tests instead of the default one. The new RBD pool is identical (parameter-wise) to the default RBD pool.
- Parameters:
pool_name (str) – The name of the RBD pool to create
- create_new_filesystem(fs_name)
Create a new filesystem to use in the tests instead of the default one. The new filesystem is identical (parameter-wise) to the default filesystem.
- Parameters:
fs_name (str) – The name of the filesystem to create
- create_user(username, caps)
Create a ceph user in the cluster
- Parameters:
username (str) – ex client.user1
caps (str) – ceph caps ex: mon ‘allow r’ osd ‘allow rw’
- Returns:
return value of get_user_key()
- delete_blockpool(pool_name)
Delete a ceph RBD pool - not the default one - from the cluster
- Parameters:
pool_name (str) – the name of the RBD pool to delete
- delete_filesystem(fs_name='ocs-storagecluster-cephfilesystem')
Delete the ceph filesystem from the cluster, wait until it is recreated, then create the subvolumegroup on it.
- get_admin_key()
- Returns:
base64 encoded key
- Return type:
adminkey (str)
- get_blockpool_status(poolname=None)
Getting the RBD pool status
- Parameters:
poolname (str) – The RBD pool name
- Returns:
True if the RBD pool status is Ready, False otherwise
- Return type:
bool
- get_ceph_capacity()
The function gets the total amount of storage capacity of the OCS cluster. The calculation is <total bytes> / <replica number>; it does not take the currently used capacity into account.
- Returns:
Total storage capacity in GiB (GiB is for development environment)
- Return type:
int
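The capacity calculation described above (<total bytes> / <replica number>, ignoring used capacity) is simple enough to sketch; the helper name is illustrative:

```python
def usable_capacity_gib(total_bytes, replica_count):
    """Usable capacity: raw cluster bytes divided by the replica count,
    expressed in whole GiB. Currently used capacity is deliberately ignored."""
    return int(total_bytes / replica_count / 2**30)
```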
- get_ceph_cluster_iops()
The function gets the IOPS from the ocs cluster
- Returns:
Total IOPS in the cluster
- get_ceph_default_replica()
The function returns the default replica count in the system, taken from ‘ceph status’. In case the parameter is not found, it returns 0.
- Returns:
the default replica count - 0 if not found.
- Return type:
int
- get_ceph_free_capacity()
Function to calculate the free capacity of a cluster
- Returns:
The free capacity of a cluster (in GB)
- Return type:
float
- get_ceph_health(detail=False)
Exec ceph health cmd on tools pod and return the status of the ceph cluster.
- Parameters:
detail (bool) – If True the ‘ceph health detail’ is executed
- Returns:
Output of the ceph health command.
- Return type:
str
- get_ceph_status(format=None)
Exec ceph status cmd on tools pod and return its output.
- Parameters:
format (str) – Format of the output (e.g. json-pretty, json, plain)
- Returns:
Output of the ceph status command.
- Return type:
str
- get_cephfilesystem_status(fsname=None)
Getting the ceph filesystem status
- Parameters:
fsname (str) – The filesystem name
- Returns:
True if the filesystem status is Ready, False otherwise
- Return type:
bool
- get_cluster_throughput()
Function to get the throughput of ocs cluster
- Returns:
The write throughput of the cluster in MiB/s
- Return type:
float
- get_iops_percentage(osd_size=2)
The function calculates the IOPS percentage of the cluster depending on the number of OSDs in the cluster
- Parameters:
osd_size (int) – Size of 1 OSD in Ti
- Returns:
IOPS percentage of the OCS cluster
- get_mons_from_cluster()
Getting the list of mons from the cluster
- Returns:
Returns the mons from the cluster
- Return type:
available_mon (list)
- get_rebalance_status()
This function gets the rebalance status
- Returns:
True if rebalance is completed, False otherwise
- Return type:
bool
- get_throughput_percentage()
Function to get throughput percentage of the ocs cluster
- Returns:
Throughput percentage of the cluster
- get_user_key(user)
- Parameters:
user (str) – ceph username ex: client.user1
- Returns:
base64 encoded user key
- Return type:
key (str)
- is_health_ok()
- Returns:
True if “HEALTH_OK” else False
- Return type:
bool
- property mcg_obj
- mds_change_count(new_count)
Change mds count in the cluster
- Parameters:
new_count (int) – Absolute number of active mdss required
- mds_health_check(count)
MDS health check based on pod count
- Parameters:
count (int) – number of pods expected
- Raises:
MDACountException – if pod count doesn’t match
- mon_change_count(new_count)
Change mon count in the cluster
- Parameters:
new_count (int) – Absolute number of mons required
- mon_health_check(count)
Mon health check based on pod count
- Parameters:
count (int) – Expected number of mon pods
- Raises:
MonCountException – if mon pod count doesn’t match
- property namespace
- noobaa_health_check()
Check Noobaa health
- property pods
- remove_mon_from_cluster()
Removing the mon pod from deployment
- Returns:
True if removal of mon is successful, False otherwise
- Return type:
remove_mon(bool)
- scan_cluster()
Get accurate info on current state of pods
- set_noout()
Set noout flag for maintenance
- set_pgs(poolname, pgs)
Set the PG / PGP / PG_MIN numbers of a pool. If pg_num_min is not set to the pg_num value, the autoscaler will automatically set pg_num to 32 (in case you try to set pg_num > 32).
- Parameters:
poolname (str) – the pool name that needs to be modified
pgs (int) – the new number of PGs
- static set_port(pod)
Set the port attribute on a pod. The port attribute for a mon is required for secrets and is not a member of the original pod class.
- set_target_ratio(poolname, ratio)
Setting the target_size_ratio of a ceph pool
- Parameters:
poolname (str) – the pool name
ratio (float) – the new ratio to set
- time_taken_to_complete_rebalance(timeout=600)
This function calculates the time taken to complete rebalance
- Parameters:
timeout (int) – Time to wait for the completion of rebalance
- Returns:
Time taken in minutes for the completion of rebalance
- Return type:
int
- unset_noout()
Unset noout flag for peering
- wait_for_noobaa_health_ok(tries=60, delay=5)
Wait for Noobaa health to be OK
- wait_for_rebalance(timeout=600, repeat=3)
Wait for re-balance to complete
- Parameters:
timeout (int) – Time to wait for the completion of re-balance
repeat (int) – How many times to repeat the check to make sure it’s really completed.
- Returns:
True if re-balance completed, False otherwise
- Return type:
bool
- class ocs_ci.ocs.cluster.CephClusterExternal
Bases:
CephCluster
Handles all external ceph cluster related functionalities. Assumption: the CephCluster Kind resource exists.
- cluster_health_check(timeout=300)
A comprehensive cluster health check which includes checking pods and external ceph cluster health. Raises exceptions.CephHealthException(“Cluster health is NOT OK”) on failure.
- property cluster_name
- property namespace
- validate_pvc()
Check whether all PVCs are in bound state
- wait_for_cluster_cr()
Wait for the cluster CR to appear; otherwise a ‘list index out of range’ error occurs.
- wait_for_nooba_cr()
- class ocs_ci.ocs.cluster.CephHealthMonitor(ceph_cluster, sleep=5)
Bases:
Thread
Context manager class for monitoring the ceph health status of a CephCluster. If the CephCluster gets to HEALTH_ERROR state, it saves the ceph status to the health_error_status variable and stops monitoring.
- log_error_status()
- run()
Method representing the thread’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
- class ocs_ci.ocs.cluster.LVM(fstrim=False, fail_on_thin_pool_not_empty=False, threading_lock=None)
Bases:
object
Class for an LVM cluster
- check_for_alert(alert_name)
Check to see if a given alert is available
- Parameters:
alert_name (str) – Alert name
- Returns:
True if alert is available else False
- Return type:
bool
- cluster_ip()
Get cluster ip address for ssh connections, sets self.ip
- compare_percent_data_from_pvc(pvc_obj, data_size)
Compare the data percentage from the data sent to the LV against the data measured in lvs
- Parameters:
pvc_obj (PVC, OCS) – pvc, snapshot or restored pvc
data_size (float) – the expected data to have on the pvc
- Raises:
LvDataPercentSizeWrong – if the LV data percent is wrong
- compare_thin_pool_data_percent(data_percent, sampler=True, timeout=10, wait=1, fail=True, diff_allowed=0.5)
Check the thin pool data percent against data_percent
- Parameters:
data_percent (float) – The expected data percent
sampler (bool) – use a sampler for the comparison
timeout (int) – the time the sampler should run
wait (int) – wait between samples
fail (bool) – whether to fail the test or just give a warning
diff_allowed (float) – The difference allowed between the expected and real data percentage
- create_ssh_object()
Get ssh object ready, sets self.node_ssh
- exec_cmd_on_cluster_node(cmd)
Exec cmd on the SNO node with ssh
- Parameters:
cmd (str) – command to send to the server
- Returns:
(str) output from the server
- fstrim()
perform fstrim on all disks
- get_and_parse_lvs()
Get “lvs --reportformat json” from the server and parse some data; sets self.lv_data
- get_and_parse_pvs()
Get “pvs --reportformat json” from the server and parse some data; sets self.pv_data
- get_and_parse_vgs()
Get “vgs --reportformat json” from the server and parse some data; sets self.vg_data
- get_lv_data_percent_of_pvc(pvc_obj)
Get the LV data percent by pvc obj
- Parameters:
pvc_obj (PVC, OCS) – pvc, snapshot or restored pvc obj
- Returns:
data percent of lv under pvc
- Return type:
(str)
- get_lv_name_from_pvc(pvc_obj)
Get the LV name by pvc obj
- Parameters:
pvc_obj (PVC, OCS) – pvc, snapshot or restored pvc obj
- Returns:
lv name under pvc
- Return type:
(str)
- static get_lv_name_from_snapshot(snap_obj)
Get the LV name by snapshot obj
- Parameters:
snap_obj (OCS) – snapshot to find the LV name for
- Returns:
lv name
- Return type:
(str)
- get_lv_size_of_pvc(pvc_obj)
Get the LV size by pvc obj
- Parameters:
pvc_obj (PVC, OCS) – pvc, snapshot or restored pvc obj
- Returns:
size of lv under pvc
- Return type:
(str)
- get_lvm_thin_pool()
Get the thinpool name
- Returns:
(str) thinpool name
- get_lvm_thin_pool_config_overprovision_ratio()
Get overprovisionRatio from lvmcluster
- Returns:
(int) overprovisionRatio
- get_lvm_thin_pool_config_size_percent()
Get sizePercent from lvmcluster
- Returns:
(int) sizePercent
- get_lvm_version()
Get the redhat-operators version (4.10, 4.11)
- Returns:
(str) lvmo version
- get_lvmcluster()
Get OCP object of lvm cluster and sets self.lvmcluster
- get_thin_pool1_data_percent()
Get thin-pool-1 data percent
- Returns:
(str) the data percent
- get_thin_pool1_size()
Get the thin-pool size
- Returns:
(str) thin pool size
- get_thin_pool_metadata()
Get the thin pool metadata percent
- Returns:
(str) metadata percent
- get_thin_provisioning_alerts()
Get the list of alerts that are active in the cluster
- Returns:
alert names
- Return type:
list
- get_vg_free()
gets vg free :returns: (str) vg_free
- get_vg_size()
gets vgs size :returns: (str) vg_size
- init_prom()
- parse_topolvm_metrics(metrics)
Returns the name and value of topolvm metrics
- Parameters:
metrics (list) – metric names to be parsed
- Returns:
topolvm metrics by: names: value
- Return type:
dict
- validate_metrics_vs_operating_system_stats(metric, expected_os_value)
Validate metrics vs operating system stats
- Parameters:
metric (str) – topolvm metric name
expected_os_value (str) – linux “lvs” equivalent value
- Returns:
True if metric equals expected_os_value, False otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.calculate_compression_ratio(pool_name)
Calculate the compression ratio of data on an RBD pool
- Parameters:
pool_name (str) – the name of the pool to calculate the ratio on
- Returns:
the compression ratio in percentage
- Return type:
int
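The pool statistics in `ceph df detail -f json` expose compress_bytes_used (space the compressed data occupies) and compress_under_bytes (original size of the data passed through compression), which is the kind of input such a ratio needs. A hedged sketch of one plausible computation (the exact formula ocs-ci uses is not stated here):

```python
def compression_ratio_percent(pool_stats):
    """Percentage of the compressible data's original size that the
    compressed data actually occupies, from `ceph df detail` pool stats."""
    used = pool_stats["compress_bytes_used"]
    under = pool_stats["compress_under_bytes"]
    return int(used / under * 100)
```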
- ocs_ci.ocs.cluster.change_ceph_backfillfull_ratio(backfillfull_ratio)
Change Ceph Backfillfull Ratio
- Parameters:
backfillfull_ratio (int) – backfillfull_ratio
- ocs_ci.ocs.cluster.change_ceph_full_ratio(full_ratio)
Change Ceph full_ratio
- Parameters:
full_ratio (int) – full_ratio value
- ocs_ci.ocs.cluster.check_ceph_health_after_add_capacity(ceph_health_tries=80, ceph_rebalance_timeout=1800)
Check Ceph health after adding capacity to the cluster
- Parameters:
ceph_health_tries (int) – The number of tries to wait for the Ceph health to be OK.
ceph_rebalance_timeout (int) – The time to wait for the Ceph cluster rebalanced.
- ocs_ci.ocs.cluster.check_ceph_osd_tree()
Checks whether an OSD tree is created/modified correctly. It is a summary of the previous functions: ‘check_osd_tree_1az_vmware’, ‘check_osd_tree_3az_cloud’, ‘check_osd_tree_1az_cloud’.
- Returns:
True, if the ceph osd tree is formed correctly. Else False
- Return type:
bool
- ocs_ci.ocs.cluster.check_ceph_osd_tree_after_node_replacement()
Check the ceph osd tree after the process of node replacement.
- Returns:
True if the ceph osd tree formation is correct, and all the OSD’s are up. Else False
- Return type:
bool
- ocs_ci.ocs.cluster.check_cephcluster_status(desired_phase='Connected', desired_health='HEALTH_OK', name='ocs-external-storagecluster-cephcluster', namespace='openshift-storage-extended')
Check cephcluster health and phase.
- Parameters:
desired_phase (string) – The cephcluster desired phase.
desired_health (string) – The cephcluster desired health.
name (string) – name of the cephcluster.
namespace (string) – namespace of the cephcluster.
- Returns:
True incase cluster is healthy and connected.
- Return type:
bool
- Raises:
CephHealthException – in case the phase or health are not as expected
- ocs_ci.ocs.cluster.check_clusters()
Test if lvm or cephcluster is installed and set config.RUN values for conditions
- ocs_ci.ocs.cluster.check_osd_tree_1az_cloud(osd_tree, number_of_osds)
Checks whether an OSD tree is created/modified correctly. This can be used as a verification step for deployment and cluster expansion tests. This function is specifically for ocs cluster created on 1 AZ config
- Parameters:
osd_tree (dict) – Dictionary of the values which represent ‘osd tree’.
number_of_osds (int) – total number of osds in the cluster
- Returns:
True, if the ceph osd tree is formed correctly. Else False
- Return type:
Boolean
- ocs_ci.ocs.cluster.check_osd_tree_1az_vmware(osd_tree, number_of_osds)
Checks whether an OSD tree is created/modified correctly. This can be used as a verification step for deployment and cluster expansion tests. This function is specifically for ocs cluster created on 1 AZ VMWare setup
- Parameters:
osd_tree (dict) – Dictionary of the values which represent ‘osd tree’.
number_of_osds (int) – total number of osds in the cluster
- Returns:
True, if the ceph osd tree is formed correctly. Else False
- Return type:
bool
- ocs_ci.ocs.cluster.check_osd_tree_1az_vmware_flex(osd_tree, number_of_osds)
Checks whether an OSD tree is created/modified correctly. This can be used as a verification step for deployment and cluster expansion tests. This function is specifically for ocs cluster created on 1 AZ VMWare LSO setup
- Parameters:
osd_tree (dict) – Dictionary of the values which represent ‘osd tree’.
number_of_osds (int) – total number of osds in the cluster
- Returns:
True, if the ceph osd tree is formed correctly. Else False
- Return type:
bool
- ocs_ci.ocs.cluster.check_osd_tree_3az_cloud(osd_tree, number_of_osds)
Checks whether an OSD tree is created/modified correctly. This can be used as a verification step for deployment and cluster expansion tests. This function is specifically for ocs cluster created on 3 AZ config
- Parameters:
osd_tree (dict) – Dictionary of the values which represent ‘osd tree’.
number_of_osds (int) – total number of osds in the cluster
- Returns:
True, if the ceph osd tree is formed correctly. Else False
- Return type:
Boolean
- ocs_ci.ocs.cluster.check_osds_in_hosts_are_up(osd_tree)
Check if all the OSDs are in status ‘up’
- Parameters:
osd_tree (dict) – The ceph osd tree
- Returns:
True if all the OSDs are in status ‘up’, else False
- Return type:
bool
- ocs_ci.ocs.cluster.check_osds_in_hosts_osd_tree(hosts, osd_tree)
Checks if osds are formed correctly after cluster expansion
- Parameters:
hosts (list) – List of hosts
osd_tree (str) – ‘ceph osd tree’ command output
- Returns:
True if osd tree formatted correctly
- Return type:
bool
- ocs_ci.ocs.cluster.check_pool_compression_replica_ceph_level(pool_name, compression, replica)
Validate compression and replica values in ceph level
- Parameters:
pool_name (str) – The pool name to check values.
compression (bool) – True for compression otherwise False.
replica (int) – size of pool to verify.
- Returns:
(bool) True if replica and compression are validated; otherwise raises an Exception.
- ocs_ci.ocs.cluster.client_cluster_health_check()
Check the client cluster health.
The function will check the following:
1. Wait for the cluster connectivity
2. Wait for the nodes to be in a Ready state
3. Check that there are no extra Ceph pods on the cluster
4. Wait for the pods to be running in the cluster namespace
5. Check that the storageclient is connected
- Raises:
ResourceWrongStatusException – In case not all the nodes are ready, not all the pods are running, or the storageclient is not connected
CephHealthException – In case there are extra Ceph pods on the cluster
- ocs_ci.ocs.cluster.count_cluster_osd()
The function returns the number of cluster OSDs
- Returns:
number of OSD pods in current cluster
- Return type:
osd_count (int)
- ocs_ci.ocs.cluster.fetch_connection_scores_for_mon(mon_pod)
This will fetch connection scores for each mon
- Parameters:
mon_pod (Pod) – Pod object for the respective mon pod
- Returns:
String representing the connection score dump for the mon
- Return type:
str
- ocs_ci.ocs.cluster.get_all_pgid()
Get all the pgid’s listed in pgs_brief dump
- Returns:
List of all the pgid’s in pgs_brief dump
- Return type:
list
- ocs_ci.ocs.cluster.get_balancer_eval()
Function to get ceph pg balancer eval value
- Returns:
Eval output of pg balancer
- Return type:
eval_out (float)
- ocs_ci.ocs.cluster.get_byte_used_by_pool(pool_name)
Check byte_used value for specific pool
- Parameters:
pool_name (str) – name of the pool to check
- Returns:
The number of bytes stored in the pool.
- Return type:
int
- ocs_ci.ocs.cluster.get_ceph_df_detail()
Get ceph osd df detail
- Returns:
‘ceph df detail’ command output
- Return type:
dict
- ocs_ci.ocs.cluster.get_ceph_pool_property(pool_name, prop)
The function performs ‘ceph osd pool get’ on a specific property.
- Parameters:
pool_name (str) – The pool name to get the property.
prop (str) – The property to get for example size, compression_mode etc.
- Returns:
(str) The property value as a string; if the property does not exist, None.
- ocs_ci.ocs.cluster.get_child_nodes_osd_tree(node_id, osd_tree)
This function finds the children of a node from the ‘ceph osd tree’ and returns them as list
- Parameters:
node_id (int) – the id of the node for which the children to be retrieved
osd_tree (dict) – dictionary containing the output of ‘ceph osd tree’
- Returns:
List of ‘children’ of the given node_id
- Return type:
list
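To make the lookup concrete, here is a minimal sketch of finding a node's children in a ‘ceph osd tree’ JSON dump. The sample tree and the helper name `get_children` are illustrative, not taken from the ocs_ci implementation:

```python
# Sketch: a 'ceph osd tree --format json' dump has a "nodes" list in which
# each node carries an "id" and, for non-leaf nodes, a "children" id list.

def get_children(node_id, osd_tree):
    """Return the 'children' ids of the node with the given id, or []."""
    for node in osd_tree["nodes"]:
        if node["id"] == node_id:
            return node.get("children", [])
    return []

osd_tree = {
    "nodes": [
        {"id": -1, "name": "default", "type": "root", "children": [-2]},
        {"id": -2, "name": "host-0", "type": "host", "children": [0, 1]},
        {"id": 0, "name": "osd.0", "type": "osd"},
        {"id": 1, "name": "osd.1", "type": "osd"},
    ]
}

print(get_children(-2, osd_tree))  # [0, 1]
```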
- ocs_ci.ocs.cluster.get_full_ratio_from_osd_dump()
Get the full ratio value from osd map
- Returns:
full ratio value
- Return type:
float
- ocs_ci.ocs.cluster.get_lvm_full_version()
Get redhat-operators version (4.11-xxx)
- Returns:
lvmo full version
- Return type:
str
- ocs_ci.ocs.cluster.get_mds_cache_memory_limit()
Get the default value of the MDS cache memory limit
- Returns:
Value of mds cache memory limit
- Return type:
int
- Raises:
UnexpectedBehaviour – If the MDS-a and MDS-b cache memory limits don’t match
IOError – If reading the configuration fails
- ocs_ci.ocs.cluster.get_mds_standby_replay_info()
Return information about the Ceph MDS standby replay node.
- Returns:
A dictionary with information about the standby-replay MDS daemon on success, otherwise None. It includes the following keys: “node_ip” – the IP address of the node running the standby-replay MDS daemon; “mds_daemon” – the name of the MDS daemon; “standby_replay_pod” – the name of the standby-replay pod.
- Return type:
dict
- ocs_ci.ocs.cluster.get_mon_config_value(key)
Gets the default value of a specific ceph monitor config
- Parameters:
key (str) – Configuration key. Ex: mon_max_pg_per_osd
- Returns:
Ceph monitor configuration value
- Return type:
any
- ocs_ci.ocs.cluster.get_mon_quorum_ranks()
This will return the map representing each mon’s quorum ranks
- Returns:
Mon quorum ranks
- Return type:
Dict
- ocs_ci.ocs.cluster.get_nodes_osd_tree(osd_tree, node_ids=None)
This function gets the ‘ceph osd tree’ nodes, which have the ids ‘node_ids’, and returns them as a list. If ‘node_ids’ are not passed, it returns all the ‘ceph osd tree’ nodes.
- Parameters:
osd_tree (dict) – Dictionary containing the output of ‘ceph osd tree’
node_ids (list) – The ids of the nodes to retrieve
- Returns:
The nodes with the given ‘node_ids’. If ‘node_ids’ is not passed, all nodes are returned.
- Return type:
list
- ocs_ci.ocs.cluster.get_osd_dump(pool_name)
Get the osd dump part of a given pool
- Parameters:
pool_name (str) – ceph pool name
- Returns:
pool information from osd dump
- Return type:
dict
- ocs_ci.ocs.cluster.get_osd_pg_log_dups_tracked()
Get the default tracked number of osd pg log dups
- Returns:
Number of default tracked osd pg log dups
- Return type:
int
- ocs_ci.ocs.cluster.get_osd_pods_memory_sum()
Get the sum of the memory of all OSD pods. This is used to determine the size needed for a PVC, so that when IO runs over it, the OSD cache will be filled
- Returns:
The sum of the OSD pods memory in GB
- Return type:
int
- ocs_ci.ocs.cluster.get_osd_utilization()
Get osd utilization value
- Returns:
Dict of osd name and its used value i.e {‘osd.1’: 15.276289408185841, ‘osd.0’: 15.276289408185841, ‘osd.2’: 15.276289408185841}
- Return type:
osd_filled (dict)
- ocs_ci.ocs.cluster.get_percent_used_capacity()
Function to calculate the percentage of used capacity in a cluster
- Returns:
The percentage of the used capacity in the cluster
- Return type:
float
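The used-capacity percentage is a simple ratio over the cluster totals reported by ‘ceph df’. A minimal sketch, assuming the `total_bytes`/`total_used_bytes` field names of recent Ceph JSON output:

```python
# Sketch: percent used = used bytes / total bytes * 100, read from the
# "stats" block of a 'ceph df --format json' style dump (field names assumed).

def percent_used(ceph_df):
    stats = ceph_df["stats"]
    return 100 * stats["total_used_bytes"] / stats["total_bytes"]

ceph_df = {"stats": {"total_bytes": 300 * 2**30, "total_used_bytes": 75 * 2**30}}
print(percent_used(ceph_df))  # 25.0
```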
- ocs_ci.ocs.cluster.get_pg_balancer_status()
Function to check whether pg_balancer is active and its mode is upmap
- Returns:
True if active and upmap is set else False
- Return type:
bool
- ocs_ci.ocs.cluster.get_pgs_brief_dump()
Get pgs_brief dump from ceph pg dump
- Returns:
pgs_brief dump output
- Return type:
dict
- ocs_ci.ocs.cluster.get_pgs_per_osd()
Function to get ceph pg count per OSD
- Returns:
Dict of osd name and its used value i.e {‘osd.0’: 136, ‘osd.2’: 136, ‘osd.1’: 136}
- Return type:
osd_dict (dict)
- ocs_ci.ocs.cluster.get_pool_num(pool_name)
Get the pool number of a given pool (e.g., ocs-storagecluster-cephblockpool -> 2)
- Parameters:
pool_name (str) – ceph pool name
- Returns:
pool number
- Return type:
int
- ocs_ci.ocs.cluster.get_specific_pool_pgid(pool_name)
Get all the pgid’s of a specific pool
- Parameters:
pool_name (str) – ceph pool name
- Returns:
List of all the pgid’s of a given pool
- Return type:
list
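PG ids are strings of the form ‘<pool-number>.<pg-id>’, which is why get_pool_num and get_specific_pool_pgid compose naturally: once the pool number is known, a pool's pgids can be selected by prefix. A small sketch (the helper name and sample pgids are illustrative):

```python
# Sketch: filter the pgids of one pool out of a 'ceph pg dump pgs_brief'
# list by matching the '<pool-number>.' prefix.

def pgids_of_pool(pool_num, all_pgids):
    prefix = f"{pool_num}."
    return [pgid for pgid in all_pgids if pgid.startswith(prefix)]

all_pgids = ["1.0", "1.1", "2.0", "2.1a", "2.1b", "3.0"]
print(pgids_of_pool(2, all_pgids))  # ['2.0', '2.1a', '2.1b']
```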
- ocs_ci.ocs.cluster.is_flexible_scaling_enabled()
Check if flexible scaling is enabled
- Returns:
True if failure domain is “host” and flexible scaling is enabled. False otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_hci_client_cluster()
Check if the cluster is a Fusion HCI Client cluster
- Returns:
True, if the cluster is a Fusion HCI client cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_hci_cluster()
Check if the cluster is an HCI provider or client cluster
- Returns:
True, if the cluster is an HCI cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_hci_provider_cluster()
Check if the cluster is a Fusion HCI provider cluster
- Returns:
True, if the cluster is a Fusion HCI provider cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_lso_cluster()
Check if the cluster is an LSO cluster
- Returns:
True, if the cluster is an LSO cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_managed_service_cluster()
Check if the cluster is a managed service cluster
- Returns:
True, if the cluster is a managed service cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_ms_consumer_cluster()
Check if the cluster is a managed service consumer cluster
- Returns:
True, if the cluster is a managed service consumer cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_ms_provider_cluster()
Check if the cluster is a managed service provider cluster
- Returns:
True, if the cluster is a managed service provider cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.is_vsphere_ipi_cluster()
Check if the cluster is a vSphere IPI cluster
- Returns:
True, if the cluster is a vSphere IPI cluster. False, otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.set_osd_op_complaint_time(osd_op_complaint_time_val: float) dict
Set osd_op_complaint_time to the given value
- Parameters:
osd_op_complaint_time_val (float) – Value in seconds to set osd_op_complaint_time to
- Returns:
output of the command
- Return type:
dict
- ocs_ci.ocs.cluster.silence_ceph_osd_crash_warning(osd_pod_name)
Silence the osd crash warning of a specific osd pod
- Parameters:
osd_pod_name (str) – The name of the osd pod whose crash warning should be silenced
- Returns:
True if it found the osd crash with name ‘osd_pod_name’. False otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.validate_claim_name_match_pvc(pvc_names, validated_pods=None)
Validate that OCS pods have matching PVC and claim names
- Parameters:
pvc_names (list) – names of all PVCs you would like to validate with.
validated_pods (set) – set used to store already-validated pods. Passing in a set from outside this function speeds up re-tries, since pods added to the set by a previous run are skipped.
- Raises:
AssertionError – when a claim name does not match any of the PVC names.
- ocs_ci.ocs.cluster.validate_cluster_on_pvc()
Validate creation of PVCs for MON and OSD pods. Also validate that those PVCs are attached to the OCS pods
- Raises:
AssertionError – If PVC is not mounted on one or more OCS pods
- ocs_ci.ocs.cluster.validate_compression(pool_name)
Check if data was compressed
- Parameters:
pool_name (str) – name of the pool to check
- Returns:
True if compression works, False otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.validate_existence_of_blocking_pdb()
Validate creation of PDBs for OSDs. 1. Versions earlier than ocs-operator.v4.6.2 have a PDB for each OSD. 2. Versions greater than or equal to ocs-operator.v4.6.2-233.ci have one collective PDB for the OSDs, e.g. rook-ceph-osd
- Returns:
True if blocking PDBs are present, false otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.validate_ocs_pods_on_pvc(pods, pvc_names, pvc_label=None)
Validate that each OCS pod has a PVC. The check looks for a PVC whose name matches the pod, e.g. the PVC rook-ceph-mon-a for the pod rook-ceph-mon-a-56f67f5968-6j4px.
- Parameters:
pods (list) – OCS pod names
pvc_names (list) – names of all PVCs
pvc_label (str) – label of the PVC name for the pod. If None, the PVC is verified based on the pod name.
- Raises:
AssertionError – If no PVC is found for one of the pods
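The name-based matching described above can be sketched in a few lines; the helper name `pod_has_pvc` and the sample names are illustrative:

```python
# Sketch: a pod such as 'rook-ceph-mon-a-56f67f5968-6j4px' is considered
# backed by a PVC whose name ('rook-ceph-mon-a') is a prefix of the pod name.

def pod_has_pvc(pod_name, pvc_names):
    return any(pod_name.startswith(pvc) for pvc in pvc_names)

pvcs = ["rook-ceph-mon-a", "rook-ceph-mon-b"]
print(pod_has_pvc("rook-ceph-mon-a-56f67f5968-6j4px", pvcs))  # True
print(pod_has_pvc("rook-ceph-osd-0-abc", pvcs))  # False
```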
- ocs_ci.ocs.cluster.validate_osd_utilization(osd_used=80)
Validates osd utilization matches osd_used value
- Parameters:
osd_used (int) – osd used value
- Returns:
True if all OSD utilization values are equal to or greater than osd_used, False otherwise.
- Return type:
bool
- ocs_ci.ocs.cluster.validate_pdb_creation()
Validate creation of PDBs for MON, MDS and OSD pods.
- Raises:
AssertionError – If required PDBs were not created.
- ocs_ci.ocs.cluster.validate_pg_balancer()
Validate whether data is equally distributed across the OSDs
- Returns:
True if avg PG’s per osd difference is <=10 else False
- Return type:
bool
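The balance criterion above treats data as evenly distributed when each OSD's PG count stays within 10 of the average. A minimal sketch of that check (the helper name and sample counts are illustrative):

```python
# Sketch: the cluster counts as balanced when every per-OSD PG count
# deviates from the average by at most max_diff (10, per the docstring above).

def pgs_balanced(pgs_per_osd, max_diff=10):
    avg = sum(pgs_per_osd.values()) / len(pgs_per_osd)
    return all(abs(count - avg) <= max_diff for count in pgs_per_osd.values())

print(pgs_balanced({"osd.0": 136, "osd.1": 136, "osd.2": 136}))  # True
print(pgs_balanced({"osd.0": 100, "osd.1": 180, "osd.2": 140}))  # False
```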
- ocs_ci.ocs.cluster.validate_replica_data(pool_name, replica)
Check if data is replica 2 or 3
- Parameters:
replica (int) – size of the replica(2,3)
pool_name (str) – name of the pool to check replica
- Returns:
True if the replicated data size meets the replica configuration, False otherwise
- Return type:
bool
- ocs_ci.ocs.cluster.wait_for_silence_ceph_osd_crash_warning(osd_pod_name, timeout=900)
Wait for ‘timeout’ seconds to check for the ceph osd crash warning, and silence it.
- Parameters:
osd_pod_name (str) – The name of the osd pod whose crash warning should be silenced
timeout (int) – time in seconds to wait for the osd crash warning before silencing it
- Returns:
True if it found the osd crash with name ‘osd_pod_name’. False otherwise
- Return type:
bool
ocs_ci.ocs.cluster_load module
A module for cluster load related functionalities
- class ocs_ci.ocs.cluster_load.ClusterLoad(project_factory=None, pvc_factory=None, sa_factory=None, pod_factory=None, target_percentage=None, threading_lock=None)
Bases:
object
A class for cluster load functionalities
- adjust_load_if_needed()
Dynamically adjust the IO load based on the cluster latency. In case the latency goes beyond 250 ms, start deleting FIO pods. Once latency drops back below 100 ms, re-create the FIO pods to make sure that cluster load is around the target percentage
- calc_trim_metric_mean(metric, samples=5, mute_logs=False)
Get the trimmed mean of a given metric
- Parameters:
metric (str) – The metric to calculate the average result for
samples (int) – The number of samples to take
mute_logs (bool) – True for muting the logs, False otherwise
- Returns:
The average result for the metric
- Return type:
float
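A trimmed mean drops the extreme samples before averaging, which damps one-off latency spikes in the Prometheus readings. A minimal sketch of the statistic itself (the real method additionally queries Prometheus for each sample):

```python
# Sketch: drop the lowest and highest sample, then average the rest.

def trimmed_mean(samples):
    if len(samples) <= 2:
        return sum(samples) / len(samples)
    trimmed = sorted(samples)[1:-1]
    return sum(trimmed) / len(trimmed)

# One 95 ms spike among ~12 ms readings barely moves the trimmed mean:
print(round(trimmed_mean([12.0, 11.0, 95.0, 13.0, 12.0]), 2))  # 12.33
```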
- decrease_load(wait=True)
Delete DeploymentConfig with its pods and the PVC. Then, wait for the IO to be stopped
- Parameters:
wait (bool) – True for waiting for IO to drop after the deletion of the FIO pod, False otherwise
- get_query(query, mute_logs=False)
Get query from Prometheus and parse it
- Parameters:
query (str) – Query to be done
mute_logs (bool) – True for muting the logs, False otherwise
- Returns:
the query result
- Return type:
float
- increase_load(rate, wait=True)
Create a PVC, a service account and a DeploymentConfig of FIO pod
- Parameters:
rate (str) – FIO ‘rate’ value (e.g. ‘20M’)
wait (bool) – True for waiting for IO to kick in on the newly created pod, False otherwise
- increase_load_and_print_data(rate, wait=True)
Increase load and print data
- Parameters:
rate (str) – FIO ‘rate’ value (e.g. ‘20M’)
wait (bool) – True for waiting for IO to kick in on the newly created pod, False otherwise
- print_metrics(mute_logs=False)
Print metrics
- Parameters:
mute_logs (bool) – True for muting the Prometheus logs, False otherwise
- reach_cluster_load_percentage()
Reach the cluster limit and then drop to the given target percentage. The number of pods needed for the desired target percentage is determined by creating pods one by one while examining the cluster latency. Once the latency exceeds 250 ms and grows exponentially, the cluster limit has been reached. Then, drop to the target percentage by deleting all pods and re-creating them with a smaller FIO ‘rate’ value. This leaves the number of pods needed running IO to keep the cluster load around the desired percentage.
- reduce_load(pause=True)
Pause the cluster load
- resume_load()
Resume the cluster load
- ocs_ci.ocs.cluster_load.wrap_msg(msg)
Wrap a log message with ‘=’ marks. Necessary for making cluster load background logging distinguishable
- Parameters:
msg (str) – The log message to wrap
- Returns:
The wrapped log message
- Return type:
str
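The wrapping is just a pair of ‘=’ borders around the message; a minimal sketch (the exact width and padding used by ocs_ci may differ):

```python
# Sketch: surround a log message with '=' borders so background
# cluster-load log lines stand out from the rest of the test log.

def wrap_msg(msg, width=40):
    border = "=" * width
    return f"\n{border}\n{msg}\n{border}"

print(wrap_msg("Cluster load: increasing FIO rate"))
```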
ocs_ci.ocs.constants module
Constants module.
This module contains any values that are widely used across the framework, utilities, or tests that will predominantly remain unchanged.
In the event values here have to be changed it should be under careful review and with consideration of the entire project.
ocs_ci.ocs.cosbench module
- class ocs_ci.ocs.cosbench.Cosbench
Bases:
object
Cosbench S3 benchmark tool
- cleanup()
Cosbench cleanup
- cosbench_full()
Run full Cosbench workload
- static generate_container_stage_config(selector, start_container, end_container)
Generates container config which creates buckets in bulk
- Parameters:
selector (str) – The way object is accessed/selected. u=uniform, r=range, s=sequential.
start_container (int) – Start of containers
end_container (int) – End of containers
- Returns:
Container and object configuration
- Return type:
(str)
- static generate_stage_config(selector, start_container, end_container, start_objects, end_object)
Generates config which is used in stage creation
- Parameters:
selector (str) – The way object is accessed/selected. u=uniform, r=range, s=sequential.
start_container (int) – Start of containers
end_container (int) – End of containers
start_objects (int) – Start of objects
end_object (int) – End of objects
- Returns:
Container and object configuration
- Return type:
(str)
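These helpers emit Cosbench selector expressions, where something like ‘r(1,10)’ picks containers or objects by range, ‘u’ uniformly at random, and ‘s’ sequentially. A sketch of the assembled string (the exact assembly here is an assumption, not the ocs_ci implementation):

```python
# Sketch: build a Cosbench-style stage config string such as
# 'containers=s(1,10);objects=s(1,100)' from a selector and ranges.

def stage_config(selector, start_container, end_container, start_object, end_object):
    return (
        f"containers={selector}({start_container},{end_container});"
        f"objects={selector}({start_object},{end_object})"
    )

print(stage_config("s", 1, 10, 1, 100))  # containers=s(1,10);objects=s(1,100)
```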
- get_performance_result(workload_name, workload_id, size)
- get_result_csv(workload_id, workload_name)
Gets cosbench workload result csv
- Parameters:
workload_id (str) – ID of cosbench workload
workload_name (str) – Name of the workload
- Returns:
Absolute path of the result csv
- Return type:
str
- run_cleanup_workload(prefix, containers, objects, start_container=None, start_object=None, sleep=15, timeout=300, validate=True)
Deletes specific objects and containers in bulk.
- Parameters:
prefix (str) – Prefix of bucket name.
containers (int) – Number of containers/buckets to be created.
objects (int) – Number of objects to be created on each bucket.
start_container (int) – Start of containers. Default: 1.
start_object (int) – Start of objects. Default: 1.
sleep (int) – Sleep in seconds.
timeout (int) – Timeout in seconds.
validate (bool) – Validates whether cleanup and dispose is completed.
- Returns:
Workload xml and its name
- Return type:
Tuple[str, str]
- run_init_workload(prefix, containers, objects, start_container=None, start_object=None, size=64, size_unit='KB', sleep=15, timeout=300, validate=True)
Creates specific containers and objects in bulk
- Parameters:
prefix (str) – Prefix of bucket name.
containers (int) – Number of containers/buckets to be created.
objects (int) – Number of objects to be created on each bucket.
start_container (int) – Start of containers. Default: 1.
start_object (int) – Start of objects. Default: 1.
size (int) – Size of each object.
size_unit (str) – Object size unit (B/KB/MB/GB)
sleep (int) – Sleep in seconds.
timeout (int) – Timeout in seconds.
validate (bool) – Validates whether init and prepare is completed.
- Returns:
Workload xml and its name
- Return type:
Tuple[str, str]
- run_main_workload(operation_type, prefix, containers, objects, workers=4, selector='s', start_container=None, start_object=None, size=64, size_unit='KB', sleep=15, timeout=300, extend_objects=None, validate=True, result=True)
Creates and runs main Cosbench workload.
- Parameters:
operation_type (dict) – Cosbench operation and its ratio. Operation (str): Supported ops are read, write, list and delete. Ratio (int): Percentage of each operation. Should add up to 100.
workers (int) – Number of users to perform operations.
containers (int) – Number of containers/buckets to be created.
objects (int) – Number of objects to be created on each bucket.
selector (str) – The way object is accessed/selected. u=uniform, r=range, s=sequential.
prefix (str) – Prefix of bucket name.
start_container (int) – Start of containers. Default: 1.
start_object (int) – Start of objects. Default: 1.
size (int) – Size of each object.
size_unit (str) – Object size unit (B/KB/MB/GB)
sleep (int) – Sleep in seconds
timeout (int) – Timeout in seconds
validate (bool) – Validates whether each stage is completed
extend_objects (int) – Extends the total number of objects to prevent overlap. Use only for Write and Delete operations.
result (bool) – Get performance results when running workload is completed.
- Returns:
Workload xml and its name
- Return type:
Tuple[str, str]
- setup_cosbench()
Sets up the Cosbench namespace, configmap and pod
- submit_workload(workload_path)
Submits Cosbench xml to initiate workload
- Parameters:
workload_path (str) – Absolute path of xml to submit
- validate_workload(workload_id, workload_name)
Validates each stage of cosbench workload
- Parameters:
workload_id (str) – ID of cosbench workload
workload_name (str) – Name of the workload
- Raises:
UnexpectedBehaviour – When workload csv is incorrect/malformed.
- wait_for_workload(workload_id, sleep=1, timeout=60)
Waits for the cosbench workload to complete
- Parameters:
workload_id (str) – ID of cosbench workload
sleep (int) – sleep in seconds between checks
timeout (int) – timeout in seconds to wait for workload completion
- Returns:
Whether cosbench workload processed successfully
- Return type:
bool
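The wait is a standard poll-until-deadline loop. A generic sketch, where the `get_state` callable and the "completed" state stand in for the real Cosbench status query:

```python
import time

# Sketch: poll the workload state every 'sleep' seconds until it reports
# success or 'timeout' seconds elapse; return whether it completed in time.

def wait_for(get_state, sleep=1, timeout=60):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "completed":
            return True
        time.sleep(sleep)
    return False

states = iter(["running", "running", "completed"])
print(wait_for(lambda: next(states), sleep=0))  # True
```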
ocs_ci.ocs.couchbase module
Couchbase workload class
- class ocs_ci.ocs.couchbase.CouchBase(**kwargs)
Bases:
PillowFight
CouchBase workload operation
- analyze_run(skip_analyze=False)
Analyzing the workload run logs
- Parameters:
skip_analyze (bool) – Option to skip logs analysis
- cleanup()
Cleaning up the resources created during Couchbase deployment
- couchbase_full()
Run full CouchBase workload
- couchbase_operatorgroup()
Creates an operator group for Couchbase
- couchbase_subscription()
Creates subscription for Couchbase operator
- create_cb_cluster(replicas=1, sc_name=None, image=None)
Deploy a Couchbase server using Couchbase operator
Once the Couchbase operator is running, wait for the worker pods to come up. Once the Couchbase worker pods are up, the pillowfight task is started.
After the pillowfight task has finished, the log is collected and analyzed.
- Raises:
Exception – If pillowfight results indicate that a minimum performance level is not reached (1 second response time, less than 1000 ops per second)
- create_cb_secrets()
Create secrets for running Couchbase workers
- create_data_buckets()
Create data buckets
- create_namespace(namespace)
Create a namespace for Couchbase
- Parameters:
namespace (str) – Namespace for deploying couchbase pods
- get_couchbase_csv()
Get the Couchbase CSV object
- Returns:
Couchbase CSV object
- Return type:
- Raises:
CSVNotFound – In case no CSV found.
- get_couchbase_nodes()
Get nodes that contain a couchbase app pod
- Returns:
List of nodes
- Return type:
list
- respin_couchbase_app_pod()
Respin the couchbase app pod
- Returns:
pod status
- run_workload(replicas, num_items=None, num_threads=None, num_of_cycles=None, run_in_bg=False)
Running workload with pillow fight operator
- Parameters:
replicas (int) – Number of pods
num_items (int) – Number of items to be loaded to the cluster
num_threads (int) – Number of threads
num_of_cycles (int) – Specify the number of times the workload should cycle
run_in_bg (bool) – Optional run IOs in background
- wait_for_pillowfights_to_complete(timeout=3600)
Wait for the pillowfight workload to complete
ocs_ci.ocs.defaults module
Defaults module. All the defaults used by OSCCI framework should reside in this module. PYTEST_DONT_REWRITE - avoid pytest to rewrite, keep this msg here please!
ocs_ci.ocs.disruptive_operations module
- ocs_ci.ocs.disruptive_operations.get_ocs_operator_node_name()
Get the name of the node running the ocs-operator pod
- Returns:
name of the node running the ocs-operator pod
- Return type:
str
- ocs_ci.ocs.disruptive_operations.osd_node_reboot()
Reboot a worker node that is running an OSD
- Raises:
AssertionError – in case the ceph-tools pod was not recovered
- ocs_ci.ocs.disruptive_operations.worker_node_shutdown(abrupt)
Shutdown a worker node that is running the ocs-operator pod
- Parameters:
abrupt (bool) – True for an abrupt shutdown, False for a permanent shutdown
- Raises:
AssertionError – in case the ceph-tools pod was not recovered
ocs_ci.ocs.elasticsearch module
Deploying an Elasticsearch server for collecting logs from benchmark-operator (ripsaw) benchmarks.
Interface for the Performance ElasticSearch server
- class ocs_ci.ocs.elasticsearch.ElasticSearch(**kwargs)
Bases:
object
ElasticSearch Environment
- cleanup()
Cleanup the environment from all Elasticsearch components.
- dumping_all_data(target_path)
Dump all data from the internal ES server to a .tgz file.
- Parameters:
target_path (str) – the path where the results file will be copied into
- Returns:
True if the dump operation succeeded and the results data was returned to the host, False otherwise
- Return type:
bool
- get_health()
Return the health status of the Elasticsearch cluster.
- Returns:
True if the status is green (OK), False otherwise
- Return type:
bool
- get_indices()
Get a list of all indices in the ES server. All of them were created by the test; the ES installation came without any pre-installed indices.
- Returns:
list of all indices defined in the ES server
- Return type:
list
- get_ip()
Return the IP address of the Elasticsearch cluster. This IP is for use inside the OCP cluster.
- Returns:
str : String that represents the IP address.
- get_password()
Return the password used to connect to Elasticsearch.
- Returns:
The password as text
- Return type:
str
- get_port()
Return the port of the Elasticsearch cluster.
- Returns:
str : String that represents the port.
- get_scheme()
Return the scheme of the Elasticsearch cluster.
- Returns:
str : String that represents the scheme (http or https).
- ocs_ci.ocs.elasticsearch.elasticsearch_load(connection, target_path)
Load all data from target_path/results into an elasticsearch (es) server.
- Parameters:
connection (obj) – an elasticsearch connection object
target_path (str) – the path where data was dumped into
- Returns:
True if loading data succeed, False otherwise
- Return type:
bool
ocs_ci.ocs.exceptions module
- exception ocs_ci.ocs.exceptions.ACMClusterDeployException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ACMClusterDestroyException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ACMClusterImportException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.AlertingError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ArchitectureNotSupported
Bases:
Exception
- exception ocs_ci.ocs.exceptions.AuthError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.BenchmarkTestFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CPUNotSufficientException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CSVNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CephHealthException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CephToolBoxNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ChannelNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ClassCreationException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ClientDownloadError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ClusterNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ClusterNotInSTSModeException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CommandFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ConfigurationError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ConnectivityFail
Bases:
Exception
- exception ocs_ci.ocs.exceptions.CredReqSecretNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.DRPrimaryNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.DeploymentPlatformNotSupported
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ElasticSearchNotDeployed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterCephSSHAuthDetailsMissing
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterCephfsMissing
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterDetailsException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterExporterRunFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterNodeRoleNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterObjectStoreUserCreationFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterRGWAdminOpsUserException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterRGWEndPointMissing
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ExternalClusterRGWEndPointPortMissing
Bases:
Exception
- exception ocs_ci.ocs.exceptions.FailedToAddNodeException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.FailedToDeleteInstance
Bases:
Exception
- exception ocs_ci.ocs.exceptions.FailedToRemoveNodeException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.FipsNotInstalledException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.FlexyDataNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.HPCSDeploymentError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.HostValidationFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.HyperConvergedHealthException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.IPAMAssignUpdateFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.IPAMReleaseUpdateFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ImageIsNotDeletedOrNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.IncorrectUiOptionRequested(text, func=None)
Bases:
Exception
- exception ocs_ci.ocs.exceptions.InteractivePromptException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.InvalidStatusCode
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMIPDeploymentError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMIPOperationError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMSConnectionDetailsError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMSNotSupported
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMSResourceCleaneupError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.KMSTokenError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.LVMOHealthException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.LeftoversExistError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.LvDataPercentSizeWrong
Bases:
Exception
- exception ocs_ci.ocs.exceptions.LvSizeWrong
Bases:
Exception
- exception ocs_ci.ocs.exceptions.LvThinUtilNotChanged
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MDRDeploymentException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MDSCountException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ManagedServiceAddonDeploymentError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ManagedServiceSecurityGroupNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.Md5CheckFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MemoryNotSufficientException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MissingDecoratorError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MissingRequiredConfigKeyError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MonCountException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.MultiStorageClusterExternalCephHealth
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoBucketPolicyResponse
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoInstallPlanForApproveFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoRunningCephToolBoxException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoThreadingLockUsedError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NodeHasNoAttachedVolume
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NodeNotFoundError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NonUpgradedImagesFoundError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoobaaCliChecksumFailedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoobaaConditionException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NoobaaHealthException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotAllNodesCreated
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotAllPodsHaveSameImagesError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotFoundError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotSupportedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotSupportedFunctionError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.NotSupportedProxyConfiguration
Bases:
Exception
- exception ocs_ci.ocs.exceptions.OCSWorkerScaleFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.OSDScaleFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ObjectsStillBeingDeletedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.OpenShiftAPIResponseException(response)
Bases:
Exception
- exception ocs_ci.ocs.exceptions.OpenshiftConsoleSuiteNotDefined
Bases:
Exception
- exception ocs_ci.ocs.exceptions.OperationFailedToCompleteException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PDBNotCreatedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PSIVolumeCreationFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PSIVolumeDeletionFailed
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PSIVolumeNotInExpectedState
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PVCNotCreated
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PVNotSufficientException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PageNotLoaded
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PassThroughEnabledDeviceNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PendingCSRException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PerformanceException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PodNotCreated
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolCephValueNotMatch
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolCompressionWrong
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolDataNotErased
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolDidNotReachReadyState
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolNotCompressedAsExpected
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolNotDeleted
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolNotDeletedFromUI
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolNotReplicatedAsNeeded
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolSizeWrong
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PoolStateIsUnknow
Bases:
Exception
- exception ocs_ci.ocs.exceptions.PvcNotDeleted
Bases:
Exception
- exception ocs_ci.ocs.exceptions.RBDSideCarContainerException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.RDMDiskNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.RDRDeploymentException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ROSAProdAdminLoginFailedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.RebootEventNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourceLeftoversException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourceNameNotSpecifiedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourceNotDeleted
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourceNotFoundError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourcePoolNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ResourceWrongStatusException(resource_or_name, describe_out=None, column=None, expected=None, got=None)
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ReturnedEmptyResponseException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.RhcosImageNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.SameNameClusterAlreadyExistsException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.SameNamePrefixClusterAlreadyExistsException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.StorageClassNotDeletedFromUI
Bases:
Exception
- exception ocs_ci.ocs.exceptions.StorageNotSufficientException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.StorageSizeNotReflectedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.StorageclassIsNotDeleted
Bases:
Exception
- exception ocs_ci.ocs.exceptions.StorageclassNotCreated
Bases:
Exception
- exception ocs_ci.ocs.exceptions.TagNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.TemplateNotFound
Bases:
Exception
- exception ocs_ci.ocs.exceptions.TerrafromFileNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.ThinPoolUtilityWrong
Bases:
Exception
- exception ocs_ci.ocs.exceptions.TimeoutException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.TimeoutExpiredError(value, custom_message=None)
Bases:
Exception
- message = 'Timed Out'
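A minimal sketch of how this exception is typically used in a polling loop. The TimeoutExpiredError class below is a self-contained stand-in mirroring the documented signature and message attribute (the real class lives in ocs_ci.ocs.exceptions); the wait_for helper is hypothetical.

```python
import time


class TimeoutExpiredError(Exception):
    """Stand-in mirroring the documented signature (value, custom_message=None)."""

    message = "Timed Out"

    def __init__(self, value, custom_message=None):
        self.value = value
        self.custom_message = custom_message

    def __str__(self):
        msg = f"{self.message}: {self.value}"
        if self.custom_message:
            msg += f" ({self.custom_message})"
        return msg


def wait_for(condition, timeout, interval=0.01):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutExpiredError(timeout, custom_message="condition never became true")
```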
- exception ocs_ci.ocs.exceptions.UnableUpgradeConnectionException
Bases:
Exception
Bases:
Exception
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedBehaviour
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedDeploymentConfiguration
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedImage
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedInput
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedODFAccessException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnexpectedVolumeType
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnhealthyBucket
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnknownCloneTypeException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnknownOperationForTerraformVariableUpdate
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedBrowser
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedFeatureError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedOSType
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedPlatformError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedPlatformVersionError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UnsupportedWorkloadError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.UsernameNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.VMMaxDisksReachedException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.VSLMNotFoundException
Bases:
Exception
- exception ocs_ci.ocs.exceptions.VaultDeploymentError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.VaultOperationError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.VolumesExistError
Bases:
Exception
- exception ocs_ci.ocs.exceptions.WrongVersionExpression
Bases:
ValueError
- exception ocs_ci.ocs.exceptions.ZombieProcessFoundException
Bases:
Exception
ocs_ci.ocs.external_ceph module
- class ocs_ci.ocs.external_ceph.Ceph(name='ceph', node_list=None)
Bases:
object
- property ceph_demon_stat
Retrieves expected numbers for demons of each role
- Returns:
Ceph demon stats
- Return type:
dict
- get_ceph_demons(role=None)
Get Ceph demons list
- Returns:
list of CephDemon
- Return type:
list
- get_ceph_object(role, order_id=0)
Returns single ceph object. If order id is provided returns that occurrence from results list, otherwise returns first occurrence
- Parameters:
role (str) – Ceph object’s role
order_id (int) – order number of the ceph object
- Returns:
ceph object
- Return type:
- get_ceph_objects(role=None)
Get Ceph Object by role. Returns all objects if role is not defined. Ceph object can be Ceph demon, client, installer or generic entity. Pool role is never assigned to Ceph object and means that node has no Ceph objects
- Parameters:
role (str) – Ceph object’s role as str
- Returns:
ceph objects
- Return type:
list
- get_metadata_list(role, client=None)
Returns metadata for demons of specified role
- Parameters:
role (str) – ceph demon role
client (CephObject) – Client with keyring and ceph-common
- Returns:
metadata as json object representation
- Return type:
list
- get_node_by_hostname(hostname)
Returns Ceph node by its hostname
- Parameters:
hostname (str) – hostname
- Returns:
ceph node object
- Return type:
- get_nodes(role=None, ignore=None)
Get node(s) by role. Return all nodes if role is not defined
- Parameters:
role (str, RolesContainer) – node’s role. Takes precedence over ignore
ignore (str, RolesContainer) – node’s role to ignore from the list
- Returns:
nodes
- Return type:
list
- get_osd_by_id(osd_id, client=None)
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common:
- Returns:
- daemon object, or None in case the osd daemon list is empty
- Return type:
- get_osd_container_name_by_id(osd_id, client=None)
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common:
- get_osd_data_partition(osd_id, client=None)
Returns data partition by given osd id
- Parameters:
osd_id (int) – osd id
client (CephObject) – client, optional
- Returns:
data path
- Return type:
str
- get_osd_data_partition_path(osd_id, client=None)
Returns data partition path by given osd id
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common:
- Returns:
data partition path
- Return type:
str
- get_osd_device(osd_id, client=None)
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common:
- Returns:
osd device
- Return type:
str
- get_osd_metadata(osd_id, client=None)
Returns metadata for an OSD with the given id
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common
- Returns:
osd metadata like:
{
    "id": 8,
    "arch": "x86_64",
    "back_addr": "172.16.115.29:6801/1672",
    "back_iface": "eth0",
    "backend_filestore_dev_node": "vdd",
    "backend_filestore_partition_path": "/dev/vdd1",
    "ceph_version": "ceph version 12.2.5-42.el7cp (82d52d7efa6edec70f6a0fc306f40b89265535fb) luminous (stable)",
    "cpu": "Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz",
    "osd_data": "/var/lib/ceph/osd/ceph-8",
    "osd_journal": "/var/lib/ceph/osd/ceph-8/journal",
    "osd_objectstore": "filestore",
    "rotational": "1"
}
- Return type:
dict
- get_osd_service_name(osd_id, client=None)
- Parameters:
osd_id (int) – osd id
client (CephObject) – Client with keyring and ceph-common:
- Returns:
service name
- Return type:
str
- role_to_node_mapping()
Given a role we should be able to get the corresponding CephNode object
- class ocs_ci.ocs.external_ceph.CephClient(role, node)
Bases:
CephObject
- class ocs_ci.ocs.external_ceph.CephDemon(role, node)
Bases:
CephObject
- ceph_demon_by_container_name(container_name)
- property container_name
- property container_prefix
- exec_command(cmd, **kw)
Proxy to node’s exec_command with wrapper to run commands inside the container for containerized demons
- Parameters:
cmd (str) – command to execute
**kw – options
- Returns:
node’s exec_command result
- class ocs_ci.ocs.external_ceph.CephNode(**kw)
Bases:
object
- connect(timeout=300)
Connect to the ceph instance using the paramiko SSH protocol, e.g. self.connect(). Sets up TCP keepalive with max retries for the active connection, and sets hostname and shortname as attributes for tests to query.
- create_ceph_object(role)
Create ceph object on the node
- Parameters:
role (str) – ceph object role
- Returns:
created ceph object
- Return type:
- exec_command(**kw)
Execute a command on the VM, e.g. self.exec_cmd(cmd='uptime') or self.exec_cmd(cmd='background_cmd', check_ec=False)
- check_ec
False will run the command and not wait for exit code
- Type:
bool
- get_allocated_volumes()
- get_ceph_demons(role=None)
Get Ceph demons list. Only active (those which will be part of the cluster) demons are shown.
- Returns:
list of CephDemon
- Return type:
list
- get_ceph_objects(role=None)
Get Ceph objects list on the node
- Parameters:
role (str) – Ceph object role
- Returns:
ceph objects
- Return type:
list
- get_free_volumes()
- obtain_root_permissions(path)
Transfer ownership of root to current user for the path given. Recursive.
- Parameters:
path (str) – file path
- reconnect()
- remove_ceph_object(ceph_object)
Removes ceph object from the node
- Parameters:
ceph_object (CephObject) – ceph object to remove
- property role
- search_ethernet_interface(ceph_node_list)
Search for an interface on the given node which makes every node in the cluster accessible by its shortname.
- Parameters:
ceph_node_list (list) – list of CephNode
- Returns:
returns None if no suitable interface found
- Return type:
eth_interface (str)
- set_eth_interface(eth_interface)
set the eth interface
- set_internal_ip()
set the internal ip of the vm which differs from floating ip
- write_file(**kw)
- class ocs_ci.ocs.external_ceph.CephObject(role, node)
Bases:
object
- exec_command(cmd, **kw)
Proxy to node’s exec_command
- Parameters:
cmd (str) – command to execute
**kw – options
- Returns:
node’s exec_command result
- property pkg_type
- write_file(**kw)
Proxy to node’s write file
- Parameters:
**kw – options
- Returns:
node’s write_file result
- class ocs_ci.ocs.external_ceph.CephObjectFactory(node)
Bases:
object
- CLIENT_ROLES = ['client']
- DEMON_ROLES = ['mon', 'osd', 'mgr', 'rgw', 'mds', 'nfs']
- create_ceph_object(role)
Create an appropriate Ceph object by role
- Parameters:
role – role string
- Returns:
Ceph object based on role
- class ocs_ci.ocs.external_ceph.CephOsd(node, device=None)
Bases:
CephDemon
- property container_name
- property is_active
- class ocs_ci.ocs.external_ceph.NodeVolume(status)
Bases:
object
- ALLOCATED = 'allocated'
- FREE = 'free'
- class ocs_ci.ocs.external_ceph.RolesContainer(role='pool')
Bases:
object
Container for single or multiple node roles. Can be used as iterable or with equality ‘==’ operator to check if role is present for the node. Note that ‘==’ operator will behave the same way as ‘in’ operator i.e. check that value is present in the role list.
- append(object)
- clear()
- equals(other)
- extend(iterable)
- remove(object)
- update_role(roles_list)
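The note that the ‘==’ operator behaves like ‘in’ can be illustrated with a simplified, self-contained re-implementation (a sketch of the documented semantics, not the actual class):

```python
class RolesContainer:
    """Simplified sketch: equality checks membership in the role list."""

    def __init__(self, role="pool"):
        # Accept a single role string or a list of roles
        self.role_list = role if isinstance(role, list) else [role]

    def __eq__(self, role):
        # '==' behaves like 'in': true if the role is present in the list
        return role in self.role_list

    def __ne__(self, role):
        return not self.__eq__(role)

    def __iter__(self):
        return iter(self.role_list)

    def append(self, role):
        self.role_list.append(role)
```

With this semantics, `RolesContainer(['mon', 'osd']) == 'mon'` is true even though the container holds multiple roles.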
ocs_ci.ocs.fio_artefacts module
- ocs_ci.ocs.fio_artefacts.get_configmap_dict()
ConfigMap template for fio workloads. Note that you need to add actual configuration to workload.fio file.
- Returns:
YAML data for a OCP ConfigMap object
- Return type:
dict
- ocs_ci.ocs.fio_artefacts.get_job_dict()
Job template for fio workloads.
- Returns:
YAML data for a job object
- Return type:
dict
- ocs_ci.ocs.fio_artefacts.get_mcg_conf(mcg_obj, workload_bucket, custom_options=None)
Basic fio configuration for upgrade utilization for NooBaa S3 bucket.
- Parameters:
mcg_obj (obj) – MCG object, it can be found among fixtures
workload_bucket (obj) – MCG bucket
custom_options (dict) – Dictionary of lists containing tuples with additional configuration for fio in format: {‘section’: [(‘option’, ‘value’),…],…} e.g. {‘global’:[(‘name’,’bucketname’)],’create’:[(‘time_based’,’1’),(‘runtime’,’48h’)]} Those values can be added to the config or rewrite already existing values
- Returns:
updated fio configuration
- Return type:
str
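The custom_options structure maps naturally onto an INI-style fio config. The helper below is a hypothetical sketch (not the actual implementation) of how such a dictionary could be merged into a config, adding new options and overwriting existing ones:

```python
import configparser
import io


def apply_custom_options(fio_conf, custom_options):
    """Merge {'section': [('option', 'value'), ...]} into an fio INI config.

    Existing values are overwritten; missing sections and options are added.
    """
    parser = configparser.ConfigParser()
    parser.read_string(fio_conf)
    for section, options in (custom_options or {}).items():
        if not parser.has_section(section):
            parser.add_section(section)
        for option, value in options:
            parser.set(section, option, value)
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()
```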
- ocs_ci.ocs.fio_artefacts.get_pvc_dict()
PVC template for fio workloads. Note that all ‘None’ values needs to be defined before usage.
- Returns:
YAML data for a PVC object
- Return type:
dict
ocs_ci.ocs.fiojob module
This module contains functions which implements functionality necessary to run
general fio workloads as k8s Jobs in OCP/OCS cluster via workload
fixtures (see ocs_ci.utility.workloadfixture
).
- ocs_ci.ocs.fiojob.delete_fio_data(fio_job_file, delete_check_func)
Delete fio data by removing the fio job resource, with a wait to make sure data was reclaimed at the ceph level.
- ocs_ci.ocs.fiojob.fio_to_dict(fio_output)
Parse fio output and provide the parsed dict as a result.
- ocs_ci.ocs.fiojob.get_ceph_storage_stats(ceph_pool_name)
Get ceph storage utilization values from ceph df: the total STORED value and the MAX AVAIL of the given ceph pool, which show how much space is already consumed and how much is still available.
- Parameters:
ceph_pool_name (str) – name of ceph pool where you want to write data
- Returns:
int: sum of all ceph pool STORED values (Bytes) int: value of MAX AVAIL value of given ceph pool (Bytes)
- Return type:
tuple
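A sketch of how the two documented values could be extracted from parsed ceph df -f json output. The 'stored' and 'max_avail' field names follow recent Ceph releases, but verify them against your Ceph version; the stats here are passed in as a dict so the example is self-contained:

```python
def get_ceph_storage_stats(ceph_df, ceph_pool_name):
    """Return (sum of STORED over all pools, MAX AVAIL of the given pool),
    both in bytes, from parsed `ceph df -f json` output."""
    total_stored = sum(pool["stats"]["stored"] for pool in ceph_df["pools"])
    max_avail = next(
        pool["stats"]["max_avail"]
        for pool in ceph_df["pools"]
        if pool["name"] == ceph_pool_name
    )
    return total_stored, max_avail
```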
- ocs_ci.ocs.fiojob.get_pool_name(fixture_name)
Return ceph pool name based on fixture name suffix.
- ocs_ci.ocs.fiojob.get_sc_name(fixture_name)
Return storage class name based on fixture name suffix.
- ocs_ci.ocs.fiojob.get_storageutilization_size(target_percentage, ceph_pool_name)
For the purpose of the workload storage utilization fixtures, get expected pvc_size based on STORED and MAX AVAIL values (as reported by ceph df) for given ceph pool and target utilization percentage.
This is only approximate, and it won’t work, e.g., if pools have different replication configurations.
- Parameters:
target_percentage (float) – target total utilization, eg. 0.5 for 50%
ceph_pool_name (str) – name of ceph pool where you want to write data
- Returns:
pvc_size for storage utilization job (in GiB, rounded)
- Return type:
int
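The arithmetic behind this can be sketched as follows. Assuming total usable capacity is approximately STORED + MAX AVAIL, the amount still to write is target * capacity - stored. This stand-in takes the stats as arguments instead of querying ceph df, so it is self-contained and only illustrative:

```python
def get_storageutilization_size(target_percentage, stored_bytes, max_avail_bytes):
    """Approximate PVC size (GiB, rounded) needed to push total utilization
    to target_percentage, given current STORED and MAX AVAIL values in bytes."""
    capacity = stored_bytes + max_avail_bytes
    to_write = target_percentage * capacity - stored_bytes
    if to_write <= 0:
        # Target utilization already reached; the real fixture skips the test
        return 0
    return round(to_write / 2**30)
```

For example, with 20 GiB stored and 80 GiB available, reaching 50% utilization requires writing 30 GiB.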
- ocs_ci.ocs.fiojob.get_timeout(fio_min_mbps, pvc_size)
Compute how long we will let the job running while writing data to the volume.
- Parameters:
fio_min_mbps (int) – minimal write speed in MiB/s
pvc_size (int) – size of PVC in GiB, which will be used to writing
- Returns:
write_timeout in seconds
- Return type:
int
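A plausible implementation of this computation (an illustration, not necessarily the actual formula): writing pvc_size GiB at a minimal speed of fio_min_mbps MiB/s takes at most pvc_size * 1024 / fio_min_mbps seconds.

```python
def get_timeout(fio_min_mbps, pvc_size):
    """Upper bound in seconds for writing pvc_size GiB of data at the
    minimal expected write speed fio_min_mbps MiB/s (1 GiB = 1024 MiB)."""
    return int((pvc_size * 1024) / fio_min_mbps)
```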
- ocs_ci.ocs.fiojob.wait_for_job_completion(namespace, timeout, error_msg)
This is a WORKAROUND of particular ocsci design choices: I just wait for one pod in the namespace, and then ask for the pod again to get its name (but it would be much better to just wait for the job to finish instead, then ask for the name of the successful pod and use it to get logs…)
- Returns:
name of Pod resource of the finished job
- Return type:
str
- ocs_ci.ocs.fiojob.workload_fio_storageutilization(fixture_name, project, fio_pvc_dict, fio_job_dict, fio_configmap_dict, measurement_dir, tmp_path, target_percentage=None, target_size=None, with_checksum=False, keep_fio_data=False, minimal_time=480, throw_skip=True, threading_lock=None)
This function implements core functionality of fio storage utilization workload fixtures. This is necessary because we can’t parametrize single general fixture over multiple parameters (it would mess with test case id and polarion test case tracking).
It works as a workload fixture, as understood by the ocs_ci.utility.workloadfixture module.
When target_percentage is specified, the goal of the fixture is to fill whatever is left so that total cluster utilization reaches the target percentage. In this mode, the amount of data written depends on both total capacity and current utilization. If the current storage utilization already exceeds the target, the test is skipped.
On the other hand, with target_size you can specify the size of the data written by fio directly.
- Parameters:
fixture_name (str) – name of the fixture using this function (for logging and k8s object labeling purposes)
project (ocs_ci.ocs.ocp.OCP) – OCP object of the project in which the Job is deployed, as created by the project_factory or project fixture
fio_pvc_dict (dict) – PVC k8s struct for fio target volume
fio_job_dict (dict) – Job k8s struct for fio job
fio_configmap_dict (dict) – configmap k8s struct with fio config file
measurement_dir (str) – reference to a fixture which represents a directory where measurement results are stored, see also ocs_ci.utility.workloadfixture.measure_operation()
tmp_path (pathlib.PosixPath) – reference to the pytest tmp_path fixture
target_percentage (float) – target utilization as percentage wrt all usable OCS space, e.g. 0.50 means a request to reach 50% of total OCS storage utilization (wrt usable space)
target_size (int) – target size of the PVC for fio to use, e.g. 10 means a request for fio to write 10GiB of data
with_checksum (bool) – if true, a sha1 checksum of the data written by fio is stored on the volume, and the reclaim policy of the volume is changed to Retain so that the volume is not removed during test teardown for later verification runs
keep_fio_data (bool) – If true, keep the fio data after the fio storage utilization is completed. Else if false, delete the fio data.
minimal_time (int) – Minimal number of seconds to monitor a system. (See more details in the function ‘measure_operation’)
throw_skip (bool) – if True the function will raise pytest.skip.Exception and the test will be skipped, otherwise return None
threading_lock (threading.RLock) – lock to be used for thread synchronization when calling ‘oc’ cmd
- Returns:
- measurement results with timestamps and other metadata
- Return type:
dict
- ocs_ci.ocs.fiojob.write_data_via_fio(fio_job_file, write_timeout, pvc_size, target_percentage)
Write data via fio Job (specified in tf tmp file) to reach the desired utilization level, and keep this level for minimal_time seconds.
ocs_ci.ocs.flowtest module
- class ocs_ci.ocs.flowtest.BackgroundOps
Bases:
object
- handler(func, *args, **kwargs)
Wraps the function to run specific iterations
- Returns:
True if function runs successfully
- Return type:
bool
- wait_for_bg_operations(bg_ops, timeout=1200)
Waits for threads to be completed
- Parameters:
bg_ops (list) – Futures
timeout (int) – Time in seconds to wait
- class ocs_ci.ocs.flowtest.FlowOperations
Bases:
object
Flow based operations class
- add_capacity_entry_criteria()
Entry criteria verification function for add capacity operation
- Returns:
containing the params used in add capacity exit operation
- Return type:
tuple
- add_capacity_exit_criteria(restart_count_before, osd_pods_before)
Exit criteria function for Add capacity operation
- Parameters:
restart_count_before (dict) – Restart counts of pods
osd_pods_before (list) – List of OSD pods before
- node_operations_entry_criteria(node_type, number_of_nodes, operation_name='Node Operation', network_fail_time=None)
Entry criteria function for node related operations
- Parameters:
node_type (str) – Type of node
number_of_nodes (int) – Number of nodes
operation_name (str) – Name of the node operation
network_fail_time (int) – Total time to fail the network in a node
- Returns:
containing the params used in Node operations
- Return type:
tuple
- validate_cluster(cluster_check=False, node_status=False, pod_status=False, operation_name='')
Validates various ceph and ocs cluster checks
- Parameters:
node_status (bool) – Verifies node is Ready
pod_status (bool) – Verifies StorageCluster pods in expected state
operation_name (str) – Name of the operation, to Tag
ocs_ci.ocs.fusion module
- ocs_ci.ocs.fusion.create_fusion_monitoring_resources()
Create resources used for Managed Fusion aaS Monitoring
- ocs_ci.ocs.fusion.deploy_odf()
Create openshift-storage namespace and deploy managedFusionOffering CR there.
ocs_ci.ocs.hsbench module
- class ocs_ci.ocs.hsbench.HsBench
Bases:
object
Hotsauce S3 benchmark
- cleanup(timeout=600)
Clear all objects in the associated bucket. Clean up deployment config, pvc, pod and test user.
- create_resource_hsbench()
- Create resources for hsbench benchmark test:
Create service account, create PVC, create golang pod
- create_test_user()
Create a radosgw test user for S3 access
- delete_bucket(bucket_name=None)
Delete bucket
- Parameters:
bucket_name (str) – Name of bucket
- delete_objects_in_bucket(bucket_name=None)
Delete objects in a bucket
- Parameters:
bucket_name (str) – Name of bucket
- delete_test_user()
Delete the RGW test user and the bucket belonging to the test user
- install_hsbench(timeout=4200)
Install HotSauce S3 benchmark: https://github.com/markhpc/hsbench
- run_benchmark(num_obj=None, run_mode=None, timeout=None, access_key=None, secret_key=None, end_point=None, num_bucket=None, object_size=None, bucket_prefix=None, result=None, validate=True)
Running Hotsauce S3 benchmark Usage detail can be found at: https://github.com/markhpc/hsbench
- Parameters:
num_obj (int) – Maximum number of objects
run_mode (string) – mode types
timeout (int) – timeout in seconds
access_key (string) – Access Key credential
secret_key (string) – Secret Key credential
end_point (string) – S3 end_point
num_bucket (int) – Number of bucket(s)
object_size (int) – Size of object
bucket_prefix (str) – Prefix for buckets
result (str) – Write CSV output to this file
validate (Boolean) – Validates whether running workload is completed.
- validate_hsbench_put_get_list_objects(result=None, num_objs=None, put=None, get=None, list_obj=None)
Validate PUT, GET, LIST objects from previous hsbench operation
- Parameters:
result (str) – Result file name
num_objs (str) – Number of objects to validate
put (Boolean) – Validate PUT operation
get (Boolean) – Validate GET operation
list_obj (Boolean) – Validate LIST operation
- validate_hsbench_workload(result=None)
Validate if workload was running on the app-pod
- Raises:
UnexpectedBehaviour – if result.csv file doesn’t contain output data.
- validate_reshard_process()
Validate reshard process
- Raises:
CommandFailed – If reshard process fails
- validate_s3_objects(upgrade=None)
Validate S3 objects using ‘radosgw-admin’ on a single bucket. Validate objects in buckets after a completed upgrade.
- Parameters:
upgrade (str) – Upgrade status
- Raises:
UnexpectedBehaviour – If objects pre-upgrade and post-upgrade are not identical.
ocs_ci.ocs.jenkins module
Jenkins Class to run jenkins specific tests
- class ocs_ci.ocs.jenkins.Jenkins(num_of_projects=1, num_of_builds=1)
Bases:
object
Workload operation using Jenkins
- Parameters:
projects (iterable) – project names
num_of_builds (int) – number of builds per project
- cleanup()
Clean up
- create_app_jenkins()
create application jenkins
- create_jenkins_build_config()
create jenkins build config
- create_jenkins_pvc()
create jenkins pvc
- Returns:
pvc_objs
- Return type:
List
- create_ocs_jenkins_template()
Create OCS Jenkins Template
- create_project_names()
Create project names
- export_builds_results_to_googlesheet(sheet_name='E2E Workloads', sheet_index=3)
Collect builds results, output to google spreadsheet
- Parameters:
sheet_name (str) – Name of the sheet
sheet_index (int) – Index of sheet
- get_build_duration_time(namespace, build_name)
get build duration time
- Parameters:
namespace (str) – get build in namespace
build_name (str) – the name of the jenkins build
- get_build_name_by_pattern(pattern='client', namespace=None, filter=None)
Get build name by pattern
- Returns:
build name
- Return type:
list
- get_builds_obj(namespace)
Get all jenkins builds
- Returns:
jenkins deploy pod objects list
- Return type:
List
- get_builds_sorted_by_number(project)
Get builds per project and sort builds by build name number
- Parameters:
project (str) – project name
- Returns:
List of Jenkins build OCS obj
- Return type:
List
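Plain string sorting would place build 10 before build 2 (since '10' < '2' lexicographically); sorting by the trailing build number avoids that. A hypothetical helper illustrating the idea:

```python
def sort_builds_by_number(build_names):
    """Sort build names like 'project-2', 'project-10' by their trailing
    number, so 10 sorts after 2 (a plain string sort would not)."""
    return sorted(build_names, key=lambda name: int(name.rsplit("-", 1)[-1]))
```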
- get_jenkins_deploy_pods(namespace)
Get all jenkins deploy pods
- Parameters:
namespace (str) – get pods in namespace
- Returns:
jenkins deploy pod objects list
- Return type:
pod_objs (list)
- get_node_name_where_jenkins_pod_not_hosted(node_type='worker', num_of_nodes=1)
get nodes
- Parameters:
node_type (str) – The node type (e.g. worker, master)
num_of_nodes (int) – The number of nodes to be returned
- Returns:
List of compute node names
- Return type:
list
- property number_builds_per_project
- property number_projects
- print_completed_builds_results()
Get build logs and print them in a table
- start_build()
Start build on jenkins
- wait_for_build_to_complete(timeout=1200)
Wait for build status to reach complete state
- Parameters:
timeout (int) – Time in seconds to wait
- wait_for_jenkins_deploy_status(status, timeout=600)
Wait for jenkins deploy pods status to reach running/completed
- Parameters:
status (str) – status to reach Running or Completed
timeout (int) – Time in seconds to wait
ocs_ci.ocs.longevity module
- class ocs_ci.ocs.longevity.Longevity
Bases:
object
This class consists of the library functions and params required for Longevity testing
- cluster_sanity_check(cluster_health=True, db_usage=True, resource_utilization=True, disk_utilization=True)
Cluster sanity checks
- Parameters:
cluster_health (bool) – Checks the cluster health if set to True
db_usage (bool) – Get the mon and noobaa db usage if set to True
resource_utilization (bool) – Get the Memory, CPU utilization of nodes and pods if set to True
disk_utilization (bool) – Get the osd and total cluster disk utilization if set to True
- Returns:
Returns cluster sanity checks outputs in a nested dictionary
- Return type:
cluster_sanity_check_dict (dict)
- collect_cluster_sanity_checks_outputs(dir_name=None)
Collect cluster sanity checks outputs and store the outputs in ocs-ci log directory
- Parameters:
dir_name (str) – By default the cluster sanity check outputs are stored in the ocs-ci-log_path/cluster_sanity_outputs directory. When a dir_name input is provided, a new directory with that name gets created (ocs-ci-log_path/cluster_sanity_outputs/<dir_name>) and the outputs get collected inside it.
- construct_stage_builder_bulk_pod_creation_yaml(pvc_list, namespace)
This function constructs bulk pod.yamls to create bulk pods using kube_jobs.
- Parameters:
pvc_list (list) – List of PVCs
namespace (str) – Namespace where the resource has to be created
- Returns:
List of all Pod.yaml dict list
- Return type:
pods_dict_list (list)
- construct_stage_builder_bulk_pvc_creation_yaml(num_of_pvcs, pvc_size)
This function constructs pvc.yamls to create bulk of pvc’s using kube_jobs. It constructs yamls with the specified number of PVCs of each supported type and access mode.
Eg: num_of_pvcs = 30. The function creates a total of 120 PVCs of the below types and access modes: RBD-Filesystemvolume -> 30 RWO PVCs; CEPHFS -> 30 RWO PVCs, 30 RWX PVCs; RBD-Block -> 30 RWX PVCs.
- Parameters:
num_of_pvcs (int) – Bulk PVC count
pvc_size (str) – size of all pvcs to be created with Gi suffix (e.g. 10Gi). If None, pvcs of random size will be created.
- Returns:
List of all PVC.yaml dicts
- Return type:
all_pvc_dict_list (list)
- construct_stage_builder_kube_job(obj_dict_list, namespace, kube_job_name='job_profile')
This function constructs kube object config file for the kube object
- Parameters:
obj_dict_list (list) – List of dictionaries with kube objects, i.e. the return value of construct_stage_builder_bulk_pvc_creation_yaml()
namespace (str) – Namespace where the object has to be deployed
name (str) – Name of this object config file
- Returns:
List of all PVC.yaml dicts
- Return type:
pvc_dict_list (list)
- create_stage_builder_kube_job(kube_job_obj_list, namespace)
Create kube jobs
- Parameters:
kube_job_obj_list (list) – List of kube jobs
namespace (str) – Namespace where the job has to be created
- create_stagebuilder_all_pvc_types(num_of_pvc, namespace, pvc_size, kube_job_name='all_pvc_job_profile')
Create stagebuilder PVCs with all supported PVC types and access modes
- Parameters:
num_of_pvc (int) – Bulk PVC count
namespace (str) – Namespace where the Kube job/PODs are to be created
pvc_size (str) – size of all pvcs to be created with Gi suffix (e.g. 10Gi). If None, pvcs of random size will be created.
- Returns:
List of all PVC.yaml dicts
- Return type:
pvc_job_file_list (list)
- Raises:
AssertionError – If not all PVCs reached Bound state
- create_stagebuilder_obc(num_of_obcs, namespace, sc_name='openshift-storage.noobaa.io', obc_kube_job_name='obc_job_profile')
Create stagebuilder OBC
It first constructs bulk obc.yamls with the specified number of OBCs and then creates bulk obc’s using the kube_jobs.
- Parameters:
namespace (str) – Namespace used to create the bulk of obcs
sc_name (str) – storage class name used for obc creation; by default uses the Noobaa storage class ‘openshift-storage.noobaa.io’
num_of_obcs (str) – Bulk obc count
- Returns:
List of all OBC.yaml dicts
- Return type:
obc_job_file (list)
- create_stagebuilder_pods_with_all_pvc_types(num_of_pvc, namespace, pvc_size, pvc_kube_job_name='all_pvc_for_pod_attach_job_profile', pod_kube_job_name='all_pods_job_profile')
Create stagebuilder pods with all supported PVC types and access modes
It first constructs bulk pvc.yamls with the specified number of PVCs of each supported type, access modes and then creates bulk pvc’s using the kube_jobs. Once all the PVCs in the kube_jobs reaches BOUND state it then constructs bulk pod.yamls for each of these PVCs using kube_job.
Eg: num_of_pvc = 30. The function creates a total of 120 PVCs of the below types and access modes: RBD-Filesystemvolume -> 30 RWO PVCs; CEPHFS -> 30 RWO PVCs, 30 RWX PVCs; RBD-Block -> 30 RWX PVCs. It then creates pods for each of these PVCs, so it will create 150 PODs.
- Parameters:
num_of_pvc (int) – Bulk PVC count
namespace (str) – Namespace where the Kube job/PVCs/PODs are to be created
pvc_size (str) – size of all pvcs to be created with Gi suffix (e.g. 10Gi). If None, pvcs of random size will be created.
- Returns:
List of all POD.yaml and PVC.yaml dicts
- Return type:
pod_pvc_job_file_list (list)
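The totals in the example follow directly from the documented breakdown: four PVC groups of num_of_pvc each (so 120 PVCs for 30) and, per the documented example, five times num_of_pvc pods (150 for 30). This small sketch simply encodes those documented totals:

```python
def stagebuilder_totals(num_of_pvc):
    """Totals implied by the documented example (num_of_pvc=30 -> 120 PVCs,
    150 pods): four PVC groups (RBD-Filesystem RWO, CephFS RWO, CephFS RWX,
    RBD-Block RWX) and five pod groups."""
    return {"pvcs": 4 * num_of_pvc, "pods": 5 * num_of_pvc}
```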
- delete_stage_builder_kube_job(kube_job_obj_list, namespace)
Delete the stage builder kube jobs
- Parameters:
kube_job_obj_list (list) – List of kube jobs to delete
namespace (str) – Namespace where the job is created
- get_pvc_bound_list(pvc_job_file_list, namespace, pvc_count)
Get the pvcs which are in Bound state from the given pvc job file list
- Parameters:
pvc_job_file_list (list) – List of all PVC.yaml dicts
namespace (str) – Namespace where the resource has to be created
pvc_count (int) – Bulk PVC count; If not specified the count will be fetched from the kube job pvc yaml dict
- Returns:
List of all PVCs in Bound state
- Return type:
list
- Raises:
AssertionError – If not all PVCs reached Bound state
- get_resource_yaml_dict_from_kube_job_obj(kube_job_obj, namespace, resource_name='PVC')
Get the resource (PVC/POD) yaml dict from the kube job object
- Parameters:
kube_job_obj (obj) – Kube job object
namespace (str) – Namespace where the job is created
- Returns:
List of all resource yaml dicts
- Return type:
res_yaml_dict (list)
- longevity_all_stages(project_factory, start_apps_workload, multi_pvc_pod_lifecycle_factory, multi_obc_lifecycle_factory, pod_factory, multi_pvc_clone_factory, multi_snapshot_factory, snapshot_restore_factory, teardown_factory, apps_run_time=540, stage_run_time=180, concurrent=False)
Calling this function runs all the stages i.e Stage1, Stage2, Stage3, Stage4 of the ODF Longevity testing.
- Parameters:
project_factory – Fixture to create a new Project.
start_apps_workload – Application workload fixture which reads the list of app workloads to run and starts running those iterating over the workloads in the list for a specified duration
multi_pvc_pod_lifecycle_factory – Fixture to create/delete multiple pvcs and pods, verify FIO and measure pvc creation/deletion time and pod attach time
multi_obc_lifecycle_factory – Fixture to create/delete multiple obcs and measure their creation/deletion time.
pod_factory – Fixture to create new PODs.
multi_pvc_clone_factory – Fixture to create a clone from each PVC in the provided list of PVCs.
multi_snapshot_factory – Fixture to create a VolumeSnapshot of each PVC in the provided list of PVCs.
snapshot_restore_factory – Fixture to create new PVCs out of the VolumeSnapshots provided.
teardown_factory – Fixture to tear down a resource that was created during the test.
apps_run_time (int) – start_apps_workload fixture run time in minutes
stage_run_time (int) – Stage2, Stage3, Stage4 run time in minutes
concurrent (bool) – If set to True, Stages 2, 3 and 4 get executed concurrently; set to False by default
- stage_0(num_of_pvc, num_of_obc, pvc_size, namespace=None, ignore_teardown=True)
This function creates the initial soft configuration required to start longevity testing
- Parameters:
num_of_pvc (int) – Bulk PVC count
num_of_obc (int) – Bulk OBC count
namespace (str) – Namespace where the Kube job/PVCs/PODs/OBCs are to be created
pvc_size (str) – Size of all PVCs to be created with Gi suffix (e.g. 10Gi). If None, PVCs of random sizes will be created
ignore_teardown (bool) – Set it to True to skip deleting the created resources
- Returns:
List of all PVC, POD, OBC yaml dicts
- Return type:
kube_job_file_list (list)
- stage_2(project_factory, multi_pvc_pod_lifecycle_factory, multi_obc_lifecycle_factory, num_of_pvcs=100, pvc_size=2, num_of_obcs=20, run_time=1440, measure=True, delay=600)
Function to handle automation of Longevity Stage 2 Sequential Steps i.e. Creation / Deletion of PVCs, PODs and OBCs and measurement of creation / deletion times of the mentioned resources.
- Parameters:
project_factory – Fixture to create a new Project.
multi_pvc_pod_lifecycle_factory – Fixture to create/delete multiple pvcs and pods and measure pvc creation/deletion time and pod attach time.
multi_obc_lifecycle_factory – Fixture to create/delete multiple obcs and measure their creation/deletion time.
num_of_pvcs (int) – Total Number of PVCs / PODs we want to create.
pvc_size (int) – Size of each PVC in GB.
num_of_obcs (int) – Number of OBCs we want to create of each type. (Total OBCs = num_of_obcs * 5)
run_time (int) – Total Run Time in minutes.
measure (bool) – True if we want to measure the performance metrics, False otherwise.
delay (int) – Delay time (in seconds) between sequential and bulk operations as well as between cycles.
- stage_3(project_factory, num_of_pvc=150, num_of_obc=150, pvc_size=None, delay=60, run_time=1440)
- Concurrent bulk operations of the following:
PVC creation - all supported types (RBD, CephFS, RBD-block)
PVC deletion - all supported types (RBD, CephFS, RBD-block)
OBC creation
OBC deletion
APP pod creation - all supported types (RBD, CephFS, RBD-block)
APP pod deletion - all supported types (RBD, CephFS, RBD-block)
- Parameters:
project_factory – Fixture to create a new Project.
num_of_pvc (int) – Bulk PVC count
num_of_obc (int) – Bulk OBC count
pvc_size (str) – Size of all PVCs to be created with Gi suffix (e.g. 10Gi). If None, PVCs of random sizes will be created
delay (int) – Delay in seconds before starting the next cycle
run_time (int) – The amount of time the particular stage has to run (in minutes)
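The concurrent bulk operations that stage_3 describes can be sketched as below. This is a minimal illustration, not ocs-ci's actual implementation; the callables stand in for the real PVC/OBC/pod creation and deletion helpers.

```python
import threading

def run_bulk_ops_concurrently(operations):
    """Run each bulk operation in its own thread and wait for all to finish.

    ``operations`` is a list of zero-argument callables standing in for
    bulk PVC/OBC/app-pod creation and deletion (illustrative only).
    """
    threads = [threading.Thread(target=op) for op in operations]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Illustrative usage with stand-in callables recording their execution:
results = []
run_bulk_ops_concurrently([
    lambda: results.append("pvc_create"),
    lambda: results.append("obc_create"),
    lambda: results.append("pod_create"),
])
```

In the real stage, each thread would run one bulk operation (e.g. RBD PVC creation) while the others proceed in parallel, and the cycle repeats until run_time elapses.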
- stage_4(project_factory, multi_pvc_pod_lifecycle_factory, pod_factory, multi_pvc_clone_factory, multi_snapshot_factory, snapshot_restore_factory, teardown_factory, num_of_pvcs=30, fio_percentage=25, pvc_size=2, run_time=180, pvc_size_new=4)
- Function to handle automation of Longevity Stage 4, i.e.:
Creation / Deletion of PODs and PVCs of different types, filling data up to fio_percentage (default is 25%) of the mount point space.
Creation / Deletion of Clones of the given PVCs.
Creation / Deletion of VolumeSnapshots of the given PVCs.
Restore the created VolumeSnapshots into a new set of PVCs.
Expansion of size of the original PVCs.
- Parameters:
project_factory – Fixture to create a new Project.
multi_pvc_pod_lifecycle_factory – Fixture to create/delete multiple pvcs and pods, verify FIO and measure pvc creation/deletion time and pod attach time
pod_factory – Fixture to create new PODs.
multi_pvc_clone_factory – Fixture to create a clone from each PVC in the provided list of PVCs.
multi_snapshot_factory – Fixture to create a VolumeSnapshot of each PVC in the provided list of PVCs.
snapshot_restore_factory – Fixture to create new PVCs out of the VolumeSnapshots provided.
teardown_factory – Fixture to tear down a resource that was created during the test.
num_of_pvcs (int) – Total Number of PVCs we want to create for each operation (clone, snapshot, expand).
fio_percentage (float) – Percentage of PVC space we want to be utilized for FIO.
pvc_size (int) – Size of each PVC in GB.
run_time (int) – Total Run Time in minutes.
pvc_size_new (int) – Size of the expanded PVC in GB.
- validate_obcs_in_kube_job_reached_running_state(kube_job_obj, namespace, num_of_obc)
Validate that OBCs in the kube job list reached BOUND state
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of the OBCs created
num_of_obc (int) – Bulk OBC count; if not specified, the count will be fetched from the kube job OBC yaml dict
- Returns:
List of all OBCs which are in Bound state.
- Return type:
obc_bound_list (list)
- Raises:
AssertionError – If not all OBCs reached the Bound state
- validate_pods_in_kube_job_reached_running_state(kube_job_obj, namespace, pod_count=None, timeout=60)
Validate PODs in the kube job list reached RUNNING state
- Parameters:
kube_job_obj (obj) – Kube job object
namespace (str) – Namespace where the Kube job/PVCs are created
pod_count (int) – Bulk PODs count; if not specified, the count will be fetched from the kube job pod yaml dict
- Returns:
List of all PODs in RUNNING state
- Return type:
running_pods_list (list)
- Raises:
AssertionError – If not all PODs reached the Running state
- validate_pvc_in_kube_job_reached_bound_state(kube_job_obj_list, namespace, pvc_count, timeout=60)
Validate PVCs in the kube job list reached BOUND state
- Parameters:
kube_job_obj_list (list) – List of Kube job objects
namespace (str) – Namespace where the Kube job/PVCs are created
pvc_count (int) – Bulk PVC count; if not specified, the count will be fetched from the kube job PVC yaml dict
- Returns:
List of all PVCs in Bound state
- Return type:
pvc_bound_list (list)
- Raises:
AssertionError – If not all PVCs reached the Bound state
- ocs_ci.ocs.longevity.start_app_workload(request, workloads_list=None, run_time=10, run_in_bg=True, delay=600)
This function reads the list of app workloads to run and starts running those iterating over the workload in the list for a specified duration
Usage: start_app_workload(workloads_list=[‘pgsql’, ‘couchbase’, ‘cosbench’], run_time=60, run_in_bg=True)
- Parameters:
workloads_list (list) – The list of app workloads to run
run_time (int) – The amount of time the workloads should run (in minutes)
run_in_bg (bool) – Runs the workload in background starting a thread
delay (int) – Delay in seconds before starting the next cycle
- Raises:
UnsupportedWorkloadError – When the workload is not found in the supported_app_workloads list
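The iterate-workloads-for-a-duration behaviour described above can be sketched as follows. This is a simplified stand-in, not ocs-ci's implementation: the callables replace real app workloads such as ‘pgsql’ or ‘cosbench’, and times are in seconds for brevity.

```python
import threading
import time

def start_app_workload_sketch(workloads, run_time_s, run_in_bg=False, delay=0.0):
    """Cycle over the workload list until run_time_s elapses.

    Each workload is a zero-argument callable standing in for starting an
    app workload. With run_in_bg=True the loop runs in a daemon thread,
    mirroring the run_in_bg behaviour documented above.
    """
    def _runner():
        deadline = time.monotonic() + run_time_s
        while time.monotonic() < deadline:
            for workload in workloads:
                workload()          # start/run one workload from the list
            time.sleep(delay)       # delay before starting the next cycle
    if run_in_bg:
        thread = threading.Thread(target=_runner, daemon=True)
        thread.start()
        return thread
    _runner()

calls = []
start_app_workload_sketch(
    [lambda: calls.append("pgsql"), lambda: calls.append("cosbench")],
    run_time_s=0.01,
)
```

At least one full pass over the list is made before the deadline is checked again, so every listed workload runs at least once per cycle.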
- ocs_ci.ocs.longevity.start_ocp_workload(workloads_list, run_in_bg=True)
- This function reads the list of OCP workloads to run and
starts running those iterating over the elements in the list.
Usage: start_ocp_workload(workloads_list=[‘logging’,’registry’], run_in_bg=True)
- Parameters:
workloads_list (list) – The list of OCP workloads to run
run_in_bg (bool) – Runs the workload in background starting a thread
- Raises:
UnsupportedWorkloadError – When the workload is not found in the supported_ocp_workloads list
ocs_ci.ocs.machine module
- ocs_ci.ocs.machine.add_annotation_to_machine(annotation, machine_name)
Add annotation to the machine
- Parameters:
annotation (str) – Annotation to be set on the machine, e.g. annotation = “machine.openshift.io/exclude-node-draining=’’”
machine_name (str) – Machine name
- ocs_ci.ocs.machine.add_node(machine_set, count)
Add new node to the cluster
- Parameters:
machine_set (str) – Name of the machine set whose replica count will be increased
count (int) – Count to increase
- Returns:
True if the command executes successfully
- Return type:
bool
- ocs_ci.ocs.machine.change_current_replica_count_to_ready_replica_count(machine_set)
Change the current replica count to be equal to the ready replica count. We may use this function after deleting a node or after adding a new node.
- Parameters:
machine_set (str) – Name of the machine set
- Returns:
True if the change was made successfully. False otherwise
- Return type:
bool
- ocs_ci.ocs.machine.check_machineset_exists(machine_set)
Function to check whether the machineset exists
- Parameters:
machine_set (str) – Name of the machine set
- Returns:
True if machineset exists, else false
- Return type:
bool
- ocs_ci.ocs.machine.create_custom_machineset(role='app', instance_type=None, labels=None, taints=None, zone='a')
Function to create a custom machineset. Works only for AWS, i.e. using this, a user can create nodes with different instance types and roles. https://docs.openshift.com/container-platform/4.1/machine_management/creating-machineset.html
- Parameters:
role (str) – Role type to be added to the node, e.g. app, worker
instance_type (str) – Type of instance
labels (list) – List of Labels (key, val) to be added to the node
taints (list) – List of taints to be applied
zone (str) – Machineset zone for node creation.
- Returns:
Created machineset name
- Return type:
machineset (str)
- Raises:
ResourceNotFoundError – In case machineset creation failed
UnsupportedPlatformError – In case of wrong platform
- ocs_ci.ocs.machine.create_ocs_infra_nodes(num_nodes)
Create infra node instances
- Parameters:
num_nodes (int) – Number of instances to be created
- Returns:
list of instance names
- Return type:
list
- ocs_ci.ocs.machine.delete_custom_machineset(machine_set)
Function to delete custom machineset
- Parameters:
machine_set (str) – Name of the machine set to be deleted
WARN – Make sure it is not the OCS worker node machine set; if it is, the OCS worker nodes and the machine set will be deleted.
- Raises:
UnexpectedBehaviour – In case the machineset is not deleted
- ocs_ci.ocs.machine.delete_machine(machine_name)
Deletes a machine
- Parameters:
machine_name (str) – Name of the machine you want to delete
- Raises:
CommandFailed – In case yaml_file and resource_name weren’t provided
- ocs_ci.ocs.machine.delete_machine_and_check_state_of_new_spinned_machine(machine_name)
Deletes a machine and checks the state of the newly spun machine
- Parameters:
machine_name (str) – Name of the machine you want to delete
- Returns:
New machine name
- Return type:
machine (str)
- Raises:
ResourceNotFoundError – In case machine creation failed
- ocs_ci.ocs.machine.delete_machines(machine_names)
Delete the machines
- Parameters:
machine_names (list) – List of the machine names you want to delete
- Raises:
CommandFailed – In case yaml_file and resource_name weren’t provided
- ocs_ci.ocs.machine.get_labeled_nodes(label)
Fetches all nodes with specific label.
- Parameters:
label (str) – node label to look for
- Returns:
List of names of labeled nodes
- Return type:
list
- ocs_ci.ocs.machine.get_machine_from_machineset(machine_set)
Get the machine name from its associated machineset
- Parameters:
machine_set (str) – Name of the machine set
- Returns:
Machine names
- Return type:
list
- ocs_ci.ocs.machine.get_machine_from_node_name(node_name)
Get the associated machine name for the given node name
- Parameters:
node_name (str) – Name of the node
- Returns:
Machine name
- Return type:
str
- ocs_ci.ocs.machine.get_machine_objs(machine_names=None)
Get machine objects by machine names
- Parameters:
machine_names (list) – The machine names to get their objects. If None, will return all cluster machines
- Returns:
Cluster machine OCS objects
- Return type:
list
- ocs_ci.ocs.machine.get_machine_type(machine_name)
Get the machine type (e.g. worker, master)
- Parameters:
machine_name (str) – Name of the machine
- Returns:
Type of the machine
- Return type:
str
- ocs_ci.ocs.machine.get_machines(machine_type='worker')
Get cluster’s machines according to the machine type (e.g. worker, master)
- Parameters:
machine_type (str) – The machine type (e.g. worker, master)
- Returns:
The nodes OCP instances
- Return type:
list
- ocs_ci.ocs.machine.get_machines_in_statuses(statuses, machine_objs=None, machine_type='worker')
Get all machines in specific statuses
- Parameters:
statuses (list) – List of the statuses to search for the machines
machine_objs (list) – The machine objects to check their statues. If not specified, it gets all the machines.
machine_type (str) – The machine type (e.g. worker, master)
- Returns:
OCP objects representing the machines in the specific statuses
- Return type:
list
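The status filtering that get_machines_in_statuses performs can be sketched with plain dicts. This is an illustration only; the real function works on OCP machine objects, and the dict shape below merely mimics the relevant part of a Machine resource.

```python
def machines_in_statuses(machines, statuses):
    """Return the machines whose 'phase' is in the given statuses.

    ``machines`` is a list of dicts shaped like the relevant part of a
    Machine resource (illustrative, not the real OCP object).
    """
    return [m for m in machines
            if m.get("status", {}).get("phase") in statuses]

machines = [
    {"metadata": {"name": "worker-a"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "worker-b"}, "status": {"phase": "Provisioning"}},
    {"metadata": {"name": "worker-c"}, "status": {"phase": "Failed"}},
]
# Pick out machines that are not yet (or no longer) healthy:
flagged = machines_in_statuses(machines, ["Provisioning", "Failed"])
```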
- ocs_ci.ocs.machine.get_machineset_from_machine_name(machine_name)
Get the machineset associated with the machine name
- Parameters:
machine_name (str) – Name of the machine
- Returns:
Machineset name
- Return type:
str
- ocs_ci.ocs.machine.get_machineset_objs(machineset_names=None)
Get machineset objects by machineset names
- Parameters:
machineset_names (list) – The machineset names to get their objects. If None, will return all cluster machinesets
- Returns:
Cluster machineset OCS objects
- Return type:
list
- ocs_ci.ocs.machine.get_machinesets()
Get machine sets
- Returns:
list of machine sets
- Return type:
machine_sets (list)
- ocs_ci.ocs.machine.get_ready_replica_count(machine_set)
Get the count of replicas which are in ready state in a machine set
- Parameters:
machine_set (str) – Machineset name
- Returns:
The number of replicas in ready state
- Return type:
ready_replica (int)
- ocs_ci.ocs.machine.get_replica_count(machine_set)
Get replica count of a machine set
- Parameters:
machine_set (str) – Name of a machine set to get replica count
- Returns:
replica count of a machine set
- Return type:
replica count (int)
- ocs_ci.ocs.machine.get_storage_cluster(namespace='openshift-storage')
Get storage cluster name
- Parameters:
namespace (str) – Namespace of the resource
- Returns:
Storage cluster name
- Return type:
str
- ocs_ci.ocs.machine.set_replica_count(machine_set, count)
Change the replica count of a machine set.
- Parameters:
machine_set (str) – Name of the machine set
count (int) – The number of the new replica count
- Returns:
True if the change was made successfully. False otherwise
- Return type:
bool
- ocs_ci.ocs.machine.wait_for_current_replica_count_to_reach_expected_value(machine_set, expected_value, timeout=360)
Wait for the current replica count to reach an expected value
- Parameters:
machine_set (str) – Name of the machine set
expected_value (int) – The expected value to reach
timeout (int) – Time to wait for the current replica count to reach the expected value
- Returns:
True if the current replica count reached the expected value. False otherwise
- Return type:
bool
- ocs_ci.ocs.machine.wait_for_machines_count_to_reach_status(machine_count, machine_type='worker', expected_status='Running', timeout=600, sleep=20)
Wait for a machine count to reach the expected status
- Parameters:
machine_count (int) – The machine count
machine_type (str) – The machine type (e.g. worker, master)
expected_status (str) – The expected status. Default value is “Running”.
timeout (int) – Time to wait for the machine count to reach the expected status.
sleep (int) – Time in seconds to wait between attempts.
- Raises:
TimeoutExpiredError – In case the machine count didn’t reach the expected status in the given timeout.
- ocs_ci.ocs.machine.wait_for_new_node_to_be_ready(machine_set, timeout=600)
Wait for the new node to reach ready state
- Parameters:
machine_set (str) – Name of the machine set
timeout (int) – Timeout in secs, default 10mins
- Raises:
ResourceWrongStatusException – In case the newly spun machine fails to reach the Ready state or the replica count didn’t match
- ocs_ci.ocs.machine.wait_for_ready_replica_count_to_reach_expected_value(machine_set, expected_value, timeout=180)
Wait for the ready replica count to reach an expected value
- Parameters:
machine_set (str) – Name of the machine set
expected_value (int) – The expected value to reach
timeout (int) – Time to wait for the ready replica count to reach the expected value
- Returns:
True if the ready replica count reached the expected value. False otherwise
- Return type:
bool
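The wait_for_*_replica_count helpers above all follow the same poll-until-expected-value-or-timeout pattern, which can be sketched generically. This is a simplified stand-in, not ocs-ci's TimeoutSampler; get_value replaces the real replica-count lookup, and timings are shortened for illustration.

```python
import time

def wait_for_value(get_value, expected, timeout_s, interval_s=0.01):
    """Poll get_value() until it returns expected or timeout_s elapses.

    Returns True on success and False on timeout, mirroring the bool
    return of the wait_for_*_replica_count helpers.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_value() == expected:
            return True
        time.sleep(interval_s)   # back off between polls
    return False

# Stand-in for a replica count that climbs to the expected value:
counter = iter([0, 1, 2, 3])
reached = wait_for_value(lambda: next(counter), expected=3, timeout_s=1.0)
```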
ocs_ci.ocs.managedservice module
- ocs_ci.ocs.managedservice.change_current_index_to_default_index()
Change the current cluster index to the default cluster index
- ocs_ci.ocs.managedservice.check_and_change_current_index_to_default_index()
Check whether the default cluster index is equal to the current cluster index, and change the current cluster index to the default cluster index if they are not equal.
- Returns:
True, if the default cluster index was equal to the current cluster index
- Return type:
bool
- ocs_ci.ocs.managedservice.check_default_cluster_context_index_equal_to_current_index()
Check that the default cluster index is equal to the current cluster index
- Returns:
True, if the default cluster index is equal to the current cluster index
- Return type:
bool
- ocs_ci.ocs.managedservice.check_switch_to_correct_cluster_at_setup(cluster_type=None)
Check that we switch to the correct cluster type at setup according to the ‘cluster_type’ parameter provided
- Parameters:
cluster_type (str) – The cluster type
- Raises:
AssertionError – In case of switching to the wrong cluster type.
- ocs_ci.ocs.managedservice.get_admin_key_from_provider()
Get admin key from rook-ceph-tools pod on provider
- Returns:
The admin key obtained from the rook-ceph-tools pod on the provider. Returns an empty string if the admin key is not obtained.
- Return type:
str
- ocs_ci.ocs.managedservice.get_consumer_names()
Get the names of all consumers connected to this provider cluster. Runs on provider cluster
- Returns:
names of all connected consumers, empty list if there are none
- Return type:
list
- ocs_ci.ocs.managedservice.get_dms_secret_name()
Get name of the Dead Man’s Snitch secret for currently used addon.
- Returns:
name of the secret
- Return type:
string
- ocs_ci.ocs.managedservice.get_managedocs_component_state(component)
Get the state of the given managedocs component: alertmanager, prometheus or storageCluster.
- Parameters:
component (str) – the component of managedocs resource
- Returns:
the state of the component
- Return type:
str
- ocs_ci.ocs.managedservice.get_pagerduty_secret_name()
Get name of the PagerDuty secret for currently used addon.
- Returns:
name of the secret
- Return type:
string
- ocs_ci.ocs.managedservice.get_parameters_secret_name()
Get name of the addon parameters secret for currently used addon.
- Returns:
name of the secret
- Return type:
string
- ocs_ci.ocs.managedservice.get_provider_service_type()
Get the type of the ocs-provider-server Service (e.g. NodePort, LoadBalancer)
- Returns:
The type of the ocs-provider-server Service
- Return type:
str
- ocs_ci.ocs.managedservice.get_smtp_secret_name()
Get name of the SMTP secret for currently used addon.
- Returns:
name of the secret
- Return type:
string
- ocs_ci.ocs.managedservice.is_rados_connect_error_in_ex(ex)
Check if the RADOS connect error is found in the exception
- Parameters:
ex (Exception) – The exception to check if the RADOS connect error is found
- Returns:
True, if the RADOS connect error is found in the exception. False otherwise
- Return type:
bool
- ocs_ci.ocs.managedservice.patch_consumer_toolbox(ceph_admin_key=None, consumer_tools_pod=None)
Patch the rook-ceph-tools deployment with ceph.admin key. Applicable for MS platform only to enable rook-ceph-tools to run ceph commands.
- Parameters:
ceph_admin_key (str) – The ceph admin key which should be used to patch rook-ceph-tools deployment on consumer
consumer_tools_pod (OCS) – The rook-ceph-tools pod object.
- Returns:
The new pod object after patching the rook-ceph-tools deployment. If it fails to patch, it returns None.
- Return type:
OCS object, or None if patching fails
- ocs_ci.ocs.managedservice.update_non_ga_version()
Update pull secret, catalog source, subscription and operators to consume ODF and deployer versions provided in configuration.
- ocs_ci.ocs.managedservice.update_pull_secret()
Update pull secret with extra quay.io/rhceph-dev credentials.
Note: This is a hack done to allow odf to odf deployment before full addon is available.
ocs_ci.ocs.mcg_workload module
- ocs_ci.ocs.mcg_workload.create_workload_job(job_name, bucket, project, mcg_obj, resource_path, custom_options=None)
Creates kubernetes job that should utilize MCG bucket.
- Parameters:
job_name (str) – Name of the job
bucket (obj) – MCG bucket with S3 interface
project (obj) – OCP object representing OCP project which will be used for the job
mcg_obj (obj) – instance of MCG class
resource_path (str) – path to the directory where resources should be created
custom_options (dict) – Dictionary of lists containing tuples with additional configuration for fio in format: {‘section’: [(‘option’, ‘value’),…],…} e.g. {‘global’:[(‘name’,’bucketname’)],’create’:[(‘time_based’,’1’),(‘runtime’,’48h’)]} Those values can be added to the config or rewrite already existing values
- Returns:
Job object
- Return type:
obj
- ocs_ci.ocs.mcg_workload.get_configmap_dict(fio_job, mcg_obj, bucket, custom_options=None)
Fio configmap dictionary with configuration set for MCG workload.
- Parameters:
fio_job (dict) – Definition of fio job
mcg_obj (obj) – instance of MCG class
bucket (obj) – MCG bucket to be used for workload
custom_options (dict) – Dictionary of lists containing tuples with additional configuration for fio in format: {‘section’: [(‘option’, ‘value’),…],…} e.g. {‘global’:[(‘name’,’bucketname’)],’create’:[(‘time_based’,’1’),(‘runtime’,’48h’)]} Those values can be added to the config or rewrite already existing values
- Returns:
Configmap definition
- Return type:
dict
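The custom_options merge described above ({‘section’: [(‘option’, ‘value’), …], …} added to the config or overwriting existing values) can be sketched as below. This is an illustrative stand-in: the real function builds a full fio configmap, while here the fio config is modelled as a plain dict of sections.

```python
def apply_custom_options(fio_config, custom_options):
    """Merge custom_options into an fio config dict.

    fio_config:     {'section': {'option': 'value', ...}, ...}
    custom_options: {'section': [('option', 'value'), ...], ...}
    New options are added; already existing options are overwritten,
    matching the behaviour documented above.
    """
    for section, options in custom_options.items():
        fio_config.setdefault(section, {})
        for option, value in options:
            fio_config[section][option] = value
    return fio_config

config = {"global": {"name": "default"}, "create": {"runtime": "24h"}}
apply_custom_options(config, {
    "global": [("name", "bucketname")],
    "create": [("time_based", "1"), ("runtime", "48h")],
})
```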
- ocs_ci.ocs.mcg_workload.get_job_dict(job_name)
Fio job dictionary with configuration set for MCG workload.
- Parameters:
job_name (str) – Name of the workload job
- Returns:
Specification for the workload job
- Return type:
dict
- ocs_ci.ocs.mcg_workload.mcg_job_factory(request, bucket_factory, project_factory, mcg_obj, resource_path)
MCG IO workload factory. Calling this fixture creates an OpenShift Job.
- Parameters:
request (obj) – request fixture instance
bucket_factory (func) – factory function for bucket creation
project_factory (func) – factory function for project creation
mcg_obj (obj) – instance of MCG class
resource_path (str) – path to the directory where resources should be created
- Returns:
MCG workload job factory function
- Return type:
func
- ocs_ci.ocs.mcg_workload.wait_for_active_pods(job, desired_count, timeout=3)
Wait for the job to reach the desired number of active pods within the time specified in timeout.
- Parameters:
job (obj) – OCS job object
desired_count (int) – Number of desired active pods for provided job
timeout (int) – Number of seconds to wait for the job to get into state
- Returns:
True if the job has the desired number of active pods, False otherwise
- Return type:
bool
ocs_ci.ocs.metrics module
OCS Metrics module.
Code in this module supports monitoring test cases dealing with OCS metrics.
- ocs_ci.ocs.metrics.get_missing_metrics(prometheus, metrics, current_platform=None, start=None, stop=None)
Using given prometheus instance, check that all given metrics which are expected to be available on current platform are there.
If start and stop timestamps are specified, the function will try to fetch metric data from a middle of this time range (instead of fetching the current value). Expected to be used with workload fixtures.
- Parameters:
prometheus (ocs_ci.utility.prometheus.PrometheusAPI) – prometheus instance
metrics (list) – list or tuple with metrics to be checked
current_platform (str) – name of current platform (optional)
start (float) – start timestamp (unix time number)
stop (float) – stop timestamp (unix time number)
- Returns:
metrics which were not available but should be
- Return type:
list
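The behaviour described above (query the middle of the [start, stop] range when both timestamps are given, else the current value) can be sketched as follows. This is a hedged illustration: query_value stands in for a real PrometheusAPI query, and the metric names are examples only.

```python
def missing_metrics(query_value, metrics, start=None, stop=None):
    """Return the metrics for which no data is available.

    query_value(metric, ts) stands in for a Prometheus query; ts is the
    middle of the [start, stop] range when both are given, else None
    (meaning "current value"), mirroring the behaviour described above.
    """
    ts = (start + stop) / 2 if start is not None and stop is not None else None
    return [m for m in metrics if query_value(m, ts) is None]

# Fake metric store: only two metrics have data.
available = {"ceph_osd_up": 1.0, "ceph_mon_quorum_status": 1.0}
gone = missing_metrics(
    lambda metric, ts: available.get(metric),
    ["ceph_osd_up", "ceph_pool_used_bytes"],
    start=1000.0,
    stop=2000.0,
)
```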
ocs_ci.ocs.monitoring module
- ocs_ci.ocs.monitoring.check_ceph_health_status_metrics_on_prometheus(mgr_pod, threading_lock)
Check ceph health status metric is collected on prometheus pod
- Parameters:
mgr_pod (str) – Name of the mgr pod
threading_lock (obj) – Threading lock object to ensure only one thread is making ‘oc’ calls
- Returns:
True on success, false otherwise
- Return type:
bool
- ocs_ci.ocs.monitoring.check_ceph_metrics_available(threading_lock)
Check that all healthy ceph metrics are available.
- Parameters:
threading_lock (threading.RLock) – A lock to use for thread safety ‘oc’ calls
- Returns:
True on success, false otherwise
- Return type:
bool
- ocs_ci.ocs.monitoring.check_if_monitoring_stack_exists()
Check if monitoring is configured on the cluster with ODF backed PVCs
- Returns:
True if monitoring is configured on the cluster, false otherwise
- Return type:
bool
- ocs_ci.ocs.monitoring.check_pvcdata_collected_on_prometheus(pvc_name, threading_lock)
Checks whether PVC-related data is initially collected on the Prometheus pod
- Parameters:
pvc_name (str) – Name of the pvc
threading_lock (threading.RLock) – A lock to prevent multiple threads calling ‘oc’ command at the same time
- Returns:
True on success, raises UnexpectedBehaviour on failures
- ocs_ci.ocs.monitoring.create_configmap_cluster_monitoring_pod(sc_name=None, telemeter_server_url=None)
Create a configmap named cluster-monitoring-config based on the arguments.
- Parameters:
sc_name (str) – Name of the storage class which will be used for persistent storage needs of OCP Prometheus and Alert Manager. If not defined, the related options won’t be present in the monitoring config map and the default (non persistent) storage will be used for OCP Prometheus and Alert Manager.
telemeter_server_url (str) – URL of Telemeter server where the telemeter client (running in the cluster) will send its telemetry data. If not defined, the related option won’t be present in the monitoring config map and the default (production) telemeter server will receive the metrics data.
- ocs_ci.ocs.monitoring.get_ceph_capacity_metrics(threading_lock)
Get CEPH capacity breakdown data from Prometheus and return all response texts collected into a dict. Uses the queries from the ceph-storage repo: https://github.com/red-hat-storage/odf-console/blob/master/packages/ocs/queries/ceph-storage.ts
- To get the data use format similar to:
data.get(‘PROJECTS_TOTAL_USED’).get(‘data’).get(‘result’)[0].get(‘value’)
- Returns:
A dictionary containing the CEPH capacity breakdown data
- Return type:
dict
- ocs_ci.ocs.monitoring.get_list_pvc_objs_created_on_monitoring_pods()
Returns list of pvc objects created on monitoring pods
- Returns:
List of pvc objs
- Return type:
list
- ocs_ci.ocs.monitoring.get_metrics_persistentvolumeclaims_info(threading_lock)
Returns the created pvc information on prometheus pod
- Parameters:
threading_lock (threading.RLock) – A lock to prevent multiple threads calling ‘oc’ command at the same time
- Returns:
The pvc metrics collected on prometheus pod
- Return type:
response.content (dict)
- ocs_ci.ocs.monitoring.get_prometheus_response(api, query) dict
Get the response from Prometheus based on the provided query
- Parameters:
api (PrometheusAPI) – A PrometheusAPI object
query (str) – The Prometheus query string
- Returns:
A dictionary representing the parsed JSON response from Prometheus
- Return type:
dict
- ocs_ci.ocs.monitoring.get_pvc_namespace_metrics(threading_lock)
Get PVC and Namespace metrics from Prometheus.
- Parameters:
threading_lock (threading.RLock) – A lock to use for thread safety ‘oc’ calls
- Returns:
A dictionary containing the PVC and Namespace metrics data
- Return type:
dict
- ocs_ci.ocs.monitoring.prometheus_health_check(name='monitoring', kind='ClusterOperator')
Return true if the prometheus cluster is healthy
- Parameters:
name (str) – Name of the resources
kind (str) – Kind of the resource
- Returns:
True on prometheus health is ok, false otherwise
- Return type:
bool
- ocs_ci.ocs.monitoring.validate_pvc_are_mounted_on_monitoring_pods(pod_list)
Validate created pvc are mounted on monitoring pods
- Parameters:
pod_list (list) – List of the pods where pvc are mounted
- ocs_ci.ocs.monitoring.validate_pvc_created_and_bound_on_monitoring_pods()
Validate that PVCs are created and in Bound state on monitoring pods
- Raises:
AssertionError – If no PVCs are created or if any PVCs are not in the Bound state
ocs_ci.ocs.node module
- ocs_ci.ocs.node.add_disk_stretch_arbiter()
Adds disks to storage nodes in a stretch cluster with arbiter configuration, evenly spread across the two zones. A stretch cluster has replica 4, hence 2 disks are added to each of the zones
- ocs_ci.ocs.node.add_disk_to_node(node_obj, disk_size=None)
Add a new disk to a node
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
disk_size (int) – The size of the new disk to attach. If not specified, the disk size will be equal to the size of the previous disk.
- ocs_ci.ocs.node.add_new_disk_for_vsphere(sc_name)
Check the PVs in use per node, and add a new disk to the worker node with the minimum PVs.
- Parameters:
sc_name (str) – The storage class name
- ocs_ci.ocs.node.add_new_node_and_label_it(machineset_name, num_nodes=1, mark_for_ocs_label=True)
Add a new node for ipi and label it
- Parameters:
machineset_name (str) – Name of the machine set
num_nodes (int) – number of nodes to add
mark_for_ocs_label (bool) – True if label the new node
eg: add_new_node_and_label_it(“new-tdesala-zlqzn-worker-us-east-2a”)
- Returns:
new spun node names
- Return type:
list
- ocs_ci.ocs.node.add_new_node_and_label_upi(node_type, num_nodes, mark_for_ocs_label=True, node_conf=None)
Add a new node for aws/vmware upi platform and label it
- Parameters:
node_type (str) – Type of node, RHEL or RHCOS
num_nodes (int) – number of nodes to add
mark_for_ocs_label (bool) – True if label the new node
node_conf (dict) – The node configurations.
- Returns:
new spun node names
- Return type:
list
- ocs_ci.ocs.node.add_new_nodes_and_label_after_node_failure_ipi(machineset_name, num_nodes=1, mark_for_ocs_label=True)
Add new nodes for ipi and label them after node failure
- Parameters:
machineset_name (str) – Name of the machine set
num_nodes (int) – number of nodes to add
mark_for_ocs_label (bool) – True if label the new node
- Returns:
new spun node names
- Return type:
list
- ocs_ci.ocs.node.add_new_nodes_and_label_upi_lso(node_type, num_nodes, mark_for_ocs_label=True, node_conf=None, add_disks=True, add_nodes_to_lvs_and_lvd=True)
Add a new node for aws/vmware upi lso platform and label it
- Parameters:
node_type (str) – Type of node, RHEL or RHCOS
num_nodes (int) – number of nodes to add
mark_for_ocs_label (bool) – True if label the new nodes
node_conf (dict) – The node configurations.
add_disks (bool) – True if add disks to the new nodes.
add_nodes_to_lvs_and_lvd (bool) – True if add the new nodes to localVolumeDiscovery and localVolumeSet.
- Returns:
new spun node names
- Return type:
list
- ocs_ci.ocs.node.add_node_to_lvd_and_lvs(node_name)
Add a new node to localVolumeDiscovery and localVolumeSet
- Parameters:
node_name (str) – the new node name to add to localVolumeDiscovery and localVolumeSet
- Returns:
True in case the changes are applied successfully. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.check_for_zombie_process_on_node(node_name=None)
Check if there are any zombie process on the nodes
- Parameters:
node_name (list) – Node names list to check for zombie process
- Raises:
ZombieProcessFoundException – In case zombie process are found on the node
- ocs_ci.ocs.node.check_node_ip_equal_to_associated_pods_ips(node_obj)
Check that the node ip is equal to the pods ips associated with the node. This function is mainly for the managed service deployment.
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
- True, if the node ip is equal to the pods ips associated with the node.
False, otherwise.
- Return type:
bool
- ocs_ci.ocs.node.check_nodes_specs(min_memory, min_cpu)
Check that the cluster worker nodes meet the required minimum CPU and memory
- Parameters:
min_memory (int) – The required minimum memory in bytes
min_cpu (int) – The required minimum number of vCPUs
- Returns:
True if all nodes meet the required minimum specs, False otherwise
- Return type:
bool
- ocs_ci.ocs.node.check_taint_on_nodes(taint=None)
Function to check for particular taint on nodes
- Parameters:
taint (str) – The taint to check on nodes
- Returns:
True if taint is present on node. False otherwise
- Return type:
bool
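As an illustration of what such a taint check can look like, here is a minimal sketch that scans node descriptions shaped like the items of ‘oc get nodes -o json’. The function name and data shapes are assumptions for illustration, not the actual ocs-ci implementation.

```python
# Illustrative sketch only: look for a taint key in Kubernetes Node objects.
def check_taint_on_nodes(nodes, taint="node.ocs.openshift.io/storage"):
    """Return True if any node carries a taint whose key contains `taint`."""
    for node in nodes:
        for t in node.get("spec", {}).get("taints", []):
            if taint in t.get("key", ""):
                return True
    return False

nodes = [
    {"metadata": {"name": "worker-0"},
     "spec": {"taints": [{"key": "node.ocs.openshift.io/storage",
                          "effect": "NoSchedule"}]}},
    {"metadata": {"name": "worker-1"}, "spec": {}},
]
print(check_taint_on_nodes(nodes))  # True
```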
- ocs_ci.ocs.node.consumer_verification_steps_after_provider_node_replacement()
Check the consumer verification steps after a provider node replacement. The steps are as follows:
- 1. Check if the consumer “storageProviderEndpoint” IP is found in the provider worker node IPs.
- 1.1 If not found, edit the rosa addon installation and modify the param ‘storage-provider-endpoint’ with the first provider worker node IP.
- 1.2 Wait and check again if the consumer “storageProviderEndpoint” IP is found in the provider worker node IPs.
- 2. If found, check if the mon endpoint IPs are found in the provider worker node IPs, and that we can execute a ceph command from the consumer.
- 2.1 If not found, restart the ocs-operator pod.
- 2.2 Wait and check again if the mon endpoint IPs are found in the provider worker node IPs, and that we can execute a ceph command from the consumer.
- Returns:
True, if the consumer verification steps finished successfully. False, otherwise.
- Return type:
bool
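The first step above reduces to a membership test of the endpoint's IP part against the provider worker node IPs. A minimal sketch, assuming the endpoint is an “ip:port” string; the function name is illustrative, not an actual ocs-ci helper.

```python
# Illustrative sketch: does the storageProviderEndpoint IP belong to a
# provider worker node? Assumes an "ip:port" endpoint string.
def endpoint_ip_in_node_ips(storage_provider_endpoint, provider_node_ips):
    """True if the endpoint's IP part matches any provider worker node IP."""
    ip = storage_provider_endpoint.split(":")[0]
    return ip in provider_node_ips

print(endpoint_ip_in_node_ips("10.0.0.5:31659", ["10.0.0.5", "10.0.0.6"]))  # True
```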
- ocs_ci.ocs.node.delete_and_create_osd_node_aws_upi(osd_node_name)
Unschedule, drain and delete the osd node, and create a new osd node. At the end of the function there should be the same number of osd nodes as at the beginning, and ceph health should be OK. This function is for AWS UPI.
- Parameters:
osd_node_name (str) – the name of the osd node
- Returns:
The new node name
- Return type:
str
- ocs_ci.ocs.node.delete_and_create_osd_node_ipi(osd_node_name)
Unschedule, drain and delete the osd node, and create a new osd node. At the end of the function there should be the same number of osd nodes as at the beginning, and ceph health should be OK.
This function is for any IPI platform.
- Parameters:
osd_node_name (str) – the name of the osd node
- Returns:
The new node name
- Return type:
str
- ocs_ci.ocs.node.delete_and_create_osd_node_vsphere_upi(osd_node_name, use_existing_node=False)
Unschedule, drain and delete the osd node, and create a new osd node. At the end of the function there should be the same number of osd nodes as at the beginning, and ceph health should be OK. This function is for vSphere UPI.
- Parameters:
osd_node_name (str) – the name of the osd node
use_existing_node (bool) – If False, create a new node and label it. If True, use an existing node to replace the deleted node and label it.
- Returns:
The new node name
- Return type:
str
- ocs_ci.ocs.node.delete_and_create_osd_node_vsphere_upi_lso(osd_node_name, use_existing_node=False)
Unschedule, drain and delete the osd node, and create a new osd node. At the end of the function there should be the same number of osd nodes as at the beginning, and ceph health should be OK. This function is for vSphere UPI.
- Parameters:
osd_node_name (str) – the name of the osd node
use_existing_node (bool) – If False, create a new node and label it. If True, use an existing node to replace the deleted node and label it.
- Returns:
The new node name
- Return type:
str
- ocs_ci.ocs.node.drain_nodes(node_names, timeout=1800, disable_eviction=False)
Drain nodes
- Parameters:
node_names (list) – The names of the nodes
timeout (int) – Time to wait for the drain nodes ‘oc’ command
disable_eviction (bool) – On True will delete pod that is protected by PDB, False by default
- Raises:
TimeoutExpired – in case drain command fails to complete in time
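Under the hood a drain boils down to an ‘oc adm drain’ invocation, with ‘--disable-eviction’ deleting pods directly so PodDisruptionBudgets cannot block the drain. A hypothetical sketch of how such a command line could be assembled (the flag names are real oc/kubectl drain flags; the helper name is illustrative, not the ocs-ci implementation):

```python
# Illustrative sketch: assemble an `oc adm drain` command string.
def build_drain_cmd(node_names, disable_eviction=False):
    cmd = ["oc", "adm", "drain"] + list(node_names)
    cmd += ["--force=true", "--ignore-daemonsets", "--delete-emptydir-data"]
    if disable_eviction:
        # Deletes pods instead of evicting them, bypassing PDB protection
        cmd.append("--disable-eviction")
    return " ".join(cmd)

print(build_drain_cmd(["worker-0"], disable_eviction=True))
```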
- ocs_ci.ocs.node.generate_new_nodes_and_osd_running_nodes_ipi(osd_running_worker_nodes=None, num_of_nodes=2)
Create new nodes and generate osd running worker nodes in the same machinesets as the new nodes, or if it’s a 1AZ cluster, it generates osd running worker nodes in the same rack or zone as the new nodes. This function is only for an IPI deployment
- Parameters:
osd_running_worker_nodes (list) – The list to use in the function for generating osd running worker nodes. If not provided, it generates all the osd running worker nodes.
num_of_nodes (int) – The number of the new nodes to create and generate. The default value is 2.
- Returns:
The list of the generated osd running worker nodes
- Return type:
list
- ocs_ci.ocs.node.generate_nodes_for_provider_worker_node_tests()
Generate worker node objects for the provider worker node tests.
The function generates the node objects as follows:
- If we have 3 worker nodes in the cluster, it generates all 3 worker nodes.
- If we have more than 3 worker nodes, it generates one mgr node, 2 mon nodes, and 2 osd nodes. The nodes can overlap, but at least 3 nodes should be selected (and no more than 5 nodes).
- Returns:
The list of the node objects for the provider node tests
- Return type:
list
- ocs_ci.ocs.node.get_all_nodes()
Gets all the nodes in the cluster
- Returns:
List of node names
- Return type:
list
- ocs_ci.ocs.node.get_another_osd_node_in_same_rack_or_zone(failure_domain, node_obj, node_names_to_search=None)
Get another osd node in the same rack or zone of a given node.
- Parameters:
failure_domain (str) – The failure domain
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object to search for another osd node in the same rack or zone.
node_names_to_search (list) – The list of node names to search for another osd node in the same rack or zone. If not specified, it will search in all the worker nodes.
- Returns:
- The osd node in the same rack or zone of the given node.
If not found, it returns None.
- Return type:
- ocs_ci.ocs.node.get_app_pod_running_nodes(pod_obj)
Gets the app pod running node names
- Parameters:
pod_obj (list) – List of app pod objects
- Returns:
App pod running node names
- Return type:
list
- ocs_ci.ocs.node.get_both_osd_and_app_pod_running_node(osd_running_nodes, app_pod_running_nodes)
Gets both osd and app pod running node names
- Parameters:
osd_running_nodes (list) – List of osd running node names
app_pod_running_nodes (list) – List of app pod running node names
- Returns:
Both OSD and app pod running node names
- Return type:
list
- ocs_ci.ocs.node.get_compute_node_names(no_replace=False)
Gets the compute node names
- Parameters:
no_replace (bool) – If False, ‘.’ will be replaced with ‘-’
- Returns:
List of compute node names
- Return type:
list
- ocs_ci.ocs.node.get_crashcollector_nodes()
Get the nodes names where crashcollector pods are running
- Returns:
node names where crashcollector pods are running
- Return type:
set
- ocs_ci.ocs.node.get_encrypted_osd_devices(node_obj, node)
Get osd encrypted device names of a node
- Parameters:
node_obj – OCP object of kind node
node – node name
- Returns:
List of encrypted osd device names
- ocs_ci.ocs.node.get_master_nodes()
Fetches all master nodes.
- Returns:
List of names of master nodes
- Return type:
list
- ocs_ci.ocs.node.get_mon_running_nodes()
Gets the mon running node names
- Returns:
MON node names
- Return type:
list
- ocs_ci.ocs.node.get_node_az(node)
Get the node availability zone
- Parameters:
node (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The name of the node availability zone
- Return type:
str
- ocs_ci.ocs.node.get_node_from_machine_name(machine_name)
Get node name from a given machine_name.
- Parameters:
machine_name (str) – Name of Machine
- Returns:
Name of Node (or None if not found)
- Return type:
str
- ocs_ci.ocs.node.get_node_hostname_label(node_obj)
Get the hostname label of a node
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The node’s hostname label
- Return type:
str
- ocs_ci.ocs.node.get_node_index_in_local_block(node_name)
Get the node index in the node values as it appears in the local block resource
- Parameters:
node_name (str) – The node name to search for his index
- Returns:
The node index in the nodeSelector values
- Return type:
int
- ocs_ci.ocs.node.get_node_internal_ip(node_obj)
Get the node internal ip
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The node internal ip or None
- Return type:
str
- ocs_ci.ocs.node.get_node_ip_addresses(ipkind)
Gets a dictionary of required IP addresses for all nodes
- Parameters:
ipkind – ExternalIP or InternalIP or Hostname
- Returns:
Internal or External IP addresses keyed off of node name
- Return type:
dict
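A sketch of how such a mapping can be built from Node objects shaped like the items of ‘oc get nodes -o json’. The field names come from the Kubernetes Node API; the implementation itself is illustrative, not taken from the ocs-ci source.

```python
# Illustrative sketch: map node name -> address of the requested kind.
def get_node_ip_addresses(nodes, ipkind="InternalIP"):
    """ipkind is one of "InternalIP", "ExternalIP" or "Hostname"."""
    return {
        n["metadata"]["name"]: a["address"]
        for n in nodes
        for a in n["status"]["addresses"]
        if a["type"] == ipkind
    }

nodes = [{
    "metadata": {"name": "worker-0"},
    "status": {"addresses": [
        {"type": "InternalIP", "address": "10.0.0.4"},
        {"type": "Hostname", "address": "worker-0"},
    ]},
}]
print(get_node_ip_addresses(nodes))  # {'worker-0': '10.0.0.4'}
```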
- ocs_ci.ocs.node.get_node_ips(node_type='worker')
Gets the node public IP
- Parameters:
node_type (str) – The node type (e.g. worker, master)
- Returns:
Node IPs
- Return type:
list
- ocs_ci.ocs.node.get_node_logs(node_name)
Get logs from a given node
- Parameters:
node_name (str) – Name of the node
- Returns:
Output of ‘dmesg’ run on node
- Return type:
str
- ocs_ci.ocs.node.get_node_mon_ids(node_name)
Get the node mon ids
- Parameters:
node_name (str) – The node name to get the mon ids
- Returns:
The list of the mon ids
- Return type:
list
- ocs_ci.ocs.node.get_node_name(node_obj)
Get oc node’s name
- Parameters:
node_obj (node_obj) – oc node object
- Returns:
node’s name
- Return type:
str
- ocs_ci.ocs.node.get_node_names(node_type='worker')
Get node names
- Parameters:
node_type (str) – The node type (e.g. worker, master)
- Returns:
The node names
- Return type:
list
- ocs_ci.ocs.node.get_node_objs(node_names=None)
Get node objects by node names
- Parameters:
node_names (list) – The node names to get their objects for. If None, will return all cluster nodes
- Returns:
Cluster node OCP objects
- Return type:
list
- ocs_ci.ocs.node.get_node_osd_ids(node_name)
Get the node osd ids
- Parameters:
node_name (str) – The node name to get the osd ids
- Returns:
The list of the osd ids
- Return type:
list
- ocs_ci.ocs.node.get_node_pods(node_name, pods_to_search=None, raise_pod_not_found_error=False)
Get all the pods of a specified node
- Parameters:
node_name (str) – The node name to get the pods
pods_to_search (list) – list of pods to search for the node pods. If not specified, will search in all the pods.
raise_pod_not_found_error (bool) – If True, it raises an exception if one of the pods in the pod names is not found. If False, it ignores the pod-not-found case and returns the pod objects of the remaining pods. The default value is False
- Returns:
list of all the pods of the specified node
- Return type:
list
- ocs_ci.ocs.node.get_node_pods_to_scale_down(node_name)
Get the pods of a node to scale down as described in the documents of node replacement with LSO
- Parameters:
node_name (str) – The node name
- Returns:
The node’s pods to scale down
- Return type:
list
- ocs_ci.ocs.node.get_node_rack(node_obj)
Get the worker node rack
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The worker node rack name
- Return type:
str
- ocs_ci.ocs.node.get_node_rack_dict()
Get worker node rack
- Returns:
{“Node name”:”Rack name”}
- Return type:
dict
- ocs_ci.ocs.node.get_node_rack_or_zone(failure_domain, node_obj)
Get the worker node rack or zone name based on the failure domain value
- Parameters:
failure_domain (str) – The failure domain
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The worker node rack/zone name
- Return type:
str
- ocs_ci.ocs.node.get_node_rack_or_zone_dict(failure_domain)
Get worker node rack or zone dictionary based on the failure domain value
- Parameters:
failure_domain (str) – The failure domain
- Returns:
{“Node name”:”Zone/Rack name”}
- Return type:
dict
- ocs_ci.ocs.node.get_node_resource_utilization_from_adm_top(nodename=None, node_type='worker', print_table=False)
Gets the node’s cpu and memory utilization in percentage using adm top command.
- Parameters:
nodename (str) – The node name
node_type (str) – The node type (e.g. master, worker)
- Returns:
Node name and its cpu and memory utilization in percentage
- Return type:
dict
- ocs_ci.ocs.node.get_node_resource_utilization_from_oc_describe(nodename=None, node_type='worker', print_table=False)
Gets the node’s cpu and memory utilization in percentage using oc describe node
- Parameters:
nodename (str) – The node name
node_type (str) – The node type (e.g. master, worker)
- Returns:
Node name and its cpu and memory utilization in percentage
- Return type:
dict
- ocs_ci.ocs.node.get_node_rook_ceph_pod_names(node_name)
Get the rook ceph pod names associated with the node
- Parameters:
node_name (str) – The node name
- Returns:
The rook ceph pod names associated with the node
- Return type:
list
- ocs_ci.ocs.node.get_node_status(node_obj)
Get the node status.
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The node status. If the command failed, it returns None.
- Return type:
str
- ocs_ci.ocs.node.get_node_zone(node_obj)
Get the worker node zone
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
The worker node zone name
- Return type:
str
- ocs_ci.ocs.node.get_node_zone_dict()
Get worker node zone dictionary
- Returns:
{“Node name”:”Zone name”}
- Return type:
dict
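A minimal sketch of building such a zone dictionary, assuming zones are exposed through the well-known topology label; the exact label ocs-ci reads may vary per platform, and the function body here is illustrative only.

```python
# Illustrative sketch: map worker node name -> zone label value.
# topology.kubernetes.io/zone is the well-known Kubernetes zone label;
# this is an assumption, not necessarily what ocs-ci uses everywhere.
ZONE_LABEL = "topology.kubernetes.io/zone"

def get_node_zone_dict(nodes):
    """Return {"Node name": "Zone name"} (None for unlabeled nodes)."""
    return {
        n["metadata"]["name"]: n["metadata"].get("labels", {}).get(ZONE_LABEL)
        for n in nodes
    }

nodes = [
    {"metadata": {"name": "worker-0", "labels": {ZONE_LABEL: "us-east-2a"}}},
    {"metadata": {"name": "worker-1", "labels": {}}},
]
print(get_node_zone_dict(nodes))  # {'worker-0': 'us-east-2a', 'worker-1': None}
```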
- ocs_ci.ocs.node.get_nodes(node_type='worker', num_of_nodes=None)
Get cluster’s nodes according to the node type (e.g. worker, master) and the number of requested nodes from that type. In case of HCI provider cluster and ‘node_type’ is worker, it will exclude the master nodes.
- Parameters:
node_type (str) – The node type (e.g. worker, master)
num_of_nodes (int) – The number of nodes to be returned
- Returns:
The nodes OCP instances
- Return type:
list
- ocs_ci.ocs.node.get_nodes_having_label(label)
Gets nodes with particular label
- Parameters:
label (str) – Label
- Returns:
Representing nodes info
- Return type:
Dict
- ocs_ci.ocs.node.get_nodes_in_statuses(statuses, node_objs=None)
Get all nodes in specific statuses
- Parameters:
statuses (list) – List of the statuses to search for the nodes
node_objs (list) – The node objects to check their statues. If not specified, it gets all the nodes.
- Returns:
OCP objects representing the nodes in the specific statuses
- Return type:
list
- ocs_ci.ocs.node.get_nodes_racks_or_zones(failure_domain, node_names)
Get the nodes racks or zones
- Parameters:
failure_domain (str) – The failure domain
node_names (list) – The node names to get their racks or zones
- Returns:
The nodes racks or zones
- Return type:
list
- ocs_ci.ocs.node.get_nodes_where_ocs_pods_running()
Get the node names where rook ceph pods are running
- Returns:
node names where rook ceph pods are running
- Return type:
set
- ocs_ci.ocs.node.get_num_of_racks()
Get the number of racks in the cluster
- Returns:
The number of racks in the cluster
- Return type:
int
- ocs_ci.ocs.node.get_ocs_nodes(num_of_nodes=None)
Gets the ocs nodes
- Parameters:
num_of_nodes (int) – The number of ocs nodes to return. If not specified, it returns all the ocs nodes.
- Returns:
List of the ocs nodes
- Return type:
list
- ocs_ci.ocs.node.get_odf_zone_count()
Get the number of Availability zones used by ODF cluster
- Returns:
the number of availability zones
- Return type:
int
- ocs_ci.ocs.node.get_osd_ids_per_node()
Get a dictionary of the osd ids per node
- Returns:
The dictionary of the osd ids per node
- Return type:
dict
- ocs_ci.ocs.node.get_osd_running_nodes()
Gets the osd running node names
- Returns:
OSD node names
- Return type:
list
- ocs_ci.ocs.node.get_osds_per_node()
Gets the osd running pod names per node name
- Returns:
{“Node name”:[“osd running pod name running on the node”,..,]}
- Return type:
dict
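The returned shape can be built by grouping osd pods by the node they are scheduled on. A sketch under the assumption that each pod object exposes its node via spec.nodeName, as in the Kubernetes Pod API; the code is illustrative, not the ocs-ci source.

```python
# Illustrative sketch: group OSD pod names by the node running them.
def get_osds_per_node(osd_pods):
    """Return {"Node name": ["osd pod name running on the node", ...]}."""
    result = {}
    for pod in osd_pods:
        node = pod["spec"]["nodeName"]
        result.setdefault(node, []).append(pod["metadata"]["name"])
    return result

osd_pods = [
    {"metadata": {"name": "rook-ceph-osd-0-abc"}, "spec": {"nodeName": "worker-0"}},
    {"metadata": {"name": "rook-ceph-osd-1-def"}, "spec": {"nodeName": "worker-0"}},
    {"metadata": {"name": "rook-ceph-osd-2-ghi"}, "spec": {"nodeName": "worker-1"}},
]
print(get_osds_per_node(osd_pods))
```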
- ocs_ci.ocs.node.get_other_worker_nodes_in_same_rack_or_zone(failure_domain, node_obj, node_names_to_search=None)
Get other worker nodes in the same rack or zone of a given node.
- Parameters:
failure_domain (str) – The failure domain
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object to search for other worker nodes in the same rack or zone.
node_names_to_search (list) – The list of node names to search the other worker nodes in the same rack or zone. If not specified, it will search in all the worker nodes.
- Returns:
The list of the other worker nodes in the same rack or zone of the given node.
- Return type:
list
- ocs_ci.ocs.node.get_provider()
Return the OCP Provider (Platform)
- Returns:
The Provider that the OCP is running on
- Return type:
str
- ocs_ci.ocs.node.get_running_pod_count_from_node(nodename=None, node_type='worker')
Gets the node running pod count using oc describe node
- Parameters:
nodename (str) – The node name
node_type (str) – The node type (e.g. master, worker)
- Returns:
Node name and its pod_count
- Return type:
dict
- ocs_ci.ocs.node.get_typed_worker_nodes(os_id='rhcos')
Get worker nodes with specific OS
- Parameters:
os_id (str) – OS type like rhcos, RHEL etc…
- Returns:
list of worker nodes instances having specified os
- Return type:
list
- ocs_ci.ocs.node.get_worker_nodes()
Fetches all worker nodes. In case of HCI provider cluster, it will exclude the master nodes.
- Returns:
List of names of worker nodes
- Return type:
list
- ocs_ci.ocs.node.get_worker_nodes_not_in_ocs()
Get the worker nodes that are not ocs labeled.
- Returns:
list of worker node objects that are not ocs labeled
- Return type:
list
- ocs_ci.ocs.node.gracefully_reboot_nodes(disable_eviction=False)
Gracefully reboot OpenShift Container Platform nodes
- Parameters:
disable_eviction (bool) – On True will delete pod that is protected by PDB, False by default
- ocs_ci.ocs.node.is_node_labeled(node_name, label="cluster.ocs.openshift.io/openshift-storage=''")
Check if the node is labeled with a specified label.
- Parameters:
node_name (str) – The node name to check if it has the specific label
label (str) – The name of the label. Default value is the OCS label.
- Returns:
True if the node is labeled with the specified label. False otherwise
- Return type:
bool
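Conceptually this is a lookup of the label key in the node's metadata. A simplified sketch (the real helper takes the full "key=value" label string; here only the key is checked, which is an intentional simplification):

```python
# Illustrative sketch: check whether a node carries the OCS storage label.
OCS_LABEL = "cluster.ocs.openshift.io/openshift-storage"

def is_node_labeled(node, label_key=OCS_LABEL):
    """True if the node's metadata.labels contains the given label key."""
    return label_key in node["metadata"].get("labels", {})

node = {"metadata": {"name": "worker-0", "labels": {OCS_LABEL: ""}}}
print(is_node_labeled(node))  # True
```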
- ocs_ci.ocs.node.is_node_rack_or_zone_exist(failure_domain, node_name)
Check if the node rack/zone exist
- Parameters:
failure_domain (str) – The failure domain
node_name (str) – The node name
- Returns:
True if the node rack/zone exist. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.label_nodes(nodes, label="cluster.ocs.openshift.io/openshift-storage=''")
Label nodes
- Parameters:
nodes (list) – list of node objects need to label
label (str) – New label to be assigned for these nodes. Default value is the OCS label
- ocs_ci.ocs.node.list_encrypted_rbd_devices_onnode(node)
Get rbd crypt devices from the node
- Parameters:
node – node name
- Returns:
List of encrypted osd device names
- ocs_ci.ocs.node.node_network_failure(node_names, wait=True)
Induce node network failure. Bring the node network interface down, making the node unresponsive.
- Parameters:
node_names (list) – The names of the nodes
wait (bool) – True in case wait for status is needed, False otherwise
- Returns:
True if node network fail is successful
- Return type:
bool
- ocs_ci.ocs.node.node_replacement_verification_steps_ceph_side(old_node_name, new_node_name, new_osd_node_name=None)
Check the verification steps from the Ceph side, after the process of node replacement as described in the docs
- Parameters:
old_node_name (str) – The name of the old node that has been deleted
new_node_name (str) – The name of the new node that has been created
new_osd_node_name (str) – The name of the new node that has been added to osd nodes
- Returns:
True if all the verification steps passed. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.node_replacement_verification_steps_user_side(old_node_name, new_node_name, new_osd_node_name, old_osd_ids)
Check the verification steps that the user should perform after the process of node replacement as described in the docs
- Parameters:
old_node_name (str) – The name of the old node that has been deleted
new_node_name (str) – The name of the new node that has been created
new_osd_node_name (str) – The name of the new node that has been added to osd nodes
old_osd_ids (list) – List of the old osd ids
- Returns:
True if all the verification steps passed. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.print_table_node_resource_utilization(utilization_dict, field_names)
Print table of node utilization
- Parameters:
utilization_dict (dict) – CPU and Memory utilization per Node
field_names (list) – The field names of the table
- ocs_ci.ocs.node.recover_node_to_ready_state(node_obj)
Recover the node to be in Ready state.
- Parameters:
node_obj (ocs_ci.ocs.resources.ocs.OCS) – The node object
- Returns:
True if the node recovered to Ready state. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.remove_nodes(nodes)
Remove the nodes from cluster
- Parameters:
nodes (list) – list of node instances to remove from cluster
- ocs_ci.ocs.node.replace_old_node_in_lvd_and_lvs(old_node_name, new_node_name)
Replace the old node with the new node in localVolumeDiscovery and localVolumeSet, as described in the documents of node replacement with LSO
- Parameters:
old_node_name (str) – The old node name to remove from the local volume
new_node_name (str) – the new node name to add to the local volume
- Returns:
True in case the changes are applied. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.scale_down_deployments(node_name)
Scale down the deployments of a node as described in the documents of node replacement with LSO
- Parameters:
node_name (str) – The node name
- ocs_ci.ocs.node.schedule_nodes(node_names)
Change nodes to be scheduled
- Parameters:
node_names (list) – The names of the nodes
- ocs_ci.ocs.node.taint_nodes(nodes, taint_label=None)
Taint nodes
- Parameters:
nodes (list) – list of node names need to taint
taint_label (str) – Taint label to be used, If None the constants.OPERATOR_NODE_TAINT will be used.
- ocs_ci.ocs.node.unschedule_nodes(node_names)
Change nodes to be unscheduled
- Parameters:
node_names (list) – The names of the nodes
- ocs_ci.ocs.node.untaint_nodes(taint_label=None, nodes_to_untaint=None)
Function to remove taints from nodes
- Parameters:
taint_label (str) – taint to use
nodes_to_untaint (list) – list of node objs to untaint
- Returns:
True if untainted, False otherwise
- Return type:
bool
- ocs_ci.ocs.node.verify_all_nodes_created()
Verify that all nodes are created
- Raises:
NotAllNodesCreated – In case not all nodes are created
- ocs_ci.ocs.node.verify_crypt_device_present_onnode(node, vol_handle)
Find the crypt device matching the given volume handle.
- Parameters:
node – node name
vol_handle – volume handle name
- Returns:
True if the volume handle device is found on the node. False if the volume handle device is not found on the node.
- Return type:
bool
- ocs_ci.ocs.node.verify_worker_nodes_security_groups()
Check the worker nodes security groups set correctly. The function checks that the pods ip are equal to their associated nodes.
- Returns:
True, if the worker nodes security groups set correctly. False otherwise
- Return type:
bool
- ocs_ci.ocs.node.wait_for_all_osd_ids_come_up_on_nodes(expected_osd_ids_per_node, timeout=360, sleep=20)
Wait for all the expected osd ids to come up on their associated nodes
- Parameters:
expected_osd_ids_per_node (dict) – The expected osd ids per node
timeout (int) – Time to wait for all the expected osd ids to come up on their associated nodes
sleep (int) – Time in seconds to sleep between attempts
- Returns:
- True, if all the expected osd ids come up on their associated nodes.
False, otherwise
- Return type:
bool
- ocs_ci.ocs.node.wait_for_new_osd_node(old_osd_node_names, timeout=600)
Wait for the new osd node to appear.
- Parameters:
old_osd_node_names (list) – List of the old osd node names
timeout (int) – time to wait for the new osd node to appear
- Returns:
- The new osd node name if the new osd node appear in the specific timeout.
Else it returns None
- Return type:
str
- ocs_ci.ocs.node.wait_for_new_worker_node_ipi(machineset, old_wnodes, timeout=900)
Wait for the new worker node to be ready
- Parameters:
machineset (str) – The machineset name
old_wnodes (list) – The old worker nodes
timeout (int) – Time to wait for the new worker node to be ready.
- Returns:
The new worker node object
- Return type:
- Raises:
ResourceWrongStatusException – In case the new spun machine fails to reach Ready state or replica count didn’t match. Or in case one or more nodes haven’t reached the desired state.
- ocs_ci.ocs.node.wait_for_node_count_to_reach_status(node_count, node_type='worker', expected_status='Ready', timeout=300, sleep=20)
Wait for a node count to reach the expected status
- Parameters:
node_count (int) – The node count
node_type (str) – The node type. Default value is worker.
expected_status (str) – The expected status. Default value is “Ready”.
timeout (int) – Time to wait for the node count to reach the expected status.
sleep (int) – Time in seconds to wait between attempts.
- Raises:
TimeoutExpiredError – In case the node count didn’t reach the expected status in the given timeout.
- ocs_ci.ocs.node.wait_for_nodes_racks_or_zones(failure_domain, node_names, timeout=120)
Wait for the nodes racks or zones to appear
- Parameters:
failure_domain (str) – The failure domain
node_names (list) – The node names to get their racks or zones
timeout (int) – The time to wait for the racks or zones to appear on the nodes
- Raises:
TimeoutExpiredError – In case not all the nodes racks or zones appear in the given timeout
- ocs_ci.ocs.node.wait_for_nodes_status(node_names=None, status='Ready', timeout=180, sleep=3)
Wait until all nodes are in the given status
- Parameters:
node_names (list) – The node names to wait for to reach the desired state. If None, will wait for all cluster nodes
status (str) – The node status to wait for (e.g. ‘Ready’, ‘NotReady’, ‘SchedulingDisabled’)
timeout (int) – The number in seconds to wait for the nodes to reach the status
sleep (int) – Time in seconds to sleep between attempts
- Raises:
ResourceWrongStatusException – In case one or more nodes haven’t reached the desired state
- ocs_ci.ocs.node.wait_for_osd_ids_come_up_on_node(node_name, expected_osd_ids, timeout=180, sleep=10)
Wait for the expected osd ids to come up on a node
- Parameters:
node_name (str) – The node name
expected_osd_ids (list) – The list of the expected osd ids to come up on the node
timeout (int) – Time to wait for the osd ids to come up on the node
sleep (int) – Time in seconds to sleep between attempts
- Returns:
True if the osd ids come up on the node. False otherwise
- Return type:
bool
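All the wait_for_* helpers above share a poll-until-timeout pattern: retry a check every `sleep` seconds until it succeeds or `timeout` seconds elapse. A generic sketch of that pattern (illustrative only; the actual ocs-ci implementation uses its own utilities):

```python
import time

# Illustrative sketch of the poll-until-timeout pattern used by the
# wait_for_* helpers in this module.
def wait_for(condition, timeout=180, sleep=10):
    """Poll `condition` (a zero-arg callable) until it returns True,
    or return False once `timeout` seconds have elapsed."""
    end = time.time() + timeout
    while time.time() < end:
        if condition():
            return True
        time.sleep(sleep)
    return False
```

For example, waiting for expected osd ids on a node would pass a closure that fetches the node's current osd ids and compares them to the expected list.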
ocs_ci.ocs.ocp module
General OCP object
- class ocs_ci.ocs.ocp.OCP(api_version='v1', kind='Service', namespace=None, resource_name='', selector=None, field_selector=None, cluster_kubeconfig='', threading_lock=None, silent=False, skip_tls_verify=False)
Bases:
object
A basic OCP object to run basic ‘oc’ commands
- add_label(resource_name, label)
Adds a new label for this pod
- Parameters:
resource_name (str) – Name of the resource you want to label
label (str) – New label to be assigned for this pod E.g: “label=app=’rook-ceph-mds’”
- annotate(annotation, resource_name='', overwrite=True)
Update the annotations on resource.
- Parameters:
annotation (str) – Annotation string (key=value pair or key- for removing annotation) E.g: ‘cluster.x-k8s.io/paused=””’
resource_name (str) – Name of the resource you want to label
overwrite (bool) – Overwrite existing annotation with the same key, (default: True)
- Returns:
Dictionary represents a returned yaml file
- Return type:
dict
- property api_version
- apply(yaml_file)
Applies configuration changes to a resource
- Parameters:
yaml_file (str) – Path to a yaml file to use in ‘oc apply -f file.yaml’
- Returns:
Dictionary represents a returned yaml file
- Return type:
dict
- check_function_supported(support_var)
Check if the resource supports the functionality based on the support_var.
- Parameters:
support_var (bool) – True if functionality is supported, False otherwise.
- Raises:
NotSupportedFunctionError – If support_var == False
- check_name_is_specified(resource_name='')
Check if the name of the resource is specified in class level and if not raise the exception.
- Raises:
ResourceNameNotSpecifiedException – in case the name is not specified.
- check_phase(phase)
Check phase of resource
- Parameters:
phase (str) – Phase of resource object
- Returns:
- True if phase of object is the same as passed one, False
otherwise.
- Return type:
bool
- Raises:
NotSupportedFunctionError – If resource doesn’t have phase!
ResourceNameNotSpecifiedException – in case the name is not specified.
- check_resource_existence(should_exist, timeout=60, resource_name='', selector=None)
Checks whether an OCP() resource exists
- Parameters:
should_exist (bool) – Whether the resource should or shouldn’t be found
timeout (int) – How long should the check run before moving on
resource_name (str) – Name of the resource.
selector (str) – Selector of the resource.
- Returns:
True if the resource was found, False otherwise
- Return type:
bool
- create(yaml_file=None, resource_name='', out_yaml_format=True)
Creates a new resource
- Parameters:
yaml_file (str) – Path to a yaml file to use in ‘oc create -f file.yaml’
resource_name (str) – Name of the resource you want to create
out_yaml_format (bool) – Determines if the output should be formatted to a yaml like string
- Returns:
Dictionary represents a returned yaml file
- Return type:
dict
- property data
- delete(yaml_file=None, resource_name='', wait=True, force=False, timeout=600)
Deletes a resource
- Parameters:
yaml_file (str) – Path to a yaml file to use in ‘oc delete -f file.yaml’
resource_name (str) – Name of the resource you want to delete
wait (bool) – Determines if the delete command should wait to completion
force (bool) – True for force deletion with --grace-period=0, False otherwise
timeout (int) – timeout for the oc_cmd, defaults to 600 seconds
- Returns:
Dictionary represents a returned yaml file
- Return type:
dict
- Raises:
CommandFailed – In case yaml_file and resource_name wasn’t provided
- delete_project(project_name)
Delete a project. A project created by the new_project function does not have a corresponding yaml file, so normal resource deletion calls do not work
- Parameters:
project_name (str) – Name of the project to be deleted
- Returns:
True in case project deletion succeeded.
- Return type:
bool
- Raises:
CommandFailed – When the project deletion does not succeed.
- describe(resource_name='', selector=None, all_namespaces=False)
Get command - ‘oc describe <resource>’
- Parameters:
resource_name (str) – The resource name to fetch
selector (str) – The label selector to look for
all_namespaces (bool) – Equal to oc describe <resource> -A
Example
describe(‘my-pv1’)
- Returns:
Dictionary represents a returned yaml file
- Return type:
dict
- exec_oc_cmd(command, out_yaml_format=True, secrets=None, timeout=600, ignore_error=False, silent=False, cluster_config=None, skip_tls_verify=False, **kwargs)
Executing ‘oc’ command
- Parameters:
command (str) – The command to execute (e.g. create -f file.yaml) without the initial ‘oc’ at the beginning
out_yaml_format (bool) – whether to return yaml loaded python object or raw output
secrets (list) – A list of secrets to be masked with asterisks. This kwarg is popped in order to not interfere with subprocess.run(**kwargs)
timeout (int) – timeout for the oc_cmd, defaults to 600 seconds
ignore_error (bool) – True if ignore non zero return code and do not raise the exception.
silent (bool) – If True will silent errors from the server, default false
cluster_config (MultiClusterConfig) – cluster_config will be used only in the context of multicluster executions
skip_tls_verify (bool) – Adding ‘--insecure-skip-tls-verify’ to oc command
- Returns:
Dictionary represents a returned yaml file. str: If out_yaml_format is False.
- Return type:
dict
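The secrets masking described above can be illustrated with a small sketch; `mask_secrets` is a hypothetical helper, not the actual ocs-ci implementation:

```python
def mask_secrets(text, secrets):
    """Replace every occurrence of each secret in text with asterisks,
    so credentials never reach the logs (illustrative sketch)."""
    for secret in secrets or []:
        text = text.replace(secret, "*****")
    return text

# Mask a login token before printing the executed command
masked = mask_secrets("oc login --token=sha256~abc123", ["sha256~abc123"])
```

Passing `secrets=None` leaves the text untouched, which mirrors the optional nature of the kwarg.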
- exec_oc_debug_cmd(node, cmd_list, timeout=300, namespace=None)
Function to execute “oc debug” command on OCP node
- Parameters:
node (str) – Node name where the command is to be executed
cmd_list (list) – List of commands eg: [‘cmd1’, ‘cmd2’]
timeout (int) – timeout for the exec_oc_cmd, defaults to 300 seconds
namespace (str) – Namespace name which will be used to create debug pods
- Returns:
Returns output of the executed command/commands
- Return type:
out (str)
- Raises:
CommandFailed – When failure in command execution
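A sketch of how such a debug invocation might be assembled; the `&&` chaining and the `chroot /host` path are assumptions for illustration, not the verified ocs-ci behaviour:

```python
def build_oc_debug_cmd(node, cmd_list, namespace=None):
    """Assemble an 'oc debug' command that runs a list of commands on a
    node's host; joining with '&&' is an illustrative choice."""
    joined = " && ".join(cmd_list)
    ns_opt = f" -n {namespace}" if namespace else ""
    return (
        f"oc debug nodes/{node}{ns_opt} -- "
        f'chroot /host /bin/bash -c "{joined}"'
    )

cmd = build_oc_debug_cmd("worker-0", ["uptime", "df -h"])
```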
- get(resource_name='', out_yaml_format=True, selector=None, all_namespaces=False, retry=0, wait=3, dont_raise=False, silent=False, field_selector=None, cluster_config=None, skip_tls_verify=False)
Get command - ‘oc get <resource>’
- Parameters:
resource_name (str) – The resource name to fetch
out_yaml_format (bool) – Adding ‘-o yaml’ to oc command
selector (str) – The label selector to look for.
all_namespaces (bool) – Equal to oc get <resource> -A
retry (int) – Number of attempts to retry to get resource
wait (int) – Number of seconds to wait between attempts for retry
dont_raise (bool) – If True, will not raise an exception when the resource is not found
field_selector (str) – Selector (field query) to filter on, supports ‘=’, ‘==’, and ‘!=’. (e.g. status.phase=Running)
skip_tls_verify (bool) – Adding ‘--insecure-skip-tls-verify’ to oc command
Example
get(‘my-pv1’)
- Returns:
Dictionary represents a returned yaml file. None: In case dont_raise is True and the resource is not found
- Return type:
dict
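The retry/wait/dont_raise semantics above can be illustrated with a generic polling wrapper; `get_with_retry` is a hypothetical helper, not part of the OCP class:

```python
import time

def get_with_retry(fetch, retry=0, wait=3, dont_raise=False):
    """Call fetch() up to retry+1 times, sleeping `wait` seconds between
    failed attempts; mirrors the retry/wait/dont_raise semantics."""
    for attempt in range(retry + 1):
        try:
            return fetch()
        except Exception:
            if attempt < retry:
                time.sleep(wait)
    if dont_raise:
        return None
    raise RuntimeError("resource not found after retries")

attempts = {"n": 0}

def flaky_get():
    # Fails twice, then returns a resource dict.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("NotFound")
    return {"kind": "Pod", "status": "Running"}

result = get_with_retry(flaky_get, retry=5, wait=0)
```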
- get_logs(name, container_name=None, all_containers=False, secrets=None, timeout=None, ignore_error=False)
Execute the ‘oc logs’ command to fetch logs for a given k8s resource. Since the log is stored as a string in memory, this can be problematic when the log is large.
- Parameters:
name (str) – name of the resource to fetch logs from
container_name (str) – name of the container (optional)
all_containers (bool) – fetch logs from all containers of the resource
secrets (list) – A list of secrets to be masked with asterisks
timeout (int) – timeout for the oc_cmd
ignore_error (bool) – True if ignore non zero return code and do not raise the exception.
- Returns:
container logs
- Return type:
str
- get_resource(resource_name, column, retry=0, wait=3, selector=None)
Get a column value for a resource based on: ‘oc get <resource_kind> <resource_name>’ command
- Parameters:
resource_name (str) – The name of the resource to get its column value
column (str) – The name of the column to retrieve
retry (int) – Number of attempts to retry to get the resource
wait (int) – Number of seconds to wait between attempts for retry
selector (str) – The resource selector to search with.
- Returns:
- The output returned by the ‘oc get’ command, not in yaml format
- Return type:
str
- get_resource_status(resource_name, column='STATUS')
Get the resource STATUS column based on: ‘oc get <resource_kind> <resource_name>’ command
- Parameters:
resource_name (str) – The name of the resource to get its STATUS
- Returns:
- The status returned by the ‘oc get’ command, not in yaml format
- Return type:
str
- get_user_token()
Get user access token
- Returns:
access token
- Return type:
str
- is_exist(resource_name='', selector=None)
Check if at least one of the resource exists.
- Parameters:
resource_name (str) – Name of the resource.
selector (str) – Selector of the resource.
- Raises:
ResourceNameNotSpecifiedException – In case the name is not specified.
- Returns:
True if the resource exists, False otherwise.
- Return type:
bool
- property kind
- login(user, password)
Logs user in
- Parameters:
user (str) – Name of user to be logged in
password (str) – Password of user to be logged in
- Returns:
output of login command
- Return type:
str
- login_as_sa()
Logs in as system:admin
- Returns:
output of login command
- Return type:
str
- property namespace
- new_project(project_name, policy='baseline')
Creates a new project
- Parameters:
project_name (str) – Name of the project to be created
policy (str) – The pod security admission policy for the new namespace (default: ‘baseline’)
- Returns:
True in case project creation succeeded, False otherwise
- Return type:
bool
- patch(resource_name='', params=None, format_type='')
Applies changes to resources
- Parameters:
resource_name (str) – Name of the resource
params (str) – Changes to be added to the resource
format_type (str) – Type of the patch operation
- Returns:
True if the changes were applied, False otherwise
- Return type:
bool
- reload_data()
Reloading data of OCP object
- remove_label(resource_name, label)
Remove label from the resource.
- Parameters:
resource_name (str) – Name of the resource from which to remove the label.
label (str) – Name of the label to be removed.
- property resource_name
- wait(resource_name='', condition='Available', timeout=300, selector=None)
Wait for a resource to meet a specific condition using ‘oc wait’ command.
- Parameters:
resource_name (str) – The name of the specific resource to wait for.
condition (str) – The condition to wait for (e.g. ‘Available’, ‘Ready’).
timeout (int) – Timeout in seconds for the wait operation.
selector (str) – The label selector to look for
- Raises:
TimeoutExpiredError – If the resource does not meet the specified condition within the timeout.
- wait_for_delete(resource_name='', timeout=60, sleep=3)
Wait for a resource to be deleted
- Parameters:
resource_name (str) – The name of the resource to wait for (e.g. my-pv1)
timeout (int) – Time in seconds to wait
sleep (int) – Sampling time in seconds
- Raises:
CommandFailed – If failed to verify the resource deletion
TimeoutError – If resource is not deleted within specified timeout
- Returns:
True in case resource deletion is successful
- Return type:
bool
- wait_for_phase(phase, timeout=300, sleep=5)
Wait till the phase of the resource is the same as the required one passed in the phase parameter.
- Parameters:
phase (str) – Desired phase of resource object
timeout (int) – Timeout in seconds to wait for desired phase
sleep (int) – Time in seconds to sleep between attempts
- Raises:
ResourceWrongStatusException – In case the resource is not in expected phase.
NotSupportedFunctionError – If resource doesn’t have phase!
ResourceNameNotSpecifiedException – in case the name is not specified.
- wait_for_resource(condition, resource_name='', column='STATUS', selector=None, resource_count=0, timeout=60, sleep=3, dont_allow_other_resources=False, error_condition=None)
Wait for a resource to reach to a desired condition
- Parameters:
condition (str) – The desired state of the resource, sampled from the ‘oc get <kind> <resource_name>’ command
resource_name (str) – The name of the resource to wait for (e.g. my-pv1)
column (str) – The name of the column to compare with
selector (str) – The resource selector to search with. Example: ‘app=rook-ceph-mds’
resource_count (int) – How many resources expected to be
timeout (int) – Time in seconds to wait
sleep (int) – Sampling time in seconds
dont_allow_other_resources (bool) – If True, do not allow other resources in a different state. For example, when waiting for 2 resources and there are currently 3 (2 in Running state, 1 in ContainerCreating), the function continues to the next iteration until only 2 resources are in the Running state and no others exist.
error_condition (str) – State of the resource, sampled from the ‘oc get <kind> <resource_name>’ command, which makes this method fail immediately without waiting for the timeout. This is optional and makes sense only when there is a well defined, unrecoverable state of the resource(s) which is not expected to be part of the workflow under test, while the timeout itself is large.
- Returns:
- True in case all resources reached the desired condition, False otherwise
- Return type:
bool
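The sampling loop behind wait_for_resource can be sketched with a callable standing in for the ‘oc get’ command; `sample` and this simplified helper are illustrative, not the actual implementation:

```python
import time

def wait_for_resource(sample, condition, resource_count=1, timeout=60, sleep=3):
    """Poll sample() (a callable returning the current per-resource states)
    until at least resource_count entries equal condition, or time out."""
    end = time.time() + timeout
    while time.time() < end:
        states = sample()
        if sum(1 for state in states if state == condition) >= resource_count:
            return True
        time.sleep(sleep)
    return False

calls = {"n": 0}

def fake_sample():
    # One pod is still ContainerCreating on the first sample.
    calls["n"] += 1
    if calls["n"] < 2:
        return ["ContainerCreating", "Running"]
    return ["Running", "Running"]

reached = wait_for_resource(fake_sample, "Running", resource_count=2, timeout=5, sleep=0)
```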
- ocs_ci.ocs.ocp.clear_overprovision_spec(ignore_errors=False)
Remove cluster overprovision policy.
- Parameters:
ignore_errors (bool) – Flag to report errors.
- Returns:
True on success and False on error.
- Return type:
bool
- ocs_ci.ocs.ocp.confirm_cluster_operator_version(target_version, cluster_operator)
Check if the cluster operator upgrade process is completed
- Parameters:
cluster_operator (str) – ClusterOperator name
target_version (str) – expected OCP version
- Returns:
True if success, False if failed
- Return type:
bool
- ocs_ci.ocs.ocp.get_all_cluster_operators()
Get all ClusterOperators names in OCP
- Returns:
cluster-operator names
- Return type:
list
- ocs_ci.ocs.ocp.get_all_resource_names_of_a_kind(kind)
Returns all the resource names of a particular type
- Parameters:
kind (str) – The resource type to look for
- Returns:
A list of strings
- Return type:
(list)
- ocs_ci.ocs.ocp.get_all_resource_of_kind_containing_string(search_string, kind)
Return all the resources of a kind whose name contains search_string
- Parameters:
search_string (str) – The string to search for in the name of the resource
kind (str) – Kind of the resource to search for
- Returns:
List of resource
- Return type:
(list)
- ocs_ci.ocs.ocp.get_build()
Return the OCP Build Version
- Returns:
The build version of the OCP
- Return type:
str
- ocs_ci.ocs.ocp.get_cluster_operator_version(cluster_operator_name)
Get image version of selected cluster operator
- Parameters:
cluster_operator_name (str) – ClusterOperator name
- Returns:
cluster operator version: ClusterOperator image version
- Return type:
str
- ocs_ci.ocs.ocp.get_clustername()
Return the name (DNS short name) of the cluster
- Returns:
the short DNS name of the cluster
- Return type:
str
- ocs_ci.ocs.ocp.get_current_oc_version()
Gets Current OCP client version
- Returns:
current OCP client version
- Return type:
str
- ocs_ci.ocs.ocp.get_images(data, images=None)
Get the images from the ocp object like pod, CSV and so on.
- Parameters:
data (dict) – Yaml data from the object.
images (dict) – Dict where to put the images (doesn’t have to be set!).
- Returns:
Images dict like: {‘image_name’: ‘image.url.to:tag’, …}
- Return type:
dict
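A simplified sketch of this recursive walk over the object's yaml data; the real helper may key the result differently, and keying by the sibling ‘name’ field is an assumption here:

```python
def get_images(data, images=None):
    """Recursively collect values of 'image' keys from nested yaml data,
    keyed by the sibling 'name' when present (a simplified sketch)."""
    if images is None:
        images = {}
    if isinstance(data, dict):
        if "image" in data:
            images[data.get("name", data["image"])] = data["image"]
        for value in data.values():
            get_images(value, images)
    elif isinstance(data, list):
        for item in data:
            get_images(item, images)
    return images

pod = {"spec": {"containers": [
    {"name": "mgr", "image": "quay.io/ceph/ceph:v17"},
    {"name": "log", "image": "registry.example/log:1.2"},
]}}
found = get_images(pod)
```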
- ocs_ci.ocs.ocp.get_ocp_channel()
Return the OCP Channel
- Returns:
The channel of the OCP
- Return type:
str
- ocs_ci.ocs.ocp.get_ocp_upgrade_channel()
Gets OCP upgrade channel
- Returns:
OCP upgrade channel name
- Return type:
str
- ocs_ci.ocs.ocp.get_ocp_url()
Getting default URL for OCP console
- Returns:
OCP console URL
- Return type:
str
- ocs_ci.ocs.ocp.get_services_by_label(label, namespace)
Fetches service resources with the given label in the given namespace
- Parameters:
label (str) – label which the services might have
namespace (str) – Namespace in which to be looked up
- Returns:
service OCP instances
- Return type:
list
- ocs_ci.ocs.ocp.patch_ocp_upgrade_channel(channel_variable='stable-4.16')
Using ‘oc patch clusterversion’ if the new OCP upgrade channel is different from the current one
- Parameters:
channel_variable (str) – New OCP upgrade subscription channel
- ocs_ci.ocs.ocp.rsync(src, dst, node, dst_node=True, extra_params='')
This function will rsync the source folder to the destination path. You can rsync a local folder to the node or vice versa, depending on the dst_node parameter. By default, the rsync is from local to the node.
- Parameters:
src (str) – Source path of folder to rsync.
dst (str) – Destination path where to rsync.
node (str) – Node to/from copy.
dst_node (bool) – True if the destination (dst) is the node, False when dst is the local folder.
extra_params (str) – See ‘oc rsync --help’ for the extra params
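How such an invocation might be put together, assuming the node side is addressed through a pod-style `<name>:<path>` spec; this simplifies the real debug-pod mechanics and `build_rsync_cmd` is a hypothetical helper:

```python
def build_rsync_cmd(src, dst, node_spec, dst_node=True, extra_params=""):
    """Build the 'oc rsync' command line; node_spec stands in for however
    the node side is reached (in practice a debug pod)."""
    parts = ["oc", "rsync"]
    if extra_params:
        parts.append(extra_params)
    if dst_node:
        # Destination is on the node: local src -> node dst
        parts.append(f"{src} {node_spec}:{dst}")
    else:
        # Source is on the node: node src -> local dst
        parts.append(f"{node_spec}:{src} {dst}")
    return " ".join(parts)

to_node = build_rsync_cmd("/tmp/logs", "/host/tmp", "debug-pod")
from_node = build_rsync_cmd("/host/tmp", "/tmp/logs", "debug-pod", dst_node=False)
```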
- ocs_ci.ocs.ocp.set_overprovision_policy(capacity, quota_name, sc_name, label)
Set OverProvisionControl Policy.
- Parameters:
capacity (str) – storage capacity e.g. 50Gi
quota_name (str) – quota name.
sc_name (str) – storage class name
label (dict) – storage quota labels.
- Returns:
True on success and False on error.
- Return type:
bool
- ocs_ci.ocs.ocp.switch_to_default_rook_cluster_project()
Switch to default project
- Returns:
True on success, False otherwise
- Return type:
bool
- ocs_ci.ocs.ocp.switch_to_project(project_name)
Switch to another project
- Parameters:
project_name (str) – Name of the project to be switched to
- Returns:
True on success, False otherwise
- Return type:
bool
- ocs_ci.ocs.ocp.upgrade_ocp(image_path, image)
Upgrade OCP version
- Parameters:
image (str) – image to be installed
image_path (str) – path to image
- ocs_ci.ocs.ocp.validate_cluster_version_status()
Verify OCP upgrade is completed, by checking ‘oc get clusterversion’ status
- Returns:
- False in case one of the condition flags is invalid:
Progressing (should be False), Failing (should be False) or Available (should be True)
- Return type:
bool
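The condition-flag check above can be sketched over a pre-parsed conditions list; this is an illustrative helper operating on the shape of `status.conditions` in a clusterversion object, not the actual implementation:

```python
def validate_cluster_version_status(conditions):
    """Check clusterversion condition flags: Progressing and Failing must
    be False and Available must be True (sketch over parsed conditions)."""
    expected = {"Progressing": "False", "Failing": "False", "Available": "True"}
    actual = {c["type"]: c["status"] for c in conditions}
    return all(actual.get(flag) == value for flag, value in expected.items())

healthy = [
    {"type": "Available", "status": "True"},
    {"type": "Failing", "status": "False"},
    {"type": "Progressing", "status": "False"},
]
ok = validate_cluster_version_status(healthy)
```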
- ocs_ci.ocs.ocp.verify_cluster_operator_status(cluster_operator)
Checks if cluster operator status is degraded or progressing, as sign that upgrade not yet completed
- Parameters:
cluster_operator (str) – OCP cluster operator name
- Returns:
True if cluster operator status is valid, False if cluster operator status is “degraded” or “progressing”
- Return type:
bool
- ocs_ci.ocs.ocp.verify_images_upgraded(old_images, object_data)
Verify that all images in ocp object are upgraded.
- Parameters:
old_images (set) – Set with old images.
object_data (dict) – OCP object yaml data.
- Raises:
NonUpgradedImagesFoundError – In case the images weren’t upgraded.
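A string-search sketch of this check; serializing the object data and looking for old image URLs is an assumption about the approach, not the verified ocs-ci code:

```python
import json

class NonUpgradedImagesFoundError(Exception):
    pass

def verify_images_upgraded(old_images, object_data):
    """Raise when any old image URL still appears anywhere in the object's
    yaml data (a string-search sketch of the check described above)."""
    serialized = json.dumps(object_data)
    leftovers = {image for image in old_images if image in serialized}
    if leftovers:
        raise NonUpgradedImagesFoundError(f"Old images still in use: {leftovers}")

pod = {"spec": {"containers": [{"image": "registry.example/ceph:v2"}]}}
verify_images_upgraded({"registry.example/ceph:v1"}, pod)  # all upgraded, no raise

try:
    verify_images_upgraded({"registry.example/ceph:v2"}, pod)
    raised = False
except NonUpgradedImagesFoundError:
    raised = True
```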
- ocs_ci.ocs.ocp.verify_ocp_upgrade_channel(channel_variable='stable-4.16')
When upgrading the OCP version, verify that the subscription channel is the same as the current one
- Parameters:
channel_variable (str) – New OCP upgrade subscription channel
- Returns:
- True when the OCP subscription channel is correct and no patch is needed
- Return type:
bool
- ocs_ci.ocs.ocp.wait_for_cluster_connectivity(tries=200, delay=3)
Wait for the cluster to be reachable
- Parameters:
tries (int) – The number of retries
delay (int) – The delay in seconds between retries
- Returns:
True if cluster is reachable, False otherwise
- Return type:
bool
- Raises:
CommandFailed – In case the cluster is unreachable
ocs_ci.ocs.ocs_upgrade module
- class ocs_ci.ocs.ocs_upgrade.OCSUpgrade(namespace, version_before_upgrade, ocs_registry_image, upgrade_in_current_source)
Bases:
object
OCS Upgrade helper class
- check_if_upgrade_completed(channel, csv_name_pre_upgrade)
Checks if the OCS operator finished its upgrade
- Parameters:
channel (str) – OCS subscription channel
csv_name_pre_upgrade (str) – OCS operator name
- Returns:
True if upgrade completed, False otherwise
- Return type:
bool
- get_csv_name_pre_upgrade()
Getting OCS operator name as displayed in CSV
- Returns:
OCS operator name, as displayed in CSV
- Return type:
str
- get_images_post_upgrade(channel, pre_upgrade_images, upgrade_version)
- Checks if all images of the OCS cluster were upgraded, and returns all images if the upgrade succeeded
- Parameters:
channel (str) – OCS subscription channel
pre_upgrade_images (dict) – Contains all OCS cluster images
upgrade_version (str) – version to be upgraded to
- Returns:
Contains full path of OCS cluster old images
- Return type:
set
- get_parsed_versions()
Getting parsed version names, current version, and upgrade version
- Returns:
- contains 2 strings with the parsed names of the current version and the upgrade version
- Return type:
tuple
- get_pre_upgrade_image(csv_name_pre_upgrade)
Getting all OCS cluster images, before upgrade
- Parameters:
csv_name_pre_upgrade (str) – OCS operator name
- Returns:
- Contains all OCS cluster images:
dict keys: Image name; dict values: Image full path
- Return type:
dict
- get_upgrade_version()
Getting the required upgrade version
- Returns:
version to be upgraded
- Return type:
str
- load_version_config_file(upgrade_version)
Loads config file to the ocs-ci config with upgrade version
- Parameters:
upgrade_version (str) – version to be upgraded to
- property ocs_registry_image
- set_upgrade_channel()
Wait for the new package manifest for upgrade.
- Returns:
OCS subscription channel
- Return type:
str
- set_upgrade_images()
Set images for upgrade
- update_subscription(channel)
Updating OCS operator subscription
- Parameters:
channel (str) – OCS subscription channel
- property version_before_upgrade
- ocs_ci.ocs.ocs_upgrade.get_upgrade_image_info(old_csv_images, new_csv_images)
Log the info about the images which are going to be upgraded.
- Parameters:
old_csv_images (dict) – dictionary with the old images from pre upgrade CSV
new_csv_images (dict) – dictionary with the post upgrade images
- Returns:
- with three sets which contain those types of images:
old_images_for_upgrade – images which should be upgraded; new_images_to_upgrade – new images to upgrade to; unchanged_images – images left unchanged by the upgrade
- Return type:
tuple
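The three sets above can be computed by a straightforward set difference over the image dicts; this is a sketch consistent with the description, not the verbatim ocs-ci function:

```python
def get_upgrade_image_info(old_csv_images, new_csv_images):
    """Split old/new CSV image dicts into the three sets described above."""
    old = set(old_csv_images.values())
    new = set(new_csv_images.values())
    old_images_for_upgrade = old - new   # images that should be upgraded
    new_images_to_upgrade = new - old    # new images to upgrade to
    unchanged_images = old & new         # images untouched by the upgrade
    return old_images_for_upgrade, new_images_to_upgrade, unchanged_images

old_csv = {"rook": "rook:v4.15", "noobaa": "noobaa:v4.15"}
new_csv = {"rook": "rook:v4.16", "noobaa": "noobaa:v4.15"}
result = get_upgrade_image_info(old_csv, new_csv)
```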
- ocs_ci.ocs.ocs_upgrade.ocs_odf_upgrade_ui()
Function to upgrade OCS 4.8 to ODF 4.9 via UI on OCP 4.9. Pass the proper versions and upgrade_ui.yaml when running this function so that validation passes
- ocs_ci.ocs.ocs_upgrade.run_ocs_upgrade(operation=None, *operation_args, **operation_kwargs)
Run upgrade procedure of OCS cluster
- Parameters:
operation (function) – Function to run
operation_args (iterable) – Function’s arguments
operation_kwargs (map) – Function’s keyword arguments
- ocs_ci.ocs.ocs_upgrade.verify_image_versions(old_images, upgrade_version, version_before_upgrade)
Verify if all the images of OCS objects got upgraded
- Parameters:
old_images (set) – set with old images
upgrade_version (packaging.version.Version) – version of OCS
version_before_upgrade (float) – version of OCS before upgrade
ocs_ci.ocs.openshift_ops module
- class ocs_ci.ocs.openshift_ops.OCP
Bases:
object
Class which contains various utility functions for interacting with OpenShift
- static call_api(method, **kw)
This function makes generic REST calls
- Parameters:
method (str) – one of the GET, CREATE, PATCH, POST, DELETE
**kw – Based on context of the call kw will be populated by caller
- Returns:
ResourceInstance object
- create_project(project)
Creates new project
- Parameters:
project (str) – project name
- Returns:
True if successful otherwise False
- Return type:
bool
- get_labels(pod_name, pod_namespace)
Get labels from specific pod
- Parameters:
pod_name (str) – Name of pod in oc cluster
pod_namespace (str) – pod namespace in which the pod lives
- Raises:
NotFoundError – If resource not found
- Returns:
All the openshift labels on a given pod
- Return type:
dict
- get_pods(**kw)
Get pods in specific namespace or across oc cluster.
- Parameters:
**kw – ex: namespace=rook-ceph, label_selector=’x==y’
- Returns:
- of pod names; if no namespace is provided, this function
returns all pods across the openshift cluster.
- Return type:
list
- get_projects()
Gets all the projects in the cluster
- Returns:
List of projects
- Return type:
list
- get_services()
Gets all the services in the cluster
- Returns:
- defaultdict of services, key represents the namespace
and value represents the services
- Return type:
dict
- get_services_in_namespace(namespace)
Gets the services in a namespace
- Returns:
list of services in a namespace
- Return type:
list
- static set_kubeconfig(kubeconfig_path)
Export environment variable KUBECONFIG for future calls of OC commands or other API calls
- Parameters:
kubeconfig_path (str) – path to kubeconfig file to be exported
- Returns:
True if successfully connected to cluster, False otherwise
- Return type:
boolean
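The KUBECONFIG export can be sketched as below; checking file existence stands in for the connectivity check, which this simplified version omits:

```python
import os
import tempfile

def set_kubeconfig(kubeconfig_path):
    """Export KUBECONFIG for subsequent oc/API calls; returns False when
    the file does not exist (no cluster connectivity check in this sketch)."""
    if not os.path.isfile(kubeconfig_path):
        return False
    os.environ["KUBECONFIG"] = kubeconfig_path
    return True

# Simulate a kubeconfig file for demonstration
with tempfile.NamedTemporaryFile(delete=False) as fake_kubeconfig:
    path = fake_kubeconfig.name
ok = set_kubeconfig(path)
```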
ocs_ci.ocs.openstack module
- class ocs_ci.ocs.openstack.CephVMNode(**kw)
Bases:
object
- attach_floating_ip(timeout=120)
- create_node(**kw)
- destroy_node()
Relies on the fact that names should be unique. Along the chain we prevent non-unique names from being used/added. TODO: raise an exception if more than one node is matched to the name, which can be propagated back to the client.
- destroy_volume(name)
- get_driver(**kw)
- get_private_ip()
Workaround: self.node.private_ips returns an empty list.
- get_volume(name)
Return libcloud.compute.base.StorageVolume
- exception ocs_ci.ocs.openstack.GetIPError
Bases:
Exception
- exception ocs_ci.ocs.openstack.InvalidHostName
Bases:
Exception
- exception ocs_ci.ocs.openstack.NodeErrorState
Bases:
Exception
ocs_ci.ocs.osd_operations module
- ocs_ci.ocs.osd_operations.osd_device_replacement(nodes)
Replacing randomly picked osd device
- Parameters:
node (OCS) – The OCS object representing the node
ocs_ci.ocs.parallel module
- class ocs_ci.ocs.parallel.ExceptionHolder(exc_info)
Bases:
object
- ocs_ci.ocs.parallel.capture_traceback(func, *args, **kwargs)
Utility function to capture tracebacks of any exception func raises.
- class ocs_ci.ocs.parallel.parallel
Bases:
object
This class is a context manager for running functions in parallel.
You add functions to be run with the spawn method:
with parallel() as p:
    for foo in bar:
        p.spawn(quux, foo, baz=True)
You can iterate over the results (which are in arbitrary order):
with parallel() as p:
    for foo in bar:
        p.spawn(quux, foo, baz=True)
    for result in p:
        print(result)
If one of the spawned functions throws an exception, it will be thrown when iterating over the results, or when the with block ends.
At the end of the with block, the main thread waits until all spawned functions have completed, or, if one exited with an exception, kills the rest and raises the exception.
- spawn(func, *args, **kwargs)
- ocs_ci.ocs.parallel.resurrect_traceback(exc)
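The spawn-and-iterate pattern of the parallel context manager can be mimicked with concurrent.futures; this stand-in does not kill still-running tasks on failure as the real class does, and is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

class parallel:
    """Minimal stand-in for the context manager above, built on
    concurrent.futures instead of the real internals."""

    def __enter__(self):
        self._executor = ThreadPoolExecutor()
        self._futures = []
        return self

    def spawn(self, func, *args, **kwargs):
        self._futures.append(self._executor.submit(func, *args, **kwargs))

    def __iter__(self):
        for future in self._futures:
            yield future.result()  # re-raises any exception from the task

    def __exit__(self, exc_type, exc, tb):
        self._executor.shutdown(wait=True)
        for future in self._futures:
            future.result()  # surface the first stored exception
        return False

results = []
with parallel() as p:
    for n in (1, 2, 3):
        p.spawn(lambda x: x * x, n)
    results = sorted(p)
```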
ocs_ci.ocs.perf_pgsql module
PGSQL performance workload class
- class ocs_ci.ocs.perf_pgsql.PerfPGSQL(**kwargs)
Bases:
Postgresql
Class to create pgsql, pgbench pods and run workloads
- validate_pgbench_perf(pgbench_pods)
Processing the pod output and printing latency and tps in table format for all the pods
ocs_ci.ocs.perfresult module
Basic Module to manage performance results
- class ocs_ci.ocs.perfresult.PerfResult(uuid, crd)
Bases:
object
Basic Performance results object for Q-PAS team
- add_key(key, value)
Adding (key and value) to this object results dictionary as a new dictionary.
- Parameters:
key (str) – String which will be the key for the value
value – value to add, can be any kind of data type
- dump_to_file()
Writing the test results data into a JSON file, which can be loaded into the ElasticSearch server
- es_connect()
Create Elastic-Search server connection
- es_read()
Reading all test results from the elastic-search server
- Returns:
list of results
- Return type:
list
- Assert:
if no data found in the server
- es_write()
Writing the results to the elastic-search server, and to a JSON file
- results_link()
Create a link to the results of the test in the elasticsearch server
- Returns:
http link to the test results in the elastic-search server
- Return type:
str
- class ocs_ci.ocs.perfresult.ResultsAnalyse(uuid, crd, full_log_path, index_name)
Bases:
PerfResult
This class generates results for all tests as one unit and saves them to an elastic search server on the cluster
ocs_ci.ocs.perftests module
- class ocs_ci.ocs.perftests.PASTest
Bases:
BaseTest
Base class for QPAS team - Performance and Scale tests
This class contains functions used by performance and scale tests, and can also be used by E2E tests which use the benchmark-operator (ripsaw)
- add_test_to_results_check(test, test_count, test_name)
Adding test information to the list of test(s) whose results we want to check and push to the dashboard.
- Parameters:
test (str) – the name of the test function that we want to check
test_count (int) – number of test(s) that need to run - according to parametrize
test_name (str) – the test name in the Performance dashboard
- check_results_and_push_to_dashboard()
Checking that all test(s) finished OK, and pushing the results into the performance dashboard
- check_tests_results()
Check that all sub-tests (test multiplication by parameters) finished and pushed the data to the Elasticsearch server. It also generates the es link to push into the performance dashboard.
- cleanup_testing_pod()
- cleanup_testing_pvc()
- copy_es_data(elasticsearch)
Copy data from Internal ES (if exists) to the main ES
- Parameters:
elasticsearch (obj) – elasticsearch object (if exits)
- create_fio_pod_yaml(pvc_size=1, filesize=0)
This function creates a new performance pod yaml file, which triggers the FIO command on startup and reaches the Completed state when finished
If the filesize argument is not provided, FIO will fill up 70% of the PVC which will be attached to the pod.
- Parameters:
pvc_size (int/float) – the size of the pvc which will be attached to the pod (in GiB)
filesize (str) – the filesize to write into (e.g. 100Mi, 30Gi)
- create_new_pool(pool_name)
Creating a new storage pool for RBD / CephFS to use in a test, so it can be deleted at the end of the test for fast cleanup
- Parameters:
pool_name (str) – the name of the pool to create
- create_test_project()
Creating new project (namespace) for performance test
- create_testing_pod_and_wait_for_completion(**kwargs)
- create_testing_pvc_and_wait_for_bound()
- delete_ceph_pool(pool_name)
Delete Storage pool (RBD / CephFS) that was created for the test for fast cleanup.
- Parameters:
pool_name (str) – the name of the pool to be deleted
- delete_test_project()
Deleting the performance test project (namespace)
- deploy_and_wait_for_wl_to_start(timeout=300, sleep=20)
Deploy the workload and wait until it starts working
- Parameters:
timeout (int) – time in seconds to wait until the benchmark starts
sleep (int) – Sleep interval seconds
- deploy_benchmark_operator()
Deploy the benchmark operator
- es_connect()
Create elasticsearch connection to the server
- Returns:
True if there is a connection to the ES, False if not.
- Return type:
bool
- es_info_backup(elasticsearch)
Saving the Original elastic-search IP and PORT - if defined in yaml
- Parameters:
elasticsearch (obj) – elasticsearch object
- get_cephfs_data()
Look through ceph pods and find space usage on all ceph pools
- Returns:
total used capacity in GiB.
- Return type:
int
- get_env_info()
Getting the environment information and update the workload RC if necessary.
- get_kibana_indexid(server, name)
Get the kibana Index ID by its name.
- Parameters:
server (str) – the IP (or name) of the Kibana server
name (str) – the name of the index
- Returns:
- the index ID of the given name,
or None if the index does not exist.
- Return type:
str
- get_node_info(node_type='master')
Getting node type hardware information and update the main environment dictionary.
- Parameters:
node_type (str) – the node type to collect data about, can be : master / worker - the default is master
- get_osd_info()
Getting the OSD’s information and update the main environment dictionary.
- static get_time(time_format=None)
Getting the current GMT time in a specific format for the ES report, or for seeking in the containers log
- Parameters:
time_format (str) – which time format to return - None / CSI
- Returns:
current date and time in formatted way
- Return type:
str
- initialize_test_crd()
Initializing the test CRD file. This includes the Elasticsearch info, cluster name and the user name which runs the test
- push_to_dashboard(test_name)
Pushing the test results into the performance dashboard, if it exists
- Parameters:
test_name (str) – the test name as defined in the performance dashboard
- Returns:
None in case pushing the results to the dashboard failed
- read_from_es(es, index, uuid)
Reading all results from elasticsearch server
- Parameters:
es (dict) – dictionary with elasticsearch info {server, port}
index (str) – the index name to read from the elasticsearch server
uuid (str) – the test UUID to find in the elasticsearch server
- Returns:
list of all results
- Return type:
list
- set_results_path_and_file(func_name)
Setting the results_path and results_file parameter for a specific test
- Parameters:
func_name (str) – the name of the function used for the test
- set_storageclass(interface)
Setting the benchmark CRD storageclass
- Parameters:
interface (str) – The interface which will be used in the test
- setup()
Setting up the environment for each performance and scale test
- Parameters:
name (str) – The test name that will be used in the performance dashboard
- teardown()
- wait_for_wl_to_finish(timeout=18000, sleep=300)
Waiting until the workload is finished and get the test log
- Parameters:
timeout (int) – time in seconds to wait until the benchmark finishes
sleep (int) – Sleep interval seconds
- Raises:
Exception – In case of too many restarts of the test.
ResourceWrongStatusException – test Failed / Error
TimeoutExpiredError – test did not complete on time.
- write_result_to_file(res_link)
Write the results link into file, to combine all sub-tests results together in one file, so it can be easily pushed into the performance dashboard
- Parameters:
res_link (str) – http link to the test results in the ES server
ocs_ci.ocs.pgsql module
Postgresql workload class
- class ocs_ci.ocs.pgsql.Postgresql(**kwargs)
Bases:
BenchmarkOperator
Postgresql workload operation
- attach_pgsql_pod_to_claim_pvc(pvc_objs, postgres_name, run_benchmark=True, pgbench_name=None)
Attaches pgsql pod to created claim PVC
- Parameters:
pvc_objs (list) – List of PVC objs which needs to attached to pod
postgres_name (str) – Name of the postgres pod
run_benchmark (bool) – On true, runs pgbench benchmark on postgres pod
pgbench_name (str) – Name of pgbench benchmark
- Returns:
List of pod objs created
- Return type:
pgsql_obj_list (list)
- cleanup()
Clean up
- create_pgbench_benchmark(replicas, pgbench_name=None, postgres_name=None, clients=None, threads=None, transactions=None, scaling_factor=None, timeout=None, samples=None, wait=True)
Create pgbench benchmark pods
- Parameters:
replicas (int) – Number of pgbench pods to be deployed
pgbench_name (str) – Name of pgbench benchmark
postgres_name (str) – Name of postgres pod
clients (int) – Number of clients
threads (int) – Number of threads
transactions (int) – Number of transactions
scaling_factor (int) – scaling factor
timeout (int) – Time in seconds to wait
wait (bool) – On true waits till pgbench reaches Completed state
- Returns:
pgbench pod objects list
- Return type:
List
- delete_pgbench_pods(pg_obj_list)
Delete all pgbench pods on cluster
- Returns:
True if deleted, False otherwise
- Return type:
bool
- export_pgoutput_to_googlesheet(pg_output, sheet_name, sheet_index)
Collect pgbench output to google spreadsheet
- Parameters:
pg_output (list) – pgbench outputs in list
sheet_name (str) – Name of the sheet
sheet_index (int) – Index of sheet
- filter_pgbench_nodes_from_nodeslist(nodes_list)
Filter pgbench nodes from the given nodes list
- Parameters:
nodes_list (list) – List of nodes to be filtered
- Returns:
List of nodes from the given list on which pgbench is not running
- Return type:
list
- get_pgbech_pod_status_table(pgbench_pods)
Get pgbench pod data and print results on a table
- Parameters:
pgbench_pods (list) – List of pgbench pods
- get_pgbench_pods()
Get all pgbench pods
- Returns:
pgbench pod objects list
- Return type:
List
- get_pgbench_running_nodes()
Get nodes that contain pgbench pods
- Returns:
List of pgbench running nodes
- Return type:
list
- get_pgbench_status(pgbench_pod_name)
Get pgbench status
- Parameters:
pgbench_pod_name (str) – Name of the pgbench pod
- Returns:
state of pgbench pod (running/completed)
- Return type:
str
- get_pgsql_nodes()
Get nodes that contain a pgsql app pod
- Returns:
Cluster node OCP objects
- Return type:
list
- get_postgres_pods()
Get all postgres pods
- Returns:
postgres pod objects list
- Return type:
List
- get_postgres_pvc()
Get all postgres pvc
- Returns:
postgres pvc objects list
- Return type:
List
- get_postgres_used_file_space(pod_obj_list)
Get the used file space on a mount point
- Parameters:
pod_obj_list (POD) – List of pod objects
- Returns:
List of pod objects
- Return type:
list
- is_pgbench_running()
Check if pgbench is running
- Returns:
True if pgbench is running; False otherwise
- Return type:
bool
- pgsql_full()
Run full pgsql workload
- respin_pgsql_app_pod()
Respin the pgsql app pod
- Returns:
pod status
- setup_postgresql(replicas, sc_name=None)
Deploy a PostgreSQL server
- Parameters:
replicas (int) – Number of postgresql pods to be deployed
- Raises:
CommandFailed – If PostgreSQL server setup fails
- validate_pgbench_run(pgbench_pods, print_table=True)
Validate pgbench run
- Parameters:
pgbench_pods (list) – List of pgbench pods
- Returns:
pgbench outputs in list
- Return type:
pg_output (list)
- wait_for_pgbench_status(status, timeout=None)
Wait for pgbench benchmark pods status to reach running/completed
- Parameters:
status (str) – status to reach Running or Completed
timeout (int) – Time in seconds to wait
- wait_for_postgres_status(status='Running', timeout=300)
Wait for postgres pods status to reach running/completed
- Parameters:
status (str) – status to reach Running or Completed
timeout (int) – Time in seconds to wait
ocs_ci.ocs.pillowfight module
Pillowfight Class to run various workloads and scale tests
- class ocs_ci.ocs.pillowfight.PillowFight(**kwargs)
Bases:
object
Workload operation using PillowFight. This class was modelled after the RipSaw class in this directory.
- MAX_ACCEPTABLE_RESPONSE_TIME = 2000
- MIN_ACCEPTABLE_OPS_PER_SEC = 2000
- analyze_all()
Analyze the data extracted into self.logs files
- cleanup()
Remove pillowfight pods and temp files
- export_pfoutput_to_googlesheet(sheet_name, sheet_index)
Collect pillowfight output to google spreadsheet
- Parameters:
sheet_name (str) – Name of the sheet
sheet_index (int) – Index of sheet
- parse_pillowfight_log(data_from_log)
Run oc logs on the pillowfight pod passed in. Cleanup the output from oc logs to handle peculiarities in the couchbase log results, and generate a summary of the results.
The dictionary returned has two keys: ‘opspersec’ and ‘resptimes’. opspersec is a list of the ops-per-second numbers reported. resptimes is a dictionary indexed by the max response time of a range; each entry in resptimes contains the minimum response time for that range, and a count of how many messages fall within that range.
- Parameters:
data_from_log (str) – log data
- Returns:
ops per sec and response time information
- Return type:
dict
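A sketch of the summary structure described above. The log line formats matched here are hypothetical placeholders, not the actual couchbase output:

```python
import re

def summarize_pillowfight_log(data_from_log):
    # Build the {'opspersec': [...], 'resptimes': {...}} summary described above
    results = {"opspersec": [], "resptimes": {}}
    for line in data_from_log.splitlines():
        ops = re.search(r"OPS/SEC:\s*(\d+)", line)  # hypothetical line format
        if ops:
            results["opspersec"].append(int(ops.group(1)))
            continue
        bucket = re.search(r"\[(\d+)\s*-\s*(\d+)\]us\s+(\d+)", line)  # hypothetical
        if bucket:
            low, high, count = map(int, bucket.groups())
            # resptimes is indexed by the max response time of each range
            results["resptimes"][high] = {"minimum": low, "count": count}
    return results

sample = "OPS/SEC: 2400\n[0-100]us 57\n[100-200]us 12\n"
print(summarize_pillowfight_log(sample))
```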
- run_pillowfights(replicas=1, num_items=None, num_threads=None, num_of_cycles=None)
Loop through all the yaml files extracted from the pillowfight repo and run them.
- Parameters:
replicas (int) – Number of pod replicas
num_items (int) – Number of items to be loaded to the cluster
num_threads (int) – Number of threads
- sanity_check(stats)
Make sure the worst cases for ops per second and response times are within an acceptable range.
- wait_for_pillowfights_to_complete(timeout=1800)
Wait for the pillowfights to complete. Run oc logs on the results and save the logs in self.logs directory
- Raises:
Exception – If pillowfight fails to reach completed state
ocs_ci.ocs.platform_nodes module
- class ocs_ci.ocs.platform_nodes.AWSNodes
Bases:
NodesBase
AWS EC2 instances class
- all_nodes_found(node_names)
Rely on oc get nodes -o wide to confirm that the nodes are added to the cluster
- Parameters:
node_names (list) – of node names as string
- Returns:
‘True’ if all the node names appeared in ‘get nodes’ else ‘False’
- Return type:
bool
- approve_all_nodes_csr(node_list)
Make sure that all the newly added nodes are in approved csr state
- Parameters:
node_list (list) – of AWSUPINode/AWSIPINode objects
- Raises:
PendingCSRException – If any pending CSRs exist
- attach_nodes_to_cluster(node_list)
Attach nodes in the list to the cluster
- Parameters:
node_list (list) – of AWSUPINode/AWSIPINode objects
- attach_nodes_to_upi_cluster(node_list)
Attach nodes to the UPI cluster. Note: for RHCOS nodes, the create function itself would have attached the nodes to the cluster, so there is nothing to do here
- Parameters:
node_list (list) – of AWSUPINode objects
- attach_rhel_nodes_to_upi_cluster(node_list)
Attach RHEL nodes to upi cluster
- Parameters:
node_list (list) – of AWSUPINode objects with RHEL os
- attach_volume(volume, node)
Attach a data volume to an instance
- Parameters:
volume (Volume) – The volume to attach
node (OCS) – The EC2 instance to attach the volume to
- build_ansible_inventory(hosts)
Build the ansible hosts file from jinja template
- Parameters:
hosts (list) – list of private host names
- Returns:
path of the ansible file created
- Return type:
str
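The real helper renders a jinja template into the ansible hosts file; a minimal INI-style sketch of the same idea (the group name and hostnames are illustrative):

```python
def build_ansible_inventory(hosts, group="new_rhel_workers"):
    # Emit an INI-style inventory with one group containing the given hosts
    return "\n".join([f"[{group}]"] + list(hosts)) + "\n"

print(build_ansible_inventory(
    ["ip-10-0-1-10.ec2.internal", "ip-10-0-1-11.ec2.internal"]
))
```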
- check_connection(rhel_pod_obj, host, pem_dst_path)
Check whether newly brought up RHEL instances are accepting ssh connections
- Parameters:
rhel_pod_obj (Pod) – object for handling ansible pod
host (str) – Node to which we want to try ssh
pem_dst_path (str) – path to private key for ssh
- create_nodes(node_conf, node_type, num_nodes)
create aws instances of nodes
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created RHCOS/RHEL
num_nodes (int) – Number of node instances to be created
- Returns:
of AWSUPINode objects
- Return type:
list
- detach_volume(volume, node=None, delete_from_backend=True)
Detach a volume from an EC2 instance
- Parameters:
volume (Volume) – The volume to detach
node (OCS) – The OCS object representing the node
delete_from_backend (bool) – True for deleting the disk from the storage backend, False otherwise
- get_available_slots(existing_indexes, required_slots)
Get indexes which are free
- Parameters:
existing_indexes (list) – of integers
required_slots (int) – required number of integers
- Returns:
of integers (available slots)
- Return type:
list
- get_data_volumes()
Get the data EBS volumes
- Returns:
EBS Volume instances
- Return type:
list
- get_ec2_instances(nodes)
Get the EC2 instances dicts
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
The EC2 instances dicts (IDs and names)
- Return type:
dict
- get_existing_indexes(index_list)
Extract index suffixes from index_list
- Parameters:
index_list (list) – of stack names in the form of ‘clustername-no$i’
- Returns:
sorted list of Integers
- Return type:
list
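The two index helpers above pair naturally: one extracts the integer suffixes from stack names of the form ‘clustername-no$i’, the other picks free integers for new stacks. A sketch (stack names are illustrative):

```python
def get_existing_indexes(index_list):
    # 'mycluster-no3' -> 3; return the sorted list of integer suffixes
    return sorted(int(name.rsplit("-no", 1)[1]) for name in index_list)

def get_available_slots(existing_indexes, required_slots):
    # Collect the smallest non-negative integers not already taken
    taken, slots, candidate = set(existing_indexes), [], 0
    while len(slots) < required_slots:
        if candidate not in taken:
            slots.append(candidate)
        candidate += 1
    return slots

existing = get_existing_indexes(["mycluster-no0", "mycluster-no2"])
print(existing)                           # [0, 2]
print(get_available_slots(existing, 2))  # [1, 3]
```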
- get_node_by_attached_volume(volume)
Get node OCS object of the EC2 instance that has the volume attached to
- Parameters:
volume (Volume) – The volume to get the EC2 according to
- Returns:
The OCS object of the EC2 instance
- Return type:
- get_ready_status(node_info)
Get the node ‘Ready’ status
- Parameters:
node_info (dict) – Node info which includes details
- Returns:
True if node is Ready else False
- Return type:
bool
- get_stack_name_of_node(node_name)
Get the stack name of a given node
- Parameters:
node_name (str) – the name of the node
- Returns:
The stack name of the given node
- Return type:
str
- restart_nodes(nodes, timeout=300, wait=True)
Restart EC2 instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
- restart_nodes_by_stop_and_start(nodes, wait=True, force=True)
Restart nodes by stopping and starting EC2 instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True in case wait for status is needed, False otherwise
force (bool) – True for force instance stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all EC2 instances are up. To be used in the test teardown
- start_nodes(instances=None, nodes=None, wait=True)
Start EC2 instances
- Parameters:
nodes (list) – The OCS objects of the nodes
instances (dict) – instance-id and name dict
wait (bool) – True for waiting the instances to start, False otherwise
- stop_nodes(nodes, wait=True, force=True)
Stop EC2 instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to stop, False otherwise
force (bool) – True for force stopping the instances abruptly, False otherwise
- terminate_nodes(nodes, wait=True)
Terminate EC2 instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to terminate, False otherwise
- verify_nodes_added(hosts)
Verify RHEL workers are added
- Parameters:
hosts (list) – list of aws private hostnames
- Raises:
FailedToAddNodeException – if node addition failed
- wait_for_nodes_to_stop(nodes)
Wait for the nodes to reach status stopped
- Parameters:
nodes (list) – The OCS objects of the nodes
- Raises:
ResourceWrongStatusException – In case the nodes didn’t reach the expected status stopped.
- wait_for_nodes_to_stop_or_terminate(nodes)
Wait for the nodes to reach status stopped or terminated
- Parameters:
nodes (list) – The OCS objects of the nodes
- Raises:
ResourceWrongStatusException – In case the nodes didn’t reach the expected status stopped or terminated.
- wait_for_nodes_to_terminate(nodes)
Wait for the nodes to reach status terminated
- Parameters:
nodes (list) – The OCS objects of the nodes
- Raises:
ResourceWrongStatusException – In case the nodes didn’t reach the expected status terminated.
- wait_for_volume_attach(volume)
Wait for an EBS volume to be attached to an EC2 instance. This is used because, when the EBS volume is detached from the EC2 instance, re-attachment should take place automatically
- Parameters:
volume (Volume) – The volume to wait for to be attached
- Returns:
- True if the volume has been attached to the
instance, False otherwise
- Return type:
bool
- class ocs_ci.ocs.platform_nodes.AWSUPINode(node_conf, node_type)
Bases:
AWSNodes
Node object representing AWS upi nodes
- add_cert_to_template(worker_template_path)
Add cert to worker template
- Parameters:
worker_template_path (str) – Path where template file is located
- build_stack_params(index, conf)
Build all the params required for a stack creation
- Parameters:
index (int) – An integer index for this stack
conf (dict) – Node config
- Returns:
of param dicts
- Return type:
list
- gather_worker_data(suffix='no0')
Gather various info like vpc, iam role, subnet, security group, and cluster tag from existing RHCOS workers
- Parameters:
suffix (str) – suffix to get resource of worker node, ‘no0’ by default
- get_cert_content(worker_ignition_path)
Get the certificate content from worker ignition file
- Parameters:
worker_ignition_path (str) – Path of the worker ignition file
- Returns:
certificate content
- Return type:
formatted_cert (str)
- get_kube_tag(tags)
Fetch the kubernetes.io tag from a worker instance
- Parameters:
tags (dict) – AWS tags from existing worker
- Returns:
key looks like “kubernetes.io/cluster/<cluster-name>” and value looks like “shared” OR “owned”
- Return type:
tuple
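The tag lookup can be sketched as a prefix scan over the tags dict; the tag names and values below are illustrative:

```python
def get_kube_tag(tags):
    # Return the (key, value) pair whose key carries the cluster tag prefix
    for key, value in tags.items():
        if key.startswith("kubernetes.io/cluster/"):
            return key, value
    raise KeyError("no kubernetes.io cluster tag found")

tags = {"Name": "mycluster-worker", "kubernetes.io/cluster/mycluster": "owned"}
print(get_kube_tag(tags))  # ('kubernetes.io/cluster/mycluster', 'owned')
```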
- get_rhcos_worker_template()
Download template and keep it locally
- Returns:
local path to template file
- Return type:
path (str)
- get_worker_resource_id(resource)
Get the resource ID
- Parameters:
resource (dict) – a dictionary of stack resource
- Returns:
ID of worker stack resource
- Return type:
str
- update_template_with_cert(worker_template_path, cert)
Update the template file with cert provided
- Parameters:
worker_template_path (str) – template file path
cert (str) – Certificate body
- class ocs_ci.ocs.platform_nodes.AZURENodes
Bases:
NodesBase
Azure Nodes class
- attach_nodes_to_cluster(node_list)
- attach_volume(volume, node)
- create_and_attach_nodes_to_cluster(node_conf, node_type, num_nodes)
Create nodes and attach them to the cluster. Use this function if you want to do both creation and attachment in a single call
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created RHCOS/RHEL
num_nodes (int) – Number of node instances to be created
- create_nodes(node_conf, node_type, num_nodes)
- detach_volume(volume, node=None, delete_from_backend=True)
Detach a volume from an Azure Vm instance
- Parameters:
volume (Disk) – The disk object required to delete a volume
node (OCS) – The OCS object representing the node
delete_from_backend (bool) – True for deleting the disk from the storage backend, False otherwise
- get_data_volumes()
Get the data Azure disk objects
- Returns:
azure disk objects
- Return type:
list
- get_node_by_attached_volume(volume)
Get node OCS object of the Azure vm instance that has the volume attached to
- Parameters:
volume (Disk) – The disk object to get the Azure Vm according to
- Returns:
The OCS object of the Azure Vm instance
- Return type:
- restart_nodes(nodes, timeout=900, wait=True)
Restart Azure vm instances
- Parameters:
nodes (list) – The OCS objects of the nodes / Azure Vm instance
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
- restart_nodes_by_stop_and_start(nodes, timeout=540, wait=True, force=True)
Restart Azure vm instances by stop and start
- Parameters:
nodes (list) – The OCS objects of the nodes / Azure Vm instance
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
force (bool) – True for force VM stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all VM instances are up by the end of the test
- start_nodes(nodes, timeout=540, wait=True)
Start Azure vm instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to start, False otherwise
timeout (int) – time in seconds to wait for node to reach ‘ready’ state.
- stop_nodes(nodes, timeout=540, wait=True, force=True)
Stop Azure vm instances
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to stop, False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state.
force (bool) – True for force VM stop, False otherwise
- wait_for_volume_attach(volume)
Wait for a Disk to be attached to an Azure Vm instance. This is used because, when the Disk is detached from the Azure Vm instance, re-attachment should take place automatically
- Parameters:
volume (Disk) – The Disk to wait for to be attached
- Returns:
- True if the volume has been attached to the
instance, False otherwise
- Return type:
bool
- class ocs_ci.ocs.platform_nodes.BaremetalNodes
Bases:
NodesBase
Baremetal Nodes class
- attach_nodes_to_cluster(node_list)
- attach_volume(volume, node)
- create_and_attach_nodes_to_cluster(node_conf, node_type, num_nodes)
Create nodes and attach them to the cluster. Use this function if you want to do both creation and attachment in a single call
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created RHCOS/RHEL
num_nodes (int) – Number of node instances to be created
- create_nodes(node_conf, node_type, num_nodes)
- detach_volume(volume, node=None, delete_from_backend=True)
- get_data_volumes()
- get_node_by_attached_volume(volume)
- read_default_config(default_config_path)
Commonly used function to read default config
- Parameters:
default_config_path (str) – Path to default config file
- Returns:
of default config loaded
- Return type:
dict
- restart_nodes(nodes, force=True)
Restart Baremetal Machine
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force BM stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all BMs are up by the end of the test
- start_nodes(nodes, wait=True)
Start Baremetal Machine
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – Wait for node status
- stop_nodes(nodes, force=True)
Stop Baremetal Machine
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force nodes stop, False otherwise
- wait_for_volume_attach(volume)
- class ocs_ci.ocs.platform_nodes.GCPNodes
Bases:
NodesBase
Google Cloud Platform Nodes class
- restart_nodes(nodes, wait=True)
Restart nodes. This is a hard reset - the instance does not do a graceful shutdown
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to start, False otherwise
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- restart_nodes_by_stop_and_start(nodes, wait=True, force=True)
Restart nodes by stop and start
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to stop, False otherwise
force (bool) – True for force node stop, False otherwise
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- restart_nodes_by_stop_and_start_teardown()
Start the nodes in a NotReady state
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- start_nodes(nodes, wait=True)
Start nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to start, False otherwise
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- stop_nodes(nodes, wait=True)
Stop nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to stop, False otherwise
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- terminate_nodes(nodes, wait=True)
Terminate nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to terminate, False otherwise
- Raises:
OperationFailedToCompleteException – In case that not all the operations completed successfully
- class ocs_ci.ocs.platform_nodes.IBMCloud
Bases:
NodesBase
IBM Cloud class
- attach_volume(volume, node)
- check_workers_ready_state(cmd)
Check if all worker nodes are in Ready state.
- Parameters:
cmd (str) – command to get the workers
- Returns:
‘True’ if all the node names appeared in ‘Ready’ else ‘False’
- Return type:
bool
- create_and_attach_nodes_to_cluster(node_conf, node_type, num_nodes)
Create nodes and attach them to the cluster. Use this function if you want to do both creation and attachment in a single call
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created
num_nodes (int) – Number of node instances to be created
- create_nodes(node_conf, node_type, num_nodes)
Creates new node on IBM Cloud.
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created
num_nodes (int) – Number of node instances to be created
- Returns:
of IBMCloudNode objects
- Return type:
list
- Raises:
NotAllNodesCreated – In case all nodes are not created
TimeoutExpiredError – In case node is not created in time
- delete_volume_id(volume)
- detach_volume(volume, node=None, delete_from_backend=True)
- get_data_volumes()
- get_node_by_attached_volume(volume)
- get_volume_id()
- restart_nodes(nodes, timeout=900, wait=True)
Restart all the ibmcloud vm instances
- Parameters:
nodes (list) – The OCS objects of the nodes instance
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
- restart_nodes_by_stop_and_start(nodes, force=True)
Restart the nodes by stop and start, making sure the nodes which are not ready come back up on IBM Cloud
- Parameters:
nodes (list) – The OCS objects of the nodes instance
force (bool) – True for force node stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all nodes are up by the end of the test on IBM Cloud.
- wait_for_volume_attach(volume)
- class ocs_ci.ocs.platform_nodes.IBMCloudBMNodes
Bases:
NodesBase
IBM Cloud for Bare metal machines class
- create_nodes(node_conf, node_type, num_nodes)
Create nodes
- get_machines(nodes)
Get the machines associated with the given nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
List of dictionaries. List of the machines associated with the given nodes
- Return type:
list
- restart_nodes(nodes, wait=True, force=False)
Restart nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – If True, wait for the nodes to be ready. False, otherwise
force (bool) – If True, it will force restarting the nodes. False, otherwise. Default value is False.
- restart_nodes_by_stop_and_start(nodes, wait=True)
Restart the nodes by stop and start
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – If True, wait for the nodes to be ready. False, otherwise
- restart_nodes_by_stop_and_start_teardown()
Start the nodes in a NotReady state
- start_nodes(nodes, wait=True)
Start nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – If True, wait for the nodes to be ready. False, otherwise
- stop_nodes(nodes, wait=True)
Stop nodes
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – If True, wait for the nodes to be in a NotReady state. False, otherwise
- terminate_nodes(nodes, wait=True)
Terminate nodes
- class ocs_ci.ocs.platform_nodes.IBMPowerNodes
Bases:
NodesBase
IBM Power Nodes class
- restart_nodes(nodes, timeout=540, wait=True, force=True)
Restart PowerNode
- Parameters:
nodes (list) – The OCS objects of the nodes
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
force (bool) – True for force BM stop, False otherwise
- restart_nodes_by_stop_and_start(nodes, force=True)
Restart PowerNodes with stop and start
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force node stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all PowerNodes are up by the end of the test
- start_nodes(nodes, force=True)
Start PowerNode
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – Wait for node status
- stop_nodes(nodes, force=True)
Stop PowerNode
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force nodes stop, False otherwise
- class ocs_ci.ocs.platform_nodes.NodesBase
Bases:
object
A base class for nodes related operations. Should be inherited by specific platform classes
- attach_nodes_to_cluster(node_list)
- attach_volume(volume, node)
- create_and_attach_nodes_to_cluster(node_conf, node_type, num_nodes)
Create nodes and attach them to the cluster. Use this function if you want to do both creation and attachment in a single call
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created RHCOS/RHEL
num_nodes (int) – Number of node instances to be created
- create_nodes(node_conf, node_type, num_nodes)
- detach_volume(volume, node=None, delete_from_backend=True)
- get_data_volumes()
- get_node_by_attached_volume(volume)
- read_default_config(default_config_path)
Commonly used function to read default config
- Parameters:
default_config_path (str) – Path to default config file
- Returns:
of default config loaded
- Return type:
dict
- restart_nodes(nodes, wait=True)
- restart_nodes_by_stop_and_start(nodes, force=True)
- restart_nodes_by_stop_and_start_teardown()
- start_nodes(nodes)
- stop_nodes(nodes)
- terminate_nodes(nodes, wait=True)
- wait_for_nodes_to_stop(nodes)
- wait_for_nodes_to_stop_or_terminate(nodes)
- wait_for_nodes_to_terminate(nodes)
- wait_for_volume_attach(volume)
- class ocs_ci.ocs.platform_nodes.PlatformNodesFactory
Bases:
object
A factory class to get specific nodes platform object
- get_nodes_platform()
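The factory idea above amounts to mapping the configured platform name to the matching nodes class. The class bodies and platform keys below are illustrative stand-ins, not the real class map:

```python
class NodesBase:
    """Base for platform node operations (stand-in)."""

class AWSNodes(NodesBase):
    """AWS stand-in."""

class VMWareNodes(NodesBase):
    """vSphere stand-in."""

class PlatformNodesFactory:
    # Map the platform name from config to a nodes class
    cls_map = {"aws": AWSNodes, "vsphere": VMWareNodes}

    def get_nodes_platform(self, platform):
        return self.cls_map[platform]()

nodes = PlatformNodesFactory().get_nodes_platform("aws")
print(type(nodes).__name__)  # AWSNodes
```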
- class ocs_ci.ocs.platform_nodes.RHVNodes
Bases:
NodesBase
RHV Nodes class
- get_rhv_vm_instances(nodes)
Get the RHV VM instances list
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
The RHV vm instances list
- Return type:
list
- restart_nodes(nodes, timeout=900, wait=True, force=True)
Restart RHV VM
- Parameters:
nodes (list) – The OCS objects of the nodes
timeout (int) – time in seconds to wait for node to reach ‘ready’ state
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
force (bool) – True for force VM reboot, False otherwise
- Raises:
ValueError – If no nodes are found for restarting
- restart_nodes_by_stop_and_start(nodes, timeout=900, wait=True, force=True)
Restart RHV vms by stop and start
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True if need to wait till the restarted node reaches READY state. False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state, and ‘ready’ state.
force (bool) – True for force VM stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all RHV VMs are up by the end of the test
- start_nodes(nodes, timeout=600, wait=True)
Start RHV VM
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to start, False otherwise
timeout (int) – time in seconds to wait for node to reach ‘ready’ state.
- stop_nodes(nodes, timeout=600, wait=True, force=True)
Shutdown RHV VM
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the instances to stop, False otherwise
timeout (int) – time in seconds to wait for node to reach ‘not ready’ state.
force (bool) – True for force VM stop, False otherwise
- class ocs_ci.ocs.platform_nodes.VMWareIPINodes
Bases:
VMWareNodes
VMWare IPI nodes class
- get_vms(nodes)
Get vSphere vm objects list in the Datacenter (and not just in the cluster scope). Note: if one of the nodes fails with an exception, its corresponding VM object will not be returned.
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
vSphere vm objects list in the Datacenter
- Return type:
list
- class ocs_ci.ocs.platform_nodes.VMWareLSONodes
Bases:
VMWareNodes
VMWare LSO nodes class
- detach_volume(volume, node=None, delete_from_backend=True)
Detach disk from a VM and delete from datastore if specified
- Parameters:
volume (str) – Volume path
node (OCS) – The OCS object representing the node
delete_from_backend (bool) – True for deleting the disk (vmdk) from backend datastore, False otherwise
- get_data_volumes(pvs=None)
Get the data vSphere volumes
- Parameters:
pvs (list) – PV OCS objects
- Returns:
vSphere volumes
- Return type:
list
- class ocs_ci.ocs.platform_nodes.VMWareNodes
Bases:
NodesBase
VMWare nodes class
- attach_volume(node, volume)
- create_and_attach_nodes_to_cluster(node_conf, node_type, num_nodes)
Create nodes and attach them to the cluster. Use this function if you want to do both creation and attachment in a single call
- Parameters:
node_conf (dict) – of node configuration
node_type (str) – type of node to be created RHCOS/RHEL
num_nodes (int) – Number of node instances to be created
- create_and_attach_volume(node, size)
Create a new volume and attach it to the given VM
- Parameters:
node (OCS) – The OCS object representing the node
size (int) – The size in GB for the new volume
- detach_volume(volume, node=None, delete_from_backend=True)
Detach disk from a VM and delete from datastore if specified
- Parameters:
volume (str) – Volume path
node (OCS) – The OCS object representing the node
delete_from_backend (bool) – True for deleting the disk (vmdk) from backend datastore, False otherwise
- get_data_volumes(pvs=None)
Get the data vSphere volumes
- Parameters:
pvs (list) – PV OCS objects
- Returns:
vSphere volumes
- Return type:
list
- get_node_by_attached_volume(volume)
Get node OCS object of the VM instance that has the volume attached to
- Parameters:
volume (Volume) – The volume to get the VM according to
- Returns:
The OCS object of the VM instance
- Return type:
- get_reboot_events(nodes)
Gets the number of reboot events
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
Dictionary with node names as keys and the number of reboot events as values, e.g: {‘compute-0’: 11, ‘compute-1’: 9, ‘compute-2’: 9}
- Return type:
dict
- get_vm_from_ips(node_ips, dc)
Fetches VM objects from given IPs
- Parameters:
node_ips (list) – List of node IPs
dc (str) – Datacenter name
- Returns:
List of VM objects
- Return type:
list
- get_vms(nodes)
Get vSphere vm objects list
- Parameters:
nodes (list) – The OCS objects of the nodes
- Returns:
vSphere vm objects list
- Return type:
list
- get_volume_path(volume_handle)
Fetches the volume path for the volumeHandle
- Parameters:
volume_handle (str) – volumeHandle which exists in PV
- Returns:
volume path of PV
- Return type:
str
- restart_nodes(nodes, force=True, timeout=300, wait=True)
Restart vSphere VMs
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for Hard reboot, False for Soft reboot
timeout (int) – time in seconds to wait for node to reach READY state
wait (bool) – True if need to wait till the restarted OCP node reaches READY state. False otherwise
- restart_nodes_by_stop_and_start(nodes, force=True)
Restart vSphere VMs
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force VM stop, False otherwise
- restart_nodes_by_stop_and_start_teardown()
Make sure all VMs are up by the end of the test
- start_nodes(nodes, wait=True)
Start vSphere VMs
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – Wait for the VMs to start
- stop_nodes(nodes, force=True, wait=True)
Stop vSphere VMs
- Parameters:
nodes (list) – The OCS objects of the nodes
force (bool) – True for force VM stop, False otherwise
wait (bool) – Wait for the VMs to stop
- terminate_nodes(nodes, wait=True)
Terminate the VMs. The VMs will be deleted only from the inventory and not from the disk.
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the VMs to terminate, False otherwise
- wait_for_volume_attach(volume)
- class ocs_ci.ocs.platform_nodes.VMWareUPINodes
Bases:
VMWareNodes
VMWare UPI nodes class
- terminate_nodes(nodes, wait=True)
Terminate the VMs. The VMs will be deleted only from the inventory and not from the disk. After deleting the VMs, it will also modify terraform state file of the removed VMs
- Parameters:
nodes (list) – The OCS objects of the nodes
wait (bool) – True for waiting the VMs to terminate, False otherwise
- class ocs_ci.ocs.platform_nodes.VSPHEREUPINode(node_conf, node_type, compute_count)
Bases:
VMWareNodes
Node object representing VMWARE UPI nodes
- add_node(use_terraform=True)
Add nodes to the current cluster
- Parameters:
use_terraform (bool) – if True use terraform to add nodes, otherwise use manual steps to add nodes
- add_nodes_with_terraform()
Add nodes using terraform
- add_nodes_without_terraform()
Add nodes without terraform
- change_terraform_statefile_after_remove_vm(vm_name)
Remove the records from the state file, so that terraform will no longer be tracking the corresponding remote objects of the vSphere VM object we removed.
- Parameters:
vm_name (str) – The VM name
- change_terraform_tfvars_after_remove_vm(num_nodes_removed=1)
Update the compute count after removing node from cluster
- Parameters:
num_nodes_removed (int) – Number of nodes removed from cluster
- generate_node_names_for_vsphere(count, prefix='compute-')
Generate the node names for vsphere platform
- Parameters:
count (int) – Number of node names to generate
prefix (str) – Prefix for node name
- Returns:
List of node names
- Return type:
list
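Name generation can be sketched as filling the lowest free indexes under the prefix. Whether the real helper skips existing names exactly this way is an assumption of this sketch:

```python
def generate_node_names(count, existing_names=(), prefix="compute-"):
    # Produce `count` names like 'compute-3', skipping indexes already in use
    used = {
        int(name[len(prefix):])
        for name in existing_names
        if name.startswith(prefix) and name[len(prefix):].isdigit()
    }
    names, idx = [], 0
    while len(names) < count:
        if idx not in used:
            names.append(f"{prefix}{idx}")
        idx += 1
    return names

print(generate_node_names(2, existing_names=["compute-0", "compute-2"]))
```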
- update_terraform_tfvars_compute_count(type, count)
Update terraform tfvars file for compute count
- Parameters:
type (str) – Type of operation. Either add or remove
count (int) – Number to add or remove to the existing compute count
- wait_for_connection_and_set_host_name(ip, host_name)
Waits for connection to establish to the node and sets the hostname
- Parameters:
ip (str) – IP of the node
host_name (str) – Host name to set for the node
- Raises:
NoValidConnectionsError – Raises if connection is not established
AuthenticationException – Raises if credentials are not correct
ocs_ci.ocs.pod_exec module
Dispatcher for proper exec based on api-client config
This class takes care of dispatching a proper api-client instance to take care of command execution.
Why this class? Command execution might happen using various means; as of now we are using the kubernetes client, but we might also use the openshift rest client. To cope with this we might have to figure out dynamically at run time which backend to use.
This module holds all the api client class definitions along with the dispatcher class Exec.
- class ocs_ci.ocs.pod_exec.CmdObj(cmd, timeout, wait, check_ec, long_running)
Bases:
tuple
- check_ec
Alias for field number 3
- cmd
Alias for field number 0
- long_running
Alias for field number 4
- timeout
Alias for field number 1
- wait
Alias for field number 2
- class ocs_ci.ocs.pod_exec.Exec(oc_client='KubClient')
Bases:
object
Dispatcher class for proper api client instantiation
This class has a factory function which returns proper api client instance necessary for command execution
- run(podname, namespace, cmd_obj)
The actual run happens here. The command is forwarded to the api client object, which should return a 3-tuple (stdout, stderr, retval)
- class ocs_ci.ocs.pod_exec.KubClient
Bases:
object
Specific to upstream Kubernetes client library
- run(podname, namespace, cmd_obj)
- ocs_ci.ocs.pod_exec.register_class(cls)
Decorator for registering a class ‘cls’ in class map
Make sure the api-client name in the config is the same as the class name.
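The registration pattern can be sketched as follows (names and structure are illustrative, not the actual ocs-ci internals):

```python
# Hypothetical class map; the real module keeps its own registry.
class_map = {}

def register_class(cls):
    # Register the class under its own name so the dispatcher can look it
    # up by the api-client name given in the config.
    class_map[cls.__name__] = cls
    return cls

@register_class
class KubClient:
    def run(self, podname, namespace, cmd_obj):
        ...

# The dispatcher can now instantiate the configured backend by name:
client = class_map["KubClient"]()
```

This is why the api-client name in the config must match the class name exactly: the lookup key is the class's `__name__`.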
ocs_ci.ocs.quay_operator module
- class ocs_ci.ocs.quay_operator.QuayOperator
Bases:
object
Quay operator class
- check_quay_registry_endpoint()
Checks if quay registry endpoint is up
- Returns:
True if quay endpoint is up else False
- Return type:
bool
- create_quay_registry()
Creates Quay registry
- get_quay_endpoint()
Returns quay endpoint
- setup_quay_operator(channel=None)
Deploys the Quay operator using the specified channel or the default channel if not provided
- Parameters:
channel (str, optional) – The channel of the Quay operator to deploy. If not provided, the default channel will be used.
- Raises:
TimeoutError – If the Quay operator pod fails to reach the ‘Running’ state within the timeout
- teardown()
Quay operator teardown
- wait_for_quay_endpoint()
Waits for quay registry endpoint
- ocs_ci.ocs.quay_operator.check_super_user(endpoint, token)
Validates the super user based on the token
- Parameters:
endpoint (str) – Quay Endpoint url
token (str) – Super user token
- Returns:
True in case token is from a super user
- Return type:
bool
- ocs_ci.ocs.quay_operator.create_quay_org(endpoint, token, org_name)
Creates an organization in quay
- Parameters:
endpoint (str) – Quay endpoint url
token (str) – Super user token
org_name (str) – Organization name
- Returns:
True in case org creation is successful
- Return type:
bool
- ocs_ci.ocs.quay_operator.create_quay_repository(endpoint, token, repo_name, org_name, description='new_repo')
Creates a quay repository
- Parameters:
endpoint (str) – Quay Endpoint url
token (str) – Super user token
org_name (str) – Organization name
repo_name (str) – Repository name
description (str) – Description of the repo
- Returns:
True in case repo creation is successful
- Return type:
bool
- ocs_ci.ocs.quay_operator.delete_quay_repository(endpoint, token, org_name, repo_name)
Deletes the quay repository
- Parameters:
endpoint (str) – Quay Endpoint url
token (str) – Super user token
org_name (str) – Organization name
repo_name (str) – Repository name
- Returns:
True if the repo is deleted successfully
- Return type:
bool
- ocs_ci.ocs.quay_operator.get_super_user_token(endpoint)
Gets the initialized super user token. This is a one-time operation; the token cannot be retrieved again once initialized.
- Parameters:
endpoint (str) – Quay Endpoint url
- Returns:
Super user token
- Return type:
str
- ocs_ci.ocs.quay_operator.quay_super_user_login(endpoint_url)
Logs in to the quay endpoint
- Parameters:
endpoint_url (str) – External endpoint of quay
ocs_ci.ocs.rados_utils module
- class ocs_ci.ocs.rados_utils.RadosHelper(mon, config=None, log=None, cluster='ceph')
Bases:
object
- create_pool(pool_name, pg_num=16, erasure_code_profile_name=None, min_size=None, erasure_code_use_overwrites=False)
Create a pool named from the pool_name parameter.
- Parameters:
pool_name (str) – name of the pool being created.
pg_num (int) – initial number of pgs.
erasure_code_profile_name (str) – if set (not None), create an erasure-coded pool using the profile
min_size (int) – minimum size
erasure_code_use_overwrites (bool) – if True, allow overwrites
- get_mgr_proxy_container(node, docker_image, proxy_container='mgr_proxy')
Returns mgr dummy container to access containerized storage
- Parameters:
node (ceph.ceph.CephNode) – ceph node
docker_image (str) – repository/image:tag
- Returns:
mgr object
- Return type:
ceph.ceph.CephDemon
- get_num_pools()
- Returns:
number of pools in the cluster
- get_osd_dump_json()
osd dump --format=json converted to a python object
- Returns:
the python object
- get_pg_primary(pool, pgnum)
get the primary osd for pool, pgnum (e.g. (data, 0) -> 0)
- get_pg_random(pool, pgnum)
get a random osd for pool, pgnum (e.g. (data, 0) -> 0)
- get_pgid(pool, pgnum)
- Parameters:
pool – pool name
pgnum – pg number
- Returns:
a string representing this pg.
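Ceph pg ids combine the pool number and the pg number, e.g. ‘1.55’. A plausible sketch of the formatting (the hex rendering of the pg number is an assumption about this helper):

```python
def make_pgid(pool_num, pgnum):
    # Ceph pg ids look like "<pool-number>.<pg-number-in-hex>", e.g. "1.55".
    # This is an illustrative reconstruction, not the ocs-ci source.
    return f"{pool_num}.{pgnum:x}"
```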
- get_pool_dump(pool)
get the osd dump part of a pool
- get_pool_num(pool)
get number for pool (e.g., data -> 2)
- get_pool_property(pool_name, prop)
- Parameters:
pool_name – pool
prop – property to be checked.
- Returns:
property as an int value.
- is_up(osd_id)
- Returns:
1 if up, 0 if down
- kill_osd(osd_node, osd_service)
- Parameters:
osd_node – node on which the OSD runs
osd_service – the OSD service to kill, using a signal such as “SIGKILL”, “SIGTERM” or “SIGHUP”
- Returns:
1 or 0
- list_pools()
list all pool names
- raw_cluster_cmd(*args)
- Returns:
(stdout, stderr)
- revive_osd(osd_node, osd_service)
- Returns:
0 if revive succeeds, 1 if it fails
- ocs_ci.ocs.rados_utils.corrupt_pg(osd_deployment, pool_name, pool_object)
Rewrite given object in a ceph pool with /etc/shadow file.
- Parameters:
osd_deployment (object) – OSD deployment object where PG will be corrupted
pool_name (str) – name of ceph pool to be corrupted
pool_object (str) – name of object to be corrupted
- ocs_ci.ocs.rados_utils.get_pg_log_dups_count_via_cot(osd_deployments, pgid)
Get the pg log dup entries count via COT
- Parameters:
osd_deployments (OCS) – List of OSD deployment OCS instances
pgid (str) – pgid for a pool eg: ‘1.55’
- Returns:
List of total number of pg dups per osd
- Return type:
list
- ocs_ci.ocs.rados_utils.inject_corrupted_dups_into_pg_via_cot(osd_deployments, pgid, injected_dups_file_name_prefix='text')
Inject corrupted dups into a pg via COT
- Parameters:
osd_deployments (OCS) – List of OSD deployment OCS instances
pgid (str) – pgid for a pool eg: ‘1.55’
injected_dups_file_name_prefix (str) – File name prefix for injecting dups
ocs_ci.ocs.registry module
- ocs_ci.ocs.registry.add_role_to_user(role_type, user, cluster_role=False, namespace=None)
Function to add a cluster/regular role to user
- Parameters:
role_type (str) – Type of the role to be added
user (str) – User to be added for the role
cluster_role (bool) – Whether to add a cluster-role or a regular role
namespace (str) – Namespace to be used
- Raises:
AssertionError – When failure in adding new role to user
- ocs_ci.ocs.registry.change_registry_backend_to_ocs()
Function to deploy registry with OCS backend.
- Raises:
AssertionError – When failure in change of registry backend to OCS
- ocs_ci.ocs.registry.check_if_registry_stack_exists()
Check if registry is configured on the cluster with ODF backed PVCs
- Returns:
True if registry is configured on the cluster, False otherwise
- Return type:
bool
- ocs_ci.ocs.registry.check_image_exists_in_registry(image_url)
Function to check whether an image exists in the registry
- Parameters:
image_url (str) – Image url to be verified
- Returns:
True if image exists, else False
- Return type:
bool
- ocs_ci.ocs.registry.enable_route_and_create_ca_for_registry_access()
Function to enable route and to create ca, copy to respective location for registry access
- Raises:
AssertionError – When failure in enabling registry default route
- ocs_ci.ocs.registry.get_build_name_by_pattern(pattern='', namespace=None)
In a given namespace find names of the builds that match the given pattern
- Parameters:
pattern (str) – name of the build with given pattern
namespace (str) – Namespace value
- Returns:
List of build names matching the pattern
- Return type:
build_list (list)
- ocs_ci.ocs.registry.get_default_route_name()
Function to get default route name
- Returns:
Returns default route name
- Return type:
route_name (str)
- ocs_ci.ocs.registry.get_oc_podman_login_cmd(skip_tls_verify=True)
Function to get oc and podman login commands on node
- Parameters:
skip_tls_verify (bool) – If true, the server’s certificate will not be checked for validity
- Returns:
List of cmd for oc/podman login
- Return type:
cmd_list (list)
- ocs_ci.ocs.registry.get_registry_pod_obj()
Function to get registry pod obj
- Returns:
List of Registry pod objs
- Return type:
pod_obj (list)
- Raises:
UnexpectedBehaviour – When image-registry pod is not present.
- ocs_ci.ocs.registry.image_list_all()
Function to list the images in the podman registry
- Returns:
Images present in cluster
- Return type:
image_list_output (str)
- ocs_ci.ocs.registry.image_pull(image_url)
Function to pull images from repositories
- Parameters:
image_url (str) – Image url container image repo link
- ocs_ci.ocs.registry.image_pull_and_push(project_name, template='redis-ephemeral', image='registry.redhat.io/rhel8/redis', pattern='', wait=True)
Pull and push images by running the oc new-app command
- Parameters:
project_name (str) – Name of project
template (str) – Name of the template of the image
image (str) – Name of the image with tag
pattern (str) – name of the build with given pattern
wait (bool) – If true, waits till the image pull and push completes.
- ocs_ci.ocs.registry.image_push(image_url, namespace)
Function to push images to destination
- Parameters:
image_url (str) – Image url container image repo link
namespace (str) – Image to be uploaded namespace
- Returns:
Uploaded image path
- Return type:
registry_path (str)
- ocs_ci.ocs.registry.image_rm(registry_path, image_url)
Function to remove images from registry
- Parameters:
registry_path (str) – Image registry path
image_url (str) – Image url container image repo link
- ocs_ci.ocs.registry.modify_registry_pod_count(count)
Function to modify the registry replica count (increase/decrease the pod count)
- Parameters:
count (int) – registry replica count to be changed to
- Returns:
True in case if changes are applied. False otherwise
- Return type:
bool
- Raises:
TimeoutExpiredError – When number of image registry pods doesn’t match the count
- ocs_ci.ocs.registry.remove_role_from_user(role_type, user, cluster_role=False, namespace=None)
Function to remove a cluster/regular role from a user
- Parameters:
role_type (str) – Type of the role to be removed
user (str) – User of the role
cluster_role (bool) – Whether to remove a cluster-role or a regular role
namespace (str) – Namespace to be used
- Raises:
AssertionError – When failure in removing role from user
- ocs_ci.ocs.registry.validate_image_exists(app='redis')
Validate image exists on registries path
- Parameters:
app (str) – Label or application name
- Returns:
Dir/Files/Images are listed in string format
- Return type:
image_list (str)
- Raises:
Exception – If dir/folders are not found
- ocs_ci.ocs.registry.validate_pvc_mount_on_registry_pod()
Function to validate pvc mounted on the registry pod
- Raises:
AssertionError – When PVC mount not present in the registry pod
- ocs_ci.ocs.registry.validate_registry_pod_status()
Function to validate registry pod status
ocs_ci.ocs.scale_lib module
- class ocs_ci.ocs.scale_lib.FioPodScale(kind='deploymentconfig', node_selector={'scale-label': 'app-scale'})
Bases:
object
FioPodScale Class with required scale library functions and params
- cleanup()
Function to tear down
- create_and_set_namespace()
Create and set the namespace for the pods to be created. Create sa_name if kind is DeploymentConfig.
- create_multi_pvc_pod(pvc_count=760, pvcs_per_pod=20, obj_name='obj1', start_io=True, io_runtime=None, pvc_size=None, max_pvc_size=105)
Function to create PVCs of different types, attach them to pods, and start IO.
- Parameters:
pvc_count (int) – Number of PVCs to be created
pvcs_per_pod (int) – Number of PVCs to be attached to a single pod (e.g. if 20, a pod will be created with 20 PVCs attached)
obj_name (string) – Object name prefix string
start_io (bool) – Whether to start IO; defaults to True
io_runtime (seconds) – Runtime in Seconds to continue IO
pvc_size (int) – Size of PVC to be created
max_pvc_size (int) – The max size of the pvc
- Returns:
rbd_pvc_name (list): List of all RBD PVC names created
fs_pvc_name (list): List of all FS PVC names created
pod_running_list (list): List of all pod names created
- create_scale_pods(scale_count=1500, pvc_per_pod_count=20, start_io=True, io_runtime=None, pvc_size=None, max_pvc_size=105, obj_name_prefix='obj')
Main function with the scale pod creation flow; checks whether nodes need to be added for the supported platforms, and validates the pg-balancer after scaling. The function breaks scale_count into multiples of 750 and iterates that many times to reach the desired count.
- Parameters:
scale_count (int) – No of PVCs to be Scaled. Should be one of the values in the dict “constants.SCALE_PVC_ROUND_UP_VALUE”.
pvc_per_pod_count (int) – Number of PVCs to be attached to a single pod (e.g. if 20, then 20 PVCs will be attached to a single pod)
start_io (bool) – Whether to start IO; defaults to True
io_runtime (seconds) – Runtime in Seconds to continue IO
pvc_size (int) – Size of PVC to be created
max_pvc_size (int) – The max size of the pvc
obj_name_prefix (str) – The prefix of the object name. The default value is ‘obj’
- property kind
- property node_selector
- pvc_expansion(pvc_new_size, wait_time=30)
Function to expand PVC size and verify the new size is reflected.
- Parameters:
pvc_new_size (int) – Updated/extended PVC size
wait_time – timeout for all the pvc in kube job to get extended size
- ocs_ci.ocs.scale_lib.add_worker_based_on_cpu_utilization(node_count, expected_percent, role_type=None, machineset_name=None)
Function to evaluate CPU utilization of nodes and add node if required.
- Parameters:
machineset_name (list) – Machineset_names to add more nodes if required.
node_count (int) – Additional nodes to be added
expected_percent (int) – Expected utilization percent
role_type (str) – To add type to the nodes getting added
- Returns:
True if nodes get added, else False.
- Return type:
bool
- ocs_ci.ocs.scale_lib.add_worker_based_on_pods_count_per_node(node_count, expected_count, role_type=None, machineset_name=None)
Function to evaluate number of pods up in node and add new node accordingly.
- Parameters:
machineset_name (list) – Machineset_names to add more nodes if required.
node_count (int) – Additional nodes to be added
expected_count (int) – Expected pod count in one node
role_type (str) – To add type to the nodes getting added
- Returns:
True if nodes get added, else False.
- Return type:
bool
- ocs_ci.ocs.scale_lib.attach_multiple_pvc_to_pod_dict(pvc_list, namespace, raw_block_pv=False, pvcs_per_pod=10, deployment_config=False, start_io=True, node_selector=None, io_runtime=None, io_size=None, pod_yaml='/home/docs/checkouts/readthedocs.org/user_builds/ocs-ci/checkouts/latest/ocs_ci/templates/app-pods/performance.yaml')
Function to construct pod.yaml with multiple PVCs. Note: the function supports only performance.yaml, which has fio built in
- Parameters:
pvc_list (list) – list of PVCs to be attach to single pod
namespace (str) – Name of the namespace where to deploy
raw_block_pv (bool) – Whether the PVC is a raw block PV
pvcs_per_pod (int) – Number of PVCs to be attached to a single pod
deployment_config (bool) – If True, create a DeploymentConfig pod; otherwise a regular pod
start_io (bool) – Whether to start IO; defaults to True
node_selector (dict) – Pods will be created in this node_selector Example, {‘nodetype’: ‘app-pod’}
io_runtime (seconds) – Runtime in Seconds to continue IO
io_size (str value with M|K|G) – io_size with respective unit
pod_yaml (dict) – Pod yaml file dict
- Returns:
pod data with multiple PVC mount paths added
- Return type:
pod_data (str)
- ocs_ci.ocs.scale_lib.check_all_pod_reached_running_state_in_kube_job(kube_job_obj, namespace, no_of_pod, timeout=30)
Function to check whether bulk-created pods reached the Running state using kube_job
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of PVC’s created
no_of_pod (int) – POD count
timeout (sec) – Timeout between each POD iteration check
- Returns:
List of all PODs reached running state.
- Return type:
pod_running_list (list)
- Asserts:
If not all pods reached the Running state.
- ocs_ci.ocs.scale_lib.check_all_pvc_reached_bound_state_in_kube_job(kube_job_obj, namespace, no_of_pvc, timeout=30)
Function to check whether bulk-created PVCs reached the Bound state using kube_job
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of PVC’s created
no_of_pvc (int) – Bulk PVC count
timeout – a timeout for all the pvc in kube job to reach bound status
- Returns:
List of all PVCs which is in Bound state.
- Return type:
pvc_bound_list (list)
- Asserts:
If not all PVCs reached the Bound state.
- ocs_ci.ocs.scale_lib.check_and_add_enough_worker(worker_count)
Function to check if there are enough workers available to scale pods. If not, workers will be added on the supported platforms. The function also adds the scale label to the respective worker nodes.
- Parameters:
worker_count (int) – Expected worker count to be present in the setup
- Returns:
True if there is enough worker count, else raise an exception.
- Return type:
bool
- ocs_ci.ocs.scale_lib.check_enough_resource_available_in_workers(ms_name=None, pod_dict_path=None)
Function to check if there are enough resources in the workers; if not, add workers on automation-supported platforms
- Parameters:
ms_name (list) – Required param in case of the aws platform, to increase the workers
pod_dict_path (str) – Pod dict path for nginx pod.
- ocs_ci.ocs.scale_lib.collect_scale_data_in_file(namespace, kube_pod_obj_list, kube_pvc_obj_list, scale_count, pvc_per_pod_count, scale_data_file)
Function to add scale data to a file
- Parameters:
namespace (str) – Namespace of scale resource created
kube_pod_obj_list (list) – List of pod kube jobs
kube_pvc_obj_list (list) – List of pvc kube jobs
scale_count (int) – Scaled PVC count
pvc_per_pod_count (int) – PVCs per pod count
scale_data_file (str) – Scale data file with path
- ocs_ci.ocs.scale_lib.construct_pvc_clone_yaml_bulk_for_kube_job(pvc_dict_list, clone_yaml, sc_name)
Function to construct pvc.yaml to create bulk of pvc clones using kube_job
- Parameters:
pvc_dict_list (list) – List of PVCs for each of them one clone is to be created
clone_yaml (str) – Clone yaml which is the template for building clones
sc_name (str) – SC name for pvc creation
- Returns:
List of all PVC.yaml dicts
- Return type:
pvc_dict_list (list)
- ocs_ci.ocs.scale_lib.construct_pvc_creation_yaml_bulk_for_kube_job(no_of_pvc, access_mode, sc_name, pvc_size=None, max_pvc_size=105)
Function to construct pvc.yaml to create a bulk of PVCs using kube_job
- Parameters:
no_of_pvc (int) – Bulk PVC count
access_mode (str) – PVC access_mode
sc_name (str) – SC name for pvc creation
pvc_size (str) – size of all pvcs to be created with Gi suffix (e.g. 10Gi). If None, random size pvc will be created
max_pvc_size (int) – The max size of the pvc. It should be greater than 9
- Returns:
List of all PVC.yaml dicts
- Return type:
pvc_dict_list (list)
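The bulk construction can be sketched by stamping out copies of a template dict (the template shape and naming here are illustrative, not the actual ocs-ci template):

```python
import copy

def build_pvc_dicts(no_of_pvc, access_mode, sc_name, pvc_size="10Gi"):
    # Hypothetical PVC template; the real function renders a yaml template.
    template = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": ""},
        "spec": {
            "accessModes": [access_mode],
            "storageClassName": sc_name,
            "resources": {"requests": {"storage": pvc_size}},
        },
    }
    pvc_dict_list = []
    for i in range(no_of_pvc):
        pvc = copy.deepcopy(template)  # deep copy so each dict is independent
        pvc["metadata"]["name"] = f"pvc-{i}"
        pvc_dict_list.append(pvc)
    return pvc_dict_list
```

The deep copy matters: mutating a shallow copy's nested `metadata` dict would rename every PVC in the list.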
- ocs_ci.ocs.scale_lib.delete_objs_parallel(obj_list, namespace, kind)
Function to delete objs specified in list
- Parameters:
obj_list (list) – List can be obj of pod, pvc, etc
namespace (str) – Namespace where the obj belongs to
kind (str) – Obj Kind
- ocs_ci.ocs.scale_lib.get_expected_worker_count(scale_count=1500)
Function to get expected worker count based on platform to scale pods in cluster
- Parameters:
scale_count (int) – Scale count of the PVC+POD to be created
- Returns:
Expected worker count to scale required number of pod
- Return type:
expected_worker_count (int)
- ocs_ci.ocs.scale_lib.get_max_pvc_count()
Return the maximum number of pvcs to test for. This value is 500 times the number of worker nodes.
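The stated rule is a simple multiplication; a minimal sketch (the helper name is illustrative, and the real function reads the worker count from the cluster):

```python
def max_pvc_count(num_worker_nodes):
    # 500 PVCs per worker node, per the rule stated above.
    return 500 * num_worker_nodes
```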
- ocs_ci.ocs.scale_lib.get_pod_creation_time_in_kube_job(kube_job_obj, namespace, no_of_pod)
Function to get pod creation time of pods created using kube_job. Note: the function doesn’t support DeploymentConfig pods
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of PVC’s created
no_of_pod (int) – POD count
- Returns:
Dictionary of pod_name with creation time.
- Return type:
pod_dict (dict)
- ocs_ci.ocs.scale_lib.get_rate_based_on_cls_iops(custom_iops_dict=None, osd_size=2048)
Function to check ceph cluster iops and suggest rate param for fio.
- Parameters:
osd_size (int) – Size of the OSD in GB
custom_iops_dict (dict) – Dictionary of rate params to be used during IO run.
Example: iops_dict = {'usage_below_40%': '16k', 'usage_40%_60%': '8k', 'usage_60%_80%': '4k', 'usage_80%_95%': '2K'}
Warning: Make sure the dict keys are the same as in the example above.
- Returns:
Rate param for fio based on ceph cluster IOPS
- Return type:
rate_param (str)
- ocs_ci.ocs.scale_lib.get_size_based_on_cls_usage(custom_size_dict=None)
Function to check cluster capacity and suggest the IO write size for the cluster
- Parameters:
custom_size_dict (dict) – Dictionary of size params to be used during IO run.
Example: size_dict = {'usage_below_60': '2G', 'usage_60_70': '512M', 'usage_70_80': '10M', 'usage_80_85': '512K', 'usage_above_85': '10K'}
Warning: Make sure the dict keys are the same as in the example above.
- Returns:
IO size to be considered for cluster env
- Return type:
size (str)
- ocs_ci.ocs.scale_lib.increase_pods_per_worker_node_count(pods_per_node=500, pods_per_core=10)
Function to increase the pods-per-node count. By default OCP supports 250 pods per node; from OCP 4.6 the limit is 500. Using this function you can override this param to create more pods per worker node. More detail: https://docs.openshift.com/container-platform/4.5/nodes/nodes/nodes-nodes-managing-max-pods.html
Example: The default value for podsPerCore is 10 and the default value for maxPods is 250. This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor.
WARNING: This function will unschedule workers and reboot them, so be aware that any non-DC pods are expected to be terminated.
- Parameters:
pods_per_node (int) – Pods per node limit count
pods_per_core (int) – Pods per core limit count
- Raises:
UnexpectedBehaviour – If the machineconfigpool is not in Updating state within 40 secs.
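The podsPerCore/maxPods interaction described in the example above can be expressed as a small calculation (a sketch of the kubelet’s documented behavior, not ocs-ci code):

```python
def effective_max_pods(cores, pods_per_core=10, max_pods=250):
    # The node admits at most pods_per_core * cores pods, capped at
    # max_pods; whichever value is smaller is the limiting factor.
    return min(pods_per_core * cores, max_pods)
```

With the defaults, a node needs 25 or more cores before maxPods (250) becomes the limiting factor rather than podsPerCore.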
- ocs_ci.ocs.scale_lib.scale_capacity_with_deviceset(add_deviceset_count=2, timeout=300)
Scale the storagecluster deviceset count by increasing the value in the storagecluster CR.
Example: If add_deviceset_count = 2 and the existing storagecluster has a deviceset count of 1, at the end of the function the deviceset value will be the existing value + add_deviceset_count, i.e. 3
- Parameters:
add_deviceset_count (int) – Deviceset count to be added to existing value
timeout (sec) – Timeout for wait_for_resource function call
- ocs_ci.ocs.scale_lib.scale_ocs_node(node_count=3)
Scale OCS worker node by increasing the respective machineset replica value by node_count.
Example: If node_count = 3 & existing cluster has 3 worker nodes, then at the end of this function, setup should have 6 OCS worker nodes.
- Parameters:
node_count (int) – Add node_count OCS worker node to the cluster
- ocs_ci.ocs.scale_lib.validate_all_expanded_pvc_size_in_kube_job(kube_job_obj, namespace, no_of_pvc, resize_value, timeout=30)
Function to check whether bulk-created PVCs have the extended size using kube_job
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of PVC’s created
no_of_pvc (int) – Bulk PVC count
resize_value (int) – Updated/extended PVC size
timeout – a timeout for all the pvc in kube job to get extended size
- Returns:
List of all PVCs which have extended size.
- Return type:
pvc_extended_list (list)
- Asserts:
If not all PVCs have the extended size.
- ocs_ci.ocs.scale_lib.validate_all_pods_and_check_state(namespace, pod_scale_list)
Function to validate all the PODs are in Running state
- Parameters:
namespace (str) – Namespace of PVC’s created
pod_scale_list (list) – List of expected PODs scaled
- ocs_ci.ocs.scale_lib.validate_all_pvcs_and_check_state(namespace, pvc_scale_list)
Function to validate all the PVCs are in Bound state
- Parameters:
namespace (str) – Namespace of PVC’s created
pvc_scale_list (list) – List of expected PVCs scaled
- ocs_ci.ocs.scale_lib.validate_node_and_oc_services_are_up_after_reboot(wait_time=40)
Function to validate that all the nodes (worker/master), pods and services are up and running in the OCP cluster
- Parameters:
wait_time (int) – Wait time before validating nodes and services
ocs_ci.ocs.scale_noobaa_lib module
- ocs_ci.ocs.scale_noobaa_lib.check_all_obc_reached_bound_state_in_kube_job(kube_job_obj, namespace, no_of_obc, timeout=60, no_wait_time=20)
Function to check whether bulk-created OBCs reached the Bound state using kube_job
- Parameters:
kube_job_obj (obj) – Kube Job Object
namespace (str) – Namespace of OBC’s created
no_of_obc (int) – Bulk OBC count
timeout (second) – a timeout for all the obc in kube job to reach bound state
no_wait_time (int) – number of wait iterations to ensure all OBCs reach the bound state
- Returns:
List of all OBCs which is in Bound state.
- Return type:
obc_bound_list (list)
- Raises:
AssertionError – If not all OBC reached to Bound state
- ocs_ci.ocs.scale_noobaa_lib.check_all_obcs_status(namespace=None)
Check all OBCs status in given namespace
- Parameters:
namespace (str) – Namespace where endpoint is running
- Returns:
A list of all OBCs in Bound state
- Return type:
obc_bound_list
- ocs_ci.ocs.scale_noobaa_lib.check_memory_leak_in_noobaa_endpoint_log()
Check memory leak log in Noobaa endpoint logs.
- Raises:
UnexpectedBehaviour – If memory leak error is existing in Noobaa endpoint logs.
- ocs_ci.ocs.scale_noobaa_lib.cleanup(namespace, obc_list=None)
Delete all OBCs created in the cluster
- Parameters:
namespace (str) – Namespace of the OBCs being deleted
obc_list (list) – List of OBCs to be deleted
- ocs_ci.ocs.scale_noobaa_lib.construct_obc_creation_yaml_bulk_for_kube_job(no_of_obc, sc_name, namespace)
Construct an obc.yaml file to create a bulk of OBCs using kube_job
- Parameters:
no_of_obc (int) – Bulk obc count
sc_name (str) – storage class name used for obc creation
namespace (str) – Namespace used to create the bulk of obcs
- Returns:
List of all obc.yaml dicts
- Return type:
obc_dict_list (list)
- ocs_ci.ocs.scale_noobaa_lib.create_namespace()
Create and set namespace for obcs to be created
- ocs_ci.ocs.scale_noobaa_lib.delete_bucket(bucket_name=None)
- ocs_ci.ocs.scale_noobaa_lib.delete_namespace(namespace=None)
Delete the namespace which was created for OBCs
- Parameters:
namespace (str) – Namespace where OBCs were created
- ocs_ci.ocs.scale_noobaa_lib.delete_object(bucket_name=None)
- ocs_ci.ocs.scale_noobaa_lib.get_endpoint_pod_count(namespace)
Function to query number of endpoint running in the cluster.
- Parameters:
namespace (str) – Namespace where endpoint is running
- Returns:
Number of endpoint pods
- Return type:
endpoint_count (int)
- ocs_ci.ocs.scale_noobaa_lib.get_hpa_utilization(namespace)
Function to get hpa utilization in the cluster.
- Parameters:
namespace (str) – Namespace where endpoint is running
- Returns:
Percentage of CPU utilization on HPA.
- Return type:
hpa_cpu_utilization (int)
- ocs_ci.ocs.scale_noobaa_lib.get_noobaa_pods_status()
Check Noobaa pod status to ensure it is in Running state.
- Returns:
True if all Noobaa pods are in the Running state, False otherwise
- Return type:
bool
- ocs_ci.ocs.scale_noobaa_lib.get_pod_obj(pod_name)
Function to get pod object using pod name
- Parameters:
pod_name (str) – Name of a pod
- Returns:
Object of a pod
- Return type:
pod object (obj)
- ocs_ci.ocs.scale_noobaa_lib.hsbench_cleanup()
Clean up deployment config, pvc, pod and test user
- ocs_ci.ocs.scale_noobaa_lib.hsbench_io(namespace=None, num_obj=None, num_bucket=None, object_size=None, run_mode=None, bucket_prefix=None, result=None, validate=None, timeout=None)
Run hsbench s3 benchmark
- Parameters:
namespace (str) – namespace to run workload
num_obj (int) – Number of object(s)
num_bucket (int) – Number of bucket(s)
object_size (str) – Size of objects in bytes with postfix K, M, and G
run_mode (str) – run mode
bucket_prefix (str) – Prefix for buckets
result (str) – Write CSV output to this file
validate (bool) – Validates whether running workload is completed.
timeout (int) – timeout in second
- ocs_ci.ocs.scale_noobaa_lib.hsbench_setup()
Setup and install hsbench
- ocs_ci.ocs.scale_noobaa_lib.measure_obc_creation_time(obc_name_list, timeout=120)
Measure OBC creation time
- Parameters:
obc_name_list (list) – List of obc names to measure creation time
timeout (int) – Wait time in seconds before collecting logs
- Returns:
Dictionary of obcs and creation time in second
- Return type:
obc_dict (dict)
- ocs_ci.ocs.scale_noobaa_lib.measure_obc_deletion_time(obc_name_list, timeout=60)
Measure OBC deletion time
- Parameters:
obc_name_list (list) – List of obc names to measure deletion time
timeout (int) – Wait time in seconds before collecting logs
- Returns:
Dictionary of obcs and deletion time in second
- Return type:
obc_dict (dict)
- ocs_ci.ocs.scale_noobaa_lib.noobaa_running_node_restart(pod_name)
Function to restart the node on which noobaa pods are running
- Parameters:
pod_name (str) – Name of noobaa pod
- ocs_ci.ocs.scale_noobaa_lib.validate_bucket(num_objs=None, upgrade=None, result=None, put=None, get=None, list_obj=None)
Validate S3 objects created by hsbench on bucket(s)
ocs_ci.ocs.scale_pgsql module
ScalePodPGSQL workload class for scale
- class ocs_ci.ocs.scale_pgsql.ScalePodPGSQL(node_selector={'scale-label': 'app-scale'}, **kwargs)
Bases:
Postgresql
Scale Postgresql workload with scale parameters and functions
- cleanup()
Clean up
- setup_postgresql(replicas, node_selector=None)
Deploy postgres sql server
- Parameters:
replicas (int) – Number of postgresql pods to be deployed
- Raises:
CommandFailed – If PostgreSQL server setup fails
- ocs_ci.ocs.scale_pgsql.add_worker_node(instance_type=None)
- ocs_ci.ocs.scale_pgsql.delete_worker_node()
ocs_ci.ocs.small_file_workload module
Class to implement the Small Files benchmark as a subclass of the benchmark operator
This workload requires an Elasticsearch instance to run.
- class ocs_ci.ocs.small_file_workload.SmallFiles(es, **kwargs)
Bases:
BenchmarkOperator
Small_Files workload benchmark
- delete()
Delete the benchmark
- run()
Run the benchmark and wait until it completes
- setup_operations(ops)
Setting up the test operations
- Parameters:
ops – can be a list of operations or a string with one operation
- setup_storageclass(interface)
Setting up the storageclass parameter in the CRD object
- Parameters:
interface (str) – the storage interface
- setup_test_params(file_size, files, threads, samples)
Setting up the parameters for this test
- Parameters:
file_size (int) – the file size in KB
files (int) – number of files to use in the test
threads (int) – number of threads to use in the test
samples (int) – number of samples of the test to run
- setup_vol_size(file_size, files, threads, total_capacity)
Calculate the size of the volume to be tested. It should be at least twice the total size of the files, and at least 100Gi.
Since the file_size is in KB and the vol_size needs to be in GB, further calculation is needed.
- Parameters:
file_size (int) – the file size in KB
files (int) – number of files to use in the test
threads (int) – number of threads to use in the test
total_capacity (int) – The total usable storage capacity in GiB
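The sizing rule described above can be sketched as follows. This is a simplified assumption (total data taken as file_size * files * threads, capped by the usable capacity), not the actual ocs-ci implementation:

```python
import math

def estimate_vol_size(file_size, files, threads, total_capacity):
    """Sketch of the volume-sizing rule: at least twice the total file
    data, never below 100 GiB. file_size is in KB; result is in GiB.
    The formula for total data is an assumption for illustration."""
    total_data_gib = (file_size * files * threads) / (1024 ** 2)
    # At least twice the data size, and never below 100 GiB ...
    vol_size = max(100, math.ceil(2 * total_data_gib))
    # ... but capped by the usable cluster capacity.
    return min(vol_size, total_capacity)

print(estimate_vol_size(file_size=4, files=1_000_000, threads=4, total_capacity=500))
```

With small 4 KB files the 100 GiB floor dominates; only very large datasets push the volume size above it.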
ocs_ci.ocs.uninstall module
- ocs_ci.ocs.uninstall.remove_cluster_logging_operator_from_ocs()
Function removes cluster logging operator from OCS
- ocs_ci.ocs.uninstall.remove_monitoring_stack_from_ocs()
Function removes monitoring stack from OCS
- ocs_ci.ocs.uninstall.remove_ocp_registry_from_ocs(platform)
Function removes OCS registry from OCP cluster
- Parameters:
platform (str) – the platform the cluster deployed on
- ocs_ci.ocs.uninstall.uninstall_lso(lso_sc)
Function uninstalls local-volume objects from OCS cluster
- ocs_ci.ocs.uninstall.uninstall_ocs()
The function uninstalls the OCS operator from an OpenShift cluster and removes all its settings and dependencies
ocs_ci.ocs.utils module
- ocs_ci.ocs.utils.apply_oc_resource(template_name, cluster_path, _templating, template_data=None, template_dir='ocs-deployment')
Apply an oc resource after rendering the specified template with the rook data from cluster_conf.
- Parameters:
template_name (str) – Name of the ocs-deployment config template
cluster_path (str) – Path to cluster directory, where files will be written
_templating (Templating) – Object of Templating class used for templating
template_data (dict) – Data for render template (default: {})
template_dir (str) – Directory under templates dir where template exists (default: ocs-deployment)
- ocs_ci.ocs.utils.check_ceph_healthly(ceph_mon, num_osds, num_mons, mon_container=None, timeout=300)
Function to check that Ceph is in a healthy state
- Parameters:
ceph_mon (CephNode) – monitor node
num_osds (int) – number of osds in cluster
num_mons (int) – number of mons in cluster
mon_container (str) – monitor container name if monitor is placed in the container
timeout (int) – max time in seconds to wait (default: 300); if the cluster is not healthy within this period, 1 is returned
- Returns:
returns 0 when ceph is in healthy state otherwise returns 1
- Return type:
int
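The 0/1 return contract above can be illustrated with a minimal sketch. This is an assumption about how the health check maps `ceph -s` output to a return code, not the actual ocs-ci code:

```python
def ceph_health_rc(ceph_status_output):
    """Minimal sketch (an assumption): output from 'ceph -s' containing
    HEALTH_OK maps to 0 (healthy); anything else maps to 1."""
    return 0 if "HEALTH_OK" in ceph_status_output else 1

print(ceph_health_rc("cluster:\n  health: HEALTH_OK"))
```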
- ocs_ci.ocs.utils.cleanup_ceph_nodes(osp_cred, pattern=None, timeout=300)
- ocs_ci.ocs.utils.collect_noobaa_db_dump(log_dir_path, cluster_config=None)
Collect the Noobaa DB dump
- Parameters:
log_dir_path (str) – directory for dumped Noobaa DB
cluster_config (MultiClusterConfig) – If multicluster scenario then this object will have specific cluster config
- ocs_ci.ocs.utils.collect_ocs_logs(dir_name, ocp=True, ocs=True, mcg=False, status_failure=True)
Collects OCS logs
- Parameters:
dir_name (str) – directory name to store OCS logs. Logs will be stored in dir_name suffixed with _ocs_logs.
ocp (bool) – Whether to gather OCP logs
ocs (bool) – Whether to gather OCS logs
mcg (bool) – True for collecting MCG logs (noobaa db dump)
status_failure (bool) – Whether the collection is after success or failure, allows better naming for folders under logs directory
- ocs_ci.ocs.utils.collect_pod_container_rpm_package(dir_name)
Collect information about rpm packages from all containers + go version
- Parameters:
dir_name (str) – directory to store container rpm package info
- ocs_ci.ocs.utils.collect_prometheus_metrics(metrics, dir_name, start, stop, step=1.0, threading_lock=None)
Collects metrics from Prometheus and saves them to a file in JSON format. Metrics can be found in the OCP Console under Monitoring -> Metrics.
- Parameters:
metrics (list) – list of metrics to get from Prometheus (E.g. ceph_cluster_total_used_bytes, cluster:cpu_usage_cores:sum, cluster:memory_usage_bytes:sum)
dir_name (str) – directory name to store metrics. Metrics will be stored in dir_name suffixed with _ocs_metrics.
start (str) – start timestamp of required datapoints
stop (str) – stop timestamp of required datapoints
step (float) – step of required datapoints
threading_lock – (threading.RLock): Lock to use for thread safety (default: None)
- ocs_ci.ocs.utils.config_ntp(ceph_node)
- ocs_ci.ocs.utils.create_ceph_conf(fsid, mon_hosts, pg_num='128', pgp_num='128', size='2', auth='cephx', pnetwork='172.16.0.0/12', jsize='1024')
- ocs_ci.ocs.utils.create_ceph_nodes(cluster_conf, inventory, osp_cred, run_id, instances_name=None)
- ocs_ci.ocs.utils.create_nodes(conf, inventory, osp_cred, run_id, instances_name=None)
- ocs_ci.ocs.utils.create_oc_resource(template_name, cluster_path, _templating, template_data=None, template_dir='ocs-deployment')
Create an oc resource after rendering the specified template with the rook data from cluster_conf.
- Parameters:
template_name (str) – Name of the ocs-deployment config template
cluster_path (str) – Path to cluster directory, where files will be written
_templating (Templating) – Object of Templating class used for templating
template_data (dict) – Data for render template (default: {})
template_dir (str) – Directory under templates dir where template exists (default: ocs-deployment)
- ocs_ci.ocs.utils.enable_console_plugin()
Enables console plugin for ODF
- ocs_ci.ocs.utils.enable_mco_console_plugin()
Enables console plugin for MCO
- ocs_ci.ocs.utils.export_mg_pods_logs(log_dir_path)
Export must gather pods logs
- Parameters:
log_dir_path (str) – the path of copying the logs
- ocs_ci.ocs.utils.generate_repo_file(base_url, repos)
- ocs_ci.ocs.utils.get_active_acm_index()
Get index of active acm cluster
- ocs_ci.ocs.utils.get_all_acm_indexes()
Get indexes for all ACM clusters. This is more relevant in an MDR scenario.
- Returns:
A list of ACM indexes
- Return type:
list
- ocs_ci.ocs.utils.get_ceph_versions(ceph_nodes, containerized=False)
Log and return the ceph or ceph-ansible versions for each node in the cluster.
- Parameters:
ceph_nodes – nodes in the cluster
containerized – is the cluster containerized or not
- Returns:
A dict of the name / version pair for each node or container in the cluster
- ocs_ci.ocs.utils.get_cluster_object(external_rhcs_info)
Build an external_ceph.Ceph object with all node and role info
- Parameters:
external_rhcs_info (dict) –
- Returns:
external_ceph.Ceph object
- ocs_ci.ocs.utils.get_external_mode_rhcs()
Get external cluster info from config and obtain external cluster object
- Returns:
external_ceph.Ceph object
- ocs_ci.ocs.utils.get_helper_pods_output(log_dir_path)
Get the output of “oc describe mg-helper pods”
- Parameters:
log_dir_path (str) – the path of copying the logs
- ocs_ci.ocs.utils.get_iso_file_url(base_url)
- ocs_ci.ocs.utils.get_logs_ocp_mg_pods(log_dir_path)
Get logs from OCP Must Gather pods
- Parameters:
log_dir_path (str) – the path of copying the logs
- ocs_ci.ocs.utils.get_namespce_name_by_pattern(pattern='client', filter=None)
Find namespace names that match the given pattern
- Parameters:
pattern (str) – pattern to match in namespace names
filter (str) – namespace name to filter out of the list
- Returns:
Namespace names matching the pattern
- Return type:
list
- ocs_ci.ocs.utils.get_non_acm_cluster_config()
Get a list of non-ACM clusters' config objects
- Returns:
A list of cluster config objects
- Return type:
list
- ocs_ci.ocs.utils.get_openstack_driver(yaml)
- ocs_ci.ocs.utils.get_passive_acm_index()
Get index of passive acm cluster
- ocs_ci.ocs.utils.get_pod_name_by_pattern(pattern='client', namespace=None, filter=None)
In a given namespace find names of the pods that match the given pattern
- Parameters:
pattern (str) – pattern to match in pod names
namespace (str) – Namespace value
filter (str) – pod name to filter out of the list
- Returns:
List of pod names matching the pattern
- Return type:
pod_list (list)
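The pattern/filter semantics described above can be sketched as a simple substring match with an optional exclusion. This is an assumption for illustration, not the actual ocs-ci implementation (which queries the cluster via oc):

```python
def filter_by_pattern(names, pattern, exclude=None):
    """Sketch (assumption): keep names containing the pattern as a
    substring, dropping an optionally specified exact name."""
    return [name for name in names if pattern in name and name != exclude]

pods = ["noobaa-core-0", "noobaa-db-pg-0", "rook-ceph-osd-0"]
print(filter_by_pattern(pods, "noobaa", exclude="noobaa-db-pg-0"))
```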
- ocs_ci.ocs.utils.get_primary_cluster_config()
Get the primary cluster config object in a DR scenario
- Returns:
primary cluster config object from config.clusters
- Return type:
framework.config
- ocs_ci.ocs.utils.get_public_network()
Get the configured public network subnet for nodes in the cluster.
- Returns:
(str) public network subnet
- ocs_ci.ocs.utils.get_rook_version()
Get the rook image information from rook-ceph-operator pod
- Returns:
rook version
- Return type:
str
- ocs_ci.ocs.utils.get_root_permissions(node, path)
Transfer ownership from root to the current user for the given path. Recursive.
- Parameters:
node (ceph.ceph.CephNode) –
path – file path
- ocs_ci.ocs.utils.hard_reboot(gyaml, name=None)
- ocs_ci.ocs.utils.keep_alive(ceph_nodes)
- ocs_ci.ocs.utils.kill_osd_external(ceph_cluster, osd_id, sig_type='SIGTERM')
Kill an osd with given signal
- Parameters:
ceph_cluster (external_cluster.Ceph) – Cluster object
osd_id (int) – id of osd
sig_type (str) – type of signal to be sent
- Raises:
CommandFailed –
- ocs_ci.ocs.utils.label_pod_security_admission(namespace=None, upgrade_version=None)
Label PodSecurity admission
- Parameters:
namespace (str) – Namespace name
upgrade_version (semantic_version.Version) – ODF semantic version for upgrade if it’s an upgrade run, otherwise None.
- ocs_ci.ocs.utils.node_power_failure(gyaml, sleep_time=300, name=None)
- ocs_ci.ocs.utils.oc_get_all_obc_names()
- Returns:
A set of all OBC names
- Return type:
set
- ocs_ci.ocs.utils.open_firewall_port(ceph_node, port, protocol)
Opens firewall ports for the given node
- Parameters:
ceph_node (ceph.ceph.CephNode) – ceph node
port (str) – port
protocol (str) – protocol
- ocs_ci.ocs.utils.reboot_node(ceph_node, timeout=300)
Reboot a node with given ceph_node object
- Parameters:
ceph_node (CephNode) – Ceph node object representing the node.
timeout (int) – Wait time in seconds for the node to come back.
- Raises:
SSHException – if not able to connect through ssh
- ocs_ci.ocs.utils.revive_osd_external(ceph_cluster, osd_id)
Start an already stopped osd
- Parameters:
ceph_cluster (external_cluster.Ceph) – cluster object
osd_id (int) – id of osd
- Raises:
CommandFailed – in case of failure
- ocs_ci.ocs.utils.run_must_gather(log_dir_path, image, command=None, cluster_config=None)
Runs the must-gather tool against the cluster
- Parameters:
log_dir_path (str) – directory for dumped must-gather logs
image (str) – must-gather image registry path
command (str) – optional command to execute within the must-gather image
cluster_config (MultiClusterConfig) – Holds a specific cluster config object in case of multicluster
- Returns:
must-gather cli output
- Return type:
mg_output (str)
- ocs_ci.ocs.utils.search_ethernet_interface(ceph_node, ceph_node_list)
Search for an interface on the given node that makes every node in the cluster accessible by its shortname.
- Parameters:
ceph_node (ceph.ceph.CephNode) – node where check is performed
ceph_node_list (list) – node list to check
- ocs_ci.ocs.utils.set_cdn_repo(node, repos)
- ocs_ci.ocs.utils.setup_cdn_repos(ceph_nodes, build=None)
- ocs_ci.ocs.utils.setup_ceph_toolbox(force_setup=False, storage_cluster=None)
Set up ceph-toolbox. Also checks whether the toolbox already exists; if it does, this behaves as a no-op.
- Parameters:
force_setup (bool) – force setup toolbox pod
- ocs_ci.ocs.utils.setup_deb_cdn_repo(node, build=None)
- ocs_ci.ocs.utils.setup_deb_repos(node, ubuntu_repo)
- ocs_ci.ocs.utils.setup_repos(ceph, base_url, installer_url=None)
- ocs_ci.ocs.utils.setup_vm_node(node, ceph_nodes, **params)
- ocs_ci.ocs.utils.store_cluster_state(ceph_cluster_object, ceph_clusters_file_name)
- ocs_ci.ocs.utils.thread_init_class(class_init_operations, shutdown)
- ocs_ci.ocs.utils.update_ca_cert(node, cert_url, timeout=120)
- ocs_ci.ocs.utils.write_docker_daemon_json(json_text, node)
Write the given string to /etc/docker/daemon/daemon
- Parameters:
json_text – json string
node (ceph.ceph.CephNode) – Ceph node object
ocs_ci.ocs.version module
Version reporting module for OCS QE purposes:
logging version of OCS/OCP stack for every test run
generating version report directly usable in bug reports
It asks openshift for:
ClusterVersion resource
image identifiers of all containers running in openshift storage related namespaces
rpm package versions of few selected components (such as rook or ceph, if given pod is running)
- ocs_ci.ocs.version.get_environment_info()
Get the environment information. Information that will be collected:
- Versions:
OCP – version / build / channel
OCS – version / build
Ceph – version
Rook – version
- Platform:
BM / VMware / Cloud provider etc.
Instance type / architecture
Cluster name
User name that ran the test
- Returns:
dictionary that contains the environment information
- Return type:
dict
- ocs_ci.ocs.version.get_ocp_version(version_dict=None)
Query OCP to get the plain OCP version string. If the optional
version_dict
is specified, the version is extracted from the dict and the OCP query is not performed. An example OCP version string:
'4.10.0-0.nightly-2022-02-09-111355'
- Parameters:
version_dict (dict) – k8s ClusterVersion dict dump (optional)
- Returns:
full version string of OCP cluster
- Return type:
str
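The dict-extraction path mentioned above can be sketched as follows. The field path `status.desired.version` is where a ClusterVersion dump typically carries the current version; it is assumed here for illustration rather than taken from the ocs-ci source:

```python
def version_from_cluster_version(version_dict):
    """Sketch (assumption): pull the version string out of a
    ClusterVersion k8s dict dump at status.desired.version."""
    return version_dict["status"]["desired"]["version"]

cv = {"status": {"desired": {"version": "4.10.0-0.nightly-2022-02-09-111355"}}}
print(version_from_cluster_version(cv))
```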
- ocs_ci.ocs.version.get_ocp_version_dict()
Query OCP to get all information about OCP version.
- Returns:
ClusterVersion k8s object
- Return type:
dict
- ocs_ci.ocs.version.get_ocs_version()
Query OCP to get all information about OCS version.
- Returns:
image_dict with information about images IDs
- Return type:
dict
- ocs_ci.ocs.version.main()
Main function of the version reporting command line tool. Used by the report-version entry point from setup.py to invoke this function.
- ocs_ci.ocs.version.report_ocs_version(cluster_version, image_dict, file_obj)
- Report OCS version via:
python logging
printing a human-readable version into file_obj (stdout by default)
- Parameters:
cluster_version (dict) – cluster version dict
image_dict (dict) – dict of image objects
file_obj (object) – file object to log information
ocs_ci.ocs.warp module
- class ocs_ci.ocs.warp.Warp
Bases:
object
Warp - S3 benchmarking tool: https://github.com/minio/warp WARP is an open-source full-featured S3 performance assessment software built to conduct tests between WARP clients and object storage hosts. WARP measures GET and PUT performance from multiple clients against a MinIO cluster.
- cleanup(multi_client=False)
Clear all objects in the associated bucket. Clean up the deployment config, PVC, pod, and test user.
- create_resource_warp(multi_client=False, replicas=1)
- Create resource for Warp S3 benchmark:
Create service account
Create PVC
Create warp pod for running workload
- run_benchmark(bucket_name=None, access_key=None, secret_key=None, duration=None, concurrent=None, objects=None, obj_size=None, timeout=None, validate=True, multi_client=True, tls=False, insecure=False, debug=False)
Run the Warp S3 benchmark. Usage details can be found at: https://github.com/minio/warp
- Parameters:
bucket_name (string) – Name of bucket
access_key (string) – Access Key credential
secret_key (string) – Secret Key credential
duration (string) – Duration of the test
concurrent (int) – number of concurrent operations
objects (int) – number of objects
obj_size (int) – size of object
timeout (int) – timeout in seconds
validate (Boolean) – Whether to validate that the running workload has completed.
multi_client (Boolean) – If True, then run multi client benchmarking
tls (Boolean) – Use TLS (HTTPS) for transport
insecure (Boolean) – disable TLS certificate verification
debug (Boolean) – Enable debug output
- validate_warp_workload()
Validate if workload was running on the app-pod
- Raises:
UnexpectedBehaviour – if output.csv file doesn’t contain output data.
ocs_ci.ocs.workload module
- class ocs_ci.ocs.workload.WorkLoad(name=None, path=None, work_load=None, storage_type='fs', pod=None, jobs=1)
Bases:
object
- run(**conf)
Perform work_load_mod.run() in order to run the actual IO. Every workload module should implement a run() function so that we can invoke <workload_module>.run() to run IOs.
- Parameters:
**conf (dict) – Run configuration a.k.a parameters for workload io runs
- Returns:
Returns a concurrent.future object
- Return type:
result (Future)
- setup(**setup_conf)
Perform work_load_mod.setup() to set up the workload. Every workload module should implement a setup() method so that the respective <workload_module>.setup() function can be called from here.
- Parameters:
setup_conf (dict) – Work load setup configuration, varies from workload to workload. Refer constants.TEMPLATE_WORKLOAD_DIR for various available workloads
- Returns:
True if setup is success else False
- Return type:
bool
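The setup-then-run contract described above can be illustrated with a toy stand-in. This is not the real ocs_ci.ocs.workload.WorkLoad class, only a sketch of the documented return types (bool from setup(), a concurrent.futures Future from run()):

```python
from concurrent.futures import ThreadPoolExecutor

class MiniWorkLoad:
    """Toy stand-in (assumption) illustrating the contract above:
    setup() returns a bool, run() returns a Future."""

    def __init__(self, name=None):
        self.name = name
        self._pool = ThreadPoolExecutor(max_workers=1)

    def setup(self, **setup_conf):
        # The real workloads render templates from
        # constants.TEMPLATE_WORKLOAD_DIR; here we only record the config.
        self.setup_conf = setup_conf
        return True

    def run(self, **conf):
        # Dispatch the IO run asynchronously and hand back a Future,
        # mirroring the documented return type.
        return self._pool.submit(lambda: {"status": "done", **conf})

wl = MiniWorkLoad(name="fio")
assert wl.setup(size="1G")
result = wl.run(runtime=60).result()
print(result["status"])
```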
Module contents
Avoid the already-imported warning caused by importing this package from the run wrapper for loading config.
You can see documentation here: https://docs.pytest.org/en/latest/reference.html under section PYTEST_DONT_REWRITE