2 changes: 2 additions & 0 deletions index.adoc
@@ -28,6 +28,8 @@ include::source/operator-aee-vmware.adoc[leveloffset=+1]

include::source/operator-performance-expectations.adoc[leveloffset=+1]

include::source/meta-ci-guide.adoc[leveloffset=+1]

== Community

https://github.com/os-migrate/vmware-migration-kit/[GitHub hosted source code]
37 changes: 37 additions & 0 deletions source/meta-ci-guide.adoc
@@ -0,0 +1,37 @@
[id="os-migrate-meta-ci-guide"]

= CI for OS Migrate Guide

The OS Migrate project's code is spread across four repositories:

* os-migrate
* vmware-migration-kit
* documentation
* fake server (TBD)

The two core software repos, os-migrate and vmware-migration-kit, have various test types available, listed below.

For the most part, the linters and sanity tests are already in place, but there are gaps in unit testing that need to be filled, and the functional tests need to be expanded.

An umbrella term used in this project is End-to-End (E2E) testing, which broadly speaking means integration (and therefore functional) testing.

== Test Types
Contributor
Since we are explaining what we do here, maybe we can write a bit more info about the tests, so we aren't vague. Hopefully this isn't overkill, so let me know what you think.

  1. Linters
  • Style, syntax, formatting
  2. Sanity Tests (ansible-test sanity)
  • Import tests
  • Compilation checks
  • Documentation validation
  • Argument spec validation
  • Code quality checks (pylint, pep8)
  • We don't write these, we conform to them; they come as standardization for Ansible.
  3. Unit Tests
  • Individual functions/methods tested in isolation
  4. Integration Tests
  • Module integration tests (test modules against real/fake APIs)
  • Role integration tests (test roles in test environments)
  • Can use real systems OR mocks (like the srv.go fake OpenStack)
  5. Functional Tests
  • For role testing, meaning testing from the point of view of the admin and tenant personas, we ensure migration works regardless of which permission level is used. We test it via Ansible playbooks. In our project this is categorized as functional: we test a lot from that perspective, but not the full workflow as with E2E.
  6. E2E Tests
  • Full workflow tests from the user perspective
  • Typically run against real systems

Then... I am not sure where to put module testing as you are describing it now (meaning: separate headers/topics). I would personally categorize module testing under either unit (testing a single small targeted feature in a module) or integration (stuff working together / idempotency).


* Linters
** Style, syntax, and formatting checks
* Sanity
** Documentation in place, certification checks, etc.
* Unit
** Individual tests tied to specific code blocks
* Module
** Testing at the level of individual Ansible modules
* Role
** Testing at the level of Ansible roles
* Functional
** Fit-for-purpose validations that prove working code externally, either via real connections or mocks & fakes
* Integration
** Full validations against working systems, some real, some fakes
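
To make the unit level above concrete, here is a minimal sketch in Python using `unittest.mock`. The `flavor_name` helper and the connection object are hypothetical, not actual OS Migrate code; the point is that the function is exercised in isolation, with the OpenStack-style connection replaced by a mock rather than a real system.

[source,python]
----
from unittest import mock

# Hypothetical helper, for illustration only; real OS Migrate module
# code lives in the os-migrate and vmware-migration-kit repositories.
def flavor_name(conn, wanted):
    """Resolve a flavor's name through an OpenStack-style connection."""
    flavor = conn.compute.find_flavor(wanted)
    return flavor.name if flavor else None

# Unit level: one function tested in isolation; the mock stands in
# for the real connection, so no API is contacted.
def test_flavor_found():
    conn = mock.Mock()
    flavor = mock.Mock()
    flavor.name = "m1.small"  # set after creation: name= in Mock() is reserved
    conn.compute.find_flavor.return_value = flavor
    assert flavor_name(conn, "m1.small") == "m1.small"

def test_flavor_missing():
    conn = mock.Mock()
    conn.compute.find_flavor.return_value = None
    assert flavor_name(conn, "m1.xlarge") is None
----

The same substitution idea scales up to the integration level, where a fake service (such as the srv.go fake OpenStack) stands in for the real API instead of a per-test mock.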

== Resource Requirements

A lot of the testing is trivial in scope and can be accomplished using existing GitHub Actions at almost zero cost. However, for larger E2E testing we require dedicated hardware.