ARCHIVER Cloud Benchmarking Test Suite
Event Type: Project Poster
Time: Tuesday, June 23rd, 3:47pm - 3:49pm
Location: Analog 2
Description: To monitor R&D progress from the current state of the art toward the goals of the ARCHIVER project, a technical validation process for commercial service providers has been created. R&D environments need to be remotely accessible and suitable for test execution and validation. Testing will be staged: functional tests during the early phases, expanding to performance and scalability testing during later R&D phases.

Building on lessons learned from earlier projects such as HNSciCloud and the OCRE project's test-suite code base, a validation framework for the ARCHIVER service has been developed. It is automated according to Agile principles, with manual execution also possible, and it allows test sets to be grouped into a test plan for tracking progress. Where applicable, automated tests use a domain-specific language such as Gherkin syntax, integrated with Jenkins, to automate test deployment, provide a validation pipeline, and generate transparent reporting. Tests range from cloud infrastructure (storage, network connectivity, ingest rates) to higher layers of the stack, for example evaluating the degree of "FAIRness" of the resulting data repository services.

The framework uses open technologies, allowing easy deployment and transparent assessment and encouraging the research community to contribute additional use cases. The abstraction layer is based on Terraform for resource provisioning and on Docker and Kubernetes for orchestration. The concept is expected to be expanded and to become a best practice for the on-boarding of commercial services for research environments in the context of the European Open Science Cloud.
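A Gherkin-syntax test of the kind the description mentions might look like the following sketch; the feature name, steps, and thresholds are illustrative assumptions, not actual ARCHIVER test cases:

```gherkin
# Hypothetical example, not taken from the ARCHIVER test suite.
Feature: Object storage ingest rate
  Scenario: Sustained ingest meets an agreed threshold
    Given a provisioned object storage endpoint
    When 10 GB of test data is uploaded in parallel streams
    Then the measured ingest rate is at least 100 MB/s
```

In such a setup, Jenkins would run the step definitions behind these phrases as part of the validation pipeline and publish the results in its reports.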
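The grouping of test sets under a test plan for progress tracking can be sketched roughly as follows; all names (`TestCase`, `TestPlan`, the example checks) are hypothetical illustrations, not part of the actual ARCHIVER code base:

```python
# Minimal sketch, assuming a test plan is a named collection of test sets,
# each test set a list of pass/fail checks. Hypothetical names throughout.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TestCase:
    name: str
    check: Callable[[], bool]  # returns True when the test passes


@dataclass
class TestPlan:
    name: str
    test_sets: Dict[str, List[TestCase]] = field(default_factory=dict)

    def add_set(self, set_name: str, cases: List[TestCase]) -> None:
        # Assign a named group of tests to this plan.
        self.test_sets[set_name] = cases

    def run(self) -> Dict[str, Dict[str, bool]]:
        # Execute every test set and record per-case pass/fail results,
        # giving a simple progress report over the whole plan.
        return {
            set_name: {case.name: case.check() for case in cases}
            for set_name, cases in self.test_sets.items()
        }


# Example: one functional set and one performance set in a single plan.
plan = TestPlan("ARCHIVER early phase")
plan.add_set("functional", [TestCase("bucket_create", lambda: True)])
plan.add_set("performance", [TestCase("ingest_rate", lambda: 120 >= 100)])
report = plan.run()
```

The staged approach from the description maps naturally onto this structure: early phases populate the functional set, while performance and scalability sets are added in later phases without changing the plan-level reporting.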