CI/CD Integration

HPE Machine Learning Data Management is a powerful system for providing data provenance and scalable processing to data scientists and engineers. You can make it even more powerful by integrating it with your existing continuous integration and continuous deployment (CI/CD) workflows and systems. If you are just getting started with HPE Machine Learning Data Management and do not yet need to automate your build processes, see Working with Pipelines.

The following diagram demonstrates an automated HPE Machine Learning Data Management development workflow with CI:

Developer Workflow

Although the initial CI setup might require extra effort, in the long run it brings significant benefits to your team, including the following:

  • Simplified workflow for data scientists. Data scientists do not need to be aware of the complexity of the underlying containerized infrastructure. They can follow an established Git process, and the CI platform takes care of the Docker build and push process behind the scenes.

  • Your CI platform can run additional unit tests against the submitted code before creating the build.

  • Flexibility in tagging Docker images, such as specifying a custom name and tag or using the commit SHA for tagging.

CI Workflow

The CI workflow includes the following steps:

  1. A new commit triggers a Git hook.

    Typically, HPE Machine Learning Data Management users store the following artifacts in a Git repository:

    • A Dockerfile that you use to build local images.
    • A pipeline.json specification file that you can use in a Makefile to create local builds, as well as in CI/CD workflows.
    • The code that performs data transformations.

    A commit hook in Git for your repository triggers the CI/CD process. It uses the information in your pipeline specification for subsequent steps.
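    For reference, a minimal pipeline.json might look like the following sketch. The repository, image, and command names here are illustrative placeholders, not values from this document:

    ```json
    {
      "pipeline": {
        "name": "edges"
      },
      "transform": {
        "image": "registry.example.com/team/edges:latest",
        "cmd": ["python3", "/edges.py"]
      },
      "input": {
        "pfs": {
          "repo": "images",
          "glob": "/*"
        }
      }
    }
    ```

    The CI workflow described below rewrites the transform.image field of this file with a commit-tagged image before deploying it.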

  2. Build an image.

    Your CI process automatically starts the build of a Docker container image based on your code and the Dockerfile.

  3. Push the image tagged with the commit SHA to an image registry.

    Your CI process pushes the Docker image created in Step 2 to your preferred image registry. When a data scientist submits code to Git, the CI process uses the Dockerfile in the repository to build the image, tag it with the Git commit SHA, and push it to your image registry.
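    In a shell-based CI job, steps 2 and 3 might be sketched as follows; the registry path and image name are assumptions for illustration, and the commands require Docker and registry credentials in the CI environment:

    ```shell
    # Derive the tag from the short Git commit SHA (names are illustrative)
    COMMIT_SHA=$(git rev-parse --short HEAD)
    IMAGE="registry.example.com/team/edges:${COMMIT_SHA}"

    # Build from the Dockerfile stored in the repository, then push
    docker build -t "$IMAGE" .
    docker push "$IMAGE"
    ```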

  4. Update the pipeline spec with the tagged image.

    In this step, your CI/CD infrastructure takes your pipeline.json specification and fills in the Git commit SHA of the image version that this pipeline must use. It then runs the pachctl update pipeline command to push the updated pipeline specification to HPE Machine Learning Data Management. HPE Machine Learning Data Management automatically pulls the new image from the registry and restarts all pods for this pipeline with that image.
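    One way to perform this step is with jq, as in the sketch below. The spec is written inline here only so the example is self-contained; in CI, your repository would already contain pipeline.json, and all names are illustrative placeholders:

    ```shell
    # In CI this would come from the commit, e.g. $(git rev-parse --short HEAD)
    COMMIT_SHA=a1b2c3d

    # Stand-in for the pipeline.json already stored in your repository
    cat > pipeline.json <<'EOF'
    {
      "pipeline": {"name": "edges"},
      "transform": {"image": "registry.example.com/team/edges:latest", "cmd": ["python3", "/edges.py"]},
      "input": {"pfs": {"repo": "images", "glob": "/*"}}
    }
    EOF

    # Fill in the commit-tagged image version
    jq --arg img "registry.example.com/team/edges:${COMMIT_SHA}" \
       '.transform.image = $img' pipeline.json > pipeline.tagged.json

    # Push the updated spec to the cluster (requires pachctl and cluster access):
    # pachctl update pipeline -f pipeline.tagged.json
    ```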

GitHub Actions

GitHub Actions are a convenient way to kick off workflows and perform integrations. You can use them to:

  • Manually trigger a pipeline build, or
  • Automatically build a pipeline from a commit or pull request.

Our example shows how to use the HPE Machine Learning Data Management GitHub Action to run HPE Machine Learning Data Management functions on a pull request or at other points during development.
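As a rough illustration of the triggers above, a workflow file might look like the following sketch. This is a generic build step, not the official Action; the file path, registry, image, and secret names are all placeholders you would replace with your own:

```yaml
# .github/workflows/build-pipeline.yaml (illustrative sketch)
name: build-pipeline
on:
  pull_request:        # build automatically on pull requests
  workflow_dispatch:   # or trigger a build manually

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push an image tagged with the commit SHA
        run: |
          IMAGE="registry.example.com/team/edges:${GITHUB_SHA::7}"
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```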