Pachyderm Worker
About
HPE Machine Learning Data Management workers are Kubernetes pods that run the Docker image (your user code) specified in the pipeline specification. When you create a pipeline, HPE Machine Learning Data Management spins up workers that run continuously in the cluster, waiting for new data to process.
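For reference, a minimal pipeline specification might look like the following sketch. The field names (`pipeline`, `transform`, `input`, `pfs`, `glob`) follow the pipeline spec format; the pipeline name, image tag, command, and input repo name are placeholders for illustration only.

```json
{
  "pipeline": {
    "name": "edges"
  },
  "transform": {
    "image": "myregistry/my-user-code:1.0",
    "cmd": ["python3", "/code/transform.py"]
  },
  "input": {
    "pfs": {
      "repo": "images",
      "glob": "/*"
    }
  }
}
```

The `transform.image` field names the Docker image the workers run, and the `glob` pattern determines how the input is split into datums for processing.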
Each datum goes through the following processing phases inside a HPE Machine Learning Data Management worker pod:
| Phase | Description |
|---|---|
| Downloading | The HPE Machine Learning Data Management worker pod downloads the datum contents from the input repository onto the worker. |
| Processing | The HPE Machine Learning Data Management worker pod runs your code against the contents of the datum. |
| Uploading | The HPE Machine Learning Data Management worker pod uploads the results of processing to the output repository. |
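From your code's point of view, these phases are transparent: by the time your container starts, the worker has already downloaded the datum contents under `/pfs/<input repo>`, and anything your code writes to `/pfs/out` is uploaded to the output repository when processing finishes. A minimal sketch of user code, assuming an input repo named `images` and a trivial pass-through transformation:

```python
import os
import shutil

INPUT_DIR = "/pfs/images"  # datum contents downloaded here by the worker ("images" is an assumed repo name)
OUTPUT_DIR = "/pfs/out"    # files written here are uploaded to the output repository

# Process every file the worker downloaded for this datum.
for name in os.listdir(INPUT_DIR):
    src = os.path.join(INPUT_DIR, name)
    dst = os.path.join(OUTPUT_DIR, name)
    # Placeholder "processing": copy the file unchanged.
    shutil.copy(src, dst)
```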
