Edit Packaged Model

You can edit a packaged model to update the model’s details.

Editing a packaged model’s URL creates a new version of that model; the new version then becomes selectable in the Model dropdown on the Deployment creation page.


How to Edit a Packaged Model

Via the UI

  1. Sign in to HPE Machine Learning Inferencing Software.
  2. Navigate to the Packaged Models page.
  3. Select the Ellipsis icon next to the model you want to edit.
  4. Select Edit.
  5. Update the packaged model details.
  6. Select Save.

Via the CLI

  1. Sign in via the CLI.
    aioli user login admin
  2. Update the packaged model with the following command:
     aioli model update <MODEL_NAME> \
     --description <DESCRIPTION> \
     --image <USER_NAME>/<MODEL_NAME>:<TAG> \
     --registry <REGISTRY_NAME>
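
     For example, a call that updates the description and image of a packaged model named my-llm might look like the following (the model name, image tag, and registry name are placeholder values for illustration only):

      aioli model update my-llm \
      --description "Updated packaged model image" \
      --image myorg/my-llm:v2 \
      --registry my-registry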

Via the API

  1. Sign in to HPE Machine Learning Inferencing Software.
    curl -X 'POST' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/login' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "username": "<YOUR_USERNAME>",
      "password": "<YOUR_PASSWORD>"
    }'
  2. Obtain the Bearer token from the response (a scripted way to capture it is sketched after these steps).
  3. Get a list of packaged models to find the model ID.
    curl -X 'GET' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/models' \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer <YOUR_ACCESS_TOKEN>'
  4. Copy the model ID from the response.
  5. Use the following cURL command to edit a packaged model.
    curl -X 'PUT' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/models/<MODEL_ID>' \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer <YOUR_ACCESS_TOKEN>' \
      -H 'Content-Type: application/json' \
      -d '{
     "arguments": ["--debug"],
      "description": "<DESCRIPTION>",
      "environment": {
          "key": "value",
          "key2": "value2"
      }
      "image": "<USER_NAME>/<MODEL_NAME>:<TAG>",
      "modelFormat": "<MODEL_FORMAT>",
      "name": "<MODEL_NAME>",
      "registry": "<REGISTRY>",
      "resources": {
          "gpuType": "<GPU_TYPE>",
          "limits": {
              "cpu": "<CPU_LIMIT>",
              "gpu": "<GPU_LIMIT>",
              "memory": "<MEMORY_LIMIT>"
          },
          "requests": {
              "cpu": "<CPU_REQUEST>",
              "gpu": "<GPU_REQUEST>",
              "memory": "<MEMORY_REQUEST>"
          }
      },
      "url": "<OBJECT_URL>"
    }'
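
  The API steps above can also be scripted end to end. The following is an illustrative sketch only: it assumes the login response returns the access token in a field named token and that GET /api/v1/models returns a JSON array of objects with id and name fields, so adjust the jq filters to match the actual response shapes in your deployment.

    HOST='<YOUR_EXT_CLUSTER_IP>'

    # Log in and capture the Bearer token (the "token" field name is an assumption)
    TOKEN=$(curl -s -X 'POST' "$HOST/api/v1/login" \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "<YOUR_USERNAME>", "password": "<YOUR_PASSWORD>"}' \
      | jq -r '.token')

    # Look up the packaged model ID by name (the response shape is an assumption)
    MODEL_ID=$(curl -s -X 'GET' "$HOST/api/v1/models" \
      -H 'accept: application/json' \
      -H "Authorization: Bearer $TOKEN" \
      | jq -r '.[] | select(.name == "<MODEL_NAME>") | .id')

    # Substitute $TOKEN and $MODEL_ID into the PUT request shown in step 5
    echo "Model ID: $MODEL_ID"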