1. Quick Start

The objective is to configure your client environment (your laptop, CLI shell, notebook, or other environment) to talk to the InfinStor MLflow server. Use the steps below:

  1. Obtain your InfinStor MLflow server configuration from your administrator
    • for the SaaS version, use the following
    • for the Enterprise (single-tenant) version, contact your administrator for the above configuration details.
  2. Install infinstor-mlflow-plugin
  3. Configure environment variable MLFLOW_TRACKING_URI
  4. Obtain a token file for authentication
  5. Configure MLFLOW_EXPERIMENT_ID environment variable
  6. Use the MLflow CLI or run Python code with MLflow API calls. An MLflow example with xgboost is shown below.
  7. Use the MLflow UI to view the MLflow runs under the above MLflow experiment.
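
Put together, the client-side setup steps above look roughly like the following sketch. The tracking URI and experiment id are illustrative placeholders, not real values; substitute what your administrator provides. The install and login steps are shown as comments since they only need to be run once interactively:

```shell
# Sketch of the setup flow; replace placeholder values with those from your admin.
# pip install --upgrade infinstor-mlflow-plugin       # step 2 (run once)
export MLFLOW_TRACKING_URI="infinstor://mlflow.example.com"  # step 3 (hypothetical URI)
# login_infinstor                                     # step 4: writes ~/.infinstor/token
export MLFLOW_EXPERIMENT_ID=7                         # step 5 (example experiment id)
echo "tracking=$MLFLOW_TRACKING_URI experiment=$MLFLOW_EXPERIMENT_ID"
```

After these variables are set in a shell, MLflow CLI commands and Python code run from that shell will talk to the configured server.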

1.1. Install infinstor-mlflow-plugin

  • pip install --upgrade infinstor-mlflow-plugin
    • Installs infinstor-mlflow-plugin in your client environment

1.2. Configure MLFLOW_TRACKING_URI environment variable

The InfinStor MLflow backend is activated by setting the environment variable MLFLOW_TRACKING_URI to the value provided by your administrator (see the configuration details above). Example:

> export MLFLOW_TRACKING_URI=<admin_provided_tracking_uri>
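
Since an unset MLFLOW_TRACKING_URI makes MLflow fall back to local file storage rather than the InfinStor server, a small guard in your scripts can catch the mistake early. The URI below is a hypothetical placeholder:

```shell
# Fail fast if MLFLOW_TRACKING_URI is missing before invoking any MLflow command.
export MLFLOW_TRACKING_URI="infinstor://mlflow.example.com"   # hypothetical value
if [ -z "${MLFLOW_TRACKING_URI:-}" ]; then
  echo "error: MLFLOW_TRACKING_URI is not set" >&2
  exit 1
fi
echo "Using tracking URI: $MLFLOW_TRACKING_URI"
```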

1.3. Obtain a Token file for authentication

Use either the UI or the CLI (as described below) to generate an authentication token for talking to the InfinStor MLflow service.

1.3.1. Token file generation using a UI

Log in at your InfinStor service dashboard UI URL (see the configuration details above) using the credentials you were provided. Then click Configuration -> Manage Token in the sidebar.

[Screenshot: token generation UI — Create and Download Token File]

Pressing the 'Create Token File' button opens a new tab, and the user flows through the InfinStor main service's Cognito authentication. In the case of Enterprise licenses, users will instead complete the authentication system configured for that particular enterprise.

Once authentication is complete, the browser will download a file named token. This token file must be placed in a sub-directory called .infinstor in the user's home directory.

> mkdir -p ~/.infinstor
> cp ~/Downloads/token ~/.infinstor
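
After copying, it is worth checking that the file actually landed where the plugin expects it. A small check like the one below reports whether ~/.infinstor/token is in place (the variable names here are just for illustration):

```shell
# Check whether the token file is in place; the plugin reads ~/.infinstor/token.
token_path="$HOME/.infinstor/token"
if [ -f "$token_path" ]; then
  token_status="present"
else
  token_status="missing"   # re-download the file or run login_infinstor
fi
echo "token file ($token_path): $token_status"
```

Since the token is a credential, you may also want `chmod 600 ~/.infinstor/token` so it is readable only by your user.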

1.3.2. Token file generation using the CLI

  1. Install infinstor-mlflow-plugin as described above
  2. Configure your MLFLOW_TRACKING_URI using the steps described above.
  3. Run login_infinstor
    • This will prompt for your credentials and will automatically create the token file at ~/.infinstor/token if authentication is successful

1.4. Configure MLFLOW_EXPERIMENT_ID environment variable

The environment variable MLFLOW_EXPERIMENT_ID must be set so that runs are recorded under the correct experiment ID. If this environment variable is not set, runs will be recorded under the default experiment id 0. This is particularly important when authorization is enabled, because the user may not have access to experiment id 0. The following example sets the experiment id to 7:

> export MLFLOW_EXPERIMENT_ID=7


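Because runs silently fall back to experiment id 0 when the variable is unset, a guard like the following can warn before that happens (the id 7 is only an example; use one you have access to):

```shell
# Warn if MLFLOW_EXPERIMENT_ID is unset -- runs would land in experiment 0,
# which you may not be authorized to write to.
if [ -z "${MLFLOW_EXPERIMENT_ID:-}" ]; then
  echo "warning: MLFLOW_EXPERIMENT_ID unset; defaulting would use experiment id 0" >&2
  export MLFLOW_EXPERIMENT_ID=7     # example id; replace with your own
fi
echo "recording runs under experiment id $MLFLOW_EXPERIMENT_ID"
```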
1.5. Xgboost Example


To run the example below, you will need Infinstor MLflow server with compute extensions (and not just the Infinstor MLflow server).
- The infinstor backend (the -b argument to mlflow run, as shown below) will run the MLproject in Infinstor's compute platform. This platform allows the MLproject to be run in any supported public cloud.
- Without the infinstor backend, the MLproject will run in your local laptop or VM (where the mlflow run is issued) instead.

The xgboost example supplied with open-source MLflow is a good quick example. It can be found here:

Download the files from this directory and edit the conda.yaml by adding infinstor_mlflow_plugin and boto3 to the pip dependencies. The edited file is shown here:

name: xgboost-example
channels:
  - defaults
  - anaconda
  - conda-forge
dependencies:
  - python=3.6
  - xgboost
  - pip
  - pip:
      - mlflow>=1.6.0
      - matplotlib
      - infinstor_mlflow_plugin
      - boto3

Now you can run the example as follows:

> export MLFLOW_TRACKING_URI=<admin_provided_tracking_uri>
> cd <directory with downloaded files> 
> export MLFLOW_EXPERIMENT_ID=<exp_id>
> mlflow run -b infinstor-backend --backend-config '{"instance_type": "t3.large"}' .

1.6. MLflow UI

The MLflow UI is available at the URL provided by your administrator (see the configuration details above). Use the MLflow UI to see the tracked MLflow experiments and MLflow runs.