Using InfinStor MLflow Models

Models can be recorded in InfinStor MLflow by invoking the log_model API.
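
For example, here is a minimal sketch of logging a scikit-learn model, assuming the InfinStor MLflow tracking server is already configured as the active tracking URI. The model and dataset are illustrative; the artifact path is chosen to match the serve commands below:

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=200).fit(X, y)
    # Record the fitted model under the artifact path used below for serving
    mlflow.sklearn.log_model(model, artifact_path="infinstor/model")
    print("run_id:", run.info.run_id)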

Once a model is recorded, it can be served with the following command:

mlflow models serve -m run:/<run_id>/infinstor/model

If you do not want MLflow to create a new conda environment, use the following command instead:

mlflow models serve -m run:/<run_id>/infinstor/model --no-conda
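
Once the server is running, predictions can be requested over HTTP by POSTing to its /invocations endpoint. The following is a minimal sketch, assuming the server is listening on the default local port 5000 and the model was logged as in the example above; note that the exact JSON payload format depends on the MLflow version:

import requests

# Four feature values matching the iris model logged earlier (illustrative)
payload = {
    "columns": ["f0", "f1", "f2", "f3"],
    "data": [[5.1, 3.5, 1.4, 0.2]],
}
resp = requests.post(
    "http://127.0.0.1:5000/invocations",
    json=payload,
    headers={"Content-Type": "application/json; format=pandas-split"},
)
print(resp.json())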

Important

There is a single forward slash following run: in the above commands. Using a double forward slash, as in standard URI/URL notation, will not work.