Model #
You can create models with the framework-specific create_model method on the corresponding ModelRegistry handle: ModelRegistry.tensorflow.create_model, ModelRegistry.torch.create_model, ModelRegistry.sklearn.create_model, ModelRegistry.python.create_model, and ModelRegistry.llm.create_model. You can retrieve an existing model via ModelRegistry.get_model.
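A minimal end-to-end sketch of this flow, assuming you are connected to a Hopsworks project; the model name, metric values and artifact directory below are illustrative:
import hopsworks

project = hopsworks.login()
mr = project.get_model_registry()

# Create a framework-specific metadata object (lazy: nothing is uploaded yet)
model = mr.python.create_model(name="my_model", metrics={"accuracy": 0.92})

# Persist the metadata and upload the artifacts from a local directory
model.save("/tmp/my_model_artifacts")

# Retrieve a previously registered model
my_model = mr.get_model("my_model", version=1)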
hsml.tensorflow.signature.create_model #
create_model(
name: str,
version: int | None = None,
metrics: dict | None = None,
description: str | None = None,
input_example: pandas.DataFrame
| pandas.Series
| numpy.ndarray
| list
| None = None,
model_schema: ModelSchema | None = None,
feature_view=None,
training_dataset_version: int | None = None,
)
Create a TensorFlow model metadata object.
Lazy
This method is lazy and does not persist any metadata or upload any model artifacts to the model registry on its own. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the model to create. TYPE: str |
version | Optionally version of the model to create, defaults to None. TYPE: int |
metrics | Optionally a dictionary with model evaluation metrics (e.g., accuracy, MAE). TYPE: dict |
description | Optionally a string describing the model, defaults to empty string. TYPE: str |
input_example | Optionally an input example that represents a single input for the model, defaults to None. TYPE: pandas.DataFrame, pandas.Series, numpy.ndarray or list |
model_schema | Optionally a model schema for the model inputs and/or outputs. TYPE: ModelSchema |
feature_view | Optionally a feature view object returned by querying the feature store. If the feature view is not provided, the model will not have access to provenance. DEFAULT: None |
training_dataset_version | Optionally a training dataset version. If the training dataset version is not provided, but the feature view is provided, the training dataset version used will be the last accessed training dataset of the feature view, within the code/notebook that reads the feature view and training dataset and then creates the model. TYPE: int |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
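A short usage sketch for the TensorFlow variant, assuming the model registry handle mr from the example above and a model already exported to the local directory shown; the model name, metrics and input example are illustrative:
import numpy as np

model_dir = "/tmp/mnist_savedmodel"        # illustrative directory containing the exported model
input_example = np.random.rand(28, 28, 1)  # illustrative single model input

tf_model = mr.tensorflow.create_model(
    name="mnist_classifier",               # illustrative model name
    metrics={"accuracy": 0.98},            # illustrative evaluation metrics
    description="CNN trained on MNIST",
    input_example=input_example,
)
tf_model.save(model_dir)                   # uploads the artifacts and persists the metadata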
hsml.torch.signature.create_model #
create_model(
name: str,
version: int | None = None,
metrics: dict | None = None,
description: str | None = None,
input_example: pandas.DataFrame
| pandas.Series
| numpy.ndarray
| list
| None = None,
model_schema: ModelSchema | None = None,
feature_view=None,
training_dataset_version: int | None = None,
)
Create a Torch model metadata object.
Lazy
This method is lazy and does not persist any metadata or upload any model artifacts to the model registry on its own. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the model to create. TYPE: str |
version | Optionally version of the model to create, defaults to None. TYPE: int |
metrics | Optionally a dictionary with model evaluation metrics (e.g., accuracy, MAE). TYPE: dict |
description | Optionally a string describing the model, defaults to empty string. TYPE: str |
input_example | Optionally an input example that represents a single input for the model, defaults to None. TYPE: pandas.DataFrame, pandas.Series, numpy.ndarray or list |
model_schema | Optionally a model schema for the model inputs and/or outputs. TYPE: ModelSchema |
feature_view | Optionally a feature view object returned by querying the feature store. If the feature view is not provided, the model will not have access to provenance. DEFAULT: None |
training_dataset_version | Optionally a training dataset version. If the training dataset version is not provided, but the feature view is provided, the training dataset version used will be the last accessed training dataset of the feature view, within the code/notebook that reads the feature view and training dataset and then creates the model. TYPE: int |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
hsml.sklearn.signature.create_model #
create_model(
name: str,
version: int | None = None,
metrics: dict | None = None,
description: str | None = None,
input_example: pandas.DataFrame
| pandas.Series
| numpy.ndarray
| list
| None = None,
model_schema: ModelSchema | None = None,
feature_view=None,
training_dataset_version: int | None = None,
)
Create an SkLearn model metadata object.
Lazy
This method is lazy and does not persist any metadata or upload any model artifacts to the model registry on its own. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the model to create. TYPE: str |
version | Optionally version of the model to create, defaults to None. TYPE: int |
metrics | Optionally a dictionary with model evaluation metrics (e.g., accuracy, MAE). TYPE: dict |
description | Optionally a string describing the model, defaults to empty string. TYPE: str |
input_example | Optionally an input example that represents a single input for the model, defaults to None. TYPE: pandas.DataFrame, pandas.Series, numpy.ndarray or list |
model_schema | Optionally a model schema for the model inputs and/or outputs. TYPE: ModelSchema |
feature_view | Optionally a feature view object returned by querying the feature store. If the feature view is not provided, the model will not have access to provenance. DEFAULT: None |
training_dataset_version | Optionally a training dataset version. If the training dataset version is not provided, but the feature view is provided, the training dataset version used will be the last accessed training dataset of the feature view, within the code/notebook that reads the feature view and training dataset and then creates the model. TYPE: int |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
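A sketch for a scikit-learn model that also attaches a model schema, assuming a fitted estimator clf and training data X_train/y_train exist, plus the registry handle mr from the earlier example; names and paths are illustrative, and Schema/ModelSchema are imported from hsml:
import os
import joblib
from hsml.schema import Schema
from hsml.model_schema import ModelSchema

# Describe the model inputs and outputs from the training data
model_schema = ModelSchema(input_schema=Schema(X_train), output_schema=Schema(y_train))

os.makedirs("/tmp/iris_model", exist_ok=True)
joblib.dump(clf, "/tmp/iris_model/model.pkl")   # export the fitted estimator (illustrative path)

sk_model = mr.sklearn.create_model(
    name="iris_classifier",                     # illustrative model name
    metrics={"f1": 0.94},                       # illustrative evaluation metrics
    model_schema=model_schema,
    input_example=X_train.sample(),             # a single example row, assuming X_train is a DataFrame
)
sk_model.save("/tmp/iris_model")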
hsml.python.signature.create_model #
create_model(
name: str,
version: int | None = None,
metrics: dict | None = None,
description: str | None = None,
input_example: pandas.DataFrame
| pandas.Series
| numpy.ndarray
| list
| None = None,
model_schema: ModelSchema | None = None,
feature_view=None,
training_dataset_version: int | None = None,
)
Create a generic Python model metadata object.
Lazy
This method is lazy and does not persist any metadata or upload any model artifacts to the model registry on its own. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the model to create. TYPE: str |
version | Optionally version of the model to create, defaults to None. TYPE: int |
metrics | Optionally a dictionary with model evaluation metrics (e.g., accuracy, MAE). TYPE: dict |
description | Optionally a string describing the model, defaults to empty string. TYPE: str |
input_example | Optionally an input example that represents a single input for the model, defaults to None. TYPE: pandas.DataFrame, pandas.Series, numpy.ndarray or list |
model_schema | Optionally a model schema for the model inputs and/or outputs. TYPE: ModelSchema |
feature_view | Optionally a feature view object returned by querying the feature store. If the feature view is not provided, the model will not have access to provenance. DEFAULT: None |
training_dataset_version | Optionally a training dataset version. If the training dataset version is not provided, but the feature view is provided, the training dataset version used will be the last accessed training dataset of the feature view, within the code/notebook that reads the feature view and training dataset and then creates the model. TYPE: int |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
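A sketch showing how to attach feature store provenance when creating a generic Python model, assuming the model was trained on data from a feature view; the feature view name and versions are illustrative:
fs = project.get_feature_store()
feature_view = fs.get_feature_view("transactions_view", version=1)  # illustrative feature view

py_model = mr.python.create_model(
    name="fraud_detector",                  # illustrative model name
    metrics={"auc": 0.91},                  # illustrative evaluation metrics
    feature_view=feature_view,              # links the model to its parent feature view
    training_dataset_version=1,             # illustrative training dataset version
)
py_model.save("/tmp/fraud_detector")        # illustrative local artifact directory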
hsml.llm.signature.create_model #
create_model(
name: str,
version: int | None = None,
metrics: dict | None = None,
description: str | None = None,
input_example: pandas.DataFrame
| pandas.Series
| numpy.ndarray
| list
| None = None,
model_schema: ModelSchema | None = None,
feature_view=None,
training_dataset_version: int | None = None,
)
Create an LLM model metadata object.
Lazy
This method is lazy and does not persist any metadata or upload any model artifacts to the model registry on its own. To save the model object and the model artifacts, call the save() method with a local file path to the directory containing the model artifacts.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the model to create. TYPE: str |
version | Optionally version of the model to create, defaults to None. TYPE: int |
metrics | Optionally a dictionary with model evaluation metrics (e.g., accuracy, MAE). TYPE: dict |
description | Optionally a string describing the model, defaults to empty string. TYPE: str |
input_example | Optionally an input example that represents a single input for the model, defaults to None. TYPE: pandas.DataFrame, pandas.Series, numpy.ndarray or list |
model_schema | Optionally a model schema for the model inputs and/or outputs. TYPE: ModelSchema |
feature_view | Optionally a feature view object returned by querying the feature store. If the feature view is not provided, the model will not have access to provenance. DEFAULT: None |
training_dataset_version | Optionally a training dataset version. If the training dataset version is not provided, but the feature view is provided, the training dataset version used will be the last accessed training dataset of the feature view, within the code/notebook that reads the feature view and training dataset and then creates the model. TYPE: int |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
Model #
Metadata object representing a model in the Model Registry.
NOT_FOUND_ERROR_CODE class-attribute instance-attribute #
NOT_FOUND_ERROR_CODE = 360000
model_files_path property #
model_files_path
Path of the model files including version and files folder.
Resolves to /Projects/{project_name}/Models/{name}/{version}/Files.
model_path property #
model_path
Path of the model with version folder omitted.
Resolves to /Projects/{project_name}/Models/{name}.
shared_registry_project_name property writable #
shared_registry_project_name
shared_registry_project_name of the model.
version_path property #
version_path
Path of the model including version folder.
Resolves to /Projects/{project_name}/Models/{name}/{version}.
add_tag #
Attach a tag to a model.
A tag consists of a name/value pair. The value of a tag can be any valid JSON.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the tag to be added. TYPE: str |
value | Value of the tag to be added. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | in case the backend fails to add the tag. |
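A brief usage sketch, assuming a retrieved model object my_model as in the earlier examples; the tag name and value are illustrative:
my_model.add_tag("validated_by", "data-science-team")  # attach an illustrative name/value tag
print(my_model.get_tags())                              # list all tags attached to the model
my_model.delete_tag("validated_by")                     # remove the tag again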
delete #
delete()
Delete the model.
Potentially dangerous operation
This operation drops all metadata associated with this version of the model and deletes the model files.
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | In case the backend encounters an issue |
delete_tag #
delete_tag(name: str)
Delete a tag attached to a model.
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the tag to be removed. TYPE: str |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | in case the backend fails to delete the tag. |
deploy #
deploy(
name: str | None = None,
description: str | None = None,
artifact_version: str | None = None,
serving_tool: str | None = None,
script_file: str | None = None,
config_file: str | None = None,
resources: PredictorResources | dict | None = None,
inference_logger: InferenceLogger | dict | None = None,
inference_batcher: InferenceBatcher
| dict
| None = None,
transformer: Transformer | dict | None = None,
api_protocol: str | None = IE.API_PROTOCOL_REST,
environment: str | None = None,
) -> deployment.Deployment
Deploy the model.
Example
import hopsworks
project = hopsworks.login()
# get Hopsworks Model Registry handle
mr = project.get_model_registry()
# retrieve the trained model you want to deploy
my_model = mr.get_model("my_model", version=1)
my_deployment = my_model.deploy()
| PARAMETER | DESCRIPTION |
|---|---|
name | Name of the deployment. TYPE: str |
description | Description of the deployment. TYPE: str |
artifact_version | (Deprecated) Version number of the model artifact to deploy, CREATE to create a new model artifact or MODEL-ONLY to reuse the shared artifact containing only the model files. TYPE: str |
serving_tool | Serving tool used to deploy the model server. TYPE: str |
script_file | Path to a custom predictor script implementing the Predict class. TYPE: str |
config_file | Model server configuration file to be passed to the model deployment. It can be accessed via the CONFIG_FILE_PATH environment variable from a predictor or transformer script. For LLM deployments without a predictor script, this file is used to configure the vLLM engine. TYPE: str |
resources | Resources to be allocated for the predictor. TYPE: PredictorResources or dict |
inference_logger | Inference logger configuration. TYPE: InferenceLogger or dict |
inference_batcher | Inference batcher configuration. TYPE: InferenceBatcher or dict |
transformer | Transformer to be deployed together with the predictor. TYPE: Transformer or dict |
api_protocol | API protocol to be enabled in the deployment (i.e., 'REST' or 'GRPC'). Defaults to 'REST'. TYPE: str |
environment | The inference environment to use. TYPE: str |
| RETURNS | DESCRIPTION |
|---|---|
deployment.Deployment | The deployment metadata object of a new or existing deployment. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | In case the backend encounters an issue |
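A slightly fuller sketch, continuing the example above; the deployment name and description are illustrative, and start() and get_state() are methods of the returned Deployment object:
my_deployment = my_model.deploy(
    name="mymodeldeployment",           # illustrative deployment name
    description="Example deployment of my_model version 1",
)
my_deployment.start()                   # start the model server
print(my_deployment.get_state())        # inspect the current state of the deployment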
download #
download(local_path=None) -> str
Download the model files.
| PARAMETER | DESCRIPTION |
|---|---|
local_path | Path where to download the model files in the local filesystem. DEFAULT: None |
| RETURNS | DESCRIPTION |
|---|---|
str | Absolute path to local folder containing the model files. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | In case the backend encounters an issue |
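A short usage sketch; the local target directory is illustrative:
local_dir = my_model.download("/tmp/my_model_files")  # download the model files locally
print(local_dir)                                       # absolute path to the downloaded files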
get_feature_view #
Get the parent feature view of this model, based on explicit provenance.
Only accessible, usable feature view objects are returned; otherwise an exception is raised. For more details, call the base method get_feature_view_provenance().
| PARAMETER | DESCRIPTION |
|---|---|
init | By default this is set to True. If you require a more complex initialization of the feature view for online or batch scenarios, set init to False to retrieve a non-initialized feature view and then call init_batch_scoring() or init_serving() with the required parameters. DEFAULT: True |
online | By default this is set to False and initialization for batch scoring is the default scenario. If you set online to True, the online scenario is enabled and the init_serving() method is called. When inside a deployment, the only available scenario is the online one, so the parameter is ignored and init_serving() is always called (if init is set to True). If you want to override this behaviour, set init to False and proceed with a custom initialization. DEFAULT: False |
| RETURNS | DESCRIPTION |
|---|---|
FeatureView | The parent feature view of the model, if accessible. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | in case the backend fails to retrieve the feature view. |
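A sketch of the two initialization scenarios described above, assuming the model was created with a feature view; get_batch_data() and init_serving() belong to the feature view API:
# Batch scoring (default): the feature view comes back initialized for batch retrieval
fv = my_model.get_feature_view()
batch_data = fv.get_batch_data()

# Custom or online scenario: skip the automatic initialization and configure serving yourself
fv_online = my_model.get_feature_view(init=False)
fv_online.init_serving()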
get_feature_view_provenance #
get_feature_view_provenance() -> explicit_provenance.Links
Get the parent feature view of this model, based on explicit provenance.
This feature view can be accessible, deleted or inaccessible. For deleted and inaccessible feature views, only minimal information is returned.
| RETURNS | DESCRIPTION |
|---|---|
explicit_provenance.Links | Links object containing the parent feature view of this model. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | in case the backend fails to retrieve the feature view provenance. |
get_tag #
Get a tag attached to the model by name.
get_tags #
Get all tags attached to the model.
get_training_dataset_provenance #
get_training_dataset_provenance() -> explicit_provenance.Links
Get the parent training dataset of this model, based on explicit provenance.
This training dataset can be accessible, deleted or inaccessible. For deleted and inaccessible training datasets, only minimal information is returned.
| RETURNS | DESCRIPTION |
|---|---|
explicit_provenance.Links | Links object containing the parent training dataset of this model. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | in case the backend fails to retrieve the training dataset provenance. |
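A brief sketch of inspecting provenance links; the accessible attribute shown is an assumption about the explicit_provenance.Links object and may differ in your version:
fv_links = my_model.get_feature_view_provenance()
td_links = my_model.get_training_dataset_provenance()

# Accessible parents can be used directly; deleted or inaccessible ones only carry minimal metadata
if fv_links.accessible:                 # assumed attribute of explicit_provenance.Links
    parent_fv = fv_links.accessible[0]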
save #
save(
model_path,
await_registration=480,
keep_original_files=False,
upload_configuration: dict[str, Any] | None = None,
)
Persist this model including model files and metadata to the model registry.
| PARAMETER | DESCRIPTION |
|---|---|
model_path | Local or remote (Hopsworks file system) path to the folder where the model files are located, or path to a specific model file. |
await_registration | Awaiting time for the model to be registered in Hopsworks. DEFAULT: 480 |
keep_original_files | If the model files are located in hopsfs, whether to move or copy those files into the Models dataset. Default is False (i.e., model files will be moved). DEFAULT: False |
upload_configuration | When saving a model from outside Hopsworks, the model is uploaded to the model registry using the REST APIs. Each model artifact is divided into chunks and each chunk is uploaded independently. This parameter can be used to control the upload chunk size, the parallelism and the number of retries. |
| RETURNS | DESCRIPTION |
|---|---|
Model | The model metadata object. |
| RAISES | DESCRIPTION |
|---|---|
hopsworks.client.exceptions.RestAPIError | In case the backend encounters an issue |
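A sketch of saving with a tuned upload configuration from outside Hopsworks; the dictionary keys shown (chunk_size, simultaneous_uploads, max_chunk_retries) are assumptions about the upload settings described above and may differ in your version:
model.save(
    "/tmp/exported_model",                # illustrative local artifact directory
    await_registration=480,               # wait up to 480 seconds for registration
    upload_configuration={                # assumed keys; verify against your Hopsworks version
        "chunk_size": 10,                 # chunk size (assumption)
        "simultaneous_uploads": 3,        # parallel chunk uploads (assumption)
        "max_chunk_retries": 1,           # retries per failed chunk (assumption)
    },
)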