update_inference_component

SageMaker.Client.update_inference_component(**kwargs)

Updates an inference component.

See also: AWS API Documentation

Request Syntax

response = client.update_inference_component(
    InferenceComponentName='string',
    Specification={
        'ModelName': 'string',
        'Container': {
            'Image': 'string',
            'ArtifactUrl': 'string',
            'Environment': {
                'string': 'string'
            }
        },
        'StartupParameters': {
            'ModelDataDownloadTimeoutInSeconds': 123,
            'ContainerStartupHealthCheckTimeoutInSeconds': 123
        },
        'ComputeResourceRequirements': {
            'NumberOfCpuCoresRequired': ...,
            'NumberOfAcceleratorDevicesRequired': ...,
            'MinMemoryRequiredInMb': 123,
            'MaxMemoryRequiredInMb': 123
        },
        'BaseInferenceComponentName': 'string'
    },
    RuntimeConfig={
        'CopyCount': 123
    }
)
Parameters:
  • InferenceComponentName (string) –

    [REQUIRED]

    The name of the inference component.

  • Specification (dict) –

    Details about the resources to deploy with this inference component, including the model, container, and compute resources.

    • ModelName (string) –

      The name of an existing SageMaker model object in your account that you want to deploy with the inference component.

    • Container (dict) –

      Defines a container that provides the runtime environment for a model that you deploy with an inference component.

      • Image (string) –

        The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

      • ArtifactUrl (string) –

        The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

      • Environment (dict) –

        The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. The map supports up to 16 entries.

        • (string) –

          • (string) –

    • StartupParameters (dict) –

      Settings that take effect while the model container starts up.

      • ModelDataDownloadTimeoutInSeconds (integer) –

        The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

      • ContainerStartupHealthCheckTimeoutInSeconds (integer) –

        The timeout value, in seconds, for your inference container to pass health checks by Amazon SageMaker Hosting. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.

    • ComputeResourceRequirements (dict) –

      The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.

      Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.

      • NumberOfCpuCoresRequired (float) –

        The number of CPU cores to allocate to run a model that you assign to an inference component.

      • NumberOfAcceleratorDevicesRequired (float) –

        The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and Amazon Web Services Inferentia.

      • MinMemoryRequiredInMb (integer) – [REQUIRED]

        The minimum amount of memory, in MB, to allocate to run a model that you assign to an inference component.

      • MaxMemoryRequiredInMb (integer) –

        The maximum amount of memory, in MB, to allocate to run a model that you assign to an inference component.

    • BaseInferenceComponentName (string) –

      The name of an existing inference component that is to contain the inference component that you're creating or updating with your request.

      Specify this parameter only if your request creates or updates an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component.

      When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type.

      Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt. A minimal adapter sketch appears after this parameter list.

  • RuntimeConfig (dict) –

    Runtime settings for a model that is deployed with an inference component.

    • CopyCount (integer) – [REQUIRED]

      The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests. For a minimal scaling sketch, see the first example after this parameter list.
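
Example: Scaling an inference component

A common update changes only the runtime configuration. The following is a minimal sketch rather than an official example; the component name is hypothetical, and the sketch assumes the inference component already exists in your account.

import boto3

sagemaker = boto3.client('sagemaker')

# Scale out to four copies of the model container. Because only
# RuntimeConfig is supplied, the existing specification is left unchanged.
response = sagemaker.update_inference_component(
    InferenceComponentName='my-inference-component',  # hypothetical name
    RuntimeConfig={
        'CopyCount': 4
    }
)

print(response['InferenceComponentArn'])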
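
Example: Updating an adapter inference component

The following sketch points an adapter inference component at a new adapter artifact. It is an illustration only; the component names and the S3 path are hypothetical. ComputeResourceRequirements is omitted because an adapter uses the compute resources of its base inference component.

import boto3

sagemaker = boto3.client('sagemaker')

response = sagemaker.update_inference_component(
    InferenceComponentName='my-adapter-component',  # hypothetical name
    Specification={
        'BaseInferenceComponentName': 'my-base-component',  # hypothetical name
        'Container': {
            # S3 path to a single gzip-compressed tar archive (.tar.gz)
            # that contains the adapter artifacts
            'ArtifactUrl': 's3://amzn-s3-demo-bucket/adapters/adapter-v2.tar.gz'
        }
    }
)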

Return type:

dict

Returns:

Response Syntax

{
    'InferenceComponentArn': 'string'
}

Response Structure

  • (dict) –

    • InferenceComponentArn (string) –

      The Amazon Resource Name (ARN) of the inference component.
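
The update runs asynchronously: the call returns the ARN while SageMaker applies the change in the background. One way to wait for the update to settle is to poll describe_inference_component until the status leaves 'Updating'. The following is a minimal polling sketch, not a built-in waiter, and the component name is hypothetical.

import time

import boto3

sagemaker = boto3.client('sagemaker')

name = 'my-inference-component'  # hypothetical name
arn = sagemaker.update_inference_component(
    InferenceComponentName=name,
    RuntimeConfig={'CopyCount': 2}
)['InferenceComponentArn']

# DescribeInferenceComponent reports InferenceComponentStatus values
# such as 'Updating', 'InService', and 'Failed'.
while True:
    status = sagemaker.describe_inference_component(
        InferenceComponentName=name
    )['InferenceComponentStatus']
    if status != 'Updating':
        break
    time.sleep(30)

print(arn, status)  # 'InService' on success; 'Failed' otherwise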

Exceptions