start_trained_model_inference_job

CleanRoomsML.Client.start_trained_model_inference_job(**kwargs)

Defines the information necessary to begin a trained model inference job.

See also: AWS API Documentation

Request Syntax

response = client.start_trained_model_inference_job(
    membershipIdentifier='string',
    name='string',
    trainedModelArn='string',
    configuredModelAlgorithmAssociationArn='string',
    resourceConfig={
        'instanceType': 'ml.r7i.48xlarge'|'ml.r6i.16xlarge'|'ml.m6i.xlarge'|'ml.m5.4xlarge'|'ml.p2.xlarge'|'ml.m4.16xlarge'|'ml.r7i.16xlarge'|'ml.m7i.xlarge'|'ml.m6i.12xlarge'|'ml.r7i.8xlarge'|'ml.r7i.large'|'ml.m7i.12xlarge'|'ml.m6i.24xlarge'|'ml.m7i.24xlarge'|'ml.r6i.8xlarge'|'ml.r6i.large'|'ml.g5.2xlarge'|'ml.m5.large'|'ml.p3.16xlarge'|'ml.m7i.48xlarge'|'ml.m6i.16xlarge'|'ml.p2.16xlarge'|'ml.g5.4xlarge'|'ml.m7i.16xlarge'|'ml.c4.2xlarge'|'ml.c5.2xlarge'|'ml.c6i.32xlarge'|'ml.c4.4xlarge'|'ml.g5.8xlarge'|'ml.c6i.xlarge'|'ml.c5.4xlarge'|'ml.g4dn.xlarge'|'ml.c7i.xlarge'|'ml.c6i.12xlarge'|'ml.g4dn.12xlarge'|'ml.c7i.12xlarge'|'ml.c6i.24xlarge'|'ml.g4dn.2xlarge'|'ml.c7i.24xlarge'|'ml.c7i.2xlarge'|'ml.c4.8xlarge'|'ml.c6i.2xlarge'|'ml.g4dn.4xlarge'|'ml.c7i.48xlarge'|'ml.c7i.4xlarge'|'ml.c6i.16xlarge'|'ml.c5.9xlarge'|'ml.g4dn.16xlarge'|'ml.c7i.16xlarge'|'ml.c6i.4xlarge'|'ml.c5.xlarge'|'ml.c4.xlarge'|'ml.g4dn.8xlarge'|'ml.c7i.8xlarge'|'ml.c7i.large'|'ml.g5.xlarge'|'ml.c6i.8xlarge'|'ml.c6i.large'|'ml.g5.12xlarge'|'ml.g5.24xlarge'|'ml.m7i.2xlarge'|'ml.c5.18xlarge'|'ml.g5.48xlarge'|'ml.m6i.2xlarge'|'ml.g5.16xlarge'|'ml.m7i.4xlarge'|'ml.p3.2xlarge'|'ml.r6i.32xlarge'|'ml.m6i.4xlarge'|'ml.m5.xlarge'|'ml.m4.10xlarge'|'ml.r6i.xlarge'|'ml.m5.12xlarge'|'ml.m4.xlarge'|'ml.r7i.2xlarge'|'ml.r7i.xlarge'|'ml.r6i.12xlarge'|'ml.m5.24xlarge'|'ml.r7i.12xlarge'|'ml.m7i.8xlarge'|'ml.m7i.large'|'ml.r6i.24xlarge'|'ml.r6i.2xlarge'|'ml.m4.2xlarge'|'ml.r7i.24xlarge'|'ml.r7i.4xlarge'|'ml.m6i.8xlarge'|'ml.m6i.large'|'ml.m5.2xlarge'|'ml.p2.8xlarge'|'ml.r6i.4xlarge'|'ml.m6i.32xlarge'|'ml.p3.8xlarge'|'ml.m4.4xlarge',
        'instanceCount': 123
    },
    outputConfiguration={
        'accept': 'string',
        'members': [
            {
                'accountId': 'string'
            },
        ]
    },
    dataSource={
        'mlInputChannelArn': 'string'
    },
    description='string',
    containerExecutionParameters={
        'maxPayloadInMB': 123
    },
    environment={
        'string': 'string'
    },
    kmsKeyArn='string',
    tags={
        'string': 'string'
    }
)
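
For orientation, here is a minimal sketch that fills in only the required fields. Every identifier below (membership ID, ARNs, account ID) is a hypothetical placeholder, not a real resource; substitute the values from your own collaboration.

import boto3

client = boto3.client('cleanroomsml')

# Minimal call using only the required parameters; all identifiers are
# hypothetical placeholders.
response = client.start_trained_model_inference_job(
    membershipIdentifier='a1b2c3d4-5678-90ab-cdef-EXAMPLE11111',
    name='my-inference-job',
    trainedModelArn='arn:aws:cleanrooms-ml:us-east-1:111122223333:trained-model/EXAMPLE-MODEL-ID',
    resourceConfig={
        'instanceType': 'ml.m5.xlarge',
        'instanceCount': 1
    },
    outputConfiguration={
        'members': [
            {'accountId': '111122223333'}
        ]
    },
    dataSource={
        'mlInputChannelArn': 'arn:aws:cleanrooms-ml:us-east-1:111122223333:ml-input-channel/EXAMPLE-CHANNEL-ID'
    }
)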
Parameters:
  • membershipIdentifier (string) –

    [REQUIRED]

    The membership ID of the membership that contains the trained model inference job.

  • name (string) –

    [REQUIRED]

    The name of the trained model inference job.

  • trainedModelArn (string) –

    [REQUIRED]

    The Amazon Resource Name (ARN) of the trained model that is used for this trained model inference job.

  • configuredModelAlgorithmAssociationArn (string) – The Amazon Resource Name (ARN) of the configured model algorithm association that is used for this trained model inference job.

  • resourceConfig (dict) –

    [REQUIRED]

    Defines the resource configuration for the trained model inference job.

    • instanceType (string) – [REQUIRED]

      The type of instance that is used to perform model inference.

    • instanceCount (integer) –

      The number of instances to use.

  • outputConfiguration (dict) –

    [REQUIRED]

    Defines the output configuration information for the trained model inference job.

    • accept (string) –

      The MIME type used to specify the output data.

    • members (list) – [REQUIRED]

      Defines the members that can receive inference output.

      • (dict) –

        Defines who will receive inference results.

        • accountId (string) – [REQUIRED]

          The account ID of the member that can receive inference results.

  • dataSource (dict) –

    [REQUIRED]

    Defines the data source that is used for the trained model inference job.

    • mlInputChannelArn (string) – [REQUIRED]

      The Amazon Resource Name (ARN) of the ML input channel for this model inference data source.

  • description (string) – The description of the trained model inference job.

  • containerExecutionParameters (dict) –

    The execution parameters for the container.

    • maxPayloadInMB (integer) –

      The maximum size of the inference container payload, specified in MB.

  • environment (dict) –

    The environment variables to set in the Docker container.

    • (string) –

      • (string) –

  • kmsKeyArn (string) – The Amazon Resource Name (ARN) of the KMS key. This key is used to encrypt and decrypt customer-owned data in the ML inference job and associated data.

  • tags (dict) –

    The optional metadata that you apply to the resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.

    The following basic restrictions apply to tags:

    • Maximum number of tags per resource - 50.

    • For each resource, each tag key must be unique, and each tag key can have only one value.

    • Maximum key length - 128 Unicode characters in UTF-8.

    • Maximum value length - 256 Unicode characters in UTF-8.

    • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

    • Tag keys and values are case sensitive.

    • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for keys, because it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Clean Rooms ML considers it to be a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.

    • (string) –

      • (string) –
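
The optional parameters can be layered onto the same call. Below is a hedged sketch, continuing the client from the example after the request syntax, that adds a description, output MIME type, container settings, environment variables, a KMS key, and tags; every identifier and value is a hypothetical placeholder.

# Optional settings collected separately for readability; all values are
# hypothetical placeholders.
optional_settings = {
    'description': 'Nightly batch scoring run',
    'containerExecutionParameters': {
        'maxPayloadInMB': 6          # cap on the payload sent to the inference container
    },
    'environment': {
        'LOG_LEVEL': 'INFO'          # passed to the Docker container as environment variables
    },
    'kmsKeyArn': 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID',
    'tags': {
        'team': 'data-science'       # must respect the tag restrictions listed above
    }
}

response = client.start_trained_model_inference_job(
    membershipIdentifier='a1b2c3d4-5678-90ab-cdef-EXAMPLE11111',
    name='my-inference-job',
    trainedModelArn='arn:aws:cleanrooms-ml:us-east-1:111122223333:trained-model/EXAMPLE-MODEL-ID',
    resourceConfig={'instanceType': 'ml.m5.xlarge', 'instanceCount': 1},
    outputConfiguration={
        'accept': 'text/csv',        # MIME type for the output data
        'members': [{'accountId': '111122223333'}]
    },
    dataSource={'mlInputChannelArn': 'arn:aws:cleanrooms-ml:us-east-1:111122223333:ml-input-channel/EXAMPLE-CHANNEL-ID'},
    **optional_settings
)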

Return type:

dict

Returns:

Response Syntax

{
    'trainedModelInferenceJobArn': 'string'
}

Response Structure

  • (dict) –

    • trainedModelInferenceJobArn (string) –

      The Amazon Resource Name (ARN) of the trained model inference job.
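
Continuing the sketches above, the returned ARN can be captured and used to look the job up later. The follow-up call assumes the companion get_trained_model_inference_job operation, which accepts the membership identifier and the job ARN; the exact shape of its response may vary by SDK version.

# Capture the job ARN from the start response.
job_arn = response['trainedModelInferenceJobArn']

# Assumed follow-up: retrieve the job to inspect its current status.
job = client.get_trained_model_inference_job(
    membershipIdentifier='a1b2c3d4-5678-90ab-cdef-EXAMPLE11111',
    trainedModelInferenceJobArn=job_arn
)
print(job.get('status'))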

Exceptions