invoke_model

BedrockRuntime.Client.invoke_model(**kwargs)

Invokes the specified Bedrock model to run inference using the input provided in the request body. You use InvokeModel to run inference for text models, image models, and embedding models.

For more information, see Run inference in the Bedrock User Guide.

For example requests, see Examples (after the Errors section).

See also: AWS API Documentation

Request Syntax

response = client.invoke_model(
    body=b'bytes'|file,
    contentType='string',
    accept='string',
    modelId='string'
)
Parameters:
  • body (bytes or seekable file-like object) –

    [REQUIRED]

    Input data in the format specified in the content-type request header. To see the format and content of this field for different models, refer to Inference parameters.

  • contentType (string) – The MIME type of the input data in the request. The default value is application/json.

  • accept (string) – The desired MIME type of the inference body in the response. The default value is application/json.

  • modelId (string) –

    [REQUIRED]

    Identifier of the model to invoke.
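Because body must already be serialized in the format named by contentType, a typical JSON request is built and encoded before the call. The field names below are illustrative only; the real payload shape is model-specific (see Inference parameters):

```python
import json

# Illustrative payload; the actual field names depend on the model
# (refer to Inference parameters in the Bedrock User Guide).
payload = {
    "prompt": "\n\nHuman: Hello\n\nAssistant:",
    "max_tokens_to_sample": 200,
}

# invoke_model expects `body` as bytes (or a seekable file-like
# object), so serialize the payload before passing it.
body = json.dumps(payload).encode("utf-8")
```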

Return type:

dict

Returns:

Response Syntax

{
    'body': StreamingBody(),
    'contentType': 'string'
}

Response Structure

  • (dict) –

    • body (StreamingBody) –

      Inference response from the model in the format specified in the content-type header field. To see the format and content of this field for different models, refer to Inference parameters.

    • contentType (string) –

      The MIME type of the inference result.
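Note that body in the response is a botocore StreamingBody, not already-decoded JSON. A minimal sketch of calling the operation and decoding the result, wrapped as a helper so the client is passed in (the helper name and the payload shape are assumptions, not part of the API):

```python
import json

def invoke_text_model(client, model_id, request_payload):
    """Call invoke_model and decode the JSON response body.

    `client` is a boto3 BedrockRuntime client; `request_payload`
    is a model-specific dict (see Inference parameters).
    """
    response = client.invoke_model(
        body=json.dumps(request_payload).encode("utf-8"),
        contentType="application/json",
        accept="application/json",
        modelId=model_id,
    )
    # response['body'] is a StreamingBody; read() returns raw bytes,
    # which json.loads can decode directly.
    return json.loads(response["body"].read())
```

In practice the client would come from `boto3.client("bedrock-runtime")`, and the returned dict's fields depend entirely on the model invoked.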

Exceptions