get_model_invocation_job
- Bedrock.Client.get_model_invocation_job(**kwargs)
Gets details about a batch inference job. For more information, see View details about a batch inference job.
See also: AWS API Documentation
Request Syntax
response = client.get_model_invocation_job(
    jobIdentifier='string'
)
- Parameters:
jobIdentifier (string) –
[REQUIRED]
The Amazon Resource Name (ARN) of the batch inference job.
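For example, a minimal call sketch, assuming standard boto3 credentials and region configuration (the job ARN below is a placeholder, not a real resource):

import boto3

bedrock = boto3.client('bedrock')

# Placeholder ARN: use the jobArn returned when the batch inference job was created
job_arn = 'arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/abc123example'

response = bedrock.get_model_invocation_job(jobIdentifier=job_arn)
print(response['status'], response['jobName'])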
- Return type:
dict
- Returns:
Response Syntax
{
    'jobArn': 'string',
    'jobName': 'string',
    'modelId': 'string',
    'clientRequestToken': 'string',
    'roleArn': 'string',
    'status': 'Submitted'|'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped'|'PartiallyCompleted'|'Expired'|'Validating'|'Scheduled',
    'message': 'string',
    'submitTime': datetime(2015, 1, 1),
    'lastModifiedTime': datetime(2015, 1, 1),
    'endTime': datetime(2015, 1, 1),
    'inputDataConfig': {
        's3InputDataConfig': {
            's3InputFormat': 'JSONL',
            's3Uri': 'string',
            's3BucketOwner': 'string'
        }
    },
    'outputDataConfig': {
        's3OutputDataConfig': {
            's3Uri': 'string',
            's3EncryptionKeyId': 'string',
            's3BucketOwner': 'string'
        }
    },
    'vpcConfig': {
        'subnetIds': [
            'string',
        ],
        'securityGroupIds': [
            'string',
        ]
    },
    'timeoutDurationInHours': 123,
    'jobExpirationTime': datetime(2015, 1, 1)
}
Response Structure
(dict) –
jobArn (string) –
The Amazon Resource Name (ARN) of the batch inference job.
jobName (string) –
The name of the batch inference job.
modelId (string) –
The unique identifier of the foundation model used for model inference.
clientRequestToken (string) –
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
roleArn (string) –
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
status (string) –
The status of the batch inference job.
message (string) –
If the batch inference job failed, this field contains a message describing why the job failed.
submitTime (datetime) –
The time at which the batch inference job was submitted.
lastModifiedTime (datetime) –
The time at which the batch inference job was last modified.
endTime (datetime) –
The time at which the batch inference job ended.
inputDataConfig (dict) –
Details about the location of the input to the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys will be set: s3InputDataConfig. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows: 'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}
s3InputDataConfig (dict) –
Contains the configuration of the S3 location of the input data.
s3InputFormat (string) –
The format of the input data.
s3Uri (string) –
The S3 location of the input data.
s3BucketOwner (string) –
The ID of the Amazon Web Services account that owns the S3 bucket containing the input data.
outputDataConfig (dict) –
Details about the location of the output of the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys will be set: s3OutputDataConfig. If a client receives an unknown member it will set SDK_UNKNOWN_MEMBER as the top level key, which maps to the name or tag of the unknown member. The structure of SDK_UNKNOWN_MEMBER is as follows: 'SDK_UNKNOWN_MEMBER': {'name': 'UnknownMemberName'}. Handling this union is shown in the sketch after the response structure.
s3OutputDataConfig (dict) –
Contains the configuration of the S3 location of the output data.
s3Uri (string) –
The S3 location of the output data.
s3EncryptionKeyId (string) –
The unique identifier of the key that encrypts the S3 location of the output data.
s3BucketOwner (string) –
The ID of the Amazon Web Services account that owns the S3 bucket containing the output data.
vpcConfig (dict) –
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
subnetIds (list) –
An array of IDs for each subnet in the VPC to use.
(string) –
securityGroupIds (list) –
An array of IDs for each security group in the VPC to use.
(string) –
timeoutDurationInHours (integer) –
The number of hours after which the batch inference job was set to time out.
jobExpirationTime (datetime) –
The time at which the batch inference job times out, or timed out if it has already ended.
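The sketch below polls a job until it reaches a terminal state and then reads the output location from the tagged-union outputDataConfig. It is illustrative only: the job ARN is a placeholder, and the polling interval and the set of terminal states are assumptions drawn from the status values listed above.

import time

import boto3

bedrock = boto3.client('bedrock')

# Placeholder ARN: substitute the ARN of an existing batch inference job
job_arn = 'arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/abc123example'

# Assumed terminal states, taken from the status enum above
TERMINAL_STATES = {'Completed', 'Failed', 'Stopped', 'PartiallyCompleted', 'Expired'}

while True:
    job = bedrock.get_model_invocation_job(jobIdentifier=job_arn)
    status = job['status']
    if status in TERMINAL_STATES:
        break
    time.sleep(60)  # poll once per minute

if status == 'Failed':
    print('Job failed:', job.get('message'))
else:
    # outputDataConfig is a tagged union: exactly one top-level key is set
    output = job['outputDataConfig']
    if 's3OutputDataConfig' in output:
        print('Output written to:', output['s3OutputDataConfig']['s3Uri'])
    else:
        # An SDK older than the union member it received reports SDK_UNKNOWN_MEMBER
        print('Unrecognized output config member:', output.get('SDK_UNKNOWN_MEMBER'))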
Exceptions