describe_transform_job
- SageMaker.Client.describe_transform_job(**kwargs)
Returns information about a transform job.
See also: AWS API Documentation
Request Syntax
response = client.describe_transform_job(
    TransformJobName='string'
)
- Parameters:
TransformJobName (string) –
[REQUIRED]
The name of the transform job that you want to view details of.
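A minimal usage sketch follows, assuming configured credentials and a hypothetical job name my-transform-job; the status fields it reads are documented in the response structure below.
import boto3

sm = boto3.client('sagemaker')  # assumes credentials and region are already configured

# Hypothetical job name; substitute the name of an existing transform job.
resp = sm.describe_transform_job(TransformJobName='my-transform-job')

print('Status:', resp['TransformJobStatus'])
if resp['TransformJobStatus'] == 'Failed':
    # FailureReason is returned only when the job failed.
    print('Failure reason:', resp.get('FailureReason'))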
- Return type:
dict
- Returns:
Response Syntax
{
    'TransformJobName': 'string',
    'TransformJobArn': 'string',
    'TransformJobStatus': 'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
    'FailureReason': 'string',
    'ModelName': 'string',
    'MaxConcurrentTransforms': 123,
    'ModelClientConfig': {
        'InvocationsTimeoutInSeconds': 123,
        'InvocationsMaxRetries': 123
    },
    'MaxPayloadInMB': 123,
    'BatchStrategy': 'MultiRecord'|'SingleRecord',
    'Environment': {
        'string': 'string'
    },
    'TransformInput': {
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                'S3Uri': 'string'
            }
        },
        'ContentType': 'string',
        'CompressionType': 'None'|'Gzip',
        'SplitType': 'None'|'Line'|'RecordIO'|'TFRecord'
    },
    'TransformOutput': {
        'S3OutputPath': 'string',
        'Accept': 'string',
        'AssembleWith': 'None'|'Line',
        'KmsKeyId': 'string'
    },
    'DataCaptureConfig': {
        'DestinationS3Uri': 'string',
        'KmsKeyId': 'string',
        'GenerateInferenceId': True|False
    },
    'TransformResources': {
        'InstanceType': 'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.m6i.large'|'ml.m6i.xlarge'|'ml.m6i.2xlarge'|'ml.m6i.4xlarge'|'ml.m6i.8xlarge'|'ml.m6i.12xlarge'|'ml.m6i.16xlarge'|'ml.m6i.24xlarge'|'ml.m6i.32xlarge'|'ml.c6i.large'|'ml.c6i.xlarge'|'ml.c6i.2xlarge'|'ml.c6i.4xlarge'|'ml.c6i.8xlarge'|'ml.c6i.12xlarge'|'ml.c6i.16xlarge'|'ml.c6i.24xlarge'|'ml.c6i.32xlarge'|'ml.r6i.large'|'ml.r6i.xlarge'|'ml.r6i.2xlarge'|'ml.r6i.4xlarge'|'ml.r6i.8xlarge'|'ml.r6i.12xlarge'|'ml.r6i.16xlarge'|'ml.r6i.24xlarge'|'ml.r6i.32xlarge'|'ml.m7i.large'|'ml.m7i.xlarge'|'ml.m7i.2xlarge'|'ml.m7i.4xlarge'|'ml.m7i.8xlarge'|'ml.m7i.12xlarge'|'ml.m7i.16xlarge'|'ml.m7i.24xlarge'|'ml.m7i.48xlarge'|'ml.c7i.large'|'ml.c7i.xlarge'|'ml.c7i.2xlarge'|'ml.c7i.4xlarge'|'ml.c7i.8xlarge'|'ml.c7i.12xlarge'|'ml.c7i.16xlarge'|'ml.c7i.24xlarge'|'ml.c7i.48xlarge'|'ml.r7i.large'|'ml.r7i.xlarge'|'ml.r7i.2xlarge'|'ml.r7i.4xlarge'|'ml.r7i.8xlarge'|'ml.r7i.12xlarge'|'ml.r7i.16xlarge'|'ml.r7i.24xlarge'|'ml.r7i.48xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge'|'ml.g5.xlarge'|'ml.g5.2xlarge'|'ml.g5.4xlarge'|'ml.g5.8xlarge'|'ml.g5.12xlarge'|'ml.g5.16xlarge'|'ml.g5.24xlarge'|'ml.g5.48xlarge'|'ml.inf2.xlarge'|'ml.inf2.8xlarge'|'ml.inf2.24xlarge'|'ml.inf2.48xlarge',
        'InstanceCount': 123,
        'VolumeKmsKeyId': 'string'
    },
    'CreationTime': datetime(2015, 1, 1),
    'TransformStartTime': datetime(2015, 1, 1),
    'TransformEndTime': datetime(2015, 1, 1),
    'LabelingJobArn': 'string',
    'AutoMLJobArn': 'string',
    'DataProcessing': {
        'InputFilter': 'string',
        'OutputFilter': 'string',
        'JoinSource': 'Input'|'None'
    },
    'ExperimentConfig': {
        'ExperimentName': 'string',
        'TrialName': 'string',
        'TrialComponentDisplayName': 'string',
        'RunName': 'string'
    }
}
Response Structure
(dict) –
TransformJobName (string) –
The name of the transform job.
TransformJobArn (string) –
The Amazon Resource Name (ARN) of the transform job.
TransformJobStatus (string) –
The status of the transform job. If the transform job failed, the reason is returned in the FailureReason field.
FailureReason (string) –
If the transform job failed, FailureReason describes why it failed. A transform job creates a log file, which includes error messages, and stores it as an Amazon S3 object. For more information, see Log Amazon SageMaker Events with Amazon CloudWatch.
ModelName (string) –
The name of the model used in the transform job.
MaxConcurrentTransforms (integer) –
The maximum number of parallel requests on each instance node that can be launched in a transform job. The default value is 1.
ModelClientConfig (dict) –
The timeout and maximum number of retries for processing a transform job invocation.
InvocationsTimeoutInSeconds (integer) –
The timeout value in seconds for an invocation request. The default value is 600.
InvocationsMaxRetries (integer) –
The maximum number of retries when invocation requests are failing. The default value is 3.
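As a sketch, and assuming the same hypothetical job name as above, the concurrency, timeout, and retry settings can be read back with the documented defaults used as fallbacks:
import boto3

sm = boto3.client('sagemaker')
resp = sm.describe_transform_job(TransformJobName='my-transform-job')  # hypothetical name

model_cfg = resp.get('ModelClientConfig', {})
timeout_s = model_cfg.get('InvocationsTimeoutInSeconds', 600)  # documented default: 600
max_retries = model_cfg.get('InvocationsMaxRetries', 3)        # documented default: 3
print('Max concurrent transforms:', resp.get('MaxConcurrentTransforms', 1))  # documented default: 1
print('Per-invocation timeout (s):', timeout_s, '- max retries:', max_retries)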
MaxPayloadInMB (integer) –
The maximum payload size, in MB, used in the transform job.
BatchStrategy (string) –
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.
To enable the batch strategy, you must set SplitType to Line, RecordIO, or TFRecord.
Environment (dict) –
The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.
(string) –
(string) –
TransformInput (dict) –
Describes the dataset to be transformed and the Amazon S3 location where it is stored.
DataSource (dict) –
Describes the location of the channel data, which is the S3 location of the input data that the model can consume.
S3DataSource (dict) –
The S3 location of the data source that is associated with a channel.
S3DataType (string) –
If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.
The following values are compatible: ManifestFile, S3Prefix
The following value is not compatible: AugmentedManifestFile
S3Uri (string) –
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
A key name prefix might look like this: s3://bucketname/exampleprefix/.
A manifest might look like this: s3://bucketname/example.manifest
The manifest is an S3 object which is a JSON file with the following format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N"
]
The preceding JSON matches the following S3Uris:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uri points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.
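To make the ManifestFile format concrete, here is a sketch that writes and uploads a manifest matching the layout above; the bucket name and object keys are hypothetical.
import json
import boto3

# Hypothetical bucket, prefix, and relative keys, following the documented manifest layout.
manifest = [
    {"prefix": "s3://customer_bucket/some/prefix/"},
    "relative/path/to/custdata-1",
    "relative/path/custdata-2",
]

s3 = boto3.client('s3')
s3.put_object(Bucket='customer_bucket', Key='example.manifest', Body=json.dumps(manifest))
# s3://customer_bucket/example.manifest would then be passed as S3Uri with
# S3DataType='ManifestFile' when creating a transform job.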
ContentType (string) –
The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.
CompressionType (string) –
If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None.
SplitType (string) –
The method to use to split the transform job’s data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None, which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
RecordIO
TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord, Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord, Amazon SageMaker sends individual records in each request.
Note
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord. Padding is not removed if the value of BatchStrategy is set to MultiRecord.
For more information about RecordIO, see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord, see Consuming TFRecord data in the TensorFlow documentation.
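The sketch below, again using a hypothetical job name, summarizes how the returned SplitType, BatchStrategy, and MaxPayloadInMB values describe the job's batching behavior:
import boto3

sm = boto3.client('sagemaker')
resp = sm.describe_transform_job(TransformJobName='my-transform-job')  # hypothetical name

split_type = resp['TransformInput'].get('SplitType', 'None')
strategy = resp.get('BatchStrategy')
max_payload = resp.get('MaxPayloadInMB')

if split_type == 'None':
    print('Input objects are not split; each request carries an entire input object.')
elif strategy == 'SingleRecord':
    print(f'Records split by {split_type}; one record per request.')
else:  # MultiRecord
    print(f'Records split by {split_type}; up to {max_payload} MB of records per request.')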
TransformOutput (dict) –
Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.
S3OutputPath (string) –
The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix.
For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv, batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out. Batch transform doesn’t upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job, batch transform marks the job as failed to prompt investigation.
Accept (string) –
The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.
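As a sketch of locating the transformed outputs stored under the S3OutputPath described above, the following lists objects with the .out suffix; the S3 URI parsing is deliberately simplistic and the job name is hypothetical.
import boto3

sm = boto3.client('sagemaker')
resp = sm.describe_transform_job(TransformJobName='my-transform-job')  # hypothetical name

# Naive split of 's3://bucket/prefix' into bucket and key prefix.
output_path = resp['TransformOutput']['S3OutputPath']
bucket, _, prefix = output_path[len('s3://'):].partition('/')

s3 = boto3.client('s3')
listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
out_keys = [obj['Key'] for obj in listing.get('Contents', []) if obj['Key'].endswith('.out')]
print('Transformed objects:', out_keys)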
AssembleWith (string) –
Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None. To add a newline character at the end of every transformed record, specify Line.
KmsKeyId (string) –
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
If you don’t provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role’s account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer Guide.
DataCaptureConfig (dict) –
Configuration to control how SageMaker captures inference data.
DestinationS3Uri (string) –
The Amazon S3 location being used to capture the data.
KmsKeyId (string) –
The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the batch transform job.
The KmsKeyId can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
GenerateInferenceId (boolean) –
Flag that indicates whether to append the inference ID to the output.
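A brief sketch, with a hypothetical job name, of checking whether inference data capture was configured for the job:
import boto3

sm = boto3.client('sagemaker')
resp = sm.describe_transform_job(TransformJobName='my-transform-job')  # hypothetical name

capture = resp.get('DataCaptureConfig')
if capture:
    print('Capture destination:', capture['DestinationS3Uri'])
    print('Inference ID appended:', capture.get('GenerateInferenceId', False))
else:
    print('No data capture configured for this transform job.')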
TransformResources (dict) –
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
InstanceType (string) –
The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.
InstanceCount (integer) –
The number of ML compute instances to use in the transform job. The default value is 1, and the maximum is 100. For distributed transform jobs, specify a value greater than 1.
VolumeKmsKeyId (string) –
The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job.
Note
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can’t request a VolumeKmsKeyId when using an instance type with local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The VolumeKmsKeyId can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
CreationTime (datetime) –
A timestamp that shows when the transform job was created.
TransformStartTime (datetime) –
Indicates when the transform job starts on ML instances. You are billed for the time interval between this time and the value of TransformEndTime.
TransformEndTime (datetime) –
Indicates when the transform job has been completed, or has stopped or failed. You are billed for the time interval between this time and the value of TransformStartTime.
LabelingJobArn (string) –
The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the transform or training job.
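A rough sketch, assuming the job has finished so both timestamps above are present, that estimates the billed interval and scales it by the instance count; this is an illustration, not an official billing calculation, and the job name is hypothetical.
import boto3

sm = boto3.client('sagemaker')
resp = sm.describe_transform_job(TransformJobName='my-transform-job')  # hypothetical name

start = resp.get('TransformStartTime')
end = resp.get('TransformEndTime')
if start and end:
    billed_seconds = (end - start).total_seconds()
    instances = resp['TransformResources']['InstanceCount']
    # Rough estimate only: the billed interval described above, scaled by instance count.
    print(f'~{billed_seconds:.0f}s per instance across {instances} instance(s)')
else:
    print('Timing fields are absent; the job may not have started or finished yet.')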
AutoMLJobArn (string) –
The Amazon Resource Name (ARN) of the AutoML transform job.
DataProcessing (dict) –
The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
InputFilter (string) –
A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want SageMaker to pass the entire input dataset to the algorithm, accept the default value $.
Examples: "$", "$[1:]", "$.features"
OutputFilter (string) –
A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want SageMaker to store the entire input dataset in the output file, leave the default value, $. If you specify indexes that aren’t within the dimension size of the joined dataset, you get an error.
Examples: "$", "$[0,5:]", "$['id','SageMakerOutput']"
JoinSource (string) –
Specifies the source of the data to join with the transformed data. The valid values are None and Input. The default value is None, which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input. You can specify OutputFilter as an additional filter to select a portion of the joined dataset and store it in the output file.
For JSON or JSONLines objects, such as a JSON array, SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput. The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput.
For CSV data, SageMaker takes each row as a JSON array and joins the transformed data with the input by appending each transformed row to the end of the input. The joined data has the original input data followed by the transformed data, and the output is a CSV file.
For information on how joining is applied, see Workflow for Associating Inferences with Input Records.
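An illustrative sketch of the joined JSONLines shape described above when JoinSource is Input; this is not produced by this API call, and the record fields are hypothetical, shown only to clarify where SageMakerOutput lands.
import json

# Hypothetical input record and model output, illustrating the documented join behavior:
# the transformed data is added under the 'SageMakerOutput' attribute.
input_record = {'id': 'record-0001', 'features': [0.25, 0.50, 0.75]}
model_output = {'score': 0.91}

joined = dict(input_record, SageMakerOutput=model_output)
print(json.dumps(joined))
# {"id": "record-0001", "features": [0.25, 0.5, 0.75], "SageMakerOutput": {"score": 0.91}}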
ExperimentConfig (dict) –
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs: CreateProcessingJob, CreateTrainingJob, CreateTransformJob.
ExperimentName (string) –
The name of an existing experiment to associate with the trial component.
TrialName (string) –
The name of an existing trial to associate the trial component with. If not specified, a new trial is created.
TrialComponentDisplayName (string) –
The display name for the trial component. If this key isn’t specified, the display name is the trial component name.
RunName (string) –
The name of the experiment run to associate with the trial component.
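A sketch of waiting for a terminal state before describing the job, assuming the transform_job_completed_or_stopped waiter shipped with current boto3 releases and a hypothetical job name:
import boto3
from botocore.exceptions import WaiterError

sm = boto3.client('sagemaker')
waiter = sm.get_waiter('transform_job_completed_or_stopped')  # assumed waiter name

try:
    waiter.wait(TransformJobName='my-transform-job')  # hypothetical name
except WaiterError:
    # The waiter raises when the job ends in a failed state; details are retrieved below.
    pass

resp = sm.describe_transform_job(TransformJobName='my-transform-job')
print(resp['TransformJobStatus'], resp.get('FailureReason', ''))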
Exceptions
SageMaker.Client.exceptions.ResourceNotFound