create_data_quality_job_definition
- SageMaker.Client.create_data_quality_job_definition(**kwargs)
Creates a definition for a job that monitors data quality and drift. For information about model monitor, see Amazon SageMaker AI Model Monitor.
See also: AWS API Documentation
Request Syntax
response = client.create_data_quality_job_definition(
    JobDefinitionName='string',
    DataQualityBaselineConfig={
        'BaseliningJobName': 'string',
        'ConstraintsResource': {'S3Uri': 'string'},
        'StatisticsResource': {'S3Uri': 'string'}
    },
    DataQualityAppSpecification={
        'ImageUri': 'string',
        'ContainerEntrypoint': ['string'],
        'ContainerArguments': ['string'],
        'RecordPreprocessorSourceUri': 'string',
        'PostAnalyticsProcessorSourceUri': 'string',
        'Environment': {'string': 'string'}
    },
    DataQualityJobInput={
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string',
            'ExcludeFeaturesAttribute': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {'Header': True|False},
                'Json': {'Line': True|False},
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string',
            'ExcludeFeaturesAttribute': 'string'
        }
    },
    DataQualityJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge'|'ml.g5.xlarge'|'ml.g5.2xlarge'|'ml.g5.4xlarge'|'ml.g5.8xlarge'|'ml.g5.16xlarge'|'ml.g5.12xlarge'|'ml.g5.24xlarge'|'ml.g5.48xlarge'|'ml.r5d.large'|'ml.r5d.xlarge'|'ml.r5d.2xlarge'|'ml.r5d.4xlarge'|'ml.r5d.8xlarge'|'ml.r5d.12xlarge'|'ml.r5d.16xlarge'|'ml.r5d.24xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    NetworkConfig={
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': ['string'],
            'Subnets': ['string']
        }
    },
    RoleArn='string',
    StoppingCondition={'MaxRuntimeInSeconds': 123},
    Tags=[
        {'Key': 'string', 'Value': 'string'},
    ]
)
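For orientation, the sketch below shows a minimal call using the endpoint input variant and populating only the required fields. It is illustrative only: the job definition name, image URI, endpoint name, S3 bucket, and role ARN are placeholders, not values taken from this reference.

import boto3

client = boto3.client('sagemaker')

# Minimal sketch; every resource name below is a placeholder.
response = client.create_data_quality_job_definition(
    JobDefinitionName='my-data-quality-job-definition',
    DataQualityAppSpecification={
        # Placeholder image URI; use the Model Monitor analyzer image for your
        # Region, or your own custom monitoring container.
        'ImageUri': '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitoring-image:latest'
    },
    DataQualityJobInput={
        'EndpointInput': {
            'EndpointName': 'my-endpoint',  # an endpoint with data capture enabled
            'LocalPath': '/opt/ml/processing/input/endpoint'
        }
    },
    DataQualityJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 's3://amzn-s3-demo-bucket/monitoring/output',
                    'LocalPath': '/opt/ml/processing/output'
                }
            }
        ]
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 1,
            'InstanceType': 'ml.m5.xlarge',
            'VolumeSizeInGB': 20
        }
    },
    RoleArn='arn:aws:iam::123456789012:role/MySageMakerExecutionRole'
)

print(response['JobDefinitionArn'])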
- Parameters:
JobDefinitionName (string) –
[REQUIRED]
The name for the monitoring job definition.
DataQualityBaselineConfig (dict) –
Configures the constraints and baselines for the monitoring job.
BaseliningJobName (string) –
The name of the job that performs baselining for the data quality monitoring job.
ConstraintsResource (dict) –
The constraints resource for a monitoring job.
S3Uri (string) –
The Amazon S3 URI for the constraints resource.
StatisticsResource (dict) –
The statistics resource for a monitoring job.
S3Uri (string) –
The Amazon S3 URI for the statistics resource.
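For illustration, a DataQualityBaselineConfig typically points at the statistics.json and constraints.json files written by an earlier baselining (suggestion) job; the job name, bucket, and prefix below are placeholders.

# Placeholder names and S3 paths for a baseline produced by a prior baselining job.
data_quality_baseline_config = {
    'BaseliningJobName': 'my-baselining-job',
    'ConstraintsResource': {'S3Uri': 's3://amzn-s3-demo-bucket/baseline/constraints.json'},
    'StatisticsResource': {'S3Uri': 's3://amzn-s3-demo-bucket/baseline/statistics.json'}
}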
DataQualityAppSpecification (dict) –
[REQUIRED]
Specifies the container that runs the monitoring job.
ImageUri (string) – [REQUIRED]
The container image that the data quality monitoring job runs.
ContainerEntrypoint (list) –
The entrypoint for a container used to run a monitoring job.
(string) –
ContainerArguments (list) –
The arguments to send to the container that the monitoring job runs.
(string) –
RecordPreprocessorSourceUri (string) –
An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.
PostAnalyticsProcessorSourceUri (string) –
An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.
Environment (dict) –
Sets the environment variables in the container that the monitoring job runs.
(string) –
(string) –
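If you run the first-party container, one way to resolve its image URI is through the SageMaker Python SDK (a separate package from boto3); the snippet below assumes that SDK is installed and that its model-monitor framework key resolves the analyzer image for your Region. The environment variable is shown only to illustrate the string-to-string map.

# Assumes the SageMaker Python SDK (pip install sagemaker), not boto3.
from sagemaker import image_uris

monitor_image_uri = image_uris.retrieve(framework='model-monitor', region='us-east-1')

data_quality_app_specification = {
    'ImageUri': monitor_image_uri,
    # Environment is a plain string-to-string map passed to the container.
    'Environment': {'MY_ENV_VAR': 'my-value'}
}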
DataQualityJobInput (dict) –
[REQUIRED]
A list of inputs for the monitoring job. Currently endpoints are supported as monitoring inputs.
EndpointInput (dict) –
Input object for the endpoint
EndpointName (string) – [REQUIRED]
An endpoint in the customer's account that has DataCaptureConfig enabled.
LocalPath (string) – [REQUIRED]
Path to the filesystem where the endpoint data is available to the container.
S3InputMode (string) –
Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
S3DataDistributionType (string) –
Whether input data distributed in Amazon S3 is fully replicated or sharded by an Amazon S3 key. Defaults to FullyReplicated.
FeaturesAttribute (string) –
The attributes of the input data that are the input features.
InferenceAttribute (string) –
The attribute of the input data that represents the ground truth label.
ProbabilityAttribute (string) –
In a classification problem, the attribute that represents the class probability.
ProbabilityThresholdAttribute (float) –
The threshold for the class probability to be evaluated as a positive result.
StartTimeOffset (string) –
If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
EndTimeOffset (string) –
If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
ExcludeFeaturesAttribute (string) –
The attributes of the input data to exclude from the analysis.
BatchTransformInput (dict) –
Input object for the batch transform job.
DataCapturedDestinationS3Uri (string) – [REQUIRED]
The Amazon S3 location being used to capture the data.
DatasetFormat (dict) – [REQUIRED]
The dataset format for your batch transform job.
Csv (dict) –
The CSV dataset used in the monitoring job.
Header (boolean) –
Indicates if the CSV data has a header.
Json (dict) –
The JSON dataset used in the monitoring job
Line (boolean) –
Indicates if the file should be read as a JSON object per line.
Parquet (dict) –
The Parquet dataset used in the monitoring job
LocalPath (string) – [REQUIRED]
Path to the filesystem where the batch transform data is available to the container.
S3InputMode (string) –
Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
S3DataDistributionType (string) –
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
FeaturesAttribute (string) –
The attributes of the input data that are the input features.
InferenceAttribute (string) –
The attribute of the input data that represents the ground truth label.
ProbabilityAttribute (string) –
In a classification problem, the attribute that represents the class probability.
ProbabilityThresholdAttribute (float) –
The threshold for the class probability to be evaluated as a positive result.
StartTimeOffset (string) –
If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
EndTimeOffset (string) –
If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
ExcludeFeaturesAttribute (string) –
The attributes of the input data to exclude from the analysis.
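For a batch transform workflow, the same DataQualityJobInput is expressed with BatchTransformInput instead of EndpointInput. A minimal sketch, assuming CSV data capture with a header row; the S3 URI and local path are placeholders.

# Placeholder paths; DataCapturedDestinationS3Uri is where the transform job captured data.
data_quality_job_input = {
    'BatchTransformInput': {
        'DataCapturedDestinationS3Uri': 's3://amzn-s3-demo-bucket/transform/data-capture',
        'DatasetFormat': {'Csv': {'Header': True}},
        'LocalPath': '/opt/ml/processing/input/batch'
    }
}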
DataQualityJobOutputConfig (dict) –
[REQUIRED]
The output configuration for monitoring jobs.
MonitoringOutputs (list) – [REQUIRED]
Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
(dict) –
The output object for a monitoring job.
S3Output (dict) – [REQUIRED]
The Amazon S3 storage location where the results of a monitoring job are saved.
S3Uri (string) – [REQUIRED]
A URI that identifies the Amazon S3 storage location where Amazon SageMaker AI saves the results of a monitoring job.
LocalPath (string) – [REQUIRED]
The local path to the Amazon S3 storage location where Amazon SageMaker AI saves the results of a monitoring job. LocalPath is an absolute path for the output data.
S3UploadMode (string) –
Whether to upload the results of the monitoring job continuously or after the job completes.
KmsKeyId (string) –
The Key Management Service (KMS) key that Amazon SageMaker AI uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
JobResources (dict) –
[REQUIRED]
Identifies the resources to deploy for a monitoring job.
ClusterConfig (dict) – [REQUIRED]
The configuration for the cluster resources used to run the processing job.
InstanceCount (integer) – [REQUIRED]
The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
InstanceType (string) – [REQUIRED]
The ML compute instance type for the processing job.
VolumeSizeInGB (integer) – [REQUIRED]
The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
VolumeKmsKeyId (string) –
The Key Management Service (KMS) key that Amazon SageMaker AI uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
NetworkConfig (dict) –
Specifies networking configuration for the monitoring job.
EnableInterContainerTrafficEncryption (boolean) –
Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.
EnableNetworkIsolation (boolean) –
Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.
VpcConfig (dict) –
Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
SecurityGroupIds (list) – [REQUIRED]
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
(string) –
Subnets (list) – [REQUIRED]
The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
(string) –
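A sketch of a NetworkConfig that runs the monitoring job inside a VPC with inter-container traffic encryption enabled; the security group and subnet IDs are placeholders.

# Placeholder security group and subnet IDs.
network_config = {
    'EnableInterContainerTrafficEncryption': True,
    'EnableNetworkIsolation': False,
    'VpcConfig': {
        'SecurityGroupIds': ['sg-0123456789abcdef0'],
        'Subnets': ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210']
    }
}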
RoleArn (string) –
[REQUIRED]
The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker AI can assume to perform tasks on your behalf.
StoppingCondition (dict) –
A time limit for how long the monitoring job is allowed to run before stopping.
MaxRuntimeInSeconds (integer) – [REQUIRED]
The maximum runtime allowed in seconds.
Note
The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
Tags (list) –
(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.
(dict) –
A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.
You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.
For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.
Key (string) – [REQUIRED]
The tag key. Tag keys must be unique per resource.
Value (string) – [REQUIRED]
The tag value.
- Return type:
dict
- Returns:
Response Syntax
{
    'JobDefinitionArn': 'string'
}
Response Structure
(dict) –
JobDefinitionArn (string) –
The Amazon Resource Name (ARN) of the job definition.
Exceptions