braket.aws.aws_quantum_job module

class braket.aws.aws_quantum_job.AwsQuantumJob(arn: str, aws_session: AwsSession | None = None, quiet: bool = False)[source]

Bases: QuantumJob

Amazon Braket implementation of a quantum job.

Initializes an AwsQuantumJob.

Parameters:
  • arn (str) – The ARN of the hybrid job.

  • aws_session (AwsSession | None) – The AwsSession for connecting to AWS services. Default is None, in which case an AwsSession object will be created with the region of the hybrid job.

  • quiet (bool) – Sets the verbosity of the logger to low and does not report queue position. Default is False.

Raises:

ValueError – Supplied region and session region do not match.

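For illustration, a minimal sketch of wrapping an existing hybrid job by its ARN (the region, account ID, and job name below are hypothetical):

>>> from braket.aws import AwsQuantumJob
>>> job = AwsQuantumJob(arn="arn:aws:braket:us-west-2:123456789012:job/my-job")
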
TERMINAL_STATES: ClassVar[set[str]] = {'CANCELLED', 'COMPLETED', 'FAILED'}
RESULTS_FILENAME = 'results.json'
RESULTS_TAR_FILENAME = 'model.tar.gz'
LOG_GROUP = '/aws/braket/jobs'
class LogState(value)[source]

Bases: Enum

Log state enum.

TAILING = 'tailing'
JOB_COMPLETE = 'job_complete'
COMPLETE = 'complete'
classmethod create(device: str, source_module: str, entry_point: str | None = None, image_uri: str | None = None, job_name: str | None = None, code_location: str | None = None, role_arn: str | None = None, wait_until_complete: bool = False, hyperparameters: dict[str, Any] | None = None, input_data: str | dict | S3DataSourceConfig | None = None, instance_config: InstanceConfig | None = None, distribution: str | None = None, stopping_condition: StoppingCondition | None = None, output_data_config: OutputDataConfig | None = None, copy_checkpoints_from_job: str | None = None, checkpoint_config: CheckpointConfig | None = None, aws_session: AwsSession | None = None, tags: dict[str, str] | None = None, logger: Logger = <Logger braket.aws.aws_quantum_job (WARNING)>, quiet: bool = False, reservation_arn: str | None = None) AwsQuantumJob[source]

Creates a hybrid job by invoking the Braket CreateJob API.

Parameters:
  • device (str) – Device ARN of the QPU device that receives priority quantum task queueing once the hybrid job begins running. Each QPU has a separate hybrid jobs queue so that only one hybrid job is running at a time. The device string is accessible in the hybrid job instance as the environment variable “AMZN_BRAKET_DEVICE_ARN”. When using embedded simulators, you may provide the device argument as a string of the form: “local:<provider>/<simulator_name>”.

  • source_module (str) – Path (absolute, relative, or an S3 URI) to a Python module to be tarred and uploaded. If source_module is an S3 URI, it must point to a tar.gz file. Otherwise, source_module may be a file or directory.

  • entry_point (str | None) – A str that specifies the entry point of the hybrid job, relative to the source module. The entry point must be in the format importable.module or importable.module:callable. For example, source_module.submodule:start_here indicates the start_here function contained in source_module.submodule. If source_module is an S3 URI, entry point must be given. Default: source_module’s name

  • image_uri (str | None) – A str that specifies the ECR image to use for executing the hybrid job. The image_uris.retrieve_image() function may be used to retrieve ECR image URIs for the containers supported by Braket. Default: <Braket base image_uri>.

  • job_name (str | None) – A str that specifies the name with which the hybrid job is created. Allowed pattern for hybrid job name: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,50}$ Default: f’{image_uri_type}-{timestamp}’.

  • code_location (str | None) – The S3 prefix URI where custom code will be uploaded. Default: f’s3://{default_bucket_name}/jobs/{job_name}/script’.

  • role_arn (str | None) – A str providing the IAM role ARN used to execute the script. Default: IAM role returned by AwsSession’s get_default_jobs_role().

  • wait_until_complete (bool) – True if we should wait until the hybrid job completes. This would tail the hybrid job logs as it waits. Otherwise False. Default: False.

  • hyperparameters (dict[str, Any] | None) – Hyperparameters accessible to the hybrid job. The hyperparameters are made accessible as a dict[str, str] to the hybrid job. For convenience, this accepts other types for keys and values, but str() is called to convert them before being passed on. Default: None.

  • input_data (str | dict | S3DataSourceConfig | None) – Information about the training data. A dictionary maps channel names to local paths or S3 URIs. Contents found at any local paths will be uploaded to S3 at f’s3://{default_bucket_name}/jobs/{job_name}/data/{channel_name}’. If a local path, S3 URI, or S3DataSourceConfig is provided, it will be given a default channel name “input”. Default: {}.

  • instance_config (InstanceConfig | None) – Configuration of the instance(s) for running the classical code for the hybrid job. Default: InstanceConfig(instanceType='ml.m5.large', instanceCount=1, volumeSizeInGB=30).

  • distribution (str | None) – A str that specifies how the hybrid job should be distributed. If set to “data_parallel”, the hyperparameters for the hybrid job will be set to use data parallelism features for PyTorch or TensorFlow. Default: None.

  • stopping_condition (StoppingCondition | None) – The maximum length of time, in seconds, and the maximum number of quantum tasks that a hybrid job can run before being forcefully stopped. Default: StoppingCondition(maxRuntimeInSeconds=5 * 24 * 60 * 60).

  • output_data_config (OutputDataConfig | None) – Specifies the location for the output of the hybrid job. Default: OutputDataConfig(s3Path=f’s3://{default_bucket_name}/jobs/{job_name}/data’, kmsKeyId=None).

  • copy_checkpoints_from_job (str | None) – A str that specifies the hybrid job ARN whose checkpoint you want to use in the current hybrid job. Specifying this value copies over the checkpoint data from copy_checkpoints_from_job’s checkpoint_config s3Uri to the current hybrid job’s checkpoint_config s3Uri, making it available at checkpoint_config.localPath during the hybrid job execution. Default: None

  • checkpoint_config (CheckpointConfig | None) – Configuration that specifies the location where checkpoint data is stored. Default: CheckpointConfig(localPath=’/opt/jobs/checkpoints’, s3Uri=f’s3://{default_bucket_name}/jobs/{job_name}/checkpoints’).

  • aws_session (AwsSession | None) – AwsSession for connecting to AWS Services. Default: AwsSession()

  • tags (dict[str, str] | None) – Dict specifying the key-value pairs for tagging this hybrid job. Default: {}.

  • logger (Logger) – Logger object with which to write logs, such as hybrid job statuses while waiting for the hybrid job to reach a terminal state. Default is getLogger(__name__)

  • quiet (bool) – Sets the verbosity of the logger to low and does not report queue position. Default is False.

  • reservation_arn (str | None) – The reservation window ARN provided by Braket Direct to reserve exclusive usage of the device on which the hybrid job runs. Default: None.

Returns:

AwsQuantumJob – Hybrid job tracking the execution on Amazon Braket.

Raises:

ValueError – If the parameters are not valid.
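
For illustration, a minimal sketch of creating a hybrid job on the SV1 managed simulator (the script path and entry point below are hypothetical):

>>> from braket.aws import AwsQuantumJob
>>> job = AwsQuantumJob.create(
...     device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
...     source_module="algorithm_script.py",
...     entry_point="algorithm_script:start_here",
...     wait_until_complete=True,
... )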

property arn: str

The ARN (Amazon Resource Name) of the quantum hybrid job.

Type:

str

property name: str

The name of the quantum job.

Type:

str

state(use_cached_value: bool = False) str[source]

The state of the quantum hybrid job.

Parameters:

use_cached_value (bool) – If True, uses the most recently retrieved value from the Amazon Braket GetJob operation. If False, calls the GetJob operation to retrieve metadata, which also updates the cached value. Default: False.

Returns:

str – The value of status in metadata(). This is the value of the status key in the Amazon Braket GetJob operation.

See also

metadata()
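
For instance (the returned status is illustrative):

>>> job.state()                          # calls GetJob and refreshes the cached value
'RUNNING'
>>> job.state(use_cached_value=True)     # reuses the cached value
'RUNNING'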

queue_position() HybridJobQueueInfo[source]

The queue position details for the hybrid job.

Returns:

HybridJobQueueInfo – Instance of the HybridJobQueueInfo class representing the queue position information for the hybrid job. The queue position is returned only when the hybrid job is not in a RUNNING, CANCELLING, or terminal state; otherwise queue_position is returned as None. If the queue position of the hybrid job is greater than 15, ‘>15’ is returned as the queue_position value.

Examples

Job status is QUEUED and the position is 2 in the queue:

>>> job.queue_position()
HybridJobQueueInfo(queue_position='2', message=None)

Job status is QUEUED and the position is 18 in the queue:

>>> job.queue_position()
HybridJobQueueInfo(queue_position='>15', message=None)

Job status is COMPLETED:

>>> job.queue_position()
HybridJobQueueInfo(queue_position=None, message='Job is in COMPLETED status. Amazon Braket does not show queue position for this status.')

logs(wait: bool = False, poll_interval_seconds: int = 5) None[source]

Display logs for a given hybrid job, optionally tailing them until the hybrid job is complete.

If the output is a tty or a Jupyter cell, it will be color-coded based on which instance the log entry is from.

Parameters:
  • wait (bool) – True to keep looking for new log entries until the hybrid job completes; otherwise False. Default: False.

  • poll_interval_seconds (int) – The interval of time, in seconds, between polling for new log entries and hybrid job completion (default: 5).

Raises:

exceptions.UnexpectedStatusException – If waiting and the hybrid job fails.
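
For example, to tail the logs until the hybrid job finishes, polling every 10 seconds (job is an existing AwsQuantumJob instance):

>>> job.logs(wait=True, poll_interval_seconds=10)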

metadata(use_cached_value: bool = False) dict[str, Any][source]

Gets the hybrid job metadata defined in Amazon Braket.

Parameters:

use_cached_value (bool) – If True, uses the value most recently retrieved from the Amazon Braket GetJob operation, if it exists; if it does not exist, GetJob is called to retrieve the metadata. If False, always calls GetJob, which also updates the cached value. Default: False.

Returns:

dict[str, Any] – Dict that specifies the hybrid job metadata defined in Amazon Braket.
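
For example, reading the status key from the metadata (the value shown is illustrative):

>>> job.metadata()["status"]
'COMPLETED'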

metrics(metric_type: MetricType = MetricType.TIMESTAMP, statistic: MetricStatistic = MetricStatistic.MAX) dict[str, list[Any]][source]

Gets all the metrics data, where the keys are the column names and the values are lists containing the values in each row. For example, the table:

timestamp  energy
0          0.1
1          0.2

would be represented as {“timestamp”: [0, 1], “energy”: [0.1, 0.2]}. Values may be integers, floats, strings, or None.

Parameters:
  • metric_type (MetricType) – The type of metrics to get. Default: MetricType.TIMESTAMP.

  • statistic (MetricStatistic) – The statistic to determine which metric value to use when there is a conflict. Default: MetricStatistic.MAX.

Returns:

dict[str, list[Any]] – The metrics data.
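
For example, retrieving the metrics from the table above (“energy” is a hypothetical metric logged by the job’s script):

>>> data = job.metrics()
>>> data["timestamp"]
[0, 1]
>>> data["energy"]
[0.1, 0.2]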

cancel() str[source]

Cancels the job.

Returns:

str – Indicates the status of the job.

Raises:

ClientError – If there are errors invoking the CancelJob API.
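
For example (the returned status string is illustrative):

>>> job.cancel()
'CANCELLING'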

result(poll_timeout_seconds: float = 864000, poll_interval_seconds: float = 5) dict[str, Any][source]

Retrieves the hybrid job result persisted using the save_job_result function.

Parameters:
  • poll_timeout_seconds (float) – The polling timeout, in seconds, for result(). Default: 10 days.

  • poll_interval_seconds (float) – The polling interval, in seconds, for result(). Default: 5 seconds.

Returns:

dict[str, Any] – Dict specifying the job results.

Raises:
  • RuntimeError – if hybrid job is in a FAILED or CANCELLED state.

  • TimeoutError – if hybrid job execution exceeds the polling timeout period.
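
For example, with a one-hour polling timeout (the result key shown is hypothetical; the contents depend on what the job’s script passed to save_job_result):

>>> results = job.result(poll_timeout_seconds=3600)
>>> results   # hypothetical contents saved by the job script
{'my_metric': 0.42}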

download_result(extract_to: str | None = None, poll_timeout_seconds: float = 864000, poll_interval_seconds: float = 5) None[source]

Downloads the results from the hybrid job output S3 bucket and extracts the tar.gz bundle to the location specified by extract_to. If no location is specified, the results are extracted to the current directory.

Parameters:
  • extract_to (str | None) – The directory to which the results are extracted. The results are extracted to a folder titled with the hybrid job name within this directory. Default: current working directory.

  • poll_timeout_seconds (float) – The polling timeout, in seconds, for download_result(). Default: 10 days.

  • poll_interval_seconds (float) – The polling interval, in seconds, for download_result(). Default: 5 seconds.

Raises:
  • RuntimeError – if hybrid job is in a FAILED or CANCELLED state.

  • TimeoutError – if hybrid job execution exceeds the polling timeout period.
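
For example, extracting the result bundle into a hypothetical directory (the results land in a subfolder named after the hybrid job):

>>> job.download_result(extract_to="/tmp/braket-results", poll_timeout_seconds=3600)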