# Multi-node Search
This sample shows how to use Azure ML to do partial training on a GPU cluster during an Archai
`EvolutionParetoSearch`. By implementing the `AsyncModelEvaluator` interface, an
`AmlTrainingValAccuracy` evaluator is able to do partial training of many models in parallel across
a GPU cluster using [Azure ML
pipelines](https://learn.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-python-sdk),
where the result is the best possible model for `MNIST` classification. The same technique
works for any task; we only use MNIST here to minimize the compute cost of the GPU cluster.
The [multi_node_search](multi_node_search.ipynb) notebook uses your Azure storage account and your
Azure ML workspace. The Azure storage account will accumulate results while the search is in
progress.
The notebook first kicks off the master search pipeline which will look like this:

The data prep component simply copies the MNIST dataset to a pipeline blob store for use as input,
not only by this search job but also by all the partial training jobs to come.
The `EvolutionParetoSearch` will do 5 iterations, and during each iteration it will kick off a new AML
pipeline that trains the batch of models the `AmlTrainingValAccuracy` evaluator
needs validation accuracy numbers for. You can get the log output of the search job by
looking at "Outputs + logs" in the Azure Machine Learning Studio.
For example, the following partial training pipeline was created to do partial training of 36 models in a GPU cluster:

Azure ML automatically keeps the GPU cluster busy, feeding it models until they are all trained.
In this case we create an 8-node GPU cluster, so you will see the jobs completing in batches of 8.
You can see all the pipelines created dynamically by the search in your Azure ML Pipelines
dashboard. Below you see 5 partial training iterations followed by a final full training job;
since we have an 8-node GPU cluster doing the work, the entire job finished in 1 hour 14 minutes. It did
partial training on a total of 65 models, and full training on 32.

You can also run the cell titled `Plots` in the notebook multiple times to watch how the Pareto curves are
shaping up; you will see something like this after 5 iterations have completed.

When the search is finished, the next cell in the notebook will download the results, including a file named `models.json` that reports the best models found by the search:
```json
{
  "init_num_models": 10,
  "partial_training_epochs": 1,
  "full_training_epochs": 10,
  "id_bc52a6ee_4ef2_4327_8c8c_3bba2165c3ea": {
    "archid": "(1, 3, 32)",
    "val_acc": 0.3953
  },
  "id_1250cb3f_0fc9_4cad_a4f2_e627db5c66e8": {
    "archid": "(1, 1, 32)",
    "val_acc": 0.2237
  }
}
```
You can run the last cell of the notebook to compare the inference accuracy of the top
model using the ONNX runtime locally on the downloaded .onnx model.
You should see something like this:
```
Inference pass rate is 99.57 %.
How does this compare with the training validation accuracy of 0.9957
```
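For reference, the local check boils down to something like the sketch below. This is a hedged illustration, not the notebook's exact cell: the model file name, input shape, and the use of random data in place of real MNIST images are all assumptions.

```python
import numpy as np
import onnxruntime as ort

# Load the downloaded model and run one MNIST-shaped input through it.
session = ort.InferenceSession("model.onnx")  # assumed file name
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 1, 28, 28).astype(np.float32)  # stand-in for a real MNIST image
logits = session.run(None, {input_name: x})[0]
print("predicted digit:", int(logits.argmax()))
```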
## Design
So how does all this work? The overall architecture looks like this:

The main search pipeline launches the [search.py](scripts/search.py) script on a cheap CPU virtual
machine, since the search itself doesn't need much horsepower. This search pipeline starts with a quick data-prep
step to ensure our dataset is in a shared Azure blob store, then it does the Archai search. When
the search is complete we have a `pareto.json` file containing the ids of the best models, and we
then kick off a full training run for those models using a child pipeline.
The `search.py` script plugs [aml_training_evaluator.py](scripts/aml_training_evaluator.py)
into the `EvolutionParetoSearch` objectives, and the `AmlTrainingValAccuracy` is an
`AsyncModelEvaluator`, meaning that the search algorithm calls `send` to pass in each model in the
current iteration, then calls `fetch_all` to get the results.
It is the `fetch_all` method that creates a new AML pipeline for that iteration,
dynamically adding a partial training job for each model; each one of those commands
invokes [train.py](scripts/train.py). `train.py` uses [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) to train the model, which means the model is a `LightningModule`.
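In sketch form, the contract looks roughly like this. It is a simplified illustration of the `send`/`fetch_all` protocol described above, not the real `AmlTrainingValAccuracy` (the actual Archai interface signatures may differ slightly, and `submit_partial_training_pipeline` is a hypothetical helper):

```python
from typing import Any, List, Optional

class AsyncAccuracySketch:
    """Queue up models as they are sent, then train them all in one AML pipeline."""

    def __init__(self):
        self.pending: List[Any] = []

    def send(self, arch, budget: Optional[float] = None) -> None:
        # Called once per model in the current iteration; just remember it.
        self.pending.append(arch)

    def fetch_all(self) -> List[Optional[float]]:
        # Called once per iteration: submit a single pipeline with one partial
        # training job per queued model, wait, and return validation accuracies.
        accuracies = submit_partial_training_pipeline(self.pending)  # hypothetical helper
        self.pending = []
        return accuracies
```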
But how does each partial training job send results back to the master search pipeline,
you might ask? Great question, I'm glad you asked. The training jobs are given access
to the Azure storage account, where there is a `status` table that they write to and
a `models` blob store they can write the trained .onnx files to.
You can look at this status table to get a good idea of what is going on:

Here you can see some models are still `training` and some are complete (`trained`).
You can also see what the model architecture is for each one. When a job is finished
it will have a populated `val_acc` column containing the validation accuracy reported
by the partial training.
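As a rough illustration of the reporting side, a training job could upsert its row with the `azure-data-tables` package along the following lines. The table name, keys, and environment variable here are assumptions based on this tutorial; the actual scripts use the `ArchaiStore` helper instead.

```python
import os
from azure.data.tables import TableServiceClient, UpdateMode

service = TableServiceClient.from_connection_string(os.environ["MODEL_STORAGE_CONNECTION_STRING"])
table = service.create_table_if_not_exists("status")  # assumed table name

# Upsert this job's row; the search side can poll for status == "trained".
table.upsert_entity(
    {
        "PartitionKey": "main",  # assumed partitioning scheme
        "RowKey": "id_bc52a6ee_4ef2_4327_8c8c_3bba2165c3ea",
        "status": "trained",
        "val_acc": 0.3953,
    },
    mode=UpdateMode.MERGE,
)
```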
The `AmlTrainingValAccuracy` evaluator collects these numbers and returns them to the
search algorithm, which uses them to select the best models to iterate on.
## Scripts
The code behind this notebook is organized into the following folders:
### data_prep
`prep_data_store.py` contains the code that runs in the data prep Azure ML pipeline component.
Its job in life is to move the training dataset from the `MnistDatasetProvider` into the cloud using
the mounted Azure blob store location given as a command line argument. Azure ML makes this look
like a local folder, but in fact it is automatically copied to an Azure blob store by Azure ML because
the path has `mode="rw_mount"` in the pipeline component definition. This is a cool feature of Azure ML
that makes it super simple to test all these scripts locally, but then they all "just work" when
running in the cloud.
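For example, with the Azure ML Python SDK v2 a component output with this mount behavior can be declared roughly as follows (the command and names here are illustrative, not the exact code from the notebook):

```python
from azure.ai.ml import Output, command

# A data prep command whose output folder is mounted read/write; whatever the
# script writes there is automatically persisted to the pipeline blob store.
data_prep = command(
    command="python prep_data_store.py --path ${{outputs.data}}",
    code="data_prep",
    environment="azureml:aml-archai:0.0.1",  # assumed environment name
    outputs={"data": Output(type="uri_folder", mode="rw_mount")},
)
```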
This script is in a different folder from the other scripts because that ensures maximum reuse
of the output dataset while you develop your other training scripts. Oftentimes those need
more debugging, and this saves on cloud compute by maximizing the reuse of this node in each
submitted Azure ML pipeline.
### scripts
`aml_training_evaluator.py` provides the class `AmlTrainingValAccuracy` which implements the Archai
`AsyncModelEvaluator` interface. This is the core of this entire tutorial, showing how to use the
`AsyncModelEvaluator` interface to get parallel training happening on an Azure ML GPU cluster.
`commands.py` provides some of the Azure ML pipeline `command`s used in the
[multi_node_search](multi_node_search.ipynb) notebook, moved here just so the notebook doesn't
become overly verbose.
`mnist_data_module.py` provides a PyTorch `LightningDataModule` wrapper on the `MnistDatasetProvider`
so we can use the PyTorch Lightning `Trainer`.
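A minimal sketch of what such a wrapper looks like, assuming the standard torchvision MNIST dataset in place of the actual `MnistDatasetProvider`:

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST

class MnistDataModuleSketch(pl.LightningDataModule):
    def __init__(self, root: str, batch_size: int = 64):
        super().__init__()
        self.root = root
        self.batch_size = batch_size

    def setup(self, stage=None):
        t = transforms.ToTensor()
        self.train_set = MNIST(self.root, train=True, download=True, transform=t)
        self.val_set = MNIST(self.root, train=False, download=True, transform=t)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)
```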
`model.py` is our configurable PyTorch CNN model that is also a `LightningModule`. This class builds
different model architectures during the Archai search.
`monitor.py` provides a `JobCompletionMonitor` class that monitors the status of training jobs in
our Azure Storage Table, returning all the validation accuracies when the jobs are complete.
This is used by the `AmlTrainingValAccuracy` model evaluator.
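Conceptually, the monitor is just a polling loop over that status table. A hedged sketch (the real `JobCompletionMonitor` API differs, and `get_status` here is a hypothetical row lookup):

```python
import time

def wait_for_all(get_status, model_ids, poll_seconds=60):
    """Poll until every model id reports 'trained', then return accuracies in order."""
    results = {}
    while True:
        for model_id in model_ids:
            entity = get_status(model_id)  # hypothetical: reads one row of the status table
            if entity and entity.get("status") == "trained":
                results[model_id] = float(entity["val_acc"])
        if len(results) == len(model_ids):
            return [results[m] for m in model_ids]
        time.sleep(poll_seconds)
```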
`search.py` is the main Archai search component that will run in our Azure ML pipeline. It
uses the `AmlTrainingValAccuracy` model evaluator as one search objective among others. It is also
designed so it can run locally for debugging, with the training still happening in the cloud.
`train.py` is the model training script that uses the PyTorch Lightning `Trainer` to train a
specific model architecture. It is used across all the nodes of your GPU cluster to train as
many jobs in parallel as you want. It reports status to the Azure Storage Table so that the
`JobCompletionMonitor` knows when each job finishes and what the validation accuracy was.
`training_pipeline.py` can start an entirely new Azure ML pipeline for doing some
parallel training of models, where each training component uses `train.py`. This same script is
used both to kick off partial training from the `AmlTrainingValAccuracy` and to run the final full
training performed in the main pipeline defined in the [multi_node_search](multi_node_search.ipynb) notebook.
`utils.py` contains a few small helper functions used by the notebook, moved here just to reduce the
notebook size.
archai/docs/advanced_guide/cloud/azure/notebooks/multi_node_search/readme.md
display_name: Search with Transformer-Flex
type: command

compute: nas-cpu-cluster-D14-v2

inputs:
  model_type: gpt2
  num_iters: 2
  init_num_models: 5
  num_random_mix: 5
  max_unseen_population: 100
  mutations_per_parent: 1
  num_crossovers: 5
  seed: 123

outputs:
  output_dir:
    type: uri_folder

code: .
environment: azureml:aml-archai:0.0.1

command: >-
  python search.py
  --model_type ${{inputs.model_type}}
  --num_iters ${{inputs.num_iters}}
  --init_num_models ${{inputs.init_num_models}}
  --num_random_mix ${{inputs.num_random_mix}}
  --max_unseen_population ${{inputs.max_unseen_population}}
  --mutations_per_parent ${{inputs.mutations_per_parent}}
  --num_crossovers ${{inputs.num_crossovers}}
  --seed ${{inputs.seed}}
  --output_dir ${{outputs.output_dir}}
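# Note: one hedged way to submit this spec outside a notebook is the Azure ML
# Python SDK v2; the workspace identifiers below are placeholders:
#
#   from azure.ai.ml import MLClient, load_job
#   from azure.identity import DefaultAzureCredential
#   ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")
#   ml_client.jobs.create_or_update(load_job("search.yaml"))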
archai/docs/advanced_guide/cloud/azure/notebooks/text_generation/src/search.yaml
First Time Contributor
======================
We are excited to have you as a contributor to our platform. Most contributions require you to agree to a `Contributor License Agreement (CLA) <https://cla.microsoft.com>`_ declaring that you have the right to, and actually do, grant us the rights to use your contribution.
If this is your first time contributing to Archai, please follow these guidelines:
#. Familiarize yourself with the project's codebase and community guidelines. This can be done by reading through the documentation and exploring the codebase on GitHub.
#. Create a new branch for your changes. This will allow you to easily submit your code as a pull request when you are finished.
#. Follow the project's style guidelines. This may include specific formatting or naming conventions.
#. Test your code thoroughly. This may involve writing unit tests or manually testing the functionality.
#. Document your code. This may include in-line documentation as well as updating relevant documentation files.
#. Submit your code as a pull request. Include a clear and concise description of the changes made and any relevant issue references. When you submit a pull request, a CLA-bot will automatically determine if you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
#. Be open to feedback and iteration on your code. It is common for pull requests to go through multiple rounds of review before being merged.
We appreciate your contributions and look forward to working with you to continue the impactful progress of Archai as a positive feedback loop between the engineering and research aspects of NAS. This project has adopted the `Microsoft Open Source Code of Conduct <https://opensource.microsoft.com/codeofconduct/>`_. For more information see the `Code of Conduct FAQ <https://opensource.microsoft.com/codeofconduct/faq/>`_ or contact `[email protected] <mailto:[email protected]>`_ with any additional questions or comments.
archai/docs/contributing/first_contribution.rst
<jupyter_start><jupyter_code>from random import Random
import torch
from torch import nn<jupyter_output><empty_output><jupyter_text>Config Search Spaces As seen before, discrete search spaces in Archai are defined using the `DiscreteSearchSpace` abstract class. This tutorial shows how to use the Config Search Space API, which allows building search spaces automatically without having to subclass `DiscreteSearchSpace`. Let's first start with a simple PyTorch model<jupyter_code>class MyConvBlock(nn.Module):
def __init__(self, in_ch: int, out_ch: int, kernel_size=3):
super().__init__()
self.op = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=kernel_size, padding='same'),
nn.BatchNorm2d(out_ch),
nn.ReLU()
)
def forward(self, x):
return self.op(x)
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.stem_conv = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=3, stride=4, padding=1),
nn.BatchNorm2d(32),
nn.ReLU()
)
self.layers = nn.Sequential(*[
MyConvBlock(32, 32)
for i in range(5)
])
def forward(self, x):
return self.layers(self.stem_conv(x))
model = MyModel()
x = torch.randn(2, 3, 64, 64)
model.forward(x).shape<jupyter_output><empty_output><jupyter_text>Creating an `ArchParamTree` To turn this model into a search space, first we need to define an `ArchParamTree` with the architecture parameters we want to search<jupyter_code>from archai.discrete_search.search_spaces.config import ArchParamTree, ArchConfig, DiscreteChoice
arch_param_tree = {
'conv_kernel_size': DiscreteChoice([3, 5, 7]),
'num_ch': DiscreteChoice([8, 16, 32]),
'num_layers': DiscreteChoice(range(1, 6))
}
arch_param_tree = ArchParamTree(arch_param_tree)<jupyter_output><empty_output><jupyter_text>`ArchParamTree` objects are used to generate `ArchConfig` objects that specify the chosen architecture configuration. We can sample a configuration using `arch_param_tree.sample_config()`<jupyter_code>arch_config = arch_param_tree.sample_config()
arch_config<jupyter_output><empty_output><jupyter_text>ArchConfig objects behave like dictionaries. To get the value of an arch parameter, just call `arch_config.pick(parameter_name)`<jupyter_code>arch_config.pick('conv_kernel_size')
arch_config.pick('num_ch')
arch_config.to_dict()<jupyter_output><empty_output><jupyter_text>Let's use this in our Pytorch Model definition:<jupyter_code>class MyModel(nn.Module):
# **We add arch_config as the first parameter of the module**
def __init__(self, arch_config: ArchConfig):
super().__init__()
# **We call arch_config.pick('num_ch')**
num_ch = arch_config.pick('num_ch')
self.stem_conv = nn.Sequential(
nn.Conv2d(3, num_ch, kernel_size=3, stride=4, padding=1),
nn.BatchNorm2d(num_ch),
nn.ReLU()
)
self.layers = nn.Sequential(*[
# **We pick the kernel size and number of layers**
MyConvBlock(num_ch, num_ch, kernel_size=arch_config.pick('conv_kernel_size'))
for i in range(arch_config.pick('num_layers'))
])
def forward(self, x):
return self.layers(self.stem_conv(x))
model = MyModel(arch_config)
model<jupyter_output><empty_output><jupyter_text>To get an Archai DiscreteSearchSpace, we just pass `MyModel` and `arch_param_tree` to `ConfigSearchSpace`:<jupyter_code>from archai.discrete_search.search_spaces.config import ConfigSearchSpace
search_space = ConfigSearchSpace(MyModel, arch_param_tree, mutation_prob=0.3)<jupyter_output><empty_output><jupyter_text>All the methods from `DiscreteSearchSpace`, `EvolutionarySearchSpace` and `BayesOptSearchSpace` are automatically implemented.<jupyter_code># Randomly samples a model
m = search_space.random_sample()
print(m.archid)
# Mutates a model
m2 = search_space.mutate(m)
print(m2.archid)
# Crossover
m3 = search_space.crossover([search_space.random_sample(), search_space.random_sample()])
print(m3.archid)
# Encode
print(search_space.encode(m3))<jupyter_output>307525215b21f510fb6ba1570c71126274e60167
307525215b21f510fb6ba1570c71126274e60167
cc3aba2e903b62619035a871ff3bcdc65dc151de
[3. 8. 1.]<jupyter_text>Saving and loading<jupyter_code>search_space.save_arch(m3, 'arch.json')
m = search_space.load_arch('arch.json')
!cat arch.json<jupyter_output>{
"conv_kernel_size": 3,
"num_ch": 8,
"num_layers": 1
}<jupyter_text>We can now use this with any Archai search algorithm and objective! More features of ArchParamTrees Nesting dictionaries inside an `ArchParamTree`<jupyter_code>arch_param_tree = {
# Stem convolution architecture
'stem_config': {
'kernel_size': DiscreteChoice([3, 5, 7])
},
'conv_kernel_size': DiscreteChoice([3, 5, 7]),
'num_ch': DiscreteChoice([8, 16, 32])
}
arch_param_tree = ArchParamTree(arch_param_tree)
c = arch_param_tree.sample_config()
c<jupyter_output><empty_output><jupyter_text>Calling `c.pick` for a parameter containing a dictionary returns a new `ArchConfig` object for that dictionary<jupyter_code>c.pick('stem_config')
c.pick('stem_config').pick('kernel_size')<jupyter_output><empty_output><jupyter_text>Sharing architecture parameters We can share configuration of different parts of the architecture by re-using references<jupyter_code>kernel_size_choice = DiscreteChoice([3, 5, 7])
arch_param_tree = {
'stem_config': {
'kernel_size': kernel_size_choice
},
'conv_kernel_size': kernel_size_choice,
'num_ch': DiscreteChoice([8, 16, 32])
}
arch_param_tree = ArchParamTree(arch_param_tree)<jupyter_output><empty_output><jupyter_text>`conv_kernel_size` is now always equal to `stem_config.kernel_size`<jupyter_code>arch_param_tree.sample_config()
arch_param_tree.sample_config()<jupyter_output><empty_output><jupyter_text>Re-using references of entire dictionaries also works<jupyter_code>stem_config = {
'kernel_size': DiscreteChoice([3, 5, 7]),
'stride': DiscreteChoice([2, 4])
}
arch_param_tree = {
'block1': stem_config,
'block2': stem_config,
'block3': stem_config
}
arch_param_tree = ArchParamTree(arch_param_tree)
arch_param_tree.sample_config()<jupyter_output><empty_output><jupyter_text>Repeating configs a variable number of times We can repeat a block of arch parameters using the `repeat_config` function<jupyter_code>from archai.discrete_search.search_spaces.config import repeat_config
arch_param_tree = ArchParamTree({
'layers': repeat_config({
'kernel_size': DiscreteChoice([1, 3, 5, 7]),
'residual': DiscreteChoice([False, True]),
'act_fn': DiscreteChoice(['relu', 'gelu'])
}, repeat_times=[0, 1, 2], share_arch=False)
})<jupyter_output><empty_output><jupyter_text>ArchParamTree will stack 0, 1, or 2 configs inside ``layers`` in an `ArchConfigList` object<jupyter_code>c = arch_param_tree.sample_config(rng=Random(1))
c
print(len(c.pick('layers')))
c = arch_param_tree.sample_config(rng=Random(2))
c.pick('layers')
print(len(c.pick('layers')))<jupyter_output>2<jupyter_text>We can select a config from an `ArchConfigList` by selecting the index of the layer we want<jupyter_code># Picks the config of the second layer
print(c.pick('layers')[1])
# Picks the kernel size of the second layer
kernel_size = c.pick('layers')[1].pick('kernel_size')
print(f'kernel_size = {kernel_size}')<jupyter_output>ArchConfig({
"kernel_size": 7,
"residual": true,
"act_fn": "gelu"
})
kernel_size = 7<jupyter_text>We can also iterate on an `ArchConfigList` object<jupyter_code>config = arch_param_tree.sample_config(rng=Random(5))
modules = [
nn.Conv2d(16, 16, kernel_size=layer_conf.pick('kernel_size'))
for layer_conf in config.pick('layers')
]
modules<jupyter_output><empty_output><jupyter_text>We can make the architectures parameters the same for each layer by setting `share_arch=True`,<jupyter_code>arch_param_tree = ArchParamTree({
'layers': repeat_config({
'kernel_size': DiscreteChoice([1, 3, 5, 7]),
'residual': DiscreteChoice([False, True]),
'act_fn': DiscreteChoice(['relu', 'gelu'])
}, repeat_times=[2, 3], share_arch=True)
})
arch_param_tree.sample_config()<jupyter_output><empty_output><jupyter_text>Example: Building an Image Classification Search Space. Let's use the features described above to build the following search space for image classification. We can build this succinctly using the `repeat_config` function<jupyter_code>arch_param_tree = ArchParamTree({
'base_num_channels': DiscreteChoice([8, 16, 32, 64]),
'downsample_blocks': repeat_config({
'max_pool_kernel_size': DiscreteChoice([2, 3]),
'channel_multiplier': DiscreteChoice([1.0, 1.2, 1.4, 1.6, 1.8, 2.0]),
'convs': repeat_config({
'kernel_size': DiscreteChoice([3, 5, 7]),
'act_fn': DiscreteChoice(['relu', 'gelu']),
}, repeat_times=[1, 2, 3, 4, 5], share_arch=False)
}, repeat_times=[1, 2, 3], share_arch=False)
})
# We may want to reduce the search space size by sharing some of the architecture params
# using share_arch=True.
class MyConvBlock(nn.Module):
def __init__(self, arch_config: ArchConfig, in_ch: int, out_ch: int):
super().__init__()
self.op = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=arch_config.pick('kernel_size'),
padding='same'),
nn.BatchNorm2d(out_ch),
nn.ReLU() if arch_config.pick('act_fn') == 'relu' else nn.GELU()
)
def forward(self, x):
return self.op(x)
class MyModel(nn.Module):
def __init__(self, arch_config: ArchConfig, stem_stride: int = 2):
super().__init__()
self.base_ch = arch_config.pick('base_num_channels')
self.stem_conv = nn.Sequential(
nn.Conv2d(3, self.base_ch, kernel_size=3, stride=stem_stride, padding=1),
nn.BatchNorm2d(self.base_ch),
nn.ReLU()
)
self.layers = []
current_ch = self.base_ch
for block_cfg in arch_config.pick('downsample_blocks'):
next_ch = int(block_cfg.pick('channel_multiplier') * current_ch)
for i, conv_cfg in enumerate(block_cfg.pick('convs')):
self.layers.append(
MyConvBlock(
conv_cfg,
in_ch=(current_ch if i == 0 else next_ch),
out_ch=next_ch
)
)
self.layers.append(
nn.MaxPool2d(kernel_size=block_cfg.pick('max_pool_kernel_size'))
)
current_ch = next_ch
self.layers = nn.Sequential(*self.layers)
def forward(self, x):
return self.layers(self.stem_conv(x))
config = arch_param_tree.sample_config()
model = MyModel(config, stem_stride=2)
model(torch.randn(10, 3, 240, 240)).shape<jupyter_output><empty_output><jupyter_text>We can check the search space size by calling `arch_param_tree.num_archs`<jupyter_code>arch_param_tree.num_archs<jupyter_output><empty_output><jupyter_text>Now let's turn `MyModel` into a search space object that can be used in Archai<jupyter_code>ss = ConfigSearchSpace(
MyModel, arch_param_tree,
model_kwargs={"stem_stride": 2} # additional kwargs will be passed to MyModel.__init__()
)
m = ss.random_sample()
m2 = ss.mutate(m)
# now we can use this search space with any Archai search algorithm
print(m2.archid)<jupyter_output>d56a2b2d01f75d3f21824f89e5761b4608e6f18e<jupyter_text>Tracking used architecture parameters for model de-duplication Consider the following example:<jupyter_code>arch_param_tree = ArchParamTree({
'op_type': DiscreteChoice(['identity', 'conv']),
'conv_kernel_size': DiscreteChoice([1, 3, 5, 7])
})
class MyOperation(nn.Module):
def __init__(self, arch_config: ArchConfig, in_ch):
super().__init__()
self.op_type = arch_config.pick('op_type')
if arch_config.pick('op_type') == 'conv':
self.op = nn.Sequential(
nn.Conv2d(
in_ch, in_ch,
kernel_size=arch_config.pick('conv_kernel_size'),
padding='same',
),
nn.BatchNorm2d(in_ch),
nn.ReLU(),
)
def forward(self, x):
if self.op_type == 'identity':
return x
        return self.op(x)<jupyter_output><empty_output><jupyter_text>Notice that when `op_type="identity"` the value of `conv_kernel_size` is not used at all. That means that our search space might not know that the architectures encoded by `("identity", 3)` and `("identity", 7)` are in fact the same architecture! That can become a huge problem given that each architecture evaluation can be expensive. To avoid that, each `ArchConfig` object automatically tracks when an architecture parameter was used with the `.pick` method. For instance:<jupyter_code>c = arch_param_tree.sample_config()
c<jupyter_output><empty_output><jupyter_text>`ArchConfig.get_used_params()` returns the usage dictionary of this `ArchConfig` object.<jupyter_code>c.get_used_params()<jupyter_output><empty_output><jupyter_text>Let's pick a parameter now<jupyter_code>c.pick('op_type')
c.get_used_params()<jupyter_output><empty_output><jupyter_text>This is automatically handled by the ConfigSearchSpace object when generating architecture ids, which allows deduplicating architectures<jupyter_code>ss = ConfigSearchSpace(
MyOperation, arch_param_tree, model_kwargs={"in_ch": 16}, seed=8
)<jupyter_output><empty_output><jupyter_text>Non-used architecture parameters will be encoded using the value passed to `unused_param_value` (NaN, in our case)<jupyter_code>m1 = ss.random_sample()
print(f'm1 config = {m1.metadata["config"]}')
print(f'm1 archid = {m1.archid}')
m2 = ss.random_sample()
print(f'm2 config = {m2.metadata["config"]}')
print(f'm2 archid = {m2.archid}')<jupyter_output>m2 config = ArchConfig({
"op_type": "identity",
"conv_kernel_size": 5
})
m2 archid = 260c332c6fc8c6c976736a379f3ae1ac439afd74<jupyter_text>Notice how `m1` and `m2` have different values for `conv_kernel_size`, but since `op_type='identity'` both are mapped to the same architecture id. To turn this feature off, you can either selectively call `config.pick(param_name, record_usage=False)`, or set `ConfigSearchSpace(..., track_unused_params=False)`. This feature is also automatically used when generating architecture encodings for surrogate models, to make sure equivalent architectures are correctly mapped to the same representation:<jupyter_code>ss.encode(m1)
ss.encode(m2)<jupyter_output><empty_output>
archai/docs/getting_started/notebooks/discrete_search/config_search.ipynb
PyTorch Profiler (Utilities)
============================
Evaluator
---------
.. automodule:: archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_eval
:members:
:undoc-members:
Hooks
-----
.. automodule:: archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_hooks
:members:
:undoc-members:
Model
-----
.. automodule:: archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_model
:members:
:undoc-members:
archai/docs/reference/api/archai.discrete_search.evaluators.pt_profiler_utils.rst
Natural Language Processing
===========================
Modules
-------
.. automodule:: archai.quantization.nlp.modules
:members:
:undoc-members:
archai/docs/reference/api/archai.quantization.nlp.rst
Neural Architecture Search
==========================
Architecture Module
-------------------
.. automodule:: archai.supergraph.nas.arch_module
:members:
:undoc-members:
Architecture Parameters
-----------------------
.. automodule:: archai.supergraph.nas.arch_params
:members:
:undoc-members:
Architecture Trainer
--------------------
.. automodule:: archai.supergraph.nas.arch_trainer
:members:
:undoc-members:
Cell
----
.. automodule:: archai.supergraph.nas.cell
:members:
:undoc-members:
DAG Edge
--------
.. automodule:: archai.supergraph.nas.dag_edge
:members:
:undoc-members:
Evaluater
---------
.. automodule:: archai.supergraph.nas.evaluater
:members:
:undoc-members:
Experiment Runner
-----------------
.. automodule:: archai.supergraph.nas.exp_runner
:members:
:undoc-members:
Finalizers
----------
.. automodule:: archai.supergraph.nas.finalizers
:members:
:undoc-members:
Model
-----
.. automodule:: archai.supergraph.nas.model
:members:
:undoc-members:
Model Description
-----------------
.. automodule:: archai.supergraph.nas.model_desc
:members:
:undoc-members:
Model Description Builder
-------------------------
.. automodule:: archai.supergraph.nas.model_desc_builder
:members:
:undoc-members:
NAS-Based Utilities
-------------------
.. automodule:: archai.supergraph.nas.nas_utils
:members:
:undoc-members:
Operations
----------
.. automodule:: archai.supergraph.nas.operations
:members:
:undoc-members:
Random Finalizers
-----------------
.. automodule:: archai.supergraph.nas.random_finalizers
:members:
:undoc-members:
Search Combinations
-------------------
.. automodule:: archai.supergraph.nas.search_combinations
:members:
:undoc-members:
Searcher
--------
.. automodule:: archai.supergraph.nas.searcher
:members:
:undoc-members:
Model Description Visualizer
----------------------------
.. automodule:: archai.supergraph.nas.vis_model_desc
:members:
:undoc-members:
archai/docs/reference/api/archai.supergraph.nas.rst
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
""" Script to prepare sport8 dataset for pytorch dataloader. On Windows requires installation of WinRAR
from here: https://www.rarlab.com/download.htm and adding path of unrar.exe to PATH environment variable.
"""
import os
import sys
import subprocess
from collections import defaultdict
from typing import Dict, List
import dataset_utils
try:
import pyunpack
except ImportError:
subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'pyunpack'])
import pyunpack
from torchvision.datasets import utils as tvutils
from archai.common import utils
def load_csv_data(filename: str) -> Dict[str, List[str]]:
"""Loads the data in csv files into a dictionary with
class names as keys and list of image names as values."""
data_dict = defaultdict(list)
with open(filename, "r") as f:
lines = f.readlines()
assert len(lines) > 0
for line in lines[1:]:
words = line.rstrip().split(",")
assert len(words) > 0
data_dict[words[0]] = words[1:]
return data_dict
def dataset_valid(dataroot: str) -> bool:
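    """Return True if the sport8 dataset already exists under dataroot with the expected train/test file counts."""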
sport8 = os.path.join(dataroot, "sport8")
train = os.path.join(sport8, "train")
test = os.path.join(sport8, "test")
meta = os.path.join(sport8, "meta")
if not os.path.isdir(sport8) or not os.path.isdir(train) or not os.path.isdir(test) or not os.path.isdir(meta):
return False
num_train_files = 0
for base, dirs, files in os.walk(train):
for file in files:
num_train_files += 1
if num_train_files != 1261:
return False
num_test_files = 0
for base, dirs, files in os.walk(test):
for file in files:
num_test_files += 1
if num_test_files != 318:
return False
# all checks passed
return True
def download(dataroot: str):
DOWNLOAD_URL = "https://vision.stanford.edu/lijiali/event_dataset/event_dataset.rar"
# make sport8 directory
sport8 = utils.full_path(os.path.join(dataroot, "sport8"))
meta = utils.full_path(os.path.join(sport8, "meta"))
os.makedirs(sport8, exist_ok=True)
os.makedirs(meta, exist_ok=True)
dir_downloads = utils.dir_downloads()
filename = os.path.basename(DOWNLOAD_URL)
archive = os.path.join(dir_downloads, filename)
if not os.path.isfile(archive):
tvutils.download_url(DOWNLOAD_URL, dir_downloads, filename)
print(f"Extracting {archive} to {sport8}")
pyunpack.Archive(archive).extractall(sport8)
# download the csv files for the train and test split
# from 'NAS Evaluation is Frustrating' repo
# note that download_url doesn't work in vscode debug mode
test_file_url = "https://raw.githubusercontent.com/antoyang/NAS-Benchmark/master/data/Sport8_test.csv"
train_file_url = "https://raw.githubusercontent.com/antoyang/NAS-Benchmark/master/data/Sport8_train.csv"
tvutils.download_url(test_file_url, meta, filename=None, md5=None)
tvutils.download_url(train_file_url, meta, filename=None, md5=None)
return sport8, meta
def copy_data_helper(data: Dict[str, List[str]], imagesroot: str, foldername: str) -> None:
for key in data.keys():
images = data[key]
for im in images:
if not im:
continue
source = os.path.join(imagesroot, key, im)
target = os.path.join(foldername, key, im)
if not os.path.isfile(target):
utils.copy_file(source, target)
def prepare_data(sport8: str, meta: str):
test_file = os.path.join(meta, "Sport8_test.csv")
test_data = load_csv_data(test_file)
train_file = os.path.join(meta, "Sport8_train.csv")
train_data = load_csv_data(train_file)
train = os.path.join(sport8, "train")
test = os.path.join(sport8, "test")
os.makedirs(train, exist_ok=True)
os.makedirs(test, exist_ok=True)
# make classname directories for train and test
for key in test_data.keys():
os.makedirs(os.path.join(sport8, "test", key), exist_ok=True)
os.makedirs(os.path.join(sport8, "train", key), exist_ok=True)
# copy images to the right locations
imagesroot = os.path.join(sport8, "event_img")
testfoldername = os.path.join(sport8, "test")
copy_data_helper(test_data, imagesroot, testfoldername)
trainfoldername = os.path.join(sport8, "train")
copy_data_helper(train_data, imagesroot, trainfoldername)
if __name__ == "__main__":
dataroot = dataset_utils.get_dataroot()
# check that dataset is in format required
# else download and prepare dataset
if not dataset_valid(dataroot):
# this step will create folder sport8/event_img
# which has all the images for each class in its own subfolder
        sport8, meta = download(dataroot)
prepare_data(sport8, meta)
archai/scripts/supergraph/download_datasets/sport8_install.py
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
"""
Without cudnn setup, requires_grad=False:
3: 0.90ms for 1000 calls [stddev: 9.08, min: 0.49, max: 287.68]
4: 0.57ms for 1000 calls [stddev: 0.16, min: 0.48, max: 3.89]
6: 0.32ms for 1000 calls [stddev: 0.07, min: 0.27, max: 1.22]
5: 0.32ms for 1000 calls [stddev: 0.06, min: 0.27, max: 0.56]
0: 0.29ms for 1000 calls [stddev: 0.09, min: 0.19, max: 1.87]
1: 0.19ms for 1000 calls [stddev: 0.05, min: 0.16, max: 1.19]
7: 0.09ms for 1000 calls [stddev: 0.02, min: 0.07, max: 0.16]
2: 0.05ms for 1000 calls [stddev: 0.01, min: 0.04, max: 0.11]
With cudnn setup, requires_grad=False:
3: 0.86ms for 1000 calls [stddev: 8.12, min: 0.53, max: 257.40]
4: 0.54ms for 1000 calls [stddev: 0.06, min: 0.49, max: 0.91]
0: 0.31ms for 1000 calls [stddev: 0.04, min: 0.22, max: 1.03]
5: 0.30ms for 1000 calls [stddev: 0.03, min: 0.27, max: 0.52]
6: 0.30ms for 1000 calls [stddev: 0.03, min: 0.27, max: 0.51]
1: 0.19ms for 1000 calls [stddev: 0.05, min: 0.17, max: 1.48]
7: 0.09ms for 1000 calls [stddev: 0.01, min: 0.08, max: 0.17]
2: 0.05ms for 1000 calls [stddev: 0.01, min: 0.04, max: 0.10]
With cudnn setup, requires_grad=True:
torch.Size([64, 16, 32, 32])7: 0.10ms for 1000 calls [stddev: 0.03, min: 0.09, max: 0.51]
torch.Size([64, 16, 32, 32])6: 0.31ms for 1000 calls [stddev: 0.06, min: 0.27, max: 1.88]
torch.Size([64, 16, 32, 32])5: 0.31ms for 1000 calls [stddev: 0.07, min: 0.27, max: 2.16]
torch.Size([64, 16, 32, 32])4: 0.55ms for 1000 calls [stddev: 0.08, min: 0.49, max: 1.38]
torch.Size([64, 16, 32, 32])3: 0.86ms for 1000 calls [stddev: 9.35, min: 0.49, max: 296.30]
torch.Size([64, 16, 32, 32])2: 0.06ms for 1000 calls [stddev: 0.01, min: 0.05, max: 0.21]
torch.Size([64, 16, 32, 32])1: 0.20ms for 1000 calls [stddev: 0.06, min: 0.18, max: 1.70]
torch.Size([64, 16, 32, 32])0: 0.23ms for 1000 calls [stddev: 0.05, min: 0.20, max: 1.40]
forward: 2.78ms for 1000 calls [stddev: 9.43, min: 2.22, max: 300.59]
# PT-DARTS
forward: 2.70ms for 560 calls [stddev: 0.39, min: 2.22, max: 4.47]
3: 0.62ms for 560 calls [stddev: 0.09, min: 0.52, max: 1.14]
4: 0.61ms for 560 calls [stddev: 0.09, min: 0.51, max: 1.08]
6: 0.33ms for 560 calls [stddev: 0.05, min: 0.28, max: 0.76]
5: 0.33ms for 560 calls [stddev: 0.05, min: 0.28, max: 0.91]
0: 0.24ms for 560 calls [stddev: 0.04, min: 0.20, max: 0.40]
1: 0.21ms for 560 calls [stddev: 0.03, min: 0.18, max: 0.37]
2: 0.12ms for 560 calls [stddev: 0.17, min: 0.04, max: 1.74]
7: 0.10ms for 560 calls [stddev: 0.02, min: 0.08, max: 0.36]
# PT_DARTS
forward: 3.22ms for 560 calls [stddev: 0.84, min: 2.36, max: 8.14]
12/13 12:51:38 AM | Train: [ 1/50] Step 001/390 Loss 2.321 Prec@(1,5) (12.5%, 60.9%)
timebudget report...
torch.Size([64, 64, 8, 8])7: 0.12ms for 170 calls [stddev: 0.02, min: 0.09, max: 0.19]
torch.Size([64, 64, 8, 8])6: 0.36ms for 170 calls [stddev: 0.08, min: 0.29, max: 0.62]
torch.Size([64, 64, 8, 8])5: 0.36ms for 170 calls [stddev: 0.07, min: 0.29, max: 0.60]
torch.Size([64, 64, 8, 8])4: 0.67ms for 170 calls [stddev: 0.18, min: 0.53, max: 2.23]
torch.Size([64, 64, 8, 8])3: 0.68ms for 170 calls [stddev: 0.13, min: 0.54, max: 1.07]
torch.Size([64, 64, 8, 8])2: 0.07ms for 170 calls [stddev: 0.02, min: 0.05, max: 0.12]
torch.Size([64, 64, 8, 8])1: 0.23ms for 170 calls [stddev: 0.05, min: 0.19, max: 0.39]
torch.Size([64, 64, 8, 8])0: 0.26ms for 170 calls [stddev: 0.05, min: 0.21, max: 0.45]
torch.Size([64, 64, 16, 16])7: 0.14ms for 40 calls [stddev: 0.03, min: 0.11, max: 0.21]
torch.Size([64, 64, 16, 16])6: 0.37ms for 40 calls [stddev: 0.07, min: 0.30, max: 0.57]
torch.Size([64, 64, 16, 16])5: 0.37ms for 40 calls [stddev: 0.08, min: 0.30, max: 0.56]
torch.Size([64, 64, 16, 16])4: 0.67ms for 40 calls [stddev: 0.15, min: 0.54, max: 1.14]
torch.Size([64, 64, 16, 16])3: 0.67ms for 40 calls [stddev: 0.15, min: 0.55, max: 1.18]
torch.Size([64, 64, 16, 16])2: 0.50ms for 40 calls [stddev: 0.10, min: 0.41, max: 0.82]
torch.Size([64, 64, 16, 16])1: 0.23ms for 40 calls [stddev: 0.04, min: 0.19, max: 0.37]
torch.Size([64, 64, 16, 16])0: 0.26ms for 40 calls [stddev: 0.05, min: 0.21, max: 0.41]
torch.Size([64, 32, 32, 32])7: 0.14ms for 40 calls [stddev: 0.03, min: 0.11, max: 0.27]
torch.Size([64, 32, 32, 32])6: 0.38ms for 40 calls [stddev: 0.09, min: 0.29, max: 0.64]
torch.Size([64, 32, 32, 32])5: 0.38ms for 40 calls [stddev: 0.09, min: 0.29, max: 0.65]
torch.Size([64, 32, 32, 32])4: 0.70ms for 40 calls [stddev: 0.16, min: 0.54, max: 1.19]
torch.Size([64, 32, 32, 32])3: 0.70ms for 40 calls [stddev: 0.16, min: 0.54, max: 1.19]
torch.Size([64, 32, 32, 32])2: 0.54ms for 40 calls [stddev: 0.13, min: 0.41, max: 0.91]
torch.Size([64, 32, 32, 32])1: 0.24ms for 40 calls [stddev: 0.05, min: 0.19, max: 0.40]
torch.Size([64, 32, 32, 32])0: 0.28ms for 40 calls [stddev: 0.09, min: 0.21, max: 0.70]
torch.Size([64, 32, 16, 16])7: 0.11ms for 170 calls [stddev: 0.02, min: 0.09, max: 0.18]
torch.Size([64, 32, 16, 16])6: 0.34ms for 170 calls [stddev: 0.05, min: 0.29, max: 0.53]
torch.Size([64, 32, 16, 16])5: 0.34ms for 170 calls [stddev: 0.05, min: 0.29, max: 0.51]
torch.Size([64, 32, 16, 16])4: 0.63ms for 170 calls [stddev: 0.08, min: 0.54, max: 0.94]
torch.Size([64, 32, 16, 16])3: 0.64ms for 170 calls [stddev: 0.09, min: 0.54, max: 0.96]
torch.Size([64, 32, 16, 16])2: 0.06ms for 170 calls [stddev: 0.01, min: 0.05, max: 0.10]
torch.Size([64, 32, 16, 16])1: 0.22ms for 170 calls [stddev: 0.03, min: 0.19, max: 0.32]
torch.Size([64, 32, 16, 16])0: 0.25ms for 170 calls [stddev: 0.04, min: 0.21, max: 0.39]
torch.Size([64, 16, 32, 32])7: 0.13ms for 140 calls [stddev: 0.02, min: 0.09, max: 0.19]
torch.Size([64, 16, 32, 32])6: 0.39ms for 140 calls [stddev: 0.07, min: 0.30, max: 0.60]
torch.Size([64, 16, 32, 32])5: 0.39ms for 140 calls [stddev: 0.07, min: 0.30, max: 0.60]
torch.Size([64, 16, 32, 32])4: 0.72ms for 140 calls [stddev: 0.13, min: 0.55, max: 1.01]
torch.Size([64, 16, 32, 32])3: 0.74ms for 140 calls [stddev: 0.13, min: 0.56, max: 1.05]
torch.Size([64, 16, 32, 32])2: 0.07ms for 140 calls [stddev: 0.01, min: 0.06, max: 0.12]
torch.Size([64, 16, 32, 32])1: 0.25ms for 140 calls [stddev: 0.05, min: 0.19, max: 0.39]
torch.Size([64, 16, 32, 32])0: 0.29ms for 140 calls [stddev: 0.05, min: 0.22, max: 0.46]
# CPU
torch.Size([64, 16, 32, 32])7TrueTrue: 1.40ms for 1000 calls [stddev: 0.47, min: 0.60, max: 6.45]
torch.Size([64, 16, 32, 32])6TrueTrue: 62.79ms for 1000 calls [stddev: 15.29, min: 46.34, max: 97.40]
torch.Size([64, 16, 32, 32])5TrueTrue: 29.08ms for 1000 calls [stddev: 6.35, min: 21.32, max: 48.38]
torch.Size([64, 16, 32, 32])4TrueTrue: 17.52ms for 1000 calls [stddev: 1.53, min: 15.35, max: 29.74]
torch.Size([64, 16, 32, 32])3TrueTrue: 11.31ms for 1000 calls [stddev: 1.66, min: 9.30, max: 38.80]
torch.Size([64, 16, 32, 32])2TrueTrue: 0.63ms for 1000 calls [stddev: 0.29, min: 0.28, max: 4.34]
torch.Size([64, 16, 32, 32])1TrueTrue: 5.23ms for 1000 calls [stddev: 0.72, min: 3.32, max: 17.07]
torch.Size([64, 16, 32, 32])0TrueTrue: 7.89ms for 1000 calls [stddev: 0.91, min: 5.85, max: 21.50]
forward: 136.22ms for 1000 calls [stddev: 22.90, min: 109.06, max: 205.25]
"""
import torch
from torch_testbed.timing import print_all_timings
from archai.supergraph.algos.darts.mixed_op import MixedOp
from archai.common import utils
utils.setup_cuda(2, local_rank=0)
device = torch.device("cuda")
mop = MixedOp(16, 1).to(device=device)
a = torch.randn(8, requires_grad=True).to(device=device)
x = torch.randn((64, 16, 32, 32), requires_grad=True).to(device=device)
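# Benchmark: run the mixed op 1000 times; print_all_timings then reports
# per-op timing stats like those recorded in the docstring above.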
for i in range(1000):
y = mop(x, a)
print_all_timings()
archai/scripts/supergraph/performance/mixed_ops.py
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import argparse
from transformers import AutoTokenizer, OPTConfig, OPTForCausalLM, TrainingArguments
from archai.datasets.nlp.fast_hf_dataset_provider import (
FastDataCollatorForLanguageModeling,
FastHfDatasetProvider,
)
from archai.trainers.nlp.hf_trainer import HfTrainer
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Trains an OPT model using the Hugging Face trainer.")
parser.add_argument(
"-dn",
"--dataset_name",
type=str,
default="wikitext",
help="Name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"-dcn",
"--dataset_config_name",
type=str,
default="wikitext-103-raw-v1",
help="Configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument("-ls", "--logging_steps", type=int, default=10, help="Number of steps between logs.")
parser.add_argument("-es", "--eval_steps", type=int, default=100, help="Number of steps between evaluations.")
parser.add_argument("-bsz", "--per_device_train_batch_size", type=int, default=64, help="Batch size per device.")
parser.add_argument("-n", "--max_steps", type=int, default=1, help="Maximum number of steps.")
args = parser.parse_args()
return args
if __name__ == "__main__":
args = parse_args()
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
collator = FastDataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
dataset_provider = FastHfDatasetProvider.from_hub(
args.dataset_name,
dataset_config_name=args.dataset_config_name,
tokenizer=tokenizer,
)
train_dataset = dataset_provider.get_train_dataset(seq_len=2048)
eval_dataset = dataset_provider.get_val_dataset(seq_len=2048)
config = OPTConfig(
        max_position_embeddings=2048,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
vocab_size=50272,
)
model = OPTForCausalLM(config=config)
print(f"Total parameters: {sum(p.numel() for p in model.parameters())}")
training_args = TrainingArguments(
"hf-opt",
evaluation_strategy="steps",
logging_steps=args.logging_steps,
eval_steps=args.eval_steps,
per_device_train_batch_size=args.per_device_train_batch_size,
learning_rate=3e-4,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-8,
weight_decay=0.1,
max_steps=args.max_steps,
save_steps=args.save_steps,
)
trainer = HfTrainer(
model=model,
args=training_args,
data_collator=collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
archai/scripts/trainers/hf/train_opt.py
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import argparse
import os
import sys
from archai.common.store import ArchaiStore
CONNECTION_NAME = 'MODEL_STORAGE_CONNECTION_STRING'
def list_models(con_str, experiment_name):
parser = argparse.ArgumentParser(
description="List all azure blob store assets.")
parser.add_argument('--prefix', type=str, required=True, default=None,
help='List models matching this prefix')
args = parser.parse_args()
prefix = args.prefix
storage_account_name, storage_account_key = ArchaiStore.parse_connection_string(con_str)
store = ArchaiStore(storage_account_name, storage_account_key, table_name=experiment_name)
for blob in store.list_blobs(prefix):
print(blob)
if __name__ == '__main__':
experiment_name = os.getenv("EXPERIMENT_NAME", "facesynthetics")
con_str = os.getenv(CONNECTION_NAME)
if not con_str:
print(f"Please specify your {CONNECTION_NAME} environment variable.")
sys.exit(1)
list_models(con_str, experiment_name)
archai/tasks/face_segmentation/aml/azure/list.py
# assumes you have already run "az login" and "az account set --subscription id"
# also assumes you have run "az aks install-cli"
$resource_group = "snpe-quantizaton-rg"
$storage_account_name = "nasfacemodels"
$plan_location = "westus2"
$aks_cluster = "snpe-quantizer-aks"
$aks_node_vm = "Standard_D16s_v3"
$aks_namespace = "snpe"
# This registry name is also referenced in quantizer.yaml
$registry_name = "snpecontainerregistry001"
function Check-Provisioning($result) {
    $rc = Join-String -Separator "`n" -InputObject $result
    $x = ConvertFrom-Json $rc
    if ($x.provisioningState -ne "Succeeded") {
        Write-Error "Provisioning failed:"
        Write-Error $rc
        exit 1
    }
}
function GetConnectionString()
{
$x = az storage account show-connection-string --name $storage_account_name --resource-group $resource_group | ConvertFrom-Json
return $x.connectionString
}
function CreateBlobContainer($name, $conn_str)
{
Write-Host "Checking blob container '$name' exists"
$rc = &az storage container exists --name $name --connection-string $conn_str | ConvertFrom-Json
if (-not $rc.exists) {
Write-Host "Creating blob container $name"
$rc = &az storage container create --name $name --resource-group $resource_group --connection-string $conn_str | ConvertFrom-Json
if (-not $rc.created) {
Write-Error "Failed to create blob container $name"
Write-Error $rc
}
}
}
function EnablePublicAccess($name, $conn_str){
Write-Host "Checking blob container '$name' has public access"
$rc = &az storage container show-permission --name $name --connection-string $conn_str | ConvertFrom-Json
if ($rc.publicAccess -ne "blob" ){
Write-Host "Setting blob container '$name' permissions for public access"
$rc = &az storage container set-permission --name $name --public-access blob --connection-string $conn_str
}
}
function GetZipRootName($zip)
{
$zip = [IO.Compression.ZipFile]::OpenRead($zip)
$entries = $zip.Entries
$root = [System.IO.Path]::GetDirectoryName($entries[0].FullName)
$zip.Dispose()
return $root
}
function CopyLocal($zip)
{
$a = [System.IO.Path]::GetDirectoryName([System.IO.Path]::GetFullPath($zip))
$b = [System.IO.Path]::GetFullPath(".")
if ($a -ne $b) {
Copy-Item -Path $zip -Destination "."
}
}
function EnsureNamespace($name)
{
Write-Host "Checking kubernetes namespaces"
$rc = &kubectl get namespace $name 2>&1
if ($rc.ToString().Contains("NotFound")) {
Write-Host "Creating kubernetes namespace $name" 2>&1
$rc = &kubectl create namespace $name
Write-Host "Create returned: $rc"
}
}
if ("$Env:SNPE_SDK" -eq "")
{
Write-Host "Please set your SNPE_SDK path so we can upload the SNPE SDK zip file to Azure"
exit 1
}
CopyLocal -zip $Env:SNPE_SDK
$snpe_sdk_zip = [System.IO.Path]::GetFileName($Env:SNPE_SDK)
$snpe_root = GetZipRootName -zip $Env:SNPE_SDK
if ("$Env:ANDROID_NDK" -eq ""){
Write-Host "Please set your ANDROID_NDK path so we can upload the Android NDK zip file to Azure"
exit 1
}
CopyLocal -zip $Env:ANDROID_NDK
$android_sdk_zip = [System.IO.Path]::GetFileName($Env:ANDROID_NDK)
$android_ndk_root = GetZipRootName -zip $Env:ANDROID_NDK
if ("$Env:INPUT_TESTSET" -eq ""){
Write-Host "Please set your INPUT_TESTSET path pointing to zip file containing the 10k test set images"
exit 1
}
Write-Host "Checking azure account..."
$output = &az account show 2>&1
if ($output.Contains("ERROR:"))
{
Write-Host "Please login to an azure account using 'az login' and set the subscription you want to use."
Write-Host "using 'az account set --subscriptoin id' then try again."
Exit 1
}
$account = $output | ConvertFrom-Json
$name = $account.name
$subscription = $account.id
Write-Host "You are using azure subscription $name with id $subscription"
$proceed = Read-Host -Prompt "Do you want to proceed (y/n) "
$lower = $proceed.ToLowerInvariant()
if ($lower -ne 'yes' -and $lower -ne 'y'){
Write-Host "Your answer $lower does not match 'yes' or 'y' so the script is terminating."
Exit 1
}
# ======= Storage account
Write-Host Checking storage account $storage_account_name...
$rc = &az storage account show --resource-group $resource_group --name $storage_account_name 2>&1
$rc = Join-String -Separator "`n" -InputObject $rc
if ($rc.Contains("ResourceNotFound")) {
Write-Host Creating storage account $storage_account_name...
$rc = &az storage account create --name $storage_account_name --resource-group $resource_group --location $plan_location --kind StorageV2 --sku Standard_LRS 2>&1
}
Check-Provisioning -result $rc
$conn_str = GetConnectionString
CreateBlobContainer -name "models" -conn_str $conn_str
CreateBlobContainer -name "downloads" -conn_str $conn_str
EnablePublicAccess -name "downloads" -conn_str $conn_str
Write-Host Checking container registry $registry_name
$rc = &az acr show --resource-group $resource_group --name $registry_name 2>&1
if ($rc.Contains("ResourceNotFound")) {
Write-Host Creating container registry $registry_name...
$rc = &az acr create --resource-group $resource_group --name $registry_name --sku Standard 2>&1
Check-Provisioning -result $rc
$rc = &az acr update --name $registry_name --anonymous-pull-enabled true --admin-enabled true
}
Check-Provisioning -result $rc
$rc = &az acr login --name $registry_name --expose-token | ConvertFrom-Json
$token = $rc.accessToken
$acrServer = $rc.loginServer
# ======= aks cluster
Write-Host Checking aks cluster..
$rc = &az aks show --name $aks_cluster --resource-group $resource_group 2>&1
if ($rc.Contains("ResourceNotFound")) {
# note, azure requires min of 10 pods per node, but our pods are running expensive quantization job, so
# to allow 10 of those to run on a node we've had to scale up our VM to Standard_D16s_v3 with 16 cores and 64 gb RAM.
# even then we'll see how well that performs... could be 10 quantizations on a node will thrash it to bits...
$rc = &az aks create --name $aks_cluster --resource-group $resource_group --location $plan_location --enable-cluster-autoscaler `
--node-osdisk-size 250 --min-count 1 --max-count 10 --max-pods 10 --node-osdisk-size 100 --node-vm-size $aks_node_vm `
--node-osdisk-type Managed --nodepool-name nodepool1 2>&1
}
Check-Provisioning -result $rc
$rc = &"$env:windir\system32\where" kubectl.exe 2>&1
print("where kubectl.exe => $rc")
if ($rc.ToString().Contains("Could not find files")) {
Write-Host "kubectl not found, skipping kubectl setup."
if ($IsWindows){
Write-Host "You can build the quantizer docker image on Windows if you install the Docker Desktop for Windows"
Write-Host "and set it to Linux container mode and you can manage your Azure Kubernetes cluster using kubectl if"
Write-Host "you enable the docker desktop Kubernetes support under Settings."
}
}
else
{
# this makes it possible to use kubectl locally to manage this cluster.
&az aks get-credentials --resource-group $resource_group --name $aks_cluster
EnsureNamespace -name $aks_namespace
}
Write-Host "======= upload INPUT_TESTSET"
$fileName = [System.IO.Path]::GetFileName($Env:INPUT_TESTSET)
$rc = az storage blob exists --account-name $storage_account_name --container-name downloads --name $fileName --connection-string $conn_str | ConvertFrom-Json
if ($rc.exists){
Write-Host "File $fileName already exists in 'downloads' container"
} else {
Write-Host "Uploading $fileName to 'downloads' container"
az storage blob upload --file "$Env:INPUT_TESTSET" --account-name $storage_account_name --container-name downloads --name $fileName --connection-string "$conn_str"
}
$test_set_url="https://$storage_account_name.blob.core.windows.net/downloads/$fileName"
Write-Host "Test set url is $test_set_url"
# populate MODEL_STORAGE_CONNECTION_STRING variable in quantizer.template.yaml
$template = Get-Content -Path "quantizer.template.yaml"
$tags = &az acr repository show-tags -n $registry_name --repository quantizer | ConvertFrom-JSON
if ($tags.GetType().Name -eq "String"){
$tags = @($tags)
}
$latest = [Version]"1.0"
foreach ($t in $tags) {
$v = [Version]$t
if ($v -gt $latest){
$latest = $v
}
}
$v = [Version]$latest
$vnext = [System.String]::Format("{0}.{1}", $v.Major, $v.Minor + 1)
Write-Host "Creating quantizer.yaml and setting image version $vnext"
$template = $template.Replace("quantizer:1.0", "quantizer:$vnext")
$template = $template.Replace("$MSCS$", $conn_str)
Set-Content -Path "quantizer.yaml" -Value $template
# ================= Write out info/next steps ================
Write-Host ""
Write-Host docker build `
--build-arg "SNPE_SDK_ZIP=$snpe_sdk_zip" --build-arg "SNPE_SDK_ROOT=$snpe_root" `
--build-arg "ANDROID_NDK_ZIP=$android_sdk_zip" --build-arg "ANDROID_NDK_ROOT=$android_ndk_root" `
. --progress plain
Write-Host ""
Write-Host "### Run the above docker build to build the docker image and the following to publish it..."
Write-Host "Get the password from Azure portal for $registry_name under Access keys:"
Write-Host " docker login $acrServer -u $registry_name -p <get password>"
Write-Host " az aks get-credentials --resource-group $resource_group --name $aks_cluster"
Write-Host " docker tag ... $registry_name.azurecr.io/quantizer:$vnext"
Write-Host " docker push $registry_name.azurecr.io/quantizer:$vnext"
Write-Host ""
Write-Host "To cleanup old images see cleanup.ps1"
Write-Host ""
Write-Host "### Apply the new image on your cluster..."
Write-Host " kubectl apply -f quantizer.yaml"
Write-Host ""
Write-Host "### To run the runner script locally please set the following environment variable: "
Write-Host "set MODEL_STORAGE_CONNECTION_STRING=$conn_str"
archai/tasks/face_segmentation/aml/docker/quantizer/setup.ps1
import os
import sys
import json
import subprocess
home = os.getenv("HOME")
experiments = f"{home}/snpe/experiment"
script_dir = os.path.dirname(os.path.abspath(__file__))
map_file = os.path.join(script_dir, "map.txt")
command = f'{script_dir}/../azure/loop.sh'
def run(cmd):
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
stdout, stderr = p.communicate()
return (stdout.strip(), stderr.strip())
def find_screens():
screens = {}
stdout, stderr = run(["screen", "-list"])
if stderr != "":
print("screen -list failed: ")
print(stderr)
sys.exit(1)
for line in stdout.splitlines():
parts = line.strip().split('\t')
if len(parts) > 1:
id = parts[0]
parts = id.split(".")
if len(parts) == 2:
proc, device = parts
screens[device] = proc
else:
print("### skipping unrecognized screen name: {id}")
return screens
def find_devices():
devices = []
stdout, stderr = run(["adb", "devices"])
if stderr != "":
print("adb devices failed: ")
print(stderr)
sys.exit(1)
for line in stdout.splitlines():
parts = line.split('\t')
if len(parts) == 2 and parts[1] == 'device':
devices += [parts[0]]
return devices
def load_map():
map = {}
if os.path.exists(map_file):
with open(map_file, "r") as f:
map = json.load(f)
return map
def save_map(map):
with open(map_file, "w") as f:
json.dump(map, f, indent=2)
return map
def allocate_folder(map, id):
next = 1
inverse_map = {v: k for k, v in map.items()}
while True:
folder = f"{experiments}{next}"
if folder not in inverse_map:
map[id] = folder
save_map(map)
if not os.path.isdir(folder):
os.mkdir(folder)
return folder
else:
next += 1
def read_lock(folder):
lock_file = os.path.join(folder, "lock.txt")
if os.path.exists(lock_file):
return open(lock_file).read().strip()
return None
def main():
devices = find_devices()
if len(devices) == 0:
print("### Found no Qualcomm Devices using `adb devices`")
sys.exit(1)
screens = find_screens()
map = load_map()
print("# Found the following Qualcomm Devices using `adb devices`:")
    for id in devices:
if id in map:
folder = map[id]
else:
folder = allocate_folder(map, id)
lock = read_lock(folder)
print(f"Device {id}, mapped to folder {folder}")
if id in screens:
proc = screens[id]
print(f" Screen is running: {proc}.{id}")
elif lock:
print(f" Locked by: {lock}")
else:
print(f" Please run: screen -dmS {id} {command} --device {id} --no_quantization " +
f"--cleanup_stale_pods 3600 --working {folder}")
if __name__ == '__main__':
main()
Source: archai/tasks/face_segmentation/aml/setup/launch.py
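For context, `find_screens` above keys each running `screen` session by device id, relying on the `proc.device` session naming that the script itself asks you to use (`screen -dmS <device> ...`). A minimal sketch of that parsing, with an invented device id and the GNU `screen -list` output format assumed:

```python
# Hypothetical `screen -list` output; the session was started as 12345.ABC1234.
sample = "There is a screen on:\n\t12345.ABC1234\t(Detached)\n1 Socket in /run/screen."

screens = {}
for line in sample.splitlines():
    parts = line.strip().split('\t')
    if len(parts) > 1:
        name = parts[0].split('.')
        if len(name) == 2:
            proc, device = name
            screens[device] = proc  # map device id -> screen process id

print(screens)  # {'ABC1234': '12345'}
```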
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
from threading import Lock
class PriorityQueue:
def __init__(self):
self.queue = []
self.lock = Lock()
def enqueue(self, priority, data):
self.lock.acquire()
inserted = False
for i in range(len(self.queue)):
item = self.queue[i]
if item[0] > priority:
self.queue.insert(i, (priority, data))
inserted = True
break
if not inserted:
self.queue += [(priority, data)]
self.lock.release()
def dequeue(self):
""" returns a tuple containing the (priority, data) that was enqueued. """
self.lock.acquire()
item = None
if len(self.queue) > 0:
item = self.queue[0]
del self.queue[0]
self.lock.release()
return item
def peek(self):
""" returns a tuple containing the (priority, data) that was enqueued. """
self.lock.acquire()
item = None
if len(self.queue) > 0:
item = self.queue[0]
self.lock.release()
return item
def size(self):
self.lock.acquire()
rc = len(self.queue)
self.lock.release()
return rc
Source: archai/tasks/face_segmentation/aml/util/priority_queue.py
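A quick usage sketch of the `PriorityQueue` above: items with smaller priority values are dequeued first, and `dequeue`/`peek` return `None` when the queue is empty.

```python
q = PriorityQueue()
q.enqueue(2, "job B")
q.enqueue(1, "job A")
q.enqueue(3, "job C")

print(q.peek())  # (1, 'job A') - the lowest priority value wins
while q.size() > 0:
    priority, data = q.dequeue()
    print(priority, data)  # prints job A, then job B, then job C
```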
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import torch
from typing import Callable, List, Optional
from torch import nn, Tensor
from torchvision.models._utils import _make_divisible
from torchvision.ops.misc import Conv2dNormActivation
# Adapted from https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv2.py
class CustomInvertedResidual(nn.Module):
def __init__(
self,
inp: int,
oup: int,
stride: int,
expand_ratio: int,
kernel: int,
norm_layer: Optional[Callable[..., nn.Module]] = None,
) -> None:
super().__init__()
self.stride = stride
if stride not in [1, 2]:
raise ValueError(f"stride should be 1 or 2 instead of {stride}")
if norm_layer is None:
norm_layer = nn.BatchNorm2d
hidden_dim = int(round(inp * expand_ratio))
self.use_res_connect = self.stride == 1 and inp == oup
layers: List[nn.Module] = []
if expand_ratio != 1:
# pw
layers.append(
Conv2dNormActivation(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer, activation_layer=nn.ReLU6)
)
layers.extend(
[
# dw
Conv2dNormActivation(
hidden_dim,
hidden_dim,
kernel_size=kernel,
stride=stride,
groups=hidden_dim,
norm_layer=norm_layer,
activation_layer=nn.ReLU6,
),
# pw-linear
nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
norm_layer(oup),
]
)
self.conv = nn.Sequential(*layers)
self.out_channels = oup
self._is_cn = stride > 1
def forward(self, x: Tensor) -> Tensor:
if self.use_res_connect:
return x + self.conv(x)
else:
return self.conv(x)
#
# Adapted from torchvision MobileNetV2 to allow for different kernel sizes
# to be passed in instead of default value of 3
class CustomMobileNetV2(nn.Module):
def __init__(
self,
num_classes: int,
width_mult: float = 1.0,
inverted_residual_setting: Optional[List[List[int]]] = None,
round_nearest: int = 8,
block: Optional[Callable[..., nn.Module]] = None,
norm_layer: Optional[Callable[..., nn.Module]] = None,
dropout: float = 0.2,
) -> None:
"""
MobileNet V2 main class
Args:
num_classes (int): Number of classes
width_mult (float): Width multiplier - adjusts number of channels in each layer by this amount
inverted_residual_setting: Network structure
round_nearest (int): Round the number of channels in each layer to be a multiple of this number
Set to 1 to turn off rounding
block: Module specifying inverted residual building block for mobilenet
norm_layer: Module specifying the normalization layer to use
            dropout (float): The dropout probability
"""
super().__init__()
if block is None:
block = CustomInvertedResidual
if norm_layer is None:
norm_layer = nn.BatchNorm2d
input_channel = 32
last_channel = 1280
if inverted_residual_setting is None:
inverted_residual_setting = [
# t, c, n, s, k
[1, 16, 1, 1, 3],
[6, 24, 2, 2, 3],
[6, 32, 3, 2, 3],
[6, 64, 4, 2, 3],
[6, 96, 3, 1, 3],
[6, 160, 3, 2, 3],
[6, 320, 1, 1, 3],
]
# check inverted_residual_setting for validity - t,c,n,s,k are required
if len(inverted_residual_setting) == 0 or any(len(ir) != 5 for ir in inverted_residual_setting):
raise ValueError(
f"inverted_residual_setting should be non-empty or a 5-element list, got {inverted_residual_setting}"
)
# building first layer
input_channel = _make_divisible(input_channel * width_mult, round_nearest)
self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
features: List[nn.Module] = [
Conv2dNormActivation(3, input_channel, stride=2, norm_layer=norm_layer, activation_layer=nn.ReLU6)
]
# building inverted residual blocks
for t, c, n, s, k in inverted_residual_setting:
output_channel = _make_divisible(c * width_mult, round_nearest)
for i in range(n):
stride = s if i == 0 else 1
features.append(
block(input_channel, output_channel, stride, expand_ratio=t, kernel=k, norm_layer=norm_layer)
)
input_channel = output_channel
# building last several layers
features.append(
Conv2dNormActivation(
input_channel, self.last_channel, kernel_size=1, norm_layer=norm_layer, activation_layer=nn.ReLU6
)
)
# make it nn.Sequential
self.features = nn.Sequential(*features)
# building classifier
self.classifier = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(self.last_channel, num_classes),
)
# weight initialization
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode="fan_out")
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.zeros_(m.bias)
def _forward_impl(self, x: Tensor) -> Tensor:
# This exists since TorchScript doesn't support inheritance, so the superclass method
# (this one) needs to have a name other than `forward` that can be accessed in a subclass
x = self.features(x)
# Cannot use "squeeze" as batch-size can be 1
x = nn.functional.adaptive_avg_pool2d(x, (1, 1))
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
def forward(self, x: Tensor) -> Tensor:
return self._forward_impl(x)
Source: archai/tasks/facial_landmark_detection/model.py
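For illustration, a minimal sketch of using the `CustomMobileNetV2` above with the extra `k` (kernel size) column that this adaptation adds; the setting values here are invented for the example, not a recommended configuration:

```python
import torch

settings = [
    # t (expand ratio), c (channels), n (repeats), s (stride), k (kernel size)
    [1, 16, 1, 1, 3],
    [6, 24, 2, 2, 5],
    [6, 32, 3, 2, 5],
]
model = CustomMobileNetV2(num_classes=10, inverted_residual_setting=settings)
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 10])
```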
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import torch
from archai.datasets.cv.transforms.brightness import Brightness, RandomBrightness
def test_brightness():
# Assert that it works with minimum value
b = Brightness(-1.0)
img = torch.tensor([0.5])
result = b(img)
assert result.tolist() == [0.0]
# Assert that it works with maximum value
b = Brightness(1.0)
img = torch.tensor([0.5])
result = b(img)
assert result.tolist() == [1.0]
# Assert that it works with value within bounds
b = Brightness(0.5)
img = torch.tensor([0.5])
result = b(img)
assert result.tolist() == [1.0]
def test_random_brightness():
# Assert that it works with minimum and maximum values
rb = RandomBrightness(-1.0, 1.0)
img = torch.tensor([0.5])
result = rb(img)
assert result.tolist()[0] >= -1.0 and result.tolist()[0] <= 1.0
Source: archai/tests/datasets/cv/transforms/test_brightness.py
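The assertions above are consistent with `Brightness` shifting pixel values by a constant and clamping the result to [0, 1]. A minimal sketch of that assumed behavior (the real transform lives in `archai.datasets.cv.transforms.brightness`):

```python
import torch

def brightness(img: torch.Tensor, value: float) -> torch.Tensor:
    # Assumed semantics inferred from the tests above: shift, then clamp to [0, 1].
    return torch.clamp(img + value, 0.0, 1.0)

assert brightness(torch.tensor([0.5]), -1.0).tolist() == [0.0]
assert brightness(torch.tensor([0.5]), 1.0).tolist() == [1.0]
assert brightness(torch.tensor([0.5]), 0.5).tolist() == [1.0]
```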
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
import pytest
from archai.discrete_search.algos.regularized_evolution import (
RegularizedEvolutionSearch,
)
@pytest.fixture(scope="session")
def output_dir(tmp_path_factory):
return tmp_path_factory.mktemp("out_re")
def test_regularized_evolution(output_dir, search_space, search_objectives, surrogate_model):
algo = RegularizedEvolutionSearch(
search_space=search_space,
search_objectives=search_objectives,
output_dir=output_dir,
num_iters=5,
init_num_models=4,
pareto_sample_size=4,
history_size=10,
seed=1,
)
search_results = algo.search()
assert len(os.listdir(output_dir)) > 0
df = search_results.get_search_state_df()
assert all(0 <= x <= 0.4 for x in df["Random1"].tolist())
all_models = [m for iter_r in search_results.results for m in iter_r["models"]]
# Checks if all registered models satisfy constraints
_, valid_models = search_objectives.validate_constraints(all_models)
assert len(valid_models) == len(all_models)
Source: archai/tests/discrete_search/algos/test_regularized_evolution.py
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import torch
from archai.discrete_search.api import ArchaiModel
from archai.discrete_search.search_spaces.config import ArchConfig, ConfigSearchSpace
from archai.discrete_search.search_spaces.nlp import TfppSearchSpace
N_POSITIONS = 2048
def check_fwd_pass(model: ArchaiModel):
assert isinstance(model, ArchaiModel)
arch_config = model.metadata['config']
assert isinstance(arch_config, ArchConfig)
_ = arch_config.pick('hidden_size', record_usage=False)
x = torch.randint(high=10, size=(1, N_POSITIONS))
# Test forward pass
y = model.arch(x)
def test_tfpp_sample_codegen():
search_space = TfppSearchSpace(
'codegen', total_layers=[1], total_heads=[6],
homogeneous=True, seed=1, n_positions=N_POSITIONS
)
for _ in range(5):
model = search_space.random_sample()
check_fwd_pass(model)
def test_tfpp_sample_gpt2():
search_space = TfppSearchSpace(
'gpt2', total_layers=[1], total_heads=[6],
homogeneous=True, seed=1, n_positions=N_POSITIONS
)
for _ in range(5):
model = search_space.random_sample()
check_fwd_pass(model)
Source: archai/tests/discrete_search/search_spaces/nlp/tfpp/test_tfpp.py
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
from transformers import GPT2Config, GPT2LMHeadModel
from archai.common.file_utils import create_file_name_identifier
from archai.onnx.export import export_to_onnx
from archai.onnx.optimization import optimize_onnx
def test_optimize_onnx():
model = GPT2LMHeadModel(config=GPT2Config(vocab_size=1, n_layer=1))
onnx_model_path = "temp_model.onnx"
onnx_config = export_to_onnx(model, onnx_model_path)
# Assert that `ort_model_path` + `-opt` is returned when `optimize_onnx` is called
ort_model_path = optimize_onnx(onnx_model_path, onnx_config)
assert ort_model_path == create_file_name_identifier(onnx_model_path, "-opt")
    # Assert that `ort_model_path` + `-opt` is returned when `optimize_onnx`
# is called with `only_ort` set to `True`
ort_model_path = optimize_onnx(onnx_model_path, onnx_config, only_ort=True)
assert ort_model_path == create_file_name_identifier(onnx_model_path, "-opt")
# Assert that `ort_model_path` + `-opt` is returned when `optimize_onnx`
# is called with `input_int32` set to `True`
ort_model_path = optimize_onnx(onnx_model_path, onnx_config, input_int32=True)
assert ort_model_path == create_file_name_identifier(onnx_model_path, "-opt")
os.remove(onnx_model_path)
os.remove(ort_model_path)
Source: archai/tests/onnx/test_optimization.py
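The assertions above rely on `create_file_name_identifier` inserting a suffix before the file extension; a hedged sketch of that assumed behavior:

```python
from pathlib import Path

def create_file_name_identifier(file_name: str, identifier: str) -> str:
    # Assumed behavior: "temp_model.onnx" + "-opt" -> "temp_model-opt.onnx"
    p = Path(file_name)
    return str(p.with_name(p.stem + identifier + p.suffix))

print(create_file_name_identifier("temp_model.onnx", "-opt"))  # temp_model-opt.onnx
```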
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import math
from unittest.mock import MagicMock
from transformers import TrainerControl, TrainerState, TrainingArguments
from archai.trainers.nlp.hf_callbacks import (
BPCTrainerCallback,
PerplexityTrainerCallback,
)
def test_bpc_trainer_callback():
callback = BPCTrainerCallback()
args = MagicMock(spec=TrainingArguments)
state = MagicMock(spec=TrainerState)
state.log_history = [{"loss": 0.5, "eval_loss": 0.4, "test_loss": 0.3}]
control = MagicMock(spec=TrainerControl)
# Assert that the bpc values were added to the log history
callback.on_log(args, state, control)
assert state.log_history[-1]["bpc"] == 0.5 / math.log(2)
assert state.log_history[-1]["eval_bpc"] == 0.4 / math.log(2)
# Assert that the bpc values were added to the metrics dictionary
metrics = {"eval_loss": 0.25, "test_loss": 0.2}
callback.on_evaluate(args, state, control, metrics)
assert metrics["eval_bpc"] == 0.25 / math.log(2)
assert metrics["test_bpc"] == 0.2 / math.log(2)
def test_perplexity_trainer_callback():
callback = PerplexityTrainerCallback()
args = MagicMock(spec=TrainingArguments)
state = MagicMock(spec=TrainerState)
state.log_history = [{"loss": 0.5, "eval_loss": 0.4, "test_loss": 0.3}]
control = MagicMock(spec=TrainerControl)
# Assert that the perplexity values were added to the log history
callback.on_log(args, state, control)
assert state.log_history[-1]["ppl"] == math.exp(0.5)
assert state.log_history[-1]["eval_ppl"] == math.exp(0.4)
    # Assert that the perplexity values were added to the metrics dictionary
metrics = {"eval_loss": 0.25, "test_loss": 0.2}
callback.on_evaluate(args, state, control, metrics)
assert metrics["eval_ppl"] == math.exp(0.25)
assert metrics["test_ppl"] == math.exp(0.2)
Source: archai/tests/trainers/nlp/test_hf_callbacks.py
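The conversions being tested are simple: bits-per-character is the cross-entropy loss (in nats) divided by ln 2, and perplexity is its exponential. A quick check of the numbers used above:

```python
import math

loss = 0.5
print(loss / math.log(2))  # bpc ≈ 0.7213
print(math.exp(loss))      # ppl ≈ 1.6487
```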
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from msrest import Configuration
from .version import VERSION
class ClientConfiguration(Configuration):
def __init__(self, base_url=None):
if not base_url:
raise ValueError('base_url is required.')
base_url = base_url.rstrip('/')
super(ClientConfiguration, self).__init__(base_url)
self.add_user_agent('azure-devops/{}'.format(VERSION))
self.additional_headers = {}
Source: azure-devops-python-api/azure-devops/azure/devops/client_configuration.py
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest import Serializer, Deserializer
from ...client import Client
from ...v7_0.search import models
class SearchClient(Client):
"""Search
:param str base_url: Service URL
:param Authentication creds: Authenticated credentials.
"""
def __init__(self, base_url=None, creds=None):
super(SearchClient, self).__init__(base_url, creds)
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
self._serialize = Serializer(client_models)
self._deserialize = Deserializer(client_models)
resource_area_identifier = 'ea48a0a1-269c-42d8-b8ad-ddc8fcdcf578'
def fetch_scroll_code_search_results(self, request, project=None):
"""FetchScrollCodeSearchResults.
Provides a set of results for the search text.
:param :class:`<ScrollSearchRequest> <azure.devops.v7_0.search.models.ScrollSearchRequest>` request: The Code Search Request.
:param str project: Project ID or project name
:rtype: :class:`<CodeSearchResponse> <azure.devops.v7_0.search.models.CodeSearchResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
content = self._serialize.body(request, 'ScrollSearchRequest')
response = self._send(http_method='POST',
location_id='852dac94-e8f7-45a2-9910-927ae35766a2',
version='7.0',
route_values=route_values,
content=content)
return self._deserialize('CodeSearchResponse', response)
def fetch_code_search_results(self, request, project=None):
"""FetchCodeSearchResults.
Provides a set of results for the search text.
:param :class:`<CodeSearchRequest> <azure.devops.v7_0.search.models.CodeSearchRequest>` request: The Code Search Request.
:param str project: Project ID or project name
:rtype: :class:`<CodeSearchResponse> <azure.devops.v7_0.search.models.CodeSearchResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
content = self._serialize.body(request, 'CodeSearchRequest')
response = self._send(http_method='POST',
location_id='e7f29993-5b82-4fca-9386-f5cfe683d524',
version='7.0',
route_values=route_values,
content=content)
return self._deserialize('CodeSearchResponse', response)
def fetch_package_search_results(self, request):
"""FetchPackageSearchResults.
Provides a set of results for the search text.
:param :class:`<PackageSearchRequest> <azure.devops.v7_0.search.models.PackageSearchRequest>` request: The Package Search Request.
:rtype: :class:`<PackageSearchResponse> <azure.devops.v7_0.search.models.PackageSearchResponse>`
"""
content = self._serialize.body(request, 'PackageSearchRequest')
response = self._send(http_method='POST',
location_id='f62ada48-eedc-4c8e-93f0-de870e4ecce0',
version='7.0',
content=content)
response_object = models.PackageSearchResponse()
response_object.content = self._deserialize('PackageSearchResponseContent', response)
response_object.activity_id = response.headers.get('ActivityId')
return response_object
def get_repository_status(self, project, repository):
"""GetRepositoryStatus.
Provides status of Repository.
:param str project: Project ID or project name
:param str repository: Repository ID or repository name.
:rtype: :class:`<RepositoryStatusResponse> <azure.devops.v7_0.search.models.RepositoryStatusResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
if repository is not None:
route_values['repository'] = self._serialize.url('repository', repository, 'str')
response = self._send(http_method='GET',
location_id='1f60303c-7261-4387-80f1-742a2ecf2964',
version='7.0',
route_values=route_values)
return self._deserialize('RepositoryStatusResponse', response)
def get_tfvc_repository_status(self, project):
"""GetTfvcRepositoryStatus.
Provides status of TFVC Repository.
:param str project: Project ID or project name
:rtype: :class:`<TfvcRepositoryStatusResponse> <azure.devops.v7_0.search.models.TfvcRepositoryStatusResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
response = self._send(http_method='GET',
location_id='d5bf4e52-e0af-4626-8c50-8a80b18fa69f',
version='7.0',
route_values=route_values)
return self._deserialize('TfvcRepositoryStatusResponse', response)
def fetch_wiki_search_results(self, request, project=None):
"""FetchWikiSearchResults.
Provides a set of results for the search request.
:param :class:`<WikiSearchRequest> <azure.devops.v7_0.search.models.WikiSearchRequest>` request: The Wiki Search Request.
:param str project: Project ID or project name
:rtype: :class:`<WikiSearchResponse> <azure.devops.v7_0.search.models.WikiSearchResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
content = self._serialize.body(request, 'WikiSearchRequest')
response = self._send(http_method='POST',
location_id='e90e7664-7049-4100-9a86-66b161d81080',
version='7.0',
route_values=route_values,
content=content)
return self._deserialize('WikiSearchResponse', response)
def fetch_work_item_search_results(self, request, project=None):
"""FetchWorkItemSearchResults.
Provides a set of results for the search text.
:param :class:`<WorkItemSearchRequest> <azure.devops.v7_0.search.models.WorkItemSearchRequest>` request: The Work Item Search Request.
:param str project: Project ID or project name
:rtype: :class:`<WorkItemSearchResponse> <azure.devops.v7_0.search.models.WorkItemSearchResponse>`
"""
route_values = {}
if project is not None:
route_values['project'] = self._serialize.url('project', project, 'str')
content = self._serialize.body(request, 'WorkItemSearchRequest')
response = self._send(http_method='POST',
location_id='73b2c9e2-ff9e-4447-8cda-5f5b21ff7cae',
version='7.0',
route_values=route_values,
content=content)
return self._deserialize('WorkItemSearchResponse', response)
Source: azure-devops-python-api/azure-devops/azure/devops/released/search/search_client.py
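For orientation, a sketch of calling the code-search endpoint above through the azure-devops connection object. The organization URL, PAT, and project name are placeholders, the `get_search_client` accessor name follows the library's `get_<area>_client` convention, and the request/response fields (`search_text`, `top`, `results`, `path`) are assumed from the v7_0 search models:

```python
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v7_0.search.models import CodeSearchRequest

credentials = BasicAuthentication('', '<personal-access-token>')  # placeholder PAT
connection = Connection(base_url='https://dev.azure.com/<your-org>', creds=credentials)
search_client = connection.clients.get_search_client()  # assumed accessor name

request = CodeSearchRequest(search_text='PriorityQueue', top=10)
response = search_client.fetch_code_search_results(request, project='<your-project>')
for result in response.results or []:
    print(result.path)
```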
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest import Serializer, Deserializer
from ...client import Client
from . import models
class ContributionsClient(Client):
"""Contributions
:param str base_url: Service URL
:param Authentication creds: Authenticated credentials.
"""
def __init__(self, base_url=None, creds=None):
super(ContributionsClient, self).__init__(base_url, creds)
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
self._serialize = Serializer(client_models)
self._deserialize = Deserializer(client_models)
resource_area_identifier = '8477aec9-a4c7-4bd4-a456-ba4c53c989cb'
def query_contribution_nodes(self, query):
"""QueryContributionNodes.
        [Preview API] Query for contribution nodes and provider details according to the parameters in the passed in query object.
:param :class:`<ContributionNodeQuery> <azure.devops.v7_1.contributions.models.ContributionNodeQuery>` query:
:rtype: :class:`<ContributionNodeQueryResult> <azure.devops.v7_1.contributions.models.ContributionNodeQueryResult>`
"""
content = self._serialize.body(query, 'ContributionNodeQuery')
response = self._send(http_method='POST',
location_id='db7f2146-2309-4cee-b39c-c767777a1c55',
version='7.1-preview.1',
content=content)
return self._deserialize('ContributionNodeQueryResult', response)
def query_data_providers(self, query, scope_name=None, scope_value=None):
"""QueryDataProviders.
[Preview API]
:param :class:`<DataProviderQuery> <azure.devops.v7_1.contributions.models.DataProviderQuery>` query:
:param str scope_name:
:param str scope_value:
:rtype: :class:`<DataProviderResult> <azure.devops.v7_1.contributions.models.DataProviderResult>`
"""
route_values = {}
if scope_name is not None:
route_values['scopeName'] = self._serialize.url('scope_name', scope_name, 'str')
if scope_value is not None:
route_values['scopeValue'] = self._serialize.url('scope_value', scope_value, 'str')
content = self._serialize.body(query, 'DataProviderQuery')
response = self._send(http_method='POST',
location_id='738368db-35ee-4b85-9f94-77ed34af2b0d',
version='7.1-preview.1',
route_values=route_values,
content=content)
return self._deserialize('DataProviderResult', response)
def get_installed_extensions(self, contribution_ids=None, include_disabled_apps=None, asset_types=None):
"""GetInstalledExtensions.
[Preview API]
:param [str] contribution_ids:
:param bool include_disabled_apps:
:param [str] asset_types:
:rtype: [InstalledExtension]
"""
query_parameters = {}
if contribution_ids is not None:
contribution_ids = ";".join(contribution_ids)
query_parameters['contributionIds'] = self._serialize.query('contribution_ids', contribution_ids, 'str')
if include_disabled_apps is not None:
query_parameters['includeDisabledApps'] = self._serialize.query('include_disabled_apps', include_disabled_apps, 'bool')
if asset_types is not None:
asset_types = ":".join(asset_types)
query_parameters['assetTypes'] = self._serialize.query('asset_types', asset_types, 'str')
response = self._send(http_method='GET',
location_id='2648442b-fd63-4b9a-902f-0c913510f139',
version='7.1-preview.1',
query_parameters=query_parameters)
return self._deserialize('[InstalledExtension]', self._unwrap_collection(response))
def get_installed_extension_by_name(self, publisher_name, extension_name, asset_types=None):
"""GetInstalledExtensionByName.
[Preview API]
:param str publisher_name:
:param str extension_name:
:param [str] asset_types:
:rtype: :class:`<InstalledExtension> <azure.devops.v7_1.contributions.models.InstalledExtension>`
"""
route_values = {}
if publisher_name is not None:
route_values['publisherName'] = self._serialize.url('publisher_name', publisher_name, 'str')
if extension_name is not None:
route_values['extensionName'] = self._serialize.url('extension_name', extension_name, 'str')
query_parameters = {}
if asset_types is not None:
asset_types = ":".join(asset_types)
query_parameters['assetTypes'] = self._serialize.query('asset_types', asset_types, 'str')
response = self._send(http_method='GET',
location_id='3e2f6668-0798-4dcb-b592-bfe2fa57fde2',
version='7.1-preview.1',
route_values=route_values,
query_parameters=query_parameters)
return self._deserialize('InstalledExtension', response)
Source: azure-devops-python-api/azure-devops/azure/devops/v7_1/contributions/contributions_client.py
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest.serialization import Model
class AcquisitionOperation(Model):
"""
:param operation_state: State of the AcquisitionOperation for the current user
:type operation_state: object
:param operation_type: AcquisitionOperationType: install, request, buy, etc...
:type operation_type: object
:param reason: Optional reason to justify current state. Typically used with Disallow state.
:type reason: str
:param reasons: List of reasons indicating why the operation is not allowed.
:type reasons: list of :class:`AcquisitionOperationDisallowReason <azure.devops.v7_1.extension_management.models.AcquisitionOperationDisallowReason>`
"""
_attribute_map = {
'operation_state': {'key': 'operationState', 'type': 'object'},
'operation_type': {'key': 'operationType', 'type': 'object'},
'reason': {'key': 'reason', 'type': 'str'},
'reasons': {'key': 'reasons', 'type': '[AcquisitionOperationDisallowReason]'}
}
def __init__(self, operation_state=None, operation_type=None, reason=None, reasons=None):
super(AcquisitionOperation, self).__init__()
self.operation_state = operation_state
self.operation_type = operation_type
self.reason = reason
self.reasons = reasons
class AcquisitionOperationDisallowReason(Model):
"""
:param message: User-friendly message clarifying the reason for disallowance
:type message: str
:param type: Type of reason for disallowance - AlreadyInstalled, UnresolvedDemand, etc.
:type type: str
"""
_attribute_map = {
'message': {'key': 'message', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'}
}
def __init__(self, message=None, type=None):
super(AcquisitionOperationDisallowReason, self).__init__()
self.message = message
self.type = type
class AcquisitionOptions(Model):
"""
Market item acquisition options (install, buy, etc) for an installation target.
:param default_operation: Default Operation for the ItemId in this target
:type default_operation: :class:`AcquisitionOperation <azure.devops.v7_1.extension_management.models.AcquisitionOperation>`
:param item_id: The item id that this options refer to
:type item_id: str
:param operations: Operations allowed for the ItemId in this target
:type operations: list of :class:`AcquisitionOperation <azure.devops.v7_1.extension_management.models.AcquisitionOperation>`
:param properties: Additional properties which can be added to the request.
:type properties: :class:`object <azure.devops.v7_1.extension_management.models.object>`
:param target: The target that this options refer to
:type target: str
"""
_attribute_map = {
'default_operation': {'key': 'defaultOperation', 'type': 'AcquisitionOperation'},
'item_id': {'key': 'itemId', 'type': 'str'},
'operations': {'key': 'operations', 'type': '[AcquisitionOperation]'},
'properties': {'key': 'properties', 'type': 'object'},
'target': {'key': 'target', 'type': 'str'}
}
def __init__(self, default_operation=None, item_id=None, operations=None, properties=None, target=None):
super(AcquisitionOptions, self).__init__()
self.default_operation = default_operation
self.item_id = item_id
self.operations = operations
self.properties = properties
self.target = target
class ContributionBase(Model):
"""
Base class shared by contributions and contribution types
:param description: Description of the contribution/type
:type description: str
:param id: Fully qualified identifier of the contribution/type
:type id: str
    :param visible_to: VisibleTo can be used to restrict who can reference a given contribution/type. This value should be a list of publishers or extensions that access is restricted to. Examples: "ms" - Means only the "ms" publisher can reference this. "ms.vss-web" - Means only the "vss-web" extension from the "ms" publisher can reference this.
:type visible_to: list of str
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'visible_to': {'key': 'visibleTo', 'type': '[str]'}
}
def __init__(self, description=None, id=None, visible_to=None):
super(ContributionBase, self).__init__()
self.description = description
self.id = id
self.visible_to = visible_to
class ContributionConstraint(Model):
"""
Specifies a constraint that can be used to dynamically include/exclude a given contribution
:param group: An optional property that can be specified to group constraints together. All constraints within a group are AND'd together (all must be evaluate to True in order for the contribution to be included). Different groups of constraints are OR'd (only one group needs to evaluate to True for the contribution to be included).
:type group: int
:param id: Fully qualified identifier of a shared constraint
:type id: str
:param inverse: If true, negate the result of the filter (include the contribution if the applied filter returns false instead of true)
:type inverse: bool
:param name: Name of the IContributionFilter plugin
:type name: str
:param properties: Properties that are fed to the contribution filter class
:type properties: :class:`object <azure.devops.v7_1.extension_management.models.object>`
:param relationships: Constraints can be optionally be applied to one or more of the relationships defined in the contribution. If no relationships are defined then all relationships are associated with the constraint. This means the default behaviour will eliminate the contribution from the tree completely if the constraint is applied.
:type relationships: list of str
"""
_attribute_map = {
'group': {'key': 'group', 'type': 'int'},
'id': {'key': 'id', 'type': 'str'},
'inverse': {'key': 'inverse', 'type': 'bool'},
'name': {'key': 'name', 'type': 'str'},
'properties': {'key': 'properties', 'type': 'object'},
'relationships': {'key': 'relationships', 'type': '[str]'}
}
def __init__(self, group=None, id=None, inverse=None, name=None, properties=None, relationships=None):
super(ContributionConstraint, self).__init__()
self.group = group
self.id = id
self.inverse = inverse
self.name = name
self.properties = properties
self.relationships = relationships
class ContributionPropertyDescription(Model):
"""
Description about a property of a contribution type
:param description: Description of the property
:type description: str
:param name: Name of the property
:type name: str
:param required: True if this property is required
:type required: bool
:param type: The type of value used for this property
:type type: object
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'required': {'key': 'required', 'type': 'bool'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, description=None, name=None, required=None, type=None):
super(ContributionPropertyDescription, self).__init__()
self.description = description
self.name = name
self.required = required
self.type = type
class ContributionType(ContributionBase):
"""
A contribution type, given by a json schema
:param description: Description of the contribution/type
:type description: str
:param id: Fully qualified identifier of the contribution/type
:type id: str
    :param visible_to: VisibleTo can be used to restrict who can reference a given contribution/type. This value should be a list of publishers or extensions that access is restricted to. Examples: "ms" - Means only the "ms" publisher can reference this. "ms.vss-web" - Means only the "vss-web" extension from the "ms" publisher can reference this.
:type visible_to: list of str
:param indexed: Controls whether or not contributions of this type have the type indexed for queries. This allows clients to find all extensions that have a contribution of this type. NOTE: Only TrustedPartners are allowed to specify indexed contribution types.
:type indexed: bool
:param name: Friendly name of the contribution/type
:type name: str
:param properties: Describes the allowed properties for this contribution type
:type properties: dict
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'visible_to': {'key': 'visibleTo', 'type': '[str]'},
'indexed': {'key': 'indexed', 'type': 'bool'},
'name': {'key': 'name', 'type': 'str'},
'properties': {'key': 'properties', 'type': '{ContributionPropertyDescription}'}
}
def __init__(self, description=None, id=None, visible_to=None, indexed=None, name=None, properties=None):
super(ContributionType, self).__init__(description=description, id=id, visible_to=visible_to)
self.indexed = indexed
self.name = name
self.properties = properties
class ExtensionAcquisitionRequest(Model):
"""
Contract for handling the extension acquisition process
:param assignment_type: How the item is being assigned
:type assignment_type: object
:param billing_id: The id of the subscription used for purchase
:type billing_id: str
:param item_id: The marketplace id (publisherName.extensionName) for the item
:type item_id: str
:param operation_type: The type of operation, such as install, request, purchase
:type operation_type: object
:param properties: Additional properties which can be added to the request.
:type properties: :class:`object <azure.devops.v7_1.extension_management.models.object>`
:param quantity: How many licenses should be purchased
:type quantity: int
"""
_attribute_map = {
'assignment_type': {'key': 'assignmentType', 'type': 'object'},
'billing_id': {'key': 'billingId', 'type': 'str'},
'item_id': {'key': 'itemId', 'type': 'str'},
'operation_type': {'key': 'operationType', 'type': 'object'},
'properties': {'key': 'properties', 'type': 'object'},
'quantity': {'key': 'quantity', 'type': 'int'}
}
def __init__(self, assignment_type=None, billing_id=None, item_id=None, operation_type=None, properties=None, quantity=None):
super(ExtensionAcquisitionRequest, self).__init__()
self.assignment_type = assignment_type
self.billing_id = billing_id
self.item_id = item_id
self.operation_type = operation_type
self.properties = properties
self.quantity = quantity
class ExtensionAuditLog(Model):
"""
Audit log for an extension
:param entries: Collection of audit log entries
:type entries: list of :class:`ExtensionAuditLogEntry <azure.devops.v7_1.extension_management.models.ExtensionAuditLogEntry>`
:param extension_name: Extension that the change was made for
:type extension_name: str
:param publisher_name: Publisher that the extension is part of
:type publisher_name: str
"""
_attribute_map = {
'entries': {'key': 'entries', 'type': '[ExtensionAuditLogEntry]'},
'extension_name': {'key': 'extensionName', 'type': 'str'},
'publisher_name': {'key': 'publisherName', 'type': 'str'}
}
def __init__(self, entries=None, extension_name=None, publisher_name=None):
super(ExtensionAuditLog, self).__init__()
self.entries = entries
self.extension_name = extension_name
self.publisher_name = publisher_name
class ExtensionAuditLogEntry(Model):
"""
An audit log entry for an extension
:param audit_action: Change that was made to extension
:type audit_action: str
:param audit_date: Date at which the change was made
:type audit_date: datetime
:param comment: Extra information about the change
:type comment: str
:param updated_by: Represents the user who made the change
:type updated_by: :class:`IdentityRef <azure.devops.v7_1.extension_management.models.IdentityRef>`
"""
_attribute_map = {
'audit_action': {'key': 'auditAction', 'type': 'str'},
'audit_date': {'key': 'auditDate', 'type': 'iso-8601'},
'comment': {'key': 'comment', 'type': 'str'},
'updated_by': {'key': 'updatedBy', 'type': 'IdentityRef'}
}
def __init__(self, audit_action=None, audit_date=None, comment=None, updated_by=None):
super(ExtensionAuditLogEntry, self).__init__()
self.audit_action = audit_action
self.audit_date = audit_date
self.comment = comment
self.updated_by = updated_by
class ExtensionAuthorization(Model):
"""
:param id:
:type id: str
:param scopes:
:type scopes: list of str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'scopes': {'key': 'scopes', 'type': '[str]'}
}
def __init__(self, id=None, scopes=None):
super(ExtensionAuthorization, self).__init__()
self.id = id
self.scopes = scopes
class ExtensionBadge(Model):
"""
:param description:
:type description: str
:param img_uri:
:type img_uri: str
:param link:
:type link: str
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'img_uri': {'key': 'imgUri', 'type': 'str'},
'link': {'key': 'link', 'type': 'str'}
}
def __init__(self, description=None, img_uri=None, link=None):
super(ExtensionBadge, self).__init__()
self.description = description
self.img_uri = img_uri
self.link = link
class ExtensionDataCollection(Model):
"""
Represents a single collection for extension data documents
:param collection_name: The name of the collection
:type collection_name: str
:param documents: A list of documents belonging to the collection
:type documents: list of :class:`object <azure.devops.v7_1.extension_management.models.object>`
:param scope_type: The type of the collection's scope, such as Default or User
:type scope_type: str
:param scope_value: The value of the collection's scope, such as Current or Me
:type scope_value: str
"""
_attribute_map = {
'collection_name': {'key': 'collectionName', 'type': 'str'},
'documents': {'key': 'documents', 'type': '[object]'},
'scope_type': {'key': 'scopeType', 'type': 'str'},
'scope_value': {'key': 'scopeValue', 'type': 'str'}
}
def __init__(self, collection_name=None, documents=None, scope_type=None, scope_value=None):
super(ExtensionDataCollection, self).__init__()
self.collection_name = collection_name
self.documents = documents
self.scope_type = scope_type
self.scope_value = scope_value
class ExtensionDataCollectionQuery(Model):
"""
Represents a query to receive a set of extension data collections
:param collections: A list of collections to query
:type collections: list of :class:`ExtensionDataCollection <azure.devops.v7_1.extension_management.models.ExtensionDataCollection>`
"""
_attribute_map = {
'collections': {'key': 'collections', 'type': '[ExtensionDataCollection]'}
}
def __init__(self, collections=None):
super(ExtensionDataCollectionQuery, self).__init__()
self.collections = collections
class ExtensionEventCallback(Model):
"""
Base class for an event callback for an extension
:param uri: The uri of the endpoint that is hit when an event occurs
:type uri: str
"""
_attribute_map = {
'uri': {'key': 'uri', 'type': 'str'}
}
def __init__(self, uri=None):
super(ExtensionEventCallback, self).__init__()
self.uri = uri
class ExtensionEventCallbackCollection(Model):
"""
Collection of event callbacks - endpoints called when particular extension events occur.
:param post_disable: Optional. Defines an endpoint that gets called via a POST request to notify that an extension disable has occurred.
:type post_disable: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param post_enable: Optional. Defines an endpoint that gets called via a POST request to notify that an extension enable has occurred.
:type post_enable: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param post_install: Optional. Defines an endpoint that gets called via a POST request to notify that an extension install has completed.
:type post_install: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param post_uninstall: Optional. Defines an endpoint that gets called via a POST request to notify that an extension uninstall has occurred.
:type post_uninstall: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param post_update: Optional. Defines an endpoint that gets called via a POST request to notify that an extension update has occurred.
:type post_update: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param pre_install: Optional. Defines an endpoint that gets called via a POST request to notify that an extension install is about to occur. Response indicates whether to proceed or abort.
:type pre_install: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
:param version_check: For multi-version extensions, defines an endpoint that gets called via an OPTIONS request to determine the particular version of the extension to be used
:type version_check: :class:`ExtensionEventCallback <azure.devops.v7_1.extension_management.models.ExtensionEventCallback>`
"""
_attribute_map = {
'post_disable': {'key': 'postDisable', 'type': 'ExtensionEventCallback'},
'post_enable': {'key': 'postEnable', 'type': 'ExtensionEventCallback'},
'post_install': {'key': 'postInstall', 'type': 'ExtensionEventCallback'},
'post_uninstall': {'key': 'postUninstall', 'type': 'ExtensionEventCallback'},
'post_update': {'key': 'postUpdate', 'type': 'ExtensionEventCallback'},
'pre_install': {'key': 'preInstall', 'type': 'ExtensionEventCallback'},
'version_check': {'key': 'versionCheck', 'type': 'ExtensionEventCallback'}
}
def __init__(self, post_disable=None, post_enable=None, post_install=None, post_uninstall=None, post_update=None, pre_install=None, version_check=None):
super(ExtensionEventCallbackCollection, self).__init__()
self.post_disable = post_disable
self.post_enable = post_enable
self.post_install = post_install
self.post_uninstall = post_uninstall
self.post_update = post_update
self.pre_install = pre_install
self.version_check = version_check
class ExtensionFile(Model):
"""
:param asset_type:
:type asset_type: str
:param language:
:type language: str
:param source:
:type source: str
"""
_attribute_map = {
'asset_type': {'key': 'assetType', 'type': 'str'},
'language': {'key': 'language', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'}
}
def __init__(self, asset_type=None, language=None, source=None):
super(ExtensionFile, self).__init__()
self.asset_type = asset_type
self.language = language
self.source = source
class ExtensionIdentifier(Model):
"""
    Represents the component pieces of an extension's fully qualified name, along with the fully qualified name.
:param extension_name: The ExtensionName component part of the fully qualified ExtensionIdentifier
:type extension_name: str
:param publisher_name: The PublisherName component part of the fully qualified ExtensionIdentifier
:type publisher_name: str
"""
_attribute_map = {
'extension_name': {'key': 'extensionName', 'type': 'str'},
'publisher_name': {'key': 'publisherName', 'type': 'str'}
}
def __init__(self, extension_name=None, publisher_name=None):
super(ExtensionIdentifier, self).__init__()
self.extension_name = extension_name
self.publisher_name = publisher_name
class ExtensionLicensing(Model):
"""
How an extension should handle including contributions based on licensing
:param overrides: A list of contributions which deviate from the default licensing behavior
:type overrides: list of :class:`LicensingOverride <azure.devops.v7_1.extension_management.models.LicensingOverride>`
"""
_attribute_map = {
'overrides': {'key': 'overrides', 'type': '[LicensingOverride]'}
}
def __init__(self, overrides=None):
super(ExtensionLicensing, self).__init__()
self.overrides = overrides
class ExtensionManifest(Model):
"""
Base class for extension properties which are shared by the extension manifest and the extension model
:param base_uri: Uri used as base for other relative uri's defined in extension
:type base_uri: str
:param constraints: List of shared constraints defined by this extension
:type constraints: list of :class:`ContributionConstraint <azure.devops.v7_1.extension_management.models.ContributionConstraint>`
:param contributions: List of contributions made by this extension
:type contributions: list of :class:`Contribution <azure.devops.v7_1.extension_management.models.Contribution>`
:param contribution_types: List of contribution types defined by this extension
:type contribution_types: list of :class:`ContributionType <azure.devops.v7_1.extension_management.models.ContributionType>`
:param demands: List of explicit demands required by this extension
:type demands: list of str
:param event_callbacks: Collection of endpoints that get called when particular extension events occur
:type event_callbacks: :class:`ExtensionEventCallbackCollection <azure.devops.v7_1.extension_management.models.ExtensionEventCallbackCollection>`
:param fallback_base_uri: Secondary location that can be used as base for other relative uri's defined in extension
:type fallback_base_uri: str
:param language: Language Culture Name set by the Gallery
:type language: str
:param licensing: How this extension behaves with respect to licensing
:type licensing: :class:`ExtensionLicensing <azure.devops.v7_1.extension_management.models.ExtensionLicensing>`
:param manifest_version: Version of the extension manifest format/content
:type manifest_version: float
:param restricted_to: Default user claims applied to all contributions (except the ones which have been specified restrictedTo explicitly) to control the visibility of a contribution.
:type restricted_to: list of str
:param scopes: List of all oauth scopes required by this extension
:type scopes: list of str
:param service_instance_type: The ServiceInstanceType(Guid) of the VSTS service that must be available to an account in order for the extension to be installed
:type service_instance_type: str
"""
_attribute_map = {
'base_uri': {'key': 'baseUri', 'type': 'str'},
'constraints': {'key': 'constraints', 'type': '[ContributionConstraint]'},
'contributions': {'key': 'contributions', 'type': '[Contribution]'},
'contribution_types': {'key': 'contributionTypes', 'type': '[ContributionType]'},
'demands': {'key': 'demands', 'type': '[str]'},
'event_callbacks': {'key': 'eventCallbacks', 'type': 'ExtensionEventCallbackCollection'},
'fallback_base_uri': {'key': 'fallbackBaseUri', 'type': 'str'},
'language': {'key': 'language', 'type': 'str'},
'licensing': {'key': 'licensing', 'type': 'ExtensionLicensing'},
'manifest_version': {'key': 'manifestVersion', 'type': 'float'},
'restricted_to': {'key': 'restrictedTo', 'type': '[str]'},
'scopes': {'key': 'scopes', 'type': '[str]'},
'service_instance_type': {'key': 'serviceInstanceType', 'type': 'str'}
}
def __init__(self, base_uri=None, constraints=None, contributions=None, contribution_types=None, demands=None, event_callbacks=None, fallback_base_uri=None, language=None, licensing=None, manifest_version=None, restricted_to=None, scopes=None, service_instance_type=None):
super(ExtensionManifest, self).__init__()
self.base_uri = base_uri
self.constraints = constraints
self.contributions = contributions
self.contribution_types = contribution_types
self.demands = demands
self.event_callbacks = event_callbacks
self.fallback_base_uri = fallback_base_uri
self.language = language
self.licensing = licensing
self.manifest_version = manifest_version
self.restricted_to = restricted_to
self.scopes = scopes
self.service_instance_type = service_instance_type
class ExtensionPolicy(Model):
"""
Policy with a set of permissions on extension operations
:param install: Permissions on 'Install' operation
:type install: object
:param request: Permission on 'Request' operation
:type request: object
"""
_attribute_map = {
'install': {'key': 'install', 'type': 'object'},
'request': {'key': 'request', 'type': 'object'}
}
def __init__(self, install=None, request=None):
super(ExtensionPolicy, self).__init__()
self.install = install
self.request = request
class ExtensionRequest(Model):
"""
A request for an extension (to be installed or have a license assigned)
:param reject_message: Required message supplied if the request is rejected
:type reject_message: str
:param request_date: Date at which the request was made
:type request_date: datetime
:param requested_by: Represents the user who made the request
:type requested_by: :class:`IdentityRef <azure.devops.v7_1.extension_management.models.IdentityRef>`
:param request_message: Optional message supplied by the requester justifying the request
:type request_message: str
:param request_state: Represents the state of the request
:type request_state: object
:param resolve_date: Date at which the request was resolved
:type resolve_date: datetime
:param resolved_by: Represents the user who resolved the request
:type resolved_by: :class:`IdentityRef <azure.devops.v7_1.extension_management.models.IdentityRef>`
"""
_attribute_map = {
'reject_message': {'key': 'rejectMessage', 'type': 'str'},
'request_date': {'key': 'requestDate', 'type': 'iso-8601'},
'requested_by': {'key': 'requestedBy', 'type': 'IdentityRef'},
'request_message': {'key': 'requestMessage', 'type': 'str'},
'request_state': {'key': 'requestState', 'type': 'object'},
'resolve_date': {'key': 'resolveDate', 'type': 'iso-8601'},
'resolved_by': {'key': 'resolvedBy', 'type': 'IdentityRef'}
}
def __init__(self, reject_message=None, request_date=None, requested_by=None, request_message=None, request_state=None, resolve_date=None, resolved_by=None):
super(ExtensionRequest, self).__init__()
self.reject_message = reject_message
self.request_date = request_date
self.requested_by = requested_by
self.request_message = request_message
self.request_state = request_state
self.resolve_date = resolve_date
self.resolved_by = resolved_by
class ExtensionShare(Model):
"""
:param id:
:type id: str
:param is_org:
:type is_org: bool
:param name:
:type name: str
:param type:
:type type: str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'is_org': {'key': 'isOrg', 'type': 'bool'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'}
}
def __init__(self, id=None, is_org=None, name=None, type=None):
super(ExtensionShare, self).__init__()
self.id = id
self.is_org = is_org
self.name = name
self.type = type
class ExtensionStatistic(Model):
"""
:param statistic_name:
:type statistic_name: str
:param value:
:type value: float
"""
_attribute_map = {
'statistic_name': {'key': 'statisticName', 'type': 'str'},
'value': {'key': 'value', 'type': 'float'}
}
def __init__(self, statistic_name=None, value=None):
super(ExtensionStatistic, self).__init__()
self.statistic_name = statistic_name
self.value = value
class ExtensionVersion(Model):
"""
:param asset_uri:
:type asset_uri: str
:param badges:
:type badges: list of :class:`ExtensionBadge <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionBadge>`
:param fallback_asset_uri:
:type fallback_asset_uri: str
:param files:
:type files: list of :class:`ExtensionFile <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionFile>`
:param flags:
:type flags: object
:param last_updated:
:type last_updated: datetime
:param properties:
:type properties: list of { key: str; value: str }
:param target_platform:
:type target_platform: str
:param validation_result_message:
:type validation_result_message: str
:param version:
:type version: str
:param version_description:
:type version_description: str
"""
_attribute_map = {
'asset_uri': {'key': 'assetUri', 'type': 'str'},
'badges': {'key': 'badges', 'type': '[ExtensionBadge]'},
'fallback_asset_uri': {'key': 'fallbackAssetUri', 'type': 'str'},
'files': {'key': 'files', 'type': '[ExtensionFile]'},
'flags': {'key': 'flags', 'type': 'object'},
'last_updated': {'key': 'lastUpdated', 'type': 'iso-8601'},
'properties': {'key': 'properties', 'type': '[{ key: str; value: str }]'},
'target_platform': {'key': 'targetPlatform', 'type': 'str'},
'validation_result_message': {'key': 'validationResultMessage', 'type': 'str'},
'version': {'key': 'version', 'type': 'str'},
'version_description': {'key': 'versionDescription', 'type': 'str'}
}
def __init__(self, asset_uri=None, badges=None, fallback_asset_uri=None, files=None, flags=None, last_updated=None, properties=None, target_platform=None, validation_result_message=None, version=None, version_description=None):
super(ExtensionVersion, self).__init__()
self.asset_uri = asset_uri
self.badges = badges
self.fallback_asset_uri = fallback_asset_uri
self.files = files
self.flags = flags
self.last_updated = last_updated
self.properties = properties
self.target_platform = target_platform
self.validation_result_message = validation_result_message
self.version = version
self.version_description = version_description
class GraphSubjectBase(Model):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None):
super(GraphSubjectBase, self).__init__()
self._links = _links
self.descriptor = descriptor
self.display_name = display_name
self.url = url
class IdentityRef(GraphSubjectBase):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
:param directory_alias: Deprecated - Can be retrieved by querying the Graph user referenced in the "self" entry of the IdentityRef "_links" dictionary
:type directory_alias: str
:param id:
:type id: str
:param image_url: Deprecated - Available in the "avatar" entry of the IdentityRef "_links" dictionary
:type image_url: str
:param inactive: Deprecated - Can be retrieved by querying the Graph membership state referenced in the "membershipState" entry of the GraphUser "_links" dictionary
:type inactive: bool
:param is_aad_identity: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsAadUserType/Descriptor.IsAadGroupType)
:type is_aad_identity: bool
:param is_container: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsGroupType)
:type is_container: bool
:param is_deleted_in_origin:
:type is_deleted_in_origin: bool
:param profile_url: Deprecated - not in use in most preexisting implementations of ToIdentityRef
:type profile_url: str
:param unique_name: Deprecated - use Domain+PrincipalName instead
:type unique_name: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'directory_alias': {'key': 'directoryAlias', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'image_url': {'key': 'imageUrl', 'type': 'str'},
'inactive': {'key': 'inactive', 'type': 'bool'},
'is_aad_identity': {'key': 'isAadIdentity', 'type': 'bool'},
'is_container': {'key': 'isContainer', 'type': 'bool'},
'is_deleted_in_origin': {'key': 'isDeletedInOrigin', 'type': 'bool'},
'profile_url': {'key': 'profileUrl', 'type': 'str'},
'unique_name': {'key': 'uniqueName', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None, directory_alias=None, id=None, image_url=None, inactive=None, is_aad_identity=None, is_container=None, is_deleted_in_origin=None, profile_url=None, unique_name=None):
super(IdentityRef, self).__init__(_links=_links, descriptor=descriptor, display_name=display_name, url=url)
self.directory_alias = directory_alias
self.id = id
self.image_url = image_url
self.inactive = inactive
self.is_aad_identity = is_aad_identity
self.is_container = is_container
self.is_deleted_in_origin = is_deleted_in_origin
self.profile_url = profile_url
self.unique_name = unique_name
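# Example (sketch): most extra fields on IdentityRef are deprecated in favor of
# the graph subject data inherited from GraphSubjectBase; newer code should lean
# on descriptor/display_name and the "_links" entries. Values are placeholders.
example_identity = IdentityRef(
    descriptor='aad.placeholder-descriptor',  # placeholder descriptor string
    display_name='Jane Doe',
    id='00000000-0000-0000-0000-000000000000',  # placeholder identity id
)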
class InstallationTarget(Model):
"""
:param extension_version:
:type extension_version: str
:param product_architecture:
:type product_architecture: str
:param target:
:type target: str
:param target_platform:
:type target_platform: str
:param target_version:
:type target_version: str
"""
_attribute_map = {
'extension_version': {'key': 'extensionVersion', 'type': 'str'},
'product_architecture': {'key': 'productArchitecture', 'type': 'str'},
'target': {'key': 'target', 'type': 'str'},
'target_platform': {'key': 'targetPlatform', 'type': 'str'},
'target_version': {'key': 'targetVersion', 'type': 'str'}
}
def __init__(self, extension_version=None, product_architecture=None, target=None, target_platform=None, target_version=None):
super(InstallationTarget, self).__init__()
self.extension_version = extension_version
self.product_architecture = product_architecture
self.target = target
self.target_platform = target_platform
self.target_version = target_version
class InstalledExtension(ExtensionManifest):
"""
Represents a VSTS extension along with its installation state
:param base_uri: Uri used as base for other relative uri's defined in extension
:type base_uri: str
:param constraints: List of shared constraints defined by this extension
:type constraints: list of :class:`ContributionConstraint <azure.devops.v7_1.extension_management.models.ContributionConstraint>`
:param contributions: List of contributions made by this extension
:type contributions: list of :class:`Contribution <azure.devops.v7_1.extension_management.models.Contribution>`
:param contribution_types: List of contribution types defined by this extension
:type contribution_types: list of :class:`ContributionType <azure.devops.v7_1.extension_management.models.ContributionType>`
:param demands: List of explicit demands required by this extension
:type demands: list of str
:param event_callbacks: Collection of endpoints that get called when particular extension events occur
:type event_callbacks: :class:`ExtensionEventCallbackCollection <azure.devops.v7_1.extension_management.models.ExtensionEventCallbackCollection>`
:param fallback_base_uri: Secondary location that can be used as base for other relative uri's defined in extension
:type fallback_base_uri: str
:param language: Language Culture Name set by the Gallery
:type language: str
:param licensing: How this extension behaves with respect to licensing
:type licensing: :class:`ExtensionLicensing <azure.devops.v7_1.extension_management.models.ExtensionLicensing>`
:param manifest_version: Version of the extension manifest format/content
:type manifest_version: float
    :param restricted_to: Default user claims applied to all contributions (except the ones that specify restrictedTo explicitly) to control the visibility of a contribution.
:type restricted_to: list of str
:param scopes: List of all oauth scopes required by this extension
:type scopes: list of str
:param service_instance_type: The ServiceInstanceType(Guid) of the VSTS service that must be available to an account in order for the extension to be installed
:type service_instance_type: str
:param extension_id: The friendly extension id for this extension - unique for a given publisher.
:type extension_id: str
:param extension_name: The display name of the extension.
:type extension_name: str
:param files: This is the set of files available from the extension.
:type files: list of :class:`ExtensionFile <azure.devops.v7_1.extension_management.models.ExtensionFile>`
:param flags: Extension flags relevant to contribution consumers
:type flags: object
:param install_state: Information about this particular installation of the extension
:type install_state: :class:`InstalledExtensionState <azure.devops.v7_1.extension_management.models.InstalledExtensionState>`
    :param last_published: This represents the date/time the extension was last updated in the gallery. This doesn't mean this version was updated; the value represents changes to any and all versions of the extension.
:type last_published: datetime
:param publisher_id: Unique id of the publisher of this extension
:type publisher_id: str
:param publisher_name: The display name of the publisher
:type publisher_name: str
:param registration_id: Unique id for this extension (the same id is used for all versions of a single extension)
:type registration_id: str
:param version: Version of this extension
:type version: str
"""
_attribute_map = {
'base_uri': {'key': 'baseUri', 'type': 'str'},
'constraints': {'key': 'constraints', 'type': '[ContributionConstraint]'},
'contributions': {'key': 'contributions', 'type': '[Contribution]'},
'contribution_types': {'key': 'contributionTypes', 'type': '[ContributionType]'},
'demands': {'key': 'demands', 'type': '[str]'},
'event_callbacks': {'key': 'eventCallbacks', 'type': 'ExtensionEventCallbackCollection'},
'fallback_base_uri': {'key': 'fallbackBaseUri', 'type': 'str'},
'language': {'key': 'language', 'type': 'str'},
'licensing': {'key': 'licensing', 'type': 'ExtensionLicensing'},
'manifest_version': {'key': 'manifestVersion', 'type': 'float'},
'restricted_to': {'key': 'restrictedTo', 'type': '[str]'},
'scopes': {'key': 'scopes', 'type': '[str]'},
'service_instance_type': {'key': 'serviceInstanceType', 'type': 'str'},
'extension_id': {'key': 'extensionId', 'type': 'str'},
'extension_name': {'key': 'extensionName', 'type': 'str'},
'files': {'key': 'files', 'type': '[ExtensionFile]'},
'flags': {'key': 'flags', 'type': 'object'},
'install_state': {'key': 'installState', 'type': 'InstalledExtensionState'},
'last_published': {'key': 'lastPublished', 'type': 'iso-8601'},
'publisher_id': {'key': 'publisherId', 'type': 'str'},
'publisher_name': {'key': 'publisherName', 'type': 'str'},
'registration_id': {'key': 'registrationId', 'type': 'str'},
'version': {'key': 'version', 'type': 'str'}
}
def __init__(self, base_uri=None, constraints=None, contributions=None, contribution_types=None, demands=None, event_callbacks=None, fallback_base_uri=None, language=None, licensing=None, manifest_version=None, restricted_to=None, scopes=None, service_instance_type=None, extension_id=None, extension_name=None, files=None, flags=None, install_state=None, last_published=None, publisher_id=None, publisher_name=None, registration_id=None, version=None):
super(InstalledExtension, self).__init__(base_uri=base_uri, constraints=constraints, contributions=contributions, contribution_types=contribution_types, demands=demands, event_callbacks=event_callbacks, fallback_base_uri=fallback_base_uri, language=language, licensing=licensing, manifest_version=manifest_version, restricted_to=restricted_to, scopes=scopes, service_instance_type=service_instance_type)
self.extension_id = extension_id
self.extension_name = extension_name
self.files = files
self.flags = flags
self.install_state = install_state
self.last_published = last_published
self.publisher_id = publisher_id
self.publisher_name = publisher_name
self.registration_id = registration_id
self.version = version
class InstalledExtensionQuery(Model):
"""
:param asset_types:
:type asset_types: list of str
:param monikers:
:type monikers: list of :class:`ExtensionIdentifier <azure.devops.v7_1.extension_management.models.ExtensionIdentifier>`
"""
_attribute_map = {
'asset_types': {'key': 'assetTypes', 'type': '[str]'},
'monikers': {'key': 'monikers', 'type': '[ExtensionIdentifier]'}
}
def __init__(self, asset_types=None, monikers=None):
super(InstalledExtensionQuery, self).__init__()
self.asset_types = asset_types
self.monikers = monikers
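# Example (sketch): a query for installed extensions requesting one asset type.
# The asset-type string and the empty monikers list are illustrative assumptions,
# not values mandated by the service.
example_query = InstalledExtensionQuery(
    asset_types=['Microsoft.VisualStudio.Services.Icons.Default'],  # assumed asset type
    monikers=[],  # would normally hold ExtensionIdentifier instances
)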
class InstalledExtensionState(Model):
"""
The state of an installed extension
:param flags: States of an installed extension
:type flags: object
:param installation_issues: List of installation issues
:type installation_issues: list of :class:`InstalledExtensionStateIssue <azure.devops.v7_1.extension_management.models.InstalledExtensionStateIssue>`
:param last_updated: The time at which this installation was last updated
:type last_updated: datetime
"""
_attribute_map = {
'flags': {'key': 'flags', 'type': 'object'},
'installation_issues': {'key': 'installationIssues', 'type': '[InstalledExtensionStateIssue]'},
'last_updated': {'key': 'lastUpdated', 'type': 'iso-8601'}
}
def __init__(self, flags=None, installation_issues=None, last_updated=None):
super(InstalledExtensionState, self).__init__()
self.flags = flags
self.installation_issues = installation_issues
self.last_updated = last_updated
class InstalledExtensionStateIssue(Model):
"""
Represents an installation issue
:param message: The error message
:type message: str
:param source: Source of the installation issue, for example "Demands"
:type source: str
:param type: Installation issue type (Warning, Error)
:type type: object
"""
_attribute_map = {
'message': {'key': 'message', 'type': 'str'},
'source': {'key': 'source', 'type': 'str'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, message=None, source=None, type=None):
super(InstalledExtensionStateIssue, self).__init__()
self.message = message
self.source = source
self.type = type
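# Example (sketch): attaching an issue to an installation state and reading it
# back; the flag, message, and source values are made up for illustration.
example_state = InstalledExtensionState(
    flags='disabled',  # assumed flag value; real values come from the service
    installation_issues=[
        InstalledExtensionStateIssue(message='Unmet demand: api-version/5.0',
                                     source='Demands', type='Error'),
    ],
)
for issue in example_state.installation_issues or []:
    print('%s from %s: %s' % (issue.type, issue.source, issue.message))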
class LicensingOverride(Model):
"""
Maps a contribution to a licensing behavior
:param behavior: How the inclusion of this contribution should change based on licensing
:type behavior: object
:param id: Fully qualified contribution id which we want to define licensing behavior for
:type id: str
"""
_attribute_map = {
'behavior': {'key': 'behavior', 'type': 'object'},
'id': {'key': 'id', 'type': 'str'}
}
def __init__(self, behavior=None, id=None):
super(LicensingOverride, self).__init__()
self.behavior = behavior
self.id = id
class PublishedExtension(Model):
"""
:param categories:
:type categories: list of str
:param deployment_type:
:type deployment_type: object
:param display_name:
:type display_name: str
:param extension_id:
:type extension_id: str
:param extension_name:
:type extension_name: str
:param flags:
:type flags: object
:param installation_targets:
:type installation_targets: list of :class:`InstallationTarget <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.InstallationTarget>`
:param last_updated:
:type last_updated: datetime
:param long_description:
:type long_description: str
    :param present_in_conflict_list: Indicates whether the extension is in the conflict list. Typed as a string rather than a boolean because end customers should not see this flag, and as a boolean it would surface as false in all cases.
:type present_in_conflict_list: str
:param published_date: Date on which the extension was first uploaded.
:type published_date: datetime
:param publisher:
:type publisher: :class:`PublisherFacts <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.PublisherFacts>`
:param release_date: Date on which the extension first went public.
:type release_date: datetime
:param shared_with:
:type shared_with: list of :class:`ExtensionShare <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionShare>`
:param short_description:
:type short_description: str
:param statistics:
:type statistics: list of :class:`ExtensionStatistic <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionStatistic>`
:param tags:
:type tags: list of str
:param versions:
:type versions: list of :class:`ExtensionVersion <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionVersion>`
"""
_attribute_map = {
'categories': {'key': 'categories', 'type': '[str]'},
'deployment_type': {'key': 'deploymentType', 'type': 'object'},
'display_name': {'key': 'displayName', 'type': 'str'},
'extension_id': {'key': 'extensionId', 'type': 'str'},
'extension_name': {'key': 'extensionName', 'type': 'str'},
'flags': {'key': 'flags', 'type': 'object'},
'installation_targets': {'key': 'installationTargets', 'type': '[InstallationTarget]'},
'last_updated': {'key': 'lastUpdated', 'type': 'iso-8601'},
'long_description': {'key': 'longDescription', 'type': 'str'},
'present_in_conflict_list': {'key': 'presentInConflictList', 'type': 'str'},
'published_date': {'key': 'publishedDate', 'type': 'iso-8601'},
'publisher': {'key': 'publisher', 'type': 'PublisherFacts'},
'release_date': {'key': 'releaseDate', 'type': 'iso-8601'},
'shared_with': {'key': 'sharedWith', 'type': '[ExtensionShare]'},
'short_description': {'key': 'shortDescription', 'type': 'str'},
'statistics': {'key': 'statistics', 'type': '[ExtensionStatistic]'},
'tags': {'key': 'tags', 'type': '[str]'},
'versions': {'key': 'versions', 'type': '[ExtensionVersion]'}
}
def __init__(self, categories=None, deployment_type=None, display_name=None, extension_id=None, extension_name=None, flags=None, installation_targets=None, last_updated=None, long_description=None, present_in_conflict_list=None, published_date=None, publisher=None, release_date=None, shared_with=None, short_description=None, statistics=None, tags=None, versions=None):
super(PublishedExtension, self).__init__()
self.categories = categories
self.deployment_type = deployment_type
self.display_name = display_name
self.extension_id = extension_id
self.extension_name = extension_name
self.flags = flags
self.installation_targets = installation_targets
self.last_updated = last_updated
self.long_description = long_description
self.present_in_conflict_list = present_in_conflict_list
self.published_date = published_date
self.publisher = publisher
self.release_date = release_date
self.shared_with = shared_with
self.short_description = short_description
self.statistics = statistics
self.tags = tags
self.versions = versions
class PublisherFacts(Model):
"""
    High-level information about the publisher, like IDs and names
:param display_name:
:type display_name: str
:param domain:
:type domain: str
:param flags:
:type flags: object
:param is_domain_verified:
:type is_domain_verified: bool
:param publisher_id:
:type publisher_id: str
:param publisher_name:
:type publisher_name: str
"""
_attribute_map = {
'display_name': {'key': 'displayName', 'type': 'str'},
'domain': {'key': 'domain', 'type': 'str'},
'flags': {'key': 'flags', 'type': 'object'},
'is_domain_verified': {'key': 'isDomainVerified', 'type': 'bool'},
'publisher_id': {'key': 'publisherId', 'type': 'str'},
'publisher_name': {'key': 'publisherName', 'type': 'str'}
}
def __init__(self, display_name=None, domain=None, flags=None, is_domain_verified=None, publisher_id=None, publisher_name=None):
super(PublisherFacts, self).__init__()
self.display_name = display_name
self.domain = domain
self.flags = flags
self.is_domain_verified = is_domain_verified
self.publisher_id = publisher_id
self.publisher_name = publisher_name
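# Example (sketch): composing a PublishedExtension by hand with placeholder
# values. In practice these objects are produced by the msrest Deserializer
# from gallery REST responses rather than constructed manually.
example_publisher = PublisherFacts(display_name='Contoso', publisher_name='contoso')
example_extension = PublishedExtension(
    extension_name='sample-extension',  # placeholder name
    publisher=example_publisher,
    versions=[ExtensionVersion(version='1.0.0')],
)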
class ReferenceLinks(Model):
"""
The class to represent a collection of REST reference links.
    :param links: The read-only view of the links. Because reference links are read-only, we only expose them as read-only.
:type links: dict
"""
_attribute_map = {
'links': {'key': 'links', 'type': '{object}'}
}
def __init__(self, links=None):
super(ReferenceLinks, self).__init__()
self.links = links
class RequestedExtension(Model):
"""
A request for an extension (to be installed or have a license assigned)
:param extension_name: The unique name of the extension
:type extension_name: str
:param extension_requests: A list of each request for the extension
:type extension_requests: list of :class:`ExtensionRequest <azure.devops.v7_1.extension_management.models.ExtensionRequest>`
:param publisher_display_name: DisplayName of the publisher that owns the extension being published.
:type publisher_display_name: str
:param publisher_name: Represents the Publisher of the requested extension
:type publisher_name: str
:param request_count: The total number of requests for an extension
:type request_count: int
"""
_attribute_map = {
'extension_name': {'key': 'extensionName', 'type': 'str'},
'extension_requests': {'key': 'extensionRequests', 'type': '[ExtensionRequest]'},
'publisher_display_name': {'key': 'publisherDisplayName', 'type': 'str'},
'publisher_name': {'key': 'publisherName', 'type': 'str'},
'request_count': {'key': 'requestCount', 'type': 'int'}
}
def __init__(self, extension_name=None, extension_requests=None, publisher_display_name=None, publisher_name=None, request_count=None):
super(RequestedExtension, self).__init__()
self.extension_name = extension_name
self.extension_requests = extension_requests
self.publisher_display_name = publisher_display_name
self.publisher_name = publisher_name
self.request_count = request_count
class UserExtensionPolicy(Model):
"""
Represents the extension policy applied to a given user
:param display_name: User display name that this policy refers to
:type display_name: str
:param permissions: The extension policy applied to the user
:type permissions: :class:`ExtensionPolicy <azure.devops.v7_1.microsoft._visual_studio._services._gallery._web_api.models.ExtensionPolicy>`
:param user_id: User id that this policy refers to
:type user_id: str
"""
_attribute_map = {
'display_name': {'key': 'displayName', 'type': 'str'},
'permissions': {'key': 'permissions', 'type': 'ExtensionPolicy'},
'user_id': {'key': 'userId', 'type': 'str'}
}
def __init__(self, display_name=None, permissions=None, user_id=None):
super(UserExtensionPolicy, self).__init__()
self.display_name = display_name
self.permissions = permissions
self.user_id = user_id
class Contribution(ContributionBase):
"""
An individual contribution made by an extension
:param description: Description of the contribution/type
:type description: str
:param id: Fully qualified identifier of the contribution/type
:type id: str
    :param visible_to: VisibleTo can be used to restrict who can reference a given contribution/type. This value should be a list of publishers or extensions that access is restricted to. Examples: "ms" - Means only the "ms" publisher can reference this. "ms.vss-web" - Means only the "vss-web" extension from the "ms" publisher can reference this.
:type visible_to: list of str
:param constraints: List of constraints (filters) that should be applied to the availability of this contribution
:type constraints: list of :class:`ContributionConstraint <azure.devops.v7_1.extension_management.models.ContributionConstraint>`
:param includes: Includes is a set of contributions that should have this contribution included in their targets list.
:type includes: list of str
:param properties: Properties/attributes of this contribution
:type properties: :class:`object <azure.devops.v7_1.extension_management.models.object>`
:param restricted_to: List of demanded claims in order for the user to see this contribution (like anonymous, public, member...).
:type restricted_to: list of str
:param targets: The ids of the contribution(s) that this contribution targets. (parent contributions)
:type targets: list of str
:param type: Id of the Contribution Type
:type type: str
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'visible_to': {'key': 'visibleTo', 'type': '[str]'},
'constraints': {'key': 'constraints', 'type': '[ContributionConstraint]'},
'includes': {'key': 'includes', 'type': '[str]'},
'properties': {'key': 'properties', 'type': 'object'},
'restricted_to': {'key': 'restrictedTo', 'type': '[str]'},
'targets': {'key': 'targets', 'type': '[str]'},
'type': {'key': 'type', 'type': 'str'}
}
def __init__(self, description=None, id=None, visible_to=None, constraints=None, includes=None, properties=None, restricted_to=None, targets=None, type=None):
super(Contribution, self).__init__(description=description, id=id, visible_to=visible_to)
self.constraints = constraints
self.includes = includes
self.properties = properties
self.restricted_to = restricted_to
self.targets = targets
self.type = type
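# Example (sketch): a hand-built hub contribution. The type and target ids
# follow common Azure DevOps extensibility conventions but are illustrative
# here, not taken from this file.
example_contribution = Contribution(
    id='my-publisher.my-extension.my-hub',       # hypothetical contribution id
    type='ms.vss-web.hub',                       # common hub contribution type
    targets=['ms.vss-work-web.work-hub-group'],  # hypothetical parent target
    properties={'name': 'My Hub', 'uri': 'index.html'},
)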
class ExtensionState(InstalledExtensionState):
"""
The state of an extension
:param flags: States of an installed extension
:type flags: object
:param installation_issues: List of installation issues
:type installation_issues: list of :class:`InstalledExtensionStateIssue <azure.devops.v7_1.extension_management.models.InstalledExtensionStateIssue>`
:param last_updated: The time at which this installation was last updated
:type last_updated: datetime
:param extension_name:
:type extension_name: str
:param last_version_check: The time at which the version was last checked
:type last_version_check: datetime
:param publisher_name:
:type publisher_name: str
:param version:
:type version: str
"""
_attribute_map = {
'flags': {'key': 'flags', 'type': 'object'},
'installation_issues': {'key': 'installationIssues', 'type': '[InstalledExtensionStateIssue]'},
'last_updated': {'key': 'lastUpdated', 'type': 'iso-8601'},
'extension_name': {'key': 'extensionName', 'type': 'str'},
'last_version_check': {'key': 'lastVersionCheck', 'type': 'iso-8601'},
'publisher_name': {'key': 'publisherName', 'type': 'str'},
'version': {'key': 'version', 'type': 'str'}
}
def __init__(self, flags=None, installation_issues=None, last_updated=None, extension_name=None, last_version_check=None, publisher_name=None, version=None):
super(ExtensionState, self).__init__(flags=flags, installation_issues=installation_issues, last_updated=last_updated)
self.extension_name = extension_name
self.last_version_check = last_version_check
self.publisher_name = publisher_name
self.version = version
__all__ = [
'AcquisitionOperation',
'AcquisitionOperationDisallowReason',
'AcquisitionOptions',
'ContributionBase',
'ContributionConstraint',
'ContributionPropertyDescription',
'ContributionType',
'ExtensionAcquisitionRequest',
'ExtensionAuditLog',
'ExtensionAuditLogEntry',
'ExtensionAuthorization',
'ExtensionBadge',
'ExtensionDataCollection',
'ExtensionDataCollectionQuery',
'ExtensionEventCallback',
'ExtensionEventCallbackCollection',
'ExtensionFile',
'ExtensionIdentifier',
'ExtensionLicensing',
'ExtensionManifest',
'ExtensionPolicy',
'ExtensionRequest',
'ExtensionShare',
'ExtensionStatistic',
'ExtensionVersion',
'GraphSubjectBase',
'IdentityRef',
'InstallationTarget',
'InstalledExtension',
'InstalledExtensionQuery',
'InstalledExtensionState',
'InstalledExtensionStateIssue',
'LicensingOverride',
'PublishedExtension',
'PublisherFacts',
'ReferenceLinks',
'RequestedExtension',
'UserExtensionPolicy',
'Contribution',
'ExtensionState',
]
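# Example (sketch): round-tripping one of these models through msrest's
# Serializer/Deserializer, the same mechanism the generated clients use with
# the `_attribute_map` metadata declared on each class above.
from msrest.serialization import Serializer, Deserializer

_client_models = {k: v for k, v in globals().items() if isinstance(v, type)}
_serializer = Serializer(_client_models)
_deserializer = Deserializer(_client_models)
_wire = _serializer.body(PublisherFacts(display_name='Contoso'), 'PublisherFacts')
_roundtrip = _deserializer('PublisherFacts', _wire)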
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/extension_management/models.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from .models import *
from .git_client import GitClient
__all__ = [
'AdvSecEnablementStatus',
'AdvSecEnablementUpdate',
'Attachment',
'BillableCommitter',
'BillableCommitterDetail',
'Comment',
'CommentIterationContext',
'CommentPosition',
'CommentThread',
'CommentThreadContext',
'CommentTrackingCriteria',
'FileContentMetadata',
'FileDiff',
'FileDiffParams',
'FileDiffsCriteria',
'GitAnnotatedTag',
'GitAsyncRefOperation',
'GitAsyncRefOperationDetail',
'GitAsyncRefOperationParameters',
'GitAsyncRefOperationSource',
'GitBaseVersionDescriptor',
'GitBlobRef',
'GitBranchStats',
'GitCommit',
'GitCommitDiffs',
'GitCommitChanges',
'GitCommitRef',
'GitConflict',
'GitConflictUpdateResult',
'GitDeletedRepository',
'GitFilePathsCollection',
'GitForkOperationStatusDetail',
'GitForkRef',
'GitForkSyncRequest',
'GitForkSyncRequestParameters',
'GitCherryPick',
'GitImportGitSource',
'GitImportRequest',
'GitImportRequestParameters',
'GitImportStatusDetail',
'GitImportTfvcSource',
'GitItem',
'GitItemDescriptor',
'GitItemRequestData',
'GitMerge',
'GitMergeOperationStatusDetail',
'GitMergeOriginRef',
'GitMergeParameters',
'GitObject',
'GitPolicyConfigurationResponse',
'GitPullRequest',
'GitPullRequestCommentThread',
'GitPullRequestCommentThreadContext',
'GitPullRequestCompletionOptions',
'GitPullRequestChange',
'GitPullRequestIteration',
'GitPullRequestIterationChanges',
'GitPullRequestMergeOptions',
'GitPullRequestQuery',
'GitPullRequestQueryInput',
'GitPullRequestSearchCriteria',
'GitPullRequestStatus',
'GitPush',
'GitPushRef',
'GitPushSearchCriteria',
'GitQueryBranchStatsCriteria',
'GitQueryCommitsCriteria',
'GitRecycleBinRepositoryDetails',
'GitRef',
'GitRefFavorite',
'GitRefUpdate',
'GitRefUpdateResult',
'GitRepository',
'GitRepositoryCreateOptions',
'GitRepositoryRef',
'GitRepositoryStats',
'GitRevert',
'GitStatus',
'GitStatusContext',
'GitSuggestion',
'GitTargetVersionDescriptor',
'GitTemplate',
'GitTreeDiff',
'GitTreeDiffEntry',
'GitTreeDiffResponse',
'GitTreeEntryRef',
'GitTreeRef',
'GitUserDate',
'GitVersionDescriptor',
'GlobalGitRepositoryKey',
'GraphSubjectBase',
'Change',
'IdentityRef',
'IdentityRefWithVote',
'ImportRepositoryValidation',
'ItemContent',
'ItemModel',
'JsonPatchOperation',
'LineDiffBlock',
'PolicyConfiguration',
'PolicyConfigurationRef',
'PolicyTypeRef',
'ReferenceLinks',
'ResourceRef',
'ShareNotificationContext',
'SourceToTargetRef',
'TeamProjectCollectionReference',
'TeamProjectReference',
'VersionedPolicyConfigurationRef',
'VstsInfo',
'WebApiCreateTagRequestData',
'WebApiTagDefinition',
'GitClient'
]
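# Example (sketch): consumers normally reach GitClient through the top-level
# Connection object rather than importing it from this module directly. The
# organization URL, personal access token, and project name are placeholders.
def _example_list_repositories():
    from azure.devops.connection import Connection
    from msrest.authentication import BasicAuthentication
    connection = Connection(
        base_url='https://dev.azure.com/your-organization',
        creds=BasicAuthentication('', 'your-personal-access-token'),
    )
    git_client = connection.clients.get_git_client()  # factory for the package's default API version
    return git_client.get_repositories(project='YourProject')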
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/git/__init__.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from .models import *
from .member_entitlement_management_client import MemberEntitlementManagementClient
__all__ = [
'AadGraphMember',
'AccessLevel',
'BaseOperationResult',
'EntitlementBase',
'EntitlementOperationResultBase',
'Extension',
'ExtensionSummaryData',
'GraphGroup',
'GraphMember',
'GraphServicePrincipal',
'GraphSubject',
'GraphSubjectBase',
'GraphUser',
'Group',
'GroupEntitlement',
'GroupEntitlementOperationReference',
'GroupOperationResult',
'GroupOption',
'JsonPatchOperation',
'LicenseSummaryData',
'MemberEntitlement',
'MemberEntitlement2',
'MemberEntitlement2OperationReference',
'MemberEntitlement2OperationResult',
'MemberEntitlement2PatchResponse',
'MemberEntitlement2PostResponse',
'MemberEntitlement2ResponseBase',
'MemberEntitlementOperationReference',
'MemberEntitlementsPatchResponse',
'MemberEntitlementsPostResponse',
'MemberEntitlementsResponseBase',
'OperationReference',
'OperationResult',
'PagedGraphMemberList',
'PagedList',
'ProjectEntitlement',
'ProjectRef',
'ReferenceLinks',
'ServicePrincipalEntitlement',
'ServicePrincipalEntitlementOperationReference',
'ServicePrincipalEntitlementOperationResult',
'ServicePrincipalEntitlementsPatchResponse',
'ServicePrincipalEntitlementsPostResponse',
'ServicePrincipalEntitlementsResponseBase',
'SummaryData',
'TeamRef',
'UserEntitlement',
'UserEntitlementOperationReference',
'UserEntitlementOperationResult',
'UserEntitlementsPatchResponse',
'UserEntitlementsPostResponse',
'UserEntitlementsResponseBase',
'UsersSummary',
'MemberEntitlementManagementClient'
]
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/member_entitlement_management/__init__.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest.serialization import Model
class GraphSubjectBase(Model):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None):
super(GraphSubjectBase, self).__init__()
self._links = _links
self.descriptor = descriptor
self.display_name = display_name
self.url = url
class IdentityRef(GraphSubjectBase):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
:param directory_alias: Deprecated - Can be retrieved by querying the Graph user referenced in the "self" entry of the IdentityRef "_links" dictionary
:type directory_alias: str
:param id:
:type id: str
:param image_url: Deprecated - Available in the "avatar" entry of the IdentityRef "_links" dictionary
:type image_url: str
:param inactive: Deprecated - Can be retrieved by querying the Graph membership state referenced in the "membershipState" entry of the GraphUser "_links" dictionary
:type inactive: bool
:param is_aad_identity: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsAadUserType/Descriptor.IsAadGroupType)
:type is_aad_identity: bool
:param is_container: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsGroupType)
:type is_container: bool
:param is_deleted_in_origin:
:type is_deleted_in_origin: bool
:param profile_url: Deprecated - not in use in most preexisting implementations of ToIdentityRef
:type profile_url: str
:param unique_name: Deprecated - use Domain+PrincipalName instead
:type unique_name: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'directory_alias': {'key': 'directoryAlias', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'image_url': {'key': 'imageUrl', 'type': 'str'},
'inactive': {'key': 'inactive', 'type': 'bool'},
'is_aad_identity': {'key': 'isAadIdentity', 'type': 'bool'},
'is_container': {'key': 'isContainer', 'type': 'bool'},
'is_deleted_in_origin': {'key': 'isDeletedInOrigin', 'type': 'bool'},
'profile_url': {'key': 'profileUrl', 'type': 'str'},
'unique_name': {'key': 'uniqueName', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None, directory_alias=None, id=None, image_url=None, inactive=None, is_aad_identity=None, is_container=None, is_deleted_in_origin=None, profile_url=None, unique_name=None):
super(IdentityRef, self).__init__(_links=_links, descriptor=descriptor, display_name=display_name, url=url)
self.directory_alias = directory_alias
self.id = id
self.image_url = image_url
self.inactive = inactive
self.is_aad_identity = is_aad_identity
self.is_container = is_container
self.is_deleted_in_origin = is_deleted_in_origin
self.profile_url = profile_url
self.unique_name = unique_name
class Permission(Model):
"""
:param authorized:
:type authorized: bool
:param authorized_by:
:type authorized_by: :class:`IdentityRef <azure.devops.v7_1.pipeline_permissions.models.IdentityRef>`
:param authorized_on:
:type authorized_on: datetime
"""
_attribute_map = {
'authorized': {'key': 'authorized', 'type': 'bool'},
'authorized_by': {'key': 'authorizedBy', 'type': 'IdentityRef'},
'authorized_on': {'key': 'authorizedOn', 'type': 'iso-8601'}
}
def __init__(self, authorized=None, authorized_by=None, authorized_on=None):
super(Permission, self).__init__()
self.authorized = authorized
self.authorized_by = authorized_by
self.authorized_on = authorized_on
class PipelinePermission(Permission):
"""
:param authorized:
:type authorized: bool
:param authorized_by:
:type authorized_by: :class:`IdentityRef <azure.devops.v7_1.pipeline_permissions.models.IdentityRef>`
:param authorized_on:
:type authorized_on: datetime
:param id:
:type id: int
"""
_attribute_map = {
'authorized': {'key': 'authorized', 'type': 'bool'},
'authorized_by': {'key': 'authorizedBy', 'type': 'IdentityRef'},
'authorized_on': {'key': 'authorizedOn', 'type': 'iso-8601'},
'id': {'key': 'id', 'type': 'int'}
}
def __init__(self, authorized=None, authorized_by=None, authorized_on=None, id=None):
super(PipelinePermission, self).__init__(authorized=authorized, authorized_by=authorized_by, authorized_on=authorized_on)
self.id = id
class ReferenceLinks(Model):
"""
The class to represent a collection of REST reference links.
    :param links: The read-only view of the links. Because reference links are read-only, we only expose them as read-only.
:type links: dict
"""
_attribute_map = {
'links': {'key': 'links', 'type': '{object}'}
}
def __init__(self, links=None):
super(ReferenceLinks, self).__init__()
self.links = links
class Resource(Model):
"""
:param id: Id of the resource.
:type id: str
:param name: Name of the resource.
:type name: str
:param type: Type of the resource.
:type type: str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'}
}
def __init__(self, id=None, name=None, type=None):
super(Resource, self).__init__()
self.id = id
self.name = name
self.type = type
class ResourcePipelinePermissions(Model):
"""
:param all_pipelines:
:type all_pipelines: :class:`Permission <azure.devops.v7_1.pipeline_permissions.models.Permission>`
:param pipelines:
:type pipelines: list of :class:`PipelinePermission <azure.devops.v7_1.pipeline_permissions.models.PipelinePermission>`
:param resource:
:type resource: :class:`Resource <azure.devops.v7_1.pipeline_permissions.models.Resource>`
"""
_attribute_map = {
'all_pipelines': {'key': 'allPipelines', 'type': 'Permission'},
'pipelines': {'key': 'pipelines', 'type': '[PipelinePermission]'},
'resource': {'key': 'resource', 'type': 'Resource'}
}
def __init__(self, all_pipelines=None, pipelines=None, resource=None):
super(ResourcePipelinePermissions, self).__init__()
self.all_pipelines = all_pipelines
self.pipelines = pipelines
self.resource = resource
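# Example (sketch): a permissions payload authorizing pipeline 42 against a
# service-connection resource while leaving "all pipelines" unauthorized.
# The resource id/name are placeholders; 'endpoint' is a commonly used
# resource type for service connections.
example_permissions = ResourcePipelinePermissions(
    resource=Resource(id='00000000-0000-0000-0000-000000000000',
                      name='my-service-connection', type='endpoint'),
    all_pipelines=Permission(authorized=False),
    pipelines=[PipelinePermission(authorized=True, id=42)],
)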
__all__ = [
'GraphSubjectBase',
'IdentityRef',
'Permission',
'PipelinePermission',
'ReferenceLinks',
'Resource',
'ResourcePipelinePermissions',
]
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/pipeline_permissions/models.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest import Serializer, Deserializer
from ...client import Client
from . import models
class ProfileRegionsClient(Client):
"""ProfileRegions
:param str base_url: Service URL
:param Authentication creds: Authenticated credentials.
"""
def __init__(self, base_url=None, creds=None):
super(ProfileRegionsClient, self).__init__(base_url, creds)
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
self._serialize = Serializer(client_models)
self._deserialize = Deserializer(client_models)
resource_area_identifier = '8ccfef3d-2b87-4e99-8ccb-66e343d2daa8'
def get_geo_region(self, ip):
"""GetGeoRegion.
        [Preview API] Look up the country/region based on the provided IPv4 address; pass null to use the remote IPv4 address.
:param str ip:
:rtype: :class:`<GeoRegion> <azure.devops.v7_1.profile_regions.models.GeoRegion>`
"""
query_parameters = {}
if ip is not None:
query_parameters['ip'] = self._serialize.query('ip', ip, 'str')
response = self._send(http_method='GET',
location_id='35b3ff1d-ab4c-4d1c-98bb-f6ea21d86bd9',
version='7.1-preview.1',
query_parameters=query_parameters)
return self._deserialize('GeoRegion', response)
def get_regions(self):
"""GetRegions.
[Preview API]
:rtype: :class:`<ProfileRegions> <azure.devops.v7_1.profile_regions.models.ProfileRegions>`
"""
response = self._send(http_method='GET',
location_id='b129ca90-999d-47bb-ab37-0dcf784ee633',
version='7.1-preview.1')
return self._deserialize('ProfileRegions', response)
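# Example (sketch): exercising the two operations above. The vssps base URL and
# the personal access token are assumptions, not values taken from this file.
def _example_profile_regions():
    from msrest.authentication import BasicAuthentication
    creds = BasicAuthentication('', 'your-personal-access-token')  # placeholder PAT
    client = ProfileRegionsClient(base_url='https://app.vssps.visualstudio.com', creds=creds)
    geo = client.get_geo_region(ip='203.0.113.10')  # TEST-NET-3 documentation address
    return geo, client.get_regions()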
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/profile_regions/profile_regions_client.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from .models import *
from .tfvc_client import TfvcClient
__all__ = [
'AssociatedWorkItem',
'FileContentMetadata',
'GitRepository',
'GitRepositoryRef',
'GraphSubjectBase',
'Change',
'CheckinNote',
'IdentityRef',
'ItemContent',
'ItemModel',
'ReferenceLinks',
'TeamProjectCollectionReference',
'TeamProjectReference',
'TfvcBranch',
'TfvcBranchMapping',
'TfvcBranchRef',
'TfvcChange',
'TfvcChangeset',
'TfvcChangesetRef',
'TfvcChangesetSearchCriteria',
'TfvcChangesetsRequestData',
'TfvcItem',
'TfvcItemDescriptor',
'TfvcItemRequestData',
'TfvcLabel',
'TfvcLabelRef',
'TfvcLabelRequestData',
'TfvcMappingFilter',
'TfvcMergeSource',
'TfvcPolicyFailureInfo',
'TfvcPolicyOverrideInfo',
'TfvcShallowBranchRef',
'TfvcShelveset',
'TfvcShelvesetRef',
'TfvcShelvesetRequestData',
'TfvcStatistics',
'TfvcVersionDescriptor',
'VersionControlProjectInfo',
'VstsInfo',
'TfvcClient'
]
# --- end of azure-devops-python-api/azure-devops/azure/devops/v7_1/tfvc/__init__.py ---
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from msrest.serialization import Model
class Activity(Model):
"""
:param capacity_per_day:
:type capacity_per_day: float
:param name:
:type name: str
"""
_attribute_map = {
'capacity_per_day': {'key': 'capacityPerDay', 'type': 'float'},
'name': {'key': 'name', 'type': 'str'}
}
def __init__(self, capacity_per_day=None, name=None):
super(Activity, self).__init__()
self.capacity_per_day = capacity_per_day
self.name = name
class BacklogColumn(Model):
"""
:param column_field_reference:
:type column_field_reference: :class:`WorkItemFieldReference <azure.devops.v7_1.work.models.WorkItemFieldReference>`
:param width:
:type width: int
"""
_attribute_map = {
'column_field_reference': {'key': 'columnFieldReference', 'type': 'WorkItemFieldReference'},
'width': {'key': 'width', 'type': 'int'}
}
def __init__(self, column_field_reference=None, width=None):
super(BacklogColumn, self).__init__()
self.column_field_reference = column_field_reference
self.width = width
class BacklogConfiguration(Model):
"""
:param backlog_fields: Behavior/type field mapping
:type backlog_fields: :class:`BacklogFields <azure.devops.v7_1.work.models.BacklogFields>`
:param bugs_behavior: Bugs behavior
:type bugs_behavior: object
:param hidden_backlogs: Hidden Backlog
:type hidden_backlogs: list of str
:param is_bugs_behavior_configured: Is BugsBehavior Configured in the process
:type is_bugs_behavior_configured: bool
:param portfolio_backlogs: Portfolio backlog descriptors
:type portfolio_backlogs: list of :class:`BacklogLevelConfiguration <azure.devops.v7_1.work.models.BacklogLevelConfiguration>`
:param requirement_backlog: Requirement backlog
:type requirement_backlog: :class:`BacklogLevelConfiguration <azure.devops.v7_1.work.models.BacklogLevelConfiguration>`
:param task_backlog: Task backlog
:type task_backlog: :class:`BacklogLevelConfiguration <azure.devops.v7_1.work.models.BacklogLevelConfiguration>`
:param url:
:type url: str
:param work_item_type_mapped_states: Mapped states for work item types
:type work_item_type_mapped_states: list of :class:`WorkItemTypeStateInfo <azure.devops.v7_1.work.models.WorkItemTypeStateInfo>`
"""
_attribute_map = {
'backlog_fields': {'key': 'backlogFields', 'type': 'BacklogFields'},
'bugs_behavior': {'key': 'bugsBehavior', 'type': 'object'},
'hidden_backlogs': {'key': 'hiddenBacklogs', 'type': '[str]'},
'is_bugs_behavior_configured': {'key': 'isBugsBehaviorConfigured', 'type': 'bool'},
'portfolio_backlogs': {'key': 'portfolioBacklogs', 'type': '[BacklogLevelConfiguration]'},
'requirement_backlog': {'key': 'requirementBacklog', 'type': 'BacklogLevelConfiguration'},
'task_backlog': {'key': 'taskBacklog', 'type': 'BacklogLevelConfiguration'},
'url': {'key': 'url', 'type': 'str'},
'work_item_type_mapped_states': {'key': 'workItemTypeMappedStates', 'type': '[WorkItemTypeStateInfo]'}
}
def __init__(self, backlog_fields=None, bugs_behavior=None, hidden_backlogs=None, is_bugs_behavior_configured=None, portfolio_backlogs=None, requirement_backlog=None, task_backlog=None, url=None, work_item_type_mapped_states=None):
super(BacklogConfiguration, self).__init__()
self.backlog_fields = backlog_fields
self.bugs_behavior = bugs_behavior
self.hidden_backlogs = hidden_backlogs
self.is_bugs_behavior_configured = is_bugs_behavior_configured
self.portfolio_backlogs = portfolio_backlogs
self.requirement_backlog = requirement_backlog
self.task_backlog = task_backlog
self.url = url
self.work_item_type_mapped_states = work_item_type_mapped_states
class BacklogFields(Model):
"""
:param type_fields: Field Type (e.g. Order, Activity) to Field Reference Name map
:type type_fields: dict
"""
_attribute_map = {
'type_fields': {'key': 'typeFields', 'type': '{str}'}
}
def __init__(self, type_fields=None):
super(BacklogFields, self).__init__()
self.type_fields = type_fields
class BacklogLevel(Model):
"""
Contract representing a backlog level
:param category_reference_name: Reference name of the corresponding WIT category
:type category_reference_name: str
:param plural_name: Plural name for the backlog level
:type plural_name: str
    :param work_item_states: Collection of work item states that are included in the plan. The server will filter to only these work item states.
:type work_item_states: list of str
:param work_item_types: Collection of valid workitem type names for the given backlog level
:type work_item_types: list of str
"""
_attribute_map = {
'category_reference_name': {'key': 'categoryReferenceName', 'type': 'str'},
'plural_name': {'key': 'pluralName', 'type': 'str'},
'work_item_states': {'key': 'workItemStates', 'type': '[str]'},
'work_item_types': {'key': 'workItemTypes', 'type': '[str]'}
}
def __init__(self, category_reference_name=None, plural_name=None, work_item_states=None, work_item_types=None):
super(BacklogLevel, self).__init__()
self.category_reference_name = category_reference_name
self.plural_name = plural_name
self.work_item_states = work_item_states
self.work_item_types = work_item_types
class BacklogLevelConfiguration(Model):
"""
:param add_panel_fields: List of fields to include in Add Panel
:type add_panel_fields: list of :class:`WorkItemFieldReference <azure.devops.v7_1.work.models.WorkItemFieldReference>`
:param color: Color for the backlog level
:type color: str
:param column_fields: Default list of columns for the backlog
:type column_fields: list of :class:`BacklogColumn <azure.devops.v7_1.work.models.BacklogColumn>`
:param default_work_item_type: Default Work Item Type for the backlog
:type default_work_item_type: :class:`WorkItemTypeReference <azure.devops.v7_1.work.models.WorkItemTypeReference>`
    :param id: Backlog Id (for a legacy backlog level from process config it can be the category reference name)
:type id: str
:param is_hidden: Indicates whether the backlog level is hidden
:type is_hidden: bool
:param name: Backlog Name
:type name: str
:param rank: Backlog Rank (Taskbacklog is 0)
:type rank: int
:param type: The type of this backlog level
:type type: object
:param work_item_count_limit: Max number of work items to show in the given backlog
:type work_item_count_limit: int
    :param work_item_types: Work item types participating in this backlog as known by the project/process; this can be overridden by team settings for bugs
:type work_item_types: list of :class:`WorkItemTypeReference <azure.devops.v7_1.work.models.WorkItemTypeReference>`
"""
_attribute_map = {
'add_panel_fields': {'key': 'addPanelFields', 'type': '[WorkItemFieldReference]'},
'color': {'key': 'color', 'type': 'str'},
'column_fields': {'key': 'columnFields', 'type': '[BacklogColumn]'},
'default_work_item_type': {'key': 'defaultWorkItemType', 'type': 'WorkItemTypeReference'},
'id': {'key': 'id', 'type': 'str'},
'is_hidden': {'key': 'isHidden', 'type': 'bool'},
'name': {'key': 'name', 'type': 'str'},
'rank': {'key': 'rank', 'type': 'int'},
'type': {'key': 'type', 'type': 'object'},
'work_item_count_limit': {'key': 'workItemCountLimit', 'type': 'int'},
'work_item_types': {'key': 'workItemTypes', 'type': '[WorkItemTypeReference]'}
}
def __init__(self, add_panel_fields=None, color=None, column_fields=None, default_work_item_type=None, id=None, is_hidden=None, name=None, rank=None, type=None, work_item_count_limit=None, work_item_types=None):
super(BacklogLevelConfiguration, self).__init__()
self.add_panel_fields = add_panel_fields
self.color = color
self.column_fields = column_fields
self.default_work_item_type = default_work_item_type
self.id = id
self.is_hidden = is_hidden
self.name = name
self.rank = rank
self.type = type
self.work_item_count_limit = work_item_count_limit
self.work_item_types = work_item_types
class BacklogLevelWorkItems(Model):
"""
Represents work items in a backlog level
:param work_items: A list of work items within a backlog level
:type work_items: list of :class:`WorkItemLink <azure.devops.v7_1.work.models.WorkItemLink>`
"""
_attribute_map = {
'work_items': {'key': 'workItems', 'type': '[WorkItemLink]'}
}
def __init__(self, work_items=None):
super(BacklogLevelWorkItems, self).__init__()
self.work_items = work_items
class BoardBadge(Model):
"""
Represents a board badge.
:param board_id: The ID of the board represented by this badge.
:type board_id: str
:param image_url: A link to the SVG resource.
:type image_url: str
"""
_attribute_map = {
'board_id': {'key': 'boardId', 'type': 'str'},
'image_url': {'key': 'imageUrl', 'type': 'str'}
}
def __init__(self, board_id=None, image_url=None):
super(BoardBadge, self).__init__()
self.board_id = board_id
self.image_url = image_url
class BoardCardRuleSettings(Model):
"""
:param _links:
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param rules:
:type rules: dict
:param url:
:type url: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'rules': {'key': 'rules', 'type': '{[Rule]}'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, _links=None, rules=None, url=None):
super(BoardCardRuleSettings, self).__init__()
self._links = _links
self.rules = rules
self.url = url
class BoardCardSettings(Model):
"""
:param cards:
:type cards: dict
"""
_attribute_map = {
'cards': {'key': 'cards', 'type': '{[FieldSetting]}'}
}
def __init__(self, cards=None):
super(BoardCardSettings, self).__init__()
self.cards = cards
class BoardColumn(Model):
"""
:param column_type:
:type column_type: object
:param description:
:type description: str
:param id:
:type id: str
:param is_split:
:type is_split: bool
:param item_limit:
:type item_limit: int
:param name:
:type name: str
:param state_mappings:
:type state_mappings: dict
"""
_attribute_map = {
'column_type': {'key': 'columnType', 'type': 'object'},
'description': {'key': 'description', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'is_split': {'key': 'isSplit', 'type': 'bool'},
'item_limit': {'key': 'itemLimit', 'type': 'int'},
'name': {'key': 'name', 'type': 'str'},
'state_mappings': {'key': 'stateMappings', 'type': '{str}'}
}
def __init__(self, column_type=None, description=None, id=None, is_split=None, item_limit=None, name=None, state_mappings=None):
super(BoardColumn, self).__init__()
self.column_type = column_type
self.description = description
self.id = id
self.is_split = is_split
self.item_limit = item_limit
self.name = name
self.state_mappings = state_mappings
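# Example (sketch): an in-progress column with a WIP limit and a state mapping
# for a Task work item type. 'inProgress' follows the documented column-type
# values (incoming/inProgress/outgoing); the mapping itself is illustrative.
example_column = BoardColumn(
    name='Doing',
    column_type='inProgress',
    item_limit=5,
    is_split=False,
    state_mappings={'Task': 'In Progress'},
)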
class BoardFields(Model):
"""
:param column_field:
:type column_field: :class:`FieldReference <azure.devops.v7_1.work.models.FieldReference>`
:param done_field:
:type done_field: :class:`FieldReference <azure.devops.v7_1.work.models.FieldReference>`
:param row_field:
:type row_field: :class:`FieldReference <azure.devops.v7_1.work.models.FieldReference>`
"""
_attribute_map = {
'column_field': {'key': 'columnField', 'type': 'FieldReference'},
'done_field': {'key': 'doneField', 'type': 'FieldReference'},
'row_field': {'key': 'rowField', 'type': 'FieldReference'}
}
def __init__(self, column_field=None, done_field=None, row_field=None):
super(BoardFields, self).__init__()
self.column_field = column_field
self.done_field = done_field
self.row_field = row_field
class BoardChartReference(Model):
"""
:param name: Name of the resource
:type name: str
:param url: Full http link to the resource
:type url: str
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, name=None, url=None):
super(BoardChartReference, self).__init__()
self.name = name
self.url = url
class BoardReference(Model):
"""
:param id: Id of the resource
:type id: str
:param name: Name of the resource
:type name: str
:param url: Full http link to the resource
:type url: str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, id=None, name=None, url=None):
super(BoardReference, self).__init__()
self.id = id
self.name = name
self.url = url
class BoardRow(Model):
"""
:param color:
:type color: str
:param id:
:type id: str
:param name:
:type name: str
"""
_attribute_map = {
'color': {'key': 'color', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'}
}
def __init__(self, color=None, id=None, name=None):
super(BoardRow, self).__init__()
self.color = color
self.id = id
self.name = name
class BoardSuggestedValue(Model):
"""
:param name:
:type name: str
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'}
}
def __init__(self, name=None):
super(BoardSuggestedValue, self).__init__()
self.name = name
class BoardUserSettings(Model):
"""
:param auto_refresh_state:
:type auto_refresh_state: bool
"""
_attribute_map = {
'auto_refresh_state': {'key': 'autoRefreshState', 'type': 'bool'}
}
def __init__(self, auto_refresh_state=None):
super(BoardUserSettings, self).__init__()
self.auto_refresh_state = auto_refresh_state
class CapacityPatch(Model):
"""
Expected data from PATCH
:param activities:
:type activities: list of :class:`Activity <azure.devops.v7_1.work.models.Activity>`
:param days_off:
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
"""
_attribute_map = {
'activities': {'key': 'activities', 'type': '[Activity]'},
'days_off': {'key': 'daysOff', 'type': '[DateRange]'}
}
def __init__(self, activities=None, days_off=None):
super(CapacityPatch, self).__init__()
self.activities = activities
self.days_off = days_off
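# Minimal sketch of the PATCH payload (hypothetical values; assumes
# `from datetime import datetime` and the Activity model defined earlier in
# this module):
#     capacity_patch = CapacityPatch(
#         activities=[Activity(capacity_per_day=4.0, name='Development')],
#         days_off=[DateRange(start=datetime(2024, 7, 1),
#                             end=datetime(2024, 7, 5))])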
class CategoryConfiguration(Model):
"""
Details about a given backlog category
:param name: Name
:type name: str
:param reference_name: Category Reference Name
:type reference_name: str
:param work_item_types: Work item types for the backlog category
:type work_item_types: list of :class:`WorkItemTypeReference <azure.devops.v7_1.work.models.WorkItemTypeReference>`
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'reference_name': {'key': 'referenceName', 'type': 'str'},
'work_item_types': {'key': 'workItemTypes', 'type': '[WorkItemTypeReference]'}
}
def __init__(self, name=None, reference_name=None, work_item_types=None):
super(CategoryConfiguration, self).__init__()
self.name = name
self.reference_name = reference_name
self.work_item_types = work_item_types
class CreatePlan(Model):
"""
:param description: Description of the plan
:type description: str
:param name: Name of the plan to create.
:type name: str
:param properties: Plan properties.
:type properties: object
:param type: Type of plan to create.
:type type: object
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'properties': {'key': 'properties', 'type': 'object'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, description=None, name=None, properties=None, type=None):
super(CreatePlan, self).__init__()
self.description = description
self.name = name
self.properties = properties
self.type = type
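# Sketch only: `type` and `properties` are loosely typed here because their
# shape is server-defined. For a delivery timeline plan, the type string
# 'deliveryTimelineView' with a properties dict of team/backlog mappings is
# typical (contents elided, hypothetical name):
#     create = CreatePlan(name='Q3 Delivery', description='Cross-team view',
#                         type='deliveryTimelineView', properties={...})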
class DateRange(Model):
"""
:param end: End of the date range.
:type end: datetime
:param start: Start of the date range.
:type start: datetime
"""
_attribute_map = {
'end': {'key': 'end', 'type': 'iso-8601'},
'start': {'key': 'start', 'type': 'iso-8601'}
}
def __init__(self, end=None, start=None):
super(DateRange, self).__init__()
self.end = end
self.start = start
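# Serialization note: both fields are typed 'iso-8601' in `_attribute_map`,
# so Python datetime values round-trip as ISO 8601 strings on the wire.
# Sketch (hypothetical dates, assumes `from datetime import datetime`):
#     holidays = DateRange(start=datetime(2024, 12, 24),
#                          end=datetime(2024, 12, 26))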
class FieldReference(Model):
"""
An abstracted reference to a field
:param reference_name: fieldRefName for the field
:type reference_name: str
:param url: Full http link to more information about the field
:type url: str
"""
_attribute_map = {
'reference_name': {'key': 'referenceName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, reference_name=None, url=None):
super(FieldReference, self).__init__()
self.reference_name = reference_name
self.url = url
class FilterClause(Model):
"""
:param field_name:
:type field_name: str
:param index:
:type index: int
:param logical_operator:
:type logical_operator: str
:param operator:
:type operator: str
:param value:
:type value: str
"""
_attribute_map = {
'field_name': {'key': 'fieldName', 'type': 'str'},
'index': {'key': 'index', 'type': 'int'},
'logical_operator': {'key': 'logicalOperator', 'type': 'str'},
'operator': {'key': 'operator', 'type': 'str'},
'value': {'key': 'value', 'type': 'str'}
}
def __init__(self, field_name=None, index=None, logical_operator=None, operator=None, value=None):
super(FilterClause, self).__init__()
self.field_name = field_name
self.index = index
self.logical_operator = logical_operator
self.operator = operator
self.value = value
class GraphSubjectBase(Model):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None):
super(GraphSubjectBase, self).__init__()
self._links = _links
self.descriptor = descriptor
self.display_name = display_name
self.url = url
class IdentityRef(GraphSubjectBase):
"""
:param _links: This field contains zero or more interesting links about the graph subject. These links may be invoked to obtain additional relationships or more detailed information about this graph subject.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._visual_studio._services._web_api.models.ReferenceLinks>`
:param descriptor: The descriptor is the primary way to reference the graph subject while the system is running. This field will uniquely identify the same graph subject across both Accounts and Organizations.
:type descriptor: str
:param display_name: This is the non-unique display name of the graph subject. To change this field, you must alter its value in the source provider.
:type display_name: str
:param url: This url is the full route to the source resource of this graph subject.
:type url: str
:param directory_alias: Deprecated - Can be retrieved by querying the Graph user referenced in the "self" entry of the IdentityRef "_links" dictionary
:type directory_alias: str
:param id:
:type id: str
:param image_url: Deprecated - Available in the "avatar" entry of the IdentityRef "_links" dictionary
:type image_url: str
:param inactive: Deprecated - Can be retrieved by querying the Graph membership state referenced in the "membershipState" entry of the GraphUser "_links" dictionary
:type inactive: bool
:param is_aad_identity: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsAadUserType/Descriptor.IsAadGroupType)
:type is_aad_identity: bool
:param is_container: Deprecated - Can be inferred from the subject type of the descriptor (Descriptor.IsGroupType)
:type is_container: bool
:param is_deleted_in_origin:
:type is_deleted_in_origin: bool
:param profile_url: Deprecated - not in use in most preexisting implementations of ToIdentityRef
:type profile_url: str
:param unique_name: Deprecated - use Domain+PrincipalName instead
:type unique_name: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'descriptor': {'key': 'descriptor', 'type': 'str'},
'display_name': {'key': 'displayName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'directory_alias': {'key': 'directoryAlias', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'image_url': {'key': 'imageUrl', 'type': 'str'},
'inactive': {'key': 'inactive', 'type': 'bool'},
'is_aad_identity': {'key': 'isAadIdentity', 'type': 'bool'},
'is_container': {'key': 'isContainer', 'type': 'bool'},
'is_deleted_in_origin': {'key': 'isDeletedInOrigin', 'type': 'bool'},
'profile_url': {'key': 'profileUrl', 'type': 'str'},
'unique_name': {'key': 'uniqueName', 'type': 'str'}
}
def __init__(self, _links=None, descriptor=None, display_name=None, url=None, directory_alias=None, id=None, image_url=None, inactive=None, is_aad_identity=None, is_container=None, is_deleted_in_origin=None, profile_url=None, unique_name=None):
super(IdentityRef, self).__init__(_links=_links, descriptor=descriptor, display_name=display_name, url=url)
self.directory_alias = directory_alias
self.id = id
self.image_url = image_url
self.inactive = inactive
self.is_aad_identity = is_aad_identity
self.is_container = is_container
self.is_deleted_in_origin = is_deleted_in_origin
self.profile_url = profile_url
self.unique_name = unique_name
class ITaskboardColumnMapping(Model):
"""
:param state:
:type state: str
:param work_item_type:
:type work_item_type: str
"""
_attribute_map = {
'state': {'key': 'state', 'type': 'str'},
'work_item_type': {'key': 'workItemType', 'type': 'str'}
}
def __init__(self, state=None, work_item_type=None):
super(ITaskboardColumnMapping, self).__init__()
self.state = state
self.work_item_type = work_item_type
class IterationCapacity(Model):
"""
Capacity and teams for all teams in an iteration
:param teams:
:type teams: list of :class:`TeamCapacityTotals <azure.devops.v7_1.work.models.TeamCapacityTotals>`
:param total_iteration_capacity_per_day:
:type total_iteration_capacity_per_day: float
:param total_iteration_days_off:
:type total_iteration_days_off: int
"""
_attribute_map = {
'teams': {'key': 'teams', 'type': '[TeamCapacityTotals]'},
'total_iteration_capacity_per_day': {'key': 'totalIterationCapacityPerDay', 'type': 'float'},
'total_iteration_days_off': {'key': 'totalIterationDaysOff', 'type': 'int'}
}
def __init__(self, teams=None, total_iteration_capacity_per_day=None, total_iteration_days_off=None):
super(IterationCapacity, self).__init__()
self.teams = teams
self.total_iteration_capacity_per_day = total_iteration_capacity_per_day
self.total_iteration_days_off = total_iteration_days_off
class Link(Model):
"""
Link description.
:param attributes: Collection of link attributes.
:type attributes: dict
:param rel: Relation type.
:type rel: str
:param url: Link url.
:type url: str
"""
_attribute_map = {
'attributes': {'key': 'attributes', 'type': '{object}'},
'rel': {'key': 'rel', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, attributes=None, rel=None, url=None):
super(Link, self).__init__()
self.attributes = attributes
self.rel = rel
self.url = url
class Member(Model):
"""
:param display_name:
:type display_name: str
:param id:
:type id: str
:param image_url:
:type image_url: str
:param unique_name:
:type unique_name: str
:param url:
:type url: str
"""
_attribute_map = {
'display_name': {'key': 'displayName', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'image_url': {'key': 'imageUrl', 'type': 'str'},
'unique_name': {'key': 'uniqueName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, display_name=None, id=None, image_url=None, unique_name=None, url=None):
super(Member, self).__init__()
self.display_name = display_name
self.id = id
self.image_url = image_url
self.unique_name = unique_name
self.url = url
class ParentChildWIMap(Model):
"""
:param child_work_item_ids:
:type child_work_item_ids: list of int
:param id:
:type id: int
:param title:
:type title: str
:param work_item_type_name:
:type work_item_type_name: str
"""
_attribute_map = {
'child_work_item_ids': {'key': 'childWorkItemIds', 'type': '[int]'},
'id': {'key': 'id', 'type': 'int'},
'title': {'key': 'title', 'type': 'str'},
'work_item_type_name': {'key': 'workItemTypeName', 'type': 'str'}
}
def __init__(self, child_work_item_ids=None, id=None, title=None, work_item_type_name=None):
super(ParentChildWIMap, self).__init__()
self.child_work_item_ids = child_work_item_ids
self.id = id
self.title = title
self.work_item_type_name = work_item_type_name
class Plan(Model):
"""
Data contract for the plan definition
:param created_by_identity: Identity that created this plan. Defaults to null for records before upgrading to ScaledAgileViewComponent4.
:type created_by_identity: :class:`IdentityRef <azure.devops.v7_1.work.models.IdentityRef>`
:param created_date: Date when the plan was created
:type created_date: datetime
:param description: Description of the plan
:type description: str
:param id: Id of the plan
:type id: str
:param last_accessed: Date when the plan was last accessed. Default is null.
:type last_accessed: datetime
:param modified_by_identity: Identity that last modified this plan. Defaults to null for records before upgrading to ScaledAgileViewComponent4.
:type modified_by_identity: :class:`IdentityRef <azure.devops.v7_1.work.models.IdentityRef>`
:param modified_date: Date when the plan was last modified. Default to CreatedDate when the plan is first created.
:type modified_date: datetime
:param name: Name of the plan
:type name: str
:param properties: The PlanPropertyCollection instance associated with the plan. These are dependent on the type of the plan. For example, DeliveryTimelineView, it would be of type DeliveryViewPropertyCollection.
:type properties: object
:param revision: Revision of the plan. Used to safeguard users from overwriting each other's changes.
:type revision: int
:param type: Type of the plan
:type type: object
:param url: The resource url to locate the plan via rest api
:type url: str
:param user_permissions: Bit flag indicating set of permissions a user has to the plan.
:type user_permissions: object
"""
_attribute_map = {
'created_by_identity': {'key': 'createdByIdentity', 'type': 'IdentityRef'},
'created_date': {'key': 'createdDate', 'type': 'iso-8601'},
'description': {'key': 'description', 'type': 'str'},
'id': {'key': 'id', 'type': 'str'},
'last_accessed': {'key': 'lastAccessed', 'type': 'iso-8601'},
'modified_by_identity': {'key': 'modifiedByIdentity', 'type': 'IdentityRef'},
'modified_date': {'key': 'modifiedDate', 'type': 'iso-8601'},
'name': {'key': 'name', 'type': 'str'},
'properties': {'key': 'properties', 'type': 'object'},
'revision': {'key': 'revision', 'type': 'int'},
'type': {'key': 'type', 'type': 'object'},
'url': {'key': 'url', 'type': 'str'},
'user_permissions': {'key': 'userPermissions', 'type': 'object'}
}
def __init__(self, created_by_identity=None, created_date=None, description=None, id=None, last_accessed=None, modified_by_identity=None, modified_date=None, name=None, properties=None, revision=None, type=None, url=None, user_permissions=None):
super(Plan, self).__init__()
self.created_by_identity = created_by_identity
self.created_date = created_date
self.description = description
self.id = id
self.last_accessed = last_accessed
self.modified_by_identity = modified_by_identity
self.modified_date = modified_date
self.name = name
self.properties = properties
self.revision = revision
self.type = type
self.url = url
self.user_permissions = user_permissions
class PlanViewData(Model):
"""
Base class for plan view data contracts. Anything common goes here.
:param id:
:type id: str
:param revision:
:type revision: int
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'revision': {'key': 'revision', 'type': 'int'}
}
def __init__(self, id=None, revision=None):
super(PlanViewData, self).__init__()
self.id = id
self.revision = revision
class PredefinedQuery(Model):
"""
Represents a single pre-defined query.
:param has_more: Whether or not the query returned the complete set of data or if the data was truncated.
:type has_more: bool
:param id: Id of the query
:type id: str
:param name: Localized name of the query
:type name: str
:param results: The results of the query. This will be a set of WorkItem objects with only the 'id' set. The client is responsible for paging in the data as needed.
:type results: list of :class:`WorkItem <azure.devops.v7_1.work.models.WorkItem>`
:param url: REST API Url to use to retrieve results for this query
:type url: str
:param web_url: Url to use to display a page in the browser with the results of this query
:type web_url: str
"""
_attribute_map = {
'has_more': {'key': 'hasMore', 'type': 'bool'},
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'results': {'key': 'results', 'type': '[WorkItem]'},
'url': {'key': 'url', 'type': 'str'},
'web_url': {'key': 'webUrl', 'type': 'str'}
}
def __init__(self, has_more=None, id=None, name=None, results=None, url=None, web_url=None):
super(PredefinedQuery, self).__init__()
self.has_more = has_more
self.id = id
self.name = name
self.results = results
self.url = url
self.web_url = web_url
class ProcessConfiguration(Model):
"""
Process Configurations for the project
:param bug_work_items: Details about bug work items
:type bug_work_items: :class:`CategoryConfiguration <azure.devops.v7_1.work.models.CategoryConfiguration>`
:param portfolio_backlogs: Details about portfolio backlogs
:type portfolio_backlogs: list of :class:`CategoryConfiguration <azure.devops.v7_1.work.models.CategoryConfiguration>`
:param requirement_backlog: Details of requirement backlog
:type requirement_backlog: :class:`CategoryConfiguration <azure.devops.v7_1.work.models.CategoryConfiguration>`
:param task_backlog: Details of task backlog
:type task_backlog: :class:`CategoryConfiguration <azure.devops.v7_1.work.models.CategoryConfiguration>`
:param type_fields: Type fields for the process configuration
:type type_fields: dict
:param url:
:type url: str
"""
_attribute_map = {
'bug_work_items': {'key': 'bugWorkItems', 'type': 'CategoryConfiguration'},
'portfolio_backlogs': {'key': 'portfolioBacklogs', 'type': '[CategoryConfiguration]'},
'requirement_backlog': {'key': 'requirementBacklog', 'type': 'CategoryConfiguration'},
'task_backlog': {'key': 'taskBacklog', 'type': 'CategoryConfiguration'},
'type_fields': {'key': 'typeFields', 'type': '{WorkItemFieldReference}'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, bug_work_items=None, portfolio_backlogs=None, requirement_backlog=None, task_backlog=None, type_fields=None, url=None):
super(ProcessConfiguration, self).__init__()
self.bug_work_items = bug_work_items
self.portfolio_backlogs = portfolio_backlogs
self.requirement_backlog = requirement_backlog
self.task_backlog = task_backlog
self.type_fields = type_fields
self.url = url
class ReferenceLinks(Model):
"""
The class to represent a collection of REST reference links.
    :param links: The read-only view of the links. Because reference links are read-only, we only want to expose them as read-only.
:type links: dict
"""
_attribute_map = {
'links': {'key': 'links', 'type': '{object}'}
}
def __init__(self, links=None):
super(ReferenceLinks, self).__init__()
self.links = links
class ReorderOperation(Model):
"""
Represents a reorder request for one or more work items.
:param ids: IDs of the work items to be reordered. Must be valid WorkItem Ids.
:type ids: list of int
:param iteration_path: IterationPath for reorder operation. This is only used when we reorder from the Iteration Backlog
:type iteration_path: str
:param next_id: ID of the work item that should be after the reordered items. Can use 0 to specify the end of the list.
:type next_id: int
:param parent_id: Parent ID for all of the work items involved in this operation. Can use 0 to indicate the items don't have a parent.
:type parent_id: int
:param previous_id: ID of the work item that should be before the reordered items. Can use 0 to specify the beginning of the list.
:type previous_id: int
"""
_attribute_map = {
'ids': {'key': 'ids', 'type': '[int]'},
'iteration_path': {'key': 'iterationPath', 'type': 'str'},
'next_id': {'key': 'nextId', 'type': 'int'},
'parent_id': {'key': 'parentId', 'type': 'int'},
'previous_id': {'key': 'previousId', 'type': 'int'}
}
def __init__(self, ids=None, iteration_path=None, next_id=None, parent_id=None, previous_id=None):
super(ReorderOperation, self).__init__()
self.ids = ids
self.iteration_path = iteration_path
self.next_id = next_id
self.parent_id = parent_id
self.previous_id = previous_id
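# Sketch of the sentinel semantics documented above (hypothetical ids):
# previous_id=0 targets the beginning of the list, next_id=0 the end, and
# parent_id=0 marks the items as parentless. Moving two items to the top of
# a backlog, ahead of the current first item 205:
#     op = ReorderOperation(ids=[101, 102], parent_id=0,
#                           previous_id=0, next_id=205)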
class ReorderResult(Model):
"""
Represents a reorder result for a work item.
:param id: The ID of the work item that was reordered.
:type id: int
:param order: The updated order value of the work item that was reordered.
:type order: float
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'int'},
'order': {'key': 'order', 'type': 'float'}
}
def __init__(self, id=None, order=None):
super(ReorderResult, self).__init__()
self.id = id
self.order = order
class Rule(Model):
"""
:param clauses:
:type clauses: list of :class:`FilterClause <azure.devops.v7_1.work.models.FilterClause>`
:param filter:
:type filter: str
:param is_enabled:
:type is_enabled: str
:param name:
:type name: str
:param settings:
:type settings: dict
"""
_attribute_map = {
'clauses': {'key': 'clauses', 'type': '[FilterClause]'},
'filter': {'key': 'filter', 'type': 'str'},
'is_enabled': {'key': 'isEnabled', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'settings': {'key': 'settings', 'type': '{str}'}
}
def __init__(self, clauses=None, filter=None, is_enabled=None, name=None, settings=None):
super(Rule, self).__init__()
self.clauses = clauses
self.filter = filter
self.is_enabled = is_enabled
self.name = name
self.settings = settings
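# Sketch of a board style rule (hypothetical field, operator, and settings
# strings, which mirror the web UI's query operators and card styling keys):
# clauses combine via each clause's logical_operator in `index` order.
#     rule = Rule(name='Highlight blocked', is_enabled='true',
#                 clauses=[FilterClause(index=1, field_name='System.Tags',
#                                       operator='Contains', value='Blocked',
#                                       logical_operator='And')],
#                 settings={'background-color': '#de5e5e'})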
class TaskboardColumn(Model):
"""
    Represents the taskboard column
:param id: Column ID
:type id: str
:param mappings: Work item type states mapped to this column to support auto state update when column is updated.
:type mappings: list of :class:`ITaskboardColumnMapping <azure.devops.v7_1.work.models.ITaskboardColumnMapping>`
:param name: Column name
:type name: str
:param order: Column position relative to other columns in the same board
:type order: int
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'mappings': {'key': 'mappings', 'type': '[ITaskboardColumnMapping]'},
'name': {'key': 'name', 'type': 'str'},
'order': {'key': 'order', 'type': 'int'}
}
def __init__(self, id=None, mappings=None, name=None, order=None):
super(TaskboardColumn, self).__init__()
self.id = id
self.mappings = mappings
self.name = name
self.order = order
class TaskboardColumnMapping(Model):
"""
    Represents the state-to-column mapping per work item type. This allows auto state update when the column changes.
:param state: State of the work item type mapped to the column
:type state: str
    :param work_item_type: Work Item Type name whose state is mapped to the column
:type work_item_type: str
"""
_attribute_map = {
'state': {'key': 'state', 'type': 'str'},
'work_item_type': {'key': 'workItemType', 'type': 'str'}
}
def __init__(self, state=None, work_item_type=None):
super(TaskboardColumnMapping, self).__init__()
self.state = state
self.work_item_type = work_item_type
class TaskboardColumns(Model):
"""
:param columns:
:type columns: list of :class:`TaskboardColumn <azure.devops.v7_1.work.models.TaskboardColumn>`
    :param is_customized: Are the columns customized for this team
:type is_customized: bool
:param is_valid: Specifies if the referenced WIT and State is valid
:type is_valid: bool
:param validation_messsage: Details of validation failure if the state to column mapping is invalid
:type validation_messsage: str
"""
_attribute_map = {
'columns': {'key': 'columns', 'type': '[TaskboardColumn]'},
'is_customized': {'key': 'isCustomized', 'type': 'bool'},
'is_valid': {'key': 'isValid', 'type': 'bool'},
'validation_messsage': {'key': 'validationMesssage', 'type': 'str'}
}
def __init__(self, columns=None, is_customized=None, is_valid=None, validation_messsage=None):
super(TaskboardColumns, self).__init__()
self.columns = columns
self.is_customized = is_customized
self.is_valid = is_valid
self.validation_messsage = validation_messsage
class TaskboardWorkItemColumn(Model):
"""
Column value of a work item in the taskboard
:param column: Work item column value in the taskboard
:type column: str
:param column_id: Work item column id in the taskboard
:type column_id: str
:param state: Work Item state value
:type state: str
:param work_item_id: Work item id
:type work_item_id: int
"""
_attribute_map = {
'column': {'key': 'column', 'type': 'str'},
'column_id': {'key': 'columnId', 'type': 'str'},
'state': {'key': 'state', 'type': 'str'},
'work_item_id': {'key': 'workItemId', 'type': 'int'}
}
def __init__(self, column=None, column_id=None, state=None, work_item_id=None):
super(TaskboardWorkItemColumn, self).__init__()
self.column = column
self.column_id = column_id
self.state = state
self.work_item_id = work_item_id
class TeamCapacity(Model):
"""
Represents team member capacity with totals aggregated
:param team_members:
:type team_members: list of :class:`TeamMemberCapacityIdentityRef <azure.devops.v7_1.work.models.TeamMemberCapacityIdentityRef>`
:param total_capacity_per_day:
:type total_capacity_per_day: float
:param total_days_off:
:type total_days_off: int
"""
_attribute_map = {
'team_members': {'key': 'teamMembers', 'type': '[TeamMemberCapacityIdentityRef]'},
'total_capacity_per_day': {'key': 'totalCapacityPerDay', 'type': 'float'},
'total_days_off': {'key': 'totalDaysOff', 'type': 'int'}
}
def __init__(self, team_members=None, total_capacity_per_day=None, total_days_off=None):
super(TeamCapacity, self).__init__()
self.team_members = team_members
self.total_capacity_per_day = total_capacity_per_day
self.total_days_off = total_days_off
class TeamCapacityTotals(Model):
"""
Team information with total capacity and days off
:param team_capacity_per_day:
:type team_capacity_per_day: float
:param team_id:
:type team_id: str
:param team_total_days_off:
:type team_total_days_off: int
"""
_attribute_map = {
'team_capacity_per_day': {'key': 'teamCapacityPerDay', 'type': 'float'},
'team_id': {'key': 'teamId', 'type': 'str'},
'team_total_days_off': {'key': 'teamTotalDaysOff', 'type': 'int'}
}
def __init__(self, team_capacity_per_day=None, team_id=None, team_total_days_off=None):
super(TeamCapacityTotals, self).__init__()
self.team_capacity_per_day = team_capacity_per_day
self.team_id = team_id
self.team_total_days_off = team_total_days_off
class TeamContext(Model):
"""
The Team Context for an operation.
:param project: The team project Id or name. Ignored if ProjectId is set.
:type project: str
:param project_id: The Team Project ID. Required if Project is not set.
:type project_id: str
:param team: The Team Id or name. Ignored if TeamId is set.
:type team: str
:param team_id: The Team Id
:type team_id: str
"""
_attribute_map = {
'project': {'key': 'project', 'type': 'str'},
'project_id': {'key': 'projectId', 'type': 'str'},
'team': {'key': 'team', 'type': 'str'},
'team_id': {'key': 'teamId', 'type': 'str'}
}
def __init__(self, project=None, project_id=None, team=None, team_id=None):
super(TeamContext, self).__init__()
self.project = project
self.project_id = project_id
self.team = team
self.team_id = team_id
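# Usage sketch (hypothetical names): a TeamContext is the routing argument for
# team-scoped client calls; per the docstring, the *_id fields win over the
# name fields when both are supplied.
#     ctx = TeamContext(project='Fabrikam-Fiber', team='Quality Team')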
class TeamFieldValue(Model):
"""
Represents a single TeamFieldValue
:param include_children:
:type include_children: bool
:param value:
:type value: str
"""
_attribute_map = {
'include_children': {'key': 'includeChildren', 'type': 'bool'},
'value': {'key': 'value', 'type': 'str'}
}
def __init__(self, include_children=None, value=None):
super(TeamFieldValue, self).__init__()
self.include_children = include_children
self.value = value
class TeamFieldValuesPatch(Model):
"""
Expected data from PATCH
:param default_value:
:type default_value: str
:param values:
:type values: list of :class:`TeamFieldValue <azure.devops.v7_1.work.models.TeamFieldValue>`
"""
_attribute_map = {
'default_value': {'key': 'defaultValue', 'type': 'str'},
'values': {'key': 'values', 'type': '[TeamFieldValue]'}
}
def __init__(self, default_value=None, values=None):
super(TeamFieldValuesPatch, self).__init__()
self.default_value = default_value
self.values = values
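# Sketch of the PATCH body (hypothetical area paths), using TeamFieldValue
# defined above:
#     patch = TeamFieldValuesPatch(
#         default_value='Fabrikam-Fiber\\Web',
#         values=[TeamFieldValue(value='Fabrikam-Fiber\\Web',
#                                include_children=True)])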
class TeamIterationAttributes(Model):
"""
    :param finish_date: Finish date of the iteration. Date-only; the value is midnight in UTC and should be read without timezone adjustment.
:type finish_date: datetime
    :param start_date: Start date of the iteration. Date-only; the value is midnight in UTC and should be read without timezone adjustment.
:type start_date: datetime
:param time_frame: Time frame of the iteration, such as past, current or future.
:type time_frame: object
"""
_attribute_map = {
'finish_date': {'key': 'finishDate', 'type': 'iso-8601'},
'start_date': {'key': 'startDate', 'type': 'iso-8601'},
'time_frame': {'key': 'timeFrame', 'type': 'object'}
}
def __init__(self, finish_date=None, start_date=None, time_frame=None):
super(TeamIterationAttributes, self).__init__()
self.finish_date = finish_date
self.start_date = start_date
self.time_frame = time_frame
class TeamSettingsDataContractBase(Model):
"""
Base class for TeamSettings data contracts. Anything common goes here.
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, _links=None, url=None):
super(TeamSettingsDataContractBase, self).__init__()
self._links = _links
self.url = url
class TeamSettingsDaysOff(TeamSettingsDataContractBase):
"""
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param days_off:
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'days_off': {'key': 'daysOff', 'type': '[DateRange]'}
}
def __init__(self, _links=None, url=None, days_off=None):
super(TeamSettingsDaysOff, self).__init__(_links=_links, url=url)
self.days_off = days_off
class TeamSettingsDaysOffPatch(Model):
"""
:param days_off:
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
"""
_attribute_map = {
'days_off': {'key': 'daysOff', 'type': '[DateRange]'}
}
def __init__(self, days_off=None):
super(TeamSettingsDaysOffPatch, self).__init__()
self.days_off = days_off
class TeamSettingsIteration(TeamSettingsDataContractBase):
"""
Represents a shallow ref for a single iteration.
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param attributes: Attributes of the iteration such as start and end date.
:type attributes: :class:`TeamIterationAttributes <azure.devops.v7_1.work.models.TeamIterationAttributes>`
:param id: Id of the iteration.
:type id: str
:param name: Name of the iteration.
:type name: str
:param path: Relative path of the iteration.
:type path: str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'attributes': {'key': 'attributes', 'type': 'TeamIterationAttributes'},
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'path': {'key': 'path', 'type': 'str'}
}
def __init__(self, _links=None, url=None, attributes=None, id=None, name=None, path=None):
super(TeamSettingsIteration, self).__init__(_links=_links, url=url)
self.attributes = attributes
self.id = id
self.name = name
self.path = path
class TeamSettingsPatch(Model):
"""
    Data contract for what we expect to receive in a PATCH request
:param backlog_iteration:
:type backlog_iteration: str
:param backlog_visibilities:
:type backlog_visibilities: dict
:param bugs_behavior:
:type bugs_behavior: object
:param default_iteration:
:type default_iteration: str
:param default_iteration_macro:
:type default_iteration_macro: str
:param working_days:
:type working_days: list of str
"""
_attribute_map = {
'backlog_iteration': {'key': 'backlogIteration', 'type': 'str'},
'backlog_visibilities': {'key': 'backlogVisibilities', 'type': '{bool}'},
'bugs_behavior': {'key': 'bugsBehavior', 'type': 'object'},
'default_iteration': {'key': 'defaultIteration', 'type': 'str'},
'default_iteration_macro': {'key': 'defaultIterationMacro', 'type': 'str'},
'working_days': {'key': 'workingDays', 'type': '[object]'}
}
def __init__(self, backlog_iteration=None, backlog_visibilities=None, bugs_behavior=None, default_iteration=None, default_iteration_macro=None, working_days=None):
super(TeamSettingsPatch, self).__init__()
self.backlog_iteration = backlog_iteration
self.backlog_visibilities = backlog_visibilities
self.bugs_behavior = bugs_behavior
self.default_iteration = default_iteration
self.default_iteration_macro = default_iteration_macro
self.working_days = working_days
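# Sketch of a settings PATCH (hypothetical values): `backlog_visibilities`
# maps backlog category reference names to booleans, and `bugs_behavior`
# takes one of the values listed on TeamSetting below (Off, AsTasks,
# AsRequirements):
#     patch = TeamSettingsPatch(
#         bugs_behavior='asRequirements',
#         working_days=['monday', 'tuesday', 'wednesday', 'thursday'],
#         backlog_visibilities={'Microsoft.EpicCategory': False})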
class TimelineCriteriaStatus(Model):
"""
:param message:
:type message: str
:param type:
:type type: object
"""
_attribute_map = {
'message': {'key': 'message', 'type': 'str'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, message=None, type=None):
super(TimelineCriteriaStatus, self).__init__()
self.message = message
self.type = type
class TimelineIterationStatus(Model):
"""
:param message:
:type message: str
:param type:
:type type: object
"""
_attribute_map = {
'message': {'key': 'message', 'type': 'str'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, message=None, type=None):
super(TimelineIterationStatus, self).__init__()
self.message = message
self.type = type
class TimelineTeamData(Model):
"""
:param backlog: Backlog matching the mapped backlog associated with this team.
:type backlog: :class:`BacklogLevel <azure.devops.v7_1.work.models.BacklogLevel>`
:param field_reference_names: The field reference names of the work item data
:type field_reference_names: list of str
:param id: The id of the team
:type id: str
    :param is_expanded: Was iteration and work item data retrieved for this team. Teams with IsExpanded false have not had their iteration, work item, and field related data queried and will never contain this data. If true, these items are queried and, if there are items in the queried range, there will be data.
:type is_expanded: bool
:param iterations: The iteration data, including the work items, in the queried date range.
:type iterations: list of :class:`TimelineTeamIteration <azure.devops.v7_1.work.models.TimelineTeamIteration>`
:param name: The name of the team
:type name: str
:param order_by_field: The order by field name of this team
:type order_by_field: str
:param partially_paged_field_reference_names: The field reference names of the partially paged work items, such as ID, WorkItemType
:type partially_paged_field_reference_names: list of str
:param partially_paged_work_items:
:type partially_paged_work_items: list of [object]
    :param project_id: The project id the team belongs to
:type project_id: str
:param rollup_work_item_types: Work item types for which we will collect roll up data on the client side
:type rollup_work_item_types: list of str
:param status: Status for this team.
:type status: :class:`TimelineTeamStatus <azure.devops.v7_1.work.models.TimelineTeamStatus>`
:param team_field_default_value: The team field default value
:type team_field_default_value: str
:param team_field_name: The team field name of this team
:type team_field_name: str
:param team_field_values: The team field values
:type team_field_values: list of :class:`TeamFieldValue <azure.devops.v7_1.work.models.TeamFieldValue>`
:param work_items: Work items associated with the team that are not under any of the team's iterations
:type work_items: list of [object]
:param work_item_type_colors: Colors for the work item types.
:type work_item_type_colors: list of :class:`WorkItemColor <azure.devops.v7_1.work.models.WorkItemColor>`
"""
_attribute_map = {
'backlog': {'key': 'backlog', 'type': 'BacklogLevel'},
'field_reference_names': {'key': 'fieldReferenceNames', 'type': '[str]'},
'id': {'key': 'id', 'type': 'str'},
'is_expanded': {'key': 'isExpanded', 'type': 'bool'},
'iterations': {'key': 'iterations', 'type': '[TimelineTeamIteration]'},
'name': {'key': 'name', 'type': 'str'},
'order_by_field': {'key': 'orderByField', 'type': 'str'},
'partially_paged_field_reference_names': {'key': 'partiallyPagedFieldReferenceNames', 'type': '[str]'},
'partially_paged_work_items': {'key': 'partiallyPagedWorkItems', 'type': '[[object]]'},
'project_id': {'key': 'projectId', 'type': 'str'},
'rollup_work_item_types': {'key': 'rollupWorkItemTypes', 'type': '[str]'},
'status': {'key': 'status', 'type': 'TimelineTeamStatus'},
'team_field_default_value': {'key': 'teamFieldDefaultValue', 'type': 'str'},
'team_field_name': {'key': 'teamFieldName', 'type': 'str'},
'team_field_values': {'key': 'teamFieldValues', 'type': '[TeamFieldValue]'},
'work_items': {'key': 'workItems', 'type': '[[object]]'},
'work_item_type_colors': {'key': 'workItemTypeColors', 'type': '[WorkItemColor]'}
}
def __init__(self, backlog=None, field_reference_names=None, id=None, is_expanded=None, iterations=None, name=None, order_by_field=None, partially_paged_field_reference_names=None, partially_paged_work_items=None, project_id=None, rollup_work_item_types=None, status=None, team_field_default_value=None, team_field_name=None, team_field_values=None, work_items=None, work_item_type_colors=None):
super(TimelineTeamData, self).__init__()
self.backlog = backlog
self.field_reference_names = field_reference_names
self.id = id
self.is_expanded = is_expanded
self.iterations = iterations
self.name = name
self.order_by_field = order_by_field
self.partially_paged_field_reference_names = partially_paged_field_reference_names
self.partially_paged_work_items = partially_paged_work_items
self.project_id = project_id
self.rollup_work_item_types = rollup_work_item_types
self.status = status
self.team_field_default_value = team_field_default_value
self.team_field_name = team_field_name
self.team_field_values = team_field_values
self.work_items = work_items
self.work_item_type_colors = work_item_type_colors
class TimelineTeamIteration(Model):
"""
:param css_node_id: The iteration CSS Node Id
:type css_node_id: str
:param finish_date: The end date of the iteration
:type finish_date: datetime
:param name: The iteration name
:type name: str
:param partially_paged_work_items: All the partially paged workitems in this iteration.
:type partially_paged_work_items: list of [object]
:param path: The iteration path
:type path: str
:param start_date: The start date of the iteration
:type start_date: datetime
:param status: The status of this iteration
:type status: :class:`TimelineIterationStatus <azure.devops.v7_1.work.models.TimelineIterationStatus>`
:param work_items: The work items that have been paged in this iteration
:type work_items: list of [object]
"""
_attribute_map = {
'css_node_id': {'key': 'cssNodeId', 'type': 'str'},
'finish_date': {'key': 'finishDate', 'type': 'iso-8601'},
'name': {'key': 'name', 'type': 'str'},
'partially_paged_work_items': {'key': 'partiallyPagedWorkItems', 'type': '[[object]]'},
'path': {'key': 'path', 'type': 'str'},
'start_date': {'key': 'startDate', 'type': 'iso-8601'},
'status': {'key': 'status', 'type': 'TimelineIterationStatus'},
'work_items': {'key': 'workItems', 'type': '[[object]]'}
}
def __init__(self, css_node_id=None, finish_date=None, name=None, partially_paged_work_items=None, path=None, start_date=None, status=None, work_items=None):
super(TimelineTeamIteration, self).__init__()
self.css_node_id = css_node_id
self.finish_date = finish_date
self.name = name
self.partially_paged_work_items = partially_paged_work_items
self.path = path
self.start_date = start_date
self.status = status
self.work_items = work_items
class TimelineTeamStatus(Model):
"""
:param message:
:type message: str
:param type:
:type type: object
"""
_attribute_map = {
'message': {'key': 'message', 'type': 'str'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, message=None, type=None):
super(TimelineTeamStatus, self).__init__()
self.message = message
self.type = type
class UpdatePlan(Model):
"""
:param description: Description of the plan
:type description: str
:param name: Name of the plan to create.
:type name: str
:param properties: Plan properties.
:type properties: object
:param revision: Revision of the plan that was updated - the value used here should match the one the server gave the client in the Plan.
:type revision: int
:param type: Type of the plan
:type type: object
"""
_attribute_map = {
'description': {'key': 'description', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'properties': {'key': 'properties', 'type': 'object'},
'revision': {'key': 'revision', 'type': 'int'},
'type': {'key': 'type', 'type': 'object'}
}
def __init__(self, description=None, name=None, properties=None, revision=None, type=None):
super(UpdatePlan, self).__init__()
self.description = description
self.name = name
self.properties = properties
self.revision = revision
self.type = type
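# Concurrency sketch: echo back the `revision` read from an existing Plan so
# the server can reject stale writes, as the docstring notes (`plan` here is
# a previously fetched Plan; other values are hypothetical):
#     update = UpdatePlan(name='Q3 Delivery', description='Updated scope',
#                         type=plan.type, properties=plan.properties,
#                         revision=plan.revision)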
class UpdateTaskboardColumn(Model):
"""
    :param id: Column ID; leave it null for a new column
:type id: str
:param mappings: Work item type states mapped to this column to support auto state update when column is updated.
:type mappings: list of :class:`TaskboardColumnMapping <azure.devops.v7_1.work.models.TaskboardColumnMapping>`
:param name: Column name is required
:type name: str
:param order: Column position relative to other columns in the same board
:type order: int
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'mappings': {'key': 'mappings', 'type': '[TaskboardColumnMapping]'},
'name': {'key': 'name', 'type': 'str'},
'order': {'key': 'order', 'type': 'int'}
}
def __init__(self, id=None, mappings=None, name=None, order=None):
super(UpdateTaskboardColumn, self).__init__()
self.id = id
self.mappings = mappings
self.name = name
self.order = order
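# Sketch (hypothetical values): a new column keeps id=None per the docstring;
# mappings use the TaskboardColumnMapping model defined earlier so moving a
# card also updates its state:
#     col = UpdateTaskboardColumn(
#         name='In Review', order=2,
#         mappings=[TaskboardColumnMapping(work_item_type='Task',
#                                          state='In Progress')])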
class UpdateTaskboardWorkItemColumn(Model):
"""
:param new_column:
:type new_column: str
"""
_attribute_map = {
'new_column': {'key': 'newColumn', 'type': 'str'}
}
def __init__(self, new_column=None):
super(UpdateTaskboardWorkItemColumn, self).__init__()
self.new_column = new_column
class WorkItemColor(Model):
"""
Work item color and icon.
:param icon:
:type icon: str
:param primary_color:
:type primary_color: str
:param work_item_type_name:
:type work_item_type_name: str
"""
_attribute_map = {
'icon': {'key': 'icon', 'type': 'str'},
'primary_color': {'key': 'primaryColor', 'type': 'str'},
'work_item_type_name': {'key': 'workItemTypeName', 'type': 'str'}
}
def __init__(self, icon=None, primary_color=None, work_item_type_name=None):
super(WorkItemColor, self).__init__()
self.icon = icon
self.primary_color = primary_color
self.work_item_type_name = work_item_type_name
class WorkItemFieldReference(Model):
"""
Reference to a field in a work item
:param name: The friendly name of the field.
:type name: str
:param reference_name: The reference name of the field.
:type reference_name: str
:param url: The REST URL of the resource.
:type url: str
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'reference_name': {'key': 'referenceName', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, name=None, reference_name=None, url=None):
super(WorkItemFieldReference, self).__init__()
self.name = name
self.reference_name = reference_name
self.url = url
class WorkItemLink(Model):
"""
A link between two work items.
:param rel: The type of link.
:type rel: str
:param source: The source work item.
:type source: :class:`WorkItemReference <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.WorkItemReference>`
:param target: The target work item.
:type target: :class:`WorkItemReference <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.WorkItemReference>`
"""
_attribute_map = {
'rel': {'key': 'rel', 'type': 'str'},
'source': {'key': 'source', 'type': 'WorkItemReference'},
'target': {'key': 'target', 'type': 'WorkItemReference'}
}
def __init__(self, rel=None, source=None, target=None):
super(WorkItemLink, self).__init__()
self.rel = rel
self.source = source
self.target = target
class WorkItemReference(Model):
"""
Contains reference to a work item.
:param id: Work item ID.
:type id: int
:param url: REST API URL of the resource
:type url: str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'int'},
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, id=None, url=None):
super(WorkItemReference, self).__init__()
self.id = id
self.url = url
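# Sketch (hypothetical ids): links are typed by `rel`; for example the
# parent-to-child hierarchy link type with shallow endpoint references:
#     link = WorkItemLink(rel='System.LinkTypes.Hierarchy-Forward',
#                         source=WorkItemReference(id=42),
#                         target=WorkItemReference(id=43))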
class WorkItemRelation(Link):
"""
:param attributes: Collection of link attributes.
:type attributes: dict
:param rel: Relation type.
:type rel: str
:param url: Link url.
:type url: str
"""
_attribute_map = {
'attributes': {'key': 'attributes', 'type': '{object}'},
'rel': {'key': 'rel', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
}
def __init__(self, attributes=None, rel=None, url=None):
super(WorkItemRelation, self).__init__(attributes=attributes, rel=rel, url=url)
class WorkItemTrackingResourceReference(Model):
"""
Base class for work item tracking resource references.
:param url:
:type url: str
"""
_attribute_map = {
'url': {'key': 'url', 'type': 'str'}
}
def __init__(self, url=None):
super(WorkItemTrackingResourceReference, self).__init__()
self.url = url
class WorkItemTypeReference(WorkItemTrackingResourceReference):
"""
Reference to a work item type.
:param url:
:type url: str
:param name: Name of the work item type.
:type name: str
"""
_attribute_map = {
'url': {'key': 'url', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'}
}
def __init__(self, url=None, name=None):
super(WorkItemTypeReference, self).__init__(url=url)
self.name = name
class WorkItemTypeStateInfo(Model):
"""
:param states: State name to state category map
:type states: dict
:param work_item_type_name: Work Item type name
:type work_item_type_name: str
"""
_attribute_map = {
'states': {'key': 'states', 'type': '{str}'},
'work_item_type_name': {'key': 'workItemTypeName', 'type': 'str'}
}
def __init__(self, states=None, work_item_type_name=None):
super(WorkItemTypeStateInfo, self).__init__()
self.states = states
self.work_item_type_name = work_item_type_name
class Board(BoardReference):
"""
:param id: Id of the resource
:type id: str
:param name: Name of the resource
:type name: str
:param url: Full http link to the resource
:type url: str
:param _links:
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param allowed_mappings:
:type allowed_mappings: dict
:param can_edit:
:type can_edit: bool
:param columns:
:type columns: list of :class:`BoardColumn <azure.devops.v7_1.work.models.BoardColumn>`
:param fields:
:type fields: :class:`BoardFields <azure.devops.v7_1.work.models.BoardFields>`
:param is_valid:
:type is_valid: bool
:param revision:
:type revision: int
:param rows:
:type rows: list of :class:`BoardRow <azure.devops.v7_1.work.models.BoardRow>`
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'allowed_mappings': {'key': 'allowedMappings', 'type': '{{[str]}}'},
'can_edit': {'key': 'canEdit', 'type': 'bool'},
'columns': {'key': 'columns', 'type': '[BoardColumn]'},
'fields': {'key': 'fields', 'type': 'BoardFields'},
'is_valid': {'key': 'isValid', 'type': 'bool'},
'revision': {'key': 'revision', 'type': 'int'},
'rows': {'key': 'rows', 'type': '[BoardRow]'}
}
def __init__(self, id=None, name=None, url=None, _links=None, allowed_mappings=None, can_edit=None, columns=None, fields=None, is_valid=None, revision=None, rows=None):
super(Board, self).__init__(id=id, name=name, url=url)
self._links = _links
self.allowed_mappings = allowed_mappings
self.can_edit = can_edit
self.columns = columns
self.fields = fields
self.is_valid = is_valid
self.revision = revision
self.rows = rows
class BoardChart(BoardChartReference):
"""
:param name: Name of the resource
:type name: str
:param url: Full http link to the resource
:type url: str
:param _links: The links for the resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param settings: The settings for the resource
:type settings: dict
"""
_attribute_map = {
'name': {'key': 'name', 'type': 'str'},
'url': {'key': 'url', 'type': 'str'},
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'settings': {'key': 'settings', 'type': '{object}'}
}
def __init__(self, name=None, url=None, _links=None, settings=None):
super(BoardChart, self).__init__(name=name, url=url)
self._links = _links
self.settings = settings
class CapacityContractBase(TeamSettingsDataContractBase):
"""
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param activities: Collection of capacities associated with the team member
:type activities: list of :class:`Activity <azure.devops.v7_1.work.models.Activity>`
:param days_off: The days off associated with the team member
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'activities': {'key': 'activities', 'type': '[Activity]'},
'days_off': {'key': 'daysOff', 'type': '[DateRange]'}
}
def __init__(self, _links=None, url=None, activities=None, days_off=None):
super(CapacityContractBase, self).__init__(_links=_links, url=url)
self.activities = activities
self.days_off = days_off
class DeliveryViewData(PlanViewData):
"""
Data contract for Data of Delivery View
:param id:
:type id: str
:param revision:
:type revision: int
:param criteria_status: Filter criteria status of the timeline
:type criteria_status: :class:`TimelineCriteriaStatus <azure.devops.v7_1.work.models.TimelineCriteriaStatus>`
:param end_date: The end date of the delivery view data
:type end_date: datetime
:param child_id_to_parent_id_map: Work item child id to parent id map
:type child_id_to_parent_id_map: dict
:param max_expanded_teams: Max number of teams that can be configured for a delivery plan
:type max_expanded_teams: int
:param parent_item_maps: Mapping between parent id, title and all the child work item ids
:type parent_item_maps: list of :class:`ParentChildWIMap <azure.devops.v7_1.work.models.ParentChildWIMap>`
:param start_date: The start date for the delivery view data
:type start_date: datetime
:param teams: All the team data
:type teams: list of :class:`TimelineTeamData <azure.devops.v7_1.work.models.TimelineTeamData>`
:param work_item_dependencies: List of all work item ids that have a dependency but not a violation
:type work_item_dependencies: list of int
:param work_item_violations: List of all work item ids that have a violation
:type work_item_violations: list of int
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'revision': {'key': 'revision', 'type': 'int'},
'criteria_status': {'key': 'criteriaStatus', 'type': 'TimelineCriteriaStatus'},
'end_date': {'key': 'endDate', 'type': 'iso-8601'},
'child_id_to_parent_id_map': {'key': 'childIdToParentIdMap', 'type': '{int}'},
'max_expanded_teams': {'key': 'maxExpandedTeams', 'type': 'int'},
'parent_item_maps': {'key': 'parentItemMaps', 'type': '[ParentChildWIMap]'},
'start_date': {'key': 'startDate', 'type': 'iso-8601'},
'teams': {'key': 'teams', 'type': '[TimelineTeamData]'},
'work_item_dependencies': {'key': 'workItemDependencies', 'type': '[int]'},
'work_item_violations': {'key': 'workItemViolations', 'type': '[int]'}
}
def __init__(self, id=None, revision=None, criteria_status=None, end_date=None, child_id_to_parent_id_map=None, max_expanded_teams=None, parent_item_maps=None, start_date=None, teams=None, work_item_dependencies=None, work_item_violations=None):
super(DeliveryViewData, self).__init__(id=id, revision=revision)
self.criteria_status = criteria_status
self.end_date = end_date
self.child_id_to_parent_id_map = child_id_to_parent_id_map
self.max_expanded_teams = max_expanded_teams
self.parent_item_maps = parent_item_maps
self.start_date = start_date
self.teams = teams
self.work_item_dependencies = work_item_dependencies
self.work_item_violations = work_item_violations
class IterationWorkItems(TeamSettingsDataContractBase):
"""
Represents work items in an iteration backlog
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param work_item_relations: Work item relations
:type work_item_relations: list of :class:`WorkItemLink <azure.devops.v7_1.work.models.WorkItemLink>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'work_item_relations': {'key': 'workItemRelations', 'type': '[WorkItemLink]'}
}
def __init__(self, _links=None, url=None, work_item_relations=None):
super(IterationWorkItems, self).__init__(_links=_links, url=url)
self.work_item_relations = work_item_relations
class TeamFieldValues(TeamSettingsDataContractBase):
"""
Essentially a collection of team field values
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param default_value: The default team field value
:type default_value: str
:param field: Shallow ref to the field being used as a team field
:type field: :class:`FieldReference <azure.devops.v7_1.work.models.FieldReference>`
:param values: Collection of all valid team field values
:type values: list of :class:`TeamFieldValue <azure.devops.v7_1.work.models.TeamFieldValue>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'default_value': {'key': 'defaultValue', 'type': 'str'},
'field': {'key': 'field', 'type': 'FieldReference'},
'values': {'key': 'values', 'type': '[TeamFieldValue]'}
}
def __init__(self, _links=None, url=None, default_value=None, field=None, values=None):
super(TeamFieldValues, self).__init__(_links=_links, url=url)
self.default_value = default_value
self.field = field
self.values = values
class TeamMemberCapacity(CapacityContractBase):
"""
Represents capacity for a specific team member
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param activities: Collection of capacities associated with the team member
:type activities: list of :class:`Activity <azure.devops.v7_1.work.models.Activity>`
:param days_off: The days off associated with the team member
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
:param team_member: Shallow Ref to the associated team member
:type team_member: :class:`Member <azure.devops.v7_1.work.models.Member>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'activities': {'key': 'activities', 'type': '[Activity]'},
'days_off': {'key': 'daysOff', 'type': '[DateRange]'},
'team_member': {'key': 'teamMember', 'type': 'Member'}
}
def __init__(self, _links=None, url=None, activities=None, days_off=None, team_member=None):
super(TeamMemberCapacity, self).__init__(_links=_links, url=url, activities=activities, days_off=days_off)
self.team_member = team_member
class TeamMemberCapacityIdentityRef(CapacityContractBase):
"""
Represents capacity for a specific team member
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param activities: Collection of capacities associated with the team member
:type activities: list of :class:`Activity <azure.devops.v7_1.work.models.Activity>`
:param days_off: The days off associated with the team member
:type days_off: list of :class:`DateRange <azure.devops.v7_1.work.models.DateRange>`
:param team_member: Identity ref of the associated team member
:type team_member: :class:`IdentityRef <azure.devops.v7_1.work.models.IdentityRef>`
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'activities': {'key': 'activities', 'type': '[Activity]'},
'days_off': {'key': 'daysOff', 'type': '[DateRange]'},
'team_member': {'key': 'teamMember', 'type': 'IdentityRef'}
}
def __init__(self, _links=None, url=None, activities=None, days_off=None, team_member=None):
super(TeamMemberCapacityIdentityRef, self).__init__(_links=_links, url=url, activities=activities, days_off=days_off)
self.team_member = team_member
class TeamSetting(TeamSettingsDataContractBase):
"""
Data contract for TeamSettings
:param _links: Collection of links relevant to resource
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.work.models.ReferenceLinks>`
:param url: Full http link to the resource
:type url: str
:param backlog_iteration: Backlog Iteration
:type backlog_iteration: :class:`TeamSettingsIteration <azure.devops.v7_1.work.models.TeamSettingsIteration>`
:param backlog_visibilities: Information about categories that are visible on the backlog.
:type backlog_visibilities: dict
:param bugs_behavior: BugsBehavior (Off, AsTasks, AsRequirements, ...)
:type bugs_behavior: object
:param default_iteration: Default Iteration, the iteration used when creating a new work item on the queries page.
:type default_iteration: :class:`TeamSettingsIteration <azure.devops.v7_1.work.models.TeamSettingsIteration>`
:param default_iteration_macro: Default Iteration macro (if any)
:type default_iteration_macro: str
:param working_days: Days that the team is working
:type working_days: list of str
"""
_attribute_map = {
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'url': {'key': 'url', 'type': 'str'},
'backlog_iteration': {'key': 'backlogIteration', 'type': 'TeamSettingsIteration'},
'backlog_visibilities': {'key': 'backlogVisibilities', 'type': '{bool}'},
'bugs_behavior': {'key': 'bugsBehavior', 'type': 'object'},
'default_iteration': {'key': 'defaultIteration', 'type': 'TeamSettingsIteration'},
'default_iteration_macro': {'key': 'defaultIterationMacro', 'type': 'str'},
'working_days': {'key': 'workingDays', 'type': '[str]'}
}
def __init__(self, _links=None, url=None, backlog_iteration=None, backlog_visibilities=None, bugs_behavior=None, default_iteration=None, default_iteration_macro=None, working_days=None):
super(TeamSetting, self).__init__(_links=_links, url=url)
self.backlog_iteration = backlog_iteration
self.backlog_visibilities = backlog_visibilities
self.bugs_behavior = bugs_behavior
self.default_iteration = default_iteration
self.default_iteration_macro = default_iteration_macro
self.working_days = working_days
class WorkItemCommentVersionRef(WorkItemTrackingResourceReference):
"""
Represents the reference to a specific version of a comment on a Work Item.
:param url:
:type url: str
:param comment_id: The id assigned to the comment.
:type comment_id: int
:param created_in_revision: [Internal] The work item revision where this comment was originally added.
:type created_in_revision: int
:param is_deleted: [Internal] Specifies whether comment was deleted.
:type is_deleted: bool
:param text: [Internal] The text of the comment.
:type text: str
:param version: The version number.
:type version: int
"""
_attribute_map = {
'url': {'key': 'url', 'type': 'str'},
'comment_id': {'key': 'commentId', 'type': 'int'},
'created_in_revision': {'key': 'createdInRevision', 'type': 'int'},
'is_deleted': {'key': 'isDeleted', 'type': 'bool'},
'text': {'key': 'text', 'type': 'str'},
'version': {'key': 'version', 'type': 'int'}
}
def __init__(self, url=None, comment_id=None, created_in_revision=None, is_deleted=None, text=None, version=None):
super(WorkItemCommentVersionRef, self).__init__(url=url)
self.comment_id = comment_id
self.created_in_revision = created_in_revision
self.is_deleted = is_deleted
self.text = text
self.version = version
class WorkItemTrackingResource(WorkItemTrackingResourceReference):
"""
Base class for WIT REST resources.
:param url:
:type url: str
:param _links: Link references to related REST resources.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.ReferenceLinks>`
"""
_attribute_map = {
'url': {'key': 'url', 'type': 'str'},
'_links': {'key': '_links', 'type': 'ReferenceLinks'}
}
def __init__(self, url=None, _links=None):
super(WorkItemTrackingResource, self).__init__(url=url)
self._links = _links
class WorkItem(WorkItemTrackingResource):
"""
Describes a work item.
:param url:
:type url: str
:param _links: Link references to related REST resources.
:type _links: :class:`ReferenceLinks <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.ReferenceLinks>`
:param comment_version_ref: Reference to a specific version of the comment added/edited/deleted in this revision.
:type comment_version_ref: :class:`WorkItemCommentVersionRef <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.WorkItemCommentVersionRef>`
:param fields: Map of field and values for the work item.
:type fields: dict
:param id: The work item ID.
:type id: int
:param relations: Relations of the work item.
:type relations: list of :class:`WorkItemRelation <azure.devops.v7_1.microsoft._team_foundation._work_item_tracking._web_api.models.WorkItemRelation>`
:param rev: Revision number of the work item.
:type rev: int
"""
_attribute_map = {
'url': {'key': 'url', 'type': 'str'},
'_links': {'key': '_links', 'type': 'ReferenceLinks'},
'comment_version_ref': {'key': 'commentVersionRef', 'type': 'WorkItemCommentVersionRef'},
'fields': {'key': 'fields', 'type': '{object}'},
'id': {'key': 'id', 'type': 'int'},
'relations': {'key': 'relations', 'type': '[WorkItemRelation]'},
'rev': {'key': 'rev', 'type': 'int'}
}
def __init__(self, url=None, _links=None, comment_version_ref=None, fields=None, id=None, relations=None, rev=None):
super(WorkItem, self).__init__(url=url, _links=_links)
self.comment_version_ref = comment_version_ref
self.fields = fields
self.id = id
self.relations = relations
self.rev = rev
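# Example (sketch, not part of the generated code): these contracts are plain
# keyword-argument models, so a request body can be built and serialized like
# this (the values below are hypothetical):
#   settings = TeamSetting(default_iteration_macro="@currentIteration",
#                          working_days=["monday", "tuesday"])
#   body = settings.serialize()  # dict using the camelCase RestAPI keys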
__all__ = [
'Activity',
'BacklogColumn',
'BacklogConfiguration',
'BacklogFields',
'BacklogLevel',
'BacklogLevelConfiguration',
'BacklogLevelWorkItems',
'BoardBadge',
'BoardCardRuleSettings',
'BoardCardSettings',
'BoardColumn',
'BoardFields',
'BoardChartReference',
'BoardReference',
'BoardRow',
'BoardSuggestedValue',
'BoardUserSettings',
'CapacityPatch',
'CategoryConfiguration',
'CreatePlan',
'DateRange',
'FieldReference',
'FilterClause',
'GraphSubjectBase',
'IdentityRef',
'ITaskboardColumnMapping',
'IterationCapacity',
'Link',
'Member',
'ParentChildWIMap',
'Plan',
'PlanViewData',
'PredefinedQuery',
'ProcessConfiguration',
'ReferenceLinks',
'ReorderOperation',
'ReorderResult',
'Rule',
'TaskboardColumn',
'TaskboardColumnMapping',
'TaskboardColumns',
'TaskboardWorkItemColumn',
'TeamCapacity',
'TeamCapacityTotals',
'TeamContext',
'TeamFieldValue',
'TeamFieldValuesPatch',
'TeamIterationAttributes',
'TeamSettingsDataContractBase',
'TeamSettingsDaysOff',
'TeamSettingsDaysOffPatch',
'TeamSettingsIteration',
'TeamSettingsPatch',
'TimelineCriteriaStatus',
'TimelineIterationStatus',
'TimelineTeamData',
'TimelineTeamIteration',
'TimelineTeamStatus',
'UpdatePlan',
'UpdateTaskboardColumn',
'UpdateTaskboardWorkItemColumn',
'WorkItemColor',
'WorkItemFieldReference',
'WorkItemLink',
'WorkItemReference',
'WorkItemRelation',
'WorkItemTrackingResourceReference',
'WorkItemTypeReference',
'WorkItemTypeStateInfo',
'Board',
'BoardChart',
'CapacityContractBase',
'DeliveryViewData',
'IterationWorkItems',
'TeamFieldValues',
'TeamMemberCapacity',
'TeamMemberCapacityIdentityRef',
'TeamSetting',
'WorkItemCommentVersionRef',
'WorkItemTrackingResource',
'WorkItem',
]
azure-devops-python-api/azure-devops/azure/devops/v7_1/work/models.py
[![Build Status](https://dev.azure.com/ms-quantum-public/Microsoft%20Quantum%20(public)/_apis/build/status/microsoft.qdk-python?branchName=main)](https://dev.azure.com/ms-quantum-public/Microsoft%20Quantum%20(public)/_build/latest?definitionId=32&branchName=main)
# Azure Quantum SDK
## Introduction
This repository contains the azure-quantum Python SDK.
Use the `azure-quantum` SDK to submit quantum jobs written in Q#, Qiskit, or Cirq to the Azure Quantum service:
- `azure-quantum` [](https://badge.fury.io/py/azure-quantum)
## Installation and getting started
To install the Azure Quantum package, run:
```bash
pip install azure-quantum
```
If you are using Qiskit, Cirq, or Q#, include the corresponding optional dependency in the install command:
```bash
pip install azure-quantum[qiskit]
pip install azure-quantum[cirq]
pip install azure-quantum[qsharp]
```
To get started, visit the following Quickstart guides:
- [Quickstart: Submit a circuit with Qiskit](https://learn.microsoft.com/azure/quantum/quickstart-microsoft-qiskit)
- [Quickstart: Submit a circuit with Cirq](https://learn.microsoft.com/azure/quantum/quickstart-microsoft-cirq)
- [Quickstart: Submit a circuit with a provider-specific format](https://learn.microsoft.com/azure/quantum/quickstart-microsoft-provider-format).
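As a minimal sketch of the workflow (this assumes an existing Azure Quantum workspace, and the values below are placeholders), you can connect to a workspace and list the targets it can submit jobs to:

```python
from azure.quantum import Workspace

# Placeholders: substitute your own subscription, resource group,
# workspace name, and region.
workspace = Workspace(
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    name="<workspace-name>",
    location="<location>",
)

# List the provider targets available to this workspace.
for target in workspace.get_targets():
    print(target.name)
```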
## Development
See [CONTRIBUTING](./CONTRIBUTING.md) for instructions on how to build and test.
## Contributing
For details on contributing to this repository, see the [contributing guide](https://github.com/microsoft/azure-quantum-python/blob/main/CONTRIBUTING.md).
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
### Note on Packages
While we encourage contributions in any part of the code, there are some exceptions to take into account.
- The package `azure.quantum._client` is autogenerated using the [Azure Quantum Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/quantum/data-plane). No manual changes to this code are accepted (because they will be lost next time we regenerate the client).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
azure-quantum-python/README.md
# --------------------------------------------------------------------------
#
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# --------------------------------------------------------------------------
# pylint: skip-file
# pyright: reportUnnecessaryTypeIgnoreComment=false
from base64 import b64decode, b64encode
import calendar
import datetime
import decimal
import email
from enum import Enum
import json
import logging
import re
import sys
import codecs
from typing import (
Dict,
Any,
cast,
Optional,
Union,
AnyStr,
IO,
Mapping,
Callable,
TypeVar,
MutableMapping,
Type,
List,
)
try:
from urllib import quote # type: ignore
except ImportError:
from urllib.parse import quote
import xml.etree.ElementTree as ET
import isodate # type: ignore
from azure.core.exceptions import DeserializationError, SerializationError
from azure.core.serialization import NULL as CoreNull
_BOM = codecs.BOM_UTF8.decode(encoding="utf-8")
ModelType = TypeVar("ModelType", bound="Model")
JSON = MutableMapping[str, Any]
class RawDeserializer:
# Accept "text" because we're open minded people...
JSON_REGEXP = re.compile(r"^(application|text)/([a-z+.]+\+)?json$")
# Name used in context
CONTEXT_NAME = "deserialized_data"
@classmethod
def deserialize_from_text(cls, data: Optional[Union[AnyStr, IO]], content_type: Optional[str] = None) -> Any:
"""Decode data according to content-type.
Accepts a stream of data as well, but it will be loaded at once into memory for now.
If no content-type, will return the string version (not bytes, not stream)
:param data: Input, could be bytes or stream (will be decoded with UTF8) or text
:type data: str or bytes or IO
:param str content_type: The content type.
"""
if hasattr(data, "read"):
# Assume a stream
data = cast(IO, data).read()
if isinstance(data, bytes):
data_as_str = data.decode(encoding="utf-8-sig")
else:
# Explain to mypy the correct type.
data_as_str = cast(str, data)
# Remove Byte Order Mark if present in string
data_as_str = data_as_str.lstrip(_BOM)
if content_type is None:
return data
if cls.JSON_REGEXP.match(content_type):
try:
return json.loads(data_as_str)
except ValueError as err:
raise DeserializationError("JSON is invalid: {}".format(err), err)
elif "xml" in (content_type or []):
try:
try:
if isinstance(data, unicode): # type: ignore
# If I'm Python 2.7 and unicode XML will scream if I try a "fromstring" on unicode string
data_as_str = data_as_str.encode(encoding="utf-8") # type: ignore
except NameError:
pass
return ET.fromstring(data_as_str) # nosec
except ET.ParseError as err:
# It might be because the server has an issue, and returned JSON with
# content-type XML....
# So let's try a JSON load, and if it's still broken
# let's flow the initial exception
def _json_attempt(data):
try:
return True, json.loads(data)
except ValueError:
return False, None  # Don't care about this one
success, json_result = _json_attempt(data)
if success:
return json_result
# If I'm here, it's not JSON, it's not XML, let's scream
# and raise the last context in this block (the XML exception)
# The function hack is because Py2.7 messes up with exception
# context otherwise.
_LOGGER.critical("Wasn't XML nor JSON, failing")
raise DeserializationError("XML is invalid") from err
raise DeserializationError("Cannot deserialize content-type: {}".format(content_type))
@classmethod
def deserialize_from_http_generics(cls, body_bytes: Optional[Union[AnyStr, IO]], headers: Mapping) -> Any:
"""Deserialize from HTTP response.
Use bytes and headers to NOT use any requests/aiohttp or whatever
specific implementation.
Headers will be tested for "content-type"
"""
# Try to use content-type from headers if available
content_type = None
if "content-type" in headers:
content_type = headers["content-type"].split(";")[0].strip().lower()
# Ouch, this server did not declare what it sent...
# Let's guess it's JSON...
# Also, since Autorest considered an empty body to be valid JSON,
# we need that test as well....
else:
content_type = "application/json"
if body_bytes:
return cls.deserialize_from_text(body_bytes, content_type)
return None
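# Example (sketch): RawDeserializer picks a decoder based on the declared content type.
#   RawDeserializer.deserialize_from_text('{"a": 1}', "application/json")  -> {'a': 1}
#   RawDeserializer.deserialize_from_text("<r/>", "application/xml")       -> an ET.Element with tag 'r'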
try:
basestring # type: ignore
unicode_str = unicode # type: ignore
except NameError:
basestring = str
unicode_str = str
_LOGGER = logging.getLogger(__name__)
try:
_long_type = long # type: ignore
except NameError:
_long_type = int
class UTC(datetime.tzinfo):
"""Time Zone info for handling UTC"""
def utcoffset(self, dt):
"""UTF offset for UTC is 0."""
return datetime.timedelta(0)
def tzname(self, dt):
"""Timestamp representation."""
return "Z"
def dst(self, dt):
"""No daylight saving for UTC."""
return datetime.timedelta(0)
try:
from datetime import timezone as _FixedOffset # type: ignore
except ImportError: # Python 2.7
class _FixedOffset(datetime.tzinfo): # type: ignore
"""Fixed offset in minutes east from UTC.
Copy/pasted from Python doc
:param datetime.timedelta offset: offset in timedelta format
"""
def __init__(self, offset):
self.__offset = offset
def utcoffset(self, dt):
return self.__offset
def tzname(self, dt):
return str(self.__offset.total_seconds() / 3600)
def __repr__(self):
return "<FixedOffset {}>".format(self.tzname(None))
def dst(self, dt):
return datetime.timedelta(0)
def __getinitargs__(self):
return (self.__offset,)
try:
from datetime import timezone
TZ_UTC = timezone.utc
except ImportError:
TZ_UTC = UTC() # type: ignore
_FLATTEN = re.compile(r"(?<!\\)\.")
def attribute_transformer(key, attr_desc, value):
"""A key transformer that returns the Python attribute.
:param str key: The attribute name
:param dict attr_desc: The attribute metadata
:param object value: The value
:returns: A key using attribute name
"""
return (key, value)
def full_restapi_key_transformer(key, attr_desc, value):
"""A key transformer that returns the full RestAPI key path.
:param str key: The attribute name
:param dict attr_desc: The attribute metadata
:param object value: The value
:returns: A list of keys using RestAPI syntax.
"""
keys = _FLATTEN.split(attr_desc["key"])
return ([_decode_attribute_map_key(k) for k in keys], value)
def last_restapi_key_transformer(key, attr_desc, value):
"""A key transformer that returns the last RestAPI key.
:param str key: The attribute name
:param dict attr_desc: The attribute metadata
:param object value: The value
:returns: The last RestAPI key.
"""
key, value = full_restapi_key_transformer(key, attr_desc, value)
return (key[-1], value)
def _create_xml_node(tag, prefix=None, ns=None):
"""Create a XML node."""
if prefix and ns:
ET.register_namespace(prefix, ns)
if ns:
return ET.Element("{" + ns + "}" + tag)
else:
return ET.Element(tag)
class Model(object):
"""Mixin for all client request body/response body models to support
serialization and deserialization.
"""
_subtype_map: Dict[str, Dict[str, Any]] = {}
_attribute_map: Dict[str, Dict[str, Any]] = {}
_validation: Dict[str, Dict[str, Any]] = {}
def __init__(self, **kwargs: Any) -> None:
self.additional_properties: Optional[Dict[str, Any]] = {}
for k in kwargs:
if k not in self._attribute_map:
_LOGGER.warning("%s is not a known attribute of class %s and will be ignored", k, self.__class__)
elif k in self._validation and self._validation[k].get("readonly", False):
_LOGGER.warning("Readonly attribute %s will be ignored in class %s", k, self.__class__)
else:
setattr(self, k, kwargs[k])
def __eq__(self, other: Any) -> bool:
"""Compare objects by comparing all attributes."""
if isinstance(other, self.__class__):
return self.__dict__ == other.__dict__
return False
def __ne__(self, other: Any) -> bool:
"""Compare objects by comparing all attributes."""
return not self.__eq__(other)
def __str__(self) -> str:
return str(self.__dict__)
@classmethod
def enable_additional_properties_sending(cls) -> None:
cls._attribute_map["additional_properties"] = {"key": "", "type": "{object}"}
@classmethod
def is_xml_model(cls) -> bool:
try:
cls._xml_map # type: ignore
except AttributeError:
return False
return True
@classmethod
def _create_xml_node(cls):
"""Create XML node."""
try:
xml_map = cls._xml_map # type: ignore
except AttributeError:
xml_map = {}
return _create_xml_node(xml_map.get("name", cls.__name__), xml_map.get("prefix", None), xml_map.get("ns", None))
def serialize(self, keep_readonly: bool = False, **kwargs: Any) -> JSON:
"""Return the JSON that would be sent to server from this model.
This is an alias to `as_dict(full_restapi_key_transformer, keep_readonly=False)`.
If you want XML serialization, you can pass the kwarg is_xml=True.
:param bool keep_readonly: If you want to serialize the readonly attributes
:returns: A dict JSON compatible object
:rtype: dict
"""
serializer = Serializer(self._infer_class_models())
return serializer._serialize(self, keep_readonly=keep_readonly, **kwargs) # type: ignore
def as_dict(
self,
keep_readonly: bool = True,
key_transformer: Callable[[str, Dict[str, Any], Any], Any] = attribute_transformer,
**kwargs: Any
) -> JSON:
"""Return a dict that can be serialized using json.dump.
Advanced usage might optionally use a callback as parameter:
.. code:: python
def my_key_transformer(key, attr_desc, value):
return key
Key is the attribute name used in Python. Attr_desc
is a dict of metadata. Currently contains 'type' with the
msrest type and 'key' with the RestAPI encoded key.
Value is the current value in this object.
The string returned will be used to serialize the key.
If the return type is a list, this is considered hierarchical
result dict.
See the three examples in this file:
- attribute_transformer
- full_restapi_key_transformer
- last_restapi_key_transformer
If you want XML serialization, you can pass the kwarg is_xml=True.
:param function key_transformer: A key transformer function.
:returns: A dict JSON compatible object
:rtype: dict
"""
serializer = Serializer(self._infer_class_models())
return serializer._serialize(self, key_transformer=key_transformer, keep_readonly=keep_readonly, **kwargs) # type: ignore
@classmethod
def _infer_class_models(cls):
try:
str_models = cls.__module__.rsplit(".", 1)[0]
models = sys.modules[str_models]
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
if cls.__name__ not in client_models:
raise ValueError("Not Autorest generated code")
except Exception:
# Assume it's not Autorest generated (tests?). Add ourselves as dependencies.
client_models = {cls.__name__: cls}
return client_models
@classmethod
def deserialize(cls: Type[ModelType], data: Any, content_type: Optional[str] = None) -> ModelType:
"""Parse a str using the RestAPI syntax and return a model.
:param str data: A str using RestAPI structure. JSON by default.
:param str content_type: JSON by default, set application/xml if XML.
:returns: An instance of this model
:raises: DeserializationError if something went wrong
"""
deserializer = Deserializer(cls._infer_class_models())
return deserializer(cls.__name__, data, content_type=content_type) # type: ignore
@classmethod
def from_dict(
cls: Type[ModelType],
data: Any,
key_extractors: Optional[Callable[[str, Dict[str, Any], Any], Any]] = None,
content_type: Optional[str] = None,
) -> ModelType:
"""Parse a dict using given key extractor return a model.
By default, considers the key extractors
(rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor
and last_rest_key_case_insensitive_extractor).
:param dict data: A dict using RestAPI structure
:param str content_type: JSON by default, set application/xml if XML.
:returns: An instance of this model
:raises: DeserializationError if something went wrong
"""
deserializer = Deserializer(cls._infer_class_models())
deserializer.key_extractors = ( # type: ignore
[ # type: ignore
attribute_key_case_insensitive_extractor,
rest_key_case_insensitive_extractor,
last_rest_key_case_insensitive_extractor,
]
if key_extractors is None
else key_extractors
)
return deserializer(cls.__name__, data, content_type=content_type) # type: ignore
@classmethod
def _flatten_subtype(cls, key, objects):
if "_subtype_map" not in cls.__dict__:
return {}
result = dict(cls._subtype_map[key])
for valuetype in cls._subtype_map[key].values():
result.update(objects[valuetype]._flatten_subtype(key, objects))
return result
@classmethod
def _classify(cls, response, objects):
"""Check the class _subtype_map for any child classes.
We want to ignore any inherited _subtype_maps.
Remove the polymorphic key from the initial data.
"""
for subtype_key in cls.__dict__.get("_subtype_map", {}).keys():
subtype_value = None
if not isinstance(response, ET.Element):
rest_api_response_key = cls._get_rest_key_parts(subtype_key)[-1]
subtype_value = response.pop(rest_api_response_key, None) or response.pop(subtype_key, None)
else:
subtype_value = xml_key_extractor(subtype_key, cls._attribute_map[subtype_key], response)
if subtype_value:
# Try to match base class. Can be class name only
# (bug to fix in Autorest to support x-ms-discriminator-name)
if cls.__name__ == subtype_value:
return cls
flatten_mapping_type = cls._flatten_subtype(subtype_key, objects)
try:
return objects[flatten_mapping_type[subtype_value]] # type: ignore
except KeyError:
_LOGGER.warning(
"Subtype value %s has no mapping, use base class %s.",
subtype_value,
cls.__name__,
)
break
else:
_LOGGER.warning("Discriminator %s is absent or null, use base class %s.", subtype_key, cls.__name__)
break
return cls
@classmethod
def _get_rest_key_parts(cls, attr_key):
"""Get the RestAPI key of this attr, split it and decode part
:param str attr_key: Attribute key must be in attribute_map.
:returns: A list of RestAPI part
:rtype: list
"""
rest_split_key = _FLATTEN.split(cls._attribute_map[attr_key]["key"])
return [_decode_attribute_map_key(key_part) for key_part in rest_split_key]
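# Example (sketch): any Model subclass supports JSON round-trips via these helpers,
# e.g. for a hypothetical generated class SomeModel:
#   model = SomeModel.deserialize('{"someKey": 42}')
#   model.serialize()  # -> {'someKey': 42}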
def _decode_attribute_map_key(key):
"""This decode a key in an _attribute_map to the actual key we want to look at
inside the received data.
:param str key: A key string from the generated code
"""
return key.replace("\\.", ".")
class Serializer(object):
"""Request object model serializer."""
basic_types = {str: "str", int: "int", bool: "bool", float: "float"}
_xml_basic_types_serializers = {"bool": lambda x: str(x).lower()}
days = {0: "Mon", 1: "Tue", 2: "Wed", 3: "Thu", 4: "Fri", 5: "Sat", 6: "Sun"}
months = {
1: "Jan",
2: "Feb",
3: "Mar",
4: "Apr",
5: "May",
6: "Jun",
7: "Jul",
8: "Aug",
9: "Sep",
10: "Oct",
11: "Nov",
12: "Dec",
}
validation = {
"min_length": lambda x, y: len(x) < y,
"max_length": lambda x, y: len(x) > y,
"minimum": lambda x, y: x < y,
"maximum": lambda x, y: x > y,
"minimum_ex": lambda x, y: x <= y,
"maximum_ex": lambda x, y: x >= y,
"min_items": lambda x, y: len(x) < y,
"max_items": lambda x, y: len(x) > y,
"pattern": lambda x, y: not re.match(y, x, re.UNICODE),
"unique": lambda x, y: len(x) != len(set(x)),
"multiple": lambda x, y: x % y != 0,
}
def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None):
self.serialize_type = {
"iso-8601": Serializer.serialize_iso,
"rfc-1123": Serializer.serialize_rfc,
"unix-time": Serializer.serialize_unix,
"duration": Serializer.serialize_duration,
"date": Serializer.serialize_date,
"time": Serializer.serialize_time,
"decimal": Serializer.serialize_decimal,
"long": Serializer.serialize_long,
"bytearray": Serializer.serialize_bytearray,
"base64": Serializer.serialize_base64,
"object": self.serialize_object,
"[]": self.serialize_iter,
"{}": self.serialize_dict,
}
self.dependencies: Dict[str, Type[ModelType]] = dict(classes) if classes else {}
self.key_transformer = full_restapi_key_transformer
self.client_side_validation = True
def _serialize(self, target_obj, data_type=None, **kwargs):
"""Serialize data into a string according to type.
:param target_obj: The data to be serialized.
:param str data_type: The type to be serialized from.
:rtype: str, dict
:raises: SerializationError if serialization fails.
"""
key_transformer = kwargs.get("key_transformer", self.key_transformer)
keep_readonly = kwargs.get("keep_readonly", False)
if target_obj is None:
return None
attr_name = None
class_name = target_obj.__class__.__name__
if data_type:
return self.serialize_data(target_obj, data_type, **kwargs)
if not hasattr(target_obj, "_attribute_map"):
data_type = type(target_obj).__name__
if data_type in self.basic_types.values():
return self.serialize_data(target_obj, data_type, **kwargs)
# Force "is_xml" kwargs if we detect a XML model
try:
is_xml_model_serialization = kwargs["is_xml"]
except KeyError:
is_xml_model_serialization = kwargs.setdefault("is_xml", target_obj.is_xml_model())
serialized = {}
if is_xml_model_serialization:
serialized = target_obj._create_xml_node()
try:
attributes = target_obj._attribute_map
for attr, attr_desc in attributes.items():
attr_name = attr
if not keep_readonly and target_obj._validation.get(attr_name, {}).get("readonly", False):
continue
if attr_name == "additional_properties" and attr_desc["key"] == "":
if target_obj.additional_properties is not None:
serialized.update(target_obj.additional_properties)
continue
try:
orig_attr = getattr(target_obj, attr)
if is_xml_model_serialization:
pass # Don't provide "transformer" for XML for now. Keep "orig_attr"
else: # JSON
keys, orig_attr = key_transformer(attr, attr_desc.copy(), orig_attr)
keys = keys if isinstance(keys, list) else [keys]
kwargs["serialization_ctxt"] = attr_desc
new_attr = self.serialize_data(orig_attr, attr_desc["type"], **kwargs)
if is_xml_model_serialization:
xml_desc = attr_desc.get("xml", {})
xml_name = xml_desc.get("name", attr_desc["key"])
xml_prefix = xml_desc.get("prefix", None)
xml_ns = xml_desc.get("ns", None)
if xml_desc.get("attr", False):
if xml_ns:
ET.register_namespace(xml_prefix, xml_ns)
xml_name = "{{{}}}{}".format(xml_ns, xml_name)
serialized.set(xml_name, new_attr) # type: ignore
continue
if xml_desc.get("text", False):
serialized.text = new_attr # type: ignore
continue
if isinstance(new_attr, list):
serialized.extend(new_attr) # type: ignore
elif isinstance(new_attr, ET.Element):
# If the nested XML element has no XML/Name, we MUST replace the tag with the local tag, but keep the namespaces.
if "name" not in getattr(orig_attr, "_xml_map", {}):
splitted_tag = new_attr.tag.split("}")
if len(splitted_tag) == 2: # Namespace
new_attr.tag = "}".join([splitted_tag[0], xml_name])
else:
new_attr.tag = xml_name
serialized.append(new_attr) # type: ignore
else: # That's a basic type
# Integrate namespace if necessary
local_node = _create_xml_node(xml_name, xml_prefix, xml_ns)
local_node.text = unicode_str(new_attr)
serialized.append(local_node) # type: ignore
else: # JSON
for k in reversed(keys): # type: ignore
new_attr = {k: new_attr}
_new_attr = new_attr
_serialized = serialized
for k in keys: # type: ignore
if k not in _serialized:
_serialized.update(_new_attr) # type: ignore
_new_attr = _new_attr[k] # type: ignore
_serialized = _serialized[k]
except ValueError as err:
if isinstance(err, SerializationError):
raise
except (AttributeError, KeyError, TypeError) as err:
msg = "Attribute {} in object {} cannot be serialized.\n{}".format(attr_name, class_name, str(target_obj))
raise SerializationError(msg) from err
else:
return serialized
def body(self, data, data_type, **kwargs):
"""Serialize data intended for a request body.
:param data: The data to be serialized.
:param str data_type: The type to be serialized from.
:rtype: dict
:raises: SerializationError if serialization fails.
:raises: ValueError if data is None
"""
# Just in case this is a dict
internal_data_type_str = data_type.strip("[]{}")
internal_data_type = self.dependencies.get(internal_data_type_str, None)
try:
is_xml_model_serialization = kwargs["is_xml"]
except KeyError:
if internal_data_type and issubclass(internal_data_type, Model):
is_xml_model_serialization = kwargs.setdefault("is_xml", internal_data_type.is_xml_model())
else:
is_xml_model_serialization = False
if internal_data_type and not isinstance(internal_data_type, Enum):
try:
deserializer = Deserializer(self.dependencies)
# Since this is for serialization, it's almost certain that the format is not JSON REST
# We're not able to deal with additional properties for now.
deserializer.additional_properties_detection = False
if is_xml_model_serialization:
deserializer.key_extractors = [ # type: ignore
attribute_key_case_insensitive_extractor,
]
else:
deserializer.key_extractors = [
rest_key_case_insensitive_extractor,
attribute_key_case_insensitive_extractor,
last_rest_key_case_insensitive_extractor,
]
data = deserializer._deserialize(data_type, data)
except DeserializationError as err:
raise SerializationError("Unable to build a model: " + str(err)) from err
return self._serialize(data, data_type, **kwargs)
def url(self, name, data, data_type, **kwargs):
"""Serialize data intended for a URL path.
:param data: The data to be serialized.
:param str data_type: The type to be serialized from.
:rtype: str
:raises: TypeError if serialization fails.
:raises: ValueError if data is None
"""
try:
output = self.serialize_data(data, data_type, **kwargs)
if data_type == "bool":
output = json.dumps(output)
if kwargs.get("skip_quote") is True:
output = str(output)
output = output.replace("{", quote("{")).replace("}", quote("}"))
else:
output = quote(str(output), safe="")
except SerializationError:
raise TypeError("{} must be type {}.".format(name, data_type))
else:
return output
def query(self, name, data, data_type, **kwargs):
"""Serialize data intended for a URL query.
:param data: The data to be serialized.
:param str data_type: The type to be serialized from.
:keyword bool skip_quote: Whether to skip quote the serialized result.
Defaults to False.
:rtype: str, list
:raises: TypeError if serialization fails.
:raises: ValueError if data is None
"""
try:
# Treat the list aside, since we don't want to encode the div separator
if data_type.startswith("["):
internal_data_type = data_type[1:-1]
do_quote = not kwargs.get("skip_quote", False)
return self.serialize_iter(data, internal_data_type, do_quote=do_quote, **kwargs)
# Not a list, regular serialization
output = self.serialize_data(data, data_type, **kwargs)
if data_type == "bool":
output = json.dumps(output)
if kwargs.get("skip_quote") is True:
output = str(output)
else:
output = quote(str(output), safe="")
except SerializationError:
raise TypeError("{} must be type {}.".format(name, data_type))
else:
return str(output)
def header(self, name, data, data_type, **kwargs):
"""Serialize data intended for a request header.
:param data: The data to be serialized.
:param str data_type: The type to be serialized from.
:rtype: str
:raises: TypeError if serialization fails.
:raises: ValueError if data is None
"""
try:
if data_type in ["[str]"]:
data = ["" if d is None else d for d in data]
output = self.serialize_data(data, data_type, **kwargs)
if data_type == "bool":
output = json.dumps(output)
except SerializationError:
raise TypeError("{} must be type {}.".format(name, data_type))
else:
return str(output)
def serialize_data(self, data, data_type, **kwargs):
"""Serialize generic data according to supplied data type.
:param data: The data to be serialized.
:param str data_type: The type to be serialized from.
:param bool required: Whether it's essential that the data not be
empty or None
:raises: AttributeError if required data is None.
:raises: ValueError if data is None
:raises: SerializationError if serialization fails.
"""
if data is None:
raise ValueError("No value for given attribute")
try:
if data is CoreNull:
return None
if data_type in self.basic_types.values():
return self.serialize_basic(data, data_type, **kwargs)
elif data_type in self.serialize_type:
return self.serialize_type[data_type](data, **kwargs)
# If dependencies is empty, try with current data class
# It has to be a subclass of Enum anyway
enum_type = self.dependencies.get(data_type, data.__class__)
if issubclass(enum_type, Enum):
return Serializer.serialize_enum(data, enum_obj=enum_type)
iter_type = data_type[0] + data_type[-1]
if iter_type in self.serialize_type:
return self.serialize_type[iter_type](data, data_type[1:-1], **kwargs)
except (ValueError, TypeError) as err:
msg = "Unable to serialize value: {!r} as type: {!r}."
raise SerializationError(msg.format(data, data_type)) from err
else:
return self._serialize(data, **kwargs)
@classmethod
def _get_custom_serializers(cls, data_type, **kwargs):
custom_serializer = kwargs.get("basic_types_serializers", {}).get(data_type)
if custom_serializer:
return custom_serializer
if kwargs.get("is_xml", False):
return cls._xml_basic_types_serializers.get(data_type)
@classmethod
def serialize_basic(cls, data, data_type, **kwargs):
"""Serialize basic builting data type.
Serializes objects to str, int, float or bool.
Possible kwargs:
- basic_types_serializers dict[str, callable] : If set, use the callable as serializer
- is_xml bool : If set, use xml_basic_types_serializers
:param data: Object to be serialized.
:param str data_type: Type of object in the iterable.
"""
custom_serializer = cls._get_custom_serializers(data_type, **kwargs)
if custom_serializer:
return custom_serializer(data)
if data_type == "str":
return cls.serialize_unicode(data)
return eval(data_type)(data) # nosec
@classmethod
def serialize_unicode(cls, data):
"""Special handling for serializing unicode strings in Py2.
Encode to UTF-8 if unicode, otherwise handle as a str.
:param data: Object to be serialized.
:rtype: str
"""
try: # If I received an enum, return its value
return data.value
except AttributeError:
pass
try:
if isinstance(data, unicode): # type: ignore
# Don't change it, JSON and XML ElementTree are totally able
# to serialize correctly u'' strings
return data
except NameError:
return str(data)
else:
return str(data)
def serialize_iter(self, data, iter_type, div=None, **kwargs):
"""Serialize iterable.
Supported kwargs:
- serialization_ctxt dict : The current entry of _attribute_map, or same format.
serialization_ctxt['type'] should be same as data_type.
- is_xml bool : If set, serialize as XML
:param list data: Object to be serialized.
:param str iter_type: Type of object in the iterable.
:param bool required: Whether the objects in the iterable must
not be None or empty.
:param str div: If set, this str will be used to combine the elements
in the iterable into a combined string. Default is 'None'.
:keyword bool do_quote: Whether to quote the serialized result of each iterable element.
Defaults to False.
:rtype: list, str
"""
if isinstance(data, str):
raise SerializationError("Refuse str type as a valid iter type.")
serialization_ctxt = kwargs.get("serialization_ctxt", {})
is_xml = kwargs.get("is_xml", False)
serialized = []
for d in data:
try:
serialized.append(self.serialize_data(d, iter_type, **kwargs))
except ValueError as err:
if isinstance(err, SerializationError):
raise
serialized.append(None)
if kwargs.get("do_quote", False):
serialized = ["" if s is None else quote(str(s), safe="") for s in serialized]
if div:
serialized = ["" if s is None else str(s) for s in serialized]
serialized = div.join(serialized)
if "xml" in serialization_ctxt or is_xml:
# XML serialization is more complicated
xml_desc = serialization_ctxt.get("xml", {})
xml_name = xml_desc.get("name")
if not xml_name:
xml_name = serialization_ctxt["key"]
# Create a wrap node if necessary (use the fact that Element and list have "append")
is_wrapped = xml_desc.get("wrapped", False)
node_name = xml_desc.get("itemsName", xml_name)
if is_wrapped:
final_result = _create_xml_node(xml_name, xml_desc.get("prefix", None), xml_desc.get("ns", None))
else:
final_result = []
# All list elements to "local_node"
for el in serialized:
if isinstance(el, ET.Element):
el_node = el
else:
el_node = _create_xml_node(node_name, xml_desc.get("prefix", None), xml_desc.get("ns", None))
if el is not None: # Otherwise it writes "None" :-p
el_node.text = str(el)
final_result.append(el_node)
return final_result
return serialized
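# Example (sketch): Serializer().serialize_iter([1, 2, 3], "int", div=",") -> "1,2,3"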
def serialize_dict(self, attr, dict_type, **kwargs):
"""Serialize a dictionary of objects.
:param dict attr: Object to be serialized.
:param str dict_type: Type of object in the dictionary.
:param bool required: Whether the objects in the dictionary must
not be None or empty.
:rtype: dict
"""
serialization_ctxt = kwargs.get("serialization_ctxt", {})
serialized = {}
for key, value in attr.items():
try:
serialized[self.serialize_unicode(key)] = self.serialize_data(value, dict_type, **kwargs)
except ValueError as err:
if isinstance(err, SerializationError):
raise
serialized[self.serialize_unicode(key)] = None
if "xml" in serialization_ctxt:
# XML serialization is more complicated
xml_desc = serialization_ctxt["xml"]
xml_name = xml_desc["name"]
final_result = _create_xml_node(xml_name, xml_desc.get("prefix", None), xml_desc.get("ns", None))
for key, value in serialized.items():
ET.SubElement(final_result, key).text = value
return final_result
return serialized
def serialize_object(self, attr, **kwargs):
"""Serialize a generic object.
This will be handled as a dictionary. If object passed in is not
a basic type (str, int, float, dict, list) it will simply be
cast to str.
:param dict attr: Object to be serialized.
:rtype: dict or str
"""
if attr is None:
return None
if isinstance(attr, ET.Element):
return attr
obj_type = type(attr)
if obj_type in self.basic_types:
return self.serialize_basic(attr, self.basic_types[obj_type], **kwargs)
if obj_type is _long_type:
return self.serialize_long(attr)
if obj_type is unicode_str:
return self.serialize_unicode(attr)
if obj_type is datetime.datetime:
return self.serialize_iso(attr)
if obj_type is datetime.date:
return self.serialize_date(attr)
if obj_type is datetime.time:
return self.serialize_time(attr)
if obj_type is datetime.timedelta:
return self.serialize_duration(attr)
if obj_type is decimal.Decimal:
return self.serialize_decimal(attr)
# If it's a model or I know this dependency, serialize as a Model
elif obj_type in self.dependencies.values() or isinstance(attr, Model):
return self._serialize(attr)
if obj_type == dict:
serialized = {}
for key, value in attr.items():
try:
serialized[self.serialize_unicode(key)] = self.serialize_object(value, **kwargs)
except ValueError:
serialized[self.serialize_unicode(key)] = None
return serialized
if obj_type == list:
serialized = []
for obj in attr:
try:
serialized.append(self.serialize_object(obj, **kwargs))
except ValueError:
pass
return serialized
return str(attr)
@staticmethod
def serialize_enum(attr, enum_obj=None):
try:
result = attr.value
except AttributeError:
result = attr
try:
enum_obj(result) # type: ignore
return result
except ValueError:
for enum_value in enum_obj: # type: ignore
if enum_value.value.lower() == str(attr).lower():
return enum_value.value
error = "{!r} is not valid value for enum {!r}"
raise SerializationError(error.format(attr, enum_obj))
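# Example (sketch, hypothetical enum):
#   class Color(Enum):
#       RED = "red"
#   Serializer.serialize_enum(Color.RED, enum_obj=Color)  -> "red"
#   Serializer.serialize_enum("RED", enum_obj=Color)      -> "red"  (case-insensitive fallback)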
@staticmethod
def serialize_bytearray(attr, **kwargs):
"""Serialize bytearray into base-64 string.
:param attr: Object to be serialized.
:rtype: str
"""
return b64encode(attr).decode()
@staticmethod
def serialize_base64(attr, **kwargs):
"""Serialize str into base-64 string.
:param attr: Object to be serialized.
:rtype: str
"""
encoded = b64encode(attr).decode("ascii")
return encoded.strip("=").replace("+", "-").replace("/", "_")
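# Example (sketch): Serializer.serialize_base64(b"hello") -> "aGVsbG8" (URL-safe alphabet, padding stripped)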
@staticmethod
def serialize_decimal(attr, **kwargs):
"""Serialize Decimal object to float.
:param attr: Object to be serialized.
:rtype: float
"""
return float(attr)
@staticmethod
def serialize_long(attr, **kwargs):
"""Serialize long (Py2) or int (Py3).
:param attr: Object to be serialized.
:rtype: int/long
"""
return _long_type(attr)
@staticmethod
def serialize_date(attr, **kwargs):
"""Serialize Date object into ISO-8601 formatted string.
:param Date attr: Object to be serialized.
:rtype: str
"""
if isinstance(attr, str):
attr = isodate.parse_date(attr)
t = "{:04}-{:02}-{:02}".format(attr.year, attr.month, attr.day)
return t
@staticmethod
def serialize_time(attr, **kwargs):
"""Serialize Time object into ISO-8601 formatted string.
:param datetime.time attr: Object to be serialized.
:rtype: str
"""
if isinstance(attr, str):
attr = isodate.parse_time(attr)
t = "{:02}:{:02}:{:02}".format(attr.hour, attr.minute, attr.second)
if attr.microsecond:
t += ".{:02}".format(attr.microsecond)
return t
@staticmethod
def serialize_duration(attr, **kwargs):
"""Serialize TimeDelta object into ISO-8601 formatted string.
:param TimeDelta attr: Object to be serialized.
:rtype: str
"""
if isinstance(attr, str):
attr = isodate.parse_duration(attr)
return isodate.duration_isoformat(attr)
@staticmethod
def serialize_rfc(attr, **kwargs):
"""Serialize Datetime object into RFC-1123 formatted string.
:param Datetime attr: Object to be serialized.
:rtype: str
:raises: TypeError if format invalid.
"""
try:
if not attr.tzinfo:
_LOGGER.warning("Datetime with no tzinfo will be considered UTC.")
utc = attr.utctimetuple()
except AttributeError:
raise TypeError("RFC1123 object must be valid Datetime object.")
return "{}, {:02} {} {:04} {:02}:{:02}:{:02} GMT".format(
Serializer.days[utc.tm_wday],
utc.tm_mday,
Serializer.months[utc.tm_mon],
utc.tm_year,
utc.tm_hour,
utc.tm_min,
utc.tm_sec,
)
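# Example (sketch):
#   Serializer.serialize_rfc(datetime.datetime(2024, 1, 2, tzinfo=TZ_UTC))
#   -> "Tue, 02 Jan 2024 00:00:00 GMT"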
@staticmethod
def serialize_iso(attr, **kwargs):
"""Serialize Datetime object into ISO-8601 formatted string.
:param Datetime attr: Object to be serialized.
:rtype: str
:raises: SerializationError if format invalid.
"""
if isinstance(attr, str):
attr = isodate.parse_datetime(attr)
try:
if not attr.tzinfo:
_LOGGER.warning("Datetime with no tzinfo will be considered UTC.")
utc = attr.utctimetuple()
if utc.tm_year > 9999 or utc.tm_year < 1:
raise OverflowError("Hit max or min date")
microseconds = str(attr.microsecond).rjust(6, "0").rstrip("0").ljust(3, "0")
if microseconds:
microseconds = "." + microseconds
date = "{:04}-{:02}-{:02}T{:02}:{:02}:{:02}".format(
utc.tm_year, utc.tm_mon, utc.tm_mday, utc.tm_hour, utc.tm_min, utc.tm_sec
)
return date + microseconds + "Z"
except (ValueError, OverflowError) as err:
msg = "Unable to serialize datetime object."
raise SerializationError(msg) from err
except AttributeError as err:
msg = "ISO-8601 object must be valid Datetime object."
raise TypeError(msg) from err
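# Example (sketch):
#   Serializer.serialize_iso(datetime.datetime(2024, 1, 2, 3, 4, 5, tzinfo=TZ_UTC))
#   -> "2024-01-02T03:04:05.000Z"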
@staticmethod
def serialize_unix(attr, **kwargs):
"""Serialize Datetime object into IntTime format.
This is represented as seconds.
:param Datetime attr: Object to be serialized.
:rtype: int
:raises: SerializationError if format invalid
"""
if isinstance(attr, int):
return attr
try:
if not attr.tzinfo:
_LOGGER.warning("Datetime with no tzinfo will be considered UTC.")
return int(calendar.timegm(attr.utctimetuple()))
except AttributeError:
raise TypeError("Unix time object must be valid Datetime object.")
def rest_key_extractor(attr, attr_desc, data):
key = attr_desc["key"]
working_data = data
while "." in key:
# Need the cast, as for some reasons "split" is typed as list[str | Any]
dict_keys = cast(List[str], _FLATTEN.split(key))
if len(dict_keys) == 1:
key = _decode_attribute_map_key(dict_keys[0])
break
working_key = _decode_attribute_map_key(dict_keys[0])
working_data = working_data.get(working_key, data)
if working_data is None:
# If at any point while following the flattened JSON path we see None,
# it means that all properties underneath are None as well
return None
key = ".".join(dict_keys[1:])
return working_data.get(key)
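# Example (sketch): extractors walk flattened RestAPI key paths, e.g.
#   rest_key_extractor("name", {"key": "properties.name"}, {"properties": {"name": "x"}}) -> "x"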
def rest_key_case_insensitive_extractor(attr, attr_desc, data):
key = attr_desc["key"]
working_data = data
while "." in key:
dict_keys = _FLATTEN.split(key)
if len(dict_keys) == 1:
key = _decode_attribute_map_key(dict_keys[0])
break
working_key = _decode_attribute_map_key(dict_keys[0])
working_data = attribute_key_case_insensitive_extractor(working_key, None, working_data)
if working_data is None:
# If at any point while following the flattened JSON path we see None,
# it means that all properties underneath are None as well
return None
key = ".".join(dict_keys[1:])
if working_data:
return attribute_key_case_insensitive_extractor(key, None, working_data)
def last_rest_key_extractor(attr, attr_desc, data):
"""Extract the attribute in "data" based on the last part of the JSON path key."""
key = attr_desc["key"]
dict_keys = _FLATTEN.split(key)
return attribute_key_extractor(dict_keys[-1], None, data)
def last_rest_key_case_insensitive_extractor(attr, attr_desc, data):
"""Extract the attribute in "data" based on the last part of the JSON path key.
This is the case insensitive version of "last_rest_key_extractor"
"""
key = attr_desc["key"]
dict_keys = _FLATTEN.split(key)
return attribute_key_case_insensitive_extractor(dict_keys[-1], None, data)
def attribute_key_extractor(attr, _, data):
return data.get(attr)
def attribute_key_case_insensitive_extractor(attr, _, data):
found_key = None
lower_attr = attr.lower()
for key in data:
if lower_attr == key.lower():
found_key = key
break
return data.get(found_key)
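# Example (sketch): attribute_key_case_insensitive_extractor("name", None, {"Name": "x"}) -> "x"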
def _extract_name_from_internal_type(internal_type):
"""Given an internal type XML description, extract correct XML name with namespace.
:param internal_type: A model type
:rtype: str
:returns: The XML name, qualified with its namespace if any
"""
internal_type_xml_map = getattr(internal_type, "_xml_map", {})
xml_name = internal_type_xml_map.get("name", internal_type.__name__)
xml_ns = internal_type_xml_map.get("ns", None)
if xml_ns:
xml_name = "{{{}}}{}".format(xml_ns, xml_name)
return xml_name
def xml_key_extractor(attr, attr_desc, data):
if isinstance(data, dict):
return None
# Test if this model is XML ready first
if not isinstance(data, ET.Element):
return None
xml_desc = attr_desc.get("xml", {})
xml_name = xml_desc.get("name", attr_desc["key"])
# Look for children
is_iter_type = attr_desc["type"].startswith("[")
is_wrapped = xml_desc.get("wrapped", False)
internal_type = attr_desc.get("internalType", None)
internal_type_xml_map = getattr(internal_type, "_xml_map", {})
# Integrate namespace if necessary
xml_ns = xml_desc.get("ns", internal_type_xml_map.get("ns", None))
if xml_ns:
xml_name = "{{{}}}{}".format(xml_ns, xml_name)
# If it's an attribute, that's simple
if xml_desc.get("attr", False):
return data.get(xml_name)
# If it's x-ms-text, that's simple too
if xml_desc.get("text", False):
return data.text
# Scenario where I take the local name:
# - Wrapped node
# - Internal type is an enum (considered basic types)
# - Internal type has no XML/Name node
if is_wrapped or (internal_type and (issubclass(internal_type, Enum) or "name" not in internal_type_xml_map)):
children = data.findall(xml_name)
# If internal type has a local name and it's not a list, I use that name
elif not is_iter_type and internal_type and "name" in internal_type_xml_map:
xml_name = _extract_name_from_internal_type(internal_type)
children = data.findall(xml_name)
# That's an array
else:
if internal_type: # Complex type, ignore itemsName and use the complex type name
items_name = _extract_name_from_internal_type(internal_type)
else:
items_name = xml_desc.get("itemsName", xml_name)
children = data.findall(items_name)
if len(children) == 0:
if is_iter_type:
if is_wrapped:
return None # is_wrapped no node, we want None
else:
return [] # not wrapped, assume empty list
return None # Assume it's not there, maybe an optional node.
# If is_iter_type and not wrapped, return all found children
if is_iter_type:
if not is_wrapped:
return children
else: # Iter and wrapped, should have found one node only (the wrap one)
if len(children) != 1:
raise DeserializationError(
"Tried to deserialize an array not wrapped, and found several nodes '{}'. Maybe you should declare this array as wrapped?".format(
xml_name
)
)
return list(children[0]) # Might be empty list and that's ok.
# Here it's not a itertype, we should have found one element only or empty
if len(children) > 1:
raise DeserializationError("Find several XML '{}' where it was not expected".format(xml_name))
return children[0]
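# Example (sketch): pulling an XML attribute via the "attr" flag in the xml description.
#   node = ET.fromstring('<item id="42"/>')
#   xml_key_extractor("id", {"key": "id", "type": "str", "xml": {"name": "id", "attr": True}}, node) -> "42"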
class Deserializer(object):
"""Response object model deserializer.
:param dict classes: Class type dictionary for deserializing complex types.
:ivar list key_extractors: Ordered list of extractors to be used by this deserializer.
"""
basic_types = {str: "str", int: "int", bool: "bool", float: "float"}
valid_date = re.compile(r"\d{4}[-]\d{2}[-]\d{2}T\d{2}:\d{2}:\d{2}" r"\.?\d*Z?[-+]?[\d{2}]?:?[\d{2}]?")
def __init__(self, classes: Optional[Mapping[str, Type[ModelType]]] = None):
self.deserialize_type = {
"iso-8601": Deserializer.deserialize_iso,
"rfc-1123": Deserializer.deserialize_rfc,
"unix-time": Deserializer.deserialize_unix,
"duration": Deserializer.deserialize_duration,
"date": Deserializer.deserialize_date,
"time": Deserializer.deserialize_time,
"decimal": Deserializer.deserialize_decimal,
"long": Deserializer.deserialize_long,
"bytearray": Deserializer.deserialize_bytearray,
"base64": Deserializer.deserialize_base64,
"object": self.deserialize_object,
"[]": self.deserialize_iter,
"{}": self.deserialize_dict,
}
self.deserialize_expected_types = {
"duration": (isodate.Duration, datetime.timedelta),
"iso-8601": (datetime.datetime),
}
self.dependencies: Dict[str, Type[ModelType]] = dict(classes) if classes else {}
self.key_extractors = [rest_key_extractor, xml_key_extractor]
# Additional properties only work if the "rest_key_extractor" is used to
# extract the keys. Making it work regardless of the key extractor is too
# complicated, with no real scenario for now.
# So we add a flag to disable additional properties detection. This flag should be
# used if you expect the deserialization to NOT come from a JSON REST syntax.
# Otherwise, results are unexpected.
self.additional_properties_detection = True
def __call__(self, target_obj, response_data, content_type=None):
"""Call the deserializer to process a REST response.
:param str target_obj: Target data type to deserialize to.
:param requests.Response response_data: REST response object.
:param str content_type: Swagger "produces" if available.
:raises: DeserializationError if deserialization fails.
:return: Deserialized object.
"""
data = self._unpack_content(response_data, content_type)
return self._deserialize(target_obj, data)
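# Example (sketch): a Deserializer wired with this module's model classes,
# for a hypothetical generated class SomeModel:
#   d = Deserializer({"SomeModel": SomeModel})
#   obj = d("SomeModel", '{"someKey": 1}', content_type="application/json")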
def _deserialize(self, target_obj, data):
"""Call the deserializer on a model.
Data needs to be already deserialized as JSON or XML ElementTree
:param str target_obj: Target data type to deserialize to.
:param object data: Object to deserialize.
:raises: DeserializationError if deserialization fails.
:return: Deserialized object.
"""
# This is already a model, go recursive just in case
if hasattr(data, "_attribute_map"):
constants = [name for name, config in getattr(data, "_validation", {}).items() if config.get("constant")]
try:
for attr, mapconfig in data._attribute_map.items():
if attr in constants:
continue
value = getattr(data, attr)
if value is None:
continue
local_type = mapconfig["type"]
internal_data_type = local_type.strip("[]{}")
if internal_data_type not in self.dependencies or isinstance(internal_data_type, Enum):
continue
setattr(data, attr, self._deserialize(local_type, value))
return data
except AttributeError:
return
response, class_name = self._classify_target(target_obj, data)
if isinstance(response, basestring):
return self.deserialize_data(data, response)
elif isinstance(response, type) and issubclass(response, Enum):
return self.deserialize_enum(data, response)
if data is None:
return data
try:
attributes = response._attribute_map # type: ignore
d_attrs = {}
for attr, attr_desc in attributes.items():
# Check empty string. If it's not empty, someone has a real "additionalProperties"...
if attr == "additional_properties" and attr_desc["key"] == "":
continue
raw_value = None
# Enhance attr_desc with some dynamic data
attr_desc = attr_desc.copy() # Do a copy, do not change the real one
internal_data_type = attr_desc["type"].strip("[]{}")
if internal_data_type in self.dependencies:
attr_desc["internalType"] = self.dependencies[internal_data_type]
for key_extractor in self.key_extractors:
found_value = key_extractor(attr, attr_desc, data)
if found_value is not None:
if raw_value is not None and raw_value != found_value:
msg = (
"Ignoring extracted value '%s' from %s for key '%s'"
" (duplicate extraction, follow extractors order)"
)
_LOGGER.warning(msg, found_value, key_extractor, attr)
continue
raw_value = found_value
value = self.deserialize_data(raw_value, attr_desc["type"])
d_attrs[attr] = value
except (AttributeError, TypeError, KeyError) as err:
msg = "Unable to deserialize to object: " + class_name # type: ignore
raise DeserializationError(msg) from err
else:
additional_properties = self._build_additional_properties(attributes, data)
return self._instantiate_model(response, d_attrs, additional_properties)
def _build_additional_properties(self, attribute_map, data):
if not self.additional_properties_detection:
return None
if "additional_properties" in attribute_map and attribute_map.get("additional_properties", {}).get("key") != "":
# Check empty string. If it's not empty, someone has a real "additionalProperties"
return None
if isinstance(data, ET.Element):
data = {el.tag: el.text for el in data}
known_keys = {
_decode_attribute_map_key(_FLATTEN.split(desc["key"])[0])
for desc in attribute_map.values()
if desc["key"] != ""
}
present_keys = set(data.keys())
missing_keys = present_keys - known_keys
return {key: data[key] for key in missing_keys}
def _classify_target(self, target, data):
"""Check to see whether the deserialization target object can
be classified into a subclass.
Once classification has been determined, initialize object.
:param str target: The target object type to deserialize to.
:param str/dict data: The response data to deserialize.
"""
if target is None:
return None, None
if isinstance(target, basestring):
try:
target = self.dependencies[target]
except KeyError:
return target, target
try:
target = target._classify(data, self.dependencies)
except AttributeError:
pass # Target is not a Model, no classify
return target, target.__class__.__name__ # type: ignore
def failsafe_deserialize(self, target_obj, data, content_type=None):
"""Ignores any errors encountered in deserialization,
and falls back to not deserializing the object. Recommended
for use in error deserialization, as we want to return the
HttpResponseError to users, and not have them deal with
a deserialization error.
:param str target_obj: The target object type to deserialize to.
:param str/dict data: The response data to deserialize.
:param str content_type: Swagger "produces" if available.
"""
try:
return self(target_obj, data, content_type=content_type)
except:
_LOGGER.debug(
"Ran into a deserialization error. Ignoring since this is failsafe deserialization", exc_info=True
)
return None
@staticmethod
def _unpack_content(raw_data, content_type=None):
"""Extract the correct structure for deserialization.
If raw_data is a PipelineResponse, try to extract the result of RawDeserializer.
if we can't, raise. Your Pipeline should have a RawDeserializer.
If not a pipeline response and raw_data is bytes or string, use content-type
to decode it. If no content-type, try JSON.
If raw_data is something else, bypass all logic and return it directly.
:param raw_data: Data to be processed.
:param content_type: How to parse if raw_data is a string/bytes.
:raises JSONDecodeError: If JSON is requested and parsing is impossible.
:raises UnicodeDecodeError: If bytes is not UTF8
"""
# Assume this is enough to detect a Pipeline Response without importing it
context = getattr(raw_data, "context", {})
if context:
if RawDeserializer.CONTEXT_NAME in context:
return context[RawDeserializer.CONTEXT_NAME]
raise ValueError("This pipeline didn't have the RawDeserializer policy; can't deserialize")
# Assume this is enough to recognize universal_http.ClientResponse without importing it
if hasattr(raw_data, "body"):
return RawDeserializer.deserialize_from_http_generics(raw_data.text(), raw_data.headers)
# Assume this enough to recognize requests.Response without importing it.
if hasattr(raw_data, "_content_consumed"):
return RawDeserializer.deserialize_from_http_generics(raw_data.text, raw_data.headers)
if isinstance(raw_data, (basestring, bytes)) or hasattr(raw_data, "read"):
return RawDeserializer.deserialize_from_text(raw_data, content_type) # type: ignore
return raw_data
def _instantiate_model(self, response, attrs, additional_properties=None):
"""Instantiate a response model passing in deserialized args.
:param response: The response model class.
        :param attrs: The deserialized response attributes.
"""
if callable(response):
subtype = getattr(response, "_subtype_map", {})
try:
readonly = [k for k, v in response._validation.items() if v.get("readonly")]
const = [k for k, v in response._validation.items() if v.get("constant")]
kwargs = {k: v for k, v in attrs.items() if k not in subtype and k not in readonly + const}
response_obj = response(**kwargs)
for attr in readonly:
setattr(response_obj, attr, attrs.get(attr))
if additional_properties:
response_obj.additional_properties = additional_properties
return response_obj
except TypeError as err:
msg = "Unable to deserialize {} into model {}. ".format(kwargs, response) # type: ignore
raise DeserializationError(msg + str(err))
else:
try:
for attr, value in attrs.items():
setattr(response, attr, value)
return response
except Exception as exp:
msg = "Unable to populate response model. "
msg += "Type: {}, Error: {}".format(type(response), exp)
raise DeserializationError(msg)
def deserialize_data(self, data, data_type):
"""Process data for deserialization according to data type.
:param str data: The response string to be deserialized.
:param str data_type: The type to deserialize to.
:raises: DeserializationError if deserialization fails.
:return: Deserialized object.
"""
if data is None:
return data
try:
if not data_type:
return data
if data_type in self.basic_types.values():
return self.deserialize_basic(data, data_type)
if data_type in self.deserialize_type:
if isinstance(data, self.deserialize_expected_types.get(data_type, tuple())):
return data
is_a_text_parsing_type = lambda x: x not in ["object", "[]", r"{}"]
if isinstance(data, ET.Element) and is_a_text_parsing_type(data_type) and not data.text:
return None
data_val = self.deserialize_type[data_type](data)
return data_val
iter_type = data_type[0] + data_type[-1]
if iter_type in self.deserialize_type:
return self.deserialize_type[iter_type](data, data_type[1:-1])
obj_type = self.dependencies[data_type]
if issubclass(obj_type, Enum):
if isinstance(data, ET.Element):
data = data.text
return self.deserialize_enum(data, obj_type)
except (ValueError, TypeError, AttributeError) as err:
msg = "Unable to deserialize response data."
msg += " Data: {}, {}".format(data, data_type)
raise DeserializationError(msg) from err
else:
return self._deserialize(obj_type, data)
def deserialize_iter(self, attr, iter_type):
"""Deserialize an iterable.
:param list attr: Iterable to be deserialized.
:param str iter_type: The type of object in the iterable.
:rtype: list
"""
if attr is None:
return None
if isinstance(attr, ET.Element): # If I receive an element here, get the children
attr = list(attr)
if not isinstance(attr, (list, set)):
raise DeserializationError("Cannot deserialize as [{}] an object of type {}".format(iter_type, type(attr)))
return [self.deserialize_data(a, iter_type) for a in attr]
def deserialize_dict(self, attr, dict_type):
"""Deserialize a dictionary.
:param dict/list attr: Dictionary to be deserialized. Also accepts
a list of key, value pairs.
:param str dict_type: The object type of the items in the dictionary.
:rtype: dict
"""
if isinstance(attr, list):
return {x["key"]: self.deserialize_data(x["value"], dict_type) for x in attr}
if isinstance(attr, ET.Element):
# Transform <Key>value</Key> into {"Key": "value"}
attr = {el.tag: el.text for el in attr}
return {k: self.deserialize_data(v, dict_type) for k, v in attr.items()}
def deserialize_object(self, attr, **kwargs):
"""Deserialize a generic object.
This will be handled as a dictionary.
:param dict attr: Dictionary to be deserialized.
:rtype: dict
:raises: TypeError if non-builtin datatype encountered.
"""
if attr is None:
return None
if isinstance(attr, ET.Element):
# Do no recurse on XML, just return the tree as-is
return attr
if isinstance(attr, basestring):
return self.deserialize_basic(attr, "str")
obj_type = type(attr)
if obj_type in self.basic_types:
return self.deserialize_basic(attr, self.basic_types[obj_type])
if obj_type is _long_type:
return self.deserialize_long(attr)
if obj_type == dict:
deserialized = {}
for key, value in attr.items():
try:
deserialized[key] = self.deserialize_object(value, **kwargs)
except ValueError:
deserialized[key] = None
return deserialized
if obj_type == list:
deserialized = []
for obj in attr:
try:
deserialized.append(self.deserialize_object(obj, **kwargs))
except ValueError:
pass
return deserialized
else:
error = "Cannot deserialize generic object with type: "
raise TypeError(error + str(obj_type))
def deserialize_basic(self, attr, data_type):
"""Deserialize basic builtin data type from string.
Will attempt to convert to str, int, float and bool.
This function will also accept '1', '0', 'true' and 'false' as
valid bool values.
:param str attr: response string to be deserialized.
:param str data_type: deserialization data type.
:rtype: str, int, float or bool
:raises: TypeError if string format is not valid.
"""
# If we're here, data is supposed to be a basic type.
# If it's still an XML node, take the text
if isinstance(attr, ET.Element):
attr = attr.text
if not attr:
if data_type == "str":
# None or '', node <a/> is empty string.
return ""
else:
# None or '', node <a/> with a strong type is None.
# Don't try to model "empty bool" or "empty int"
return None
if data_type == "bool":
if attr in [True, False, 1, 0]:
return bool(attr)
elif isinstance(attr, basestring):
if attr.lower() in ["true", "1"]:
return True
elif attr.lower() in ["false", "0"]:
return False
raise TypeError("Invalid boolean value: {}".format(attr))
if data_type == "str":
return self.deserialize_unicode(attr)
return eval(data_type)(attr) # nosec
@staticmethod
def deserialize_unicode(data):
"""Preserve unicode objects in Python 2, otherwise return data
as a string.
:param str data: response string to be deserialized.
:rtype: str or unicode
"""
# We might be here because we have an enum modeled as string,
# and we try to deserialize a partial dict with enum inside
if isinstance(data, Enum):
return data
# Consider this is real string
try:
if isinstance(data, unicode): # type: ignore
return data
except NameError:
return str(data)
else:
return str(data)
@staticmethod
def deserialize_enum(data, enum_obj):
"""Deserialize string into enum object.
If the string is not a valid enum value it will be returned as-is
and a warning will be logged.
:param str data: Response string to be deserialized. If this value is
None or invalid it will be returned as-is.
:param Enum enum_obj: Enum object to deserialize to.
:rtype: Enum
"""
if isinstance(data, enum_obj) or data is None:
return data
if isinstance(data, Enum):
data = data.value
if isinstance(data, int):
# Workaround. We might consider remove it in the future.
try:
return list(enum_obj.__members__.values())[data]
except IndexError:
error = "{!r} is not a valid index for enum {!r}"
raise DeserializationError(error.format(data, enum_obj))
try:
return enum_obj(str(data))
except ValueError:
for enum_value in enum_obj:
if enum_value.value.lower() == str(data).lower():
return enum_value
# We don't fail anymore for unknown value, we deserialize as a string
_LOGGER.warning("Deserializer is not able to find %s as valid enum in %s", data, enum_obj)
return Deserializer.deserialize_unicode(data)
@staticmethod
def deserialize_bytearray(attr):
"""Deserialize string into bytearray.
:param str attr: response string to be deserialized.
:rtype: bytearray
:raises: TypeError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
return bytearray(b64decode(attr)) # type: ignore
@staticmethod
def deserialize_base64(attr):
"""Deserialize base64 encoded string into string.
:param str attr: response string to be deserialized.
:rtype: bytearray
:raises: TypeError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
padding = "=" * (3 - (len(attr) + 3) % 4) # type: ignore
attr = attr + padding # type: ignore
encoded = attr.replace("-", "+").replace("_", "/")
return b64decode(encoded)
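    # Illustrative note (added; not part of the original file): for the URL-safe
    # input "aGVsbG8" (length 7), padding is "=" * (3 - (7 + 3) % 4) == "=",
    # giving "aGVsbG8="; after "-"/"_" are mapped back to "+"/"/", b64decode
    # yields b"hello".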
@staticmethod
def deserialize_decimal(attr):
"""Deserialize string into Decimal object.
:param str attr: response string to be deserialized.
:rtype: Decimal
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
try:
return decimal.Decimal(str(attr)) # type: ignore
except decimal.DecimalException as err:
msg = "Invalid decimal {}".format(attr)
raise DeserializationError(msg) from err
@staticmethod
def deserialize_long(attr):
"""Deserialize string into long (Py2) or int (Py3).
:param str attr: response string to be deserialized.
:rtype: long or int
:raises: ValueError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
return _long_type(attr) # type: ignore
@staticmethod
def deserialize_duration(attr):
"""Deserialize ISO-8601 formatted string into TimeDelta object.
:param str attr: response string to be deserialized.
:rtype: TimeDelta
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
try:
duration = isodate.parse_duration(attr)
except (ValueError, OverflowError, AttributeError) as err:
msg = "Cannot deserialize duration object."
raise DeserializationError(msg) from err
else:
return duration
@staticmethod
def deserialize_date(attr):
"""Deserialize ISO-8601 formatted string into Date object.
:param str attr: response string to be deserialized.
:rtype: Date
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
if re.search(r"[^\W\d_]", attr, re.I + re.U): # type: ignore
raise DeserializationError("Date must have only digits and -. Received: %s" % attr)
# This must NOT use defaultmonth/defaultday. Using None ensure this raises an exception.
return isodate.parse_date(attr, defaultmonth=0, defaultday=0)
@staticmethod
def deserialize_time(attr):
"""Deserialize ISO-8601 formatted string into time object.
:param str attr: response string to be deserialized.
:rtype: datetime.time
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
if re.search(r"[^\W\d_]", attr, re.I + re.U): # type: ignore
raise DeserializationError("Date must have only digits and -. Received: %s" % attr)
return isodate.parse_time(attr)
@staticmethod
def deserialize_rfc(attr):
"""Deserialize RFC-1123 formatted string into Datetime object.
:param str attr: response string to be deserialized.
:rtype: Datetime
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
try:
parsed_date = email.utils.parsedate_tz(attr) # type: ignore
date_obj = datetime.datetime(
*parsed_date[:6], tzinfo=_FixedOffset(datetime.timedelta(minutes=(parsed_date[9] or 0) / 60))
)
if not date_obj.tzinfo:
date_obj = date_obj.astimezone(tz=TZ_UTC)
except ValueError as err:
msg = "Cannot deserialize to rfc datetime object."
raise DeserializationError(msg) from err
else:
return date_obj
@staticmethod
def deserialize_iso(attr):
"""Deserialize ISO-8601 formatted string into Datetime object.
:param str attr: response string to be deserialized.
:rtype: Datetime
:raises: DeserializationError if string format invalid.
"""
if isinstance(attr, ET.Element):
attr = attr.text
try:
attr = attr.upper() # type: ignore
match = Deserializer.valid_date.match(attr)
if not match:
raise ValueError("Invalid datetime string: " + attr)
check_decimal = attr.split(".")
if len(check_decimal) > 1:
decimal_str = ""
for digit in check_decimal[1]:
if digit.isdigit():
decimal_str += digit
else:
break
if len(decimal_str) > 6:
attr = attr.replace(decimal_str, decimal_str[0:6])
date_obj = isodate.parse_datetime(attr)
test_utc = date_obj.utctimetuple()
if test_utc.tm_year > 9999 or test_utc.tm_year < 1:
raise OverflowError("Hit max or min date")
except (ValueError, OverflowError, AttributeError) as err:
msg = "Cannot deserialize datetime object."
raise DeserializationError(msg) from err
else:
return date_obj
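    # Illustrative note (added; not part of the original file): fractional seconds
    # are truncated to microsecond precision, so "2001-01-01T12:00:00.1234567Z"
    # is parsed as if it were "2001-01-01T12:00:00.123456Z".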
@staticmethod
def deserialize_unix(attr):
"""Serialize Datetime object into IntTime format.
This is represented as seconds.
:param int attr: Object to be serialized.
:rtype: Datetime
:raises: DeserializationError if format invalid
"""
if isinstance(attr, ET.Element):
attr = int(attr.text) # type: ignore
try:
attr = int(attr)
date_obj = datetime.datetime.fromtimestamp(attr, TZ_UTC)
except ValueError as err:
msg = "Cannot deserialize to unix datetime object."
raise DeserializationError(msg) from err
else:
return date_obj

Source: azure-quantum-python/azure-quantum/azure/quantum/_client/_serialization.py

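For orientation, here is a minimal sketch of how the failsafe path above is typically driven. The `ErrorInfo` model is hypothetical (it only mimics the `_attribute_map`/`_validation` shape of autorest-generated models), and the `Deserializer(classes)` constructor and the import path are assumptions based on the source path recorded above:

```python
from azure.quantum._client._serialization import Deserializer  # internal module, not a public API

class ErrorInfo:
    """Hypothetical model; generated clients provide classes shaped like this."""
    _validation = {}
    _attribute_map = {
        "code": {"key": "code", "type": "str"},
        "message": {"key": "message", "type": "str"},
    }

    def __init__(self, **kwargs):
        self.code = kwargs.get("code")
        self.message = kwargs.get("message")

deserializer = Deserializer({"ErrorInfo": ErrorInfo})

# failsafe_deserialize never raises: a malformed payload yields None instead of
# a DeserializationError, so HTTP error responses can always be surfaced.
error = deserializer.failsafe_deserialize(
    "ErrorInfo",
    '{"code": "404", "message": "not found"}',
    content_type="application/json",
)
print(error.code if error else "payload could not be deserialized")
```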
##
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
##
from typing import TYPE_CHECKING, Dict, Sequence
if TYPE_CHECKING:
import cirq
from azure.quantum import Job as AzureJob
class Job:
"""
Thin wrapper around an Azure Quantum Job that supports
returning results in Cirq format.
"""
def __init__(
self,
azure_job: "AzureJob",
program: "cirq.Circuit",
measurement_dict: dict = None
):
"""Construct a Job.
:param azure_job: Job
:type azure_job: azure.quantum.job.Job
:param program: Cirq program
:type program: cirq.Circuit
        :param measurement_dict: Measurements
:type measurement_dict: dict
"""
self._azure_job = azure_job
self._program = program
self._measurement_dict = measurement_dict
def job_id(self) -> str:
"""Returns the job id (UID) for the job."""
return self._azure_job.id
def status(self) -> str:
"""Gets the current status of the job."""
self._azure_job.refresh()
status = self._azure_job.details.status
if status == "Failed":
return f"{status}: {self._azure_job.details.error_data.message}"
else:
return status
def target(self) -> str:
"""Returns the target where the job was run."""
return self._azure_job.details.target
def name(self) -> str:
"""Returns the name of the job which was supplied during job creation."""
return self._azure_job.details.name
def num_qubits(self) -> int:
"""Returns the number of qubits for the job."""
return self._azure_job.details.metadata["qubits"]
def repetitions(self) -> int:
"""Returns the number of repetitions for the job."""
return self._azure_job.details.metadata["repetitions"]
def measurement_dict(self) -> Dict[str, Sequence[int]]:
"""Returns a dictionary of measurement keys to target qubit index."""
if self._measurement_dict is None:
from cirq import MeasurementGate
measurements = [op for op in self._program.all_operations() if isinstance(op.gate, MeasurementGate)]
self._measurement_dict = {
meas.gate.key: [q.x for q in meas.qubits] for meas in measurements
}
return self._measurement_dict
def results(self, timeout_seconds: int = 7200) -> "cirq.Result":
"""Poll the Azure Quantum API for results."""
return self._azure_job.get_results(timeout_secs=timeout_seconds)
def cancel(self):
"""Cancel the given job."""
self._azure_job.workspace.cancel_job(self._azure_job)
def delete(self):
"""Delete the given job."""
self._azure_job.workspace.cancel_job(self._azure_job)
def __str__(self) -> str:
return f'azure.quantum.cirq.Job(job_id={self.job_id()})'

Source: azure-quantum-python/azure-quantum/azure/quantum/cirq/job.py

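A short sketch of how this wrapper is typically obtained and used, based on the session tests later in this document; the workspace parameters are placeholders and `quantinuum.sim.h1-1e` is a target name those tests also use:

```python
import cirq
from azure.quantum import Workspace
from azure.quantum.cirq import AzureQuantumService

workspace = Workspace(
    subscription_id="<subscription-id>",  # placeholder
    resource_group="<resource-group>",    # placeholder
    name="<workspace-name>",              # placeholder
    location="<location>",                # placeholder
)
service = AzureQuantumService(workspace=workspace)
target = service.get_target("quantinuum.sim.h1-1e")

# Two-qubit Bell-state circuit with a named measurement.
qubits = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(qubits[0]),
    cirq.CNOT(qubits[0], qubits[1]),
    cirq.measure(*qubits, key="m"),
)

job = target.submit(circuit, name="bell-state")
print(job.status())                        # refreshes and returns e.g. "Succeeded"
result = job.results(timeout_seconds=600)  # poll until results are available
```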
##
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
##
import os
import json
import logging
import warnings
logger = logging.getLogger(__name__)
from typing import Any, Dict, Tuple, Union, List, Optional
from azure.quantum.version import __version__
from azure.quantum.qiskit.job import (
MICROSOFT_OUTPUT_DATA_FORMAT,
MICROSOFT_OUTPUT_DATA_FORMAT_V2,
AzureQuantumJob,
)
from abc import abstractmethod
from azure.quantum.job.session import SessionHost
try:
from qiskit import QuantumCircuit, transpile
from qiskit.providers import BackendV1 as Backend
from qiskit.providers import Options
from qiskit.providers import Provider
from qiskit.providers.models import BackendConfiguration
from qiskit.qobj import QasmQobj, PulseQobj
from pyqir import Module
from qiskit_qir import to_qir_module
except ImportError:
raise ImportError(
"Missing optional 'qiskit' dependencies. \
To install run: pip install azure-quantum[qiskit]"
)
class AzureBackendBase(Backend, SessionHost):
# Name of the provider's input parameter which specifies number of shots for a submitted job.
# If None, backend will not pass this input parameter.
_SHOTS_PARAM_NAME = None
@abstractmethod
def __init__(
self,
configuration: BackendConfiguration,
provider: Provider = None,
**fields
):
super().__init__(configuration, provider, **fields)
@abstractmethod
def run(
self,
run_input: Union[QuantumCircuit, List[QuantumCircuit]] = [],
shots: int = None,
**options,
) -> AzureQuantumJob:
"""Run on the backend.
This method returns a
:class:`~azure.quantum.qiskit.job.AzureQuantumJob` object
that runs circuits.
Args:
run_input (QuantumCircuit or List[QuantumCircuit]): An individual or a
list of :class:`~qiskit.circuits.QuantumCircuit` to run on the backend.
shots (int, optional): Number of shots, defaults to None.
options: Any kwarg options to pass to the backend for running the
config. If a key is also present in the options
attribute/object then the expectation is that the value
specified will be used instead of what's set in the options
object.
Returns:
Job: The job object for the run
"""
pass
@classmethod
def _can_send_shots_input_param(cls) -> bool:
"""
Tells if provider's backend class is able to specify shots number for its jobs.
"""
return cls._SHOTS_PARAM_NAME is not None
@classmethod
@abstractmethod
def _default_options(cls) -> Options:
pass
@abstractmethod
def _azure_config(self) -> Dict[str, str]:
pass
def retrieve_job(self, job_id) -> AzureQuantumJob:
"""Returns the Job instance associated with the given id."""
return self._provider.get_job(job_id)
def _get_output_data_format(self, options: Dict[str, Any] = {}) -> str:
config: BackendConfiguration = self.configuration()
# output data format default depends on the number of experiments. QIR backends
# that don't define a default in their azure config will use this value
# Once more than one experiment is supported, we should always use the v2 format
default_output_data_format = (
MICROSOFT_OUTPUT_DATA_FORMAT
if config.max_experiments == 1
else MICROSOFT_OUTPUT_DATA_FORMAT_V2
)
azure_config: Dict[str, Any] = config.azure
# if the backend defines an output format, use that over the default
azure_defined_override = azure_config.get(
"output_data_format", default_output_data_format
)
# if the user specifies an output format, use that over the default azure config
output_data_format = options.pop("output_data_format", azure_defined_override)
return output_data_format
def _get_input_params(self, options: Dict[str, Any], shots: int = None) -> Dict[str, Any]:
# Backend options are mapped to input_params.
input_params: Dict[str, Any] = vars(self.options).copy()
# Determine shots number, if needed.
if self._can_send_shots_input_param():
options_shots = options.pop(self.__class__._SHOTS_PARAM_NAME, None)
final_shots = None
# First we check for the explicitly specified 'shots' parameter, then for a provider-specific
# field in options, then for a backend's default value.
# Warn about options conflict, default to 'shots'.
if shots is not None and options_shots is not None:
warnings.warn(
f"Parameter 'shots' conflicts with the '{self.__class__._SHOTS_PARAM_NAME}' parameter. "
"Please, provide only one option for setting shots. Defaulting to 'shots' parameter."
)
final_shots = shots
elif shots is not None:
final_shots = shots
elif options_shots is not None:
warnings.warn(
f"Parameter '{self.__class__._SHOTS_PARAM_NAME}' is subject to change in future versions. "
"Please, use 'shots' parameter instead."
)
final_shots = options_shots
# If nothing is found, try to get from default values.
if final_shots is None:
final_shots = input_params.get(self.__class__._SHOTS_PARAM_NAME)
# Also add all possible shots options into input_params to make sure
# that all backends covered.
# TODO: Double check all backends for shots options in order to remove this extra check.
input_params["shots"] = final_shots
input_params["count"] = final_shots
# Safely removing "shots" and "count" from options as they will be passed in input_params now.
_ = options.pop("shots", None)
_ = options.pop("count", None)
input_params[self.__class__._SHOTS_PARAM_NAME] = final_shots
if "items" in options:
input_params["items"] = options.pop("items")
        # Also take into consideration options passed in via options, as they take
        # precedence over default values:
for opt in options.copy():
if opt in input_params:
input_params[opt] = options.pop(opt)
return input_params
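    # Illustrative note (added; not part of the original file): for a backend whose
    # _SHOTS_PARAM_NAME is "count", run(circ, shots=100, count=200) warns about the
    # conflict and resolves to 100; input_params then carries "shots", "count",
    # and the provider-specific parameter, all set to 100.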
def _run(self, job_name, input_data, input_params, metadata, **options):
logger.info(f"Submitting new job for backend {self.name()}")
# The default of these job parameters come from the AzureBackend configuration:
config = self.configuration()
blob_name = options.pop("blob_name", config.azure["blob_name"])
content_type = options.pop("content_type", config.azure["content_type"])
provider_id = options.pop("provider_id", config.azure["provider_id"])
input_data_format = options.pop(
"input_data_format", config.azure["input_data_format"]
)
output_data_format = self._get_output_data_format(options)
# QIR backends will have popped "targetCapability" to configure QIR generation.
        # Anything left here is an invalid parameter from a user attempting to use
        # deprecated parameters.
targetCapability = input_params.get("targetCapability", None)
if (
targetCapability not in [None, "qasm"]
and input_data_format != "qir.v1"
):
message = "The targetCapability parameter has been deprecated and is only supported for QIR backends."
message += os.linesep
message += "To find a QIR capable backend, use the following code:"
message += os.linesep
message += (
f'\tprovider.get_backend("{self.name()}", input_data_format: "qir.v1").'
)
raise ValueError(message)
job = AzureQuantumJob(
backend=self,
target=self.name(),
name=job_name,
input_data=input_data,
blob_name=blob_name,
content_type=content_type,
provider_id=provider_id,
input_data_format=input_data_format,
output_data_format=output_data_format,
input_params=input_params,
metadata=metadata,
**options,
)
return job
def _normalize_run_input_params(self, run_input, **options):
if "circuit" not in options:
# circuit is not provided, check if there is run_input
if run_input:
return run_input
else:
raise ValueError("No input provided.")
if run_input:
# even though circuit is provided, we still have run_input
warnings.warn(
DeprecationWarning(
"The circuit parameter has been deprecated and will be ignored."
)
)
return run_input
else:
warnings.warn(
DeprecationWarning(
"The circuit parameter has been deprecated. Please use the run_input parameter."
)
)
# we don't have run_input
# we know we have circuit parameter, but it may be empty
circuit = options.get("circuit")
if circuit:
return circuit
else:
raise ValueError("No input provided.")
def _get_azure_workspace(self) -> "Workspace":
return self.provider().get_workspace()
def _get_azure_target_id(self) -> str:
return self.name()
def _get_azure_provider_id(self) -> str:
return self._azure_config()["provider_id"]
class AzureQirBackend(AzureBackendBase):
@abstractmethod
def __init__(
self, configuration: BackendConfiguration, provider: Provider = None, **fields
):
super().__init__(configuration, provider, **fields)
def _azure_config(self) -> Dict[str, str]:
return {
"blob_name": "inputData",
"content_type": "qir.v1",
"input_data_format": "qir.v1",
}
def run(
self,
run_input: Union[QuantumCircuit, List[QuantumCircuit]] = [],
shots: int = None,
**options,
) -> AzureQuantumJob:
"""Run on the backend.
This method returns a
:class:`~azure.quantum.qiskit.job.AzureQuantumJob` object
that runs circuits.
Args:
run_input (QuantumCircuit or List[QuantumCircuit]): An individual or a
list of :class:`~qiskit.circuits.QuantumCircuit` to run on the backend.
shots (int, optional): Number of shots, defaults to None.
options: Any kwarg options to pass to the backend for running the
config. If a key is also present in the options
attribute/object then the expectation is that the value
specified will be used instead of what's set in the options
object.
Returns:
Job: The job object for the run
"""
run_input = self._normalize_run_input_params(run_input, **options)
options.pop("run_input", None)
options.pop("circuit", None)
        circuits = []
if isinstance(run_input, QuantumCircuit):
circuits = [run_input]
else:
circuits = run_input
max_circuits_per_job = self.configuration().max_experiments
if len(circuits) > max_circuits_per_job:
raise NotImplementedError(
f"This backend only supports running a maximum of {max_circuits_per_job} circuits per job."
)
# config normalization
input_params = self._get_input_params(options, shots=shots)
shots_count = None
if self._can_send_shots_input_param():
shots_count = input_params.get(self.__class__._SHOTS_PARAM_NAME)
job_name = ""
if len(circuits) > 1:
job_name = f"batch-{len(circuits)}"
if shots_count is not None:
job_name = f"{job_name}-{shots_count}"
else:
job_name = circuits[0].name
job_name = options.pop("job_name", job_name)
metadata = options.pop("metadata", self._prepare_job_metadata(circuits))
input_data = self._translate_input(circuits, input_params)
job = super()._run(job_name, input_data, input_params, metadata, **options)
logger.info(
f"Submitted job with id '{job.id()}' with shot count of {shots_count}:"
)
return job
def _prepare_job_metadata(self, circuits: List[QuantumCircuit]) -> Dict[str, str]:
"""Returns the metadata relative to the given circuits that will be attached to the Job"""
if len(circuits) == 1:
circuit: QuantumCircuit = circuits[0]
return {
"qiskit": str(True),
"name": circuit.name,
"num_qubits": circuit.num_qubits,
"metadata": json.dumps(circuit.metadata),
}
# for batch jobs, we don't want to store the metadata of each circuit
# we fill out the result header in output processing.
        # These headers don't matter for execution and are only used for
# result processing.
return {}
def _generate_qir(
self, circuits, targetCapability, **to_qir_kwargs
) -> Tuple[Module, List[str]]:
config = self.configuration()
# Barriers aren't removed by transpilation and must be explicitly removed in the Qiskit to QIR translation.
emit_barrier_calls = "barrier" in config.basis_gates
return to_qir_module(
circuits,
targetCapability,
emit_barrier_calls=emit_barrier_calls,
**to_qir_kwargs,
)
def _get_qir_str(self, circuits, targetCapability, **to_qir_kwargs) -> str:
module, _ = self._generate_qir(circuits, targetCapability, **to_qir_kwargs)
return str(module)
def _translate_input(
self, circuits: List[QuantumCircuit], input_params: Dict[str, Any]
) -> bytes:
"""Translates the input values to the QIR expected by the Backend."""
logger.info(f"Using QIR as the job's payload format.")
config = self.configuration()
# Override QIR translation parameters
# We will record the output by default, but allow the backend to override this, and allow the user to override the backend.
to_qir_kwargs = input_params.pop(
"to_qir_kwargs", config.azure.get("to_qir_kwargs", {"record_output": True})
)
targetCapability = input_params.pop(
"targetCapability",
self.options.get("targetCapability", "AdaptiveExecution"),
)
if logger.isEnabledFor(logging.DEBUG):
qir = self._get_qir_str(circuits, targetCapability, **to_qir_kwargs)
logger.debug(f"QIR:\n{qir}")
# We'll transpile automatically to the supported gates in QIR unless explicitly skipped.
if not input_params.pop("skipTranspile", False):
# Set of gates supported by QIR targets.
circuits = transpile(
circuits, basis_gates=config.basis_gates, optimization_level=0
)
# We'll only log the QIR again if we performed a transpilation.
if logger.isEnabledFor(logging.DEBUG):
qir = self._get_qir_str(circuits, targetCapability, **to_qir_kwargs)
logger.debug(f"QIR (Post-transpilation):\n{qir}")
(module, entry_points) = self._generate_qir(
circuits, targetCapability, **to_qir_kwargs
)
if not "items" in input_params:
arguments = input_params.pop("arguments", [])
input_params["items"] = [
{"entryPoint": name, "arguments": arguments} for name in entry_points
]
return module.bitcode
class AzureBackend(AzureBackendBase):
"""Base class for interfacing with a backend in Azure Quantum"""
@abstractmethod
def __init__(
self, configuration: BackendConfiguration, provider: Provider = None, **fields
):
super().__init__(configuration, provider, **fields)
backend_name = None
def _prepare_job_metadata(self, circuit):
"""Returns the metadata relative to the given circuit that will be attached to the Job"""
return {
"qiskit": True,
"name": circuit.name,
"num_qubits": circuit.num_qubits,
"metadata": json.dumps(circuit.metadata),
}
@abstractmethod
def _translate_input(self, circuit):
pass
def run(
self,
run_input: Union[QuantumCircuit, List[QuantumCircuit]] = [],
shots: int = None,
**options,
):
"""Submits the given circuit to run on an Azure Quantum backend."""
circuit = self._normalize_run_input_params(run_input, **options)
options.pop("run_input", None)
options.pop("circuit", None)
# Some Qiskit features require passing lists of circuits, so unpack those here.
# We currently only support single-experiment jobs.
if isinstance(circuit, (list, tuple)):
if len(circuit) > 1:
raise NotImplementedError("Multi-experiment jobs are not supported!")
circuit = circuit[0]
# If the circuit was created using qiskit.assemble,
# disassemble into QASM here
if isinstance(circuit, QasmQobj) or isinstance(circuit, PulseQobj):
from qiskit.assembler import disassemble
circuits, run, _ = disassemble(circuit)
circuit = circuits[0]
if options.get("shots") is None:
                # Note that qiskit.assembler.disassemble() sets the default number of shots
                # for QasmQobj and PulseQobj to 1024 unless a different value was specified.
options["shots"] = run["shots"]
# If not provided as options, the values of these parameters
# are calculated from the circuit itself:
job_name = options.pop("job_name", circuit.name)
metadata = options.pop("metadata", self._prepare_job_metadata(circuit))
input_params = self._get_input_params(options, shots=shots)
input_data = self._translate_input(circuit)
job = super()._run(job_name, input_data, input_params, metadata, **options)
shots_count = None
if self._can_send_shots_input_param():
shots_count = input_params.get(self.__class__._SHOTS_PARAM_NAME)
logger.info(
f"Submitted job with id '{job.id()}' for circuit '{circuit.name}' with shot count of {shots_count}:"
)
logger.info(input_data)
return job
def _get_shots_or_deprecated_count_input_param(
param_name: str,
shots: int = None,
count: int = None,
) -> Optional[int]:
"""
This helper function checks if the deprecated 'count' option is specified.
In earlier versions it was possible to pass this option to specify shots number for a job,
but now we only check for it for compatibility reasons.
"""
final_shots = None
if shots is not None:
final_shots = shots
elif count is not None:
final_shots = count
warnings.warn(
"The 'count' parameter will be deprecated. "
f"Please, use '{param_name}' parameter instead.",
category=DeprecationWarning,
)
return final_shots
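# Illustrative note (added; not part of the original file):
#   _get_shots_or_deprecated_count_input_param("shots", shots=None, count=500)
# returns 500 and emits a DeprecationWarning pointing callers at 'shots';
# when both are given, 'shots' wins without a warning.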

Source: azure-quantum-python/azure-quantum/azure/quantum/qiskit/backends/backend.py

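To make the flow above concrete, here is a hedged sketch of running a circuit through one of these backends; the workspace parameters are placeholders, and `ionq.simulator` is a target the unit tests below also use:

```python
from qiskit import QuantumCircuit
from azure.quantum import Workspace
from azure.quantum.qiskit import AzureQuantumProvider

workspace = Workspace(
    subscription_id="<subscription-id>",  # placeholder
    resource_group="<resource-group>",    # placeholder
    name="<workspace-name>",              # placeholder
    location="<location>",                # placeholder
)
provider = AzureQuantumProvider(workspace=workspace)
backend = provider.get_backend("ionq.simulator")

circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# 'shots' is the preferred way to set the shot count; provider-specific aliases
# such as 'count' still work but emit a deprecation warning (see above).
job = backend.run(circuit, shots=100)
result = job.result()
print(result.get_counts())
```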
import collections.abc
from typing import Any, Dict, Union
from azure.quantum.job import JobFailedWithResultsError
from azure.quantum.job.job import Job, DEFAULT_TIMEOUT
from azure.quantum._client.models import JobDetails
class MicrosoftElementsDftJob(Job):
"""
A dedicated job class for jobs from the microsoft.dft target.
"""
def __init__(self, workspace, job_details: JobDetails, **kwargs):
"""Azure Quantum Job that is submitted to a given Workspace.
:param workspace: Workspace instance to submit job to
:type workspace: Workspace
:param job_details: Job details model,
contains Job ID, name and other details
:type job_details: JobDetails
"""
super().__init__(workspace, job_details, **kwargs)
def get_results(self, timeout_secs: float = DEFAULT_TIMEOUT) -> Dict[str, Any]:
"""Get job results by downloading the results blob from the
storage container linked via the workspace.
:param timeout_secs: Timeout in seconds, defaults to 300
:type timeout_secs: float
:raises: :class:`RuntimeError` if job execution failed.
:raises: :class:`azure.quantum.job.JobFailedWithResultsError` if job execution failed,
but failure results could still be retrieved.
:return: Results dictionary.
"""
try:
job_results = super().get_results(timeout_secs)
return job_results
        except JobFailedWithResultsError as e:
            failure_results = e.get_failure_results()
            if MicrosoftElementsDftJob._is_dft_failure_results(failure_results):
                error = failure_results["results"][0]["error"]
                message = f'{e.get_message()} Error type: {error["error_type"]}. Message: {error["error_message"]}'
                raise JobFailedWithResultsError(message, failure_results) from None
            # Re-raise the original error when the failure payload is not in the
            # expected DFT shape, so failures are never silently swallowed.
            raise
@classmethod
def _allow_failure_results(cls) -> bool:
"""
Allow to download job results even if the Job status is "Failed".
"""
return True
@staticmethod
def _is_dft_failure_results(failure_results: Union[Dict[str, Any], str]) -> bool:
return isinstance(failure_results, dict) \
and "results" in failure_results \
and isinstance(failure_results["results"], collections.abc.Sequence) \
and len(failure_results["results"]) > 0 \
and isinstance(failure_results["results"][0], dict) \
and "error" in failure_results["results"][0] \
and isinstance(failure_results["results"][0]["error"], dict) \
and "error_type" in failure_results["results"][0]["error"] \
and "error_message" in failure_results["results"][0]["error"]

Source: azure-quantum-python/azure-quantum/azure/quantum/target/microsoft/elements/dft/job.py

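A sketch of what the enriched failure above looks like from the caller's side, assuming the workspace resolves microsoft.dft jobs to this class; the job id is a placeholder:

```python
from azure.quantum import Workspace
from azure.quantum.job import JobFailedWithResultsError

workspace = Workspace(
    subscription_id="<subscription-id>",  # placeholder
    resource_group="<resource-group>",    # placeholder
    name="<workspace-name>",              # placeholder
    location="<location>",                # placeholder
)
job = workspace.get_job(job_id="<dft-job-id>")  # placeholder id of a microsoft.dft job

try:
    results = job.get_results(timeout_secs=300)
except JobFailedWithResultsError as e:
    # For DFT failures the message embeds error_type/error_message pulled from
    # the failure blob; the raw payload remains available for inspection.
    print(e.get_message())
    payload = e.get_failure_results()
```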
$env:AZURE_QUANTUM_ENV = $null
$env:AZURE_CLIENT_ID = $null
$env:AZURE_CLIENT_SECRET = $null
$env:AZURE_TENANT_ID = $null
$env:AZURE_TEST_RUN_LIVE = $null
$env:AZURE_QUANTUM_WORKSPACE_RG = $null
$env:SUBSCRIPTION_ID = $null
$env:AZURE_QUANTUM_WORKSPACE_NAME = $null
$env:AZURE_QUANTUM_WORKSPACE_LOCATION = $null
$env:AZURE_SUBSCRIPTION_ID = $env:SUBSCRIPTION_ID
$env:AZURE_RESOURCE_GROUP = $env:AZURE_QUANTUM_WORKSPACE_RG
$env:QUANTUM_TOKEN_FILE = $null
$env:AZURE_QUANTUM_CONNECTION_STRING = $null

Source: azure-quantum-python/azure-quantum/eng/Clear-Env-Vars.ps1

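The script above clears the environment variables the live-test harness reads. For reference, a hedged sketch of building a workspace from the same variables in Python (variable names taken from the script; whether the SDK reads any of them implicitly depends on the SDK version):

```python
import os
from azure.quantum import Workspace

workspace = Workspace(
    subscription_id=os.environ["SUBSCRIPTION_ID"],
    resource_group=os.environ["AZURE_QUANTUM_WORKSPACE_RG"],
    name=os.environ["AZURE_QUANTUM_WORKSPACE_NAME"],
    location=os.environ["AZURE_QUANTUM_WORKSPACE_LOCATION"],
)
```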
##
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
##
from typing import Dict
import time
import pytest
from common import QuantumTestBase, DEFAULT_TIMEOUT_SECS
from test_job_payload_factory import JobPayloadFactory
from azure.quantum import Job, JobStatus, Session, SessionStatus, SessionJobFailurePolicy
from azure.quantum.qiskit.backends.quantinuum import QuantinuumQPUQirBackend
from azure.quantum.qiskit.provider import AzureQuantumProvider
from import_qsharp import skip_if_no_qsharp
ECHO_PROVIDER_NAME = "microsoft.test"
class TestSession(QuantumTestBase):
@pytest.mark.live_test
@pytest.mark.session
def test_session_list_top_level_items(self):
workspace = self.create_workspace()
result = workspace.list_top_level_items()
result_types = map(type, result)
self.assertIn(Job, result_types)
self.assertIn(Session, result_types)
@pytest.mark.live_test
@pytest.mark.session
def test_session_list_sessions(self):
workspace = self.create_workspace()
result = workspace.list_sessions()
result_types = map(type, result)
self.assertIn(Session, result_types)
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.echo_targets
def test_session_get_session(self):
workspace = self.create_workspace()
session = Session(workspace=workspace,
name="My Session",
target="echo-quantinuum",
provider_id=ECHO_PROVIDER_NAME)
self.assertIsNone(session.details.status)
session.open()
self.assertEqual(session.details.status, SessionStatus.WAITING)
obtained_session = workspace.get_session(session_id=session.id)
self.assertIsInstance(obtained_session, Session)
self.assertEqual(obtained_session.id, session.id)
self.assertEqual(obtained_session.details.id, session.details.id)
self.assertEqual(obtained_session.details.target, session.details.target)
self.assertEqual(obtained_session.details.provider_id, session.details.provider_id)
self.assertEqual(obtained_session.details.name, session.details.name)
self.assertEqual(obtained_session.details.status, session.details.status)
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.echo_targets
def test_session_open_close(self):
workspace = self.create_workspace()
session = Session(workspace=workspace,
target="echo-quantinuum",
provider_id=ECHO_PROVIDER_NAME)
self.assertIsNone(session.details.status)
session.open()
self.assertEqual(session.details.status, SessionStatus.WAITING)
session.close()
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.echo_targets
def test_session_target_open_session(self):
target = self._get_target("echo-quantinuum")
self.assertIsNone(target.latest_session)
session = target.open_session()
self.assertIsNotNone(target.latest_session)
self.assertEqual(target.latest_session.id, session.id)
self.assertEqual(target.get_latest_session_id(), session.id)
self.assertEqual(session.details.status, SessionStatus.WAITING)
session.close()
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.echo_targets
def test_session_with_target_open_session(self):
target = self._get_target("echo-quantinuum")
self.assertIsNone(target.latest_session)
with target.open_session() as session:
self.assertIsNotNone(target.latest_session)
self.assertEqual(target.latest_session.id, session.id)
self.assertEqual(target.get_latest_session_id(), session.id)
self.assertEqual(session.details.status, SessionStatus.WAITING)
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
def _get_cirq_target(self, target_name):
workspace = self.create_workspace()
if "echo-quantinuum" in target_name:
from azure.quantum.cirq.targets import QuantinuumTarget
return QuantinuumTarget(workspace=workspace,
provider_id=ECHO_PROVIDER_NAME,
name=target_name)
from azure.quantum.cirq import AzureQuantumService
service = AzureQuantumService(workspace=workspace)
target = service.get_target(target_name)
self.assertIsNotNone(target)
return target
def _test_session_job_cirq_circuit(self, target_name):
workspace = self.create_workspace()
target = self._get_cirq_target(target_name)
circuit = JobPayloadFactory.get_cirq_circuit_bell_state()
with target.open_session() as session:
self.assertEqual(session.details.status, SessionStatus.WAITING)
session_id = session.id
job1 = target.submit(circuit, name="Job 1")
azure_job = job1._azure_job if hasattr(job1, '_azure_job') \
else workspace.get_job(job_id=job1._job["id"])
target.submit(circuit, name="Job 2")
azure_job.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.EXECUTING)
session = workspace.get_session(session_id=session_id)
session_jobs = session.list_jobs()
self.assertEqual(len(session_jobs), 2)
self.assertEqual(session_jobs[0].details.name, "Job 1")
self.assertEqual(session_jobs[1].details.name, "Job 2")
[job.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS) for job in session_jobs]
session.refresh()
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
def _get_qiskit_backend(self, target_name):
from azure.quantum.qiskit import AzureQuantumProvider
workspace = self.create_workspace()
provider = AzureQuantumProvider(workspace=workspace)
if "echo-quantinuum" in target_name:
return EchoQuantinuumQPUQirBackend("echo-quantinuum", provider)
backend = provider.get_backend(target_name)
self.assertIsNotNone(backend)
return backend
def _test_session_job_qiskit_circuit(self, target_name):
workspace = self.create_workspace()
backend = self._get_qiskit_backend(target_name)
circuit = JobPayloadFactory.get_qiskit_circuit_bell_state()
with backend.open_session() as session:
self.assertEqual(session.details.status, SessionStatus.WAITING)
session_id = session.id
job1 = backend.run(circuit, shots=100, job_name="Job 1")
backend.run(circuit, shots=100, job_name="Job 2")
job1.wait_for_final_state()
session.refresh()
self.assertEqual(session.details.status, SessionStatus.EXECUTING)
session = workspace.get_session(session_id=session_id)
session_jobs = session.list_jobs()
self.assertEqual(len(session_jobs), 2)
self.assertEqual(session_jobs[0].details.name, "Job 1")
self.assertEqual(session_jobs[1].details.name, "Job 2")
[job.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS) for job in session_jobs]
session.refresh()
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
def _get_target(self, target_name):
workspace = self.create_workspace()
if "echo-quantinuum" in target_name:
from azure.quantum.target.quantinuum import Quantinuum
target = Quantinuum(workspace=workspace,
name=target_name,
provider_id=ECHO_PROVIDER_NAME)
return target
target = workspace.get_targets(target_name)
self.assertIsNotNone(target)
return target
def _test_session_job_qsharp_callable(self, target_name):
workspace = self.create_workspace()
target = self._get_target(target_name)
qsharp_callable = JobPayloadFactory.get_qsharp_inline_callable_bell_state()
output_data_format = "honeywell.qir.v1" if "echo-quantinuum" in target_name \
else target._qir_output_data_format()
with target.open_session() as session:
self.assertEqual(session.details.status, SessionStatus.WAITING)
session_id = session.id
job1 = target.submit(qsharp_callable,
name="Job 1",
output_data_format=output_data_format)
target.submit(qsharp_callable,
name="Job 2",
output_data_format=output_data_format)
job1.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.EXECUTING)
session = workspace.get_session(session_id=session_id)
session_jobs = session.list_jobs()
self.assertEqual(len(session_jobs), 2)
self.assertEqual(session_jobs[0].details.name, "Job 1")
self.assertEqual(session_jobs[1].details.name, "Job 2")
[job.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS) for job in session_jobs]
session.refresh()
self.assertEqual(session.details.status, SessionStatus.SUCCEEDED)
def _test_session_job_failure_policies(self, target_name):
"""
This test case checks the session job failure policies
behavior in more detail.
First it checks the behavior of the SessionJobFailurePolicy.ABORT
policy by submitting a failing job right away.
Then it checks the behavior of the SessionJobFailurePolicy.CONTINUE
policy by submitting a failing job in the middle of two
successful jobs.
        Note: all other tests that submit jobs for a session
        default to using the SessionJobFailurePolicy.ABORT policy
        and check the expected behavior by asserting that the
        session status transitions WAITING -> EXECUTING -> SUCCEEDED.
"""
workspace = self.create_workspace()
target = self._get_target(target_name)
qsharp_callable = JobPayloadFactory.get_qsharp_inline_callable_bell_state()
output_data_format = "honeywell.qir.v1" if "echo-quantinuum" in target_name else None
with target.open_session(job_failure_policy=SessionJobFailurePolicy.ABORT) as session:
self.assertEqual(session.details.status, SessionStatus.WAITING)
# pass an invalid output_data_format to make the job fail
job1 = target.submit(qsharp_callable,
name="Bad Job 1",
output_data_format="invalid_output_format")
job1.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
self.assertEqual(job1.details.status, JobStatus.FAILED)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.FAILED)
from azure.core.exceptions import HttpResponseError
with self.assertRaises(HttpResponseError) as context:
target.submit(qsharp_callable,
name="Good Job 2",
output_data_format=output_data_format)
self.assertIn("Session is already in a terminal state.",
context.exception.message)
session_jobs = session.list_jobs()
self.assertEqual(len(session_jobs), 1)
self.assertEqual(session_jobs[0].details.name, "Bad Job 1")
with target.open_session(job_failure_policy=SessionJobFailurePolicy.CONTINUE) as session:
self.assertEqual(session.details.status, SessionStatus.WAITING)
job1 = target.submit(qsharp_callable,
name="Good Job 1",
output_data_format=output_data_format)
job1.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
self.assertEqual(job1.details.status, JobStatus.SUCCEEDED)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.EXECUTING)
# pass an invalid output_data_format to make the job fail
job2 = target.submit(qsharp_callable,
name="Bad Job 2",
output_data_format="invalid_output_format")
job2.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
self.assertEqual(job2.details.status, JobStatus.FAILED)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.FAILURE_S_)
job3 = target.submit(qsharp_callable,
name="Good Job 3",
output_data_format=output_data_format)
job3.wait_until_completed(timeout_secs=DEFAULT_TIMEOUT_SECS)
self.assertEqual(job3.details.status, JobStatus.SUCCEEDED)
session.refresh()
self.assertEqual(session.details.status, SessionStatus.FAILURE_S_)
session_jobs = session.list_jobs()
self.assertEqual(len(session_jobs), 3)
self.assertEqual(session_jobs[0].details.name, "Good Job 1")
self.assertEqual(session_jobs[1].details.name, "Bad Job 2")
self.assertEqual(session_jobs[2].details.name, "Good Job 3")
# Session job failure policy tests
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qsharp
@pytest.mark.echo_targets
@skip_if_no_qsharp
def test_session_job_failure_policies_echo_quantinuum(self):
self._test_session_job_failure_policies(target_name="echo-quantinuum")
# Session support for Cirq jobs
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.cirq
@pytest.mark.ionq
def test_session_job_cirq_circuit_ionq(self):
self._test_session_job_cirq_circuit(target_name="ionq.simulator")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.cirq
@pytest.mark.quantinuum
def test_session_job_cirq_circuit_quantinuum(self):
self._test_session_job_cirq_circuit(target_name="quantinuum.sim.h1-1e")
@pytest.mark.skip(reason="Currently the echo-quantinuum is only accepting QIR input formats and Cirq is using qasm.")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.cirq
@pytest.mark.echo_targets
def test_session_job_cirq_circuit_echo_quantinuum(self):
self._test_session_job_cirq_circuit(target_name="echo-quantinuum")
# Session support for Qiskit jobs
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qiskit
@pytest.mark.ionq
def test_session_job_qiskit_circuit_ionq(self):
self._test_session_job_qiskit_circuit(target_name="ionq.simulator")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qiskit
@pytest.mark.quantinuum
def test_session_job_qiskit_circuit_quantinuum(self):
self._test_session_job_qiskit_circuit(target_name="quantinuum.sim.h1-1e")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qiskit
@pytest.mark.echo_targets
def test_session_job_qiskit_circuit_echo_quantinuum(self):
self._test_session_job_qiskit_circuit(target_name="echo-quantinuum")
# Session support for Q# jobs
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qsharp
@pytest.mark.quantinuum
@skip_if_no_qsharp
def test_session_job_qsharp_callable_quantinuum(self):
self._test_session_job_qsharp_callable(target_name="quantinuum.sim.h1-1e")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qsharp
@pytest.mark.echo_targets
@skip_if_no_qsharp
def test_session_job_qsharp_callable_echo_quantinuum(self):
self._test_session_job_qsharp_callable(target_name="echo-quantinuum")
@pytest.mark.skip(reason="IonQ does not support Q#->QIR programs yet.")
@pytest.mark.live_test
@pytest.mark.session
@pytest.mark.qsharp
@pytest.mark.ionq
@skip_if_no_qsharp
def test_session_job_qsharp_callable_ionq(self):
self._test_session_job_qsharp_callable(target_name="ionq.simulator")
class EchoQuantinuumQPUQirBackend(QuantinuumQPUQirBackend):
def _azure_config(self) -> Dict[str, str]:
config = super()._azure_config()
config.update(
{
"provider_id": ECHO_PROVIDER_NAME,
"output_data_format": "honeywell.qir.v1"
}
)
return config
def __init__(self,
name: str,
provider: AzureQuantumProvider,
**kwargs):
super().__init__(name=name, provider=provider)
self._provider_id = ECHO_PROVIDER_NAME
self._provider_name = ECHO_PROVIDER_NAME

Source: azure-quantum-python/azure-quantum/tests/unit/test_session.py

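Distilled from the tests above, the basic session pattern through the Qiskit front end looks like this; the workspace parameters are placeholders, and passing `job_failure_policy` through `open_session` mirrors the target-based tests:

```python
from qiskit import QuantumCircuit
from azure.quantum import Workspace, SessionJobFailurePolicy
from azure.quantum.qiskit import AzureQuantumProvider

workspace = Workspace(
    subscription_id="<subscription-id>",  # placeholder
    resource_group="<resource-group>",    # placeholder
    name="<workspace-name>",              # placeholder
    location="<location>",                # placeholder
)
provider = AzureQuantumProvider(workspace=workspace)
backend = provider.get_backend("ionq.simulator")

circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# ABORT (the default exercised by most tests above) terminates the session on the
# first failed job; CONTINUE keeps accepting jobs and leaves a failed session in
# the "Failure(s)" state.
with backend.open_session(job_failure_policy=SessionJobFailurePolicy.CONTINUE) as session:
    job1 = backend.run(circuit, shots=100, job_name="Job 1")
    job2 = backend.run(circuit, shots=100, job_name="Job 2")
    job1.wait_for_final_state()
    session.refresh()
    print(session.details.status)
```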
## A practical introduction into quantum signal processing (QSP)

In this notebook we are going to implement and run the single-qubit quantum circuits used to illustrate quantum signal processing in [arXiv:2110.11327](https://arxiv.org/abs/2110.11327) and [arXiv:2105.02859](https://arxiv.org/abs/2105.02859). Quantum signal processing is a systematic framework to transform quantum systems with respect to almost arbitrary polynomial functions. We first include some functions from Qiskit, NumPy, SymPy, as well as Matplotlib.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.quantum_info import Operator
import numpy as np
import numpy.random as random
import sympy as sp
import matplotlib.pyplot as plt
from math import acos
# Printing configuration
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from IPython.display import display, Markdown
```

### 1. QSP polynomials

To illustrate quantum signal processing, consider a sequence in which we alternate a *signal* rotation operator

$$ W(x) = \begin{pmatrix} x & \mathrm{i}\sqrt{1 - x^2} \\ \mathrm{i}\sqrt{1 - x^2} & x \end{pmatrix} $$

for a *fixed* value $x \in [-1, 1]$ and a *processing* rotation operator

$$ S(\phi) = e^{\mathrm{i}\phi Z} $$

for variable real numbers $\phi$. These two matrices are unitary and can be implemented using $R_x(-2\cos^{-1} x) = W(x)$ and $R_z(-2 \phi) = S(\phi)$ operations as follows.

```python
class WGate(QuantumCircuit):
"""Defines the W(x) gate in terms of a Rx rotation"""
def __init__(self, x):
super().__init__(1)
super().rx(-2.0 * acos(x), 0)
class SGate(QuantumCircuit):
"""Defines the S(ɸ) gate in terms of a Rz rotation"""
def __init__(self, phi):
super().__init__(1)
        super().rz(-2.0 * phi, 0)
```

We also define two functions that compute the matrices explicitly as defined above. We are using SymPy here, so we can later evaluate them symbolically to create various polynomials.

```python
def expected_w(x):
"""Return W(x) as a SymPy matrix"""
return sp.Matrix([[x, 1j * sp.sqrt(1 - x**2)], [1j * sp.sqrt(1 - x**2), x]])
def expected_s(phi):
"""Return S(x) as a SymPy matrix"""
    return sp.diag([sp.exp(1j * phi), sp.exp(-1j * phi)], unpack=True)
```

Let's now test whether the `WGate` and the `SGate` do compute the expected matrices. For this purpose, we create quantum circuits with a single gate and random values for $x$ and $\phi$ and compare the unitary of the circuit with the expected unitary from the function.

```python
# Check for 100 random numbers
for _ in range(100):
# Draw an x in [-1, 1] to test WGate
x = random.rand() * 2 - 1
circ = QuantumCircuit(1)
circ.append(WGate(x), [0])
# Run circuit and return matrix
actual = Operator(circ).data
# Expected result from definition (transforms SymPy matrix into NumPy array)
expected = np.array(expected_w(x).evalf()).astype(complex)
assert np.allclose(actual, expected)
# Draw a ɸ in [0, 2π] to test SGate
phi = random.rand() * np.pi
circ = QuantumCircuit(1)
circ.append(SGate(phi), [0])
# Run circuit and return matrix
actual = Operator(circ).data
# Expected result from definition (transforms SymPy matrix into NumPy array)
expected = np.array(expected_s(phi).evalf()).astype(complex)
assert np.allclose(actual, expected)
print("All tests passed.")<jupyter_output><empty_output><jupyter_text>Now that we have a better understanding of $W(x)$ and $S(\phi)$, let's get back to quantum signal processing, or in short QSP.For a set of so-called *QSP phases* $\vec \phi = (\phi_0, \dots, \phi_d)$ we can construct the unitary matrix$$ U^{\vec\phi} = S(\phi_0)\prod_{i=1}^d W(x) S(\phi_i) = \begin{pmatrix} P(x) & \mathrm{i}Q(x) \sqrt{1-x^2} \\ \mathrm{i}Q^*(x)\sqrt{1-x^2} & P^*(x) \end{pmatrix}$$for some polynomials $P(x)$ and $Q(x)$ such that $\mathrm{deg}(P) \le d$, $\mathrm{deg}(Q) < d$. You can find more details about this remarkable result in [arXiv:1603.03996](https://arxiv.org/abs/1603.03996).Also the inverse direction works! Given any polynomial $P(x)$ that fulfills the requirement properties and some error bound $\epsilon$ we can find QSP phases $\vec\phi$ from which we can build the alternating gate sequence $U^{\vec\phi}$. Algorithms to find these phases are described, e.g., in [arXiv:1806.10236](https://arxiv.org/abs/1806.10236) and [arXiv:2003.02831](https://arxiv.org/abs/2003.02831). But instead of using such approaches to find phases for some polynomial, we are picking some phases and use whatever polynomial results from them in our experiments.<jupyter_code>def qsp_polynomial(x, phases):
"""
Given 𝑑 + 1 QSP phases ɸ[0], ..., ɸ[𝑑], returns the unitary matrix U^ɸ as described above,
the polynomial in its top-left corner, and the expression in the top-right corner."""
assert phases, "phases cannot be empty"
poly = expected_s(phases[0])
for phi in phases[1:]:
poly = poly * expected_w(x) * expected_s(phi)
poly = sp.simplify(poly)
    return poly, poly[0, 0], poly[0, 1]
```

Let's use these functions to create a polynomial of degree 3 with phases $0.1$, $0.2$, $0.3$, and $0.4$ (four phases $\phi_0, \dots, \phi_3$ give $d = 3$).

```python
# For now we want the polynomial symbolically for some variable 𝑥
x = sp.Symbol('x')
phases = [0.1, 0.2, 0.3, 0.4]
# For our first experiment we do not need the polynomial in the top-right corner
U, poly, _ = qsp_polynomial(x, phases)
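# Sanity check: U^ɸ must be unitary for every x in [-1, 1], which implies the
# identity |P(x)|² + (1 - x²)·|Q(x)|² = 1. We spot-check this numerically at an
# arbitrary sample point (a quick sketch; the experiments below do not depend on it).
x_test = 0.5
U_num = np.array(U.evalf(subs={'x': x_test})).astype(complex)
assert np.allclose(U_num @ U_num.conj().T, np.eye(2))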
# We use this to circumvent a display problem with SymPy
display(Markdown(poly._repr_latex_()))<jupyter_output><empty_output><jupyter_text>Let's also plot the squared magnitude $|P(x)|^2$ of this polynomial in the range $x \in [-1, 1]$.<jupyter_code># translate amplitude polynomial into probability polynomial
poly_abs = sp.Abs(poly)**2
# discretization points (this will correspond to the number of quantum jobs)
discretization_points = 30
xs = np.linspace(-1, 1, discretization_points)[1:-1]  # drop the endpoints x = ±1
ys = [poly_abs.evalf(subs={'x': x}) for x in xs]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(xs, ys)
plt.show()<jupyter_output><empty_output><jupyter_text>2. Simulating the QSP polynomial with a quantum circuit

Next, we create the quantum circuit from $W$ and $S$ gates that is supposed to implement our target unitary with the QSP polynomial $P$ in the top-left corner.<jupyter_code>def qsp_circuit(x, phases):
"""Creates a single-qubit unitary for some given QSP phases"""
    # Map SymPy phases into NumPy phases and reverse them (gate application
    # direction is opposite to matrix multiplication)
np_phases = [float(phi) for phi in phases]
np_phases.reverse()
circ = QuantumCircuit(1)
# We need to apply gates in reverse order to matrix multiplication
circ.append(SGate(np_phases[0]), [0]) # apply last angle
for phi in np_phases[1:]: # iterate through all but last angle in reverse order
circ.append(WGate(x), [0])
circ.append(SGate(phi), [0])
return circ<jupyter_output><empty_output><jupyter_text>We simulate the unitary matrices of QSP circuits constructed for some random values of $x \in [-1, 1]$ and compare them to the unitary $U$ that we explicitly constructed above.<jupyter_code># Test QSP polynomial
for _ in range(10):
x = random.rand() * 2.0 - 1.0
# Create circuit
circ = qsp_circuit(x, phases)
# Run circuit and return unitary matrix
actual = Operator(circ).data
# Expected result from U by plugging in x value for x symbol (transforms SymPy matrix into NumPy array)
expected = np.array(U.evalf(subs={'x': x})).astype(complex)
    assert np.allclose(actual, expected)<jupyter_output><empty_output><jupyter_text>Great, the test passes! This means we should be able to reconstruct the polynomial $|P(x)|^2$ that we plotted above by running the QSP circuit several times, estimating the probability of measuring $|0\rangle$, and iterating through different values for $x$. We set ourselves up by first creating a function that takes as input a quantum circuit that can be parameterized for a value $x$, a set of values from which we draw $x$, as well as a number of shots that indicates how many times the circuit is run for each $x$. The function returns a list of measurement probabilities that approximate $|P(x)|^2$ based on running the QSP circuit. We approximate the probability of measuring $|0\rangle$ because $P(x) = \langle 0|U^{\vec\phi}|0\rangle$, i.e., $P(x)$ is the projection onto the top-left corner. For the upcoming experiments we are going to simulate the QSP polynomial on a quantum computing backend. To do this, we connect to the Azure Quantum service. We construct an instance of the `AzureQuantumProvider`. Note that it's imported from `azure.quantum.qiskit`.<jupyter_code>from azure.quantum import Workspace
from azure.quantum.qiskit import AzureQuantumProvider
# Connect to your Azure Quantum workspace (replace resource_id and location
# with the values from your own workspace in the Azure portal)
workspace = Workspace(
    resource_id = "/subscriptions/677fc922-91d0-4bf6-9b06-4274d319a0fa/resourceGroups/xiou/providers/Microsoft.Quantum/Workspaces/xiou-notebooks-demo",
    location = "eastus2euap")
provider = AzureQuantumProvider(workspace)
# List all available targets
print("This workspace's targets:")
for backend in provider.backends():
print("- " + backend.name())
# Select ionq.simulator target
backend = provider.get_backend("ionq.simulator")
def simulate_polynomial(circuit_function, xs, num_shots):
"""
This function creates experiment circuits for each `x` in `xs`,
simulates them for the given number of shots and returns a list
of measurement probabilities for each input value `x`.
"""
# Array in which we store all simulated probabilities
ys_simulated = []
# Submit jobs for each x
jobs = [backend.run(circuit_function(x), shots=num_shots) for x in xs]
    # After we have submitted all the jobs, we wait for each of them.
    # It does not matter if the jobs finish in a different order than they were
    # submitted, since we traverse them in the same order in which they were submitted.
for job in jobs:
# Derive probability of |0⟩ outcome
counts = job.result().get_counts()
probability = counts.get('0', 0) / counts.shots()
# Append probability to simulated probabilities for plotting
ys_simulated.append(probability)
    return ys_simulated<jupyter_output><empty_output><jupyter_text>Let's try this function for our QSP circuit. We can use the `qsp_circuit` function that we created above, but we must add a measurement instruction at the end of the circuit. Calling the `simulate_polynomial` function returns values that we plot alongside the values we got from directly evaluating the formula.<jupyter_code># Number of shots per x value (choose a higher number of shots to increase the precision)
num_shots = 100
# Creates an experiment circuit based on x
def experiment_circuit(x):
# Create circuit
circ = qsp_circuit(x, phases)
# Measure qubit
circ.measure_all()
# Transpile circuit
return transpile(circ, backend)
ys_simulated = simulate_polynomial(experiment_circuit, xs, num_shots)
# Plot both the evaluated probabilities `ys` and the simulated ones `ys_simulated`
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(xs, ys, label='formula')
ax.plot(xs, ys_simulated, label='simulated')
plt.legend(loc='upper right')
plt.show()<jupyter_output><empty_output><jupyter_text>3. Projecting into the $\{|+\rangle, |-\rangle\}$ basis

In the previous section we projected the unitary into the $\{|0\rangle, |1\rangle\}$ basis, and we could compute $\langle 0|U^{\vec\phi}|0\rangle = P(x)$. When projecting into the $\{|+\rangle, |-\rangle\}$ basis, we can construct a richer set of polynomials, since $$ \langle +|U^{\vec\phi}|+\rangle = \mathrm{Re}(P(x)) + \mathrm{i}\cdot\mathrm{Re}(Q(x))\cdot\sqrt{1 - x^2} $$<jupyter_code># For now we want the polynomial symbolically for some variable 𝑥 and we use the same phases as above
x = sp.Symbol('x')
phases = [0.1, 0.2, 0.3, 0.4]
# This time we also need the entry in the top-right corner of the unitary
_, poly, qpoly = qsp_polynomial(x, phases)
# combine P and Q into ⟨+|U^ɸ|+⟩ as in the formula above
sum_poly = sp.re(poly) + 1j * sp.re(qpoly / 1j)
# translate the amplitude polynomial into a probability polynomial
sum_poly_abs = sp.Abs(sum_poly)**2
# we use the same number of discretization points
discretization_points = 30
# and evaluate the new polynomial for all values in xs
xs = np.linspace(-1, 1, discretization_points)[1:-1]
ys = [sum_poly_abs.evalf(subs={'x': x}) for x in xs]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(xs, ys)
plt.show()
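# Numeric spot check of the projection identity above at an arbitrary sample
# point (a sketch): ⟨+|U^ɸ|+⟩ evaluated directly from the unitary should match
# Re(P(x)) + i·Re(Q(x))·√(1 - x²) as computed in sum_poly.
x_test = 0.5
U_num = np.array(qsp_polynomial(x_test, phases)[0].evalf()).astype(complex)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
direct = plus.conj() @ U_num @ plus
assert np.isclose(direct, complex(sum_poly.evalf(subs={'x': x_test})))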
# Creates an experiment circuit based on x (this time adds H gates to the beginning and end)
def experiment_circuit(x):
# Create circuit
circ = qsp_circuit(x, phases)
    # Sandwich the circuit with H gates: H|0⟩ = |+⟩, so the probability of
    # measuring 0 becomes |⟨+|U^ɸ|+⟩|²
h_circ = QuantumCircuit(1)
h_circ.h(0)
circ = h_circ.compose(circ).compose(h_circ)
# Measure qubit
circ.measure_all()
# Transpile circuit
return transpile(circ, backend)
ys_simulated = simulate_polynomial(experiment_circuit, xs, num_shots)
# Plot both the evaluated probabilities `ys` and the simulated ones `ys_simulated`
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(xs, ys, label='formula')
ax.plot(xs, ys_simulated, label='simulated')
plt.legend(loc='upper right')
plt.show()<jupyter_output><empty_output>
azure-quantum-python/samples/quantum-signal-processing/signal-processing.ipynb
# Builds react-lib, js-lib and links the packages.
resources:
- repo: self
clean: true
# Trigger whenever a PR is submitted
pr:
branches:
include:
- main
variables:
rootDirectory: visualization
project: $(rootDirectory)/react-lib
jslib: $(rootDirectory)/js-lib
reactnodemodules: node_modules/react
projectname: quantum-visualization
outputDirectory: $(Agent.BuildDirectory)/output
tests: $(project)/TestResults
stages:
- stage: Build_Visualization_Library
pool:
vmImage: "windows-latest"
jobs:
- job: Build_Test_Link_Publish
steps:
- task: UseDotNet@2
inputs:
version: "3.1.x"
- task: NodeTool@0
inputs:
versionSpec: "16.x"
- task: Bash@3
displayName: "Build js-lib and dependencies"
inputs:
targetType: "filePath"
filePath: "$(rootDirectory)/build/build-jslib.sh"
failOnStderr: true
workingDirectory: "$(rootDirectory)/build/"
# - task: SFP.build-tasks.custom-build-task-1.EsrpCodeSigning@1
# displayName: 'Signing'
# inputs:
# ConnectedServiceName: CodeSign
# FolderPath: "$(jslib)/dist/"
# Pattern: "*.js"
# CertificateId: 100040160
# OpusName: "Microsoft Quantum Development Kit"
# OpusInfo: "https://www.microsoft.com/quantum"
# SessionTimeout: 120
- task: CopyFiles@2
inputs:
SourceFolder: "$(jslib)/dist/"
Contents: "**"
TargetFolder: "$(outputDirectory)"
displayName: "Copy build artifacts to output directory"
- task: Npm@1
displayName: npm run tests (react-lib)
inputs:
workingDir: "$(project)"
command: "custom"
customCommand: "run testsonly"
- task: PublishTestResults@2
displayName: "Publish Test Results (react-lib)"
condition: succeededOrFailed()
inputs:
testResultsFiles: "$(tests)/test-results.xml"
- task: PublishCodeCoverageResults@1
displayName: "Publish Code Coverage Results (react-lib)"
condition: succeededOrFailed()
inputs:
codeCoverageTool: "cobertura"
summaryFileLocation: "$(project)/coverage/cobertura-coverage.xml"
- task: PublishPipelineArtifact@1
inputs:
targetPath: $(outputDirectory)"
artifactType: "pipeline"
artifactName: "microsoft-visualization"
azure-quantum-python/visualization/build/visualization-lib-pr.yml
// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`Donut chart tests Verify Donut Chart: DonutChart 1`] = `
<div
style={
{
"alignItems": "center",
"justifyContent": "center",
}
}
>
<svg
height={1000}
id="donutchart"
width={1000}
/>
</div>
`;
azure-quantum-python/visualization/react-lib/src/components/d3-visualization-components/__tests__/__snapshots__/DonutChart.test.tsx.snap
{
"extends": "../tsconfig.json",
"compileOnSave": true,
"compilerOptions": {
"inlineSources": true,
"module": "commonjs",
"target": "ES2019",
"types": ["jest", "node"]
},
"exclude": ["**/*.js"]
}
azure-quantum-python/visualization/react-lib/test-config/tsconfig.json
Alignment
=========
.. testsetup:: *
from bistring import Alignment
.. autoclass:: bistring.Alignment
bistring/docs/Python/Alignment.rst
{
"name": "bistring",
"version": "0.5.0",
"description": "Bidirectionally transformed strings",
"repository": {
"type": "git",
"url": "git+https://github.com/microsoft/bistring.git"
},
"author": "[email protected]",
"license": "MIT",
"bugs": {
"url": "https://github.com/microsoft/bistring/issues"
},
"homepage": "https://github.com/microsoft/bistring#readme",
"main": "dist/index.js",
"typings": "dist/index.d.ts",
"module": "dist/index.mjs",
"browser": "dist/index.browser.js",
"files": [
"dist"
],
"scripts": {
"generate": "./scripts/generate_unicode.py",
"prepare": "npm run build",
"build": "rollup -c",
"watch": "rollup -cw",
"test": "jest"
},
"devDependencies": {
"@babel/preset-env": "^7.22.9",
"@rollup/plugin-babel": "^6.0.3",
"@rollup/plugin-commonjs": "^25.0.3",
"@rollup/plugin-typescript": "^11.1.2",
"@types/jest": "^29.5.3",
"core-js": "^3.32.0",
"jest": "^29.6.2",
"jest-junit": "^16.0.0",
"rollup": "^3.27.0",
"ts-jest": "^29.1.1",
"tslib": "^2.6.1",
"typescript": "^5.1.6"
},
"jest": {
"reporters": [
"default",
"jest-junit"
],
"testRegex": ".*\\.(spec|test)\\.[jt]s$",
"transform": {
".*\\.ts?$": "ts-jest"
}
},
"jest-junit": {
"outputDirectory": "./test-results"
}
}
bistring/js/package.json
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
bistring/python/pyproject.toml
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from typing import List
class BookingDetails:
def __init__(
self,
destination: str = None,
origin: str = None,
travel_date: str = None,
unsupported_airports: List[str] = None,
):
self.destination = destination
self.origin = origin
self.travel_date = travel_date
self.unsupported_airports = unsupported_airports or []
botbuilder-python/generators/app/templates/core/{{cookiecutter.bot_name}}/booking_details.py
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from .about import __version__
from .slack_client_options import SlackClientOptions
from .slack_client import SlackClient
from .slack_adapter import SlackAdapter
from .slack_payload import SlackPayload
from .slack_message import SlackMessage
from .slack_event import SlackEvent
from .activity_resourceresponse import ActivityResourceResponse
from .slack_request_body import SlackRequestBody
from .slack_helper import SlackHelper
from .slack_adatper_options import SlackAdapterOptions
__all__ = [
"__version__",
"SlackClientOptions",
"SlackClient",
"SlackAdapter",
"SlackPayload",
"SlackMessage",
"SlackEvent",
"ActivityResourceResponse",
"SlackRequestBody",
"SlackHelper",
"SlackAdapterOptions",
]
botbuilder-python/libraries/botbuilder-adapters-slack/botbuilder/adapters/slack/__init__.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from .qnamaker import QnAMaker
from .qnamaker_endpoint import QnAMakerEndpoint
from .qnamaker_options import QnAMakerOptions
from .qnamaker_telemetry_client import QnAMakerTelemetryClient
from .qna_dialog_response_options import QnADialogResponseOptions
from .utils import (
ActiveLearningUtils,
GenerateAnswerUtils,
HttpRequestUtils,
QnATelemetryConstants,
)
from .models import (
FeedbackRecord,
FeedbackRecords,
Metadata,
QnAMakerTraceInfo,
QueryResult,
QueryResults,
)
__all__ = [
"ActiveLearningUtils",
"FeedbackRecord",
"FeedbackRecords",
"GenerateAnswerUtils",
"HttpRequestUtils",
"Metadata",
"QueryResult",
"QueryResults",
"QnAMaker",
"QnAMakerEndpoint",
"QnAMakerOptions",
"QnAMakerTelemetryClient",
"QnAMakerTraceInfo",
"QnATelemetryConstants",
"QnADialogResponseOptions",
]
botbuilder-python/libraries/botbuilder-ai/botbuilder/ai/qna/__init__.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
class RankerTypes:
"""Default Ranker Behaviour. i.e. Ranking based on Questions and Answer."""
DEFAULT = "Default"
""" Ranker based on question Only. """
QUESTION_ONLY = "QuestionOnly"
""" Ranker based on Autosuggest for question field only. """
AUTO_SUGGEST_QUESTION = "AutoSuggestQuestion"
botbuilder-python/libraries/botbuilder-ai/botbuilder/ai/qna/models/ranker_types.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import os
from setuptools import setup
REQUIRES = [
"azure-cognitiveservices-language-luis==0.2.0",
"botbuilder-schema==4.16.0",
"botbuilder-core==4.16.0",
"aiohttp==3.9.3",
]
TESTS_REQUIRES = ["aiounittest>=1.1.0"]
root = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(root, "botbuilder", "ai", "about.py")) as f:
package_info = {}
info = f.read()
exec(info, package_info)
with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
long_description = f.read()
setup(
name=package_info["__title__"],
version=package_info["__version__"],
url=package_info["__uri__"],
author=package_info["__author__"],
description=package_info["__description__"],
keywords="botbuilder-ai LUIS QnAMaker bots ai botframework botbuilder",
long_description=long_description,
long_description_content_type="text/x-rst",
license=package_info["__license__"],
packages=[
"botbuilder.ai",
"botbuilder.ai.qna",
"botbuilder.ai.luis",
"botbuilder.ai.qna.models",
"botbuilder.ai.qna.utils",
"botbuilder.ai.qna.dialogs",
],
install_requires=REQUIRES + TESTS_REQUIRES,
tests_require=TESTS_REQUIRES,
include_package_data=True,
classifiers=[
"Programming Language :: Python :: 3.7",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 5 - Production/Stable",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
)
botbuilder-python/libraries/botbuilder-ai/setup.py
{
"entities": {
"$instance": {
"Airline": [
{
"endIndex": 23,
"modelType": "List Entity Extractor",
"recognitionSources": [
"externalEntities"
],
"startIndex": 7,
"text": "humberg airlines",
"type": "Airline"
},
{
"endIndex": 32,
"modelType": "List Entity Extractor",
"recognitionSources": [
"model"
],
"startIndex": 27,
"text": "Delta",
"type": "Airline"
}
]
},
"Airline": [
[
"HumAir"
],
[
"Delta"
]
]
},
"intents": {
"Cancel": {
"score": 0.00330878259
},
"Delivery": {
"score": 0.00452178251
},
"EntityTests": {
"score": 0.052175343
},
"Greeting": {
"score": 0.002769983
},
"Help": {
"score": 0.002995687
},
"None": {
"score": 0.0302589461
},
"Roles": {
"score": 0.132316783
},
"search": {
"score": 0.007362695
},
"SpecifyName": {
"score": 0.00500302855
},
"Travel": {
"score": 0.0146034053
},
"Weather_GetForecast": {
"score": 0.005048246
}
},
"sentiment": {
"label": "neutral",
"score": 0.5
},
"text": "fly on humberg airlines or Delta",
"v3": {
"options": {
"externalEntities": [
{
"entityLength": 16,
"entityName": "Airline",
"resolution": [
"HumAir"
],
"startIndex": 7
}
],
"includeAllIntents": true,
"includeAPIResults": true,
"includeInstanceData": true,
"log": true,
"preferExternalEntities": true,
"slot": "production"
},
"response": {
"prediction": {
"entities": {
"$instance": {
"Airline": [
{
"length": 16,
"modelType": "List Entity Extractor",
"modelTypeId": 5,
"recognitionSources": [
"externalEntities"
],
"startIndex": 7,
"text": "humberg airlines",
"type": "Airline"
},
{
"length": 5,
"modelType": "List Entity Extractor",
"modelTypeId": 5,
"recognitionSources": [
"model"
],
"startIndex": 27,
"text": "Delta",
"type": "Airline"
}
]
},
"Airline": [
[
"HumAir"
],
[
"Delta"
]
]
},
"intents": {
"Cancel": {
"score": 0.00330878259
},
"Delivery": {
"score": 0.00452178251
},
"EntityTests": {
"score": 0.052175343
},
"Greeting": {
"score": 0.002769983
},
"Help": {
"score": 0.002995687
},
"None": {
"score": 0.0302589461
},
"Roles": {
"score": 0.132316783
},
"search": {
"score": 0.007362695
},
"SpecifyName": {
"score": 0.00500302855
},
"Travel": {
"score": 0.0146034053
},
"Weather.GetForecast": {
"score": 0.005048246
}
},
"normalizedQuery": "fly on humberg airlines or delta",
"sentiment": {
"label": "neutral",
"score": 0.5
},
"topIntent": "Roles"
},
"query": "fly on humberg airlines or Delta"
}
}
}
botbuilder-python/libraries/botbuilder-ai/tests/luis/test_data/ExternalEntitiesAndList_v3.json
{
"entities": {
"$instance": {
"Composite2": [
{
"endIndex": 66,
"modelType": "Composite Entity Extractor",
"recognitionSources": [
"model"
],
"score": 0.7077416,
"startIndex": 0,
"text": "http://foo.com is where you can get a weather forecast for seattle",
"type": "Composite2"
}
],
"geographyV2": [
{
"endIndex": 66,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
],
"startIndex": 59,
"text": "seattle",
"type": "builtin.geographyV2.city"
}
]
},
"Composite2": [
{
"$instance": {
"url": [
{
"endIndex": 14,
"modelType": "Prebuilt Entity Extractor",
"recognitionSources": [
"model"
],
"startIndex": 0,
"text": "http://foo.com",
"type": "builtin.url"
}
],
"Weather_Location": [
{
"endIndex": 66,
"modelType": "Entity Extractor",
"recognitionSources": [
"model"
],
"score": 0.76184386,
"startIndex": 59,
"text": "seattle",
"type": "Weather.Location"
}
]
},
"url": [
"http://foo.com"
],
"Weather_Location": [
"seattle"
]
}
],
"geographyV2": [
{
"location": "seattle",
"type": "city"
}
]
},
"intents": {
"Cancel": {
"score": 0.000171828113
},
"Delivery": {
"score": 0.0011408634
},
"EntityTests": {
"score": 0.342939854
},
"Greeting": {
"score": 0.0001518702
},
"Help": {
"score": 0.0005502715
},
"None": {
"score": 0.0175834317
},
"Roles": {
"score": 0.0432791822
},
"search": {
"score": 0.01050759
},
"SpecifyName": {
"score": 0.001833231
},
"Travel": {
"score": 0.004430798
},
"Weather_GetForecast": {
"score": 0.669524968
}
},
"sentiment": {
"label": "neutral",
"score": 0.5
},
"text": "http://foo.com is where you can get a weather forecast for seattle",
"v3": {
"options": {
"includeAllIntents": true,
"includeAPIResults": true,
"includeInstanceData": true,
"log": true,
"preferExternalEntities": true,
"slot": "production"
},
"response": {
"prediction": {
"entities": {
"$instance": {
"Composite2": [
{
"length": 66,
"modelType": "Composite Entity Extractor",
"modelTypeId": 4,
"recognitionSources": [
"model"
],
"score": 0.7077416,
"startIndex": 0,
"text": "http://foo.com is where you can get a weather forecast for seattle",
"type": "Composite2"
}
],
"geographyV2": [
{
"length": 7,
"modelType": "Prebuilt Entity Extractor",
"modelTypeId": 2,
"recognitionSources": [
"model"
],
"startIndex": 59,
"text": "seattle",
"type": "builtin.geographyV2.city"
}
]
},
"Composite2": [
{
"$instance": {
"url": [
{
"length": 14,
"modelType": "Prebuilt Entity Extractor",
"modelTypeId": 2,
"recognitionSources": [
"model"
],
"startIndex": 0,
"text": "http://foo.com",
"type": "builtin.url"
}
],
"Weather.Location": [
{
"length": 7,
"modelType": "Entity Extractor",
"modelTypeId": 1,
"recognitionSources": [
"model"
],
"score": 0.76184386,
"startIndex": 59,
"text": "seattle",
"type": "Weather.Location"
}
]
},
"url": [
"http://foo.com"
],
"Weather.Location": [
"seattle"
]
}
],
"geographyV2": [
{
"type": "city",
"value": "seattle"
}
]
},
"intents": {
"Cancel": {
"score": 0.000171828113
},
"Delivery": {
"score": 0.0011408634
},
"EntityTests": {
"score": 0.342939854
},
"Greeting": {
"score": 0.0001518702
},
"Help": {
"score": 0.0005502715
},
"None": {
"score": 0.0175834317
},
"Roles": {
"score": 0.0432791822
},
"search": {
"score": 0.01050759
},
"SpecifyName": {
"score": 0.001833231
},
"Travel": {
"score": 0.004430798
},
"Weather.GetForecast": {
"score": 0.669524968
}
},
"normalizedQuery": "http://foo.com is where you can get a weather forecast for seattle",
"sentiment": {
"label": "neutral",
"score": 0.5
},
"topIntent": "Weather.GetForecast"
},
"query": "http://foo.com is where you can get a weather forecast for seattle"
}
}
}
botbuilder-python/libraries/botbuilder-ai/tests/luis/test_data/Prebuilt_v3.json
{
"activeLearningEnabled": false,
"answers": [
{
"questions": [
"Q1"
],
"answer": "A1",
"score": 80,
"id": 15,
"source": "Editorial",
"metadata": [
{
"name": "topic",
"value": "value"
}
]
},
{
"questions": [
"Q2"
],
"answer": "A2",
"score": 78,
"id": 16,
"source": "Editorial",
"metadata": [
{
"name": "topic",
"value": "value"
}
]
}
]
}
botbuilder-python/libraries/botbuilder-ai/tests/qna/test_data/QnaMaker_RankerType_QuestionOnly.json
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""Bot Telemetry Middleware."""
from threading import current_thread
# Map of thread id => POST body text
_REQUEST_BODIES = {}
def retrieve_bot_body():
"""
Retrieve the POST body text from temporary cache.
The POST body corresponds to the thread ID and must reside in the cache just for the lifetime of the request.
"""
result = _REQUEST_BODIES.get(current_thread().ident, None)
return result
class BotTelemetryMiddleware:
"""
Save off the POST body to later populate bot-specific properties to add to Application Insights.
Example activating MIDDLEWARE in Django settings:
.. code-block:: python
MIDDLEWARE = [
# Ideally add somewhere near top
'botbuilder.applicationinsights.django.BotTelemetryMiddleware',
...
]
"""
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
self.process_request(request)
response = self.get_response(request)
_REQUEST_BODIES.pop(current_thread().ident, None)
return response
def process_request(self, request) -> bool:
"""Process the incoming Django request."""
# Bot Service doesn't handle anything over 256k
# TODO: Add length check
body_unicode = (
request.body.decode("utf-8") if request.method == "POST" else None
)
# Sanity check JSON
if body_unicode is not None:
# Integration layer expecting just the json text.
_REQUEST_BODIES[current_thread().ident] = body_unicode
return True
botbuilder-python/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/django/bot_telemetry_middleware.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from botframework.connector.auth import BotFrameworkAuthentication, ClaimsIdentity
from .channel_service_handler import ChannelServiceHandler
class CloudChannelServiceHandler(ChannelServiceHandler):
def __init__( # pylint: disable=super-init-not-called
self, auth: BotFrameworkAuthentication
):
if not auth:
raise TypeError("Auth can't be None")
self._auth = auth
async def _authenticate(self, auth_header: str) -> ClaimsIdentity:
return await self._auth.authenticate_channel_request(auth_header)
botbuilder-python/libraries/botbuilder-core/botbuilder/core/cloud_channel_service_handler.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from copy import deepcopy
from typing import Dict, List
from .storage import Storage, StoreItem
class MemoryStorage(Storage):
def __init__(self, dictionary=None):
super(MemoryStorage, self).__init__()
self.memory = dictionary if dictionary is not None else {}
self._e_tag = 0
async def delete(self, keys: List[str]):
try:
for key in keys:
if key in self.memory:
del self.memory[key]
except TypeError as error:
raise error
async def read(self, keys: List[str]):
data = {}
if not keys:
return data
try:
for key in keys:
if key in self.memory:
data[key] = self.memory[key]
except TypeError as error:
raise error
return data
async def write(self, changes: Dict[str, StoreItem]):
if changes is None:
raise Exception("Changes are required when writing")
if not changes:
return
try:
# iterate over the changes
for key, change in changes.items():
new_value = deepcopy(change)
old_state_etag = None
                # Check if a matching key already exists in self.memory
# If it exists then we want to cache its original value from memory
if key in self.memory:
old_state = self.memory[key]
if isinstance(old_state, dict):
old_state_etag = old_state.get("e_tag", None)
elif hasattr(old_state, "e_tag"):
old_state_etag = old_state.e_tag
new_state = new_value
# Set ETag if applicable
new_value_etag = None
if isinstance(new_value, dict):
new_value_etag = new_value.get("e_tag", None)
elif hasattr(new_value, "e_tag"):
new_value_etag = new_value.e_tag
if new_value_etag == "":
raise Exception("memory_storage.write(): etag missing")
if (
old_state_etag is not None
and new_value_etag is not None
and new_value_etag != "*"
and new_value_etag != old_state_etag
):
raise KeyError(
"Etag conflict.\nOriginal: %s\r\nCurrent: %s"
% (new_value_etag, old_state_etag)
)
# If the original object didn't have an e_tag, don't set one (C# behavior)
if old_state_etag:
if isinstance(new_state, dict):
new_state["e_tag"] = str(self._e_tag)
else:
new_state.e_tag = str(self._e_tag)
self._e_tag += 1
self.memory[key] = deepcopy(new_state)
except Exception as error:
raise error
# TODO: Check if needed, if not remove
def __should_write_changes(
self, old_value: StoreItem, new_value: StoreItem
) -> bool:
"""
Helper method that compares two StoreItems and their e_tags and returns True if the new_value should overwrite
the old_value. Otherwise returns False.
:param old_value:
:param new_value:
:return:
"""
        # If old_value is None or if the new_value's e_tag is '*', then we return True
if old_value is None or (
hasattr(new_value, "e_tag") and new_value.e_tag == "*"
):
return True
# If none of the above cases, we verify that e_tags exist on both arguments
if hasattr(new_value, "e_tag") and hasattr(old_value, "e_tag"):
if new_value.e_tag is not None and old_value.e_tag is None:
return True
            # Then we compare the old and new e_tag values to decide whether the new data will be written
if old_value.e_tag == new_value.e_tag or int(old_value.e_tag) <= int(
new_value.e_tag
):
return True
return False
return False
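if __name__ == "__main__":
    # Usage sketch (illustrative): write an item, read it back, and update it.
    # The e_tag of the second write is validated against the stored copy, so
    # conflicting updates raise a KeyError.
    import asyncio
    async def _demo():
        storage = MemoryStorage()
        await storage.write({"user/1": {"count": 1, "e_tag": "*"}})
        item = (await storage.read(["user/1"]))["user/1"]
        item["count"] += 1
        await storage.write({"user/1": item})
        await storage.delete(["user/1"])
    asyncio.run(_demo())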
botbuilder-python/libraries/botbuilder-core/botbuilder/core/memory_storage.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from copy import copy
from inspect import getmembers
from typing import Type
from enum import Enum
from msrest.serialization import Model, Deserializer, Serializer
import botbuilder.schema as schema
import botbuilder.schema.teams as teams_schema
DEPENDICIES = [
schema_cls
for key, schema_cls in getmembers(schema)
if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
]
DEPENDICIES += [
schema_cls
for key, schema_cls in getmembers(teams_schema)
if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
]
DEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}
def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
deserializer = Deserializer(DEPENDICIES_DICT)
_clean_data_for_serialization(
deserializer.dependencies[msrest_cls.__name__], dict_to_deserialize
)
return deserializer(msrest_cls.__name__, dict_to_deserialize)
def serializer_helper(object_to_serialize: Model) -> dict:
if object_to_serialize is None:
return None
serializer = Serializer(DEPENDICIES_DICT)
# pylint: disable=protected-access
return serializer._serialize(object_to_serialize)
def _clean_data_for_serialization(msrest_cls: Type[Model], dict_to_deserialize: dict):
# pylint: disable=protected-access
# Clean channel response of empty strings for expected objects.
if not isinstance(dict_to_deserialize, dict):
return
serialization_model = copy(msrest_cls._attribute_map)
for key, value in msrest_cls._attribute_map.items():
if key != value["key"]:
serialization_model[value["key"]] = value
for prop, prop_value in dict_to_deserialize.items():
if (
prop in serialization_model
and serialization_model[prop]["type"] in DEPENDICIES_DICT
and not prop_value
):
dict_to_deserialize[prop] = None
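if __name__ == "__main__":
    # Round-trip sketch (illustrative): deserialize a plain dict into an Activity
    # and serialize it back; the payload fields are arbitrary examples.
    activity = deserializer_helper(schema.Activity, {"type": "message", "text": "hi"})
    assert isinstance(activity, schema.Activity)
    assert serializer_helper(activity)["type"] == "message"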
botbuilder-python/libraries/botbuilder-core/botbuilder/core/serializer_helper.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from http import HTTPStatus
from typing import Awaitable, Callable, List
from botbuilder.core import (
Bot,
BotFrameworkAdapter,
BotFrameworkAdapterSettings,
InvokeResponse,
TurnContext,
)
from botbuilder.schema import Activity, ActivityTypes, ResourceResponse
from botframework.connector import AsyncBfPipeline, BotFrameworkConnectorConfiguration
from botframework.connector.aio import ConnectorClient
from botframework.connector.auth import (
ClaimsIdentity,
MicrosoftAppCredentials,
MicrosoftGovernmentAppCredentials,
)
from .streaming_activity_processor import StreamingActivityProcessor
from .streaming_request_handler import StreamingRequestHandler
from .streaming_http_client import StreamingHttpDriver
class BotFrameworkHttpAdapterBase(BotFrameworkAdapter, StreamingActivityProcessor):
# pylint: disable=pointless-string-statement
def __init__(self, settings: BotFrameworkAdapterSettings):
super().__init__(settings)
self.connected_bot: Bot = None
self.claims_identity: ClaimsIdentity = None
self.request_handlers: List[StreamingRequestHandler] = None
async def process_streaming_activity(
self,
activity: Activity,
bot_callback_handler: Callable[[TurnContext], Awaitable],
) -> InvokeResponse:
if not activity:
raise TypeError(
f"'activity: {activity.__class__.__name__}' argument can't be None"
)
"""
If a conversation has moved from one connection to another for the same Channel or Skill and
hasn't been forgotten by the previous StreamingRequestHandler. The last requestHandler
the conversation has been associated with should always be the active connection.
"""
request_handler = [
handler
for handler in self.request_handlers
if handler.service_url == activity.service_url
and handler.has_conversation(activity.conversation.id)
]
request_handler = request_handler[-1] if request_handler else None
context = TurnContext(self, activity)
if self.claims_identity:
context.turn_state[self.BOT_IDENTITY_KEY] = self.claims_identity
connector_client = self._create_streaming_connector_client(
activity, request_handler
)
context.turn_state[self.BOT_CONNECTOR_CLIENT_KEY] = connector_client
await self.run_pipeline(context, bot_callback_handler)
if activity.type == ActivityTypes.invoke:
activity_invoke_response = context.turn_state.get(self._INVOKE_RESPONSE_KEY)
if not activity_invoke_response:
return InvokeResponse(status=HTTPStatus.NOT_IMPLEMENTED)
return activity_invoke_response.value
return None
async def send_streaming_activity(self, activity: Activity) -> ResourceResponse:
raise NotImplementedError()
def can_process_outgoing_activity(self, activity: Activity) -> bool:
if not activity:
raise TypeError(
f"'activity: {activity.__class__.__name__}' argument can't be None"
)
return not activity.service_url.startswith("https")
async def process_outgoing_activity(
self, _turn_context: TurnContext, activity: Activity
) -> ResourceResponse:
if not activity:
raise TypeError(
f"'activity: {activity.__class__.__name__}' argument can't be None"
)
# TODO: Check if we have token responses from OAuth cards.
# The ServiceUrl for streaming channels begins with the string "urn" and contains
# information unique to streaming connections. Now that we know that this is a streaming
# activity, process it in the streaming pipeline.
# Process streaming activity.
return await self.send_streaming_activity(activity)
def _create_streaming_connector_client(
self, activity: Activity, request_handler: StreamingRequestHandler
) -> ConnectorClient:
        empty_credentials = (
            MicrosoftGovernmentAppCredentials.empty()
            if self._channel_provider and self._channel_provider.is_government()
            else MicrosoftAppCredentials.empty()
        )
streaming_driver = StreamingHttpDriver(request_handler)
config = BotFrameworkConnectorConfiguration(
empty_credentials,
activity.service_url,
pipeline_type=AsyncBfPipeline,
driver=streaming_driver,
)
streaming_driver.config = config
connector_client = ConnectorClient(None, custom_configuration=config)
return connector_client
botbuilder-python/libraries/botbuilder-core/botbuilder/core/streaming/bot_framework_http_adapter_base.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from .turn_context import TurnContext
from .bot_state import BotState
from .storage import Storage
class UserState(BotState):
"""
Reads and writes user state for your bot to storage.
"""
    no_key_error_message = (
        "UserState: channel_id and/or user id (from_property.id) missing from context.activity."
    )
def __init__(self, storage: Storage, namespace=""):
"""
Creates a new UserState instance.
:param storage:
:param namespace:
"""
self.namespace = namespace
super(UserState, self).__init__(storage, "Internal.UserState")
def get_storage_key(self, turn_context: TurnContext) -> str:
"""
Returns the storage key for the current user state.
:param turn_context:
:return:
"""
channel_id = turn_context.activity.channel_id or self.__raise_type_error(
"invalid activity-missing channelId"
)
user_id = turn_context.activity.from_property.id or self.__raise_type_error(
"invalid activity-missing from_property.id"
)
storage_key = None
if channel_id and user_id:
storage_key = "%s/users/%s" % (channel_id, user_id)
return storage_key
def __raise_type_error(self, err: str = "NoneType found while expecting value"):
raise TypeError(err)
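# For example (sketch): an activity on channel "msteams" from user "u1" yields
# the storage key "msteams/users/u1", so user state is partitioned per channel and per user.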
botbuilder-python/libraries/botbuilder-core/botbuilder/core/user_state.py
from http import HTTPStatus
from typing import List
import aiounittest
from botframework.connector import ConnectorClient
from botframework.connector.auth import AppCredentials
from botbuilder.core import ActivityHandler, BotAdapter, TurnContext
from botbuilder.schema import (
Activity,
ActivityTypes,
ChannelAccount,
ConversationReference,
MessageReaction,
ResourceResponse,
)
class TestingActivityHandler(ActivityHandler):
__test__ = False
def __init__(self):
self.record: List[str] = []
async def on_message_activity(self, turn_context: TurnContext):
self.record.append("on_message_activity")
return await super().on_message_activity(turn_context)
async def on_members_added_activity(
self, members_added: ChannelAccount, turn_context: TurnContext
):
self.record.append("on_members_added_activity")
return await super().on_members_added_activity(members_added, turn_context)
async def on_members_removed_activity(
self, members_removed: ChannelAccount, turn_context: TurnContext
):
self.record.append("on_members_removed_activity")
return await super().on_members_removed_activity(members_removed, turn_context)
async def on_message_reaction_activity(self, turn_context: TurnContext):
self.record.append("on_message_reaction_activity")
return await super().on_message_reaction_activity(turn_context)
async def on_reactions_added(
self, message_reactions: List[MessageReaction], turn_context: TurnContext
):
self.record.append("on_reactions_added")
return await super().on_reactions_added(message_reactions, turn_context)
async def on_reactions_removed(
self, message_reactions: List[MessageReaction], turn_context: TurnContext
):
self.record.append("on_reactions_removed")
return await super().on_reactions_removed(message_reactions, turn_context)
async def on_token_response_event(self, turn_context: TurnContext):
self.record.append("on_token_response_event")
return await super().on_token_response_event(turn_context)
async def on_event(self, turn_context: TurnContext):
self.record.append("on_event")
return await super().on_event(turn_context)
async def on_end_of_conversation_activity(self, turn_context: TurnContext):
self.record.append("on_end_of_conversation_activity")
return await super().on_end_of_conversation_activity(turn_context)
async def on_typing_activity(self, turn_context: TurnContext):
self.record.append("on_typing_activity")
return await super().on_typing_activity(turn_context)
async def on_installation_update(self, turn_context: TurnContext):
self.record.append("on_installation_update")
return await super().on_installation_update(turn_context)
async def on_installation_update_add(self, turn_context: TurnContext):
self.record.append("on_installation_update_add")
return await super().on_installation_update_add(turn_context)
async def on_installation_update_remove(self, turn_context: TurnContext):
self.record.append("on_installation_update_remove")
return await super().on_installation_update_remove(turn_context)
async def on_unrecognized_activity_type(self, turn_context: TurnContext):
self.record.append("on_unrecognized_activity_type")
return await super().on_unrecognized_activity_type(turn_context)
async def on_invoke_activity(self, turn_context: TurnContext):
self.record.append("on_invoke_activity")
if turn_context.activity.name == "some.random.invoke":
return self._create_invoke_response()
return await super().on_invoke_activity(turn_context)
async def on_sign_in_invoke( # pylint: disable=unused-argument
self, turn_context: TurnContext
):
self.record.append("on_sign_in_invoke")
return
class NotImplementedAdapter(BotAdapter):
async def delete_activity(
self, context: TurnContext, reference: ConversationReference
):
raise NotImplementedError()
async def send_activities(
self, context: TurnContext, activities: List[Activity]
) -> List[ResourceResponse]:
raise NotImplementedError()
async def update_activity(self, context: TurnContext, activity: Activity):
raise NotImplementedError()
class TestInvokeAdapter(NotImplementedAdapter):
def __init__(self, on_turn_error=None, activity: Activity = None):
super().__init__(on_turn_error)
self.activity = activity
async def delete_activity(
self, context: TurnContext, reference: ConversationReference
):
raise NotImplementedError()
async def send_activities(
self, context: TurnContext, activities: List[Activity]
) -> List[ResourceResponse]:
self.activity = next(
(
activity
for activity in activities
if activity.type == ActivityTypes.invoke_response
),
None,
)
return []
async def update_activity(self, context: TurnContext, activity: Activity):
raise NotImplementedError()
class MockConnectorClient(ConnectorClient):
def __init__(self):
super().__init__(
credentials=MockCredentials(), base_url="http://tempuri.org/whatever"
)
class MockCredentials(AppCredentials):
def get_access_token(self, force_refresh: bool = False) -> str:
return "awesome"
class TestActivityHandler(aiounittest.AsyncTestCase):
async def test_message_reaction(self):
# Note the code supports multiple adds and removes in the same activity though
# a channel may decide to send separate activities for each. For example, Teams
# sends separate activities each with a single add and a single remove.
# Arrange
activity = Activity(
type=ActivityTypes.message_reaction,
reactions_added=[MessageReaction(type="sad")],
reactions_removed=[MessageReaction(type="angry")],
)
turn_context = TurnContext(NotImplementedAdapter(), activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
# Assert
assert len(bot.record) == 3
assert bot.record[0] == "on_message_reaction_activity"
assert bot.record[1] == "on_reactions_added"
assert bot.record[2] == "on_reactions_removed"
async def test_invoke(self):
activity = Activity(type=ActivityTypes.invoke, name="some.random.invoke")
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 1
assert bot.record[0] == "on_invoke_activity"
assert adapter.activity.value.status == int(HTTPStatus.OK)
async def test_invoke_should_not_match(self):
activity = Activity(type=ActivityTypes.invoke, name="should.not.match")
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 1
assert bot.record[0] == "on_invoke_activity"
assert adapter.activity.value.status == int(HTTPStatus.NOT_IMPLEMENTED)
async def test_on_end_of_conversation_activity(self):
activity = Activity(type=ActivityTypes.end_of_conversation)
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 1
assert bot.record[0] == "on_end_of_conversation_activity"
async def test_typing_activity(self):
activity = Activity(type=ActivityTypes.typing)
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 1
assert bot.record[0] == "on_typing_activity"
async def test_on_installation_update(self):
activity = Activity(type=ActivityTypes.installation_update)
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 1
assert bot.record[0] == "on_installation_update"
async def test_on_installation_update_add(self):
activity = Activity(type=ActivityTypes.installation_update, action="add")
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 2
assert bot.record[0] == "on_installation_update"
assert bot.record[1] == "on_installation_update_add"
async def test_on_installation_update_add_upgrade(self):
activity = Activity(
type=ActivityTypes.installation_update, action="add-upgrade"
)
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 2
assert bot.record[0] == "on_installation_update"
assert bot.record[1] == "on_installation_update_add"
async def test_on_installation_update_remove(self):
activity = Activity(type=ActivityTypes.installation_update, action="remove")
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 2
assert bot.record[0] == "on_installation_update"
assert bot.record[1] == "on_installation_update_remove"
async def test_on_installation_update_remove_upgrade(self):
activity = Activity(
type=ActivityTypes.installation_update, action="remove-upgrade"
)
adapter = TestInvokeAdapter()
turn_context = TurnContext(adapter, activity)
# Act
bot = TestingActivityHandler()
await bot.on_turn(turn_context)
assert len(bot.record) == 2
assert bot.record[0] == "on_installation_update"
assert bot.record[1] == "on_installation_update_remove"
botbuilder-python/libraries/botbuilder-core/tests/test_activity_handler.py
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# pylint: disable=line-too-long,missing-docstring,unused-variable
import copy
import uuid
from typing import Dict
from unittest.mock import Mock
import aiounittest
from botframework.connector import Channels
from botbuilder.core import (
NullTelemetryClient,
TelemetryLoggerMiddleware,
TelemetryLoggerConstants,
TurnContext,
MessageFactory,
)
from botbuilder.core.adapters import TestAdapter, TestFlow
from botbuilder.schema import (
Activity,
ActivityTypes,
ChannelAccount,
ConversationAccount,
ConversationReference,
)
from botbuilder.schema.teams import TeamInfo, TeamsChannelData, TenantInfo
class TestTelemetryMiddleware(aiounittest.AsyncTestCase):
# pylint: disable=unused-argument
async def test_create_middleware(self):
telemetry = NullTelemetryClient()
my_logger = TelemetryLoggerMiddleware(telemetry, True)
assert my_logger
async def test_do_not_throw_on_null_from(self):
telemetry = Mock()
my_logger = TelemetryLoggerMiddleware(telemetry, False)
adapter = TestAdapter(
template_or_conversation=Activity(
channel_id="test",
recipient=ChannelAccount(id="bot", name="Bot"),
conversation=ConversationAccount(id=str(uuid.uuid4())),
)
)
adapter.use(my_logger)
async def send_proactive(context: TurnContext):
await context.send_activity("proactive")
async def logic(context: TurnContext):
await adapter.create_conversation(
context.activity.channel_id,
send_proactive,
)
adapter.logic = logic
test_flow = TestFlow(None, adapter)
await test_flow.send("foo")
await test_flow.assert_reply("proactive")
telemetry_calls = [
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"fromId": None,
"conversationName": None,
"locale": None,
"recipientId": "bot",
"recipientName": "Bot",
},
),
]
self.assert_telemetry_calls(telemetry, telemetry_calls)
async def test_should_send_receive(self):
telemetry = Mock()
my_logger = TelemetryLoggerMiddleware(telemetry, True)
async def logic(context: TurnContext):
await context.send_activity(f"echo:{context.activity.text}")
adapter = TestAdapter(logic)
adapter.use(my_logger)
test_flow = TestFlow(None, adapter)
test_flow = await test_flow.send("foo")
test_flow = await test_flow.assert_reply("echo:foo")
test_flow = await test_flow.send("bar")
await test_flow.assert_reply("echo:bar")
# assert
# Note: None values just check for existence of the key, not the explicit
# value (generated)
telemetry_calls = [
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"text": "foo",
"fromId": "User1",
"conversationName": None,
"locale": None,
"recipientId": "bot",
"recipientName": "Bot",
},
),
(
TelemetryLoggerConstants.BOT_MSG_SEND_EVENT,
{
"text": "echo:foo",
"replyActivityId": None,
"recipientId": None,
"conversationName": None,
"locale": None,
},
),
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"text": "bar",
"fromId": "User1",
"conversationName": None,
"locale": None,
"recipientId": "bot",
"recipientName": "Bot",
"fromName": "user",
},
),
(
TelemetryLoggerConstants.BOT_MSG_SEND_EVENT,
{
"replyActivityId": None,
"recipientId": "User1",
"conversationName": None,
"locale": None,
"fromName": "Bot",
"text": "echo:bar",
},
),
]
self.assert_telemetry_calls(telemetry, telemetry_calls)
async def test_none_telemetry_client(self):
my_logger = TelemetryLoggerMiddleware(None, True)
async def logic(context: TurnContext):
await context.send_activity(f"echo:{context.activity.text}")
adapter = TestAdapter(logic)
adapter.use(my_logger)
test_flow = TestFlow(None, adapter)
test_flow = await test_flow.send("foo")
test_flow = await test_flow.assert_reply("echo:foo")
test_flow = await test_flow.send("bar")
await test_flow.assert_reply("echo:bar")
async def test_log_update(self):
telemetry = Mock()
my_logger = TelemetryLoggerMiddleware(telemetry, True)
activity_to_update = None
async def process(context: TurnContext) -> None:
nonlocal activity_to_update
if context.activity.text == "update":
if not activity_to_update:
raise Exception("activity to update not set yet!")
activity_to_update.text = "new response"
await context.update_activity(activity_to_update)
else:
activity = self.create_reply(context.activity, "response")
response = await context.send_activity(activity)
activity.id = response.id
# clone the activity, so we can use it to do an update
activity_to_update = copy.copy(activity)
# await context.send_activity(f'echo:{context.activity.text}')
adapter = TestAdapter(process)
adapter.use(my_logger)
test_flow = TestFlow(None, adapter)
test_flow = await test_flow.send("foo")
test_flow = await test_flow.assert_reply("response")
test_flow = await test_flow.send("update")
# assert
# Note: None values just check for existence of the key, not the explicit
# value (generated)
telemetry_call_expected = [
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"text": "foo",
"fromId": "User1",
"conversationName": None,
"locale": None,
"recipientId": "bot",
"recipientName": "Bot",
},
),
(
TelemetryLoggerConstants.BOT_MSG_SEND_EVENT,
{
"replyActivityId": "1",
"recipientId": "User1",
"conversationName": None,
"locale": None,
"fromName": "Bot",
"text": "response",
},
),
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"text": "update",
"fromId": "User1",
"conversationName": None,
"locale": None,
"recipientId": "bot",
"recipientName": "Bot",
"fromName": "user",
},
),
(
TelemetryLoggerConstants.BOT_MSG_UPDATE_EVENT,
{
"recipientId": "User1",
"conversationId": "Convo1",
"conversationName": None,
"locale": None,
"text": "new response",
},
),
]
self.assert_telemetry_calls(telemetry, telemetry_call_expected)
async def test_log_teams(self):
telemetry = Mock()
my_logger = TelemetryLoggerMiddleware(telemetry, True)
adapter = TestAdapter(
template_or_conversation=ConversationReference(channel_id=Channels.ms_teams)
)
adapter.use(my_logger)
team_info = TeamInfo(
id="teamId",
name="teamName",
)
channel_data = TeamsChannelData(
team=team_info,
tenant=TenantInfo(id="tenantId"),
).serialize()
activity = MessageFactory.text("test")
activity.channel_data = channel_data
activity.from_property = ChannelAccount(
id="userId",
name="userName",
aad_object_id="aaId",
)
test_flow = TestFlow(None, adapter)
await test_flow.send(activity)
telemetry_call_expected = [
(
TelemetryLoggerConstants.BOT_MSG_RECEIVE_EVENT,
{
"text": "test",
"fromId": "userId",
"recipientId": "bot",
"recipientName": "Bot",
"TeamsTenantId": "tenantId",
"TeamsUserAadObjectId": "aaId",
"TeamsTeamInfo": TeamInfo.serialize(team_info),
},
),
]
self.assert_telemetry_calls(telemetry, telemetry_call_expected)
def create_reply(self, activity, text, locale=None):
return Activity(
type=ActivityTypes.message,
from_property=ChannelAccount(
id=activity.recipient.id, name=activity.recipient.name
),
recipient=ChannelAccount(
id=activity.from_property.id, name=activity.from_property.name
),
reply_to_id=activity.id,
service_url=activity.service_url,
channel_id=activity.channel_id,
conversation=ConversationAccount(
is_group=activity.conversation.is_group,
id=activity.conversation.id,
name=activity.conversation.name,
),
text=text or "",
locale=locale or activity.locale,
)
def assert_telemetry_call(
self, telemetry_mock, index: int, event_name: str, props: Dict[str, str]
) -> None:
self.assertTrue(
index < len(telemetry_mock.track_event.call_args_list),
f"{len(telemetry_mock.track_event.call_args_list)} calls were made. You were asking for index {index}.",
)
args, kwargs = telemetry_mock.track_event.call_args_list[index]
self.assertEqual(
args[0],
event_name,
f"Event NAME not matching.\n Expected: {props}\n Generated: {args[1]}",
)
for key, val in props.items():
self.assertTrue(
key in args[1],
msg=f"Could not find value {key} in '{args[1]}' for index {index}",
)
self.assertTrue(
isinstance(args[1], dict),
f"ERROR: Second parm passed not a dictionary! {type(args[1])}",
)
if props[key]:
self.assertTrue(
val == args[1][key],
f' ERROR: Validate failed: "{val}" expected, "{args[1][key]}" generated',
)
def assert_telemetry_calls(self, telemetry_mock, calls) -> None:
index = 0
for event_name, props in calls:
self.assert_telemetry_call(telemetry_mock, index, event_name, props)
index += 1
        if index != len(telemetry_mock.track_event.call_args_list):
            self.assertTrue(  # pylint: disable=redundant-unittest-assert
                False,
                f"Found {len(telemetry_mock.track_event.call_args_list)} calls, testing for {index}",
            )
--- file: botbuilder-python/libraries/botbuilder-core/tests/test_telemetry_middleware.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from typing import Callable, List, Union
from .choice import Choice
from .find_choices_options import FindChoicesOptions, FindValuesOptions
from .found_choice import FoundChoice
from .found_value import FoundValue
from .model_result import ModelResult
from .sorted_value import SortedValue
from .token import Token
from .tokenizer import Tokenizer
class Find:
"""Contains methods for matching user input against a list of choices"""
@staticmethod
    def find_choices(
        utterance: str,
        choices: List[Union[str, Choice]],
        options: FindChoicesOptions = None,
    ):
"""Matches user input against a list of choices"""
if not choices:
raise TypeError(
"Find: choices cannot be None. Must be a [str] or [Choice]."
)
opt = options if options else FindChoicesOptions()
# Normalize list of choices
choices_list = [
Choice(value=choice) if isinstance(choice, str) else choice
for choice in choices
]
# Build up full list of synonyms to search over.
# - Each entry in the list contains the index of the choice it belongs to which will later be
# used to map the search results back to their choice.
        synonyms: List[SortedValue] = []
for index, choice in enumerate(choices_list):
if not opt.no_value:
synonyms.append(SortedValue(value=choice.value, index=index))
            if (
                getattr(choice, "action", False)
                and getattr(choice.action, "title", False)
                and not opt.no_action
            ):
synonyms.append(SortedValue(value=choice.action.title, index=index))
if choice.synonyms is not None:
for synonym in choice.synonyms:
synonyms.append(SortedValue(value=synonym, index=index))
def found_choice_constructor(value_model: ModelResult) -> ModelResult:
choice = choices_list[value_model.resolution.index]
return ModelResult(
start=value_model.start,
end=value_model.end,
type_name="choice",
text=value_model.text,
resolution=FoundChoice(
value=choice.value,
index=value_model.resolution.index,
score=value_model.resolution.score,
synonym=value_model.resolution.value,
),
)
# Find synonyms in utterance and map back to their choices_list
return list(
map(
found_choice_constructor, Find.find_values(utterance, synonyms, options)
)
)
@staticmethod
def find_values(
utterance: str, values: List[SortedValue], options: FindValuesOptions = None
) -> List[ModelResult]:
        # Sort values in descending order by length, so that the longest value is searched over first.
sorted_values = sorted(
values, key=lambda sorted_val: len(sorted_val.value), reverse=True
)
# Search for each value within the utterance.
        matches: List[ModelResult] = []
opt = options if options else FindValuesOptions()
tokenizer: Callable[[str, str], List[Token]] = (
opt.tokenizer if opt.tokenizer else Tokenizer.default_tokenizer
)
tokens = tokenizer(utterance, opt.locale)
max_distance = (
opt.max_token_distance if opt.max_token_distance is not None else 2
)
for entry in sorted_values:
# Find all matches for a value
# - To match "last one" in "the last time I chose the last one" we need
# to re-search the string starting from the end of the previous match.
# - The start & end position returned for the match are token positions.
start_pos = 0
searched_tokens = tokenizer(entry.value.strip(), opt.locale)
while start_pos < len(tokens):
match: Union[ModelResult, None] = Find._match_value(
tokens,
max_distance,
opt,
entry.index,
entry.value,
searched_tokens,
start_pos,
)
if match is not None:
start_pos = match.end + 1
matches.append(match)
else:
break
# Sort matches by score descending
sorted_matches = sorted(
matches,
key=lambda model_result: model_result.resolution.score,
reverse=True,
)
# Filter out duplicate matching indexes and overlapping characters
# - The start & end positions are token positions and need to be translated to
# character positions before returning. We also need to populate the "text"
# field as well.
results: List[ModelResult] = []
found_indexes = set()
used_tokens = set()
for match in sorted_matches:
# Apply filters.
add = match.resolution.index not in found_indexes
for i in range(match.start, match.end + 1):
if i in used_tokens:
add = False
break
# Add to results
if add:
# Update filter info
found_indexes.add(match.resolution.index)
for i in range(match.start, match.end + 1):
used_tokens.add(i)
# Translate start & end and populate text field
match.start = tokens[match.start].start
match.end = tokens[match.end].end
match.text = utterance[match.start : match.end + 1]
results.append(match)
# Return the results sorted by position in the utterance
return sorted(results, key=lambda model_result: model_result.start)
@staticmethod
def _match_value(
source_tokens: List[Token],
max_distance: int,
options: FindValuesOptions,
index: int,
value: str,
searched_tokens: List[Token],
start_pos: int,
) -> Union[ModelResult, None]:
# Match value to utterance and calculate total deviation.
# - The tokens are matched in order so "second last" will match in
# "the second from last one" but not in "the last from the second one".
# - The total deviation is a count of the number of tokens skipped in the
# match so for the example above the number of tokens matched would be
# 2 and the total deviation would be 1.
matched = 0
total_deviation = 0
start = -1
end = -1
for token in searched_tokens:
# Find the position of the token in the utterance.
pos = Find._index_of_token(source_tokens, token, start_pos)
if pos >= 0:
                # Calculate the distance between the current token's position and the previous token's position.
distance = pos - start_pos if matched > 0 else 0
if distance <= max_distance:
# Update count of tokens matched and move start pointer to search for next token
# after the current token
matched += 1
total_deviation += distance
start_pos = pos + 1
# Update start & end position that will track the span of the utterance that's matched.
if start < 0:
start = pos
end = pos
# Calculate score and format result
# - The start & end positions and the results text field will be corrected by the caller.
result: ModelResult = None
if matched > 0 and (
matched == len(searched_tokens) or options.allow_partial_matches
):
            # Percentage of tokens matched. If matching "second last" in
            # "the second from last one" the completeness would be 1.0 since
            # all tokens were found.
completeness = matched / len(searched_tokens)
            # Accuracy of the match. The accuracy is reduced by extra tokens in
            # the utterance that are skipped over between matched tokens
            # (counted in total_deviation above).
            accuracy = float(matched) / (matched + total_deviation)
            # The final score is simply the completeness multiplied by the accuracy.
            score = completeness * accuracy
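            # Worked example (illustrative, not from the original comments):
            # matching the value "second last" in the utterance
            # "the second from last one" finds both tokens with one utterance
            # token skipped in between, so matched=2, total_deviation=1,
            # completeness=2/2=1.0, accuracy=2/(2+1)~=0.67, score~=0.67.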
# Format result
result = ModelResult(
text="",
start=start,
end=end,
type_name="value",
resolution=FoundValue(value=value, index=index, score=score),
)
return result
@staticmethod
def _index_of_token(tokens: List[Token], token: Token, start_pos: int) -> int:
for i in range(start_pos, len(tokens)):
if tokens[i].normalized == token.normalized:
return i
return -1
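# Illustrative usage sketch (not part of the original module; the utterance and
# choice values are invented). find_choices normalizes plain strings into
# Choice objects and returns scored ModelResult entries:
#
#   found = Find.find_choices("I will take the second one", ["first", "second", "third"])
#   for model_result in found:
#       choice = model_result.resolution  # a FoundChoice
#       print(f"matched '{model_result.text}' -> {choice.value} (score={choice.score})")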
--- file: botbuilder-python/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/find.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from botframework.connector.auth import (
ClaimsIdentity,
SkillValidation,
AuthenticationConstants,
GovernmentConstants,
)
from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext
from botbuilder.core.skills import SkillHandler, SkillConversationReference
import botbuilder.dialogs as dialogs # pylint: disable=unused-import
from botbuilder.dialogs.memory import DialogStateManager
from botbuilder.dialogs.dialog_context import DialogContext
from botbuilder.dialogs.dialog_turn_result import DialogTurnResult
from botbuilder.dialogs import (
DialogEvents,
DialogSet,
DialogTurnStatus,
)
from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes
class DialogExtensions:
@staticmethod
async def run_dialog(
dialog: "dialogs.Dialog",
turn_context: TurnContext,
accessor: StatePropertyAccessor,
):
"""
Creates a dialog stack and starts a dialog, pushing it onto the stack.
"""
dialog_set = DialogSet(accessor)
dialog_set.add(dialog)
dialog_context: DialogContext = await dialog_set.create_context(turn_context)
await DialogExtensions._internal_run(turn_context, dialog.id, dialog_context)
@staticmethod
async def _internal_run(
context: TurnContext, dialog_id: str, dialog_context: DialogContext
) -> DialogTurnResult:
# map TurnState into root dialog context.services
for key, service in context.turn_state.items():
dialog_context.services[key] = service
# get the DialogStateManager configuration
dialog_state_manager = DialogStateManager(dialog_context)
await dialog_state_manager.load_all_scopes()
dialog_context.context.turn_state[
dialog_state_manager.__class__.__name__
] = dialog_state_manager
        # Loop while the turn is incomplete. Each pass through the loop either
        # completes the turn and breaks out, or raises an exception that an
        # OnError handler captured, in which case we continue the turn based on
        # the actions the OnError handler introduced.
end_of_turn = False
while not end_of_turn:
try:
dialog_turn_result = await DialogExtensions.__inner_run(
context, dialog_id, dialog_context
)
# turn successfully completed, break the loop
end_of_turn = True
except Exception as err:
# fire error event, bubbling from the leaf.
handled = await dialog_context.emit_event(
DialogEvents.error, err, bubble=True, from_leaf=True
)
if not handled:
# error was NOT handled, throw the exception and end the turn. (This will trigger the
# Adapter.OnError handler and end the entire dialog stack)
raise
# save all state scopes to their respective botState locations.
await dialog_state_manager.save_all_changes()
# return the redundant result because the DialogManager contract expects it
return dialog_turn_result
@staticmethod
async def __inner_run(
turn_context: TurnContext, dialog_id: str, dialog_context: DialogContext
) -> DialogTurnResult:
# Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)
if DialogExtensions.__is_from_parent_to_skill(turn_context):
# Handle remote cancellation request from parent.
if turn_context.activity.type == ActivityTypes.end_of_conversation:
if not dialog_context.stack:
# No dialogs to cancel, just return.
return DialogTurnResult(DialogTurnStatus.Empty)
# Send cancellation message to the dialog to ensure all the parents are canceled
# in the right order.
return await dialog_context.cancel_all_dialogs(True)
# Handle a reprompt event sent from the parent.
if (
turn_context.activity.type == ActivityTypes.event
and turn_context.activity.name == DialogEvents.reprompt_dialog
):
if not dialog_context.stack:
# No dialogs to reprompt, just return.
return DialogTurnResult(DialogTurnStatus.Empty)
await dialog_context.reprompt_dialog()
return DialogTurnResult(DialogTurnStatus.Waiting)
# Continue or start the dialog.
result = await dialog_context.continue_dialog()
if result.status == DialogTurnStatus.Empty:
result = await dialog_context.begin_dialog(dialog_id)
await DialogExtensions._send_state_snapshot_trace(dialog_context)
# Skills should send EoC when the dialog completes.
if (
result.status == DialogTurnStatus.Complete
or result.status == DialogTurnStatus.Cancelled
):
if DialogExtensions.__send_eoc_to_parent(turn_context):
activity = Activity(
type=ActivityTypes.end_of_conversation,
value=result.result,
locale=turn_context.activity.locale,
code=EndOfConversationCodes.completed_successfully
if result.status == DialogTurnStatus.Complete
else EndOfConversationCodes.user_cancelled,
)
await turn_context.send_activity(activity)
return result
@staticmethod
def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:
if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):
return False
claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
return isinstance(
claims_identity, ClaimsIdentity
) and SkillValidation.is_skill_claim(claims_identity.claims)
@staticmethod
async def _send_state_snapshot_trace(dialog_context: DialogContext):
"""
Helper to send a trace activity with a memory snapshot of the active dialog DC.
:param dialog_context:
:return:
"""
claims_identity = dialog_context.context.turn_state.get(
BotAdapter.BOT_IDENTITY_KEY, None
)
trace_label = (
"Skill State"
if isinstance(claims_identity, ClaimsIdentity)
and SkillValidation.is_skill_claim(claims_identity.claims)
else "Bot State"
)
# send trace of memory
snapshot = DialogExtensions._get_active_dialog_context(
dialog_context
).state.get_memory_snapshot()
trace_activity = Activity.create_trace_activity(
"BotState",
"https://www.botframework.com/schemas/botState",
snapshot,
trace_label,
)
await dialog_context.context.send_activity(trace_activity)
@staticmethod
def __send_eoc_to_parent(turn_context: TurnContext) -> bool:
claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
if isinstance(
claims_identity, ClaimsIdentity
) and SkillValidation.is_skill_claim(claims_identity.claims):
# EoC Activities returned by skills are bounced back to the bot by SkillHandler.
# In those cases we will have a SkillConversationReference instance in state.
skill_conversation_reference: SkillConversationReference = (
turn_context.turn_state.get(
SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
)
)
if skill_conversation_reference:
# If the skillConversationReference.OAuthScope is for one of the supported channels,
# we are at the root and we should not send an EoC.
return (
skill_conversation_reference.oauth_scope
!= AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
and skill_conversation_reference.oauth_scope
!= GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
)
return True
return False
@staticmethod
def _get_active_dialog_context(dialog_context: DialogContext) -> DialogContext:
"""
Recursively walk up the DC stack to find the active DC.
:param dialog_context:
:return:
"""
child = dialog_context.child
if not child:
return dialog_context
return DialogExtensions._get_active_dialog_context(child)
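# Illustrative usage sketch (not part of the original module): typical wiring
# from a bot's turn handler. Assumes conversation_state is a ConversationState
# instance and main_dialog is any Dialog; the property name is arbitrary.
#
#   dialog_state_accessor = conversation_state.create_property("DialogState")
#   await DialogExtensions.run_dialog(main_dialog, turn_context, dialog_state_accessor)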
--- file: botbuilder-python/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py ---
from abc import ABC, abstractmethod
class PathResolverBase(ABC):
@abstractmethod
def transform_path(self, path: str):
raise NotImplementedError()
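# Illustrative subclass sketch (not part of the original module): a
# hypothetical resolver that rewrites a "$" alias prefix to the "dialog."
# memory scope, implementing the single method the base class requires.
#
#   class DollarPathResolver(PathResolverBase):
#       def transform_path(self, path: str):
#           return "dialog." + path[1:] if path.startswith("$") else path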
--- file: botbuilder-python/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/path_resolver_base.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from abc import ABC, abstractmethod
class MemoryScope(ABC):
    def __init__(self, name: str, include_in_snapshot: bool = True):
        # Gets or sets a value indicating whether this memory should be
        # included in snapshots.
        self.include_in_snapshot = include_in_snapshot
        # Gets or sets the name of the scope.
        self.name = name

    @abstractmethod
    def get_memory(
        self, dialog_context: "DialogContext"
    ) -> object:  # pylint: disable=unused-argument
        """
        Gets the backing memory for this scope.

        :param dialog_context: The dialog context for this turn.
        :return: The memory for the scope.
        """
        raise NotImplementedError()

    @abstractmethod
    def set_memory(
        self, dialog_context: "DialogContext", memory: object
    ):  # pylint: disable=unused-argument
        """
        Changes the backing object for the memory scope.

        :param dialog_context: The dialog context for this turn.
        :param memory: The new backing memory object.
        """
        raise NotImplementedError()

    async def load(
        self, dialog_context: "DialogContext", force: bool = False
    ):  # pylint: disable=unused-argument
        """
        Populates the state cache for this scope from the storage layer.

        :param dialog_context: The dialog context for this turn.
        :param force: Optional, True to overwrite any existing state cache, or
            False to load state from storage only if the cache doesn't already exist.
        """
        return

    async def save_changes(
        self, dialog_context: "DialogContext", force: bool = False
    ):  # pylint: disable=unused-argument
        """
        Writes the state cache for this scope to the storage layer.

        :param dialog_context: The dialog context for this turn.
        :param force: Optional, True to save the state cache to storage, or
            False to save state to storage only if a property in the cache has changed.
        """
        return

    async def delete(
        self, dialog_context: "DialogContext"
    ):  # pylint: disable=unused-argument
        """
        Deletes any state in storage and the cache for this scope.

        :param dialog_context: The dialog context for this turn.
        """
        return
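# Illustrative subclass sketch (not part of the original module): a minimal,
# hypothetical scope backed by a plain dict that ignores the storage layer, so
# the inherited load/save_changes/delete no-ops are sufficient.
#
#   class DictMemoryScope(MemoryScope):
#       def __init__(self):
#           super().__init__("dict", include_in_snapshot=True)
#           self._memory = {}
#
#       def get_memory(self, dialog_context: "DialogContext") -> object:
#           return self._memory
#
#       def set_memory(self, dialog_context: "DialogContext", memory: object):
#           self._memory = memory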
--- file: botbuilder-python/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/memory_scope.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import re
from datetime import datetime, timedelta
from http import HTTPStatus
from typing import Union, Awaitable, Callable
from botframework.connector import Channels
from botframework.connector.auth import (
ClaimsIdentity,
SkillValidation,
JwtTokenValidation,
)
from botbuilder.core import (
CardFactory,
MessageFactory,
InvokeResponse,
TurnContext,
BotAdapter,
)
from botbuilder.core.bot_framework_adapter import TokenExchangeRequest
from botbuilder.dialogs import Dialog, DialogContext, DialogTurnResult
from botbuilder.schema import (
Activity,
ActivityTypes,
ActionTypes,
CardAction,
InputHints,
SigninCard,
SignInConstants,
OAuthCard,
TokenResponse,
TokenExchangeInvokeRequest,
TokenExchangeInvokeResponse,
)
from .prompt_options import PromptOptions
from .oauth_prompt_settings import OAuthPromptSettings
from .prompt_validator_context import PromptValidatorContext
from .prompt_recognizer_result import PromptRecognizerResult
from .._user_token_access import _UserTokenAccess
class CallerInfo:
def __init__(self, caller_service_url: str = None, scope: str = None):
self.caller_service_url = caller_service_url
self.scope = scope
class OAuthPrompt(Dialog):
PERSISTED_OPTIONS = "options"
PERSISTED_STATE = "state"
PERSISTED_EXPIRES = "expires"
PERSISTED_CALLER = "caller"
"""
Creates a new prompt that asks the user to sign in, using the Bot Framework Single Sign On (SSO) service.
.. remarks::
        The prompt will attempt to retrieve the user's current token and if the user isn't signed in, it
        will send them an `OAuthCard` containing a button they can press to sign in. Depending on the channel,
        the user will be sent through one of two possible sign-in flows:
        - The automatic sign-in flow where once the user signs in, the SSO service will forward
        the bot the user's access token using either an `event` or `invoke` activity.
        - The "magic code" flow where once the user signs in, they will be prompted by the SSO service
        to send the bot a six digit code confirming their identity. This code will be sent as a
        standard `message` activity.
        Both flows are automatically supported by the `OAuthPrompt`; the only thing you need to be careful of
        is that you don't block the `event` and `invoke` activities that the prompt might be waiting on.
        You should avoid persisting the access token with your bot's other state. The Bot Framework's SSO service
        will securely store the token on your behalf. If you store it in your bot's state,
it could expire or be revoked in between turns.
When calling the prompt from within a waterfall step, you should use the token within the step
following the prompt and then let the token go out of scope at the end of your function.
        When used with your bot's :class:`DialogSet`, you can simply add a new instance of the prompt as a named
        dialog using :meth:`DialogSet.add()`.
You can then start the prompt from a waterfall step using either :meth:`DialogContext.begin()` or
:meth:`DialogContext.prompt()`.
The user will be prompted to sign in as needed and their access token will be passed as an argument to
the callers next waterfall step.
"""
def __init__(
self,
dialog_id: str,
settings: OAuthPromptSettings,
validator: Callable[[PromptValidatorContext], Awaitable[bool]] = None,
):
"""
Creates a new instance of the :class:`OAuthPrompt` class.
:param dialog_id: The Id to assign to this prompt.
:type dialog_id: str
:param settings: Additional authentication settings to use with this instance of the prompt
:type settings: :class:`OAuthPromptSettings`
:param validator: Optional, contains additional, custom validation for this prompt
:type validator: :class:`PromptValidatorContext`
.. remarks::
            The value of :param dialog_id: must be unique within the :class:`DialogSet` or :class:`ComponentDialog`
to which the prompt is added.
"""
super().__init__(dialog_id)
self._validator = validator
if not settings:
raise TypeError(
"OAuthPrompt.__init__(): OAuthPrompt requires OAuthPromptSettings."
)
self._settings = settings
self._validator = validator
async def begin_dialog(
self, dialog_context: DialogContext, options: PromptOptions = None
) -> DialogTurnResult:
"""
Starts an authentication prompt dialog. Called when an authentication prompt dialog is pushed onto the
dialog stack and is being activated.
:param dialog_context: The dialog context for the current turn of the conversation
:type dialog_context: :class:`DialogContext`
:param options: Optional, additional information to pass to the prompt being started
:type options: :class:`PromptOptions`
:return: Dialog turn result
        :rtype: :class:`DialogTurnResult`
.. remarks::
If the task is successful, the result indicates whether the prompt is still active after the turn
has been processed.
"""
if dialog_context is None:
raise TypeError(
f"OAuthPrompt.begin_dialog(): Expected DialogContext but got NoneType instead"
)
options = options or PromptOptions()
# Ensure prompts have input hint set
if options.prompt and not options.prompt.input_hint:
options.prompt.input_hint = InputHints.accepting_input
if options.retry_prompt and not options.retry_prompt.input_hint:
options.retry_prompt.input_hint = InputHints.accepting_input
# Initialize prompt state
timeout = (
self._settings.timeout
if isinstance(self._settings.timeout, int)
else 900000
)
state = dialog_context.active_dialog.state
state[OAuthPrompt.PERSISTED_STATE] = {}
state[OAuthPrompt.PERSISTED_OPTIONS] = options
state[OAuthPrompt.PERSISTED_EXPIRES] = datetime.now() + timedelta(
seconds=timeout / 1000
)
state[OAuthPrompt.PERSISTED_CALLER] = OAuthPrompt.__create_caller_info(
dialog_context.context
)
output = await _UserTokenAccess.get_user_token(
dialog_context.context, self._settings, None
)
if output is not None:
# Return token
return await dialog_context.end_dialog(output)
await self._send_oauth_card(dialog_context.context, options.prompt)
return Dialog.end_of_turn
async def continue_dialog(self, dialog_context: DialogContext) -> DialogTurnResult:
"""
Continues a dialog. Called when a prompt dialog is the active dialog and the user replied with a new activity.
:param dialog_context: The dialog context for the current turn of the conversation
:type dialog_context: :class:`DialogContext`
:return: Dialog turn result
:rtype: :class:`DialogTurnResult`
.. remarks::
If the task is successful, the result indicates whether the dialog is still
active after the turn has been processed by the dialog.
The prompt generally continues to receive the user's replies until it accepts the
user's reply as valid input for the prompt.
"""
# Check for timeout
state = dialog_context.active_dialog.state
is_message = dialog_context.context.activity.type == ActivityTypes.message
is_timeout_activity_type = (
is_message
or OAuthPrompt._is_token_response_event(dialog_context.context)
or OAuthPrompt._is_teams_verification_invoke(dialog_context.context)
or OAuthPrompt._is_token_exchange_request_invoke(dialog_context.context)
)
has_timed_out = is_timeout_activity_type and (
datetime.now() > state[OAuthPrompt.PERSISTED_EXPIRES]
)
if has_timed_out:
return await dialog_context.end_dialog(None)
if state["state"].get("attemptCount") is None:
state["state"]["attemptCount"] = 1
else:
state["state"]["attemptCount"] += 1
# Recognize token
recognized = await self._recognize_token(dialog_context)
# Validate the return value
is_valid = False
if self._validator is not None:
is_valid = await self._validator(
PromptValidatorContext(
dialog_context.context,
recognized,
state[OAuthPrompt.PERSISTED_STATE],
state[OAuthPrompt.PERSISTED_OPTIONS],
)
)
elif recognized.succeeded:
is_valid = True
# Return recognized value or re-prompt
if is_valid:
return await dialog_context.end_dialog(recognized.value)
if is_message and self._settings.end_on_invalid_message:
# If EndOnInvalidMessage is set, complete the prompt with no result.
return await dialog_context.end_dialog(None)
# Send retry prompt
if (
not dialog_context.context.responded
and is_message
and state[OAuthPrompt.PERSISTED_OPTIONS].retry_prompt is not None
):
await dialog_context.context.send_activity(
state[OAuthPrompt.PERSISTED_OPTIONS].retry_prompt
)
return Dialog.end_of_turn
async def get_user_token(
self, context: TurnContext, code: str = None
) -> TokenResponse:
"""
        Gets the user's token.
:param context: Context for the current turn of conversation with the user
:type context: :class:`TurnContext`
:param code: (Optional) Optional user entered code to validate.
:type code: str
:return: A response that includes the user's token
:rtype: :class:`TokenResponse`
.. remarks::
If the task is successful and the user already has a token or the user successfully signs in,
the result contains the user's token.
"""
return await _UserTokenAccess.get_user_token(context, self._settings, code)
async def sign_out_user(self, context: TurnContext):
"""
        Signs out the user.
:param context: Context for the current turn of conversation with the user
:type context: :class:`TurnContext`
:return: A task representing the work queued to execute
.. remarks::
            If the task is successful, the user is signed out of the configured connection.
"""
return await _UserTokenAccess.sign_out_user(context, self._settings)
@staticmethod
def __create_caller_info(context: TurnContext) -> CallerInfo:
bot_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
if bot_identity and SkillValidation.is_skill_claim(bot_identity.claims):
return CallerInfo(
caller_service_url=context.activity.service_url,
scope=JwtTokenValidation.get_app_id_from_claims(bot_identity.claims),
)
return None
async def _send_oauth_card(
self, context: TurnContext, prompt: Union[Activity, str] = None
):
if not isinstance(prompt, Activity):
prompt = MessageFactory.text(prompt or "", None, InputHints.accepting_input)
else:
prompt.input_hint = prompt.input_hint or InputHints.accepting_input
prompt.attachments = prompt.attachments or []
        if OAuthPrompt._channel_supports_oauth_card(context.activity.channel_id):
if not any(
att.content_type == CardFactory.content_types.oauth_card
for att in prompt.attachments
):
card_action_type = ActionTypes.signin
sign_in_resource = await _UserTokenAccess.get_sign_in_resource(
context, self._settings
)
link = sign_in_resource.sign_in_link
bot_identity: ClaimsIdentity = context.turn_state.get(
BotAdapter.BOT_IDENTITY_KEY
)
# use the SignInLink when in speech channel or bot is a skill or
# an extra OAuthAppCredentials is being passed in
if (
(
bot_identity
and SkillValidation.is_skill_claim(bot_identity.claims)
)
or not context.activity.service_url.startswith("http")
or self._settings.oath_app_credentials
):
if context.activity.channel_id == Channels.emulator:
card_action_type = ActionTypes.open_url
elif not OAuthPrompt._channel_requires_sign_in_link(
context.activity.channel_id
):
link = None
json_token_ex_resource = (
sign_in_resource.token_exchange_resource.as_dict()
if sign_in_resource.token_exchange_resource
else None
)
prompt.attachments.append(
CardFactory.oauth_card(
OAuthCard(
text=self._settings.text,
connection_name=self._settings.connection_name,
buttons=[
CardAction(
title=self._settings.title,
text=self._settings.text,
type=card_action_type,
value=link,
)
],
token_exchange_resource=json_token_ex_resource,
)
)
)
else:
if not any(
att.content_type == CardFactory.content_types.signin_card
for att in prompt.attachments
):
if not hasattr(context.adapter, "get_oauth_sign_in_link"):
raise Exception(
"OAuthPrompt._send_oauth_card(): get_oauth_sign_in_link() not supported by the current adapter"
)
link = await context.adapter.get_oauth_sign_in_link(
context,
self._settings.connection_name,
None,
self._settings.oath_app_credentials,
)
prompt.attachments.append(
CardFactory.signin_card(
SigninCard(
text=self._settings.text,
buttons=[
CardAction(
title=self._settings.title,
value=link,
type=ActionTypes.signin,
)
],
)
)
)
# Send prompt
await context.send_activity(prompt)
async def _recognize_token(
self, dialog_context: DialogContext
) -> PromptRecognizerResult:
context = dialog_context.context
token = None
if OAuthPrompt._is_token_response_event(context):
token = context.activity.value
# fixup the turnContext's state context if this was received from a skill host caller
state: CallerInfo = dialog_context.active_dialog.state[
OAuthPrompt.PERSISTED_CALLER
]
if state:
# set the ServiceUrl to the skill host's Url
dialog_context.context.activity.service_url = state.caller_service_url
claims_identity = context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
connector_client = await _UserTokenAccess.create_connector_client(
context,
dialog_context.context.activity.service_url,
claims_identity,
state.scope,
)
context.turn_state[
BotAdapter.BOT_CONNECTOR_CLIENT_KEY
] = connector_client
elif OAuthPrompt._is_teams_verification_invoke(context):
code = context.activity.value["state"]
try:
token = await _UserTokenAccess.get_user_token(
context, self._settings, code
)
if token is not None:
await context.send_activity(
Activity(
type="invokeResponse",
value=InvokeResponse(status=HTTPStatus.OK),
)
)
else:
await context.send_activity(
Activity(
type="invokeResponse",
value=InvokeResponse(status=HTTPStatus.NOT_FOUND),
)
)
except Exception:
await context.send_activity(
Activity(
type="invokeResponse",
value=InvokeResponse(status=HTTPStatus.INTERNAL_SERVER_ERROR),
)
)
elif self._is_token_exchange_request_invoke(context):
if isinstance(context.activity.value, dict):
context.activity.value = TokenExchangeInvokeRequest().from_dict(
context.activity.value
)
if not (
context.activity.value
and self._is_token_exchange_request(context.activity.value)
):
# Received activity is not a token exchange request.
await context.send_activity(
self._get_token_exchange_invoke_response(
int(HTTPStatus.BAD_REQUEST),
"The bot received an InvokeActivity that is missing a TokenExchangeInvokeRequest value."
" This is required to be sent with the InvokeActivity.",
)
)
elif (
context.activity.value.connection_name != self._settings.connection_name
):
# Connection name on activity does not match that of setting.
                await context.send_activity(
                    self._get_token_exchange_invoke_response(
                        int(HTTPStatus.BAD_REQUEST),
                        "The bot received an InvokeActivity with a TokenExchangeInvokeRequest containing a"
                        " ConnectionName that does not match the ConnectionName expected by the bot's active"
                        " OAuthPrompt. Ensure these names match when sending the InvokeActivity.",
                    )
                )
            elif not getattr(context.adapter, "exchange_token", None):
                # Token Exchange not supported in the adapter.
                await context.send_activity(
                    self._get_token_exchange_invoke_response(
                        int(HTTPStatus.BAD_GATEWAY),
                        "The bot's BotAdapter does not support token exchange operations."
                        " Ensure the bot's Adapter supports the ExtendedUserTokenProvider interface.",
                    )
                )
                raise AttributeError(
                    "OAuthPrompt._recognize_token(): token exchange is not supported by the current adapter."
                )
else:
# No errors. Proceed with token exchange.
token_exchange_response = None
try:
token_exchange_response = await _UserTokenAccess.exchange_token(
context,
self._settings,
TokenExchangeRequest(token=context.activity.value.token),
)
                except Exception:
# Ignore Exceptions
# If token exchange failed for any reason, tokenExchangeResponse above stays null, and
# hence we send back a failure invoke response to the caller.
pass
if not token_exchange_response or not token_exchange_response.token:
await context.send_activity(
self._get_token_exchange_invoke_response(
int(HTTPStatus.PRECONDITION_FAILED),
"The bot is unable to exchange token. Proceed with regular login.",
)
)
else:
await context.send_activity(
self._get_token_exchange_invoke_response(
int(HTTPStatus.OK), None, context.activity.value.id
)
)
token = TokenResponse(
channel_id=token_exchange_response.channel_id,
connection_name=token_exchange_response.connection_name,
token=token_exchange_response.token,
expiration=None,
)
elif context.activity.type == ActivityTypes.message and context.activity.text:
match = re.match(r"(?<!\d)\d{6}(?!\d)", context.activity.text)
if match:
token = await _UserTokenAccess.get_user_token(
context, self._settings, match[0]
)
return (
PromptRecognizerResult(True, token)
if token is not None
else PromptRecognizerResult()
)
def _get_token_exchange_invoke_response(
self, status: int, failure_detail: str, identifier: str = None
) -> Activity:
return Activity(
type=ActivityTypes.invoke_response,
value=InvokeResponse(
status=status,
body=TokenExchangeInvokeResponse(
id=identifier,
connection_name=self._settings.connection_name,
failure_detail=failure_detail,
),
),
)
@staticmethod
def _is_token_response_event(context: TurnContext) -> bool:
activity = context.activity
return (
activity.type == ActivityTypes.event
and activity.name == SignInConstants.token_response_event_name
)
@staticmethod
def _is_teams_verification_invoke(context: TurnContext) -> bool:
activity = context.activity
return (
activity.type == ActivityTypes.invoke
and activity.name == SignInConstants.verify_state_operation_name
)
@staticmethod
    def _channel_supports_oauth_card(channel_id: str) -> bool:
if channel_id in [
Channels.cortana,
Channels.skype,
Channels.skype_for_business,
]:
return False
return True
@staticmethod
def _channel_requires_sign_in_link(channel_id: str) -> bool:
if channel_id in [Channels.ms_teams]:
return True
return False
@staticmethod
def _is_token_exchange_request_invoke(context: TurnContext) -> bool:
activity = context.activity
return (
activity.type == ActivityTypes.invoke
and activity.name == SignInConstants.token_exchange_operation_name
)
@staticmethod
def _is_token_exchange_request(obj: TokenExchangeInvokeRequest) -> bool:
return bool(obj.connection_name) and bool(obj.token)
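# Illustrative usage sketch (not part of the original module): registering the
# prompt in a DialogSet and reading the token in the following waterfall step.
# The connection name and timeout are invented example values.
#
#   dialogs.add(
#       OAuthPrompt(
#           "OAuthPrompt",
#           OAuthPromptSettings(
#               connection_name="MyOAuthConnection",
#               text="Please sign in",
#               title="Sign In",
#               timeout=300000,  # milliseconds
#           ),
#       )
#   )
#
#   async def use_token_step(step_context):
#       token_response = step_context.result  # a TokenResponse, or None
#       if token_response and token_response.token:
#           await step_context.context.send_activity("You are now signed in.")
#       return await step_context.end_dialog()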
--- file: botbuilder-python/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/oauth_prompt.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from enum import Enum
from typing import Callable, List, Tuple
import aiounittest
from botbuilder.core import (
AutoSaveStateMiddleware,
BotAdapter,
ConversationState,
MemoryStorage,
MessageFactory,
UserState,
TurnContext,
)
from botbuilder.core.adapters import TestAdapter
from botbuilder.core.skills import SkillHandler, SkillConversationReference
from botbuilder.dialogs import (
ComponentDialog,
Dialog,
DialogContext,
DialogEvents,
DialogInstance,
DialogReason,
TextPrompt,
WaterfallDialog,
DialogManager,
DialogManagerResult,
DialogTurnStatus,
WaterfallStepContext,
)
from botbuilder.dialogs.prompts import PromptOptions
from botbuilder.schema import (
Activity,
ActivityTypes,
ChannelAccount,
ConversationAccount,
EndOfConversationCodes,
InputHints,
)
from botframework.connector.auth import AuthenticationConstants, ClaimsIdentity
class SkillFlowTestCase(str, Enum):
# DialogManager is executing on a root bot with no skills (typical standalone bot).
root_bot_only = "RootBotOnly"
# DialogManager is executing on a root bot handling replies from a skill.
root_bot_consuming_skill = "RootBotConsumingSkill"
# DialogManager is executing in a skill that is called from a root and calling another skill.
middle_skill = "MiddleSkill"
    # DialogManager is executing in a skill that is called from a parent (a root or another skill) but doesn't call
    # another skill.
leaf_skill = "LeafSkill"
class SimpleComponentDialog(ComponentDialog):
# An App ID for a parent bot.
parent_bot_id = "00000000-0000-0000-0000-0000000000PARENT"
# An App ID for a skill bot.
skill_bot_id = "00000000-0000-0000-0000-00000000000SKILL"
# Captures an EndOfConversation if it was sent to help with assertions.
eoc_sent: Activity = None
# Property to capture the DialogManager turn results and do assertions.
dm_turn_result: DialogManagerResult = None
def __init__(
self, id: str = None, prop: str = None
): # pylint: disable=unused-argument
super().__init__(id or "SimpleComponentDialog")
self.text_prompt = "TextPrompt"
self.waterfall_dialog = "WaterfallDialog"
self.add_dialog(TextPrompt(self.text_prompt))
self.add_dialog(
WaterfallDialog(
self.waterfall_dialog,
[
self.prompt_for_name,
self.final_step,
],
)
)
self.initial_dialog_id = self.waterfall_dialog
self.end_reason = None
@staticmethod
async def create_test_flow(
dialog: Dialog,
test_case: SkillFlowTestCase = SkillFlowTestCase.root_bot_only,
enabled_trace=False,
) -> TestAdapter:
conversation_id = "testFlowConversationId"
storage = MemoryStorage()
conversation_state = ConversationState(storage)
user_state = UserState(storage)
activity = Activity(
channel_id="test",
service_url="https://test.com",
from_property=ChannelAccount(id="user1", name="User1"),
recipient=ChannelAccount(id="bot", name="Bot"),
conversation=ConversationAccount(
is_group=False, conversation_type=conversation_id, id=conversation_id
),
)
dialog_manager = DialogManager(dialog)
dialog_manager.user_state = user_state
dialog_manager.conversation_state = conversation_state
async def logic(context: TurnContext):
if test_case != SkillFlowTestCase.root_bot_only:
# Create a skill ClaimsIdentity and put it in turn_state so isSkillClaim() returns True.
claims_identity = ClaimsIdentity({}, False)
claims_identity.claims[
"ver"
] = "2.0" # AuthenticationConstants.VersionClaim
claims_identity.claims[
"aud"
] = (
SimpleComponentDialog.skill_bot_id
) # AuthenticationConstants.AudienceClaim
claims_identity.claims[
"azp"
] = (
SimpleComponentDialog.parent_bot_id
) # AuthenticationConstants.AuthorizedParty
context.turn_state[BotAdapter.BOT_IDENTITY_KEY] = claims_identity
if test_case == SkillFlowTestCase.root_bot_consuming_skill:
# Simulate the SkillConversationReference with a channel OAuthScope stored in turn_state.
# This emulates a response coming to a root bot through SkillHandler.
context.turn_state[
SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
] = SkillConversationReference(
None, AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
)
if test_case == SkillFlowTestCase.middle_skill:
# Simulate the SkillConversationReference with a parent Bot ID stored in turn_state.
# This emulates a response coming to a skill from another skill through SkillHandler.
context.turn_state[
SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
] = SkillConversationReference(
None, SimpleComponentDialog.parent_bot_id
)
async def aux(
turn_context: TurnContext, # pylint: disable=unused-argument
activities: List[Activity],
next: Callable,
):
for activity in activities:
if activity.type == ActivityTypes.end_of_conversation:
SimpleComponentDialog.eoc_sent = activity
break
return await next()
# Interceptor to capture the EoC activity if it was sent so we can assert it in the tests.
context.on_send_activities(aux)
SimpleComponentDialog.dm_turn_result = await dialog_manager.on_turn(context)
adapter = TestAdapter(logic, activity, enabled_trace)
adapter.use(AutoSaveStateMiddleware([user_state, conversation_state]))
return adapter
async def on_end_dialog(
self, context: DialogContext, instance: DialogInstance, reason: DialogReason
):
self.end_reason = reason
return await super().on_end_dialog(context, instance, reason)
async def prompt_for_name(self, step: WaterfallStepContext):
return await step.prompt(
self.text_prompt,
PromptOptions(
prompt=MessageFactory.text(
"Hello, what is your name?", None, InputHints.expecting_input
),
retry_prompt=MessageFactory.text(
"Hello, what is your name again?", None, InputHints.expecting_input
),
),
)
async def final_step(self, step: WaterfallStepContext):
await step.context.send_activity(f"Hello { step.result }, nice to meet you!")
return await step.end_dialog(step.result)
class DialogManagerTests(aiounittest.AsyncTestCase):
"""
self.beforeEach(() => {
_dmTurnResult = undefined
})
"""
async def test_handles_bot_and_skills(self):
construction_data: List[Tuple[SkillFlowTestCase, bool]] = [
(SkillFlowTestCase.root_bot_only, False),
(SkillFlowTestCase.root_bot_consuming_skill, False),
(SkillFlowTestCase.middle_skill, True),
(SkillFlowTestCase.leaf_skill, True),
]
for test_case, should_send_eoc in construction_data:
with self.subTest(test_case=test_case, should_send_eoc=should_send_eoc):
SimpleComponentDialog.dm_turn_result = None
SimpleComponentDialog.eoc_sent = None
dialog = SimpleComponentDialog()
test_flow = await SimpleComponentDialog.create_test_flow(
dialog, test_case
)
step1 = await test_flow.send("Hi")
step2 = await step1.assert_reply("Hello, what is your name?")
step3 = await step2.send("SomeName")
await step3.assert_reply("Hello SomeName, nice to meet you!")
self.assertEqual(
SimpleComponentDialog.dm_turn_result.turn_result.status,
DialogTurnStatus.Complete,
)
self.assertEqual(dialog.end_reason, DialogReason.EndCalled)
if should_send_eoc:
self.assertTrue(
bool(SimpleComponentDialog.eoc_sent),
"Skills should send EndConversation to channel",
)
self.assertEqual(
SimpleComponentDialog.eoc_sent.type,
ActivityTypes.end_of_conversation,
)
self.assertEqual(
SimpleComponentDialog.eoc_sent.code,
EndOfConversationCodes.completed_successfully,
)
self.assertEqual(SimpleComponentDialog.eoc_sent.value, "SomeName")
else:
self.assertIsNone(
SimpleComponentDialog.eoc_sent,
"Root bot should not send EndConversation to channel",
)
async def test_skill_handles_eoc_from_parent(self):
SimpleComponentDialog.dm_turn_result = None
dialog = SimpleComponentDialog()
test_flow = await SimpleComponentDialog.create_test_flow(
dialog, SkillFlowTestCase.leaf_skill
)
step1 = await test_flow.send("Hi")
step2 = await step1.assert_reply("Hello, what is your name?")
await step2.send(Activity(type=ActivityTypes.end_of_conversation))
self.assertEqual(
SimpleComponentDialog.dm_turn_result.turn_result.status,
DialogTurnStatus.Cancelled,
)
async def test_skill_handles_reprompt_from_parent(self):
SimpleComponentDialog.dm_turn_result = None
dialog = SimpleComponentDialog()
test_flow = await SimpleComponentDialog.create_test_flow(
dialog, SkillFlowTestCase.leaf_skill
)
step1 = await test_flow.send("Hi")
step2 = await step1.assert_reply("Hello, what is your name?")
step3 = await step2.send(
Activity(type=ActivityTypes.event, name=DialogEvents.reprompt_dialog)
)
await step3.assert_reply("Hello, what is your name?")
self.assertEqual(
SimpleComponentDialog.dm_turn_result.turn_result.status,
DialogTurnStatus.Waiting,
)
async def test_skill_should_return_empty_on_reprompt_with_no_dialog(self):
SimpleComponentDialog.dm_turn_result = None
dialog = SimpleComponentDialog()
test_flow = await SimpleComponentDialog.create_test_flow(
dialog, SkillFlowTestCase.leaf_skill
)
await test_flow.send(
Activity(type=ActivityTypes.event, name=DialogEvents.reprompt_dialog)
)
self.assertEqual(
SimpleComponentDialog.dm_turn_result.turn_result.status,
DialogTurnStatus.Empty,
)
async def test_trace_bot_state(self):
SimpleComponentDialog.dm_turn_result = None
dialog = SimpleComponentDialog()
def assert_is_trace(activity, description): # pylint: disable=unused-argument
assert activity.type == ActivityTypes.trace
def assert_is_trace_and_label(activity, description):
assert_is_trace(activity, description)
assert activity.label == "Bot State"
test_flow = await SimpleComponentDialog.create_test_flow(
dialog, SkillFlowTestCase.root_bot_only, True
)
step1 = await test_flow.send("Hi")
step2 = await step1.assert_reply("Hello, what is your name?")
step3 = await step2.assert_reply(assert_is_trace_and_label)
step4 = await step3.send("SomeName")
step5 = await step4.assert_reply("Hello SomeName, nice to meet you!")
await step5.assert_reply(assert_is_trace_and_label)
self.assertEqual(
SimpleComponentDialog.dm_turn_result.turn_result.status,
DialogTurnStatus.Complete,
)
--- file: botbuilder-python/libraries/botbuilder-dialogs/tests/test_dialog_manager.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# pylint: disable=no-member
import json
from typing import Dict, List, Tuple
from logging import Logger
import aiohttp
from botbuilder.core import InvokeResponse
from botbuilder.core.skills import BotFrameworkClient
from botbuilder.schema import (
Activity,
ExpectedReplies,
ConversationReference,
ConversationAccount,
ChannelAccount,
RoleTypes,
)
from botframework.connector.auth import (
ChannelProvider,
CredentialProvider,
MicrosoftAppCredentials,
AppCredentials,
MicrosoftGovernmentAppCredentials,
)
class BotFrameworkHttpClient(BotFrameworkClient):
"""
A skill host adapter that implements the API to forward activity to a skill and
implements routing ChannelAPI calls from the skill up through the bot/adapter.
"""
INVOKE_ACTIVITY_NAME = "SkillEvents.ChannelApiInvoke"
_BOT_IDENTITY_KEY = "BotIdentity"
    _APP_CREDENTIALS_CACHE: Dict[str, AppCredentials] = {}
def __init__(
self,
credential_provider: CredentialProvider,
channel_provider: ChannelProvider = None,
logger: Logger = None,
):
if not credential_provider:
raise TypeError("credential_provider can't be None")
self._credential_provider = credential_provider
self._channel_provider = channel_provider
self._logger = logger
self._session = aiohttp.ClientSession()
async def post_activity(
self,
from_bot_id: str,
to_bot_id: str,
to_url: str,
service_url: str,
conversation_id: str,
activity: Activity,
) -> InvokeResponse:
app_credentials = await self._get_app_credentials(from_bot_id, to_bot_id)
if not app_credentials:
raise KeyError("Unable to get appCredentials to connect to the skill")
# Get token for the skill call
token = (
app_credentials.get_access_token()
if app_credentials.microsoft_app_id
else None
)
# Capture current activity settings before changing them.
original_conversation_id = activity.conversation.id
original_service_url = activity.service_url
original_relates_to = activity.relates_to
original_recipient = activity.recipient
try:
activity.relates_to = ConversationReference(
service_url=activity.service_url,
activity_id=activity.id,
channel_id=activity.channel_id,
conversation=ConversationAccount(
id=activity.conversation.id,
name=activity.conversation.name,
conversation_type=activity.conversation.conversation_type,
aad_object_id=activity.conversation.aad_object_id,
is_group=activity.conversation.is_group,
role=activity.conversation.role,
tenant_id=activity.conversation.tenant_id,
properties=activity.conversation.properties,
),
bot=None,
)
activity.conversation.id = conversation_id
activity.service_url = service_url
if not activity.recipient:
activity.recipient = ChannelAccount(role=RoleTypes.skill)
else:
activity.recipient.role = RoleTypes.skill
status, content = await self._post_content(to_url, token, activity)
return InvokeResponse(status=status, body=content)
finally:
# Restore activity properties.
activity.conversation.id = original_conversation_id
activity.service_url = original_service_url
activity.relates_to = original_relates_to
activity.recipient = original_recipient
async def _post_content(
self, to_url: str, token: str, activity: Activity
) -> Tuple[int, object]:
headers_dict = {
"Content-type": "application/json; charset=utf-8",
}
if token:
headers_dict.update(
{
"Authorization": f"Bearer {token}",
}
)
json_content = json.dumps(activity.serialize())
resp = await self._session.post(
to_url,
data=json_content.encode("utf-8"),
headers=headers_dict,
)
resp.raise_for_status()
data = (await resp.read()).decode()
return resp.status, json.loads(data) if data else None
async def post_buffered_activity(
self,
from_bot_id: str,
to_bot_id: str,
to_url: str,
service_url: str,
conversation_id: str,
activity: Activity,
) -> List[Activity]:
"""
Helper method to return a list of activities when an Activity is being
sent with DeliveryMode == expectReplies.
"""
response = await self.post_activity(
from_bot_id, to_bot_id, to_url, service_url, conversation_id, activity
)
        if not response or response.status // 100 != 2:
return []
return ExpectedReplies().deserialize(response.body).activities
async def _get_app_credentials(
self, app_id: str, oauth_scope: str
) -> AppCredentials:
if not app_id:
return MicrosoftAppCredentials.empty()
# in the cache?
cache_key = f"{app_id}{oauth_scope}"
app_credentials = BotFrameworkHttpClient._APP_CREDENTIALS_CACHE.get(cache_key)
if app_credentials:
return app_credentials
# create a new AppCredentials
app_password = await self._credential_provider.get_app_password(app_id)
app_credentials = (
MicrosoftGovernmentAppCredentials(app_id, app_password, scope=oauth_scope)
if self._channel_provider and self._channel_provider.is_government()
else MicrosoftAppCredentials(app_id, app_password, oauth_scope=oauth_scope)
)
# put it in the cache
BotFrameworkHttpClient._APP_CREDENTIALS_CACHE[cache_key] = app_credentials
return app_credentials
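# Illustrative usage sketch (not part of the original module): forwarding the
# current activity to a skill endpoint. All IDs and URLs are placeholders.
#
#   client = BotFrameworkHttpClient(credential_provider)
#   response = await client.post_activity(
#       from_bot_id="<host-app-id>",
#       to_bot_id="<skill-app-id>",
#       to_url="https://skill.example.com/api/messages",
#       service_url="https://host.example.com/api/skills",
#       conversation_id="<skill-conversation-id>",
#       activity=turn_context.activity,
#   )
#   if not 200 <= response.status < 300:
#       raise Exception(f"Skill call failed with status {response.status}")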
--- file: botbuilder-python/libraries/botbuilder-integration-aiohttp/botbuilder/integration/aiohttp/bot_framework_http_client.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""Bot Framework Application Insights integration package for aiohttp library."""
import os
__title__ = "botbuilder-integration-applicationinsights-aiohttp"
__version__ = os.environ.get("packageVersion", "4.16.0")
__uri__ = "https://www.github.com/Microsoft/botbuilder-python"
__author__ = "Microsoft"
__description__ = "Microsoft Bot Framework Bot Builder"
__summary__ = "Microsoft Bot Framework Bot Builder SDK for Python."
__license__ = "MIT"
--- file: botbuilder-python/libraries/botbuilder-integration-applicationinsights-aiohttp/botbuilder/integration/applicationinsights/aiohttp/about.py ---
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
class ContentType:
O365_CONNECTOR_CARD = "application/vnd.microsoft.teams.card.o365connector"
FILE_CONSENT_CARD = "application/vnd.microsoft.teams.card.file.consent"
FILE_DOWNLOAD_INFO = "application/vnd.microsoft.teams.file.download.info"
FILE_INFO_CARD = "application/vnd.microsoft.teams.card.file.info"
class Type:
O365_CONNECTOR_CARD_VIEWACTION = "ViewAction"
O365_CONNECTOR_CARD_OPEN_URI = "OpenUri"
O365_CONNECTOR_CARD_HTTP_POST = "HttpPOST"
O365_CONNECTOR_CARD_ACTION_CARD = "ActionCard"
O365_CONNECTOR_CARD_TEXT_INPUT = "TextInput"
O365_CONNECTOR_CARD_DATE_INPUT = "DateInput"
O365_CONNECTOR_CARD_MULTICHOICE_INPUT = "MultichoiceInput"
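# Illustrative usage sketch (not part of the original module): the constants
# above are used as attachment content types, e.g. when building a Teams file
# consent card. Attachment is assumed to come from botbuilder.schema and
# consent_card_content is a placeholder.
#
#   attachment = Attachment(
#       content_type=ContentType.FILE_CONSENT_CARD,
#       name="report.pdf",
#       content=consent_card_content,
#   )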
--- file: botbuilder-python/libraries/botbuilder-schema/botbuilder/schema/teams/additional_properties.py ---
============================================
Microsoft Bot Framework Connector for Python
============================================
.. image:: https://dev.azure.com/FuseLabs/SDK_v4/_apis/build/status/Python/Python-CI-PR-yaml?branchName=master
:target: https://dev.azure.com/FuseLabs/SDK_v4/_apis/build/status/Python/Python-CI-PR-yaml?branchName=master
:align: right
:alt: Azure DevOps status for master branch
.. image:: https://badge.fury.io/py/botframework-connector.svg
:target: https://badge.fury.io/py/botframework-connector
:alt: Latest PyPI package version
Within the Bot Framework, the Bot Connector service enables your bot to exchange messages with users on channels that are configured in the Bot Framework Portal.
How to Install
==============
.. code-block:: bash

    pip install botframework-connector
How to Use
==========
Authentication
==============
Your bot communicates with the Bot Connector service using HTTP over a secured channel (SSL/TLS). When your bot sends a request to the Connector service, it must include information that the Connector service can use to verify its identity.
To authenticate the requests, you'll need to configure the Connector with the App ID and password that you obtained for your bot during registration; the Connector will handle the rest.
More information: https://docs.microsoft.com/en-us/bot-framework/rest-api/bot-framework-rest-connector-authentication
Example
=======
Client creation (with authentication), conversation initialization and activity send to user.
.. code-block:: python
from botbuilder.schema import *
from botframework.connector import ConnectorClient
from botframework.connector.auth import MicrosoftAppCredentials
APP_ID = '<your-app-id>'
APP_PASSWORD = '<your-app-password>'
SERVICE_URL = 'https://slack.botframework.com'
CHANNEL_ID = 'slack'
BOT_ID = '<bot-id>'
RECIPIENT_ID = '<user-id>'
credentials = MicrosoftAppCredentials(APP_ID, APP_PASSWORD)
connector = ConnectorClient(credentials, base_url=SERVICE_URL)
conversation = connector.conversations.create_conversation(ConversationParameters(
bot=ChannelAccount(id=BOT_ID),
members=[ChannelAccount(id=RECIPIENT_ID)]))
connector.conversations.send_to_conversation(conversation.id, Activity(
type=ActivityTypes.message,
channel_id=CHANNEL_ID,
recipient=ChannelAccount(id=RECIPIENT_ID),
from_property=ChannelAccount(id=BOT_ID),
text='Hello World!'))
Rest API Documentation
======================
For the Connector Service API Documentation, please see our `API reference`_.
Documentation/Wiki
==================
You can find more information on the botbuilder-python project by visiting our `Wiki`_.
Requirements
============
* `Python >= 3.7.0`_
Source Code
===========
The latest developer version is available in a github repository:
https://github.com/Microsoft/botbuilder-python/
Contributing
============
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the `Microsoft Open Source Code of Conduct`_.
For more information see the `Code of Conduct FAQ`_ or
contact `[email protected]`_ with any additional questions or comments.
Reporting Security Issues
=========================
Security issues and bugs should be reported privately, via email, to the Microsoft Security
Response Center (MSRC) at `[email protected]`_. You should
receive a response within 24 hours. If for some reason you do not, please follow up via
email to ensure we received your original message. Further information, including the
`MSRC PGP`_ key, can be found in
the `Security TechCenter`_.
License
=======
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT_ License.
.. _API Reference: https://docs.microsoft.com/en-us/Bot-Framework/rest-api/bot-framework-rest-connector-api-reference
.. _Wiki: https://github.com/Microsoft/botbuilder-python/wiki
.. _Python >= 3.7.0: https://www.python.org/downloads/
.. _MIT: https://github.com/Microsoft/botbuilder-python/blob/master/LICENSE
.. _Microsoft Open Source Code of Conduct: https://opensource.microsoft.com/codeofconduct/
.. _Code of Conduct FAQ: https://opensource.microsoft.com/codeofconduct/faq/
.. [email protected]: mailto:[email protected]
.. [email protected]: mailto:[email protected]
.. _MSRC PGP: https://technet.microsoft.com/en-us/security/dn606155
.. _Security TechCenter: https://technet.microsoft.com/en-us/security/default
|
botbuilder-python/libraries/botframework-connector/README.rst/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/README.rst",
"repo_id": "botbuilder-python",
"token_count": 1496
}
| 393 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from logging import Logger
from typing import Optional
from botbuilder.schema import Activity
from ..bot_framework_sdk_client_async import BotFrameworkConnectorConfiguration
from ..http_client_factory import HttpClientFactory
from ..skills.bot_framework_client import BotFrameworkClient
from ._bot_framework_client_impl import _BotFrameworkClientImpl
from ._user_token_client_impl import _UserTokenClientImpl
from ._connector_factory_impl import _ConnectorFactoryImpl
from .authenticate_request_result import AuthenticateRequestResult
from .authentication_configuration import AuthenticationConfiguration
from .authentication_constants import AuthenticationConstants
from .bot_framework_authentication import BotFrameworkAuthentication
from .claims_identity import ClaimsIdentity
from .channel_provider import ChannelProvider
from .connector_factory import ConnectorFactory
from .credential_provider import _DelegatingCredentialProvider
from .jwt_token_validation import JwtTokenValidation
from .service_client_credentials_factory import ServiceClientCredentialsFactory
from .skill_validation import SkillValidation
from .simple_channel_provider import SimpleChannelProvider
from .user_token_client import UserTokenClient
class _BuiltinBotFrameworkAuthentication(BotFrameworkAuthentication):
def __init__(
self,
to_channel_from_bot_oauth_scope: str,
login_endpoint: str,
caller_id: str,
channel_service: str,
oauth_endpoint: str,
credentials_factory: ServiceClientCredentialsFactory,
auth_configuration: AuthenticationConfiguration,
http_client_factory: HttpClientFactory,
connector_client_configuration: BotFrameworkConnectorConfiguration,
logger: Logger,
):
self._to_channel_from_bot_oauth_scope = to_channel_from_bot_oauth_scope
self._login_endpoint = login_endpoint
self._caller_id = caller_id
self._channel_service = channel_service
self._oauth_endpoint = oauth_endpoint
self._credentials_factory = credentials_factory
self._auth_configuration = auth_configuration
self._http_client_factory = http_client_factory
self._connector_client_configuration = connector_client_configuration
self._logger = logger
@staticmethod
def get_app_id(claims_identity: ClaimsIdentity) -> str:
        # For requests from a channel, the app ID is in the Audience claim of the JWT token.
        # For activities coming from the Emulator, the AppId claim contains the bot's AAD app ID.
        # For unauthenticated requests we have an anonymous ClaimsIdentity, provided auth is disabled.
app_id = claims_identity.get_claim_value(AuthenticationConstants.AUDIENCE_CLAIM)
if app_id is None:
app_id = claims_identity.get_claim_value(
AuthenticationConstants.APP_ID_CLAIM
)
return app_id
async def authenticate_request(
self, activity: Activity, auth_header: str
) -> AuthenticateRequestResult:
credential_provider = _DelegatingCredentialProvider(self._credentials_factory)
claims_identity = await JwtTokenValidation.authenticate_request(
activity,
auth_header,
credential_provider,
self._get_channel_provider(),
self._auth_configuration,
)
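        # For skill-to-skill calls the outbound audience is the calling bot's
        # app id taken from the claims; for ordinary channel traffic it falls
        # back to the channel's OAuth scope.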
outbound_audience = (
JwtTokenValidation.get_app_id_from_claims(claims_identity.claims)
if SkillValidation.is_skill_claim(claims_identity.claims)
else self._to_channel_from_bot_oauth_scope
)
caller_id = await self.generate_caller_id(
credential_factory=self._credentials_factory,
claims_identity=claims_identity,
caller_id=self._caller_id,
)
connector_factory = _ConnectorFactoryImpl(
app_id=_BuiltinBotFrameworkAuthentication.get_app_id(claims_identity),
to_channel_from_bot_oauth_scope=self._to_channel_from_bot_oauth_scope,
login_endpoint=self._login_endpoint,
validate_authority=True,
credential_factory=self._credentials_factory,
connector_client_configuration=self._connector_client_configuration,
logger=self._logger,
)
result = AuthenticateRequestResult()
result.claims_identity = claims_identity
result.audience = outbound_audience
result.caller_id = caller_id
result.connector_factory = connector_factory
return result
async def authenticate_streaming_request(
self, auth_header: str, channel_id_header: str
) -> AuthenticateRequestResult:
credential_provider = _DelegatingCredentialProvider(self._credentials_factory)
if channel_id_header is None:
is_auth_disabled = (
await self._credentials_factory.is_authentication_disabled()
)
if not is_auth_disabled:
raise PermissionError("Unauthorized Access. Request is not authorized")
claims_identity = await JwtTokenValidation.validate_auth_header(
auth_header,
credential_provider,
self._get_channel_provider(),
channel_id_header,
)
outbound_audience = (
JwtTokenValidation.get_app_id_from_claims(claims_identity.claims)
if SkillValidation.is_skill_claim(claims_identity.claims)
else self._to_channel_from_bot_oauth_scope
)
caller_id = await self.generate_caller_id(
credential_factory=self._credentials_factory,
claims_identity=claims_identity,
caller_id=self._caller_id,
)
result = AuthenticateRequestResult()
result.claims_identity = claims_identity
result.audience = outbound_audience
result.caller_id = caller_id
return result
def create_connector_factory(
self, claims_identity: ClaimsIdentity
) -> ConnectorFactory:
return _ConnectorFactoryImpl(
app_id=_BuiltinBotFrameworkAuthentication.get_app_id(claims_identity),
to_channel_from_bot_oauth_scope=self._to_channel_from_bot_oauth_scope,
login_endpoint=self._login_endpoint,
validate_authority=True,
credential_factory=self._credentials_factory,
connector_client_configuration=self._connector_client_configuration,
logger=self._logger,
)
async def create_user_token_client(
self, claims_identity: ClaimsIdentity
) -> UserTokenClient:
app_id = _BuiltinBotFrameworkAuthentication.get_app_id(claims_identity)
credentials = await self._credentials_factory.create_credentials(
app_id,
oauth_scope=self._to_channel_from_bot_oauth_scope,
login_endpoint=self._login_endpoint,
validate_authority=True,
)
return _UserTokenClientImpl(app_id, credentials, self._oauth_endpoint)
def create_bot_framework_client(self) -> BotFrameworkClient:
return _BotFrameworkClientImpl(
self._credentials_factory,
self._http_client_factory,
self._login_endpoint,
self._logger,
)
def get_originating_audience(self) -> str:
return self._to_channel_from_bot_oauth_scope
async def authenticate_channel_request(self, auth_header: str) -> ClaimsIdentity:
credential_provider = _DelegatingCredentialProvider(self._credentials_factory)
if auth_header is None:
is_auth_disabled = await credential_provider.is_authentication_disabled()
if not is_auth_disabled:
# No auth header. Auth is required. Request is not authorized.
raise PermissionError("Unauthorized Access. Request is not authorized")
# In the scenario where auth is disabled, we still want to have the
# IsAuthenticated flag set in the ClaimsIdentity.
# To do this requires adding in an empty claim.
# Since ChannelServiceHandler calls are always a skill callback call, we set the skill claim too.
return SkillValidation.create_anonymous_skill_claim()
return await JwtTokenValidation.validate_auth_header(
auth_header,
credential_provider,
channel_service_or_provider=self._get_channel_provider(),
channel_id="unknown",
auth_configuration=self._auth_configuration,
)
def _get_channel_provider(self) -> Optional[ChannelProvider]:
return (
SimpleChannelProvider(self._channel_service)
if self._channel_service is not None
else None
)
|
botbuilder-python/libraries/botframework-connector/botframework/connector/auth/_built_in_bot_framework_authentication.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/botframework/connector/auth/_built_in_bot_framework_authentication.py",
"repo_id": "botbuilder-python",
"token_count": 3557
}
| 394 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from abc import ABC, abstractmethod
from botframework.connector.aio import ConnectorClient
class ConnectorFactory(ABC):
@abstractmethod
async def create(self, service_url: str, audience: str) -> ConnectorClient:
"""
A factory method used to create ConnectorClient instances.
:param service_url: The url for the client.
:param audience: The audience for the credentials the client will use.
:returns: A ConnectorClient for sending activities to the audience at the service_url.
"""
raise NotImplementedError()
|
botbuilder-python/libraries/botframework-connector/botframework/connector/auth/connector_factory.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/botframework/connector/auth/connector_factory.py",
"repo_id": "botbuilder-python",
"token_count": 210
}
| 395 |
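A minimal sketch of implementing the factory above, assuming `ConnectorFactory` and `MicrosoftAppCredentials` are importable from `botframework.connector.auth` as in the README example. A real implementation (see `_ConnectorFactoryImpl` earlier in this dump) would mint credentials scoped to the requested audience; this illustrative `StaticConnectorFactory` reuses one set of app credentials.

```python
from botframework.connector.aio import ConnectorClient
from botframework.connector.auth import ConnectorFactory, MicrosoftAppCredentials


class StaticConnectorFactory(ConnectorFactory):
    """Illustrative factory that reuses one set of app credentials."""

    def __init__(self, app_id: str, app_password: str):
        self._credentials = MicrosoftAppCredentials(app_id, app_password)

    async def create(self, service_url: str, audience: str) -> ConnectorClient:
        # A production factory would create credentials scoped to `audience`;
        # this sketch ignores it and reuses the same app credentials.
        return ConnectorClient(self._credentials, base_url=service_url)
```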
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from datetime import timedelta
from typing import List, Union
class VerifyOptions:
def __init__(self, issuer, audience, clock_tolerance, ignore_expiration):
self.issuer: Union[List[str], str] = issuer or []
self.audience: str = audience
self.clock_tolerance: Union[int, timedelta] = clock_tolerance or 0
self.ignore_expiration: bool = ignore_expiration or False
|
botbuilder-python/libraries/botframework-connector/botframework/connector/auth/verify_options.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/botframework/connector/auth/verify_options.py",
"repo_id": "botbuilder-python",
"token_count": 160
}
| 396 |
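A short, hedged example of constructing `VerifyOptions` as JWT-validation settings; the issuer and audience values are illustrative assumptions, and the import path is inferred from the file layout above.

```python
from datetime import timedelta

from botframework.connector.auth import VerifyOptions

options = VerifyOptions(
    issuer=["https://api.botframework.com"],  # assumed issuer for illustration
    audience="<your-app-id>",
    clock_tolerance=timedelta(minutes=5),
    ignore_expiration=False,
)
```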
interactions:
- request:
body: '{"activity": {"recipient": {"id": "U19KH8EHJ:T03CWQ0QB"}, "channelId":
"slack", "text": "Hi there!", "type": "message"}, "bot": {"id": "B21UTEF8S:T03CWQ0QB"},
"members": [{"id": "U19KH8EHJ:T03CWQ0QB"}]}'
headers:
Accept: [application/json]
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['202']
Content-Type: [application/json; charset=utf-8]
User-Agent: [python/3.5.3 (Linux-4.11.0-041100-generic-x86_64-with-Ubuntu-17.04-zesty)
requests/2.18.1 msrest/0.4.23 azure-botframework-connector/v3.0]
method: POST
uri: https://slack.botframework.com/v3/conversations
response:
body: {string: "{\r\n \"activityId\": \"1514296290.000025\",\r\n \"id\": \"\
B21UTEF8S:T03CWQ0QB:D2369CT7C\"\r\n}"}
headers:
cache-control: [no-cache]
content-length: ['83']
content-type: [application/json; charset=utf-8]
date: ['Tue, 26 Dec 2017 13:51:29 GMT']
expires: ['-1']
pragma: [no-cache]
request-context: ['appId=cid-v1:6814484e-c0d5-40ea-9dba-74ff29ca4f62']
server: [Microsoft-IIS/10.0]
strict-transport-security: [max-age=31536000]
vary: [Accept-Encoding]
x-powered-by: [ASP.NET]
status: {code: 200, message: OK}
version: 1
|
botbuilder-python/libraries/botframework-connector/tests/recordings/test_conversations_create_conversation.yaml/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/tests/recordings/test_conversations_create_conversation.yaml",
"repo_id": "botbuilder-python",
"token_count": 640
}
| 397 |
interactions:
- request:
body: '{"type": "message", "from": {"id": "B21UTEF8S:T03CWQ0QB"}, "text": "Updating
activity...", "channelId": "slack", "recipient": {"id": "U19KH8EHJ:T03CWQ0QB"}}'
headers:
Accept: [application/json]
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['156']
Content-Type: [application/json; charset=utf-8]
User-Agent: [python/3.5.3 (Linux-4.11.0-041100-generic-x86_64-with-Ubuntu-17.04-zesty)
requests/2.18.1 msrest/0.4.23 azure-botframework-connector/v3.0]
method: POST
uri: https://slack.botframework.com/v3/conversations/B21UTEF8S%3AT03CWQ0QB%3AD2369CT7C/activities
response:
body: {string: "{\r\n \"id\": \"1514310955.000243\"\r\n}"}
headers:
cache-control: [no-cache]
content-length: ['33']
content-type: [application/json; charset=utf-8]
date: ['Tue, 26 Dec 2017 17:55:55 GMT']
expires: ['-1']
pragma: [no-cache]
request-context: ['appId=cid-v1:6814484e-c0d5-40ea-9dba-74ff29ca4f62']
server: [Microsoft-IIS/10.0]
strict-transport-security: [max-age=31536000]
vary: [Accept-Encoding]
x-powered-by: [ASP.NET]
status: {code: 200, message: OK}
- request:
body: '{"type": "message", "from": {"id": "B21UTEF8S:T03CWQ0QB"}, "text": "Activity
updated.", "channelId": "slack", "recipient": {"id": "U19KH8EHJ:T03CWQ0QB"}}'
headers:
Accept: [application/json]
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['153']
Content-Type: [application/json; charset=utf-8]
User-Agent: [python/3.5.3 (Linux-4.11.0-041100-generic-x86_64-with-Ubuntu-17.04-zesty)
requests/2.18.1 msrest/0.4.23 azure-botframework-connector/v3.0]
method: PUT
uri: https://slack.botframework.com/v3/conversations/B21UTEF8S%3AT03CWQ0QB%3AD2369CT7C/activities/1514310955.000243
response:
body: {string: "{\r\n \"id\": \"1514310955.000243\"\r\n}"}
headers:
cache-control: [no-cache]
content-length: ['33']
content-type: [application/json; charset=utf-8]
date: ['Tue, 26 Dec 2017 17:55:56 GMT']
expires: ['-1']
pragma: [no-cache]
request-context: ['appId=cid-v1:6814484e-c0d5-40ea-9dba-74ff29ca4f62']
server: [Microsoft-IIS/10.0]
strict-transport-security: [max-age=31536000]
vary: [Accept-Encoding]
x-powered-by: [ASP.NET]
status: {code: 200, message: OK}
version: 1
|
botbuilder-python/libraries/botframework-connector/tests/recordings/test_conversations_update_activity.yaml/0
|
{
"file_path": "botbuilder-python/libraries/botframework-connector/tests/recordings/test_conversations_update_activity.yaml",
"repo_id": "botbuilder-python",
"token_count": 1191
}
| 398 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import json
from asyncio import Future
from abc import ABC, abstractmethod
from uuid import UUID
from typing import List
from botframework.streaming.transport import TransportConstants
from botframework.streaming.payload_transport import PayloadSender
from botframework.streaming.payloads import ResponseMessageStream
from botframework.streaming.payloads.models import (
Header,
Serializable,
StreamDescription,
)
class PayloadDisassembler(ABC):
def __init__(self, sender: PayloadSender, identifier: UUID):
self.sender = sender
self.identifier = identifier
self._task_completion_source = Future()
self._stream: List[int] = None
self._stream_length: int = None
self._send_offset: int = None
self._is_end: bool = False
self._type: str = None
@property
@abstractmethod
def type(self) -> str:
return self._type
async def get_stream(self) -> List[int]:
raise NotImplementedError()
async def disassemble(self):
self._stream = await self.get_stream()
self._stream_length = len(self._stream)
self._send_offset = 0
await self._send()
@staticmethod
def get_stream_description(stream: ResponseMessageStream) -> StreamDescription:
description = StreamDescription(id=str(stream.id))
# TODO: This content type is hardcoded for POC, investigate how to proceed
content = bytes(stream.content).decode("utf8")
try:
json.loads(content)
content_type = "application/json"
except ValueError:
content_type = "text/plain"
description.content_type = content_type
description.length = len(content)
# TODO: validate statement below, also make the string a constant
# content_length: int = stream.content.headers.get("Content-Length")
# if content_length:
# description.length = int(content_length)
# else:
# # TODO: check statement validity
# description.length = stream.content.headers.content_length
return description
@staticmethod
def serialize(item: Serializable, stream: List[int], length: List[int]):
encoded_json = item.to_json().encode()
stream.clear()
stream.extend(list(encoded_json))
length.clear()
length.append(len(stream))
async def _send(self):
# determine if we know the length we can send and whether we can tell if this is the end
is_length_known = self._is_end
header = Header(type=self.type, id=self.identifier, end=self._is_end)
header.payload_length = 0
if self._stream_length is not None:
# determine how many bytes we can send and if we are at the end
header.payload_length = min(
self._stream_length - self._send_offset,
TransportConstants.MAX_PAYLOAD_LENGTH,
)
header.end = (
self._send_offset + header.payload_length >= self._stream_length
)
is_length_known = True
self.sender.send_payload(header, self._stream, is_length_known, self._on_send)
async def _on_send(self, header: Header):
self._send_offset += header.payload_length
self._is_end = header.end
if self._is_end:
self._task_completion_source.set_result(True)
else:
await self._send()
|
botbuilder-python/libraries/botframework-streaming/botframework/streaming/payloads/disassemblers/payload_disassembler.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-streaming/botframework/streaming/payloads/disassemblers/payload_disassembler.py",
"repo_id": "botbuilder-python",
"token_count": 1413
}
| 399 |
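A minimal sketch of a concrete disassembler built on the abstract class above. Import paths are inferred from the file layout, and the `"A"` frame type is an assumption about the streaming protocol's request marker, not a confirmed constant.

```python
from typing import List
from uuid import uuid4

from botframework.streaming.payload_transport import PayloadSender
from botframework.streaming.payloads.disassemblers import PayloadDisassembler


class FixedPayloadDisassembler(PayloadDisassembler):
    """Illustrative disassembler that streams a fixed byte payload."""

    def __init__(self, sender: PayloadSender, payload: bytes):
        super().__init__(sender, uuid4())
        self._payload = list(payload)

    @property
    def type(self) -> str:
        return "A"  # assumed request frame marker

    async def get_stream(self) -> List[int]:
        return self._payload
```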
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from uuid import UUID
from typing import Callable, Dict, List
from botframework.streaming.payloads.assemblers import PayloadStreamAssembler
from botframework.streaming.payloads.models import Header
class StreamManager:
def __init__(
self, on_cancel_stream: Callable[[PayloadStreamAssembler], None] = None
):
self._on_cancel_stream = on_cancel_stream or (lambda ocs: None)
self._active_assemblers: Dict[UUID, PayloadStreamAssembler] = {}
def get_payload_assembler(self, identifier: UUID) -> PayloadStreamAssembler:
self._active_assemblers[identifier] = self._active_assemblers.get(
identifier, PayloadStreamAssembler(self, identifier)
)
return self._active_assemblers[identifier]
def get_payload_stream(self, header: Header) -> "streaming.PayloadStream":
assembler = self.get_payload_assembler(header.id)
return assembler.get_payload_as_stream()
def on_receive(
self, header: Header, content_stream: List[int], content_length: int
):
assembler = self._active_assemblers.get(header.id)
if assembler:
assembler.on_receive(header, content_stream, content_length)
def close_stream(self, identifier: UUID):
assembler = self._active_assemblers.get(identifier)
if assembler:
del self._active_assemblers[identifier]
stream = assembler.get_payload_as_stream()
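            # Cancel if the stream is incomplete: fewer bytes arrived than the
            # declared content length, or the final (end) header never came.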
            if (
                assembler.content_length
                and len(stream) < assembler.content_length
            ) or not assembler.end:
self._on_cancel_stream(assembler)
|
botbuilder-python/libraries/botframework-streaming/botframework/streaming/payloads/stream_manager.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-streaming/botframework/streaming/payloads/stream_manager.py",
"repo_id": "botbuilder-python",
"token_count": 704
}
| 400 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from enum import IntEnum
class WebSocketCloseStatus(IntEnum):
NORMAL_CLOSURE = 1000
ENDPOINT_UNAVAILABLE = 1001
PROTOCOL_ERROR = 1002
INVALID_MESSAGE_TYPE = 1003
EMPTY = 1005
INVALID_PAYLOAD_DATA = 1007
POLICY_VIOLATION = 1008
MESSAGE_TOO_BIG = 1009
MANDATORY_EXTENSION = 1010
INTERNAL_SERVER_ERROR = 1011
|
botbuilder-python/libraries/botframework-streaming/botframework/streaming/transport/web_socket/web_socket_close_status.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-streaming/botframework/streaming/transport/web_socket/web_socket_close_status.py",
"repo_id": "botbuilder-python",
"token_count": 185
}
| 401 |
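These values mirror the standard RFC 6455 close codes; as an `IntEnum`, each member converts directly to the integer sent in a close frame. A tiny sketch (import path inferred from the file layout above):

```python
from botframework.streaming.transport.web_socket import WebSocketCloseStatus

code = WebSocketCloseStatus.NORMAL_CLOSURE
assert int(code) == 1000  # wire-level value for a normal closure
```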
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from typing import List
from uuid import uuid4
import aiounittest
from botframework.streaming import PayloadStream, StreamingRequest
from botframework.streaming.payloads import SendOperations
from botframework.streaming.payloads.assemblers import PayloadStreamAssembler
from botframework.streaming.payload_transport import PayloadSender
from botframework.streaming.transport import TransportSenderBase
class MockTransportSender(TransportSenderBase):
# pylint: disable=unused-argument
def __init__(self):
super().__init__()
self.is_connected = True
self.buffers = []
async def send(self, buffer: List[int], offset: int, count: int) -> int:
self.buffers.append(buffer.copy())
return count
class TestSendOperations(aiounittest.AsyncTestCase):
    async def test_request_disassembler_with_variable_stream_send(self):
sender = PayloadSender()
transport = MockTransportSender()
sender.connect(transport)
sut = SendOperations(sender)
request = StreamingRequest.create_post("/a/b")
stream = PayloadStream(PayloadStreamAssembler(None, uuid4(), "blah", 100))
stream.write([0] * 100, 0, 100)
request.add_stream(await stream.read_until_end())
await sut.send_request(uuid4(), request)
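        # Expect four transport writes: header + content for the request frame,
        # then header + content for the attached stream (an assumption about
        # PayloadSender's framing).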
self.assertEqual(4, len(transport.buffers))
    async def test_request_disassembler_with_json_stream_send(self):
sender = PayloadSender()
transport = MockTransportSender()
sender.connect(transport)
sut = SendOperations(sender)
request = StreamingRequest.create_post("/a/b")
request.add_stream(bytes("abc", "ascii"))
await sut.send_request(uuid4(), request)
self.assertEqual(4, len(transport.buffers))
|
botbuilder-python/libraries/botframework-streaming/tests/test_send_operations.py/0
|
{
"file_path": "botbuilder-python/libraries/botframework-streaming/tests/test_send_operations.py",
"repo_id": "botbuilder-python",
"token_count": 699
}
| 402 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import logging
import sys
import traceback
from datetime import datetime
from aiohttp import web
from aiohttp.web import Request, Response
from aiohttp.web_response import json_response
from botbuilder.core import (
BotFrameworkAdapterSettings,
ConversationState,
MemoryStorage,
UserState,
TurnContext,
BotFrameworkAdapter,
)
from botbuilder.schema import Activity, ActivityTypes
from adapter_with_error_handler import AdapterWithErrorHandler
from bots import ChildBot
from dialogs import MainDialog
from config import DefaultConfig
CONFIG = DefaultConfig()
STORAGE = MemoryStorage()
CONVERSATION_STATE = ConversationState(STORAGE)
USER_STATE = UserState(STORAGE)
# Create adapter.
# See https://aka.ms/about-bot-adapter to learn more about how bots work.
SETTINGS = BotFrameworkAdapterSettings(CONFIG.APP_ID, CONFIG.APP_PASSWORD)
ADAPTER = AdapterWithErrorHandler(SETTINGS, CONVERSATION_STATE, USER_STATE)
# Catch-all for errors.
async def on_error(context: TurnContext, error: Exception):
    # This check writes errors to the console log rather than to App Insights.
    # NOTE: In a production environment, you should consider logging this to
    # Azure Application Insights.
print(f"\n [on_turn_error] unhandled error: {error}", file=sys.stderr)
traceback.print_exc()
# Send a message to the user
await context.send_activity("The bot encountered an error or bug.")
await context.send_activity(
"To continue to run this bot, please fix the bot source code."
)
# Send a trace activity if we're talking to the Bot Framework Emulator
if context.activity.channel_id == "emulator":
# Create a trace activity that contains the error object
trace_activity = Activity(
label="TurnError",
name="on_turn_error Trace",
timestamp=datetime.utcnow(),
type=ActivityTypes.trace,
value=f"{error}",
value_type="https://www.botframework.com/schemas/error",
)
# Send a trace activity, which will be displayed in Bot Framework Emulator
await context.send_activity(trace_activity)
ADAPTER.on_turn_error = on_error
DIALOG = MainDialog(CONFIG)
# Create the Bot
BOT = ChildBot(DIALOG, USER_STATE, CONVERSATION_STATE, CONFIG)
# Listen for incoming requests on /api/messages
async def messages(req: Request) -> Response:
# Main bot message handler.
if "application/json" in req.headers["Content-Type"]:
body = await req.json()
else:
return Response(status=415)
activity = Activity().deserialize(body)
auth_header = req.headers["Authorization"] if "Authorization" in req.headers else ""
try:
response = await ADAPTER.process_activity(activity, auth_header, BOT.on_turn)
if response:
return json_response(data=response.body, status=response.status)
return Response(status=201)
except Exception as exception:
raise exception
"""async def options(req: Request) -> Response:
return Response(status=200)"""
APP = web.Application()
APP.router.add_post("/api/messages", messages)
if __name__ == "__main__":
try:
logging.basicConfig(level=logging.DEBUG)
web.run_app(APP, host="localhost", port=CONFIG.PORT)
except Exception as error:
raise error
|
botbuilder-python/tests/experimental/sso/child/app.py/0
|
{
"file_path": "botbuilder-python/tests/experimental/sso/child/app.py",
"repo_id": "botbuilder-python",
"token_count": 1171
}
| 403 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import asyncio
import sys
from types import MethodType
from flask import Flask, request, Response
from botbuilder.core import (
BotFrameworkAdapter,
BotFrameworkAdapterSettings,
MessageFactory,
TurnContext,
)
from botbuilder.schema import Activity, InputHints
from bot import MyBot
# Create the loop and Flask app
LOOP = asyncio.get_event_loop()
APP = Flask(__name__, instance_relative_config=True)
APP.config.from_object("config.DefaultConfig")
# Create adapter.
# See https://aka.ms/about-bot-adapter to learn more about how bots work.
SETTINGS = BotFrameworkAdapterSettings(APP.config["APP_ID"], APP.config["APP_PASSWORD"])
ADAPTER = BotFrameworkAdapter(SETTINGS)
# Catch-all for errors.
# pylint: disable=unused-argument
async def on_error(self, context: TurnContext, error: Exception):
    # This check writes errors to the console log rather than to App Insights.
    # NOTE: In a production environment, you should consider logging this to
    # Azure Application Insights.
print(f"\n [on_turn_error]: {error}", file=sys.stderr)
# Send a message to the user
error_message_text = "Sorry, it looks like something went wrong."
error_message = MessageFactory.text(
error_message_text, error_message_text, InputHints.expecting_input
)
await context.send_activity(error_message)
ADAPTER.on_turn_error = MethodType(on_error, ADAPTER)
# Create the main dialog
BOT = MyBot()
# Listen for incoming requests on GET / for Azure monitoring
@APP.route("/", methods=["GET"])
def ping():
return Response(status=200)
# Listen for incoming requests on /api/messages.
@APP.route("/api/messages", methods=["POST"])
def messages():
# Main bot message handler.
if "application/json" in request.headers["Content-Type"]:
body = request.json
else:
return Response(status=415)
activity = Activity().deserialize(body)
auth_header = (
request.headers["Authorization"] if "Authorization" in request.headers else ""
)
async def aux_func(turn_context):
await BOT.on_turn(turn_context)
try:
task = LOOP.create_task(
ADAPTER.process_activity(activity, auth_header, aux_func)
)
LOOP.run_until_complete(task)
return Response(status=201)
except Exception as exception:
raise exception
if __name__ == "__main__":
try:
APP.run(debug=False, port=APP.config["PORT"]) # nosec debug
except Exception as exception:
raise exception
|
botbuilder-python/tests/functional-tests/functionaltestbot/functionaltestbot/app.py/0
|
{
"file_path": "botbuilder-python/tests/functional-tests/functionaltestbot/functionaltestbot/app.py",
"repo_id": "botbuilder-python",
"token_count": 890
}
| 404 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from botbuilder.core import ActivityHandler, TurnContext
class ChildBot(ActivityHandler):
async def on_message_activity(self, turn_context: TurnContext):
await turn_context.send_activity("child: activity (1)")
await turn_context.send_activity("child: activity (2)")
await turn_context.send_activity("child: activity (3)")
await turn_context.send_activity(f"child: {turn_context.activity.text}")
|
botbuilder-python/tests/skills/skills-buffered/child/bots/child_bot.py/0
|
{
"file_path": "botbuilder-python/tests/skills/skills-buffered/child/bots/child_bot.py",
"repo_id": "botbuilder-python",
"token_count": 167
}
| 405 |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from botbuilder.dialogs import (
    ComponentDialog,
    DialogContext,
    DialogTurnResult,
)
from botbuilder.core import BotFrameworkAdapter, MessageFactory
from botbuilder.schema import ActivityTypes
class LogoutDialog(ComponentDialog):
def __init__(
self, dialog_id: str, connection_name: str,
):
super().__init__(dialog_id)
self.connection_name = connection_name
async def on_begin_dialog(
self, inner_dc: DialogContext, options: object
) -> DialogTurnResult:
result = await self._interrupt(inner_dc)
if result:
return result
return await super().on_begin_dialog(inner_dc, options)
async def on_continue_dialog(self, inner_dc: DialogContext) -> DialogTurnResult:
result = await self._interrupt(inner_dc)
if result:
return result
return await super().on_continue_dialog(inner_dc)
async def _interrupt(self, inner_dc: DialogContext):
if inner_dc.context.activity.type == ActivityTypes.message:
text = inner_dc.context.activity.text.lower()
if text == "logout":
bot_adapter: BotFrameworkAdapter = inner_dc.context.adapter
await bot_adapter.sign_out_user(inner_dc.context, self.connection_name)
await inner_dc.context.send_activity(MessageFactory.text("You have been signed out."))
return await inner_dc.cancel_all_dialogs()
return None
|
botbuilder-python/tests/skills/skills-prototypes/dialog-to-dialog/authentication-bot/dialogs/logout_dialog.py/0
|
{
"file_path": "botbuilder-python/tests/skills/skills-prototypes/dialog-to-dialog/authentication-bot/dialogs/logout_dialog.py",
"repo_id": "botbuilder-python",
"token_count": 620
}
| 406 |
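A hedged sketch of deriving from the dialog above so that typing "logout" interrupts any active dialog; the class name and its contents are illustrative, not taken from the sample.

```python
class MainDialog(LogoutDialog):
    def __init__(self, connection_name: str):
        super().__init__(MainDialog.__name__, connection_name)
        # Add waterfall/prompt steps here as in the surrounding samples.
```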
from typing import Awaitable, Callable, List
from botbuilder.core import Middleware, TurnContext
from botbuilder.schema import Activity, ResourceResponse
class DummyMiddleware(Middleware):
def __init__(self, label: str):
self._label = label
async def on_turn(
self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
):
message = f"{self._label} {context.activity.type} {context.activity.text}"
print(message)
# Register outgoing handler
context.on_send_activities(self._outgoing_handler)
await logic()
async def _outgoing_handler(
self,
context: TurnContext, # pylint: disable=unused-argument
activities: List[Activity],
logic: Callable[[TurnContext], Awaitable[List[ResourceResponse]]],
):
for activity in activities:
message = f"{self._label} {activity.type} {activity.text}"
print(message)
return await logic()
|
botbuilder-python/tests/skills/skills-prototypes/simple-bot-to-bot/simple-root-bot/middleware/dummy_middleware.py/0
|
{
"file_path": "botbuilder-python/tests/skills/skills-prototypes/simple-bot-to-bot/simple-root-bot/middleware/dummy_middleware.py",
"repo_id": "botbuilder-python",
"token_count": 372
}
| 407 |
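A short sketch of wiring this middleware into an adapter's pipeline, assuming the adapter setup used in the samples above; `use` registers middleware in call order, so the label is printed for every incoming and outgoing activity.

```python
from botbuilder.core import BotFrameworkAdapter, BotFrameworkAdapterSettings

SETTINGS = BotFrameworkAdapterSettings("<app-id>", "<app-password>")
ADAPTER = BotFrameworkAdapter(SETTINGS)
ADAPTER.use(DummyMiddleware("[trace]"))
```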
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="ainTwodotsverticalabove-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="ain-ar.fina"/>
<component base="twodotsverticalabove-ar" xOffset="10" yOffset="253"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/ainT_wodotsverticalabove-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/ainT_wodotsverticalabove-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 172
}
| 408 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="dad-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="sad-ar.fina"/>
<component base="dotabove-ar" xOffset="280" yOffset="373"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/dad-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/dad-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 163
}
| 409 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="dalVinvertedabove-ar" format="2">
<advance width="1200"/>
<unicode hex="06EE"/>
<outline>
<component base="dal-ar"/>
<component base="vinvertedabove-ar" xOffset="16" yOffset="502"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/dalV_invertedabove-ar.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/dalV_invertedabove-ar.glif",
"repo_id": "cascadia-code",
"token_count": 171
}
| 410 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="exclamdown" format="2">
<advance width="1200"/>
<unicode hex="00A1"/>
<outline>
<contour>
<point x="446" y="-370" type="line"/>
<point x="754" y="-370" type="line"/>
<point x="717" y="504" type="line"/>
<point x="479" y="504" type="line"/>
</contour>
<component base="period" yScale="-1" yOffset="1060"/>
</outline>
<lib>
<dict>
<key>com.schriftgestaltung.Glyphs.ComponentInfo</key>
<array>
<dict>
<key>alignment</key>
<integer>-1</integer>
<key>index</key>
<integer>0</integer>
<key>name</key>
<string>period</string>
</dict>
</array>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/exclamdown.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/exclamdown.glif",
"repo_id": "cascadia-code",
"token_count": 384
}
| 411 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="four-persiansuperior" format="2">
<advance width="1200"/>
<outline>
<component base="four-persianinferior" yOffset="790"/>
</outline>
<lib>
<dict>
<key>com.schriftgestaltung.Glyphs.category</key>
<string>Number</string>
<key>com.schriftgestaltung.Glyphs.script</key>
<string>arabic</string>
<key>com.schriftgestaltung.Glyphs.subCategory</key>
<string>Small</string>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/four-persiansuperior.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/four-persiansuperior.glif",
"repo_id": "cascadia-code",
"token_count": 265
}
| 412 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="gafThreedots-ar.medi" format="2">
<advance width="1200"/>
<anchor x="423" y="1850" name="top"/>
<outline>
<component base="gaf-ar.medi"/>
<component base="threedotsupabove-ar.v2" xScale="0.7" yScale="0.7" xOffset="44" yOffset="998"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/gafT_hreedots-ar.medi.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/gafT_hreedots-ar.medi.glif",
"repo_id": "cascadia-code",
"token_count": 203
}
| 413 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="gueh-ar.medi" format="2">
<advance width="1200"/>
<outline>
<component base="gaf-ar.medi"/>
<component base="twodotsverticalbelow-ar" xOffset="47" yOffset="-24"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/gueh-ar.medi.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/gueh-ar.medi.glif",
"repo_id": "cascadia-code",
"token_count": 167
}
| 414 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="hahHamzaabove-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="hah-ar.fina"/>
<component base="hamzaabove-ar" xOffset="-54" yOffset="-180"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hahH_amzaabove-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hahH_amzaabove-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 170
}
| 415 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="hahThreedotsabove-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="hah-ar.fina"/>
<component base="threedotsupabove-ar" xOffset="-34" yOffset="382"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hahT_hreedotsabove-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hahT_hreedotsabove-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 172
}
| 416 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="hehVinvertedabove-ar.init" format="2">
<advance width="1200"/>
<outline>
<component base="hehDoachashmee-ar.init"/>
<component base="vinvertedabove-ar" xOffset="-51" yOffset="398"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hehV_invertedabove-ar.init.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/hehV_invertedabove-ar.init.glif",
"repo_id": "cascadia-code",
"token_count": 173
}
| 417 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="kehehThreedotsupbelow-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="keheh-ar.fina"/>
<component base="threedotsupbelow-ar" xOffset="-47" yOffset="-24"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/kehehT_hreedotsupbelow-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/kehehT_hreedotsupbelow-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 176
}
| 418 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="kirghizyu-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="waw-ar.fina"/>
<component base="vinvertedabove-ar" xOffset="70" yOffset="238"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/kirghizyu-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/kirghizyu-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 168
}
| 419 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="less_asterisk.liga" format="2">
<advance width="1200"/>
<outline>
<component base="less" xOffset="131"/>
<component base="asterisk" xOffset="1133"/>
</outline>
<lib>
<dict>
<key>com.schriftgestaltung.Glyphs.ComponentInfo</key>
<array>
<dict>
<key>alignment</key>
<integer>-1</integer>
<key>index</key>
<integer>0</integer>
<key>name</key>
<string>less</string>
</dict>
<dict>
<key>alignment</key>
<integer>-1</integer>
<key>index</key>
<integer>1</integer>
<key>name</key>
<string>asterisk</string>
</dict>
</array>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/less_asterisk.liga.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/less_asterisk.liga.glif",
"repo_id": "cascadia-code",
"token_count": 413
}
| 420 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="lslash" format="2">
<advance width="1200"/>
<unicode hex="0142"/>
<outline>
<contour>
<point x="128" y="438" type="line"/>
<point x="934" y="808" type="line"/>
<point x="934" y="1077" type="line"/>
<point x="128" y="707" type="line"/>
</contour>
<component base="l"/>
</outline>
<lib>
<dict>
<key>com.schriftgestaltung.Glyphs.ComponentInfo</key>
<array>
<dict>
<key>alignment</key>
<integer>-1</integer>
<key>index</key>
<integer>0</integer>
<key>name</key>
<string>l</string>
</dict>
</array>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/lslash.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/lslash.glif",
"repo_id": "cascadia-code",
"token_count": 370
}
| 421 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="noon-ar.medi" format="2">
<advance width="1200"/>
<outline>
<component base="behDotless-ar.medi"/>
<component base="dotabove-ar" xOffset="30" yOffset="213"/>
</outline>
<lib>
<dict>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/noon-ar.medi.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/noon-ar.medi.glif",
"repo_id": "cascadia-code",
"token_count": 163
}
| 422 |
<?xml version='1.0' encoding='UTF-8'?>
<glyph name="noonTahabove-ar.fina" format="2">
<advance width="1200"/>
<outline>
<component base="noonghunna-ar.fina"/>
<component base="_onedotstah" xOffset="-35" yOffset="37"/>
</outline>
<lib>
<dict>
<key>com.schriftgestaltung.Glyphs.glyph.leftMetricsKey</key>
<string>noonghunna-ar</string>
<key>com.schriftgestaltung.Glyphs.glyph.rightMetricsKey</key>
<string>_part.instroke</string>
<key>public.markColor</key>
<string>0.98,0.36,0.67,1</string>
</dict>
</lib>
</glyph>
|
cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/noonT_ahabove-ar.fina.glif/0
|
{
"file_path": "cascadia-code/sources/CascadiaCode-Bold.ufo/glyphs/noonT_ahabove-ar.fina.glif",
"repo_id": "cascadia-code",
"token_count": 270
}
| 423 |