Dataset columns and value ranges (one row per source file; the `content` cell holds the file text):

| Column | Type | Range / classes |
|---|---|---|
| hexsha | string | length 40 |
| size | int64 | 5 to 1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3 to 344 |
| max_stars_repo_name | string | length 5 to 125 |
| max_stars_repo_head_hexsha | string | length 40 to 78 |
| max_stars_repo_licenses | sequence | length 1 to 11 |
| max_stars_count | int64 | 1 to 368k (nullable) |
| max_stars_repo_stars_event_min_datetime | string | length 24 (nullable) |
| max_stars_repo_stars_event_max_datetime | string | length 24 (nullable) |
| max_issues_repo_path | string | length 3 to 344 |
| max_issues_repo_name | string | length 5 to 125 |
| max_issues_repo_head_hexsha | string | length 40 to 78 |
| max_issues_repo_licenses | sequence | length 1 to 11 |
| max_issues_count | int64 | 1 to 116k (nullable) |
| max_issues_repo_issues_event_min_datetime | string | length 24 (nullable) |
| max_issues_repo_issues_event_max_datetime | string | length 24 (nullable) |
| max_forks_repo_path | string | length 3 to 344 |
| max_forks_repo_name | string | length 5 to 125 |
| max_forks_repo_head_hexsha | string | length 40 to 78 |
| max_forks_repo_licenses | sequence | length 1 to 11 |
| max_forks_count | int64 | 1 to 105k (nullable) |
| max_forks_repo_forks_event_min_datetime | string | length 24 (nullable) |
| max_forks_repo_forks_event_max_datetime | string | length 24 (nullable) |
| content | string | length 5 to 1.04M |
| avg_line_length | float64 | 1.14 to 851k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01 to 1 |
0ca84eec978b9de6eeb072304b5498fc042a4e5e | 3,505 | md | Markdown | Docs/HOWTO_linux.md | Arjanit21/Digital-electronics-3 | 2b28277b5bdce2129cc6131ca56cc56f7bf095f6 | [
"MIT"
] | null | null | null | Docs/HOWTO_linux.md | Arjanit21/Digital-electronics-3 | 2b28277b5bdce2129cc6131ca56cc56f7bf095f6 | [
"MIT"
] | null | null | null | Docs/HOWTO_linux.md | Arjanit21/Digital-electronics-3 | 2b28277b5bdce2129cc6131ca56cc56f7bf095f6 | [
"MIT"
] | null | null | null |
## How to use AVR template on Linux
1. Download and install [Visual Studio Code](https://code.visualstudio.com/) source code editor.
2. AVR template requires the following packages to be installed correctly:
```bash
sudo apt-get install git make avrdude putty doxygen doxygen-gui
```
3. Download the latest toolchain AVR 8-bit Toolchain - Linux 64-bit from Microchip [webpage](https://www.microchip.com/mplab/avr-support/avr-and-arm-toolchains-c-compilers), from this [repository](../Install/avr8-gnu-toolchain-3.6.2.1778-linux.any.x86_64.tar.gz), or from Microchip's [archive](https://www.microchip.com/en-us/development-tools-tools-and-software/avr-and-sam-downloads-archive) and extract all files to `/opt` directory:
```bash
sudo tar -xzvf avr8-gnu-toolchain-3.6.2.1778-linux.any.x86_64.tar.gz -C /opt/
```
4. Download and extract the `Examples` folder from this [repository](https://github.com/tomas-fryza/Digital-electronics-2/archive/master.zip) to your local computer.
5. Start the Visual Studio Code editor, open the examples folder, and in `Examples/Makefile.in` enable and/or modify the Linux parameters according to your local settings:
```Makefile
## Linux
PREFIX = /opt/avr8-gnu-toolchain-linux_x86_64
AVRDUDE = avrdude
RM = rm -f
# See "dmesg" command output
USBPORT = /dev/ttyUSB0
## Windows
#PREFIX = C:\Appz\Avr\avr8-gnu-toolchain-win32_x86
#AVRDUDE = C:\Appz\Avr\avrdude.exe
#RM = del
## See USB-SERIAL CH340 port in Device Manager
#USBPORT = COM3
```
6. In Visual Studio Code, open a new terminal in menu **Terminal > New Terminal** and change working directory to `Examples/blink`.
```bash
cd Examples/blink
ls
```
All processes are done with the help of the `Makefile` script. The following commands are available for compilation and programming:
```bash
make all
make flash
make size
make list
make clean
```
7. To create a new project, make a new directory within the `Labs` folder and copy the three files `main.c`, `Makefile`, and `README.md` from the `Examples/blink` project to `Labs/new-project-folder`.
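A minimal shell sketch of this step (run from the repository root; the target folder name is only an example):
```bash
# "new-project-folder" is a placeholder name; pick your own project name
mkdir -p Labs/new-project-folder
cp Examples/blink/main.c Examples/blink/Makefile Examples/blink/README.md Labs/new-project-folder/
```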
> If your Arduino board (or clone) does not contain any bootloader, follow instructions at [Instructables](https://www.instructables.com/id/How-to-fix-bad-Chinese-Arduino-clones/) or [Arduino webpages](https://www.arduino.cc/en/Tutorial/ArduinoISP).
>
> Install **AVR Support** extension in Visual Studio Code for AVR assembly language support.
>
#### Tested on operating systems
**Name** | **Version** | **Date (YYYY-MM-DD)** | **Note** |
:--------- | :------------------------- | :-------------------: | :------------------ |
Linux Mint | 20.1, Ulyssa | 2021-06-28 | Office |
Linux Mint | 20.1, Ulyssa | 2021-06-24 | Laptop |
Ubuntu | 20.04.1 LTS, Focal Fossa | 2020-12-22 | Student, VirtualBox |
Ubuntu | 20.04.1 LTS, Focal Fossa | 2020-12-10 | Student, Laptop |
Linux Mint | 18.3, Sylvia | 2019-06-13 | Laptop |
Linux Mint | 18.2, Sonya | 2019-05-17 | Lab SC 6.61 |
Ubuntu | 18.04.1 LTS, Bionic Beaver | 2019-05-15 | Office |
Ubuntu | 16.04, Xenial Xerus | 2018-09-15 | Office |
```bash
# FYI: How to check OS version in Linux
cat /etc/os-release
```
| 44.367089 | 436 | 0.62311 | eng_Latn | 0.453536 |
0ca8b4baa6e7dceef8060ead3dfb0d14ae566e7f | 510 | md | Markdown | archives/2022-03-26.md | erbanku/v2ex-hot-hub | 746387344a12bbb5265a65511af95f9e3eddd6c1 | [
"MIT"
] | null | null | null | archives/2022-03-26.md | erbanku/v2ex-hot-hub | 746387344a12bbb5265a65511af95f9e3eddd6c1 | [
"MIT"
] | null | null | null | archives/2022-03-26.md | erbanku/v2ex-hot-hub | 746387344a12bbb5265a65511af95f9e3eddd6c1 | [
"MIT"
] | null | null | null |
# v2ex trending topics
`Last updated: 2022-03-26 11:19:05 +0800`
1. [Realized that a 64GB iPhone / iPad is enough](https://www.v2ex.com/t/842826)
1. [How to install WeChat on Ubuntu 20.04](https://www.v2ex.com/t/842818)
1. [My heart races when playing games](https://www.v2ex.com/t/842861)
1. [Has iOS ColorfulClouds Weather given up on free users?](https://www.v2ex.com/t/842823)
1. [Offer final round!](https://www.v2ex.com/t/842869)
1. [Both are MySQL 8, but I just can't figure out why this happens; hoping someone can help me analyze it](https://www.v2ex.com/t/842853)
1. [A question: why do people sell their M1 Air laptops?](https://www.v2ex.com/t/842841)
1. [If you use the Insert key a lot, think twice before choosing a Lenovo/Xiaoxin laptop](https://www.v2ex.com/t/842808)
| 39.230769 | 74 | 0.696078 | yue_Hant | 0.788207 |
0ca8d64f4fc6f6d4d2ce2ec38c0bee2a39c70770 | 447 | md | Markdown | README.md | Elycin/Larahacks | 1559d1098f3ab99643c6a929577066a005ef29e9 | [
"MIT"
] | null | null | null | README.md | Elycin/Larahacks | 1559d1098f3ab99643c6a929577066a005ef29e9 | [
"MIT"
] | null | null | null | README.md | Elycin/Larahacks | 1559d1098f3ab99643c6a929577066a005ef29e9 | [
"MIT"
] | null | null | null |
# Larahacks
A bunch of hacks and tweaks that you can use in your Laravel project.
## Models
### Traits
You can use a trait by adding `use {TraitName};` inside your model class.
- [Write Through Cache](https://github.com/Elycin/Larahacks/blob/main/Traits/WriteThroughCache.php)
A write-through cache is extremely reliable in situations where you depend heavily on the cache. It requires that you have a taggable cache store.
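A minimal sketch of pulling the trait into an Eloquent model; the trait's namespace here is an assumption, so match it to wherever you place the copied file:
```php
<?php

namespace App\Models;

use App\Traits\WriteThroughCache; // assumed location of the copied trait
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    // Adds the write-through caching behaviour to this model
    use WriteThroughCache;
}
```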
| 34.384615 | 157 | 0.771812 | eng_Latn | 0.997758 |
0ca9191f9cc1f72f0db6f13a30c9487cbca2ecc7 | 714 | md | Markdown | docs/components/form/rating.md | CareyToboo/amis | b43445931614a42462c2d2add173ad58a0a5ec30 | [
"Apache-2.0"
] | 2 | 2021-04-10T10:20:50.000Z | 2021-04-10T10:26:59.000Z | docs/components/form/rating.md | CareyToboo/amis | b43445931614a42462c2d2add173ad58a0a5ec30 | [
"Apache-2.0"
] | null | null | null | docs/components/form/rating.md | CareyToboo/amis | b43445931614a42462c2d2add173ad58a0a5ec30 | [
"Apache-2.0"
] | 1 | 2021-04-10T10:27:10.000Z | 2021-04-10T10:27:10.000Z |
---
title: Rating
description:
type: 0
group: null
menuName: Rating
icon:
order: 37
---
## Basic usage
```schema:height="400" scope="body"
{
  "type": "form",
  "api": "https://houtai.baidu.com/api/mock2/form/saveForm",
  "controls": [
    {
      "type": "rating",
      "name": "rating",
      "label": "Rating"
    }
  ]
}
```
## Properties
When used as a selector form item, besides the options in the [common form item properties table](./formitem#%E5%B1%9E%E6%80%A7%E8%A1%A8), the following options are also supported.
| Property | Type | Default | Description |
| -------- | --------- | ------- | ------------------ |
| half | `boolean` | `false` | whether half-star selection is allowed |
| count | `number` | `5` | total number of stars available |
| readOnly | `boolean` | `false` | read only |
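For example, a rating control combining these options could be configured as follows (an illustrative snippet, not taken from the docs above):
```json
{
  "type": "rating",
  "name": "score",
  "label": "Rating",
  "half": true,
  "count": 10,
  "readOnly": false
}
```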
| 17.414634 | 84 | 0.45098 | yue_Hant | 0.263985 |
0ca9e4db11b9810d4a8b60973692a55f159a8914 | 2,571 | md | Markdown | docker/README.md | doytsujin/trow | 9925d57dae8095a80c99e997e9470ed0be13c425 | [
"Apache-2.0"
] | null | null | null | docker/README.md | doytsujin/trow | 9925d57dae8095a80c99e997e9470ed0be13c425 | [
"Apache-2.0"
] | null | null | null | docker/README.md | doytsujin/trow | 9925d57dae8095a80c99e997e9470ed0be13c425 | [
"Apache-2.0"
] | null | null | null |
# Building Trow
The easiest way to build Trow is via the Dockerfile. From this directory, either run `build.sh` or run
something similar to the following:
```
docker build -f Dockerfile -t trow ..
```
Note that the build context needs to be the root directory of the project (*not* the directory with
the Dockerfile).
To run tests, use the `build.sh` script or `Dockerfile.test` image (tests will run as part of the build).
Once issues related to TLS libraries have been resolved, a minimal build based on a scratch image
will be added.
## Multiplatform Builds
There are several ways to produce multiplatform builds with Docker:
1. Build directly on the target hardware.
2. Use Docker multiplatform support e.g. `--platform` argument available with buildx to produce
images for other platforms. This uses QEMU internally to emulate the target platform. In
practice, I hit issues with this solution, seemingly because of bugs in QEMU and interactions
with multi-stage builds.
3. Use Rust cross-compilation to produce a binary for the target platform and copy into a base
image for the target platform. This requires a bit more configuration, but does work. When
targeting a low-powered platform (e.g. Raspberry Pi), this option may be considerably faster
than building directly on the hardware or using emulation.
Our Dockerfile uses option 3 (with Docker multiplatform support to assemble the final image). Assuming
you're running on amd64, you can run the following:
```
docker buildx build --pull --load -t trow:armv7 -f Dockerfile --platform linux/arm/v7 ../
```
You can build a multi-platform image (or rather manifest pointing to multiple images) with:
```
docker buildx build --pull --load -t trow:armv7 -f Dockerfile --platform linux/arm/v7,linux/arm64,linux/amd64 ../
```
But be aware that you can't load the result into a local Docker instance as it doesn't
currently understand multi-platform manifests.
All of this assumes you have a recent version of Docker with buildkit installed.
Note that `--pull` avoids an issue whereby Docker can use the wrong base image and `--load` puts the
image into the host Docker image cache.
If you get an error about an unsupported platform, you may need to install binfmt handlers. This can
be done for common platforms with `docker run --privileged --rm
docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64` (also see [qus](https://github.com/dbhi/qus)
for an alternative approach and explanation of what is happening here). Restart docker or create a
new builder instance after doing this.
| 43.576271 | 113 | 0.770517 | eng_Latn | 0.997911 |
0ca9fc6d6caa3e4dbbc40ccf32f66003a2658d31 | 3,310 | md | Markdown | README.md | Bhaskers-Blu-Org2/GuardedFabricTools | 76abbdaf396d0cd87d105cfa35a6241475372272 | [
"MIT"
] | 6 | 2018-01-24T11:59:27.000Z | 2019-04-07T02:41:11.000Z | README.md | Bhaskers-Blu-Org2/GuardedFabricTools | 76abbdaf396d0cd87d105cfa35a6241475372272 | [
"MIT"
] | 3 | 2019-09-30T17:11:02.000Z | 2020-06-23T14:17:53.000Z | README.md | Microsoft/GuardedFabricTools | 76abbdaf396d0cd87d105cfa35a6241475372272 | [
"MIT"
] | 5 | 2020-04-10T00:52:36.000Z | 2021-09-09T01:31:48.000Z |
# Guarded Fabric Tools
A PowerShell module containing tools to make deploying shielded virtual machines and managing a guarded fabric easier.
Included tools:
- **New-ShieldedVM** helps you deploy a shielded VM from PowerShell using a template disk and shielding data file. This function is intended for use on a guarded host.
- **ConvertTo-ShieldedVM** allows you to quickly add a virtual TPM and security policy to an existing VM. This function is intended for use on a guarded host.
- **New-ShieldingDataAnswerFile** generates answer files (also called unattend files) that automate configuration of Windows or Linux in a shielded VM. These answer files are compliant with System Center Virtual Machine Manager and `New-ShieldedVM`. This function is intended for use on the machine where you are preparing a shielding data file.
- **Get-HgsAttestationReport** queries the event log on an HGS server for information about recent attestation attempts to help you understand which hosts have tried attesting and whether or not they passed. This function is intended for use on an HGS server. [Additional documentation](./AttestationReport/Usage.md)
- **Add-AccessRule** and its accompanying extensions to the X509Certificate2 class in PowerShell allow you to manage the access control list (ACL) on certificate private keys through PowerShell. This function is intended for use on an HGS server when granting the group managed service account access to use the HGS encryption and signing keys. [Additional documentation](./CertificateManagement/Usage.md)
Check out the [official documentation](https://aka.ms/ShieldedVMs) for more information about shielded virtual machines in Windows Server.
## Installing
To use the Guarded Fabric Tools in a production environment, download and install the digitally signed module from the PowerShell Gallery. See [Guarded Fabric Tools on the PowerShell Gallery](https://www.powershellgallery.com/packages/GuardedFabricTools/).
Run the following command in PowerShell to install the module.
```powershell
Install-Module -Name GuardedFabricTools
```
If the computer where you're installing the module does not have internet connectivity, use [Save-Module](https://docs.microsoft.com/en-us/powershell/module/PowershellGet/Save-Module) to download the files and copy them manually to `C:\Program Files\WindowsPowerShell\Modules` on the target machine.
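Once the module is installed, standard PowerShell discovery commands work for exploring it (a generic sketch; each cmdlet's parameters are documented in the module's own help):
```powershell
# List the commands the module exposes and read the help for one of them
Import-Module GuardedFabricTools
Get-Command -Module GuardedFabricTools
Get-Help New-ShieldedVM -Detailed
```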
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
| 84.871795 | 405 | 0.807251 | eng_Latn | 0.994476 |
0caaee7d59e8ba370aa32b5789cfd30d0c3c8f9b | 3,642 | md | Markdown | mdop/dart-v7/planning-how-to-save-and-deploy-the-dart-70-recovery-image.md | pahuijbr/windows-itpro-docs | 81c52b83f06e20aea1fdb3c47500c3ebd2a7a120 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-04T12:48:06.000Z | 2021-08-04T12:48:06.000Z | mdop/dart-v7/planning-how-to-save-and-deploy-the-dart-70-recovery-image.md | heatherpoulsen/windows-itpro-docs | fe1a16acde81ae5a0f24a63dc6ef94b5a2a38c63 | [
"CC-BY-3.0"
] | 1 | 2021-02-01T22:14:31.000Z | 2021-02-01T23:02:31.000Z | mdop/dart-v7/planning-how-to-save-and-deploy-the-dart-70-recovery-image.md | heatherpoulsen/windows-itpro-docs | fe1a16acde81ae5a0f24a63dc6ef94b5a2a38c63 | [
"CC-BY-3.0"
] | 1 | 2020-07-13T22:27:08.000Z | 2020-07-13T22:27:08.000Z |
---
title: Planning How to Save and Deploy the DaRT 7.0 Recovery Image
description: Planning How to Save and Deploy the DaRT 7.0 Recovery Image
author: jamiejdt
ms.assetid: d96e9363-6186-4fc3-9b83-ba15ed9694a5
ms.pagetype: mdop
ms.mktglfcycl: support
ms.sitesec: library
ms.prod: w7
---
# Planning How to Save and Deploy the DaRT 7.0 Recovery Image
Use the information in this section when you plan for saving and deploying the Microsoft Diagnostics and Recovery Toolset (DaRT) 7 recovery image.
## Planning How to Save and Deploy the DaRT Recovery Image
You can save and deploy the DaRT recovery image by using the following methods. When you are determining the method that you will use, consider the advantages and disadvantages of each. Also, consider how you want to use DaRT in your enterprise.
**Note**
You might want to use more than one method in your organization. For example, you can boot into DaRT from a remote partition for most situations and have a USB flash drive available in case the end-user computer cannot connect to the network.
The following table shows some advantages and disadvantages of each method of using DaRT in your organization.
<table>
<colgroup>
<col width="33%" />
<col width="33%" />
<col width="33%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Method to Boot into DaRT</th>
<th align="left">Advantages</th>
<th align="left">Disadvantages</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>From a CD or DVD</p></td>
<td align="left"><p>Supports scenarios in which the master boot record (MBR) is corrupted and you cannot access the hard disk. Also supports cases in which there is no network connection.</p>
<p>This is most familiar to users of earlier versions of DaRT, and a CD or DVD can be burned directly from the <strong>DaRT Recovery Image Wizard</strong>.</p></td>
<td align="left"><p>Requires that someone with access to the CD or DVD is physically at the end-user computer to boot into DaRT.</p></td>
</tr>
<tr class="even">
<td align="left"><p>From a USB flash drive (UFD)</p></td>
<td align="left"><p>Provides same advantages as booting from a CD or DVD and also provides support to computers that have no CD or DVD drive.</p></td>
<td align="left"><p>Requires you to format the UFD before you can use it to boot into DaRT. Also requires that someone with access to the UFD is physically at the end-user computer to boot into DaRT.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>From a remote (network) partition</p></td>
<td align="left"><p>Lets you boot into DaRT without needing a CD, DVD, or UFD. Also allows for easy upgrades of DaRT because there is only one file location to update.</p></td>
<td align="left"><p>Does not work if the end-user computer is not connected to the network.</p>
<p>Widely available to end users and might require additional security considerations when you are creating the recovery image.</p></td>
</tr>
<tr class="even">
<td align="left"><p>From a recovery partition</p></td>
<td align="left"><p>Lets you boot into DaRT without needing a CD, DVD, or UFD that includes instances in which there is no network connectivity.</p>
<p>Also, can be implemented and managed as part of your standard Windows image process by using automated distribution tools, such as System Center Configuration Manager.</p></td>
<td align="left"><p>When updating DaRT, requires you to update all computers in your enterprise instead of just one partition (on the network) or device (CD, DVD, or UFD).</p></td>
</tr>
</tbody>
</table>
## Related topics
[Planning to Deploy DaRT 7.0](planning-to-deploy-dart-70.md)
| 42.847059 | 245 | 0.7419 | eng_Latn | 0.996037 |
0cab19a8eba05bb099db3ffe4753496e7c044894 | 907 | md | Markdown | _posts/2018-11-08-python-Goal_of_Python-03.md | TheFrancisHe/TheFrancisHe.github.io | 984b359eba57fddb524ba41a162574ba59d4dbc2 | [
"MIT"
] | null | null | null | _posts/2018-11-08-python-Goal_of_Python-03.md | TheFrancisHe/TheFrancisHe.github.io | 984b359eba57fddb524ba41a162574ba59d4dbc2 | [
"MIT"
] | null | null | null | _posts/2018-11-08-python-Goal_of_Python-03.md | TheFrancisHe/TheFrancisHe.github.io | 984b359eba57fddb524ba41a162574ba59d4dbc2 | [
"MIT"
] | null | null | null |
---
layout: post
title: Goal of Python
subtitle: Python, a developer's good companion
date: 2018-11-08
author: HD
catalog: true
tags:
- Python
---
## Main text
*Filling in some existing concepts*
**1. As easy to understand as plain English**
There is a common situation:
a developer builds a feature in some language, and after a while even the developer can no longer read the code they wrote.
**2. One way to do it, and preferably only one way**
Implement the feature; no frills, no "showing off" in the code.
**3. "Life is short, I use Python"**
Less code ==> more problems solved
**4. OO: object-oriented**
Utility code can be taken and reused directly; that reusability is OO at work.
Roughly speaking, procedural languages are weaker at reuse.
**5. Extensibility**
Python combines well with C and C++.
---
*Python programs*
**Demonstrating an error (in contrast to a compiled language):**
```python
# 01-HelloPython.py
print("Hello Python");
print("Hello World");
prit("Hello error");
```
**output:**
```console
[dba@bda 认识Python]$ python 01-HelloPython.py
Hello Python
Hello World // the first two lines still produced output: an interpreted language interprets one statement, then executes it
Traceback (most recent call last):
File "01-HelloPython.py", line 3, in <module>
prit("Hello error");
NameError: name 'prit' is not defined
[dba@bda 认识Python]$
```
> Each line of code performs one action: a good habit when writing Python.
> The strict syntax keeps the resulting code neat and concise.
| 10.670588 | 47 | 0.652701 | yue_Hant | 0.542079 |
0cac1f572bfcc396b591ad1337c0518a3c826b42 | 1,947 | md | Markdown | clients/csharp/docs/RevisionHistoryApi.md | Soluto/tweek-openapi-clients | feee32006743ea4bb815f2608bd95950439388c3 | [
"Apache-2.0"
] | null | null | null | clients/csharp/docs/RevisionHistoryApi.md | Soluto/tweek-openapi-clients | feee32006743ea4bb815f2608bd95950439388c3 | [
"Apache-2.0"
] | null | null | null | clients/csharp/docs/RevisionHistoryApi.md | Soluto/tweek-openapi-clients | feee32006743ea4bb815f2608bd95950439388c3 | [
"Apache-2.0"
] | null | null | null |
# Org.OpenAPITools.Api.RevisionHistoryApi
All URIs are relative to *http://localhost/api/v2*
Method | HTTP request | Description
------------- | ------------- | -------------
[**GetRevisionHistory**](RevisionHistoryApi.md#getrevisionhistory) | **GET** /revision-history |
<a name="getrevisionhistory"></a>
# **GetRevisionHistory**
> List<Object> GetRevisionHistory (string keyPath, string since)
Get Revision History
### Example
```csharp
using System;
using System.Diagnostics;
using Org.OpenAPITools.Api;
using Org.OpenAPITools.Client;
using Org.OpenAPITools.Model;

namespace Example
{
    public class GetRevisionHistoryExample
    {
        public void main()
        {
            // Configure HTTP basic authorization: bearerAuth
            Configuration.Default.Username = "YOUR_USERNAME";
            Configuration.Default.Password = "YOUR_PASSWORD";

            var apiInstance = new RevisionHistoryApi();
            var keyPath = keyPath_example; // string |
            var since = since_example; // string |

            try
            {
                List<Object> result = apiInstance.GetRevisionHistory(keyPath, since);
                Debug.WriteLine(result);
            }
            catch (Exception e)
            {
                Debug.Print("Exception when calling RevisionHistoryApi.GetRevisionHistory: " + e.Message );
            }
        }
    }
}
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**keyPath** | **string**| |
**since** | **string**| |
### Return type
**List<Object>**
### Authorization
[bearerAuth](../README.md#bearerAuth)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
| 25.618421 | 180 | 0.598356 | yue_Hant | 0.618772 |
0cac7995f0fc53b875848e33da19c99784dd6829 | 149 | md | Markdown | jitsi/installer/buster/README.md | emrahcom/emrah-tools | 6b1460464240970a4d94ed1e4d0c45c235b93759 | [
"Apache-2.0"
] | 14 | 2020-11-23T14:23:55.000Z | 2022-03-06T09:29:09.000Z | jitsi/installer/buster/README.md | emrahcom/emrah-tools | 6b1460464240970a4d94ed1e4d0c45c235b93759 | [
"Apache-2.0"
] | 3 | 2021-02-09T18:17:45.000Z | 2021-04-12T15:00:06.000Z | jitsi/installer/buster/README.md | emrahcom/emrah-tools | 6b1460464240970a4d94ed1e4d0c45c235b93759 | [
"Apache-2.0"
] | 6 | 2021-02-02T10:47:21.000Z | 2021-12-16T03:26:13.000Z |
# jitsi-buster-installer
This repo has been moved to
[jitsi-contrib / installers](https://github.com/jitsi-contrib/installers/tree/main/jitsi-base)
| 29.8 | 94 | 0.785235 | eng_Latn | 0.618461 |
0cad8b5da86ad43e591c08271bd47df94f41f184 | 845 | md | Markdown | apps/todosnavigators/JS-HTML5-localstorage-CRUD-master/README.md | ribafs/mobile | 518f8bf5b9a8b62d639a6c8e034f78ee925fac9f | [
"MIT"
] | 1 | 2021-02-10T03:10:21.000Z | 2021-02-10T03:10:21.000Z | apps/todosnavigators/JS-HTML5-localstorage-CRUD-master/README.md | ribafs/mobile | 518f8bf5b9a8b62d639a6c8e034f78ee925fac9f | [
"MIT"
] | null | null | null | apps/todosnavigators/JS-HTML5-localstorage-CRUD-master/README.md | ribafs/mobile | 518f8bf5b9a8b62d639a6c8e034f78ee925fac9f | [
"MIT"
] | null | null | null |
# CRUD with JS and HTML5 localstorage
It's a basic CRUD (create, read, update, delete) application built with ES5, where data is saved to HTML5 localStorage. It's a practice project for learning JS DOM and HTML5.
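A minimal sketch of the localStorage pattern such an app relies on (the storage key and helper names are illustrative, not taken from this project):
```javascript
// Tiny localStorage CRUD helpers; "items" is just an example key
var KEY = "items";

function readAll() { return JSON.parse(localStorage.getItem(KEY)) || []; }
function saveAll(items) { localStorage.setItem(KEY, JSON.stringify(items)); }

function createItem(item) { var items = readAll(); items.push(item); saveAll(items); }
function updateItem(index, item) { var items = readAll(); items[index] = item; saveAll(items); }
function deleteItem(index) { var items = readAll(); items.splice(index, 1); saveAll(items); }
```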
## Visit Site
https://sudiptochy.github.io/JS-HTML5-localstorage-CRUD/
## Getting Started
Just clone or download this repository as a zip and run it on your local server.
### Prerequisites
What things you need to install the software and how to install them
```
HTML5, CSS3, JS(ES6), Local Server
```
## Built With
* [ES6](https://developer.mozilla.org/bm/docs/Web/JavaScript)
* [HTML5](https://www.w3schools.com/html/html5_intro.asp)
* [CSS3](https://www.w3schools.com/css/default.asp)
## Features
* You can Create, Read, Update and Delete Data
* You can Search any data
* Ascending and Descending Sorting of Data
| 27.258065 | 157 | 0.746746 | eng_Latn | 0.913194 |
0cad98088a6018ed622329aa0fc7c8897561255a | 18 | md | Markdown | content/p1.md | bep/hugo-alpine-test | 7680792c2f9dc2c0acdd1af696370243fcd43946 | [
"MIT"
] | 13 | 2020-01-27T16:31:23.000Z | 2022-01-28T13:49:28.000Z | content/p1.md | bep/hugo-alpine-test | 7680792c2f9dc2c0acdd1af696370243fcd43946 | [
"MIT"
] | 2 | 2020-05-05T13:16:27.000Z | 2020-05-23T23:33:05.000Z | content/p1.md | bep/hugo-alpine-test | 7680792c2f9dc2c0acdd1af696370243fcd43946 | [
"MIT"
] | 2 | 2021-11-26T13:31:10.000Z | 2022-01-23T09:51:13.000Z |
---
title: P1
---
| 4.5 | 9 | 0.388889 | eng_Latn | 0.82527 |
0cad9fde410bec89f723b119898083c0f7647962 | 75 | md | Markdown | packages/components/spin/demo/Icon.md | 15051107253/idux | b5a95457edefb2de6a2d2ab35e85a458d0288112 | [
"MIT"
] | 170 | 2021-08-22T15:33:08.000Z | 2022-03-31T03:58:00.000Z | packages/components/spin/demo/Icon.md | Tyh2001/idux | 152f9782acbec9619fdc1e02cb91894aacfb44e4 | [
"MIT"
] | 435 | 2021-08-07T06:42:30.000Z | 2022-03-31T07:51:52.000Z | packages/components/spin/demo/Icon.md | Tyh2001/idux | 152f9782acbec9619fdc1e02cb91894aacfb44e4 | [
"MIT"
] | 42 | 2021-08-09T14:36:56.000Z | 2022-03-24T12:03:30.000Z |
---
order: 2
title:
zh: 图标
en: icon
---
## zh
图标名称
## en
icon name
| 5 | 10 | 0.493333 | nld_Latn | 0.661249 |
0cadb6576ec57607cc52a91989573c51a3df469e | 6,908 | md | Markdown | docs/containers/includes/vs-2017/container-tools.md | mhartkorn/visualstudio-docs | 506d078d467def6509f22fc8ba522cdae1917a98 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/containers/includes/vs-2017/container-tools.md | mhartkorn/visualstudio-docs | 506d078d467def6509f22fc8ba522cdae1917a98 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/containers/includes/vs-2017/container-tools.md | mhartkorn/visualstudio-docs | 506d078d467def6509f22fc8ba522cdae1917a98 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Visual Studio Container Tools with ASP.NET Core
author: ghogen
description: Learn how to use Visual Studio 2017 tooling and Docker for Windows
ms.author: ghogen
ms.date: 02/01/2019
ms.technology: vs-azure
ms.topic: include
---
With Visual Studio, you can easily build, debug, and run containerized ASP.NET Core apps and publish them to Azure Container Registry, Docker Hub, Azure App Service, or your own container registry. In this article, we'll publish to Container Registry.
## Prerequisites
* [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows)
* [Visual Studio 2017](https://visualstudio.microsoft.com/vs/older-downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=vs+2017+download) with the **Web Development**, **Azure Tools** workload, and/or **.NET Core cross-platform development** workload installed
* To publish to Azure Container Registry, an Azure subscription. [Sign up for a free trial](https://azure.microsoft.com/free/dotnet/).
## Installation and setup
For Docker installation, first review the information at [Docker Desktop for Windows: What to know before you install](https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install). Next, install [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows).
## Add a project to a Docker container
1. From the Visual Studio menu, select **File > New > Project**.
1. Under the **Templates** section of the **New Project** dialog box, select **Visual C# > Web**.
1. Select **ASP.NET Core Web Application** or if you want to use the .NET Framework instead of .NET Core, select **ASP.NET Web Application**.
1. Give your new application a name (or take the default) and select **OK**.
1. Select **Web Application**.
1. Check the **Enable Docker Support** checkbox.

The screenshot shows .NET Core; if you're using .NET Framework, it looks a bit different.
1. Select the type of container you want (Windows or Linux) and click **OK**.
## Dockerfile overview
A *Dockerfile*, the recipe for creating a final Docker image, is created in the project. Refer to [Dockerfile reference](https://docs.docker.com/engine/reference/builder/) for an understanding of the commands within it:
```
FROM mcr.microsoft.com/dotnet/aspnet:2.1 AS base
WORKDIR /app
EXPOSE 59518
EXPOSE 44364
FROM mcr.microsoft.com/dotnet/sdk:2.1 AS build
WORKDIR /src
COPY HelloDockerTools/HelloDockerTools.csproj HelloDockerTools/
RUN dotnet restore HelloDockerTools/HelloDockerTools.csproj
COPY . .
WORKDIR /src/HelloDockerTools
RUN dotnet build HelloDockerTools.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish HelloDockerTools.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "HelloDockerTools.dll"]
```
The preceding *Dockerfile* is based on the [microsoft/aspnetcore](https://hub.docker.com/r/microsoft/aspnetcore/) image, and includes instructions for modifying the base image by building your project and adding it to the container. If you're using the .NET Framework, the base image will be different.
When the new project dialog's **Configure for HTTPS** check box is checked, the *Dockerfile* exposes two ports. One port is used for HTTP traffic; the other port is used for HTTPS. If the check box isn't checked, a single port (80) is exposed for HTTP traffic.
## Debug
Select **Docker** from the debug drop-down in the toolbar, and start debugging the app. You might see a message with a prompt about trusting a certificate; choose to trust the certificate to continue.
The **Output** window shows what actions are taking place.
Open the **Package Manager Console** (PMC) from the menu **Tools** > **NuGet Package Manager** > **Package Manager Console**.
The resulting Docker image of the app is tagged as *dev*. The image is based on the *2.1-aspnetcore-runtime* tag of the *microsoft/dotnet* base image. Run the `docker images` command in the **Package Manager Console** (PMC) window. The images on the machine are displayed:
```console
REPOSITORY TAG IMAGE ID CREATED SIZE
hellodockertools dev d72ce0f1dfe7 30 seconds ago 255MB
microsoft/dotnet 2.1-aspnetcore-runtime fcc3887985bb 6 days ago 255MB
```
> [!NOTE]
> The **dev** image does not contain the app binaries and other content, as **Debug** configurations use volume mounting to provide the iterative edit and debug experience. To create a production image containing all contents, use the **Release** configuration.
Run the `docker ps` command in PMC. Notice the app is running using the container:
```console
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
baf9a678c88d hellodockertools:dev "C:\\remote_debugge..." 21 seconds ago Up 19 seconds 0.0.0.0:37630->80/tcp dockercompose4642749010770307127_hellodockertools_1
```
## Publish Docker images
Once the develop and debug cycle of the app is completed, you can create a production image of the app.
1. Change the configuration drop-down to **Release** and build the app.
1. Right-click your project in **Solution Explorer** and choose **Publish**.
1. On the publish target dialog, select the **Container Registry** tab.
1. Choose **Create New Azure Container Registry** and click **Publish**.
1. Fill in your desired values in the **Create a new Azure Container Registry**.
| Setting | Suggested value | Description |
| ------------ | ------- | -------------------------------------------------- |
| **DNS Prefix** | Globally unique name | Name that uniquely identifies your container registry. |
| **Subscription** | Choose your subscription | The Azure subscription to use. |
| **[Resource Group](/azure/azure-resource-manager/resource-group-overview)** | myResourceGroup | Name of the resource group in which to create your container registry. Choose **New** to create a new resource group.|
| **[SKU](/azure/container-registry/container-registry-skus)** | Standard | Service tier of the container registry |
| **Registry Location** | A location close to you | Choose a Location in a [region](https://azure.microsoft.com/regions/) near you or near other services that will use your container registry. |
![Visual Studio's create Azure Container Registry dialog][0]
1. Click **Create**
## Next steps
You can now pull the container from the registry to any host capable of running Docker images, for example [Azure Container Instances](/azure/container-instances/container-instances-tutorial-deploy-app).
[0]:../../media/hosting-web-apps-in-docker/vs-acr-provisioning-dialog.png
| 56.622951 | 305 | 0.729155 | eng_Latn | 0.911344 |
0cae65009498c142b2b1f64d85e00ef32c00a1d5 | 664 | md | Markdown | docs/devices/philips/7199960ph.md | augustskare/homebridge-z2m | 450427eb9328ecb4720744e05cc42235dc66b65a | [
"Apache-2.0"
] | 139 | 2020-06-28T00:14:19.000Z | 2022-03-30T05:11:11.000Z | docs/devices/philips/7199960ph.md | augustskare/homebridge-z2m | 450427eb9328ecb4720744e05cc42235dc66b65a | [
"Apache-2.0"
] | 380 | 2020-06-30T23:11:31.000Z | 2022-03-31T18:32:23.000Z | docs/devices/philips/7199960ph.md | augustskare/homebridge-z2m | 450427eb9328ecb4720744e05cc42235dc66b65a | [
"Apache-2.0"
] | 31 | 2020-07-12T06:29:50.000Z | 2022-03-13T02:30:53.000Z | ---
title: "Philips 7199960PH Homebridge/HomeKit integration"
description: "Add HomeKit support to your Philips 7199960PH, using Homebridge, Zigbee2MQTT and homebridge-z2m."
---
<!---
This file has been GENERATED using src/docgen/docgen.ts
DO NOT EDIT THIS FILE MANUALLY!
-->
# Philips 7199960PH
> Hue Iris
# Services and characteristics
The following HomeKit Services and Characteristics are exposed by
the Philips 7199960PH
* [Lightbulb](../../light.md)
* Brightness
* Hue
* On
* Saturation
# Related
* [Other devices from Philips](../index.md#philips)
* [Zigbee2MQTT documentation for this device](https://www.zigbee2mqtt.io/devices/7199960PH.html) | 25.538462 | 111 | 0.753012 | eng_Latn | 0.724105 |
0cae776101ffbdc1e0ea3bc5ef8e37e449ac20a4 | 1,035 | md | Markdown | README.md | vorasagar7/microservices-frontend-test | df69a0de13095041912b361696aca34ff6ef53cc | [
"MIT"
] | null | null | null | README.md | vorasagar7/microservices-frontend-test | df69a0de13095041912b361696aca34ff6ef53cc | [
"MIT"
] | 15 | 2017-03-07T07:22:52.000Z | 2017-03-07T08:04:21.000Z | README.md | vorasagar7/microservices-frontend-test | df69a0de13095041912b361696aca34ff6ef53cc | [
"MIT"
] | null | null | null |
# Microservice-Catalog-Frontend
[](https://travis-ci.org/p632-sp-2017/microservice-catalog-frontend) [](https://codeclimate.com/github/p632-sp-2017/microservice-catalog-frontend) [](https://codeclimate.com/github/p632-sp-2017/microservice-catalog-frontend/coverage) [](https://codeclimate.com/github/p632-sp-2017/microservice-catalog-frontend)
This is the Microservice Catalog project made under UITS for the P632 course.
Steps to Run:
1) Install Dependencies
```
cd src/main/app
yarn install
```
2) Run Application
```
cd ../../..
mvn spring-boot:run
```
3) Stop Application
```
mvn spring-boot:stop
```
| 47.045455 | 738 | 0.776812 | yue_Hant | 0.285604 |
0cae9cf5ef18f3cea69eb6ed253742192cca374b | 462 | md | Markdown | README.md | herrecito/engine | 42dc0288068ac7d215cc5c96d59299c60ea43a49 | [
"Unlicense"
] | 11 | 2015-07-29T08:07:20.000Z | 2019-07-11T14:20:59.000Z | README.md | herrecito/engine | 42dc0288068ac7d215cc5c96d59299c60ea43a49 | [
"Unlicense"
] | 2 | 2017-05-27T22:00:03.000Z | 2020-12-27T17:48:55.000Z | README.md | herrecito/engine | 42dc0288068ac7d215cc5c96d59299c60ea43a49 | [
"Unlicense"
] | null | null | null | 
# Dependencies
* SDL 2.0
* SDL_image 2.0
* GLEW
# Build
If you want to take it for a spin, build with:
make
Create a map with:
./bin/editor
(Left button to add walls, right button to delete, S to save, L to load)
And run it with:
./bin/engine
You'll need some textures and spritesheets:
* ascii.png
* ceil.png
* floor.png
* pistol.png
* wall.png
You can find mine here: http://imgur.com/a/F2Cnu
| 13.588235 | 72 | 0.675325 | eng_Latn | 0.862529 |
0caf2593e26be0aeaaafee0ca7e4774c235f47e4 | 1,289 | md | Markdown | docs/framework/windows-workflow-foundation/4207-maximumretriesexceededforsqlcommand.md | trubor/docs.ru-ru | 95745f1c3bd3bb4cf7026dc91d786b97e56fcc70 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/4207-maximumretriesexceededforsqlcommand.md | trubor/docs.ru-ru | 95745f1c3bd3bb4cf7026dc91d786b97e56fcc70 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/4207-maximumretriesexceededforsqlcommand.md | trubor/docs.ru-ru | 95745f1c3bd3bb4cf7026dc91d786b97e56fcc70 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-10-31T15:06:56.000Z | 2021-10-31T15:06:56.000Z |
---
description: 'Learn more about: 4207 - MaximumRetriesExceededForSqlCommand'
title: 4207 - MaximumRetriesExceededForSqlCommand
ms.date: 03/30/2017
ms.assetid: 8c8bee26-9ad4-4e01-bd16-0e1fd510fb6b
ms.openlocfilehash: e831da08e37010afaa33f3a52cd7cf7a9b4d713b
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 02/06/2021
ms.locfileid: "99742723"
---
# <a name="4207---maximumretriesexceededforsqlcommand"></a>4207 - MaximumRetriesExceededForSqlCommand
## <a name="properties"></a>Properties
|||
|-|-|
|ID|4207|
|Keywords|Quota, WFInstanceStore|
|Level|Information|
|Channel|Microsoft-Windows-Application Server-Applications/Debug|
## <a name="description"></a>Description
Indicates that the SQL provider has performed the maximum number of retries of a SQL command and will make no further attempts.
## <a name="message"></a>Message
The maximum number of retries of a SQL command has been performed. No further attempts will be made.
## <a name="details"></a>Details
|Data Item Name|Data Item Type|Description|
|--------------------|--------------------|-----------------|
|AppDomain|xs:string|The string returned by AppDomain.CurrentDomain.FriendlyName.|
| 34.837838 | 131 | 0.742436 | rus_Cyrl | 0.228015 |
0caf38767f3ecced3b521c441869e35698b89461 | 835 | md | Markdown | skills/B01I763DSY/README.md | zwang695/alexa-skills-list | 43fb6168a3313f004d02a910d1d8930f42a5fce4 | [
"MIT"
] | null | null | null | skills/B01I763DSY/README.md | zwang695/alexa-skills-list | 43fb6168a3313f004d02a910d1d8930f42a5fce4 | [
"MIT"
] | null | null | null | skills/B01I763DSY/README.md | zwang695/alexa-skills-list | 43fb6168a3313f004d02a910d1d8930f42a5fce4 | [
"MIT"
] | null | null | null |
# [planoFacts](http://alexa.amazon.com/#skills/amzn1.echo-sdk-ams.app.e6acf209-eada-4aaa-bd2e-154d9961dfaa)
 0
To use the planoFacts skill, try saying...
* *Alexa, open plano facts.*
* *Alexa, begin plano facts.*
* *Alexa, start plano facts.*
Get a fact about Plano
***
### Skill Details
* **Invocation Name:** plano facts
* **Category:** null
* **ID:** amzn1.echo-sdk-ams.app.e6acf209-eada-4aaa-bd2e-154d9961dfaa
* **ASIN:** B01I763DSY
* **Author:** secretsquirrel123
* **Release Date:** July 11, 2016 @ 08:27:36
* **In-App Purchasing:** No
| 33.4 | 287 | 0.706587 | kor_Hang | 0.23598 |
0cb07cd1fa064abf12974c97c66ffdb76e9dc453 | 234 | md | Markdown | _posts/1933-09-01-pioneer-james-lummus-writes-about.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1933-09-01-pioneer-james-lummus-writes-about.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null | _posts/1933-09-01-pioneer-james-lummus-writes-about.md | MiamiMaritime/miamimaritime.github.io | d087ae8c104ca00d78813b5a974c154dfd9f3630 | [
"MIT"
] | null | null | null |
---
title: Pioneer James Lummus writes about
tags:
- Sep 1933
---
Pioneer James Lummus writes about life in Miami in 1896.
Newspapers: **Miami Morning News or The Miami Herald**
Page: **8**, Section: **N/A**
| 19.5 | 58 | 0.628205 | eng_Latn | 0.931737 |
0cb0904c762da46609b6c95799cca75b7468ed65 | 3,734 | md | Markdown | README.md | Andrei-Dolgolev/humbug | 5936fc6e2c9dbf35ecf964983a1ef2253e4e085c | [
"Apache-2.0"
] | null | null | null | README.md | Andrei-Dolgolev/humbug | 5936fc6e2c9dbf35ecf964983a1ef2253e4e085c | [
"Apache-2.0"
] | null | null | null | README.md | Andrei-Dolgolev/humbug | 5936fc6e2c9dbf35ecf964983a1ef2253e4e085c | [
"Apache-2.0"
] | null | null | null |
# humbug
Humbug helps you understand what keeps users coming back to your developer tool as well as any
friction they experience.
Humbug lets you collect basic system information and crash reports while respecting your users'
privacy. In addition to getting reports, you get to be [GDPR](https://gdpr-info.eu/)-compliant from
day one.
Humbug is currently available in the following programming languages:
1. [Python](./python)
2. [Go](./go)
3. Javascript (coming soon)
If you would like support for another programming language, please
[create an issue](https://github.com/bugout-dev/humbug/issues/new).
---
## Using Humbug
### Trying it out
First, sign up for an account at https://bugout.dev.
Once you have created your account, go to the [`Account > Teams`](https://bugout.dev/account/teams)
page and create a team:

Once you have created a team, you should see something like this:

Click on the `Usage Reports` button on your new team to set up reporting:

Enter a name for your project:

This should result in a view like this one:

Now, create a new token that you can use for reporting:

Which should get you to a view like this one:

Make special note of the `Journal ID` and the `Token`. You will need them in the next step, where
you will instrument your application to register usage reports with Bugout.
Here are some examples of how to do this in:
1. [Python](./python/README.md#integration)
#### Using the demo journal and token
If you would like to try things out with the demo integration from above, just email
[me](mailto:[email protected]) ([zomglings](https://github.com/zomglings)) with your Bugout
username and I will add you to the demo team.
You can also reach me on the [Bugout.dev community slack](https://join.slack.com/t/bugout-dev/shared_invite/zt-fhepyt87-5XcJLy0iu702SO_hMFKNhQ).
#### From development to production
We recommend generating one token for development and testing and using different tokens for each
version of your production library or application.
### Accessing reports
You can access your Bugout knowledge base at https://bugout.dev, via the Bugout API, or using the
`bugout` command line tool.
Bugout client libraries:
1. [Python](https://pypi.org/project/bugout/)
2. [Go](https://github.com/bugout-dev/bugout-go)
3. [Javascript](https://github.com/bugout-dev/bugout-js)
The `bugout` command line tool can be installed from:
https://github.com/bugout-dev/bugout-go/releases/latest
You can use [`humbug.bash`](https://gist.github.com/zomglings/a82ea32e8533afe62278bb2056e95621)
to download your Humbug reports to your filesystem in an easy to analyze JSON format.
### Getting help
You can get help by:
1. [Creating an issue](https://github.com/bugout-dev/humbug/issues/new)
2. [Asking for help on the Bugout.dev community Slack](https://join.slack.com/t/bugout-dev/shared_invite/zt-fhepyt87-5XcJLy0iu702SO_hMFKNhQ)
3. [Emailing zomglings](mailto:[email protected])
4. [Scheduling a meeting with zomglings](https://calendly.com/neeraj-simiotics/bugout-30)
| 37.34 | 144 | 0.767006 | eng_Latn | 0.950873 |
0cb0faca99c335f26148953b214f8fc08496cc85 | 10,107 | md | Markdown | docs/ssms/scripting/run-the-transact-sql-debugger.md | ysy68251435/sql-docs | 56b963446965f3a4bb0fa1446f49578dbff382e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/scripting/run-the-transact-sql-debugger.md | ysy68251435/sql-docs | 56b963446965f3a4bb0fa1446f49578dbff382e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/scripting/run-the-transact-sql-debugger.md | ysy68251435/sql-docs | 56b963446965f3a4bb0fa1446f49578dbff382e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: "Run the Transact-SQL Debugger | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: sql
ms.technology: scripting
ms.reviewer: ""
ms.topic: conceptual
helpviewer_keywords:
- "Transact-SQL debugger, sysadmin requirement"
- "Transact-SQL debugger, supported versions"
- "Query Editor [Database Engine], right-click menu"
- "debugging [SQL Server], T-SQL debugger"
- "Transact-SQL debugger, Query Editor shortcut menu"
- "Transact-SQL debugger, stopping"
- "Transact-SQL debugger, Debug menu"
- "debugging [SQL Server]"
- "Transact-SQL debugger, Debug toolbar"
- "Transact-SQL debugger, keyboard shortcuts"
- "Transact-SQL debugger, starting"
ms.assetid: 386f6d09-dbec-4dc7-9e8a-cd9a4a50168c
author: markingmyname
ms.author: maghan
manager: jroth
monikerRange: ">=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current"
---
# Run the Transact-SQL Debugger
[!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)]
You can start the [!INCLUDE[tsql](../../includes/tsql-md.md)] debugger after you open a [!INCLUDE[ssDE](../../includes/ssde-md.md)] Query Editor window. Then, you can run your [!INCLUDE[tsql](../../includes/tsql-md.md)] code in debug mode until you stop the debugger. You can set options to customize how the debugger runs.
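As a quick, purely illustrative first exercise, you can paste a small batch such as the following into the Query Editor window and step through it once the debugger is running:
```sql
-- A small batch to practice stepping through in the Transact-SQL debugger
DECLARE @counter int = 0;

WHILE @counter < 5
BEGIN
    SET @counter = @counter + 1;   -- set a breakpoint here and watch @counter change
END;

SELECT @counter AS final_value;
```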
## Starting and Stopping the Debugger
The requirements to start the [!INCLUDE[tsql](../../includes/tsql-md.md)] debugger are as follows:
- If your [!INCLUDE[ssDE](../../includes/ssde-md.md)] Query Editor is connected to an instance of the [!INCLUDE[ssDE](../../includes/ssde-md.md)] on another computer, you must have configured the debugger for remote debugging. For more information, see [Configure firewall rules before running the TSQL Debugger](../../relational-databases/scripting/configure-firewall-rules-before-running-the-tsql-debugger.md).
- [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] must be running under a Windows account that is a member of the sysadmin fixed server role.
- The [!INCLUDE[ssDE](../../includes/ssde-md.md)] Query Editor window must be connected by using either a Windows Authentication or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Authentication login that is a member of the sysadmin fixed server role.
- The [!INCLUDE[ssDE](../../includes/ssde-md.md)] Query Editor window must be connected to an instance of the [!INCLUDE[ssDE](../../includes/ssde-md.md)] from [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] Service Pack 2 (SP2) or later. You cannot run the debugger when the Query Editor window is connected to an instance that is in single-user mode.
We recommend that [!INCLUDE[tsql](../../includes/tsql-md.md)] code be debugged on a test server, not a production server, for the following reasons:
- Debugging is a highly privileged operation. Therefore, only members of the sysadmin fixed server role are allowed to debug in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].
- Debugging sessions often run for long periods of time while you investigate the operations of several [!INCLUDE[tsql](../../includes/tsql-md.md)] statements. Locks, such as update locks, that are acquired by the session might be held for extended periods, until the session is ended or the transaction committed or rolled back.
Starting the [!INCLUDE[tsql](../../includes/tsql-md.md)] debugger puts the Query Editor window into debug mode. When the Query Editor window enters debug mode, the debugger pauses at the first line of code. You can then step through the code, pause the execution on specific [!INCLUDE[tsql](../../includes/tsql-md.md)] statements, and use the debugger windows to view the current execution state. You can start the debugger by either clicking the **Debug** button on the **Query** toolbar or by clicking **Start Debugging** on the **Debug** menu.
The Query Editor window stays in debug mode until either the last statement in the Query Editor window finishes or you stop debug mode. You can stop debug mode and statement execution by using any one of the following methods:
- On the **Debug** menu, click **Stop Debugging**.
- On the **Debug** toolbar, click the **Stop Debugging** button.
- On the **Query** menu, click **Cancel Executing Query**.
- On the **Query** toolbar, click the **Cancel Executing Query** button.
You can also stop debug mode and allow for the remaining [!INCLUDE[tsql](../../includes/tsql-md.md)] statements to finish executing by clicking **Detach All** on the **Debug** menu.
## Controlling the Debugger
You can control how the [!INCLUDE[tsql](../../includes/tsql-md.md)] debugger operates by using the following menu commands, toolbars, and shortcuts:
- The **Debug** menu and the **Debug** toolbar. Both the **Debug** menu and **Debug** toolbar are inactive until the focus is placed in an open Query Editor window. They remain active until the current project is closed.
- The debugger keyboard shortcuts.
- The Query Editor shortcut menu. The shortcut menu is displayed when you right-click a line in a Query Editor window. When the Query Editor window is in debug mode, the shortcut menu displays debugger commands that apply to the selected line or string.
- Menu items and context commands in the windows that are opened by the debugger, such as the **Watch** or **Breakpoints** windows.
The following table shows the debugger menu commands, toolbar buttons, and keyboard shortcuts.
|Debug menu command|Editor shortcut command|Toolbar button|Keyboard shortcut|Action|
|------------------------|-----------------------------|--------------------|-----------------------|------------|
|**Windows/Breakpoints**|Not available|**Breakpoints**|CTRL+ALT+B|Display the **Breakpoints** window in which you can view and manage breakpoints.|
|**Windows/Watch/Watch1**|Not available|**Breakpoints/Watch/Watch1**|CTRL+ALT+W, 1|Display the **Watch1** window.|
|**Windows/Watch/Watch2**|Not available|**Breakpoints/Watch/Watch2**|CTRL+ALT+W, 2|Display the **Watch2** window.|
|**Windows/Watch/Watch3**|Not available|**Breakpoints/Watch/Watch3**|CTRL+ALT+W, 3|Display the **Watch3** window.|
|**Windows/Watch/Watch4**|Not available|**Breakpoints/Watch/Watch4**|CTRL+ALT+W, 4|Display the **Watch4** window.|
|**Windows/Locals**|Not available|**Breakpoints/Locals**|CTRL+ALT+V, L|Display the **Locals** window.|
|**Windows/Call Stack**|Not available|**Breakpoints/Call Stack**|CTRL+ALT+C|Display the **Call Stack** window.|
|**Windows/Threads**|Not available|**Breakpoints/Threads**|CTRL+ALT+H|Display the **Threads** window.|
|**Continue**|Not available|**Continue**|ALT+F5|Run to the next breakpoint. **Continue** is not active until you are focused on a Query Editor window that is in debug mode.|
|**Start Debugging**|Not available|**Start Debugging**|ALT+F5|Put a Query Editor window into debug mode and run to the first breakpoint. If you are focused on a Query Editor window that is in debug mode, **Start Debugging** is replaced by **Continue**.|
|**Break All**|Not available|**Break All**|CTRL+ALT+BREAK|This feature not used by the [!INCLUDE[tsql](../../includes/tsql-md.md)] debugger.|
|**Stop Debugging**|Not available|**Stop Debugging**|SHIFT+F5|Take a Query Editor window out of debug mode and return it to regular mode.|
|**Detach All**|Not available|Not available|Not available|Stops debug mode, but executes the remaining statements in the Query Editor window.|
|**Step Into**|Not available|**Step Into**|F11|Run the next statement, and also open a new Query Editor window in debug mode if the next statement runs a stored procedure, trigger, or function.|
|**Step Over**|Not available|**Step Over**|F10|Same as **Step Into**, except that no functions, stored procedures, or triggers are debugged.|
|**Step Out**|Not available|**Step Out**|SHIFT+F11|Execute the remaining code in a trigger, function, or stored procedure without pausing for any breakpoints. Regular debug mode resumes when control is returned to the code that called the module.|
|Not available|**Run To** Cursor|Not available|CTRL+F10|Execute all code from the last stop location to the current cursor location without stopping at any breakpoints.|
|**QuickWatch**|**QuickWatch**|Not available|CTRL+ALT+Q|Display the **QuickWatch** window.|
|**Toggle Breakpoint**|**Breakpoint/Insert Breakpoint**|Not available|F9|Position a breakpoint on the current or selected [!INCLUDE[tsql](../../includes/tsql-md.md)] statement.|
|Not available|**Breakpoint/Delete Breakpoint**|Not available|Not available|Delete the breakpoint from the selected line.|
|Not available|**Breakpoint/Disable Breakpoint**|Not available|Not available|Disable the breakpoint on the selected line. The breakpoint remains on the line of code, but will not stop execution until it is reenabled.|
|Not available|**Breakpoint/Enable Breakpoint**|Not available|Not available|Enable the breakpoint on the selected line.|
|**Delete All Breakpoints**|Not available|Not available|CTRL+SHIFT+F9|Delete all breakpoints.|
|**Disable All Breakpoints**|Not available|Not available|Not available|Disable all breakpoints.|
|Not available|**Add Watch**|Not available|Not available|Add the selected expression to the **Watch** window.|
## See Also
[Transact-SQL Debugger](../../relational-databases/scripting/transact-sql-debugger.md)
[Step Through Transact-SQL Code](../../relational-databases/scripting/step-through-transact-sql-code.md)
[Transact-SQL Debugger Information](../../relational-databases/scripting/transact-sql-debugger-information.md)
[Database Engine Query Editor (SQL Server Management Studio)](../../relational-databases/scripting/database-engine-query-editor-sql-server-management-studio.md)
[Live Query Statistics](../../relational-databases/performance/live-query-statistics.md)
| 91.054054 | 549 | 0.731374 | eng_Latn | 0.939101 |
0cb11faa86e32d7ce817c685ecf5ff31bcd49ffe | 880 | md | Markdown | docs/components/trainhook/README.md | limberc/HyperGAN | b074e74abf0ed9b81bd52084706e3707a47e0fe2 | [
"MIT"
] | 889 | 2016-08-27T01:37:35.000Z | 2018-10-07T19:47:56.000Z | docs/components/trainhook/README.md | limberc/HyperGAN | b074e74abf0ed9b81bd52084706e3707a47e0fe2 | [
"MIT"
] | 101 | 2016-11-30T03:34:02.000Z | 2018-10-02T13:50:52.000Z | docs/components/trainhook/README.md | limberc/HyperGAN | b074e74abf0ed9b81bd52084706e3707a47e0fe2 | [
"MIT"
] | 145 | 2016-09-27T06:56:24.000Z | 2018-09-25T16:09:28.000Z |
---
description: Train hooks provide training events and loss modification to trainers.
---
# Train Hook
[https://github.com/HyperGAN/HyperGAN/tree/master/hypergan/train\_hooks](https://github.com/HyperGAN/HyperGAN/tree/master/hypergan/train_hooks)
## Access
```python
gan.trainer.train_hooks # => [...]
```
Train hooks are set up and invoked by the trainer.
## Events
Override these methods to change the train loop
```python
before_step(step, feed_dict)
after_step(step, feed_dict)
after_create()
gradients(d_grads, g_grads)
```
### before\_step\(feed\_dict\)
### after\_step\(feed\_dict\)
Executed before/after the step takes place. `feed_dict` is what is being sent to the graph during the training step.
### after\_create\(\)
Ran after the trainer is created.
### gradients\(d\_grads, g\_grads\)
Refines the gradients before they are applied to the optimizer.
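A sketch of a custom hook overriding these events; the base-class import path is an assumption, so check the hypergan/train_hooks folder linked above for the actual class name:
```python
# Sketch only: the exact base-class module and name are assumptions; adjust the
# import to match what lives in hypergan/train_hooks in the repository.
from hypergan.train_hooks.base_train_hook import BaseTrainHook

class LoggingTrainHook(BaseTrainHook):
    def before_step(self, step, feed_dict):
        # Inspect or tweak what is sent to the graph before the training step runs
        print("starting step", step)

    def after_step(self, step, feed_dict):
        print("finished step", step)

    def gradients(self, d_grads, g_grads):
        # Return the gradients unchanged; refine them here if needed
        return d_grads, g_grads
```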
| 20.952381 | 143 | 0.740909 | eng_Latn | 0.97338 |
0cb1338eb164607ba42e5237943cec4dd6538c7f | 5,906 | md | Markdown | resources/website/content/fr/certificates.md | dominicporter/web | 073effa83391c1a77e1317eb4cb39fa0a941ecf9 | [
"Apache-2.0"
] | 1 | 2019-12-31T07:17:55.000Z | 2019-12-31T07:17:55.000Z | resources/website/content/fr/certificates.md | dominicporter/web | 073effa83391c1a77e1317eb4cb39fa0a941ecf9 | [
"Apache-2.0"
] | 8 | 2019-12-04T10:27:59.000Z | 2020-01-05T14:33:26.000Z | resources/website/content/fr/certificates.md | winwisely99/web | 935ee6006644abda2cd8aa3563e86eacad73eba6 | [
"Apache-2.0"
] | null | null | null | ---
title: Chain of Trust
slug: certificates
top_graphic: 5
lastmod: 2019-05-01
---
# Root Certificates
Our roots are kept safely offline. We issue end-entity certificates signed by the intermediates in the next section.
* Active
* [ISRG Root X1 (self-signed)](/certs/isrgrootx1.pem.txt)
We have set up websites to test certificates chaining to our roots.
* ISRG Root X1 valid certificate
* [https://valid-isrgrootx1.letsencrypt.org/](https://valid-isrgrootx1.letsencrypt.org/)
* ISRG Root X1 revoked certificate
* [https://revoked-isrgrootx1.letsencrypt.org/](https://revoked-isrgrootx1.letsencrypt.org/)
* ISRG Root X1 expired certificate
* [https://expired-isrgrootx1.letsencrypt.org/](https://expired-isrgrootx1.letsencrypt.org/)
# Intermediate Certificates
IdenTrust has also cross-signed our intermediates. This allows our end-entity certificates to be accepted by all major browsers while we propagate our own root.
Under normal circumstances, certificates issued by Let's Encrypt will come from "Let's Encrypt Authority X3". The other intermediate, "Let's Encrypt Authority X4", is reserved for disaster recovery and will only be used if we lose the ability to use "Let's Encrypt Authority X3". The X1 and X2 intermediates were our first generation of intermediates. We replaced them with newer intermediates that are more compatible with Windows XP.
* Active
* [Let's Encrypt Authority X3 (Signed by IdenTrust)](/certs/lets-encrypt-x3-cross-signed.pem.txt)
* [Let's Encrypt Authority X3 (Signed by ISRG Root X1)](/certs/letsencryptauthorityx3.pem.txt)
* Backup
* [Let's Encrypt Authority X4 (Signed by IdenTrust)](/certs/lets-encrypt-x4-cross-signed.pem.txt)
* [Let's Encrypt Authority X4 (Signed by ISRG Root X1)](/certs/letsencryptauthorityx4.pem.txt)
* Retired
* [Let's Encrypt Authority X2 (Signed by IdenTrust)](/certs/lets-encrypt-x2-cross-signed.pem.txt)
* [Let's Encrypt Authority X2 (Signed by ISRG Root X1)](/certs/letsencryptauthorityx2.pem.txt)
* [Let's Encrypt Authority X1 (Signed by IdenTrust)](/certs/lets-encrypt-x1-cross-signed.pem.txt)
* [Let's Encrypt Authority X1 (Signed by ISRG Root X1)](/certs/letsencryptauthorityx1.pem.txt)
# Cross Signing
Our intermediate "Let's Encrypt Authority X3" represents a single public/private key pair.
The private key of that pair generates the signature for all end-entity certificates, that is, the certificates we issue for use on your server.
Our intermediate is signed by ISRG Root X1. However, since we are a very new
certificate authority, ISRG Root X1 is not yet trusted in most browsers.
In order to be trusted right away, our intermediate is also cross-signed by
another certificate authority, IdenTrust, whose root is already trusted by
all major browsers. Specifically, IdenTrust signed our intermediate using their
"DST Root CA X3" root certificate (now called "TrustID X3 Root"). [Download "TrustID X3 Root" from identrust.com](https://www.identrust.com/support/downloads) (or, alternatively, you can download a copy here: [.pem](/certs/trustid-x3-root.pem.txt), [.p7b](/certs/trustid-x3-root.p7b)).
This means there are two certificates available that both represent our
intermediate. One is signed by DST Root CA X3, and the other is signed by ISRG
Root X1. The easiest way to tell them apart is to look at their Issuer field.
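As a small illustration (not part of the original page), the sketch below prints the subject and issuer of the two intermediate certificates so they can be told apart. It assumes the two PEM files linked above have been downloaded locally and that a recent version of the Python `cryptography` package is installed.
```python
# Sketch: distinguish the two "Let's Encrypt Authority X3" intermediates by issuer.
from cryptography import x509

for path in ("lets-encrypt-x3-cross-signed.pem", "letsencryptauthorityx3.pem"):
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # Both certificates share the same subject; only the issuer differs
    # (DST Root CA X3 for the cross-signed one, ISRG Root X1 for the other).
    print(path)
    print("  subject:", cert.subject.rfc4514_string())
    print("  issuer: ", cert.issuer.rfc4514_string())
```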
When configuring a web server, the server operator configures not only the
end-entity certificate, but also a list of intermediates to help browsers verify
that the end-entity certificate has a chain of trust leading to a trusted root
certificate. Almost all server operators will choose to serve a chain that includes
the intermediate certificate with the subject "Let's Encrypt Authority X3" and
the issuer "DST Root CA X3." The software recommended by Let's Encrypt, [Certbot](https://certbot.org), makes
this configuration seamless.
The following image visually explains the relationships between our certificates:
<img src="/certs/isrg-keys.png" alt="Diagram of ISRG key relationships">
# OCSP Signing Certificate
This certificate is used to sign OCSP responses for the Let's Encrypt Authority intermediates, so that we don't need to keep the root key online in order to
sign those responses. A copy of this certificate is automatically included in
those OCSP responses, so subscribers don't need to do anything with it.
It is included here for informational purposes only.
* [ISRG Root OCSP X1 (Signed by ISRG Root X1)](/certs/isrg-root-ocsp-x1.pem.txt)
# Certificate Transparency
We are committed to transparency in our operations and in the certificates we
issue. We submit all certificates to [Certificate Transparency logs](https://www.certificate-transparency.org/) as we issue them. You can view all
issued Let's Encrypt certificates via these links:
* [Issued by Let's Encrypt Authority X1](https://crt.sh/?Identity=%25&iCAID=7395)
* [Issued by Let's Encrypt Authority X3](https://crt.sh/?Identity=%25&iCAID=16418)
# More Information
The private keys for the ISRG root CA and the Let's Encrypt intermediate CAs are stored on hardware security modules (HSMs), which provide a high degree of protection against key theft.
All ISRG keys are currently RSA keys. We [plan to generate ECDSA keys]({{< ref "/upcoming-features.md" >}}).
| 62.829787 | 475 | 0.785303 | fra_Latn | 0.970976 |
0cb1639d6f5c0fa0923f35e14b95a34f06dce358 | 1,164 | md | Markdown | README.md | artemeu/DotnetCoreSmtp | 15678910237a69623c7c1066873cc22644aa7d0e | [
"MIT"
] | null | null | null | README.md | artemeu/DotnetCoreSmtp | 15678910237a69623c7c1066873cc22644aa7d0e | [
"MIT"
] | null | null | null | README.md | artemeu/DotnetCoreSmtp | 15678910237a69623c7c1066873cc22644aa7d0e | [
"MIT"
] | null | null | null | # DotnetCoreSmtp Project
## Author
[Artem Erkal Ucar](https://github.com/artemeu)
## Project setup instructions
This project is a .NET 5.0 Web API SMTP service project.
## Installation
Download and install the .NET 5.0 SDK:
```bash
https://dotnet.microsoft.com/download
```
## Configuration
Modify the EmailSettings content in the DotnetCoreSmtp.Api/appsettings.json file with valid sender email information (the address below is a placeholder).
```json
"EmailSettings": {
"SenderEmailAddress": "sender@example.com",
"SenderEmailAddressPassword": "password",
"SenderName": "Sender Name",
"Port": 587,
"Host": "Email Host Name"
},
```
## Run Project
In the DotnetCoreSmtp.Api project directory, run:
```bash
dotnet run
```
## Swagger
This project has a Swagger implementation:
```bash
http://localhost:8034/swagger/index.html
```
## Send Email
POST endpoint of the Smtp API:
```python
# /Smtp/send/email
```
The project runs on localhost or on a local network IP address, on port 8034:
```python
# localhost:8034/Smtp/send/email
```
## Request Body
Example value (the recipient address is a placeholder):
```json
{
"recipientEmailAddress": "recipient@example.com",
"emailSubject": "string",
"emailContent": "string"
}
```
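For illustration, a minimal sketch of calling the endpoint from Python, assuming the service is running locally on port 8034 and the `requests` package is installed; the field values are placeholders.
```python
import requests

# Request body fields as documented above (placeholder values).
payload = {
    "recipientEmailAddress": "recipient@example.com",
    "emailSubject": "Test subject",
    "emailContent": "Hello from DotnetCoreSmtp",
}

# POST to the documented endpoint on the default local port.
response = requests.post("http://localhost:8034/Smtp/send/email", json=payload)
print(response.status_code, response.text)
```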
| 15.72973 | 109 | 0.705326 | eng_Latn | 0.263223 |
0cb1ade8734f31e624e3b8f4591080127a4307f8 | 4,465 | md | Markdown | windows-driver-docs-pr/network/managing-the-local-dcbx-willing-state.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-02-07T12:25:23.000Z | 2022-02-07T12:25:23.000Z | windows-driver-docs-pr/network/managing-the-local-dcbx-willing-state.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/network/managing-the-local-dcbx-willing-state.md | hugmyndakassi/windows-driver-docs | aa56990cc71e945465bd4d4f128478b8ef5b3a1a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Managing the Local DCBX Willing State
description: Managing the Local DCBX Willing State
ms.date: 04/20/2017
---
# Managing the Local DCBX Willing State
The IEEE 802.1Qaz draft standard defines the Data Center Bridging Exchange (DCBX) protocol. This protocol allows DCB configuration parameters to be exchanged between the network adapter (local peer) and a directly connected remote peer. This allows these peers to adapt and tune Quality of Service (QoS) parameters to optimize data transfer over the connection.
Based on the local and remote QoS parameter settings, the miniport driver resolves the conflicts and derives a set of operational QoS parameters. The network adapter uses these operational parameters for the prioritized transmission of packets to the remote peer. For more information about how the driver resolves its operational NDIS QoS parameter settings, see [Resolving Operational NDIS QoS Parameters](resolving-operational-ndis-qos-parameters.md).
DCBX consists of DCB type-length-value (TLV) settings that are carried over Link Layer Discovery Protocol (LLDP) packets. A separate TLV is defined for the following types of QoS parameters:
- [Enhanced Transmission Selection (ETS)](enhanced-transmission-selection--ets--algorithm.md)
- [Priority-based Flow Control (PFC)](priority-based-flow-control--pfc.md)
The TLVs for ETS and PFC define a bit known as the *Willing* bit. If the network adapter sends its TLV settings to the remote peer with the Willing bit set to one, it indicates that the adapter is willing to accept QoS parameters from the remote peer.
The ability to set individual Willing bits in these TLVs depends on the local DCBX Willing state that is managed by the miniport driver. The miniport driver must follow these guidelines for managing the local DCBX Willing state:
- If the local DCBX Willing state is disabled, the local Willing bit must be set to zero in the DCBX TLVs. In this case, the operational QoS parameters are always resolved from the local QoS parameters. For more information on these parameters, see [Setting Local NDIS QoS Parameters](setting-local-ndis-qos-parameters.md).
- If the local DCBX Willing state is enabled, the local Willing bit must be set to one in the DCBX TLVs. In this case, the operational QoS parameters must be resolved from the remote QoS parameters. For more information on these parameters, see [Receiving Remote NDIS QoS Parameters](receiving-remote-ndis-qos-parameters.md).
**Note** If local DCBX Willing state is enabled, the miniport driver can also resolve its operational QoS parameters based on any proprietary QoS settings that are defined by the independent hardware vendor (IHV). The driver can only do this for QoS parameters that are not configured remotely by the peer or locally by the operating system.
The miniport driver manages the local DCBX Willing state in the following way:
- When the miniport driver is initialized through a call to its [*MiniportInitializeEx*](/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_initialize) function, it should enable the local DCBX Willing state based on proprietary QoS settings that are defined by the IHV.
- The DCB component (Msdcb.sys) issues an object identifier (OID) method request of [OID\_QOS\_PARAMETERS](./oid-qos-parameters.md) to configure the local QoS parameters on a network adapter. The **InformationBuffer** member of the [**NDIS\_OID\_REQUEST**](/windows-hardware/drivers/ddi/oidrequest/ns-oidrequest-ndis_oid_request) structure for this OID request contains a pointer to an [**NDIS\_QOS\_PARAMETERS**](/windows-hardware/drivers/ddi/ntddndis/ns-ntddndis-_ndis_qos_parameters) structure.
If the **NDIS\_QOS\_PARAMETERS\_WILLING** flag is set in the **Flags** member of this structure, the miniport driver enables the DCBX Willing state. If this bit is not set, the miniport driver disables the DCBX Willing state.
For more information about LLDP, refer to the IEEE 802.1AB-2005 standard.
For more information about the local DCBX Willing bits and TLVs, refer to the IEEE 802.1Qaz draft standard.
**Note** Starting with Windows Server 2012, the DCB component can be configured through a PowerShell cmdlet to set or clear the **NDIS\_QOS\_PARAMETERS\_WILLING** flag when it issues an [OID\_QOS\_PARAMETERS](./oid-qos-parameters.md) request. This causes the miniport driver to respectively enable or disable the local DCBX Willing state.
| 89.3 | 499 | 0.795297 | eng_Latn | 0.994594 |
0cb3046ae7b0e75cf94f99837918baa0e90790b2 | 355 | md | Markdown | content/publication/ferrari-2014-territorial/index.md | DiogoFerrari/academic-kickstart | 2730c7beb6b23f11ae2e831226ef782e2dc8cdc6 | [
"MIT"
] | null | null | null | content/publication/ferrari-2014-territorial/index.md | DiogoFerrari/academic-kickstart | 2730c7beb6b23f11ae2e831226ef782e2dc8cdc6 | [
"MIT"
] | null | null | null | content/publication/ferrari-2014-territorial/index.md | DiogoFerrari/academic-kickstart | 2730c7beb6b23f11ae2e831226ef782e2dc8cdc6 | [
"MIT"
] | null | null | null | ---
title: "The Territorial Division of Power: A Survey on Brazil"
date: 2014-01-01
publishDate: 2019-07-10T03:08:17.665598Z
authors: ["Marta Arretche", "Rogerio Schlegel", "Diogo Ferrari"]
publication_types: ["1"]
abstract: ""
featured: false
publication: "*Paper presented at the International Political Science Association Annual Meeting (IPSA)*"
---
| 29.583333 | 105 | 0.752113 | eng_Latn | 0.510552 |
0cb395225b376a902ff9c30ec54aba316ac3c41b | 1,003 | md | Markdown | problems/leetcode217_ContainsDuplicate/readme.md | WuYifanX/leetcode | 9e8097f0e8dc54910c0de4d814bced38d1c4aaa1 | [
"Apache-2.0"
] | null | null | null | problems/leetcode217_ContainsDuplicate/readme.md | WuYifanX/leetcode | 9e8097f0e8dc54910c0de4d814bced38d1c4aaa1 | [
"Apache-2.0"
] | null | null | null | problems/leetcode217_ContainsDuplicate/readme.md | WuYifanX/leetcode | 9e8097f0e8dc54910c0de4d814bced38d1c4aaa1 | [
"Apache-2.0"
] | null | null | null | # Leetcode
Given an array of integers, find if the array contains any duplicates.
Your function should return true if any value appears at least twice in the array, and it should return false if every element is distinct.
Example 1:
Input: [1,2,3,1]
Output: true
Example 2:
Input: [1,2,3,4]
Output: false
Example 3:
Input: [1,1,1,3,3,4,3,2,4,2]
Output: true
# Solution
```java
package leetcode217_ContainsDuplicate;
import java.util.HashSet;
import java.util.Set;
class Solution {
public boolean containsDuplicate(int[] nums) {
if (nums.length == 0) {
return false;
}
Set<Integer> countSet = new HashSet<>();
for (int currentValue : nums) {
if (countSet.contains(currentValue)) {
return true;
} else {
countSet.add(currentValue);
}
}
return false;
}
public static void main(String[] args) {
int[] inputs2 = new int[] {1, 2, 3, 4};
System.out.println(new Solution().containsDuplicate(inputs2));
}
}
```
| 16.716667 | 139 | 0.653041 | eng_Latn | 0.864224 |
0cb3e7c0add465169df5a9278031545451756007 | 1,612 | md | Markdown | mac/razor.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | mac/razor.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | mac/razor.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Razor
description: Information about Razor support in ASP.NET Core applications in Visual Studio for Mac
author: conceptdev
ms.author: crdun
ms.date: 05/03/2018
ms.topic: article
ms.technology: vs-ide-general
ms.assetid: F898CB6E-05ED-44CD-8DB6-427B2592CCC6
ms.openlocfilehash: f4c572fffb819affbbe74f05b95e270f8bbaa285
ms.sourcegitcommit: 0a8ac5f2a685270d9ca79bb39d26fd90099bfa29
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/09/2018
ms.locfileid: "51293400"
---
# <a name="razor-support"></a>Razor support
Visual Studio for Mac supports Razor editing, including IntelliSense and syntax highlighting, in *.cshtml* files.

## <a name="getting-started-with-razor-in-visual-studio-for-mac"></a>Getting started with Razor in Visual Studio for Mac
There are two options to consider when getting started with Razor in Visual Studio for Mac: Razor Pages in ASP.NET Core, and ASP.NET Core MVC. For tutorials and more information about these options, see one of the following guides:
- [Get started with Razor Pages in ASP.NET Core on macOS with Visual Studio for Mac](/aspnet/core/tutorials/razor-pages-mac/razor-pages-start?view=aspnetcore-2.1)
- [Get started with ASP.NET Core MVC and Visual Studio for Mac](/aspnet/core/tutorials/first-mvc-app-mac/start-mvc?view=aspnetcore-2.1)
## <a name="see-also"></a>See also
- [Get started with C# and ASP.NET Core in Visual Studio (on Windows)](/visualstudio/ide/tutorial-csharp-aspnet-core)
0cb4d28a22050e4453eb25618fa35615063f97cc | 82 | md | Markdown | samples/storm.md | dockerswarm/dockerswarm.github.io | 5283a57a5bcec986d87e3d849ad8949ba87fdd80 | [
"Apache-2.0"
] | null | null | null | samples/storm.md | dockerswarm/dockerswarm.github.io | 5283a57a5bcec986d87e3d849ad8949ba87fdd80 | [
"Apache-2.0"
] | null | null | null | samples/storm.md | dockerswarm/dockerswarm.github.io | 5283a57a5bcec986d87e3d849ad8949ba87fdd80 | [
"Apache-2.0"
] | null | null | null | ---
title: Storm
keywords: library, sample, Storm
layout: library
repo: storm
---
| 11.714286 | 32 | 0.707317 | eng_Latn | 0.363671 |
0cb683a3a86983aa78c760a226a49f29f6b7ef4b | 73 | md | Markdown | README.md | leandrorosa/docker-nodejs-golang | 1ccd8d4b586edf22651215a7aae2fe69d204c6d2 | [
"MIT"
] | null | null | null | README.md | leandrorosa/docker-nodejs-golang | 1ccd8d4b586edf22651215a7aae2fe69d204c6d2 | [
"MIT"
] | null | null | null | README.md | leandrorosa/docker-nodejs-golang | 1ccd8d4b586edf22651215a7aae2fe69d204c6d2 | [
"MIT"
] | 4 | 2017-10-12T03:18:27.000Z | 2021-04-15T01:44:14.000Z | # docker-nodejs-golang
docker image for running nodejs + golang directly
| 24.333333 | 49 | 0.808219 | eng_Latn | 0.656756 |
0cb685fd2d79a95d1ef5778b82a9ca413f4c3219 | 91 | md | Markdown | README.md | velociraptor98/Maze_explorer | a1bd4554a8cc8a9ff4297ef2b125f1f42b67526e | [
"MIT"
] | null | null | null | README.md | velociraptor98/Maze_explorer | a1bd4554a8cc8a9ff4297ef2b125f1f42b67526e | [
"MIT"
] | null | null | null | README.md | velociraptor98/Maze_explorer | a1bd4554a8cc8a9ff4297ef2b125f1f42b67526e | [
"MIT"
] | null | null | null | # Maze_explorer
A patrolling algorithm implemented in c# in unity using genetic algorithms
| 30.333333 | 74 | 0.835165 | eng_Latn | 0.948109 |
0cb8746501b4c68091696400e429f99f93f9f036 | 1,559 | md | Markdown | README.md | kennypascal/react-typescript-sass-webpack-starter | a7ecf4afd1bebaaacb01758d706f59647a996226 | [
"MIT"
] | null | null | null | README.md | kennypascal/react-typescript-sass-webpack-starter | a7ecf4afd1bebaaacb01758d706f59647a996226 | [
"MIT"
] | 15 | 2020-09-19T00:49:02.000Z | 2021-09-09T21:43:37.000Z | README.md | kennypascal/react-typescript-sass-webpack-starter | a7ecf4afd1bebaaacb01758d706f59647a996226 | [
"MIT"
] | null | null | null | # React, Typescript, Sass, Webpack Starter
A bare minimum react-typescript-sass-webpack boilerplate for quickly creating interactive applications.
**Note:** This project does not include **Server-Side Rendering**, **Testing Frameworks** or any other items that would make this package unnecessarily complicated.
## Contains
- [x] [React](https://Reactjs.com/)
- [x] [Typescript](https://www.typescriptlang.org)
- [x] [Sass](https://sass-lang.com)
- [x] [Webpack](https://webpack.github.io)
- [x] [Typescript Loader](https://github.com/TypeStrong/ts-loader)
- [x] [PostCSS Loader](https://github.com/postcss/postcss-loader)
- [x] [Mini CSS Extract Plugin](https://github.com/webpack-contrib/mini-css-extract-plugin)
- [x] [HTML Webpack Plugin](https://github.com/ampedandwired/html-webpack-plugin)
## Preparation
Before you start developing you will need:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [Node.js](https://nodejs.org/) (version 12.17.0 is recommended for this repo)
- [NPM](https://www.npmjs.com/)
- [NVM](https://github.com/creationix/nvm) (manage multiple versions of Node and NPM)
## Setup
```
$ npm run setup
```
When running setup you will be prompted to enter information regarding your project.
## Running
```
$ npm run start
```
## Build
When building the final project and template for deployment run:
```
$ npm run build
```
To add a report analyzing the javascript bundle run:
```
$ npm run build:analyze
```
This will create a build folder with html, javascript, css and assets.
# License
MIT | 29.415094 | 164 | 0.726106 | eng_Latn | 0.602947 |
0cba606ad75111e6c74bf9bae34957c5d6c2a23d | 1,327 | md | Markdown | docs/framework/winforms/controls/imagelist-component-windows-forms.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/controls/imagelist-component-windows-forms.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/controls/imagelist-component-windows-forms.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ImageList コンポーネント (Windows フォーム)
ms.date: 03/30/2017
helpviewer_keywords:
- ImageList component [Windows Forms]
- image controls
ms.assetid: 83b48634-782b-464d-9b7d-568dc6e0bef2
ms.openlocfilehash: 61939c427a8a74ef85f269d9e8788d4c3c3b3035
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "61971154"
---
# <a name="imagelist-component-windows-forms"></a>ImageList コンポーネント (Windows フォーム)
Windows フォーム `ImageList` コンポーネントは、コントロールで表示するイメージの保存に使用します。 イメージ リストでは、一貫性のある 1 つのイメージのカタログのコードを記述することができます。
## <a name="in-this-section"></a>このセクションの内容
[ImageList コンポーネントの概要](imagelist-component-overview-windows-forms.md)
このコンポーネントの用途、主な機能、およびプロパティについて説明します。
[方法: 追加または削除のイメージを Windows フォームの ImageList コンポーネント](how-to-add-or-remove-images-with-the-windows-forms-imagelist-component.md)
イメージの一覧からイメージを追加および削除する方法を説明します。
参照してください[方法。デザイナーを使って ImageList イメージを追加または](how-to-add-or-remove-imagelist-images-with-the-designer.md)します。
## <a name="reference"></a>参照
<xref:System.Windows.Forms.ImageList>
このクラスについて説明し、すべてのメンバーへのリンクの一覧を示します。
## <a name="related-sections"></a>関連項目
[Windows フォームで使用するコントロール](controls-to-use-on-windows-forms.md)
Windows フォーム コントロールの完全な一覧を、使用に関する情報リンクと共に提供します。
| 39.029412 | 129 | 0.786737 | yue_Hant | 0.444796 |
0cba66be994394955fea18f5e4387831da2d4cfd | 509 | md | Markdown | tree/homework/LargestBST/README.md | shahbagdadi/ICFTIPS2019 | 5fdcfc9b71ac32065f2a0b28ef18a6a33fa2963c | [
"Apache-2.0"
] | null | null | null | tree/homework/LargestBST/README.md | shahbagdadi/ICFTIPS2019 | 5fdcfc9b71ac32065f2a0b28ef18a6a33fa2963c | [
"Apache-2.0"
] | null | null | null | tree/homework/LargestBST/README.md | shahbagdadi/ICFTIPS2019 | 5fdcfc9b71ac32065f2a0b28ef18a6a33fa2963c | [
"Apache-2.0"
] | null | null | null | ## Largest BST
Given a binary tree, find the largest Binary Search Tree (BST), where largest means BST with largest number of nodes in it.
The largest BST must include all of its descendants.
```
Example:
Input: [10,5,15,1,8,null,7]
10
/ \
5 15
/ \ \
1 8 7
Output: 3
Explanation: The Largest BST Subtree in this case is 1 <- 5 -> 8.
The return value is the subtree's size, which is 3.
Follow up:
Can you figure out ways to solve it with O(n) time complexity?
``` | 20.36 | 125 | 0.64833 | eng_Latn | 0.998605 |
0cbb2abcdb712aaf3a94044d73d6f1bf43ba6d91 | 1,644 | md | Markdown | results/innerfidelity/innerfidelity_harman_over-ear_2018/AKG K1000/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | results/innerfidelity/innerfidelity_harman_over-ear_2018/AKG K1000/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | results/innerfidelity/innerfidelity_harman_over-ear_2018/AKG K1000/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | # AKG K1000
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-26.09dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-26.08 dB**.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|----------:|
| Peaking | 21.62 Hz | 1.37 | 23.18 dB |
| Peaking | 34.88 Hz | 1.22 | 11.46 dB |
| Peaking | 1945.74 Hz | 1.34 | -7.60 dB |
| Peaking | 3642.32 Hz | 0.35 | 28.69 dB |
| Peaking | 5032.37 Hz | 0.3 | -27.02 dB |
| Peaking | 56.36 Hz | 2.41 | 2.32 dB |
| Peaking | 100.67 Hz | 1.39 | -2.55 dB |
| Peaking | 4783.02 Hz | 7.54 | 1.57 dB |
| Peaking | 5953.63 Hz | 6.24 | -2.99 dB |
| Peaking | 10012.65 Hz | 2.05 | 2.24 dB |
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-29.74dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|---------:|
| Peaking | 31.25 Hz | 1.41 | 30.60 dB |
| Peaking | 62.50 Hz | 1.41 | -1.03 dB |
| Peaking | 125.00 Hz | 1.41 | -3.81 dB |
| Peaking | 250.00 Hz | 1.41 | 0.73 dB |
| Peaking | 500.00 Hz | 1.41 | 2.34 dB |
| Peaking | 1000.00 Hz | 1.41 | 0.78 dB |
| Peaking | 2000.00 Hz | 1.41 | -3.24 dB |
| Peaking | 4000.00 Hz | 1.41 | 2.32 dB |
| Peaking | 8000.00 Hz | 1.41 | -5.08 dB |
| Peaking | 16000.01 Hz | 1.41 | -5.81 dB |
### Graphs
 | 41.1 | 98 | 0.551095 | eng_Latn | 0.684241 |
0cbb7c977094b3cace97f842b79b3f0852404aa8 | 51,657 | md | Markdown | docs/ssdt/walkthrough-creating-and-running-a-sql-server-unit-test.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssdt/walkthrough-creating-and-running-a-sql-server-unit-test.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssdt/walkthrough-creating-and-running-a-sql-server-unit-test.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Walkthrough: Creating and Running a SQL Server Unit Test | Microsoft Docs'
ms.custom:
- SSDT
ms.date: 02/09/2017
ms.prod: sql
ms.technology: ssdt
ms.reviewer: ''
ms.topic: conceptual
ms.assetid: 992c1d8e-3729-438b-9ef4-cd103e28f145
author: stevestein
ms.author: sstein
manager: craigg
ms.openlocfilehash: 77ef8c2340724558b137bb1da1bb3448db677795
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/01/2018
ms.locfileid: "47855293"
---
# <a name="walkthrough-creating-and-running-a-sql-server-unit-test"></a>Walkthrough: Creating and Running a SQL Server Unit Test
In this walkthrough, you create a SQL Server unit test that verifies the behavior of several stored procedures. You create SQL Server unit tests to help identify code defects that might cause incorrect application behavior. You can run SQL Server unit tests and application tests as part of an automated test suite.
In this walkthrough, you perform the following tasks:
- [Create a script that contains a database schema](#CreateScript)
- [Create a database project and import that schema](#CreateProjectAndImport)
- [Deploy the database project to an isolated development environment](#DeployDBProj)
- [Create SQL Server unit tests](#CreateDBUnitTests)
- [Define the test logic](#DefineTestLogic)
- [Run the SQL Server unit tests](#RunTests)
- [Add a negative unit test](#NegativeTest)
When one of the unit tests detects an error in a stored procedure, you correct the error and re-run the test.
## <a name="prerequisites"></a>Prerequisites
To complete this walkthrough, you must be able to connect to a database server (or a LocalDB database) on which you have permissions to create and deploy a database. For more information, see [Required Permissions for Database Features of Visual Studio](http://msdn.microsoft.com/library/aa833413(VS.100).aspx).
## <a name="CreateScript"></a>Create a Script That Contains a Database Schema
#### <a name="to-create-a-script-from-which-you-can-import-a-schema"></a>To create a script from which you can import a schema
1. On the **File** menu, point to **New**, and then click **File**.
The **New File** dialog box appears.
2. In the **Categories** list, click **General** if it is not already highlighted.
3. In the **Templates** list, click **SQL File**, and then click **Open**.
The Transact\-SQL editor opens.
4. Copy the following Transact\-SQL code, and paste it into the Transact\-SQL editor.
```
PRINT N'Creating Sales...';
GO
CREATE SCHEMA [Sales]
AUTHORIZATION [dbo];
GO
PRINT N'Creating Sales.Customer...';
GO
CREATE TABLE [Sales].[Customer] (
[CustomerID] INT IDENTITY (1, 1) NOT NULL,
[CustomerName] NVARCHAR (40) NOT NULL,
[YTDOrders] INT NOT NULL,
[YTDSales] INT NOT NULL
);
GO
PRINT N'Creating Sales.Orders...';
GO
CREATE TABLE [Sales].[Orders] (
[CustomerID] INT NOT NULL,
[OrderID] INT IDENTITY (1, 1) NOT NULL,
[OrderDate] DATETIME NOT NULL,
[FilledDate] DATETIME NULL,
[Status] CHAR (1) NOT NULL,
[Amount] INT NOT NULL
);
GO
PRINT N'Creating Sales.Def_Customer_YTDOrders...';
GO
ALTER TABLE [Sales].[Customer]
ADD CONSTRAINT [Def_Customer_YTDOrders] DEFAULT 0 FOR [YTDOrders];
GO
PRINT N'Creating Sales.Def_Customer_YTDSales...';
GO
ALTER TABLE [Sales].[Customer]
ADD CONSTRAINT [Def_Customer_YTDSales] DEFAULT 0 FOR [YTDSales];
GO
PRINT N'Creating Sales.Def_Orders_OrderDate...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [Def_Orders_OrderDate] DEFAULT GetDate() FOR [OrderDate];
GO
PRINT N'Creating Sales.Def_Orders_Status...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [Def_Orders_Status] DEFAULT 'O' FOR [Status];
GO
PRINT N'Creating Sales.PK_Customer_CustID...';
GO
ALTER TABLE [Sales].[Customer]
ADD CONSTRAINT [PK_Customer_CustID] PRIMARY KEY CLUSTERED ([CustomerID] ASC) WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF);
GO
PRINT N'Creating Sales.PK_Orders_OrderID...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [PK_Orders_OrderID] PRIMARY KEY CLUSTERED ([OrderID] ASC) WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF);
GO
PRINT N'Creating Sales.FK_Orders_Customer_CustID...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [FK_Orders_Customer_CustID] FOREIGN KEY ([CustomerID]) REFERENCES [Sales].[Customer] ([CustomerID]) ON DELETE NO ACTION ON UPDATE NO ACTION;
GO
PRINT N'Creating Sales.CK_Orders_FilledDate...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [CK_Orders_FilledDate] CHECK ((FilledDate >= OrderDate) AND (FilledDate < '01/01/2020'));
GO
PRINT N'Creating Sales.CK_Orders_OrderDate...';
GO
ALTER TABLE [Sales].[Orders]
ADD CONSTRAINT [CK_Orders_OrderDate] CHECK ((OrderDate > '01/01/2005') and (OrderDate < '01/01/2020'));
GO
PRINT N'Creating Sales.uspCancelOrder...';
GO
CREATE PROCEDURE [Sales].[uspCancelOrder]
@OrderID INT
AS
BEGIN
DECLARE @Delta INT, @CustomerID INT
BEGIN TRANSACTION
SELECT @Delta = [Amount], @CustomerID = [CustomerID]
FROM [Sales].[Orders] WHERE [OrderID] = @OrderID;
UPDATE [Sales].[Orders]
SET [Status] = 'X'
WHERE [OrderID] = @OrderID;
UPDATE [Sales].[Customer]
SET
YTDOrders = YTDOrders - @Delta
WHERE [CustomerID] = @CustomerID
COMMIT TRANSACTION
END
GO
PRINT N'Creating Sales.uspFillOrder...';
GO
CREATE PROCEDURE [Sales].[uspFillOrder]
@OrderID INT, @FilledDate DATETIME
AS
BEGIN
DECLARE @Delta INT, @CustomerID INT
BEGIN TRANSACTION
SELECT @Delta = [Amount], @CustomerID = [CustomerID]
FROM [Sales].[Orders] WHERE [OrderID] = @OrderID;
UPDATE [Sales].[Orders]
SET [Status] = 'F',
[FilledDate] = @FilledDate
WHERE [OrderID] = @OrderID;
UPDATE [Sales].[Customer]
SET
YTDSales = YTDSales - @Delta
WHERE [CustomerID] = @CustomerID
COMMIT TRANSACTION
END
GO
PRINT N'Creating Sales.uspNewCustomer...';
GO
CREATE PROCEDURE [Sales].[uspNewCustomer]
@CustomerName NVARCHAR (40)
AS
BEGIN
INSERT INTO [Sales].[Customer] (CustomerName) VALUES (@CustomerName);
SELECT SCOPE_IDENTITY()
END
GO
PRINT N'Creating Sales.uspPlaceNewOrder...';
GO
CREATE PROCEDURE [Sales].[uspPlaceNewOrder]
@CustomerID INT, @Amount INT, @OrderDate DATETIME, @Status CHAR (1)='O'
AS
BEGIN
DECLARE @RC INT
BEGIN TRANSACTION
INSERT INTO [Sales].[Orders] (CustomerID, OrderDate, FilledDate, Status, Amount)
VALUES (@CustomerID, @OrderDate, NULL, @Status, @Amount)
SELECT @RC = SCOPE_IDENTITY();
UPDATE [Sales].[Customer]
SET
YTDOrders = YTDOrders + @Amount
WHERE [CustomerID] = @CustomerID
COMMIT TRANSACTION
RETURN @RC
END
GO
CREATE PROCEDURE [Sales].[uspShowOrderDetails]
@CustomerID INT=0
AS
BEGIN
SELECT [C].[CustomerName], CONVERT(date, [O].[OrderDate]), CONVERT(date, [O].[FilledDate]), [O].[Status], [O].[Amount]
FROM [Sales].[Customer] AS C
INNER JOIN [Sales].[Orders] AS O
ON [O].[CustomerID] = [C].[CustomerID]
WHERE [C].[CustomerID] = @CustomerID
END
GO
```
5. Save the file. Note its location, because you must use this script in the next procedure.
6. On the **File** menu, click **Close Solution**.
Next, you create a database project and import the schema from the script that you created.
## <a name="CreateProjectAndImport"></a>Create a Database Project and Import a Schema
#### <a name="to-create-a-database-project"></a>To create a database project
1. On the **File** menu, point to **New**, and then click **Project**.
The **New Project** dialog box appears.
2. Under **Installed Templates**, select the **SQL Server** node, and then select **SQL Server Database Project**.
3. In **Name**, type **SimpleUnitTestDB**.
4. Select the **Create directory for solution** check box if it is not already selected.
5. Clear the **Add to source control** check box if it is not already cleared, and then click **OK**.
The database project is created, and it appears in **Solution Explorer**. Next, you import the database schema from a script.
#### <a name="to-import-a-database-schema-from-a-script"></a>To import a database schema from a script
1. On the **Project** menu, click **Import**, and then click **Script (\*.sql)**.
2. Click **Next** after you read the welcome page.
3. Click **Browse** to go to the directory where you saved the .sql file.
4. Double-click the .sql file, and then click **Finish**.
The script is imported, and the objects that are defined in the script are added to the database project.
5. Review the summary, and then click **Finish** to complete the operation.
> [!NOTE]
> The Sales.uspFillOrder procedure contains an intentional code error that you will detect and correct later in this walkthrough.
#### <a name="to-examine-the-resulting-project"></a>To examine the resulting project
1. In **Solution Explorer**, examine the script files that were imported into the project.
2. In **SQL Server Object Explorer**, examine the database under the Projects node.
## <a name="DeployDBProj"></a>Deploy to LocalDB
By default, pressing F5 deploys (or publishes) the database to a LocalDB database. You can change the database location by going to the Debug tab of the project's property page and changing the connection string.
## <a name="CreateDBUnitTests"></a>Create SQL Server Unit Tests
#### <a name="to-create-a-sql-server-unit-test-for-the-stored-procedures"></a>To create a SQL Server unit test for the stored procedures
1. In **SQL Server Object Explorer**, expand the **SimpleUnitTestDB** projects node, and then expand the **Programmability** and **Stored Procedures** nodes.
2. Right-click one of the stored procedures, and then click **Create Unit Tests** to display the **Create Unit Tests** dialog box.
3. Select the check boxes for all five stored procedures: **Sales.uspCancelOrder**, **Sales.uspFillOrder**, **Sales.uspNewCustomer**, **Sales.uspPlaceNewOrder**, and **Sales.uspShowOrderDetails**.
4. In the **Project** drop-down list, select **Create a new Visual C# test project**.
5. Accept the default names for the project name and the class name, and then click **OK**.
6. In the test configuration dialog box, under **Execute unit tests using the following data connection**, specify a connection to the database that you deployed earlier in this walkthrough. For example, if you used the default deployment location, which is LocalDB, you would click **New Connection** and specify **(LocalDB)\Projects**. Then, choose the name of the database. Then click OK to close the **Connection Properties** dialog box.
> [!NOTE]
> If you must test views or stored procedures that have restricted permissions, you would typically specify that connection in this step. You would then specify the secondary connection, which has broader permissions, to validate the test. If you have a secondary connection, you must add the user to the database project and create a login for the user in the pre-deployment script.
7. In the test configuration dialog box, in the **Deployment** section, select the **Automatically deploy the database project before unit tests are run** check box.
8. In **Database project**, click **SimpleUnitTestDB.sqlproj**.
9. In **Deployment configuration**, click **Debug**.
You could also generate test data as part of your SQL Server unit tests. For this walkthrough, you can skip this step because the tests will create their own data.
10. Click **OK**.
The test project builds, and the SQL Server Unit Test Designer appears. Next, you update the test logic in the Transact\-SQL script of the unit tests.
## <a name="DefineTestLogic"></a>Define the Test Logic
This very simple database has two tables, Customer and Orders. You update the database by using the following stored procedures:
- uspNewCustomer: This stored procedure adds a record to the Customer table and sets the customer's YTDOrders and YTDSales columns to zero.
- uspPlaceNewOrder: This stored procedure adds a record to the Orders table for the specified customer and updates the YTDOrders amount in the corresponding record in the Customer table.
- uspFillOrder: This stored procedure updates a record in the Orders table by changing the status from 'O' to 'F', and it increases the YTDSales amount in the corresponding record in the Customer table.
- uspCancelOrder: This stored procedure updates a record in the Orders table by changing the status from 'O' to 'X', and it decreases the YTDOrders amount in the corresponding record in the Customer table.
- uspShowOrderDetails: This stored procedure joins the Orders table with the Customer table and displays the records for a specific customer.
> [!NOTE]
> This example shows how to create a simple SQL Server unit test. In a real-world database, you might sum the total amounts of all orders that have a status of 'O' or 'F' for a given customer. The procedures in this walkthrough also do not contain any error handling. For example, they do not prevent you from calling uspFillOrder for an order that has already been filled.
The tests assume that the database starts in a clean state. You will create tests that verify the following conditions:
- uspNewCustomer: Verify that the Customer table contains one row after you run the stored procedure.
- uspPlaceNewOrder: For the customer that has a CustomerID of 1, place an order for 100 dollars. Verify that the customer's YTDOrders amount is 100 and that the YTDSales amount is zero.
- uspFillOrder: For the customer that has a CustomerID of 1, place an order for 50 dollars. Fill that order. Verify that the YTDOrders and YTDSales amounts are both 50.
- uspShowOrderDetails: For the customer that has a CustomerID of 1, place orders for 100, 50, and 5 dollars. Verify that uspShowOrderDetails returns the correct number of columns and that the result set has the expected checksum.
> [!NOTE]
> For a complete set of SQL Server unit tests, you would typically also verify that the other columns were set correctly. To keep this walkthrough a manageable size, it does not describe how to verify the behavior of uspCancelOrder.
#### <a name="to-write-the-sql-server-unit-test-for-uspnewcustomer"></a>To write the SQL Server unit test for uspNewCustomer
1. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspNewCustomerTest**, and make sure that **Test** is highlighted in the adjacent list.
After you perform the previous step, you can create the test script for the test action in the unit test.
2. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
-- ssNoVersion unit test for Sales.uspNewCustomer
DECLARE @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@CustomerName = 'Fictitious Customer';
EXECUTE @RC = [Sales].[uspNewCustomer] @CustomerName;
SELECT * FROM [Sales].[Customer];
```
3. In the **Test Conditions** pane, click the Inconclusive test condition, and then click the **Delete Test Condition** icon (the red X).
4. In the **Test Conditions** pane, click **Row Count** in the list, and then click the **Add Test Condition** icon (the green +).
5. Open the **Properties** window (select the test condition and press F4), and set the **Row Count** property to 1.
6. On the **File** menu, click **Save All**.
Next, you define the unit test logic for uspPlaceNewOrder.
#### <a name="to-write-the-sql-server-unit-test-for-uspplaceneworder"></a>To write the SQL Server unit test for uspPlaceNewOrder
1. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspPlaceNewOrderTest**, and make sure that **Test** is highlighted in the adjacent list.
After you perform this step, you can create the test script for the test action in the unit test.
2. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
-- ssNoVersion unit test for Sales.uspPlaceNewOrder
DECLARE @RC AS INT, @CustomerID AS INT, @Amount AS INT, @OrderDate AS DATETIME, @Status AS CHAR (1);
DECLARE @CustomerName AS NVARCHAR(40);
SELECT @RC = 0,
@CustomerID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@OrderDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- place an order for that customer
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, @Amount, @OrderDate, @Status;
-- verify that the YTDOrders value is correct.
SELECT @RC = [YTDOrders] FROM [Sales].[Customer] WHERE [CustomerID] = @CustomerID
SELECT @RC AS RC
```
3. In the **Test Conditions** pane, click the Inconclusive test condition, and then click **Delete Test Condition**.
4. In the **Test Conditions** pane, click **Scalar Value** in the list, and then click **Add Test Condition**.
5. In the **Properties** window, set the **Expected value** property to 100.
6. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspPlaceNewOrderTest**, and make sure that **Pre-test** is highlighted in the adjacent list.
After you perform this step, you can specify statements that put the data into the state that the test requires. In this example, you must create the Customer record before you can place an order.
7. Click **Click here to create** to create a pre-test script.
8. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
/*
Add Transact-SQL statements here that you want to run before
the test script is run.
*/
-- Add a customer for this test with the name 'Fictitious Customer'
DECLARE @NewCustomerID AS INT, @CustomerID AS INT, @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@NewCustomerID = 0,
@CustomerID = 0,
@CustomerName = N'Fictitious Customer';
IF NOT EXISTS(SELECT * FROM [Sales].[Customer] WHERE CustomerName = @CustomerName)
BEGIN
EXECUTE @NewCustomerID = [Sales].[uspNewCustomer] @CustomerName;
END
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- delete any old records in the Orders table and clear out the YTD Sales/Orders fields
DELETE from [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
UPDATE [Sales].[Customer] SET YTDOrders = 0, YTDSales = 0 WHERE [CustomerID] = @CustomerID;
```
9. On the **File** menu, click **Save All**.
Next, you create the unit test for uspFillOrder.
#### <a name="to-write-the-sql-server-unit-test-for-uspfillorder"></a>To write the SQL Server unit test for uspFillOrder
1. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspFillOrderTest**, and make sure that **Test** is highlighted in the adjacent list.
After you perform this step, you can create the test script for the test action in the unit test.
2. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
-- ssNoVersion unit test for Sales.uspFillOrder
DECLARE @RC AS INT, @CustomerID AS INT, @Amount AS INT, @FilledDate AS DATETIME, @Status AS CHAR (1);
DECLARE @CustomerName AS NVARCHAR(40), @OrderID AS INT;
SELECT @RC = 0,
@CustomerID = 0,
@OrderID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@FilledDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- Get the most recently added order.
SELECT @OrderID = MAX([OrderID]) FROM [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
-- fill an order for that customer
EXECUTE @RC = [Sales].[uspFillOrder] @OrderID, @FilledDate;
-- verify that the YTDOrders value is correct.
SELECT @RC = [YTDSales] FROM [Sales].[Customer] WHERE [CustomerID] = @CustomerID
SELECT @RC AS RC;
```
3. In the **Test Conditions** pane, click the Inconclusive test condition, and then click **Delete Test Condition**.
4. In the **Test Conditions** pane, click **Scalar Value** in the list, and then click **Add Test Condition**.
5. In the **Properties** window, set the **Expected value** property to 100.
6. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspFillOrderTest**, and make sure that **Pre-test** is highlighted in the adjacent list. After you perform this step, you can specify statements that put the data into the state that the test requires. In this example, you must create the Customer record before you can place an order.
7. Click **Click here to create** to create a pre-test script.
8. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
/*
Add Transact-SQL statements here that you want to run before
the test script is run.
*/
BEGIN TRANSACTION
-- Add a customer for this test with the name 'CustomerB'
DECLARE @NewCustomerID AS INT, @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@NewCustomerID = 0,
@CustomerName = N'Fictitious Customer';
IF NOT EXISTS(SELECT * FROM [Sales].[Customer] WHERE CustomerName = @CustomerName)
BEGIN
EXECUTE @NewCustomerID = [Sales].[uspNewCustomer] @CustomerName;
END
DECLARE @CustomerID AS INT, @Amount AS INT, @OrderDate AS DATETIME, @Status AS CHAR (1);
SELECT @RC = 0,
@CustomerID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@OrderDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- delete any old records in the Orders table and clear out the YTD Sales/Orders fields
DELETE from [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
UPDATE [Sales].[Customer] SET YTDOrders = 0, YTDSales = 0 WHERE [CustomerID] = @CustomerID;
-- place an order for that customer
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, @Amount, @OrderDate, @Status;
COMMIT TRANSACTION
```
9. On the **File** menu, click **Save All**.
#### <a name="to-write-the-sql-server-unit-test-for-uspshoworderdetails"></a>To write the SQL Server unit test for uspShowOrderDetails
1. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspShowOrderDetailsTest**, and make sure that **Test** is highlighted in the adjacent list.
After you perform this step, you can create the test script for the test action in the unit test.
2. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
-- ssNoVersion unit test for Sales.uspFillOrder
DECLARE @RC AS INT, @CustomerID AS INT, @Amount AS INT, @FilledDate AS DATETIME, @Status AS CHAR (1);
DECLARE @CustomerName AS NVARCHAR(40), @OrderID AS INT;
SELECT @RC = 0,
@CustomerID = 0,
@OrderID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@FilledDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- fill an order for that customer
EXECUTE @RC = [Sales].[uspShowOrderDetails] @CustomerID;
SELECT @RC AS RC;
```
3. In the **Test Conditions** pane, click the Inconclusive test condition, and then click **Delete Test Condition**.
4. In the **Test Conditions** pane, click **Expected Schema** in the list, and then click **Add Test Condition**.
5. In the **Properties** window, in the **Configuration** property, click the browse ("**…**") button.
6. In the **Configuration for expectedSchemaCondition1** dialog box, specify a connection to the database. For example, if you used the default deployment location, which is LocalDB, you would click **New Connection** and specify **(LocalDB)\Projects**. Then, choose the name of the database.
7. Click **Retrieve**. (If necessary, click **Retrieve** until you see the data.)
The Transact\-SQL body of the unit test runs, and the resulting schema appears in the dialog box. Because the pre-test code did not run, no data is returned. Because you are only verifying the schema and not the data, this is acceptable.
8. Click **OK**.
The expected schema is stored with the test condition.
9. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspShowOrderDetailsTest**, and make sure that **Pre-test** is highlighted in the adjacent list. After you perform this step, you can specify statements that put the data into the state that the test requires. In this example, you must create the Customer record before you can place an order.
10. Click **Click here to create** to create a pre-test script.
11. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
/*
Add Transact-SQL statements here to run before the test script is run.
*/
BEGIN TRANSACTION
-- Add a customer for this test with the name 'FictitiousCustomer'
DECLARE @NewCustomerID AS INT, @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@NewCustomerID = 0,
@CustomerName = N'Fictitious Customer';
IF NOT EXISTS(SELECT * FROM [Sales].[Customer] WHERE CustomerName = @CustomerName)
BEGIN
EXECUTE @NewCustomerID = [Sales].[uspNewCustomer] @CustomerName;
END
DECLARE @CustomerID AS INT, @Amount AS INT, @OrderDate AS DATETIME, @Status AS CHAR (1);
SELECT @RC = 0,
@CustomerID = 0,
@CustomerName = N'Fictitious Customer',
@OrderDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- delete any old records in the Orders table and clear out the YTD Sales/Orders fields
DELETE from [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
UPDATE [Sales].[Customer] SET YTDOrders = 0, YTDSales = 0 WHERE [CustomerID] = @CustomerID;
-- place 3 orders for that customer
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 100, @OrderDate, @Status;
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 50, @OrderDate, @Status;
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 5, @OrderDate, @Status;
COMMIT TRANSACTION
```
12. In the navigation bar of the SQL Server Unit Test Designer, click **Sales_uspShowOrderDetailsTest**, and then click **Test** in the adjacent list.
You must do this because you want to apply the checksum condition to the test, not to the pre-test action.
13. In the **Test Conditions** pane, click **Data Checksum** in the list, and then click **Add Test Condition**.
14. In the **Properties** window, in the **Configuration** property, click the browse ("**…**") button.
15. In the **Configuration for checksumCondition1** dialog box, specify a connection to the database.
16. Replace the Transact\-SQL code in the dialog box (below the **Edit Connection** button) with the following code:
```
BEGIN TRANSACTION
-- Add a customer for this test with the name 'CustomerB'
DECLARE @NewCustomerID AS INT, @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@NewCustomerID = 0,
@CustomerName = N'Fictitious Customer';
IF NOT EXISTS(SELECT * FROM [Sales].[Customer] WHERE CustomerName = @CustomerName)
BEGIN
EXECUTE @NewCustomerID = [Sales].[uspNewCustomer] @CustomerName;
END
DECLARE @CustomerID AS INT, @Amount AS INT, @OrderDate AS DATETIME, @Status AS CHAR (1);
SELECT @RC = 0,
@CustomerID = 0,
@CustomerName = N'Fictitious Customer',
@OrderDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- delete any old records in the Orders table and clear out the YTD Sales/Orders fields
DELETE from [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
UPDATE [Sales].[Customer] SET YTDOrders = 0, YTDSales = 0 WHERE [CustomerID] = @CustomerID;
-- place 3 orders for that customer
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 100, @OrderDate, @Status;
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 50, @OrderDate, @Status;
EXECUTE @RC = [Sales].[uspPlaceNewOrder] @CustomerID, 5, @OrderDate, @Status;
COMMIT TRANSACTION
-- ssNoVersion unit test for Sales.uspFillOrder
DECLARE @FilledDate AS DATETIME;
DECLARE @OrderID AS INT;
SELECT @RC = 0,
@CustomerID = 0,
@OrderID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@FilledDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- fill an order for that customer
EXECUTE @RC = [Sales].[uspShowOrderDetails] @CustomerID;
SELECT @RC AS RC;
```
This code combines the pre-test Transact\-SQL code with the Transact\-SQL of the test itself. You need both so that it returns the same results the test will return when it runs.
17. Click **Retrieve**. (If necessary, click **Retrieve** until you see the data.)
The specified Transact\-SQL runs, and a checksum is calculated for the returned data.
18. Click **OK**.
The calculated checksum is stored with the test condition. The expected checksum appears in the Value column of the Data Checksum test condition.
19. On the **File** menu, click **Save All**.
At this point, you are ready to run your tests.
## <a name="RunTests"></a>Run SQL Server unit tests
#### <a name="to-run-the-sql-server-unit-tests"></a>To run the SQL Server unit tests
1. On the **Test** menu, choose **Windows**, and then click **Test View** in Visual Studio 2010 or **Test Explorer** in Visual Studio 2012.
2. In the **Test View** window (Visual Studio 2010), click **Refresh** on the toolbar to update the list of tests. To see the list of tests in **Test Explorer** (Visual Studio 2012), build the solution.
The **Test View** or **Test Explorer** window shows the tests that you created earlier in this walkthrough and to which you added Transact\-SQL statements and test conditions. The test named TestMethod1 is empty and is not used in this walkthrough.
3. Right-click **Sales_uspNewCustomerTest** and click **Run Selection**.
Visual Studio uses the privileged context that you specified to connect to the database and apply the data generation plan. Visual Studio then switches to the execution context before it runs the Transact\-SQL script in the test. Finally, Visual Studio compares the results of the Transact\-SQL script with those specified in the test condition, and a pass or fail result appears in the **Test Results** window.
4. View the result in the **Test Results** window.
The test passes, which means that the **SELECT** statement returns one row when it runs.
5. Repeat step 3 for the Sales_uspPlaceNewOrderTest, Sales_uspFillOrderTest, and Sales_uspShowOrderDetailsTest tests. The results should be as follows:
|Test|Expected result|
|--------|-------------------|
|Sales_uspPlaceNewOrderTest|Passed|
|Sales_uspShowOrderDetailsTest|Passed|
|Sales_uspFillOrderTest|Produces the following error: 'ScalarValueCondition condition (scalarValueCondition2) failed: ResultSet 1, row 1, column 1: values do not match, actual: '100', expected: '100''. This error appears because the definition of the stored procedure contains a minor error.|
Next, you will correct the error and re-run the test.
#### <a name="to-correct-the-error-in-salesuspfillorder"></a>To correct the error in Sales.uspFillOrder
1. In the Projects node of **SQL Server Object Explorer** for the database, double-click the **uspFillOrder** stored procedure to open its definition in the Transact\-SQL editor.
2. In the definition, locate the following Transact\-SQL statement:
```
UPDATE [Sales].[Customer]
SET
YTDSales = YTDSales - @Delta
WHERE [CustomerID] = @CustomerID
```
3. Change the SET clause in the statement to match the following:
```
UPDATE [Sales].[Customer]
SET
YTDSales = YTDSales + @Delta
WHERE [CustomerID] = @CustomerID
```
4. On the **File** menu, click **Save uspFillOrder.sql**.
5. In **Test View**, right-click **Sales_uspFillOrderTest** and click **Run Selection**.
The test passes.
## <a name="NegativeTest"></a>Add a negative unit test
You can create a negative test to verify that a test fails when it is supposed to fail. For example, if you try to cancel an order that has already been filled, the test should fail. In this part of the walkthrough, you create a negative unit test for the Sales.uspCancelOrder stored procedure.
To create and verify the negative test, you must perform the following tasks:
- Update the stored procedure to test for error conditions
- Define a new unit test
- Modify the unit test code to indicate that an error is expected
- Run the unit test
#### <a name="to-update-the-stored-procedure"></a>To update the stored procedure
1. In the Projects node of **SQL Server Object Explorer** for the SimpleUnitTestDB database, expand the Programmability and Stored Procedures nodes, and then double-click uspCancelOrder.
2. In the Transact\-SQL editor, update the procedure definition to match the following code:
```
CREATE PROCEDURE [Sales].[uspCancelOrder]
@OrderID INT
AS
BEGIN
DECLARE @Delta INT, @CustomerID INT, @PriorStatus CHAR(1)
BEGIN TRANSACTION
BEGIN TRY
IF (NOT EXISTS(SELECT [CustomerID] from [Sales].[Orders] WHERE [OrderID] = @OrderID))
BEGIN
-- Specify WITH LOG option so that the error is
-- written to the application log.
RAISERROR( 'That order does not exist.', -- Message text
16, -- severity
1 -- state
) WITH LOG;
END
SELECT @Delta = [Amount], @CustomerID = [CustomerID], @PriorStatus = [Status]
FROM [Sales].[Orders] WHERE [OrderID] = @OrderID
IF @PriorStatus <> 'O'
BEGIN
-- Specify WITH LOG option so that the error is
-- written to the application log.
RAISERROR ( 'You can only cancel open orders.', -- Message text
16, -- Severity
1 -- State
) WITH LOG;
END
ELSE
BEGIN
-- If we make it to here, then we can cancel the order. Update the status to 'X' first...
UPDATE [Sales].[Orders]
SET [Status] = 'X'
WHERE [OrderID] = @OrderID
-- and then remove the amount from the YTDOrders for the customer
UPDATE [Sales].[Customer]
SET
YTDOrders = YTDOrders - @Delta
WHERE [CustomerID] = @CustomerID
COMMIT TRANSACTION
RETURN 1; -- indicate success
END
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT @ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
ROLLBACK TRANSACTION
-- Use RAISERROR inside the CATCH block to return
-- error information about the original error that
-- caused execution to jump to the CATCH block.
    RAISERROR (@ErrorMessage, -- Message text
@ErrorSeverity, -- Severity
@ErrorState -- State
);
RETURN 0; -- indicate failure
END CATCH;
END
```
3. On the **File** menu, click **Save uspCancelOrder.sql**.
4. Press F5 to deploy **SimpleUnitTestDB**.
This deploys the updates to the uspCancelOrder stored procedure. No other object changed; you only updated the stored procedure.
Next you will define the associated unit test for this procedure.
#### <a name="to-write-the-sql-server-unit-test-for-uspcancelorder"></a>To write the SQL Server unit test for uspCancelOrder
1. In the SQL Server Unit Test Designer navigation bar, click **Sales_uspCancelOrderTest** and make sure **Test** is highlighted in the adjacent list.
After this step, you can create the test script for the test action in the unit test.
2. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
-- ssNoVersion unit test for Sales.uspFillOrder
DECLARE @RC AS INT, @CustomerID AS INT, @Amount AS INT, @FilledDate AS DATETIME, @Status AS CHAR (1);
DECLARE @CustomerName AS NVARCHAR(40), @OrderID AS INT;
SELECT @RC = 0,
@CustomerID = 0,
@OrderID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@FilledDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- Get the most recently added order.
SELECT @OrderID = MAX([OrderID]) FROM [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
-- try to cancel an order for that customer that has already been filled
EXECUTE @RC = [Sales].[uspCancelOrder] @OrderID;
SELECT @RC AS RC;
```
3. In the **Test Conditions** pane, click the Inconclusive test condition, and then click the **Delete Test Condition** icon.
4. In the **Test Conditions** pane, click **Scalar Value** in the list, and then click the **Add Test Condition** icon.
5. In the **Properties** window, set the **Expected value** property to 0.
6. In the SQL Server Unit Test Designer navigation bar, click **Sales_uspCancelOrderTest** and make sure **Pre-test** is highlighted in the adjacent list. After this step, you can specify the statements that put the data into the state required to run the test. In this example, you must create the Customer record before an order can be placed.
7. Click **Click here to create** to create a pre-test script.
8. Update the Transact\-SQL statements in the Transact\-SQL editor to match the following statements:
```
/*
Add Transact-SQL statements here to run before the test script is run.
*/
BEGIN TRANSACTION
-- Add a customer for this test with the name 'CustomerB'
DECLARE @NewCustomerID AS INT, @RC AS INT, @CustomerName AS NVARCHAR (40);
SELECT @RC = 0,
@NewCustomerID = 0,
@CustomerName = N'Fictitious Customer';
IF NOT EXISTS(SELECT * FROM [Sales].[Customer] WHERE CustomerName = @CustomerName)
BEGIN
EXECUTE @NewCustomerID = [Sales].[uspNewCustomer] @CustomerName;
END
DECLARE @CustomerID AS INT, @Amount AS INT, @OrderDate AS DATETIME, @FilledDate AS DATETIME, @Status AS CHAR (1), @OrderID AS INT;
SELECT @RC = 0,
@CustomerID = 0,
@OrderID = 0,
@CustomerName = N'Fictitious Customer',
@Amount = 100,
@OrderDate = getdate(),
@FilledDate = getdate(),
@Status = 'O';
-- NOTE: Assumes that you inserted a Customer record with CustomerName='Fictitious Customer' in the pre-test script.
SELECT @CustomerID = [CustomerID] FROM [Sales].[Customer] WHERE [CustomerName] = @CustomerName;
-- delete any old records in the Orders table and clear out the YTD Sales/Orders fields
DELETE from [Sales].[Orders] WHERE [CustomerID] = @CustomerID;
UPDATE [Sales].[Customer] SET YTDOrders = 0, YTDSales = 0 WHERE [CustomerID] = @CustomerID;
-- place an order for that customer
EXECUTE @OrderID = [Sales].[uspPlaceNewOrder] @CustomerID, @Amount, @OrderDate, @Status;
-- fill the order for that customer
EXECUTE @RC = [Sales].[uspFillOrder] @OrderID, @FilledDate;
COMMIT TRANSACTION
```
9. On the **File** menu, click **Save All**.
At this point, you are ready to run your tests.
#### <a name="to-run-the-sql-server-unit-tests"></a>To run the SQL Server unit tests
1. In **Test View**, right-click **Sales_uspCancelOrderTest** and click **Run Selection**.
2. View the result in the **Test Results** window.
The test fails, and the following error message is displayed:
**Test method TestProject1.SqlServerUnitTests1.Sales_uspCancelOrderTest threw exception: System.Data.SqlClient.SqlException: You can only cancel open orders.**
Next, modify the code to indicate that the exception is expected.
#### <a name="to-modify-the-code-for-the-unit-test"></a>To modify the code for the unit test
1. In **Solution Explorer**, expand **TestProject1**, right-click **SqlServerUnitTests1.cs**, and then click **View Code**.
2. In the code editor, navigate to the Sales_uspCancelOrderTest method. Modify the method's attributes to match the following code:
```
[TestMethod(), ExpectedSqlException(Severity=16, MatchFirstError=false, State=1)]
public void Sales_uspCancelOrderTest()
```
This specifies that you expect a particular exception to be thrown. Optionally, you can specify a particular error number. If you do not add this attribute, the unit test fails and a message appears in the Test Results window.
> [!IMPORTANT]
> Currently, Visual Studio 2012 does not support the ExpectedSqlException attribute. For information about how to work around this issue, see [Unable to run "Expected Failure" database unit test](http://social.msdn.microsoft.com/Forums/en-US/ssdt/thread/e74e06ad-e3c9-4cb0-97ad-a6f235a52345).
3. On the File menu, click Save SqlServerUnitTests1.cs.
Next, re-run the unit test to verify that it fails as expected.
#### <a name="to-re-run-the-sql-server-unit-tests"></a>To re-run the SQL Server unit tests
1. In **Test View**, right-click **Sales_uspCancelOrderTest** and click **Run Selection**.
2. View the result in the **Test Results** window.
The test passes, which means that the procedure failed when it was supposed to fail.
## <a name="next-steps"></a>Next Steps
In a typical project, you would define additional unit tests to verify that all critical database objects work correctly. When the set of tests is complete, you would check the tests in to version control so that you can share them with your team.
After you establish a baseline, you can create and modify database objects and then create associated tests to verify whether a change will break the expected behavior.
## <a name="see-also"></a>See also
[Creating and Defining SQL Server Unit Tests](../ssdt/creating-and-defining-sql-server-unit-tests.md)
[Verifying Database Code by Using SQL Server Unit Tests](../ssdt/verifying-database-code-by-using-sql-server-unit-tests.md)
[How to: Create an Empty SQL Server Unit Test](../ssdt/how-to-create-an-empty-sql-server-unit-test.md)
[How to: Configure SQL Server Unit Test Execution](../ssdt/how-to-configure-sql-server-unit-test-execution.md)
| 51.864458 | 527 | 0.676114 | spa_Latn | 0.872361 |
0cbb7d0252d8e1458888a95644eaf825abafec86 | 5,918 | md | Markdown | _posts/2020-04-21-webservices-api-restful-e-protocolo-http.md | paulodutra/paulodutra.github.io | 8c64e700760a7478292b8c27604e4ab878b6ac70 | [
"MIT"
] | null | null | null | _posts/2020-04-21-webservices-api-restful-e-protocolo-http.md | paulodutra/paulodutra.github.io | 8c64e700760a7478292b8c27604e4ab878b6ac70 | [
"MIT"
] | null | null | null | _posts/2020-04-21-webservices-api-restful-e-protocolo-http.md | paulodutra/paulodutra.github.io | 8c64e700760a7478292b8c27604e4ab878b6ac70 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Webservices, API RestFul e Protocolo HTTP"
date: 2020-04-21 14:00:00 -0200
categories: blog, tecnologia, webservices, API, Restful, Http
---
Hey folks, in today's post we are going to talk about some concepts around web services, RESTful APIs and the HTTP protocol.
# What is a web service?
A web service is a piece of software that enables machine-to-machine interoperability; its interactions normally happen over the HTTP protocol.
More information at <a href="https://www.w3.org/TR/ws-arch/#whatis" target="__blank">https://www.w3.org/TR/ws-arch/#whatis</a>
Examples of web services:
- Public information;
- Social networks;
- Business;
- Services;
- Logistics;
- Mobile;
- Internet of Things (IoT): refrigerator, chair, vehicle;
- Among other purposes.
## Difference between a web service and an API
An API (Application Programming Interface) is a group of routines, protocols and methods that allows communication between applications. See some differences below:
- A web service is an API;
- Not every API is a web service, e.g.: the BIOS, the DOM (Document Object Model, used to access and manipulate HTML content), PHP's core API, etc.;
- A web service is tied to HTTP, REST, SOAP, XML-RPC;
- In other words, a web service is an API aimed at the web.
## REST and the HTTP protocol
Proposed by Roy Fielding in 2000, REST is an architectural style that makes use of the HTTP protocol; it standardizes an interface for managing resources and manipulating them through the exchange of representational state.
RESTful stands for **Re**presentational **S**tate **T**ransfer; it is resource-oriented, and the **"ful" suffix indicates an actual implementation of REST**, i.e. the API implements the REST architecture. It is **stateless**, meaning it does not keep state (it does not store sessions or cookies).
A RESTful server can be consumed by client applications (AngularJS, Vue.js, Guzzle, cURL, ReactJS, among others) or by applications running on the server side (such an application can rely on a RESTful server to provide it with data).
REST works with HTTP methods, status codes and HTTP headers, and it allows negotiation of the content type to be returned through the content type (which can be specified in the request header), for example: JSON, XML, TXT, among others.
Another common practice in RESTful applications is to work with versions, for example **http://servidor.com/api/v1/produto.json**, where **v1 is the version** and **produto is the resource**, also known as the **endpoint**.
Each HTTP verb has a meaning (although you could use a verb however you like); the convention for the verbs GET, POST, PUT, DELETE, PATCH and OPTIONS is shown below:
**GET:** Queries information. It is considered "safe" in the sense that it does not change any data in the API itself, e.g.: GET /clientes;
**POST:** Creates a new resource. It is considered "unsafe" in the sense that it changes data in the API itself (it changes the state of the API), e.g.: POST /pedidos;
**PUT:** Updates an existing resource. It is considered "unsafe" in the sense that it changes data in the API itself, e.g.: PUT /pedidos/2320;
**DELETE:** Removes an existing resource. It is considered "unsafe" in the sense that it changes data in the API itself, e.g.: DELETE /pedidos/4060;
**OPTIONS:** Queries information from the API. It is considered "safe" in the sense that it does not change any data in the API itself, e.g.: OPTIONS /clientes.
**OPTIONS is a way for the client to discover which resources are available**: if the API implements it, the client can send OPTIONS /clientes and receive the resources that are available. It also serves to query which HEADERS can be passed in a request. It is widely implemented in JavaScript libraries, which send an OPTIONS request before the actual request, receive the available resources, and abort the call if the desired one is not available.
That said, let's look at an example of a route and the use of each verb (a minimal Python sketch follows this list):
**GET** - http://servidor.com/api/v1/produto - Lists several items
**GET** - http://servidor.com/api/v1/produto/1 - Views a single item
**POST** - http://servidor.com/api/v1/produto - Inserts
**PUT** - http://servidor.com/api/v1/produto/1 - Updates
**DELETE** - http://servidor.com/api/v1/produto/1 - Removes
**NOTE**: Some APIs use the POST verb with the id (identifier) in the URL to update, or use PUT and end up creating the resource if it does not exist.
The URL segments vary from API to API; that is, they can be longer depending on what is needed.
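To make the verb-to-route mapping concrete, here is a minimal sketch using Python's `requests` library. The base URL and the JSON fields are only illustrative — they follow the hypothetical `produto` endpoint used above and are not a real API:

```python
import requests

BASE = "http://servidor.com/api/v1/produto"  # hypothetical endpoint from the examples above

# GET - list several items
print(requests.get(BASE).status_code)

# GET - view a single item
print(requests.get(f"{BASE}/1").status_code)

# POST - insert a new resource (fields are made up for illustration)
created = requests.post(BASE, json={"nome": "Produto X", "preco": 10.5})
print(created.status_code)  # typically 200/201 on success

# PUT - update an existing resource
print(requests.put(f"{BASE}/1", json={"preco": 12.0}).status_code)

# DELETE - remove an existing resource
print(requests.delete(f"{BASE}/1").status_code)

# OPTIONS - ask which methods/headers are available for the resource
print(requests.options(BASE).headers.get("Allow"))
```

Each call returns a response whose `status_code` falls into one of the classes summarized in the table below.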
## Example of each HTTP status code class
|2xx - Success | 3xx - Redirection / state change |4xx - Client error |5xx - Server error
|-------------------|-----------------------------------|-----------------------------|-------------------------------
| 200 - OK | 301 - Permanent redirect | 404 - Page not found | 500 - Internal server error
| | 302 - Temporary redirect | 422 - Validation failed |
| | | |
**NOTE**: There is an RFC that specifies the definition of the status codes: RFC 2616, Section 10. The W3C, the body that helps standardize accessibility and other web standards, hosts a document with the status code definitions; see the link below:
<a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html" target="__blank">https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html</a>
| 61.010309 | 476 | 0.733694 | por_Latn | 0.999785 |
0cbbef03051bf19dca7bbd52b75f4df59855ec1f | 322 | md | Markdown | README.md | mschenk42/flask-resteasy | e787c248a93eee1a4634b38fbcfdf747d8960849 | [
"BSD-3-Clause"
] | null | null | null | README.md | mschenk42/flask-resteasy | e787c248a93eee1a4634b38fbcfdf747d8960849 | [
"BSD-3-Clause"
] | null | null | null | README.md | mschenk42/flask-resteasy | e787c248a93eee1a4634b38fbcfdf747d8960849 | [
"BSD-3-Clause"
] | null | null | null | # Flask-RestEasy
Flask-RestEasy is a [Flask](http://flask.pocoo.org) extension for quickly creating [JSON API](http://www.jsonapi.org) compliant REST APIs for
models created with [SQLAlchemy](http://www.sqlalchemy.org) and integrated using the [Flask-SQLAlchemy extension](http://www.pythonhosted.org/Flask-SQLAlchemy/).
| 80.5 | 161 | 0.779503 | yue_Hant | 0.387621 |
0cbce031c4d9d79c9afd0cc3ebfe43d399a1ed03 | 2,887 | md | Markdown | README.md | chipto/css-visor | e6aa3682ef62575f3bb3bc89c8700e950c8fbd48 | [
"MIT"
] | 7 | 2017-10-23T15:56:23.000Z | 2018-05-31T11:52:32.000Z | README.md | chipto/css-visor | e6aa3682ef62575f3bb3bc89c8700e950c8fbd48 | [
"MIT"
] | 5 | 2018-01-04T15:45:09.000Z | 2018-05-28T03:44:35.000Z | README.md | NeekSandhu/css-visor | e6aa3682ef62575f3bb3bc89c8700e950c8fbd48 | [
"MIT"
] | 1 | 2018-01-04T16:51:28.000Z | 2018-01-04T16:51:28.000Z | # css-visor
The ultimate `style-loader` replacement you knew you needed
`css-visor` is like a supervisor that finds, injects and updates your imported stylesheets - with sourcemaps and no Flash of Unstyled Content
## Background
`css-visor` was created out of long living pain as seen in:
- [#613 - css-loader with `sourceMap: true` cause adding style tag delayed](https://github.com/webpack-contrib/css-loader/issues/613)
- [#591 - Use css sourceMaps in development](https://github.com/facebookincubator/create-react-app/pull/591#issuecomment-247807916)
## Usage
Install it
`npm install --save-dev css-visor`
Light it up
```javascript
// webpack.config.js
const CSSVisor = require('css-visor')
const HtmlWebpackPlugin = require('html-webpack-plugin')
module.exports = {
// existing config
module: {
rules: [{
test: /\.css$/,
use: [
{
loader: 'css-visor/loader',
// Optional
options: {
/**
* Path prefix you'd like to be prefixed in <link> tag
*
* Default: ''
* => <link href="styles/button.da3n1b.css">
*
* Custom: 'static/base'
* => <link href="static/base/styles/button.da3n1b.css">
*/
pathPrefix: 'static/styles' // default: ''
}
},
{
loader: 'css-loader',
options: {
sourceMap: true,
}
},
// more loaders (sass, postcss)
]
}]
},
plugins: [
new CSSVisor(),
new HtmlWebpackPlugin({
inject: true
})
]
}
```
It should now be working out of the box.
Not working? Make sure `css-visor/loader` comes right before `css-loader` and that an instance of `CSSVisor` is in the `plugins` list of your webpack config.
Still not working? Please file an issue so it can be tracked.
> This release is only compatible with webpack 4 and up. To use with an older version of webpack use `npm i [email protected]`, although `1.0` is no longer maintained
## Reliability
`css-visor` is literally built from the source of [extract-loader](https://github.com/peerigon/extract-loader). So far, it has shown no problems, even with imported stylesheets and images/SVGs (processed through `url-loader`/`file-loader`) — the only things you can import in a CSS file anyway.
## External Licenses
Much of the code is taken from [extract-loader](https://github.com/peerigon/extract-loader), which is unlicensed; see [extract-loader/LICENSE](https://github.com/peerigon/extract-loader/blob/master/LICENSE) for more information.
| 33.964706 | 234 | 0.576723 | eng_Latn | 0.889247 |
0cbd429bfa4c1370becfa484e9a5f396bb03b126 | 76 | md | Markdown | README.md | waelio/vite-quasar-template | 7f34f67fc9d8c710d13fe25970e04d6185137cc6 | [
"MIT"
] | null | null | null | README.md | waelio/vite-quasar-template | 7f34f67fc9d8c710d13fe25970e04d6185137cc6 | [
"MIT"
] | null | null | null | README.md | waelio/vite-quasar-template | 7f34f67fc9d8c710d13fe25970e04d6185137cc6 | [
"MIT"
] | null | null | null | # vite-quasar-template
Vite project that extends Vitesse and uses Quasar UI
| 25.333333 | 52 | 0.815789 | eng_Latn | 0.902807 |
0cbd8b1a13bec48b3ea2773e195c4bbba90b1cba | 6,655 | md | Markdown | step_by_step.md | koenighotze/Hotel-Reservation-Tool | 7d217d738bdca0d8ace45149190e8c9ff73b6d9c | [
"Apache-2.0"
] | 6 | 2017-04-11T08:49:30.000Z | 2020-06-10T08:59:39.000Z | step_by_step.md | koenighotze/Hotel-Reservation-Tool | 7d217d738bdca0d8ace45149190e8c9ff73b6d9c | [
"Apache-2.0"
] | 29 | 2015-08-28T20:51:06.000Z | 2016-02-07T10:13:23.000Z | step_by_step.md | koenighotze/Hotel-Reservation-Tool | 7d217d738bdca0d8ace45149190e8c9ff73b6d9c | [
"Apache-2.0"
] | 2 | 2017-05-07T19:13:58.000Z | 2020-05-19T18:42:20.000Z | # Step by Step Guide
## Create project structure
```bash
$ mvn archetype:generate -DgroupId=org.koenighotze\
-DartifactId=jee7helloworld \
-DarchetypeArtifactId=maven-archetype-webapp \
-DinteractiveMode=false
```
## Add JEE7 Dependencies
In [pom.xml](https://gist.github.com/koenighotze/bedce5cec0f7c7148da8) add:
```xml
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>7.0</version>
<scope>provided</scope>
</dependency>
```
## Add Wildfly Plugin
In [pom.xml](https://gist.github.com/koenighotze/bedce5cec0f7c7148da8) add:
```xml
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<version>1.0.2.Final</version>
</plugin>
```
## Try it ... Run It
```shell
$ mvn wildfly:run
$ open http://localhost:8080/jee7helloworld/
```
## Setup JSF
Copy the following to [`web.xml`](https://gist.github.com/koenighotze/73d1625e7c51250bd7c1)
```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
<servlet>
<servlet-name>Faces Servlet</servlet-name>
<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>Faces Servlet</servlet-name>
<url-pattern>*.xhtml</url-pattern>
</servlet-mapping>
</web-app>
```
## Create index.xhtml
Copy the following into [`src/main/webapp/index.xhtml`](https://gist.github.com/koenighotze/f4d534052aff939dabcb)
```xml
<!DOCTYPE html>
<html
xmlns="http://www.w3.org/1999/xhtml"
xmlns:jsf="http://xmlns.jcp.org/jsf">
<body>
<form jsf:id="form">
What is your name: <input type="text" jsf:value="#{hello.name}"/>
<input type="submit" value="Greet me" jsf:action="hello.xhtml"/>
</form>
</body>
</html>
```
## Create hello.xhtml
Copy the following into [`src/main/webapp/hello.xhtml`](https://gist.github.com/koenighotze/07af8e874540e78a6beb)
```xml
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
Hello #{hello.name}
</body>
</html>
```
## Add Model
Copy the following into a new class [`src/main/java/hello/Hello.java`](https://gist.github.com/koenighotze/3f869a3c46aec29b0a17)
```java
package hello;
import javax.enterprise.inject.Model;
import java.io.Serializable;
@Model
public class Hello implements Serializable {
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
```
## Run Application
```shell
$ mvn wildfly:redeploy
$ open http://localhost:8080/jee7helloworld/index.xhtml
```
## Add Controller and CRUD
Add JPA persistence by creating [`src/main/resources/META-INF/persistence.xml`](https://gist.github.com/koenighotze/305fceff59a5a1987a01)
```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
<persistence-unit name="JEE7HelloWorld-Booking" transaction-type="JTA">
<properties>
<property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
</properties>
</persistence-unit>
</persistence>
```
Add ORM to `Hello.java`
```java
@Model
@Entity
public class Hello implements Serializable {
@Id
private String name;
```
Add the following to a new class [`src/main/java/hello/HelloController.java`](https://gist.github.com/koenighotze/4e5195adc8671896323a)
```java
package hello;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.inject.Named;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaQuery;
import javax.transaction.Transactional;
import java.util.List;
@Named
@ApplicationScoped
public class HelloController {
@Inject
private Hello hello;
@PersistenceContext
private EntityManager em;
public List<Hello> helloSoFar() {
CriteriaQuery<Hello> cq = this.em.getCriteriaBuilder().createQuery(Hello.class);
cq.select(cq.from(Hello.class));
return this.em.createQuery(cq).getResultList();
}
@Transactional
public void storeName(Hello hello) {
em.persist(hello);
}
}
```
Replace the submit button in [`index.xhtml`](https://gist.github.com/koenighotze/a56a10e50eaf4fede0fe) with
```xml
<input type="submit" value="Greet me"
jsf:actionListener="#{helloController.storeName(hello)}"
jsf:action="hello.xhtml"/>
```
Add the following to [`index.xhtml`](https://gist.github.com/koenighotze/a56a10e50eaf4fede0fe)
```xml
<br/>
<div>
Hellos thus far:<br/>
<ul>
<ui:repeat value="#{helloController.helloSoFar()}" var="h">
<li>#{h.name}</li>
</ui:repeat>
</ul>
</div>
```
## Run Application
```shell
$ mvn wildfly:redeploy
$ open http://localhost:8080/jee7helloworld/index.xhtml
```
## And now REST
Enable [`Hello.java`](https://gist.github.com/koenighotze/30bba9760ee797b12902) for automagic XML/JSON-ification.
```java
@Model
@Entity
@XmlRootElement
public class Hello implements Serializable {
```
Add the following class [`Application.java`](https://gist.github.com/koenighotze/c945d46b9072632ea757):
```java
package hello;
@javax.ws.rs.ApplicationPath("rest")
public class Application extends javax.ws.rs.core.Application {
}
```
Expose the [`HelloController.java`](https://gist.github.com/koenighotze/56e6bb6d9fafba7263ef) methods via Jax-RS
```java
@Named
@ApplicationScoped
@Path("hello")
public class HelloController {
...
@GET
@Produces({APPLICATION_XML, APPLICATION_JSON})
public List<Hello> helloSoFar() {
...
@POST
@Consumes({APPLICATION_XML, APPLICATION_JSON})
@Transactional
public void storeName(Hello hello) {
...
```
## Run Application...again
```shell
$ mvn wildfly:redeploy
$ open http://localhost:8080/jee7helloworld/index.xhtml
```
Try `curl` on the REST service:
```shell
$ curl http://localhost:8080/jee7helloworld/rest/hello
$ curl -X POST http://localhost:8080/jee7helloworld/rest/hello --header "Content-Type: application/xml" --data '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><hello><name>Test</name></hello>'
```
| 23.269231 | 201 | 0.705635 | yue_Hant | 0.323588 |
0cbecfb2056614cb4cd75cc2b10fc96e83f0983e | 13,088 | md | Markdown | _examples/tutorial/url-shortener/url-shortener.md | ZRothschild/iris | b300a5cfa9978b3b9404357243f96794f63b1246 | [
"BSD-3-Clause"
] | null | null | null | _examples/tutorial/url-shortener/url-shortener.md | ZRothschild/iris | b300a5cfa9978b3b9404357243f96794f63b1246 | [
"BSD-3-Clause"
] | null | null | null | _examples/tutorial/url-shortener/url-shortener.md | ZRothschild/iris | b300a5cfa9978b3b9404357243f96794f63b1246 | [
"BSD-3-Clause"
] | null | null | null | # Go Iris URL Shortener
Hackernoon article: https://medium.com/hackernoon/a-url-shortener-service-using-go-iris-and-bolt-4182f0b00ae7
## Directory structure
> Main directory: `url-shortener`
```html
—— resources
—— css
—— style.css
—— templates
—— index.html
—— factory.go
—— main.go
—— main_test.go
—— store.go
```
## Code examples
> `resources/css/style.css`
```css
body{
background-color:silver;
}
```
> `templates/index.html`
```html
<html>
<head>
<meta charset="utf-8">
<title>Golang URL Shortener</title>
<link rel="stylesheet" href="/static/css/style.css" />
</head>
<body>
<h2>Golang URL Shortener</h2>
<h3>{{ .FORM_RESULT}}</h3>
<form action="/shorten" method="POST">
<input type="text" name="url" style="width: 35em;" />
<input type="submit" value="Shorten!" />
</form>
{{ if IsPositive .URL_COUNT }}
<p>{{ .URL_COUNT }} URLs shortened</p>
{{ end }}
<form action="/clear_cache" method="POST">
<input type="submit" value="Clear DB" />
</form>
</body>
</html>
```
> `factory.go`
```golang
package main
import (
"net/url"
"github.com/iris-contrib/go.uuid"
)
//生成类型以生成密钥(短网址)
// Generator the type to generate keys(short urls)
type Generator func() string
// DefaultGenerator是默认的URL生成器
// DefaultGenerator is the default url generator
var DefaultGenerator = func() string {
id, _ := uuid.NewV4()
return id.String()
}
//工厂负责生成密钥(短网址)
// Factory is responsible to generate keys(short urls)
type Factory struct {
store Store
generator Generator
}
// NewFactory接收一个生成器和一个存储,并返回一个新的URL Factory。
// NewFactory receives a generator and a store and returns a new url Factory.
func NewFactory(generator Generator, store Store) *Factory {
return &Factory{
store: store,
generator: generator,
}
}
// Gen生成密钥
// Gen generates the key.
func (f *Factory) Gen(uri string) (key string, err error) {
//我们不返回已解析的url,因为#hash已转换为uri兼容,并且我们不想一直进行编码/解码,
// 因此不需要这样做,我们将URL保存为用户期望的值,如果 uri验证已通过
// we don't return the parsed url because #hash are converted to uri-compatible
// and we don't want to encode/decode all the time, there is no need for that,
// we save the url as the user expects if the uri validation passed.
_, err = url.ParseRequestURI(uri)
if err != nil {
return "", err
}
key = f.generator()
//确保密钥是唯一的
// Make sure that the key is unique
for {
if v := f.store.Get(key); v == "" {
break
}
key = f.generator()
}
return key, nil
}
```
> `main.go`
```golang
// Package main展示了如何创建简单的URL Shortener。
//
//文章:https://medium.com/@kataras/a-url-shortener-service-using-go-iris-and-bolt-4182f0b00ae7
//
// Package main shows how you can create a simple URL Shortener.
//
// Article: https://medium.com/@kataras/a-url-shortener-service-using-go-iris-and-bolt-4182f0b00ae7
//
// $ go get github.com/etcd-io/bbolt
// $ go get github.com/iris-contrib/go.uuid
// $ cd $GOPATH/src/github.com/kataras/iris/_examples/tutorial/url-shortener
// $ go build
// $ ./url-shortener
package main
import (
"fmt"
"html/template"
"github.com/kataras/iris/v12"
)
func main() {
//为数据库分配一个变量,以便稍后使用
// assign a variable to the DB so we can use its features later.
db := NewDB("shortener.db")
//将该数据库传递给我们的应用程序,以便以后可以使用其他数据库测试整个应用程序。
// Pass that db to our app, in order to be able to test the whole app with a different database later on.
app := newApp(db)
//当服务器关闭时释放"db"连接
// release the "db" connection when server goes off.
iris.RegisterOnInterrupt(db.Close)
app.Run(iris.Addr(":8080"))
}
func newApp(db *DB) *iris.Application {
app := iris.Default() // or app := iris.New()
//创建我们的工厂,该工厂是对象创建的管理
//在我们的Web应用程序和数据库之间
// create our factory, which is the manager for the object creation.
// between our web app and the db.
factory := NewFactory(DefaultGenerator, db)
//通过HTML std视图引擎为"./templates" 目录的“ * .html”文件提供服务
// serve the "./templates" directory's "*.html" files with the HTML std view engine.
tmpl := iris.HTML("./templates", ".html").Reload(true)
//在此处注册任何模板功能
//
//看./templates/index.html#L16
// register any template func(s) here.
//
// Look ./templates/index.html#L16
tmpl.AddFunc("IsPositive", func(n int) bool {
if n > 0 {
return true
}
return false
})
app.RegisterView(tmpl)
//提供静态文件(css)
// Serve static files (css)
app.HandleDir("/static", "./resources")
indexHandler := func(ctx iris.Context) {
ctx.ViewData("URL_COUNT", db.Len())
ctx.View("index.html")
}
app.Get("/", indexHandler)
//通过在http://localhost:8080/u/dsaoj41u321dsa上使用的键来查找并执行短网址
// find and execute a short url by its key
// used on http://localhost:8080/u/dsaoj41u321dsa
execShortURL := func(ctx iris.Context, key string) {
if key == "" {
ctx.StatusCode(iris.StatusBadRequest)
return
}
value := db.Get(key)
if value == "" {
ctx.StatusCode(iris.StatusNotFound)
ctx.Writef("Short URL for key: '%s' not found", key)
return
}
ctx.Redirect(value, iris.StatusTemporaryRedirect)
}
app.Get("/u/{shortkey}", func(ctx iris.Context) {
execShortURL(ctx, ctx.Params().Get("shortkey"))
})
app.Get("/u/3861bc4d-ca57-4cbc-9fe4-9e0e2b50fff4", func(ctx iris.Context) {
fmt.Printf("%s\n","testsssss")
})
//app.Get("/u/{shortkey}", func(ctx iris.Context) {
// execShortURL(ctx, ctx.Params().Get("shortkey"))
//})
app.Post("/shorten", func(ctx iris.Context) {
formValue := ctx.FormValue("url")
if formValue == "" {
ctx.ViewData("FORM_RESULT", "You need to a enter a URL")
ctx.StatusCode(iris.StatusLengthRequired)
} else {
key, err := factory.Gen(formValue)
if err != nil {
ctx.ViewData("FORM_RESULT", "Invalid URL")
ctx.StatusCode(iris.StatusBadRequest)
} else {
if err = db.Set(key, formValue); err != nil {
ctx.ViewData("FORM_RESULT", "Internal error while saving the URL")
app.Logger().Infof("while saving URL: " + err.Error())
ctx.StatusCode(iris.StatusInternalServerError)
} else {
ctx.StatusCode(iris.StatusOK)
shortenURL := "http://" + app.ConfigurationReadOnly().GetVHost() + "/u/" + key
ctx.ViewData("FORM_RESULT",
template.HTML("<pre><a target='_new' href='"+shortenURL+"'>"+shortenURL+" </a></pre>"))
}
}
}
//没有重定向,我们需要FORM_RESULT
indexHandler(ctx) // no redirect, we need the FORM_RESULT.
})
app.Post("/clear_cache", func(ctx iris.Context) {
db.Clear()
ctx.Redirect("/")
})
return app
}
```
> `main_test.go`
```golang
package main
import (
"io/ioutil"
"os"
"testing"
"time"
"github.com/kataras/iris/v12/httptest"
)
// TestURLShortener tests the simple tasks of our url shortener application.
// Note that it's a pure test.
// The rest of the possible checks is up to you; take it as an exercise!
func TestURLShortener(t *testing.T) {
// temp db file
f, err := ioutil.TempFile("", "shortener")
if err != nil {
t.Fatalf("creating temp file for database failed: %v", err)
}
db := NewDB(f.Name())
app := newApp(db)
e := httptest.New(t, app)
originalURL := "https://google.com"
// save
e.POST("/shorten").
WithFormField("url", originalURL).Expect().
Status(httptest.StatusOK).Body().Contains("<pre><a target='_new' href=")
keys := db.GetByValue(originalURL)
if got := len(keys); got != 1 {
t.Fatalf("expected to have 1 key but saved %d short urls", got)
}
// get
e.GET("/u/" + keys[0]).Expect().
Status(httptest.StatusTemporaryRedirect).Header("Location").Equal(originalURL)
// save the same again, it should add a new key
e.POST("/shorten").
WithFormField("url", originalURL).Expect().
Status(httptest.StatusOK).Body().Contains("<pre><a target='_new' href=")
keys2 := db.GetByValue(originalURL)
if got := len(keys2); got != 1 {
t.Fatalf("expected to have 1 keys even if we save the same original url but saved %d short urls", got)
} // the key is the same, so only the first one matters.
if keys[0] != keys2[0] {
t.Fatalf("expected keys to be equal if the original url is the same, but got %s = %s ", keys[0], keys2[0])
}
// clear db
e.POST("/clear_cache").Expect().Status(httptest.StatusOK)
if got := db.Len(); got != 0 {
t.Fatalf("expected database to have 0 registered objects after /clear_cache but has %d", got)
}
// give it some time to release the db connection
db.Close()
time.Sleep(1 * time.Second)
// close the file
if err := f.Close(); err != nil {
t.Fatalf("unable to close the file: %s", f.Name())
}
// and remove the file
if err := os.Remove(f.Name()); err != nil {
t.Fatalf("unable to remove the file from %s", f.Name())
}
time.Sleep(1 * time.Second)
}
```
> `store.go`
```golang
package main
import (
"bytes"
bolt "github.com/etcd-io/bbolt"
)
//Panic,如果您不想因严重的INITIALIZE-ONLY-ERRORS 异常而将其更改
// Panic panics, change it if you don't want to panic on critical INITIALIZE-ONLY-ERRORS
var Panic = func(v interface{}) {
panic(v)
}
// Store是网址的存储接口
//注意:没有Del功能
// Store is the store interface for urls.
// Note: no Del functionality.
type Store interface {
Set(key string, value string) error // 如果出了问题返回错误 | error if something went wrong
Get(key string) string // 如果找不到,则为空值 | empty value if not found
Len() int // 应该返回所有记录/表/桶的数量 | should return the number of all the records/tables/buckets
Close() // 释放存储或忽略 | release the store or ignore
}
var tableURLs = []byte("urls")
//Store的数据库表示形式。
//只有一个表/存储桶包含网址,因此它不是完整的数据库,
//它仅适用于单个存储桶,因为我们需要这些。
// DB representation of a Store.
// Only one table/bucket which contains the urls, so it's not a fully Database,
// it works only with single bucket because that all we need.
type DB struct {
db *bolt.DB
}
var _ Store = &DB{}
// openDatabase打开一个新的数据库连接并返回其实例
// openDatabase open a new database connection
// and returns its instance.
func openDatabase(stumb string) *bolt.DB {
//打开当前工作目录下的data(base)文件,
// 如果不存在该文件将被创建
// Open the data(base) file in the current working directory.
// It will be created if it doesn't exist.
db, err := bolt.Open(stumb, 0600, nil)
if err != nil {
Panic(err)
}
//在此处创建存储桶
// create the buckets here
tables := [...][]byte{
tableURLs,
}
db.Update(func(tx *bolt.Tx) (err error) {
for _, table := range tables {
_, err = tx.CreateBucketIfNotExists(table)
if err != nil {
Panic(err)
}
}
return
})
return db
}
// NewDB返回一个新的数据库实例,其连接已打开
// DB实现Store
// NewDB returns a new DB instance, its connection is opened.
// DB implements the Store.
func NewDB(stumb string) *DB {
return &DB{
db: openDatabase(stumb),
}
}
// Set设置一个缩短的网址及其键
//注意:调用方负责生成密钥
// Set sets a shorten url and its key
// Note: Caller is responsible to generate a key.
func (d *DB) Set(key string, value string) error {
return d.db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists(tableURLs)
//生成url的ID注意:我们可以使用它代替随机的字符串键
// 但是我们想模拟一个实际的url缩短器,因此我们跳过它
// id, _ := b.NextSequence()
// Generate ID for the url
// Note: we could use that instead of a random string key
// but we want to simulate a real-world url shortener
// so we skip that.
// id, _ := b.NextSequence()
if err != nil {
return err
}
k := []byte(key)
valueB := []byte(value)
c := b.Cursor()
found := false
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.Equal(valueB, v) {
found = true
break
}
}
//如果值已经存在,请不要重新输入
// if value already exists don't re-put it.
if found {
return nil
}
return b.Put(k, []byte(value))
})
}
//Clear将清除表URL的所有数据库条目
// Clear clears all the database entries for the table urls.
func (d *DB) Clear() error {
return d.db.Update(func(tx *bolt.Tx) error {
return tx.DeleteBucket(tableURLs)
})
}
// Get通过其键返回一个URL
//
//如果找不到,则返回一个空字符串
// Get returns a url by its key.
//
// Returns an empty string if not found.
func (d *DB) Get(key string) (value string) {
keyB := []byte(key)
d.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(tableURLs)
if b == nil {
return nil
}
c := b.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.Equal(keyB, k) {
value = string(v)
break
}
}
return nil
})
return
}
// GetByValue返回特定(原始)URL值的所有键
// GetByValue returns all keys for a specific (original) url value.
func (d *DB) GetByValue(value string) (keys []string) {
valueB := []byte(value)
d.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(tableURLs)
if b == nil {
return nil
}
c := b.Cursor()
//首先为存储区的表"urls"
// first for the bucket's table "urls"
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.Equal(valueB, v) {
keys = append(keys, string(k))
}
}
return nil
})
return
}
// Len返回所有“短”网址的长度
// Len returns all the "shorted" urls length
func (d *DB) Len() (num int) {
d.db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
b := tx.Bucket(tableURLs)
if b == nil {
return nil
}
b.ForEach(func([]byte, []byte) error {
num++
return nil
})
return nil
})
return
}
//关闭将关闭数据库)连接
// Close shutdowns the data(base) connection.
func (d *DB) Close() {
if err := d.db.Close(); err != nil {
Panic(err)
}
}
``` | 23.413238 | 116 | 0.653881 | eng_Latn | 0.463195 |
0cbf9eb0defc8e463f8615c5d9de8a8405c3e31a | 453 | md | Markdown | introduction-to-text-mining/README.md | DraceZhan/live-learning-sessions | 12f29b758755339753fb34ceb3a01ee31273bee1 | [
"MIT"
] | 9 | 2020-08-28T19:15:43.000Z | 2022-03-29T15:25:12.000Z | introduction-to-text-mining/README.md | DraceZhan/live-learning-sessions | 12f29b758755339753fb34ceb3a01ee31273bee1 | [
"MIT"
] | 1 | 2021-02-22T18:06:53.000Z | 2021-02-23T05:42:15.000Z | introduction-to-text-mining/README.md | DraceZhan/live-learning-sessions | 12f29b758755339753fb34ceb3a01ee31273bee1 | [
"MIT"
] | 37 | 2020-08-13T19:31:05.000Z | 2022-03-07T16:42:55.000Z | ### Introduction to Text Mining
<hr>
- In this session, we will examine methods for analyzing text data without delving into more advanced NLP techniques. Students will see how, using basic string methods, one can glean relevant insight from review data and feed it into further conventional EDA work.
**Prerequisites:**
`pip install nltk`
Once installed, you can run in either Python or a Jupyter notebook:
```
import nltk
nltk.download('all')
``` | 32.357143 | 264 | 0.770419 | eng_Latn | 0.997014 |
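As a taste of what the session covers, here is a minimal sketch of the basic-string-methods approach. The review strings below are made up for illustration; the session's own notebooks may use different data and a fuller stopword list (e.g. `nltk.corpus.stopwords`):

```python
from collections import Counter

reviews = [
    "Great food, great service!",
    "The food was cold and the service was slow.",
    "Service was friendly; food arrived fast.",
]

# Normalize with plain string methods: lowercase, strip punctuation, split on whitespace.
words = []
for review in reviews:
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in review.lower())
    words.extend(cleaned.split())

# Drop a few obvious stopwords before counting.
stopwords = {"the", "was", "and", "a", "an", "of"}
counts = Counter(w for w in words if w not in stopwords)

print(counts.most_common(5))  # e.g. [('food', 3), ('service', 3), ('great', 2), ...]
```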
0cbfcec02843e876d9208c243b41c2f96db7bcb9 | 103 | md | Markdown | _sponsors/spring.md | innovationOnBoard/iob-staging | be6bb9bfbb4202aaf2b8e7441bd45eb01047c37e | [
"CC-BY-3.0"
] | null | null | null | _sponsors/spring.md | innovationOnBoard/iob-staging | be6bb9bfbb4202aaf2b8e7441bd45eb01047c37e | [
"CC-BY-3.0"
] | 3 | 2017-08-08T22:36:10.000Z | 2017-09-11T03:18:54.000Z | _sponsors/spring.md | innovationOnBoard/iob-staging | be6bb9bfbb4202aaf2b8e7441bd45eb01047c37e | [
"CC-BY-3.0"
] | 2 | 2017-07-26T19:14:41.000Z | 2017-11-20T23:07:08.000Z | ---
layout: post
weight: 100
name: Spring.is
status: silver
img: /assets/images/sponsors/spring.png
--- | 14.714286 | 39 | 0.728155 | eng_Latn | 0.274506 |
0cbff628037f29bf000492e5f00bcbe920a64830 | 413 | md | Markdown | _posts/2021-07-08/2021-06-16-Which-lips-are-you-kissing-first-almost-40-year-old-mom-20210616130326118484.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-16-Which-lips-are-you-kissing-first-almost-40-year-old-mom-20210616130326118484.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | _posts/2021-07-08/2021-06-16-Which-lips-are-you-kissing-first-almost-40-year-old-mom-20210616130326118484.md | ipussy/ipussy.github.io | 95d19a74e38bb54303cf18057a99a57c783e76bf | [
"Apache-2.0"
] | null | null | null | ---
title: "Which lips are you kissing first (almost 40 year old mom)"
metadate: "hide"
categories: [ Pussy ]
image: "https://preview.redd.it/u8jeydxx1k571.jpg?auto=webp&s=b51d7b14a0a0cc621871a12116045af38ec734c6"
thumb: "https://preview.redd.it/u8jeydxx1k571.jpg?width=1080&crop=smart&auto=webp&s=2cd8bcd0723ad7257187febfa45bc2271438ce5c"
visit: ""
---
Which lips are you kissing first (almost 40 year old mom)
| 41.3 | 125 | 0.77724 | eng_Latn | 0.382677 |
0cbffbe29d6f99c888d80d737a04e2722cde96d4 | 11,457 | md | Markdown | README.md | cugzhaolei/ClientServerProject | c4e8bd0ee830e4f00bb66ddd6e561ba381705ce8 | [
"MIT"
] | null | null | null | README.md | cugzhaolei/ClientServerProject | c4e8bd0ee830e4f00bb66ddd6e561ba381705ce8 | [
"MIT"
] | null | null | null | README.md | cugzhaolei/ClientServerProject | c4e8bd0ee830e4f00bb66ddd6e561ba381705ce8 | [
"MIT"
] | 2 | 2021-07-05T08:30:53.000Z | 2021-10-17T14:23:38.000Z | # Client-server (C-S) architecture server and client template
[](https://www.visualstudio.com/zh-hans/) [](https://blogs.msdn.microsoft.com/dotnet/2016/08/24/whats-new-in-csharp-7-0/) [](https://github.com/dathlin/ClientServerProject/blob/master/LICENSE)
[](https://www.visualstudio.com/zh-hans/)
## Summary
A C/S (client-server) development framework distilled from small and medium-sized projects, covering a full-platform set of templates for desktop, web, and mobile. Most one-to-many system designs contain a lot of commonly repeated functional code — the network communication mechanism, client version control, account management, password changes, announcement management, server configuration, various common windows, and so on — and most small and medium-sized systems only need simple permission management.
<br />
The framework includes four client modes: the commonly used WinForms client, a WPF client, an ASP.NET MVC site, and an Android client. In other words, you can pick either the WinForms or the WPF client as your main mode, expose some of your system's features (such as report viewing) through ASP.NET, and then provide an Android app for mobile use. If the server is hosted in the cloud, everyone can interact and exchange data anytime, anywhere, and all clients share a unified account model. The browser is convenient for people who only need to view reports and do not need a deployed client, while the desktop clients can offer more powerful features.
## Features included
<ul>
<li>A simple account management feature, including account registration, password changes, a record of client login accounts, account deactivation, and basic account information</li>
<li>A simple client login control feature: you can manually control which clients are allowed to log in, simply by opening the configuration window</li>
<li>A simple announcement management feature that allows authorized accounts to edit the announcement; an announcement change history will be supported in the future</li>
<li>A simple feedback feature that lets clients report suggestions or bugs, making it easier for developers to act on them</li>
<li>A simple pop-up message box in the lower-right corner, which you can trigger freely when the announcement changes or a new message is sent</li>
<li>A simple release-notes window that pops up automatically after a new version is installed</li>
<li>A simple role manager, where each role can be assigned any number of account names</li>
</ul>
<ul>
<li>Server-side configuration is saved in real time, to prevent data loss if the server machine suddenly shuts down or loses power</li>
<li>A complete network communication framework, including a one-to-many TCP network (the server controls the clients and can conveniently broadcast data), a synchronous network for individual data requests, and a UDP network</li>
<li>A complete automatic-upgrade deployment mechanism: after the server deploys a new version, all clients update themselves with one click</li>
<li>The client gives developers the ability to remotely update the server program, which is convenient for development work</li>
<li>Complete logging: all network and file features write logs, all client exceptions are sent to the server for recording, clients can easily browse all log information, and you can conveniently write your own entries to the log as well</li>
<li>A simple LAN chat feature for all online accounts, with a certain amount of message caching</li>
<li>A file sharing platform — most software systems need to share some special files — which allows convenient download, management, and upload</li>
<li>Avatar support for every account; multi-account synchronization will be supported in the future</li>
<li>A simple development center that lets the client monitor the server program's object memory usage in real time</li>
<li>A unified configuration center on the client that can be used to configure all kinds of server parameters.</li>
</ul>
<ul>
<li>The WPF version of the client additionally provides a theme color setting feature</li>
</ul>
## Environment
<ul>
<li>IDE: Visual Studio 2017
<ul>
<li>WinForms server: .NET Framework 3.5</li>
<li>WinForms client: .NET Framework 3.5</li>
<li>WPF client: .NET Framework 4.5</li>
<li>ASP.NET MVC server: .NET Framework 4.5</li>
</ul>
</li>
</ul>
## Getting Started
<ul>
<li>Rebuild the <b>CommonLibrary</b> project</li>
<li>Make sure the <b>ServerIp</b> property of the <b>UserClient</b> class in the <b>ClientsLibrary</b> project file <b>UserClient.cs</b> is 127.0.0.1; if it is not, change it</li>
<li>Rebuild the <b>ClientsLibrary</b> project</li>
<li>Rebuild the server template project (<b>软件系统服务端模版</b>) and run its exe</li>
<li>Pick the client you want to debug; for WinForms, for example, start the client template project (<b>软件系统客户端模版</b>)</li>
<li>Log in with the default account admin and password 123456</li>
<li>You can now try out all of the features</li>
</ul>
The Android program is in the AndroidTemplate folder; open that folder with Android Studio and change the server address it connects to. (This template is still being refined...)
## Quick Experience
If downloading the source code feels like too much trouble and you just want to try the client quickly, click <a href="https://github.com/dathlin/ClientServerProject/raw/master/QuickExperience/软件自动更新.exe">软件自动更新.exe</a> to download the updater, place it anywhere (the desktop is recommended), and double-click to install. Enter the default account and password to experience the complete, latest version of the client; whenever the server publishes a new client version, the program will also upgrade itself automatically the next time you open it. After that, the 软件自动更新.exe on the desktop can be deleted. At the moment this only showcases the WPF client.
To uninstall, simply delete the desktop shortcut and the installation directory; no files are left anywhere else.
## Secondary Development
This template makes secondary development easy. Some example systems you could build on it are listed below (these come from my own experience — additions are welcome):
<ul>
<li>Upper-computer (HMI-style) systems for on-site monitoring and control, with easy one-to-many synchronized monitoring</li>
<li>Project management systems for department staff</li>
<li>Equipment management systems for device documentation and archives</li>
<li>ERP systems for managing spare parts</li>
<li>Systems in which multiple clients need to exchange complex data with each other</li>
</ul>
One thing to pay special attention to during secondary development: the parameters in **CommonLibrary** -> **UserSystem.cs** must all be changed to match your actual project; the details are explained in that file.
## Contribute
If you like this project, you can click star or fork in the upper-right corner. If you find bugs or something that needs changing, feel free to open a pull request directly; you can also reach me through the technical support QQ group or by email — see below.
## Contact
<ul>
<li>Technical support QQ group: <strong>592132877</strong></li>
<li>Email: <strong>[email protected]</strong></li>
</ul>
## Disclaimer
<ul>
<li>Uses the <a href="http://www.newtonsoft.com/json">Json.NET component</a></li>
<li>The WPF template uses an open-source project: <a href="https://github.com/ButchersBoy/MaterialDesignInXamlToolkit">https://github.com/ButchersBoy/MaterialDesignInXamlToolkit</a></li>
<li>The icons for the file sharing feature come from <a href="http://fileicons.chromefans.org/">free file icons</a></li>
</ul>
## HslCommunication.dll [](https://www.nuget.org/packages/HslCommunication/) 
<p>The core component of this C-S project. It provides some basic utility classes and the network support for the whole project; in addition, it provides data access to Mitsubishi, Siemens, and Omron PLCs as well as Modbus.
The project introduction for this library is here:</p>
[http://www.cnblogs.com/dathlin/p/7703805.html](http://www.cnblogs.com/dathlin/p/7703805.html)
Enter the command below in the NuGet console to install it, or use the Visual Studio 2017 NuGet package manager to download the component; if you are not sure how to use NuGet, see the tutorials online.
A quick-experience demo program for the library can be downloaded here:
[HslCommunicationDemo.zip](https://github.com/dathlin/HslCommunication/raw/master/Download/HslCommunicationDemo.zip)
<pre>
<code>
Install-Package HslCommunication
</code>
</pre>
# Overall architecture of the system
#### Core architecture design

<br />
#### Login design
<ol>
<li>Status check: the server's maintenance status is checked; if the system is under maintenance, the reason why login is not possible is displayed.</li>
<li>Account check: the server fully validates the login account — whether the user name exists, whether the password is correct, whether login is allowed — and records the login IP, time, and frequency.</li>
<li>Version check: the server returns the latest version number, and the client decides, according to its needs, whether to start the updater.</li>
<li>Parameter download: after all the checks above pass, the runtime data is initialized — for example, the announcement data is sent to the client, and you can add your own data as well. It is packaged as JSON; refer to the example when parsing it on the client.</li>
<li>After all of the checks above pass, the client's main window starts. If any check fails, or the parameter download fails, login is refused and the related error is shown.</li>
</ol>

#### Permission and role model design

#### Exception handling model design

#### Account avatar design

#### Other tools design

#### A sample project design model based on this template

# The server program UI

###### The server-side features described below are all reached from the server's menus
1. Server-side version control — publish a new version number according to your actual needs. Menu: [Settings] - [Version control]

2. Server-side maintenance switch — for example, during system maintenance no client is allowed to log in. Menu: [Settings] - [Maintenance switch]

3. Broadcast messages — you can also trigger a broadcast automatically in code; refer to this manual broadcast for the code. Menu: [Settings] - [Send message]

4. Account management — the client UI is identical to this one; this is low-level JSON data management and the data can be changed freely. Menu: [Settings] - [Account management]

5. About the system — the framework version follows the version published in this GitHub repository. Menu: [About] - [About software]

6. Disconnect all — in an emergency, or before switching to maintenance, you can force-close all clients. Menu: [Settings] - [Disconnect all]
<br />
<br />
<br />
# The client program UI
###### Login window

<br />
###### Main window after login (the About menu is clicked here)

###### Edit announcement — no permission filtering is applied here. Menu: [Administrator] - [Edit announcement]

###### Log viewer — the system integrates a very practical logging feature; every network component supports logging, which makes debugging easier. Menu: [Administrator] - [Log viewer]

###### Remote update — after the system is deployed, client versions can be updated remotely; this feature should belong to developers. Menu: [Administrator] - [Remote update]

###### Password change — when an account changes its password, the old password is verified. Menu: [Settings] - [Change password]

###### Release notes — after the client is updated to a new version, the window below pops up automatically on first run; the actual changes should be written to the file. Manual menu: [About] - [Release notes]

###### Feedback — user-friendly software lets users submit suggestions and feedback on features. Menu: [About] - [Feedback]

###### Quick account registration — administrators can register accounts quickly; this window can be customized. Menu: [Administrator] - [Register account]

###### Shared files — the system supports small-scale file sharing, including upload, download, delete, and filtering. Menu: main window - [File count]


###### The system integrates a small, simple group chat; the last 200 messages are cached. Menu: main window - [Settings] - [Message board]

###### Monitor the server's object memory usage. Menu: [Administrator] - [Development center]

###### Change the account avatar. Menu: [Settings] - [My info] - click the avatar

###### My account information and personal files. Menu: [Settings] - [My info]

###### Unified system configuration window. Menu: [Administrator] - [System configuration]

##### Chinese/English bilingual support — currently only the My info and System configuration windows are adapted

<br />
# The WPF client UI
###### Login window

###### Main window (the file feature is not implemented yet)

###### Main window, dark theme

###### Theme selection

###### Shared files window

###### The remaining windows reuse the WinForms windows, so they are not repeated here.
<br />
# The web client UI
###### Login page — the background image can be customized

###### Main page — many features are still being completed

<br />
# Android client template (still being refined...)


<br />
# License:
##### Copyright (c) Richard.Hu. All rights reserved.
##### Licensed under the MIT License.
##### WeChat:工业智能软件开发项目组
| 34.823708 | 400 | 0.771843 | yue_Hant | 0.585751 |
0cc004b6ed924bb4a6f2e02361f01b8ae6476cbe | 6,473 | md | Markdown | articles/application-gateway/custom-error.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 63 | 2017-08-28T07:43:47.000Z | 2022-02-24T03:04:04.000Z | articles/application-gateway/custom-error.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 704 | 2017-08-04T09:45:07.000Z | 2021-12-03T05:49:08.000Z | articles/application-gateway/custom-error.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | 178 | 2017-07-05T10:56:47.000Z | 2022-03-18T12:25:19.000Z | ---
title: Erstellen von benutzerdefinierten Azure Application Gateway-Fehlerseiten
description: In diesem Artikel erfahren Sie, wie Sie benutzerdefinierte Application Gateway-Fehlerseiten erstellen. Sie können für eine benutzerdefinierte Fehlerseite Ihr eigenes Branding und Layout verwenden.
services: application-gateway
author: vhorne
ms.service: application-gateway
ms.topic: how-to
ms.date: 11/16/2019
ms.author: victorh
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 5bdae2055f46f6f933325c95b86d427951c6cfbc
ms.sourcegitcommit: 40866facf800a09574f97cc486b5f64fced67eb2
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 08/30/2021
ms.locfileid: "123222661"
---
# <a name="create-application-gateway-custom-error-pages"></a>Erstellen von benutzerdefinierten Application Gateway-Fehlerseiten
Mit Application Gateway können Sie benutzerdefinierte Fehlerseiten erstellen, anstatt Standardfehlerseiten anzuzeigen. Sie können für eine benutzerdefinierte Fehlerseite Ihr eigenes Branding und Layout verwenden.
Beispielsweise können Sie Ihre eigenen Wartungsseite definieren, wenn Ihre Webanwendung nicht erreichbar ist, oder eine Seite für nicht autorisierten Zugriff, wenn eine schädliche Anforderung an eine Webanwendung gesendet wird.
Benutzerdefinierte Fehlerseiten werden für die folgenden beiden Szenarien unterstützt:
- **Wartungsseite:** Diese benutzerdefinierte Fehlerseite wird anstelle der Seite „502 Ungültiges Gateway“ gesendet. Sie wird angezeigt, wenn Application Gateway kein Back-End zum Weiterleiten von Datenverkehr hat. Beispiele: Eine geplante Wartung steht an, oder ein unvorhergesehenes Problem beeinträchtigt den Zugriff auf den Back-End-Pool.
- **Seite für nicht autorisierten Zugriff:** Diese benutzerdefinierte Fehlerseite wird anstelle der Seite „403 Nicht autorisierter Zugriff“ gesendet. Sie wird angezeigt, wenn die Application Gateway-WAF schädlichen Datenverkehr erkennt und blockiert.
Wenn ein Fehler von den Back-End-Servern stammt, wird er unverändert zurück an den Aufrufer übergeben. Es wird keine benutzerdefinierte Fehlerseite angezeigt. Application Gateway kann eine benutzerdefinierte Fehlerseite anzeigen, wenn eine Anforderung das Back-End nicht erreichen kann.
Benutzerdefinierte Fehlerseiten können auf globaler Ebene und auf Listenerebene definiert werden:
- **Globale Ebene:** Die Fehlerseite gilt für den Datenverkehr für alle Webanwendungen, die auf dieser Application Gateway-Instanz bereitgestellt wurden.
- **Listenerebene:** Die Fehlerseite gilt für den Datenverkehr, der an diesem Listener empfangen wird.
- **Beide:** Die auf Listenerebene definierte benutzerdefinierte Fehlerseite hat höhere Priorität als die auf globaler Ebene festgelegte.
Zum Erstellen einer benutzerdefinierten Fehlerseite benötigen Sie Folgendes:
- Einen HTTP-Antwortstatuscode
- Den entsprechenden Standort für die Fehlerseite
- Ein öffentlich zugängliches Azure Storage-Blob für den Standort
- Einen HTM- oder HTML-Erweiterungstyp
Die Größe der Fehlerseite muss weniger als 1 MB betragen. Sie können für diese HTML-Datei auf interne oder externe Bilder/CSS verweisen. Verwenden Sie für extern referenzierte Ressourcen absolute URLs, die öffentlich zugänglich sind. Beachten Sie die HTML-Dateigröße, wenn Sie interne Bilder (Base64-codiertes Inlinebild) oder CSS verwenden. Relative Links mit Dateien am selben Blobspeicherort werden derzeit nicht unterstützt.
Nachdem Sie eine Fehlerseite angeben, lädt Application Gateway sie vom Speicherort des Speicherblobs herunter und speichert sie im lokalen Application Gateway-Cache. Diese HTML-Seite wird dann vom Anwendungsgateway verarbeitet, während die extern referenzierten Ressourcen direkt vom Client abgerufen werden. Um eine vorhandene benutzerdefinierte Fehlerseite zu ändern, müssen Sie in der Application Gateway-Konfiguration auf einen anderen Blobspeicherort verweisen. Application Gateway überprüft nicht regelmäßig den Blobspeicherort zum Abrufen neuer Versionen.
## <a name="portal-configuration"></a>Portalkonfiguration
1. Navigieren Sie im Portal zu Application Gateway, und wählen Sie eine Application Gateway-Instanz aus.

2. Klicken Sie auf **Listener**, und navigieren Sie zu einem Listener, für den Sie eine Fehlerseite anzeigen möchten.

3. Konfigurieren Sie eine benutzerdefinierte Fehlerseite für einen 403-WAF-Fehler oder eine Seite „502 Wartung“ auf Listenerebene.
> [!NOTE]
> Das Erstellen von benutzerdefinierten Fehlerseiten auf globaler Ebene über das Azure-Portal wird derzeit nicht unterstützt.
4. Geben Sie eine öffentlich zugängliche Blob-URL für einen bestimmten Fehlerstatuscode an, und klicken Sie auf **Speichern**. Application Gateway ist jetzt mit der benutzerdefinierten Fehlerseite konfiguriert.

## <a name="azure-powershell-configuration"></a>Azure PowerShell-Konfiguration
Sie können mithilfe von Azure PowerShell eine benutzerdefinierte Fehlerseite konfigurieren. Beispielsweise eine globale benutzerdefinierte Fehlerseite:
```powershell
$appgw = Get-AzApplicationGateway -Name <app-gateway-name> -ResourceGroupName <resource-group-name>
$updatedgateway = Add-AzApplicationGatewayCustomError -ApplicationGateway $appgw -StatusCode HttpStatus502 -CustomErrorPageUrl "http://<website-url>"
```
Oder eine Fehlerseite auf Listenerebene:
```powershell
$appgw = Get-AzApplicationGateway -Name <app-gateway-name> -ResourceGroupName <resource-group-name>
$listener01 = Get-AzApplicationGatewayHttpListener -Name <listener-name> -ApplicationGateway $appgw
$updatedlistener = Add-AzApplicationGatewayHttpListenerCustomError -HttpListener $listener01 -StatusCode HttpStatus502 -CustomErrorPageUrl "http://<website-url>"
```
Weitere Informationen finden Sie unter [Add-AzApplicationGatewayCustomError](/powershell/module/az.network/add-azapplicationgatewaycustomerror) und [Add-AzApplicationGatewayHttpListenerCustomError](/powershell/module/az.network/add-azapplicationgatewayhttplistenercustomerror).
## <a name="next-steps"></a>Nächste Schritte
Weitere Informationen zur Application Gateway-Diagnose finden Sie unter [Back-End-Integrität, Diagnoseprotokolle und Metriken für Application Gateway](application-gateway-diagnostics.md).
| 71.922222 | 562 | 0.830681 | deu_Latn | 0.993434 |
0cc0fa4ea234d7acaeaac7b013b15b5de072c854 | 12,081 | md | Markdown | readme.md | engineer-man/piston
c
| 20e71f617bb05093ad035e663c5d129d6d7705ec
| [
"MIT"
] | 1,320 | 2018-09-20T23:31:32.000Z | 2022-03-31T13:14:06.000Z | readme.md | engineer-man/piston
c
| 20e71f617bb05093ad035e663c5d129d6d7705ec
| [
"MIT"
] | 284 | 2018-10-17T18:29:26.000Z | 2022-03-30T07:00:04.000Z | readme.md | engineer-man/piston
c
| 20e71f617bb05093ad035e663c5d129d6d7705ec
| [
"MIT"
] | 204 | 2018-10-09T21:11:31.000Z | 2022-03-30T00:35:02.000Z | <h1 align="center">
<a href="https://github.com/engineer-man/piston">
<img src="var/docs/images/piston.svg" valign="middle" width="58" height="58" alt="engineer-man piston" />
</a>
<span valign="middle">
Piston
</span>
</h1>
<h3 align="center">A high performance general purpose code execution engine.</h3>
<br>
<p align="center">
<a href="https://github.com/engineer-man/piston/commits/master">
<img src="https://img.shields.io/github/last-commit/engineer-man/piston.svg?style=for-the-badge&logo=github&logoColor=white"
alt="GitHub last commit">
<a href="https://github.com/engineer-man/piston/issues">
<img src="https://img.shields.io/github/issues/engineer-man/piston.svg?style=for-the-badge&logo=github&logoColor=white"
alt="GitHub issues">
<a href="https://github.com/engineer-man/piston/pulls">
<img src="https://img.shields.io/github/issues-pr-raw/engineer-man/piston.svg?style=for-the-badge&logo=github&logoColor=white"
alt="GitHub pull requests">
</p>
---
<h4 align="center">
<a href="#About">About</a> •
<a href="#Public-API">Public API</a> •
<a href="#Getting-Started">Getting Started</a> •
<a href="#Usage">Usage</a> •
<a href="#Supported-Languages">Supported Languages</a> •
<a href="#Principle-of-Operation">Principles</a> •
<a href="#Security">Security</a> •
<a href="#License">License</a> •
<a href="https://piston.readthedocs.io">Documentation</a>
</h4>
---
<br>
# About
<h4>
Piston is a high performance general purpose code execution engine. It excels at running untrusted and
    possibly malicious code without fear of any harmful effects.
</h4>
<br>
It's used in numerous places including:
- [EMKC Challenges](https://emkc.org/challenges)
- [EMKC Weekly Contests](https://emkc.org/contests)
- [Engineer Man Discord Server](https://discord.gg/engineerman)
- Web IDEs
- 200+ direct integrations
<br>
### Official Extensions
The following are approved and endorsed extensions/utilities to the core Piston offering.
- [I Run Code](https://github.com/engineer-man/piston-bot), a Discord bot used in 4100+ servers to handle arbitrary code evaluation in Discord. To get this bot in your own server, go here: https://emkc.org/run.
- [Piston CLI](https://github.com/Shivansh-007/piston-cli), a universal shell supporting code highlighting, files, and interpretation without the need to download a language.
- [Node Piston Client](https://github.com/dthree/node-piston), a Node.js wrapper for accessing the Piston API.
- [Piston4J](https://github.com/the-codeboy/Piston4J), a Java wrapper for accessing the Piston API.
- [Pyston](https://github.com/ffaanngg/pyston), a Python wrapper for accessing the Piston API.
- [Go-Piston](https://github.com/milindmadhukar/go-piston), a Golang wrapper for accessing the Piston API.
- [piston_rs](https://github.com/Jonxslays/piston_rs), a Rust wrapper for accessing the Piston API.
<br>
# Public API
- Requires no installation and you can use it immediately.
- Reference the Runtimes/Execute sections below to learn about the request and response formats.
<br>
When using the public Piston API, use the following two URLs:
```
GET https://emkc.org/api/v2/piston/runtimes
POST https://emkc.org/api/v2/piston/execute
```
> Important Note: The Piston API is rate limited to 5 requests per second. If you have a need for more requests than that
> and it's for a good cause, please reach out to me (EngineerMan#0001) on [Discord](https://discord.gg/engineerman)
> so we can discuss potentially getting you an unlimited key. What is and isn't a good cause is up to me, but, in general
> if your project is a) open source, b) helping people at no cost to them, and c) not likely to use tons of resources
> thereby impairing another's ability to enjoy Piston, you'll likely be granted a key.
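A quick way to poke at the public instance is plain `curl`; the call below lists every runtime the instance can run, which is also where you find valid `version` values for execute requests:

```sh
# Ask the public Piston instance which languages/versions it can run
curl -s https://emkc.org/api/v2/piston/runtimes
```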
<br>
# Getting Started
## All In One
### Host System Package Dependencies
- Docker
- Docker Compose
- Node JS (>= 13, preferably >= 15)
### After system dependencies are installed, clone this repository:
```sh
# clone and enter repo
git clone https://github.com/engineer-man/piston
```
### Installation
```sh
# Start the API container
docker-compose up -d api
# Install all the dependencies for the cli
cd cli && npm i && cd -
```
The API will now be online with no language runtimes installed. To install runtimes, [use the CLI](#cli).
## Just Piston (no CLI)
### Host System Package Dependencies
- Docker
### Installation
```sh
docker run \
-v $PWD:'/piston' \
--tmpfs /piston/jobs \
-dit \
-p 2000:2000 \
--name piston_api \
ghcr.io/engineer-man/piston
```
## Piston for testing packages locally
### Host System Package Dependencies
- Same as [All In One](#All-In-One)
### Installation
```sh
# Build the Docker containers
./piston start
# For more help
./piston help
```
<br>
# Usage
### CLI
The CLI is the main tool used for installing packages within piston, but also supports running code.
You can execute the cli with `cli/index.js`.
```sh
# List all available packages
cli/index.js ppman list
# Install latest python
cli/index.js ppman install python
# Install specific version of python
cli/index.js ppman install python=3.9.4
# Run a python script using the latest version
echo 'print("Hello world!")' > test.py
cli/index.js run python test.py
# Run a python script using a specific version
echo 'print("Hello world!")' > test.py
cli/index.js run python test.py -l 3.9.4
cli/index.js run python test.py -l 3.x
cli/index.js run python test.py -l 3
```
If you are operating on a remote machine, add the `-u` flag like so:
```sh
cli/index.js -u http://piston.server:2000 ppman list
```
### API
The container exposes an API on port 2000 by default.
This is used by the CLI to carry out running jobs and package management.
#### Runtimes Endpoint
`GET /api/v2/runtimes`
This endpoint will return the supported languages along with the current version and aliases. To execute
code for a particular language using the `/api/v2/execute` endpoint, either the name or one of the aliases must
be provided, along with the version.
Multiple versions of the same language may be present at the same time, and may be selected when running a job.
```json
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"language": "bash",
"version": "5.1.0",
"aliases": [
"sh"
]
},
{
"language": "brainfuck",
"version": "2.7.3",
"aliases": [
"bf"
]
},
...
]
```
#### Execute Endpoint
`POST /api/v2/execute`
This endpoint requests execution of some arbitrary code.
- `language` (**required**) The language to use for execution, must be a string and must be installed.
- `version` (**required**) The version of the language to use for execution, must be a string containing a SemVer selector for the version or the specific version number to use.
- `files` (**required**) An array of files containing code or other data that should be used for execution. The first file in this array is considered the main file.
- `files[].name` (_optional_) The name of the file to upload, must be a string containing no path or left out.
- `files[].content` (**required**) The content of the files to upload, must be a string containing text to write.
- `files[].encoding` (_optional_) The encoding scheme used for the file content. One of `base64`, `hex` or `utf8`. Defaults to `utf8`.
- `stdin` (_optional_) The text to pass as stdin to the program. Must be a string or left out. Defaults to blank string.
- `args` (_optional_) The arguments to pass to the program. Must be an array or left out. Defaults to `[]`.
- `compile_timeout` (_optional_) The maximum time allowed for the compile stage to finish before bailing out in milliseconds. Must be a number or left out. Defaults to `10000` (10 seconds).
- `run_timeout` (_optional_) The maximum time allowed for the run stage to finish before bailing out in milliseconds. Must be a number or left out. Defaults to `3000` (3 seconds).
- `compile_memory_limit` (_optional_) The maximum amount of memory the compile stage is allowed to use in bytes. Must be a number or left out. Defaults to `-1` (no limit)
- `run_memory_limit` (_optional_) The maximum amount of memory the run stage is allowed to use in bytes. Must be a number or left out. Defaults to `-1` (no limit)
```json
{
"language": "js",
"version": "15.10.0",
"files": [
{
"name": "my_cool_code.js",
"content": "console.log(process.argv)"
}
],
"stdin": "",
"args": ["1", "2", "3"],
"compile_timeout": 10000,
"run_timeout": 3000,
"compile_memory_limit": -1,
"run_memory_limit": -1
}
```
A typical response upon successful execution will contain 1 or 2 keys `run` and `compile`.
`compile` will only be present if the language requested requires a compile stage.
Each of these keys has an identical structure, containing both a `stdout` and `stderr` key, which is a string containing the text outputted during the stage into each buffer.
It also contains the `code` and `signal` which was returned from each process.
```json
HTTP/1.1 200 OK
Content-Type: application/json
{
"language": "js",
"version": "15.10.0",
"run": {
"stdout": "[\n '/piston/packages/node/15.10.0/bin/node',\n '/piston/jobs/9501b09d-0105-496b-b61a-e5148cf66384/my_cool_code.js',\n '1',\n '2',\n '3'\n]\n",
"stderr": "",
"output": "[\n '/piston/packages/node/15.10.0/bin/node',\n '/piston/jobs/9501b09d-0105-496b-b61a-e5148cf66384/my_cool_code.js',\n '1',\n '2',\n '3'\n]\n",
"code": 0,
"signal": null
}
}
```
If a problem exists with the request, a `400` status code is returned and the reason in the `message` key.
```json
HTTP/1.1 400 Bad Request
Content-Type: application/json
{
"message": "html-5.0.0 runtime is unknown"
}
```
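For reference, the documented request above can be sent to a self-hosted instance (default port 2000, see Getting Started) with nothing more than `curl`; swap in whatever language/version pair your `/api/v2/runtimes` endpoint reports:

```sh
curl -s -X POST http://localhost:2000/api/v2/execute \
    -H 'Content-Type: application/json' \
    -d '{
        "language": "js",
        "version": "15.10.0",
        "files": [
            { "name": "my_cool_code.js", "content": "console.log(process.argv)" }
        ],
        "args": ["1", "2", "3"]
    }'
```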
<br>
# Supported Languages
`awk`,
`bash`,
`befunge93`,
`brachylog`,
`brainfuck`,
`c`,
`c++`,
`cjam`,
`clojure`,
`cobol`,
`coffeescript`,
`cow`,
`crystal`,
`csharp`,
`csharp.net`,
`d`,
`dart`,
`dash`,
`dragon`,
`elixir`,
`emacs`,
`erlang`,
`file`,
`forte`,
`fortran`,
`freebasic`,
`fsharp.net`,
`fsi`,
`go`,
`golfscript`,
`groovy`,
`haskell`,
`husk`,
`iverilog`,
`japt`,
`java`,
`javascript`,
`jelly`,
`julia`,
`kotlin`,
`lisp`,
`llvm_ir`,
`lolcode`,
`lua`,
`nasm`,
`nasm64`,
`nim`,
`ocaml`,
`octave`,
`osabie`,
`paradoc`,
`pascal`,
`perl`,
`php`,
`ponylang`,
`powershell`,
`prolog`,
`pure`,
`pyth`,
`python`,
`python2`,
`racket`,
`raku`,
`retina`,
`rockstar`,
`rscript`,
`ruby`,
`rust`,
`scala`,
`sqlite3`,
`swift`,
`typescript`,
`basic`,
`basic.net`,
`vlang`,
`vyxal`,
`yeethon`,
`zig`,
<br>
# Principle of Operation
Piston uses Docker as the primary mechanism for sandboxing. There is an API within the container written in Node
which takes in execution requests and executes them within the container safely.
High level, the API writes any source code to a temporary directory in `/piston/jobs`.
The source file is either run, or compiled and then run (in the case of languages like c, c++, c#, go, etc.).
<br>
# Security
Docker provides a great deal of security out of the box in that it's separate from the system.
Piston takes additional steps to make it resistant to
various privilege escalation, denial-of-service, and resource saturation threats. These steps include:
- Disabling outgoing network interaction
- Capping max processes at 256 by default (resists `:(){ :|: &}:;`, `while True: os.fork()`, etc.)
- Capping max files at 2048 (resists various file based attacks)
- Cleaning up all temp space after each execution (resists out of drive space attacks)
- Running as a variety of unprivileged users
- Capping runtime execution at 3 seconds
- Capping stdout to 65536 characters (resists yes/no bombs and runaway output)
- SIGKILLing misbehaving code
<br>
# License
Piston is licensed under the MIT license.
| 28.764286 | 212 | 0.691582 | eng_Latn | 0.958963 |
0cc150a555df66aa96ac6b256bdca7d09a36ddcf | 619 | md | Markdown | _posts/atom-notes.md | soldier828/soldier828.github.io | 987c12b1aada920f76ec929adb06612de82406fa | [
"Apache-2.0"
] | null | null | null | _posts/atom-notes.md | soldier828/soldier828.github.io | 987c12b1aada920f76ec929adb06612de82406fa | [
"Apache-2.0"
] | null | null | null | _posts/atom-notes.md | soldier828/soldier828.github.io | 987c12b1aada920f76ec929adb06612de82406fa | [
"Apache-2.0"
] | null | null | null | - You can also add more than one directory to your current Atom window, by choosing "File >> Add Project Folder…" from the menu bar or hitting cmd-shift-O.
- You can also hide and show it with cmd-\
- ctrl-0 will focus it. When the Tree view has focus you can press a, m, or delete to add, move or delete files and folders.
- If you hit either cmd-T or cmd-P, the Fuzzy Finder dialog will pop up. This will let you quickly search for any file in any directory your project by typing parts of the path.
- You can also search through only the files currently opened (rather than every file in your project) with cmd-B.
| 103.166667 | 177 | 0.754443 | eng_Latn | 0.999955 |
0cc197e473dd8ef4778edd75375c9cad14d64063 | 1,203 | md | Markdown | docs/framework/wpf/advanced/fonts-wpf.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/fonts-wpf.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/fonts-wpf.md | Jonatandb/docs.es-es | c18663ce8a09607fe195571492cad602bc2f01bb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Fuentes
ms.date: 03/30/2017
f1_keywords:
- AutoGeneratedOrientationPage
helpviewer_keywords:
- fonts [WPF]
ms.assetid: 6c766a95-ad03-475e-a36f-2243e9495941
ms.openlocfilehash: bf1101c5a32c05230aec92f61ab74c2e4d5037fc
ms.sourcegitcommit: de17a7a0a37042f0d4406f5ae5393531caeb25ba
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 01/24/2020
ms.locfileid: "76737909"
---
# <a name="fonts-wpf"></a>Fuentes (WPF)
[!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] incluye compatibilidad con la presentación enriquecida de texto con fuentes OpenType. En el Windows SDK se incluye un ejemplo de un paquete de fuentes OpenType.
## <a name="in-this-section"></a>En esta sección
[Características de las fuentes OpenType](opentype-font-features.md)
[Empaquetar fuentes con aplicaciones](packaging-fonts-with-applications.md)
[Paquete de fuentes OpenType de ejemplo](sample-opentype-font-pack.md)
[Temas "Cómo..."](fonts-how-to-topics.md)
## <a name="see-also"></a>Consulte también
- <xref:System.Windows.FontStyle>
- <xref:System.Windows.SystemFonts>
- [Documentos en WPF](documents-in-wpf.md)
- [Tipografía en WPF](typography-in-wpf.md)
| 38.806452 | 242 | 0.764755 | spa_Latn | 0.470302 |
0cc19972547e4fb352e409bec64d7b35a321c412 | 10,469 | md | Markdown | _posts/5_trauma/2018-02-13-el-sindrome-doloroso-del-trocanter-mayor-gtps.md | nogalesdev/nogalesdev.github.io | ea5ec7ee9e21b9bff6af0d67e42983424e547cce | [
"MIT"
] | null | null | null | _posts/5_trauma/2018-02-13-el-sindrome-doloroso-del-trocanter-mayor-gtps.md | nogalesdev/nogalesdev.github.io | ea5ec7ee9e21b9bff6af0d67e42983424e547cce | [
"MIT"
] | null | null | null | _posts/5_trauma/2018-02-13-el-sindrome-doloroso-del-trocanter-mayor-gtps.md | nogalesdev/nogalesdev.github.io | ea5ec7ee9e21b9bff6af0d67e42983424e547cce | [
"MIT"
] | null | null | null | ---
title: El sindrome doloroso del TROCÁNTER MAYOR (GTPS)
teaser: El dolor en la región lateral de la cadera ha sido diagnosticado durante mucho tiempo erróneamente como bursitis trocánteres, siendo usado por primera vez en 1023 por Stegerman, solo el 8% de las personas con este síndrome tienen bursitis y el 20% presentan engrosamiento de la bursa. Suele presentarse entre los 40, 50 y 60 años de la vida y es mas frecuente en mujeres que en hombres.
date: 2018-02-13T20:57:39+01:00
author: nogales
layout: page-fullwidth
image: /wp-content/uploads/2018/02/P1100030.jpg
categories:
- Traumatologia
tags:
- bursitis
- greather trochanteric pain syndrome
- Hip pain
- Low back pain
- sciatic nerve compression
- Sport Medicine
- Medicina deportiva
- Dolor en cadera
- ARTROPLASTIA TOTAL DE CADERA
---
<div class="row">
<div class="medium-4 medium-push-8 columns" markdown="1">
<div class="panel radius" markdown="1">
**Tabla de contenidos**
{: #toc }
* TOC
{:toc}
</div>
</div><!-- /.medium-4.columns -->
<div class="medium-8 medium-pull-4 columns" markdown="1">
## Introducción
El dolor en la región lateral de la cadera ha sido diagnosticado durante mucho tiempo erróneamente como bursitis trocánteres, siendo usado por primera vez en 1023 por Stegerman, solo el 8% de las personas con este síndrome tienen bursitis y el 20% presentan engrosamiento de la bursa.
Suele presentarse entre los 40, 50 y 60 años de la vida y es mas frecuente en mujeres que en hombres.
Es un síntoma frecuente, y se asocia a patología músculo-tendinosa, sinovial y neurológica.
Patología músculo-tendinosa: debido a lesión aguda o crónica de los tendones glúteo medio y menor, secundaria a sobrecarga, a patología artrósica de la cadera o a traumatismos. Hay muchos autores que han comparado al glúteo medio y menor con el manguito rotador del hombro.
Patología sinovial: localizado en las bursas a nivel del trocánter mayor, ya sean la bursa ilio-glúteas o la bursa glúteo-femoral.
Patología neurológica: debido a radiculopatía Lumbar L4-L5 o a patología de compresión del nervio ciático en la salida pélvica en relación con la fosa glútea profunda que provoca en ambos casos un dolor irradiado a la región lateral de la cadera, pero que también se irradia hacia la región posterior de la pierna.
El 10-25% de la población general sufre este síndrome, pudiendo afectar a gente sedentaria y a deportistas, principalmente corredores.
## Sintomatología
El dolor al sentarse con las piernas cruzadas.
Dolor al tumbarse sobre el lado afectado, especialmente nocturno.
Dolor en apoyo monopodal prolongado.
Disminución de la tolerancia al ejercicio físico.
Dolor al estiramiento de la musculatura glútea.
Dolor en la abducción de cadera contra resistencia.
Dolor en la marcha.
Pérdida de movilidad de la cadera y de la fuerza de abducción de la cadera cuando hay desgarro de los tendones glúteos.
## Exploración física
Hipertrofia del tensor de la fascia lata y atrofia de los músculos glúteo medio y menor.
Déficit de fuerza de abducción de la cadera: test de “abducción mas Rotación interna” en el caso del glúteo medio y test de “abducción mas Rotación externa” en el caso del glúteo menor.
Artrosis de cadera y dolor lumbar bajo en el 60% de los pacientes con este síndrome.
Las tendinitis calcificantes en trocánter mayor suelen estar en torno al 13-40% de los casos, lo que indica una sobrecarga de los tendones abductores de la cadera.
Hay que valorar la presencia de compromiso del nervio ciático en la fosa glútea profunda, con el test de “elevación mas aducción pasiva del miembro inferior extendido” que provoca irradiación hasta la región posterior de la rodilla y pierna. Este test se diferencia del test de elevación de miembros inferiores (straight leg raise) ya que este ultimo se realiza sin aducción ni abducción de la cadera. Otro test es el de flexión de la cadera con la rodilla en extension y aplicar rotación interna y adducción, este test es también para la valoración del atrapamiento del nervio ciático en la fosa glútea profunda.
Se deba valorar la movilidad de la cadera así como descartar otras patologías como el impingement femoro-acetabular (FAI), inestabilidad articular, síndrome de compresión del trocánter mayor o de la región ilio-isquiática, etc, mediante una exploración sistemática.
## Pruebas complementarias
Aunque las pruebas complementarias no dan muchos resultados, lo que si parece es que descarta una cantidad de patologías que pudieran ser causa del problema.
- **La radiografía simple:** para descartar patología de artrosis coxo-femoral, lesiones tumores en huesos de la pelvis, imágenes compatibles con impingement fémoro-acetabular o alteraciones en la morfología articular como displasias de cadera etc.
- **La ecografía**: Muy útil para valorar la región trocanterea y la fosa glútea profunda, donde se pueden detectar roturas de los tendones glúteos, bursitis, entesopatías y cualquier aumento de actividad inflamatoria con el efecto doppler.
- **La electroneurografía**: en los casos de compresión del nervio ciático se han descrito estudios electroneurográficos en reposo y tras ejercicios que demuestran la compresión del nervio en la fosa glútea profunda, lo que indicaría una patología neurológica por atrapamiento o compresión en la zona.
- **La Resonancia magnética**: muy útil para descartar múltiples patologías, desde la artrosis, lesiones del labrum cotioideo, necrosis avascular de la cabeza femoral, entesopatía de los glúteos, rotura parcial o total de los tendones glúteos, bursitis, etc…
## Tratamiento
Una vez llegado al diagnóstico preciso, se trata de realizar la solución al problema de la forma mas eficiente posible.
La batería de soluciones es importante, y va desde las medidas conservadoras “de la abuela”, hasta el tratamiento recuperador, el tratamiento rehabilitador, las técnicas de infiltración ecoguiadas o no, a las técnicas quirúrgicas que pueden ser endoscópicas o por cirugía abierta.
Así en el **trocánter mayor** podemos tratarlo con medidas físicas como estiramientos y ejercicios de tonificación muscular, con tratamiento rehabilitador como electroterapia, masaje transverso profundo, ondas de choque, punción seca etc. Las infiltraciones en el trocánter mayor con corticosteroides y anestésicos locales siguen realizándose en la actualidad con un 50% de efectividad; en nuestro caso realizamos infiltraciones de anestésico local y luego aplicamos plasma autologo rico en plaquetas (PRGF endoretR) que desde 2012 lo usamos con control ecográfico que nos da una mayor exactitud en la colocación del medicamento en la zona afectada y por tanto mejores resultados.
En nuestra experiencia hay una mejoría del 85% tras dicho tratamiento. Habitualmente realizamos tres infiltraciones con un intervalo de entre 2 y cuatro semanas de cada infiltración.
Cuando hay **rotura parcial de los tendones glúteo medio o menor** comenzamos con las infiltraciones de PRP y control entre 3 y 6 meses de las mismas: si el control ecográfico y clínico son buenos damos de alta. Si el dolor desaparece y la ecografía evidencia rotura parcial del tendón glúteo medio o menor se mantiene en observación por un año. Si las molestias y le debilidad no desaparecen y la ecografía evidencia rotura y retracción del tendón le recomendamos cirugía de re anclaje del tendón del glúteo medio.
Si existiera un compromiso isquio-femoral lo ideal es descomprimir el espacio con un raspado del borde posterior del trocánter mayor para aumentar dicho espacio.
En el caso de estar afectado _el espacio glúteo profundo_, que afecta al nervio ciático se debería saber exactamente la causa de la compresión del nervio, que va desde la musculatura rotadora externa a compromisos de espacio como el fémoro-isquiático antes comentado etc.. en estos casos se pueden tratar con estiramientos específicos, ejercicios de tonificación de la musculatura pelvi-trocanterea (muy importante en la estabilidad pélvica) y suelo pélvico.
Si persiste el compromiso nervioso a pesar de esto, se debe realizar liberación del nervio ciático mayor desde la salida pélvica hasta por debajo del trocánter menor, para resolver este problema. En mi caso ha sido muy poco frecuente, pero sin embargo en las cirugías de reemplazo articular he tenido varios casos de compresión del nervio ciático mayor que he liberado, ya que mi abordaje es siempre posterior, y por tanto me permites inspeccionar de forma habitual la fosa glútea profunda, encontrando desde variantes anatómicos del ciático bifurcado y trifurcado pasando a través del piramidal o por detrás del mismo, lo que nos ha permitido resolver el problema y que el dolor posterior irradiado desaparezca. Por ello recomiendo que antes de realizar la cirugía de reemplazo articular se realice una valoración del espacio glúteo profundo por si el abordaje antero-lateral no sea el idóneo para dicha cirugía.
## Conclusion
- Causas y prevalencia
- El síndrome doloroso del trocánter mayor está frecuente en el 20% de la población entre 40 y 60 años.
- Es debido a múltiples patologías que hay que descartar o confirmar.
- La bursitis trocanterea es solamente el 8% de la patología. La mayor incidencia está en los tendones del glúteo medio y menor en trocánter.
- **El diagnóstico** es eminentemente clínico y se puede apoyar en determinadas pruebas complementarias como la rx simple de caderas, la ecografía y en último plano la resonancia magnética o la electroneurografía.
- **El tratamiento** está basado en un correcto diagnóstico y va desde el tratamiento conservador, a los tratamientos cruentos como las infiltraciones, la cirugía endoscópica o la cirugía abierta.
- **Los resultados** suelen ser muy positivos y no suelen dejar secuelas.
Si esta patología no se diagnostica puede estar mucho tiempo en el individuo sin recuperarse de la misma con la mala calidad de vida que conlleva el mismo.
Por mi parte creo que la aportación que realizo a esta patología es el diagnóstico ecográfico y el tratamiento con infiltraciones ecoguiadas de plasma autólogo rico en plaquetas que llevo realizando desde el 2007 hasta la actualidad, si bien desde 2011 las realizo con control ecográfico.
## Versión en PDF
[SINDROME DOLOROSO DEL TROCANTER MAYOR](http://www.nogales.eu/wp-content/uploads/2018/02/SINDROME-DOLOROSO-DEL-TROCANTER-MAYOR.pdf)
</div><!-- /.medium-8.columns -->
</div><!-- /.row --> | 103.653465 | 913 | 0.800076 | spa_Latn | 0.996542 |
0cc21d7612fcec2ff4f60105a56a700549d614a2 | 1,607 | md | Markdown | docs/visual-basic/language-reference/error-messages/nested-function-does-not-have-a-signature-that-is-compatible-with-delegate.md | turibbio/docs.it-it | 2212390575baa937d6ecea44d8a02e045bd9427c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/error-messages/nested-function-does-not-have-a-signature-that-is-compatible-with-delegate.md | turibbio/docs.it-it | 2212390575baa937d6ecea44d8a02e045bd9427c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/error-messages/nested-function-does-not-have-a-signature-that-is-compatible-with-delegate.md | turibbio/docs.it-it | 2212390575baa937d6ecea44d8a02e045bd9427c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: La funzione annidata non dispone di una firma compatibile con il delegato '<delegatename>'
ms.date: 07/20/2015
f1_keywords:
- vbc36532
- bc36532
helpviewer_keywords:
- BC36532
ms.assetid: 493f292c-d81e-40ef-8b47-61f020571829
ms.openlocfilehash: 28d07f01c0fd467cb68d73749988273eee95edf4
ms.sourcegitcommit: f8c270376ed905f6a8896ce0fe25b4f4b38ff498
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 06/04/2020
ms.locfileid: "84409426"
---
# <a name="nested-function-does-not-have-a-signature-that-is-compatible-with-delegate-delegatename"></a>La funzione annidata non dispone di una firma compatibile con il delegato '\<delegatename>'
Un'espressione lambda è stata assegnata a un delegato con una firma incompatibile. Nel codice seguente, ad esempio, il delegato `Del` ha due parametri Integer.
```vb
Delegate Function Del(ByVal p As Integer, ByVal q As Integer) As Integer
```
L'errore viene generato se un'espressione lambda con un argomento viene dichiarata come tipo `Del` :
```vb
' Neither of these is valid.
' Dim lambda1 As Del = Function(n As Integer) n + 1
' Dim lambda2 As Del = Function(n) n + 1
```
**ID errore:** BC36532
## <a name="to-correct-this-error"></a>Per correggere l'errore
Modificare la definizione del delegato o l'espressione lambda assegnata in modo che le firme siano compatibili.
## <a name="see-also"></a>Vedere anche
- [Conversione di tipo relaxed del delegato](../../programming-guide/language-features/delegates/relaxed-delegate-conversion.md)
- [Espressioni lambda](../../programming-guide/language-features/procedures/lambda-expressions.md)
| 37.372093 | 195 | 0.774736 | ita_Latn | 0.797411 |
0cc236ac9b5032c4ca93ef5fa340bde7b721037f | 1,703 | md | Markdown | docs/README.md | benoitbzl/ngx-charts | bec3f92c57341da5793f943bd9e5ca908664d0b0 | [
"MIT"
] | 1 | 2018-03-26T03:05:27.000Z | 2018-03-26T03:05:27.000Z | docs/README.md | benoitbzl/ngx-charts | bec3f92c57341da5793f943bd9e5ca908664d0b0 | [
"MIT"
] | null | null | null | docs/README.md | benoitbzl/ngx-charts | bec3f92c57341da5793f943bd9e5ca908664d0b0 | [
"MIT"
] | 1 | 2021-05-21T04:44:50.000Z | 2021-05-21T04:44:50.000Z | # ngx-charts
Declarative Charting Framework for Angular2 and beyond!
ngx-charts is unique because we don't merely wrap d3, nor any other chart engine for that matter. It uses Angular to render and animate the SVG elements with all of its binding and speed goodness, and uses d3 for the excellent math functions, scales, axes and shape generators, etc. By having Angular do all of the rendering it opens us up to endless possibilities the Angular platform provides such as AoT, Universal, etc.
Data visualization is a science but that doesn't mean it has to be ugly. One of the big efforts we've made while creating this project is to make the charts aesthetically pleasing. The styles are also completely customizable through CSS, so you can override them as you please.
Also, constructing custom charts is possible by leveraging the various ngx-charts components that are exposed through the ngx-charts module.
[Click here](https://swimlane.github.io/ngx-charts/) to checkout an interactive demo or for more information check out the chart examples on the left.
## What People Are Saying...
>[“looks very cool!”](https://twitter.com/bradlygreen/status/774386597810712577)
Brad Green, Engineering Director at Google for Angular
>[“this is dope”](https://twitter.com/robwormald/status/774337985701478401)
Rob Wormald, Developer Advocate for Angular
## In the news
### AngularAir Esp 91
{% youtube %}https://www.youtube.com/watch?v=FlpxvsJsIpk{% endyoutube %}
The project was featured on [AngularAir](https://angularair.com/) where [@amcdnl](https://github.com/amcdnl)
and [@marjan-georgiev](https://github.com/marjan-georgiev) spoke about the project, challenges and what's to come.
0cc2c183387518998210bd0ea4a782821d6bb0ff | 1,871 | md | Markdown | challenges/sorts/merge/blog.md | austin-wood-401-advanced-javascript/ata-structures-and-algorithms | 0c131dc98128726013a6b792e21fed9e0b47bb78 | [
"MIT"
] | null | null | null | challenges/sorts/merge/blog.md | austin-wood-401-advanced-javascript/ata-structures-and-algorithms | 0c131dc98128726013a6b792e21fed9e0b47bb78 | [
"MIT"
] | 1 | 2021-05-10T03:17:03.000Z | 2021-05-10T03:17:03.000Z | challenges/sorts/merge/blog.md | austin-wood-401-advanced-javascript/ata-structures-and-algorithms | 0c131dc98128726013a6b792e21fed9e0b47bb78 | [
"MIT"
] | null | null | null |
## Blog Notes: Merge Sort
Merge sort is the first of the "good" algos we get for sorting.
I would rank this as medium difficulty and medium speed.
## Learning Objectives
* Implement merge sort
* Write unit tests
* Produce appropriate Documentation
[merge sort visual](https://www.toptal.com/developers/sorting-algorithms/merge-sort)
This algorithm is a divide and conquer style sorting approach.
First the algo checks if the array.length is > 1
Second the algo splits the array into two halves
Then it recurses on each half:
left = Mergesort(left)
right = Mergesort(right)
and then as the recursive calls come off the call stack the sorted halves are merged back together
pseudocode
```
ALGORITHM Mergesort(arr)
DECLARE n <-- arr.length
if n > 1
DECLARE mid <-- n/2
DECLARE left <-- arr[0...mid]
DECLARE right <-- arr[mid...n]
// sort the left side
Mergesort(left)
// sort the right side
Mergesort(right)
// merge the sorted left and right sides together
Merge(left, right, arr)
ALGORITHM Merge(left, right, arr)
DECLARE i <-- 0
DECLARE j <-- 0
DECLARE k <-- 0
while i < left.length && j < right.length
if left[i] <= right[j]
arr[k] <-- left[i]
i <-- i + 1
else
arr[k] <-- right[j]
j <-- j + 1
k <-- k + 1
if i = left.length
set remaining entries in arr to remaining values in right
else
set remaining entries in arr to remaining values in left
```
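The pseudocode translates almost line-for-line into JavaScript. Here is one possible sketch (names like `mergeSort` and `merge` are my own choice, not dictated by any starter code):

```javascript
'use strict';

// Sorts arr in place and also returns it, following the pseudocode above.
function mergeSort(arr) {
  const n = arr.length;
  if (n > 1) {
    const mid = Math.floor(n / 2);
    const left = arr.slice(0, mid);   // arr[0...mid]
    const right = arr.slice(mid);     // arr[mid...n]
    mergeSort(left);                  // sort the left side
    mergeSort(right);                 // sort the right side
    merge(left, right, arr);          // merge the sorted halves back into arr
  }
  return arr;
}

// Merges the two sorted halves left and right back into arr.
function merge(left, right, arr) {
  let i = 0, j = 0, k = 0;
  while (i < left.length && j < right.length) {
    arr[k++] = left[i] <= right[j] ? left[i++] : right[j++];
  }
  while (i < left.length) arr[k++] = left[i++];   // remaining values in left
  while (j < right.length) arr[k++] = right[j++]; // remaining values in right
}

module.exports = mergeSort;

// Example:
// mergeSort([8, 4, 23, 42, 16, 15]) -> [4, 8, 15, 16, 23, 42]
```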
Readings and References
Watch
* [Video](https://www.youtube.com/watch?v=JSceec-wEyw)
Read
* [Article 1](https://medium.com/karuna-sehgal/a-simplified-explanation-of-merge-sort-77089fe03bb2)
* [Article 2](https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/tutorial/) | 25.283784 | 102 | 0.61411 | eng_Latn | 0.919735 |
0cc37d9e9a561262d915774ec6ce585b0f83a1a4 | 2,677 | md | Markdown | docs/ssms/visual-db-tools/color-dialog-box-visual-database-tools.md | dirceuresende/sql-docs.pt-br | 023b1c4ae887bc1ed6a45cb3134f33a800e5e01e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-12T00:50:30.000Z | 2021-10-12T00:53:51.000Z | docs/ssms/visual-db-tools/color-dialog-box-visual-database-tools.md | dirceuresende/sql-docs.pt-br | 023b1c4ae887bc1ed6a45cb3134f33a800e5e01e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/visual-db-tools/color-dialog-box-visual-database-tools.md | dirceuresende/sql-docs.pt-br | 023b1c4ae887bc1ed6a45cb3134f33a800e5e01e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-25T13:33:56.000Z | 2020-06-25T13:33:56.000Z | ---
title: Caixa de diálogo Cores (Ferramentas de Banco de Dados Visual) | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: sql-tools
ms.component: ssms-visual-db
ms.reviewer: ''
ms.suite: sql
ms.technology: ssms
ms.tgt_pltfrm: ''
ms.topic: conceptual
f1_keywords:
- VS.ToolsOptions.FontsAndColors.ColorPicker
ms.assetid: 89a19608-f24c-41fa-a1a9-6e2e2cd952fa
caps.latest.revision: 3
author: stevestein
ms.author: sstein
manager: craigg
ms.openlocfilehash: 5e77921c7510cf02c0e9e66b7c89ab736117de38
ms.sourcegitcommit: e77197ec6935e15e2260a7a44587e8054745d5c2
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 07/11/2018
ms.locfileid: "38032034"
---
# <a name="color-dialog-box-visual-database-tools"></a>Caixa de diálogo Cores (Visual Database Tools)
[!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)]
A **Caixa de diálogo Cores** retorna o valor RGB da cor selecionada pelo usuário. O usuário pode selecionar um conjunto de cores básicas determinado pelo driver de vídeo ou um conjunto de cores personalizadas. Selecionar em Cores básicas ou criar cores personalizadas. Defina as cores clicando na matriz de cores, digitando nas caixas **Matiz**, **Saturação**, **Luminosidade**, **Vermelho**, **Verde**e **Azul** .
## <a name="options"></a>Opções
**Cores básicas**
Cores predeterminadas pelo driver de vídeo.
**Cores personalizadas**
Cores adicionadas usando-se o botão **Adicionar às Cores Personalizadas** .
**Definir cores personalizadas**
Expande a caixa de diálogo para mostrar a área de cores personalizadas.
Matriz de cores
A matriz de cores mostra a paleta de cores. Para definir uma cor personalizada, clique em qualquer lugar da matriz. Altere o matiz movimentando o ponteiro no sentido horizontal. Altere a saturação movimentando o ponteiro no sentido vertical.
Barra de luminosidade
Arraste o controle deslizante para alterar a luminosidade ou as tonalidades relativas a claro e escuro de uma cor. O valor numérico correspondente aparece em **Luminosidade**.
**Color**
Exibe a cor atualmente selecionada.
**Matiz**
Valor de matiz da cor selecionada.
**Saturação**
Valor de saturação da cor selecionada.
**Luminosidade**
Luminosidade (claro ou escuro) da cor selecionada.
**Vermelho**
Valor numérico do componente vermelho, que varia de 0 até 255.
**Verde**
Valor numérico do componente verde, que varia de 0 até 255.
**Azul**
Valor numérico do componente azul, que varia de 0 até 255.
**Adicionar às Cores Personalizadas**
Clique para adicionar a cor à área de Cores personalizadas.
| 37.704225 | 416 | 0.755323 | por_Latn | 0.969513 |
0cc38a9e72980cef17e4555d36158e1f2f7001e6 | 198 | md | Markdown | CHANGELOG.md | zacierka/csgodemoparser | 37c6ae3784a9f37f999f2534c29697baa0d3f27b | [
"MIT"
] | null | null | null | CHANGELOG.md | zacierka/csgodemoparser | 37c6ae3784a9f37f999f2534c29697baa0d3f27b | [
"MIT"
] | null | null | null | CHANGELOG.md | zacierka/csgodemoparser | 37c6ae3784a9f37f999f2534c29697baa0d3f27b | [
"MIT"
] | 1 | 2021-05-23T17:57:05.000Z | 2021-05-23T17:57:05.000Z | # Change Log
Change log for Project csgodemoparser
#### 0.0.1
- Working example
- print out kills from game demo
#### 1.0.0
- Published Build
#### 1.0.1
- Code Clean up
- onPlayersConnect Listener | 16.5 | 37 | 0.70202 | eng_Latn | 0.858127 |
0cc3e0ecc85ec8fd8af45441081001c0def1a0f6 | 36,793 | md | Markdown | help/tutorial-comprehensive-technical-v22/modules/module4/ex3.md | carterworks/platform-learn.en | f37808f1096e70ca5fc6924dc9177e279789a26c | [
"MIT"
] | null | null | null | help/tutorial-comprehensive-technical-v22/modules/module4/ex3.md | carterworks/platform-learn.en | f37808f1096e70ca5fc6924dc9177e279789a26c | [
"MIT"
] | null | null | null | help/tutorial-comprehensive-technical-v22/modules/module4/ex3.md | carterworks/platform-learn.en | f37808f1096e70ca5fc6924dc9177e279789a26c | [
"MIT"
] | null | null | null | ---
title: Query Service - Queries, queries, queries... and churn analysis
description: Query Service - Queries, queries, queries... and churn analysis
kt: 5342
audience: Data Engineer, Data Architect, Data Analyst, BI Expert
doc-type: tutorial
activity: develop
exl-id: 98d79b61-83e8-4868-82dc-94977f22a0dd
---
# 4.3 Queries, queries, queries... and churn analysis
## Objective
* Write queries for data analyses
* Write SQL queries combining online, call center and loyalty data available in Adobe Experience Platform
* Learn about Adobe Defined Functions
## Context
In this exercises you will write queries to analyze product views, product funnels, churn etc.
All queries listed in this chapter will be executed in your **PSQL command-line interface**. You should copy (CTRL-c) the statement blocks indicated with **SQL** and paste (CTRL-v)them in the **PSQL command-line interface**. The **Query Result** blocks show the pasted SQL statement and the associated query result.
## 4.3.1 Write basic queries for data analysis
### Timestamp
Data captured in Adobe Experience Platform is time stamped. The **timestamp** attribute allows you to analyze data over time.
How many product views do we have on a daily basis?
**SQL**
```sql
select date_format( timestamp , 'yyyy-MM-dd') AS Day,
count(*) AS productViews
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and eventType = 'commerce.productViews'
group by Day
limit 10;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
aepenablementfy21:all=> select date_format( timestamp , 'yyyy-MM-dd') AS Day,
aepenablementfy21:all-> count(*) AS productViews
aepenablementfy21:all-> from demo_system_event_dataset_for_website_global_v1_1
aepenablementfy21:all-> where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
aepenablementfy21:all-> and eventType = 'commerce.productViews'
aepenablementfy21:all-> group by Day
aepenablementfy21:all-> limit 10;
Day | productViews
------------+--------------
2020-07-31 | 2297
(1 row)
```
### Top 5 products viewed
What are the top 5 products viewed?
**SQL**
```sql
select productListItems.name, count(*)
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and eventType = 'commerce.productViews'
group by productListItems.name
order by 2 desc
limit 5;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
aepenablementfy21:all=> select productListItems.name, count(*)
aepenablementfy21:all-> from demo_system_event_dataset_for_website_global_v1_1
aepenablementfy21:all-> where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
aepenablementfy21:all-> and eventType = 'commerce.productViews'
aepenablementfy21:all-> group by productListItems.name
aepenablementfy21:all-> order by 2 desc
aepenablementfy21:all-> limit 5;
name | count(1)
---------------------------------------+----------
Google Pixel XL 32GB Black Smartphone | 938
SIM Only | 482
Samsung Galaxy S8 | 456
Samsung Galaxy S7 32GB Black | 421
(4 rows)
```
### Product Interaction funnel, from viewing to buying
**SQL**
```sql
select eventType, count(*)
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and eventType is not null
and eventType <> ''
group by eventType;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
aepenablementfy21:all=> select eventType, count(*)
aepenablementfy21:all-> from demo_system_event_dataset_for_website_global_v1_1
aepenablementfy21:all-> where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
aepenablementfy21:all-> and eventType is not null
aepenablementfy21:all-> and eventType <> ''
aepenablementfy21:all-> group by eventType;
eventType | count(1)
------------------------------+----------
commerce.productViews | 2297
commerce.productListAdds | 494
commerce.purchases | 246
(3 rows)
```
### Identify visitors at risk of churn (visit page => Cancel Service)
**SQL**
```sql
select distinct --aepTenantId--.identification.core.ecid
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and web.webPageDetails.name = 'Cancel Service'
group by --aepTenantId--.identification.core.ecid
limit 10;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
aepenablementfy21:all=> select distinct --aepTenantId--.identification.core.ecid
aepenablementfy21:all-> from demo_system_event_dataset_for_website_global_v1_1
aepenablementfy21:all-> where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
aepenablementfy21:all-> and web.webPageDetails.name = 'Cancel Service'
aepenablementfy21:all-> group by --aepTenantId--.identification.core.ecid
aepenablementfy21:all-> limit 10;
ecid
----------------------------------
67802232253493573025911610627278
27147331741697745713411940873426
19806347932758146991274525406147
06339676267512351981624626408225
23933440740775575701680766564499
11860828134020790182705892056898
04258863338643046907489131372300
90257333076958492787834714105751
66695181015407529430237951973742
19103852558440070949457567094096
(10 rows)
```
In the next set of queries we will extend the above query in order to get a complete view of the customers who have been visiting the "Cancel Service" page and of their behavior. You will learn how to use Adobe Defined Functions to sessionize information and identify the sequence and timing of events. You will also join datasets together to further enrich and prepare the data for analysis in Microsoft Power BI.
## 4.3.2 Advanced Queries
The majority of the business logic requires gathering the touch-points for a customer and ordering them by time. This support is provided by Spark SQL in the form of window functions. Window functions are part of standard SQL and are supported by many other SQL engines.
### Adobe Defined Functions
Adobe has added a set of **Adobe Defined Functions** to the standard SQL syntax that allow you to better understand your experience data. In the next couple of queries you will learn about these ADF functions. You can find more information and the complete list [in the documentation](https://experienceleague.adobe.com/docs/experience-platform/query/sql/adobe-defined-functions.html).
### What do people do on the site before reaching the "Cancel Service" page as the 3rd page in a session?
With this query you will discover the first two Adobe Defined Functions: **SESS_TIMEOUT** and **NEXT**.
> **SESS_TIMEOUT()** reproduces the visit groupings found in Adobe Analytics. It performs a similar time-based grouping, but with customizable parameters.
>
> **NEXT()** and **PREVIOUS()** help you to understand how customers navigate your site.
**SQL**
```sql
SELECT
webPage,
webPage_2,
webPage_3,
webPage_4,
count(*) journeys
FROM
(
SELECT
webPage,
NEXT(webPage, 1, true)
OVER(PARTITION BY ecid, session.num
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING).value
AS webPage_2,
NEXT(webPage, 2, true)
OVER(PARTITION BY ecid, session.num
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING).value
AS webPage_3,
NEXT(webPage, 3, true)
OVER(PARTITION BY ecid, session.num
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING).value
AS webPage_4,
session.depth AS SessionPageDepth
FROM (
select a.--aepTenantId--.identification.core.ecid as ecid,
a.timestamp,
web.webPageDetails.name as webPage,
SESS_TIMEOUT(timestamp, 60 * 30)
OVER (PARTITION BY a.--aepTenantId--.identification.core.ecid
ORDER BY timestamp
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
AS session
from demo_system_event_dataset_for_website_global_v1_1 a
where a.--aepTenantId--.identification.core.ecid in (
select b.--aepTenantId--.identification.core.ecid
from demo_system_event_dataset_for_website_global_v1_1 b
where b.--aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and b.web.webPageDetails.name = 'Cancel Service'
)
)
)
WHERE SessionPageDepth=1
and webpage_3 = 'Cancel Service'
GROUP BY webPage, webPage_2, webPage_3, webPage_4
ORDER BY journeys DESC
LIMIT 10;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
webPage | webPage_2 | webPage_3 | webPage_4 | journeys
---------------------------------------+---------------------------------------+----------------+------------+----------
Citi Signal Sport | Google Pixel XL 32GB Black Smartphone | Cancel Service | Call Start | 2
SIM Only | Citi Signal Shop | Cancel Service | | 2
SIM Only | Telco Home | Cancel Service | | 2
TV & Broadband Deals | Samsung Galaxy S7 32GB Black | Cancel Service | | 2
Telco Home | Citi Signal Sport | Cancel Service | Call Start | 2
Google Pixel XL 32GB Black Smartphone | Broadband Deals | Cancel Service | | 2
Broadband Deals | Samsung Galaxy S7 32GB Black | Cancel Service | | 2
Broadband Deals | Samsung Galaxy S8 | Cancel Service | | 1
Samsung Galaxy S8 | Google Pixel XL 32GB Black Smartphone | Cancel Service | | 1
SIM Only | Google Pixel XL 32GB Black Smartphone | Cancel Service | Call Start | 1
(10 rows)
```
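If you first want to see the sessionization building block on its own, the inner subquery of the statement above can be run in isolation. This minimal sketch only adds a `limit`; the returned **session** struct exposes the `num` and `depth` fields that the outer query relies on:

```sql
select --aepTenantId--.identification.core.ecid as ecid,
       timestamp,
       web.webPageDetails.name as webPage,
       SESS_TIMEOUT(timestamp, 60 * 30)
         OVER (PARTITION BY --aepTenantId--.identification.core.ecid
               ORDER BY timestamp
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS session
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
limit 10;
```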
### How much time do we have before a visitor calls the call center after visiting the "Cancel Service" Page?
To answer this kind of query we will use the **TIME_BETWEEN_NEXT_MATCH()** Adobe Defined Function.
> Time-between previous or next match functions provide a new dimension, which measures the time that has elapsed since a particular incident.
**SQL**
```sql
select * from (
select --aepTenantId--.identification.core.ecid as ecid,
web.webPageDetails.name as webPage,
TIME_BETWEEN_NEXT_MATCH(timestamp, web.webPageDetails.name='Call Start', 'seconds')
OVER(PARTITION BY --aepTenantId--.identification.core.ecid
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
AS contact_callcenter_after_seconds
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and web.webPageDetails.name in ('Cancel Service', 'Call Start')
) r
where r.webPage = 'Cancel Service'
limit 15;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
ecid | webPage | contact_callcenter_after_seconds
----------------------------------+----------------+----------------------------------
00331886620679939148047665693117 | Cancel Service |
00626561600197295782131349716866 | Cancel Service |
00630470663554417679969244202779 | Cancel Service | -797
00720875344152796154458668700428 | Cancel Service | -519
00746064605049656090779523644276 | Cancel Service | -62
00762093837616944422322357210965 | Cancel Service |
00767875779073091876070699689209 | Cancel Service |
00798691264980137616449378075855 | Cancel Service |
00869613691740150556826953447162 | Cancel Service | -129
00943638725078228957873279219207 | Cancel Service | -750
01167540466536077846425644389346 | Cancel Service |
01412448537869549016063764484810 | Cancel Service |
01419076946514450291741574452702 | Cancel Service | -482
01533124771963987423015507880755 | Cancel Service |
01710651086750904478559809475925 | Cancel Service |
(15 rows)
```
### And what is the outcome of that contact?
In this query we join datasets together: we join our `demo_system_event_dataset_for_website_global_v1_1` with `demo_system_event_dataset_for_call_center_global_v1_1`. We do this to know the outcome of the call center interaction.
**SQL**
```sql
select distinct r.*,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callFeeling,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled
from (
select --aepTenantId--.identification.core.ecid ecid,
web.webPageDetails.name as webPage,
TIME_BETWEEN_NEXT_MATCH(timestamp, web.webPageDetails.name='Call Start', 'seconds')
OVER(PARTITION BY --aepTenantId--.identification.core.ecid
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
AS contact_callcenter_after_seconds
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and web.webPageDetails.name in ('Cancel Service', 'Call Start')
) r
, demo_system_event_dataset_for_call_center_global_v1_1 c
where r.ecid = c.--aepTenantId--.identification.core.ecid
and r.webPage = 'Cancel Service'
and c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled IN (true,false)
and c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic IN ('contract', 'invoice','complaint','wifi')
limit 15;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
ecid | webPage | contact_callcenter_after_seconds | callfeeling | calltopic | callcontractcancelled
----------------------------------+----------------+----------------------------------+-------------+-----------+-----------------------
65003638134805559755890758041032 | Cancel Service | -440 | negative | contract | true
24197860921105808861772992106002 | Cancel Service | -109 | negative | contract | true
96145097889556586310105454800766 | Cancel Service | -501 | neutral | contract | true
18680613140217544548647790969994 | Cancel Service | -502 | negative | contract | true
66121898576007921287545496624574 | Cancel Service | -546 | negative | contract | true
35086866174626846547860375146326 | Cancel Service | -493 | negative | contract | false
30502827193916828536733220567055 | Cancel Service | -924 | negative | contract | true
85319114253582167371394801608573 | Cancel Service | -267 | positive | contract | true
04258863338643046907489131372300 | Cancel Service | -588 | positive | contract | false
23933440740775575701680766564499 | Cancel Service | -261 | neutral | contract | true
17332005215125613039685855763735 | Cancel Service | -478 | neutral | contract | true
02666934104296797891818818456669 | Cancel Service | -297 | positive | contract | true
48158305927116134877913019413025 | Cancel Service | -47 | neutral | contract | false
13294750130353985087337266864522 | Cancel Service | -71 | positive | contract | false
69034679856689334967307492458080 | Cancel Service | -812 | negative | contract | true
(15 rows)
```
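If you want to summarize this outcome instead of inspecting individual rows, the same join can be aggregated — for example, counting interactions by call feeling and whether the contract was cancelled. The sketch below is illustrative only (it reuses the dataset and field names from the query above and is not one of the lab steps):
```sql
select c.--aepTenantId--.interactionDetails.core.callCenterAgent.callFeeling as callFeeling,
       c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled as contractCancelled,
       count(*) as interactions
from demo_system_event_dataset_for_website_global_v1_1 e,
     demo_system_event_dataset_for_call_center_global_v1_1 c
where e.--aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and e.web.webPageDetails.name = 'Cancel Service'
and e.--aepTenantId--.identification.core.ecid = c.--aepTenantId--.identification.core.ecid
and c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled IN (true,false)
group by 1, 2
order by interactions desc;
```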
### What is the loyalty profile of these customers?
In this query we join loyalty data that we have onboarded in Adobe Experience Platform. This allows us to enrich the churn analysis with loyalty data.
**SQL**
```sql
select r.*,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callFeeling,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic,
l.--aepTenantId--.loyaltyDetails.level,
l.--aepTenantId--.identification.core.loyaltyId
from (
select --aepTenantId--.identification.core.ecid ecid,
web.webPageDetails.name as webPage,
TIME_BETWEEN_NEXT_MATCH(timestamp, web.webPageDetails.name='Call Start', 'seconds')
OVER(PARTITION BY --aepTenantId--.identification.core.ecid
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
AS contact_callcenter_after_seconds
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and web.webPageDetails.name in ('Cancel Service', 'Call Start')
) r
, demo_system_event_dataset_for_call_center_global_v1_1 c
, demo_system_profile_dataset_for_loyalty_global_v1_1 l
where r.ecid = c.--aepTenantId--.identification.core.ecid
and r.webPage = 'Cancel Service'
and l.--aepTenantId--.identification.core.ecid = r.ecid
and c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic IN ('contract', 'invoice','complaint','wifi','promo')
limit 15;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
ecid | webPage | contact_callcenter_after_seconds | callfeeling | calltopic | level | loyaltyid
----------------------------------+----------------+----------------------------------+-------------+-----------+--------+-----------
65003638134805559755890758041032 | Cancel Service | -440 | negative | contract | Gold | 924854108
65003638134805559755890758041032 | Cancel Service | -440 | negative | contract | Gold | 924854108
24197860921105808861772992106002 | Cancel Service | -109 | negative | contract | Bronze | 094259678
24197860921105808861772992106002 | Cancel Service | -109 | negative | contract | Bronze | 094259678
96145097889556586310105454800766 | Cancel Service | -501 | neutral | contract | Gold | 644887358
96145097889556586310105454800766 | Cancel Service | -501 | neutral | contract | Gold | 644887358
18680613140217544548647790969994 | Cancel Service | -502 | negative | contract | Gold | 205300004
18680613140217544548647790969994 | Cancel Service | -502 | negative | contract | Gold | 205300004
66121898576007921287545496624574 | Cancel Service | -546 | negative | contract | Bronze | 095728673
66121898576007921287545496624574 | Cancel Service | -546 | negative | contract | Bronze | 095728673
35086866174626846547860375146326 | Cancel Service | -493 | negative | contract | Bronze | 453145930
35086866174626846547860375146326 | Cancel Service | -493 | negative | contract | Bronze | 453145930
30502827193916828536733220567055 | Cancel Service | -924 | negative | contract | Gold | 269406417
30502827193916828536733220567055 | Cancel Service | -924 | negative | contract | Gold | 269406417
85319114253582167371394801608573 | Cancel Service | -267 | positive | contract | Bronze | 899276035
(15 rows)
```
### From what region do they visit us?
Let's include the geographical info, such as longitude, latitude, city, and country code, captured by Adobe Experience Platform, in order to get some geographical insights about churning customers.
**SQL**
```sql
select distinct r.ecid,
r.city,
r.countrycode,
r.lat as latitude,
r.lon as longitude,
r.contact_callcenter_after_seconds as seconds_to_contact_callcenter,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callFeeling,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled,
l.--aepTenantId--.loyaltyDetails.level,
l.--aepTenantId--.identification.core.loyaltyId
from (
select --aepTenantId--.identification.core.ecid ecid,
placeContext.geo._schema.latitude lat,
placeContext.geo._schema.longitude lon,
placeContext.geo.city,
placeContext.geo.countryCode,
web.webPageDetails.name as webPage,
TIME_BETWEEN_NEXT_MATCH(timestamp, web.webPageDetails.name='Call Start', 'seconds')
OVER(PARTITION BY --aepTenantId--.identification.core.ecid
ORDER BY timestamp
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
AS contact_callcenter_after_seconds
from demo_system_event_dataset_for_website_global_v1_1
where --aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and web.webPageDetails.name in ('Cancel Service', 'Call Start')
) r
, demo_system_event_dataset_for_call_center_global_v1_1 c
, demo_system_profile_dataset_for_loyalty_global_v1_1 l
where r.ecid = c.--aepTenantId--.identification.core.ecid
and r.webPage = 'Cancel Service'
and l.--aepTenantId--.identification.core.ecid = r.ecid
and c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic IN ('contract', 'invoice','complaint','wifi','promo')
limit 15;
```
Copy the statement above and execute it in your **PSQL command-line interface**.
**Query Result**
```text
ecid | city | countrycode | latitude | longitude | seconds_to_contact_callcenter | callfeeling | calltopic | callcontractcancelled | level | loyaltyid
----------------------------------+-----------+-------------+------------+------------+-------------------------------+-------------+-----------+-----------------------+--------+-----------
00630470663554417679969244202779 | Charlton | GB | 51.59119 | -1.407848 | -797 | negative | contract | false | Bronze | 524483285
00630470663554417679969244202779 | Charlton | GB | 51.59119 | -1.407848 | -797 | negative | contract | | Bronze | 524483285
00720875344152796154458668700428 | Ashley | GB | 51.4139633 | -2.2685462 | -519 | positive | contract | false | Silver | 860696333
00720875344152796154458668700428 | Ashley | GB | 51.4139633 | -2.2685462 | -519 | positive | contract | | Silver | 860696333
00746064605049656090779523644276 | Liverpool | GB | 53.4913801 | -2.867264 | -62 | positive | contract | true | Bronze | 072387270
00746064605049656090779523644276 | Liverpool | GB | 53.4913801 | -2.867264 | -62 | positive | contract | | Bronze | 072387270
00869613691740150556826953447162 | Langley | GB | 51.888151 | -0.23924 | -129 | negative | contract | true | Bronze | 789347684
00869613691740150556826953447162 | Langley | GB | 51.888151 | -0.23924 | -129 | negative | contract | | Bronze | 789347684
00943638725078228957873279219207 | Eaton | GB | 53.2945961 | -0.9335791 | -750 | positive | contract | false | Gold | 033926162
00943638725078228957873279219207 | Eaton | GB | 53.2945961 | -0.9335791 | -750 | positive | contract | | Gold | 033926162
01419076946514450291741574452702 | Tullich | GB | 57.4694803 | -3.1269422 | -482 | neutral | contract | false | Bronze | 105063634
01419076946514450291741574452702 | Tullich | GB | 57.4694803 | -3.1269422 | -482 | neutral | contract | | Bronze | 105063634
01738842540109643781526526573341 | Whitwell | GB | 54.3886617 | -1.555363 | -562 | neutral | contract | false | Gold | 791324509
01738842540109643781526526573341 | Whitwell | GB | 54.3886617 | -1.555363 | -562 | neutral | contract | | Gold | 791324509
02052460258994877317679083617975 | Edinburgh | GB | 55.9309486 | -3.1859102 | -545 | neutral | contract | false | Gold | 443477555
(15 rows)
```
## Call Center Interaction Analysis
In the queries above we only looked at the visitors that ended up contacting the call center about a service cancellation. We now want to broaden the scope and take into account all call center interactions, including the wifi, promo, invoice, complaint, and contract topics.
You will need to edit a query, so let's first open Notepad (Windows) or Brackets (Mac).
On Windows, click the search icon (1) in the Windows taskbar, type **notepad** in the search field (2), and click (3) the **Notepad** result:

On Mac:

Copy the following statement into Notepad/Brackets:
```sql
select /* enter your name */
e.--aepTenantId--.identification.core.ecid as ecid,
e.placeContext.geo.city as city,
e.placeContext.geo._schema.latitude latitude,
e.placeContext.geo._schema.longitude longitude,
e.placeContext.geo.countryCode as countrycode,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callFeeling as callFeeling,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callTopic as callTopic,
c.--aepTenantId--.interactionDetails.core.callCenterAgent.callContractCancelled as contractCancelled,
l.--aepTenantId--.loyaltyDetails.level as loyaltystatus,
l.--aepTenantId--.loyaltyDetails.points as loyaltypoints,
l.--aepTenantId--.identification.core.loyaltyId as crmid
from demo_system_event_dataset_for_website_global_v1_1 e
,demo_system_event_dataset_for_call_center_global_v1_1 c
,demo_system_profile_dataset_for_loyalty_global_v1_1 l
where e.--aepTenantId--.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
and e.web.webPageDetails.name in ('Cancel Service', 'Call Start')
and e.--aepTenantId--.identification.core.ecid = c.--aepTenantId--.identification.core.ecid
and l.--aepTenantId--.identification.core.ecid = e.--aepTenantId--.identification.core.ecid;
```
And replace

```text
enter your name
```

with your own name. Do not remove `/*` and `*/`. Your modified statement in Notepad should look like this:

Copy your modified statement from **Notepad** into the **PSQL command-line window** and press Enter. You should see the following result in the PSQL command-line window:
```text
aepenablementfy21:all=>
aepenablementfy21:all=> select /* vangeluw */
aepenablementfy21:all-> e._experienceplatform.identification.core.ecid as ecid,
aepenablementfy21:all-> e.placeContext.geo.city as city,
aepenablementfy21:all-> e.placeContext.geo._schema.latitude latitude,
aepenablementfy21:all-> e.placeContext.geo._schema.longitude longitude,
aepenablementfy21:all-> e.placeContext.geo.countryCode as countrycode,
aepenablementfy21:all-> c._experienceplatform.interactionDetails.core.callCenterAgent.callFeeling as callFeeling,
aepenablementfy21:all-> c._experienceplatform.interactionDetails.core.callCenterAgent.callTopic as callTopic,
aepenablementfy21:all-> c._experienceplatform.interactionDetails.core.callCenterAgent.callContractCancelled as contractCancelled,
aepenablementfy21:all-> l._experienceplatform.loyaltyDetails.level as loyaltystatus,
aepenablementfy21:all-> l._experienceplatform.loyaltyDetails.points as loyaltypoints,
aepenablementfy21:all-> l._experienceplatform.identification.core.loyaltyId as crmid
aepenablementfy21:all-> from demo_system_event_dataset_for_website_global_v1_1 e
aepenablementfy21:all-> ,demo_system_event_dataset_for_call_center_global_v1_1 c
aepenablementfy21:all-> ,demo_system_profile_dataset_for_loyalty_global_v1_1 l
aepenablementfy21:all-> where e._experienceplatform.demoEnvironment.brandName IN ('Luma Telco', 'Citi Signal')
aepenablementfy21:all-> and e.web.webPageDetails.name in ('Cancel Service', 'Call Start')
aepenablementfy21:all-> and e._experienceplatform.identification.core.ecid = c._experienceplatform.identification.core.ecid
aepenablementfy21:all-> and l._experienceplatform.identification.core.ecid = e._experienceplatform.identification.core.ecid;
ecid | city | latitude | longitude | countrycode | callFeeling | callTopic | contractCancelled | loyaltystatus | loyaltypoints | crmid
----------------------------------+------------+------------+------------+-------------+-------------+-----------+-------------------+---------------+---------------+-----------
33977405947573095768416894125891 | Tullich | 57.4694803 | -3.1269422 | GB | positive | wifi | false | Bronze | 73.0 | 904552921
33977405947573095768416894125891 | Tullich | 57.4694803 | -3.1269422 | GB | positive | wifi | false | Bronze | 73.0 | 904552921
33977405947573095768416894125891 | Tullich | 57.4694803 | -3.1269422 | GB | positive | wifi | | Bronze | 73.0 | 904552921
33977405947573095768416894125891 | Tullich | 57.4694803 | -3.1269422 | GB | positive | wifi | | Bronze | 73.0 | 904552921
67802232253493573025911610627278 | Linton | 54.0542238 | -2.0215836 | GB | none | none | false | Silver | 522.0 | 417981877
67802232253493573025911610627278 | Linton | 54.0542238 | -2.0215836 | GB | none | none | false | Silver | 522.0 | 417981877
67802232253493573025911610627278 | Linton | 54.0542238 | -2.0215836 | GB | none | none | | Silver | 522.0 | 417981877
67802232253493573025911610627278 | Linton | 54.0542238 | -2.0215836 | GB | none | none | | Silver | 522.0 | 417981877
27147331741697745713411940873426 | Langley | 51.888151 | -0.23924 | GB | none | none | false | Bronze | 790.0 | 826545716
27147331741697745713411940873426 | Langley | 51.888151 | -0.23924 | GB | none | none | false | Bronze | 790.0 | 826545716
27147331741697745713411940873426 | Langley | 51.888151 | -0.23924 | GB | none | none | | Bronze | 790.0 | 826545716
27147331741697745713411940873426 | Langley | 51.888151 | -0.23924 | GB | none | none | | Bronze | 790.0 | 826545716
19806347932758146991274525406147 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | false | Gold | 981.0 | 412492571
19806347932758146991274525406147 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | false | Gold | 981.0 | 412492571
19806347932758146991274525406147 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | | Gold | 981.0 | 412492571
19806347932758146991274525406147 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | | Gold | 981.0 | 412492571
06339676267512351981624626408225 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | false | Bronze | 632.0 | 024761880
06339676267512351981624626408225 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | false | Bronze | 632.0 | 024761880
06339676267512351981624626408225 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | | Bronze | 632.0 | 024761880
06339676267512351981624626408225 | Edinburgh | 55.9309486 | -3.1859102 | GB | none | none | | Bronze | 632.0 | 024761880
23933440740775575701680766564499 | Whitwell | 54.3886617 | -1.555363 | GB | neutral | contract | true | Gold | 853.0 | 696923821
23933440740775575701680766564499 | Whitwell | 54.3886617 | -1.555363 | GB | neutral | contract | true | Gold | 853.0 | 696923821
23933440740775575701680766564499 | Whitwell | 54.3886617 | -1.555363 | GB | neutral | contract | | Gold | 853.0 | 696923821
23933440740775575701680766564499 | Whitwell | 54.3886617 | -1.555363 | GB | neutral | contract | | Gold | 853.0 | 696923821
11860828134020790182705892056898 | Norton | 52.2679288 | -1.1202549 | GB | none | none | false | Gold | 139.0 | 271933383
11860828134020790182705892056898 | Norton | 52.2679288 | -1.1202549 | GB | none | none | false | Gold | 139.0 | 271933383
11860828134020790182705892056898 | Norton | 52.2679288 | -1.1202549 | GB | none | none | | Gold | 139.0 | 271933383
11860828134020790182705892056898 | Norton | 52.2679288 | -1.1202549 | GB | none | none | | Gold | 139.0 | 271933383
:
```
In the next exercise you will persist your query as a new dataset (an operation known as **Create Table As Select**, or **CTAS**) that you will use in Microsoft Power BI.
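As a preview of what that looks like, a CTAS statement in Query Service generally follows the pattern sketched below. The dataset name `churn_analysis_dataset` is a hypothetical placeholder, and the exact statement you will run is covered in the next exercise:
```sql
-- Minimal sketch of a Create Table As Select (CTAS) statement.
create table churn_analysis_dataset as (
  select e.--aepTenantId--.identification.core.ecid as ecid,
         e.web.webPageDetails.name as webPage
  from demo_system_event_dataset_for_website_global_v1_1 e
  where e.web.webPageDetails.name in ('Cancel Service', 'Call Start')
);
```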
Next Step: [4.4 - Power BI/Tableau](./ex4.md)
[Go Back to Module 4](./query-service.md)
[Go Back to All Modules](../../overview.md)
| 60.714521 | 413 | 0.61865 | eng_Latn | 0.363147 |
0cc3f173d2f4d7f550c2f40b4086f9921f9eefbd | 1,832 | md | Markdown | sections/api/helpers/create-global-style.md | TommyAlmeida/styled-components-website | 0493f52d5f49e3baaaac4deb263625dfeb755f4a | [
"MIT"
] | 1 | 2019-08-01T22:02:18.000Z | 2019-08-01T22:02:18.000Z | sections/api/helpers/create-global-style.md | TommyAlmeida/styled-components-website | 0493f52d5f49e3baaaac4deb263625dfeb755f4a | [
"MIT"
] | null | null | null | sections/api/helpers/create-global-style.md | TommyAlmeida/styled-components-website | 0493f52d5f49e3baaaac4deb263625dfeb755f4a | [
"MIT"
] | null | null | null | import Code from 'components/Code'
import Table, { Row, Column } from 'components/Table'
### `createGlobalStyle` | v4
A helper function to generate a special `StyledComponent` that handles global styles. Normally, styled components are automatically scoped to a local CSS class and therefore isolated from other components. In the case of `createGlobalStyle`, this limitation is removed and things like CSS resets or base stylesheets can be applied.
<Table head={['Arguments', 'Description']}>
<Row>
<Column>
1. <Code>TaggedTemplateLiteral</Code>
</Column>
<Column>A tagged template literal with your CSS and interpolations.</Column>
</Row>
</Table>
Returns a `StyledComponent` that does not accept children. Place it at the top of your React tree and the global styles will be injected when the component is "rendered".
```jsx
import { createGlobalStyle } from 'styled-components'
const GlobalStyle = createGlobalStyle`
body {
color: ${props => (props.whiteColor ? 'white' : 'black')};
}
`
// later in your app
<React.Fragment>
<Navigation /> {/* example of other top-level stuff */}
<GlobalStyle whiteColor />
</React.Fragment>
```
Since the `GlobalStyle` component is a `StyledComponent`, that means it also has access to theming from the [`<ThemeProvider>` component](/docs/api#themeprovider) if provided.
```jsx
import { createGlobalStyle, ThemeProvider } from 'styled-components'
const GlobalStyle = createGlobalStyle`
body {
color: ${props => (props.whiteColor ? 'white' : 'black')};
font-family: ${props => props.theme.fontFamily};
}
`
// later in your app
<ThemeProvider theme={{ fontFamily: 'Helvetica Neue' }}>
<React.Fragment>
<Navigation /> {/* example of other top-level stuff */}
<GlobalStyle whiteColor />
</React.Fragment>
</ThemeProvider>
```
| 32.140351 | 331 | 0.716157 | eng_Latn | 0.964159 |
0cc494aee673f112fce87d659b1564e19c706bc9 | 6,031 | md | Markdown | docs/2.16/waf-installation/nginx-plus.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | null | null | null | docs/2.16/waf-installation/nginx-plus.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | null | null | null | docs/2.16/waf-installation/nginx-plus.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | null | null | null | [img-wl-console-users]: ../images/check-users.png
[wallarm-status-instr]: ../admin-en/configure-statistics-service.md
[memory-instr]: ../admin-en/configuration-guides/allocate-resources-for-waf-node.md
[waf-directives-instr]: ../admin-en/configure-parameters-en.md
[sqli-attack-desc]: ../attacks-vulns-list.md#sql-injection
[xss-attack-desc]: ../attacks-vulns-list.md#crosssite-scripting-xss
[img-test-attacks-in-ui]: ../images/admin-guides/test-attacks.png
[waf-mode-instr]: ../admin-en/configure-wallarm-mode.md
[logging-instr]: ../admin-en/configure-logging.md
[proxy-balancer-instr]: ../admin-en/using-proxy-or-balancer-en.md
[scanner-whitelisting-instr]: ../admin-en/scanner-ips-whitelisting.md
[process-time-limit-instr]: ../admin-en/configure-parameters-en.md#wallarm_process_time_limit
[configure-selinux-instr]: ../admin-en/configure-selinux.md
[configure-proxy-balancer-instr]: ../admin-en/configuration-guides/access-to-wallarm-api-via-proxy.md
[install-postanalytics-instr]: ../admin-en/installation-postanalytics-en.md
[update-instr]: ../updating-migrating/nginx-modules.md
[install-postanalytics-docs]: ../../admin-en/installation-postanalytics-en/
[versioning-policy]: ../updating-migrating/versioning-policy.md
[enable-libdetection-docs]: ../admin-en/configure-parameters-en.md#wallarm_enable_libdetection
[waf-mode-recommendations]: ../about-wallarm-waf/deployment-best-practices.md#follow-recommended-onboarding-steps
[waf-installation-instr-latest]: /waf-installation/nginx-plus/
[waf-installation-instr-middle]: /2.18/waf-installation/nginx-plus/
[versioning-policy]: ../updating-migrating/versioning-policy.md
# Installing dynamic WAF module for NGINX Plus
These instructions describe the steps to install Wallarm WAF as a dynamic module for the official commercial version of NGINX Plus.
--8<-- "../include/waf/installation/already-installed-waf-postanalytics-deprecation.md"
## Requirements
--8<-- "../include/waf/installation/nginx-requirements.md"
## Installation options
--8<-- "../include/waf/installation/nginx-installation-options.md"
Installation commands for both options are described in the further instructions.
## Installation
### 1. Install NGINX Plus and dependencies
Install NGINX Plus and its dependencies using these [official NGINX instructions](https://www.nginx.com/resources/admin-guide/installing-nginx-plus/).
!!! info "Installing on Amazon Linux 2"
To install NGINX Plus on Amazon Linux 2, use the CentOS 7 instructions.
### 2. Add Wallarm WAF repositories
Wallarm WAF is installed and updated from the Wallarm repositories. To add repositories, use the commands for your platform:
--8<-- "../include/waf/installation/add-nginx-waf-repos-2.16.md"
### 3. Install Wallarm WAF packages
#### Request processing and postanalytics on the same server
To run postanalytics and process the requests on the same server, the following packages are required:
* `nginx-plus-module-wallarm` for the NGINX Plus-Wallarm module
* `wallarm-node` for the postanalytics module, Tarantool database, and additional NGINX Plus-Wallarm packages
=== "Debian"
```bash
sudo apt install --no-install-recommends wallarm-node nginx-plus-module-wallarm
```
=== "Ubuntu"
```bash
sudo apt install --no-install-recommends wallarm-node nginx-plus-module-wallarm
```
=== "CentOS or Amazon Linux 2"
```bash
sudo yum install wallarm-node nginx-plus-module-wallarm
```
#### Request processing and postanalytics on different servers
To run postanalytics and process the requests on different servers, the following packages are required:
* `wallarm-node-nginx` and `nginx-plus-module-wallarm` for the NGINX Plus-Wallarm module
=== "Debian"
```bash
sudo apt install --no-install-recommends wallarm-node-nginx nginx-plus-module-wallarm
```
=== "Ubuntu"
```bash
sudo apt install --no-install-recommends wallarm-node-nginx nginx-plus-module-wallarm
```
=== "CentOS or Amazon Linux 2"
```bash
sudo yum install wallarm-node-nginx nginx-plus-module-wallarm
```
* `wallarm-node-tarantool` on the separate server for the postanalytics module and Tarantool database (installation steps are described in the [instructions](../admin-en/installation-postanalytics-en.md))
### 4. Connect the Wallarm WAF module
1. Open the file `/etc/nginx/nginx.conf`:
```bash
sudo vim /etc/nginx/nginx.conf
```
2. Add the following directive right after the `worker_processes` directive:
```bash
load_module modules/ngx_http_wallarm_module.so;
```
Configuration example with the added directive:
```
user nginx;
worker_processes auto;
load_module modules/ngx_http_wallarm_module.so;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
```
3. Copy the configuration files for the system setup:
``` bash
sudo cp /usr/share/doc/nginx-plus-module-wallarm/examples/*.conf /etc/nginx/conf.d/
```
### 5. Connect the WAF node to Wallarm Cloud
--8<-- "../include/waf/installation/connect-waf-and-cloud.md"
### 6. Update Wallarm WAF configuration
--8<-- "../include/waf/installation/nginx-waf-min-configuration-2.16.md"
### 7. Restart NGINX Plus
--8<-- "../include/waf/root_perm_info.md"
--8<-- "../include/waf/restart-nginx-2.16.md"
### 8. Test Wallarm WAF operation
--8<-- "../include/waf/installation/test-waf-operation.md"
## Settings customization
Dynamic Wallarm WAF module with default settings is installed for NGINX Plus. To customize Wallarm WAF settings, use the [available directives](../admin-en/configure-parameters-en.md).
--8<-- "../include/waf/installation/common-customization-options-216.md"
| 39.677632 | 204 | 0.700879 | eng_Latn | 0.538692 |
0cc5894c857f325d85c02c25b57f168b19f6c7ca | 557 | md | Markdown | README.md | wingleung/dashing-sentry | 576adecf00fd696ca9c344cae1b6808d76c7dba9 | [
"MIT"
] | null | null | null | README.md | wingleung/dashing-sentry | 576adecf00fd696ca9c344cae1b6808d76c7dba9 | [
"MIT"
] | 2 | 2018-08-13T19:01:21.000Z | 2019-02-12T17:26:26.000Z | README.md | wingleung/dashing-sentry | 576adecf00fd696ca9c344cae1b6808d76c7dba9 | [
"MIT"
] | 2 | 2019-08-08T21:53:07.000Z | 2019-10-15T19:43:44.000Z | # dashing-sentry
Dashing widget for a Top 5 from Sentry error tracking
<img width="800" src="sentry_demo.png" alt="Got">
## Usage
Add job and widget files to your project.
*config.yml*
```
sentry:
projects:
YOUR_PROJECT_NAME:
name: YOUR_PROJECT_NAME
organization: YOUR_ORGANIZATION_NAME
api_key: YOUR_API_KEY
```
*dashboard.erb*
```
<li data-row="1" data-col="1" data-sizex="2" data-sizey="1">
<div data-id="sentry_toperrors_PROJECT_NAME" data-view="SentryTopErrors" data-title="Top 5 errors"></div>
</li>
```
## License
MIT
| 19.206897 | 107 | 0.696589 | yue_Hant | 0.259128 |
0cc5d6ee43965d62da00f5fdaedf319d7fd7390a | 311 | md | Markdown | about.md | pavlovdog/ethereum_cv | ed2a028f54da3056739c9431e575924c16e55556 | [
"MIT"
] | null | null | null | about.md | pavlovdog/ethereum_cv | ed2a028f54da3056739c9431e575924c16e55556 | [
"MIT"
] | null | null | null | about.md | pavlovdog/ethereum_cv | ed2a028f54da3056739c9431e575924c16e55556 | [
"MIT"
] | null | null | null | ---
layout: page
title: About
---
Hi there, fellows.
**Contacts**
[Github](https://github.com/pavlovdog)
[ethereum.stackexchange.com](http://ethereum.stackexchange.com/users/4406/sergey-potekhin)
[VK](https://vk.com/home_pavlovdog)
[Email](mailto:[email protected]) - [email protected]
| 17.277778 | 90 | 0.736334 | yue_Hant | 0.189066 |
0cc619024794df6b8017c2fee293204094a01a38 | 6,605 | md | Markdown | docs/framework/wcf/feature-details/how-to-call-wcf-service-operations-asynchronously.md | ilyakharlamov/docs.fr-fr | 54c09f71d03787b462bdd134b3407d5ed708a191 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/how-to-call-wcf-service-operations-asynchronously.md | ilyakharlamov/docs.fr-fr | 54c09f71d03787b462bdd134b3407d5ed708a191 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/how-to-call-wcf-service-operations-asynchronously.md | ilyakharlamov/docs.fr-fr | 54c09f71d03787b462bdd134b3407d5ed708a191 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Procédure : Appeler des opérations de Service WCF de façon asynchrone'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
ms.assetid: 0face17f-43ca-417b-9b33-737c0fc360df
ms.openlocfilehash: 19b09c9ec789419f2774207b051b8ee488b6725d
ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 01/23/2019
ms.locfileid: "54625547"
---
# <a name="how-to-call-wcf-service-operations-asynchronously"></a>Procédure : Appeler des opérations de Service WCF de façon asynchrone
Cette rubrique présente comment un client peut accéder de façon asynchrone à une opération de service. Le service dans cette rubrique implémente l'interface `ICalculator`. Le client peut appeler les opérations sur cette interface de manière asynchrone à l'aide du modèle d'appel asynchrone commandé par événement. (Pour plus d’informations sur le modèle d’appel asynchrone basé sur des événements, consultez [programmation multithread avec le modèle asynchrone basé sur événement](https://go.microsoft.com/fwlink/?LinkId=248184)). Pour obtenir un exemple qui montre comment implémenter une opération de façon asynchrone dans un service, consultez [Comment : Implémenter une opération de Service asynchrone](../../../../docs/framework/wcf/how-to-implement-an-asynchronous-service-operation.md). Pour plus d’informations sur les opérations synchrones et asynchrones, consultez [synchrone et opérations asynchrones](../../../../docs/framework/wcf/synchronous-and-asynchronous-operations.md).
> [!NOTE]
> Le modèle d'appel asynchrone commandé par événement n'est pas pris en charge lorsqu'il utilise un <xref:System.ServiceModel.ChannelFactory%601>. Pour plus d’informations sur les appels asynchrones à l’aide de la <xref:System.ServiceModel.ChannelFactory%601>, consultez [Comment : Appeler des opérations de façon asynchrone à l’aide d’une fabrique de canaux](../../../../docs/framework/wcf/feature-details/how-to-call-operations-asynchronously-using-a-channel-factory.md).
## <a name="procedure"></a>Procédure
#### <a name="to-call-wcf-service-operations-asynchronously"></a>Pour appeler des opérations de service WCF de façon asynchrone
1. Exécutez le [ServiceModel Metadata Utility Tool (Svcutil.exe)](../../../../docs/framework/wcf/servicemodel-metadata-utility-tool-svcutil-exe.md) outil avec les deux le `/async` et le `/tcv:Version35` commande options ensemble, comme indiqué dans la commande suivante.
```
svcutil /n:http://Microsoft.ServiceModel.Samples,Microsoft.ServiceModel.Samples http://localhost:8000/servicemodelsamples/service/mex /a /tcv:Version35
```
Cette opération génère en plus les opérations synchrones et standards en fonction de délégué asynchrones, une classe de client WCF qui contient :
- Deux <`operationName` > `Async` opérations pour une utilisation avec l’approche appel asynchrone basé sur des événements. Exemple :
[!code-csharp[EventAsync#1](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/generatedclient.cs#1)]
[!code-vb[EventAsync#1](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/generatedclient.vb#1)]
- Événements terminés d’opérations sous la forme <`operationName` > `Completed` pour une utilisation avec l’approche appel asynchrone basé sur des événements. Exemple :
[!code-csharp[EventAsync#2](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/generatedclient.cs#2)]
[!code-vb[EventAsync#2](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/generatedclient.vb#2)]
- <xref:System.EventArgs?displayProperty=nameWithType> types pour chaque opération (sous la forme <`operationName`>`CompletedEventArgs`) pour une utilisation avec l’approche appel asynchrone basé sur des événements. Exemple :
[!code-csharp[EventAsync#3](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/generatedclient.cs#3)]
[!code-vb[EventAsync#3](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/generatedclient.vb#3)]
2. Dans l'application d'appel, créez une méthode de rappel à appeler au terme de l'opération asynchrone en vous conformant à l'exemple de code suivant.
[!code-csharp[EventAsync#4](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/client.cs#4)]
[!code-vb[EventAsync#4](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/client.vb#4)]
3. Avant d’appeler l’opération, utilisez un nouveau générique <xref:System.EventHandler%601?displayProperty=nameWithType> de type <`operationName` > `EventArgs` pour ajouter la méthode de gestionnaire (créée à l’étape précédente) à la <`operationName` > `Completed` événement. Appelez ensuite la <`operationName` > `Async` (méthode). Exemple :
[!code-csharp[EventAsync#5](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/client.cs#5)]
[!code-vb[EventAsync#5](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/client.vb#5)]
## <a name="example"></a>Exemple
> [!NOTE]
> Les règles de conception pour le modèle asynchrone basé sur les événements stipulent que si plusieurs valeurs sont retournées, une valeur est retournée comme la propriété `Result` et les autres sont retournées comme les propriétés sur l'objet <xref:System.EventArgs>. Il en découle que si un client importe des métadonnées à l'aide des options de commande asynchrone basées sur les événements et que l'opération retourne plusieurs valeurs, l'objet <xref:System.EventArgs> par défaut retourne une valeur comme la propriété `Result` et le reste sont des propriétés de l'objet <xref:System.EventArgs>. Pour recevoir l'objet message comme la propriété `Result` et que les valeurs retournées sur cet objet soient des propriétés, utilisez l'option de commande `/messageContract`. Cette opération génère une signature qui retourne le message de réponse comme la propriété `Result` sur l'objet <xref:System.EventArgs>. Toutes les valeurs de retour internes sont ensuite des propriétés de l'objet de message de réponse.
[!code-csharp[EventAsync#6](../../../../samples/snippets/csharp/VS_Snippets_CFX/eventasync/cs/client.cs#6)]
[!code-vb[EventAsync#6](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/eventasync/vb/client.vb#6)]
## <a name="see-also"></a>Voir aussi
- [Guide pratique pour Implémenter une opération de Service asynchrone](../../../../docs/framework/wcf/how-to-implement-an-asynchronous-service-operation.md)
| 97.132353 | 1,015 | 0.756094 | fra_Latn | 0.877593 |
0cc628c559f840be6f5bd47cdeb1058fb237c012 | 944 | md | Markdown | Algorithmic Thinking, Models of Computation/README.md | rudrajit1729/Algorithms-MIT | 8d1082a350ee7aafd7aebf1aff85e6830be978f2 | [
"MIT"
] | 2 | 2020-07-23T02:38:31.000Z | 2021-08-28T10:06:01.000Z | Algorithmic Thinking, Models of Computation/README.md | rudrajit1729/Algorithms-MIT | 8d1082a350ee7aafd7aebf1aff85e6830be978f2 | [
"MIT"
] | null | null | null | Algorithmic Thinking, Models of Computation/README.md | rudrajit1729/Algorithms-MIT | 8d1082a350ee7aafd7aebf1aff85e6830be978f2 | [
"MIT"
] | 1 | 2020-08-14T10:31:23.000Z | 2020-08-14T10:31:23.000Z | # Introduction to Algorithms
## Algorithmic Thinking, Peak Finding
For a given structure of data find a peak(if it exists)
Peak is defined as the element which is greater than or equal to the neighbors
1D Version - Given an array of n integers, find a peak(i.e. element which is NOT smaller than its neighbours)
2D Version - Given an m x n matrix of integers, find a peak(i.e. element which is NOT smaller than its neighbours)
More on it at - https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/MIT6_006F11_lec01.pdf
## Models of Computation, Document Distance
General Idea about the different kinds of computational machines and the use of each of them in proper domains of problems.
More on it at - https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/MIT6_006F11_lec02_orig.pdf
| 52.444444 | 172 | 0.79661 | eng_Latn | 0.967129 |
0cc6ad9ccd26decf02618a2191a71f470e7aa653 | 5,438 | md | Markdown | wdk-ddi-src/content/portcls/nf-portcls-pcregisterphysicalconnectiontoexternal.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-12-23T14:02:21.000Z | 2022-02-13T00:40:38.000Z | wdk-ddi-src/content/portcls/nf-portcls-pcregisterphysicalconnectiontoexternal.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/portcls/nf-portcls-pcregisterphysicalconnectiontoexternal.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:portcls.PcRegisterPhysicalConnectionToExternal
title: PcRegisterPhysicalConnectionToExternal function (portcls.h)
description: The PcRegisterPhysicalConnectionToExternal function registers a physical connection from an audio adapter filter to an external audio adapter filter.
old-location: audio\pcregisterphysicalconnectiontoexternal.htm
tech.root: audio
ms.date: 05/08/2018
keywords: ["PcRegisterPhysicalConnectionToExternal function"]
ms.keywords: PcRegisterPhysicalConnectionToExternal, PcRegisterPhysicalConnectionToExternal function [Audio Devices], audio.pcregisterphysicalconnectiontoexternal, audpc-routines_8e03485f-aca9-4e06-981b-fa9593472499.xml, portcls/PcRegisterPhysicalConnectionToExternal
req.header: portcls.h
req.include-header: Portcls.h
req.target-type: Universal
req.target-min-winverclnt: The PortCls system driver implements the PcRegisterPhysicalConnectionToExternal function in Microsoft Windows 98/Me and in Windows 2000 and later operating systems.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib: Portcls.lib
req.dll:
req.irql: PASSIVE_LEVEL
targetos: Windows
req.typenames:
f1_keywords:
- PcRegisterPhysicalConnectionToExternal
- portcls/PcRegisterPhysicalConnectionToExternal
topic_type:
- APIRef
- kbSyntax
api_type:
- LibDef
api_location:
- Portcls.lib
- Portcls.dll
api_name:
- PcRegisterPhysicalConnectionToExternal
---
# PcRegisterPhysicalConnectionToExternal function
## -description
The <b>PcRegisterPhysicalConnectionToExternal</b> function registers a physical connection from an audio adapter filter to an external audio adapter filter.
## -parameters
### -param DeviceObject
[in]
Pointer to the device object for the device. This is a system structure of type <a href="/windows-hardware/drivers/ddi/wdm/ns-wdm-_device_object">DEVICE_OBJECT</a>.
### -param FromUnknown
[in]
Pointer to the <a href="/windows-hardware/drivers/ddi/portcls/nn-portcls-iport">IPort</a> interface of a port driver object. The port driver object that is associated with <i>FromUnknown</i> is bound to the subdevice that supplies the connection's data source (output) pin.
### -param FromPin
[in]
Specifies a pin ID. This parameter identifies the source (output) pin on the filter that is associated with the <i>FromUnknown</i> interface.
### -param ToString
[in]
Pointer to a null-terminated Unicode string containing the symbolic link name of the external filter that supplies the sink pin for the connection.
### -param ToPin
[in]
Specifies a pin ID. This parameter identifies the sink (input) pin on the external filter named by <i>ToString</i>.
## -returns
<b>PcRegisterPhysicalConnectionToExternal</b> returns STATUS_SUCCESS if the call was successful. Otherwise, it returns an appropriate error code.
## -remarks
An adapter driver calls <b>PcRegisterPhysicalConnectionToExternal</b> to register a physical connection with the PortCls system driver. PortCls stores this information so that the port driver can subsequently use the information to respond to <a href="/windows-hardware/drivers/stream/ksproperty-pin-physicalconnection">KSPROPERTY_PIN_PHYSICALCONNECTION</a> property requests.
This function is useful for specifying a topology link between two audio adapters that are controlled by different adapter drivers. The function registers a physical connection between a filter object representing a subdevice in the local audio adapter and a filter object representing a subdevice in an external adapter.
The <i>ToString</i> parameter is a symbolic link to the subdevice that is exposed by the external adapter driver.
The information that is required to register an external physical connection must be supplied to the two drivers. This can be done either during an initial coordinated install of the two devices, or dynamically by a user-mode configuration program that coordinates changes to the configuration of both devices.
An adapter driver can call the <a href="/windows-hardware/drivers/ddi/portcls/nf-portcls-iunregisterphysicalconnection-unregisterphysicalconnectiontoexternal">IUnregisterPhysicalConnection::UnregisterPhysicalConnectionToExternal</a> method to delete the registration of a physical connection that was registered by a previous call to <b>PcRegisterPhysicalConnectionToExternal</b>. For more information, see <a href="/windows-hardware/drivers/audio/dynamic-audio-subdevices">Dynamic Audio Subdevices</a>.
## -see-also
<a href="/windows-hardware/drivers/ddi/wdm/ns-wdm-_device_object">DEVICE_OBJECT</a>
<a href="/windows-hardware/drivers/ddi/portcls/nn-portcls-iport">IPort</a>
<a href="/windows-hardware/drivers/ddi/portcls/nf-portcls-iunregisterphysicalconnection-unregisterphysicalconnectiontoexternal">IUnregisterPhysicalConnection::UnregisterPhysicalConnectionToExternal</a>
<a href="/windows-hardware/drivers/stream/ksproperty-pin-physicalconnection">KSPROPERTY_PIN_PHYSICALCONNECTION</a>
<a href="/windows-hardware/drivers/ddi/portcls/nf-portcls-pcregisterphysicalconnection">PcRegisterPhysicalConnection</a>
<a href="/windows-hardware/drivers/ddi/portcls/nf-portcls-pcregisterphysicalconnectionfromexternal">PcRegisterPhysicalConnectionFromExternal</a>
| 46.478632 | 504 | 0.80103 | eng_Latn | 0.892614 |
0cc71917c64d72d7afca76323781232ef7d8817e | 3,206 | md | Markdown | node_modules/eslint-plugin-promise/CHANGELOG.md | shivampip/github-cat-ribbon | 294ee107a4f18900d85523af330749522bf78e0f | [
"MIT"
] | 235 | 2020-08-20T02:39:00.000Z | 2022-03-13T12:37:55.000Z | node_modules/eslint-plugin-promise/CHANGELOG.md | shivampip/github-cat-ribbon | 294ee107a4f18900d85523af330749522bf78e0f | [
"MIT"
] | 46 | 2019-10-05T13:27:27.000Z | 2021-09-02T05:37:58.000Z | node_modules/eslint-plugin-promise/CHANGELOG.md | shivampip/github-cat-ribbon | 294ee107a4f18900d85523af330749522bf78e0f | [
"MIT"
] | 16 | 2020-08-24T08:28:58.000Z | 2021-12-14T12:44:36.000Z | ## 4.2.1
- Added more use cases to `no-return-wrap`
## 4.0.1
- Remove `promise/param-names` fixer
([#146](https://github.com/xjamundx/eslint-plugin-promise/pull/146))
## 4.0.0
- Added fixer for `promise/no-new-statics` rule
([#133](https://github.com/xjamundx/eslint-plugin-promise/pull/133))
- Support ESLint v5
([#144](https://github.com/xjamundx/eslint-plugin-promise/pull/144))
This is a breaking change that drops support for Node v4. In order to use ESLint
v5 and eslint-plugin-promise v4, you must use Node >=6.
## 3.8.0
- Removed `promise/avoid-new` from recommended configuration
([#119](https://github.com/xjamundx/eslint-plugin-promise/pull/119))
- Ignored event listener callbacks in `promise/prefer-await-to-callbacks`
([#117](https://github.com/xjamundx/eslint-plugin-promise/pull/117))
- Ignored top-level awaits in `promise/prefer-await-to-then`
([#126](https://github.com/xjamundx/eslint-plugin-promise/pull/126))
- Added docs for `promise/no-nesting` and `promise/prefer-await-to-then`
([#120](https://github.com/xjamundx/eslint-plugin-promise/pull/120))
([#121](https://github.com/xjamundx/eslint-plugin-promise/pull/121))
## 3.7.0
- Added `promise/valid-params` rule
([#85](https://github.com/xjamundx/eslint-plugin-promise/pull/85))
- Added `promise/no-new-statics` rule
([#82](https://github.com/xjamundx/eslint-plugin-promise/pull/82))
- Added fixer for `promise/param-names` rule
([#99](https://github.com/xjamundx/eslint-plugin-promise/pull/99))
- Added rule documentation to each rule
([#91](https://github.com/xjamundx/eslint-plugin-promise/pull/91))
## 3.6.0
- Added `['catch']` support in `catch-or-return`
- Added `no-return-in-finally` rule
- Fixed some formatting in the docs
- Added `allowReject` option to `no-return-wrap`
- Added exceptions for `no-callback-in-promise`
## 3.5.0
- Added support for recommended settings using
`extends: plugin:promise/recommended`
## 3.4.2
- Fixed always return false positive with ternary (#31)
## 3.4.1
- fixed #49
## 3.4.0
- new rule: avoid-new
- new rule: no-promise-in-callback
- new rule: no-callback-in-promise
- new rule: no-nesting
## 3.3.2
- Removed eslint from peerDeps
## 3.3.1
- Updated engines with proper stuff
- Fixed bug for unreachable code
## 3.3.0
- Rule: `prefer-async-to-callbacks` added
- Rule: `prefer-async-to-then` added
## 3.2.1
- Fix: `no-return-wrap` rule missing from index.js
## 3.2.0
- Added `no-return-wrap` rule
## 3.1.0
- Added multiple terminationMethods
## 3.0.1
- Removed deprecated `always-catch` rule
- FIX: always-return error with "fn && fn()"
## 3.0.0
- Updated column and line numbers
- Added flow analysis for better handling of if statements
## 2.0.1
- Fixed type in docs
## 2.0.0
- ESLint 3.0 Support
## 1.3.2
- Updated tests to run on eslint 2.0
- Fixed some issues with `no-native` rule
## 1.3.1
- Actually added `no-native` rule
## 1.3.0
- Added `no-native` rule
## 1.2.0
- Allow `throw` in `always-return` rule
- Added `terminationMethod` option to `catch-or-return` rule
## 1.1.0
- Added `catch-or-return` rule
## 1.0.8
- Fixed crash issues
## 1.0.0 - 1.0.7
- Lots of basic feature updates and doc changes
| 22.263889 | 80 | 0.69869 | eng_Latn | 0.679796 |
0cc7673ad4cc9f53a8e8d1d4f46a71061a1bd0be | 460 | md | Markdown | iterations/92/ticket.claws-mail-3.17.4.md | ckauhaus/nixos-vulnerability-roundup | 07589f47223f811b06f5fd62000c1adeadd37e6d | [
"BSD-3-Clause"
] | 5 | 2018-11-08T08:38:04.000Z | 2021-11-14T17:12:14.000Z | iterations/92/ticket.claws-mail-3.17.4.md | ckauhaus/nixos-vulnerability-roundup | 07589f47223f811b06f5fd62000c1adeadd37e6d | [
"BSD-3-Clause"
] | 8 | 2019-09-30T19:58:28.000Z | 2019-11-23T17:56:05.000Z | iterations/92/ticket.claws-mail-3.17.4.md | ckauhaus/nixos-vulnerability-roundup | 07589f47223f811b06f5fd62000c1adeadd37e6d | [
"BSD-3-Clause"
] | 2 | 2019-02-17T11:28:32.000Z | 2019-10-27T10:53:24.000Z | Vulnerability roundup 92: claws-mail-3.17.4: 1 advisory [7.5]
[search](https://search.nix.gsc.io/?q=claws-mail&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=claws-mail+in%3Apath&type=Code)
* [ ] [CVE-2020-16094](https://nvd.nist.gov/vuln/detail/CVE-2020-16094) CVSSv3=7.5 (nixos-20.03)
Scanned versions: nixos-20.03: 925ae0dee63.
Cc @fpletz
Cc @globin
<!-- https://github.com/NixOS/nixpkgs/issues/96784 -->
| 38.333333 | 175 | 0.719565 | yue_Hant | 0.487579 |
0cc7fec59c22673bb95587ef125ab70c8cbd1b6d | 1,760 | md | Markdown | Skype/UCMA/platform-and-endpoints.md | petgus/skype-docs | c69cb33eac05f978d3139bf8621e0f8f0071c09e | [
"CC-BY-4.0",
"MIT"
] | 175 | 2016-03-24T18:06:52.000Z | 2021-12-27T08:58:25.000Z | Skype/UCMA/platform-and-endpoints.md | petgus/skype-docs | c69cb33eac05f978d3139bf8621e0f8f0071c09e | [
"CC-BY-4.0",
"MIT"
] | 583 | 2016-03-29T02:22:30.000Z | 2022-03-31T22:15:07.000Z | Skype/UCMA/platform-and-endpoints.md | petgus/skype-docs | c69cb33eac05f978d3139bf8621e0f8f0071c09e | [
"CC-BY-4.0",
"MIT"
] | 338 | 2016-03-29T17:15:54.000Z | 2022-03-18T10:15:20.000Z | ---
title: Platform and endpoints
TOCTitle: Platform and endpoints
ms:assetid: de5868bc-9ac7-4f88-b700-a2efce8d531e
ms:mtpsurl: https://msdn.microsoft.com/en-us/library/Dn466045(v=office.16)
ms:contentKeyID: 65239979
ms.date: 07/27/2015
mtps_version: v=office.16
---
# Platform and endpoints
**Applies to**: Skype for Business 2015
The [CollaborationPlatform](https://docs.microsoft.com/dotnet/api/microsoft.rtc.collaboration.collaborationplatform?view=ucma-api) class provides connection management, message dispatching, and other services to endpoints. An application creates an instance of the **CollaborationPlatform** class to take advantage of the Microsoft Unified Communications Managed API 5.0 infrastructure. An application developer can use one of the constructors in this class to create a server platform, from which an application server can be created. Another constructor can be used to create an auto-provisioned application server platform, and a third constructor in this class can be used to create a client platform, on which a number of user endpoints can be created.
The following illustration shows the principal classes that represent the two platform types and the three endpoint types.
.png "UCMA platform and endpoint classes")
This section includes the following topics:
- [Client platforms](client-platforms.md)
- [Server platforms](server-platforms.md)
- [User endpoints and application endpoints](user-endpoints-and-application-endpoints.md)
- [Incoming message dispatching](incoming-message-dispatching.md)
- [Trusted applications](trusted-applications.md)
- [Trusted domains and SIP gateways](trusted-domains-and-sip-gateways.md)
| 58.666667 | 757 | 0.808523 | eng_Latn | 0.976925 |
0cc86c019e7c656d391cde96a1139b7be1f9bbd5 | 2,003 | md | Markdown | wdk-ddi-src/content/ucmtypes/nf-ucmtypes-ucm_pd_power_data_object_init_variable_non_battery.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ucmtypes/nf-ucmtypes-ucm_pd_power_data_object_init_variable_non_battery.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ucmtypes/nf-ucmtypes-ucm_pd_power_data_object_init_variable_non_battery.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:ucmtypes.UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY
title: UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY function (ucmtypes.h)
description: Initializes a UCM_PD_POWER_DATA_OBJECT structure as a Variable Supply Non Battery type Power Data Object.
old-location: buses\ucm_pd_power_data_object_init_variable_non_battery.htm
tech.root: usbref
ms.assetid: BBC8975A-E5B1-4137-83D8-891075A8F4D0
ms.date: 05/07/2018
ms.keywords: UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY, UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY function [Buses], buses.ucm_pd_power_data_object_init_variable_non_battery, ucmtypes/UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY
ms.topic: function
req.header: ucmtypes.h
req.include-header: Ucmcx.h
req.target-type: Windows
req.target-min-winverclnt: Windows 10
req.target-min-winversvr: Windows Server 2016
req.kmdf-ver: 1.15
req.umdf-ver: 2.15
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- Ucmtypes.h
api_name:
- UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY
product:
- Windows
targetos: Windows
req.typenames:
---
# UCM_PD_POWER_DATA_OBJECT_INIT_VARIABLE_NON_BATTERY function
## -description
Initializes a <a href="https://msdn.microsoft.com/library/windows/hardware/mt187935">UCM_PD_POWER_DATA_OBJECT</a> structure as a Variable Supply Non Battery type Power Data Object.
## -parameters
### -param Pdo [out]
A pointer to a <a href="https://msdn.microsoft.com/library/windows/hardware/mt187935">UCM_PD_POWER_DATA_OBJECT</a> structure in which the <b>VariableSupplyNonBatteryPdo.VariableSupportNonBattery</b> member is set to <b>UcmPdPdoTypeVariableSupplyNonBattery</b>.
## -returns
This function does not return a value.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/mt187935">UCM_PD_POWER_DATA_OBJECT</a>
| 24.728395 | 260 | 0.812282 | yue_Hant | 0.871075 |
0cc86f98faabcec5f7ae88c31bdc91b75190a5d1 | 480 | md | Markdown | plugins/README.md | coltonhughes/win_move_user | cac3f7b01dc137dc2042abd7a5dbc90b7beaec03 | [
"MIT"
] | null | null | null | plugins/README.md | coltonhughes/win_move_user | cac3f7b01dc137dc2042abd7a5dbc90b7beaec03 | [
"MIT"
] | null | null | null | plugins/README.md | coltonhughes/win_move_user | cac3f7b01dc137dc2042abd7a5dbc90b7beaec03 | [
"MIT"
] | null | null | null | # Module Documentation
This module is quite simple.
All it does is allow you to provide a name for a user and move them within the Active Directory environment.
## Example
```
- name: Move a user to the Disabled Users OU
win_move_user:
name: [email protected]
path: OU=Disabled Users,DC=domain,DC=com
```
This is a very simple module that I put very little effort in perfecting. If issues arise feel free to open them or fork and modify at your own discretion. | 40 | 156 | 0.760417 | eng_Latn | 0.998729 |
0cc8ffe00422121a1b1476dd4df8fc8e63478a04 | 1,974 | md | Markdown | doc/decisions/unit_testing.md | JakobWonisch/libelektra | 0d9ba69aedca7ec8ad706255672867f1aa174e52 | [
"0BSD",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause-Clear",
"BSD-3-Clause"
] | 188 | 2015-01-07T20:34:26.000Z | 2022-03-16T09:55:09.000Z | doc/decisions/unit_testing.md | JakobWonisch/libelektra | 0d9ba69aedca7ec8ad706255672867f1aa174e52 | [
"0BSD",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause-Clear",
"BSD-3-Clause"
] | 3,813 | 2015-01-02T14:00:08.000Z | 2022-03-31T14:19:11.000Z | doc/decisions/unit_testing.md | JakobWonisch/libelektra | 0d9ba69aedca7ec8ad706255672867f1aa174e52 | [
"0BSD",
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause-Clear",
"BSD-3-Clause"
] | 149 | 2015-01-10T02:07:50.000Z | 2022-03-16T09:50:24.000Z | # C++ Unit Testing Framework
## Problem
The previous unit testing framework started as hack to have a bit more
than simple asserts. It is not easy to use (needs explicit enumeration
of all test cases) and lacks really important features (e.g. output of
the assertion that failed).
## Constraints
- Must be BSD licenced
- Must be easy to use
- should be portable
- container testing?
- mocking?
## Assumptions
## Considered Alternatives
- Continue with current framework
- Boost Unit Testing framework
## Decision
- Google Unit testing framework with code downloaded by CMake for
systems where no source is packaged (Debian Wheezy, Arch Linux,
Fedora,...)
## Rationale
- Having the output of current values when an assertion fails in any case
- No listing of all test cases in main (but instead having test discovery)
- No more commenting out if you only want to run parts of the test-suite
- No more typos in test-suite namings
- xUnit output for jenkins
- value and type-parameterized tests
- Mock-Support (not available in gtest?)
- setup/teardown global+per test
- supports death tests
- writing many parts of it on our own adds to the total amount of code to write and maintain.
- integrations into IDEs
## Implications
- It adds lots of code in the repository
- It is not ideal to have different frameworks intermixed (C vs. C++ frameworks, but most code is C)
- In the end we have to write a lot of functionality ourselves anyway (e.g. comparing Keys and KeySets)
- Testsuite execution are already handled by cmake and kdb run-all.
- The selection of tests within a test suite does not play well with ctest.
- Rewriting all current tests to have unified behavior is a lot of work
- Won't work for ABI compatibility tests
- Mock only by extra framework
## Related Decisions
- [Script Testing](script_testing.md)
## Notes
- We had discussions on Mailinglists
- We had discussions on [GitHub](https://github.com/ElektraInitiative/libelektra/pull/26)
| 30.84375 | 103 | 0.764944 | eng_Latn | 0.9974 |
0ccb981290a21939eb37fe609dd0429f35a894b9 | 5,509 | md | Markdown | articles/active-directory/privileged-identity-management/subscription-requirements.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/privileged-identity-management/subscription-requirements.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/privileged-identity-management/subscription-requirements.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Requisitos de suscripción para usar PIM: Azure | Microsoft Docs'
description: Describe los requisitos de suscripción y licencia necesarios para usar Azure AD Privileged Identity Management (PIM).
services: active-directory
documentationcenter: ''
author: rolyon
manager: mtillman
editor: markwahl-msft
ms.assetid: 34367721-8b42-4fab-a443-a2e55cdbf33d
ms.service: active-directory
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.component: pim
ms.date: 06/01/2017
ms.author: rolyon
ms.custom: pim
ms.openlocfilehash: 1554895dcba0c09a3a2e19c284a1cd6f0416cfe1
ms.sourcegitcommit: 63613e4c7edf1b1875a2974a29ab2a8ce5d90e3b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/29/2018
ms.locfileid: "43190617"
---
# <a name="subscription-requirements-to-use-pim"></a>Requisitos de la suscripción para usar PIM
Azure AD Privileged Identity Management está disponible como parte de la edición Premium P2 de Azure AD. Para más información sobre las demás características de P2 y compararlo con Premium P1, consulte [Ediciones de Azure Active Directory](../active-directory-editions.md).
>[!NOTE]
Cuando Azure Active Directory (Azure AD) Privileged Identity Management estaba en versión preliminar, no había ninguna comprobación de licencia para un inquilino para probar el servicio. Ahora que Azure AD Privileged Identity Management ha pasado a disponibilidad general, debe haber una suscripción de prueba o de pago para el inquilino para poder seguir usando Privileged Identity Management después de diciembre de 2016.
## <a name="confirm-your-trial-or-paid-subscription"></a>Confirmación de su suscripción de prueba o de pago
Si no está seguro de si su organización tiene una suscripción de prueba o de pago, puede comprobar si hay una suscripción en su inquilino mediante el uso de los comandos incluidos en el módulo Azure Active Directory para Windows PowerShell V1.
1. Abra una ventana de PowerShell.
2. Escriba `Connect-MsolService` para autenticarse como un usuario en el inquilino.
3. Escriba `Get-MsolSubscription | ft SkuPartNumber,IsTrial,Status`.
Este comando recupera una lista de las suscripciones en el inquilino. Si no se devuelven líneas, tendrá que obtener una prueba de Azure AD Premium P2 o comprar una suscripción de Azure AD Premium P2 o de EMS E5 para usar Azure AD Privileged Identity Management. Para obtener una prueba y comenzar a usar Azure AD Privileged Identity Management, lea [Introducción a Azure AD Privileged Identity Management](pim-getting-started.md).
Si este comando devuelve una línea en la que SkuPartNumber es "AAD_PREMIUM_P2" o "EMSPREMIUM" e IsTrial es "True", significa que hay una prueba de Azure AD Premium P2 en el inquilino. Si el estado de la suscripción no está habilitado y no tiene una compra de suscripción de Azure AD Premium P2 o EMS E5, debe adquirir una suscripción de Azure AD Premium P2 o EMS E5 para seguir usando Azure AD Privileged Identity Management.
Azure AD Premium P2 se encuentra disponible a través del [Contrato Microsoft Enterprise](https://www.microsoft.com/en-us/licensing/licensing-programs/enterprise.aspx), el programa de [licencias por volumen abiertas](https://www.microsoft.com/en-us/licensing/licensing-programs/open-license.aspx) y el [programa de proveedores de soluciones en la nube](https://partner.microsoft.com/cloud-solution-provider). Los suscriptores de Azure y Office 365 también pueden comprar Azure AD Premium P2 en línea. Encontrará más información sobre precios de Azure AD Premium y cómo realizar la solicitud en línea en [Precios de Azure Active Directory](https://azure.microsoft.com/pricing/details/active-directory/).
## <a name="azure-ad-privileged-identity-management-is-not-available-in-tenant"></a>Azure AD Privileged Identity Management no está disponible en el inquilino
Azure AD Privileged Identity Management ya no estará disponible en el inquilino si:
- Su organización estaba usando Azure AD Privileged Identity Management cuando estaba en versión preliminar y no compra la suscripción de Azure AD Premium P2 o EMS E5.
- Su organización tenía una prueba de Azure AD Premium P2 o EMS E5 que expiró.
- Su organización tenía una suscripción comprada que expiró.
Cuando una suscripción de Azure AD Premium P2 o EMS E5 expira, o una organización que estaba usando Azure AD Privileged Identity Management en versión preliminar no obtiene una suscripción de Azure AD Premium P2 o EMS E5:
- Las asignaciones de roles permanentes para roles de Azure AD no se verán afectadas.
- La extensión de Azure AD Privileged Identity Management en Azure Portal, así como los cmdlets de Graph API y las interfaces de PowerShell de Azure AD Privileged Identity Management, ya no estarán disponibles para que los usuarios activen roles con privilegios, administren el acceso con privilegios o realicen revisiones de acceso de roles con privilegios.
- Se quitarán las asignaciones de roles elegibles de roles de Azure AD, puesto que los usuarios ya no podrán activar roles con privilegios.
- Las revisiones de acceso en curso de roles de Azure AD finalizarán y las opciones de configuración de Azure AD Privileged Identity Management se quitarán.
- Azure AD Privileged Identity Management no volverá a enviar correos electrónicos cuando se produzcan cambios de asignación de rol.
## <a name="next-steps"></a>Pasos siguientes
- [Primer uso de PIM](pim-getting-started.md)
- [Roles de directorio de Azure AD que se pueden administrar en PIM](pim-roles.md)
| 83.469697 | 702 | 0.808132 | spa_Latn | 0.971541 |
0ccbda99a0d773a4589addfbc363b6e49bdf0573 | 87 | md | Markdown | meshlocals/existing/toronto.md | tsuckow/hyperboria-docs | 011ca500174b34d0f4d906d009365948f8e1bd70 | [
"CC-BY-4.0"
] | 1 | 2016-08-20T19:05:47.000Z | 2016-08-20T19:05:47.000Z | meshlocals/existing/toronto.md | tsuckow/hyperboria-docs | 011ca500174b34d0f4d906d009365948f8e1bd70 | [
"CC-BY-4.0"
] | null | null | null | meshlocals/existing/toronto.md | tsuckow/hyperboria-docs | 011ca500174b34d0f4d906d009365948f8e1bd70 | [
"CC-BY-4.0"
] | null | null | null | Toronto Meshnet
====
**Website** : [TransitionTech](http://transitiontech.ca/toronto)
| 17.4 | 64 | 0.712644 | yue_Hant | 0.856674 |
0ccbfab57935da25276f269c1d086e4012edc396 | 621 | md | Markdown | docs/Design/SRD0015.md | tcartwright/SqlServer.Rules | e1d76b714f6e4441d34105ae00f9248df71c4246 | [
"MIT"
] | 18 | 2018-10-24T13:15:53.000Z | 2022-03-14T11:50:17.000Z | docs/Design/SRD0015.md | tcartwright/SqlServer.Rules | e1d76b714f6e4441d34105ae00f9248df71c4246 | [
"MIT"
] | 27 | 2020-01-22T21:49:24.000Z | 2021-09-07T15:25:28.000Z | docs/Design/SRD0015.md | tcartwright/SqlServer.Rules | e1d76b714f6e4441d34105ae00f9248df71c4246 | [
"MIT"
] | 5 | 2020-01-22T21:50:10.000Z | 2022-01-23T11:37:42.000Z | [This document is automatically generated. All changed made to it WILL be lost]: <>
# SQL Server Rule: SRD0015
| | |
|----|----|
| Assembly | SqlServer.Rules.dll |
| Namespace | SqlServer.Rules.Design |
| Class | UseColumnListWithInsertsRule |
## Rule Information
| | |
|----|----|
| Id | SRD0015 |
| Friendly Name | Implicit column list |
| Category | Design |
| Ignorable | false |
| Applicable Types | Procedure |
## Description
Always use a column list in INSERT statements.
## Summary
Always use a column list in INSERT statements.
| 20.7 | 86 | 0.597424 | eng_Latn | 0.609038 |
0ccc35e7969a9a5ca2dcd7708d67aeabe81556be | 6,293 | md | Markdown | README.md | boaguilar/SL-Cloud | b5b9784dc57e0ddefb296c76af81d7e36eca5242 | [
"MIT"
] | null | null | null | README.md | boaguilar/SL-Cloud | b5b9784dc57e0ddefb296c76af81d7e36eca5242 | [
"MIT"
] | null | null | null | README.md | boaguilar/SL-Cloud | b5b9784dc57e0ddefb296c76af81d7e36eca5242 | [
"MIT"
] | null | null | null | # Synthetic Lethality Cloud (SL-Cloud)
This project provides a cloud-based data access platform coupled with software and well documented computational notebooks that re-implement published synthetic lethality (SL) inference algorithms to facilitate novel investigation into synthetic lethality. In addition we provide general purpose functions that support these prediction workflows e.g. saving data in bigquery tables. We anticipate that computationally savvy users can leverage the resources provided in this project to conduct highly customizable analysis based on their cancer type of interest and particular context.
Open the framework in **MyBinder**: [](https://mybinder.org/v2/gh/boaguilar/SL-Cloud/HEAD)
**Note**: for **MyBinder**, run the **MyBinder_Authentication.ipynb notebook** first!.
Please find our paper [here](https://www.biorxiv.org/content/10.1101/2021.09.18.459450v1).
If you have any questions, please reach out Bahar Tercan [email protected].
Zenodo : [](https://zenodo.org/badge/latestdoi/476040191)
## Getting Started
### Get a Google Identity
To be able to use our platform, researchers first need to have a Google identity, if you don't have one, please click [here](https://accounts.google.com/signup/v2/webcreateaccount?dsh=308321458437252901&continue=https%3A%2F%2Faccounts.google.com%2FManageAccount&flowName=GlifWebSignIn&flowEntry=SignUp#FirstName=&LastName=) to get, you can also link a non-Gmail account(like sluser<span>@isbscience.org</span>) as a Google identity by [this method](https://accounts.google.com/signup/v2/webcreateaccount?flowName=GlifWebSignIn&flowEntry=SignUp&nogm=true).
### Request Google Cloud Credits
Take advantage of a one-time [$300 Google Credit](https://cloud.google.com/free/).
If you have already used this one-time offer (or there is some other reason you cannot use it), see this information about how to [request ISB-CGC Cloud Credits](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowtoRequestCloudCredits.html).
### Set up a Google Cloud Project
See Google’s documentation about how to create a [Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
[Enable Required Google Cloud APIs](https://cloud.google.com/apis/docs/getting-started#enabling_apis)
### First Notebook
Please run the [first notebook](https://github.com/IlyaLab/SL-Cloud/blob/main/first_notebook.ipynb) to start using our bigquery tables.
## What is There in the Project?
### Scripts
- [Scripts folder](https://github.com/IlyaLab/SL-Cloud/tree/main/scripts/): includes the functions that are used by DAISY and Mutation Dependent SL Inference workflows explained below. This folder also contains scripts for data wrangling procedures like BigQuery dataset and table creation, how to save DEPMAP data in BigQuery tables, helper functions like writing dataframes into excel files and gene conversion among gene symbol, EntrezID and alias.
### Sythetic Lethality Inference Workflows
Example notebooks can be found in the Example_pipelines directory, which including the following notebooks:
- [DAISY Pipeline](https://github.com/IlyaLab/SL-Cloud/blob/main/Example_pipelines/DAISY_example.ipynb) :We reimplemented the published workflow DAISY (Jerby-Arnon et al., 2014) using up-to-date large scale data resources. </br>
- [Mutation Dependent SL pipeline](https://github.com/IlyaLab/SL-Cloud/blob/main/Example_pipelines/MDSLP_example.ipynb): We implemented a mutation-dependent synthetic lethality prediction (MDSLP) workflow based on the rationale that for tumors with mutations that have an impact on protein expression or protein structure (functional mutation), the knockout effects or inhibition of a partner target gene show conditional dependence for the mutated molecular entities.</br>
- [Conservation-based Inference from Yeast Genetic Interactions](https://github.com/IlyaLab/SL-Cloud/blob/main/Example_pipelines/CGI_example.ipynb): We presented a workflow that leverages cross-species conservation to infer experimentally-derived synthetic lethal interactions in yeast to predict relevant SL pairs in humans. We implemented the Conserved Genetic Interaction (CGI) workflow based, in part, on methods described in (Srivas et al., 2016). </br>
### Synthetic-Lethality Inference Data Resources
This resource provides access to publicly available cancer genomics datasets relevant for SL inference. These data have been pre-processed, cleaned and stored in cloud-based query-able tables leveraging [Google BigQuery](https://cloud.google.com/bigquery) technology. In addition we leverage relevant datasets available through the Institute for Systems Biology Cancer Genomics Cloud ([ISB-CGC](https://isb-cgc.appspot.com/)) to make inferences of potential synthetic lethal interactions.
The following represent project-specific datasets with relevance for SL inference:
- **DEPMAP**: DEPMAP shRNA (DEMETER2 V6) and CRISPR (DepMap Public 20Q3) gene expression, sample information, mutation and copy number alterations for CRISPR experiments and and gene dependency scores for shRNA and gene effect scores.
- **CellMap**: Yeast interaction dataset based on fitness scores after single and double knockouts from SGA experiments.
- **Gene Information**: Tables with relevant gene annotation information such as yeast and human ortholog information, gene-alias-Entrez ID mapping, gene Ensembl-id mapping, gene-Refseq mapping.
### Accessing SL Resource
To be able to see the data in the syntheticlethality dataset, please click on https://console.cloud.google.com/bigquery and add the syntheticlethality dataset, users need to pin the syntheticlethality project by first clicking "ADD DATA" and after selecting "Pin a project" and "Enter project name", you will see the window as in the Figure below. After writing syntheticlethality into Projectname box, please click on PIN.
<img src="https://github.com/IlyaLab/SL-Cloud/blob/main/figures/add_sldataset.png" >
### Accessing ISB-CGC Resources
To add ISB-CGC datasets, users need to follow the same steps with Accessing SL Resource, only difference is, they need to write isb-cgc-bq into Projectname box.
| 98.328125 | 586 | 0.802161 | eng_Latn | 0.936076 |
0ccc80a7733913e76aacf05d6c18d40a3a06d740 | 2,943 | md | Markdown | _docs/topics/5-cstruct/cstruct-07-tasks.md | andyguestysj/prog03 | f7e2f0596227a8b8a1e08024a7b907266df93306 | [
"MIT"
] | null | null | null | _docs/topics/5-cstruct/cstruct-07-tasks.md | andyguestysj/prog03 | f7e2f0596227a8b8a1e08024a7b907266df93306 | [
"MIT"
] | null | null | null | _docs/topics/5-cstruct/cstruct-07-tasks.md | andyguestysj/prog03 | f7e2f0596227a8b8a1e08024a7b907266df93306 | [
"MIT"
] | null | null | null | ---
title: Exercises 2
permalink: /docs/cstruct-07-tasks/
---
* [Setting up VSC](https://ysjprog03.netlify.app/docs/vsc/)
* [Cloning a repo in VSC](https://ysjprog03.netlify.app/docs/vsc-cloning/)
* [Gitlab - Setup](https://ysjprog03.netlify.app/docs/gitlab-setup/)
* [Gitlab - Creating a New Repo](https://ysjprog03.netlify.app/docs/gitlab-save/)
* [Gitlab & VSC](https://ysjprog03.netlify.app/docs/gitlab-vsc/)
#### Exercise 1
1. Create a project called "DoubleLinked"
2. Implement the doubly linked list described above
3. Add these nodes in order : 7, 3, 9, 10, 5
4. Create a PrintList(Node *head) function that prints the values in the nodes from first to last
5. Modify PrintList(Node *head) so that it prints the values from first to last then from last to first
6. Write void InsertNodeBefore(Node *next_node, int data) that inserts a new node in to the list before next_node.
#### Exercise 2
1. Return to your "DoubleLinked" project
2. Add in `deleteNode()` above and verify it works.
3. Create a new function `void deleteNodeAt(Node** head_ref, int position)` which will delete the node at position `position` in the list. i.e. if `position` is 0 delete the first node, if it is 1 delete the second node, etc. Your function should do nothing if it is asked to delete a node that doesn't exist.
4. Modify `void deleteNodeAt(Node** head_ref, int position)` so that negative values of position are treated as offsets from the end. i.e. if `position` is -1 then delete the last node, -2 delete the second last node, etc.
#### Exercise 3
1. Log in to replit.com
2. Clone and open [https://github.com/andyguestysj/cStack.git](https://github.com/andyguestysj/cStack.git)
alternative on replit [https://replit.com/@andyguest/cStack](https://replit.com/@andyguest/cStack)
3. Convert the stack and code so that it stores `char`s instead of `int`s
4. Write a function that takes a char array as a parameter and returns a char array with the original string backwards. Use a stack to do this.
5. Write a function that takes a char array string as input. The input is made up of letters, spaces and asterisks. Parse the string character by character. If the character is a letter or a space push it to the stack. If it is an asterisk, do not add the asterisk to th stack, instead pop the top of the stack and print it. Test your function with this string "TH\*E L\*L\*AMA CO\*ME\*S FIRST"
6. Make a new function that reverses a string (from 4) and then processes it based on the asterisks (as in 5). Then after you've finished parsing the string you pop and print everything from the stack. Test it with "SH\*E\*KLL\*OFL\*O\*"
Extra Challenge
Read through [https://www.web4college.com/converters/infix-to-postfix-prefix.php](https://www.web4college.com/converters/infix-to-postfix-prefix.php)
Try to implement the Infix to Postfix Algorithm.
[Infix to Postfix Algorithm](http://csis.pace.edu/~wolf/CS122/infix-postfix.htm)
| 63.978261 | 395 | 0.749235 | eng_Latn | 0.957433 |
0ccd373bb3cdcf7d3d639f90c5bf96b967447af5 | 5,177 | md | Markdown | articles/service-fabric/service-fabric-diagnostics-event-generation-app.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-diagnostics-event-generation-app.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-diagnostics-event-generation-app.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Supervisión del nivel de aplicación de Azure Service Fabric
description: Obtenga información sobre los eventos y los registros de nivel de servicio y aplicación usados para supervisar y diagnosticar los clústeres de Azure Service Fabric.
author: srrengar
ms.topic: conceptual
ms.date: 11/21/2018
ms.author: srrengar
ms.openlocfilehash: 97c3be391dfbee7301ea47bf7234a9549d373370
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/27/2020
ms.locfileid: "75464728"
---
# <a name="application-logging"></a>Registro de aplicaciones
La instrumentación del código no es solo una manera de obtener información acerca de los usuarios, sino el único método para saber si algo va mal en la aplicación y para diagnosticar qué debe corregirse. Aunque técnicamente es posible que conectar un depurador a un servicio de producción, no es un procedimiento habitual. Por lo tanto, es importante disponer de datos de instrumentación detallados.
Algunos productos instrumentan el código automáticamente. Aunque estas soluciones pueden funcionar bien, la instrumentación manual casi siempre debe ser específica para su lógica de negocios. Al final, debe tener suficiente información para depurar desde la aplicación de manera forense. Las aplicaciones de Service Fabric se pueden instrumentar con cualquier marco de registro. En este documento se describen algunos enfoques diferentes para instrumentar el código y se indica cuándo elegir uno u otro.
Para ver ejemplos sobre cómo usar estas sugerencias, consulte [Adición del registro a la aplicación de Service Fabric](service-fabric-how-to-diagnostics-log.md).
## <a name="application-insights-sdk"></a>SDK de Application Insights
Application Insights consigue una eficaz integración con Service Fabric directamente, sin necesidad de configuraciones adicionales. Los usuarios pueden agregar los paquetes de NuGet de Service Fabric de AI y recibir datos y registros creados y recopilados que pueden verse en Azure Portal. Además, se aconseja que los usuarios agreguen su propia telemetría para poder diagnosticar y depurar sus aplicaciones y rastrear cuáles son los servicios y las partes de su aplicación que más se usan. La clase [TelemetryClient](https://docs.microsoft.com/dotnet/api/microsoft.applicationinsights.telemetryclient?view=azure-dotnet) del SDK ofrece muchas formas de rastrear la telemetría en sus aplicaciones. Consulte un ejemplo de cómo instrumentar y agregar Application Insights a su aplicación en nuestro tutorial para [supervisar y diagnosticar una aplicación .NET](service-fabric-tutorial-monitoring-aspnet.md).
## <a name="eventsource"></a>EventSource
Cuando se crea una solución de Service Fabric a partir de una plantilla en Visual Studio, se genera una clase derivada de **EventSource** (**ServiceEventSource** o **ActorEventSource**). Se crea una plantilla en la que podrá agregar eventos para la aplicación o el servicio. El nombre de **EventSource** **debe** ser único y debe cambiarse a partir de la cadena de plantilla predeterminada de MyCompany-<solución>-<proyecto>. El hecho de tener varias definiciones de **EventSource** con el mismo nombre genera un problema en tiempo de ejecución. Cada evento definido debe tener un identificador único. Si el identificador no es único, se produce un error en tiempo de ejecución. En algunas organizaciones se asignan previamente rangos de valores para los identificadores, lo cual evita conflictos entre los equipos de desarrollo independientes. Para más información, consulte el [blog de Vance](https://blogs.msdn.microsoft.com/vancem/2012/07/09/introduction-tutorial-logging-etw-events-in-c-system-diagnostics-tracing-eventsource/) o la [documentación de MSDN](https://msdn.microsoft.com/library/dn774985(v=pandp.20).aspx).
## <a name="aspnet-core-logging"></a>Registro de ASP.NET Core
Es importante planear minuciosamente la instrumentación del código. Un plan de instrumentación correcto puede ayudarle a evitar que se desestabilice el código base y sea necesario volver a instrumentarlo. Para reducir el riesgo, puede elegir una biblioteca de instrumentación como [Microsoft.Extensions.Logging](https://www.nuget.org/packages/Microsoft.Extensions.Logging/), componente de Microsoft ASP.NET Core. ASP.NET Core tiene una interfaz [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) que puede usar con su proveedor preferido al tiempo que reduce al mínimo el efecto sobre el código existente. Puede utilizar el código de ASP.NET Core en Windows y Linux, y en .NET Framework completo, por lo que el código de instrumentación es estándar.
## <a name="next-steps"></a>Pasos siguientes
Una vez que haya elegido el proveedor de registro para instrumentar las aplicaciones y los servicios, los registros y los eventos deben agregarse antes de que se puedan enviar a cualquier plataforma de análisis. Obtenga información sobre [Application Insights](service-fabric-diagnostics-event-analysis-appinsights.md) y [EventFlow](service-fabric-diagnostics-event-aggregation-eventflow.md) para entender mejor algunas de las opciones recomendadas de Azure Monitor.
| 136.236842 | 1,136 | 0.816689 | spa_Latn | 0.987006 |
0ccd602b5413faad901976dbe1c1292d5b650966 | 2,216 | md | Markdown | translations/pt-BR/content/issues/tracking-your-work-with-issues/deleting-an-issue.md | Micheleerb/docs | f3b3fe69e5f5c9446bc0a7f6270aa6bb8139be58 | [
"CC-BY-4.0",
"MIT"
] | 6 | 2022-03-09T07:09:42.000Z | 2022-03-09T07:14:08.000Z | translations/pt-BR/content/issues/tracking-your-work-with-issues/deleting-an-issue.md | Husky57/docs | 1d590a4feb780b0acc9a41381e721b61146175db | [
"CC-BY-4.0",
"MIT"
] | 133 | 2021-11-01T18:16:33.000Z | 2022-03-29T18:18:46.000Z | translations/pt-BR/content/issues/tracking-your-work-with-issues/deleting-an-issue.md | Waleedalaedy/docs | 26d4b73dcbb9a000c32faa37234288649f8d211a | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-05T09:44:04.000Z | 2021-10-05T09:44:52.000Z | ---
title: Excluir um problema
intro: Pessoas com permissões de administrador em um repositório podem excluir permanentemente um problema de um repositório.
redirect_from:
- /github/managing-your-work-on-github/managing-your-work-with-issues-and-pull-requests/deleting-an-issue
- /articles/deleting-an-issue
- /github/managing-your-work-on-github/deleting-an-issue
- /issues/tracking-your-work-with-issues/creating-issues/deleting-an-issue
versions:
fpt: '*'
ghes: '*'
ghae: '*'
ghec: '*'
topics:
- Pull requests
---
Você só pode excluir problemas em um repositório que pertença à sua conta de usuário. Não é possível excluir problemas em um repositório pertencente a outra conta de usuário, mesmo que você seja um colaborador nela.
Para excluir um problema em um repositório que pertença a uma organização, o proprietário da organização deve permitir a exclusão de um problema dos repositórios da organização e você deve ter permissões de administrador ou de proprietário no repositório. Para obter mais informações, consulte "[Permitindo que pessoas excluam problemas na sua organização](/articles/allowing-people-to-delete-issues-in-your-organization)" e "[Funções do repositório para uma organização](/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization)".
Os colaboradores não recebem uma notificação quando você exclui um problema. Ao acessarem a URL de um problema excluído, os colaboradores verão uma mensagem informando que o problema foi eliminado. As pessoas com permissões de administrador ou proprietário no repositório também verão o nome de usuário da pessoa que excluiu o problema e quando isso ocorreu.
1. Navegue até o problema que deseja excluir.
2. Na barra lateral direita, em "Notifications" (Notificações), clique em **Delete this issue** (Excluir este problema). 
4. Para confirmar a exclusão, clique em **Delete this issue** (Excluir problema).
## Leia mais
- "[Vinculando uma pull request a um problema](/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue)"
| 71.483871 | 577 | 0.792419 | por_Latn | 0.99983 |
0cce5945c43bb9503fb22acbe15b8a7d96ac49d6 | 4,133 | md | Markdown | NEWS.md | nigiord/micom | 29116bcfe37269a4ad6e8d4ec2d04966dcfb5afd | [
"Apache-2.0"
] | null | null | null | NEWS.md | nigiord/micom | 29116bcfe37269a4ad6e8d4ec2d04966dcfb5afd | [
"Apache-2.0"
] | null | null | null | NEWS.md | nigiord/micom | 29116bcfe37269a4ad6e8d4ec2d04966dcfb5afd | [
"Apache-2.0"
] | null | null | null | # News and release notes for MICOM
This includes a list of major changes for each minor version starting from 0.19.0.
For information on how to use `micom` please refer to
[the documentation](https://micom-dev.github.io/micom).
### 0.24.1
Fixed settings to get better convergence behavior with OSQP.
### 0.24.0
MICOM now works with OSQP and works out of the box without a commercial QP solver. See
the [installation docs]() for more information.
### 0.23.1
The sample heatmap in `plot_exchanges_per_sample` is now automatically rotated when
there are mores samples than reactions.
media workflows will use presolving by default since those are often numerically
problematic.
### 0.23.0
Fix the signature and add deprecation warnings for optimize_* methods.
`plot_exchanges_per_taxon` will now use normalized fluxes (per 1 gDW for each taxon)
which allows better comparability. The old behavior can be enabled with
`use_total_flux=True`.
Avoid negative growth rate constraints in `_apply_min_growth`.
Can now enable presolving/scaling in `grow` and `tradeoff`.
### 0.22.7
Fix some warnings from pandas.
Avoid a crash in `reularize_l2_norm` when all reactions for a taxon have been fixed to
zero.
Raise a better error if the minimal medium optimization fails in grow.
Use the right tolerance when setting atol and rtol automatically.
### 0.22.6
Fixed an error in `micom.workflows.build` if build was run without a model database
but with a `file` column in the taxonomy.
### 0.22.5
Fixed missing data files.
### 0.22.4
Improves version compatibility with cobrapy and QIIME 2.
### 0.22.3
Fixed a bug where incorrectly labeled biomass demand reactions were treated like an
exchange.
### 0.22.2
Lowered the required version for pandas to re-enable compatibility with Python 3.6.
Docs are now built on all pushes.
### 0.22.1
`atol` and `rtol` are now consistently exposed.
Remove six dependency.
### 0.22.0
Got a bit smarter in cleaning up compartment suffixes. This fixes the odd "_e_m" suffix
in CARVEME-derived models. This will change the names of exchange reactions compared
to version 0.21.x.
### 0.21.0
Stabilize minimal_medium a bit more.
The strategy to calculate fluxes in the grow workflow can now be set with the
`strategy` argument. For instance, you can now get fluxes via parsimonious FBA instead of
assuming minimal imports.
Fixed a bug where minimal media classification with `weights="mass"` would fail due to
invalid elements in the formula (for instance "X").
### 0.20.0
This version brings new functionality to design growth media from a skeleton medium.
This also allows for a quicker verification of media against the model databases.
Added workflows:
- `check_db_media` check if models in a model database can grow on a given medium
- `complete_db_media`completes a growth medium so all models in the db can grow on it
Together with this we now provide several new revised growth media for the gut on
github.com/micom-dev/media. In particular, we finally provide the often requested growth
medium for the carveME database.
### 0.19.0
`minimal_medium` now accepts weighting the fluxes bei molecular weight or
any elemental content. For instance, you can now calculate a minimal medium that
minimizes carbon import for instance. The used `weigths` argument propagates
to any workflow using this function including `complete_medium`, `fix_medium`
and `grow`.
### 0.1.0 - 0.18.0
`minimal_medium` now has an option to return all fluxes along with the
import and export fluxes. Useful if you want to check what every individual
taxa consumes.
Addition of `Community.cooperative_tradeoff` which brings a fast method to
get nearly unique growth rates for a particular growth tradeoff (for instance
50% maximal growth rate).
Addition of elasticity coefficients which allows you to study which import
and export reactions are sensitive to changes in diet or taxa abundances.
Several changes to make `micom` capable of handling numerical instable models.
This includes a crossover strategy to obtain an optimal or near-optimal
from an incomplete interior point optimization.
| 31.075188 | 89 | 0.782241 | eng_Latn | 0.998859 |
0ccec0222e03eba1bfc7c788e135f23620e199d8 | 13,346 | md | Markdown | content/troubleshooting/preview-built-problem.md | YesCT/docs | 0410c8775bb7afeaf800f2125fa9de5a925c0e9d | [
"MIT"
] | null | null | null | content/troubleshooting/preview-built-problem.md | YesCT/docs | 0410c8775bb7afeaf800f2125fa9de5a925c0e9d | [
"MIT"
] | null | null | null | content/troubleshooting/preview-built-problem.md | YesCT/docs | 0410c8775bb7afeaf800f2125fa9de5a925c0e9d | [
"MIT"
] | null | null | null | ---
title: "Preview has built incorrectly"
date: 2019-09-19T12:38:53-04:00
weight: 2
---
- [A Preview says it is "ready", but shows a blank page](#a-preview-says-it-is-ready-but-shows-a-blank-page)
- [A Preview is pulling the wrong Docker image](#a-preview-is-pulling-the-wrong-docker-image)
- [Using Base Previews: unexpected Preview build results](#using-base-previews-unexpected-preview-build-results)
- [Something in my Preview isn't right](#something-in-my-preview-isn-t-right)
- [Troubleshooting Visual Diffs](#troubleshooting-visual-diffs)
## A Preview says it is "ready", but shows a blank page
When a Preview says it is "ready", that means that it successfully ran the
[commands](/setting-up-services/how-to-set-up-services/leverage-service-commands) in your
[configuration file](/setting-up-tugboat/create-a-tugboat-config-file/), and none of those commands returned an error.
It does not necessarily mean that those commands did what you expected them to do. For example, your configuration might
set up a database, but not provide the correct password to some application config file. In this case, the Preview would
build successfully, but the application might not load.
To troubleshoot where this might have gone wrong:
1. Double-check the commands in the [configuration file](/setting-up-tugboat/create-a-tugboat-config-file/).
2. Check [the Preview's logs](../debug-config-file/#how-to-check-the-preview-logs) for any clues, and.
3. Make use of [Tugboat's terminal access](/tugboat-cli/) to the Preview to do the same type of troubleshooting you
would do if this happened on your local installation.
## A Preview is pulling the wrong Docker image
Have you updated the Docker image you want your Preview to use, but it still appears to be pulling the old image? There
are a couple of things that could be in play here:
1. [Verify what version of the image you're calling](#verify-what-version-of-the-image-you-re-calling)
2. [Not all Preview Actions call the Docker image again](#rebuild-the-preview-from-scratch)
### Verify what version of the image you're calling
Maybe you thought you had left a version tag off, so you'd be getting the latest Docker image, but you had actually
called a [specific version of the image](/setting-up-services/service-images/image-version-tags) in the config file. (Or
vice versa! Maybe your config file calls `latest` or doesn't specify a version, but you actually need a specific image
version.) First thing's first; double-check whether you're calling a specific version of the Docker image in your
[config file](/setting-up-tugboat/create-a-tugboat-config-file/), and update as necessary.
### Rebuild the Preview from scratch
The more common issue is performing a Preview Action that doesn't actually call the Docker image specified in your
config file.
If you're building a Preview from a PR, and you've got a Base Preview set in your Tugboat project, the Preview from your
PR only executes commands in the `build` portion of the config file. Your Docker image is pulled before `init`.
For more info, see:
[When does Tugboat pull a Docker image](/setting-up-services/service-images/docker-pull/#when-does-tugboat-pull-a-docker-image),
and
[When does Tugboat update a Docker image?](/setting-up-services/service-images/docker-pull/#when-does-tugboat-update-a-docker-image)
Basically, this means building a Preview from a PR when you're using a Base Preview will never pull a new Docker image.
The practical fix for this issue is to
[build the Preview from scratch, without using the Base Preview](/building-a-preview/work-with-base-previews/building-new-previews/).
If you want to change the Docker image in your Base Preview, so all new Previews will build from the new image, you'll
need to [Rebuild](/building-a-preview/work-with-base-previews/change-or-update/#change-a-base-preview) the Base Preview.
{{% notice tip %}} If you're using the [Repository Setting](/setting-up-tugboat/select-repo-settings/) to
[Refresh Base Previews Automatically](/setting-up-tugboat/select-repo-settings/#refresh-base-previews-automatically),
this does _not_ update the Docker images used in your Preview. This only kicks off a
[Refresh](/building-a-preview/work-with-base-previews/change-or-update/#automatically-refresh-a-base-preview), which
runs config file commands from both `update` and `build`. You need to manually
[Rebuild a Base Preview](/building-a-preview/work-with-base-previews/change-or-update/#change-a-base-preview) to pull a
new Docker image. {{% /notice %}}
## Using Base Previews - unexpected Preview build results
When you're using Base Previews, you may experience a few different types of unexpected build results:
- [A Preview that I didn't expect has built](#a-preview-that-i-didn-t-expect-has-built)
- [A Preview I expected to build is missing](#a-preview-that-i-expected-to-build-is-missing)
### A Preview that I didn't expect has built
When you build Previews after building a Base Preview, those Previews generate new Preview builds from every Base
Preview type that matches against the new Preview build. If you're seeing a build that you didn't expect,
[view your Base Preview types](/building-a-preview/work-with-base-previews/view-base-preview-types/) and verify they're
set for the appropriate
[Base Preview Type](/building-a-preview/preview-deep-dive/how-previews-work/#base-preview-auto-select).
One example might be that you have two Previews set as Base Previews: a
[Repository Base Preview](/building-a-preview/preview-deep-dive/how-previews-work/#repository-base-preview) and a
[Branch Base Preview](/building-a-preview/preview-deep-dive/how-previews-work/#branch-base-preview). When building a
Preview that matches against the Branch, you'll actually get two Preview builds: one for the Repository Base Preview,
and one for the Branch Base Preview, since both match the new Preview build. You can change the Base Preview Type - for
example, [stop using a Repository Base Preview](/building-a-preview/work-with-base-previews/stop-using-base-preview/),
or
[change the Base Preview Type to Branch Base Preview](/building-a-preview/work-with-base-previews/change-or-update/#change-base-preview-type) -
if you don't want every new Preview to build from both Base Previews.
### A Preview that I expected to build is missing
If you expected to see a Preview build that you don't see, there are a few different things to check:
- [Are you automatically building Previews from Pull Requests?](#automatically-building-previews-from-pull-requests)
- [Have you set the Base Preview you want to use?](#setting-base-previews)
- [Is your Base Preview set to the appropriate Base Preview Type?](#matching-against-base-preview-types)
#### Automatically building Previews from Pull Requests
In the Preview Won't Build section, we cover
[Previews are not building automatically](../preview-built-problem/#previews-are-not-building-automatically); this same
thing applies when you're using Base Previews. Go into [Repository Settings](/setting-up-tugboat/select-repo-settings/)
and make sure the repository you're building Previews from is set up to {{% ui-text %}}Build Pull Requests
Automatically{{% /ui-text %}}.

#### Setting Base Previews
The next thing to check is whether the Base Preview you expected to see as the basis for the missing Preview build is
set as a Base Preview.
1. Go to [View Base Previews](/building-a-preview/work-with-base-previews/view-base-preview-types/) and confirm that the
Base Preview you expect to see is listed there.
2. If not, you'll need to
[set the Preview that you want to use as your Base Preview](/building-a-preview/work-with-base-previews/set-a-base-preview/).
Then, all new Preview builds that match the
[Base Preview Type](/building-a-preview/preview-deep-dive/how-previews-work/#base-preview-auto-select) you specified
will build from that Base Preview.
#### Matching against Base Preview Types
Finally, verify that the Base Previews you're using are set to the appropriate
[Base Preview Types](/building-a-preview/preview-deep-dive/how-previews-work/#base-preview-auto-select). If you're not
seeing a Preview built from a Base Preview that you expect to see, that Base Preview might not be set for the expected
Base Preview Type.
One example might be that you have a Base Preview set as a Branch Base Preview instead of a Repository Base Preview, and
the pull request you're building the Preview from doesn't merge into that branch. In this case, you'd need to change the
Base Preview Type to Repository, to ensure that _all_ Previews built in this repository build from the Base Preview, or
manually build the Preview from the specific Base Preview you want to use.
1. Go to [View Base Preview Types](/building-a-preview/work-with-base-previews/view-base-preview-types/) and confirm
that the Base Preview is set for the appropriate Base Preview Type.
2. If necessary,
[change the Base Preview Type](/building-a-preview/work-with-base-previews/change-or-update/#change-base-preview-type)
to produce the expected build results.
3. Alternately, if you don't want to change a Base Preview Type, you can
[manually create a Preview build from a specific Base Preview](/building-a-preview/work-with-base-previews/building-new-previews/#build-a-preview-from-a-specific-base-preview).
## Something in my Preview isn't right
It's possible for a Preview to build, but to be missing something you expect to see. This is similar to the
["Preview says it is "ready", but shows a blank page"](#a-preview-says-it-is-ready-but-shows-a-blank-page) issue above;
your Preview may not have failed during the build process, but it's possible something in the configuration file didn't
execute as you expected.
To troubleshoot where this might have gone wrong:
1. Double-check the commands in the [configuration file](/setting-up-tugboat/create-a-tugboat-config-file/).
2. Check [the Preview's logs](../debug-config-file/#how-to-check-the-preview-logs) for any clues.
3. Make use of [Tugboat's terminal access](/tugboat-cli/) to the Preview to do the same type of troubleshooting you
would do if this happened on your local installation.
If you can't figure out why something isn't as you expect in your Preview, let us know! Visit us at
[Help and Support](/support/); we're happy to help.
## Troubleshooting Visual Diffs
- [Visual diffs aren't displaying, or aren't displaying as I expect](#visual-diffs-aren-t-displaying-or-aren-t-displaying-as-i-expect)
- [URL not found](#url-not-found)
- [There was a problem generating Visual Diffs for this Preview.](#there-was-a-problem-generating-visual-diffs-for-this-preview)
### Visual diffs aren't displaying, or aren't displaying as I expect
To configure which pages have visual diffs generated, you need to specify the relative URLs of the pages in a
`visualdiffs` key in the Service definition. That information should be in the
[config file](/setting-up-tugboat/create-a-tugboat-config-file/) of the branch or PR that you're building, _not_ the
Base Preview.
Some things you might try when troubleshooting a visual diff include:
- Confirm the _relative URL_ is correct;
- Consider overriding the default timeout option;
- Consider overriding the default WaitUntil option.
#### Overriding the default timeout option
```yaml
services:
apache:
visualdiffs:
# Create a visualdiff of /blog, but override the default timeout option
- url: /blog
timeout: 10
```
#### Overriding the default waitUntil option
```yaml
services:
apache:
visualdiffs:
# Create a visualdiff of /about, but override the default waitUntil option
- url: /about
waitUntil: domcontentloaded
```
If you've verified the relative URLs are correct, and that information is in the config file of the branch or PR you're
building, but you're still not seeing what you expect to see, reach out to us at [help and support](/support/) - we're
happy to take a look!
### URL not found
If the URL you use when [configuring your visual diffs](/visual-diffs/using-visual-diffs/#how-to-configure-visual-diffs)
doesn't match a _relative URL_ in your site, you may see visual diff panes generate, but with "Not Found" message inside
the images. If this happens, [edit your config file](/setting-up-tugboat/create-a-tugboat-config-file/) to specify the
_relative URLs_ of the pages you want to diff, and then
[Rebuild](/building-a-preview/administer-previews/change-or-update-previews/#rebuild-previews) the Preview.

### There was a problem generating Visual Diffs for this preview.
If there's an issue with the way your visual diffs key is configured, you may get the "There was a problem generating
Visual Diffs for this preview" error. This could be because of the _relative URLs_ in your
[config file](/setting-up-tugboat/create-a-tugboat-config-file/), or it could be that you need to override
[the default timeout option](#overriding-the-default-timeout-option) or the
[default waitUntil option](#overriding-the-default-waituntil-option). If you've tried those things, and are still having
problems, reach out to us at [help and support](/support/) - we're happy to take a look!
| 58.279476 | 179 | 0.772741 | eng_Latn | 0.981498 |
0ccf586a35512001c9b2d5bee536206d5edad6d6 | 2,440 | md | Markdown | translations/ru-RU/content/rest/reference/licenses.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 7 | 2020-11-21T07:03:01.000Z | 2021-12-29T01:56:28.000Z | translations/ru-RU/content/rest/reference/licenses.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 419 | 2021-01-27T03:39:25.000Z | 2022-03-26T20:28:31.000Z | translations/ru-RU/content/rest/reference/licenses.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 3 | 2022-01-02T12:10:01.000Z | 2022-01-08T20:18:08.000Z | ---
title: Licenses
redirect_from:
- /v3/licenses
versions:
free-pro-team: '*'
enterprise-server: '*'
github-ae: '*'
---
The Licenses API returns metadata about popular open source licenses and information about a particular project's license file.
The Licenses API uses [the open source Ruby Gem Licensee](https://github.com/benbalter/licensee) to attempt to identify the project's license. Licensee matches the contents of a project's `LICENSE` file (if it exists) against a short list of known licenses. As a result, the API does not take into account the licenses of project dependencies or other means of documenting a project's license such as references to the license name in the documentation.
If a license is matched, the license key and name returned conforms to the [SPDX specification](https://spdx.org/).
**Note:** These endpoints will also return a repository's license information:
- [Get a repository](/v3/repos/#get-a-repository)
- [List repositories for a user](/v3/repos/#list-repositories-for-a-user)
- [List organization repositories](/v3/repos/#list-organization-repositories)
- [List forks](/rest/reference/repos#list-forks)
- [List repositories watched by a user](/rest/reference/activity#list-repositories-watched-by-a-user)
- [List team repositories](/v3/teams/#list-team-repositories)
{% warning %}
GitHub is a lot of things, but it’s not a law firm. As such, GitHub does not provide legal advice. Using the Licenses API or sending us an email about it does not constitute legal advice nor does it create an attorney-client relationship. If you have any questions about what you can and can't do with a particular license, you should consult with your own legal counsel before moving forward. In fact, you should always consult with your own lawyer before making any decisions that might have legal ramifications or that may impact your legal rights.
GitHub created the License API to help users get information about open source licenses and the projects that use them. We hope it helps, but please keep in mind that we’re not lawyers (at least not most of us aren't) and that we make mistakes like everyone else. For that reason, GitHub provides the API on an “as-is” basis and makes no warranties regarding any information or licenses provided on or through it, and disclaims liability for damages resulting from using the API.
{% endwarning %}
{% include rest_operations_at_current_path %}
| 69.714286 | 551 | 0.779508 | eng_Latn | 0.998546 |
0ccf8f1342fe5fc18b229322821eaec647350a78 | 12,991 | md | Markdown | docs/running_psalm/issues.md | gmazzap/psalm | 8c7423505a4282d1cdd38f3ff9552b911cbd5fa9 | [
"MIT"
] | null | null | null | docs/running_psalm/issues.md | gmazzap/psalm | 8c7423505a4282d1cdd38f3ff9552b911cbd5fa9 | [
"MIT"
] | null | null | null | docs/running_psalm/issues.md | gmazzap/psalm | 8c7423505a4282d1cdd38f3ff9552b911cbd5fa9 | [
"MIT"
] | null | null | null | # Issue types
- [AbstractInstantiation](issues/AbstractInstantiation.md)
- [AbstractMethodCall](issues/AbstractMethodCall.md)
- [ArgumentTypeCoercion](issues/ArgumentTypeCoercion.md)
- [AssignmentToVoid](issues/AssignmentToVoid.md)
- [CircularReference](issues/CircularReference.md)
- [ConflictingReferenceConstraint](issues/ConflictingReferenceConstraint.md)
- [ContinueOutsideLoop](issues/ContinueOutsideLoop.md)
- [DeprecatedClass](issues/DeprecatedClass.md)
- [DeprecatedConstant](issues/DeprecatedConstant.md)
- [DeprecatedFunction](issues/DeprecatedFunction.md)
- [DeprecatedInterface](issues/DeprecatedInterface.md)
- [DeprecatedMethod](issues/DeprecatedMethod.md)
- [DeprecatedProperty](issues/DeprecatedProperty.md)
- [DeprecatedTrait](issues/DeprecatedTrait.md)
- [DocblockTypeContradiction](issues/DocblockTypeContradiction.md)
- [DuplicateArrayKey](issues/DuplicateArrayKey.md)
- [DuplicateClass](issues/DuplicateClass.md)
- [DuplicateFunction](issues/DuplicateFunction.md)
- [DuplicateMethod](issues/DuplicateMethod.md)
- [DuplicateParam](issues/DuplicateParam.md)
- [EmptyArrayAccess](issues/EmptyArrayAccess.md)
- [FalsableReturnStatement](issues/FalsableReturnStatement.md)
- [FalseOperand](issues/FalseOperand.md)
- [ForbiddenCode](issues/ForbiddenCode.md)
- [ForbiddenEcho](issues/ForbiddenEcho.md)
- [ImplementedParamTypeMismatch](issues/ImplementedParamTypeMismatch.md)
- [ImplementedReturnTypeMismatch](issues/ImplementedReturnTypeMismatch.md)
- [ImplicitToStringCast](issues/ImplicitToStringCast.md)
- [ImpureByReferenceAssignment](issues/ImpureByReferenceAssignment.md)
- [ImpureFunctionCall](issues/ImpureFunctionCall.md)
- [ImpureMethodCall](issues/ImpureMethodCall.md)
- [ImpurePropertyAssignment](issues/ImpurePropertyAssignment.md)
- [ImpureStaticProperty](issues/ImpureStaticProperty.md)
- [ImpureStaticVariable](issues/ImpureStaticVariable.md)
- [InaccessibleClassConstant](issues/InaccessibleClassConstant.md)
- [InaccessibleMethod](issues/InaccessibleMethod.md)
- [InaccessibleProperty](issues/InaccessibleProperty.md)
- [InterfaceInstantiation](issues/InterfaceInstantiation.md)
- [InternalClass](issues/InternalClass.md)
- [InternalMethod](issues/InternalMethod.md)
- [InternalProperty](issues/InternalProperty.md)
- [InvalidArgument](issues/InvalidArgument.md)
- [InvalidArrayAccess](issues/InvalidArrayAccess.md)
- [InvalidArrayAssignment](issues/InvalidArrayAssignment.md)
- [InvalidArrayOffset](issues/InvalidArrayOffset.md)
- [InvalidCast](issues/InvalidCast.md)
- [InvalidCatch](issues/InvalidCatch.md)
- [InvalidClass](issues/InvalidClass.md)
- [InvalidClone](issues/InvalidClone.md)
- [InvalidDocblock](issues/InvalidDocblock.md)
- [InvalidDocblockParamName](issues/InvalidDocblockParamName.md)
- [InvalidExtendClass](issues/InvalidExtendClass.md)
- [InvalidFalsableReturnType](issues/InvalidFalsableReturnType.md)
- [InvalidFunctionCall](issues/InvalidFunctionCall.md)
- [InvalidGlobal](issues/InvalidGlobal.md)
- [InvalidIterator](issues/InvalidIterator.md)
- [InvalidMethodCall](issues/InvalidMethodCall.md)
- [InvalidNullableReturnType](issues/InvalidNullableReturnType.md)
- [InvalidOperand](issues/InvalidOperand.md)
- [InvalidParamDefault](issues/InvalidParamDefault.md)
- [InvalidParent](issues/InvalidParent.md)
- [InvalidPassByReference](issues/InvalidPassByReference.md)
- [InvalidPropertyAssignment](issues/InvalidPropertyAssignment.md)
- [InvalidPropertyAssignmentValue](issues/InvalidPropertyAssignmentValue.md)
- [InvalidPropertyFetch](issues/InvalidPropertyFetch.md)
- [InvalidReturnStatement](issues/InvalidReturnStatement.md)
- [InvalidReturnType](issues/InvalidReturnType.md)
- [InvalidScalarArgument](issues/InvalidScalarArgument.md)
- [InvalidScope](issues/InvalidScope.md)
- [InvalidStaticInvocation](issues/InvalidStaticInvocation.md)
- [InvalidStringClass](issues/InvalidStringClass.md)
- [InvalidTemplateParam](issues/InvalidTemplateParam.md)
- [InvalidThrow](issues/InvalidThrow.md)
- [InvalidToString](issues/InvalidToString.md)
- [LessSpecificImplementedReturnType](issues/LessSpecificImplementedReturnType.md)
- [LessSpecificReturnStatement](issues/LessSpecificReturnStatement.md)
- [LessSpecificReturnType](issues/LessSpecificReturnType.md)
- [LoopInvalidation](issues/LoopInvalidation.md)
- [MethodSignatureMismatch](issues/MethodSignatureMismatch.md)
- [MethodSignatureMustOmitReturnType](issues/MethodSignatureMustOmitReturnType.md)
- [MismatchingDocblockParamType](issues/MismatchingDocblockParamType.md)
- [MismatchingDocblockReturnType](issues/MismatchingDocblockReturnType.md)
- [MissingClosureParamType](issues/MissingClosureParamType.md)
- [MissingClosureReturnType](issues/MissingClosureReturnType.md)
- [MissingConstructor](issues/MissingConstructor.md)
- [MissingDependency](issues/MissingDependency.md)
- [MissingDocblockType](issues/MissingDocblockType.md)
- [MissingFile](issues/MissingFile.md)
- [MissingImmutableAnnotation](issues/MissingImmutableAnnotation.md)
- [MissingParamType](issues/MissingParamType.md)
- [MissingPropertyType](issues/MissingPropertyType.md)
- [MissingReturnType](issues/MissingReturnType.md)
- [MissingTemplateParam](issues/MissingTemplateParam.md)
- [MissingThrowsDocblock](issues/MissingThrowsDocblock.md)
- [MixedArgument](issues/MixedArgument.md)
- [MixedArgumentTypeCoercion](issues/MixedArgumentTypeCoercion.md)
- [MixedArrayAccess](issues/MixedArrayAccess.md)
- [MixedArrayAssignment](issues/MixedArrayAssignment.md)
- [MixedArrayOffset](issues/MixedArrayOffset.md)
- [MixedArrayTypeCoercion](issues/MixedArrayTypeCoercion.md)
- [MixedAssignment](issues/MixedAssignment.md)
- [MixedClone](issues/MixedClone.md)
- [MixedFunctionCall](issues/MixedFunctionCall.md)
- [MixedInferredReturnType](issues/MixedInferredReturnType.md)
- [MixedMethodCall](issues/MixedMethodCall.md)
- [MixedOperand](issues/MixedOperand.md)
- [MixedPropertyAssignment](issues/MixedPropertyAssignment.md)
- [MixedPropertyFetch](issues/MixedPropertyFetch.md)
- [MixedPropertyTypeCoercion](issues/MixedPropertyTypeCoercion.md)
- [MixedReturnStatement](issues/MixedReturnStatement.md)
- [MixedReturnTypeCoercion](issues/MixedReturnTypeCoercion.md)
- [MixedStringOffsetAssignment](issues/MixedStringOffsetAssignment.md)
- [MoreSpecificImplementedParamType](issues/MoreSpecificImplementedParamType.md)
- [MoreSpecificReturnType](issues/MoreSpecificReturnType.md)
- [MutableDependency](issues/MutableDependency.md)
- [NoInterfaceProperties](issues/NoInterfaceProperties.md)
- [NoValue](issues/NoValue.md)
- [NonStaticSelfCall](issues/NonStaticSelfCall.md)
- [NullArgument](issues/NullArgument.md)
- [NullArrayAccess](issues/NullArrayAccess.md)
- [NullArrayOffset](issues/NullArrayOffset.md)
- [NullFunctionCall](issues/NullFunctionCall.md)
- [NullIterator](issues/NullIterator.md)
- [NullOperand](issues/NullOperand.md)
- [NullPropertyAssignment](issues/NullPropertyAssignment.md)
- [NullPropertyFetch](issues/NullPropertyFetch.md)
- [NullReference](issues/NullReference.md)
- [NullableReturnStatement](issues/NullableReturnStatement.md)
- [OverriddenMethodAccess](issues/OverriddenMethodAccess.md)
- [OverriddenPropertyAccess](issues/OverriddenPropertyAccess.md)
- [ParadoxicalCondition](issues/ParadoxicalCondition.md)
- [ParentNotFound](issues/ParentNotFound.md)
- [PossibleRawObjectIteration](issues/PossibleRawObjectIteration.md)
- [PossiblyFalseArgument](issues/PossiblyFalseArgument.md)
- [PossiblyFalseIterator](issues/PossiblyFalseIterator.md)
- [PossiblyFalseOperand](issues/PossiblyFalseOperand.md)
- [PossiblyFalsePropertyAssignmentValue](issues/PossiblyFalsePropertyAssignmentValue.md)
- [PossiblyFalseReference](issues/PossiblyFalseReference.md)
- [PossiblyInvalidArgument](issues/PossiblyInvalidArgument.md)
- [PossiblyInvalidArrayAccess](issues/PossiblyInvalidArrayAccess.md)
- [PossiblyInvalidArrayAssignment](issues/PossiblyInvalidArrayAssignment.md)
- [PossiblyInvalidArrayOffset](issues/PossiblyInvalidArrayOffset.md)
- [PossiblyInvalidCast](issues/PossiblyInvalidCast.md)
- [PossiblyInvalidFunctionCall](issues/PossiblyInvalidFunctionCall.md)
- [PossiblyInvalidIterator](issues/PossiblyInvalidIterator.md)
- [PossiblyInvalidMethodCall](issues/PossiblyInvalidMethodCall.md)
- [PossiblyInvalidOperand](issues/PossiblyInvalidOperand.md)
- [PossiblyInvalidPropertyAssignment](issues/PossiblyInvalidPropertyAssignment.md)
- [PossiblyInvalidPropertyAssignmentValue](issues/PossiblyInvalidPropertyAssignmentValue.md)
- [PossiblyInvalidPropertyFetch](issues/PossiblyInvalidPropertyFetch.md)
- [PossiblyNullArgument](issues/PossiblyNullArgument.md)
- [PossiblyNullArrayAccess](issues/PossiblyNullArrayAccess.md)
- [PossiblyNullArrayAssignment](issues/PossiblyNullArrayAssignment.md)
- [PossiblyNullArrayOffset](issues/PossiblyNullArrayOffset.md)
- [PossiblyNullFunctionCall](issues/PossiblyNullFunctionCall.md)
- [PossiblyNullIterator](issues/PossiblyNullIterator.md)
- [PossiblyNullOperand](issues/PossiblyNullOperand.md)
- [PossiblyNullPropertyAssignment](issues/PossiblyNullPropertyAssignment.md)
- [PossiblyNullPropertyAssignmentValue](issues/PossiblyNullPropertyAssignmentValue.md)
- [PossiblyNullPropertyFetch](issues/PossiblyNullPropertyFetch.md)
- [PossiblyNullReference](issues/PossiblyNullReference.md)
- [PossiblyUndefinedArrayOffset](issues/PossiblyUndefinedArrayOffset.md)
- [PossiblyUndefinedGlobalVariable](issues/PossiblyUndefinedGlobalVariable.md)
- [PossiblyUndefinedIntArrayOffset](issues/PossiblyUndefinedIntArrayOffset.md)
- [PossiblyUndefinedMethod](issues/PossiblyUndefinedMethod.md)
- [PossiblyUndefinedStringArrayOffset](issues/PossiblyUndefinedStringArrayOffset.md)
- [PossiblyUndefinedVariable](issues/PossiblyUndefinedVariable.md)
- [PossiblyUnusedMethod](issues/PossiblyUnusedMethod.md)
- [PossiblyUnusedParam](issues/PossiblyUnusedParam.md)
- [PossiblyUnusedProperty](issues/PossiblyUnusedProperty.md)
- [PropertyNotSetInConstructor](issues/PropertyNotSetInConstructor.md)
- [PropertyTypeCoercion](issues/PropertyTypeCoercion.md)
- [RawObjectIteration](issues/RawObjectIteration.md)
- [RedundantCondition](issues/RedundantCondition.md)
- [RedundantConditionGivenDocblockType](issues/RedundantConditionGivenDocblockType.md)
- [RedundantIdentityWithTrue](issues/RedundantIdentityWithTrue.md)
- [ReferenceConstraintViolation](issues/ReferenceConstraintViolation.md)
- [ReservedWord](issues/ReservedWord.md)
- [StringIncrement](issues/StringIncrement.md)
- [TaintedInput](issues/TaintedInput.md)
- [TooFewArguments](issues/TooFewArguments.md)
- [TooManyArguments](issues/TooManyArguments.md)
- [TooManyTemplateParams](issues/TooManyTemplateParams.md)
- [TraitMethodSignatureMismatch](issues/TraitMethodSignatureMismatch.md)
- [TypeDoesNotContainNull](issues/TypeDoesNotContainNull.md)
- [TypeDoesNotContainType](issues/TypeDoesNotContainType.md)
- [UncaughtThrowInGlobalScope](issues/UncaughtThrowInGlobalScope.md)
- [UndefinedClass](issues/UndefinedClass.md)
- [UndefinedConstant](issues/UndefinedConstant.md)
- [UndefinedDocblockClass](issues/UndefinedDocblockClass.md)
- [UndefinedFunction](issues/UndefinedFunction.md)
- [UndefinedGlobalVariable](issues/UndefinedGlobalVariable.md)
- [UndefinedInterface](issues/UndefinedInterface.md)
- [UndefinedInterfaceMethod](issues/UndefinedInterfaceMethod.md)
- [UndefinedMagicMethod](issues/UndefinedMagicMethod.md)
- [UndefinedMagicPropertyAssignment](issues/UndefinedMagicPropertyAssignment.md)
- [UndefinedMagicPropertyFetch](issues/UndefinedMagicPropertyFetch.md)
- [UndefinedMethod](issues/UndefinedMethod.md)
- [UndefinedPropertyAssignment](issues/UndefinedPropertyAssignment.md)
- [UndefinedPropertyFetch](issues/UndefinedPropertyFetch.md)
- [UndefinedThisPropertyAssignment](issues/UndefinedThisPropertyAssignment.md)
- [UndefinedThisPropertyFetch](issues/UndefinedThisPropertyFetch.md)
- [UndefinedTrait](issues/UndefinedTrait.md)
- [UndefinedVariable](issues/UndefinedVariable.md)
- [UnevaluatedCode](issues/UnevaluatedCode.md)
- [UnimplementedAbstractMethod](issues/UnimplementedAbstractMethod.md)
- [UnimplementedInterfaceMethod](issues/UnimplementedInterfaceMethod.md)
- [UninitializedProperty](issues/UninitializedProperty.md)
- [UnnecessaryVarAnnotation](issues/UnnecessaryVarAnnotation.md)
- [UnrecognizedExpression](issues/UnrecognizedExpression.md)
- [UnrecognizedStatement](issues/UnrecognizedStatement.md)
- [UnresolvableInclude](issues/UnresolvableInclude.md)
- [UnusedClass](issues/UnusedClass.md)
- [UnusedClosureParam](issues/UnusedClosureParam.md)
- [UnusedFunctionCall](issues/UnusedFunctionCall.md)
- [UnusedMethod](issues/UnusedMethod.md)
- [UnusedMethodCall](issues/UnusedMethodCall.md)
- [UnusedParam](issues/UnusedParam.md)
- [UnusedProperty](issues/UnusedProperty.md)
- [UnusedPsalmSuppress](issues/UnusedPsalmSuppress.md)
- [UnusedVariable](issues/UnusedVariable.md)
*Source: `gongmingqm10/OpenLMIS-TechOps`, `provisioning/roles/ansible-role-ruby/README.md`.*

# Ansible Role: Ruby
[](https://travis-ci.org/geerlingguy/ansible-role-ruby)
Installs Ruby and the Bundler gem on Linux.
## Requirements
None.
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml
workspace: /root
```
The location where temporary files will be downloaded in preparation for Ruby installation.
```yaml
ruby_rubygems_package_name: rubygems
```
The name of the `rubygems` package. Generally, the default should work; but it will be set to `rubygems-integration` automatically on Ubuntu Trusty (14.04).
```yaml
ruby_install_gems: []
```
A list of Ruby gems to install (just the name of the gem to be installed). This is meant as a simple convenience, and will only install the latest version of the gem. If you need to install gems with more options or specificity, you can do so elsewhere in your playbook.
```yaml
ruby_install_from_source: false
```
By default, this role will install whatever version of ruby is available through your system's package manager (`apt` or `yum`). You can install whatever version you like (including the latest release) by setting this to `true` and/or updating the `ruby_download_url` and `ruby_version`.
```yaml
ruby_download_url: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.1.tar.gz
```
The URL from which Ruby will be downloaded (only used if `ruby_install_from_source` is `true`).
```yaml
ruby_version: 2.2.1
```
The version of ruby that will be installed (only used if `ruby_install_from_source` is `true`).
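For example, a playbook that installs Ruby 2.2.1 from source and adds a couple of gems might look like the following sketch (the gem list here is purely illustrative):

```yaml
- hosts: all
  become: true
  vars:
    ruby_install_from_source: true
    ruby_download_url: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.1.tar.gz
    ruby_version: 2.2.1
    ruby_install_gems:
      - bundler
      - rake
  roles:
    - geerlingguy.ruby
```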
## Dependencies
None.
## Example Playbook
```yaml
- hosts: server
  roles:
    - { role: geerlingguy.ruby }
```
## License
MIT / BSD
## Author Information
This role was created in 2014 by [Jeff Geerling](http://jeffgeerling.com/), author of [Ansible for DevOps](http://ansiblefordevops.com/).
*Source: `pagutierrez/pagutierrez.github.io`, `_proceedings/2009-02-01-Ensembles-de-redes-neuronales-construidos-mediante-algoritmos-hibridos-multiobjetivo-para-optimizar-.md`.*

---
title: "Ensembles de redes neuronales construidos mediante algoritmos híbridos multiobjetivo para optimizar la precisión y la sensitividad"
collection: proceedings
permalink: /proceeding/2009-02-01-Ensembles-de-redes-neuronales-construidos-mediante-algoritmos-hibridos-multiobjetivo-para-optimizar-
date: 2009-02-01
venue: 'In VI Congreso Español sobre Metaheurísticas and Algoritmos Evolutivos y Bioinspirados (MAEB09)'
citation: 'Juan Carlos Fernández, Mariano Carbonero-Ruz, <strong>Pedro Antonio Gutiérrez</strong>, César Hervás-Martínez, "Ensembles de redes neuronales construidos mediante algoritmos híbridos multiobjetivo para optimizar la precisión y la sensitividad." In VI Congreso Español sobre Metaheurísticas and Algoritmos Evolutivos y Bioinspirados (MAEB09), 2009, Málaga, España, pp.309-316.'
---
Use [Google Scholar](https://scholar.google.com/scholar?q=Ensembles+de+redes+neuronales+construidos+mediante+algoritmos+hibridos+multiobjetivo+para+optimizar+la+precision+y+la+sensitividad){:target="_blank"} for full citation | 116.888889 | 397 | 0.826996 | spa_Latn | 0.711982 |
*Source: `njwedwards/itn`, `_posts/2007-05-03-using-screen.md`.*

---
tags: [Notebooks/itblog]
title: Using Screen
created: '2019-09-29T11:18:01.719Z'
modified: '2019-10-18T14:22:57.151Z'
---
The Unix program `screen` can be very helpful. It's good because the session that you are using will continue to run after you log off, which is especially useful if you are doing a download or running a command that takes a long time. Below are some common commands:
```
C-a n = next
C-a p = previous
C-a d = detach (exit)
C-a c = new terminal
C-a S = split the screen
C-a : resize 20
```
From the command line:
```
screen -ls
screen -R    (reattach)
```
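It can also help to give each session a name, so you can reattach to the right one when several are running:

```
screen -S backup     # start a new session named "backup"
screen -ls           # list running sessions
screen -r backup     # reattach to the named session
```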
Extra help [http://gentoo-wiki.com/TIP\_Using\_screen][1]
[1]: http://gentoo-wiki.com/TIP_Using_screen
*Source: `Letterus/CoreObject`, `TODO.md`.*

TODO
====
Major Missing Features
----------------------
- COUndoTrack doesn't cope with attempts by the user to undo changes in persistent roots that are not present in the store (assertions will fail)
- Persistent root faulting; currently the entire store is loaded in memory
- A challenge will be supporting cross-persistent root reference inverses.
i.e. suppose we open a store, load a document persistent root into memory.
Accessing the derived property "document.tags" requires a search query over
the current revisions of the current branches of all persistent roots in the whole store
(at least conceptually). The store keeps an index of cross-persistent references
anticipating this problem, but it's not currently used. The most straightforawrd
solution would be to load every persistent root that has ever had a cross-reference
to document at the same time as when we load "document".
- Partial loading (loading an object using another entity e.g. COPerson as COObject)
- Add -persistentEntityDescription (for a partially loaded person object, -persistentEntityDescription would return COPerson when -entityDescription returns COObject)
```objc
- (ETEntityDescription *)persistentEntityDescription
{
    return [self.objectGraphContext.modelDescriptionRepository descriptionForName: @"COObject"];
}
```
- Better query support (in-memory and in-store)
- Introduce our own query objects for expressing a search query, with
implementations that can run against the store in SQL as well as in memory.
We will probably need to combine both appraoches to complete a search.
- NSPredicate to SQL generator using OMeta (not sure)
- Import/Export
- Write a generic object manager than can open any store and display any COObject
- Should support displaying all key/values (like past StoreBorwser prototypes)
Not blocking release, but critical for ObjectManager
- The missing feature is: CoreObject can't load a COItem as a COObject unless it has the matching
entity description available in memory. This can be a barrier for debugging
(you can't take a saved item graph from an app and load it in a test case, without adding all
of the relevant model classes to the test case)
- Implement something like COSelectiveHistoryTrack (using a query to select revisions based on criterias e.g. inner object subset)
- Something to aggregate the history of multiple persistent roots in this same class?
- Attributed string merge needs work - doing a selective undo of an older change
tends to corrupt the document. We probably need different merge variants for collaborative
editing and (merging branches/selective undo)
Open Questions
--------------
- Merging UI
- We only have automatic metamodel-driven copying on subsets of an inner object graph. Investigate copying a persistent root and also referenced persistent roots. For example, being able to copy the Typewriter library persistent root, and have all note persistent roots in the library automatically copied as well, would be genuinely useful. This seems to be essentially the same problem we already solve with COCopier. Note that the persistent root copies would be essentially cheap copies, but we would have to rewrite the cross-references to point to the copies instead of the originals.
- Do cross-store references make sense? i.e. switch from COPath to a URL?
- Adjust COEditingContext loaded object cache to use a strategy that matches which root objects are going to be accessed recurrently (e.g. photos in a Photo Manager should have priority over other root objects)
- Scalability to 50k persistent roots, 50k root objects
- Reintroduce history compaction (will it be needed?), which was present but bitrotted and is not supported right now.
Possibly just collapse "minor edits" or collapse to daily snapshots + explicit tags. Not sure how much space this will save though.
Another trick we can try is: when we decided to stop appending deltas and write a new full snapshot,
take the previous full snapshot and all of the deltas, and zlib compress the binary
representations of those item graphs as a single block.
- Perhaps support Layers in addition to XMPP
- https://layer.com
- http://www.theverge.com/2013/12/4/5173726/you-have-too-many-chat-apps-can-layer-connect-them
Future Work (Minor features, refactoring, cleanup)
--------------------------------------------------
- General
- Decide whether to enable Clang warning 'Implicit Conversions to 32 Bit Type'
- This produces some CoreObject warnings currently
- I have set this warning explicitly to 'No' in TestCoreObject (where we probably don't want it) - Quentin
- At the same time, we could remove -Wno-sign-compare in CoreObject target too (used to inhibit some -Wextra warnings)
- GNUstep
- Port Samples
- Port benchmark suite
- Add -URLByAppendingPathComponent:isDirectory: and -fileHandleForReadingFromURL:error: to GNUstep (see COSQLiteStore+Attachments)
- Add -dateWithContentsOfFile:options:error: to GNUstep (see COOCommitDescriptor)
- Add -predicateWithBlock: to GNUstep (see COSmartGroup)
- Perhaps tweak `[[NSDecimalNumber defaultBehavior] scale]` to return NSDecimalScale by default as macOS does
- Perhaps don't treat `-[NSSet countByEnumeratingWithState:objects:count:]` as a primitive method to match macOS behavior
- Store
- exportRevisions: and importRevisions: that take a set of CORevisionID an returns a delta-compressed NSData
suitable for network transport.
- GC: only collect divergent revisions older than X days
- Switch from FMDB to an SQL abstraction backend
- Async attachment import
- Revisit deletion
- Don't leave empty backing store databases cluttering the store directory
- Doesn't happen while running the test suite anymore, but I don't remove it since backing stores might still need to be deleted on store compaction (for deleted persistent roots).
- Add support for indexing properties other than text ones. Either use a single table with
property, value columns and a composite key on (property, value), or one table per
indexed property name.
- COEditingContext
- Expose COSQLiteStore's attachments feature
- expose COSQLiteStore's finalize deletion method
- expose store's searching functionality
- Switch to NSUUID everywhere?
- Expose COEditingContext(Debugging), probably with a header we can explicit import (should be outside of COEditingContext+Private)
- COPersistentRoot
- Refactor handling of store notifications
- Add -initialRevision?
- COBranch
- Extend COBranch API to support branches and merging revisions accross tracks
- Clean up selective undo code and share implementation with undo system
- Implement -isCopy and -isTrunkBranch
- Record last modified date on branches
- COPersistentObjectContext
- Find a better name for this protocol, that gives it less importance, and convey it can represent a transient object graph context (or at least doesn't go against this use case).
- Perhaps move it to EtoileUI
- Metamodel
- Add checks that derived are not persistent (readonly can be persistent, for set-once properties. Useful for immutable objects).
- Add check that parent property (isContainer = YES) is derived
- Add check that one side of an opposite is derived (now sanity-checked in COObject+RelationshipCache, but should be impossible to configure a metamodel with this constraint broken)
- Review other constraints
- Add a check that the derived side of a multivalued opposite is unordered
- Add a check that complains about ivars for incoming relationships
- Add constraint stating that a keyed relationships cannot have an opposite (at least for now)
- Move to CoreObject - extract it to a tiny framework CoreObjectModelDescription.framework with no dependencies except Foundation
- Improve metamodel checker to ensure matching Objective-C property types (e.g. a ETCollection must be declared as multivalued, a NSArray as ordered)
- Add a check that complains about serialization accessors or serialization transformers for relationships if there is no custom persistent type declared
- For keyed relationships, it's not clear what the permissible types of keys are.
The tests just test NSString.
- perform validation at freezing time on the set of entities being frozen. This means a group of entities can only become frozen if they are valid.
- call +newEntityDescription methods lazily, either at first use of -entityDescriptionForClass: or -descriptionForName: on ETModelDescriptionRepository (for performance)
- Have a look at the section "Discussion of Composite & Aggregate Terminology in UML" in ETModelElementDescription's class description.
See if we want to tweak the metamodel to be closer to UML or FAME. Consider how this will impact COCopier.
- COObject
- If you have a property in a superclass that's readonly and implemented as an ObjC method (e.g. -[COObject identifier]),
and you override the property in a subclass but make it readwrite in the metamodel (like COLibrary does), you won't get
autogenerated variable storage accessors. We should either fail explicitly, or support this.
- Better error message if you try to use a composite relationship across persistent roots,
currently you get a COPath does not respond to UUIDValue exception.
- We should have dedicated array/set multivalue mutation methods rather than using:
`-[COObject (void)insertObject: (id)object atIndex: (NSUInteger)index hint: (id)hint forProperty: (NSString *)key]`
for both, IMO (Eric)
- Use NSOrderedSet for relationships
- Throw an exception if the developer names a property that conflicts with a NSObject/COObject method
- Add dictionary update tests to TestObjectUpdate.m
- Add relationship update check to detect persistent objects inserted into a transient relationship. The object put in the relationship could belong to:
- a transient object graph context --> allowed (see transient property _dropIndicator in -[ETLayout awakeFromDeserialization)
- the same object graph context --> disallowed (otherwise we can accidentally easily look up shared instance using the wrong object graph context e.g. `_dropIndicator = [ETDropIndicator sharedInstanceForObjectGraphContext: layout.objectGraphContext])`
- some other persistent object graph context --> allowed or disallowed (not sure yet)
- Make primitives with potentially mutable subclasses (NSString and NSData)
have immutable copies made before being stored in the variable storage.
- Add more transient relationship tests
- Check memory management for transient relationships:
- Transient collection (which is retaining) can contains persistent COObjects?
- Persistent collection (weak references) can contain transient COObjects
- Fix problem with properties that have the first letter uppercase
- Synthesize more KVC accessors, such as the multivalued ones
- Synthesize accessors for the primitive types (int, BOOL, etc). Just a 'nice to have' feature.
- Double check support for properties that are both readOnly and persistent (Basically set-once.)
- For all of the relationship tests, test cross persistent root, and a mix of cross-persistent root
and inner objects with in the same relationship.
- For all of the relationship tests, we need to come up with some solution to exercise all Relationship tests in
the following situations at least:
- references in the same transient object contexts (we just cover this case currently)
- references across transient object contexts
- references across persistent object graph contexts
To get a good test coverage, we probably need to have an abstract TestRelationship test class and concrete test subclasses covering each variation by initializing the test class instance with the proper object graph contexts. For persistent object graph contexts, we want some mechanism to check the relationship states:
- before commit (we need to catch tracking and non-tracking branch border cases that can arise on persistent root initialization)
- after commit in another recreated context
Wrapping all test method code with -checkObjectGraphContextWithExistingAndNew(Editing)Context:inBlock: in abstract classes such as TestUnivaluedRelationship should do the trick.
```objc
@interface TestRelationship : TestCommon // Abstract
{
    /* If we reference objects in the same object graph context, both ivars are initialized to point to the same context */
    COObjectGraphContext *sourceObjectGraphContext;
    COObjectGraphContext *targetObjectGraphContext;
}
@end

@interface TestUnivaluedRelationship : TestRelationship // Abstract
@end

@interface TestUnivaluedRelationshipAccrossPersistentObjectGraphContexts : TestUnivaluedRelationship <UKTest>
@end
```
See -testUnivaluedGroupWithOppositeInPersistentRoot as a starting point.
- Test entity type checking for all relationship types (inserting a child where the type is wrong.)
Also test this for cross-persistent root relationships.
- The relationship tests mostly test NSString and COObject references. Test
more with NSData, NSNumber, COAttachmentID, etc.
- Attempting to set a value for a readonly property should raise an exception
- Expose and document the COPrimitiveCollection protocol and the
COMutableArray and related collection subclasses, so advanced use cases that need to
use ivar storage for collections (or implement custom storage data structures)
instead of the variable storage can integrate correctly with CoreObject.
- Avoid hardcoding the model description repository in +automaticallyNotifiesObserversForKey:
- the simplest solution could be to return NO and let subclasses override it to return YES (e.g. for the rare case, where the user wants to have hand-written accessors + automatic KVO notifications for some transient properties)
- or to return NO and just forbid overriding it (since returning YES seems almost useless if we synthesize accessors, and we support @dynamic even for transient properties)
- check `+[COObject automaticallyNotifiesObserversForKey:]` doesn't break KVO notifications for transient and non-derived properties in EtoileUI subclasses
- Add some collection-oriented KVO update tests to TestObjectUpdate
- COObjectGraphContext
- Test `-[COObjectGraphContext discardAllChanges]` during synchronization. It's important to use
-reloadAtRevision: and not -setCurrentRevision: here, otherwise -supportsRevert would go our way
- Test nil univalued relationship case (see r10270)
- COItem
- tidy up ugly NSMutableDictionary / NSDictionary casting
- use a `std::unordered_map<NSString *, std::pair<uint32_t, id>>`
(i.e. map from string to (COType, object) pair).
(Well, use a simple wrapper class instead of std::pair.) NOTE: using
SEL as a map key won't work on libobjc2.
- Write tests that ensure we correctly copy (see -mutableCopyWithNameMapping:):
- relationships (mixing UUIDs and COPath)
- collections of value objects
- Collaboration
- COSynchronizer should handle syncing persistent root / branch metadata changes?
- support sending attachments (or large persistent roots) using XMPP file transfer
- COSynchronizerClient is missing the detection for illegal reverts
- COUndoTrack
- Doesn’t work: [[COUndoTrack trackForPattern: @"org.etoile.projectdemo*" withEditingContext: nil] clear];
- Perhaps have different commands for a regular commit and a revert.
It's probably confusing or dangerous that undoing a revert can cause a selective undo (as it can now),
whereas for undoing a regular commit it's okay to make a selective undo.
- Refactor COCommand initialization which is a mess
- Rename COTrack to COHistoryTrack protocol
- Reduce commit description related code duplication in CORevision and COCommandGroup
- Concurrency between processes is not robust (no checking that in-memory
revisions snapshot is in sync with what is in the DB)
- e.g:
```
a = [COUndoTrack trackForName: @"test" withEditingContext: ctx]
b = [COUndoTrack trackForName: @"test" withEditingContext: ctx]
...
[ctx commitWithUndoTrack: a]
```
`[a nodes]` will not equal `[b nodes]`, but I would expect them to be equal
- Maybe support user-defined actions that track state not managed by CoreObject
- Model objects (COObject subclasses included with CoreObject for convenience)
- for COLibrary, evaluate whether we can enforce the constraint that one persistent root belongs to one library (we discussed this and we can't)
- Test unordered COCollection subclass
- Serialization
- Make a strict set of supported types, see: Scraps/serialization_todo.txt
- Utilities
- Define some CoreObject exceptions perhaps
- COAbstractClassInitializationException
- COImmutableCollectionMutationException
- what else?
- Write commit descriptor tests (localization is untested at this time)
- Implement copying commit descriptor plist and string files to ~/Library/CoreObject/Commits, in order to support browsing changes done by applications uninstalled from the system
- Integrate COCommitDescriptor with Schema Upgrade
- adjust to support versioned descriptors
- multiple commit descriptor versions can present per domain in ~/Library/CoreObject/Commits
- COError API will probably need adjustements once we report more errors through the core API (for now, we just report validation errors at commit time)
- Documentation
- Expose the following API if needed once more mature and documented:
- COCopier, COPrimitiveCollection
- all COCommand subclasses
- Diff
- Extras
- Store
- StorageDataModel
- Synchronization
- Once all COCommand class hierarchy appear in the API documentation, their @group should be changed to 'Undo Actions' to ensure COUndoTrack and COTrack don't get lost among all these subclasses.
- Check and update all Core, Model, Undo and Utilities API documentation (underway)
- Reviewed classes
- Core: COObjectGraphContext, COEditingContext, COBranch, CORevision, COObject
- Model: all
- Undo: all, but needs to be checked again due to Undo-tree rewrite
- Utilities: COCommitDescriptor, COError
- talk about how we automatically synchronize COEditingContexts (in the same process or different processes), we should explicitly talk about cross references
- talk about change notifications in the class descriptions. mention the notifications we support for each class description.
- Code Quality
- Reviewed classes: none (COObjectGraphContext, COEditingContext, COBranch underwent a preliminary review)
- COAttributedString
- Automatically split chunks longer than X characters
*Source: `BrynardSecurity-terraform/xg-cwp-demo`, `README.md`.*

# xg-cwp-demo
AWS XG and Cloud Workload Protection Demo
*Source: `rise-lang/shine`, `docs/language-reference/rise-types.md`.*

---
title: RISE Type System
sidebar_label: RISE Type System
---
## Well-formed Types
```scala mdoc:passthrough
println(s"""
### Kinds
$$
${rise.core.types.latex.wellFormedTypes.kinds}
$$
### Kinding Structural Rule
$$
${rise.core.types.latex.wellFormedTypes.structuralRules}
$$
### Type Equality
$$
${rise.core.types.latex.wellFormedTypes.typeEquality}
$$
### Address Spaces
$$
${rise.core.types.latex.wellFormedTypes.addressSpaces}
$$
### Nat to Nat Type Level Functions
$$
${rise.core.types.latex.wellFormedTypes.natToNatTypeLevelFunctions}
$$
### Nat to Data Type Level Functions
$$
${rise.core.types.latex.wellFormedTypes.natToDataTypeLevelFunctions}
$$
### Natural numbers
$$
${rise.core.types.latex.wellFormedTypes.naturalNumbers}
$$
### Data Types
$$
${rise.core.types.latex.wellFormedTypes.dataTypes}
$$
### Types
$$
${rise.core.types.latex.wellFormedTypes.types}
$$
""")
```
## Typing Rules
```scala mdoc:passthrough
println(s"""
### Structural Rules
$$
${rise.core.types.latex.typingRules.structural}
$$
### Introduction and Elimination Rules
$$
${rise.core.types.latex.typingRules.introAndElim}
$$
""")
```
*Source: `questionedandmarked/markdown-portfolio`, `_includes/03-links.md`.*

[is youtube a social media site?](https://www.youtube.com/)
*Source: `hainingzhang/KubeFATE`, `k8s-deploy/README.md`.*

# Kubernetes Deployment
We recommend using [Kubernetes](https://kubernetes.io/) as the underlying infrastructure to create and manage FATE clusters in a production environment. KubeFATE supports deploying multiple FATE clusters in a single Kubernetes instance, using different namespaces for development, testing and production. Because IT designs and standards differ between companies, the actual deployment should be customized; KubeFATE keeps the FATE configuration flexible.
If you just want to get started with KubeFATE quickly, jump to [User scenarios](#user-scenarios).
## High-level architecture of multiple federated learning parties
The high-level architecture of a multi-party federated learning deployment (e.g. two parties) is shown as follows:
<div align="center">
<img src="./images/hamflp.PNG">
</div>
* KubeFATE: Orchestrates a FATE cluster of a party. It offers APIs for FATE-Cloud Manager and other management portals.
* Harbor (Optional): Versioned FATE deployments and images management.
* Kubernetes: Container orchestration engine.
KubeFATE is responsible for:
* Day 1 initialization: Provision a FATE cluster on Kubernetes
* Day 2 operations: Provides RESTful APIs to manage FATE clusters
## High-level architecture of KubeFATE
The high-level architecture of KubeFATE is shown as follows:
<div align="center">
<img src="./images/kfha.PNG">
</div>
The numbers depicted in the diagram:
1. Accepting external API calls of Authentication & authorization
2. Rendering templates via Helm;
3. Storing jobs and configuration of a FATE deployment
4. KubeFATE is running as a service of Kubernetes
There are two parts of KubeFATE:
* The KubeFATE CLI. The KubeFATE CLI is an executable that helps to quickly initialize and manage a FATE cluster in an interactive mode. It does not rely on Kubernetes. Ultimately, the KubeFATE CLI calls the KubeFATE service for operations, using a KubeFATE user token.
* The KubeFATE Service. The KubeFATE service provides RESTful APIs for managing FATE clusters. The KubeFATE service is deployed in Kubernetes, and exposes APIs via [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). For the authentication and authorization, the KubeFATE service implements [JWT](https://jwt.io/introduction/), and neutral to other security solutions which can be added to Kubernetes ingress.
KubeFATE is designed to handle different versions of FATE. Normally, the KubeFATE CLI and the KubeFATE service can work with several FATE releases.
## User scenarios
Suppose in an organization, there are two roles:
* System Admin: who is responsible for the infrastructure management as well as Kubernetes administration
* ML Infrastructure Operators: who is responsible for managing the machine learning cluster like FATE
<div align="center">
<img src="./images/swim.PNG">
</div>
### Initializing a FATE deployment
#### Creating role, namespace and other resource in Kubernetes
The sample yaml can be found in [rbac-config.yaml](./rbac-config.yaml). In this sample, we create a kube-fate namespace for KubeFATE service. Resource constraints can be applied to kube-fate namespace, refer to [Kubernetes Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), [Configure Memory and CPU Quotas for Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
Run the following command to create the namespace:
```
$ kubectl apply -f ./rbac-config.yaml
```
Note that the default username and password of the KubeFATE service can be set in `rbac-config.yaml`, under Secret -> kubefate-secret -> stringData:
```
stringData:
  kubefateUsername: admin
  kubefatePassword: admin
```
#### Preparing domain name and deploying KubeFATE in Kubernetes
Because the KubeFATE service exposes RESTful APIs for external access, the system admin needs to prepare a domain name for the KubeFATE service. In our sample, the domain name is `kubefate.net`. Moreover, the system admin should create a namespace (e.g. fate-9999) for the FATE deployment.
```
$ kubectl apply -f ./kubefate.yaml
$ kubectl create namespace fate-9999
```
For more about the configuration of the KubeFATE service, please refer to the [KubeFATE service Configuration Guide](../docs/configurations/kubefate_service_configuration.md).
#### Preparing cluster configuration and deploying FATE
After the system admin has deployed the KubeFATE service and prepared the namespace for FATE, the ML Infrastructure Operator is able to start the deployment of FATE. A `config.yaml` for the `kubefate` CLI is required. It contains the username and password for KubeFATE access, and the KubeFATE service URL:
```
log:
  level: info
user:
  username: admin
  password: admin
serviceurl: kubefate.net
```
|Name |Type |Description |
|-----------|--------|------------------------------------------------------------------|
|log |scalars |The log level of command line. |
|user |mappings|User name and password when logging into KubeFATE service. |
|serviceurl |scalars |KubeFATE service's ingress domain name, defined in kubefate.yaml. |
Create a `cluster.yaml` for FATE cluster configuration. The details of configuration can be found here: [FATE Cluster Configuration Guide](../docs/configurations/FATE_cluster_configuration.md).
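A minimal `cluster.yaml` might look like the sketch below; the field names mirror the `Spec` section shown by `kubefate cluster describe` later on this page, and the full, authoritative set of options is documented in the configuration guide linked above:

```yaml
name: fate-9999
namespace: fate-9999
chartName: fate
chartVersion: v1.5.0
partyId: 9999
# registry: "hub.c.163.com/federatedai"  # optional: use a local/faster image registry
```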
**NOTE:** For users in China, specifying a local image registry in `cluster.yaml` can accelerate the download of images. The details are as follows:
```
registry: "hub.c.163.com/federatedai"
```
Next, install the FATE cluster:
```
$ kubefate cluster install -f ./cluster.yaml
create job success, job id=fe846176-0787-4879-9d27-622692ce181c
```
*NOTE: If you want to deploy **FATE on Spark**, you can use `cluster-spark.yaml`.*
#### Checking the status of "Installing Cluster" job
After the above command has finished, a job is created for installing a FATE cluster. Run the command `kubefate job describe` to check the status of the job, until the "Status" turns to `Success`.
```
$ kubefate job describe fe846176-0787-4879-9d27-622692ce181c
StartTime 2020-11-13 07:22:53
EndTime 2020-11-13 07:23:35
Duration 42s
Status Success
Creator admin
ClusterId 27e37a60-fffb-4031-a76f-990b2ff43cf2
States - update job status to Running
- create cluster in DB Success
- overwrite current installation
- helm install success
- checkout cluster status [28]
- job run Success
SubJobs clustermanager PodStatus: Running, SubJobStatus: Success, Duration: 6s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:22:59
fateboard PodStatus: Running, SubJobStatus: Success, Duration: 1s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:22:55
mysql PodStatus: Running, SubJobStatus: Success, Duration: 8s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:23:01
nodemanager-0 PodStatus: Running, SubJobStatus: Success, Duration: 8s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:23:01
nodemanager-1 PodStatus: Running, SubJobStatus: Success, Duration: 8s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:23:01
python PodStatus: Running, SubJobStatus: Success, Duration: 1s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:22:55
rollsite PodStatus: Running, SubJobStatus: Success, Duration: 8s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:23:01
client PodStatus: Running, SubJobStatus: Success, Duration: 42s, StartTime: 2020-11-13 07:22:53, EndTime: 2020-11-13 07:23:35
```
#### Describing the cluster and finding FATE access information
After the `installing cluster` job succeeded, use `kubefate cluster describe` to check the FATE access information:
```
$ kubefate cluster describe 27e37a60-fffb-4031-a76f-990b2ff43cf2
UUID 27e37a60-fffb-4031-a76f-990b2ff43cf2
Name fate-9999
NameSpace fate-9999
ChartName fate
ChartVersion v1.5.0
REVISION 1
Age 92s
Status Running
Spec name: fate-9999
namespace: fate-9999
chartName: fate
chartVersion: v1.5.0
partyId: 9999
......
Info dashboard:
- 9999.notebook.kubefate.net
- 9999.fateboard.kubefate.net
ip: 192.168.0.1
pod:
- clustermanager-78f98b85bf-ph2hv
......
status:
modules:
client: Running
clustermanager: Running
fateboard: Running
mysql: Running
nodemanager-0: Running
nodemanager-1: Running
python: Running
rollsite: Running
```
### Other user scenarios
#### [Manage FATE and FATE-Serving Version](../docs/Manage_FATE_and_FATE-Serving_Version.md)
#### [Update and Delete a FATE Cluster](../docs/Update_and_Delete_a_FATE_Cluster.md)
#### [KubeFATE Examples](examples)
#### [KubeFATE Command Line User Guide](../docs/KubeFATE_command_line_user_guide.md)
## KubeFATE service RESTful APIs reference
#### [API Reference](docs/KubeFATE_API_Reference_Swagger.md)
*Source: `boblone19/owasp-masvs`, `Document/0x02-Frontispiece.md`.*

# About the Standard
<img src="images/OWASP_logo.png" title="OWASP LOGO" />
Welcome to the Mobile Application Security Verification Standard (MASVS) 1.2. The MASVS is a community effort to establish a framework of security requirements needed to design, develop and test secure mobile apps on iOS and Android.
The MASVS is a culmination of community effort and industry feedback. We expect this standard to evolve over time and welcome feedback from the community.
The best way to get in contact with us is via the OWASP Mobile Project Slack channel: <https://owasp.slack.com/messages/project-mobile_omtg/details/> .
Accounts can be created at the following URL: [https://owasp.slack.com/join/shared_invite/zt-g398htpy-AZ40HOM1WUOZguJKbblqkw#/](https://owasp.slack.com/join/shared_invite/zt-g398htpy-AZ40HOM1WUOZguJKbblqkw#/).
## Copyright and License
[<img src="images/CC-license.png" title="License" width="200px" height="45px" />](https://creativecommons.org/licenses/by-sa/4.0/)
Copyright © 2020 The OWASP Foundation.This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). For any reuse or distribution, you must make clear to others the license terms of this work.
<!-- \pagebreak -->
## Acknowledgements
| Project Lead | Lead Author | Contributors and Reviewers |
| ------- | --- | ----------------- |
| Sven Schleier, Jeroen Willemsen and Carlos Holguera | Bernhard Mueller | Alexander Antukh, Mesheryakov Aleksey, Bachevsky Artem, Jeroen Beckers, Vladislav Chelnokov, Ben Cheney, Peter Chi, Lex Chien, Stephen Corbiaux, Manuel Delgado, Ratchenko Denis, Ryan Dewhurst, Tereshin Dmitry, Christian Dong, Oprya Egor, Ben Gardiner, Rocco Gränitz, Henry Hu, Sjoerd Langkemper, Vinícius Henrique Marangoni, Martin Marsicano, Roberto Martelloni, Gall Maxim, Eugen Martynov, Riotaro Okada, Abhinav Sejpal, Stefaan Seys, Yogesh Sharma, Prabhant Singh, Sven Schleier, Nikhil Soni, Anant Shrivastava, Francesco Stillavato, Romuald Szkudlarek, Abderrahmane Aftahi, Abdessamad Temmar, Koki Takeyama, Chelnokov Vladislav, Leo Wang |
<br/>
| Language | Translators & Reviewers |
| --- | ------------------------------ |
| Chinese (Traditional) | Peter Chi, and Lex Chien, Henry Hu, Leo Wang |
| Chinese (Simplified) | Bob Peng, Harold Zang, Jack S |
| French | Romuald Szkudlarek, Abderrahmane Aftahi, Christian Dong (Review) |
| German | Rocco Gränitz, Sven Schleier (Review) |
| Hindi | Mukesh Sharma, Ritesh Kumar, Kunwar Atul Singh, Parag Dave, Devendra Kumar Sinha, Vikrant Shah |
| Japanese | Koki Takeyama, Riotaro Okada (Review) |
| Korean | Youngjae Jeon, Jeongwon Cho, Jiyou Han, Jiyeon Sung |
| Persian | Hamed Salimian, Ramin Atefinia, Dorna Azhirak, Bardiya Akbari, Mahsa Omidvar, Alireza Mazhari, Milad Khoshdel |
| Russian | Gall Maxim, Eugen Martynov, Chelnokov Vladislav (Review), Oprya Egor (Review), Tereshin Dmitry (Review) |
| Spanish | Martin Marsicano, Carlos Holguera |
This document started as a fork of the OWASP Application Security Verification Standard written by Jim Manico.
## Sponsors
While both the MASVS and the MSTG are created and maintained by the community on a voluntary basis, sometimes a little bit of outside help is required. We therefore thank our sponsors for providing the funds to be able to hire technical editors. Note that their sponsorship does not influence the content of the MASVS or MSTG in any way. The sponsorship packages are described on the [OWASP Project Wiki](https://owasp.org/www-project-mobile-security-testing-guide/#div-sponsorship "OWASP Mobile Security Testing Guide Sponsorship Packages").
### Honourable Benefactor
[<img src="images/NowSecure_logo.png" title="NowSecure" width="200px" height="58px" />](https://www.nowsecure.com/ "NowSecure")
### Good Samaritan Benefactor
[<img src="images/Randorisec_logo.png" title="Randorisec" width="200px" height="58px" />](https://www.randorisec.fr/ "RandoriSec")
Next, we would like to thank the OWASP Bay Area Chapter for their sponsorship. Last, we would like to thank everybody that bought the book from Leanpub and sponsored us that way.
| 72.912281 | 717 | 0.763234 | eng_Latn | 0.855667 |
0cd267e1276b9dd6f829215d71219f2ba6ae68af | 8,638 | md | Markdown | articles/supply-chain/warehousing/packing-vs-storage-dimensions.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:14:19.000Z | 2021-04-20T21:13:45.000Z | articles/supply-chain/warehousing/packing-vs-storage-dimensions.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 10 | 2017-12-12T12:01:52.000Z | 2019-04-30T11:46:17.000Z | articles/supply-chain/warehousing/packing-vs-storage-dimensions.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2018-07-20T06:42:28.000Z | 2019-10-12T18:16:59.000Z | ---
title: Set different dimensions for packing and storage
description: This topic describes how to specify which process (packing, storage, or nested packing) each specified set of dimensions is used for.
author: mirzaab
ms.date: 01/28/2021
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: EcoResPhysicalProductDimensions, WHSPhysDimUOM
audience: Application User
ms.reviewer: kamaybac
ms.search.scope: Core, Operations
ms.search.region: Global
ms.author: mirzaab
ms.search.validFrom: 2021-01-28
ms.dyn365.ops.version: 10.0.17
ms.openlocfilehash: 0e8ce576f21f1f5ea5f3acb7d43bbe68826e6f39
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 09/29/2021
ms.locfileid: "7580074"
---
# <a name="set-different-dimensions-for-packing-and-storage"></a>Set different dimensions for packing and storage
[!include [banner](../../includes/banner.md)]
Some items are packed or stored in such a way that you might need to track their physical dimensions differently for each of the different processes. The *Packaging product dimensions* feature lets you set up one or more dimension types for each product. Each dimension type provides a set of physical measurements (weight, width, depth, and height) and establishes the process in which those physical measurement values apply. When this feature is enabled, the system supports the following dimension types:
- *Storage*: Storage dimensions are used together with location volumetric values to determine how many items can be stored in various warehouse locations.
- *Packing*: Packing dimensions are used during containerization and the manual packing process to determine how many of each item will fit into various container types.
- *Nested packing*: Nested packing dimensions are used when the packing process contains multiple levels.
*Storage* dimensions are supported even when the *Packaging product dimensions* feature isn't enabled. You can set them up by using the **Physical dimensions** page in Supply Chain Management. These dimensions are used by all processes where packing and nested packing dimensions aren't specified.
*Packing* and *nested packing* dimensions are set up by using the **Physical product dimensions** page, which is added when you enable the *Packaging product dimensions* feature.
This topic provides a scenario that shows how to use this feature.
## <a name="turn-on-the-packaging-product-dimensions-feature"></a>Turn on the Packaging product dimensions feature
Before you can use this feature, it must be turned on in your system. Admins can use the [Feature management](../../fin-ops-core/fin-ops/get-started/feature-management/feature-management-overview.md) workspace to check the status of the feature and turn it on if it's required. In the workspace, the feature is listed in the following way:
- **Module:** *Warehouse management*
- **Feature name:** *Packaging product dimensions*
## <a name="example-scenario"></a>Example scenario
### <a name="set-up-the-scenario"></a>Set up the scenario
Before you can run the example scenario, you must prepare the system as described in this section.
#### <a name="enable-demo-data"></a>Enable demo data
To work through the scenario by using the demo records and values specified here, you must use a system where the standard [demo data](../../fin-ops-core/dev-itpro/deployment/deploy-demo-environment.md) is installed. Additionally, you must select the *USMF* legal entity before you begin.
#### <a name="add-a-new-physical-dimension-to-a-product"></a>Add a new physical dimension to a product
Add a new physical dimension for a product by following these steps:
1. Go to **Product information management \> Products \> Released products**.
1. Select the product that has the **Item number** *A0001*.
1. On the Action Pane, open **Manage inventory** and, from the **Warehouse** group, select **Physical product dimensions**.
1. The **Physical product dimensions** page opens. On the Action Pane, select **New** to add a new dimension to the grid, using the following settings:
- **Physical dimension type** - *Packing*
- **Physical unit** - *pcs*
- **Weight** - *4*
- **Weight unit** - *kg*
- **Depth** - *3*
- **Height** - *4*
- **Width** - *3*
- **Length unit** - *cm*
- **Volume unit** - *cm3*
The **Volume** field is calculated automatically, based on the **Depth**, **Height**, and **Width** settings.
#### <a name="create-a-new-container-type"></a>Create a new container type
Go to **Warehouse management \> Setup \> Containers \> Container types** and create a new record that has the following settings:
- **Container type code** - *Small box*
- **Description** - *Small box*
- **Maximum net weight** - *50*
- **Volume** - *144*
- **Length** - *6*
- **Width** - *6*
- **Height** - *4*
#### <a name="create-a-container-group"></a>Create a container group
Go to **Warehouse management \> Setup \> Containers \> Container groups** and create a new record that has the following settings:
- **Container group ID** - *Small box*
- **Description** - *Small box*
Add a new line to the **Details** section. Set **Container type** to *Small box*.
#### <a name="set-up-a-container-build-template"></a>Set up a container build template
Go to **Warehouse management \> Setup \> Containers \> Container build templates** and select **Boxes**. Change the **Container group ID** to *Small box*.
### <a name="run-the-scenario"></a>Run the scenario
After you have prepared the system as described in the previous section, you're ready to run the scenario as described in the next section.
#### <a name="create-a-sales-order-and-create-a-shipment"></a>Create a sales order and create a shipment
In this procedure, you will create a shipment that is based on the item's *packing* dimensions, for which the height is less than 3.
1. Go to **Sales and marketing \> Sales orders \> All sales orders**.
1. On the Action Pane, select **New**.
1. In the **Create sales order** dialog box, set the following values:
- **Customer account:** *US-001*
- **Warehouse:** *63*
1. Select **OK** to create the sales order and close the dialog box.
1. The new sales order is opened. It should include a new, empty line in the grid on the **Sales order lines** FastTab. On this line, set the following values:
- **Item number:** *A0001*
- **Quantity:** *5*
1. On the **Sales order lines** FastTab, select **Inventory \> Reservation**.
1. On the **Reservation** page, on the Action Pane, select **Reserve lot** to reserve the inventory.
1. Close the page.
1. On the Action Pane, open the **Warehouse** tab and select **Release to warehouse** to create work for the warehouse.
1. On the **Sales order lines** FastTab, select **Warehouse \> Shipment details**.
1. On the Action Pane, open the **Transportation** tab and select **View containers**. Confirm that the item has been containerized into two *Small box* containers.
#### <a name="place-an-item-into-storage"></a>Place an item into storage
1. Open the mobile device, sign in to warehouse 63, and go to **Inventory \> Adjustment in**.
1. Enter **Location** = *SHORT-01*. Create a new license plate with **Item** = *A0001* and **Quantity** = *1 pcs*.
1. Select **OK**. You will receive the error "Location SHORT-01 failed because item A0001 does not fit within the specified dimensions of the location." This is because the product's *Storage* dimension type values are larger than the dimensions that are specified in the location profile.
[!INCLUDE[footer-include](../../includes/footer-banner.md)] | 63.514706 | 568 | 0.765571 | ita_Latn | 0.998566 |
0cd2819da9d5636577ab04b42e9b8b5538775583 | 368 | md | Markdown | README.md | sinedied/leadshow | b517e3c17a7d27efca377ca31af25bd023b8aaf2 | [
"MIT"
] | null | null | null | README.md | sinedied/leadshow | b517e3c17a7d27efca377ca31af25bd023b8aaf2 | [
"MIT"
] | null | null | null | README.md | sinedied/leadshow | b517e3c17a7d27efca377ca31af25bd023b8aaf2 | [
"MIT"
] | null | null | null | # 🌟 leadshow
> Simple realtime leaderboard server for demo/show purposes
## Running the server
1. Make sure you have [NodeJS](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed.
2. Install your dependencies
```sh
install
```
3. Start the app
```sh
npm start
```
The leadboard display is available at `http://localhost:3030`.
| 19.368421 | 96 | 0.665761 | eng_Latn | 0.941938 |
0cd31f975ac9d8c3fa67224c9c107ee0ff3df034 | 4,606 | md | Markdown | README.md | linwaiwai/ios-runtime | 1e01a18c0cf936266a0c8d86bcc5ffc4ff3653ef | [
"Apache-2.0"
] | 1 | 2020-09-03T01:33:53.000Z | 2020-09-03T01:33:53.000Z | README.md | linwaiwai/ios-runtime | 1e01a18c0cf936266a0c8d86bcc5ffc4ff3653ef | [
"Apache-2.0"
] | null | null | null | README.md | linwaiwai/ios-runtime | 1e01a18c0cf936266a0c8d86bcc5ffc4ff3653ef | [
"Apache-2.0"
] | null | null | null | # iOS Runtime for NativeScript
[](https://waffle.io/NativeScript/ios-runtime)
Contains the source code for the NativeScript's iOS Runtime. [NativeScript](https://www.nativescript.org/) is a framework which enables developers to write truly native mobile applications for Android and iOS using JavaScript and CSS. Each mobile platform has its own ecosystem and offers completely different development tools and language(s) - Java for Android and Objective C (Swift) for iOS. In order to translate JavaScript code to the corresponding native APIs some kind of proxy mechanism is needed. This is exactly what the "Runtime" parts of NativeScript are responsible for. The iOS Runtime may be thought of as "The Bridge" between the JavaScript and the iOS world. A NativeScript application for iOS is a standard native package (ipa) which besides the JavaScript files embed the runtime as well.
```shell
git clone --recursive [email protected]:NativeScript/ios-runtime.git
```
<!-- TOC depthFrom:2 -->
- [Requirements](#requirements)
- [Architecture Diagram](#architecture-diagram)
- [Local Development](#local-development)
- [Building a Distribution Package](#building-a-distribution-package)
- [Contribute](#contribute)
- [Get Help](#get-help)
<!-- /TOC -->
## Requirements
- OS X 10.11+
- [Xcode 10+](https://developer.apple.com/xcode/)
- CMake 3.3.2 or later. Tested with versions up to 3.14 (https://github.com/Kitware/CMake/releases/download/v3.14.0/cmake-3.14.0-Darwin-x86_64.dmg). After installing CMake.app, add a symlink to cmake in `/usr/local/bin` using the following command: `ln -s /Applications/CMake.app/Contents/bin/cmake /usr/local/bin`
- [llvm 7.0](http://releases.llvm.org/download.html#7.0.0) - used to build the [metadata generator](https://github.com/NativeScript/ios-metadata-generator) submodule. Be sure to have the folder containing `llvm-config` in `PATH` or make a symlink to it in `/usr/local/bin/`.
- [Automake](https://www.gnu.org/software/automake/) - available in [Homebrew](http://brew.sh) as `automake`.
- [GNU Libtool](http://www.gnu.org/software/libtool/) - available in [Homebrew](http://brew.sh) as `libtool`.
- Checkout all git submodules using `git submodule update --init`.
## Architecture Diagram
The NativeScript iOS Runtime architecture can be summarized in the following diagram.

For more details on how it works, read the [documentation](https://docs.nativescript.org/runtimes/ios/overview).
## Local Development
To be able to open and build {N} iOS Runtime in Xcode you need to configure it for WebKit development and generate the Xcode project files using cmake. To do this execute the following:
```shell
sudo ./src/webkit/Tools/Scripts/configure-xcode-for-ios-development
./cmake-gen.sh
open "cmake-build/NativeScript.xcodeproj"
```
After you open the newly generated project in Xcode you can run the `TestRunner` target or the `Gameraww` example app.
For more information on WebKit configuration see [Building iOS Port section of WebKit's README](https://github.com/WebKit/webkit/blob/master/ReadMe.md#building-ios-port)
## Building a Distribution Package
To build the [`tns-ios` npm package](https://www.npmjs.com/package/tns-ios) run `./build/scripts/package-tns-ios.sh` in the **root** of the repository. The package contains the NativeScript Cocoa Framework, the NativeScript CLI template project and the API metadata generator.
To build the [`tns-ios-inspector` npm package](https://www.npmjs.com/package/tns-ios-inspector) run `./build/scripts/package-tns-ios-inspector.sh` in the **root** of the repository. The package contains the Web Inspector frontend.
## Contribute
We love PRs! Check out the [contributing guidelines](CONTRIBUTING.md). If you want to contribute, but you are not sure where to start - look for [issues labeled `help wanted`](https://github.com/NativeScript/ios-runtime/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22).
## Get Help
Please, use [github issues](https://github.com/NativeScript/ios-runtime/issues) strictly for [reporting bugs](CONTRIBUTING.md#reporting-bugs) or [requesting features](CONTRIBUTING.md#requesting-new-features). For general questions and support, check out [Stack Overflow](https://stackoverflow.com/questions/tagged/nativescript) or ask our experts in [NativeScript community Slack channel](http://developer.telerik.com/wp-login.php?action=slack-invitation).
| 74.290323 | 808 | 0.771168 | eng_Latn | 0.839586 |
0cd379cfa7fac85efb3a4e6bb0768ae67a3e3445 | 3,237 | md | Markdown | dev-docs/Workflows.md | CrevoNFT/delicate-dinos-contract | f6d258099880ffabadbcabe45546c40579edffb1 | ["MIT"] | null | null | null | dev-docs/Workflows.md | CrevoNFT/delicate-dinos-contract | f6d258099880ffabadbcabe45546c40579edffb1 | ["MIT"] | null | null | null | dev-docs/Workflows.md | CrevoNFT/delicate-dinos-contract | f6d258099880ffabadbcabe45546c40579edffb1 | ["MIT"] | null | null | null |
# Contracts Deployment
Hardhat Deploy Scripts
libs
- DelicateDinosMetadata
- DelicateDinosUpgrade
- DelicateDinosRandomness
- DinoUpToken
- DelicateDinosRaffle
- ==> DelicateDinos(randomnessProv, upToken, raffle)
- INIT: DelicateDinosRandomness.initMaster(delicateDinos)
- DelicateDinosMinter(delicateDinos)
- delicateDinos.setMinterContract(minter)
Fund DelicateDinosRandomness with LINK
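A minimal hardhat-deploy sketch of this ordering (constructor arguments and library linking beyond the names listed above are assumptions):

```typescript
// deploy/00_delicate_dinos.ts - sketch only; argument lists are assumed where the notes don't spell them out.
import { HardhatRuntimeEnvironment } from "hardhat/types";

const deployDinos = async (hre: HardhatRuntimeEnvironment) => {
  const { deployments, getNamedAccounts, ethers } = hre;
  const { deployer } = await getNamedAccounts();
  const opts = { from: deployer, log: true };

  // libs / supporting contracts first
  const metadataLib = await deployments.deploy("DelicateDinosMetadata", opts);
  const upgradeLib = await deployments.deploy("DelicateDinosUpgrade", opts);
  const randomness = await deployments.deploy("DelicateDinosRandomness", opts);
  const upToken = await deployments.deploy("DinoUpToken", opts);
  const raffle = await deployments.deploy("DelicateDinosRaffle", opts);

  // main contract, wired to randomness provider, up-token and raffle
  const dinos = await deployments.deploy("DelicateDinos", {
    ...opts,
    args: [randomness.address, upToken.address, raffle.address],
    libraries: {
      DelicateDinosMetadata: metadataLib.address, // linked libraries - assumption
      DelicateDinosUpgrade: upgradeLib.address,
    },
  });

  // INIT: randomness master + minter hookup
  const randomnessC = await ethers.getContractAt("DelicateDinosRandomness", randomness.address);
  await (await randomnessC.initMaster(dinos.address)).wait();

  const minter = await deployments.deploy("DelicateDinosMinter", { ...opts, args: [dinos.address] });
  const dinosC = await ethers.getContractAt("DelicateDinos", dinos.address);
  await (await dinosC.setMinterContract(minter.address)).wait();

  // afterwards: fund DelicateDinosRandomness with LINK
};

export default deployDinos;
```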
# Minting Controls
In the contracts repo, run
- `yarn close-minting --contract ...`
- `yarn open-whitelist --contract ...(address)... --fee ...(MATIC amount)...`
- `yarn open-public-sale --contract ...(address)... --fee ...(MATIC amount)...`
- `yarn drop-dinos --contract ...`
- `yarn open-drop-claim --contract ...`
# Prepare Whitelisted Minting
CONTRACTS (so that Dinos Contract knows who is whitelisted and allows mintWhitelisted() to be called)
- update `whitelist/whitelist.json`
- run `yarn open-whitelist`
FRONTEND (so that mint page recognizes whitelisted addresses and provides a proof on calling contract.mintWhitelisted()):
- update `src/whitelist/whitelist.json`
- redeploy frontend
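The notes don't show how the frontend builds the proof; a typical approach (assumed here) is a Merkle tree over the addresses in `whitelist.json`:

```typescript
// Sketch: frontend-side Merkle proof for mintWhitelisted().
// Assumes whitelist.json is a plain array of addresses and the contract expects a Merkle proof.
import { MerkleTree } from "merkletreejs";
import keccak256 from "keccak256";
import whitelist from "../whitelist/whitelist.json";

const leaves = (whitelist as string[]).map((addr) => keccak256(addr));
const tree = new MerkleTree(leaves, keccak256, { sortPairs: true });

export function whitelistProof(address: string): string[] {
  return tree.getHexProof(keccak256(address));
}

// usage (exact mintWhitelisted signature is an assumption):
// await dinosContract.mintWhitelisted(name, whitelistProof(account), { value: whitelistFee });
```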
# Minting WEB UI
## Mint Page has a form:
- name
- mint button
- (disabled if would revert)
- not whitelisted
- not enough matic to pay
- ...
- warning
- if not whitelisted
- if not enough matic to pay
- ...
## Public Sale has a form:
- name
- mint button
- (disabled if would revert)
- not enough matic to pay
- warning
- if not enough matic to pay
## Claiming: My Own Page
- any claim-bearing token is displayed accordingly (tokenIdCanClaim(tokenId))
- button to perform the claim
- disabled if not enough gas
# ARTWORK updates on Whitelist / PublicSale / Claim Dropped
## On-Chain
- mintDino(), mintDino(), ...
## Off-Chain
- read mint events (transfer from 0 address), read ArtworkSet() events => diff set needs artwork
- for all tokenIds in diff set, **create metadata**
- read contract traits
- create artwork accordingly
- upload artwork to IPFS => get newBaseUri (artwork directory uri)
## On-Chain
- for all tokenIds in diff set
- updateArtwork(tokenId, newBaseUri)
TODO: write script for this ^ ^ ^
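A rough ethers/hardhat sketch of that TODO script (event and function names follow the notes above; everything else, including the `ArtworkSet` arguments, is an assumption):

```typescript
// scripts/update-artwork.ts - sketch: diff minted tokens against ArtworkSet() and point the rest at the new IPFS dir.
import { ethers } from "hardhat";

export async function updateArtwork(newBaseUri: string) {
  // hardhat-deploy's ethers helper; could also use getContractAt with a known address
  const dinos = await ethers.getContract("DelicateDinos");

  // mints = Transfer events from the zero address
  const mintEvents = await dinos.queryFilter(dinos.filters.Transfer(ethers.constants.AddressZero));
  const minted = mintEvents.map((e) => e.args!.tokenId.toString());

  // tokens whose artwork is already set
  const artworkEvents = await dinos.queryFilter(dinos.filters.ArtworkSet());
  const done = new Set(artworkEvents.map((e) => e.args!.tokenId.toString()));

  // diff set still needs metadata + artwork (created off-chain, uploaded to IPFS => newBaseUri)
  const todo = minted.filter((id) => !done.has(id));

  for (const tokenId of todo) {
    await (await dinos.updateArtwork(tokenId, newBaseUri)).wait();
  }
}
```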
# Upgrade Dino: WEB UI
## Traits (pre-impact)
- owner can change any trait
- only teeth / skin will affect dino's resistance to impact
Pay with DNOUP Token
`dnoUpTokenContract.approve(dinosContract.address, dnoUpTokenAmount);`
`dinosContract.upgradeTraits(tokenId, teethLengthDelta, skinThicknessDelta, dnoUpTokenAmount);`
## Name (post-impact)
- only once possible, after impact (bonus feature)
`dinosContract.setName("FunkyName77")`
# Impact
## Simulation for Dinos
`yarn go-impact`
- metadata is updated
- artwork remains the same
- impact events `DinoDamaged(uint8)` are emitted by Dinos Contract (how much the fossil value was affected)
- now, each dino's name can be updated (bonus feature)
## Asteroids Drop
- retrieve all DinoDamaged() events, sort by max, take first n
- drop 1 asteroid to each
- holders of dinos obtain them automatically without claiming
- we call the mint method in the asteroids contract for each dino holder individually (script)
- the remaining asteroids are for whitelist / public sale | 26.975 | 121 | 0.728452 | eng_Latn | 0.907206 |
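A sketch of the drop script described in the list above (assumes `DinoDamaged` also carries the tokenId, and that the asteroids contract - name assumed - exposes an owner-only `mintTo`):

```typescript
// scripts/drop-asteroids.ts - sketch: rank dinos by impact damage, mint one asteroid to each of the top-n holders.
import { ethers } from "hardhat";

export async function dropAsteroids(n: number) {
  const dinos = await ethers.getContract("DelicateDinos");
  const asteroids = await ethers.getContract("DelicateDinoAsteroids"); // contract name is an assumption

  const damaged = await dinos.queryFilter(dinos.filters.DinoDamaged());
  const topN = damaged
    .map((e) => ({ tokenId: e.args!.tokenId, damage: Number(e.args!.damage) }))
    .sort((a, b) => b.damage - a.damage)
    .slice(0, n);

  for (const { tokenId } of topN) {
    const holder = await dinos.ownerOf(tokenId); // current dino holder
    await (await asteroids.mintTo(holder)).wait(); // holders receive without claiming
  }
}
```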
0cd3a0b0e36a19732a021fb57e5135215c2c191a | 11,553 | md | Markdown | documents/aws-cloudformation-user-guide/doc_source/aws-resource-athena-workgroup.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | ["CC-BY-4.0"] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/aws-cloudformation-user-guide/doc_source/aws-resource-athena-workgroup.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | ["CC-BY-4.0"] | null | null | null | documents/aws-cloudformation-user-guide/doc_source/aws-resource-athena-workgroup.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | ["CC-BY-4.0"] | null | null | null |
# AWS::Athena::WorkGroup<a name="aws-resource-athena-workgroup"></a>
The AWS::Athena::WorkGroup resource specifies an Amazon Athena workgroup, which contains a name, description, creation time, state, and other configuration, listed under [WorkGroupConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-athena-workgroup.html#cfn-athena-workgroup-workgroupconfiguration)\. Each workgroup enables you to isolate queries for you or your group from other queries in the same account\. For more information, see [CreateWorkGroup](https://docs.aws.amazon.com/athena/latest/APIReference/API_CreateWorkGroup.html) in the *Amazon Athena API Reference*\.
## Syntax<a name="aws-resource-athena-workgroup-syntax"></a>
To declare this entity in your AWS CloudFormation template, use the following syntax:
### JSON<a name="aws-resource-athena-workgroup-syntax.json"></a>
```
{
"Type" : "AWS::Athena::WorkGroup",
"Properties" : {
"[Description](#cfn-athena-workgroup-description)" : String,
"[Name](#cfn-athena-workgroup-name)" : String,
"[RecursiveDeleteOption](#cfn-athena-workgroup-recursivedeleteoption)" : Boolean,
"[State](#cfn-athena-workgroup-state)" : String,
"[Tags](#cfn-athena-workgroup-tags)" : Tags,
"[WorkGroupConfiguration](#cfn-athena-workgroup-workgroupconfiguration)" : WorkGroupConfiguration,
"[WorkGroupConfigurationUpdates](#cfn-athena-workgroup-workgroupconfigurationupdates)" : WorkGroupConfigurationUpdates
}
}
```
### YAML<a name="aws-resource-athena-workgroup-syntax.yaml"></a>
```
Type: AWS::Athena::WorkGroup
Properties:
[Description](#cfn-athena-workgroup-description): String
[Name](#cfn-athena-workgroup-name): String
[RecursiveDeleteOption](#cfn-athena-workgroup-recursivedeleteoption): Boolean
[State](#cfn-athena-workgroup-state): String
[Tags](#cfn-athena-workgroup-tags):
Tags
[WorkGroupConfiguration](#cfn-athena-workgroup-workgroupconfiguration):
WorkGroupConfiguration
[WorkGroupConfigurationUpdates](#cfn-athena-workgroup-workgroupconfigurationupdates):
WorkGroupConfigurationUpdates
```
## Properties<a name="aws-resource-athena-workgroup-properties"></a>
`Description` <a name="cfn-athena-workgroup-description"></a>
The workgroup description\.
*Required*: No
*Type*: String
*Minimum*: `0`
*Maximum*: `1024`
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
`Name` <a name="cfn-athena-workgroup-name"></a>
The workgroup name\.
*Required*: Yes
*Type*: String
*Pattern*: `[a-zA-Z0-9._-]{1,128}`
*Update requires*: [Replacement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-replacement)
`RecursiveDeleteOption` <a name="cfn-athena-workgroup-recursivedeleteoption"></a>
The option to delete the workgroup and its contents even if the workgroup contains any named queries or query executions\.
*Required*: No
*Type*: Boolean
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
`State` <a name="cfn-athena-workgroup-state"></a>
The state of the workgroup: ENABLED or DISABLED\.
*Required*: No
*Type*: String
*Allowed values*: `DISABLED | ENABLED`
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
`Tags` <a name="cfn-athena-workgroup-tags"></a>
An array of key\-value pairs to apply to this resource\.
For more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html)\.
*Required*: No
*Type*: [Tags](aws-properties-athena-workgroup-tags.md)
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
`WorkGroupConfiguration` <a name="cfn-athena-workgroup-workgroupconfiguration"></a>
The configuration of the workgroup, which includes the location in Amazon S3 where query results are stored, the encryption option, if any, used for query results, whether Amazon CloudWatch Metrics are enabled for the workgroup, and the limit for the amount of bytes scanned \(cutoff\) per query, if it is specified\. The [EnforceWorkGroupConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-athena-workgroup-workgroupconfigurationupdates.html#cfn-athena-workgroup-workgroupconfigurationupdates-enforceworkgroupconfiguration) option determines whether workgroup settings override client\-side query settings\.
*Required*: No
*Type*: [WorkGroupConfiguration](aws-properties-athena-workgroup-workgroupconfiguration.md)
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
`WorkGroupConfigurationUpdates` <a name="cfn-athena-workgroup-workgroupconfigurationupdates"></a>
The configuration information that will be updated for this workgroup, which includes the location in Amazon S3 where query results are stored, the encryption option, if any, used for query results, whether the Amazon CloudWatch Metrics are enabled for the workgroup, whether the workgroup settings override the client\-side settings, and the data usage limit for the amount of bytes scanned per query, if it is specified\.
*Required*: No
*Type*: [WorkGroupConfigurationUpdates](aws-properties-athena-workgroup-workgroupconfigurationupdates.md)
*Update requires*: [No interruption](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)
## Return values<a name="aws-resource-athena-workgroup-return-values"></a>
### Ref<a name="aws-resource-athena-workgroup-return-values-ref"></a>
When you pass the logical ID of this resource to the intrinsic `Ref` function, `Ref` returns the name of the WorkGroup\. For example:
`{ "Ref": "myWorkGroup" }`
For more information about using the `Ref` function, see [Ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html)\.
### Fn::GetAtt<a name="aws-resource-athena-workgroup-return-values-fn--getatt"></a>
The `Fn::GetAtt` intrinsic function returns a value for a specified attribute of this type\. The following are the available attributes and sample return values\.
For more information about using the `Fn::GetAtt` intrinsic function, see [Fn::GetAtt](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html)\.
#### <a name="aws-resource-athena-workgroup-return-values-fn--getatt-fn--getatt"></a>
`CreationTime` <a name="CreationTime-fn::getatt"></a>
The date and time the workgroup was created, as a UNIX timestamp in seconds\. For example: `1582761016`\.
## Examples<a name="aws-resource-athena-workgroup--examples"></a>
### Creating an Athena WorkGroup<a name="aws-resource-athena-workgroup--examples--Creating_an_Athena_WorkGroup"></a>
The following example template creates the Athena WorkGroup `MyCustomWorkGroup`\. Note the use of `WorkGroupConfiguration` to specify the configuration for the WorkGroup\.
#### YAML<a name="aws-resource-athena-workgroup--examples--Creating_an_Athena_WorkGroup--yaml"></a>
```
Resources:
MyAthenaWorkGroup:
Type: AWS::Athena::WorkGroup
Properties:
Name: MyCustomWorkGroup
Description: My WorkGroup
State: ENABLED
Tags:
- Key: "key1"
Value: "value1"
- Key: "key2"
Value: "value2"
WorkGroupConfiguration:
BytesScannedCutoffPerQuery: 200000000
EnforceWorkGroupConfiguration: false
PublishCloudWatchMetricsEnabled: false
RequesterPaysEnabled: true
ResultConfiguration:
OutputLocation: s3://path/to/my/bucket/
```
#### JSON<a name="aws-resource-athena-workgroup--examples--Creating_an_Athena_WorkGroup--json"></a>
```
{
"Resources":{
"MyAthenaWorkGroup":{
"Type":"AWS::Athena::WorkGroup",
"Properties":{
"Name":"MyCustomWorkGroup",
"Description":"My WorkGroup",
"State":"ENABLED",
"Tags":[
{
"Key":"key1",
"Value":"value1"
},
{
"Key":"key2",
"Value":"value2"
}
],
"WorkGroupConfiguration":{
"BytesScannedCutoffPerQuery":200000000,
"EnforceWorkGroupConfiguration":false,
"PublishCloudWatchMetricsEnabled":false,
"RequesterPaysEnabled":true,
"ResultConfiguration":{
"OutputLocation":"s3://path/to/my/bucket/"
}
}
}
}
}
}
```
### Updating an Athena WorkGroup<a name="aws-resource-athena-workgroup--examples--Updating_an_Athena_WorkGroup"></a>
The following example template updates the Athena WorkGroup `MyCustomWorkGroup`\. Note the use of `WorkGroupConfigurationUpdates` instead of `WorkGroupConfiguration`\.
#### YAML<a name="aws-resource-athena-workgroup--examples--Updating_an_Athena_WorkGroup--yaml"></a>
```
Resources:
MyAthenaWorkGroup:
Type: AWS::Athena::WorkGroup
Properties:
Name: MyCustomWorkGroup
Description: My WorkGroup Updated
State: DISABLED
Tags:
- Key: "key1"
Value: "value1"
- Key: "key2"
Value: "value2"
WorkGroupConfigurationUpdates:
BytesScannedCutoffPerQuery: 10000000
EnforceWorkGroupConfiguration: true
PublishCloudWatchMetricsEnabled: true
RequesterPaysEnabled: false
ResultConfigurationUpdates:
EncryptionConfiguration:
EncryptionOption: SSE_S3
OutputLocation: s3://path/to/my/bucket/updated/
```
#### JSON<a name="aws-resource-athena-workgroup--examples--Updating_an_Athena_WorkGroup--json"></a>
```
{
"Resources":{
"MyAthenaWorkGroup":{
"Type":"AWS::Athena::WorkGroup",
"Properties":{
"Name":"MyCustomWorkGroup",
"Description":"My WorkGroup Updated",
"State":"DISABLED",
"Tags":[
{
"Key":"key1",
"Value":"value1"
},
{
"Key":"key2",
"Value":"value2"
}
],
"WorkGroupConfigurationUpdates":{
"BytesScannedCutoffPerQuery":10000000,
"EnforceWorkGroupConfiguration":true,
"PublishCloudWatchMetricsEnabled":true,
"RequesterPaysEnabled":false,
"ResultConfigurationUpdates":{
"EncryptionConfiguration":{
"EncryptionOption":"SSE_S3"
},
"OutputLocation":"s3://path/to/my/bucket/updated/"
}
}
}
}
}
}
``` | 46.963415 | 651 | 0.681641 | yue_Hant | 0.333411 |
0cd3e8e031fef78bc7c2bb466756505759e4a1d6 | 1,088 | md | Markdown | README.md | atmos/lifeline | 3f1cea53018a02aa71a379c9b9a72deb5f7abe99 | ["MIT"] | 1 | 2016-05-09T10:36:32.000Z | 2016-05-09T10:36:32.000Z | README.md | atmos/lifeline | 3f1cea53018a02aa71a379c9b9a72deb5f7abe99 | ["MIT"] | null | null | null | README.md | atmos/lifeline | 3f1cea53018a02aa71a379c9b9a72deb5f7abe99 | ["MIT"] | null | null | null |
lifeline
========
Another [oauth][oauth] experiment. Share info with [sinatra][sinatra] and [twitter][twitter].
Installation
============
It's a sinatra app, packaged as a gem, deployed as a rack app.
% sudo gem install bundler
% gem bundle
% bin/rake repackage
% sudo gem install pkg/lifeline*.gem
Deployment
==========
Use [passenger][passenger] and a config.ru like this:
Example config.ru
require 'rubygems'
require 'lifeline'
DataMapper.setup(:default, "mysql://atmos:s3cr3t@localhost/lifeline_production")
ENV['LIFELINE_READKEY'] = /\w{18}/.gen # this should really be what twitter gives you
ENV['LIFELINE_READSECRET'] = /\w{24}/.gen # this should really be what twitter gives you
class LifelineSite < Lifeline::App
set :public, File.expand_path(File.dirname(__FILE__), "public")
set :environment, :production
end
run LifelineSite
testing
=======
% gem bundle
% bin/rake
[sinatra]: http://www.sinatrarb.com
[twitter]: http://twitter.com
[oauth]: http://oauth.net
[passenger]: http://modrails.com
| 24.727273 | 94 | 0.670956 | eng_Latn | 0.405422 |
0cd41b76d6d4ebfc38d9fd28deffe4dd78fc3c8a | 390 | md | Markdown | _scholastic/football-1925.md | kwaldenphd/wax-pdf-sandbox | 96299045c0d18c7f20224f8d6ea10ffb98bec462 | ["MIT"] | null | null | null | _scholastic/football-1925.md | kwaldenphd/wax-pdf-sandbox | 96299045c0d18c7f20224f8d6ea10ffb98bec462 | ["MIT"] | null | null | null | _scholastic/football-1925.md | kwaldenphd/wax-pdf-sandbox | 96299045c0d18c7f20224f8d6ea10ffb98bec462 | ["MIT"] | 4 | 2021-11-09T15:33:57.000Z | 2021-11-30T01:38:25.000Z |
---
pid: football-1925
order: '28'
file_name: football-1925.pdf
label: Notre Dame Football Review - 1925
_date: '1925'
object_type: newspaper
source: http://archives.nd.edu/Football/Football-1925.pdf
thumbnail: "/img/derivatives/simple/football-1925_image0/thumbnail.jpg"
full: "/img/derivatives/simple/football-1925_image0/fullwidth.jpg"
layout: scholastic_item
collection: scholastic
---
| 27.857143 | 71 | 0.792308 | yue_Hant | 0.267733 |
0cd5718c06e184c037ad32fd7fd9c5ab56cb3ffd | 6,175 | md | Markdown | posts/acid-technos-avin-it-heyday.md | continuumizm/cizm-website | 6b562d558b7d5477a6b52a3849551aefae70c090 | ["MIT"] | null | null | null | posts/acid-technos-avin-it-heyday.md | continuumizm/cizm-website | 6b562d558b7d5477a6b52a3849551aefae70c090 | ["MIT"] | 6 | 2020-06-05T04:16:57.000Z | 2020-08-19T21:36:01.000Z | posts/acid-technos-avin-it-heyday.md | continuumizm/cizm-website | 6b562d558b7d5477a6b52a3849551aefae70c090 | ["MIT"] | null | null | null |
---
layout: post
title: Acid Techno's 'avin It Heyday
subtitle: "#303day"
date: 2021-03-04T04:00:28.121Z
leadimage: /img/stay-up-forever-its-not-intelligent-acid-techno-compilation-ad-muzik021-february-1997-1796x1123-continuumizm-comp.jpg
lead: "Really shouldn't talk about #303day without mentioning London Acid Techno
- the music and scene of the mid-90s and on revolving around fast, hard techno
beats with grinding, raw, dirty 303 acid lines, and underground parties of
squats, drugs, travelers and sound systems - epitomized in the 1997
compilation <em>It's Not Intelligent...And It's Not From Detroit...But It's
F**king 'avin It!</em>"
summary: On a day celebrating the TB-303, acid and dance music, one really must
look at the London acid techno scene. Revolving around fast, hard techno beats
with grinding, raw, dirty 303 acid lines, and underground parties of squats,
drugs, travelers and sound systems in the mid-90s it was epitomized in a
compilation called It's Not Intelligent...And It's Not From Detroit...But It's
F**king 'avin It!
category: sounds
tags:
- acidtechno
- london
- tb303
- 303day
---
With a tight-knit group of producers all making tracks, collaborating with each other, running a slew of labels and DJing on the party scene, it was definitely a whole thing. Productions were raw but very well done. Strong beats and 303 basslines maximized to effect, many of the tunes were a force to be reckoned with. Closer in relation to the hard trance scene than anything else, the sounds and DJs sometimes circulated amongst the same flyers and releases. It was the purely dance-driven, raw techno sound that was appealing and the focus of the core set of producers. While the outside dance scene evolved to include things like superclubs, watered-down trance becoming mainstream, and even the broader techno scene heavily rooted in trends from Detroit and experimentation, the acid techno scene carved out a niche for itself, representing another segment of partying and even of society. A punk attitude, a lot of the DJ/producers hailing from a rock/punk background, a rave spirit and drugs like ecstasy (with some KET to keep it hard), and an assortment of associated characters like travelers and activist 'crusties' in the mix made it stand as its own underground cultural force in London.
The heroes of the scene included names like Chris, Aaron and Julian Liberator (brothers best known as the Liberator DJs), The Geezer, Lawrie Immersion, D.A.V.E. The Drummer, DDR, Rowland The Bastard, and Steve Smitten, all of whom feature on numerous releases. Labels with names like Stay Up Forever, Smitten, Routemaster, Cluster, and Hydraulix all have a number of classics in their catalogues. Some of the stuff was so hard to get ahold of if you weren't right in the thick of it, going to central shop Choci's Chewns in London, that it's fun to look back now and search on Discogs for all the lost 12"s and some mix CDs that were super rare.
The Lochi (Chris Liberator & Lawrie Immersion) tune *[London Acid City](https://www.youtube.com/watch?v=1UdfG8MhnxY)* became an anthem both for the free party scene and the community that existed in and around it. A protest movement in London in 1996 called "Reclaim The Streets" would blare it during street marches, its simple but effective 303 loop and vocal snippet *"Our Time Is Now"* capturing imaginations and moving feet. On a personal connection, I had some editorial influence over my high school yearbook (I was the Editor) and was inspired to sneak it in - as the title no less!! An inside joke for myself, yep. I also loved treating the neighbors to some of the toughest acid tracks :-P. When it came time to collect some of the many underground vinyl favourites into a more 'mainstream' release, a CD compilation, the Liberators' Stay Up Forever label led the compilation *It's Not Intelligent...And It's Not From Detroit...But It's F\*\*king 'avin It!* with the track.
Success definitely extended beyond just the London party scene. You could find all kinds of scenes farther into Eastern Europe, in places like Poland, where acid techno was played regularly by DJs at parties in fields. It made sense: anywhere that rules are a bit more lax and the culture is a bit more free-for-all could be a place for acid techno to thrive and rock it!
The party scene thrived and required constant new releases; there are so many releases in that 6-7 year span. Things seemed to roll along fine for years into the 00s, with another massive hit, "One Night In Hackney", causing waves. But that tune is also noted for a changed tone from the early years, representing both the darker turn of the music and the change in the society where the scene existed. Around the mid-2000s the party scene and music apparently got a bit darker and weighed down by drug influences, the balance between the tunes and getting smashed tipping at gatherings increasingly dominated by K-holes, coupled with a London scene very much threatened by gentrification trends. I lost track of it as the releases became less frequent, and the pressure from elsewhere, namely the global techno scene's shift to minimal, slower sounds, no doubt affected how things went also. Thing is, the music, with its simple arsenal of tools, 303 basslines and hard pumping beats, and its focus on having a good time and the parties, is kind of timeless. Returning to these sounds still makes you want to dance, and if you can appreciate a good acid line like we all can, you can still appreciate acid techno in all its raw, 'avin it ways.
Listen to the mix CD portion of the *It's Not Intelligent...* compilation here:
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/ukkAb4ZHz6g" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
<small class="text-secondary">Cover photo: Magazine ad, similar to the cover art for the compilation <em>It's Not Intelligent...And It's Not From Detroit...But It's F\*\*king 'avin It!</em>, February 1997.</small> | 147.02381 | 1,240 | 0.783968 | eng_Latn | 0.999264 |
0cd5d36ccdeec49f489bc2976efb58f69eecd912 | 56 | md | Markdown | README.md | stineot/historicalrecipies | a5154493ed9ccd98fd5d6d7dbadab0b30c7f0fd2 | ["CC-BY-4.0"] | null | null | null | README.md | stineot/historicalrecipies | a5154493ed9ccd98fd5d6d7dbadab0b30c7f0fd2 | ["CC-BY-4.0"] | null | null | null | README.md | stineot/historicalrecipies | a5154493ed9ccd98fd5d6d7dbadab0b30c7f0fd2 | ["CC-BY-4.0"] | null | null | null |
# historicalrecipies
my first repository for GitHub ws
| 18.666667 | 34 | 0.821429 | eng_Latn | 0.998184 |