Dataset schema (⌀ = column may be null):

| Column | Type | Length / range / values |
| --- | --- | --- |
| hexsha | string | length 40–40 |
| size | int64 | 5–1.04M |
| ext | string | 6 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3–344 |
| max_stars_repo_name | string | length 5–125 |
| max_stars_repo_head_hexsha | string | length 40–78 |
| max_stars_repo_licenses | sequence | length 1–11 |
| max_stars_count | int64 | 1–368k, ⌀ |
| max_stars_repo_stars_event_min_datetime | string | length 24–24, ⌀ |
| max_stars_repo_stars_event_max_datetime | string | length 24–24, ⌀ |
| max_issues_repo_path | string | length 3–344 |
| max_issues_repo_name | string | length 5–125 |
| max_issues_repo_head_hexsha | string | length 40–78 |
| max_issues_repo_licenses | sequence | length 1–11 |
| max_issues_count | int64 | 1–116k, ⌀ |
| max_issues_repo_issues_event_min_datetime | string | length 24–24, ⌀ |
| max_issues_repo_issues_event_max_datetime | string | length 24–24, ⌀ |
| max_forks_repo_path | string | length 3–344 |
| max_forks_repo_name | string | length 5–125 |
| max_forks_repo_head_hexsha | string | length 40–78 |
| max_forks_repo_licenses | sequence | length 1–11 |
| max_forks_count | int64 | 1–105k, ⌀ |
| max_forks_repo_forks_event_min_datetime | string | length 24–24, ⌀ |
| max_forks_repo_forks_event_max_datetime | string | length 24–24, ⌀ |
| content | string | length 5–1.04M |
| avg_line_length | float64 | 1.14–851k |
| max_line_length | int64 | 1–1.03M |
| alphanum_fraction | float64 | 0–1 |
| lid | string | 191 classes |
| lid_prob | float64 | 0.01–1 |
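Three of the per-row statistics (avg_line_length, max_line_length, alphanum_fraction) appear to be simple functions of `content`. A sketch of how they can be computed; the exact definitions (splitting on `\n`, counting `str.isalnum` characters) are inferred from the row values, not documented:

```python
def content_stats(content: str):
    """Derive per-row statistics from a file's raw text.

    Inferred definitions: lines come from splitting on "\n" (so a trailing
    newline contributes one empty final line), and alphanum_fraction counts
    str.isalnum characters over the total character count.
    """
    lines = content.split("\n")
    avg_line_length = len(content) / len(lines)
    max_line_length = max(len(line) for line in lines)
    alphanum_fraction = sum(ch.isalnum() for ch in content) / len(content)
    return avg_line_length, max_line_length, alphanum_fraction
```

With these definitions, the `LibgdxTemplate` row below reproduces its recorded values exactly: (17, 33, 0.843137).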
53d78403bf3730613f50a7444bd7ff681faae392 | 1,510 | md | Markdown | ppersonal-python/README.md | suomitek/cubeai | cc4c0f5f445a552d239910da63944307c1f06e37 | [
"Apache-2.0"
] | null | null | null | ppersonal-python/README.md | suomitek/cubeai | cc4c0f5f445a552d239910da63944307c1f06e37 | [
"Apache-2.0"
] | null | null | null | ppersonal-python/README.md | suomitek/cubeai | cc4c0f5f445a552d239910da63944307c1f06e37 | [
"Apache-2.0"
] | null | null | null | # ppersonal
CubeAI ★ 智立方 personal center front-end portal.
The ppersonal front end is built on the Angular framework and developed in TypeScript/HTML/CSS.
## Basic configuration
- Listening port: 8204 (can be any port)
- Service registration/discovery and central configuration: Consul (8500)
## Development environment
- Operating system
    - Linux; Ubuntu 16.04 LTS recommended
- Python 3.5 or later
- Docker
- Front-end build tools
    - Node.js
    - Yarn
- Integrated development environments
    - PyCharm
    - IntelliJ IDEA
## Development
1. Prepare the front-end development environment.
    After cloning this project from GitHub for the first time, run the following command in the project directory to install the Node dependencies needed for front-end development:
        yarn install
2. Before running ppersonal in the development environment, bring up the back-end Docker containers the project depends on, as well as the uaa, gateway, and other microservices.
        cd ~/cubeai/docker/dev-python
        docker-compose up
    See the README in the docker/dev-python directory.
3. Open this project's directory in PyCharm.
4. It is recommended to create a dedicated Python virtual environment for this project in PyCharm, selecting Python 3.5 or later.
5. In PyCharm's terminal window, run the following command to install the dependency packages:
        sh pip-install-reqs.sh
6. In PyCharm, right-click the "start.py" file and select "run 'start'" to start the service.
7. For debugging the front-end Angular code, it is recommended to open this project in IntelliJ IDEA.
8. After each front-end code change, run the following in another terminal window to compile the code:
        yarn webpack:build    or    yarn webpack:prod
9. Then open or refresh the page in a browser:
        http://127.0.0.1:8080
    The gateway automatically routes the relevant pages to the ppersonal microservice, which serves the front-end UI.
10. After modifying the Angular front-end source code, repeat steps 8–9 above.
11. When development is complete, run the following command in the terminal window to build the microservice Docker image:
        sh build-docker.sh
## Deployment
1. docker-compose deployment
    - In the docker directory, run the following commands to package all microservice images:
        cd ~/cubeai/docker
        sh build-all-python.sh
    - Then cd to cubeai/docker/prod-python and run docker-compose to bring up all microservices:
        cd ~/cubeai/docker/prod-python
        docker-compose up
    See the README in the docker/prod-python directory.
2. k8s deployment
    See the README in the docker/k8s directory.
| 14.803922 | 62 | 0.688742 | yue_Hant | 0.795266 |
53d7b655d40ccd2979cdc141039eef732cb7d54d | 51 | md | Markdown | README.md | boctavian96/LibgdxTemplate | 177a43eab2cfad540481193d2ce757ad6fa876fb | [
"MIT"
] | null | null | null | README.md | boctavian96/LibgdxTemplate | 177a43eab2cfad540481193d2ce757ad6fa876fb | [
"MIT"
] | null | null | null | README.md | boctavian96/LibgdxTemplate | 177a43eab2cfad540481193d2ce757ad6fa876fb | [
"MIT"
] | null | null | null | # LibgdxTemplate
Template Project for future games
| 17 | 33 | 0.843137 | eng_Latn | 0.940419 |
53d898b359684a97c8191480af38ebbd2d808106 | 194 | md | Markdown | _posts/2022-02-21-test2.md | egls0401/egls0401.github.io | a555c0844cba2adb2eeaeb230154884a66203c5b | [
"MIT"
] | null | null | null | _posts/2022-02-21-test2.md | egls0401/egls0401.github.io | a555c0844cba2adb2eeaeb230154884a66203c5b | [
"MIT"
] | null | null | null | _posts/2022-02-21-test2.md | egls0401/egls0401.github.io | a555c0844cba2adb2eeaeb230154884a66203c5b | [
"MIT"
] | 2 | 2019-08-19T05:10:19.000Z | 2022-02-12T07:56:04.000Z | ---
title: "Jaehoon's post"
excerpt: "Kakao"
categories:
- kakao
- develop
tags:
- Kakao
- omniauth
- ruby
last_modified_at: 2019-04-13T08:06:00-05:00
---
# This is the fourth post
## Hello
* Hi
<br>
*Hi* | 9.7 | 43 | 0.603093 | kor_Hang | 0.928215 |
53d89de90934021b7662f9e10918b223b68afc96 | 3,610 | md | Markdown | pages/content/amp-dev/documentation/guides-and-tutorials/contribute/contribute-documentation/[email protected] | KooXme/amp.dev | 71db03d1754671c3e718ee7a160dba7457bf546b | [
"Apache-2.0"
] | 1 | 2020-12-27T07:39:28.000Z | 2020-12-27T07:39:28.000Z | pages/content/amp-dev/documentation/guides-and-tutorials/contribute/contribute-documentation/[email protected] | KooXme/amp.dev | 71db03d1754671c3e718ee7a160dba7457bf546b | [
"Apache-2.0"
] | 18 | 2019-07-31T08:23:19.000Z | 2019-07-31T22:55:02.000Z | pages/content/amp-dev/documentation/guides-and-tutorials/contribute/contribute-documentation/[email protected] | shishirm/amp.dev | 6db0f48cdb0a9f4e52e7884cd9c9f77ffd2a851d | [
"Apache-2.0"
] | 1 | 2020-04-22T15:39:57.000Z | 2020-04-22T15:39:57.000Z | ---
"$title": How to contribute documentation
"$order": '0'
"$hidden": 'true'
description: Getting started contributing documentation for amp.dev
formats:
- websites
- stories
- ads
- email
author: CrystalOnScript
---
Docs are the starting point for developers learning to build successful websites, stories, ads, and dynamic emails with AMP. The core team working on the AMP docs is small, but our responsibility is large.<br>We appreciate your help! Contributors can fix typos, correct out-of-date information, and write new docs! Use this page to learn the ins and outs of contributing to the docs.
Thank you for your interest in contributing to the AMP Project, and welcome to the team!
# Getting started with contributing
[amp.dev](https://amp.dev/) hosts our docs, and the AMP team collaborates on [GitHub](https://github.com/ampproject). Follow the instructions in the [repository README](https://github.com/ampproject/amp.dev) to run amp.dev on your local machine. This is essential to guarantee that assets render and format correctly. For simple fixes such as typos, however, you can edit the file directly on GitHub.
Join the [outreach working group](https://github.com/ampproject/wg-outreach) on the [AMP Project Slack](https://docs.google.com/forms/d/e/1FAIpQLSd83J2IZA6cdR6jPwABGsJE8YL4pkypAbKMGgUZZriU7Qu6Tg/viewform?fbzx=4406980310789882877) and tell us what you are working on!
There are several ways to contribute to the amp.dev docs. We have outlined a few of them to get you started. If you would like to contribute in a different way, do not hesitate to [contact us](https://github.com/ampproject/wg-outreach) and ask whether we can accept it.
## Good first issues
Our small (but mighty) team is busy working through issues big and small. If you would like to contribute but are not sure where to start, check our issues filtered by the [`good first issues` label](https://github.com/ampproject/amp.dev/labels/good%20first%20issue). These are simple fixes that will familiarize you with amp.dev and how the contribution process works.
## Editing documents
If you see information that is out of date, inappropriate, or incorrect, you can fix it! We welcome fixes and pull requests, and we also appreciate it if you open an issue for the problem first. Bonus points if you assign the issue to yourself. This lets us track efforts and keeps work from being duplicated.
## Writing documents
Want a guide or tutorial that you cannot find on amp.dev? You can write it! Start by reviewing the [types of documents we accept](documentation-types.md) and [submit a content proposal](https://github.com/ampproject/amp.dev/issues/new?assignees=&labels=&template=--content-proposal-.md&title=Content+proposal+). Once your proposal is accepted, familiarize yourself with the [AMP terminology guide](formatting.md?format=websites) and the [document formatting recommendations](formatting.md). Take a look [here to learn more about contributing samples](https://github.com/ampproject/amp.dev/blob/future/contributing/samples.md).
| 97.567568 | 739 | 0.807479 | spa_Latn | 0.993779 |
53d8ae31c1346ecec8e49c7fcd9bbd04fee1665b | 1,219 | md | Markdown | README.md | zahlabut/ExportTestNames | 26366859452dc5b966de761ad379eb570c373153 | [
"Apache-2.0"
] | null | null | null | README.md | zahlabut/ExportTestNames | 26366859452dc5b966de761ad379eb570c373153 | [
"Apache-2.0"
] | null | null | null | README.md | zahlabut/ExportTestNames | 26366859452dc5b966de761ad379eb570c373153 | [
"Apache-2.0"
] | null | null | null | # ExportTestNames
This script recursively walks through all Python scripts (in unittest structure)
in the provided paths and exports all class and test names.
Usage:
1) Use the Conf.py configuration file to set all paths you want to analyze.
2) Run ExportTestCases.py file
Output Example:
/home/ashtempl/PycharmProjects/designate-tempest-plugin/designate_tempest_plugin/tests/api/v2/test_blacklists.py
class BlacklistsAdminTest(BaseBlacklistsTest):
def test_create_blacklist(self):
def test_create_blacklist_invalid_pattern(self):
def test_create_blacklist_huge_size_description(self):
def test_create_blacklist_as_primary_fails(self):
def test_show_blacklist(self):
def test_delete_blacklist(self):
def test_list_blacklists(self):
def test_update_blacklist(self):
class TestBlacklistNotFoundAdmin(BaseBlacklistsTest):
def test_show_blacklist_404(self):
def test_update_blacklist_404(self):
def test_delete_blacklist_404(self):
class TestBlacklistInvalidIdAdmin(BaseBlacklistsTest):
def test_show_blacklist_invalid_uuid(self):
def test_update_blacklist_invalid_uuid(self):
def test_delete_blacklist_invalid_uuid(self):
Total Number of tests is: 14
| 32.945946 | 112 | 0.812961 | eng_Latn | 0.488039 |
53d8c110a5de22984ad45287fbba20a9e0ec7776 | 4,311 | md | Markdown | docs/json_browser/markdowns/ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check.md | EbiEga/ega-metadata-schema | d1bc44615022af2a1c713727d1061c91a1136565 | [
"Apache-2.0"
] | 1 | 2021-11-26T08:49:44.000Z | 2021-11-26T08:49:44.000Z | docs/json_browser/markdowns/ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check.md | EbiEga/ega-metadata-schema | d1bc44615022af2a1c713727d1061c91a1136565 | [
"Apache-2.0"
] | 11 | 2021-05-06T10:22:35.000Z | 2022-03-24T16:00:20.000Z | docs/json_browser/markdowns/ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check.md | EbiEga/ega-metadata-schema | d1bc44615022af2a1c713727d1061c91a1136565 | [
"Apache-2.0"
] | null | null | null | # External accession: object_id and object_type check Schema
```txt
https://github.com/EbiEga/ega-metadata-schema/tree/main/schemas/EGA.common-definitions.json#/definitions/object-id-and-object-type-check/anyOf/1
```
A check that ensures that, if 'external_accession' is given as the object_type, the corresponding node exists within object_id
| Abstract | Extensible | Status | Identifiable | Custom Properties | Additional Properties | Access Restrictions | Defined In |
| :------------------ | :--------- | :------------- | :----------- | :---------------- | :-------------------- | :------------------ | :---------------------------------------------------------------------------------------- |
| Can be instantiated | No | Unknown status | No | Forbidden | Allowed | none | [EGA.common-definitions.json*](../out/EGA.common-definitions.json "open original schema") |
## 1 Type
unknown ([External accession: object_id and object_type check](ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check.md))
# 1 Properties
| Property | Type | Required | Nullable | Defined by |
| :-------------------------- | :------------ | :------- | :------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [object_id](#object_id) | Not specified | Optional | cannot be null | [EGA common metadata definitions](ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check-properties-object_id.md "https://github.com/EbiEga/ega-metadata-schema/tree/main/schemas/EGA.common-definitions.json#/definitions/object-id-and-object-type-check/anyOf/1/properties/object_id") |
| [object_type](#object_type) | Not specified | Optional | cannot be null | [EGA common metadata definitions](ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check-properties-object_type.md "https://github.com/EbiEga/ega-metadata-schema/tree/main/schemas/EGA.common-definitions.json#/definitions/object-id-and-object-type-check/anyOf/1/properties/object_type") |
## object_id
`object_id`
* is optional
* Type: unknown
* cannot be null
* defined in: [EGA common metadata definitions](ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check-properties-object_id.md "https://github.com/EbiEga/ega-metadata-schema/tree/main/schemas/EGA.common-definitions.json#/definitions/object-id-and-object-type-check/anyOf/1/properties/object_id")
### object_id Type
unknown
## object_type
`object_type`
* is optional
* Type: unknown
* cannot be null
* defined in: [EGA common metadata definitions](ega-2-definitions-check-that-the-object_ids-accession-pattern-and-object_type-match-anyof-external-accession-object_id-and-object_type-check-properties-object_type.md "https://github.com/EbiEga/ega-metadata-schema/tree/main/schemas/EGA.common-definitions.json#/definitions/object-id-and-object-type-check/anyOf/1/properties/object_type")
### object_type Type
unknown
### object_type Constraints
**enum**: the value of this property must be equal to one of the following values:
| Value | Explanation |
| :--------------------- | :---------- |
| `"external_accession"` | |
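As a rough illustration of the constraint this anyOf branch encodes, an instance check might look like the sketch below. The field names come from the tables above, but the logic is a simplified assumption, since the schema reports the node types here only as "unknown":

```python
def satisfies_external_accession_branch(instance: dict) -> bool:
    """Hypothetical sketch of the documented check: an object declaring
    object_type "external_accession" must also carry an object_id.
    Other object_type values fall outside this branch."""
    if instance.get("object_type") != "external_accession":
        return False  # this anyOf branch does not apply
    return "object_id" in instance
```

In a real validator this branch would sit alongside the other anyOf alternatives, so failing it only matters if every other branch fails too.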
| 64.343284 | 449 | 0.547669 | yue_Hant | 0.395537 |
53d9162f232fbf093e79599cbf7d494301fa2d20 | 497 | md | Markdown | 0105-combine-schedulers-pt2/README.md | foxsin10/episode-code-samples | be7214189ea5b2ba74af2ff437f5075b2acefaef | [
"MIT"
] | 683 | 2017-12-09T21:29:01.000Z | 2022-03-30T08:48:39.000Z | 0105-combine-schedulers-pt2/README.md | hadiidbouk/episode-code-samples | 1c3f93f283ce143d87e8c55f174b773a9140f5eb | [
"MIT"
] | 62 | 2018-03-05T20:30:16.000Z | 2022-03-24T14:41:58.000Z | 0105-combine-schedulers-pt2/README.md | hadiidbouk/episode-code-samples | 1c3f93f283ce143d87e8c55f174b773a9140f5eb | [
"MIT"
] | 262 | 2018-01-30T02:17:40.000Z | 2022-03-28T09:57:23.000Z | ## [Point-Free](https://www.pointfree.co)
> #### This directory contains code from Point-Free Episode: [Combine Schedulers: Controlling Time](https://www.pointfree.co/episodes/ep105-combine-schedulers-controlling-time)
>
> The Scheduler protocol of Combine is a powerful abstraction that unifies many ways of executing asynchronous work, and it can even control the flow of time through our code. Unfortunately Combine doesn’t give us this ability out of the box, so let’s build it from scratch.
| 82.833333 | 274 | 0.784708 | eng_Latn | 0.989724 |
53d98bf71ff13fc3a69c5190968fdecad6aff4ad | 775 | md | Markdown | content/blog/blog-bite-using-promissory-notes-to-purchase-or-sell-a-business/index.md | sahilkanaya/clausehound-blog | f820b8e26ace29ae635d86c122529695a6647d10 | [
"MIT"
] | 1 | 2020-05-31T22:57:06.000Z | 2020-05-31T22:57:06.000Z | content/blog/blog-bite-using-promissory-notes-to-purchase-or-sell-a-business/index.md | JaninaFe/clausehound-blog | 80b09f7adf619bca44008e1a8586159dd4274abc | [
"MIT"
] | null | null | null | content/blog/blog-bite-using-promissory-notes-to-purchase-or-sell-a-business/index.md | JaninaFe/clausehound-blog | 80b09f7adf619bca44008e1a8586159dd4274abc | [
"MIT"
] | null | null | null | ---
title: "Blog Bite: Using promissory notes to purchase or sell a business."
author: [email protected]
tags: ["Promissory Note","cmcivor"]
date: 2018-05-15 14:57:29
description: "This article posted on our partner site Mondaq.com discusses how promissory notes can be used in the purchase or selling of a business. They are a way to make payment for the transaction over a peri..."
---
[This article posted on our partner site Mondaq.com](http://www.mondaq.com/canada/x/440042/Shareholders/Terms+of+the+Deal+Both+Price+and+Terms+are+Important) discusses how promissory notes can be used in the purchase or sale of a business. They are a way to spread payment for the transaction over a period of time and to reduce taxes, but they also carry a collection risk. | 86.111111 | 378 | 0.779355 | eng_Latn | 0.991555 |
53d99c40dc84f747c4fea428a270725febc95469 | 1,610 | md | Markdown | subscriptions/faq/subscriber/renewals/includes/cancel-cloud-subs.md | PavelAltynnikov/visualstudio-docs.ru-ru | 885a1cdea7a08daa90d84262b36bd83e6e9f1328 | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-12-27T02:53:32.000Z | 2022-02-23T03:39:29.000Z | subscriptions/faq/subscriber/renewals/includes/cancel-cloud-subs.md | PavelAltynnikov/visualstudio-docs.ru-ru | 885a1cdea7a08daa90d84262b36bd83e6e9f1328 | [
"CC-BY-4.0",
"MIT"
] | 80 | 2017-12-19T18:54:56.000Z | 2021-04-26T15:30:18.000Z | subscriptions/faq/subscriber/renewals/includes/cancel-cloud-subs.md | PavelAltynnikov/visualstudio-docs.ru-ru | 885a1cdea7a08daa90d84262b36bd83e6e9f1328 | [
"CC-BY-4.0",
"MIT"
] | 53 | 2017-12-19T16:16:21.000Z | 2021-10-09T11:14:07.000Z | ---
title: Как отменить ежемесячные и ежегодные подписки?
description: При отмене облачной подписки на Visual Studio вы просто отменяете автоматическое продление. Подписка продолжается до даты...
ms.faqid: q4_6
ms.topic: include
ms.assetid: 2c83cd19-2692-4aef-9cd7-b7842639cbce
author: CaityBuschlen
ms.author: cabuschl
ms.date: 01/29/2021
ms.openlocfilehash: ed82939c8a70a8b9591e8db4ee727bb1f888d53e
ms.sourcegitcommit: cfeffe2364275a347db0ba2dce36d8e80001c081
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 01/30/2021
ms.locfileid: "99104543"
---
## <a name="how-do-i-cancel-monthly-and-annual-subscriptions"></a>How do I cancel monthly and annual subscriptions?
To cancel monthly and annual subscriptions purchased through the [Visual Studio Marketplace](https://marketplace.visualstudio.com), sign in to the [administration portal](https://manage.visualstudio.com) and enter zero for the number of subscriptions on the agreement.
To reduce the number of subscriptions, follow these steps:
1. Sign in to https://manage.visualstudio.com
2. If you administer multiple agreements, select the desired agreement from the drop-down list.
3. Click the **Overview** icon in the upper-left corner to display subscription details.
4. Select the entry for the subscriptions you want to cancel and click **Change quantity**. You will be taken to the Visual Studio Marketplace, where you can change the quantity.
5. Change the quantity to zero (0). Your subscriptions will remain active until the scheduled billing date, but will not renew at the start of the next billing period.
| 59.62963 | 264 | 0.81677 | rus_Cyrl | 0.917703 |
53da07a18d550d6d6c3d9eea5e047d95e20eca27 | 8,039 | md | Markdown | Exchange/ExchangeServer/permissions/feature-permissions/policy-and-compliance-permissions.md | atguilmette/OfficeDocs-Exchange | 893ce915a4c12e25118e164705b3a65cb908d442 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Exchange/ExchangeServer/permissions/feature-permissions/policy-and-compliance-permissions.md | atguilmette/OfficeDocs-Exchange | 893ce915a4c12e25118e164705b3a65cb908d442 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Exchange/ExchangeServer/permissions/feature-permissions/policy-and-compliance-permissions.md | atguilmette/OfficeDocs-Exchange | 893ce915a4c12e25118e164705b3a65cb908d442 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Messaging policy and compliance permissions in Exchange 2016"
ms.author: serdars
author: SerdarSoysal
manager: serdars
ms.date: 6/12/2018
ms.audience: ITPro
ms.topic: reference
ms.prod: exchange-server-itpro
localization_priority: Normal
ms.assetid: ec4d3b9f-b85a-4cb9-95f5-6fc149c3899b
description: "Summary: Learn about permissions that are required to manage policy and compliance features in Exchange Server 2016."
---
# Messaging policy and compliance permissions in Exchange 2016
**Summary**: Learn about permissions that are required to manage policy and compliance features in Exchange Server 2016.
The permissions required to configure messaging policy and compliance vary depending on the procedure being performed or the cmdlet you want to run. For more information about messaging policy and compliance, see [Messaging policy and compliance in Exchange 2016](../../policy-and-compliance/policy-and-compliance.md).
To find out what permissions you need to perform the procedure or run the cmdlet, do the following:
1. In the table below, find the feature that is most related to the procedure you want to perform or the cmdlet you want to run.
2. Next, look at the permissions required for the feature. You must be assigned one of those role groups, an equivalent custom role group, or an equivalent management role. You can also click on a role group to see its management roles. If a feature lists more than one role group, you only need to be assigned one of the role groups to use the feature. For more information about role groups and management roles, see **Understanding Role Based Access Control**.
3. Now, run the **Get-ManagementRoleAssignment** cmdlet to look at the role groups or management roles assigned to you to see if you have the permissions that are necessary to manage the feature.
> [!NOTE]
> You must be assigned the Role Management management role to run the **Get-ManagementRoleAssignment** cmdlet. If you don't have permissions to run the **Get-ManagementRoleAssignment** cmdlet, ask your Exchange administrator to retrieve the role groups or management roles assigned to you.
If you want to delegate the ability to manage a feature to another user, see **Delegate a Management Role**.
## Messaging policy and compliance permissions
You can use the features in the following table to configure messaging policy and compliance features. The role groups that are required to configure each feature are listed.
Users who are assigned the View-Only Management role group can view the configuration of the features in the following table. For more information, see [View-Only Organization Management](http://technet.microsoft.com/library/c514c6d0-0157-4c52-9ec6-441d9a30f3df.aspx).
|**Feature**|**Permissions required**|
|:-----|:-----|
|Data loss prevention (DLP) <br/> |[Compliance Management](http://technet.microsoft.com/library/b91b23a4-e9c7-4bd0-9ee3-ec5cb498da15.aspx) <br/> |
|Delete mailbox content (using the [Search-Mailbox](http://technet.microsoft.com/library/9ee3b02c-d343-4816-a583-a90b1fad4b26.aspx) cmdlet with the _DeleteContent_ switch) <br/> |[Discovery Management](http://technet.microsoft.com/library/b8bc5922-a8c9-4707-906d-fa38bb87da8f.aspx) **and** <br/> [Mailbox Import Export Role](http://technet.microsoft.com/library/d7cdce7a-6c46-4750-b237-d1c1773e8d28.aspx) <br/> **Note**: By default, the Mailbox Import Export role isn't assigned to any role group. You can assign a management role to a built-in or custom role group, a user, or a universal security group. Assigning a role to a role group is recommended. For more information, see [Add a Role to a User or USG](http://technet.microsoft.com/library/ae5608de-a141-4714-8876-bce7d2a22cb5.aspx). <br/> |
|Discovery mailboxes - Create <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Recipient Management](http://technet.microsoft.com/library/669d602e-68e3-41f9-a455-b942d212d130.aspx) <br/> |
|Information Rights Management (IRM) configuration <br/> |[Compliance Management](http://technet.microsoft.com/library/b91b23a4-e9c7-4bd0-9ee3-ec5cb498da15.aspx) <br/> [Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> |
|In-Place Archive <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Recipient Management](http://technet.microsoft.com/library/669d602e-68e3-41f9-a455-b942d212d130.aspx) <br/> |
|In-Place Archive - Test connectivity <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Server Management](http://technet.microsoft.com/library/30cbc4de-adb3-42e8-922f-7661095bdb8c.aspx) <br/> |
|In-Place eDiscovery <br/> |[Discovery Management](http://technet.microsoft.com/library/b8bc5922-a8c9-4707-906d-fa38bb87da8f.aspx) <br/> **Note**: By default, the Discovery Management role group doesn't have any members. No users, including administrators, have the required permissions to search mailboxes. For more information, see [Assign eDiscovery permissions in Exchange 2016](../../policy-and-compliance/ediscovery/assign-permissions.md). <br/> |
|In-Place Hold <br/> |[Discovery Management](http://technet.microsoft.com/library/b8bc5922-a8c9-4707-906d-fa38bb87da8f.aspx) <br/> [Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> **Notes**: <br/> • To create a query-based In-Place Hold, a user requires both the Mailbox Search and Legal Hold roles to be assigned directly or via membership in a role group that has both roles assigned. To create an In-Place Hold without using a query, which places all mailbox items on hold, you must have the Legal Hold role assigned. The Discovery Management role group is assigned both roles. <br/> • The Organization Management role group is assigned the Legal Hold role. Members of the Organization Management role group can place an In-Place Hold on all items in a mailbox, but can't create a query-based In-Place Hold. <br/> |
|Journaling <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Records Management](http://technet.microsoft.com/library/0e0c95ce-6109-4591-b86d-c6cfd44d21f5.aspx) <br/> |
|Litigation Hold <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> |
|Mailbox audit logging <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Records Management](http://technet.microsoft.com/library/0e0c95ce-6109-4591-b86d-c6cfd44d21f5.aspx) <br/> |
|Message classifications <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> |
|Messaging records management <br/> |[Compliance Management](http://technet.microsoft.com/library/b91b23a4-e9c7-4bd0-9ee3-ec5cb498da15.aspx) <br/> [Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Records Management](http://technet.microsoft.com/library/0e0c95ce-6109-4591-b86d-c6cfd44d21f5.aspx) <br/> |
|Retention policies - Apply <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Recipient Management](http://technet.microsoft.com/library/669d602e-68e3-41f9-a455-b942d212d130.aspx) <br/> [Records Management](http://technet.microsoft.com/library/0e0c95ce-6109-4591-b86d-c6cfd44d21f5.aspx) <br/> |
|Retention policies - Create <br/> |See the entry for Messaging records management <br/> |
|Transport rules <br/> |[Organization Management](http://technet.microsoft.com/library/0bfd21c1-86ac-4369-86b7-aeba386741c8.aspx) <br/> [Records Management](http://technet.microsoft.com/library/0e0c95ce-6109-4591-b86d-c6cfd44d21f5.aspx) <br/> |
| 133.983333 | 885 | 0.778455 | eng_Latn | 0.773849 |
53da2dab00d2996ee85d26df6b50d0261bbb8440 | 2,141 | md | Markdown | README.md | hihouhou/huginn_geforcenow_region_status_agent | ce3684b090ef8db3bbfa56d4d400f7606cf8f44f | [
"MIT"
] | null | null | null | README.md | hihouhou/huginn_geforcenow_region_status_agent | ce3684b090ef8db3bbfa56d4d400f7606cf8f44f | [
"MIT"
] | null | null | null | README.md | hihouhou/huginn_geforcenow_region_status_agent | ce3684b090ef8db3bbfa56d4d400f7606cf8f44f | [
"MIT"
] | null | null | null | # GeforcenowRegionStatusAgent
Welcome to your new agent gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/huginn_geforcenow_region_status_agent`. To experiment with that code, run `bin/console` for an interactive prompt.
TODO: Delete this and the text above, and describe your gem
## Installation
This gem is run as part of the [Huginn](https://github.com/huginn/huginn) project. If you haven't already, follow the [Getting Started](https://github.com/huginn/huginn#getting-started) instructions there.
Add this string to your Huginn's .env `ADDITIONAL_GEMS` configuration:
```ruby
huginn_geforcenow_region_status_agent
# when only using this agent gem it should look like this:
ADDITIONAL_GEMS=huginn_geforcenow_region_status_agent
```
And then execute:
$ bundle
## Usage
TODO: Write usage instructions here
## Development
Running `rake` will clone and set up Huginn in `spec/huginn` to run the specs of the gem in Huginn as if they were built-in Agents. The desired Huginn repository and branch can be modified in the `Rakefile`:
```ruby
HuginnAgent.load_tasks(branch: '<your branch>', remote: 'https://github.com/<github user>/huginn.git')
```
Make sure to delete the `spec/huginn` directory and re-run `rake` after changing the `remote` to update the Huginn source code.
After the setup is done `rake spec` will only run the tests, without cloning the Huginn source again.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release` to create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
1. Fork it ( https://github.com/[my-github-username]/huginn_geforcenow_region_status_agent/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
| 44.604167 | 315 | 0.766464 | eng_Latn | 0.963362 |
---
description: REST API Users Methods
---
# Users
Please find the document here:
[https://developer.rocket.chat/api/rest-api/methods/users](https://developer.rocket.chat/api/rest-api/methods/users)
# try-Django
try Django
# Deprecated
The **`<keygen>`** [HTML](https://developer.mozilla.org/en-US/docs/Web/HTML) element exists to facilitate generation of key material, and submission of the public key as part of an [HTML form](https://developer.mozilla.org/en-US/docs/Learn/Forms). This mechanism is designed for use with Web-based certificate management systems. It is expected that the `<keygen>` element will be used in an HTML form along with other information needed to construct a certificate request, and that the result of the process will be a signed certificate.
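For reference, the legacy markup looked like this (a sketch only — `<keygen>` is non-functional in current browsers, and the attribute values here are illustrative):

```html
<form action="/submit-cert" method="post">
  <!-- The browser generated a key pair and submitted the public key with the form. -->
  <keygen name="pubkey" challenge="random-challenge-string" keytype="rsa">
  <input type="submit" value="Request certificate">
</form>
```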
# palmeira-fict-cia
53dc27ddbd804f5b7a862c918a7f6346b733498e | 5,412 | md | Markdown | Invidious-Instances.md | RiversideRocks/documentation | b0f46efac56385edd603acb26e802be977e26202 | [
"CC0-1.0"
] | null | null | null | Invidious-Instances.md | RiversideRocks/documentation | b0f46efac56385edd603acb26e802be977e26202 | [
"CC0-1.0"
] | null | null | null | Invidious-Instances.md | RiversideRocks/documentation | b0f46efac56385edd603acb26e802be977e26202 | [
"CC0-1.0"
] | null | null | null | ---
title: Invidious-Instances
description:
published: true
date: 2021-05-23T16:58:51.441Z
tags:
editor: markdown
dateCreated: 2021-05-23T16:58:48.431Z
---
# Public Invidious Instances:
[Uptime History provided by Uptimerobot](https://stats.uptimerobot.com/89VnzSKAn)
[Instances API](https://api.invidious.io/)
**Warning: Any public instance that isn't in this list is considered untrustworthy. Use them at your own risk.**
## List of public Invidious Instances (sorted from oldest to newest):
* [invidious.snopyta.org](https://invidious.snopyta.org/) 🇫🇮
* [yewtu.be](https://yewtu.be) 🇳🇱 [](https://uptime.invidious.io/784257752) - Source code/changes: https://github.com/unixfox/invidious-custom
* [invidious.kavin.rocks](https://invidious.kavin.rocks) 🇮🇳 [](https://status.kavin.rocks/786132664) [invidious-us.kavin.rocks](https://invidious-us.kavin.rocks) 🇺🇸 [](https://status.kavin.rocks/788216947) [invidious-jp.kavin.rocks](https://invidious-jp.kavin.rocks) 🇯🇵 [](https://status.kavin.rocks/788866642) (uses Cloudflare)
* [vid.puffyan.us](https://vid.puffyan.us) 🇺🇸 [](https://stats.uptimerobot.com/n7A08HGVl6/786947233)
* [invidious.namazso.eu](https://invidious.namazso.eu) 🇩🇪
* [inv.riverside.rocks](https://inv.riverside.rocks) 🇺🇸
* [vid.mint.lgbt](https://vid.mint.lgbt) 🇨🇦 [Status Page](https://status.mint.lgbt/service/lesvidious)
* [invidious.osi.kr](https://invidious.osi.kr) 🇳🇱 [Status Page](https://status.osbusiness.net/report/uptime/6e47474f3737993d8a3fde06f33dc128/)
* [invidio.xamh.de](https://invidio.xamh.de) 🇩🇪 
* [youtube.076.ne.jp](https://youtube.076.ne.jp) 🇯🇵 - Source code/changes: https://git.076.ne.jp/TechnicalSuwako/invidious-mod
* [yt.didw.to](https://yt.didw.to/) 🇸🇪
* [yt.artemislena.eu](https://yt.artemislena.eu) 🇩🇪
* [inv.cthd.icu](https://inv.cthd.icu/) 🇷🇸 - Source code/changes: https://github.com/cysea/invidious-custom
### Tor Onion Services:
* [c7hqkpkpemu6e7emz5b4vyz7idjgdvgaaa3dyimmeojqbgpea3xqjoid.onion](http://c7hqkpkpemu6e7emz5b4vyz7idjgdvgaaa3dyimmeojqbgpea3xqjoid.onion)
* [w6ijuptxiku4xpnnaetxvnkc5vqcdu7mgns2u77qefoixi63vbvnpnqd.onion](http://w6ijuptxiku4xpnnaetxvnkc5vqcdu7mgns2u77qefoixi63vbvnpnqd.onion/)
* [kbjggqkzv65ivcqj6bumvp337z6264huv5kpkwuv6gu5yjiskvan7fad.onion](http://kbjggqkzv65ivcqj6bumvp337z6264huv5kpkwuv6gu5yjiskvan7fad.onion/) 🇳🇱
* [grwp24hodrefzvjjuccrkw3mjq4tzhaaq32amf33dzpmuxe7ilepcmad.onion](http://grwp24hodrefzvjjuccrkw3mjq4tzhaaq32amf33dzpmuxe7ilepcmad.onion) 🇺🇸
* [hpniueoejy4opn7bc4ftgazyqjoeqwlvh2uiku2xqku6zpoa4bf5ruid.onion](http://hpniueoejy4opn7bc4ftgazyqjoeqwlvh2uiku2xqku6zpoa4bf5ruid.onion) 🇺🇸 (Onion of invidious-us.kavin.rocks)
* [osbivz6guyeahrwp2lnwyjk2xos342h4ocsxyqrlaopqjuhwn2djiiyd.onion](http://osbivz6guyeahrwp2lnwyjk2xos342h4ocsxyqrlaopqjuhwn2djiiyd.onion) 🇳🇱 (Onion of invidious.hub.ne.kr)
* [p4ozd76i5zmqepf6xavtehswcve2taptxbwpswkq5osfvncwylavllid.onion](http://p4ozd76i5zmqepf6xavtehswcve2taptxbwpswkq5osfvncwylavllid.onion) 🇯🇵 (Onion of invidious-jp.kavin.rocks)
* [u2cvlit75owumwpy4dj2hsmvkq7nvrclkpht7xgyye2pyoxhpmclkrad.onion](http://u2cvlit75owumwpy4dj2hsmvkq7nvrclkpht7xgyye2pyoxhpmclkrad.onion) 🇺🇸 (Onion of inv.riverside.rocks)
## Rules to have your instance in this list:
1. Instances MUST have been up for at least a month before they can be added to this list.
2. Instances MUST have been updated in the last month. An instance that hasn't been updated in the last month is considered unmaintained and is removed from the list.
3. Instances MUST have statistics (/api/v1/stats) enabled (`statistics_enabled:true` in the configuration file).
4. Instances MUST have an uptime of at least 90% ([according to uptime.invidious.io](https://uptime.invidious.io/)).
5. Instances MUST be served via domain name.
6. Instances MUST be served via HTTPS (or/and onion).
7. Instances using any DDoS Protection / MITM MUST be marked as such (eg: Cloudflare, DDoS-Guard...).
8. Instances using any type of anti-bot protection MUST be marked as such.
9. Instances MUST NOT use any type of analytics.
10. Any system whose goal is to modify the content served to the user (i.e web server HTML rewrite) is considered the same as modifying the source code.
11. Instances running a modified source code:
- MUST respect the AGPL by publishing their source code and stating their changes **before** they are added to the list
- MUST publish any later modification in a timely manner
- MUST contain a link to both the modified and original source code of Invidious in the footer.
12. Instances MUST NOT serve ads (sponsorship links in the banner are considered ads) NOR promote products.
**NOTE:** We reserve the right to decline any instance from being added to the list, and to remove or ban any instance that repeatedly breaks the aforementioned rules.
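Rule 3 can be self-checked before requesting inclusion; a minimal sketch (the helper and the sample JSON are illustrative — a real check would pipe `curl -s https://<your-instance>/api/v1/stats` into it):

```shell
# Hypothetical helper: a healthy /api/v1/stats response is JSON that
# includes a "software" block describing the Invidious build.
check_stats() {
  if grep -q '"software"'; then
    echo "statistics enabled"
  else
    echo "statistics disabled"
  fi
}

# Offline demo with a sample response body:
printf '%s' '{"version":"2.0","software":{"name":"invidious"}}' | check_stats
```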
# RapidRx Landing Page
This repository houses the landing page for the [gener8tor](https://www.gener8tor.com)-based direct-to-consumer pharmaceuticals startup, RapidRx.
# Contact
Jeff Boeh
*Managing Director at The Brandery*
[[email protected]]([email protected])
|Date |Issue |Description |
|----------|------|---------------------------------------------------------------------------------------------------------|
|2020/01/06| |Release 0.3.2 |
|2020/01/01|68 |Use common component for managing Rego evaluation |
|2019/12/29|66 |Support tracing Rego evaluation when in debug mode |
|2019/12/22| |Release 0.3.1 |
|2019/12/22|61 |Do not cache opa policies read from file |
|2019/12/22| |Release 0.3.0 |
|2019/12/22|55 |Allow TLS to be disabled using `--disable-tls` for use in Kubernetes |
|2019/12/21|2 |Allow users to be set in a separate file that gets hot reloaded |
|2019/12/21|50 |Support NATS JetStream for auditing |
|2019/12/21|48 |Support user properties |
|2019/12/19|42 |Support Open Policy Agent |
|2019/04/19| |Release 0.2.0 |
|2019/04/19|34 |Run as the root user on el7 as well |
|2019/02/15| |Release 0.1.0 |
|2019/02/14|30 |Include a UTC Unix time stamp in the nats notification |
|2019/02/14|28 |Instead of 0, 1 or 2 use unknown, allow or deny for the action taken in nats notifications |
|2019/02/14|28 |Include the `site` that produced the audit message in the nats notification |
|2019/02/14| |Release 0.0.3 |
|2019/02/14|25 |Allow disabling authenticators and signers |
|2019/02/01|22 |Make callerids more compatible with mco standards |
|2019/02/01|21 |Enforce exp headers and do not accept ones that are too far in the future |
|2019/01/31| |Release 0.0.2 |
|2019/01/31|18 |Write token with mode `0600` |
|2019/01/30|16 |Syntax fixes to login.rb |
|2019/01/30|12 |Correctly expose the agent and action being authorized to prometheus |
|2019/01/30|11 |Place correct integer format date in the `iat` field of JWT tokens |
|2019/01/30|10 |Expand the token path to allow `~` to be used in system wide configs |
|2019/01/30| |Release 0.0.1 |
---
author: Michael T. Lombardi
date: 2021-12-11
linktitle: The Abominable
title: The Abominable
summary: |
A terrible beast for the winter feast.
categories:
- beastiary
images:
- images/posts/abominable/abominable.png
---
> Snow blows hard, the world shadow and ice.
> The fire burns low and an awful laugh rolls over the camp like the severed and gnawed limb that tumbles to your feet.
{{< columns >}}
**HD:** 8d8
**Domains:**
- Tundra Ambush (d12)
**Tricks:**
- _Freeze Shadows_ (d8, 2 uses): Roll trick dice whenever nearby a target as a contest; if successful, target cannot move until their shadow is no longer cast where it is.
- _Spatterkin_ (2d6, 3 uses): Roll trick dice when injured; blood spatter forms a small gorey snowbeast with the result as HP.
- _Masticating Regeneration_ (3d6, 1 use): Roll trick dice on hit, deal additional result damage and gain result HP.
<--->

{{< /columns >}}
---
title: 'HTML Imports: ウェブのための #include - HTML5 Rocks'
author: azu
layout: post
itemUrl: 'http://www.html5rocks.com/ja/tutorials/webcomponents/imports/'
editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2014/05/index.json'
date: '2014-05-20T12:37:42Z'
tags:
- HTML
- WebComponents
---
A Japanese translation of the HTML5 Rocks article on HTML Imports.
Covers basic usage, `link.import`, importing Web Components, the `template` element, sub-imports, caching, non-dynamic loading, and more.
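A minimal sketch of the API the article describes (`widget.html` is a hypothetical file; note that HTML Imports have since been removed from browsers):

```html
<link rel="import" href="widget.html" id="widget">
<script>
  var link = document.querySelector('#widget');
  // link.import is the imported Document (null until it has loaded).
  var tmpl = link.import.querySelector('template');
  document.body.appendChild(document.importNode(tmpl.content, true));
</script>
```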
# Common Commands
```bash
kubectl -n istio-system get deploy
```
```
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
grafana                1/1     1            1           5h37m
istio-egressgateway    1/1     1            1           5h56m
istio-ingressgateway   1/1     1            1           5h56m
istiod                 1/1     1            1           5h56m
jaeger                 1/1     1            1           5h37m
kiali                  1/1     1            1           5h37m
prometheus             1/1     1            1           5h37m
```
For details of what's installed:
```bash
kubectl -n istio-system get IstioOperator installed-state -o yaml
```
# Access to resources
```bash
k auth can-i --list
# You can also create
k create ns dayz
kubens dayz
k create sa zoe
k create rolebinding zoe --clusterrole=admin --serviceaccount=dayz:zoe
k auth can-i --list --as=system:serviceaccount:dayz:zoe -n dayz
# Compare this to kube-system
k auth can-i --list --as=system:serviceaccount:dayz:zoe -n kube-system
# Or better way
k auth can-i create deployments --as zoe
k auth can-i delete nodes --as zoe
k auth can-i create pods --as zoe
# Other things you can do
k get rolebindings
```
`k get rolebindings`

```
NAME   ROLE                AGE
zoe    ClusterRole/admin   29m
```
# Delete namespace
```bash
(
NAMESPACE=yellow
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
```
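The same finalize call can be made without the background proxy by using `kubectl replace --raw`; a sketch (the helper name is an assumption; requires cluster access and `jq`):

```shell
# Hypothetical helper: clear spec.finalizers and PUT to the namespace's
# /finalize subresource in one pipeline.
force_finalize_namespace() {
  local ns="$1"
  kubectl get namespace "$ns" -o json \
    | jq '.spec = {"finalizers":[]}' \
    | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
}

# Usage: force_finalize_namespace yellow
```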
# Flask Rollbar
## Web起源和网络基础
TCP/IP 是一类协议的总称, 包括 IP, TCP, UDP, FTP, SNMP, HTTP, DNS, FDDI, ICMP等
按照协议族可以分为四层:应用层,传输层,网络层,数据链接层, 层次化的好处是可以很方便的替换任一层的实现,只需要把各自层的接口定好即可。
* 应用层。 预存类各类通用应用服务,例如:FTP, DNS 。。
* 传输层。提供网络链接中两台电脑的数据传输, 如: TCP, UDP
* 网络层。用来处理网络上流动的数据包,数据包是网络传输的最小单位,规定了以怎样的路径传递,在于多台计算机通信是,主要作用就是在众多选项中选择一条传输线路, 如:IP
* 数据链路层。 处理网络链接中的硬件部分, 包括驱动,NIC(网络适配器),光纤等物理可见部分
数据传输的封装: 应用层[(HTTP数据)]---->传输层[(TCP首部(HTTP数据))] ---->网络层[(IP首部(TCP首部(HTTP数据)))] ---->数据链路层[(以太网首部(IP首部(TCP首部(HTTP数据))))]
## HTTP协议特性
## HTTP 报文信息
## 状态码定义
**状态码类别**
| |类别| 原因描述|
|--|--|--|
|1XX |Information | 接收的消息正在处理|
|2XX | Success |请求处理完毕|
|3XX | Redirection(重定向状态码) | 需要进行附加操作 |
|4XX | Client Error | 服务器无法处理请求|
|5XX | Server Error| 服务器内部处理故障|
**200类状态码**
|类别| 原因描述|
|--|--|
|200| OK|
|204| 请求成功,但无资源可用|
|206| 范围请求,只要其中的一部分|
**300类状态码**
|类别| 原因描述|
|--|--|
| 301| Moved Permanently(永久性重定向)|
| 302| Found 临时重定向,暂时访问新的链接|
| 304| Not Modified 没有满足http请求头中的If-Match信息,不包含主体部分|
| 307| 临时重定向|
**400类**
|类别| 原因描述|
|--|--|
| 401 | 用户为认证,没有访问权限|
| 403 | Forbidden 请求被拒绝|
| 404 | 资源没有找到|
**500类**
|类别| 原因描述|
|--|--|
| 500 | Inter=rnal Server Error 服务器内部处理错误|
| 503 | Service Unavalible 资源不可用|
## Web服务器基础
## HTTPS 协议详解
## 用户认证
## 基于HTTP功能的追加协议
## 构建web的基础知识
## Web网络安全
| 14.177778 | 120 | 0.646552 | yue_Hant | 0.844607 |
53dfc78574dc434e30e3db1dbcb2fdbe4190857b | 716 | md | Markdown | README.md | psykzz/flask_rollbar | 7b657fb18795e60c7587bbb09b20eb419677c599 | [
"MIT"
] | 5 | 2015-12-26T12:10:15.000Z | 2019-05-04T19:56:38.000Z | README.md | psykzz/flask_rollbar | 7b657fb18795e60c7587bbb09b20eb419677c599 | [
"MIT"
] | 45 | 2016-11-14T17:19:53.000Z | 2022-03-31T02:33:33.000Z | README.md | psykzz/flask_rollbar | 7b657fb18795e60c7587bbb09b20eb419677c599 | [
"MIT"
] | 1 | 2020-02-18T03:16:07.000Z | 2020-02-18T03:16:07.000Z | # Flask Rollbar
Integration for rollbar in Flask
## Example
```python
from flask import Flask
from flask_rollbar import Rollbar
app = Flask(__name__)
app.config.update({
'ROLLBAR_ENABLED': True,
'ROLLBAR_SERVER_KEY': os.environ.get('ROLLBAR_SERVER_KEY'),
})
# Supports using the factory pattern as well.
Rollbar(app)
# Or
# rb = Rollbar()
# rb.init_app(app)
@app.route('/')
def index():
return "hello world, <a href='/error'>click here for an error</a>"
@app.route('/error')
def error():
a = 1 / 0
return "Should never happen"
app.run()
```
## Rollbar
* Read more about rollbar here https://rollbar.com/
* Live demo - https://rollbar.com/demo/demo/
## Readme is still a work in progress
| 16.651163 | 68 | 0.685754 | eng_Latn | 0.414395 |
53e028d53bc9ecf8a0bb678f89b47c3f031bd229 | 45 | md | Markdown | README.md | Allfass/TelegramBot | e6493336b754588f42b6956e543205166b0b866e | [
"MIT"
] | null | null | null | README.md | Allfass/TelegramBot | e6493336b754588f42b6956e543205166b0b866e | [
"MIT"
] | 1 | 2022-02-22T15:59:43.000Z | 2022-02-22T16:28:17.000Z | README.md | Allfass/TelegramBot | e6493336b754588f42b6956e543205166b0b866e | [
"MIT"
] | null | null | null | # TelegramBot
a simple bot written in python
| 15 | 30 | 0.8 | eng_Latn | 0.995101 |
53e0c0a0c7d9da1d6f74e381db9aa59936510ae8 | 78 | md | Markdown | README.md | dianavuillio/ccc-examples | 728022d3791fc8a1fce380638632aadcffccf172 | [
"MIT"
] | null | null | null | README.md | dianavuillio/ccc-examples | 728022d3791fc8a1fce380638632aadcffccf172 | [
"MIT"
] | null | null | null | README.md | dianavuillio/ccc-examples | 728022d3791fc8a1fce380638632aadcffccf172 | [
"MIT"
] | null | null | null | # Tareas de Code Cave Camp
* Lista de super
* Lista libre para recordatorios
| 15.6 | 32 | 0.75641 | spa_Latn | 0.535949 |
53e0e81dcf22373f8c943bffaddfa5eacf15ff30 | 122 | md | Markdown | _posts/0000-01-02-pinatlee.md | pinatlee/github-slideshow | 8d5abc51c9110c24efa40be09cfdca84633e180f | [
"MIT"
] | null | null | null | _posts/0000-01-02-pinatlee.md | pinatlee/github-slideshow | 8d5abc51c9110c24efa40be09cfdca84633e180f | [
"MIT"
] | 3 | 2020-10-30T04:51:08.000Z | 2020-10-30T05:17:35.000Z | _posts/0000-01-02-pinatlee.md | pinatlee/github-slideshow | 8d5abc51c9110c24efa40be09cfdca84633e180f | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Welcome to our second slide!"
---
learning github piece by piece
Use the left arrow to go back!
| 17.428571 | 37 | 0.721311 | eng_Latn | 0.998655 |
---
title: Events Logged by the Integration Services Service | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- integration-services
ms.topic: conceptual
helpviewer_keywords:
- service [Integration Services], events
- events [Integration Services], service
- Integration Services service, events
ms.assetid: d4122dcf-f16f-47a0-93a2-ffa3d0d4f9cf
author: douglaslMS
ms.author: douglasl
manager: craigg
ms.openlocfilehash: 52cb18c5828a2d72ef8a36082554425e7e3afb82
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 10/02/2018
ms.locfileid: "48187092"
---
# <a name="events-logged-by-the-integration-services-service"></a>Events Logged by the Integration Services Service
  The [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] service logs various messages to the Windows Application event log. The service logs these messages when it starts, when it stops, and when certain problems occur.
  This topic describes the common event messages that the service logs to the Application event log. The [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] service logs all the messages described in this topic with an event source of SQLISService.
  For general information about the [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] service, see [Integration Services Service (SSIS Service)](integration-services-service-ssis-service.md).
## <a name="messages-about-the-status-of-the-service"></a>Messages about the Status of the Service
  When you select [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] during installation, the [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] service is installed and started, and its startup type is set to automatic.
|Event ID|Symbolic Name|Text|Notes|
|--------------|-------------------|----------|-----------|
|256|DTS_MSG_SERVER_STARTING|Starting [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service.|The service is about to start.|
|257|DTS_MSG_SERVER_STARTED|[!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service started.|The service has started.|
|260|DTS_MSG_SERVER_START_FAILED|[!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service failed to start.%nError: %1|The service could not start. The failure might be caused by a damaged installation or an inappropriate service account.|
|258|DTS_MSG_SERVER_STOPPING|Stopping [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service.%n%nAll running packages will be stopped on server shutdown: %1|The service is stopping and, if the service is configured to stop packages, it stops all running packages. You can set a true or false value in the configuration file to determine whether the service stops running packages when it stops. The message for this event includes the value of this setting.|
|259|DTS_MSG_SERVER_STOPPED|[!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service stopped.%nServer version %1|The service has stopped.|
## <a name="messages-about-the-configuration-file"></a>Messages about the Configuration File
  The [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] service stores its settings in an XML file that you can modify. For more information, see [Configuring the Integration Services Service (SSIS Service)](../configuring-the-integration-services-service-ssis-service.md).
|Event ID|Symbolic Name|Text|Notes|
|--------------|-------------------|----------|-----------|
|274|DTS_MSG_SERVER_MISSING_CONFIG_REG|[!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service: %nThe registry setting that specifies the configuration file does not exist. %nAttempting to load the default configuration file.|The registry entry that contains the path of the configuration file does not exist or is empty.|
|272|DTS_MSG_SERVER_MISSING_CONFIG|The [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service configuration file does not exist.%nLoading with default settings.|The configuration file itself does not exist at the specified location.|
|273|DTS_MSG_SERVER_BAD_CONFIG|The [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service configuration file is not valid.%nAn error occurred while reading the configuration file: %1%n%nLoading the server with default settings.|The configuration file could not be read or is not valid. This error might result from an XML syntax error in the file.|
## <a name="other-messages"></a>Other Messages
|Event ID|Symbolic Name|Text|Notes|
|--------------|-------------------|----------|-----------|
|336|DTS_MSG_SERVER_STOPPING_PACKAGE|[!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssIS](../../includes/ssis-md.md)] service: stopping a running package.%nPackage instance ID: %1%nPackage ID: %2%nPackage name: %3%nPackage description: %4%nPackage|The service is trying to stop a running package. You can monitor and stop running packages in [!INCLUDE[ssManStudio](../../includes/ssmanstudio-md.md)]. For information about how to manage packages in [!INCLUDE[ssManStudio](../../includes/ssmanstudio-md.md)], see [Package Management (SSIS Service)](package-management-ssis-service.md).|
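To inspect these entries on a server, one option is to query the Application log for the SQLISService event source described above (a sketch; assumes Windows PowerShell 3.0 or later):

```powershell
# List the 20 most recent Integration Services service events.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'SQLISService' } -MaxEvents 20 |
    Select-Object TimeCreated, Id, Message
```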
## <a name="related-tasks"></a>Related Tasks
  For information about how to view log entries, see [View Log Entries in the Log Events Window](../view-log-entries-in-the-log-events-window.md).
## <a name="see-also"></a>See Also
  [Events Logged by an Integration Services Package](../performance/events-logged-by-an-integration-services-package.md)
---
description:
manager: carolz
ms.topic: article
author: jpjofre
ms.prod: powershell
keywords: powershell,cmdlet,gallery
ms.date: 2016-10-14
contributor: manikb
title: psget_get psrepository
ms.technology: powershell
ms.openlocfilehash: b1d5172232f0c2916382b6c35093a238f6b2cb4d
ms.sourcegitcommit: c732e3ee6d2e0e9cd8c40105d6fbfd4d207b730d
ms.translationtype: HT
ms.contentlocale: es-ES
---
# <a name="get-psrepository"></a>Get-PSRepository
Gets the registered repositories on a computer.
## <a name="description"></a>Description
The Get-PSRepository cmdlet gets the PowerShell module repositories that are registered for the current user on a computer.
For each registered repository, Get-PSRepository returns a PSRepository object that can optionally be piped to Unregister-PSRepository to unregister a registered repository.
## <a name="cmdlet-syntax"></a>Cmdlet Syntax
```powershell
Get-Command -Name Get-PSRepository -Module PowerShellGet -Syntax
```
## <a name="cmdlet-online-help-reference"></a>Cmdlet Online Help Reference
[Get-PSRepository](http://go.microsoft.com/fwlink/?LinkID=517127)
## <a name="example-commands"></a>Example Commands
```powershell
# Properties of Get-PSRepository returned object
Get-PSRepository PSGallery | Format-List * -Force
Name : PSGallery
SourceLocation : https://www.powershellgallery.com/api/v2/
Trusted : False
Registered : True
InstallationPolicy : Untrusted
PackageManagementProvider : NuGet
PublishLocation : https://www.powershellgallery.com/api/v2/package/
ScriptSourceLocation : https://www.powershellgallery.com/api/v2/items/psscript/
ScriptPublishLocation : https://www.powershellgallery.com/api/v2/package/
ProviderOptions : {}
# Get all registered repositories
Get-PSRepository
# Get a specific registered repository
Get-PSRepository PSGallery
Name InstallationPolicy SourceLocation
---- ------------------ --------------
PSGallery Untrusted https://www.powershellgallery.com/api/v2/
# Get registered repository with wildcards
Get-PSRepository *Gallery*
```
---
permalink: /1.23/networking/v1/networkPolicyIngressRule/
---
# networking.v1.networkPolicyIngressRule
"NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from."
## Index
* [`fn withFrom(from)`](#fn-withfrom)
* [`fn withFromMixin(from)`](#fn-withfrommixin)
* [`fn withPorts(ports)`](#fn-withports)
* [`fn withPortsMixin(ports)`](#fn-withportsmixin)
## Fields
### fn withFrom
```ts
withFrom(from)
```
"List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list."
### fn withFromMixin
```ts
withFromMixin(from)
```
"List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list."
**Note:** This function appends passed data to existing values
### fn withPorts
```ts
withPorts(ports)
```
"List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list."
### fn withPortsMixin
```ts
withPortsMixin(ports)
```
"List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list."
**Note:** This function appends passed data to existing values
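A hypothetical usage sketch combining the setters documented above (the import path follows the library's usual `main.libsonnet` convention; the selector and port values are illustrative):

```jsonnet
local k = import 'github.com/jsonnet-libs/k8s-libsonnet/1.23/main.libsonnet';
local rule = k.networking.v1.networkPolicyIngressRule;

// Allow TCP/8080 from pods in namespaces labelled team=web.
rule.withFrom([{ namespaceSelector: { matchLabels: { team: 'web' } } }])
+ rule.withPorts([{ port: 8080, protocol: 'TCP' }])
```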
# New feature flow
1. Start a command which
* starts the backend docker container
* starts the local server running on the host machine
* maybe accepts flags for syncing other projects for feature and step definition files
2. User runs `yarn test:create` to create a new test file
* this drops them into the qa-wolf REPL
* they can then decide what to do -- record, or do something else
3. User can manually edit the feature file with jest and cucumber as they want
4. User can run `yarn test:run` to run the test
# TODO
[] start task
[x] starts the local server running on the host machine
* the local server, starting the browser, gets the websocket-hash to the browser, passed to the `docker-compose.yml` file
[x] starts the backend docker container
[] accepts flags for syncing other projects for feature and step definition files
* `values.defaults.yml` we create with defaults
* `values.yml` file can be configured with the environment variable overrides (or the user can pass these inline at the terminal).
[] create test
* this drops them into the qa-wolf REPL for creating
* they can then decide what to do using that REPL -- record, or do something else
* parameters
* context: name of test
* url: url that test will be run on (could use default env)
* template: cucumber vs qa-wolf
* ideally eventually we can abstract this out and won't need separate templates
[] edit test
* this drops them into the qa-wolf REPL for editing
* they can then decide what to do using that REPL -- record, or do something else
* parameters
* context?: name of test (if not provided, we could ask)
[] run test
* runs the cucumber test-runner
* need this for feature files
* could ALSO run the qa-wolf test-runner, but this can't run features
* runs the tests
* Environment variables / start-command args
1. Path to the external project
2. Browser(s) to test on
3. Headless mode
4. Dimensions of browser
5. Playwright device list
6. Path to the custom value yml file
7. test-runner (cucumber vs qa-wolf)
* also could let user define the custom docker-compose file that docker compose runs with
* when running tests, looks at the "env" argument (e.g. env=ci, env=mac-ci, env=win-ci), which it uses to determine which `values.<env>.yml` to use
* then each of these can define any environment variables they want
* parameters
* context?: name of single test file to test (if not provided, we run all the tests)
* browser?: uses env var by default, otherwise uses this browser,
* this could also be defined in the `values.yml`, following this pattern:
* BROWSER_<env>
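The `BROWSER_<env>` lookup sketched above could be implemented roughly like this in Python (the function name and the `chromium` fallback are illustrative assumptions, not part of keg-herkin):

```python
import os

def resolve_browser(env_name, default="chromium"):
    # Try the env-specific override first (e.g. BROWSER_MAC_CI for env=mac-ci),
    # then a plain BROWSER variable, then fall back to a default.
    candidates = ("BROWSER_" + env_name.upper().replace("-", "_"), "BROWSER")
    for key in candidates:
        value = os.environ.get(key)
        if value:
            return value
    return default
```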
[] create our qa-wolf template
* import/integrate with the "expect" package (ideally import jest's expect package directly, so that we have control over that dependency)
* [some existing work done on this](tests/wolf/test.template.js)
* import/integrate with the cucumber exports (Given/When/Then)
* investigate/research: there is a lot of jest/qa-wolf initialization code that takes up a lot of the template. It would be nice to just wrap these into a function that we import, OR to pass our tests into that file to be run there.
[] setup the test results reporter
* ideally, just copy over the cucumber-html-reporter work from `keg-regulator`
[] we need configuration for setup for cucumber
* probably can copy from `keg-regulator`
* just ignore the selenium stuff
[] github actions for running this in a ci environment
- a reusable action that can be added to any repo
- when this action is added, it will use kegherkin to run the tests in that repo
- makes this super simple to setup on any "test" repo or other repo
[] migrate repo from lancetipton to @simpleviewinc org
[] stop task
[] investigate using keg-cli as a node_module to reuse tasks like start/stop
# Done?
[ x ] it needs a server that runs on the host machine, not in docker container, for launching the browser
TODO:
* Integrate parkin library from NPM
* Fix Definitions editors
* Have the definition editor show based on the selected step ( maybe? )
* Add saving and deleting test files
* Needed for features / steps, unit, and waypoint
* Add web-socket
* Add UI for running tests from backend server
* Add browser tab manager
* Open test site in new browser tab
* Run tests in browser tab for the opened site
* Investigate injecting messenger into the opened browser tab | 45.626263 | 234 | 0.729245 | eng_Latn | 0.998523 |
53e226ef3bda8ab652c89b4a621eee5c80e9e3a5 | 1,877 | md | Markdown | leetcode/00800-00899/00868_binary-gap/README.md | geekhall/algorithms | 7dfab1e952e4b1b3ae63ec1393fd481b8bf4af86 | [
"MIT"
] | null | null | null | leetcode/00800-00899/00868_binary-gap/README.md | geekhall/algorithms | 7dfab1e952e4b1b3ae63ec1393fd481b8bf4af86 | [
"MIT"
] | null | null | null | leetcode/00800-00899/00868_binary-gap/README.md | geekhall/algorithms | 7dfab1e952e4b1b3ae63ec1393fd481b8bf4af86 | [
"MIT"
] | null | null | null | # 00868. Binary Gap
_Read this in other languages:_
[_简体中文_](README.zh-CN.md)
<p>Given a positive integer <code>n</code>, find and return <em>the <strong>longest distance</strong> between any two <strong>adjacent</strong> </em><code>1</code><em>'s in the binary representation of </em><code>n</code><em>. If there are no two adjacent </em><code>1</code><em>'s, return </em><code>0</code><em>.</em></p>
<p>Two <code>1</code>'s are <strong>adjacent</strong> if there are only <code>0</code>'s separating them (possibly no <code>0</code>'s). The <b>distance</b> between two <code>1</code>'s is the absolute difference between their bit positions. For example, the two <code>1</code>'s in <code>"1001"</code> have a distance of 3.</p>
<p> </p>
<p><strong>Example 1:</strong></p>
<pre>
<strong>Input:</strong> n = 22
<strong>Output:</strong> 2
<strong>Explanation:</strong> 22 in binary is "10110".
The first adjacent pair of 1's is "<u>1</u>0<u>1</u>10" with a distance of 2.
The second adjacent pair of 1's is "10<u>11</u>0" with a distance of 1.
The answer is the largest of these two distances, which is 2.
Note that "<u>1</u>01<u>1</u>0" is not a valid pair since there is a 1 separating the two 1's underlined.
</pre>
<p><strong>Example 2:</strong></p>
<pre>
<strong>Input:</strong> n = 8
<strong>Output:</strong> 0
<strong>Explanation:</strong> 8 in binary is "1000".
There are not any adjacent pairs of 1's in the binary representation of 8, so we return 0.
</pre>
<p><strong>Example 3:</strong></p>
<pre>
<strong>Input:</strong> n = 5
<strong>Output:</strong> 2
<strong>Explanation:</strong> 5 in binary is "101".
</pre>
<p> </p>
<p><strong>Constraints:</strong></p>
<ul>
<li><code>1 <= n <= 10<sup>9</sup></code></li>
</ul>
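The problem statement above does not prescribe a language; a minimal Python sketch of one standard approach (scan the bits, tracking the position of the previous 1) looks like:

```python
def binary_gap(n: int) -> int:
    """Longest distance between two adjacent 1's in the binary form of n."""
    best = 0
    last = None  # bit position of the most recently seen 1
    pos = 0
    while n:
        if n & 1:
            if last is not None:
                best = max(best, pos - last)
            last = pos
        n >>= 1
        pos += 1
    return best

print(binary_gap(22))  # → 2
```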
| 40.804348 | 358 | 0.673948 | eng_Latn | 0.937193 |
53e24d114f2fedb13cf129ba2ca6fcfaf3dd200d | 152 | md | Markdown | Models/Electricity_Market/Taxes/V001/README.md | schmocker/Pyjamas | 52a72d6e8b915f77a2194d4e7d53c46d0ec28c17 | [
"MIT"
] | 2 | 2018-05-31T15:02:08.000Z | 2018-07-11T11:02:44.000Z | Models/Electricity_Market/Taxes/V001/README.md | schmocker/Pyjamas | 52a72d6e8b915f77a2194d4e7d53c46d0ec28c17 | [
"MIT"
] | null | null | null | Models/Electricity_Market/Taxes/V001/README.md | schmocker/Pyjamas | 52a72d6e8b915f77a2194d4e7d53c46d0ec28c17 | [
"MIT"
] | null | null | null | # Taxes
## Inputs
No input
## Outputs
The output of this model is:
* taxes
## Properties
The property of this model is:
* taxes
## Remarks
... | 6.909091 | 30 | 0.644737 | eng_Latn | 0.985883 |
53e2de02e3b84fec61c0a91ed70642d4f379a55a | 223 | md | Markdown | _posts/2013-03-05-sound-46.md | KyomaHooin/kyomahooin.github.io | cef46ddbf851c4b99698521414ca6eebe90c053b | [
"MIT"
] | null | null | null | _posts/2013-03-05-sound-46.md | KyomaHooin/kyomahooin.github.io | cef46ddbf851c4b99698521414ca6eebe90c053b | [
"MIT"
] | null | null | null | _posts/2013-03-05-sound-46.md | KyomaHooin/kyomahooin.github.io | cef46ddbf851c4b99698521414ca6eebe90c053b | [
"MIT"
] | null | null | null | ---
title: Synkro & Indigo - Guidance
layout: post
tags: sound
date: 2013-03-05 21:42:00
---
<iframe width="603" height="452" src="https://www.youtube.com/embed/M2Ni5vfvsx8" frameborder="0" allowfullscreen="true"></iframe>
| 27.875 | 129 | 0.713004 | eng_Latn | 0.142316 |
53e329de2fa43ee3689a929b1c6d872f3a5e53de | 12,277 | md | Markdown | articles/virtual-machines/linux/run-command-managed.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:16.000Z | 2021-03-12T23:37:16.000Z | articles/virtual-machines/linux/run-command-managed.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/run-command-managed.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Run scripts in a Linux VM in Azure by using managed Run Commands (preview)
description: This topic describes how to run scripts within an Azure Linux virtual machine by using the updated managed Run Command feature.
services: automation
ms.service: virtual-machines
ms.collection: linux
author: cynthn
ms.author: cynthn
ms.date: 10/27/2021
ms.topic: how-to
ms.reviewer: jushiman
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 71c08740161fd3df80757c6d86a10dec3e917198
ms.sourcegitcommit: e41827d894a4aa12cbff62c51393dfc236297e10
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/04/2021
ms.locfileid: "131554818"
---
# <a name="preview-run-scripts-in-your-linux-vm-by-using-managed-run-commands"></a>Preview: Run scripts in your Linux VM by using managed Run Commands
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
> [!IMPORTANT]
> Managed **Run Command** is currently in public preview.
> This preview version is provided without a service-level agreement and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The Run Command feature uses the virtual machine (VM) agent to run scripts within an Azure Linux VM. You can use these scripts for general machine or application management. They can help you quickly diagnose and remediate VM access and network issues and get the VM back into a good state.
The *updated* managed Run Command uses the same VM agent channel to execute scripts and provides the following improvements over the [original action-oriented Run Command](run-command.md):
- Support for the updated Run Command through the ARM deployment template
- Parallel execution of multiple scripts
- Sequential execution of scripts
- A RunCommand script can be canceled
- User-specified script timeout
- Support for long-running (hours/days) scripts
- Passing secrets (parameters, passwords) securely
## <a name="register-for-preview"></a>Register for the preview
You must register your subscription to use the managed Run Command feature during the public preview. Go to [Set up preview features in your Azure subscription](../../azure-resource-manager/management/preview-features.md) for registration instructions, and use the feature name `RunCommandPreview`.
## <a name="azure-cli"></a>Azure CLI
The following examples use [az vm run-command](/cli/azure/vm/run-command) to run a shell script on an Azure Linux VM.
### <a name="execute-a-script-with-the-vm"></a>Execute a script with the VM
This command delivers the script to the VM, executes it, and returns the captured output.
```azurecli-interactive
az vm run-command create --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --script "echo Hello World!"
```
### <a name="list-all-deployed-runcommand-resources-on-a-vm"></a>List all deployed RunCommand resources on a VM
This command returns a full list of previously deployed Run Commands along with their properties.
```azurecli-interactive
az vm run-command list --name "myVM" --resource-group "myRG"
```
### <a name="get-execution-status-and-results"></a>Get execution status and results
This command retrieves the current execution progress, including the latest output, start and end time, exit code, and terminal state of the execution.
```azurecli-interactive
az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --expand
```
### <a name="delete-runcommand-resource-from-the-vm"></a>Delete the RunCommand resource from the VM
Remove the RunCommand resource previously deployed on the VM. If the script execution is still in progress, it will be terminated.
```azurecli-interactive
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group "myRG"
```
## <a name="powershell"></a>PowerShell
### <a name="execute-a-script-with-the-vm"></a>Execute a script with the VM
This command delivers the script to the VM, executes it, and returns the captured output.
```powershell-interactive
Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" -Script "echo Hello World!"
```
### <a name="list-all-deployed-runcommand-resources-on-a-vm"></a>List all deployed RunCommand resources on a VM
This command returns a full list of previously deployed Run Commands along with their properties.
```powershell-interactive
Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
```
### <a name="get-execution-status-and-results"></a>Get execution status and results
This command retrieves the current execution progress, including the latest output, start and end time, exit code, and terminal state of the execution.
```powershell-interactive
Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" -Status
```
### <a name="delete-runcommand-resource-from-the-vm"></a>Delete the RunCommand resource from the VM
Remove the RunCommand resource previously deployed on the VM. If the script execution is still in progress, it will be terminated.
```powershell-interactive
Remove-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName"
```
## <a name="rest-api"></a>REST API
To deploy a new Run Command, execute a PUT on the VM directly and specify a unique name for the Run Command instance.
```rest
PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?api-version=2019-12-01
```
```json
{
"location": "<location>",
"properties": {
"source": {
"script": "echo Hello World",
"scriptUri": "<URI>",
"commandId": "<Id>"
},
"parameters": [
{
"name": "param1",
"value": "value1"
},
{
"name": "param2",
"value": "value2"
}
],
"protectedParameters": [
{
"name": "secret1",
"value": "value1"
},
{
"name": "secret2",
"value": "value2"
}
],
"runAsUser": "userName",
"runAsPassword": "userPassword",
"timeoutInSeconds": 3600,
"outputBlobUri": "<URI>",
"errorBlobUri": "<URI>"
}
}
```
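As a hedged illustration, the PUT above can be issued from Python with the `requests` library; the management endpoint host, resource names, and token acquisition shown here are assumptions for the sketch, not part of the API reference above:

```python
import json

API_VERSION = "2019-12-01"

def build_runcommand_url(subscription_id, resource_group, vm_name, run_command_name):
    # Mirrors the PUT path shown above, against the public Azure management endpoint.
    return (
        "https://management.azure.com/subscriptions/{}/resourceGroups/{}"
        "/providers/Microsoft.Compute/virtualMachines/{}/runcommands/{}"
        "?api-version={}"
    ).format(subscription_id, resource_group, vm_name, run_command_name, API_VERSION)

def create_run_command(token, subscription_id, resource_group, vm_name,
                       run_command_name, script, location):
    import requests  # assumed to be installed
    body = {"location": location,
            "properties": {"source": {"script": script}}}
    resp = requests.put(
        build_runcommand_url(subscription_id, resource_group, vm_name, run_command_name),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()
```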
### <a name="notes"></a>Notes
- You can provide an inline script, a script URI, or a built-in script [command ID](run-command.md#available-commands) as the input source.
- Only one type of source input is supported for a single command execution.
- Run Command supports writing output to Storage blobs, which can be used to store large script outputs.
- Run Command supports writing error output to Storage blobs.
### <a name="list-running-instances-of-run-command-on-a-vm"></a>List running instances of Run Command on a VM
```rest
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands?api-version=2019-12-01
```
### <a name="get-output-details-for-a-specific-run-command-deployment"></a>Get output details for a specific Run Command deployment
```rest
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?$expand=instanceView&api-version=2019-12-01
```
### <a name="cancel-a-specific-run-command-deployment"></a>Cancel a specific Run Command deployment
To cancel a running deployment, you can PUT or PATCH the running Run Command instance and specify a blank script in the request body. This will cancel the ongoing execution.
You can also delete the Run Command instance.
```rest
DELETE /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/runcommands/<runCommandName>?api-version=2019-12-01
```
### <a name="deploy-scripts-in-an-ordered-sequence"></a>Deploy scripts in an ordered sequence
To deploy scripts sequentially, use a deployment template and specify a `dependsOn` relationship between sequential scripts.
```json
{
"type": "Microsoft.Compute/virtualMachines/runCommands",
"name": "secondRunCommand",
"apiVersion": "2019-12-01",
"location": "[parameters('location')]",
"dependsOn": <full resourceID of the previous other Run Command>,
"properties": {
"source": {
"script": "echo Hello World!"
},
"timeoutInSeconds": 60
}
}
```
### <a name="execute-multiple-run-commands-sequentially"></a>Execute multiple Run Commands sequentially
By default, if you deploy multiple RunCommand resources by using a deployment template, they are executed simultaneously on the VM. If the scripts have a dependency and a preferred order of execution, you can use the `dependsOn` property to make them run sequentially.
In this example, **secondRunCommand** executes after **firstRunCommand**.
```json
{
"$schema":"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion":"1.0.0.0",
"resources":[
{
"type":"Microsoft.Compute/virtualMachines/runCommands",
"name":"[concat(parameters('vmName'),'/firstRunCommand')]",
"apiVersion":"2019-12-01",
"location":"[parameters('location')]",
"dependsOn":[
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
],
"properties":{
"source":{
"script":"echo First: Hello World!"
},
"parameters":[
{
"name":"param1",
"value":"value1"
},
{
"name":"param2",
"value":"value2"
}
],
"timeoutInSeconds":20
}
},
{
"type":"Microsoft.Compute/virtualMachines/runCommands",
"name":"[concat(parameters('vmName'),'/secondRunCommand')]",
"apiVersion":"2019-12-01",
"location":"[parameters('location')]",
"dependsOn":[
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'),'runcommands/firstRunCommand')]"
],
"properties":{
"source":{
"scriptUrl":"http://github.com/myscript.ps1"
},
"timeoutInSeconds":60
}
}
]
}
```
## <a name="next-steps"></a>Next steps
To learn about other ways to run scripts and commands remotely in your VM, see [Run scripts in your Linux VM](run-scripts-in-vm.md).
| 46.680608 | 417 | 0.719313 | spa_Latn | 0.859789 |
53e4395e87cf9e5815b05d944c1ba48cad147a4c | 92 | md | Markdown | README.md | satuelisa/CourseDescriptions | f80753a1e2ae1e1c7f23550b9d56e5184200aee7 | [
"Apache-2.0"
] | 2 | 2022-01-11T20:23:16.000Z | 2022-01-18T17:53:48.000Z | README.md | satuelisa/CourseDescriptions | f80753a1e2ae1e1c7f23550b9d56e5184200aee7 | [
"Apache-2.0"
] | null | null | null | README.md | satuelisa/CourseDescriptions | f80753a1e2ae1e1c7f23550b9d56e5184200aee7 | [
"Apache-2.0"
] | null | null | null | # CourseDescriptions
LaTeX format for preparing the analytic course programs (syllabi) of PISIS
| 30.666667 | 70 | 0.836957 | spa_Latn | 0.90464 |
53e43c60bf857274b67f3a433785fe2e2ab95a4c | 2,290 | md | Markdown | public/schemas/adresses-csv.md | oncletom/adresse.data.gouv.fr | 657c29be26583692c100c16a20788643bec7bb34 | [
"MIT"
] | null | null | null | public/schemas/adresses-csv.md | oncletom/adresse.data.gouv.fr | 657c29be26583692c100c16a20788643bec7bb34 | [
"MIT"
] | null | null | null | public/schemas/adresses-csv.md | oncletom/adresse.data.gouv.fr | 657c29be26583692c100c16a20788643bec7bb34 | [
"MIT"
] | null | null | null | # Schema of the “Adresses” data in CSV format
The semicolon separator and UTF-8 encoding are used.
This format is **largely compatible** with the [old CSV format](ban-2015.md) that was used to distribute BAN data from 2015 to the end of 2018.
| Field name | Description | Changes |
| --- | --- | --- |
| `id` | Interoperability key as defined in the [BAL 1.1 exchange format specification](https://cms.geobretagne.fr/sites/default/files/documents/aitf-sig-topo-adresse-fichier-echange-simplifie-v_1.1_0.pdf). When no FANTOIR code is known, a transitional code made of 6 alphanumeric characters is generated. | Switch from the BDUNI identifier to the interoperability key |
| `id_fantoir` | FANTOIR identifier of the street, if any | The identifier is prefixed with the FANTOIR attachment commune (current commune or former commune) |
| `numero` | Number of the address in the street | |
| `rep` | Repetition index associated with the number (for example `bis`, `a`…) | |
| `nom_voie` | Street name in accented lowercase | The label is systematically improved |
| `code_postal` | Postal code of the street's distribution office | |
| `code_insee` | INSEE code of the current commune, based on the Official Geographic Code in force | |
| `nom_commune` | Official name of the current commune | |
| `code_insee_ancienne_commune` | INSEE code of the former commune on which the address is located | New field |
| `nom_ancienne_commune` | Name of the former commune on which the address is located | New field |
| `x` | Cartographic coordinate in the legal projection | |
| `y` | Cartographic coordinate in the legal projection | |
| `lon` | Longitude in WGS-84 | |
| `lat` | Latitude in WGS-84 | |
| `alias` | _Empty_ | Set to empty |
| `nom_ld` | _Empty_ | Set to empty |
| `libelle_acheminement` | Name of the delivery commune | |
| `nom_afnor` | Street name normalized according to the postal standard | |
| `source_position` | Source of the geographic position. Possible values: (`commune`, `cadastre`, `arcep`, `laposte`, `insee`, `sdis`, `inconnue`) | New field |
| `source_nom_voie` | Source of the street name. Possible values: (`commune`, `cadastre`, `arcep`, `laposte`, `insee`, `sdis`, `inconnue`) | New field |
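For illustration, this CSV can be read from Python with the standard `csv` module; the sample row below is invented for the example, and only the `;` separator and UTF-8 encoding come from the specification above:

```python
import csv
import io

# Hypothetical excerpt with a subset of the columns described above.
SAMPLE = (
    "id;numero;rep;nom_voie;code_postal;code_insee;lon;lat\n"
    "59350_0040_00001;1;bis;rue des Lilas;59000;59350;3.0573;50.6279\n"
)

def read_adresses(fileobj):
    # The file uses ';' as separator; open real files with encoding="utf-8".
    return list(csv.DictReader(fileobj, delimiter=";"))

rows = read_adresses(io.StringIO(SAMPLE))
print(rows[0]["nom_voie"])  # → rue des Lilas
```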
| 78.965517 | 394 | 0.726638 | fra_Latn | 0.970274 |
53e4957894bd0b1f3f9e2922aaae1bf49bcbd25f | 58 | md | Markdown | README.md | ZhaoLoON/zhaoloon.github.io | 9d0144bc4a112bd5af8678bff941f91ef7eb6a3b | [
"Apache-2.0"
] | null | null | null | README.md | ZhaoLoON/zhaoloon.github.io | 9d0144bc4a112bd5af8678bff941f91ef7eb6a3b | [
"Apache-2.0"
] | null | null | null | README.md | ZhaoLoON/zhaoloon.github.io | 9d0144bc4a112bd5af8678bff941f91ef7eb6a3b | [
"Apache-2.0"
] | null | null | null | ##ZhaoLoON的博客
博客地址:[ZhaoLoON的博客](http://www.zhaoloon.com)
| 19.333333 | 43 | 0.758621 | yue_Hant | 0.8017 |
53e5a848ac8c32a04560274a1e055b336679ee49 | 132 | md | Markdown | README.md | XavierRLX/Xavierburger | 9d7d0ea735afc4b5392c5f54294906363bb0970d | [
"MIT"
] | null | null | null | README.md | XavierRLX/Xavierburger | 9d7d0ea735afc4b5392c5f54294906363bb0970d | [
"MIT"
] | null | null | null | README.md | XavierRLX/Xavierburger | 9d7d0ea735afc4b5392c5f54294906363bb0970d | [
"MIT"
] | null | null | null | # Xavierburger
Site responsivo, funcionando por enquanto apenas com html e css.
Links ativos : Cadastro, login e Trabalhe Conosco
| 33 | 65 | 0.795455 | por_Latn | 0.995623 |
53e5b1278d47dbeb3998fabd0a836d7b5a834a15 | 4,839 | md | Markdown | README.md | jialigit/nusa | c3dea56dadaed6be5d6ebeb3ba50f4e0dfa65f10 | [
"MIT"
] | 2 | 2019-12-03T08:32:07.000Z | 2021-04-01T01:23:39.000Z | README.md | jialigit/nusa | c3dea56dadaed6be5d6ebeb3ba50f4e0dfa65f10 | [
"MIT"
] | null | null | null | README.md | jialigit/nusa | c3dea56dadaed6be5d6ebeb3ba50f4e0dfa65f10 | [
"MIT"
] | null | null | null | # NuSA
A Python library for structural analysis using the finite element method, designed for academic purposes.
## Versions
* **0.1.0.dev1** (Initial pre-alpha release)
* Developer version (This repository): **0.2.0**
## Requirements
* NumPy
* Tabulate
* Matplotlib
* [GMSH](http://gmsh.info/)
## Installation
From PyPI (0.1.0 version):
```
$ pip install nusa
```
or from this repo (developer version):
```
$ pip install git+https://github.com/JorgeDeLosSantos/nusa.git
```
## Elements type supported
* Spring
* Bar
* Truss
* Beam
* Linear triangle (currently, only plane stress)
## Mini-Demos
### Linear Triangle Element
```python
from nusa import *
import nusa.mesh as nmsh
md = nmsh.Modeler()
a = md.add_rectangle((0,0),(1,1), esize=0.1)
b = md.add_circle((0.5,0.5), 0.1, esize=0.05)
md.substract_surfaces(a,b)
nc, ec = md.generate_mesh()
x,y = nc[:,0], nc[:,1]
nodos = []
elementos = []
for k,nd in enumerate(nc):
cn = Node((x[k],y[k]))
nodos.append(cn)
for k,elm in enumerate(ec):
i,j,m = int(elm[0]),int(elm[1]),int(elm[2])
ni,nj,nm = nodos[i],nodos[j],nodos[m]
ce = LinearTriangle((ni,nj,nm),200e9,0.3,0.1)
elementos.append(ce)
m = LinearTriangleModel()
for node in nodos: m.add_node(node)
for elm in elementos: m.add_element(elm)
# Boundary conditions and loads
minx, maxx = min(x), max(x)
miny, maxy = min(y), max(y)
for node in nodos:
if node.x == minx:
m.add_constraint(node, ux=0, uy=0)
if node.x == maxx:
m.add_force(node, (10e3,0))
m.plot_model()
m.solve()
m.plot_nsol("seqv")
```


### Spring element
**Example 01**. For the spring assemblage with arbitrarily numbered nodes shown in the figure
obtain (a) the global stiffness matrix, (b) the displacements of nodes 3 and 4, (c) the
reaction forces at nodes 1 and 2, and (d) the forces in each spring. A force of 5000 lb
is applied at node 4 in the `x` direction. The spring constants are given in the figure.
Nodes 1 and 2 are fixed.

```python
# -*- coding: utf-8 -*-
# NuSA Demo
from nusa import *
def test1():
"""
Logan, D. (2007). A first course in the finite element analysis.
Example 2.1, pp. 42.
"""
P = 5000.0
# Model
m1 = SpringModel("2D Model")
# Nodes
n1 = Node((0,0))
n2 = Node((0,0))
n3 = Node((0,0))
n4 = Node((0,0))
# Elements
e1 = Spring((n1,n3),1000.0)
e2 = Spring((n3,n4),2000.0)
e3 = Spring((n4,n2),3000.0)
# Adding elements and nodes to the model
for nd in (n1,n2,n3,n4):
m1.add_node(nd)
for el in (e1,e2,e3):
m1.add_element(el)
m1.add_force(n4, (P,))
m1.add_constraint(n1, ux=0)
m1.add_constraint(n2, ux=0)
m1.solve()
if __name__ == '__main__':
test1()
```
### Beam element
**Example 02**. For the beam and loading shown, determine the deflection at point C.
Use E = 29 x 10<sup>6</sup> psi.

```python
"""
Beer & Johnston. (2012) Mechanics of materials.
Problem 9.13 , pp. 568.
"""
from nusa import *
# Input data
E = 29e6
I = 291 # W14x30
P = 35e3
L1 = 5*12 # in
L2 = 10*12 #in
# Model
m1 = BeamModel("Beam Model")
# Nodes
n1 = Node((0,0))
n2 = Node((L1,0))
n3 = Node((L1+L2,0))
# Elements
e1 = Beam((n1,n2),E,I,L1)
e2 = Beam((n2,n3),E,I,L2)
# Add elements
for nd in (n1,n2,n3): m1.add_node(nd)
for el in (e1,e2): m1.add_element(el)
m1.add_force(n2, (-P,))
m1.add_constraint(n1, ux=0, uy=0) # fixed
m1.add_constraint(n3, uy=0) # fixed
m1.solve() # Solve model
# Displacement at C point
print(n2.uy)
```
## Documentation
To build documentation based on docstrings execute the `docs/toHTML.py` script. (Sphinx required)
Tutorials (Jupyter notebooks):
Spanish version (in progress):
* [Introducción a NuSA](docs/nusa-info/es/intro-nusa.ipynb)
* [Elemento Spring](docs/nusa-info/es/spring-element.ipynb)
* [Elemento Bar](docs/nusa-info/es/bar-element.ipynb)
* [Elemento Beam](docs/nusa-info/es/beam-element.ipynb)
* [Elemento Truss](docs/nusa-info/es/truss-element.ipynb)
* [Elemento LinearTriangle](docs/nusa-info/es/linear-triangle-element.ipynb)
English version (TODO):
* [Introduction to NuSA](docs/nusa-info/en/intro-nusa.ipynb)
* [Spring element](docs/nusa-info/en/spring-element.ipynb)
* [Bar element](docs/nusa-info/en/bar-element.ipynb)
* [Beam element](docs/nusa-info/en/beam-element.ipynb)
* [Truss element](docs/nusa-info/en/truss-element.ipynb)
* [LinearTriangle element](docs/nusa-info/en/linear-triangle-element.ipynb)
## About...
```
Developer: Pedro Jorge De Los Santos
E-mail: [email protected]
Blog: numython.github.io // jorgedelossantos.github.io
``` | 22.197248 | 105 | 0.663154 | eng_Latn | 0.36151 |
53e61cb5badfefd6df48b272526ac9289d50086b | 33,557 | md | Markdown | README.md | jacksontj/salt-pack | d6285d1f4ca6023587f65c3545afbbcf70343465 | [
"Apache-2.0"
] | 1 | 2021-12-20T23:52:10.000Z | 2021-12-20T23:52:10.000Z | README.md | jacksontj/salt-pack | d6285d1f4ca6023587f65c3545afbbcf70343465 | [
"Apache-2.0"
] | null | null | null | README.md | jacksontj/salt-pack | d6285d1f4ca6023587f65c3545afbbcf70343465 | [
"Apache-2.0"
] | 1 | 2018-12-10T12:20:31.000Z | 2018-12-10T12:20:31.000Z | # Salt Package Builder (salt-pack)
Salt-pack is an open-source package builder for the most commonly used Linux platforms, for example the Redhat/CentOS and Debian/Ubuntu families, utilizing SaltStack states and execution modules to build Salt and a specified set of dependencies, from which a platform-specific repository can be built.
Salt-pack relies on SaltStack’s Master-Minion functionality to build the desired packages and repository, and can install the required tools to build the packages and repository for that platform.
The Salt state file which drives the building process is found in salt/states/pkgbuild.py, which provides a typical salt virtual interface to perform the build process. The virtual interface is satisfied by execution modules for the appropriate supported platform, for example :
Redhat / CentOS
: salt/modules/rpmbuild.py
Debian / Ubuntu
: salt/modules/debbuild.py
The Redhat/CentOS and Debian/Ubuntu platform families build process are internally specified differently, however the external build commands are the same, with keyword arguments for platform specification, for example: rhel6, ubuntu1204.
The salt-pack project is maintained in GitHub at https://github.com/saltstack/salt-pack.git.
# Overview
The building of packages is controlled by SLS state files for the various platforms, which in turn are at the leaf node of a directory tree describing the package name and version. For example:
```
file_roots / pkg / <pkg name> / <version> / OS /
init.sls
spec
sources
```
where:
| File / Directory | Description |
|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| init.sls | Initialisation SLS file describing what is to be built. Note that this can include files obtained over the Internet. |
| spec | Directory containing file[s] describing the building of the package for this platform OS, e.g. rpm spec file for Redhat, dsc file or tarball for Debian/Ubuntu. |
| source | various source files to be used in building the package, for example: salt-2015.8.5.tar.gz. |
| Operating System(OS) | Description |
|----------------------|-------------------|
| rhel7 | Redhat 7 |
| rhel6 | Redhat 6 |
| rhel5                | Redhat 5          |
| debian8 | Debian 8 (jessie) |
| debian7 | Debian 7 (wheezy) |
| ubuntu1604 | Ubuntu 16.04 LTS |
| ubuntu1404 | Ubuntu 14.04 LTS |
| ubuntu1204 | Ubuntu 12.04 LTS |
For example:
`file_roots/pkg/salt/2015_8_8/rhel7/spec/salt.spec`
`file_roots/pkg/salt/2015_8_8/debian8/spec/salt_debian.tar.xz`
Currently the Redhat/CentOS and Debian/Ubuntu platforms are built differently internally. The Redhat/CentOS builds are driven by pillar data (pkgbuild.sls contained in pillar_roots) and make heavy use of Jinja macros to define the package to be built, the location of sources, and the output that should be produced when the build succeeds. The Debian/Ubuntu builds are driven only by the state files and their contents. The Debian/Ubuntu builds shall eventually also be driven by pillar data, similar to Redhat/CentOS, but to date there has been insufficient time to achieve this goal.
Packages can be built individually, or Salt's state.highstate can be used to build the salt package and all of its dependencies.
There are currently three highstate SLS files for the three main platforms. These files are as follows:
> `redhat_pkg.sls`
`debian_pkg.sls`
`ubuntu_pkg.sls`
Specific versions of these files for salt builds can be found in the directory `file_roots/versions/<salt version>/`, e.g. `file_roots/versions/2015_8_8/redhat_pkg.sls`, and can be specified on the command line, as shown in the examples below.
For each of the current operating system families (Redhat/CentOS, Debian and Ubuntu) the most current release is assumed by default, with older versions specified by use of command line pillar data. The default values can be overridden by using the build_release keyword.
| Platform | Default | Overrides |
|-----------------|------------|--------------|
| Redhat / CentOS | rhel7 | rhel6, rhel5 |
| Debian | debian8 | debian7 |
| Ubuntu | ubuntu1604 | ubuntu1404, ubuntu1204 |
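The default/override relationship in the table above can be sketched as a simple lookup. This is purely illustrative (the `resolve_build_release` helper is ours, not part of salt-pack); the platform and release names mirror the table:

```python
# Illustrative mapping of platform families to their default and
# overridable build_release values (mirrors the table above).
PLATFORM_RELEASES = {
    "redhat": {"default": "rhel7", "overrides": ["rhel6", "rhel5"]},
    "debian": {"default": "debian8", "overrides": ["debian7"]},
    "ubuntu": {"default": "ubuntu1604", "overrides": ["ubuntu1404", "ubuntu1204"]},
}

def resolve_build_release(platform, requested=None):
    """Return the release to build: the requested override if it is
    valid for the platform family, otherwise the platform default."""
    info = PLATFORM_RELEASES[platform]
    if requested is None:
        return info["default"]
    if requested == info["default"] or requested in info["overrides"]:
        return requested
    raise ValueError("unsupported build_release %r for %s" % (requested, platform))

print(resolve_build_release("ubuntu"))           # ubuntu1604
print(resolve_build_release("redhat", "rhel6"))  # rhel6
```

Passing `build_release` on the command line corresponds to the `requested` argument here; omitting it falls back to the default shown in the table.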
The pillar data to drive the build process for Redhat can be found in the following locations:
> `pillar_roots / pkgbuild.sls`
`pillar_roots / versions / <version> / pkgbuild.sls`
If the file `pillar_roots / pkgbuild.sls` has the field build_version set, then the file `pillar_roots / versions / <build_version> / pkgbuild.sls` is utilized in the building of Redhat packages.
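The versioned-pillar selection just described can be sketched as follows. This is a simplified illustration; the real logic lives in the state files, and the function name is ours:

```python
def pkgbuild_pillar_path(pillar, root="pillar_roots"):
    """Pick the versioned pkgbuild.sls when build_version is set in the
    pillar, otherwise fall back to the top-level pkgbuild.sls."""
    build_version = pillar.get("build_version")
    if build_version:
        return "%s/versions/%s/pkgbuild.sls" % (root, build_version)
    return "%s/pkgbuild.sls" % root

print(pkgbuild_pillar_path({"build_version": "2015_8_8"}))
# pillar_roots/versions/2015_8_8/pkgbuild.sls
print(pkgbuild_pillar_path({}))
# pillar_roots/pkgbuild.sls
```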
# Setup
The tools required to build salt and its dependencies for the various minions and their operating system/platform are handled by state files and macros, which can be found in the setup directory and its respective operating system/platform sub-directories.
For example, to install the required tools for Redhat 6, Debian 8 and Ubuntu 12 based minions, respectively:
> `salt rh6_minion state.sls setup.redhat.rhel6`
`salt jessie_minion state.sls setup.debian.debian8`
`salt ubuntu12_minion state.sls setup.ubuntu.ubuntu12`
The files used to install the tools for each platform are as follows:
### setup/base_map.jinja
Jinja macro file describing build destination, build user, and platform packages to use for highstate.
### setup/macros.jinja
Helper macro definitions utilized in building Salt and its dependencies from pillar data (provided by pkgbuild.sls). Currently utilized by Redhat/CentOS build state files.
## Redhat
### setup/redhat/map.jinja
Helper macro definitions for building Redhat 5, 6 and 7 releases
### setup/redhat/init.sls
Common Redhat initialization state files that install the relevant platform tools on the minion to build salt and its dependencies, for example: mock, rpmdevtools, createrepo, etc.
### setup/redhat/rhel7/init.sls
### setup/redhat/rhel6/init.sls
### setup/redhat/rhel5/init.sls
Initialization state files that install the relevant platform tools on the minion to build salt and its dependencies.
## Debian
### setup/debian/map.jinja
Helper macro definitions for building Debian 8 (jessie) and 7 (wheezy) releases.
### setup/debian/init.sls
Common Debian initialization state files that install the relevant platform tools on the minion to build salt and its dependencies, for example: build-essential, dh-make, pbuilder, debhelper, devscripts, etc.
### setup/debian/debian8/init.sls
### setup/debian/debian7/init.sls
Initialization state files that install the relevant platform tools on the minion to build salt and its dependencies, and that install apt-preferences, pbuilder hooks, additional repositories to access, etc.
## Ubuntu
### setup/ubuntu/map.jinja
Helper macro definitions for building Ubuntu releases.
### setup/ubuntu/init.sls
Common Ubuntu initialization state files that install the relevant platform tools on the minion to build salt and its dependencies, for example: build-essential, dh-make, pbuilder, debhelper, devscripts, gnupg, gnupg-agent, python-gnupg, etc.
### setup/ubuntu/ubuntu16/init.sls
### setup/ubuntu/ubuntu14/init.sls
### setup/ubuntu/ubuntu12/init.sls
Initialization state files that install the relevant platform tools on the minion to build salt and its dependencies, and that install apt-preferences, pbuilder hooks, additional repositories to access, etc.
# Command line Pillar overrides
The build and its build product can be controlled by various pillar data supplied on the command line.
The following are command line pillar data overrides available for controlling build releases, destinations, architectures, versions, etc. These keys and values are typically defined in base_map.jinja and each platform's map.jinja file. Note: the default for a platform is typically the newest release for that platform; for example, the current default for the Ubuntu platform is ubuntu1604 (previously ubuntu1404, until Ubuntu 16.04 LTS was released).
| Key | Values | Default | Description |
|---------------|-------------------------------|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
| build_dest | Any absolute path | /srv/pkgs | Path describing location to place the product of the build. |
| build_runas | Any user | Redhat / CentOS - builder | User to use when building - non-root on Redhat and CentOS platforms. |
| | | Debian / Ubuntu - root | Currently root on Debian and Ubuntu platforms (eventually shall allow for non-root building) |
| build_version | Any format not containing ‘.’ | none | Typically version of package with dot ( ‘.’ ) replaced by underscore ( ‘_’ ) to accommodate Salt parsing, for example: 2015_8_8 for 2015.8.8, 1_0_3 for 1.0.3 |
| build_release | rhel7, rhel6, rhel5 | rhel7 | Redhat / CentOS platforms |
| | debian8, debian7 | debian8 | Debian platforms |
| | ubuntu1604, ubuntu1404, ubuntu1204 | ubuntu1604 | Ubuntu platforms |
| build_arch | i386, x86_64 | x86_64 | Redhat / CentOS platforms |
| | amd64 | amd64 | Debian platforms |
| | amd64 | amd64 | Ubuntu platforms |
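The `build_version` convention (dots replaced by underscores to accommodate Salt parsing) can be expressed as a pair of tiny helpers; these are illustrative, not part of salt-pack:

```python
def to_build_version(version):
    """Convert a dotted package version to the underscore form used in
    salt-pack paths and state IDs (e.g. 2015.8.8 -> 2015_8_8)."""
    return version.replace(".", "_")

def from_build_version(build_version):
    """Inverse conversion, e.g. for display purposes."""
    return build_version.replace("_", ".")

print(to_build_version("2015.8.8"))  # 2015_8_8
print(from_build_version("1_0_3"))   # 1.0.3
```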
# Repo
The tools required to create repositories for salt and its dependencies for the various platforms are handled by state files and macros which can be found in the repo directory.
For example, to create a repository for Redhat 6, Debian 8 and Ubuntu 12 based minions, respectively:
> `salt rh6_minion state.sls repo.redhat.rhel6 pillar='{ "keyid" : "ABCDEF12", "build_release" : "rhel6" }'`
`salt jessie_minion state.sls repo.debian.debian8 pillar='{ "keyid" : "ABCDEF12" }'`
`salt wheezy_minion state.sls repo.debian.debian7 pillar='{ "keyid" : "ABCDEF12" , "build_release" : "debian7" }'`
`salt ubuntu12_minion state.sls repo.ubuntu.ubuntu12 pillar='{ "build_release" : "ubuntu1204" }'`
Where the keyid to sign the repository built is an example value `ABCDEF12`, and where the platform is other than the default, the build release is specified.
Signing of packages and repositories can now be performed automatically, utilizing public and private keys supplied via pillar data, with an optional passphrase supplied from pillar data on the command line. In the case of Debian / Ubuntu this is achieved by using gpg-agent to cache the key and passphrase, which are utilized when signing the packages and repository.
## Signing Process
1. The signing process utilizes gpg keys. The public and private keys are held in a pillar data file named `gpg_keys.sls` under pillar_roots; the ids and values shown in the example are significant. Contents as follows:
```yaml
gpg_pkg_priv_key: |
  -----BEGIN PGP PRIVATE KEY BLOCK-----
  Version: GnuPG v1

  lQO+BFciIfQBCADAPCtzx7I5Rl32escCMZsPzaEKWe7bIX1em4KCKkBoX47IG54b
  w82PCE8Y1jF/9Uk2m3RKVWp3YcLlc7Ap3gj6VO4ysvVz28UbnhPxsIkOlf2cq8qc
  .
  .
  OrHznKwgO4ndL2UCUpj/F0DWIZcCX0PpxlIotgWTJhgxUW2D+7eYsCZkM0TMhSJ2
  Ebe+8JCQTwqSXPRTzXmy/b5WXDeM79CkLWvuGpXFor76D+ECMRPv/rawukEcNptn
  R5OmgHqvydEnO4pWbn8JzQO9YX/Us0SMHBVzLC8eIi5ZIopzalvX
  =JvW8
  -----END PGP PRIVATE KEY BLOCK-----

gpg_pkg_priv_keyname: gpg_pkg_key.pem

gpg_pkg_pub_key: |
  -----BEGIN PGP PUBLIC KEY BLOCK-----
  Version: GnuPG v1

  mQENBFciIfQBCADAPCtzx7I5Rl32escCMZsPzaEKWe7bIX1em4KCKkBoX47IG54b
  w82PCE8Y1jF/9Uk2m3RKVWp3YcLlc7Ap3gj6VO4ysvVz28UbnhPxsIkOlf2cq8qc
  .
  .
  bYP7t5iwJmQzRMyFInYRt77wkJBPCpJc9FPNebL9vlZcN4zv0KQta+4alcWivvoP
  4QIxE+/+trC6QRw2m2dHk6aAeq/J0Sc7ilZufwnNA71hf9SzRIwcFXMsLx4iLlki
  inNqW9c=
  =s1CX
  -----END PGP PUBLIC KEY BLOCK-----

gpg_pkg_pub_keyname: gpg_pkg_key
```
2. The pillar data file `gpg_keys.sls` is loaded and presented to the relevant minions securely. For example, the contents of a top file loading `gpg_keys.sls`:
```yaml
base:
  '*':
    - gpg_keys
    - pkgbuild
```
3. Before signing the built packages and creating the repository, ensure that the user has sufficient access rights to the directory containing the packages and where the repository is to be created or updated. This is easily achieved with a SaltStack state file, for example on Debian systems, `repo/debian/init.sls`:
```yaml
{% import "setup/debian/map.jinja" as buildcfg %}
ensure_user_access: file.directory:
- name: {{buildcfg.build_dest_dir}}
- user: {{buildcfg.build_runas}}
- group: {{buildcfg.build_runas}}
- dir_mode: 755
- file_mode: 644
- recurse:
- user
- group
- mode
```
4. Optional parameters for signing packages are controlled from each platform's `init.sls` repo state file. They control whether a passphrase is used and the directory in which the public and private keys are to be found (transferred securely from the salt-master as pillar data). Examples for Redhat 7, Ubuntu 14.04 and Debian 8 based minions can be found in the respective files:
> `repo/redhat/rhel7/init.sls`
> `repo/ubuntu/ubuntu1404/init.sls`
> `repo/debian/debian8/init.sls`
Example contents from Debian 8's `init.sls` state file:
```yaml
{% import "setup/debian/map.jinja" as buildcfg %}
{% set repo_keyid = pillar.get('keyid', 'None') %}
include:
- repo.debian
{{buildcfg.build_dest_dir}}:
pkgbuild.repo:
{% if repo_keyid != 'None' %}
- keyid: {{repo_keyid}}
- use_passphrase: True
- gnupghome: {{buildcfg.build_gpg_keydir}}
- runas: {{buildcfg.build_runas}}
{% endif %}
- env:
OPTIONS : 'ask-passphrase'
ORIGIN : 'SaltStack'
LABEL : 'salt_debian8'
SUITE: 'stable'
CODENAME : 'jessie'
ARCHS : 'amd64 i386 source'
COMPONENTS : 'main'
DESCRIPTION : 'SaltStack Debian 8 package repo'
```
Example contents from Redhat 6's `init.sls` state file:
```yaml
{% import "setup/redhat/map.jinja" as buildcfg %}
{% set repo_keyid = pillar.get('keyid', 'None') %}
include:
- repo.redhat
{{buildcfg.build_dest_dir}}:
pkgbuild.repo:
- order: last
{% if repo_keyid != 'None' %}
- keyid: {{repo_keyid}}
- use_passphrase: True
- gnupghome: {{buildcfg.build_gpg_keydir}}
- runas: {{buildcfg.build_runas}}
- env:
ORIGIN : 'SaltStack'
{% endif %}
```
## Debian / Ubuntu Signing Process
1. Debian / Ubuntu platforms leverage gpg-agent when signing packages and creating repositories. gpg-agent is installed and executed as part of the setup process on Debian / Ubuntu; for example, on a Debian 8 platform the `init.sls` state file carries an additional include statement:
```yaml
include:
  - setup.debian
  - setup.debian.gpg_agent
```
As part of the setup process, the Public and Private keys are also imported by the `gpg_agent.sls` state file.
# Packages
The packages directory contains salt and all of its various dependencies, organized by name, then version, then the platforms supported for that version.
Each platform directory consists of an init.sls state file, a spec directory and an optional sources directory, as follows:
| State file / directory | Description |
|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| init.sls               | Describes the package and version being built for that platform, its dependencies and expected results.                                                                                                                                                                                                                                                                 |
| spec                   | Appropriate information for defining the build being performed on the specific platform, for example: spec file for rpm; dsc file or tarball containing control files, etc. describing what is being built for Debian/Ubuntu (used for Salt)                                                                                                                             |
| sources                | Optional directory containing sources and dependencies for the package being built. Not required if all sources are obtained over the network. For example: pkg/python-timelib/0_2_4/rhel7 requires no sources since in init.sls the macro expands to obtain the sources from https://pypi.python.org/packages/source/t/timelib/timelib-0.2.4.zip#md5=400e316f81001ec0842fa9b2cef5ade9 |
For example, package python-timelib, version 0.2.4:
```txt
pkg/python-timelib/
└── 0_2_4
├── debian7
│ ├── init.sls
│ ├── sources
│ │ ├── timelib_0.2.4-1.debian.tar.xz
│ │ └── timelib_0.2.4.orig.tar.gz
│ └── spec
│ └── timelib_0.2.4-1.dsc
├── debian8
│ ├── init.sls
│ ├── sources
│ │ ├── timelib_0.2.4-1.debian.tar.xz
│ │ └── timelib_0.2.4.orig.tar.gz
│ └── spec
│ └── timelib_0.2.4-1.dsc
├── rhel5
│ ├── init.sls
│ ├── sources
│ │ └── timelib-0.2.4.zip
│ └── spec
│ └── python-timelib.spec
├── rhel6
│ ├── init.sls
│ ├── sources
│ │ └── timelib-0.2.4.zip
│ └── spec
│ └── python-timelib.spec
├── rhel7
│ ├── init.sls
│ └── spec
│ └── python-timelib.spec
├── ubuntu1204
│ ├── init.sls
│ ├── sources
│ │ ├── timelib_0.2.4-1.debian.tar.xz
│ │ └── timelib_0.2.4.orig.tar.gz
│ └── spec
│ └── timelib_0.2.4-1.dsc
└── ubuntu1404
├── init.sls
├── sources
│ ├── timelib_0.2.4-1.debian.tar.xz
│ └── timelib_0.2.4.orig.tar.gz
└── spec
└── timelib_0.2.4-1.dsc
```
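The tree above can be explored programmatically. The sketch below builds a miniature `file_roots/pkg` layout in a temporary directory and enumerates the per-platform leaf directories; it is purely illustrative (the directory names mirror the python-timelib example, nothing here is salt-pack code):

```python
import pathlib
import tempfile

# Build a miniature file_roots/pkg tree like the one shown above and
# enumerate the <pkg>/<version>/<OS> platform directories.
root = pathlib.Path(tempfile.mkdtemp())
for os_dir in ("rhel7", "debian8", "ubuntu1404"):
    leaf = root / "pkg" / "python-timelib" / "0_2_4" / os_dir / "spec"
    leaf.mkdir(parents=True)

platforms = sorted(
    p.name for p in (root / "pkg" / "python-timelib" / "0_2_4").iterdir()
)
print(platforms)  # ['debian8', 'rhel7', 'ubuntu1404']
```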
## Layout of Init.sls
The init.sls file satisfies the requirements of the pkgbuild.py state file, and is defined as a YAML file, as follows:
```yaml
<pkgname>_<version>:
  pkgbuild.built:
    - runas: <username>
    - force: <True | False, whether to build regardless of existing build product>
    - results: <expected product of the build process>
    - dest_dir: <directory to place the product of the build process>
    - spec: <path to the package's spec file>
    - template: jinja
    - tgt: <target for the build process>
    - sources: <sources needed to build the package>
```
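The required fields of the mapping above can be sanity-checked with a short helper. This is illustrative only; the key names follow the schema above, and the function (and the example values it checks) are ours, not part of salt-pack:

```python
# Fields a pkgbuild.built state is expected to carry (per the schema above).
REQUIRED_KEYS = {"runas", "results", "dest_dir", "spec", "tgt"}

def check_pkgbuild_state(state):
    """Raise ValueError if any required pkgbuild.built field is missing."""
    missing = REQUIRED_KEYS - set(state)
    if missing:
        raise ValueError("missing keys: %s" % sorted(missing))
    return True

# Hypothetical example state, with made-up but plausible values.
example = {
    "runas": "builder",
    "force": False,
    "results": ["salt", "salt-master"],
    "dest_dir": "/srv/pkgs",
    "spec": "salt://pkg/salt/2015_8_8/rhel7/spec/salt.spec",
    "tgt": "epel-7-x86_64",
    "sources": [],
}
print(check_pkgbuild_state(example))  # True
```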
### Redhat - init.sls and pkgbuild.sls
On Redhat the build process is driven by pillar data present in `pkgbuild.sls` (or the versioned `pkgbuild.sls` if build_version is active), and by macro expansions of that pillar data in `init.sls`. The `pkgbuild.sls` file contains pillar data for the rhel7, rhel6 and rhel5 platforms, defining salt, its dependencies, and their dependencies for each platform.
For example, the section of pillar data used to build salt on rhel7 in `pkgbuild.sls` is as follows:
```yaml
salt:
  version: 2015.8.8-1
  noarch: True
  build_deps:
    - python-crypto
    - python-msgpack
    - python-yaml
    - python-requests
    - python-pyzmq
    - python-markupsafe
    - python-tornado
    - python-futures
    - python-libcloud
  results:
    - salt
    - salt-master
    - salt-minion
    - salt-syndic
    - salt-api
    - salt-cloud
    - salt-ssh
```
This section details the following fields:
version
: version of salt to build
noarch
: architecture independent
build_deps
: dependencies required by salt; these are built, if not already built, before attempting to build salt
results
: expected product of building salt
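The `build_deps` behaviour (build dependencies first, skipping anything already built) can be modelled with a small recursive sketch. The function and the tiny dependency map below are illustrative only:

```python
def build_order(pkg, deps, built=None):
    """Return the order in which packages would be built: dependencies
    first, then the package itself, skipping anything already built.
    A simplified model of the build_deps behaviour described above."""
    if built is None:
        built = set()
    order = []
    for dep in deps.get(pkg, []):
        if dep not in built:
            order += build_order(dep, deps, built)
    if pkg not in built:
        built.add(pkg)
        order.append(pkg)
    return order

# Hypothetical, heavily abbreviated dependency map.
deps = {
    "salt": ["python-crypto", "python-msgpack"],
    "python-crypto": [],
    "python-msgpack": [],
}
print(build_order("salt", deps))
# ['python-crypto', 'python-msgpack', 'salt']
```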
The salt `init.sls` file for Redhat 7 is as follows; it is primarily driven by jinja templating and macros that fill the contents of the various fields required by the state:
```yaml
{% import "setup/redhat/map.jinja" as buildcfg %}
{% import "setup/macros.jinja" as macros with context %}
{% set pkg_data = salt["pillar.get"]("pkgbuild_registry:" ~ buildcfg.build_release, {}) %}
{% set force = salt["pillar.get"]("pkgbuild_force.all", False) or salt["pillar.get"]("pkgbuild_force." ~ slspath, False) %}
{% set sls_name = "salt" %}
{% set pypi_name = sls_name %}
{% set pkg_info = pkg_data.get(sls_name, {}) %}
{% if "version" in pkg_info %}
{% set pkg_name = pkg_info.get("name", sls_name) %}
{% set version, release = pkg_info["version"].split("-", 1) %}
{% if pkg_info.get("noarch", False) %}
{% set arch = "noarch" %}
{% else %}
{% set arch = buildcfg.build_arch %}
{% endif %}
{{ macros.includes(sls_name, pkg_data) }}
{{sls_name}}-{{version}}:
pkgbuild.built:
- runas: {{buildcfg.build_runas}}
- force: {{force}}
{{ macros.results(sls_name, pkg_data) }}
- dest_dir: {{buildcfg.build_dest_dir}}
- spec: salt://{{slspath}}/spec/{{pkg_name}}.spec
- template: jinja
- tgt: {{buildcfg.build_tgt}}
{{ macros.build_deps(sls_name, pkg_data) }}
{{ macros.requires(sls_name, pkg_data) }}
- sources:
## - salt://{{slspath}}/sources/{{pkg_name}}-{{version}}.tar.gz
- {{ macros.pypi_source(pypi_name, version) }}
- {{ macros.pypi_source("SaltTesting", "2015.7.10") }}
- salt://{{slspath}}/sources/{{pkg_name}}-common.logrotate
- salt://{{slspath}}/sources/README.fedora
- salt://{{slspath}}/sources/{{pkg_name}}-api
- salt://{{slspath}}/sources/{{pkg_name}}-api.service
- salt://{{slspath}}/sources/{{pkg_name}}-api.environment
- salt://{{slspath}}/sources/{{pkg_name}}-master
- salt://{{slspath}}/sources/{{pkg_name}}-master.service
- salt://{{slspath}}/sources/{{pkg_name}}-master.environment
- salt://{{slspath}}/sources/{{pkg_name}}-minion
- salt://{{slspath}}/sources/{{pkg_name}}-minion.service
- salt://{{slspath}}/sources/{{pkg_name}}-minion.environment
- salt://{{slspath}}/sources/{{pkg_name}}-syndic
- salt://{{slspath}}/sources/{{pkg_name}}-syndic.service
- salt://{{slspath}}/sources/{{pkg_name}}-syndic.environment
- salt://{{slspath}}/sources/{{pkg_name}}.bash
- salt://{{slspath}}/sources/{{pkg_name}}-{{version}}-tests.patch
{% endif %}
```
The initial part of the init.sls expands required values, which are then used to set the package name (sls_name) and version. Note that the Salt package in this example is retrieved from the Python Package Index website, reducing the need to store salt's versioned tarball (an example using a local salt versioned tarball is commented out). `slspath` is defined by Salt and expands to the package's location on the Salt Master.
For example, if the base path is `/srv/salt`, `slspath` would expand as follows:
`/srv/salt/pkg/salt/2015_8_8`
### Debian / Ubuntu - init.sls
On Debian and Ubuntu the build process is not yet driven by pillar data (converting it is planned when time allows); it is driven entirely by the `init.sls` file. For example, using Salt 2015.8.8's `init.sls` file for Debian 8:
```yaml
{% import "setup/debian/map.jinja" as buildcfg %}
{% set force = salt['pillar.get']('build_force.all', False) or salt['pillar.get']('build_force.' ~ slspath, False) %}
{% set name = 'salt' %}
{% set version = '2015.8.8' %}
{% set release_nameadd = '+ds' %}
{% set release_ver = '2' %}
{{name}}-{{version.replace('.', '_')}}:
pkgbuild.built:
- runas: {{buildcfg.build_runas}}
- results:
- {{name}}_{{version}}{{release_nameadd}}.orig.tar.gz
- {{name}}_{{version}}{{release_nameadd}}-{{release_ver}}.dsc
- {{name}}_{{version}}{{release_nameadd}}-{{release_ver}}.debian.tar.xz
- {{name}}-api_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-cloud_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-common_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-master_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-minion_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-ssh_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- {{name}}-syndic_{{version}}{{release_nameadd}}-{{release_ver}}_all.deb
- force: {{force}}
- dest_dir: {{buildcfg.build_dest_dir}}
- spec: salt://{{slspath}}/spec/{{name}}_debian.tar.xz
- tgt: {{buildcfg.build_tgt}}
- template: jinja
- sources:
- salt://{{slspath}}/sources/{{name}}-{{version}}.tar.gz
```
The contents of this init.sls are simpler than those shown for the Redhat init.sls files, due to the simpler use of jinja templating; however, it is planned to eventually update Debian / Ubuntu to be driven by pillar data as well, since this simplifies specifying versions of packages and their dependencies.
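The artifact names produced by the jinja expressions in the results list can be sketched in plain Python. This is an illustrative helper, not salt-pack code; it simply mirrors the naming pattern used in the state file above:

```python
def debian_results(name, version, release_nameadd, release_ver, components):
    """Generate the expected Debian build artifacts the way the jinja
    expressions in the results list above do."""
    base = "%s_%s%s" % (name, version, release_nameadd)
    artifacts = [
        base + ".orig.tar.gz",
        "%s-%s.dsc" % (base, release_ver),
        "%s-%s.debian.tar.xz" % (base, release_ver),
    ]
    for comp in components:
        artifacts.append(
            "%s-%s_%s%s-%s_all.deb" % (name, comp, version, release_nameadd, release_ver)
        )
    return artifacts

arts = debian_results("salt", "2015.8.8", "+ds", "2", ["api", "common"])
print(arts[0])  # salt_2015.8.8+ds.orig.tar.gz
print(arts[3])  # salt-api_2015.8.8+ds-2_all.deb
```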
Note that salt's spec file is a tarball containing the various Debian files, such as control, changelog, service files, etc. The spec file can also be a dsc file on Debian and Ubuntu; for example, the `init.sls` for python-urllib3 on Debian 8:
```yaml
{% import "setup/debian/map.jinja" as buildcfg %}
{% set force = salt['pillar.get']('build_force.all', False) or salt['pillar.get']('build_force.' ~ slspath, False) %}
{% set pypi_name = 'urllib3' %}
{% set name = 'python-' ~ pypi_name %}
{% set name3 = 'python3-' ~ pypi_name %}
{% set version = '1.10.4' %}
{% set release_ver = '1' %}
{{name}}-{{version.replace('.', '_')}}:
pkgbuild.built:
- runas: {{buildcfg.build_runas}}
- results:
- {{name}}_{{version}}-{{release_ver}}_all.deb
- {{name}}-whl_{{version}}-{{release_ver}}_all.deb
- {{name3}}_{{version}}-{{release_ver}}_all.deb
- {{name}}_{{version}}.orig.tar.gz
- {{name}}_{{version}}-{{release_ver}}.dsc
- {{name}}_{{version}}-{{release_ver}}.debian.tar.xz
- force: {{force}}
- dest_dir: {{buildcfg.build_dest_dir}}
- spec: salt://{{slspath}}/spec/{{name}}_{{version}}-{{release_ver}}.dsc
- tgt: {{buildcfg.build_tgt}}
- template: jinja
- sources:
- salt://{{slspath}}/sources/{{name}}_{{version}}.orig.tar.gz
- salt://{{slspcd ../.ath}}/sources/{{name}}_{{version}}-{{release_ver}}.debian.tar.xz
```
# Building Salt and Dependencies
Individual packages can be built separately or all packages can be built using highstate.
The highstate is controlled by `redhat_pkg.sls`, `debian_pkg.sls` and `ubuntu_pkg.sls`, as stated in the Overview; these contain all of the dependencies for salt on that platform. For example, for Ubuntu:
`versions/2015_8_8/ubuntu_pkg.sls`
```yaml
{% import "setup/ubuntu/map.jinja" as buildcfg %}
include:
{% if buildcfg.build_release == 'ubuntu1604' %}
- pkg.libsodium.1_0_8.ubuntu1604
- pkg.python-ioflo.1_5_0.ubuntu1604
- pkg.python-libnacl.4_1.ubuntu1604
- pkg.python-raet.0_6_5.ubuntu1604
- pkg.python-timelib.0_2_4.ubuntu1604
- pkg.salt.2015_8_8.ubuntu1604
{% elif buildcfg.build_release == 'ubuntu1404' %}
- pkg.libsodium.1_0_3.ubuntu1404
- pkg.python-enum34.1_0_4.ubuntu1404
- pkg.python-future.0_14_3.ubuntu1404
- pkg.python-futures.3_0_3.ubuntu1404
- pkg.python-ioflo.1_3_8.ubuntu1404
- pkg.python-libcloud.0_15_1.ubuntu1404
- pkg.python-libnacl.4_1.ubuntu1404
- pkg.python-raet.0_6_3.ubuntu1404
- pkg.python-timelib.0_2_4.ubuntu1404
- pkg.python-tornado.4_2_1.ubuntu1404
- pkg.salt.2015_8_8.ubuntu1404
- pkg.zeromq.4_0_4.ubuntu1404
{% elif buildcfg.build_release == 'ubuntu1204' %}
- pkg.libsodium.1_0_3.ubuntu1204
- pkg.python-backports-ssl_match_hostname.3_4_0_2.ubuntu1204
- pkg.python-croniter.0_3_4.ubuntu1204
- pkg.python-crypto.2_6_1.ubuntu1204
- pkg.python-enum34.1_0_4.ubuntu1204
- pkg.python-future.0_14_3.ubuntu1204
- pkg.python-futures.3_0_3.ubuntu1204
- pkg.python-ioflo.1_3_8.ubuntu1204
- pkg.python-libcloud.0_14_1.ubuntu1204
- pkg.python-libnacl.4_1.ubuntu1204
- pkg.python-msgpack.0_3_0.ubuntu1204
- pkg.python-mako.0_7_0.ubuntu1204
- pkg.python-pyzmq.14_0_1.ubuntu1204
- pkg.python-raet.0_6_3.ubuntu1204
- pkg.python-requests.2_0_0.ubuntu1204
- pkg.python-timelib.0_2_4.ubuntu1204
- pkg.python-tornado.4_2_1.ubuntu1204
- pkg.python-urllib3.1_7_1.ubuntu1204
- pkg.salt.2015_8_8.ubuntu1204
- pkg.zeromq.4_0_4.ubuntu1204
{% endif %}
```
Hence, to build salt 2015.8.8 and its dependencies for Ubuntu 12.04 and then sign the packages and create a repository with a passphrase, use the state file `repo/ubuntu/ubuntu1204/init.sls`:
```yaml
{% import "setup/ubuntu/map.jinja" as buildcfg %}
{% set repo_keyid = pillar.get('keyid', 'None') %}
include:
- repo.ubuntu
{{buildcfg.build_dest_dir}}:
pkgbuild.repo:
{% if repo_keyid != 'None' %}
- keyid: {{repo_keyid}}
- use_passphrase: True
- gnupghome: {{buildcfg.build_gpg_keydir}}
- runas: {{buildcfg.build_runas}}
{% endif %}
- env:
OPTIONS : 'ask-passphrase'
ORIGIN : 'SaltStack'
LABEL : 'salt_ubuntu12'
CODENAME : 'precise'
ARCHS : 'amd64 i386 source'
COMPONENTS : 'main'
DESCRIPTION : 'SaltStack Ubuntu 12 package repo'
```
First, build the packages and their dependencies via highstate:

```bash
salt u12m state.highstate pillar='{ "build_dest" : "/srv/ubuntu/2015.8.8/pkgs", "build_release" : "ubuntu1204" , "build_version" : "2015_8_8" }'
```
Then sign the packages and create the repository:

```bash
salt u12m state.sls repo.ubuntu.ubuntu12 pillar='{ "build_dest" : "/srv/ubuntu/2015.8.8/pkgs", "build_release" : "ubuntu1204", "keyid" : "ABCDEF12" , "build_version" : "2015_8_8", "gpg_passphrase" : "my-pass-phrase" }'
```
These commands place the product of the build in the destination `/srv/ubuntu/2015.8.8/pkgs`.
Similarly, for Redhat 6 32-bit:
```bash
salt redhat7_minion state.highstate pillar='{ "build_dest" : "/srv/redhat/2015.8.8/pkgs", "build_release" : "rhel6", "build_arch" : "i386" , "build_version" : "2015_8_8" }'
```
```bash
salt redhat7_minion state.sls repo.redhat.rhel6 pillar='{ "build_dest" : "/srv/redhat/2015.8.8/pkgs", "keyid" : "ABCDEF12", "build_release" : "rhel6", "build_arch" : "i386" , "build_version" : "2015_8_8", "gpg_passphrase" : "my-pass-phrase" }'
```
To build Salt 2015.8.8 individually for Redhat 7:
```bash
salt redhat7_minion state.sls pkg.salt.2015_8_8.rhel7 pillar='{ "build_dest" : "/srv/redhat/2015.8.8/pkgs" , "build_version" : "2015_8_8" }'
```
Note: currently the building of 32-bit packages on Debian and Ubuntu does not work. It worked in early development of salt-pack, but stopped working for an as-yet-undetermined reason. Given the movement to 64-bit architectures, 32-bit support is a low-priority task.
53e68ea561195204b290a8527d9638a84c9d1c3a | 74 | md | Markdown | docs/3.0/admin-en/installation-guides/google-cloud/create-image.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | 7 | 2020-05-06T07:59:39.000Z | 2021-11-26T07:24:18.000Z | docs/3.0/admin-en/installation-guides/google-cloud/create-image.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | 28 | 2020-05-28T11:03:39.000Z | 2021-12-27T15:36:40.000Z | docs/3.0/admin-en/installation-guides/google-cloud/create-image.md | AnastasiaTWW/product-docs-en | b0abaea82aa80a4d965953bdac5d887526465f50 | [
"MIT"
] | 21 | 2020-04-24T14:47:50.000Z | 2022-02-02T15:23:32.000Z | --8<-- "latest/admin-en/installation-guides/google-cloud/create-image.md"
| 37 | 73 | 0.756757 | nld_Latn | 0.16971 |
53e70e0715ec4cf705ec829769e7f3fa9ce02443 | 13,101 | md | Markdown | docs/monitors/linux_system_metrics.md | Kami/scalyr-agent-2 | b26ebb6a74c2670ae28052079f2fac95d88e832a | [
"Apache-2.0"
] | null | null | null | docs/monitors/linux_system_metrics.md | Kami/scalyr-agent-2 | b26ebb6a74c2670ae28052079f2fac95d88e832a | [
"Apache-2.0"
] | 1 | 2020-06-03T13:19:37.000Z | 2020-06-03T13:35:28.000Z | docs/monitors/linux_system_metrics.md | Kami/scalyr-agent-2 | b26ebb6a74c2670ae28052079f2fac95d88e832a | [
"Apache-2.0"
] | null | null | null | /// DECLARE path=/help/monitors/linux-system-metrics
/// DECLARE title=Linux System Metrics
/// DECLARE section=help
/// DECLARE subsection=monitors
<!-- Auto generated content below. DO NOT edit manually, but run tox -egenerate-monitor-docs command instead -->
# Linux System Metrics
This agent monitor plugin records CPU consumption, memory usage, and other metrics for the server on which
the agent is running.
@class=bg-warning docInfoPanel: An *agent monitor plugin* is a component of the Scalyr Agent. To use a plugin,
simply add it to the ``monitors`` section of the Scalyr Agent configuration file (``/etc/scalyr/agent.json``).
For more information, see [Agent Plugins](/help/scalyr-agent#plugins).
## Sample Configuration
The linux_system_metrics plugin is configured automatically by the Scalyr Agent. You do not need to include
this plugin in your configuration file.
## Viewing Data
You can see an overview of this data in the System dashboard. Click the {{menuRef:Dashboards}} menu and select
{{menuRef:System}}. Use the dropdown near the top of the page to select the host whose data you'd like to view.
## Configuration Reference
|||# Option ||| Usage
|||# ``network_interface_prefixes``||| The prefixes for the network interfaces to gather statistics for. This is \
either a string or a list of strings. The prefix must be the entire string \
starting after ``/dev/`` and to theregex defined by network_interface_suffix, \
which defaults to [0-9A-Z]+ (multiple digits or uppercase letters). For \
example, ``eth`` matches all devices starting with ``/dev/eth`` that end in a \
digit or an uppercase letter, that is eth0, eth1, ethA, ethB and so on.
|||# ``network_interface_suffix`` ||| The suffix for network interfaces to gather statistics for. This is a single \
regex that defaults to [0-9A-Z]+ - multiple digits or uppercase letters in a \
row. This is appended to each of the network_interface_prefixes to create the \
full interface name when iterating over network interfaces in /dev
|||# ``local_disks_only`` ||| (defaults to true) Limits the metrics to only locally mounted filesystems
## Log reference
Each event recorded by this plugin will have the following fields:
|||# Field ||| Meaning
|||# ``monitor``||| Always ``linux_system_metrics``.
|||# ``metric`` ||| The name of a metric being measured, e.g. "proc.stat.cpu".
|||# ``value`` ||| The metric value.
## Metrics
The table below describes the metrics recorded by the monitor.
### general metrics
|||# Metric ||| Fields ||| Description
|||# ``sys.cpu.count`` ||| ||| The number of CPUs on the system
|||# ``proc.stat.cpu`` ||| ``type`` ||| CPU counters in units of jiffies, where ``type`` can be one of \
``user``, ``nice``, ``system``, ``iowait``, ``irq``, ``softirq``, \
``steal``, ``guest``. As a rate, they should add up to \
``100*numcpus`` on the host.
|||# ``proc.stat.intr`` ||| ||| The number of interrupts since boot.
|||# ``proc.stat.ctxt`` ||| ||| The number of context switches since boot.
|||# ``proc.stat.processes`` ||| ||| The number of processes created since boot.
|||# ``proc.stat.procs_blocked`` ||| ||| The number of processes currently blocked on I/O.
|||# ``proc.loadavg.1min`` ||| ||| The load average over 1 minute.
|||# ``proc.loadavg.5min`` ||| ||| The load average over 5 minutes.
|||# ``proc.loadavg.15min`` ||| ||| The load average over 15 minutes.
|||# ``proc.loadavg.runnable`` ||| ||| The number of runnable threads/processes.
|||# ``proc.loadavg.total_threads`` ||| ||| The total number of threads/processes.
|||# ``proc.kernel.entropy_avail`` ||| ||| The number of bits of entropy that can be read without blocking from \
/dev/random
|||# ``proc.uptime.total`` ||| ||| The total number of seconds since boot.
|||# ``proc.uptime.now``         ||| The number of seconds of idle time since boot
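The ``proc.loadavg.*`` metrics above come from the kernel's ``/proc/loadavg`` line. The parser below is an illustrative sketch (not the monitor's actual code) and works on a sample string rather than reading ``/proc``, so it runs anywhere:

```python
def parse_loadavg(text):
    """Parse a /proc/loadavg line into the proc.loadavg.* metrics
    listed above. Format: '1min 5min 15min runnable/total last_pid'."""
    one, five, fifteen, threads, _last_pid = text.split()
    runnable, total = threads.split("/")
    return {
        "proc.loadavg.1min": float(one),
        "proc.loadavg.5min": float(five),
        "proc.loadavg.15min": float(fifteen),
        "proc.loadavg.runnable": int(runnable),
        "proc.loadavg.total_threads": int(total),
    }

sample = "0.42 0.35 0.30 2/513 12345"
print(parse_loadavg(sample)["proc.loadavg.runnable"])  # 2
```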
### virtual memory metrics
|||# Metric ||| Description
|||# ``proc.vmstat.pgfault`` ||| The total number of minor page faults since boot.
|||# ``proc.vmstat.pgmajfault`` ||| The total number of major page faults since boot
|||# ``proc.vmstat.pswpin`` ||| The total number of processes swapped in since boot.
|||# ``proc.vmstat.pswpout`` ||| The total number of processes swapped out since boot.
|||# ``proc.vmstat.pgpgin`` ||| The total number of pages swapped in since boot.
|||# ``proc.vmstat.pgpgout`` ||| The total number of pages swapped out since boot.
### numa metrics
|||# Metric ||| Fields ||| Description
|||# ``sys.numa.zoneallocs`` ||| ``node``, \
``type`` ||| The number of pages allocated from the preferred node, either \
type=hit or type=miss.
|||# ``sys.numa.foreign_allocs`` ||| ``node`` ||| The number of pages allocated from node because the preferred node \
did not have any free.
|||# ``sys.numa.allocation`` ||| ``node``, \
``type`` ||| The number of pages allocated either type=locally or type=remotely \
for processes on this node.
|||# ``sys.numa.interleave`` ||| ``node``, \
``type=hit`` ||| The number of pages allocated successfully by the interleave \
strategy.
### sockets metrics
|||# Metric ||| Fields ||| Description
|||# ``net.sockstat.num_sockets`` ||| ||| The total number of sockets allocated (only TCP).
|||# ``net.sockstat.num_timewait`` ||| ||| The total number of TCP sockets currently in TIME_WAIT state.
|||# ``net.sockstat.sockets_inuse`` ||| ``type`` ||| The total number of sockets in use by type.
|||# ``net.sockstat.num_orphans`` ||| ||| The total number of orphan TCP sockets (not attached to any file \
descriptor).
|||# ``net.sockstat.memory`` ||| ``type`` ||| Memory allocated for this socket type (in bytes).
|||# ``net.sockstat.ipfragqueues`` ||| ||| The total number of IP flows for which there are currently fragments \
queued for reassembly.
### network metrics
|||# Metric ||| Fields ||| Description
|||# ``net.stat.tcp.abort`` ||| ``type`` ||| The total number of connections that the kernel had to \
                                             abort, broken down by reason.
|||# ``net.stat.tcp.abort.failed`` ||| ||| The total number of times the kernel failed to abort a \
connection because it didn't even have enough memory to \
reset it.
|||# ``net.stat.tcp.congestion.recovery`` ||| ``type`` ||| The number of times the kernel detected spurious \
retransmits and was able to recover part or all of the \
CWND, broken down by how it recovered.
|||# ``net.stat.tcp.delayedack`` ||| ``type`` ||| The number of delayed ACKs sent of different types.
|||# ``net.stat.tcp.failed_accept`` ||| ``reason`` ||| The number of times a connection had to be dropped after \
the 3WHS. reason=full_acceptq indicates that the \
application isn't accepting connections fast enough. You \
should see SYN cookies too.
|||# ``net.stat.tcp.invalid_sack`` ||| ``type`` ||| The number of invalid SACKs we saw of different types. \
                                                    (requires Linux v2.6.24-rc1 or newer)
|||# ``net.stat.tcp.memory.pressure`` ||| ||| The number of times a socket entered the "memory \
pressure" mode.
|||# ``net.stat.tcp.memory.prune`` ||| ``type`` ||| The number of times a socket had to discard received data \
due to low memory conditions, broken down by type.
|||# ``net.stat.tcp.packetloss.recovery`` ||| ``type`` ||| The number of times we recovered from packet loss by type \
of recovery (e.g. fast retransmit vs SACK).
|||# ``net.stat.tcp.receive.queue.full`` ||| ||| The number of times a received packet had to be dropped \
because the socket's receive queue was full (requires \
Linux v2.6.34-rc2 or newer)
|||# ``net.stat.tcp.reording`` ||| ``detectedby`` ||| The number of times we detected re-ordering broken down \
by how.
|||# ``net.stat.tcp.syncookies`` ||| ``type`` ||| SYN cookies (both sent & received).
### disk requests metrics
|||# Metric ||| Fields ||| Description
|||# ``iostat.disk.read_requests`` ||| ``dev`` ||| The total number of reads completed by device
|||# ``iostat.disk.read_merged`` ||| ``dev`` ||| The total number of reads merged by device
|||# ``iostat.disk.read_sectors`` ||| ``dev`` ||| The total number of sectors read by device
|||# ``iostat.disk.msec_read`` ||| ``dev`` ||| Time in msec spent reading by device
|||# ``iostat.disk.write_requests`` ||| ``dev`` ||| The total number of writes completed by device
|||# ``iostat.disk.write_merged`` ||| ``dev`` ||| The total number of writes merged by device
|||# ``iostat.disk.write_sectors`` ||| ``dev`` ||| The total number of sectors written by device
|||# ``iostat.disk.msec_write`` ||| ``dev`` ||| The total time in milliseconds spent writing by device
|||# ``iostat.disk.ios_in_progress`` ||| ``dev`` ||| The number of I/O operations in progress by device
|||# ``iostat.disk.msec_total`` ||| ``dev`` ||| The total time in milliseconds doing I/O by device.
|||# ``iostat.disk.msec_weighted_total`` ||| ``dev`` ||| Weighted time doing I/O (multiplied by ios_in_progress) by \
device.
### disk resources metrics
|||# Metric ||| Fields ||| Description
|||# ``df.1kblocks.total`` ||| ``mount``, \
``fstype`` ||| The total size of the file system broken down by mount and filesystem type.
|||# ``df.1kblocks.used`` ||| ``mount``, \
``fstype`` ||| The number of blocks used broken down by mount and filesystem type.
|||# ``df.inodes.total`` ||| ``mount``, \
``fstype`` ||| The number of inodes broken down by mount and filesystem type.
|||# ``df.inodes.used`` ||| ``mount``, \
``fstype`` ||| The number of used inodes broken down by mount and filesystem type.
|||# ``df.inodes.free`` ||| ``mount``, \
``fstype`` ||| The number of free inodes broken down by mount and filesystem type.
### memory metrics
|||# Metric ||| Description
|||# ``proc.meminfo.memtotal`` ||| The total number of 1 KB pages of RAM.
|||# ``proc.meminfo.memfree`` ||| The total number of unused 1 KB pages of RAM. This does not include the number of \
cached pages which can be used when allocating memory.
|||# ``proc.meminfo.cached`` ||| The total number of 1 KB pages of RAM being used to cache blocks from the filesystem. \
These can be reclaimed as used to allocate memory as needed.
|||# ``proc.meminfo.buffers`` ||| The total number of 1 KB pages of RAM being used in system buffers.
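As an illustration of how the load-average metrics above are derived, here is a minimal sketch (not part of this plugin; the function name is ours) that parses a /proc/loadavg line into the metric names used in this table:

```python
def parse_loadavg(text):
    """Parse a /proc/loadavg line into the proc.loadavg.* metrics above.

    A line looks like "0.20 0.40 0.60 2/345 6789": three load averages,
    runnable/total threads, and the most recently created PID.
    """
    fields = text.split()
    runnable, total = fields[3].split("/")
    return {
        "proc.loadavg.1min": float(fields[0]),
        "proc.loadavg.5min": float(fields[1]),
        "proc.loadavg.15min": float(fields[2]),
        "proc.loadavg.runnable": int(runnable),
        "proc.loadavg.total_threads": int(total),
    }

print(parse_loadavg("0.20 0.40 0.60 2/345 6789")["proc.loadavg.runnable"])  # → 2
```

On a live Linux host the input would come from reading the `/proc/loadavg` file directly.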
| 70.816216 | 124 | 0.524693 | eng_Latn | 0.987614 |
53e72360ba101de5d523d76a99dfb1d2638a3756 | 2,491 | md | Markdown | README.md | quadrate-tech/classifieds-web-interface | 73f17db5528dd7772165437dfa37124ca2589f7d | [
"MIT"
] | 3 | 2020-09-17T05:18:20.000Z | 2021-01-31T23:58:49.000Z | README.md | quadrate-tech/classifieds-web-interface | 73f17db5528dd7772165437dfa37124ca2589f7d | [
"MIT"
] | 18 | 2020-09-14T09:56:24.000Z | 2020-11-10T18:58:13.000Z | README.md | quadrate-tech/classifieds-web-interface | 73f17db5528dd7772165437dfa37124ca2589f7d | [
"MIT"
] | 1 | 2021-01-30T03:40:03.000Z | 2021-01-30T03:40:03.000Z | # classifieds-web-interface
Python Version -- 3.8.4
To check your current python version type in following command in the terminal python --version
Django Version 3.1
To check your current version of django type following command in the terminal python -m django --version
Angular version 10
Instruction for Collaborators
Create a branch named after the issue, including the issue number. For example, if the issue name is "this-is-issue-abcd #8", your branch name should be "this-is-issue-abcd.#8"
Once you have completed the task, commit the changes with a suitable commit message followed by #issue_number
Pull the changes from the origin/remote master branch
Apply the changes and commit & push them
Create the pull request after committing & pushing the changes
Link the pull request with the relevant issue
Note: Do not commit to the master branch. Test well locally before you create the pull request
Recommended IDE Pycharm Professional
Download Link https://www.jetbrains.com/pycharm/download
Instruction about how to install the Django framework
UNIX/Linux and Mac OS X Installation
Download the latest version of Django http://www.djangoproject.com/download.
Once you have downloaded the archive from the link above, it will be named something like Django-x.xx.tar.gz
Extract and install it using these commands
$ tar xzvf Django-x.xx.tar.gz
$ cd Django-x.xx
$ sudo python setup.py install
Test your installation by running this command: $ django-admin.py --version
If it isn't working, run this: $ django-admin --version
If you see the current version of Django printed on the screen, then everything is set
Windows Installation
Download the latest version of Django http://www.djangoproject.com/download.
On some versions of Windows (e.g. Windows 7) you may need to ensure the Path system variable has the following in it: C:\Python27\;C:\Python27\Lib\site-packages\django\bin\, depending on your Python version.
Once you have downloaded the archive from the link above, extract and install Django
c:\>cd c:\Django-x.xx
Install Django by running the command below, for which you will need administrative privileges in the Windows shell "cmd"
c:\Django-x.xx>python setup.py install
To test your installation, open a command prompt and type
c:\>django-admin.py --version
If you see the current version of Django printed on the screen, then everything is set.
Otherwise, launch a "cmd" prompt and type python, then:
c:\> python
>>> import django
>>> print django.get_version()
SuperUser Username qts-admin
SuperUser Password QTSSuperUser
| 32.776316 | 203 | 0.786431 | eng_Latn | 0.990915 |
53e769dc41f8d04c11f2dc41925751e7ba5d1444 | 1,415 | md | Markdown | README.md | EmilyDSarani/paw-star-fe | b531770d686ae2b1bcb8f6ce8ec3802c97e27f77 | [
"MIT"
] | 1 | 2022-02-07T04:22:51.000Z | 2022-02-07T04:22:51.000Z | README.md | EmilyDSarani/paw-star-fe | b531770d686ae2b1bcb8f6ce8ec3802c97e27f77 | [
"MIT"
] | 22 | 2021-10-16T00:48:49.000Z | 2021-11-16T22:32:02.000Z | README.md | EmilyDSarani/paw-star-fe | b531770d686ae2b1bcb8f6ce8ec3802c97e27f77 | [
"MIT"
] | 1 | 2021-11-18T21:20:13.000Z | 2021-11-18T21:20:13.000Z | # Paw-Star
### Elijah Prosperie
- [LinkedIn](https://www.linkedin.com/in/elijahprosperie/)
- [GitHub](https://github.com/ProsperieEli)
### Katie Schrattenholzer
- [LinkedIn](https://www.linkedin.com/in/k-schrattenholzer/)
- [GitHub](https://github.com/k-schrattenholzer)
### Diyana Mendoza
- [LinkedIn](https://www.linkedin.com/in/diyana-mendoza-price/)
- [GitHub](https://github.com/diyanamendoza)
### Emily Sarani
- [LinkedIn](https://www.linkedin.com/in/emily-sarani-2b3074135/)
- [GitHub](https://github.com/EmilyDSarani)
<br>
<br>
#### User Story:
1. Log in, or create a user profile
1. Redirected to the Pets page, where pets can be added
1. Directed to the paw-strology page to see a list of their pets, with a daily horoscope, mood, and compatibility message for each given pet
#### Endpoints:
- Get
- Post
- Delete
#### API's used:
- [Backend Repo](https://github.com/ProsperieEli/paw-star-be)
- [ZodiacSign](https://rapidapi.com/hajderr/api/zodiac-sign)
- [Aztro](https://github.com/sameerkumar18/aztro)
- [Yelp](https://www.yelp.com/developers/documentation/v3)
- [Data Muse](https://www.datamuse.com/api/)
- [Daily Quote](https://type.fit/api/quotes)
#### Libraries, frameworks, and packages used:
- React
- React-Router-Dom
- React-Loader-Spinner
- React-Hash-Link
- Netlify
- superagent
Icons made by [Icongeek26](https://www.flaticon.com/authors/icongeek26) from https://www.flaticon.com/
| 30.76087 | 140 | 0.720848 | yue_Hant | 0.344422 |
53e7c309eaab4a70d5aaf38bde0e2dcba60bae21 | 40 | md | Markdown | README.md | Ro0tk1t/SecretChat | b316c36ffec60cc26f27121f9400a1d4b73deb83 | [
"MIT"
] | null | null | null | README.md | Ro0tk1t/SecretChat | b316c36ffec60cc26f27121f9400a1d4b73deb83 | [
"MIT"
] | null | null | null | README.md | Ro0tk1t/SecretChat | b316c36ffec60cc26f27121f9400a1d4b73deb83 | [
"MIT"
] | null | null | null | # SecretChat
Secret chat for small team
| 13.333333 | 26 | 0.8 | eng_Latn | 0.999157 |
53e7dd269bfb2f8676b645ac97aa30b51b5bbe95 | 56 | md | Markdown | README.md | huegli/automation | 2aa2532fefb35acc071a5edc2b313e27d08f65c9 | [
"MIT"
] | null | null | null | README.md | huegli/automation | 2aa2532fefb35acc071a5edc2b313e27d08f65c9 | [
"MIT"
] | null | null | null | README.md | huegli/automation | 2aa2532fefb35acc071a5edc2b313e27d08f65c9 | [
"MIT"
] | null | null | null | # automation
Automation scripts for use around the home
| 18.666667 | 42 | 0.821429 | eng_Latn | 0.990252 |
53e7feda83d787b2ee266b54096e9719f3587aa1 | 105 | md | Markdown | web/config/metadata/amsua/AMSUA_NOAA17_Brightness_Temp_Channel_12.md | Swaniti/IndiaSatelliteView | d7e1f4093061ed52554d99f197f29e6a6ae69cce | [
"NASA-1.3"
] | null | null | null | web/config/metadata/amsua/AMSUA_NOAA17_Brightness_Temp_Channel_12.md | Swaniti/IndiaSatelliteView | d7e1f4093061ed52554d99f197f29e6a6ae69cce | [
"NASA-1.3"
] | null | null | null | web/config/metadata/amsua/AMSUA_NOAA17_Brightness_Temp_Channel_12.md | Swaniti/IndiaSatelliteView | d7e1f4093061ed52554d99f197f29e6a6ae69cce | [
"NASA-1.3"
] | null | null | null | ### AMSU-A/NOAA-17 Brightness Temperature (Channel 12)
Temporal coverage: 21 July 2002 - 28 October 2003
| 35 | 54 | 0.761905 | eng_Latn | 0.48404 |
53e815d16170986ca147a6adf92c12d7ae5461d0 | 35,010 | md | Markdown | documents/aws-systems-manager-user-guide/doc_source/automation-document-sample-mad.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | 5 | 2021-08-13T09:20:58.000Z | 2021-12-16T22:13:54.000Z | documents/aws-systems-manager-user-guide/doc_source/automation-document-sample-mad.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | documents/aws-systems-manager-user-guide/doc_source/automation-document-sample-mad.md | siagholami/aws-documentation | 2d06ee9011f3192b2ff38c09f04e01f1ea9e0191 | [
"CC-BY-4.0"
] | null | null | null | # Deploy VPC architecture and Microsoft Active Directory domain controllers<a name="automation-document-sample-mad"></a>
To increase efficiency and standardize common tasks, you might choose to automate deployments\. This is useful if you regularly deploy the same architecture across multiple accounts and Regions\. Automating architecture deployments can also reduce the potential for human error that can occur when deploying architecture manually\. AWS Systems Manager Automation actions can help you accomplish this\.
The following sample AWS Systems Manager Automation document performs these actions\.
+ Retrieves the latest Windows Server 2012R2 Amazon Machine Image \(AMI\) using Systems Manager Parameter Store to use when launching the EC2 instances that will be configured as domain controllers\.
+ Uses the `aws:executeAwsApi` Automation action to call several AWS API actions to create the VPC architecture\. The domain controller instances are launched in private subnets, and connect to the internet using a NAT gateway\. This enables the SSM Agent on the instances to access the requisite Systems Manager endpoints\.
+ Uses the `aws:waitForAwsResourceProperty` Automation action to confirm the instances launched by the previous action are `Online` for AWS Systems Manager\.
+ Uses the `aws:runCommand` Automation action to configure the instances launched as Microsoft Active Directory domain controllers\.
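Because every step in the document below names its successor through `nextStep`, a renamed step can silently break the chain. As a quick editing aid (not part of Systems Manager itself; the function name is ours), the following sketch checks that each `nextStep` in a parsed document refers to an existing step:

```python
def check_next_steps(doc):
    """Return (step, nextStep) pairs whose target step does not exist."""
    names = {step["name"] for step in doc["mainSteps"]}
    return [
        (step["name"], step["nextStep"])
        for step in doc["mainSteps"]
        if "nextStep" in step and step["nextStep"] not in names
    ]

# A miniature document in the same shape as the runbooks below:
doc = {
    "schemaVersion": "0.3",
    "mainSteps": [
        {"name": "createVpc", "action": "aws:executeAwsApi", "nextStep": "getMainRtb"},
        {"name": "getMainRtb", "action": "aws:executeAwsApi", "nextStep": "missingStep"},
    ],
}
print(check_next_steps(doc))  # → [('getMainRtb', 'missingStep')]
```

The same check works on either the YAML or the JSON form once it has been loaded into a dictionary.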
------
#### [ YAML ]
```
---
description: Custom Automation Deployment Sample
schemaVersion: '0.3'
parameters:
AutomationAssumeRole:
type: String
default: ''
description: >-
(Optional) The ARN of the role that allows Automation to perform the
actions on your behalf. If no role is specified, Systems Manager
Automation uses your IAM permissions to run this document.
mainSteps:
- name: getLatestWindowsAmi
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ssm
Api: GetParameter
Name: >-
/aws/service/ami-windows-latest/Windows_Server-2012-R2_RTM-English-64Bit-Base
outputs:
- Name: amiId
Selector: $.Parameter.Value
Type: String
nextStep: createSSMInstanceRole
- name: createSSMInstanceRole
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: iam
Api: CreateRole
AssumeRolePolicyDocument: >-
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}
RoleName: sampleSSMInstanceRole
nextStep: attachManagedSSMPolicy
- name: attachManagedSSMPolicy
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: iam
Api: AttachRolePolicy
PolicyArn: 'arn:aws:iam::aws:policy/service-role/AmazonSSMManagedInstanceCore'
RoleName: sampleSSMInstanceRole
nextStep: createSSMInstanceProfile
- name: createSSMInstanceProfile
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: iam
Api: CreateInstanceProfile
InstanceProfileName: sampleSSMInstanceRole
outputs:
- Name: instanceProfileArn
Selector: $.InstanceProfile.Arn
Type: String
nextStep: addSSMInstanceRoleToProfile
- name: addSSMInstanceRoleToProfile
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: iam
Api: AddRoleToInstanceProfile
InstanceProfileName: sampleSSMInstanceRole
RoleName: sampleSSMInstanceRole
nextStep: createVpc
- name: createVpc
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateVpc
CidrBlock: 10.0.100.0/22
outputs:
- Name: vpcId
Selector: $.Vpc.VpcId
Type: String
nextStep: getMainRtb
- name: getMainRtb
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: DescribeRouteTables
Filters:
- Name: vpc-id
Values:
- '{{ createVpc.vpcId }}'
outputs:
- Name: mainRtbId
Selector: '$.RouteTables[0].RouteTableId'
Type: String
nextStep: verifyMainRtb
- name: verifyMainRtb
action: aws:assertAwsResourceProperty
onFailure: Abort
inputs:
Service: ec2
Api: DescribeRouteTables
RouteTableIds:
- '{{ getMainRtb.mainRtbId }}'
PropertySelector: '$.RouteTables[0].Associations[0].Main'
DesiredValues:
- 'True'
nextStep: createPubSubnet
- name: createPubSubnet
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateSubnet
CidrBlock: 10.0.103.0/24
AvailabilityZone: us-west-2c
VpcId: '{{ createVpc.vpcId }}'
outputs:
- Name: pubSubnetId
Selector: $.Subnet.SubnetId
Type: String
nextStep: createPubRtb
- name: createPubRtb
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateRouteTable
VpcId: '{{ createVpc.vpcId }}'
outputs:
- Name: pubRtbId
Selector: $.RouteTable.RouteTableId
Type: String
nextStep: createIgw
- name: createIgw
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateInternetGateway
outputs:
- Name: igwId
Selector: $.InternetGateway.InternetGatewayId
Type: String
nextStep: attachIgw
- name: attachIgw
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: AttachInternetGateway
InternetGatewayId: '{{ createIgw.igwId }}'
VpcId: '{{ createVpc.vpcId }}'
nextStep: allocateEip
- name: allocateEip
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: AllocateAddress
Domain: vpc
outputs:
- Name: eipAllocationId
Selector: $.AllocationId
Type: String
nextStep: createNatGw
- name: createNatGw
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateNatGateway
AllocationId: '{{ allocateEip.eipAllocationId }}'
SubnetId: '{{ createPubSubnet.pubSubnetId }}'
outputs:
- Name: natGwId
Selector: $.NatGateway.NatGatewayId
Type: String
nextStep: verifyNatGwAvailable
- name: verifyNatGwAvailable
action: aws:waitForAwsResourceProperty
timeoutSeconds: 150
inputs:
Service: ec2
Api: DescribeNatGateways
NatGatewayIds:
- '{{ createNatGw.natGwId }}'
PropertySelector: '$.NatGateways[0].State'
DesiredValues:
- available
nextStep: createNatRoute
- name: createNatRoute
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateRoute
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId: '{{ createNatGw.natGwId }}'
RouteTableId: '{{ getMainRtb.mainRtbId }}'
nextStep: createPubRoute
- name: createPubRoute
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateRoute
DestinationCidrBlock: 0.0.0.0/0
GatewayId: '{{ createIgw.igwId }}'
RouteTableId: '{{ createPubRtb.pubRtbId }}'
nextStep: setPubSubAssoc
- name: setPubSubAssoc
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: AssociateRouteTable
RouteTableId: '{{ createPubRtb.pubRtbId }}'
SubnetId: '{{ createPubSubnet.pubSubnetId }}'
- name: createDhcpOptions
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateDhcpOptions
DhcpConfigurations:
- Key: domain-name-servers
Values:
- '10.0.100.50,10.0.101.50'
- Key: domain-name
Values:
- sample.com
outputs:
- Name: dhcpOptionsId
Selector: $.DhcpOptions.DhcpOptionsId
Type: String
nextStep: createDCSubnet1
- name: createDCSubnet1
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateSubnet
CidrBlock: 10.0.100.0/24
AvailabilityZone: us-west-2a
VpcId: '{{ createVpc.vpcId }}'
outputs:
- Name: firstSubnetId
Selector: $.Subnet.SubnetId
Type: String
nextStep: createDCSubnet2
- name: createDCSubnet2
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateSubnet
CidrBlock: 10.0.101.0/24
AvailabilityZone: us-west-2b
VpcId: '{{ createVpc.vpcId }}'
outputs:
- Name: secondSubnetId
Selector: $.Subnet.SubnetId
Type: String
nextStep: createDCSecGroup
- name: createDCSecGroup
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: CreateSecurityGroup
GroupName: SampleDCSecGroup
Description: Security Group for Sample Domain Controllers
VpcId: '{{ createVpc.vpcId }}'
outputs:
- Name: dcSecGroupId
Selector: $.GroupId
Type: String
nextStep: authIngressDCTraffic
- name: authIngressDCTraffic
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: AuthorizeSecurityGroupIngress
GroupId: '{{ createDCSecGroup.dcSecGroupId }}'
IpPermissions:
- FromPort: -1
IpProtocol: '-1'
IpRanges:
- CidrIp: 0.0.0.0/0
Description: Allow all traffic between Domain Controllers
nextStep: verifyInstanceProfile
- name: verifyInstanceProfile
action: aws:waitForAwsResourceProperty
maxAttempts: 5
onFailure: Abort
inputs:
Service: iam
Api: ListInstanceProfilesForRole
RoleName: sampleSSMInstanceRole
PropertySelector: '$.InstanceProfiles[0].Arn'
DesiredValues:
- '{{ createSSMInstanceProfile.instanceProfileArn }}'
nextStep: iamEventualConsistency
- name: iamEventualConsistency
action: aws:sleep
inputs:
Duration: PT2M
nextStep: launchDC1
- name: launchDC1
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: RunInstances
BlockDeviceMappings:
- DeviceName: /dev/sda1
Ebs:
DeleteOnTermination: true
VolumeSize: 50
VolumeType: gp2
- DeviceName: xvdf
Ebs:
DeleteOnTermination: true
VolumeSize: 100
VolumeType: gp2
IamInstanceProfile:
Arn: '{{ createSSMInstanceProfile.instanceProfileArn }}'
ImageId: '{{ getLatestWindowsAmi.amiId }}'
InstanceType: t2.micro
MaxCount: 1
MinCount: 1
PrivateIpAddress: 10.0.100.50
SecurityGroupIds:
- '{{ createDCSecGroup.dcSecGroupId }}'
SubnetId: '{{ createDCSubnet1.firstSubnetId }}'
TagSpecifications:
- ResourceType: instance
Tags:
- Key: Name
Value: SampleDC1
outputs:
- Name: pdcInstanceId
Selector: '$.Instances[0].InstanceId'
Type: String
nextStep: launchDC2
- name: launchDC2
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: RunInstances
BlockDeviceMappings:
- DeviceName: /dev/sda1
Ebs:
DeleteOnTermination: true
VolumeSize: 50
VolumeType: gp2
- DeviceName: xvdf
Ebs:
DeleteOnTermination: true
VolumeSize: 100
VolumeType: gp2
IamInstanceProfile:
Arn: '{{ createSSMInstanceProfile.instanceProfileArn }}'
ImageId: '{{ getLatestWindowsAmi.amiId }}'
InstanceType: t2.micro
MaxCount: 1
MinCount: 1
PrivateIpAddress: 10.0.101.50
SecurityGroupIds:
- '{{ createDCSecGroup.dcSecGroupId }}'
SubnetId: '{{ createDCSubnet2.secondSubnetId }}'
TagSpecifications:
- ResourceType: instance
Tags:
- Key: Name
Value: SampleDC2
outputs:
- Name: adcInstanceId
Selector: '$.Instances[0].InstanceId'
Type: String
nextStep: verifyDCInstanceState
- name: verifyDCInstanceState
action: aws:waitForAwsResourceProperty
inputs:
Service: ec2
Api: DescribeInstanceStatus
IncludeAllInstances: true
InstanceIds:
- '{{ launchDC1.pdcInstanceId }}'
- '{{ launchDC2.adcInstanceId }}'
PropertySelector: '$.InstanceStatuses[0].InstanceState.Name'
DesiredValues:
- running
nextStep: verifyInstancesOnlineSSM
- name: verifyInstancesOnlineSSM
action: aws:waitForAwsResourceProperty
timeoutSeconds: 600
inputs:
Service: ssm
Api: DescribeInstanceInformation
InstanceInformationFilterList:
- key: InstanceIds
valueSet:
- '{{ launchDC1.pdcInstanceId }}'
- '{{ launchDC2.adcInstanceId }}'
PropertySelector: '$.InstanceInformationList[0].PingStatus'
DesiredValues:
- Online
nextStep: installADRoles
- name: installADRoles
action: aws:runCommand
inputs:
DocumentName: AWS-RunPowerShellScript
InstanceIds:
- '{{ launchDC1.pdcInstanceId }}'
- '{{ launchDC2.adcInstanceId }}'
Parameters:
commands: |-
try {
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
}
catch {
Write-Error "Failed to install ADDS Role."
}
nextStep: setAdminPassword
- name: setAdminPassword
action: aws:runCommand
inputs:
DocumentName: AWS-RunPowerShellScript
InstanceIds:
- '{{ launchDC1.pdcInstanceId }}'
Parameters:
commands:
- net user Administrator "sampleAdminPass123!"
nextStep: createForest
- name: createForest
action: aws:runCommand
inputs:
DocumentName: AWS-RunPowerShellScript
InstanceIds:
- '{{ launchDC1.pdcInstanceId }}'
Parameters:
commands: |-
$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force
try {
Install-ADDSForest -DomainName "sample.com" -DomainMode 6 -ForestMode 6 -InstallDNS -DatabasePath "D:\NTDS" -SysvolPath "D:\SYSVOL" -SafeModeAdministratorPassword $dsrmPass -Force
}
catch {
Write-Error $_
}
try {
Add-DnsServerForwarder -IPAddress "10.0.100.2"
}
catch {
Write-Error $_
}
nextStep: associateDhcpOptions
- name: associateDhcpOptions
action: aws:executeAwsApi
onFailure: Abort
inputs:
Service: ec2
Api: AssociateDhcpOptions
DhcpOptionsId: '{{ createDhcpOptions.dhcpOptionsId }}'
VpcId: '{{ createVpc.vpcId }}'
nextStep: waitForADServices
- name: waitForADServices
action: aws:sleep
inputs:
Duration: PT1M
nextStep: promoteADC
- name: promoteADC
action: aws:runCommand
inputs:
DocumentName: AWS-RunPowerShellScript
InstanceIds:
- '{{ launchDC2.adcInstanceId }}'
Parameters:
commands: |-
ipconfig /renew
$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force
$domAdminUser = "sample\Administrator"
$domAdminPass = "sampleAdminPass123!" | ConvertTo-SecureString -asPlainText -Force
$domAdminCred = New-Object System.Management.Automation.PSCredential($domAdminUser,$domAdminPass)
try {
Install-ADDSDomainController -DomainName "sample.com" -InstallDNS -DatabasePath "D:\NTDS" -SysvolPath "D:\SYSVOL" -SafeModeAdministratorPassword $dsrmPass -Credential $domAdminCred -Force
}
catch {
Write-Error $_
}
```
------
#### [ JSON ]
```
{
"description": "Custom Automation Deployment Sample",
"schemaVersion": "0.3",
"assumeRole": "{{ AutomationAssumeRole }}",
"parameters": {
"AutomationAssumeRole": {
"type": "String",
"description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to execute this document.",
"default": ""
}
},
"mainSteps": [
{
"name": "getLatestWindowsAmi",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ssm",
"Api": "GetParameter",
"Name": "/aws/service/ami-windows-latest/Windows_Server-2012-R2_RTM-English-64Bit-Base"
},
"outputs": [
{
"Name": "amiId",
"Selector": "$.Parameter.Value",
"Type": "String"
}
],
"nextStep": "createSSMInstanceRole"
},
{
"name": "createSSMInstanceRole",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "iam",
"Api": "CreateRole",
"AssumeRolePolicyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"ec2.amazonaws.com\"]},\"Action\":[\"sts:AssumeRole\"]}]}",
"RoleName": "sampleSSMInstanceRole"
},
"nextStep": "attachManagedSSMPolicy"
},
{
"name": "attachManagedSSMPolicy",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "iam",
"Api": "AttachRolePolicy",
"PolicyArn": "arn:aws:iam::aws:policy/service-role/AmazonSSMManagedInstanceCore",
"RoleName": "sampleSSMInstanceRole"
},
"nextStep": "createSSMInstanceProfile"
},
{
"name": "createSSMInstanceProfile",
"action":"aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "iam",
"Api": "CreateInstanceProfile",
"InstanceProfileName": "sampleSSMInstanceRole"
},
"outputs": [
{
"Name": "instanceProfileArn",
"Selector": "$.InstanceProfile.Arn",
"Type": "String"
}
],
"nextStep": "addSSMInstanceRoleToProfile"
},
{
"name": "addSSMInstanceRoleToProfile",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "iam",
"Api": "AddRoleToInstanceProfile",
"InstanceProfileName": "sampleSSMInstanceRole",
"RoleName": "sampleSSMInstanceRole"
},
"nextStep": "createVpc"
},
{
"name": "createVpc",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateVpc",
"CidrBlock": "10.0.100.0/22"
},
"outputs": [
{
"Name": "vpcId",
"Selector": "$.Vpc.VpcId",
"Type": "String"
        }
      ],
      "nextStep": "getMainRtb"
},
{
"name": "getMainRtb",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "DescribeRouteTables",
"Filters": [
{
"Name": "vpc-id",
"Values": ["{{ createVpc.vpcId }}"]
}
]
},
"outputs": [
{
"Name": "mainRtbId",
"Selector": "$.RouteTables[0].RouteTableId",
"Type": "String"
}
],
"nextStep": "verifyMainRtb"
},
{
"name": "verifyMainRtb",
"action": "aws:assertAwsResourceProperty",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "DescribeRouteTables",
"RouteTableIds": ["{{ getMainRtb.mainRtbId }}"],
"PropertySelector": "$.RouteTables[0].Associations[0].Main",
"DesiredValues": ["True"]
},
"nextStep": "createPubSubnet"
},
{
"name": "createPubSubnet",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateSubnet",
"CidrBlock": "10.0.103.0/24",
"AvailabilityZone": "us-west-2c",
"VpcId": "{{ createVpc.vpcId }}"
},
"outputs":[
{
"Name": "pubSubnetId",
"Selector": "$.Subnet.SubnetId",
"Type": "String"
}
],
"nextStep": "createPubRtb"
},
{
"name": "createPubRtb",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateRouteTable",
"VpcId": "{{ createVpc.vpcId }}"
},
"outputs": [
{
"Name": "pubRtbId",
"Selector": "$.RouteTable.RouteTableId",
"Type": "String"
}
],
"nextStep": "createIgw"
},
{
"name": "createIgw",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateInternetGateway"
},
"outputs": [
{
"Name": "igwId",
"Selector": "$.InternetGateway.InternetGatewayId",
"Type": "String"
}
],
"nextStep": "attachIgw"
},
{
"name": "attachIgw",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "AttachInternetGateway",
"InternetGatewayId": "{{ createIgw.igwId }}",
"VpcId": "{{ createVpc.vpcId }}"
},
"nextStep": "allocateEip"
},
{
"name": "allocateEip",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "AllocateAddress",
"Domain": "vpc"
},
"outputs": [
{
"Name": "eipAllocationId",
"Selector": "$.AllocationId",
"Type": "String"
}
],
"nextStep": "createNatGw"
},
{
"name": "createNatGw",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateNatGateway",
"AllocationId": "{{ allocateEip.eipAllocationId }}",
"SubnetId": "{{ createPubSubnet.pubSubnetId }}"
},
"outputs":[
{
"Name": "natGwId",
"Selector": "$.NatGateway.NatGatewayId",
"Type": "String"
}
],
"nextStep": "verifyNatGwAvailable"
},
{
"name": "verifyNatGwAvailable",
"action": "aws:waitForAwsResourceProperty",
"timeoutSeconds": 150,
"inputs": {
"Service": "ec2",
"Api": "DescribeNatGateways",
"NatGatewayIds": [
"{{ createNatGw.natGwId }}"
],
"PropertySelector": "$.NatGateways[0].State",
"DesiredValues": [
"available"
]
},
"nextStep": "createNatRoute"
},
{
"name": "createNatRoute",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateRoute",
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": "{{ createNatGw.natGwId }}",
"RouteTableId": "{{ getMainRtb.mainRtbId }}"
},
"nextStep": "createPubRoute"
},
{
"name": "createPubRoute",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateRoute",
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": "{{ createIgw.igwId }}",
"RouteTableId": "{{ createPubRtb.pubRtbId }}"
},
"nextStep": "setPubSubAssoc"
},
{
"name": "setPubSubAssoc",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "AssociateRouteTable",
"RouteTableId": "{{ createPubRtb.pubRtbId }}",
"SubnetId": "{{ createPubSubnet.pubSubnetId }}"
}
},
{
"name": "createDhcpOptions",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateDhcpOptions",
"DhcpConfigurations": [
{
"Key": "domain-name-servers",
"Values": ["10.0.100.50,10.0.101.50"]
},
{
"Key": "domain-name",
"Values": ["sample.com"]
}
]
},
"outputs": [
{
"Name": "dhcpOptionsId",
"Selector": "$.DhcpOptions.DhcpOptionsId",
"Type": "String"
}
],
"nextStep": "createDCSubnet1"
},
{
"name": "createDCSubnet1",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateSubnet",
"CidrBlock": "10.0.100.0/24",
"AvailabilityZone": "us-west-2a",
"VpcId": "{{ createVpc.vpcId }}"
},
"outputs": [
{
"Name": "firstSubnetId",
"Selector": "$.Subnet.SubnetId",
"Type": "String"
}
],
"nextStep": "createDCSubnet2"
},
{
"name": "createDCSubnet2",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateSubnet",
"CidrBlock": "10.0.101.0/24",
"AvailabilityZone": "us-west-2b",
"VpcId": "{{ createVpc.vpcId }}"
},
"outputs": [
{
"Name": "secondSubnetId",
"Selector": "$.Subnet.SubnetId",
"Type": "String"
}
],
"nextStep": "createDCSecGroup"
},
{
"name": "createDCSecGroup",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "CreateSecurityGroup",
"GroupName": "SampleDCSecGroup",
"Description": "Security Group for Example Domain Controllers",
"VpcId": "{{ createVpc.vpcId }}"
},
"outputs": [
{
"Name": "dcSecGroupId",
"Selector": "$.GroupId",
"Type": "String"
}
],
"nextStep": "authIngressDCTraffic"
},
{
"name": "authIngressDCTraffic",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "AuthorizeSecurityGroupIngress",
"GroupId": "{{ createDCSecGroup.dcSecGroupId }}",
"IpPermissions": [
{
"FromPort": -1,
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0",
"Description": "Allow all traffic between Domain Controllers"
}
]
}
]
},
"nextStep": "verifyInstanceProfile"
},
{
"name": "verifyInstanceProfile",
"action": "aws:waitForAwsResourceProperty",
"maxAttempts": 5,
"onFailure": "Abort",
"inputs": {
"Service": "iam",
"Api": "ListInstanceProfilesForRole",
"RoleName": "sampleSSMInstanceRole",
"PropertySelector": "$.InstanceProfiles[0].Arn",
"DesiredValues": [
"{{ createSSMInstanceProfile.instanceProfileArn }}"
]
},
"nextStep": "iamEventualConsistency"
},
{
"name": "iamEventualConsistency",
"action": "aws:sleep",
"inputs": {
"Duration": "PT2M"
},
"nextStep": "launchDC1"
},
{
"name": "launchDC1",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "RunInstances",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"VolumeSize": 50,
"VolumeType": "gp2"
}
},
{
"DeviceName": "xvdf",
"Ebs": {
"DeleteOnTermination": true,
"VolumeSize": 100,
"VolumeType": "gp2"
}
}
],
"IamInstanceProfile": {
"Arn": "{{ createSSMInstanceProfile.instanceProfileArn }}"
},
"ImageId": "{{ getLatestWindowsAmi.amiId }}",
"InstanceType": "t2.micro",
"MaxCount": 1,
"MinCount": 1,
"PrivateIpAddress": "10.0.100.50",
"SecurityGroupIds": [
"{{ createDCSecGroup.dcSecGroupId }}"
],
"SubnetId": "{{ createDCSubnet1.firstSubnetId }}",
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "Name",
"Value": "SampleDC1"
}
]
}
]
},
"outputs": [
{
"Name": "pdcInstanceId",
"Selector": "$.Instances[0].InstanceId",
"Type": "String"
}
],
"nextStep": "launchDC2"
},
{
"name": "launchDC2",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "RunInstances",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"VolumeSize": 50,
"VolumeType": "gp2"
}
},
{
"DeviceName": "xvdf",
"Ebs": {
"DeleteOnTermination": true,
"VolumeSize": 100,
"VolumeType": "gp2"
}
}
],
"IamInstanceProfile": {
"Arn": "{{ createSSMInstanceProfile.instanceProfileArn }}"
},
"ImageId": "{{ getLatestWindowsAmi.amiId }}",
"InstanceType": "t2.micro",
"MaxCount": 1,
"MinCount": 1,
"PrivateIpAddress": "10.0.101.50",
"SecurityGroupIds": [
"{{ createDCSecGroup.dcSecGroupId }}"
],
"SubnetId": "{{ createDCSubnet2.secondSubnetId }}",
"TagSpecifications": [
{
"ResourceType": "instance",
"Tags": [
{
"Key": "Name",
"Value": "SampleDC2"
}
]
}
]
},
"outputs": [
{
"Name": "adcInstanceId",
"Selector": "$.Instances[0].InstanceId",
"Type": "String"
}
],
"nextStep": "verifyDCInstanceState"
},
{
"name": "verifyDCInstanceState",
"action": "aws:waitForAwsResourceProperty",
"inputs": {
"Service": "ec2",
"Api": "DescribeInstanceStatus",
"IncludeAllInstances": true,
"InstanceIds": [
"{{ launchDC1.pdcInstanceId }}",
"{{ launchDC2.adcInstanceId }}"
],
"PropertySelector": "$.InstanceStatuses[0].InstanceState.Name",
"DesiredValues": [
"running"
]
},
"nextStep": "verifyInstancesOnlineSSM"
},
{
"name": "verifyInstancesOnlineSSM",
"action": "aws:waitForAwsResourceProperty",
"timeoutSeconds": 600,
"inputs": {
"Service": "ssm",
"Api": "DescribeInstanceInformation",
"InstanceInformationFilterList": [
{
"key": "InstanceIds",
"valueSet": [
"{{ launchDC1.pdcInstanceId }}",
"{{ launchDC2.adcInstanceId }}"
]
}
],
"PropertySelector": "$.InstanceInformationList[0].PingStatus",
"DesiredValues": [
"Online"
]
},
"nextStep": "installADRoles"
},
{
"name": "installADRoles",
"action": "aws:runCommand",
"inputs": {
"DocumentName": "AWS-RunPowerShellScript",
"InstanceIds": [
"{{ launchDC1.pdcInstanceId }}",
"{{ launchDC2.adcInstanceId }}"
],
"Parameters": {
"commands": [
"try {",
" Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools",
"}",
"catch {",
" Write-Error \"Failed to install ADDS Role.\"",
"}"
]
}
},
"nextStep": "setAdminPassword"
},
{
"name": "setAdminPassword",
"action": "aws:runCommand",
"inputs": {
"DocumentName": "AWS-RunPowerShellScript",
"InstanceIds": [
"{{ launchDC1.pdcInstanceId }}"
],
"Parameters": {
"commands": [
"net user Administrator \"sampleAdminPass123!\""
]
}
},
"nextStep": "createForest"
},
{
"name": "createForest",
"action": "aws:runCommand",
"inputs": {
"DocumentName": "AWS-RunPowerShellScript",
"InstanceIds": [
"{{ launchDC1.pdcInstanceId }}"
],
"Parameters": {
"commands": [
"$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force",
"try {",
" Install-ADDSForest -DomainName \"sample.com\" -DomainMode 6 -ForestMode 6 -InstallDNS -DatabasePath \"D:\\NTDS\" -SysvolPath \"D:\\SYSVOL\" -SafeModeAdministratorPassword $dsrmPass -Force",
"}",
"catch {",
" Write-Error $_",
"}",
"try {",
" Add-DnsServerForwarder -IPAddress \"10.0.100.2\"",
"}",
"catch {",
" Write-Error $_",
"}"
]
}
},
"nextStep": "associateDhcpOptions"
},
{
"name": "associateDhcpOptions",
"action": "aws:executeAwsApi",
"onFailure": "Abort",
"inputs": {
"Service": "ec2",
"Api": "AssociateDhcpOptions",
"DhcpOptionsId": "{{ createDhcpOptions.dhcpOptionsId }}",
"VpcId": "{{ createVpc.vpcId }}"
},
"nextStep": "waitForADServices"
},
{
"name": "waitForADServices",
"action": "aws:sleep",
"inputs": {
"Duration": "PT1M"
},
"nextStep": "promoteADC"
},
{
"name": "promoteADC",
"action": "aws:runCommand",
"inputs": {
"DocumentName": "AWS-RunPowerShellScript",
"InstanceIds": [
"{{ launchDC2.adcInstanceId }}"
],
"Parameters": {
"commands": [
"ipconfig /renew",
"$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force",
"$domAdminUser = \"sample\\Administrator\"",
"$domAdminPass = \"sampleAdminPass123!\" | ConvertTo-SecureString -asPlainText -Force",
"$domAdminCred = New-Object System.Management.Automation.PSCredential($domAdminUser,$domAdminPass)",
"try {",
" Install-ADDSDomainController -DomainName \"sample.com\" -InstallDNS -DatabasePath \"D:\\NTDS\" -SysvolPath \"D:\\SYSVOL\" -SafeModeAdministratorPassword $dsrmPass -Credential $domAdminCred -Force",
"}",
"catch {",
" Write-Error $_",
"}"
]
}
}
}
]
}
```
------
---
title: Cancel service orders
description: You can cancel a service order or a service order line from the service order itself, and you can cancel multiple service orders by running a periodic job.
author: kamaybac
ms.date: 05/01/2018
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: SMAServiceOrderTable
audience: Application User
ms.reviewer: kamaybac
ms.custom: ''
ms.assetid: ''
ms.search.region: Global
ms.author: kamaybac
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: cca6c34bb43702e2c33935a73dc24f1a630065c0
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: sv-SE
ms.lasthandoff: 09/29/2021
ms.locfileid: "7571531"
---
# <a name="cancel-service-orders"></a>Avbryt serviceorder
[!include [banner](../includes/banner.md)]
Du kan avbryta en serviceorder eller serviceorderrad från själva serviceordern, och du kan avbryta flera serviceorder genom att köra ett periodiskt jobb.
> [!NOTE]
> <P>Du kan inte avbryta en serviceorder om detta inte är tillåtet i serviceorderns aktuella fas, om serviceordern innehåller artikelbehov eller om serviceordern redan har bokförts.</P>
## <a name="cancel-a-service-order-in-the-service-orders-form"></a>Avbryta en serviceorder i formuläret för serviceorder
1. Klicka på noden **Servicehantering** \> **Vanligt** \> **Serviceorder** \> **Serviceorder**. Välj serviceorder och klicka sedan på **Avbryt order** i åtgärdsfönstret.
## <a name="cancel-a-service-order-line"></a>Avbryta en serviceorderrad
1. Klicka på noden **Servicehantering** \> **Vanligt** \> **Serviceorder** \> **Serviceorder**. Dubbelklicka på den serviceorder som innehåller raden du vill annullera.
2. Välj serviceorderraden som du vill avbryta och klicka sedan på **Avbryt orderrad** för att ändra status för raden till **Har avbrutits**.
> [!TIP]
> <P>Om du vill återkalla annulleringen av en serviceorderrad och ändra tillbaka status till <STRONG>Skapad</STRONG>, klickar du på <STRONG>Återkalla avbrott</STRONG>.</P>
## <a name="cancel-multiple-service-orders"></a>Avbryta flera serviceorder
1. Klicka på **Servicehantering**\>**Periodisk**\>**Serviceorder**\>**Avbryt serviceorder**.
2. Klicka på **Välj**.
3. I formuläret **Förfrågan** i kolumnen **Kriterier** väljer du de serviceorder som du vill avbryta.
4. Klicka på **OK** när du vill stänga formuläret **Förfrågan**.
5. Markera kryssrutan **Visa informationslogg** om du vill skapa en informationslogg som anger avbrutna serviceorder.
6. Markera kryssrutan **Återkalla avbrott** om du vill återkalla statusen **Har avbrutits** för en serviceorder.
7. Klicka på **OK**.
Markerade serviceorder kommer då antingen att annulleras eller få sin status **Har avbrutits** ändrad till **Pågår**.
> [!NOTE]
> <P>Om du markerar kryssrutan <STRONG>Återkalla avbrott</STRONG> kommer serviceorder med statusen <STRONG>Har avbrutits</STRONG> att återkallas och inga serviceorder med statusen <STRONG>Pågår</STRONG> avbryts.</P>
[!INCLUDE[footer-include](../../includes/footer-banner.md)] | 38.7 | 215 | 0.757752 | swe_Latn | 0.970492 |
## Changelog: eth-gas-reporter
# 0.2.21 / 2021-02-16
- Fix missing truffle migration deployments data (https://github.com/cgewecke/eth-gas-reporter/issues/240)
- Upgrade solidity-parser/parser to 0.11.1 (https://github.com/cgewecke/eth-gas-reporter/issues/239)
# 0.2.20 / 2020-12-01
- Add support for remote contracts data pre-loading (hardhat-gas-reporter feature)
# 0.2.19 / 2020-10-29
- Delegate contract loading/parsing to artifactor & make optional (#227)
# 0.2.18 / 2020-10-13
- Support multiple codechecks reports per CI run
- Add CI error threshold options: maxMethodDiff, maxDeploymentDiff
- Add async collection methods for BuidlerEVM
- Update solidity-parser/parser to 0.8.0 (contribution: @vicnaum)
- Update dev deps / use Node 12 in CI
# 0.2.17 / 2020-04-13
- Use @solidity-parser/parser for better solc 0.6.x parsing
- Upgrade Mocha to ^7.1.1 (to remove minimist vuln warning)
- Stop crashing when parser or ABI Encoder fails
- Update @ethersproject/abi to ^5.0.0-beta.146 (and unpin)
# 0.2.16 / 2020-03-18
- Use new coinmarketcap data API / make api key configurable. Old (un-gated) API has been taken offline.
- Fix crashing when artifact transactionHash is stale after deleting previously migrated contracts
# 0.2.15 / 2020-02-12
- Use parser-diligence to parse Solidity 0.6.x
- Add option to show full method signature
# 0.2.14 / 2019-12-01
- Add ABIEncoderV2 support by using @ethersproject/abi for ABI processing
# 0.2.12 / 2019-09-30
- Add try/catch block for codechecks.getValue so it doesn't throw when server is down.
- Pin parser-antlr to 0.4.7
# 0.2.11 / 2019-08-27
- Fix syntax err on unresolved provider error msg (contribution: gnidan)
- Add unlock-protocol funding ymls
- Update abi-decoder deps / web3
# 0.2.10 / 2019-08-08
- Small codechecks table formatting improvements
- Fix syntax error when codechecks errors on missing gas report
# 0.2.9 / 2019-07-30
- Optimize post-transaction data collection (reduce # of calls & cache addresses)
- Catch codechecks server errors
# 0.2.8 / 2019-07-27
- Render codechecks CI table as markdown
# 0.2.7 / 2019-07-27
- Fix block limit basis bug
- Fix bug affecting Truffle < v5.0.10 (crash because metadata not defined)
- Add percentage diff columns to codechecks ci table / make table narrower
- Slightly randomize gas consumption in tests
- Begin running codechecks in CI for own tests
# 0.2.6 / 2019-07-16
- Stopped using npm-shrinkwrap, because it seemed to correlate w/ weird installation problems
- Fix bug which caused outputFile option to crash due to misnamed variable
# 0.2.5 / 2019-07-15
- Upgrade lodash because of vulnerability report (contribution @ppoliani)
# 0.2.4 / 2019-07-08
- Update abi-decoder to 2.0.1 to fix npm installation bug with bignumber.js fork
# 0.2.3 / 2019-07-04
- Bug fix to invoke user defined artifactType methods correctly
# 0.2.2 / 2019-07-02
- Add documentation about codechecks, buidler, advanced use cases.
- Add artifactType option as a user defined function so people use with any compilation artifacts.
- Add codechecks integration
- Add buidler plugin integration
- Remove shelljs due to GH security warning, execute ls command manually
# 0.2.1 / 2019-06-19
- Upgrade mocha from 4.1.0 to 5.2.0
- Report solc version and settings info
- Add EtherRouter method resolver logic (as option and example)
- Add proxyResolver option & support discovery of delegated method calls identity
- Add draft of 0x artifact handler
- Add url option for non-truffle, non-buidler use
- Add buidler truffle-v5 plugin support (preface to gas-reporter plugin in next release)
- Completely reorganize and refactor
# 0.2.0 / 2019-05-07
- Add E2E tests in CI
- Restore logic that matches tx signatures to contracts as a fallback when it's impossible to
be certain which contract was called (contribution @ItsNickBarry)
- Fix bug which crashed reporter when migrations linked un-deployed contracts
# 0.1.12 / 2018-09-14
- Allow contracts to share method signatures (contribution @wighawag)
- Collect gas data for Migrations deployments (contribution @wighawag)
- Add ability allow to specify a different src folder for contracts (contribution @wighawag)
- Handle in-memory provider error correctly / use spec reporter if sync calls impossible (contribution @wighawag)
- Default to only showing invoked methods in report
# 0.1.10 / 2018-07-18
- Update mocha from 3.5.3 to 4.10.0 (contribution ldub)
- Update truffle to truffle@next to fix mocha issues (contribution ldub)
- Modify binary checking to allow very long bytecodes / large contracts (contribution ben-kaufman)
# 0.1.9 / 2018-06-27
- Fix bug that caused test gas to include before hook gas consumption totals
# 0.1.8 / 2018-06-26
- Add showTimeSpent option to also show how long each test took (contribution @ldub)
- Update cli-table2 to cli-table3 (contribution @DanielRuf)
# 0.1.7 / 2018-05-27
- Support reStructured text code-block output
# 0.1.5 / 2018-05-15
- Support multi-contract files by parsing files w/ solidity-parser-antlr
# 0.1.4 / 2018-05-14
- Try to work around web3 websocket provider by attempting connection over http://.
`requestSync` doesn't support this otherwise.
- Detect and identify binaries with library links, add to the deployments table
- Add scripts to run geth in CI (not enabled)
# 0.1.2 / 2018-04-20
- Make compatible with Web 1.0 by creating own sync RPC wrapper. (Contribution: @area)
# 0.1.1 / 2017-12-19
- Use mochas own reporter options instead of .ethgas (still supported)
- Add onlyCalledMethods option
- Add outputFile option
- Add noColors option
# 0.1.0 / 2017-12-10
- Require config gas price to be expressed in gwei (breaking change)
- Use eth gas station API for gas price (it's more accurate)
- Fix bug that caused table not to print if any test failed.
# 0.0.15 / 2017-12-09
- Fix ascii colorization bug that caused crashes during table generation. (Use colors/safe).
# 0.0.14 / 2017-11-30
- Fix bug that caused the error report at the end of test run not to be printed.
# 0.0.13 / 2017-11-15
- Filter throws by receipt.status if possible
- Use testrpc 6.0.2 in tests, add view and pure methods to tests.
# 0.0.12 / 2017-10-28
- Add config. Add gasPrice and currency code options
- Improve table clarity
- Derive block.gasLimit from rpc
# 0.0.11 / 2017-10-23
- Add Travis CI
- Fix bug that crashed reporter when truffle could not find a required file
# 0.0.10 / 2017-10-22
- Add examples
# 0.0.10 / 2017-10-22
- Filter deployment calls that throw from the stats
# 0.0.8 / 2017-10-22
- Filter method calls that throw from the stats
- Add deployment stats
- Add number of calls column
# 0.0.6 / 2017-10-14
- Stop showing zero gas usage in mocha output
- Show currency rates and gwei gas price rates in table header
- Alphabetize table
- Fix bug caused by unused methods reporting NaN
- Fix failure to round avg gas use in table
- Update dev deps to truffle4 beta
# 0.0.5 / 2017-10-12
- Thanks
- Update image
- Finish table formatting
- Add some variable gas consumption contracts
- Working table
- Get map to work in the runner
- Get gasStats file and percentage of limit working
- Test using npm install
- Add gasPrice data fetch, config logic
- More tests
- Abi encoding map.
# 0.0.4 / 2017-10-01
- Add visual inspection test
- Fix bug that counted gas consumed in the test hooks
---
title: How to provide secure remote access to on-premises apps
description: Describes how to use Azure AD Application Proxy to provide secure remote access to on-premises apps.
services: active-directory
documentationcenter: ''
author: barbkess
manager: mtillman
ms.service: active-directory
ms.component: app-mgmt
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 01/31/2018
ms.author: barbkess
ms.reviewer: harshja
ms.custom: it-pro
ms.openlocfilehash: c5f706e6e9402bfc404c370a0d1a45fc07656a9e
ms.sourcegitcommit: e14229bb94d61172046335972cfb1a708c8a97a5
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 05/14/2018
---
# <a name="how-to-provide-secure-remote-access-to-on-premises-applications"></a>온-프레미스 응용 프로그램에 보안된 원격 액세스를 제공하는 방법
요즈음 직원은 어디서나 언제든지 어느 장치에서나 생산성을 높이기를 원합니다. 태블릿, 휴대폰 또는 랩톱을 막론하고 자신의 장치에서 일하기를 원합니다. 그리고 해당하는 모든 응용 프로그램인 클라우드의 SaaS 앱 및 회사 앱 온-프레미스 모두에도 액세스할 수 있다고 예상합니다. 온-프레미스 응용 프로그램에 대한 액세스를 제공하려면 일반적으로 가상 사설망(VPN) 또는 완충 영역(DMZ)이 필요했습니다. 이러한 솔루션은 복잡하고 안전하게 만들기도 어려울 뿐만 아니라 설정과 관리에도 비용이 많이 듭니다.
더 나은 방법이 있습니다!
모바일 중심, 클라우드 중심의 최신 인력에게는 최신 원격 액세스 솔루션을 필요합니다. Azure AD 응용 프로그램 프록시는 Azure Active Directory의 기능이며 원격 액세스 서비스를 제공합니다. 즉, 배포, 사용 및 관리하기 쉽습니다.
[!INCLUDE [identity](../../../includes/azure-ad-licenses.md)]
## <a name="what-is-azure-active-directory-application-proxy"></a>Azure Active Directory 응용 프로그램 프록시란?
Azure AD 응용 프로그램 프록시는 온-프레미스에 호스트된 웹 응용 프로그램에 대한 SSO(Single Sign-On) 및 보안된 원격 액세스를 제공합니다. 게시하려는 일부 앱은 SharePoint 사이트, Outlook Web Access 또는 사용자가 가지고 있는 다른 LOB 웹 응용 프로그램을 포함합니다. 이러한 온-프레미스 웹 응용 프로그램은 Azure AD, 동일한 ID 및 O365에서 사용되는 제어 플랫폼과 통합됩니다. 최종 사용자는 O365 및 Azure AD와 통합된 다른 SaaS 앱에 액세스할 때와 같은 방식으로 온-프레미스 응용 프로그램에 액세스할 수 있습니다. 사용자에게 이 솔루션을 제공하기 위해 네트워크 인프라를 변경하거나 VPN을 요구할 필요가 없습니다.
## <a name="why-is-application-proxy-a-better-solution"></a>응용 프로그램 프록시가 더 나은 솔루션인 이유
Azure AD 응용 프로그램 프록시는 모든 온-프레미스 응용 프로그램에 대해 단순하고 보안되고 비용 효율적인 원격 액세스 솔루션을 제공합니다.
Azure AD 응용 프로그램 프록시는:
* **간단**
* 응용 프로그램 프록시를 사용하도록 응용 프로그램을 변경하거나 업데이트할 필요가 없습니다.
* 사용자는 일관된 인증 환경을 제공 받습니다. MyApps 포털을 사용하여 클라우드 및 앱 온-프레미스의 SaaS 앱에 Single Sign-On을 가져올 수 있습니다.
* **보안**
* Azure AD 응용 프로그램 프록시를 사용하여 앱을 게시하면 Azure의 다양한 권한 부여 제어 및 보안 분석을 이용할 수 있습니다. 조건부 액세스 및 2단계 인증과 같은 클라우드 규모 보안 및 Azure 보안 기능을 제공 받습니다.
* 사용자에게 원격 액세스를 제공하도록 방화벽을 통해 모든 인바운드 연결을 열 필요가 없습니다.
* **비용 효율성**
* 응용 프로그램 프록시는 클라우드에서 작업하므로 시간과 비용을 절약할 수 있습니다. 온-프레미스 솔루션을 사용하려면 일반적으로 DMZ, 에지 서버 또는 기타 복잡한 인프라를 설정하고 유지 관리해야 합니다.
## <a name="what-kind-of-applications-work-with-application-proxy"></a>어떤 종류의 응용 프로그램이 응용 프로그램 프록시에서 작동합니까?
Azure AD 응용 프로그램 프록시를 사용하면 다양한 유형의 내부 응용 프로그램에 액세스할 수 있습니다.
* 인증을 위해 [Windows 통합 인증](application-proxy-configure-single-sign-on-with-kcd.md)을 사용하는 웹 응용 프로그램
* 폼 기반 또는 [헤더 기반](application-proxy-configure-single-sign-on-with-ping-access.md) 액세스를 사용하는 웹 응용 프로그램
* 여러 장치에서 다양한 응용 프로그램을 표시하려는 웹 API
* [원격 데스크톱 게이트웨이](application-proxy-integrate-with-remote-desktop-services.md) 뒤에서 호스트되는 응용 프로그램
* ADAL(Active Directory 인증 라이브러리)과 통합되는 리치 클라이언트 앱
## <a name="how-does-application-proxy-work"></a>응용 프로그램 프록시는 어떻게 작동합니까?
응용 프로그램 프록시가 작동하도록 구성해야 하는 두 가지 구성 요소는 커넥터 및 외부 끝점입니다.
커넥터는 네트워크 내부의 Windows Server에 상주하는 간단한 에이전트입니다. 커넥터는 클라우드의 응용 프로그램 프록시 서비스에서 응용 프로그램 온-프레미스로 트래픽 흐름을 지원합니다. 아웃바운드 연결만 사용하므로 인바운드 포트를 열거나 DMZ에 항목을 저장할 필요가 없습니다. 커넥터는 상태를 저장하지 않으며 필요에 따라 클라우드에서 정보를 가져옵니다. 커넥터에 대한 정보 및 부하 분산 및 인증하는 방법은 [Azure AD 응용 프로그램 프록시 커넥터 이해](application-proxy-connectors.md)를 참조하세요.
외부 끝점은 사용자가 네트워크 외부에서 응용 프로그램에 도달하는 방법입니다. 결정하는 외부 URL로 직접 이동하거나 MyApps 포털을 통해 응용 프로그램에 액세스할 수 있습니다. 사용자가 이러한 끝점 중 하나로 이동하면 Azure AD에서 인증한 다음 커넥터를 통해 온-프레미스 응용 프로그램에 라우팅됩니다.

1. 사용자는 응용 프로그램 프록시 서비스를 통해 응용 프로그램에 액세스하고 인증을 위해 Azure AD 로그인 페이지로 전달됩니다.
2. 성공적인 로그인 후에 토큰을 생성하고 클라이언트 장치에 보냅니다.
3. 클라이언트는 토큰에서 UPN(사용자 주체 이름) 및 SPN(보안 주체 이름)을 검색한 다음 응용 프로그램 프록시 커넥터에 요청을 전달하는 응용 프로그램 프록시 서비스에 토큰을 보냅니다.
4. Single Sign-On을 구성한 경우 커넥터는 사용자를 대신하는 데 필요한 모든 추가 인증을 수행합니다.
5. 커넥터는 온-프레미스 응용 프로그램에 요청을 보냅니다.
6. 응답은 응용 프로그램 프록시 서비스 및 커넥터를 통해 사용자에게 전송됩니다.
### <a name="single-sign-on"></a>SSO(Single sign-on)
Azure AD 응용 프로그램 프록시는 Windows 통합 인증(IWA) 또는 클레임 인식 응용 프로그램을 사용하는 응용 프로그램에 SSO(Single Sign-On)를 제공합니다. 응용 프로그램에서 IWA를 사용하는 경우 응용 프로그램 프록시는 SSO를 제공하는 Kerberos 제한 위임을 사용하여 사용자를 가장합니다. Azure Active Directory를 신뢰하는 클레임 인식 응용 프로그램이 있는 경우에는 사용자가 이미 Azure AD에 의해 인증되었으므로 SSO가 작동합니다.
Kerberos에 대한 자세한 내용은 [KCD(Kerberos Constrained Delegation)에 대해 확인하려는 모든 정보](https://blogs.technet.microsoft.com/applicationproxyblog/2015/09/21/all-you-want-to-know-about-kerberos-constrained-delegation-kcd)를 참조하세요.
### <a name="managing-apps"></a>앱 관리
응용 프로그램 프록시를 사용하여 앱이 게시되면 Azure Portal에서 다른 엔터프라이즈 앱처럼 관리할 수 있습니다. 조건부 액세스 및 2단계 인증과 같은 Azure Active Directory 보안 기능을 사용하고, 사용자 권한을 제어하고, 앱에 대한 브랜딩을 사용자 지정할 수 있습니다.
## <a name="get-started"></a>시작하기
응용 프로그램 프록시를 구성하기 전에 지원되는 [Azure Active Directory 버전](https://azure.microsoft.com/pricing/details/active-directory/) 및 전역 관리자 권한이 있는 Azure AD 디렉터리가 있는지 확인합니다.
두 단계에서 응용 프로그램 프록시 시작:
1. [응용 프로그램 프록시를 사용하도록 설정하고 커넥터 구성](application-proxy-enable.md)
2. [응용 프로그램 게시](application-proxy-publish-azure-portal.md) - 쉽고 빠른 마법사를 사용하여 온-프레미스 앱을 게시하고 원격으로 액세스할 수 있도록 합니다.
## <a name="whats-next"></a>다음 작업
첫 번째 앱을 게시하면 응용 프로그램 프록시를 사용하여 수행할 수 있는 작업은 많습니다.
* [Single Sign-On 사용](application-proxy-configure-single-sign-on-with-kcd.md)
* [고유한 도메인 이름을 사용하여 응용 프로그램 게시](application-proxy-configure-custom-domain.md)
* [Azure AD 응용 프로그램 프록시 커넥터에 대해 알아보기](application-proxy-connectors.md)
* [기존 온-프레미스 프록시 서버 작업](application-proxy-configure-connectors-with-proxy-servers.md)
* [사용자 지정 홈페이지 설정](application-proxy-configure-custom-home-page.md)
최신 뉴스 및 업데이트는 [응용 프로그램 프록시 블로그](http://blogs.technet.com/b/applicationproxyblog/)
# The World Framework
This project describes relationships between objects not defined in Visual Studio. A person lives in a town in a state and works for an organization at a place... That sort of thing. Use this in video games, automation programs, anywhere it can benefit you.
Previously called the tj-framework
---
title: invertKeyValues
tags: object,advanced
firstSeen: 2018-01-01T17:33:46+02:00
lastUpdated: 2020-10-20T23:02:01+03:00
---
Inverts the key-value pairs of an object, without mutating it.
- Use `Object.keys()` and `Array.prototype.reduce()` to invert the key-value pairs of an object and apply the function provided (if any).
- Omit the second argument, `fn`, to get the inverted keys without applying a function to them.
- The corresponding inverted value of each inverted key is an array of keys responsible for generating the inverted value. If a function is supplied, it is applied to each inverted key.
```js
const invertKeyValues = (obj, fn) =>
Object.keys(obj).reduce((acc, key) => {
const val = fn ? fn(obj[key]) : obj[key];
acc[val] = acc[val] || [];
acc[val].push(key);
return acc;
}, {});
```
```js
invertKeyValues({ a: 1, b: 2, c: 1 }); // { 1: [ 'a', 'c' ], 2: [ 'b' ] }
invertKeyValues({ a: 1, b: 2, c: 1 }, value => 'group' + value);
// { group1: [ 'a', 'c' ], group2: [ 'b' ] }
```
## Packages from CDNs
Because Deno supports remote HTTP modules, and content delivery networks (CDNs)
can be powerful tools to transform code, the combination allows an easy way to
access code in the npm registry via Deno, usually in a way that works with Deno
without any further actions, and often enriched with TypeScript types. In this
section we will explore that in detail.
### What about `deno.land/x/`?
The [`deno.land/x/`](https://deno.land/x/) is a public registry for code,
hopefully code written specifically for Deno. It is a public registry though and
all it does is "redirect" Deno to the location where the code exists. It doesn't
transform the code in any way. There is a lot of great code on the registry, but
at the same time, there is some code that just isn't well maintained (or doesn't
work at all). If you are familiar with the npm registry, you know that as well,
there are varying degrees of quality.
Because it simply serves up the original published source code, it doesn't
really help when trying to use code that didn't specifically consider Deno when
authored.
### Deno "friendly" CDNs
Deno-friendly content delivery networks (CDNs) not only host packages from npm,
they provide them in a way that maximizes their integration with Deno. They
directly address some of the challenges in consuming code written for Node:
- They provide packages and modules in the ES Module format, irrespective of how
they are published on npm.
- They resolve all the dependencies as the modules are served, meaning that all
the Node specific module resolution logic is handled by the CDN.
- Often, they inform Deno of type definitions for a package, meaning that Deno
can use them to type check your code and provide a better development
experience.
- The CDNs also "polyfill" the built-in Node modules, making a lot of code that
leverages the built-in Node modules _just work_.
- The CDNs deal with all the semver matching for packages that a package manager
like `npm` would be required for a Node application, meaning you as a
developer can express your 3rd party dependency versioning as part of the URL
you use to import the package.
#### esm.sh
[esm.sh](https://esm.sh/) is a CDN that was specifically designed for Deno,
though addressing the concerns for Deno also makes it a general purpose CDN for
accessing npm packages as ES Module bundles. esm.sh uses
[esbuild](https://esbuild.github.io/) to take an arbitrary npm package and
ensure that it is consumable as an ES Module. In many cases you can just import
the npm package into your Deno application:
```tsx
import React from "https://esm.sh/react";
export default class A extends React.Component {
render() {
return <div></div>;
}
}
```
esm.sh supports the use of both specific versions of packages, as well as
[semver](https://semver.npmjs.com/) versions of packages, so you can express
your dependency in a similar way you would in a `package.json` file when you
import it. For example, to get a specific version of a package:
```tsx
import React from "https://esm.sh/[email protected]";
```
Or to get the latest patch release of a minor release:
```tsx
import React from "https://esm.sh/react@~16.13.0";
```
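To make the range syntax concrete, here is a rough, illustrative sketch of what a tilde range such as `~16.13.0` accepts — this helper is not part of esm.sh, just a simplification of the semver rules:

```javascript
// Simplified semver check: ~X.Y.Z accepts any X.Y.* version with patch >= Z.
function matchesTildeRange(range, version) {
  const [rMajor, rMinor, rPatch] = range.replace(/^~/, "").split(".").map(Number);
  const [vMajor, vMinor, vPatch] = version.split(".").map(Number);
  return vMajor === rMajor && vMinor === rMinor && vPatch >= rPatch;
}

console.log(matchesTildeRange("~16.13.0", "16.13.1")); // true
console.log(matchesTildeRange("~16.13.0", "16.14.0")); // false
```

Real semver ranges have more forms (carets, ranges, prerelease tags), but this is the matching the CDN performs for you on each request.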
esm.sh uses the `std/node` polyfills to replace the built-in modules in Node,
meaning that code that uses those built-in modules will have the same
limitations and caveats as those modules in `std/node`.
esm.sh also automatically sets a header which Deno recognizes that allows Deno
to be able to retrieve type definitions for the package/module. See
[Using `X-TypeScript-Types` header](../typescript/types.md#using-x-typescript-types-header)
in this manual for more details on how this works.
The CDN is also a good choice for people who develop in mainland China, as its
hosting is specifically designed to work with "the great firewall of China".
esm.sh also provides information on self-hosting the CDN.
Check out the [esm.sh homepage](https://esm.sh/) for more detailed information
on how the CDN can be used and what features it has.
#### Skypack
[Skypack.dev](https://www.skypack.dev/) is designed to make development overall
easier by not requiring packages to be installed locally, even for Node
development, and to make it easy to create web and Deno applications that
leverage code from the npm registry.
Skypack has a great way of discovering packages in the npm registry, by
providing a lot of contextual information about the package, as well as a
"scoring" system to try to help determine if the package follows best-practices.
Skypack detects Deno's user agent when requests for modules are received and
ensures the code served up is tailored to meet the needs of Deno. The easiest
way to load a package is to use the
[lookup URL](https://docs.skypack.dev/skypack-cdn/api-reference/lookup-urls) for
the package:
```tsx
import React from "https://cdn.skypack.dev/react";
export default class A extends React.Component {
render() {
return <div></div>;
}
}
```
Lookup URLs can also contain the [semver](https://semver.npmjs.com/) version in
the URL:
```tsx
import React from "https://cdn.skypack.dev/react@~16.13.0";
```
By default, Skypack does not set the types header on packages. In order to have
the types header set, which is automatically recognized by Deno, you have to
append `?dts` to the URL for that package:
```ts
import { pathToRegexp } from "https://cdn.skypack.dev/path-to-regexp?dts";
const re = pathToRegexp("/path/:id");
```
See
[Using `X-TypeScript-Types` header](../typescript/types.md#using-x-typescript-types-header)
in this manual for more details on how this works.
Skypack docs have a
[specific page on usage with Deno](https://docs.skypack.dev/skypack-cdn/code/deno)
for more information.
### Other CDNs
There are a couple of other CDNs worth mentioning.
#### UNPKG
[UNPKG](https://unpkg.com/) is the most well known CDN for npm packages. For
packages that include an ES Module distribution for things like the browsers,
many of them can be used directly off of UNPKG. That being said, everything
available on UNPKG is available on more Deno friendly CDNs.
#### JSPM
The [jspm.io](https://jspm.io) CDN is specifically designed to provide npm and
other registry packages as ES Modules in a way that works well with import maps.
While it doesn't currently cater to Deno, the fact that Deno can utilize import
maps allows you to use the [JSPM.io generator](https://generator.jspm.io/) to
generate an import map of all the packages you want to use and have them served
up from the CDN.
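For example, a generated import map might look like the following sketch — the exact URL here is illustrative, the generator emits the real, version-pinned one — which you could then pass to Deno via its import-map support:

```json
{
  "imports": {
    "react": "https://ga.jspm.io/npm:[email protected]/index.js"
  }
}
```

With the import map in place, your code can use the bare specifier `import React from "react";` and Deno resolves it to the CDN URL.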
### Considerations
While CDNs can make it easy to allow Deno to consume packages and modules from
the npm registry, there can still be some things to consider:
- Deno does not (and will not) support Node plugins. If the package requires a
native plugin, it won't work under Deno.
- Dependency management can always be a bit of a challenge and a CDN can make it
a bit more obfuscated what dependencies are there. You can always use
`deno info` with the module or URL to get a full breakdown of how Deno
resolves all the code.
- While the Deno friendly CDNs try their best to serve up types with the code
  for consumption with Deno, lots of types for packages conflict with other
  packages and/or don't consider Deno, which means you can often get strange
  diagnostic messages when type checking code imported from these CDNs, though
  skipping type checking will result in the code working perfectly fine. This is
  a fairly complex topic and is covered in the
  [Types and type declarations](../typescript/types.md) section of the manual.
---
title: Profiling command line - Launch a .NET Framework client app, collect memory data
description: Learn how to use the Visual Studio Profiling Tools command-line tools to launch a stand-alone .NET Framework application and collect memory activity data.
ms.custom: SEO-VS-2020
ms.date: 11/04/2016
ms.topic: how-to
ms.assetid: 3bc53041-91b7-4ad0-8413-f8bf2c4b3f5e
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-debug
monikerRange: vs-2017
ms.workload:
- dotnet
ms.openlocfilehash: 068bb1cca204e2202f5ed231c07e81062beab44f
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 08/13/2021
ms.locfileid: "122033453"
---
# <a name="how-to-launch-a-stand-alone-net-framework-application-with-the-profiler-to-collect-memory-data-by-using-the-command-line"></a>How to: Launch a stand-alone .NET Framework application with the profiler to collect memory data by using the command line

This topic describes how to use the [!INCLUDE[vsprvs](../code-quality/includes/vsprvs_md.md)] Profiling Tools command-line tools to start a stand-alone .NET Framework application and collect memory data.

A profiling session has three parts:

- Starting the application by using the profiler.
- Collecting profiling data.
- Ending the profiling session.

> [!NOTE]
> To get the path of the profiling tools, see [Specify the path to command-line tools](../profiling/specifying-the-path-to-profiling-tools-command-line-tools.md). On 64-bit computers, both 64-bit and 32-bit versions of the tools are available. To use the profiler command-line tools, you must add the tools path to the PATH environment variable of the Command Prompt window, or add it to the command itself.

## <a name="start-the-application-with-the-profiler"></a>Start the application with the profiler

To start a target application by using the profiler, use the **/start** and **/launch** options of *VSPerfCmd.exe* to initialize the profiler and start the application. You can specify **/start** and **/launch** on a single command line.

You can also add the **/globaloff** option to pause data collection at the start of the target application. You then use **/globalon** to begin collecting data.

#### <a name="to-start-an-application-by-using-the-profiler"></a>To start an application by using the profiler

1. Open a Command Prompt window.

2. Start the profiler. Type:

    **VSPerfCmd /start:sample /output:** `OutputFile` [`Options`]

    - The [/start](../profiling/start.md)**:sample** option initializes the profiler.

    - The [/output](../profiling/output.md)**:**`OutputFile` option is required with **/start**. `OutputFile` specifies the name and location of the profiling data (.vsp) file.

    You can use any of the following options with the **/start:sample** option.

    | Option | Description |
    | - | - |
    | [/wincounter:](../profiling/wincounter.md) `WinCounterPath` | Specifies a Windows performance counter to be collected during profiling. |
    | [/automark:](../profiling/automark.md) `Interval` | Use with **/wincounter** only. Specifies the number of milliseconds between Windows performance counter collection events. The default is 500 ms. |

3. Start the target application. Type:

    **VSPerfCmd** [/launch:](../profiling/launch.md) `appName` **/gc:**{**allocation**|**lifetime**}[ `Options` ]

    - The [/gc](../profiling/gc-vsperfcmd.md) `Keyword` option is required to collect .NET Framework memory data. The keyword parameter specifies whether to collect memory allocation data only, or both memory allocation and object lifetime data.

        |Keyword|Description|
        |-------------|-----------------|
        |**allocation**|Collect memory allocation data only.|
        |**lifetime**|Collect both memory allocation and object lifetime data.|

    You can use any of the following options with the **/launch** option.

    |Option|Description|
    |------------|-----------------|
    |[/args:](../profiling/args.md) `Arguments`|Specifies a string that contains the command-line arguments to be passed to the target application.|
    |[/console](../profiling/console.md)|Starts the target command-line application in a separate window.|
    |[/events](../profiling/events-vsperfcmd.md) **:**`Config`|Specifies an Event Tracing for Windows (ETW) event to be collected during profiling. ETW events are collected in a separate (.etl) file.|
    |[/targetclr](../profiling/targetclr.md) **:**`Version`|Specifies the version of the common language runtime (CLR) to profile when more than one version of the runtime is loaded in an application.|
## <a name="control-data-collection"></a>Control data collection

While the target application is running, you can control data collection by starting and stopping the writing of data to the file with *VSPerfCmd.exe* options. Controlling data collection enables you to collect data for a specific part of program execution, such as starting or shutting down the application.

#### <a name="to-start-and-stop-data-collection"></a>To start and stop data collection

- The following pairs of options start and stop data collection. Specify each option on a separate command line. You can turn data collection on and off multiple times.

    |Option|Description|
    |------------|-----------------|
    |[/globalon /globaloff](../profiling/globalon-and-globaloff.md)|Starts (**/globalon**) or stops (**/globaloff**) data collection for all processes.|
    |[/processon](../profiling/processon-and-processoff.md) **:** `PID` [/processoff:](../profiling/processon-and-processoff.md) `PID`|Starts (**/processon**) or stops (**/processoff**) data collection for the process specified by the process ID (`PID`).|
    |[/attach](../profiling/attach.md) **:** `PID` [/detach](../profiling/detach.md)|**/attach** starts data collection for the process specified by `PID` (the process ID). **/detach** stops data collection for all processes.|

- You can also use the *VSPerfCmd.exe* [/mark](../profiling/mark.md) option to insert a profiling mark into the data file. The **/mark** command adds an identifier, a timestamp, and an optional user-defined text string. Marks can be used to filter the data.
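Putting these steps together, a complete session might look like the following sketch — the application name, output file, and mark text are placeholders for illustration, not values from this article:

```
VSPerfCmd /start:sample /output:MyAppMemory.vsp
VSPerfCmd /launch:MyApp.exe /gc:lifetime
VSPerfCmd /mark:1,"After startup"
VSPerfCmd /detach
VSPerfCmd /shutdown
```

Each command is entered on its own line in the same Command Prompt window, so the profiler keeps writing to the same .vsp file until it is shut down.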
## <a name="end-the-profiling-session"></a>End the profiling session

To end a profiling session, the profiler must be detached from all profiled processes, and the profiler must be explicitly shut down. You can detach the profiler from an application profiled with the sampling method by closing the application or by calling the **VSPerfCmd /detach** option. Then call the **VSPerfCmd /shutdown** option to close the profiling data file. The **VSPerfClrEnv /off** command clears the profiling environment variables.

#### <a name="to-end-a-profiling-session"></a>To end a profiling session

1. Do one of the following to detach the profiler from the target application:

    - Close the target application.

    -or-

    - Type **VSPerfCmd /detach**

2. Shut down the profiler. Type:

    **VSPerfCmd** [/shutdown](../profiling/shutdown.md)

## <a name="see-also"></a>See also

- [Profiling stand-alone applications](../profiling/command-line-profiling-of-stand-alone-applications.md)
- [.NET memory data views](../profiling/dotnet-memory-data-views.md)
## login script

This script is used to log in to servers that have `expect` installed. If you don't have this package, you can run

```
brew install expect
```

on macOS, or

```
apt-get install expect
```

on Ubuntu. After installing, you can run

```
expect -v
```

to check whether it works.

### how to use

Download the shell scripts and the `.exp` scripts, and move them to a safe place. Then:

- open locate.sh and change the default params so that you can use it without any params. `tip: the host can be written like '192.168.*.'`
- you can link it in your bash shell or zsh shell.

Here is my configuration for zsh:

```
rmt () { cd /path/to/location/ && sh locate.sh $*;}
```

Then source your `.zshrc` file. Now you can use the `rmt` keyword to log in to a server automatically.

You can manage the `.exp` scripts to do anything after you have logged in to a server.
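For illustration, a minimal `.exp` login script might look like the sketch below — the positional arguments and the password prompt pattern are assumptions for this example, not the actual scripts shipped in this repo:

```
#!/usr/bin/expect
# Illustrative only: log in over ssh using arguments passed in by locate.sh
set host [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]
spawn ssh $user@$host
expect "*assword:*"
send "$password\r"
interact
```

The final `interact` hands the session back to you, so anything you add before it (sending commands, switching users) runs automatically on every login.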
Have fun. ([Or, if you want to download live-stream videos, you can follow this project](https://github.com/fulln/sampleScrips))
<a href="../README.md"><——Back to Menus</a>
---
toc_priority: 43
toc_title: MaterializedView
---
# MaterializedView Table Engine {#materializedview}
Used for implementing materialized views (for more information, see [CREATE TABLE](../../../sql-reference/statements/create.md)). For storing data, it uses a different engine that was specified when creating the view. When reading from a table, it just uses that engine.
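For illustration, a materialized view that pre-aggregates rows from another table might be declared like this — the table, column, and view names below are placeholders, not part of this page:

```sql
-- Illustrative: keep a daily visit count maintained from inserts into `hits`.
CREATE MATERIALIZED VIEW visits_daily
ENGINE = SummingMergeTree()
ORDER BY date
AS SELECT date, count() AS visits
FROM hits
GROUP BY date;
```

The `ENGINE` clause here is the "different engine" the paragraph above refers to: the view's data is stored and read through that engine, not through a special view engine.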
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/materializedview/) <!--hide-->
#### Parser Content
```Java
{
Name = s-cyberark-remote-logon-2
DataType = "remote-logon"
Conditions = [ """%CYBERARK:""", """Message="PSM Secure Connect Session Start""", """;Safe=""" ]
}
```
---
title: "MSSQL_ENG021798 | Microsoft Docs"
ms.custom: ""
ms.date: "03/14/2017"
ms.prod: "sql-non-specified"
ms.prod_service: "database-engine"
ms.service: ""
ms.component: "replication"
ms.reviewer: ""
ms.suite: "sql"
ms.technology:
- "replication"
ms.tgt_pltfrm: ""
ms.topic: "article"
helpviewer_keywords:
- "MSSQL_ENG021798 error"
ms.assetid: 596f5092-75ab-4a19-8582-588687c7b089
caps.latest.revision: 16
author: "BYHAM"
ms.author: "rickbyh"
manager: "jhubbard"
ms.workload: "Inactive"
---
# MSSQL_ENG021798
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
## Message Details
|||
|-|-|
|Product Name|SQL Server|
|Event ID|21798|
|Event Source|MSSQLSERVER|
|Component|[!INCLUDE[ssDEnoversion](../../includes/ssdenoversion-md.md)]|
|Symbolic Name||
|Message Text|The '%s' agent job must be added through '%s' before continuing. Please see the documentation for '%s'.|
## Explanation
To create a publication, you must be a member of the **sysadmin** fixed server role on the Publisher or a member of the **db_owner** fixed database role in the publication database. If you are a member of the **db_owner** role, this error is raised if:
- You run scripts from [!INCLUDE[ssVersion2000](../../includes/ssversion2000-md.md)]. The security model changed in [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)], and these scripts must be updated.
- The stored procedure **sp_addpublication** is executed before executing [sp_addlogreader_agent (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-addlogreader-agent-transact-sql.md). This applies to all transactional publications.
- The stored procedure **sp_addpublication** is executed before executing [sp_addqreader_agent (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-addqreader-agent-transact-sql.md). This applies to transactional publications that are enabled for queued updating subscriptions (a value of TRUE for the **@allow_queued_tran** parameter of **sp_addpublication**).
The stored procedures **sp_addlogreader_agent** and **sp_addqreader_agent** each create an agent job and allow you to specify the [!INCLUDE[msCoName](../../includes/msconame-md.md)] Windows account under which the agent runs. For users in the **sysadmin** role, agent jobs are created implicitly if **sp_addlogreader_agent** and **sp_addqreader_agent** are not executed; agents run under the context of the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Agent service account at the Distributor. Although **sp_addlogreader_agent** and **sp_addqreader_agent** are not required for users in the **sysadmin** role, it is a security best practice to specify a separate account for the agents. For more information, see [Replication Agent Security Model](../../relational-databases/replication/security/replication-agent-security-model.md).
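The required ordering can be sketched in Transact-SQL as follows — the login, password, and publication names are placeholders for illustration only:

```sql
-- Create the Log Reader Agent job first, supplying the Windows account it runs under.
EXEC sp_addlogreader_agent
    @job_login = N'CONTOSO\repluser',
    @job_password = N'<strong password>',
    @publisher_security_mode = 1;

-- Only after the agent job exists, create the publication.
EXEC sp_addpublication
    @publication = N'MyTranPub',
    @repl_freq = N'continuous';
```

Reversing the two calls is what raises error 21798 for members of the **db_owner** role.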
## User Action
Ensure you execute procedures in the correct order. For more information, see [Create a Publication](../../relational-databases/replication/publish/create-a-publication.md). If you have replication scripts from previous versions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], update these scripts to include the stored procedures and parameters required by [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] and later versions. For more information, see [Upgrade Replication Scripts (Replication Transact-SQL Programming)](../../relational-databases/replication/administration/upgrade-replication-scripts-replication-transact-sql-programming.md).
## See Also
[Errors and Events Reference (Replication)](../../relational-databases/replication/errors-and-events-reference-replication.md)
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Security
If you believe you have discovered a vulnerability or a have a security incident to report, please follow the steps below. This applies to Kubernetes releases v1.0 or later.
To watch for security and major API announcements, please join our [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group.
## Reporting a security issue
To report an issue, please:
- Submit a bug report [here](http://goo.gl/vulnz).
- Select “I want to report a technical security bug in a Google product (SQLi, XSS, etc.).”
- Select “Other” as the Application Type.
- Under reproduction steps, please additionally include
- the words "Kubernetes Security issue"
- Description of the issue
- Kubernetes release (e.g. output of `kubectl version` command, which includes server version.)
- Environment setup (e.g. which "Getting Started Guide" you followed, if any; what node operating system used; what service or software creates your virtual machines, if any)
An online submission will have the fastest response; however, if you prefer email, please send mail to [email protected]. If you feel the need, please use the [PGP public key](https://services.google.com/corporate/publickey.txt) to encrypt communications.
<!-- BEGIN MUNGE: IS_VERSIONED -->
<!-- TAG IS_VERSIONED -->
<!-- END MUNGE: IS_VERSIONED -->
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
| 48.205882 | 257 | 0.747407 | eng_Latn | 0.890628 |
layout: 'layouts/blog-post.njk'
title: Instant Loading Web Apps with an Application Shell Architecture
description: >
Application shell architecture is a method of building progressive web apps today, taking advantage of a range of technologies.
authors:
- addyosmani
- mattgaunt
date: 2015-11-16
updated: 2020-07-24
---
An **application shell** is the minimal HTML, CSS, and JavaScript powering a user interface. The application shell should:
* load fast
* be cached
* dynamically display content
An application shell is the secret to reliably good performance. Think of your app's shell like the bundle of code you'd publish to an app store if you were building a native app. It's the load needed to get off the ground, but might not be the whole story. It keeps your UI local and pulls in content dynamically through an API.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/QejALAOl0YsR5G6kqe6n.jpg", alt="App Shell Separation of HTML, JS and CSS shell and the HTML Content", width="800", height="591" %}
</figure>
## Background
Alex Russell's [Progressive Web Apps](https://infrequently.org/2015/06/progressive-apps-escaping-tabs-without-losing-our-soul/) article describes how a web app can *progressively* change through use and user consent to provide a more native-app-like experience complete with offline support, push notifications and the ability to be added to the home screen. It depends very much on the functionality and performance benefits of [service worker](https://developers.google.com/web/fundamentals/getting-started/primers/service-workers) and their caching abilities. This allows you to focus on **speed**, giving your web apps the same **instant loading** and regular updates you're used to seeing in native applications.
To take full advantage of these capabilities we need a new way of thinking about websites: the **application shell architecture**.
Let's dive into how to structure your app using a **service worker augmented
application shell architecture**. We'll look at both client and server-side rendering and share an end-to-end sample you can try today.
To emphasize the point, the example below shows the first load of an app using this architecture. Notice the 'App is ready for offline use' toast at the bottom of the screen. If an update to the shell becomes available later, we can inform the user to refresh for the new version.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/3kYzAICxgMctrDc840XN.png", alt="Image of service worker running in DevTools for the application shell", width="800", height="502" %}
</figure>
### What are service workers, again?
A service worker is a script that runs in the background, separate from your web page. It responds to events, including network requests made from pages it serves and push notices from your server. A service worker has an intentionally short lifetime. It wakes up when it gets an event and runs only as long as it needs to process it.
Service workers also have a limited set of APIs when compared to JavaScript in a normal browsing context. This is standard for [workers](https://developer.mozilla.org/docs/Web/API/Web_Workers_API/Using_web_workers) on the web. A Service worker can’t access the DOM but can access things like the [Cache API](https://developer.mozilla.org/docs/Web/API/Cache), and they can make network requests using the [Fetch API](https://developer.mozilla.org/docs/Web/API/Fetch_API). The [IndexedDB API](https://developer.mozilla.org/docs/Web/API/IndexedDB_API) and [postMessage()](https://developer.mozilla.org/docs/Web/API/Client/postMessage) are also available to use for data persistence and messaging between the service worker and pages it controls. [Push events](https://developer.mozilla.org/docs/Web/API/ServiceWorkerGlobalScope/onpush) sent from your server can invoke the [Notification API](https://developer.mozilla.org/docs/Web/API/Notifications_API) to increase user engagement.
A service worker can intercept network requests made from a page (which triggers a fetch event on the service worker) and return a response retrieved from the network, or retrieved from a local cache, or even constructed programmatically. Effectively, it's a programmable proxy in the browser. The neat part is that, regardless of where the response comes from, it looks to the web page as though there were no service worker involvement.
To learn more about service workers in depth, read an [Introduction to Service Workers](https://developers.google.com/web/fundamentals/getting-started/primers/service-workers).
## Performance benefits
Service workers are powerful for offline caching but they also offer significant performance wins in the form of instant loading for repeat visits to your site or web app. You can cache your application shell so it works offline and populate its content using JavaScript.
On repeat visits, this allows you to get **meaningful pixels** on the screen without the network, even if your content eventually comes from there. Think of it as displaying toolbars and cards **immediately**, then loading the rest of your content **progressively**.
To test this architecture on real devices, we’ve run our [application shell sample](https://github.com/GoogleChrome/application-shell/) on [WebPageTest.org](http://www.webpagetest.org/) and shown the results below.
**Test 1:** [Testing on Cable with a Nexus 5 using Chrome Dev](http://www.webpagetest.org/result/151113_8S_G68/)
The first view of the app has to fetch all the resources from the network and doesn’t achieve a meaningful paint until **1.2 seconds** in. Thanks to service worker caching, our repeat visit achieves meaningful paint and fully finishes loading in **0.5 seconds**.
<a href="https://youtu.be/bsAefxnSRZU">
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/wRl9j2Sq7dRLmjsuRxVI.png", alt="Web Page Test Paint Diagram for Cable Connection", width="680", height="480" %}
</figure>
</a>
**Test 2:** [Testing on 3G with a Nexus 5 using Chrome Dev](http://www.webpagetest.org/result/151112_8R_YQN/)
We can also test our sample with a slightly slower 3G connection. This time it takes **2.5 seconds** on first visit for our first meaningful paint. It takes [**7.1 seconds**](http://www.webpagetest.org/video/view.php?id=151112_8R_YQN.3.0) to fully load the page. With service worker caching, our repeat visit achieves meaningful paint and fully finishes loading in [**0.8 seconds**](http://www.webpagetest.org/video/view.php?id=151112_8R_YQN.3.1).
<a href="https://youtu.be/488XbwCKf5g">
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/a7ebJRPP7HUzlq8LRMK0.png", alt="Web Page Test Paint Diagram for 3G Connection", width="738", height="520" %}
</figure>
</a>
[Other views](http://www.webpagetest.org/result/151112_HH_11D0/) tell a similar story. Compare the **3 seconds** it takes to achieve first meaningful paint in the application shell:
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/eSIeWMZntzfGeYrWbm0z.png", alt="Paint timeline for first view from Web Page Test", width="800", height="247" %}
</figure>
to the **0.9 seconds** it takes when the same page is loaded from our service worker cache. Over 2 seconds of time is saved for our end users.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/dBavGB7P5UBo1EngwOeU.png", alt="Paint timeline for repeat view from Web Page Test", width="800", height="244" %}
</figure>
Similar and reliable performance wins are possible for your own applications using the application shell architecture.
## Does service worker require us to rethink how we structure apps?
Service workers imply some subtle changes in application architecture. Rather than squashing all of your application into an HTML string, it can be beneficial to do things AJAX-style. This is where you have a shell (that is always cached and can always boot up without the network) and content that is refreshed regularly and managed separately.
The implications of this split are large. On the first visit you can render content on the server and install the service worker on the client. On subsequent visits you need only request data.
## What about progressive enhancement?
While service worker isn’t currently supported by all browsers, the application content shell architecture uses [progressive enhancement](https://en.wikipedia.org/wiki/Progressive_enhancement) to ensure everyone can access the content. For example, take our sample project.
Below you can see the full version rendered in Chrome, Firefox Nightly and Safari. On the very left you can see the Safari version where the content is rendered on the server _without_ a service worker. On the right we see the Chrome and Firefox Nightly versions powered by service worker.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/7nUlJ8vAsOWi2SLV0kjF.jpg", alt="Image of Application Shell loaded in Safari, Chrome and Firefox", width="800", height="476" %}
</figure>
## When does it make sense to use this architecture?
The application shell architecture makes the most sense for apps and sites that are dynamic. If your site is small and static, you probably don't need an application shell and can simply cache the whole site in a service worker `oninstall` step. Use the approach that makes the most sense for your project. A number of JavaScript frameworks already encourage splitting your application logic from the content, making this pattern more straightforward to apply.
## Are there any production apps using this pattern yet?
The application shell architecture is possible with just a few changes to your overall application’s UI and has worked well for large-scale sites such as Google’s [I/O 2015 Progressive Web App](https://developers.google.com/web/showcase/case-study/service-workers-iowa) and Google’s Inbox.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/VrI0pv3eihgLe7qMF3Hs.png", alt="Image of Google Inbox loading. Illustrates Inbox using service worker.", width="800", height="618" %}
</figure>
Offline application shells are a major performance win and are also demonstrated well in Jake Archibald’s [offline Wikipedia app](https://wiki-offline.jakearchibald.com/wiki/Rick_and_Morty) and [Flipkart Lite](http://tech-blog.flipkart.net/2015/11/progressive-web-app/)’s progressive web app.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/q0LyypOQEo1YPJWzUHq2.jpg", alt="Screenshots of Jake Archibald's Wikipedia Demo.", width="800", height="570", class="screenshot" %}
</figure>
## Explaining the architecture
During the first load experience, your goal is to get meaningful content to the user’s screen as quickly as possible.
### First load and loading other pages
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/IUhRLTF6v2ticqGPNxWE.png", alt="Diagram of the First Load with the App Shell", width="800", height="420" %}
</figure>
**In general the application shell architecture will:**
* Prioritize the initial load, but let service worker cache the application shell so repeat visits do not require the shell to be re-fetched from the network.
* Lazy-load or background load everything else. One good option is to use [read-through caching](https://googlechrome.github.io/samples/service-worker/read-through-caching/) for dynamic content.
* Use service worker tools, such as [sw-precache](https://github.com/GoogleChrome/sw-precache), to reliably cache and update the service worker that manages your static content. (More about sw-precache later.)
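The first bullet can be sketched as a hand-written install step. This is a simplified illustration, not sw-precache's generated output; the cache name and URL list are placeholders:

```javascript
// Versioned cache name so a later shell release can be told apart from this one.
function cacheName(prefix, version) {
  return prefix + '-v' + version;
}

const SHELL_CACHE = cacheName('app-shell', 1);
const SHELL_URLS = ['/', '/styles/main.css', '/scripts/app.js'];

if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // Don't let the worker finish installing until the whole shell is cached.
    event.waitUntil(
      caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_URLS))
    );
  });
}
```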
**To achieve this:**
* **Server** will send HTML content that the client can render and use far-future HTTP cache expiration headers to account for browsers without service worker support. It will serve filenames using hashes to enable both ‘versioning’ and easy updates for later in the application lifecycle.
* **Page(s)** will include inline CSS styles in a `<style>` tag within the document `<head>` to provide a fast first paint of the application shell. Each page will asynchronously load the JavaScript necessary for the current view. Because CSS cannot be loaded asynchronously, we can request styles using JavaScript, which _is_ asynchronous rather than parser-driven and synchronous. We can also take advantage of [`requestAnimationFrame()`](https://developer.mozilla.org/docs/Web/API/window/requestAnimationFrame) to avoid cases where we might get a fast cache hit and end up with styles accidentally becoming part of the critical rendering path. `requestAnimationFrame()` forces the first frame to be painted before the styles are loaded. Another option is to use projects such as Filament Group's [loadCSS](https://github.com/filamentgroup/loadCSS) to request CSS asynchronously using JavaScript.
* **Service worker** will store a cached entry of the application shell so that on repeat visits, the shell can be loaded entirely from the service worker cache unless an update is available on the network.
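The `requestAnimationFrame()` approach described above might look roughly like this; the stylesheet path is a placeholder:

```javascript
// Build a <link rel="stylesheet"> element; kept as a pure helper so the
// DOM wiring below stays trivially small.
function buildStylesheetLink(doc, href) {
  const link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  return link;
}

if (typeof document !== 'undefined' && typeof requestAnimationFrame === 'function') {
  requestAnimationFrame(() => {
    // The first frame has painted, so the full styles can no longer land
    // in the critical rendering path even on a fast cache hit.
    document.head.appendChild(buildStylesheetLink(document, '/styles/app.css'));
  });
}
```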
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/AozpCaYOkYRUwCPW2pnW.jpg", alt="App Shell for Content", width="800", height="591" %}
</figure>
## A practical implementation
We’ve written a fully working [sample](https://github.com/GoogleChrome/application-shell) using the application shell architecture, vanilla ES2015 JavaScript for the client, and Express.js for the server. There is of course nothing stopping you from using your own stack for either the client or the server portions (e.g., PHP, Ruby, Python).
### Service worker lifecycle
For our application shell project, we use [sw-precache](https://github.com/GoogleChrome/sw-precache/) which offers the following service worker lifecycle:
<table>
<tr>
<th>Event</th>
<th>Action</th>
</tr>
<tr>
<td>Install</td>
<td>Cache the application shell and other single page app resources.</td>
</tr>
<tr>
<td>Activate</td>
<td>Clear out old caches.</td>
</tr>
<tr>
<td>Fetch</td>
<td>Serve up a single-page web app for URLs and use the cache for assets and predefined partials. Use the network for other requests.</td>
</tr>
</table>
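A hand-written worker covering the activate and fetch rows of the table might look roughly like the sketch below. The cache name, shell URL, and the navigation heuristic are all assumptions for illustration, not sw-precache's actual output:

```javascript
const CURRENT_SHELL_CACHE = 'app-shell-v1'; // placeholder cache name
const SHELL_URL = '/shell.html';            // placeholder cached shell entry

// Pure helpers so the routing decisions are easy to unit test.
function staleCaches(existingNames, currentName) {
  return existingNames.filter((name) => name !== currentName);
}

function wantsShell(request) {
  // Top-level HTML navigations get the cached application shell.
  return request.mode === 'navigate' ||
    (request.method === 'GET' &&
      (request.headers.get('accept') || '').includes('text/html'));
}

if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('activate', (event) => {
    // "Clear out old caches" from the table above.
    event.waitUntil(
      caches.keys().then((names) =>
        Promise.all(staleCaches(names, CURRENT_SHELL_CACHE).map((name) => caches.delete(name)))
      )
    );
  });

  self.addEventListener('fetch', (event) => {
    // Navigations are answered with the shell; everything else is
    // cache-first with a network fallback.
    const lookup = wantsShell(event.request) ? SHELL_URL : event.request;
    event.respondWith(
      caches.match(lookup).then((cached) => cached || fetch(event.request))
    );
  });
}
```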
### Server bits
In this architecture, a server-side component (in our case, written in Express) should be able to treat content and presentation separately. Content could be added to an HTML layout that results in a static render of the page, or it could be served separately and dynamically loaded.
Understandably, your server-side setup may drastically differ from the one we use for our demo app. This pattern of web apps is achievable by most server setups, though it **does** require some rearchitecting. We’ve found that the following model works quite well:
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/zNO2jZmpDyDXl8dXz09l.png", alt="Diagram of the App Shell Architecture", width="700", height="329" %}
</figure>
* Endpoints are defined for three parts of your application: the user-facing URLs (index/wildcard), the application shell (service worker), and your HTML partials.
* Each endpoint has a controller that pulls in a [handlebars](https://www.npmjs.com/package/handlebars-layouts) layout which in turn can pull in handlebar partials and views. Simply put, partials are views that are chunks of HTML that are copied into the final page.
Note: JavaScript frameworks that do more advanced data synchronization are often way easier to port to an Application Shell architecture. They tend to use data-binding and sync rather than partials.
* The user is initially served a static page with content. This page registers a service worker, if it’s supported, which caches the application shell and everything it depends on (CSS, JS etc).
* The app shell will then act as a single-page web app, using JavaScript to XHR in the content for a specific URL. The XHR calls are made to a /partials* endpoint which returns the small chunk of HTML, CSS, and JS needed to display that content.
Note: There are many ways to approach this and XHR is just one of them. Some applications will inline their data (maybe using JSON) for initial render and therefore aren’t "static" in the flattened HTML sense.
* Browsers **without** service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options. The service worker aspect provides you with new opportunities for enhancing the performance of your Single-page Application style app using the cached application shell.
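The client side of the partials flow can be sketched as follows. The `/partials` URL shape and the `#content` element are assumptions about the shell's markup, not code from the demo:

```javascript
// Map a user-facing URL onto the /partials endpoint described above.
function partialUrlFor(path) {
  return '/partials' + (path === '/' ? '/home' : path);
}

// Fetch the small chunk of HTML for the current view.
async function loadContent(path) {
  const response = await fetch(partialUrlFor(path));
  if (!response.ok) {
    throw new Error('No partial for ' + path);
  }
  return response.text();
}

if (typeof document !== 'undefined' && typeof fetch === 'function') {
  // Swap the fetched chunk into the shell's content area.
  loadContent(location.pathname)
    .then((html) => { document.getElementById('content').innerHTML = html; })
    .catch((error) => console.error(error));
}
```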
### File versioning
One question that arises is how to handle file versioning and updating. This is application specific and the options are:
* Network first and use the cached version otherwise.
* Network only and fail if offline.
* Cache the old version and update later.
For the application shell itself, a cache-first approach should be taken for your service worker setup. If you aren’t caching the application shell, you haven’t properly adopted the architecture.
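The first option, network first with the cached version as a fallback, might be written as a small strategy function. The cache name is a placeholder, and the cache storage and fetch handles are injected so the fallback logic stays easy to test:

```javascript
const CONTENT_CACHE = 'content-v1'; // placeholder cache name

// "Network first and use the cached version otherwise."
function networkFirst(request, cacheStorage, fetchFn) {
  return fetchFn(request)
    .then((response) =>
      cacheStorage.open(CONTENT_CACHE).then((cache) => {
        cache.put(request, response.clone()); // refresh the cached copy
        return response;
      })
    )
    .catch(() => cacheStorage.match(request)); // offline: fall back to cache
}
```

Inside a real worker you would pass `caches` and `fetch` directly; the injected parameters only exist to keep the sketch self-contained.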
Note: The application shell sample does not (at the time of writing) use file versioning for the assets referenced in the static render, often used for cache busting. **We hope to add this in the near future.** The service worker is otherwise versioned by sw-precache (covered in the [Tooling](#tooling) section).
### Tooling
We maintain a number of different [service worker helper libraries](https://developers.google.com/web/tools/service-worker-libraries/) that make the process of precaching your application’s shell or handling common caching patterns easier to setup.
<figure>
{% Img src="image/T4FyVKpzu4WKF1kBNvXepbi08t52/LOV0uTcMUwvBZ2Z95qku.png", alt="Screenshot of the Service Worker Library Site on Web Fundamentals", width="800", height="438" %}
</figure>
#### Use sw-precache for your application shell
Using [sw-precache](https://github.com/GoogleChrome/sw-precache) to cache the application shell should handle the concerns around file revisions, the install/activate questions, and the fetch scenario for the app shell. Drop sw-precache into your application's build process and use configurable wildcards to pick up your static resources. Rather than manually hand-crafting your service worker script, let sw-precache generate one that manages your cache in a safe and efficient way, using a cache-first fetch handler.
Initial visits to your app trigger precaching of the complete set of needed resources. This is similar to the experience of installing a native app from an app store. When users return to your app, only updated resources are downloaded. In our demo, we inform users when a new shell is available with the message, "App updates. Refresh for the new version." This pattern is a low-friction way of letting users know they can refresh for the latest version.
#### Use sw-toolbox for runtime caching
Use [sw-toolbox](https://github.com/GoogleChrome/sw-toolbox) for runtime caching with varying strategies depending on the resource:
* [cacheFirst](https://github.com/GoogleChrome/sw-toolbox#toolboxcachefirst) for images, along with a dedicated named cache that has a custom expiration policy of N maxEntries.
* [networkFirst](https://github.com/GoogleChrome/sw-toolbox#toolboxnetworkfirst) or fastest for API requests, depending on the desired content freshness. Fastest might be fine, but if there’s a specific API feed that's updated frequently, use networkFirst.
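Wired up with sw-toolbox, the two strategies above might look like this; the route patterns and cache limits are assumptions, and `toolbox` is the global that sw-toolbox exposes inside a worker:

```javascript
// Choose a handler per resource type: images may be stale -> cacheFirst;
// API data should be fresh -> networkFirst.
function pickStrategy(toolbox, resourceType) {
  return resourceType === 'image' ? toolbox.cacheFirst : toolbox.networkFirst;
}

if (typeof self !== 'undefined' && typeof importScripts === 'function') {
  importScripts('sw-toolbox.js'); // path is a placeholder
  toolbox.router.get('/images/(.*)', pickStrategy(toolbox, 'image'), {
    cache: {name: 'images', maxEntries: 50},
  });
  toolbox.router.get('/api/(.*)', pickStrategy(toolbox, 'api'));
}
```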
## Conclusion
The application shell architecture comes with several benefits but only makes sense for some classes of applications. The model is still young and it will be worth evaluating the effort and overall performance benefits of this architecture.
In our experiments, we took advantage of template sharing between the client and server to minimize the work of building two application layers. This ensures progressive enhancement is still a core feature.
If you’re already considering using service workers in your app, take a look at the architecture and evaluate if it makes sense for your own projects.
_With thanks to our reviewers: Jeff Posnick, Paul Lewis, Alex Russell, Seth Thompson, Rob Dodson, Taylor Savage and Joe Medley._
# JetPack Suite Best Practices
- [#JetPack | How LiveData observes data safely](https://www.yuque.com/jakeprim/android/onl14s)
- [#JetPack | How Lifecycle achieves lifecycle awareness](https://www.yuque.com/jakeprim/android/hfihnu)
- [#JetPack | How ViewModel manages view state](https://www.yuque.com/jakeprim/android/yiqlff)
# Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
## [2.7.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.6.2...v2.7.0) (2020-05-03)
### Features
* extended doc options ([65df4ea](https://gitlab.com/m03geek/fastify-oas/commit/65df4eab8136ab820a0e98890567d5971b49008c))
### Bug Fixes
* redirect fix ([6ca211b](https://gitlab.com/m03geek/fastify-oas/commit/6ca211bbc67c10b5884f32a5b830a064f47e4276))
### [2.6.2](https://gitlab.com/m03geek/fastify-oas/compare/v2.6.1...v2.6.2) (2020-03-28)
### Bug Fixes
* node 8 compat ([6ea4a05](https://gitlab.com/m03geek/fastify-oas/commit/6ea4a0554a420ef9ff0b8b01959dabb67421b514))
### [2.6.1](https://gitlab.com/m03geek/fastify-oas/compare/v2.6.0...v2.6.1) (2020-03-28)
### Bug Fixes
* **ts:** fix types ([9e4d612](https://gitlab.com/m03geek/fastify-oas/commit/9e4d6122ecb318627c9ed2fe229c946feca0e5a5))
## [2.6.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.5.2...v2.6.0) (2020-03-28)
### Features
* allow inclusion of 'examples' keyword ([59fbdd7](https://gitlab.com/m03geek/fastify-oas/commit/59fbdd7e08ae1d28e66d2a39b191f1a7179bc516)), closes [#26](https://gitlab.com/m03geek/fastify-oas/issues/26)
### [2.5.2](https://gitlab.com/m03geek/fastify-oas/compare/v2.5.1...v2.5.2) (2020-02-24)
### Bug Fixes
* **deps:** fix fastify working versions ([fbfccc1](https://gitlab.com/m03geek/fastify-oas/commit/fbfccc1eca1c2cfff46a2ffea525ab140d0487d7))
### [2.5.1](https://gitlab.com/m03geek/fastify-oas/compare/v2.5.0...v2.5.1) (2020-02-18)
### Bug Fixes
* add null check ([6d22653](https://gitlab.com/m03geek/fastify-oas/commit/6d22653))
## [2.5.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.4.0...v2.5.0) (2019-10-31)
### Features
* **openapi:** support $ref-way style shared schema references ([d401740](https://gitlab.com/m03geek/fastify-oas/commit/d401740))
## [2.4.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.3.3...v2.4.0) (2019-10-21)
### Features
* **openapi:** collect fastify schemas recursively ([6c47a88](https://gitlab.com/m03geek/fastify-oas/commit/6c47a88))
### [2.3.3](https://gitlab.com/m03geek/fastify-oas/compare/v2.3.2...v2.3.3) (2019-09-11)
### [2.3.2](https://gitlab.com/m03geek/fastify-oas/compare/v2.3.1...v2.3.2) (2019-09-11)
### Bug Fixes
* **typescript:** move some exported members to deps ([63cfb9c](https://gitlab.com/m03geek/fastify-oas/commit/63cfb9c))
### [2.3.1](https://gitlab.com/m03geek/fastify-oas/compare/v2.3.0...v2.3.1) (2019-07-15)
## [2.3.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.2.0...v2.3.0) (2019-07-15)
### Features
* **openapi:** return default response for empty schema ([0d9457b](https://gitlab.com/m03geek/fastify-oas/commit/0d9457b))
### Tests
* **openapi:** fix openapi spec compatibility ([0ab6314](https://gitlab.com/m03geek/fastify-oas/commit/0ab6314))
## [2.2.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.1.3...v2.2.0) (2019-06-18)
### Features
* **openapi:** convert multiple and nullable types to OpenAPI ([34f9d47](https://gitlab.com/m03geek/fastify-oas/commit/34f9d47))
## [2.1.3](https://gitlab.com/m03geek/fastify-oas/compare/v2.1.2...v2.1.3) (2019-04-17)
## [2.1.2](https://gitlab.com/m03geek/fastify-oas/compare/v2.1.1...v2.1.2) (2019-04-12)
## [2.1.1](https://gitlab.com/m03geek/fastify-oas/compare/v2.1.0...v2.1.1) (2019-04-12)
### Bug Fixes
* package.json & package-lock.json to reduce vulnerabilities ([8d5a453](https://gitlab.com/m03geek/fastify-oas/commit/8d5a453))
* **swagger:** add operationId support ([9ef427a](https://gitlab.com/m03geek/fastify-oas/commit/9ef427a))
# [2.1.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0...v2.1.0) (2019-04-01)
### Features
* **openapi:** add style and explode support ([25f2f98](https://gitlab.com/m03geek/fastify-oas/commit/25f2f98))
# [2.0.0](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0-rc.4...v2.0.0) (2019-02-26)
<a name="2.0.0-rc.4"></a>
# [2.0.0-rc.4](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0-rc.3...v2.0.0-rc.4) (2019-01-23)
### Bug Fixes
* **types:** typo fix ([ff9858e](https://gitlab.com/m03geek/fastify-oas/commit/ff9858e))
<a name="2.0.0-rc.3"></a>
# [2.0.0-rc.3](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0-rc.2...v2.0.0-rc.3) (2019-01-14)
### Bug Fixes
* add pattern to valid params ([9e8b766](https://gitlab.com/m03geek/fastify-oas/commit/9e8b766))
### Features
* Support operationId ([cbbda88](https://gitlab.com/m03geek/fastify-oas/commit/cbbda88))
<a name="2.0.0-rc.2"></a>
# [2.0.0-rc.2](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0-rc.1...v2.0.0-rc.2) (2018-12-26)
<a name="2.0.0-rc.1"></a>
# [2.0.0-rc.1](https://gitlab.com/m03geek/fastify-oas/compare/v2.0.0-rc.0...v2.0.0-rc.1) (2018-12-26)
### Bug Fixes
* **plugin:** proper plugin version check ([b906898](https://gitlab.com/m03geek/fastify-oas/commit/b906898))
<a name="2.0.0-rc.0"></a>
# [2.0.0-rc.0](https://gitlab.com/m03geek/fastify-oas/compare/v1.1.1...v2.0.0-rc.0) (2018-12-26)
### Features
* add fastify v2 support ([450fd7b](https://gitlab.com/m03geek/fastify-oas/commit/450fd7b))
### BREAKING CHANGES
* drop fastify v1 support
## [1.1.1](https://gitlab.com/m03geek/fastify-oas/compare/v1.1.0...v1.1.1) (2018-12-17)
### Bug Fixes
* remove console.log ([ce1dc54](https://gitlab.com/m03geek/fastify-oas/commit/ce1dc54))
# [1.1.0](https://gitlab.com/m03geek/fastify-oas/compare/v1.0.0...v1.1.0) (2018-12-17)
### Features
* add hideUntagged option ([8d7f4e5](https://gitlab.com/m03geek/fastify-oas/commit/8d7f4e5))
# [1.0.0](https://gitlab.com/m03geek/fastify-oas/compare/v0.6.2...v1.0.0) (2018-12-16)
### Bug Fixes
* **redoc:** add tagGroups support ([be728e1](https://gitlab.com/m03geek/fastify-oas/commit/be728e1))
* **swagger:** add title support ([75530a8](https://gitlab.com/m03geek/fastify-oas/commit/75530a8))
### Features
* **docs:** add notice about fastify version support ([dc73245](https://gitlab.com/m03geek/fastify-oas/commit/dc73245))
## [0.6.2](https://gitlab.com/m03geek/fastify-oas/compare/v0.6.1...v0.6.2) (2018-11-22)
### Bug Fixes
* **body:** required params fix ([590e219](https://gitlab.com/m03geek/fastify-oas/commit/590e219))
## [0.6.1](https://gitlab.com/m03geek/fastify-oas/compare/v0.6.0...v0.6.1) (2018-11-16)
# [0.6.0](https://gitlab.com/m03geek/fastify-oas/compare/v0.5.3...v0.6.0) (2018-11-08)
### Features
* add redoc ([7508231](https://gitlab.com/m03geek/fastify-oas/commit/7508231))
## [0.5.3](https://gitlab.com/m03geek/fastify-oas/compare/v0.5.2...v0.5.3) (2018-10-31)
### Bug Fixes
* **typescript:** typings fix ([ed7a237](https://gitlab.com/m03geek/fastify-oas/commit/ed7a237))
## [0.5.2](https://gitlab.com/m03geek/fastify-oas/compare/v0.5.1...v0.5.2) (2018-10-31)
## [0.5.1](https://gitlab.com/m03geek/fastify-oas/compare/v0.5.0...v0.5.1) (2018-10-31)
### Bug Fixes
* **typedoc:** fix typedocs ([7993a36](https://gitlab.com/m03geek/fastify-oas/commit/7993a36))
# [0.5.0](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.9...v0.5.0) (2018-10-31)
### Features
* add typescript definitions and typedocs ([6ce96d1](https://gitlab.com/m03geek/fastify-oas/commit/6ce96d1))
## [0.4.9](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.8...v0.4.9) (2018-10-17)
## [0.4.8](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.7...v0.4.8) (2018-09-25)
## [0.4.7](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.6...v0.4.7) (2018-09-25)
## [0.4.6](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.5...v0.4.6) (2018-08-12)
### Bug Fixes
* **changelog:** fix changelog links ([59a0053](https://gitlab.com/m03geek/fastify-oas/commit/59a0053))
## [0.4.5](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.4...v0.4.5) (2018-08-12)
### Bug Fixes
* **helpers:** fix enum handling ([bfc483c](https://gitlab.com/m03geek/fastify-oas/commit/bfc483c)), closes [#3](https://gitlab.com/m03geek/fastify-oas/issues/3)
## [0.4.4](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.3...v0.4.4) (2018-08-06)
### Bug Fixes
* **schema:** use schemaKey if $id is missing ([1ac51eb](https://gitlab.com/m03geek/fastify-oas/commit/1ac51eb))
## [0.4.3](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.2...v0.4.3) (2018-08-06)
### Bug Fixes
* **schemas:** fix add schemas as models ([942e9ce](https://gitlab.com/m03geek/fastify-oas/commit/942e9ce))
## [0.4.2](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.1...v0.4.2) (2018-08-06)
### Bug Fixes
* **routes:** fix routes with separate schemas ([d132258](https://gitlab.com/m03geek/fastify-oas/commit/d132258))
* **schemas:** fix schemas generation ([82e4fbc](https://gitlab.com/m03geek/fastify-oas/commit/82e4fbc))
## [0.4.1](https://gitlab.com/m03geek/fastify-oas/compare/v0.4.0...v0.4.1) (2018-08-05)
# [0.4.0](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.8...v0.4.0) (2018-08-05)
## [0.3.8](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.7...v0.3.8) (2018-08-04)
## [0.3.7](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.6...v0.3.7) (2018-08-04)
### Bug Fixes
* package name ([309d254](https://gitlab.com/m03geek/fastify-oas/commit/309d254))
## [0.3.6](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.5...v0.3.6) (2018-08-04)
## [0.3.5](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.4...v0.3.5) (2018-08-04)
## [0.3.4](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.3...v0.3.4) (2018-08-03)
## [0.3.3](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.2...v0.3.3) (2018-08-03)
## [0.3.2](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.1...v0.3.2) (2018-08-03)
## [0.3.1](https://gitlab.com/m03geek/fastify-oas/compare/v0.3.0...v0.3.1) (2018-08-03)
# [0.3.0](https://gitlab.com/m03geek/fastify-oas/compare/v0.2.0...v0.3.0) (2018-08-03)
### Bug Fixes
* response and body generation ([4640cfc](https://gitlab.com/m03geek/fastify-oas/commit/4640cfc))
### Features
* add externalDocs and tags support ([2335359](https://gitlab.com/m03geek/fastify-oas/commit/2335359))
# [0.2.0](https://gitlab.com/m03geek/fastify-oas/compare/cfe110c...v0.2.0) (2018-07-28)
### Features
* add helpers for oas ([54d4d33](https://gitlab.com/m03geek/fastify-oas/commit/54d4d33))
* add main file ([4f70f99](https://gitlab.com/m03geek/fastify-oas/commit/4f70f99))
* add openapi generator ([862561a](https://gitlab.com/m03geek/fastify-oas/commit/862561a))
* add swagger routes ([cb959fb](https://gitlab.com/m03geek/fastify-oas/commit/cb959fb))
* add swagger ui ([cfe110c](https://gitlab.com/m03geek/fastify-oas/commit/cfe110c))
] | null | null | null | ---
title: OLAP Notes
date: "2021-02-16"
spoiler: OLAP
---

# Concepts

## OLTP

> (Online Transaction Processing) handles transactional workloads online, serving user requests in real time

## OLAP

> (Online Analytical Processing) handles analytical workloads online, processing user data in batches

## Row-oriented storage

> Manages data with the row (or entity) as the logical unit; each data row is stored contiguously

## Column-oriented storage

> Manages data with the column as the logical unit; adjacent values all share the same type

# Column-oriented storage

## Advantages

1. Fast reads of specific columns: only the columns a query needs are fetched
2. Values of the same column are stored close together, which reduces disk usage through data compression
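A small illustration of the difference, not tied to any particular storage engine: the same records laid out row-wise and column-wise, and why a per-column aggregate only touches one contiguous array.

```javascript
// Row-wise: each record keeps all of its fields together.
const rows = [
  {id: 1, city: 'Beijing', amount: 10},
  {id: 2, city: 'Shanghai', amount: 20},
  {id: 3, city: 'Beijing', amount: 30},
];

// Column-wise: adjacent values share a type, which is what makes
// per-column reads and compression cheap.
function toColumns(records) {
  const columns = {};
  for (const record of records) {
    for (const [key, value] of Object.entries(record)) {
      (columns[key] = columns[key] || []).push(value);
    }
  }
  return columns;
}

// An aggregate like SUM(amount) scans a single contiguous array
// instead of every field of every row.
function sumColumn(columns, name) {
  return columns[name].reduce((total, value) => total + value, 0);
}
```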
---
title: Superscript and Subscript
module: topic-05
permalink: /topic-05/sup-sub/
categories: html
tags: elements, subscript, superscript
---
<div class="divider-heading"></div>
The **superscript element** ( `<sup>...</sup>` ) and **subscript element** ( `<sub>...</sub>` ) are used to raise or lower text relative to normal text.
The superscript element denotes characters that should appear as 'superscript', such as date suffixes or mathematical powers.
Footnotes and chemical or mathematical formulas, especially fractions such as <sup>1</sup>/<sub>3</sub>, commonly use the subscript element.
<div class="code-heading">
<span class="html">HTML</span>
</div>
```html
<p>This is the 3<sup>rd</sup> element in this section.</p>
<p>You are <sup>1</sup>/<sub>3</sub> through this section.</p>
```
<div class="external-embed">
<p data-height="400" data-theme-id="30567" data-slug-hash="abNRBLv" data-default-tab="html,result" data-user="michaelcassens" data-pen-title="Semantic HTML, Superscript and Subscript" class="codepen"></p>
</div>
---
title: FHIR servidor do Postman no Azure – API do Azure para FHIR
description: Neste tutorial, veremos as etapas necessárias para usar o Postman para acessar um servidor FHIR. O Postman é útil para depurar aplicativos que acessam APIs.
services: healthcare-apis
ms.service: healthcare-apis
ms.subservice: fhir
ms.topic: tutorial
ms.reviewer: dseven
ms.author: matjazl
author: matjazl
ms.date: 02/07/2019
ms.openlocfilehash: f8b5e344fc963d466571e75ff16f17367dc32971
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/09/2020
ms.locfileid: "87844840"
---
# <a name="access-azure-api-for-fhir-with-postman"></a>Access Azure API for FHIR with Postman

A client application accesses a FHIR API through a [REST API](https://www.hl7.org/fhir/http.html). You may also want to interact directly with the FHIR server as you build applications, for example, for debugging. In this tutorial, we walk through the steps needed to use [Postman](https://www.getpostman.com/) to access a FHIR server. Postman is a tool often used for debugging when building applications that access APIs.
## <a name="prerequisites"></a>Prerequisites

- A FHIR endpoint in Azure. You can set this up using the managed Azure API for FHIR or the open-source FHIR Server for Azure. Set up the managed Azure API for FHIR using the [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or the [Azure CLI](fhir-paas-cli-quickstart.md).
- A [client application](register-confidential-azure-ad-client-app.md) that you will use to access the FHIR service
- Postman installed. You can get it from [https://www.getpostman.com](https://www.getpostman.com)
## <a name="fhir-server-and-authentication-details"></a>FHIR server and authentication details

To use Postman, the following details are needed:

- The FHIR server URL, for example `https://MYACCOUNT.azurehealthcareapis.com`
- The identity provider `Authority` for your FHIR server, for example, `https://login.microsoftonline.com/{TENANT-ID}`
- The configured `audience`. This is usually the URL of the FHIR server, for example, `https://MYACCOUNT.azurehealthcareapis.com` or just `https://azurehealthcareapis.com`.
- The `client_id` (or application ID) of the [client application](register-confidential-azure-ad-client-app.md) you will use to access the FHIR service.
- The `client_secret` (or application secret) of the client application.

Finally, you should check that `https://www.getpostman.com/oauth2/callback` is a registered reply URL for your client application.
## <a name="connect-to-fhir-server"></a>Connect to FHIR server

Using Postman, make a `GET` request to `https://fhir-server-url/metadata`:

![Postman metadata request](media/tutorial-postman/postman-metadata-request.png)

The metadata URL for Azure API for FHIR is `https://MYACCOUNT.azurehealthcareapis.com/metadata`. In this example, the FHIR server URL is `https://fhirdocsmsft.azurewebsites.net`, and the server's capability statement is available at `https://fhirdocsmsft.azurewebsites.net/metadata`. This endpoint should be accessible without authentication.
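The same capability-statement request can also be made from code, for example from a browser console. This is a hedged JavaScript sketch; the account name is the same placeholder used throughout this article:

```javascript
// Build the capability-statement URL from a FHIR server URL.
function metadataUrl(fhirServerUrl) {
  return fhirServerUrl.replace(/\/+$/, '') + '/metadata';
}

if (typeof window !== 'undefined' && typeof fetch === 'function') {
  // /metadata needs no access token, so this works before authenticating.
  fetch(metadataUrl('https://MYACCOUNT.azurehealthcareapis.com'))
    .then((response) => response.json())
    .then((capability) => console.log(capability.fhirVersion))
    .catch((error) => console.error('Metadata request failed', error));
}
```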
If you attempt to access restricted resources, you should get an "Authentication failed" response:

![Authentication failed response](media/tutorial-postman/postman-authentication-failed.png)
## <a name="obtaining-an-access-token"></a>Obtaining an access token

To obtain a valid access token, select "Authorization" and pick the "OAuth 2.0" type:

![Select OAuth 2.0 authorization](media/tutorial-postman/postman-select-oauth2.png)

Hit "Get New Access Token" and a dialog appears:

![Request new access token](media/tutorial-postman/postman-request-token.png)

You will need some details:

| Field                 | Example value                                                                                                     | Comment                    |
|-----------------------|-------------------------------------------------------------------------------------------------------------------|----------------------------|
| Token Name            | MYTOKEN                                                                                                           | A name of your choosing    |
| Grant Type            | Authorization Code                                                                                                | |
| Callback URL          | `https://www.getpostman.com/oauth2/callback`                                                                      | |
| Auth URL              | `https://login.microsoftonline.com/{TENANT-ID}/oauth2/authorize?resource=<audience>`                              | `audience` is `https://MYACCOUNT.azurehealthcareapis.com` for Azure API for FHIR |
| Access Token URL      | `https://login.microsoftonline.com/{TENANT ID}/oauth2/token`                                                      | |
| Client ID             | `XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX`                                                                             | Application ID             |
| Client Secret         | `XXXXXXXX`                                                                                                        | Secret client key          |
| Scope                 | `<Leave Blank>`                                                                                                   |
| State                 | `1234`                                                                                                            | |
| Client Authentication | Send client credentials in body                                                                                   |

Hit "Request Token" and you will be guided through the Azure Active Directory authentication flow, and a token will be returned to Postman. If you run into problems, open the Postman console (from the "View->Show Postman Console" menu item).
Scroll down on the returned token screen and hit "Use Token":

![Use Token button](media/tutorial-postman/postman-use-token.png)

The token should now populate the "Access Token" field, and you can pick tokens from "Available Tokens". If you hit "Send" again to repeat the `Patient` resource search, you should get a `200 OK` status:

![200 OK status](media/tutorial-postman/postman-200-ok.png)

In this case, there are no patients in the database and the search comes back empty.

If you inspect the access token with a tool like [https://jwt.ms](https://jwt.ms), you should see content like:
```jsonc
{
"aud": "https://MYACCOUNT.azurehealthcareapis.com",
"iss": "https://sts.windows.net/{TENANT-ID}/",
"iat": 1545343803,
"nbf": 1545343803,
"exp": 1545347703,
"acr": "1",
"aio": "AUQAu/8JXXXXXXXXXdQxcxn1eis459j70Kf9DwcUjlKY3I2G/9aOnSbw==",
"amr": [
"pwd"
],
"appid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"oid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"appidacr": "1",
...// Truncated
}
```
When troubleshooting, validating that you have the correct audience (the `aud` claim) is a good place to start. If the token is from the correct issuer (`iss` claim) and has the correct audience (`aud` claim), but you still cannot access the FHIR API, it is likely that the user or service principal (`oid` claim) does not have access to the FHIR data plane. We recommend [using Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign data plane roles to users. If you are using a secondary, external Azure Active Directory tenant for your data plane, you will need to [configure local RBAC assignments](configure-local-rbac.md).
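If you only need to eyeball the `aud` and `iss` claims without pasting the token into a website, the JWT payload segment can be decoded locally. A minimal Python sketch follows; the token below is fabricated purely for illustration, and this decodes the payload without verifying the signature:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the base64 padding JWTs strip
    return json.loads(base64.urlsafe_b64decode(payload))

# A made-up token containing only the claims we care about (header.payload.signature).
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(
    b'{"aud":"https://MYACCOUNT.azurehealthcareapis.com",'
    b'"iss":"https://sts.windows.net/{TENANT-ID}/"}'
).rstrip(b"=").decode()
token = f"{header}.{body}.sig"

claims = jwt_claims(token)
assert claims["aud"] == "https://MYACCOUNT.azurehealthcareapis.com"
print(claims["iss"])  # → https://sts.windows.net/{TENANT-ID}/
```

For a real token you would, of course, pass in the string returned by the authentication flow instead of the fabricated one.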
It is also possible to [obtain a token for the Azure API for FHIR using the Azure CLI](get-healthcare-apis-access-token-cli.md). If you are using a token obtained with the Azure CLI, you should use the "Bearer Token" authorization type and paste the token in directly.
## <a name="inserting-a-patient"></a>Inserting a patient

Now that you have a valid access token, you can insert a new patient. Switch to the "POST" method and add the following JSON document to the body of the request:

[!code-json[](samples/sample-patient.json)]

Press "Send" and you should see that the patient was created successfully:

![Screenshot that shows the patient created successfully](media/tutorial-postman/postman-patient-created.png)

If you repeat the patient search, you should now see the patient record:

![Patient returned](media/tutorial-postman/postman-patient-returned.png)
## <a name="next-steps"></a>Next steps

In this tutorial, you accessed a FHIR API by using Postman. Read about the supported API features in our supported features section.

>[!div class="nextstepaction"]
>[Supported features](fhir-features-supported.md)
# Python3
Python was created by Guido Van Rossum in the early 90s. It is now one of the most popular programming languages in existence. Its syntax is clean and elegant; it is almost executable pseudocode.

Feedback and corrections are welcome. Original English version by Louie Dinh [@louiedinh](http://twitter.com/louiedinh), email louiedinh [at] [Google's email service]. Chinese translation by Geoff Liu.

Note: this tutorial is written for Python 3. If you want to learn the older Python 2, see [this other tutorial](http://learnxinyminutes.com/docs/python/).
```python
# Single-line comments start with a hash character

""" Multiline strings are wrapped in
    three quotation marks, and are often
    used as multiline comments
"""

####################################################
## 1. Primitive data types and operators
####################################################

# Integers
3  # => 3

# Arithmetic is what you would expect
1 + 1   # => 2
8 - 1   # => 7
10 * 2  # => 20

# Except division, which automatically returns a float
35 / 5  # => 7.0
5 / 3   # => 1.6666666666666667

# The result of integer division rounds down
5 // 3       # => 1
5.0 // 3.0   # => 1.0  # works on floats too
-5 // 3      # => -2
-5.0 // 3.0  # => -2.0

# The result of an operation involving a float is a float
3 * 2.0  # => 6.0

# Modulo
7 % 3  # => 1

# x to the y-th power
2**4  # => 16

# Enforce precedence with parentheses
(1 + 3) * 2  # => 8

# Boolean values
True
False

# Negate with not
not True   # => False
not False  # => True

# Boolean operators; note that and and or are lowercase
True and False  # => False
False or True   # => True

# Integers can also be used as booleans
0 and 2      # => 0
-5 or 0      # => -5
0 == False   # => True
2 == True    # => False
1 == True    # => True

# Test equality with ==
1 == 1  # => True
2 == 1  # => False

# Test inequality with !=
1 != 1  # => False
2 != 1  # => True

# Comparisons
1 < 10  # => True
1 > 10  # => False
2 <= 2  # => True
2 >= 2  # => True

# Comparisons can be chained!
1 < 2 < 3  # => True
2 < 3 < 2  # => False

# Strings can use single or double quotes
"This is a string"
'This is also a string'

# Concatenate strings with +
"Hello " + "world!"  # => "Hello world!"

# A string can be treated like a list of characters
"This is a string"[0]  # => 'T'

# Format strings with .format
"{} can be {}".format("strings", "interpolated")

# You can repeat the formatting arguments to save typing
"{0} be nimble, {0} be quick, {0} jump over the {1}".format("Jack", "candle stick")
# => "Jack be nimble, Jack be quick, Jack jump over the candle stick"

# Use keywords if you don't want to count the arguments
"{name} wants to eat {food}".format(name="Bob", food="lasagna")
# => "Bob wants to eat lasagna"

# If your Python 3 code also needs to run under Python 2.5 and below,
# you can use the old style of formatting as well
"%s can be %s the %s way" % ("strings", "interpolated", "old")

# None is an object
None  # => None

# Don't use == to compare with None; use is instead.
# is checks whether two variables refer to the same object.
"etc" is None  # => False
None is None   # => True

# None, 0, empty strings, empty lists and empty dicts all count as False
# All other values are True
bool(0)   # => False
bool("")  # => False
bool([])  # => False
bool({})  # => False
####################################################
## 2. Variables and collections
####################################################

# print is the built-in print function
print("I'm Python. Nice to meet you!")

# No need to declare variables before assigning to them
# Convention is lowercase with words separated by underscores
some_var = 5
some_var  # => 5

# Accessing an unassigned variable raises an exception
# See the Control Flow section for exception handling
some_unknown_var  # Raises a NameError

# Use lists to store sequences
li = []
# You can also create a list with its elements filled in
other_li = [4, 5, 6]

# Add to the end of a list with append
li.append(1)  # li is now [1]
li.append(2)  # li is now [1, 2]
li.append(4)  # li is now [1, 2, 4]
li.append(3)  # li is now [1, 2, 4, 3]
# Remove from the end with pop
li.pop()      # => 3 and li is now [1, 2, 4]
# Let's put 3 back
li.append(3)  # li is [1, 2, 4, 3] again

# Access a list like you would an array
li[0]  # => 1
# Look at the last element
li[-1]  # => 3

# Out-of-bounds access raises an IndexError
li[4]  # Raises an IndexError

# Lists have a slice syntax
li[1:3]  # => [2, 4]
# Omitting the end takes the tail
li[2:]   # => [4, 3]
# Omitting the beginning takes the head
li[:3]   # => [1, 2, 4]
# Select every other entry
li[::2]  # => [1, 4]
# Reverse the list
li[::-1] # => [3, 4, 2, 1]
# Use any combination of the three parameters to build a slice
# li[start:end:step]

# Remove any element with del
del li[2]  # li is now [1, 2, 3]

# Lists can be added together
# Note: the values of li and other_li are unchanged
li + other_li  # => [1, 2, 3, 4, 5, 6]

# Concatenate lists with extend
li.extend(other_li)  # li is now [1, 2, 3, 4, 5, 6]

# Test whether a list contains a value with in
1 in li  # => True

# Get the length of a list with len
len(li)  # => 6

# Tuples are immutable sequences
tup = (1, 2, 3)
tup[0]      # => 1
tup[0] = 3  # Raises a TypeError

# Most operations allowed on lists work on tuples too
len(tup)         # => 3
tup + (4, 5, 6)  # => (1, 2, 3, 4, 5, 6)
tup[:2]          # => (1, 2)
2 in tup         # => True

# You can unpack tuples and lists into variables
a, b, c = (1, 2, 3)  # a is now 1, b is now 2, and c is now 3
# The parentheses around a tuple can be omitted
d, e, f = 4, 5, 6
# Swapping two variables is this easy
e, d = d, e  # d is now 5 and e is now 4

# Use dictionaries to express mappings
empty_dict = {}
# Here is a prefilled dictionary
filled_dict = {"one": 1, "two": 2, "three": 3}

# Look up values with []
filled_dict["one"]  # => 1

# Get all the keys with keys().
# Because keys returns an iterable object, we wrap the result in list() here.
# We'll cover iterables in detail below.
# Note: dictionary key ordering is not guaranteed; your results may differ.
list(filled_dict.keys())  # => ["three", "two", "one"]

# Get all the values with values(). As with keys, wrap it in list(); ordering may differ too.
list(filled_dict.values())  # => [3, 2, 1]

# Test whether a dictionary contains a key with in
"one" in filled_dict  # => True
1 in filled_dict      # => False

# Accessing a non-existent key raises a KeyError
filled_dict["four"]  # KeyError

# Use get to avoid the KeyError
filled_dict.get("one")   # => 1
filled_dict.get("four")  # => None
# The get method can return a default value when the key is missing
filled_dict.get("one", 4)   # => 1
filled_dict.get("four", 4)  # => 4

# The setdefault method only inserts a value when the key is missing
filled_dict.setdefault("five", 5)  # filled_dict["five"] is set to 5
filled_dict.setdefault("five", 6)  # filled_dict["five"] is still 5

# Adding to a dictionary
filled_dict.update({"four":4})  # => {"one": 1, "two": 2, "three": 3, "four": 4}
filled_dict["four"] = 4         # another way to add

# Remove with del
del filled_dict["one"]  # Removes the key "one" from filled_dict

# Use set to express, well, sets
empty_set = set()
# Initialize a set; the syntax looks like a dictionary's.
some_set = {1, 1, 2, 2, 3, 4}  # some_set is now {1, 2, 3, 4}

# Sets can be assigned to variables
filled_set = some_set

# Add an element to a set
filled_set.add(5)  # filled_set is now {1, 2, 3, 4, 5}

# & takes the intersection
other_set = {3, 4, 5, 6}
filled_set & other_set  # => {3, 4, 5}

# | takes the union
filled_set | other_set  # => {1, 2, 3, 4, 5, 6}

# - takes the difference
{1, 2, 3, 4} - {2, 3, 5}  # => {1, 4}

# in tests whether a set contains an element
2 in filled_set   # => True
10 in filled_set  # => False
####################################################
## 3. Control flow and iterables
####################################################

# Let's just make a variable
some_var = 5

# Here is an if statement. Note that indentation is significant in Python!
# Prints "some_var is smaller than 10"
if some_var > 10:
    print("some_var is bigger than 10")
elif some_var < 10:  # The elif clause is optional
    print("some_var is smaller than 10")
else:                # else is optional too
    print("some_var is exactly 10")


"""
Use a for loop to iterate over a list
prints:
    dog is a mammal
    cat is a mammal
    mouse is a mammal
"""
for animal in ["dog", "cat", "mouse"]:
    print("{} is a mammal".format(animal))

"""
"range(number)" returns the numbers from 0 up to the given number
prints:
    0
    1
    2
    3
"""
for i in range(4):
    print(i)

"""
A while loop runs until its condition is no longer met
prints:
    0
    1
    2
    3
"""
x = 0
while x < 4:
    print(x)
    x += 1  # Shorthand for x = x + 1

# Handle exceptional situations with a try/except block
try:
    # Use raise to throw an exception
    raise IndexError("This is an index error")
except IndexError as e:
    pass                 # pass is a no-op, but you should handle the error here
except (TypeError, NameError):
    pass                 # Multiple kinds of errors can be handled together
else:                    # The else clause is optional and must follow all excepts
    print("All good!")   # Runs only if the try block finished without errors


# Python offers a fundamental abstraction called the iterable.
# An iterable is an object that can be treated as a sequence.
# The object returned by range above, for example, is iterable.

filled_dict = {"one": 1, "two": 2, "three": 3}
our_iterable = filled_dict.keys()
print(our_iterable)  # => dict_keys(['one', 'two', 'three']), an object implementing the iterable interface

# Iterables can be looped over
for i in our_iterable:
    print(i)  # Prints one, two, three

# But they do not support random access
our_iterable[1]  # Raises a TypeError

# An iterable knows how to create an iterator
our_iterator = iter(our_iterable)

# An iterator is an object that remembers its position as it is traversed
# Get the next element with __next__
our_iterator.__next__()  # => "one"

# It remembers its position when __next__ is called again
our_iterator.__next__()  # => "two"
our_iterator.__next__()  # => "three"

# Once the iterator has yielded all of its elements, it raises StopIteration
our_iterator.__next__()  # Raises StopIteration

# You can grab all of an iterator's elements at once with list
list(filled_dict.keys())  # => Returns ["one", "two", "three"]
####################################################
## 4. Functions
####################################################

# Use def to define new functions
def add(x, y):
    print("x is {} and y is {}".format(x, y))
    return x + y  # Return values with the return statement

# Calling a function
add(5, 6)  # => prints "x is 5 and y is 6" and returns 11

# You can also call functions with keyword arguments
add(y=6, x=5)  # Keyword arguments can arrive in any order

# We can define a function with a variable number of positional arguments
def varargs(*args):
    return args

varargs(1, 2, 3)  # => (1, 2, 3)

# We can also define a function with a variable number of keyword arguments
def keyword_args(**kwargs):
    return kwargs

# Let's call it to see what happens:
keyword_args(big="foot", loch="ness")  # => {"big": "foot", "loch": "ness"}


# Both kinds of variable arguments can be mixed
def all_the_args(*args, **kwargs):
    print(args)
    print(kwargs)
"""
all_the_args(1, 2, a=3, b=4) prints:
    (1, 2)
    {"a": 3, "b": 4}
"""

# When calling a variadic function you can do the opposite:
# expand a sequence with * and a dictionary with **.
args = (1, 2, 3, 4)
kwargs = {"a": 3, "b": 4}
all_the_args(*args)            # equivalent to foo(1, 2, 3, 4)
all_the_args(**kwargs)         # equivalent to foo(a=3, b=4)
all_the_args(*args, **kwargs)  # equivalent to foo(1, 2, 3, 4, a=3, b=4)

# Function scope
x = 5

def setX(num):
    # The locally scoped x is not the same as the global x
    x = num   # => 43
    print(x)  # => 43

def setGlobalX(num):
    global x
    print(x)  # => 5
    x = num   # The global x is now assigned
    print(x)  # => 6

setX(43)
setGlobalX(6)

# Functions are first-class citizens in Python
def create_adder(x):
    def adder(y):
        return x + y
    return adder

add_10 = create_adder(10)
add_10(3)  # => 13

# There are also anonymous functions
(lambda x: x > 2)(3)  # => True

# Built-in higher-order functions
map(add_10, [1, 2, 3])  # => [11, 12, 13]
filter(lambda x: x > 5, [3, 4, 5, 6, 7])  # => [6, 7]

# Maps and filters can be simplified with list comprehensions.
# A list comprehension returns another list.
[add_10(i) for i in [1, 2, 3]]         # => [11, 12, 13]
[x for x in [3, 4, 5, 6, 7] if x > 5]  # => [6, 7]
####################################################
## 5. Classes
####################################################

# Define a class that inherits from object
class Human(object):

    # A class attribute, shared by all instances of this class.
    species = "H. sapiens"

    # The constructor, called when an instance is initialized. Note the double
    # underscores before and after the name: they mark attributes or methods
    # that have special meaning to Python but may be user-defined. You should
    # not use this naming style for your own names.
    def __init__(self, name):
        # Assign the argument to the instance's name attribute
        self.name = name

    # An instance method. The first parameter is always self, the instance object.
    def say(self, msg):
        return "{name}: {message}".format(name=self.name, message=msg)

    # A class method, shared by all instances of this class.
    # Its first parameter is the class object.
    @classmethod
    def get_species(cls):
        return cls.species

    # A static method. Called without an instance or class binding.
    @staticmethod
    def grunt():
        return "*grunt*"


# Construct an instance
i = Human(name="Ian")
print(i.say("hi"))     # prints "Ian: hi"

j = Human("Joel")
print(j.say("hello"))  # prints "Joel: hello"

# Call a class method
i.get_species()  # => "H. sapiens"

# Change the shared class attribute
Human.species = "H. neanderthalensis"
i.get_species()  # => "H. neanderthalensis"
j.get_species()  # => "H. neanderthalensis"

# Call the static method
Human.grunt()  # => "*grunt*"
####################################################
## 6. Modules
####################################################

# Import modules with import
import math
print(math.sqrt(16))  # => 4.0

# You can also import individual values from a module
from math import ceil, floor
print(ceil(3.7))   # => 4.0
print(floor(3.7))  # => 3.0

# You can import all values from a module
# Warning: this is not recommended
from math import *

# Abbreviate module names like this
import math as m
math.sqrt(16) == m.sqrt(16)  # => True

# Python modules are just ordinary Python files. You can write your own and
# import them; the name of the module is the name of the file.

# You can list all the values in a module like this
import math
dir(math)
####################################################
## 7. Advanced
####################################################

# Generators make it easy to write lazy computations
def double_numbers(iterable):
    for i in iterable:
        yield i + i

# A generator computes the next value only when it is needed. Each pass
# through the loop generates one value, instead of computing them all up front.
#
# range's return value is a generator too; otherwise a list of 1 to 900000000
# would take a lot of time and memory to build.
#
# If you want to use a Python keyword as a variable name,
# you can add an underscore to distinguish it.
range_ = range(1, 900000000)

# This stops as soon as a result >= 30 is found,
# which means `double_numbers` never generates a number greater than 30.
for i in double_numbers(range_):
    print(i)
    if i >= 30:
        break

# Decorators
# In this example, beg decorates say.
# beg calls say first. If the returned say_please is true,
# beg changes the returned string.
from functools import wraps


def beg(target_function):
    @wraps(target_function)
    def wrapper(*args, **kwargs):
        msg, say_please = target_function(*args, **kwargs)
        if say_please:
            return "{} {}".format(msg, "Please! I am poor :(")
        return msg

    return wrapper


@beg
def say(say_please=False):
    msg = "Can you buy me a beer?"
    return msg, say_please


print(say())                 # Can you buy me a beer?
print(say(say_please=True))  # Can you buy me a beer? Please! I am poor :(
```
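To tie several of the sections above together — classes, generators, and list comprehensions — here is a short self-contained sketch (the `Counter` class is invented here purely for illustration):

```python
class Counter(object):
    """A tiny iterable class: section 5's classes meet section 7's generators."""

    def __init__(self, limit):
        self.limit = limit

    def __iter__(self):
        # A generator method: yields values lazily, one per loop iteration
        n = 0
        while n < self.limit:
            yield n
            n += 1

# A list comprehension (section 4) over our custom iterable (section 3)
squares = [x * x for x in Counter(5)]
print(squares)  # → [0, 1, 4, 9, 16]
```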
## Want to keep learning?

### Free online material (in English)
* [Learn Python The Hard Way](http://learnpythonthehardway.org/book/)
* [Dive Into Python](http://www.diveintopython.net/)
* [Ideas for Python Projects](http://pythonpracticeprojects.com)
* [The Official Docs](http://docs.python.org/3/)
* [Hitchhiker's Guide to Python](http://docs.python-guide.org/en/latest/)
* [Python Module of the Week](http://pymotw.com/3/)
* [A Crash Course in Python for Scientists](http://nbviewer.ipython.org/5920182)
### Books (also in English)
* [Programming Python](http://www.amazon.com/gp/product/0596158106/ref=as_li_qf_sp_asin_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0596158106&linkCode=as2&tag=homebits04-20)
* [Dive Into Python](http://www.amazon.com/gp/product/1441413022/ref=as_li_tf_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1441413022&linkCode=as2&tag=homebits04-20)
* [Python Essential Reference](http://www.amazon.com/gp/product/0672329786/ref=as_li_tf_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0672329786&linkCode=as2&tag=homebits04-20)
# Koa-static Middleware
In general, a web HTTP request can get one of three kinds of responses:

* A file, e.g. js, css, png, jpg, gif
* A static directory
* Resource not found (HTTP 404)

For handling static files in a project, Koa also has a ready-made module that saves us the many steps of reading files from a local directory ourselves.

## Installation
```
npm install koa-static
```
or
```
yarn add koa-static
```
## Usage

koa-static is simple to use; the core code is:
``` js
const static_ = require('koa-static')
const path = require('path')
var app = new Koa();
app.use(static_(
path.join(__dirname, './static')
))
```
The complete example code is as follows:
``` js
const Koa = require('koa');
const Router = require('koa-router');
const static_ = require('koa-static')
const path = require('path')
var app = new Koa();
var router = new Router();
app.use(static_(
path.join(__dirname, './static')
))
router.get('/', (ctx, next) => {
ctx.body = 'home'
});
router.get('/list', (ctx, next) => {
ctx.body = 'list'
})
router.get('/api', (ctx, next) => {
let res = {hello: 'world'}
ctx.set("Content-Type", "application/json")
ctx.body = JSON.stringify(res)
})
app
.use(router.routes())
.use(router.allowedMethods());
app.listen(3000)
```
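To smoke-test the `/api` route from outside the browser, any HTTP client will do. Here is a small Python sketch, assuming the server above is running on localhost:3000 (`parse_api_response` is a helper name made up here):

```python
import json
import urllib.request

API_BASE = "http://localhost:3000"  # assumed dev address, from app.listen(3000)

def parse_api_response(body: bytes) -> dict:
    """Decode the JSON body returned by the /api route."""
    return json.loads(body)

def fetch_api() -> dict:
    """GET /api from the running Koa server and parse the result."""
    with urllib.request.urlopen(f"{API_BASE}/api") as resp:
        return parse_api_response(resp.read())

# With the server above running, fetch_api() should return {'hello': 'world'}.
print(parse_api_response(b'{"hello": "world"}'))  # → {'hello': 'world'}
```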
I created a `static` directory under the directory where the js file runs, and inside `static` added a file named `demo.js` with the following content:
``` js
console.log('hello james')
```
After starting the project, visiting http://localhost:3000/demo.js shows the browser respond with:
```
console.log('hello james')
```
---
title: "Extending the Properties, Task List, Output, and Options Windows | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "vs-ide-sdk"
ms.tgt_pltfrm: ""
ms.topic: "article"
helpviewer_keywords:
- "properties pane"
- "task list"
- "output window"
- "properties window"
- "tutorials"
- "tool windows"
ms.assetid: 06990510-5424-44b8-9fd9-6481acec5c76
caps.latest.revision: 37
ms.author: "gregvanl"
manager: "ghogen"
translation.priority.mt:
- "cs-cz"
- "de-de"
- "es-es"
- "fr-fr"
- "it-it"
- "ja-jp"
- "ko-kr"
- "pl-pl"
- "pt-br"
- "ru-ru"
- "tr-tr"
- "zh-cn"
- "zh-tw"
---
# Extending the Properties, Task List, Output, and Options Windows
You can access any tool window in Visual Studio. This walkthrough shows how to integrate information about your tool window into a new **Options** page and a new setting on the **Properties** page, and also how to write to the **Task List** and **Output** windows.
## Prerequisites
Starting in Visual Studio 2015, you do not install the Visual Studio SDK from the download center. It is included as an optional feature in Visual Studio setup. You can also install the VS SDK later on. For more information, see [Installing the Visual Studio SDK](../extensibility/installing-the-visual-studio-sdk.md).
## Create an Extension with a Tool Window
1. Create a project named **TodoList** using the VSIX template, and add a custom tool window item template named **TodoWindow**.
> [!NOTE]
> For more information about creating an extension with a tool window, see [Creating an Extension with a Tool Window](../extensibility/creating-an-extension-with-a-tool-window.md).
## Set Up the Tool Window
Add a TextBox in which to type a new ToDo item, a Button to add the new item to the list, and a ListBox to display the items on the list.
1. In TodoWindow.xaml, delete the Button, TextBox, and StackPanel controls from the UserControl.
> [!NOTE]
> This does not delete the **button1_Click** event handler, which you will reuse in a later step.
2. From the **All WPF Controls** section of the **Toolbox**, drag a **Canvas** control to the grid.
3. Drag a **TextBox**, a **Button**, and a **ListBox** to the Canvas. Arrange the elements so that the TextBox and the Button are on the same level, and the ListBox fills the rest of the window below them, as in the picture below.

4. In the XAML pane, find the Button and set its Content property to **Add**. Reconnect the button event handler to the Button control by adding a `Click="button1_Click"` attribute. The Canvas block should look like this:
```xml
<Canvas HorizontalAlignment="Left" Width="306">
<TextBox x:Name="textBox" HorizontalAlignment="Left" Height="23" Margin="10,10,0,0" TextWrapping="Wrap" Text="TextBox" VerticalAlignment="Top" Width="208"/>
<Button x:Name="button" Content="Add" HorizontalAlignment="Left" Margin="236,13,0,0" VerticalAlignment="Top" Width="48" Click="button1_Click"/>
<ListBox x:Name="listBox" HorizontalAlignment="Left" Height="222" Margin="10,56,0,0" VerticalAlignment="Top" Width="274"/>
</Canvas>
```
#### Customize the constructor
1. In the TodoWindowControl.xaml.cs file, add the following using statement:
```c#
using System;
```
2. Add a public reference to the TodoWindow and have the TodoWindowControl constructor take a TodoWindow parameter. The code should look like this:
```c#
public TodoWindow parent;
public TodoWindowControl(TodoWindow window)
{
InitializeComponent();
parent = window;
}
```
3. In TodoWindow.cs, change TodoWindowControl constructor to include the TodoWindow parameter. The code should look like this:
```c#
public TodoWindow() : base(null)
{
this.Caption = "TodoWindow";
this.BitmapResourceID = 301;
this.BitmapIndex = 1;
this.Content = new TodoWindowControl(this);
}
```
## Create an Options Page
You can provide a page in the **Options** dialog box so that users can change settings for the tool window. Creating an Options page requires both a class that describes the options and an entry in the TodoListPackage.cs or TodoListPackage.vb file.
1. Add a class named `ToolsOptions.cs`. Make the ToolsOptions class inherit from <xref:Microsoft.VisualStudio.Shell.DialogPage>.
```c#
class ToolsOptions : DialogPage
{
}
```
2. Add the following using statement:
```c#
using Microsoft.VisualStudio.Shell;
```
3. The Options page in this walkthrough provides only one option named DaysAhead. Add a private field named **daysAhead** and a property named **DaysAhead** to the ToolsOptions class:
```c#
private double daysAhead;
public double DaysAhead
{
get { return daysAhead; }
set { daysAhead = value; }
}
```
Now you must make the project aware of this Options page.
#### Make the Options page available to users
1. In TodoWindowPackage.cs, add a <xref:Microsoft.VisualStudio.Shell.ProvideOptionPageAttribute> to the TodoWindowPackage class:
```c#
[ProvideOptionPage(typeof(ToolsOptions), "ToDo", "General", 101, 106, true)]
```
2. The first parameter to the ProvideOptionPage constructor is the type of the class ToolsOptions, which you created earlier. The second parameter, "ToDo", is the name of the category in the **Options** dialog box. The third parameter, "General", is the name of the subcategory of the **Options** dialog box where the Options page will be available. The next two parameters are resource IDs for strings; the first is the name of the category, and the second is the name of the subcategory. The final parameter determines whether this page can be accessed by using automation.
When a user opens your Options page, it should resemble the following picture.

Notice the category **ToDo** and the subcategory **General**.
## Make Data Available to the Properties Window
You can make To Do list information available by creating a class named TodoItem that stores information about the individual items in the ToDo list.
1. Add a class named `TodoItem.cs`.
When the tool window is available to users, the items in the ListBox will be represented by TodoItems. When the user selects one of these items in the ListBox, the **Properties** window will display information about the item.
To make data available in the **Properties** window, you turn the data into public properties that have two special attributes, `Description` and `Category`. `Description` is the text that appears at the bottom of the **Properties** window. `Category` determines where the property should appear when the **Properties** window is displayed in the **Categorized** view. In the following picture, the **Properties** window is in **Categorized** view, the **Name** property in the **ToDo Fields** category is selected, and the description of the **Name** property is displayed at the bottom of the window.

2. Add the following using statements the TodoItem.cs file.
```c#
using System.ComponentModel;
using System.Windows.Forms;
using Microsoft.VisualStudio.Shell.Interop;
```
3. Add the `public` access modifier to the class declaration.
```c#
public class TodoItem
{
}
```
Add the two properties, Name and DueDate. We'll do the UpdateList() and CheckForErrors() later.
```c#
public class TodoItem
{
private TodoWindowControl parent;
private string name;
[Description("Name of the ToDo item")]
[Category("ToDo Fields")]
public string Name
{
get { return name; }
set
{
name = value;
parent.UpdateList(this);
}
}
private DateTime dueDate;
[Description("Due date of the ToDo item")]
[Category("ToDo Fields")]
public DateTime DueDate
{
get { return dueDate; }
set
{
dueDate = value;
parent.UpdateList(this);
parent.CheckForErrors();
}
}
}
```
4. Add a private reference to the user control. Add a constructor that takes the user control and the name for this ToDo item. To find the value for daysAhead, it gets the Options page property.
```c#
private TodoWindowControl parent;
public TodoItem(TodoWindowControl control, string itemName)
{
parent = control;
name = itemName;
dueDate = DateTime.Now;
double daysAhead = 0;
IVsPackage package = parent.parent.Package as IVsPackage;
if (package != null)
{
object obj;
package.GetAutomationObject("ToDo.General", out obj);
ToolsOptions options = obj as ToolsOptions;
if (options != null)
{
daysAhead = options.DaysAhead;
}
}
dueDate = dueDate.AddDays(daysAhead);
}
```
5. Because instances of the `TodoItem` class will be stored in the ListBox and the ListBox will call the `ToString` function, you must overload the `ToString` function. Add the following code to TodoItem.cs, after the constructor and before the end of the class.
```c#
public override string ToString()
{
return name + " Due: " + dueDate.ToShortDateString();
}
```
6. In TodoWindowControl.xaml.cs, add stub methods to the TodoWindowControl class for the `CheckForError` and `UpdateList` methods. Put them after the ProcessDialogChar and before the end of the file.
```c#
public void CheckForErrors()
{
}
public void UpdateList(TodoItem item)
{
}
```
The `CheckForError` method will call a method that has the same name in the parent object, and that method will check whether any errors have occurred and handle them correctly. The `UpdateList` method will update the ListBox in the parent control; the method is called when the `Name` and `DueDate` properties in this class change. They will be implemented later.
## Integrate into the Properties Window
Now write the code that manages the ListBox, which will be tied to the **Properties** window.
You must change the button click handler to read the TextBox, create a TodoItem, and adds it to the ListBox.
1. Replace the existing `button1_Click` function with code that creates a new TodoItem and adds it to the ListBox. It calls TrackSelection(), which will be defined later.
```c#
private void button1_Click(object sender, RoutedEventArgs e)
{
if (textBox.Text.Length > 0)
{
var item = new TodoItem(this, textBox.Text);
listBox.Items.Add(item);
TrackSelection();
CheckForErrors();
}
}
```
2. In the Design view select the ListBox control. In the **Properties** window click the **Event handlers** button and find the SelectionChanged event. Fill in the text box with **listBox_SelectionChanged**. Doing this adds a stub for a SelectionChanged handler and assigns it to the event.
3. Implement the TrackSelection() method. Since you will need to get the <xref:Microsoft.VisualStudio.Shell.Interop.SVsUIShell> and <xref:Microsoft.VisualStudio.Shell.Interop.STrackSelection> services, you need to make <xref:Microsoft.VisualStudio.Shell.WindowPane.GetService%2A> accessible to the TodoWindowControl. Add the following method to the TodoWindow class:
```
internal object GetVsService(Type service)
{
return GetService(service);
}
```
4. Add the following using statements to TodoWindowControl.xaml.cs:
```c#
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Shell.Interop;
using Microsoft.VisualStudio;
using Microsoft.VisualStudio.Shell;
```
5. Fill in the SelectionChanged handler as follows:
```
private void listBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
TrackSelection();
}
```
6. Now, fill in the TrackSelection function, which will provide integration with the **Properties** window. This function is called when the user adds an item to the ListBox or clicks an item in the ListBox. It adds the contents of the ListBox to a SelectionContainer and passes the SelectionContainer to the **Properties** window's <xref:Microsoft.VisualStudio.Shell.Interop.ITrackSelection.OnSelectChange%2A> event handler. The TrackSelection service tracks selected objects in the user interface (UI) and displays their properties.
```c#
private SelectionContainer mySelContainer;
private System.Collections.ArrayList mySelItems;
private IVsWindowFrame frame = null;
private void TrackSelection()
{
if (frame == null)
{
var shell = parent.GetVsService(typeof(SVsUIShell)) as IVsUIShell;
if (shell != null)
{
var guidPropertyBrowser = new
Guid(ToolWindowGuids.PropertyBrowser);
shell.FindToolWindow((uint)__VSFINDTOOLWIN.FTW_fForceCreate,
ref guidPropertyBrowser, out frame);
}
}
if (frame != null)
{
frame.Show();
}
if (mySelContainer == null)
{
mySelContainer = new SelectionContainer();
}
mySelItems = new System.Collections.ArrayList();
var selected = listBox.SelectedItem as TodoItem;
if (selected != null)
{
mySelItems.Add(selected);
}
mySelContainer.SelectedObjects = mySelItems;
ITrackSelection track = parent.GetVsService(typeof(STrackSelection))
as ITrackSelection;
if (track != null)
{
track.OnSelectChange(mySelContainer);
}
}
```
Now that you have a class that the **Properties** window can use, you can integrate the **Properties** window with the tool window. When the user clicks an item in the ListBox in the tool window, the **Properties** window should be updated accordingly. Similarly, when the user changes a ToDo item in the **Properties** window, the associated item should be updated.
7. Now, add the rest of the UpdateList function code in TodoWindowControl.xaml.cs. It should drop and re-add the modified TodoItem from the ListBox.
```c#
public void UpdateList(TodoItem item)
{
var index = listBox.SelectedIndex;
listBox.Items.RemoveAt(index);
listBox.Items.Insert(index, item);
        listBox.SelectedIndex = index;
}
```
8. Test your code. Build the project and start debugging. The experimental instance should appear.
9. Open the **Tools / Options** pages. You should see the ToDo category in the left pane. Categories are listed in alphabetical order, so look under the Ts.
10. On the Todo options page, you should see the DaysAhead property set to **0**. Change it to **2**.
11. On the View / Other Windows menu, open **TodoWindow**. Type **EndDate** in the text box and click **Add**.
12. In the list box you should see a date two days later than today.
## Add Text to the Output Window and Items to the Task List
For the **Task List**, you create a new object of type Task, and then add that Task object to the **Task List** by calling its Add method. To write to the **Output** window, you call its GetPane method to obtain a pane object, and then you call the OutputString method of the pane object.
1. In TodoWindowControl.xaml.cs, in the `button1_Click` method, add code to get the **General** pane of the **Output** window (which is the default), and write to it. The method should look like this:
```c#
private void button1_Click(object sender, EventArgs e)
{
if (textBox.Text.Length > 0)
{
var item = new TodoItem(this, textBox.Text);
listBox.Items.Add(item);
var outputWindow = parent.GetVsService(
typeof(SVsOutputWindow)) as IVsOutputWindow;
IVsOutputWindowPane pane;
Guid guidGeneralPane = VSConstants.GUID_OutWindowGeneralPane;
outputWindow.GetPane(ref guidGeneralPane, out pane);
if (pane != null)
{
pane.OutputString(string.Format(
"To Do item created: {0}\r\n",
item.ToString()));
}
TrackSelection();
CheckForErrors();
}
}
```
2. In order to add items to the Task List, you need to add a nested class to the TodoWindowControl class. The nested class needs to derive from <xref:Microsoft.VisualStudio.Shell.TaskProvider>. Add the following code to the end of the TodoWindowControl class.
```c#
[Guid("72de1eAD-a00c-4f57-bff7-57edb162d0be")]
public class TodoWindowTaskProvider : TaskProvider
{
public TodoWindowTaskProvider(IServiceProvider sp)
: base(sp)
{
}
}
```
3. Next add a private reference to TodoTaskProvider and a CreateProvider() method to the TodoWindowControl class. The code should look like this:
```c#
private TodoWindowTaskProvider taskProvider;
private void CreateProvider()
{
if (taskProvider == null)
{
taskProvider = new TodoWindowTaskProvider(parent);
taskProvider.ProviderName = "To Do";
}
}
```
4. Add ClearError(), which clears the Task List, and ReportError(), which adds an entry to the Task List, to the TodoWindowControl class.
```c#
private void ClearError()
{
CreateProvider();
taskProvider.Tasks.Clear();
}
private void ReportError(string p)
{
CreateProvider();
var errorTask = new Task();
errorTask.CanDelete = false;
errorTask.Category = TaskCategory.Comments;
errorTask.Text = p;
taskProvider.Tasks.Add(errorTask);
taskProvider.Show();
var taskList = parent.GetVsService(typeof(SVsTaskList))
as IVsTaskList2;
if (taskList == null)
{
return;
}
var guidProvider = typeof(TodoWindowTaskProvider).GUID;
taskList.SetActiveProvider(ref guidProvider);
}
```
5. Now implement the CheckForErrors method, as follows.
```c#
public void CheckForErrors()
{
foreach (TodoItem item in listBox.Items)
{
if (item.DueDate < DateTime.Now)
{
ReportError("To Do Item is out of date: "
+ item.ToString());
}
}
}
```
## Trying It Out
1. Build the project and start debugging. The experimental instance appears.
2. Open the TodoWindow (**View / Other Windows / TodoWindow**).
3. Type something in the text box and then click **Add**.
A due date 2 days after today is added to the list box. No errors are generated, and the **Task List** (**View / Task List**) should have no entries.
4. Now change the setting on the **Tools / Options / ToDo** page from **2** back to **0**.
5. Type something else in the **TodoWindow** and then click **Add** again. This triggers an error and also an entry in the **Task List**.
    Because **DaysAhead** is now **0**, the due date of each new item is set to the current time, so new items are immediately out of date.
6. On the **View** menu, click **Output** to open the **Output** window.
    Notice that every time you add an item, a message is displayed in the **General** pane of the **Output** window.
7. Click one of the items in the ListBox.
The **Properties** window displays the two properties for the item.
8. Change one of the properties and then press ENTER.
The item is updated in the ListBox. | 40.591522 | 609 | 0.639863 | eng_Latn | 0.931894 |
53f2d79106edd5e0640622589a70bc7f6b48c44b | 3,758 | md | Markdown | archives/2021-02-22.md | justjavac/zhihu-trending-hot-video | 74f09a2b2567ab956c32aca06e770037bd2e33a8 | [
"MIT"
] | 13 | 2020-11-24T07:44:56.000Z | 2021-04-26T21:12:06.000Z | archives/2021-02-22.md | justjavac/zhihu-trending-hot-video | 74f09a2b2567ab956c32aca06e770037bd2e33a8 | [
"MIT"
] | null | null | null | archives/2021-02-22.md | justjavac/zhihu-trending-hot-video | 74f09a2b2567ab956c32aca06e770037bd2e33a8 | [
"MIT"
] | 3 | 2020-11-28T11:32:40.000Z | 2022-01-22T00:43:46.000Z | # 2021-02-22
44 items in total

<!-- BEGIN -->
<!-- Last updated Mon Feb 22 2021 23:12:07 GMT+0800 (CST) -->

1. [What happened in China in 1987? (Turbulent Forty Years · 1987)](https://www.zhihu.com/zvideo/1347222667146174464)
2. [(Science) Why did humans lose most of their body hair during evolution?](https://www.zhihu.com/zvideo/1347255478666383360)
3. [Taking apart a camera as big as a head to see its internal structure and how it works](https://www.zhihu.com/zvideo/1347259471853453312)
4. [Just how many ways are there to cook chicken wings? I'll follow this road to the very end~](https://www.zhihu.com/zvideo/1347246954011791360)
5. [Is it magic or an IQ test? I feel like performing this on the street could easily get you beaten up](https://www.zhihu.com/zvideo/1347235722735263744)
6. [Can you drink urine? Why do some people like drinking it? Doctors say we all grew up drinking it](https://www.zhihu.com/zvideo/1346784480997335040)
7. [A restaurant's secret garlic chili sauce recipe revealed: add just one extra step and you too can cook like a chef](https://www.zhihu.com/zvideo/1346453118490775552)
8. [Have you ever seen the trendiest elderly people?](https://www.zhihu.com/zvideo/1345473257122902016)
9. [Don't deep-fry salt-and-pepper prawns directly; remember these two points for a crispy outside and tender inside that doesn't absorb oil](https://www.zhihu.com/zvideo/1347099455997546496)
10. [Sandbox wargame: Cold Iron Chains across the Dadu Bridge (Part 1) — the fall of Shi Dakai + the Yihai Alliance](https://www.zhihu.com/zvideo/1347273615939903488)
11. [Gu Tao: Young people who want to succeed only need to do these four things, none of which involve money (214)](https://www.zhihu.com/zvideo/1347254693320675328)
12. [What would happen if animals could talk when they get angry?!](https://www.zhihu.com/zvideo/1347159916528922624)
13. [Don't mess with pandas! Gugu, the panda you least want to provoke](https://www.zhihu.com/zvideo/1347289809380077568)
14. [Cold water harms the stomach? Strict postpartum confinement? Brown sugar water replenishes blood? These "older generation" tips aren't entirely useless!](https://www.zhihu.com/zvideo/1346889165192323072)
15. [What did knights-errant in ancient times live on?](https://www.zhihu.com/zvideo/1347214187714732033)
16. [Red pandas get all the cherries they want for the New Year! Though in their eyes cherries are no match for apples!](https://www.zhihu.com/zvideo/1347211701545234432)
17. [Bought a cute pet online on a whim; after a few days in transit, unboxing it was adorable](https://www.zhihu.com/zvideo/1347160582404206592)
18. [Baffling... A man in Chengdu climbed a utility pole at night to do sit-ups, causing a blackout for tens of thousands of users](https://www.zhihu.com/zvideo/1347250147546509312)
19. [A young man asks a Vietnamese girl selling bamboo shoots outright whether she'd marry him — see what she says.](https://www.zhihu.com/zvideo/1346777370360176640)
20. [Dissecting a magic prop to see how the stick passes through the coin box — marvel at the clever use of space](https://www.zhihu.com/zvideo/1347237290364809216)
21. [The cooking squad's combat power: braised pork even in the field, caramelizing sugar with a rifle on their back!](https://www.zhihu.com/zvideo/1347171723440324608)
22. [A Vietnamese girl married in China for 12 years video-calls home for the first time; her mom cries at the sight and her dad barely recognizes her](https://www.zhihu.com/zvideo/1346962962729668609)
23. [Precious footage of early humans taming a goofy parrot](https://www.zhihu.com/zvideo/1346847805491761152)
24. ["Once a day: hello, first love"](https://www.zhihu.com/zvideo/1346791904210862080)
25. [Comedy: A guy buys up cicada nymphs, males only! Because males are annoyingly loud, he releases them at the home of whoever he dislikes](https://www.zhihu.com/zvideo/1346879864977436673)
26. [Comedy: A guy envies the rounder moon in the neighboring village and resolutely changes his village registration, only to regret it!](https://www.zhihu.com/zvideo/1345303225201864704)
27. [Huh? How does this "fountain" cycle water automatically?! A dad builds a "little waterwheel" and his 3-year-old twins can't stop playing](https://www.zhihu.com/zvideo/1346957039181078528)
28. [The owner strings red threads in front of a golden shaded cat — can it dodge the obstacles?](https://www.zhihu.com/zvideo/1346125799842668544)
29. [Update on border-defense hero regiment commander Qi Fabao: recovering well, wounds healed before Spring Festival!](https://www.zhihu.com/zvideo/1346843500302991360)
30. [(Postgraduate re-exam) A doctoral supervisor shares principles and thinking for résumés!](https://www.zhihu.com/zvideo/1346804567653474304)
31. [Taking apart a Doraemon toy that produces props from its belly to study the internal mechanics](https://www.zhihu.com/zvideo/1346871797804113920)
32. [In 2021, they start by fighting the ice](https://www.zhihu.com/zvideo/1346402849522307073)
33. [He's only three and has already done everything I've wanted to do in my life](https://www.zhihu.com/zvideo/1346774119179079680)
34. [Dog: I can't be too cute, I have to look fierce so you fear me!](https://www.zhihu.com/zvideo/1346932327319298048)
35. [A woman goes on stage and hits a Disney theater performer!](https://www.zhihu.com/zvideo/1346906340619685888)
36. [A blogger with millions of followers freeloads a seafood feast from a fan, which even includes the "king of fish", a three-knife fish](https://www.zhihu.com/zvideo/1346103742656282624)
37. [Filming a real offline Xianyu deal! An internal-version iPhone X for only 1,300?](https://www.zhihu.com/zvideo/1346775539315896320)
38. [Can it really produce 100 kg of force? Tearing down a viral wrist-power gyro ball and testing its top speed](https://www.zhihu.com/zvideo/1346870924315095040)
39. [This is no longer a proper bird; compared with other birds it's just too different](https://www.zhihu.com/zvideo/1346818059043237888)
40. [Poached shrimp paste | homemade bouncy, chewy shrimp paste, spicy flavor, super tasty~!](https://www.zhihu.com/zvideo/1346142057296281600)
41. [Sending Ultraman into the sky with fireworks — the kid is too excited to speak!](https://www.zhihu.com/zvideo/1346638997091983360)
42. ["Your name is unknown, but your deeds live on forever!"](https://www.zhihu.com/zvideo/1346517099389517824)
43. [A kitten's first taste of meat — the scene completely spirals out of control](https://www.zhihu.com/zvideo/1346504660572975105)
44. [An Oscar-nominated film starring Gong Li 30 years ago that could only premiere in Japan](https://www.zhihu.com/zvideo/1345658881180061696)

<!-- END -->
| 57.815385 | 92 | 0.776211 | yue_Hant | 0.517971 |
53f3484da15cfd293a92087dc0b6d21624f71af2 | 3,653 | md | Markdown | README.md | trustbuilder/trustbuilder-gateway | deaec53160e6ba1181d5fca18206a4d9696a1781 | [
"BSD-3-Clause"
] | 1 | 2017-03-31T08:16:18.000Z | 2017-03-31T08:16:18.000Z | README.md | trustbuilder/trustbuilder-gateway | deaec53160e6ba1181d5fca18206a4d9696a1781 | [
"BSD-3-Clause"
] | null | null | null | README.md | trustbuilder/trustbuilder-gateway | deaec53160e6ba1181d5fca18206a4d9696a1781 | [
"BSD-3-Clause"
] | null | null | null | Name
====
TrustBuilder Gateway - A Policy Enforcement Point written in Lua with support for session cookies or API-based requests
Table of Contents
=================
* [Name](#name)
* [Status](#status)
* [Description](#description)
* [Synopsis](#synopsis)
* [Authentication](#authentication)
* [Authorization](#authorization)
Status
======
This library is considered experimental and still under active development.
The API is still in flux and may change without notice.
Description
===========
This library requires OpenResty and Redis and the following modules:
* [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector)
* [lua-resty-cookie](https://github.com/cloudflare/lua-resty-cookie)
Synopsis
========
```
server {
# Standard nginx configuration
# Begin variable configuration
set $session_store_location '127.0.0.1:6379';
set $session_timeout '600';
set $session_inactivity_timeout '300';
set $session_cookie 'MY_SESSION_COOKIE';
set $session_auth 'foobared';
set $login_url '/idhub/gw-login';
set $authorization_url '/authzServer';
set $authorization_cache_time '60';
#Location configurations
#Authentication Interface Example
location /auth {
header_filter_by_lua_block {
require("trustbuilder-gateway.auth_interface")()
}
content_by_lua_block {
local b64session,err = ngx.encode_base64('{"principal":"Username", "meta":{"auth_time":"1460382419000"} ,"attributes":{"email":"[email protected]"}}')
ngx.header["X-TB-AUTH"] = b64session
ngx.say("AUTH")
}
}
location = /t {
access_by_lua_block {
--- Send credential username to backend
local header_map = {
my_username = "credential.principal",
my_email = "credential.attributes['email']"
}
require("trustbuilder-gateway.protect").web_app(header_map)
}
content_by_lua_block {
ngx.say("HIT")
}
}
location = /api {
access_by_lua_block {
--- Send credential username to backend
local header_map = {
my_username = "credential.principal",
my_email = "credential.attributes['email']"
}
require("trustbuilder-gateway.protect").api(header_map)
}
}
# Authorization Header
location = /authzServer {
internal;
proxy_pass http://127.0.0.1:$server_port/authorizationController;
}
location = /authorizationController {
content_by_lua_block {
--- Allow
ngx.header["X-TB-AZN"] = '{"score":1}'
--- Deny
--- ngx.header["X-TB-AZN"] = '{"score":0, "reason": "Deny"}'
ngx.exit(200)
}
}
}
```
[Back to TOC](#table-of-contents)
Authentication
==============
To authenticate a user in the gateway, it is sufficient to set a response header `X-TB-AUTH` containing base64-encoded JSON.
Example
```
{
"meta": {
"created": 1448283712000,
"auth_time": 1461853344933,
"updated": 1458906917000
},
"attributes": {
"nickname": "User",
"groups": [
"Admin",
"Engineer"
],
"locale": "nl-BE",
"website": "http://www.securit.biz",
"isAdministrator": "yes"
},
"principal": "MyUsername"
}
```
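As a minimal sketch of producing that header value (assuming a Python backend; the header name and JSON shape are taken from the examples above, and the field values here are invented for illustration), the credential can be serialized and base64-encoded like this:

```python
import base64
import json

# Credential JSON in the shape the gateway expects (see the example above).
# The values are placeholders, not real data.
credential = {
    "principal": "MyUsername",
    "meta": {"auth_time": "1460382419000"},
    "attributes": {"email": "user@example.com"},
}

def encode_auth_header(cred: dict) -> str:
    """Serialize the credential to JSON and base64-encode it for X-TB-AUTH."""
    return base64.b64encode(json.dumps(cred).encode("utf-8")).decode("ascii")

header_value = encode_auth_header(credential)
```

The resulting string is what the authentication backend would place in the `X-TB-AUTH` response header; the gateway decodes it and establishes the session.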
[Back to TOC](#table-of-contents)
Authorization
=============
To authorize the user, you have all the user information, plus the request location and method, available. As a response you are expected to send JSON with a score: 0 to deny, 1 to allow.
[Back to TOC](#table-of-contents)
| 24.353333 | 171 | 0.616753 | eng_Latn | 0.653994 |
53f3f41cba75908341e7929b472da48b7127d316 | 744 | md | Markdown | CHANGELOG.md | ClassWizard/ZIPFoundation | 66e4724e0753e5b2ccc94ff2040097271e27e608 | [
"MIT"
] | null | null | null | CHANGELOG.md | ClassWizard/ZIPFoundation | 66e4724e0753e5b2ccc94ff2040097271e27e608 | [
"MIT"
] | null | null | null | CHANGELOG.md | ClassWizard/ZIPFoundation | 66e4724e0753e5b2ccc94ff2040097271e27e608 | [
"MIT"
] | null | null | null | # Changelog
## [0.9.2](https://github.com/weichsel/ZIPFoundation/releases/tag/0.9.2)
### Updated
- Changed default POSIX permissions when file attributes are missing
- Improved docs
- Fixed a compiler warning when compiling with the latest Xcode 9 beta
## [0.9.1](https://github.com/weichsel/ZIPFoundation/releases/tag/0.9.1)
### Added
- Optional parameter to skip CRC32 checksum calculation
### Updated
- Tweaked POSIX buffer sizes to improve IO and compression performance
- Improved source readability
- Refined documentation
### Removed
- Optional parameter to skip decompression during entry retrieval
## [0.9.0](https://github.com/weichsel/ZIPFoundation/releases/tag/0.9.0)
### Added
- Initial release of ZIP Foundation. | 28.615385 | 72 | 0.75 | eng_Latn | 0.804979 |
53f42a964913ebdf350ff5844d624d3d6e2d7d61 | 2,121 | md | Markdown | README.md | mytechnotalent/Fundamental-C- | f0113147bc858df8aefc399faa9fe51e08d51a42 | [
"MIT"
] | 24 | 2021-06-06T14:23:29.000Z | 2022-01-05T00:00:41.000Z | README.md | mytechnotalent/Fundamental-C- | f0113147bc858df8aefc399faa9fe51e08d51a42 | [
"MIT"
] | null | null | null | README.md | mytechnotalent/Fundamental-C- | f0113147bc858df8aefc399faa9fe51e08d51a42 | [
"MIT"
] | 5 | 2021-07-09T09:27:12.000Z | 2021-12-06T01:43:53.000Z | 
## FREE Reverse Engineering Self-Study Course [HERE](https://github.com/mytechnotalent/Reverse-Engineering-Tutorial)
<br>
# Fundamental C++
The book and code repo for the FREE Fundamental C++ book by Kevin Thomas.
## FREE Book
[Download](https://github.com/mytechnotalent/Fundamental-CPP/blob/main/Fundamental%20C%2B%2B.pdf)
## Chapter 1 - Hello World
In this lesson we will discuss the basics of C++ output.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 2 - Variables, Constants, Arrays, Vectors, Statements, Operators, Strings
In this lesson we will discuss variables, constants, arrays, vectors, statements, operators and strings.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 3 - Program Flow
In this lesson we will discuss the basics of program flow.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 4 - Functions
In this lesson we will discuss functions.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 5 - Pointers
In this lesson we will discuss pointers.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 6 - Input
In this lesson we will discuss proper input validation.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## Chapter 7 - Classes
In this lesson we will discuss class basics.
-> Click [HERE](https://github.com/mytechnotalent/Reverse-Engineering/blob/main/Fundamental%20C%2B%2B.pdf) to read the FREE pdf book.
## License
[MIT License](https://github.com/mytechnotalent/Fundamental-CPP/blob/main/LICENSE)
| 42.42 | 133 | 0.767562 | eng_Latn | 0.311634 |
53f4dae5c93afed8dda8148be8f2c8cd4804c983 | 5,696 | md | Markdown | contents/careers/product-marketer.md | bard/posthog.com | 0b9736d1b7a23dd7562aec45d93029fe5a926229 | [
"MIT"
] | null | null | null | contents/careers/product-marketer.md | bard/posthog.com | 0b9736d1b7a23dd7562aec45d93029fe5a926229 | [
"MIT"
] | null | null | null | contents/careers/product-marketer.md | bard/posthog.com | 0b9736d1b7a23dd7562aec45d93029fe5a926229 | [
"MIT"
] | null | null | null | ---
title: Product Marketer
sidebar: Careers
showTitle: true
---
<h5 class='centered'>PostHog exists to increase the number of successful products in the world.</h5>
While we have mainly hired engineers to date, we are now growing across all areas of the business, and we are currently hiring for our first **Product Marketer** to work alongside our newly hired Marketing Lead. In this role, you will own market/competitor research, product position, and messaging. You will play a key role in our growth.
### What you'll be doing
- Conducting market/competitor research. PostHog operates across a number of industries. It will be your job to understand our competition.
- Armed with a deep understanding of the market, you'll iterate on our product positioning and messaging.
- Develop user personas from user research conducted by our team and supplemented by you.
- Collaborate with our Product team on our vision and roadmap to stay ahead of the competition and satisfy user demand.
- Create compelling collateral/content to communicate product benefits to personas and enable sales.
### What you'll bring
- Experience working as a product marketer for a technical product positioned at developers.
- Excellent writing, researching and communication skills.
- Experience owning marketing research, product positioning, and messaging.
- Experience working closely with product and sales teams.
- A detail-orientated and organized approach to work with a desire to move quickly and continuously improve yourself and your team.
- Experience working in open source and/or product analytics is desirable but not critical.
### What we offer in return
* Generous, transparent [compensation](/handbook/people/compensation)
* [Unlimited, permissionless vacation](/handbook/people/time-off) with a 25 day minimum
* Health insurance, including dental and vision (UK and US-based only)
* [Generous parental leave](/handbook/people/time-off)
* Visa sponsorship if needed, for you and your loved ones
* [Training budget](/handbook/people/training)
* [$200/month budget towards coworking or café working](/handbook/people/spending-money)
* Carbon offsetting for work travel with [Project Wren](https://www.wren.co/)
* [Free books](/handbook/people/training#books)
*Please note that benefits vary slightly by country. If you have any questions, please don't hesitate to ask our team.*
### About PostHog
PostHog's engineering culture is best-in-class, and we've had explosive user growth from this alone.
We [launched a four week old minimum viable product in February 2020](/handbook/company/story), and since then have been deployed in thousands of places around the world.
PostHog's platform makes it easy for software teams to understand their user behavior. This, coupled with our fast growth, has led to a wide variety of very cool use cases. PostHog helps indie game designers make it more fun to defend earth from an alien threat. PostHog is used by multinational organizations running software that powers everything from banks to airlines. PostHog is used by startups disrupting their own industries.
We're a company like no other in our space. Our approach is bottom up, and that starts by being great for developers to install and use.
By being open source, we can be used on any software project throughout the world, for free, forever. Some developers will use the platform on a side project, others at their startup or small business, and others in their Fortune 500. We are building a true platform that can grow from 1 user to many, no matter the size or shape of the organization.
The core of our approach is to delight end users. It's not about executive dashboards and then a terrible interface for everyone else. It's the sense of power we give to the person on the ground, doing the actual work, every day.
### Our team
Our team is a combination of former CTOs and YC founders all turned developers, alongside some of the best developers from the world's largest tech companies who have the experience to help us handle scalability.
Our [team](/handbook/company/team) is tiny, but we live in 10 different countries. This diverse group looks pretty weird on paper, but it's magical. Apart from the fact most of them think [pineapple belongs on pizza](https://twitter.com/PostHogHQ/status/1319583079648923648). It doesn't.
We take bets on people with less experience too - we are as willing to hire an unproven genius straight out of school as we are a seasoned veteran.
We're all remote, and we've raised enough to pay top of market. [We are proudly backed](/handbook/strategy/investors) by some of the best VCs and Investors in the world, such as YCombinator.
### Sold? Apply now
* [Drop us a line](mailto:[email protected]) and tell us:
* How you can achieve the above in a few sentences
* Why you're drawn to us
* Your resumé and/or LinkedIn
* Please also add a link to your portfolio or examples of previous work
### Not sold? Learn more first
* [How we hire](/careers#the-process)
* We ask for your best work, and in return, [pay generously](/handbook/people/compensation) and have [exceptional benefits](/careers/#benefits)
* Learn about [the team you'd be working with](/handbook/company/team)
* Getting hiring right is key to diversity. Learn about [how we think about this](/handbook/company/diversity).
*We believe people from diverse backgrounds, with different identities and experiences, make our product and our company better. No matter your background, we'd love to hear from you! Also, if you have a disability, please let us know if there's any way we can make the interview process better for you; we're happy to accommodate!*
| 69.463415 | 434 | 0.785288 | eng_Latn | 0.999407 |
53f4db48ceb40aa448d445db0131788a69cddd1d | 23 | md | Markdown | README.md | chenyq502/CHMapPlace | 341b51d5dff31827868b685e8dd8d91c43c62892 | [
"MIT"
] | null | null | null | README.md | chenyq502/CHMapPlace | 341b51d5dff31827868b685e8dd8d91c43c62892 | [
"MIT"
] | null | null | null | README.md | chenyq502/CHMapPlace | 341b51d5dff31827868b685e8dd8d91c43c62892 | [
"MIT"
] | null | null | null | # CHMapPlace
map place
| 7.666667 | 12 | 0.782609 | ces_Latn | 0.655675 |
53f59b59ec5a1ed63a3221bb4001974c086cff12 | 268 | md | Markdown | README.md | rafaeltscs/once-upon-library | 521b1a0aa7d2a3302e4ed760d7e4b57fa0bc0fd2 | [
"MIT"
] | 1 | 2019-03-27T10:17:22.000Z | 2019-03-27T10:17:22.000Z | README.md | dev-friends/once-upon-library | 521b1a0aa7d2a3302e4ed760d7e4b57fa0bc0fd2 | [
"MIT"
] | null | null | null | README.md | dev-friends/once-upon-library | 521b1a0aa7d2a3302e4ed760d7e4b57fa0bc0fd2 | [
"MIT"
] | null | null | null | # Once Upon a Library
This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 7.3.6.
This is the code sample for a story written on Medium by Rafael Chiappetta: https://medium.com/dev-friends/once-upon-a-library-1-3-6ce6a714c676
| 44.666667 | 142 | 0.776119 | eng_Latn | 0.920283 |
53f5e5fa0f64db4d6af724fd9666ce96fd69766c | 217 | md | Markdown | README.md | eduardogch/vps-bootstrap | 4bbd9d3ce99c0c791bc1cee8b5855af4664760d3 | [
"MIT"
] | 5 | 2016-06-16T19:25:32.000Z | 2021-03-27T21:35:56.000Z | README.md | eduardogch/vps-bootstrap | 4bbd9d3ce99c0c791bc1cee8b5855af4664760d3 | [
"MIT"
] | null | null | null | README.md | eduardogch/vps-bootstrap | 4bbd9d3ce99c0c791bc1cee8b5855af4664760d3 | [
"MIT"
] | null | null | null | VPS Bootstrap
================
## Script to have ready my Ubuntu 14.04 VPS server.
Only run in the console:
```
wget https://raw.githubusercontent.com/eduardogch/vps-bootstrap/master/vps-bootstrap.sh -O - | sh
```
| 19.727273 | 97 | 0.672811 | eng_Latn | 0.304346 |
53f602c33420582e8f6721d5f5f0c2bb2b2f6d60 | 1,322 | md | Markdown | _jogos/watchdogs.md | Strike7/ad | 33d0cde422d4c1cfb37318fcaf1d694371ad4a76 | [
"CC-BY-4.0"
] | null | null | null | _jogos/watchdogs.md | Strike7/ad | 33d0cde422d4c1cfb37318fcaf1d694371ad4a76 | [
"CC-BY-4.0"
] | null | null | null | _jogos/watchdogs.md | Strike7/ad | 33d0cde422d4c1cfb37318fcaf1d694371ad4a76 | [
"CC-BY-4.0"
] | null | null | null | ---
layout: mercadolivre-individual
title: " Watch Dogs"
date: 2014-05-27 20:25:30
category: "Ação e Aventura"
tamanho: "14,44 GB"
estrelas: 4
desenvolvedor: "Ubisoft "
editor: "Ubisoft Entertainment"
tempo_campanha: 16
tempo_completo: 24
cover: "whatdog.png"
background_image: "watchdog.jpg"
cover_id: 68
images: ["mkx_marked.jpg"]
youtube: "https://www.youtube.com/watch?v=KpIeWxsfBos"
prices: ["6,86", "9,80", "14,7", "29,40"]
periodos:
- descricao: '3 dias'
link: 'http://produto.mercadolivre.com.br/MLB-703464512-aluguel-locaco-de-jogos-4-dias-xbox-one-midia-digital-_JM'
- descricao: '7 dias'
link: 'http://produto.mercadolivre.com.br/MLB-688725759-aluguel-locaco-de-jogos-xbox-one-midia-digital-_JM'
- descricao: '14 dias'
link: 'http://produto.mercadolivre.com.br/MLB-693927364-aluguel-locaco-de-jogos-xbox-one-midia-digital-_JM'
- descricao: '30 dias'
link: 'http://produto.mercadolivre.com.br/MLB-702105161-aluguel-locaco-de-jogos-xbox-one-midia-digital-_JM'
---
IN A HYPERCONNECTED WORLD LIKE TODAY'S, CHICAGO RUNS UNDER THE CONTROL OF CTOS, THE MOST ADVANCED COMPUTER NETWORK IN AMERICA. As Aiden Pearce, you can access this entire system and turn Chicago into the ultimate weapon in your quest for revenge. But what if your personal goals collide with an entire city?
| 40.060606 | 320 | 0.747352 | por_Latn | 0.645048 |
53f6b8f4bef604ecf6055fb04e546e0abe6f9add | 27 | md | Markdown | guides/usage-with-framer-motion.md | JohnPaulHarold/lrud | 7c1bd8a1166ca9326bc5dd0cb60989f1893cd7e8 | [
"MIT"
] | 28 | 2020-08-13T08:42:02.000Z | 2022-03-28T01:28:45.000Z | guides/usage-with-framer-motion.md | JohnPaulHarold/lrud | 7c1bd8a1166ca9326bc5dd0cb60989f1893cd7e8 | [
"MIT"
] | 78 | 2020-08-13T00:01:42.000Z | 2022-03-24T20:34:33.000Z | guides/usage-with-framer-motion.md | JohnPaulHarold/lrud | 7c1bd8a1166ca9326bc5dd0cb60989f1893cd7e8 | [
"MIT"
] | 9 | 2021-01-14T10:05:45.000Z | 2022-01-19T11:41:22.000Z | # Usage with Framer Motion
| 13.5 | 26 | 0.777778 | eng_Latn | 0.708737 |
53f6e0d66ed6064ace84767f907df81e8faac88c | 1,668 | md | Markdown | README.md | manas94gupta/Dine-Out | fe88b66b7e2c09270c6c9a1861facc105998b046 | [
"MIT"
] | null | null | null | README.md | manas94gupta/Dine-Out | fe88b66b7e2c09270c6c9a1861facc105998b046 | [
"MIT"
] | null | null | null | README.md | manas94gupta/Dine-Out | fe88b66b7e2c09270c6c9a1861facc105998b046 | [
"MIT"
] | null | null | null | # Dine Out
## Introduction
Dine Out helps you find a nearby place to party or dine at, or to order from. You can
sort the places by 14 different categories and view their location and other info
on Google Maps.
Dine Out uses the Google Maps API and the Zomato API to retrieve restaurant info
from Zomato's database and display it on Google Maps. The user interface is
responsive and intuitive to use. The map also has a button to switch between dark and
normal modes.
Users can search for the places in a specific location by entering an address in the
sidebar, and then they can also search among the retrieved places in the sidebar.
Clicking on a marker on the map opens up an info window with ratings, pictures,
address and other info.
You can view the site [here](https://manas94gupta.github.io/Dine-Out).
## Instructions
* Source code is in the `src` directory.
* Final production code is in the `dist` directory.
* To build the production code, Node.js must be installed.
* `gulp-useref` along with `gulp-uglify`, `gulp-cssnano` and `gulp-if` is used to concatenate and minify css and js files.
* Run `gulp-useref` to build the final production code.
## Screenshots


## Resources Used
* [Google Maps API](https://developers.google.com/maps/documentation/javascript/tutorial)
* [Zomato API](https://developers.zomato.com/api)
* [Knockout JS](http://knockoutjs.com/documentation/introduction.html)
| 45.081081 | 122 | 0.739209 | eng_Latn | 0.972471 |
53f6eabfcc396b95c1c46b9faa4b1fafc8ad468a | 154 | md | Markdown | _pages/about.md | larsrozema/larsrozema.github.io | f0a2fedca3ed2e68d149173028463c30f67e05d9 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _pages/about.md | larsrozema/larsrozema.github.io | f0a2fedca3ed2e68d149173028463c30f67e05d9 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | _pages/about.md | larsrozema/larsrozema.github.io | f0a2fedca3ed2e68d149173028463c30f67e05d9 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | ---
title: "About"
permalink: /about/
header:
image: "/images/top-banner.jpg"
---
I'm studying data science, machine learning and data visualizations.
| 17.111111 | 68 | 0.714286 | eng_Latn | 0.594347 |
53f724c7ede79411c5b214f313f00953e3462a43 | 1,166 | md | Markdown | content/blog/1/b619f6a6057c3f6dbcb34a7d0a8b2831_t.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/1/b619f6a6057c3f6dbcb34a7d0a8b2831_t.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/1/b619f6a6057c3f6dbcb34a7d0a8b2831_t.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: b619f6a6057c3f6dbcb34a7d0a8b2831_t
mitle: "12 Кучета, Които Нямат Представа Колко Са Големи И Се Държат Като Малки!"
description: " "
image: "https://cdnone.netlify.com/db/2017/09/599fec14df141_4XhlEpA__700.jpg"
---
<p> </p><p>Всички кучета в началото са малки очарователни кученца, които идеално се вписват в нашия дом и живот. Но те растат. И растат бързо. А понякога те стават наистина големи. Но това не ги спира да се нареждат на най-доброто място в целия свят – върху вас.</p> <p>#1</p> <p> <br/><img src="https://cdnone.netlify.com/db/2017/09/599fec14df141_4XhlEpA__700.jpg"/><br/> </p><p>#2 <br/><img src="https://cdnone.netlify.com/db/2017/09/5996c9c77ac66_1d11bu47wzjy__700.jpg"/><br/></p><p></p> <div id="SC_TBlock_456377" class="SC_TBlock"> </div><p></p><p></p> <p>#3</p> <p> <br/><img src="https://cdnone.netlify.com/db/2017/09/c-1c6faca1-277e-4434-b204-50fdbb570913-5997ecc225db5__700.jpg"/><br/></p> <p>#4</p> <p> <br/><img src="https://cdnone.netlify.com/db/2017/09/giant-lap-dogs-116-599c35d1c41ef__700.jpg"/><br/></p> <p> </p><div id="SC_TBlock_456377" class="SC_TBlock"> </div><p></p> <i></i>1 / 3<i></i> | 145.75 | 937 | 0.683533 | bul_Cyrl | 0.118148 |
53f7811f3dd622e13b3e07ed250dfa75293cecb5 | 10,186 | md | Markdown | atomics/T1036.003/T1036.003.md | leegengyu/atomic-red-team | 5a80650a00247dbc7e04caf850e584b56b995829 | [
"MIT"
] | 2 | 2021-05-27T12:19:14.000Z | 2021-05-27T12:50:15.000Z | atomics/T1036.003/T1036.003.md | leegengyu/atomic-red-team | 5a80650a00247dbc7e04caf850e584b56b995829 | [
"MIT"
] | 1 | 2021-01-04T14:31:34.000Z | 2021-01-04T14:31:34.000Z | atomics/T1036.003/T1036.003.md | leegengyu/atomic-red-team | 5a80650a00247dbc7e04caf850e584b56b995829 | [
"MIT"
] | null | null | null | # T1036.003 - Rename System Utilities
## [Description from ATT&CK](https://attack.mitre.org/techniques/T1036/003)
<blockquote>Adversaries may rename legitimate system utilities to try to evade security mechanisms concerning the usage of those utilities. Security monitoring and control mechanisms may be in place for system utilities adversaries are capable of abusing. (Citation: LOLBAS Main Site) It may be possible to bypass those security mechanisms by renaming the utility prior to utilization (ex: rename <code>rundll32.exe</code>). (Citation: Elastic Masquerade Ball) An alternative case occurs when a legitimate utility is copied or moved to a different directory and renamed to avoid detections based on system utilities executing from non-standard paths. (Citation: F-Secure CozyDuke)</blockquote>
## Atomic Tests
- [Atomic Test #1 - Masquerading as Windows LSASS process](#atomic-test-1---masquerading-as-windows-lsass-process)
- [Atomic Test #2 - Masquerading as Linux crond process.](#atomic-test-2---masquerading-as-linux-crond-process)
- [Atomic Test #3 - Masquerading - cscript.exe running as notepad.exe](#atomic-test-3---masquerading---cscriptexe-running-as-notepadexe)
- [Atomic Test #4 - Masquerading - wscript.exe running as svchost.exe](#atomic-test-4---masquerading---wscriptexe-running-as-svchostexe)
- [Atomic Test #5 - Masquerading - powershell.exe running as taskhostw.exe](#atomic-test-5---masquerading---powershellexe-running-as-taskhostwexe)
- [Atomic Test #6 - Masquerading - non-windows exe running as windows exe](#atomic-test-6---masquerading---non-windows-exe-running-as-windows-exe)
- [Atomic Test #7 - Masquerading - windows exe running as different windows exe](#atomic-test-7---masquerading---windows-exe-running-as-different-windows-exe)
- [Atomic Test #8 - Malicious process Masquerading as LSM.exe](#atomic-test-8---malicious-process-masquerading-as-lsmexe)
- [Atomic Test #9 - File Extension Masquerading](#atomic-test-9---file-extension-masquerading)
<br/>
## Atomic Test #1 - Masquerading as Windows LSASS process
Copies cmd.exe, renames it, and launches it to masquerade as an instance of lsass.exe.
Upon execution, cmd will be launched by powershell. If using Invoke-AtomicTest, the test will hang until the 120 second timeout cancels the session
**Supported Platforms:** Windows
#### Attack Commands: Run with `command_prompt`!
```cmd
copy %SystemRoot%\System32\cmd.exe %SystemRoot%\Temp\lsass.exe
%SystemRoot%\Temp\lsass.exe /B
```
#### Cleanup Commands:
```cmd
del /Q /F %SystemRoot%\Temp\lsass.exe >nul 2>&1
```
<br/>
<br/>
## Atomic Test #2 - Masquerading as Linux crond process.
Copies the sh binary, renames it to crond, and executes it to masquerade as the cron daemon.
Upon successful execution, sh is renamed to `crond` and executed.
**Supported Platforms:** Linux
#### Attack Commands: Run with `sh`!
```sh
cp /bin/sh /tmp/crond;
/tmp/crond
```
#### Cleanup Commands:
```sh
rm /tmp/crond
```
<br/>
<br/>
## Atomic Test #3 - Masquerading - cscript.exe running as notepad.exe
Copies cscript.exe, renames it, and launches it to masquerade as an instance of notepad.exe.
Upon successful execution, cscript.exe is renamed as notepad.exe and executed from non-standard path.
**Supported Platforms:** Windows
#### Attack Commands: Run with `command_prompt`!
```cmd
copy %SystemRoot%\System32\cscript.exe %APPDATA%\notepad.exe /Y
cmd.exe /c %APPDATA%\notepad.exe /B
```
#### Cleanup Commands:
```cmd
del /Q /F %APPDATA%\notepad.exe >nul 2>&1
```
<br/>
<br/>
## Atomic Test #4 - Masquerading - wscript.exe running as svchost.exe
Copies wscript.exe, renames it, and launches it to masquerade as an instance of svchost.exe.
Upon execution, no windows will remain open, but wscript will have been renamed to svchost and run from the temp folder
**Supported Platforms:** Windows
#### Attack Commands: Run with `command_prompt`!
```cmd
copy %SystemRoot%\System32\wscript.exe %APPDATA%\svchost.exe /Y
cmd.exe /c %APPDATA%\svchost.exe /B
```
#### Cleanup Commands:
```cmd
del /Q /F %APPDATA%\svchost.exe >nul 2>&1
```
<br/>
<br/>
## Atomic Test #5 - Masquerading - powershell.exe running as taskhostw.exe
Copies powershell.exe, renames it, and launches it to masquerade as an instance of taskhostw.exe.
Upon successful execution, powershell.exe is renamed as taskhostw.exe and executed from non-standard path.
**Supported Platforms:** Windows
#### Attack Commands: Run with `command_prompt`!
```cmd
copy %windir%\System32\windowspowershell\v1.0\powershell.exe %APPDATA%\taskhostw.exe /Y
cmd.exe /K %APPDATA%\taskhostw.exe
```
#### Cleanup Commands:
```cmd
del /Q /F %APPDATA%\taskhostw.exe >nul 2>&1
```
<br/>
<br/>
## Atomic Test #6 - Masquerading - non-windows exe running as windows exe
Copies an exe, renames it as a windows exe, and launches it to masquerade as a real windows exe
Upon successful execution, powershell will execute T1036.003.exe as svchost.exe from a non-standard path.
**Supported Platforms:** Windows
#### Inputs:
| Name | Description | Type | Default Value |
|------|-------------|------|---------------|
| outputfile | path of file to execute | path | ($env:TEMP + "\svchost.exe")|
| inputfile | path of file to copy | path | PathToAtomicsFolder\T1036.003\bin\T1036.003.exe|
#### Attack Commands: Run with `powershell`!
```powershell
copy #{inputfile} #{outputfile}
$myT1036_003 = (Start-Process -PassThru -FilePath #{outputfile}).Id
Stop-Process -ID $myT1036_003
```
#### Cleanup Commands:
```powershell
Remove-Item #{outputfile} -Force -ErrorAction Ignore
```
#### Dependencies: Run with `powershell`!
##### Description: Exe file to copy must exist on disk at specified location (#{inputfile})
##### Check Prereq Commands:
```powershell
if (Test-Path #{inputfile}) {exit 0} else {exit 1}
```
##### Get Prereq Commands:
```powershell
New-Item -Type Directory (split-path #{inputfile}) -ErrorAction ignore | Out-Null
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1036.003/bin/T1036.003.exe" -OutFile "#{inputfile}"
```
<br/>
<br/>
## Atomic Test #7 - Masquerading - windows exe running as different windows exe
Copies a windows exe, renames it as another windows exe, and launches it to masquerade as the second windows exe
**Supported Platforms:** Windows
#### Inputs:
| Name | Description | Type | Default Value |
|------|-------------|------|---------------|
| outputfile | path of file to execute | path | ($env:TEMP + "\svchost.exe")|
| inputfile | path of file to copy | path | $env:ComSpec|
#### Attack Commands: Run with `powershell`!
```powershell
copy #{inputfile} #{outputfile}
$myT1036_003 = (Start-Process -PassThru -FilePath #{outputfile}).Id
Stop-Process -ID $myT1036_003
```
#### Cleanup Commands:
```powershell
Remove-Item #{outputfile} -Force -ErrorAction Ignore
```
<br/>
<br/>
## Atomic Test #8 - Malicious process Masquerading as LSM.exe
Detects LSM running from an incorrect directory and under an incorrect service account.
This works by copying cmd.exe to a file, naming it lsm.exe, and then using it to write a file to the C:\ folder.
Upon successful execution, cmd.exe will be renamed as lsm.exe and executed from non-standard path.
**Supported Platforms:** Windows
#### Attack Commands: Run with `command_prompt`! Elevation Required (e.g. root or admin)
```cmd
copy C:\Windows\System32\cmd.exe C:\lsm.exe
C:\lsm.exe /c echo T1036.003 > C:\T1036.003.txt
```
#### Cleanup Commands:
```cmd
del C:\T1036.003.txt >nul 2>&1
del C:\lsm.exe >nul 2>&1
```
<br/>
<br/>
## Atomic Test #9 - File Extension Masquerading
Copies and executes files masquerading as images or Office documents. Upon execution, 3 calc instances and 3 VBS windows will be launched.
e.g. SOME_LEGIT_NAME.[doc,docx,xls,xlsx,pdf,rtf,png,jpg,etc.].[exe,vbs,js,ps1,etc.] (Quarterlyreport.docx.exe)
**Supported Platforms:** Windows
#### Inputs:
| Name | Description | Type | Default Value |
|------|-------------|------|---------------|
| exe_path | path to exe to use when creating masquerading files | path | C:\Windows\System32\calc.exe|
| vbs_path | path of vbs to use when creating masquerading files | path | PathToAtomicsFolder\T1036.003\src\T1036.003_masquerading.vbs|
| ps1_path | path of powershell script to use when creating masquerading files | path | PathToAtomicsFolder\T1036.003\src\T1036.003_masquerading.ps1|
#### Attack Commands: Run with `command_prompt`!
```cmd
copy #{exe_path} %temp%\T1036.003_masquerading.docx.exe /Y
copy #{exe_path} %temp%\T1036.003_masquerading.pdf.exe /Y
copy #{exe_path} %temp%\T1036.003_masquerading.ps1.exe /Y
copy #{vbs_path} %temp%\T1036.003_masquerading.xls.vbs /Y
copy #{vbs_path} %temp%\T1036.003_masquerading.xlsx.vbs /Y
copy #{vbs_path} %temp%\T1036.003_masquerading.png.vbs /Y
copy #{ps1_path} %temp%\T1036.003_masquerading.doc.ps1 /Y
copy #{ps1_path} %temp%\T1036.003_masquerading.pdf.ps1 /Y
copy #{ps1_path} %temp%\T1036.003_masquerading.rtf.ps1 /Y
%temp%\T1036.003_masquerading.docx.exe
%temp%\T1036.003_masquerading.pdf.exe
%temp%\T1036.003_masquerading.ps1.exe
%temp%\T1036.003_masquerading.xls.vbs
%temp%\T1036.003_masquerading.xlsx.vbs
%temp%\T1036.003_masquerading.png.vbs
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File %temp%\T1036.003_masquerading.doc.ps1
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File %temp%\T1036.003_masquerading.pdf.ps1
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File %temp%\T1036.003_masquerading.rtf.ps1
```
#### Cleanup Commands:
```cmd
del /f %temp%\T1036.003_masquerading.docx.exe > nul 2>&1
del /f %temp%\T1036.003_masquerading.pdf.exe > nul 2>&1
del /f %temp%\T1036.003_masquerading.ps1.exe > nul 2>&1
del /f %temp%\T1036.003_masquerading.xls.vbs > nul 2>&1
del /f %temp%\T1036.003_masquerading.xlsx.vbs > nul 2>&1
del /f %temp%\T1036.003_masquerading.png.vbs > nul 2>&1
del /f %temp%\T1036.003_masquerading.doc.ps1 > nul 2>&1
del /f %temp%\T1036.003_masquerading.pdf.ps1 > nul 2>&1
del /f %temp%\T1036.003_masquerading.rtf.ps1 > nul 2>&1
```
<br/>
| 28.060606 | 693 | 0.720597 | eng_Latn | 0.508506 |
53f781880caf194cbb2228c014c98f016c5d0fb9 | 4,776 | md | Markdown | man/vscpl2drv-logger.1.md | grodansparadis/vscp_driver_lI_logger | d28916353ea012aa839fb1e6ed46180640390b5e | [
"MIT"
] | 1 | 2019-09-27T12:36:26.000Z | 2019-09-27T12:36:26.000Z | man/vscpl2drv-logger.1.md | grodansparadis/vscp_driver_lI_logger | d28916353ea012aa839fb1e6ed46180640390b5e | [
"MIT"
] | 1 | 2021-07-01T09:47:47.000Z | 2021-07-01T09:47:47.000Z | man/vscpl2drv-logger.1.md | grodansparadis/vscp_driver_lI_logger | d28916353ea012aa839fb1e6ed46180640390b5e | [
"MIT"
] | null | null | null | % vscpl2drv-logger(1) VSCP Level II Logger Driver
% Åke Hedman, Grodans Paradis AB
% September 28, 2019
# NAME
vscpl2drv-logger - VSCP Level II Logger Driver
# SYNOPSIS
vscpl2drv-logger
# DESCRIPTION
vscpd Level II driver for diagnostic logging. It makes it possible to log VSCP events to a file. Two formats of the log file are supported: either a standard log file with a standard text string for each event on each line, or loggings written in XML format. The advantage of the latter is that the file can be read by VSCP Works and further analyzed there. Several drivers can be loaded, logging data to different output files and using different filters/masks.
## Configuration string
A VSCP Level II driver has access to the tcp/ip interface of the machine it is installed on, and from that host it gets unique credentials that allow it to log in to the tcp/ip interface. The driver can use this as a method to initially fetch configuration parameters. The link is also used to pass other data, such as events, to/from the server.
The configuration string for vscpl2drv-logger (set in */etc/vscp/vscpd.conf*) have the following format
```bash
path;rewrite;vscpworksfmt;filter;mask
```
* **path** - The absolute or relative path, including the file name, to the file that log data should be written to. Mandatory.
* **rewrite** - Set to 'true' to create a new file or rewrite an old file with new data. Set to 'false' to append data to an existing file (creating it if it's not available). Defaults to 'false'.
* **vscpworksfmt** - Set to 'true' to write the log file in the VSCP Works XML format, so that it can be read and further analyzed by VSCP Works. Set to 'false' to use a standard text-based format. Defaults to 'false'.
* **filter** - A standard VSCP filter in string form 'priority,class,type,GUID'. Example: '1,0x0000,0x0006,ff:ff:ff:ff:ff:ff:ff:01:00:00:00:00:00:00:00:00'. Defaults to an all-zero filter.
* **mask** - A standard VSCP mask in string form 'priority,class,type,GUID'. Example: '1,0x0000,0x0006,ff:ff:ff:ff:ff:ff:ff:01:00:00:00:00:00:00:00:00'. Defaults to an all-zero mask.
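For illustration only (this helper is not part of the driver), the semicolon-separated configuration string could be split into named fields with the documented defaults applied, sketched here in Python:

```python
def parse_config(config):
    """Split 'path;rewrite;vscpworksfmt;filter;mask' into named fields.

    Missing or empty trailing fields fall back to the documented defaults
    ('false' for the booleans; an empty string stands in for the all-zero
    filter/mask defaults in this sketch).
    """
    fields = ["path", "rewrite", "vscpworksfmt", "filter", "mask"]
    defaults = ["", "false", "false", "", ""]
    parts = config.split(";")
    result = {}
    for i, name in enumerate(fields):
        raw = parts[i].strip() if i < len(parts) else ""
        result[name] = raw if raw else defaults[i]
    return result

print(parse_config("/tmp/vscp_level2.log;true"))
```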
The configuration string is the first configuration data that is read. After it has been read, the driver will ask the server for driver-specific configuration data. This data is fetched with the same pattern for all drivers. Variable names are formed by the driver name + some driver-specific variable name. If such a variable exists and contains data, it will be used as the configuration for the driver.
For the vscpl2drv-logger the following configuration variables are defined
| Variable name | Type | Description |
| ------------- | :--: | ----------- |
| **_path** | string | Path to the logfile. |
| **_rewrite** | bool | Set to “true” to rewrite the file each time the driver is started. Set to “false” to append to file. |
| **_vscpworksfmt** | bool | If “true” VSCP works XML format will be used for the log file. This means that the file will be possible to read and further analyzed by VSCP Works. If “false” a standard text based format will be used. |
| **_filter** | string | Standard VSCP filter in string form. 1,0x0000,0x0006,ff:ff:ff:ff:ff:ff:ff:01:00:00:00:00:00:00:00:00 as priority,class,type,GUID |
| **_mask** | string | Standard VSCP mask in string form. 1,0x0000,0x0006,ff:ff:ff:ff:ff:ff:ff:01:00:00:00:00:00:00:00:00 as priority,class,type,GUID |
## Example of vscpd.conf entry for the logger driver.
```xml
<driver enable="true" >
<name>log1</name>
<path>/usr/local/lib/vscpl2_loggerdrv.so</path>
<config>/tmp/vscp_level2.log</config>
<guid>00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00</guid>
</driver>
```
In this case the driver will fetch configuration data from the server from the variables *log1_path, log1_rewrite, log1_vscpworksfmt, log1_filter and log1_mask*
---
There are many Level I drivers available in the VSCP & Friends framework that can be used with both VSCP Works and the VSCP Daemon, and in addition Level II and Level III drivers that can be used with the VSCP Daemon.
Level I drivers is documented [here](https://grodansparadis.gitbooks.io/the-vscp-daemon/level_i_drivers.html).
Level II drivers is documented [here](https://grodansparadis.gitbooks.io/the-vscp-daemon/level_ii_drivers.html)
Level III drivers is documented [here](https://grodansparadis.gitbooks.io/the-vscp-daemon/level_iii_drivers.html)
# SEE ALSO
`vscpd` (8).
`uvscpd` (8).
`vscpworks` (1).
`vscpcmd` (1).
`vscp-makepassword` (1).
`vscphelperlib` (1).
The VSCP project homepage is here <https://www.vscp.org>.
The [manual](https://grodansparadis.gitbooks.io/the-vscp-daemon) for vscpd contains full documentation. Other documentation can be found here <https://grodansparadis.gitbooks.io>.
The vscpd source code may be downloaded from <https://github.com/grodansparadis/vscp>. Source code for other system components of VSCP & Friends are here <https://github.com/grodansparadis>
# COPYRIGHT
Copyright 2000-2021 Åke Hedman, The VSCP Project - MIT license. | 57.542169 | 457 | 0.750209 | eng_Latn | 0.968432 |
53f786d72f7dfd842b2d801b33c637462b38127b | 905 | md | Markdown | README.md | lweine01/react-portfolio | 6bcaa1d96c192023a54d8829cf04fe80d2cf47e9 | [
"MIT"
] | null | null | null | README.md | lweine01/react-portfolio | 6bcaa1d96c192023a54d8829cf04fe80d2cf47e9 | [
"MIT"
] | null | null | null | README.md | lweine01/react-portfolio | 6bcaa1d96c192023a54d8829cf04fe80d2cf47e9 | [
"MIT"
] | null | null | null | # React Portfolio
[Link to Deployed Site](https://ancient-beach-10392.herokuapp.com/)
## Table of Contents
- [Description](#Description)
- [Usage](#Usage)
- [Screenshots](#Screenshots)
- [License](#License)
- [Languages](#Languages)
- [Questions](#Questions)
## Description
This application is my resume 2.0, created using React. It displays information about me, testimonials, highlighted projects, a link to my resume, and a contact form.
## Usage
To showcase my work to future employers as well as clients.
## Screenshots


-------
## License
MIT [](https://opensource.org/licenses/MIT)
## Languages
JavaScript, HTML, CSS, React
## Questions
Please contact me at [email protected] if you have any questions or suggestions. | 28.28125 | 167 | 0.743646 | eng_Latn | 0.812269 |
53f83b97d7005e10b45a912cb192ea62c0a6235c | 1,563 | md | Markdown | docs/collector.md | stephenhu/nbad | 2bb106417ac2a7b0e69f08e68f98cb90f907a172 | [
"MIT"
] | null | null | null | docs/collector.md | stephenhu/nbad | 2bb106417ac2a7b0e69f08e68f98cb90f907a172 | [
"MIT"
] | null | null | null | docs/collector.md | stephenhu/nbad | 2bb106417ac2a7b0e69f08e68f98cb90f907a172 | [
"MIT"
] | null | null | null | # collector
collect data from different sources, persist data and load into memory.
games and stats information are collected and persisted onto the filesystem as json files.
this information is read into memory during startup. additional data is retrieved
in realtime and stored in memory. a background task gets the actual data.
## design
1. initial data for a season should be bulk downloaded as a 1 time operation before starting nbad
1. subsequent information should be downloaded periodically to disk and then loaded into memory
1. current games should have a different mechanism for tracking
## interface
should this be constantly crawling on its own or should it be triggered by the end
user? if it's constantly crawling then there needs to be some state kept about
which sites it's crawled. this is a little different from a web index in that
when the season ends, the state doesn't change, so there's no need to continuously
crawl. in my mind, there should be a massive store to json for all previous
seasons, this can then be loaded into memory either as datastructures or in a
kv store. that means the collectors should be done in the stats library.
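That "bulk-persist once, load at startup" idea can be sketched as follows (Python used purely for illustration; the file layout and field names are invented):

```python
import json
import tempfile
from pathlib import Path

def persist_season(directory, season, games):
    """One-time bulk write: one JSON file per finished season."""
    directory.mkdir(parents=True, exist_ok=True)
    (directory / (season + ".json")).write_text(json.dumps(games))

def load_all(directory):
    """Startup path: read every persisted season file into memory."""
    return {p.stem: json.loads(p.read_text())
            for p in sorted(directory.glob("*.json"))}

store = Path(tempfile.mkdtemp()) / "seasons"
persist_season(store, "2015-16", [{"home": "SEA", "away": "POR", "score": [101, 99]}])
seasons = load_all(store)
print(sorted(seasons))  # ['2015-16']
```

Because a finished season's data never changes, the files only need to be written once; there is no crawl state to maintain for them.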
## news
### rss
* https://sports.yahoo.com/nba/rss/
* https://www.espn.com/espn/rss/nba/news
* https://api.foxsports.com/v1/rss?tag=nba
* http://archive.nba.com/rss # lots of dead links
* http://www.nba.com/rss/nba_rss.xml # this link doesn't work, clippers team rss works
* https://www.nba.com/clippers/rss.xml # use this in the teams view
### non-rss
* https://www.nba.com/news
* | 39.075 | 98 | 0.767115 | eng_Latn | 0.997482 |
53f8b9231047ec90e29ce9e3d50dce75a20ccf0d | 578 | md | Markdown | README.md | allanmcinnes/logicsim | 0e3d49eaea04a40ae08587515c1e714ea4e64587 | [
"MIT"
] | null | null | null | README.md | allanmcinnes/logicsim | 0e3d49eaea04a40ae08587515c1e714ea4e64587 | [
"MIT"
] | null | null | null | README.md | allanmcinnes/logicsim | 0e3d49eaea04a40ae08587515c1e714ea4e64587 | [
"MIT"
] | null | null | null | # logicsim
A simple discrete-event digital logic simulator, implemented in Java. Intended partly as an example of discrete-event simulation, and partly as a classroom example of applying various classic "Gang of Four" design patterns (command, decorator, observer, etc.)
The main `LogicSim` class includes a couple of examples that show how the simulator can be used to model and simulate simple logic circuits.
## Building and Running
Build at the command-line using `ant`. The resulting `.jar` is placed in `build/jar`. To run, execute `java -jar build/jar/LogicSim.jar`.
| 57.8 | 259 | 0.780277 | eng_Latn | 0.998382 |
53f8de81e1fadc801353537f6498a7c1d4fa9ce3 | 2,453 | md | Markdown | README.md | Lorenz5600/PopupBrowser | a3d8fd3bd19bd320c24e06808f7de7553ad61232 | [
"MIT"
] | null | null | null | README.md | Lorenz5600/PopupBrowser | a3d8fd3bd19bd320c24e06808f7de7553ad61232 | [
"MIT"
] | null | null | null | README.md | Lorenz5600/PopupBrowser | a3d8fd3bd19bd320c24e06808f7de7553ad61232 | [
"MIT"
] | null | null | null | <img align="left" width="64" height="64" src="./PopupBrowser.png">
# PopupBrowser
A small commandline controlled WPF app with a minimalistic browser (WebView2) to quickly display web content:
* Automatically place some monitoring dashboards on your screen(s) on system start
* Call it from other apps with some context, e.g. configure your client management app to show content from your corporate wiki or web-based
Active Directory querying tool with a single click.
Why not use one of the usual browsers for this purpose, you ask? I did this before, but the usual browsers
* take longer to load
* cause trouble to control size and position of it's window
* have a titlebar, an addressbar, menu and other stuff you might not want
* lack the ability to act as a notification/tooltip window that quickly close on hitting ESC, clicking outside the window or by a timer
## Usage
PopupBrowser.exe <*Url*> \[options\]
```
PopupBrowser.exe www.github.com --Size 400,300 --Style Fixed --Position AtCursor --Offset 20,-20
```
## Options
<p><strong>-n /--Name <Name> (Default 'PopupBrowser')</strong></p>
<p>Sets the instance name used to store window properties and to prevent multiple instances from running</p>
<p><strong>-p / --Position <Value> (Default 'Center')</strong></p>
<p>Determines screen position:</p>
<ul>
<li>Center: Center on active screen</li>
<li>AtCursor: Top-left corner placed at the mouse position</li>
<li>Recall: Restore window's position and size from last session</li>
</ul>
<p><strong>-o / --Offset <x,y> (Default '0,0')</strong></p>
<p>Offsets the window position</p>
<p><strong>-s / --Size <width,height> (Default '800,600')</strong></p>
<p>Determines the window size</p>
<p><strong>-t / --Style <Value> (Default 'Window')</strong></p>
<p>Determines window style:</p>
<ul>
<li>Window: resizable window</li>
<li>Fixed: non-resizable window</li>
<li>Notification: border- and titleless, non-resizable window</li>
</ul>
<p><strong>-a / --ShowAddressBar</strong></p>
<p>Show browser address bar</p>
<p><strong>-e / --EasyClose (Default: On)</strong></p>
<p>Close Window by pressing ESC or clicking outside the window</p>
<p><strong>-c / --CloseAfter <Seconds> (Default '0')</strong></p>
<p>Automatically close window after a given time. A value of 0 disables the timer.</p>
## Requirements
* .Net 5 Runtime
* [Edge WebView2 Runtime](https://developer.microsoft.com/de-de/microsoft-edge/webview2/) | 48.098039 | 141 | 0.723604 | eng_Latn | 0.917699 |
53f8f01a7f3dcf40e33f522f6ffef472df0c5280 | 28 | md | Markdown | README.md | rudissaar/moorhuhn-bundle-installer | 768859cb18856208d109a5690196c8c8a79286e7 | [
"MIT"
] | 1 | 2020-10-07T21:42:08.000Z | 2020-10-07T21:42:08.000Z | README.md | rudissaar/moorhuhn-bundle-installer | 768859cb18856208d109a5690196c8c8a79286e7 | [
"MIT"
] | null | null | null | README.md | rudissaar/moorhuhn-bundle-installer | 768859cb18856208d109a5690196c8c8a79286e7 | [
"MIT"
] | null | null | null | # Moorhuhn Bundle Installer
| 14 | 27 | 0.821429 | deu_Latn | 0.364773 |
53f8f492c1accb6f5919019ad65af78e15163697 | 3,329 | md | Markdown | tests/test_biolink_model/output/markdown_no_image/GeneProductMixin.md | krishna-saravan/linkml | 8c34844ebaf054f44ceb386e4d51ee4c95dbebe6 | [
"CC0-1.0"
] | 83 | 2021-03-17T16:31:02.000Z | 2022-03-13T23:17:02.000Z | tests/test_biolink_model/output/markdown_no_image/GeneProductMixin.md | krishna-saravan/linkml | 8c34844ebaf054f44ceb386e4d51ee4c95dbebe6 | [
"CC0-1.0"
] | 390 | 2021-03-18T18:44:11.000Z | 2022-03-30T22:55:01.000Z | tests/test_biolink_model/output/markdown_no_image/GeneProductMixin.md | krishna-saravan/linkml | 8c34844ebaf054f44ceb386e4d51ee4c95dbebe6 | [
"CC0-1.0"
] | 20 | 2021-03-27T08:55:56.000Z | 2022-02-24T15:25:57.000Z |
# Class: gene product mixin
The functional molecular product of a single gene locus. Gene products are either proteins or functional RNA molecules.
URI: [biolink:GeneProductMixin](https://w3id.org/biolink/vocab/GeneProductMixin)
[:symbol_type%20%3F],[Protein]uses%20-.->[GeneProductMixin],[RNAProduct]uses%20-.->[GeneProductMixin],[GeneProductMixin]^-[GeneProductIsoformMixin],[GeneOrGeneProduct]^-[GeneProductMixin],[Protein],[GeneProductIsoformMixin],[GeneOrGeneProduct],[Gene],[RNAProduct])](https://yuml.me/diagram/nofunky;dir:TB/class/[GeneToGeneProductRelationship],[GeneToGeneProductRelationship]++-%20object%201..1>[GeneProductMixin|synonym:label_type%20*;xref:iri_type%20*;name(i):symbol_type%20%3F],[Protein]uses%20-.->[GeneProductMixin],[RNAProduct]uses%20-.->[GeneProductMixin],[GeneProductMixin]^-[GeneProductIsoformMixin],[GeneOrGeneProduct]^-[GeneProductMixin],[Protein],[GeneProductIsoformMixin],[GeneOrGeneProduct],[Gene],[RNAProduct])
## Identifier prefixes
* UniProtKB
* gtpo
* PR
## Parents
* is_a: [GeneOrGeneProduct](GeneOrGeneProduct.md) - A union of gene loci or gene products. Frequently an identifier for one will be used as proxy for another
## Children
* [GeneProductIsoformMixin](GeneProductIsoformMixin.md) - This is an abstract class that can be mixed in with different kinds of gene products to indicate that the gene product is intended to represent a specific isoform rather than a canonical or reference or generic product. The designation of canonical or reference may be arbitrary, or it may represent the superclass of all isoforms.
## Mixin for
* [RNAProduct](RNAProduct.md) (mixin)
* [Protein](Protein.md) (mixin) - A gene product that is composed of a chain of amino acid sequences and is produced by ribosome-mediated translation of mRNA
## Referenced by Class
* **[GeneToGeneProductRelationship](GeneToGeneProductRelationship.md)** *[gene to gene product relationship➞object](gene_to_gene_product_relationship_object.md)* <sub>1..1</sub> **[GeneProductMixin](GeneProductMixin.md)**
* **[Gene](Gene.md)** *[has gene product](has_gene_product.md)* <sub>0..\*</sub> **[GeneProductMixin](GeneProductMixin.md)**
## Attributes
### Own
* [synonym](synonym.md) <sub>0..\*</sub>
* Description: Alternate human-readable names for a thing
* Range: [LabelType](types/LabelType.md)
* in subsets: (translator_minimal)
* [xref](xref.md) <sub>0..\*</sub>
* Description: Alternate CURIEs for a thing
* Range: [IriType](types/IriType.md)
* in subsets: (translator_minimal)
### Inherited from gene or gene product:
* [macromolecular machine mixin➞name](macromolecular_machine_mixin_name.md) <sub>0..1</sub>
* Description: genes are typically designated by a short symbol and a full name. We map the symbol to the default display name and use an additional slot for full name
* Range: [SymbolType](types/SymbolType.md)
* in subsets: (translator_minimal,samples)
## Other properties
| | | |
| --- | --- | --- |
| **Exact Mappings:** | | WIKIDATA:Q424689 |
| | | GENO:0000907 |
| | | NCIT:C26548 |
| 51.215385 | 934 | 0.743467 | eng_Latn | 0.592825 |
53f90fe5d756b7bd7064efc9d30dd8635fc6bd65 | 2,533 | md | Markdown | docs/analysis-services/scripting/properties/nonemptybehavior-element-assl.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-04-25T17:50:01.000Z | 2020-04-25T17:50:01.000Z | docs/analysis-services/scripting/properties/nonemptybehavior-element-assl.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/analysis-services/scripting/properties/nonemptybehavior-element-assl.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Elemento NonEmptyBehavior (ASSL) | Documentos de Microsoft
ms.date: 5/8/2018
ms.prod: sql
ms.custom: assl
ms.reviewer: owend
ms.technology: analysis-services
ms.topic: reference
author: minewiskan
ms.author: owend
manager: kfile
ms.openlocfilehash: 41dbb603f97adde99d033a50c81ae15686bb7618
ms.sourcegitcommit: c12a7416d1996a3bcce3ebf4a3c9abe61b02fb9e
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/10/2018
---
# <a name="nonemptybehavior-element-assl"></a>NonEmptyBehavior Element (ASSL)
[!INCLUDE[ssas-appliesto-sqlas](../../../includes/ssas-appliesto-sqlas.md)]
Determines the non-empty behavior associated with the parent [CalculationProperty](../../../analysis-services/scripting/objects/calculationproperty-element-assl.md) element.
## <a name="syntax"></a>Syntax
```xml
<CalculationProperty>
<NonEmptyBehavior>...</NonEmptyBehavior>
</CalculationProperty>
```
## <a name="element-characteristics"></a>Element Characteristics
|Characteristic|Description|
|--------------------|-----------------|
|Data type and length|String|
|Default value|None|
|Cardinality|0-1: Optional element that can occur only once.|
## <a name="element-relationships"></a>Element Relationships
|Relationship|Element|
|------------------|-------------|
|Parent element|[CalculationProperty](../../../analysis-services/scripting/objects/calculationproperty-element-assl.md)|
|Child elements|None|
## <a name="remarks"></a>Remarks
The **NonEmptyBehavior** property applies to **CalculationProperty** elements with a [CalculationType](../../../analysis-services/scripting/properties/calculationtype-element-assl.md) set to *Member*.
The element that corresponds to the parent of **NonEmptyBehavior** in the Analysis Management Objects (AMO) object model is <xref:Microsoft.AnalysisServices.CalculationProperty>.
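For illustration only (the measure names are invented), a **CalculationProperty** that carries a **NonEmptyBehavior** value might appear inside an ASSL script as follows:

```xml
<MdxScript>
  <CalculationProperties>
    <CalculationProperty>
      <CalculationReference>[Measures].[Gross Margin]</CalculationReference>
      <CalculationType>Member</CalculationType>
      <NonEmptyBehavior>Sales Amount</NonEmptyBehavior>
    </CalculationProperty>
  </CalculationProperties>
</MdxScript>
```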
## <a name="see-also"></a>See Also
[CalculationProperties Element (ASSL)](../../../analysis-services/scripting/collections/calculationproperties-element-assl.md)
[MdxScript Element (ASSL)](../../../analysis-services/scripting/objects/mdxscript-element-assl.md)
[MdxScripts Element (ASSL)](../../../analysis-services/scripting/collections/mdxscripts-element-assl.md)
[Properties (ASSL)](../../../analysis-services/scripting/properties/properties-assl.md)
| 42.216667 | 214 | 0.717331 | spa_Latn | 0.184253 |
53f9b37996c3b3b1ff3787309d6e5d2ac36911fd | 343 | md | Markdown | README.md | pradyot-09/CodechefSolutions | fa56fa9e59c087c79c75b199234aeeaab94f336c | [
"MIT"
] | null | null | null | README.md | pradyot-09/CodechefSolutions | fa56fa9e59c087c79c75b199234aeeaab94f336c | [
"MIT"
] | null | null | null | README.md | pradyot-09/CodechefSolutions | fa56fa9e59c087c79c75b199234aeeaab94f336c | [
"MIT"
] | null | null | null | # CodechefSolutions
All my CodeChef solutions are here :)
CodeChef is a Competitive Coding platform
https://www.codechef.com/
Various algorithms like divide and conquer, Dijkstra's algorithm, greedy algorithms, BST, dynamic programming, bitmasks, segment trees, quicksort, etc.
were implemented by me to solve the problems.
CodeChef user Id : paddi
| 34.3 | 138 | 0.810496 | eng_Latn | 0.897464 |
53f9b9c6ac8e2387ba63dda165801ffed89c135c | 13 | md | Markdown | test/resources/composition-tests/end_matter.md | praefervidus/markdown-composer | 127fc33abadfa49f6906688c894254815152227d | [
"MIT"
] | null | null | null | test/resources/composition-tests/end_matter.md | praefervidus/markdown-composer | 127fc33abadfa49f6906688c894254815152227d | [
"MIT"
] | null | null | null | test/resources/composition-tests/end_matter.md | praefervidus/markdown-composer | 127fc33abadfa49f6906688c894254815152227d | [
"MIT"
] | null | null | null | end matter... | 13 | 13 | 0.692308 | dan_Latn | 0.605141 |
53fb0be334890fc9554e7fd2e4a2ae976bb74f10 | 14,331 | md | Markdown | docs/JSDoc_Guidelines_eeaa5de.md | FabioNascimento/openui5-docs | 2170b7084fb67664334aa529de360645b1a85607 | [
"CC-BY-4.0"
] | null | null | null | docs/JSDoc_Guidelines_eeaa5de.md | FabioNascimento/openui5-docs | 2170b7084fb67664334aa529de360645b1a85607 | [
"CC-BY-4.0"
] | null | null | null | docs/JSDoc_Guidelines_eeaa5de.md | FabioNascimento/openui5-docs | 2170b7084fb67664334aa529de360645b1a85607 | [
"CC-BY-4.0"
] | null | null | null | <!-- loioeeaa5de14e5f4fc1ac796bc0c1ada5fb -->
| loio |
| -----|
| eeaa5de14e5f4fc1ac796bc0c1ada5fb |
<div id="loio">
view on: [demo kit nightly build](https://openui5nightly.hana.ondemand.com/#/topic/eeaa5de14e5f4fc1ac796bc0c1ada5fb) | [demo kit latest release](https://openui5.hana.ondemand.com/#/topic/eeaa5de14e5f4fc1ac796bc0c1ada5fb)</div>
## JSDoc Guidelines
Provides an overview of guidelines for creating JSDoc documentation.
To document JavaScript coding, you can add documentation comments to the code. Based on these comments, the descriptions of the OpenUI5 entities are generated and shown in the *API Reference* of the Demo Kit. OpenUI5 uses the JSDoc3 toolkit, which resembles JavaDoc, to generate the descriptions. For an explanation of the available tags, see [https://jsdoc.app](https://jsdoc.app).
***
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__section_wjj_hys_l2b"/>
### Basics of JSDoc
Here are some general principles for writing comments:
- Document the constructor with `@class`, `@author`, `@since`, and so on.
- For subclasses, document the inheritance by using an `@extends` tag in their constructor doclet.
- Document at least public and protected methods with JSDoc, mark them as `@public` or `@protected`.
If you also document private methods with JSDoc, mark them as `@private`. This is currently the default in OpenUI5, but not in JSDoc, so it is safer to explicitly specify this. `@protected` is not clearly defined for a JavaScript environment. In OpenUI5, it denotes a method that is not meant to be used by applications. It might be used outside the relevant class or subclasses, but only in closely related classes.
To explicitly specify which modules are allowed to use a class or function, mark the latter as `@private` followed by `@ui5-restricted <modulenames>`, with a comma-separated list of the modules that have access to this class or function.
> ### Note:
> To ensure that external JSDoc generators can also produce proper documentation, `@private` must be used first followed by `@ui5-restricted`. `@ui5-restricted` overrules `@private`, if it can be interpreted by the generator.
- Document method parameters with type \(in curly braces\) and parameter name \(in square brackets if optional\).
- Use `@namespace` for static helper classes that only provide static methods.
For an example of how to create a class, see [Example for Defining a Class](Example_for_Defining_a_Class_f6fba4c.md).
***
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__section_s55_3j2_p2b"/>
### Descriptions
A documentation comment should provide the following content:
- Summary sentence at the beginning; the summary is reused, for example, for tooltips and in summaries in the *API Reference*
- Background information required to understand the object
- Special considerations that apply
- Detailed description with additional information that does not repeat the self-explanatory API name or summary
> ### Note:
> Avoid implementation details and dependencies unless they are important for usage.
***
#### Dos and Don'ts
- To avoid line wrapping, make sure that each line of the description has a similar length as the code. In the *API Reference*, the line breaks in a description are ignored, and it appears as a continuous text.
- Use a period at the end of each summary sentence. The punctuation is required for JSDoc to identify the first sentence.
- Don’t use a period inside a summary sentence. For example, don’t use “e.g.”, but write “for example” instead. Otherwise the summary sentence will be cut off.
> ### Note:
> You can create links to external sources. The source should comply with standard legal requirements. The required icons are added to the link as described in the Demo Kit under *Terms of Use* \> *Disclaimer*. For more information about creating links, see the explanations below \(@see and \{@link\}\).
***
#### Recommendations for Writing Descriptions
- Don’t use exclamation marks.
- Make sure you spell acronyms correctly, for example, ID, JSON, URL.
- In the summary sentence, omit repetitive clauses like "This class" or "This method".
- For actions, start directly with an appropriate verb in the third person: Adds, allocates, constructs, converts, deallocates, destroys, gets, provides, reads, removes, represents, returns, sets, saves, and so on.
For methods, use the following verbs:
<table>
<tr>
<th>
Type
</th>
<th>
Verb
</th>
</tr>
<tr>
<td>
Constructor
</td>
<td>
Constructs
</td>
</tr>
<tr>
<td>
Boolean
</td>
<td>
Indicates \(whether\)
</td>
</tr>
<tr>
<td>
Getter
</td>
<td>
Gets
</td>
</tr>
<tr>
<td>
Setter
</td>
<td>
Sets
</td>
</tr>
<tr>
<td>
Other
</td>
<td>
Adds/Removes/Creates/Releases/Other verb that applies
</td>
</tr>
</table>
- For objects, use a noun phrase.
Example: Base class for navigation
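Applied to a few made-up helper functions, summary sentences following these verb conventions might read:

``` js
/**
 * Indicates whether the given list is empty.
 *
 * @param {any[]} aList The list to check
 * @returns {boolean} Whether the list contains no entries
 */
function isEmpty(aList) {
	return aList.length === 0;
}

/**
 * Gets the number of entries in the given list.
 *
 * @param {any[]} aList The list to inspect
 * @returns {int} The number of entries
 */
function getCount(aList) {
	return aList.length;
}

/**
 * Adds an entry to the given list.
 *
 * @param {any[]} aList The list to extend
 * @param {any} vEntry The entry to add
 * @returns {any[]} The modified list, for chaining
 */
function addEntry(aList, vEntry) {
	aList.push(vEntry);
	return aList;
}

console.log(getCount(addEntry([], "first"))); // prints 1
```

Note that each summary starts directly with the verb in the third person and ends with no repetition of the function name.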
***
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__section_cfg_hvt_l2b"/>
### Inline and HTML Tags
You can use inline and HTML tags in your comments.
**Inline tags** can be placed anywhere in the comments. Inline tags are denoted by curly brackets and have the following syntax: \{@tagname comment\}.
**HTML tags** are used to format documentation comments. HTML tags have the standard HTML syntax: <tag\>...</tag\>.
The table provides an overview of the most common inline and HTML tags.
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__table_ezd_5yt_l2b"/>Inline and HTML Tags
<table>
<tr>
<th>
Tag
</th>
<th>
Use
</th>
<th>
Example
</th>
<th>
How to Use / Details
</th>
<th>
Type of Tag
</th>
</tr>
<tr>
<td>
\{@link\}
</td>
<td>
Links within API Reference
</td>
<td>
`{@link sap.ui.generic.app.navigation.service.NavError Error}`
`{@link sap.ui.comp.smarttable.SmartTable#event:beforeRebindTable}`
</td>
<td>
To replace the path with a display text, use it like this: \{@link <path\> space <display text\>\}.
You can also use `#myMethod` for links within a class or control to individual methods, for example. The leading hash will then be removed automatically.
For other links, use the required syntax, for example, `#event:name`.
</td>
<td>
Inline
</td>
</tr>
<tr>
<td>
Empty line
</td>
<td>
Creates a paragraph
</td>
<td>
</td>
<td>
Using <p\> is not necessary, since empty lines are used to define paragraphs.
</td>
<td rowspan="8">
HTML
</td>
</tr>
<tr>
<td>
<code\>…</code\>
</td>
<td>
Technical entities \(optional\)
</td>
<td>
the <code\>Button</code\> control
</td>
<td>
</td>
</tr>
<tr>
<td>
<pre\>…</pre\>
</td>
<td>
Code samples
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
<ul\>
<li\>…</li\>
<li\>…</li\>
</ul\>
</td>
<td>
Unordered lists
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
<ol\>
<li\>…</li\>
<li\>…</li\>
</ol\>
</td>
<td>
Ordered lists
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
<strong\>… </strong\> or <b\>…</b\>
</td>
<td>
Bold font
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
<i\>…</i\>
</td>
<td>
Italics
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
</td>
<td>
Non-breaking space
</td>
<td>
</td>
<td>
</td>
</tr>
</table>
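A sketch combining several of these inline and HTML tags in one comment; the formatter and the `#getSortOrder` reference are made up for illustration:

``` js
/**
 * Formats a value for display in the <code>Price</code> column.
 *
 * Supported input types:
 * <ul>
 * <li><code>number</code> — formatted with two decimal places</li>
 * <li><code>string</code> — returned unchanged</li>
 * </ul>
 *
 * For the related sort behavior, see {@link #getSortOrder}.
 *
 * @param {number|string} vValue The value to format
 * @returns {string} The formatted value
 */
function formatPrice(vValue) {
	return typeof vValue === "number" ? vValue.toFixed(2) : String(vValue);
}

console.log(formatPrice(3)); // prints 3.00
```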
***
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__section_agg_ncm_n2b"/>
### Block Tags
You can also use block tags in your comments.
**Block tags** can only be placed in the tag section below the comment. They are separated from the comment by an empty line \(recommended, but not a technical requirement\). Block tags have the following syntax: @tagname comment.
The table provides an overview of the most common block tags.
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__table_krl_ffm_n2b"/>Block Tags
<table>
<tr>
<th>
Tag
</th>
<th>
Use
</th>
<th>
Example
</th>
<th>
How to Use / Details
</th>
</tr>
<tr>
<td>
@param
</td>
<td>
Adds parameters
</td>
<td>
``` js
/**
* ...
* @param {string} statement The SQL statement to be prepared
* ...
*/
```
</td>
<td>
Begin description with a capital letter.
</td>
</tr>
<tr>
<td>
@returns
</td>
<td>
Adds return values
</td>
<td>
`@returns {type1|type2|...} Description`
</td>
<td>
Begin description with a capital letter.
</td>
</tr>
<tr>
<td>
@throws
</td>
<td>
Adds the description of an exception if an error occurs
</td>
<td>
`@throws {type} Description`
</td>
<td>
Begin description with a capital letter.
</td>
</tr>
<tr>
<td>
@author
</td>
<td>
Adds the name of the developer responsible for the code
</td>
<td>
`@author Max Mustermann`
</td>
<td rowspan="2">
This is an optional tag that is not displayed in JSDoc.
If you need to use the version tag, use $\{version\} so you don't have to update this manually for each new version.
</td>
</tr>
<tr>
<td>
@version
</td>
<td>
Names the version for an entity
</td>
<td>
`@version 14.1.2`
</td>
</tr>
<tr>
<td>
@see
</td>
<td>
Adds information \(for example, link to documentation or the SAP Fiori Design Guidelines\) in the header section of the *API Reference*
</td>
<td>
`@see path`
`@see free text`
`@see {@link topic:bed8274140d04fc0b9bcb2db42d8bac2 Smart Table}`
`@see {@link fiori:/flexible-column-layout/ Flexible Column Layout}`
</td>
<td>
@see \{@link topic:loio <semantic control name\>\} provides a link to the documentation \(developer guide\).
If there are several @see tags with documentation links, only the first one is shown in the header. The other ones are displayed under *Documentation Links* in the *Overview* section.
For more generic topics that are not directly related to a class or control, use inline links.
</td>
</tr>
<tr>
<td>
@since
</td>
<td>
Adds the version in which an entity was first introduced
</td>
<td>
`@since 1.30`
</td>
<td>
Be as specific as possible \(without mentioning patch levels for new development\), since this information is useful even for internal purposes. For example, mention 1.27, even though this is not an external release.
</td>
</tr>
<tr>
<td>
@deprecated
</td>
<td>
Adds the version in which an entity was deprecated
</td>
<td>
`@deprecated As of version 1.28, replaced by {@link class name}`
</td>
<td>
Be as specific as possible \(without mentioning patch levels\), since this information is useful even for internal purposes. For example, mention 1.27, even though this is not an external release.
Provide information about what replaces the deprecated entity.
</td>
</tr>
<tr>
<td>
@experimental
</td>
<td>
Classifies an entity that is not ready for production use yet, but available for testing purposes
</td>
<td>
`@experimental As of version 1.56.0`
</td>
<td>
</td>
</tr>
<tr>
<td>
@example
</td>
<td>
Inserts a code sample after the comment
</td>
<td>
``` js
/**
* ...
* @example
* var id = myjob.schedules.add({
* description: "Added at runtime, run every 10 minutes",
* xscron: "* * * * * *\/10 0",
* parameter: {
* a: "c"
```
</td>
<td>
The code sample is inserted automatically with <pre\>. It is always inserted right after the comment.
To insert an example somewhere else, for example, in the middle of a comment, use <pre\>.
You can add a header for the example by using <caption\>.
</td>
</tr>
</table>
***
#### Tips for Using Block Tags
- The order of the block tags is not mandatory from a technical perspective, but recommended to ensure consistency.
For parameters, however, a fixed order is mandatory.
- There are more tags available, such as `@class` or `@name`.
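Combining several block tags might look like this; the `prepare` function is a made-up example built around the `@param` sample from the table above:

``` js
/**
 * Prepares the given SQL statement.
 *
 * @param {string} statement The SQL statement to be prepared
 * @returns {object} A handle describing the prepared statement
 * @throws {TypeError} If <code>statement</code> is not a string
 * @since 1.30
 * @public
 */
function prepare(statement) {
	if (typeof statement !== "string") {
		throw new TypeError("The statement must be a string");
	}
	return { sql: statement, prepared: true };
}

console.log(prepare("SELECT 1 FROM DUMMY").prepared); // prints true
```

The tag section is separated from the description by an empty line, and the parameter tags keep their fixed order.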
***
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__section_rh4_3yr_kgb"/>
### Links to API Documentation
To refer to another entity within the *API Reference*, you can use `{@link}` in combination with the reference types shown in the table below.
<a name="loioeeaa5de14e5f4fc1ac796bc0c1ada5fb__table_rkg_bds_kgb"/>Reference Types within API Reference
<table>
<tr>
<th>
Type of Reference
</th>
<th>
Description
</th>
<th>
Example
</th>
<th>
Comment
</th>
</tr>
<tr>
<td>
<full.path.ClassName\>
</td>
<td>
Refers to a class, interface, enumeration, or namespace
</td>
<td>
`sap.ui.comp.smarttable.SmartTable`
</td>
<td>
</td>
</tr>
<tr>
<td>
full.path.ClassName**\#**method
</td>
<td>
Refers to an instance method of a class
</td>
<td>
`sap.ui.comp.smarttable.SmartTable#getHeader`
</td>
<td>
`.prototype.` and \# are interchangeable
</td>
</tr>
<tr>
<td>
full.path.ClassName**.prototype.**method
</td>
<td>
Refers to an instance method of a class
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
full.path.ClassName**.**method
</td>
<td>
Refers to a static method \(or any other static property\)
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
`#method`
</td>
<td>
Refers to an instance method **within** a class
</td>
<td>
`#getHeader`
</td>
<td>
You must use this type of reference **within** an API that you are documenting, for example, within the `SmartTable` control documentation, if you want to link to a method that belongs to the control itself.
</td>
</tr>
<tr>
<td>
`#.method`
</td>
<td>
Refers to a static method **within** a class
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
full.path.ClassName**\#event:**name
</td>
<td>
Refers to an event fired by an instance of a class
</td>
<td>
`sap.ui.comp.smarttable.SmartTable#event:beforeRebindTable`
</td>
<td>
</td>
</tr>
<tr>
<td>
`#event:name`
</td>
<td>
Refers to an event **within** a class
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
full.path.ClassName**\#annotation:**name
</td>
<td>
Refers to an instance annotation of a class
</td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
`#annotation:name`
</td>
<td>
Refers to an annotation **within** a class
</td>
<td>
`#annotation:Text Text`
</td>
<td>
</td>
</tr>
</table>
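In a comment, these reference types might be used as follows; the class and member names are taken from the table, while the surrounding function is only a placeholder:

``` js
/**
 * Scrolls the table to the row with the given index.
 *
 * See {@link sap.ui.comp.smarttable.SmartTable#getHeader} for the header
 * text and {@link sap.ui.comp.smarttable.SmartTable#event:beforeRebindTable}
 * for the event fired before rebinding; within the class itself, the short
 * forms {@link #getHeader} and {@link #event:beforeRebindTable} can be used.
 *
 * @param {int} iIndex Index of the row to scroll to
 * @returns {boolean} Whether scrolling was triggered
 */
function scrollToIndex(iIndex) {
	return Number.isInteger(iIndex) && iIndex >= 0;
}

console.log(scrollToIndex(3)); // prints true
```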
| 11.42823 | 420 | 0.655083 | eng_Latn | 0.983107 |
53fb4cdd97c78544a4f751739da7c622394a3078 | 803 | md | Markdown | _posts/2019-12-12-thursday-hey-man.md | sdeehub/i-learn-type-theme | 0b019d3d16d37bb7a458e870cccdd42b6afa2dc9 | [
"MIT"
] | null | null | null | _posts/2019-12-12-thursday-hey-man.md | sdeehub/i-learn-type-theme | 0b019d3d16d37bb7a458e870cccdd42b6afa2dc9 | [
"MIT"
] | 49 | 2019-09-17T05:13:50.000Z | 2020-01-15T15:34:31.000Z | _posts/2019-12-12-thursday-hey-man.md | sdeehub/i.learn | 0b019d3d16d37bb7a458e870cccdd42b6afa2dc9 | [
"MIT"
] | null | null | null | ---
title: A Meaningful Connection
date: 2019-12-12 12:38:13 Z
tags:
- Reference
layout: post
feature-img: https://res.cloudinary.com/sdees-reallife/image/upload/v1555658919/sample_feature_img.png
---
How to build a meaningful connection
- Before the event: do your research - who will be there, their backgrounds and interests
- At the event: find something you have in common - shift from a peer to someone like them > same neighborhood, same age, like dogs, same school
- At the event: learn about the person's passions - work-related, hobby
- At the event: become their wingman - maybe there's someone else at the event that you think they should meet. > make introductions, share interesting facts, shine a light on them.
<i class="fa fa-child" style="color:plum"></i>
Dorie Clark - inLEARNING
| 40.15 | 181 | 0.760897 | eng_Latn | 0.992674 |
53fb9716174e75f43fe50fc59e290d2029f8af98 | 550 | md | Markdown | elasticsearch-rolling-restart/README.md | hmcts/ansible-oneshots | 41352f5845c461997532ee44ad0631c9822be2af | [
"MIT"
] | null | null | null | elasticsearch-rolling-restart/README.md | hmcts/ansible-oneshots | 41352f5845c461997532ee44ad0631c9822be2af | [
"MIT"
] | 2 | 2018-06-04T13:31:13.000Z | 2018-06-19T10:49:26.000Z | elasticsearch-rolling-restart/README.md | hmcts/ansible-oneshots | 41352f5845c461997532ee44ad0631c9822be2af | [
"MIT"
] | 3 | 2018-04-06T13:32:18.000Z | 2021-04-10T23:08:44.000Z | Elasticsearch Rolling Restart
=============================
Performs a rolling restart of the Elasticsearch cluster.
Useful when applying certain types of configuration changes.
Variables
=========
elk_es_host - Host used for checking cluster health. The ELK LB VS on the F5s
should be fine for this.
elk_es_user - Username that has sufficient cluster privileges to view cluster
health. Defaults to 'elastic', as that is an out-of-the-box built-in system
user name.
elk_es_pass - Password for the above user. Details for this should be in
vault.
| 28.947368 | 78 | 0.74 | eng_Latn | 0.988756 |
53fbd0006f33c12c6d9edbbce638f82def9e8eea | 5,375 | md | Markdown | README.md | hagarj/docservice | 0fd2dc0f288e909c3b4f6565727847e48c6fddb4 | [
"MIT"
] | null | null | null | README.md | hagarj/docservice | 0fd2dc0f288e909c3b4f6565727847e48c6fddb4 | [
"MIT"
] | null | null | null | README.md | hagarj/docservice | 0fd2dc0f288e909c3b4f6565727847e48c6fddb4 | [
"MIT"
] | null | null | null | # DocService by Jason Hagar
This RESTful service fulfills the take home test provided. It obeys the following client contract as specified for posting new documents:
> {
> "html": "\<string>",
> "links": [
> {
> "id":\<int>,
> "title": "\<string>",
> "uri": "\<string>"
> }, // ...
> ],
> "references": [
> {
> "anchor": "\<string>",
> "position": \<int>,
> "link": \<link-id>
> }
> // ...
> ]
> }
# Implemented Features
The following features have been implemented:
- POST new documents
- GET HTML of a specific version of a document (key, id tuple)
- GET HTML of all versions of a document
- GET links of a specific version of a document
- GET references of a specific version of a document
# Assumptions
I have assumed that the service should not allow deletes or modifications, as those features were not specified in the assignment. Thus, PUT, PATCH, and DELETE verbs are not supported.
I went ahead and selected Cassandra for the data storage backend. This seemed like a good choice for the following reasons:
- Cassandra is designed for high performance and reliability.
- Automatic support for replication across sites is built-in. Not used in this implementation, but could easily be modified to support this in the future.
- Partitioning by key for documents and by (key, id) for links and references enables high-performance fetches for documents, since reads never have to cross partitions.
For service implementation I went with:
- Node
- Express
- DataStax cassandra-driver
These choices enabled me to build a service quickly that meets all the requirements.
# Data definition
Data in the Cassandra database is organized under the docservice keyspace with three tables:
### Documents
|key|id|html|
|--|--|--|
|Client specified key|Service assigned ID|Client provided HTML|
### Links
|key|id|linkid|title|uri|
|--|--|--|--|--|
|Client specified key|Service assigned ID|Client provided link-id|Client provided title|Client provided URI|
This table is partitioned by (key,id) for fast retrieval of all links for a specific document.
### References
|key|id|linkid|anchor|position|
|--|--|--|--|--|
|Client specified key|Service assigned ID|Client provided link-id|Client provided anchor|Client provided position|
This table is partitioned by (key,id) for fast retrieval of all references for a specific document.
# Additional functionality
This service will also register the docservice keyspace and tables if not already registered.
# API
**Title:** Store new document with [key]
**URL:** /docservice/[key]
**Method:** POST
**URL params:** None
**Data params:** JSON describing the document conforming to the assignment's model.
**Success response**: JSON with the given key and newly assigned ID for this document's version:
{
"key": "\<key>",
"id": "\<cassandra-timeuuid>"
}
**Title:** Get document (specific version)
**URL:** /docservice/[key]/[id]/html
**Method:** GET
**URL params:** None
**Success response**: JSON with the given key, ID, and the associated HTML for the document:
{
"key": "\<key>",
"id": "\<cassandra-timeuuid>"
"html": "\<your-document-html>"
}
**Title:** Get all documents for a given key
**URL:** /docservice/[key]/all/html
**Method:** GET
**URL params:** None
**Success response**: JSON with the given key and further chunked by each stored ID, and the associated HTML for that document
{
"key": "\<key>",
"docs": [
{
            "id": "\<cassandra-timeuuid>"
"html": "\<your-document-html>"
},
// ...
]
}
**Title:** Get links for a given specific document (key/id tuple)
**URL:** /docservice/[key]/[id]/links
**Method:** GET
**URL params:** None
**Success response**: JSON with the given key, id, and further chunked by each stored link:
{
"key": "\<key>",
"id": "\<cassandra-timeuuid>"
"links": [
{
"id": \<int>,
"title": "\<string>",
"uri": "\<string>"
},
// ...
],
}
**Title:** Get references for a given specific document (key/id tuple)
**URL:** /docservice/[key]/[id]/references
**Method:** GET
**URL params:** None
**Success response**: JSON with the given key, id, and further chunked by each stored reference:
{
"key": "\<key>",
"id": "\<cassandra-timeuuid>"
"references": [
{
"anchor": "\<string>",
"position": \<int>,
"link": \<link-id>
},
// ...
],
}
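As an illustration only (this helper is not part of the repository), the POST body from the contract above could be assembled like this:

``` js
/**
 * Builds the JSON body for POST /docservice/[key] following the client
 * contract above. Purely illustrative — it only shows the expected shape
 * of the request.
 *
 * @param {string} html The document HTML
 * @param {Array} links Array of {id, title, uri} objects
 * @param {Array} references Array of {anchor, position, link} objects
 * @returns {string} The request body as a JSON string
 */
function buildDocumentPayload(html, links, references) {
	return JSON.stringify({ html, links, references });
}

const body = buildDocumentPayload(
	'<p>Hello <a href="#">world</a></p>',
	[{ id: 1, title: "Example", uri: "https://example.com" }],
	[{ anchor: "world", position: 9, link: 1 }]
);
console.log(JSON.parse(body).links.length); // prints 1
```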
| 36.073826 | 200 | 0.690047 | eng_Latn | 0.861115 |
53fc55eb6cce67500e170ced6c5aff39112fa998 | 616 | md | Markdown | src/locale-provider/docs/basic.md | Lohoyo/santd | 1098a4af2552961ecf9de4ca38e69037f44ef43f | [
"MIT"
] | 63 | 2019-06-05T08:55:38.000Z | 2022-03-10T07:57:56.000Z | src/locale-provider/docs/basic.md | hchhtc123/santd | 90c701f3ffd77763beb31c62b57f0c5a4898c19c | [
"MIT"
] | 13 | 2020-05-22T09:09:29.000Z | 2021-07-21T11:19:33.000Z | src/locale-provider/docs/basic.md | hchhtc123/santd | 90c701f3ffd77763beb31c62b57f0c5a4898c19c | [
"MIT"
] | 27 | 2019-06-05T07:02:34.000Z | 2022-03-09T07:24:24.000Z | <text lang="cn">
#### 国际化
用 `LocaleProvider` 包裹你的应用,并引用对应的语言包。
</text>
```html
<template>
<div>
<s-locale-provider locale="{{locale}}">
<s-pagination defaultCurrent="{{1}}" total="{{50}}" showSizeChanger="{{true}}" />
</s-locale-provider>
</div>
</template>
<script>
import {Pagination, LocaleProvider} from 'santd';
import zhCN from 'santd/locale-provider/zh_CN';
export default {
initData() {
return {
locale: zhCN
}
},
components: {
's-locale-provider': LocaleProvider,
's-pagination': Pagination
}
}
</script>
```
| 19.870968 | 93 | 0.573052 | yue_Hant | 0.623286 |
53fc60282655506e9889b47a82b265ca11cc3651 | 290 | md | Markdown | README.md | Scillman/va_call | 577dc9b382ef4f3d6e2e0fbfceee6b095ba05b4b | [
"MIT"
] | null | null | null | README.md | Scillman/va_call | 577dc9b382ef4f3d6e2e0fbfceee6b095ba05b4b | [
"MIT"
] | null | null | null | README.md | Scillman/va_call | 577dc9b382ef4f3d6e2e0fbfceee6b095ba05b4b | [
"MIT"
] | null | null | null | # NOTICE
This is only a simple demonstration. It is not meant to be used in any kind of distributed application. And it is highly recommended to __not__ use any addressing using virtual address.
# va_call
An example demonstrating how to call a function from C++ using its virtual address.
| 48.333333 | 185 | 0.796552 | eng_Latn | 0.999817 |
53fcd3e38d8fdc833407fbd888e8b85ff7cb9067 | 2,675 | md | Markdown | src/fr/2021-01/02/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/fr/2021-01/02/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/fr/2021-01/02/02.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Le roi est mort. Vive le roi!
date: 03/01/2021
---
`Ésaïe 6:1 parle de la mort du roi Ozias. Lisez 2 Chroniques 26 et ensuite, répondez à cette question: quelle est la signification de la mort du roi Ozias?`
Différentes perspectives peuvent être données concernant la mort de ce roi.
1. Bien que le règne d’Ozias ait été long et prospère, « lorsqu’il fut puissant, son cœur s’éleva pour le perdre » (2 Chron. 26:16, LSG) et il tenta d’offrir de l’encens dans le temple. Lorsque les prêtres l’arrêtèrent à juste titre parce qu’il n’était pas autorisé à être un descendant sacerdotal d’Aaron (2 Chron. 26:18), le roi se mit en colère. À ce moment, lorsque le roi refusa la réprimande, l’Éternel le frappa immédiatement de la lèpre, qu’il eut « jusqu’au jour de sa mort, et étant lépreux, il vivait dans une maison séparée, car il fut exclu de la maison de l’Éternel » (2 Chron. 26:21, LSG). Quelle ironie qu’Ésaïe ait eu une vision du roi pur, immortel et divin dans Sa maison ou Son temple l’année même où le roi humain impur est mort!
2. Il existe un contraste frappant entre Ozias et Ésaïe. Ozias recherchait la sainteté de manière présomptueuse, pour une mauvaise raison (l’orgueil), et est devenu rituellement impur, de sorte qu’il fut retranché de la sainteté. Ésaïe, en revanche, a permis à la sainteté de Dieu de l’atteindre. Il a humblement admis sa faiblesse et a aspiré à la pureté morale, qu’il a reçue (Ésaïe 6:5-7, LSG). Tout comme le publicain dans la parabole de Jésus, il s’en est allé justifié: « car quiconque s’élève sera abaissé, et celui qui s’abaisse sera élevé » (Luc 18:14,LSG).
3. Il existe une similitude frappante entre le corps lépreux d’Ozias et l’état moral de son peuple: « De la plante du pied jusqu’à la tête, rien n’est en bon état: Ce ne sont que blessures, contusions et plaies vives » (Esa 1:6 LSG).
4. La mort d’Ozias vers 740 av. JC marque une crise majeure dans le leadeurship du peuple de Dieu. La mort de tout souverain absolu rend son pays vulnérable lors d’une transition du pouvoir. Mais Juda était en danger particulier, car Tiglath-Piléser III était monté sur le trône d’Assyrie quelques années auparavant, en 745 av. JC, et avait immédiatement pris le chemin de la guerre qui faisait de sa nation une superpuissance invincible qui menaçait l’existence indépendante de toutes les nations du Proche-Orient. En cette période de crise, Dieu a encouragé Ésaïe en montrant au prophète qu’Il était toujours Maitre de la situation.
`Lisez attentivement 2 Chroniques 26:16. Comment chacun de nous fait-il face à cet instinct pour la même chose? Comment le fait de méditer sur la croix peut-il nous protéger de cela?` | 148.611111 | 750 | 0.770093 | fra_Latn | 0.994621 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.