hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
93a6f8a604d6d63e3712fa156d2609a7f45f48e3 | 792 | md | Markdown | src/pages/euro-stacking-containers/stackable-totes-storage.md | Boytobeaman/gatsby-starter-netlify-cms | d7139faedac07582d3b8c716835acded1c59b7d7 | [
"MIT"
] | null | null | null | src/pages/euro-stacking-containers/stackable-totes-storage.md | Boytobeaman/gatsby-starter-netlify-cms | d7139faedac07582d3b8c716835acded1c59b7d7 | [
"MIT"
] | null | null | null | src/pages/euro-stacking-containers/stackable-totes-storage.md | Boytobeaman/gatsby-starter-netlify-cms | d7139faedac07582d3b8c716835acded1c59b7d7 | [
"MIT"
] | null | null | null | ---
templateKey: eurostackingcontainer-post
title: stackable totes storage
description: desc
model: EUB
external_long: '400'
external_width: '300'
external_height: '148'
internal_long: '368'
internal_width: '260'
internal_height: '142'
volumn: '13.5'
weight: '0.86'
date: 2019-05-04T04:14:29.910Z
tags:
- stacking crate
- stackable bin
images:
- >-
https://cdn.movingboxsale.com/products/3270a65e611942ea940fd82f307e3170.jpg
- >-
https://cdn.movingboxsale.com/products/1d1eddef0bf34e0b81f217ece9048ea0.jpg
- >-
https://cdn.movingboxsale.com/products/37474a97260345be8b3a6b9fd0362513.jpg
- >-
https://cdn.movingboxsale.com/products/2a5e5922554c4b52bbe3c55bf2f83092.jpg
- >-
https://cdn.movingboxsale.com/products/e1b14d2db2784d88a5e608954ea83583.jpg
---
desc
| 25.548387 | 79 | 0.760101 | yue_Hant | 0.266295 |
93a771d1fc71e05a0ca8f37d8657b75409bd9bc0 | 730 | md | Markdown | chip8/Readme.md | bryan-pakulski/emulators | 599856760529cce7cc31be43d07617983e642dae | [
"MIT"
] | null | null | null | chip8/Readme.md | bryan-pakulski/emulators | 599856760529cce7cc31be43d07617983e642dae | [
"MIT"
] | null | null | null | chip8/Readme.md | bryan-pakulski/emulators | 599856760529cce7cc31be43d07617983e642dae | [
"MIT"
] | null | null | null | # Compiling
- run make init to create missing folders (dependencies / lib)
- Add library files into lib
- Add below files into dependencies folder
# Required files
- chip8 roms (can be easily found on the internet)
- OpenSans-Regular.ttf
# Required libraries (linux)
- libsdl2-dev
- libsdl2-ttf
## Lib packages
You can install .so .a linking binaries directly without the use of a package manager by following these steps:
- Create lib folder in this directory
- Add required .so / .a binaries from above requirements
The make file will automatically accomodate these and link them to the binary
Note: You should download the external libraries only from a trusted external source i.e. Arch linux mirrors. | 34.761905 | 112 | 0.757534 | eng_Latn | 0.996184 |
93a7d76a7c949e2994e48f22c680de5bbfbb91fc | 778 | md | Markdown | server-2013/lync-server-2013-reference-topologies.md | v-rajagt/OfficeDocs-SkypeforBusiness-Test-pr.zh-cn | eab3686e8cbc09adec3e81749bfcafc598f66fb9 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-05-19T19:27:53.000Z | 2022-02-19T00:00:24.000Z | server-2013/lync-server-2013-reference-topologies.md | v-rajagt/OfficeDocs-SkypeforBusiness-Test-pr.zh-cn | eab3686e8cbc09adec3e81749bfcafc598f66fb9 | [
"CC-BY-4.0",
"MIT"
] | 30 | 2018-05-30T19:12:05.000Z | 2018-08-24T10:54:53.000Z | server-2013/lync-server-2013-reference-topologies.md | v-rajagt/OfficeDocs-SkypeforBusiness-Test-pr.zh-cn | eab3686e8cbc09adec3e81749bfcafc598f66fb9 | [
"CC-BY-4.0",
"MIT"
] | 18 | 2018-05-02T08:27:54.000Z | 2021-11-15T11:24:05.000Z | ---
title: Lync Server 2013 参考拓扑
TOCTitle: 参考拓扑
ms:assetid: 1b9e3467-ee74-4598-a348-16490b098760
ms:mtpsurl: https://technet.microsoft.com/zh-cn/library/Gg398254(v=OCS.15)
ms:contentKeyID: 49312160
ms.date: 05/19/2016
mtps_version: v=OCS.15
ms.translationtype: HT
---
# Lync Server 2013 中的参考拓扑
_**上一次修改主题:** 2012-05-21_
理想的 Lync Server 拓扑取决于组织的大小、要部署的工作负荷、是否需要高可用性以及相应的投资成本。
以下主题概述了三种参考拓扑,包括多种决策背后的推论,这些决策推动了对每个拓扑的要求。
## 本部分内容
- [小型组织中 Lync Server 2013 的参考拓扑](lync-server-2013-reference-topology-for-small-organizations.md)
- [中型组织中的 Lync Server 2013 参考拓扑](lync-server-2013-reference-topology-for-medium-size-organizations.md)
- [具有多数据中心的大型组织中的 Lync Server 2013 参考拓扑](lync-server-2013-reference-topology-for-large-organizations-with-multiple-data-centers.md)
| 25.933333 | 133 | 0.77892 | yue_Hant | 0.255575 |
93a80863f37958179fb64f029cc0f079d6499449 | 1,787 | md | Markdown | docs/design/add-in-design.md | isabella232/office-js-docs-pr.zh-cn | f383f5ea663d60b5cd282cc7792611f06ef7f18f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/design/add-in-design.md | isabella232/office-js-docs-pr.zh-cn | f383f5ea663d60b5cd282cc7792611f06ef7f18f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-23T19:03:07.000Z | 2021-02-23T19:03:07.000Z | docs/design/add-in-design.md | isabella232/office-js-docs-pr.zh-cn | f383f5ea663d60b5cd282cc7792611f06ef7f18f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 设计 Office 加载项
description: 了解 Office 加载项视觉设计的最佳做法。
ms.date: 06/20/2019
localization_priority: Priority
ms.openlocfilehash: a2965c2ee148c82708b9c61edd853f112adcf93c
ms.sourcegitcommit: be23b68eb661015508797333915b44381dd29bdb
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/08/2020
ms.locfileid: "44607678"
---
# <a name="design-office-add-ins"></a>设计 Office 加载项
Office 外接程序可通过提供用户可在 Office 客户端内访问的上下文功能来扩展 Office 体验。通过外接程序,用户可以访问 Office 内的第三方功能以完成更多操作,而无需进行成本高昂的上下文切换。
你的外接程序 UX 设计必须与 Office 无缝集成,为用户提供高效、自然的交互。利用[外接程序命令](add-in-commands.md)提供对外接程序的访问权限,并应用创建基于 HTML 的自定义 UI 时建议的最佳实践。
## <a name="office-design-principles"></a>Office 设计原则
Office 应用程序遵循一套常规交互原则。应用共享内容并具有外观和行为相似的元素。此通用性基于一套设计原则。这些原则帮助 Office 团队创建支持客户任务的界面。了解并遵循这些原则将有助于支持 Office 内部的客户目标。
若要打造积极的加载项体验,请遵循 Office 设计原则:
- **对 Office 进行明确设计。** 加载项的功能、外观和感受必须和谐地完善 Office 体验。加载项应该让人感觉就像安装在本机一样。它们应无缝融入 iPad 版 Word 或 PowerPoint 网页版。设计良好的加载项将恰当地融合体验、平台和 Office 应用程序。请考虑使用 Office UI Fabric 作为设计语言。在适当的位置应用文档和 UI 主题。
- **重点关注几个关键任务;好好完成。** 帮助客户在不影响其他工作的情况下完成一项工作。为客户提供真正的价值。与 Office 文档交互时,关注常见用例并认真挑选出用户最受益的。
- **使内容优先于 Chrome。** 使客户的页面、幻灯片或电子表格始终关注体验。外接程序是辅助界面。没有任何辅助 Chrome 应当与外接程序的内容和功能交互。请明智地品牌化你的体验。我们知道这对于向用户提供独特且可识别的功能但避免干扰十分重要。努力将重点集中于内容和任务完成,而非品牌关注。
- **使其方便好用并保持对用户的控制。** 人们喜欢使用实用且外观吸引人的产品。 小心地定制你的体验。 将每个交互和视觉细节考虑在内,把细节做好。 允许用户控制其体验。 完成任务的必要步骤必须清楚并相互关联。 重要的决定应该是易于理解的。 操作应该可以轻松撤消。 外接程序不是一个目标,它是对 Office 功能的增强。
- **针对所有平台和输入方法进行设计**。外接程序设计用于 Office 支持的所有平台,您的外接程序 UI 应该进行优化,以便跨平台和外形规格运行。支持鼠标/键盘和触摸输入设备,确保您的自定义 HTML UI 响应迅速,可适应不同的外形规格。有关详细信息,请参阅[触摸](../concepts/add-in-development-best-practices.md#optimize-for-touch)。
## <a name="see-also"></a>另请参阅
- [Office UI Fabric](https://developer.microsoft.com/fabric)
- [加载项开发最佳做法](../concepts/add-in-development-best-practices.md)
| 45.820513 | 208 | 0.810856 | yue_Hant | 0.721051 |
93a8982b0fd33c009e9002a791c7b888bcd484ac | 1,687 | md | Markdown | results/headphonecom/headphonecom_harman_over-ear_2018/Beyerdynamic T90 250 Ohm/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | results/headphonecom/headphonecom_harman_over-ear_2018/Beyerdynamic T90 250 Ohm/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | results/headphonecom/headphonecom_harman_over-ear_2018/Beyerdynamic T90 250 Ohm/README.md | NekoAlosama/AutoEq-nekomod | a314a809c3fe46c3c8526243bd97f0f31a90c710 | [
"MIT"
] | null | null | null | # Beyerdynamic T90 250 Ohm
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
In case of using parametric equalizer, apply preamp of **-7.57dB** and build filters manually
with these parameters. The first 5 filters can be used independently.
When using independent subset of filters, apply preamp of **-7.58 dB**.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|----------:|
| Peaking | 23.82 Hz | 0.81 | 7.01 dB |
| Peaking | 55.66 Hz | 1.69 | 4.15 dB |
| Peaking | 902.92 Hz | 1.33 | 4.04 dB |
| Peaking | 10905.18 Hz | 0.84 | -4.88 dB |
| Peaking | 18742.85 Hz | 0.3 | -10.13 dB |
| Peaking | 224.66 Hz | 1.66 | -2.01 dB |
| Peaking | 1642.63 Hz | 1.97 | 1.07 dB |
| Peaking | 3162.91 Hz | 2.14 | -3.23 dB |
| Peaking | 4875.10 Hz | 3.49 | 4.95 dB |
| Peaking | 6513.67 Hz | 5.61 | -2.68 dB |
### Fixed Band EQs
In case of using fixed band (also called graphic) equalizer, apply preamp of **-8.26dB**
(if available) and set gains manually with these parameters.
| Type | Fc | Q | Gain |
|--------:|------------:|-----:|----------:|
| Peaking | 31.25 Hz | 1.41 | 7.54 dB |
| Peaking | 62.50 Hz | 1.41 | 3.95 dB |
| Peaking | 125.00 Hz | 1.41 | -0.16 dB |
| Peaking | 250.00 Hz | 1.41 | -2.22 dB |
| Peaking | 500.00 Hz | 1.41 | 1.27 dB |
| Peaking | 1000.00 Hz | 1.41 | 4.10 dB |
| Peaking | 2000.00 Hz | 1.41 | -0.86 dB |
| Peaking | 4000.00 Hz | 1.41 | 0.35 dB |
| Peaking | 8000.00 Hz | 1.41 | -6.04 dB |
| Peaking | 16000.01 Hz | 1.41 | -16.28 dB |
### Graphs
 | 42.175 | 98 | 0.55246 | eng_Latn | 0.670443 |
93a8c912b5ebf71507739324ce093c2b60e0c783 | 3,062 | md | Markdown | 10_sles_for_sap/README.md | Cyclenerd/sap-on-gcp-scripts | 37dd5c5b34e7fce4706b493fe320d79b3244bd20 | [
"Apache-2.0"
] | 1 | 2022-02-01T18:44:03.000Z | 2022-02-01T18:44:03.000Z | 10_sles_for_sap/README.md | Cyclenerd/sap-on-gcp-scripts | 37dd5c5b34e7fce4706b493fe320d79b3244bd20 | [
"Apache-2.0"
] | 1 | 2022-02-01T16:20:00.000Z | 2022-02-01T18:35:56.000Z | 10_sles_for_sap/README.md | Cyclenerd/sap-on-gcp-scripts | 37dd5c5b34e7fce4706b493fe320d79b3244bd20 | [
"Apache-2.0"
] | null | null | null | # SUSE Linux Enterprise Server 15 for SAP
Create service account and Compute Engine virtual machine instance with SUSE Linux Enterprise Server 15 for SAP as operating system.
## Configuration
Configuration other than default values:
| Variable | Description | Value |
|----------|-------------|-------|
| MY_GCP_GCE_NAME | Name of GCE virtual machine instance | `slessap` |
| MY_GCP_GCE_TYPE | GCE machine type | `n1-standard-1` |
| MY_GCP_GCE_DISK_BOOT_TYPE | Type of the boot disk | `pd-ssd` |
| MY_GCP_GCE_DISK_BOOT_SIZE | Size of the boot disk | `64GB` |
| MY_GCP_GCE_IMAGE_FAMILY | Image family for the OS that the boot disk will be initialized with | `sles-15-sp3-sap` |
| MY_GCP_GCE_IMAGE_PROJECT | Project against image family references | `suse-sap-cloud` |
## Pricing
[Google Cloud Pricing Calculator](https://cloud.google.com/products/calculator/#id=6b01ac7e-ea27-442a-a1ea-76a00512991b)
* Region: Finland
* 730 total hours per month
* VM class: regular
* Instance type: `n1-standard-1` (USD 26.70) [Sustained Use Discount applied]
* Operating System / Software: Paid (USD 124.10)
* Sustained Use Discount: 30%
* Effective Hourly Rate: USD 0.207
* Estimated Component Cost: USD 150.80 per 1 month
* Zonal SSD PD: 64 GiB
* Total Estimated Cost: USD 162.77 per 1 month
SUSE Linux Enterprise Server 15 for SAP usage fee billed by Google:
* EUR 0.14/hour (EUR 105.27/month) for 1-2 vCPU machine types
* EUR 0.29/hour (EUR 210.55/month) for 3-4 vCPU machine types
* EUR 0.35/hour (EUR 253.90/month) for 5+ vCPU machine types
Information without guarantee.
## Scripts
* `01_create_slessap.sh` : Create service account and Compute Engine virtual machine instance
* `10_ssh_slessap.sh` : SSH into a Linux virtual machine instance
* `99_delete_slessap.sh` : Delete Compute Engine virtual machine instance and service account
### Snapshots
If you don't need your VM for a long time you can make a snapshot from the disk.
You can then delete the VM and the disk (`99_delete_slessap.sh`).
If you need the VM with the data again, you can create a new fresh VM from the snapshot.
* `30_create_snapshot_slessap.sh` : Create snapshot of Compute Engine persistent boot disk
* `31_create_from_snapshot_slessap.sh` : Create Compute Engine persistent boot disk from last snapshot and create virtual machine instance with created disk
* `39_delete_snapshots_slessap.sh` : Delete all Compute Engine boot disk snapshots from specific instance
You will then save the [disk cost](https://cloud.google.com/compute/all-pricing#disk) and pay only the very cheap [snapshot price](https://cloud.google.com/compute/all-pricing#disk).
* Regional snapshot storage $0.029 per GB in `europe-north1` (Finland)
* Multi-regional snapshot storage $0.0286 per GB in `eu` (European Union) [DEFAULT]
Example:
```shell
# Create snapshot
bash 30_create_snapshot_slessap.sh
# Delete SA, Disk and VM
bash 99_delete_slessap.sh
# Later, create new VM from snapshot
bash 31_create_from_snapshot_slessap.sh
# Delete all snapshots
bash 39_delete_snapshots_slessap.sh
``` | 41.945205 | 182 | 0.759308 | eng_Latn | 0.776315 |
93a8dee50ef9876b0159fa6f282fba42e791f553 | 229 | md | Markdown | packages/adapters/bus-adapters/resolve-bus-base/README.md | artemjackson/resolve | 61995fe53c5561f9ddeeb19346f13d76006f1c78 | [
"MIT"
] | null | null | null | packages/adapters/bus-adapters/resolve-bus-base/README.md | artemjackson/resolve | 61995fe53c5561f9ddeeb19346f13d76006f1c78 | [
"MIT"
] | null | null | null | packages/adapters/bus-adapters/resolve-bus-base/README.md | artemjackson/resolve | 61995fe53c5561f9ddeeb19346f13d76006f1c78 | [
"MIT"
] | null | null | null | # **resolve-bus-base**
[](https://badge.fury.io/js/resolve-bus-base)

| 45.8 | 106 | 0.733624 | yue_Hant | 0.470694 |
93a9258843106407a839694e9ef45ccf12f89154 | 11,218 | md | Markdown | deliverable-2/README.md | csc301-winter-2020/team-project-3-matron | 53e4298ac8ff2866f187783a2e7d74cae578d4d5 | [
"MIT"
] | 1 | 2020-05-22T20:32:07.000Z | 2020-05-22T20:32:07.000Z | deliverable-2/README.md | csc301-winter-2020/team-project-3-matron | 53e4298ac8ff2866f187783a2e7d74cae578d4d5 | [
"MIT"
] | 1 | 2021-03-10T14:22:55.000Z | 2021-03-10T14:22:55.000Z | deliverable-2/README.md | csc301-winter-2020/team-project-3-matron | 53e4298ac8ff2866f187783a2e7d74cae578d4d5 | [
"MIT"
] | 4 | 2020-04-25T17:42:23.000Z | 2021-04-29T02:29:28.000Z | # Team 3 - Matron
## Description
Our web-based app is to be used as a component in a Matron's larger overall system for managing the schedules of nurses and related healthcare professionals. The schedules of such workers are currently served by systems which don't take the physical layout of the work environment into account. As such, nurses regularly walk around their hospital wing much more than they need to (ie., by walking past patient rooms they're scheduled to visit later in the day) and become more exhausted and inefficient as a result. With Matron's spatially-optimized scheduling, nurses will receive schedules that require them to walk the minimal amount.
Our application allows a hospital blueprint to be translated into a graph that allows room-to-room distances to be queried by the Matron scheduler. Users build this graph through a visual interface that allows them to add nodes representing rooms with a given label (eg., "exam room 3", "room 301", "supply room 6", etc.) and type (eg., "patient room", "supply room", "exam room", "workstation", etc.), and add edges between them representing hallways that connect the rooms.
Once the graph is built, it can be saved and then subsequently reloaded, edited, resaved, etc.
## Key Features (Implemented)
1. Users can load saved maps/blueprint images via the map name.
2. Users can create a new named map with an optional blueprint backdrop image.
3. Users can delete saved maps/blueprint images via the map name.
4. Users can edit the map via the mouse/keyboard (add/remove/rename/retypeset nodes, add/remove edges).
5. Users can save the current loaded map.
6. Users can obtain the relative distance from one room to another within the current graph.
7. Responsive button colors/messages let the user know when they're missing required inputs.
8. Node/edge snapping and edge junction creation.
<!--* Users can load/view an existing map and blueprint image from its name.
* Users can upload an existing map of the unit to use as a guide when building the graph of the rooms.
* Users can draw and modify the graph of the unit using nodes which are overlaid on the uploaded guide.
* Users can measure the time it takes to get from any given room to anythere.
* All created hospital maps are saved for users to retrieve and further edit.
* Possible to obtain the relative distance from one location to another within map drawn or given by the users.
* Described the key features in the application that the user can access
* Provide a breakdown or detail for each feature that is most appropriate for your application
* This section will be used to assess the value of the features built -->
## Instructions
The master branch auto deploys to https://salty-shelf-59230.herokuapp.com/.
The develop branch auto deploys to https://floating-shore-56001.herokuapp.com/.
Designed for the <strong>latest version of Chrome on Windows</strong>. No guarantees on other platforms.
Optimized for use with a <strong>3-button mouse</strong>. Results may vary with other input devices.
We have no user accounts.
1. I want to load an existing map:
On the homepage I'm prompted to select a unit to load via a searchable dropdown. I can leftclick the search box and select it from the dropdown, or begin typing its name to narrow the search results before leftclicking on it. I then click the green "edit map" button to load it.
(Note, the map entitled "demo" will always exist for loading/editing as it is intentionally undeletable for demo purposes.)
2. I want to create a new graph:
On the homepage I'm prompted to select a unit to load via a searchable dropdown. I leftclick the search box and begin typing a new name. Eventually I'll be shown the option to "add <new name>". I leftclick this option and am prompted with the option to upload a blueprint image to serve as the backdrop for my map. Note that uploading a blueprint is NOT required (you can tell since the "Create floor button" is green, indicating that we may proceed (key feature 7)). I then click the "Create floor button" and am taken to a new blank canvas potentially filled with a backdrop image if one was uploaded.
3. I want to delete an existing map:
On the homepage I'm prompted to select a unit to load via a searchable dropdown. I leftclick the search box and see a list of existing maps. If I click the red "x" to the right of each map name, I will delete that map from the database. (Note that the map entitled "demo" is undeletable for demo purposes.)
4. I want to edit the current loaded map in the graph editor:
I leftclick the canvas to add a new room (large) node. I am then prompted to enter the node's label (eg. "room 301") and type (eg. "classroom"). The type dropdown works the same as the map selection dropdown on the homepage, ie., I can select an existing node type or enter a new one via the keyboard. Note that no types existing on a newly created map. Once a type is created, it will show up as option in the type dropdown for subsequent nodes. Once the inputs are valid, the red "Enter label" button will transition to a green "Save node" button which I click to save the node.
I leftclick an existing room (large) node to edit its label or type.
I rightclick the canvas to add a hallway (small) node. This enables a "ghost edge" that follows my cursor. In this state, if I click the canvas, I create a new hallway node and move the "ghost edge" source to the new node. This enables me to easily create long stretches of curvy hallways. If I rightclick on an existing node, I create an edge between the ghost source and the rightclicked node. If I press Esc, I remove the ghost edge. If I rightclick on an existing edge, a new hallway node will be formed at the junction between that edge and the ghost edge (key feature 8). (Note that while the ghost edge is enabled, the snapping distance is increase substantially to make it easier to connect things. A cyan border is added to the nearby element my rightclick will snap to. If the snapping distance is too great and I require more precision, I can simply zoom in via the scroll-wheel.)
I rightclick an existing node to enable a "ghost edge" with its source set to the rightclicked node.
I leftclickHold and drag on an existing node to move it and its connected edges.
I hold ctrl+leftclickHold and drag over the canvas to box select/deselect. (Actually toggles the selection of all nodes in the box.)
I leftclickHold and drag on the canvas to pan the camera.
I scroll to zoom the camera in and out.
I press X on my keyboard to delete nodes/edges selected via box select.
I press Esc to disable the ghost edge if it's enabled.
5. I want to save my current map:
In the graph editor interface, I leftclick the save button in the top left corner to save my graph to the database. (Note that empty graphs will not be saved.) It will be saved under the name chosen back at the homepage. Later I can load this version of the graph from the homepage via that name.
6. I want to obtain the distance between two rooms in the current map:
In the graph editor interface, I leftclick the paper plane button (to the right of the save button). This opens a popup that prompts me to enter the labels of two rooms in my graph. When I leftclick the calculate button it displays the distance or gives me an error message prompting me to retry with the correct arguments.
Distance query notes:
* <strong>If I've modified the map, I need to save before the distances will update.</strong>
* The number is unitless, as these distances are only ever compared relative to each other.
* If the returned value is -1, this indicates that no path exists between the two given rooms.
* This element queries our backend API to obtain the distance. Normally, such API methods will only be called by the backend Matron scheduling API. This element exists solely for debugging and demonstration purposes.
## Development requirements
The webapp is configured for heroku. The codebase can be directly deployed to heroku, or requirements can be manually installed. In either case, there is the additional requirement of a MongoDB server, which can be provisioned from heroku or elsewhere.
* Install Python 3.6+
* Install or Provision a MongoDB instance.
* Clone the repo
* Put your MongoDB access settings into `app/main.py` on line 13.
* If using Heroku, push the current copy of the repo to the heroku-remote
* If not using heroku, install python dependencies using `pip install -r requirements.txt`
* To run the server: `python app/main.py`
## Deployment and Github Workflow
Our main process from writing code to a live application is:
* The two core branches are master and develop. Both core branches are protected, as merging to master requires three code reviews and merging to develop requires one. This is because develop is for fast iterations and new features, but master should reflect a more thorough and polished version of the project.
* When a feature is planned, an issue is created for it and a card is generated for the project view.
* Once someone begins working on the feature, they create a branch for the feature, and move the card to "In Progress". All code changes relevant to this feature should only be made in this branch.
* Once the feature is complete, a pull request from their branch to develop is opened. Any team member is able to test and review the code.
* Once tested and reviewed, it can be merged into develop, from where it is automatically deployed to Heroku to test the new feature
* When it is time to make a release, develop is merged into master, undergoing significant testing and review in the process. Master also has automatic deployments to Heroku, to use as demos to show the partner
Some additional information:
* We chose to assign tasks by features of the app because we each have expertise in different areas. Using branching in the git repository was the optimal way to accommodate people working on different parts of the app.
* The testing before pull request is done to ensure the new feature will work well with the other parts of the app and is bug free. If the test fails, the person assigned to that feature can go back to the branch and fix the issue and then it can be tested again
* We are working on integrating a continuous integration service to automatically run tests, like Travis CI or GitHub actions. Once this is complete, we'll require tests to pass before being able to merge to develop or master.
## Licenses
We decided to use the MIT license because we felt it is the one of the best open source licenses available. Firstly, It does not require any derivative works to also use that license unlike GPL. Since we are building an addon for an existing service that may one day need to extend our code, this aspect is appealing to both us and the project partner. It also means other projects that want to build on our code can use it freely without having any licensing restrictions or paying any fees
| 86.96124 | 895 | 0.767516 | eng_Latn | 0.999773 |
93a9457705e9a1b77ec0c5cfbea065a1509b0b74 | 16 | md | Markdown | README.md | spek-lang/spek-lang.org | 1a8575403a3a081bef91441e4954c388bd8a3684 | [
"Apache-2.0"
] | null | null | null | README.md | spek-lang/spek-lang.org | 1a8575403a3a081bef91441e4954c388bd8a3684 | [
"Apache-2.0"
] | null | null | null | README.md | spek-lang/spek-lang.org | 1a8575403a3a081bef91441e4954c388bd8a3684 | [
"Apache-2.0"
] | null | null | null | # spek-lang.org
| 8 | 15 | 0.6875 | deu_Latn | 0.371409 |
93a9fcff347b36a7459bf1b5cd82721375abfe2f | 689 | md | Markdown | CHANGELOG.md | mcmips/osim-rl | 610b95cf0c4484f1acecd31187736b0113dcfb73 | [
"MIT"
] | 867 | 2017-01-21T20:53:36.000Z | 2022-03-20T09:47:08.000Z | CHANGELOG.md | mcmips/osim-rl | 610b95cf0c4484f1acecd31187736b0113dcfb73 | [
"MIT"
] | 197 | 2017-01-22T21:27:36.000Z | 2022-01-10T16:18:35.000Z | CHANGELOG.md | mcmips/osim-rl | 610b95cf0c4484f1acecd31187736b0113dcfb73 | [
"MIT"
] | 277 | 2017-02-01T18:42:18.000Z | 2022-03-23T11:30:31.000Z | # osim-rl
Install the most recent version with
```bash
pip install git+https://github.com/stanfordnmbl/osim-rl.git -U
```
in the conda environment with OpenSim.
## [2.1.0] - 2018-08-27
- `equilibrateMuscles` called in the first step (https://github.com/stanfordnmbl/osim-rl/issues/133)
- Added the new reward function (round 2)
- `timestap_limit` for the second round is 1000
- Fixed the `set_state` (https://github.com/stanfordnmbl/osim-rl/issues/125)
- `difficulty` added to the environment (https://github.com/stanfordnmbl/osim-rl/issues/127). `1` for the second round and `0` for the first round
- `seed` added to the environment (https://github.com/stanfordnmbl/osim-rl/issues/158)
| 43.0625 | 146 | 0.746009 | eng_Latn | 0.89635 |
93aa1aa4c4dd59e6be3884b5d764ebfd6f2284bb | 3,893 | md | Markdown | articles/sql-database/sql-database-planned-maintenance.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-planned-maintenance.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/sql-database/sql-database-planned-maintenance.md | eltociear/azure-docs.es-es | b028e68295007875c750136478a13494e2512990 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Planeación de los eventos de mantenimiento de Azure
description: Aprenda a prepararse para los eventos de mantenimiento planeado en su base de datos de Azure SQL.
services: sql-database
ms.service: sql-database
ms.subservice: operations
ms.custom: ''
ms.devlang: ''
ms.topic: conceptual
author: aamalvea
ms.author: aamalvea
ms.reviewer: carlrab
ms.date: 01/30/2019
ms.openlocfilehash: ba882176fbe17f7b74c786f421dde8fadd58d9b7
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/27/2020
ms.locfileid: "73821312"
---
# <a name="planning-for-azure-maintenance-events-in-azure-sql-database"></a>Planeación de los eventos de mantenimiento en Azure SQL Database
Aprenda a prepararse para los eventos de mantenimiento planeado en su base de datos de Azure SQL.
## <a name="what-is-a-planned-maintenance-event"></a>Qué es un evento de mantenimiento planeado
Para cada base de datos, Azure SQL Database mantiene un cuórum de réplicas de base de datos en el que una de ellas es la principal. En todo momento una réplica principal debe realizar mantenimiento en línea, mientras que al menos una réplica secundaria debe estar en buen estado. Durante el mantenimiento planeado, los miembros del cuórum de la base de datos se quedarán sin conexión una a la vez, con la intención de que haya una réplica principal respondiendo y al menos una réplica secundaria en línea, de forma que no haya tiempo de inactividad en el cliente. Cuando sea necesario que la réplica principal esté sin conexión, se producirá un proceso de reconfiguración o conmutación por error en el que una réplica secundaria se convertirá en la nueva réplica principal.
## <a name="what-to-expect-during-a-planned-maintenance-event"></a>Qué esperar durante un evento de mantenimiento planeado
Las reconfiguraciones o conmutaciones por error normalmente se completan en 30 segundos; el promedio es de 8 segundos. Si ya está conectada, la aplicación debe volver a conectarse a la nueva copia correcta de la réplica principal de la base de datos. Si se intenta establecer una nueva conexión mientras la base de datos está realizando una reconfiguración antes de que la nueva réplica esté en línea, se devolverá el error 40613 (base de datos no disponible): "La base de datos '{nombre de la base de datos}' del servidor '{nombre del servidor}' no está disponible actualmente. Vuelva a intentar la conexión más tarde". Si la base de datos tiene una consulta de larga duración, esta consulta se interrumpirá durante la reconfiguración y deberá reiniciarse.
## <a name="retry-logic"></a>Lógica de reintento
Todas las aplicaciones cliente de producción que se conecten a un servicio de base de datos en la nube deben implementar una [lógica de reintento](sql-database-connectivity-issues.md#retry-logic-for-transient-errors) de conexión sólida. Así, ayudarán a mitigar estas situaciones y, por lo general, harán que el usuario final no tenga noticias de los errores.
## <a name="frequency"></a>Frecuencia
En promedio, se producen 1,7 eventos de mantenimiento planeado cada mes.
## <a name="resource-health"></a>Estado de los recursos
Si la base de datos SQL está experimentando errores de inicio de sesión, compruebe la ventana de [Resource Health](../service-health/resource-health-overview.md#get-started) en [Azure Portal](https://portal.azure.com) para conocer el estado actual. La sección del historial de estado contiene el motivo del tiempo de inactividad de cada evento (si está disponible).
## <a name="next-steps"></a>Pasos siguientes
- Obtenga más información acerca de [Resource Health](sql-database-resource-health.md) para SQL Database.
- Para obtener más información acerca de la lógica de reintentos, consulte [Lógica de reintento para errores transitorios](sql-database-connectivity-issues.md#retry-logic-for-transient-errors).
| 77.86 | 775 | 0.798099 | spa_Latn | 0.984439 |
93aaa6734d6116a8b48cb25341c082ab88d55c2b | 769 | md | Markdown | README.md | necccc/primus-headless-cookie | a664439d2239250abfef517df2c7237fe71bb7cd | [
"MIT"
] | null | null | null | README.md | necccc/primus-headless-cookie | a664439d2239250abfef517df2c7237fe71bb7cd | [
"MIT"
] | null | null | null | README.md | necccc/primus-headless-cookie | a664439d2239250abfef517df2c7237fe71bb7cd | [
"MIT"
] | null | null | null | # primus-headless-cookie
A monkey-patch around http for headless Primus.createSocket to handle sticky sessions & cookies.
Result of [issue #452 at primus](https://github.com/primus/primus/issues/452)
## Usage
```
var Primus = require('primus');
var primusHsCookies = require('primus-headless-cookie');
var Socket = Primus.createSocket({ transformer: transformer, parser: parser }),
url = primusHsCookies('http://localhost:8080'),
client = new Socket(url);
```
## How
It basically adds a unique ID to every url used in sockets, based on this ID, it keeps a register to read Set-Cookie headers and put them back to each outgoing request.
This workaround solves the problem with loadbalancers, like haproxy, which commonly uses cookies for sticky sessions. | 38.45 | 168 | 0.754226 | eng_Latn | 0.90037 |
93aaee2a0d7d0fa3b721e0a921a3cd24fd31e77e | 2,411 | md | Markdown | README.md | cra/festlsh2018-nonviolent-intro-to-dl | a47badabb789f195cfedc1bb2eff378f62deebf9 | [
"MIT"
] | null | null | null | README.md | cra/festlsh2018-nonviolent-intro-to-dl | a47badabb789f195cfedc1bb2eff378f62deebf9 | [
"MIT"
] | null | null | null | README.md | cra/festlsh2018-nonviolent-intro-to-dl | a47badabb789f195cfedc1bb2eff378f62deebf9 | [
"MIT"
] | null | null | null | <a href="http://www.numpy.org"><img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60"></a>
<a href="http://bottlepy.org/"><img alt="BottlePy" src="http://bottlepy.org/docs/dev/_static/logo_nav.png" height="60"></a>
# Контрастный цветоопределитель на нейронных сетях
(звучит круто, на деле ничего особенного)
Сразу отметим, что **для этой задачи вообще нет большого резона использовать нейронные сети**, она выбрана для демонстрации общего принципа.
Проект сделан для презентация к [фестивалю ЛШ2018](http://fest.letnyayashkola.org) от мастерской [Deep Learning](http://www.letnyayashkola.org/deeplearning).
[Презентация](http://cra.github.io/festlsh2018-nonviolent-intro-to-dl/index.html).
Простая proof-of-concept нейронная сеть, которая учится по вводу пользователя угадывать белый или чёрный логотип надо использовать для рандомно выбранного цвета.
Интерфейс на html/js/jquery, "бэкенд" на [bottlepy](http://bottlepy.org/), нейронная сеть на чистом numpy.
Идея от [codetrain](https://www.youtube.com/watch?v=L9InSe46jkw) + [Jabrills](https://www.youtube.com/watch?v=KO7W0Qq8yUE)
# Установка и запуск
Проверено только для python3.6+. Если из virtualenv, то можно в принципе поставить numpy и bottle самостоятельно или из requirements.txt
$ pip -r requirements.txt
$ python server.py # или 'make run'
После чего открываем браузер по адресу `http://localhost:8080`
Под виндой не пробовал, но наверное можно сделать `conda intstal bottle` и потом попробовать `python server.py`
## «Что можно сделать ещё»
* Попробовать поменять число нейронов в скрытом слое.
* Имплементировать другую активационную функцию: для этого надо дописать их в файле `neural_networks.py` и подменить соответствующие места в методах `train` и `predict` (или сделать свой класс)
* Использовать только один нейрон в выводном слое для цели «использовать белый». В противном случае использовать чёрный цвет
* Сделать то же самое на другом методе машинного обучения (DecisionTree например) и сравнить результат
Наконец, как совсем хорошее улучшение можно расширить всё это так чтобы на выходе было три вывода (r, g, b) и соответственно сеть обучалась делать «дополняющий цвет» а не просто бинарно белый/чёрный. Но тут ещё надо будет придумать как собрать входные данные для обучения и как измерять «точность».
| 63.447368 | 322 | 0.781833 | rus_Cyrl | 0.919306 |
93ab123af6b66b795e7b1273524fdfa46c608d31 | 78 | md | Markdown | Presentations/README.md | BrynMorley/lab-submissions | 40c3f66d53d3eb12eb3a1ffcb137cc65f7a4b37a | [
"MIT"
] | null | null | null | Presentations/README.md | BrynMorley/lab-submissions | 40c3f66d53d3eb12eb3a1ffcb137cc65f7a4b37a | [
"MIT"
] | 1 | 2020-06-25T10:21:47.000Z | 2020-06-25T10:21:47.000Z | Presentations/README.md | BrynMorley/lab-submissions | 40c3f66d53d3eb12eb3a1ffcb137cc65f7a4b37a | [
"MIT"
] | null | null | null | When I have submitted my presentation i will update this file
Draft submitted | 26 | 61 | 0.833333 | eng_Latn | 0.999513 |
93abb089ec51153c636bc0cb628f7a77bd05744f | 13,989 | md | Markdown | WindowsServerDocs/administration/windows-commands/at.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/administration/windows-commands/at.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/administration/windows-commands/at.md | TSlivede/windowsserverdocs.de-de | 94efc4447d5eac158ab05bc87f9fcec15c317872 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: at
description: Windows-Befehle Thema **am** –, dass Befehle, und führen Sie auf einem Computer an einem angegebenen Datum und die Programme.
ms.custom: na
ms.prod: windows-server-threshold
ms.reviewer: na
ms.suite: na
ms.technology: manage-windows-commands
ms.tgt_pltfrm: na
ms.topic: article
ms.assetid: ff18fd16-9437-4c53-8794-bfc67f5256b3
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
ms.openlocfilehash: fc9d9f3d008db1bb85bfb6afa0308834c929b5f0
ms.sourcegitcommit: eaf071249b6eb6b1a758b38579a2d87710abfb54
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/31/2019
ms.locfileid: "66435278"
---
# <a name="at"></a>at
>Gilt für: WindowsServer (Halbjährlicher Kanal), Windows Server 2016, Windows Server 2012 R2, WindowsServer 2012
Plant die Befehle "und" Programme ", um einen bestimmten Zeitpunkt und Datum auf einem Computer ausführen. Sie können **am** nur, wenn der Zeitplan-Dienst ausgeführt wird. Ohne Parameter verwendet **am** geplante Befehle enthält.
## <a name="syntax"></a>Syntax
```
at [\\computername] [[id] [/delete] | /delete [/yes]]
at [\\computername] <time> [/interactive] [/every:date[,...] | /next:date[,...]] <command>
```
## <a name="parameters"></a>Parameter
| Parameter | Beschreibung |
|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| \\\\\<computerName\> | Gibt einen Remotecomputer an. Wenn Sie diesen Parameter weglassen, **am** plant, die Befehle und Programme auf dem lokalen Computer. |
| \<id\> | Gibt die ID zu einem geplanten Befehl zugewiesen. |
| /delete | Bricht einen geplanten Befehl ab. Wenn Sie weglassen *ID*, alle geplanten Befehle auf dem Computer abgebrochen. |
| / yes | Antworten Ja für alle Abfragen aus dem System, wenn Sie geplante Ereignisse löschen. |
| \<time\> | Gibt die Zeit, wenn den Befehl ausgeführt werden soll. Zeit wird als Stunden: Minuten im 24-Stunden-Notation (d. h. 00:00 [Mitternacht] bis 23:59) ausgedrückt. |
| / interactive | Ermöglicht das *Befehl* interagieren Sie mit dem Desktop des Benutzers, der zum Zeitpunkt angemeldet ist *Befehl* ausgeführt wird. |
| / alle: | Ausführungen *Befehl* auf allen angegebenen Tag oder die Tage der Woche oder Monat (z. B. jeden Donnerstag oder dritten Tag des Monats). |
| \<date\> | Gibt das Datum aus, wenn den Befehl ausgeführt werden soll. Sie können eine oder mehrere Tage der Woche angeben (d. h. type **M**,**T**,**W**,**Th**,**F**,**S**, **"su"** ) oder eine oder mehrere Tage des Monats (Typ, d. h. 1 bis 31). Trennen Sie mehrere Datumseinträge, durch Kommas. Wenn Sie weglassen *Datum*, **am** verwendet den aktuellen Tag des Monats. |
| /next: | Ausführungen *Befehl* auf das nächste Vorkommen des Tages (z. B. weiter Donnerstag). |
| \<command\> | Gibt den Windows-Befehl, Programm (d. h. .exe oder .com-Datei) oder ein Batchprogramm (d. h. bat- oder cmd-Datei), die Sie ausführen möchten. Wenn der Befehl einen Pfad als Argument erfordert, verwenden Sie den absoluten Pfad (d. h. am Anfang vollständigen Pfad mit dem Laufwerkbuchstaben). Wenn der Befehl ist auf einem Remotecomputer befindet, geben Universal Naming Convention (UNC)-Notation für den Server an, und Teilen von Namen, anstatt einen remote Laufwerkbuchstaben. |
| /? | Zeigt die Hilfe an der Eingabeaufforderung an. |
## <a name="remarks"></a>Hinweise
- **SCHTASKS** ist eine andere Befehlszeile Planungstool, mit denen Sie zum Erstellen und Verwalten geplanter Aufgaben. Weitere Informationen zu **"SCHTASKS"** , finden Sie unter verwandten Themen.
- Mithilfe von **an**
Verwendung von **am**, Sie müssen ein Mitglied der lokalen Gruppe "Administratoren" sein.
- Laden von Cmd.exe
**am** lädt nicht automatisch Cmd.exe, den Befehlsinterpreter, bevor Sie Befehle ausführen. Wenn Sie eine ausführbare Datei (.exe) nicht ausgeführt werden, Sie müssen explizit laden, Cmd.exe am Anfang des Befehls wie folgt: **Cmd/c Dir > c:\test.out**
- Anzeigen von geplanten Befehlen
Bei Verwendung von **am** ohne Befehlszeilenoptionen, geplante Aufgaben angezeigt werden in einer Tabelle formatiert etwa wie folgt:
```
Status ID Day time Command Line
OK 1 Each F 4:30 PM net send group leads status due
OK 2 Each M 12:00 AM chkstor > check.file
OK 3 Each F 11:59 PM backup2.bat
```
- U. a. die Identifikationsnummer (*ID*)
Wenn Sie die ID einschließen (*ID*) mit **am** an einer Eingabeaufforderung, Informationen für einen einzelnen Eintrag angezeigt wird, in einem Format ähnlich dem folgenden:
```
Task ID: 1
Status: OK
Schedule: Each F
time of Day: 4:30 PM
Command: net send group leads status due
```
Nachdem Sie einen Befehl mit geplant **am**, vor allem ein Befehl mit Befehlszeilenoptionen, überprüfen Sie die Befehlssyntax durch Eingabe **am** ohne Befehlszeilenoptionen. Wenn die Informationen in der Spalte über die Befehlszeile auf falsch festgelegt ist, löschen Sie den Befehl aus, und geben Sie es erneut ein. Wenn es immer noch falsch ist, geben Sie den Befehl mit weniger Befehlszeilenoptionen ein.
- Anzeigen von Ergebnissen
Befehle mit geplanten **am** als Hintergrundprozesse ausführen. Ausgabe wird nicht auf dem Computerbildschirm angezeigt. Verwenden Sie das Umleitungssymbol (>), um die Ausgabe in eine Datei umzuleiten. Wenn Sie die Ausgabe in eine Datei umleiten, müssen Sie das Symbol mit dem Escapezeichen (^), vor der Umleitungssymbol verwenden, ob es sich bei Verwendung von **am** in der Befehlszeile oder in einer Batchdatei. Geben Sie beispielsweise, um die Ausgabe zu Output.txt umzuleiten:
`at 14:45 c:\test.bat ^>c:\output.txt`
Das aktuelle Verzeichnis für den ausgeführten Befehl ist der Ordner.
- Ändern der Systemzeit
Wenn Sie die Systemzeit auf einem Computer ändern, nachdem Sie einen Befehl zum Ausführen mit geplant **am**, Synchronisieren der **am** Planer mit der überarbeiteten Systemzeit durch Eingabe **am** ohne Befehlszeilenoptionen ein.
- Speichern von Befehlen
Geplante Befehle werden in der Registrierung gespeichert. Daher gehen keine geplante Aufgaben verloren, wenn Sie den Zeitplan neu starten.
- Herstellen einer Verbindung mit Netzwerk-Laufwerke
Verwenden Sie ein umgeleiteten Laufwerk nicht für geplante Aufträge, die Zugriff auf das Netzwerk aus. Der Zeitplan-Dienst möglicherweise nicht im umgeleitete Laufwerk zugreifen oder im umgeleitete Laufwerk möglicherweise nicht vorhanden, wenn ein anderer Benutzer zum Zeitpunkt angemeldet ist, die die geplante Aufgabe ausgeführt wird. Verwenden Sie stattdessen die UNC-Pfade für geplante Aufträge. Zum Beispiel:
`at 1:00pm my_backup \\\server\share`
Verwenden Sie nicht die folgende Syntax, wobei **x:** ist eine Verbindung mit dem vom Benutzer:
`at 1:00pm my_backup x:`
Wenn Sie planen, eine **am** -Befehl, der einen Laufwerkbuchstaben für die Verbindung zu einem freigegebenen Verzeichnis, verwendet eine **am** Befehl aus, um das Laufwerk zu trennen, wenn Sie mit der Verwendung des Laufwerks fertig sind. Wenn das Laufwerk nicht getrennt ist, ist der zugeordnete Laufwerkbuchstabe nicht verfügbar ist, an der Eingabeaufforderung.
- Aufgaben nach 72 Stunden beendet
Standardmäßig werden Aufgaben, die der Zeitplan wird mithilfe der **am** Befehl Beenden nach 72 Stunden. Sie können die Registrierung, um diesen Standardwert ändern, ändern.
1. Starten Sie die Registrierungs-Editor (regedit.exe).
2. Suchen Sie, und klicken Sie auf den folgenden Schlüssel in der Registrierung: **HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Schedule**
3. Klicken Sie auf im Bearbeitungsmenü auf Wert hinzufügen, und fügen Sie dann den folgenden Registrierungswert hinzu: Wertname: AtTaskMaxHours-Datentyp: Reg_DWOrd-Basis: Decimal-Wert-Daten: 0. Der Wert 0 im Datenfeld Wert gibt an, dass, wird nicht beendet. Werte zwischen 1 und 99 gibt die Anzahl der Stunden an.
**Vorsicht**
- Durch eine fehlerhafte Bearbeitung der Registrierung können schwerwiegende Schäden am System verursacht werden. Bevor Sie Änderungen an der Registrierung vornehmen, sollten Sie alle wichtigen Computerdaten sichern.
- Aufgabenplanung und **am** Befehl
Können Sie Ordner "Geplante Aufgaben" anzeigen oder ändern Sie die Einstellungen einer Aufgabe, die mithilfe der **am** Befehl. Wenn Sie planen, eine Aufgabe mit der **am** Befehl, der Task wird im Ordner "Geplante Aufgaben" mit einem Namen wie im folgenden aufgeführt:**at3478**. Aber wenn Sie ändern eine am Task über den Ordner "Geplante Aufgaben", es wird ein Upgrade auf eine normale geplante Aufgabe. Die Aufgabe wird nicht mehr angezeigt, die **am** Befehl ein, und die Konto Einstellung nicht mehr gilt für sie. Sie müssen explizit ein Benutzerkonto und Kennwort für den Task eingeben.
## <a name="examples"></a>Beispiele
Um eine Liste der Befehle, die geplant wird, auf die Marketing-Server anzuzeigen, geben Sie Folgendes ein:
`at \\marketing`
Weitere Informationen zu einem Befehl mit der ID-Nummer 3 auf dem Corp-Server, geben Sie Folgendes ein:
`at \\corp 3`
So planen Sie einen Befehl "net Share" auf dem Corp-Server ausführen, um 8:00 Uhr und umzuleiten. die Wartung-Server, in das freigegebene Verzeichnis "Reports" und die Datei Unternehmen.txt, Typ:
`at \\corp 08:00 cmd /c "net share reports=d:\marketing\reports >> \\maintenance\reports\corp.txt"`
Zum Sichern von der Festplatte des Servers, Marketing, damit ein Bandlaufwerk aus, um Mitternacht alle fünf Tage, erstellen Sie ein Batchprogramm fünften, die die Sicherung Befehle enthält, und klicken Sie dann geplant die Batch-Anwendung ausgeführt haben, geben Sie Folgendes ein:
`at \\marketing 00:00 /every:5,10,15,20,25,30 archive`
Um alle Befehle, die geplant wird, auf dem aktuellen Server zu löschen der **am** Zeitplaninformationen wie folgt:
`at /delete`
Zum Ausführen eines Befehls, der nicht auf eine ausführbare Datei (d. h. .exe) ist, setzen Sie vor dem Befehl **Cmd/c** Cmd.exe wie folgt zu laden:
`cmd /c dir > c:\test.out`
| 116.575 | 595 | 0.539209 | deu_Latn | 0.996927 |
93ac0234c688c39f0c4cd11ec89978d11500be23 | 1,155 | md | Markdown | docs/debugger/debug-interface-access/idiaimagedata-get-virtualaddress.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/debug-interface-access/idiaimagedata-get-virtualaddress.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/debug-interface-access/idiaimagedata-get-virtualaddress.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Idiaımagedata::get_virtualaddress | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology: vs-ide-debug
ms.topic: conceptual
dev_langs:
- C++
helpviewer_keywords:
- IDiaImageData::get_virtualAddress method
ms.assetid: 67ecdc8c-d342-4d0b-b02a-c6b88e22fd02
author: mikejo5000
ms.author: mikejo
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: 2316fffcb0c5e60839cec3ab0908cd3a4faabe14
ms.sourcegitcommit: 240c8b34e80952d00e90c52dcb1a077b9aff47f6
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/23/2018
ms.locfileid: "49863821"
---
# <a name="idiaimagedatagetvirtualaddress"></a>IDiaImageData::get_virtualAddress
Görüntü sanal bellekte konumunu alır.
## <a name="syntax"></a>Sözdizimi
```C++
HRESULT get_virtualAddress (
ULONGLONG* pRetVal
);
```
#### <a name="parameters"></a>Parametreler
`pRetVal`
[out] Görüntünün sanal adres döndürür.
## <a name="return-value"></a>Dönüş Değeri
Başarılı olursa döndürür `S_OK`; Aksi takdirde bir hata kodu döndürür.
## <a name="see-also"></a>Ayrıca Bkz.
[IDiaImageData](../../debugger/debug-interface-access/idiaimagedata.md) | 26.860465 | 80 | 0.744589 | tur_Latn | 0.180363 |
93acd1f0a3d4a47e519f5465df9182675eaba360 | 2,727 | md | Markdown | articles/quickstart/native/ios-swift/04-calling-apis.md | davidrissato/docs | 8842a0b230129017ed4d16068b0343eb5b331f5d | [
"MIT"
] | 4 | 2020-12-05T14:44:29.000Z | 2021-11-17T10:30:27.000Z | articles/quickstart/native/ios-swift/04-calling-apis.md | davidrissato/docs | 8842a0b230129017ed4d16068b0343eb5b331f5d | [
"MIT"
] | 37 | 2019-05-03T05:51:27.000Z | 2021-10-22T03:26:07.000Z | articles/quickstart/native/ios-swift/04-calling-apis.md | davidrissato/docs | 8842a0b230129017ed4d16068b0343eb5b331f5d | [
"MIT"
] | 1 | 2020-12-05T14:35:03.000Z | 2020-12-05T14:35:03.000Z | ---
title: Calling APIs
description: This tutorial will show you how to use Access Tokens to make authenticated API calls.
budicon: 546
topics:
- quickstarts
- native
- ios
- swift
github:
path: 04-Calling-APIs
contentType: tutorial
useCase: quickstart
---
Auth0 provides a set of tools for protecting your resources with end-to-end authentication in your application.
In this tutorial, you'll learn how to get a token, attach it to a request (using the authorization header), and call any API you need to authenticate with.
Before you continue with this tutorial, make sure that you have completed the previous tutorials. This tutorial assumes that:
* You have completed the [Session Handling](/quickstart/native/ios-swift/03-user-sessions) tutorial and you know how to handle the `Credentials` object.
* You have set up a backend application as API. To learn how to do it, follow one of the [backend tutorials](/quickstart/backend).
<%= include('../_includes/_calling_api_create_api') %>
<%= include('../_includes/_calling_api_create_scope') %>
## Get the User's Access Token
To retrieve an Access Token that is authorized to access your API, you need to specify the **API Identifier** value you created in the [Auth0 APIs Dashboard](https://manage.auth0.com/#/apis).
Present the Hosted Login Page:
::: note
Depending on the standards in your API, you configure the authorization header differently. The code below is just an example.
:::
```swift
// HomeViewController.swift
let APIIdentifier = "API_IDENTIFIER" // Replace with the API Identifier value you created
Auth0
.webAuth()
.scope("openid profile")
.audience(APIIdentifier)
.start {
switch $0 {
case .failure(let error):
// Handle the error
print("Error: \(error)")
case .success(let credentials):
// Do something with credentials e.g.: save them.
// Auth0 will automatically dismiss the hosted login page
print("Credentials: \(credentials)")
}
}
```
## Attach the Access Token
To give the authenticated user access to secured resources in your API, include the user's Access Token in the requests you send to the API.
```swift
// ProfileViewController.swift
let token = ... // The accessToken you stored after authentication
let url = URL(string: "your api url")! // Set to your Protected API URL
var request = URLRequest(url: url)
request.addValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
let task = URLSession.shared.dataTask(with: request) { data, response, error in
// Parse the response
}
```
## Send the Request
Send the request you created:
```swift
// ProfileViewController.swift
task.resume()
```
| 32.082353 | 191 | 0.720572 | eng_Latn | 0.974826 |
93aceb98eb8b908c16440d8c6114621ecd85804a | 3,013 | md | Markdown | rustler_mix/README.md | surferlocal/rustler | 6fbaf63369d7229fb54ae699c9fe8afa68ee51d0 | [
"Apache-2.0",
"MIT"
] | null | null | null | rustler_mix/README.md | surferlocal/rustler | 6fbaf63369d7229fb54ae699c9fe8afa68ee51d0 | [
"Apache-2.0",
"MIT"
] | null | null | null | rustler_mix/README.md | surferlocal/rustler | 6fbaf63369d7229fb54ae699c9fe8afa68ee51d0 | [
"Apache-2.0",
"MIT"
] | null | null | null | # Rustler
[](https://hex.pm/packages/rustler)
[](https://hexdocs.pm/rustler/)
[](https://hex.pm/packages/rustler)
[](https://github.com/rusterlium/rustler/blob/master/LICENSE)
[](https://github.com/rusterlium/rustler/commits/master)
The Mix package for [rustler](https://github.com/rusterlium/rustler), a library to write Erlang Native Implemented Functions (NIFs) in [Rust](https://www.rust-lang.org/) programming language.
## Installation
This package is available on [Hex.pm](https://hex.pm/packages/rustler). To install it, add `:rustler` to your dependencies:
```elixir
def deps do
[
{:rustler, "~> 0.23.0"}
]
end
```
## Usage
1. Fetch all necessary dependencies:
```
$ mix deps.get
```
2. Check your installation by showing help from the installed Mix task:
```
$ mix help rustler.new
```
3. Generate the boilerplate for a new Rustler project. Follow the instructions
to configure your project:
```
$ mix rustler.new
```
4. [Load the NIF in your program.](#loading-the-nif).
## Crate configuration
The Rust crate compilation can be controlled via Mix compile-time configuration in `config/config.exs`.
See [configuration options](https://hexdocs.pm/rustler/Rustler.html#module-configuration-options) for more details.
## Loading the NIF
Loading a Rustler NIF is done in almost the same way as normal NIFs.
The actual loading is done by calling `use Rustler, otp_app: :my_app` in the module you want to load the NIF in.
This sets up the `@on_load` module hook to load the NIF when the module is first
loaded.
```elixir
defmodule MyProject.MyModule do
use Rustler,
otp_app: :my_app,
crate: :my_crate
# When loading a NIF module, dummy clauses for all NIF function are required.
# NIF dummies usually just error out when called when the NIF is not loaded, as that should never normally happen.
def my_native_function(_arg1, _arg2), do: :erlang.nif_error(:nif_not_loaded)
end
```
Note that `:crate` is the name in the `[lib]` section of your `Cargo.toml`. The
`:crate` option is optional if your crate and `otp_app` use the same name.
See the `Rustler` module for more information.
## Copyright and License
Licensed under either of
- Apache License, Version 2.0, ([LICENSE-APACHE](../LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](../LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
| 34.238636 | 228 | 0.731829 | eng_Latn | 0.85429 |
93ad27699f6076ad9735b1e83dba444cfd35c688 | 8,842 | md | Markdown | 01-README.md | colhountech/Asteroids2D | 27f8370a268cd17f76e31e275f74f3e25799ed42 | [
"MIT"
] | null | null | null | 01-README.md | colhountech/Asteroids2D | 27f8370a268cd17f76e31e275f74f3e25799ed42 | [
"MIT"
] | null | null | null | 01-README.md | colhountech/Asteroids2D | 27f8370a268cd17f76e31e275f74f3e25799ed42 | [
"MIT"
] | null | null | null | # 01-Game
Open [01-Game.html](01-Game.html) in your browser, and the game should load.
To begin with, this is the simplest game we can build.
Pay attention to the following:
```html
<body onload="init();">
```
When the page is loaded, `init()` is called. That's the next block of code we will look at.
`gameCanvas` is the canvas where our game is drawn. In our game it's the black background. It also tells us how big we want our display; for a game, this would usually be the size of the full screen. You can see here it's `black`. Try changing it to `navy`.
In our Asteroids game, we have 3 important "things" or "objects" (I much prefer "things" to "objects", but apparently "objects" won, and that's what most people call "things" in code). So, the 3 important objects are:
* A `canvas` which is what we can see on the screen.
* A `stage` is where everything goes before it's drawn on that canvas.
* A `ship` which we will add to the stage and move around the stage so we can see it on this canvas.
All easy so far? Good.
So, we are using a toolkit called `EaselJS`, and with it we will add the `ship` to the `stage` and move it around the `stage`, and then EaselJS will draw this on our `canvas` display for us, so we don't have to get caught up in yucky code :)
You can see where we define these 3 things below:
```js
var canvas; //Main canvas
var stage; //Main display stage
var ship; //the actual ship
```
See this line?
```js
<script type="text/javascript" src="Ship.js"></script>
```
To keep things simple, we already have the code for the ship in a separate file - `Ship.js` - so for now we just accept that the ship knows how to draw itself and move; we just want to give it commands to move around based on the keys we press. If you are really curious (and you should be), you can have a peek at the [Ship.js](Ship.js) file and play with `MAX_VELOCITY` or `MAX_THRUST`, and see what happens to our game.
Next we need to define what commands are going to move our ship. Let's use the arrow keys, and we will define these in code too. Every key on the keyboard has a special code, so rather than trying to remember these codes, we will just give them nice, easy-to-remember names:
```js
var KEYCODE_UP = 38; //useful keycode
var KEYCODE_LEFT = 37; //useful keycode
var KEYCODE_RIGHT = 39; //useful keycode
```
Computers can read key codes in a fraction of a second, so we also need to check whether the keys are being held down or released. We will only move the ship while its key is held down: when a key is pressed we will activate a command, and when a key is released we will deactivate that command.
We have 3 commands that we will map to keys:
```js
var cmdLeft; //is the user holding a turn left command
var cmdRight; //is the user holding a turn right command
var cmdFwd; //is the user holding a forward command
```
The next 2 lines need a little bit of explanation. Have you ever put on some toast and asked Alexa to set a timer to remind you to check that your toast is done? You might say:
>"Hey, Alexa! Set a timer for 2 minutes"
In 2 minutes, Alexa comes back and starts to ring an alarm. Very useful, so your toast doesn't get cold.
In code, these alarms have a funny name: **handlers**, because they handle events. They are just alarms that tell us something has happened, like a key being pressed down or a key being lifted up.
That's what the next 2 lines are about:
```js
//register key functions
document.onkeydown = handleKeyDown;
document.onkeyup = handleKeyUp;
```
* The first line tells us to run the `handleKeyDown` code whenever a key is being pressed down.
* The next line tells us to run the `handleKeyUp` code whenever a key is being lifted up.
So, in our handlers, all we need to do is check whether one of our special keys is being pressed - the left arrow, right arrow, or up arrow key - and then map it to a command: turn left, turn right, or move forward. If we wanted to add more commands later (like a fire button) and map them to keys, we would add them to our handlers too.
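Here is a minimal sketch of what those two handlers might look like (the real `handleKeyDown`/`handleKeyUp` in the game's source may differ slightly; this just shows the key-to-command mapping):

```js
// Sketch only: turn a command on while its key is held down.
function handleKeyDown(e) {
  switch (e.keyCode) {
    case KEYCODE_LEFT:  cmdLeft = true;  break;
    case KEYCODE_RIGHT: cmdRight = true; break;
    case KEYCODE_UP:    cmdFwd = true;   break;
  }
}

// Sketch only: turn the command back off when the key is released.
function handleKeyUp(e) {
  switch (e.keyCode) {
    case KEYCODE_LEFT:  cmdLeft = false;  break;
    case KEYCODE_RIGHT: cmdRight = false; break;
    case KEYCODE_UP:    cmdFwd = false;   break;
  }
}
```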
OK. That's pretty much it. Not much there. The rest of the code is organized into named blocks that we can use whenever we need them. They are called `function`s, and they all look like this:
```js
function restart() {
// some code
}
```
A `function` really is nothing more than a label or name for a block of code. It just makes the code easier to read and understand, and using `function`s is one of the important habits of a good programmer.
OK. Let's look at the rest of the functions in our Game.
### init()
The `init()` function always runs first when you load the webpage (actually, this is another handler - it's attached to the event that runs every time a webpage loads). So far our `init()` function doesn't do much.
```js
canvas = document.getElementById("gameCanvas");
stage = new createjs.Stage(canvas);
restart();
```
Pretty simple, really: find the `canvas` to draw on, create a `stage`, and then `restart()` the game. Enough said.
### restart()
The `restart()` function is just a block of code that we will run every time we restart the game. This will be useful later when our game ends, because we won't need to reload the webpage to restart the game.
`restart()` only does 4 simple things at the start of our game:
It clears the `stage` of everything, ready for a new game. In the code world, we usually call a thing that looks after other things a `parent`, and the things it looks after are its `children`. In our game the only child will be a single ship, but later we will have asteroids too, so all of these together are the `children` on the stage. You could think of the stage as a parent acting as a stage manager: it manages all the things on the stage, such as ships and asteroids - its children.
So, every time we start the game we need to remove all the children from the stage.
```js
//hide anything on stage
stage.removeAllChildren();
```
Next, we add the Ship to the stage. We place it in the middle of the stage, which is half the width and half the height of the canvas.
`x` is how far along in the left-to-right direction, and `y` is how far down in the top-to-bottom direction, starting from the top-left corner at `x = 0` and `y = 0`.
```js
//create the player
ship = new Ship();
ship.x = canvas.width / 2;
ship.y = canvas.height / 2;
//ensure stage is blank and add the ship
stage.clear();
stage.addChild(ship);
```
We also reset all the key commands. It's good practice to reset all our commands at the start of our game.
```js
//reset key presses
cmdLeft = cmdRight = cmdFwd = false;
```
#### start the game timer
And finally, we start the `tick()` game timer. This is THE most important part of any real-time game. Sometimes it's called the game loop, and it's what gives us the `fps` number when we play games. For example, if a game has an `fps` of 100, that means it's running the game loop 100 times every second: 100 frames per second = 100 fps. You have probably heard about fps if you are a gamer.
Every time the game loop `tick()` function is called, it applies a tiny update to our ship, such as moving it. Normally, we want our game loop to run at least 24 times a second. Any faster than that and the human eye doesn't really notice, unless you have a very fast-moving game like Fortnite or Black Ops - in those games, the higher the fps, the smoother the game looks when you move around. For our game we will let `EaselJS` look after the frame rate for us, which is normally 30 frames per second.
Here is how to set up the game loop `tick()` function:
```js
//start game timer
if (!createjs.Ticker.hasEventListener("tick")) {
createjs.Ticker.addEventListener("tick", tick);
}
```
We check whether the game loop `tick()` function is registered as a listener for the Ticker's "tick" event, and if it's not, we add it.
Now, let's look at what happens in the `tick()` function:
### tick()
`tick()` is our game loop. This is where all the magic happens in every game like this. Our game is very simple right now: all we do is check which commands are active (based on the keys pressed or released in the handlers above) and apply them to the ship. We then call `tick()` on each child (in this case, just the ship) and finally ask the stage to display the updates on the canvas.
```js
//handle turning
if ( cmdLeft ) {
ship.rotateLeft();
} else if ( cmdRight ) {
ship.rotateRight();
}
//handle thrust
if (cmdFwd) {
ship.accelerate();
}
//call sub ticks
ship.tick(event);
// update stage
stage.update(event);
```
That's everything. Now go back and read through the code and see if
everything makes sense, and we are ready to start adding back some cool stuff to build out our game.
| 48.31694 | 514 | 0.72755 | eng_Latn | 0.999774 |
93ad561511ff683ef376840024dcd339927fea93 | 674 | md | Markdown | commands/json/assertValue(json,jsonpath,expected).md | irabashetti/documentation | 9f03494b31a29a3ba0cfafad4acf9c8ede115e6b | [
"Apache-2.0"
] | null | null | null | commands/json/assertValue(json,jsonpath,expected).md | irabashetti/documentation | 9f03494b31a29a3ba0cfafad4acf9c8ede115e6b | [
"Apache-2.0"
] | null | null | null | commands/json/assertValue(json,jsonpath,expected).md | irabashetti/documentation | 9f03494b31a29a3ba0cfafad4acf9c8ede115e6b | [
"Apache-2.0"
] | null | null | null | ---
layout: default
title: assertValue(json,jsonpath,expected)
parent: json
tags: command json jsonpath
comments: true
---
### Description
This command asserts that `jsonpath` points to an element (or the first element) in `json` whose value matches that of `expected`.
### Parameters
- **json** - the JSON document or file
- **jsonpath** - the path to describe the JSON element (or the first element) in question
- **expected** - expected value of the matching JSON element
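For instance (hypothetical values, not taken from the screenshots below), given this JSON document:

```json
{ "store": { "book": [ { "title": "Sword of Honour", "price": 12.99 } ] } }
```

a `jsonpath` of `$.store.book[0].price` with an `expected` value of `12.99` would pass, while `13.99` would fail the assertion.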
### Example
**Book Store Data in JSON**<br/>

**Script**:<br/>

**Output**:<br/>

| 22.466667 | 116 | 0.718101 | eng_Latn | 0.785223 |
93aebbc95d5bab274a9f7612644c3fa32a33a138 | 100 | md | Markdown | README.md | kwmsmith/h3-cython | 00960b1da2abafe6bf360b669855a55015db5a15 | [
"Apache-2.0"
] | 4 | 2019-03-31T23:38:18.000Z | 2019-05-28T16:52:22.000Z | README.md | kwmsmith/h3-cython | 00960b1da2abafe6bf360b669855a55015db5a15 | [
"Apache-2.0"
] | null | null | null | README.md | kwmsmith/h3-cython | 00960b1da2abafe6bf360b669855a55015db5a15 | [
"Apache-2.0"
] | null | null | null | # h3-cython
## Cython bindings to the H3 geospatial indexing library
https://uber.github.io/h3/#/
| 16.666667 | 56 | 0.73 | eng_Latn | 0.325535 |
93affe1feb2c4e86235670dbc3adba246d2aa822 | 18 | md | Markdown | _includes/01-name.md | Norvys/markdown-portfolio | 8bbaec0188dde5269efe925726b88f152f8f020a | [
"MIT"
] | null | null | null | _includes/01-name.md | Norvys/markdown-portfolio | 8bbaec0188dde5269efe925726b88f152f8f020a | [
"MIT"
] | 11 | 2021-07-09T19:50:35.000Z | 2021-07-10T00:25:36.000Z | _includes/01-name.md | Norvys/markdown-portfolio | 8bbaec0188dde5269efe925726b88f152f8f020a | [
"MIT"
] | null | null | null | # Norvys González
| 9 | 17 | 0.777778 | slk_Latn | 0.395075 |
93b0acbc5aebd298d15599840e8ba11a75bbbb99 | 667 | md | Markdown | README.md | solarwinds/fosite-example | e734d982e48bfd15fb0ce3b772fa8b52697a91de | [
"Apache-2.0"
] | 54 | 2017-05-18T05:06:58.000Z | 2022-03-18T23:21:44.000Z | README.md | solarwinds/fosite-example | e734d982e48bfd15fb0ce3b772fa8b52697a91de | [
"Apache-2.0"
] | 26 | 2017-03-15T16:48:46.000Z | 2020-09-12T11:01:37.000Z | README.md | solarwinds/fosite-example | e734d982e48bfd15fb0ce3b772fa8b52697a91de | [
"Apache-2.0"
] | 39 | 2017-04-18T01:19:44.000Z | 2022-02-18T06:27:30.000Z | # ORY Fosite Example Server
[](https://travis-ci.org/ory/fosite-example)
ORY Fosite is the security-first OAuth2 & OpenID Connect framework for Go - built simple, powerful, and extensible. This repository contains an exemplary HTTP server using ORY Fosite for serving OAuth2 requests.
## Install and run
The Fosite example server requires a [Go toolchain](https://golang.org/dl/) recent enough to support Go modules, as it uses Go modules for dependency management.
Once installed, run the demo:
```
$ go get -d github.com/ory/fosite-example
$ cd $GOPATH/src/github.com/ory/fosite-example
$ go run main.go
```
| 39.235294 | 209 | 0.76012 | eng_Latn | 0.922411 |
93b267d114a773e5cf280313b03ddd471da82765 | 3,309 | md | Markdown | generated-sources/bash/mojang-api/docs/NameHistoryApi.md | AsyncMC/Mojang-API-Libs | b01bbd2bce44bfa2b9ed705a128cf4ecda077916 | [
"Apache-2.0"
] | null | null | null | generated-sources/bash/mojang-api/docs/NameHistoryApi.md | AsyncMC/Mojang-API-Libs | b01bbd2bce44bfa2b9ed705a128cf4ecda077916 | [
"Apache-2.0"
] | null | null | null | generated-sources/bash/mojang-api/docs/NameHistoryApi.md | AsyncMC/Mojang-API-Libs | b01bbd2bce44bfa2b9ed705a128cf4ecda077916 | [
"Apache-2.0"
] | null | null | null | # NameHistoryApi
All URIs are relative to the configured base URL of the API.
Method | HTTP request | Description
------------- | ------------- | -------------
[**findUniqueIdsByName**](NameHistoryApi.md#findUniqueIdsByName) | **POST** /profiles/minecraft | Find the current UUID of multiple players at once
[**getNameHistory**](NameHistoryApi.md#getNameHistory) | **GET** /user/profiles/{stripped_uuid}/names | Gets the full player's name history
[**getUniqueIdByName**](NameHistoryApi.md#getUniqueIdByName) | **GET** /users/profiles/minecraft/{username} | Find the UUID by name
## **findUniqueIdsByName**
Find the current UUID of multiple players at once
Find the current player name, UUID, demo status and migration flag for each of the given player names. The "at" parameter is not supported. Players that are not found are not returned. If no players are found, an empty array is returned.
### Example
```bash
findUniqueIdsByName
```
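For reference, the raw HTTP request behind this operation looks roughly like the following (a sketch assuming the public Mojang base URL `https://api.mojang.com`):

```bash
curl -X POST "https://api.mojang.com/profiles/minecraft" \
  -H "Content-Type: application/json" \
  -d '["jeb_", "Notch"]'
```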
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**requestBody** | [**array[string]**](array.md) | Array with the player names |
### Return type
[**array[CurrentPlayerIDs]**](CurrentPlayerIDs.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
## **getNameHistory**
Gets the full player's name history
### Example
```bash
getNameHistory stripped_uuid=value
```
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**strippedUuid** | **string** | The player UUID without hyphens | [default to null]
### Return type
[**array[NameChange]**](NameChange.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not Applicable
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
## **getUniqueIdByName**
Find the UUID by name
Find the current player name, UUID, demo status and migration flag by the current player name or at a given time.
### Example
```bash
getUniqueIdByName username=value at=value
```
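The equivalent raw request, under the same base-URL assumption as above:

```bash
curl "https://api.mojang.com/users/profiles/minecraft/jeb_"
```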
### Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
 **username** | **string** | The username at a given time, or at present if "at" is not sent | [default to null]
 **at** | **integer** | Find the username at a given time; 0 selects the original name. However, this only works if the name was changed at least once, or if the account is legacy. The time is a UNIX timestamp (without milliseconds) | [optional] [default to null]
### Return type
[**CurrentPlayerIDs**](CurrentPlayerIDs.md)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: Not Applicable
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
| 30.925234 | 266 | 0.662134 | eng_Latn | 0.762109 |
93b2a71d10ddca5412ec1d6d3341859177b24145 | 1,062 | md | Markdown | _posts/2018-06-06-practic-serial-notes.md | NikoTung/NikoTung.github.io | d4063af58d45d927a996514a0a336a3fb9dc1b0a | [
"MIT"
] | 1 | 2015-11-28T15:23:58.000Z | 2015-11-28T15:23:58.000Z | _posts/2018-06-06-practic-serial-notes.md | NikoTung/NikoTung.github.io | d4063af58d45d927a996514a0a336a3fb9dc1b0a | [
"MIT"
] | 2 | 2015-06-04T09:16:49.000Z | 2015-08-03T09:42:16.000Z | _posts/2018-06-06-practic-serial-notes.md | NikoTung/NikoTung.github.io | d4063af58d45d927a996514a0a336a3fb9dc1b0a | [
"MIT"
] | 1 | 2015-12-04T05:25:47.000Z | 2015-12-04T05:25:47.000Z | ---
layout: post
title: "刻意练习"
description: ""
category: "2018-06"
tags: [deliberate practice, growth, learning]
---
This week I have been reading a set of books on deliberate practice - *Deliberate Practice* (《刻意练习》), *The Way of Learning* (《学习之道》), and *The Practicing Mind* (《练习心态》) - and here I tidy up the main ideas from these three books.
Humans are highly adaptable. For example, as a baby grows, the strength its parents use to pick it up slowly adapts as well, without them noticing. After someone unexpectedly goes blind, the brain reorganizes itself - which is why blind people's hearing and touch become unusually well developed. Yet humans also lean toward stability, so stepping out of the comfort zone is extremely important in deliberate practice: it triggers the brain to rebalance and change its structure, and the greater the challenge, the greater the change.
Precisely because the brain leans toward stability, it also gives rise to procrastination - we are controlled by this little demon and unable to act; it is addictive and becomes a habit. When dealing with the procrastination habit, pay attention to these four parts:
* Cue: just as you start studying, your phone screen suddenly lights up with a breaking-news notification; you open the phone, and the whole evening is gone.
* Routine: your reaction after seeing the cue, such as opening the phone to read that explosive piece of gossip.
* Reward: the feeling of brief pleasure it gives you.
* Belief: the power of a habit comes from your strong belief in it - "if I put it off until the last hour, I will finish the task very efficiently."
Affirming yourself along the way is very important: believe in yourself, and focus on method and practice. In *Deliberate Practice*, the author argues that there are no born prodigies. Take Mozart: we habitually think of him as a genius who could play all sorts of instruments at a very young age, while overlooking that his father took him on tour around Europe from childhood, so he was immersed in music and trained at the piano from an early age. What the author emphasizes instead is long-term deliberate practice.
Deliberate practice must be purposeful practice, not merely an accumulation of time - not simply 10,000 hours. It needs the following characteristics:
* It has specific goals
* It demands focus
* It includes feedback
* It steps out of the comfort zone
The books offer many methods for practicing:
* *The Way of Learning* (《学习之道》)
    * Break tasks down, in the form of a todo list
    * Use a pomodoro timer to improve focus
    * Keep an activity log to manage your time
    * The visual-imagery memory technique
    * The memory palace technique
    * The story memory technique
* *Deliberate Practice* (《刻意练习》)
    * Apply the principles of deliberate practice to the fullest
    * Identify the gaps between yourself and outstanding performers
    * Find your own mentor
    * Use deliberate practice in work and in life
    * Map out the route that outstanding performers took
* *The Practicing Mind* (《练习心态》): this book discusses the mindset of practice from a higher level
    * Focus on being process-oriented rather than goal-oriented
    * Mind the perspective from which you view things - seen from another angle, a weakness becomes a strength
    * The 4S method (simple, small, short, slow)
    * The DOC method (do it, observe, correct)
93b3c06f08c250a6d5012d38ade473a0481066fb | 1,705 | md | Markdown | content/home/experience.md | abhinaukumar/academic-kickstart | d1b25f76f9c9d1928367fca8e541f3ccf1522d71 | [
"MIT"
] | null | null | null | content/home/experience.md | abhinaukumar/academic-kickstart | d1b25f76f9c9d1928367fca8e541f3ccf1522d71 | [
"MIT"
] | null | null | null | content/home/experience.md | abhinaukumar/academic-kickstart | d1b25f76f9c9d1928367fca8e541f3ccf1522d71 | [
"MIT"
] | null | null | null | +++
# Experience widget.
widget = "experience" # See https://sourcethemes.com/academic/docs/page-builder/
headless = true # This file represents a page section.
active = true # Activate this widget? true/false
weight = 100 # Order that this section will appear.
title = "Experience"
subtitle = ""
# Date format for experience
# Refer to https://sourcethemes.com/academic/docs/customization/#date-format
date_format = "Jan 2006"
# Experiences.
# Add/remove as many `[[experience]]` blocks below as you like.
# Required fields are `title`, `company`, and `date_start`.
# Leave `date_end` empty if it's your current employer.
# Begin/end multi-line descriptions with 3 quotes `"""`.
[[experience]]
title = "Graduate Research Assistant"
company = "Laboratory for Image and Video Engineering"
company_url = "hhtps://live.ece.utexas.edu"
location = "Austin, TX"
date_start = "2019-09-01"
date_end = ""
description = """Working with Prof. Alan Bovik on estimating and optimizing the subjective quality of images and videos. """
[[experience]]
title = "Summer Research Intern"
company = "Carnegie Mellon University"
company_url = "https://cmu.edu"
location = "Pittsburgh, PA"
date_start = "2018-05-21"
date_end = "2018-07-31"
description = """Worked with Prof. Katia Sycara to develop a safe Reinforcement Learning algorithm inspired by evidence accumulation in the brain."""
[[experience]]
title = "Summer Intern"
company = "Uurmi Systems, C/o Mathworks Inc."
company_url = ""
location = "Hyderabad, India"
date_start = "2017-05-08"
date_end = "2017-07-21"
description = """Worked on optimizing stereo matching algorithms for GPUs using CUDA."""
+++
| 35.520833 | 151 | 0.71261 | eng_Latn | 0.880772 |
93b513c795b6a7fe33a6289957bb0cae578c6622 | 586 | md | Markdown | README.md | windhxs/zju_cst_ai_security | c201fceee6d13e658cd7a381b792a244094c58f9 | [
"MIT"
] | null | null | null | README.md | windhxs/zju_cst_ai_security | c201fceee6d13e658cd7a381b792a244094c58f9 | [
"MIT"
] | null | null | null | README.md | windhxs/zju_cst_ai_security | c201fceee6d13e658cd7a381b792a244094c58f9 | [
"MIT"
] | null | null | null | # zju_cst_ai_security
# Introduction
Coursework for the 2021 AI Security course at the Zhejiang University School of Software Technology: CIFAR10 image classification implemented with a CNN.
# Dataset
[CIFAR10](http://www.cs.toronto.edu/~kriz/cifar.html)
Download and extract it into the `data/` directory. The download may be quite slow; searching Zhihu for related tips can help work around this.
# Model
A simplified version of [ResNet](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf)
# Running
## Training
Single GPU:
```
python main.py --cuda --gpuid 0 --train --model_path MODEL_DIR
```
Multiple GPUs:
```
python main.py --cuda --gpuid [gpuid list] --train --model_path MODEL_DIR
```
## Testing
```
python main.py --cuda --gpuid 0 --model_path MODEL_DIR
```
Do not run the test with multiple GPUs.
# Results
Accuracy: 84.39%
| 13.627907 | 125 | 0.711604 | yue_Hant | 0.482998 |
93b536475ee0b410bcf75dc5a39ca94246d740eb | 6,166 | md | Markdown | about/acknowledgments.md | hoytpr/bioinformatics-semester | e611c00fb33ecc0ccecd56161caae00217b7a3d0 | [
"CC-BY-4.0"
] | null | null | null | about/acknowledgments.md | hoytpr/bioinformatics-semester | e611c00fb33ecc0ccecd56161caae00217b7a3d0 | [
"CC-BY-4.0"
] | null | null | null | about/acknowledgments.md | hoytpr/bioinformatics-semester | e611c00fb33ecc0ccecd56161caae00217b7a3d0 | [
"CC-BY-4.0"
] | 2 | 2019-02-04T19:57:09.000Z | 2022-03-30T22:17:39.000Z | ---
layout: page
title: Acknowledgments
---
#### Primary Contributors
Although this course is taught at OSU by [Dr. Peter R. Hoyt](http://biochemistry.okstate.edu/faculty/dr.-peter-hoyt-1), the original course materials and website
design have been primarily developed and
implemented by [Ethan White](http://ethanwhite.org) and [Zachary Brym](http://zackbrym.weecology.org/) in cooperation with [Data Carpentry](https://datacarpentry.org/). See our [contributors page](https://github.com/datacarpentry/semester-biology/graphs/contributors) for more
details. More recent genomics lessons from the Carpentries were developed thanks to
[genomics data experts](https://github.com/datacarpentry/organization-genomics/graphs/contributors)
, [genomics wrangling experts](https://github.com/datacarpentry/wrangling-genomics/graphs/contributors)
and [shell experts](https://github.com/datacarpentry/shell-genomics/graphs/contributors) who should
be appreciated. You can cite the material used in this course using the following DOI:
[](https://doi.org/10.5281/zenodo.3260609)
[](https://doi.org/10.5281/zenodo.3260560)
[](https://doi.org/10.5281/zenodo.3260317)
[](https://doi.org/10.5281/zenodo.3260309)
Annotations indicating specific contributors are scattered throughout the
lessons, but there are inevitably some contributors not mentioned, and if they contact me
I'll gladly add their names or links to this list. The curricula are made available through
the [Creative Commons License](http://creativecommons.org/licenses/by/4.0/)
#### Philosophy
This particular course by Dr. Hoyt emphasizes making the curricula more
understandable for a specific target
audience of Life Scientists, who are
**truly beginners**, and who want or need
to be better. To strive toward this
goal Dr. Hoyt will embrace The Carpentries active learning pedagogy and continue to
improve the course through practices of education experts including
[Dr. Rochelle Tractenberg](https://cbpr.georgetown.edu/rochelle_tractenberg/#)
who developed the [Mastery Rubric for Bioinformatics](https://www.biorxiv.org/content/10.1101/655456v1) [published in PLOS ONE](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0225256)
and described in an [interview](https://lifescitrainers.org/2019/06/25/bioinformatics-mastery-rubric-interview-with-rochelle-tractenberg/)
by [Jason Williams](https://www.linkedin.com/in/jason-williams-52847233) on the
[Life Science Trainers website](https://lifescitrainers.org). See also her [poster](https://f1000research.com/posters/8-1160) and [slides](https://f1000research.com/slides/8-1161)
from ISMB/ECCB 2019, available through [F1000](https://f1000research.com/) and [The GOBLET Foundation](https://www.mygoblet.org/).
#### Diversity
Oklahoma State University is proudly committed to promoting diversity and
inclusion within higher education. This course follows those proud commitments.
For more information inclusion and diversity awards, enrollment, and resources,
please read about [OSU's Institutional Diversity Office](https://diversity.okstate.edu/).
The Vice President for Institutional Diversity and Chief Diversity Officer at OSU
is [Dr. Jason F. Kirksey](https://diversity.okstate.edu/dr-jason-f-kirksey).
#### Funding
Dr. Hoyt is generously supported by the
[Department of Biochemistry and Molecular Biology](http://biochemistry.okstate.edu/) and
the [Office of the Vice President for Research](https://research.okstate.edu/) at
[Oklahoma State University](https://go.okstate.edu/).
The initial development of this course, was supported by the Gordon
and Betty Moore Foundation's Data-Driven Discovery Initiative through grants
to Ethan White [GBMF4563](https://www.moore.org/grants/list/GBMF4563) and [GBMF4855](https://www.moore.org/grants/list/GBMF4855), and by an [NSF CAREER award](http://nsf.gov/awardsearch/showAward?AWD_ID=0953694).
As these curricula become more diverse it will be difficult to maintain a
list of all funding sources, but we will try.
#### The Carpentries
Dr. Hoyt and [OSU](http://info.library.okstate.edu/c.php?g=902101) participate in The Carpentries community.
[The Carpentries](https://carpentries.org/) are an organization that teaches foundational
coding and data science skills to researchers worldwide. Software Carpentry,
Data Carpentry, and Library Carpentry workshops are based on our lessons.
To participate you are expected to follow the [Code of Conduct](http://docs.carpentries.org/topic_folders/policies/code-of-conduct.html).
We are a diverse, global community of volunteers. Our community includes
[Instructors](https://carpentries.org/instructors/), helpers, Trainers,
[Maintainers](https://carpentries.org/maintainers/), Mentors, community champions,
member organisations, supporters, workshop organisers, staff and a whole lot more.
#### Data Carpentry
[Data Carpentry](http://datacarpentry.org/) is a web and workshop based organization
that is designed to teach basic computing concepts, skills, and tools for working
with scientific data. The resources provided on this site are being developed
in association with Data Carpentry.
#### Software Carpentry
[Software Carpentry](http://software-carpentry.org) has been teaching scientists and engineers the concepts, skills,
and tools they need to use and build software more productively since 1998. All
of the content is freely available under a Creative Commons license (the same
one we use here). The existence of this content saves me a massive
amount of time and effort, and inspired me to show other Life Scientists that
programming knowledge is essential, and that programming well is achievable.
#### Infrastructure
The site is built using [Jekyll](http://jekyllrb.com/) with the [Hyde](http://hyde.getpoole.com/) theme from [Poole](http://getpoole.com/)
and icons from [Font Awesome](http://fontawesome.io) by Dave Gandy.
| 65.595745 | 277 | 0.785598 | eng_Latn | 0.937518 |
93b5ad5360e1c36f3b422fbf5027c4a83e8a5103 | 82 | md | Markdown | README.md | 9fv/ansible-role-debian-docker-ce | d97a0da32d0e0d35d90567c650e0e2be9aa7cedf | [
"MIT"
] | null | null | null | README.md | 9fv/ansible-role-debian-docker-ce | d97a0da32d0e0d35d90567c650e0e2be9aa7cedf | [
"MIT"
] | null | null | null | README.md | 9fv/ansible-role-debian-docker-ce | d97a0da32d0e0d35d90567c650e0e2be9aa7cedf | [
"MIT"
] | null | null | null | # ansible-role-debian-docker-ce
A role for Ansible to install Docker CE on Debian
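A minimal playbook sketch (the role name below is assumed from this repository's name; adjust it to match how you installed the role):

```yaml
# site.yml -- sketch only; role name assumed, not taken from role metadata.
- hosts: all
  become: true
  roles:
    - ansible-role-debian-docker-ce
```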
| 27.333333 | 49 | 0.792683 | eng_Latn | 0.777897 |
93b5d9d2c3f4475f68a388bcc4b60938daa9b843 | 3,369 | md | Markdown | business-central/LocalFunctionality/Germany/how-to-declare-vat-vies-tax.md | ACPJanousek/dynamics365smb-docs | 96d77ae7b4b17ceb48135986bcbf5fd1842780f8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-03-22T13:54:33.000Z | 2019-03-22T13:54:33.000Z | business-central/LocalFunctionality/Germany/how-to-declare-vat-vies-tax.md | TRASERDanielGorski/dynamics365smb-docs | 355a63ff2f5d723977930b9869d49428400211be | [
"CC-BY-4.0",
"MIT"
] | null | null | null | business-central/LocalFunctionality/Germany/how-to-declare-vat-vies-tax.md | TRASERDanielGorski/dynamics365smb-docs | 355a63ff2f5d723977930b9869d49428400211be | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-08-13T08:56:06.000Z | 2020-08-13T08:56:06.000Z | ---
title: How to Declare VAT-VIES Tax
description: The VAT-VIES declaration report allows you to submit information about sales transactions with other European Union (EU) countries/regions to the customs and tax authorities' list system.
services: project-madeira
documentationcenter: ''
author: SorenGP
ms.service: dynamics365-business-central
ms.topic: article
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords:
ms.date: 10/01/2018
ms.author: sgroespe
---
# Declare VAT-VIES Tax
[!INCLUDE[d365fin](../../includes/d365fin_md.md)] includes the VAT-VIES declaration report, which you can use to submit information about sales transactions with other European Union (EU) countries/regions to the customs and tax authorities' list system. The report displays information in the same format that is used in the customs and tax authorities' declaration list.
Depending on the volume of sales of goods or services to other EU countries/regions, you must submit monthly, bi-monthly, or quarterly declarations. If your company has sales of more than 100,000 euros per quarter, you must submit a monthly declaration. If your company has sales of less than 100,000 euros per quarter, you must submit a quarterly declaration. For more information, see the [BZSt website](https://go.microsoft.com/fwlink/?LinkId=204368).
The report is based on the VAT Entry table.
## To declare VAT-VIES tax
1. Choose the  icon, enter **VAT-Vies Declaration Tax – DE**, and then choose the related link.
2. On the **VAT-Vies Declaration Tax – DE** page, on the **Options** FastTab, fill in the fields as described in the following table.
|Field|Description|
|---------------------------------|---------------------------------------|
|**Reporting Period**|Select the time period that the report applies to. This can be a month, a two-month period, a quarter, or the calendar year.|
|**Date of Signature**|Enter the date on which the VAT-VIES declaration is sent.|
|**Corrected Notification**|If selected, this field indicates that this is a corrected version of an already delivered VAT-VIES declaration.|
|**Show Amounts in Add. Reporting Currency**|If selected, the amounts of the report will be in the additional reporting currency. For more information, see Additional Reporting Currency.|
|**Change to monthly reporting**|If selected, your company has sales of more than 100,000 euros per quarter and you must migrate from a quarterly report to a monthly report. **Important:** Only select this field the first time that you submit a monthly report.|
|**Revoke monthly reporting**|If selected, you want to switch from monthly reporting to another reporting period.<br /><br /> For example, if you have previously submitted monthly declarations but the EU sales are less than 100,000 euros per quarter, select this field and then select one of the quarters in the **Reporting Period** field.|
3. On the **VAT Entry** FastTab, select the appropriate filters.
> [!NOTE]
> In order to run this report, you must select the **Posting Date** as a filter, and enter the posting date value.
## See Also
[VAT Reporting](vat-reporting.md)
| 73.23913 | 456 | 0.725735 | eng_Latn | 0.992848 |
93b6adbbff56dbc20d24567594a097c38a39b0e7 | 389 | md | Markdown | README.md | hewison-chris/serialization | 8130df762c7e78880adba276f5e7fc791c92b90a | [
"MIT"
] | null | null | null | README.md | hewison-chris/serialization | 8130df762c7e78880adba276f5e7fc791c92b90a | [
"MIT"
] | null | null | null | README.md | hewison-chris/serialization | 8130df762c7e78880adba276f5e7fc791c92b90a | [
"MIT"
] | null | null | null | # serialization
Library for cross-platform serialization and deserialization of data
## [`agora.crypto.Serializer`](https://github.com/bpfkorea/serialization/blob/v0.x.x/source/agora/serialization/Serializer.d)
This module exposes two main categories of functions for binary serialization:
- `serializeFull` / `serializePart` for serialization
- `deserializeFull` for deserialization
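A rough round-trip sketch (the struct and the import path are assumptions for illustration; the exact signatures live in the module linked above):

```d
// Sketch only: serialize a value to bytes and read it back.
import agora.serialization.Serializer;

struct Point { int x; int y; }

void main ()
{
    ubyte[] data = serializeFull(Point(1, 2)); // to a binary blob
    auto p = deserializeFull!Point(data);      // and back again
    assert(p == Point(1, 2));
}
```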
| 48.625 | 125 | 0.796915 | eng_Latn | 0.610447 |
93b6d77545ebae040a32f21c7279d67f00b5ea62 | 1,536 | md | Markdown | _includes/v20.2/orchestration/kubernetes-limitations.md | nickvigilante/docs | 2e35591266c757265b3e477ea009b1099cd0786f | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | _includes/v20.2/orchestration/kubernetes-limitations.md | nickvigilante/docs | 2e35591266c757265b3e477ea009b1099cd0786f | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | _includes/v20.2/orchestration/kubernetes-limitations.md | nickvigilante/docs | 2e35591266c757265b3e477ea009b1099cd0786f | [
"CC-BY-4.0",
"BSD-3-Clause"
] | null | null | null | #### Kubernetes version
Kubernetes 1.18 or higher is required in order to use our current configuration files. If you need to run on an older version of Kubernetes, we keep configuration files in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)).
#### Helm version
Helm 3.0 or higher is required when using our instructions to [deploy via Helm](orchestrate-cockroachdb-with-kubernetes.html#step-2-start-cockroachdb).
#### Resources
When starting Kubernetes, select machines with at least **4 vCPUs** and **16 GiB** of memory, and provision at least **2 vCPUs** and **8 Gi** of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide, and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload.
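For illustration, those per-pod minimums correspond to a resource stanza along these lines in the CockroachDB StatefulSet spec (a sketch, not a copy of the shipped configuration files):

```yaml
resources:
  requests:
    cpu: "2"
    memory: "8Gi"
  limits:
    cpu: "2"
    memory: "8Gi"
```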
#### Storage
At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider a [DaemonSet](kubernetes-performance.html#running-in-a-daemonset) deployment until StatefulSets support node-local storage.
| 96 | 488 | 0.800781 | eng_Latn | 0.987637 |
93b6e5916b04a5c9741e73349697f145f6e7cc0f | 310 | md | Markdown | _posts/2021-01-18-INNODB_SYS_FOREIGN_COLS.md | daohengshangqian/blog.io | 006c704d3a0517fb85ea5c640125521024bd1dbd | [
"Apache-2.0"
] | null | null | null | _posts/2021-01-18-INNODB_SYS_FOREIGN_COLS.md | daohengshangqian/blog.io | 006c704d3a0517fb85ea5c640125521024bd1dbd | [
"Apache-2.0"
] | null | null | null | _posts/2021-01-18-INNODB_SYS_FOREIGN_COLS.md | daohengshangqian/blog.io | 006c704d3a0517fb85ea5c640125521024bd1dbd | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: INNODB_SYS_FOREIGN_COLS Explained
date: 2021-01-18
categories: blog
tags: [MySQL]
description: An explanation of the INNODB_SYS_FOREIGN_COLS system table
---
#### About this system table
INNODB_SYS_FOREIGN_COLS provides information about the columns of InnoDB foreign keys.
Column | Meaning
---|---
ID | The ID of the foreign key the column belongs to
FOR_COL_NAME | The column name in the child table
REF_COL_NAME | The column name in the referenced (parent) table
POS | The ordinal position of the column within the key
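A quick way to inspect it (the table lives in `INFORMATION_SCHEMA` in MySQL 5.7; in MySQL 8.0 the equivalent view is named `INNODB_FOREIGN_COLS`):

```sql
-- List all foreign key columns, grouped by foreign key and key position.
SELECT ID, FOR_COL_NAME, REF_COL_NAME, POS
FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS
ORDER BY ID, POS;
```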
| 14.761905 | 40 | 0.635484 | yue_Hant | 0.939058 |
93b8bc3a0866f03601a06f1717781ab422fbf714 | 11,740 | md | Markdown | _posts/2018-07-02-crypto-incrementalism-vs-crypto-anarchy.md | sourcecrypto/cryptowords.github.io | 27fdf44c9a95c3bdea029e64aea57a28bbaccce8 | [
"MIT"
] | 6 | 2019-05-07T02:36:12.000Z | 2020-01-08T19:32:03.000Z | _posts/2018-07-02-crypto-incrementalism-vs-crypto-anarchy.md | sourcecrypto/cryptowords.github.io | 27fdf44c9a95c3bdea029e64aea57a28bbaccce8 | [
"MIT"
] | 1 | 2019-06-24T13:13:10.000Z | 2019-06-25T20:11:19.000Z | _posts/2018-07-02-crypto-incrementalism-vs-crypto-anarchy.md | sourcecrypto/cryptowords.github.io | 27fdf44c9a95c3bdea029e64aea57a28bbaccce8 | [
"MIT"
] | 3 | 2019-06-24T05:15:14.000Z | 2020-01-09T22:12:44.000Z | ---
title: "Crypto-incrementalism vs Crypto-anarchy"
permalink: "/crypto-incrementalism-vs-crypto-anarchy"
tags:
- Tony Sheng
- CY18 Q3
excerpt: Tony Sheng believes all crypto projects fall into one of two camps, which he also believes can coexist. Posted July 2, 2018.
defaults:
# _posts
- scope:
path: ""
type: posts
values:
layout: single
read_time: true
comments: false
share: true
related: false
---
# [Crypto-incrementalism vs Crypto-anarchy](https://www.tonysheng.com/incremental-vs-anarchy)
### By [Tony Sheng](https://twitter.com/tonysheng)
### Posted July 2, 2018
A close friend of mine invests in crypto. He focuses on projects that either help with regulatory compliance or fiat inflows (e.g. institutional custody). He sees value in blockchain and the surrounding ecosystem not as an unstoppable force that tears down borders, undermines governments, and shepherds in an era of crypto-anarchy, but as nothing more than a new data structure with potentially disruptive use-cases.
Let's call this crypto-incrementalism.
My other friends want unstoppable, unseizable money that cripples legacy financial institutions; a transparent, uncensorable information infrastructure that precludes any abuses from states or corporations; true privacy and anonymity such that participants are completely unidentifiable by those that would seek to harm them; a society built on technologies unable to comply with authoritarian requests.
Let's call this the crypto-anarchy<sup>[1](https://www.tonysheng.com/incremental-vs-anarchy#fn:1)</sup> .
Crypto-incrementalists and the crypto-anarchists share strong convictions in the power of decentralization. Both can get excited about Satoshi's vision to improve on a financial system that relies "almost exclusively on financial institutions serving as trusted third parties to process electronic payments."<sup>[2](https://www.tonysheng.com/incremental-vs-anarchy#fn:2)</sup>
Nobody mourns the removal of the middle-man. It's a powerful starting point for crypto-incrementalists and crypto-anarchists alike.
However, these groups diverge quickly as we move from the abstract to the material. How should we handle a government's request to identify an individual using a cryptocurrency? How do we return assets stolen in a hack? How do we enforce legality for tokenized physical assets (e.g. real-estate)? Or tokenized securities?
## Crypto-incrementalism
Crypto-incrementalists think it's okay–even preferable–to design crypto systems able to comply with requests from governments and corporations. Assets stolen? Use a back-door to reverse the transactions. Need to enforce legality of tokenized securities? Design around the existing legal infrastructure so the legal system is the "source of truth." Here, the benefit of crypto is to reduce inefficiencies in existing systems by applying a new technology. Working with governments and corporations is the easiest way to drive that adoption.
Recent actions by the powers that govern EOS provide an instructive example. On June 22nd, the "EOS Core Arbitration Forum (ECAF)" asked the 21 EOS block producers (the 21 entities that decide what "truth" is on the chain) to freeze seven accounts, providing no explanation. The ECAF stated, "the logic and reasoning for this Order will be posted at a later date." Alarmingly, all 21 block producers complied.
{: .align-center}
Such a process gives EOS the flexibility to deal with malicious actors, but comes at the cost of potential abuses. It's not hard to imagine a scenario where hackers sent a similar decree to the block producers. Or for ECAF to ask the block producers to freeze accounts on behalf of a government.
This is a trade-off. Acceptance of these risks offers a way to incorporate blockchain into some very large industries such as enterprise blockchains, security tokens, tokenization of physical assets, and more. For these use cases, a centralized entity has authoritarian control of the protocol by design–it's the only way to comply with existing financial and legal systems. Debates around these use cases tend to reduce to this single design choice that is exciting to crypto-incrementalists and repulsive to crypto-anarchists.
## Crypto-anarchy
Crypto-anarchists want information infrastructures that are, by design, unable to comply with authoritarian requests. Assets stolen? Nobody has the power to reverse the transactions: best we can do is fork the protocol. Need to enforce legality of tokenized securities? Hard af and probably not worth working on. Want to identify an individual participant in the network? Sorry, that data does not exist. Any designs that make a protocol vulnerable to authoritarian control render said protocol useless. For crypto-anarchists, the success is binary: sufficient protections against centralized powers, or insufficient. Anything "in the middle" is insufficient.
Why require complete decentralization and privacy? The corner cases are unacceptable. Members of a persecuted minority could get their funds frozen, speech censored, identity deleted by an evil government. Even if the government couldn't directly freeze the funds (e.g. we're all using a Bitcoin), without sufficient privacy, they could identify the addresses belonging to members of the persecuted minority and take actions on them physically or through the ecosystem surrounding Bitcoin (e.g. retailers).
Use of popular decentralized networks are not yet sufficiently private. Torrent users will often receive cease-and-desists from their internet providers. Bitcoin users have been routinely identified for innocuous and criminal purposes. Users aren't sufficiently protected for two reasons: (1) privacy technology is not yet built into networks like Bitcoin, and (2) users are largely uneducated on privacy preservation, and unbeknowingly identify themselves with things like their IP address.
Is decentralization good enough? If Bitcoin cannot comply with an authoritarian request to seize funds (which it cannot), doesn't that satisfy the needs of crypto-anarchists? It's certainly good, and covers the majority of authoritarian requests, but does not protect users from e.g. physical violence. While a user can't have their funds seized at the protocol level, if their identity is exposed, a powerful entity could find them physically and coerce them.
Luckily, lots of exciting work is happening around privacy. There are privacy-forward cryptocurrencies like Zcash and Monero, projects like Enigma and Keep, and the big protocols like Bitcoin and Ethereum have plans to improve their privacy technology.
## Use cases
Here are some example implementations of the most commonly discussed use cases for crypto.
**Money**
* Incremental: a cryptocurrency that can, without the consent of the majority of the network, report the identities and behaviors of participants in the network to governments, freeze or seize balances, or change the "rules" (e.g. monetary policy)
* Anarchic: a cryptocurrency that is unable to comply with authoritarian requests (e.g. Bitcoin) and offers strong privacy guarantees (e.g. Zcash)
**Computing platform**
* Incremental: a smart contract protocol that can, without the consent of the majority of the network, blacklist accounts (e.g. EOS) or prevent access to certain groups (e.g. private blockchains).
* Anarchic: a smart contract protocol that is unable to comply with authoritarian requests (e.g. Ethereum) and offers strong privacy guarantees (none yet exist of material scale)
**Tokenized securities**
* Incremental: a protocol or set of smart contracts designed to comply with existing financial and legal systems, necessarily enabling those in power to control access and modify data of the protocol
* Anarchic: a completely new system for securities where the source of truth is not existing systems, but what is on the chain. This new system is unable to comply with authoritarian requests and sufficiently protects user privacy
**Non-fungible tokens**
* Incremental: just like tokenized securities where the token is only a pointer to the provenance ("true" version) of the asset. When the token and the asset are out of sync, a centralized entity is able to modify the ledger to match the token with the asset.
* Anarchic: the provenance of the asset is the token. The ledger is unable to comply with authoritarian requests and the privacy of users is preserved.
In each of these cases, there are possible benefits in adopting the incremental approach compared with current technologies. For example, an incremental approach to tokenized securities scales the technology behind securities while maintaining compatability with today's systems. However, the incremental approach will never satisfy the requirements of crypto-anarchists because the incremental approach can always comply with authoritarian requests and does not necessarily (but could) offer privacy.
## Conclusion
{: .align-center}
In sum, one can think of crypto projects as either crypto-incremental or crypto-anarchic. Both remove the middle-man omnipresent in today's web2 systems, but differ in their abilities to comply with authoritarian requests and preserve privacy.
Crypto-incrementalists envision numerous applications of blockchain that improve efficiencies in existing systems, designing around current legal and financial systems. The protocols of crypto-incrementalists remove the middle-man but are not resistant to censorship. While their protocols may behave like decentralized networks, they are still able to comply with authoritarian requests. The crypto-incremental future does not look _that_ different from today. Users may enjoy incrementally reduced fees, but no new freedoms.
The protocols of crypto-anarchists cannot comply with authoritarian requests. And they preserve the privacy and anonymity of its users. These requirements are hard to satisfy (as evidenced by Bitcoin's poor privacy guarantees), and make certain use cases all but impossible (e.g. enterprise blockchains, tokenized securities) with legacy financial and legal systems.
Many of the most common debates in crypto reach an impasse because one party is a crypto-incrementalist and the other is a crypto-anarchist. Critics of EOS/IOTA/(anything that's not Bitcoin) complain that it's too centralized. If you are a crypto-anarchist, EOS would never satisfy your requirements. But if you're a crypto-incrementalist, you may happily accept centralization for the faster transactions.
I believe crypto-incremental and crypto-anarchic projects can coexist. Where we see problems is in the widespread conflation of crypto-anarchic properties with crypto-incremental projects. Early crypto evangelists have been so successful in promoting censorship resistance and privacy that many assume that all crypto projects have or will have these properties. This is a dangerous misconception, as it's highly unlikely (mostly impossible) that a project can go from crypto-incremental to crypto-anarchic. Crypto-incremental projects are simply not contenders for use cases that require crypto-anarchy, but many market themselves as contenders because the "value" of those use cases (e.g. unseizable money, permissionless platforms) is much higher than an incremental improvement to an existing system. We would do well as an industry to clarify.
1. https://nakamotoinstitute.org/crypto-anarchist-manifesto/#selection-71.665-71.898 [↩](https://www.tonysheng.com/incremental-vs-anarchy#fnref:1)
2. https://bitcoin.org/bitcoin.pdf [↩](https://www.tonysheng.com/incremental-vs-anarchy#fnref:2)
| 102.982456 | 848 | 0.805877 | eng_Latn | 0.999047 |
93b8e57beb451236211e062bc861fad7a39fd9a4 | 10,867 | md | Markdown | docs/windows/db-command.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
] | 1 | 2020-05-21T13:04:35.000Z | 2020-05-21T13:04:35.000Z | docs/windows/db-command.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
] | null | null | null | docs/windows/db-command.md | OpenLocalizationTestOrg/cpp-docs.it-it | 05d8d2dcc95498d856f8456e951d801011fe23d1 | [
"CC-BY-4.0"
] | null | null | null | ---
title: db_command | Microsoft Docs
ms.custom:
ms.date: 11/04/2016
ms.reviewer:
ms.suite:
ms.technology:
- devlang-cpp
ms.tgt_pltfrm:
ms.topic: language-reference
f1_keywords:
- vc-attr.db_command
dev_langs:
- C++
helpviewer_keywords:
- db_command attribute
ms.assetid: 714c3e15-85d7-408b-9a7c-88505c3e5d24
caps.latest.revision: 13
author: mikeblome
ms.author: mblome
manager: ghogen
translation.priority.ht:
- de-de
- es-es
- fr-fr
- it-it
- ja-jp
- ko-kr
- ru-ru
- zh-cn
- zh-tw
translation.priority.mt:
- cs-cz
- pl-pl
- pt-br
- tr-tr
translationtype: Human Translation
ms.sourcegitcommit: 3168772cbb7e8127523bc2fc2da5cc9b4f59beb8
ms.openlocfilehash: 1743e3f1636bb82d0a6c0403511079e4fa63b09e
---
# db_command
Creates an OLE DB command.
## Syntax
```
[ db_command(
command,
name,
source_name,
hresult,
bindings,
bulk_fetch)
]
```
#### Parameters
`command`
A command string containing the text of an OLE DB command. A simple example is:
```
[ db_command ( command = "Select * from Products" ) ]
```
The *command* syntax is as follows:
```
binding parameter block 1
OLE DB command
binding parameter block 2
continuation of OLE DB command
binding parameter block 3
...
```
A *binding parameter block* is defined as follows:
**([** `bindtype` **]** *szVar1* [*, szVar2* [, *nVar3* [, ...]]] **)**
where:
**(** marks the start of the data binding block.
**[** `bindtype` **]** is one of the following case-insensitive strings:
- **[db_column]** binds each of the member variables to a column in a rowset.
- **[bindto]** (same as **[db_column]**).
- **[in]** binds member variables as input parameters.
- **[out]** binds member variables as output parameters.
- **[in,out]** binds member variables as input/output parameters.
 *szVarX* resolves to a member variable within the current scope.
**)** marks the end of the data binding block.
If the command string contains one or more specifiers such as [in], [out], or [in/out], **db_command** builds a parameter map.
If the command string contains one or more parameters such as [db_column] or [bindto], **db_command** generates a rowset and an accessor map to service these bound variables. See [db_accessor](../windows/db-accessor.md) for more information.
> [!NOTE]
> [`bindtype`] syntax and the `bindings` parameter are not valid when using **db_command** at the class level.
Here are some examples of binding parameter blocks. The following example binds the `m_au_fname` and `m_au_lname` data members to the `au_fname` and `au_lname` columns, respectively, of the authors table in the pubs database:
```
TCHAR m_au_fname[21];
TCHAR m_au_lname[41];
TCHAR m_state[3] = _T("CA");
[db_command (
command = "SELECT au_fname([bindto]m_au_fname), au_lname([bindto]m_au_lname) " \
"FROM dbo.authors " \
"WHERE state = ?([in]m_state)")
```
]
*name* (optional)
The name of the handle you use to work with the rowset. If you specify *name*, **db_command** generates a class with the specified *name*, which can be used to traverse the rowset or to execute multiple action queries. If you do not specify *name*, it will not be possible to return more than one row of results to the user.
*source_name* (optional)
The `CSession` variable or instance of a class that has the `db_source` attribute applied to it on which the command executes. See [db_source](../windows/db-source.md).
**db_command** checks to ensure that the variable used for *source_name* is valid, so the specified variable should be in function or global scope.
`hresult` (optional)
Identifies the variable that will receive the `HRESULT` of this database command. If the variable does not exist, it will be automatically injected by the attribute.
*bindings* (optional)
Allows you to separate the binding parameters from the OLE DB command.
If you specify a value for `bindings`, **db_command** will parse the associated value and will not parse the [`bindtype`] parameter. This usage allows you to use OLE DB provider syntax. To disable parsing, without binding parameters, specify **Bindings=""**.
If you do not specify a value for `bindings`, **db_command** will parse the binding parameter block, looking for '**(**', followed by **[**`bindtype`**]** in brackets, followed by one or more previously declared C++ member variables, followed by '**)**'. All text between the parentheses will be stripped from the resulting command, and these parameters will be used to construct column and parameter bindings for this command.
 *bulk_fetch* (optional)
An integer value that specifies the number of rows to fetch.
The default value is 1, which specifies single row fetching (the rowset will be of type [CRowset](../data/oledb/crowset-class.md)).
A value greater than 1 specifies bulk row fetching. Bulk row fetching refers to the ability of bulk rowsets to fetch multiple row handles (the rowset will be of type [CBulkRowset](../data/oledb/cbulkrowset-class.md) and will call `SetRows` with the specified number of rows).
If *bulk_fetch* is less than one, `SetRows` will return zero.
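For instance, a command that fetches 100 row handles at a time might be declared as follows (a sketch based on the parameter description above, not a sample from the original article):

```cpp
// Sketch: bulk row fetching; the generated rowset type becomes CBulkRowset.
[ db_command(command = "SELECT au_lname, au_fname FROM dbo.authors",
             name = "m_bulkCmd",
             bulk_fetch = 100) ]
struct CAuthorsBulk {
   [ db_column("au_lname") ] TCHAR m_au_lname[41];
   [ db_column("au_fname") ] TCHAR m_au_fname[21];
};
```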
## Remarks
**db_command** creates a [CCommand](../data/oledb/ccommand-class.md) object, which is used by an OLE DB consumer to execute a command.
You can use **db_command** with either class or function scope; the main difference is the scope of the `CCommand` object. With function scope, data such as bindings terminate at function end. Both class and function scope usages involve the OLE DB Consumer Template class **CCommand<>**, but the template arguments differ for the function and class cases. In the function case, bindings will be made to an **Accessor** that comprises local variables, while the class usage will infer a `CAccessor`-derived class as the argument. When used as a class attribute, **db_command** works in conjunction with **db_column**.
**db_command** can be used to execute commands that do not return a result set.
When the consumer attribute provider applies this attribute to a class, the compiler will rename the class to _*YourClassName*Accessor, where *YourClassName* is the name you gave the class, and the compiler will also create a class called *YourClassName,* which derives from \_*YourClassName*Accessor. In Class View, you will see both classes.
## Example
This sample defines a command that selects the first and last names from a table where the state column matches 'CA'. **db_command** creates and reads a rowset on which you can call wizard-generated functions such as [OpenAll and CloseAll](../data/oledb/consumer-wizard-generated-methods.md), as well as `CRowset` member functions such as [MoveNext](../data/oledb/crowset-movenext.md).
Note that this code requires you to provide your own connection string that connects to the pubs database. For information on how to do this in the development environment, see [How to: Connect to a Database from Server Explorer](http://msdn.microsoft.com/en-us/7c1c3067-0d77-471b-872b-639f9f50db74) and [How to: Add New Data Connections in Server Explorer/Database Explorer](http://msdn.microsoft.com/en-us/fb2f513b-ddad-4142-911e-856bba0054c8).
```
// db_command.h
#include <atlbase.h>
#include <atlplus.h>
#include <atldbcli.h>
#pragma once
[ db_source(L"your connection string"),
db_command(L" \
SELECT au_lname, au_fname \
FROM dbo.authors \
WHERE state = 'CA'") ]
struct CAuthors {
// In order to fix several issues with some providers, the code below may bind
// columns in a different order than reported by the provider
DBSTATUS m_dwau_lnameStatus;
DBSTATUS m_dwau_fnameStatus;
DBLENGTH m_dwau_lnameLength;
DBLENGTH m_dwau_fnameLength;
[ db_column("au_lname", status="m_dwau_lnameStatus", length="m_dwau_lnameLength") ] TCHAR m_au_lname[41];
[ db_column("au_fname", status="m_dwau_fnameStatus", length="m_dwau_fnameLength") ] TCHAR m_au_fname[21];
[ db_param("7", paramtype="DBPARAMIO_INPUT") ] TCHAR m_state[3];
void GetRowsetProperties(CDBPropSet* pPropSet) {
pPropSet->AddProperty(DBPROP_CANFETCHBACKWARDS, true, DBPROPOPTIONS_OPTIONAL);
pPropSet->AddProperty(DBPROP_CANSCROLLBACKWARDS, true, DBPROPOPTIONS_OPTIONAL);
}
};
```
## Example
```
// db_command.cpp
// compile with: /c
#include "db_command.h"
int main(int argc, _TCHAR* argv[]) {
HRESULT hr = CoInitialize(NULL);
// Instantiate rowset
CAuthors rs;
// Open rowset and move to first row
strcpy_s(rs.m_state, sizeof(rs.m_state), _T("CA"));
hr = rs.OpenAll();
hr = rs.MoveFirst();
// Iterate through the rowset
while( SUCCEEDED(hr) && hr != DB_S_ENDOFROWSET ) {
// Print out the column information for each row
printf("First Name: %s, Last Name: %s\n", rs.m_au_fname, rs.m_au_lname);
hr = rs.MoveNext();
}
rs.CloseAll();
CoUninitialize();
}
```
## Example
This sample uses `db_source` on a data source class `CMySource`, and `db_command` on command classes `CCommand1` and `CCommand2`.
```
// db_command_2.cpp
// compile with: /c
#include <atlbase.h>
#include <atlplus.h>
#include <atldbcli.h>
// class usage for both db_source and db_command
[ db_source(L"your connection string"),
db_command(L" \
SELECT au_lname, au_fname \
FROM dbo.authors \
WHERE state = 'CA'") ]
struct CMySource {
HRESULT OpenDataSource() {
return S_OK;
}
};
[db_command(command = "SELECT * FROM Products")]
class CCommand1 {};
[db_command(command = "SELECT FNAME, LNAME FROM Customers")]
class CCommand2 {};
int main() {
CMySource s;
HRESULT hr = s.OpenDataSource();
if (SUCCEEDED(hr)) {
CCommand1 c1;
hr = c1.Open(s);
CCommand2 c2;
hr = c2.Open(s);
}
s.CloseDataSource();
}
```
## Requirements
### Attribute Context
|||
|-|-|
|**Applies to**|**class**, `struct`, member, method, local|
|**Repeatable**|No|
|**Required attributes**|None|
|**Invalid attributes**|None|
For more information about the attribute contexts, see [Attribute Contexts](../windows/attribute-contexts.md).
## See Also
[OLE DB Consumer Attributes](../windows/ole-db-consumer-attributes.md)
[Stand-Alone Attributes](../windows/stand-alone-attributes.md)
[Attributes Samples](http://msdn.microsoft.com/en-us/558ebdb2-082f-44dc-b442-d8d33bf7bdb8)
<!--HONumber=Jan17_HO2-->
| 36.837288 | 620 | 0.688138 | eng_Latn | 0.967617 |
93b9e21d8270a3658807592db62b7d46e35fe4f3 | 1,628 | md | Markdown | WindowsServerDocs/administration/windows-commands/ksetup-delkpasswd.md | awakecoding/windowsserverdocs | cb266c8ea42b9800babbbe96b17885e82b55787d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-08-24T10:46:35.000Z | 2020-08-24T10:46:35.000Z | WindowsServerDocs/administration/windows-commands/ksetup-delkpasswd.md | awakecoding/windowsserverdocs | cb266c8ea42b9800babbbe96b17885e82b55787d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-11T19:54:41.000Z | 2020-11-11T19:54:41.000Z | WindowsServerDocs/administration/windows-commands/ksetup-delkpasswd.md | awakecoding/windowsserverdocs | cb266c8ea42b9800babbbe96b17885e82b55787d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ksetup delkpasswd
description: Reference topic for the ksetup delkpasswd command, which removes a Kerberos password server (kpasswd) for a realm.
ms.prod: windows-server
ms.technology: manage-windows-commands
ms.topic: article
ms.assetid: 2db0bfcd-bc08-48e3-9f30-65b6411839c6
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
---
# ksetup delkpasswd
> Applies to: Windows Server (Semi-Annual Channel), Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
Removes a Kerberos password server (kpasswd) for a realm.
## Syntax
```
ksetup /delkpasswd <realmname> <kpasswdname>
```
### Parameters
| Parameter | Description |
| --------- | ----------- |
| `<realmname>` | Specifies the uppercase DNS name, such as CORP.CONTOSO.COM, and is listed as the default realm or **Realm=** when **ksetup** is run. |
| `<kpasswdname>` | Specifies the Kerberos password server. It's stated as a case-insensitive, fully-qualified domain name, such as mitkdc.contoso.com. If the KDC name is omitted, DNS might be used to locate KDCs. |
### Examples
To remove the non-Windows KDC server mitkdc.contoso.com as the password server for the realm CORP.CONTOSO.COM, type:
```
ksetup /delkpasswd CORP.CONTOSO.COM mitkdc.contoso.com
```
To verify that the realm CORP.CONTOSO.COM is no longer mapped to a Kerberos password server (the KDC name), type `ksetup` on the Windows computer and then view the output.
## Additional References
- [Command-Line Syntax Key](command-line-syntax-key.md)
- [ksetup command](ksetup.md)
- [ksetup delkpasswd command](ksetup-delkpasswd.md)
| 32.56 | 215 | 0.743857 | eng_Latn | 0.759037 |
93ba82fba0ea7bdf276626f9dfc8d3a105377d13 | 535 | md | Markdown | i18n/ja/docusaurus-plugin-content-docs/current/gallery/collection/01_7_services.md | jwdevelab/zw | 1f4d5f57b864a8a3ae730c62091ccda8fc3a8d1e | [
"MIT"
] | null | null | null | i18n/ja/docusaurus-plugin-content-docs/current/gallery/collection/01_7_services.md | jwdevelab/zw | 1f4d5f57b864a8a3ae730c62091ccda8fc3a8d1e | [
"MIT"
] | null | null | null | i18n/ja/docusaurus-plugin-content-docs/current/gallery/collection/01_7_services.md | jwdevelab/zw | 1f4d5f57b864a8a3ae730c62091ccda8fc3a8d1e | [
"MIT"
] | null | null | null | ---
id: services
title: '🔺 Services'
sidebar_position: 7
description: Services Collection
keywords:
- コレクション
- services
- zsh
- z-shell
- zi
---
```shell
# a service that runs the redis database, in background, single instance
zi ice wait lucid service"redis"
zi light zservices/redis
```
```shell
# Github-Issue-Tracker – the issue-puller thread
GIT_SLEEP_TIME=700
GIT_PROJECTS=z-shell/zsh-github-issues:z-shell/zi
zi ice wait lucid service"GIT" pick"zsh-github-issues.service.zsh"
zi light z-shell/zsh-github-issues
```
| 19.107143 | 72 | 0.736449 | eng_Latn | 0.510779 |
93bb1841cc921ff3ed816ec028ffe0581c476e70 | 6,786 | md | Markdown | README.md | Avi-nashkumar/One-person-one-portal | 2e9e27a02dfa7160d4b9d4407895fa0b5d197364 | [
"MIT"
] | null | null | null | README.md | Avi-nashkumar/One-person-one-portal | 2e9e27a02dfa7160d4b9d4407895fa0b5d197364 | [
"MIT"
] | null | null | null | README.md | Avi-nashkumar/One-person-one-portal | 2e9e27a02dfa7160d4b9d4407895fa0b5d197364 | [
"MIT"
] | null | null | null | # One-person-one-portal
Our idea is one person, one portal: one blockchain ID with multiple uses on a single platform, interconnected internally. It is currently deployed on an Ethereum test network. In this portal a user can complete KYC, manage assets/property, and track health records. You need to upload documents and complete a one-time verification of each uploaded document; after that you only have to share the documents. OCR runs in the backend to fetch details such as name and date of birth from the uploaded document, for verification and for matching credentials. You can also create a property record and transfer ownership: today, transferring the ownership of land requires visiting government offices, which is slow and costly, whereas on this portal, once a property is created, its ownership can be changed in minutes. Health records can be tracked in the same way. There are a lot of challenges in this project that still need to be overcome, but we think that if the work is done in the right direction, the portal can act as a blockchain-based smart-governance platform in the future.
Steps to do KYC and asset management:

1. Sign up on the portal using a valid Aadhaar, entering your name exactly as it appears on the Aadhaar card.
2. Enter the correct address of your Ethereum account.
3. Log in using the username and password.
4. To upload a KYC document, go to the KYC panel, upload the document and enter its details there.
5. To create a property/asset, enter the details and the Ethereum address of the property holder (your address).
6. To view a property, go to the View Property section.
7. To transfer ownership, first add the Ethereum address of the next owner in the Add User panel.
8. Approve the added user there.
9. Now go to the View Property section and enter the property ID.
10. Put the address of the new owner in the Change Ownership box and click the Change Ownership button.
11. In the same way you can change the value of a property.
**Testnet Used**:
Testrpc – This is a local network running on your computer. 10 free wallet accounts with test ether are allocated.
Wallet
Wallets are a very important part of working with a smart contract. They serve two purposes:
They act as a client to the Ethereum network. Making a transaction on the network costs ether, and you authorize those payments from the wallet.
To communicate with a blockchain and to deploy contracts, you need either a full node or a wallet client for the network; a wallet facilitates the communication with the network. Note: We have used testrpc, which provides us with 10 free accounts (with their private keys), each holding 100 test ether. It is these accounts that we use for transactions. To run it, see the sketch below.
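A minimal sketch of getting it running (assuming Node.js/npm are installed; the package was later renamed to ganache-cli, so the exact install command may differ on newer setups):

```shell
# One-time install (assumption: npm is available on PATH)
npm install -g ethereumjs-testrpc

# Start the local test chain; it prints 10 funded accounts and their private keys
testrpc
```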
Deployment
To deploy a contract, the following steps are taken:

1. Compile the code and obtain the necessary bytecode and ABI.
2. Select a network to migrate to.
3. Make a deployment with a wallet address as the transaction sender.
4. Authorize the transaction from the wallet and pay the transaction cost.
5. The contract is deployed and assigned a public address, which can be used to access it.
Web Interface
A web app can be used to work with the contract. A JavaScript library, web3.js, can interact with the blockchain: it can connect to the network, identify the contract and perform transactions. There are two kinds of operations on a contract: calls, which only read state, and transactions, which change state and cost gas.
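As a rough illustration of the difference, with web3.js 0.x it looks like the sketch below (the method names mirror the operations listed later in this README, but treat them as assumptions rather than the portal's exact API):

```javascript
var contract = web3.eth.contract(abi).at(contractAddress);

// A call: reads state locally, costs no gas and is not mined
contract.GetPropertyDetails.call(propertyId, function (err, details) {
  console.log(details); // e.g. [status, value, owner]
});

// A transaction: changes state, costs gas and is mined into a block
contract.changeOwnership(propertyId, newOwnerAddress,
  { from: web3.eth.accounts[0] },
  function (err, txHash) {
    console.log('ownership change submitted in tx', txHash);
  });
```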
Web App
Open the src/js/app.js file. This is the JavaScript file that interacts with the contract.

1. Paste your contract address in place of `contract_address` in `web3.eth.contract(abi).at('contract_address');`.
2. Go to the Remix page. In the Compile section, open the Details tab. In the ABI section, click the copy button to copy your ABI code.
3. Go to the src/js/app.js file and paste it in place of `abi_array` in `var abi = abi_array;`.
4. Open src/index.html to open the web app.

Interacting with the Web App

Fill in the user details and click Add User or Add Admin. You will see a block being created and, in the Remix IDE, ether being deducted from the account we are using (which received 100 free test ether).

In the command window you can see blocks being created.
Different Operations in the App

1. Initialising Contract

**Output**: msg.sender is made the creatorAdmin, a superAdmin and a verified user.

2. Create a new property

Parameters: CreateProperty - property ID, property value, property owner address
Prerequisites: msg.sender should be an admin; the property owner should be a verified user
Output: records the property ID, the property value and the property owner address, and marks the status as Pending

3. Approve the new property

Parameters: approveProperty - property ID
Prerequisites: msg.sender should be a superadmin; the current owner should not be msg.sender
Output: marks the property status as Approved

4. Reject the new property

Parameters: rejectProperty - property ID
Prerequisites: msg.sender should be a superadmin; the current owner should not be msg.sender
Output: marks the property status as Rejected

5. Request a change of ownership

Parameters: changeOwnership - property ID, new owner address
Prerequisites: msg.sender should be the current owner; the new owner should be a verified user; the current owner is not the new owner; no pending ownership change request may exist
Output: marks the property with an ownership change request

6. Approve a change of ownership

Parameters: ApproveChangeOwnership - property ID
Prerequisites: msg.sender should be a superadmin; an ownership change request must exist
Output: marks the new owner address as the current owner

7. Change the price of the property

Parameters: changeValue - property ID, new property value
Prerequisites: msg.sender should be the current owner; no pending ownership change request may exist
Output: changes the property value

8. Get the property details

Parameters: GetPropertyDetails - property ID
Output: status, property value, property owner address

9. Add a new user

Parameters: addNewUser - address
Prerequisites: msg.sender should be an admin; no pending request for the address may exist (user, admin or superadmin); the address should not already be a verified user (user, admin or superadmin)
Output: marks the address as a user

10. Add a new admin

Parameters: AddNewAdmin - address
Prerequisites: msg.sender should be a superadmin; no pending request for the address may exist (user, admin or superadmin); the address should not already be a verified user (user, admin or superadmin)
Output: marks the address as an admin

11. Add a new superadmin

Parameters: addNewSuperAdmin - address
Prerequisites: msg.sender should be a superadmin; no pending request for the address may exist (user, admin or superadmin); the address should not already be a verified user (user, admin or superadmin)
Output: marks the address as a superadmin

12. Approve a pending user

Parameters: approveUsers - address
Prerequisites: msg.sender should be a superadmin; a pending request must exist for the address
Output: marks the address as a verified user
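For illustration only, a heavily simplified Solidity sketch of the property-registry flow described above (the function names are kept from the list, but the types, storage layout and checks are assumptions, not the portal's actual contract):

```solidity
pragma solidity ^0.4.24;

contract PropertyRegistry {
    enum Status { Pending, Approved, Rejected }

    struct Property {
        uint value;
        address owner;
        address requestedOwner; // pending ownership change, if any
        Status status;
    }

    mapping(uint => Property) public properties;
    mapping(address => bool) public admins;
    mapping(address => bool) public superAdmins;
    mapping(address => bool) public verifiedUsers;

    constructor() public {
        // Initialising contract: msg.sender becomes admin, superadmin and verified user
        superAdmins[msg.sender] = true;
        admins[msg.sender] = true;
        verifiedUsers[msg.sender] = true;
    }

    function CreateProperty(uint id, uint value, address owner) public {
        require(admins[msg.sender] && verifiedUsers[owner]);
        properties[id] = Property(value, owner, address(0), Status.Pending);
    }

    function approveProperty(uint id) public {
        require(superAdmins[msg.sender] && properties[id].owner != msg.sender);
        properties[id].status = Status.Approved;
    }

    function changeOwnership(uint id, address newOwner) public {
        Property storage p = properties[id];
        require(msg.sender == p.owner && verifiedUsers[newOwner]
                && newOwner != p.owner && p.requestedOwner == address(0));
        p.requestedOwner = newOwner; // request recorded, awaiting superadmin approval
    }

    function ApproveChangeOwnership(uint id) public {
        Property storage p = properties[id];
        require(superAdmins[msg.sender] && p.requestedOwner != address(0));
        p.owner = p.requestedOwner;
        p.requestedOwner = address(0);
    }
}
```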
| 34.622449 | 1,064 | 0.808429 | eng_Latn | 0.998816 |
93bc5173c6a373d90dcb3dfc132c83a455a0bbd7 | 820 | md | Markdown | elmish/README.md | fable-compiler/fableconf-workshops | a27c0abb2207d98906c40d889748bbb84dd0e657 | [
"MIT"
] | 4 | 2017-12-31T12:03:37.000Z | 2019-10-04T23:29:52.000Z | elmish/README.md | fable-compiler/fableconf-workshops | a27c0abb2207d98906c40d889748bbb84dd0e657 | [
"MIT"
] | 2 | 2017-12-30T08:26:32.000Z | 2018-10-21T08:12:50.000Z | elmish/README.md | fable-compiler/fableconf-workshops | a27c0abb2207d98906c40d889748bbb84dd0e657 | [
"MIT"
] | 7 | 2017-09-10T21:45:36.000Z | 2019-12-21T19:05:00.000Z | # Fullstack Fable example
## Requirements
- [Dotnet SDK 2.1.302](https://www.microsoft.com/net/download)
- [node.js with npm](https://nodejs.org)
- [Mono Framework](https://www.mono-project.com/download/stable/) for some tooling if working in non-Windows environment
- An F# IDE, like Visual Studio Code with [Ionide extension](http://ionide.io/)
## Installing dependencies
Type `npm install` to install dependencies (for both JS and F#) after cloning the repository or whenever dependencies change.
> [Paket](https://fsprojects.github.io/Paket/) is the tool used to manage F# dependencies.
## Development
Fable and Webpack are used to compile and bundle both the client and the server projects. To start them in watch mode (so the server is reloaded whenever there's a change in the code) type: `npm run start`.
| 43.157895 | 206 | 0.754878 | eng_Latn | 0.981882 |
93bcc0ea7141d91bfc7d9163f284d807626e9ec9 | 84 | md | Markdown | CONTRIBUTING.md | egedinnen/hackathon-competitors | 1e833084936aff7a57a633542e39fa687c2b5545 | [
"MIT"
] | null | null | null | CONTRIBUTING.md | egedinnen/hackathon-competitors | 1e833084936aff7a57a633542e39fa687c2b5545 | [
"MIT"
] | 1 | 2020-10-28T16:42:39.000Z | 2020-10-28T16:42:39.000Z | CONTRIBUTING.md | egedinnen/hackathon-competitors | 1e833084936aff7a57a633542e39fa687c2b5545 | [
"MIT"
] | 2 | 2020-10-28T16:38:02.000Z | 2020-10-28T18:31:04.000Z | Add your full name to next line with your github, linkedin or another profile link.
| 42 | 83 | 0.797619 | eng_Latn | 0.999767 |
93bd120efc36a36b91c2cbc92cc92702997814b7 | 1,068 | md | Markdown | articles/vs-mobile-services-javascript-what-happened.md | rustd/azure-content | 263b2bad92dbccbaa78c7014217510af76a706ce | [
"CC-BY-3.0"
] | 2 | 2016-02-10T04:24:38.000Z | 2021-09-17T22:53:42.000Z | articles/vs-mobile-services-javascript-what-happened.md | rustd/azure-content | 263b2bad92dbccbaa78c7014217510af76a706ce | [
"CC-BY-3.0"
] | null | null | null | articles/vs-mobile-services-javascript-what-happened.md | rustd/azure-content | 263b2bad92dbccbaa78c7014217510af76a706ce | [
"CC-BY-3.0"
] | 1 | 2021-05-30T01:41:12.000Z | 2021-05-30T01:41:12.000Z | <properties
pageTitle=""
description="Describes what happened to your Azure Mobile Services project in Visual Studio"
services="mobile-services"
documentationCenter=""
authors="patshea123"
manager="douge"
editor=""/>
<tags
ms.service="mobile-services"
ms.workload="mobile"
ms.tgt_pltfrm=""
ms.devlang="JavaScript"
ms.topic="article"
ms.date="02/02/2015"
ms.author="patshea"/>
> [AZURE.SELECTOR]
> - [Getting Started](/documentation/articles/vs-mobile-services-javascript-getting-started/)
> - [What Happened](/documentation/articles/vs-mobile-services-javascript-what-happened/)
### <span id="whathappened">What happened to my project?</span>

##### References Added

The Windows Azure Mobile Service library was added to your project in the form of a **MobileServices.js** file.

##### Connection string values for Mobile Services
In the `services\mobileServices\settings` folder, a new JavaScript (.js) file with a **MobileServiceClient** was generated that contains the selected mobile service's application URL and application key.
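For reference, the generated settings file typically looks something like the sketch below (the service name, URL and key here are placeholders, not real values):

```javascript
// services/mobileServices/settings/yourservice.js (generated)
var yourserviceClient = new WindowsAzure.MobileServiceClient(
    "https://yourservice.azure-mobile.net/", // application URL
    "YOUR_APPLICATION_KEY");                 // application key
```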
| 33.375 | 205 | 0.748127 | eng_Latn | 0.877697 |
93bde3243b50927f98a70a24085015b07e4c4746 | 8,524 | md | Markdown | docs/xquery/functions-related-to-qnames-expanded-qname.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xquery/functions-related-to-qnames-expanded-qname.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xquery/functions-related-to-qnames-expanded-qname.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Erweiterter QName (XQuery) | Microsoft-Dokumentation
ms.custom: ''
ms.date: 03/14/2017
ms.prod: sql
ms.prod_service: sql
ms.reviewer: ''
ms.technology: xml
ms.topic: language-reference
dev_langs:
- XML
helpviewer_keywords:
- expanded-QName function
- fn:expanded-QName function
ms.assetid: b8377042-95cc-467b-9ada-fe43cebf4bc3
author: rothja
ms.author: jroth
ms.openlocfilehash: 7c50409ea35809c52de718a8281bf76f75a5a0e0
ms.sourcegitcommit: b87d36c46b39af8b929ad94ec707dee8800950f5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/08/2020
ms.locfileid: "68004581"
---
# <a name="functions-related-to-qnames---expanded-qname"></a>Functions related to QNames - expanded-QName
[!INCLUDE[tsql-appliesto-ss2012-xxxx-xxxx-xxx-md](../includes/tsql-appliesto-ss2012-xxxx-xxxx-xxx-md.md)]
  Returns a value of type xs:QName, with the namespace URI specified in *$paramURI* and the local name specified in *$paramLocal*. If *$paramURI* is the empty string or the empty sequence, it represents no namespace.
## <a name="syntax"></a>Syntax
```
fn:expanded-QName($paramURI as xs:string?, $paramLocal as xs:string?) as xs:QName?
```
## <a name="arguments"></a>Arguments
  *$paramURI*
  The namespace URI (Universal Resource Identifier) for the QName.
  *$paramLocal*
  The local part of the name of the QName.
## <a name="remarks"></a>Remarks
  The following applies to the **expanded-QName()** function:
- If the specified *$paramLocal* value is not in the correct lexical form for the xs:NCName type, the empty sequence is returned, representing a dynamic error.
- Converting the xs:QName type to other types is not supported in [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]. Because of this, the **expanded-QName()** function cannot be used in XML construction. For example, when you construct a node such as `<e> expanded-QName(...) </e>`, the value must be untyped. To achieve this, you would have to convert the value of type xs:QName returned by `expanded-QName()` to xdt:untypedAtomic. However, this is not supported. A workaround is provided later in this topic.
- You can modify or compare existing values of type QName. For example, `/root[1]/e[1] eq expanded-QName("http://nsURI", "myNS")` compares the value of the <`e`> element with the QName value returned by the **expanded-QName()** function.
## <a name="examples"></a>Examples
  This topic provides XQuery examples against XML instances that are stored in various **xml** type columns in the [!INCLUDE[ssSampleDBobject](../includes/sssampledbobject-md.md)] database.
### <a name="a-replacing-a-qname-type-node-value"></a>A. Replacing a QName type node value
  This example illustrates how you can modify the value of an element node of type QName. The example does the following:
- Creates an XML schema collection that defines an element of type QName.
- Creates a table with an **xml** type column by using the XML schema collection.
- Stores an XML instance in the table.
- Uses the **modify()** method of the xml data type to change the value of the QName type element in the instance. The **expanded-QName()** function is used to generate the new QName type value.
```
-- If XML schema collection (if exists)
-- drop xml schema collection SC
-- go
-- Create XML schema collection
CREATE XML SCHEMA COLLECTION SC AS N'
<schema xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="QNameXSD"
xmlns:xqo="QNameXSD" elementFormDefault="qualified">
<element name="Root" type="xqo:rootType" />
<complexType name="rootType">
<sequence minOccurs="1" maxOccurs="1">
<element name="ElemQN" type="xs:QName" />
</sequence>
</complexType>
</schema>'
go
-- Create table.
CREATE TABLE T( XmlCol xml(SC) )
-- Insert sample XML instance
INSERT INTO T VALUES ('
<Root xmlns="QNameXSD" xmlns:ns="https://myURI">
<ElemQN>ns:someName</ElemQN>
</Root>')
go
-- Verify the insertion
SELECT * from T
go
-- Result
<Root xmlns="QNameXSD" xmlns:ns="https://myURI">
<ElemQN>ns:someName</ElemQN>
</Root>
```
  The following query replaces the value of the <`ElemQN`> element by using the **modify()** method of the xml data type and the replace value of XML DML, as shown:
```
-- the value.
UPDATE T
SET XmlCol.modify('
declare default element namespace "QNameXSD";
replace value of /Root[1]/ElemQN
with expanded-QName("https://myURI", "myLocalName") ')
go
-- Verify the result
SELECT * from T
go
```
  This is the result. Note that the <`ElemQN`> element of type QName now has a new value:
```
<Root xmlns="QNameXSD" xmlns:ns="urn">
<ElemQN xmlns:p1="https://myURI">p1:myLocalName</ElemQN>
</Root>
```
  The following statements remove the objects used in this example.
```
-- Cleanup
DROP TABLE T
go
drop xml schema collection SC
go
```
### <a name="b-dealing-with-the-limitations-when-using-the-expanded-qname-function"></a>B. Dealing with the limitations when using the expanded-QName() function
  The **expanded-QName()** function cannot be used in XML construction. The following example illustrates this. To work around this limitation, the example first inserts a node and then modifies that node.
```
-- if exists drop the table T
--drop table T
-- go
-- Create XML schema collection
-- DROP XML SCHEMA COLLECTION SC
-- go
CREATE XML SCHEMA COLLECTION SC AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema">
<element name="root" type="QName" nillable="true"/>
</schema>'
go
-- Create table T with a typed xml column (using the XML schema collection)
CREATE TABLE T (xmlCol XML(SC))
go
-- Insert an XML instance.
insert into T values ('<root xmlns:a="https://someURI">a:b</root>')
go
-- Verify
SELECT *
FROM T
```
  The following attempt to add another <`root`> element fails, because the expanded-QName() function is not supported in XML construction.
```
update T SET xmlCol.modify('
insert <root>{expanded-QName("http://ns","someLocalName")}</root> as last into / ')
go
```
  One workaround is to first insert an instance with some initial value for the <`root`> element and then modify it. This example uses a nil initial value when inserting the <`root`> element. The XML schema collection in this example allows a nil value for the <`root`> element.
```
update T SET xmlCol.modify('
insert <root xsi:nil="true"/> as last into / ')
go
-- now replace the nil value with another QName.
update T SET xmlCol.modify('
replace value of /root[last()] with expanded-QName("http://ns","someLocalName") ')
go
-- verify
SELECT * FROM T
go
-- result
<root>b</root>
```
`<root xmlns:a="https://someURI">a:b</root>`
`<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p1="http://ns">p1:someLocalName</root>`
  You can compare QName values, as shown in the following query. The query returns only the <`root`> elements whose values are equal to the QName type value returned by the **expanded-QName()** function.
```
SELECT xmlCol.query('
for $i in /root
return
if ($i eq expanded-QName("http://ns","someLocalName") ) then
$i
else
()')
FROM T
```
### <a name="implementation-limitations"></a>Implementation limitations
  There is a limitation: the **expanded-QName()** function accepts the empty sequence as the second argument and returns empty, instead of raising a run-time error, when the second argument is incorrect.
## <a name="see-also"></a>See Also
 [Functions related to QNames (XQuery)](https://msdn.microsoft.com/library/7e07eb26-f551-4b63-ab77-861684faff71)
| 40.784689 | 611 | 0.700258 | deu_Latn | 0.909074 |
93be2d59d6f0fe0a014ae3a808aee02803720076 | 1,720 | md | Markdown | docs/framework/unmanaged-api/alink/addfile2-method.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/alink/addfile2-method.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/alink/addfile2-method.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Método AddFile2
ms.date: 03/30/2017
api_name:
- AddFile2
- IALink2.AddFile2
api_location:
- alink.dll
api_type:
- COM
f1_keywords:
- AddFile2
helpviewer_keywords:
- AddFile2 method
ms.assetid: 03bc49bf-a89b-4fb6-a88d-97482e061195
topic_type:
- apiref
ms.openlocfilehash: cff6707496c7d9657796deb8bf6fa9165ff295a2
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/24/2020
ms.locfileid: "95717078"
---
# <a name="addfile2-method"></a>AddFile2 Method
Adds files to the assembly. Can also be used to create unlinked modules.
## <a name="syntax"></a>Syntax
```cpp
HRESULT AddFile2(
mdAssembly AssemblyID,
LPCWSTR pszFilename,
DWORD dwFlags,
IMetaDataEmit2* pEmitter,
mdFile* pFileToken
) PURE;
```
## <a name="parameters"></a>Parameters
 `AssemblyID`
 ID of the assembly to which the file is added.
 `pszFilename`
 Name of the file to add.
 `dwFlags`
 COM+ `FileDef` flags, such as `ffContainsNoMetaData` and `ffWriteable`. `dwFlags` is passed to the [DefineFile method](../metadata/imetadataassemblyemit-definefile-method.md).
 `pEmitter`
 Interface to the [IMetaDataEmit2 interface](../metadata/imetadataemit2-interface.md).
 `pFileToken`
 Receives the ID of the file being added.
## <a name="return-value"></a>Return Value
 Returns S_OK if the method succeeds.
## <a name="requirements"></a>Requirements
 Requires ALink.h.
## <a name="see-also"></a>See also
- [IALink2 Interface](ialink2-interface.md)
- [IALink Interface](ialink-interface.md)
- [ALink API](index.md)
| 24.225352 | 181 | 0.722093 | por_Latn | 0.382738 |
93bea796511b920bbe5a7f1feeff9525ff8c2e95 | 358 | md | Markdown | 600-toc/632-recursive-function-theory/sudan-function.md | mandober/debrief.math | c6bf21581ccb48a82a74038135bca09c1d0c2a4f | [
"Unlicense"
] | 1 | 2019-01-18T21:56:33.000Z | 2019-01-18T21:56:33.000Z | 600-toc/632-recursive-function-theory/sudan-function.md | mandober/debrief.math | c6bf21581ccb48a82a74038135bca09c1d0c2a4f | [
"Unlicense"
] | 1 | 2019-06-16T19:34:58.000Z | 2019-06-16T19:35:03.000Z | 600-toc/632-recursive-function-theory/sudan-function.md | mandober/dust-dllci | 3afc7936579dfefd7f823d774c4ac17cc6c57666 | [
"Unlicense"
] | null | null | null | # Sudan function
https://en.wikipedia.org/wiki/Sudan_function
The Sudan function is an example of a function that is recursive, but not primitive recursive. The Sudan function was the first function having this property to be published. It was discovered and published in 1927 by Gabriel Sudan, a Romanian mathematician who was a student of David Hilbert.
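For reference, the usual definition (following the article linked above) is:

$$
\begin{aligned}
F_0(x, y) &= x + y,\\
F_{n+1}(x, 0) &= x, \qquad n \ge 0,\\
F_{n+1}(x, y+1) &= F_n\bigl(F_{n+1}(x, y),\; F_{n+1}(x, y) + y + 1\bigr), \qquad n \ge 0.
\end{aligned}
$$

Like the better-known Ackermann function, every $F_n$ with $n \ge 1$ eventually grows faster than any primitive recursive function.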
| 59.666667 | 293 | 0.810056 | eng_Latn | 0.999723 |
93bf19992ca32ed0ac2663eca4cd75a3e6d951c8 | 4,379 | md | Markdown | docs/standard/garbage-collection/index.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/garbage-collection/index.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard/garbage-collection/index.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Garbage Collection
ms.date: 03/30/2017
ms.technology: dotnet-standard
helpviewer_keywords:
- memory, garbage collection
- garbage collection, automatic memory management
- GC [.NET Framework]
- memory, allocating
- common language runtime, garbage collection
- garbage collector
- cleanup operations
- garbage collection
- memory, releasing
- common language runtime, automatic memory management
- automatic memory management
- runtime, automatic memory management
- runtime, garbage collection
- garbage collection, about
ms.assetid: 22b6cb97-0c80-4eeb-a2cf-5ed7655e37f9
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 0d820783b931195bf62b75ea76d7d0573289bab8
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
---
# <a name="garbage-collection"></a>Garbage Collection
The .NET garbage collector manages the allocation and release of memory for your application. Each time you create a new object, the common language runtime (CLR) allocates memory for the object from the managed heap. As long as address space is available in the managed heap, the runtime continues to allocate space for new objects. However, memory is not infinite, and eventually the garbage collector must perform a collection in order to free some memory. The garbage collector's optimizing engine determines the best time to perform a collection, based upon the allocations being made. When the garbage collector performs a collection, it checks the managed heap for objects that are no longer being used by the application and performs the necessary operations to reclaim their memory.
<a name="related_topics"></a>
## <a name="related-topics"></a>Related topics

|Title|Description|
|-----------|-----------------|
|[Fundamentals of garbage collection](../../../docs/standard/garbage-collection/fundamentals.md)|Describes how garbage collection works, how objects are allocated on the managed heap, and other core concepts.|
|[Garbage collection and performance](../../../docs/standard/garbage-collection/performance.md)|Describes the performance checks you can use to diagnose garbage collection and performance issues.|
|[Induced collections](../../../docs/standard/garbage-collection/induced.md)|Describes how to initiate a garbage collection.|
|[Latency modes](../../../docs/standard/garbage-collection/latency.md)|Describes the modes that determine the intrusiveness of garbage collection.|
|[Optimization for shared web hosting](../../../docs/standard/garbage-collection/optimization-for-shared-web-hosting.md)|Describes how to optimize garbage collection on servers that are shared by several small web sites.|
|[Garbage collection notifications](../../../docs/standard/garbage-collection/notifications.md)|Describes how to determine when a full garbage collection is approaching and when it has completed.|
|[Application domain resource monitoring](../../../docs/standard/garbage-collection/app-domain-resource-monitoring.md)|Describes how to monitor CPU and memory usage by an application domain.|
|[Weak references](../../../docs/standard/garbage-collection/weak-references.md)|Describes features that allow the garbage collector to collect an object while still allowing the application to access that object.|
## <a name="reference"></a>Reference
<xref:System.GC?displayProperty=nameWithType>
<xref:System.GCCollectionMode?displayProperty=nameWithType>
<xref:System.GCNotificationStatus?displayProperty=nameWithType>
<xref:System.Runtime.GCLatencyMode?displayProperty=nameWithType>
<xref:System.Runtime.GCSettings?displayProperty=nameWithType>
<xref:System.Runtime.GCSettings.LargeObjectHeapCompactionMode%2A?displayProperty=nameWithType>
<xref:System.Object.Finalize%2A?displayProperty=nameWithType>
<xref:System.IDisposable?displayProperty=nameWithType>
## <a name="see-also"></a>See also
 [Cleaning up unmanaged resources](../../../docs/standard/garbage-collection/unmanaged.md)
| 67.369231 | 880 | 0.791277 | deu_Latn | 0.907573 |
93bf6226aa912064680167dc4403ad3b34b62fde | 742 | md | Markdown | src/day15/README.md | brandon-fenty/data-structures-and-algorithms | 196092b6d69d51a593f6c7857381725d22ba0f9a | [
"MIT"
] | null | null | null | src/day15/README.md | brandon-fenty/data-structures-and-algorithms | 196092b6d69d51a593f6c7857381725d22ba0f9a | [
"MIT"
] | 2 | 2018-07-10T16:36:26.000Z | 2018-07-18T16:54:28.000Z | src/day15/README.md | brandon-fenty/data-structures-and-algorithms | 196092b6d69d51a593f6c7857381725d22ba0f9a | [
"MIT"
] | null | null | null | # Eeney Meeney Miney Moe
- People are standing in a circle playing eeney meeney miney moe. The counting starts at the same place everytime and the count will be the same for each round. During each round, you will move around the circle skipping each person until the specified count is reached, the person the count stops on will be removed. Repeat this process until there is only one man left standing, he is the winner.
## Challenge
- This problem can be solved using a queue; the ideal solution will enqueue and dequeue ```n``` number of times, once ```n``` is reached, the value at the front of the queue will be removed.
- The solution should be O(n) for time and O(1) for space.
## Solution
 | 61.833333 | 398 | 0.75876 | eng_Latn | 0.999855 |
93c04705f5f80948dbb1d382cc95726b54a29cda | 3,169 | md | Markdown | README.md | shengofer/ngx-breadcrumbs | 52dad64a014381ee7c5614282a47cef58a5a3d78 | [
"MIT"
] | null | null | null | README.md | shengofer/ngx-breadcrumbs | 52dad64a014381ee7c5614282a47cef58a5a3d78 | [
"MIT"
] | null | null | null | README.md | shengofer/ngx-breadcrumbs | 52dad64a014381ee7c5614282a47cef58a5a3d78 | [
"MIT"
] | null | null | null | [](https://www.npmjs.com/org/ng8-breadcrumbs)
[](https://github.com/shengofer/ngx-breadcrumbs/blob/master/LICENSE)
# ng8-breadcrumb
This component generates a breadcrumb trail, as you navigate to child routes using the @angular/router. It interprets the browser URL of a navigate request,
in the same way the component router does to match a path to a specific component, to build up a hierarchy of available parent/child routes for that destination.
So given a navigation request to a url '/comp1/comp2/comp3', a breadcrumb trail with 3 levels will be generated. Each level includes all the elements from the previous
level along with the next child. Thus the above url request will result in the following 3 levels being generated: '/comp1', '/comp1/comp2', '/comp1/comp2/comp3'.
There's a breadcrumb service that allows you to add friendly names for each of your app's available routes. The friendly name will show up in the breadcrumb trail
for each matching level; otherwise the last URL fragment is shown.
## Dependencies
Optionally uses bootstrap.css (v >3.x.x) for styling of some elements (although the component is fully functional without it and there is a flag to turn off the dependency).
## Install
Install the module via npm:
npm install ng8-breadcrumbs --save
## Usage
Import this module into your module using forRoot()
import {NgxBreadcrumbsModule} from 'ng8-breadcrumbs';
@NgModule({
imports: [NgxBreadcrumbsModule]
})
export class AppModule {
...
}
Alternatively, you can import this module into your module and manually provide its service
import {NgxBreadcrumbsModule, NgxBreadcrumbsService} from 'ng8-breadcrumbs';
@NgModule({
imports: [NgxBreadcrumbsModule],
providers: [NgxBreadcrumbsService]
})
export class AppModule {
...
}
Inject the NgxBreadcrumbsService into your component to map your routes
export class AppComponent {
constructor(private breadcrumbService: NgxBreadcrumbsService) {
}
}
Place the breadcrumb selector in your component's html where you added your router-outlet:
   <ngx-breadcrumbs [useBootstrap]="true"></ngx-breadcrumbs>
<router-outlet></router-outlet>
## Directives
`useBootstrap: boolean` to apply the bootstrap breadcrumb style. Defaulted to true.
<ngx-breadcrumbs [useBootstrap]="false"></ngx-breadcrumbs>
`prefix: string` to have a static prefix as the first breadcrumb which routes to the base root when clicked.
<ngx-breadcrumbs prefix="App Title"></ngx-breadcrumbs>
## BreadcrumbService
Add friendly names for each of your app's routes (paths). Can also specify regular expressions to match routes and assign a friendly name.
this.breadcrumbsService.store(
[
{ label: 'user', url: `../../user`, params: [] },
{ label: `settings`,url: `../settings`, params: { tab: 'global' } },
]);
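
For example, a component might rebuild the trail when it is entered (a sketch only — the breadcrumb object shape follows the `store()` call above, but the labels, URLs and params here are illustrative):

    export class UserSettingsComponent implements OnInit {
      constructor(private breadcrumbsService: NgxBreadcrumbsService) {}

      ngOnInit() {
        // Rebuild the breadcrumb trail whenever this component is entered
        this.breadcrumbsService.store([
          { label: 'Home', url: '/', params: [] },
          { label: 'User', url: '/user', params: [] },
          { label: 'Settings', url: '/user/settings', params: { tab: 'global' } }
        ]);
      }
    }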
## Build
npm install
npm build
To build a standalone bundle:
npm bundles
## Running
npm start
| 36.425287 | 173 | 0.73083 | eng_Latn | 0.975247 |
93c04f4500998eb342966ff5e4c585e25f615c11 | 2,574 | md | Markdown | articles/cosmos-db/sql-query-log.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T07:45:43.000Z | 2021-04-20T21:12:50.000Z | articles/cosmos-db/sql-query-log.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 575 | 2017-08-30T07:14:53.000Z | 2022-03-04T05:36:23.000Z | articles/cosmos-db/sql-query-log.md | fuadi-star/azure-docs.nl-nl | 0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd | [
"CC-BY-4.0",
"MIT"
] | 58 | 2017-07-06T11:58:36.000Z | 2021-11-04T12:34:58.000Z | ---
title: Azure Cosmos DB query taal aanmelden
description: Meer informatie over de SQL-functie voor logboek registratie in Azure Cosmos DB om de natuurlijke logaritme van de opgegeven numerieke expressie te retour neren
author: ginamr
ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.topic: conceptual
ms.date: 09/13/2019
ms.author: girobins
ms.custom: query-reference
ms.openlocfilehash: 44a9d5b273e13886b0674b3b2e9f5f7a75e72fcc
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/29/2021
ms.locfileid: "93338574"
---
# <a name="log-azure-cosmos-db"></a>LOG (Azure Cosmos DB)
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
Returns the natural logarithm of the specified numeric expression.
## <a name="syntax"></a>Syntax
```sql
LOG (<numeric_expr> [, <base>])
```
## <a name="arguments"></a>Arguments
*numeric_expr*
Is a numeric expression.
*base*
Optional numeric argument that sets the base for the logarithm.
## <a name="return-types"></a>Return types
Returns a numeric expression.
## <a name="remarks"></a>Remarks
By default, LOG() returns the natural logarithm. You can change the base of the logarithm to another value by using the optional base parameter.
The natural logarithm is the logarithm to the base **e**, where **e** is an irrational constant approximately equal to 2.718281828.
The natural logarithm of the exponential of a number is the number itself: LOG(EXP(n)) = n. And the exponential of the natural logarithm of a number is the number itself: EXP(LOG(n)) = n.
This system function will not utilize the index.
## <a name="examples"></a>Examples
The following example returns the logarithm of the specified value (10).
```sql
SELECT LOG(10) AS log
```
Here is the result set.
```json
[{log: 2.3025850929940459}]
```
The following example computes the exponential of the `LOG` of a number.
```sql
SELECT EXP(LOG(10)) AS expLog
```
Here is the result set.
```json
[{expLog: 10.000000000000002}]
```
## <a name="next-steps"></a>Next steps
- [Mathematical functions in Azure Cosmos DB](sql-query-mathematical-functions.md)
- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
- [Introduction to Azure Cosmos DB](introduction.md)
| 31.390244 | 216 | 0.732323 | nld_Latn | 0.990912 |
93c04ff8148c93adbb167f67f181bd20a5abf66d | 966 | md | Markdown | README.md | gridcoin-community/Gridcoin-Tasks-Backup | ee848b94e45eeae4c40cf88a03ffa1496b7a4373 | [
"MIT"
] | 27 | 2017-11-03T18:26:05.000Z | 2022-03-24T17:41:53.000Z | README.md | gridcoin-community/Gridcoin-Tasks | ee848b94e45eeae4c40cf88a03ffa1496b7a4373 | [
"MIT"
] | 215 | 2017-09-24T13:31:45.000Z | 2022-03-18T20:45:20.000Z | README.md | gridcoin-community/Gridcoin-Tasks-Backup | ee848b94e45eeae4c40cf88a03ffa1496b7a4373 | [
"MIT"
] | 2 | 2020-01-27T03:25:31.000Z | 2021-09-15T08:17:28.000Z | # Gridcoin-Tasks
The Gridcoin-Tasks repository is for long-term ideas, things relating to the larger Gridcoin community, or other issues that don't belong in any other repository.
### Steps to Propose an Idea/Issue
1) Click on [Issues](https://github.com/gridcoin-community/Gridcoin-Tasks/issues)
2) Search for your idea to make sure someone hasn't opened the same issue
3) Click on [New Issues](https://github.com/gridcoin-community/Gridcoin-Tasks/issues/new)
4) Enter an appropriate title that is fairly descriptive
5) Type an explanation of your idea that is as descriptive and clear as possible
6) Click `Submit new issue`
### Other Information
If you want to report issues (or propose new features) regarding the Gridcoin client please do so on the [Gridcoin-Research](https://github.com/gridcoin-community/Gridcoin-Research/issues/) repo.
Check out the other [gridcoin-community repos](https://github.com/gridcoin-community) for other community organized projects.
| 56.823529 | 195 | 0.791925 | eng_Latn | 0.978456 |
93c055dc40915bbe1d5da131dd0191cb99a00198 | 730 | md | Markdown | DataScienceAndBusinessAnalytics/#1 Prediction using Supervised ML/README.md | gyanprakash0221/TheSparksFoundation | af1b6ad21c7531e2e32ee82ef99f712297e2343b | [
"Apache-2.0"
] | 1 | 2021-07-12T18:34:36.000Z | 2021-07-12T18:34:36.000Z | DataScienceAndBusinessAnalytics/#1 Prediction using Supervised ML/README.md | gyanprakash0221/TheSparksFoundation | af1b6ad21c7531e2e32ee82ef99f712297e2343b | [
"Apache-2.0"
] | null | null | null | DataScienceAndBusinessAnalytics/#1 Prediction using Supervised ML/README.md | gyanprakash0221/TheSparksFoundation | af1b6ad21c7531e2e32ee82ef99f712297e2343b | [
"Apache-2.0"
] | null | null | null | ### Prediction using Supervised ML
**(Level - Beginner)**
● Predict the percentage of a student based on the number of study hours.
● This is a simple linear regression task as it involves just two variables (see the sketch after the task list below).
● You can use R, Python, SAS Enterprise Miner or any other tool.
● Data can be found at http://bit.ly/w-data
● What will the predicted score be if a student studies for 9.25 hrs/day?
● Sample Solution : https://bit.ly/2HxiGGJ
● Task submission:
1. Host the code on GitHub Repository (public). Record the code and
output in a video. Post the video on YouTube
2. Share links of code (GitHub) and video (YouTube) as a post on
YOUR LinkedIn profile, not TSF Network.
3. Submit the LinkedIn link in Task Submission Form when shared.
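
A minimal sketch of one possible solution (assuming pandas and scikit-learn are installed, and that the dataset at the link above has `Hours` and `Scores` columns):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Load the study-hours dataset
data = pd.read_csv("http://bit.ly/w-data")

# Fit a simple linear regression: Scores ~ Hours
model = LinearRegression()
model.fit(data[["Hours"]], data["Scores"])

# Predicted percentage for a student studying 9.25 hours/day
print(model.predict([[9.25]])[0])
```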
| 45.625 | 74 | 0.750685 | eng_Latn | 0.986884 |
93c07c38d1f272fa09049067b2a57bc4fa7356a0 | 825 | md | Markdown | includes/iot-hub-basic-partial.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/iot-hub-basic-partial.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/iot-hub-basic-partial.md | gschrijvers/azure-docs.nl-nl | e46af0b9c1e4bb7cb8088835a8104c5d972bfb78 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Include-bestand
description: Include-bestand
services: iot-hub
author: kgremban
ms.service: iot-hub
ms.topic: include
ms.date: 04/01/2018
ms.author: kgremban
ms.custom: include file
ms.openlocfilehash: b0b3825e5afe31f16553a5c7cacbe8cb1fb40295
ms.sourcegitcommit: 849bb1729b89d075eed579aa36395bf4d29f3bd9
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 04/28/2020
ms.locfileid: "70050307"
---
>[!NOTE]
>Some of the features mentioned in this article, such as cloud-to-device messaging, device twins, and device management, are available only in the standard tier of IoT Hub. For more information about the basic and standard IoT Hub tiers, see [How to choose the right IoT Hub tier](../articles/iot-hub/iot-hub-scaling.md).
93c1ba9db29158bbf57a4435e989a93ed450f88b | 2,548 | md | Markdown | propertygrid/features/filtering.md | attilaantal/winforms-docs | c311033085e6f770435eaa3c921edde9efcb12dd | [
"MIT"
] | null | null | null | propertygrid/features/filtering.md | attilaantal/winforms-docs | c311033085e6f770435eaa3c921edde9efcb12dd | [
"MIT"
] | null | null | null | propertygrid/features/filtering.md | attilaantal/winforms-docs | c311033085e6f770435eaa3c921edde9efcb12dd | [
"MIT"
] | null | null | null | ---
title: Filtering
page_title: Filtering | RadPropertyGrid
description: Just like the grouping and sorting functionality, filtering is possible both through the text box of the toolbar, or programmatically by populating the FilterDescriptors collection of RadPropertyGrid.
slug: winforms/propertygrid/features/filtering
tags: filtering
published: True
position: 0
previous_url: propertygrid-features-filtering
---
# Filtering
Just like the grouping and sorting functionality, filtering is possible either through the text box of the toolbar, or programmatically by populating the __FilterDescriptors__ collection of RadPropertyGrid. For the first option, just enable the toolbar by setting __ToolbarVisible__ to *true* and type the desired search string in the text box:
>caption Figure 1: RadPropertyGrid Filtering

To add filters programmatically, first make sure that the __EnableFiltering__ property is set to *true*, then define the desired __FilterDescriptor__ and add it to the control's __FilterDescriptors__ collection.

You can filter by the following criteria:

* __Name__: The property name.

* __Value__: The property value.

* __Category__: Assigned from the __Category__ attribute name.

* __FormattedValue__: The value of the property converted to string.

* __Label__: By default this is identical to the property name, unless changed by setting the __Label__ property of the item.

* __Description__: This is determined by the property's __Description__ attribute.

* __OriginalValue__: The value used when the property is initialized.
>caption Figure 2: Filter Descriptor

#### Adding a Filter Descriptor
{{source=..\SamplesCS\PropertyGrid\Features\PropertyGridFiltering.cs region=Filtering}}
{{source=..\SamplesVB\PropertyGrid\Features\PropertyGridFiltering.vb region=Filtering}}
````C#
FilterDescriptor filter = new FilterDescriptor("Name", FilterOperator.Contains, "size");
radPropertyGrid1.FilterDescriptors.Add(filter);
````
````VB.NET
Dim filter = New FilterDescriptor("Name", FilterOperator.Contains, "size")
RadPropertyGrid1.FilterDescriptors.Add(filter)
````
{{endregion}}
# See Also
* [Grouping]({%slug winforms/propertygrid/features/grouping%})
* [Sorting]({%slug winforms/propertygrid/features/sorting%})
* [Editors]({%slug winforms/propertygrid/editors/overview%})
| 39.2 | 344 | 0.784144 | eng_Latn | 0.954834 |
93c3e22fe34de248ea48eae24d8192154ee2b4be | 2,323 | md | Markdown | docs/web-service-reference/tokenissuers-soap.md | MicrosoftDocs/office-developer-exchange-docs.de-DE | 2e35110e080001522ea0fa4d4859e065d2af37f3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T18:53:40.000Z | 2022-03-28T11:50:28.000Z | docs/web-service-reference/tokenissuers-soap.md | MicrosoftDocs/office-developer-exchange-docs.de-DE | 2e35110e080001522ea0fa4d4859e065d2af37f3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-12-08T02:37:48.000Z | 2021-12-08T02:38:08.000Z | docs/web-service-reference/tokenissuers-soap.md | MicrosoftDocs/office-developer-exchange-docs.de-DE | 2e35110e080001522ea0fa4d4859e065d2af37f3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-17T20:59:32.000Z | 2020-06-17T20:59:32.000Z | ---
title: TokenIssuers (SOAP)
manager: sethgros
ms.date: 09/17/2015
ms.audience: Developer
ms.topic: reference
ms.localizationpriority: medium
ms.assetid: 26c55228-184e-4340-bd80-f86be56f3e7a
description: Die TokenIssuers-Elemente stellen die TokenIssuer (SOAP)-Auflistung dar.
ms.openlocfilehash: 68ff3ed515b346a84734596fae6fe127768b4476
ms.sourcegitcommit: 54f6cd5a704b36b76d110ee53a6d6c1c3e15f5a9
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/24/2021
ms.locfileid: "59520402"
---
# <a name="tokenissuers-soap"></a>TokenIssuers (SOAP)
The **TokenIssuers** element represents the [TokenIssuer (SOAP)](tokenissuer-soap.md) collection.
```XML
<TokenIssuers>
<TokenIssuer/>
</TokenIssuers>
```
**TokenIssuers**
## <a name="attributes-and-elements"></a>Attributes and elements
The following sections describe attributes, child elements, and parent elements.
### <a name="attributes"></a>Attributes
None
### <a name="child-elements"></a>Child elements
|**Element**|**Description**|
|:-----|:-----|
|[TokenIssuer (SOAP)](tokenissuer-soap.md) <br/> |Specifies the [Uri (SOAP)](uri-soap.md) and [Endpoint (SOAP)](endpoint-soap.md) for the security token service. <br/> |
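For example, a populated collection might look like the following (the issuer URI and endpoint values below are illustrative only):

```XML
<TokenIssuers>
   <TokenIssuer>
      <Uri>urn:federation:MicrosoftOnline</Uri>
      <Endpoint>https://login.microsoftonline.com/extSTS.srf</Endpoint>
   </TokenIssuer>
</TokenIssuers>
```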
### <a name="parent-elements"></a>Parent elements
|**Element**|**Description**|
|:-----|:-----|
|[GetFederationInformationResponse (SOAP)](getfederationinformationresponse-soap.md) <br/> |Contains the response to a [GetFederationInformation operation (SOAP)](getfederationinformation-operation-soap.md) request. <br/> |
## <a name="remarks"></a>Remarks
**TokenIssuers** represents a collection of [TokenIssuer (SOAP)](tokenissuer-soap.md) elements to be used in Autodiscover.
## <a name="element-information"></a>Element information
|||
|:-----|:-----|
|Namespace <br/> |https://schemas.microsoft.com/exchange/2010/Autodiscover <br/> |
|Schema name <br/> |Autodiscover schema <br/> |
|Validation file <br/> |Messages.xsd <br/> |
|Can be empty <br/> |True <br/> |
## <a name="see-also"></a>See also
[Autodiscover web service reference for Exchange](autodiscover-web-service-reference-for-exchange.md)
[SOAP Autodiscover XML elements for Exchange 2013](soap-autodiscover-xml-elements-for-exchange-2013.md)
| 33.666667 | 207 | 0.732673 | deu_Latn | 0.358322 |
93c559be95f59915673fb5323f4a298ddc29c915 | 1,516 | md | Markdown | controls/raddatepicker-and-radtimepicker/raddatetimepickers-keyboardsupport.md | telerik/uwp-docs | 59ded5c5b7abe2ffeee9e851575776aee5417c36 | [
"MIT",
"Unlicense"
] | 10 | 2017-02-14T07:19:09.000Z | 2021-06-11T13:28:28.000Z | controls/raddatepicker-and-radtimepicker/raddatetimepickers-keyboardsupport.md | telerik/uwp-docs | 59ded5c5b7abe2ffeee9e851575776aee5417c36 | [
"MIT",
"Unlicense"
] | 9 | 2017-06-12T14:40:04.000Z | 2020-05-20T13:23:50.000Z | controls/raddatepicker-and-radtimepicker/raddatetimepickers-keyboardsupport.md | telerik/uwp-docs | 59ded5c5b7abe2ffeee9e851575776aee5417c36 | [
"MIT",
"Unlicense"
] | 11 | 2017-10-19T16:22:40.000Z | 2021-12-02T16:11:15.000Z | ---
title: Keyboard Support
page_title: Keyboard Support
description: Check our "Keyboard Support" documentation article for RadDatePicker and RadTimePicker for UWP controls.
slug: raddatetimepickers-keyboardsupport
tags: keyboard,support
published: True
position: 7
---
# Keyboard Support
Telerik's RadDatePicker and RadTimePicker controls respond to keyboard input in much the same way you'd expect any other Windows control to respond.
## Supported Keys
Listed here are all the keyboard keys supported by the RadDatePicker and RadTimePicker controls and the actions they perform:
* **TAB key** - Focuses the inline part (picker).
* **Enter key** - Opens the Selector Popup part.
* **Tab key** or **Left/Right Arrow key** - Moves through the selector items and expands the corresponding list used for selecting a value.
* **Up/Down Arrow key** - Makes a selection within the currently expanded list.
* **Enter/Escape key** (valid only for Standard Mode) - Commits/Cancels the selection.
## Selector Part - Keyboard Input
RadDatePicker and RadTimePicker support keyboard input per expanded list in the **Inline Display Mode** and in the Inline Part of the **Standard Display Mode**.

> The input string resets to empty one second after the last character is typed. After each typed character, the expanded list is navigated instantly.
### Example
The following table shows sample string inputs and the result in an expanded Year List in RadDatePicker.
 | 43.314286 | 161 | 0.782982 | eng_Latn | 0.986343 |
93c5966db02921568f5aeec439db9bda92bc71fb | 2,864 | md | Markdown | README.md | charleshkang/Bookstore | 332c6c6a6db8a11f7cf089f7a8d45a5a41e40053 | [
"MIT"
] | null | null | null | README.md | charleshkang/Bookstore | 332c6c6a6db8a11f7cf089f7a8d45a5a41e40053 | [
"MIT"
] | null | null | null | README.md | charleshkang/Bookstore | 332c6c6a6db8a11f7cf089f7a8d45a5a41e40053 | [
"MIT"
] | null | null | null | # ProlificLibrary
View all books from the Prolific Library! The project was built using Swift 2.3 and follows the MVC design pattern. I wanted to use Swift 3, but the beta version of Xcode 8 was causing a lot of errors, so I stuck with Swift 2.3 and Xcode 7.3.1.

## Implementation
I chose MVC as the design pattern for this project because it's what I'm most used to and, through careful abstraction, it can stay very organized. I used GCD to make my data fetching asynchronous; because the fetch functions are public, they can be reused in future projects. I implemented error handling in my `BookStatus` file, and took advantage of `guard`'s early-exit feature to ensure any errors from the backend were handled.
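A simplified sketch of that pattern in Swift 2.x GCD syntax (`BooksAPI.loadAll()` is a hypothetical stand-in for the real fetch helper):

```swift
public func fetchBooks(completion: ([Book]?, NSError?) -> Void) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        // Do the slow network/JSON work off the main thread
        let (books, error) = BooksAPI.loadAll()
        dispatch_async(dispatch_get_main_queue()) {
            // Hand results back on the main queue so the UI can update safely
            completion(books, error)
        }
    }
}
```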
I chose to make the UI in Interface Builder because I like seeing the app's flow and design easily. I did not make individual storyboard files, as this was a simple 3 screen app that, in my opinion, would not be necessary.
Through my use of extensions, I've allowed my view controllers to stay organized.
I followed the Prolific Interactive style guide as closely as possible, making sure each type has its own file and assigning proper access control to type declarations and functions. I used a combination of `if let`, `guard`, and nil coalescing to safely unwrap optionals, only using `!` with IBOutlets and properties I am completely sure will have a value.
## Pods
- SwiftyJSON (handle JSON data better and more easily)
- Alamofire (to make networking simpler, and used in conjunction with SwiftyJSON)
- TextFieldEffects (to make a better UX for the input form)
## Requirements
- Xcode 7.3.1
- iOS 9.0 (because of the use of stack views)
## Installation
- Install [Cocoapods](http://guides.cocoapods.org/using/getting-started.html#installation).
- cd to the project directory and use `pod init` to create a Podfile
```shell
open Podfile
```
- Add the following to Podfile
```ruby
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '9.0'
use_frameworks!
pod 'Alamofire', '~> 3.0'
pod 'SwiftyJSON', '2.4.0'
pod 'TextFieldEffects', '1.2.0'
```
- Save and install pods
```shell
pod install
```
- Open ProlificLibrary.xcworkspace
## Features
- See all books from the Prolific Library in a table view
- Press the '+' button to add a new book in a modal view
- Press the trash button to delete all books
- Press on any book to see details, as well as check out a book
- Asynchronous handling of data fetching, so as not to block the UI from making updates
- Share the book on Twitter or Facebook using the Social framework. You will need to be signed into Facebook or Twitter for sharing to work.
- Proper error handling when input for title and author is `" "`.
## Future Improvements and Features
- Unit Tests
- Swift 3 Migration
| 48.542373 | 429 | 0.761872 | eng_Latn | 0.995185 |
93c5bc711b53e5584fa02694f294a97e1157decd | 2,129 | md | Markdown | _wiki/BioJava_CookBookItaliano_Sequence_ExtractGeneRegions.md | biojava/biojava.github.io | 32d95e1e36e7d719b62eaba6bf529e710576d1da | [
"CC-BY-3.0"
] | 3 | 2016-06-10T06:04:51.000Z | 2020-01-03T00:47:51.000Z | _wiki/BioJava_CookBookItaliano_Sequence_ExtractGeneRegions.md | biojava/biojava.github.io | 32d95e1e36e7d719b62eaba6bf529e710576d1da | [
"CC-BY-3.0"
] | 14 | 2016-03-23T04:38:32.000Z | 2020-11-10T00:36:18.000Z | _wiki/BioJava_CookBookItaliano_Sequence_ExtractGeneRegions.md | biojava/biojava.github.io | 32d95e1e36e7d719b62eaba6bf529e710576d1da | [
"CC-BY-3.0"
] | 16 | 2016-03-21T16:40:26.000Z | 2021-03-17T15:01:10.000Z | ---
title: BioJava:CookBookItaliano:Sequence:ExtractGeneRegions
permalink: wiki/BioJava%3ACookBookItaliano%3ASequence%3AExtractGeneRegions
---
How can I extract all the regions that represent special features (for example 'genes' or 'coding sequences')?
------------------------------------------------------------------------------------------------------------------------------
```java
public Sequence sequenceJustFeatures(Sequence seq, String featureName)
        throws Exception {
    Location loccollection = this.genLocationsOfFeature(seq, featureName);
    SymbolList extract = loccollection.symbols(seq);
    Sequence seqmodif = DNATools
            .createDNASequence(extract.seqString(), "New Sequence");
    return seqmodif;
}

public Sequence sequenceWithoutFeature(Sequence seq, String featureName)
        throws Exception {
    // featureName: the name of the feature which describes genes: gene or CDS
    Location loccollection = this.genLocationsOfFeature(seq, featureName); // see below
    SimpleSymbolList modif = new SimpleSymbolList(seq);
    Edit e = null;
    // Walk backwards so earlier coordinates stay valid after each edit.
    for (int i = seq.length(); i > 0; i--) { // this is slow. For a better implementation drop me an email
        if (loccollection.contains(i)) {
            e = new Edit(i, 1, SymbolList.EMPTY_LIST);
            modif.edit(e);
        }
    }
    Sequence seqmodif = DNATools.createDNASequence(modif.seqString(), "New Sequence");
    return seqmodif;
}

public Location genLocationsOfFeature(Sequence seq, String featureName)
        throws Exception {
    Location loccollection = null;
    for (Iterator i = seq.features(); i.hasNext();) {
        Feature f = (Feature) i.next();
        if (f.getType().equals(featureName)) {
            if (loccollection == null) {
                loccollection = f.getLocation();
            } else {
                loccollection = loccollection.union(f.getLocation());
            }
        }
    }
    return loccollection;
}
```
| 33.265625 | 126 | 0.585721 | eng_Latn | 0.436343 |
93c5e6f5dc5b5267911e191fa7f28f6b87c73077 | 1,312 | md | Markdown | docs/user-guide/downward-api/README.md | kfox1111/kubernetes.github.io | 5baa39970b8a17d19e64f35f685f7a436b5e1409 | [
"Apache-2.0"
] | 2 | 2020-11-03T10:43:28.000Z | 2021-07-12T18:45:19.000Z | docs/user-guide/downward-api/README.md | kfox1111/kubernetes.github.io | 5baa39970b8a17d19e64f35f685f7a436b5e1409 | [
"Apache-2.0"
] | null | null | null | docs/user-guide/downward-api/README.md | kfox1111/kubernetes.github.io | 5baa39970b8a17d19e64f35f685f7a436b5e1409 | [
"Apache-2.0"
] | 5 | 2019-03-24T08:59:53.000Z | 2020-06-02T15:02:18.000Z | Following this example, you will create a pod with a container that consumes the pod's name and
namespace using the [downward API](http://kubernetes.io/docs/user-guide/downward-api/).
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started guides](http://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the pod
Containers consume the downward API using environment variables. The downward API allows
containers to be injected with the name and namespace of the pod the container is in.
Use the [`dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
downward API.
```shell
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```
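For reference, a minimal sketch of what such a pod spec looks like is below. The image name and layout are illustrative assumptions, not necessarily the exact contents of `dapi-pod.yaml`; the downward-API `fieldRef` entries are the part that matters:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: busybox        # illustrative; any image that can run `env` works
      command: ["sh", "-c", "env"]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
  restartPolicy: Never
```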
### Examine the logs
This pod runs the `env` command in a container that consumes the downward API. You can grep
through the pod logs to see that the pod was injected with the correct values:
```shell
$ kubectl logs dapi-test-pod | grep POD_
2015-04-30T20:22:18.568024817Z MY_POD_NAME=dapi-test-pod
2015-04-30T20:22:18.568087688Z MY_POD_NAMESPACE=default
2015-04-30T20:22:18.568092435Z MY_POD_IP=10.0.1.6
``` | 41 | 108 | 0.779726 | eng_Latn | 0.993267 |
93c64be4ca9d5ac3ca665f7a19eb3ff738931e63 | 1,652 | md | Markdown | README.md | DavidQF555/MusicBot | c4694edb869a04b72a3373ffb2b02fbc0e6d9c9b | [
"Apache-2.0"
] | 2 | 2021-10-03T05:03:02.000Z | 2021-10-03T13:01:52.000Z | README.md | DavidQF555/MusicBot | c4694edb869a04b72a3373ffb2b02fbc0e6d9c9b | [
"Apache-2.0"
] | 2 | 2021-10-03T13:33:32.000Z | 2021-10-09T20:26:11.000Z | README.md | DavidQF555/MusicBot | c4694edb869a04b72a3373ffb2b02fbc0e6d9c9b | [
"Apache-2.0"
] | null | null | null | # Music Bot
This is a basic Discord music bot created using [Node.js](https://nodejs.org/) and the [Discord.js](https://discord.js.org/) library.
## Setup
1. Create a Discord application in the [Discord Developer Portal](https://discord.com/developers/applications)
2. In the *Bot* tab, click the *Add Bot* button
3. Copy the bot token and paste it as the **TOKEN** environment variable
4. Copy the client ID in the *OAuth2* tab and paste it as the **CLIENT_ID** environment variable
5. Add the bot to servers using `https://discord.com/api/oauth2/authorize?client_id=CLIENT_ID&permissions=3147776&scope=applications.commands%20bot`, replacing *CLIENT_ID* with the bot's client ID
6. Go to the [Google Cloud API Credentials Page](https://console.cloud.google.com/apis/credentials) and click *Create Credentials* and then *API Key*. If you want, you can restrict the key to only call *YouTube Data API v3*. Paste this key as the **YT_DATA_KEY** environment variable. This key is what the bot uses to search for videos on YouTube
7. Start the bot with the command `npm start` in the console
### Environment Variables
| Key | Value |
| - | - |
| TOKEN | Token of Discord bot |
| CLIENT_ID | Client ID of Discord bot |
| YT_DATA_KEY | API key for YouTube Data API v3 |
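Depending on how you host the bot, these can be supplied as ordinary shell environment variables, for example (all values are placeholders):
```
export TOKEN="your-discord-bot-token"
export CLIENT_ID="your-application-client-id"
export YT_DATA_KEY="your-youtube-data-api-key"
npm start
```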
## Usage
### Commands
| Name | Description |
| - | - |
| play | Queues a track using a query searched on YouTube |
| queue | Displays the current queue |
| skip | Skips the current track |
| clear | Clears the whole queue |
| loop | Toggles whether the queue should loop |
| remove | Removes a track from the queue based on the query |
| shuffle | Randomizes the order of the queue | | 50.060606 | 347 | 0.733051 | eng_Latn | 0.917621 |
93c76d2c1550ccb7502c9a6df1e1e8f1054d0359 | 5,073 | md | Markdown | static/src/_posts/2021-07-06-maddy-vps.md | mediocregopher/thoughts | 24e9541e02be403bd63d71979240377be2ecb122 | [
"WTFPL"
] | null | null | null | static/src/_posts/2021-07-06-maddy-vps.md | mediocregopher/thoughts | 24e9541e02be403bd63d71979240377be2ecb122 | [
"WTFPL"
] | 13 | 2020-02-25T12:44:33.000Z | 2022-02-26T12:18:26.000Z | static/src/_posts/2021-07-06-maddy-vps.md | mediocregopher/thoughts | 24e9541e02be403bd63d71979240377be2ecb122 | [
"WTFPL"
] | 2 | 2019-09-10T16:44:37.000Z | 2021-01-16T04:02:38.000Z | ---
title: >-
Setting Up maddy On A VPS
description: >-
We have delivery!
tags: tech
series: selfhost
---
In the previous post I left off with being blocked by my ISP from sending
outbound emails on port 25, effectively forcing me to set up [maddy][maddy] on a
virtual private server (VPS) somewhere else.
After some research I chose [Vultr][vultr] as my VPS of choice. They apparently
don't block you from sending outbound emails on port 25, and are in general
pretty cheap. I rented their smallest VPS server for $5/month, plus an
additional $3/month to reserve an IPv4 address (though I'm not sure I really
need that, I have dDNS set up at home and could easily get that working here as
well).
## TLS
The first major hurdle was getting TLS certs for `mydomain.com` (not the real
domain) onto my Vultr box. For the time being I've opted to effectively
copy-paste my local [LetsEncrypt][le] setup to Vultr, using certbot to
periodically update my records using DNS TXT challenges.
The downside to this is that I now require my Cloudflare API key to be present
on the Vultr box, which effectively means that if the box ever gets owned
someone will have full access to all my DNS. For now I've locked down the box as
best as I can, and will look into changing the setup in the future. There are two
ways I could go about it:
* SCP the certs from my local box to the remote every time they're renewed. This
would require setting up a new user on the remote box with very narrow
privileges. This isn't the worst thing though.
* Use a different challenge method than DNS TXT records.
But again, I'm trying to set up maddy, not LetsEncrypt, and so I needed to move
on.
## Deployment
In the previous post I talked about how I'm using nix to generate a systemd
service file which encompasses all dependencies automatically, without needing
to install anything to the global system or my nix profile.
Since that's already been set up, it's fairly trivial to use `nix-copy-closure`
to copy a service file, and _all_ of its dependencies (including configuration)
from my local box to the remote Vultr box. Simply:
```
nix-copy-closure -s <ssh host> <nix store path>
```
I whipped up some scripts around this so that I can run a single make target and
have it build the service (and all deps), do a `nix-copy-closure` to the remote
host, copy the service file into `/etc/systemd/system`, and restart the
service.
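A rough sketch of what that deploy step boils down to (the host and store path here are placeholders, not the actual values):
```
# deploy.sh (sketch)
nix-copy-closure -s deploy@remote-vps /nix/store/<hash>-maddy.service
ssh deploy@remote-vps 'sudo cp /nix/store/<hash>-maddy.service /etc/systemd/system/ \
    && sudo systemctl daemon-reload \
    && sudo systemctl restart maddy'
```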
## Changes
For the most part the maddy deployment on the remote box is the same as on the
local one. Down the road I will likely change them both significantly, so that
the remote one only deals with SMTP (no need for IMAP) and the local one will
automatically forward all submitted messages to it.
Once that's done, and the remote Vultr box is set up on my [nebula][nebula]
network, there won't be a need for the remote maddy to do any SMTP
authentication, since the submission endpoint can be made entirely private.
For now, however, I've set up maddy on the remote box's public interface with
SMTP authentication enabled, to make testing easier.
## Testing
And now, to test it! I changed the SMTP credentials in my `~/.mailrc` file as
appropriate, and let a test email rip:
```
echo 'Hello! This is a cool email' | mailx -s 'Subject' -r 'Me <[email protected]>' '[email protected]'
```
This would, ideally, send an email from my SMTP server (on my domain) to a test
gmail domain. Unfortunately, it did not do that, but instead maddy spit this out
in its log:
> maddy[1547]: queue: delivery attempt failed {"msg_id":"330a1ed9","rcpt":"[email protected]","reason":"[2001:19f0:5001:355a:5400:3ff:fe73:3d02] Our system has detected that\nthis message does not meet IPv6 sending guidelines regarding PTR\nrecords and authentication. Please review\n https://support.google.com/mail/?p=IPv6AuthError for more information\n. gn42si18496961ejc.717 - gsmtp","remote_server":"gmail-smtp-in.l.google.com.","smtp_code":550,"smtp_enchcode":"5.7.1","smtp_msg":"gmail-smtp-in.l.google.com. said: [2001:19f0:5001:355a:5400:3ff:fe73:3d02] Our system has detected that\nthis message does not meet IPv6 sending guidelines regarding PTR\nrecords and authentication. Please review\n https://support.google.com/mail/?p=IPv6AuthError for more information\n. gn42si18496961ejc.717 - gsmtp"}
Luckily Vultr makes setting up PTR records for reverse DNS fairly easy. They
even allowed me to do it on my box's IPv6 address which I'm not paying to
reserve (though I'm not sure what the long-term risks of that are... can it
change?).
Once done, I attempted to send my email again, and what do you know...

Success!
So now I can send emails. There are a few next steps from here:
* Get the VPS on my nebula network and lock it down properly.
* Fix the TLS cert situation.
* Set up the remote maddy to forward submissions to my local maddy.
* Use my sick new email!
[maddy]: https://maddy.email
[le]: https://letsencrypt.org/
[vultr]: https://www.vultr.com/
[nebula]: https://github.com/slackhq/nebula
| 43.732759 | 820 | 0.759708 | eng_Latn | 0.9962 |
93c88ca4bfa95dc5cc0c2a0785d35ae5d4bae995 | 1,089 | md | Markdown | src/binary-search-tree/README.md | RCMiron/TSDS | 4c96aa19c422f08bcabc614a73c96c1084e37fbb | [
"MIT"
] | 4 | 2018-04-22T17:09:09.000Z | 2020-05-24T09:13:00.000Z | src/binary-search-tree/README.md | RCMiron/TSDS | 4c96aa19c422f08bcabc614a73c96c1084e37fbb | [
"MIT"
] | 1 | 2019-07-19T15:22:19.000Z | 2019-07-19T15:22:19.000Z | src/binary-search-tree/README.md | RCMiron/TSDS | 4c96aa19c422f08bcabc614a73c96c1084e37fbb | [
"MIT"
] | 2 | 2018-03-06T07:03:59.000Z | 2018-03-07T06:38:54.000Z | ### BinarySearchTree
Instantiate a tree:
```typescript
const exampleNode = {idx: 1, score: 45}
const bst: BinarySearchTree = new BinarySearchTree(exampleNode);
```
This will take the type of the first node and enforce it upon all subsequent nodes. This type will be referred to as T in this readme
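A quick usage sketch based on the method table below (the values and logged output are hypothetical):
```typescript
// Compare nodes by their `score` field when inserting.
bst.setCompareField("score");
bst.insert({ idx: 2, score: 30 }); // 30 < 45, placed in the left subtree
bst.insert({ idx: 3, score: 70 }); // 70 > 45, placed in the right subtree
// Visit nodes in ascending score order; the callback receives each node.
bst.traverseDepth(function (node) {
  console.log(node);
}, "inOrder");
```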
Method | Parameters | Returns | What it does
--- | --- | --- | ---
setCompareField | value: any | void | sets the compare field for insertion; nullifies insertCondition
setInsertCondition | (value: T) => boolean | void | sets insertion rule; nullifies compareField; be sure to use the `function` keyword, arrow functions do not behave as expected when used as class methods
insert | value: T | void | inserts new node according to insertCondition, compareField or simple comparison in the case of primitives
traverseDepth | (node: BinarySearchTree<T>) => any, order: TraverseOrder | void | traverses tree according to passed order: 'inOrder', 'preOrder' or 'postOrder'; default order: 'inOrder'
traverseBreadth | (node: BinarySearchTree<T>) => any | void | traverses tree level by level | 68.0625 | 208 | 0.749311 | eng_Latn | 0.966975 |
93c8c4a5fe18a2f50efbd99164fd41e9705e94e7 | 1,511 | md | Markdown | tensorflow/g3doc/api_docs/python/functions_and_classes/tf.contrib.learn.RunConfig.md | c0g/tomserflow | f7b42f6ba58c3ff20ecd002535d2cca5d93bcf8e | [
"Apache-2.0"
] | 2 | 2016-05-25T19:30:35.000Z | 2016-05-25T20:48:08.000Z | tensorflow/g3doc/api_docs/python/functions_and_classes/tf.contrib.learn.RunConfig.md | c0g/tomserflow | f7b42f6ba58c3ff20ecd002535d2cca5d93bcf8e | [
"Apache-2.0"
] | 1 | 2016-10-19T02:43:04.000Z | 2016-10-31T14:53:06.000Z | tensorflow/g3doc/api_docs/python/functions_and_classes/tf.contrib.learn.RunConfig.md | c0g/tomserflow | f7b42f6ba58c3ff20ecd002535d2cca5d93bcf8e | [
"Apache-2.0"
] | 8 | 2016-10-23T00:50:02.000Z | 2019-04-21T11:11:57.000Z | This class specifies the specific configurations for the run.
Parameters:
tf_master: TensorFlow master. Empty string is default for local.
num_cores: Number of cores to be used. (default: 4)
verbose: Controls the verbosity, possible values:
0: the algorithm and debug information is muted.
1: trainer prints the progress.
2: log device placement is printed.
gpu_memory_fraction: Fraction of GPU memory used by the process on
each GPU uniformly on the same machine.
tf_random_seed: Random seed for TensorFlow initializers.
Setting this value allows consistency between reruns.
keep_checkpoint_max: The maximum number of recent checkpoint files to keep.
As new files are created, older files are deleted.
If None or 0, all checkpoint files are kept.
Defaults to 5 (that is, the 5 most recent checkpoint files are kept.)
keep_checkpoint_every_n_hours: Number of hours between each checkpoint
to be saved. The default value of 10,000 hours effectively disables
the feature.
Attributes:
tf_master: TensorFlow master.
tf_config: TensorFlow Session Config proto.
tf_random_seed: TensorFlow random seed.
keep_checkpoint_max: Maximum number of checkpoints to keep.
keep_checkpoint_every_n_hours: Number of hours between each checkpoint.
- - -
#### `tf.contrib.learn.RunConfig.__init__(tf_master='', num_cores=4, verbose=1, gpu_memory_fraction=1, tf_random_seed=42, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000)` {#RunConfig.__init__}
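A hypothetical construction sketch using the keyword names from the signature above (the values are arbitrary):
```python
import tensorflow as tf

# Override a few settings; everything else keeps its documented default.
config = tf.contrib.learn.RunConfig(
    num_cores=8,
    gpu_memory_fraction=0.5,
    tf_random_seed=42,
    keep_checkpoint_max=10)
```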
| 43.171429 | 204 | 0.776307 | eng_Latn | 0.980913 |
93c8cd966f228ea63b21417e8abd519bce2cdb6b | 1,007 | md | Markdown | JavaScript/0058-lengthOfLastWord.md | JianmingXia/LeetCode | 57d7ac7cee696abaec6b10439084376fbe42fa83 | [
"Apache-2.0"
] | 1 | 2020-01-11T09:05:37.000Z | 2020-01-11T09:05:37.000Z | JavaScript/0058-lengthOfLastWord.md | JianmingXia/LeetCode | 57d7ac7cee696abaec6b10439084376fbe42fa83 | [
"Apache-2.0"
] | null | null | null | JavaScript/0058-lengthOfLastWord.md | JianmingXia/LeetCode | 57d7ac7cee696abaec6b10439084376fbe42fa83 | [
"Apache-2.0"
] | null | null | null | # Length of Last Word
## Problem Description
Given a string consisting of upper/lower-case letters and space characters ' ', return the length of the last word in the string.
If the last word does not exist, return 0.
Note: a word is defined as a character sequence made up of letters only, containing no spaces.
### Example
```
Example:
Input: "Hello World"
Output: 5
```
## Approach
This is actually a very simple problem, but I got it wrong several times because of careless reading: the key phrase in the statement is "return the length of the last word". So all we have to do is find the last word and compute its length; everything else is just window dressing.
## Code
- Time complexity: O(n)
- Space complexity: O(n)
### Verbose approach
```javascript
/**
* @param {string} s
* @return {number}
*/
var lengthOfLastWord = function(s) {
if (s.length === 0) {
return 0;
}
const reverseArr = s.split("").reverse();
let index = 0;
for (; index < reverseArr.length; index++) {
if (reverseArr[index] === " ") {
continue;
} else {
break;
}
}
if(index === reverseArr.length) {
return 0;
}
const lastBlankPos = reverseArr.indexOf(" ", index);
if (lastBlankPos < 0) {
return s.length - index;
} else {
return lastBlankPos - index;
}
};
```
### Simple approach
```javascript
/**
* @param {string} s
* @return {number}
*/
var lengthOfLastWord = function(s) {
return s
.trim()
.split(" ")
.pop().length;
};
``` | 14.183099 | 85 | 0.576961 | yue_Hant | 0.528412 |
93c956ee21265b3f3514d3f285c49b4d71377259 | 128 | md | Markdown | README.md | Sejoslaw/BetterVillagers | 50b0435c2dd3fa1dece5f6c320c94221d4ab65ba | [
"Apache-2.0"
] | 2 | 2019-06-27T17:34:07.000Z | 2020-02-09T17:33:28.000Z | README.md | Sejoslaw/BetterVillagers | 50b0435c2dd3fa1dece5f6c320c94221d4ab65ba | [
"Apache-2.0"
] | 5 | 2018-07-15T05:50:35.000Z | 2020-10-14T15:37:28.000Z | README.md | Sejoslaw/BetterVillagers | 50b0435c2dd3fa1dece5f6c320c94221d4ab65ba | [
"Apache-2.0"
] | null | null | null | # Better Villager
Better Villager Minecraft Mod
Minecraft CurseForge page: http://minecraft.curseforge.com/projects/better-villagers
| 25.6 | 78 | 0.835938 | eng_Latn | 0.30557 |
93c96c1d308e1054225f7ed73708ff1bb187f173 | 606 | md | Markdown | README.md | ohbado/SetupShaderVariantCollection | e34fd1388ee5c02dd283571dba73170372f4ce9f | [
"MIT"
] | 3 | 2020-05-17T13:39:42.000Z | 2020-11-11T07:12:12.000Z | README.md | ohbado/SetupShaderVariantCollection | e34fd1388ee5c02dd283571dba73170372f4ce9f | [
"MIT"
] | null | null | null | README.md | ohbado/SetupShaderVariantCollection | e34fd1388ee5c02dd283571dba73170372f4ce9f | [
"MIT"
] | null | null | null | # SetupShaderVariantCollection
A Unity editor extension that automatically creates a Unity Shader Variant Collection from the shader_compile.csv output of ProfilerReader (https://github.com/unity3d-jp/ProfilerReader).
# Requirement
Unity 2019.2/2019.3
# Install
Install ProfilerReader (https://github.com/unity3d-jp/ProfilerReader).
Copy SetupSVCollection to the Editor folder.
# Usage
Click [Tools] -> [SetupShaderVariantCollection].
Set the existing Shader variant collection.
Set the output shader_compile.csv of ProfilerReader by [Select CSV File].
Write to the Shader variant collection with [Set Variant].
| 35.647059 | 177 | 0.818482 | yue_Hant | 0.42845 |
93cab59560114c0b8d4e6eeb0df824cadc1d486a | 1,880 | md | Markdown | scripting-docs/winscript/reference/ijsdebugbreakpoint-interface.md | viniciustavanoferreira/visualstudio-docs.pt-br | 2ec4855214a26a53888d4770ff5d6dde15dbb8a5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | scripting-docs/winscript/reference/ijsdebugbreakpoint-interface.md | viniciustavanoferreira/visualstudio-docs.pt-br | 2ec4855214a26a53888d4770ff5d6dde15dbb8a5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | scripting-docs/winscript/reference/ijsdebugbreakpoint-interface.md | viniciustavanoferreira/visualstudio-docs.pt-br | 2ec4855214a26a53888d4770ff5d6dde15dbb8a5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Interface IJsDebugBreakPoint | Microsoft Docs
ms.custom: ''
ms.date: 01/18/2017
ms.reviewer: ''
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: reference
ms.assetid: 791c8488-21e7-46be-b1b4-fe74117cf200
caps.latest.revision: 4
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: e6bb4e12f0e08baf1842d251347f35265425b999
ms.sourcegitcommit: 184e2ff0ff514fb980724fa4b51e0cda753d4c6e
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/18/2019
ms.locfileid: "72577685"
---
# <a name="ijsdebugbreakpoint-interface"></a>IJsDebugBreakPoint Interface
Represents a breakpoint.
## <a name="syntax"></a>Syntax
```cpp
IJsDebugBreakPoint : public IUnknown;
```
## <a name="members"></a>Members
### <a name="public-methods"></a>Public Methods
|Name|Description|
|----------|-----------------|
|[IJsDebugBreakPoint::Delete Method](../../winscript/reference/ijsdebugbreakpoint-delete-method.md)|Deletes the breakpoint.|
|[IJsDebugBreakPoint::Disable Method](../../winscript/reference/ijsdebugbreakpoint-disable-method.md)|Disables the breakpoint.|
|[IJsDebugBreakPoint::Enable Method](../../winscript/reference/ijsdebugbreakpoint-enable-method.md)|Enables the breakpoint.|
|[IJsDebugBreakPoint::GetDocumentPosition Method](../../winscript/reference/ijsdebugbreakpoint-getdocumentposition-method.md)|Returns the position of the statement at which the breakpoint was bound.|
|[IJsDebugBreakPoint::IsEnabled Method](../../winscript/reference/ijsdebugbreakpoint-isenabled-method.md)|Determines whether the breakpoint is enabled.|
## <a name="requirements"></a>Requirements
**Header:** jscript9diag.h
## <a name="see-also"></a>See also
[Windows Script Interfaces Reference](../../winscript/reference/windows-script-interfaces-reference.md)
93ccf493fceb421d0754a367a05631039c586f11 | 781 | md | Markdown | docs/framework/wcf/diagnostics/tracing/system-servicemodel-metadataexchangeclientsendrequest.md | proudust/docs.ja-jp | d8197f8681ef890994bcf45958e42f597a3dfc7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/system-servicemodel-metadataexchangeclientsendrequest.md | proudust/docs.ja-jp | d8197f8681ef890994bcf45958e42f597a3dfc7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/diagnostics/tracing/system-servicemodel-metadataexchangeclientsendrequest.md | proudust/docs.ja-jp | d8197f8681ef890994bcf45958e42f597a3dfc7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: System.ServiceModel.MetadataExchangeClientSendRequest
ms.date: 03/30/2017
ms.assetid: ba02fed9-331a-4aea-b5e1-fe16c7dd4ddd
ms.openlocfilehash: 9c10c1b24e632ac02e86f776b0fde2165dd92cb9
ms.sourcegitcommit: cdb295dd1db589ce5169ac9ff096f01fd0c2da9d
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/09/2020
ms.locfileid: "84595987"
---
# <a name="systemservicemodelmetadataexchangeclientsendrequest"></a>System.ServiceModel.MetadataExchangeClientSendRequest
System.ServiceModel.MetadataExchangeClientSendRequest
## <a name="description"></a>説明
MetadataExchangeClient はメタデータの要求を送信しています。
## <a name="see-also"></a>関連項目
- [トレース](index.md)
- [トレースを使用したアプリケーションのトラブルシューティング](using-tracing-to-troubleshoot-your-application.md)
- [管理と診断](../index.md)
| 33.956522 | 121 | 0.809219 | yue_Hant | 0.575039 |
93ccffbfe1cac45f5e1ef5eb06789a8f9d262005 | 1,227 | md | Markdown | README.md | kojingharang/fun_injector | 842e470d67c9bf95de8debd7d8fe50684a962003 | [
"MIT"
] | null | null | null | README.md | kojingharang/fun_injector | 842e470d67c9bf95de8debd7d8fe50684a962003 | [
"MIT"
] | null | null | null | README.md | kojingharang/fun_injector | 842e470d67c9bf95de8debd7d8fe50684a962003 | [
"MIT"
] | null | null | null | fun_injector
======
Type safe gen_server generator: It injects functions in a specified module to some empty gen_server by using parse_transform
Motivation
======
- Testing your gen_server is sometimes not easy.
- To avoid that, you might split your gen_server into a complicated logic module and a thin gen_server which calls the logic module.
- Then you might write exported functions which call gen_server:call and handle_call clauses which call corresponding funcion in the logic module, following some rules.
- .... that's buggy and really boring.
- fun_injector auto-generate those wrapper functions instead of you, Yay!
How to use
======
0. Add an entry to rebar.config in your project (like many other libraries).
{fun_injector, ".*", {git, "git://github.com/kojingharang/fun_injector.git", {branch, "master"}}}
2. Add an empty gen_server implementation.
3. Add compile option like this:
-compile([{parse_transform, fun_injector},
{fun_injector_extract_from, adder}]).
Now your gen_server have exported functions defined in adder.
See test directory for detail.
Status
======
the very alpha.
- TODOs
- Docs
- Error handling
- Output deparsed source of generated AST
| 32.289474 | 168 | 0.740016 | eng_Latn | 0.994151 |
93cd2c35a697a7ee9158fb129f6dd2f95701cb6a | 306 | md | Markdown | Readme.md | ForbesLindesay/khaos-typescript | bb6d5199ef1a4f3ae3671406f7fbdab61355e223 | [
"MIT"
] | null | null | null | Readme.md | ForbesLindesay/khaos-typescript | bb6d5199ef1a4f3ae3671406f7fbdab61355e223 | [
"MIT"
] | null | null | null | Readme.md | ForbesLindesay/khaos-typescript | bb6d5199ef1a4f3ae3671406f7fbdab61355e223 | [
"MIT"
] | null | null | null |
# khaos-forbeslindesay
A [Khaos](http://github.com/segmentio/khaos) template to start new projects quickly.
## Installation
Save the template locally with:
$ khaos install ForbesLindesay/khaos-forbeslindesay node
## Usage
Create a new project with:
$ khaos node my-project
## License
MIT
| 14.571429 | 84 | 0.738562 | eng_Latn | 0.607726 |
93cd6d2ef058bc26c34183c74460d9b4fde6d2f1 | 5,528 | markdown | Markdown | src/site_source/_posts/2013-09-20-getting-started-with-openstack-and-designate.markdown | jamiehannaford/developer.rackspace.com | 066becad0c87d4b53fe5881e9208d8fe700ec5d5 | [
"Apache-2.0"
] | null | null | null | src/site_source/_posts/2013-09-20-getting-started-with-openstack-and-designate.markdown | jamiehannaford/developer.rackspace.com | 066becad0c87d4b53fe5881e9208d8fe700ec5d5 | [
"Apache-2.0"
] | null | null | null | src/site_source/_posts/2013-09-20-getting-started-with-openstack-and-designate.markdown | jamiehannaford/developer.rackspace.com | 066becad0c87d4b53fe5881e9208d8fe700ec5d5 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: "Getting Started with OpenStack and Designate"
date: 2013-09-20 09:18
comments: true
author: Tim Simmons
published: true
categories:
- OpenStack
- Designate
- DNS
---
**Note**: This guide has been merged into the official Designate documentation. You can see that document here:
<https://designate.readthedocs.org/en/latest/getting-started.html>
A few weeks ago my team at Rackspace began investigation into the DNS as a Service application of Openstack, [Designate][1]. I’d like to share the method that my team and I formulated for getting a development environment for Designate up and running quickly. This set-up doesn’t include an OpenStack installation so there is no integration with Keystone or Nova. It’s the simplest possible installation, a great way for anyone to get started in contributing to OpenStack. Credit to the folks working on Designate for the original document.<!--More-->
**Initial Setup**
-----------------
The first thing you need is an Ubuntu Server (12.04). I recommend spinning up a [Cloud Server][2] with Rackspace. It’s relatively inexpensive and very slick. Assuming you have access to a server we can start installation.
**Installing Designate**
------------------------
**1) Install system package dependencies:**
$ apt-get install python-pip python-virtualenv
$ apt-get install rabbitmq-server
$ apt-get build-dep python-lxml
**2) Clone the Designate repo off of Stackforge:**
$ git clone https://github.com/stackforge/designate.git
$ cd designate
**3) Setup virtualenv:**
$ virtualenv --no-site-packages .venv
$ . .venv/bin/activate
**4) Install Designate and it’s dependencies**
$ pip install -r requirements.txt -r test-requirements.txt
$ python setup.py develop
**Note**: Everything from here on out should take place in or below your designate/etc folder
**5) Copy sample config files to edit yourself**
$ cd etc/designate
$ ls *.sample | while read f; do cp $f $(echo $f | sed
"s/.sample$//g"); done
**6) Install the DNS server choose between**
PowerDNS
```
$DEBIAN_FRONTEND=noninteractive apt-get install pdns-server pdns-backend-sqlite3
#Update path to SQLite database to /root/designate/powerdns.sqlite or wherever your top level designate directory resides
$ editor /etc/powerdns/pdns.d/pdns.local.gsqlite3
#Change the corresponding line in the config file to mirror:
gsqlite3-database=/root/designate/pdns.sqlite
#Restart PowerDNS:
$ service pdns restart
```
**7. If you intend to run Designate as a non-root user, then sudo permissions need to be granted:**
```
$ echo "designate ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/90-designate
$ sudo chmod 0440 /etc/sudoers.d/90-designate
```
**8. Make the directory for Designate’s log files:**
```
$ mkdir /var/log/designate
```
**Configure Designate**
------------------------
```
$ editor designate.conf
```
Copy or mirror the configuration from this sample file [here][3].
**Start the Central Services**
------------------------
```
#Initialize and sync the Designate database:
$ designate-manage database-init
$ designate-manage database-sync
#Initialize and sync the PowerDNS database:
$ designate-manage powerdns database-init
$ designate-manage powerdns database-sync
#Restart PowerDNS or bind9
$ service pdns restart
#Start the central service:
$ designate-central
```
**NOTE**: If you get an error of the form: ERROR [designate.openstack.common.rpc.common] AMQP server on localhost:5672 is unreachable: Socket closed
Run the following command:
```
$ rabbitmqctl change_password guest guest
#Then try starting the service again
$ designate-central
```
You’ll now be seeing the log from the central service.
**Start the API Service**
------------------------
Open up a new ssh window and log in to your server (or however you’re communicating with your server).
```
$ cd root/designate
#Make sure your virtualenv is sourced
$ . .venv/bin/activate
$ cd etc/designate
#Start the API Service
$ designate-api
#You may have to run root/designate/bin/designate-api```
```
You’ll now be seeing the log from the API service.
**Exercising the API**
------------------------
Calls to the Designate API can be made using the following format:
http://IP.Address:9001/v1/command
In a web browser, curl statement, Rest client where “command” is any of the commands listed in the Designate Documentation
You can find the IP Address of your server by running
```
wget http://ipecho.net/plain -O - -q ; echo
```
If you'd like to see an instance in action, go here: <http://162.209.9.99:9001/v1/>
A couple of notes on the API:
* Before domains are created, you must create a server.
* You can read the ReST API Documentation [here][4]
Happy Designating! If you would like to contribute to Designate, come and [join us][5].
###About the Author
Tim Simmons is a Rackspace intern on the Cloud DNS team. Recently, the
team evaluated the OpenStack DNSaaS solution, Designate. Tim took an
active role in the investigation; he wrote a "Getting Started" guide
which is published above. He also wrote a guide on using Designate, which will be
published here next week. Tim continues to play en essential role in our
next generation DNS offering.
[1]: https://wiki.openstack.org/wiki/Designate
[2]: http://www.rackspace.com/cloud/servers/
[3]: https://gist.github.com/TimSimmons/6596014
[4]: https://designate.readthedocs.org/en/latest/rest.html
[5]: https://designate.readthedocs.org/en/latest/getting-involved.html | 31.05618 | 551 | 0.72992 | eng_Latn | 0.964186 |
93ce2b7673bbf30134230bc71c045c74e3543a0c | 6,132 | md | Markdown | _posts/Algorithm/Problem_Solving/2020-12-10-problem_solving_21.md | cjlee38/cjlee38.github.io | 0e24e8c76405b70cc7ba036dfd4d94883c79a421 | [
"MIT"
] | null | null | null | _posts/Algorithm/Problem_Solving/2020-12-10-problem_solving_21.md | cjlee38/cjlee38.github.io | 0e24e8c76405b70cc7ba036dfd4d94883c79a421 | [
"MIT"
] | null | null | null | _posts/Algorithm/Problem_Solving/2020-12-10-problem_solving_21.md | cjlee38/cjlee38.github.io | 0e24e8c76405b70cc7ba036dfd4d94883c79a421 | [
"MIT"
] | null | null | null | ---
layout: post
title: "# 알고스팟 [ID:PICNIC] 소풍 ( Java )"
date: 2020-12-10 13:39:00 +0900
categories: problem-solving
tags: programmers
author: cjlee
cover: /assets/covers/coding.png
---
[문제 링크](https://algospot.com/judge/problem/read/PICNIC)
# Problem
**문제**
안드로메다 유치원 익스프레스반에서는 다음 주에 율동공원으로 소풍을 갑니다. 원석 선생님은 소풍 때 학생들을 두 명씩 짝을 지어 행동하게 하려고 합니다. 그런데 서로 친구가 아닌 학생들끼리 짝을 지어 주면 서로 싸우거나 같이 돌아다니지 않기 때문에, 항상 서로 친구인 학생들끼리만 짝을 지어 줘야 합니다.
각 학생들의 쌍에 대해 이들이 서로 친구인지 여부가 주어질 때, 학생들을 짝지어줄 수 있는 방법의 수를 계산하는 프로그램을 작성하세요. 짝이 되는 학생들이 일부만 다르더라도 다른 방법이라고 봅니다. 예를 들어 다음 두 가지 방법은 서로 다른 방법입니다.
(태연,제시카) (써니,티파니) (효연,유리)
(태연,제시카) (써니,유리) (효연,티파니)
**입력**
입력의 첫 줄에는 테스트 케이스의 수 C (C <= 50) 가 주어집니다. 각 테스트 케이스의 첫 줄에는 학생의 수 n (2 <= n <= 10) 과 친구 쌍의 수 m (0 <= m <= n*(n-1)/2) 이 주어집니다. 그 다음 줄에 m 개의 정수 쌍으로 서로 친구인 두 학생의 번호가 주어집니다. 번호는 모두 0 부터 n-1 사이의 정수이고, 같은 쌍은 입력에 두 번 주어지지 않습니다. 학생들의 수는 짝수입니다.
**출력**
각 테스트 케이스마다 한 줄에 모든 학생을 친구끼리만 짝지어줄 수 있는 방법의 수를 출력합니다.
**예제 입력**
3
2 1
0 1
4 6
0 1 1 2 2 3 3 0 0 2 1 3
6 10
0 1 0 2 1 2 1 3 1 4 2 3 2 4 3 4 3 5 4 5
**예제 출력**
1
3
4
# Solve
: 책의 카테고리도 그렇지만, 문제 자체의 내용도 읽어보면, 완전탐색을 통해서 구할 수 있음을 알 수 있다.
즉, 아직 짝이 맺어지지 않은 학생을 두 명 (A, B) 고른 다음에, 그 둘이 친구인지 확인해보고, 맞다면 짝을 맺어주는 방식을 계속 반복하면 된다.
따라서, 자연스럽게 재귀함수와, 짝이 맺어졌는지 확인하는 boolean 배열(대개 visited 배열로 표현되는) 이 필요하단 것을 머릿속에 떠올릴 수 있다.
```java
private int countPairings(boolean[] paired) {
int target = findTarget(paired);
if (target == -1) return 1;
int ret = 0;
for (int i = target + 1; i < n; i++) {
if (!paired[i] && areFriends[target][i]) {
paired[target] = paired[i] = true;
ret += countPairings(paired);
paired[target] = paired[i] = false;
}
}
return ret;
}
```
위 함수는 paried 라는 boolean 배열을 받아서, 짝을 찾아서 맺어주는 역할을 하는 재귀함수이다. 그런데 여기서 눈여겨보아야 할 것이, 바로 `target` 변수다. `target` 변수는 **1.** 아직 짝을 구하지 못한 학생 A 를 나타내기도 하지만, **2.** 동시에 재귀 함수의 호출을 종료하는 Base case이기도 하며, **3.** 또한 **짝의 쌍을 중복으로 찾지 않도록 해주는 역할** 까지 해준다.
이게 무슨 말일까? 우선, `target` 변수를 할당해주는 `findTarget()` 함수를 살펴보자.
```java
public int findTarget(boolean[] paired) {
int target = -1;
for (int i = 0; i < n; ++i) {
if (!paired[i]) {
target = i;
break;
}
}
return target;
}
```
특별할 것 없이, target을 -1로 설정해놓은 뒤에 for 문을 돌면서, false인, 즉 다시 말해 아직 짝이 맺어지지 않은 학생을 발견하면, 해당 학생을 target으로 return 해주는 함수이다. 따라서, 1번에서 언급한 아직 짝을 구하지 못한 학생 A를 나타낸다.
만약 짝이 맺어지지 않은 학생을 발견하지 못한다면, 이는 모든 학생이 짝이 맺어졌다는 뜻이므로 target은 -1 로 return 될 것이고, 따라서 `if (target == -1) return 1;` 라는 Base case의 조건에 부합하게 된다.
다시 `countPairings()` 로 돌아가서, for 문을 살펴보자.
`for (int i = target + 1; i < n; i++)`
짝이 맺어지지 않은 학생 B(코드 상 i)를 찾는 for문이, target + 1 부터 시작한다. 0부터 시작하지 않은 이유가 바로 3번의 **짝의 쌍을 중복으로 찾지 않기 위함** 이다.
잠깐 다른 이야기를 하자면, 수학의 순열과 조합을 떠올려보자.
순열은 "서로 다른 n 개의 원소를 가진 집합에서, r 개를 **순서를 따져서** 나열" 한다.
반면 조합은 "서로 다른 n 개의 원소를 가진 집합에서, r 개를 **순서를 따지지 않고** 나열" 한다.
[1, 2, 3] 이라는 배열에서,
3P2 는 (1, 2), (2, 1), (1, 3), (3, 1), (2, 3), (3, 2) 이 되고,
3C2 는 (1, 2), (1, 3), (2, 3) 이 된다.
이쯤되면 아마 눈치를 챘을텐데, 우리가 구하고자 하는 것은 **조합** 이지, 순열이 아니다. 즉, (1, 2)와 (2, 1)은 같은 취급을 한다.
다시 코드로 돌아가서, 만약 for 문이 target + 1 이 아닌, 0 부터 시작했다고 해보자.
0번 학생과 5번 학생이 짝이라고 하고, 재귀 함수를 호출하는 과정 속에서 5번이 짝이 맺어지지 않아 A 로 선정되었다고 할 때, 0 번 학생은 B로 선정되어 그 둘을 짝으로 맺어줄 것이다. 이는 우리가 원하는 결과가 아니다. 왜냐면, 그 전에 이미 A 가 0 이었을 때 B 가 5 임을 발견해서, 짝을 맺어주었을 것이기 때문이다.
즉, 정리하자면, (5, 0) 이라는 쌍을 생성하게 되는 꼴이고, 이는 조합으로 고려하겠다는 의미가 아닌, 순열로 고려하겠다는 의미이므로, 중복된 쌍을 만들게 된다. 이는 우리가 원하는 것이 아니다.
---
핵심이 되는 target 을 이해하고 나면, 나머지 코드는 쉽게 따라갈 수 있다. 전체 코드는 다음과 같다.
```java
package algospot;
/*
3
2 1
0 1
4 6
0 1 1 2 2 3 3 0 0 2 1 3
6 10
0 1 0 2 1 2 1 3 1 4 2 3 2 4 3 4 3 5 4 5
*/
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.*;
// 소풍
public class Picnic {
public static void main(String[] args) throws IOException {
List<Solution> problems = new ArrayList<>();
BufferedReader br = new BufferedReader(new InputStreamReader((System.in)));
StringTokenizer st = new StringTokenizer(br.readLine());
int c = Integer.parseInt(st.nextToken());
for (int i = 0; i < c; ++i) {
st = new StringTokenizer(br.readLine());
int n = Integer.parseInt(st.nextToken());
int m = Integer.parseInt(st.nextToken());
st = new StringTokenizer(br.readLine());
boolean[][] areFriends = new boolean[n][n];
for (int j = 0; j < m; j++) {
int a = Integer.parseInt(st.nextToken());
int b = Integer.parseInt(st.nextToken());
areFriends[a][b] = true;
areFriends[b][a] = true;
}
problems.add(new Solution(n, m, areFriends));
}
for (Solution problem : problems) {
int answer = problem.run();
System.out.println(answer);
}
}
static class Solution {
private int n;
private int m;
private boolean[][] areFriends;
public Solution(int n, int m, boolean[][] areFriends) {
this.n = n;
this.m = m;
this.areFriends = areFriends;
}
public int run() {
boolean[] paired = new boolean[n];
int ret = countPairings(paired);
return ret;
}
private int countPairings(boolean[] paired) {
int target = findTarget(paired);
if (target == -1) return 1;
int ret = 0;
for (int i = target + 1; i < n; i++) {
if (!paired[i] && areFriends[target][i]) {
paired[target] = paired[i] = true;
ret += countPairings(paired);
paired[target] = paired[i] = false;
}
}
return ret;
}
public int findTarget(boolean[] paired) {
int target = -1;
for (int i = 0; i < n; ++i) {
if (!paired[i]) {
target = i;
break;
}
}
return target;
}
}
}
``` | 27.013216 | 239 | 0.550228 | kor_Hang | 1.00001 |
93cfa96920a2e160c4add9713f059ac46580afd3 | 6,031 | md | Markdown | docs/dev/connectors/pubsub.zh.md | sbairos/flink | 0799b5c20a127110e47439668cf8f8db2e4ecbf3 | [
"Apache-2.0"
] | 3 | 2019-11-07T03:21:06.000Z | 2020-07-29T07:04:02.000Z | docs/dev/connectors/pubsub.zh.md | sbairos/flink | 0799b5c20a127110e47439668cf8f8db2e4ecbf3 | [
"Apache-2.0"
] | 3 | 2021-03-30T11:55:12.000Z | 2021-12-14T21:56:08.000Z | docs/dev/connectors/pubsub.zh.md | sbairos/flink | 0799b5c20a127110e47439668cf8f8db2e4ecbf3 | [
"Apache-2.0"
] | 2 | 2019-07-04T19:47:56.000Z | 2021-09-12T14:20:10.000Z | ---
title: "Google Cloud PubSub"
nav-title: Google Cloud PubSub
nav-parent_id: connectors
nav-pos: 7
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
这个连接器可向 [Google Cloud PubSub](https://cloud.google.com/pubsub) 读取与写入数据。添加下面的依赖来使用此连接器:
{% highlight xml %}
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-gcp-pubsub{{ site.scala_version_suffix }}</artifactId>
<version>{{ site.version }}</version>
</dependency>
{% endhighlight %}
<p style="border-radius: 5px; padding: 5px" class="bg-danger">
<b>注意</b>:此连接器最近才加到 Flink 里,还未接受广泛测试。
</p>
注意连接器目前还不是二进制发行版的一部分,添加依赖、打包配置以及集群运行信息请参考[这里]({{ site.baseurl }}/zh/getting-started/project-setup/dependencies.html)
## Consuming or Producing PubSubMessages
连接器可以接收和发送 Google PubSub 的信息。和 Google PubSub 一样,这个连接器能够保证`至少一次`的语义。
### PubSub SourceFunction
`PubSubSource` 类的对象由构建类来构建: `PubSubSource.newBuilder(...)`
有多种可选的方法来创建 PubSubSource,但最低要求是要提供 Google Project、Pubsub 订阅和反序列化 PubSubMessages 的方法。
Example:
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
DeserializationSchema<SomeObject> deserializer = (...);
SourceFunction<SomeObject> pubsubSource = PubSubSource.newBuilder()
.withDeserializationSchema(deserializer)
.withProjectName("project")
.withSubscriptionName("subscription")
.build();
streamExecEnv.addSource(source);
{% endhighlight %}
</div>
</div>
当前还不支持 PubSub 的 source functions [pulls](https://cloud.google.com/pubsub/docs/pull) messages 和 [push endpoints](https://cloud.google.com/pubsub/docs/push)。
### PubSub Sink
`PubSubSink` 类的对象由构建类来构建: `PubSubSink.newBuilder(...)`
构建类的使用方式与 PubSubSource 类似。
Example:
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
DataStream<SomeObject> dataStream = (...);
SerializationSchema<SomeObject> serializationSchema = (...);
SinkFunction<SomeObject> pubsubSink = PubSubSink.newBuilder()
.withSerializationSchema(serializationSchema)
.withProjectName("project")
.withSubscriptionName("subscription")
.build()
dataStream.addSink(pubsubSink);
{% endhighlight %}
</div>
</div>
### Google Credentials
应用程序需要使用 [Credentials](https://cloud.google.com/docs/authentication/production) 来通过认证和授权才能使用 Google Cloud Platform 的资源,例如 PubSub。
上述的两个构建类都允许你提供 Credentials, 但是连接器默认会通过环境变量: [GOOGLE_APPLICATION_CREDENTIALS](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) 来获取 Credentials 的路径。
如果你想手动提供 Credentials,例如你想从外部系统读取 Credentials,你可以使用 `PubSubSource.newBuilder(...).withCredentials(...)`。
### 集成测试
在集成测试的时候,如果你不想直接连 PubSub 而是想读取和写入一个 docker container,可以参照 [PubSub testing locally](https://cloud.google.com/pubsub/docs/emulator)。
下面的例子展示了如何使用 source 来从仿真器读取信息并发送回去:
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
String hostAndPort = "localhost:1234";
DeserializationSchema<SomeObject> deserializationSchema = (...);
SourceFunction<SomeObject> pubsubSource = PubSubSource.newBuilder()
.withDeserializationSchema(deserializationSchema)
.withProjectName("my-fake-project")
.withSubscriptionName("subscription")
.withPubSubSubscriberFactory(new PubSubSubscriberFactoryForEmulator(hostAndPort, "my-fake-project", "subscription", 10, Duration.ofSeconds(15), 100))
.build();
SerializationSchema<SomeObject> serializationSchema = (...);
SinkFunction<SomeObject> pubsubSink = PubSubSink.newBuilder()
.withSerializationSchema(serializationSchema)
.withProjectName("my-fake-project")
.withSubscriptionName("subscription")
.withHostAndPortForEmulator(hostAndPort)
.build()
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.addSource(pubsubSource)
.addSink(pubsubSink);
{% endhighlight %}
</div>
</div>
### 至少一次语义保证
#### SourceFunction
有很多原因导致会一个信息会被多次发出,例如 Google PubSub 的故障。
另一个可能的原因是超过了确认的截止时间,即收到与确认信息之间的时间间隔。PubSubSource 只有在信息被成功快照之后才会确认以保证至少一次的语义。这意味着,如果你的快照间隔大于信息确认的截止时间,那么你订阅的信息很有可能会被多次处理。
因此,我们建议把快照的间隔设置得比信息确认截止时间更短。
参照 [PubSub](https://cloud.google.com/pubsub/docs/subscriber) 来增加信息确认截止时间。
注意: `PubSubMessagesProcessedNotAcked` 显示了有多少信息正在等待下一个 checkpoint 还没被确认。
#### SinkFunction
Sink function 会把准备发到 PubSub 的信息短暂地缓存以提高性能。每次 checkpoint 前,它会刷新缓冲区,并且只有当所有信息成功发送到 PubSub 之后,checkpoint 才会成功完成。
{% top %}
| 38.414013 | 215 | 0.670701 | yue_Hant | 0.327442 |
93cffbe2a090d080b19881400fb6f0c7bf244f07 | 939 | md | Markdown | CHANGELOG.md | HalZhan/easy-vuex | 79c3972891bfc1539a5cf34ddb594d074e45d1ae | [
"MIT"
] | 2 | 2018-12-20T07:56:24.000Z | 2018-12-22T04:31:37.000Z | CHANGELOG.md | HalZhan/kiss-vuex | 79c3972891bfc1539a5cf34ddb594d074e45d1ae | [
"MIT"
] | null | null | null | CHANGELOG.md | HalZhan/kiss-vuex | 79c3972891bfc1539a5cf34ddb594d074e45d1ae | [
"MIT"
] | null | null | null | # Change Log
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
<a name="0.1.3"></a>
## [0.1.3](https://github.com/HalZhan/kiss-vuex/compare/v0.1.2...v0.1.3) (2018-12-20)
### Bug Fixes
* **pkg:** remove peer dependencies ([c9b268a](https://github.com/HalZhan/kiss-vuex/commit/c9b268a))
<a name="0.1.2"></a>
## [0.1.2](https://github.com/HalZhan/kiss-vuex/compare/v0.1.1...v0.1.2) (2018-12-19)
<a name="0.1.1"></a>
## [0.1.1](https://github.com/HalZhan/kiss-vuex/compare/v0.1.0...v0.1.1) (2018-12-19)
<a name="0.1.0"></a>
# 0.1.0 (2018-12-19)
### Features
* **store:** add support for store(decorator and function) ([082208b](https://github.com/HalZhan/kiss-vuex/commit/082208b))
* **Store:** simple way to change the object property ([da71e3c](https://github.com/HalZhan/kiss-vuex/commit/da71e3c))
| 28.454545 | 174 | 0.675186 | yue_Hant | 0.231156 |
93d0daf0922f670df9efc06aa76b7a7823f2297a | 164 | md | Markdown | README.md | josephriosIO/blog | 79d3ecbcfdf09602391e7b23176b17355611a93f | [
"MIT"
] | null | null | null | README.md | josephriosIO/blog | 79d3ecbcfdf09602391e7b23176b17355611a93f | [
"MIT"
] | 10 | 2021-03-01T21:08:15.000Z | 2022-02-27T01:26:14.000Z | README.md | josephriosIO/blog | 79d3ecbcfdf09602391e7b23176b17355611a93f | [
"MIT"
] | null | null | null | # gatsby-starter-point
A humble starter for personal blog. :shipit:
## Basic Guide
Install dependencies
```
npm install
```
when it's done
```
gatsby develop
``` | 11.714286 | 44 | 0.707317 | eng_Latn | 0.843683 |
93d0f3228b669696deab8892d569c705e5c46ca9 | 470 | md | Markdown | notes/0.44.0.md | mikegirkin/guardrail | 101bc2f2d9489b95aa1455676382dd74de2421a5 | [
"MIT"
] | 395 | 2018-03-23T05:11:48.000Z | 2021-02-25T18:06:46.000Z | notes/0.44.0.md | mikegirkin/guardrail | 101bc2f2d9489b95aa1455676382dd74de2421a5 | [
"MIT"
] | 651 | 2018-03-23T20:40:11.000Z | 2021-02-24T23:58:34.000Z | notes/0.44.0.md | mikegirkin/guardrail | 101bc2f2d9489b95aa1455676382dd74de2421a5 | [
"MIT"
] | 106 | 2018-03-23T07:30:05.000Z | 2021-02-16T16:05:35.000Z | Adding initial OpenAPI 3.0 support, Scala 2.11 file upload fix, adding missing import in http4s
===
Included issues:
- guardrail-dev/guardrail#181 Adding missing import for "ci" strings
- guardrail-dev/guardrail#185 Fixing compilation for file uploads in 2.11
- guardrail-dev/guardrail#177 Initial OpenAPI v3 support (Models and parser only, no new features)
Contributors:
- @blast-hardcheese
- fthomas/scala-steward
- @stanislav-chetvertkov
- @countfloyd
- @andreaTP
| 31.333333 | 98 | 0.785106 | eng_Latn | 0.65911 |
93d1c4528bc75078af60da06c228fa9b3434e3bb | 911 | md | Markdown | tools/rsdl/syntax-highlighting/how-to.md | oasis-open/odata-rapid | b85bf3acb6f43fe4e4f9bc03976b6bed592feebf | [
"Apache-2.0"
] | 13 | 2020-06-18T21:36:06.000Z | 2021-11-27T06:16:18.000Z | tools/rsdl/syntax-highlighting/how-to.md | oasis-open/odata-rapid | b85bf3acb6f43fe4e4f9bc03976b6bed592feebf | [
"Apache-2.0"
] | 144 | 2020-06-18T20:40:19.000Z | 2022-03-24T15:18:10.000Z | tools/rsdl/syntax-highlighting/how-to.md | oasis-open/odata-rapid | b85bf3acb6f43fe4e4f9bc03976b6bed592feebf | [
"Apache-2.0"
] | 7 | 2020-06-20T06:30:19.000Z | 2022-02-18T18:32:58.000Z | # Install extension locally
```ps1
Copy-Item -Path . -Destination #env:userprofile\.vscode\extensions\oasis-open.rsdl-0.0.1 -Recurse -force
```
# Package VSIX
Once: install packaging tool:
```sh
npm install -g vsce
```
Then in this folder:
```sh
vsce package
```
# Install from VSIX
- Switch to Extensions view
- Click on `...` (Views and More Actions...)
- Select menu item "Install from VSIX..."
- Select file `rsdl-i.j.k.vsix`, click `Install`
# Sources
- https://www.sublimetext.com/docs/scope_naming.html
- https://macromates.com/manual/en/regular_expressions
- https://www.apeth.com/nonblog/stories/textmatebundle.html
- https://github.com/microsoft/vscode-textmate/blob/main/test-cases/themes/syntaxes/JavaScript.tmLanguage.json
- https://benparizek.com/notebook/notes-on-how-to-create-a-language-grammar-and-custom-theme-for-a-textmate-bundle
- https://www.regular-expressions.info/index.html
| 25.305556 | 114 | 0.744237 | yue_Hant | 0.564951 |
93d22e75fb198de673f780d2a8c70f48988e5507 | 4,784 | md | Markdown | docs/miRNA-seq/miRSeq-Tools-and-Versions.md | wong-nw/pipeliner-docs | b7b72bca534c5de01e8af489bc31ebb03db714b7 | [
"MIT"
] | 3 | 2021-02-19T18:47:36.000Z | 2021-06-29T22:51:25.000Z | docs/miRNA-seq/miRSeq-Tools-and-Versions.md | wong-nw/pipeliner-docs | b7b72bca534c5de01e8af489bc31ebb03db714b7 | [
"MIT"
] | 3 | 2021-02-05T16:05:30.000Z | 2021-02-19T18:58:33.000Z | docs/miRNA-seq/miRSeq-Tools-and-Versions.md | wong-nw/pipeliner-docs | b7b72bca534c5de01e8af489bc31ebb03db714b7 | [
"MIT"
] | 2 | 2021-02-05T15:22:18.000Z | 2021-02-09T19:39:10.000Z | ## Reference Genomes
|Name|Species|Common name|Annotation Version|
|----|-------|-----------|------------------|
|hg38|_Homo sapiens_|Human|Genome Reference Consortium Human Build 38|
|mm10|_Mus musculus_|House mouse|Genome Reference Consortium Mouse Build 38
## Software
### Quality control and pre-processing
|Package/Tool|Version|Usage|Reference|
|------------|-------|-----|---------|
|FastQC|0.11.5|Preliminary quality control on FASTQ reads before and after adapter trimming|[1](#ref1)|
|Cutadapt|1.18|Adapter sequence trimming|[2](#ref2)|
|Kraken|1.1|Assesses microbial contamination|[3](#ref3)|
|KronaTools|2.7|Visualizes results from Kraken|[4](#ref4)
|FastQ Screen|0.9.3|Assesses sequence contamination|[5](#ref5)|
|MultiQC|1.4|QC report aggregation and generation|[6](#ref6)|
|FASTX-Toolkit|0.0.14|FASTA and FASTQ conversion kit|[7](#ref7)|
### Alignment and differential expression
|Package/Tool|Version|Usage|Reference|
|------------|-------|-----|---------|
|miRDeep2|2.0.0.8|Backbone for miRSeq alignment and quantification. Dependencies include `Bowtie1`, `ViennaRNA`, `RandFold`, and `Perl`|[8](#ref8), [9](#ref9), [10](#ref10), [11](#ref11), [12](#ref12)|
|edgeR|3.28.1|Normalization and differential miR expression analysis|[13](#ref13), [14](#ref14)|
|R|3.6.1|Programming language used to run edgeR|[15](#ref15)|
## References
<a name=ref1><sup>1</sup></a> Andrews, S. 2018. "FastQC: A quality control tool for high throughput sequence data." http://www.bioinformatics.babraham.ac.uk/projects/fastqc/. Version 0.11.5
<a name=ref2><sup>2</sup></a> Martin, M. 2011. "Cutadapt removes adapter sequences from high-throughput sequencing reads." _EMBnet.journal_ 17(1): 10-12. [doi: 10.14806/ej.17.1.200](https://doi.org/10.14806/ej.17.1.200)
<a name=ref3><sup>3</sup></a> Wood and Salzberg. 2014. "Kraken: ultrafast metagenomic sequence classification using exact alignments." _Genome Biol._ 15: R46. [doi: 10.1186/gb-2014-15-3-r46](https://doi.org/10.1186/gb-2014-15-3-r46)
<a name=ref4><sup>4</sup></a> Ondov, Bergman, and Phillippy. 2011. "Interactive metagenomic visualization in a Web browser." _BMC Bioinformatics_ 12: 385. [doi: 10.1186/1471-2105-12-385](https://doi.org/10.1186/1471-2105-12-385)
<a name=ref5><sup>5</sup></a> Wingett and Andrews. 2018. "FastQ Screen: A tool for multi-genome mapping and quality control." _F1000Res._ 7: 1338. [doi: 10.12688/f1000research.15931.2](https://doi.org/10.12688/f1000research.15931.2)
<a name=ref6><sup>6</sup></a> Ewels, Magnusson, _et al._ 2016. "MultiQC: Summarize analysis results for multiple tools and samples in a single report." _Bioinformatics_ 32(19): 3047-3048. [doi: 10.1093/bioinformatics/btw354](https://doi.org/10.1093/bioinformatics/btw354)
<a name=ref7><sup>7</sup></a> Gordon, A. 2010. "FASTX-Toolkit: FASTQ/A short-reads pre-processing tools." http://hannonlab.cshl.edu/fastx_toolkit/index.html
<a name=ref8><sup>8</sup></a> Friedlander, Mackowiak, _et al._ 2012. "miRDeep2 accurately identifies known and hundreds of novel microRNA genes in seven animal clades." _Nucleic Acids Res._ 40(1): 37-52. [doi: 10.1093/nar/gkr688](https://doi.org/10.1093/nar/gkr688)
<a name=ref9><sup>9</sup></a> Langmead, Trapnell, _et al._ 2009. "Ultrafast and memory-efficient alignment of short DNA sequences to the human genome." _Genome Biol._ 10: R25. [doi: 10.1186/gb-2009-10-3-r25](https://doi.org/10.1186/gb-2009-10-3-r25)
<a name=ref10><sup>10</sup></a> Lorenz, Bernhart, _et al._ 2011. "ViennaRNA Package 2.0." _Algorithms Mol Biol._ 6: 26. [doi: 10.1186/1748-7188-6-26](https://doi.org/10.1186/1748-7188-6-26)
<a name=ref11><sup>11</sup></a> Bonnet, Wuyts, _et al._ 2004. "Evidence that microRNA precursors, unlike other non-coding RNAs, have lower folding free energies than random sequences." _Bioinformatics_ 20(17): 2911-2917. [doi: 10.1093/bioinformatics/bth374](https://doi.org/10.1093/bioinformatics/bth374)
<a name=ref12><sup>12</sup></a> "perl - The Perl 5 language interpreter." 2017. https://www.perl.org. Version 5.24.3.
<a name=ref13><sup>13</sup></a> Robinson, McCarthy, and Smyth. 2010. "edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. _Bioinformatics_ 26: 139-140. [doi: 10.1093/bioinformatics/btp616](https://doi.org/10.1093/bioinformatics/btp616)
<a name=ref14><sup>14</sup></a> McCarthy, Chen, and Smyth. 2012. "Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation." _Nucleic Acids Res._ 40(10): 4288-4297. [doi: 10.1093/nar/gks042](https://doi.org/10.1093/nar/gks042)
<a name=ref15><sup>15</sup></a> "The R Project for Statistical Computing." https://www.r-project.org/. [Version 3.6.1](https://cran.r-project.org/src/base/R-3/R-3.6.1.tar.gz), released 2019-07-05.
| 85.428571 | 304 | 0.720945 | eng_Latn | 0.259337 |
93d2355f878bad42a08d8a95dbc7973185a8ce35 | 5,890 | md | Markdown | afl/README.md | dx1u0/dockerized_fuzzing | 2a6878cc2dd2fc5e3b620a7cf46c05a3e47684c3 | [
"MIT"
] | 106 | 2020-02-07T03:29:40.000Z | 2022-03-27T14:51:55.000Z | afl/README.md | dx1u0/dockerized_fuzzing | 2a6878cc2dd2fc5e3b620a7cf46c05a3e47684c3 | [
"MIT"
] | 1 | 2021-08-17T02:18:43.000Z | 2021-08-17T02:18:43.000Z | afl/README.md | dx1u0/dockerized_fuzzing | 2a6878cc2dd2fc5e3b620a7cf46c05a3e47684c3 | [
"MIT"
] | 17 | 2020-03-06T07:10:58.000Z | 2021-09-04T14:47:32.000Z | ## AFL
https://hub.docker.com/r/zjuchenyuan/afl
Source: https://github.com/mirrorer/afl
```
Current Version: 2.52b
More Versions: Ubuntu 16.04, gcc 5.4, clang 3.8
Last Update: 2017/11
Language: C
Special dependencies: QEMU may be needed (not included in this image)
Type: Mutation Fuzzer, Compile-time Instrumentation
```
## Guidance
Welcome to the world of fuzzing!
In this tutorial, we will experience a simple realistic fuzzing towards [MP3Gain](http://mp3gain.sourceforge.net/) 1.6.2.
### Step1: System configuration
Run these commands as root or sudoer, if you have not or rebooted:
```
echo "" | sudo tee /proc/sys/kernel/core_pattern # disable generating of core dump file
echo 0 | sudo tee /proc/sys/kernel/core_uses_pid
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
echo 1 | sudo tee /proc/sys/kernel/sched_child_runs_first # tfuzz require this
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space # vuzzer require this
```
Error message like `No such file or directory` is fine, and you can just ignore it.
Note:
Although not all configuration are required by this fuzzer, we provide these command in a uniform manner for consistency between different fuzzers.
These commands may impair your system security (turning off ASLR), but not a big problem since fuzzing experiments are normally conducted in dedicated machines.
Instead of `echo core > /proc/sys/kernel/core_pattern` given by many fuzzers which still generate a core dump file when crash happens,
here we disable core dump file generation to reduce I/O pressure during fuzzing. [Ref](http://man7.org/linux/man-pages/man5/core.5.html).
### Step2: Compile target programs and Prepare seed files
Since AFL uses compilation-time instrumentation, we need to build target program using `afl-gcc` or `afl-clang`.
```
wget https://sourceforge.net/projects/mp3gain/files/mp3gain/1.6.2/mp3gain-1_6_2-src.zip/download -O mp3gain-1_6_2-src.zip
mkdir -p mp3gain1.6.2 && cd mp3gain1.6.2
unzip ../mp3gain-1_6_2-src.zip
# build using afl-gcc
docker run --rm -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl \
sh -c "make clean; make"
```
`zjuchenyuan/afl` image has already set environment `CC` and `CXX`, so you just need to `make`, it's equivalent to `CC=/afl/afl-gcc CXX=/afl/afl-g++ make`.
If you want to build with clang, refer to last section.
### Step3: Prepare seed files
If you have not prepared mp3 seed files for fuzzing, you can use what we have provided [here](https://github.com/UNIFUZZ/dockerized_fuzzing_examples/tree/master/seed/mp3). More seed types? Take a look at [UNIFUZZ seeds repo](https://github.com/UNIFUZZ/seeds).
`apt install -y subversion` may be needed if `svn: command not found`.
```
svn export https://github.com/UNIFUZZ/dockerized_fuzzing_examples/trunk/seed/mp3 seed_mp3
```
### Step4: Fuzzing!
```
mkdir -p output/afl
docker run -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl \
afl-fuzz -i seed_mp3 -o output/afl -m 2048 -t 100+ -- ./mp3gain @@
```
### Explanation
#### Docker run usage
```
docker run [some params] <image name> program [program params]
```
#### Docker Parameters
- `-w /work`: set the working folder of the container, so we can omit `cd /work`.
- `-it`: interactive, like a terminal; if you want to run the container in the background, use `-d`
- ``-v `pwd`:/work``: set the directory mapping, so the output files will be written directly to the host folder.
- `--privileged`: this may not be required, but it makes things easier. Without it, you cannot run the preparation steps inside the container.
#### AFL Parameters
- `-i`: seed folder
- `-o`: output folder, where crash files and queue files will be stored
- `-m`: `-m 2048` sets the memory limit to 2048 MB (2 GB)
- `-t`: `-t 100+` sets the time limit to 100ms and skips seed files that lead to a timeout
- `--`: everything after it is the target program and its command line
- `@@`: placeholder for the mutated file
#### Output Example
See [here](https://github.com/UNIFUZZ/dockerized_fuzzing_examples/tree/master/output/afl)
### Using Clang Compiler
If you want to build the program using `clang`, AFL provides llvm_mode. You can set the environment variables `CC` and `CXX` to `/afl/afl-clang-fast` and `/afl/afl-clang-fast++` respectively.
For example, instead of just `make`, you can run `CC=/afl/afl-clang-fast CXX=/afl/afl-clang-fast++ make`. In some cases, you may need to manually edit the Makefile to change CC and CXX.
This image uses clang-3.8.
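For instance, to rebuild the MP3Gain example from Step 2 with clang instrumentation (a sketch, assuming the same working directory layout as the gcc build above):
```
docker run --rm -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl \
    sh -c "make clean; CC=/afl/afl-clang-fast CXX=/afl/afl-clang-fast++ make"
```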
### Using ASAN
Build with `AFL_USE_ASAN=1`, and run with `-m none` and a longer timeout. Example:
```
# build ASAN binary, you will see "[+] Instrumented x locations (64-bit, ASAN/MSAN mode, ratio 33%)."
docker run --rm -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl \
sh -c "make clean; AFL_USE_ASAN=1 make"
# run fuzzing
mkdir -p output/aflasan
docker run -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl \
afl-fuzz -i seed_mp3 -o output/aflasan -m none -t 500+ -- ./mp3gain @@
```
### Target program needs a specific file name?
AFL always uses `.cur_input` as the mutated file name. If your target program requires a specific filename suffix, you need to modify [afl-fuzz.c](https://github.com/mirrorer/afl/blob/master/afl-fuzz.c).
Here is an example using `x.mp3` to replace `.cur_input`:
```
docker run -w /work -it -v `pwd`:/work --privileged zjuchenyuan/afl /bin/bash
# in the container
cd /afl
sed -i 's/.cur_input/x.mp3/g' afl-fuzz.c
make clean
CC=gcc CXX=g++ make && make install
# now you can use your customized afl-fuzz to run fuzzing
```
## Official Documentation
http://lcamtuf.coredump.cx/afl/
https://github.com/mirrorer/afl/tree/master/docs
| 38.75 | 260 | 0.713073 | eng_Latn | 0.957509 |
93d3787ad11f95cffd384808365aa6702df2e893 | 3,847 | md | Markdown | articles/active-directory/active-directory-properties-area.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-properties-area.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-properties-area.md | Almulo/azure-docs.es-es | f1916cdaa2952cbe247723758a13b3ec3d608863 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Add your organization's privacy info in Azure AD | Microsoft Docs
description: Explains how to add your organization's privacy information in the properties area of Azure Active Directory (Azure AD).
services: active-directory
documentationcenter: ''
author: eross-msft
manager: mtillman
ms.service: active-directory
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 04/17/2018
ms.author: lizross
ms.reviewer: bpham
ms.custom: it-pro
ms.openlocfilehash: a34fa2b8c2d966af108664c219a222fb9a5b7abc
ms.sourcegitcommit: e8f443ac09eaa6ef1d56a60cd6ac7d351d9271b9
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 09/12/2018
ms.locfileid: "35773529"
---
# <a name="how-to-add-your-organizations-privacy-info-in-azure-active-directory"></a>Procedimiento: Agregar información de privacidad de su organización en Azure Active Directory
Este artículo explica cómo un administrador de inquilinos puede agregar información relacionada con la privacidad al inquilino de Azure Active Directory (Azure AD) de una organización, a través de Azure Portal.
Se recomienda agregar su contacto de privacidad global y la declaración de privacidad de su organización, de modo que los empleados internos e invitados externos puedan revisar las directivas. Dado que las declaraciones de privacidad se crean de forma única y específica para cada negocio, es recomendable ponerse en contacto con un abogado para obtener ayuda.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-dsr-and-stp-note.md)]
## <a name="access-the-properties-area-to-add-your-privacy-info"></a>Acceder al área de propiedades para agregar la información de privacidad
1. Inicie sesión en Azure Portal como administrador de inquilinos.
2. En la barra de navegación izquierda, seleccione **Azure Active Directory** y, luego, seleccione **Propiedades**.
Se muestra el área de **Propiedades**.

3. Agregar la información de privacidad de sus empleados:
- **Contacto técnico.** Escriba la dirección de correo electrónico de la persona de contacto para el soporte técnico de su organización.
- **Contacto de privacidad global.** Escriba la dirección de correo electrónico de la persona de contacto para consultas sobre privacidad de los datos personales. Esta persona también es quien se pone en contacto con Microsoft si se produce una vulneración de datos. Si no aparece ninguna persona aquí, Microsoft se pone en contacto con los administradores globales.
- **URL de la declaración de privacidad.** Escriba el vínculo del documento de su organización que describe la forma en que su organización controla la privacidad de datos interna y externa del invitado.
>[!Important]
>Si no incluye su propia declaración de privacidad o su contacto de privacidad, los invitados externos verán el texto en el cuadro de diálogo **Permisos de revisión** que dice: **< _nombre de su organización_> no ha proporcionado vínculos de sus términos que pueda revisar**. Por ejemplo, un usuario invitado verá este mensaje cuando reciba una invitación para acceder a una organización a través de la colaboración B2B.

4. Seleccione **Guardar**.
## <a name="next-steps"></a>Pasos siguientes
- [Canje de invitación de colaboración B2B de Azure Active Directory](https://aka.ms/b2bredemption)
- [Adición o modificación de la información de perfil de un usuario en Azure Active Directory](fundamentals/active-directory-users-profile-azure-portal.md) | 66.327586 | 429 | 0.790746 | spa_Latn | 0.979963 |
93d52633abe9e6899e6dafce4e56f0947546fa8c | 3,070 | md | Markdown | README.md | JTOne123/nwebdav | bf6c1d6a680db43c73f213aef4442ad79d3a6c62 | [
"MIT"
] | 121 | 2016-04-14T18:51:32.000Z | 2022-03-24T10:36:09.000Z | README.md | JTOne123/nwebdav | bf6c1d6a680db43c73f213aef4442ad79d3a6c62 | [
"MIT"
] | 59 | 2016-10-26T07:45:47.000Z | 2022-03-13T21:31:06.000Z | README.md | JTOne123/nwebdav | bf6c1d6a680db43c73f213aef4442ad79d3a6c62 | [
"MIT"
] | 42 | 2016-11-07T20:49:57.000Z | 2022-02-17T12:04:40.000Z | # NWebDAV
.NET implementation of a WebDAV server.
## Overview
I needed a WebDAV server implementation for C#, but I couldn't find an
implementation that fulfilled my needs. That's why I wrote
my own.
__Requirements__
* Fast, scalable, robust with moderate memory usage.
* Abstract data store, so it can be used for directories/files but also for any
other data.
* Abstract locking providers, so it can use in-memory locking (small servers)
or Redis (large installations).
* Flexible and extensible property management.
* Fully supports .NET framework, Mono and the Core CLR.
* Allows for various HTTP authentication methods and SSL support (basic authentication works).
## WebDAV client on Windows Vista/7
The Windows Vista/7 WebDAV client is implemented poorly. This client has been
supported since the 0.1.7 release.
* It requires the 'D' namespace prefix on all DAV-related XML nodes. XML
  namespaces without prefixes are not supported.
* It cannot deal with the XML date-time format (ISO 8601) in a decent manner. It
  processes the fraction part as milliseconds, which is wrong: milliseconds
  range from 0 to 999, whereas a fraction can have more than 3 digits. The
  difference is subtle. __2016-04-14T01:02:03.12__ denotes 120ms, but it could
  be parsed as 12ms by Windows 7 clients. __2016-04-14T01:02:03.1234__ denotes
  123.4ms, but cannot be parsed when using integers, so Windows 7 clients don't
  accept this format. For that reason we will not output more than 3 digits
  for the fraction part (see the sketch below).
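As an illustration only (the helper below is hypothetical and not part of the
NWebDAV API), a Windows-7-safe timestamp can be produced in C# by always
formatting exactly 3 fractional digits:
```csharp
using System;
using System.Globalization;

static class Iso8601
{
    // Emit exactly 3 fractional digits (milliseconds), which Windows 7
    // WebDAV clients can parse reliably.
    public static string ToCompatibleDateTime(DateTime utc) =>
        utc.ToString("yyyy-MM-dd'T'HH:mm:ss.fff'Z'", CultureInfo.InvariantCulture);
}
```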
The Windows 7 client might perform very badly when connecting to any WebDAV server
(not related to this specific implementation). This is caused by the client trying
to auto-detect a proxy server before __every__ request. Refer to
[KB2445570](https://support.microsoft.com/en-us/kb/2445570) for more information.
## Work in progress
This module is currently work-in-progress and shouldn't be used for production use yet. If you want to help, then let me know...
The following features are currently missing:
* Only the in-memory locking provider has been implemented yet.
* Check if each call responds with the proper status codes (as defined in the WebDAV specification).
* Recursive locking is not supported yet.
* We should have compatibility flags that can be used to implement quirks
for bad WebDAV clients. We can detect the client based on the User-Agent
and provide support for it.
The current version seems to work fine to serve files using WebDAV on both Windows and OS X.
# Contact
If you have any questions and/or problems, then you can submit an issue. For other remarks, you can also contact me via email at <[email protected]>.
# Donate
I never intended to make any profit for this code, but I received a request to add a donation link. So if you think that you want to donate to this project, then you can use the following button to donate to me via PayPal (don't feel obliged to do so).
[](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=KZYDXR3ERJQZJ)
| 51.166667 | 252 | 0.77785 | eng_Latn | 0.998469 |
93d5fbf3b1de67c08c44de2462fe6e5c706d63a4 | 35,189 | md | Markdown | dynamicsax2012-msdn/releaseupdatedb60-hrm-upgrade-scripts.md | AndreasVolkmann/DynamicsAX2012-msdn | 53dd6d433cc772eb1e207abb8e5bed1c157ee1f1 | [
"CC-BY-4.0",
"MIT"
] | 7 | 2020-05-18T17:20:32.000Z | 2021-11-04T20:39:20.000Z | dynamicsax2012-msdn/releaseupdatedb60-hrm-upgrade-scripts.md | AndreasVolkmann/DynamicsAX2012-msdn | 53dd6d433cc772eb1e207abb8e5bed1c157ee1f1 | [
"CC-BY-4.0",
"MIT"
] | 66 | 2018-08-07T20:05:55.000Z | 2022-01-21T20:00:29.000Z | dynamicsax2012-msdn/releaseupdatedb60-hrm-upgrade-scripts.md | AndreasVolkmann/DynamicsAX2012-msdn | 53dd6d433cc772eb1e207abb8e5bed1c157ee1f1 | [
"CC-BY-4.0",
"MIT"
] | 23 | 2018-08-08T11:57:41.000Z | 2021-08-31T09:04:49.000Z | ---
title: ReleaseUpdateDB60_HRM Upgrade Scripts
TOCTitle: ReleaseUpdateDB60_HRM Upgrade Scripts
ms:assetid: 12980209-8e2d-4739-bde8-a61364ec6734
ms:mtpsurl: https://msdn.microsoft.com/en-us/library/JJ735839(v=AX.60)
ms:contentKeyID: 49706751
ms.date: 05/18/2015
mtps_version: v=AX.60
---
# ReleaseUpdateDB60\_HRM Upgrade Scripts
_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012_
## In This Section
[ReleaseUpdateDB60\_HRM.allowDupHRMCompPerfAllocationLineIdIdx Upgrade Script](releaseupdatedb60-hrm-allowduphrmcompperfallocationlineididx-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.allowDupHRPLimitTableRelationshipPrimary Upgrade Script](releaseupdatedb60-hrm-allowduphrplimittablerelationshipprimary-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.allowDupIndex](releaseupdatedb60-hrm-allowdupindex.md)
[ReleaseUpdateDB60\_HRM.allowDupIndex Upgrade Script](releaseupdatedb60-hrm-allowdupindex-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.allowNoDupHRMCompPerfAllocationLineIdIdx Upgrade Script](releaseupdatedb60-hrm-allownoduphrmcompperfallocationlineididx-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.allowNoDupIndex](releaseupdatedb60-hrm-allownodupindex.md)
[ReleaseUpdateDB60\_HRM.allowNoDupIndex Upgrade Script](releaseupdatedb60-hrm-allownodupindex-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.createHcmIdentificationTypeAlienNumber Upgrade Script](releaseupdatedb60-hrm-createhcmidentificationtypealiennumber-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.deleteDupHRPLimitTableRelationshipPrimar Upgrade Script](releaseupdatedb60-hrm-deleteduphrplimittablerelationshipprimar-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.disableHRMCompVarAwardEmplIdx Upgrade Script](releaseupdatedb60-hrm-disablehrmcompvarawardemplidx-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.discardPersonelNumberDuplicates Upgrade Script](releaseupdatedb60-hrm-discardpersonelnumberduplicates-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.enableHRMCompVarAwardEmplIdx Upgrade Script](releaseupdatedb60-hrm-enablehrmcompvarawardemplidx-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.printOnly5AbsenceStatusColumns Upgrade Script](releaseupdatedb60-hrm-printonly5absencestatuscolumns-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateDirPartyRelationship Upgrade Script](releaseupdatedb60-hrm-updatedirpartyrelationship-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateEmplLedgerAccounts\_RU Upgrade Script](releaseupdatedb60-hrm-updateemplledgeraccounts-ru-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateEmployeeTable\_RU Upgrade Script](releaseupdatedb60-hrm-updateemployeetable-ru-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateEmployeeTable\_RUPaymTransCodes Upgrade Script](releaseupdatedb60-hrm-updateemployeetable-rupaymtranscodes-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmAccommodationType Upgrade Script](releaseupdatedb60-hrm-updatehcmaccommodationtype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateEmplLimits Upgrade Script](releaseupdatedb60-hrm-updateempllimits-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmAdaRequirement Upgrade Script](releaseupdatedb60-hrm-updatehcmadarequirement-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmAgreementTerm Upgrade Script](releaseupdatedb60-hrm-updatehcmagreementterm-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmApplicant Upgrade Script](releaseupdatedb60-hrm-updatehcmapplicant-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmBenefit Upgrade Script](releaseupdatedb60-hrm-updatehcmbenefit-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmBenefitOption Upgrade Script](releaseupdatedb60-hrm-updatehcmbenefitoption-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmBenefitPlan Upgrade Script](releaseupdatedb60-hrm-updatehcmbenefitplan-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmBenefitType Upgrade Script](releaseupdatedb60-hrm-updatehcmbenefittype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCertificateType Upgrade Script](releaseupdatedb60-hrm-updatehcmcertificatetype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCompensationLevel Upgrade Script](releaseupdatedb60-hrm-updatehcmcompensationlevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCompLocation Upgrade Script](releaseupdatedb60-hrm-updatehcmcomplocation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmContractorAffiliationDetail Upgrade Script](releaseupdatedb60-hrm-updatehcmcontractoraffiliationdetail-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseNotes Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursenotes-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseType](releaseupdatedb60-hrm-updatehcmcoursetype.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseType Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeCertificateProfile](releaseupdatedb60-hrm-updatehcmcoursetypecertificateprofile.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeCertificateProfile Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetypecertificateprofile-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeEducationProfile](releaseupdatedb60-hrm-updatehcmcoursetypeeducationprofile.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeEducationProfile Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetypeeducationprofile-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeGroup](releaseupdatedb60-hrm-updatehcmcoursetypegroup.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeGroup Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetypegroup-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeNotes Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetypenotes-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeSkillProfile](releaseupdatedb60-hrm-updatehcmcoursetypeskillprofile.md)
[ReleaseUpdateDB60\_HRM.updateHcmCourseTypeSkillProfile Upgrade Script](releaseupdatedb60-hrm-updatehcmcoursetypeskillprofile-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmDiscussion Upgrade Script](releaseupdatedb60-hrm-updatehcmdiscussion-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmDiscussionType Upgrade Script](releaseupdatedb60-hrm-updatehcmdiscussiontype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDiscipline](releaseupdatedb60-hrm-updatehcmeducationdiscipline.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDiscipline Upgrade Script](releaseupdatedb60-hrm-updatehcmeducationdiscipline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDisciplineCategory](releaseupdatedb60-hrm-updatehcmeducationdisciplinecategory.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDisciplineCategory Upgrade Script](releaseupdatedb60-hrm-updatehcmeducationdisciplinecategory-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDisciplineGroup](releaseupdatedb60-hrm-updatehcmeducationdisciplinegroup.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationDisciplineGroup Upgrade Script](releaseupdatedb60-hrm-updatehcmeducationdisciplinegroup-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationInstitution Upgrade Script](releaseupdatedb60-hrm-updatehcmeducationinstitution-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEducationLevel Upgrade Script](releaseupdatedb60-hrm-updatehcmeducationlevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentBonus Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentbonus-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentContractor Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentcontractor-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentDetail Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentdetail-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentEmployee Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentemployee-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentInsurance Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentinsurance-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentLeave Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentleave-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentStockOption Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentstockoption-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentTerm Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentterm-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentVacation Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentvacation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEmploymentValidTo4to6 Upgrade Script](releaseupdatedb60-hrm-updatehcmemploymentvalidto4to6-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmEthnicOrigin Upgrade Script](releaseupdatedb60-hrm-updatehcmethnicorigin-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoal Upgrade Script](releaseupdatedb60-hrm-updatehcmgoal-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalActivity](releaseupdatedb60-hrm-updatehcmgoalactivity.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalActivity Upgrade Script](releaseupdatedb60-hrm-updatehcmgoalactivity-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalComment](releaseupdatedb60-hrm-updatehcmgoalcomment.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalComment Upgrade Script](releaseupdatedb60-hrm-updatehcmgoalcomment-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalHeading](releaseupdatedb60-hrm-updatehcmgoalheading.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalHeading Upgrade Script](releaseupdatedb60-hrm-updatehcmgoalheading-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalNote](releaseupdatedb60-hrm-updatehcmgoalnote.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalNote Upgrade Script](releaseupdatedb60-hrm-updatehcmgoalnote-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalType Upgrade Script](releaseupdatedb60-hrm-updatehcmgoaltype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalTypeTemplate](releaseupdatedb60-hrm-updatehcmgoaltypetemplate.md)
[ReleaseUpdateDB60\_HRM.updateHcmGoalTypeTemplate Upgrade Script](releaseupdatedb60-hrm-updatehcmgoaltypetemplate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9Document](releaseupdatedb60-hrm-updatehcmi9document.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9Document Upgrade Script](releaseupdatedb60-hrm-updatehcmi9document-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9DocumentList](releaseupdatedb60-hrm-updatehcmi9documentlist.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9DocumentList Upgrade Script](releaseupdatedb60-hrm-updatehcmi9documentlist-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9DocumentType](releaseupdatedb60-hrm-updatehcmi9documenttype.md)
[ReleaseUpdateDB60\_HRM.updateHcmi9DocumentType Upgrade Script](releaseupdatedb60-hrm-updatehcmi9documenttype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmIdentificationType](releaseupdatedb60-hrm-updatehcmidentificationtype.md)
[ReleaseUpdateDB60\_HRM.updateHcmIdentificationType Upgrade Script](releaseupdatedb60-hrm-updatehcmidentificationtype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmIdentityCardTable\_RU Upgrade Script](releaseupdatedb60-hrm-updatehcmidentitycardtable-ru-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmIncomeTaxCategory](releaseupdatedb60-hrm-updatehcmincometaxcategory.md)
[ReleaseUpdateDB60\_HRM.updateHcmIncomeTaxCategory Upgrade Script](releaseupdatedb60-hrm-updatehcmincometaxcategory-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmIncomeTaxCode](releaseupdatedb60-hrm-updatehcmincometaxcode.md)
[ReleaseUpdateDB60\_HRM.updateHcmIncomeTaxCode Upgrade Script](releaseupdatedb60-hrm-updatehcmincometaxcode-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmInsuranceType](releaseupdatedb60-hrm-updatehcminsurancetype.md)
[ReleaseUpdateDB60\_HRM.updateHcmInsuranceType Upgrade Script](releaseupdatedb60-hrm-updatehcminsurancetype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmIssuingAgency](releaseupdatedb60-hrm-updatehcmissuingagency.md)
[ReleaseUpdateDB60\_HRM.updateHcmIssuingAgency Upgrade Script](releaseupdatedb60-hrm-updatehcmissuingagency-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJob Upgrade Script](releaseupdatedb60-hrm-updatehcmjob-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobDetail Upgrade Script](releaseupdatedb60-hrm-updatehcmjobdetail-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobFunction](releaseupdatedb60-hrm-updatehcmjobfunction.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobFunction Upgrade Script](releaseupdatedb60-hrm-updatehcmjobfunction-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobPreferredCertificate Upgrade Script](releaseupdatedb60-hrm-updatehcmjobpreferredcertificate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobPreferredEducationDiscipline Upgrade Script](releaseupdatedb60-hrm-updatehcmjobpreferrededucationdiscipline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobPreferredSkill Upgrade Script](releaseupdatedb60-hrm-updatehcmjobpreferredskill-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobResponsibility Upgrade Script](releaseupdatedb60-hrm-updatehcmjobresponsibility-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTask](releaseupdatedb60-hrm-updatehcmjobtask.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTask Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtask-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTaskAssignment Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtaskassignment-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplate Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplateCertificate Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplatecertificate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplateEducationDiscipline Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplateeducationdiscipline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplateResponsibility Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplateresponsibility-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplateSkill Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplateskill-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobTemplateTask Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtemplatetask-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobType](releaseupdatedb60-hrm-updatehcmjobtype.md)
[ReleaseUpdateDB60\_HRM.updateHcmJobType Upgrade Script](releaseupdatedb60-hrm-updatehcmjobtype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmLanguageCode](releaseupdatedb60-hrm-updatehcmlanguagecode.md)
[ReleaseUpdateDB60\_HRM.updateHcmLanguageCode Upgrade Script](releaseupdatedb60-hrm-updatehcmlanguagecode-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmLeaveType](releaseupdatedb60-hrm-updatehcmleavetype.md)
[ReleaseUpdateDB60\_HRM.updateHcmLeaveType Upgrade Script](releaseupdatedb60-hrm-updatehcmleavetype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmLoanItem](releaseupdatedb60-hrm-updatehcmloanitem.md)
[ReleaseUpdateDB60\_HRM.updateHcmLoanItem Upgrade Script](releaseupdatedb60-hrm-updatehcmloanitem-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmLoanType](releaseupdatedb60-hrm-updatehcmloantype.md)
[ReleaseUpdateDB60\_HRM.updateHcmLoanType Upgrade Script](releaseupdatedb60-hrm-updatehcmloantype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollBasis](releaseupdatedb60-hrm-updatehcmpayrollbasis.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollBasis Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollbasis-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollCategory](releaseupdatedb60-hrm-updatehcmpayrollcategory.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollCategory Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollcategory-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollDeduction](releaseupdatedb60-hrm-updatehcmpayrolldeduction.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollDeduction Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrolldeduction-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollDeductionType](releaseupdatedb60-hrm-updatehcmpayrolldeductiontype.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollDeductionType Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrolldeductiontype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollFrame](releaseupdatedb60-hrm-updatehcmpayrollframe.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollFrame Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollframe-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollFrameCategory](releaseupdatedb60-hrm-updatehcmpayrollframecategory.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollFrameCategory Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollframecategory-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollLine](releaseupdatedb60-hrm-updatehcmpayrollline.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollLine Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollPension](releaseupdatedb60-hrm-updatehcmpayrollpension.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollPension Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollpension-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollPremium](releaseupdatedb60-hrm-updatehcmpayrollpremium.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollPremium Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollpremium-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollScaleLevel](releaseupdatedb60-hrm-updatehcmpayrollscalelevel.md)
[ReleaseUpdateDB60\_HRM.updateHcmPayrollScaleLevel Upgrade Script](releaseupdatedb60-hrm-updatehcmpayrollscalelevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonAccommodation Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonaccommodation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonCertificate Upgrade Script](releaseupdatedb60-hrm-updatehcmpersoncertificate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonCourse Upgrade Script](releaseupdatedb60-hrm-updatehcmpersoncourse-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonDetails](releaseupdatedb60-hrm-updatehcmpersondetails.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonDetails Upgrade Script](releaseupdatedb60-hrm-updatehcmpersondetails-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonEducation Upgrade Script](releaseupdatedb60-hrm-updatehcmpersoneducation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonIdentificationNumber Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonidentificationnumber-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonImage Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonimage-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonLaborUnion](releaseupdatedb60-hrm-updatehcmpersonlaborunion.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonLaborUnion Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonlaborunion-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonLoan Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonloan-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonPrivateDetails Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonprivatedetails-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonProfessionalExperience Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonprofessionalexperience-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonProjectRole Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonprojectrole-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonSkill Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonskill-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonSkillMapping Upgrade Script](releaseupdatedb60-hrm-updatehcmpersonskillmapping-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPersonTrustedPosition Upgrade Script](releaseupdatedb60-hrm-updatehcmpersontrustedposition-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPosition Upgrade Script](releaseupdatedb60-hrm-updatehcmposition-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionDefaultDimension Upgrade Script](releaseupdatedb60-hrm-updatehcmpositiondefaultdimension-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionDetail Upgrade Script](releaseupdatedb60-hrm-updatehcmpositiondetail-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionDuration Upgrade Script](releaseupdatedb60-hrm-updatehcmpositionduration-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionHierarchy Upgrade Script](releaseupdatedb60-hrm-updatehcmpositionhierarchy-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionUnionAgreement Upgrade Script](releaseupdatedb60-hrm-updatehcmpositionunionagreement-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionWorkerAssignment Upgrade Script](releaseupdatedb60-hrm-updatehcmpositionworkerassignment-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmPositionHierarchyType Upgrade Script](releaseupdatedb60-hrm-updatehcmpositionhierarchytype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmRatingLevel](releaseupdatedb60-hrm-updatehcmratinglevel.md)
[ReleaseUpdateDB60\_HRM.updateHcmRatingLevel Upgrade Script](releaseupdatedb60-hrm-updatehcmratinglevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmRatingModel](releaseupdatedb60-hrm-updatehcmratingmodel.md)
[ReleaseUpdateDB60\_HRM.updateHcmRatingModel Upgrade Script](releaseupdatedb60-hrm-updatehcmratingmodel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmReasonCode](releaseupdatedb60-hrm-updatehcmreasoncode.md)
[ReleaseUpdateDB60\_HRM.updateHcmReasonCode Upgrade Script](releaseupdatedb60-hrm-updatehcmreasoncode-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmReminderType](releaseupdatedb60-hrm-updatehcmremindertype.md)
[ReleaseUpdateDB60\_HRM.updateHcmReminderType Upgrade Script](releaseupdatedb60-hrm-updatehcmremindertype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmResponsibility](releaseupdatedb60-hrm-updatehcmresponsibility.md)
[ReleaseUpdateDB60\_HRM.updateHcmResponsibility Upgrade Script](releaseupdatedb60-hrm-updatehcmresponsibility-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkill](releaseupdatedb60-hrm-updatehcmskill.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkill Upgrade Script](releaseupdatedb60-hrm-updatehcmskill-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillMappingCertificate Upgrade Script](releaseupdatedb60-hrm-updatehcmskillmappingcertificate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillMappingEducation Upgrade Script](releaseupdatedb60-hrm-updatehcmskillmappingeducation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillMappingHeader Upgrade Script](releaseupdatedb60-hrm-updatehcmskillmappingheader-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillMappingRange Upgrade Script](releaseupdatedb60-hrm-updatehcmskillmappingrange-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillMappingSkill Upgrade Script](releaseupdatedb60-hrm-updatehcmskillmappingskill-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillType](releaseupdatedb60-hrm-updatehcmskilltype.md)
[ReleaseUpdateDB60\_HRM.updateHcmSkillType Upgrade Script](releaseupdatedb60-hrm-updatehcmskilltype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmSurveyCompany](releaseupdatedb60-hrm-updatehcmsurveycompany.md)
[ReleaseUpdateDB60\_HRM.updateHcmSurveyCompany Upgrade Script](releaseupdatedb60-hrm-updatehcmsurveycompany-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmTitle](releaseupdatedb60-hrm-updatehcmtitle.md)
[ReleaseUpdateDB60\_HRM.updateHcmTitle Upgrade Script](releaseupdatedb60-hrm-updatehcmtitle-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmUnionAgreement Upgrade Script](releaseupdatedb60-hrm-updatehcmunionagreement-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmUnionAgreementDuration Upgrade Script](releaseupdatedb60-hrm-updatehcmunionagreementduration-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmUnions](releaseupdatedb60-hrm-updatehcmunions.md)
[ReleaseUpdateDB60\_HRM.updateHcmUnions Upgrade Script](releaseupdatedb60-hrm-updatehcmunions-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmVeteranStatus](releaseupdatedb60-hrm-updatehcmveteranstatus.md)
[ReleaseUpdateDB60\_HRM.updateHcmVeteranStatus Upgrade Script](releaseupdatedb60-hrm-updatehcmveteranstatus-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerAffiliationAdvHolder\_RU Upgrade Script](releaseupdatedb60-hrm-updatehcmworkeraffiliationadvholder-ru-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerBankAccount](releaseupdatedb60-hrm-updatehcmworkerbankaccount.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerBankAccount Upgrade Script](releaseupdatedb60-hrm-updatehcmworkerbankaccount-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerBasic Upgrade Script](releaseupdatedb60-hrm-updatehcmworkerbasic-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerEnrolledBenefit Upgrade Script](releaseupdatedb60-hrm-updatehcmworkerenrolledbenefit-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerGroup\_RU Upgrade Script](releaseupdatedb60-hrm-updatehcmworkergroup-ru-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerIdentificationNumber](releaseupdatedb60-hrm-updatehcmworkeridentificationnumber.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerIdentificationNumber Upgrade Script](releaseupdatedb60-hrm-updatehcmworkeridentificationnumber-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerNonHcm Upgrade Script](releaseupdatedb60-hrm-updatehcmworkernonhcm-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerReminder](releaseupdatedb60-hrm-updatehcmworkerreminder.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerReminder Upgrade Script](releaseupdatedb60-hrm-updatehcmworkerreminder-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTask](releaseupdatedb60-hrm-updatehcmworkertask.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTask Upgrade Script](releaseupdatedb60-hrm-updatehcmworkertask-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTaskAssignment](releaseupdatedb60-hrm-updatehcmworkertaskassignment.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTaskAssignment Upgrade Script](releaseupdatedb60-hrm-updatehcmworkertaskassignment-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTaxInfo](releaseupdatedb60-hrm-updatehcmworkertaxinfo.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTaxInfo Upgrade Script](releaseupdatedb60-hrm-updatehcmworkertaxinfo-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTitle](releaseupdatedb60-hrm-updatehcmworkertitle.md)
[ReleaseUpdateDB60\_HRM.updateHcmWorkerTitle Upgrade Script](releaseupdatedb60-hrm-updatehcmworkertitle-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRCComp Upgrade Script](releaseupdatedb60-hrm-updatehrccomp-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMAbsenceCode Upgrade Script](releaseupdatedb60-hrm-updatehrmabsencecode-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMAbsenceRequest Upgrade Script](releaseupdatedb60-hrm-updatehrmabsencerequest-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMAbsenceSetup Upgrade Script](releaseupdatedb60-hrm-updatehrmabsencesetup-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMAbsenceTable Upgrade Script](releaseupdatedb60-hrm-updatehrmabsencetable-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicantInterview Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicantinterview-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicantRouting Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicantrouting-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicantTable Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicanttable-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicantTableFK Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicanttablefk-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicantTableLogistics Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicanttablelogistics-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplication Upgrade Script](releaseupdatedb60-hrm-updatehrmapplication-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicationBasket Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicationbasket-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicationEmailTemplate Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicationemailtemplate-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMApplicationWordBookmark Upgrade Script](releaseupdatedb60-hrm-updatehrmapplicationwordbookmark-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEligibility Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeligibility-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEligibilityLevel Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeligibilitylevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEventEmpl Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeventempl-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEventLine Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeventline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEventLineComposite Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeventlinecomposite-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEventLineFixed Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeventlinefixed-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompEventLinePointInTime Upgrade Script](releaseupdatedb60-hrm-updatehrmcompeventlinepointintime-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompFixedBudget Upgrade Script](releaseupdatedb60-hrm-updatehrmcompfixedbudget-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompFixedEmpl Upgrade Script](releaseupdatedb60-hrm-updatehrmcompfixedempl-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompLocation](releaseupdatedb60-hrm-updatehrmcomplocation.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompLocation Upgrade Script](releaseupdatedb60-hrm-updatehrmcomplocation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompOrgPerf Upgrade Script](releaseupdatedb60-hrm-updatehrmcomporgperf-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompPerfAllocation Upgrade Script](releaseupdatedb60-hrm-updatehrmcompperfallocation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompPerfAllocationLine Upgrade Script](releaseupdatedb60-hrm-updatehrmcompperfallocationline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompPerfPlanEmpl Upgrade Script](releaseupdatedb60-hrm-updatehrmcompperfplanempl-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompVarAwardEmpl Upgrade Script](releaseupdatedb60-hrm-updatehrmcompvarawardempl-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompVarEnrollEmpl Upgrade Script](releaseupdatedb60-hrm-updatehrmcompvarenrollempl-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompVarEnrollEmplLine Upgrade Script](releaseupdatedb60-hrm-updatehrmcompvarenrollemplline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCompVarPlanLevel Upgrade Script](releaseupdatedb60-hrm-updatehrmcompvarplanlevel-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseAttendee Upgrade Script](releaseupdatedb60-hrm-updatehrmcourseattendee-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseAttendeeLine](releaseupdatedb60-hrm-updatehrmcourseattendeeline.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseAttendeeLine Upgrade Script](releaseupdatedb60-hrm-updatehrmcourseattendeeline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseInstructor](releaseupdatedb60-hrm-updatehrmcourseinstructor.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseInstructor Upgrade Script](releaseupdatedb60-hrm-updatehrmcourseinstructor-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseLocation](releaseupdatedb60-hrm-updatehrmcourselocation.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseLocation Upgrade Script](releaseupdatedb60-hrm-updatehrmcourselocation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseTable Upgrade Script](releaseupdatedb60-hrm-updatehrmcoursetable-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseTableInstructor](releaseupdatedb60-hrm-updatehrmcoursetableinstructor.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseTableInstructor Upgrade Script](releaseupdatedb60-hrm-updatehrmcoursetableinstructor-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMCourseType Upgrade Script](releaseupdatedb60-hrm-updatehrmcoursetype-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updatehrmEmployeeContact Upgrade Script](releaseupdatedb60-hrm-updatehrmemployeecontact-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHrmEmployeeContactLogistics Upgrade Script](releaseupdatedb60-hrm-updatehrmemployeecontactlogistics-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMMassHireLine Upgrade Script](releaseupdatedb60-hrm-updatehrmmasshireline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMMassHireProject Upgrade Script](releaseupdatedb60-hrm-updatehrmmasshireproject-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMMedia Upgrade Script](releaseupdatedb60-hrm-updatehrmmedia-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMParameters Upgrade Script](releaseupdatedb60-hrm-updatehrmparameters-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollDeduction](releaseupdatedb60-hrm-updatehrmpayrolldeduction.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollDeduction Upgrade Script](releaseupdatedb60-hrm-updatehrmpayrolldeduction-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollLine](releaseupdatedb60-hrm-updatehrmpayrollline.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollLine Upgrade Script](releaseupdatedb60-hrm-updatehrmpayrollline-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollPension](releaseupdatedb60-hrm-updatehrmpayrollpension.md)
[ReleaseUpdateDB60\_HRM.updateHRMPayrollPension Upgrade Script](releaseupdatedb60-hrm-updatehrmpayrollpension-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMRecruitingLastNews Upgrade Script](releaseupdatedb60-hrm-updatehrmrecruitinglastnews-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMRecruitingTable Upgrade Script](releaseupdatedb60-hrm-updatehrmrecruitingtable-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRMVirtualNetworkGroupRelation Upgrade Script](releaseupdatedb60-hrm-updatehrmvirtualnetworkgrouprelation-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateHRPPartyPosTblRelationship Upgrade Script](releaseupdatedb60-hrm-updatehrppartypostblrelationship-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateJobLimits Upgrade Script](releaseupdatedb60-hrm-updatejoblimits-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateOMHierarchyRelationship Upgrade Script](releaseupdatedb60-hrm-updateomhierarchyrelationship-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateOMTeam Upgrade Script](releaseupdatedb60-hrm-updateomteam-upgrade-script.md)
[ReleaseUpdateDB60\_HRM.updateSysCompanyUserInfo](releaseupdatedb60-hrm-updatesyscompanyuserinfo.md)
[ReleaseUpdateDB60\_HRM.updateSysCompanyUserInfo Upgrade Script](releaseupdatedb60-hrm-updatesyscompanyuserinfo-upgrade-script.md)
| 61.198261 | 162 | 0.886243 | yue_Hant | 0.75878 |
93d7195d3b3e0ed798a02b07cac2fe466d7b5ec2 | 1,636 | md | Markdown | content/en/docs/reference/commands/jx_delete_quickstartlocation.md | romainverduci/jx-docs | ba94049b5712446751c55253003097b50e70be65 | [
"Apache-2.0"
] | null | null | null | content/en/docs/reference/commands/jx_delete_quickstartlocation.md | romainverduci/jx-docs | ba94049b5712446751c55253003097b50e70be65 | [
"Apache-2.0"
] | null | null | null | content/en/docs/reference/commands/jx_delete_quickstartlocation.md | romainverduci/jx-docs | ba94049b5712446751c55253003097b50e70be65 | [
"Apache-2.0"
] | null | null | null | ---
date: 2020-06-21T00:57:59Z
title: "jx delete quickstartlocation"
slug: jx_delete_quickstartlocation
url: /commands/jx_delete_quickstartlocation/
description: list of jx commands
---
## jx delete quickstartlocation
Deletes one or more quickstart locations for your team
### Synopsis
Deletes one or more quickstart locations for your team
For more documentation see: https://jenkins-x.io/developing/create-quickstart/#customising-your-teams-quickstarts
```
jx delete quickstartlocation [flags]
```
### Examples
```
# Pick a quickstart location to delete for your team
jx delete quickstartlocation
# Pick a quickstart location to delete for your team using an abbreviation
jx delete qsloc
# Delete a GitHub organisation 'myorg' for your team
jx delete qsloc --owner myorg
# Delete a specific location for your team
jx delete qsloc --url https://foo.com --owner myowner
```
### Options
```
-h, --help help for quickstartlocation
-o, --owner string The owner is the user or organisation of the Git provider
-u, --url string The URL of the Git service (default "https://github.com")
```
### Options inherited from parent commands
```
-b, --batch-mode Runs in batch mode without prompting for user input (default true)
--verbose Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warning, info, debug, trace
```
### SEE ALSO
* [jx delete](/commands/jx_delete/) - Deletes one or more resources
###### Auto generated by spf13/cobra on 21-Jun-2020
| 28.206897 | 215 | 0.728606 | eng_Latn | 0.963498 |
93d78c4a15d1e210b4c734b9fa4561b64c270322 | 587 | md | Markdown | _posts/2017-01-14-gitignore-update.md | cjh5414/cjh5414.github.io | a7e3cea96c54d191ec5c8646755e6e7577d61cba | [
"MIT"
] | 1 | 2017-04-30T06:19:49.000Z | 2017-04-30T06:19:49.000Z | _posts/2017-01-14-gitignore-update.md | cjh5414/cjh5414.github.io | a7e3cea96c54d191ec5c8646755e6e7577d61cba | [
"MIT"
] | null | null | null | _posts/2017-01-14-gitignore-update.md | cjh5414/cjh5414.github.io | a7e3cea96c54d191ec5c8646755e6e7577d61cba | [
"MIT"
] | 5 | 2018-01-03T07:46:55.000Z | 2019-03-06T01:22:55.000Z | ---
layout: post
title: "이미 push된 file .gitignore 적용하기"
tags: [Git]
---
> Files you don't want Git to track can be ignored by listing them in .gitignore. However, you sometimes run `git push` before adding a file that should be ignored to .gitignore. In that case, even if you belatedly fix .gitignore and push again, the file does not disappear from the remote repository.
<br/>
## Apply .gitignore
Run the git commands below, and .gitignore is applied: the listed files disappear from the remote repository.
```
$ git rm -r --cached .                # remove everything from the index only; working files are kept
$ git add .                           # re-add all files; paths matched by .gitignore are now skipped
$ git commit -m "Apply .gitignore"
$ git push
```
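To double-check that a path is now ignored, you can use `git check-ignore -v`, which prints the .gitignore rule that matches (the `build/` entry here is just a hypothetical example):
```
$ git check-ignore -v build/output.log
.gitignore:3:build/    build/output.log
```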
<br/>
## Git habits to avoid repeating the same mistake
[Good Git habits](https://cjh5414.github.io/git-habit/)
This post covers understanding the git commit process and good commit habits.
| 20.964286 | 182 | 0.650767 | kor_Hang | 1.00001 |
93d796b65d770cffad44521d30c819bdd4886def | 225 | md | Markdown | _posts/2019-11-01-post-exemplo.md | capitulomaringa/capitulomaringa.github.io | 5a4f34ab59bf30458e61111fb409faf43de32206 | [
"MIT"
] | null | null | null | _posts/2019-11-01-post-exemplo.md | capitulomaringa/capitulomaringa.github.io | 5a4f34ab59bf30458e61111fb409faf43de32206 | [
"MIT"
] | 4 | 2019-11-14T18:13:18.000Z | 2021-10-07T20:12:55.000Z | _posts/2019-11-01-post-exemplo.md | capitulomaringa/capitulomaringa.github.io | 5a4f34ab59bf30458e61111fb409faf43de32206 | [
"MIT"
] | null | null | null | ---
title: Exemplo de Postagem
categories: exemplo outro-exemplo
date: 2019/11/01
---
#### This is an example.
All of the features in [Link](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) are supported.
| 22.5 | 112 | 0.742222 | por_Latn | 0.993769 |
93d879724640fe7a0a10af1e71380e88ca48acc8 | 103 | md | Markdown | web/ghuntley.com/content/_drafts/recommended-solution-architecture-with-xamarin.md | jkachmar/depot | ffbef1ef692d580531684815bf7813265df12693 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | web/ghuntley.com/content/_drafts/recommended-solution-architecture-with-xamarin.md | jkachmar/depot | ffbef1ef692d580531684815bf7813265df12693 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | web/ghuntley.com/content/_drafts/recommended-solution-architecture-with-xamarin.md | jkachmar/depot | ffbef1ef692d580531684815bf7813265df12693 | [
"BlueOak-1.0.0",
"Apache-2.0"
] | null | null | null | ---
title: Recommended solution architecture with Xamarin
date: '2016-07-04 00:55:00'
draft: true
---
| 14.714286 | 53 | 0.718447 | eng_Latn | 0.965534 |
93d951a9b709f20ddb5e5e7ec7d8ef02efd95abd | 83 | md | Markdown | src/__tests__/fixtures/unfoldingWord/en_tq/psa/111/05.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | null | null | null | src/__tests__/fixtures/unfoldingWord/en_tq/psa/111/05.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 226 | 2020-09-09T21:56:14.000Z | 2022-03-26T18:09:53.000Z | src/__tests__/fixtures/unfoldingWord/en_tq/psa/111/05.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 1 | 2022-01-10T21:47:07.000Z | 2022-01-10T21:47:07.000Z | # What will Yahweh always call to mind?
He will always call to mind his covenant.
| 20.75 | 41 | 0.759036 | eng_Latn | 0.999831 |
93da668df0eac581cbada5d9ad0da0306438b640 | 82 | md | Markdown | README_ZH.md | changjianlong/bottomDialog | 180c01a35128ad908276296056ccf0e0abc53d0a | [
"Apache-2.0"
] | null | null | null | README_ZH.md | changjianlong/bottomDialog | 180c01a35128ad908276296056ccf0e0abc53d0a | [
"Apache-2.0"
] | null | null | null | README_ZH.md | changjianlong/bottomDialog | 180c01a35128ad908276296056ccf0e0abc53d0a | [
"Apache-2.0"
] | null | null | null | # BottomDialog
`BottomDialog` 是一个通过 `DialogFragment` 实现的底部弹窗布局,并且支持弹出动画,支持任意布局
| 16.4 | 63 | 0.804878 | yue_Hant | 0.466801 |
93da83cab15ed50d83b7923b93e3cfd474b4084a | 6,386 | md | Markdown | _posts/2019-3-15-jellyfish.md | alkwan/forty-jekyll-theme | 2ec86e70bb460925ffc8308d6897eb2afa80ac1b | [
"CC-BY-3.0"
] | null | null | null | _posts/2019-3-15-jellyfish.md | alkwan/forty-jekyll-theme | 2ec86e70bb460925ffc8308d6897eb2afa80ac1b | [
"CC-BY-3.0"
] | null | null | null | _posts/2019-3-15-jellyfish.md | alkwan/forty-jekyll-theme | 2ec86e70bb460925ffc8308d6897eb2afa80ac1b | [
"CC-BY-3.0"
] | 1 | 2019-07-28T11:31:30.000Z | 2019-07-28T11:31:30.000Z | ---
layout: post
title: Jellyfish OS
description: A mobile operating system designed for modern professionals.
image: assets/images/jellyfish_screens.png
nav-menu: true
---
<!-- Main -->
<div id="main" class="alt">
<section id="one">
<div class="inner">
<p>A mobile operating system designed for professionals, focused on creating a fluid, personalized experience that is vibrant and easy to use.</p>
<div class="row">
<div class="4u 12u$(medium)">
<h4>Project Deliverables</h4>
<ul class="alt"><li>Create a unique mobile operating system.</li>
<li>Establish a design language for the mobile operating system.</li>
<li>Create 10 applications using the established design language.</li>
</ul>
</div>
<div class="4u 12u$(medium)">
<h4>My Role</h4>
<ul class="alt"><li>UI Animation</li>
<li>UX/UI design - Messages application</li>
<li>Icon design and implementation</li></ul>
</div>
<div class="4u$ 12u$(medium)">
<h4>Project Timeline</h4>
<ul class="alt"><li>Winter Quarter 2019 - 10 weeks </li>
<li>INFO 365: Mobile Application Design</li></ul>
</div>
</div>
<hr class="major" />
<!-- Project Overview -->
<h3 id="elements">Project Overview</h3>
<h4>The Problem</h4>
<p><span class="image left"><img src="/assets/images/current_tech_proscons.PNG" alt="Whiteboard with pros and cons."/></span>
What makes a great mobile phone? Why do people choose and stick to certain operating systems, like Android or iPhone? These were the types of questions my group asked as we tackled creating a brand new mobile operating system. We went deep into the nitty-gritty of what we liked and disliked about current devices and began brainstorming ideas for a new device of our own. Since we were creating something entirely new, we decided to focus on an audience of young, tech-savvy professionals. As we narrowed this down, we found a few key aspects to focus on: <b>personalization</b> and <b>ease of use</b>.
<span class="image fit"><img src="/assets/images/device_brainstorming.PNG"/></span></p>
<h4>Our Solution</h4>
<p><span class="image right"><img src="/assets/images/jellyfish/moodboard.gif"/></span>My team started by creating a mood board that conveyed the look and feel of our imagined operating system. Using the mood board as inspiration, we created a design language, for which we came up with six design principles. Our operating system was to be <b>animated</b>, <b>fluid</b>, <b>iridescent</b>, <b>tactile</b>, <b>modern</b>, and <b>personalized</b>. We wanted users to have a smooth, responsive, and enjoyable experience. With these principles in mind, we laid down the ground work for our OS by identifying and creating user journeys, common layouts/design patterns, and interaction design. Once we had this foundation, we designed 10 apps total, presenting our designs once a week and getting feedback from our professor and TA.</p>
<hr class="major" />
<h3 id="elements">My Work</h3>
<h4>Research and Planning</h4>
<!-- Animation research -->
<p><span class="image left"><img src="/assets/images/jellyfish/jellyfish.gif"/></span>During our process, we identified <b>animated</b> as one of our design principles. We would later find it too difficult to implement in our final designs, but I was tasked with the job of researching and coming up with UI animation. While I am familiar with 3D character animation, this was my first time attempting UI animation. Using tutorials and example files online, I created these animations in Adobe After Effects.</p><p>To the left is my first attempt at line animation in After Effects, and below are the UI animations I created. Interface designs were not done by the team.</p>
<div class="row 50% uniform">
<div class="4u"><span class="image fit"><img src="/assets/images/jellyfish/clock_countdown.gif" alt="" /></span></div>
<div class="4u"><span class="image fit"><img src="/assets/images/jellyfish/notification_options.gif" alt="" /></span></div>
<div class="4u$"><span class="image fit"><img src="/assets/images/jellyfish/notification_swipe_dif_speed.gif" alt="" /></span></div>
</div>
<h4>Messages App Design</h4>
<!-- Messages app -->
<p>For one of our 10 apps, I created the Messages app. I started out by doing some competitive analysis of other existing messaging apps. I picked out the elements I liked and disliked, combined the best of them, and made sure the result matched our design language.</p>
<h5>Competitive Analysis</h5>
<span class="image fit"><img src="/assets/images/jellyfish/References.png"/></span>
<h5>Initial Designs</h5>
<span class="image fit"><img src="/assets/images/jellyfish/messages_v1.png"/></span>
<span class="image fit"><img src="/assets/images/jellyfish/messages_v2.png"/></span>
<h5>Final Designs</h5>
<p>When we were near the end of the project, we decided not to finish the original Messages app design. Stepping back at that point, we realized we had strayed from our established design language: we were too focused on existing operating systems like Android and iOS and were emulating them too closely. We made the decision to scrap our previous designs, and I created less extensive versions that better reflected the look and feel we wanted.</p>
<span class="image fit"><img src="/assets/images/jellyfish/messages_final.png"/></span>
<h4>Icon Library</h4>
<!-- Created a library of ~80 icons -->
<p><span class="image right"><img src="/assets/images/jellyfish/icon_library.png"/></span>My biggest contribution to this project was the icon library. In our initial designs, we used icons from various icon libraries that we found online. However, this made visual consistency almost impossible. In order to fix this, I created entirely new icons in Adobe Illustrator, making sure they were stylistically consistent. I ended up making roughly <b>80 icons</b>, and almost all of the icons used in our final designs were created by me. To the right are most of the icons I designed for the project, and below are some of our final screens that show them in use.</p>
<span class="image fit"><img src="/assets/images/jellyfish/music_screens.png"/></span>
<span class="image fit"><img src="/assets/images/jellyfish/notes_calendar_screens.png"/></span>
</div>
</section>
</div> | 89.943662 | 834 | 0.725963 | eng_Latn | 0.988257 |
93dad229e885c74f5f6de69689a475637f6c6e0b | 522 | md | Markdown | README.md | nvdajp/cheatsheet | 7c6405af09ddeb0728628319dcdb24fd1559a0ad | [
"CC0-1.0"
] | 8 | 2018-05-21T07:05:23.000Z | 2021-05-23T05:15:10.000Z | README.md | nvdajp/cheatsheet | 7c6405af09ddeb0728628319dcdb24fd1559a0ad | [
"CC0-1.0"
] | null | null | null | README.md | nvdajp/cheatsheet | 7c6405af09ddeb0728628319dcdb24fd1559a0ad | [
"CC0-1.0"
] | 1 | 2021-05-29T08:49:29.000Z | 2021-05-29T08:49:29.000Z | # NVDA Cheat Sheet
NVDA Japanese Team, Takuya Nishimoto
This is a revised edition (May 2021, version 210523) of the NVDA cheat sheet
originally produced in May 2018 as printed material distributed at events and similar occasions.
We are publishing the Microsoft PowerPoint file and a PDF file created with Antenna House PDF Driver 8.0.
Major changes:
* Changed the description of "Focus Highlight" to "Visual Highlight".
* Improved the reading order of the pptx file.
* Improved the accessibility of the PDF file.
Please send requests and feedback to:
* https://github.com/nvdajp/cheatsheet/issues
* [email protected]
## Cited works
* NVDA icon (NV Access)
* NVDAJP icon: from [the NVDAJP character "Demekin"](https://ja.osdn.net/projects/nvdajp/images/#id3176) (炉部愛, CC BY-SA 2.1)
| 21.75 | 108 | 0.783525 | yue_Hant | 0.910603 |
93db0c5288e4b7d8ba14443b0a1e63beedc18a5b | 625 | md | Markdown | Documentation/logistic_regression.md | cpilat97/R | 3ba805a0dc4bba8dec51fdef034a9ced9b59cf16 | [
"MIT"
] | 568 | 2018-09-24T02:35:22.000Z | 2022-03-31T04:01:36.000Z | Documentation/logistic_regression.md | seanpm2001/R | d3f75d4dffc16b9ac905e57d763f3708bec6a957 | [
"MIT"
] | 29 | 2018-10-01T20:19:49.000Z | 2022-01-21T18:28:08.000Z | Documentation/logistic_regression.md | seanpm2001/R | d3f75d4dffc16b9ac905e57d763f3708bec6a957 | [
"MIT"
] | 275 | 2018-09-24T01:35:08.000Z | 2022-03-20T16:37:31.000Z |
```r
# Combine the predictors and the response into one data frame
# (x_train, y_train and x_test must exist beforehand; see the setup below)
x <- cbind(x_train, y_train)
# Train the model using the training set and check the fit
logistic <- glm(y_train ~ ., data = x, family = "binomial")
summary(logistic)
# Predict output; type = "response" returns predicted probabilities
predicted <- predict(logistic, x_test, type = "response")
```
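For completeness, here is one way to create the objects assumed above. This setup is purely illustrative (it is not part of the original snippet) and uses R's built-in `mtcars` data with `am` as the binary response:
```r
# Hypothetical train/test split for illustration
set.seed(42)
idx <- sample(seq_len(nrow(mtcars)), size = 0.7 * nrow(mtcars))
x_train <- mtcars[idx, c("mpg", "wt", "hp")]
y_train <- mtcars$am[idx]
x_test <- mtcars[-idx, c("mpg", "wt", "hp")]
```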
| 16.891892 | 142 | 0.6496 | eng_Latn | 0.906868 |
93db1cf98ec6a1264b0c07115ad33d92cec5894f | 2,029 | md | Markdown | README.md | JdsLima/Electron | ac43723df0049c8ade9d5204d836cd4f90c361ba | [
"Apache-2.0"
] | 1 | 2021-01-18T21:06:08.000Z | 2021-01-18T21:06:08.000Z | README.md | JdsLima/Electron | ac43723df0049c8ade9d5204d836cd4f90c361ba | [
"Apache-2.0"
] | 3 | 2020-10-06T18:28:57.000Z | 2021-01-18T21:02:17.000Z | README.md | JdsLima/Electron | ac43723df0049c8ade9d5204d836cd4f90c361ba | [
"Apache-2.0"
] | null | null | null | # Electron
A management system for micro-enterprises.
#### To compile this project:
# --> Windows:
First, install a recent version of Node.js. We recommend that you install either the latest LTS or Current version available. Visit the Node.js download page and select the Windows Installer. Once downloaded, execute the installer and let the installation wizard guide you through the installation.
On the screen that allows you to configure the installation, make sure to select the Node.js runtime, npm package manager, and Add to PATH options.
Once installed, confirm that everything works as expected. Find the Windows PowerShell by opening the Start Menu and typing PowerShell. Open up PowerShell or another command line client of your choice and confirm that both node and npm are available:
## This command should print the version of Node.js
> node -v
## This command should print the version of npm
> npm -v
# --> Linux:
First, install a recent version of Node.js. Depending on your Linux distribution, the installation steps might differ. Assuming that you normally install software using a package manager like apt or pacman, use the official Node.js guidance on installing on Linux.
You're running Linux, so you likely already know how to operate a command line client. Open up your favorite client and confirm that both node and npm are available globally:
## This command should print the version of Node.js
> node -v
## This command should print the version of npm
> npm -v
If both commands printed a version number, you are all set! Before you get started, you might want to install a code editor suited for JavaScript development.
# Installing Electron:
Create a folder where you want to set up the Electron application and install Electron:
> npm install --save-dev electron
To enable Electron live-reload:
> npm install electron-reload
Once everything is installed, clone this repository and extract it into the folder where you installed Electron.
With everything ready, start the application with:
> npm start
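Note that `npm start` assumes a `"start": "electron ."` script in your package.json. If you are wiring a project up from scratch, a minimal main process file looks roughly like the sketch below; the file name, window size, and `index.html` entry page are assumptions for illustration, not part of this repository:
```javascript
// main.js - minimal Electron entry point (illustrative sketch)
const { app, BrowserWindow } = require('electron');
// Optional: reload the window when source files change (uses electron-reload)
require('electron-reload')(__dirname);
function createWindow() {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile('index.html'); // assumed entry page
}
app.whenReady().then(createWindow);
app.on('window-all-closed', () => app.quit());
```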
| 40.58 | 298 | 0.78758 | eng_Latn | 0.998243 |
93db8e4d97679a7c0648ad72b64b8c15bb253066 | 75 | md | Markdown | README.md | KyleKaiWang/dx12_labs | b369aad3c7a139a439dcb45ba9e26677f957bddb | [
"Apache-2.0"
] | null | null | null | README.md | KyleKaiWang/dx12_labs | b369aad3c7a139a439dcb45ba9e26677f957bddb | [
"Apache-2.0"
] | null | null | null | README.md | KyleKaiWang/dx12_labs | b369aad3c7a139a439dcb45ba9e26677f957bddb | [
"Apache-2.0"
] | null | null | null | # Titan Infinite - DirectX 12
This is an experimental DirectX 12 project.
| 25 | 43 | 0.8 | eng_Latn | 0.993663 |
93db9b7d1bf89426a8b41b4efa9e46fa8d0b2444 | 9,338 | md | Markdown | th/12.1.md | CFFPhil/build-web-application-with-golang | 606abd586a7270d0e48762cf0454ba0fac330698 | [
"BSD-3-Clause"
] | 39,872 | 2015-01-01T09:09:00.000Z | 2022-03-31T15:36:24.000Z | th/12.1.md | CFFPhil/build-web-application-with-golang | 606abd586a7270d0e48762cf0454ba0fac330698 | [
"BSD-3-Clause"
] | 272 | 2015-01-03T14:03:30.000Z | 2022-02-09T03:06:20.000Z | th/12.1.md | CFFPhil/build-web-application-with-golang | 606abd586a7270d0e48762cf0454ba0fac330698 | [
"BSD-3-Clause"
] | 11,462 | 2015-01-01T05:28:30.000Z | 2022-03-31T08:24:28.000Z | # 12.1 Logs
We want to build web applications that can keep track of events which have occurred throughout execution, combining them all into one place for easy access later on, when we inevitably need to perform debugging or optimization tasks. Go provides a simple `log` package which we can use to help us implement simple logging functionality. Logs can be printed using Go's `fmt` package, called inside error handling functions for general error logging. Go's standard package only contains basic functionality for logging, however. There are many third party logging tools that we can use to supplement it if your needs are more sophisticated (tools similar to log4j and log4cpp, if you've ever had to deal with logging in Java or C++). A popular and fully featured, open-source logging tool in Go is the [seelog](https://github.com/cihub/seelog) logging framework. Let's take a look at how we can use `seelog` to perform logging in our Go applications.
## Introduction to seelog
Seelog is a logging framework for Go that provides some simple functionality for implementing logging tasks such as filtering and formatting. Its main features are as follows:
- Dynamic configuration via XML; you can load configuration parameters dynamically without recompiling your program
- Supports hot updates, the ability to dynamically change the configuration without the need to restart the application
- Supports multi-output streams that can simultaneously pipe log output to multiple streams, such as a file stream, network flow, etc.
- Support for different log outputs
- Command line output
- File Output
- Cached output
- Support log rotate
- SMTP Mail
The above is only a partial list of seelog's features. To fully take advantage of all of seelog's functionality, have a look at its [official wiki](https://github.com/cihub/seelog/wiki) which thoroughly documents what you can do with it. Let's see how we'd use seelog in our projects:
First install seelog:
go get -u github.com/cihub/seelog
Then let's write a simple example:
package main
import log "github.com/cihub/seelog"
func main() {
defer log.Flush()
log.Info("Hello from Seelog!")
}
Compile and run the program. If you see `Hello from Seelog!` in your application log, seelog has been successfully installed and is operating normally.
## Custom log processing with seelog
Seelog supports custom log processing. The following code snippet is based on the custom log processing example from the seelog package:
package logs
	import (
	    "fmt"
	    seelog "github.com/cihub/seelog"
	)
var Logger seelog.LoggerInterface
func loadAppConfig() {
appConfig := `
<seelog minlevel="warn">
<outputs formatid="common">
<rollingfile type="size" filename="/data/logs/roll.log" maxsize="100000" maxrolls="5"/>
<filter levels="critical">
<file path="/data/logs/critical.log" formatid="critical"/>
<smtp formatid="criticalemail" senderaddress="[email protected]" sendername="ShortUrl API" hostname="smtp.gmail.com" hostport="587" username="mailusername" password="mailpassword">
<recipient address="[email protected]"/>
</smtp>
</filter>
</outputs>
<formats>
<format id="common" format="%Date/%Time [%LEV] %Msg%n" />
<format id="critical" format="%File %FullPath %Func %Msg%n" />
<format id="criticalemail" format="Critical error on our server!\n %Time %Date %RelFile %Func %Msg \nSent by Seelog"/>
</formats>
</seelog>
`
logger, err := seelog.LoggerFromConfigAsBytes([]byte(appConfig))
if err != nil {
fmt.Println(err)
return
}
UseLogger(logger)
}
func init() {
DisableLog()
loadAppConfig()
}
// DisableLog disables all library log output
func DisableLog() {
Logger = seelog.Disabled
}
// UseLogger uses a specified seelog.LoggerInterface to output library log.
// Use this func if you are using Seelog logging system in your app.
func UseLogger(newLogger seelog.LoggerInterface) {
Logger = newLogger
}
The above implements the three main functions:
- `DisableLog`
Initializes a global variable `Logger` with seelog disabled, mainly in order to prevent the logger from being repeatedly initialized
- `loadAppConfig`
Initializes the configuration settings of seelog according to a configuration file. In our example we are reading the configuration from an in-memory string, but of course, you can read it from an XML file also. Inside the configuration, we set up the following parameters:
- Seelog
The `minlevel` parameter is optional. If configured, logging levels which are greater than or equal to the specified level will be recorded. The optional `maxlevel` parameter is similarly used to configure the maximum logging level desired.
- Outputs
Configures the output destination. In our particular case, we channel our logging data into two output destinations. The first is a rolling log file where we continuously save the most recent window of logging data. The second destination is a filtered log which records only critical level errors. We additionally configure it to alert us via email when these types of errors occur.
- Formats
	Defines the various logging formats. You can use custom formatting or predefined formats; a full list of the predefined formats can be found on seelog's [wiki](https://github.com/cihub/seelog/wiki/Format-reference)
- `UseLogger`
	Sets the current logger as our log processor.
Above, we've defined and configured a custom log processing package. The following code demonstrates how we'd use it:
package main
import (
"net/http"
"project/logs"
"project/configs"
"project/routes"
)
func main() {
addr, _ := configs.MainConfig.String("server", "addr")
		logs.Logger.Infof("Start server at:%v", addr)
err := http.ListenAndServe(addr, routes.NewMux())
		logs.Logger.Criticalf("Server err:%v", err)
}
## Email notifications
The above example explains how to set up email notifications with `seelog`. As you can see, we used the following `smtp` configuration:
<smtp formatid="criticalemail" senderaddress="[email protected]" sendername="ShortUrl API" hostname="smtp.gmail.com" hostport="587" username="mailusername" password="mailpassword">
<recipient address="[email protected]"/>
</smtp>
We set the format of our alert messages through the `criticalemail` configuration, providing our mail server parameters to be able to receive them. We can also configure our notifier to send out alerts to additional users using the `recipient` configuration. It's a simple matter of adding one line for each additional recipient.
To test whether or not this code is working properly, you can add a fake critical message to your application like so:
logs.Logger.Critical("test Critical message")
Don't forget to delete it once you're done testing, or when your application goes live, your inbox may be flooded with email notifications.
Now, whenever our application logs a critical message while online, you and your specified recipients will receive a notification email. You and your team can then process and remedy the situation in a timely manner.
## Using application logs
When it comes to logs, each application's use-case may vary. For example, some people use logs for data analysis purposes, others for performance optimization. Some logs are used to analyze user behavior and how people interact with your website. Of course, there are logs which are simply used to record application events as auxiliary data for finding problems.
As an example, let's say we need to track user attempts at logging into our system. This involves recording both successful and unsuccessful login attempts in our log. Successful attempts might be recorded at the "Info" level, while failed attempts are better recorded at the more serious "warn" level; note that with our configuration's `minlevel` set to `warn`, only the warnings would actually be persisted. If you're using a linux-type system, you can conveniently view all unsuccessful login attempts from the log using the `grep` command like so:
# cat /data/logs/roll.log | grep "failed login"
2012-12-11 11:12:00 WARN : failed login attempt from 11.22.33.44 username password
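For reference, here is a sketch of the call that could produce such an entry; the handler shape and the `ip` and `username` variables are assumptions for illustration:
	// Inside a hypothetical login handler; ip and username come from the request.
	if !passwordMatches(username, password) {
	    logs.Logger.Warnf("failed login attempt from %v username %v", ip, username)
	}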
This way, we can easily find the appropriate information in our application log, which can help us to perform statistical analysis if needed. In addition, we also need to consider the size of logs generated by high-traffic web applications. These logs can sometimes grow unpredictably. To resolve this issue, we can configure `seelog`'s rolling file output (the `rollingfile` element shown in the configuration above) or an external logrotate policy so that single log files do not consume excessive disk space.
## Summary
In this section, we've learned the basics of `seelog` and how to build a custom logging system with it. We saw that we can easily configure `seelog` into as powerful a log processing system as we need, using it to supply us with reliable sources of data for analysis. Through log analysis, we can optimize our system and easily locate the sources of problems when they arise. In addition, `seelog` ships with various default log levels. We can use the `minlevel` configuration in conjunction with a log level to easily set up tests or send automated notification messages.
## Links
- [Directory](preface.md)
- Previous section: [Deployment and maintenance](12.0.md)
- Next section: [Errors and crashes](12.2.md)
| 51.59116 | 950 | 0.762262 | eng_Latn | 0.994056 |
93dbdff14ae38acd0c16522042c1da310d8ea363 | 1,168 | md | Markdown | docs/pro/singleflight.md | disc/centrifugal.dev | ce1dafce30036d7730dda648803e42900f9ac489 | [
"Apache-2.0"
] | null | null | null | docs/pro/singleflight.md | disc/centrifugal.dev | ce1dafce30036d7730dda648803e42900f9ac489 | [
"Apache-2.0"
] | null | null | null | docs/pro/singleflight.md | disc/centrifugal.dev | ce1dafce30036d7730dda648803e42900f9ac489 | [
"Apache-2.0"
] | null | null | null | ---
id: singleflight
title: Singleflight
---
Centrifugo PRO provides an additional boolean option, `use_singleflight` (default `false`). When this option is enabled, Centrifugo automatically tries to merge identical requests for history, online presence, or presence stats issued at the same time into a single real network request.
This option can radically reduce the load on a broker in the following situations:
* Many clients are subscribed to the same channel and, in a massive reconnect scenario, try to access history simultaneously to restore their state (whether manually using the history API or through the automatic recovery feature)
* Many clients are subscribed to the same channel with the positioning feature on, so Centrifugo tracks client positions
* Many clients are subscribed to the same channel and, in a massive reconnect scenario, call presence or presence stats simultaneously
Using this option only makes sense with a remote engine (Redis, KeyDB, Tarantool); it provides no benefit with the Memory engine.
To enable:
```json title="config.json"
{
...
"use_singleflight": true
}
```
Or via `CENTRIFUGO_USE_SINGLEFLIGHT` environment variable.
| 44.923077 | 278 | 0.794521 | eng_Latn | 0.998631 |
93dc53e10054033b95806ad97c26fb7b2304676b | 1,312 | md | Markdown | README.md | ShantanuSen/Drowsiness-Detection | 469c907afffacd3bfa9de03b8574c0f49b1aa0c3 | [
"CC0-1.0"
] | null | null | null | README.md | ShantanuSen/Drowsiness-Detection | 469c907afffacd3bfa9de03b8574c0f49b1aa0c3 | [
"CC0-1.0"
] | null | null | null | README.md | ShantanuSen/Drowsiness-Detection | 469c907afffacd3bfa9de03b8574c0f49b1aa0c3 | [
"CC0-1.0"
] | null | null | null | # Drowsiness-Detection
Drowsiness detection for the perfection of brain computer interface using Viola-jones algorithm
Abstract - Security and reconnaissance applications are prominent BCI paradigms that are less complex and sophisticated when there is no contamination in the Electroencephalogram (EEG) signal. The better the quality of the EEG signal, the better the performance of BCI paradigms (higher Information Transfer Rate (ITR), higher Signal-to-Noise Ratio (SNR), higher Bandwidth (BW), and so on). Drowsiness is one of the major contaminations in the EEG signal and hampers the operation of modern BCI paradigms. In this research, a non-intrusive, machine-vision-based concept is used to detect drowsiness in the patient, which helps ensure a drowsiness-free EEG signal. In the proposed system, a camera is placed so that it records the subject's (BCI user's) eye movements at all times and can monitor the open and closed states of the eyes. The Viola-Jones algorithm is used to detect the face as well as the state of the eyes (open, closed, or semi-open), which is the key concern for detecting drowsiness in the patient's EEG signal. After detecting drowsiness, a decision can easily be made for the correct operation of the BCI.
Full paper link: https://ieeexplore.ieee.org/document/7873106
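For illustration only (this sketch is not from the paper), the Viola-Jones detector is available in OpenCV via Haar cascades; a rough face and eye-state detection loop might look like this in Python:
```python
import cv2
# Haar cascade files implement the Viola-Jones detector; these ship with OpenCV.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)  # camera watching the BCI user
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_roi)
        # Heuristic: no detectable eyes across consecutive frames suggests
        # closed eyes, i.e. possible drowsiness.
        print("eyes detected:", len(eyes))
```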
| 187.428571 | 1,128 | 0.810976 | eng_Latn | 0.999111 |
93dc758039b096bc2893bb0a24b2255c383f0f1b | 11,242 | md | Markdown | node_modules/applause/README.md | sbkgames/dontshaveyourbeard | dc60254ace66b449b85f50224ea4ac272862fc8d | [
"MIT"
] | null | null | null | node_modules/applause/README.md | sbkgames/dontshaveyourbeard | dc60254ace66b449b85f50224ea4ac272862fc8d | [
"MIT"
] | null | null | null | node_modules/applause/README.md | sbkgames/dontshaveyourbeard | dc60254ace66b449b85f50224ea4ac272862fc8d | [
"MIT"
] | null | null | null | # Applause [](https://travis-ci.org/outaTiME/applause) [](http://badge.fury.io/js/applause)
A pattern replacer that helps you create human-friendly replacements.
## Install
First make sure you have installed the latest version of [node.js](http://nodejs.org/)
(You may need to restart your computer after this step).
From NPM for use as a command line app:
```shell
npm install applause -g
```
From NPM for programmatic use:
```shell
npm install applause
```
From Git:
```shell
git clone git://github.com/outaTiME/applause
cd applause
npm link .
```
## API Reference
Assuming installation via NPM, you can use `applause` in your application like this:
```javascript
var fs = require('fs');
var Applause = require('applause');
var options = {
patterns: [
{
match: 'foo',
replacement: 'bar'
}
]
};
var applause = Applause.create(options);
var contents = '@@foo';
var result = applause.replace(contents);
console.log(result); // bar
```
### Applause Options
#### patterns
Type: `Array`
Define patterns that will be used to replace the contents of source files.
#### patterns.match
Type: `String|RegExp`
Indicates the matching expression.
If the match type is `String`, we use a simple variable lookup mechanism (`@@string`); in any other case we use the default regexp replace logic:
```javascript
{
patterns: [
{
match: 'foo',
replacement: 'bar' // replaces "@@foo" to "bar"
}
]
}
```
#### patterns.replacement or patterns.replace
Type: `String|Function|Object`
Indicates the replacement for the match; for more information about replacements, check out [String.replace].
You can specify a function as replacement. In this case, the function will be invoked after the match has been performed. The function's result (return value) will be used as the replacement string.
```javascript
{
patterns: [
{
match: /foo/g,
replacement: function () {
return 'bar'; // replaces "foo" to "bar"
}
}
]
}
```
Also supports object as replacement (we create string representation of object using [JSON.stringify]):
```javascript
{
patterns: [
{
match: /foo/g,
replacement: [1, 2, 3] // replaces "foo" with string representation of "array" object
}
]
}
```
[String.replace]: http://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace
[JSON.stringify]: http://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify
#### patterns.json
Type: `Object`
If a `json` attribute is found in the pattern definition, we flatten the object using `delimiter` concatenation, and each key-value pair will be used for the replacement (simple variable lookup mechanism; no regexp support).
```javascript
{
patterns: [
{
json: {
"key": "value" // replaces "@@key" to "value"
}
}
]
}
```
Also supports nested objects:
```javascript
{
patterns: [
{
json: {
"key": "value", // replaces "@@key" to "value"
"inner": { // replaces "@@inner" with string representation of "inner" object
"key": "value" // replaces "@@inner.key" to "value"
}
}
}
]
}
```
For deferred invocations, it is possible to define functions:
```javascript
{
patterns: [
{
json: function (done) {
done({
key: 'value'
});
}
}
]
}
```
#### patterns.yaml
Type: `String`
If a `yaml` attribute is found in the pattern definition, it will be converted and then processed like the [json attribute](#patternsjson).
```javascript
{
patterns: [
{
yaml: 'key: value' // replaces "@@key" to "value"
}
]
}
```
For deferred invocations, it is possible to define functions:
```javascript
{
patterns: [
{
yaml: function (done) {
done('key: value');
}
}
]
}
```
#### patterns.cson
Type: `String`
If a `cson` attribute is found in the pattern definition, it will be converted and then processed like the [json attribute](#patternsjson).
```javascript
{
patterns: [
{
cson: 'key: \'value\''
}
]
}
```
For deferred invocations, it is possible to define functions:
```javascript
{
patterns: [
{
cson: function (done) {
done('key: \'value\'');
}
}
]
}
```
#### variables
Type: `Object`
This is the old way of defining patterns, using a plain object (simple variable lookup mechanism; no regexp support). You can still use it, but for more control you should use the new `patterns` way.
```javascript
{
variables: {
'key': 'value' // replaces "@@key" to "value"
}
}
```
#### prefix
Type: `String`
Default: `@@`
The prefix added when matching (an easy way to prevent accidental replacements).
> This only applies to the simple variable lookup mechanism.
#### usePrefix
Type: `Boolean`
Default: `true`
If set to `false`, we match the pattern without the `prefix` concatenation (useful when you want to look up a simple string).
> This only applies to the simple variable lookup mechanism.
#### preservePrefix
Type: `Boolean`
Default: `false`
If set to `true`, we preserve the `prefix` in the target.
> This only applies to the simple variable lookup mechanism, and only when `patterns.replacement` is a string.
#### delimiter
Type: `String`
Default: `.`
The delimiter used to flatten keys when using an object as the replacement.
#### preserveOrder
Type: `Boolean`
Default: `false`
If set to `true`, we preserve the pattern definition order; otherwise, patterns are sorted in ascending order to prevent replacement issues like `head` / `header` (regexp patterns are resolved last).
#### detail
Type: `Boolean`
Default: `false`
If set to `true`, `replace` returns an object with the `content` and the `detail` of the replace operation.
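A quick sketch of the `detail` flag in use; the exact shape of the `detail` entries is illustrative here, not normative:
```javascript
var Applause = require('applause');
var applause = Applause.create({
  patterns: [
    {
      match: 'foo',
      replacement: 'bar'
    }
  ],
  detail: true
});
var result = applause.replace('@@foo');
console.log(result.content); // bar
console.log(result.detail); // per-match information about the replacement
```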
### Usage Examples
#### Basic
File `src/manifest.appcache`:
```
CACHE MANIFEST
# @@timestamp
CACHE:
favicon.ico
index.html
NETWORK:
*
```
Node:
```js
var fs = require('fs');
var Applause = require('applause');
var options = {
patterns: [
{
match: 'timestamp',
replacement: new Date().getTime()
}
]
};
var applause = Applause.create(options);
var contents = fs.readFileSync('./src/manifest.appcache', 'utf8');
var result = applause.replace(contents);
console.log(result); // replaced output
```
#### Multiple matching
File `src/manifest.appcache`:
```
CACHE MANIFEST
# @@timestamp
CACHE:
favicon.ico
index.html
NETWORK:
*
```
File `src/humans.txt`:
```
__ _
_ _/__ /./|,//_`
/_//_// /_|/// //_, outaTiME v.@@version
/* TEAM */
Web Developer / Graphic Designer: Ariel Oscar Falduto
Site: http://www.outa.im
Twitter: @outa7iME
Contact: afalduto at gmail dot com
From: Buenos Aires, Argentina
/* SITE */
Last update: @@timestamp
Standards: HTML5, CSS3, robotstxt.org, humanstxt.org
Components: H5BP, Modernizr, jQuery, Twitter Bootstrap, LESS, Jade, Grunt
Software: Sublime Text 2, Photoshop, LiveReload
```
Node:
```js
var fs = require('fs');
var pkg = require('./package.json');
var Applause = require('applause');
var options = {
patterns: [
{
match: 'version',
replacement: pkg.version
},
{
match: 'timestamp',
replacement: new Date().getTime()
}
]
};
var applause = Applause.create(options);
var contents = fs.readFileSync('./src/manifest.appcache', 'utf8');
var result = applause.replace(contents);
console.log(result); // replaced output
contents = fs.readFileSync('./src/humans.txt', 'utf8');
result = applause.replace(contents);
console.log(result); // replaced output
```
#### Cache busting
File `src/index.html`:
```html
<head>
<link rel="stylesheet" href="/css/style.css?rel=@@timestamp">
<script src="/js/app.js?rel=@@timestamp"></script>
</head>
```
Node:
```js
var fs = require('fs');
var Applause = require('applause');
var options = {
patterns: [
{
match: 'timestamp',
replacement: new Date().getTime()
}
]
};
var applause = Applause.create(options);
var contents = fs.readFileSync('./src/index.html', 'utf8');
var result = applause.replace(contents);
console.log(result); // replaced output
```
#### Include file
File `src/index.html`:
```html
<body>
@@include
</body>
```
Node:
```js
var fs = require('fs');
var Applause = require('applause');
var options = {
patterns: [
{
match: 'include',
replacement: fs.readFileSync('./includes/content.html', 'utf8')
}
]
};
var applause = Applause.create(options);
var contents = fs.readFileSync('./src/index.html', 'utf8');
var result = applause.replace(contents);
console.log(result); // replaced output
```
#### Regular expression
File `src/username.txt`:
```
John Smith
```
Node:
```js
var fs = require('fs');
var Applause = require('applause');
var options = {
patterns: [
{
match: /(\w+)\s(\w+)/,
replacement: '$2, $1' // replaces "John Smith" to "Smith, John"
}
]
};
var applause = Applause.create(options);
var contents = fs.readFileSync('./username.txt', 'utf8');
var result = applause.replace(contents);
console.log(result); // replaced output
```
#### Lookup for `foo` instead of `@@foo`
Node:
```js
var Applause = require('applause');
// option 1 (explicitly using an regexp)
var applause_op1 = Applause.create({
patterns: [
{
match: /foo/g,
replacement: 'bar'
}
]
});
// option 2 (easy way)
var applause_op2 = Applause.create({
patterns: [
{
match: 'foo',
replacement: 'bar'
}
],
usePrefix: false
});
// option 3 (old way)
var applause_op3 = Applause.create({
patterns: [
{
match: 'foo',
replacement: 'bar'
}
],
prefix: '' // remove prefix
});
```
## Release History
* 2015-08-19 v1.0.4 Small fixes and package.json update.
* 2015-08-11 v1.0.0 Version stabilization and Meteor integration.
* 2015-08-06 v0.4.3 Fix issue with special characters attributes ($$, $&, $`, $', $n or $nn) on JSON, YAML and CSON.
* 2015-05-07 v0.4.1 Fix regression issue with empty string in replacement.
* 2015-05-01 v0.4.0 New test cases, parse CSON using [@groupon](https://github.com/groupon/cson-parser), replace alias now supported, new detail flag and third party dependencies updated.
* 2014-10-10 v0.3.4 Escape regexp when matching type is `String`.
* 2014-06-10 v0.3.3 Remove node v.8.0 support and third party dependencies updated.
* 2014-04-18 v0.3.2 JSON / YAML / CSON as function supported. Readme updated (thanks [@milanlandaverde](https://github.com/milanlandaverde)).
* 2014-03-23 v0.3.1 Readme updated.
* 2014-03-22 v0.3.0 Performance improvements. Expression flag removed. New pattern matching for CSON object. More test cases, readme updated and code cleanup.
* 2014-03-21 v0.2.0 Project rename from `pattern-replace` to `applause` (thanks Lady Gaga). Test cases in Mocha and readme updated.
* 2014-03-11 v0.1.2 New pattern matching for YAML object. New preserveOrder flag.
* 2014-02-26 v0.1.1 Remove the force flag (only applies in grunt plugin).
* 2014-02-25 v0.1.0 Initial version.
---
Task submitted by [Ariel Falduto](http://outa.im/)
| 21.372624 | 219 | 0.655044 | eng_Latn | 0.769354 |
93dc8dc83035359fcf74e294206ac0d541bcf6e4 | 6,571 | md | Markdown | content/home/teaching-old.md | jcspilio/website | e8f96775610d454085f1bab2591514b61e3194bd | [
"MIT"
] | null | null | null | content/home/teaching-old.md | jcspilio/website | e8f96775610d454085f1bab2591514b61e3194bd | [
"MIT"
] | null | null | null | content/home/teaching-old.md | jcspilio/website | e8f96775610d454085f1bab2591514b61e3194bd | [
"MIT"
] | null | null | null | +++
# A Projects section created with the Portfolio widget.
widget = "portfolio" # See https://sourcethemes.com/academic/docs/page-builder/
headless = true # This file represents a page section.
active = false # Activate this widget? true/false
weight = 30 # Order that this section will appear.
title = "Teaching"
subtitle = ""
[content]
# Page type to display. E.g. project.
page_type = "teaching"
# Filter toolbar (optional).
# Add or remove as many filters (`[[content.filter_button]]` instances) as you like.
# To show all items, set `tag` to "asterisk".
# To filter by a specific tag, set `tag` to an existing tag name.
# To remove toolbar, delete/comment all instances of `[[content.filter_button]]` below.
# Default filter index (e.g. 0 corresponds to the first `[[filter_button]]` instance below).
filter_default = 0
[[content.filter_button]]
name = "Recent"
tag = "Recent teaching"
[[content.filter_button]]
name = "Mathematics"
tag = "Mathematics"
[[content.filter_button]]
name = "Data science"
tag = "Data science"
[[content.filter_button]]
name = "Behaviour"
tag = "Behaviour"
[[content.filter_button]]
name = "Finance"
tag = "Finance"
[design]
# Choose how many columns the section has. Valid values: 1 or 2.
columns = "2"
# Toggle between the various page layout types.
# 1 = List
# 2 = Compact
# 3 = Card
# 5 = Showcase
view = 2
# For Showcase view, flip alternate rows?
flip_alt_rows = false
[design.background]
# Apply a background color, gradient, or image.
# Uncomment (by removing `#`) an option to apply it.
# Choose a light or dark text color by setting `text_color_light`.
# Any HTML color name or Hex value is valid.
# Background color.
# color = "navy"
# Background gradient.
# gradient_start = "DeepSkyBlue"
# gradient_end = "SkyBlue"
# Background image.
image = "blackboard2.jpg" # Name of image in `static/img/`.
image_darken = 0.1 # Darken the image? Range 0-1 where 0 is transparent and 1 is opaque.
# Text color (true=light or false=dark).
text_color_light = true
[advanced]
# Custom CSS.
css_style = ""
# CSS class.
css_class = ""
+++
| Course | Institution | Year | Convenor (if not me) |
|------------------------------------------------------------------------------------------------------------------------------------- |---------------------- |------------------ |---------------------------------------------- |
| [Game Theory](https://jcspil.io/teaching/game-theory/) | Stanford House | Trinity 2019 | |
| [Statistics and Data Analysis](https://courses.maths.ox.ac.uk/node/37534) | Oriel College | Trinity 2019 | Dr Dino Sejdinovic<br>Prof Christl Donnelly |
| [Game Theory](https://jcspil.io/teaching/game-theory/) | Worcester College | Trinity 2019 | Dr Péter Eső |
| [Statistics II](https://courses.maths.ox.ac.uk/node/37712) | Oriel College | Hilary 2019 | Dr Neil Laws |
| [Micro-finance](https://jcspil.io/teaching/micro-finance/) | Stanford House | Hilary 2019 | |
| [Probability II](https://courses.maths.ox.ac.uk/node/37706) | Oriel College | Michaelmas 2018 | Dr Matthias Winkel |
| [Probability I](https://courses.maths.ox.ac.uk/node/37524) | Oriel College | Michaelmas 2018 | Dr James Martin |
| [Financial and Quantitative Economics](https://jcspil.io/teaching/financial-and-quantitative-economics/) | Stanford House | Michaelmas 2018 | |
| [Who's Afraid of the Big (Bad) Data?](https://jcspil.io/teaching/big-bad-data/) | The IDHouse | Summer 2018 | |
| [Financial Crises and Crashes](https://jcspil.io/teaching/financial-crises-and-crashes/) | CBL International | Easter 2018 | |
| [Behavioural Economics](https://jcspil.io/teaching/behavioral-economics/) | Stanford House | Hilary 2018 | |
| Decision and [Game Theory](https://jcspil.io/teaching/game-theory/) | Stanford House | Hilary 2018 | |
| [Game Theory](https://jcspil.io/teaching/game-theory/) | Worcester College | Michaelmas 2017 | Dr Péter Eső |
| [Game Theory](https://jcspil.io/teaching/game-theory/) | Worcester College | Trinity 2017 | Dr Péter Eső |
| [Intermediate Econometrics](https://elearning.sydney.edu.au/webapps/portal/execute/tabs/tabAction?tab_tab_group_id=_26_1) | University of Sydney | Semester 1, 2016 | Prof Deborah Cobb-Clark |
| [Introduction to Economic Statistics](https://elearning.sydney.edu.au/webapps/portal/execute/tabs/tabAction?tab_tab_group_id=_26_1) | University of Sydney | Semester 1, 2016 | Dr Timothy Fisher |
| [Introduction to Econometrics](https://elearning.sydney.edu.au/webapps/portal/execute/tabs/tabAction?tab_tab_group_id=_26_1) | University of Sydney | Semester 2, 2015 | Dr Kadir Atalay<br>Dr Peter Exterkate |
| 62.580952 | 228 | 0.484401 | eng_Latn | 0.366957 |
93dc937a3c1b623e72bd2dcd10de394e0eecc998 | 3,238 | md | Markdown | _posts/2015-12-09-8daysofonika.md | enricll/enricll.github.io | 9d77d96766a0d228ce55086e10f79952c77e8c1c | [
"MIT"
] | null | null | null | _posts/2015-12-09-8daysofonika.md | enricll/enricll.github.io | 9d77d96766a0d228ce55086e10f79952c77e8c1c | [
"MIT"
] | null | null | null | _posts/2015-12-09-8daysofonika.md | enricll/enricll.github.io | 9d77d96766a0d228ce55086e10f79952c77e8c1c | [
"MIT"
] | null | null | null | ---
date: "2015-12-09T02:37:03+00:00"
draft: false
tags: ["música"]
title: "#8DaysofOnika"
---
So, yeah, today it's Nicki's birthday, and I know last week there was a campaign all over Twitter with the hashtag #8DaysofOnika. I did the first day and then I forgot (as usual), so I'm going to do it now.
<!-- more -->
**Your Favorite Music Video**: I'll have to go with "Lookin Ass". The song is bad-ass as fuck, but the video is even better. Seriously, I still haven't seen _anyone_ make a video like this.
<iframe width="540" height="405" id="youtube_iframe" src="https://www.youtube.com/embed/2mwNbTL3pOs?feature=oembed&enablejsapi=1&origin=https://safe.txmblr.com&wmode=opaque" frameborder="0" allowfullscreen=""></iframe>
----
**Your Favorite Quote**: “I’m a human beeeeeiiiing!”. The whole video is iconic (THE WHOLE DOCUMENTARY IS ICONIC), but this moment is brilliant. She stops talking, looks at us, and says it, while the camera zooms in.
<iframe width="540" height="405" id="youtube_iframe" src="https://www.youtube.com/embed/PzGZamtlRP0?feature=oembed&enablejsapi=1&origin=https://safe.txmblr.com&wmode=opaque" frameborder="0" allowfullscreen=""></iframe>
----
**Your Favorite Song:** “Freedom”. No question. [I wrote an entire article on why I would choose it...](https://medium.com/@enricll/this-is-the-best-nicki-minaj-song-e731a9b177b5#.efpx92ldv)
<iframe width="540" height="405" id="youtube_iframe" src="https://www.youtube.com/embed/54zpFh0KuK0?feature=oembed&enablejsapi=1&origin=https://safe.txmblr.com&wmode=opaque" frameborder="0" allowfullscreen=""></iframe>
----
**Your Favorite Performance**: [I have a YouTube playlist with my favorite performances](https://www.youtube.com/playlist?list=PLUqCSYhMF85UGSVjesRzKD6zGnpsuPUvX), but my #1 is definitely “High School” with Lil Wayne at the Billboard Music Awards back in 2013.
<iframe width="540" height="405" id="youtube_iframe" src="https://www.youtube.com/embed/JiYxO1Hmvkk?feature=oembed&enablejsapi=1&origin=https://safe.txmblr.com&wmode=opaque" frameborder="0" allowfullscreen=""></iframe>
----
**Your Favorite Outfit:** I don’t want to repeat any song (I love her outfit on the “High School” performance so much), so I’ll choose her outfit on “Throw Sum Mo” video. Fucking sick.
<iframe width="540" height="405" id="youtube_iframe" src="https://www.youtube.com/embed/fwrY0D2ACNk?feature=oembed&enablejsapi=1&origin=https://safe.txmblr.com&wmode=opaque" frameborder="0" allowfullscreen=""></iframe>
----
**Your Favorite Album/Mixtape**: _The Re-Up_ as an EP is amazing, and it's overlooked most of the time even though it has her best rapping, both lyrically and technically. On the other hand, _The Pinkprint_ and _Pink Friday_ are far more cohesive, with better production and songwriting. Don't make me choose, please.

----
**Your Favorite Selfie**: This series of selfies fucked me up.

----
**Your Happy Birthday Message**: Don’t ever change, keep doing QUALITY music and being the boss ass bitch you are. Just... Thanks. THANKS.
| 67.458333 | 309 | 0.75139 | eng_Latn | 0.759797 |
93dd5fa6b7c7778ff3f5cd86043458cdc2b7f2e4 | 214 | md | Markdown | packages/iota/docs/interfaces/IMessageIdResponse.md | eike-hass/iota.js | 18ddf1ab7ffdd2ef59c6b345fe9ec70a4629bb48 | [
"Apache-2.0"
] | 1 | 2022-02-27T05:28:41.000Z | 2022-02-27T05:28:41.000Z | packages/iota/docs/interfaces/IMessageIdResponse.md | eike-hass/iota.js | 18ddf1ab7ffdd2ef59c6b345fe9ec70a4629bb48 | [
"Apache-2.0"
] | null | null | null | packages/iota/docs/interfaces/IMessageIdResponse.md | eike-hass/iota.js | 18ddf1ab7ffdd2ef59c6b345fe9ec70a4629bb48 | [
"Apache-2.0"
] | null | null | null | # Interface: IMessageIdResponse
Message id response.
## Table of contents
### Properties
- [messageId](IMessageIdResponse.md#messageid)
## Properties
### messageId
• **messageId**: `string`
The message id.
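A minimal usage sketch (the import path and handler are assumptions for illustration):
```typescript
import type { IMessageIdResponse } from "@iota/iota.js";
// Hypothetical handler for a submit-message result.
function logMessageId(response: IMessageIdResponse): void {
    console.log(`Submitted message ${response.messageId}`);
}
```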
| 11.888889 | 46 | 0.714953 | eng_Latn | 0.251701 |
93dd6af18af732677825bcfc7b0378df9ec6afea | 7,225 | md | Markdown | rfc/trace_identifiers.md | carlosalberto/specification | 1be630515dafd4d2a468d083300900f89f28e24d | [
"Apache-2.0"
] | null | null | null | rfc/trace_identifiers.md | carlosalberto/specification | 1be630515dafd4d2a468d083300900f89f28e24d | [
"Apache-2.0"
] | null | null | null | rfc/trace_identifiers.md | carlosalberto/specification | 1be630515dafd4d2a468d083300900f89f28e24d | [
"Apache-2.0"
] | null | null | null | # Trace Identifiers
**Current State:** Draft
**Author:** [tedsuo](https://github.com/tedsuo)
The OpenTracing SpanContext interface is extended to include `SpanID` and `TraceID` accessors.
The OpenTracing model of computation specifies two primary object types, `Spans` and `Traces`, but does not specify identifiers for these objects. Identifiers for the two primary object types make it easier to correlate tracing data with data in other systems, simplify important tasks, and allow the creation of reusable trace observers. Some use cases are detailed below.
# Background: Existing Protocols
Before discussing changes to the OpenTracing specification, it’s worth reviewing several popular wire protocols which contain these trace identifiers.
## Trace-Context HTTP Headers
[Trace-Context HTTP headers](https://github.com/w3c/distributed-tracing) are in the process of being standardized via the W3C. The tracing community has voiced strong support for implementing these headers for use in tracing interop.
The `Traceparent` header contains the following fields: `version`, `trace-id`, `span-id`, and `trace-options`.
| field | format | description |
| :--- | :--- | :--- |
| `trace-id` | 128-bit; 32HEXDIG | The ID of the whole trace forest. If all bytes are 0, the `Trace-Parent` may be ignored. |
| `span-id` | 64-bit; 16HEXDIG | The ID of the caller span (parent). If all bytes are 0, the `Trace-Parent` may be ignored. |
## B3 HTTP Headers
The [B3 HTTP headers](https://github.com/openzipkin/b3-propagation) are widely adopted, mostly by Zipkin-like tracing systems. The B3 protocol includes `X-B3-TraceId` and `X-B3-SpanId` as required headers, which contain the `TraceId` and `SpanId` values, respectively.
| field | format | description |
| :--- | :--- | :--- |
| `TraceId` | 64 or 128-bit; opaque | The ID of the trace. Every span in a trace shares this ID. |
| `SpanId` | 64-bit; opaque | Indicates the position of the current operation in the trace tree. The value may or may not be derived from the value of the `traceId`. |
# Specification Changes
The `SpanContext` section of the specification is extended to include the following properties:
| method | format | description |
| :--- | :--- | :--- |
| `TraceID` | string | Globally unique. Every span in a trace shares this ID. |
| `SpanID` | string | Unique within a trace. Each span within a trace contains a different ID. |
**String** values are used for identifiers. In this context, a string is defined as an immutable, variable length sequence of characters. The empty string is a valid return type.
A string is preferred over other formats for the following reasons:
* Forwards compatibility with future versions of Trace-Context and other standards.
* Backwards compatibility with pre-existing ID formats.
* Strongly supported across many languages, and commonly used for transferring data between independent subsystems.
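A minimal sketch of these accessors in Go (the specification itself is language-neutral; the interface name and method set here are illustrative):
```go
// SpanContext is extended with read-only identifier accessors.
// Existing methods (e.g. ForeachBaggageItem) are omitted for brevity.
type SpanContext interface {
    // TraceID returns the globally unique trace identifier, or the
    // empty string if the tracer does not support identifiers.
    TraceID() string
    // SpanID returns this span's identifier, unique within its trace,
    // or the empty string if unsupported.
    SpanID() string
}
```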
## Alternate Formats
In some cases, additional formats may be appropriate, if a language supports multiple common transport formats. Rather than manually converting the string value to another format, additional accessors could be added to allow the tracer to do the conversion.
If tracing systems converge on common in-memory formats for Trace-Context identifiers, accessors may be added for those formats as well.
## Backwards Compatibility and Optional Support
The OpenTracing specification does not currently require trace and span identifiers. To continue support for existing tracers, the empty string value can be returned when no ID has been set.
# Use Cases
## Log Correlation
The primary expected consumer for Trace-Context identifiers are logging systems which run independently from the tracing system.
Log indexing has become a common practice, often by including a request identifier in the log. In the past, this has involved manually propagating these identifiers as headers. However, systems using OpenTracing automatically propagate these identifiers via the Inject/Extract interface. Some of these identifiers are user-generated and contained in Baggage. However, the most relevant identifiers for log indexing are the Trace and Span IDs. Therefore, exposing these values would be immensely valuable.
## Trace Observers
The OpenTracing community would like to develop secondary observation systems which utilize the tracing runtime, but are tracer-independent. Trace and span identifiers would allow these observers to correlate tracing data without having knowledge of the wire protocol or tracing implementation. Examples include:
* Generating metrics from tracing data
* Integrating with runtime and system diagnostics
* Integrating with 3rd-party context-propagation systems
* Correlating logs, as mentioned above
# Risk Assessment
Because this proposal includes the exposure of new information, and adds entirely new concepts to the interface, some risks exist.
## Tracer support
Some existing tracers may not be able to support this feature, as their internal model does not include any client-side trace identifiers. These tracers may choose to not support this feature by returning empty string values.
## Protocol support
It's possible that new tracing protocols may emerge which use an entirely different header scheme. Examples could include a tracing system which handles trace joins explicitly as part of the protocol, and thus no longer has an equivalent concept of a trace id, or a system which uses backpropagation to contain additional data.
This is mitigated by the fact that the concepts of a span and a trace are directly part of the OpenTracing model of computation. Some form of identifier for these objects will be available to a tracer that conforms to this model of computation. While it's likely that additional identifiers or interfaces may be necessary to handle future changes, it is impossible that the trace and span concepts will be removed from this version of OpenTracing.
Because the accessors produce a variable-width string value, new formats and wire protocols for these identifiers will not result in a breaking change for the OpenTracing interface. Likewise, systems which consume this data are by definition separate from the tracing system, and are not dependent on the format of the identifier.
## Extra Allocations and Overhead
Internally, tracers do not always use strings to represent their identifiers. So there is a conversion cost when using these accessors.
While a single allocation may be inevitable, exposing accessors in additional formats could be done to prevent double allocations while formatting the identifiers. For example, converting from a tracer’s native format to a string may trigger an allocation. If there are many systems which want to consume the identifier in a format which requires an allocation when converting from a string, a second allocation could occur.
## Restrictions on Length and Formatting
There may be some advantage in specifying a maximum length for an identifier, or restricting the available character set. However, it is currently not clear what the correct values for these limits should be, or which use cases would benefit from applying them.
| 80.277778 | 511 | 0.790035 | eng_Latn | 0.999338 |
93dee872fd31440f632ca950473ebc553e04c89d | 29 | md | Markdown | README.md | ikhwan-cr/ikhwan | 93953374c2bef0ce0a8bb0463f0cbd2d187d187b | [
"Unlicense"
] | null | null | null | README.md | ikhwan-cr/ikhwan | 93953374c2bef0ce0a8bb0463f0cbd2d187d187b | [
"Unlicense"
] | null | null | null | README.md | ikhwan-cr/ikhwan | 93953374c2bef0ce0a8bb0463f0cbd2d187d187b | [
"Unlicense"
] | null | null | null | # ikhwan
Tahap pembelajaran
| 9.666667 | 19 | 0.793103 | zsm_Latn | 0.883206 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.