Dataset schema (one row per source file):

| Column | Type | Length / range | Nullable |
|---|---|---|---|
| hexsha | string | 40 | no |
| size | int64 | 5 – 1.04M | no |
| ext | string | 6 classes | no |
| lang | string | 1 class | no |
| max_stars_repo_path | string | 3 – 344 | no |
| max_stars_repo_name | string | 5 – 125 | no |
| max_stars_repo_head_hexsha | string | 40 – 78 | no |
| max_stars_repo_licenses | sequence | 1 – 11 | no |
| max_stars_count | int64 | 1 – 368k | yes |
| max_stars_repo_stars_event_min_datetime | string | 24 | yes |
| max_stars_repo_stars_event_max_datetime | string | 24 | yes |
| max_issues_repo_path | string | 3 – 344 | no |
| max_issues_repo_name | string | 5 – 125 | no |
| max_issues_repo_head_hexsha | string | 40 – 78 | no |
| max_issues_repo_licenses | sequence | 1 – 11 | no |
| max_issues_count | int64 | 1 – 116k | yes |
| max_issues_repo_issues_event_min_datetime | string | 24 | yes |
| max_issues_repo_issues_event_max_datetime | string | 24 | yes |
| max_forks_repo_path | string | 3 – 344 | no |
| max_forks_repo_name | string | 5 – 125 | no |
| max_forks_repo_head_hexsha | string | 40 – 78 | no |
| max_forks_repo_licenses | sequence | 1 – 11 | no |
| max_forks_count | int64 | 1 – 105k | yes |
| max_forks_repo_forks_event_min_datetime | string | 24 | yes |
| max_forks_repo_forks_event_max_datetime | string | 24 | yes |
| content | string | 5 – 1.04M | no |
| avg_line_length | float64 | 1.14 – 851k | no |
| max_line_length | int64 | 1 – 1.03M | no |
| alphanum_fraction | float64 | 0 – 1 | no |
| lid | string | 191 classes | no |
| lid_prob | float64 | 0.01 – 1 | no |
<!-- docs/index.md (anand2312/scheems, MIT) -->

# Welcome to Scheems
**Scheems** is an API generator for your relational databases.
!!! warning
    This project is a work in progress.
Scheems can create APIs that allow CRUD operations on your databases.

Database support is implemented with SQLAlchemy, while request validation is implemented with [Pydantic](https://pydantic-docs.helpmanual.io/).

Just define your models with SQLAlchemy, add them to Scheems, and you're good to go!

Powered by [SQLAlchemy 🧙♂️](https://www.sqlalchemy.org/) and [Starlette 🌟](https://www.starlette.io/).
<!-- src/routes/utils/pivot.md (techniq/layerchart, MIT) -->

---
title: ['Utils', 'Pivot']
---
<script lang="ts">
import Preview from '$lib/docs/Preview.svelte';
import { pivotLonger, pivotWider } from '$lib/utils/pivot';
import { wideData, longData } from '$lib/utils/genData';
const wideDataDisplay = JSON.stringify(wideData, null, 2);
const longDataDisplay = JSON.stringify(longData, null, 2);
const pivotLongerResult = pivotLonger(wideData, ['apples', 'bananas', 'cherries', 'dates'], 'fruit', 'value');
const pivotLongerDisplay = JSON.stringify(pivotLongerResult, null, 2);
const pivotWiderResult = pivotWider(longData, 'year', 'fruit', 'value');
const pivotWiderDisplay = JSON.stringify(pivotWiderResult, null, 2);
</script>
## pivotLonger
### Before
<Preview code={wideDataDisplay} highlight>
wideData
</Preview>
### After
<Preview code={pivotLongerDisplay} highlight>
pivotLonger(wideData, ['apples', 'bananas', 'cherries', 'dates'], 'fruit', 'value')
</Preview>
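Conceptually, `pivotLonger` melts the listed value columns into name/value pairs, one output row per column per input row. The sketch below is a hypothetical minimal re-implementation for illustration only; it is not layerchart's actual source, and the real utility may handle edge cases differently:

```typescript
// Hypothetical re-implementation of pivotLonger (illustrative, not the library source).
type Row = Record<string, unknown>;

function pivotLonger(
  data: Row[],
  cols: string[],
  namesTo: string,
  valuesTo: string
): Row[] {
  // Each input row fans out into one output row per listed column,
  // carrying along the remaining (id) fields unchanged.
  return data.flatMap((row) => {
    const rest: Row = {};
    for (const key of Object.keys(row)) {
      if (!cols.includes(key)) rest[key] = row[key];
    }
    return cols.map((col) => ({ ...rest, [namesTo]: col, [valuesTo]: row[col] }));
  });
}
```

With wide data such as `[{ year: 2019, apples: 3, bananas: 5 }]`, this produces one `{ year, fruit, value }` row per fruit.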
## pivotWider
### Before
<Preview code={longDataDisplay} highlight>
longData
</Preview>
### After
<Preview code={pivotWiderDisplay} highlight>
pivotWider(longData, 'year', 'fruit', 'value')
</Preview>
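Likewise, `pivotWider` groups rows by the id column and spreads each distinct name into its own column. Again, this is a hypothetical sketch of the semantics, not the library's actual implementation:

```typescript
// Hypothetical re-implementation of pivotWider (illustrative, not the library source).
type Row = Record<string, unknown>;

function pivotWider(
  data: Row[],
  idKey: string,
  nameKey: string,
  valueKey: string
): Row[] {
  const byId = new Map<unknown, Row>();
  for (const row of data) {
    const id = row[idKey];
    if (!byId.has(id)) byId.set(id, { [idKey]: id });
    // Each distinct nameKey value becomes a column on the grouped output row.
    byId.get(id)![row[nameKey] as string] = row[valueKey];
  }
  return [...byId.values()];
}
```

With long data such as `{ year, fruit, value }` rows, this collapses them back to one row per year with a column per fruit.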
<!-- WindowsServerDocs/identity/ad-fs/operations/Walkthrough-Guide--Manage-Risk-with-Conditional-Access-Control.md (ilchiodi/windowsserverdocs.it-it, CC-BY-4.0 and MIT; translated from Italian) -->

---
ms.assetid: 3a840b63-78b7-4e62-af7b-497026bfdb93
title: Walkthrough Guide - Manage Risk with Conditional Access Control
author: billmath
ms.author: billmath
manager: femila
ms.date: 05/31/2017
ms.topic: article
ms.prod: windows-server
ms.technology: identity-adfs
ms.openlocfilehash: 2ce45d3952b6f848635ed601f7ff251fcda3982c
ms.sourcegitcommit: b00d7c8968c4adc8f699dbee694afe6ed36bc9de
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/08/2020
ms.locfileid: "80857644"
---
# <a name="walkthrough-guide-manage-risk-with-conditional-access-control"></a>Walkthrough Guide: Manage Risk with Conditional Access Control

## <a name="about-this-guide"></a>About this guide

This walkthrough provides instructions for managing risk with one of the factors (user data) available through the conditional access control mechanism in Active Directory Federation Services (AD FS) in Windows Server 2012 R2. For more information about the conditional access control and authorization mechanisms in AD FS in Windows Server 2012 R2, see [Manage Risk with Conditional Access Control](../../ad-fs/operations/Manage-Risk-with-Conditional-Access-Control.md).
This scenario consists of the following sections:

- [Step 1: Set up the lab environment](../../ad-fs/operations/Walkthrough-Guide--Manage-Risk-with-Conditional-Access-Control.md#BKMK_1)

- [Step 2: Verify the default AD FS access control mechanism](../../ad-fs/operations/Walkthrough-Guide--Manage-Risk-with-Conditional-Access-Control.md#BKMK_2)

- [Step 3: Configure a conditional access control policy based on user data](../../ad-fs/operations/Walkthrough-Guide--Manage-Risk-with-Conditional-Access-Control.md#BKMK_3)

- [Step 4: Verify the conditional access control mechanism](../../ad-fs/operations/Walkthrough-Guide--Manage-Risk-with-Conditional-Access-Control.md#BKMK_4)
## <a name="step-1-setting-up-the-lab-environment"></a><a name="BKMK_1"></a>Step 1: Set up the lab environment

To complete the steps in this scenario, you need an environment that consists of the following components:

- An Active Directory domain with test user and group accounts, running on Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 with its schema upgraded to Windows Server 2012 R2, or an Active Directory domain running on Windows Server 2012 R2

- A federation server running on Windows Server 2012 R2

- A web server that hosts the sample application

- A client computer from which you can access the sample application

> [!WARNING]
> We recommend that you do not use the same computer as both the federation server and the web server, in either a production or a test environment.

In this environment, the federation server issues the claims that are required for users to access the sample application. The web server hosts a sample application that trusts users who present the claims issued by the federation server.

For instructions on how to set up this environment, see [Set up the lab environment for AD FS in Windows Server 2012 R2](../../ad-fs/deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md).
## <a name="step-2-verify-the-default-ad-fs-access-control-mechanism"></a><a name="BKMK_2"></a>Step 2: Verify the default AD FS access control mechanism

In this step, you verify the default AD FS access control mechanism, where the user is redirected to the AD FS sign-in page, provides valid credentials, and is granted access to the application. You can use the **Robert Hatley** Active Directory account and the **ClaimApp** sample application that you set up in [Set up the lab environment for AD FS in Windows Server 2012 R2](../../ad-fs/deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md).

#### <a name="to-verify-the-default-ad-fs-access-control-mechanism"></a>To verify the default AD FS access control mechanism

1. On your client computer, open a browser window and navigate to the sample application: **https://webserv1.contoso.com/claimapp**.

   This action automatically redirects the request to the federation server, and you are prompted to sign in with a user name and password.

2. Enter the credentials of the **Robert Hatley** Active Directory account that you created in [Set up the lab environment for AD FS in Windows Server 2012 R2](../../ad-fs/deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md).

   You are granted access to the application.
## <a name="step-3-configure-conditional-access-control-policy-based-on-user-data"></a><a name="BKMK_3"></a>Step 3: Configure a conditional access control policy based on user data

In this step, you configure an access control policy based on the user's group membership data. In other words, you configure an **issuance authorization rule** on the federation server for the relying party trust that represents your sample application, **claimapp**. By the logic of this rule, the **Robert Hatley** Active Directory user is issued the claims required to access the application because he belongs to the **Finance** group. The **Robert Hatley** account was added to the **Finance** group in [Set up the lab environment for AD FS in Windows Server 2012 R2](../../ad-fs/deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md).

You can complete this task by using the AD FS Management console or Windows PowerShell.

#### <a name="to-configure-conditional-access-control-policy-based-on-user-data-via-the-ad-fs-management-console"></a>To configure a conditional access control policy based on user data via the AD FS Management console

1. In the AD FS Management console, go to **Trust Relationships**, and then **Relying Party Trusts**.

2. Select the relying party trust that represents your sample application (**claimapp**), and then select **Edit Claim Rules**, either in the **Actions** pane or by right-clicking this relying party trust.

3. In the **Edit Claim Rules for claimapp** window, select the **Issuance Authorization Rules** tab, and then click **Add Rule**.

4. On the **Select Rule Template** page of the **Add Issuance Authorization Claim Rule Wizard**, select the **Permit or Deny Users Based on an Incoming Claim** claim rule template, and then click **Next**.

5. On the **Configure Rule** page, do all of the following, and then click **Finish**:

   1. Enter a name for the claim rule, for example, **TestRule**.

   2. Select **Group SID** as the **Incoming claim type**.

   3. Click **Browse**, type in **Finance** for the name of your test Active Directory group, and resolve it for the **Incoming claim value** field.

   4. Select the **Deny access to users with this incoming claim** option.

6. In the **Edit Claim Rules for claimapp** window, make sure to delete the **Permit Access to All Users** rule that was created by default when you created this relying party trust.
#### <a name="to-configure-conditional-access-control-policy-based-on-user-data-via-windows-powershell"></a>To configure a conditional access control policy based on user data via Windows PowerShell

1. On your federation server, open a Windows PowerShell command window and run the following command:

~~~
$rp = Get-AdfsRelyingPartyTrust -Name claimapp
~~~

2. In the same Windows PowerShell command window, run the following commands:

~~~
$GroupAuthzRule = '@RuleTemplate = "Authorization" @RuleName = "Foo" c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value =~ "^(?i)<group_SID>$"] =>issue(Type = "https://schemas.microsoft.com/authorization/claims/deny", Value = "DenyUsersWithClaim");'
Set-AdfsRelyingPartyTrust -TargetRelyingParty $rp -IssuanceAuthorizationRules $GroupAuthzRule
~~~

> [!NOTE]
> Make sure to replace `<group_SID>` with the SID value of your **Finance** Active Directory group.
## <a name="step-4-verify-conditional-access-control-mechanism"></a><a name="BKMK_4"></a>Step 4: Verify the conditional access control mechanism

In this step, you verify the conditional access control policy that you set up in the previous step. You can use the following procedure to verify that the **Robert Hatley** Active Directory user can access the sample application because he belongs to the **Finance** group, and that Active Directory users who do not belong to the **Finance** group cannot access the sample application.

1. On your client computer, open a browser window and navigate to the sample application: **https://webserv1.contoso.com/claimapp**.

   This action automatically redirects the request to the federation server, and you are prompted to sign in with a user name and password.

2. Enter the credentials of the **Robert Hatley** Active Directory account that you created in [Set up the lab environment for AD FS in Windows Server 2012 R2](../../ad-fs/deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md).

   You are granted access to the application.

3. Enter the credentials of another Active Directory user who does NOT belong to the **Finance** group. For more information about how to create user accounts in Active Directory, see [https://technet.microsoft.com/library/cc7833232.aspx](https://technet.microsoft.com/library/cc783323%28v=ws.10%29.aspx).

   At this point, because of the access control policy that you configured in the previous step, this Active Directory user who does not belong to the **Finance** group is shown an access denied message. The default message text is **You are not authorized to access this site. Click here to sign out and sign in again or contact your administrator for permissions.** However, this text is fully customizable. For more information about how to customize the sign-in experience, see [Customizing the AD FS Sign-in Pages](https://technet.microsoft.com/library/dn280950.aspx).
## <a name="see-also"></a>See also

[Manage Risk with Conditional Access Control](../../ad-fs/operations/Manage-Risk-with-Conditional-Access-Control.md)

[Set up the lab environment for AD FS in Windows Server 2012 R2](../deployment/Set-up-the-lab-environment-for-AD-FS-in-Windows-Server-2012-R2.md)
4d2f19ac8541a54dd11422a98f8563750d925537 | 1,736 | md | Markdown | help/aem-viewers-ref/c-html5-aem-asset-viewers/c-html5-aem-carousel/c-html5-aem-carousel-hotspot--image-support.md | isabella232/dynamic-media-developer-resources.en | 0e9fa3e675a7cec40a9bb513ce1b9bc6afee0df0 | [
"MIT"
] | null | null | null | help/aem-viewers-ref/c-html5-aem-asset-viewers/c-html5-aem-carousel/c-html5-aem-carousel-hotspot--image-support.md | isabella232/dynamic-media-developer-resources.en | 0e9fa3e675a7cec40a9bb513ce1b9bc6afee0df0 | [
"MIT"
] | 1 | 2021-02-23T11:04:19.000Z | 2021-02-23T11:04:19.000Z | help/aem-viewers-ref/c-html5-aem-asset-viewers/c-html5-aem-carousel/c-html5-aem-carousel-hotspot--image-support.md | isabella232/dynamic-media-developer-resources.en | 0e9fa3e675a7cec40a9bb513ce1b9bc6afee0df0 | [
"MIT"
] | null | null | null | ---
description: null
seo-description: null
seo-title: Hotspot and Image maps support
solution: Experience Manager
title: Hotspot and Image maps support
topic: Dynamic media
uuid: 839b6a7f-4f6f-43ad-8eb8-254959c7fbac
---
# Hotspot and Image maps support{#hotspot-and-image-maps-support}
The viewer supports the rendering of hotspot icons and image map regions on top of the main view. The appearance of hotspot icons and regions is controlled through CSS as described in the customize Hotspots and Image maps section.
See [Hotspots and Image maps](../../c-html5-aem-asset-viewers/c-html5-aem-carousel/c-html5-aem-carousel-customizingviewer/r-html5-aem-carousel-customize-hotspots-imagemaps.md#reference-2ac3cc414ef2467390bf53145f1d8d74).
Hotspots and regions can either activate a Quick View feature on the hosting web page by triggering a JavaScript callback or redirect a user to an external web page.
## Quick View hotspots {#section-cda48fc9730142d0bb3326bac7df3271}
These types of hotspots or image maps should be authored using the "Quick View" action type in Dynamic Media, of AEM. When a user activates such a hotspot or image map, the viewer runs the `quickViewActivate` JavaScript callback and passes the hotspot or image map data to it. It is expected that the embedding web page listens for this callback. When it triggers the page, it opens its own Quick View implementation.
## Redirect to external web page {#section-ef820c71251e4215800bb99c0c9ebe16}
Hotspots or image maps authored for the action type "Quick View" in Dynamic Media of AEM redirects the user to an external URL. Depending on settings made during authoring, the URL opens in a new browser tab, in the same window, or in the named browser window.
| 66.769231 | 417 | 0.804724 | eng_Latn | 0.976888 |
4d2f8ffebd4e61655ae7da363f9553f396a714e2 | 2,563 | md | Markdown | packages/site/docs/api/options/theme.zh.md | kevin51jiang/ant-design-charts | ada9cf6446e4ac521867b7e44cfcd49b97645350 | [
"MIT"
] | null | null | null | packages/site/docs/api/options/theme.zh.md | kevin51jiang/ant-design-charts | ada9cf6446e4ac521867b7e44cfcd49b97645350 | [
"MIT"
] | null | null | null | packages/site/docs/api/options/theme.zh.md | kevin51jiang/ant-design-charts | ada9cf6446e4ac521867b7e44cfcd49b97645350 | [
"MIT"
] | null | null | null | ---
title: 图表主题
order: 9
---
推荐使用 💄 [ThemeSet](https://theme-set.antv.vision) 在线自定义自己的主题配置。
#### 内置主题
目前默认的内置主要有两套:`default` 和 `dark`
```ts
{
theme: 'default', // 'dark',
}
```
#### 主题属性
除了使用内置的 `default` 和 `dark` 主题之外,还可以通过设置主题属性来修改部分主题内容:
下表列出了组成主题的大配置项上的具体属性:
| 主题属性 | 类型 | 描述 |
| --- | --- | ---|
| defaultColor | *string*| 主题色 |
| padding | *number* | number\[] |
| fontFamily | *string* | 图表字体 |
| colors10 | *string\[]* | 分类颜色色板,分类个数小于 10 时使用 |
| colors20 |*string\[]* | 分类颜色色板,分类个数大于 10 时使用 |
| columnWidthRatio | *number* | 一般柱状图宽度占比,0 - 1 范围数值
| maxColumnWidth | *number* | 柱状图最大宽度,像素值 |
| minColumnWidth| *number* | 柱状图最小宽度,像素值 |
| roseWidthRatio | *number* | 玫瑰图占比,0 - 1 范围数值 |
| multiplePieWidthRatio | *number* | 多层饼图/环图占比,0 - 1 范围数值 |
| geometries | *object* | 配置每个 Geometry 下每个 shape 的样式,包括默认样式以及各个状态下的样式 |
| components | *object* | 配置坐标轴,图例,tooltip, annotation 的主题样式 |
| labels | *object* | 配置 Geometry 下 label 的主题样式 |
| innerLabels | *object* | 配置 Geometry 下展示在图形内部的 labels 的主题样式 |
| pieLabels | *object* | 配置饼图 labels 的主题样式 |
使用方式:
```ts
{
theme: {
colors10: ['#FF6B3B', '#626681', '#FFC100', '#9FB40F', '#76523B', '#DAD5B5', '#0E8E89', '#E19348', '#F383A2', '#247FEA']
}
}
```
#### 主题属性(主题样式表)
除了以上介绍的主题属性之外,还可以传入主题样式表来自定义主题。如果你需要对全局主题进行配置的话,对样式风格进行切换,比如更改颜色、字体大小、边框粗细等
使用方式:
```ts
{
theme: {
styleSheet: {
fontFamily: 'Avenir'
}
}
}
```
支持的样式表属性:
| **属性** | **类型** | **描述** |
| ----------------------- | -------- | ------------- |
| `backgroundColor` | *string* | 背景色 |
| `brandColor` | *string* | 主题色,默认取 10 色分类颜色色板的第一个颜色 |
| `paletteQualitative10` | *string* | 分类颜色色板,分类个数小于 10 时使用 |
| `paletteQualitative20` | *string* | 分类颜色色板,分类个数大于 10 时使用 |
| `paletteSemanticRed` | *string* | 语义红色 |
| `paletteSemanticGreen` | *string* | 语义绿色 |
| `paletteSemanticYellow` | *string* | 语义黄色 |
| `fontFamily` | *string* | 字体 |
#### 更新主题
使用方式:
```ts
// 示例1:
plot.update({ theme: 'dark' });
// 示例2:
plot.update({ theme: { defaultColor: '#FF6B3B' } })
```
#### 自定义注册主题
另外,还可以通过 G2 提供了自定义主题机制来定义全新的主题结构,以允许用户切换、定义图表主题。前往 [G2 | 自定义主题](https://g2.antv.vision/zh/docs/api/advanced/register-theme) 查看详情。
<playground path="general/theme/demo/register-theme.ts" rid="rect-register-theme"></playground>
🌰 自定义主题 [DEMO](/zh/examples/general/theme#register-theme) 示例
#### 参阅
* [G2 自定义主题](https://g2.antv.vision/zh/docs/api/advanced/register-theme)
* [G2 主题配置项详解](https://g2.antv.vision/zh/docs/api/advanced/dive-into-theme)
| 24.179245 | 129 | 0.607101 | yue_Hant | 0.649774 |
4d303b4574522a41734ea62173d5bda0c4da9558 | 168 | md | Markdown | README.md | KoMaR1911/C4USMultiHack | 69c8909eeb7696f84747a29349a43b6168f28a2b | [
"Apache-2.0"
] | 4 | 2021-04-17T20:45:29.000Z | 2021-06-17T17:13:12.000Z | README.md | KoMaR1911/C4USMultiHack | 69c8909eeb7696f84747a29349a43b6168f28a2b | [
"Apache-2.0"
] | null | null | null | README.md | KoMaR1911/C4USMultiHack | 69c8909eeb7696f84747a29349a43b6168f28a2b | [
"Apache-2.0"
] | null | null | null | Self leaked
This is source from Metin2 - C4US.PL
Specially credits to people without this source doesnt exist:
- EroS
- Seremo
This source is created by C4US.PL
| 12.923077 | 61 | 0.755952 | eng_Latn | 0.999655 |
4d305dd7706c0f710bffb707217d7e9942ab6d29 | 14,681 | md | Markdown | docs/parallel/concrt/reference/ischeduler-structure.md | yecril71pl/cpp-docs.pl-pl | 599c99edee44b11ede6956ecf2362be3bf25d2f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/parallel/concrt/reference/ischeduler-structure.md | yecril71pl/cpp-docs.pl-pl | 599c99edee44b11ede6956ecf2362be3bf25d2f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/parallel/concrt/reference/ischeduler-structure.md | yecril71pl/cpp-docs.pl-pl | 599c99edee44b11ede6956ecf2362be3bf25d2f1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Struktura IScheduler
ms.date: 11/04/2016
f1_keywords:
- IScheduler
- CONCRTRM/concurrency::IScheduler
- CONCRTRM/concurrency::IScheduler::IScheduler::AddVirtualProcessors
- CONCRTRM/concurrency::IScheduler::IScheduler::GetId
- CONCRTRM/concurrency::IScheduler::IScheduler::GetPolicy
- CONCRTRM/concurrency::IScheduler::IScheduler::NotifyResourcesExternallyBusy
- CONCRTRM/concurrency::IScheduler::IScheduler::NotifyResourcesExternallyIdle
- CONCRTRM/concurrency::IScheduler::IScheduler::RemoveVirtualProcessors
- CONCRTRM/concurrency::IScheduler::IScheduler::Statistics
helpviewer_keywords:
- IScheduler structure
ms.assetid: 471de85a-2b1a-4b6d-ab81-2eff2737161e
ms.openlocfilehash: ccd82b5c5112bc322717f2b58d79d4c8f34f5bbd
ms.sourcegitcommit: c123cc76bb2b6c5cde6f4c425ece420ac733bf70
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/14/2020
ms.locfileid: "81368175"
---
# <a name="ischeduler-structure"></a>Struktura IScheduler
Interfejs do abstrakcji harmonogramu pracy. Menedżer zasobów środowiska wykonawczego współbieżności używa tego interfejsu do komunikowania się z harmonogramami pracy.
## <a name="syntax"></a>Składnia
```cpp
struct IScheduler;
```
## <a name="members"></a>Elementy członkowskie
### <a name="public-methods"></a>Metody publiczne
|Nazwa|Opis|
|----------|-----------------|
|[IScheduler::AddVirtualProcessors](#addvirtualprocessors)|Udostępnia harmonogram z zestawem katalogów głównych procesora wirtualnego do jego użycia. Każdy `IVirtualProcessorRoot` interfejs reprezentuje prawo do wykonania pojedynczego wątku, który może wykonywać pracę w imieniu harmonogramu.|
|[IScheduler::Identyfikator GetId](#getid)|Zwraca unikatowy identyfikator harmonogramu.|
|[IScheduler::GetPolicy](#getpolicy)|Zwraca kopię zasad harmonogramu. Aby uzyskać więcej informacji na temat zasad harmonogramu, zobacz [Harmonogrampolicy](schedulerpolicy-class.md).|
|[IScheduler::NotifyResourcesExternallyBusy](#notifyresourcesexternallybusy)|Powiadamia ten harmonogram, że wątki sprzętowe reprezentowane przez zestaw katalogów głównych procesora wirtualnego w macierzy `ppVirtualProcessorRoots` są teraz używane przez inne harmonogramy.|
|[IScheduler::NotifyResourcesExternallyIdle](#notifyresourcesexternallyidle)|Powiadamia ten harmonogram, że wątki sprzętowe reprezentowane przez zestaw katalogów głównych procesora wirtualnego w macierzy `ppVirtualProcessorRoots` nie są używane przez inne harmonogramy.|
|[IScheduler::UsuńWirtualneprocesory](#removevirtualprocessors)|Inicjuje usuwanie katalogów głównych procesora wirtualnego, które zostały wcześniej przydzielone do tego harmonogramu.|
|[IScheduler::Statystyki](#statistics)|Zawiera informacje dotyczące wskaźników nadejścia i ukończenia zadania oraz zmiany długości kolejki dla harmonogramu.|
## <a name="remarks"></a>Uwagi
Jeśli implementujesz niestandardowy harmonogram, który komunikuje się z Menedżerem `IScheduler` zasobów, należy podać implementację interfejsu. Ten interfejs jest jednym z końca dwukierunkowego kanału komunikacji między harmonogramem a Menedżerem zasobów. Drugi koniec jest reprezentowany `IResourceManager` `ISchedulerProxy` przez i interfejsy, które są implementowane przez Menedżera zasobów.
## <a name="inheritance-hierarchy"></a>Hierarchia dziedziczenia
`IScheduler`
## <a name="requirements"></a>Wymagania
**Nagłówek:** concrtrm.h
**Przestrzeń nazw:** współbieżność
## <a name="ischeduleraddvirtualprocessors-method"></a><a name="addvirtualprocessors"></a>IScheduler::AddVirtualProcessors Metoda
Udostępnia harmonogram z zestawem katalogów głównych procesora wirtualnego do jego użycia. Każdy `IVirtualProcessorRoot` interfejs reprezentuje prawo do wykonania pojedynczego wątku, który może wykonywać pracę w imieniu harmonogramu.
```cpp
virtual void AddVirtualProcessors(
_In_reads_(count) IVirtualProcessorRoot** ppVirtualProcessorRoots,
unsigned int count) = 0;
```
### <a name="parameters"></a>Parametry
*ppVirtualProcessorRoots*<br/>
Tablica `IVirtualProcessorRoot` interfejsów reprezentujących katalogi główne procesora wirtualnego dodawane do harmonogramu.
*Liczba*<br/>
Liczba interfejsów `IVirtualProcessorRoot` w tablicy.
### <a name="remarks"></a>Uwagi
Menedżer zasobów wywołuje `AddVirtualProcessor` metodę przyznania początkowego zestawu katalogów głównych procesora wirtualnego do harmonogramu. Może również wywołać metodę, aby dodać katalogi główne procesora wirtualnego do harmonogramu, gdy równoważy zasoby wśród harmonogramów.
## <a name="ischedulergetid-method"></a><a name="getid"></a>IScheduler::Metoda GetId
Zwraca unikatowy identyfikator harmonogramu.
```cpp
virtual unsigned int GetId() const = 0;
```
### <a name="return-value"></a>Wartość zwracana
Unikatowy identyfikator liczby całkowitej.
### <a name="remarks"></a>Uwagi
Należy użyć [GetSchedulerId](concurrency-namespace-functions.md) funkcji, aby uzyskać unikatowy identyfikator dla `IScheduler` obiektu, który implementuje interfejs, przed użyciem interfejsu jako parametr do metod dostarczonych przez Menedżera zasobów. Oczekuje się, że zwróci ten `GetId` sam identyfikator, gdy funkcja jest wywoływana.
Identyfikator uzyskany z innego źródła może spowodować niezdefiniowane zachowanie.
## <a name="ischedulergetpolicy-method"></a><a name="getpolicy"></a>IScheduler::Metoda GetPolicy
Zwraca kopię zasad harmonogramu. Aby uzyskać więcej informacji na temat zasad harmonogramu, zobacz [Harmonogrampolicy](schedulerpolicy-class.md).
```cpp
virtual SchedulerPolicy GetPolicy() const = 0;
```
### <a name="return-value"></a>Wartość zwracana
Kopia zasad harmonogramu.
## <a name="ischedulernotifyresourcesexternallybusy-method"></a><a name="notifyresourcesexternallybusy"></a>IScheduler::NotifyResourcesExternallyBusy Metoda
Powiadamia ten harmonogram, że wątki sprzętowe reprezentowane przez zestaw katalogów głównych procesora wirtualnego w macierzy `ppVirtualProcessorRoots` są teraz używane przez inne harmonogramy.
```cpp
virtual void NotifyResourcesExternallyBusy(
_In_reads_(count) IVirtualProcessorRoot** ppVirtualProcessorRoots,
unsigned int count) = 0;
```
### <a name="parameters"></a>Parametry
*ppVirtualProcessorRoots*<br/>
Tablica `IVirtualProcessorRoot` interfejsów skojarzonych z wątkami sprzętowymi, na których inne harmonogramy stały się zajęte.
*Liczba*<br/>
Liczba interfejsów `IVirtualProcessorRoot` w tablicy.
### <a name="remarks"></a>Uwagi
Jest możliwe dla określonego wątku sprzętowego, które mają być przypisane do wielu harmonogramów w tym samym czasie. Jednym z powodów może być to, że nie ma wystarczającej liczby wątków sprzętowych w systemie, aby spełnić minimalną współbieżność dla wszystkich harmonogramów, bez udostępniania zasobów. Inną możliwością jest to, że zasoby są tymczasowo przypisane do innych harmonogramów, gdy harmonogram będący właścicielem nie używa ich, w drodze wszystkich jego katalogów głównych procesora wirtualnego w tym wątku sprzętowym są dezaktywowane.
Poziom subskrypcji wątku sprzętowego jest oznaczony liczbą subskrybowanych wątków i aktywowanych katalogów głównych procesora wirtualnego skojarzonych z tym wątkiem sprzętowym. Z punktu widzenia określonego harmonogramu poziom subskrypcji zewnętrznego wątku sprzętowego jest częścią subskrypcji innych harmonogramów przyczynić się do. Powiadomienia, że zasoby są zajęte zewnętrznie są wysyłane do harmonogramu, gdy poziom subskrypcji zewnętrznej dla wątku sprzętowego przenosi się z zera na terytorium dodatnie.
Powiadomienia za pośrednictwem tej metody są wysyłane tylko do harmonogramów, które mają zasady, w których wartość klucza `MinConcurrency` zasad jest równa wartości klucza `MaxConcurrency` zasad. Aby uzyskać więcej informacji na temat zasad harmonogramu, zobacz [Harmonogrampolicy](schedulerpolicy-class.md).
Harmonogram, który kwalifikuje się do powiadomień pobiera zestaw początkowych powiadomień podczas jego tworzenia, informując go, czy zasoby, które właśnie zostały przypisane, są zajęte zewnętrznie lub bezczynne.
## <a name="ischedulernotifyresourcesexternallyidle-method"></a><a name="notifyresourcesexternallyidle"></a>Metoda IScheduler::NotifyResourcesExternallyIdle
Powiadamia ten harmonogram, że wątki sprzętowe reprezentowane przez zestaw katalogów głównych procesora wirtualnego w macierzy `ppVirtualProcessorRoots` nie są używane przez inne harmonogramy.
```cpp
virtual void NotifyResourcesExternallyIdle(
_In_reads_(count) IVirtualProcessorRoot** ppVirtualProcessorRoots,
unsigned int count) = 0;
```
### <a name="parameters"></a>Parameters
*ppVirtualProcessorRoots*<br/>
An array of `IVirtualProcessorRoot` interfaces associated with the hardware threads on which other schedulers have become idle.
*count*<br/>
The number of `IVirtualProcessorRoot` interfaces in the array.
### <a name="remarks"></a>Remarks
It is possible for a given hardware thread to be assigned to multiple schedulers at the same time. One reason could be that there are not enough hardware threads on the system to satisfy the minimum concurrency of all schedulers without sharing resources. Another possibility is that resources are temporarily assigned to other schedulers while the owning scheduler is not using them, by way of all of its virtual processor roots on that hardware thread being deactivated.
The subscription level of a hardware thread is denoted by the number of subscribed threads and activated virtual processor roots associated with that hardware thread. From the point of view of a particular scheduler, the external subscription level of a hardware thread is the portion of the subscription that other schedulers contribute to. Notifications that resources are externally idle are sent to a scheduler when the external subscription level for a hardware thread drops to zero from a previous positive value.
Notifications through this method are only sent to schedulers that have a policy where the value of the `MinConcurrency` policy key is equal to the value of the `MaxConcurrency` policy key. For more information on scheduler policies, see [SchedulerPolicy](schedulerpolicy-class.md).
A scheduler that qualifies for notifications gets a set of initial notifications when it is created, informing it whether the resources it was just assigned are externally busy or idle.
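The zero-crossing rule described above can be sketched with a small stand-alone model. This is a hypothetical illustration, not the real Concurrency Runtime API: a tracker records the external subscription level per hardware thread and reports a busy or idle event only when that level moves between zero and positive.

```cpp
#include <map>
#include <string>

// Hypothetical model (not the actual Concurrency Runtime types): tracks the
// external subscription level of each hardware thread and decides when a
// busy/idle notification is due for one observing scheduler.
class ExternalSubscriptionTracker {
public:
    // delta is +1 when another scheduler activates a root on the thread,
    // -1 when it deactivates one. Returns "busy", "idle", or "" (no event).
    std::string onExternalSubscriptionChange(int hardwareThread, int delta) {
        int before = level_[hardwareThread];
        int after = before + delta;
        level_[hardwareThread] = after;
        if (before == 0 && after > 0) return "busy";  // zero -> positive
        if (before > 0 && after == 0) return "idle";  // positive -> zero
        return "";  // level changed without crossing zero: no notification
    }

private:
    std::map<int, int> level_;  // hardware thread id -> external subscription level
};
```

Only the first external root on a thread produces a busy event, and only the removal of the last one produces an idle event; intermediate changes are silent, which is why a scheduler sharing a hardware thread with several others sees far fewer notifications than subscription changes.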
## <a name="ischedulerremovevirtualprocessors-method"></a><a name="removevirtualprocessors"></a>IScheduler::RemoveVirtualProcessors Method
Initiates the removal of virtual processor roots that were previously allocated to this scheduler.
```cpp
virtual void RemoveVirtualProcessors(
_In_reads_(count) IVirtualProcessorRoot** ppVirtualProcessorRoots,
unsigned int count) = 0;
```
### <a name="parameters"></a>Parameters
*ppVirtualProcessorRoots*<br/>
An array of `IVirtualProcessorRoot` interfaces representing the virtual processor roots to be removed.
*count*<br/>
The number of `IVirtualProcessorRoot` interfaces in the array.
### <a name="remarks"></a>Remarks
The Resource Manager invokes the `RemoveVirtualProcessors` method to take back a set of virtual processor roots from a scheduler. The scheduler is expected to invoke the [Remove](iexecutionresource-structure.md#remove) method on each interface when it is done with the virtual processor roots. Do not use an `IVirtualProcessorRoot` interface after invoking the `Remove` method on it.
The parameter `ppVirtualProcessorRoots` points to an array of interfaces. Among the set of virtual processor roots to be removed, roots that have never been activated can be returned immediately using the `Remove` method. Roots that have been activated and are either executing work, or have been deactivated and are waiting for work to arrive, should be returned asynchronously. The scheduler must make every attempt to remove the virtual processor roots as quickly as possible. Delaying the removal of virtual processor roots may result in unintentional oversubscription within the scheduler.
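The contract above — immediate return of never-activated roots, deferred return of activated ones — can be modeled in a few lines. The types and fields below are hypothetical stand-ins, not the real `IVirtualProcessorRoot` interface:

```cpp
#include <vector>

// Hypothetical sketch (not the real Concurrency Runtime types): how a scheduler
// might handle a RemoveVirtualProcessors callback from the Resource Manager.
struct Root {
    bool activated = false;      // has this root ever been activated?
    bool removed = false;        // Remove() already called?
    bool pendingRemove = false;  // must be returned at the next safe point
    void Remove() { removed = true; }  // stand-in for IExecutionResource::Remove
};

void RemoveVirtualProcessors(std::vector<Root*>& roots) {
    for (Root* r : roots) {
        if (!r->activated) {
            r->Remove();              // never ran work: return immediately
        } else {
            r->pendingRemove = true;  // running or parked: return asynchronously
        }
    }
}

// Called by the dispatch loop when an activated root reaches a safe point.
void OnSafePoint(Root& r) {
    if (r.pendingRemove && !r.removed) r.Remove();
}
```

The design choice mirrors the remarks: the only roots a scheduler may hand back synchronously are those that never executed work; everything else is flagged and returned as soon as its dispatch loop reaches a safe point, keeping the window of oversubscription as short as possible.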
## <a name="ischedulerstatistics-method"></a><a name="statistics"></a>IScheduler::Statistics Method
Provides information related to task arrival and completion rates, and changes in queue length, for the scheduler.
```cpp
virtual void Statistics(
_Out_ unsigned int* pTaskCompletionRate,
_Out_ unsigned int* pTaskArrivalRate,
_Out_ unsigned int* pNumberOfTasksEnqueued) = 0;
```
### <a name="parameters"></a>Parameters
*pTaskCompletionRate*<br/>
The number of tasks that have been completed by the scheduler since the last call to this method.
*pTaskArrivalRate*<br/>
The number of tasks that have arrived in the scheduler since the last call to this method.
*pNumberOfTasksEnqueued*<br/>
The total number of tasks in all scheduler queues.
### <a name="remarks"></a>Remarks
This method is invoked by the Resource Manager in order to gather statistics for a scheduler. The statistics gathered here are used to drive dynamic feedback algorithms that determine when it is appropriate to assign more resources to the scheduler and when to take resources away. The values provided by the scheduler can be optimistic and do not necessarily have to reflect the current count accurately.
You should implement this method if you want the Resource Manager to use feedback about such things as task arrivals to determine how to balance resources between your scheduler and other schedulers registered with the Resource Manager. If you choose not to gather statistics, you can set the `DynamicProgressFeedback` policy key to the value `DynamicProgressFeedbackDisabled` in your scheduler's policy, and the Resource Manager will not invoke this method on your scheduler.
In the absence of statistical information, the Resource Manager will use hardware thread subscription levels to make resource allocation and migration decisions. For more information on subscription levels, see [IExecutionResource::CurrentSubscriptionLevel](iexecutionresource-structure.md#currentsubscriptionlevel).
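A minimal sketch of the bookkeeping behind these three counters might look as follows. This is hypothetical scheduler-side code, not part of the actual API: completion and arrival values are deltas since the previous call, while the enqueued count is an absolute snapshot.

```cpp
// Hypothetical counters backing an IScheduler::Statistics implementation.
class SchedulerStats {
public:
    void taskArrived()   { ++arrived_; ++enqueued_; }
    void taskCompleted() { ++completed_; --enqueued_; }

    void Statistics(unsigned int* pTaskCompletionRate,
                    unsigned int* pTaskArrivalRate,
                    unsigned int* pNumberOfTasksEnqueued) {
        // Rates are "since the last call to this method".
        *pTaskCompletionRate    = completed_ - lastCompleted_;
        *pTaskArrivalRate       = arrived_ - lastArrived_;
        // Queue length is a current total, not a delta.
        *pNumberOfTasksEnqueued = enqueued_;
        lastCompleted_ = completed_;
        lastArrived_   = arrived_;
    }

private:
    unsigned int arrived_ = 0, completed_ = 0, enqueued_ = 0;
    unsigned int lastArrived_ = 0, lastCompleted_ = 0;
};
```

Because the remarks allow the reported values to be optimistic, a real scheduler could update these counters without strict synchronization; approximate counts are acceptable to the feedback algorithms.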
## <a name="see-also"></a>See also
[concurrency Namespace](concurrency-namespace.md)<br/>
[PolicyElementKey](concurrency-namespace-enums.md)<br/>
[SchedulerPolicy Class](schedulerpolicy-class.md)<br/>
[IExecutionContext Structure](iexecutioncontext-structure.md)<br/>
[IThreadProxy Structure](ithreadproxy-structure.md)<br/>
[IVirtualProcessorRoot Structure](ivirtualprocessorroot-structure.md)<br/>
[IResourceManager Structure](iresourcemanager-structure.md)
| 63.008584 | 630 | 0.822355 | pol_Latn | 0.999762 |
4d32b2933a4a26f7391d211c47752b81262c2d69 | 762 | md | Markdown | _posts/2021-08-18-bolsonaro-segue-constituicao-e-espera-que-outros-poderes-facam-o-mesmo-diz-general-ramos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2021-08-18-bolsonaro-segue-constituicao-e-espera-que-outros-poderes-facam-o-mesmo-diz-general-ramos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2021-08-18-bolsonaro-segue-constituicao-e-espera-que-outros-poderes-facam-o-mesmo-diz-general-ramos.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 3411784003
title: >-
Bolsonaro segue Constituição e espera que outros Poderes façam o mesmo, diz general Ramos
author: Tatu D'Oquei
date: 2021-08-18 15:28:00
pub_date: 2021-08-18 15:28:00
time_added: 2021-08-20 22:00:13
category:
tags: []
image: https://f.i.uol.com.br/fotografia/2021/06/07/162309649660be7cb047474_1623096496_3x2_rt.jpg
---
The statements were made amid friction between Bolsonaro and the STF (Supreme Federal Court).
**Link:** [https://www1.folha.uol.com.br/poder/2021/08/bolsonaro-segue-constituicao-e-espera-que-outros-poderes-facam-o-mesmo-diz-general-ramos.shtml](https://www1.folha.uol.com.br/poder/2021/08/bolsonaro-segue-constituicao-e-espera-que-outros-poderes-facam-o-mesmo-diz-general-ramos.shtml)
| 40.105263 | 290 | 0.771654 | por_Latn | 0.848929 |
4d32d8af80e42ec54a650eea92956c6dd2bde545 | 453 | md | Markdown | content/en/guides/userguide/reusable/Model Environments Result.md | eightnoneone/ortelius-docs | 328e77dc1c53e23ae8587c51675a37b9e7fa0f23 | [
"Apache-2.0"
] | 2 | 2020-12-10T15:18:32.000Z | 2022-02-07T17:59:01.000Z | content/en/guides/userguide/reusable/Model Environments Result.md | eightnoneone/ortelius-docs | 328e77dc1c53e23ae8587c51675a37b9e7fa0f23 | [
"Apache-2.0"
] | 28 | 2020-07-20T23:48:55.000Z | 2021-08-03T17:43:21.000Z | content/en/guides/userguide/reusable/Model Environments Result.md | eightnoneone/ortelius-docs | 328e77dc1c53e23ae8587c51675a37b9e7fa0f23 | [
"Apache-2.0"
] | 32 | 2020-07-21T06:16:38.000Z | 2022-02-07T17:59:04.000Z | **_Environments_ Result**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| success | boolean | Is true or false depending on the success or failure of the query. If success is false, then result is not returned and a text field named "error" is returned instead. | No |
| result | An Array of _Environments_ | Is a JSON array of objects, one for each _Environment_ returned by the query (if success is true). | No |
| 64.714286 | 196 | 0.668874 | eng_Latn | 0.999344 |
4d330569516f16b780b88f7cbe6bd10f9c023a98 | 358 | md | Markdown | README.md | Manirathinam21/Perceptron_pypi | 9df33d44381e223dc9ccf9021433ca90d90d6736 | [
"MIT"
] | null | null | null | README.md | Manirathinam21/Perceptron_pypi | 9df33d44381e223dc9ccf9021433ca90d90d6736 | [
"MIT"
] | null | null | null | README.md | Manirathinam21/Perceptron_pypi | 9df33d44381e223dc9ccf9021433ca90d90d6736 | [
"MIT"
] | null | null | null | # Perceptron_pypi
Perceptron_pypi
# Reference -
- [official python docs](https://packaging.python.org/tutorials/packaging-projects/)
- [github docs for github actions](https://docs.github.com/en/actions/guides/building-and-testing-python#publishing-to-package-registries)
- [pypi Perceptron package](https://pypi.org/project/Perceptron-pypi-Manirathinam21/0.0.1/)
4d3361dfe42ead042a9f0513dd0e1ede27ea8ea8 | 2,689 | md | Markdown | src/posts/getting-ready-for-a-billion-dollar-business.md | LenaSchnedlitz/se-unlocked-website | e7421f7deff393967bd22792fed407fee5591aa1 | [
"MIT"
] | 3 | 2021-10-06T10:05:02.000Z | 2022-02-28T23:59:18.000Z | src/posts/getting-ready-for-a-billion-dollar-business.md | LenaSchnedlitz/se-unlocked-website | e7421f7deff393967bd22792fed407fee5591aa1 | [
"MIT"
] | 13 | 2021-10-06T08:46:34.000Z | 2022-03-30T07:28:14.000Z | src/posts/getting-ready-for-a-billion-dollar-business.md | PaulieScanlon/se-unlocked-website | e7421f7deff393967bd22792fed407fee5591aa1 | [
"MIT"
] | 4 | 2021-12-03T19:58:23.000Z | 2022-02-28T13:35:42.000Z | ---
title: "Getting ready to build a billion-dollar business"
date: "2021-05-04T08:27:45+00:00"
status: publish
permalink: /getting-ready-for-a-billion-dollar-business
author: michaela
excerpt: "Max Stoiber explains how his work at GitHub prepared him to start his own billon-dollar business."
type: post
id: 121308
thumbnail_alt: "Picture of podcast guest"
thumbnail: ../uploads/2021/05/Max-Stoiber.jpg
category:
- Entrepreneurship
- "Open Source"
- Startup
tag: []
post_format: []
secondline_themes_page_sidebar:
- hidden-sidebar
secondline_themes_header_image_id:
- "121311"
post_header_image: ../uploads/2021/05/Max-Stoiber-Background.jpg
_yoast_wpseo_content_score:
- "30"
audio: https://dts.podtrac.com/redirect.mp3/cdn.simplecast.com/audio/aaca909a-e34f-49ae-a86f-f59e4fa807f0/episodes/c9aa17f7-c159-456c-bd44-ca609c0ac29c/audio/3eb59a7c-8a65-4a7a-8550-eb374bc57dde/default_tc.mp3
_yoast_wpseo_primary_category:
- "16"
secondline_themes_disable_img:
- "on"
---
Max Stoiber is a JavaScript engineer who is in love with React and Node, and also a fellow Austrian. He has a track record in the open-source world, has worked for Gatsby and GitHub, and is also a successful entrepreneur.
**We talk about:**
- what he learned about software engineering best practices at GitHub,
- why he started his newest side-project bedrock,
- why building an indie or small lifestyle businesses is not his thing anymore,
- and how he prepares to build a billion-dollar business.
<div class="sponsorship">
Book your <a href="https://www.michaelagreiler.com/workshops">awesomecodereview.com</a> workshop!
</div>
Links:
- [Max’s Twitter](https://twitter.com/mxstbr)
- [Bedrock](https://bedrock.mxstbr.com/)
- [Feedback Fish](https://feedback.fish/)
- [Book: The Mum Test](https://www.amazon.com/Mom-Test-customers-business-everyone-ebook/dp/B01H4G2J1U/)
### Subscribe on [iTunes](https://podcasts.apple.com/at/podcast/software-engineering-unlocked/id1477527378?l=en), [Spotify](https://open.spotify.com/show/2wz1OneBIDXpbBYeuyIsJL?si=2I0R0HuaTLK6RT0f7lDIFg), [Google](https://www.google.com/podcasts?feed=aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9LMV9tdjBDSg%3D%3D), [Deezer](https://www.deezer.com/show/465682), or via [RSS](https://www.software-engineering-unlocked.com/subscribe/).
## Transcript: Getting ready for a billion-dollar business
_\[If you want, you can help make the transcript better, and improve the podcast’s accessibility via_ [Github](https://github.com/mgreiler/se-unlocked/tree/master/Transcripts)_[.](https://github.com/mgreiler/se-unlocked/tree/master/Transcripts) I’m happy to lend a hand to help you get started with pull requests, and open source work.\]_
| 47.175439 | 427 | 0.776869 | eng_Latn | 0.765124 |
4d337a914582a3275e1962112160de3398073e14 | 70 | md | Markdown | README.md | liukai6789/ThumbnailView | ec32309c202c6d5f76eb4b5594220cbccc9810e3 | [
"MIT"
] | 4 | 2018-08-06T05:48:40.000Z | 2019-01-11T02:05:45.000Z | README.md | liukai6789/ThumbnailView | ec32309c202c6d5f76eb4b5594220cbccc9810e3 | [
"MIT"
] | null | null | null | README.md | liukai6789/ThumbnailView | ec32309c202c6d5f76eb4b5594220cbccc9810e3 | [
"MIT"
] | 1 | 2018-08-06T11:43:02.000Z | 2018-08-06T11:43:02.000Z | # ThumbnailView
A view that adapts its height automatically and displays multiple thumbnails
# Pod support
pod 'ThumbnailView'
| 14 | 25 | 0.828571 | yue_Hant | 0.719929 |
4d339fcb9b242d7ebad4b2336d54f00871350b6d | 16 | md | Markdown | README.md | Ivaylohristovv/Blog---Project | f592f00baf19a3433b3f152570892142f5935e66 | [
"MIT"
] | null | null | null | README.md | Ivaylohristovv/Blog---Project | f592f00baf19a3433b3f152570892142f5935e66 | [
"MIT"
] | null | null | null | README.md | Ivaylohristovv/Blog---Project | f592f00baf19a3433b3f152570892142f5935e66 | [
"MIT"
] | null | null | null | # Blog---Project | 16 | 16 | 0.6875 | kor_Hang | 0.535584 |
4d3415bcb451e6fe2cfbcba8ed54db97827204e5 | 521 | md | Markdown | README.md | kevin123-web/DeberArbol | e69b280740831c5ecdbe4b41c1821b6fd5e38890 | [
"MIT"
] | null | null | null | README.md | kevin123-web/DeberArbol | e69b280740831c5ecdbe4b41c1821b6fd5e38890 | [
"MIT"
] | null | null | null | README.md | kevin123-web/DeberArbol | e69b280740831c5ecdbe4b41c1821b6fd5e38890 | [
"MIT"
] | null | null | null | # DeberArbol
A program that counts the leaves, nodes, and levels of a binary tree.
This program does not build traversal notations or draw them graphically, such as prefix, infix, or postfix.
It is simply a counter: using an accumulator, it performs the count over a binary tree,
distributing the work across the children. Note that once the result has been printed to the console, you must press the "ENTER" key.
It is a program that helps you find out how large an expression is, etc.
| 65.125 | 135 | 0.794626 | spa_Latn | 0.998312 |
4d3461b388d5cac47b251cbc8af9a26b87fa67b5 | 950 | md | Markdown | src/phase2/python/README.md | ancient-sentinel/cs-4800-graphics-project | 98d93c84293b4d4d5858c1f0f7633780bbc77464 | [
"MIT"
] | null | null | null | src/phase2/python/README.md | ancient-sentinel/cs-4800-graphics-project | 98d93c84293b4d4d5858c1f0f7633780bbc77464 | [
"MIT"
] | null | null | null | src/phase2/python/README.md | ancient-sentinel/cs-4800-graphics-project | 98d93c84293b4d4d5858c1f0f7633780bbc77464 | [
"MIT"
] | 1 | 2020-12-20T19:10:55.000Z | 2020-12-20T19:10:55.000Z | # Phase 2 Python Code
A repository for Python code developed in Phase 2.
### Files:
* `DetectionTrackerPrototype.py` : A script testing the integration of the ImageAI YOLO_V3 object detection with the OpenCV CSRT trackers aggregated within
a Multi-Tracker object. This prototype runs object detection on the first frame of the video and passes the bounding boxes of the discovered objects to the
OpenCV trackers to monitor for the remainder of the video.

* `ModelEvaluation.py` : A script for evaluating YOLO_V3 object models generated using ImageAI.
* `ModelTrainer.py` : A script for training a custom YOLO_V3 object detector with an annotated image data set using ImageAI.
* `SingleFrameDetection.py` : A script which applies a custom YOLO_V3 object detector on a single video frame.
* `VideoDetection.py` : A script which applies a custom YOLO_V3 object detector to a video file.
| 47.5 | 157 | 0.776842 | eng_Latn | 0.960216 |
4d3468f44be88decea9078bda9544541915e2e75 | 1,194 | md | Markdown | README.md | davidgarland/dgkiss | 62cc648267d1ecff8f80bdef3438fadb7b25623b | [
"MIT"
] | 1 | 2021-11-24T08:28:36.000Z | 2021-11-24T08:28:36.000Z | README.md | davidgarland/dgkiss | 62cc648267d1ecff8f80bdef3438fadb7b25623b | [
"MIT"
] | null | null | null | README.md | davidgarland/dgkiss | 62cc648267d1ecff8f80bdef3438fadb7b25623b | [
"MIT"
] | 1 | 2021-11-24T08:28:40.000Z | 2021-11-24T08:28:40.000Z | # dgkiss
My personal [GKISS](https://github.com/gkisslinux/grepo) repository.
Packages here assume you're running Wayland (as is standard now for most people using KISS linux and derivatives),
and GKISS means I assume you're running glibc. If packages here do not work with musl libc or X11, they will be
considered working as intended.
Some highlights:
- [entr](https://eradman.com/entrproject/)
- [chrony](https://chrony.tuxfamily.org/)
- [openjdk11-hotspot-bin](https://adoptium.net/?variant=openjdk11&jvmVariant=hotspot)
- [openjdk16-hotspot-bin](https://adoptium.net/?variant=openjdk16&jvmVariant=hotspot)
- [glfw](https://www.glfw.org/) (+ Minecraft [patches](https://github.com/Admicos/minecraft-wayland))
- [mesa](https://gitlab.freedesktop.org/mesa/mesa) + [libglvnd](https://github.com/NVIDIA/libglvnd) (depends on packages from [kiss-xorg](https://github.com/ehawkvu/kiss-xorg))
- [MultiMC](https://multimc.org/)
- [libslirp](https://gitlab.freedesktop.org/slirp/libslirp)
- [libepoxy](https://github.com/anholt/libepoxy)
Things I need to get around to fixing, or are in an unknown state:
- [melonDS](http://melonds.kuribo64.net/) (build errors)
- [fftw](https://www.fftw.org/)
| 47.76 | 176 | 0.747069 | eng_Latn | 0.326044 |
4d347f9ba0be66c7fca84b1ba558be69f9ef6884 | 7,472 | md | Markdown | articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md | julianosaless/azure-docs.pt-br | 461791547c9cc2b4df751bb3ed881ce57796f1e4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md | julianosaless/azure-docs.pt-br | 461791547c9cc2b4df751bb3ed881ce57796f1e4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md | julianosaless/azure-docs.pt-br | 461791547c9cc2b4df751bb3ed881ce57796f1e4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Tutorial – Configurar implantações sem interrupção para Máquinas Virtuais do Linux do Azure
description: Neste tutorial, você aprenderá a configurar o pipeline de CD (implantação contínua) que atualiza de modo incremental um grupo de Máquinas Virtuais do Linux do Azure usando a estratégia de implantação sem interrupção
author: moala
manager: jpconnock
tags: azure-devops-pipelines
ms.assetid: ''
ms.service: virtual-machines-linux
ms.topic: tutorial
ms.tgt_pltfrm: azure-pipelines
ms.workload: infrastructure
ms.date: 4/10/2020
ms.author: moala
ms.custom: devops
ms.openlocfilehash: 75888b1ebbda33891296fe0b54c5d204955e32a3
ms.sourcegitcommit: 58faa9fcbd62f3ac37ff0a65ab9357a01051a64f
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/29/2020
ms.locfileid: "82113469"
---
# <a name="tutorial---configure-rolling-deployment-strategy-for-azure-linux-virtual-machines"></a>Tutorial - Configure the rolling deployment strategy for Azure Linux virtual machines
Azure DevOps is a built-in Azure service that automates every part of the DevOps process with continuous integration and continuous delivery for any Azure resource.
Whether your application uses virtual machines, web apps, Kubernetes, or any other resource, you can implement infrastructure as code, continuous integration, continuous testing, continuous delivery, and continuous monitoring with Azure and Azure DevOps.
![AzurePipelines](media/azure-devops-pipelines.png)
## <a name="iaas---configure-cicd"></a>IaaS - Configure CI/CD
Azure Pipelines provides a complete set of CI/CD automation tools for deployments to virtual machines. You can configure a continuous delivery pipeline for an Azure VM directly from the Azure portal. This document contains the steps associated with setting up a CI/CD pipeline for doing rolling deployments to multiple machines by using the Azure portal. You can also take a look at other strategies, such as [canary](https://aka.ms/AA7jdrz) and [blue-green](https://aka.ms/AA83fwu), which are natively supported through the Azure portal.
**Configure CI/CD on virtual machines**
Virtual machines can be added as targets in a [deployment group](https://docs.microsoft.com/azure/devops/pipelines/release/deployment-groups) and can be targeted for multi-machine updates. Once deployed, the **Deployment History** within a deployment group provides traceability from the VM to the pipeline and then to the commit.
**Rolling deployments**: A rolling deployment replaces instances of the previous version of an application with instances of the new version of the application on a fixed set of machines (the rolling set) in each iteration. Let's look at how you can configure a rolling update to virtual machines.
You can configure rolling updates on your "**virtual machines**" within the Azure portal by using the continuous delivery option.
Here is a walkthrough of the process.
1. Sign in to the Azure portal and navigate to a virtual machine.
2. On the left pane of the VM, navigate to the **Continuous delivery** menu. Then click **Configure**.
![CI/CD on a VM](media/tutorial-devops-azure-pipelines-classic/azure-devops-configure.png)
3. In the configuration pane, click "Azure DevOps Organization" to select an existing account or create a new one. Then select the project in which you want to configure the pipeline.
![Configure the Azure DevOps organization](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling.png)
4. A deployment group is a logical set of deployment target machines that represent the physical environments, for example "Dev", "Test", "UAT", and "Production". You can create a deployment group or select an existing one.
5. Select the build pipeline that publishes the package to be deployed to the virtual machine. Note that the published package must have a deployment script _deploy.ps1_ or _deploy.sh_ in the `deployscripts` folder at the root of the package. This deployment script will be run by the Azure DevOps pipeline at run time.
6. Select the deployment strategy of your choice. In this case, let's select "Rolling".
7. Optionally, you can tag the machine with a role. For example, "web", "db", and so on. This helps you target only VMs that have a specific role.
8. Click **OK** in the dialog to configure the continuous delivery pipeline.
9. Once complete, you will have a continuous delivery pipeline configured to deploy to the virtual machine.
![VM deployment](media/tutorial-devops-azure-pipelines-classic/azure-devops-deployment.png)
10. You will see that the deployment to the virtual machine is in progress. You can click the link to navigate to the pipeline. Click **Release-1** to view the deployment. Or you can click **Edit** to modify the release pipeline definition.
11. If you have multiple VMs to configure, repeat steps 2 through 4 for the other VMs to be added to the deployment group. Note that if you select a deployment group for which a pipeline run already exists, the VM will be added to the deployment group without creating new pipelines.
12. Once done, click the pipeline definition, navigate to the Azure DevOps organization, and click **Edit** release pipeline.
![Edit the release pipeline](media/tutorial-devops-azure-pipelines-classic/azure-devops-edit.png)
13. Click the **1 job, 1 task** link in the **dev** stage. Then click the **Deploy** phase.
![Deploy phase](media/tutorial-devops-azure-pipelines-classic/azure-devops-deploy.png)
14. In the configuration pane on the right, you can specify the number of machines you want to deploy to in parallel in each iteration. If you want to deploy to multiple machines at a time, you can specify it as a percentage by using the slider.
15. The Execute Deployment Script task will, by default, run the deployment script _deploy.ps1_ or _deploy.sh_ in the "deployscripts" folder in the root directory of the published package.
![Execute Deployment Script task](media/tutorial-devops-azure-pipelines-classic/azure-devops-task.png)
## <a name="other-deployment-strategies"></a>Other deployment strategies
- [Configure the canary deployment strategy](https://aka.ms/AA7jdrz)
- [Configure the blue-green deployment strategy](https://aka.ms/AA83fwu)
## <a name="azure-devops-project"></a>Azure DevOps Projects
Start using Azure more easily than ever before.
With DevOps Projects, start running your application on any Azure service in just three steps: select an application language, a runtime, and an Azure service.
[Learn more](https://azure.microsoft.com/features/devops-projects/ ).
## <a name="additional-resources"></a>Additional resources
- [Deploy to Azure virtual machines by using a DevOps project](https://docs.microsoft.com/azure/devops-project/azure-devops-project-vms)
- [Implement continuous deployment of your app to an Azure virtual machine scale set](https://docs.microsoft.com/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
| 85.885057 | 595 | 0.798849 | por_Latn | 0.999496 |
4d3529c4d1d4788103d4825cda90d76f25142bfd | 335 | md | Markdown | docs/config/services/ssh/service.md | lologit1998/honeytrap-docs | ef88bd56cb09cf4c365569472f2166758ab8a5c2 | [
"CC-BY-4.0"
] | null | null | null | docs/config/services/ssh/service.md | lologit1998/honeytrap-docs | ef88bd56cb09cf4c365569472f2166758ab8a5c2 | [
"CC-BY-4.0"
] | null | null | null | docs/config/services/ssh/service.md | lologit1998/honeytrap-docs | ef88bd56cb09cf4c365569472f2166758ab8a5c2 | [
"CC-BY-4.0"
] | null | null | null | ---
title: SSH Service
---
{% capture overview %}
{% endcapture %}
The SSH service will simulate an SSH shell. By default, all credentials will be accepted as valid.
#### Configuration
```
[service.ssh-simulator]
type="ssh-simulator"
port="tcp/22"
credentials=["root:root", "root:password"]
banner="SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.2"
```
| 16.75 | 84 | 0.698507 | eng_Latn | 0.673023 |
4d35a601b59f162dfab7a5566da1bc949c0a2c3f | 1,134 | md | Markdown | docs/kb/index.md | mshimizu-kx/docs | d36adede3847bb9d829251c9aebf2ff8c01164f5 | [
"CC-BY-4.0"
] | null | null | null | docs/kb/index.md | mshimizu-kx/docs | d36adede3847bb9d829251c9aebf2ff8c01164f5 | [
"CC-BY-4.0"
] | 1 | 2020-12-21T14:43:41.000Z | 2020-12-21T14:43:41.000Z | docs/kb/index.md | mshimizu-kx/docs | d36adede3847bb9d829251c9aebf2ff8c01164f5 | [
"CC-BY-4.0"
] | null | null | null | ---
title: Knowledge Base for q and kdb+ – Knowledge Base – kdb+ and q documentation
description: The Knowledge Base contains articles about how to get things done with kdb+.
keywords: cookbook, how-to, q, kdb+
---
# Knowledge Base
The Knowledge Base contains articles about how to get things done with kdb+.
## Popular
- [Frequently-asked questions](faq.md)
- [Get started](../learn/index.md)
- [Programming idioms](programming-idioms.md)
- [WebSockets](websockets.md)
- [File compression](file-compression.md)
- [Replay logfile](replay-log.md)
## Big Data
- [Splayed tables](splayed-tables.md)
- [Load balancing](load-balancing.md)
- [Loading from large files](loading-from-large-files.md)
- [Splaying large files](splaying-large-files.md)
- [Splayed schema change](splayed-schema-change.md)
- [Temporal data](temporal-data.md)
- [Bulk Copy Program](bcp.md)
- [Database partitioning with par.txt](partition.md)
:fontawesome-regular-hand-point-right:
[Interfaces](../interfaces/index.md) | 31.5 | 89 | 0.649912 | eng_Latn | 0.664872 |
4d35c37b747f11c7ad2fe3f5373a68ec0154d9b2 | 64 | md | Markdown | README.md | raghusaripalli/businessCard | c6e6bbef0117b2ef4ae12160d00ca3962b2562b3 | [
"MIT"
] | null | null | null | README.md | raghusaripalli/businessCard | c6e6bbef0117b2ef4ae12160d00ca3962b2562b3 | [
"MIT"
] | null | null | null | README.md | raghusaripalli/businessCard | c6e6bbef0117b2ef4ae12160d00ca3962b2562b3 | [
"MIT"
] | null | null | null | It's me, Raghuveer!
# Usage
## npm
```
npx raghusaripalli
```
| 7.111111 | 19 | 0.609375 | est_Latn | 0.358646 |
4d36bc2ebfe72eb7cca6abe0e0c2e9807b9c82d0 | 1,389 | md | Markdown | _posts/people-love/22/w/bcbc/2021-04-07-ihascupquake.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | _posts/people-love/22/w/bcbc/2021-04-07-ihascupquake.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | _posts/people-love/22/w/bcbc/2021-04-07-ihascupquake.md | chito365/p | d43434482da24b09c9f21d2f6358600981023806 | [
"MIT"
] | null | null | null | ---
id: 15622
title: iHasCupquake
date: 2021-04-07T17:01:01+00:00
author: victor
layout: post
guid: https://ukdataservers.com/ihascupquake/
permalink: /04/07/ihascupquake
tags:
- show love
- unspecified
- single
- relationship
- engaged
- married
- complicated
- open relationship
- widowed
- separated
- divorced
- Husband
- Wife
- Boyfriend
- Girlfriend
category: Guides
---
* some text
{: toc}
## Who is iHasCupquake
Gamer who rose to internet fame for her Minecraft videos. Her iHasCupquake YouTube channel has earned over 6 million subscribers.
## Prior to Popularity
She started uploading her first web videos at the suggestion of her husband.
## Random data
She uploads videos to her iHasCupquake, Tiffyquake, and WeAreMishMish channels.
## Family & Everyday Life of iHasCupquake
Her real name is Tiffany Garcia. She married fellow gamer Mario Herrera, AKA Red or Redb15. She grew up with a brother named Anthony Garcia.
## People Related With iHasCupquake
She often collaborated with other YouTubers like Sonja Reid, Aureylian, and Kaleidow.
| 17.582278 | 141 | 0.588913 | eng_Latn | 0.990732 |
4d3712bb95363244f424d642d348ef54ad9a16f4 | 111 | md | Markdown | ports/bluepill/README.md | lupyuen/bluepill-micropython | b139b406a86c90c6e6aa4b3f624044819c8a09e5 | [
"MIT"
] | 8 | 2020-02-03T07:15:38.000Z | 2021-05-20T16:16:12.000Z | ports/bluepill/README.md | lupyuen/bluepill-micropython | b139b406a86c90c6e6aa4b3f624044819c8a09e5 | [
"MIT"
] | null | null | null | ports/bluepill/README.md | lupyuen/bluepill-micropython | b139b406a86c90c6e6aa4b3f624044819c8a09e5 | [
"MIT"
] | null | null | null | # STM32 Blue Pill Port
To be integrated with https://github.com/lupyuen/send_altitude_cocoos/tree/micropython
| 27.75 | 86 | 0.81982 | eng_Latn | 0.478951 |
4d37b6b61bef5235963dfa262f175b10a852ed0d | 1,122 | md | Markdown | _posts/25/2019-07-31-hello3307 (342).md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | _posts/25/2019-07-31-hello3307 (342).md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | _posts/25/2019-07-31-hello3307 (342).md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | [
"MIT"
] | null | null | null | ---
id: 3643
title: Florian Jungwirth
author: chito
layout: post
guid: http://localhost/mbti/?p=3643
permalink: /hello3643
tags:
- claims
- lawyer
- doctor
- house
- multi family
- online
- poll
- business
- unspecified
- single
- relationship
- engaged
- married
- complicated
- open relationship
- widowed
- separated
- divorced
- Husband
- Wife
- Boyfriend
- Girlfriend
category: Guides
---
{: toc}
## Name
Florian Jungwirth
* * *
## Nationality
Germany
* * *
## National Position
* * *
## Random data
* National kit
* Club
SJ Earthquakes
* Club Position
LCB
* Club Kit
23
* Club Joining
42768
* Contract Expiry
2020
* Rating
72
* Height
181 cm
* Weight
77 kg
* Preffered Foot
Right
* Birth Date
32535
* Preffered Position
CDM/CB Medium / Medium
* Weak foot
3
* Skill Moves
2
* Ball Control
66
* Dribbling
60
* Marking
69
* Sliding Tackle
74
* Standing Tackle
70
* Aggression
69
* Reactions
71
* Attacking Position
48
* Interceptions
78</ul> | 9.35 | 35 | 0.603387 | eng_Latn | 0.8005 |
<!-- source: docs/csharp/misc/cs0025.md (sakapon/docs.ja-jp, CC-BY-4.0/MIT) -->
---
description: Compiler Error CS0025
title: Compiler Error CS0025
ms.date: 07/20/2015
f1_keywords:
- CS0025
helpviewer_keywords:
- CS0025
ms.assetid: dfb6f013-cb61-4b37-afbf-93afeaf2fa08
ms.openlocfilehash: ea5f63b7412321c8f072cdef7860b2deb76ef31c
ms.sourcegitcommit: d579fb5e4b46745fd0f1f8874c94c6469ce58604
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/30/2020
ms.locfileid: "89138684"
---
# <a name="compiler-error-cs0025"></a>Compiler Error CS0025
Standard library file 'file' could not be found
The compiler could not find a required file. Verify that the path is correct and that the file exists.
If the file is a Visual Studio system file, you may need to repair or completely reinstall Visual Studio.
<!-- source: reference/architecture/data-path.md (shuuji3/calico, Apache-2.0) -->
---
title: 'The Calico data path: IP routing and iptables'
description: Learn how packets flow between workloads in a datacenter, or between a workload and the internet.
canonical_url: '/reference/architecture/data-path'
---
One of Calico’s key features is how packets flow between workloads in a
data center, or between a workload and the Internet, without additional
encapsulation.
In the Calico approach, IP packets to or from a workload are routed and
firewalled by the Linux routing table and iptables infrastructure on the
workload’s host. For a workload that is sending packets, Calico ensures
that the host is always returned as the next hop MAC address regardless
of whatever routing the workload itself might configure. For packets
addressed to a workload, the last IP hop is that from the destination
workload’s host to the workload itself.

Suppose that IPv4 addresses for the workloads are allocated from a
datacenter-private subnet of 10.65/16, and that the hosts have IP
addresses from 172.18.203/24. If you look at the routing table on a host:
```bash
route -n
```
You will see something like this:
```
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.18.203.1 0.0.0.0 UG 0 0 0 eth0
10.65.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ns-db03ab89-b4
10.65.0.21 172.18.203.126 255.255.255.255 UGH 0 0 0 eth0
10.65.0.22 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0
10.65.0.23 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0
10.65.0.24 0.0.0.0 255.255.255.255 UH 0 0 0 tapa429fb36-04
172.18.203.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
{: .no-select-button}
There is one workload on this host with IP address 10.65.0.24, and
accessible from the host via a TAP (or veth, etc.) interface named
tapa429fb36-04. Hence there is a direct route for 10.65.0.24, through
tapa429fb36-04. Other workloads, with the .21, .22 and .23 addresses,
are hosted on two other hosts (172.18.203.126 and .129), so the routes
for those workload addresses are via those hosts.
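The selection among those routes is plain longest-prefix matching. As a rough illustration only (the kernel's real FIB lookup with its tries, metrics, and scopes is far more involved), the example table can be modeled with Python's standard `ipaddress` module; interface and gateway names below are copied from the `route -n` output above:

```python
import ipaddress

# A toy longest-prefix match over (a subset of) the routing table above.
ROUTES = [
    # (destination,    interface,        gateway)
    ("0.0.0.0/0",      "eth0",           "172.18.203.1"),
    ("10.65.0.0/16",   "ns-db03ab89-b4", None),
    ("10.65.0.21/32",  "eth0",           "172.18.203.126"),
    ("10.65.0.24/32",  "tapa429fb36-04", None),
]

def lookup(dst):
    """Return the most specific route matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in ROUTES if addr in ipaddress.ip_network(r[0])]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

print(lookup("10.65.0.24"))  # delivered directly via the tap interface
print(lookup("10.65.0.21"))  # forwarded via host 172.18.203.126
print(lookup("8.8.8.8"))     # falls through to the default route
```

The /32 workload routes always win over the /16 subnet route and the default route, which is exactly why local workloads get direct delivery while remote ones are forwarded to their hosts.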
The direct routes are set up by a Calico agent named Felix when it is
asked to provision connectivity for a particular workload. A BGP client
(such as BIRD) then notices those and distributes them – perhaps via a
route reflector – to BGP clients running on other hosts, and hence the
indirect routes appear also.
## Bookended security
The routing above in principle allows any workload in a data center to
communicate with any other – but in general, an operator will want to
restrict that; for example, so as to isolate customer A’s workloads from
those of customer B. Therefore Calico also programs iptables on each
host, to specify the IP addresses (and optionally ports etc.) that each
workload is allowed to send to or receive from. This programming is
‘bookended’ in that the traffic between workloads X and Y will be
firewalled by both X’s host and Y’s host – this helps to keep unwanted
traffic off the data center’s core network, and as a secondary defense
in case it is possible for a rogue workload to compromise its local
host.
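To make the bookending concrete, the host rules might look roughly like the following hand-written sketch. This is illustrative only: the destination-side interface name `tapb1234abcd-05` is made up, and the chains Felix actually programs are named and organized quite differently.

```
# On workload X's host: filter as the packet leaves the source workload.
iptables -A FORWARD -i tapa429fb36-04 -s 10.65.0.24 -d 10.65.0.22 \
         -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -i tapa429fb36-04 -s 10.65.0.24 -j DROP

# On workload Y's host: filter again as the packet is delivered.
iptables -A FORWARD -o tapb1234abcd-05 -d 10.65.0.22 -s 10.65.0.24 \
         -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -o tapb1234abcd-05 -d 10.65.0.22 -j DROP
```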
## Is that all?
As far as the static data path is concerned, yes. It’s just a
combination of responding to workload ARP requests with the host MAC, IP
routing and iptables. There’s a great deal more to Calico in terms of
how the required routing and security information is managed, and for
handling dynamic things such as workload migration – but the basic data
path really is that simple.
<!-- source: src/Doc/Masa.Blazor.Doc/Demos/Components/Grid/misc/oneColumnWidth.md (gavin1ee/MASA.Blazor, MIT) -->
---
order: 5
title:
zh-CN: 一列宽度
en-US: One column width
---
## zh-CN
使用自动布局时,你可以只定义一列的宽度,并且仍然可以让它的同级元素围绕它自动调整大小。
## en-US
When using the auto-layout, you can define the width of only one column and still have its siblings automatically resize around it.
<!-- source: docs/vs-2015/profiling/how-to-manually-create-performance-sessions.md (galaxyuliana/visualstudio-docs.ko-kr, CC-BY-4.0/MIT) -->
---
title: 'How to: Manually Create Performance Sessions | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-debug
ms.topic: conceptual
f1_keywords:
- vs.performance.wizard.dllpage
- vs.performance.wizard.exepage
helpviewer_keywords:
- performance sessions, creating
- performance tools, creating performance sessions
ms.assetid: ee2b3e0c-0990-46d9-8de6-c29fa386b15b
caps.latest.revision: 23
author: MikeJo5000
ms.author: mikejo
manager: jillfra
ms.openlocfilehash: 622d349fd063cf0a22e3c286003490e088cd4440
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 04/23/2019
ms.locfileid: "68192837"
---
# <a name="how-to-manually-create-performance-sessions"></a>How to: Manually Create Performance Sessions
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
You can create a performance session manually. You do not need to have a project open in [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] to do this. For more information, see [Configuring Performance Sessions](../profiling/configuring-performance-sessions.md).
### <a name="to-manually-create-a-performance-session"></a>To manually create a performance session
1. On the **Analyze** menu, point to **Profiler**, and then click **New Performance Session**.
     An empty performance session is added to **Performance Explorer**.
2. Right-click **Targets**, and then select **Add Target Binary**.
3. In the **Add Target Binary** dialog box, select the file name and then click **Open**.
     The new binary is added.
## <a name="see-also"></a>See also
 [Performance Explorer](../profiling/performance-explorer.md)
 [Getting Started](../profiling/getting-started-with-performance-tools.md)
<!-- source: README.md (codemix/4square-no-coffee, MIT) -->
# Foursquare Venues API
This is a simple module for accessing Foursquare's venues API.

    npm install foursquarevenues

## License

MIT
## Example
```js
var foursquare = (require('foursquarevenues'))('CLIENTIDKEY', 'CLIENTSECRETKEY');
var params = {
"ll": "40.7,-74"
};
foursquare.getVenues(params, function(error, venues) {
if (!error) {
console.log(venues);
}
});
foursquare.exploreVenues(params, function(error, venues) {
if (!error) {
console.log(venues);
}
});
```
**Enjoy! You can email me at [email protected] about any bugs.**
<!-- source: README.md (useryard/useryard, Apache-2.0) -->
# useryard
Extremely simple customer portal. Authentication, subscription payments and
customer email communication made easy.
Deploy a customer portal in minutes, with account details management,
subscription management, marketing newsletter management, business metrics.
Connects to common providers like Stripe, Sendgrid and Mailgun.
Concentrate on what really moves the needle for your business, and delegate the
things that are common to every business.
## Features
- [ ] Authentication
- [ ] Password-less (magic links)
- [ ] Password
- [ ] Social logins (OAuth 2.0)
- [ ] Authorization
- [ ] Customer subscriptions
- [ ] Recurring payments
- [ ] Theming
- [ ] Customize UI to match your business
- [ ] Admin panel
- [ ] Subscription management
- [ ] Manage plans
- [ ] Manage customers
- [ ] Business Metrics
- [ ] Growth and Churn
- [ ] Recurring revenue
## Getting Started
```bash
npm run dev
# or
yarn dev
```
## LICENSE
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (Apache-2.0)
<!-- source: README.md (marcoinscatola/open-korean-text-wrapper-node-2, Apache-2.0) -->
# open-korean-text-node
[](https://badge.fury.io/js/open-korean-text-node)
[](https://travis-ci.org/open-korean-text/open-korean-text-wrapper-node-2)
A Node.js binding for [open-korean-text](https://github.com/open-korean-text/open-korean-text) via the [node-java](https://github.com/joeferner/node-java) interface.
## Dependency
Currently wraps [open-korean-text 2.2.0](https://github.com/open-korean-text/open-korean-text/releases/tag/open-korean-text-2.2.0)
현재 이 프로젝트는 [open-korean-text 2.2.0](https://github.com/open-korean-text/open-korean-text/releases/tag/open-korean-text-2.2.0)을 사용중입니다.
## Requirement
Since it uses java code compiled with Java 8, make sure you have both Java 8 JDK and JRE installed.
For more details about installing java interface, see installation notes on below links.
이 프로젝트는 Java 8로 컴파일된 코드를 사용하기 때문에, Java 8 JDK/JRE가 설치되어 있어야 합니다.
Java interface의 설치에 관련된 더 자세한 사항은 아래 링크에서 확인하세요.
- [node-gyp#installation](https://github.com/nodejs/node-gyp#installation)
- [node-java#installation](https://github.com/joeferner/node-java#installation)
## Installation
```bash
npm install --save open-korean-text-node
```
### Usage
```typescript
import OpenKoreanText from 'open-korean-text-node';
// or
const OpenKoreanText = require('open-korean-text-node').default;
```
- See the [API](#api) section for more information.
## Examples
- [test/processor.spec.js](./test/processor.spec.js)
- [test/tokens.spec.js](./test/tokens.spec.js)
## API
### OpenKoreanText
#### Tokenizing
```typescript
OpenKoreanText.tokenize(text: string): Promise<IntermediaryTokens>;
OpenKoreanText.tokenizeSync(text: string): IntermediaryTokens;
```
- `text` a target string to tokenize
#### Detokenizing
```typescript
OpenKoreanText.detokenize(tokens: IntermediaryTokensObject): Promise<string>;
OpenKoreanText.detokenize(words: string[]): Promise<string>;
OpenKoreanText.detokenize(...words: string[]): Promise<string>;
OpenKoreanText.detokenizeSync(tokens: IntermediaryTokensObject): string;
OpenKoreanText.detokenizeSync(words: string[]): string;
OpenKoreanText.detokenizeSync(...words: string[]): string;
```
- `tokens` an intermediary token object from `tokenize`
- `words` an array of words to detokenize
#### Phrase Extracting
```typescript
OpenKoreanText.extractPhrases(tokens: IntermediaryTokens, options?: ExcludePhrasesOptions): Promise<KoreanToken>;
OpenKoreanText.extractPhrasesSync(tokens: IntermediaryTokens, options?: ExcludePhrasesOptions): KoreanToken;
```
- `tokens` an intermediary token object from `tokenize` or `stem`
- `options` an object to pass options to extract phrases where
- `filterSpam` - a flag to filter spam tokens. defaults to `true`
- `includeHashtag` - a flag to include hashtag tokens. defaults to `false`
#### Normalizing
```typescript
OpenKoreanText.normalize(text: string): Promise<string>;
OpenKoreanText.normalizeSync(text: string): string;
```
- `text` a target string to normalize
#### Sentence Splitting
```typescript
OpenKoreanText.splitSentences(text: string): Promise<Sentence[]>;
OpenKoreanText.splitSentencesSync(text: string): Sentence[];
```
- `text` a target string to split into sentences
* returns an array of `Sentence` objects, each of which includes:
* `text`: string - the sentence's text
* `start`: number - the sentence's start position from original string
* `end`: number - the sentence's end position from original string
#### Custom Dictionary
```typescript
OpenKoreanText.addNounsToDictionary(...words: string[]): Promise<void>;
OpenKoreanText.addNounsToDictionarySync(...words: string[]): void;
```
- `words` words to add to dictionary
#### toJSON
```typescript
OpenKoreanText.tokensToJsonArray(tokens: IntermediaryTokensObject, keepSpace?: boolean): Promise<KoreanToken[]>;
OpenKoreanText.tokensToJsonArraySync(tokens: IntermediaryTokensObject, keepSpace?: boolean): KoreanToken[];
```
- `tokens` an intermediary token object from `tokenize` or `stem`
- `keepSpace` a flag to omit 'Space' token or not, defaults to `false`
### **IntermediaryToken** object
An intermediary token object required for internal processing.
It provides convenience wrapper functions to process text without using the processor object directly:
```typescript
tokens.extractPhrases(options?: ExcludePhrasesOptions): Promise<KoreanToken>;
tokens.extractPhrasesSync(options?: ExcludePhrasesOptions): KoreanToken;
tokens.detokenize(): Promise<string>;
tokens.detokenizeSync(): string;
tokens.toJSON(): KoreanToken[];
```
- NOTE: the `tokens.toJSON()` method is equivalent to `OpenKoreanText.tokensToJsonArraySync(tokens, false)`
### **KoreanToken** object
A JSON output object which contains:
- `text`: string - token's text
- `stem`: string - token's stem
- `pos`: string - type of token. possible entries are:
- Word level POS:
`Noun`, `Verb`, `Adjective`,
`Adverb`, `Determiner`, `Exclamation`,
`Josa`, `Eomi`, `PreEomi`, `Conjunction`,
`NounPrefix`, `VerbPrefix`, `Suffix`, `Unknown`
- Chunk level POS:
`Korean`, `Foreign`, `Number`, `KoreanParticle`, `Alpha`,
`Punctuation`, `Hashtag`, `ScreenName`,
`Email`, `URL`, `CashTag`
- Functional POS:
`Space`, `Others`
- `offset`: number - position from original string
- `length`: number - length of text
- `isUnknown`: boolean
<!-- source: articles/media-services/latest/player-use-azure-media-player-how-to.md (pmsousa/azure-docs.pt-pt, CC-BY-4.0/MIT) -->
---
title: Playback with Azure Media Player - Azure
description: Azure Media Player is a web video player built to play back media content from Microsoft Azure Media Services on a wide variety of browsers and devices.
services: media-services
documentationcenter: ''
author: IngridAtMicrosoft
manager: femila
editor: ''
ms.service: media-services
ms.workload: ''
ms.topic: article
ms.date: 07/17/2019
ms.author: inhenkel
ms.openlocfilehash: cf4916341a97868de757804b570212f1cc1105b2
ms.sourcegitcommit: 02bc06155692213ef031f049f5dcf4c418e9f509
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 04/03/2021
ms.locfileid: "106281964"
---
# <a name="playback-with-azure-media-player"></a>Reprodução com Azure Media Player
O Azure Media Player é um leitor de vídeo sonoro construído para reproduzir conteúdo sonoro da Microsoft Azure Media Services numa grande variedade de navegadores e dispositivos. O Azure Media Player utiliza padrões da indústria, tais como HTML5, Extensões de Fonte de Mídia (MSE) e Extensões de Mídia Encriptadas (EME) para proporcionar uma experiência de streaming adaptativa enriquecida. Quando estes padrões não estão disponíveis num dispositivo ou num browser, o Azure Media Player utiliza o Flash e o Silverlight como tecnologia de retorno. Independentemente da tecnologia de reprodução utilizada, os desenvolvedores terão uma interface JavaScript unificada para aceder a APIs. Isto permite que os conteúdos servidos pela Azure Media Services sejam reproduzidos em uma vasta gama de dispositivos e navegadores sem qualquer esforço extra.
O Microsoft Azure Media Services permite que os conteúdos sejam servidos com formatos de streaming HLS, DASH, Smooth Streaming para reproduzir conteúdo. O Azure Media Player tem em conta estes vários formatos e reproduz automaticamente o melhor link baseado nas capacidades da plataforma/navegador. Os Serviços de Mídia também permitem encriptação dinâmica de ativos com encriptação PlayReady ou encriptação de envelope aES-128 bit. O Azure Media Player permite a desencriptação de conteúdo encriptado PlayReady e AES-128 quando devidamente configurado.
> [!NOTE]
> HTTPS playback is required for Widevine encrypted content.
## <a name="use-azure-media-player-demo-page"></a>Use a página de demonstração do Azure Media Player
### <a name="start-using"></a>Comece a usar
Pode utilizar a [página de demonstração do Azure Media Player](https://aka.ms/azuremediaplayer) para reproduzir amostras do Azure Media Services ou do seu próprio fluxo.
Para reproduzir um novo vídeo, cole um URL diferente e prima **Update**.
Para configurar várias opções de reprodução (por exemplo, tecnologia, linguagem ou encriptação), prima **Opções Avançadas**.

### <a name="monitor-diagnostics-of-a-video-stream"></a>Monitorize diagnósticos de um fluxo de vídeo
Pode utilizar a [página de demonstração do Azure Media Player](https://aka.ms/azuremediaplayer) para monitorizar os diagnósticos de um stream de vídeo.

## <a name="set-up-azure-media-player-in-your-html"></a>Configurar o Azure Media Player no seu HTML
O Azure Media Player é fácil de configurar. Bastam alguns momentos para obter a reprodução básica de conteúdos de mídia na sua conta de Media Services. Consulte [a documentação do Azure Media Player](../azure-media-player/azure-media-player-overview.md) para obter mais detalhes sobre como configurar e configurar o Azure Media Player.
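For orientation only, a typical embed looks roughly like the following. The version path, stream URL, and tech order below are placeholders and assumptions, not taken from this article — consult the Azure Media Player documentation for the current snippet:

```html
<!-- Illustrative sketch only: pin a real player version and use your own streaming URL. -->
<link href="//amp.azure.net/libs/amp/2.3.11/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
<script src="//amp.azure.net/libs/amp/2.3.11/azuremediaplayer.min.js"></script>

<video id="azuremediaplayer" class="azuremediaplayer amp-default-skin"
       autoplay controls width="640" height="400"
       data-setup='{"techOrder": ["azureHtml5JS", "flashSS", "silverlightSS", "html5"]}'>
  <source src="//example.streaming.mediaservices.windows.net/locator/asset.ism/manifest"
          type="application/vnd.ms-sstr+xml" />
  <p class="amp-no-js">To view this video please enable JavaScript.</p>
</video>
```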
## <a name="additional-notes"></a>Notas adicionais
* Widevine é um serviço fornecido pela Google Inc. e sujeito aos termos de serviço e Política de Privacidade da Google, Inc.
## <a name="next-steps"></a>Passos seguintes
* [Azure Media Player documentation](../azure-media-player/azure-media-player-overview.md) (Documentação do Leitor de Multimédia do Azure)
* [Amostras do Azure Media Player](https://github.com/Azure-Samples/azure-media-player-samples) | 68.813559 | 843 | 0.801724 | por_Latn | 0.994062 |
<!-- source: README.md (6bee/aqua-compare, MIT) -->
# aqua-graphcompare
| branch | package | AppVeyor | Travis CI |
| --- | --- | --- | --- |
| `main` | [![NuGet Badge][1]][2] [![MyGet Pre Release][3]][4] | [![Build status][5]][6] | [![Travis build Status][7]][8] |
### Description
A differ for arbitrary object graphs that compares property values starting at a pair of root objects, recording any differences while visiting all nodes of the object graph.
The comparison result contains a list of deltas describing each difference found.
The comparer may be customized by both subtyping and dependency injection, for various purposes:
* Override selection of properties for comparison for any given object type
* Specify display string provider for object instance/value labeling (breadcrumb)
* Specify display string provider for property values (old/new value display string)
* Specify a custom object mapper for advanced scenarios
The comparer allows comparison of independent object types and relies on object structure and values at runtime rather than statically defined type information.
### Features
* Differ for arbitrary object graphs
* Provides hierarchical and flat deltas
* Allows for custom descriptions for types and members
* Allows for custom resolution of values (i.e. display values for enums, foreign keys, etc.)
## Sample
Compare two versions of a business object
```C#
var original = GetOriginalBusinessObject();
var changed = GetModifiedBusinessObject();
var result = new GraphComparer().Compare(original, changed);
Console.WriteLine("{0} {1} {2}",
result.FromType,
result.IsMatch ? "==" : "<>",
result.ToType);
foreach (var delta in result.Deltas)
{
Console.WriteLine(delta.ChangeType);
Console.WriteLine(delta.Breadcrumb);
Console.WriteLine(delta.OldValue);
Console.WriteLine(delta.NewValue);
}
```
[1]: https://buildstats.info/nuget/aqua-graphcompare?includePreReleases=true
[2]: http://www.nuget.org/packages/aqua-graphcompare
[3]: http://img.shields.io/myget/aqua/vpre/aqua-graphcompare.svg?style=flat-square&label=myget
[4]: https://www.myget.org/feed/aqua/package/nuget/aqua-graphcompare
[5]: https://ci.appveyor.com/api/projects/status/se738mykuhel4b3q/branch/main?svg=true
[6]: https://ci.appveyor.com/project/6bee/aqua-graphcompare/branch/main
[7]: https://travis-ci.org/6bee/aqua-graphcompare.svg?branch=main
[8]: https://travis-ci.org/6bee/aqua-graphcompare?branch=main
<!-- source: docs/manual/v0.30/docs/statements/blocks.md (JingruiLea/ceu, MIT) -->
## Blocks
A `Block` delimits a lexical scope for
[storage entities](../storage_entities/#entity-classes)
and
[abstractions](#abstractions),
which are only visible to statements inside the block.
Compound statements (e.g. *do-end*, *if-then-else*, *loops*, etc.) create new
blocks and can be nested to an arbitrary level.
### `do-end` and `escape`
The `do-end` statement creates an explicit block.
The `escape` statement terminates the deepest matching enclosing `do-end`:
```ceu
Do ::= do [`/´(ID_int|`_´)] [`(´ [LIST(ID_int)] `)´]
Block
end
Escape ::= escape [`/´ID_int] [Exp]
```
A `do-end` and `escape` accept an optional identifier following the symbol `/`.
An `escape` only matches a `do-end` with the same identifier.
The neutral identifier `_` in a `do-end` is guaranteed not to match any
`escape` statement.
A `do-end` also supports an optional list of identifiers in parenthesis which
restricts the visible storage entities inside the block to those matching the
list.
An empty list hides all storage entities from the enclosing scope.
A `do-end` can be [assigned](#assignments) to a variable whose type must be
matched by nested `escape` statements.
The whole block evaluates to the value of a reached `escape`.
If the variable is of [option type](../types/#option), the `do-end` is allowed
to terminate without an `escape`, otherwise it raises a runtime error.
Programs have an implicit enclosing `do-end` that assigns to a
*program status variable* of type `int` whose meaning is platform dependent.
Examples:
```ceu
do
do/a
do/_
escape; // matches line 1
end
escape/a; // matches line 2
end
end
```
```ceu
var int a;
var int b;
do (a)
a = 1;
b = 2; // "b" is not visible
end
```
```ceu
var int? v =
do
if <cnd> then
escape 10; // assigns 10 to "v"
else
nothing; // "v" remains unassigned
end
end;
```
```ceu
escape 0; // program terminates with a status value of 0
```
### `pre-do-end`
The `pre-do-end` statement prepends its statements in the beginning of the
program:
```ceu
Pre_Do ::= pre do
Block
end
```
All `pre-do-end` statements are concatenated together in the order they appear
and are moved to the beginning of the top-level block, before all other
statements.
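A minimal sketch (with a hypothetical identifier): a declaration made inside a `pre-do-end` is moved to the beginning of the top-level block, so later statements can use it as if it had been declared first:

```ceu
pre do
    var int limit = 10;     // hoisted to the beginning of the program
end

escape limit;               // "limit" is visible here
```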
<!-- source: docs/cppcx/wrl/module-releasenotifier-class.md (Mdlglobal-atlassian-net/cpp-docs.it-it, CC-BY-4.0/MIT) -->
---
title: Module::ReleaseNotifier Class
ms.date: 09/17/2018
ms.topic: reference
f1_keywords:
- module/Microsoft::WRL::Module::ReleaseNotifier
- module/Microsoft::WRL::Module::ReleaseNotifier::~ReleaseNotifier
- module/Microsoft::WRL::Module::ReleaseNotifier::Invoke
- module/Microsoft::WRL::Module::ReleaseNotifier::Release
- module/Microsoft::WRL::Module::ReleaseNotifier::ReleaseNotifier
helpviewer_keywords:
- Microsoft::WRL::Module::ReleaseNotifier class
- Microsoft::WRL::Module::ReleaseNotifier::~ReleaseNotifier, destructor
- Microsoft::WRL::Module::ReleaseNotifier::Invoke method
- Microsoft::WRL::Module::ReleaseNotifier::Release method
- Microsoft::WRL::Module::ReleaseNotifier::ReleaseNotifier, constructor
ms.assetid: 17249cd1-4d88-42e3-8146-da9e942d12bd
ms.openlocfilehash: f314d09c443d0d284e3a821b5c879bfb74baf812
ms.sourcegitcommit: c123cc76bb2b6c5cde6f4c425ece420ac733bf70
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/14/2020
ms.locfileid: "81371281"
---
# <a name="modulereleasenotifier-class"></a>Classe Module::ReleaseNotifier
Richiama un gestore eventi quando viene rilasciato l'ultimo oggetto in un modulo.
## <a name="syntax"></a>Sintassi
```cpp
class ReleaseNotifier;
```
## <a name="members"></a>Membri
### <a name="public-constructors"></a>Costruttori pubblici
Nome | Descrizione
----------------------------------------------------------------------------------- | --------------------------------------------------------------------------
[Modulo::ReleaseNotifier::](#releasenotifier-tilde-releasenotifier) | Deinizializza l'istanza corrente `Module::ReleaseNotifier` della classe.
[Modulo::ReleaseNotifier::ReleaseNotifier](#releasenotifier-releasenotifier) | Inizializza una nuova istanza della classe `Module::ReleaseNotifier`.
### <a name="public-methods"></a>Metodi pubblici
Nome | Descrizione
------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------
[Modulo::ReleaseNotifier::Richiamare](#releasenotifier-invoke) | Quando implementato, chiama un gestore eventi quando viene rilasciato l'ultimo oggetto in un modulo.
[Module::ReleaseNotifier::Release](#releasenotifier-release) | Elimina l'oggetto corrente `Module::ReleaseNotifier` se l'oggetto è stato costruito con un parametro **true**.
## <a name="inheritance-hierarchy"></a>Gerarchia di ereditarietà
`ReleaseNotifier`
## <a name="requirements"></a>Requisiti
**Intestazione:** module.h
**Spazio dei nomi:** Microsoft::WRL
## <a name="modulereleasenotifierreleasenotifier"></a><a name="releasenotifier-tilde-releasenotifier"></a>Modulo::ReleaseNotifier::
Deinizializza l'istanza corrente `Module::ReleaseNotifier` della classe.
```cpp
WRL_NOTHROW virtual ~ReleaseNotifier();
```
## <a name="modulereleasenotifierinvoke"></a><a name="releasenotifier-invoke"></a>Modulo::ReleaseNotifier::Richiamare
Quando implementato, chiama un gestore eventi quando viene rilasciato l'ultimo oggetto in un modulo.
```cpp
virtual void Invoke() = 0;
```
## <a name="modulereleasenotifierrelease"></a><a name="releasenotifier-release"></a>Modulo::ReleaseNotifier::Release
Elimina l'oggetto corrente `Module::ReleaseNotifier` se l'oggetto è stato costruito con un parametro **true**.
```cpp
void Release() throw();
```
## <a name="modulereleasenotifierreleasenotifier"></a><a name="releasenotifier-releasenotifier"></a>Modulo::ReleaseNotifier::ReleaseNotifier
Inizializza una nuova istanza della classe `Module::ReleaseNotifier`.
```cpp
ReleaseNotifier(bool release) throw();
```
### <a name="parameters"></a>Parametri
*Rilascio*<br/>
`true`per eliminare questa `Release` istanza quando viene chiamato il metodo; `false` per non eliminare questa istanza.
<!-- source: docs/examples/vertical.md (udaypydi/react-carousel, MIT) -->
## Vertical Carousel
You can pass a `vertical` prop to rotate the carousel in a vertical direction. The default value of `vertical` is false.
```jsx render
<Carousel
vertical
infinite
keepDirectionWhenDragging
>
<img src={imageOne} />
<img src={imageTwo} />
<img src={imageThree} />
</Carousel>
```
<!-- source: notes/class4.md (campbellmarianna/Core-Data-Structures, MIT) -->
# Make code perform faster
| 13.5 | 26 | 0.777778 | eng_Latn | 0.990239 |
:: BASE_DOC ::
## API
### Message Props
name | type | default | description | required
-- | -- | -- | -- | --
align | String | left | Text alignment. Options: left/center. TS type: `MessageAlignType` `type MessageAlignType = 'left' \| 'center'`. [See type definition](https://github.com/Tencent/tdesign-mobile-vue/tree/develop/src/message/type.ts) | N
closeBtn | String / Boolean / Slot / Function | undefined | Close button; can be customized. A value of true shows the default close button; false hides it. A string value is rendered as-is, e.g. "Close". A fully custom button is also supported. TS type: `string \| boolean \| TNode`. [General type definition](https://github.com/Tencent/tdesign-mobile-vue/blob/develop/src/common.ts) | N
content | String / Slot / Function | - | Custom content of the message. TS type: `string \| TNode`. [General type definition](https://github.com/Tencent/tdesign-mobile-vue/blob/develop/src/common.ts) | N
duration | Number | 3000 | Built-in timer of the message; the duration-end event fires when the timer expires. Unit: milliseconds. A value of 0 means no timer. | N
theme | String | info | Message style. Options: info/success/warning/error. TS type: `MessageThemeList` `type MessageThemeList = 'info' \| 'success' \| 'warning' \| 'error'`. [See type definition](https://github.com/Tencent/tdesign-mobile-vue/tree/develop/src/message/type.ts) | N
visible | Boolean | false | Whether the message is shown; by default the component is destroyed when hidden | N
zIndex | Number | - | Stacking level; the style defaults to 5000 | N
onClose | Function | | TS type: `() => void`<br/>Triggered when the message closes | N
onClosed | Function | | TS type: `() => void`<br/>Triggered after the message closes and the animation ends | N
onOpen | Function | | TS type: `() => void`<br/>Triggered when the message is shown | N
onOpened | Function | | TS type: `() => void`<br/>Triggered after the message is shown and the animation ends | N
onVisibleChange | Function | | TS type: `(visible: boolean) => void`<br/>Triggered when visibility changes | N
### Message Events
name | params | description
-- | -- | --
close | - | Triggered when the message closes
closed | - | Triggered after the message closes and the animation ends
open | - | Triggered when the message is shown
opened | - | Triggered after the message is shown and the animation ends
visible-change | `(visible: boolean)` | Triggered when visibility changes
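
The props and events above translate into markup along these lines; a minimal, illustrative sketch (the `t-message` tag name and the exact binding style are assumed from the library's conventions, so verify against the component's own examples):

```html
<t-message
  theme="success"
  :visible="messageVisible"
  content="Saved successfully"
  :duration="3000"
  @close="messageVisible = false"
/>
```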
# expo-face-detector
`expo-face-detector` lets you use the power of the [Google Mobile Vision](https://developers.google.com/vision/face-detection-concepts) framework to detect faces in images.
See [FaceDetector docs](https://docs.expo.io/versions/latest/sdk/face-detector) for documentation of this universal module's API.
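A minimal usage sketch (this only runs inside an Expo app; `imageUri` and the option choices here are illustrative, so check the linked docs for the current API surface):

```javascript
import * as FaceDetector from 'expo-face-detector';

// Detect faces in a local image; resolves to an object of shape { faces: [...] }.
async function detectFaces(imageUri) {
  const options = {
    mode: FaceDetector.FaceDetectorMode.fast,
    detectLandmarks: FaceDetector.FaceDetectorLandmarks.all,
    runClassifications: FaceDetector.FaceDetectorClassifications.none,
  };
  const { faces } = await FaceDetector.detectFacesAsync(imageUri, options);
  return faces; // each face includes bounds and, per options, landmarks/scores
}
```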
# cbfy
## Installation
`npm install cbfy`
## Usage
Call a callback after a Promise settles
```javascript
const cbfy = require('cbfy')
Promise.resolve().then(...cbfy(cb))
```
## Why?
Promises are sometimes counterintuitive, especially when mixing Promises and callbacks.

Of course you want to avoid that situation, but when migrating legacy code it's good to have a way out.

This package is a little bit similar to [nodeify](https://www.npmjs.com/package/nodeify), or at least tries to solve the same problem.

It just does so with a different interface, using spread syntax, in order to respect the `then(successCb, errorCb)` signature.
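
To make the interface concrete, here is a minimal sketch of the idea. This is not the package's actual source; it assumes cbfy simply returns a `[successCb, errorCb]` pair that `then` can consume via spread:

```javascript
// Hypothetical sketch: wrap a Node-style callback as a [successCb, errorCb]
// pair so it can be spread into Promise#then(successCb, errorCb).
function cbfy(cb) {
  return [
    (result) => cb(null, result), // fulfilled -> cb(null, value)
    (error) => cb(error),         // rejected  -> cb(error)
  ];
}

Promise.resolve(42).then(...cbfy((err, value) => {
  if (err) throw err;
  console.log(value); // 42
}));
```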
## Warning
You have to be careful about when you want to use it, and when not.

Once you call cbfy, everything that happens in your cb runs asynchronously and escapes the parent promise.

You probably don't want to chain anything after the `then(...cbfy(cb))` call.

For example, even if your callback returns a Promise, you can't chain it the same way you would without cbfy.
This is by design.
# Virtual-to-Physical Mapping<a name="EN-US_TOPIC_0000001079036248"></a>
## Basic Concepts<a name="section9108144913615"></a>
The Memory Management Unit \(MMU\) is used to map the virtual addresses in the process space and the actual physical addresses and specify corresponding access permissions and cache attributes. When a program is executed, the CPU accesses the virtual memory, locates the corresponding physical memory based on the MMU page table entry, and executes the code or performs data read/write operations. The page tables of the MMU store the mappings between virtual and physical addresses and the access permission. A page table is created when each process is created. The page table contains page table entries \(PTEs\), and each PTE describes a mapping between a virtual address region and a physical address region. The MMU has a Translation Lookaside Buffer \(TLB\) to perform address translation. During address translation, the MMU first searches the TLB for the corresponding PTE. If a match is found, the address can be returned directly. The following figure illustrates how the CPU accesses the memory or peripherals.
**Figure 1** CPU accessing the memory or peripheral<a name="fig209379387574"></a>

## Working Principles<a name="section12392621871"></a>
Virtual-to-physical address mapping is the process of establishing page tables. The MMU supports multiple levels of page tables, and the LiteOS-A kernel uses level-2 page tables to describe the process space. Each level-1 PTE descriptor occupies 4 bytes and describes a mapping record of 1 MiB of memory. The 1 GiB user space of the LiteOS-A kernel therefore has 1024 level-1 PTEs. When a user process is created, a 4 KiB memory block is requested from memory as the storage area of its level-1 page table. Level-2 page tables dynamically request memory based on the requirements of the process.
- When a user program is loaded and started, the code segment and data segment are mapped to the virtual memory space \(for details, see [Dynamic Loading and Linking](kernel-small-bundles-linking.md)\). At that time, no physical page is mapped.
- When the program is executed, as shown by the bold arrow in the following figure, the CPU accesses the virtual address and checks for the corresponding physical memory in the MMU. If the virtual address does not have the corresponding physical address, a page missing fault is triggered. The kernel requests the physical memory, writes the virtual-physical address mapping and the related attributes to the page table, and caches the PTE in the TLB. Then, the CPU can directly access the actual physical memory.
- If the PTE already exists in the TLB, the CPU can access the physical memory without accessing the page table stored in the memory.
**Figure 2** CPU accessing the memory<a name="fig95557155719"></a>

## Development Guidelines<a name="section10264102013713"></a>
### Available APIs<a name="section195320251578"></a>
**Table 1** APIs of the virtual-to-physical address mapping module
<a name="table1415203765610"></a>
<table><thead align="left"><tr id="row134151837125611"><th class="cellrowborder" valign="top" width="12.821282128212822%" id="mcps1.2.4.1.1"><p id="p16415637105612"><a name="p16415637105612"></a><a name="p16415637105612"></a>Category</p>
</th>
<th class="cellrowborder" valign="top" width="29.832983298329836%" id="mcps1.2.4.1.2"><p id="p11415163718562"><a name="p11415163718562"></a><a name="p11415163718562"></a>API</p>
</th>
<th class="cellrowborder" valign="top" width="57.34573457345735%" id="mcps1.2.4.1.3"><p id="p1641533755612"><a name="p1641533755612"></a><a name="p1641533755612"></a>Description</p>
</th>
</tr>
</thead>
<tbody><tr id="row12171174434013"><td class="cellrowborder" rowspan="5" valign="top" width="12.821282128212822%" headers="mcps1.2.4.1.1 "><p id="p48244461959"><a name="p48244461959"></a><a name="p48244461959"></a>MMU operations</p>
</td>
<td class="cellrowborder" valign="top" width="29.832983298329836%" headers="mcps1.2.4.1.2 "><p id="p15630114884017"><a name="p15630114884017"></a><a name="p15630114884017"></a>LOS_ArchMmuQuery</p>
</td>
<td class="cellrowborder" valign="top" width="57.34573457345735%" headers="mcps1.2.4.1.3 "><p id="p4171244164013"><a name="p4171244164013"></a><a name="p4171244164013"></a>Obtains the physical address and attributes corresponding to the virtual address of the process space.</p>
</td>
</tr>
<tr id="row17223043124018"><td class="cellrowborder" valign="top" headers="mcps1.2.4.1.1 "><p id="p1730695210400"><a name="p1730695210400"></a><a name="p1730695210400"></a>LOS_ArchMmuMap</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.2.4.1.2 "><p id="p202242431404"><a name="p202242431404"></a><a name="p202242431404"></a>Maps the virtual address region of the process space and the physical address region.</p>
</td>
</tr>
<tr id="row536885134010"><td class="cellrowborder" valign="top" headers="mcps1.2.4.1.1 "><p id="p236819594010"><a name="p236819594010"></a><a name="p236819594010"></a>LOS_ArchMmuUnmap</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.2.4.1.2 "><p id="p736918564019"><a name="p736918564019"></a><a name="p736918564019"></a>Removes the mapping between the virtual address region of the process space and the physical address region.</p>
</td>
</tr>
<tr id="row11567448194112"><td class="cellrowborder" valign="top" headers="mcps1.2.4.1.1 "><p id="p0568204814115"><a name="p0568204814115"></a><a name="p0568204814115"></a>LOS_ArchMmuChangeProt</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.2.4.1.2 "><p id="p05681348204114"><a name="p05681348204114"></a><a name="p05681348204114"></a>Modifies the mapping attributes of the virtual address region of the process space.</p>
</td>
</tr>
<tr id="row1141513373562"><td class="cellrowborder" valign="top" headers="mcps1.2.4.1.1 "><p id="p17765212416"><a name="p17765212416"></a><a name="p17765212416"></a>LOS_ArchMmuMove</p>
</td>
<td class="cellrowborder" valign="top" headers="mcps1.2.4.1.2 "><p id="p1972971913115"><a name="p1972971913115"></a><a name="p1972971913115"></a>Moves a mapping record of a virtual address region in the process space to another unused virtual address region for remapping.</p>
</td>
</tr>
</tbody>
</table>
### How to Develop<a name="section152774210712"></a>
To use virtual-to-physical address mapping APIs:
1. Call **LOS\_ArchMmuMap** to map a physical memory block.
2. Perform the following operations on the mapped address region:
- Call **LOS\_ArchMmuQuery** to query the physical address region corresponding to a virtual address region and the mapping attributes.
- Call **LOS\_ArchMmuChangeProt** to modify the mapping attributes.
- Call **LOS\_ArchMmuMove** to remap the virtual address region.
3. Call **LOS\_ArchMmuUnmap** to remove the mapping.
> **NOTE:**
>The preceding APIs can be used after the MMU initialization is complete and the page tables of the related process are created. The MMU initialization is complete during system startup. Page tables are created when the processes are created. You do not need to perform any operation.
---
path: /blog/20210314-drama-borderless-episode2
date: 2021-03-14T17:55:30.770Z
title: "[Drama] Borderless Episode 2"
author: Chr0balord
description: Drama 'Borderless' Episode 2
tags:
- Drama
- Nogizaka46
- Sakurazaka46
- Hinatazaka46
image: https://i.ibb.co/2czznTt/Himitsu-Pro-210314-Drama-Borderless-Episode-2-1080p-mp4-thumbs.jpg
image2: null
label: Pasted
url: https://controlc.com/6ca7cd88
---
# Format
* MP4 (1080p & 720p)
# Information
Artist :
Nogizaka46 = Endo Sakura & Hayakawa Seira\
Sakurazaka46 = Kobayashi Yui, Watanabe Risa & Morita Hikaru\
Hinatazaka46 = Saito Kyoko & Hamagishi Hiyori <br>
Drama : Borderless\
Broadcast Date : 14 March 2021\
Subtitle : Not Available (will update when subtitle is available)
For more information: <https://www.hikaritv.net/borderless/>
**Don't spread the Google Drive link without permission. If you want to share, just share the article link. Thank you!**
---
title: Connect to Azure Cosmos DB using BI analytics tools
description: Learn how to use the Azure Cosmos DB ODBC driver to create tables and views so that normalized data can be viewed in BI and data analytics software.
author: SnehaGunda
ms.service: cosmos-db
ms.topic: how-to
ms.date: 10/02/2019
ms.author: sngun
ms.openlocfilehash: 57db2253cbffa8e16313c7613de6d2ddb2f2b0a2
ms.sourcegitcommit: 0100d26b1cac3e55016724c30d59408ee052a9ab
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 07/07/2020
ms.locfileid: "86027237"
---
# <a name="connect-to-azure-cosmos-db-using-bi-analytics-tools-with-the-odbc-driver"></a>Se connecter à Azure Cosmos DB à l’aide d’outils d’analyse décisionnelle avec le pilote ODBC
Le pilote ODBC Azure Cosmos DB vous permet de vous connecter à Azure Cosmos DB à l’aide d’outils d’analyse décisionnelle comme SQL Server Integration Services, Power BI Desktop et Tableau, pour analyser et créer une représentation visuelle de vos données Azure Cosmos DB dans ces solutions.
Le pilote ODBC Azure Cosmos DB est conforme à ODBC 3.8 et prend en charge la syntaxe ANSI SQL-92. Le pilote offre de puissantes fonctionnalités pour vous aider à renormaliser les données dans Azure Cosmos DB. Grâce à ce pilote, vous pouvez représenter les données dans Azure Cosmos DB sous forme de tables et de vues. Il vous permet d’effectuer des opérations SQL dans des tables et des vues, notamment des regroupements par requêtes, des insertions, des mises à jour et des suppressions.
> [!NOTE]
> La connexion à Azure Cosmos DB avec le pilote ODBC est actuellement prise en charge uniquement pour les comptes de l’API SQL Azure Cosmos DB.
## <a name="why-do-i-need-to-normalize-my-data"></a>Pourquoi dois-je normaliser mes données ?
Azure Cosmos DB est une base de données sans schéma, ce qui permet le développement rapide d’applications et donne la possibilité d’effectuer une itération sur les modèles de données sans être limité à un schéma strict. Une même base de données Azure Cosmos peut contenir des documents JSON de différentes structures. C’est une solution idéale pour le développement rapide d’applications, mais si vous souhaitez analyser et créer des rapports de vos données à l’aide d’outils d’analyse de données et décisionnels, les données doivent souvent être aplaties et respecter un schéma spécifique.
C’est là qu’intervient le pilote ODBC. Grâce au pilote ODBC, vous pouvez à présent renormaliser les données d’Azure Cosmos DB dans des tables et des vues adaptées à vos besoins d’analytique données et de création de rapports. Les schémas renormalisés n’ont aucun impact sur les données sous-jacentes et il n’est pas obligatoire pour les développeurs de les respecter ; au lieu de cela, ils permettent de tirer parti d’outils compatibles ODBC pour accéder aux données. Désormais, votre base de données Azure Cosmos ne sera pas uniquement l’un des outils favoris de votre équipe de développement. Vos analystes de données vont l’adorer eux aussi.
Familiarisons-nous avec le pilote ODBC.
## <a name="step-1-install-the-azure-cosmos-db-odbc-driver"></a><a id="install"></a>Étape 1 : Installer le pilote ODBC Azure Cosmos DB
1. Téléchargez les pilotes correspondant à votre environnement :
| Programme d’installation | Systèmes d’exploitation pris en charge|
|---|---|
|[Microsoft Azure Cosmos DB ODBC 64-bit.msi](https://aka.ms/cosmos-odbc-64x64) pour Windows 64 bits| Versions 64 bits de Windows 8.1 ou version ultérieure, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012 et Windows Server 2008 R2.|
|[Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) pour 32 bits sur Windows 64 bits| Versions 64 bits de Windows 8.1 ou version ultérieure, Windows 8, Windows 7, Windows XP, Windows Vista, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 et Windows Server 2003.|
|[Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) pour Windows 32 bits|Versions 32 bits de Windows 8.1 ou version ultérieure, Windows 8, Windows 7, Windows XP et Windows Vista.|
Exécutez le fichier msi localement pour lancer l’**Assistant d’installation du pilote ODBC Microsoft Azure Cosmos DB**.
1. Terminez l’assistant d’installation en utilisant l’entrée par défaut pour installer le pilote ODBC.
1. Ouvrez l’application **Administrateur de sources de données ODBC** sur votre ordinateur. Vous pouvez le faire en tapant **sources de données ODBC** dans la zone de recherche Windows.
Vous pouvez confirmer l’installation du pilote en cliquant dans l’onglet **Pilotes** pour vérifier que le **pilote ODBC Microsoft Azure Cosmos DB** est répertorié.
:::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Administrateur de la source de données ODBC Azure Cosmos DB":::
## <a name="step-2-connect-to-your-azure-cosmos-database"></a><a id="connect"></a>Étape 2 : Vous connecter à votre base de données Azure Cosmos
1. Après l’[installation du pilote ODBC Azure Cosmos DB](#install), dans la fenêtre **Administrateur de sources de données ODBC**, cliquez sur **Ajouter**. Vous pouvez créer un DSN utilisateur ou système. Dans cet exemple, vous allez créer un DSN utilisateur.
1. Dans la fenêtre **Créer une nouvelle source de données**, sélectionnez **Microsoft Azure Cosmos DB ODBC Driver (Pilote ODBC Microsoft Azure Cosmos DB)** , puis cliquez sur **Terminer**.
1. Dans la fenêtre **Azure Cosmos DB ODBC Driver SDN Setup (Configuration DSN du pilote ODBC Azure Cosmos DB)** , indiquez les informations suivantes :
:::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Fenêtre de configuration DSN du pilote ODBC Azure Cosmos DB":::
- **Nom de source de données** : le nom convivial de votre DSN ODBC. Ce nom étant spécifique à votre compte Azure Cosmos DB, choisissez-le de manière appropriée si vous possédez plusieurs comptes.
- **Description** : courte description de la source de données.
- **Hôte** : URI de votre compte Azure Cosmos DB. Vous pouvez récupérer cette information sur la page des clés Azure Cosmos DB du portail Azure, comme illustré dans la capture d’écran suivante.
- **Clé d’accès** : clé primaire ou secondaire, en lecture-écriture ou en lecture seule, affichée sur la page des clés Azure Cosmos DB du portail Azure, comme illustré dans la capture d’écran suivante. Nous vous recommandons d'utiliser la clé en lecture seule si le DSN sert au traitement des données en lecture seule et à la création de rapports.
:::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Page des clés Azure Cosmos DB":::
- **Chiffrer la clé d’accès pour** : sélectionnez l’option optimale en fonction des utilisateurs de cet ordinateur.
1. Cliquez sur le bouton **Test** pour vérifier que vous pouvez vous connecter à votre compte Azure Cosmos DB.
1. Cliquez sur **Options avancées** et définissez les valeurs suivantes :
* **Version de l’API REST** : Sélectionnez la [version de l’API REST](/rest/api/cosmos-db/) pour vos opérations. La valeur par défaut est 2015-12-16. Si vous avez des conteneurs avec de [grandes clés de partition](large-partition-keys.md) et que vous avez besoin de l’API REST version 2018-12-31 :
- Tapez **2018-12-31** comme version de l’API REST
- Dans le menu **Démarrer**, tapez « regedit » pour rechercher et ouvrir l’application **Éditeur du Registre**.
- Dans l’Éditeur du Registre, accédez au chemin suivant : **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**
- Créez une nouvelle sous-clé avec le même nom que votre DSN, p. ex. « Contoso Account ODBC DSN ».
- Accédez à la sous-clé « Contoso Account ODBC DSN ».
- Cliquez avec le bouton droit pour ajouter une nouvelle valeur **String** :
- Nom de la valeur : **IgnoreSessionToken**
- Données de la valeur : **1**
:::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Paramètres de l’Éditeur du Registre":::
- **Cohérence des requêtes** : sélectionnez le [niveau de cohérence](consistency-levels.md) de vos opérations. La valeur par défaut est Session.
- **Nombre de tentatives** : entrez le nombre de tentatives d’une opération si la demande initiale n’aboutit pas en raison d’une limitation du débit service.
- **Fichier de schéma** : Vous avez plusieurs possibilités.
- Par défaut, si vous ne modifiez pas cette entrée (vide), le pilote analyse la première page des données de tous les conteneurs afin de déterminer le schéma de chaque conteneur. Cette opération est appelée Mappage de conteneur. Si aucun fichier de schéma n’est défini, le pilote doit effectuer l’analyse pour chaque session de pilote, ce qui peut allonger le délai de démarrage d’une application avec le DSN. Nous vous recommandons de toujours associer un fichier de schéma à un DSN.
- Si vous disposez déjà d’un fichier de schéma (peut-être un fichier que vous avez créé à l’aide de l’Éditeur de schéma), cliquez sur **Parcourir**, recherchez votre fichier, cliquez sur **Enregistrer**, puis sur **OK**.
- Si vous souhaitez créer un nouveau schéma, cliquez sur **OK**, puis sur **Éditeur de schéma** dans la fenêtre principale. Accédez ensuite à l’Éditeur de schéma pour plus d’informations. Après la création du nouveau fichier de schéma, pensez à revenir à la fenêtre **Options avancées** pour l’inclure.
1. Une fois que vous avez terminé et fermé la fenêtre de **configuration DSN du pilote ODBC Azure Cosmos DB**, le DSN du nouvel utilisateur est ajouté à l’onglet DSN utilisateur.
:::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="Nouveau nom de source de données ODBC Azure Cosmos DB dans l’onglet Nom de source de données utilisateur":::
## <a name="step-3-create-a-schema-definition-using-the-container-mapping-method"></a><a id="#container-mapping"></a>Étape 3 : Créer une définition de schéma à l’aide de la méthode de mappage de conteneur
Il existe deux types de méthodes d’échantillonnage que vous pouvez utiliser : le **mappage de conteneur** et les **délimiteurs de table**. Une session d’échantillonnage peut utiliser ces deux méthodes d’échantillonnage, mais chaque conteneur ne peut utiliser qu’une méthode d’échantillonnage spécifique. Les étapes ci-dessous créent un schéma pour les données d’un ou plusieurs conteneurs à l’aide de la méthode de mappage de conteneur. Cette méthode d’échantillonnage récupère les données dans la page d’un conteneur pour déterminer la structure de ces données. Elle transpose un conteneur en table côté ODBC. Cette méthode d’échantillonnage est rapide et efficace lorsque les données d’un conteneur sont homogènes. Si un conteneur contient des données hétérogènes, nous vous recommandons d’utiliser la [méthode de mappage par délimiteurs de table](#table-mapping), car elle fournit une méthode d’échantillonnage plus robuste pour déterminer les structures des données du conteneur.
1. Après avoir terminé les étapes 1 à 4 de la rubrique [Vous connecter à votre base de données Azure Cosmos](#connect), cliquez sur **Éditeur de schéma** dans la fenêtre **Configuration DSN du pilote ODBC Azure Cosmos DB**.
:::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Bouton Éditeur de schéma dans la fenêtre de configuration du nom de source de données du pilote ODBC Azure Cosmos DB":::
1. Dans la fenêtre **Éditeur de schéma**, cliquez sur **Créer**.
La fenêtre **Générer le schéma** affiche tous les conteneurs du compte Azure Cosmos DB.
1. Sélectionnez un ou plusieurs conteneurs à échantillonner, puis cliquez sur **Échantillonner**.
1. Dans l’onglet **Mode Création**, la base de données, le schéma et la table sont représentés. Dans la vue de la table, l’analyse affiche l’ensemble des propriétés associées aux noms de colonne (Nom SQL, Nom de la Source, etc.).
Pour chaque colonne, vous pouvez modifier le nom de la colonne SQL, le type SQL, la longueur SQL (le cas échéant), l’échelle (le cas échéant), la précision (le cas échéant) et la valeur Nullable.
- Vous pouvez définir **Masquer la colonne** sur **true** si vous souhaitez exclure cette colonne des résultats de la requête. Les colonnes marquées Masquer la colonne = true ne sont pas retournées pour la sélection et la projection, bien qu’elles fassent toujours partie du schéma. Par exemple, vous pouvez masquer toutes les propriétés système Azure Cosmos DB requises commençant par « _ ».
- La colonne **id** est le seul champ qui ne peut pas être masqué car elle sert de clé primaire dans le schéma normalisé.
1. Une fois que vous avez terminé la définition du schéma, cliquez sur **Fichier** | **Enregistrer**, accédez au répertoire d’enregistrement du schéma, puis cliquez sur **Enregistrer**.
1. Pour utiliser ce schéma avec un nom de source de données (DSN), ouvrez la **fenêtre de configuration du DSN du pilote ODBC Azure Cosmos DB** (par le biais de l’Administrateur de sources de données ODBC), cliquez sur **Options avancées**, puis, dans la boîte de dialogue **Fichier de schéma**, accédez au schéma enregistré. L’enregistrement d’un fichier de schéma dans une source de données existante modifie la connexion de DSN afin de définir l’étendue des données et de la structure définie par le schéma.
## <a name="step-4-create-a-schema-definition-using-the-table-delimiters-mapping-method"></a><a id="table-mapping"></a>Étape 4 : Créer une définition de schéma à l’aide de la méthode de mappage des délimiteurs de table
Il existe deux types de méthodes d’échantillonnage que vous pouvez utiliser : le **mappage de conteneur** et les **délimiteurs de table**. Une session d’échantillonnage peut utiliser ces deux méthodes d’échantillonnage, mais chaque conteneur ne peut utiliser qu’une méthode d’échantillonnage spécifique.
Les étapes suivantes créent un schéma pour les données d’un ou plusieurs conteneurs à l’aide de la méthode de mappage par **délimiteurs de table**. Nous vous recommandons d’utiliser cette méthode d’échantillonnage lorsque vos conteneurs contiennent des données hétérogènes. Vous pouvez utiliser cette méthode pour définir l’étendue de l’échantillonnage sur un ensemble d’attributs et ses valeurs correspondantes. Par exemple, si un document contient une propriété « Type », vous pouvez étendre l’échantillonnage aux valeurs de cette propriété. Le résultat final de l’échantillonnage serait un ensemble de tables pour chacune des valeurs du type que vous avez spécifié. Par exemple, Type = Voiture produira une table Voiture tandis que Type = Avion produira une table Avion.
1. Après avoir terminé les étapes 1 à 4 de la rubrique [Vous connecter à votre base de données Azure Cosmos](#connect), cliquez sur **Éditeur de schéma** dans la fenêtre Configuration DSN du pilote ODBC Azure Cosmos DB.
1. Dans la fenêtre **Éditeur de schéma**, cliquez sur **Créer**.
La fenêtre **Générer le schéma** affiche tous les conteneurs du compte Azure Cosmos DB.
1. Select a container on the **Sample View** tab, then in the container's **Mapping Definition** column, select **Edit**. Then, in the **Mapping Definition** window, select the **Table Delimiters** method and do the following:
a. In the **Attributes** box, type the name of a delimiter property. This is a property of your document that you want to scope the sampling to, for example City. Then press Enter.
b. If you want to scope the sampling only to certain values of the attribute you entered, select the attribute in the selection box, enter a value in the **Value** box, for example Seattle, then press Enter. You can keep adding more values for attributes. Just make sure the correct attribute is selected when you enter values.
For example, if you include an **Attributes** value of City and you want to limit your table to include only rows whose City value is New York or Dubai, enter City in the Attributes box, then New York and Dubai in the **Values** box.
1. Select **OK**.
1. After you have mapped the definitions of the containers to sample, in the **Schema Editor** window, select **Sample**.
For each column, you can change the SQL column name, the SQL type, the SQL length (if applicable), the scale (if applicable), the precision (if applicable), and the Nullable value.
- You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked Hide Column = true are not returned for selection and projection, although they are still part of the schema. For example, you can hide all of the required Azure Cosmos DB system properties that start with `_`.
- The **id** column is the only field that cannot be hidden, because it serves as the primary key in the normalized schema.
1. Once you have finished defining the schema, select **File** | **Save**, navigate to the directory where you want to save the schema, then select **Save**.
1. In the **Azure Cosmos DB ODBC Driver DSN Setup** window, select **Advanced Options**. Then, in the **Schema File** box, navigate to the saved schema file and select **OK**. Select **OK** again to save the DSN. This operation saves the schema you created to the DSN.
## <a name="optional-set-up-linked-server-connection"></a>(Optional) Set up a linked server connection
You can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
1. Create a system data source named, for example, `SDS Name`, following the instructions in [step 2](#connect).
1. [Install SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) and connect to the server.
1. In the SSMS query editor, create a linked server object `DEMOCOSMOS` for the data source with the following commands. Replace `DEMOCOSMOS` with the name of your linked server and `SDS Name` with the name of your system data source.
```sql
USE [master]
GO
EXEC master.dbo.sp_addlinkedserver @server = N'DEMOCOSMOS', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'SDS Name'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'DEMOCOSMOS', @useself=N'False', @locallogin=NULL, @rmtuser=NULL, @rmtpassword=NULL
GO
```
To see the name of the new linked server, refresh the Linked Servers list.
:::image type="content" source="./media/odbc-driver/odbc-driver-linked-server-ssms.png" alt-text="Linked server in SSMS":::
### <a name="query-linked-database"></a>Query the linked database
To query the linked database, enter an SSMS query. In this example, the query selects from the table in the container named `customers`:
```sql
SELECT * FROM OPENQUERY(DEMOCOSMOS, 'SELECT * FROM [customers].[customers]')
```
Run the query. The result should look like the following:
```
attachments/ 1507476156 521 Bassett Avenue, Wikieup, Missouri, 5422 "2602bc56-0000-0000-0000-59da42bc0000" 2015-02-06T05:32:32 +05:00 f1ca3044f17149f3bc61f7b9c78a26df
attachments/ 1507476156 167 Nassau Street, Tuskahoma, Illinois, 5998 "2602bd56-0000-0000-0000-59da42bc0000" 2015-06-16T08:54:17 +04:00 f75f949ea8de466a9ef2bdb7ce065ac8
attachments/ 1507476156 885 Strong Place, Cassel, Montana, 2069 "2602be56-0000-0000-0000-59da42bc0000" 2015-03-20T07:21:47 +04:00 ef0365fb40c04bb6a3ffc4bc77c905fd
attachments/ 1507476156 515 Barwell Terrace, Defiance, Tennessee, 6439 "2602c056-0000-0000-0000-59da42bc0000" 2014-10-16T06:49:04 +04:00 e913fe543490432f871bc42019663518
attachments/ 1507476156 570 Ruby Street, Spokane, Idaho, 9025 "2602c156-0000-0000-0000-59da42bc0000" 2014-10-30T05:49:33 +04:00 e53072057d314bc9b36c89a8350048f3
```
> [!NOTE]
> The linked Cosmos DB server does not support four-part naming. An error like the following is returned:
```
Msg 7312, Level 16, State 1, Line 44
Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
```
## <a name="optional-creating-views"></a>(Facultatif) Création de vues
Vous pouvez définir et créer des vues dans le cadre du processus d’échantillonnage. Ces vues sont équivalentes aux vues SQL. Elles sont en lecture seule et affichent les sélections et les projections de la requête SQL Azure Cosmos DB définie.
Pour créer une vue de vos données, dans la fenêtre **Éditeur de schéma**, dans la colonne **View Definitions** (Définitions de vue), cliquez sur **Add** (Ajouter) sur la ligne du conteneur à échantillonner.
:::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Création d’une vue des données":::
Puis, dans la fenêtre **View Definitions** (Définitions de la vue), procédez comme suit :
1. Cliquez sur **New** (Nouveau), entrez un nom pour la vue, par exemple, EmployeesfromSeattleView, puis cliquez **OK**.
1. Dans la fenêtre **Modifier l’affichage**, entrez une requête Azure Cosmos DB. Utilisez obligatoirement une [requête SQL Azure Cosmos DB](how-to-sql-query.md), par exemple `SELECT c.City, c.EmployeeName, c.Level, c.Age, c.Manager FROM c WHERE c.City = "Seattle"`, puis cliquez sur **OK**.
:::image type="content" source="./media/odbc-driver/odbc-driver-create-view-2.png" alt-text="Ajout d’une requête lors de la création d’une vue":::
Vous pouvez créer autant de vues que vous le souhaitez. Une fois que vous avez terminé la définition des vues, vous pouvez échantillonner les données.
## <a name="step-5-view-your-data-in-bi-tools-such-as-power-bi-desktop"></a>Étape 5 : Affichage de vos données dans des outils décisionnels comme Power BI Desktop
Vous pouvez utiliser votre nouveau DSN pour vous connecter à Azure Cosmos DB avec n’importe quel outil compatible ODBC. Cette étape vous indique simplement comment vous connecter à Power BI Desktop et créer une visualisation Power BI.
1. Ouvrez Power BI Desktop.
1. Cliquez sur **Get Data** (Obtenir les données).
:::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Obtenir les données dans Power BI Desktop":::
1. Dans la fenêtre **Get Data** (Obtenir les données), cliquez sur **Other** (Autre) | **ODBC** | **Connect** (Se connecter).
:::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Choix de la source de données ODBC dans l’option Obtenir les données de Power BI":::
1. Dans la fenêtre **From ODBC** (Depuis ODBC), sélectionnez le nom de source de données que vous avez créé, puis cliquez sur **OK**. Vous pouvez laisser les entrées **Options avancées** vides.
:::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Choisir le nom de la source de données de l’option Obtenir des données de Power BI":::
1. Dans la fenêtre **Accéder à une source de données à l’aide d’un pilote ODBC**, sélectionnez **Par défaut ou Personnalisé** , puis cliquez sur **Connecter**. Vous n’avez pas besoin d’inclure les **propriétés de la chaîne d’informations d’identification**.
1. Dans la fenêtre du **navigateur**, dans le volet gauche, développez la base de données, le schéma, puis sélectionnez la table. Le volet des résultats inclut les données en utilisant le schéma que vous avez créé.
:::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Sélection de la table dans l’option Obtenir les données de Power BI":::
1. Pour visualiser les données dans Power BI Desktop, cochez la case en regard du nom de la table, puis cliquez sur **Charger**.
1. Dans Power BI Desktop, à l’extrême gauche, sélectionnez l’onglet Données  pour confirmer que vos données ont été importées.
1. Vous pouvez désormais créer des éléments visuels à l’aide de Power BI en cliquant sur l’onglet Rapport , sur **Nouvel élément visuel**, puis en personnalisation votre mosaïque. Pour plus d’informations sur la création de visualisations dans Power BI Desktop, consultez [Types de visualisation dans Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
## <a name="troubleshooting"></a>Dépannage
Si l’erreur suivante s’affiche, vérifiez que les valeurs **Hôte** et **Clé d’accès** que vous avez copiées sur le portail Azure à l’[étape 2](#connect) sont correctes, puis réessayez. Utilisez les boutons de copie à droite des valeurs **Hôte** et **Clé d’accès** sur le portail Azure pour copier les valeurs correctes.
```output
[HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
```
## <a name="next-steps"></a>Étapes suivantes
Pour en savoir plus sur Azure Cosmos DB, consultez la page [Bienvenue dans Azure Cosmos DB](introduction.md).
| 102.27451 | 984 | 0.762998 | fra_Latn | 0.983297 |
4d401626f6baf194350913fb6a5cf31e0fc7e08b | 7,026 | md | Markdown | articles/devtest-labs/devtest-lab-vmcli.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/devtest-labs/devtest-lab-vmcli.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/devtest-labs/devtest-lab-vmcli.md | ebarbosahsi/azure-docs.es-es | b6dbec832e5dccd7118e05208730a561103b357e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Create and manage virtual machines in DevTest Labs with the Azure CLI
description: Learn how to use Azure DevTest Labs to create and manage virtual machines with the Azure CLI
ms.topic: article
ms.date: 06/26/2020
ms.openlocfilehash: 22ee6bf607fe1b66cece0e7ddb25a2da2830258b
ms.sourcegitcommit: dda0d51d3d0e34d07faf231033d744ca4f2bbf4a
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/05/2021
ms.locfileid: "102201471"
---
# <a name="create-and-manage-virtual-machines-with-devtest-labs-using-the-azure-cli"></a>Creación y administración de máquinas virtuales con DevTest Labs mediante la CLI de Azure
Este inicio rápido le ayudará a crear, iniciar, actualizar y limpiar una máquina de desarrollo en el laboratorio, así como conectarse a ella.
Antes de empezar:
* Si no se ha creado un laboratorio, encontrará instrucciones [aquí](devtest-lab-create-lab.md).
* [Instalación de la CLI de Azure](/cli/azure/install-azure-cli). Para empezar, ejecute az login para crear una conexión con Azure.
## <a name="create-and-verify-the-virtual-machine"></a>Creación y comprobación de la máquina virtual
Antes de ejecutar los comandos relacionados de DevTest Labs, establezca el contexto de Azure adecuado mediante el comando `az account set`:
```azurecli
az account set --subscription 11111111-1111-1111-1111-111111111111
```
The command to create a virtual machine is `az lab vm create`. Only the resource group of the lab, the lab name, and the virtual machine name are required. The rest of the arguments vary with the type of virtual machine.
The following command creates a Windows-based image from Azure Marketplace. The image name is the same one you would see when creating a virtual machine in the Azure portal.
```azurecli
az lab vm create --resource-group DtlResourceGroup --lab-name MyLab --name 'MyTestVm' --image "Visual Studio Community 2017 on Windows Server 2016 (x64)" --image-type gallery --size 'Standard_D2s_v3' --admin-username 'AdminUser' --admin-password 'Password1!'
```
The following command creates a virtual machine based on a custom image available in the lab:
```azurecli
az lab vm create --resource-group DtlResourceGroup --lab-name MyLab --name 'MyTestVm' --image "My Custom Image" --image-type custom --size 'Standard_D2s_v3' --admin-username 'AdminUser' --admin-password 'Password1!'
```
The **image-type** argument has changed from **gallery** to **custom**. The image name matches what you would see if you were creating the virtual machine in the Azure portal.
The following command creates a virtual machine from a Marketplace image using ssh authentication:
```azurecli
az lab vm create --lab-name sampleLabName --resource-group sampleLabResourceGroup --name sampleVMName --image "Ubuntu Server 16.04 LTS" --image-type gallery --size Standard_DS1_v2 --authentication-type ssh --generate-ssh-keys --ip-configuration public
```
You can also create virtual machines based on formulas by setting the **image-type** parameter to **formula**. If you need to choose a specific virtual network for the virtual machine, use the **vnet-name** and **subnet** parameters. For more information, see [az lab vm create](/cli/azure/lab/vm#az-lab-vm-create).
## <a name="verify-that-the-vm-is-available"></a>Verify that the VM is available
Use the `az lab vm show` command to verify that the VM is available before starting it and connecting to it.
```azurecli
az lab vm show --lab-name sampleLabName --name sampleVMName --resource-group sampleResourceGroup --expand 'properties($expand=ComputeVm,NetworkInterface)' --query '{status: computeVm.statuses[0].displayStatus, fqdn: fqdn, ipAddress: networkInterface.publicIpAddress}'
```
```json
{
"fqdn": "lisalabvm.southcentralus.cloudapp.azure.com",
"ipAddress": "13.85.228.112",
"status": "Provisioning succeeded"
}
```
## <a name="start-and-connect-to-the-virtual-machine"></a>Inicio y conexión a la máquina virtual
El siguiente comando de ejemplo inicia una máquina virtual:
```azurecli
az lab vm start --lab-name sampleLabName --name sampleVMName --resource-group sampleLabResourceGroup
```
Connect to a virtual machine via [SSH](../virtual-machines/linux/mac-create-ssh-keys.md) or [Remote Desktop](../virtual-machines/windows/connect-logon.md).
```bash
ssh userName@ipAddressOrfqdn
```
## <a name="update-the-virtual-machine"></a>Actualización de la máquina virtual
El siguiente comando de ejemplo aplica artefactos a una máquina virtual:
```azurecli
az lab vm apply-artifacts --lab-name sampleLabName --name sampleVMName --resource-group sampleResourceGroup --artifacts @/artifacts.json
```
```json
[
{
"artifactId": "/artifactSources/public repo/artifacts/linux-java",
"parameters": []
},
{
"artifactId": "/artifactSources/public repo/artifacts/linux-install-nodejs",
"parameters": []
},
{
"artifactId": "/artifactSources/public repo/artifacts/linux-apt-package",
"parameters": [
{
"name": "packages",
"value": "abcd"
},
{
"name": "update",
"value": "true"
},
{
"name": "options",
"value": ""
}
]
}
]
```
### <a name="list-artifacts-available-in-the-lab"></a>Enumeración de los artefactos disponibles en el laboratorio
Para enumerar los artefactos disponibles en una máquina virtual de un laboratorio, ejecute los siguientes comandos.
**Cloud Shell - PowerShell**: observe el uso del acento grave (\`) delante de $ en $expand (por ejemplo, `$expand):
```azurecli-interactive
az lab vm show --resource-group <resourcegroupname> --lab-name <labname> --name <vmname> --expand "properties(`$expand=artifacts)" --query "artifacts[].{artifactId: artifactId, status: status}"
```
**Cloud Shell - Bash**: note the use of the backslash (\\) before the $ in the command.
```azurecli-interactive
az lab vm show --resource-group <resourcegroupname> --lab-name <labname> --name <vmname> --expand "properties(\$expand=artifacts)" --query "artifacts[].{artifactId: artifactId, status: status}"
```
Sample output:
```json
[
{
"artifactId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.DevTestLab/labs/<lab name>/artifactSources/public repo/artifacts/windows-7zip",
"status": "Succeeded"
}
]
```
## <a name="stop-and-delete-the-virtual-machine"></a>Detención y eliminación de la máquina virtual
El siguiente comando de ejemplo detiene una máquina virtual.
```azurecli
az lab vm stop --lab-name sampleLabName --name sampleVMName --resource-group sampleResourceGroup
```
Delete a virtual machine.
```azurecli
az lab vm delete --lab-name sampleLabName --name sampleVMName --resource-group sampleResourceGroup
```
## <a name="next-steps"></a>Pasos siguientes
Consulte el siguiente contenido: [Documentación de la CLI de Azure para Azure DevTest Labs](/cli/azure/lab).
| 44.751592 | 333 | 0.74452 | spa_Latn | 0.80595 |
4d4037bba1304e32645be887a33261ba85e18774 | 11,096 | md | Markdown | README.md | tormozit/bsl_console | d073f64c725c09e76122d69bef64a033b79a2436 | [
"MIT"
] | 1 | 2021-11-27T08:51:22.000Z | 2021-11-27T08:51:22.000Z | README.md | tormozit/bsl_console | d073f64c725c09e76122d69bef64a033b79a2436 | [
"MIT"
] | null | null | null | README.md | tormozit/bsl_console | d073f64c725c09e76122d69bef64a033b79a2436 | [
"MIT"
] | null | null | null | # Code console for 1C 8.3 (managed and ordinary forms)
Running inside 1C requires platform version **8.3.14.1565** or later

## How does it work?
Based on the [Monaco editor](https://github.com/Microsoft/monaco-editor)
## Main features:
* Syntax highlighting for the 1C language
* Query language highlighting
* Autocomplete for global enumerations and functions
* Autocomplete for metadata (catalogs, documents, and so on)
* Autocomplete for metadata objects (CatalogRef, DocumentObject, and so on)
* Parameter hints for constructors and methods
* Type hints
* Insertion of ready-made code blocks (snippets)
* Launching the query builder and the format string wizard
* Loading user-defined functions and snippets
* Highlighting the line whose execution raised an error
* Folding of loops, conditions, and query texts
* Hover tooltips for global functions, enumerations, and classes
* Dot-completion for attributes of catalog/document reference types
* Dot-completion for objects such as ValueTable/Array/QueryResult/BinaryData and others, including objects obtained through methods of other objects.
* Suggestions for sources and fields in query mode
## How to run it?
1. To run it in a browser, just open **index.html** from the **src** folder, or use this [link](https://salexdv.github.io/bsl_console/src/index.html)
2. To run it in 1C, you can use the **console.epf** data processor published in the [releases](https://github.com/salexdv/bsl_console/releases), or build your own.
3. The editor is used on the [Paste1C](https://paste1c.ru/) site.
## Functions for interacting with 1C:Enterprise
### Working with text (code)
| Function | Description |
| ------------------------------ | --------------------------------------------------------------------------------------------- |
| `setText` | Inserts the given text at the current or a specified position |
| `updateText` | Replaces the entire text of the editor, ignoring *read-only* mode |
| `getText` | Returns all text from the editor window |
| `eraseText` | Deletes all text in the editor |
| `selectedText(text)` | Without a parameter, returns the selected text; with a parameter, sets it |
| `getSelection` | Returns a [selection](https://microsoft.github.io/monaco-editor/api/classes/monaco.selection.html), analogous to GetTextSelectionBounds|
| `setSelectionByLength` | Sets the selection, analogous to the first signature of SetTextSelectionBounds |
| `setSelection` | Sets the selection, analogous to the second signature of SetTextSelectionBounds |
| `getLineCount` | Returns the number of lines |
| `getLineContent` | Returns the content of a line by its number, analogous to GetLine |
| `setLineContent` | Sets the content of a line by its number, analogous to ReplaceLine |
| `getCurrentLineContent` | Returns the content of the current line |
| `getCurrentLine` | Returns the current line number |
| `getCurrentColumn` | Returns the current column number |
| `getQuery` | Detects the query text at the current position and returns it together with its text range |
| `getFormatString` | Detects the format string text at the current position |
| `findText` | Returns the number of the line that contains the given text |
| `addComment` | Comments out the current code block |
| `removeComment` | Uncomments the current block |
| `addWordWrap` | Adds a line break to the current block |
| `removeWordWrap` | Removes the line break from the current block |
### Controlling the operating mode / settings
| Function | Description |
| ------------------------------ | --------------------------------------------------------------------------------------------- |
| `init` | Initializes the editor, passing the platform version |
| `setTheme` | Sets the editor theme: `bsl-white`, `bsl-white-query`, `bsl-dark`, `bsl-dark-query` |
| `setReadOnly` | Enables/disables *read-only* mode |
| `switchLang` | Switches the suggestion language between English and Russian |
| `enableQuickSuggestions` | Enables/disables quick suggestions |
| `minimap` | Enables/disables the code minimap |
| [`enableModificationEvent`](docs/modification_event.md) | Enables/disables the event raised when the editor content changes|
| [`switchQueryMode`](docs/switch_query.md) | Switches between query mode and code editing mode |
| [`compare`](docs/compare.md) | Enables/disables text comparison mode |
| `nextDiff` | Jumps to the next change in comparison mode |
| `previousDiff` | Jumps to the previous change in comparison mode |
| `getVarsNames` | Returns the names of all variables declared in the code |
| `switchXMLMode` | Switches to the highlighted XML view mode and back |
| `disableContextMenu` | Disables the context menu |
| `hideLineNumbers` | Hides the line numbers in the editor |
| `hideScrollX` | Hides the standard horizontal scroll bar |
| `hideScrollY` | Hides the standard vertical scroll bar |
### Interaction
| Function | Description |
| ------------------------------ | --------------------------------------------------------------------------------------------- |
| `updateMetadata` | Updates the metadata structure (catalogs/documents/etc.) from JSON |
| `clearMetadata` | Clears the metadata structure |
| `updateSnippets` | Updates user snippets |
| `updateCustomFunctions` | Updates user functions |
| `setCustomHovers` | Updates user hover tooltips |
| [`addContextMenuItem`](docs/add_menu.md) | Registers a custom context menu item and the event associated with it |
| `markError` | Marks an error on the specified line |
| [`triggerSuggestions`](docs/trigger_suggestions.md) | Forces suggestions to be shown |
| [`showCustomSuggestions`](docs/custom_suggestions.md) | Shows custom suggestions |
## Events generated by the editor for 1C:Enterprise
| Event | Description |
| ------------------------------ | --------------------------------------------------------------------------------------------- |
| `EVENT_QUERY_CONSTRUCT` | Raised when the "Query builder" menu item is selected. Returns the query text and position |
| `EVENT_FORMAT_CONSTRUCT` | Raised when the "Format string wizard" menu item is selected. Returns the format string text and position |
| `EVENT_CONTENT_CHANGED` | Raised on any change to the editor content. Toggled via *enableModificationEvent* |
| `EVENT_GET_METADATA` | Raised when metadata is missing. The name of the requested metadata is passed in the parameters |
| `EVENT_XXX` | Raised when a custom menu item is selected. *addContextMenuItem('My item', 'EVENT_MY')* |
*Before working with the editor from 1C:Enterprise, it is advisable to call the initialization function and pass it the current platform version.*
Example:
```javascript
init('8.3.18.891');
```
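The exported functions above are plain globals on the editor page, so the 1C host (or a test page) can simply chain them once the page has loaded. A minimal sketch — the function names come from the tables above, but the call order and the sample code string are assumptions for illustration:

```javascript
// Sketch only: init, setTheme, updateText, enableModificationEvent and
// getText are globals exported by the editor page (src/index.html).
function setupEditor(platformVersion, sourceCode) {
  init(platformVersion);          // pass the 1C platform version first
  setTheme('bsl-white');          // one of the four documented themes
  updateText(sourceCode);         // replace the whole buffer, even in read-only mode
  enableModificationEvent(true);  // raise EVENT_CONTENT_CHANGED on every edit
  return getText();               // read the buffer back
}
```

In 1C this would typically run through the HTML document field after `src/index.html` has finished loading (an assumption about the host wiring, not part of the editor API).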
## Products using the console:
* [Infostart Toolkit](https://infostart.ru/journal/news/news/infostart-toolkit-1-3-teper-s-novym-redaktorom-koda-na-baze-monaco-editor_1303095/)
* [Data Conversion 3 extension](https://infostart.ru/public/1289837/)
* [Context suggestions in 1C CD3](https://github.com/GenVP/TipsInCD3)
## Tested platforms:
* 8.3.15.1830
* 8.3.16.1148
* 8.3.17.1386
* 8.3.18.891
## Known issues:
* On platforms before 8.3.16, hotkeys such as CTRL+C, CTRL+V, CTRL+Z, and so on may not work
* On platforms before 8.3.18, the copy/paste commands work only within the editor window
* In the web client, no interaction between the editor and 1C is available. You can only try typing code. Sometimes you first have to open this [link](https://salexdv.github.io/bsl_console/src/index.html) in the browser for that to work
* Running on Linux is not supported at the moment.
* Because of implementation details, dot-completion for attributes of reference types works only when the suggested attribute is chosen with Enter
## Acknowledgements
I thank the [1c-syntax](https://github.com/1c-syntax) team and their [VSCode project](https://github.com/1c-syntax/vsc-language-1c-bsl) for the detailed JSON description of the language's built-in constructs, and for the snippet collection. | 86.015504 | 238 | 0.556146 | rus_Cyrl | 0.93224 |
4d4114b0f710e4c22d5dbd412b50f5f6c852bc31 | 1,761 | md | Markdown | docs/extensibility/debugger/setnotificationforwaitcompletion-method.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/setnotificationforwaitcompletion-method.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/setnotificationforwaitcompletion-method.md | tommorris/visualstudio-docs.es-es | 651470ca234bb6db8391ae9f50ff23485896393c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: SetNotificationForWaitCompletion (Method) | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- SetNotificationForWaitCompletion method, Task class [.NET Framework debug engines]
ms.assetid: da149c9a-20f4-4543-a29e-429c8c1d2e19
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 42c5bca56bc46c0b8124fbfaf7ca046c2c1e59ec
ms.sourcegitcommit: 8d38d5d2f2b75fc1563952c0d6de0fe43af12766
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 07/26/2018
ms.locfileid: "39276343"
---
# <a name="setnotificationforwaitcompletion-method"></a>SetNotificationForWaitCompletion (Método)
Establece o borra el bit de estado TASK_STATE_WAIT_COMPLETION_NOTIFICATION.
**Namespace:** <xref:System.Threading.Tasks?displayProperty=fullName>
**Ensamblado:** mscorlib (en *mscorlib.dll*)
## <a name="syntax"></a>Sintaxis
```csharp
internal void SetNotificationForWaitCompletion(bool enabled)
```
### <a name="parameters"></a>Parámetros
`enabled`
`true` Para establecer el bit; `false` al anular el bit.
## <a name="exceptions"></a>Excepciones
## <a name="remarks"></a>Comentarios
El depurador establece este bit para ayudar a paso fuera de un cuerpo de método asincrónico. Si `enabled` es `true`, este método debe llamarse únicamente en una tarea que no se ha completado todavía. Cuando `enabled` es `false`, este método puede llamarse en las tareas completadas. En cualquier caso, solo debe usarse para tareas de la promesa de estilo.
## <a name="requirements"></a>Requisitos
## <a name="see-also"></a>Vea también
[Clase de tarea](../../extensibility/debugger/task-class-internal-members.md) | 35.938776 | 358 | 0.752413 | spa_Latn | 0.521078 |
4d4149a2857946a39be533e73f7521aeda74384b | 298 | md | Markdown | README.md | itisjoe/cmdEnter | 1b0195ea2084b78c03c9999064fc8a473cb59078 | [
"MIT"
] | 1 | 2017-03-01T02:29:49.000Z | 2017-03-01T02:29:49.000Z | README.md | itisjoe/cmdEnter | 1b0195ea2084b78c03c9999064fc8a473cb59078 | [
"MIT"
] | null | null | null | README.md | itisjoe/cmdEnter | 1b0195ea2084b78c03c9999064fc8a473cb59078 | [
"MIT"
] | null | null | null | # cmdEnter
A Firefox add-on that lets Mac users press [Cmd + Enter] to open a URL in a new tab.
## How to install
Go to the [Firefox add-on page](https://addons.mozilla.org/firefox/addon/cmdenterformac/) to install it officially.
or
Drag **cmdenter.xpi** to your Firefox. (Of course, on your Mac.)
| 22.923077 | 108 | 0.718121 | eng_Latn | 0.938127 |
4d417d63f7f6b17f8d17ed7566646c1b5325c143 | 1,084 | md | Markdown | README.md | bekk/bekk-puppet-mssql2014 | 2b8af7a872c8200a3e6533d6113ca90c42e4bd2e | [
"Apache-2.0"
] | null | null | null | README.md | bekk/bekk-puppet-mssql2014 | 2b8af7a872c8200a3e6533d6113ca90c42e4bd2e | [
"Apache-2.0"
] | null | null | null | README.md | bekk/bekk-puppet-mssql2014 | 2b8af7a872c8200a3e6533d6113ca90c42e4bd2e | [
"Apache-2.0"
] | null | null | null | # Microsoft SQL Server puppet module.
This module installs Microsoft SQL Server 2014 on Windows 2012R2. It is based on the [Puppetlabs MSSQL module](https://forge.puppetlabs.com/puppetlabs/mssql)
## Installation
This module depends on the DISM module to enable .NET 3.5 on Windows Server:
* [dism module](http://forge.puppetlabs.com/puppetlabs/dism)
## Usage
Example:
```puppet
class {'mssql2014':
media => "D:",
instanceid => 'MSSQLSERVER',
instancename => 'MSSQLSERVER',
features => 'SQL,Tools',
agtsvcaccount => 'SQLAGTSVC',
agtsvcpassword => 'sqlagtsvc2014demo',
sqlsvcaccount => 'SQLSVC',
sqlsvcpassword => 'sqlsvc2014demo',
instancedir => "C:\\Program Files\\Microsoft SQL Server",
ascollation => 'Latin1_General_CI_AS',
sqlcollation => 'SQL_Latin1_General_CP1_CI_AS',
securitymode => 'SQL',
admin => 'Administrator',
sapwd => 'sapwd!2014demo'
}
```
See http://msdn.microsoft.com/en-us/library/ms144259.aspx for more information about these options.
| 31.882353 | 157 | 0.660517 | kor_Hang | 0.388156 |
4d419d8934cecd97934564f831f091f0234451ea | 720 | md | Markdown | atom/nucleus/ruby/docs/Accounting.md | sumit4-ttn/SDK | b3ae385e5415e47ac70abd0b3fdeeaeee9aa7cff | [
"Apache-2.0"
] | null | null | null | atom/nucleus/ruby/docs/Accounting.md | sumit4-ttn/SDK | b3ae385e5415e47ac70abd0b3fdeeaeee9aa7cff | [
"Apache-2.0"
] | null | null | null | atom/nucleus/ruby/docs/Accounting.md | sumit4-ttn/SDK | b3ae385e5415e47ac70abd0b3fdeeaeee9aa7cff | [
"Apache-2.0"
] | null | null | null | # NucleusApi::Accounting
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**client_id** | **String** | clientId | [optional]
**create_date** | **DateTime** | | [optional]
**currency_code** | **String** | currencyCode |
**id** | **String** | | [optional]
**metadata** | **Hash<String, String>** | | [optional]
**period_type** | **String** | periodType | [optional]
**period_year** | **Integer** | periodYear | [optional]
**secondary_id** | **String** | | [optional]
**stat_date** | **DateTime** | statDate |
**stat_name** | **String** | statName |
**stat_value** | **Float** | statValue |
**update_date** | **DateTime** | | [optional]
| 36 | 62 | 0.543056 | yue_Hant | 0.355474 |
4d42ba12a36dd9e8c1eb412b776b41c9d386dce2 | 660 | md | Markdown | leet_code/merge_between_ll/README.md | daveeS987/data-structures-and-algorithms | e2fc060a558bdf8696331ddeaf317259a56e36a3 | [
"MIT"
] | null | null | null | leet_code/merge_between_ll/README.md | daveeS987/data-structures-and-algorithms | e2fc060a558bdf8696331ddeaf317259a56e36a3 | [
"MIT"
] | null | null | null | leet_code/merge_between_ll/README.md | daveeS987/data-structures-and-algorithms | e2fc060a558bdf8696331ddeaf317259a56e36a3 | [
"MIT"
] | 1 | 2021-04-06T02:03:31.000Z | 2021-04-06T02:03:31.000Z | # Merge in Between Linked List
Given two linked list, and an "a" and "b" integer value, remove the nodes in linked list 1 from "a" to "b" and insert linked list 2.
## White Board Process

## Approach and Efficiency
I utilized two pointers to keep a reference of the start and end slicing points in the first list. I traversed "a-1" amount of times to get the start, and "b" amount of times to get the end slice point. Then I traversed the second list to grab its tail. I pointed start to list2 head and list2 tail to point to end. Then I returned the first linked list
Big O:
- Time: O(n)
- Space: O(1)
| 38.823529 | 353 | 0.740909 | eng_Latn | 0.998918 |
4d432251cd2fb7f9f49f812b0a22d593f444435f | 1,285 | md | Markdown | README.md | gaohaofht2017/vue-backend | MIT | 909 stars

# Vue + ElementUI Admin System Framework
## **Live Preview**
[https://harsima.github.io/vue-backend](https://harsima.github.io/vue-backend)
## **Related Tutorials**
- [Vue + ElementUI: Hand-Rolling a Basic Admin Site Framework (0) Preface](http://blog.csdn.net/harsima/article/details/77949609)
- [Vue + ElementUI: Hand-Rolling a Basic Admin Site Framework (1) Creating the Project](http://blog.csdn.net/harsima/article/details/77949623)
- [Vue + ElementUI: Hand-Rolling a Basic Admin Site Framework (2) Permission Control](http://blog.csdn.net/harsima/article/details/77949448)
- [Vue + ElementUI: Hand-Rolling a Basic Admin Site Framework (3) Login and System Menu Loading](http://blog.csdn.net/harsima/article/details/77949465)
- [Vue + ElementUI: Hand-Rolling a Basic Admin Site Framework (4) Theme Switching](http://blog.csdn.net/harsima/article/details/78934405)
## **Feature List**
- Login/logout
- Asynchronous menu loading
- Fine-grained page permission control
- Multi-language support
- Layout switching
- AMap (Gaode Maps) integration
- ECharts integration
- Error pages
- Mock data
- Page-load progress bar
## **Using the Project**
``` bash
# Install project dependencies
npm install
# Start the local dev server (defaults to localhost:9000)
npm run dev
# Build the project for production
npm run build
# To see a detailed bundle report during the build, run:
npm run build --report
```
## Simple Nginx Deployment Configuration
Place the built files in the `html` folder of the Nginx installation directory, then apply a simple Nginx configuration.
```
...
# everything above can be left at its default configuration
server {
    listen       9090;
    server_name  localhost;
    # project file directory
    root   html/vue-backend;
    index  index.html index.htm;
    location / {
        # required configuration when vue-router uses history mode
        try_files $uri $uri/ /index.html;
        index  index.html;
    }
}
```
## Other
Feedback and discussion of any issues are welcome; please follow the issue guidelines.
QQ discussion group: 745454791
4d43f5afd4b8447873c58da7c5d975b95c72c2d2 | 524 | md | Markdown | pages/data/events/2019/20190329-599f1f43/event.cs.md | otahirs/zbm_web | MIT | 8 stars

---
taxonomy:
skupina:
- pulci2
- zaci1
- zaci2
type: M
start: '2019-03-29'
end: '2019-03-29'
title: 'Map training (pupils and younger)'
place: 'Královo Pole'
meetTime: '16:00'
meetPlace: 'in the saddle below Medlánecký kopec, above the VUT dormitory complex'
map: 'Kozí hora (1:10 000, contour interval 5 m)'
transport: 'bus 53 to the Kolejní terminus'
id: 20190329-599f1f43
template: trenink
date: '2019-07-16'
---
* **meeting point**: {{page.header.meetTime}} {{page.header.meetPlace}}. Transport: {{page.header.transport}}.
4d43fbd8da8d089ccad30cf6ce5bd05e37c8e677 | 428 | md | Markdown | packages/chemical-groups/README.md | cheminfo-js/mf-parser | MIT | 2 stars

## Chemical Groups
This package contains various groups used in organic chemistry like Ph, Tips, Ala.
The file `src/groups.js` can be edited directly on [this page](http://www.cheminfo.org/?viewURL=https%3A%2F%2Fcouch.cheminfo.org%2Fcheminfo-public%2F2b7d0688e43300da6a97de7cde0342b7%2Fview.json&loadversion=true&fillsearch=MF+groups+editor). After editing, copy the JSON and replace the entire `src/groups.js` file.
4d4442df06811a68ce4cfc619708afba63a8ad84 | 1,484 | md | Markdown | data/mdOutput/snocross-championship-racing.md | djleven/gamedb | MIT

---
view: game
layout: game
author: reicast
created_at: 2018-03-25 09:00
updated_at: 2019-05-02 09:00
id: snocross-championship-racing
title: "SnoCross - Championship Racing"
gamedb-issue: 0
releases:
- id: "F51D"
region: EU
version: "1.000"
discs: 1
medium: gdrom
test-videos:
- fingerprint: "F51D GD-ROM1/1 EU"
title: Intro auto run
hw: i7 2720qm, GeForce 540M
yt: 6xCtTN9_MZo
git: d59197f84353d7d2b746383e9277d9ed7c8c4053
platform: win86-release
gotIGDBGame: 1
idIGDB: 45035
cover:
- id: 28345
game: 45035
height: 980
image_id: "xibcyqllwwmrx9b31mxj"
url: "//images.igdb.com/igdb/image/upload/t_thumb/xibcyqllwwmrx9b31mxj.jpg"
width: 1000
first_release_date: 969494400
categories:
- "Racing"
name: "SnoCross Championship Racing"
popularity: 1.0
slug: "snocross-championship-racing"
summary: "Sno-Cross Championship Racing provides players with the opportunity to race 12 snowmobiles by Yamaha. In the championship mode snowmobiles can be upgraded with money won by winning races and performing tricks. There are 10 racing circuits set in such locations as Nagano, Aspen, and Munich. A track editor is included as well so that users can modify current tracks or create their own.
Strap on your goggles and helmet, choose your favorite Yamaha sled, and hit the courses. Gain experience day and night, sun rain or snow, racing on the icy flats of Vladivostok, the slopes of Aspen, and the tunnels of Nagano."
---
4d444b614ea78777063d8d77e4e3f9000cf28485 | 655 | md | Markdown | README.md | davenquinn/shell-config | MIT

# Daven's dotfiles
This was long ago forked from Mathias Bynens's version,
but it has been substantially changed to suit my whims,
and only the fundamental approach remains the same
(all of the mechanics are different internally).
## Installation
Run `git submodule update --init --recursive` to grab all the
software.
Install some Python modules: you will need `click`, and `pathlib` for Python < 3.0.
Run `./make-links.py` with an optional `-p` switch for profile.
This will install os-specific dotfiles as well as more general
configurations.
## Key commands
- `make clean`: clean up autogenerated files (mostly from
Vim's plugin infrastructure)
4d452d4a96a57976553a1e13a793fb18120dca45 | 1,916 | md | Markdown | _posts/11/2021-04-06-hidaka-marin.md | chito365/ukdat | MIT

---
id: 3667
title: Hidaka Marin
date: 2021-04-06T10:36:26+00:00
author: Laima
layout: post
guid: https://ukdataservers.com/hidaka-marin/
permalink: /04/06/hidaka-marin
tags:
- claims
- lawyer
- doctor
- house
- multi family
- online
- poll
- business
category: Guides
---
* some text
{: toc}
## Who is Hidaka Marin
Idol group singer and member of Sakura Gakuin. She is also part of the Sakura Gakuin sub-unit Minipati.
## Prior to Popularity
Like many of her fellow Sakura Gakuin members, she joined the idol group in mid 2015.
## Random data
Outside of her idol work, she is also a swimmer and enjoys tap dancing.
## Family & Everyday Life of Hidaka Marin
She was born and raised in Kanagawa, Japan.
## People Related With Hidaka Marin
She is joined in the Sakura Gakuin sub-unit Minipati by Yamaide Aiko and Okazaki Momoko.
4d45bda9bd60508e0e9a9aec4f9be1936ae0348b | 34,218 | md | Markdown | src/external/cloudi_x_exometer_core/README.md | khzmk/repo1 | MIT | 260 stars
# Exometer Core - Erlang instrumentation package, core services #
Copyright (c) 2014 Basho Technologies, Inc. All Rights Reserved.
__Version:__ Oct 30 2018 13:49:09
__Authors:__ Ulf Wiger ([`[email protected]`](mailto:[email protected])), Magnus Feuer ([`[email protected]`](mailto:[email protected])).
[![Travis][travis badge]][travis]
[![Hex.pm Version][hex version badge]][hex]
[![Hex.pm License][hex license badge]][hex]
[![Erlang Versions][erlang version badge]][travis]
[![Build Tool][build tool]][hex]
The Exometer Core package allows for easy and efficient instrumentation of
Erlang code, allowing crucial data on system performance to be
exported to a wide variety of monitoring systems.
Exometer Core comes with a set of pre-defined monitor components, and can
be expanded with custom components to handle new types of Metrics, as
well as integration with additional external systems such as
databases, load balancers, etc.
This document gives a high level overview of the Exometer system. For
details, please see the documentation for individual modules, starting
with `exometer`.
Note the section on [Dependency Management](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Dependency_Management) for how to deal with
optional packages, for both users and developers.
### <a name="Table_of_Content">Table of Content</a> ###
1. [Concept and definitions](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Concept_and_definitions)
1. [Metric](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Metric)
2. [Data Point](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Data_Point)
3. [Metric Type](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Metric_Type)
4. [Entry Callback](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Entry_Callback)
5. [Probe](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Probe)
6. [Caching](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Caching)
7. [Subscriptions and Reporters](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Subscriptions_and_Reporters)
2. [Built-in entries and probes](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Built-in_entries_and_probes)
1. [counter (exometer native)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#counter_(exometer_native))
2. [fast_counter (exometer native)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#fast_counter_(exometer_native))
3. [gauge (exometer native)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#gauge_(exometer_native))
4. [exometer_histogram (probe)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_histogram_(probe))
5. [exometer_uniform (probe)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_uniform_(probe))
6. [exometer_spiral (probe)](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_spiral_(probe))
7. [exometer_folsom [entry]](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_folsom_[entry])
8. [exometer_function [entry]](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_function_[entry])
3. [Built in Reporters](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Built_in_Reporters)
1. [exometer_report_tty](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#exometer_report_tty)
4. [Instrumenting Erlang code](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Instrumenting_Erlang_code)
1. [Exometer Core Start](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Exometer_Core_Start)
2. [Creating metrics](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Creating_metrics)
3. [Deleting metrics](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Deleting_metrics)
4. [Setting metric values](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Setting_metric_values)
5. [Retrieving metric values](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Retrieving_metric_values)
6. [Setting up subscriptions](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Setting_up_subscriptions)
7. [Set metric options](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Set_metric_options)
5. [Configuring Exometer Core](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_Exometer_Core)
1. [Configuring type - entry maps](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_type_-_entry_maps)
2. [Configuring statically defined entries](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_statically_defined_entries)
3. [Configuring static subscriptions](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_static_subscriptions)
4. [Configuring reporter plugins](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_reporter_plugins)
6. [Creating custom exometer entries](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Creating_custom_exometer_entries)
7. [Creating custom probes](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Creating_custom_probes)
8. [Creating custom reporter plugins](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Creating_custom_reporter_plugins)
9. [Dependency management](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Dependency_management)
### <a name="Concepts_and_Definitions">Concepts and Definitions</a> ###
Exometer Core introduces a number of concepts and definitions used
throughout the documentation and the code.

#### <a name="Metric">Metric</a> ####
A metric is a specific measurement sampled inside an Erlang system and
then reported to the Exometer Core system. An example metric would be
"transactions_per_second", or "memory_usage".
Metrics are identified by a list of terms, such as given below:
`[ xml_front_end, parser, file_size ]`
A metric is created through a call by the code to be instrumented to
`exometer:new()`. Once created, the metric can be updated through
`exometer:update()`, or on its own initiative through the
`exometer_probe:sample` behavior implementation.
#### <a name="Data_Point">Data Point</a> ####
Each metric can consist of multiple data points, where each point has
a specific value.
A typical example of data points would be a
`transactions_per_second` (tps) metric, usually stored as a
histogram covering the last couple of minutes of tps samples. Such a
histogram would host multiple values, such as `min`, `max`,
`median`, `mean`, `50_percentile`, `75_percentile`,
etc.
It is up to the type of the metric, and the data probe backing that
type (see below), to specify which data points are available under the
given metric.
#### <a name="Metric_Type">Metric Type</a> ####
The type of a metric, specified when the metric is created through
`exometer:new()`, determines which `exometer_entry`
callback to use.
The link between the type and the entry to use is configured
through the `exometer_admin` module, and its associated exometer
defaults configuration data.
The metric type, in other words, is only used to map a metric to a
configurable `exometer_entry` callback.
#### <a name="Entry_Callback">Entry Callback</a> ####
An exometer entry callback will receive values reported to a metric through the
`exometer:update()` call and compile it into one or more data points.
The entry callback can either be a counter (implemented natively
in `exometer`), or a more complex statistical analysis such
as a uniform distribution or a regular histogram.
The various outputs from these entries are reported as data points
under the given metric.
An entry can also interface external analytics packages.
`exometer_folsom`, for example, integrates with the
`folsom_metrics` package found at [`https://github.com/boundary/folsom`](https://github.com/boundary/folsom).
#### <a name="Probe">Probe</a> ####
Probes are a further specialization of exometer entries that run in
their own Erlang processes and have their own state (like a
gen_server). A probe is implemented through the `exometer_probe`
behavior.
A probe can be used if independent monitoring is needed of,
for example, `/proc` trees, network interfaces, and other subsystems
that need periodic sampling. In these cases, the
`exometer_probe:probe_sample()` call is invoked regularly by exometer,
in the probe's own process, in order to extract data from
the given subsystem and add it to the metric's data points.
#### <a name="Caching">Caching</a> ####
Metric and data point values are read with the `exometer:get_value()`
function. In the case of counters, this operation is very fast. With probes,
the call results in a synchronous dialog with the probe process, and the
cost of serving the request depends on the probe implementation and the
nature of the metric being served.
If the cost of reading the value is so high that calling the function often
would result in prohibitive load, it is possible to cache the value. This is
done either explicitly from the probe itself (by calling
`exometer_cache:write()`), or by specifying the option `{cache, Lifetime}`
for the entry. If an entry has a non-zero cache lifetime specified, the
`get_value()` call will try fetching the cached value before calling the
actual entry and automatically caching the result.
Note that if `{cache, Lifetime}` is not specified, `exometer:get_value()`
will neither read nor write to the cache. It is possible for the probe
to periodically cache a value regardless of how the cache lifetime is set,
and the probe may also explicitly read from the cache if it isn't done
automatically.
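As a minimal sketch (the metric name is illustrative), a probe-backed entry can be created with a cache lifetime so that frequent reads are served from the cache:

```erlang
%% Cache computed data points for 5 seconds.
ok = exometer:new([db, query, latency], histogram, [{cache, 5000}]),

%% The first read in each 5 s window consults the probe; subsequent
%% reads within the window return the cached result.
{ok, _Stats} = exometer:get_value([db, query, latency], [mean, max]).
```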
#### <a name="Subscriptions_and_Reporters">Subscriptions and Reporters</a> ####
The subscription concept, managed by `exometer_report` allows metrics
and their data points to be sampled at given intervals and delivered
to one or more recipients, which can be either an arbitrary process
or a Reporter plugin.
Each subscription ties a specific metric-datapoint pair to a reporter
and an interval (given in milliseconds). The reporter system will, at
the given interval, send the current value of the data point to the
subscribing reporter. The subscription, with all its parameters,
is setup through a call to `exometer_report:subscribe()`.
In the case of processes, subscribed-to values will be delivered as a
message. Modules which implement the `exometer_report` callback
behavior will receive the values as callbacks within the
`exometer_report` process.
Subscriptions can either be setup at runtime, through
`exometer_report:subscribe()` calls, or statically through the
`exometer_report` configuration data.
### <a name="Built-in_entries_and_probes">Built-in entries and probes</a> ###
There are a number of built-in entries and probes shipped
with the Exometer Core package, as described below:
#### <a name="counter_(exometer_native)">counter (exometer native)</a> ####
The counter is implemented directly in `exometer` to provide simple
counters. A call to `exometer:update()` will add the provided value
to the counter.
The counter can be reset to zero through `exometer:reset()`.
The available data points under a metric using the counter entry
are `value` and `ms_since_reset`.
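A minimal counter sketch (the metric name is illustrative):

```erlang
ok = exometer:new([app, requests], counter),
ok = exometer:update([app, requests], 1),
ok = exometer:update([app, requests], 3),
%% `value` is the sum of all updates since the last reset:
{ok, [{value, 4}]} = exometer:get_value([app, requests], [value]),
ok = exometer:reset([app, requests]).
```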
#### <a name="fast_counter_(exometer_native)">fast_counter (exometer native)</a> ####
A fast counter implements the counter functionality, through the
`trace_info` system, yielding a speed increase of about 3.5 in
comparison to the regular counter.
The tradeoff is that running tracing and/or debugging may interfere
with the counter functionality.
A call to `exometer:update()` will add the provided value to the
counter.
The counter can be reset to zero through `exometer:reset()`.
The available data points under a metric using the fast_counter
entry are `value` and `ms_since_reset`.
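A sketch of creating a fast counter. The entry is tied to a function whose calls are counted through the trace system; `my_mod:my_count/0` is an assumed (typically empty) function, and the `{function, {Mod, Fun}}` option shape is this author's reading of the fast_counter API, so verify it against the `exometer` module documentation:

```erlang
ok = exometer:new([app, hits], fast_counter,
                  [{function, {my_mod, my_count}}]),
ok = exometer:update([app, hits], 1),
{ok, [{value, _N}]} = exometer:get_value([app, hits], [value]).
```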
#### <a name="gauge_(exometer_native)">gauge (exometer native)</a> ####
The gauge is implemented directly in `exometer` to provide simple
gauges. A call to `exometer:update()` will set the gauge's value
to the provided value. That is, the value of the gauge entry is
always the most recently provided value.
The gauge can be reset to zero through `exometer:reset()`.
The available data points under a metric using the gauge entry
are `value` and `ms_since_reset`.
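A gauge sketch, contrasting it with the counter (illustrative name):

```erlang
ok = exometer:new([app, queue_depth], gauge),
ok = exometer:update([app, queue_depth], 17),
ok = exometer:update([app, queue_depth], 5),
%% unlike a counter, the gauge keeps only the most recent value:
{ok, [{value, 5}]} = exometer:get_value([app, queue_depth], [value]).
```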
#### <a name="exometer_histogram_(probe)">exometer_histogram (probe)</a> ####
The histogram probe stores a given number of updates, provided through
`exometer:update()`, in a histogram. The histogram maintains a log
derived from all values received during a configurable time span and
provides min, max, median, mean, and percentile analysis data points
for the stored data.
In order to save memory, the histogram is divided into equal-sized
time slots, where each slot spans a settable interval. All values
received during a time slot will be averaged into a single value to be
stored in the histogram once the time slot expires. The averaging
function (which can be replaced by the caller), allows for
high-frequency update metrics to have their resolution traded against
resource consumption.
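A histogram sketch (the metric name is illustrative; `95` denotes the 95th-percentile data point):

```erlang
ok = exometer:new([web, request_time], histogram),
[ok = exometer:update([web, request_time], T) || T <- [95, 102, 87, 130]],
%% read a selection of the statistical data points:
{ok, Stats} = exometer:get_value([web, request_time], [mean, max, 95]).
```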
#### <a name="exometer_uniform_(probe)">exometer_uniform (probe)</a> ####
The uniform probe provides a uniform sample over a pool of values
provided through `exometer:update()`. When the pool reaches its configurable
max size, existing values will be replaced at random to make space for
new values. Much like `exometer_histogram`, the uniform probe
provides min, max, median, mean, and percentile analysis data points
for the stored data.
#### <a name="exometer_spiral_(probe)">exometer_spiral (probe)</a> ####
The spiral probe maintains the total sum of all values stored in its
histogram. The histogram has a configurable time span, all values
provided to the probe, through `exometer:update()`, within that time
span will be summed up and reported. If, for example, the histogram
covers 60 seconds, the spiral probe will report the sum of all
values reported during the last minute.
The grand total of all values received during the lifetime of the
probe is also available.
#### <a name="exometer_folsom_[entry]">exometer_folsom [entry]</a> ####
The folsom entry integrates with the folsom metrics package provided
by the boundary repo at github. Updated values sent to the folsom entry
can be forwarded to folsom's counter, histogram, duration, meter,
and spiral.
Folsom integration is provided as a backup. New code using Exometer Core
should use the native probes that duplicate folsom.
#### <a name="exometer_function_[entry]">exometer_function [entry]</a> ####
The function entry allows for a simple caller-supplied function to be
invoked in order to retrieve non-exometer data. The
`exometer_function:get_value()` function will invoke a
`Module:Function(DataPoints)` call, where `Module` and
`Function` are provided by the caller.
The function entry provides an easy way of integrating an external
system without having to write a complete entry.
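As a sketch, `erlang:memory/1` already has the `Module:Function(DataPoints)` shape described above, so it can back a function entry directly. This assumes the `{function, Mod, Fun}` type form is mapped to `exometer_function`, as in the default configuration shown later in this document:

```erlang
ok = exometer:new([vm, memory], {function, erlang, memory}, []),
%% a read calls erlang:memory([total, processes]), which returns
%% a list of {DataPoint, Value} pairs:
{ok, Mem} = exometer:get_value([vm, memory], [total, processes]).
```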
### <a name="Built_in_Reporters">Built in Reporters</a> ###
Exometer Core ships with some built-in reporters which can be used to forward
updated metrics and their data points to external systems. They can also
serve as templates for custom-developed reporters.
#### <a name="exometer_report_tty">exometer_report_tty</a> ####
The `exometer_report_tty` reporter is mainly intended for experimentation.
It outputs reports directly to the tty.
### <a name="Instrumenting_Erlang_code">Instrumenting Erlang code</a> ###
The code using Exometer Core needs to be instrumented in order to setup and
use metrics reporting.
#### <a name="Exometer_Core_Start">Exometer Core Start</a> ####
The system using Exometer Core must start the `exometer` application
prior to using it:
```erlang
application:start(lager),
application:start(exometer_core).
```
Note that dependent applications need to be started first. On newer OTP versions
(R61B or later), you can use `application:ensure_all_started(exometer)`.
For testing, you can also use [`exometer:start/0`](https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer.md#start-0).
If you make use of e.g. folsom metrics, you also need to start `folsom`.
Exometer Core will not do that automatically, nor does it contain an
application dependency for it.
See [Configuring Exometer Core](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_Exometer_Core) for details on configuration data
format.
#### <a name="Creating_metrics">Creating metrics</a> ####
A metric can be created through a call to
```erlang
exometer:new(Name, Type)
```
`Name` is a list of atoms, uniquely identifying the metric created.
The type of the metric, specified by `Type`, will be mapped
to an exometer entry through the table maintained by
`exometer_admin`. Please see the [Configuring type - entry
maps](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_type_-_entry_maps) for details.
The resolved entry to use will determine the data points available
under the given metric.
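Options can also be supplied at creation time through `exometer:new/3`; a sketch with illustrative names:

```erlang
%% Create a histogram whose reads are cached for 5 seconds and which
%% starts out disabled (it can be enabled later via exometer:setopts/2):
ok = exometer:new([web, request_time], histogram,
                  [{status, disabled}, {cache, 5000}]).
```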
#### <a name="Deleting_metrics">Deleting metrics</a> ####
A metric previously created with `exometer:new()` can be deleted by
`exometer:delete()`.
All subscriptions to the deleted metrics will be cancelled.
#### <a name="Setting_metric_values">Setting metric values</a> ####
A created metric can have its value updated through the
`exometer:update()` function:
```erlang
exometer:update(Name, Value)
```
The `Name` parameter is the same atom list provided to a previous
`exometer:new()` call. The `Value` is an arbitrary element that is
forwarded to the `exometer:update()` function of the entry/probe that the
metric is mapped to.
The receiving entry/probe will process the provided value and modify
its data points accordingly.
#### <a name="Retrieving_metric_values">Retrieving metric values</a> ####
Exometer-using code can at any time retrieve the data point values
associated with a previously created metric. In order to find out which
data points are available for a metric, the following call can be used:
```erlang
exometer:info(Name, datapoints)
```
The `Name` parameter is the same atom list provided to a previous
`exometer:new()` call. The call will return a list of data point
atoms that can then be provided to `exometer:get_value()` to
retrieve their actual value:
```erlang
exometer:get_value(Name, DataPoint)
```
The `Name` parameter identifies the metric, and `DataPoints`
identifies the data points (returned from the previous `info()` call)
to retrieve the value for.
If no DataPoints are provided, the values of a default list of data points,
determined by the backing entry / probe, will be returned.
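Putting the two calls together (the metric name is illustrative; the exact data point list depends on the backing entry):

```erlang
Name = [web, request_time],
DataPoints = exometer:info(Name, datapoints),
{ok, Values} = exometer:get_value(Name, DataPoints),
%% or accept the entry's default data point selection:
{ok, Defaults} = exometer:get_value(Name).
```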
#### <a name="Setting_up_subscriptions">Setting up subscriptions</a> ####
A subscription can either be statically configured, or dynamically
setup from within the code using Exometer Core. For details on statically
configured subscriptions, please see [Configuring static subscriptions](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_static_subscriptions).
A dynamic subscription can be setup with the following call:
```erlang
exometer_report:subscribe(Recipient, Metric, DataPoint, Interval)
```
`Recipient` is the name of a reporter.
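A sketch using the bundled tty reporter; the reporter must be added before it can receive subscriptions, and the metric name is illustrative:

```erlang
ok = exometer_report:add_reporter(exometer_report_tty, []),
ok = exometer_report:subscribe(exometer_report_tty,
                               [db, cache, hits], mean, 10000),
%% ...and later, to stop reporting:
exometer_report:unsubscribe(exometer_report_tty, [db, cache, hits], mean).
```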
#### <a name="Set_metric_options">Set metric options</a> ####
Each created metric can have options setup for it through the following call:
```erlang
exometer:setopts(Name, Options)
```
The `Name` parameter identifies the metric to set the options for, and
Options is a proplist (`[{ Key, Value },...]`) with the options to be
set.
Exometer Core looks up the backing entry that hosts the metric with
the given Name, and will invoke the entry's `setopts/4` function to set
the actual options. Please see the `setopts/4` function for the various
entries for details.
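A common use is toggling a metric's operational status (the name is illustrative; the `status` option is described in the defaults section below):

```erlang
ok = exometer:setopts([db, cache, hits], [{status, disabled}]),
%% ...later:
ok = exometer:setopts([db, cache, hits], [{status, enabled}]).
```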
### <a name="Configuring_Exometer_Core">Configuring Exometer Core</a> ###
Exometer Core defaults can be changed either through OTP application environment
variables or through the use of Basho's `cuttlefish`
([`https://github.com/basho/cuttlefish`](https://github.com/basho/cuttlefish)).
__Note:__ Exometer Core will check both the `exometer` and the `exometer_core`
application environments. The `exometer` environment overrides the
`exometer_core` environment. However, if only `exometer_core` is used, any
`exometer` environment will simply be ignored. This is because of the
application controller: environment data is not loaded until the application
in question is loaded.
#### <a name="Configuring_type_-_entry_maps">Configuring type - entry maps</a> ####
The dynamic method of configuring defaults for `exometer` entries is:
```erlang
exometer_admin:set_default(NamePattern, Type, Default)
```
Where `NamePattern` is a list of terms describing what is essentially
a name prefix with optional wildcards (`'_'`). A pattern that
matches any legal name is `['_']`.
`Type` is an atom defining a type of metric. The types already known to
`exometer`, `counter`, `fast_counter`, `ticker`, `uniform`, `histogram`,
`spiral`, `netlink`, and `probe` may be redefined, but other types can be
described as well.
`Default` is either an `#exometer_entry{}` record (unlikely), or a list of
`{Key, Value}` options, where the keys correspond to `#exometer_entry` record
attribute names. The following attributes make sense to preset:
```erlang
{module, atom()} % the callback module
{status, enabled | disabled} % operational status of the entry
{cache, non_neg_integer()} % cache lifetime (ms)
{options, [{atom(), any()}]} % entry-specific options
```
Below is an example, from `exometer_core/priv/app.config`:
```erlang
{exometer, [
{defaults, [
{['_'], function , [{module, exometer_function}]},
{['_'], counter , [{module, exometer}]},
{['_'], histogram, [{module, exometer_histogram}]},
{['_'], spiral , [{module, exometer_spiral}]},
{['_'], duration , [{module, exometer_folsom}]},
{['_'], meter , [{module, exometer_folsom}]},
{['_'], gauge , [{module, exometer_folsom}]}
]}
]}
```
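The same kind of mapping can also be installed at runtime with `exometer_admin:set_default/3`; a sketch that introduces a custom type name (`latency` is illustrative, not a built-in type):

```erlang
%% Map the custom type 'latency' to the histogram entry for all
%% metric names, with a 5-second read cache:
exometer_admin:set_default(['_'], latency,
                           [{module, exometer_histogram},
                            {cache, 5000}]),
%% metrics created with this type now resolve to exometer_histogram:
ok = exometer:new([web, request_time], latency).
```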
In systems that use CuttleFish, the file `exometer/priv/exometer.schema`
contains a schema for default settings. The setup corresponding to the above
defaults would be as follows:
```ini
exometer.template.function.module = exometer_function
exometer.template.counter.module = exometer
exometer.template.histogram.module = exometer_histogram
exometer.template.spiral.module = exometer_spiral
exometer.template.duration.module = exometer_folsom
exometer.template.meter.module = exometer_folsom
exometer.template.gauge.module = exometer_folsom
```
#### <a name="Configuring_statically_defined_entries">Configuring statically defined entries</a> ####
Using the `exometer` environment variable `predefined`, entries can be added
at application startup. The variable should have one of the following values:
* `{script, File}` - `File` will be processed using `file:script/2`. The return
value (the result of the last expression in the script) should be a list of `{Name, Type, Options}` tuples.
* `{apply, M, F, A}` - The result of `apply(M, F, A)` should be `{ok, L}` where `L` is a list of `{Name, Type, Options}` tuples.
* `L`, where L is a list of `{Name, Type, Options}` tuples or extended
instructions (see below).
The list of instructions may include:
* `{delete, Name}` - deletes `Name` from the exometer registry.
* `{select_delete, Pattern}` - applies a select pattern and
deletes all matching entries.
* `{re_register, {Name, Type, Options}}` - redefines an entry if present,
otherwise creates it.
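As an illustration, a `predefined` list in `sys.config` can mix plain definitions with the extended instructions above (the metric names below are hypothetical):

```erlang
{exometer, [
    {predefined,
     [
      {[db, cache, hits], counter, []},              %% plain {Name, Type, Options}
      {delete, [db, cache, legacy]},                 %% drop a stale entry
      {re_register, {[db, requests], histogram, []}} %% redefine if present, else create
     ]}
]}
```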
Exometer Core will also scan all loaded applications for the environment
variables `exometer_defaults` and `exometer_predefined`, and process them
as above. If an application is loaded and started after exometer has started,
it may call the function `exometer:register_application()` or
`exometer:register_application(App)`. This function will do nothing if
exometer isn't already running, and otherwise process the `exometer_defaults`
and `exometer_predefined` variables as above. The function can also be
called during upgrade, as it will re-apply the settings each time.
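As a sketch (the `my_app` names are hypothetical), an application that starts after exometer can trigger this processing from its own application callback:

```erlang
-module(my_app).
-behaviour(application).
-export([start/2, stop/1]).

start(_Type, _Args) ->
    %% No-op if exometer isn't running; otherwise processes my_app's
    %% exometer_defaults and exometer_predefined environment variables.
    exometer:register_application(my_app),
    my_app_sup:start_link().

stop(_State) ->
    ok.
```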
#### <a name="Configuring_static_subscriptions">Configuring static subscriptions</a> ####
Static subscriptions, which are automatically setup at exometer
startup without having to invoke `exometer_report:subscribe()`, are
configured through the report sub section under exometer.
Below is an example, from `exometer/priv/app.config`:
```erlang
{exometer, [
{report, [
{subscribers, [
{exometer_report_collectd, [db, cache, hits], mean, 2000, true},
{exometer_report_collectd, [db, cache, hits], max, 5000, false}
]}
]}
]}
```
The `report` section configures static subscriptions and reporter
plugins. See [Configuring reporter plugins](https://github.com/Feuerlabs/exometer_core/blob/master/doc/README.md#Configuring_reporter_plugins) for details on
how to configure individual plugins.
The `subscribers` sub-section contains all static subscriptions to be
set up at exometer application start. Each tuple in the prop list
should be of one of the following formats:
* `{Reporter, Metric, DataPoint, Interval}`
* `{Reporter, Metric, DataPoint, Interval, RetryFailedMetrics}`
* `{Reporter, Metric, DataPoint, Interval, RetryFailedMetrics, Extra}`
* `{apply, {M, F, A}}`
* `{select, {MatchPattern, DataPoint, Interval [, Retry [, Extra] ]}}`
In the case of `{apply, {M, F, A}}`, the result of `apply(M, F, A)` must
be a list of `subscribers` tuples.
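For example (module name hypothetical), the subscription list can be generated at startup:

```erlang
%% In sys.config: {subscribers, [{apply, {my_subs, list, []}}]}
-module(my_subs).
-export([list/0]).

list() ->
    [{exometer_report_tty, [db, cache, hits], mean, 2000, true}].
```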
In the case of `{select, Expr}`, a list of metrics is fetched using
`exometer:select(MatchPattern)`, where the result must be of the form
`{Key, Type, Status}` (i.e. what corresponds to `'$_'`).
The rest of the items will be applied to each of the matching entries.
The meaning of the above tuple elements is:
+ `Reporter :: module()`<br />Specifies the reporter plugin module, such as `exometer_report_collectd`, that is to receive the updated metric's data
points.
+ `Metric :: [atom()]`<br />Specifies the path to a metric previously created with an `exometer:new()` call.
+ `DataPoint :: atom() | [atom()]`<br />Specifies the data point within the given metric to send to the
receiver. The data point must match one of the data points returned by `exometer:info(Name, datapoints)` for the given metric name.
+ `Interval :: integer()` (milliseconds)<br />Specifies the interval, in milliseconds, between each update of the
given metric's data point. At the given interval, the data point will
be sampled, and the result will be sent to the receiver.
+ `RetryFailedMetrics :: boolean()`<br />Specifies whether the metric should continue to be reported
even if it is not found during a reporting cycle. This would be
the case if a metric is not created by the time it is reported for
the first time. If the metric will be created at a later time,
this value should be set to true. Set this value to false if all
attempts to report the metric should stop when it is not found.
The default value is `true`.
+ `Extra :: any()`<br />Provides a means to pass along extra information for a given
subscription. An example is the `syntax` option for the SNMP reporter,
in which case `Extra` needs to be a property list.
Example configuration in sys.config, using the `{select, Expr}` pattern:
```erlang
[
{exometer, [
{predefined,
[{[a,1], counter, []},
{[a,2], counter, []},
{[b,1], counter, []},
{[c,1], counter, []}]},
{report,
[
{reporters,
[{exometer_report_tty, []}]},
{subscribers,
[{select, {[{ {[a,'_'],'_','_'}, [], ['$_']}],
exometer_report_tty, value, 1000}}]}
]}
]}
].
```
This will activate a subscription on `[a,1]` and `[a,2]` in the
`exometer_report_tty` reporter, firing once per second.
#### <a name="Configuring_reporter_plugins">Configuring reporter plugins</a> ####
The various reporter plugins to be loaded by exometer are configured
in the `report` section under `reporters`.
Each reporter has an entry named after its module, and the content of
that entry is dependent on the reporter itself. The following chapters
specify the configuration parameters for the reporters shipped with
exometer.
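A minimal sketch of such a `reporters` entry, using the bundled TTY reporter with an empty option list (each reporter defines its own options):

```erlang
{exometer, [
    {report, [
        {reporters, [
            {exometer_report_tty, []}
        ]}
    ]}
]}
```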
### <a name="Creating_custom_exometer_entries">Creating custom exometer entries</a> ###
Please see the `exometer_entry` documentation for details.
### <a name="Creating_custom_probes">Creating custom probes</a> ###
Please see the `exometer_probe` documentation for details.
### <a name="Creating_custom_reporter_plugins">Creating custom reporter plugins</a> ###
Please see the `exometer_report` documentation for details.
#### <a name="Customizing_rebar.config">Customizing rebar.config</a> ####
The OS environment variables `EXOMETER_CORE_CONFIG_PREPROCESS` and
`EXOMETER_CORE_CONFIG_POSTPROCESS` can be used to insert a script, similar to
`rebar.config.script` in the processing flow of the exometer build.
As the names imply, the script given by `EXOMETER_CORE_CONFIG_PREPROCESS`
(if any) will be run before exometer does any processing of its own, and the
`EXOMETER_CORE_CONFIG_POSTPROCESS` script (if any) will be run after all other
processing is complete.
[travis]: https://travis-ci.org/Feuerlabs/exometer_core
[travis badge]: https://img.shields.io/travis/Feuerlabs/exometer_core/master.svg?style=flat-square
[hex]: https://hex.pm/packages/exometer_core
[hex version badge]: https://img.shields.io/hexpm/v/exometer_core.svg?style=flat-square
[hex license badge]: https://img.shields.io/hexpm/l/exometer_core.svg?style=flat-square
[erlang version badge]: https://img.shields.io/badge/erlang-18--21-blue.svg?style=flat-square
[build tool]: https://img.shields.io/badge/build%20tool-rebar3-orange.svg?style=flat-square
## Modules ##
<table width="100%" border="0" summary="list of modules">
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exo_montest.md" class="module">exo_montest</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer.md" class="module">exometer</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_admin.md" class="module">exometer_admin</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_alias.md" class="module">exometer_alias</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_cache.md" class="module">exometer_cache</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_cpu.md" class="module">exometer_cpu</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_duration.md" class="module">exometer_duration</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_entry.md" class="module">exometer_entry</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_folsom.md" class="module">exometer_folsom</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_folsom_monitor.md" class="module">exometer_folsom_monitor</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_function.md" class="module">exometer_function</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_histogram.md" class="module">exometer_histogram</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_igor.md" class="module">exometer_igor</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_info.md" class="module">exometer_info</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_probe.md" class="module">exometer_probe</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_proc.md" class="module">exometer_proc</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_report.md" class="module">exometer_report</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_report_logger.md" class="module">exometer_report_logger</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_report_tty.md" class="module">exometer_report_tty</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_shallowtree.md" class="module">exometer_shallowtree</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_slide.md" class="module">exometer_slide</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_slot_slide.md" class="module">exometer_slot_slide</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_spiral.md" class="module">exometer_spiral</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_uniform.md" class="module">exometer_uniform</a></td></tr>
<tr><td><a href="https://github.com/Feuerlabs/exometer_core/blob/master/doc/exometer_util.md" class="module">exometer_util</a></td></tr></table>
# colorGen
# HTML, CSS, JavaScript color generator
User presses button to generate a new color.
## To Be Completed
1. Add mix color
2. Add sounds when buttons are pressed
---
title: "How to: Create a List of Items"
ms.date: 07/20/2015
helpviewer_keywords:
- "list [LINQ in Visual Basic]"
- "objects [Visual Basic], list of items"
ms.assetid: fe941aba-6340-455c-8b1f-ffd9c3eb1ac5
---
# How to: Create a List of Items
The code in this topic defines a `Student` class and creates a list of instances of the class. The list is designed to support the topic [Walkthrough: Writing Queries in Visual Basic](../../../../visual-basic/programming-guide/concepts/linq/walkthrough-writing-queries.md). It also can be used for any application that requires a list of objects. The code defines the items in the list of students by using object initializers.
## Example
If you are working on the walkthrough, you can use this code for the Module1.vb file of the project that is created there. Just replace the lines marked with **** in the `Main` method with the queries and query executions that are provided in the walkthrough.
[!code-vb[VbLINQHowToCreateList#1](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbLINQHowToCreateList/VB/Class1.vb#1)]
## See also
- [Walkthrough: Writing Queries in Visual Basic](../../../../visual-basic/programming-guide/concepts/linq/walkthrough-writing-queries.md)
- [Getting Started with LINQ in Visual Basic](../../../../visual-basic/programming-guide/concepts/linq/getting-started-with-linq.md)
- [Object Initializers: Named and Anonymous Types](../../../../visual-basic/programming-guide/language-features/objects-and-classes/object-initializers-named-and-anonymous-types.md)
- [Introduction to LINQ in Visual Basic](../../../../visual-basic/programming-guide/language-features/linq/introduction-to-linq.md)
- [LINQ](../../../../visual-basic/programming-guide/language-features/linq/index.md)
- [Queries](../../../../visual-basic/language-reference/queries/index.md)
# 🎭 Updating and switching models
[handsfree.update(config, callback)](/ref/method/update/) can be used to update Handsfree in real time, even as it's actively running. The passed [config](/ref/prop/config/) will override the existing one, and the `callback` will get called after all new models are loaded (or immediately if all models are already loaded).
In addition to reconfiguring models you can also [enable/disable them](/ref/prop/model/#toggling-models-on-off/), as well as reconfigure plugins. Below is an example of switching off the [holistic model](/ref/model/holistic/) for the [weboji model](/ref/model/weboji/) and configuring the the [facePointer plugin](/ref/plugin/facePointer/):
```js
// Start the holistic model with "browser" plugins
const handsfree = new Handsfree({holistic: true})
handsfree.enablePlugins('browser')
handsfree.start()
// Switch to the weboji model and update the facePointer
handsfree.update({
weboji: true,
holistic: false,
plugin: {
// Make the face pointer move slower
facePointer: {
speed: {
x: .5,
y: .5
}
}
}
})
// Toggle a specific model on, loading missing dependencies
handsfree.model.handpose.enable(function () {
handsfree.weboji.disable()
})
```
## See also
- [handsfree.model](/ref/prop/model/)
---
layout: post
title: "Cleaning is an Act of Destruction"
date: 2018-09-10 07:51:57 -0700
categories:
---
There's a fine line between scouring your grater and grating your sponge.
---
title: "Properties (WPF) | Microsoft Docs"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-wpf"
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords:
- "AutoGeneratedOrientationPage"
helpviewer_keywords:
- "WPF, properties"
- "properties [WPF], about properties"
- "Windows Presentation Foundation, properties"
- "properties [WPF]"
ms.assetid: d6e0197f-f2c4-48ed-b45b-b9cdb64aab1c
caps.latest.revision: 72
author: dotnet-bot
ms.author: dotnetcontent
manager: "wpickett"
---
# Properties (WPF)
[!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] provides a set of services that can be used to extend the functionality of a [!INCLUDE[TLA#tla_clr](../../../../includes/tlasharptla-clr-md.md)] property. Collectively, these services are typically referred to as the WPF property system. A property that is backed by the [!INCLUDE[TLA#tla_titlewinclient](../../../../includes/tlasharptla-titlewinclient-md.md)] property system is known as a dependency property.
## In This Section
[Dependency Properties Overview](../../../../docs/framework/wpf/advanced/dependency-properties-overview.md)
[Attached Properties Overview](../../../../docs/framework/wpf/advanced/attached-properties-overview.md)
[Dependency Property Callbacks and Validation](../../../../docs/framework/wpf/advanced/dependency-property-callbacks-and-validation.md)
[Custom Dependency Properties](../../../../docs/framework/wpf/advanced/custom-dependency-properties.md)
[Dependency Property Metadata](../../../../docs/framework/wpf/advanced/dependency-property-metadata.md)
[Framework Property Metadata](../../../../docs/framework/wpf/advanced/framework-property-metadata.md)
[Dependency Property Value Precedence](../../../../docs/framework/wpf/advanced/dependency-property-value-precedence.md)
[Read-Only Dependency Properties](../../../../docs/framework/wpf/advanced/read-only-dependency-properties.md)
[Property Value Inheritance](../../../../docs/framework/wpf/advanced/property-value-inheritance.md)
[Dependency Property Security](../../../../docs/framework/wpf/advanced/dependency-property-security.md)
[Safe Constructor Patterns for DependencyObjects](../../../../docs/framework/wpf/advanced/safe-constructor-patterns-for-dependencyobjects.md)
[Collection-Type Dependency Properties](../../../../docs/framework/wpf/advanced/collection-type-dependency-properties.md)
[XAML Loading and Dependency Properties](../../../../docs/framework/wpf/advanced/xaml-loading-and-dependency-properties.md)
[How-to Topics](../../../../docs/framework/wpf/advanced/properties-how-to-topics.md)
## Reference
<xref:System.Windows.DependencyProperty>
<xref:System.Windows.PropertyMetadata>
<xref:System.Windows.FrameworkPropertyMetadata>
<xref:System.Windows.DependencyObject>
## Related Sections
[WPF Architecture](../../../../docs/framework/wpf/advanced/wpf-architecture.md)
[XAML in WPF](../../../../docs/framework/wpf/advanced/xaml-in-wpf.md)
[Base Elements](../../../../docs/framework/wpf/advanced/base-elements.md)
[Element Tree and Serialization](../../../../docs/framework/wpf/advanced/element-tree-and-serialization.md)
[Events](../../../../docs/framework/wpf/advanced/events-wpf.md)
[Input](../../../../docs/framework/wpf/advanced/input-wpf.md)
[Resources](../../../../docs/framework/wpf/advanced/resources-wpf.md)
[WPF Content Model](../../../../docs/framework/wpf/controls/wpf-content-model.md)
[Threading Model](../../../../docs/framework/wpf/advanced/threading-model.md)
<!-- This file was automatically generated by the `geine`. Make all changes to `README.yaml` and run `make readme` to rebuild this file. -->
<p align="center"> <img src="https://user-images.githubusercontent.com/50652676/62451340-ba925480-b78b-11e9-99f0-13a8a9cc0afa.png" width="100" height="100"></p>
<h1 align="center">
Ansible Role PHP
</h1>
<p align="center" style="font-size: 1.2rem;">
This ansible role is used to install PHP server on Debian.
</p>
<p align="center">
<a href="https://www.ansible.com">
<img src="https://img.shields.io/badge/Ansible-2.8-green?style=flat&logo=ansible" alt="Ansible">
</a>
<a href="LICENSE.md">
<img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="Licence">
</a>
<a href="https://ubuntu.com/">
<img src="https://img.shields.io/badge/ubuntu-16.x-orange?style=flat&logo=ubuntu" alt="Distribution">
</a>
<a href="https://ubuntu.com/">
<img src="https://img.shields.io/badge/ubuntu-18.x-orange?style=flat&logo=ubuntu" alt="Distribution">
</a>
</p>
<p align="center">
<a href='https://facebook.com/sharer/sharer.php?u=https://github.com/clouddrove/ansible-role-php'>
<img title="Share on Facebook" src="https://user-images.githubusercontent.com/50652676/62817743-4f64cb80-bb59-11e9-90c7-b057252ded50.png" />
</a>
<a href='https://www.linkedin.com/shareArticle?mini=true&title=Ansible+Role+PHP&url=https://github.com/clouddrove/ansible-role-php'>
<img title="Share on LinkedIn" src="https://user-images.githubusercontent.com/50652676/62817742-4e339e80-bb59-11e9-87b9-a1f68cae1049.png" />
</a>
<a href='https://twitter.com/intent/tweet/?text=Ansible+Role+PHP&url=https://github.com/clouddrove/ansible-role-php'>
<img title="Share on Twitter" src="https://user-images.githubusercontent.com/50652676/62817740-4c69db00-bb59-11e9-8a79-3580fbbf6d5c.png" />
</a>
</p>
<hr>
We eat, drink, sleep and most importantly love **DevOps**. DevOps always promotes automation and standardisation. While setting up various environments like local, dev, testing, production, etc. it is critical to maintain the same environment across. This can easily be achieved using automating the environment setup & installation with the help of ansible-playbooks.
Smaller roles are created for each environment element; these also include tasks & tests. These roles can then be grouped together in [ansible-playbook](https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html) to achieve the desired yet consistent results.
## Prerequisites
This module has a few dependencies:
- [Ansible2.8](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)
- [Python](https://www.python.org/downloads)
## What's Included
The following things are included in this role:
- Php-7.3
- Php-fpm
- Pecl
- Composer
## Example Playbook
**IMPORTANT:** Since the `master` branch used in `source` varies based on new modifications, we suggest that you use the release versions [here](https://github.com/clouddrove/ansible-role-php/releases).
```yaml
- hosts: localhost
remote_user: ubuntu
become: true
roles:
- clouddrove.ansible_role_php
```
## Variables
```yaml
php_version: 7.3
php_dir: "/etc/php/{{ php_version }}"
php_fpm_dir: "/etc/php/{{ php_version }}/fpm"
log_path: /var/log/php
state: present
is_web_server_is_apache: true
```
## Installation
```console
$ ansible-galaxy install clouddrove.ansible_role_php
```
## Feedback
If you come across a bug or have any feedback, please log it in our [issue tracker](https://github.com/clouddrove/ansible-role-php/issues), or feel free to drop us an email at [[email protected]](mailto:[email protected]).
If you have found it worth your time, go ahead and give us a ★ on [our GitHub](https://github.com/clouddrove/ansible-role-php)!
## About us
At [CloudDrove][website], we offer expert guidance, implementation support and services to help organisations accelerate their journey to the cloud. Our services include docker and container orchestration, cloud migration and adoption, infrastructure automation, application modernisation and remediation, and performance engineering.
<p align="center">We are <b> The Cloud Experts!</b></p>
<hr />
<p align="center">We ❤️ <a href="https://github.com/clouddrove">Open Source</a> and you can check out <a href="https://github.com/clouddrove">our other modules</a> to get help with your new Cloud ideas.</p>
[website]: https://clouddrove.com
[github]: https://github.com/clouddrove
[linkedin]: https://cpco.io/linkedin
[twitter]: https://twitter.com/clouddrove/
[email]: https://clouddrove.com/contact-us.html
[terraform_modules]: https://github.com/clouddrove?utf8=%E2%9C%93&q=terraform-&type=&language=
# Extreme Learning Machines
- **Extreme Learning Machine**, [[slides]](/Documents/ELM/Extreme%20Learning%20Machine.pdf), [[homepage]](http://www.ntu.edu.sg/home/egbhuang/).
- [2004 IEEE-IJCNN] **Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks**, [[paper]](https://pdfs.semanticscholar.org/2b9c/0e4d1d473aadbe1c2a76f75bc02bfa6416b0.pdf), sources: [[tobifinn/pyextremelm]](https://github.com/tobifinn/pyextremelm).
- [2006 Neurocomputing] **Extreme learning machine: Theory and applications**, [[paper]](http://axon.cs.byu.edu/~martinez/classes/678/Presentations/Yao.pdf), sources: [[rtaormina/ELM_MatlabClass]](https://github.com/rtaormina/ELM_MatlabClass).
- [2006 IEEE-TNN] **A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks**, [[paper]](/Documents/ELM/A%20Fast%20and%20Accurate%20Online%20Sequential%20Learning%20Algorithm%20for%20Feedforward%20Networks.pdf).
- [2006 IEEE-TNN] **Universal Approximation Using Incremental Constructive Feedforward Networks With Random Hidden Nodes**, [[paper]](http://www.ntu.edu.sg/home/EGBHuang/pdf/I-ELM.pdf).
- [2011 IJMLC] **Extreme learning machines: a survey**, [[paper]](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.298.522&rep=rep1&type=pdf).
- [2012 IEEE-TSMCB] **Extreme Learning Machine for Regression and Multiclass Classification**, [[paper]](/Documents/ELM/Extreme%20Learning%20Machine%20for%20Regression%20and%20Multiclass%20Classification.pdf), sources: [[rtaormina/ELM_MatlabClass]](https://github.com/rtaormina/ELM_MatlabClass), [[ExtremeLearningMachines/ELM-C]](https://github.com/ExtremeLearningMachines/ELM-C), [[ExtremeLearningMachines/ELM-JAVA]](https://github.com/ExtremeLearningMachines/ELM-JAVA).
---
path: "/blog/1"
date: "2018-04-23 11:20"
title: "1111"
---
### Code review goals
Lv1. Verify that the code is a correct and effective solution to the current requirement
Lv2. Ensure your code is maintainable
Lv3. Increase shared knowledge of the codebase
Lv4. Improve the team's skills through regular feedback
Lv5. It should not become a heavy overhead on development
### Code review principles
##### Before publishing a PR, be clear about
1. Invite the right reviewers
2. Have one primary reviewer who makes the final decision
3. Make each reviewer's review scope and main responsibilities explicit
##### Things to note when opening a PR
1. State what problem this PR solves, how it solves it, and why that approach was chosen. The richer the context, the faster others can review your code
2. The smaller the PR, the better: small PRs allow more precise descriptions, less unnecessary communication, and faster review and iteration. Keep logic changes separate from style changes; ideally submit them on two branches
3. Improve code readability
4. Before clicking the publish button, read the diff carefully and confirm what this PR changes. Try to examine your own diff through a third party's eyes. If it is too messy to read, first fix the code style, abstractions, and structure, and only then consider adding some comments and context
##### During review
1. Avoid major changes. If major changes are needed, alert the reviewing colleagues as early as possible.
2. Respond to every comment. Give clear feedback on whether you made the correction, and communicate promptly
3. Every code review is a discussion, not a demand. Differing opinions on review comments are welcome, but explain clearly why
# m-aws-kubernetes-service
Epiphany Module: AWS Kubernetes Service
## Prepare AWS access key
Have a look [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
## Build image
In main directory run:
```bash
make build
```
## Run module
```bash
cd examples/basic_flow
AWS_ACCESS_KEY="access key id" AWS_SECRET_KEY="access key secret" make all
```
Or use config file with credentials:
```bash
cd examples/basic_flow
cat >awsks.mk <<'EOF'
AWS_ACCESS_KEY ?= "access key id"
AWS_SECRET_KEY ?= "access key secret"
EOF
make all
```
## Destroy EKS cluster
```
cd examples/basic_flow
make -k destroy
```
## Release module
```bash
make release
```
or, if you want to set a different version number:
```bash
make release VERSION=number_of_your_choice
```
## Notes
- The cluster autoscaler major and minor versions must match your cluster.
For example if you are running a 1.16 EKS cluster set version to v1.16.5.
For more details check [documentation](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/autoscaling.md#notes)
## Module dependencies
| Component | Version | Repo/Website | License |
| ----------------------------- | ------- | ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| Terraform | 0.13.2 | https://www.terraform.io/ | [Mozilla Public License 2.0](https://github.com/hashicorp/terraform/blob/master/LICENSE) |
| Terraform AWS provider | 3.7.0 | https://github.com/terraform-providers/terraform-provider-aws | [Mozilla Public License 2.0](https://github.com/terraform-providers/terraform-provider-aws/blob/master/LICENSE) |
| Terraform Kubernetes provider | 1.13.2 | https://github.com/hashicorp/terraform-provider-kubernetes | [Mozilla Public License 2.0](https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/LICENSE) |
| Terraform Helm Provider | 1.3.1 | https://github.com/hashicorp/terraform-provider-helm | [Mozilla Public License 2.0](https://github.com/hashicorp/terraform-provider-helm/blob/master/LICENSE) |
| Terraform AWS EKS module | 12.2.0 | https://github.com/terraform-aws-modules/terraform-aws-eks | [Apache License 2.0](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/LICENSE) |
| Terraform AWS IAM module | 2.21.0 | https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-assumable-role-with-oidc | [Apache License 2.0](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/LICENSE) |
| Make | 4.3 | https://www.gnu.org/software/make/ | [ GNU General Public License](https://www.gnu.org/licenses/gpl-3.0.html) |
| yq | 3.3.4 | https://github.com/mikefarah/yq/ | [ MIT License](https://github.com/mikefarah/yq/blob/master/LICENSE) |
4d49a08ac51fb3009ea5ed6b91fd9f8b5b1f9271 | 667 | md | Markdown | _posts/OS/P/2015-01-20-OsPDR20.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 4 | 2017-08-09T02:48:10.000Z | 2020-11-11T01:54:08.000Z | _posts/OS/P/2015-01-20-OsPDR20.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 1 | 2020-05-31T13:03:01.000Z | 2020-06-01T01:47:14.000Z | _posts/OS/P/2015-01-20-OsPDR20.md | funRiceGenes/funRiceGenes.github.io | e27a14e23dd8c3d6127e38ee62240dc9e01008be | [
"MIT"
] | 6 | 2018-10-03T20:47:32.000Z | 2021-07-19T01:58:31.000Z | ---
layout: post
title: "OsPDR20"
description: ""
category: genes
tags:
---
* **Information**
+ Symbol: OsPDR20
+ MSU: [LOC_Os09g16330](http://rice.uga.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os09g16330)
+ RAPdb: [Os09g0332700](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os09g0332700)
* **Publication**
+ [Plant ABC proteins--a unified nomenclature and updated inventory](http://www.ncbi.nlm.nih.gov/pubmed?term=Plant ABC proteins--a unified nomenclature and updated inventory%5BTitle%5D), 2008, Trends Plant Sci.
* **Genbank accession number**
* **Key message**
* **Connection**
[//]: # * **Key figures**
| 25.653846 | 214 | 0.68066 | yue_Hant | 0.393618 |
4d4a56fa8c4f4e6a625733f0323b1471bd971b06 | 324 | md | Markdown | sdk/js/swagger/docs/SwaggerJwkCreateSet.md | renesugar/hydra | 43f0a993a09e821fa5de7b8299aa15a429553500 | [
"Apache-2.0"
] | 1 | 2022-03-14T16:47:47.000Z | 2022-03-14T16:47:47.000Z | sdk/js/swagger/docs/SwaggerJwkCreateSet.md | renesugar/hydra | 43f0a993a09e821fa5de7b8299aa15a429553500 | [
"Apache-2.0"
] | null | null | null | sdk/js/swagger/docs/SwaggerJwkCreateSet.md | renesugar/hydra | 43f0a993a09e821fa5de7b8299aa15a429553500 | [
"Apache-2.0"
] | null | null | null | # OryHydraCloudNativeOAuth20AndOpenIdConnectServer.SwaggerJwkCreateSet
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**body** | [**JsonWebKeySetGeneratorRequest**](JsonWebKeySetGeneratorRequest.md) | | [optional]
**set** | **String** | The set in: path |
| 32.4 | 97 | 0.598765 | yue_Hant | 0.257601 |
4d4af03e7852b811ddb1b164aa7d14e35737609b | 2,135 | md | Markdown | CE/developer/org-service/create-custom-activity-entity.md | MicrosoftDocs/D365-CE-iCMSTest.vi-VN | fdc5fd28015eed10eb38f38922ac67a9a3f7817b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T14:20:05.000Z | 2021-04-20T21:12:56.000Z | CE/developer/org-service/create-custom-activity-entity.md | MicrosoftDocs/D365-CE-iCMSTest.vi-VN | fdc5fd28015eed10eb38f38922ac67a9a3f7817b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | CE/developer/org-service/create-custom-activity-entity.md | MicrosoftDocs/D365-CE-iCMSTest.vi-VN | fdc5fd28015eed10eb38f38922ac67a9a3f7817b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-14T19:41:38.000Z | 2021-11-18T12:49:32.000Z | ---
title: Create a custom activity entity (Developer Guide for Dynamics 365 for Customer Engagement) | MicrosoftDocs
description: The sample demonstrates how to create a custom activity entity.
ms.custom: ''
ms.date: 10/31/2017
ms.reviewer: ''
ms.service: crm-online
ms.suite: ''
ms.tgt_pltfrm: ''
ms.topic: article
applies_to:
- Dynamics 365 for Customer Engagement (online)
- Dynamics 365 for Customer Engagement (on-premises)
- Dynamics CRM 2016
- Dynamics CRM Online
ms.assetid: 56e37fe0-182e-4021-a0b1-b32cba93d49e
caps.latest.revision: 13
author: JimDaly
ms.author: jdaly
manager: amyla
search.audienceType:
- developer
search.app:
- D365CE
ms.openlocfilehash: 17c7fc4c09ce0e6dad745ffab246df563a8367e0
ms.sourcegitcommit: 9f0efd59de16a6d9902fa372cb25fc0baf1c2838
ms.translationtype: HT
ms.contentlocale: vi-VN
ms.lasthandoff: 01/08/2019
ms.locfileid: "386442"
---
# <a name="create-a-custom-activity-entity"></a>Create a custom activity entity
[!INCLUDE[](../../includes/cc_applies_to_update_9_0_0.md)]
This topic contains a sample that shows how to create a custom activity entity.
The following sample creates a custom entity and sets the <xref:Microsoft.Xrm.Sdk.Metadata.EntityMetadata.IsActivity> property to `true`. All activities must have a <xref:Microsoft.Xrm.Sdk.Messages.CreateEntityRequest.PrimaryAttribute><xref:Microsoft.Xrm.Sdk.Metadata.AttributeMetadata.SchemaName> set to `Subject` so that it corresponds to the common `ActivityPointer`.`Subject` attribute used by all activities.
[!code-csharp[Entities#CreateCustomActivityEntity1](../../snippets/csharp/CRMV8/entities/cs/createcustomactivityentity1.cs#createcustomactivityentity1)]
### <a name="see-also"></a>Xem thêm
<xref:Microsoft.Xrm.Sdk.Messages.CreateEntityRequest>
[Use the sample and helper code](use-sample-helper-code.md)
[Custom activities](../custom-activities.md)
[Customize entity metadata](../customize-entity-metadata.md)
[Modify Entity Icons](../modify-icons-entity.md)
[Modify Entity Messages](../modify-messages-entity.md)
[Sample: Create a custom activity](../sample-create-custom-activity.md)
| 41.862745 | 416 | 0.778454 | yue_Hant | 0.443452 |
4d4b62c668c1873dd8d842074bf887dbd244a71f | 17,006 | md | Markdown | articles/supply-chain/warehousing/auto-release-shipment-for-cross-docking.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-18T17:14:19.000Z | 2021-04-20T21:13:45.000Z | articles/supply-chain/warehousing/auto-release-shipment-for-cross-docking.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 10 | 2017-12-12T12:01:52.000Z | 2019-04-30T11:46:17.000Z | articles/supply-chain/warehousing/auto-release-shipment-for-cross-docking.md | MicrosoftDocs/Dynamics-365-Operations.it- | 10c91d0b02b9925d81227106bc04e18f538a6e25 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2018-07-20T06:42:28.000Z | 2019-10-12T18:16:59.000Z | ---
title: Rilascio automatico della spedizione della versione per cross-docking
description: In questo argomento viene descritta una strategia di cross-docking che consente di rilasciare automaticamente un ordine di domanda nel magazzino quando l'ordine di produzione che fornisce la quantità della domanda viene dichiarato finito, in modo che la quantità viene spostata direttamente dall'ubicazione di uscita della produzione nell'ubicazione in uscita.
author: Mirzaab
ms.date: 10/15/2019
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: WHSCrossDockingTemplate
audience: Application User
ms.reviewer: kamaybac
ms.search.region: Global
ms.author: mirzaab
ms.search.validFrom: 2019-10-1
ms.dyn365.ops.version: 10.0.6
ms.openlocfilehash: 1315bda1fd284eb326d4f08bf36bfea59074fde3
ms.sourcegitcommit: 3b87f042a7e97f72b5aa73bef186c5426b937fec
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 09/29/2021
ms.locfileid: "7577938"
---
# <a name="auto-release-shipment-for-cross-docking"></a>Auto-release shipment for cross-docking
[!include [banner](../includes/banner.md)]
This topic describes a cross-docking strategy that lets you automatically release a demand order to the warehouse when the production order that supplies the demand quantity is reported as finished. In this way, the quantity that is required to fulfill the demand order is moved directly from the production output location to the outbound location.
Cross-docking is a warehouse flow where the quantity that is required to fulfill an outbound order is moved to the outbound dock or staging area directly from the location where the inbound order was received. The inbound order can be a purchase order, a transfer order, or a production order. Whereas the advanced cross-docking functionality supports all supply and demand orders, and requires that the outbound demand be released before the cross-docking opportunity is identified, the auto-release shipment functionality has the following characteristics:
- It supports only production orders as supply, and only sales orders and transfer orders as demand.
- The cross-docking operation can be started even if the demand order wasn't released to the warehouse before the supply receipt was registered (that is, before the production order is reported as finished).
This cross-docking functionality has two benefits:
- Warehouse operations can skip the step of putting finished goods away into regular storage when those quantities will be picked again anyway to fulfill the outbound order. In this case, the quantities can be moved once, from the output location to a shipping/packing location. In this way, the functionality reduces the number of times inventory is handled and, ultimately, maximizes savings of time and space on the warehouse shop floor.
- Warehouse operations can postpone the release of sales orders and transfer orders to the warehouse until the finished-goods output of the associated production order is reported as finished. This benefit can be especially important in make-to-order production environments, where production lead times tend to be longer than the lead times in make-to-stock environments.
## <a name="prerequisites"></a>Prerequisites
| Prerequisite | Description |
|---|---|
| Item | Items must be enabled for warehouse management processes.<p>**Note:** Catch-weight items can't be included in cross-docking processes.</p> |
| Warehouse | The warehouse must be enabled for warehouse management processes. |
| Cross-docking templates | At least one cross-docking template that uses the **At supply receipt** demand release policy must be set up for a given warehouse. |
| Work class | A cross-docking work class ID must be created for the **Cross docking** work order type. |
| Work templates | Work templates of the **Cross docking** work order type are required to create cross-docking pick and put work. |
| Location directives | Location directives of the **Cross docking** work order type are required to guide the put work to the locations where the sales order quantities are packed and shipped. |
| Marking between a demand order and a production order | The warehouse management system can trigger automatic release to shipment of the outbound order, and create cross-docking work from the output location at the report-as-finished action, only if sales orders and transfer orders are reserved and marked against a production order. |
## <a name="example-cross-docking-flow"></a>Example cross-docking flow
A typical cross-docking flow consists of the following main operations.
1. A sales order clerk creates a sales order for a make-to-order type of product. Typically, the required quantity isn't on hand and must first be produced.
2. The sales order clerk creates a production order directly from the sales order line. In this way, the sales order clerk reserves and marks the sales order quantity against the production order quantity.
    Alternatively, a production planner directs that the marking be updated when planned orders are firmed.
3. The production order goes through the following operations:
    1. A production planner estimates and releases the production order. The estimation includes the reservation of raw materials, and the release includes the release to a warehouse.
    2. A warehouse worker starts and completes the picking of raw materials from the storage location to the production input location, based on production pick work.
    3. A shop floor operator starts the production order.
    4. In the last operation, an operator uses the mobile device to report the production order as finished.
4. The system uses the setup to identify the cross-docking event for the two linked orders, and then completes the following tasks:
    1. Automatically release the linked sales order to a warehouse to create a shipment.
    2. Automatically create cross-docking work that has instructions for picking the finished goods from the output location and moving them to the outbound location that the cross-docking location directives assigned to the sales order.
    3. Prompt an operator to complete the cross-docking work right after the production order is reported as finished.
5. After the cross-docking work is completed and the quantities are loaded onto the vehicle, a warehouse planner confirms the sales shipment.
## <a name="example-scenario"></a>Example scenario
For this scenario, demo data must be installed, and you must use the **USMF** demo data company.
### <a name="set-up-cross-docking-that-uses-the-auto-release-shipment-feature"></a>Set up cross-docking that uses the auto-release shipment feature
#### <a name="cross-docking-template"></a>Cross-docking template
1. Go to **Warehouse management** \> **Setup** \> **Work** \> **Cross docking templates**.
2. Select **New**.
3. In the **Sequence number** field, enter **1**.
4. In the **Cross docking template ID** field, enter a name, such as **XDock\_RAF**.
5. In the **Demand release policy** field, select **At supply receipt**.
6. In the **Warehouse** field, enter the number of the warehouse where you want to set up the cross-docking process. For this example, select **51**.
> [!NOTE]
> As soon as you select **At supply receipt** as the demand release policy for the template, all the other fields on the page become unavailable. Additionally, you can't define supply sources. This behavior occurs because cross-docking that uses the auto-release shipment functionality supports only production orders as supply sources, and it requires that marking exist between sales orders and production orders. If you select **Before supply receipt** as the demand release policy, the fields on the **Planning** and **Supply sources** tabs are available and can be edited.
#### <a name="work-classes"></a>Work classes
1. Go to **Warehouse management** \> **Setup** \> **Work** \> **Work classes**.
2. Select **New**.
3. In the **Work class ID** field, enter a name, such as **CrossDock**.
4. In the **Work order type** field, select **Cross docking**.
To restrict the location types for the put-away of cross-docked finished goods, you can specify one or more valid location types.
#### <a name="work-templates"></a>Work templates
1. Go to **Warehouse management** \> **Setup** \> **Work** \> **Work templates**.
2. In the **Work order type** field, select **Cross docking**.
3. Select **New**.
4. In the **Sequence number** field, enter **1**.
5. In the **Work template** field, enter a name, such as **CrossDock\_51**.
6. Select **Save**.
7. In the **Work template details** section, select **New**.
8. For the new line, in the **Work type** field, select **Pick**. In the **Work class ID** field, select **CrossDock**.
9. Select **New**.
10. For the new line, in the **Work type** field, select **Put**. In the **Work class ID** field, select **CrossDock**.
#### <a name="location-directives"></a>Location directives
A standard put-away process for finished goods requires a **Put** location directive to guide the movement of picked production quantities to regular storage. Additionally, you must set up a cross-docking **Put** location directive to direct the work to put the finished quantity in a designated outbound location that serves the shipment of the linked sales order.
For cross-docking, just as for regular put-away of finished goods, you don't have to create a location directive for the pick action, because the output location is given. Additionally, this output location is expected to be set up either as the default output location for one of the resource-related records (that is, the resource, the resource group relation, or the resource group) or as the default production finished-goods location for a warehouse.
1. Go to **Warehouse management** \> **Setup** \> **Location directives**.
2. In the **Work order type** field, select **Cross docking**.
3. Select **New**.
4. In the **Sequence number** field, enter **1**.
5. In the **Name** field, enter a name, such as **Baydoor**.
6. In the **Work type** field, select **Put**.
7. In the **Site** field, select **5**.
8. In the **Warehouse** field, select **51**.
9. On the **Lines** FastTab, select **New**.
10. In the **To quantity** field, enter the maximum quantity for the range, such as **1000000**.
11. Select **Save**.
12. On the **Location directive actions** FastTab, select **New**.
13. In the **Name** field, enter a name, such as **Baydoor**.
14. Select **Save**.
15. You can use the standard query capability to restrict the put locations to one or more specific locations. Select **Edit query**, and select **51** as the criterion for the **Warehouse** field in the **Locations** table.
### <a name="cross-dock-finished-goods-to-the-outbound-location"></a>Cross-dock finished goods to the outbound location
To cross-dock the finished-goods quantity to the outbound location of the linked sales order, follow these steps.
1. Go to **Sales and marketing** \> **Sales orders** \> **All sales orders**.
2. Select **New**.
3. On the sales order header, select account **US-001** and a warehouse that is set up for cross-docking that uses the auto-release shipment functionality.
4. Add a line for a finished product, and enter **10** as the quantity.
5. In the **Sales order lines** section, select **Product and supply** \> **Production order**.
6. In the **Create production order** dialog box, review the default values, and then select **Create**. A new production order is created and linked to the sales order (that is, it's reserved and marked).
7. Optional: Change the value of the **Quantity** field so that it's more than the value that is required to fulfill the sales order. When the production quantity is reported as finished, the system creates cross-docking operations for the marked quantity, and put-away work for the remaining quantity, based on the regular procedure for handling the put-away of finished goods.
As was mentioned earlier, marking must exist between the sales order and the production order. Otherwise, cross-docking won't occur. Marking can be created in several ways:
- The system automatically creates the marking in the following situations:
    - The production order is manually created directly from the sales order line (see step 6).
    - Planned production orders are firmed, and the **Update marking** field is set to **Standard**.
- The user can manually create the marking by opening the **Marking** page from the demand order line.
> [!NOTE]
> Marking can't be manually created for items that are set up to use standard cost and weighted average as inventory models.
8. On the **Production order** page, on the Action Pane, on the **Production order** tab, in the **Process** group, select **Estimate**, and then select **OK**. The order is estimated, and the raw material quantity is reserved for the production.
9. On the Action Pane, on the **Production order** tab, in the **Process** group, select **Release**, and then select **OK**. Warehouse pick work is created for the raw materials.
10. Open and review the work. On the Action Pane, on the **Warehouse** tab, in the **General** group, select **Work details**. Make a note of the work ID.
11. Sign in to the Warehouse Management mobile app to run the work in warehouse 51.
12. Go to **Production** \> **Production pick**.
13. Enter the work ID to start and complete the picking of the raw materials.
After the work is completed, the raw material quantity is available at the production input location (**005** in the USMF demo data), and execution of the production order can begin.
14. On the **Production order** page, on the Action Pane, on the **Production order** tab, in the **Process** group, select **Start**, and then select **OK**.
15. In the app, go to **Production** \> **Report as finished and put away**.
16. In the **Production ID** field, enter the production order number and other required details, and then select **OK**.
The following events occur:
- Release to a warehouse is triggered for the linked sales order.
- Based on the release, shipment and cross-docking work is created. This work directs the warehouse operator to pick the quantities that are required to fulfill the sales order line and put them at the outbound location that is specified in the cross-docking location directive.
- If the production order quantity is more than the quantity that the sales order requires, regular put-away work is also created. This work directs the warehouse operator to pick the finished-goods quantity that remains after cross-docking and move it to the regular storage location, per the location directive.
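The quantity split that happens at the report-as-finished action can be sketched as follows. This is a simplified illustration only, not actual Dynamics 365 logic; the function name, work record shape, and location names are invented for the example:

```javascript
// Simplified illustration of the report-as-finished split; not actual product code.
// The quantity marked against a demand order is cross-docked to the outbound
// location; any surplus follows the regular finished-goods put-away flow.
function onReportAsFinished(finishedQty, markedDemandQty, outboundLocation, storageLocation) {
  const work = [];
  const crossDockQty = Math.min(finishedQty, markedDemandQty);
  if (crossDockQty > 0) {
    work.push({ type: 'Cross docking', qty: crossDockQty, to: outboundLocation });
  }
  const putAwayQty = finishedQty - crossDockQty;
  if (putAwayQty > 0) {
    work.push({ type: 'Put away', qty: putAwayQty, to: storageLocation });
  }
  return work;
}

// A production order for 15 pieces, 10 of which are marked for a sales order:
console.log(onReportAsFinished(15, 10, 'Baydoor', 'BULK-001'));
```

With these numbers, the sketch yields cross-docking work for 10 pieces to the outbound location and regular put-away work for the remaining 5.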
[!INCLUDE[footer-include](../../includes/footer-banner.md)] | 93.955801 | 744 | 0.784899 | ita_Latn | 0.999777 |
4d4c2f0909311e4e194aeb63ad74b01c7f7a87a2 | 6,661 | md | Markdown | treebanks/it_postwita/it_postwita-dep-acl-relcl.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | treebanks/it_postwita/it_postwita-dep-acl-relcl.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | treebanks/it_postwita/it_postwita-dep-acl-relcl.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | ---
layout: base
title: 'Statistics of acl:relcl in UD_Italian-PoSTWITA'
udver: '2'
---
## Treebank Statistics: UD_Italian-PoSTWITA: Relations: `acl:relcl`
This relation is a language-specific subtype of <tt><a href="it_postwita-dep-acl.html">acl</a></tt>.
850 nodes (1%) are attached to their parents as `acl:relcl`.
850 instances of `acl:relcl` (100%) are left-to-right (parent precedes child).
Average distance between parent and child is 3.32588235294118.
The following 29 pairs of parts of speech are connected with `acl:relcl`: <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (412; 48% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (294; 35% instances), <tt><a href="it_postwita-pos-PROPN.html">PROPN</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (40; 5% instances), <tt><a href="it_postwita-pos-SYM.html">SYM</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (37; 4% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-ADJ.html">ADJ</a></tt> (9; 1% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-AUX.html">AUX</a></tt> (7; 1% instances), <tt><a href="it_postwita-pos-ADJ.html">ADJ</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (6; 1% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-ADJ.html">ADJ</a></tt> (6; 1% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt> (5; 1% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-PROPN.html">PROPN</a></tt> (4; 0% instances), <tt><a href="it_postwita-pos-X.html">X</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (4; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt> (3; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-PRON.html">PRON</a></tt> (3; 0% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-PRON.html">PRON</a></tt> (2; 0% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-SYM.html">SYM</a></tt> (2; 0% instances), <tt><a 
href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-X.html">X</a></tt> (2; 0% instances), <tt><a href="it_postwita-pos-PROPN.html">PROPN</a></tt>-<tt><a href="it_postwita-pos-PRON.html">PRON</a></tt> (2; 0% instances), <tt><a href="it_postwita-pos-ADJ.html">ADJ</a></tt>-<tt><a href="it_postwita-pos-AUX.html">AUX</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-ADV.html">ADV</a></tt>-<tt><a href="it_postwita-pos-VERB.html">VERB</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-ADP.html">ADP</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-AUX.html">AUX</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-INTJ.html">INTJ</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-PROPN.html">PROPN</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-SYM.html">SYM</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt>-<tt><a href="it_postwita-pos-X.html">X</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-DET.html">DET</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-PRON.html">PRON</a></tt>-<tt><a href="it_postwita-pos-INTJ.html">INTJ</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-PROPN.html">PROPN</a></tt>-<tt><a href="it_postwita-pos-ADJ.html">ADJ</a></tt> (1; 0% instances), <tt><a href="it_postwita-pos-SYM.html">SYM</a></tt>-<tt><a href="it_postwita-pos-NOUN.html">NOUN</a></tt> (1; 0% instances).
~~~ conllu
# visual-style 9 bgColor:blue
# visual-style 9 fgColor:white
# visual-style 6 bgColor:blue
# visual-style 6 fgColor:white
# visual-style 6 9 acl:relcl color:blue
1 MARIO mario PROPN SP _ 4 nsubj _ _
2 MONTI MONTI PROPN SP _ 1 flat:name _ _
3 STA stare AUX VA Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin 4 aux _ _
4 FREGANDO fregare VERB V VerbForm=Inf 0 root _ _
5 LA il DET RD Definite=Def|Gender=Fem|Number=Sing|PronType=Art 6 det _ _
6 GENTE gente NOUN S Gender=Fem|Number=Sing 4 obj _ _
7 CHE che PRON PR PronType=Rel 9 nsubj _ _
8 GIÀ GIÀ ADV B _ 9 advmod _ _
9 HA avere VERB V Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin 6 acl:relcl _ _
10 LE il DET RD Definite=Def|Gender=Fem|Number=Plur|PronType=Art 11 det _ _
11 TASCHE tasca NOUN S Gender=Fem|Number=Plur 9 obj _ _
12 VUOTE vuoto ADJ A Gender=Fem|Number=Plur 11 amod _ _
~~~
~~~ conllu
# visual-style 8 bgColor:blue
# visual-style 8 fgColor:white
# visual-style 6 bgColor:blue
# visual-style 6 fgColor:white
# visual-style 6 8 acl:relcl color:blue
1 Visto vedere VERB V Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin 6 acl _ _
2 che che SCONJ CS _ 4 mark _ _
3 mi mi PRON PC Clitic=Yes|Number=Sing|Person=1|PronType=Prs 4 expl _ _
4 volete volere VERB V Mood=Ind|Number=Plur|Person=2|Tense=Pres|VerbForm=Fin 1 ccomp _ _
5 bene bene ADV B _ 4 advmod _ _
6 chi chi PRON PR PronType=Rel 0 root _ _
7 vuole volere AUX VM Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin 8 aux _ _
8 fare fare VERB V VerbForm=Inf 6 acl:relcl _ _
9 i il DET RD Definite=Def|Gender=Masc|Number=Plur|PronType=Art 11 det _ _
10 miei mio DET AP Gender=Masc|Number=Plur|Poss=Yes|PronType=Prs 11 det:poss _ _
11 compiti compito NOUN S Gender=Masc|Number=Plur 8 obj _ SpaceAfter=No
12 ? ? PUNCT FS _ 6 punct _ _
13 ;) ;) SYM SYM _ 6 discourse:emo _ _
~~~
~~~ conllu
# visual-style 3 bgColor:blue
# visual-style 3 fgColor:white
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 1 3 acl:relcl color:blue
1 efestione efestione PROPN SP _ 0 root _ _
2 che che PRON PR PronType=Rel 3 nsubj _ _
3 protegge proteggere VERB V Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin 1 acl:relcl _ _
4 il il DET RD Definite=Def|Gender=Masc|Number=Sing|PronType=Art 6 det _ _
5 suo suo DET AP Gender=Masc|Number=Sing|Poss=Yes|PronType=Prs 6 det:poss _ _
6 amore amore NOUN S Gender=Masc|Number=Sing 3 obj _ _
7 #Alexander #Alexander SYM SYM _ 1 parataxis:hashtag _ _
~~~
| 82.234568 | 3,726 | 0.695691 | yue_Hant | 0.657402 |
4d4c9b6b8dab25efe006285fd5f6c5a889b2e5e3 | 1,112 | md | Markdown | packages/spanish-adjectives/README.md | MitchellMarkGeorge/rosaenlg | 977b313bf0ce6938c8f0b1d26aa9572df187b0c7 | [
"Apache-2.0"
] | 1 | 2020-12-14T15:38:36.000Z | 2020-12-14T15:38:36.000Z | packages/spanish-adjectives/README.md | MitchellMarkGeorge/rosaenlg | 977b313bf0ce6938c8f0b1d26aa9572df187b0c7 | [
"Apache-2.0"
] | null | null | null | packages/spanish-adjectives/README.md | MitchellMarkGeorge/rosaenlg | 977b313bf0ce6938c8f0b1d26aa9572df187b0c7 | [
"Apache-2.0"
] | null | null | null | <!--
Copyright 2019 Ludan Stoecklé
SPDX-License-Identifier: Apache-2.0
-->
# spanish-adjectives
Agreement of Spanish adjectives, based on the gender and number.
Handles many special cases:
* an extensive list of nationalities: _francés_ becomes _francesas_ (feminine plural)
* invariable adjectives: _esmeralda_, _macho_
* exceptions: _joven_ becomes _jóvenes_ (masculine plural)
* apocopes: _bueno_ becomes _buen_ when placed before a masculine singular word
## Installation
```sh
npm install spanish-adjectives
```
## Usage
```javascript
const SpanishAdjectives = require('spanish-adjectives');
// negras
console.log(SpanishAdjectives.agreeAdjective('negro', 'F', 'P'));
// daneses
console.log(SpanishAdjectives.agreeAdjective('danés', 'M', 'P'));
```
One main function, `agreeAdjective`, takes the following parameters and returns the agreed adjective:
* `adjective`: the adjective to agree; it must be the lemma, not an already agreed form
* `gender`: gender of the word; `M` or `F`
* `number`: number of the word; `S` or `P`
* `precedesNoun`: set to `true` if the adjective will precede the noun; default `false`; used for apocopes
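To illustrate what the `precedesNoun` flag does, here is a minimal, hypothetical sketch of apocope handling. This is not the library's actual implementation; the lookup table and the naive fallback agreement rule are assumptions for demonstration only:

```javascript
// Minimal sketch of apocope handling; NOT the real spanish-adjectives implementation.
// A few well-known apocopes that apply in the masculine singular, before the noun:
const apocopes = { bueno: 'buen', malo: 'mal', primero: 'primer', tercero: 'tercer' };

function agreeWithApocope(adjective, gender, number, precedesNoun) {
  // Apocope only applies to the masculine singular form placed before the noun.
  if (precedesNoun && gender === 'M' && number === 'S' && apocopes[adjective]) {
    return apocopes[adjective];
  }
  // Naive regular agreement (-o -> -a for feminine, plural adds -s);
  // the real library's rules are much richer (nationalities, invariables, ...).
  let form = adjective;
  if (gender === 'F' && form.endsWith('o')) form = form.slice(0, -1) + 'a';
  if (number === 'P') form += 's';
  return form;
}

console.log(agreeWithApocope('bueno', 'M', 'S', true));  // buen
console.log(agreeWithApocope('bueno', 'F', 'P', false)); // buenas
```

Note how the same lemma yields _buen_ only in the masculine singular pre-noun position, which is exactly the case `precedesNoun` selects.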
| 29.263158 | 104 | 0.720324 | eng_Latn | 0.977521 |
4d4cc0997877b6cba423c2e09f248209c4272a99 | 220 | md | Markdown | _posts/2022-04-15-HTML_List.md | Xogo8565/Xogo8565.github.io | 233be473292cf24dc8d2002056663ec368270a0e | [
"MIT"
] | null | null | null | _posts/2022-04-15-HTML_List.md | Xogo8565/Xogo8565.github.io | 233be473292cf24dc8d2002056663ec368270a0e | [
"MIT"
] | 1 | 2022-02-27T05:08:32.000Z | 2022-02-27T05:08:32.000Z | _posts/2022-04-15-HTML_List.md | Xogo8565/Xogo8565.github.io | 233be473292cf24dc8d2002056663ec368270a0e | [
"MIT"
] | null | null | null | ---
title: "HTML List"
excerpt: "HTML List"
categories:
- Study
- HTML
tags:
- Study
- HTML
- List
---
## List
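The examples themselves live in the embedded gist below; for reference, the two basic HTML list types look like this (a generic illustration, not the gist's exact code):

```html
<!-- Generic illustration of HTML lists -->
<ul>
  <li>Unordered item</li>
  <li>Unordered item</li>
</ul>
<ol>
  <li>First ordered item</li>
  <li>Second ordered item</li>
</ol>
```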
<script src="https://gist.github.com/Xogo8565/4ec4dc67173deb9cbe1d2c5d6990ddb3.js"></script> | 12.222222 | 92 | 0.65 | kor_Hang | 0.276855 |
4d4e384bdb67b190058e4511348f327ab6ec02a3 | 4,031 | md | Markdown | docs/dotnet/user-defined-operators-cpp-cli.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/dotnet/user-defined-operators-cpp-cli.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/dotnet/user-defined-operators-cpp-cli.md | Mdlglobal-atlassian-net/cpp-docs.cs-cz | 803fe43d9332d0b8dda5fd4acfe7f1eb0da3a35e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:53:26.000Z | 2020-05-28T15:53:26.000Z | ---
title: User-Defined Operators (C++/CLI)
ms.date: 11/04/2016
helpviewer_keywords:
- user-defined operators under /clr
ms.assetid: 42f93b4a-6de4-4e34-b07b-5a62ac014f2c
ms.openlocfilehash: cf80eb4c440c1308e8ea06a563c18569e4e4ddf2
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/23/2019
ms.locfileid: "62384501"
---
# <a name="user-defined-operators-ccli"></a>User-Defined Operators (C++/CLI)
User-defined operators for managed types are allowed as static members, instance members, or at global scope. However, only static operators are accessible through metadata to clients written in a language other than Visual C++.
For reference types, one of the parameters of a static user-defined operator must be one of the following:
- A handle (`type`^) to an instance of the parent type.
- A dereferenced reference type (`type`^& or `type`^%) to a handle to an instance of the parent type.
For value types, one of the parameters of a static user-defined operator must be one of the following:
- The same type as the parent value type.
- A dereferenced pointer type (`type`^) to the parent type.
- A dereferenced reference type (`type`% or `type`&) to the parent type.
- A dereferenced reference type (`type`^% or `type`^&) to a handle.
You can define the following operators:
|Operator|Unary or binary form?|
|--------------|--------------------------|
|!|Unary|
|!=|Binary|
|%|Binary|
|&|Unary and binary|
|&&|Binary|
|*|Unary and binary|
|+|Unary and binary|
|++|Unary|
|, |Binary|
|-|Unary and binary|
|--|Unary|
|->|Unary|
|/|Binary|
|<|Binary|
|<<|Binary|
|\<=|Binary|
|=|Binary|
|==|Binary|
|>|Binary|
|>=|Binary|
|>>|Binary|
|^|Binary|
|false|Unary|
|true|Unary|
|\||Binary|
|\|\||Binary|
|~|Unary|
## <a name="example"></a>Example
```cpp
// mcppv2_user-defined_operators.cpp
// compile with: /clr
using namespace System;
public ref struct X {
X(int i) : m_i(i) {}
X() {}
int m_i;
// static, binary, user-defined operator
static X ^ operator + (X^ me, int i) {
return (gcnew X(me -> m_i + i));
}
// instance, binary, user-defined operator
X^ operator -( int i ) {
return gcnew X(this->m_i - i);
}
// instance, unary, user-defined pre-increment operator
X^ operator ++() {
return gcnew X(this->m_i++);
}
// instance, unary, user-defined post-increment operator
X^ operator ++(int i) {
return gcnew X(this->m_i++);
}
// static, unary user-defined pre- and post-increment operator
static X^ operator-- (X^ me) {
return (gcnew X(me -> m_i - 1));
}
};
int main() {
X ^hX = gcnew X(-5);
System::Console::WriteLine(hX -> m_i);
hX = hX + 1;
System::Console::WriteLine(hX -> m_i);
hX = hX - (-1);
System::Console::WriteLine(hX -> m_i);
++hX;
System::Console::WriteLine(hX -> m_i);
hX++;
System::Console::WriteLine(hX -> m_i);
hX--;
System::Console::WriteLine(hX -> m_i);
--hX;
System::Console::WriteLine(hX -> m_i);
}
```
```Output
-5
-4
-3
-2
-1
-2
-3
```
## <a name="example"></a>Example
The following sample shows operator synthesis, which is available only when compiling with **/clr**. Operator synthesis creates the assignment form of a binary operator, if one is not defined, when the left-hand side of the assignment has a CLR type.
```cpp
// mcppv2_user-defined_operators_2.cpp
// compile with: /clr
ref struct A {
A(int n) : m_n(n) {};
static A^ operator + (A^ r1, A^ r2) {
return gcnew A( r1->m_n + r2->m_n);
};
int m_n;
};
int main() {
A^ a1 = gcnew A(10);
A^ a2 = gcnew A(20);
a1 += a2; // a1 = a1 + a2 += not defined in source
System::Console::WriteLine(a1->m_n);
}
```
```Output
30
```
## <a name="see-also"></a>See also
[Classes and Structs](../extensions/classes-and-structs-cpp-component-extensions.md)
| 23.852071 | 272 | 0.664103 | ces_Latn | 0.989147 |
4d4e61c1b50ca8b6f1cdd2be89e9c071c96b3738 | 2,472 | md | Markdown | _posts/2019-03-14-nosotros-us.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | _posts/2019-03-14-nosotros-us.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | _posts/2019-03-14-nosotros-us.md | breaktimetv2/breaktimetv2.github.io | 486dafc58b6e6c7e3fd3aaf38f334466bb9651a7 | [
"MIT"
] | null | null | null | ---
layout: peliculas
title: "Nosotros"
titulo_original: "Us"
image_carousel: 'https://res.cloudinary.com/u4innovation/image/upload/v1559364553/nosotros-poster-min_ebw7dy.jpg'
image_banner: 'https://res.cloudinary.com/u4innovation/image/upload/v1559364557/nosotros-banner-min_ybgyju.jpg'
trailer: https://www.youtube.com/embed/3YZikrq0K1U
embed: https://www.youtube.com/embed/3YZikrq0K1U?autoplay=1&rel=0&hd=1&border=0&wmode=opaque&enablejsapi=1&modestbranding=1&controls=1&showinfo=0
idioma: 'Latino'
sandbox: allow-same-origin allow-forms
reproductor: fembed
description: A family heads to the beach to enjoy a day in the sun. Jason, the youngest of the family, disappears. When his parents find him, the boy seems disoriented. That night, another family tries to terrorize them, and succeeds once they realize the intruders are physically and emotionally identical to themselves. Now the only way out is to kill the impostor family before it finishes them off.
description_corta: A family heads to the beach to enjoy a day in the sun. Jason, the youngest of the family, disappears. When his parents find him, the boy seems disoriented. That night, another family tries to terrorize them, and succeeds once they realize the intruders are physically and emotionally identical to themselves. Now the only way out is to kill the impostor family before it finishes them off.
duracion: '1h 56 min'
estrellas: '5'
clasificacion: '+10'
category: 'peliculas'
descargas: 'yes'
descargas2:
descarga-1: ["1", "https://www.rapidvideo.com/d/G3HUU1R8BT", "https://www.google.com/s2/favicons?domain=openload.co","OpenLoad","https://res.cloudinary.com/imbriitneysam/image/upload/v1541473684/mexico.png", "Latino", "HD"]
nuevo: 'new_peliculas'
calidad: 'Full HD'
genero: Terror, Suspenso
anio: '2019'
reproductores: ["https://api.cuevana3.io/rr/gd.php?h=ek5lbm9xYWNrS0xJMVp5b21KREk0dFBLbjVkaHhkRGdrOG1jbnBpUnhhS1ZySVdaZWFlczQ1M1Nub3g3cnBIb3JzMW9obW5UeTgybjNHaDhyTFRVMnF1U3FadVkyUT09"]
tags:
- Terror
twitter_text: Nosotros
introduction: A family heads to the beach to enjoy a day in the sun. Jason, the youngest of the family, disappears. When his parents find him, the boy seems disoriented. That night, another family tries to terrorize them, and succeeds once they realize the intruders are physically and emotionally identical to themselves. Now the only way out is to kill the impostor family before it finishes them off.
---
| 60.292683 | 419 | 0.794903 | spa_Latn | 0.927208 |
4d4f9a4924c9f623109b251788aec5d031b41fc6 | 1,444 | md | Markdown | _listings/microsoft-graph/memailboxsettings-get-postman.md | streamdata-gallery-organizations/microsoft-graph | d8404f8bac893cb65a9787e89d98e284d1e5eaf1 | [
"CC-BY-3.0"
] | null | null | null | _listings/microsoft-graph/memailboxsettings-get-postman.md | streamdata-gallery-organizations/microsoft-graph | d8404f8bac893cb65a9787e89d98e284d1e5eaf1 | [
"CC-BY-3.0"
] | null | null | null | _listings/microsoft-graph/memailboxsettings-get-postman.md | streamdata-gallery-organizations/microsoft-graph | d8404f8bac893cb65a9787e89d98e284d1e5eaf1 | [
"CC-BY-3.0"
] | null | null | null | {
"info": {
"name": "Microsoft Graph API Get User Mailbox Settings",
"_postman_id": "ab387893-ebb1-4b77-8400-55876a7388ff",
        "description": "Get user mailbox settings: get the user's mailboxSettings. This includes settings for automatic replies (notify people automatically upon receipt of their email), locale (language and country/region), and time zone.",
"schema": "https://schema.getpostman.com/json/collection/v2.0.0/"
},
"item": [
{
"name": "user",
"item": [
{
"id": "d6da03df-8cd5-484a-ac9e-fcf85c5763d1",
"name": "GetUserMailboxSettings",
"request": {
"url": "http://graph.microsoft.com/me/mailboxSettings",
"method": "GET",
"header": [
{
"key": "Authorization",
"value": "Authorization",
"description": "Bearer <token>",
"disabled": false
}
],
"body": {
"mode": "raw"
},
                "description": "Get user mailbox settings: get the user's mailboxSettings"
},
"response": [
{
"status": "Successful Response",
"code": 200,
"name": "Response_200",
"id": "5d04eb63-1e5b-4374-8c8f-40954b95bd39"
}
]
}
]
}
]
} | 33.581395 | 237 | 0.481994 | eng_Latn | 0.309361 |
4d507e3f63b9b48e13a2b0c074585f393a598c55 | 1,625 | md | Markdown | _posts/2017/2017-11-22-drafting-iconic-masters.md | markcerqueira/markcerqueira.github.com | ea445ed927b0345fad28847d124acc0b45abba1d | [
"MIT"
] | 1 | 2016-03-21T13:32:13.000Z | 2016-03-21T13:32:13.000Z | _posts/2017/2017-11-22-drafting-iconic-masters.md | markcerqueira/markcerqueira.github.com | ea445ed927b0345fad28847d124acc0b45abba1d | [
"MIT"
] | 6 | 2016-03-15T21:27:19.000Z | 2022-02-26T01:15:07.000Z | _posts/2017/2017-11-22-drafting-iconic-masters.md | markcerqueira/markcerqueira.github.com | ea445ed927b0345fad28847d124acc0b45abba1d | [
"MIT"
] | null | null | null | ---
layout: post
title: "Iconic Masters - Draft Report"
description: ""
category:
tags: [tabletop, mtg, twitch]
---
We've been drafting Ixalan a lot lately at Twitch. To spice things up we did a Kaladesh + Aether Revolt draft last week and this week we threw down to do an Iconic Masters draft.
Drafting Iconic Masters was a ton of fun. I packed a **Sheoldred, Whispering One** so I did something pretty unconventional and kept picking cool black cards. When I didn't see any good black cards I started splashing white because the heart of the cards told me to. Both colors must've been pretty open because I was passed a **Blood Baron of Vizkopa** and an **Emeria Angel**. Getting an **Indulgent Tormentor** towards the end of the draft strengthened my belief in the heart of the cards once again.
<div>
<img class="rounded-corners" style="max-width: 920px; border: 1px;" src="{{ site.images2017 }}/11-22/iconic.png"/>
<p class="caption-text" style="line-height: 1.5em; margin-bottom: 24px;"><strong>With these rare cards and my skill, there's no way I could lose!</strong></p>
</div>
The deck played great -- naturally -- if I could survive to get Indulgent Tormentor and then Sheoldred out. Getting more cards and having my opponent sacrifice their creatures every turn is pretty awesome! 😏
Iconic Masters was a really fun set to draft because it packs a ton of [cool, beautiful cards][1]. The price point ($30 versus the usual $7.50) pushes it out of the "every week draft" category but it's certainly a special treat one should try at least once!
[1]: https://magic.wizards.com/en/products/iconic-masters/cards | 73.863636 | 504 | 0.752615 | eng_Latn | 0.99587 |
4d51168d8b13bfb4743559eb1e4e8f18eb9dd08e | 8,833 | md | Markdown | _posts/tennis/2022-01-11-03_ball_traking copy.md | DK-Lite/DK-Lite.github.io | 488b5e04d09fe0c9161ffd6ce8383b20ac9f3acf | [
"MIT"
] | null | null | null | _posts/tennis/2022-01-11-03_ball_traking copy.md | DK-Lite/DK-Lite.github.io | 488b5e04d09fe0c9161ffd6ce8383b20ac9f3acf | [
"MIT"
] | null | null | null | _posts/tennis/2022-01-11-03_ball_traking copy.md | DK-Lite/DK-Lite.github.io | 488b5e04d09fe0c9161ffd6ce8383b20ac9f3acf | [
"MIT"
] | null | null | null | ---
layout: post
title: "Tennis Ball Tracking"
author: "DK-Lite"
use_math: true
---
트래킹은 연속되는 영상에서 검출된 물체를 지속적으로 `포커싱`하는 것이다.
테니스 경기영상에서는 `테니스공`이나 `플레이어`가 목표가 된다. 이번 포스팅은 실제 경기가 아닌 아마추어 영상에서의
경기시 필요한 `요구사항`을 파악한 후 그것에 맞춘 트래킹을 구현해 볼 것이다.
<br/>
## 개발환경
- PC: Mac mini M1
- OS: macOS Big Sur
- Lang: Python
- Package: opencv-python, numpy
<br/>
## 일반적인 트래킹 알고리즘
영상처리에서 일반적으로 잘 알려진 트래킹 알고리즘은 아래와 같다.
- Optical flow
- Mean Shift, Cam Shift
- TLD (Trading, Learning, Detection)
3개의 알고리즘 모두 현재 프레임과 이전 프레임의 `분포도` 및 `특징점`의 유사성을 비교하여 물체를 추적하지만
테니스 볼의 경우 `매우 빠르게` 움직이기에 위 알고리즘으로는 한계가 있다.
## 테니스 경기 환경에서의 트래킹을 위한 요구사항
우선 테니스 볼 속도는 일반적으로 100km/h를 가뿐히 넘는다. 그렇기에 프레임 별로 검출되는 볼의 거리는 매우 넓다.
때문에 가장 중요한 것은 `볼 검출`과 해당 검출이 이전 프레임에서 어떤 볼인지를 `찾는 과정`이 필요하다.
또한 연습 경기장의 특성상으로 영상 속에 테니스 경기가 `1개 이상`의 플레이가 발생할 수 있다.
이는 공 검출에 있어 단일이 아닌 2개 이상 처리 해야하는 `요구사항`도 있다.
정리하자면
- 프레임 단위로 움직이는 `범위가 크다`.
- `1개 이상의 공`이 발생할 수 있다.
따라서 우리는 검출된 공이 어떤 공인지 정확하게 `군집`시키는 것에 주력해야한다.
## 데이터 클래스 작성
개발 편의를 위해 테니스 볼에 대한 `데이터 클래스`를 작성한다.
```python
# dataclasses.py
import math
from dataclasses import dataclass
@dataclass
class Ball:
x: float
y: float
radius: float
def __iter__(self):
return iter((self.x, self.y, self.radius))
def __getitem__(self, index):
return (self.x, self.y, self.radius)[index]
def __distance(self, x1: float, y1: float, x2: float, y2: float):
dx = x1 - x2
dy = y1 - y2
return math.sqrt(dx * dx + dy * dy)
def distance(self, ball):
""" 마지막 공과의 거리 """
x, y, _ = ball
return self.__distance(self.x, self.y, x, y)
def loss(self, f: any):
""" 회귀 모델과의 오류값 """
return math.fabs(self.y - f(self.x))
```
다른 볼과의 `거리`와 `회귀` 모델을 통한 오류값을 구할 수 있다.
## 트래킹 클래스 작성
매 프레임 단위로 `새로운 볼`들이 생성된다. 생성되는 볼들을 입력으로 넣으면 각 `군집`을 `분류`하거나 `생성` 또는 `제거`해내는 클래스를 작성해보자
```python
class TennisBallTracking:
def __init__(self, maxlen=20):
self._maxlen = maxlen
self._traces = deque(maxlen=self._maxlen)
def __refresh(self):
self._traces = deque(filter(lambda x: x.size, self._traces), maxlen=self._maxlen)
def apply(self, balls: list):
copied = balls[:]
for trace in self._traces:
matched = trace.find(copied)
if matched:
trace.add(matched)
copied.remove(matched)
else:
trace.forward()
self.__refresh()
for ball in copied:
if len(self._traces) > 20: break
self._traces.append(Trace(ball, color=constants.COLOR[random.randrange(0, 6)]))
return self._traces
```
간단히 설명하자면 `apply`함수를 통해 들어온 공들은 기존에 존재하는 `Trace`에 포함이 되는지 `find` 작업을 진행한다. 여기서 매칭이 되면 해당 `Trace`에 추가가 되지만 그렇지 않으면 `Trace`를 강제로 `forward`시킨다.
여기서 `forward`의 역할은 `Trace`를 일정한 크기로 유지시키기 위해 사용된다.
매칭이 완료되면 `forward`과정에서 `history`가 사라진 `Trace`들이 발생한다. 이것들은 size 필터링을 통해 Refresh 해주는 작업을 진행하자
마지막으로 매칭 되지 못하고 남은 볼들은 새로운 추적의 객체로 `탄생`시킨다.(최대 20개를 생성)
## 추적 클래스 작성
추적을 저장하고 선별할 수 있는 클래스를 만들어보자.
테니스볼 추적은 아래와 같은 조건을 가진다.
- 고정된 프레임 수 만큼 `히스토리`를 저장 (default: 8)
- 공이 나타나고 사라지는 추적만 표시
- 최대한 추적에 맞는 공을 선택
```python
from . import dataclasses
class Trace:
def __init__(self, ball: Ball, color=[0, 0, 255], maxlen=8):
self._history = deque(maxlen=maxlen)
self._color = color
self.add(ball)
def add(self, ball):
self._history.appendleft(ball)
def forward(self):
self._history.pop()
def find(self, balls):
current = self._history[0]
selected = []
for ball in balls:
if ball.distance(current) < 150:
selected.append(ball)
if len(selected) == 0:
return None
selected = sorted(selected, key=lambda x: x.distance(current))
return selected[0]
@property
def history(self):
return self._history
@property
def size(self):
return len(self._history)
@property
def color(self):
return self._color
```
`self._history` 변수는 추적에 포함되는 애들을 저장해둔다.
이때 `deque 자료구조`를 사용하여 일정크기의 데이터만 유지할 수 있도록 한다.
추적 클래스에서는 현 시점에 검출된 볼 리스트를 받아 추적에 포함되는 볼을 선택할 것이다.
조건은 `일정거리` 안에 들어오는 볼들이다.
여기까지의 과정을 동작시켜보자
```python
detect = TennisBallDetection()
tracking = TennisBallTracking()
def test_tracking(frame):
balls = detect.apply(frame)
traces = tracking.apply(balls)
for trace in traces:
for ball in trace.history:
x, y, r = ball
cv.circle(frame_output, (int(x), int(y)), radius=int(r+2), color=trace.color, thickness=-1)
```
프레임별로 `볼 검출`을 진행하고 생성된 볼 리스트를 `트래킹 객체`에 적용시킨다. 반환하는 `traces`는 현재까지 진행된 볼들의 `trace`들을 저장하고 있다.
그리고 각 `trace`들을 색깔별로 `history`에 따라 프레임에 원을 그려내는 코드이다.

자세히 들어다보면 매 프레임마다 생성되는 `공` 또는 `잡음`들이 주위 `군집`으로 포함되어지는 것을 확인할 수 있다.
## 볼 움직임의 특성
플레이어의 라켓에 맞은 뒤 볼의 움직임을 보면 중력에 의해 `2차원 곡선`으로 움직인다.
따라서 기존 히스토리의 추적 데이터를 기반으로 `2차 회귀 방정식`을 구한 뒤, 후보로 선택된 볼 중에 2차 회귀 모델에 `가장 적은 Loss`의 볼을 선택하자

포인터 데이터들을 가지고 2차 방정식 `회귀모델`을 구하는 함수를 추가해보면
```python
from . import dataclasses
class Trace:
...
def find(self, balls):
...
if len(selected) == 0:
return None
predict = self.__get_regression()
if predict:
selected = sorted(selected, key=lambda x: x.loss(f=predict))
else:
selected = sorted(selected, key=lambda x: x.distance(current))
#selected = sorted(selected, key=lambda x: x.distance(current))
return selected[0]
def __get_regression(self):
if self.size <= 3: return None
x = list(map(lambda e: e.x, self._history))
y = list(map(lambda e: e.y, self._history))
fit = np.polyfit(x, y, deg=2)
self._predict = np.poly1d(fit)
return self._predict
...
```
`__get_regression()`을 주목해보자 `Numpy.polyfit`과 `poly1d`의 사용법은 구글링을 통해 쉽게 알수 있다.
적당한 히스토리가 존재해야 어느정도의 회귀 모델을 구할 수 있으며 부족하다면 가장 거리가 가까운걸로 선택하게 하자
회귀모델이 구해진다면 모델과 볼의 `Loss`정도가 가장 작은 볼을 반환하면 된다.
여기까지 결과물을 확인해보면

해당 영상에서는 크게 차이를 느끼지 못할 것이다. 하지만 여러공이 움직이는 연습경기장에서 볼 추적 중간에 방해가 되는 물체가 들어오면 개선의 차이를 확인할 수 있을 것 같다.
## 전체 코드
```python
# tracker.py
import random
import numpy as np
from . import constants # 컬러 정의
from collections import deque
from .dataclasses import Ball
class Trace:
def __init__(self, ball: Ball, color=[0, 0, 255], maxlen=8):
self._history = deque(maxlen=maxlen)
self._color = color
self.add(ball)
def add(self, ball):
self._history.appendleft(ball)
def forward(self):
self._history.pop()
def find(self, balls):
current = self._history[0]
selected = []
for ball in balls:
if ball.distance(current) < 150:
selected.append(ball)
if len(selected) == 0:
return None
predict = self.__get_regression()
if predict:
selected = sorted(selected, key=lambda x: x.loss(f=predict))
else:
selected = sorted(selected, key=lambda x: x.distance(current))
return selected[0]
def __get_regression(self):
if self.size <= 3: return None
x = list(map(lambda e: e.x, self._history))
y = list(map(lambda e: e.y, self._history))
fit = np.polyfit(x, y, deg=2)
self._predict = np.poly1d(fit)
return self._predict
@property
def history(self):
return self._history
@property
def size(self):
return len(self._history)
@property
def color(self):
return self._color
class TennisBallTracking:
def __init__(self, maxlen=20):
self._maxlen = maxlen
self._traces = deque(maxlen=self._maxlen)
def __refresh(self):
self._traces = deque(filter(lambda x: x.size, self._traces), maxlen=self._maxlen)
def apply(self, balls: list):
copied = balls[:]
for trace in self._traces:
matched = trace.find(copied)
if matched:
trace.add(matched)
copied.remove(matched)
else:
trace.forward()
self.__refresh()
for ball in copied:
if len(self._traces) > 20: break
self._traces.append(Trace(ball, color=constants.COLOR[random.randrange(0, 6)]))
return self._traces
```
## ...
일반적인 트래킹 기법이 아닌 테니스공 움직임에 특화된 트래킹 기법을 알아보고 구현해보았다. 여전히 테니스공이 아닌
잡음들이 충분히 잡히고 트래킹에 방해가 되는건 사실이다. 이처럼 영상처리를 단계별로 진행할시 앞 처리에 대한 `의존성`이 매우 강력함으로 처리 하나하나에 `검증`은 필수라 할 수 있다.
추가적으로 테니스공 검출에 `딥러닝`을 이용한다면 아주 멋지게 트래킹된 영상을 볼 수 있을거라 예상한다,
다음 포스팅은 위 추적이 떨어지는 위치의 `bounce 체크`와 `테니스 코트 검출`을 하여 경기 위에서 바로 보는듯한 효과를 주는 `와핑`처리까지 포스팅하겠다.
| 26.210682 | 141 | 0.610778 | kor_Hang | 0.999901 |
4d518bd482f522bebfa2fa33444c2ff95dfbcafb | 647 | md | Markdown | Examples/addXP.md | sleeplesskyru/simply-xp | de2627de083cf112fe2c6eab8e21302c06092278 | [
"Apache-2.0"
] | 11 | 2021-10-31T01:38:14.000Z | 2022-03-18T14:58:48.000Z | Examples/addXP.md | sleeplesskyru/simply-xp | de2627de083cf112fe2c6eab8e21302c06092278 | [
"Apache-2.0"
] | 2 | 2021-11-20T09:39:58.000Z | 2022-01-07T03:42:07.000Z | Examples/addXP.md | sleeplesskyru/simply-xp | de2627de083cf112fe2c6eab8e21302c06092278 | [
"Apache-2.0"
] | 5 | 2021-11-16T14:28:26.000Z | 2022-02-03T05:05:11.000Z | # addXP
Add XP to a user | `addXP`
### Usage
```js
let xp = require('simply-xp')
xp.addXP(message, userID, guildID, xp)
```
### Example
```js
let xp = require('simply-xp')
xp.addXP(message, message.author.id, message.guild.id, 10)
```
- **_Tip:_** It has built in randomizer.. Use it by
```js
let xp = require('simply-xp')
xp.addXP(message, message.author.id, message.guild.id, {
min: 10,
max: 25
})
```
- ## Returns `<Object>`
```
{
level: 1,
xp: 10
}
```
- ## Fires `levelUp` event
```js
client.on('levelUp', async (message, data) => {})
```
- - ### data Returns `<Object>`
```
{
xp,
level,
userID,
guildID
}
```
| 11.350877 | 58 | 0.573416 | eng_Latn | 0.512575 |
4d53513dbc34cb74695540413a5ff0bcfb722054 | 168 | md | Markdown | README.md | KoshcheevPA/is_task5 | b49b4daa0da78c6c44cdf1c6ae4cade955b97fdb | [
"Apache-2.0"
] | null | null | null | README.md | KoshcheevPA/is_task5 | b49b4daa0da78c6c44cdf1c6ae4cade955b97fdb | [
"Apache-2.0"
] | null | null | null | README.md | KoshcheevPA/is_task5 | b49b4daa0da78c6c44cdf1c6ae4cade955b97fdb | [
"Apache-2.0"
] | null | null | null | 1) docker-compose build
2) docker-compose up -d | for development (localhost:5000)
3) docker-compose -f docker-compose.prod.yml up -d | for production (localhost:9090)
| 42 | 84 | 0.755952 | eng_Latn | 0.700581 |
4d535e7ace0534a5de718c46be62d17172db3669 | 4,373 | md | Markdown | docs/atl/reference/options-atl-active-server-page-component-wizard.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/atl/reference/options-atl-active-server-page-component-wizard.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/atl/reference/options-atl-active-server-page-component-wizard.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Opções, o Assistente de página componente Active Server ATL | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-atl
ms.topic: reference
f1_keywords:
- vc.codewiz.class.atl.asp.options
dev_langs:
- C++
helpviewer_keywords:
- ATL Active Server Page Component Wizard, options
ms.assetid: 54f34e26-53c7-4456-9675-cb86e356bde0
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: fda466ad45b5b91f02920d68eef8eab071010830
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/18/2018
ms.locfileid: "46020128"
---
# <a name="options-atl-active-server-page-component-wizard"></a>Opções, Assistente de página componente Active Server ATL
Use esta página do Assistente de componentes do ATL Active Server página projetar para maior eficiência e suporte de erro para o objeto.
Para obter mais informações sobre projetos ATL e classes COM da ATL, consulte [componentes de área de trabalho COM ATL](../../atl/atl-com-desktop-components.md).
- **Modelo de Threading**
Indica o método para o gerenciamento de threads. Por padrão, o projeto utiliza **Apartment** threading.
Ver [especificando o modelo de Threading do projeto](../../atl/specifying-the-threading-model-for-a-project-atl.md) para obter mais informações.
|Opção|Descrição|
|------------|-----------------|
|**Simples**|Especifica que o objeto usa o modelo de threading único. O modelo de threading único, um objeto sempre é executado no thread COM primário. Ver [single-threaded Apartments](/windows/desktop/com/single-threaded-apartments) e [InprocServer32](/windows/desktop/com/inprocserver32) para obter mais informações.|
|**Apartment**|Especifica que o objeto usa apartamento de threading. Apartment equivalente a único thread. Cada objeto de um componente do apartment-threaded é atribuído um apartamento de thread, durante a vida útil do objeto; No entanto, vários threads podem ser usados para vários objetos. Cada compartimento estiver associado a um thread específico e tem uma bomba de mensagem do Windows (padrão).<br /><br /> Ver [single-threaded Apartments](/windows/desktop/com/single-threaded-apartments) para obter mais informações.|
|**Ambos**|Especifica que o objeto pode usar apartment ou threading livre, dependendo de qual tipo de um thread é criada.|
|**livre**|Especifica que o objeto usa threading livre. Threading livre é equivalente a um modelo de apartment com vários threads. Ver [multi-threaded Apartments](/windows/desktop/com/multithreaded-apartments) para obter mais informações.|
|**Neutral**|Especifica que o objeto segue as diretrizes para apartments de vários threads, mas ela pode ser executada em qualquer tipo de thread.|
- **Agregação**
Indica se o objeto usa [agregação](/windows/desktop/com/aggregation). O objeto agregado escolhe quais interfaces para expor para clientes e as interfaces são expostas como se o objeto agregado implementada-los. Os clientes do objeto agregado se comunicam somente com o objeto agregado.
|Opção|Descrição|
|------------|-----------------|
|**Sim**|Especifica se o objeto pode ser agregado. O padrão.|
|**No**|Especifica que o objeto não é agregado.|
|**Only**|Especifica que o objeto deve ser agregado.|
- **Suporte**
Opções adicionais de suporte:
|Opção|Descrição|
|------------|-----------------|
|**ISupportErrorInfo**|Cria o suporte para o [ISupportErrorInfo](../../atl/reference/isupporterrorinfoimpl-class.md) de interface para que o objeto pode retornar informações de erro para o cliente.|
|**Pontos de Conexão**|Permite que os pontos de conexão para seu objeto, fazendo a classe do seu objeto derivam [IConnectionPointContainerImpl](../../atl/reference/iconnectionpointcontainerimpl-class.md).|
|**Marshaler de thread livre**|Cria um objeto livre de marshaler para marshaling de ponteiros de interface com eficiência entre os threads no mesmo processo. Disponível para o objeto que especifica um **ambos** ou **gratuito** como o modelo de threading.|
## <a name="see-also"></a>Consulte também
[Assistente do componente Active Server Page da ATL](../../atl/reference/atl-active-server-page-component-wizard.md)<br/>
[Componente de página de servidor ativo do ATL](../../atl/reference/adding-an-atl-active-server-page-component.md)
| 61.591549 | 527 | 0.756689 | por_Latn | 0.987998 |
4d5474145f9de294359d65e136c9dd4649286b56 | 246 | md | Markdown | src/test/resources/testExpected/README.md | peterlcole/xlr-user-export-api | 9b9eb82b17e972f99b69e13ee20b328a258fd17b | [
"MIT"
] | null | null | null | src/test/resources/testExpected/README.md | peterlcole/xlr-user-export-api | 9b9eb82b17e972f99b69e13ee20b328a258fd17b | [
"MIT"
] | null | null | null | src/test/resources/testExpected/README.md | peterlcole/xlr-user-export-api | 9b9eb82b17e972f99b69e13ee20b328a258fd17b | [
"MIT"
] | 2 | 2019-10-14T01:28:04.000Z | 2020-04-10T18:50:08.000Z | # Purpose
Place files here that have the expected output of you tests if appropriate. For example, if your integration test generates a json result, you can easily compare it to a json file with the org.skyscreamer.jsonassert.JSONAssert class.
| 61.5 | 234 | 0.804878 | eng_Latn | 0.997586 |
4d54ad94a7fe6a47f06c3a42063fb0c3483d4bc7 | 176 | md | Markdown | _pages/blog_etiquetas.md | rodrigoms95/rodrigoms95.github.io | 51f811f6f4ad7cacb3297e628fb012c9d2968166 | [
"MIT"
] | null | null | null | _pages/blog_etiquetas.md | rodrigoms95/rodrigoms95.github.io | 51f811f6f4ad7cacb3297e628fb012c9d2968166 | [
"MIT"
] | null | null | null | _pages/blog_etiquetas.md | rodrigoms95/rodrigoms95.github.io | 51f811f6f4ad7cacb3297e628fb012c9d2968166 | [
"MIT"
] | null | null | null | ---
title: "Blog"
permalink: /blog_etiquetas/
layout: tags
author_profile: true
---
[Ordenar artículos por categoría](../blog)
[Ordenar artículos por fecha](../blog_fechas)
| 14.666667 | 45 | 0.721591 | spa_Latn | 0.6542 |
4d5557e66e328f21b3075ad0b87ec5b7d71ac0ec | 3,848 | md | Markdown | articles/container-registry/container-registry-check-health.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-registry/container-registry-check-health.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-registry/container-registry-check-health.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Kontrola stavu registru
description: Zjistěte, jak spustit příkaz rychlé diagnostiky k identifikaci běžných problémů při používání registru kontejnerů Azure, včetně místní konfigurace Dockeru a připojení k registru.
ms.topic: article
ms.date: 07/02/2019
ms.openlocfilehash: ea4432c9e92c4a0380517e39678814e2d1cb3bfc
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/27/2020
ms.locfileid: "74456406"
---
# <a name="check-the-health-of-an-azure-container-registry"></a>Kontrola stavu registru kontejnerů Azure
Při použití registru kontejneru Azure může občas dojít k problémům. Například nemusí být možné vytáhnout image kontejneru z důvodu problému s Dockerem v místním prostředí. Nebo problém se sítí může bránit připojení k registru.
Jako první diagnostický krok spusťte příkaz [az acr check-health,][az-acr-check-health] abyste získali informace o stavu prostředí a volitelně přístup k cílovému registru. Tento příkaz je k dispozici v Azure CLI verze 2.0.67 nebo novější. Pokud potřebujete instalaci nebo upgrade, přečtěte si téma [Instalace Azure CLI][azure-cli].
## <a name="run-az-acr-check-health"></a>Spustit az acr check-zdraví
Následující příklady ukazují různé způsoby spuštění příkazu. `az acr check-health`
> [!NOTE]
> Pokud spustíte příkaz v Azure Cloud Shell, místní prostředí není zaškrtnuto. Můžete však zkontrolovat přístup k cílovému registru.
### <a name="check-the-environment-only"></a>Zkontrolujte pouze prostředí
Chcete-li zkontrolovat místní konfiguraci klienta Docker, CLI a helmu, spusťte příkaz bez dalších parametrů:
```azurecli
az acr check-health
```
### <a name="check-the-environment-and-a-target-registry"></a>Kontrola prostředí a cílového registru
Chcete-li zkontrolovat přístup k registru a provést kontroly místního prostředí, předajte název cílového registru. Například:
```azurecli
az acr check-health --name myregistry
```
## <a name="error-reporting"></a>Zasílání zpráv o chybách
Příkaz protokoluje informace do standardního výstupu. Pokud je zjištěn problém, poskytuje kód chyby a popis. Další informace o kódech a možných řešeních naleznete v [odkazu na chybu](container-registry-health-error-reference.md).
Ve výchozím nastavení se příkaz zastaví vždy, když najde chybu. Příkaz můžete také spustit tak, aby poskytoval výstup pro všechny kontroly stavu, a to i v případě, že jsou nalezeny chyby. Přidejte `--ignore-errors` parametr, jak je znázorněno v následujících příkladech:
```azurecli
# Check environment only
az acr check-health --ignore-errors
# Check environment and target registry
az acr check-health --name myregistry --ignore-errors
```
Ukázkový výstup:
```console
$ az acr check-health --name myregistry --ignore-errors --yes
Docker daemon status: available
Docker version: Docker version 18.09.2, build 6247962
Docker pull of 'mcr.microsoft.com/mcr/hello-world:latest' : OK
ACR CLI version: 2.2.9
Helm version:
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
DNS lookup to myregistry.azurecr.io at IP 40.xxx.xxx.162 : OK
Challenge endpoint https://myregistry.azurecr.io/v2/ : OK
Fetch refresh token for registry 'myregistry.azurecr.io' : OK
Fetch access token for registry 'myregistry.azurecr.io' : OK
```
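In automation, it can be useful to gate later steps on the command's exit code, since `az acr check-health` exits non-zero when it detects a fatal error (unless `--ignore-errors` is used). The sketch below shows the pattern with a stand-in function in place of the real `az` call, which is assumed here:

```shell
# Stand-in for: az acr check-health --name "$1" --yes
# (the real command exits non-zero on a fatal error)
check_health() {
  echo "DNS lookup to $1.azurecr.io : OK"
  return 0
}

if check_health myregistry; then
  RESULT="healthy"
else
  RESULT="unhealthy"
fi
echo "registry is $RESULT"
```

In a CI pipeline, the `unhealthy` branch would typically fail the job before any push or pull steps run.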
## <a name="next-steps"></a>Next steps
For details about the error codes returned by the [az acr check-health][az-acr-check-health] command, see the [health check error reference](container-registry-health-error-reference.md).
For frequently asked questions and other known issues about Azure Container Registry, see the [FAQ](container-registry-faq.md).
<!-- LINKS - internal -->
[azure-cli]: /cli/azure/install-azure-cli
[az-acr-check-health]: /cli/azure/acr#az-acr-check-health
| 43.727273 | 331 | 0.786902 | ces_Latn | 0.998653 |
4d5670778c87744577dd899a7e6bfbaa59cdc293 | 6,011 | md | Markdown | docs/relational-databases/system-stored-procedures/sp-help-jobsteplog-transact-sql.md | bingenortuzar/sql-docs.es-es | 9e13730ffa0f3ce461cce71bebf1a3ce188c80ad | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-26T21:26:08.000Z | 2021-04-26T21:26:08.000Z | docs/relational-databases/system-stored-procedures/sp-help-jobsteplog-transact-sql.md | jlporatti/sql-docs.es-es | 9b35d3acbb48253e1f299815df975f9ddaa5e9c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-stored-procedures/sp-help-jobsteplog-transact-sql.md | jlporatti/sql-docs.es-es | 9b35d3acbb48253e1f299815df975f9ddaa5e9c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: sp_help_jobsteplog (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 08/09/2016
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sp_help_jobsteplog_TSQL
- sp_help_jobsteplog
dev_langs:
- TSQL
helpviewer_keywords:
- sp_help_jobsteplog
ms.assetid: 1a0be7b1-8f31-4b4c-aadb-586c0e00ed04
author: stevestein
ms.author: sstein
ms.openlocfilehash: e3af6ff05b971e6b9a0dedc1ec2e14f4ba87e00c
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 07/15/2019
ms.locfileid: "68090045"
---
# <a name="sphelpjobsteplog-transact-sql"></a>sp_help_jobsteplog (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2008-xxxx-xxxx-xxx-md.md)]
Returns metadata about a specific [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Agent job step log. **sp_help_jobsteplog** does not return the actual log.
 [Transact-SQL Syntax Conventions](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Syntax
```
sp_help_jobsteplog { [ @job_id = ] 'job_id' | [ @job_name = ] 'job_name' }
[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] 'step_name' ]
```
## <a name="arguments"></a>Arguments
`[ @job_id = ] 'job_id'` The identification number of the job for which to return job step log information. *job_id* is **int**, with a default of NULL.
`[ @job_name = ] 'job_name'` The name of the job. *job_name* is **sysname**, with a default of NULL.
> [!NOTE]
> Either *job_id* or *job_name* must be specified, but both cannot be specified.
`[ @step_id = ] step_id` The identification number of the step in the job. If not included, all steps in the job are included. *step_id* is **int**, with a default of NULL.
`[ @step_name = ] 'step_name'` The name of the step in the job. *step_name* is **sysname**, with a default of NULL.
## <a name="return-code-values"></a>Return code values
0 (success) or 1 (failure)
## <a name="result-sets"></a>Result sets
|Column name|Data type|Description|
|-----------------|---------------|-----------------|
|**job_id**|**uniqueidentifier**|Unique ID of the job.|
|**job_name**|**sysname**|Name of the job.|
|**step_id**|**int**|ID of the step within the job. For example, if the step is the first step in the job, its *step_id* is 1.|
|**step_name**|**sysname**|Name of the step in the job.|
|**step_uid**|**uniqueidentifier**|Unique ID of the step in the job (generated by the system).|
|**date_created**|**datetime**|Date that the step was created.|
|**date_modified**|**datetime**|Date that the step was last modified.|
|**log_size**|**float**|Size of the job step log, in megabytes (MB).|
|**log**|**nvarchar(max)**|Job step log output.|
## <a name="remarks"></a>Remarks
**sp_help_jobsteplog** is in the **msdb** database.
## <a name="permissions"></a>Permissions
By default, members of the **sysadmin** fixed server role can execute this stored procedure. Other users must be granted one of the following [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Agent fixed database roles in the **msdb** database:
- **SQLAgentUserRole**
- **SQLAgentReaderRole**
- **SQLAgentOperatorRole**
For details about the permissions of these roles, see [SQL Server Agent Fixed Database Roles](../../ssms/agent/sql-server-agent-fixed-database-roles.md).
Members of **SQLAgentUserRole** can only view job step log metadata for job steps that they own.
## <a name="examples"></a>Examples
### <a name="a-returns-job-step-log-information-for-all-steps-in-a-specific-job"></a>A. Return job step log information for all steps in a specific job
The following example returns all job step log information for the job named `Weekly Sales Data Backup`.
```
USE msdb ;
GO
EXEC dbo.sp_help_jobsteplog
@job_name = N'Weekly Sales Data Backup' ;
GO
```
### <a name="b-return-job-step-log-information-about-a-specific-job-step"></a>B. Return job step log information about a specific job step
The following example returns job step log information for the first job step of the job named `Weekly Sales Data Backup`.
```
USE msdb ;
GO
EXEC dbo.sp_help_jobsteplog
@job_name = N'Weekly Sales Data Backup',
@step_id = 1 ;
GO
```
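When calling the procedure from a script, the parameter list can be assembled programmatically and handed to a client tool. A minimal sketch — the wrapper function is hypothetical, not part of SQL Server:

```shell
# Hypothetical helper: compose the EXEC statement for a given job and step
build_jobsteplog_query() {
  printf "EXEC msdb.dbo.sp_help_jobsteplog @job_name = N'%s', @step_id = %s ;" "$1" "$2"
}

QUERY=$(build_jobsteplog_query "Weekly Sales Data Backup" 1)
echo "$QUERY"
# The resulting string could then be run with, e.g.: sqlcmd -d msdb -Q "$QUERY"
```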
## <a name="see-also"></a>See also
[sp_add_jobstep (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-add-jobstep-transact-sql.md)
[sp_delete_jobstep (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-delete-jobstep-transact-sql.md)
[sp_help_jobstep (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-help-jobstep-transact-sql.md)
[sp_delete_jobstep (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-delete-jobstep-transact-sql.md)
[sp_delete_jobsteplog (Transact-SQL)](../../relational-databases/system-stored-procedures/sp-delete-jobsteplog-transact-sql.md)
[SQL Server Agent Stored Procedures (Transact-SQL)](../../relational-databases/system-stored-procedures/sql-server-agent-stored-procedures-transact-sql.md)
| 48.088 | 318 | 0.710697 | spa_Latn | 0.798077 |
4d56e2d02fb19588216a9bc9e37883ccc060ae33 | 1,025 | md | Markdown | articles/cognitive-services/Custom-Vision-Service/includes/clean-ic-project.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Custom-Vision-Service/includes/clean-ic-project.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Custom-Vision-Service/includes/clean-ic-project.md | eltociear/azure-docs.fr-fr | 3302b8be75f0872cf7d7a5e264850849ac36e493 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
author: PatrickFarley
ms.service: cognitive-services
ms.subservice: custom-vision
ms.topic: include
ms.date: 03/21/2019
ms.author: pafarley
ms.openlocfilehash: 3955172ce44764af17417d93c483ca2c9ebc55b7
ms.sourcegitcommit: 34a6fa5fc66b1cfdfbf8178ef5cdb151c97c721c
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/28/2020
ms.locfileid: "82130826"
---
## <a name="clean-up-resources"></a>Clean up resources
If you want to implement your own image classification project (or try an [object detection](../quickstarts/object-detection.md) project instead), delete the tree identification project from this example. A free trial allows two Custom Vision projects.
On the [Custom Vision website](https://customvision.ai), navigate to **Projects** and select the trash can under My New Project.
 | 48.809524 | 287 | 0.798049 | fra_Latn | 0.788316 |
4d5746c9a345fae85ed5e54839044c14bbb3d4ee | 3,857 | md | Markdown | docs/FileImportSettingsDto.md | dragosv/memsource-api | 9ae1c71e05fd346d3e48d95fc5adeb44f43a7924 | [
"MIT"
] | null | null | null | docs/FileImportSettingsDto.md | dragosv/memsource-api | 9ae1c71e05fd346d3e48d95fc5adeb44f43a7924 | [
"MIT"
] | null | null | null | docs/FileImportSettingsDto.md | dragosv/memsource-api | 9ae1c71e05fd346d3e48d95fc5adeb44f43a7924 | [
"MIT"
] | 1 | 2020-12-14T16:16:25.000Z | 2020-12-14T16:16:25.000Z | # FileImportSettingsDto
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**InputCharset** | **string** | | [optional] [default to null]
**OutputCharset** | **string** | | [optional] [default to null]
**ZipCharset** | **string** | | [optional] [default to null]
**FileFormat** | **string** | | [optional] [default to null]
**TargetLength** | **bool** | | [optional] [default to null]
**TargetLengthMax** | **int32** | | [optional] [default to null]
**TargetLengthPercent** | **bool** | | [optional] [default to null]
**TargetLengthPercentValue** | **float64** | | [optional] [default to null]
**Android** | [***AndroidSettingsDto**](AndroidSettingsDto.md) | | [optional] [default to null]
**Idml** | [***IdmlSettingsDto**](IdmlSettingsDto.md) | | [optional] [default to null]
**Xls** | [***XlsSettingsDto**](XlsSettingsDto.md) | | [optional] [default to null]
**MultilingualXml** | [***MultilingualXmlSettingsDto**](MultilingualXmlSettingsDto.md) | | [optional] [default to null]
**Php** | [***PhpSettingsDto**](PhpSettingsDto.md) | | [optional] [default to null]
**Resx** | [***ResxSettingsDto**](ResxSettingsDto.md) | | [optional] [default to null]
**Json** | [***JsonSettingsDto**](JsonSettingsDto.md) | | [optional] [default to null]
**Html** | [***HtmlSettingsDto**](HtmlSettingsDto.md) | | [optional] [default to null]
**MultilingualXls** | [***MultilingualXlsSettingsDto**](MultilingualXlsSettingsDto.md) | | [optional] [default to null]
**MultilingualCsv** | [***MultilingualCsvSettingsDto**](MultilingualCsvSettingsDto.md) | | [optional] [default to null]
**Csv** | [***CsvSettingsDto**](CsvSettingsDto.md) | | [optional] [default to null]
**Txt** | [***TxtSettingsDto**](TxtSettingsDto.md) | | [optional] [default to null]
**Xlf2** | [***Xlf2SettingsDto**](Xlf2SettingsDto.md) | | [optional] [default to null]
**QuarkTag** | [***QuarkTagSettingsDto**](QuarkTagSettingsDto.md) | | [optional] [default to null]
**Pdf** | [***PdfSettingsDto**](PdfSettingsDto.md) | | [optional] [default to null]
**TmMatch** | [***TmMatchSettingsDto**](TMMatchSettingsDto.md) | | [optional] [default to null]
**Xml** | [***XmlSettingsDto**](XmlSettingsDto.md) | | [optional] [default to null]
**Mif** | [***MifSettingsDto**](MifSettingsDto.md) | | [optional] [default to null]
**Properties** | [***PropertiesSettingsDto**](PropertiesSettingsDto.md) | | [optional] [default to null]
**Doc** | [***DocSettingsDto**](DocSettingsDto.md) | | [optional] [default to null]
**Xlf** | [***XlfSettingsDto**](XlfSettingsDto.md) | | [optional] [default to null]
**SdlXlf** | [***SdlXlfSettingsDto**](SdlXlfSettingsDto.md) | | [optional] [default to null]
**Ttx** | [***TtxSettingsDto**](TtxSettingsDto.md) | | [optional] [default to null]
**Ppt** | [***PptSettingsDto**](PptSettingsDto.md) | | [optional] [default to null]
**Yaml** | [***YamlSettingsDto**](YamlSettingsDto.md) | | [optional] [default to null]
**Dita** | [***DitaSettingsDto**](DitaSettingsDto.md) | | [optional] [default to null]
**DocBook** | [***DocBookSettingsDto**](DocBookSettingsDto.md) | | [optional] [default to null]
**Po** | [***PoSettingsDto**](PoSettingsDto.md) | | [optional] [default to null]
**Mac** | [***MacSettingsDto**](MacSettingsDto.md) | | [optional] [default to null]
**Md** | [***MdSettingsDto**](MdSettingsDto.md) | | [optional] [default to null]
**Psd** | [***PsdSettingsDto**](PsdSettingsDto.md) | | [optional] [default to null]
**SegRule** | [***SegRuleReference**](SegRuleReference.md) | | [optional] [default to null]
**TargetSegRule** | [***SegRuleReference**](SegRuleReference.md) | | [optional] [default to null]
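Since every field is optional, a client only needs to serialize the settings it wants to override. A minimal sketch of such a payload follows; the camelCase field names mirror the property list above, but the exact casing the server expects is an assumption here:

```shell
# Hypothetical minimal FileImportSettingsDto payload overriding only
# the charsets and target-length settings; all other fields keep defaults.
PAYLOAD='{"inputCharset":"UTF-8","outputCharset":"UTF-8","targetLength":true,"targetLengthMax":255}'
echo "$PAYLOAD"
```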
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 75.627451 | 161 | 0.649209 | yue_Hant | 0.716453 |
4d57991660703251aa4f7b568d303d8bbbea2797 | 228 | md | Markdown | content/photography/2013-09-11-morning-lake-norway.md | aimxhaisse/blog | 1d60f86920876c4f1865b06c3c52ee514eaebe84 | [
"MIT"
] | null | null | null | content/photography/2013-09-11-morning-lake-norway.md | aimxhaisse/blog | 1d60f86920876c4f1865b06c3c52ee514eaebe84 | [
"MIT"
] | null | null | null | content/photography/2013-09-11-morning-lake-norway.md | aimxhaisse/blog | 1d60f86920876c4f1865b06c3c52ee514eaebe84 | [
"MIT"
] | null | null | null | ---
categories:
- photography
date: "2013-09-11T00:00:00Z"
desc: Aursjøen, Norway
icon: photography
plugin: lightense
title: waking up in the clouds
---
<img src="/img/photography/norway-morning-lake.jpg" data-action="zoom" />
| 19 | 73 | 0.732456 | eng_Latn | 0.302613 |
4d57cf3b0c825bb8eb0ff5440571d02a279d5761 | 58 | md | Markdown | ExamFinal/README.md | Possawat064/Devop-WorkShops | 2dfb2614c3dfda57347c57490bda06fa78c1d6c0 | [
"MIT"
] | null | null | null | ExamFinal/README.md | Possawat064/Devop-WorkShops | 2dfb2614c3dfda57347c57490bda06fa78c1d6c0 | [
"MIT"
] | null | null | null | ExamFinal/README.md | Possawat064/Devop-WorkShops | 2dfb2614c3dfda57347c57490bda06fa78c1d6c0 | [
"MIT"
] | null | null | null | # INT209/210 Final Exam
## [START](docs/01-honor-code.md) | 19.333333 | 33 | 0.689655 | yue_Hant | 0.845678 |
4d5868d96e25a9c8aad697059c0a21c93ea5a83f | 3,948 | md | Markdown | README.md | seblucas/PicasaWebSync | bde613766b81ee9f35af9d3e666950b2040ea586 | [
"MIT"
] | null | null | null | README.md | seblucas/PicasaWebSync | bde613766b81ee9f35af9d3e666950b2040ea586 | [
"MIT"
] | null | null | null | README.md | seblucas/PicasaWebSync | bde613766b81ee9f35af9d3e666950b2040ea586 | [
"MIT"
] | null | null | null | # PicasaWebSync
A command line tool to resize and upload pictures and videos into Picasa Web Albums.
Author: Brady Holt (http://www.GeekyTidBits.com)
License: The MIT License (MIT) (http://www.opensource.org/licenses/mit-license.php)
Overview
---
PicasaWebSync is a command-line tool to synchronize local photos and videos to online Picasa Web Albums. It is flexible with a configuration file / run-time options and optionally resizes photos before uploading them.
**Features**
- Resizes photos before uploading.
- Resizes videos before uploading (requires external tool like ffmpeg).
- Allows folders to be excluded by name or hint file included in folder.
- Allows album access (i.e. Public / Private) to be set by hint files in source folders.
- Allows excluding files over a specified size.
- Removes photos/videos/albums from Picasa Web Albums that have been removed locally (can prevent this with -addOnly command line option)
- Updates files on Picasa which have been updated locally since the last time they were uploaded.
- Supports these file types: jpg, jpeg, gif, tiff, tif, png, bmp, avi, wmv, mpg, asf, mov, mp4
Installation and Usage
---
To install and use picasawebsync:
1. Obtain a build. Either download the source of this repository and build the solution (src/PicasaWebSync.sln) or download the latest release from [Downloads](https://github.com/bradyholt/PicasaWebSync/downloads) page.
2. Place the build output into a dedicated directory.
3. Modify the picasawebsync.exe.config file and update the values for the picasa.username and picasa.password settings to match your Google username and Password.
4. Run the main executable **picasawebsync.exe** with appropriate command line parameters and options. To see a list of available command line options run 'picasawebsync.exe -help'.
**Usage**
picasawebsync.exe folderPath [options]
**Options**
-u:USERNAME, Picasa Username (can also be specified in picasawebsync.exe.config)
-p:PASSWORD, Picasa Password (can also be specified in picasawebsync.exe.config)
-r, recursive (include subfolders)
-emptyAlbumFirst delete all images in album before adding photos
     -addOnly           add only and do not remove anything from online albums (overrides -emptyAlbumFirst)
-v, verbose output
-help print help menu
**Example Usage**
picasawebsync.exe "C:\Users\Public\Pictures\My Pictures\" -u:[email protected] -p:Secr@tPwd -r -v
Repository Directories
---
- **src** - The source solution/project for picasawebsync.
- **lib** - Contains Google Data API libraries used by picasawebsync. The files in this folder are referenced by the source project file in /src. This folder can be generally ignored unless you are building the source.
Video Resizing
---
By default, video resizing is disabled (key="video.resize" value="false" in picasawebsync.exe.config) but to enable it you should
install a command-line utility that can handle video resizing and specify the command-line in the video.resize.command config value in
picasawebsync.exe.config. I highly recommend installing and utilizing FFmpeg (http://www.ffmpeg.org/) for video resizing. Make sure the
video.resize.command setting contains the path to the ffmpeg executable, or that your PATH environment variable includes the directory where this
executable resides, so that PicasaWebSync can locate it at runtime.
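As a concrete illustration, a resize command of the kind the `video.resize.command` setting refers to might look like the following. The flags shown are standard ffmpeg usage, but the placeholder/substitution syntax picasawebsync expects is an assumption — check the comments in picasawebsync.exe.config:

```shell
# Illustrative ffmpeg invocation for downscaling a video to 640px width
# while keeping the aspect ratio; built as a string here, not executed.
SRC="input.avi"
DST="output.mp4"
RESIZE_CMD="ffmpeg -i $SRC -vf scale=640:-1 $DST"
echo "$RESIZE_CMD"
```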
Hint Files
---
Hint files can be placed in local folders which will instruct PicasaWebSync to mark an album with Public / Private access or exclude
the folder entirely from being uploaded. Refer to the /hint_files for sample files. The hint files do not have to have any contents.
Requirements
---
**Windows**
.NET Framework 3.5 (Windows 7, Windows Vista, Windows Server 2008, Windows XP, Windows Server 2003)
**Linux**
Mono version supporting .NET Framework 3.5
| 51.947368 | 220 | 0.753293 | eng_Latn | 0.987896 |
4d587a513049582bcb180031064b3a62fc0254ef | 577 | md | Markdown | catalog/riman-gambler-mouse/en-US_riman-gambler-mouse.md | htron-dev/baka-db | cb6e907a5c53113275da271631698cd3b35c9589 | [
"MIT"
] | 3 | 2021-08-12T20:02:29.000Z | 2021-09-05T05:03:32.000Z | catalog/riman-gambler-mouse/en-US_riman-gambler-mouse.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 8 | 2021-07-20T00:44:48.000Z | 2021-09-22T18:44:04.000Z | catalog/riman-gambler-mouse/en-US_riman-gambler-mouse.md | zzhenryquezz/baka-db | da8f54a87191a53a7fca54b0775b3c00f99d2531 | [
"MIT"
] | 2 | 2021-07-19T01:38:25.000Z | 2021-07-29T08:10:29.000Z | # Riman Gambler Mouse

- **type**: manga
- **volumes**: 4
- **original-name**: リーマンギャンブラー マウス
## Tags
- psychological
- seinen
## Authors
- Takahashi, Noboru (Story & Art)
## Synopsis
Tadanori Takamura is a failure in everything he does. His job, his marriage, his life. One day, he is offered a way out of his hell (by a 3-fingered woman): A gambling game called "cowarDICE."
(Source: MU)
## Links
- [My Anime list](https://myanimelist.net/manga/12893/Riman_Gambler_Mouse)
| 20.607143 | 192 | 0.686308 | eng_Latn | 0.841362 |
4d58aea8fd4400ded0aadb3bf4b68d86ff199dba | 8,849 | md | Markdown | README.md | AppViewX/gke-marketplace-appviewx | d721bd6eb68e599935b4fcb7849de9badc25523f | [
"Apache-2.0"
] | null | null | null | README.md | AppViewX/gke-marketplace-appviewx | d721bd6eb68e599935b4fcb7849de9badc25523f | [
"Apache-2.0"
] | 1 | 2021-03-17T18:55:34.000Z | 2021-03-17T18:55:34.000Z | README.md | AppViewX/gke-marketplace-appviewx | d721bd6eb68e599935b4fcb7849de9badc25523f | [
"Apache-2.0"
] | 2 | 2020-11-17T07:15:26.000Z | 2021-03-17T14:37:57.000Z | # AppViewX for GKE Marketplace
The AppViewX application can be installed using either of the following approaches:
* [Using the Google Cloud Platform Console](#using-install-platform-console)
* [Using the command line](#using-install-command-line)
## <a name="using-install-platform-console"></a>Using the Google Cloud Platform Marketplace
Get up and running with a few clicks! Install the AppViewX application to a
Google Kubernetes Engine cluster using Google Cloud Marketplace. Follow the
[on-screen instructions].
## <a name="using-install-command-line"></a>Using the command line
### Prerequisites
#### Set up command-line tools
You'll need the following tools in your development environment:
- [gcloud](https://cloud.google.com/sdk/gcloud/)
- [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
- [docker](https://docs.docker.com/install/)
- [mpdev](https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/docs/tool-prerequisites.md)
Configure `gcloud` as a Docker credential helper:
```shell
gcloud auth configure-docker
```
You can install the AppViewX application in an existing GKE cluster or create a new one.
* If you want to **create** a new Google GKE cluster, follow the instructions from the section [Create a GKE cluster](#create-gke-cluster) onwards.
* If you have an **existing** GKE cluster, ensure that the cluster nodes have a minimum of 8 vCPU and 32GB RAM and follow the instructions from section [Install the application resource definition](#install-application-resource-definition) onwards.
#### <a name="create-gke-cluster"></a>Create a GKE cluster
The AppViewX application requires a cluster with at least one node, each node having a minimum of 8 vCPUs and 32 GB RAM. Available machine types are listed [here](https://cloud.google.com/compute/docs/machine-types).
Create a new cluster from the command line:
```shell
# set the name of the Kubernetes cluster
export CLUSTER=appviewx-cluster
# set the zone to launch the cluster
export ZONE=us-west1-a
# set the machine type for the cluster
export MACHINE_TYPE=e2-standard-8
# create the cluster using google command line tools
gcloud container clusters create "$CLUSTER" --zone "$ZONE" --machine-type "$MACHINE_TYPE"
```
Configure `kubectl` to connect to the new cluster:
```shell
gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE"
```
#### <a name="install-application-resource-definition"></a>Install the application resource definition
An application resource is a collection of individual Kubernetes components,
such as services, stateful sets, deployments, and so on, that you can manage as a group.
To set up your cluster to understand application resources, run the following command:
```shell
kubectl apply -f "https://raw.githubusercontent.com/GoogleCloudPlatform/marketplace-k8s-app-tools/master/crd/app-crd.yaml"
```
You need to run this command once.
The application resource is defined by the Kubernetes
[SIG-apps](https://github.com/kubernetes/community/tree/master/sig-apps)
community. The source code can be found on
[github.com/kubernetes-sigs/application](https://github.com/kubernetes-sigs/application).
#### Prerequisites for using Role-Based Access Control
You must grant your user the ability to create roles in Kubernetes by running the following command.
```shell
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
```
You need to run this command once.
### Install the Application
#### Clone this repo
```shell
git clone https://github.com/AppViewX/gke-marketplace-appviewx.git
git checkout main
```
#### Pull deployer image
Configure `gcloud` as a Docker credential helper:
```shell
gcloud auth configure-docker
```
Pull the deployer image to your local docker registry
```shell
docker pull gcr.io/appviewxclm/appviewx/deployer:2020.3.0
```
#### Run installer script
Set your application instance name and the Kubernetes namespace to deploy:
```shell
# set the application instance name
export APP_NAME=appviewx
# set the Kubernetes namespace the application was originally installed
export NAMESPACE=avx
```
Create the namespace
```shell
kubectl create namespace $NAMESPACE
```
Run the install script
```shell
mpdev install --deployer=gcr.io/appviewxclm/appviewx/deployer:2020.3.0 --parameters='{"name": "'$APP_NAME'", "namespace": "'$NAMESPACE'"}'
```
Watch the deployment come up with
```shell
kubectl get pods -n $NAMESPACE --watch
```
#### Accessing the application
```shell
* Click on the Kubernetes Engine -> Services & Ingresses menu
* Select the appviewx-cluster and switch to the INGRESS tab
* Click on avx-ingress and check whether all the Backend services are marked green under the INGRESS section
* If all the Backend services are marked green, you can access the application; otherwise, perform the following steps.
```
Update the health check url for avx-platform-gateway
```shell
* Click on avx-platform-gateway from Backend services and click the link under Health Check
* Click on EDIT
* Update the Request Path field with /avxmgr/printroutes
* Save the changes
```
Update the health check url for avx-platform-web
```shell
* Click on avx-platform-web from Backend services and click the link under Health Check
* Click on EDIT
* Update the Request Path field with /appviewx/login/
* Save the changes
```
The above changes are mandatory to access the AppViewX application because, by default, the health check URL for the load balancer is configured as "/". It will take some time for the load balancer to apply the changes; verify that all Backend services are marked green.
#### Create TLS certificate for AppViewX
> Note: You can skip this step if you have not set up external access.
1. If you already have a certificate that you want to use, copy your
certificate and key pair to the `/tmp/tls.crt`and `/tmp/tls.key` files
then skip to the next step (2).
To create a new certificate, run the following command:
```shell
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /tmp/tls.key \
-out /tmp/tls.crt \
-subj "/CN=appviewx/O=appviewx"
```
2. Create the tls secret:
```shell
* Note down the name of the application APP_NAME and NAMESPACE from the Run installer script section
* kubectl create secret tls $APP_NAME-tls -n $NAMESPACE --cert /tmp/tls.crt --key /tmp/tls.key
```
Run the following command to get the AppViewX application URL
```shell
SERVICE_IP=$(kubectl get ingress avx-ingress --namespace avx --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "https://${SERVICE_IP}/appviewx/login"
```
# Delete the Application
There are two approaches to delete the AppViewX application
* [Using the Google Cloud Platform Console](#using-platform-console)
* [Using the command line](#using-command-line)
## <a name="using-platform-console"></a>Using the Google Cloud Platform Console
1. In the GCP Console, open [Kubernetes Applications](https://console.cloud.google.com/kubernetes/application).
2. From the list of applications, click **appviewx**.
3. On the Application Details page, click **Delete**.
## <a name="using-command-line"></a>Using the command line
Set your application instance name and the Kubernetes namespace used to deploy:
```shell
# set the application instance name
export APP_NAME=appviewx
# set the Kubernetes namespace the application was originally installed
export NAMESPACE=avx
```
### Delete the resources
Delete the resources using types and a label:
```shell
kubectl delete ns avx
```
### Delete the persistent volumes of your installation
By design, removal of stateful sets in Kubernetes does not remove
the persistent volume claims that are attached to their pods. This prevents your
installations from accidentally deleting stateful data.
To remove the persistent volume claims with their attached persistent disks, run
the following `kubectl` commands:
```shell
# remove all the persistent volumes or disks
for pv in $(kubectl get pvc --namespace $NAMESPACE \
--selector app.kubernetes.io/name=$APP_NAME \
--output jsonpath='{.items[*].spec.volumeName}');
do
kubectl delete pv/$pv --namespace $NAMESPACE
done
# remove all the persistent volume claims
kubectl delete persistentvolumeclaims \
--namespace $NAMESPACE \
--selector app.kubernetes.io/name=$APP_NAME
```
### Delete the GKE cluster
Optionally, if you don't need the deployed application or the GKE cluster,
delete the cluster using this command:
```shell
# replace with the cluster name that you used
export CLUSTER=appviewx-cluster
# replace with the zone that you used
export ZONE=us-west1-a
# delete the cluster using gcloud command
gcloud container clusters delete "$CLUSTER" --zone "$ZONE"
```
| 30.725694 | 258 | 0.757826 | eng_Latn | 0.921794 |
4d591075828d51e42b171f5aa8ee35744f0ab0e8 | 4,264 | md | Markdown | _posts/2020-04-26-the-burden-of-taxation.md | BKSpurgeon/personal-blog | 3e2dd720c4410e2d0f183ca60d6f1230e93285fa | [
"MIT"
] | null | null | null | _posts/2020-04-26-the-burden-of-taxation.md | BKSpurgeon/personal-blog | 3e2dd720c4410e2d0f183ca60d6f1230e93285fa | [
"MIT"
] | null | null | null | _posts/2020-04-26-the-burden-of-taxation.md | BKSpurgeon/personal-blog | 3e2dd720c4410e2d0f183ca60d6f1230e93285fa | [
"MIT"
] | null | null | null | ---
layout: post
title: The burden of taxation
comments: true
tags:
- anecdotes
---
I was speaking to a gentleman (his identity is protected). He had just started a business, and was hustling for orders, as all good businessmen are wont to do.
"How are you getting your customers?" I asked.
"Advertise"
The catalogue he went with claimed exposure to 18,000 letter boxes. The result? Not a *single* call. 18,000 letter boxes..........riiiight. The hit-rate seems abysmally low - I'm not saying that the advertiser is a liar, but I am saying that you can get your face ripped off in business if you're not careful.
"What next?"
"I do marketing," he responded. "1000 notice."
"And how did that prove?""
"One sale!" he beamed. His smile reached from ear to ear. "First sale!"
"Nice. But how are you going to scale?"
"Now I doing Facebook Ads!" and then he added: "You have to really careful on how you spending. Money fly away if not careful."
Sage advice.
I noted to myself, that money is viewed very differently, if it is your own, and if you are seeking a return on it, than if you were a government department, spending someone else's money, with an indifference as to whether a return is generated.
I saw and thought to myself: this is why no government department or bureaucracy can ever succeed in any type of business endeavour.
But that doesn't stop our valiant and ever optimistic bureaucrats from attempting it, in the GREAT REPUBLIC that is the SOCIALIST REPUBLIC of the COMMONWEALTH AUSTRALIA. All else who tried before were idiots, our current political leaders and bureaucrats have the wisdom and expertise to be the only ones in all human history to make a success out of it.
After hearing of his marketing spend, and his losses, I lamented: after all his hard work and toil, the government should unjustly rape him of his labours by taxing him in every way possible: company tax, payroll tax, personal income tax, (indirect) bank taxes, capital gains tax, GST, and all the onerous compliance burdens associated with running a business (GST Business Activity Statements) ad nauseam etc. I suggested to him that it is much easier to become a government bureaucrat, and to "work from home" from
~~11 am - 3 pm (lunch and tea breaks included)~~ 9am - 5pm on very important projects: projects done for the people, and by the people, amen.
But he did not lament:
"**I taking cash, BABEY!**"
Sublime! A fair-dinkum entrepreneur, and a true-blue Aussie to boot. And what a fool I am in comparison: accepting only bank transfers, so I can pay my taxes like a dutiful citizen, so that the government, in its glorious wisdom, can spend it profitably:
(i) on ships that sink when loaded,
(ii) and on submarines that cannot sink,
(iii) on submarines that take 20 years to get operational, and are then moored because there aren't enough people to man them;
(iv) on internet infrastructure that is obsolete and slow, and TWICE the cost as before,
(v) and on overpriced hotel quarantine programs -- which nobody is in charge of, and of which nobody has any memory of who was in charge.
Were I to continue writing, I would quickly and irredeemably deplete the vast forests of the Amazon.
It was then that I realised that our little paradise is somewhat doomed: when honest, law-abiding, God-fearing foreigners must resort to such measures to earn a living, how can this republic survive with its wanton expenditures uncurtailed?
It is a miracle of life and business evolution that any business survives here. But not to worry, those that remain will soon be rendered extinct through a bi-partisan coalition of the Australian Labor Party and the Liberal-National coalition - its FINAL SOLUTION: to entirely rid the country of whatever private enterprise is left. The goal is to make sure that every hard-working Aussie battler has free stuff given to him by the taxpayer, or it's off to the gulags for you, son!
Friedman was right: this is a government **by** the bureaucrat, **for** the bureaucrat. Little do they know - as has been amply demonstrated by my friend - that the taxpayer is silently revolting: he's bolted from the stables while the bureaucrat's still sound asleep.
# Galendae
A basic popup calendar that can be styled to match workspace themes.
**Galendae** was designed to be a stylish popup calendar that can match the styling of Desktop Environments or Window Managers.
[](http://imgur.com/a/7WPDF)
[](http://imgur.com/a/7WPDF)
[](http://imgur.com/a/7WPDF)
[](http://imgur.com/a/7WPDF)
## What's in a name
Galendae is derived from the Roman word [Kalendae](https://en.wikipedia.org/wiki/Calends), meaning the first day of the month. I thought Kalendae sounded like a KDE application, and since I was using GTK+, Galendae was born.
## Dependency
GTK+ 3.x
## Building from source
$ git clone https://github.com/chris-marsh/galendae.git
$ make release
## Configuration
galendae will look for a configuration file called galendae.conf. It will search in the following order;
./galendae.conf
~/.config/galendae/galendae.conf
You can specify alternative configuration files with the '-c' option. You can specify a full path or just a filename. If you only give a filename, the same directories as above will be tried.
## Running
$ galendae
$ galendae -c config/blue.conf
## Usage
galendae [OPTION ...]
DESCRIPTION
        Galendae displays a GUI calendar. Keys:
h|Left - decrease month
l|Right - increase month
k|Up - increase year
j|Down - decrease year
g|Home - return to current date
q|Esc - exit the calendar
OPTIONS
-c, --config FILE - config file to load
-h, --help - display this help and exit
-v, --version - output version information
---
title: ICorDebugClass::GetToken Method
ms.date: 03/30/2017
api_name:
- ICorDebugClass.GetToken
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugClass::GetToken
helpviewer_keywords:
- GetToken method, ICorDebugClass interface [.NET Framework debugging]
- ICorDebugClass::GetToken method [.NET Framework debugging]
ms.assetid: ee5c848a-eac4-4462-b07a-07ccd76a75df
topic_type:
- apiref
ms.openlocfilehash: 6964c931307a40f384ad8a8e355cab0aad575ec6
ms.sourcegitcommit: 559fcfbe4871636494870a8b716bf7325df34ac5
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 10/30/2019
ms.locfileid: "73125778"
---
# <a name="icordebugclassgettoken-method"></a>ICorDebugClass::GetToken Method
 Gets the `TypeDef` metadata token that references the definition of this class.
## <a name="syntax"></a>Syntax
```cpp
HRESULT GetToken (
[out] mdTypeDef *pTypeDef
);
```
## <a name="parameters"></a>Parameters
 `pTypeDef`
 [out] A pointer to the `mdTypeDef` token that references the definition of this class.
## <a name="requirements"></a>Requirements
 **Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
 **Header:** CorDebug.idl, CorDebug.h
 **Library:** CorGuids.lib
 **.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>See also
- [Metadata Interfaces](../../../../docs/framework/unmanaged-api/metadata/metadata-interfaces.md)
# Signal Logger
## Overview
These packages provide an interface for a signal logger and some default signal loggers. A signal logger can log a time series of a signal.
The source code is released under a [BSD 3-Clause license](LICENSE).
**Author(s):** Gabriel Hottiger, Christian Gehring, Dario Bellicoso
[Documentation](http://docs.leggedrobotics.com/signal_logger_doc/)
## Building
In order to install, clone the latest version from this repository into your catkin workspace and compile the packages.
## Bugs & Feature Requests
Please report bugs and request features using the [Issue Tracker](https://github.com/ANYbotics/signal_logger/issues).
---
title: IMAPISession::Logoff
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
localization_priority: Normal
api_name:
- IMAPISession.Logoff
api_type:
- COM
ms.assetid: 93e38f6c-4b67-4f2d-bc94-631efec86852
description: Last modified March 9, 2015
ms.openlocfilehash: 317c3702415ddf30038ccd0d40cdf0f19abc61f8
ms.sourcegitcommit: 8fe462c32b91c87911942c188f3445e85a54137c
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/23/2019
ms.locfileid: "32338107"
---
# <a name="imapisessionlogoff"></a>IMAPISession::Logoff
**Applies to**: Outlook 2013 | Outlook 2016
Ends a MAPI session.
```cpp
HRESULT Logoff(
ULONG_PTR ulUIParam,
ULONG ulFlags,
ULONG ulReserved
);
```
## <a name="parameters"></a>Parameters
 _ulUIParam_
> [in] A handle to the parent window of any dialog boxes or windows to be displayed. This parameter is ignored unless the MAPI_LOGOFF_UI flag is set.
 _ulFlags_
> [in] A bitmask of flags that control the logoff operation. The following flags can be set:
MAPI_LOGOFF_SHARED
> If this session is shared, all clients that logged on through the shared session must be notified of the pending logoff and must log off. Any client using the shared session can set this flag. MAPI_LOGOFF_SHARED is ignored if the current session is not shared.
MAPI_LOGOFF_UI
> **Logoff** can display a dialog box during the operation, possibly prompting for confirmation of the logoff.
 _ulReserved_
> [in] Reserved; must be zero.
## <a name="return-value"></a>Return value
S_OK
> The logoff operation succeeded.
## <a name="remarks"></a>Remarks
The **IMAPISession::Logoff** method ends a MAPI session. After **Logoff** returns, no methods other than [IUnknown::Release](https://msdn.microsoft.com/library/ms682317%28v=VS.85%29.aspx) can be called.
## <a name="notes-to-callers"></a>Notes to callers
When **Logoff** returns, release the session object by calling its **IUnknown::Release** method.
For more information about ending a session, see [Ending a MAPI Session](ending-a-mapi-session.md).
## <a name="mfcmapi-reference"></a>MFCMAPI reference
For MFCMAPI sample code, see the following table.
|**File**|**Function**|**Comment**|
|:-----|:-----|:-----|
|MAPIObjects.cpp <br/> |CMapiObjects::Logoff <br/> |MFCMAPI uses the **IMAPISession::Logoff** method to log off the session before releasing it. <br/> |
> [!NOTE]
> Because of the fast shutdown behavior introduced in Microsoft Office Outlook 2007 Service Pack 2, Microsoft Outlook 2010, and Microsoft Outlook 2013, clients should never pass the **MAPI_LOGOFF_SHARED** flag to [IMAPISession::Logoff](imapisession-logoff.md). Passing **MAPI_LOGOFF_SHARED** causes all MAPI clients to shut down, and unexpected behavior occurs.
## <a name="see-also"></a>See also
[IMAPISession : IUnknown](imapisessioniunknown.md)
[MFCMAPI as a Code Sample](mfcmapi-as-a-code-sample.md)
[Ending a MAPI Session](ending-a-mapi-session.md)
---
title: Analytic Functions (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 07/24/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.component: t-sql|functions
ms.reviewer: ''
ms.suite: sql
ms.technology: t-sql
ms.tgt_pltfrm: ''
ms.topic: language-reference
dev_langs:
- TSQL
ms.assetid: 60fbff84-673b-48ea-9254-6ecdad20e7fe
caps.latest.revision: 5
author: edmacauley
ms.author: edmaca
manager: craigg
monikerRange: '>= aps-pdw-2016 || = azuresqldb-current || = azure-sqldw-latest || >= sql-server-2016 || = sqlallproducts-allversions'
ms.openlocfilehash: 9f58b13ca5f314dbb163fb7ffdc03938eb682d45
ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/03/2018
---
# <a name="analytic-functions-transact-sql"></a>Analytic Functions (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2012-all-md](../../includes/tsql-appliesto-ss2012-all-md.md)]
SQL Server supports the following analytic functions:
|||
|-|-|
|[CUME_DIST (Transact-SQL)](../../t-sql/functions/cume-dist-transact-sql.md)|[LEAD (Transact-SQL)](../../t-sql/functions/lead-transact-sql.md)|
|[FIRST_VALUE (Transact-SQL)](../../t-sql/functions/first-value-transact-sql.md)|[PERCENTILE_CONT (Transact-SQL)](../../t-sql/functions/percentile-cont-transact-sql.md)|
|[LAG (Transact-SQL)](../../t-sql/functions/lag-transact-sql.md)|[PERCENTILE_DISC (Transact-SQL)](../../t-sql/functions/percentile-disc-transact-sql.md)|
|[LAST_VALUE (Transact-SQL)](../../t-sql/functions/last-value-transact-sql.md)|[PERCENT_RANK (Transact-SQL)](../../t-sql/functions/percent-rank-transact-sql.md)|
Analytic functions compute an aggregate value based on a group of rows. Unlike aggregate functions, however, analytic functions can return multiple rows for each group. You can use analytic functions to compute moving averages, running totals, percentages, or top-N results within a group.
## <a name="see-also"></a>See Also
[OVER Clause (Transact-SQL)](../../t-sql/queries/select-over-clause-transact-sql.md)
###### tags: `Redux`
# [week 23] A First Look at Redux: State Management Is a Discipline
> These are my study notes for the course [[FE303] React 的好夥伴:Redux](https://lidemy.com/p/fe303-react-redux). If you spot any mistakes, corrections are welcome!
- Recommended reading: the [official Redux documentation](https://redux.js.org/introduction/getting-started)
## What is Redux?
According to the [Redux website](https://redux.js.org/), Redux is a "**state management** tool for JavaScript applications":
> A Predictable State Container for JS Apps
Although Redux is most often paired with React, it is really a front-end "architectural pattern": simply put, a package that implements a "state management mechanism", and the pattern can be applied in any programming language.
## Before Getting to Know Redux
Before digging into Redux itself, it helps to look at the history it evolved from.
Flux is the application architecture Facebook proposed when it introduced React, aiming to solve the problems MVC ran into on large commercial websites, such as managing the mapping between UI views and their data.
### "State management" is a discipline of its own
From past to present, different frameworks have handled "state management" in different ways. For example:
- jQuery: data and view are kept separate, with the focus on the "view"
- Vue and Angular: data and view are "two-way bound"
Taking [Vue.js](https://vuejs.org/v2/guide/) as an example, two-way binding is done through the `v-model` syntax:
```htmlmixed=
<!-- View -->
<div id="app-6">
<p>{{ message }}</p>
<input v-model="message">
</div>
```
```javascript=
// Data
var app6 = new Vue({
el: '#app-6',
data: {
message: 'Hello Vue!'
}
})
```
- React: you only manage the data, and the view is rendered from "state"
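The React bullet above amounts to "the UI is a function of state". A tiny framework-free sketch of that idea follows; `render` here is a hypothetical helper for illustration, not React's API:

```javascript
// Minimal sketch of "UI is a function of state"; no framework involved.
// render() is a hypothetical helper, not part of React.
function render(state) {
  return `<p>count: ${state.count}</p>`;
}

// "Re-rendering" is just calling the function again with new state.
console.log(render({ count: 0 })); // <p>count: 0</p>
console.log(render({ count: 1 })); // <p>count: 1</p>
```

React's real renderer diffs its output instead of rebuilding strings, but the mental model is the same.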
### A quick look at Flux
According to the [Flux](https://facebook.github.io/Flux/) website:
> Application architecture for building user interfaces
In a traditional MVC architecture, the relationships between Models and Views can become tangled:

(Image source: [Facebook's introduction video](https://www.youtube.com/watch?v=nYkdrAPrdcw))
Facebook proposed React and Flux precisely to solve the problems that appear when an app's architecture grows and its features become complex. Take the notification badge of Facebook Messenger: many parts of the code may manipulate the same Model, making the state overly complex and hard to trace afterwards.
The Flux architecture flows as follows: to change the Store (data) or the View, an Action (command) must be sent through the Dispatcher, which yields a unidirectional data flow:

For a small project this architecture is overkill, and modifying the Store to render the View directly is enough; but in a large project, this kind of "centralized" management makes complex state much easier to handle.
## A quick look at Redux
Below is a diagram of the data flow in Redux:

React actually ships with a built-in Hook that works much like Redux: [useReducer](https://reactjs.org/docs/hooks-reference.html#usereducer), which can likewise be used to manage complex state.
### React Hook: useReducer
useReducer accepts three arguments:
```javascript=
const [state, dispatch] = useReducer(reducer, initialArg, init);
```
Here is the example from the official docs:
```javascript=
// The initial state is count: 0
const initialState = {count: 0};

// The reducer returns the new state
function reducer(state, action) {
  switch (action.type) {
    // Return a new state object that replaces the old one
    case 'increment':
      return {count: state.count + 1};
    case 'decrement':
      return {count: state.count - 1};
    // Throw an Error for an unexpected action
    default:
      throw new Error();
  }
}

function Counter() {
  const [state, dispatch] = useReducer(reducer, initialState);
  return (
    <>
      Count: {state.count}
      {/* dispatch sends a command; the object {type: 'decrement'} represents an action */}
      <button onClick={() => dispatch({type: 'decrement'})}>-</button>
      <button onClick={() => dispatch({type: 'increment'})}>+</button>
    </>
  );
}
```
- reducer is a function that takes two arguments: the current state and the action to perform
- initialState is the initial state
- calling useReducer returns two values, state and dispatch, which correspond to Redux's "state inside the Store" and "using dispatch to tell the Store what to do"
## Redux in Practice
From the introduction above, we can see that React and Redux have no real dependency on each other; both can manage state. The difference is where the state lives:
- React: keeps state inside the component
- Redux: keeps state inside the Store, which is a JavaScript object
### Installing the package
First, install Redux by following the [official docs](https://redux.js.org/introduction/getting-started):
1. Initialize an npm project
```
$ npm init
```
2. Install redux
```
$ npm install redux
```
3. Create app.js and use require to load the package
> The official docs use import; note that older versions of node.js do not support it, so require is used here.
```javascript=
const { createStore } = require("redux");
```
### Creating a Store to hold state
The following code creates a Redux Store:
```javascript=
const { createStore } = require("redux");

const initialState = {
  value: 0,
};

// The reducer decides how the state changes
function counterReducer(state = initialState, action) {
  return state;
}

// Put the reducer into the store
let store = createStore(counterReducer);
console.log(store);
```
Run `node app.js` in the terminal, and you can see that the store is in fact an object:

If you instead call the object's `getState()` function, it shows the current state, which is initialState:
```javascript=
console.log(store.getState());
// { value: 0 }
```
### Telling the store what to do with dispatch
Next, use store.dispatch() to specify what should happen. It takes an object as its argument, conventionally written with a `type: '...'` field:
```javascript=
const { createStore } = require("redux");
const initialState = {
value: 0,
};
function counterReducer(state = initialState, action) {
console.log("receive action", action);
return state;
}
let store = createStore(counterReducer);
store.dispatch({
type: 'plus'
})
console.log(store.getState());
```
The result:

- `{ type: '@@redux/INIT0.1.0.m.r.p' }`: a dispatch that redux creates automatically during initialization
- `{ type: 'plus' }`: the dispatched action, printed by the reducer
You can also verify this with `setTimeout()`:
```javascript=
// Delay for 1 second
setTimeout(() => {
  store.dispatch({
    type: "plus",
  });
}, 1000);
```
The result: the reducer prints the incoming action only after one second:

### Changing state with dispatch
Rewritten as follows, a dispatched action now changes the state:
```javascript=
function counterReducer(state = initialState, action) {
  console.log("receive action", action);
  if (action.type === "plus") {
    return {
      value: state.value + 1, // return a new state
    };
  }
  return state; // return the original state
}

let store = createStore(counterReducer);
console.log("first state", store.getState());
// Dispatch an action to change the state
store.dispatch({
  type: "plus",
});
console.log("second state", store.getState());
```
The printed output:

### Conditionals inside the Reducer
- if...else: becomes hard to manage as the number of types grows
```javascript=
function counterReducer(state = initialState, action) {
if (action.type === "plus") {
return {
value: state.value + 1,
};
} else if (action.type === "minus") {
return {
value: state.value - 1,
};
}
return state;
}
```
- switch: the recommended style
```javascript=
function counterReducer(state = initialState, action) {
  switch (action.type) {
    case "plus": {
      return {
        value: state.value + 1,
      };
    }
    case "minus": {
      return {
        value: state.value - 1,
      };
    }
    // For an unexpected type, just return the current state
    default: {
      return state;
    }
  }
}
```
### store.subscribe(): run a callback whenever the store changes
The store also provides something similar to addEventListener, namely subscribe(), which takes a function as its argument:
```javascript=
// Runs whenever the store changes
store.subscribe(() => {
  console.log("change!", store.getState());
});
```
Rewriting the example above again:
```javascript=
let store = createStore(counterReducer);
store.subscribe(() => {
console.log("change!", store.getState());
});
store.dispatch({
type: "plus",
});
```
The result:

## Building a Simple Todo List
Let's implement the add and delete features:
### Adding items
Much like React's useState, the new state returned by the reducer directly replaces the old one, so you must spread `...state` to keep the parts of the original state you are not changing:
```javascript=
const { createStore } = require("redux");
let todoId = 0;
const initialState = {
email: "123@123",
todos: [],
};
function counterReducer(state = initialState, action) {
console.log("receive action", action);
switch (action.type) {
case "add_todo": {
return {
        // keep the rest of the original state
...state,
todos: [
...state.todos,
{
id: todoId++,
name: action.payload.name,
},
],
};
}
default: {
return state;
}
}
}
let store = createStore(counterReducer);
store.subscribe(() => {
console.log("change!", store.getState());
});
store.dispatch({
type: "add_todo",
payload: {
name: "todo0",
},
});
store.dispatch({
type: "add_todo",
payload: {
name: "todo1",
},
});
```
### Deleting items
Add a delete_todo dispatch:
```javascript=
store.dispatch({
type: "delete_todo",
payload: {
id: 0,
},
});
```
Then add a matching case to the reducer with the action to perform:
```javascript=
case "delete_todo": {
return {
...state,
todos: state.todos.filter((todo) => todo.id !== action.payload.id),
};
}
// ...
```
The result:

### A benefit of reducers: easy testing
A benefit of separating things this way is that it is easy to write tests for the reducer: you can verify the logic by comparing results, as in the following example:
```javascript=
expect(
  counterReducer(initialState, {
    type: "add_todo",
    payload: {
      name: "123",
    },
  })
).toEqual({ // compare against the expected result
  todos: [{ name: "123" }],
});
```
## Polish: Avoiding Mistakes During Development
As the program grows more complex, "action constants" and "action creators" help manage things and reduce mistakes during development:
### Action constants: manage Action Types centrally in an object
In the examples above we used raw "strings" for the type. The downside is that a typo or other mistake is hard to debug, so we manage the types centrally in an ActionTypes object:
```javascript=
// action constants
const ActionTypes = {
ADD_TODO: "add_todo",
DELETE_TODO: "delete_todo",
};
```
Replace the original raw strings with the ActionTypes object:
```javascript=
function counterReducer(state = initialState, action) {
switch (action.type) {
case ActionTypes.ADD_TODO: {
return {
...state,
todos: [
...state.todos,
{
id: todoId++,
name: action.payload.name,
},
],
};
}
case ActionTypes.DELETE_TODO: {
return {
...state,
todos: state.todos.filter((todo) => todo.id !== action.payload.id),
};
}
default: {
return state;
}
}
}
store.dispatch({
type: ActionTypes.ADD_TODO,
payload: {
name: "todo1",
},
});
store.dispatch({
type: ActionTypes.DELETE_TODO,
payload: {
id: 0,
},
});
```
### Action creators: build actions with functions
```javascript=
function addTodo(name) {
return {
type: ActionTypes.ADD_TODO,
payload: {
name,
},
};
}
function deleteTodo(name) {
return {
type: ActionTypes.DELETE_TODO,
payload: {
id: 0,
},
};
}
store.dispatch(addTodo("todo0"));
store.dispatch(addTodo("todo1"));
store.dispatch(addTodo("todo2"));
store.dispatch(deleteTodo(1));
```
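Since action creators are plain functions that just return objects, they can also be sanity-checked on their own (a sketch reusing the `addTodo` shape defined above):

```javascript
// Same shape as the addTodo creator defined above.
function addTodo(name) {
  return { type: "add_todo", payload: { name } };
}

const action = addTodo("todo0");
console.log(action.type);         // add_todo
console.log(action.payload.name); // todo0
```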
---
title: Download and install or reinstall Office 365 or Office 2016 on a PC or Mac
ms.author: pebaum
author: pebaum
ms.date: 04/21/2020
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.assetid: 8d7abd5a-5004-4d16-aad9-8083df213ea3
ms.openlocfilehash: afc82137854e6fb6cdd4cdefbc0c0f4000435a1b34891ddf2a029dcff2ceffa8
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: nb-NO
ms.lasthandoff: 08/05/2021
ms.locfileid: "54004667"
---
# <a name="download-and-install-or-reinstall-office-365-or-office-2016-on-a-pc-or-mac"></a>Download and install or reinstall Office 365 or Office 2016 on a PC or Mac
To download and install the Office products [](https://portal.office.com/OLS/MySoftware.aspx) that are included in your subscription, go to \> **My Office** and click **Install**.
For detailed instructions, see [Download and install or reinstall Office 365](https://support.office.com/article/4414eaaf-0478-48be-9c42-23adc471665816658?wt.mc_id=O365_Admin_Alch).
If you need to install Office offline, see [Use the Office 2016 offline installer](https://support.office.com/article/f0a85fe7-118f-41cb-a791-d59cef96ad1c?wt.mc_id=O365_Admin_Alch#OfficePlans=Office_for_business).
---
title: "GET Bucket Policy"
---
This API retrieves the access policy settings of a Bucket. Yiqiyun Object Storage treats the access policy as a subresource of the Bucket, so only the Bucket owner can call this API.
## Request Syntax
```http
GET /?policy HTTP/1.1
Host: <bucket-name>.<zone-id>.is.yiqiyun.com
Date: <date>
Authorization: <authorization-string>
```
## Request Parameters
None.
## Request Headers
This API uses only the common request headers. For more information about the common request headers, see [Common Request Headers](/storage/object-storage/api/common_header/#请求头字段-request-header).
## Request Body
None.
## Response Headers
This API uses only the common response headers. For more information about the common response headers, see [Common Response Headers](/storage/object-storage/api/common_header/#响应头字段-response-header).
## Response Body
On a successful call, this API returns a JSON body whose fields are described below:
| Name | Type | Description |
| - | - | - |
| statement | Dict | The access policy of the Bucket |
## Error Codes
| Error Code | Description | HTTP Status Code |
| --- | --- | --- |
| OK | The Bucket's access policy settings were retrieved successfully | 200 |
For other error codes, see the [error code list](/storage/object-storage/api/error_code/#错误码列表).
## Examples
### Request Example
```http
GET /?policy HTTP/1.1
Host: mybucket.jn1.is.yiqiyun.com
Date: Sun, 16 Aug 2015 09:05:00 GMT
Authorization: authorization string
```
### Response Example
```http
HTTP/1.1 200 OK
Server: yiqiyun
Date: Sun, 16 Aug 2015 09:05:02 GMT
Content-Length: 300
Connection: close
x-qs-request-id: aa08cf7a43f611e5886952542e6ce14b
{
"statement": [
{
"id": "allow everyone to get and create objects",
"user": "*",
"action": ["get_object", "create_object"],
"effect": "allow",
"resource": ["mybucket/*"],
"condition":{
"string_like": {
"Referer": ["*.example1.com", "*.example2.com"]
}
}
},
{
"id": "allow everyone to head bucket",
"user": "*",
"action": "head_bucket",
"effect": "allow",
"condition":{
"string_like": {
"Referer": ["*.example3.com", "*.example4.com"]
},
"string_not_like": {
"Referer": ["*.service.example3.com"]
}
}
}
]
}
```
## SDK
The SDKs for this API in each supported language are listed in the [SDK documentation](/storage/object-storage/sdk/).
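For illustration only, here is a sketch of assembling the `GET /?policy` request shown above in JavaScript. `buildGetPolicyRequest` is a hypothetical helper, and a real request additionally needs an authorization string signed according to the service's auth scheme (normally produced by an SDK):

```javascript
// Build (but do not send) the pieces of the GET ?policy request.
// The authorization value is a placeholder; an SDK would compute it.
function buildGetPolicyRequest(bucket, zone, authorization) {
  return {
    method: "GET",
    path: "/?policy",
    host: `${bucket}.${zone}.is.yiqiyun.com`,
    headers: {
      Date: new Date().toUTCString(),
      Authorization: authorization,
    },
  };
}

const req = buildGetPolicyRequest("mybucket", "jn1", "<authorization-string>");
console.log(`${req.method} ${req.path} HTTP/1.1`); // GET /?policy HTTP/1.1
console.log(`Host: ${req.host}`);                  // Host: mybucket.jn1.is.yiqiyun.com
```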