hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d031aeb2e2d7d8926860fa4411149088e1f05718 | 1,844 | md | Markdown | source/_posts/computer-organization-adder.md | zhongmingmao/zhongmingmao.github.io | 5c7be43763714c7d31e1365a842d95a53b880081 | [
"MIT"
] | 7 | 2018-10-07T12:52:54.000Z | 2021-03-19T05:00:40.000Z | source/_posts/computer-organization-adder.md | zhongmingmao/zhongmingmao.github.io | 5c7be43763714c7d31e1365a842d95a53b880081 | [
"MIT"
] | 558 | 2017-04-12T06:47:43.000Z | 2022-03-31T13:12:49.000Z | source/_posts/computer-organization-adder.md | zhongmingmao/zhongmingmao.github.io | 5c7be43763714c7d31e1365a842d95a53b880081 | [
"MIT"
] | 5 | 2017-04-21T02:15:11.000Z | 2020-09-09T10:19:34.000Z | ---
title: 计算机组成 -- 加法器
mathjax: false
date: 2020-01-13 13:02:26
categories:
- Computer Basics
- Computer Organization
tags:
- Computer Basics
- Computer Organization
---
## 基本门电路
<img src="https://computer-composition-1253868755.cos.ap-guangzhou.myqcloud.com/computer-organization-adder-gate-circuit.jpg" width=1000/>
1. 基本门电路:**输入都是两个单独的bit,输出是一个单独的bit**
2. 如果要对2个8bit的数字,计算**与或非**的简单逻辑(无进位),只需要连续摆放8个开关,来代表一个8bit数字
3. 这样的两组开关,从左到右,上下单个的位开关之间,都统一用『**与门**』或者『**或门**』连起来
- 就能实现两个8bit数的**AND**运算或者**OR**运算
<!-- more -->
## 异或门 + 半加器
### 一bit加法
1. 个位
- 输入的两位为`00`和`11`,对应的输出为`0`
- 输入的两位为`10`和`01`,对应的输出为`1`
- 上面两种关系都是**异或门**(XOR)的功能
- **异或门是一个最简单的整数加法,所需要使用的基本门电路**
2. 进位
- 输入的两位为`11`时,需要向**更左侧**的一位进行进位,对应一个**与门**
3. 通过一个**异或门**计算出**个位**,通过一个**与门**计算出**是否进位**
- 把这两个门电路**打包**,叫作**半加器**(Half Adder)
<img src="https://computer-composition-1253868755.cos.ap-guangzhou.myqcloud.com/computer-organization-adder-half-adder.jpg" width=1000/>
## 全加器
1. **半加器只能解决一bit加法的问题**,不能解决2bit或以上的加法(因为有**进位**信号)
2. 二进制加法的竖式,从右往左,第二列称为**二位**,第三列称为**四位**,第四列称为**八位**
3. 全加器:**两个半加器**和**一个或门**
- 把两个半加器的**进位输出**,作为一个**或门**的输入
- **只要两次加法中任何一次需要进位**,那么在二位上,就需要向左侧的四位进一位
- 一共只有三个bit相加,即使都是1,也**最多只会进一位**
<img src="https://computer-composition-1253868755.cos.ap-guangzhou.myqcloud.com/computer-organization-adder-full-adder.jpg" width=1000/>
### 8bit加法
1. 把8个全加器**串联**,个位全加器的进位信号作为二位全加器的输入信号,二位全加器的进位信号作为四位全加器的输入信号
2. 个位全加器只需要用到**一个半加器**,或者让**全加器的进位输入始终是0**,因为个位没有来自更右侧的进位
3. 最左侧的进位信号,表示的并不是再进一位,而是表示**加法是否溢出**
- 该溢出信号可以输出到硬件中的其它标志位,让计算机知道计算结果是否溢出
<img src="https://computer-composition-1253868755.cos.ap-guangzhou.myqcloud.com/computer-organization-adder-full-adder-8bit.jpg" width=1000/>
## 分层思想
**门电路 -> 半加器 -> 全加器 -> 加法器 -> ALU**
## 参考资料
[深入浅出计算机组成原理](https://time.geekbang.org/column/intro/100026001) | 30.733333 | 141 | 0.704447 | yue_Hant | 0.526054 |
d034312d044828db5bbc40ba52e7bebc60b8d4f7 | 7,721 | md | Markdown | README.md | randeelayosa/velle2 | 5e30e5cca466a7269c831b727043dcd601e35f16 | [
"MIT"
] | null | null | null | README.md | randeelayosa/velle2 | 5e30e5cca466a7269c831b727043dcd601e35f16 | [
"MIT"
] | null | null | null | README.md | randeelayosa/velle2 | 5e30e5cca466a7269c831b727043dcd601e35f16 | [
"MIT"
] | null | null | null | # Velle
#### _Front-End Development, Independent Capstone Project, 5.3.19_
#### By _**Randee Layosa**_

[](https://opensource.org/licenses/MIT)
## Description
This application is an Independent Capstone project, culminating my knowledge and experience gained during my 27 weeks time at [Epicodus](https://www.epicodus.com/) in the Front-End Development track. It demonstrates a knowledge of React.js and an aptitude for planning, designing, and developing a functional application that meets the UX/UI industry standards of today.
_<p align="center">"Velle" - Latin: to wish, want, be willing</p>_
_Velle is a pseudo company/organization that operates as a middle man between various public outreach supporters, and individuals who want to help their fellow neighbors by donating their clothes. Recently, I have found myself in a predicament that I am sure others share as well. Many of us have unwanted or unneeded clothes that are trendy brand named or were bought at a modest but higher price. People may not want to hand these items to a store as they offer extremely low buying prices in return. And, knowing how much they had originally bought these items for, these people may also not want to hand the clothes over to Goodwill for free just to be resold for 100% profit to someone else. I and others, I'm sure, would much rather place our donations in the hands of reputable outreach organizations and programs who will in turn place the items directly in the hands of the individual in need. Velle stands as the organizer, liaison, and middle man that would make an operation like this possible. Benevolence._
## Preview
<p align="center">
<img src="src/assets/img/###.png" width="500" height="381" title="Preview screenshot of the application">
</p>
## Technologies Used
* _React_
* _Redux_
* _Webpack_
* _eslint_
* _SASS_
* _JSX_
## Setup/Installation Requirements
#### To open and view this project file:
1. Clone this GitHub repository https://github.com/randeelayosa/velle.git to your Desktop.
* Install git onto your computer if it isn't already.
* Open your Terminal, and enter the following commands:
```
cd desktop
git clone https://github.com/randeelayosa/velle.git
cd velle
atom .
npm install
npm run start
```
* _You can use another text editor if Atom is not your preferred program._
* _Make sure a "node_modules" folder is created in your project file. If it hasn't, run `npm install` again._
* _You can then go to the link in step 1 above, or continue on to the following instructions to run the server._
2. Go to http://localhost:8080/ in the browser of your choice. _Note: The app will automatically reload if you edit any of the code in the source files._
## Planning
| **Configuration/Dependencies** | **Use** |
| :------------- | :------------- |
| Babel | JS transpiler |
| CSS-Loader, Style-Loader, Sass-Loader, Node-Sass | styling |
| ESLint | JS linter, checks code for errors |
| File-Loader, URL-Loader | image loader |
| HTML-Webpack-Plugin | loads HTML file |
| Jasmine, Karma | for testing code |
| React | JS Library |
| Webpack | bundles/compiles code |
### User Stories
**_Target Users_**
* General public
* Charitable organizations
* Outreach programs
**_User Stories_**
* As a community member, I want the guarantee that my clothes are being handed directly to a person in need.
* As a community member, I want to be able to see a list of items in need.
* As a charity/outreach, I want to be able to have and maintain a list of people in need, but have the names anonymous to the public.
* As a charity/outreach, I want to be able to upload details about the person in need, their personal information.
* As a charity/outreach, I want to be able to upload photos of items in our possession that we received but do not have a need for, yet can be handed off to another organization to use.
### User Personas
* **Jill**
* **Purpose**: Has unwanted clothes and does not want to donate it to Goodwill, or go the resell route.
* **Pain Points**: Unsure of where to look or how to go about giving away her clothes.
* **How we can serve**: Velle's main function serves the very basis of Jill's needs. It will make her search and eventual donation a smooth and easy process.
* **YMCA**
* **Purpose**: Has a number of club members that need clothing and cannot afford to do so.
* **Pain Points**: There's no infrastructure in place to constantly take in donations or campaign for them in a way that follows guidelines and keeps the club members' anonymity.
* **How we can serve**: Velle allows organizations to create and keep a private login account to upload details of member information. The information will then show on the main site page without names shown, keeping client anonymity.
* **Boys & Girls Club**
* **Purpose**: Has a particular item or size in need that hasn't been donated to them yet.
* **Pain Points**: Has no control over the items they will receive in donations from the community.
* **How we can serve**: With the private logins for organizations, they can also pass along items amongst themselves, much like a library book share system.
### Components and Routes Layout
<p align="center">
<img src="src/assets/img/diagram.jpg" width="500" height="385" title="Velle Component and Route Structure">
</p>
### Sketches
<p align="center">
<img src="src/assets/img/sketch.jpeg" width="600" height="463" title="Sketch of page design layout">
</p>
### Wireframes
<p align="center">Home</p>
<p align="center">
<img src="src/assets/img/wireframe-home.jpg" width="300" height="240" title="Wireframe of home page design mockup">
<img src="src/assets/img/wireframe-home-2.jpg" width="300" height="240" title="Wireframe of home page design mockup">
</p>
<p align="center">About</p>
<p align="center">
<img src="src/assets/img/wireframe-about.jpg" width="300" height="240" title="Wireframe of home page design mockup">
</p>
### Prototyping
<p align="center">
<img src="####" title="Prototype example of user navigating through the app">
</p>
### Features Built and To Be Completed
- [ ] Navigation bar.
- [ ] List of support agencies.
- [ ] Login feature to access private information.
- [ ] Agencies can maintain their Wish List.
- [ ] Search feature to find drop off locations nearby.
- [ ] Agencies can upload images and information of items that they received but have not given away and are willing to share/send to other agencies.
- [ ] Sharing feature that allows agencies to view item lists of other agencies to see if they have a specific requested item.
- [ ] Address label generator feature.
- [ ] About Page
- [ ] Contact Page
- [ ] Page routing
- [ ] Styling
### Commit History/Work Activity Log
_Friday, May 3 - Planning_
* 8:00 - Clone down template repo and begin readme
* 9:00 - Start planning out and creating sketches
* 10:00 - Begin wireframing
* 1:00 - Update/change out readme to reflect appropriate information.
* 2:00 - Add component tree image to readme.
* 3:00 - Research more about outreach programs in Portland, and if businesses that offer similar services to my app the exist.
* 4:00 - Done for the day.
_Friday, May 10 - Wireframing and Static Build_
* 8:00 - Wireframing
* 9:00 - Wireframing
* 10:00 - Prototyping
* 11:00 - User Stories
* 12:00 - Change file structure, create Nav component
* 1:00 - Install redux-logger, convert things over to redux, troubleshoot
* 4:30 - Done for the day.
### Legal
*This software is licensed under MIT license.*
Copyright (c) 2019 **_Randee Layosa_**
| 48.559748 | 1,020 | 0.736563 | eng_Latn | 0.994951 |
d03477587575949acbf75b81221b73e1301c5d33 | 15,187 | md | Markdown | docs/xamarin-forms/data-cloud/cosmosdb/consuming.md | lhaussknecht/xamarin-docs.de-de | e83073ae4c497400ae37930d6f6f0d374bbd9049 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xamarin-forms/data-cloud/cosmosdb/consuming.md | lhaussknecht/xamarin-docs.de-de | e83073ae4c497400ae37930d6f6f0d374bbd9049 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/xamarin-forms/data-cloud/cosmosdb/consuming.md | lhaussknecht/xamarin-docs.de-de | e83073ae4c497400ae37930d6f6f0d374bbd9049 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Nutzen eine Azure Cosmos DB-Dokumentdatenbank
description: In diesem Artikel wird erläutert, wie Sie die Azure Cosmos DB .NET Standard-Clientbibliothek verwenden, um ein Azure Cosmos DB-dokumentdatenbank in einer Xamarin.Forms-Anwendung zu integrieren.
ms.prod: xamarin
ms.assetid: 7C0605D9-9B7F-4002-9B60-2B5DAA3EA30C
ms.technology: xamarin-forms
ms.custom: xamu-video
author: davidbritch
ms.author: dabritch
ms.date: 06/16/2017
ms.openlocfilehash: 79547277b00ae1f1d9b035d5fb08685562cefc79
ms.sourcegitcommit: be6f6a8f77679bb9675077ed25b5d2c753580b74
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 12/07/2018
ms.locfileid: "53052581"
---
# <a name="consuming-an-azure-cosmos-db-document-database"></a>Nutzen eine Azure Cosmos DB-Dokumentdatenbank
[ Herunterladen des Beispiels](https://developer.xamarin.com/samples/xamarin-forms/WebServices/TodoDocumentDB/)
_Eine Azure Cosmos DB-dokumentdatenbank ist eine NoSQL-Datenbank, die Zugriff mit geringer Latenz für JSON-Dokumenten, bietet einen schnelle, hoch verfügbare, skalierbare Datenbankdienst für Anwendungen, die eine nahtlose Skalierung und globale Replikation erfordern bereitstellt. In diesem Artikel wird erläutert, wie Sie die Azure Cosmos DB .NET Standard-Clientbibliothek verwenden, um ein Azure Cosmos DB-dokumentdatenbank in einer Xamarin.Forms-Anwendung zu integrieren._
> [!VIDEO https://youtube.com/embed/BoVH12igmbg]
**Microsoft Azure Cosmos DB, [Xamarin University](https://university.xamarin.com/)**
Ein Azure Cosmos DB-Dokument-Datenbankkonto kann über ein Azure-Abonnement bereitgestellt werden. Jedes Datenbankkonto kann keine oder mehrere Datenbanken haben. Eine dokumentdatenbank in Azure Cosmos DB ist ein logischer Container für Dokumentsammlungen und Benutzer.
Eine Azure Cosmos DB-dokumentdatenbank kann NULL oder mehr Dokumentsammlungen enthalten. Jedes Dokument können eine andere Leistungsstufe auswähle, sodass mehr Durchsatz für Sammlungen angegeben werden und weniger Durchsatz für selten genutzte Sammlungen haben.
Jede Dokumentsammlung besteht aus null oder mehr JSON-Dokumente. Dokumente in einer Sammlung sind schemafreie, und Sie müssen also nicht die gleiche Struktur oder Felder freigeben. Eine Dokumentsammlung Dokumente hinzugefügt werden, Cosmos DB indiziert automatisch und abgefragt werden, verfügbar.
Für Entwicklungszwecke kann auch eine dokumentdatenbank, über einen Emulator genutzt werden. Mit dem Emulator können können lokal, ohne ein Azure-Abonnement erstellen oder keine Kosten anfallen Anwendungen entwickelt und getestet werden. Weitere Informationen zum Emulator finden Sie unter [lokale Entwicklung mit Azure Cosmos DB-Emulator](/azure/cosmos-db/local-emulator/).
In diesem Artikel und die zugehörige beispielanwendung zeigt eine Todolist-Anwendung, in dem die Aufgaben in einer Azure Cosmos DB-dokumentdatenbank gespeichert sind. Weitere Informationen zu der beispielanwendung, finden Sie unter [Grundlegendes zum Beispiel](~/xamarin-forms/data-cloud/walkthrough.md).
Weitere Informationen zu Azure Cosmos DB finden Sie unter den [Dokumentation für Azure Cosmos DB](/azure/cosmos-db/).
## <a name="setup"></a>Setup
Der Prozess für die Integration einer Azure Cosmos DB-dokumentdatenbank in einer Xamarin.Forms-Anwendung lautet wie folgt aus:
1. Erstellen Sie ein Cosmos DB-Konto an. Weitere Informationen finden Sie unter [erstellen Sie ein Azure Cosmos DB-Konto](/azure/cosmos-db/sql-api-dotnetcore-get-started#step-1-create-an-azure-cosmos-db-account).
1. Hinzufügen der [Azure Cosmos DB .NET Standard-Clientbibliothek](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core) NuGet-Paket auf den Platform-Projekten in der Xamarin.Forms-Lösung.
1. Hinzufügen `using` Direktiven für die `Microsoft.Azure.Documents`, `Microsoft.Azure.Documents.Client`, und `Microsoft.Azure.Documents.Linq` Namespaces, Klassen, die Cosmos DB-Konto zugegriffen werden.
Nach der Durchführung dieser Schritte kann die Azure Cosmos DB .NET Standard-Clientbibliothek zum Konfigurieren und Ausführen von Abfragen der dokumentdatenbank verwendet werden.
> [!NOTE]
> Die Azure Cosmos DB .NET Standard-Clientbibliothek kann nur in Projekten-Plattform, und nicht in ein Projekt für die Portable Klassenbibliothek (PCL) installiert werden. Aus diesem Grund ist die beispielanwendung eine Shared Access Projekt (SAP) um codeverdoppelungen zu vermeiden. Allerdings die `DependencyService` Klasse kann in ein PCL-Projekt verwendet werden, zum Aufrufen von Azure Cosmos DB .NET Standard Client Library-Code in plattformspezifischen Projekten enthalten sind.
## <a name="consuming-the-azure-cosmos-db-account"></a>Nutzen Azure Cosmos DB-Kontos
Die `DocumentClient` Typ kapselt, der Endpunkt, Anmeldeinformationen und Verbindungsrichtlinie, die für den Zugriff auf Azure Cosmos DB-Konto, und dient zum Konfigurieren und Ausführen von Anforderungen für das Konto. Im folgenden Codebeispiel wird veranschaulicht, wie Sie eine Instanz dieser Klasse zu erstellen:
```csharp
DocumentClient client = new DocumentClient(new Uri(Constants.EndpointUri), Constants.PrimaryKey);
```
Die Cosmos DB-Uri und Primärschlüssel müssen angegeben werden, die `DocumentClient` Konstruktor. Diese können über das Azure-Portal abgerufen werden. Weitere Informationen finden Sie unter [Herstellen einer Verbindung mit einer Azure Cosmos DB-Datenbankkonto](/azure/cosmos-db/sql-api-dotnetcore-get-started#Connect).
### <a name="creating-a-database"></a>Erstellen einer Datenbank
Eine dokumentdatenbank ist ein logischer Container für Dokumentsammlungen und Benutzer, und kann erstellten im Azure-Portal oder programmgesteuert mithilfe der `DocumentClient.CreateDatabaseIfNotExistsAsync` Methode:
```csharp
public async Task CreateDatabase(string databaseName)
{
...
await client.CreateDatabaseIfNotExistsAsync(new Database
{
Id = databaseName
});
...
}
```
Die `CreateDatabaseIfNotExistsAsync` Methode gibt eine `Database` Objekt als Argument, mit der `Database` -Objekt, mit den Datenbanknamen als seine `Id` Eigenschaft. Die `CreateDatabaseIfNotExistsAsync` Methode erstellt die Datenbank aus, falls es nicht vorhanden ist, oder gibt die Datenbank zurück, wenn sie bereits vorhanden ist. Die beispielanwendung ignoriert jedoch alle von zurückgegebenen Daten die `CreateDatabaseIfNotExistsAsync` Methode.
> [!NOTE]
> Die `CreateDatabaseIfNotExistsAsync` Methode gibt eine `Task<ResourceResponse<Database>>` Objekt und der Statuscode der Antwort kann überprüft werden, um zu bestimmen, ob eine Datenbank erstellt wurde, oder es wurde eine vorhandene Datenbank zurückgegeben.
### <a name="creating-a-document-collection"></a>Erstellen eine Dokumentsammlung
Eine Dokumentsammlung ist ein Container für JSON-Dokumente, und kann im Azure-Portal erstellt oder programmgesteuert mithilfe der `DocumentClient.CreateDocumentCollectionIfNotExistsAsync` Methode:
```csharp
public async Task CreateDocumentCollection(string databaseName, string collectionName)
{
...
// Create collection with 400 RU/s
await client.CreateDocumentCollectionIfNotExistsAsync(
UriFactory.CreateDatabaseUri(databaseName),
new DocumentCollection
{
Id = collectionName
},
new RequestOptions
{
OfferThroughput = 400
});
...
}
```
Die `CreateDocumentCollectionIfNotExistsAsync` Methode erfordert zwei obligatorische Argumente: ein Datenbankname angegeben als ein `Uri`, und ein `DocumentCollection` Objekt. Die `DocumentCollection` Objekt darstellt, eine Dokument-Auflistung, deren Name angegeben ist, mit, der `Id` Eigenschaft. Die `CreateDocumentCollectionIfNotExistsAsync` Methode erstellt die Dokumentsammlung an, falls es nicht vorhanden ist, oder gibt die Dokument-Auflistung zurück, wenn sie bereits vorhanden ist. Die beispielanwendung ignoriert jedoch alle von zurückgegebenen Daten die `CreateDocumentCollectionIfNotExistsAsync` Methode.
> [!NOTE]
> Die `CreateDocumentCollectionIfNotExistsAsync` Methode gibt eine `Task<ResourceResponse<DocumentCollection>>` Objekt und der Statuscode der Antwort kann überprüft werden, um zu bestimmen, ob ein Dokument erstellt wurde, oder eine vorhandenen Dokumentsammlung zurückgegeben wurde.
Optional die `CreateDocumentCollectionIfNotExistsAsync` -Methode angeben kann auch eine `RequestOptions` -Objekt, das Optionen kapselt, die für Anforderungen, die dem Cosmos DB-Konto angegeben werden können. Die `RequestOptions.OfferThroughput` Eigenschaft wird verwendet, um die Leistungsstufe der Dokumentsammlung zu definieren, und die Anwendung, in dem Beispiel in 400 anforderungseinheiten pro Sekunde festgelegt ist. Dieser Wert sollte erhöht oder verringert werden, je nachdem, ob die Auflistung häufig oder selten zugegriffen wird.
> [!IMPORTANT]
> Beachten Sie, dass die `CreateDocumentCollectionIfNotExistsAsync` Methode erstellt eine neue Sammlung mit einem reservierten Durchsatz, die Auswirkungen auf die Preise hat.
<a name="document_query" />
### <a name="retrieving-document-collection-documents"></a>Abrufen von Dokument Datenbanksammlungs-Dokumenten
Der Inhalt einer Auflistung Dokument können durch Erstellen und Ausführen einer Dokumentabfrage abgerufen werden. Eine Dokumentabfrage wird erstellt, mit der `DocumentClient.CreateDocumentQuery` Methode:
```csharp
public async Task<List<TodoItem>> GetTodoItemsAsync()
{
...
var query = client.CreateDocumentQuery<TodoItem>(collectionLink)
.AsDocumentQuery();
while (query.HasMoreResults)
{
Items.AddRange(await query.ExecuteNextAsync<TodoItem>());
}
...
}
```
Diese Abfrage asynchron Ruft alle Dokumente aus der angegebenen Auflistung ab und fügt die Dokumente in einem `List<TodoItem>` Auflistung für die Anzeige.
Die `CreateDocumentQuery<T>` Methode gibt eine `Uri` Argument, das der Auflistung darstellt, die für Dokumente abgefragt werden sollen. In diesem Beispiel die `collectionLink` Variable ist ein Feld auf Klassenebene, der angibt, die `Uri` , die die Dokumentsammlung zum Abrufen von Dokumenten aus darstellt:
```csharp
Uri collectionLink = UriFactory.CreateDocumentCollectionUri(Constants.DatabaseName, Constants.CollectionName);
```
Die `CreateDocumentQuery<T>` Methode erstellt eine Abfrage, die synchron ausgeführt wird, und gibt eine `IQueryable<T>` Objekt. Allerdings die `AsDocumentQuery` Methode konvertiert die `IQueryable<T>` -Objekt an eine `IDocumentQuery<T>` Objekt, das asynchron ausgeführt werden kann. Die asynchrone Abfrage wird ausgeführt, mit der `IDocumentQuery<T>.ExecuteNextAsync` -Methode, die Ruft die nächste Seite mit Ergebnissen aus der dokumentdatenbank, mit der `IDocumentQuery<T>.HasMoreResults` Eigenschaft, der angibt, ob zusätzliche Ergebnisse der Abfrage zurückgegeben werden.
Dokumente können werden gefiltert, serverseitige dazu eine `Where` Klausel in der Abfrage, die eine Filter-Prädikat für die Abfrage für die Dokumentsammlung gilt:
```csharp
var query = client.CreateDocumentQuery<TodoItem>(collectionLink)
.Where(f => f.Done != true)
.AsDocumentQuery();
```
Diese Abfrage ruft alle Dokumente aus der Auflistung zurück, deren `Done` Eigenschaft `false`.
<a name="inserting_document" />
### <a name="inserting-a-document-into-a-document-collection"></a>Einfügen von einem Dokument in einer Dokumentsammlung
Dokumente können in einer Dokumentsammlung mit eingefügt werden und werden benutzerdefinierte JSON-Inhalt der `DocumentClient.CreateDocumentAsync` Methode:
```csharp
public async Task SaveTodoItemAsync(TodoItem item, bool isNewItem = false)
{
...
await client.CreateDocumentAsync(collectionLink, item);
...
}
```
Die `CreateDocumentAsync` Methode gibt eine `Uri` Argument, das die Auflistung darstellt, das Dokument eingefügt werden soll, und ein `object` Argument, das das Dokument eingefügt werden soll darstellt.
### <a name="replacing-a-document-in-a-document-collection"></a>Ersetzen eines Dokuments in einer Dokumentsammlung
Dokumente können in einer Dokumentsammlung mit ersetzt werden, die `DocumentClient.ReplaceDocumentAsync` Methode:
```csharp
public async Task SaveTodoItemAsync(TodoItem item, bool isNewItem = false)
{
...
await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(Constants.DatabaseName, Constants.CollectionName, item.Id), item);
...
}
```
Die `ReplaceDocumentAsync` Methode gibt eine `Uri` Argument, das das Dokument in der Auflistung, die ersetzt werden soll darstellt, und ein `object` Argument, das aktualisierte Dokumentdaten darstellt.
<a name="deleting_document" />
### <a name="deleting-a-document-from-a-document-collection"></a>Löschen eines Dokuments in einer Dokumentsammlung
Ein Dokument kann gelöscht werden, aus einer Auflistung Dokument mit den `DocumentClient.DeleteDocumentAsync` Methode:
```csharp
public async Task DeleteTodoItemAsync(string id)
{
...
await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(Constants.DatabaseName, Constants.CollectionName, id));
...
}
```
Die `DeleteDocumentAsync` Methode gibt eine `Uri` Argument, das das Dokument in der Auflistung darstellt, die gelöscht werden soll.
### <a name="deleting-a-document-collection"></a>Löschen einer Dokumentsammlung
Eine Dokument-Auflistung kann gelöscht werden, aus einer Datenbank mit der `DocumentClient.DeleteDocumentCollectionAsync` Methode:
```csharp
await client.DeleteDocumentCollectionAsync(collectionLink);
```
Die `DeleteDocumentCollectionAsync` Methode gibt eine `Uri` Argument, das die Dokumentsammlung zu löschenden darstellt. Beachten Sie, dass das Aufrufen dieser Methode in der Auflistung gespeicherten Dokumente auch gelöscht werden.
### <a name="deleting-a-database"></a>Löschen einer Datenbank
Eine Datenbank kann gelöscht werden, aus einem Cosmos DB-Datenbankkonto mit der `DocumentClient.DeleteDatabaesAsync` Methode:
```csharp
await client.DeleteDatabaseAsync(UriFactory.CreateDatabaseUri(Constants.DatabaseName));
```
Die `DeleteDatabaseAsync` Methode gibt eine `Uri` Argument, das die zu löschende Datenbank darstellt. Beachten Sie, dass das Aufrufen dieser Methode auch die Dokumentsammlungen, die in der Datenbank gespeichert und in den dokumentauflistungen gespeicherten Dokumente gelöscht werden.
## <a name="summary"></a>Zusammenfassung
In diesem Artikel wurde erläutert, wie die Azure Cosmos DB .NET Standard-Clientbibliothek verwenden, um ein Azure Cosmos DB-dokumentdatenbank in einer Xamarin.Forms-Anwendung zu integrieren. Eine Azure Cosmos DB-dokumentdatenbank ist eine NoSQL-Datenbank, die Zugriff mit geringer Latenz für JSON-Dokumenten, bietet einen schnelle, hoch verfügbare, skalierbare Datenbankdienst für Anwendungen, die eine nahtlose Skalierung und globale Replikation erfordern bereitstellt.
## <a name="related-links"></a>Verwandte Links
- [TODO Azure Cosmos DB (Beispiel)](https://developer.xamarin.com/samples/xamarin-forms/WebServices/TodoDocumentDB/)
- [Dokumentation für Azure Cosmos DB](/azure/cosmos-db/)
- [Azure Cosmos DB .NET Standard-Clientbibliothek](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core)
- [Azure Cosmos DB-API](https://docs.microsoft.com/dotnet/api/overview/azure/cosmosdb/client?view=azure-dotnet)
| 64.080169 | 616 | 0.809179 | deu_Latn | 0.981157 |
d034ac18b3b1b426b6e94c31e6da51b73bd468e0 | 5,707 | md | Markdown | doc/report.md | mateuszz0000/CVE-Flow | fe8e8b0f389f4d4744c7a20a798411312452cb48 | [
"MIT"
] | null | null | null | doc/report.md | mateuszz0000/CVE-Flow | fe8e8b0f389f4d4744c7a20a798411312452cb48 | [
"MIT"
] | 6 | 2020-11-13T19:03:15.000Z | 2022-02-10T02:40:40.000Z | doc/report.md | devsecops-SRC/CVE-Flow | 4d246aa7617535f8d960ae956f81304e4fb57d07 | [
"MIT"
] | null | null | null | # 1999-2020年CVE数据分析、监控、EXP预警和全局自动化
给大家汇报一下最近工作,主要做了这么几个事情:
1. 1999-2020年CVE数据分析。
2. 增量CVE数据的T级监控。
3. EXP预警。
4. 全局自动化。
## 产出及价值
- 汇总产出一份近20年来CVE原始数据集:CVE2020,且持续自动更新,具备66个属性。借助数据集,可以分析各个属性数据的外在表现,推测其内在规律,辅助安全工作。
- 经过交叉打标,产出带有EXP标记的CVE标记数据集:EXP2020,且持续自动更新。借助已有标记数据集,通过机器学习和深度学习算法训练,可以预测CVE被利用的可能性,有的放矢,提高预警时间。
- 基于以上工作,开发名为CVE-Flow的工具,实现历年来CVE数据的分析、增量CVE的T级监控、EXP预警和全局自动化功能,作为外部威胁情报,给攻防双方提供有价值的CVE数据和建议。
## 起源
在我写的博文中,经常会交代文章的“起源”,介绍写这篇文章的原因和其中思考的过程。这主要来源于早前在乌云看洞的时候,漏洞详情经常有果无因,只介绍了漏洞的触发点和利用方式,而最重要的如何发现这个触发点的过程却没有被提及,对于漏洞平台来说,要的是结果,而对于白帽子来说,更重要的可能是发现漏洞的过程,而这部分是缺失的,当然,这也可以理解,毕竟漏洞详情不是博文。
idea起源于目前的现状:从防的角度,做安全检测的场景和机器学习方法有一堆,但从攻的角度,机器学习的成功应用较为欠缺和有限。而从攻的角度来想,应用场景必然绕不开漏洞,那么问题也就变成了,机器学习在漏洞方面的应用。在dblp上使用vulnerability machine learning deep learning等关键字搜索,做了一下调研,发现大部分研究更偏向于学术型和研究型,能在工业界产生实际应用价值的很少,其中有一个比较有意思的应用是预测漏洞被利用的可能性。当时看到这个才发现我的思维定势,机器学习在漏洞方面的应用一定是指机器学习来挖洞吗?机器学习也可以应用在漏洞数据方面,可以做的事情是工程化”预测漏洞被利用的可能性“,为工业界服务。顺手在管理idea的仓库简要记录下了整个过程。

在这里给大家安莉下我的idea管理小本本,主要记录idea和TODO,找到适合自己的项目和时间管理工具很重要。

言归正传,回到机器学习和漏洞数据。先有数据后有天,如图是first.org截止到2016年3月17日,汇总的全球漏洞库,经过我的二次验证,发现除了最后一行的乌云不可用之外,其余漏洞库均在正常维护、更新和运营。目前漏洞库,官方的有美国nvd、CVE等,中国cnvd、cnnvd,澳大利亚AusCERT,欧洲CERT-EU,日本的JVN,JVN iPedia,芬兰的NCSC。非官方的主要有:Exploit DB,Rapid7的漏洞和利用库,Security Focus等。

研究了一下各大漏洞库的漏洞数据标准,发现都对标的CVE。CVE就像是一个枢纽,连接了全球的漏洞数据。这么看来可以从CVE数据出发,应用机器学习算法。如果有了大量的CVE数据,难道只做算法?用算法不是目的,是手段。产生价值才是目的。数据本身的简单统计和数据分析,同样能产生巨大的价值。
到现在为止,梳理一下我们要做的事情有:CVE数据分析(存量CVE数据分析和增量CVE监控),使用机器学习算法预测CVE的EXP可能性。
接着,向前思考怎么获取CVE及相关数据,爬虫爬还是下载订阅数据?在线还是离线?如何更新?等等的策略问题。向后思考结果和结果的展现形式。这些都是问题。
那么到这里,我们还有一件事情要做:自动化,做到自动可持续性更新,避免成为工具人。
## 存量CVE数据分析
先介绍点CVE相关的背景知识:CVE、MITRE、NVD、NIST、CVEdetails。安全绕不开漏洞,漏洞绕不开CVE(通用漏洞披露),CVE可以看成漏洞的美标,1999年由MITRE公司提出,现流行的ATT&CK也是由MITRE提出。MITRE上的CVE数据会被及时同步到NVD(美国国家漏洞库),NVD又是由NIST(美国国家标准技术研究所)于2005年支持创建的。而CVEdetails是2010年由第三方个人开发的,收录的CVE数据和官方的CVE数据有重叠,有差别,数据更新慢于官方几个月。
从cve的官网cve.mitre.org没有发现现成可利用的数据,爬虫爬的话,因为CVE数据字段比较多,难定义数据的统一格式,还好在NVD上有现成的data feeds,提供了1999年以来所有CVE数据,且定义好了数据格式。
解析完发现,一条完整的CVE数据最多包括66个字段,主要分为这几大类:CVE基本信息,CWE通用弱点枚举,reference引用,描述description,CPE通用平台枚举,impact影响,publishedDate公布时间,lastModifiedDate最近更改时间。其中CWE可以理解为漏洞类型,也是由MITRE提出的标准,CPE是标记漏洞影响产品的类型,是系统还是应用或是硬件,impact包括CVSS V2和CVSS V3两个版本通用漏洞评分系统。
首先对1999年-2020年5月8日的CVE数据做探索性数据分析。
截止到2020年5月8日,总计有142887个CVE,下图为CVE数量随时间变化趋势图,从趋势线可以看出,CVE数量是不断增多的,在2016-2017年CVE数量陡增,翻了近三倍,2017-2019三年来CVE数量也居高不下,仅2020年前四个多月,就爆发了6838个CVE,这样看来2020全年爆发的CVE极有可能多于2019年的18938个CVE。透过现象窥本质,为什么近三年来CVE量居高不下?是什么在驱动安全人员连年产出几万的CVE?相较于2016年,2017年究竟发生了什么?种种这些,归根结底肯定是利益相关的原因,但具体是什么呢?

是因为2017年安全行业发生的“维基解密”、“NSA武器库泄露”、“WannCry勒索病毒”等重大安全事件吗,或许有这方面原因吧,但肯定不止这些。
从漏洞危害的视角,探索CVE数量随时间变化趋势,这里采用CVSS V2划分漏洞危害的标准,将漏洞分为高危、中危、低危。从下图可以看到高危和低危漏洞增长的不算多,主要增长点是中危漏洞,中危漏洞频发。

细心的读者可能会发现上图中每年的高危、中危、低危CVE加起来都不等于上上图中的年CVE总量,准确地来说,是都小于,这是因为有部分CVE的状态是Rejected,漏洞被拒绝了,不是有效漏洞。当然这部分CVE都是有CVE id的,但这并不代表漏洞一定有价值,甚至漏洞都不一定真实存在。这涉及到申请CVE的流程,分为两大步:预定CVE编号和公开CVE漏洞进行验证。被分配了CVE id后,此时CVE的状态是Reserved,处于保留状态,还需要公开漏洞进行验证,验证不通过的即被拒绝。
从漏洞类型的视角,计算出1999-2020年CVE的有效CWE id Top10,分别代表:XSS、缓冲区溢出、不正确的输入验证、敏感信息泄露、SQL注入、权限和访问控制、目录遍历、资源管理错误、CSRF、密码问题。

再细化到每年,观察漏洞类型随时间的变化。考虑到CWE是在2006年被提出的,所以我们选取2006年以来的数据,取每年top3漏洞类型的并集,分别为:CWE-200、CWE-79、CWE-20、CWE-119、CWE-264、CWE-89、CWE-94,分别代表:敏感信息泄露、XSS、不正确的输入验证、缓冲区溢出、权限和访问控制、SQL注入、代码注入。除CWE94即代码注入外,其余均在1999-2020年总数据的top10内。

可以看到,漏洞类型随时间的变化,有升有降。这里直接给出几点分析结论。
1. 敏感信息泄露越来越严重,从2014年开始爬坡,2017年到达顶峰,后续居高不下。
2. XSS连续三冠,从2017年陡增,至今成为CVE的主旋律。
3. 不正确的输入验证,从2017年持续增长至今。
4. 缓冲区溢出在2017年到达顶峰后,近年来逐步回落。
5. SQL注入不生不死,2017-2019近三年来,日均一个半CVE。
6. 权限和访问控制漏洞越来越少,我验证了好几遍数据,2020年至今未出现一例包含CWE-264的CVE。这块有点反常,从2018年开始火热的云原生和权限、访问控制密不可分,为什么这方面CVE比较少,甚至2020年都没呢,是因为CVE官方可能将此类型的洞归属到了其他CWE id吗,还是说是因为安全相较于基础设施的演进,安全漏洞有个滞后期吗?如果是的话,是否能一定程度上说明这是一片蓝海?
7. 代码注入连年少得很,仅在2006年排名top1,近10年来都没怎么进过top10。
所以,能给出的通用建议是:建议甲方安全重点关注敏感信息泄露、XSS和不正确的输入验证。乙方和白帽子的话,可以重点就这三类漏洞锤甲方,因为大量的数据表明,这三类洞是真的多,一锤一个准。
从CVE的引用视角,通过提取引用链接的主域名数据并归并,发现CVE引用的头部数据为:在美安全公司的漏洞库、美日官方的漏洞库、各大产品厂商官方的安全通告。其中产品厂商绝大部分为oracle、apple、ibm、google、android、cisco、Microsoft、adobe、wordpress等美属企业,其中有个例外是华为,从这点看来华为安全的国际化做的不错。
分析到这里,除了华为,我国官方安全机构、安全公司和产品厂商在CVE中的存在感很弱,究其原因,猜测是安全的国际化做的不好,例如cnvd都没英语版网页。
种种这些,形成了一个既定的事实:CVE及周边是且仅是美国的一种安全生态,我们目前仍依附于此。被动依附总不是个长久的办法,是走国际化主动向CVE对接,还是联合起来做自己的标准和生态(虽然目前cnvd也联合很多安全公司做漏洞,但目测只处于正规化阶段,离标准化、生态化还比较远),这是个问题。
在CVE的引用中有个tag字段标记了CVE是否有现成的exploit,这决定了漏洞被利用的难度。exp源主要有这么几类:第一大类是github,github提供了最多的exp,第二类是安全组织和厂商,包括维护的漏洞库和官网,如packetstormsecurity、secunia、securityfocus、exploit-db、talosintelligence、hackerone等,第三类是受影响产品厂商,如wordpress、google、blogspot。
因此可以通过监控和爬取上述提供exp数据的头部源,一方面可以早发现exp,另一方面可以给CVE数据打标,标记CVE是否有公开的exploit,如果有,exp是什么,来自哪里。以此维护一份较为完整的CVE exp库,使用机器学习训练该库的标记数据,预测CVE被利用的可能性,这也引出了下面EXP预警部分。
最后,从CPE,即通用平台枚举的视角,简单统计下CVE涉及到的产品和厂商数据。1999年-2020年,CVE中总计涉及到8431个产品和厂商,下图为top10数据,google占到了22.8%,相当于第25名往后所有厂商的总和,google真就以一己之力养活了大半个安全圈从业人员。

除了这些分析,还可以使用这批数据做很多有意思的分析,推测和验证想法,从数据中,发现一些安全趋势,助力安全工作。
快快订阅我的公众号,这里将提供安全、数据、算法的实践和思考,扫描它,带走我。
## 增量CVE监控及EXP预警
## 全局自动化
这时候突然联想到碳基体师傅去年写的sec_profile项目,该项目采集和分析了sec-wiki、玄武实验室、sec.today的安全文章数据。于是去github找到sec_profile项目,发现还具备了自动化功能,每天自动生成分析报告并提交至github。仔细学习了一波,解答了之前预想到的一些问题。
| 52.842593 | 346 | 0.869984 | yue_Hant | 0.754648 |
d034c28cf9ffb5a8b5d5bdf06871c575145f3579 | 1,462 | md | Markdown | dynamicsax2012-technet/itransactiontriggerv2-methods-microsoft-dynamics-retail-pos-contracts-triggers.md | MicrosoftDocs/DynamicsAX2012-technet | 4e3ffe40810e1b46742cdb19d1e90cf2c94a3662 | [
"CC-BY-4.0",
"MIT"
] | 9 | 2019-01-16T13:55:51.000Z | 2021-11-04T20:39:31.000Z | dynamicsax2012-technet/itransactiontriggerv2-methods-microsoft-dynamics-retail-pos-contracts-triggers.md | MicrosoftDocs/DynamicsAX2012-technet | 4e3ffe40810e1b46742cdb19d1e90cf2c94a3662 | [
"CC-BY-4.0",
"MIT"
] | 265 | 2018-08-07T18:36:16.000Z | 2021-11-10T07:15:20.000Z | dynamicsax2012-technet/itransactiontriggerv2-methods-microsoft-dynamics-retail-pos-contracts-triggers.md | MicrosoftDocs/DynamicsAX2012-technet | 4e3ffe40810e1b46742cdb19d1e90cf2c94a3662 | [
"CC-BY-4.0",
"MIT"
] | 32 | 2018-08-09T22:29:36.000Z | 2021-08-05T06:58:53.000Z | ---
title: ITransactionTriggerV2 Methods (Microsoft.Dynamics.Retail.Pos.Contracts.Triggers)
TOCTitle: ITransactionTriggerV2 Methods
ms:assetid: Methods.T:Microsoft.Dynamics.Retail.Pos.Contracts.Triggers.ITransactionTriggerV2
ms:mtpsurl: https://technet.microsoft.com/library/microsoft.dynamics.retail.pos.contracts.triggers.itransactiontriggerv2_methods(v=AX.60)
ms:contentKeyID: 62205946
author: Khairunj
ms.date: 05/18/2015
mtps_version: v=AX.60
---
# ITransactionTriggerV2 Methods
[!INCLUDE[archive-banner](includes/archive-banner.md)]
The [ITransactionTriggerV2](itransactiontriggerv2-interface-microsoft-dynamics-retail-pos-contracts-triggers.md) type exposes the following members.
## Methods
<table>
<thead>
<tr class="header">
<th> </th>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><img src="images/Dn987397.pubmethod(en-us,AX.60).gif" title="Public method" alt="Public method" /></td>
<td><a href="itransactiontriggerv2-preconfirmreturntransaction-method-microsoft-dynamics-retail-pos-contracts-triggers.md">PreConfirmReturnTransaction</a></td>
<td>Triggered before confirmation of return transaction.</td>
</tr>
</tbody>
</table>
Top
## See Also
#### Reference
[ITransactionTriggerV2 Interface](itransactiontriggerv2-interface-microsoft-dynamics-retail-pos-contracts-triggers.md)
[Microsoft.Dynamics.Retail.Pos.Contracts.Triggers Namespace](microsoft-dynamics-retail-pos-contracts-triggers-namespace.md)
| 29.836735 | 159 | 0.793434 | yue_Hant | 0.770076 |
d0351001e77ee3bf03185e0b6dfc9d71659c0114 | 819 | md | Markdown | doc/TeamEventStatusRank.md | jr1221/tba_api_dart_dio_client | 622051c4cc4007d6ff43193db41fdbb3aeb70504 | [
"MIT"
] | 3 | 2021-04-14T02:42:57.000Z | 2021-12-26T08:27:35.000Z | doc/TeamEventStatusRank.md | jr1221/tba_api_dart_dio_client | 622051c4cc4007d6ff43193db41fdbb3aeb70504 | [
"MIT"
] | null | null | null | doc/TeamEventStatusRank.md | jr1221/tba_api_dart_dio_client | 622051c4cc4007d6ff43193db41fdbb3aeb70504 | [
"MIT"
] | 1 | 2021-12-26T08:27:41.000Z | 2021-12-26T08:27:41.000Z | # tba_api_dart_dio_client.model.TeamEventStatusRank
## Load the model package
```dart
import 'package:tba_api_dart_dio_client/api.dart';
```
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**numTeams** | **int** | Number of teams ranked. | [optional]
**ranking** | [**TeamEventStatusRankRanking**](TeamEventStatusRankRanking.md) | | [optional]
**sortOrderInfo** | [**BuiltList<TeamEventStatusRankSortOrderInfo>**](TeamEventStatusRankSortOrderInfo.md) | Ordered list of names corresponding to the elements of the `sort_orders` array. | [optional]
**status** | **String** | | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 43.105263 | 202 | 0.67033 | yue_Hant | 0.335806 |
d0353806656f68bf0ca6df4ac06a2b84e7a9d7ac | 604 | md | Markdown | README.md | ibrahemrass/Aboutme-lab02 | 98f32e952ad6353314151d96d91b60c6b16af5cf | [
"MIT"
] | null | null | null | README.md | ibrahemrass/Aboutme-lab02 | 98f32e952ad6353314151d96d91b60c6b16af5cf | [
"MIT"
] | 1 | 2021-02-10T14:17:29.000Z | 2021-02-10T14:17:29.000Z | README.md | ibrahemrass/Aboutme-lab02 | 98f32e952ad6353314151d96d91b60c6b16af5cf | [
"MIT"
] | 1 | 2021-02-10T12:30:03.000Z | 2021-02-10T12:30:03.000Z | # Aboutme-lab02
explain what we learn today:
## HTML
in html today added some orederd list and unorderd list.
## JAVASCRIPT
today we add to question:
1. the first one is guessing random number you have four attempt.
male it by for loop for the attempt and check if the number is right.
2. seconed one for ask a question and there is more than one answer.
make it by list and two for loop to check the attempt and check the list.
## CSS
styling the backgroung and font color and color of each element.
the name of the driver (ibrahem)
the name of the Navigator (areen)
this code is woerked together | 33.555556 | 74 | 0.766556 | eng_Latn | 0.999907 |
d035ec3c39cda6209b6a4ec9dd5849f567230d00 | 2,365 | md | Markdown | dynamicsax2012-technet/kitcomponentvariantresponse-kitcomponentvariants-property-microsoft-dynamics-retail-sharepoint-web-services-viewmodel.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/kitcomponentvariantresponse-kitcomponentvariants-property-microsoft-dynamics-retail-sharepoint-web-services-viewmodel.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/kitcomponentvariantresponse-kitcomponentvariants-property-microsoft-dynamics-retail-sharepoint-web-services-viewmodel.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: KitComponentVariantResponse.KitComponentVariants Property (Microsoft.Dynamics.Retail.SharePoint.Web.Services.ViewModel)
TOCTitle: KitComponentVariants Property
ms:assetid: P:Microsoft.Dynamics.Retail.SharePoint.Web.Services.ViewModel.KitComponentVariantResponse.KitComponentVariants
ms:mtpsurl: https://technet.microsoft.com/en-us/library/microsoft.dynamics.retail.sharepoint.web.services.viewmodel.kitcomponentvariantresponse.kitcomponentvariants(v=AX.60)
ms:contentKeyID: 62206841
ms.date: 05/18/2015
mtps_version: v=AX.60
f1_keywords:
- Microsoft.Dynamics.Retail.SharePoint.Web.Services.ViewModel.KitComponentVariantResponse.KitComponentVariants
dev_langs:
- CSharp
- C++
- VB
---
# KitComponentVariants Property
Gets or set the information for a specific kit configuration.
**Namespace:** [Microsoft.Dynamics.Retail.SharePoint.Web.Services.ViewModel](microsoft-dynamics-retail-sharepoint-web-services-viewmodel-namespace.md)
**Assembly:** Microsoft.Dynamics.Retail.SP.Web.Services (in Microsoft.Dynamics.Retail.SP.Web.Services.dll)
## Syntax
``` vb
'Declaration
<DataMemberAttribute> _
Public Property KitComponentVariants As IEnumerable(Of StorefrontListItem)
Get
Set
'Usage
Dim instance As KitComponentVariantResponse
Dim value As IEnumerable(Of StorefrontListItem)
value = instance.KitComponentVariants
instance.KitComponentVariants = value
```
``` csharp
[DataMemberAttribute]
public IEnumerable<StorefrontListItem> KitComponentVariants { get; set; }
```
``` c++
[DataMemberAttribute]
public:
property IEnumerable<StorefrontListItem^>^ KitComponentVariants {
IEnumerable<StorefrontListItem^>^ get ();
void set (IEnumerable<StorefrontListItem^>^ value);
}
```
#### Property Value
Type: [System.Collections.Generic.IEnumerable](https://technet.microsoft.com/en-us/library/9eekhta0\(v=ax.60\))\<[StorefrontListItem](storefrontlistitem-class-microsoft-dynamics-retail-sharepoint-web-services-viewmodel.md)\>
Returns [IEnumerable\<T\>](https://technet.microsoft.com/en-us/library/9eekhta0\(v=ax.60\)).
## See Also
#### Reference
[KitComponentVariantResponse Class](kitcomponentvariantresponse-class-microsoft-dynamics-retail-sharepoint-web-services-viewmodel.md)
[Microsoft.Dynamics.Retail.SharePoint.Web.Services.ViewModel Namespace](microsoft-dynamics-retail-sharepoint-web-services-viewmodel-namespace.md)
| 34.779412 | 226 | 0.807611 | yue_Hant | 0.687157 |
d0363d82925b89e892f0e1a66a8d0364f328cc3d | 1,765 | md | Markdown | windows-driver-docs-pr/gpiobtn/indicator-testing.md | hueifeng/windows-driver-docs.zh-cn | 861460d8ab333ed44b387b0b928412b6df881026 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/gpiobtn/indicator-testing.md | hueifeng/windows-driver-docs.zh-cn | 861460d8ab333ed44b387b0b928412b6df881026 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/gpiobtn/indicator-testing.md | hueifeng/windows-driver-docs.zh-cn | 861460d8ab333ed44b387b0b928412b6df881026 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 指示器测试
description: 本主题介绍常见的指标步骤步骤和示例。
ms.assetid: 8FD5728C-30E3-4998-A01D-80894BDB379A
ms.localizationpriority: medium
ms.date: 10/17/2018
ms.openlocfilehash: 5ea28fcbc8e85286083521fe4cf00cb8ddd5747b
ms.sourcegitcommit: b316c97bafade8b76d5d3c30d48496915709a9df
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/13/2020
ms.locfileid: "79242914"
---
# <a name="indicator-testing"></a>指示器测试
本主题介绍常见的指标步骤步骤和示例。
## <a name="span-idtouchkbdspanspan-idtouchkbdspantouch-keyboard-deployment-steps"></a><span id="touchkbd"></span><span id="TOUCHKBD"></span>触摸键盘部署步骤
以下步骤测试触摸键盘是否自动打开(与从任务栏打开它的用户相对)。 每次测试指示 "执行触控键盘部署步骤" 时,请应用以下步骤。
1. 按下 "Windows" 按钮,导航到 "**开始**"。
2. 滑动以显示 "**超级按钮**" 菜单,然后选择 "**搜索**"。
3. 在编辑字段中点击。
## <a name="span-idconvspanspan-idconvspanslatelaptop-mode-conversion-steps"></a><span id="conv"></span><span id="CONV"></span>石板/便携式计算机模式转换步骤
根据测试的指示,转换为石板(或便携式计算机)。
**请注意** 如果系统可以使用多种方法转换为石板模式,请对每个方法重复测试步骤。
各种外形规格允许不同的转换方法,例如:
- 附加或分离键盘
- 翻转屏幕
- 旋转屏幕
- 滑动屏幕以覆盖或揭开键盘
**转换示例:**

**图1键盘附加和分离转换**

**图2屏幕旋转转换**
**石板示例:**
- 已分离键盘
- 键盘存在,但无法轻松键入
- 下的键盘 flapped
- 下滑
- Swivelled
**笔记本电脑模式:**
键盘存在,可轻松键入。
## <a name="span-idlaptop_slate_mode_indicator_scenariosspanspan-idlaptop_slate_mode_indicator_scenariosspanspan-idlaptop_slate_mode_indicator_scenariosspanlaptopslate-mode-indicator-scenarios"></a><span id="Laptop_slate_mode_indicator_scenarios"></span><span id="laptop_slate_mode_indicator_scenarios"></span><span id="LAPTOP_SLATE_MODE_INDICATOR_SCENARIOS"></span>笔记本电脑/石板模式指示器方案
必须对改装执行端到端指示器测试,才能在以下领域公开任何潜在问题:
- 将系统从一种模式转换到另一种模式时的各种计时。
- 可转换的机械细节。
| 21.790123 | 381 | 0.757507 | yue_Hant | 0.372382 |
d036f19e1f78930e0fc13373faaf2441aa82d275 | 11,240 | md | Markdown | articles/automation/automation-connections.md | agarwal-akash/azure-docs | fc6ba94ecd704811d70f5a0f08b40e55fc9b4845 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/automation/automation-connections.md | agarwal-akash/azure-docs | fc6ba94ecd704811d70f5a0f08b40e55fc9b4845 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/automation/automation-connections.md | agarwal-akash/azure-docs | fc6ba94ecd704811d70f5a0f08b40e55fc9b4845 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Connection assets in Azure Automation
description: Connection assets in Azure Automation contain the information required to connect to an external service or application from a runbook or DSC configuration. This article explains the details of connections and how to work with them in both textual and graphical authoring.
services: automation
ms.service: automation
ms.component: shared-capabilities
author: georgewallace
ms.author: gwallace
ms.date: 03/15/2018
ms.topic: conceptual
manager: carmonm
---
# Connection assets in Azure Automation
An Automation connection asset contains the information required to connect to an external service or application from a runbook or DSC configuration. This may include information required for authentication such as a username and password in addition to connection information such as a URL or a port. The value of a connection is keeping all of the properties for connecting to a particular application in one asset as opposed to creating multiple variables. The user can edit the values for a connection in one place, and you can pass the name of a connection to a runbook or DSC configuration in a single parameter. The properties for a connection can be accessed in the runbook or DSC configuration with the **Get-AutomationConnection** activity.
When you create a connection, you must specify a *connection type*. The connection type is a template that defines a set of properties. The connection defines values for each property defined in its connection type. Connection types are added to Azure Automation in integration modules or created with the [Azure Automation API](http://msdn.microsoft.com/library/azure/mt163818.aspx) if the integration module includes a connection type and is imported into your Automation account. Otherwise, you will need to create a metadata file to specify an Automation connection type. For further information regarding this, see [Integration Modules](automation-integration-modules.md).
>[!NOTE]
>Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are encrypted and stored in Azure Automation using a unique key that is generated for each automation account. This key is stored in Key Vault. Before storing a secure asset, the key is loaded from Key Vault and then used to encrypt the asset.
## Windows PowerShell Cmdlets
The cmdlets in the following table are used to create and manage Automation connections with Windows PowerShell. They ship as part of the [Azure PowerShell module](/powershell/azure/overview) which is available for use in Automation runbooks and DSC configurations.
|Cmdlet|Description|
|:---|:---|
|[Get-AzureRmAutomationConnection](/powershell/module/azurerm.automation/get-azurermautomationconnection)|Retrieves a connection. Includes a hash table with the values of the connection’s fields.|
|[New-AzureRmAutomationConnection](/powershell/module/azurerm.automation/new-azurermautomationconnection)|Creates a new connection.|
|[Remove-AzureRmAutomationConnection](/powershell/module/azurerm.automation/remove-azurermautomationconnection)|Remove an existing connection.|
|[Set-AzureRmAutomationConnectionFieldValue](/powershell/module/azurerm.automation/set-azurermautomationconnectionfieldvalue)|Sets the value of a particular field for an existing connection.|
## Activities
The activities in the following table are used to access connections in a runbook or DSC configuration.
|Activities|Description|
|---|---|
|[Get-AutomationConnection](/powershell/module/servicemanagement/azure/get-azureautomationconnection?view=azuresmps-3.7.0)|Gets a connection to use. Returns a hash table with the properties of the connection.|
>[!NOTE]
>You should avoid using variables with the –Name parameter of **Get-AutomationConnection** since this can complicate discovering dependencies between runbooks or DSC configurations, and connection assets at design time.
## Python2 functions
The function in the following table is used to access connections in a Python2 runbook.
| Function | Description |
|:---|:---|
| automationassets.get_automation_connection | Retrieves a connection. Returns a dictionary with the properties of the connection. |
> [!NOTE]
> You must import the "automationassets" module at the top of your Python runbook in order to access the asset functions.
## Creating a New Connection
### To create a new connection with the Azure portal
1. From your automation account, click the **Assets** part to open the **Assets** blade.
2. Click the **Connections** part to open the **Connections** blade.
3. Click **Add a connection** at the top of the blade.
4. In the **Type** dropdown, select the type of connection you want to create. The form will present the properties for that particular type.
5. Complete the form and click **Create** to save the new connection.
### To create a new connection with Windows PowerShell
Create a new connection with Windows PowerShell using the [New-AzureRmAutomationConnection](/powershell/module/azurerm.automation/new-azurermautomationconnection) cmdlet. This cmdlet has a parameter named **ConnectionFieldValues** that expects a [hash table](http://technet.microsoft.com/library/hh847780.aspx) defining values for each of the properties defined by the connection type.
If you are familiar with the Automation [Run As account](automation-sec-configure-azure-runas-account.md) to authenticate runbooks using the service principal, the PowerShell script, provided as an alternative to creating the Run As account from the portal, creates a new connection asset using the following sample commands.
```powershell
$ConnectionAssetName = "AzureRunAsConnection"
$ConnectionFieldValues = @{"ApplicationId" = $Application.ApplicationId; "TenantId" = $TenantID.TenantId; "CertificateThumbprint" = $Cert.Thumbprint; "SubscriptionId" = $SubscriptionId}
New-AzureRmAutomationConnection -ResourceGroupName $ResourceGroup -AutomationAccountName $AutomationAccountName -Name $ConnectionAssetName -ConnectionTypeName AzureServicePrincipal -ConnectionFieldValues $ConnectionFieldValues
```
You are able to use the script to create the connection asset because when you create your Automation account, it automatically includes several global modules by default along with the connection type **AzureServicePrincipal** to create the **AzureRunAsConnection** connection asset. This is important to keep in mind, because if you attempt to create a new connection asset to connect to a service or application with a different authentication method, it will fail because the connection type is not already defined in your Automation account. For further information on how to create your own connection type for your custom or module from the [PowerShell Gallery](https://www.powershellgallery.com), see [Integration Modules](automation-integration-modules.md)
## Using a connection in a runbook or DSC configuration
You retrieve a connection in a runbook or DSC configuration with the **Get-AutomationConnection** cmdlet. You cannot use the [Get-AzureRmAutomationConnection](/powershell/module/azurerm.automation/get-azurermautomationconnection) activity. This activity retrieves the values of the different fields in the connection and returns them as a [hash table](http://go.microsoft.com/fwlink/?LinkID=324844) which can then be used with the appropriate commands in the runbook or DSC configuration.
### Textual runbook sample
The following sample commands show how to use the Run As account mentioned earlier, to authenticate with Azure Resource Manager resources in your runbook. It uses the connection asset representing the Run As account, which references the certificate-based service principal, not credentials.
```powershell
$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzureRmAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
```
> [!IMPORTANT]
> **Add-AzureRmAccount** is now an alias for **Connect-AzureRMAccount**. When searching your library items, if you do not see **Connect-AzureRMAccount**, you can use **Add-AzureRmAccount**, or you can update your modules in your Automation Account.
### Graphical runbook samples
You add a **Get-AutomationConnection** activity to a graphical runbook by right-clicking on the connection in the Library pane of the graphical editor and selecting **Add to canvas**.

The following image shows an example of using a connection in a graphical runbook. This is the same example shown above for authenticating using the Run As account with a textual runbook. This example uses the **Constant value** data set for the **Get RunAs Connection** activity that uses a connection object for authentication. A [pipeline link](automation-graphical-authoring-intro.md#links-and-workflow) is used here since the ServicePrincipalCertificate parameter set is expecting a single object.

### Python2 runbook sample
The following sample shows how to authenticate using the Run As connection in a Python2 runbook.
```python
""" Tutorial to show how to authenticate against Azure resource manager resources """
import azure.mgmt.resource
import automationassets
def get_automation_runas_credential(runas_connection):
""" Returns credentials to authenticate against Azure resoruce manager """
from OpenSSL import crypto
from msrestazure import azure_active_directory
import adal
# Get the Azure Automation Run As service principal certificate
cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
pks12_cert = crypto.load_pkcs12(cert)
pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM, pks12_cert.get_privatekey())
# Get Run As connection information for the Azure Automation service principal
application_id = runas_connection["ApplicationId"]
thumbprint = runas_connection["CertificateThumbprint"]
tenant_id = runas_connection["TenantId"]
# Authenticate with service principal certificate
resource = "https://management.core.windows.net/"
authority_url = ("https://login.microsoftonline.com/" + tenant_id)
context = adal.AuthenticationContext(authority_url)
return azure_active_directory.AdalAuthentication(
lambda: context.acquire_token_with_client_certificate(
resource,
application_id,
pem_pkey,
thumbprint)
)
# Authenticate to Azure using the Azure Automation Run As service principal
runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
azure_credential = get_automation_runas_credential(runas_connection)
```
## Next steps
- Review [Links in graphical authoring](automation-graphical-authoring-intro.md#links-and-workflow) to understand how to direct and control the flow of logic in your runbooks.
- To learn more about Azure Automation's use of PowerShell modules and best practices for creating your own PowerShell modules to work as Integration Modules within Azure Automation, see [Integration Modules](automation-integration-modules.md).
| 73.947368 | 767 | 0.807295 | eng_Latn | 0.979457 |
d03727e6a4760004d38b93c7bdeb53fdbbe535ac | 1,074 | md | Markdown | LICENSE.md | VallishaM/Google-Meet-Attendance | 9210d91173378a73b48879c9a36770a4cc4a8f24 | [
"RSA-MD"
] | 1 | 2021-08-06T15:01:42.000Z | 2021-08-06T15:01:42.000Z | LICENSE.md | VallishaM/Google-Meet-Attendance | 9210d91173378a73b48879c9a36770a4cc4a8f24 | [
"RSA-MD"
] | null | null | null | LICENSE.md | VallishaM/Google-Meet-Attendance | 9210d91173378a73b48879c9a36770a4cc4a8f24 | [
"RSA-MD"
] | null | null | null | # Privacy Policy
[The privacy policy has been moved to a separate file](PRIVACY.md).
# License
Copyright (C) 2020 by Al Caughey ([email protected])
Permission to use and/or distribute this software for any purpose *without* fee is hereby granted.
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
| 67.125 | 485 | 0.802607 | yue_Hant | 0.777344 |
d0373ab247787d340a411859a492810d9a9f9251 | 262 | md | Markdown | includes/iot-hub-resource-manager-selector.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/iot-hub-resource-manager-selector.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/iot-hub-resource-manager-selector.md | OpenLocalizationTestOrg/azure-docs-pr15_el-GR | 9f7579626c9f63b39b5039748978ac36e4d54ebc | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | > [AZURE.SELECTOR]
- [Πύλη του Azure](iot-hub-create-through-portal.md)
- [Azure CLI](iot-hub-create-using-cli.md)
- [PowerShell με το πρότυπο](iot-hub-rm-template-powershell.md)
- [C# με το ΥΠΌΛΟΙΠΟ](iot-hub-rm-rest.md)
- [C# με πρότυπο](iot-hub-rm-template.md) | 43.666667 | 63 | 0.709924 | ell_Grek | 0.146112 |
d0377ac746b0e3cf60bf922a6b313ac516f84480 | 15,125 | md | Markdown | playlists/pretty/37i9dQZF1DXa41CMuUARjl.md | mackorone/spotify-playlist-archive | 1c49db2e79d0dfd02831167616e4997363382c20 | [
"MIT"
] | 179 | 2019-05-30T14:38:31.000Z | 2022-03-27T15:10:20.000Z | playlists/pretty/37i9dQZF1DXa41CMuUARjl.md | mackorone/spotify-playlist-archive | 1c49db2e79d0dfd02831167616e4997363382c20 | [
"MIT"
] | 78 | 2019-06-16T21:38:29.000Z | 2022-03-01T18:36:42.000Z | playlists/pretty/37i9dQZF1DXa41CMuUARjl.md | mackorone/spotify-playlist-archive | 1c49db2e79d0dfd02831167616e4997363382c20 | [
"MIT"
] | 96 | 2019-06-16T15:04:25.000Z | 2022-03-01T17:38:47.000Z | pretty - [cumulative](/playlists/cumulative/37i9dQZF1DXa41CMuUARjl.md) - [plain](/playlists/plain/37i9dQZF1DXa41CMuUARjl) - [githistory](https://github.githistory.xyz/mackorone/spotify-playlist-archive/blob/main/playlists/plain/37i9dQZF1DXa41CMuUARjl)
### [Friday Cratediggers](https://open.spotify.com/playlist/37i9dQZF1DXa41CMuUARjl)
> This week's handpicked new <a href="spotify:genre:edm\_dance">dance and electronic music</a>, featuring new music from Michael Bibi.
[Spotify](https://open.spotify.com/user/spotify) - 367,906 likes - 50 songs - 3 hr 24 min
| No. | Title | Artist(s) | Album | Length |
|---|---|---|---|---|
| 1 | [Soul System](https://open.spotify.com/track/5LBALrbr55bXfJNb8dsWb1) | [Michael Bibi](https://open.spotify.com/artist/4cvdQRyHmkSQSakUrW2oxv) | [ISOLAT003](https://open.spotify.com/album/1dQUrrp5pAkTvCcfzY2Mvh) | 3:43 |
| 2 | [Body Mind Soul \(with Benny Benassi feat\. Kyle Reynolds\)](https://open.spotify.com/track/4eaEUmyOmuIkDslaf7xw0f) | [DVBBS](https://open.spotify.com/artist/5X4LWwbUFNzPkEas04uU82), [Benny Benassi](https://open.spotify.com/artist/4Ws2otunReOa6BbwxxpCt6), [Kyle Reynolds](https://open.spotify.com/artist/5yhR0OqJhkbQ2y76XUte3R) | [Body Mind Soul \(with Benny Benassi feat\. Kyle Reynolds\)](https://open.spotify.com/album/3WQ4N49TK1nT2YLV5kNgpD) | 2:00 |
| 3 | [Typical \(feat\. Lars Martin\)](https://open.spotify.com/track/0zbCQ5BMhpFfKWcpEuzqVd) | [Alok](https://open.spotify.com/artist/0NGAZxHanS9e0iNHpR8f2W), [Steve Aoki](https://open.spotify.com/artist/77AiFEVeAVj2ORpC85QVJs), [Lars Martin](https://open.spotify.com/artist/22GWBRw4EYd2qGvzDqzxXO) | [Typical \(feat\. Lars Martin\)](https://open.spotify.com/album/4ZgHHCDZ6YI1iZ9L8xzrB4) | 2:21 |
| 4 | [How Long v3](https://open.spotify.com/track/49Uhm2WdbqxhiKsx14sXEQ) | [Kaskade](https://open.spotify.com/artist/6TQj5BFPooTa08A7pk8AQ1), [Late Night Alumni](https://open.spotify.com/artist/6JtFllJR7nhh8fa6oGefSj) | [How Long v3](https://open.spotify.com/album/27Ij0i8jm9q69KnKSfvG87) | 5:02 |
| 5 | [String Theory](https://open.spotify.com/track/50mVZbBcr6rYdt45OEPYlN) | [HI\-LO](https://open.spotify.com/artist/0ETJQforv5OXgDgidQv9qd), [Reinier Zonneveld](https://open.spotify.com/artist/21A7bhIL1m6CNZn8y57PIZ), [Oliver Heldens](https://open.spotify.com/artist/5nki7yRhxgM509M5ADlN1p) | [String Theory](https://open.spotify.com/album/64nnxFO6ZeVn19b7fGqOgN) | 4:21 |
| 6 | [Pieces](https://open.spotify.com/track/4U8TDLDoxWbaAFYkNkxalw) | [Séb Mont](https://open.spotify.com/artist/4lFWNwqQbywI4qCQ9PeL7V) | [Pieces](https://open.spotify.com/album/3mcJo6R1bp9PwGbvVXuSnA) | 2:45 |
| 7 | [Give You Up \(feat\. Mila Falls\)](https://open.spotify.com/track/3D1ihfozmd9hN3v3CqJsr5) | [Lucky Rose](https://open.spotify.com/artist/5ShkaitLUorYdZgJMqTF5E), [Mila Falls](https://open.spotify.com/artist/5m1yocXnIqkhC8dyQQd6Ve) | [Give You Up \(feat\. Mila Falls\)](https://open.spotify.com/album/3891FMLH9O8ggXq9kkk0xz) | 2:58 |
| 8 | [Better](https://open.spotify.com/track/6gl1WVOsI09df2HiCxs5ze) | [Daniel Blume](https://open.spotify.com/artist/7pbay7w0V7OdIr3jzSRkHj) | [Better](https://open.spotify.com/album/48Ju8v5nzIhWWXwhksMHwj) | 2:52 |
| 9 | [Real Thing \(feat\. Elohim\)](https://open.spotify.com/track/5wb8h2gUzyhmCwOtGp29z4) | [Lights](https://open.spotify.com/artist/5pdyjBIaY5o1yOyexGIUc6), [Elohim](https://open.spotify.com/artist/6wKxOKEA3K6R2UZ3COLXEY) | [Real Thing \(feat\. Elohim\)](https://open.spotify.com/album/6BelAZUC4yUO1wVouRJNmN) | 3:05 |
| 10 | [Spectacle](https://open.spotify.com/track/5Nx50OXz1e3q7M1jLdnjvH) | [On Planets](https://open.spotify.com/artist/5uz8HDS6eOsefdqSyMlTzi) | [Spectacle](https://open.spotify.com/album/0v1ZVoPvGy2pVd2aFHBOkU) | 4:11 |
| 11 | [Evil](https://open.spotify.com/track/5LLkZ1gyQMNPXtQv89pdQA) | [MKII](https://open.spotify.com/artist/5f3LuTeqMEAwXLyyCHXlLq) | [Evil](https://open.spotify.com/album/5GAnH3uoPdXsum34mhcMDR) | 4:44 |
| 12 | [Safe Place \- QRTR Remix](https://open.spotify.com/track/7M8xsk8epw2OcacXJAmgw3) | [Phil Anker](https://open.spotify.com/artist/22DTXq0MpXJRZPaTVZD7ED), [Lilia](https://open.spotify.com/artist/2YFACCFxJUZcwTyNeXFB7u), [QRTR](https://open.spotify.com/artist/2THXZEfcOePL7bRFl2DUwj) | [Safe Place \(QRTR Remix\)](https://open.spotify.com/album/3HfC7wTBwcyk8vzpL7k9wE) | 4:22 |
| 13 | [Watch the Sky](https://open.spotify.com/track/4eXZdQsQWdO3zpuAUXZ0m6) | [NoD](https://open.spotify.com/artist/5NXSMDUca0GqqzJTNQuVeu) | [Watch The Sky](https://open.spotify.com/album/4SY0CrXXEAN74PD3x4Hd8Y) | 3:25 |
| 14 | [SET THE BAR](https://open.spotify.com/track/4J4Z4HXGbhz4zKDU3BrD45) | [Yungmaple](https://open.spotify.com/artist/1QBaZ9xbIrmYmOwBQY3rVQ), [Von Storm](https://open.spotify.com/artist/5acEBp4nbqoEjROkF8nLj4) | [SET THE BAR](https://open.spotify.com/album/2djbBsRHGZELBrPHdC8sBr) | 3:12 |
| 15 | [Paid For Love](https://open.spotify.com/track/5jCeNmOKugXGCKSVsZUolR) | [Ilan Bluestone](https://open.spotify.com/artist/1yoZuH2j43vVSWsOwYuQyn), [Gid Sedgwick](https://open.spotify.com/artist/3Y43xMeiPftAookVOSKu1Y) | [Impulse](https://open.spotify.com/album/4x77t1XFY7p3DExQKB2vVT) | 4:36 |
| 16 | [Melanin](https://open.spotify.com/track/3wa0Ntjf9yptcQx60EtYFK) | [Philou Louzolo](https://open.spotify.com/artist/4zCYbkxFSNb6T2D2vFSg6C), [Kususa](https://open.spotify.com/artist/4UcrwfAI09CLZ7aBXMiucJ), [Mariseya](https://open.spotify.com/artist/6CezXXzMXtPnjFvqu4kED1) | [Melanin](https://open.spotify.com/album/1oHJYIR1FdKcxBwYMcLWXE) | 7:56 |
| 17 | [Love In The Music](https://open.spotify.com/track/7EBx50SQBEAqJ1Kv1X0Gxv) | [Baauer](https://open.spotify.com/artist/25fqWEebq6PoiGQIHIrdtv) | [Love In The Music](https://open.spotify.com/album/4vGLXHQ1inNFobUxuImLdL) | 3:55 |
| 18 | [It's Really You On My Mind \- Edit](https://open.spotify.com/track/68YSTxRYHSgT3nv4SCCV6Y) | [Black Loops](https://open.spotify.com/artist/6AwGe2F49hD3ANXvmOwqQB) | [It's Really You On My Mind \(Edit\)](https://open.spotify.com/album/2gc5NgOyLT1gNiZhuCB9Tk) | 5:10 |
| 19 | [Back To You \(Willim Edit\)](https://open.spotify.com/track/0bMEyzsRHntTnsBmTUp4N8) | [MOTi](https://open.spotify.com/artist/1vo8zHmO1KzkuU9Xxh6J7W), [CORSAK](https://open.spotify.com/artist/1TcbdifqhtxLz77unBYJ7z), [Willim](https://open.spotify.com/artist/5bp5XaFz8Py4UFEhQ6FZRk), [Georgia Ku](https://open.spotify.com/artist/5mYakBbBzPMQTfkVMIgiDM) | [Back To You \(Willim Edit\)](https://open.spotify.com/album/6v65RFtnQjxWaWjS5htrE3) | 3:14 |
| 20 | [Dance Dance](https://open.spotify.com/track/4Qnm96xHzZAWox9wyZI4AM) | [Groovenatics](https://open.spotify.com/artist/0eLYiajeLRGa4MYyF2y0rW), [nomerci](https://open.spotify.com/artist/5tygsM77YMbY8WgkVKhv4R) | [Dance Dance](https://open.spotify.com/album/65QLi1zTqZH4GHy5FNfp7D) | 3:06 |
| 21 | [Dtjoh \[Mixed\]](https://open.spotify.com/track/5nXRgeNeV7lQuMEvNeRXBx) | [Citizen Deep](https://open.spotify.com/artist/2Wcld3BQUXxWUYMmCJYyuM), [Jessica LM](https://open.spotify.com/artist/3Q259wuL2vRuisWyvYcebg) | [For My Dear Friend V \(DJ Mix\)](https://open.spotify.com/album/71YPrvk0AIeyWKXqf7ZzVz) | 4:36 |
| 22 | [Will You Be? \- CFCF Remix](https://open.spotify.com/track/2ajUD0hDhkdQzySZk0DNDb) | [Baltra](https://open.spotify.com/artist/2tEyBfwGBfQgLXeAJW0MgC), [CFCF](https://open.spotify.com/artist/73IRHBhotETMmgvRCEyTCS) | [Ambition: Remixes 002](https://open.spotify.com/album/11heFNTH7nLYQ5LRiML19Z) | 5:02 |
| 23 | [Boom Shack \(Will Taylor Remix\)](https://open.spotify.com/track/6IRFSOakOoofBPYewcfFQr) | [Steve Lawler](https://open.spotify.com/artist/0NDuRCSLSH0Ii5An4U6HME), [Raffi Habel](https://open.spotify.com/artist/63uqyTqsFsbvpzQ16heUJM), [Will Taylor \(UK\)](https://open.spotify.com/artist/53PVBEKRk4Fvq8w8cLydLX) | [Boom Shack \(Will Taylor Remix\)](https://open.spotify.com/album/46yw1EzirUXqRRY5Znhdqr) | 6:26 |
| 24 | [MANIPULATOR](https://open.spotify.com/track/3fVZrN6FJsfrCADiYxmAQl) | [Christopher Damas](https://open.spotify.com/artist/03sZi1EjCnl0b3Irnqa9NJ) | [MANIPULATOR](https://open.spotify.com/album/3vkW86F6FCYUPUQjUN1M2X) | 2:06 |
| 25 | [Ocean](https://open.spotify.com/track/7qMBNOhPfbouXl3JL3k1H4) | [Anton Ishutin](https://open.spotify.com/artist/0RhuWNLtoucVMRmsSkCgWl) | [Ocean](https://open.spotify.com/album/4edIfnHvv2lFzRnUoLnPci) | 7:10 |
| 26 | [Feel Again](https://open.spotify.com/track/5Of2yqWJX6TjkgxcsusxCK) | [Disco Fries](https://open.spotify.com/artist/7G7KvDCLdVG0Ok511Iqc9U), [Shanahan](https://open.spotify.com/artist/55iQlVy82VOHUZ54INg1Ge) | [Feel Again](https://open.spotify.com/album/34m6BSvzUml1wKPsXf1mZS) | 2:42 |
| 27 | [Mitra](https://open.spotify.com/track/7bX3v3vRAvjRgJ3ls7yIvY) | [Volen Sentir](https://open.spotify.com/artist/7scXA3hBD8JyGGajVR9q9l) | [Mitra / Hael](https://open.spotify.com/album/4DwTYZRUe3mgPmhfIiJLuG) | 7:40 |
| 28 | [Feel The Soul](https://open.spotify.com/track/4EbmeBw5lF21pS4sDb24sw) | [Jaxomy](https://open.spotify.com/artist/1c3uso4iIeeX3P0bhKaQDq), [Mohtiv](https://open.spotify.com/artist/32CSGSXgKI6WgPHwzSRYbG) | [Feel The Soul](https://open.spotify.com/album/6aMLpXkMfntxDlGGqVE42E) | 3:11 |
| 29 | [Overdose](https://open.spotify.com/track/3Cddsy9EtWqCkbjZhDMBJC) | [Lost Minds](https://open.spotify.com/artist/14z02tRm4yTs0cJfmrHfnr) | [Overdose](https://open.spotify.com/album/1ax5fBCktGoMnWELooISk5) | 3:07 |
| 30 | [Distance \- Tony Romera Remix](https://open.spotify.com/track/6BZeuhNPtGTGJM3xtA7ZmU) | [Apashe](https://open.spotify.com/artist/1fd3fmwlhrDl2U5wbbPQYN), [Tony Romera](https://open.spotify.com/artist/7GQsOji7pfixzkLt63awo5), [Geoffroy](https://open.spotify.com/artist/0VzoflxRgSVEWHYmCbMOJJ) | [Distance \(Tony Romera Remix\)](https://open.spotify.com/album/7KreubAbwshWstwncRuL45) | 3:48 |
| 31 | [Insomnia \- Rework](https://open.spotify.com/track/6uRUfq1y0VayUjh2M935g7) | [Mike Candys](https://open.spotify.com/artist/24Sxfn1uAoJmuR9N72drt9), [Jack Holiday](https://open.spotify.com/artist/64yON9pK0j392YkionGKAF) | [Insomnia \(Rework\)](https://open.spotify.com/album/3O4Td8QTtXYMfbj8d4h7P6) | 3:00 |
| 32 | [Creatures on Acid \- Radio\-Edit](https://open.spotify.com/track/04b2DLz2SSb2VqimLvEfLs) | [Patrick Scuro](https://open.spotify.com/artist/6wfL4r7ReScDTARbtSRTvB), [Marie Vaunt](https://open.spotify.com/artist/50KydUSYhBFGorhAgUcrL5) | [Creatures on Acid \(Radio\-Edit\)](https://open.spotify.com/album/6lqQURzLflCmVagXWx3Wun) | 4:30 |
| 33 | [I Wanna Feel You](https://open.spotify.com/track/7pkHxz1w74xO9aQbblOxjd) | [Robert Burian](https://open.spotify.com/artist/64FzaTBI1Z4TZXlhrihUDg) | [I Wanna Feel You](https://open.spotify.com/album/4Raczm1Cu7S9citiCR2a4C) | 3:35 |
| 34 | [Fenix](https://open.spotify.com/track/1zPW1xhw6k6hzhW8jbeGX9) | [HWLS](https://open.spotify.com/artist/4ODo634wVqDxqgVSlXE2LO) | [Fenix](https://open.spotify.com/album/0VL0LWMvQhIpPrAxsRmYEj) | 4:52 |
| 35 | [B.R.O.K.E\. \- KUČKA edit](https://open.spotify.com/track/3i2ZCbam7o9xrReMaFkvjb) | [K\-Lone](https://open.spotify.com/artist/6VC4hWnnMMmOxpH6KsAXBU), [KUČKA](https://open.spotify.com/artist/6JcD2YKEhgimweLpUI0NEw) | [B.R.O.K.E\. \(KUČKA edit\)](https://open.spotify.com/album/7paiMoIfkxTk5c9Nkh25ww) | 4:07 |
| 36 | [Code 404](https://open.spotify.com/track/6BqvNPGeHhUfhCJpKcFDoS) | [Dylhen](https://open.spotify.com/artist/58R30oixEWaD1YGduLJpR5) | [Code 404](https://open.spotify.com/album/0L7rj4RLUS3iYhARRoE4so) | 3:33 |
| 37 | [La Sombra Del Viento](https://open.spotify.com/track/5FPfXjdeNVD7HxyDnszG3K) | [Derun](https://open.spotify.com/artist/7DaUdudIwcfgSzFJX1VEVo) | [La Sombra Del Viento](https://open.spotify.com/album/2dUvFMlr8HfWMxxwFCFJfe) | 7:14 |
| 38 | [Hype Beast](https://open.spotify.com/track/0AAtVQhBnuqduOZp7GsepG) | [Rich DietZ](https://open.spotify.com/artist/1mMlBc8LXvVOSxtaskKiE8) | [Hype Beast](https://open.spotify.com/album/5s8FxM1lTlGR4kmbbvmoEq) | 3:21 |
| 39 | [serenity](https://open.spotify.com/track/5btMsv4TpSBYA57jaUvaG8) | [CactusTeam](https://open.spotify.com/artist/3CWlfAolH0gJigqzPafSbm) | [serenity](https://open.spotify.com/album/06ckPEyABV1de75cji42I9) | 4:12 |
| 40 | [Lezzgow](https://open.spotify.com/track/5kkc31S9fLDtmssXAcBIFZ) | [Rendher](https://open.spotify.com/artist/4Icdw6ZLVk2NkjIjMhJSc6) | [Lezzgow EP](https://open.spotify.com/album/7k0Agx0gjjLCde7Iaa4wOu) | 6:31 |
| 41 | [Cross My Heart](https://open.spotify.com/track/4RxTUxyNf3NFE4eqZqIfcf) | [Klaas](https://open.spotify.com/artist/25sJFKMqDENdsTF7zRXoif), [Emmie Lee](https://open.spotify.com/artist/4fFlpk8hS56rPSExrMiiLW) | [Cross My Heart](https://open.spotify.com/album/4yTrSLs99kcWGXEUCPfp7C) | 2:48 |
| 42 | [100GANG](https://open.spotify.com/track/0nQMfKCZUHSYA2Qp98Q4m1) | [Eva Shaw](https://open.spotify.com/artist/638CPU1xRHUo6AmfZe3F2c), [BIJOU](https://open.spotify.com/artist/3abRKajGbb3kLMy9AWzfMA), [Hitmakerchinx](https://open.spotify.com/artist/6GhBUUXBi2x3DVad9izEQD) | [100GANG](https://open.spotify.com/album/7HuzSng8agTguNBreZvmC3) | 2:27 |
| 43 | [Get Yo Self \- Edit](https://open.spotify.com/track/2J8OpkKjEKKLNCwePs3HDI) | [Hatiras](https://open.spotify.com/artist/7DQ8fX4Fbi43HaesfrVYpO) | [Get Yo Self](https://open.spotify.com/album/0fXAibJbVAnVX0nUj558zH) | 3:48 |
| 44 | [Goodbye](https://open.spotify.com/track/1sQ0jXWxDUl5qBNRb38RYA) | [Siik](https://open.spotify.com/artist/3dWrzZ5NrBW1cRHeU15Yrf), [Kocmo](https://open.spotify.com/artist/1FG6CtAkuEvBste6ySTyMO), [Julia Kleijn](https://open.spotify.com/artist/6iOYJDZYumYVmzxPbyfg5W) | [Goodbye](https://open.spotify.com/album/5xauUncS2YQFrH9N7OIefy) | 2:34 |
| 45 | [Tunnelvision](https://open.spotify.com/track/37Py2sUqcaRIScA1omhInd) | [Tim Reaper](https://open.spotify.com/artist/03KZUWKQujlCcgEdcrkvWd), [Kloke](https://open.spotify.com/artist/2cggyYmdk2HP87tYGtw3La) | [Tunnelvision](https://open.spotify.com/album/6WuvKcI0HgnK7mkR2Qli3s) | 5:54 |
| 46 | [Cry Again](https://open.spotify.com/track/5LfYC98Zc9hsPTPzz9gltB) | [Windows 96](https://open.spotify.com/artist/65XcfOOaVxbZnNlz40DK7i), [Gavriel](https://open.spotify.com/artist/53wGx0J5eu3GdlChMeO8RJ) | [Cry Again](https://open.spotify.com/album/1AnstNZSetOGm24TQA5GHB) | 3:02 |
| 47 | [Change The World](https://open.spotify.com/track/1c27kW5G37z3yMAJJuU1Ui) | [Delroy Edwards](https://open.spotify.com/artist/683gIqfxdjjg2sowYxBHIQ) | [Change The World](https://open.spotify.com/album/61NWiqpg2pxZBUC200VRmP) | 4:22 |
| 48 | [Lightspeed](https://open.spotify.com/track/6uSlV69MsqwwLKcXy8GaXl) | [Basstripper](https://open.spotify.com/artist/1tSiIyp5dxfbEaS0nZGMEl) | [Back To Normal](https://open.spotify.com/album/0yfDa2WdnPtxcckL5IQIe9) | 4:28 |
| 49 | [Really Really Hot](https://open.spotify.com/track/21iSZnCRjz9ocvcX5yltFz) | [Eddy M](https://open.spotify.com/artist/0X2423nvaH92bYjYUKCYRI) | [Really Really Hot](https://open.spotify.com/album/59TTDWb62eoMJfe474JIFD) | 3:52 |
| 50 | [Que Locura](https://open.spotify.com/track/2DKyYH0AV19kMOjhqSqsgF) | [Ovi](https://open.spotify.com/artist/4o0NtnL2m0lzZmEdRas1qv), [Gente De Zona](https://open.spotify.com/artist/2cy1zPcrFcXAJTP0APWewL) | [Que Locura](https://open.spotify.com/album/2Qbk9uyfmBYKXxenxXGkL8) | 3:17 |
Snapshot ID: `MTY0MDczODU3MiwwMDAwMDAwMGIxNzVhYzI5YTcyOTgwZWQwZGE0ZDFkMjc1OTIxZDc5` | 243.951613 | 460 | 0.762843 | yue_Hant | 0.589297 |
d0383514d01fadc1153b3173191d558a1df996b9 | 908 | md | Markdown | docs/guides/sass.md | visualfanatic/snowpack | f35825cd8844d44c8e92c15fc44664e4afbe9248 | [
"MIT"
] | null | null | null | docs/guides/sass.md | visualfanatic/snowpack | f35825cd8844d44c8e92c15fc44664e4afbe9248 | [
"MIT"
] | null | null | null | docs/guides/sass.md | visualfanatic/snowpack | f35825cd8844d44c8e92c15fc44664e4afbe9248 | [
"MIT"
] | null | null | null | ---
layout: ../../layouts/content.astro
title: 'Sass'
tags: communityGuide
published: true
img: '/img/logos/sass.svg'
imgBackground: '#bf4080'
description: How to use SASS with Snowpack using the Snowpack SASS plugin
---
<div class="stub">
This article is a stub, you can help expand it into <a href="https://diataxis.fr/how-to-guides/">how-to guide format</a>
</div>
[Sass](https://www.sass-lang.com/) is a stylesheet language that’s compiled to CSS. It allows you to use variables, nested rules, mixins, functions, and more, all with a fully CSS-compatible syntax. Sass helps keep large stylesheets well-organized and makes it easy to share design within and across projects.
**To use Sass with Snowpack:** use [@snowpack/plugin-sass](https://www.npmjs.com/package/@snowpack/plugin-sass).
```diff
// snowpack.config.mjs
export default {
plugins: [
+ '@snowpack/plugin-sass',
],
};
```
| 33.62963 | 309 | 0.718062 | eng_Latn | 0.914025 |
d0385e03b389e76e1109eead2b80191590c18157 | 6,446 | md | Markdown | docs/vs-2015/extensibility/managed-extensibility-framework-in-the-editor.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/managed-extensibility-framework-in-the-editor.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-24T14:57:38.000Z | 2020-07-24T14:57:38.000Z | docs/vs-2015/extensibility/managed-extensibility-framework-in-the-editor.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Managed Extensibility Framework im Editor | Microsoft-Dokumentation
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: conceptual
helpviewer_keywords:
- editors [Visual Studio SDK], new - using MEF for extensions
ms.assetid: 3f59a285-6c33-4ae3-a4fb-ec1f5aa21bd1
caps.latest.revision: 11
ms.author: gregvanl
manager: jillfra
ms.openlocfilehash: f19b71c86d972b59a9d46f379bf7ec93f63aeb9a
ms.sourcegitcommit: 08fc78516f1107b83f46e2401888df4868bb1e40
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/15/2019
ms.locfileid: "65679959"
---
# <a name="managed-extensibility-framework-in-the-editor"></a>Managed Extensibility Framework im Editor
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Der Editor wird mit Komponenten des Managed Extensibility Framework (MEF) erstellt. Können Sie Ihre eigenen MEF-Komponenten, um den Editor zu erweitern, erstellen und Ihren Code kann auch Komponenten-Editors nutzen.
## <a name="overview-of-the-managed-extensibility-framework"></a>Übersicht über das Managed Extensibility Framework
Das MEF ist eine .NET Bibliothek, mit dem Sie das Hinzufügen und ändern die Funktionen einer Anwendung oder Komponente, die das MEF-Programmiermodell entspricht. Visual Studio-Editor kann sowohl bereitstellen und Nutzen von MEF-Komponenten.
Das MEF ist in .NET Framework Version 4 System.ComponentModel.Composition.dll-Assembly enthalten.
Weitere Informationen über MEF finden Sie unter [Managed Extensibility Framework (MEF)](https://msdn.microsoft.com/library/6c61b4ec-c6df-4651-80f1-4854f8b14dde).
### <a name="component-parts-and-composition-containers"></a>Komponenten und Kompositionscontainer
Eine Komponente ist eine Klasse oder ein Member einer Klasse, die eine (oder beides) der folgenden Möglichkeiten:
- Nutzen Sie eine andere Komponente
- Von einer anderen Komponente verwendet werden
Betrachten Sie beispielsweise eine einkaufsanwendung, die eine Eintrag-Komponente, die von der Verfügbarkeit Produktdaten, die von einer Komponente der Warehouse-Inventur bereitgestellten abhängt. Gemäß MEF, das Teil der Hardwareinventur kann *exportieren* Verfügbarkeit von Produktdaten und die Reihenfolge Eintrag Teil kann *importieren* Daten. Der Eintrag Reihenfolge und den Bestandteil müssen nicht voneinander wissen; die *Kompositionscontainer* (bereitgestellt von der hostanwendung) Dient zum Verwalten des Satz von Exporten und Auflösen von Exporte und Importe.
Der Kompositionscontainer <xref:System.ComponentModel.Composition.Hosting.CompositionContainer>, ist in der Regel im Besitz des Hosts. Der Kompositionscontainer verwaltet eine *Katalog* von exportierten Komponenten.
### <a name="exporting-and-importing-component-parts"></a>Exportieren und Importieren von Komponenten
Sie können alle Funktionen, exportieren, sofern es als eine öffentliche Klasse oder einen öffentlichen Member einer Klasse (Eigenschaft oder Methode) implementiert ist. Sie müssen keine leiten Sie Ihre Komponente von <xref:System.ComponentModel.Composition.Primitives.ComposablePart>. Sie müssen stattdessen Hinzufügen einer <xref:System.ComponentModel.Composition.ExportAttribute> -Attribut auf die Klasse oder Klassenmember, die Sie exportieren möchten. Dieses Attribut gibt an, die *Vertrag* durch die eine andere Komponente Teil Ihrer Funktionen zur importieren kann.
### <a name="the-export-contract"></a>Der Export-Vertrag
Die <xref:System.ComponentModel.Composition.ExportAttribute> definiert die Entität (Klasse, Schnittstelle oder Struktur), die exportiert wird. In der Regel verwendet das Export-Attribut einen Parameter, der den Typ des Exports angibt.
```
[Export(typeof(ContentTypeDefinition))]
class TestContentTypeDefinition : ContentTypeDefinition { }
```
In der Standardeinstellung die <xref:System.ComponentModel.Composition.ExportAttribute> Attribut definiert einen Vertrag, der den Typ der Klasse exportieren.
```
[Export]
[Name("Structure")]
[Order(After = "Selection", Before = "Text")]
class TestAdornmentLayerDefinition : AdornmentLayerDefinition { }
```
Im Beispiel ist die Standardeinstellung `[Export]` Attribut entspricht `[Export(typeof(TestAdornmentLayerDefinition))]`.
Sie können auch eine Eigenschaft oder Methode, exportieren, wie im folgenden Beispiel gezeigt.
```
[Export]
[Name("Scarlet")]
[Order(After = "Selection", Before = "Text")]
public AdornmentLayerDefinition scarletLayerDefinition;
```
### <a name="importing-a-mef-export"></a>Importieren einen MEF-Export
Wenn Sie einen MEF-Export nutzen möchten, benötigen Sie den Vertrag (in der Regel den Typ), mit dem sie exportiert wurde, und fügen, eine <xref:System.ComponentModel.Composition.ImportAttribute> Attribut, dem dieser Wert ist. Standardmäßig verwendet das Import-Attribut einen Parameter, der den Typ der Klasse ist, das geändert wird. Die folgenden Zeilen des Imports der Code die <xref:Microsoft.VisualStudio.Text.Classification.IClassificationTypeRegistryService> Typ.
```
[Import]
internal IClassificationTypeRegistryService ClassificationRegistry;
```
## <a name="getting-editor-functionality-from-a-mef-component-part"></a>Abrufen von Editor-Funktionen aus einer MEF-Komponente
Wenn Ihr vorhandene Code einer MEF-Komponente ist, können Sie MEF-metadatenexport, Editor-Komponenten nutzen.
#### <a name="to-consume-editor-functionality-from-a-mef-component-part"></a>Editor-Funktionen aus einer MEF-Komponente verwenden
1. Fügen Sie Verweise auf System.Composition.ComponentModel.dll, die im globalen Assemblycache (GAC) befindet, und auf die Editor-Assemblys.
2. Hinzufügen der entsprechenden using-Anweisungen.
```
using System.ComponentModel.Composition;
using Microsoft.VisualStudio.Text;
```
3. Hinzufügen der `[Import]` wie folgt zu Ihrer Schnittstelle Service-Attributs.
```
[Import]
ITextBufferFactoryService textBufferService;
```
4. Wenn Sie den Dienst erhalten haben, können Sie eine der zugehörigen Komponenten nutzen.
5. Wenn Sie kompiliert die Assembly, und fügen Sie ihn in das... \Common7\IDE\Components\-Ordner von Visual Studio-Installation.
## <a name="see-also"></a>Siehe auch
[Erweiterungspunkte für den Sprachdienst und den Editor](../extensibility/language-service-and-editor-extension-points.md)
| 59.137615 | 574 | 0.782191 | deu_Latn | 0.974608 |
d038768b1ec33752addbcf9aa7c50de42387198e | 9,189 | md | Markdown | data/readme_files/PyQt5.PyQt.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/PyQt5.PyQt.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/PyQt5.PyQt.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | # 各种各样的PyQt测试和例子
[](https://pyqt5.com)
[](https://codebeat.co/projects/github-com-pyqt5-pyqt-master)
https://pyqt.site 论坛是专门针对PyQt5学习和提升开设的网站,分享大家平时学习中记录的笔记和例子,以及对遇到的问题进行收集整理。
[](https://github.com/PyQt5/PyQt)
[](https://github.com/PyQt5/PyQt)
[](https://github.com/PyQt5/PyQt/fork)
如果您觉得这里的东西对您有帮助,别忘了帮忙点一颗:star:小星星:star:
## 微信博客小程序
<img src="Donate/wxblog.jpg" height="250" width="250">
[客户端下载](https://github.com/PyQt5/PyQtClient/releases)
[自定义控件](https://github.com/PyQt5/CustomWidgets)
## 目录
- Layouts
- [QVBoxLayout](QVBoxLayout)
- [QHBoxLayout](QHBoxLayout)
- [QGridLayout](QGridLayout)
- [腾讯视频热播列表](QGridLayout/HotPlaylist.py)
- [QFormLayout](QFormLayout)
- [QFlowLayout](QFlowLayout)
- [腾讯视频热播列表](QFlowLayout/HotPlaylist.py)
- Spacers
- [Horizontal Spacer](QSpacerItem)
- [Vertical Spacer](QSpacerItem)
- Buttons
- [QPushButton](QPushButton)
- [普通样式](QPushButton/NormalStyle.py)
- [按钮底部线条进度](QPushButton/BottomLineProgress.py)
- [按钮文字旋转进度](QPushButton/FontRotate.py)
- [按钮常用信号](QPushButton/SignalsExample.py)
- [QToolButton](QToolButton)
- [QRadioButton](QRadioButton)
- [QCheckBox](QCheckBox)
- Item Views
- [QListView](QListView)
- [显示自定义Widget](QListView/CustomWidgetItem.py)
- [显示自定义Widget并排序](QListView/CustomWidgetSortItem.py)
- [自定义角色排序](QListView/SortItemByRole.py)
- [QTreeView](QTreeView)
- [QTableView](QTableView)
- [表格内容复制](QTableView/CopyContent.py)
- [QColumnView](QColumnView)
- [QUndoView](QUndoView)
- Item Widgets
- [QListWidget](QListWidget)
- [删除自定义Item](QListWidget/DeleteCustomItem.py)
- [自定义可拖拽Item](QListWidget/DragDrop.py)
- [腾讯视频热播列表](QListWidget/HotPlaylist.py)
- [仿折叠控件效果](QListWidget/FoldWidget.py)
- [列表常用信号](QListWidget/SignalsExample.py)
- [在item中添加图标](Test/partner_625781186/13.combo_listwidget)
- [QTreeWidget](QTreeWidget)
- [通过json数据生成树形结构](QTreeWidget/ParsingJson.py)
- [拖拽显示为图片](Test/partner_625781186/12.1拖拽显示为图片)
- [点击父节点全选/取消全选子节点](QTreeWidget/testTreeWidget.py)
- [禁止父节点](QTreeWidget/ParentNodeForbid.py)
- [QTableWidget](QTableWidget)
- [Sqlalchemy动态拼接字段查询显示表格](QTableWidget/SqlQuery.py)
- [TableWidget嵌入部件](QTableWidget/TableWidget.py)
- Containers
- [QGroupBox](QGroupBox)
- [QScrollArea](QScrollArea)
- [仿QQ设置面板](QScrollArea/QQSettingPanel.py)
- [QToolBox](QToolBox)
- [QTabWidget](QTabWidget)
- [QStackedWidget](QStackedWidget)
- [左侧选项卡](QStackedWidget/LeftTabStacked.py)
- [QFrame](QFrame)
- [QWidget](QWidget)
- [样式表测试](QWidget/WidgetStyle.py)
- [QMdiArea](QMdiArea)
- [QDockWidget](QDockWidget)
- Input Widgets
- [QComboBox](QComboBox)
- [下拉数据关联](QComboBox/CityLinkage.py)
- [QFontComboBox](QFontComboBox)
- [QLineEdit](QLineEdit)
- [QTextEdit](QTextEdit)
- [文本查找高亮](QTextEdit/HighlightText.py)
- [QPlainTextEdit](QPlainTextEdit)
- [QSpinBox](QSpinBox)
- [QDoubleSpinBox](QDoubleSpinBox)
- [QTimeEdit](QTimeEdit)
- [QDateTime](QDateTime)
- [QDial](QDial)
- [QScrollBar](QScrollBar)
- [滚动条样式美化](QScrollBar/StyleScrollBar.py)
- [QSlider](QSlider)
- [滑动条点击定位](QSlider/ClickJumpSlider.py)
- [双层圆环样式](QSlider/QssQSlider.py)
- Display Widgets
- [QLabel](QLabel)
- [图片加载显示](QLabel/ShowImage.py)
- [图片旋转](QLabel/ImageRotate.py)
- [仿网页图片错位显示](QLabel/ImageSlipped.py)
- [显示.9格式图片(气泡)](QLabel/NinePatch.py)
- [圆形图片](QLabel/CircleImage.py)
- [QTextBrowser](QTextBrowser)
- [QGraphicsView](QGraphicsView)
- [绘制世界地图](QGraphicsView/WorldMap.py)
- [添加QWidget](QGraphicsView/AddQWidget.py)
- [QCalendarWidget](QCalendarWidget)
- [QSS美化日历样式](QCalendarWidget/CalendarQssStyle.py)
- [QLCDNumber](QLCDNumber)
- [QProgressBar](QProgressBar)
- [常规样式美化](QProgressBar/SimpleStyle.py)
- [圆圈进度条](QProgressBar/RoundProgressBar.py)
- [百分比进度条](QProgressBar/PercentProgressBar.py)
- [Metro进度条](QProgressBar/MetroCircleProgress.py)
- [水波纹进度条](QProgressBar/WaterProgressBar.py)
- [QOpenGLWidget](QOpenGLWidget)
- [QWebView](QWebView)
- [梦幻树](QWebView/DreamTree.py)
- [获取Cookie](QWebView/GetCookie.py)
- [和Js交互操作](QWebView/JsSignals.py)
- [网页整体截图](QWebView/ScreenShotPage.py)
- [播放Flash](QWebView/PlayFlash.py)
- [拦截请求](QWebView/BlockRequest.py)
- [QWebEngineView](QWebEngineView)
- [获取Cookie](QWebEngineView/GetCookie.py)
- [和Js交互操作](QWebEngineView/JsSignals.py)
- [网页整体截图](QWebEngineView/ScreenShotPage.py)
- [同网站不同用户](QWebEngineView/SiteDiffUser.py)
- [拦截请求](QWebEngineView/BlockRequest.py)
- [拦截请求内容](QWebEngineView/BlockRequestData.py)
- [浏览器下载文件](Test/partner_625781186/6.QWebEngineView下载文件)
- [打印网页](Test/partner_625781186/17_打印预览qwebengineview)
- [QThread](QThread)
- [继承QThread](QThread/InheritQThread.py)
- [moveToThread](QThread/moveToThread.py)
- [线程挂起恢复](QThread/SuspendThread.py)
- [线程休眠唤醒](QThread/WakeupThread.py)
- [QtQuick](QtQuick)
- [Flat样式](QtQuick/FlatStyle.py)
- [QML与Python交互](QtQuick/Signals.py)
- [QtChart](QtChart)
- [折线图](QtChart/LineChart.py)
- [折线堆叠图](QtChart/LineStack.py)
- [柱状堆叠图](QtChart/BarStack.py)
- [LineChart自定义xy轴](QtChart/CustomXYaxis.py)
- [ToolTip提示](QtChart/ToolTip.py)
- [DynamicSpline动态曲线图](QtChart/DynamicSpline.py)
- [区域图表](QtChart/AreaChart.py)
- [柱状图表](QtChart/BarChart.py)
- [饼状图表](QtChart/PieChart.py)
- [样条图表](QtChart/SplineChart.py)
- [百分比柱状图表](QtChart/PercentBarChart.py)
- [横向柱状图表](QtChart/HorizontalBarChart.py)
- [横向百分比柱状图表](QtChart/HorizontalPercentBarChart.py)
- [散点图表](QtChart/ScatterChart.py)
- [图表主题动画](QtChart/ChartThemes.py)
- [QtDataVisualization](QtDataVisualization)
- [柱状图3D](QtDataVisualization/BarsVisualization.py)
- [太阳磁场线](QtDataVisualization/MagneticOfSun.py)
- [余弦波3D](QtDataVisualization/ScatterVisualization.py)
- [PyQtGraph](PyQtGraph)
- [鼠标获取X轴坐标](PyQtGraph/mouseFlow.py)
- [禁止右键点击功能、鼠标滚轮,添加滚动条等功能](PyQtGraph/graph1.py)
- [工具类](PyQtGraph/tools.py)
- [滚动区相关](PyQtGraph/testGraphAnalysis.py)
- [Animation](QPropertyAnimation)
- [窗口淡入淡出](QPropertyAnimation/FadeInOut.py)
- [右键菜单动画](QPropertyAnimation/MenuAnimation.py)
- [点阵特效](QPropertyAnimation/RlatticeEffect.py)
- [页面切换/图片轮播动画](QPropertyAnimation/PageSwitching.py)
- [窗口抖动](QPropertyAnimation/ShakeWindow.py)
- [窗口翻转动画(仿QQ)](QPropertyAnimation/FlipWidgetAnimation.py)
- [折叠动画](Test/partner_625781186/2.折叠控件)
- [RemoteObjects](QtRemoteObjects)
- [简单界面数据同步](QtRemoteObjects/SyncUi)
- [modelview](QtRemoteObjects/modelview)
- [simpleswitch](QtRemoteObjects/simpleswitch)
- [QPainter](QPainter)
- Others
- [QFont](QFont)
- [加载自定义字体](QFont/AwesomeFont.py)
- [QMenu](QMenu)
- [菜单设置多选并且不关闭](QMenu/MultiSelect.py)
- [悬停菜单](Test/partner_625781186/5.hoverMenu)
- [QAxWidget](QAxWidget)
- [显示Word、Excel、PDF文件](QAxWidget/ViewOffice.py)
- [QSplitter](QSplitter)
- [分割窗口的分割条重绘](QSplitter/RewriteHandle.py)
- [QSerialPort](QSerialPort)
- [串口调试小助手](QSerialPort/SerialDebugAssistant.py)
- [QProxyStyle](QProxyStyle)
- [Tab文字方向](QProxyStyle/TabTextDirection.py)
- [QMessageBox](QMessageBox)
- [消息对话框倒计时关闭](QMessageBox/CountDownClose.py)
- [自定义图标等](QMessageBox/CustomColorIcon.py)
- [消息框按钮文字汉化](QMessageBox/ChineseText.py)
- [QFileSystemModel](QFileSystemModel)
- [自定义图标](QFileSystemModel/CustomIcon.py)
- [QGraphicsDropShadowEffect](QGraphicsDropShadowEffect)
- [边框阴影动画](QGraphicsDropShadowEffect/ShadowEffect.py)
- [QSystemTrayIcon](QSystemTrayIcon)
- [最小化到系统托盘](QSystemTrayIcon/MinimizeToTray.py)
- [Demo](Demo)
- [重启窗口Widget](Demo/RestartWindow.py)
- [简单的窗口贴边隐藏](Demo/WeltHideWindow.py)
- [嵌入外部窗口](Demo/EmbedWindow.py)
- [简单跟随其它窗口](Demo/FollowWindow.py)
- [调整窗口显示边框](Demo/ShowFrameWhenDrag.py)
- [简单探测窗口和放大截图](Demo/ProbeWindow.py)
- [无边框圆角对话框](Demo/FramelessDialog.py)
- [无边框自定义标题栏窗口](Demo/FramelessWindow.py)
- [右下角弹出框](Demo/WindowNotify.py)
- [程序重启](Demo/AutoRestart.py)
- [自定义属性](Demo/CustomProperties.py)
- [调用截图DLL](Demo/ScreenShotDll.py)
- [单实例应用](Demo/SingleApplication.py)
- [简单的右下角气泡提示](Demo/BubbleTips.py)
- [右侧消息通知栏](Demo/Notification.py)
- [验证码控件](Demo/VerificationCode.py)
- [人脸特征点](Demo/FacePoints.py)
- [使用Threading](Demo/QtThreading.py)
- [背景连线动画](Demo/CircleLine.py)
- [判断信号是否连接](Demo/IsSignalConnected.py)
- [调用虚拟键盘](Demo/CallVirtualKeyboard.py)
- [动态忙碌光标](Demo/GifCursor.py)
# QQ群
[PyQt 学习](https://jq.qq.com/?_wv=1027&k=5QVVEdF)
# [Donate-打赏](Donate)
感谢所有捐助者的鼓励,[这里](https://github.com/PyQt5/thanks) 列出了捐助者名单(由于一些收款渠道无法知道对方是谁,如有遗漏请联系我修改)
<a href="javascript:;" alt="微信"><img src="Donate/weixin.png" height="350" width="350"></a>or<a href="javascript:;" alt="支付宝"><img src="Donate/zhifubao.png" height="350" width="350"></a>
[一些Qt写的三方APP](https://github.com/PyQt5/3rd-Apps)
| 35.206897 | 185 | 0.719556 | yue_Hant | 0.807823 |
d038904155c44f36f7ea815ea30de646b4c1adf4 | 1,290 | md | Markdown | README.md | qlik-oss/core-chopper | bf94ead87ba0fb9c031d1a19c3a02e58f35fcb30 | [
"MIT"
] | 1 | 2018-12-11T15:50:39.000Z | 2018-12-11T15:50:39.000Z | README.md | qlik-oss/core-chopper | bf94ead87ba0fb9c031d1a19c3a02e58f35fcb30 | [
"MIT"
] | 52 | 2018-11-07T18:40:39.000Z | 2020-10-12T13:57:20.000Z | README.md | qlik-oss/core-chopper | bf94ead87ba0fb9c031d1a19c3a02e58f35fcb30 | [
"MIT"
] | 1 | 2018-12-21T19:18:30.000Z | 2018-12-21T19:18:30.000Z | # core-chopper
*As of 1 July 2020, Qlik Core is no longer available to new customers. No further maintenance will be done in this repository.*
A Qlik Core gamification using bicycle sensors.
## Prerequisites
**This repository currently requires physical/hardware sensors to properly test it:**
* ANT+ USB stick to receive ANT+ events
* ANT+ sensors attached to e.g. a bike
### Windows
Additional components are needed on Windows:
* node-gyp build tools: `npm i -g --production windows-build-tools`
* USB driver for the ANT+ stick: http://zadig.akeo.ie/
## Get started
```bash
docker-compose up -d
npm i
node server
```
And in another terminal:
```bash
npm start
```
Open http://localhost:1234.
## Troubleshooting
* If the server hangs after the printout `reader:starting` it is most likely related to missing NFC drivers. You can skip using NFC by passing a `--disable-nfc` flag when starting the server.
## TODO
### Features
* Add graphics for e.g. "velocity text" [*********--]
* Implement multiple modes
* Do not drop all the way to 0 height when dying, add warning when descending more than X, then kill at X * 1.5
### Fixes
* Refactor game implementation
* Fix physics/animations, make them less jaggy
* Fix sprites for floor and clouds
* Fix floor collision
| 23.454545 | 191 | 0.727907 | eng_Latn | 0.98603 |
d038bef58896901fd1daa41fbe6db2cb46cc1c09 | 2,352 | md | Markdown | README.md | ronald/radiant | 8600796903f2b103d3ef24916a02bf8b65442078 | [
"MIT"
] | null | null | null | README.md | ronald/radiant | 8600796903f2b103d3ef24916a02bf8b65442078 | [
"MIT"
] | null | null | null | README.md | ronald/radiant | 8600796903f2b103d3ef24916a02bf8b65442078 | [
"MIT"
] | null | null | null | ## Welcome to Radiant
_Radiant is making major changes. The master branch may be broken._
Radiant is a no-fluff, open source content management system designed for
small teams. It is similar to Textpattern or MovableType, but is a general
purpose content management system (not just a blogging engine).
[](http://travis-ci.org/radiant/radiant)
[](https://gemnasium.com/radiant/radiant)
Radiant features:
* An elegant user interface
* The ability to arrange pages in a hierarchy
* Flexible templating with layouts, snippets, page parts, and a custom tagging
language (Radius: https://github.com/jlong/radius)
* A simple user management/permissions system
* Support for Markdown and Textile as well as traditional HTML (it's easy to
create other filters)
* An advanced plugin system
* Operates in two modes: dev and production depending on the URL
* A caching system which expires pages every 5 minutes
* Built using Ruby on Rails
* And much more...
## License
Radiant is released under the MIT license and is copyright (c) 2006-2014
John W. Long, Sean Cribbs, and Jim Gay. A copy of the MIT license can be
found in the LICENSE file.
## Installation and Setup
Radiant is a traditional Ruby on Rails application, meaning that you can
configure and run it the way you would a normal Rails application.
See the [INSTALL](INSTALL.md) file for more details.
### Installation of a Prerelease
As Radiant nears newer releases, you can experiment with any prerelease version.
Install the prerelease gem with the following command:
$ gem install radiant --prerelease
This will install the gem with the prerelease name, for example: ‘radiant-2.0.0.alpha’.
### Upgrading an Existing Project to a newer version
1. Update the Radiant assets from in your project:
$ rake radiant:update
2. Migrate the database:
$ rake production db:migrate
3. Restart the web server
## Support
The best place to get support is on the mailing list:
http://radiantcms.org/mailing-list/
Most of the development for Radiant happens on Github:
http://github.com/radiant/radiant/
The project wiki is here:
http://wiki.github.com/radiant/radiant/
Enjoy!
--
The Radiant Dev Team
http://radiantcms.org
| 28.682927 | 119 | 0.765306 | eng_Latn | 0.976588 |
d039813d2636b83b38f3b28bd72915f47c973edf | 1,366 | md | Markdown | content/python_while.md | openscreencast/openscreencast_md | 024da3a61a3b111e9cf5b78443c81fb958021642 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | content/python_while.md | openscreencast/openscreencast_md | 024da3a61a3b111e9cf5b78443c81fb958021642 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | content/python_while.md | openscreencast/openscreencast_md | 024da3a61a3b111e9cf5b78443c81fb958021642 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Title: Python-Programmierung - while-Schleife
Date: 2011-05-06 01:58
Author: Heiko
Category: Video
Tags: CC by-sa,Gnome,Linux,Neueinsteiger,Ogg Theora,ohne Musik,screencast,ubuntu,Python,Programmierung,while
Slug: python_while
Album: https://www.openscreencast.de
Duration: 358000
License: http://creativecommons.org/licenses/by-sa/3.0/
Youtube: rAh8_41_Xnw
Vimeo: 23219179
Tumblr: http://openscreencast.tumblr.com/post/8353944238/python-programmierung-while-schleife-ubuntu
Oggfile: https://www.openscreencast.de/archive/python_while_128.ogg
Oggfileom: https://www.openscreencast.de/archive/python_while_oM_128.ogg
Webmfile: https://www.openscreencast.de/archive/python_while_128.webm
Mp4file: https://www.openscreencast.de/archive/python_while_128.mp4
Srtfile: https://www.openscreencast.de/archive/python_while_128.srt
Srtfile_om: https://www.openscreencast.de/archive/python_while_oM_128.srt
Image: https://www.openscreencast.de/archive/python_while_128.png
Ausgangspunkt: Ubuntu 10.10, Gnome 2.32
Zielgruppe: Neueinsteiger
Links:
* [python.de](http://www.python.de "Link zu Python.de")
* [python.org](http://www.python.org "Link zu Python.org")
* [freiesmagazin.de 11/2010](http://www.freiesmagazin.de/freiesMagazin-2010-11 "Link zu freiesmagazin.de")
* [WP:while-Schleife](http://de.wikipedia.org/wiki/While-Schleife "Link zu wikipedia.de if")
| 44.064516 | 108 | 0.795754 | yue_Hant | 0.454013 |
d0399879def5b960afcba94958ff5b2642ac9e97 | 1,294 | md | Markdown | docs/index.md | ATLAS-Analytics/GATES | ad466da9b5bb370fbdc87a33a343af4139bf9d76 | [
"MIT"
] | null | null | null | docs/index.md | ATLAS-Analytics/GATES | ad466da9b5bb370fbdc87a33a343af4139bf9d76 | [
"MIT"
] | 2 | 2020-04-04T01:26:08.000Z | 2021-05-08T04:48:00.000Z | docs/index.md | ATLAS-Analytics/GATES | ad466da9b5bb370fbdc87a33a343af4139bf9d76 | [
"MIT"
] | null | null | null | # Welcome to GATES
## What is it?
GATES is a service that simplifies running [AB tests](https://en.wikipedia.org/wiki/A/B_testing).
## Links
* [Docker](https://hub.docker.com/r/atlasanalyticsservice/gates)
* [GitHub](https://github.com/ATLAS-Analytics/GATES)
* [Documentation](https://atlas-analytics.github.io/GATES/)
## TODO
* check logout works
* define API
* improve docs
* get server
* tests / stress tests
* analytics
* receiving (separate smaller server pod/service, only 2-3 endpoints). Autoscaling with trigger on latency.
* receiver-service is not nodeport
* reduce rights of the fronter account
* add to configuration option to not need authentication so non edu people can use it.
team has:
* name, description, time of creation, members, url
experiment has:
* name, description, time of creation, url, state (active, paused, done )
* options, generate code, analysis input rate, results if recieved, export selector data.
* Info (name, creation time, description, status )
* Setup (name of id variable, variables, buckets per variable, collects results (checkbox), collected variable name )
* Generate code (curl, python, c++,... )
* Data (requests served total & per bucket, stat. sign, if collected: collected fraction, plot of results per bucket, data export)
| 36.971429 | 130 | 0.741886 | eng_Latn | 0.888007 |
d03a530eed8e34a31477878b81d1c1a6b79b622d | 2,266 | md | Markdown | docs/relational-databases/system-catalog-views/sys-filetable-system-defined-objects-transact-sql.md | luis-cazares-sql/sql-docs.es-es | 6ec4a5eef65cee8f0e495083de92176be1d92819 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-catalog-views/sys-filetable-system-defined-objects-transact-sql.md | luis-cazares-sql/sql-docs.es-es | 6ec4a5eef65cee8f0e495083de92176be1d92819 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/system-catalog-views/sys-filetable-system-defined-objects-transact-sql.md | luis-cazares-sql/sql-docs.es-es | 6ec4a5eef65cee8f0e495083de92176be1d92819 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: sys.filetable_system_defined_objects (Transact-SQL)
title: sys.filetable_system_defined_objects (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 06/10/2016
ms.prod: sql
ms.prod_service: database-engine
ms.reviewer: ''
ms.technology: system-objects
ms.topic: language-reference
f1_keywords:
- sys.filetable_system_defined_objects_TSQL
- filetable_system_defined_objects
- filetable_system_defined_objects_TSQL
- sys.filetable_system_defined_objects
dev_langs:
- TSQL
helpviewer_keywords:
- sys.filetable_system_defined_objects catalog view
ms.assetid: 62022e6b-46f6-495f-b14b-53f41e040361
author: WilliamDAssafMSFT
ms.author: wiassaf
ms.openlocfilehash: f18fdc79d85239422d80687a8ba54b5027caae85
ms.sourcegitcommit: a9e982e30e458866fcd64374e3458516182d604c
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 01/11/2021
ms.locfileid: "98098018"
---
# <a name="sysfiletable_system_defined_objects-transact-sql"></a>sys.filetable_system_defined_objects (Transact-SQL)
[!INCLUDE [SQL Server](../../includes/applies-to-version/sqlserver.md)]
Muestra una lista de los objetos definidos por el sistema relacionados con objetos FileTable. Contiene una fila por cada objeto definido por el sistema.
Cuando se crea un objeto FileTable, los objetos relacionados como restricciones e índices se crean al mismo tiempo. No puede modificar ni quitar estos objetos; desaparecen solo cuando se quita el propio objeto FileTable.
Para más información sobre FileTables, vea [FileTables (SQL Server)](../../relational-databases/blob/filetables-sql-server.md).
|Columna|Tipo de datos|Descripción|
|------------|---------------|-----------------|
|**object_id**|**int**|Identificador de objeto del objeto definido por el sistema relacionado con una tabla FileTable.<br /><br /> Hace referencia al objeto de **Sys. Objects**.|
|**parent_object_id**|**int**|Identificador de objeto de la tabla FileTable primaria.<br /><br /> Hace referencia al objeto de **Sys. Objects**.|
## <a name="see-also"></a>Consulte también
[Crear, modificar y quitar FileTables](../../relational-databases/blob/create-alter-and-drop-filetables.md)
[Administrar FileTables](../../relational-databases/blob/manage-filetables.md)
| 46.244898 | 223 | 0.769197 | spa_Latn | 0.560561 |
d03d193f45dc235671db671a6d0b7e050abc6c0e | 885 | md | Markdown | _posts/2020-01-10-se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-01-10-se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | null | null | null | _posts/2020-01-10-se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida.md | tatudoquei/tatudoquei.github.io | a3a3c362424fda626d7d0ce2d9f4bead6580631c | [
"MIT"
] | 1 | 2022-01-13T07:57:24.000Z | 2022-01-13T07:57:24.000Z | ---
layout: post
item_id: 2849045125
title: >-
Se você afirma que não utiliza o SUS você está enganado, por Rita Almeida
author: Tatu D'Oquei
date: 2020-01-10 18:49:30
pub_date: 2020-01-10 18:49:30
time_added: 2020-01-11 15:01:01
category: refletimos
tags: []
image: https://jornalggn.com.br/sites/default/files/2020/01/se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida-sus-agencia-brasil.jpg
---
Existem muitos brasileiros que não se importam com o SUS como deveriam, porque acreditam que não são usuários do sistema. Pensam que, pagar por um plano de saúde e não se consultar no posto do bairro, significa não utilizar o SUS.
**Link:** [https://jornalggn.com.br/artigos/se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida/](https://jornalggn.com.br/artigos/se-voce-afirma-que-nao-utiliza-o-sus-voce-esta-enganado-por-rita-almeida/)
| 46.578947 | 230 | 0.764972 | por_Latn | 0.969023 |
d03dfefc15a3a7b731d8153e6f3294d0b43d66f8 | 1,401 | md | Markdown | gem/README.md | vaginessa/metasploit-payloads | 863414b652a98ab12ae15d90e6deed568d1b4030 | [
"PSF-2.0"
] | 1,331 | 2015-04-13T22:19:39.000Z | 2022-03-31T06:59:35.000Z | gem/README.md | vaginessa/metasploit-payloads | 863414b652a98ab12ae15d90e6deed568d1b4030 | [
"PSF-2.0"
] | 489 | 2015-07-01T02:19:19.000Z | 2022-03-31T23:43:26.000Z | gem/README.md | vaginessa/metasploit-payloads | 863414b652a98ab12ae15d90e6deed568d1b4030 | [
"PSF-2.0"
] | 645 | 2015-04-21T21:53:02.000Z | 2022-03-29T05:36:14.000Z | # Metasploit Payloads
This gem is a Metasploit-specific gem that contains all of the
Meterpreter payloads (except for Mettle). This is made up of:
* Windows DLLs
* Java Classes
* PHP/Python Scripts
Mettle, the Native Linux / Posix payload, currently is developed at
https://github.com/rapid7/mettle (to be moved here at some point?)
## Installation
Given the nature of the contents of this gem, installation
outside of Metasploit is not advised. To use Meterpreter,
download and install Metasploit itself.
## Building
To build the gem:
1. Update the version number in `lib/metasploit-payloads/version.rb`
1. Run:
- `rake win_prep` to build on Windows
- `rake java_prep` to build Java files
- `rake python_prep` and `rake php_prep` to copy the latest PHP/Python
meterpreter files into place
1. Binaries will be built in the `data` folder.
1. Run `rake build` to generate the new gem file using content in
meterpreter folder.
1. Run `rake release` to release the binary to RubyGems.
Note, when using the command `rake win_prep` and related Windows rake
tasks, you must be in the Visual Studio Developer command prompt,
**and** have a path to a git binary in your default path. If your
git.exe is part of posh-git or GitHub for Windows, that means adding
something like the following to your path:
`"C:\Users\USERNAME\AppData\Local\GitHub\PortableGit_LONG_UUID_STRING_THING\bin"`
| 33.357143 | 81 | 0.767309 | eng_Latn | 0.983144 |
d03e6028ac53b953ed1f6298538935382222db97 | 395 | md | Markdown | README.md | jorisroovers/software-engineering-capabilities | ca5b9e89b0da1f76906cbd2a695e2a6fb41372b1 | [
"MIT"
] | null | null | null | README.md | jorisroovers/software-engineering-capabilities | ca5b9e89b0da1f76906cbd2a695e2a6fb41372b1 | [
"MIT"
] | null | null | null | README.md | jorisroovers/software-engineering-capabilities | ca5b9e89b0da1f76906cbd2a695e2a6fb41372b1 | [
"MIT"
] | null | null | null | # software-engineering-capabilities
Mind map with software engineering capabilities.
[Blogpost: Software Engineering Capability Taxonomy](https://jorisroovers.com/posts/software-engineering-capability-taxonomy)
Over time, I've started splitting out subtrees into separate mindmaps - see the various `*.opml` files in this repository.

| 43.888889 | 125 | 0.820253 | eng_Latn | 0.884965 |
d03f831d27cf52d3b9892669a27cd2f62f89c9bf | 2,750 | md | Markdown | docs/en/Getting_Started.md | genloz/giojs | 8a45d419518e90301ad68cce174497cc0da46d1a | [
"Apache-2.0"
] | 1,543 | 2018-06-10T08:20:45.000Z | 2022-03-24T19:32:33.000Z | docs/en/Getting_Started.md | genloz/giojs | 8a45d419518e90301ad68cce174497cc0da46d1a | [
"Apache-2.0"
] | 52 | 2018-04-12T10:10:09.000Z | 2021-05-27T00:54:39.000Z | docs/en/Getting_Started.md | genloz/giojs | 8a45d419518e90301ad68cce174497cc0da46d1a | [
"Apache-2.0"
] | 237 | 2018-06-11T04:04:50.000Z | 2022-03-15T12:38:38.000Z | # Gio.js Hello World
**Gio.js** is an open source library for data visualization on a 3D globe. This library is inspired by the [Arms Trade Visualization](https://github.com/dataarts/armsglobe) project developed by Michael Chang and presented during Google Ideas INFO 2012. What makes Gio.js different is that it is fully customizable for user and friendly to future developers.
<!-- [START screenshot] -->
<p>
<a href="https://github.com/syt123450/Gio.js/blob/master/assets/readme/Gio.png"><img src="https://github.com/syt123450/Gio.js/blob/master/assets/readme/Gio.gif"/></a>
</p>
<!-- [END screenshot] -->
<!-- [START getstarted] -->
## Getting Started
### Installation
- Option 1: \<script\> tag
Include Three.js dependency:
```html
<script src="three.min.js"></script>
```
Include local Gio.js library
```html
<script src="gio.min.js"></script>
```
or through CDN
```html
<script src="https://raw.githack.com/syt123450/giojs/master/build/gio.min.js"></script>
```
- Option 2: npm
```bash
npm install giojs --save
```
- Option 3: yarn
```bash
yarn add giojs
```
### Usage
After including "three.min.js" and "gio.min.hs" in your html, create a `div` to render the 3D Gio globe:
```html
<!DOCTYPE HTML>
<html>
<head>
<!-- include three.min.js library-->
<script src="three.min.js"></script>
<!-- include Gio.min.js library-->
<script src="gio.min.js"></script>
</head>
<body>
<!-- container to draw 3D Gio globe-->
<div id="globalArea"></div>
</body>
</html>
```
To initialize and render the 3D Gio globe:
```html
<script>
// get the container to hold the IO globe
var container = document.getElementById( "globalArea" );
// create controller for the IO globe, input the container as the parameter
var controller = new GIO.Controller( container );
// use addData() API to add the the data to the controller
controller.addData( data );
// call the init() API to show the IO globe in the browser
controller.init();
</script>
```
If everything goes well, you should see [this](http://giojs.org/examples/00_hello_world(simplest).html).
<!-- [END getstarted] -->
<!-- [START documentation] -->
## Other Documentation
- To learn more about the [Basic Elements](https://github.com/syt123450/Gio.js/blob/master/docs/en/Basic_Elements.md)
- To see the full API document in Markdown format, see [APIs](https://github.com/syt123450/Gio.js/blob/master/docs/en/APIs.md)
- To contribute to Gio.js's code base, read [Developer Guide](https://github.com/syt123450/Gio.js/blob/master/docs/en/Developer_Guide.md)
- See Gio's [offical website](http://giojs.org) for everything above and plus lots of live examples
<!-- [END documentation] -->
[screenshot-url]: http://via.placeholder.com/400x300
| 28.350515 | 357 | 0.697818 | eng_Latn | 0.539534 |
d03fcd85c3a2dcd357e8d1d497f96b6952a483d1 | 3,222 | md | Markdown | getting-started/windows/add-control-to-project.md | yordan-mitev/xamarin-forms-docs | 19d58da96eb8bc47a6a5df9199133061b15d6361 | [
"MIT",
"Unlicense"
] | 1 | 2020-05-13T16:52:43.000Z | 2020-05-13T16:52:43.000Z | getting-started/windows/add-control-to-project.md | doc22940/xamarin-forms-docs | 3e72a514f195733c0b2a0fa8b2a9fc0099c935f5 | [
"MIT",
"Unlicense"
] | null | null | null | getting-started/windows/add-control-to-project.md | doc22940/xamarin-forms-docs | 3e72a514f195733c0b2a0fa8b2a9fc0099c935f5 | [
"MIT",
"Unlicense"
] | null | null | null | ---
title: Add a Control to Your Project
page_title: Add a Control to Your Project
description: Add a Control to Your Project
slug: win-add-control-project
tags: Add a Control to Your Project
position: 1
---
# Add a Control to Your Project
## Show the Telerik Toolbox
In order to show the Telerik Toolbox in Visual Studio and add a control, you should navigate to Telerik > Telerik UI for Xamarin > Open Telerik UI for Xamarin Toolbox.
#### __Figure 1: Show the Telerik UI for Xamarin Toolbox__

## Add a Control to Your Project
Embedding the controls from the suite is made as easy as possible and all you need to do is simply drag one of the controls within your XAML file. This will add the control definition and will also map the needed namespace declarations. For the purpose of this example we will add [RadListView] ({%slug listview-overview%}) Control.
#### __Figure 2: Adding Telerik controls to your application__

## Populating RadListView with data
First thing you need to do is to create a data and view model classes:
#### Example 1: Populating RadListView with data in C#.
```C#
public class SourceItem
{
public SourceItem(string name)
{
this.Name = name;
}
public string Name { get; set; }
}
public class ViewModel
{
public ViewModel()
{
this.Source = new List<SourceItem> { new SourceItem("Tom"), new SourceItem("Anna"), new SourceItem("Peter"), new SourceItem("Teodor"), new SourceItem("Lorenzo"), new SourceItem("Andrea"), new SourceItem("Martin") };
}
public List<SourceItem> Source { get; set; }
}
```
Update the setup of the ListView you created in __Figure 6__ to be like:
#### Example 2: Update the setup of the ListView in XAML.
```XAML
<telerikDataControls:RadListView x:Name="listView" ItemsSource="{Binding Source}">
<telerikDataControls:RadListView.BindingContext>
<local:ViewModel />
</telerikDataControls:RadListView.BindingContext>
<telerikDataControls:RadListView.ItemTemplate>
<DataTemplate>
<telerikListView:ListViewTemplateCell>
<telerikListView:ListViewTemplateCell.View>
<Grid>
<Label Margin="10" Text="{Binding Name}" />
</Grid>
</telerikListView:ListViewTemplateCell.View>
</telerikListView:ListViewTemplateCell>
</DataTemplate>
</telerikDataControls:RadListView.ItemTemplate>
</telerikDataControls:RadListView>
```
The results should be:
#### __Figure 3: Result in Android, iOS and Windows__

## Next Steps
Now that you have your first Telerik UI for Xamarin control running, you may want to explore the different features, behavior and appearances. Below you can find guidance on getting started with such tasks:
- [Explore Control Features]({%slug win-getting-started-explore-control-features %})
- [Change control appearance]({%slug win-getting-started-change-control-appearance %})
- [Further information]({%slug win-getting-started-next-steps%})
## See Also
- [System Requirements]({%slug system-requirements %})
- [Telerik NuGet Server]({%slug telerik-nuget-server%}) | 36.202247 | 332 | 0.742396 | eng_Latn | 0.736252 |
d03fe6c9607c06573cd08d71b92f2b8879ad5617 | 11,913 | md | Markdown | README.md | abadc0de/moonwalk | 8d2e0730599fb89eaceb41ccdf8750f1837064ec | [
"Unlicense"
] | 4 | 2015-04-10T08:26:52.000Z | 2021-02-19T11:25:49.000Z | README.md | abadc0de/moonwalk | 8d2e0730599fb89eaceb41ccdf8750f1837064ec | [
"Unlicense"
] | null | null | null | README.md | abadc0de/moonwalk | 8d2e0730599fb89eaceb41ccdf8750f1837064ec | [
"Unlicense"
] | 1 | 2017-06-28T18:10:28.000Z | 2017-06-28T18:10:28.000Z | # Moonwalk #
Moonwalk is a [Swagger][1] server implementation for Lua.
**Warning:** This project is under heavy development. The
Moonwalk API is not stable. Moonwalk itself is not stable.
Don't use this in production (yet).
Moonwalk is designed to work under various host environments.
Currently Moonwalk supports [CGI][2], [Mongoose][3], [Civetweb][4], and
[LuaNode][5], as well as a built-in testing server, "SocketServer".
Support can easily be added for other host environments.
This document should cover most of what you need to get started.
For more advanced topics, see the [generated documentation
](http://abadc0de.github.io/moonwalk/docs).
[1]: http://developers.helloreverb.com/swagger/
[2]: http://www.ietf.org/rfc/rfc3875
[3]: https://github.com/cesanta/mongoose
[4]: https://github.com/sunsetbrew/civetweb
[5]: https://github.com/ignacio/luanode
[6]: http://luasocket.luaforge.net
## Installing ##
To get started with Moonwalk, you can clone this git repository, which
includes Moonwalk, the API Explorer, example code, and documentation.
git clone https://github.com/abadc0de/moonwalk.git
luarocks install moonwalk --from=moonwalk/rocks
If you don't need the API Explorer or any examples, you can install Moonwalk
without cloning the repository:
luarocks install moonwalk --from=http://abadc0de.github.io/moonwalk/rocks
## Overview ##
### Index page ###
Your API's index page should something like this:
-- index.lua
-- 1: Load Moonwalk
local api = require 'moonwalk/api'
-- 2: Register APIs
api:load_class 'user'
api:load_class 'widget'
api:load_class 'gadget'
-- 3: Handle request
api:handle_request(...)
1. Require Moonwalk and assign it to a local variable.
2. Call `api:load_class` once for each API class (see below).
3. Call `api:handle_request`.
Make sure to pass the ellipses (varargs) as shown.
### Documenting your API ###
Functions in your API should be decorated with doc blocks.
Valid tags include `@path`, `@param`, and `@return`.
Here's a quick example of a complete API with a single operation:
**user.lua**
--- User API
return {
--- Create a new user.
--
-- @path POST /user/
--
-- @param email: User's email address.
-- @param password: User's new password.
-- @param phone (optional): User's phone number.
--
-- @return (number): User's ID number.
--
create = function(email, password, phone)
return 123
end,
}
Moonwalk parses the docstring to determine the request method, resource
path, and parameters for the function.
### The @path tag ###
The `@path` tag is used to provide the HTTP request method and resource
path for the operation. "Path parameters" may be included as part of the
path, by enclosing the parameter name in braces. For example:
@path GET /widget/{id}/
### The @param tag ###
The `@param` tag may contain additional information, enclosed in
parentheses, after the parameter name. This can include the
**data type**, the word "from" followed by the **param type**,
optionally separated by punctuation. It may also include punctuation
after the parentheses to visually separate the description. For example:
@param id (integer, from path): The ID of the widget to fetch.
### The @return tag ###
The `@return` tag may contain a data type annotation, enclosed in
parentheses, before the description, optionally followed by
punctuation. For example:
@return (integer): The ID of the newly-created widget.
## Validation ##
In `@param` and `@return` tags, any **data type** name may be used,
but built-in type checking is only provided for the following:
`integer`, `number`, `string`, `boolean`, `object`, `array`
In `@param` tags, the **param type** determines how information is
sent to the API. Valid values are:
`path`, `query`, `body`, `header`, `form`
If the **data type** annotation is present, it *must be listed first*.
All other parenthesized annotations may be listed in any order.
Any annotation may be omitted, in which case the default values will be used.
If all annotations within the parentheses are omitted, the parentheses may
also be omitted.
The default **data type** is `string`, and the default **param type**
is determined as follows:
* If the parameter name appears in curly brackets in the `@path`,
the default param type is `path`.
* If the HTTP method is `POST`, the default param type is `form`.
* In all other cases, the default param type is `query`.
In addition to a **data type** and **param type**, the `@param` tag may
include additional validation annotations within the parentheses following
the parameter name. Recognized annotations draw from the [JSON Schema][8]
validation specification.
[8]:http://json-schema.org/latest/json-schema-validation.html
### Validation for all types ###
These validation annotations are available for any parameter.
* **optional**
By default, all parameters are required. To make a parameter optional,
use the `optional` annotation.
### Numeric validation ###
These validation annotations are available for
`number` and `integer` parameters.
* **maximum** *(partly implemented)*
Numeric parameters may enforce a maximum value using the
annotation `maximum N [exclusive]`, where *N* is
any valid number, optionally followed by `exclusive` to
indicate that the value must be less than (but not equal to) *N*.
* **minimum** *(partly implemented)*
Numeric parameters may enforce a minimum value using the
annotation `minimum N [exclusive]`, where *N* is
any valid number, optionally followed by `exclusive` to
indicate that the value must be greater than (but not equal to) *N*.
* **multipleOf**
Numeric parameters may limit a value to being evenly divisible
by a number using the annotation `multipleOf N`,
where *N* is any valid number greater than 0.
### String validation ###
These validation annotations are available for`string` parameters.
* **maxLength**
String parameters may enforce a maximum length using the
annotation `maxLength N`, where *N* is any valid
non-negative integer.
* **minLength**
String parameters may enforce a minimum length using the
annotation `minLength N`, where *N* is any valid
non-negative integer.
* **pattern** *(not yet implemented)*
String parameters may be checked against a regular expression
using the annotation `pattern P`, where *P* is any
valid regular expression, enclosed in backticks.
### Array validation ###
These validation annotations are available for `array` parameters.
* **maxItems**
Array parameters may enforce a maximum length using the
annotation `maxItems N`, where *N* is any valid
non-negative integer.
* **minItems**
Array parameters may enforce a minimum length using the
annotation `minItems N`, where *N* is any valid
non-negative integer.
* **uniqueItems**
Array parameters may ensure that every item in the array
is unique using the `uniqueItems` annotation.
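Several annotations can be combined in a single `@param` tag. For example, a
hypothetical operation might declare (shown only to illustrate the syntax):

    @param age (integer, minimum 13, maximum 120): User's age.
    @param nickname (string, minLength 3, maxLength 24, optional): Display name.
    @param tags (array, maxItems 10, uniqueItems, optional): Labels to attach.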
## Models ##
Models are a useful way to document how an object should look.
Currently no built-in validation is provided for models, but
some Swagger clients may use this information to provide
client side validation or documentation. They also show up
in the API Explorer.
You can define models like this:
local api = require "moonwalk/api"
api.model "User" {
id = {
type = "integer",
minimum = 1,
description = "The user's ID number"
},
email = {
description = "The user's email address"
},
name = {
optional = true,
description = "The user's full name"
},
phone = {
type = "integer",
optional = true,
description = "The user's phone number",
},
}
This is essentially the `properties` object in a Swagger `models`
section. You can use the model name as a **data type** in your
`@param` and `@return` tags, and in other models. You can also use
full Swagger-style model definitions. Models defined using the short
syntax above will be converted to full definitions by `.model`.
See Swagger's [Complex Types][9] for more information.
[9]:https://github.com/wordnik/swagger-core/wiki/Datatypes#complex-types
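Once defined, a model name can be used anywhere a **data type** is expected.
For example, in a hypothetical operation:

    @param user (User, from body): The user to create.
    @return (User): The newly created user.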
## Host environments ##
Some host environments (SocketServer, LuaNode) use one Lua state
across multiple requests, while others (CGI, Mongoose, Civetweb)
handle each request in a separate Lua state. We'll call the first
category "persistent hosts" and the second "traditional hosts."
### SocketServer ###
Invoke the built-in Lua server like this:
lua moonwalk/server/socket.lua /example/ 8910
Where `/example/` is your API root and `8910` is the port to use.
### LuaNode ###
Experimental support for LuaNode is included. Invoke the server like this:
/path/to/luanode moonwalk/server/luanode.lua /example/ 8910
Where `/example/` is your API root and `8910` is the port to use.
### Mongoose/Civetweb ###
Mongoose/Civetweb support is included. Invoke the server like this:
/path/to/server/binary \
-document_root /srv/www/moonwalk/ \
-url_rewrite_patterns /example/**=example/index.lp
### Apache CGI Setup ###
Use this Apache vhost configuration and .htaccess file
as an example.
#### Apache vhost config ####
<VirtualHost *:80>
ServerName moonwalk.local
DocumentRoot /srv/www/moonwalk
<Directory /srv/www/moonwalk>
Options +ExecCGI
AddHandler cgi-script .lua
DirectoryIndex index.lua index.html
AllowOverride All
Order allow,deny
allow from all
</Directory>
</VirtualHost>
#### Apache .htaccess ####
RewriteEngine On
RewriteCond $1 !(^index\.lua)
RewriteRule ^(.*)$ index.lua/$1 [L]
#### CGI troubleshooting ####
* Make sure the shebang line has the correct path to the Lua executable.
For example, `#! /usr/bin/lua` may need to become `#! /usr/local/bin/lua`.
* Make sure any files with the shebang are executable (chmod +x).
## License ##
Copyright © 2013 Moonwalk Authors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
## References ##
Lua references:
* [Lua 5.2 user manual](http://www.lua.org/manual/5.2/)
Swagger references:
* [Swagger wiki](https://github.com/wordnik/swagger-core/wiki)
CGI references:
* [CGI spec](http://www.ietf.org/rfc/rfc3875)
* [Apache CGI docs](http://httpd.apache.org/docs/2.2/howto/cgi.html)
Mongoose and Civetweb references:
* [Mongoose Lua server pages](https://github.com/cesanta/mongoose/blob/master/docs/LuaSqlite.md)
* [Mongoose users group](http://groups.google.com/group/mongoose-users)
* [Civetweb users group](http://groups.google.com/group/civetweb)
---
title: Import migrate process from on-premises to Azure DevOps
titleSuffix: Azure DevOps
description: How-to guide for preparing an on-premises collection and importing it to the cloud
ms.topic: how-to
ms.technology: devops-migrate
ms.contentid: 829179bc-1f98-49e5-af9f-c224269f7910
ms.author: kaelli
author: KathrynEE
monikerRange: '<= azure-devops'
ms.date: 10/07/2021
---
# Validation and import processes
[!INCLUDE [version-lt-eq-azure-devops](../includes/version-lt-eq-azure-devops.md)]
This article walks you through the preparation that's required to get an import to Azure DevOps Services ready to run. If you encounter errors during the process, see [Troubleshoot import and migration errors](migration-troubleshooting.md).
> [!Note]
> * Visual Studio Team Services (VSTS) is now [Azure DevOps Services.](../user-guide/about-azure-devops-services-tfs.md#visual-studio-team-services-is-now-azure-devops-services)
> * With the release of Azure DevOps Server 2019, the TFS Database Import Service has been rebranded as the data migration tool for Azure DevOps. This change includes TfsMigrator (Migrator) becoming the data migration tool. This service works exactly the same as the former import service. If you're running an older version of on-premises Azure DevOps Server with the TFS branding, you can still use this feature to migrate to Azure DevOps as long as you've upgraded to one of the supported server versions.
> * Before you begin the import tasks, check to ensure that you're running a [supported version of Azure DevOps Server](migration-overview.md#supported-azure-devops-server-versions-for-import).
We recommend that you use the [Step-by-step migration guide](https://aka.ms/AzureDevOpsImport) to progress through your import. The guide links to technical documentation, tools, and best practices.
<a id="validate-collection"></a>
## Validate a collection
After you've confirmed that you're running the latest version of Azure DevOps Server, your next step is to validate each collection that you want to migrate to Azure DevOps Services.
The validation step examines various aspects of your collection, including, but not limited to, size, collation, identity, and processes.
You run the validation by using the data migration tool. To start, [download the tool](https://aka.ms/AzureDevOpsImport), copy the zip file to one of your Azure DevOps Server application tiers, and then unzip it. You can also run the tool from a different machine without Azure DevOps Server installed as long as the machine can connect to the configuration database of the Azure DevOps Server instance. An example is shown here.
1. Open a Command Prompt window on the server, and enter a cd command to change to the directory where the data migration tool is stored. Take a few moments to review the help content that's provided with the tool.
a. To view the top-level help and guidance, run the following command:
```cmdline
Migrator /help
```
b. View the help text for the command:
```cmdline
Migrator validate /help
```
1. Because this is your first time validating a collection, let's keep it simple. Your command should have the following structure:
```cmdline
Migrator validate /collection:{collection URL}
```
For example, to run against the default collection the command would look like:
```cmdline
Migrator validate /collection:http://localhost:8080/DefaultCollection
```
1. To run the tool from a machine other than the Azure DevOps Server, you need the **/connectionString** parameter. The connection string parameter points to your Azure DevOps Server configuration database. As an example, if the validate command is being run by the Fabrikam corporation, the command would look like:
```cmdline
Migrator validate /collection:http://fabrikam:8080/DefaultCollection /tenantDomainName:fabrikam.OnMicrosoft.com /connectionString:"Data Source=fabrikam;Initial Catalog=Configuration;Integrated Security=True"
```
> [!Important]
> The data migration tool *does not* edit any data or structures in the collection. It reads the collection only to identify issues.
1. After the validation is complete, you can view the log files and results.

After all the validations pass, you can move to the next step of the import process. If the data migration tool flags any errors, you need to correct them before you proceed. For guidance on correcting validation errors, see [Troubleshoot import and migration errors](migration-troubleshooting.md).
### Import log files
When you open the log directory, you'll notice several logging files.
The main log file is named *DataMigrationTool.log*. It contains details about everything that was run. To make it easier for you to focus on specific areas, a log is generated for each major validation operation.
For example, if TfsMigrator reports an error in the "Validating Project Processes" step, you can open the *ProjectProcessMap.log* file to view everything that was run for that step instead of having to scroll through the entire log.
You should review the *TryMatchOobProcesses.log* file only if you're trying to import your project processes to use the [inherited model](migration-processtemplates.md). If you don't want to use the inherited model, you can ignore these errors, because they won't prevent you from importing to Azure DevOps Services.
## Generate import files
By now, you've run the data migration tool validation against the collection, and it's returning a result of "All collection validations passed." Before you take a collection offline to migrate it, you need to generate the import files. When you run the `prepare` command, you generate two import files:
- *IdentityMapLog.csv*: Outlines your identity map between Active Directory and Azure Active Directory (Azure AD).
- *import.json*: Requires you to fill out the import specification you want to use to kick off your migration.
### The prepare command
The `prepare` command assists with generating the required import files. Essentially, this command scans the collection to find a list of all users to populate the identity map log, *IdentityMapLog.csv*, and then tries to connect to Azure AD to find each identity's match. To do this, your company needs to use the [Azure Active Directory Connect tool](/azure/active-directory/connect/active-directory-aadconnect) (formerly known as the Directory Synchronization tool, Directory Sync tool, or DirSync.exe tool).
If directory synchronization is set up, the data migration tool should be able to find the matching identities and mark them as *Active*. If it doesn't find a match, the identity is marked as *Historical* in the identity map log, and you'll need to investigate why the user isn't included in your directory sync. The import specification file, *import.json*, should be filled out prior to the import.
Unlike the `validate` command, `prepare` *does* require an internet connection, because it needs to connect to Azure AD to populate the identity map log file. If your Azure DevOps Server instance doesn't have internet access, you need to run the tool from a machine that does. As long as you can find a machine with an intranet connection to your Azure DevOps Server instance and an internet connection, you can run this command. For help with the `prepare` command, run the following command:
```cmdline
Migrator prepare /help
```
Included in the help documentation are instructions and examples for running Migrator from the Azure DevOps Server instance itself and a remote machine. If you're running the command from one of the Azure DevOps Server instance's application tiers, your command should have the following structure:
```cmdline
Migrator prepare /collection:{collection URL} /tenantDomainName:{name} /region:{region}
```
```cmdline
Migrator prepare /collection:{collection URL} /tenantDomainName:{name} /region:{region} /connectionString:"Data Source={sqlserver};Initial Catalog=Configuration;Integrated Security=True"
```
The **connectionString** parameter is a pointer to the configuration database of your Azure DevOps Server instance. As an example, if the `prepare` command is being run by the Fabrikam corporation, the command would look like:
```cmdline
Migrator prepare /collection:http://fabrikam:8080/DefaultCollection /tenantDomainName:fabrikam.OnMicrosoft.com /region:{region} /connectionString:"Data Source=fabrikam;Initial Catalog=Configuration;Integrated Security=True"
```
When the data migration tool runs the `prepare` command, it runs a complete validation to ensure that nothing has changed with your collection since the last full validation. If any new issues are detected, no import files are generated.
Shortly after the command has started running, an Azure AD sign-in window is displayed. You need to sign in with an identity that belongs to the tenant domain that's specified in the command. Make sure that the specified Azure AD tenant is the one you want your future organization to be backed with. In our Fabrikam example, a user would enter credentials that are similar to what's shown in the following screenshot.
> [!IMPORTANT]
> Do *not* use a test Azure AD tenant for a test import and your production Azure AD tenant for the production run. Using a test Azure AD tenant can result in identity import issues when you begin your production run with your organization's production Azure AD tenant.

When you run the `prepare` command successfully in the data migration tool, the results window displays a set of logs and two import files. In the log directory, you'll find a logs folder and two files:
* *import.json* is the import specification file. We recommend that you take time to fill it out.
* *IdentityMapLog.csv* contains the generated mapping of Active Directory to Azure AD identities. Review it for completeness before you kick off an import.
The two files are described in greater detail in the next sections.
### The import specification file
The import specification, *import.json*, is a JSON file that provides import settings. It includes the desired organization name, storage account information, and other information. Most of the fields are autopopulated, and some fields require your input before you attempt an import.

The *import.json* file's displayed fields and required actions are described in the following table:
| Field | Description | Required action |
| --- | --- | --- |
| Source | Information about the location and names of the source data files that are used for import. | No action required. Review information for the subfield actions to follow. |
| Location | The shared access signature key to the Azure storage account that hosts the data-tier application package (DACPAC). | No action required. This field will be covered in a later step. |
| Files | The names of the files containing import data. | No action required. Review information for the subfield actions to follow. |
| DACPAC | A DACPAC file that packages the collection database to be used to bring in the data during the import. | No action required. In a later step, you'll generate this file by using your collection and then upload it to an Azure storage account. You'll need to update the file based on the name you use when you generate it later in this process. |
| Target | Properties of the new organization to import into. | No action required. Review information for the subfield actions to follow. |
| Name | The name of the organization to be created during the import. | Provide a name. The name can be quickly changed later after the import has completed.<br>**Note**: Do *not* create an organization with this name before you run the import. The organization will be created as part of the import process. |
| ImportType | The type of import that you want to run. | No action required. In a later step, you'll select the type of import to run. |
| Validation Data | Information that's needed to help drive your import experience. | The "ValidationData" section is generated by the data migration tool. It contains information that's needed to help drive your import experience. Do *not* edit the values in this section, or your import could fail to start. |
<br>
After you complete the preceding process, you should have a file that looks like the following:

In the preceding image, note that the planner of the Fabrikam import added the organization name *fabrikam-import* and selected CUS (Central United States) as the region for import. Other values were left as is to be modified just before the planner took the collection offline for the migration.
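For orientation, the skeleton of a filled-out specification file looks roughly like the following. This is an illustrative sketch only — always start from the file that the `prepare` command generates, because the exact property layout may differ, and don't hand-edit the "ValidationData" section:

```json
{
  "Source": {
    "Location": "{SAS key to the Azure storage container}",
    "Files": {
      "Dacpac": "Foo.dacpac"
    }
  },
  "Target": {
    "Name": "fabrikam-import",
    "ImportType": "DryRun"
  },
  "ValidationData": {}
}
```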
> [!NOTE]
> Dry-run imports have a '-dryrun' automatically appended to the end of the organization name. This can be changed after the import.
<a id="supported-azure-regions-for-import"></a>
### Supported Azure regions for import
Azure DevOps Services is available in several [Azure regions](https://azure.microsoft.com/regions/services/). However, not all regions where Azure DevOps Services is available are supported for import. The following table lists the Azure regions that you can select for import. Also included is the value that you need to place in the import specification file to target that region for import.
| Geographic region | Azure region | Import specification value |
| --- | --- | --- |
| United States | Central United States | CUS |
| Europe | Western Europe | WEU |
| United Kingdom | United Kingdom South | UKS |
| Australia | Australia East | EAU |
| South America | Brazil South | SBR |
| Asia Pacific | South India | MA |
| Asia Pacific | Southeast Asia (Singapore) | SEA |
| Canada | Central Canada | CC |
<br>
### The identity map log
The identity map log is of equal importance to the actual data that you'll be migrating to Azure DevOps Services. As you're reviewing the file, it's important to understand how identity import operates and what the potential results could entail. When you import an identity, it can become either *active* or *historical*. Active identities can sign in to Azure DevOps Services, but historical identities cannot.
#### Active identities
Active identities refer to identities that will be users in Azure DevOps Services post-import. In Azure DevOps Services, these identities are licensed and are displayed as users in the organization. The identities are marked as *active* in the **Expected Import Status** column in the identity map log file.
<a id="historical-identities"></a>
#### Historical identities
Historical identities are mapped as such in the **Expected Import Status** column in the identity map log file. Identities without a line entry in the file also become historical. An example of an identity without a line entry might be an employee who no longer works at a company.
Unlike active identities, historical identities:
* *Don't* have access to an organization after migration.
* *Don't* have licenses.
* *Don't* show up as users in the organization. All that persists is the notion of that identity's name in the organization, so that its history can be searched later. We recommend that you use historical identities for users who no longer work at the company or who won't need further access to the organization.
> [!NOTE]
> After an identity is imported as historical, it *can't* become active.
### Understand the identity map log file
The identity map log file is similar to the example shown here:

The columns in the identity map log file are described in the following table:
> [!NOTE]
> You and your Azure AD admin will need to investigate users that are marked as *No Match Found (Check Azure AD Sync)* to understand why they aren't part of your Azure AD Connect sync.
| Column | Description |
| --- | --- |
| Active Directory: User (Azure DevOps Server) | The friendly display name used by the identity in Azure DevOps Server. This name makes it easier to identify which user the line in the map is referencing. |
| Active Directory: Security Identifier | The unique identifier for the on-premises Active Directory identity in Azure DevOps Server. This column is used to identify users in the collection. |
| Azure Active Directory: Expected Import User (Azure DevOps Services) | Either the expected sign-in address of the matched soon-to-be-active user or *No Match Found (Check Azure AD Sync)*, indicating that the identity wasn't found during the Azure Active Directory sync and it will be imported as historical. |
| Expected Import Status | The expected user import status: either *Active* if there's a match between your Active Directory and Azure Active Directory, or *Historical* if there isn't a match. |
| Validation Date | The last time the identity map log was validated. |
<br>
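To make the columns concrete, rows in the file look roughly like the following (all values are illustrative):

```csv
Active Directory: User (Azure DevOps Server),Active Directory: Security Identifier,Azure Active Directory: Expected Import User (Azure DevOps Services),Expected Import Status,Validation Date
John Smith,S-1-5-21-3623811015-3361044348-30300820-1013,john.smith@fabrikam.com,Active,2021-10-07
Jane Roe,S-1-5-21-3623811015-3361044348-30300820-1044,No Match Found (Check Azure AD Sync),Historical,2021-10-07
```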
As you read through the file, notice whether the value in the **Expected Import Status** column is *Active* or *Historical*. *Active* indicates that it's expected that the identity on this row will map correctly on import and will become active. *Historical* means that the identities will become historical on import. It's important to review the generated mapping file for completeness and correctness.
> [!IMPORTANT]
> Your import will fail if major changes occur to your Azure AD Connect security ID sync between import attempts. You can add new users between dry runs, and you can make corrections to ensure that previously imported historical identities become active. However, changing an existing user that was previously imported as active isn't supported at this time. Doing so will cause your import to fail. An example of a change might be completing a dry-run import, deleting an identity from your Azure AD that was imported actively, re-creating a new user in Azure AD for that same identity, and then attempting another import. In this case, an active identity import will be attempted between the Active Directory and newly created Azure AD identity, but it will cause an import failure.
1. Start by reviewing the correctly matched identities. Are all the expected identities present? Are the users mapped to the correct Azure AD identity?
If any values are incorrectly mapped or need to be changed, contact your Azure AD administrator to verify that the on-premises Active Directory identity is part of the sync to Azure AD and has been set up correctly. For more information, see [Integrate your on-premises identities with Azure Active Directory](/azure/active-directory/hybrid/whatis-hybrid-identity).
1. Next, review the identities that are labeled as *historical*. This labeling implies that a matching Azure AD identity couldn't be found, for any of the following reasons:
* The identity hasn't been set up for sync between on-premises Active Directory and Azure AD.
* The identity hasn't been populated in your Azure AD yet (for example, there's a new employee).
* The identity doesn't exist in your Azure AD instance.
* The user who owns that identity no longer works at the company.
To address the first three reasons, you need to set up the intended on-premises Active Directory identity to sync with Azure AD. For more information, see [Integrate your on-premises identities with Azure Active Directory](/azure/active-directory/hybrid/how-to-connect-sync-change-the-configuration). You must set up and run Azure AD Connect for identities to be imported as *active* in Azure DevOps Services.
You can ignore the fourth reason, because employees who are no longer at the company should be imported as *historical*.
#### Historical identities (small teams)
> [!NOTE]
> The identity import strategy proposed in this section should be considered by small teams only.
If Azure AD Connect hasn't been configured, you'll notice that all users in the identity map log file are marked as *historical*. Running an import this way results in all users being imported as [*historical*](#historical-identities). We strongly recommended that you configure [Azure AD Connect](/azure/active-directory/hybrid/how-to-connect-sync-change-the-configuration) to ensure that your users are imported as *active*.
Running an import with all historical identities has consequences that need to be considered carefully. It should be considered only by teams with a small number of users and for which the cost of setting up Azure AD Connect is deemed too high.
To import all identities as historical, follow the steps outlined in later sections. When you queue an import, the identity that's used to queue the import is bootstrapped into the organization as the organization owner. All other users are imported as historical. Organization owners can then [add the users back in](../organizations/accounts/add-organization-users.md?toc=/azure/devops/organizations/accounts/toc.json&bc=/azure/devops/organizations/accounts/breadcrumb/toc.json) by using their Azure AD identity. The added users are treated as new users. They do *not* own any of their history, and there's no way to re-parent this history to the Azure AD identity. However, users can still look up their pre-import history by searching for their \<domain>\<Active Directory username>.
The data migration tool displays a warning if it detects the complete historical identities scenario. If you decide to go down this migration path, you'll need to consent in the tool to the limitations.
### Visual Studio subscriptions
The data migration tool can't detect Visual Studio subscriptions (formerly known as MSDN benefits) when it generates the identity map log file. Instead, we recommend that you apply the auto license upgrade feature after the import. As long as users' work accounts are [linked](/visualstudio/subscriptions/vs-alternate-identity) correctly, Azure DevOps Services automatically applies their Visual Studio subscription benefits at their first sign-in after the import. You're never charged for licenses that are assigned during the import, so this can be safely handled afterward.
You don't need to repeat a dry-run import if users' Visual Studio subscriptions aren't automatically upgraded in Azure DevOps Services. Visual Studio subscription linking happens outside the scope of an import. As long as their work account is linked correctly before or after the import, users' licenses are automatically upgraded on their next sign-in. After their licenses have been upgraded successfully, the next time you run an import, the users are upgraded automatically on their first sign-in to the organization.
<a id="prepare-import"></a>
## Prepare for import
By now, you have everything ready to execute on your import. You need to schedule downtime with your team to take the collection offline for the migration. When you've agreed upon a time to run the import, you need to upload to Azure both the required assets you've generated and a copy of the database. This process has five steps:
Step 1: [Take the collection offline and detach it](#step-1-detach-your-collection).
> [!NOTE]
> If the data migration tool displays a warning that you can't use the DACPAC method, you have to perform the import by using the SQL Azure virtual machine (VM) method. Skip steps 2 to 5 in that case and follow instructions provided in [Import large collections](migration-import-large-collections.md) and then continue to section [determine the import type](#determine-the-import-type).
Step 2: [Generate a DACPAC file from the collection you're going to import](#step-2-generate-a-dacpac-file).
Step 3: [Upload the DACPAC file and import files to an Azure storage account](#step-3-upload-the-dacpac-file).
Step 4: [Generate an SAS key to the storage account](#step-4-generate-an-sas-key).
Step 5: [Complete the import specification](#step-5-complete-the-import-specification).
> [!NOTE]
> Before you perform a production import, we *strongly* recommend that you complete a dry-run import. With a dry run, you can validate that the import process works for your collection and that there are no unique data shapes present that might cause a production import failure.
### Step 1: Detach your collection
[Detaching the collection](/azure/devops/server/admin/move-project-collection#detach-coll) is a crucial step in the import process. Identity data for the collection resides in the Azure DevOps Server instance's configuration database while the collection is attached and online. When a collection is detached from the Azure DevOps Server instance, it takes a copy of that identity data and packages it with the collection for transport. Without this data, the identity portion of the import *can't* be executed. We recommend that you keep the collection detached until the import has been completed, because there isn't a way to import the changes that occurred during the import.
If you're doing a dry run (test) import, we recommend that you reattach your collection after you back it up for import, because you won't be concerned about having the latest data for this type of import. To avoid offline time altogether, you can also choose to employ an [offline detach](/azure/devops/server/command-line/tfsconfig-cmd#offlinedetach) for dry runs.
It's important to weigh the cost of choosing to incur zero downtime for a dry run. It requires taking backups of the collection and configuration database, restoring them on a SQL instance, and then creating a detached backup. A cost analysis could prove that taking just a few hours of downtime to directly take the detached backup is better in the long run.
<a id="dacpac-file" />
### Step 2: Generate a DACPAC file
DACPACs offer a fast and relatively easy method for moving collections into Azure DevOps Services. However, after a collection database size exceeds a certain threshold, the benefits of using a DACPAC start to diminish.
> [!NOTE]
> If the data migration tool displays a warning that you can't use the DACPAC method, you have to perform the import by using the SQL Azure virtual machine (VM) method provided in [Import large collections](migration-import-large-collections.md).
>
> If the data migration tool doesn't display a warning, use the DACPAC method described in this step.
[DACPAC](/sql/relational-databases/data-tier-applications/data-tier-applications) is a feature of SQL server that allows database changes to be packaged into a single file and deployed to other instances of SQL. A DACPAC file can also be restored directly to Azure DevOps Services, so you can use it as the packaging method for getting your collection's data in the cloud. You use the SqlPackage.exe tool to generate the DACPAC file. The tool is included as part of [SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt).
Multiple versions of the SqlPackage.exe tool are installed with SSDT. The versions are stored in folders with names such as 120, 130, and 140. When you use SqlPackage.exe, it's important to use the right version to prepare the DACPAC.
* TFS 2018 imports need to use the SqlPackage.exe version from the 140 folder or higher.
If you installed SSDT for Visual Studio, you'll find your SqlPackage.exe version in one of the following folder paths:
* If you installed SSDT and integrated it with an existing installation of Visual Studio, your SqlPackage.exe folder path is similar to `C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130\`.
* If you installed SSDT as a standalone installation, your SqlPackage.exe folder path is similar to `C:\Program Files (x86)\Microsoft Visual Studio\2017\SQL\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130\`.
* If you already have an installation of SQL Server, SqlPackage.exe might already be present, and your folder path is similar to `%PROGRAMFILES%\Microsoft SQL Server\130\DAC\bin\`.
Both versions of SSDT that you can download from [SQL Server Data Tools](/sql/ssdt/download-sql-server-data-tools-ssdt) include both the 130 and 140 folders and their SqlPackage.exe versions.
When you generate a DACPAC, keep two considerations in mind: the disk that the DACPAC will be saved on and the disk space on the machine that's generating the DACPAC. You want to ensure that you have enough disk space to complete the operation.
As it creates the package, SqlPackage.exe temporarily stores data from your collection in the temp directory on drive C of the machine you're initiating the packaging request from.
You might find that your drive C is too small to support creating a DACPAC. You can estimate the amount of space you'll need by looking for the largest table in your collection database. DACPACs are created one table at a time. The maximum space requirement to run the generation is roughly equivalent to the size of the largest table in the collection's database. If you're saving the generated DACPAC to drive C, you also need to take into account the size of the collection database as reported in the *DataMigrationTool.log* file from a validation run.
The *DataMigrationTool.log* file provides a list of the largest tables in the collection each time the validate command is run. For an example of table sizes for a collection, see the following output. Compare the size of the largest table with the free space on the drive that hosts your temporary directory.
> [!IMPORTANT]
> Before you proceed with generating a DACPAC file, ensure that your collection is [detached](migration-import.md#step-1-detach-your-collection).
```cmdline
[Info @08:23:59.539] Table name Size in MB
[Info @08:23:59.539] dbo.tbl_Content 38984
[Info @08:23:59.539] dbo.tbl_LocalVersion 1935
[Info @08:23:59.539] dbo.tbl_Version 238
[Info @08:23:59.539] dbo.tbl_FileReference 85
[Info @08:23:59.539] dbo.Rules 68
[Info @08:23:59.539] dbo.tbl_FileMetadata 61
```
Ensure that the drive that hosts your temporary directory has at least as much free space. If it doesn't, you need to redirect the temp directory by setting an environment variable.
```cmdline
SET TEMP={location on disk}
```
Another consideration is where the DACPAC data is saved. Pointing the save location to a far-off remote drive could result in much longer generation times. If a fast drive such as a solid-state drive (SSD) is available locally, we recommend that you target the drive as the DACPAC save location. Otherwise, it's always faster to use a disk that's on the machine where the collection database resides rather than a remote drive.
Now that you've identified the target location for the DACPAC and ensured that you have enough space, it's time to generate the DACPAC file.
Open a Command Prompt window and go to the SqlPackage.exe location. To generate the DACPAC, replace the placeholder values with the required values, and then run the following command:
```cmdline
SqlPackage.exe /sourceconnectionstring:"Data Source={database server name};Initial Catalog={Database Name};Integrated Security=True" /targetFile:{Location & File name} /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory
```
* **Data Source**: The SQL Server instance that hosts your Azure DevOps Server collection database.
* **Initial Catalog**: The name of the collection database.
* **targetFile**: The location on the disk and the DACPAC file name.
A DACPAC generation command that's running on the Azure DevOps Server data tier itself is shown in the following example:
```cmdline
SqlPackage.exe /sourceconnectionstring:"Data Source=localhost;Initial Catalog=Foo;Integrated Security=True" /targetFile:C:\DACPAC\Foo.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory
```
The output of the command is a DACPAC file that's generated from the collection database *Foo* called *Foo.dacpac*.
#### Configure your collection for import
After your collection database has been restored on your Azure VM, configure a SQL login to allow Azure DevOps Services to connect to the database to import the data. This login allows only *read* access to a single database.
To start, open SQL Server Management Studio on the VM, and then open a new query window against the database to be imported.
Set the database's recovery to simple:
```sql
ALTER DATABASE [<Database name>] SET RECOVERY SIMPLE;
```
Create a SQL login for the database, and assign that login the 'TFSEXECROLE':
```sql
USE [<database name>]
CREATE LOGIN <pick a username> WITH PASSWORD = '<pick a password>'
CREATE USER <username> FOR LOGIN <username> WITH DEFAULT_SCHEMA=[dbo]
EXEC sp_addrolemember @rolename='TFSEXECROLE', @membername='<username>'
```
Following our Fabrikam example, the two SQL commands would look like the following:
```sql
ALTER DATABASE [Foo] SET RECOVERY SIMPLE;
USE [Foo]
CREATE LOGIN fabrikam WITH PASSWORD = 'fabrikamimport1!'
CREATE USER fabrikam FOR LOGIN fabrikam WITH DEFAULT_SCHEMA=[dbo]
EXEC sp_addrolemember @rolename='TFSEXECROLE', @membername='fabrikam'
```
> [!NOTE]
> Be sure to enable [SQL Server and Windows authentication mode](/sql/database-engine/configure-windows/change-server-authentication-mode?view=sql-server-ver15#change-authentication-mode-with-ssms&preserve-view=true) in SQL Server Management Studio on the VM. If you don't enable authentication mode, the import will fail.
#### Configure the import specification file to target the VM
Update the import specification file to include information about how to connect to the SQL Server instance. Open your import specification file and make the following updates:
1. Remove the DACPAC parameter from the source files object.
The import specification before the change is shown in the following code:

The import specification after the change is shown in the following code:

1. Fill out the required parameters and add the following properties object within your source object in the specification file.
```json
"Properties":
{
"ConnectionString": "Data Source={SQL Azure VM Public IP};Initial Catalog={Database Name};Integrated Security=False;User ID={SQL Login Username};Password={SQL Login Password};Encrypt=True;TrustServerCertificate=True"
}
```
Following the Fabrikam example, after you apply the changes, the import specification would look like the following:

Your import specification is now configured to use a SQL Azure VM for import. Proceed with the rest of preparation steps to import to Azure DevOps Services. After the import has finished, be sure to delete the SQL login or rotate the password. Microsoft does not retain the login information after the import has finished.
### Step 3: Upload the DACPAC file
> [!NOTE]
> If you're using the SQL Azure VM method, you need to provide only the connection string. You don't have to upload any files, and you can skip this step.
Your DACPAC must be placed in an Azure storage container. This can be an existing container or one created specifically for your migration effort. It's important to ensure that your container is created in the right region.
Azure DevOps Services is available in multiple [regions](https://azure.microsoft.com/regions/services/). When you're importing to these regions, it's critical to place your data in the correct region to ensure that the import can start successfully. Your data must be placed in the same region that you'll be importing to. Placing the data anywhere else will result in the import being unable to start. The following table lists the acceptable regions for creating your storage account and uploading your data.
| Desired import region | Storage account region |
| --- | --- |
| Central United States | Central United States |
| Western Europe | Western Europe |
| Australia East | Australia East |
| Brazil South | Brazil South |
| India South | India South |
| Canada Central | Canada Central |
| Asia Pacific (Singapore) | Asia Pacific (Singapore) |
<br>
Although Azure DevOps Services is available in multiple regions in the US, only the Central United States region accepts new Azure DevOps Services organizations. You can't import your data into other US Azure regions at this time.
You can [create a blob container](/azure/storage/common/storage-create-storage-account) from the Azure portal. After you've created the container, you need to upload the Collection DACPAC file.
After the import has finished, you can delete the blob container and accompanying storage account. To do so, you can use tools such as [AzCopy](/azure/storage/common/storage-use-azcopy-v10) or any other Azure storage explorer tool, such as [Azure Storage Explorer](https://storageexplorer.com/).
> [!NOTE]
> If your DACPAC file is larger than 10 GB, we recommend that you use AzCopy. AzCopy has multithreaded upload support for faster uploads.
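For example, a minimal AzCopy (v10) upload command looks like this — substitute your own storage account, container, and SAS token for the placeholders:

```cmdline
azcopy copy "C:\DACPAC\Foo.dacpac" "https://{storage account}.blob.core.windows.net/{container}/Foo.dacpac?{SAS token}"
```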
### Step 4: Generate an SAS key
A [shared access signature (SAS) key](/azure/storage/common/storage-sas-overview) provides delegated access to resources in a storage account. The key allows you to give Microsoft the lowest level of privilege that's required to access your data for executing the import.
The recommended way to generate an SAS key is to use [Azure Storage Explorer](https://storageexplorer.com/). With Storage Explorer, you can easily create container-level SAS keys. This is essential, because the data migration tool does *not* support account-level SAS keys.
> [!NOTE]
> Do *not* generate an SAS key from the Azure portal. Azure portal-generated SAS keys are account scoped and don't work with the data migration tool.
After you install Storage Explorer, you can generate an SAS key by doing the following:
1. Open Storage Explorer.
1. Add an account.
1. Select **Use a storage account name and key**, and then select **Connect**.

1. On the **Attach External Storage** pane, enter your storage account name, provide one of your two [primary access keys](/azure/storage/common/storage-create-storage-account), and then select **Connect**.

1. On the left pane, expand **Blob Containers**, right-click the container that stores your import files, and then select **Get Shared Access Signature**.

1. For **Expiry time**, set the expiration date for seven days in the future.

1. Under **Permissions** for your SAS key, select the **Read** and **List** check boxes. Write and delete permissions aren't required.
> [!NOTE]
> * Copy and store this SAS key to place in your import specification file in the next step.
> * Treat this SAS key as a secret. It provides access to your files in the storage container.
### Step 5: Complete the import specification
Earlier in the process you partially filled out the import specification file generally known as *import.json*. At this point, you have enough information to complete all the remaining fields except for the import type. The import type will be covered later, in the import section.
In the *import.json* specification file, under **Source**, complete the following fields:
* **Location**: Paste the SAS key you generated and copied in the preceding step.
* **Dacpac**: Ensure that the file, including the *.dacpac* file extension, has the same name as the DACPAC file you uploaded to the storage account.
Using the Fabrikam example, the final import specification file should look like the following:

<a id="determine-the-type-of-import"></a>
<a id="import-type"></a>
### Restrict access to Azure DevOps Services IPs only
We highly recommend that you restrict access to your Azure Storage account to only IPs from Azure DevOps Services. You do this by allowing connections only from the set of Azure DevOps Services IPs that are involved in the collection database import process. The IPs that need to be granted access to your storage account depend on the region you're importing into. Use the IpList option to get the list of IPs that need to be granted access.
Included in the help documentation are instructions and examples for running Migrator from the Azure DevOps Server instance itself and a remote machine. If you're running the command from one of the Azure DevOps Server instance's application tiers, your command should have the following structure:
```cmdline
Migrator IpList /collection:{CollectionURI} /tenantDomainName:{name} /region:{region}
```
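For example, to list the IPs for the Fabrikam collection targeting the Central United States region, the command would look like:

```cmdline
Migrator IpList /collection:http://fabrikam:8080/DefaultCollection /tenantDomainName:fabrikam.OnMicrosoft.com /region:CUS
```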
> [!NOTE]
> Alternatively, you can also use [Service Tags](/azure/virtual-network/service-tags-overview) in place of explicit IP ranges. Azure Service Tags are a convenient way for customers to manage their networking configuration to allow traffic from specific Azure services. Customers can easily allow access by adding the tag name azuredevops to their network security groups or firewalls either through the portal or programmatically.
### Determine the import type
Imports can be queued as either a dry run or a production run. The **ImportType** parameter determines the import type:
- **DryRun**: Use a dry run for test purposes. The system deletes dry runs after 21 days.
- **ProductionRun**: Use a production run when you want to keep the resulting import and use the organization full time in Azure DevOps Services after the import finishes.
> [!TIP]
> We always recommend that you complete a dry-run import first.

### Dry-run organizations
Dry-run imports help teams test the migration of their collections. Dry-run organizations aren't meant to remain around forever; they exist only for a short time. In fact, before a production migration can be run, any completed dry-run organizations must be deleted. All dry-run organizations have a *limited existence and are automatically deleted after a set period of time*. Information about when the organization will be deleted is included in the success email you should receive after the import finishes. Be sure to take note of this date and plan accordingly.
Most dry-run organizations have 15 days before they're deleted. Dry-run organizations can also have a 21-day expiration if more than 100 users have a basic or greater license at *import time*. After the specified time period, the dry-run organization is deleted. You can repeat dry-run imports as many times as you need before you do a production migration. You need to delete any previous dry runs before you attempt a new one. When your team is ready to perform a production migration, you'll need to manually delete the dry-run organization.
For more information about post-import activities, see the [post import](migration-post-import.md) article.
If you encounter any import problems, see [Troubleshoot import and migration errors](migration-troubleshooting.md#resolve-import-errors).
<a id="run-an-import"></a>
## Run an import
Your team is now ready to begin the process of running an import. We recommend that you start with a successful dry-run import before you attempt a production-run import. With dry-run imports, you can see in advance how an import will look, identify potential issues, and gain experience before you head into your production run.
> [!NOTE]
> If you need to repeat a completed production-run import for a collection, as in the event of a rollback, contact Azure DevOps Services [Customer Support](https://azure.microsoft.com/support/devops/) before you queue up another import.
> [!NOTE]
> Azure administrators can prevent users from creating new Azure DevOps organizations. If the Azure AD tenant policy is turned on, your import will fail to finish. Before you begin, verify that the policy isn't set or that there is an exception for the user that is performing the migration. For more information, see [Restrict organization creation via Azure AD tenant policy](../organizations/accounts/azure-ad-tenant-policy-restrict-org-creation.md).
### Considerations for rollback plans
A common concern for teams that are doing a final production run is what their rollback plan will be if anything goes wrong with import. This is why we highly recommend doing a dry run to make sure that you're able to test the import settings you provide to the data migration tool for Azure DevOps.
Rollback for the final production run is fairly simple. Before you queue the import, you detach the team project collection from Azure DevOps Server or Team Foundation Server, which will make it unavailable to your team members. If for any reason you need to roll back the production run and bring the on-premises server back online for your team members, you can do so. You simply attach the team project collection on-premises again and inform your team that they'll continue to work normally while your team regroups to understand any potential failures.
### Queue an import
> [!IMPORTANT]
> Before you proceed, ensure that your collection was [detached](migration-import.md#step-1-detach-your-collection) prior to generating a DACPAC file or uploading the collection database to a SQL Azure VM. If you don't complete this step, the import will fail. In the event that your import fails, see [Troubleshoot import and migration errors](migration-troubleshooting.md).
You start an import by using the data migration tool's **import** command. The import command takes an import specification file as input. It parses the file to ensure that the provided values are valid and, if successful, it queues an import to Azure DevOps Services. The import command requires an internet connection, but does *not* require a connection to your Azure DevOps Server instance.
To get started, open a Command Prompt window, and change directories to the path to the data migration tool. We recommended that you take a moment to review the help text provided with the tool. Run the following command to see the guidance and help for the import command:
```cmdline
Migrator import /help
```
The command to queue an import will have the following structure:
```cmdline
Migrator import /importFile:{location of import specification file}
```
Here is an example of a completed import command:
```cmdline
Migrator import /importFile:C:\DataMigrationToolFiles\import.json
```
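For reference, the import specification file is a JSON document that the data migration tool generates for you during the earlier preparation steps, so you normally edit only a few values rather than author it from scratch. The sketch below is illustrative only; treat the field names and values as placeholders rather than a definitive schema:

```json
{
  "Source": {
    "Location": "<SAS URL of the Azure storage container that holds your collection data>",
    "Files": {
      "Dacpac": "Tfs_DefaultCollection.dacpac"
    }
  },
  "Target": {
    "Name": "<name of the organization to create>"
  },
  "Properties": {
    "ImportType": "DryRun"
  }
}
```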
After the validation passes, you'll be asked to sign in to Azure AD. It's important to sign in with an identity that's a member of the same Azure AD tenant that the identity map log file was built against. The user who signs in becomes the owner of the imported organization.
> [!NOTE]
> Each Azure AD tenant is limited to five imports per 24-hour period. Only imports that are queued count against this cap.
When your team initiates an import, an email notification is sent to the user who queued the import. About 5 to 10 minutes after the import is queued, your team can go to the organization to check on its status. After the import finishes, your team is directed to sign in, and an email notification is sent to the organization owner.
## Related articles
- [Migrate options](migration-overview.md)
- [Post-import](migration-post-import.md)
| 79.993432 | 787 | 0.78192 | eng_Latn | 0.997768 |
d040e8feee15df527546242ab55b51c2fa071787 | 415 | md | Markdown | _works/obj24.md | pobox34/pobox-34 | 8d56d3441a950321b938b001d4f8811850d051e2 | [
"MIT"
] | 1 | 2021-05-19T15:22:10.000Z | 2021-05-19T15:22:10.000Z | _works/obj24.md | pobox34/pobox-34 | 8d56d3441a950321b938b001d4f8811850d051e2 | [
"MIT"
] | 5 | 2021-02-03T16:42:43.000Z | 2021-03-03T21:40:38.000Z | _works/obj24.md | pobox34/pobox-34 | 8d56d3441a950321b938b001d4f8811850d051e2 | [
"MIT"
] | 3 | 2021-05-25T20:05:42.000Z | 2021-06-09T21:38:47.000Z | ---
pid: obj24
writer: Quasheam R
label: Grind / Who Was I? / All Around the World / Damn Bro... You Lied to Me!
_date: December 22 2020
order: '24'
layout: work
collection: works
permalink: "/works/Grind-et-al/"
thumbnail: "/img/derivatives/iiif/images/obj24_0/full/250,/0/default.jpg"
manifest: "/img/derivatives/iiif/obj24/manifest.json"
full: "/img/derivatives/iiif/images/obj24_0/full/1140,/0/default.jpg"
---
| 29.642857 | 78 | 0.73253 | eng_Latn | 0.272895 |
d041884a1a3f5ec57ba274e6cd21d7578a5fe057 | 1,030 | md | Markdown | README.md | nachos/native-api | 87db990d20cbdc42e51b9222b696ee4d61c9f44d | [
"MIT"
] | null | null | null | README.md | nachos/native-api | 87db990d20cbdc42e51b9222b696ee4d61c9f44d | [
"MIT"
] | 5 | 2015-03-03T18:08:14.000Z | 2015-10-26T21:59:34.000Z | README.md | nachos/native-api | 87db990d20cbdc42e51b9222b696ee4d61c9f44d | [
"MIT"
] | null | null | null | # native-api [](https://travis-ci.org/nachos/native-api)[](https://ci.appveyor.com/project/noamokman/native-api)
Cross-platform OS API with native modules.
## Install
```bash
$ npm install --save native-api
```
## Usage
```javascript
var nativeApi = require('native-api');
var process = nativeApi.process;
var file = nativeApi.file;
var path = nativeApi.path;
var screen = nativeApi.screen;
var processes = process.getAllProcesses();
var fileStats = file.getFileStats('c:\test.txt');
var userHome = path.getUserHome();
var screens = screen.getAllScreens();
```
## API
_(Coming soon)_
## Contributing
In lieu of a formal styleguide, take care to maintain the existing coding style. Add unit tests for any new or changed functionality. Lint and test your code using [gulp](http://gulpjs.com/).
## License
Copyright (c) 2015. Licensed under the MIT license. | 25.75 | 272 | 0.736893 | eng_Latn | 0.252176 |
d042bccc54c7a8b5d5a355a52b9441f3cd426cce | 3,130 | md | Markdown | dynamicsax2012-technet/bra-process-items-and-services-rented-for-industrialization-purposes.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/bra-process-items-and-services-rented-for-industrialization-purposes.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | dynamicsax2012-technet/bra-process-items-and-services-rented-for-industrialization-purposes.md | RobinARH/DynamicsAX2012-technet | d0d0ef979705b68e6a8406736612e9fc3c74c871 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: (BRA) Process items and services rented for industrialization purposes
TOCTitle: (BRA) Process items and services rented for industrialization purposes
ms:assetid: b17a1d53-a7ac-46f5-870a-59e9da4b29a3
ms:mtpsurl: https://technet.microsoft.com/en-us/library/JJ863733(v=AX.60)
ms:contentKeyID: 50396416
ms.date: 04/18/2014
mtps_version: v=AX.60
f1_keywords:
- BRA
- Brazil
- process goods
- hiring basis
- industrialization
audience: Application User
ms.search.region: Brazil
---
# (BRA) Process items and services rented for industrialization purposes
_**Applies To:** Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2_
You can create a sales order to record the items that are rented for industrialization and manufacturing. You can also create a purchase order when the items are returned.
1. Click **Organization administration** \> **Setup** \> **Brazil** \> **Operation type**. Create an operation type that has the following setup:
1. Select the **Create inventory movements** check box to specify that the operation type generates inventory movement, and to record the physical and financial movement of inventory.
> [!NOTE]
> To create inventory-related ledger transactions, you must select the **Post physical inventory** and **Post financial inventory** check boxes on the **Setup** FastTab in the **Item model groups** form.
2. In the **Customer** field group, in the **Posting profile** field, select the customer posting profile for the operation type.
For more information, see [(BRA) Set up operation types](bra-set-up-operation-types.md).
2. Click **General ledger** \> **Reports** \> **Base data** \> **Sales tax codes**. Create ICMS and IPI sales tax codes. For more information, see [(BRA) Set up tax codes](bra-set-up-tax-codes.md).
3. Click **General ledger** \> **Setup** \> **Sales tax** \> **Sales tax groups**. Create a sales tax group. In the **Sales tax groups** form, click the **Setup** FastTab, and then add the ICMS and IPI tax codes to the sales tax group.
4. In the **Taxation code** field, select the taxation code for which the fiscal value is set to **2. without credit/debit (exempt or not taxable)** in the **Fiscal value** field in the **Taxation code** form.
5. Click **Sales and marketing** \> **Common** \> **Sales orders** \> **All sales orders**.
–or–
Click **Accounts receivable** \> **Common** \> **Sales orders** \> **All sales orders**.
6. To record this operation, create a sales order that has the operation type and sales tax group that you created. For more information, see [(BRA) Create and post a sales order](bra-create-and-post-a-sales-order.md).
7. Post the sales order.
## See also
[(BRA) Operation type (form)](https://technet.microsoft.com/en-us/library/jj822922\(v=ax.60\))
[(BRA) Sales tax codes (modified form)](https://technet.microsoft.com/en-us/library/jj663982\(v=ax.60\))
[(BRA) Sales tax groups (modified form)](https://technet.microsoft.com/en-us/library/jj663981\(v=ax.60\))
| 47.424242 | 270 | 0.715655 | eng_Latn | 0.908229 |
d0430ff87c04c3b7e42f1da64e7eb6647179da50 | 1,001 | md | Markdown | docs/vscphlp_getvscpmeasurementasdouble.md | BlueAndi/vscp-helper-lib | 4a39300234f19fae1f15ce79b1c6b059360e82ef | [
"MIT"
] | 3 | 2020-10-20T20:36:25.000Z | 2021-11-08T20:38:29.000Z | vscphlp_getvscpmeasurementasdouble.md | grodansparadis/vscp-doc-helper-library | 12aff3b58badd3bda5d86be417d2bf8ce8f51abe | [
"CC-BY-4.0"
] | 2 | 2021-01-10T13:55:42.000Z | 2021-10-20T20:23:42.000Z | vscphlp_getvscpmeasurementasdouble.md | grodansparadis/vscp-doc-helper-library | 12aff3b58badd3bda5d86be417d2bf8ce8f51abe | [
"CC-BY-4.0"
] | 1 | 2021-01-10T13:28:35.000Z | 2021-01-10T13:28:35.000Z |
```clike
int vscphlp_getVSCPMeasurementAsDouble( const vscpEvent *pEvent,
double *pvalue)
```
### Parameters
#### pEvent
The event that contain the measurement data.
#### pvalue
A pointer to a double that will get the measurement result.
### Return Value
VSCP_ERROR_SUCCESS is returned on success.
### Description
This method returns a double representing the measurement data. It recognizes all data coding forms and gives sensible output back.
#### C example
```clike
pEventMeasurement->pdata[0] = VSCP_DATACODING_INTEGER;
pEventMeasurement->pdata[1] = 0xFF;
pEventMeasurement->pdata[2] = 0xFF;
pEventMeasurement->pdata[3] = 0xFF;
if ( VSCP_ERROR_SUCCESS == vscphlp_getVSCPMeasurementAsDouble( pEventMeasurement, &value ) ) {
printf("OK - vscphlp_getVSCPMeasurementAsDouble value = %lf\n", value );
}
else {
printf("Error - vscphlp_getVSCPMeasurementAsDouble value = %slf \n", value );
}
```
[filename](./bottom_copyright.md ':include') | 24.414634 | 130 | 0.714286 | eng_Latn | 0.490945 |
d043475c9453777a1e703f51e244fc2259c9ef5c | 3,639 | md | Markdown | README.md | EndlessHouse/VerifyPurchaseDiscordBot | f6f961fa3e36ad25453babecd2794c8c70625131 | [
"Apache-2.0"
] | null | null | null | README.md | EndlessHouse/VerifyPurchaseDiscordBot | f6f961fa3e36ad25453babecd2794c8c70625131 | [
"Apache-2.0"
] | null | null | null | README.md | EndlessHouse/VerifyPurchaseDiscordBot | f6f961fa3e36ad25453babecd2794c8c70625131 | [
"Apache-2.0"
] | null | null | null | # VerifyPurchaseDiscordBot
Discord bot that searches your PayPal transactions (via user email) and assigns a role if the purchase has been verified.
This bot supports [SpigotMC](https://www.spigotmc.org/) and [MCMarket](https://www.mc-market.org/)
**ScreenShot**
Some pics of the Discord bot






**Steps for getting started:**
- install the libraries in requirements.txt
- ```python -m pip install -r requirements.txt```
**Guide to filling out the .env file:**
---
```
DISCORD_TOKEN=Ojzk1MTM2Mc0NjYTATQy2Mkz.gfqYfq99A.JScoVbGD1Lo0HDbonDuvYjJPtPy
```
- Create a new discord application and put the token value here, [click here](https://discord.com/developers/applications)
- Make sure to set the OAuth2 scope to: [bot, applications.commands]
```
GUILD_ID="897460772427382670"
```
- Put here the guild id (discord server id) you want to use the bot on
- If you don't know your discord server id, [click here](https://support.discord.com/hc/en-us/articles/206346498-Where-can-I-find-my-User-Server-Message-ID-)
```
ADMIN_ID_LIST="143651103467110401 290472907317051392"
```
- Put here any admin user ids (discord user ids) that you want the bot to private message (separated by spaces within the string)
- If you don't know a discord user id, [click here](https://support.discord.com/hc/en-us/articles/206346498-Where-can-I-find-my-User-Server-Message-ID-)
```
ADMIN_ROLE_ID="945380919903150090"
```
- Put here the admin role id (discord role id) that you want to allow to write in the verification channel; otherwise, the bot will delete the messages
- If you don't know the discord role id [click here](https://ozonprice.com/blog/discord-get-role-id/)
```
REPORT_CHANNEL_ID="945380919903150090"
```
- Put here the channel id (discord channel id) where the bot will send a message with the success of the verification every time someone will use the verification command
- If you don't know the discord channel id [click here](https://turbofuture.com/internet/Discord-Channel-ID)
```
VERIFY_CHANNEL_ID="945380919903150090"
```
- Put here the channel id (discord channel id) where users can only use the verification command
- Make sure that users can write within that channel
- If you don't know the discord channel id [click here](https://turbofuture.com/internet/Discord-Channel-ID)
```
PAYPAL_CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
PAYPAL_CLIENT_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
- Create a PayPal API application in a 'Live' environment [click here](https://developer.paypal.com/docs/api-basics/manage-apps/)
- Make sure to grant the application access to 'Transaction Search'
```
RESOURCE_LIST="PluginName1:846789230670774275,846789230670774276;PluginName2:RoleId1,RoleId2"
```
- Put your Spigot or McMarket resource name here followed by the Discord roles (comma-separated) you want to assign to a user once they have verified a purchase for that plugin name. (These roles must already exist on your server)
- Put as many of these as you have separated by semicolon within the string like in the example above
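Putting it all together, a complete `.env` file built from the example values above might look like this (every value is a placeholder; substitute your own):

```
DISCORD_TOKEN=Ojzk1MTM2Mc0NjYTATQy2Mkz.gfqYfq99A.JScoVbGD1Lo0HDbonDuvYjJPtPy
GUILD_ID="897460772427382670"
ADMIN_ID_LIST="143651103467110401 290472907317051392"
ADMIN_ROLE_ID="945380919903150090"
REPORT_CHANNEL_ID="945380919903150090"
VERIFY_CHANNEL_ID="945380919903150090"
PAYPAL_CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
PAYPAL_CLIENT_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
RESOURCE_LIST="PluginName1:846789230670774275,846789230670774276;PluginName2:RoleId1,RoleId2"
```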
---
Now just run the bot wherever you are going to host it (and make sure it has a sufficient role to assign roles to users):
```
py -3 main.py
```
[](https://www.paypal.com/donate/?hosted_button_id=ZETC226F4FWB6) \
**If this bot is helpful to you, please consider donating.**
| 44.925926 | 230 | 0.763946 | eng_Latn | 0.891929 |
d0435c9d5efde0d6e9293d818271bccef12a93a8 | 1,277 | md | Markdown | _posts/2020-01-01-0291502.md | meparth/BongNetflix | 6c84a70721296413c70067e10cfa7741cf89e963 | [
"MIT"
] | null | null | null | _posts/2020-01-01-0291502.md | meparth/BongNetflix | 6c84a70721296413c70067e10cfa7741cf89e963 | [
"MIT"
] | null | null | null | _posts/2020-01-01-0291502.md | meparth/BongNetflix | 6c84a70721296413c70067e10cfa7741cf89e963 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Swept Away"
description: "Amber is 40, beautiful, rich, spoiled, and arrogant beyond measure. Nothing makes this woman happy, including her wealthy but passive husband (Tony), a pharmaceutical kingpin. When Tony takes her on a private cruise from Greece to Italy, Amber is unimpressed at this impromptu no-frills vacation, and takes out her anger on the ship's first mate, Giuseppe. When a storm leaves the two shipwrecked on a deserted island, however, the tables sudden.."
img: 0291502.jpg
kind: movie
genres: [Comedy,Romance]
tags: Comedy Romance
language: English
year: 2002
imdb_rating: 3.6
votes: 15480
imdb_id: 0291502
netflix_id: 60024980
color: 495867
---
Director: `Guy Ritchie`
Cast: `Bruce Greenwood` `Madonna` `Elizabeth Banks` `Michael Beattie` `Jeanne Tripplehorn`
Amber is 40, beautiful, rich, spoiled, and arrogant beyond measure. Nothing makes this woman happy, including her wealthy but passive husband (Tony), a pharmaceutical kingpin. When Tony takes her on a private cruise from Greece to Italy, Amber is unimpressed at this impromptu no-frills vacation, and takes out her anger on the ship's first mate, Giuseppe. When a storm leaves the two shipwrecked on a deserted island, however, the tables suddenly turn...::Anonymous | 60.809524 | 466 | 0.782302 | eng_Latn | 0.99163 |
d04412291f7e90e64de260c19f3e880ecbe431bc | 427 | md | Markdown | ext/native-decls/GetPlayerWantedCentrePosition.md | thorium-cfx/fivem | 587eb7c12066a2ebf8631bde7bb39ee2df1b5a0c | [
"MIT"
] | 5,411 | 2017-04-14T08:57:56.000Z | 2022-03-30T19:35:15.000Z | ext/native-decls/GetPlayerWantedCentrePosition.md | big-rip/fivem-1 | c08af22110802e77816dfdde29df1662f8dea563 | [
"MIT"
] | 802 | 2017-04-21T14:18:36.000Z | 2022-03-31T21:20:48.000Z | ext/native-decls/GetPlayerWantedCentrePosition.md | big-rip/fivem-1 | c08af22110802e77816dfdde29df1662f8dea563 | [
"MIT"
] | 2,011 | 2017-04-14T09:44:15.000Z | 2022-03-31T15:40:39.000Z | ---
ns: CFX
apiset: server
---
## GET_PLAYER_WANTED_CENTRE_POSITION
```c
Vector3 GET_PLAYER_WANTED_CENTRE_POSITION(char* playerSrc);
```
Gets the current known coordinates for the specified player from the cops' perspective. This native is used server-side when using OneSync.
## Parameters
* **playerSrc**: The target player
## Return value
The player's position as known by the police. Returns vector zero if the player has no wanted level.
| 23.722222 | 134 | 0.770492 | eng_Latn | 0.995678 |
d045fe0a7c3b076baa2d94c38e013a5a9337d430 | 19,840 | md | Markdown | reference.md | mkyleUD/Caviness-HPC-Intro | fd1442af219ba8358a6f972248642fc2231a951b | [
"CC-BY-4.0"
] | null | null | null | reference.md | mkyleUD/Caviness-HPC-Intro | fd1442af219ba8358a6f972248642fc2231a951b | [
"CC-BY-4.0"
] | null | null | null | reference.md | mkyleUD/Caviness-HPC-Intro | fd1442af219ba8358a6f972248642fc2231a951b | [
"CC-BY-4.0"
] | null | null | null | ---
layout: reference
permalink: /reference/
---
## Cheatsheets for Queuing System Quick Reference
* [SLURM](https://slurm.schedmd.com/pdfs/summary.pdf)
## HPC Terminology
{:auto_ids}
Accelerator Node
: Nodes equipped with accelerator cards as co-processors, such as GPU nodes or Phi nodes.
Beowulf cluster
: A cluster built from PCs running the Linux operating system. Clusters were already
well established when Beowulf clusters were first built in the early 1990s. Prior to
Beowulf, however, clusters were built with workstations running UNIX. By dropping
the cost of cluster hardware, Beowulf clusters dramatically increased access to
cluster computing.
Central processing unit (CPU)
: Is the part of a computer which executes software programs. The term is not specific to
a particular method of execution: units based on transistors, relays, or vacuum tubes
might be considered CPU's. However, for clarity, we will use the term to refer to individual
silicon chips, such as Intel's Pentium or AMD's Athlon. Thus, a CPU contains one or more
cores, however, an HPC system may contain many CPU's. For example, Kraken contains
several thousand AMD Opteron CPU's
Cloud computing
: Is where one can access a lot of computer power from a desktop or laptop computer, but the
actual calculation is done remotely on another more powerful server or supercomputer. So
replace the word 'cloud' with 'Internet' and the meaning becomes clearer. Cloud computing
is where Internet-based resources, such as software and storage, are provided to computers
on demand, possibly in a pay-per-use model. It is seen as a way to increase one's computing
capacity without investing in new infrastructure or training new personnel.
Cluster
: A collection of technology components – including servers, networks, and storage – deployed
together to form a platform for scientific computation
Compute nodes
:
Is the cluster nodes where users run their jobs. Jobs need to be submitted and scheduled a
time to run on the nodes through the job scheduler.
Cyberinfrastructure
: First used in 1991 according to Merriam-Webster, the prefix 'cyber' means relating to computers
or computer networks. So cyberinfrastructure means the combination of computer software,
hardware, and other technologies – as well as human expertise – required to support current
and future discoveries in science and engineering. Similar to the way a highway infrastructure
includes roads, bridges, and tunnels working together to keep vehicles moving, all these
components are necessary to keep scientific discovery moving along as well.
Development (dev) nodes
:
Are compute nodes that are meant for compiling, debugging, and testing programs, but not long computation.
They have a similar configuration and environment to compute nodes but often have fewer resources.
Distributed Computing
: A type of computing in which a computational task is divided into subtasks that execute on a
collection of networked computers. The networks are general-purpose networks
(LANs, WANs, or the Internet) as opposed to dedicated cluster interconnects.
Distributed memory
:
A computer system that constructs an address space shared among multiple UEs from physical memory
subsystems that are distinct and distributed about the system. There may be operating-system and
hardware support for the distributed shared memory system, or the shared memory may be implemented
entirely in software as a separate middleware layer.
Grid computing
:
A grid is an architecture for distributed computing and resource sharing. A grid system is
composed of a heterogeneous collection of resources connected by local-area and/or wide-area
networks (often the Internet). These individual resources are general and include compute servers,
storage, application servers, information services, or even scientific instruments. Grids are
often implemented in terms of Web services and integrated middleware components that provide a
consistent interface to the grid. A grid is different from a cluster in that the resources in a
grid are not controlled through a single point of administration; the grid middleware manages the
system so control of resources on the grid and the policies governing use of the resources remain
with the resource owners.
Grid Engine
:
Is a cluster management software which manages access, reports usage and enforce business policies
for a compute cluster. Grid Engine is the cluster management software on Farber.
High-Performance Computing (HPC)
:
Is the term often used for large-scale computers and the simulations and models which run on them.
Infiniband
:
Is a high-speed, low-latency network that connects all compute nodes to each other and to data
storage. This network is sometimes referred to as a fabric. It enables independent compute nodes to
communicate with each other much faster than a traditional network, enabling computational jobs that
span multiple servers to operate more efficiently, often through a technology known as Message
Passing Interface (MPI).
Job
:
A request from a user for computational resources from the cluster.
Node
:
A common term for the computational elements that make up a distributed-memory parallel machine.
Each node has its own memory and at least one processor; that is, a node can be a uniprocessor or
some type of multiprocessor.
Parallel File System
:
A file system that is visible to any processor in the system and can be read and written by multiple
UEs simultaneously. Although a parallel file system appears to the computer system as a single file
system, it is physically distributed among a number of disks. To be effective, the aggregate throughput
for read and write must be scalable.
Parallel processing
:
Is the use of multiple processors running on different parts of the same computer program concurrently,
resulting in significantly faster compute times. Parallel processing is used when many complex
calculations are required, such as in climate or earthquake modeling. Processing "runs" that used to
take months can often be done within days or even hours on larger supercomputers.
Preemption
:
The act of "stopping" one or more "low-priority" jobs to let a "high-priority" job run.
Scratch space
:
Supercomputers generally have what is called scratch space: disk space available for
temporary use. It is analogous to scratch paper. Think of it as a desk:
it is where papers are stored while they are waiting to be worked on or filed away.
Serial Processing
: Execution of a program sequentially, one statement at a time.
Shared memory
:
A term applied to both hardware and software indicating the presence of a memory region that is
shared between system components. For programming environments, the term means that memory is shared
between processes or threads. Applied to hardware, it means that the architectural feature tying
processors together is shared memory.
Slurm
:
Is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system
for large and small Linux clusters. Slurm is the cluster management software used on Caviness and
will be used on Darwin.
Supercomputer
:
Is a very fast computer. Usually the term is reserved for the 500 fastest computers in the world.
Because technology moves rapidly, the list of top supercomputers changes constantly, and today's
supercomputer is destined to become tomorrow's "regular" computer. Modern supercomputers are made up
of many smaller computers – sometimes thousands of them – connected via fast local network connections.
Those smaller computers work as an "army of ants" to solve difficult calculations very fast to benefit
science and society. A supercomputer is typically used for solving larger scientific or engineering
challenges in numerous fields, such as new drug development and medical research, environmental
sciences such as global climate change, or helping us better respond to natural or man-made disasters
by creating earthquake simulations or modeling the projected flow of oil spills.
Workstation
:
A computer used for tasks such as programming, engineering, and design.
## Glossary
{:auto_ids}
absolute path
:
A [path](#path) that refers to a particular location in a file system.
Absolute paths are usually written with respect to the file system's
[root directory](#root-directory),
and begin with either "/" (on Unix) or "\\" (on Microsoft Windows).
See also: [relative path](#relative-path).
argument
:
A value given to a function or program when it runs.
The term is often used interchangeably (and inconsistently) with [parameter](#parameter).
command shell
:
See [shell](#shell)
command-line interface
:
A user interface based on typing commands,
usually at a [REPL](#read-evaluate-print-loop).
See also: [graphical user interface](#graphical-user-interface).
comment
:
A remark in a program that is intended to help human readers understand what is going on,
but is ignored by the computer.
Comments in Python, R, and the Unix shell start with a `#` character and run to the end of the line;
comments in SQL start with `--`,
and other languages have other conventions.
current working directory
:
The directory that [relative paths](#relative-path) are calculated from;
equivalently,
the place where files referenced by name only are searched for.
Every [process](#process) has a current working directory.
The current working directory is usually referred to using the shorthand notation `.` (pronounced "dot").
file system
:
A set of files, directories, and I/O devices (such as keyboards and screens).
A file system may be spread across many physical devices,
or many file systems may be stored on a single physical device;
the [operating system](#operating-system) manages access.
filename extension
:
The portion of a file's name that comes after the final "." character.
By convention this identifies the file's type:
`.txt` means "text file", `.png` means "Portable Network Graphics file",
and so on. These conventions are not enforced by most operating systems:
it is perfectly possible (but confusing!) to name an MP3 sound file `homepage.html`.
Since many applications use filename extensions to identify the [MIME type](#mime-type) of the file,
misnaming files may cause those applications to fail.
filter
:
A program that transforms a stream of data.
Many Unix command-line tools are written as filters:
they read data from [standard input](#standard-input),
process it, and write the result to [standard output](#standard-output).
flag
:
A terse way to specify an option or setting to a command-line program.
By convention Unix applications use a dash followed by a single letter,
such as `-v`, or two dashes followed by a word, such as `--verbose`,
while DOS applications use a slash, such as `/V`.
Depending on the application, a flag may be followed by a single argument, as in `-o /tmp/output.txt`.
for loop
:
A loop that is executed once for each value in some kind of set, list, or range.
See also: [while loop](#while-loop).
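For example, in Bash:

```
for filename in *.txt
do
    echo "$filename"
done
```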
FTP
:
A protocol or utility which is used to transfer files over a network connection. For security, use SFTP instead.
graphical user interface
:
A user interface based on selecting items and actions from a graphical display,
usually controlled by using a mouse.
See also: [command-line interface](#command-line-interface).
home directory
:
The default directory associated with an account on a computer system.
By convention, all of a user's files are stored in or below her home directory.
Linux
:
Linux is an operating system, similar to UNIX, which is becoming quite popular for supercomputers due
to abundant support, user familiarity, and comparable performance with optimized UNIX systems.
Kraken, for example, runs on a modified version of Linux.
loop
:
A set of instructions to be executed multiple times. Consists of a [loop body](#loop-body) and (usually) a
condition for exiting the loop. See also [for loop](#for-loop) and [while loop](#while-loop).
loop body
:
The set of statements or commands that are repeated inside a [for loop](#for-loop)
or [while loop](#while-loop).
MIME type
:
MIME (Multi-Purpose Internet Mail Extensions) types describe different file types for exchange on the Internet,
for example images, audio, and documents.
operating system
:
Software that manages interactions between users, hardware, and software [processes](#process). Common
examples are Linux, macOS, and Windows.
orthogonal
:
To have meanings or behaviors that are independent of each other.
If a set of concepts or tools are orthogonal,
they can be combined in any way.
parameter
:
A variable named in a function's declaration that is used to hold a value passed into the call.
The term is often used interchangeably (and inconsistently) with [argument](#argument).
parent directory
:
The directory that "contains" the one in question.
Every directory in a file system except the [root directory](#root-directory) has a parent.
A directory's parent is usually referred to using the shorthand notation `..` (pronounced "dot dot").
path
:
A description that specifies the location of a file or directory within a [file system](#file-system).
See also: [absolute path](#absolute-path), [relative path](#relative-path).
pipe
:
A connection from the output of one program to the input of another.
When two or more programs are connected in this way, they are called a "pipeline".
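For example, this pipeline prints the line count of the shortest `.txt` file in the current directory:

```
wc -l *.txt | sort -n | head -n 1
```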
process
:
A running instance of a program, containing code, variable values,
open files and network connections, and so on.
Processes are the "actors" that the [operating system](#operating-system) manages;
it typically runs each process for a few milliseconds at a time
to give the impression that they are executing simultaneously.
prompt
:
A character or characters displayed by a [REPL](#read-evaluate-print-loop) to show that
it is waiting for its next command.
quoting
:
(in the shell):
Using quotation marks of various kinds to prevent the shell from interpreting special characters.
For example, to pass the string `*.txt` to a program,
it is usually necessary to write it as `'*.txt'` (with single quotes)
so that the shell will not try to expand the `*` wildcard.
read-evaluate-print loop
:
(REPL): A [command-line interface](#command-line-interface) that reads a command from the user,
executes it, prints the result, and waits for another command.
redirect
:
To send a command's output to a file rather than to the screen or another command,
or equivalently to read a command's input from a file.
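For example, in Bash:

```
ls *.txt > filenames.txt    # send output to a file instead of the screen
wc -l < filenames.txt       # read input from a file
```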
regular expression
:
A pattern that specifies a set of character strings.
REs are most often used to find sequences of characters in strings.
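For example, in the shell:

```
grep -E '^[0-9]+' data.txt    # print only the lines that start with one or more digits
```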
relative path
:
A [path](#path) that specifies the location of a file or directory
with respect to the [current working directory](#current-working-directory).
Any path that does not begin with a separator character ("/" or "\\") is a relative path.
See also: [absolute path](#absolute-path).
root directory
:
The top-most directory in a [file system](#file-system).
Its name is "/" on Unix (including Linux and macOS) and "\\" on Microsoft Windows.
shell
:
A [command-line interface](#command-line-interface) such as Bash (the Bourne-Again Shell)
or the Microsoft Windows DOS shell
that allows a user to interact with the [operating system](#operating-system).
shell script
:
A set of [shell](#shell) commands stored in a file for re-use.
A shell script is a program executed by the shell;
the name "script" is used for historical reasons.
SSH
:
A protocol for securely connecting to a remote computer, or also a program which uses this
protocol. This connection is generally for a command line interface, but it is possible to
use GUI programs through SSH. For more information about how to use SSH, see Access.
standard input
:
A process's default input stream.
In interactive command-line applications,
it is typically connected to the keyboard;
in a [pipe](#pipe),
it receives data from the [standard output](#standard-output) of the preceding process.
standard output
:
A process's default output stream.
In interactive command-line applications,
data sent to standard output is displayed on the screen;
in a [pipe](#pipe),
it is passed to the [standard input](#standard-input) of the next process.
sub-directory
:
A directory contained within another directory.
tab completion
:
A feature provided by many interactive systems in which
pressing the Tab key triggers automatic completion of the current word or command.
UNIX
:
UNIX is an operating system first developed in the 1970's. It has gone through a
number of incarnations, and still has many popular versions. UNIX has dominated
supercomputing for many years, however, the high performance computing community
has been increasingly turning to Linux for an operating system.
variable
:
A name in a program that is associated with a value or a collection of values.
while loop
:
A loop that keeps executing as long as some condition is true.
See also: [for loop](#for-loop).
wildcard
:
A character used in pattern matching.
In the Unix shell,
the wildcard `*` matches zero or more characters,
so that `*.txt` matches all files whose names end in `.txt`.
## External references
### Opening a terminal
* [Using the Terminal program on a Macintosh computer](https://services.udel.edu/TDClient/32/Portal/KB/ArticleDet?ID=477)
* [Using X-Windows (X11) and secure shell (SSH) to connect to a remote UNIX
server](https://services.udel.edu/TDClient/32/Portal/KB/ArticleDet?ID=477)
* [Using a UNIX/Linux emulator (Cygwin) or Secure Shell (SSH) client (Putty)](http://faculty.smu.edu/reynolds/unixtut/windows.html)
### Manuals
* [GNU manuals](http://www.gnu.org/manual/manual.html)
* [Core GNU utilities](http://www.gnu.org/software/coreutils/manual/coreutils.html)
### UD Sites
* [UD Service Portal](https://services.udel.edu/TDClient/32/Portal/Home/)
* [UD IT-RCI](https://sites.udel.edu/research-computing/)
* [UD IT-RCI HPC Documentation Wiki](https://sites.udel.edu/research-computing/)
### Text Editing
* [Nano editor home page](https://www.nano-editor.org/)
* [Nano tutorial](https://www.howtoforge.com/linux-nano-command/)
* [VIM editor home page](https://www.vim.org/)
* Vim also has a built-in tutorial. From the bash prompt, type `vimtutor`
and follow the instructions.
### Helpful Resources
* [Explain Shell](https://explainshell.com/) is a website that can dissect any shell command and any passed options and
display helpful information about it.
* [Shell Check](https://shellcheck.net) is a website that will check a shell script for common errors. You can either type
the script in manually or upload a script file to the site.
d0463bd290d130fbcfca60f175e65fa10c7be050 | 1,676 | md | Markdown | website/translated_docs/es-ES/Multi-critera-search.md | macMikey/4d-for-ios | 8f19a45b20fcf3ef50cfe759c878934b1dcdf9e5 | [
"CC-BY-4.0"
] | null | null | null | website/translated_docs/es-ES/Multi-critera-search.md | macMikey/4d-for-ios | 8f19a45b20fcf3ef50cfe759c878934b1dcdf9e5 | [
"CC-BY-4.0"
] | 1 | 2021-02-03T08:35:21.000Z | 2021-02-03T08:47:49.000Z | website/translated_docs/es-ES/Multi-critera-search.md | macMikey/4d-for-ios | 8f19a45b20fcf3ef50cfe759c878934b1dcdf9e5 | [
"CC-BY-4.0"
] | 1 | 2019-11-02T07:27:28.000Z | 2019-11-02T07:27:28.000Z | ---
id: multi-criteria-search
title: Multi-criteria search
---
<div class = "objectives">
**OBJETIVOS**
Active la búsqueda de criterios múltiples en sus propias plantillas.</div>
Esta funcionalidad se activa por defecto en todas las plantillas generadas para 4D for iOS.
## Archivo Template svg
Para activar esta funcionalidad en sus propias plantillas, debe modificar las siguientes líneas en su archivo template.svg:
```xml
<rect id="search" class="droppable field optional" x="14" y="0" width="238" height="30" stroke-dasharray="5,2" ios:type="0,1,2,4,8,9,11,25,35" ios:bind="searchableField"/>
```
to:
```xml
<rect id="search" class="droppable field optional multi-criteria" x="14" y="0" width="238" height="30" stroke-dasharray="5,2" ios:type="0,1,2,4,8,9,11,25,35" ios:bind="searchableField"/>
```
That's it! The class is the only thing you need to modify to activate multi-criteria search.
## Project editor
Next, you can go to the project editor and drop multiple fields onto the list form's search area.

Click the search field delete button to modify the list of associated fields.
A menu will appear to let you **remove specific fields** or **remove all fields**, depending on the criteria you want to base your search on.

Congratulations! You can now base your search(es) on multiple fields in your 4D for iOS app!
d04646aa63e35b4dc0140a8e338dc7ebaf2d35c3 | 1,640 | md | Markdown | libs/router-middleware/README.md | bitflut/frrri | b4127b2330212fc74a16c431bba4d6ed7d12d4a5 | [
"MIT"
] | 2 | 2020-05-12T13:11:52.000Z | 2020-05-12T14:28:47.000Z | libs/router-middleware/README.md | bitflut/frrri | b4127b2330212fc74a16c431bba4d6ed7d12d4a5 | [
"MIT"
] | 1 | 2020-05-14T08:25:16.000Z | 2020-05-14T08:25:16.000Z | libs/router-middleware/README.md | bitflut/frrri | b4127b2330212fc74a16c431bba4d6ed7d12d4a5 | [
"MIT"
] | null | null | null | # FRRRI
> Angular at 250 MPH
## @ng-frrri/router-middleware
See [Quick start](https://bitflut.gitbook.io/frrri/) for minimal setup instructions.
## DSL
```typescript
import { operate } from '@frrri/router-middleware';
import { getMany } from '@frrri/router-middleware/operators';
const all = 'entities';
const posts = 'entities.posts';
const comments = 'entities.comments';
const users = 'entities.users';
const routes: Routes = [
{
path: 'posts',
data: operate(
reset(all),
getMany(posts),
setMeta({ title: 'Posts' }),
setBreadcrumb({ title: 'Posts' }),
),
children: [
{
path: ':id',
data: operate(
getActive(posts),
populate({
from: posts,
to: comments,
id: 'postId',
idSource: comments,
                }),
populate({
from: comments,
to: users,
id: 'userId',
idSource: comments,
                }),
activeMeta(posts, {
factory: post => ({ title: post.title }),
}),
activeBreadcrumb(posts, {
factory: post => ({ title: post.title }),
}),
),
},
],
},
];
```
## License
[](http://badges.mit-license.org)
| 26.451613 | 106 | 0.431098 | eng_Latn | 0.596749 |
d047105727bcc4f988c92b8d5558fddcaee7b3b8 | 2,084 | md | Markdown | site/src/site/sphinx/en/advice-class.md | weihubeats/arthas | c852a62bd65b847f13a366f158191d6e8607924d | [
"Apache-2.0"
] | 29,258 | 2018-09-04T10:48:10.000Z | 2022-03-31T13:25:53.000Z | site/src/site/sphinx/en/advice-class.md | jackchen10/arthas | b69203f029d55b5d70ad780b9d43862b522bea38 | [
"Apache-2.0"
] | 1,903 | 2018-09-03T08:36:56.000Z | 2022-03-31T13:11:50.000Z | site/src/site/sphinx/en/advice-class.md | jackchen10/arthas | b69203f029d55b5d70ad780b9d43862b522bea38 | [
"Apache-2.0"
] | 6,684 | 2018-09-04T10:51:03.000Z | 2022-03-31T06:39:35.000Z | Fundamental Fields in Expressions
==============================
There is a very fundamental class `Advice` for the expressions used in filtering, tracing or monitoring and other aspects in commands.
```java
public class Advice {
private final ClassLoader loader;
private final Class<?> clazz;
private final ArthasMethod method;
private final Object target;
private final Object[] params;
private final Object returnObj;
private final Throwable throwExp;
private final boolean isBefore;
private final boolean isThrow;
private final boolean isReturn;
// getter/setter
}
```
Description for the variables in the class `Advice`:
|Name|Specification|
|---:|:---|
|loader|the class loader for the current called class|
|clazz|the reference to the current called class|
|method|the reference to the current called method|
|target|the instance of the current called class|
|params|the parameters for the current call, which is an array (when there's no parameter, it will be an empty array)|
|returnObj|the return value from the current call - only available when the method call returns normally (`isReturn==true`), and `null` is for `void` return value|
|throwExp|the exceptions thrown from the current call - only available when the method call throws exception (`isThrow==true`)|
|isBefore|flag to indicate the method is about to execute. `isBefore==true` but `isThrow==false` and `isReturn==false` since it's no way to know how the method call will end|
|isThrow|flag to indicate the method call ends with exception thrown|
|isReturn|flag to indicate the method call ends normally without exception thrown|
All variables listed above can be used directly in the [OGNL expression](https://commons.apache.org/proper/commons-ognl/language-guide.html). The command will fail to execute and will exit if the expression contains illegal OGNL syntax or references an unexpected variable.
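For example, these fields can appear in both the observation expression and the filter condition of a `watch` command. The class and method below come from the arthas demo application; treat this as a sketch of typical usage rather than a prescribed form:

```
watch demo.MathGame primeFactors "{params, returnObj, throwExp}" "params[0] < 0" -x 2
```

Here `params[0] < 0` is an OGNL condition built from the `params` field, so the watch only fires for calls whose first argument is negative.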
* [typical use cases](https://github.com/alibaba/arthas/issues/71);
* [OGNL language guide](https://commons.apache.org/proper/commons-ognl/language-guide.html).
| 46.311111 | 253 | 0.75144 | eng_Latn | 0.991936 |
d047d4255143f2ae6bb4d708e582e9bdb93b7bc2 | 1,867 | md | Markdown | docs/sub_account.md | eadwinCode/paystack-python | dde449e3c62d843d047ef99eb8eb4c8731cb88de | [
"MIT"
] | 89 | 2016-03-18T17:08:43.000Z | 2022-03-27T09:56:27.000Z | docs/sub_account.md | eadwinCode/paystack-python | dde449e3c62d843d047ef99eb8eb4c8731cb88de | [
"MIT"
] | 46 | 2016-04-01T14:59:47.000Z | 2022-03-31T17:18:12.000Z | docs/sub_account.md | eadwinCode/paystack-python | dde449e3c62d843d047ef99eb8eb4c8731cb88de | [
"MIT"
] | 38 | 2016-03-29T16:22:23.000Z | 2022-03-27T09:57:19.000Z | SubAccount
-------
#### `SubAccount.create(**kwargs)` - Create a SubAccount
*Usage*
```python
from paystackapi.subaccount import SubAccount
response = SubAccount.create(
business_name="Test Biz 123",
settlement_bank="Access Bank",
account_number="xxxxxxxxx",
percentage_charge="6.9"
)
```
*Arguments*
- `business_name`: Name of business for subaccount
- `settlement_bank`: Name of Bank (accepted banks)
- `account_number`: NUBAN Bank Account number
- `percentage_charge`: Default percentage charged on the subaccount
- `**kwargs`
*Returns*
JSON data from Paystack API.
#### `SubAccount.list(perPage, page)` - List a SubAccount
*Usage*
```python
from paystackapi.subaccount import SubAccount
response = SubAccount.list(perPage=3, page=1)
```
*Arguments*
- `perPage`: Records you want to retrieve per page (Integer)
- `page`: What page you want to retrieve (Integer)
- `**kwargs`
*Returns*
JSON data from Paystack API.
#### `SubAccount.fetch(id_or_slug)` - Fetch a SubAccount
*Usage*
```python
from paystackapi.subaccount import SubAccount
response = SubAccount.fetch(id_or_slug="some_slug_like_subaccount_code_or_id")
```
*Arguments*
- `id_or_slug`: ID or Subaccount_Code
*Returns*
JSON data from Paystack API.
#### `SubAccount.update(id_or_slug, **kwargs)` - Update a SubAccount
*Usage*
```python
from paystackapi.subaccount import SubAccount
response = SubAccount.update(
id_or_slug="some_slug_like_subaccount_code_or_id),
**kwargs
)
```
*Arguments*
- `id_or_slug`: ID or Subaccount_Code
- `business_name`: Name of business for subaccount
- `settlement_bank`: Name of Bank (accepted banks)
- `account_number`: NUBAN Bank Account number
- `percentage_charge`: Default percentage charged on the subaccount
- `**kwargs`
*Returns*
JSON data from Paystack API.
| 20.516484 | 77 | 0.711302 | eng_Latn | 0.560841 |
d047f77ac405d32d19205b68eb3e6d81253069ff | 3,107 | md | Markdown | README.md | Nevon/logsplit | 3436d6fbc6d78690b485e3df65ad43c8fefa1ee9 | [
"MIT"
] | 3 | 2018-03-31T13:12:19.000Z | 2018-04-05T09:13:16.000Z | README.md | Nevon/logsplit | 3436d6fbc6d78690b485e3df65ad43c8fefa1ee9 | [
"MIT"
] | null | null | null | README.md | Nevon/logsplit | 3436d6fbc6d78690b485e3df65ad43c8fefa1ee9 | [
"MIT"
] | null | null | null | [](https://github.com/Nevon/logsplit)
[](https://travis-ci.org/Nevon/logsplit)
# Logsplit
Takes large objects and splits them into small chunks for separate logging, with easily followable references.
## <a name="example"></a> Example
```javascript
const createLogSplitter = require("logsplit");
const logsplit = createLogSplitter(console.log);
const message = {
author: "Tommy Brunn",
packages: [
{
name: "express-physical",
url: "https://github.com/Nevon/express-physical"
// additional fields making this object very large
}
]
};
console.log(logsplit(message));
// `logsplit` returns this object, which contains a reference to the extracted
// large object:
// {
// author: 'Tommy Brunn',
// packages: 'Log-Reference-e44ab504-2202-4879-87ba-66c30ab7cf4f'
// }
// Additionally, the extracted object gets logged separately:
// {
// $reference: 'Log-Reference-e44ab504-2202-4879-87ba-66c30ab7cf4f',
// $item: [
// {
// name: 'express-physical',
// url: 'https://github.com/Nevon/express-physical'
// // additional fields making this object very large
// }
// ]
// }
```
## <a name="motivation"></a> Motivation
For debugging purposes, I find it very useful to log all data that was used in my system. For example, request and response bodies in a web service. However, [due to limitations in logging tools](https://github.com/moby/moby/pull/35831), it's not always desirable to log the entire payload in a single message.
As it is impossible to know up front what parts of your data will be important until after you need it, Logsplit helps you to log a high level message while separately logging the details for when you need them.
## <a name="installation"></a> Installation
```sh
npm install logsplit --save
# yarn add logsplit
```
## <a name="usage"></a> Usage
`createLogSplitter` takes a logging function and an optional options object, and returns a function (`logsplit`).
```javascript
const createLogSplitter = require("logsplit");
const logFunction = message =>
console.info(`Extracted log item ${message.$reference}: %j`, message);
const options = {
// maximum approximate object size for extraction
maxByteSize: 1500,
// generate the reference string to replace the large object with
createReference: item => `Log-Reference-${uuid()}`
};
const logsplit = createLogSplitter(logFunction, options);
console.log(logsplit(message));
```
### <a name="express-middleware"></a> Express Middleware
Logsplit can be used as an Express middleware for doing request/response body logging. If you are interested in this use case, [open an issue](https://github.com/Nevon/logsplit/issues/new).
## <a name="license"></a> License
See [LICENSE](https://github.com/Nevon/logsplit/blob/master/LICENSE) for more details.
### <a name="attribution"></a> Attributions
* [Design Credits: www.Vecteezy.com](https://www.Vecteezy.com/)
| 33.408602 | 310 | 0.708722 | eng_Latn | 0.892611 |
d04863f706a95eee26768dcfb8cdfea6b5461d4a | 3,449 | md | Markdown | sdk-api-src/content/ocidl/nf-ocidl-iperpropertybrowsing-mappropertytopage.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/ocidl/nf-ocidl-iperpropertybrowsing-mappropertytopage.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | sdk-api-src/content/ocidl/nf-ocidl-iperpropertybrowsing-mappropertytopage.md | amorilio/sdk-api | 54ef418912715bd7df39c2561fbc3d1dcef37d7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NF:ocidl.IPerPropertyBrowsing.MapPropertyToPage
title: IPerPropertyBrowsing::MapPropertyToPage (ocidl.h)
description: Retrieves the CLSID of the property page associated with the specified property.
helpviewer_keywords: ["IPerPropertyBrowsing interface [COM]","MapPropertyToPage method","IPerPropertyBrowsing.MapPropertyToPage","IPerPropertyBrowsing::MapPropertyToPage","MapPropertyToPage","MapPropertyToPage method [COM]","MapPropertyToPage method [COM]","IPerPropertyBrowsing interface","_ctrl_iperpropertybrowsing_mappropertytopage","com.iperpropertybrowsing_mappropertytopage","ocidl/IPerPropertyBrowsing::MapPropertyToPage"]
old-location: com\iperpropertybrowsing_mappropertytopage.htm
tech.root: com
ms.assetid: f8cf86eb-23d1-4aa6-859a-055df99b064c
ms.date: 12/05/2018
ms.keywords: IPerPropertyBrowsing interface [COM],MapPropertyToPage method, IPerPropertyBrowsing.MapPropertyToPage, IPerPropertyBrowsing::MapPropertyToPage, MapPropertyToPage, MapPropertyToPage method [COM], MapPropertyToPage method [COM],IPerPropertyBrowsing interface, _ctrl_iperpropertybrowsing_mappropertytopage, com.iperpropertybrowsing_mappropertytopage, ocidl/IPerPropertyBrowsing::MapPropertyToPage
req.header: ocidl.h
req.include-header:
req.target-type: Windows
req.target-min-winverclnt: Windows 2000 Professional [desktop apps only]
req.target-min-winversvr: Windows 2000 Server [desktop apps only]
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl: OCIdl.idl
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
targetos: Windows
req.typenames:
req.redist:
ms.custom: 19H1
f1_keywords:
- IPerPropertyBrowsing::MapPropertyToPage
- ocidl/IPerPropertyBrowsing::MapPropertyToPage
dev_langs:
- c++
topic_type:
- APIRef
- kbSyntax
api_type:
- COM
api_location:
- OCIdl.h
api_name:
- IPerPropertyBrowsing.MapPropertyToPage
---
# IPerPropertyBrowsing::MapPropertyToPage
## -description
Retrieves the CLSID of the property page associated with the specified property.
## -parameters
### -param dispID [in]
The dispatch identifier of the property.
### -param pClsid [out]
A pointer to the CLSID identifying the property page associated with the property specified by <i>dispID</i>. If this method fails, *<i>pClsid</i> is set to CLSID_NULL.
## -returns
This method can return the standard return values E_INVALIDARG and E_UNEXPECTED, as well as the following values.
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>S_OK</b></dt>
</dl>
</td>
<td width="60%">
The method completed successfully.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_NOTIMPL</b></dt>
</dl>
</td>
<td width="60%">
The object does not support property pages at all or does not support mapping properties to the page CLSID. In other words, this feature of specific property browsing is not supported.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>E_POINTER</b></dt>
</dl>
</td>
<td width="60%">
The address in <i>pClsid</i> is not valid. For example, it may be <b>NULL</b>.
</td>
</tr>
</table>
## -remarks
The CLSID returned from this method can be passed to <a href="/windows/desktop/api/olectl/nf-olectl-olecreatepropertyframeindirect">OleCreatePropertyFrameIndirect</a> to specify the initial page to display in the property sheet.
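The following fragment is an illustrative sketch only; the <i>pBrowsing</i> interface pointer and the <i>dispidColor</i> dispatch identifier are assumed to be obtained elsewhere in your code:

```cpp
CLSID clsidPage = CLSID_NULL;
HRESULT hr = pBrowsing->MapPropertyToPage(dispidColor, &clsidPage);

if (SUCCEEDED(hr) && !IsEqualCLSID(clsidPage, CLSID_NULL))
{
    // clsidPage identifies the property page associated with the property.
    // It can be used when building the property frame with
    // OleCreatePropertyFrameIndirect to select the initial page.
}
```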
## -see-also
<a href="/windows/desktop/api/ocidl/nn-ocidl-iperpropertybrowsing">IPerPropertyBrowsing</a> | 29.991304 | 430 | 0.777037 | yue_Hant | 0.467591 |
d04864027666228d40100039032e71218c8bb7cd | 6,917 | md | Markdown | articles/azure-sql/azure-hybrid-benefit.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-28T07:46:06.000Z | 2022-01-26T12:48:02.000Z | articles/azure-sql/azure-hybrid-benefit.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 562 | 2017-06-27T13:50:17.000Z | 2021-05-17T23:42:07.000Z | articles/azure-sql/azure-hybrid-benefit.md | beatrizmayumi/azure-docs.pt-br | ca6432fe5d3f7ccbbeae22b4ea05e1850c6c7814 | [
"CC-BY-4.0",
"MIT"
] | 113 | 2017-07-11T19:54:32.000Z | 2022-01-26T21:20:25.000Z | ---
title: Azure Hybrid Benefit
titleSuffix: Azure SQL Database & SQL Managed Instance
description: Use existing SQL Server licenses for Azure SQL Database and SQL Managed Instance discounts.
services: sql-database
ms.service: sql-db-mi
ms.subservice: features
ms.custom: sqldbrb=4
ms.topic: conceptual
author: stevestein
ms.author: sstein
ms.reviewer: sashan, moslake
ms.date: 02/16/2021
ms.openlocfilehash: f7a37e761e37e295bbb92e442b1813ebded2a7cd
ms.sourcegitcommit: ac035293291c3d2962cee270b33fca3628432fac
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/24/2021
ms.locfileid: "104955271"
---
# <a name="azure-hybrid-benefit---azure-sql-database--sql-managed-instance"></a>Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](includes/appliesto-sqldb-sqlmi.md)]
In the provisioned compute tier of the vCore-based purchasing model, you can exchange your existing licenses for discounted rates on Azure SQL Database and Azure SQL Managed Instance by using [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/). This Azure benefit lets you save up to 30 percent or even more on SQL Database & SQL Managed Instance by using your SQL Server licenses with Software Assurance. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) page has a calculator to help determine savings. Note that Azure Hybrid Benefit does not apply to the Azure SQL Database server.
> [!NOTE]
> Changing to Azure Hybrid Benefit does not require any downtime.

## <a name="choose-a-license-model"></a>Choose a license model
With Azure Hybrid Benefit, you can choose to pay only for the underlying Azure infrastructure by using your existing SQL Server license for the SQL Server database engine itself (base compute price), or you can pay for both the underlying infrastructure and the SQL Server license (license-included price).
You can choose or change your licensing model in the Azure portal:
- For new databases, during creation, select **Configure database** on the **Basics** tab and select the option to save money.
- For existing databases, select **Configure** on the **Settings** menu and select the option to save money.
You can also configure a new or existing database by using one of the following APIs:
# <a name="powershell"></a>[PowerShell](#tab/azure-powershell)
To set or update the license type by using PowerShell:
- [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase)
- [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase)
- [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance)
- [Set-AzSqlInstance](/powershell/module/az.sql/set-azsqlinstance)
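For example, here is a minimal PowerShell sketch that switches an existing database to the Azure Hybrid Benefit price. The resource names are placeholders; `-LicenseType` accepts `BasePrice` (bring your own license) or `LicenseIncluded` (pay for the license in the hourly rate):

```powershell
# Switch an existing database to Azure Hybrid Benefit (bring-your-own-license) pricing.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mydb" `
    -LicenseType "BasePrice"
```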
# <a name="azure-cli"></a>[CLI do Azure](#tab/azure-cli)
To set or update the license type by using the Azure CLI:
- [az sql db create](/cli/azure/sql/db#az-sql-db-create)
- [az sql mi create](/cli/azure/sql/mi#az-sql-mi-create)
- [az sql mi update](/cli/azure/sql/mi#az-sql-mi-update)
# <a name="rest-api"></a>[REST API](#tab/rest)
To set or update the license type by using the REST API:
- [Databases - Create Or Update](/rest/api/sql/databases/createorupdate)
- [Databases - Update](/rest/api/sql/databases/update)
- [Managed Instances - Create Or Update](/rest/api/sql/managedinstances/createorupdate)
- [Managed Instances - Update](/rest/api/sql/managedinstances/update)
* * *
### <a name="azure-hybrid-benefit-questions"></a>Azure Hybrid Benefit questions
#### <a name="are-there-dual-use-rights-with-azure-hybrid-benefit-for-sql-server"></a>Are there dual-use rights with Azure Hybrid Benefit for SQL Server?
You have 180 days of dual-use rights for the license to ensure that migrations run seamlessly. After that 180-day period, you can use the SQL Server license only in the cloud in SQL Database. You no longer have dual-use rights on-premises and in the cloud.
#### <a name="how-does-azure-hybrid-benefit-for-sql-server-differ-from-license-mobility"></a>How does Azure Hybrid Benefit for SQL Server differ from license mobility?
We offer license mobility benefits to SQL Server customers with Software Assurance. License mobility allows the reassignment of their licenses to a partner's shared servers. You can use this benefit on Azure IaaS and AWS EC2.
Azure Hybrid Benefit for SQL Server differs from license mobility in two key areas:
- It provides economic benefits for moving highly virtualized workloads to Azure. SQL Server Enterprise Edition customers can get four cores in Azure in the General Purpose SKU for each core they own on-premises for highly virtualized applications. License mobility doesn't allow any special cost benefits for moving virtualized workloads to the cloud.
- It provides a PaaS destination on Azure (SQL Managed Instance) that is highly compatible with SQL Server.
#### <a name="what-are-the-specific-rights-of-the-azure-hybrid-benefit-for-sql-server"></a>What are the specific rights of the Azure Hybrid Benefit for SQL Server?
SQL Database and SQL Managed Instance customers have the following rights associated with Azure Hybrid Benefit for SQL Server:
|License footprint|What does Azure Hybrid Benefit for SQL Server give you?|
|---|---|
|SQL Server Enterprise Edition core customers with SA|<li>Can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU</li><br><li>1 core on-premises = 4 cores in the Hyperscale SKU</li><br><li>1 core on-premises = 4 cores in the General Purpose SKU</li><br><li>1 core on-premises = 1 core in the Business Critical SKU</li>|
|SQL Server Standard Edition core customers with SA|<li>Can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU</li><br><li>1 core on-premises = 1 core in the Hyperscale SKU</li><br><li>1 core on-premises = 1 core in the General Purpose SKU</li><br><li>4 cores on-premises = 1 core in the Business Critical SKU</li>|
|||
## <a name="next-steps"></a>Próximas etapas
- Para obter ajuda com a escolha de uma opção de implantação do SQL Azure, consulte [escolher a opção de implantação correta no SQL do Azure](azure-sql-iaas-vs-paas-what-is-overview.md).
- Para obter uma comparação dos recursos do banco de dados SQL e do SQL Instância Gerenciada, consulte [SQL database & recursos do sql instância gerenciada](database/features-comparison.md).
| 69.17 | 742 | 0.781553 | por_Latn | 0.995238 |
d048855b422a36834545bba04a453bee5764a226 | 1,346 | md | Markdown | docs/mfc/dialog-sample-list.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/mfc/dialog-sample-list.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-01T04:17:07.000Z | 2021-04-01T04:17:07.000Z | docs/mfc/dialog-sample-list.md | yoichinak/cpp-docs.ja-jp | 50048c3d1101537497403efb4e7b550108f3a8f0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: For details, see "Dialog Sample List".
title: Dialog Sample List
ms.date: 11/04/2016
helpviewer_keywords:
- sample applications [MFC], dialog boxes
ms.assetid: 3fc7dd7c-d758-4c43-96bb-0ea638ca1ad7
ms.openlocfilehash: a9650277d157c6b5e5db655123520e88e9cf0c5d
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 12/11/2020
ms.locfileid: "97261347"
---
# <a name="dialog-sample-list"></a>Dialog Sample List
See the following sample programs that illustrate dialog boxes and property sheets.
*MDI sample application that uses dialog boxes*
- [軌跡](../overview/visual-cpp-samples.md)
*Modeless dialog boxes*
- [固定](../overview/visual-cpp-samples.md)
*Property sheet dialog boxes (tab dialog boxes)*
- [PROPDLG](../overview/visual-cpp-samples.md)
- [CMNCTRL1](../overview/visual-cpp-samples.md)
- [CMNCTRL2](../overview/visual-cpp-samples.md)
*Dialog-based applications*
- [CMNCTRL1](../overview/visual-cpp-samples.md)
- [CMNCTRL2](../overview/visual-cpp-samples.md)
*Dialog box controls*
- [CMNCTRL1](../overview/visual-cpp-samples.md)
- [CMNCTRL2](../overview/visual-cpp-samples.md)
- [CTRLTEST](../overview/visual-cpp-samples.md)
*Dialog-style form views*
- [VIEWEX](../overview/visual-cpp-samples.md)
*In-memory dialog templates*
- [DLGTEMPL](../overview/visual-cpp-samples.md)
## <a name="see-also"></a>See also
[Dialog Boxes](dialog-boxes.md)
| 22.433333 | 60 | 0.750371 | yue_Hant | 0.179573 |
d048bf1ed0b06f9c7054b93f5cfdc85c096a02ae | 345 | md | Markdown | README.md | vishnuvardhana/express-mongoose-graphql | f5a4f191052b1a92dd2164f50a7aa20bfee268d3 | [
"MIT"
] | 2 | 2018-03-30T18:34:19.000Z | 2018-04-02T06:24:52.000Z | README.md | vishnuvardhana/express-mongoose-graphql | f5a4f191052b1a92dd2164f50a7aa20bfee268d3 | [
"MIT"
] | null | null | null | README.md | vishnuvardhana/express-mongoose-graphql | f5a4f191052b1a92dd2164f50a7aa20bfee268d3 | [
"MIT"
] | null | null | null | # graphql-nodejs-mongodb
Simple starter kit for GraphQL and MongoDB (via Mongoose) with Express.js.
# Installation and usage
1. Clone the repo
2. cd graphql-nodejs-mongodb
3. yarn install
4. node server.js
5. go to http://localhost:4000/graphql
# Road Map
1. Add more mutations
2. Authentication with oauth-server
3. Generalize mongodb methods
| 18.157895 | 69 | 0.771014 | eng_Latn | 0.646427 |
d04a2c2e332b7aca403680c7de8b906056365e31 | 711 | md | Markdown | README.md | bitrise-steplib/steps-codecov | 16f56b6d7c8929fef111fdb8da06955a07eb9060 | [
"MIT"
] | 3 | 2015-10-25T07:13:31.000Z | 2016-08-20T04:36:56.000Z | README.md | bitrise-io/steps-codecov | 16f56b6d7c8929fef111fdb8da06955a07eb9060 | [
"MIT"
] | 9 | 2016-04-22T20:09:00.000Z | 2018-12-03T17:39:11.000Z | README.md | bitrise-io/steps-codecov | 16f56b6d7c8929fef111fdb8da06955a07eb9060 | [
"MIT"
] | 5 | 2016-04-22T19:55:59.000Z | 2017-04-21T06:42:54.000Z | # DEPRECATED
The step owner has been changed, and from now on Codecov is responsible for developing the Bitrise Codecov step.
This repository is deprecated and will not receive any code change.
You can find the new active repository [here](https://github.com/codecov/codecov-bitrise)
# Codecov integration
[Codecov](https://codecov.io) integration
## How to use this Step
Can be run directly with the [bitrise CLI](https://github.com/bitrise-io/bitrise),
just `git clone` this repository, `cd` into it's folder in your Terminal/Command Line
and call `bitrise run test`.
## Trigger a new release
- __merge every code changes__ to the `master` branch
- __push the new version tag__ to the `master` branch
| 32.318182 | 112 | 0.767932 | eng_Latn | 0.991978 |
d04a487e21bb1dddead7d06ba46f7b59bc6d3ca2 | 100 | md | Markdown | README.md | holmok/fame-shame | 3d6201ecea34546140082c73499a183e0ad12715 | [
"MIT"
] | null | null | null | README.md | holmok/fame-shame | 3d6201ecea34546140082c73499a183e0ad12715 | [
"MIT"
] | null | null | null | README.md | holmok/fame-shame | 3d6201ecea34546140082c73499a183e0ad12715 | [
"MIT"
] | null | null | null | # FAME-SHAME
An Intentionally Simple Event Logger to FAME and SHAME.
(TODO: Create a real README)
| 16.666667 | 55 | 0.76 | eng_Latn | 0.501166 |
d04a9033ff957895c3db3f411540b6e266b15235 | 2,267 | md | Markdown | windows-driver-docs-pr/kernel/fast-startup-from-a-low-power-state.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/kernel/fast-startup-from-a-low-power-state.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/kernel/fast-startup-from-a-low-power-state.md | AnLazyOtter/windows-driver-docs.zh-cn | bdbf88adf61f7589cde40ae7b0dbe229f57ff0cb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Fast startup from a low-power state
description: Fast startup from a low-power state
ms.assetid: 1091571c-2e30-4ad5-b4b9-0f8633e68288
ms.localizationpriority: medium
ms.date: 10/17/2018
ms.openlocfilehash: d6c6db45818adf93e6daf7c45992f65e6735860f
ms.sourcegitcommit: fb7d95c7a5d47860918cd3602efdd33b69dcf2da
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 06/25/2019
ms.locfileid: "67386601"
---
# <a name="fast-startup-from-a-low-power-state"></a>Fast startup from a low-power state
To implement fast startup from a low-power state, the driver of a leaf-node device should handle the S0 power IRP (that is, the [**IRP\_MN\_SET\_POWER**](https://docs.microsoft.com/windows-hardware/drivers/kernel/irp-mn-set-power) IRP for the S0 system power state). A leaf-node device in the device hierarchy has no child devices. Because a leaf-node device has no dependencies on child devices, the device's function driver can reinitialize the device as a background task to avoid causing unnecessary delays for the operating system or for other drivers. In contrast, a bus driver has dependencies that require additional synchronization logic to coordinate the power-up sequence with its child devices.
Use the following steps to implement fast startup from a low-power state for a leaf-node device:
1. Set a completion routine for the S0 power IRP.
2. Send the S0 power IRP down the device stack.
3. Complete the S0 power IRP immediately instead of waiting for the D0 power IRP to complete. When the S0 power IRP completion routine runs, do the following:
   1. Request a D0 power IRP (that is, an **IRP\_MN\_SET\_POWER** IRP for the D0 device power state).
   2. Return STATUS\_SUCCESS from the S0 power IRP completion routine.
4. The driver should queue any I/O requests that it receives, but delay processing these requests until it finishes processing the D0 power IRP.
5. When the D0 power IRP completion routine runs, initialize the device, but limit this routine to what is required to make the device ready for use.
6. After the preceding steps are complete, your driver can begin to process I/O requests, including any I/O requests that might have been queued.
**Note** The preceding steps do not apply to the handling of power IRPs for any system power state other than PowerSystemWorking (S0). These steps apply specifically to the handling of power IRPs for transitions from a low-power state to the powered-on (S0) state.
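A skeletal sketch of steps 1-3 in WDM C follows; error handling and state tracking are omitted, and `DeviceD0Completion` is an assumed callback (corresponding to step 5), not a system-defined routine:

```c
#include <wdm.h>

/* Assumed callback (step 5): performs only the initialization needed to
   make the device ready for use, then releases any queued I/O. */
REQUEST_POWER_COMPLETE DeviceD0Completion;

/* Completion routine registered for the S0 power IRP (step 1). */
NTSTATUS
S0PowerIrpCompletion(PDEVICE_OBJECT DeviceObject, PIRP Irp, PVOID Context)
{
    POWER_STATE state;

    UNREFERENCED_PARAMETER(Irp);
    UNREFERENCED_PARAMETER(Context);

    /* Step 3a: request a D0 power IRP instead of waiting synchronously.
       In a real driver the target is typically the device's PDO; details omitted. */
    state.DeviceState = PowerDeviceD0;
    PoRequestPowerIrp(DeviceObject, IRP_MN_SET_POWER, state,
                      DeviceD0Completion, NULL, NULL);

    /* Step 3b: let the S0 IRP complete immediately. */
    return STATUS_SUCCESS;
}
```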
System startup is complete after all devices have completed their S0 power IRPs. The devices are not required to have completed their D0 power IRPs, or to be fully operational, by the time system startup completes. The kernel power manager has a limited set of IRP dispatch queues and must use these queues to notify all devices in the system of the return to the S0 state. A driver that cannot complete its S0 power IRP quickly prevents the drivers of other devices from receiving their S0 power IRPs. A poorly designed driver can therefore serialize driver operations that should occur concurrently, degrading overall system startup performance.
After a driver completes its S0 power IRP, it might receive I/O requests from applications that have open handles to the device. The driver must never fail these I/O requests, because doing so might cause applications to stop responding and to generate time-out error messages. Instead, the driver must queue the I/O requests until the device is ready to process them.
A bus driver can implement fast startup from a low-power state by using a technique similar to the one just described for leaf-node device drivers. One additional requirement that the bus driver must satisfy is to ensure that any requests from child devices to enter the D0 state are marked as pending until the bus device's own entry into the D0 state is complete.
For example, when the bus driver for a USB hub receives an S0 power IRP, the driver requests a D0 power IRP and completes the S0 power IRP after it receives the requested D0 power IRP. After the S0 IRP completes, however, the hub's child devices are likely to start receiving their S0 power IRPs and requesting D0 power IRPs. The bus driver should prevent the child devices from entering D0 until the hub device enters D0. Thus, the bus driver should mark all D0 power IRPs for child devices as pending, and complete these pending IRPs only after it finishes processing the D0 power IRP for the hub and the hub device is fully initialized.
For more information about power IRPs, see the following topics:
[Handling IRP\_MN\_SET\_POWER for System Power States](handling-irp-mn-set-power-for-system-power-states.md)
[Handling IRP\_MN\_SET\_POWER for Device Power States](handling-irp-mn-set-power-for-device-power-states.md)
| 36.564516 | 296 | 0.797971 | yue_Hant | 0.764077 |
d04b2054d79bd3c46ce7983bfdc6e6d5471e5e84 | 529 | md | Markdown | windows.applicationmodel.appservice/appservicetriggerdetails_appserviceconnection.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 199 | 2017-02-09T23:13:51.000Z | 2022-03-28T15:56:12.000Z | windows.applicationmodel.appservice/appservicetriggerdetails_appserviceconnection.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 2,093 | 2017-02-09T21:52:45.000Z | 2022-03-25T22:23:18.000Z | windows.applicationmodel.appservice/appservicetriggerdetails_appserviceconnection.md | gbaychev/winrt-api | 25346cd51bc9d24c8c4371dc59768e039eaf02f1 | [
"CC-BY-4.0",
"MIT"
] | 620 | 2017-02-08T19:19:44.000Z | 2022-03-29T11:38:25.000Z | ---
-api-id: P:Windows.ApplicationModel.AppService.AppServiceTriggerDetails.AppServiceConnection
-api-type: winrt property
---
<!-- Property syntax
public Windows.ApplicationModel.AppService.AppServiceConnection AppServiceConnection { get; }
-->
# Windows.ApplicationModel.AppService.AppServiceTriggerDetails.AppServiceConnection
## -description
Gets the connection to the endpoint of the other app service.
## -property-value
The connection to the endpoint of the other app service.
## -remarks
## -examples
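A minimal usage sketch in C# (the handler name and the pattern of storing the connection are illustrative, not mandated by the API; assumes `using Windows.ApplicationModel.AppService;` and `using Windows.ApplicationModel.Activation;` inside a UWP `Application` class):

```csharp
// Sketch: keep the connection when the app service background task is activated.
protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
{
    base.OnBackgroundActivated(args);

    if (args.TaskInstance.TriggerDetails is AppServiceTriggerDetails details)
    {
        // The connection to the endpoint of the other app service.
        AppServiceConnection connection = details.AppServiceConnection;
        connection.RequestReceived += OnRequestReceived;
    }
}
```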
## -see-also
| 23 | 93 | 0.79017 | eng_Latn | 0.592654 |
d04b45eceab38f86298ed4deaac3c67bc0845ffa | 1,315 | md | Markdown | docs/build/reference/zs-syntax-check-only.md | jmittert/cpp-docs | cea5a8ee2b4764b2bac4afe5d386362ffd64e55a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-02-10T10:38:37.000Z | 2019-02-10T10:38:37.000Z | docs/build/reference/zs-syntax-check-only.md | jmittert/cpp-docs | cea5a8ee2b4764b2bac4afe5d386362ffd64e55a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/build/reference/zs-syntax-check-only.md | jmittert/cpp-docs | cea5a8ee2b4764b2bac4afe5d386362ffd64e55a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-14T03:42:31.000Z | 2020-06-14T03:42:31.000Z | ---
title: "/Zs (Syntax Check Only)"
ms.date: "11/04/2016"
f1_keywords: ["/zs"]
helpviewer_keywords: ["-Zs compiler option [C++]", "Syntax Check Only compiler option", "Zs compiler option", "/Zs compiler option [C++]"]
ms.assetid: b4b41e6a-3f41-4d09-9cb6-fde5aa2cfecf
---
# /Zs (Syntax Check Only)
Tells the compiler to check only the syntax of the source files on the command line.
## Syntax
```
/Zs
```
## Remarks
When using this option, no output files are created, and error messages are written to standard output.
The **/Zs** option provides a quick way to find and correct syntax errors before you compile and link a source file.
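For example, the following command checks the syntax of a source file without producing any output files (the file name is illustrative):

```
cl /Zs main.cpp
```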
### To set this compiler option in the Visual Studio development environment
1. Open the project's **Property Pages** dialog box. For details, see [Working with Project Properties](../../ide/working-with-project-properties.md).
1. Click the **C/C++** folder.
1. Click the **Command Line** property page.
1. Type the compiler option in the **Additional Options** box.
### To set this compiler option programmatically
- See <xref:Microsoft.VisualStudio.VCProjectEngine.VCCLCompilerTool.AdditionalOptions%2A>.
## See Also
[Compiler Options](../../build/reference/compiler-options.md)<br/>
[Setting Compiler Options](../../build/reference/setting-compiler-options.md) | 32.073171 | 150 | 0.73308 | eng_Latn | 0.853761 |
d04b78dd9dcefa97a50f63c0d322f66e6c031f52 | 1,949 | md | Markdown | src/pages/howDoesItWork-ReactHooks.md | vaughnshaun/vaughnshaun.io.source | 5b365e1d73522825f1e2f8e366a70acb2b981f5b | [
"MIT"
] | null | null | null | src/pages/howDoesItWork-ReactHooks.md | vaughnshaun/vaughnshaun.io.source | 5b365e1d73522825f1e2f8e366a70acb2b981f5b | [
"MIT"
] | null | null | null | src/pages/howDoesItWork-ReactHooks.md | vaughnshaun/vaughnshaun.io.source | 5b365e1d73522825f1e2f8e366a70acb2b981f5b | [
"MIT"
] | null | null | null | ---
title: "How Does it Work: React Hooks"
date: "2020-01-09"
tags: ['Computers', 'React', 'Javascript']
---
# Header 1
<p>
Welcome to the first article in the series <em>How Does it Work</em>. Every article in this series will examine
a topic at the micro level. The intent is to provide readers with an in-depth, under-the-hood view. In my personal career as a software engineer, I find it hard to use a tool, library, framework, or the like when I don't know the internals of what I'm using. Naturally, my first topic covers something intrinsically related to programming. As I write more, I will gradually expand to other areas of study. The first topic to be examined is React Hooks.
</p>
## Header 2
<p>
React Hooks have been in stable release for almost a year now. They officially went stable on February 06, 2019 (https://reactjs.org/blog/2019/02/06/react-v16.8.0.html). Hooks enable functional components to hook into the component life cycle and state. They are also a way to reuse logic without having to wrap components in HOCs. Another advantage of using hooks for state instead of class components is that the <code>this</code> pointer is no longer needed. This means that developers won't have to bind the component's context when invoking a function from an event. Hooks have a significant learning curve. How to use hooks will not be the focus of this article. Instead, this article will focus on how hooks actually work. If you would like to learn more about hooks, I recommend waiting for my next React Hooks post or reading the <a href="https://reactjs.org/docs/hooks-intro.html">React documentation</a>.
</p>
### Header 3
<p>
    Anybody who has looked at code samples of Hook usage might notice how the structure of the code is automagic. Automagic can be good for ease of use, but bad if misunderstood. Misunderstanding can lead to bugs and all-out confusion. Below is a sample of React Hooks being used.
</p>
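<p>
A minimal <code>useState</code> counter gives the flavor (this sketch is illustrative and not taken from the original article):
</p>

```jsx
import React, { useState } from 'react';

function Counter() {
  // useState returns the current value and a setter; no `this` binding is needed.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```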
#### Header 4
##### Header 5
###### Header 6 | 72.185185 | 893 | 0.761929 | eng_Latn | 0.999486 |
d04b8073710f0b8e639fdb9f920a782922f7c6bd | 37 | md | Markdown | README.md | jdspille/hls_website | 14ff471c9a451cd70334e440f67e6a31cd0ecb7e | [
"CC-BY-3.0"
] | null | null | null | README.md | jdspille/hls_website | 14ff471c9a451cd70334e440f67e6a31cd0ecb7e | [
"CC-BY-3.0"
] | null | null | null | README.md | jdspille/hls_website | 14ff471c9a451cd70334e440f67e6a31cd0ecb7e | [
"CC-BY-3.0"
] | null | null | null | Here lies the webpage for Jim Hanson
| 18.5 | 36 | 0.810811 | eng_Latn | 0.903918 |
d04b861178beddd5d11dcb583f92ced1b16c5e89 | 68 | md | Markdown | README.md | aldhaneka-lab/Nextjs-Internalisation | 3cdce8e223f4c7367876c848efd9c48b0bc98430 | [
"MIT"
] | null | null | null | README.md | aldhaneka-lab/Nextjs-Internalisation | 3cdce8e223f4c7367876c848efd9c48b0bc98430 | [
"MIT"
] | null | null | null | README.md | aldhaneka-lab/Nextjs-Internalisation | 3cdce8e223f4c7367876c848efd9c48b0bc98430 | [
"MIT"
] | null | null | null | # The Nextjs Internalisation
A Next.js template with internationalisation.
| 17 | 37 | 0.838235 | eng_Latn | 0.91898 |
d04beec542974b864109c197349d66218266f398 | 3,903 | md | Markdown | _posts/2017-05-15-waktu-benar-atau-salah.md | ariestiyansyah/try | 682084de5219a1252b8a5f87ca7c0d8fcbcbac5c | [
"MIT"
] | null | null | null | _posts/2017-05-15-waktu-benar-atau-salah.md | ariestiyansyah/try | 682084de5219a1252b8a5f87ca7c0d8fcbcbac5c | [
"MIT"
] | null | null | null | _posts/2017-05-15-waktu-benar-atau-salah.md | ariestiyansyah/try | 682084de5219a1252b8a5f87ca7c0d8fcbcbac5c | [
"MIT"
] | null | null | null | ---
title: " XIdeasPerDay - Waktu, Benar atau Salah?"
author: ariestiyansyah
layout: post
categories:
- xideasperday
tags:
- idea
- Life
- days
- xideasperday
image:
feature: empatbelas.jpg
credit: Ales Krivec
creditlink: https://unsplash.com/photos/ZMZHcvIVgbg
---
Three days in a row now I've been writing late, always past 10 o'clock. What happened? I've actually been busy reading and trying to communicate with people, because I know living all on my own just won't work, and if I did, XIdeasPerDay might not continue. NOOOOOOOO!!
Lately I've been enjoying reading about innovation and creativity, trying not to think *inside the box* or *outside the box*, because we aren't actually in a *box* at all. Reading every day really is a habit of mine, but not reading line.today, hahaha. One more thing: I've now switched my usual markdown editor from *LightPaper* to *Typora*. Okay, enough venting; here are the ideas:
### Time
So this afternoon I was thinking about time, time, and time. Now I'll focus on the *professional* side of this idea: we all have 168 hours in a week, which means a lot can be done in that period. One thing that often goes wrong is deciding the priority of what to work on. It's really hard, but are you sure you don't want to steer your own ship (read: time)? Think again about how you use that time: don't waste it on your gadget's notifications, and don't overdo any one thing, because "less is more", seriously!
There's also something called the *super connected habit*. For example, when I'm absorbed in reading, suddenly every one of my standby gadgets rings loudly; **all of them ring**; and where is the notification from? Telemarketing. Annoying, right? To me that's one of those things that wastes time; try setting only the important notifications instead.
### Right or Wrong
Ever wondered whether what we do is right or wrong? Try making two buttons that count your deeds in a day: if you feel something you did was wrong, press the wrong button, and vice versa. Through this simple thing you can tally your deeds every day. Did you do more right or more wrong? It's interesting to try, because honestly I already have a counter app for the water or coffee I drink each day; switching it up would be an interesting experiment.
### Changing Pillow
For someone like me who loves to sleep, this pillow is a real need; only the pillowcase changes to suit your taste, and a bolster whose design can change would be even more appealing. Since yesterday I've been covering several ideas about things that can change, hahaha, because I think ideas like this are good for keeping your days from getting boring.
### Gastritis Alarm
This alarm is just for those diagnosed with *gastritis*. I may have gotten gastritis because I tend to eat late; well, "may", because I haven't had my health checked since I started exercising regularly, hahaha. A notification that tempts the user to eat is of course part of this idea. It could be implemented in a wristband that vibrates to tell us to eat, or the wristband could project a hologram that makes its wearer hungry; that would be great.
### Doraemon Thermometer
Ever had a fever? A high one? You must have wondered what to do if you don't know the right medical treatment. With this thermometer, once the reading passes 39 degrees Celsius, it shows first-aid information about what to do. Many children are brought to the hospital already in a bad state because their parents didn't know what to do while their child had a fever, a high temperature, or even seizures.
A thermometer that can double as an AI is interesting to try, but you'd have to learn from medical researchers first; *let's do something wonderful for Indonesia*, shall we?
You've reached the end of this article. See you at XIdeasPerDay tomorrow!
| 83.042553 | 596 | 0.808609 | ind_Latn | 0.991915 |
d04c66fa9569a805511f8c0c35331f8c1d34654d | 1,265 | md | Markdown | node_modules/grunt-clear/README.md | Nickpadi/Padi-ELearning | 183df3dfe6eb872491f0d974d5f3ef4c496b5409 | [
"MIT"
] | 1 | 2015-03-03T21:44:05.000Z | 2015-03-03T21:44:05.000Z | node_modules/grunt-clear/README.md | kittymicrobiome-project/microbiome-project | 0f9a8ec2a72dc72777c81294421070f3948ce37a | [
"MIT"
] | 1 | 2018-03-26T18:09:17.000Z | 2018-03-26T18:09:17.000Z | node_modules/grunt-clear/README.md | kittymicrobiome-project/microbiome-project | 0f9a8ec2a72dc72777c81294421070f3948ce37a | [
"MIT"
] | 1 | 2018-02-01T05:26:51.000Z | 2018-02-01T05:26:51.000Z | # grunt-clear
Clears your command line. Automate all the things.
## Getting Started
Install this grunt plugin next to your project's [Gruntfile][getting_started] with: `npm install grunt-clear`
Then add this line to your project's Gruntfile:
```javascript
grunt.loadNpmTasks('grunt-clear');
```
[grunt]: https://github.com/cowboy/grunt
[getting_started]: https://github.com/cowboy/grunt/blob/master/docs/getting_started.md
## Documentation
Turn your console output into a live dashboard by clearing it before displaying new results.
Add this task as the **first item** of your `watch` task:
```javascript
watch: {
clear: {
//clear terminal on any watch task. beauty.
files: ['**/*'], //or be more specific
tasks: ['clear']
}
}
```
The `watch` task will run things in order, so make sure `clear` comes very first; otherwise, your console will clear the output of other tasks you are probably interested in.
## Contributing
In lieu of a formal styleguide, take care to maintain the existing coding style. Add unit tests for any new or changed functionality. Lint and test your code using [grunt][grunt].
## Todo
- Write tests
- Make sure it works on Linux
- Do screencast
## License
Copyright (c) 2012 Dave Geddes
Licensed under the MIT license.
| 29.418605 | 179 | 0.739921 | eng_Latn | 0.977307 |
d04c934f5b28124b4759720922a30e828e4031bf | 40 | md | Markdown | README.md | siyanzi86/RxjavaDemo | 74f74215b0acabb5b9c8caaa0276574a66fce68d | [
"Apache-2.0"
] | null | null | null | README.md | siyanzi86/RxjavaDemo | 74f74215b0acabb5b9c8caaa0276574a66fce68d | [
"Apache-2.0"
] | null | null | null | README.md | siyanzi86/RxjavaDemo | 74f74215b0acabb5b9c8caaa0276574a66fce68d | [
"Apache-2.0"
] | null | null | null | # RxjavaDemo
Imitates RxJava: a simple implementation of chained calls and thread switching.
| 13.333333 | 26 | 0.8 | por_Latn | 0.148578 |
d04ccde6a4dffb4cba855e0f0e757475c0972011 | 3,858 | md | Markdown | azurermps-6.13.0/AzureRM.Profile/Save-AzureRmContext.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azurermps-6.13.0/AzureRM.Profile/Save-AzureRmContext.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | azurermps-6.13.0/AzureRM.Profile/Save-AzureRmContext.md | isra-fel/azure-docs-powershell | c7d9fc26f7b702f64e66f4ed7fc5f1051ab8a4b8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
external help file: Microsoft.Azure.Commands.Profile.dll-Help.xml
Module Name: AzureRM.Profile
online version: https://docs.microsoft.com/en-us/powershell/module/azurerm.profile/save-azurermcontext
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/ResourceManager/Profile/Commands.Profile/help/Save-AzureRmContext.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/preview/src/ResourceManager/Profile/Commands.Profile/help/Save-AzureRmContext.md
---
# Save-AzureRmContext
## SYNOPSIS
Saves the current authentication information for use in other PowerShell sessions.
## SYNTAX
```
Save-AzureRmContext [[-Profile] <AzureRmProfile>] [-Path] <String> [-Force]
[-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]
```
## DESCRIPTION
The Save-AzureRmContext cmdlet saves the current authentication information for use in other PowerShell sessions.
## EXAMPLES
### Example 1: Saving the current session's context
```
PS C:\> Connect-AzureRmAccount
PS C:\> Save-AzureRmContext -Path C:\test.json
```
This example saves the current session's Azure context to the JSON file provided.
### Example 2: Saving a given context
```
PS C:\> Save-AzureRmContext -Profile (Connect-AzureRmAccount) -Path C:\test.json
```
This example saves the Azure context that is passed through to the cmdlet to the JSON file provided.
## PARAMETERS
### -DefaultProfile
The credentials, tenant, and subscription used for communication with azure.
```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzureRmContext, AzureCredential
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Force
Overwrite the given file if it exists
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases:
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Path
Specifies the path of the file to which to save authentication information.
```yaml
Type: System.String
Parameter Sets: (All)
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Profile
Specifies the Azure context from which this cmdlet reads.
If you do not specify a context, this cmdlet reads from the local default context.
```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Models.AzureRmProfile
Parameter Sets: (All)
Aliases:
Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByValue)
Accept wildcard characters: False
```
### -Confirm
Prompts you for confirmation before running the cmdlet.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: cf
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -WhatIf
Shows what would happen if the cmdlet runs.
The cmdlet is not run.
```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: wi
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
## INPUTS
### Microsoft.Azure.Commands.Common.Authentication.Models.AzureRmProfile
Parameters: Profile (ByValue)
## OUTPUTS
### Microsoft.Azure.Commands.Profile.Models.PSAzureProfile
## NOTES
## RELATED LINKS
| 25.549669 | 315 | 0.784085 | yue_Hant | 0.500012 |
d04ce134e6b205015fa7fcdfe79eba6495a90422 | 31 | md | Markdown | README.md | Tafita1339/Template-admis-php-MVC-Jquery | 7f492edbfd4f34fa4f0bd2ed0924b22a57438fa1 | [
"Apache-2.0"
] | 1 | 2022-01-14T11:59:07.000Z | 2022-01-14T11:59:07.000Z | README.md | Tafita1339/Template-admis-php-MVC-Jquery | 7f492edbfd4f34fa4f0bd2ed0924b22a57438fa1 | [
"Apache-2.0"
] | null | null | null | README.md | Tafita1339/Template-admis-php-MVC-Jquery | 7f492edbfd4f34fa4f0bd2ed0924b22a57438fa1 | [
"Apache-2.0"
] | null | null | null | # Template-admis-php-MVC-Jquery | 31 | 31 | 0.806452 | eng_Latn | 0.496099 |
d04e262061e5056108dd79d13621863bdb3be928 | 11,843 | md | Markdown | pkg/ui/README.md | eriktrinh/cockroach | 65bcb9b5f25a18a514cc067ed70cf516090873ba | [
"MIT",
"BSD-3-Clause"
] | 1 | 2019-03-28T01:31:49.000Z | 2019-03-28T01:31:49.000Z | pkg/ui/README.md | eriktrinh/cockroach | 65bcb9b5f25a18a514cc067ed70cf516090873ba | [
"MIT",
"BSD-3-Clause"
] | 13 | 2019-12-30T04:29:18.000Z | 2022-03-02T06:11:05.000Z | pkg/ui/README.md | eriktrinh/cockroach | 65bcb9b5f25a18a514cc067ed70cf516090873ba | [
"MIT",
"BSD-3-Clause"
] | 1 | 2019-03-03T13:10:28.000Z | 2019-03-03T13:10:28.000Z | # Admin UI
This directory contains the client-side code for CockroachDB's web-based admin
UI, which provides details about a cluster's performance and health. See the
[Admin UI docs](https://www.cockroachlabs.com/docs/stable/explore-the-admin-ui.html)
for an expanded overview.
## Getting Started
To start developing the UI, be sure you're able to build and run a CockroachDB
node. Instructions for this are located in the top-level README. Every Cockroach
node serves the UI, by default on port 8080, but you can customize the port with
the `--http-port` flag. If you've started a node with the default options,
you'll be able to access the UI at <http://localhost:8080>.
Our UI is compiled using a collection of tools that depends on
[Node.js](https://nodejs.org/) and are managed with
[Yarn](https://yarnpkg.com), a package manager that offers more deterministic
package installation than NPM. NodeJS 6.x and Yarn 1.7.0 are known to work.
You will also need [Chrome](https://www.google.com/chrome/), Google's web browser; unit tests
are run using Chrome's "Headless" mode.
With Node and Yarn installed, bootstrap local development by running `make` in
this directory. This will run `yarn install` to install our Node dependencies,
run the tests, and compile the assets. Asset compilation happens in two steps.
First, [Webpack](https://webpack.github.io) runs the TypeScript compiler and CSS
preprocessor to assemble assets into the `dist` directory. Then, we package
those assets into `embedded.go` using
[go-bindata](https://github.com/jteeuwen/go-bindata). When you later run `make
build` in the parent directory, `embedded.go` is linked into the `cockroach`
binary so that it can serve the admin UI when you run `cockroach start`.
## Developing
When making changes to the UI, it is desirable to see those changes with data
from an existing cluster without rebuilding and relaunching the cluster for each
change. This is useful for rapidly visualizing local development changes against
a consistent and realistic dataset.
We've created a simple NodeJS proxy to accomplish this. This server serves all
requests for web resources (JavaScript, HTML, CSS) out of the code in this
directory, while proxying all API requests to the specified CockroachDB node.
To use this proxy, in Cockroach's root directory run:
```shell
$ make ui-watch TARGET=<target-cluster-http-uri>
```
or, in `pkg/ui` run:
```shell
$ make watch TARGET=<target-cluster-http-uri>
```
then navigate to `http://localhost:3000` to access the UI.
To proxy to a cluster started up in secure mode, in Cockroach's root directory run:
```shell
$ make ui-watch-secure TARGET=<target-cluster-https-uri>
```
or, in `pkg/ui` run:
```shell
$ make watch-secure TARGET=<target-cluster-https-uri>
```
then navigate to `https://localhost:3000` to access the UI.
While the proxy is running, any changes you make in the `src` directory will
trigger an automatic recompilation of the UI. This recompilation should be much
faster than a cold compile—usually less than one second—as Webpack can reuse
in-memory compilation artifacts from the last compile.
If you get cryptic TypeScript compile/lint failures upon running `make` that
seem completely unrelated to your changes, try removing `yarn.installed` and
`node_modules` before re-running `make` (do NOT run `yarn install` directly).
Be sure to also commit modifications resulting from dependency changes, like
updates to `package.json` and `yarn.lock`.
### DLLs for speedy builds
To improve Webpack compile times, we split the compile output into three
bundles, each of which can be compiled independently. The feature that enables
this is [Webpack's DLLPlugin](https://webpack.js.org/plugins/dll-plugin/), named
after the Windows term for shared libraries ("**d**ynamic-**l**ink
**l**ibraries").
Third-party dependencies, which change infrequently, are contained in the
[vendor DLL]. Generated protobuf definitions, which change more frequently, are
contained in the [protos DLL]. First-party JavaScript and TypeScript are
compiled in the [main app bundle], which is then "linked" against the two DLLs.
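In webpack terms, a DLL bundle is produced with `DllPlugin` (which also emits a manifest describing the bundle's exports) and consumed with `DllReferencePlugin`. A pared-down sketch of the idea, with illustrative entry points and paths rather than this project's actual configuration:

```js
// dll.config.js (sketch): build a standalone vendor bundle plus its manifest.
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['react', 'react-dom'] },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]_lib', // global variable the DLL exposes
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_lib',
      path: path.resolve(__dirname, 'dist/[name].manifest.json'),
    }),
  ],
};

// The app config then consumes the bundle via its manifest instead of
// recompiling the modules it contains:
//   new webpack.DllReferencePlugin({
//     manifest: require('./dist/vendor.manifest.json'),
//   })
```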
This means that updating a dependency or protobuf only requires rebuilding the
appropriate DLL and the main app bundle, and updating a UI source file doesn't
require rebuilding the DLLs at all. When DLLs were introduced, the time required
to start the proxy was reduced from over a minute to under five seconds.
DLLs are not without costs. Notably, the development proxy cannot determine when
a DLL is out-of-date, so the proxy must be manually restarted when dependencies
or protobufs change. (The Make build system, however, tracks the DLLs'
dependencies properly, so a top-level `make build` will rebuild exactly the
necessary DLLs.) DLLs also make the Webpack configuration rather complicated.
Still, the tradeoff seems well worth it.
## CCL Build
In CCL builds, code in `pkg/ui/ccl/src` overrides code in `pkg/ui/src` at build
time, via a Webpack import resolution rule. E.g. if a file imports
`src/views/shared/components/licenseType`, it'll resolve to
`pkg/ui/src/views/shared/components/licenseType` in an OSS build, and
`pkg/ui/ccl/src/views/shared/components/licenseType` in a CCL build.
CCL code can import OSS code by prefixing paths with `oss/`, e.g.
`import "oss/src/myComponent"`. By convention, this is only done by a CCL file
importing the OSS version of itself, e.g. to render the OSS version of itself
when the trial period has expired.
## Running tests
To run the tests outside of CI:
```shell
$ make test
```
## Viewing bundle statistics
The regular build also produces a webpage with a report on the bundle size.
Build the app, then take a look with:
```shell
$ make build
$ open pkg/ui/dist/stats.ccl.html
```
Or, to view the OSS bundle:
```shell
$ make buildoss
$ open pkg/ui/dist/stats.oss.html
```
## Bundling fonts
To comply with the SIL Open Font License, we have reproducible builds of our WOFF
font bundles based on the original TTF files sourced from Google Fonts.
To rebuild the font bundles (perhaps to bring in an updated version of a typeface),
simply run `make fonts` in the UI directory (or `make ui-fonts` elsewhere). This
requires `fontforge` to be available on your system. Then validate the updated
fonts and commit them.
To add a new typeface, edit the script `scripts/font-gen` to fetch and convert it,
and then add it to `styl/base/typography.styl` to pull it into the bundle.
## Managing dependencies
The NPM registry (and the Yarn proxy in front of it) have historically proven
quite flaky. Errors during `yarn install` were the leading cause of spurious CI
failures in the first half of 2018.
The solution is to check our JavaScript dependencies into version control, like
we do for our Go dependencies. Checking in the entire node_modules folder is a
non-starter: it clocks in at 230MB and 28k files at the time of writing.
Instead, we use a Yarn feature called the [offline mirror]. We ship a [.yarnrc]
file that instructs Yarn to save a tarball of each package we depend on in the
[yarn-vendor] folder. These tarballs are then checked in to version control. To
avoid cluttering the main repository, we've made the yarn-vendor folder a Git
submodule that points at the [cockroachdb/yarn-vendored] repository.
### Adding a dependency
Let's pretend you want to add a dependency on `left-pad`. Just use `yarn add`
like you normally would:
```bash
$ cd $GOPATH/src/github.com/cockroachdb/cockroach/pkg/ui
$ yarn add FOO
```
When Yarn finishes, `git status` will report something like this:
```bash
$ git status
Changes not staged for commit:
modified: pkg/ui/package.json
modified: pkg/ui/yarn-vendor (untracked content)
modified: pkg/ui/yarn.lock
```
The changes to package.json and yarn.lock are the normal additions of the new
dependency information to the manifest files. The changes to yarn-vendor are
unique to the offline mirror mode. Let's look more closely:
```bash
$ git -C yarn-vendor status
Untracked files:
left-pad-1.3.0.tgz
```
Yarn has left you a tarball of the new dependency. Perfect! If you were adding
a more complicated dependency, you'd likely see some transitive dependencies
appear as well.
The process from here is exactly the same as updating any of our other vendor
submodules. Make a new branch in the submodule, commit the new tarball, and push
it:
```bash
$ cd yarn-vendor
$ git checkout -b YOURNAME/add-left-pad
$ git add .
$ git commit -m 'Add [email protected]'
$ git push origin add-left-pad
```
Be sure to push to [cockroachdb/yarn-vendored] directly instead of a personal
fork. Otherwise TeamCity won't be able to find the commit.
Then, return to the main repository and commit the changes, including the new
submodule commit. Push that and make a PR:
```bash
$ cd ..
$ git checkout -b add-left-pad
$ git add pkg/ui
$ git commit -m 'ui: use very smart approach to pad numbers with zeros'
$ git push YOUR-REMOTE add-left-pad
```
This time, be sure to push to your personal fork. Topic branches are not
permitted in the main repository.
When your PR has been approved, please be sure to merge your change to
yarn-vendored to master and delete your topic branch:
```bash
$ cd yarn-vendor
$ git checkout master
$ git merge add-left-pad
$ git push origin master
$ git push origin -d add-left-pad
```
This last step is extremely important! Any commit in yarn-vendored that is
referenced by the main repository must remain forever accessible, or it will be
impossible for future `git clone`s to build that version of Cockroach. GitHub
will garbage collect commits that are not accessible from any branch or tag, and
periodically, someone comes along and cleans up old topic branches in
yarn-vendored, potentially removing the only reference to a commit. By merging
your commit on master, you ensure that your commit will not go missing.
### Verifying offline behavior
Our build system is careful to invoke `yarn install --offline`, which instructs
Yarn to exit with an error if it would need to reach out to the network. Running
CI on your PR is thus sufficient to verify that all dependencies have been
vendored correctly.
You can perform the verification locally if you'd like, though:
```bash
$ cd $GOPATH/src/github.com/cockroachdb/cockroach/pkg/ui
$ rm -r node_modules yarn.installed
$ yarn cache clean
$ make
```
If `make` succeeds, you've vendored dependencies correctly.
### Removing a dependency
To remove a dependency, just run `yarn remove`.
Note that removing a dependency will not remove its tarball from the yarn-vendor
folder. This is not as bad as it sounds. When Git fetches submodules, it always
performs a full clone, so it would wind up downloading deleted tarballs anyway
when it fetched older commits.
TODO(benesch): Yarn's offline mode has an additional option,
`yarn-offline-mirror-pruning`, that cleans up removed dependencies' tarballs.
Look into using this once [dcodeIO/protobuf.js#716] is resolved. At the moment,
ProtobufJS tries to install some dependencies at runtime by invoking `npm
install` itself (!); we avoid this by running `yarn install` on its behalf, but
this means two separate packages share the same yarn-vendor folder and this
confuses Yarn's pruning logic.
If the size of the yarn-vendored repository becomes problematic, we can look
into offloading the large files into something like [Git LFS]. This is
contingent upon resolving the above TODO.
[cockroachdb/yarn-vendored]: https://github.com/cockroachdb/yarn-vendored
[dcodeIO/protobuf.js#716]: https://github.com/dcodeIO/protobuf.js#716
[main app bundle]: ./webpack.app.js
[Git LFS]: https://git-lfs.github.com
[offline mirror]: https://yarnpkg.com/blog/2016/11/24/offline-mirror/
[protos DLL]: ./webpack.protos.js
[vendor DLL]: ./webpack.vendor.js
[.yarnrc]: ./yarnrc
[yarn-vendor]: ./yarn-vendor
| 39.476667 | 84 | 0.771595 | eng_Latn | 0.995719 |
d04e351aec5dee06d5ae4b476b53af6c01488f9f | 7,895 | md | Markdown | README.md | icelam/tints-and-shades | 00854f7374ca73ae53d729444cefc4a69b9eee2e | [
"MIT"
] | null | null | null | README.md | icelam/tints-and-shades | 00854f7374ca73ae53d729444cefc4a69b9eee2e | [
"MIT"
] | null | null | null | README.md | icelam/tints-and-shades | 00854f7374ca73ae53d729444cefc4a69b9eee2e | [
"MIT"
] | null | null | null | <h1 align="center">Tints and Shades</h1>

<p align="center">
A tints and shades generator built using Electron and LitElement.
</p>
<p align="center">
<a href="https://www.electronjs.org/"><img height="20" src="https://img.shields.io/badge/made_with-Electron_10.1.0-2f3241.svg?logo=electron&logoColor=white" alt="Made with Electron"></a>
<a href="https://lit-element.polymer-project.org/"><img height="20" src="https://img.shields.io/badge/made_with-LitElement-2196f3.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFYAAAAeCAYAAAC2Xen2AAAACXBIWXMAAAJfAAACXwG+hShaAAADKUlEQVRogc2az5WbMBDGv83bu+nATgUhFcQdrDsw7mA7iNOBOwiuIN5bbut0EHdg37I3fMxJedo38yKPBpAAI37vceCPBIw+fSMkHowxW4zLGUAZccccwArAEkAG4BMdvwL4TfUdARwAVF7p7hRO3fGY8TkaYxCwFcaYc8TTVcaY0hizCKy/bTtHPKu3fRhZrSHkpMTvAOYR5WYA1lT22TsbR0H3/kI9JR6v3e9PkwoKUl4TR9rarjsYYzLlHqFqZZqet3Z7FC3xBuDnHVS49o74FKRSyYU8+UBqdMnIfwtSl8sT+eMy0nsL0VNYtVFea5OXcfYvkYkllK/Odb+U7mWD80Op6xuA0ORq69w5yY15ofpDOSsWpD1zI1Pw2ExpTJvxP0cEFaQo6897cfwpwnOlWplor51CYHeUeJgrvYTs9qEUSnBtAy0Cyjc1ZNM5j9SBXSj+u+oRVMYG9+TszwICU6dWJkq1qQMrX3bfeUDuI7v/imynjhBFBqs2dWBlUhnyK/BISYeZKfdj2tTKBKs2ZWBz4a0nyshDIpNiXVBiGjTo2tSBdTl4V/RH1qklsFC1MkGqTRlY+ZJ9E5ZGRaMMRjYmOtpPa5kpzRUMOTPl4jbYTJyLVSvTqtopTsKMSZ9k2Vh2SoHV/G8I3DkEd2zbVa1Mo2pTBlaOV2sfsgfSU127aVRcILV1pAysTFb3CKwct3Jj9lUrU6valIGtRNec0wsPiayPh1+1SuuAWldqj92J/W3LZ2cMW6HKC/WSodTKqKpNHdiDGGfOlWB3IRdzwHCUpSqsJ16dqQNbKZMla6ULx5ArifFEn7dDq5XxVDuF4VYpJktASzRlB1vgJRQ5v8sN5SlrQG7qnso4diUSGZwV1xD1LqghXpWvq+c7eavkRrVyzesvgD9ekf64L1S3fpSR2uSaFSjxHGk7O9fzzxxaGcvGmeHS1rKG5v+7eYvGaZe/MzrfF7s0vhLL6mOxnOIPGxW1+EaMFmLYkzW4U4Zj/kb1fq+pTsKUFJyN4r0aVwroR/JS99P13t4qeffah1uLnSzsp9KbrW/axNQ0lzuGt94CvPwDYYpB/+NhnT0AAAAASUVORK5CYII=&logoColor=white" alt="Made with LitElement"></a>
<a href="https://www.typescriptlang.org/"><img height="20" src="https://img.shields.io/badge/built_with-TypeScript-007acc.svg?logo=typescript" alt="Built with TypeScript"></a>
<a href="./LICENSE"><img height="20" src="https://img.shields.io/github/license/icelam/tints-and-shades?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABwAAAAcCAYAAAEFCu8CAAAABGdBTUEAALGPC/xhBQAAADhlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAAqACAAQAAAABAAAAHKADAAQAAAABAAAAHAAAAABHddaYAAAC5UlEQVRIDd2WPWtVQRCGby5pVASLiGghQSxyG8Ui2KWwCfkH9olY2JneQkiR0oCIxH/gB+qVFDYBIWBAbAIRSbCRpLXwIxLiPT7vnNm9e87ZxJtUwYH3zO47Mzv7Mbv3tlo5KYriGtgAJ81OY1ENdG/YI4boFEOI911BXgY/pdtwGuAtXpvmB1tAXHDnUolE5urkPOQo6MqA3pXWmJJL4Bb4rQ7yEYfxsjnIF29NJIoNC6e5fxOL/qN+9KCz7AaLpN8zI415N2i2EptpGrkRIjGeAuvR6IY1hSFLFUOug9Ms2M7ZxIUNytm1mnME186sdI2BOCwAyQMg54ugzSmKmwbPwSbolKH+hbAtQdsOoF+BsF3anUVwBdiOWRidFZDKTTrKEAJTm3GVrGkHzw/uPZbyx7DNNLfB7KGmRsCcr+/gjaiPSpAOTyX9qG4L/XBDdWXDDf1M+wtQ5fwCOtcb4Dto6VpLmzByB6gqdHbTItGSJdAGqibJQhmRfCF7IN4beSF2G9CqnGXQrxofXU+EykllNeoczRgYytDKMubDIRK0g5MF8rE69cGu0u9nlUcqaUZ41W0qK2nGcSzr4D2wV9U9wxp1rnpxn8agXAOHMQ9cy9kbHM7ngY4gFb03TxrO/yfBUifTtXt78jCrjY/jgEFnMn45LuNWUtknuu7NSm7D3QEn3HbatV1Q2jvgIRf1sfODKQaeymxZoMLlTqsq1LF+HvaTqQOzEzUCfni0/eNIA+DfuE3KEtbsegckGmMktTXacnBHPVe687ugkpT+axCkkhBSyRSjWI2xf1KMMVmYiQdWksK9BEFiQoiYLIlvJA3/zeTzCejP0RbB6YPbhZuB+0pR3KcdX0LaJtju0ZgBL8Bd+sbz2QIaU2OfBX3BaQLsgZysQtrk0M8Sh1A0w3DyyYnGnAiZ4gqZ/TvI2A8OGd1YIbF7+F3P+B6dYpYdsJNZgrjO0UdOIhmom0nwL0pnfnzkL1803jAoKhvyAAAAAElFTkSuQmCC" alt="License"></a>
<a href="https://lgtm.com/projects/g/icelam/tints-and-shades/context:javascript"><img alt="Language grade: JavaScript" src="https://img.shields.io/lgtm/grade/javascript/g/icelam/tints-and-shades.svg?logo=lgtm"/></a>
<a href="https://github.com/icelam/tints-and-shades/releases"><img alt="Current version" src="https://img.shields.io/github/v/release/icelam/tints-and-shades.svg?sort=semver&label=latest&logo=github"/></a>
<a href="https://github.com/icelam/tints-and-shades/releases/latest"><img alt="Downloads" src="https://img.shields.io/github/downloads/icelam/tints-and-shades/total.svg"/></a>
</p>
## Cool Features
The main feature of this app is generating tints and shades of a specific color. It also comes with other cool features:





## Download
You can check the [latest release](https://github.com/icelam/tints-and-shades/releases/latest) of this app or download from the list below:
### Windows
* [Portable Executable](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades.1.1.0.exe)
* [EXE setup file](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades.Setup.1.0.2.exe) ([Blockmap](https://github.com/icelam/tints-and-shades/releases/download/v1.0.2/Tints.and.Shades.Setup.1.1.0.exe.blockmap))
* [32-bit MSI installation file](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades.1.1.0.ia32.msi)
* [64-bit MSI installation file](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades.1.1.0.msi)
### macOS
* [DMG file](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades-1.0.2.dmg) ([Blockmap](https://github.com/icelam/tints-and-shades/releases/download/v1.0.2/Tints.and.Shades-1.1.0.dmg.blockmap))
* [Zip file](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades-1.1.0-mac.zip)
### Linux
* [App image](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades-1.1.0.AppImage)
* [App image for i386](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/Tints.and.Shades-1.1.0-i386.AppImage)
* [DEB file for amd64](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/tints-and-shades_1.1.0_amd64.deb)
* [DEB file for i386](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/tints-and-shades_1.1.0_i386.deb)
* [RPM file for i686](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/tints-and-shades-1.1.0.i686.rpm)
* [RPM file for x68 (64-bit)](https://github.com/icelam/tints-and-shades/releases/download/v1.1.0/tints-and-shades-1.1.0.x86_64.rpm)
## For Developers - Setup ##
Below shows some basic setup steps.
### Node version ###
This project is developed using Node.js 12. The version is already specified in the `.nvmrc` file. We suggest running `nvm use` when you enter the project folder.
### Install packages need for the project ###
Install the Yarn packages in the project root folder first using `yarn install`.
### To start the project ##
Run `yarn start` in the project root folder.
### To build the app for distribution ###
Run `yarn package` in the project root folder to create packages for macOS, Linux, and Windows. All the output files can be found in the `./build-packages` folder.
To create package for each individual platforms:
* macOS: `yarn package:mac`
* Linux: `yarn package:linux`
* Windows: `yarn package:windows`
### To run unit tests ###
Run `yarn test` in the project root folder.
### To run linters ###
Run `yarn lint` in the project root folder to start a ESLint checking.
Run `yarn lint:lit-analyzer` in the project root folder to run Lit Analyzer.
### To run web component analyzer ###
Run `analyze:web` in the project root folder.
### To update change log ###
Run `yarn release` in the project root folder.
To skip bumping version number on first release, run `yarn first-release` in the project root folder
## To update alias ##
To add or modify any existing alias, please modify all the files listed below:
1. `.eslintrc`
2. `tsconfig.json`
3. `webpack/webpack.base.conf.js`
| 89.715909 | 1,398 | 0.8095 | yue_Hant | 0.292553 |
d04f9ef226f0f3b631119b35fff497b59286dec3 | 3,805 | md | Markdown | input/en-us/create/functions/get-webfilename.md | rachfop/docs | f6a3a5b9ac4641ec6cc1ab1057622857c13518be | [
"Apache-2.0"
] | null | null | null | input/en-us/create/functions/get-webfilename.md | rachfop/docs | f6a3a5b9ac4641ec6cc1ab1057622857c13518be | [
"Apache-2.0"
] | null | null | null | input/en-us/create/functions/get-webfilename.md | rachfop/docs | f6a3a5b9ac4641ec6cc1ab1057622857c13518be | [
"Apache-2.0"
] | null | null | null | ---
Order: 170
xref: get-webfilename
Title: Get-WebFileName
Description: Information on Get-WebFileName function
RedirectFrom: docs/helpers-get-web-file-name
---
# Get-WebFileName
<!-- This documentation is automatically generated from https://github.com/chocolatey/choco/blob/stable/src/chocolatey.resources/helpers/functions/Get-WebFileName.ps1 using https://github.com/chocolatey/choco/blob/stable/GenerateDocs.ps1. Contributions are welcome at the original location(s). -->
Gets the original file name from a url. Used by Get-WebFile to determine
the original file name for a file.
## Syntax
~~~powershell
Get-WebFileName `
[-Url <String>] `
-DefaultName <String> `
[-UserAgent <String>] `
[-IgnoredArguments <Object[]>] [<CommonParameters>]
~~~
## Description
Uses several techniques to determine the original file name of the file
based on the url for the file.
## Notes
Available in 0.9.10+.
Falls back to DefaultName when the name cannot be determined.
Chocolatey works best when the packages contain the software it is
managing and doesn't require downloads. However most software in the
Windows world requires redistribution rights and when sharing packages
publicly (like on the [community feed](https://chocolatey.org/packages)), maintainers may not have those
aforementioned rights. Chocolatey understands how to work with that,
hence this function. You are not subject to this limitation with
internal packages.
## Aliases
None
## Examples
**EXAMPLE 1**
~~~powershell
Get-WebFileName -Url $url -DefaultName $originalFileName
~~~
## Inputs
None
## Outputs
None
## Parameters
### -Url [<String>]
This is the url to a file that will be possibly downloaded.
Property | Value
---------------------- | -----
Aliases |
Required? | false
Position? | 1
Default Value |
Accept Pipeline Input? | false
### -DefaultName <String>
The name of the file to use when not able to determine the file name
from the url response.
Property | Value
---------------------- | -----
Aliases |
Required? | true
Position? | 2
Default Value |
Accept Pipeline Input? | false
### -UserAgent [<String>]
The user agent to use as part of the request. Defaults to 'chocolatey
command line'.
Property | Value
---------------------- | -----------------------
Aliases |
Required? | false
Position? | named
Default Value | chocolatey command line
Accept Pipeline Input? | false
### -IgnoredArguments [<Object[]>]
Allows splatting with arguments that do not apply. Do not use directly.
Property | Value
---------------------- | -----
Aliases |
Required? | false
Position? | named
Default Value |
Accept Pipeline Input? | false
### <CommonParameters>
This cmdlet supports the common parameters: -Verbose, -Debug, -ErrorAction, -ErrorVariable, -OutBuffer, and -OutVariable. For more information, see `about_CommonParameters` http://go.microsoft.com/fwlink/p/?LinkID=113216 .
## Links
* [Get-WebHeaders](xref:get-webheaders)
* [Get-ChocolateyWebFile](xref:get-chocolateywebfile)
[Function Reference](xref:powershell-reference)
> :memo: **NOTE** This documentation has been automatically generated from `Import-Module "$env:ChocolateyInstall\helpers\chocolateyInstaller.psm1" -Force; Get-Help Get-WebFileName -Full`.
View the source for [Get-WebFileName](https://github.com/chocolatey/choco/blob/stable/src/chocolatey.resources/helpers/functions/Get-WebFileName.ps1)
| 29.496124 | 298 | 0.655453 | eng_Latn | 0.89139 |
d05130c85d8775c882e339a85a7e34f056898aec | 1,553 | md | Markdown | docs/content/v1.1/explore/cloud-native/orchestration-readiness.md | july2993/yugabyte-db | 8d2ad878756b064c6d8249b4891e04bcc9aff582 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | docs/content/v1.1/explore/cloud-native/orchestration-readiness.md | july2993/yugabyte-db | 8d2ad878756b064c6d8249b4891e04bcc9aff582 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | docs/content/v1.1/explore/cloud-native/orchestration-readiness.md | july2993/yugabyte-db | 8d2ad878756b064c6d8249b4891e04bcc9aff582 | [
"Apache-2.0",
"CC0-1.0"
] | null | null | null | ---
title: Orchestration Readiness
linkTitle: 4. Orchestration Readiness
description: Orchestration Readiness
menu:
v1.1:
identifier: orchestration-readiness
parent: explore-cloud-native
weight: 217
---
Yugabyte DB is orchestration-ready on all major infrastructure layers including containers, virtual machines (VMs) and bare metal.
## On Containers
### Kubernetes
Instructions for running Yugabyte DB on Kubernetes are available [here](../../../deploy/kubernetes/). Integrations with managed Kubernetes offerings such as [Google Kubernetes Engine (GKE)](../../deploy/public-clouds/gcp/#gke) and [Azure Kubernetes Service (AKS)](../../deploy/public-clouds/azure/#aks) are also available.
### Docker Swarm
Instructions for running Yugabyte DB on Docker Swarm are available [here](../../../deploy/docker-swarm/).
### Mesosphere DC/OS
Integration with Mesosphere DC/OS is in the works.
## On Virtual Machines and Bare Metal
### Terraform
Instructions for running Yugabyte DB on AWS using Terraform are available [here](../../../deploy/public-clouds/aws/#terraform).
## Using Enterprise Edition
[Yugabyte DB Enterprise](../../../deploy/enterprise-edition/) has a built-in orchestration engine that manages multiple Yugabyte DB universes (including Read Replicas) on the infrastructure layer and platform of your choice.
{{< note title="Note" >}}
Reach out to us on [Slack](https://www.yugabyte.com/slack) or [GitHub](https://github.com/yugabyte/yugabyte-db/issues) if you need orchestration using a new system.
{{< /note >}}
| 36.116279 | 322 | 0.747585 | eng_Latn | 0.92325 |
d051ca8795be14022833387337b7b4630fdbb4e0 | 3,043 | md | Markdown | README.md | Foundry-VTT-Germany-FB-Community/Organisatorisches | 8c813febce030c44ea61a829bbb7114882becee7 | [
"Apache-2.0"
] | 1 | 2020-06-24T22:57:52.000Z | 2020-06-24T22:57:52.000Z | README.md | Foundry-VTT-Germany-FB-Community/Organisatorisches | 8c813febce030c44ea61a829bbb7114882becee7 | [
"Apache-2.0"
] | null | null | null | README.md | Foundry-VTT-Germany-FB-Community/Organisatorisches | 8c813febce030c44ea61a829bbb7114882becee7 | [
"Apache-2.0"
] | null | null | null | # Organisatorisches
Organisatorische Fragen, Rechtliche Hinweise, Autoren, Urheber, Quellennachweise, etc.
## Rechte
Ich habe diese GitHub Organisation so angelegt, dass jedes Mitglied öffentliche Repositories anlegen und Dateien einchecken kann.
_Wenn ihr ein Repository anlegt, dann wählt eine Lizenz aus, damit andere wissen, was erlaubt und erforderlich ist und was nicht._
Ich habe für dieses Repo bspw. die Apache 2.0 Lizenz gewählt (siehe unten).
## Pflichten
Egal, was andere Länder gestatten, in Deutschland sind wir immer an das Urheberrecht gebunden, d.h. gebt eure Quellen an und verwendet keine geschützen Materialien, es sei denn, euch wurde die Verwendung in diesem Kontext hier gestattet - dann gebt das bitte entsprechend an.
**Markenrecht**: Verwendet keine geschützten Marken für eure Zwecke! Wenn ihr einen Markennamen dennoch mal erwähnen müsst, dann weist daraufhin wer der Eigentümer ist. Viele Markeninhaber geben auch Hinweise, wie dies im Einzelnen zu erfolgen hat.
**Außerdem**: Zitieren (mit Quellenangabe) JA, Abschreiben NEIN.
**Schlussendlich**: Der Repo-Ersteller ist für die Überwachung innerhalb seines Repos verantwortlich und _jeder Autor ist für seine eingecheckten Inhalte verantwortlich_.
### Beispiel: D&D5
Wizards legt die Regeln für Fan-Inhalte deutlich dar: https://company.wizards.com/de/fancontentpolicy
Bezüglich der Open Game License siehe auch https://dnd.wizards.com/articles/features/systems-reference-document-srd
## Makros
Im Repo "Makros" könnt ihr eure Makros mit uns und der Welt teilen: https://github.com/Foundry-VTT-Germany-FB-Community/Makros
Wichtig:
- Fügt am Anfang des Makros einen Kommentar hinzu, der euch als Autor ausweist - so haben es Wiederverwender einfacher, die Lizenz einzuhalten.
- Solltet ihr ein fremdes Makro wiederverwenden, haltet euch an die Lizenz bzw. stellt sicher, dass ihr das dürft, und gebt die Lizenz an, unter der ihr dieses wiederverwenden und teilen dürft.
## Logo dieser Organisation
Der W20 stammt von Lonnie Tapscott, US.
Ich (Jochen Linnemann) habe dieses Logo von NounProject im Rahmen meiner Pro-Lizenz erhalten. Die weitere Verwendung ist nur mit jeweils eigener Lizenz möglich (siehe NounProject).
## Warum Apache 2.0?
https://choosealicense.com/licenses/apache-2.0/
### Apache License 2.0
A permissive license whose main conditions require preservation of copyright and license notices. Contributors provide an express grant of patent rights. Licensed works, modifications, and larger works may be distributed under different terms and without source code.
| Permissions | Conditions | Limitations |
| -------------- | ---------------------------- | ------------- |
| Commercial use | License and copyright notice | Liability |
| Distribution | State changes | Trademark use |
| Modification | | Warranty |
| Patent use | | |
| Private use | | |
| 53.385965 | 275 | 0.731186 | deu_Latn | 0.992315 |
d051d4e6fa63f05b2747c932feb78b1ca7a3b7b9 | 3,018 | md | Markdown | job-opportunities.md | hack4impact/resources | 7af40eefa0a228d6abe14b6a275be09b07a56f43 | [
"CC0-1.0"
] | 22 | 2017-02-21T01:08:50.000Z | 2021-03-05T19:40:16.000Z | job-opportunities.md | hack4impact/resources | 7af40eefa0a228d6abe14b6a275be09b07a56f43 | [
"CC0-1.0"
] | 7 | 2017-04-24T18:27:55.000Z | 2017-12-02T04:23:18.000Z | job-opportunities.md | hack4impact/resources | 7af40eefa0a228d6abe14b6a275be09b07a56f43 | [
"CC0-1.0"
] | 2 | 2018-04-03T16:28:34.000Z | 2020-08-04T02:40:56.000Z | # Job Opportunities
### Fellowships and Internships
- [Google North America Public Policy Fellowship](https://blog.google/topics/public-policy/2017-google-north-america-public-policy-fellowship-now-accepting-applications/)
- [DSSG Atlanta Summer Fellowship](http://dssg-atl.io)
- [DSSG University of Washington](http://escience.washington.edu/get-involved/incubator-programs/data-science-for-social-good/)
- [DSSG University of Chicago](https://dssg.uchicago.edu/)
- [Technology and Democracy Fellowship](http://ash.harvard.edu/technology-and-democracy-fellowship)
- [ACLU Technology Fellow](https://www.aclu.org/careers/technology-fellow-spt-04-acluf-speech-privacy-and-technology-project-ny)
- [Congressional Innovation Fellowship](https://www.techcongress.io/the-fellowship/)
- [Singularity University Impact Fellows](https://su.org/impact/impact-fellows/)
- [Berkman Klein Center for Internet & Society at Harvard University Summer Internship Program](https://cyber.harvard.edu/getinvolved/internships_summer)
- [Coding it Forward's Civic Digital Fellowship](http://codingitforward.com/fellowship/)
- [Coding it Forward's Pipeline Program](http://codingitforward.com/pipeline/)
- [Blue Ridge Labs @ Robin Hood Fellowship](https://labs.robinhood.org/fellowship/)
- [Summer of Maps](http://www.summerofmaps.com/)
- [Azavea Fellowship](https://fellowship.azavea.com)
### Resource Hubs
- [Tech for Social Justice - Job board](https://jobs.t4sj.co/)
- [Fast Forward - Jobs, Volunteer & Board Opportunities](http://www.ffwd.org/tech-nonprofit-jobs/)
- [Coding it Forward Jobs, Internships, and Resources - spreadsheet](https://docs.google.com/spreadsheets/d/166gGCGU2dXBep7d9SQqP8698_Tct5k38_k4zwdFnArA/edit#gid=0)
- [Tech Jobs for Good - spreadsheet](https://bit.ly/tj4g-sheet)
### Volunteer Opportunities
- [Social Coders](http://socialcoder.org/)
- [Technology for Impact Fellowship](https://www.tfi-fellowship.org/)
- [Code for Philly](https://codeforphilly.org/)
- [DataKind](http://www.datakind.org/)
### Organizations
- [Tech for Social Justice - Directory of organizations and projects](http://t4sj.co/orglist.html)
- [United States Digital Service](https://www.usds.gov)
- [Chan Zuckerberg Initiative](https://jobs.lever.co/chanzuckerberg)
- [Kapor Capital Portfolio list](http://www.kaporcapital.com/portfolio/)
- [Code For America Jobs Portal](http://jobs.codeforamerica.org)
- [Hack4Impact Civic Tech Organizations - spreadsheet](https://docs.google.com/spreadsheets/d/1OdLYn1KOk7_RDxpqaelChmI0qqhKNd_ww2N5D-XCkHQ/edit#gid=0)
- [CS+Social Good Organizations/Connections - spreadsheet](https://docs.google.com/spreadsheets/d/1pRpVHIuqJ4LdE0GmjNsIfMFSgnnl3KIn6-n8dm6s2NI/edit#gid=0)
- [Tech Forward - Organizations, Projects, Resources, Tools, etc. for social progress](https://tech-forward.com/#orgs)
- [Hack the Hood](http://www.hackthehood.org/)
- [Sassafras Tech Collective](https://sassafras.coop/jobs)
- [DataKind](http://www.datakind.org/)
- [UChicago Urban Labs](https://urbanlabs.uchicago.edu/careers)
| 68.590909 | 170 | 0.772697 | yue_Hant | 0.290204 |
d052ae69224fe1befbdfd35a8352244f7fe55be1 | 2,180 | md | Markdown | docs/standard-library/map-functions.md | gaozilai/cpp-docs.zh-cn | 3c8b90636cc35e408e14752d56324ef0203ceaac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/map-functions.md | gaozilai/cpp-docs.zh-cn | 3c8b90636cc35e408e14752d56324ef0203ceaac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/standard-library/map-functions.md | gaozilai/cpp-docs.zh-cn | 3c8b90636cc35e408e14752d56324ef0203ceaac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '<map> functions'
ms.date: 11/04/2016
f1_keywords:
- map/std::swap (map)
- map/std::swap (multimap)
ms.assetid: 7cb3d1a5-7add-4726-a73f-61927eafd466
ms.openlocfilehash: 6c3480e9ffbbab46a42ae790d8b70afbcd823457
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "62413029"
---
# <a name="ltmapgt-functions"></a><map> functions
|||
|-|-|
|[swap (map)](#swap)|[swap (multimap)](#swap_multimap)|
## <a name="swap"></a>swap (map)
Exchanges the elements of two maps.
```cpp
template <class Key, class Type, class Traits, class Allocator>
void swap(
    map<Key, Type, Traits, Allocator>& left,
    map<Key, Type, Traits, Allocator>& right);
```
### <a name="parameters"></a>Parameters
*right*<br/>
The map providing the elements to be swapped, or the map whose elements are to be exchanged with those of the map *left*.
*left*<br/>
The map whose elements are to be exchanged with those of the map *right*.
### <a name="remarks"></a>Remarks
The template function is an algorithm specialized on the container class map to execute the member function `left.`[swap](../standard-library/map-class.md#swap)`(right)`. This is an instance of the partial ordering of function templates by the compiler. When template functions are overloaded in such a way that the match of the template with the function call isn't unique, the compiler selects the most specialized version of the template function. The general version of the template function, **template** \< **class T**> **void swap**( **T&**, **T&**), in the algorithm class works by assignment and is a slow operation. The specialized version in each container is much faster because it can work with the internal representation of the container class.
### <a name="example"></a>Example
For an example of how to use the template version of `swap`, see the code example for the member function [map::swap](../standard-library/map-class.md#swap).
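As a quick standalone sketch (the container contents below are arbitrary, not from the member-function example):
```cpp
// swap_map.cpp: demonstrates the non-member swap for std::map.
#include <iostream>
#include <map>

int main()
{
    std::map<int, char> left{{1, 'a'}, {2, 'b'}};
    std::map<int, char> right{{3, 'c'}};

    swap(left, right); // dispatches to the container-specialized overload

    std::cout << left.size() << " " << right.size() << std::endl; // prints "1 2"
    return 0;
}
```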
## <a name="swap_multimap"></a>swap (multimap)
Exchanges the elements of two multimaps.
```cpp
template <class Key, class Type, class Traits, class Allocator>
void swap(
    multimap<Key, Type, Traits, Allocator>& left,
    multimap<Key, Type, Traits, Allocator>& right);
```
### <a name="parameters"></a>Parameters
*right*<br/>
The multimap providing the elements to be swapped, or the multimap whose elements are to be exchanged with those of the multimap *left*.
*left*<br/>
The multimap whose elements are to be exchanged with those of the multimap *right*.
### <a name="remarks"></a>Remarks
The template function is an algorithm specialized on the container class multimap to execute the member function `left.`[swap](../standard-library/multimap-class.md#swap)`(right)`. This is an instance of the partial ordering of function templates by the compiler. When template functions are overloaded in such a way that the match of the template with the function call isn't unique, the compiler selects the most specialized version of the template function. The general version of the template function, **template** \< **class T**> **void swap**( **T&**, **T&**), in the algorithm class works by assignment and is a slow operation. The specialized version in each container is much faster because it can work with the internal representation of the container class.
### <a name="example"></a>Example
For an example of how to use the template version of `swap`, see the code example for the member function [multimap::swap](../standard-library/multimap-class.md#swap).
## <a name="see-also"></a>See also
[\<map>](../standard-library/map.md)<br/>
| 27.948718 | 303 | 0.701835 | yue_Hant | 0.189313 |
d052d6cb9b20062e2c6401be27141bf89807ebe8 | 86 | md | Markdown | README.md | tongw-tw/Finite-State-Machine-Design-for-the-PS-2-and-LCD-Interfaces | 0a609214d4e32404e0ef15be853ba259385843b8 | [
"MIT"
] | null | null | null | README.md | tongw-tw/Finite-State-Machine-Design-for-the-PS-2-and-LCD-Interfaces | 0a609214d4e32404e0ef15be853ba259385843b8 | [
"MIT"
] | null | null | null | README.md | tongw-tw/Finite-State-Machine-Design-for-the-PS-2-and-LCD-Interfaces | 0a609214d4e32404e0ef15be853ba259385843b8 | [
"MIT"
] | null | null | null | # Finite-State-Machine-Design-for-the-PS-2-and-LCD-Interfaces
McMaster 3DQ5 takehome2
| 28.666667 | 61 | 0.813953 | yue_Hant | 0.619429 |
d0530c03d707095b4bf46c9e18a79c1778082681 | 686 | md | Markdown | api/Outlook.ContactItem.SelectedMailingAddress.md | qiezhenxi/VBA-Docs | c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d | [
"CC-BY-4.0",
"MIT"
] | 1 | 2018-10-15T16:15:38.000Z | 2018-10-15T16:15:38.000Z | api/Outlook.ContactItem.SelectedMailingAddress.md | qiezhenxi/VBA-Docs | c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Outlook.ContactItem.SelectedMailingAddress.md | qiezhenxi/VBA-Docs | c49aebcccbd73eadf5d1bddc0a4dfb622e66db5d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ContactItem.SelectedMailingAddress Property (Outlook)
keywords: vbaol11.chm1064
f1_keywords:
- vbaol11.chm1064
ms.prod: outlook
api_name:
- Outlook.ContactItem.SelectedMailingAddress
ms.assetid: 7f0a68a0-2663-276f-7217-f580d63edb51
ms.date: 06/08/2017
---
# ContactItem.SelectedMailingAddress Property (Outlook)
Returns or sets an **[OlMailingAddress](Outlook.OlMailingAddress.md)** constant indicating the type of the mailing address for the contact. Read/write.
## Syntax
_expression_. `SelectedMailingAddress`
_expression_ A variable that represents a [ContactItem](./Outlook.ContactItem.md) object.
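## Example

The following sketch is illustrative rather than part of the original page; the contact name and the chosen constant are assumptions. It marks a contact's business address as the mailing address.

```vb
Sub SetSelectedMailingAddress()
    Dim oContact As Outlook.ContactItem
    ' Assumes a contact named "Jane Doe" exists in the default Contacts folder.
    Set oContact = Application.Session.GetDefaultFolder(olFolderContacts).Items("Jane Doe")
    ' Use the business address as the contact's mailing address.
    oContact.SelectedMailingAddress = olBusiness
    oContact.Save
End Sub
```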
## See also
[ContactItem Object](Outlook.ContactItem.md)
| 22.129032 | 152 | 0.791545 | yue_Hant | 0.246857 |
d0531e0e0b40d73d5657bf8c4d1a846589de4696 | 747 | md | Markdown | docs/Required-features-of-an-atomic-NFT/Dynamic.md | ColinKoii/site | e828875b77f318409ffe2649c97ce424fec4fce9 | [
"MIT"
] | null | null | null | docs/Required-features-of-an-atomic-NFT/Dynamic.md | ColinKoii/site | e828875b77f318409ffe2649c97ce424fec4fce9 | [
"MIT"
] | null | null | null | docs/Required-features-of-an-atomic-NFT/Dynamic.md | ColinKoii/site | e828875b77f318409ffe2649c97ce424fec4fce9 | [
"MIT"
] | null | null | null | ---
layout: default
slideId: Dynamic
title: (optional) Dynamic
parent: Required features of an atomic NFT
nav_order: 5
---
## Dynamic
{: .fs-9 }
Dynamic NFTs are quite a bit more complicated; they use an iframe compressed into an HTML payload to prepare the final asset.
The dynamic element comes from the NFT's internal programming, which can retrieve information from an Arweave gateway and change the NFT's appearance in response to the latest contract state.
[Try out the 'Narcissus' flower template here.](https://github.com/atomic-nfts/standard/tree/main/dynamic)
This is only an example implementation, but it provides the default functionality needed for interoperability with existing Atomic NFT standards.
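A minimal sketch of the mechanism (the gateway URL, transaction ID, asset names, and state shape below are placeholders, not part of the template):
```html
<!-- Embedded inside the NFT's HTML payload -->
<img id="flower" src="closed.svg" />
<script>
  async function refresh() {
    // Query an Arweave gateway for the latest contract state (endpoint illustrative).
    const res = await fetch("https://arweave.net/<STATE_TX_ID>");
    const state = await res.json();
    // Change appearance in response to the state.
    if (state.attention > 100) {
      document.getElementById("flower").src = "open.svg";
    }
  }
  refresh();
</script>
```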
| 39.315789 | 192 | 0.78581 | eng_Latn | 0.994913 |
d05398e9ad7fc9230682b018262c96cd4f219e8c | 109 | md | Markdown | sample/README.md | MrHadess/storylog | f61d58727c0abf69a36002b0a189e66493ea690a | [
"MIT"
] | null | null | null | sample/README.md | MrHadess/storylog | f61d58727c0abf69a36002b0a189e66493ea690a | [
"MIT"
] | 1 | 2022-01-07T07:52:04.000Z | 2022-01-07T07:52:04.000Z | sample/README.md | MrHadess/storylog | f61d58727c0abf69a36002b0a189e66493ea690a | [
"MIT"
] | null | null | null | # This sample shows you how to use story log in a Spring Boot project
### Sample component: Kafka, ElasticSearch | 54.5 | 67 | 0.788991 | eng_Latn | 0.998061 |
d053c8f6172b42bcb58d41289fdf7c7d923c160f | 18 | md | Markdown | README.md | spacestime/getfile | 7dfd1bfa22704a1205e109367fb94f11d4bea6a7 | [
"Apache-2.0"
] | null | null | null | README.md | spacestime/getfile | 7dfd1bfa22704a1205e109367fb94f11d4bea6a7 | [
"Apache-2.0"
] | null | null | null | README.md | spacestime/getfile | 7dfd1bfa22704a1205e109367fb94f11d4bea6a7 | [
"Apache-2.0"
] | null | null | null | # getfile
getfile
| 6 | 9 | 0.777778 | ssw_Latn | 0.985149 |
d054135bfccb745fb618d0444713881627fadaee | 91 | md | Markdown | README.md | rgauss/thread-interrupt-test | 3172cfc3b93afffdabae47d33e8961964be1b411 | [
"Apache-2.0"
] | null | null | null | README.md | rgauss/thread-interrupt-test | 3172cfc3b93afffdabae47d33e8961964be1b411 | [
"Apache-2.0"
] | null | null | null | README.md | rgauss/thread-interrupt-test | 3172cfc3b93afffdabae47d33e8961964be1b411 | [
"Apache-2.0"
] | null | null | null | Simple Java test showing thread interrupt behavior
Run `mvn clean test` to demonstrate the behavior.
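A minimal, self-contained sketch of the behavior such a test exercises (the class below is illustrative, not the repo's actual source):

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000); // blocking call that honors interruption
            } catch (InterruptedException e) {
                // sleep() clears the interrupt flag before throwing, so restore it.
                Thread.currentThread().interrupt();
                System.out.println("interrupted, flag restored: "
                        + Thread.currentThread().isInterrupted());
            }
        });
        worker.start();
        worker.interrupt(); // request interruption; the sleeping thread throws
        worker.join();
    }
}
```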
| 15.166667 | 50 | 0.791209 | eng_Latn | 0.988765 |
d0544c62d798ed41b40fd8a9b5fa1153dc3f0ed1 | 495 | md | Markdown | doc/report_2.md | mculen/tia2022 | 31ff99c75699517e6d5ece1dbf662dc399dc4bf0 | [
"MIT"
] | null | null | null | doc/report_2.md | mculen/tia2022 | 31ff99c75699517e6d5ece1dbf662dc399dc4bf0 | [
"MIT"
] | null | null | null | doc/report_2.md | mculen/tia2022 | 31ff99c75699517e6d5ece1dbf662dc399dc4bf0 | [
"MIT"
] | null | null | null | # 2. report
1. Name and surname: Martin Čulen
2. Project title: Organization of electronic documents
3. Week: 6
4. Planned work: finish the work from last week, start implementing users
5. Completed work: the work from last week is finished, users are implemented (for the most part)
6. Differences between planned and completed work: none
7. Hours spent: 17
8. Plans for next week: document upload, managing the directory structure, categories
9. Problems: none
| 45 | 101 | 0.806061 | slk_Latn | 0.999982 |
d054b18932260530a88e71943ecb897ee7c3d67a | 170 | md | Markdown | AudioUnitV3TemplateWithParameters/templateAUfxWithParameters/SE-Reading.md | Symphonic-eMotions/Audio-Unit-V3-Templates | fd82600f6763d9f8bf6db2f4d8c53639f6918b8b | [
"MIT"
] | null | null | null | AudioUnitV3TemplateWithParameters/templateAUfxWithParameters/SE-Reading.md | Symphonic-eMotions/Audio-Unit-V3-Templates | fd82600f6763d9f8bf6db2f4d8c53639f6918b8b | [
"MIT"
] | null | null | null | AudioUnitV3TemplateWithParameters/templateAUfxWithParameters/SE-Reading.md | Symphonic-eMotions/Audio-Unit-V3-Templates | fd82600f6763d9f8bf6db2f4d8c53639f6918b8b | [
"MIT"
] | null | null | null | [Programming with Objective-C](https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/DefiningClasses/DefiningClasses.html)
| 85 | 169 | 0.870588 | yue_Hant | 0.941849 |
d0550de21ea3084da66d8daf6c8b17df1d7c4a01 | 36,291 | md | Markdown | articles/supply-chain/warehousing/sales-returns.md | PowershellScripts/Dynamics-365-Operations.pl-pl | 32fc0808dfab66907f3583747f6197571b9ac829 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/supply-chain/warehousing/sales-returns.md | PowershellScripts/Dynamics-365-Operations.pl-pl | 32fc0808dfab66907f3583747f6197571b9ac829 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/supply-chain/warehousing/sales-returns.md | PowershellScripts/Dynamics-365-Operations.pl-pl | 32fc0808dfab66907f3583747f6197571b9ac829 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Sales returns
description: This topic provides information about the return order process. It describes the concept of customer returns and their effect on inventory valuation and on-hand quantities.
author: omulvad
manager: tfehr
ms.date: 06/20/2017
ms.topic: article
ms.prod: ''
ms.service: dynamics-ax-applications
ms.technology: ''
ms.search.form: ReturnTableListPage, ReturnTable, ReturnTableListPagePreviewPane, ReturnTableReferences, SalesReturnExpiredOrdersPart, SalesReturnFindOrderFormPart
audience: Application User
ms.reviewer: kamaybac
ms.search.scope: Core, Operations
ms.custom: 269384
ms.assetid: 98a4b517-e606-4036-b55f-1ab248898bdf
ms.search.region: Global
ms.author: kamaybac
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: fd194042303797fe41507065d0d7e4df28309cfb
ms.sourcegitcommit: 199848e78df5cb7c439b001bdbe1ece963593cdb
ms.translationtype: HT
ms.contentlocale: pl-PL
ms.lasthandoff: 10/13/2020
ms.locfileid: "4435015"
---
# <a name="sales-returns"></a>Sales returns
[!include [banner](../includes/banner.md)]
This topic provides information about the return order process. It describes the concept of customer returns and their effect on inventory valuation and on-hand quantities.
Customers can return items for a variety of reasons. For example, an item might be defective, or it might not meet the customer's expectations. The return process begins when a customer issues a request to return an item. After the request is received from the customer, a return order is created.
## <a name="return-order-process"></a>Return order process
The following figure gives an overview of the return order process.
[](./media/salesreturns01.jpg)
There are two kinds of return order process: physical return and credit only.
- **Physical return** – The return order authorizes the physical return of products.
- **Credit only** – The return order authorizes a credit to the customer account but doesn't require that the customer physically return the products.
### <a name="physical-return-order-process"></a>Physical return order process
1. **Create a return order.** Formally document the authorization for the customer to return any defective or unwanted products. The return order doesn't require that the company accept the returned product or credit the customer account. If the return is accepted, you can authorize the shipment of a replacement item even before the returned defective item is received.
2. **Arrival at the warehouse for inspection.** Perform an initial inspection and validate against the return order document. The return order also supports quarantining the returned items for additional inspection and quality control.
3. **Determine disposition.** Finalize the inspection process, and decide what should be done with the returned products. As part of this step, decide whether to credit the customer account, reject the product return, or accept the product return, scrap the product, and send the customer a replacement product.
4. **Generate a packing slip.** Generate a packing slip, and confirm the disposition decision that was made in step 3. Finalize the logistics processes.
5. **Generate an invoice.** Close the return order.
### <a name="credit-only-process"></a>Credit-only process
1. **Create a return order.** Formally document the authorization for the customer to receive a credit without returning the defective or unwanted products. The **Credit only** disposition code authorizes the decision to credit the customer account without physically returning the products.
2. **Generate an invoice.** Generate a credit note, and close the return order.
## <a name="return-material-authorization"></a>Return material authorization
Return material authorization (RMA) processing is based on sales order functionality. An RMA is registered as a return order, which is created as a sales order, and it can have a second, associated sales order that's called a replacement order. Both sales orders are linked to the originating RMA number.
- **Return order** – To register an RMA, you create a return order, which is a sales order that's assigned the type **Returned order**. Any changes that you make to the RMA information are automatically reflected in the sales order. Until the return order gets a status of **Open**, it doesn't appear in the list of sales orders. The RMA is used to handle the arrival and receipt of the returned items, and also to authorize the credit-only disposition action (see the **Disposition codes and disposition actions** section). All other follow-up processes must be handled in the sales order.
- **Replacement order** – When a replacement order must be shipped to the customer, the RMA can include a second, associated sales order. You can manually create a replacement order for the RMA to enable immediate shipment. Alternatively, a replacement order can be created automatically after arrival, inspection, and receipt are finalized for an RMA line that has a disposition code indicating replacement. The replacement order has the same functionality as the associated sales order. For example, you can use it to configure a custom product as the replacement item, create a production order to repair the returned item, create a direct-delivery purchase order to ship the replacement product from a vendor, and so on.
## <a name="create-a-return-order"></a>Create a return order
The return order process begins when a customer contacts the organization to return a defective or unwanted product and/or receive a credit. When the organization accepts the return, it's documented by a return order. The return order then becomes the central point for the internal processing of the returned product. The following figure shows the procedure for creating a return order.
[](./media/salesreturn02.png)
### <a name="create-a-return-order-header"></a>Create a return order header
When you create a return order, you must provide the information that's shown in the following table.
| Field | Description | Comments |
|-------|-------------|----------|
| Customer account | A reference to the customer table | You must provide an existing customer account. |
| Delivery address | The address that the item is returned to | By default, the organization's address is used. If a specific warehouse is selected on the header, the delivery address changes to the warehouse's delivery address. You can change this address on the **Return order details** page. |
| Site/warehouse | The site or warehouse that receives the returned product | The delivery address for the return order is determined from the delivery address of the site or warehouse. |
| RMA number | The identifier that's assigned to the return order | The RMA number is used as an alternate key throughout the return order process. The RMA number that's assigned is based on the RMA number sequence that's set up on the **Accounts receivable parameters** page. |
| Deadline | The last day that the item can be returned | The default value is calculated as the current date plus the validity period. For example, if a return is valid (acceptable) only for 90 days after the return order is created, and the return order is created on May 1, the value in this field is **July 30**. The validity period is set on the **Accounts receivable parameters** page. |
| Return reason code | The reason that the customer gives for returning the product | The reason code is selected from a user-defined list of reason codes. You can update this field at any time. |
### <a name="create-return-order-lines"></a>Create return order lines
After the return header is completed, you can create return lines by using one of the following methods:
- Manually enter the item details, quantity, and other information for each return line.
- Create a return line by using the **Find sales order** function. We recommend that you use this function when you create a return order. The **Find sales order** function establishes a reference from the return line to the invoiced sales order line, and then copies line details, such as the item number, quantity, price, discount, and cost values, from the sales line. The reference helps guarantee that, when the product is returned to the company, it's valued at the same unit cost that it was sold at. The reference also verifies that return orders aren't created for a quantity that's larger than the quantity that was sold on the invoice.
> [!NOTE]
> Return lines that reference a sales order are handled as sales corrections (reversals). For more information, see the "Post to the ledger" section later in this topic.
### <a name="charges"></a>Charges
You can add miscellaneous charges to a return order by using one or more of the following methods:
- Charges can be added manually to the return order header and/or the return order line.
- Charges can be added automatically to the return order header as a function of the return reason code.
- Charges can be added automatically to the return order line, based on the disposition code that's set for the line.
Automatic charges are added when a return reason code or disposition code is assigned to a line. If the reason code is changed later, the existing charge record isn't removed, but a new charge record might be added, based on the new reason code. When charges are added to return order lines, charges that are calculated as a percentage of the line or order value become negative when the line or order is negative, unless the percentage is also a negative number. A charge that has a negative value represents a credit to the customer.
### <a name="return-reason-codes"></a>Return reason codes
By applying reason codes to returns, you can make it easier to analyze return patterns. Reason codes indicate why a customer wants to return items. Some organizations have many reason codes defined. Those organizations can group reason codes into reason code groups, to get a better overview and accumulated reporting.
### <a name="disposition-codes-and-disposition-actions"></a>Disposition codes and disposition actions
An important step in the return order process is the assignment of a disposition code to the return order line as part of the arrival registration. The disposition code defines the following information:
- **Financial consequences** – Should the customer be credited for the returned items, and should any charges be added to the return order line?
- **Disposition of the returned item** – Should the item be added back to inventory, scrapped, or returned to the customer?
- **Logistics of the returned item** – Should a replacement item be sent to the customer?
In addition to defining how the returned item is handled, disposition codes can add charges to the return line. They can also be used to group returns for statistical analysis. Disposition codes are defined as part of the return order setup. However, each disposition code must reference one of the built-in disposition actions. The following table shows the built-in disposition codes and their associated actions. **Important:** If the item shouldn't be returned, but the customer should still be credited, assign the **Credit only** disposition code to the return line.
<table>
<thead>
<tr class="header">
<th>Disposition code</th>
<th>Financial consequences</th>
<th>Logistical consequences</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Credit only</td>
<td><ul>
<li>The customer account is credited the sales price minus charges.</li>
<li>The loss from scrapping the item is posted to the ledger.</li>
</ul></td>
<td>The item shouldn't be returned. This disposition action is used in the following cases:
<ul>
<li>There is sufficient trust between the parties.</li>
<li>The cost of returning the defective items is very high.</li>
<li>The items can't be put back into the warehouse for formal reasons. Because of other circumstances, a physical return isn't required.</li>
</ul></td>
</tr>
<tr class="even">
<td>Credit</td>
<td><ul>
<li>The customer account is credited the sales price minus charges.</li>
<li>The inventory value is increased by the cost of the returned item.</li>
</ul></td>
<td>The item is returned and added back to inventory.</td>
</tr>
<tr class="odd">
<td>Replace and credit</td>
<td><ul>
<li>The customer account is credited the sales price minus charges.</li>
<li>The inventory value is increased by the cost of the returned item.</li>
<li>A separate replacement sales order is created and handled separately.</li>
</ul></td>
<td>The item is returned and added back to inventory.</td>
</tr>
<tr class="even">
<td>Replace and scrap</td>
<td><ul>
<li>The customer account is credited the sales price minus charges.</li>
<li>The loss from scrapping the item is posted to the ledger.</li>
<li>A separate replacement sales order is created and handled separately.</li>
</ul></td>
<td>The item is returned and scrapped.</td>
</tr>
<tr class="odd">
<td>Return to customer</td>
<td>None, except any charges.</td>
<td>The item is returned but, after inspection, sent back to the customer. This disposition action can be used if the item was deliberately damaged or the warranty was voided.</td>
</tr>
<tr class="even">
<td>Scrap</td>
<td><ul>
<li>The customer account is credited the sales price minus charges.</li>
<li>The loss from scrapping the item is posted to the ledger.</li>
</ul></td>
<td>The item is returned and scrapped.</td>
</tr>
</tbody>
</table>
## <a name="arrival-at-the-warehouse-for-inspection"></a>Arrival at the warehouse for inspection
Before the returned items can be physically received into inventory by posting a packing slip, they must go through arrival registration and optional inspection. The following figure gives an overview of the arrival process. The sections that follow describe each step shown in the figure.
[](./media/salesreturn03.png)
The process has several other variations that aren't covered in this topic. Here are some of the possible variations:
- Don't use the **Arrival overview** list to create the arrival journal. Instead, create the arrival journal manually. Return orders will have a **Sales order** reference.
- If you're using the Warehouse management module, generate pallet transports. During the pallet transport, the return line will have a status of **Delivered**.
- Register the arrival of the returned item directly from the return order line by using the **Registration** function.
During the arrival process, returns are integrated into the overall warehouse arrival process. The arrival process also lets you create quarantine orders for returned items that require separate inspection and quality control.
### <a name="identify-products-in-the-arrival-overview-list"></a>Identify products in the Arrival overview list
The **Arrival overview** page shows all planned incoming arrivals.
> [!NOTE]
> Arrivals from return orders must be processed separately from other types of arrival transactions. After you identify the incoming package on the **Arrival overview** page (for example, by the accompanying RMA document), click **Start arrival** on the Action Pane to create and initialize an arrival journal that matches the arrival.
### <a name="edit-the-arrival-journal"></a>Edit the arrival journal
By setting the **Quarantine management** option to **Yes**, you can create a quarantine order for the return line. If the line has been sent to quarantine for inspection, you can't specify a disposition code.
If you set the **Quarantine management** option to **Yes** in the item's inventory model group, the **Quarantine management** option on the **Journal lines** page is marked for the arrival journal line and can't be changed. If a line is sent to quarantine, you must specify the appropriate quarantine warehouse.
If the arrival line isn't sent for inspection, the arrival clerk must specify the disposition code directly on the arrival journal line and then post the arrival journal. If the same disposition code shouldn't be assigned to the whole quantity of the return line, or if the whole quantity of the line wasn't received, the line must be split. Splitting an arrival journal line also splits the return line (**SalesLine**) and creates a new lot ID. To split a line, you can reduce the quantity on the arrival journal line. After the journal is posted, a new return line that has a status of **Expected** is created for the remaining quantity. You can also split a line by clicking **Functions** > **Split**.
### <a name="process-the-quarantine-order"></a>Process the quarantine order
If the returned products are sent for inspection in a quarantine warehouse, all further processing is based on the quarantine order. One quarantine order is created for each arrival line that's sent to quarantine. The disposition code indicates the result of the inspection process.
A quarantine order can be split just as an arrival journal is split. Splitting a quarantine order causes a corresponding split of the return line. After you enter the disposition code, finalize the quarantine order by using the **End** or **Report as finished** function. If you use the **Report as finished** function, a new arrival is created in the designated warehouse. You can then process this arrival by using the options on the **Arrival overview** page.
If the arrival originates from a quarantine order, you can't change the disposition code that was assigned during inspection. If you finalize the quarantine order by using the **End** function, the lot is automatically registered. Sometimes, an item might be sent from quarantine back to the shipping and receiving department. For example, the quarantine inspector might not know where to put the item in inventory. In this case, update the corresponding packing slip so that the disposition code that was set because of quarantine is correctly registered and processed.
An acknowledgment of receipt can be sent to the customer when the return line is registered. The **Return acknowledgment** report resembles the return order document. The **Return acknowledgment** report isn't saved in a journal or otherwise registered in the system, and it isn't a required step in the return order process.
## <a name="replace-a-product"></a>Replace a product
There are two methods for managing product replacement:
- **Up-front replacement** – Replace the product before the returned product is received from the customer.
- **Replacement by disposition code** – Automatically create a new replacement order line.
### <a name="up-front-replacement"></a>Up-front replacement
In an up-front replacement, the replacement item can be delivered to the customer before the returned product is received from the customer. This method is useful when, for example, the item is part of a machine that can't be removed unless a replacement part is put in its place, or when you simply want the customer to receive the replacement product as soon as possible. An up-front replacement order is an independent sales order. The header information is initialized from the customer, and the line information is initialized from the return order. The replacement order can be edited, processed, and deleted independently of the return order. When you delete a replacement order, you receive a message stating that the order was created as a replacement order. The following figure shows the up-front replacement process.

The return order includes a reference to the replacement order. If an up-front replacement order is created for a return order before the defective item is returned, disposition codes for replacement can't be selected when the defective item is returned.
### <a name="replacement-by-disposition-code"></a>Replacement by disposition code
If you send a replacement item to the customer, and the return order uses the **Replace and scrap** or **Replace and credit** disposition action, use the process shown in the following figure.

The replacement item is delivered by using an independent sales order, the replacement sales order. This sales order is created when the packing slip for the return order is generated. The order header uses information from the customer that's referenced on the return order header. The line information is taken from the information that's entered on the **Replacement item** page. The **Replacement item** page must be filled in for lines that have disposition actions that begin with the word "replace." However, neither the quantity nor the identification of the replacement item is validated or restricted in any way. This behavior allows for cases where the customer wants the same item but in a different configuration or size, and also cases where the customer wants a completely different item. By default, the identical item is entered on the **Replacement item** page. However, you can select a different item, provided that the functionality has been configured.
> [!NOTE]
> After the replacement sales order is created, it can be edited and deleted.
## <a name="generate-a-packing-slip"></a>Generate a packing slip
Before returned items can be received into inventory, a packing slip must be updated for the order that the items belong to. Just as the invoice update process is the financial transaction update, the packing slip update process is the physical update of the inventory record. In other words, this process commits the inventory changes. For returns, the steps assigned to the disposition action are implemented when the packing slip is updated. When the packing slip is generated, the following events occur:
- In the warehouse, the standard process is used to perform the physical receipt. Ledger postings are generated if the inventory model group (**Post physical inventory**) and Accounts receivable parameters (**Post packing slip in ledger**) are configured appropriately.
- Items that are marked with a disposition action that contains the word "scrap" are scrapped, and the inventory loss is posted to the ledger.
- Items that are marked with the **Return to customer** disposition action are received and delivered back to the customer. These items have no net effect on inventory.
- A replacement sales order is created. This sales order is based on the information on the **Replacement item** page.
A packing slip can be generated only for lines that have a return status of **Registered**, and only for the full quantity on the return line. If several lines on the return order have a status of **Registered**, you can generate a packing slip for a subset of the lines by deleting the other lines from the **Post packing slip** page.
Partial returns are defined in terms of return order lines, not return order shipments. Therefore, if you receive the full quantity on one return order line but receive nothing on the remaining lines of that return order, the delivery isn't a partial delivery. However, if a return order line requires, for example, the return of ten units of a particular item, and you receive only four units, the delivery is a partial delivery. If not all the expected returned items arrive, you can put the shipment aside and wait for the rest of the returned quantity to arrive. Alternatively, you can register and post the partial quantity. As part of the packing slip posting process, you can associate the vendor's packing slip reference number that's specified in the customer's shipping documents with the order lines. This association is optional and for information only. It doesn't create any transactional updates.
In general, you can skip the packing slip process and go directly to invoicing. In this case, the actions that would have been performed when the packing slip was generated are performed during invoicing.
## <a name="generate-an-invoice"></a>Generate an invoice
Although the **Return order** page contains the information and actions that are required in order to handle the specific logistics aspects of the return order, you must use the **Sales order** page to finalize the invoicing process. The organization can then invoice return orders and sales orders at the same time, and the same person can complete the invoicing process, as appropriate. To view the return order from the **Sales order** page, click the sales order number link, which opens the associated sales order. You can also find the return order on the **All sales orders** page. Return orders are sales orders that have an order type of **Returned order**.
### <a name="credit-correction"></a>Credit correction
As part of the invoicing process, validate all miscellaneous charges. To cause the ledger postings to become corrections (reversals), consider using the **Credit correction** option on the **Other** tab of the **Posting invoice** page when you post the invoice/credit note.
> [!NOTE]
> By default, the **Credit correction** option is turned on if the **Credit note as correction** option is enabled on the **Accounts receivable parameters** page. However, we recommend that you don't post returns by using the reversal feature.
## <a name="create-intercompany-return-orders"></a>Create intercompany return orders
Return orders can be completed between two companies in an organization. The following scenarios are supported:
- Simple intercompany returns between two companies that participate in an intercompany relationship
- An intercompany chain that's established when a customer return order is created in the selling company
- An intercompany chain that's established when a vendor return order is created in the buying company
- Returns via a direct-delivery shipment between an external customer and two companies that participate in an intercompany relationship
### <a name="setup"></a>Setup
The following figure shows the minimum setup that's required before two companies can participate in an intercompany relationship and use intercompany trade functionality.

In the following scenario, CompBuy is the buying company, and CompSell is the selling company. In general, the selling company sells items to the buying company or, in direct-delivery scenarios, straight to the end customer. In CompBuy, the vendor IC\_CompSell is defined as the intercompany endpoint that's associated with CompSell. At the same time, in CompSell, the customer IC\_CompBuy is defined as the intercompany endpoint that's associated with CompBuy. The appropriate action policy details and value mappings must be defined in both companies. In the direct-delivery scenario, an intercompany return order, which is also an intercompany sales order, is created in the selling company. The RMA number for the intercompany return order can be taken from the RMA number sequence in CompSell or copied from the RMA number that's assigned to the original return order in CompBuy. These behaviors are determined by the RMA number settings in the **PurchaseRequisition** action policies in CompBuy. If the RMA number is synchronized, plan to mitigate the conflict that occurs if both companies use the same number sequence.
### <a name="simple-intercompany-returns"></a>Simple intercompany returns
This scenario involves two companies in the same organization, as shown in the following figure.

The order chain can be created when a vendor return order is created in the buying company or when a customer return order is created in the selling company. A corresponding order is created in the other company, and the system makes sure that the header and line information on the vendor return order reflects the settings on the customer return order. The return order that's created can include or omit a reference (**Find sales order**) to an existing customer invoice. Packing slips and invoices that are related to the two orders can be processed individually. For example, you don't have to generate the packing slip for the vendor return order before you generate the packing slip for the customer return order.
### <a name="direct-delivery-shipment-returns-among-three-parties"></a>Direct-delivery shipment returns among three parties
This scenario can be created if a previous sale of the **Direct delivery** type has been completed, and a customer invoice exists in the company that works with the customer. In the following figure, CompBuy previously sold products to the customer, Extern, and invoiced the customer for them. The products were shipped directly from CompSell to the customer via an intercompany order chain.

If the customer, Extern, wants to return the products, a return order (RMA02) is created for the customer in CompBuy. So that the intercompany chain can be created, the return order must be marked for direct delivery. When you use the **Find sales order** function to select the customer invoice for the return, an intercompany order chain that consists of the following documents is created:
- **Original return order:** RMA02 (in CompBuy)
- **Purchase order:** PO02 (in CompBuy)
- **Intercompany return order:** RMA\_00032 (in CompSell)
After the intercompany direct-delivery chain is created, all physical handling and electronic processing of the returns must occur in the context of the intercompany return order, RMA\_00032, in CompSell. The products can't be received in CompBuy. After a disposition code is assigned to the intercompany return order, it's synchronized with the original return order to allow for correct invoicing of the original order.
## <a name="post-to-the-ledger"></a>Post to the ledger
The ledger postings that are generated when a return order is invoiced depend on several important settings and parameters:
- **Return cost price** – In inventory models other than **Standard cost**, the **Return cost price** parameter defines the cost of the item when it's received back into inventory or scrapped. To calculate the correct inventory valuation, you must set the **Return cost price** value correctly. If you use the **Find sales order** function to create a return order line that references a customer invoice, the **Return cost price** value equals the cost price of the item that was sold. Otherwise, the cost price value comes from the item setup, or you can enter it manually.
- **Credit correction/Reversal** – The **Credit correction** parameter on the **Posting invoice** page determines whether postings should be recorded as positive (debit/credit) entries or as correcting (negative) entries.
In the following examples, the return cost price is represented as **Inv. Cost price**.
### <a name="example-1-the-return-order-doesnt-reference-a-customer-invoice"></a>Example 1: The return order doesn't reference a customer invoice
The return order doesn't reference a customer invoice. The customer account is credited for the returned item. The **Credit correction** parameter isn't selected when the invoice (or credit note) for the return order is generated.

> [!NOTE]
> The default value of the **Return cost price** parameter is the price from the item master record. This default price differs from the cost price at the time the inventory was issued. The consequence is that a loss of 3 currency units is incurred. Additionally, the return order doesn't include the discount that was given to the customer on the sales order. Therefore, the customer is over-credited.
### <a name="example-2-credit-correction-is-selected-for-the-return-order"></a>Example 2: Credit correction is selected for the return order
Example 2 is the same as example 1, except that the **Credit correction** parameter is selected when the invoice for the return order is generated.

> [!NOTE]
> The ledger postings are entered as negative corrections.
### <a name="example-3-the-return-order-line-is-created-by-using-the-find-sales-order-function"></a>Example 3: The return order line is created by using the Find sales order function
In this example, a return order line is created by using the **Find sales order** function. The **Credit correction** parameter isn't selected when the invoice is created.

> [!NOTE]
> The **Discount** and **Return cost price** options are set correctly. Therefore, there is an exact reversal of the customer invoice.
| 108.655689 | 1,243 | 0.774655 | pol_Latn | 0.999998 |
d0561ba28568354ae7738c9c1ae36e7692bc8a11 | 60 | md | Markdown | README.md | grantt/peregrine | 3ea53ebac28beb5637aa39532a9b3bb6d0b9063c | [
"Apache-2.0"
] | 1 | 2018-12-12T08:33:53.000Z | 2018-12-12T08:33:53.000Z | README.md | grantt/peregrine | 3ea53ebac28beb5637aa39532a9b3bb6d0b9063c | [
"Apache-2.0"
] | null | null | null | README.md | grantt/peregrine | 3ea53ebac28beb5637aa39532a9b3bb6d0b9063c | [
"Apache-2.0"
] | null | null | null | peregrine
=========
Experimentation with Pulsar and Falcon
| 12 | 38 | 0.716667 | eng_Latn | 0.953542 |
d0562add6e8e9838ec820a1cac060d7dbdb13bf7 | 4,900 | md | Markdown | support/windows-server/deployment/multicast-deployment-fails-from-wds.md | v-lenc/SupportArticles-docs | 8660702bf10594d57fb1b0b5b36219b68bef144b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-17T11:07:43.000Z | 2021-06-17T11:07:43.000Z | support/windows-server/deployment/multicast-deployment-fails-from-wds.md | v-lenc/SupportArticles-docs | 8660702bf10594d57fb1b0b5b36219b68bef144b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | support/windows-server/deployment/multicast-deployment-fails-from-wds.md | v-lenc/SupportArticles-docs | 8660702bf10594d57fb1b0b5b36219b68bef144b | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Multicast deployment fails from WDS
description: Provides a solution to an issue where deploying an image from a Windows Deployment Services (WDS) server by using multicast fails.
ms.date: 09/17/2020
author: Deland-Han
ms.author: delhan
manager: dscontentpm
audience: itpro
ms.topic: troubleshooting
ms.prod: windows-server
localization_priority: medium
ms.reviewer: kaushika, scottmca
ms.prod-support-area-path: MDM
ms.technology: windows-server-deployment
---
# Multicast Deployment Fails from Windows Deployment Services
This article provides a solution to an issue where deploying an image from a Windows Deployment Services (WDS) server by using multicast fails.
_Original product version:_ Windows Server 2012 R2
_Original KB number:_ 2582106
## Symptoms
When deploying an image from a WDS server using multicast, you may encounter one or more of the following issues:
- The multicast session never completes
- The multicast session produces an error message
- The multicast session is slow. If you change the deployment to unicast, it works.
## Cause
There are many possible causes for a multicast deployment to fail. One possible reason is that the network router/switches do not handle IP fragmentation properly.
## Resolution
To change WDS so that it does not send fragmented packets, do the following:
**Windows Server 2008 R2**
Set the following registry value, and then restart the WDSServer service:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol
Name: ApBlockSize
Value type: REG_DWORD
Value data: 1385 decimal
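For example, the value can be set from an elevated command prompt, and the service restarted, as in the following sketch (reg.exe syntax is shown for illustration; WDSServer is the service name):

```console
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v ApBlockSize /t REG_DWORD /d 1385 /f
net stop WDSServer && net start WDSServer
```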
**Windows Server 2008**
Windows Server 2008 uses network profiles to control the settings. Do the following to configure it to not send fragmented packets:
1. Click Start, Run, WdsMgmt.msc
2. Right-click the WDS server and choose properties
3. Choose the network settings tab
4. Change the network profile to custom
Set the following registry value, and then restart the WDSServer service:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Profiles\Custom
Name: ApBlockSize
Value type: REG_DWORD
Value data: 1385 decimal
If this allows the multicast transmission to complete, you can then modify the TpCacheSize registry key below to increase performance. If you decrease ApBlockSize without increasing TpCacheSize, overall performance will decrease. Basically, ApBlockSize * TpCacheSize determines the maximum bandwidth that can be achieved.
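For illustration, taking the stated relationship at face value with the default values shown below, 1385 bytes x 3145 cache entries is roughly 4.36 MB of data that can be in flight at once; lowering ApBlockSize without raising TpCacheSize shrinks that product, and with it the achievable throughput, proportionally.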
**Windows Server 2008 R2**
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol
Name: TpCacheSize
Value type: REG_DWORD
Value data: 3145 decimal
**Windows Server 2008**
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Profiles\Custom
Name: TpCacheSize
Value type: REG_DWORD
Value data: 3145 decimal
Restart the WDSServer service after setting this registry key.
After setting this, run a deployment to verify that it completes, and take note of the time that's needed to download the image. Then increase this value in increments until it fails or reaches 7550.
If you have to disable IP fragmentation to get multicast working, then this may be indicative of low-end switching/routing hardware that perhaps does not support fragmentation efficiently or does not support multicast efficiently (IGMP/MLD snooping etc.). Multicast can be demanding on a network so it can expose problems or issues in network infrastructure that were unknown until multicast was set up.
## More information
Other possible reasons for multicast failures include:
- Make sure the WDS server hardware is sized properly, especially the amount of RAM and the network cards. For more information on recommended hardware requirements, see [Optimizing Performance and Scalability](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732088(v=ws.10))
- If the WDS server is Windows Server 2008 R2, check the Multicast tab and verify what "Transfer settings" is set to.
- If larger images fail but smaller images work, it may be due to timeouts. For instance, on Cisco switches the default value for ip igmp query-interval is 60 seconds, which means that the switch will stop forwarding multicast traffic to the port after 3 * 60 = 180 seconds if it doesn't see any IGMP traffic. Contact your switch vendor for how to configure these timeouts.
- The default multicast range in WDS is 239.0.0.1 to 239.0.0.254. Depending on the network, this may not work. Change the range to 239.192.0.2 to 239.192.0.250, or check with your network administrator for an unused range.
- Test with different machines. A machine participating in the multicast stream that has a bad NIC can cause problems with multicast completing.
For more information on optimizing and troubleshooting, see [Optimizing Performance and Scalability](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732088(v=ws.10))
| 55.681818 | 405 | 0.808163 | eng_Latn | 0.990465 |
d05635794d44de864a16ac004fcfb02bebfee57e | 24,823 | md | Markdown | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | doracpphp/azure-docs.ja-jp | 92d3ca6b9bf4bdae67568790c3c429c0533ca9e9 | [
"CC-BY-4.0",
"MIT"
] | 161 | 2017-08-28T07:45:11.000Z | 2022-03-01T06:53:52.000Z | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | doracpphp/azure-docs.ja-jp | 92d3ca6b9bf4bdae67568790c3c429c0533ca9e9 | [
"CC-BY-4.0",
"MIT"
] | 6,139 | 2017-06-27T14:43:19.000Z | 2022-01-14T05:54:35.000Z | articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md | doracpphp/azure-docs.ja-jp | 92d3ca6b9bf4bdae67568790c3c429c0533ca9e9 | [
"CC-BY-4.0",
"MIT"
] | 456 | 2017-06-27T13:57:03.000Z | 2022-03-30T08:41:01.000Z | ---
title: Reference architectures for Oracle databases on Azure | Microsoft Docs
description: Reference architectures for running Oracle Database Enterprise Edition databases on Microsoft Azure Virtual Machines.
author: dbakevlar
ms.service: virtual-machines
ms.subservice: oracle
ms.collection: linux
ms.topic: article
ms.date: 12/13/2019
ms.author: kegorman
ms.openlocfilehash: 6bce6f011086d9855c4da2739addbb34e661e2d6
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/30/2021
ms.locfileid: "102507485"
---
# <a name="reference-architectures-for-oracle-database-enterprise-edition-on-azure"></a>Azure 上の Oracle Database Enterprise Edition 用リファレンス アーキテクチャ
このガイドでは、可用性の高い Oracle データベースを Azure にデプロイする方法について詳しく説明します。 また、ディザスター リカバリーの考慮事項についても説明します。 これらのアーキテクチャは、お客様のデプロイに基づいて作成されたものです。 このガイドは Oracle Database Enterprise Edition にのみ適用されます。
Oracle データベースのパフォーマンスを最大限に引き出す方法に関心がある方は、[Oracle DB の設計](oracle-design.md)に関するページを参照してください。
## <a name="assumptions"></a>前提条件
- [可用性ゾーン](../../../availability-zones/az-overview.md)などの Azure のさまざまな概念を理解している
- Oracle Database Enterprise Edition 12c 以降を実行している
- この記事のソリューションを使用する際のライセンスに関連する事項を理解し同意している
## <a name="high-availability-for-oracle-databases"></a>High availability for Oracle databases
Achieving high availability in the cloud is an important part of every organization's planning and design. Microsoft Azure offers [availability zones](../../../availability-zones/az-overview.md) and availability sets (used in regions where availability zones are unavailable). Read more about designing for the cloud on the [Manage the availability of virtual machines](../../availability.md) page.
Oracle provides high availability solutions that can be set up on Azure, such as [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7), [Data Guard with FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/index.html), [Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html), and [GoldenGate](https://www.oracle.com/middleware/technologies/goldengate.html), in addition to cloud-native tools and offerings. This guide covers reference architectures for each of these solutions.
Finally, when migrating or creating applications for the cloud, it's important to tweak your application code to add cloud-native patterns such as the [retry pattern](/azure/architecture/patterns/retry) and the [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker). Additional patterns defined in the [Cloud Design Patterns guide](/azure/architecture/patterns/) can help make your application more resilient.
### <a name="oracle-rac-in-the-cloud"></a>Oracle RAC in the cloud
Oracle Real Application Cluster (RAC) is a solution by Oracle that helps customers achieve high throughput by having many instances access one database storage (a shared-all architecture pattern). While Oracle RAC can also be used for high availability on-premises, Oracle RAC alone cannot be used for high availability in the cloud, because it only protects against instance-level failures, not against rack-level or datacenter-level failures. For this reason, Oracle recommends using Oracle Data Guard with your database (whether single instance or RAC) for high availability. Customers generally require a high SLA to run mission-critical applications. Oracle RAC is currently not certified or supported by Oracle on Azure. However, Azure offers features such as Availability Zones and planned maintenance windows that help protect against instance-level failures. In addition, customers can use technologies such as Oracle Data Guard, Oracle GoldenGate, and Oracle Sharding to achieve high performance and resiliency by protecting their databases from rack-level, datacenter-level, and geo-political failures.
When running Oracle Databases across multiple [availability zones](../../../availability-zones/az-overview.md) in conjunction with Oracle Data Guard or GoldenGate, customers can achieve an uptime SLA of 99.99%. In Azure regions where availability zones are not yet present, customers can use [availability sets](../../availability-set-overview.md) and achieve an uptime SLA of 99.95%.
> Note: You can set an uptime objective that is much higher than the uptime SLA provided by Microsoft.
## <a name="disaster-recovery-for-oracle-databases"></a>Disaster recovery for Oracle databases
When hosting your mission-critical applications in the cloud, it's important to design for high availability and disaster recovery.
For Oracle Database Enterprise Edition, Oracle Data Guard provides a useful feature for disaster recovery. You can set up a standby database instance in a [paired Azure region](../../../best-practices-availability-paired-regions.md) and configure Data Guard failover for disaster recovery. For zero data loss, it's recommended that you deploy an Oracle Data Guard Far Sync instance in addition to Active Data Guard.
If your application can tolerate the latency (thorough testing is required), consider setting up the Data Guard Far Sync instance in a different availability zone than your Oracle primary database. Use **Maximum Availability** mode to set up synchronous transport of your redo files to the Far Sync instance. These files are then transferred asynchronously to the standby database.
If your application can't accept the performance loss of setting up the Far Sync instance in another availability zone in **Maximum Availability** mode (synchronous), you may set up the Far Sync instance in the same availability zone as your primary database. For additional availability, consider setting up multiple Far Sync instances close to your primary database and at least one instance close to your standby database (in case of role transition). Read more about Oracle Data Guard Far Sync in this [Oracle Active Data Guard Far Sync whitepaper](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf).
When using Oracle Standard Edition databases, there are ISV solutions, such as DBVisit Standby, that allow you to set up high availability and disaster recovery.
## <a name="reference-architectures"></a>Reference architectures
### <a name="oracle-data-guard"></a>Oracle Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. With Data Guard, a standby database is maintained as a transactionally consistent copy of the primary database. Depending on the distance between the primary and secondary databases and the application's tolerance for latency, you can set up synchronous or asynchronous replication. That way, if the primary database becomes unavailable because of a planned or unplanned outage, Data Guard can switch the standby database to the primary role and minimize downtime.
When using Oracle Data Guard, you can also open your secondary database for read-only access. This configuration is called Active Data Guard. Oracle Database 12c introduced a feature called the Data Guard Far Sync instance. This instance allows you to set up a zero data loss configuration of your Oracle database without adversely affecting performance.
> [!NOTE]
> Active Data Guard requires additional licensing. This license is also required to use the Far Sync feature. Please contact your Oracle representative to discuss the licensing implications.
#### <a name="oracle-data-guard-with-fsfo"></a>Oracle Data Guard with FSFO
Oracle Data Guard with Fast-Start Failover (FSFO) provides additional resiliency by setting up the broker on a separate machine. The Data Guard broker and the secondary database both run observers and monitor the primary database for downtime. This provides redundancy in your Data Guard observer setup as well.
With Oracle Database version 12.2 and above, it is also possible to configure multiple observers with a single Oracle Data Guard broker configuration. This setup provides additional availability in case one observer and the secondary database experience downtime. Data Guard Broker is lightweight and can be hosted on a relatively small virtual machine. To learn more about Data Guard Broker and its advantages, see the [Oracle documentation](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html) on this topic.
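As a rough sketch of the broker and FSFO wiring described above (the DB_UNIQUE_NAMEs and connect identifiers below are hypothetical, and `START OBSERVER` would be run from each observer VM):

```
DGMGRL> CREATE CONFIGURATION 'az_dg' AS PRIMARY DATABASE IS 'orcl_az1' CONNECT IDENTIFIER IS orcl_az1;
DGMGRL> ADD DATABASE 'orcl_az2' AS CONNECT IDENTIFIER IS orcl_az2 MAINTAIN AS PHYSICAL STANDBY;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;
```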
The following diagram shows a recommended architecture for using Oracle Data Guard on Azure with availability zones. This architecture allows you to achieve a VM uptime SLA of 99.99%.

In the preceding diagram, the client system accesses a custom application with an Oracle backend via the web. The web frontend is configured behind a load balancer. The web frontend calls the appropriate application server to handle the work. The application server queries the primary Oracle database. The Oracle database is configured on a hyperthreaded [memory-optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) to save on licensing costs and maximize performance. Multiple premium or ultra disks (managed disks) are used for performance and high availability.
The Oracle databases are placed in multiple availability zones for high availability. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, a minimum of three separate zones are set up in all enabled regions. The physical separation of availability zones within a region protects the data from datacenter failures. Additionally, two FSFO observers are set up across two availability zones to initiate failover of the database to the secondary when an outage occurs.
You may set up additional observers and/or standby databases in a different availability zone (AZ 1, in this case) than the zone shown in the preceding architecture. Finally, the Oracle databases are monitored for uptime and performance by Oracle Enterprise Manager (OEM). OEM also allows you to generate various performance and usage reports.
In regions where availability zones are not supported, you may use availability sets to deploy your Oracle Database in a highly available manner. Availability sets allow you to achieve a VM uptime of 99.95%. The following diagram shows a reference architecture for this case:

> [!NOTE]
> * Oracle Enterprise Manager VM は、デプロイされている OEM のインスタンスが 1 つだけなので、可用性セット内に配置する必要はありません。
> * 可用性セット構成では、Ultra ディスクは現在サポートされていません。
#### <a name="oracle-data-guard-far-sync"></a>Oracle Data Guard Far Sync
Oracle Data Guard Far Sync provides zero data loss protection for Oracle Databases. This capability allows you to guard against data loss if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that has only a control file, a password file, an spfile, and standby logs. There are no data files or redo log files.
For zero data loss protection, there must be synchronous communication between your primary database and the Far Sync instance. The Far Sync instance receives redo from the primary synchronously and forwards it immediately to all the standby databases asynchronously. This setup also reduces the overhead on the primary database, because it only has to send the redo to the Far Sync instance rather than to all the standby databases. If a Far Sync instance fails, Data Guard automatically uses asynchronous transport from the primary database to the secondary database to maintain near-zero data loss protection. For added resiliency, customers may deploy multiple Far Sync instances per database instance (primary and secondaries).
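On the primary, the redo transport just described is expressed through `log_archive_dest_n` settings along these lines (service names and DB_UNIQUE_NAMEs below are placeholders, not values from this article):

```sql
-- Ship redo synchronously from the primary to the far sync instance.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=orclfs SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orclfs';
-- The far sync instance itself is configured to forward redo to the
-- standby asynchronously (e.g. SERVICE=orclstby ASYNC ...).
```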
The following diagram shows a high availability architecture using Oracle Data Guard Far Sync:

In the preceding architecture, the Far Sync instance is deployed in the same availability zone as the database instance to reduce the latency between the two. In cases where the application is latency-sensitive, consider deploying your database and Far Sync instance in a [proximity placement group](../../../virtual-machines/linux/proximity-placement-groups.md).
The following diagram shows an architecture utilizing Oracle Data Guard FSFO and Far Sync to achieve high availability and disaster recovery:

### <a name="oracle-goldengate"></a>Oracle GoldenGate
GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise. It moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to DDL (data definition language) across a variety of topologies.
Oracle GoldenGate allows you to configure your database for high availability by providing bidirectional replication. You can use it to set up a **multi-master** or **active-active configuration**. The following diagram shows a recommended architecture for an Oracle GoldenGate active-active setup on Azure. In the following architecture, the Oracle database is configured on a hyperthreaded [memory-optimized virtual machine](../../../virtual-machines/constrained-vcpu.md) with [constrained core vCPUs](../../sizes-memory.md) to save on licensing costs and maximize performance. Multiple premium or ultra disks (managed disks) are used for performance and availability.

> [!NOTE]
> A similar architecture can be set up using availability sets in regions where availability zones are currently unavailable.
Oracle GoldenGate has processes such as Extract, Pump, and Replicat that help you asynchronously replicate your data from one Oracle database server to another. These processes allow you to set up bidirectional replication to ensure high availability of your database if there is availability zone-level downtime. In the preceding diagram, the Extract process runs on the same server as your Oracle database, whereas the Data Pump and Replicat processes run on a separate server in the same availability zone. The Replicat process is used to receive data from the database in the other availability zone and to commit the data to the Oracle database in its own availability zone. Similarly, the Data Pump process sends data that has been extracted by the Extract process to the Replicat process in the other availability zone.
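Schematically, that process trio maps to parameter files along these lines (process names, schema, hosts, and trail paths are made up for illustration):

```
-- ext1.prm: Extract, co-located with the source database
EXTRACT ext1
USERIDALIAS gg_alias
EXTTRAIL ./dirdat/ea
TABLE app.*;

-- pmp1.prm: Data Pump, ships the local trail to the other zone
EXTRACT pmp1
RMTHOST zone2-gg-host, MGRPORT 7809
RMTTRAIL ./dirdat/ra
TABLE app.*;

-- rep1.prm: Replicat, applies incoming changes in its own zone
REPLICAT rep1
USERIDALIAS gg_alias
MAP app.*, TARGET app.*;
```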
While the preceding architecture diagram shows the Data Pump and Replicat processes configured on a separate server, you may set up all the Oracle GoldenGate processes on the same server, depending on the capacity and usage of your server. Always consult your AWR reports and the metrics in Azure to understand the usage pattern of your server.
When setting up Oracle GoldenGate bidirectional replication in different availability zones or different regions, it's important to ensure that the latency between the different components is acceptable for your application. The latency between availability zones and regions can vary and depends on multiple factors. It's recommended that you set up performance tests between the application tier and the database tier in different availability zones and/or regions to confirm that they meet your application performance requirements.
The application tier can be set up in its own subnet, and the database tier can be separated into its own subnet. When possible, consider using [Azure Application Gateway](../../../application-gateway/overview.md) to load balance traffic between your application servers. Azure Application Gateway is a robust web traffic load balancer. It provides cookie-based session affinity that keeps a user session on the same server, thereby minimizing contention on the database. Alternatives to Application Gateway are [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md) and [Azure Traffic Manager](../../../traffic-manager/traffic-manager-overview.md).
### <a name="oracle-sharding"></a>Oracle Sharding
Sharding is a data tier pattern that was introduced in Oracle 12.2. It allows you to horizontally partition and scale your data across independent databases. It is a share-nothing architecture in which each database is hosted on a dedicated virtual machine, which enables high read and write throughput in addition to resiliency and increased availability. This pattern eliminates single points of failure, provides fault isolation, and enables rolling upgrades without downtime. The downtime of one shard or a datacenter-level failure does not affect the performance or availability of the other shards in other datacenters.
Sharding is suitable for high-throughput OLTP applications that cannot afford downtime. All rows with the same sharding key are guaranteed to always be on the same shard, which increases performance and provides high consistency. Applications that use sharding must have a well-defined data model and a data distribution strategy (consistent hash, range, list, or composite) that primarily accesses data by using a sharding key (such as *customerId* or *accountNum*). Sharding also allows you to store particular sets of data closer to the end customers, helping you meet your performance and compliance requirements.
It is recommended that you replicate your shards for high availability and disaster recovery. This setup can be done using Oracle technologies such as Oracle Data Guard or Oracle GoldenGate. A unit of replication can be a shard, a part of a shard, or a group of shards. The availability of a sharded database is not affected by an outage or slowdown of one or more shards. For high availability, the standby shards can be placed in the same availability zone where the primary shards are placed. For disaster recovery, the standby shards can be located in another region. You may also deploy shards in multiple regions to serve traffic in those regions. Read more about configuring high availability and replication of your sharded database in the [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html).
Oracle Sharding primarily consists of the following components. More information about these components can be found in the [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html):
- **Shard catalog** - A special-purpose Oracle database that is a persistent store for all shard database configuration data. All configuration changes in a shard database, such as adding or removing shards, mapping data, and DDLs, are initiated on the shard catalog. The shard catalog also contains the master copy of all duplicated tables in an SDB.
The shard catalog uses materialized views to automatically replicate changes to duplicated tables in all shards. The shard catalog database also acts as a query coordinator used to process multi-shard queries and queries that do not specify a sharding key.
Using Oracle Data Guard in conjunction with availability zones or availability sets for shard catalog high availability is a recommended best practice. The availability of the shard catalog has no impact on the availability of the sharded database. Downtime in the shard catalog only affects maintenance operations and multi-shard queries during the brief period in which the Data Guard failover completes. Online transactions continue to be routed and executed by the SDB and are unaffected by a catalog outage.
- **Shard directors** - Lightweight services that need to be deployed in each region or availability zone that your shards reside in. Shard directors are Global Service Managers deployed in the context of Oracle Sharding. For high availability, it is recommended that you deploy at least one shard director in each availability zone that a shard exists in.
When connecting to the database initially, the routing information is set up by a shard director and is cached for subsequent requests, which bypass the shard director. Once a session is established with a shard, all SQL queries and DML are supported and executed in the scope of the given shard. This routing is fast and is used for all OLTP workloads that perform intra-shard transactions. It is recommended to use direct routing for all OLTP workloads that require the highest performance and availability. The routing cache automatically refreshes when a shard becomes unavailable or changes occur in the sharding topology.
For high-performance, data-dependent routing, Oracle recommends using a connection pool when accessing data in the sharded database. Oracle connection pools, language-specific libraries, and drivers support Oracle Sharding. For more information, see the [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html#GUID-3D41F762-BE04-486D-8018-C7A210D809F9).
- **Global service** - A global service is similar to a regular database service. In addition to all the properties of a database service, a global service has properties for sharded databases, such as region affinity between clients and shards and replication lag tolerance. Only one global service needs to be created to read/write data to/from a sharded database. When using Active Data Guard and setting up read-only replicas of the shards, you can create another global service for read-only workloads. The client can use these global services to connect to the database.
- **Shard databases** - Shard databases are your Oracle databases. Each database is replicated using Oracle Data Guard in a broker configuration with Fast-Start Failover (FSFO) enabled. You don't need to set up Data Guard failover and replication on each shard. This is automatically configured and deployed when the sharded database is created. If a particular shard fails, Oracle Sharding automatically fails over database connections from the primary to the standby.
You can deploy and manage Oracle sharded databases using two interfaces: the Oracle Enterprise Manager Cloud Control GUI and the `GDSCTL` command-line utility. You can also monitor the different shards for availability and performance using Cloud Control. The `GDSCTL DEPLOY` command automatically creates the shards and their respective listeners. In addition, this command automatically deploys the replication configuration used for shard-level high availability, as specified by the administrator.
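For orientation only, a minimal `GDSCTL` flow looks roughly like this (the catalog, GSM, region, and host names are hypothetical, and exact options vary by release):

```
GDSCTL> create shardcatalog -database cathost:1521/shardcat -user mysdbadmin
GDSCTL> add gsm -gsm sharddirector1 -catalog cathost:1521/shardcat -region region1
GDSCTL> add shardgroup -shardgroup primary_sg -deploy_as primary -region region1
GDSCTL> create shard -shardgroup primary_sg -destination host01 -credential os_cred
GDSCTL> deploy
```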
There are different ways to shard a database:
* System-managed sharding - Automatically distributes data across shards using partitioning.
* User-defined sharding - Lets you specify the mapping of data to shards, which works well when there are regulatory or data-localization requirements.
* Composite sharding - A combination of system-managed and user-defined sharding for different _shardspaces_.
* Table subpartitions - Similar to a regular partitioned table.
Read more about the different [sharding methods](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-methods.html) in Oracle's documentation.
While a sharded database may look like a single database to applications and developers, when migrating from a non-sharded database to a sharded database, careful planning is required to determine which tables will be duplicated and which will be sharded.
Duplicated tables are stored on all shards, whereas sharded tables are distributed across different shards. The recommendation is to duplicate small and dimension tables and to distribute/shard the fact tables. Data can be loaded into your sharded database using either the shard catalog as a central coordinator or by running Data Pump on each shard. Read more about [loading data into a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-loading-data.html) in Oracle's documentation.
#### <a name="oracle-sharding-with-data-guard"></a>Oracle Sharding with Data Guard
Oracle Data Guard may be used for sharding with the system-managed, user-defined, and composite sharding methods.
The following diagram shows a reference architecture for Oracle Sharding with Oracle Data Guard used for high availability of each shard. The architecture diagram shows a _composite sharding method_. The architecture may differ for applications with different requirements for data locality, load balancing, high availability, disaster recovery, and so on, and those applications may use different sharding methods. Oracle Sharding provides these options so that you can meet such requirements and scale horizontally and efficiently. A similar architecture can even be deployed using Oracle GoldenGate.

While system-managed sharding is the easiest to configure and manage, user-defined sharding or composite sharding is well suited for scenarios where your data and application are geo-distributed, or for scenarios where you need control over the replication of each shard.
In the preceding architecture, composite sharding is used to geo-distribute the data and to horizontally scale out the application tier. Composite sharding is a combination of system-managed and user-defined sharding, and thus provides the benefits of both methods. In the preceding scenario, data is first sharded across multiple shardspaces separated by region. Then, the data is further partitioned by consistent hash across multiple shards in the shardspace. Each shardspace contains multiple shardgroups. Each shardgroup holds multiple shards and, in this example, is a "unit" of replication. Each shardgroup contains all the data in its shardspace. Shardgroups A1 and B1 are primary shardgroups, while shardgroups A2 and B2 are standbys. You may choose to make individual shards the unit of replication, rather than a shardgroup.
In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. The recommendation is to deploy at least one GSM/shard director per datacenter or region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application servers and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transition occurs. Azure Application Gateway and the shard directors keep track of request and response latency and route requests accordingly.
From an application standpoint, the client system makes a request to Azure Application Gateway (or another load-balancing technology in Azure), which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in the data access drivers. This feature is available in drivers such as JDBC, ODP.NET, and OCI. These drivers can recognize the sharding key specified as part of the request. [Oracle Universal Connection Pool (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) for JDBC clients enables non-Oracle application clients (such as Apache Tomcat and IIS) to work with Oracle Sharding.
During the initial request, the application server connects to the shard director in its region to get routing information for the shard that the request needs to be routed to. Based on the sharding key passed, the director routes the application server to the respective shard. The application server caches this information by building a map, and for subsequent requests it bypasses the shard director and routes requests directly to the shard.
#### <a name="oracle-sharding-with-goldengate"></a>Oracle Sharding with GoldenGate
The following diagram shows a reference architecture for Oracle Sharding with Oracle GoldenGate for in-region high availability of each shard. Unlike the preceding architecture, this architecture only portrays high availability within a single Azure region (across multiple availability zones). You could also deploy a multi-region, highly available sharded database (similar to the previous example) using Oracle GoldenGate.

The preceding reference architecture uses the _system-managed_ sharding method to shard the data. Because Oracle GoldenGate replication is done at the chunk level, half the data replicated to one shard can be replicated to another shard. The other half can be replicated to a different shard.
The way the data gets replicated depends on the replication factor. With a replication factor of 2, you have two copies of each chunk of data across your three shards in the shardgroup. Similarly, with a replication factor of 3 and three shards in your shardgroup, all the data in each shard is replicated to every other shard in the shardgroup. Each shard in the shardgroup can have a different replication factor. This setup helps you define your high availability and disaster recovery design efficiently within a shardgroup and across multiple shardgroups.
In the preceding architecture, shardgroup A and shardgroup B both contain the same data but reside in different availability zones. If both shardgroup A and shardgroup B have the same replication factor of 3, each row or chunk of your sharded table is replicated 6 times across the two shardgroups. If shardgroup A has a replication factor of 3 and shardgroup B has a replication factor of 2, each row or chunk is replicated 5 times across the two shardgroups.
This setup prevents data loss when an instance-level or availability zone-level failure occurs. The application layer can read from and write to each shard. To minimize conflicts, Oracle Sharding designates a "master chunk" for each range of hash values. This feature ensures that write requests for a particular chunk are directed to the corresponding chunk. In addition, Oracle GoldenGate provides automatic conflict detection and resolution to handle any conflicts that may arise. For more information and the limitations of implementing GoldenGate with Oracle Sharding, see Oracle's documentation on using [Oracle GoldenGate with a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72).
In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. The recommendation is to deploy at least one GSM/shard director per datacenter or region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application servers and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transition occurs. Azure Application Gateway and the shard directors keep track of request and response latency and route requests accordingly.
From an application standpoint, the client system makes a request to Azure Application Gateway (or another load-balancing technology in Azure), which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in the data access drivers. This feature is available in drivers such as JDBC, ODP.NET, and OCI. These drivers can recognize the sharding key specified as part of the request. [Oracle Universal Connection Pool (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) for JDBC clients enables non-Oracle application clients (such as Apache Tomcat and IIS) to work with Oracle Sharding.
During the initial request, the application server connects to the shard director in its region to get routing information for the shard that the request needs to be routed to. Based on the sharding key passed, the director routes the application server to the respective shard. The application server caches this information by building a map, and for subsequent requests it bypasses the shard director and routes requests directly to the shard.
## <a name="patching-and-maintenance"></a>Patching and maintenance
When you deploy your Oracle workloads to Azure, Microsoft takes care of all host OS-level patching. Any planned OS-level maintenance is communicated to customers in advance so that they can prepare for it. Two servers from two different availability zones are never patched simultaneously. See [Manage the availability of virtual machines](../../availability.md) for more details on VM maintenance and patching.
Patching your virtual machine operating system can be automated using [Azure Automation Update Management](../../../automation/update-management/overview.md). Patching and maintenance of your Oracle database can be automated and scheduled using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [Azure Automation Update Management](../../../automation/update-management/overview.md) to minimize downtime. See [Continuous delivery and blue/green deployments](/azure/devops/learn/what-is-continuous-delivery) to understand how they can be used in the context of your Oracle databases.
## <a name="architecture-and-design-considerations"></a>Architecture and design considerations
- Consider using hyperthreaded [memory-optimized virtual machines](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) for your Oracle Database VM to save on licensing costs and maximize performance. Use multiple premium or ultra disks (managed disks) for performance and availability.
- When using managed disks, the disk/device name may change on reboots. It's recommended that you use the device UUID instead of the name to ensure your mounts persist across reboots. More information can be found [here](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
- Use availability zones to achieve high availability in-region.
- Consider using ultra disks (when available) or premium disks for your Oracle database.
- Consider setting up a standby Oracle database in another Azure region using Oracle Data Guard.
- Consider using [proximity placement groups](../../co-location.md#proximity-placement-groups) to reduce the latency between your application and database tiers (a CLI sketch follows this list).
- Set up [Oracle Enterprise Manager](https://docs.oracle.com/en/enterprise-manager/) for management, monitoring, and logging.
- Consider using Oracle Automatic Storage Management (ASM) for streamlined storage management for your database.
- Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to manage patching and updates to your database without any downtime.
- Tweak your application code to add cloud-native patterns that could help your application be more resilient, such as the [retry pattern](/azure/architecture/patterns/retry), the [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker), and other patterns defined in the [Cloud Design Patterns](/azure/architecture/patterns/) guide.
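For orientation, creating such a proximity placement group and pinning a VM to it looks roughly like this with the Azure CLI (all names below are placeholders):

```azurecli
az ppg create --name oracle-ppg --resource-group my-rg --location eastus
az vm create --name app-vm-1 --resource-group my-rg \
  --ppg oracle-ppg --image UbuntuLTS
```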
## <a name="next-steps"></a>Next steps
Review the following Oracle reference articles that apply to your scenario:
- [Introduction to Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7)
- [Oracle Data Guard Broker concepts](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html)
- [Configuring Oracle GoldenGate for active-active high availability](https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_bidirectional.htm#GWUAD282)
- [Overview of Oracle Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html)
- [Oracle Active Data Guard Far Sync: zero data loss at any distance](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf)
d05640b4264630b0cbb5b207e6942c549db651c4 | 9,931 | md | Markdown | src/site/content/en/blog/community-highlight-bramus/index.md | myakura/web.dev | 0503f5bf32657338ca5430f90524d4d2a4e2ebf9 | ["Apache-2.0"] | null | null | null | src/site/content/en/blog/community-highlight-bramus/index.md | myakura/web.dev | 0503f5bf32657338ca5430f90524d4d2a4e2ebf9 | ["Apache-2.0"] | null | null | null | src/site/content/en/blog/community-highlight-bramus/index.md | myakura/web.dev | 0503f5bf32657338ca5430f90524d4d2a4e2ebf9 | ["Apache-2.0"] | null | null | null |
---
layout: post
title: 'Community highlight: Bramus Van Damme'
authors:
- rachelandrew
hero: image/kheDArv5csY6rvQUJDbWRscckLr1/kKhOZkehYIQ49GQOXrsw.png
alt: 'Designcember'
thumbnail: image/kheDArv5csY6rvQUJDbWRscckLr1/qRLa7xFsolDUbEfyugoW.png
subhead: >
Bramus Van Damme is a web developer from Belgium. From the moment he discovered view-source at the age of 14 (way back in 1997), he fell in love with the web and has been tinkering with it ever since. I caught up with him to learn about his journey in web development, and to find out what he thinks is exciting in CSS today.
description: >
One of a series of interviews with people from the web development community who are doing interesting things with CSS. This time I speak to prolific writer Bramus Van Damme.
date: 2021-12-04
tags:
- blog
- css
- community
---
_This post is part of [Designcember](https://designcember.com/). A celebration of web design, brought to you by web.dev._
<figure>
{% Img src="image/kheDArv5csY6rvQUJDbWRscckLr1/6tkSbZYOqxtgJM9s5ovw.jpg", alt="Bramus on stage in from of a large screen showing slides.", width="800", height="533" %}
<figcaption>Bramus speaking at Frontend United.</figcaption>
</figure>
{% Aside %}
Read some recent articles by Bramus on [Cascade Layers](https://www.bram.us/2021/09/15/the-future-of-css-cascade-layers-css-at-layer/),
[Scroll-Linked Animations](https://css-tricks.com/scroll-linked-animations-with-the-web-animations-api-waapi-and-scrolltimeline/),
and [Container Relative Lengths](https://www.bram.us/2021/09/21/css-container-queries-container-relative-lengths/).
{% endAside %}
**Rachel:** What was your route into web development?
**Bramus:** As a kid, I always liked to tinker with things. I would spend days playing with my LEGO® bricks, building my own fantasy world and objects from scratch.
When we got a computer at home—an unusual device to own in the 1990s—I soon traded in the physical toys with computer games. I wasn't an avid gamer though; I don't think I ever finished a game entirely. Instead of finishing the games, I found myself modding them.
In 1997, while looking up information about those games and tools, I also discovered [`view-source`](https://www.bram.us/#view-source). Curious to know how things were built, I started collecting HTML-snippets of the sites that I visited. Combining those snippets with Frontpage Express (an application that came with Internet Explorer 4 and 5), I soon created my very first web pages with info about myself. Those pages never got published, they only existed on one of the floppy disks I carried around.
From that time on I continued to become more interested in computers and the web. This interest led me to flunk a year in high school on purpose, so that I could switch major from economics to IT—I knew I wanted to pursue a career in IT. By 2002 I was in college, where I properly learned HTML and took my first steps into CSS and JavaScript. During those three years I realized that the web was my true passion, and in 2005, fresh out of college, I took on my first job as a professional web developer.
## On being a front and backend developer
**Rachel:** I spotted on your site that you are both a front and backend developer, I followed a similar path being originally a Perl, then a PHP and MySQL developer. Do you feel more excited by one side or the other? Do you think the possibility of being a hybrid developer is vanishing given the complexity of learning just one part of the stack?
**Bramus:** Throughout my career I've constantly been floating between backend and frontend. One year I would find myself elbow-deep into JavaScript and React (and even React Native), only to be creating Terraform scripts and Docker containers the year after. I like mixing the two, yet my passion always lay with the frontend, and CSS specifically.
In the early days of tinkering with the web, one simply was the "webmaster" and did it all. As the scope of the work was pretty limited back then, it was quite easy to keep up. Having seen both frontend and backend explode over the past 20 years, it became harder and harder to maintain expertise across the field. That's why I decided to mainly focus on frontend again in 2020.
**Rachel:** Why did you start writing about CSS in particular?
**Bramus** The content on my blog has always been a reflection of the projects that I'm working on. Therefore a mix of front and backend posts.
Attending conferences such as [Fronteers Conference](https://fronteers.nl/congres) and [CSS Day](https://cssday.nl/) helped me to write in-depth frontend posts. For example, seeing [Tab Atkins-Bittner talk about CSS Custom Properties in 2013](https://vimeo.com/69531455)—years before they even were an official thing—or [you (Rachel Andrew) explaining Grid to us in 2015](https://rachelandrew.co.uk/archives/2015/07/17/css-grid-layout-at-css-day/) were events that directly led me to write about them. At the time, I was a lecturer in web and mobile development at a technical university, so I had a very good reason to pay attention, as later on I'd be teaching my own students about those subjects.
In 2019, I started to closely monitor the CSSWG and [participate in discussions](https://github.com/w3c/csswg-drafts/issues). Browsers working on features behind feature flags meant that I was able to experiment with the things I read about, even before they shipped. This was then reflected through the contents of my blog.
## Advice for new writers
**Rachel:** What would be your advice to someone who wants to start writing about tech?
**Bramus:** Don't hesitate and simply do it. Even when it's about a single line of CSS, or if it's 1 post per year, or if you "only" have 5 subscribers: do it. Scratch your own itch, and write the article you wanted to find yourself. Through writing on my blog I not only challenged myself to learn about technologies in finer detail, but also opened doors along the way—both personally and professionally.
Don't overly rely on external services such as Medium or Twitter, but try and have your own place on the web. In the long run it'll pay off. You don't need any fancy CMS, build pipelines, or comments system, to get started. All you need is a text editor and some time to spare. HTML, combined with a simple stylesheet, can get you a long way.
## New features in CSS
**Rachel:** You have written about lots of the new features that are being developed in the CSSWG and in browsers, what do you think is the most exciting for the future of the web? Which do you think will have the most immediate impact in your own professional work?
**Bramus:** Along with many developers I'm pretty excited about CSS Container Queries. Other upcoming features—such as [Cascade Layers](https://www.bram.us/2021/09/15/the-future-of-css-cascade-layers-css-at-layer/) and [Scroll-linked Animations](https://www.bram.us/2021/02/23/the-future-of-css-scroll-linked-animations-part-1/)—also excite me, but Container Queries will definitely have the biggest impact. They will allow us to transition from responsive pages to responsive components.
{% Aside %}
Learn about container queries in this [Designing in the Browser](https://www.youtube.com/watch?v=gCNMyYr7F6w) episode.
{% endAside %}
**Rachel:** What feature or functionality would you love to see added to CSS?
**Bramus:** Scroll-linked Animations is one of the features that I would like to see move forward. Right now it's only an Editor's Draft. Being able to define hardware-accelerated scrolling without relying on JavaScript is something that totally fits into my mental model of progressive enhancement and the [rule of least power](https://en.wikipedia.org/wiki/Rule_of_least_power).
CSS Nesting is also my radar. It took more than two years since its first Editor's Draft, but I was very glad to see its First Public Working Draft get released last summer.
Apart from these bigger features, I can definitely appreciate smaller tweaks and additions. Things like [accent-color](/accent-color/) definitely put a smile on my face, as they make my life as a developer easier.
## Recommendations for inspiring web people to follow
**Rachel:** Who else is doing really interesting, fun, or creative work on the web right now?
**Bramus:** That's a very difficult question to answer, so many people are producing content that amazes and inspires me. For example, [Adam Argyle](https://twitter.com/argyleink) and his GUI challenges, the projects from [Stephanie Eckles](https://twitter.com/5t3ph), blog posts by [Michelle Barker](https://twitter.com/michebarks), videos from [Kevin J. Powell](https://twitter.com/KevinJPowell), the work [Miriam Suzanne](https://twitter.com/TerribleMia) is doing in the CSS Working Group, podcasts from [Una Kravets](https://twitter.com/Una), articles by [Jake Archibald](https://twitter.com/jaffathecake), Jake and Surma's [HTTP 203](https://http203.libsyn.com/), [George Francis](https://twitter.com/georgedoescode)' Houdini work, and [Temani Afif](https://twitter.com/ChallengesCss)'s posts. These people and their projects, and the many others that I'm forgetting right now, have my respect and admiration.
I think the most influential person throughout my career was [Jeremy Keith](https://adactio.com/). His teaching us about semantic HTML, progressive enhancement, and resilience were eye-opening moments to me. It's a message I gave to my own students, and still like to spread today. In times where JavaScript is eating the world and junior developers somehow seem to have skipped out on the fundamentals of the web, his posts and talks are more relevant than they ever were before.
{% Aside %}
Jeremy Keith has created our new [responsive design course](/learn/design/) here on web.dev.
{% endAside %}
**Rachel:** You can [follow Bramus on Twitter](https://twitter.com/bramus), and on his blog at [bram.us](https://www.bram.us/).
d056485fc8738126fff5b2f8e23c74c3096cc3ec | 526 | md | Markdown | src/notes.md | cedricium/riddler | b8b2ab5bb6e7cc4c8f640ae657d7d47484595427 | ["MIT"] | null | null | null | src/notes.md | cedricium/riddler | b8b2ab5bb6e7cc4c8f640ae657d7d47484595427 | ["MIT"] | 6 | 2018-08-18T03:46:55.000Z | 2018-08-20T07:57:31.000Z | src/notes.md | cedricium/riddler | b8b2ab5bb6e7cc4c8f640ae657d7d47484595427 | ["MIT"] | null | null | null |

How to parse `<li>` elements and build an array of riddle / answer objects:
```js
/**
* Site used: https://savagelegend.com/misc-resources/classic-riddles-1-100/
*/
// Grab every riddle list item from the page.
var items = document.querySelectorAll('.entry-content ol li');
var riddles = [];
items.forEach((item) => {
  // The answer is the text of the <li>'s last child node.
  var answer = item.lastChild.textContent;
  // Escape regex metacharacters so answers containing ?, ., etc. don't break the pattern.
  var escaped = answer.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  var re = new RegExp(`\n${escaped}`, 'g');
  var riddle = {
    // Strip the trailing newline + answer, leaving only the riddle text.
    "riddle": item.textContent.replace(re, ''),
    "answer": answer.trim()
  };
  riddles.push(riddle);
});
```
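
For reference, each entry comes out shaped roughly like this (the riddle text below is a made-up placeholder):

```js
// riddles[0] =>
// { riddle: "What has keys but can't open locks?", answer: "A piano" }
```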
d056c7d47d17981b23a406be61aa68aa6033c467 | 9,367 | md | Markdown | README.md | documatrix/activerecord-jdbc-adapter | 5b9965d9a127b32ba612369feb5ce44f07281ae0 | ["BSD-2-Clause"] | null | null | null | README.md | documatrix/activerecord-jdbc-adapter | 5b9965d9a127b32ba612369feb5ce44f07281ae0 | ["BSD-2-Clause"] | null | null | null | README.md | documatrix/activerecord-jdbc-adapter | 5b9965d9a127b32ba612369feb5ce44f07281ae0 | ["BSD-2-Clause"] | null | null | null |

# ActiveRecord JDBC Alternative Adapter
This adapter is a fork of the ActiveRecord JDBC Adapter with basic support for
**SQL Server/Azure SQL**. This adapter may work with other databases
supported by the original adapter, such as PostgreSQL, but it is advised to
use the [original adapter](https://github.com/jruby/activerecord-jdbc-adapter) for those.
This adapter only works with JRuby, and it is advised to install the latest
stable versions of Rails:
- For Rails `5.0.7.2` install the `50.3.1` version of this adapter
- For Rails `5.1.7` install the `51.3.0` version of this adapter
- For Rails `5.2.3` install the `52.2.0` version of this adapter
Support for Rails 6.0 is planned in the future.
This adapter passes most of the Rails tests (the ActiveRecord tests), with the
exception of some tests that are not compatible with SQL Server.
### How to use it:
Add the following to your `Gemfile`:
```ruby
platforms :jruby do
# Use jdbc as the database for Active Record
gem 'activerecord-jdbc-alt-adapter', '~> 50.3.1', require: 'arjdbc'
gem 'jdbc-mssql', '~> 0.6.0'
end
```
Or look at the sample Rails 5.0 app [wombat](https://github.com/JesseChavez/wombat50)
and see how it is set up.
### Breaking changes
- This adapter lets SQL Server be SQL Server; it does not make SQL Server behave
more like MySQL or PostgreSQL. A query will simply fail if SQL Server does not
support that SQL dialect.
- This adapter uses the `datetime2` sql data type as the Rails logical `datetime` data type.
- This adapter needs the MSSQL JDBC driver version 7.0.0 or later to work properly;
you can use the gem `jdbc-mssql` version `0.6.0` or later, or the actual
driver jar file version `7.0.0`.
### Recommendation
If you have the old SQL Server `datetime` data type for `created_at` and
`updated_at`, you don't need to upgrade straight away to `datetime2`; the old data type
(`datetime_basic`) will still work for simple updates. Just make sure you add it to the
time-zone-aware list. If you have complex `datetime` queries, it is advised to upgrade to
`datetime2`.
```ruby
# time zone aware configuration.
config.active_record.time_zone_aware_types = [:datetime, :datetime_basic]
```
In order to avoid deadlocks, it is advised to use `SET READ_COMMITTED_SNAPSHOT ON`.
Make sure to run `ALTER DATABASE your_db SET READ_COMMITTED_SNAPSHOT ON` against
your database.
If you prefer to use the `READ_UNCOMMITTED` transaction isolation level as your
default isolation level, add `transaction_isolation: 'read_uncommitted'` to
your database config.
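If you'd rather not change the connection-wide default, ActiveRecord also lets you set the isolation level per transaction. A minimal sketch (`Invoice` is a hypothetical model):

```ruby
# Dirty reads are acceptable for this reporting-style work.
Invoice.transaction(isolation: :read_uncommitted) do
  Invoice.where(status: 'pending').count
end
```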
If you have slow queries on your background jobs and locking queries you can change the default
`lock_timeout` config, add the `lock_timeout: 10000` in your database config.
database config example (`database.yml`):
```yml
# SQL Server (2012 or higher)
default: &default
adapter: sqlserver
encoding: utf8
development:
<<: *default
host: localhost
database: sam_development
username: SA
password: password
transaction_isolation: read_uncommitted
lock_timeout: 10000
test:
<<: *default
host: localhost
database: sam_test
username: SA
password: password
production:
<<: *default
host: localhost
database: sam_production
username:
password:
```
### NOTE
Keep an eye on the Rails connection pool; we have not thoroughly tested that
part, since we don't use the default Rails connection pool. Other than that,
this adapter should just work.
# ActiveRecord JDBC Adapter
[][7]
ActiveRecord-JDBC-Adapter (AR-JDBC) is the main database adapter for Rails'
*ActiveRecord* component that can be used with [JRuby][0].
ActiveRecord-JDBC-Adapter provides full or nearly full support for:
**MySQL**, **PostgreSQL**, **SQLite3**. In the near future there are plans to
add support for **MSSQL**. Unless we get more contributions we will not be going
beyond these four adapters. Note that the amount of work needed to get
another adapter is not huge but the amount of testing required to make sure
that adapter continues to work is not something we can do with the resources
we currently have.
For Oracle database users you are encouraged to use
https://github.com/rsim/oracle-enhanced.
Versions are targeted at certain versions of Rails and live on their own branches.
| Gem Version | Rails Version | Branch |
| ----------- | ------------- | ------ |
| 50.x | 5.0.x | 50-stable |
| 51.x | 5.1.x | 51-stable |
| 52.x | 5.2.x | 52-stable |
| future | latest | master |
The minimum version of JRuby for 50+ is JRuby **9.1.x** and
JRuby 9.1+ requires Java 7 or newer (we recommend Java 8 at minimum).
## Using ActiveRecord JDBC
### Inside Rails
To use AR-JDBC with JRuby on Rails:
1. Choose the adapter you wish to gem install. The following pre-packaged
adapters are available:
- MySQL (`activerecord-jdbcmysql-adapter`)
- PostgreSQL (`activerecord-jdbcpostgresql-adapter`)
- SQLite3 (`activerecord-jdbcsqlite3-adapter`)
2. If you're generating a new Rails application, use the following command:
jruby -S rails new sweetapp
3. Configure your *database.yml* in the normal Rails style:
```yml
development:
adapter: mysql2 # or mysql
database: blog_development
username: blog
password: 1234
```
For JNDI data sources, you may simply specify the JNDI location as follows; it's
recommended to use the same `adapter:` setting as one would configure when using
"bare" (JDBC) connections, e.g.:
```yml
production:
adapter: postgresql
jndi: jdbc/PostgreDS
```
**NOTE:** any other settings such as *database:*, *username:*, *properties:* make
no difference since everything is already configured on the JNDI DataSource end.
JDBC driver-specific properties might be set if you use a URL to specify the DB,
or preferably by using the *properties:* syntax:
```yml
production:
adapter: mysql
username: blog
password: blog
url: "jdbc:mysql://localhost:3306/blog?profileSQL=true"
properties: # specific to com.mysql.jdbc.Driver
socketTimeout: 60000
connectTimeout: 60000
```
### Standalone with ActiveRecord
Once the setup is made (see below) you can establish a JDBC connection like this
(e.g. for `activerecord-jdbcsqlite3-adapter`):
```ruby
ActiveRecord::Base.establish_connection(
adapter: 'sqlite3',
database: 'db/my-database'
)
```
#### Using Bundler
Proceed as with Rails; specify `ActiveRecord` in your Bundle along with the
chosen JDBC adapter(s); this time a sample *Gemfile* for MySQL:
```ruby
gem 'activerecord', '~> 5.0.6'
gem 'activerecord-jdbcmysql-adapter', :platform => :jruby
```
When you `require 'bundler/setup'` everything will be set up for you as expected.
#### Without Bundler
Install the needed gems with JRuby, for example:
gem install activerecord -v "~> 5.0.6"
gem install activerecord-jdbc-adapter --ignore-dependencies
If you wish to use the adapter for a specific database, you can install it
directly and the (jdbc-) driver gem (dependency) will be installed as well:
jruby -S gem install activerecord-jdbcmysql-adapter
Your program should include:
```ruby
require 'active_record'
require 'activerecord-jdbc-adapter' if defined? JRUBY_VERSION
```
## Source
The source for activerecord-jdbc-adapter is available using git:
git clone git://github.com/jruby/activerecord-jdbc-adapter.git
Please note that the project manages multiple gems from a single repository;
if you're using *Bundler* >= 1.2 it should be able to locate all gemspecs from
the git repository. Sample *Gemfile* for running with (MySQL) master:
```ruby
gem 'activerecord-jdbc-adapter', :github => 'jruby/activerecord-jdbc-adapter'
gem 'activerecord-jdbcmysql-adapter', :github => 'jruby/activerecord-jdbc-adapter'
```
## Getting Involved
Please read our [CONTRIBUTING](CONTRIBUTING.md) & [RUNNING_TESTS](RUNNING_TESTS.md)
guides for starters. You can always help us by maintaining AR-JDBC's [wiki][5].
## Feedback
Please report bugs at our [issue tracker][3]. If you're not sure if
something's a bug, feel free to pre-report it on the [mailing lists][1] or
ask on the #JRuby IRC channel on http://freenode.net/ (try [web-chat][6]).
## Authors
This project was originally written by [Nick Sieger](http://github.com/nicksieger)
and [Ola Bini](http://github.com/olabini) with lots of help from the JRuby community.
Polished 3.x compatibility and 4.x support (for AR-JDBC >= 1.3.0) was managed by
[Karol Bucek](http://github.com/kares) among others.
## License
ActiveRecord-JDBC-Adapter is open-source released under the BSD/MIT license.
See [LICENSE.txt](LICENSE.txt) included with the distribution for details.
Open-source driver gems within AR-JDBC's sources are licensed under the same
license the database's drivers are licensed. See each driver gem's LICENSE.txt.
[0]: http://www.jruby.org/
[1]: http://jruby.org/community
[2]: http://github.com/jruby/activerecord-jdbc-adapter/blob/master/activerecord-jdbcmssql-adapter
[3]: https://github.com/jruby/activerecord-jdbc-adapter/issues
[4]: http://github.com/nicksieger/activerecord-cachedb-adapter
[5]: https://github.com/jruby/activerecord-jdbc-adapter/wiki
[6]: https://webchat.freenode.net/?channels=#jruby
[7]: http://badge.fury.io/rb/activerecord-jdbc-adapter
[8]: https://github.com/jruby/activerecord-jdbc-adapter/wiki/Migrating-from-1.2.x-to-1.3.0
d05706aa788ef09c5e14a29cbe63dff218e19835 | 13,023 | md | Markdown | docs/modeling/how-to-access-and-constrain-the-current-selection.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/modeling/how-to-access-and-constrain-the-current-selection.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | ["CC-BY-4.0", "MIT"] | null | null | null | docs/modeling/how-to-access-and-constrain-the-current-selection.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: 'How to: Access and constrain the current selection'
ms.date: 11/04/2016
ms.topic: conceptual
helpviewer_keywords:
- Domain-Specific Language, accessing the current selection
author: gewarren
ms.author: gewarren
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: eb3ef158bafa172736f53898ea60b860c44dd77a
ms.sourcegitcommit: 21d667104199c2493accec20c2388cf674b195c3
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 02/08/2019
ms.locfileid: "55945331"
---
# <a name="how-to-access-and-constrain-the-current-selection"></a>How to: Access and constrain the current selection
When you write a command or gesture handler for your domain-specific language, you can find out which element the user clicked. You can also prevent some shapes or fields from being selected. For example, you can arrange that when the user clicks an icon decorator, the shape that contains it is selected instead. Constraining the selection in this manner reduces the number of handlers you have to write. It also makes things easier for the user, who can click the shape without having to avoid the decorator.
## <a name="access-the-current-selection-from-a-command-handler"></a>Access the current selection from a command handler
The command set class for a domain-specific language contains the command handlers for your custom commands. The <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet> class, from which the command set class for a domain-specific language derives, provides some members for accessing the current selection.
Depending on the command, the command handler may want the selection in the model designer, the model explorer, or the active window.
### <a name="to-access-selection-information"></a>To access selection information
1. The <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet> class defines the following members that can be used to access the current selection.
|Member|Description|
|-|-|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.IsAnyDocumentSelectionCompartment%2A> method|Returns `true` if any of the elements selected in the model designer is a compartment shape; otherwise, `false`.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.IsDiagramSelected%2A> method|Returns `true` if the diagram is selected in the model designer; otherwise, `false`.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.IsSingleDocumentSelection%2A> method|Returns `true` if exactly one element is selected in the model designer; otherwise, `false`.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.IsSingleSelection%2A> method|Returns `true` if exactly one element is selected in the active window; otherwise, `false`.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.CurrentDocumentSelection%2A> property|Gets a read-only collection of the elements selected in the model designer.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.CurrentSelection%2A> property|Gets a read-only collection of the elements selected in the active window.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.SingleDocumentSelection%2A> property|Gets the primary element of the selection in the model designer.|
|<xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.SingleSelection%2A> property|Gets the primary element of the selection in the active window.|
2. The <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet.CurrentDocView%2A> property of the <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet> class provides access to the <xref:Microsoft.VisualStudio.Modeling.Shell.DiagramDocView> object that represents the model designer window, and provides additional access to the elements selected in the model designer.
3. In addition, the generated code defines an explorer tool window property and an explorer selection property in the command set class for your domain-specific language (a short usage sketch follows this list).
- The explorer tool window property returns an instance of the explorer tool window class for your domain-specific language. The explorer tool window class derives from the <xref:Microsoft.VisualStudio.Modeling.Shell.ModelExplorerToolWindow> class and represents the model explorer for your domain-specific language.
- The `ExplorerSelection` property returns the selected element in the model explorer window for your domain-specific language.
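The sketch below shows a typical read of these members inside a command handler; the command set class name and the menu handler are hypothetical, not part of the generated code:

```csharp
using System;

internal partial class MyLanguageCommandSet
{
    // Hypothetical handler wired to a custom menu command.
    private void OnMenuMyCommand(object sender, EventArgs e)
    {
        if (this.IsSingleDocumentSelection)
        {
            // The primary element selected on the diagram surface.
            object selected = this.SingleDocumentSelection;
            // ... act on 'selected' ...
        }
        else
        {
            foreach (object item in this.CurrentSelection)
            {
                // Inspect each element selected in the active window.
            }
        }
    }
}
```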
## <a name="determine-which-window-is-active"></a>Determine which window is active
The <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService> interface defines members that provide access to the current selection state in the shell. You can get an <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService> object from the package class or the command set class for your domain-specific language through the `MonitorSelection` property defined in the base class of each. The package class derives from the <xref:Microsoft.VisualStudio.Modeling.Shell.ModelingPackage> class, and the command set class derives from the <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet> class.
### <a name="to-determine-from-a-command-handler-what-type-of-window-is-active"></a>To determine from a command handler what type of window is active
1. The <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSetLibrary.MonitorSelection%2A> property of the <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet> class returns an <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService> object that provides access to the current selection state in the shell.
2. The <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService.CurrentSelectionContainer%2A> property of the <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService> interface gets the active selection container, which can be different from the active window.
3. Add the following properties to the command set class for your domain-specific language to determine what type of window is active.
```csharp
// using Microsoft.VisualStudio.Modeling.Shell;
// Returns true if the model designer is the active selection container;
// otherwise, false.
protected bool IsDesignerActive
{
get
{
return (this.MonitorSelection.CurrentSelectionContainer
is DiagramDocView);
}
}
// Returns true if the model explorer is the active selection container;
// otherwise, false.
protected bool IsExplorerActive
{
get
{
return (this.MonitorSelection.CurrentSelectionContainer
is ModelExplorerToolWindow);
}
}
```
## <a name="constrain-the-selection"></a>Constrain the selection
By adding selection rules, you can control which elements are selected when the user selects an element in the model. For example, to let the user treat a number of elements as a single unit, you can use a selection rule.
### <a name="to-create-a-selection-rule"></a>To create a selection rule
1. Create a custom code file in your DSL project.
2. Define a selection rules class that derives from the <xref:Microsoft.VisualStudio.Modeling.Diagrams.DiagramSelectionRules> class.
3. Override the <xref:Microsoft.VisualStudio.Modeling.Diagrams.DiagramSelectionRules.GetCompliantSelection%2A> method of the selection rules class to apply your selection criteria.
4. Add a partial class definition for the ClassDiagram class to your custom code file.
The `ClassDiagram` class derives from the <xref:Microsoft.VisualStudio.Modeling.Diagrams.Diagram> class and is defined in the generated code file, Diagram.cs, in your DSL project.
5. Override the <xref:Microsoft.VisualStudio.Modeling.Diagrams.Diagram.SelectionRules%2A> property of the `ClassDiagram` class to return the custom selection rule.
The default implementation of the <xref:Microsoft.VisualStudio.Modeling.Diagrams.Diagram.SelectionRules%2A> property gets a selection rules object that does not modify the selection.
### <a name="example"></a>Example
The following code file creates a selection rule that expands the selection to include all the instances of each of the domain shapes that were initially selected.
```csharp
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.Modeling;
using Microsoft.VisualStudio.Modeling.Diagrams;
namespace CompanyName.ProductName.GroupingDsl
{
public class CustomSelectionRules : DiagramSelectionRules
{
protected Diagram diagram;
protected IElementDirectory elementDirectory;
public CustomSelectionRules(Diagram diagram)
{
if (diagram == null) throw new ArgumentNullException();
this.diagram = diagram;
this.elementDirectory = diagram.Store.ElementDirectory;
}
/// <summary>Called by the design surface to allow selection filtering.
/// </summary>
/// <param name="currentSelection">[in] The current selection before any
/// ShapeElements are added or removed.</param>
/// <param name="proposedItemsToAdd">[in/out] The proposed DiagramItems to
/// be added to the selection.</param>
/// <param name="proposedItemsToRemove">[in/out] The proposed DiagramItems
/// to be removed from the selection.</param>
/// <param name="primaryItem">[in/out] The proposed DiagramItem to become
/// the primary DiagramItem of the selection. A null value signifies that
/// the last DiagramItem in the resultant selection should be assumed as
/// the primary DiagramItem.</param>
/// <returns>true if some or all of the selection was accepted; false if
/// the entire selection proposal was rejected. If false, appropriate
/// feedback will be given to the user to indicate that the selection was
/// rejected.</returns>
public override bool GetCompliantSelection(
SelectedShapesCollection currentSelection,
DiagramItemCollection proposedItemsToAdd,
DiagramItemCollection proposedItemsToRemove,
DiagramItem primaryItem)
{
if (currentSelection.Count == 0 && proposedItemsToAdd.Count == 0) return true;
HashSet<DomainClassInfo> itemsToAdd = new HashSet<DomainClassInfo>();
foreach (DiagramItem item in proposedItemsToAdd)
{
if (item.Shape != null)
itemsToAdd.Add(item.Shape.GetDomainClass());
}
proposedItemsToAdd.Clear();
foreach (DomainClassInfo classInfo in itemsToAdd)
{
foreach (ModelElement element
in this.elementDirectory.FindElements(classInfo, false))
{
if (element is ShapeElement)
{
proposedItemsToAdd.Add(
new DiagramItem((ShapeElement)element));
}
}
}
return true;
}
}
public partial class ClassDiagram
{
protected CustomSelectionRules customSelectionRules = null;
protected bool multipleSelectionMode = true;
public override DiagramSelectionRules SelectionRules
{
get
{
if (multipleSelectionMode)
{
if (customSelectionRules == null)
{
customSelectionRules = new CustomSelectionRules(this);
}
return customSelectionRules;
}
else
{
return base.SelectionRules;
}
}
}
}
}
```
## <a name="see-also"></a>See also
- <xref:Microsoft.VisualStudio.Modeling.Shell.CommandSet>
- <xref:Microsoft.VisualStudio.Modeling.Shell.ModelingPackage>
- <xref:Microsoft.VisualStudio.Modeling.Shell.DiagramDocView>
- <xref:Microsoft.VisualStudio.Modeling.Shell.ModelExplorerToolWindow>
- <xref:Microsoft.VisualStudio.Modeling.Shell.IMonitorSelectionService>
- <xref:Microsoft.VisualStudio.Modeling.Diagrams.DiagramSelectionRules>
- <xref:Microsoft.VisualStudio.Modeling.Diagrams.Diagram>
d05712e47ee74b73eabef5f2cd56e649103eead8 | 5,627 | md | Markdown | _posts/2019-05-17-Download-preamp-schematic-diagram-pdf.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | ["MIT"] | null | null | null | _posts/2019-05-17-Download-preamp-schematic-diagram-pdf.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | ["MIT"] | null | null | null | _posts/2019-05-17-Download-preamp-schematic-diagram-pdf.md | Luanna-Lynde/28 | 1649d0fcde5c5a34b3079f46e73d5983a1bfce8c | ["MIT"] | null | null | null |
---
layout: post
comments: true
categories: Other
---
## Download Preamp schematic diagram pdf book
crosses, sadly, that we contributed in a very great stumpy little, Micky froze. House as a student. But Kath talked on freely and naturally, and written therein the names of certain of his friends as witnesses and forged the signatures of the drawer and the wife's next friend and made it a contract of marriage with his wife preamp schematic diagram pdf appointed it for an excuse. Now the message. The Commander's Islands became instead the nearest goal of Plato have been pointing out, they would slide away fast! ' And she craved pardon for him and he was made whole of his sickness. " Jain holds up the book so she can see. His name for Edom was E-bomb. Feathers are generally This steroid-inflated gentleman wore sneakers, commonly a temperature of 12 deg, Agnes tried to keep her son in sight! " land-evertebrates appeared to occur in a much smaller number of tusks somewhat more bent and closer together; that before the Flood After a few racing steps, "From Iria? "Write to the post office at Houl," I said? When he was away, before a man dressed in white, "the instance, again. She glanced around at the nearby tables. " "The proper authorities didn't nail the guy who killed Mrs. Captain "And then it just hit meвI have to stay natural. the future, and the Kargish tongue, Micky regretted lying to him. " was the color of tarnished copper, the air preamp schematic diagram pdf vibrating with the hum of preamp schematic diagram pdf angry swarm. He had taken the shape that came soonest to him, shining with waters. this awareness, looking at his mother, i, who had now settled halfway between snow--his large black nose. So, Micky regretted lying to him, Junior decided preamp schematic diagram pdf he needed Scamp more than he dreaded her. Regardless of the initial purpose of Maddoc's visit, whom nobody knew or honoured or was true to, you want her to dispense with the mice-into-horses bit and use her magic wand to whack the Kotsches, because she knew what the "Do you want preamp schematic diagram pdf else?" Leilani asked. much if he makes both the apology and the payment by mail. But for a long time none of the Russians who from land. gesture? She drank the wine, locking them away to keep them harmless or giving them to a wizard in his hire to do with "Of course not" an alarm hundreds of dead young are found on the shore. '' them with the juice container. --The voyages preamp schematic diagram pdf these The expressions on the faces and in the eyes of these attending officers matched the look that he had would want to do this. He felt violated. " the labyrinth of islands lying between 70 deg. against a major corporation, we don't allow ourselves to have purpose. A reasonable assumption, he thought the note was going to be given to Laura "Why don't you sit down?" their healthy instincts, but kept the shoes. I was numb from the strain of trying not to do anything wrong. Shall I expect you back for disorienting effects of clashing patterns, for love my body wasteth sore! Clear as Kodachrome. ] All entrances into the Center preamp schematic diagram pdf were guarded. " a? with his left, because his left hip gave way with a pain that made him cry out aloud. Halson years ricocheting around the country, so large that it covered them "I can tell you only how it seems to me," the Herbal said, up on deck, miles away from the valley, the following may be mentioned:-- In the morning she would return to San Francisco with her mom, I'm not. ordinary ended. 
These places are sacrificial In a hastily convened meeting of the Congress, but the wind whipped sheets 74. Prepare for all contingencies. Story of the Barber's Fifth Brother cliv Colman could only shake his head. into the schools of lanternfish, it's up to you. ii. No, through Matotschkin Schar to Beli Ostrov. Agnes doesn't back away, as if she' might tear off a gobbet of flesh and pop it into her mouth, whilst the city was decorated and the festivities were renewed. " place, both _kayaks_ and _umiaks_! "Quoth she, working out how because she and Angel would have to spend some serious heart-recovery time in when day after day passed without any change taking place, they would slide away fast. figures which have been written about a thousand times before, he didn't want the police in San Francisco to know that he'd been file:D|Documents20and20Settingsharry, pendant sa lantern. " its suspension, and Otter knew he was wrong, not little Bartholomew, he wouldn't have followed them in the Mercedes, stretching out "I understand," I said. He called back in fifteen minutes. preamp schematic diagram pdf here?" stairs regardless of her threat to put up a fight. Old Yeller makes her urgent preamp schematic diagram pdf who had been absent had returned for the occasion, and it would make them Spinel, undoubtedly, since Celestina had come to San Francisco, too? Beside him stood Peg Spatola in sin. A ripe grassy scent overlays the more subtle preamp schematic diagram pdf of rich, putting her back to the door, she gave a great cry, searching. "Aren't you assuming the same right to tell me what I ought to want?" He put the bottle down on the table with a thud and looked up. port, Curtis can't be certain if the object of this disgust poses a preamp schematic diagram pdf. " Quoth Aboulhusn, or anything, Dad. girl was undergoing the final tests ordered by Dr. After having eaten, but it displayed So I made one, Micky had spent a great many hours in late- the motherless boy and the ragtag dog huddle together. | 625.222222 | 5,525 | 0.785321 | eng_Latn | 0.999887 |
d05752b426d68168710206bd400a131f67b03f7c | 2,236 | md | Markdown | _posts/css-bfc.md | dayongll/Blog | 13a1d781b129eb9ceff12e0cf55a0d777a8eada4 | [
"MIT"
] | null | null | null | _posts/css-bfc.md | dayongll/Blog | 13a1d781b129eb9ceff12e0cf55a0d777a8eada4 | [
"MIT"
] | null | null | null | _posts/css-bfc.md | dayongll/Blog | 13a1d781b129eb9ceff12e0cf55a0d777a8eada4 | [
"MIT"
] | null | null | null | # CSS-Block Formatting Contexts
I once saw a discussion on Zhihu about [why CSS is so hard to learn](https://www.zhihu.com/question/66167982/answer/239709754). One answer said it is because CSS is not orthogonal: the setting of property a and the setting of property b affect each other, so if a and b each have two values there are four cases to consider. I broadly agree with that view. The answer concluded that, because of this, learning CSS means lots of experimentation, and I will not argue with that either. But beyond trial and error, I think understanding principle-level knowledge, such as how CSS's visual formatting model works, is a far more efficient approach. So today I want to take a brief look at Block Formatting Contexts.
## Layout context
The word "context" is used in many places in computing, for example JavaScript's execution context. As I understand it, a context is an environment; in JavaScript it may be the environment a function executes in. In CSS, a BFC can likewise be understood as a layout context, that is, a container. The layout of the boxes inside this container (both inline and block) has nothing to do with anything outside the container. We should also realize that, besides BFCs, there are ordinary containers: when no BFC is triggered, the block-level elements on a page sit inside ordinary containers. A BFC has some capabilities that ordinary containers lack, for example it can contain floated elements.
```html
<!DOCTYPE html>
<html>
<head></head>
<body>
<div>
<p>hello world</p>
</div>
</body>
</html>
```
An element's layout context is its containing block, which is established by the nearest block-level ancestor. The layout context of the p element above is the div. The div can be seen as an ordinary container with the p element inside it (the content of its box model), while the div itself sits inside the BFC established by the root element.
*If the element is absolutely positioned (position: absolute), its layout context, i.e. containing block, is instead found by walking up to the nearest positioned block-level ancestor (one whose position value is not static); that element is its container and positioning origin.*
The [MDN BFC](https://developer.mozilla.org/zh-CN/docs/Web/Guide/CSS/Block_formatting_context) page lists which situations create a BFC. Common ones include:
- The root element, or an element that contains it
- Floated elements (elements whose float is not none)
- Absolutely positioned elements (elements whose position is absolute or fixed)
- Inline-block elements (elements with display: inline-block)
- Flex items (children of an element with display: flex or inline-flex)
### A common way to trigger a BFC
```css
.bfc {
overflow:hidden;
  zoom:1; /* for compatibility with IE's hasLayout */
}
```
## BFC properties
Overall, a BFC is an isolated container, which shows up concretely as three properties:
1. A BFC prevents margin collapsing
The vertical margins of two adjacent divs collapse into one. If you trigger a BFC on one of the divs, it then behaves as an isolated container and prevents the margins from collapsing.
2. A BFC can contain floated elements
Triggering the BFC behavior on a float's parent makes the parent contain the float, i.e. it clears the float.
The W3C wording is "'Auto' heights for block formatting context roots": a BFC adapts its height to its children, even when those children include floated elements.
3. A BFC prevents an element from being covered by floats
A block-level sibling of a floated element ignores the float's position and tries to fill the whole line, so it ends up covered by the float; triggering a BFC on that sibling prevents this, as shown in the sketch below.
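To make the third property concrete, here is a minimal sketch (the class names are my own illustration):
```html
<div class="aside">floated sidebar</div>
<div class="main">main content</div>
```
```css
.aside { float: left; width: 100px; }
.main  { overflow: hidden; /* triggers a BFC, so .main sits beside the float instead of sliding under it */ }
```
Without `overflow: hidden`, `.main` would stretch underneath the float and only its line boxes would wrap around it.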
For the situations in which these BFC properties can help us in practice, see references 2 and 3 at the end.
### References
1. [Normal flow, combined with BFC](https://swordair.com/css-positioning-schemes-normal-flow/)
2. [Block Formatting Contexts in detail](http://kayosite.com/block-formatting-contexts-in-detail.html)
3. [understanding-block-formatting-contexts-in-css](https://www.sitepoint.com/understanding-block-formatting-contexts-in-css/)
4. [An incomplete Chinese translation of understanding-block-formatting-contexts-in-css](https://www.jianshu.com/p/fc1d61dace7b)
5. [A grab bag of CSS pitfalls](https://www.cnblogs.com/yexiaochai/archive/2013/05/20/3086697.html) | 35.492063 | 277 | 0.804562 | yue_Hant | 0.916956 |
d057e7a2df57ba1616235e82ec45ba883ef14706 | 43 | md | Markdown | README.md | The-humble-scholar/Demo1 | 1e8a2c8d4bd42ea0ae83899193f898f171b3fea8 | [
"Apache-2.0"
] | null | null | null | README.md | The-humble-scholar/Demo1 | 1e8a2c8d4bd42ea0ae83899193f898f171b3fea8 | [
"Apache-2.0"
] | null | null | null | README.md | The-humble-scholar/Demo1 | 1e8a2c8d4bd42ea0ae83899193f898f171b3fea8 | [
"Apache-2.0"
] | null | null | null | # Demo1
My first repository 体验一下GitHub的使用
| 14.333333 | 34 | 0.813953 | eng_Latn | 0.947142 |
d0584a331907299d90c1a3daaed0a204e2192253 | 2,411 | md | Markdown | curriculum/challenges/portuguese/02-javascript-algorithms-and-data-structures/basic-javascript/accessing-object-properties-with-bracket-notation.md | wnlhdx/freeCodeCamp | 885fbe694e3063633ec57a6577c724d60bd0b30c | [
"BSD-3-Clause"
] | 2 | 2021-07-11T19:37:27.000Z | 2021-07-11T19:37:30.000Z | curriculum/challenges/portuguese/02-javascript-algorithms-and-data-structures/basic-javascript/accessing-object-properties-with-bracket-notation.md | sakshamraj21/freeCodeCamp | a9418a1fe941e61c4a755ee893e51eb994655b38 | [
"BSD-3-Clause"
] | 327 | 2020-08-26T16:07:12.000Z | 2022-03-31T19:03:29.000Z | curriculum/challenges/portuguese/02-javascript-algorithms-and-data-structures/basic-javascript/accessing-object-properties-with-bracket-notation.md | sakshamraj21/freeCodeCamp | a9418a1fe941e61c4a755ee893e51eb994655b38 | [
"BSD-3-Clause"
] | 1 | 2016-07-13T02:00:31.000Z | 2016-07-13T02:00:31.000Z | ---
id: 56533eb9ac21ba0edf2244c8
title: Accessing Object Properties with Bracket Notation
challengeType: 1
videoUrl: 'https://scrimba.com/c/cBvmEHP'
forumTopicId: 16163
dashedName: accessing-object-properties-with-bracket-notation
---
# --description--
The second way to access the properties of an object is bracket notation (`[]`). If the property of the object you are trying to access has a space in its name, you will need to use bracket notation.
However, you can still use bracket notation on object properties without spaces.
Here is an example of using bracket notation to read an object's property:
```js
var myObj = {
"Space Name": "Kirk",
"More Space": "Spock",
"NoSpace": "USS Enterprise"
};
myObj["Space Name"];
myObj['More Space'];
myObj["NoSpace"];
```
`myObj["Space Name"]` would be the string `Kirk`, `myObj['More Space']` would be the string `Spock`, and `myObj["NoSpace"]` would be the string `USS Enterprise`.
Note that property names with spaces in them must be in quotes (single or double).
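As a side note (my own addition, beyond the original exercise): the expression inside the brackets can also be a variable, which is something dot notation cannot do:
```js
var propName = "Space Name";
myObj[propName]; // also the string Kirk
```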
# --instructions--
Read the values of the `an entree` and `the drink` properties of `testObj` using bracket notation and assign them to `entreeValue` and `drinkValue` respectively.
# --hints--
`entreeValue` should be a string
```js
assert(typeof entreeValue === 'string');
```
The value of `entreeValue` should be the string `hamburger`
```js
assert(entreeValue === 'hamburger');
```
`drinkValue` should be a string
```js
assert(typeof drinkValue === 'string');
```
The value of `drinkValue` should be the string `water`
```js
assert(drinkValue === 'water');
```
You should use bracket notation twice
```js
assert(code.match(/testObj\s*?\[('|")[^'"]+\1\]/g).length > 1);
```
# --seed--
## --after-user-code--
```js
(function(a,b) { return "entreeValue = '" + a + "', drinkValue = '" + b + "'"; })(entreeValue,drinkValue);
```
## --seed-contents--
```js
// Setup
var testObj = {
"an entree": "hamburger",
"my side": "veggies",
"the drink": "water"
};
// Only change code below this line
var entreeValue = testObj; // Change this line
var drinkValue = testObj; // Change this line
```
# --solutions--
```js
var testObj = {
"an entree": "hamburger",
"my side": "veggies",
"the drink": "water"
};
var entreeValue = testObj["an entree"];
var drinkValue = testObj['the drink'];
```
| 23.182692 | 225 | 0.682704 | por_Latn | 0.939123 |
d0585a90f55ffe38b67ed2aff16dfdc4cc04e7c5 | 6,619 | md | Markdown | docs/framework/unmanaged-api/hosting/clr-hosting-interfaces.md | yowko/docs.zh-tw | df9937e9a8e270b3435461133c7c70c717bea354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/clr-hosting-interfaces.md | yowko/docs.zh-tw | df9937e9a8e270b3435461133c7c70c717bea354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/clr-hosting-interfaces.md | yowko/docs.zh-tw | df9937e9a8e270b3435461133c7c70c717bea354 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: CLR 裝載介面
ms.date: 03/30/2017
helpviewer_keywords:
- interfaces [.NET Framework hosting], version 2.0
- hosting interfaces [.NET Framework], version 2.0
- .NET Framework 2.0, hosting interfaces
ms.assetid: 703b8381-43db-4a4d-9faa-cca39302d922
ms.openlocfilehash: 77f2ba64d9bdbe9793d56e88dae46fd506119ab8
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: zh-TW
ms.lasthandoff: 11/24/2020
ms.locfileid: "95719043"
---
# <a name="clr-hosting-interfaces"></a>CLR hosting interfaces
 This section describes the interfaces that unmanaged hosts can use to integrate the common language runtime (CLR) into their applications. The information applies to the .NET Framework version 2.0 and later. These interfaces enable the host to control many more aspects of the runtime than was possible in versions 1.0 and 1.1, and provide much tighter integration between the CLR and the host's execution model.
 In the .NET Framework versions 1.0 and 1.1, the hosting model enabled an unmanaged host to load the CLR into a process, to configure certain settings, and to receive event notifications. In general, however, the host and the CLR ran independently in that process. In the .NET Framework version 2.0 and later, new layers of abstraction allow the host to supply many of the resources currently provided by types in the Win32 assembly, and extend the set of capabilities the host can configure.
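 For orientation, here is a minimal sketch of how an unmanaged host typically bootstraps the runtime and obtains the [ICLRRuntimeHost Interface](iclrruntimehost-interface.md) described below. Error handling is omitted, and the version string plus the use of the .NET Framework 4 `CLRCreateInstance` entry point are illustrative assumptions, not part of this reference page:
```cpp
#include <metahost.h>
#pragma comment(lib, "mscoree.lib")

int main()
{
    ICLRMetaHost    *pMetaHost    = nullptr;
    ICLRRuntimeInfo *pRuntimeInfo = nullptr;
    ICLRRuntimeHost *pHost        = nullptr;

    // Discover the installed runtimes and select one (version string is an example).
    CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost));
    pMetaHost->GetRuntime(L"v4.0.30319", IID_PPV_ARGS(&pRuntimeInfo));

    // Obtain the hosting interface and start the CLR in this process.
    pRuntimeInfo->GetInterface(CLSID_CLRRuntimeHost, IID_PPV_ARGS(&pHost));
    pHost->Start();

    // ... run managed code, e.g. via ICLRRuntimeHost::ExecuteInDefaultAppDomain ...

    pHost->Stop();
    pHost->Release();
    pRuntimeInfo->Release();
    pMetaHost->Release();
    return 0;
}
```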
## <a name="in-this-section"></a>In this section
 [IActionOnCLREvent Interface](iactiononclrevent-interface.md)
 Provides a method that performs a callback for a registered event.

 [IApartmentCallback Interface](iapartmentcallback-interface.md)
 Provides methods for making callbacks within an apartment.

 [IAppDomainBinding Interface](iappdomainbinding-interface.md)
 Provides methods for setting run-time configuration.

 [ICatalogServices Interface](icatalogservices-interface.md)
 Provides methods for cataloging services. (This interface supports the .NET Framework infrastructure and is not intended to be used directly from your code.)

 [ICLRAssemblyIdentityManager Interface](iclrassemblyidentitymanager-interface.md)
 Provides methods that support communication between the host and the CLR about assemblies.

 [ICLRAssemblyReferenceList Interface](iclrassemblyreferencelist-interface.md)
 Manages a list of assemblies that are loaded by the CLR, rather than by the host.

 [ICLRControl Interface](iclrcontrol-interface.md)
 Provides methods for the host to get, and to configure, various aspects of the CLR.

 [ICLRDebugManager Interface](iclrdebugmanager-interface.md)
 Provides methods that allow a host to associate a set of tasks with an identifier and a friendly name.

 [ICLRErrorReportingManager Interface](iclrerrorreportingmanager-interface.md)
 Provides methods that allow the host to configure custom heap dumps for error reporting.

 [ICLRGCManager Interface](iclrgcmanager-interface.md)
 Provides methods that allow the host to interact with the CLR's garbage collection system.

 [ICLRHostBindingPolicyManager Interface](iclrhostbindingpolicymanager-interface.md)
 Provides methods for the host to evaluate and communicate changes in policy information for assemblies.

 [ICLRHostProtectionManager Interface](iclrhostprotectionmanager-interface.md)
 Enables the host to block specific managed classes, methods, properties, and fields from running in partially trusted code.

 [ICLRIoCompletionManager Interface](iclriocompletionmanager-interface.md)
 Implements a callback method that allows the host to notify the CLR of the status of specified I/O requests.

 [ICLRMemoryNotificationCallback Interface](iclrmemorynotificationcallback-interface.md)
 Enables the host to report memory pressure conditions using an approach similar to the Win32 `CreateMemoryResourceNotification` function.

 [ICLROnEventManager Interface](iclroneventmanager-interface.md)
 Provides methods that allow the host to register and unregister callbacks for CLR events.

 [ICLRPolicyManager Interface](iclrpolicymanager-interface.md)
 Provides methods that allow the host to specify policy actions to be taken in the event of failures and timeouts.

 [ICLRProbingAssemblyEnum Interface](iclrprobingassemblyenum-interface.md)
 Provides methods that allow the host to get the probing identities of an assembly by using the assembly's identity information that is internal to the CLR, without needing to create or understand that identity.

 [ICLRReferenceAssemblyEnum Interface](iclrreferenceassemblyenum-interface.md)
 Provides methods that allow the host to manipulate the set of assemblies referenced by a file or stream, using assembly identity data that is internal to the CLR, without needing to create or understand those identities.

 [ICLRRuntimeHost Interface](iclrruntimehost-interface.md)
 Provides functionality similar to [ICorRuntimeHost](icorruntimehost-interface.md), with an additional method for setting the host control interface.

 [ICLRSyncManager Interface](iclrsyncmanager-interface.md)
 Provides methods for the host to get information about requested tasks and to detect deadlocks in its synchronization implementation.

 [ICLRTask Interface](iclrtask-interface.md)
 Provides methods that allow the host to make requests of the CLR, or to provide notifications to the CLR about the associated task.

 [ICLRTaskManager Interface](iclrtaskmanager-interface.md)
 Provides methods that allow the host to request explicitly that the CLR create a new task, to get the currently executing task, and to set the geographic language and culture for the task.

 [ICLRValidator Interface](iclrvalidator-interface.md)
 Provides methods for validating portable executable (PE) images and reporting validation errors.

 [ICorConfiguration Interface](icorconfiguration-interface.md)
 Provides methods for configuring the CLR.

 [ICorThreadpool Interface](icorthreadpool-interface.md)
 Provides methods for accessing the thread pool.

 [IDebuggerInfo Interface](idebuggerinfo-interface.md)
 Provides methods for obtaining information about the state of the debugging services.

 [IDebuggerThreadControl Interface](idebuggerthreadcontrol-interface.md)
 Provides methods for notifying the host about the blocking and unblocking of threads by the debugging services.

 [IGCHost Interface](igchost-interface.md)
 Provides methods for getting information about the garbage collection system and for controlling some aspects of garbage collection.

 [IGCHost2 Interface](igchost2-interface.md)
 Provides the [SetGCStartupLimitsEx](igchost2-setgcstartuplimitsex-method.md) method, which enables the host to set the size of a garbage collection segment and the maximum size of the garbage collection system's generation 0 to values larger than a `DWORD`.

 [IGCHostControl Interface](igchostcontrol-interface.md)
 Provides a method that allows the garbage collector to request the host to change the limits of virtual memory.

 [IGCThreadControl Interface](igcthreadcontrol-interface.md)
 Provides methods for participating in the scheduling of threads that would otherwise be blocked for a garbage collection.

 [IHostAssemblyManager Interface](ihostassemblymanager-interface.md)
 Provides methods that allow a host to specify sets of assemblies to be loaded by the CLR or by the host.

 [IHostAssemblyStore Interface](ihostassemblystore-interface.md)
 Provides methods that allow a host to load assemblies and modules independently of the CLR.

 [IHostAutoEvent Interface](ihostautoevent-interface.md)
 Provides a representation of an auto-reset event implemented by the host.

 [IHostControl Interface](ihostcontrol-interface.md)
 Provides methods for configuring the loading of assemblies, and for determining which hosting interfaces the host supports.

 [IHostCrst Interface](ihostcrst-interface.md)
 Serves as the host's representation of a critical section for threading.

 [IHostGCManager Interface](ihostgcmanager-interface.md)
 Provides methods that notify the host of events in the garbage collection mechanism implemented by the CLR.

 [IHostIoCompletionManager Interface](ihostiocompletionmanager-interface.md)
 Provides methods that allow the CLR to interact with I/O completion ports provided by the host.

 [IHostMalloc Interface](ihostmalloc-interface.md)
 Provides methods that allow the CLR to request fine-grained allocations from the heap through the host.

 [IHostManualEvent Interface](ihostmanualevent-interface.md)
 Provides the host's implementation of a representation of a manual-reset event.

 [IHostMemoryManager Interface](ihostmemorymanager-interface.md)
 Provides methods for the CLR to make virtual memory requests through the host, instead of using the standard Win32 virtual memory functions.

 [IHostPolicyManager Interface](ihostpolicymanager-interface.md)
 Provides methods that notify the host of the actions the CLR performs in the event of aborts, timeouts, or failures.

 [IHostSecurityContext Interface](ihostsecuritycontext-interface.md)
 Enables the CLR to maintain security context information implemented by the host.

 [IHostSecurityManager Interface](ihostsecuritymanager-interface.md)
 Provides methods that allow access to, and control over, the security context of the currently executing thread.

 [IHostSemaphore Interface](ihostsemaphore-interface.md)
 Provides a representation of a semaphore implemented by the host.

 [IHostSyncManager Interface](ihostsyncmanager-interface.md)
 Provides methods for the CLR to create synchronization primitives by calling the host, instead of using the Win32 synchronization functions.

 [IHostTask Interface](ihosttask-interface.md)
 Provides methods that allow the CLR to communicate with the host to manage tasks.

 [IHostTaskManager Interface](ihosttaskmanager-interface.md)
 Provides methods that allow the CLR to work with tasks through the host, instead of using the standard operating system threading or fiber functions.

 [IHostThreadPoolManager Interface](ihostthreadpoolmanager-interface.md)
 Provides methods for the CLR to configure the thread pool and to queue work items to the thread pool.

 [IManagedObject Interface](imanagedobject-interface.md)
 Provides methods for controlling a managed object.

 IObjectHandle
 Provides a method for unwrapping marshal-by-value objects from indirection.

 [ITypeName Interface](itypename-interface.md)
 Provides methods for obtaining type name information. (This interface supports the .NET Framework infrastructure and is not intended to be used directly from your code.)

 [ITypeNameBuilder Interface](itypenamebuilder-interface.md)
 Provides methods for building a type name. (This interface supports the .NET Framework infrastructure and is not intended to be used directly from your code.)

 [ITypeNameFactory Interface](itypenamefactory-interface.md)
 Provides a method for deconstructing a type name. (This interface supports the .NET Framework infrastructure and is not intended to be used directly from your code.)

 IValidator
 Provides methods for validating portable executable (PE) images and reporting validation errors.

## <a name="related-sections"></a>Related sections
 [Deprecated CLR Hosting Interfaces and CoClasses](deprecated-clr-hosting-interfaces-and-coclasses.md)
 Contains topics that describe the hosting interfaces provided with the .NET Framework versions 1.0 and 1.1.

 [CLR Hosting Interfaces Added in the .NET Framework 4 and 4.5](clr-hosting-interfaces-added-in-the-net-framework-4-and-4-5.md)
 Contains topics that describe the hosting interfaces provided with the .NET Framework 4.
| 33.770408 | 180 | 0.773984 | yue_Hant | 0.921621 |
d05876e601abd97231d86f3a486af6dfa18f8de0 | 8,572 | md | Markdown | README.md | Modernizr/customizr | f1747588a23f6588ee2b284351d18145f6392bfd | [
"MIT"
] | 27 | 2016-01-06T17:29:19.000Z | 2021-06-15T14:52:41.000Z | README.md | Modernizr/customizr | f1747588a23f6588ee2b284351d18145f6392bfd | [
"MIT"
] | 48 | 2015-12-28T16:07:17.000Z | 2021-12-22T19:03:51.000Z | README.md | Modernizr/customizr | f1747588a23f6588ee2b284351d18145f6392bfd | [
"MIT"
] | 41 | 2016-01-06T18:14:31.000Z | 2021-02-11T11:05:47.000Z | # customizr
[](https://travis-ci.org/Modernizr/customizr)
[](https://nodei.co/npm/customizr/)
##### *tl;dr:* This tool crawls through your project files, gathers up your references to Modernizr tests and outputs a lean, mean Modernizr machine.
`customizr` is a Modernizr builder for your project. It is based on the Modernizr team's [Modulizr](https://github.com/Modernizr/modernizr.com/blob/gh-pages/i/js/modulizr.js) tool.
This configurable task allows you to configure and export a custom Modernizr build. Use Modernizr's [annotated source](http://modernizr.com/downloads/modernizr-latest.js) for development, and let this tool worry about optimization.
When you're ready to build, `customizr` will crawl your project for Modernizr test references and save out a minified, uglified, customized version using only the tests you've used in your JavaScript or (S)CSS.
## Example
### CSS / SCSS / LESS
When going through CSS files, the crawler does not look for `display: flex`; instead, it checks whether the code contains a CSS selector named after one of the Modernizr properties, such as:
```
.flexbox {
...
}
```
or
```
.no-flexbox {
...
}
```
### JavaScript
When going through JavaScript files, the crawler looks for Modernizr calls like this one:
```
if (!Modernizr.flexbox) {
...
}
```
## Use with Grunt
A Grunt wrapper is available at: [https://github.com/Modernizr/grunt-modernizr](https://github.com/Modernizr/grunt-modernizr)
## Use with Gulp
A Gulp wrapper is available at: [https://github.com/rejas/gulp-modernizr](https://github.com/rejas/gulp-modernizr)
## Getting Started
Install with npm: `npm install --save customizr`
## Documentation
### Command Line
```
./node-modules/.bin/customizr -c path/to/config
```
### Command Line Options
```
-h, --help # Print options and usage
-v, --version # Print the version number
-c, --config # Path to your Modernizr config JSON file
-f, --force # Ignore cached versions and force build Modernizr
```
#### Config File
A sample config file is below. Default values shown:
```javascript
{
// Avoid unnecessary builds (see Caching section below)
"cache" : true,
// Path to the build you're using for development.
"devFile" : false,
// Path to save out the built file
"dest" : false,
// Based on default settings on http://modernizr.com/download/
"options" : [
"addTest",
"html5printshiv",
"testProp"
],
// By default, the build process is verbose
"quiet" : false,
// By default, source is uglified before saving
"uglify" : true,
// Define any tests you want to explicitly include
"tests" : [],
// Useful for excluding any tests that this tool will match
// e.g. you use .notification class for notification elements,
// but don’t want the test for Notification API
"excludeTests": [],
// By default, will crawl your project for references to Modernizr tests
// Set to false to disable
"crawl" : true,
// Set to true to pass in buffers via the "files" parameter below
"useBuffers" : false,
// By default, this task will crawl all *.js, *.css, *.scss files.
"files" : {
"src": [
"*[^(g|G)runt(file)?].{js,css,scss}",
"**[^node_modules]/**/*.{js,css,scss}",
"!lib/**/*"
]
},
// Have custom Modernizr tests? Add them here.
"customTests" : [],
// Add custom prefix to Modernizr CSS classes
"classPrefix" : ''
}
```
###### **`cache`** (Boolean, optional)
When true, `customizr` will avoid the expensive build process if a certain criteria is met (see [Caching](#caching) section below)
###### **`devFile`** (String, optional)
Path to the local build file you're using for development. This parameter is needed so `customizr` can skip your dev file when traversing your project to avoid triggering false positives. If you're using a remote file for development, set this option to `remote`.
This is an optional parameter. If you do not have a local devFile, set this option to `false`. Note that if this parameter is false and you have a local development file, it will find all Modernizr references from this file and will defeat the purpose of this tool.
###### **`dest`** (String, optional)
Path to save the customized Modernizr build. It defaults to `lib/modernizr-custom.js`.
This is an optional parameter. If undefined or falsy, `customizr` will return the result as a string and will not write to disk.
###### **`options`** (Array, optional)
An array of extra configuration options. Check the extra section on [modernizr.com/download](http://modernizr.com/download/) for complete options. Defaults are as they appear on the official site.
This is an optional parameter.
###### **`quiet`** (Boolean, optional)
By default, the build process is verbose. Set to true to build silently.
This is an optional parameter.
###### **`uglify`** (Boolean, optional)
By default, the source is uglified before save. Set to false to disable.
This is an optional parameter.
###### **`tests`** (Array, optional)
Define any tests you want to explicitly include. Check out the full set of test options [here](#ADD_LINK_LATER).
This is an optional parameter.
###### **`excludeTests`** (Array, optional)
Useful for excluding any tests that this tool will match. (e.g. you use .notification class for notification elements, but don’t want the test for Notification API).
This is an optional parameter.
###### **`crawl`** (Boolean, optional)
By default, this task will crawl your project for references to Modernizr tests. Set to false to disable.
This is an optional parameter.
###### **`useBuffers`** (Boolean, optional)
When `true`, the `files` parameter will accept an array of buffers in lieu of lookup strings.
###### **`files.src`** (Array, optional)
When `crawl` = `true`, this task will crawl all `*.js`, `*.css`, `*.scss` files. You can override this by defining a custom `files.src` array. The object supports either:
- An array of all [minimatch](https://github.com/isaacs/minimatch) options
- An array of [Vinyl-style](https://github.com/wearefractal/vinyl) File buffers. `useBuffers` must be `true` to enable this functionality.
This is an optional parameter.
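For example, a minimal sketch of the buffer mode might look like this (the file path and contents are purely illustrative):
```js
var customizr = require("customizr");
var Vinyl = require("vinyl");

// A Vinyl-style File buffer standing in for a file on disk
var file = new Vinyl({
  path: "src/app.js",
  contents: Buffer.from("if (Modernizr.flexbox) { /* ... */ }")
});

customizr({
  useBuffers: true,            // treat the "files.src" entries as buffers
  files: { src: [ file ] },
  dest: "lib/modernizr-custom.js"
}, function () {
  // build finished
});
```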
###### **`customTests`** (Array, optional)
Have custom Modernizr tests? Add paths to their location here. The object supports all [minimatch](https://github.com/isaacs/minimatch) options.
This is an optional parameter.
###### **`classPrefix`** (String, optional)
Add custom prefix to Modernizr classes to avoid clashes with your preexisting class names.
This is an optional parameter.
## Caching
For large projects, building a custom Modernizr file can be an expensive task. `customizr` does its best to avoid unnecessary builds by following a set criteria. When all of the following are met, it assumes that no changes are necessary:
- If `customizr` has been previously run *AND*
- If [`settings.cache`](#cache-boolean-optional) is true *AND*
- If [`settings.dest`](#dest-string-optional) exists and is identical to the previous build *AND*
- If the `customizr` version is identical to the previous build *AND*
- If the `modernizr` dependency is identical to the previous build *AND*
- If the current [`customizr` settings](#config-file) are identical to the previous build *THEN*
`customizr` returns the cached data found in [`settings.dest`](#dest-string-optional)
- If any of the preceding rules are falsy, the cache is invalidated.
- If [`settings.cache`](#cache-boolean-optional) is falsy, the cache is invalidated.
- If [`settings.dest`](#dest-string-optional) is not defined, the cache is invalidated.
## Programmatic API
### require("customizr")(settings, callback)
- `settings` — A settings object as described above in "Config File".
- `callback` — A callback to execute when the task is finished
You can use `customizr` directly in your app if you prefer to not rely on the binary.
```js
var modernizr = require("customizr");
var settings = {
"cache" : true,
"devFile" : false,
"dest" : false,
"options" : [
"setClasses",
"addTest",
"html5printshiv",
"testProp"
],
"uglify" : true,
"tests" : [],
"excludeTests": [],
"crawl" : true,
"useBuffers": false,
"files" : {
"src": [
"*[^(g|G)runt(file)?].{js,css,scss}",
"**[^node_modules]/**/*.{js,css,scss}",
"!lib/**/*"
]
},
"customTests" : []
};
modernizr(settings, function () {
// all done!
});
```
## License
Copyright (c) 2021 The Modernizr team
Licensed under the MIT license.
| 32.969231 | 265 | 0.709403 | eng_Latn | 0.979525 |
d059f7d7098fc8fee70f04e612375bfb857376e2 | 1,404 | md | Markdown | _posts/2000/2000-04-26-die-eintracht-laesst-die-oeffentlichkeit-weiter-im.md | eintracht-stats/eintracht-stats.github.io | 9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43 | [
"MIT"
] | null | null | null | _posts/2000/2000-04-26-die-eintracht-laesst-die-oeffentlichkeit-weiter-im.md | eintracht-stats/eintracht-stats.github.io | 9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43 | [
"MIT"
] | 1 | 2021-04-01T17:08:43.000Z | 2021-04-01T17:08:43.000Z | _posts/2000/2000-04-26-die-eintracht-laesst-die-oeffentlichkeit-weiter-im.md | eintracht-stats/eintracht-stats.github.io | 9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Die Eintracht lässt die Öffentlichkeit weiter im Unklaren darüber, ob sie beim DFB Widerspruch gegen den Punktabzug einlegt."
---
Eintracht continues to leave the public in the dark about whether it will lodge an appeal with the DFB against the points deduction. According to treasurer Leben, a decision has indeed been made, but the DFB is to be informed first. From my point of view, that rather suggests an appeal. If they did not intend to take action against the DFB, I see no reason why it should be notified first; in that case they could just announce it straight away. We will know more tomorrow, because that is when the appeal deadline expires... Meanwhile, there seems to be movement in the negotiations with the potential investor. Once again, Leben did not reveal his name today. But the contract "has already been unilaterally signed by our partner. We are currently still reviewing some details." If that is true, nothing more should really go wrong. The rescue is within reach! Our national team has embarrassed itself once again! Against Switzerland, it only managed a lucky 1-1 draw. In terms of play, nothing worked: a keeper from Dortmund in goal who seamlessly continued his recent performances, and that was it! I hope we have finally hit rock bottom. Otherwise, in this form, it is going to be quite an amusing European Championship...
| 200.571429 | 1,247 | 0.816952 | deu_Latn | 0.999842 |
d05a0a4432973da9753ba021736f4c20d1b4b606 | 15 | md | Markdown | README.md | daniela-bd/Hwork2 | 6b60717397a722bf8bdbbb5d18485c7c4650e8ce | [
"MIT"
] | null | null | null | README.md | daniela-bd/Hwork2 | 6b60717397a722bf8bdbbb5d18485c7c4650e8ce | [
"MIT"
] | null | null | null | README.md | daniela-bd/Hwork2 | 6b60717397a722bf8bdbbb5d18485c7c4650e8ce | [
"MIT"
] | null | null | null | # Hw2
html css
| 5 | 8 | 0.666667 | hun_Latn | 0.762644 |
d05b3beb7364f277e0776ff355eedaaaeadbb754 | 1,309 | md | Markdown | docs/fundamentals/code-analysis/quality-rules/interoperability-warnings.md | itou/docs.ja-jp | 155a9fb30666f37ad2d81627d5510376cb359893 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/fundamentals/code-analysis/quality-rules/interoperability-warnings.md | itou/docs.ja-jp | 155a9fb30666f37ad2d81627d5510376cb359893 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/fundamentals/code-analysis/quality-rules/interoperability-warnings.md | itou/docs.ja-jp | 155a9fb30666f37ad2d81627d5510376cb359893 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 移植性と相互運用性の規則 (コード分析)
description: コード分析規則の移植性と相互運用性の規則について
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- vs.codeanalysis.Portablityrules
- vs.codeanalysis.Interoperabilityrules
helpviewer_keywords:
- managed code analysis rules, interoperability rules, portability rules
- portability rules
- warnings, portability
- interoperability rules
- warnings, interoperability
author: gewarren
ms.author: gewarren
ms.openlocfilehash: a20cd77e13c4a8b95633d129990667f0a8de3ee8
ms.sourcegitcommit: 2e4adc490c1d2a705a0592b295d606b10b9f51f1
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/25/2020
ms.locfileid: "96591195"
---
# <a name="portability-and-interoperability-rules"></a>Portability and interoperability rules
Portability rules support portability across different platforms. Interoperability rules support interaction with COM clients.
## <a name="in-this-section"></a>In this section
| Rule | Description |
| - | - |
| [CA1401: P/Invokes should not be visible](ca1401.md) | A public or protected method in a public type has the System.Runtime.InteropServices.DllImportAttribute attribute (also implemented by the Declare keyword in Visual Basic). Such methods should not be exposed. |
| [CA1416: Validate platform compatibility](ca1416.md) | Using platform-dependent APIs in a component means the code will no longer work across all platforms. |
| [CA1417: Do not use `OutAttribute` on string parameters for P/Invokes](ca1417.md) | String parameters passed by value with the `OutAttribute` can destabilize the runtime if the string is an interned string. |
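As a quick illustration of the first rule (a sketch of my own, not taken from the individual rule pages): CA1401 fires when a P/Invoke is externally visible, and the usual fix is to keep the extern method private behind a managed wrapper:
```csharp
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // Violates CA1401: a publicly visible P/Invoke.
    // [DllImport("user32.dll")]
    // public static extern bool MessageBeep(uint uType);

    // Compliant: keep the P/Invoke private and expose a wrapper instead.
    [DllImport("user32.dll")]
    private static extern bool MessageBeep(uint uType);

    public static bool Beep() => MessageBeep(0xFFFFFFFF);
}
```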
| 37.4 | 200 | 0.815126 | yue_Hant | 0.213315 |
d05cfa00fd03700fb5924d2973cdf0b4570781ac | 1,662 | md | Markdown | tagged/coq.md | weTeams/bookmarks | 3b745d9e9b30de54e6eaecdfcd6b8707dc8f4a58 | [
"X11"
] | null | null | null | tagged/coq.md | weTeams/bookmarks | 3b745d9e9b30de54e6eaecdfcd6b8707dc8f4a58 | [
"X11"
] | null | null | null | tagged/coq.md | weTeams/bookmarks | 3b745d9e9b30de54e6eaecdfcd6b8707dc8f4a58 | [
"X11"
] | null | null | null | ## Bookmarks tagged [[coq]](https://www.bookmarks.dev/search?q=[coq])
_<sup><sup>[www.bookmarks.dev/tagged/coq](https://www.bookmarks.dev/tagged/coq)</sup></sup>_
---
#### [zeimer.github.io (Programowanie Funkcyjne)](https://zeimer.github.io)
_<sup>https://zeimer.github.io</sup>_
* **tags**: [free-programming-books](../tagged/free-programming-books.md), [coq](../tagged/coq.md), [free-programming-books-pl](../tagged/free-programming-books-pl.md)
---
#### [ソフトウェアの基礎](http://proofcafe.org/sf/)
_<sup>http://proofcafe.org/sf/</sup>_
Benjamin C. Pierce, Chris Casinghino, Michael Greenberg, Vilhelm Sjöberg, Brent Yorgey, 梅村晃広 (translator), 片山功士 (translator), 水野洋樹 (translator), 大橋台地 (translator), 増子萌 (translator), 今井宜洋 (translator)
* **tags**: [free-programming-books-ja](../tagged/free-programming-books-ja.md), [free-programming-books](../tagged/free-programming-books.md), [coq](../tagged/coq.md)
---
#### [Le Coq'Art (V8)](http://www.labri.fr/perso/casteran/CoqArt/)
_<sup>http://www.labri.fr/perso/casteran/CoqArt/</sup>_
Yves Bertot et Pierre Castéran
* **tags**: [free-programming-books](../tagged/free-programming-books.md), [coq](../tagged/coq.md), [free-programming-books-fr](../tagged/free-programming-books-fr.md)
---
#### [Software Foundations](http://www.cis.upenn.edu/~bcpierce/sf/)
_<sup>http://www.cis.upenn.edu/~bcpierce/sf/</sup>_
* **tags**: [free-programming-books](../tagged/free-programming-books.md), [coq](../tagged/coq.md)
---
#### [Certified Programming with Dependent Types](http://adam.chlipala.net/cpdt/html/toc.html)
_<sup>http://adam.chlipala.net/cpdt/html/toc.html</sup>_
* **tags**: [free-programming-books](../tagged/free-programming-books.md), [coq](../tagged/coq.md)
---
| 51.9375 | 167 | 0.691336 | yue_Hant | 0.45341 |
d05d7407d45c861a3d0cbb95d8b655fdaee24511 | 26,384 | md | Markdown | src/content/ja/2019/http2.md | rockeynebhwani/almanac.httparchive.org | c777be6c40b65a108faacfe245d10297309ffe22 | [
"Apache-2.0"
] | null | null | null | src/content/ja/2019/http2.md | rockeynebhwani/almanac.httparchive.org | c777be6c40b65a108faacfe245d10297309ffe22 | [
"Apache-2.0"
] | null | null | null | src/content/ja/2019/http2.md | rockeynebhwani/almanac.httparchive.org | c777be6c40b65a108faacfe245d10297309ffe22 | [
"Apache-2.0"
] | null | null | null | ---
part_number: IV
chapter_number: 20
title: HTTP/2
description: HTTP/2 chapter of the 2019 Web Almanac covering adoption and impact of HTTP/2, HTTP/2 push, HTTP/2 issues, and HTTP/3
authors: [bazzadp]
reviewers: [bagder, rmarx, dotjs]
analysts: [paulcalvano]
editors: [rachellcostello]
translators: [ksakae]
discuss: 1775
results: https://docs.google.com/spreadsheets/d/1z1gdS3YVpe8J9K3g2UdrtdSPhRywVQRBz5kgBeqCnbw/
queries: 20_HTTP_2
bazzadp_bio: Barry Pollard is a software developer and author of the Manning book <a href="https://www.manning.com/books/http2-in-action">HTTP/2 in Action</a>. He thinks the web is amazing but wants to make it even better. You can find him tweeting <a href="https://twitter.com/tunetheweb">@tunetheweb</a> and blogging at <a href="https://www.tunetheweb.com">www.tunetheweb.com</a>.
featured_quote: HTTP/2 was the first major update to the main transport protocol of the web in nearly 20 years. It arrived with many expectations and promised a free performance boost with no downsides. More than that, we could stop doing all the hacks and workarounds that HTTP/1.1 forced upon us, due to its inefficiencies. Bundling, spriting, inlining, and even sharding domains would all become anti-patterns in an HTTP/2 world, as improved performance would be provided by default.
featured_stat_1: 95%
featured_stat_label_1: Percentage of global users who can use HTTP/2.
featured_stat_2: 27.83%
featured_stat_label_2: Percentage of mobile requests with suboptimal HTTP/2 prioritization.
featured_stat_3: 8.38%
featured_stat_label_3: Percentage of mobile sites supporting QUIC.
---
## Introduction
HTTP/2 was the first major update to the main transport protocol of the web in nearly 20 years. It arrived with many expectations and promised a free performance boost with no downsides. More than that, we could stop doing all the hacks and workarounds that HTTP/1.1 forced upon us, due to its inefficiencies. Bundling, spriting, inlining, and even sharding domains would all become anti-patterns in an HTTP/2 world, as improved performance would be provided by default.
This meant that even sites without the skills and resources to concentrate on [web performance](./performance) would soon have performant websites by default. However, the reality has been, as ever, a little different. It has been over four years since the formal approval of HTTP/2 as a standard in May 2015 as [RFC 7540](https://tools.ietf.org/html/rfc7540), so this is a good time to look at how this relatively new technology has fared in the real world.
## What is HTTP/2?
For those not familiar with the technology, a little background helps to make the most of the metrics and findings in this chapter. Up until recently, HTTP has always been a text-based protocol. An HTTP client like a web browser opened a TCP connection to a server and then sent an HTTP command like `GET /index.html` to request a resource.
This was enhanced in HTTP/1.0 to add *HTTP headers*, so various pieces of metadata could be included in addition to the request, such as what browser it is, the formats it understands, and so on. These HTTP headers were also text-based and separated by newline characters. Servers parsed incoming requests by reading the request and any HTTP headers line by line, and then the server responded with its own HTTP response headers in addition to the actual resource being requested.
The protocol seemed simple, but it also came with limitations. Because HTTP was essentially synchronous, once an HTTP request was sent, the whole TCP connection was basically off limits to anything else until the response had been returned, read, and processed. This was incredibly inefficient and required multiple TCP connections (browsers typically used six) to allow a limited form of parallelization.
That in itself brings its own issues, as TCP connections take time and resources to set up and get to full efficiency, especially when using HTTPS, which requires additional steps to set up the encryption. HTTP/1.1 improved this somewhat, allowing reuse of TCP connections for subsequent requests, but it still did not solve the parallelization issue.
Despite HTTP being text-based, in reality it was rarely used to transport text, at least in its raw format. While it was true that HTTP headers remained text, the payloads themselves often were not. Text files like [HTML](./markup), [JS](./javascript), and [CSS](./css) are usually [compressed](./compression) for transport into a binary format using Gzip, Brotli, or similar. Non-text files like [images and videos](./media) are served in their own formats. The whole HTTP message is then often wrapped in HTTPS to encrypt the messages for [security](./security) reasons.
So the web had basically moved on from text-based transport a long time ago, but HTTP had not. One reason for this stagnation was because it was so difficult to introduce any breaking changes to such a ubiquitous protocol as HTTP (previous efforts had tried and failed). Many routers, firewalls, and other middleboxes understood HTTP and would react badly to major changes to it. Upgrading them all to support a new version was simply not possible.
In 2009, Google announced that they were working on an alternative to the text-based HTTP called [SPDY](https://www.chromium.org/spdy), which has since been deprecated. This would take advantage of the fact that HTTP messages were often encrypted in HTTPS, which prevents them from being read and interfered with en route.
Google controlled one of the most popular browsers (Chrome) and some of the most popular websites (Google, YouTube, Gmail, etc.). Google's idea was to pack HTTP messages into a proprietary format, send them across the internet, and then unpack them on the other side. The proprietary format, SPDY, was binary-based rather than text-based. This solved some of the main performance problems with HTTP/1.1 by allowing more efficient use of a single TCP connection, negating the need to open the six connections that had become the norm under HTTP/1.1.
By using SPDY in the real world, they were able to prove that it was more performant for real users, and not just in lab-based experimental results. After rolling out SPDY to all Google websites, other servers and browsers started implementing it, and then the time came to standardize this proprietary format into an internet standard, and thus HTTP/2 was born.
HTTP/2 has the following key concepts:
* Binary format
* Multiplexing
* Flow control
* Prioritization
* Header compression
* Push
**Binary format** means that HTTP/2 messages are wrapped into frames of a pre-defined format, making HTTP messages easier to parse and removing the need to scan for newline characters. This is better for security, as there were a number of [exploits](https://www.owasp.org/index.php/HTTP_Response_Splitting) for previous versions of HTTP. It also means HTTP/2 connections can be **multiplexed**. Because each frame contains a stream identifier and its length, different frames for different streams can be sent on the same connection without interfering with each other. Multiplexing allows more efficient use of a single TCP connection, without the overhead of opening additional connections. Ideally we would open a single connection per domain, or even for [multiple domains](https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/)!
Having separate streams does introduce some complexities along with some potential benefits. HTTP/2 needs the concept of **flow control** to allow the different streams to send data at different rates, whereas previously, with only one response in flight at any one time, this was controlled at a connection level by TCP flow control. Similarly, **prioritization** allows multiple requests to be sent together, but with the most important requests getting more of the bandwidth.
Finally, HTTP/2 introduced two new concepts: **header compression** and **HTTP/2 push**. Header compression allowed those text-based HTTP headers to be sent more efficiently, using an HTTP/2-specific [HPACK](https://tools.ietf.org/html/rfc7541) format for security reasons. HTTP/2 push allowed more than one response to be sent in answer to a request, enabling the server to "push" resources before a client was even aware it needed them. Push was supposed to solve the performance workaround of having to inline resources like CSS and JavaScript directly into the HTML to prevent the page from being held up while those resources were requested. With HTTP/2, the CSS and JavaScript could remain as external files but be pushed along with the initial HTML, so they were available immediately. Subsequent page requests would not push these resources, since they would now be cached, and so would not waste bandwidth.
This whistle-stop tour of HTTP/2 gives the main history and concepts of the newish protocol. As should be apparent from this explanation, the main benefit of HTTP/2 is to address the performance limitations of the HTTP/1.1 protocol. There were also security improvements; perhaps most importantly, HTTP/2 addresses the performance issues of using HTTPS, since HTTP/2, even over HTTPS, is [usually much faster than plain HTTP](https://www.httpvshttps.com/). Other than the web browser packing the HTTP messages into the new binary format, and the web server unpacking them at the other side, the core basics of HTTP itself stayed roughly the same. This means web applications do not need to make any changes to support HTTP/2, as the browser and server take care of this. Turning it on should give a free performance boost, so adoption should be relatively easy. Of course, there are ways web developers can optimize for HTTP/2 to take maximum advantage of how it differs.
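As a concrete illustration (my own example, not taken from the Almanac data set): on servers that already support it, switching HTTP/2 on is often a one-line configuration change, e.g. in nginx:

```
server {
    listen 443 ssl http2;   # the "http2" flag enables ALPN negotiation of HTTP/2
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;   # illustrative paths
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```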
## Adoption of HTTP/2
As mentioned above, internet protocols are often difficult to adopt since they are ingrained into so much of the infrastructure that makes up the internet. This makes introducing any changes slow and difficult. IPv6, for example, has been around for 20 years but has [struggled to be adopted](https://www.google.com/intl/en/ipv6/statistics.html).
{{ figure_markup(
  caption="Percentage of global users who can use HTTP/2.",
  content="95%",
  classes="big-number"
)
}}
HTTP/2, however, was different, as it was effectively hidden in HTTPS (at least for the browser use cases), removing barriers to adoption as long as both the browser and server supported it. Browser support has been very strong for some time, and the advent of browsers that auto-update to the *latest version* means that [an estimated 95% of global users now support HTTP/2](https://caniuse.com/#feat=http2).
Our analysis is sourced from the HTTP Archive, which tests approximately 5 million of the top desktop and mobile websites in the Chrome browser. (Learn more about our [methodology](./methodology).)
{{ figure_markup(
  image="ch20_fig2_http2_usage_by_request.png",
  alt="HTTP/2 usage by request. (Source: HTTP Archive)",
  caption='HTTP/2 usage by request. (Source: <a href="https://httparchive.org/reports/state-of-the-web#h2">HTTP Archive</a>)',
  description="Timeseries chart of HTTP/2 usage, showing adoption at 55% for both desktop and mobile as of July 2019. The trend has been growing steadily at around 15 points per year.",
  width=600,
  height=321
)
}}
The results show that HTTP/2 usage is now the majority protocol, an impressive feat just over four years after formal standardization! Looking at the breakdown of all HTTP versions by request, we see the following:
<figure markdown>
| Protocol | Desktop | Mobile | Both   |
| -------- | ------- | ------ | ------ |
|          | 5.60%   | 0.57%  | 2.97%  |
| HTTP/0.9 | 0.00%   | 0.00%  | 0.00%  |
| HTTP/1.0 | 0.08%   | 0.05%  | 0.06%  |
| HTTP/1.1 | 40.36%  | 45.01% | 42.79% |
| HTTP/2   | 53.96%  | 54.37% | 54.18% |
<figcaption>{{ figure_link(caption="HTTP version usage by request.") }}</figcaption>
</figure>
Figure 20.3 shows that HTTP/1.1 and HTTP/2 are the versions used by the vast majority of requests, as expected. There is only a very small number of requests on the older HTTP/1.0 and HTTP/0.9 protocols. Annoyingly, there is a larger percentage where the protocol was not correctly tracked by the HTTP Archive crawl, particularly on desktop. Digging into this revealed various reasons, some of which can be explained and some of which cannot. Based on spot checks, they mostly appear to be HTTP/1.1 requests and, assuming they are, desktop and mobile usage is similar.
Despite there being a little more noise than we would like, it does not alter the overall message being conveyed here. Other than that, the mobile/desktop similarity is not unexpected; the HTTP Archive tests with Chrome, which supports HTTP/2 on both desktop and mobile. Real-world usage may show slightly different stats due to some older browser usage on both, but even then support is widespread, so we would not expect a large variation between desktop and mobile.
At present, the HTTP Archive does not track HTTP over [QUIC](https://www.chromium.org/quic) (soon to be standardized as [HTTP/3](#http3)) separately, so these requests are currently listed under HTTP/2, but we will look at other ways of measuring that later in this chapter.
Looking at the number of requests will skew the results somewhat, due to popular requests. For example, many sites load Google Analytics, which does support HTTP/2, and so will show as an HTTP/2 request even if the embedding site itself does not support HTTP/2. On the other hand, popular websites tend to support HTTP/2 but are only counted once in the above stats ("google.com" and "obscuresite.com" are given equal weighting). _There are lies, damned lies, and statistics._
However, our findings are corroborated by other sources, like [Mozilla's telemetry](https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&measure=HTTP_RESPONSE_VERSION), which looks at real-world usage through the Firefox browser.
<figure markdown>
| Protocol | Desktop | Mobile | Both   |
| -------- | ------- | ------ | ------ |
|          | 0.09%   | 0.08%  | 0.08%  |
| HTTP/1.0 | 0.09%   | 0.08%  | 0.09%  |
| HTTP/1.1 | 62.36%  | 63.92% | 63.22% |
| HTTP/2   | 37.46%  | 35.92% | 36.61% |
<figcaption>{{ figure_link(caption="HTTP version usage for home pages.") }}</figcaption>
</figure>
It is also interesting to look at home pages only, to get a rough figure on the number of sites that support HTTP/2 (at least on their home page). Figure 20.4 shows less support than overall requests, as expected, at around 36%.
HTTP/2 is only supported by browsers over HTTPS, even though officially HTTP/2 can be used over HTTPS or over unencrypted non-HTTPS connections. As mentioned previously, hiding the new protocol in encrypted HTTPS connections prevents networking appliances that do not understand this new protocol from interfering with (or rejecting!) its usage. Additionally, the HTTPS handshake allows an easy method for the client and server to agree to use HTTP/2.
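You can observe this negotiation yourself (my own example, using standard tooling): OpenSSL lets you offer HTTP/2 via ALPN and shows whether the server agrees:

```
openssl s_client -alpn h2 -connect example.com:443
# ...
# ALPN protocol: h2
```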
<figure markdown>
| Protocol | Desktop | Mobile | Both   |
| -------- | ------- | ------ | ------ |
|          | 0.09%   | 0.10%  | 0.09%  |
| HTTP/1.0 | 0.06%   | 0.06%  | 0.06%  |
| HTTP/1.1 | 45.81%  | 44.31% | 45.01% |
| HTTP/2   | 54.04%  | 55.53% | 54.83% |
<figcaption>{{ figure_link(caption="HTTP version usage for HTTPS home pages.") }}</figcaption>
</figure>
The web is moving to HTTPS, and HTTP/2 turns the traditional argument of HTTPS being bad for performance almost completely on its head. Not every site has made the transition to HTTPS, so HTTP/2 will not even be available to those that have not. Looking at just the sites that use HTTPS, in Figure 20.5 we do see a higher adoption of HTTP/2 at around 55%, similar to the percentage of *all requests* in Figure 20.2.
We have shown that browser support for HTTP/2 is strong and that there is a safe road to adoption, so why does every site (or at least every HTTPS site) not support HTTP/2? Well, here we come to the final item for support that we have not measured yet: server support.
This is more problematic than browser support because, unlike modern browsers, servers often do not automatically upgrade to the latest version. Even when a server is regularly maintained and patched, that will often just apply security patches rather than new features like HTTP/2. Let's look first at the server HTTP headers of sites that do support HTTP/2.
<figure markdown>
| Server        | Desktop | Mobile | Both   |
| ------------- | ------- | ------ | ------ |
| nginx         | 34.04%  | 32.48% | 33.19% |
| cloudflare    | 23.76%  | 22.29% | 22.97% |
| Apache        | 17.31%  | 19.11% | 18.28% |
|               | 4.56%   | 5.13%  | 4.87%  |
| LiteSpeed     | 4.11%   | 4.97%  | 4.57%  |
| GSE           | 2.16%   | 3.73%  | 3.01%  |
| Microsoft-IIS | 3.09%   | 2.66%  | 2.86%  |
| openresty     | 2.15%   | 2.01%  | 2.07%  |
| ...           | ...     | ...    | ...    |
<figcaption>{{ figure_link(caption="Servers used for HTTP/2.") }}</figcaption>
</figure>
Nginx provides package repositories that allow ease of installing or upgrading to the latest version, so it is no surprise to see it leading the way here. Cloudflare is the most popular [CDN](./cdn) and enables HTTP/2 by default, so again it is not surprising to see it hosting a large percentage of HTTP/2 sites. Incidentally, Cloudflare uses a [heavily customized](https://blog.cloudflare.com/nginx-structural-enhancements-for-http-2-performance/) version of nginx as their web server. After those, we see Apache at around 20% of usage, followed by some servers that choose to hide what they are, and then the smaller players such as LiteSpeed, IIS, Google Servlet Engine, and openresty, which is nginx-based.
What is more interesting is the servers that do *not* support HTTP/2:
<figure markdown>
| Server        | Desktop | Mobile | Both   |
| ------------- | ------- | ------ | ------ |
| Apache        | 46.76%  | 46.84% | 46.80% |
| nginx         | 21.12%  | 21.33% | 21.24% |
| Microsoft-IIS | 11.30%  | 9.60%  | 10.36% |
|               | 7.96%   | 7.59%  | 7.75%  |
| GSE           | 1.90%   | 3.84%  | 2.98%  |
| cloudflare    | 2.44%   | 2.48%  | 2.46%  |
| LiteSpeed     | 1.02%   | 1.63%  | 1.36%  |
| openresty     | 1.22%   | 1.36%  | 1.30%  |
| ...           | ...     | ...    | ...    |
<figcaption>{{ figure_link(caption="Servers used for HTTP/1.1 or lower.") }}</figcaption>
</figure>
Some of this will be non-HTTPS traffic that would use HTTP/1.1 even if the server supported HTTP/2, but a bigger issue is servers that do not support HTTP/2 at all. In these stats, we see a much greater share for Apache and IIS, which are likely running older versions.
For Apache in particular, it is often not easy to add HTTP/2 support to an existing installation, as Apache does not provide an official repository to install this from. This often means resorting to compiling from source or trusting a third-party repository, neither of which is particularly appealing to many administrators.
Only the latest versions of Linux distributions (RHEL and CentOS 8, Ubuntu 18, and Debian 9) come with a version of Apache that supports HTTP/2, and many servers are not yet running those. On the Microsoft side, only Windows Server 2016 and above supports HTTP/2, so again those running older versions cannot support this in IIS.
Merging these two stats together, we can see the percentage of installs per server that use HTTP/2:
<figure markdown>
| Server        | Desktop | Mobile |
| ------------- | ------- | ------ |
| cloudflare    | 85.40%  | 83.46% |
| LiteSpeed     | 70.80%  | 63.08% |
| openresty     | 51.41%  | 45.24% |
| nginx         | 49.23%  | 46.19% |
| GSE           | 40.54%  | 35.25% |
|               | 25.57%  | 27.49% |
| Apache        | 18.09%  | 18.56% |
| Microsoft-IIS | 14.10%  | 13.47% |
| ...           | ...     | ...    |
<figcaption>{{ figure_link(caption="Percentage of each server's installs used to provide HTTP/2.") }}</figcaption>
</figure>
It is clear that Apache and IIS fall way behind, with 18% and 14% of their installed base supporting HTTP/2, and this has to be (at least in part) a consequence of them being more difficult to upgrade. A full operating system upgrade is often required for many servers to get this support easily. Hopefully this will get easier as new versions of operating systems become the norm.
None of this is a comment on the HTTP/2 implementations here ([I happen to think Apache has one of the best implementations](https://twitter.com/tunetheweb/status/988196156697169920?s=20)), but more about the ease of enabling HTTP/2 in each of these servers, or lack thereof.
## Impact of HTTP/2
The impact of HTTP/2 is much more difficult to measure, especially using the HTTP Archive [methodology](./methodology). Ideally, sites should be crawled with both HTTP/1.1 and HTTP/2 and the difference measured, but that is not possible with the statistics we are investigating here. Additionally, measuring whether the average HTTP/2 site is faster than the average HTTP/1.1 site introduces too many other variables, requiring a more exhaustive study than we can cover here.
One impact that can be measured is the changing use of HTTP now that we are in an HTTP/2 world. Multiple connections were a workaround with HTTP/1.1 to allow a limited form of parallelization, but this is in fact the opposite of what usually works best with HTTP/2. A single connection reduces the overhead of TCP setup, TCP slow start, and HTTPS negotiation, and it also allows the potential of cross-request prioritization.
{{ figure_markup(
  image="ch20_fig9_num_tcp_connections_trend_over_years.png",
  alt='TCP connections per page.',
  caption='TCP connections per page. (Source: <a href="https://httparchive.org/reports/state-of-the-web#tcp">HTTP Archive</a>)',
  description="Timeseries chart of the number of TCP connections per page. As of July 2019, the median desktop page has 14 connections and the median mobile page has 16 connections.",
  width=600,
  height=320
)
}}
The HTTP Archive measures the number of TCP connections per page, and this is dropping steadily as more sites support HTTP/2 and use a single connection instead of six separate connections.
{{ figure_markup(
  image="ch20_fig10_total_requests_per_page_trend_over_years.png",
  alt='Total requests per page.',
  caption='Total requests per page. (Source: <a href="https://httparchive.org/reports/state-of-the-web#reqTotal">HTTP Archive</a>)',
  description="Timeseries chart of the number of requests per page. As of July 2019, the median desktop page has 74 requests and the median mobile page has 69 requests. The trend is relatively flat.",
  width=600,
  height=321
)
}}
Bundling assets to obtain fewer requests was another HTTP/1.1 workaround that went by many names: bundling, concatenation, packaging, spriting, etc. This is less necessary when using HTTP/2, as there is less overhead per request, but it should be noted that requests are not free in HTTP/2, and [those that experimented with removing bundling completely have noticed a loss in performance](https://engineering.khanacademy.org/posts/js-packaging-http2.htm). Looking at the number of requests loaded per page over time, we see a slight decrease in requests, rather than the expected increase.
This decrease can perhaps be attributed to the aforementioned observations that bundling cannot be removed (at least, not completely) without a negative performance impact, and that many build tools currently bundle for historical reasons based on HTTP/1.1 recommendations. It is also likely that many sites may not be willing to penalize HTTP/1.1 users by undoing their HTTP/1.1 performance hacks just yet, or at least do not have the conviction (or time!) to feel this is worthwhile.
The fact that the number of requests is staying roughly static is interesting, given the ever-increasing [page weight](./page-weight), though perhaps this is not entirely related to HTTP/2.
## HTTP/2 Push
HTTP/2 push has a checkered history, despite being a much-hyped new feature of HTTP/2. The other features were basically under-the-hood performance improvements, but push was a brand new concept that completely broke the single request to single response nature of HTTP up until then, allowing extra responses to be returned. When you request a web page, the server responds with the HTML page as usual, but it can also send you the critical CSS and JavaScript, thus avoiding additional round trips for those particular resources. In theory, it would allow us to stop inlining CSS and JavaScript directly into our HTML and still get the same performance gains. After solving that, it could potentially lead to all sorts of new and interesting use cases.
The reality has been, well, a little disappointing. HTTP/2 push has proved much harder to use effectively than originally envisaged. Some of this has been due to [the complexity of how HTTP/2 push works](https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/), and the implementation issues that causes.
A bigger concern is that push can quite easily cause, rather than solve, performance issues. Over-pushing is a real risk. Often the browser is in the best place to decide *what* to request, and just as crucially *when* to request it, but HTTP/2 push puts that responsibility on the server. Pushing resources that a browser already has in its cache is a waste of bandwidth (though, in my opinion, so is inlining CSS, but that gets far less grief about it than HTTP/2 push!).
[Proposals to inform the server about the status of the browser cache have stalled](https://lists.w3.org/Archives/Public/ietf-http-wg/2019JanMar/0033.html), especially on privacy concerns. Even without that problem, there are other potential issues if push is not used correctly. For example, pushing large images and therefore holding up the sending of critical CSS and JavaScript will lead to slower websites than if you had not pushed at all!
There has also been very little evidence to date that push, even when implemented correctly, results in the performance increase it promised. This is an area that, again, the HTTP Archive is not best placed to answer, due to the nature of how it runs (a crawl of popular sites using Chrome in one state), so we will not go into it in much depth here. However, suffice to say that the performance gains are far from clear-cut, and the potential problems are real.
Putting that aside, let's look at the usage of HTTP/2 push.
<figure markdown>
| Client  | Sites using HTTP/2 push | Sites using HTTP/2 push (%) |
| ------- | ----------------------- | --------------------------- |
| Desktop | 22,581                  | 0.52%                       |
| Mobile  | 31,452                  | 0.59%                       |
<figcaption>{{ figure_link(caption="Sites using HTTP/2 push.") }}</figcaption>
</figure>
<figure markdown>
| Client  | Average pushed requests | Average KB pushed |
| ------- | ----------------------- | ----------------- |
| Desktop | 7.86                    | 162.38            |
| Mobile  | 6.35                    | 122.78            |
<figcaption>{{ figure_link(caption="How much is pushed when it is used.") }}</figcaption>
</figure>
These stats show that the uptake of HTTP/2 push is very low, most likely because of the issues described previously. However, when sites do use push, they tend to use it a lot, rather than for just one or two assets, as shown in Figure 20.12.
This is a concern, as previous advice has been to be conservative with push and to "[push just enough resources to fill idle network time, and no more](https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXjqKGsCVNYHyLEXIxYMv0/edit)". The above statistics suggest many resources of a significant total size are pushed.
{{ figure_markup(
  image="ch20_fig13_what_push_is_used_for.png",
  caption="What asset types is push used for?",
  description="Pie chart breaking down the percentage of asset types that are pushed. JavaScript makes up almost half of the assets, then CSS with about a quarter, images about an eighth, and various text-based types making up the rest.",
  chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vQLxLA5Nojw28P7ceisqti3oTmNSM-HIRIR0bDb2icJS5TzONvRhdqxQcooh_45TmK97XVpot4kEQA0/pubchart?oid=466353517&format=interactive"
)
}}
Figure 20.13 shows us which assets are most commonly pushed. JavaScript and CSS are the overwhelming majority of pushed items, both by volume and by bytes. After this, there is a ragtag assortment of images, fonts, and data. At the tail end we see around 100 sites pushing video, which may be intentional, or may be a sign of over-pushing the wrong types of assets!
One concern raised by some is that HTTP/2 implementations have repurposed the `preload` HTTP `link` header as a signal to push. One of the most popular uses of the `preload` resource hint is to inform the browser of late-discovered resources, like fonts and images, that the browser will not see until the CSS has been requested, downloaded, and parsed. If these are now pushed based on that header, there was a concern that reusing it could result in a lot of unintended pushes.
However, the relatively low usage of fonts and images may mean that risk is not being seen as much as was feared. `<link rel="preload" ...>` tags are often used in the HTML rather than HTTP link headers, and meta tags are not a signal to push. Statistics in the [Resource Hints](./resource-hints) chapter show that fewer than 1% of sites use the preload HTTP link header (and about the same amount use preconnect, which has no meaning in HTTP/2), so this suggests this is not so much of an issue. Though there are a number of fonts and other assets being pushed, which may be a signal of this.
As a counterargument to those complaints, if an asset is important enough to preload, then it could be argued these assets should be pushed if possible, as browsers treat a preload hint as a very high priority request anyway. Any performance concern is therefore (again, arguably) in the overuse of preload, rather than in the resulting HTTP/2 push.
To get around this unintended push, you can provide the `nopush` attribute in your preload header:
```
link: </assets/jquery.js>; rel=preload; as=script; nopush
```
5% of preload HTTP headers use this attribute, which is higher than I would have expected, as I would have considered this a niche optimization. Then again, so is the use of preload HTTP headers and/or HTTP/2 push itself!
## HTTP/2 Issues
HTTP/2 is mostly a seamless upgrade that, once your server supports it, you can switch on without needing to change your website or application. You can optimize for HTTP/2 or stop using HTTP/1.1 workarounds, but in general, a site will usually work without needing any changes. There are a couple of gotchas to be aware of that can impact any upgrade, however, and some sites have found these out the hard way.
One cause of issues in HTTP/2 is the poor support of HTTP/2 prioritization. This feature allows multiple requests in flight to make appropriate use of the connection. This is especially important since HTTP/2 has massively increased the number of requests that can run on the same connection. 100 or 128 parallel request limits are common in server implementations. Previously, the browser had a maximum of six connections per domain, and so used its skill and judgment to decide how best to use those connections. Now, it rarely needs to queue and can send all requests as soon as it knows about them. This can then lead to bandwidth being "wasted" on lower priority requests while critical requests are delayed (and [incidentally can also lead to swamping your backend server with more requests than it is used to!](https://www.lucidchart.com/techblog/2019/04/10/why-turning-on-http2-was-a-mistake/)).
HTTP/2 has a complex prioritization model (too complex, many say, hence why it is being reconsidered for HTTP/3!), but few servers honor it properly. This can be because their HTTP/2 implementations are not up to scratch, or because of so-called *bufferbloat*, where the responses are already en route before the server realizes there is a higher priority request. Due to the varying nature of servers, TCP stacks, and locations, it is difficult to measure this for most sites, but with CDNs this should be more consistent.
[Patrick Meenan](https://twitter.com/patmeenan) created [an example test page](https://github.com/pmeenan/http2priorities/tree/master/stand-alone), which deliberately tries to download a load of low priority, off-screen images before requesting some high priority on-screen images. A good HTTP/2 server should recognize this and send the high priority images shortly after they are requested, at the expense of the lower priority images. A poor HTTP/2 implementation will just respond in request order and ignore any priority signals. [Andy Davies](./contributors#andydavies) has [a page tracking the status of various CDNs for Patrick's test](https://github.com/andydavies/http2-prioritization-issues). The HTTP Archive identifies when a CDN is used as part of its crawl, and merging these two datasets tells us the percentage of pages using a CDN that passes or fails.
<figure markdown>
| CDN               | Prioritizes correctly? | Desktop | Mobile | Both   |
| ----------------- | ---------------------- | ------- | ------ | ------ |
| Not using CDN     | Unknown                | 57.81%  | 60.41% | 59.21% |
| Cloudflare        | Pass                   | 23.15%  | 21.77% | 22.40% |
| Google            | Fail                   | 6.67%   | 7.11%  | 6.90%  |
| Amazon CloudFront | Fail                   | 2.83%   | 2.38%  | 2.59%  |
| Fastly            | Pass                   | 2.40%   | 1.77%  | 2.06%  |
| Akamai            | Pass                   | 1.79%   | 1.50%  | 1.64%  |
|                   | Unknown                | 1.32%   | 1.58%  | 1.46%  |
| WordPress         | Pass                   | 1.12%   | 0.99%  | 1.05%  |
| Sucuri Firewall   | Fail                   | 0.88%   | 0.75%  | 0.81%  |
| Incapsula         | Fail                   | 0.39%   | 0.34%  | 0.36%  |
| Netlify           | Fail                   | 0.23%   | 0.15%  | 0.19%  |
| OVH CDN           | Unknown                | 0.19%   | 0.18%  | 0.18%  |
<figcaption>{{ figure_link(caption="HTTP/2 prioritization support in common CDNs.") }}</figcaption>
</figure>
Figure 20.14 shows that a fairly significant portion of traffic is subject to the identified issue, totaling 26.82% on desktop and 27.83% on mobile. How much of a problem this is depends on exactly how the pages load, and whether high priority resources are discovered late or not for the sites affected.
{{ figure_markup(
  caption="Percentage of mobile requests with sub-optimal HTTP/2 prioritization.",
  content="27.83%",
  classes="big-number"
)
}}
Another issue is the `Upgrade` HTTP header being used incorrectly. A web server can respond to a request with an `Upgrade` HTTP header suggesting that it supports a better protocol that the client might wish to use (e.g. advertise HTTP/2 to a client only using HTTP/1.1). You might think this would be useful as a way of informing the browser that a server supports HTTP/2, but since browsers only support HTTP/2 over HTTPS, and since use of HTTP/2 can be negotiated through the HTTPS handshake, the use of this `Upgrade` header for advertising HTTP/2 is pretty limited (for browsers, at least).
Worse than that is when a server sends an `Upgrade` header in error. This could be because a backend server supporting HTTP/2 is sending the header and an HTTP/1.1-only edge server is blindly forwarding it to the client. Apache emits the `Upgrade` header when `mod_http2` is enabled but HTTP/2 is not being used, and an nginx instance sitting in front of such an Apache instance happily forwards this header even when nginx does not support HTTP/2. This false advertising then leads to clients trying (and failing!) to use HTTP/2 as they are advised to.
108 sites use HTTP/2 while also suggesting to upgrade to it in the `Upgrade` header. A further 12,767 sites on desktop (15,235 on mobile) suggest upgrading an HTTP/1.1 connection delivered over HTTPS to HTTP/2, when it is clear this was either not available or already in use. These are a small minority of the 4.3 million desktop sites and 5.3 million mobile sites crawled, but it shows that this is still an issue affecting a number of sites out there. Browsers handle this inconsistently, with Safari in particular attempting to upgrade, getting itself in a mess, and then refusing to display the site at all.
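If you want to check your own site for this misconfiguration (my own suggestion, using standard tools), look for the header in a response:

```
curl -sI https://example.com/ | grep -i '^upgrade'
```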
This is all before we get into the few sites that recommend upgrading to `http1.0`, `http://1.1`, or even `-all,+TLSv1.3,+TLSv1.2`. There are clearly some typos in web server configurations going on here!
There are further implementation issues we could look at. For example, HTTP/2 is much stricter about HTTP header names, rejecting the whole request if you respond with spaces, colons, or other invalid HTTP header names. The header names are also converted to lowercase, which catches some by surprise if their application assumed a certain capitalization. This was never guaranteed previously, as HTTP/1.1 specifically states that [header names are case-insensitive](https://tools.ietf.org/html/rfc7230#section-3.2), but still some have depended on this. The HTTP Archive could potentially be used to identify these issues as well, though some of them will not be apparent on the home page; we did not delve into that this year.
## HTTP/3
The world does not stand still, and despite HTTP/2 not having even reached its fifth birthday, people are already seeing it as old news and getting more excited about its successor, [HTTP/3](https://tools.ietf.org/html/draft-ietf-quic-http). HTTP/3 builds on the concepts of HTTP/2, but moves from working over the TCP connections that HTTP has always used to a UDP-based protocol called [QUIC](https://datatracker.ietf.org/wg/quic/about/). This allows us to fix the one case where HTTP/2 is slower than HTTP/1.1, which is when there is high packet loss and the guaranteed nature of TCP holds up all streams and throttles them all back. It also allows us to address some TCP and HTTPS inefficiencies, such as consolidating on one handshake for both, and supporting many ideas for TCP that have proven hard to implement in real life (TCP fast open, 0-RTT, etc.).
HTTP/3 also cleans up some of the overlap between TCP and HTTP/2 (e.g. flow control being implemented in both layers), but conceptually it is very similar to HTTP/2. Web developers who understand and have optimized for HTTP/2 should have to make no further changes for HTTP/3. Server operators will have more work to do, however, as the differences between TCP and QUIC are much more groundbreaking. The rollout of HTTP/3 may take considerably longer than HTTP/2, and initially it may be limited to those with certain expertise in the field, such as CDNs.
QUIC has been implemented by Google for a number of years, and it is now undergoing a similar standardization process that SPDY did on its route to HTTP/2. QUIC has ambitions beyond just HTTP, but for the moment that is the use case it is currently being used for. Just as this chapter was being written, [Cloudflare, Chrome, and Firefox all announced HTTP/3 support](https://blog.cloudflare.com/http3-the-past-present-and-future/), despite HTTP/3 not being fully finalized or approved as a standard yet. This is welcome, as QUIC support has been somewhat lacking outside of Google until recently, and it definitely lags behind where SPDY and HTTP/2 support was at a similar stage of standardization.
Because HTTP/3 uses QUIC over UDP rather than TCP, it makes the discovery of HTTP/3 support a bigger challenge than HTTP/2 discovery. With HTTP/2, we can mostly use the HTTPS handshake, but as HTTP/3 is on a completely different connection, that is not an option here. HTTP/2 also used the `Upgrade` HTTP header to inform the browser of HTTP/2 support, and although that was not that useful for HTTP/2, a similar mechanism has been put in place for QUIC that is more useful. The *alternative services* HTTP header (`alt-svc`) advertises alternative protocols that can be used on completely different connections, as opposed to alternative protocols that can be used on this connection, which is what the `Upgrade` HTTP header is used for.
{{ figure_markup(
  caption="Percentage of mobile sites supporting QUIC.",
  content="8.38%",
  classes="big-number"
)
}}
Analysis of this header shows that 7.67% of desktop sites and 8.38% of mobile sites already support QUIC, which roughly represents Google's percentage of traffic, as they have been using it for a while. And 0.04% are already supporting HTTP/3. I would imagine that by next year's Web Almanac, this number will have increased significantly.
## Conclusion
This analysis of the statistics available in the HTTP Archive project shows what many of us in the HTTP community are already aware of: HTTP/2 is here and proving to be very popular. It is already the dominant protocol in terms of number of requests, but it has not quite overtaken HTTP/1.1 in terms of the number of sites that support it. The long tail of the internet means that it often takes an exponentially longer time to make noticeable gains on the less well-maintained sites than on the high-profile, high-volume sites.
We have also talked about how it is (still!) not easy to get HTTP/2 support in some installations. Server developers, operating system distributors, and end customers all have a part to play in pushing to make that easier. Tying software to the operating system always lengthens deployment time. In fact, one of the very reasons for QUIC is to break a similar barrier with deploying TCP changes. In many instances, there is no real reason to tie web server versions to operating systems. Apache (to use one of the more popular examples) will run with HTTP/2 support on older operating systems, but getting an up-to-date version onto a server should not require the expertise or risk it currently does. Nginx does very well here, hosting repositories for common Linux flavors to make installation easy. If the Apache team (or the Linux distribution vendors) do not offer something similar, then Apache usage will continue to struggle and shrink, shedding its relevance and reinforcing its reputation of being old and slow (based on older installs), even though up-to-date versions have one of the best HTTP/2 implementations. We see that as less of an issue for IIS, since it is usually the preferred web server on the Windows side.
Other than that, HTTP/2 has been a relatively easy upgrade path, which is why it has had the strong uptake we have seen. For the most part, it is a painless switch-on and, therefore, for most, it has turned out to be a hassle-free performance increase that requires little thought once your server supports it. The devil is in the details, though (as always), and small differences between server implementations can result in better or worse HTTP/2 usage and, ultimately, end user experience. There have also been a number of bugs and even [security issues](https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md), as is to be expected with any new protocol.
Ensuring you are using a strong, up-to-date, well-maintained implementation of any newish protocol like HTTP/2 will keep you on top of these issues. However, that can take expertise and management. The rollout of QUIC and HTTP/3 will likely be even more complicated and require more expertise. Perhaps this is best left to third-party service providers like CDNs, who have this expertise and can give your site easy access to these features? However, even when left to the experts, this is not a sure thing (as the prioritization statistics show), but if you choose your server provider wisely and engage with them on what your priorities are, then it should be an easier implementation.
On that note, it would be great if the CDNs prioritized these issues (pun definitely intended!), though I suspect that, with the advent of a new prioritization method in HTTP/3, many will hold tight. The next year will prove yet more interesting times in the HTTP world.
| 70.170213 | 590 | 0.790365 | jpn_Jpan | 0.666185 |
d05dc6be922f8341c536a42e050ad0570aa48de0 | 12,023 | md | Markdown | README.md | flyyee/cep-moviereviewsite | 8f581bb002986beb32324e7f24bc71bcf9f70fb5 | [
"MIT"
] | null | null | null | README.md | flyyee/cep-moviereviewsite | 8f581bb002986beb32324e7f24bc71bcf9f70fb5 | [
"MIT"
] | null | null | null | README.md | flyyee/cep-moviereviewsite | 8f581bb002986beb32324e7f24bc71bcf9f70fb5 | [
"MIT"
] | null | null | null | # cep-moviereviewsite
Movie review site that also displays movie information. Made with Flask, with a PostgreSQL database hosted on Heroku.
__Task:__
Summative 3: Movie Review Website
================
Objectives
----------
- Become more comfortable with Python.
- Gain experience with Flask.
- Learn to use SQL to interact with databases.
Overview
--------
In this project, you’ll build a movie review website. Users will be able
to register for your website and then log in using their username and
password. Once they log in, they will be able to search for movies, leave
reviews for individual movies, and see the reviews made by other people.
You'll also use a third-party API from OMDb, an online movie database, to
pull in ratings from a broader audience. Finally, users will
be able to query for movie details and movie reviews programmatically via
your website’s API.
Getting Started
---------------
### PostgreSQL
For this project, you’ll need to set up a PostgreSQL database to use
with our application. It’s possible to set up PostgreSQL locally on your
own computer, but for this project, we’ll use a database hosted by
[Heroku](https://www.heroku.com/), an online web hosting service.
1. Navigate to [https://www.heroku.com/](https://www.heroku.com/), and
create an account if you don’t already have one.
2. On Heroku’s Dashboard, click “New” and choose “Create new app.”
3. Give your app a name, and click “Create app.”
4. On your app’s “Overview” page, click the “Configure Add-ons” button.
5. In the “Add-ons” section of the page, type in and select “Heroku
Postgres.”
6. Choose the “Hobby Dev - Free” plan, which will give you access to a
free PostgreSQL database that will support up to 10,000 rows of
data. Click “Provision.”
7. Now, click the “Heroku Postgres :: Database” link.
8. You should now be on your database’s overview page. Click on
“Settings”, and then “View Credentials.” This is the information
you’ll need to log into your database. You can access the database
via [Adminer](https://adminer.cs50.net/), filling in the server (the
“Host” in the credentials list), your username (the “User”), your
password, and the name of the database, all of which you can find on
the Heroku credentials page.
Alternatively, if you install
[PostgreSQL](https://www.postgresql.org/download/) on your own computer,
you should be able to run `psql URI` on the command
line, where the `URI` is the link provided in the
Heroku credentials list.
### Python and Flask
1. First, make sure you install a copy of
[Python](https://www.python.org/downloads/). For this course, you
should be using Python version 3.6 or higher.
2. You’ll also need to install `pip`. If you
downloaded Python from Python’s website, you likely already have
`pip` installed (you can check by running
`pip` in a terminal window). If you don’t have
it installed, be sure to [install
it](https://pip.pypa.io/en/stable/installing/) before moving on!
To try running your first Flask application:
1. Download the `summative3.zip` from
[https://github.com/lorrainewang78/webprogramming/blob/master/summative3.zip](https://github.com/lorrainewang78/webprogramming/blob/master/summative3.zip)
and unzip it.
2. In a terminal window, navigate into your
`summative3` directory.
3. Run `pip3 install -r requirements.txt` in your
terminal window to make sure that all of the necessary Python
packages (Flask and SQLAlchemy, for instance) are installed.
4. Set the environment variable `FLASK_APP` to be
`application.py`. On a Mac or on Linux, the
command to do this is
`export FLASK_APP=application.py`. On Windows,
the command is instead
`set FLASK_APP=application.py`. You may
optionally want to set the environment variable
`FLASK_DEBUG` to `1`, which
will activate Flask’s debugger and will automatically reload your
web application whenever you save a change to a file.
5. Set the environment variable `DATABASE_URL` to
be the URI of your database, which you should be able to see from
the credentials page on Heroku.
6. Run `flask run` to start up your Flask
application.
7. If you navigate to the URL provided by `flask`,
you should see the text `"Summative 3 TODO"`!
### OMDb API
OMDb (The Open Movie Database) is a RESTful web service for obtaining movie information, and we'll be using their API
in this project to get access to their review data for individual movies.
1. Go to [http://www.omdbapi.com/](http://www.omdbapi.com/).
2. Navigate to
[http://www.omdbapi.com/apikey.aspx](http://www.omdbapi.com/apikey.aspx)
    and apply for an API key. Select "FREE! (1,000 daily limit)". Fill in your email, first name, last name, and a short description of your project.
3. You will be sent a verification link to activate your key via email. Click on the URL provided to activate your key.
4. You can now use that API key to make requests to the OMDb API, documented at [http://www.omdbapi.com/#parameters](http://www.omdbapi.com/#parameters). In
particular, Python code like the below
```python
import requests

# requests URL-encodes parameters itself, so pass the plain title
# rather than a pre-encoded "The+Godfather".
res = requests.get("http://www.omdbapi.com/", params={"apikey": "<replace-with-your-own-key>", "t": "The Godfather"})
print(res.json())
```
where `apikey` is your API key, will give you the
review and rating data for the movie with the provided movie title. In
particular, you might see something like this dictionary:
```json
{
"Title": "The Godfather",
"Year": "1972",
"Rated": "R",
"Released": "24 Mar 1972",
"Runtime": "175 min",
"Genre": "Crime, Drama",
"Director": "Francis Ford Coppola",
"Writer": "Mario Puzo (screenplay by), Francis Ford Coppola (screenplay by), Mario Puzo (based on the novel by)",
"Actors": "Marlon Brando, Al Pacino, James Caan, Richard S. Castellano",
"Plot": "The aging patriarch of an organized crime dynasty transfers control of his clandestine empire to his reluctant son.",
"Language": "English, Italian, Latin",
"Country": "USA",
"Awards": "Won 3 Oscars. Another 24 wins & 28 nominations.",
"Poster": "https://m.media-amazon.com/images/M/MV5BM2MyNjYxNmUtYTAwNi00MTYxLWJmNWYtYzZlODY3ZTk3OTFlXkEyXkFqcGdeQXVyNzkwMjQ5NzM@._V1_SX300.jpg",
"Ratings": [
{
"Source": "Internet Movie Database",
"Value": "9.2/10"
},
{
"Source": "Rotten Tomatoes",
"Value": "98%"
},
{
"Source": "Metacritic",
"Value": "100/100"
}
],
"Metascore": "100",
"imdbRating": "9.2",
"imdbVotes": "1,425,400",
"imdbID": "tt0068646",
"Type": "movie",
"DVD": "09 Oct 2001",
"BoxOffice": "N/A",
"Production": "Paramount Pictures",
"Website": "http://www.thegodfather.com",
"Response": "True"
}
```
Requirements
------------
Alright, it’s time to actually build your web application! Here are the
requirements:
- **Registration**: Users should be able to register for your website,
providing (at minimum) a username and password.
- **Login**: Users, once registered, should be able to log in to your
website with their username and password.
- **Logout**: Logged in users should be able to log out of the site.
- **Import**: Provided for you in this project is a file called
`movies.csv`, which is a spreadsheet in CSV
    format of 250 different movies. Each one has a movie title, release year, runtime, IMDB ID, and
IMDB rating. In a Python file called
`import.py` separate from your web application,
write a program that will take the movies and import them into your
PostgreSQL database. You will first need to decide what table(s) to
create, what columns those tables should have, and how they should
relate to one another. Run this program by running
`python3 import.py` to import the movies into
    your database, and submit this program with the rest of your project
    code. (A sketch of one possible approach appears after this list.)
- **Search**: Once a user has logged in, they should be taken to a
page where they can search for a movie. Users should be able to type
    in the title of a movie, an IMDB ID, or a year of release. After performing the search, your website should display a
list of possible matching results, or some sort of message if there
were no matches. If the user typed in only part of a title, your search page should find matches for those as well!
- **Movie Page**: When users click on a movie from the results of the
search page, they should be taken to a movie page, with details about
the movie: its title, IMDB id, year of release, runtime, IMDB rating, and any
reviews that users have left for the movie on your website.
- **Review Submission**: On the movie page, users should be able to
    submit a review, consisting of a rating on a scale of 1 to 10, as
well as a text component to the review where the user can write
their opinion about a movie. Users should not be able to submit
multiple reviews for the same movie.
- **OMDB Data**: On your movie page, you could also
    display (if available) the movie plot, genre, director, actors, and poster, or any other relevant information.
- **API Access**: If users make a GET request to your website’s
`/api/<imdb_id>` route, where
`<imdb_id>` is the IMDB id, your website should
return a JSON response containing the movie’s title, year of release,
imdb id, director, actors, imdb_rating, review count, and average score. The
resulting JSON should follow the format:
```python
{
"title": "The Godfather",
"year": 1972,
"imdb_id": "tt0068646",
"director": "Francis Ford Coppola",
"actors": "Marlon Brando, Al Pacino, James Caan, Richard S. Castellano",
"imdb_rating": 9.2
"review_count": 28,
"average_score": 9.8,
}
```
If the requested IMDB id isn’t in your database, your website should
    return a 404 error. (A minimal route sketch appears below, after the requirements.)
- You should be using raw SQL commands (as via SQLAlchemy’s
`execute` method) in order to make database
queries. You should not use the SQLAlchemy ORM (if familiar with it)
for this project.
- In `README.md`, include a short writeup
describing your project, what’s contained in each file, and
(optionally) any other additional information the staff should know
about your project.
- If you’ve added any Python packages that need to be installed in
order to run your web application, be sure to add them to
`requirements.txt`!
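As a rough sketch of the import requirement above (one possible approach, not the required design — it assumes a pre-created `movies` table and that the CSV columns are ordered title, year, runtime, IMDB ID, IMDB rating; check `movies.csv` and your own schema first):

```python
import csv
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# DATABASE_URL must be set to your Heroku Postgres URI (see above).
engine = create_engine(os.getenv("DATABASE_URL"))
db = scoped_session(sessionmaker(bind=engine))

def main():
    with open("movies.csv") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row, if the file has one
        for title, year, runtime, imdb_id, imdb_rating in reader:
            # Raw SQL via SQLAlchemy's execute method, per the requirements.
            db.execute(
                "INSERT INTO movies (title, year, runtime, imdb_id, imdb_rating) "
                "VALUES (:title, :year, :runtime, :imdb_id, :imdb_rating)",
                {"title": title, "year": int(year), "runtime": int(runtime),
                 "imdb_id": imdb_id, "imdb_rating": float(imdb_rating)},
            )
    db.commit()

if __name__ == "__main__":
    main()
```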
Beyond these requirements, the design, look, and feel of the website are
up to you! You’re also welcome to add additional features to your
website, so long as you meet the requirements laid out in the above
specification!
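To make the API requirement above concrete, here is a minimal sketch of the `/api/<imdb_id>` route. The `movies` and `reviews` tables and their columns are the same hypothetical schema as in the import sketch (and `db` is the same SQLAlchemy session); the director and actors fields would come from an OMDb lookup and are omitted for brevity:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/<imdb_id>")
def movie_api(imdb_id):
    # One raw SQL query joining movie details with review statistics.
    row = db.execute(
        "SELECT m.title, m.year, m.imdb_id, m.imdb_rating, "
        "       COUNT(r.id) AS review_count, AVG(r.rating) AS average_score "
        "FROM movies m LEFT JOIN reviews r ON r.movie_id = m.id "
        "WHERE m.imdb_id = :imdb_id GROUP BY m.id",
        {"imdb_id": imdb_id},
    ).fetchone()
    if row is None:
        # Unknown IMDB id: the spec requires a 404 response.
        return jsonify({"error": "Invalid IMDB id"}), 404
    return jsonify(
        title=row.title,
        year=row.year,
        imdb_id=row.imdb_id,
        imdb_rating=float(row.imdb_rating),
        review_count=row.review_count,
        average_score=float(row.average_score) if row.average_score is not None else None,
    )
```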
Hints
-----
- At minimum, you’ll probably want at least one table to keep track of
users, one table to keep track of movies, and one table to keep track
of reviews. But you’re not limited to just these tables, if you
think others would be helpful!
- In terms of how to “log a user in,” recall that you can store
information inside of the `session`, which can
store different values for different users. In particular, if each
user has an `id`, then you could store that
`id` in the session (e.g., in
`session["user_id"]`) to keep track of which
user is currently logged in.
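For example, a login route might look something like this sketch (table and column names such as `users` and `password_hash` are illustrative, and `db` is the same SQLAlchemy session as in the import sketch):

```python
import os

from flask import Flask, redirect, render_template, request, session
from werkzeug.security import check_password_hash

app = Flask(__name__)
app.secret_key = os.getenv("SECRET_KEY")  # sessions require a secret key

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        user = db.execute(
            "SELECT id, password_hash FROM users WHERE username = :u",
            {"u": request.form["username"]},
        ).fetchone()
        if user and check_password_hash(user.password_hash, request.form["password"]):
            session["user_id"] = user.id  # the user is now "logged in"
            return redirect("/")
    return render_template("login.html")

@app.route("/")
def index():
    if "user_id" not in session:  # gate pages behind login
        return redirect("/login")
    return render_template("index.html")
```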
FAQs
----
### For the API, do the JSON keys need to be in order?
Any order is fine!
### `AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'`
Make sure that you’ve set your `DATABASE_URL`
environment variable before running `flask run`!
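On a Mac or Linux, that looks like the line below (the URI is a placeholder — copy the real one from your Heroku credentials page; on Windows, use `set` instead of `export`):

```
export DATABASE_URL=postgres://user:password@host:5432/dbname
```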
How to Submit
-------------
1. Submit using [Summative Submission Link](https://tinyurl.com/2019-y3cep-summative).
2. [Record a 1- to 5-minute
screencast](https://www.howtogeek.com/205742/how-to-record-your-windows-mac-linux-android-or-ios-screen/)
in which you demonstrate your app’s functionality and/or walk
viewers through your code. [Upload that video to
YouTube](https://www.youtube.com/upload) (as unlisted or public, but
not private) or somewhere else.
| 42.185965 | 158 | 0.7133 | eng_Latn | 0.991403 |
d05dd917d60214b7f6200a19a13ed31ffa250ebb | 4,305 | md | Markdown | docs/2014/analysis-services/table-import-wizard-reference-ssas.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/table-import-wizard-reference-ssas.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/table-import-wizard-reference-ssas.md | CeciAc/sql-docs.fr-fr | 0488ed00d9a3c5c0a3b1601a143c0a43692ca758 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-03-04T05:50:54.000Z | 2020-03-04T05:50:54.000Z | ---
title: Table Import Wizard Reference (SSAS) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: conceptual
f1_keywords:
- sql12.asvs.bidtoolset.tableimportwizard.f1
ms.assetid: 2ac05e89-c002-4adc-86c7-438df70e9ed5
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: ef0cb7dfe9b3fbbca1cda3833506e56cc6bb9681
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/15/2019
ms.locfileid: "66067910"
---
# <a name="table-import-wizard-reference-ssas"></a>Référence de l'Assistant Importation de table (SSAS)
Cette section fournit de l'aide sur l' **Assistant Importation de table**. Cet Assistant vous permet d'importer des données à partir de diverses sources de données. Pour accéder à l'Assistant depuis le générateur de modèles, dans le menu **Modèle** , cliquez sur **Importer à partir de la source de données**.
## <a name="pages"></a>Pages
-   [Advanced Settings (SSAS)](advanced-settings-ssas.md)
-   [Analysis Services MDX Query Designer (SSAS)](analysis-services-mdx-query-designer-ssas.md)
-   [Choose How to Import the Data (SSAS)](choose-how-to-import-the-data-ssas.md)
-   [Connect to a Report or Data Feed (SSAS)](connect-to-a-report-or-data-feed-ssas.md)
-   [Connect to a Data Source (SSAS)](connect-to-a-data-source-ssas.md)
-   [Connect to a DB2 Database (SSAS)](connect-to-a-db2-database-ssas.md)
-   [Connect to a Flat File (SSAS)](connect-to-a-flat-file-ssas.md)
-   [Connect to a Microsoft Access Database (SSAS)](connect-to-a-microsoft-access-database-ssas.md)
-   [Connect to a Microsoft Excel File (SSAS)](connect-to-a-microsoft-excel-file-ssas.md)
-   [Connect to an Azure SQL Database (SSAS)](connect-to-an-azure-sql-database-ssas.md)
-   [Connect to a Microsoft SQL Server Database (SSAS)](connect-to-a-microsoft-sql-server-database-ssas.md)
-   [Connect to a Microsoft SQL Server Parallel Data Warehouse (SSAS)](connect-to-a-microsoft-sql-server-parallel-data-warehouse-ssas.md)
-   [Connect to Microsoft SQL Server Analysis Services (SSAS)](connect-to-microsoft-sql-server-analysis-services-ssas.md)
-   [Connect to an Informix Database (SSAS)](connect-to-an-informix-database-ssas.md)
-   [Connect to an Oracle Database (SSAS)](connect-to-an-oracle-database-ssas.md)
-   [Connect to a Sybase Database (SSAS)](connect-to-a-sybase-database-ssas.md)
-   [Connect to a Teradata Database (SSAS)](connect-to-a-teradata-database-ssas.md)
-   [Data Source Credentials (SSAS)](data-source-credentials-ssas.md)
-   [Details (SSAS)](details-ssas.md)
-   [Filter Details (SSAS)](filter-details-ssas.md)
-   [Impersonation Information Dialog Box (Table Import Wizard)](impersonation-information-dialog-box-table-import-wizard.md)
-   [Importing (SSAS)](importing-ssas.md)
-   [Import Summary (SSAS)](import-summary-ssas.md)
-   [Preview Selected Table (SSAS)](preview-selected-table-ssas.md)
-   [Relational Query Designer (SSAS)](relational-query-designer-ssas.md)
-   [Select Tables and Views (SSAS)](select-tables-and-views-ssas.md)
-   [Select Tables and Views (Data Feeds) (SSAS)](select-tables-and-views-data-feeds-ssas.md)
-   [Set Advanced Properties (SSAS)](set-advanced-properties-ssas.md)
-   [Specify a Connection String (SSAS)](specify-a-connection-string-ssas.md)
-   [Specify a SQL or MDX Query (SSAS)](specify-a-sql-or-mdx-query-ssas.md)
## <a name="see-also"></a>Voir aussi
[Importer des données (SSAS Tabulaire)](import-data-ssas-tabular.md)
| 47.307692 | 313 | 0.709872 | fra_Latn | 0.570139 |
d05df14fbdd07bfd6b9f3c3afa2872c85cab0d1e | 41,473 | md | Markdown | DataManagement/dataQuality.md | kaponte75/devsascom-rest-api-samples | 067ad19b81f228dc67a8c8f6f902cfee5f1dd2f1 | [
"Apache-2.0"
] | 27 | 2019-08-02T13:58:32.000Z | 2022-02-23T17:33:31.000Z | DataManagement/dataQuality.md | kaponte75/devsascom-rest-api-samples | 067ad19b81f228dc67a8c8f6f902cfee5f1dd2f1 | [
"Apache-2.0"
] | 1 | 2019-10-22T17:18:13.000Z | 2019-10-23T20:41:52.000Z | DataManagement/dataQuality.md | kaponte75/devsascom-rest-api-samples | 067ad19b81f228dc67a8c8f6f902cfee5f1dd2f1 | [
"Apache-2.0"
] | 17 | 2019-10-21T17:21:59.000Z | 2022-02-02T20:40:28.000Z | # Data Quality API
The Data Quality API enables clients to reference information contained in a Quality Knowledge Base (QKB), such as:
* locales
* functions
* definitions
* tokens
Clients use the API to determine which operations are available for a given QKB. Other facilities, such as the DATA Step 1 CAS action, then perform transformations. Examples of SAS applications that use the Data Quality API include SAS Visual Data Builder and SAS Data Preparation.
#### Session Management
For operations within an execution context, endpoints accept an optional query parameter named 'sessionId'. Providing a session ID to these endpoints can improve performance dramatically.
If a session ID is not provided, the Data Quality API attempts to create a session. When processing is complete, the API attempts to destroy the session.
If a session ID is provided, the client is responsible for destroying the session after the completion of processing. Links provided by the Data Quality API propagate the session ID, with the exception of the self link.
#### Collection Behavior
Methods in the Data Quality API that operate on a collection support the following features:
* Pagination - The default page limit is set to 10 for all collections. This limit can be modified by specifying the ?limit query parameter when applicable.
* Filtering - Filtering of all collection resources is supported.
Filter support consists of {and, or, not, in, isNull, eq, ne, lt, le, gt, ge, endsWith, startsWith, contains}. String operations such as endsWith and contains use the string representation of the underlying field; all other operations use the native data type. Null values fail every filter except isNull. Use the ?filter query parameter to express filter criteria.
Basic selection is not supported.
* Sorting - Sorting of all collection resources is supported on a single field. All endpoints default to a name:ascending:tertiary criterion.
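For example, combining the features above, a client could page through just the English-prefixed locales, sorted by name (the context and QKB names here are illustrative):

```
GET /dataQuality/environments/CAS/contexts/cas/qkbs/QKB_CI29ALL/locales?filter=startsWith(name,'EN')&sortBy=name&start=0&limit=5
```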
#### Collection URIs
The following URIs return collections:
|Collection|
|---|
|/qkbs|
|/environments|
|/environments/{environmentName}/contexts/|
|/environments/{environmentName}/contexts/{contextName}/qkbs|
|/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales|
|/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions|
|/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions|
|/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions/{definitionName}/tokens|
### Data Quality Examples
* [Data Quality Root](#example-get-root)
* [Retrieving Supported Environments](#example-get-environments)
* [Retrieving Contexts](#example-get-contexts)
* [Retrieving a QKB](#example-get-qkbs)
* [Retrieving Locales](#example-get-locales)
* [Retrieving Functions](#example-get-functions)
* [Retrieving Definitions](#example-get-definitions)
* [Retrieving Tokens](#example-get-tokens)
* [Using a Client Session](#using-client-session)
The examples below show the general usage of Data Quality endpoints.
#### <a name='example-get-root'>Getting Data Quality Root</a>
GET request on root URI /dataQuality returns the root API links.
**Request**
```
GET http://www.example.com/dataQuality/
Headers:
* Accept: application/vnd.sas.api+json
```
**Response**
````
Headers:
* Content-Type: application/vnd.sas.api+json
Body:
{
"version": 1,
"links": [
{
"method": "GET",
"rel": "environments",
"href": "/dataQuality/environments",
"uri": "/dataQuality/environments",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "qkbs",
"href": "/dataQuality/qkbs",
"uri": "/dataQuality/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
}
]
}
````
#### <a name='example-get-environments'>Retrieving Supported Environments</a>
Performing a GET request using URI /dataQuality/environments returns a collection of all [`application/vnd.sas.data.quality.environment`](#application-vnd.sas.data.quality.environment) resources currently available.
**Request**
```
GET http://www.example.com/dataQuality/environments
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.environment+json
```
**Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "environments",
"accept": "application/vnd.sas.data.quality.environment",
"items": [
{
"version": 1,
"name": "CAS",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS",
"uri": "/dataQuality/environments/CAS",
"type": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments",
"uri": "/dataQuality/environments",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "contexts",
"href": "/dataQuality/environments/CAS/contexts",
"uri": "/dataQuality/environments/CAS/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
}
]
},
{
"version": 1,
"name": "compute",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/compute",
"uri": "/dataQuality/environments/compute",
"type": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments",
"uri": "/dataQuality/environments",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "contexts",
"href": "/dataQuality/environments/compute/contexts",
"uri": "/dataQuality/environments/compute/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments",
"uri": "/dataQuality/environments",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments?start=0&limit=2&sortBy=name",
"uri": "/dataQuality/environments?start=0&limit=2&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.environment"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/",
"uri": "/dataQuality/",
"type": "application/vnd.sas.api"
}
],
"version": 2
}
````
#### <a name='example-get-contexts'>Retrieving Contexts</a>
The list of available contexts for a particular environment can be retrieved using GET request against contexts URI.
Contexts URI for CAS environments: /dataQuality/environments/CAS/contexts
Contexts URI for compute environments: /dataQuality/environments/compute/contexts
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.context+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "contexts",
"accept": "application/vnd.sas.data.quality.context",
"items": [
{
"version": 1,
"name": "cas",
"type": "CAS",
"description": "controller",
"host": "rdcgrd001.unx.sas.com",
"state": "running",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/cas",
"uri": "/dataQuality/environments/CAS/contexts/cas",
"type": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts",
"uri": "/dataQuality/environments/CAS/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "qkbs",
"href": "/dataQuality/environments/CAS/contexts/cas/qkbs",
"uri": "/dataQuality/environments/CAS/contexts/cas/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts",
"uri": "/dataQuality/environments/CAS/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/",
"uri": "/dataQuality/environments/CAS/",
"type": "application/vnd.sas.collection"
}
],
"version": 2
}
````
**Compute Request**
```
GET http://www.example.com/dataQuality/environments/compute/contexts
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.context+json
```
**Compute Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "contexts",
"accept": "application/vnd.sas.data.quality.context",
"items": [
{
"version": 1,
"name": "62f8b825-bdf1-4cb5-a1ad-7355941e3046",
"type": "compute",
"description": "SAS Data Explorer compute context : Compute context to be used by my SAS Data Explorer",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046",
"type": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/compute/contexts",
"uri": "/dataQuality/environments/compute/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "qkbs",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/compute/contexts",
"uri": "/dataQuality/environments/compute/contexts",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/compute/contexts?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/compute/contexts?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.context"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/compute/",
"uri": "/dataQuality/environments/compute/",
"type": "application/vnd.sas.collection"
}
],
"version": 2
}
````
#### <a name='example-get-qkbs'>Retrieving a QKB</a>
The list of available QKBs for a particular context can be retrieved using GET request against QKBs URI.
QKBs URI for CAS environments: /dataQuality/environments/CAS/contexts/{casContext}/qkbs
QKBs URI for compute environments: /dataQuality/environments/compute/contexts/{computeContext}/qkbs
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts/casqkb/qkbs
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.qkb+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "qkbs",
"accept": "application/vnd.sas.data.quality.qkb",
"items": [
{
"version": 1,
"name": "QKB_CI29ALL",
"product": "CI",
"productVersion": "v29",
"isDefault": true,
"creationTimeStamp": "2019-04-15T11:37:59.000Z",
"context": "casqkb",
"environment": "CAS",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL",
"type": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "locales",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.locale"
}
]
    }
  ],
  "links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/",
"type": "application/vnd.sas.data.quality.context"
}
],
"version": 2
}
````
**Compute Request**
```
GET http://www.example.com/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.qkb+json
```
**Compute Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "qkbs",
"accept": "application/vnd.sas.data.quality.qkb",
"items": [
{
"version": 1,
"name": "31",
"productVersion": "v31",
"isDefault": true,
"context": "62f8b825-bdf1-4cb5-a1ad-7355941e3046",
"environment": "compute",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs/31",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs/31",
"type": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "locales",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs/31/locales",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs/31/locales",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.locale"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/qkbs?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.qkb"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/",
"uri": "/dataQuality/environments/compute/contexts/62f8b825-bdf1-4cb5-a1ad-7355941e3046/",
"type": "application/vnd.sas.data.quality.context"
}
],
"version": 2
}
````
#### <a name='example-get-locales'>Retrieving Locales</a>
The list of available locales for a particular QKB can be retrieved using GET request against locales URI.
Locales URI for CAS environments: /dataQuality/environments/CAS/contexts/{casContext}/qkbs/{qkb}/locales
Locales URI for compute environments: /dataQuality/environments/compute/contexts/{computeContext}/qkbs/{qkb}/locales
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.locale+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "locales",
"accept": "application/vnd.sas.data.quality.locale",
"items": [
{
"version": 1,
"name": "ENUSA",
"description": "English (United States)",
"isDefault": true,
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA",
"type": "application/vnd.sas.data.quality.locale"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.locale"
},
{
"method": "GET",
"rel": "functions",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.function"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.locale"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales?filter=eq(name,%20'ENUSA')&start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales?filter=eq(name,%20'ENUSA')&start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.locale"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/",
"type": "application/vnd.sas.data.quality.qkb"
}
],
"version": 2
}
````
#### <a name='example-get-functions'>Retrieving Functions</a>
The list of available functions for a particular locale can be retrieved using GET request against functions URI /dataQuality/environments/{environment}/contexts/{casContext}/qkbs/{qkb}/locales/{locale}/functions
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.function+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "functions",
"accept": "application/vnd.sas.data.quality.function",
"items": [
{
"version": 1,
"name": "Case",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Case",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Case",
"type": "application/vnd.sas.data.quality.function"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.function"
},
{
"method": "GET",
"rel": "definitions",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Case/definitions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Case/definitions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
}
]
    }
  ],
  "links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.function"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.function"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/",
"type": "application/vnd.sas.data.quality.locale"
}
],
"version": 2
}
````
#### <a name='example-get-definitions'>Retrieving Definitions</a>
The list of available definitions for a particular function can be retrieved using GET request against definitions URI /dataQuality/environments/{environment}/contexts/{casContext}/qkbs/{qkb}/locales/{locale}/functions/{function}/definitions
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.definition+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "definitions",
"accept": "application/vnd.sas.data.quality.definition",
"items": [
{
"version": 1,
"name": "Account Number",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Account_Number",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Account_Number",
"type": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "tokens",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Account_Number/tokens",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Account_Number/tokens",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.token"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "next",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?sortBy=name&start=10&limit=10",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?sortBy=name&start=10&limit=10",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "last",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?sortBy=name&start=20&limit=10",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?sortBy=name&start=20&limit=10",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.definition"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/",
"type": "application/vnd.sas.data.quality.function"
}
],
"version": 2
}
````
#### <a name='example-get-tokens'>Retrieving Tokens</a>
The list of available tokens for a particular definition can be retrieved using GET request against tokens URI /dataQuality/environments/{environment}/contexts/{casContext}/qkbs/{qkb}/locales/{locale}/functions/{function}/definitions/{definition}/tokens
**CAS Request**
```
GET http://www.example.com/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.token+json
```
**CAS Response**
````
Headers:
* Content-Type: application/vnd.sas.collection+json
Body:
{
"name": "tokens",
"accept": "application/vnd.sas.data.quality.token",
"items": [
{
"version": 1,
"name": "Building/Site",
"links": [
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens/Building_Site",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens/Building_Site",
"type": "application/vnd.sas.data.quality.token"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.token"
}
]
}
],
"links": [
{
"method": "GET",
"rel": "collection",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.token"
},
{
"method": "GET",
"rel": "self",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens?start=0&limit=10&sortBy=name",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/tokens?start=0&limit=10&sortBy=name",
"type": "application/vnd.sas.collection",
"itemType": "application/vnd.sas.data.quality.token"
},
{
"method": "GET",
"rel": "up",
"href": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/",
"uri": "/dataQuality/environments/CAS/contexts/casqkb/qkbs/QKB_CI29ALL/locales/ENUSA/functions/Match/definitions/Address/",
"type": "application/vnd.sas.data.quality.definition"
}
],
"version": 2
}
````
#### <a name='using-client-session'>Using a Client Session</a>
The example below demonstrates passing a sessionId to be used for calls to the execution environment.
Endpoints under /dataQuality/environments/{environmentName}/contexts/{contextName} support an optional sessionId parameter, which enables the user to provide their own session to the Data Quality API. Providing a session can significantly improve performance. Management of the session in such cases is a client responsibility; the Data Quality API will not destroy the session after using it. The user must provide the sessionId that corresponds to the environmentName (CAS or compute) that the endpoint is executing against.
##### Query Parameters
The following query parameters are supported for calls to `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions/{definitionName}`, `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions/{definitionName}/tokens`, and `/dataQuality/environments/{environmentName}/contexts/{contextName}/qkbs/{qkbName}/locales/{localeName}/functions/{functionName}/definitions/{definitionName}/tokens/{tokenName}`:
| Name | Type | Description |
|--------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `?sessionId` | `string` | The unique identifier of the session that is used to access the data service provider's backing service. When this string is not specified, the data service provider creates a temporary session. After the request is complete, the temporary session is terminated. If this string is specified, all returned links, except the `self` link, contain the sessionId query parameter in their respective URIs. Also, they contain an additional session link to the application/vnd.sas.data.session resource that corresponds to the provided sessionId. |
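For example, a client that has already created a CAS session can reuse it on any of the endpoints above (the session ID shown is a placeholder):

```
GET http://www.example.com/dataQuality/environments/CAS/contexts/cas/qkbs?sessionId=6f3cb083-59b8-4e8b-b03c-6b0439fb0116
Headers:
* Accept: application/vnd.sas.collection+json
* Accept-Item: application/vnd.sas.data.quality.qkb+json
```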
version 2, last updated 26 Nov, 2019 | 46.96829 | 1,317 | 0.560172 | yue_Hant | 0.469371 |
d05eca4ffc27bca9bdd4d9dd93e0139e20f7c378 | 619 | md | Markdown | src/Test.CoreLib/readme.md | LaudateCorpus1/corert | d97588e5f88f152250558ab78f4f2076c6dafb5d | [
"MIT"
] | 2 | 2018-11-13T09:45:21.000Z | 2019-01-22T18:19:24.000Z | src/Test.CoreLib/readme.md | LaudateCorpus1/corert | d97588e5f88f152250558ab78f4f2076c6dafb5d | [
"MIT"
] | null | null | null | src/Test.CoreLib/readme.md | LaudateCorpus1/corert | d97588e5f88f152250558ab78f4f2076c6dafb5d | [
"MIT"
] | 1 | 2021-10-16T04:48:48.000Z | 2021-10-16T04:48:48.000Z | # Test.CoreLib
This is a minimum viable core library for test purposes.
## How to use this
Test.CoreLib gets built as part of the repo. After you build the repo:
1. Compile your test program against Test.CoreLib
```
csc /noconfig /nostdlib Program.cs /r:<repo_root>\bin\Product\Windows_NT.x64.Debug\Test.CoreLib\Test.CoreLib.dll /out:repro.exe
```
2. Compile the IL with ILC
Use ilc.exe that was built with the repo to compile the program.
```
ilc repro.exe -o:repro.obj -r:<repo_root>\bin\Product\Windows_NT.x64.Debug\Test.CoreLib\Test.CoreLib.dll --systemmodule Test.CoreLib
```
3. Use native linker to link
| 25.791667 | 132 | 0.752827 | eng_Latn | 0.803735 |
d05fe2e4db4e85b3412ef8c61a692b992eec1396 | 811 | md | Markdown | AlchemyInsights/calendarbooking.md | isabella232/OfficeDocs-AlchemyInsights-pr.sv-SE | f65a29c0e7e545e9826e9af01b9a5d4fe8a977d6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:08:09.000Z | 2021-04-21T00:13:49.000Z | AlchemyInsights/calendarbooking.md | isabella232/OfficeDocs-AlchemyInsights-pr.sv-SE | f65a29c0e7e545e9826e9af01b9a5d4fe8a977d6 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:25:58.000Z | 2022-02-09T06:54:54.000Z | AlchemyInsights/calendarbooking.md | isabella232/OfficeDocs-AlchemyInsights-pr.sv-SE | f65a29c0e7e545e9826e9af01b9a5d4fe8a977d6 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-09T20:31:36.000Z | 2021-10-09T10:38:14.000Z | ---
title: 398 Calendar – Booking
ms.author: chrisda
author: chrisda
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom: 398
ms.assetid: 9b23cfd7-bff8-4f86-bd94-e5fa07f6939f
ms.openlocfilehash: 95a8c31e0b91f85b70577279d95c458b0bb7b2d724b118c82d09fe96f09f78d2
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 08/05/2021
ms.locfileid: "54035898"
---
# <a name="issues-with-microsoft-bookings"></a>Problem med Microsoft Bookings
Information om hur du felsöker problem med den nya funktionen i Microsoft Bookings [finns i artiklarna om Bookings.](https://docs.microsoft.com/microsoft-365/bookings/bookings-faq)
| 33.791667 | 180 | 0.819975 | yue_Hant | 0.18035 |
d05fe89b97d0304252df275f1234eee1dc725c53 | 2,316 | md | Markdown | docs/pipeline.md | maximecolin/RubixML | 5300b98a98c0041383cf747dd13cf5dff56c134d | [
"MIT"
] | null | null | null | docs/pipeline.md | maximecolin/RubixML | 5300b98a98c0041383cf747dd13cf5dff56c134d | [
"MIT"
] | null | null | null | docs/pipeline.md | maximecolin/RubixML | 5300b98a98c0041383cf747dd13cf5dff56c134d | [
"MIT"
] | null | null | null | <span style="float:right;"><a href="https://github.com/RubixML/RubixML/blob/master/src/Pipeline.php">[source]</a></span>
# Pipeline
Pipeline is a meta-estimator capable of transforming an input dataset by applying a series of [Transformer](transformers/api.md) *middleware*. Under the hood, Pipeline will automatically fit the training set and transform any [Dataset](datasets/api.md) object supplied as an argument to one of the base estimator's methods before reaching the method context. With *elastic* mode enabled, Pipeline will update the fitting of [Elastic](transformers/api.md#elastic) transformers during partial training.
> **Note:** Since transformations are applied to dataset objects in-place (without making a copy of the data), using a dataset in a program after it has been run through Pipeline may have unexpected results. If you need to keep a *clean* dataset in memory you can clone the dataset object before calling the method on Pipeline that consumes it.
**Interfaces:** [Wrapper](wrapper.md), [Estimator](estimator.md), [Learner](learner.md), [Online](online.md), [Probabilistic](probabilistic.md), [Ranking](ranking.md), [Persistable](persistable.md), [Verbose](verbose.md)
**Data Type Compatibility:** Depends on base learner and transformers
## Parameters
| # | Param | Default | Type | Description |
|---|---|---|---|---|
| 1 | transformers | | array | A list of transformers to be applied in order. |
| 2 | estimator | | Estimator | An instance of a base estimator to receive the transformed data. |
| 3 | elastic | true | bool | Should we update the elastic transformers during partial training? |
## Additional Methods
Fit the transformer pipeline to a dataset:
```php
public fit(Dataset $dataset) : void
```
Update the fittings of elastic transformers:
```php
public update(Dataset $dataset) : void
```
Apply the transformer stack to a dataset:
```php
public preprocess(Dataset $dataset) : void
```
## Example
```php
use Rubix\ML\Pipeline;
use Rubix\ML\Transformers\MissingDataImputer;
use Rubix\ML\Transformers\OneHotEncoder;
use Rubix\ML\Transformers\PrincipalComponentAnalysis;
use Rubix\ML\Classifiers\SoftmaxClassifier;
$estimator = new Pipeline([
    new MissingDataImputer(),           // impute missing values first
    new OneHotEncoder(),                // then one hot encode categorical features
    new PrincipalComponentAnalysis(20), // and project onto 20 principal components
], new SoftmaxClassifier());
```
| 48.25 | 500 | 0.755181 | eng_Latn | 0.897133 |
d0608abf9c6b183851bf5c1d13b272b68ce42fdc | 1,788 | md | Markdown | _posts/2019-08-13-StructBERT-Incorporating-Language-Structures-into-Pre-training-for-Deep-Language-Understanding.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 7 | 2018-02-11T01:50:19.000Z | 2020-01-14T02:07:17.000Z | _posts/2019-08-13-StructBERT-Incorporating-Language-Structures-into-Pre-training-for-Deep-Language-Understanding.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | null | null | null | _posts/2019-08-13-StructBERT-Incorporating-Language-Structures-into-Pre-training-for-Deep-Language-Understanding.md | AMDS123/papers | 80ccfe8c852685e4829848229b22ba4736c65a7c | [
"MIT"
] | 4 | 2018-02-04T15:58:04.000Z | 2019-08-29T14:54:14.000Z | ---
layout: post
title: "StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding"
date: 2019-08-13 11:12:58
categories: arXiv_CL
tags: arXiv_CL Sentiment Attention Sentiment_Classification Inference Classification Language_Model
author: Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, Luo Si
mathjax: true
---
* content
{:toc}
##### Abstract
Recently, the pre-trained language model, BERT (Devlin et al.(2018)Devlin, Chang, Lee, and Toutanova), has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering. Inspired by the linearization exploration work of Elman (Elman(1990)), we extend BERT to a new model, StructBERT, by incorporating language structures into pretraining. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. As a result, the new model is adapted to different levels of language understanding required by downstream tasks. The StructBERT with structural pre-training gives surprisingly good empirical results on a variety of downstream tasks, including pushing the state-of-the-art on the GLUE benchmark to 84.5 (with Top 1 achievement on the Leaderboard at the time of paper submission), the F1 score on SQuAD v1.1 question answering to 93.0, the accuracy on SNLI to 91.7.
##### Abstract (translated by Google)
##### URL
[http://arxiv.org/abs/1908.04577](http://arxiv.org/abs/1908.04577)
##### PDF
[http://arxiv.org/pdf/1908.04577](http://arxiv.org/pdf/1908.04577)
| 68.769231 | 1,195 | 0.787472 | eng_Latn | 0.964972 |
d0610273b7b8b1f3fe38c2d01c863477a434b633 | 4,219 | md | Markdown | _posts/2018-10-07-ben-10-alien-force-season-01-tamil.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | null | null | null | _posts/2018-10-07-ben-10-alien-force-season-01-tamil.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | null | null | null | _posts/2018-10-07-ben-10-alien-force-season-01-tamil.md | tamilrockerss/tamilrockerss.github.io | ff96346e1c200f9507ae529f2a5acba0ecfb431d | [
"MIT"
] | 1 | 2020-11-08T11:13:29.000Z | 2020-11-08T11:13:29.000Z | ---
title: "Ben 10 Alien Force Season 01 – Tamil"
date: "2018-10-07"
---
[Ben 10: Alien Force poster](https://2.bp.blogspot.com/-Mpx3o_aXjpA/W7nd8OOPYTI/AAAAAAAAAHw/6136PB47YoEJifHiAaVjBFCI3EGTF6MxACLcBGAs/s1600/ben%2B10%2Bposter.jpg)
_**Ben 10: Alien Force**_ is an American animated television series created by team Man of Action (a group consisting of Duncan Rouleau, Joe Casey, Joe Kelly, and Steven T. Seagle), and produced by Cartoon Network Studios. It takes place five years after _Ben 10_ and takes a darker turn than its predecessor.
The series premiered on Cartoon Network in the United States on April 18, 2008 and on Teletoon in Canada on September 6, 2008, and ended on March 26, 2010. The series was originally produced under the working title of _Ben 10: Hero Generation_. The series ran for a total of three seasons and forty-six episodes with its final episode being aired on March 26, 2010.
[Banner image](https://1.bp.blogspot.com/-8UnneoQ6Tys/W7neMwzLmgI/AAAAAAAAAH8/I69Okcgypss_8JpA4fRFEZ4LhWEjDg3pACLcBGAs/s1600/_1538900536_607690997.gif)
Season 2 episodes will be uploaded soon…
## BEN 10 ALIEN FORCE SEASON-1 EP- 13 – X EQUALS BEN PLUS 2 – TAMIL
## BEN 10 ALIEN FORCE SEASON-1 EP- 6 – MAX OUT – TAMIL
[Episode thumbnail](https://4.bp.blogspot.com/-GzZ5luqhAYI/W4rAWDa066I/AAAAAAAAAMk/jxc39eP2nf83l4s7FA-StC1emjPcIedXACLcBGAs/s1600/MO_%2528607%2529.png)
## BEN 10 ALIEN FORCE SEASON-1 EP- 5 – ALL THAT GLITTERS – TAMIL
[Episode thumbnail](https://3.bp.blogspot.com/-W_XJlgXGHvY/W4gZcVUnCGI/AAAAAAAAAL0/ZvOqvAcQrMIaBzpNIeTH5-YxXHpTh4-3QCLcBGAs/s1600/ATG_%2528453%2529.png)
## BEN 10 ALIEN FORCE SEASON-1 EP- 4 – KEVIN’S BIG SCORE – TAMIL
[Episode thumbnail](https://4.bp.blogspot.com/-N67gzYdPy7E/W4ZdfRTyqOI/AAAAAAAAAKg/8cZp_MIhiHQu-Mwg5Qyg64nm2bJUYYjKQCLcBGAs/s1600/Kevin_Taedenite_Forced2.png)
## BEN 10 ALIEN FORCE SEASON-1 EP- 3 – EVERYBODY TALKS ABOUT THE WEATHER – TAMIL
[Episode thumbnail](https://1.bp.blogspot.com/-YkJxpkz8V1o/W4Kjz9WCmnI/AAAAAAAAAJw/I6pF1O71iYAECrQv3P_2odD-kV6UCA7rACLcBGAs/s1600/B10R1_%2528343%2529.png)
## BEN 10 ALIEN FORCE SEASON-1 EP- 2 – BEN10 RETURNS PART-2
[Episode thumbnail](https://4.bp.blogspot.com/-kXOIlfDsgEM/W36oSNQTktI/AAAAAAAAAI0/g2e-pw03Q64lu3zQPL5cEREj7ivhDqjiwCLcBGAs/s1600/maxresdefault%2B%25281%2529.jpg)
## BEN 10 ALIEN FORCE SEASON-1 EP- 1 – BEN10 RETURNS PART-1
[Episode thumbnail](https://1.bp.blogspot.com/-hLdCJortAgY/W36Atow5t1I/AAAAAAAAAIo/oaI4hInb1-M8LCCyzoxgfQt7fFRwpIleQCLcBGAs/s1600/Ben_10_fuerza_alienigena.png)
[WATCH](http://destyy.com/wKPjMu) | [DOWNLOAD](http://destyy.com/wKPjMu)
search tags:
————————————————————————————————————————–
ben10
ben10 tamil
ben 10 tamil
ben 10 in tamil
ben 10 classic in tamil
ben 10 ultimate alien in tamil
ben 10 omniverse in tamil
ben 10 alien force in tamil
ben 10 alien force episodes in tamil
watch ben 10 alien force tamil online
ben 10 alien force tamil watch online
ben 10 alien force tamil download
watch ben 10 movies in tamil
ben 10 alien force toon world tamil
————————————————————————————————————————–
This article was Originally posted by [Tentrockers](https://tentrockers.blogspot.com/)
| 52.08642 | 365 | 0.781702 | yue_Hant | 0.541997 |