---
title: Import Cassandra data into Azure Cosmos DB | Microsoft Docs
description: Learn how to use the CQL COPY command to copy Cassandra data into Azure Cosmos DB.
services: cosmos-db
author: govindk
manager: kfile
ms.service: cosmos-db
ms.component: cosmosdb-cassandra
ms.devlang: dotnet
ms.topic: tutorial
ms.date: 11/15/2017
ms.author: govindk
ms.custom: mvc
ms.openlocfilehash: 26731d80f5917f9d21aacafb5f8a79cfb02855af
ms.sourcegitcommit: 6116082991b98c8ee7a3ab0927cf588c3972eeaa
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/05/2018
ms.locfileid: "34795072"
---
# <a name="azure-cosmos-db-import-cassandra-data"></a>Azure Cosmos DB : Importer des données Cassandra
Ce didacticiel fournit des instructions sur l’importation de données Cassandra dans Azure Cosmos DB à l’aide de la commande COPY du langage de requête Cassandra (CQL).
Ce didacticiel décrit les tâches suivantes :
> [!div class="checklist"]
> * Retrieving your connection string
> * Importing data by using the cqlsh COPY command
> * Importing data by using the Spark connector
# <a name="prerequisites"></a>Prérequis
* Installez [Apache Cassandra](http://cassandra.apache.org/download/) et vérifiez plus particulièrement que *cqlsh* est présent.
* Augmentez le débit : la durée de la migration des données dépend de la quantité de débit que vous configurez pour vos tables. Veillez à augmenter le débit pour les migrations de données plus importantes. Une fois que vous avez effectué la migration, diminuez le débit pour réduire les coûts. Pour plus d’informations sur l’augmentation du débit dans le [portail Azure](https://portal.azure.com), consultez [Définir le débit des conteneurs Azure Cosmos DB](set-throughput.md).
* Activez SSL : Azure Cosmos DB obéit à des normes et à des exigences strictes en matière de sécurité. Veillez à activer SSL lorsque vous interagissez avec votre compte. Lorsque vous utilisez CQL avec SSH, vous pouvez fournir des informations sur SSL.
## <a name="find-your-connection-string"></a>Rechercher votre chaîne de connexion
1. Tout à gauche dans le [portail Azure](https://portal.azure.com), cliquez sur **Azure Cosmos DB**.
2. Dans le volet **Abonnements**, sélectionnez le nom de votre compte.
3. Cliquez sur **Chaîne de connexion**. Le volet droit contient toutes les informations dont vous avez besoin pour vous connecter à votre compte.

## <a name="use-cqlsh-copy"></a>Utiliser cqlsh COPY
Pour importer des données Cassandra dans Azure Cosmos DB à utiliser avec l’API Cassandra, suivez les instructions ci-dessous :
1. Connectez-vous à cqhsh en utilisant les informations de connexion à partir du portail.
2. Utilisez la [commande CQL COPY](http://cassandra.apache.org/doc/latest/tools/cqlsh.html#cqlsh) pour copier des données locales sur le point de terminaison d’API Apache Cassandra. Vérifiez que la source et la cible se trouvent dans le même centre de données afin de réduire les problèmes de latence.
### <a name="guide-for-moving-data-with-cqlsh"></a>Guide de déplacement des données avec cqlsh
1. Créez au préalable votre table et mettez-la à l’échelle :
* Par défaut, Azure Cosmos DB configure une nouvelle table d’API Cassandra avec 1 000 unités de requête par seconde (RU/s) (la création basée sur CQL est configurée sur 400 RU/s). Avant de commencer la migration à l’aide de cqlsh, créez au préalable toutes vos tables à partir du [portail Azure](https://portal.azure.com) ou de cqlsh.
* Dans le [portail Azure](https://portal.azure.com), augmentez le débit de vos tables du débit par défaut (400 ou 1 000 RU/s) à 10 000 RU/s pour la durée de la migration. Si le débit est plus élevé, vous pouvez éviter la limitation et procéder à une migration plus rapide. Avec une facturation à l’heure dans Azure Cosmos DB, vous pouvez réduire immédiatement le débit à l’issue de la migration afin de réduire les coûts.
2. Déterminez les frais de RU pour une opération. Pour cela, utilisez le Kit de développement logiciel (SDK) d’API Cassandra Azure Cosmos DB de votre choix. Cet exemple montre la version .NET pour obtenir les frais de RU.
```csharp
var tableInsertStatement = table.Insert(sampleEntity);
var insertResult = await tableInsertStatement.ExecuteAsync();

// The RU charge is returned in the custom payload of the response.
IDictionary<string, byte[]> customPayload = insertResult.Info.IncomingPayload;
foreach (string key in customPayload.Keys)
{
    byte[] valueInBytes = customPayload[key];
    string value = Encoding.UTF8.GetString(valueInBytes);
    Console.WriteLine($"CustomPayload: {key}: {value}");
}
```
3. Determine the latency from your machine to the Azure Cosmos DB service. If you are in an Azure datacenter, the latency should be in the low single-digit milliseconds. If you are not in an Azure datacenter, you can use psping or azurespeed.com to get an approximate latency from your location.
4. Calculate appropriate values for the parameters (NUMPROCESS, INGESTRATE, MAXBATCHSIZE, or MINBATCHSIZE) that yield good performance.
5. Run the final migration command. Running this command assumes that you started cqlsh by using the connection string information.
```
COPY exampleks.tablename FROM 'filefolderx/*.csv'
```
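As a rough, hypothetical sizing example for step 4: if the table is provisioned with 10,000 RU/s and step 2 shows that a single insert costs about 10 RU, the sustainable ingest rate is roughly 10,000 / 10 = 1,000 rows per second, so INGESTRATE should be set no higher than about that value. These numbers are illustrative only; substitute the RU charge you actually measured for your own schema.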
## <a name="use-spark-to-import-data"></a>Utiliser Spark pour importer des données
Pour les données résidant dans un cluster existant dans des machines virtuelles Azure, l’importation de données à l’aide de Spark est également une option envisageable. Spark doit être configuré en tant qu’intermédiaire pour l’ingestion unique ou régulière.
## <a name="next-steps"></a>Étapes suivantes
Dans ce didacticiel, vous avez appris comment effectuer les tâches suivantes :
> [!div class="checklist"]
> * Récupération de votre chaîne de connexion
> * Importer des données à l’aide de la commande cql copy
> * Importer à l’aide du connecteur Spark
Vous pouvez maintenant passer à la section Concepts pour plus d’informations sur Azure Cosmos DB.
> [!div class="nextstepaction"]
>[Niveaux de cohérence des données paramétrables dans Azure Cosmos DB](../cosmos-db/consistency-levels.md)
+++
Talk_date = ""
Talk_start_time = ""
Talk_end_time = ""
Title = "Why YOU should be a Role Model"
Type = "talk"
Speakers = ["victoria-pierce"]
+++
My goal is to help women in technology realize that the biggest blocker of our efforts to close the gender gap is the lack of women role models, and furthermore to realize the potential that each individual woman in tech has to be a role model by exhibiting confidence in what they do.
# pytalks
Repository for the accompanying slides and code from my talks (Python related).
# Bungiekord
A Kotlin wrapper for the Bungie API using ktor
# Go Go! Purin Teikoku

- **type**: manga
- **volumes**: 6
- **original-name**: GOGO!ぷりん帝国
- **start-date**: 1996-02-19
- **end-date**: 1996-02-19
## Tags
- comedy
- sci-fi
- seinen
## Authors
- Kubota, Makoto (Story & Art)
## Synopsis
The pudding empire's home planet only has a lifespan left of one year. Thus, they aim to invade Earth and make it their own... but first they've got to wipe out those annoying humans who are currently inhabiting the planet. Planning an invasion turns out to be much easier said than done, especially when your human-killing monsters start having problems with morale, sickness, family issues, and so forth...
## Links
- [My Anime list](https://myanimelist.net/manga/19413/Go_Go_Purin_Teikoku)
<properties
pageTitle="Azure Redis Cache samples | Microsoft Azure"
description="Learn how to use Azure Redis Cache."
services="redis-cache"
documentationCenter=""
authors="steved0x"
manager="douge"
editor=""/>
<tags
ms.service="cache"
ms.workload="tbd"
ms.tgt_pltfrm="cache-redis"
ms.devlang="multiple"
ms.topic="article"
ms.date="08/30/2016"
ms.author="sdanie"/>
# <a name="azure-redis-cache-samples"></a>Voorbeelden van Azure Cache bestand Vgx.
In dit onderwerp vindt u een lijst van Azure bestand Vgx. Cache steekproeven, die betrekking hebben op scenario's zoals verbinding maken met een cache, lezen en schrijven gegevens uit een cache en het bestand Vgx. ASP.NET-Cache-providers gebruiken. Enkele van de voorbeelden zijn downloadbare projecten en enkele bieden stapsgewijze instructies en codefragmenten opnemen maar geen koppelt aan een downloadbare project.
## <a name="hello-world-samples"></a>Hallo wereld voorbeelden
In de voorbeelden in deze sectie worden weergegeven de basisbeginselen van het verbinding maken met een exemplaar van de Azure bestand Vgx. Cache en lezen en schrijven van gegevens aan de cache met een verscheidenheid aan talen en bestand Vgx. clients.
De steekproef [Hallo allemaal](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) ziet hoe u verschillende cache bewerkingen van de [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) .NET-client uitvoeren.
In dit voorbeeld ziet u hoe u:
- Gebruik van verschillende verbindingsopties
- Lezen en schrijven van objecten en naar de cache bewerkingen synchroon en asynchroon
- Bestand Vgx. MGET/MSET-opdrachten gebruiken om waarden van de opgegeven toetsen te retourneren
- Bestand Vgx. transacties bewerkingen uitvoeren
- Werken met lijsten bestand Vgx. en gesorteerd sets weergegeven
- .NET-objecten met JsonConvert objectserializers opslaan
- Gebruik bestand Vgx. sets willen implementeren labelen
- Werken met Cluster bestand Vgx.
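For orientation, the snippet below is a minimal sketch of the connect/read/write pattern that the Hello world sample covers, using the StackExchange.Redis client. The host name and access key are placeholders, not values taken from the sample.

```csharp
using System;
using StackExchange.Redis;

class HelloRedis
{
    static void Main()
    {
        // Enable SSL when connecting to Azure Redis Cache.
        // Replace the host name and password with your own cache's values.
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
            "contoso5.redis.cache.windows.net,ssl=true,password=<access-key>");

        IDatabase cache = connection.GetDatabase();

        // Synchronous write, then read the value back.
        cache.StringSet("key1", "value1");
        Console.WriteLine(cache.StringGet("key1"));
    }
}
```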
For more information, see the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) documentation on GitHub, and see the [StackExchange.Redis.Tests](https://github.com/StackExchange/StackExchange.Redis/tree/master/StackExchange.Redis.Tests) unit tests for more usage scenarios.
[How to use Azure Redis Cache with Python](cache-python-get-started.md) shows how to get started with Azure Redis Cache using Python and the [redis-py](https://github.com/andymccurdy/redis-py) client.
[Work with .NET objects in the cache](cache-dotnet-how-to-use-azure-redis-cache.md#work-with-net-objects-in-the-cache) shows one way to serialize .NET objects so that you can write them to and read them from an Azure Redis Cache instance.
## <a name="use-redis-cache-as-a-scale-out-backplane-for-aspnet-signalr"></a>Cache bestand Vgx. Als een schaal out Backplane voor ASP.NET-SignalR gebruiken
Het [Bestand Vgx. Cache gebruiken als een schaal out Backplane voor ASP.NET SignalR](https://github.com/rustd/RedisSamples/tree/master/RedisAsSignalRBackplane) voorbeeld wordt gedemonstreerd hoe u Azure bestand Vgx. Cache als een backplane SignalR kunt gebruiken. Zie voor meer informatie over backplane, [SignalR Scaleout met het bestand Vgx.](http://www.asp.net/signalr/overview/performance/scaleout-with-redis).
## <a name="redis-cache-customer-query-sample"></a>Bestand Vgx voorbeeld van een query Cache klant.
In dit voorbeeld wordt gedemonstreerd vergelijkt prestaties tussen toegang krijgen tot gegevens in een cache en toegang tot gegevens uit de permanente opslag. In dit voorbeeld bevat twee projecten.
- [Demo bestand Vgx. Cache kunt hoe prestaties verbeteren door Caching van gegevens](https://github.com/rustd/RedisSamples/tree/master/RedisCacheCustomerQuerySample)
- [De Database en de Cache voor de demo vullen](https://github.com/rustd/RedisSamples/tree/master/SeedCacheForCustomerQuerySample)
## <a name="aspnet-session-state-and-output-caching"></a>ASP.NET-sessie staat en Caching van uitvoer
Het [Gebruik van Azure bestand Vgx. Cache voor de opslag van ASP.NET SessionState en OutputCache](https://github.com/rustd/RedisSamples/tree/master/SessionState_OutputCaching) voorbeeld wordt gedemonstreerd hoe u met Azure bestand Vgx. Cache opslaan van ASP.NET-sessie en uitvoercache de SessionState en OutputCache-providers gebruiken voor het bestand Vgx..
## <a name="manage-azure-redis-cache-with-maml"></a>Beheer van Cache met MAML Azure bestand Vgx.
De [Cache voor Azure bestand Vgx. beheren met behulp van Azure Management bibliotheken](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) -voorbeeld wordt gedemonstreerd hoe kunt u Azure Management bibliotheken gebruiken om te beheren - (maken / bijwerken / verwijderen) de Cache.
## <a name="custom-monitoring-sample"></a>Aangepaste steekproef bewaken
De [gegevens bewaken van Access bestand Vgx. Cache](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) -voorbeeld wordt gedemonstreerd hoe u kunt toegang krijgen tot controlegegevens voor uw Azure bestand Vgx. Cache buiten de Azure-Portal.
## <a name="a-twitter-style-clone-written-using-php-and-redis"></a>Een stijl van een Twitter-klonen geschreven met PHP en bestand Vgx.
De steekproef [Retwis](https://github.com/SyntaxC4-MSFT/retwis) is het bestand Vgx. Hallo wereld. Het is een minimale Twitter-stijl sociaal netwerk klonen geschreven met bestand Vgx. en PHP de [Predis](https://github.com/nrk/predis) -client gebruiken. De broncode is bedoeld om heel eenvoudig zijn en tegelijkertijd inschakelen andere bestand Vgx. gegevensstructuren.
## <a name="bandwidth-monitor"></a>Monitor met een bandbreedte
De steekproef [bandbreedte monitor](https://github.com/JonCole/SampleCode/tree/master/BandWidthMonitor) kunt u de bandbreedte gebruikt op de client volgen. Als u wilt meten de bandbreedte, de steekproef worden uitgevoerd op de clientcomputer cache, gesprekken te voeren naar de cache en de bandbreedte gemeld door de bandbreedte monitor steekproef toekijken.
# Business Documentation Template - Engineering Standards
## Business Overview
The core business logic.
- URL: https://www.taobao.com
- Data profile: basic PV, UV, and similar metrics, to give readers a sense of the scale of this business.
- The value of the business
- And so on...
## Business Structure
- Sub-business A
- URL
- Description of the business logic:
- And so on...
## Business Contacts
- Business owner:
- PD:
- Operations contact:
- Development contact:
- Testing contact:
## Application Composition
- Front-end assets library
- Back-end applications
- Data APIs
- Disaster recovery plan
## Key Technical Points and Challenges
## FAQ
---
title: AppActivated Event, Visio [vis_sdr.chm10019005]
keywords: vis_sdr.chm10019005
f1_keywords:
- vis_sdr.chm10019005
ms.prod: office
ms.assetid: 24123190-199a-40be-b87b-de6b9a65eae3
ms.date: 06/08/2017
localization_priority: Normal
---
# AppActivated Event, Visio [vis_sdr.chm10019005]
Hi there! You have landed on one of our F1 Help redirector pages. Please select the topic you were looking for below.
[InvisibleApp.AppActivated Event (Visio)](http://msdn.microsoft.com/library/8fb2624b-6755-c907-91b1-656f0031663f%28Office.15%29.aspx)
[Application.AppActivated Event (Visio)](http://msdn.microsoft.com/library/150864ab-574a-6556-a56a-8ca619796062%28Office.15%29.aspx)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
---
title: Querying a Packet's Extensible Switch Source Port Data
description: Querying a Packet's Extensible Switch Source Port Data
ms.date: 04/20/2017
---
# Querying a Packet's Extensible Switch Source Port Data
The Hyper-V extensible switch source port is specified by the **SourcePortId** member in the [**NDIS\_SWITCH\_FORWARDING\_DETAIL\_NET\_BUFFER\_LIST\_INFO**](/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_switch_forwarding_detail_net_buffer_list_info) structure. This structure is contained in the out-of-band (OOB) forwarding context of the packet's [**NET\_BUFFER\_LIST**](/windows-hardware/drivers/ddi/nbl/ns-nbl-net_buffer_list) structure. For more information on this context, see [Hyper-V Extensible Switch Forwarding Context](hyper-v-extensible-switch-forwarding-context.md).
The extensible switch extension accesses the [**NDIS\_SWITCH\_FORWARDING\_DETAIL\_NET\_BUFFER\_LIST\_INFO**](/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_switch_forwarding_detail_net_buffer_list_info) structure by using the [**NET\_BUFFER\_LIST\_SWITCH\_FORWARDING\_DETAIL**](/windows-hardware/drivers/ddi/ndis/nf-ndis-net_buffer_list_switch_forwarding_detail) macro. The following example shows how the driver can obtain the source port identifier from the packet's **NDIS\_SWITCH\_FORWARDING\_DETAIL\_NET\_BUFFER\_LIST\_INFO** structure.
```C++
// Forwarding detail info from the packet's OOB forwarding context.
PNDIS_SWITCH_FORWARDING_DETAIL_NET_BUFFER_LIST_INFO fwdDetail;
NDIS_SWITCH_PORT_ID sourcePortId;
// Use the macro to access the structure, then read the source port identifier.
fwdDetail = NET_BUFFER_LIST_SWITCH_FORWARDING_DETAIL(NetBufferList);
sourcePortId = fwdDetail->SourcePortId;
```
# csvparser
[](https://hackage.haskell.org/package/csvparser)
[](https://github.com/kowainik/csvparser/blob/master/LICENSE)
fdsa
* Fix: missing files message to use only path but not container id
# ExtMarkdown Defined Functions
### Advanced Mathematics
| Operators | Symbols |Example |
| -------------- |:--------:|:-----------------------------:|
| Percent | PERCENT()| PERCENT(30) (results in 0.3)|
| | POW() | POW(10, 4) |
| Square | SQUARE() | SQUARE(50) |
| Factor | FACT() | FACT(50) |
| | LN() | LN(50) |
| | LG10() | LG10(50) |
| | LG2() | LG2(50) |
| | E() | E(50) |
| | PI() | PI() (results in 3.14) |
| | MOD() | 10 MOD(4) |
| | ABS() | ABS(-4) |
| | CEIL() | CEIL(3.14) |
| | FLOOR() | FLOOR(3.14) |
### Trigonometry
| Operators | Symbols |Example |
| -------------- |:--------:|:----------------------------:|
| Sinus | SIN() | SIN(30) |
| Cosinus | COSIN() | COSIN(30) |
| Square | SQUARE()| SQUARE(50) |
| Factor | FACT() | FACT(50) |
| | LN() | LN(50) |
| | LG10() | LG10(50) |
| | LG2() | LG2(50) |
| | E() | E(50) |
| | PI() | PI() (results in 3.14) |
| | MOD() | 10 MOD(4) |
| | ABS() | ABS(-4) |
| | CEIL() | CEIL(3.14) |
| | FLOOR() | FLOOR(3.14) |
The Research Subject Mapper and the RED-I Project would not be possible without the generous contributions of our authors and contributors.
Thanks to David R Nelson and his HCV Target project and the University of Florida Clinical Translational Science Institute for providing the seed funding that launched the RED-I Project.
Christopher P. Barnes ([email protected]) provided the original concept and proposal for a generic, open source, standards-based bridge between the electronic medical record and REDCap systems.
Many thanks to David Nelson, Mike Freid, Joy Peter, Ken Berguist, and Monika Vainorius of the HCV Target Study Team and all of the HCV Target study sites for being the pilot project for RED-I. You all helped make it great.
Thanks also to Linc Moldawer, Jen Lanz, Ruth Davis, and Scott Brakenridge of the UF Surgery Genomics for being our second implementation site. Diversity makes us stronger.
Philip Chase ([email protected]), Nicholas Rejack ([email protected]), Erik Schmidt, and Chris Barnes provided direction to the development effort.
Radha Kandula, Mohan Das Katragadda, Yang Li, Kumar Sadhu, Alex Loiacono, Erik Schmidt, Nicholas Rejack, Philip Chase, Taeber Rapczak, Andrei Sura, Ruchi Desai and Roy Keyes provided code to make this project awesome. We are nothing without our developers.
---
layout: post
title: "Week Two!"
published: true
---
Welcome back!
I'd like to start this week's blog post with my Pick/Tip of the Week. This week, I'm sharing about MeetUp. MeetUp is a site that facilitates the process of finding people with a similar interest and "meeting up" with each other offline. These meet ups are often local. According to their website, their goal is to "bring people together to do, explore, teach and learn the things that help them come alive". While I've been aware of MeetUp and have even joined groups since last summer, I haven't actually gone to one due to schedule conflicts or because I don't want to go alone. However, I've decided to try it this week, so I'll be going to an introduction to JavaScript crash course by the [Iron Yard Austin MeetUp group](https://www.meetup.com/The-Iron-Yard-Austin/events/236707966/). Wish me luck! I'll let y'all know how it goes next week.
1. **What did you do this past week?**
This week, we got a few more hints about how to get started on our project and make it better as we went over topics in class. I got to see example unit tests and met a new person sitting next to me. We also learned about different comparisons between pointers and objects. Those were tricky.
2. **What's in your way?**
It's only the first week, but I wasn't able to catch up on the reading and missed some easy points on the quizzes. I found that I had trouble differentiating between pointers and objects as well.
3. **What will you do next week?**
I hope to make time to come to the lab and get started on the Collatz project. My laptop is still not ready and has been pushed back from being delivered until next week, so I'll be putting off figuring out Docker until then.
4. **What is your experience of this class?**
So far, I've been managing the workload and time required for OOP alongside my other commitments. I'm learning useful things, so I'm glad I was able to get into this class. I'm a little worried about how I'll feel later in the semester, but I'm hopeful.
Have a great week and good luck on the project!
# Hacking
This repository is a collection of random tests. Basically, it is used
to figure out how tools and languages work and how the Linux system works.
These tests should work with some version of Ubuntu.
## References
- The Linux Programming Interface (Author: Michael Kerrisk)
A bit of an old book, but these things do not change easily.
Book web page: http://man7.org/tlpi/
---
title: Making a Reversi Game - Implementing the Board and Pieces
author: RUKA SPROUT
date: 2020-10-17 14:22:00 +0900
categories: [Game Dev - Reversi]
tags: [Unity]
---
#### Development Goal
The goal is to build a simple Reversi (Othello) board game in Unity and then build a Reversi AI. For that reason, development will focus more on the AI than on spending a lot of time on the game implementation itself.
The game mechanics are taken directly from the existing Reversi board game, the game will be developed in Unity 2D, and the assets will be hand-drawn pixel art.
That was the introduction; the actual development starts below.
---
### Piece & Board Structure
Reversi is played on an 8x8 board, as shown in the figure below.

I structured this as follows.

**Board**
Since the board is 8 cells wide and 8 cells tall, a **two-dimensional array** will be used. Because the board size never changes during a game, an array is used instead of a `List<>`.
**Grid**
The elements that make up the board are implemented as a class named `Grid`. As the figure above shows, the board is divided into a grid, hence the name `Grid`.
A `Grid` holds the following information.
- `stat`: which piece is currently placed on this grid cell.
- `Status` (Enum): `Black = -1`, `None = 0`, `White = 1`
- `index`: where this grid cell is located on the board.
- `piece`: the object that holds the actual sprite. It is null when there is no piece.
A separate `Status` enum is defined, with a specific number assigned to each element.
**The reason `Black` and `White` are -1 and 1 is so that a value can be flipped simply by multiplying it by -1.**
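As a quick illustration of that trick (hypothetical usage; the actual flip logic will live in the `GameMode` class planned for a later post):

```csharp
// Flip the piece on a grid cell: Black (-1) becomes White (1) and vice versa.
// None (0) is unaffected, since negating 0 leaves 0.
grid.SetStat((Grid.Status)(-(int)grid.GetStat()));
```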
**Piece**
*Not every `Grid` has a `Piece`, but every `Piece` must be assigned to a `Grid`.*
Since the `Piece` class is only used to display a sprite, its job is to manipulate the `SpriteRenderer` component.
---
### Code Implementation
The code below is just the bare implementation, so the classes are neither wired together nor initialized.
**That will be done later while implementing the `GameMode` class.**
*As an aside, perhaps because I'm used to the C++ style, I keep writing this like C++ code. Not that it matters much...*
#### Piece
```csharp
// Inherits MonoBehaviour because it needs to be instantiated with Instantiate.
public class Piece : MonoBehaviour
{
    public Sprite BlackSprite; // Sprite for a black piece
    public Sprite WhiteSprite; // Sprite for a white piece
    private SpriteRenderer SR; // Renderer the sprite is applied to
    // Switches the sprite to the color given by the parameter.
public void SetColor(Grid.Status color)
{
        // Called explicitly in case the SpriteRenderer has not been loaded yet.
Start();
        // Change the sprite
if (color == Grid.Status.Black)
{
SR.sprite = BlackSprite;
}
else if (color == Grid.Status.White)
{
SR.sprite = WhiteSprite;
}
else
{
            // If color is None, no piece should be on this cell, so destroy the current object.
Destroy(this.gameObject);
}
}
void Start()
{
        // Fetch the SpriteRenderer component.
SR = GetComponent<SpriteRenderer>();
}
}
```
#### Grid
```csharp
// For convenience, Index is defined to represent a position on the board.
using Index = System.Tuple<int, int>;
public class Grid
{
public enum Status { Black = -1, None = 0, White = 1 }
    private Index index; // Position on the board
    private Status stat; // Current state (which piece is on it?)
    private Piece piece; // The Piece object
// Constructor
public Grid()
{
index = new Index(-1, -1);
stat = Status.None;
}
public Status GetStat() { return stat; }
public Index GetIndex() { return index; }
public void SetStat(Status stat) { this.stat = stat; }
public void SetPiece(Piece piece) { this.piece = piece; }
public void SetIndex(int i, int j) { this.index = new Index(i, j); }
    // Prints the current state for debugging.
public void Print()
{
UnityEngine.Debug.LogWarning(index.Item1 + " / " + index.Item2 + " / " + stat.ToString());
}
}
```
#### Matrix (Board)
```csharp
// For convenience, Index is defined to represent a position on the board.
using Index = System.Tuple<int, int>;
public class Matrix
{
    private Grid[,] data; // Array of Grids
// Constructor
public Matrix()
{
        data = new Grid[8, 8]; // Set the 8x8 size
for (var i = 0; i < 8; i++)
{
for (var j = 0; j < 8; j++)
{
                data[i, j] = new Grid(); // Each Grid must be initialized.
                data[i, j].SetIndex(i, j); // Assign each Grid its Index.
}
}
}
public Grid[,] GetData() { return data; }
    // Returns the Grid at the given Index.
public Grid GetGrid(Index index)
{
return data[index.Item1, index.Item2];
}
}
```
---
#### In the next post...
I will implement the `GameMode` class, which initializes and wires together the classes above.
# login-laravel-passport-socialite
---
date: 2020-11-23
layout: single
title: W3C Verifiable Claims Working Group
description: The mission of the Verifiable Claims Working Group (VCWG) is to make expressing and exchanging credentials that have been verified by a third party easier and more secure on the Web.
excerpt: The Working Group will maintain the Verifiable Credentials Data Model specification, which provides a mechanism to express a verifiable credential on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.
permalink: web-standards/w3c/wg/vc/
redirect_from:
- web-standards/vc-wg/
toc: false
tags: ["W3C","VC-WG","Verifiable Credentials"]
categories: ["Web Standards"]
last_modified_at: 2020-11-23
---
* [W3C Verifiable Claims Working Group](https://www.w3.org/2017/vc/WG/)
* [w3c/verifiable-claims](https://github.com/w3c/verifiable-claims)
* [[email protected]](mailto:[email protected]?subject="Subscribe"): the group’s primary mailing list.
* [Mail Archives](https://lists.w3.org/Archives/Public/public-vc-wg/) - Technical discussion and public announcements for the Verifiable Claims Working Group
> The mission of the Verifiable Claims Working Group (VCWG) is to make expressing and exchanging credentials that have been verified by a third party easier and more secure on the Web.
* [Verifiable Credentials Working Group Charter](https://www.w3.org/2020/01/vc-wg-charter.html)
> The Working Group will maintain the Verifiable Credentials Data Model specification, which provides a mechanism to express a verifiable credential on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.
## Outputs
* [Verifiable Credentials Data Model 1.0](https://www.w3.org/TR/verifiable-claims-data-model/) and Representations specification.
* [Editors Draft](https://w3c.github.io/vc-data-model/) - Expressing verifiable information on the Web
* [GitHub w/ Issue Tracker](https://github.com/w3c/vc-data-model)
* [Verifiable Credentials Use Cases](https://www.w3.org/TR/vc-use-cases/)
* [Editors Draft](https://w3c.github.io/vc-use-cases/)
* [GitHub w/ Issue Tracker](https://github.com/w3c/vc-use-cases) - Verifiable Claims Use Cases.
* [Verifiable Credentials Implementation Guidelines 1.0](https://www.w3.org/TR/vc-imp-guide/)
* [Editors Draft](https://w3c.github.io/vc-imp-guide/)
* [GitHub with Issue Tracker](https://github.com/w3c/vc-imp-guide)
* [W3C Verifiable Claims Working Group Test Suite](https://w3c.github.io/vc-test-suite/)
* [w3c/vc-test-suite](https://github.com/w3c/vc-test-suite) Verifiable Claims WG Test Suite.
* [Verifiable Credentials Data Model Implementation Report 1.0](https://w3c.github.io/vc-test-suite/implementations/)
* [w3c/vctf](https://github.com/w3c/vctf) **Archived** (precursor to vcwg)
> The Web Payments Interest Group's Verifiable Claims Task Force
* [Verifiable Claims Task Force Use Cases](https://opencreds.org/specs/source/use-cases/)
---
id: 4690
title: Asa Butterfield
date: 2012-04-06T17:25:24+00:00
author: chito
layout: post
guid: https://ukdataservers.com/asa-butterfield/
permalink: /04/06/asa-butterfield/
---
* some text
{: toc}
## Who is Asa Butterfield
London-born actor who played the precocious Hugo Cabret in the Academy-Award-winning Martin Scorsese film Hugo. He also played leading roles in the films The Boy in the Striped Pyjamas, Ender’s Game, The Space Between Us, Miss Peregrine’s Home for Peculiar Children and Time Freak.
## Prior to Popularity
He began acting in his town’s local theater at the age of seven. He made his film debut as Andrew in the 2006 movie After Thomas.
## Random data
His name means “Wing” in Portuguese.
## Family & Everyday Life of Asa Butterfield
His mother gave birth to him in Islington, London. His older brother, Morgan, became the drummer for the band, Underneath the Tallest Tree.
## People Related With Asa Butterfield
He starred opposite Ben Kingsley in both Hugo and Ender’s Game.
---
title: Student Debt Strikers Lead the Way to a Victory in Bankruptcy Court
date: 2015-05-13 00:00:00 Z
layout: post
---
To our Corinthian student comrades,
When a company goes bankrupt, everyone the company owes money to fights over any remaining cash and assets in court. When Corinthian [declared bankruptcy](http://www.washingtonpost.com/news/business/wp/2015/05/04/for-profit-corinthian-colleges-files-for-bankruptcy/) on May 4, executives probably thought they were protecting the company from having to refund students. But we see it differently: Corinthian owes you money, so you should have a say in how the company’s assets get divided up. And a key player in the bankruptcy proceeding agrees.
Thanks to the courageous actions of student strikers, and with the [help of attorneys](http://drive.google.com/file/d/0Bwr4YBvoT1TNTVRIaVZPNHUyNDg/view?usp=sharing) from three firms, including Public Counsel in Los Angeles, all Corinthian students will have a voice in what happens to the company’s assets.
Over the last few days, dozens of students in the Corinthian Collective have been sending emails to the office of the US Trustee (the office that is organizing the bankruptcy case) demanding to be heard.
Last night, Scott Gautier, an attorney at Robins Kaplan LLC, took a red-eye flight from California to Delaware to represent students in this morning’s bankruptcy hearing and to request that students have a voice. A few hours ago, the Office of the United States Trustee agreed to create a Committee of Student Creditors.
This means that students’ voices will be heard in Corinthian’s bankruptcy case.
This is a major step in the fight for complete debt cancellation. A few months ago, the [Corinthian 15](http://debtcollective.org/studentstrike) came out of the shadows to declare a debt strike against Corinthian and the Department of Education. That group quickly grew to include over 100 debt strikers (and counting!). Now, students are no longer considered debtors who owe Corinthian money. Instead, they are creditors that have a claim on Corinthians’ remaining assets!
The U.S. Trustee will be looking for people to represent hundreds of thousands of Corinthian students around the country. Sitting on the committee does not require you to attend in-person meetings or hearings.
If you would like an opportunity to sit on the Committee of Student Creditors, you can apply by printing out [this form](http://drive.google.com/file/d/0B17fZeNi9gBYS0RJRzczTDdWZUE/view?usp=sharing) signing it, and then faxing or emailing it to:
*May 15 Update: The selection process is complete. A Committee of Student Creditors has been chosen.*
While the bankruptcy unfolds, we encourage all [Corinthian Collective](http://debtcollective.org/corinthiansignup) members to keep fighting for debt cancellation. We don’t know what will happen in the bankruptcy proceeding, but we know that students are stronger when they stand together.
Love,
The Debt Collective
# babel-plugin-directory-named-module
[](https://nodei.co/npm/babel-plugin-directory-named-module/)
## Example
Babel plugin to resolve
```js
import module from 'path/to/module'
```
into
```js
import module from 'path/to/module/module.js'
```
## Usage
```bash
npm install babel-plugin-directory-named-module --save-dev
```
Edit `.babelrc`
```json
{
"plugins": [
"directory-named-module"
]
}
```
# Vocative
pāḷi:āmantanavacana
```
Āmantanavacanaṃ nāma tadāmantanaparidīpanattho.
```
# Tile Package Kreator

Tile Package Kreator is a desktop utility that guides you in creating Tile Package files. Tile Package files are used by out of the box mobile apps like ArcGIS Survey123, ArcGIS QuickCapture, and ArcGIS Collector to take basemaps offline. Custom apps built with the ArcGIS Runtime SDK can also work with Tile Package files. Typically, Tile Package files are used to help bring basemaps for offline use.
## How to use the app
You can download the app from these URLs:
http://links.esri.com/esrilabs/tile-package-kreator-windows
http://links.esri.com/esrilabs/tile-package-kreator-mac
http://links.esri.com/esrilabs/tile-package-kreator-ubuntu
Or directly from the [Microsoft Store](https://www.microsoft.com/en-us/p/tile-package-kreator/9pm0rrbmcsrz).
Once downloaded on your Mac, Windows or Ubuntu Linux system, Tile Package Kreator will let you login into your own ArcGIS organization (ArcGIS Online or ArcGIS Enterprise) and easily define your area of interest as well as the number of levels of detail for your offline map. You can define an area of interest based on a rectangle or along a predefined linear feature such as a road, river, pipe etc. Tile Package Kreator will request the creation of the Tile Package file from the selected Tiled Map service and let you store it locally or in your own ArcGIS organization. Once a Tile Package file is created, it can be used with ArcGIS Survey123, ArcGIS QuickCapture, and ArcGIS Collector as well as ArcGIS Pro, ArcMap and custom apps built with the ArcGIS Runtime SDKs.
Download this [PDF document](http://links.esri.com/esrilabs/tile-package-kreator-help) for a handy help guide for using Tile Package Kreator.
## Requirements
<a href="http://www.esri.com/landing-pages/appstudio">AppStudio for ArcGIS</a> is required to compile the code in this repo.
## Issues
Find a bug or want to request a new feature? Please let us know by submitting an issue.
## Contributing
Esri welcomes contributions from anyone and everyone. Please see our [guidelines for contributing](https://github.com/esri/contributing)
## Copyright and License
Copyright © 2021 Esri Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
> http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
A copy of the license is available in the repository's [LICENSE](./LICENSE) file.
| 50.709091 | 775 | 0.788096 | eng_Latn | 0.976001 |
4d93e179ffa7cfcc3a9788b19e40e2992dd40af4 | 12,280 | md | Markdown | README.md | charlessuresh/loll | c13ff154c7807807d4e61ad02998b2d112db4406 | [
"MIT"
] | null | null | null | README.md | charlessuresh/loll | c13ff154c7807807d4e61ad02998b2d112db4406 | [
"MIT"
] | null | null | null | README.md | charlessuresh/loll | c13ff154c7807807d4e61ad02998b2d112db4406 | [
"MIT"
] | null | null | null | # Realtor.com: Will they or won't they? Return user prediction
- Organization name: Realtor.com
- Project mentor: Tiffany Timbers
- Partner: Tarini Bhatnagar
- Team members: Rui Wang, Yuyan Guo, Charles Suresh, Siqi Zhou
## Executive Summary
The goal of this project was to predict a user's future visits to Realtor.com website based on the user's past visit history. To achieve our prediction goal, we used domain knowledge to derive useful features and capture the users' visiting patterns and behaviors. These features were used to train and build two competent machine learning models - one that predicts the return frequency and the other that predicts the return latency. Moreover, we implemented an automated pipeline to train the models along with an interactive dashboard and a web UI (user interface) to help interpret the models and their predictions dynamically.
The deliverables for this project include:
- Two reproducible machine learning models/pipelines
- A dashboard and a flask web UI to represent findings
- Insights on which user visiting behavior is helpful for predictions
- Recommendations on how to improve the models further
## Final Report
The final report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/project_report/_build/pdf/book.pdf).
## Usage
### Step 1: Clone this GitHub repository
Start by cloning this GitHub repository to your local machine
### Step 2: Data
The data provided to us for runnning this analysis contained the raw visit history of around 1.5 million users for the months - January 2021 to April 2021 and had a total of around 62 million records. Owing to the large size of this raw data (around 10.5 GB), it cannot be uploaded to GitHub. For maximum reproducibility of the analysis, you would need to get this data and place it at `data/UBC_MDS_2021/`
For the purposes of testing the functionality of this pipeline, we filtered out the raw visit history of 2000 users, totalling around 230,000 observations and uploaded this data to the GitHub repo instead. If you have cloned the repository to your local machine, you should already have this data located at `data/UBC_MDS_2021/`. If you intend to use this smaller dataset for running the analysis, you might get results that differ from our analysis.
If you intend to run the analysis on new data:
1. It is essential for the raw data to be in the form of parquet files and be located two levels deep from the `UBC_MDS_2021` directory (`UBC_MDS_2021/**/*`):
```
UBC_MDS_2021
├── 01_2021
│ ├── event_date=20210101
│ │ ├── 20210506_193846_00019_cest3_780c4734-d27b-4820-95df-e513fd3033fb.parquet
│ │ ├── 20210506_193846_00019_cest3_9cee8ac4-512a-456c-85c3-3e81b9d2a757.parquet
│ │ └── ...
│ ├── event_date=20210102
│ │ ├── 20210506_193846_00019_cest3_0b29e2bc-b433-4a43-878f-acd47eb45cb9.parquet
│ │ ├── 20210506_193846_00019_cest3_3d29eaa9-3981-48c9-af9e-9e27164945f2.parquet
│ │ └── ...
│ └── ...
├── 02_2021
│ ├── event_date=20210201
│ │ ├── 20210506_193846_00019_cest3_453e7dd3-9ff7-45d5-9bf8-0e16e34d3352.parquet
│ │ ├── 20210506_193846_00019_cest3_d9f6a6e0-94c6-4a10-baf1-ef1ce803ad54.parquet
│ │ └── ...
│ ├── event_date=20210202
│ │ ├── 20210506_193846_00019_cest3_03d37bbb-3752-4cd9-9216-849deb9ce0d5.parquet
│ │ ├── 20210506_193846_00019_cest3_a9db645b-0af4-463c-b92d-f15ff14ebcd2.parquet
│ │ └── ...
│ └── ...
└── ...
```
2. The raw data must have (at least) the following 33 features:
| <!-- --> | <!-- --> | <!-- --> |
|-------------|-------------|-------------|
| `visitor_id`| `datetime_mst` |`visit_id` |
|`visit_number` |`visit_hit_number` | `browser_name` |
| `search_box_searches`| `social_shares` |`property_status` |
|`email_share_success` |`sign_in` | `language` |
| `listing_price`| `saved_items` |`page_type_group` |
|`paid_vs_organic` |`experience_type` | `app_launch` |
| `search_number_of_bathrooms_persist` | `move_device_type` |`login_status` |
|`client_mapi_visitor_id_event` |`refined_search` | `cobroke_leads` |
| `search_max_price_persist`| `sign_out` |`registered_user_activity` |
| `search_number_of_bedrooms_persist` | `apps_type` |`kpi_channel_view` |
|`search_min_price_persist`| `advantage_leads` |`basecamp` |
### Step 3: Run the Analysis Using Docker
1. Install [Docker](https://www.docker.com/get-started)
2. At the command line/terminal, from the root directory of this project, run the following command in the bash terminal to build the docker image:
```
docker build -t realtor:1.0 .
```
3. Next, run the following commands in the bash terminal as per your requirement:
- To run the analysis for both: Model Return Frequency and Model Return Latency:
```
# Clear out all the generated models, data files and reports
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan clean
# Generate the analysis for Model Return Frequency and Model Return Latency
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan all
```
- To run the analysis only for Model Return Frequency:
```
# Clear out all the generated models, data files and reports for Model Return Frequency
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan clean_return_frequency
# Generate the analysis for Model Return Frequency
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan all_return_frequency
```
- To run the analysis only for Model Return Latency:
```
# Clear out all the generated models, data files and reports for Model Return Latency
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan clean_return_latency
# Generate the analysis for Model Return Latency
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan all_return_latency
```
- To run a specific part of the analysis, either for Model Return Frequency or Model Return Latency, you need to replace the blank (`_________`) in the command below with an appropriate Makefile phony target (based on your use case):
```
docker run --rm -it -p 8888:8888 -v $(pwd):/home/jovyan realtor:1.0 make -C /home/jovyan _________
```
For a list of available Makefile phony targets and their descriptions, refer [here](doc/makefile_descriptions.md). Or you may have a look at the [Makefile itself](Makefile).
*Note: Prior to using the docker image, please make sure Docker has been configured with more memory than the data size and sufficient CPU/GPU power (refer to Figure 4.1). For the four months of data given, below is the configuration we used for the Docker Desktop program. This can be set by choosing the "Preferences" tab in the Docker program.* <br><br>

## Models
### Model 1 - Predict number of times the user will return
- The `pickle file` of the model is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/models/model_return_frequency.pickle).
- The `feature EDA report` includes the EDA for all engineered features along with details of which features are selected for model fitting. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/feature_eda/feature_eda_model_return_frequency.pdf). The html version report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/feature_eda/feature_eda_model_return_frequency.html).
- The `training report` includes all details and documentations for the model training. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/training_report/training_report_model_return_frequency.pdf). The html version report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/training_report/training_report_model_return_frequency.html).
- The `testing report` includes all details about the predictions on test set. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/testing_report/testing_report_model_return_frequency.pdf). The html version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/testing_report/testing_report_model_return_frequency.html).
### Model 2 - Predict number of days until the user's next return
- The `pickle file` of the model is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/models/model_return_latency.pickle)
- The `feature EDA report` includes the EDA for all engineered features along with details of which features are selected for model fitting. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/feature_eda/feature_eda_model_return_latency.pdf). The html version report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/feature_eda/feature_eda_model_return_latency.html).
- The `training report` includes all details and documentations for the model training. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/training_report/training_report_model_return_latency.pdf). The html version report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/training_report/training_report_model_return_latency.html).
- The `testing report` includes all details about the predictions on test set. The pdf version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/testing_report/testing_report_model_return_latency.pdf). The html version of the report is available [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/doc/testing_report/testing_report_model_return_latency.html).
**Note:** For all the model reports above, we recommend reading the html version of the reports, which render better than the pdf versions.
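For a quick orientation, the pickled models can be loaded and queried roughly as follows. This is a sketch under assumptions: the input DataFrame must contain the same engineered feature columns used at training time (see the feature EDA reports), and `engineered_features.csv` is a hypothetical file name:
```python
# Minimal sketch (assumption: the CSV holds the engineered features the
# models were trained on; "engineered_features.csv" is a hypothetical name).
import pickle

import pandas as pd

with open("models/model_return_frequency.pickle", "rb") as f:
    freq_model = pickle.load(f)
with open("models/model_return_latency.pickle", "rb") as f:
    latency_model = pickle.load(f)

X = pd.read_csv("engineered_features.csv")

print(freq_model.predict(X))     # predicted number of times each user returns
print(latency_model.predict(X))  # predicted days until each user's next return
```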
## Dashboard
We explored the `explainerdashboard` API, a Python library for developing interactive dashboards that illustrate the prediction results of machine learning models. Refer to the figures below to see what the dashboard looks like:
- The main page of the dashboard:

- A sub-page of the dashboard to illustrate the regression part of the hurdle model:

Please refer to the link [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/FTR-SIQI-book/dashboard/dashboard.md) for how to build the dashboard.
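For reference, the core `explainerdashboard` pattern looks roughly like the sketch below. It is self-contained, with synthetic data standing in for the engineered realtor features (an assumption for illustration; the linked guide describes the real setup):
```python
# Minimal, self-contained sketch; synthetic data replaces the real features.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from explainerdashboard import ExplainerDashboard, RegressionExplainer

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])

model = RandomForestRegressor(random_state=0).fit(X, y)

# Build an interactive dashboard (feature importances, SHAP values, ...).
explainer = RegressionExplainer(model, X, y)
ExplainerDashboard(explainer, title="Will they return?").run(port=8050)
```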
## Flask Web UI
We also built a **Flask** web UI as a prediction demo: upload a CSV file containing the engineered features and click the `submit` button, and it returns the prediction results of the two models. Refer to the figure below to see what the Flask web UI looks like:

Please refer to the link [here](https://github.com/UBC-MDS/realtor_will_they_return/blob/FTR-SIQI-book/flask/flask.md) for how to launch the Flask web UI.
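The upload-and-predict flow behind the demo can be sketched as follows. This is a hypothetical minimal version, not the project's actual app: the form field name, the inline form, and the route names are assumptions (the linked guide describes the real setup):
```python
# Hypothetical minimal sketch of the Flask upload-and-predict flow.
import pickle

import pandas as pd
from flask import Flask, request

app = Flask(__name__)

with open("models/model_return_frequency.pickle", "rb") as f:
    freq_model = pickle.load(f)
with open("models/model_return_latency.pickle", "rb") as f:
    latency_model = pickle.load(f)

@app.route("/", methods=["GET"])
def index():
    # "csv_file" is an assumed form field name.
    return (
        '<form method="post" action="/predict" enctype="multipart/form-data">'
        '<input type="file" name="csv_file"><input type="submit" value="submit">'
        "</form>"
    )

@app.route("/predict", methods=["POST"])
def predict():
    X = pd.read_csv(request.files["csv_file"])  # engineered features expected
    return {
        "return_frequency": freq_model.predict(X).tolist(),
        "return_latency": latency_model.predict(X).tolist(),
    }

if __name__ == "__main__":
    app.run(debug=True)
```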
## License
- [MIT license](https://github.com/UBC-MDS/realtor_will_they_return/blob/main/LICENSE)
## References
- Homes for Sale, Mortgage Rates, Virtual Tours & Rentals: realtor.com®. Homes for Sale, Mortgage Rates, Virtual Tours & Rentals | realtor.com®. (n.d.). https://www.realtor.com/.
- Pyspark. Getting Started - PySpark 3.1.2 documentation. (n.d.). https://spark.apache.org/docs/latest/api/python/getting_started/index.html.
- Explainerdashboard. explainerdashboard. (n.d.). https://explainerdashboard.readthedocs.io/en/latest/.
- Poisson regression and non-normal loss. scikit. (n.d.). https://scikit-learn.org/stable/auto_examples/linear_model/plot_poisson_regression_non_normal_loss.html.
[Realtor.com: Will they or won't they? Return user prediction]
Copyright (C) [June 2021] [Realtor.com]
| 62.335025 | 632 | 0.767834 | eng_Latn | 0.896955 |
4d9537b012b43a65392bff0d05405e4f4527dd49 | 1,060 | md | Markdown | content/id/docs/reference/glossary/persistent-volume.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 3,157 | 2017-10-18T13:28:53.000Z | 2022-03-31T06:41:57.000Z | content/id/docs/reference/glossary/persistent-volume.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 27,074 | 2017-10-18T09:53:11.000Z | 2022-03-31T23:57:19.000Z | content/id/docs/reference/glossary/persistent-volume.md | rendiputra/website | 4b93c0608828e685881be1662e766d696f0b097b | [
"CC-BY-4.0"
] | 11,539 | 2017-10-18T15:54:11.000Z | 2022-03-31T12:51:54.000Z |
---
title: Persistent Volume
id: persistent-volume
date: 2018-04-12
full_link: /docs/concepts/storage/persistent-volumes/
short_description: >
  An API object that represents a piece of storage in the cluster. It is available as a general-purpose, pluggable resource that persists even beyond the lifecycle of an individual Pod.
aka:
tags:
- core-object
- storage
---
An API object that represents a piece of storage in the cluster. It is available as a general-purpose, pluggable resource that persists even beyond the lifecycle of an individual {{< glossary_tooltip text="Pod" term_id="pod" >}}.
<!--more-->
PersistentVolumes (PVs) provide an API that abstracts the details of how storage is provided from how it is consumed.
PVs are used directly in scenarios where the storage can be created ahead of time (static provisioning).
For scenarios that require storage on demand (dynamic provisioning), a PersistentVolumeClaim (PVC) is used instead.
| 50.47619 | 245 | 0.813208 | ind_Latn | 0.973353 |
4d958218562438a9c611a662a6eb8700b0fbe3a5 | 187 | md | Markdown | README.md | stefanwerleman/trie-container | 7d11499119193749594c1d8a9c2b5e4170458be2 | [
"MIT"
] | null | null | null | README.md | stefanwerleman/trie-container | 7d11499119193749594c1d8a9c2b5e4170458be2 | [
"MIT"
] | null | null | null | README.md | stefanwerleman/trie-container | 7d11499119193749594c1d8a9c2b5e4170458be2 | [
"MIT"
] | null | null | null |
# trie-container
### This is a Ruby container class for the Trie data structure. It may be used by others to implement applications that rely on this data structure.
## Methods
| 37.4 | 154 | 0.754011 | eng_Latn | 0.999941 |
4d95951843405419851022e70ebe63c209370ed8 | 3,030 | md | Markdown | web/docs/understand-vast/architecture/plugins.md | vast-io/vast | 6c9c787adc54079202dba85ea4a929004063f1ba | [
"BSD-3-Clause"
] | 63 | 2016-04-22T01:50:03.000Z | 2019-07-31T15:50:36.000Z | web/docs/understand-vast/architecture/plugins.md | vast-io/vast | 6c9c787adc54079202dba85ea4a929004063f1ba | [
"BSD-3-Clause"
] | 216 | 2017-01-24T16:25:43.000Z | 2019-08-01T19:37:00.000Z | web/docs/understand-vast/architecture/plugins.md | vast-io/vast | 6c9c787adc54079202dba85ea4a929004063f1ba | [
"BSD-3-Clause"
] | 28 | 2016-05-19T13:09:19.000Z | 2019-04-12T15:11:42.000Z |
---
sidebar_position: 3
---
# Plugins
VAST has a plugin system that makes it easy to hook into various places of
the data processing pipeline and add custom functionality in a safe and
sustainable way. A set of customization points allow anyone to add new
functionality that adds CLI commands, receives a copy of the input stream,
spawns queries, or implements integrations with third-party libraries.
There exist **dynamic plugins** that come in the form of shared libraries, and
**static plugins** that are compiled into libvast or VAST itself:


Plugins do not exist only for third-party extensions; VAST also
implements core functionality through the plugin API. Such plugins compile as
static plugins. Because they are always built, we call them *native plugins*.
## Plugin types
VAST offers several customization points to exchange or enhance functionality
selectively. Here is a list of available plugin categories and plugin types:


### Command
The command plugin adds a new command to the `vast` executable, at a configurable
location in the command hierarchy. New commands can have sub-commands as well
and allow for flexible structuring of the provided functionality.
### Component
The component plugin spawns a [component](components) inside the VAST node. A
component is an [actor](actor-model) and runs in parallel with all other
components.
This plugin is the most generic mechanism to introduce new functionality.
### Analyzer
The analyzer plugin hooks into the processing path of data by spawning a new
actor inside the server that receives the full stream of table slices. The
analyzer plugin is a refinement of the [component plugin](#component).
### Reader
The reader plugin adds a new format to parse input data, such as JSON (ASCII) or
PCAP (binary).
Reader plugins automatically add the subcommand `vast import <format>`.
### Writer
The writer plugin adds a new format to print data, such as JSON (ASCII) or PCAP
(binary).
Writer plugins automatically add the subcommand `vast export <format>`.
### Query Language
A query language plugin adds an alternative parser for a query expression. This
plugin allows for replacing the query *frontend* while using VAST as the *backend*
execution engine.
For example, you could write a SQL plugin that takes an expression like
`SELECT * FROM zeek.conn WHERE id.orig_h = "1.2.3.4"` and executes it on
historical data or runs it as a live query.
### Transform
The transform plugin adds a new [transform
step](/docs/understand-vast/query-language/operators) that users can reference in
a [transform definition](/docs/understand-vast/query-language/pipelines).
### Store
Inside a partition, the store plugin implements the conversion from in-memory
Arrow record batches to the persistent format, and vice versa.
| 35.647059 | 81 | 0.782838 | eng_Latn | 0.988679 |
4d95c7977486558dbd6c96e6e833d2c82168953d | 1,683 | md | Markdown | sites/boxnovel.baidu.com/components/mip-shell-xiaoshuo/mip-shell-xiaoshuo.md | laiyg/mip2-extensions-platform | 5abbd5b40dff08c4ec8f672ff99c9ccbbf427877 | [
"MIT"
] | 34 | 2018-07-03T08:22:01.000Z | 2020-06-13T01:50:38.000Z | sites/boxnovel.baidu.com/components/mip-shell-xiaoshuo/mip-shell-xiaoshuo.md | laiyg/mip2-extensions-platform | 5abbd5b40dff08c4ec8f672ff99c9ccbbf427877 | [
"MIT"
] | 667 | 2018-05-29T09:22:51.000Z | 2020-08-03T09:07:27.000Z | sites/boxnovel.baidu.com/components/mip-shell-xiaoshuo/mip-shell-xiaoshuo.md | laiyg/mip2-extensions-platform | 5abbd5b40dff08c4ec8f672ff99c9ccbbf427877 | [
"MIT"
] | 243 | 2018-06-15T04:07:21.000Z | 2021-01-24T10:42:33.000Z |
# `mip-shell-xiaoshuo`
## Description
A mip-shell customized for the fast novel reader. See the [wiki](https://github.com/mipengine/mip2-extensions-platform/wiki/%E4%B8%87%E5%8D%B7%E8%AE%A1%E5%88%92-%E6%9E%81%E9%80%9F%E9%98%85%E8%AF%BB%E5%99%A8%E6%8E%A5%E5%85%A5%E6%96%87%E6%A1%A3) for detailed usage.
## Simple example
The input format for the catalog page of mip-shell-xiaoshuo, the shell customized for the fast novel reader. pageType takes one of four values: 'catalog' (table-of-contents page), 'page' (novel content page), 'detail' (novel detail page), and 'bookEnd' (end-of-book page). officeId is the 熊掌号 (Baidu Xiongzhanghao) account id.
```
<mip-shell-xiaoshuo mip-shell="" id="xiaoshuo-shell">
<script type="application/json">
{
"routes": [{
"pattern": "mipx-xiaoshuo-(\\d)+-(\\d)+.html",
"meta": {
"header": {
"show": true,
"title": "神武天帝"
},
"pageType": "page",
"officeId": "160957******0959",
"footer": {
"actionGroup": [
{"name": "catalog", "text": "目录"},
{"name": "darkmode", "text": "夜间模式", "text2": "白天模式"},
{"name": "settings", "text": "更多设置"}
],
"hrefButton": {
"previous": "上一页",
"next": "下一页"
}
},
"book": {
"title": "将夜",
"chapterNumber": "共1347章",
"chapterStatus": "已完结"
},
"catalog": [
{
"name": "第1章 灵魂重生",
"link": "mipx-xiaoshuo-1-1.html",
"pages": [
"mipx-xiaoshuo-1-1.html",
"mipx-xiaoshuo-1-2.html",
"mipx-xiaoshuo-1-3.html"
]
}
]
}
}]
}
</script>
</mip-shell-xiaoshuo>
```
### routes
[See the mip-shell usage reference](https://github.com/mipengine/mip2/blob/master/docs/page/shell.md)
4d95e82d07a197a1be6e9e2674fe83820a4d6e49 | 54 | md | Markdown | README.md | 0xfcmartins/JAuth | c8e083f44441346bf5fc49839e6886e9481076e9 | [
"MIT"
] | null | null | null | README.md | 0xfcmartins/JAuth | c8e083f44441346bf5fc49839e6886e9481076e9 | [
"MIT"
] | 1 | 2022-03-19T22:56:13.000Z | 2022-03-19T22:56:13.000Z | README.md | 0xfcmartins/JAuth | c8e083f44441346bf5fc49839e6886e9481076e9 | [
"MIT"
] | null | null | null | # JAuth
Java Spring Boot authentication micro service
| 18 | 45 | 0.833333 | eng_Latn | 0.771277 |
4d965e46e98316349d4843b09b426a6f70ffb324 | 2,171 | md | Markdown | docs/1.23/flowcontrol/v1beta1/nonResourcePolicyRule.md | jsonnet-libs/k8s-libsonnet | f8efa81cf15257bd151b97e31599e20b2ba5311b | [
"Apache-2.0"
] | 51 | 2021-07-02T12:34:06.000Z | 2022-03-25T09:20:57.000Z | docs/1.23/flowcontrol/v1beta1/nonResourcePolicyRule.md | jsonnet-libs/k8s-libsonnet | f8efa81cf15257bd151b97e31599e20b2ba5311b | [
"Apache-2.0"
] | null | null | null | docs/1.23/flowcontrol/v1beta1/nonResourcePolicyRule.md | jsonnet-libs/k8s-libsonnet | f8efa81cf15257bd151b97e31599e20b2ba5311b | [
"Apache-2.0"
] | 4 | 2021-07-22T17:39:30.000Z | 2021-11-17T19:15:14.000Z | ---
permalink: /1.23/flowcontrol/v1beta1/nonResourcePolicyRule/
---
# flowcontrol.v1beta1.nonResourcePolicyRule
"NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request."
## Index
* [`fn withNonResourceURLs(nonResourceURLs)`](#fn-withnonresourceurls)
* [`fn withNonResourceURLsMixin(nonResourceURLs)`](#fn-withnonresourceurlsmixin)
* [`fn withVerbs(verbs)`](#fn-withverbs)
* [`fn withVerbsMixin(verbs)`](#fn-withverbsmixin)
## Fields
### fn withNonResourceURLs
```ts
withNonResourceURLs(nonResourceURLs)
```
"`nonResourceURLs` is a set of url prefixes that a user should have access to and may not be empty. For example:\n - \"/healthz\" is legal\n - \"/hea*\" is illegal\n - \"/hea\" is legal but matches nothing\n - \"/hea/*\" also matches nothing\n - \"/healthz/*\" matches all per-component health checks.\n\"*\" matches all non-resource urls. if it is present, it must be the only entry. Required."
### fn withNonResourceURLsMixin
```ts
withNonResourceURLsMixin(nonResourceURLs)
```
"`nonResourceURLs` is a set of url prefixes that a user should have access to and may not be empty. For example:\n - \"/healthz\" is legal\n - \"/hea*\" is illegal\n - \"/hea\" is legal but matches nothing\n - \"/hea/*\" also matches nothing\n - \"/healthz/*\" matches all per-component health checks.\n\"*\" matches all non-resource urls. if it is present, it must be the only entry. Required."
**Note:** This function appends passed data to existing values
### fn withVerbs
```ts
withVerbs(verbs)
```
"`verbs` is a list of matching verbs and may not be empty. \"*\" matches all verbs. If it is present, it must be the only entry. Required."
### fn withVerbsMixin
```ts
withVerbsMixin(verbs)
```
"`verbs` is a list of matching verbs and may not be empty. \"*\" matches all verbs. If it is present, it must be the only entry. Required."
**Note:** This function appends passed data to existing values | 41.75 | 400 | 0.726393 | eng_Latn | 0.972894 |
4d969377fceb94865634b591dfd20c25636b0609 | 1,643 | md | Markdown | src/en/installation.md | cds-snc/alpha-design-system-documentation | 98a99a9c3d678cfe8084f553358548d65ff56442 | [
"MIT"
] | null | null | null | src/en/installation.md | cds-snc/alpha-design-system-documentation | 98a99a9c3d678cfe8084f553358548d65ff56442 | [
"MIT"
] | 5 | 2022-01-11T05:52:31.000Z | 2022-03-28T22:36:21.000Z | src/en/installation.md | cds-snc/alpha-design-system-documentation | 98a99a9c3d678cfe8084f553358548d65ff56442 | [
"MIT"
] | 3 | 2021-11-22T20:26:59.000Z | 2021-12-06T21:07:07.000Z | ---
title: Installation
translationKey: installation
layout: "layouts/base.njk"
eleventyNavigation:
key: installationEN
title: Installation
locale: en
order: 0
onThisPage:
0: Install from npm
1: Supported frameworks
2: JavaScript
3: React
4: Vue
---
# Installation
<section aria-label="Install from npm">
## Install from npm
``` js
npm install gcds-components
```
</section>
<section aria-label="Supported frameworks">
## Supported frameworks
The gcds-component library works in multiple frameworks.
### JavaScript
Place the following code in the `<head>` element of your site.
``` html
<script type="module">
import { defineCustomElements } from '/node_modules/gcds-components/loader/index.es2017.mjs';
defineCustomElements();
</script>
<link rel="stylesheet" href="/node_modules/gcds-components/dist/gcds/gcds.css">
```
All gcds-components should now be ready to use in your site.
### React
Place the following code in the `index.js` file of your app.
``` jsx
import { applyPolyfills, defineCustomElements } from 'gcds-components/loader';
import 'gcds-components/dist/gcds/gcds.css';
ReactDOM.render(...);
applyPolyfills().then(() => {
defineCustomElements(window);
});
```
All gcds-components should now be ready to use in your React app.
### Vue
Place the following code in the `main.js` file of your app.
``` js
import { applyPolyfills, defineCustomElements } from 'gcds-components/loader';
import 'gcds-components/dist/gcds/gcds.css';
applyPolyfills().then(() => {
defineCustomElements();
});
```
All gcds-components should now be ready to use in your Vue app.
</section>
| 19.559524 | 97 | 0.719416 | eng_Latn | 0.796834 |
4d979b09eccf2d62882c69c975ff869a3c69685e | 1,351 | md | Markdown | content/libraries/lmax-disruptor.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 5 | 2020-05-30T08:22:20.000Z | 2022-03-12T09:16:10.000Z | content/libraries/lmax-disruptor.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 2 | 2020-05-09T06:50:04.000Z | 2022-01-29T08:47:40.000Z | content/libraries/lmax-disruptor.md | tedneward/Research | 0410fe4e052961e05feda58267fbfa95f01b4a21 | [
"MIT"
] | 1 | 2021-12-14T04:20:30.000Z | 2021-12-14T04:20:30.000Z | title=LMAX Disruptor
tags=library, jvm, concurrency
summary=A framework which has "mechanical sympathy" for the hardware it's running on, and that's lock-free.
~~~~~~
"To understand the problem the Disruptor is trying to solve, and to get a feel for why this concurrency framework is so fast, read the [Technical Paper](http://lmax-exchange.github.com/disruptor/files/Disruptor-1.0.pdf). It also contains detailed performance results.
"[LMAX](http://www.lmax.com/) aims to be the fastest trading platform in the world. Clearly, in order to achieve this we needed to do something special to achieve very low-latency and high-throughput with our Java platform. Performance testing showed that using queues to pass data between stages of the system was introducing latency, so we focused on optimising this area.
"The Disruptor is the result of our research and testing. We found that cache misses at the CPU-level, and locks requiring kernel arbitration are both extremely costly, so we created a framework which has "mechanical sympathy" for the hardware it's running on, and that's lock-free.
"This is not a specialist solution, it's not designed to work only for a financial application. The Disruptor is a general-purpose mechanism for solving a difficult problem in concurrent programming."
[Website](http://lmax-exchange.github.io/disruptor/) | 96.5 | 374 | 0.793486 | eng_Latn | 0.998948 |
4d97cc77a637670bbe1d5bae2137044452c3d6b8 | 3,880 | md | Markdown | react/node_modules/npm-gui/README.md | LimLipJoo/Degree-FYP | 119344a094d201e3b90e727e67d3a84acb081efe | [
"MIT"
] | 1 | 2020-09-15T15:03:43.000Z | 2020-09-15T15:03:43.000Z | README.md | TryHardDood/npm-gui | f364a39511c1c71d18d9f9014129b5c5cf8c1db0 | [
"MIT"
] | 3 | 2022-02-14T02:16:38.000Z | 2022-02-27T11:35:44.000Z | README.md | TryHardDood/npm-gui | f364a39511c1c71d18d9f9014129b5c5cf8c1db0 | [
"MIT"
] | null | null | null | # [npm-gui](http://q-nick.github.io/npm-gui/)
[](https://travis-ci.org/q-nick/npm-gui) <a href="https://www.npmjs.com/package/npm-gui"><img src="https://img.shields.io/npm/dm/npm-gui.svg" alt="Downloads"></a> <a href="https://www.npmjs.com/package/npm-gui"><img src="https://img.shields.io/npm/v/npm-gui.svg" alt="Version"></a> <a href="https://www.npmjs.com/package/npm-gui"><img src="https://img.shields.io/npm/l/npm-gui.svg" alt="License"></a>
#

#
## About
`npm-gui` is a tool for managing javascript project dependencies, which are listed in `package.json` or `bower.json` - in a friendly way. Under the hood it will use transparently `npm`, `bower` or `yarn` commands to install, remove or update dependencies
(*to use **yarn** it requires **yarn.lock** file to be present in project folder.*)
### **npm-gui** key features:
- global dependencies management
- project dependencies management
- project scripts runner
- npm, yarn, bower support
#
## Getting Started
Simplest way to run `npm-gui` is by using <a href="https://www.npmjs.com/package/npx">`npx`</a>:
```
~/$ npx npm-gui
```
It will run the newest version of `npm-gui` without installing it on your system.
### Installation
`npm-gui` could also be installed as global dependency:
```
npm install -g npm-gui
```
or locally:
```
npm install npm-gui
```
### How to use
`npm-gui` app will be accessible in browser under address http://localhost:1337/. Remember to first use a command below:
When installed as global dependency you could run `npm-gui` with command line:
```
~/$ npm-gui
```
Then you could navigate to folder containing your javascript project (including `package.json` or `bower.json`).

Or you could run `npm-gui` command in you desired folder:
```
~/workspace/project1$ npm-gui
```
If you need to start app on another `host/port`, you could add `host:port` argument to command for example:
```
~/$ npm-gui localhost:9000
```
#### Starting
#### Navigating between projects
To change project press **folder icon** in top-right corner. Navigation panel will allow you to change folder - it must contain **yarn.lock, package.json or bower.json** file to be choosen.

#### Installing new dependencies
To install new dependency you can use **search/add button**. After typing name of the dependency in input - press search button - results will appear on list below. You can switch here between **npm/bower** repository. You must also decide will dependency be installed as production or development. After successfull installation of new dependency it will appear on project list.

#### Removing dependencies
To remove depenedency from your project simply press **trash icon** on the right.

#### Updating selected dependencies
- TODO
#### Updating all dependencies as...
To do a batch dependencies update and save new versions to package.json, for example *wanted*, press one of the green button above list of project dependencies.

#### Running scripts
- TODO
#### Removing scripts
- TODO
#### Enlarging console log
To get more readable log you can use enlarge button which will change width of console.
Consoles are not self-closing they will be visible until you close them with **remove button**

#
## Authors and Contributors
@q-nick
| 39.591837 | 457 | 0.732732 | eng_Latn | 0.936789 |
4d9804ae2442d7e7becbc0efaaf087085c0af006 | 7,537 | md | Markdown | desktop-src/tablet/recognitionproperty-constants.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-26T16:18:49.000Z | 2022-02-19T02:00:21.000Z | desktop-src/tablet/recognitionproperty-constants.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-04-09T17:00:51.000Z | 2020-04-09T18:30:01.000Z | desktop-src/tablet/recognitionproperty-constants.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-19T02:58:48.000Z | 2021-03-06T21:09:47.000Z | ---
Description: Defines values that specify the properties of a recognition alternate. The Tablet PC application programming interface (API) uses globally unique identifiers (GUIDs) to identify packet properties, which in Automation are constant strings.
ms.assetid: 2bfb0cbf-73a3-4e83-a4e9-f0803bd3dee8
title: RecognitionProperty Constants (Msinkaut.h)
ms.topic: reference
ms.date: 05/31/2018
---
# RecognitionProperty Constants
Defines values that specify the properties of a recognition alternate. The Tablet PC application programming interface (API) uses globally unique identifiers (GUIDs) to identify packet properties, which in Automation are constant strings.
The following table lists the available recognition alternate property globally unique identifier (GUID) fields. Use these GUIDs to access properties of an [**IInkRecognitionAlternate**](/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate) object by calling the [**GetPropertyValue**](/windows/desktop/api/msinkaut/nf-msinkaut-iinkrecognitionalternate-getpropertyvalue) method.
<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Constant Name</th>
<th style="text-align: left;">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_LINENUMBER_________"></span><span id="___________inkrecognitionproperty_linenumber_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_LINENUMBER</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the line number of the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object. <br/> LineNumber specifies the alternates with a particular line number.<br/>
<blockquote>
[!Note]<br />
This field is not supported for recognizers of East Asian characters.
</blockquote>
<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_SEGMENTATION_________"></span><span id="___________inkrecognitionproperty_segmentation_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_SEGMENTATION</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the segmentation of the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object. <br/> Segmentation specifies the basic ink fragment or unit that the recognizer uses to produce a recognition result.<br/>
<blockquote>
[!Note]<br />
Not implemented.
</blockquote>
<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_HOTPOINT_________"></span><span id="___________inkrecognitionproperty_hotpoint_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_HOTPOINT</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the recognition hot point of the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object. <br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_MAXIMUMSTROKECOUNT_________"></span><span id="___________inkrecognitionproperty_maximumstrokecount_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_MAXIMUMSTROKECOUNT</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the maximum stroke count of the recognition result for the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object. <br/>
<blockquote>
[!Note]<br />
Not implemented.
</blockquote>
<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_POINTSPERINCH_________"></span><span id="___________inkrecognitionproperty_pointsperinch_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_POINTSPERINCH</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the points-per-inch metric of the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object. <br/>
<blockquote>
[!Note]<br />
Not implemented.
</blockquote>
<br/></td>
</tr>
<tr class="even">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_CONFIDENCELEVEL_________"></span><span id="___________inkrecognitionproperty_confidencelevel_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_CONFIDENCELEVEL</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the level of confidence that the recognizer has in the recognition result.<br/>
<blockquote>
[!Note]<br />
Confidence evaluation is available only for United States English and all gesture recognizers in Microsoft Windows XP Tablet PC Edition and Windows Vista. Methods that provide the confidence property for any other recognizer return E_NOTIMPL.
</blockquote>
<br/></td>
</tr>
<tr class="odd">
<td style="text-align: left;"><span id="___________INKRECOGNITIONPROPERTY_LINEMETRICS_________"></span><span id="___________inkrecognitionproperty_linemetrics_________"></span><dl> <dt> <strong>INKRECOGNITIONPROPERTY_LINEMETRICS</strong> </dt> </dl></td>
<td style="text-align: left;">The GUID that identifies the property for the line metrics of the <a href="/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternate"><strong>IInkRecognitionAlternate</strong></a> object, which is the line for which to retrieve metrics. <br/></td>
</tr>
</tbody>
</table>
## Remarks
In C++, you can access these constants in the Msinkaut.h header file, which is located in the <systemdrive>:\\Program Files\\Microsoft SDKs\\Windows\\v6.0\\Include directory if you installed the SDK in the default location.
> [!Note]
> In C++, these constants are WCHARs, not BSTRs. Convert them into BSTRs before use. For more information about the BSTR data type, see [Using the COM Library](using-the-com-library.md).
## Requirements
| | |
|-------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows XP Tablet PC Edition \[desktop apps only\]<br/> |
| Minimum supported server<br/> | None supported<br/> |
| Header<br/> | <dl> <dt>Msinkaut.h (also requires Msinkaut\_i.c)</dt> </dl> |
## See also
<dl> <dt>
[**AlternatesWithConstantPropertyValues Method**](/windows/desktop/api/msinkaut/nf-msinkaut-iinkrecognitionalternate-alternateswithconstantpropertyvalues)
</dt> <dt>
[**ConfidenceAlternates Property**](/windows/desktop/api/msinkaut/nf-msinkaut-iinkrecognitionalternate-get_confidencealternates)
</dt> <dt>
[**LineAlternates Property**](/windows/desktop/api/msinkaut/nf-msinkaut-iinkrecognitionalternate-get_linealternates)
</dt> <dt>
[**IInkRecognitionAlternates Interface**](/windows/desktop/api/msinkaut/nn-msinkaut-iinkrecognitionalternates)
</dt> </dl>
| 57.534351 | 392 | 0.724426 | eng_Latn | 0.426552 |
4d988132ba091b0f6604ee1e53ad5d81aadd6116 | 1,028 | md | Markdown | docs/source/markdown/podman-container-exists.1.md | jdieter/libpod | 82a83b9ff55e1f22cb1951b927de29866fa44054 | [
"Apache-2.0"
] | 2 | 2018-12-16T10:59:36.000Z | 2019-03-24T21:13:04.000Z | docs/source/markdown/podman-container-exists.1.md | jdieter/libpod | 82a83b9ff55e1f22cb1951b927de29866fa44054 | [
"Apache-2.0"
] | 1 | 2021-12-08T01:47:36.000Z | 2021-12-08T01:47:36.000Z | docs/source/markdown/podman-container-exists.1.md | jdieter/libpod | 82a83b9ff55e1f22cb1951b927de29866fa44054 | [
"Apache-2.0"
] | 1 | 2019-07-20T17:41:13.000Z | 2019-07-20T17:41:13.000Z |
% podman-container-exists(1)
## NAME
podman-container-exists - Check if a container exists in local storage
## SYNOPSIS
**podman container exists** [*options*] *container*
## DESCRIPTION
**podman container exists** checks if a container exists in local storage. The **ID** or **Name**
of the container may be used as input. Podman will return an exit code
of `0` when the container is found. A `1` will be returned otherwise. An exit code of `125` indicates there
was an issue accessing the local storage.
## OPTIONS
**-h**, **--help**
Print usage statement
## Examples
Check if a container called `webclient` exists in local storage (the container does actually exist).
```
$ sudo podman container exists webclient
$ echo $?
0
$
```
Check if a container called `webbackend` exists in local storage (the container does not actually exist).
```
$ sudo podman container exists webbackend
$ echo $?
1
$
```
## SEE ALSO
podman(1)
## HISTORY
November 2018, Originally compiled by Brent Baude (bbaude at redhat dot com)
| 23.906977 | 108 | 0.728599 | eng_Latn | 0.987065 |
4d98b94bc2a4be31197b7075b93d6277f72b5c5b | 76 | md | Markdown | README.md | logvvw/minisdmfer | ff1df34ead9504d892d722e11ff104e691cb6810 | [
"Apache-2.0"
] | null | null | null | README.md | logvvw/minisdmfer | ff1df34ead9504d892d722e11ff104e691cb6810 | [
"Apache-2.0"
] | null | null | null | README.md | logvvw/minisdmfer | ff1df34ead9504d892d722e11ff104e691cb6810 | [
"Apache-2.0"
] | null | null | null |
# minisdmfer
A minimal integrated framework based on Spring Boot + Dubbo + MQ
| 25.333333 | 62 | 0.789474 | eng_Latn | 0.745644 |
4d98cf118968049a821e7690a50f758a09134f47 | 8,015 | md | Markdown | windows-apps-src/communication/interprocess-communication.md | Howard20181/windows-uwp.zh-cn | 4a36ae6ea1fce51ff5fb13288a96f68790d2bd29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/communication/interprocess-communication.md | Howard20181/windows-uwp.zh-cn | 4a36ae6ea1fce51ff5fb13288a96f68790d2bd29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/communication/interprocess-communication.md | Howard20181/windows-uwp.zh-cn | 4a36ae6ea1fce51ff5fb13288a96f68790d2bd29 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Interprocess communication (IPC)
description: This topic explains various ways to perform interprocess communication (IPC) between Universal Windows Platform (UWP) applications and Win32 applications.
ms.date: 03/23/2020
ms.topic: article
keywords: windows 10, uwp
ms.openlocfilehash: 0aa3c62100ecbb30e136c52cee3a6862cf15bef2
ms.sourcegitcommit: 7b2febddb3e8a17c9ab158abcdd2a59ce126661c
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 08/31/2020
ms.locfileid: "89175621"
---
# <a name="interprocess-communication-ipc"></a>进程间通信 (IPC)
本主题介绍了在通用 Windows 平台 (UWP) 应用程序和 Win32 应用程序之间执行进程间通信 (IPC) 的各种方式。
## <a name="app-services"></a>应用程序服务
应用服务使应用程序能够公开接受和返回 ([**ValueSet**](/uwp/api/Windows.Foundation.Collections.ValueSet)) 在后台的基元属性包的服务。 如果对其进行 [序列化](https://stackoverflow.com/questions/46367985/how-to-make-a-class-that-can-be-added-to-the-windows-foundation-collections-valu),则可以传递丰富的对象。
应用服务可以在后台任务或前台应用程序[内的](../launch-resume/how-to-create-and-consume-an-app-service.md)[进程中](../launch-resume/convert-app-service-in-process.md)运行。
应用服务最适合用于共享少量的数据,在这种情况下,不需要几乎实时的延迟。
## <a name="com"></a>COM
[COM](/windows/win32/com/component-object-model--com--portal) 是一种面向对象的分布式系统,用于创建可进行交互和通信的二进制软件组件。 作为开发人员,使用 COM 为应用程序创建可重用的软件组件和自动化层。 COM 组件可以是进程内或进程外的,它们可以通过 [客户端和服务器](/windows/win32/com/com-clients-and-servers) 模型进行通信。 进程外 COM 服务器长时间用作 [对象间通信](/windows/win32/com/inter-object-communication)的方式。
具有 [runFullTrust](../packaging/app-capability-declarations.md#restricted-capabilities) 功能的打包应用程序可以通过 [包清单](/uwp/schemas/appxpackage/uapmanifestschema/element-com-extension)为 IPC 注册进程外 COM 服务器。 这称为 " [打包的 COM](https://blogs.windows.com/windowsdeveloper/2017/04/13/com-server-ole-document-support-desktop-bridge/)"。
## <a name="filesystem"></a>文件系统
### <a name="broadfilesystemaccess"></a>BroadFileSystemAccess
打包的应用程序可以通过声明 [broadFileSystemAccess](../files/file-access-permissions.md#accessing-additional-locations) 受限功能来使用广泛的文件系统执行 IPC。 此功能授予 [Windows 存储](/uwp/api/Windows.Storage) Api 和 [xxxFromApp](/previous-versions/windows/desktop/legacy/mt846585(v=vs.85)) Win32 api 对广泛文件系统的访问权限。
默认情况下,使用打包应用程序的文件系统的 IPC 仅限于此部分中所述的其他机制。
### <a name="publishercachefolder"></a>PublisherCacheFolder
[PublisherCacheFolder](/uwp/api/windows.storage.applicationdata.getpublishercachefolder)使打包后的应用程序可以声明其清单中的文件夹,同一发布者可以与其他包共享这些文件夹。
共享存储文件夹具有以下要求和限制:
* 不会备份或漫游共享存储文件夹中的数据。
* 用户可以清除共享存储文件夹的内容。
* 不能使用 "共享存储" 文件夹在不同发布者的应用程序之间共享数据。
* 不能使用共享存储文件夹在不同用户之间共享数据。
* 共享存储文件夹没有版本管理。
如果发布多个应用程序,并且正在寻找一种简单的机制来共享它们之间的数据,则 PublisherCacheFolder 是基于文件系统的简单选项。
### <a name="sharedaccessstoragemanager"></a>SharedAccessStorageManager
[SharedAccessStorageManager](/uwp/api/Windows.ApplicationModel.DataTransfer.SharedStorageAccessManager) 与应用服务、协议激活 (例如 LaunchUriForResultsAsync) 等)结合使用,可通过令牌共享 StorageFiles。
## <a name="fulltrustprocesslauncher"></a>FullTrustProcessLauncher
利用 [runFullTrust](../packaging/app-capability-declarations.md#restricted-capabilities) 功能,打包应用程序可以在同一包中 [启动完全信任进程](/uwp/api/Windows.ApplicationModel.FullTrustProcessLauncher) 。
对于包限制是负担或缺少 IPC 选项的情况,应用程序可以使用完全信任进程作为代理来与系统交互,然后使用完全信任进程本身通过应用服务或其他受支持的其他 IPC 机制进行 IPC。
## <a name="launchuriforresultsasync"></a>LaunchUriForResultsAsync
[LaunchUriForResultsAsync](../launch-resume/how-to-launch-an-app-for-results.md) 用于简单 ([ValueSet](/uwp/api/Windows.Foundation.Collections.ValueSet)) 数据与实现 [ProtocolForResults](../launch-resume/how-to-launch-an-app-for-results.md#step-2-override-applicationonactivated-in-the-app-that-youll-launch-for-results) 激活协定的其他封装应用程序进行交换。 与通常在后台运行的应用服务不同,后者在前台启动目标应用程序。
可以通过 ValueSet 将 [SharedStorageAccessManager](/uwp/api/Windows.ApplicationModel.DataTransfer.SharedStorageAccessManager) 令牌传递给应用程序来共享文件。
## <a name="loopback"></a>环回
环回是与网络服务器进行通信的过程,该服务器在使用 localhost (环回地址) 中侦听。
为了保持安全和网络隔离,默认情况下,针对打包的应用程序,会阻止 IPC 的环回连接。 可以使用 [功能](/previous-versions/windows/apps/hh770532(v=win.10)) 和 [清单属性](/uwp/schemas/appxpackage/uapmanifestschema/element-uap4-loopbackaccessrules)在受信任的打包应用程序之间启用环回连接。
* 所有参与环回连接的打包应用程序都需要 `privateNetworkClientServer` 在其 [包清单](/uwp/schemas/appxpackage/uapmanifestschema/element-capability)中声明此功能。
* 两个打包的应用程序可以通过环回进行通信,方法是在其包清单中声明 [LoopbackAccessRules](/uwp/schemas/appxpackage/uapmanifestschema/element-uap4-loopbackaccessrules) 。
* 每个应用程序都必须列出其 [LoopbackAccessRules](/uwp/schemas/appxpackage/uapmanifestschema/element-uap4-loopbackaccessrules)中的另一个。 客户端为服务器声明了 "out" 规则,服务器为其支持的客户端声明了 "in" 规则。
> [!NOTE]
> 在开发时,可以通过 Visual Studio 中的包清单编辑器、通过 Microsoft Store 发布的应用程序的 [合作伙伴中心](../publish/view-app-identity-details.md) 或已安装的应用程序的 [add-appxpackage](/powershell/module/appx/get-appxpackage?view=win10-ps) PowerShell 命令,在这些规则中标识应用程序所需的包系列名称。
未打包的应用程序和服务没有包标识,因此不能在 [LoopbackAccessRules](/uwp/schemas/appxpackage/uapmanifestschema/element-uap4-loopbackaccessrules)中声明它们。 你可以通过 [CheckNetIsolation.exe](/previous-versions/windows/apps/hh780593(v=win.10))将打包的应用程序配置为通过与未打包的应用程序和服务的环回进行连接,但这仅适用于你对计算机具有本地访问权限的旁加载或调试方案,并且你具有管理员权限。
* 所有参与环回连接的打包应用程序都需要 `privateNetworkClientServer` 在其 [包清单](/uwp/schemas/appxpackage/uapmanifestschema/element-capability)中声明功能。
* 如果打包应用程序连接到未打包的应用程序或服务,则运行 `CheckNetIsolation.exe LoopbackExempt -a -n=<PACKAGEFAMILYNAME>` 为打包的应用程序添加环回例外。
* 如果未打包的应用程序或服务正在连接到打包应用程序,请运行 `CheckNetIsolation.exe LoopbackExempt -is -n=<PACKAGEFAMILYNAME>` 以使打包的应用程序能够接收入站环回连接。
* 打包的应用程序侦听连接时, [CheckNetIsolation.exe](/previous-versions/windows/apps/hh780593(v=win.10))必须连续运行。
* 该 `-is` 标志是在 Windows 10 中引入的,版本 1607 (10.0;生成 14393) 。
> [!NOTE]
> 在 `-n` 开发时,可以通过 Visual Studio 中的包清单编辑器、通过 Microsoft Store 发布的应用程序的[合作伙伴中心](../publish/view-app-identity-details.md)或已安装的应用程序的[add-appxpackage](/powershell/module/appx/get-appxpackage?view=win10-ps) PowerShell 命令来查找[CheckNetIsolation.exe](/previous-versions/windows/apps/hh780593(v=win.10))标志所需的包系列名称。
[CheckNetIsolation.exe](/previous-versions/windows/apps/hh780593(v=win.10)) 对于 [调试网络隔离问题](/previous-versions/windows/apps/hh780593(v=win.10)#debug-network-isolation-issues)也很有用。
## <a name="pipes"></a>管道
[管道](/windows/win32/ipc/pipes) 启用了管道服务器与一个或多个管道客户端之间的简单通信。
支持[匿名管道](/windows/win32/ipc/anonymous-pipes)和[命名管道](/windows/win32/ipc/named-pipes),但存在以下限制:
* 默认情况下,仅在同一包中的进程之间支持打包应用程序中的命名管道,除非进程是完全信任。
* 按照 [共享命名对象](./sharing-named-objects.md)的准则,可以在包之间共享命名管道。
* 打包的应用程序中的命名管道必须使用 `\\.\pipe\LOCAL\` 管道名称的语法。
## <a name="registry"></a>注册表
通常不建议使用 IPC 的[注册表](/windows/win32/sysinfo/registry-functions)使用情况,但对于现有代码,它是受支持的。 打包的应用程序只能访问他们有权访问的注册表项。
[打包为 .msix 的桌面应用程序](/windows/msix/desktop/desktop-to-uwp-root) 利用 [注册表虚拟化](/windows/msix/desktop/desktop-to-uwp-behind-the-scenes#registry) ,以便将全局注册表写入包含在 .msix 包中的专用 hive。 这可以实现源代码兼容性,同时最大限度地降低全局注册表影响,并且可用于同一包中的进程之间的 IPC。 如果必须使用注册表,此模型是首选模型,而不是操作全局注册表。
## <a name="rpc"></a>RPC
可以使用[RPC](/windows/win32/rpc/rpc-start-page)将打包应用程序连接到 Win32 rpc 终结点,前提是打包的应用程序具有匹配 RPC 终结点上的 acl 的正确功能。
自定义功能使 Oem 和 Ihv 能够 [定义任意功能](/windows-hardware/drivers/devapps/hardware-support-app--hsa--steps-for-driver-developers#reserving-a-custom-capability), [使用这些功能作为其 RPC 终结点的 ACL](/windows-hardware/drivers/devapps/hardware-support-app--hsa--steps-for-driver-developers#allowing-access-to-an-rpc-endpoint-to-a-uwp-app-using-the-custom-capability),然后将 [这些功能授予授权客户端应用程序](/windows-hardware/drivers/devapps/hardware-support-app--hsa--steps-for-driver-developers#preparing-the-signed-custom-capability-descriptor-sccd-file)。 有关完整的示例应用程序,请参阅 [CustomCapability](https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/CustomCapability) 示例。
RPC 终结点也可列入到特定的打包应用程序,将对终结点的访问限制为仅限这些应用程序,而无需使用自定义功能的管理开销。 可以使用 [DeriveAppContainerSidFromAppContainerName](/windows/win32/api/userenv/nf-userenv-deriveappcontainersidfromappcontainername) API 从包系列名称中派生 SID,然后使用 SID 作为 RPC 终结点的 ACL,如 [CustomCapability](https://github.com/Microsoft/Windows-universal-samples/blob/master/Samples/CustomCapability/Service/Server/RpcServer.cpp) 示例中所示。
## <a name="shared-memory"></a>Shared Memory
[文件映射](/windows/win32/memory/sharing-files-and-memory) 可用于在两个或多个进程之间共享文件或内存,但有以下限制:
* 默认情况下,仅在同一包中的进程之间支持打包应用程序中的文件映射,除非进程是完全信任。
* 按照 [共享命名对象](./sharing-named-objects.md)的准则,可以在包之间共享文件映射。
建议使用共享内存来有效地共享和处理大量数据。 | 63.110236 | 645 | 0.809482 | yue_Hant | 0.730123 |
4d98f1cf30f3e682cfaac070f9fc01fe595da00c | 60 | md | Markdown | README.old.md | diglopes/learning-redux | 4eaf4c83f00ef2dc42f98d16d1a31b8ccd82628c | [
"MIT"
] | null | null | null | README.old.md | diglopes/learning-redux | 4eaf4c83f00ef2dc42f98d16d1a31b8ccd82628c | [
"MIT"
] | 3 | 2021-03-10T18:52:31.000Z | 2022-02-27T05:01:00.000Z | README.old.md | diglopes/learning-redux | 4eaf4c83f00ef2dc42f98d16d1a31b8ccd82628c | [
"MIT"
] | null | null | null |
# learning-redux
Just a simple app to learn how Redux works
| 20 | 42 | 0.783333 | eng_Latn | 0.968345 |
4d99453829e0ae415ac02276bcfc96ff10161c41 | 3,035 | md | Markdown | pkg/alexa/Readme.md | DrPsychick/alexa-go-cloudformation-demo | 2ebb6ce04cb6d8fd587053852fa50790462aaec2 | [
"MIT"
] | 2 | 2019-04-21T17:17:05.000Z | 2019-11-09T12:50:36.000Z | pkg/alexa/Readme.md | DrPsychick/alexa-go-cloudformation-demo | 2ebb6ce04cb6d8fd587053852fa50790462aaec2 | [
"MIT"
] | 22 | 2019-04-08T13:42:14.000Z | 2021-09-19T13:34:29.000Z | pkg/alexa/Readme.md | DrPsychick/alexa-go-cloudformation-demo | 2ebb6ce04cb6d8fd587053852fa50790462aaec2 | [
"MIT"
] | null | null | null |
# OBSOLETE!
see https://github.com/DrPsychick/go-alexa-lambda
### Alexa Dialog
Example lambda request: Alexa asked, but could not match the user response to a valid slot value: `ER_SUCCESS_NO_MATCH`
```json
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.806dc75f-5ee0-44a2-913d-29b5be44ad54",
"timestamp": "2019-11-03T11:50:06Z",
"locale": "en-US",
"intent": {
"name": "AWSStatus",
"confirmationStatus": "NONE",
"slots": {
"Region": {
"name": "Region",
"value": "franfrut",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.8f065707-2c82-49b4-a78f-6a1fba6c8bae.AWSRegion",
"status": {
"code": "ER_SUCCESS_NO_MATCH"
}
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
},
"Area": {
"name": "Area",
"confirmationStatus": "NONE",
"source": "USER"
}
}
},
"dialogState": "COMPLETED"
}
```
Successful match: `ER_SUCCESS_MATCH`
```json
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.b8683011-7bde-4ad3-bdb0-3814764e2dff",
"timestamp": "2019-11-02T20:52:38Z",
"locale": "LOCALE",
"intent": {
"name": "AWSStatus",
"confirmationStatus": "NONE",
"slots": {
"Region": {
"name": "Region",
"value": "frankfurt",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.8f065707-2c82-49b4-a78f-6a1fba6c8bae.AWSRegion",
"status": {
"code": "ER_SUCCESS_MATCH"
},
"values": [
{
"value": {
"name": "Frankfurt",
"id": "4312d5c8cdda027420c474e2221abc34"
}
}
]
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
},
"Area": {
"name": "Area",
"confirmationStatus": "NONE",
"source": "USER"
}
}
},
"dialogState": "COMPLETED"
}
```
#### Links
* https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html
* multiple intents in one dialog: https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#pass-a-new-intent
* https://developer.amazon.com/blogs/alexa/post/cfbd2f5e-c72f-4b03-8040-8628bbca204c/alexa-skill-teardown-understanding-entity-resolution-with-pet-match
### Credits
basic code thanks to: https://github.com/soloworks/go-alexa-models
| 31.947368 | 152 | 0.493575 | yue_Hant | 0.397721 |
4d99a0e4ef08ce3f9866ddec51a2c40f1fcae0a4 | 9,457 | md | Markdown | docs/ide/reference/accessibility-tips-and-tricks.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/reference/accessibility-tips-and-tricks.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/reference/accessibility-tips-and-tricks.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null |
---
title: Accessibility tips and tricks for Visual Studio
description: Learn about tips and tricks that help make the Visual Studio IDE more accessible for every user, including users with disabilities
ms.date: 08/06/2019
ms.topic: conceptual
helpviewer_keywords:
- accessibility [Visual Studio]
ms.assetid: 6b491d88-f79e-4686-8841-857624bdcfda
author: TerryGLee
ms.author: tglee
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: 5828fb114a4df559c46dd6ae7f64887ab48e7429
ms.sourcegitcommit: 5216c15e9f24d1d5db9ebe204ee0e7ad08705347
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 08/09/2019
ms.locfileid: "68919520"
---
# <a name="accessibility-tips-and-tricks-for-visual-studio"></a>Tipps und Tricks zur Barrierefreiheit für Visual Studio
Visual Studio verfügt über integrierte Barrierefreiheitsfunktionen, die mit Sprachausgaben und anderen Hilfstechnologien kompatibel sind. Egal, ob Sie Tastenkombinationen zum Navigieren in der IDE verwenden möchten oder kontrastreiche Designs zur Verbesserung der Sichtbarkeit verwenden möchten – auf dieser Seite finden Sie dazu einige Tipps und Tricks.
Außerdem erfahren Sie, wie Anmerkungen verwendet werden, um nützliche Informationen über Ihren Code anzuzeigen. Sie erhalten ebenfalls Informationen darüber, wie Sie Sounds für Build- und Breakpointereignisse festlegen.
> [!NOTE]
> Dieses Thema gilt für Visual Studio unter Windows. Informationen zu Visual Studio für Mac finden Sie unter [Barrierefreiheit für Visual Studio für Mac](/visualstudio/mac/accessibility).
## <a name="save-your-ide-settings"></a>Speichern Ihrer IDE-Einstellungen
Sie können Ihre IDE-Umgebung anpassen, indem Sie Fensterlayout, Tastaturzuordnungsschema und andere Einstellungen speichern. Weitere Informationen finden Sie unter [Personalisieren von Visual Studio-IDE](../../ide/personalizing-the-visual-studio-ide.md).
## <a name="modify-your-ide-for-high-contrast-viewing"></a>Ändern Ihrer IDE für die Ansicht mit hohem Kontrast
Einige Personen haben Schwierigkeiten damit, manche Farben zu erkennen. Wenn Sie beim Schreiben von Code einen höheren Kontrast wünschen, aber nicht die üblichen Themen für hohen Kontrast verwenden möchten, bieten wir nun das Design „Blau (zusätzlicher Kontrast)“ an.
## <a name="use-annotations-to-reveal-useful-information-about-your-code"></a>Verwenden von Anmerkungen, um nützliche Informationen über Ihren Code anzuzeigen
Der Visual Studio-Editor enthält viele Randsteuerelemente für den Text, die Sie über Charakteristiken und Funktionen an bestimmten Punkten einer Codezeile informieren, z. B. die Schraubendreher- und Glühbirnensymbole, Wellenlinien für Fehler und Warnungen, Lesezeichen usw. Sie können den Befehlssatz „Zeilenanmerkungen anzeigen“ verwenden, um diese Randsteuerelemente zu ermitteln und zwischen diesen zu navigieren.

## <a name="access-toolbars-by-using-keyboard-shortcuts"></a>Zugreifen auf Symbolleisten mithilfe von Tastenkombinationen
Die Visual Studio-IDE verfügt genau wie viele andere Toolfenster über Symbolleisten. Mithilfe der folgenden Tastenkombinationen können Sie darauf zugreifen.
|Feature|BESCHREIBUNG|Tastenkombination|
|-------------|-----------------| - |
|IDE-Symbolleisten|Wählen Sie die erste Schaltfläche in der Standardsymbolleiste.|**ALT**, **STRG**+**TAB**|
|Symbolleisten des Toolfensters|Verschieben Sie den Fokus zu den Symbolleisten in einem Toolfenster. <br> <br> **HINWEIS:** Dies funktioniert für die meisten Toolfenster, jedoch nur, wenn sich der Fokus in einem Toolfenster befindet. Sie müssen außerdem die UMSCHALTTASTE vor der ALT-TASTE drücken. In einigen Toolfenstern wie Team Explorer müssen Sie die UMSCHALTTASTE einen Moment gedrückt halten, bevor Sie die ALT-TASTE drücken.|**UMSCHALT**+**ALT**|
|Symbolleisten|Wechseln Sie zum ersten Element in der nächsten Symbolleiste (wenn eine Symbolleiste über Fokus verfügt).|**STRG**+**TAB**|
### <a name="other-useful-keyboard-shortcuts"></a>Weitere nützliche Tastenkombinationen
Einige weitere nützliche Tastenkombinationen sind folgende:
|Feature|BESCHREIBUNG|Tastenkombination|
|-------------|-----------------| - |
|IDE|Hohen Kontrast ein- und ausschalten <br> <br> **HINWEIS:** Windows-Standardtastenkombination|**Linke ALT**+**Linke UMSCHALT**+**DRUCK**|
|Dialogfeld|Aktivieren oder deaktivieren Sie die Kontrollkästchenoption in einem Dialogfeld. <br> <br> **HINWEIS:** Windows-Standardtastenkombination|**LEERTASTE**|
|Kontextmenüs|Öffnen Sie ein Kontextmenü (Rechtsklick). <br> <br> **HINWEIS:** Windows-Standardtastenkombination|**UMSCHALT**+**F10**|
|Menüs|Greifen Sie schnell auf ein Menüelement mithilfe der Zugriffstasten zu. Drücken Sie die **ALT**-Taste gefolgt von den unterstrichenen Buchstaben in einem Menü, um den Befehl zu aktivieren. Wenn z.B. das Dialogfeld „Projekt öffnen“ in Visual Studio angezeigt werden soll, wählen Sie **ALT**+**F**+**O**+**P** aus. <br><br> **HINWEIS:** Windows-Standardtastenkombination|**ALT** + **[Buchstabe]**|
|Suchfeld|Verwenden des Suchfeatures in Visual Studio|**Strg**+**Q**|
|Fenster „Toolbox“|Wechseln Sie zwischen Toolboxregisterkarten.|**STRG**+**NACH-OBEN-TASTE**<br /><br /> und<br /><br /> **STRG**+**NACH-UNTEN-TASTE**|
|Fenster „Toolbox“|Fügen Sie ein Steuerelement aus der Toolbox zu einem Formular oder einem Designer hinzu.|**EINGABETASTE**|
|Dialogfeld „Optionen“: Umgebung > Tastatur|Löschen Sie die Tastenkombination, die unter **Tastenkombination drücken** eingegeben wurde.|**RÜCKTASTE**|
|Fenster „Benachrichtigungstool“|Öffnen Sie das Fenster des Benachrichtigungstools, indem Sie nacheinander zwei Tastaturkombinationen verwenden. Zeigen Sie dann eine Benachrichtigung an, indem Sie sie über die Pfeiltasten auszuwählen.| **STRG**+ **\** , **STRG**+**N**|
> [!NOTE]
> Die angezeigten Dialogfelder und Menübefehle können sich je nach den aktiven Einstellungen oder der verwendeten Version von den in der Hilfe beschriebenen unterscheiden.
## <a name="access-notifications-by-using-keyboard-shortcuts"></a>Zugreifen auf Benachrichtigungen mithilfe von Tastenkombinationen
Wenn in der IDE eine Benachrichtigung angezeigt wird, können Sie so über Tastenkombinationen auf das Benachrichtigungsfenster zugreifen:
1. Drücken Sie an einer beliebigen Stelle in der IDE nacheinander die folgenden beiden Tastenkombinationen: **STRG**+ **\** und dann **STRG**+**N**.
Das Fenster **Benachrichtigungen** wird geöffnet.

1. Use the **Tab** key or the arrow keys to select a notification.
## <a name="use-the-sound-applet-to-set-build-and-breakpoint-cues"></a>Verwenden des Sound-Applets, um Hinweise für Builds und Breakpoints festzulegen
Sie können das Sound-Applet in Windows verwenden, um Visual Studio-Programmereignissen einen Sound zuzuweisen. Insbesondere können Sie folgenden Programmereignissen Sounds zuweisen:
* Haltepunkt erreicht
* Buildvorgang abgebrochen
* Fehler beim Buildvorgang
* Buildvorgang erfolgreich
Gehen Sie dabei folgendermaßen vor:
1. Geben Sie auf einem Computer mit Windows 10 **Systemsounds ändern** in das Feld **Suche** ein.

(Alternatively, if you have Cortana enabled, say "Hey Cortana", and then say "Change system sounds".)
1. Double-click **Change system sounds**.

1. In the **Sound** dialog box, click the **Sounds** tab.
1. Then, in **Program Events**, scroll to **Microsoft Visual Studio**, and select the sounds that you want to apply to the events of your choice.

1. Click **OK**.
::: moniker range="vs-2017"
> [!TIP]
> For more information about accessibility updates, see the blog post [Accessibility improvements in Visual Studio 2017 version 15.3](https://devblogs.microsoft.com/visualstudio/accessibility-improvements-in-visual-studio-2017-version-15-3/).
::: moniker-end
## <a name="see-also"></a>See also
* [Accessibility features of Visual Studio](../../ide/reference/accessibility-features-of-visual-studio.md)
* [How to: Customize menus and toolbars in Visual Studio](../../ide/how-to-customize-menus-and-toolbars-in-visual-studio.md)
* [Personalize the Visual Studio IDE](../../ide/personalizing-the-visual-studio-ide.md)
* [Accessibility (Visual Studio for Mac)](/visualstudio/mac/accessibility)
* [Microsoft Accessibility](https://www.microsoft.com/Accessibility)
| 72.746154 | 454 | 0.786613 | deu_Latn | 0.979032 |
4d99a1324343d2a1c88d824bcb7251baca260554 | 1,292 | md | Markdown | README.md | gontard/scalafix | 7df2cb25e12e0ab6af457b09c3fce700eb44d457 | [
"BSD-3-Clause"
] | null | null | null | README.md | gontard/scalafix | 7df2cb25e12e0ab6af457b09c3fce700eb44d457 | [
"BSD-3-Clause"
] | null | null | null | README.md | gontard/scalafix | 7df2cb25e12e0ab6af457b09c3fce700eb44d457 | [
"BSD-3-Clause"
] | null | null | null |
<img src="website/static/img/scalafix-brand-small2x.png" alt="logo" width="37px" height="37px" style="margin-bottom:-8px;margin-right:-4px;"> Scalafix
[](https://index.scala-lang.org/scalacenter/scalafix/scalafix-core)
[](https://github.com/scalacenter/scalafix/actions?query=workflow)
[](https://gitter.im/scalacenter/scalafix)
========
Rewrite and linting tool for Scala.
## User documentation
Head over here: https://scalacenter.github.io/scalafix/
## Team
The current maintainers (people who can merge pull requests) are:
- Brice Jaglin - [`@bjaglin`](https://github.com/bjaglin)
- Gabriele Petronella - [`@gabro`](https://github.com/gabro)
- Guillaume Massé - [`@MasseGuillaume`](https://github.com/MasseGuillaume)
- Ólafur Páll Geirsson - [`@olafurpg`](https://github.com/olafurpg)
- Meriam Lachkar - [`@mlachkar`](https://github.com/mlachkar)
- Shane Delmore - [`@ShaneDelmore`](https://github.com/ShaneDelmore)
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md).
| 47.851852 | 160 | 0.747678 | kor_Hang | 0.200836 |
4d9a6f23addbc9ff65f1c8c20123820d1e27891f | 792 | md | Markdown | src/pages/posts/post-127.md | jorgegorka/alvareznavarro | 88042aa2c34bd1fc262defbdf12391e182334986 | [
"MIT"
] | null | null | null | src/pages/posts/post-127.md | jorgegorka/alvareznavarro | 88042aa2c34bd1fc262defbdf12391e182334986 | [
"MIT"
] | null | null | null | src/pages/posts/post-127.md | jorgegorka/alvareznavarro | 88042aa2c34bd1fc262defbdf12391e182334986 | [
"MIT"
] | null | null | null | ---
title: "Foro sobre arte"
date: '2008-11-18T11:40:00+00:00'
slug: '/blog/2008/11/foro-sobre-arte'
tags: ["foros", "Internet"]
category: 'web-development'
excerpt: "[Jose Luis Birigay]( has created a [forum for debate about the world of art]( a pioneering initiative (above all here in La Rioja) and one that will..."
draft: false
headerImage:
---
[Jose Luis Birigay](http://www.birigay.es/) has created a [forum for debate about the world of art](http://foro.birigay.es/index.php).
It is a pioneering initiative (especially here in La Rioja) that will undoubtedly encourage and promote the spread of culture.
In the forum you can discuss exhibitions and artists, and even share ideas and tips about materials and techniques.
A good idea, and one that is being very well received by the public.
| 41.684211 | 154 | 0.744949 | spa_Latn | 0.981149 |
4d9a7a3a174d5d33c81e951a42d735c11ed0fcc3 | 1,277 | md | Markdown | notes/contains_duplicate_2.md | caser789/leetcode | 97cca424277c98523cd722a82cebe0fe596ddfcf | [
"MIT"
] | null | null | null | notes/contains_duplicate_2.md | caser789/leetcode | 97cca424277c98523cd722a82cebe0fe596ddfcf | [
"MIT"
] | null | null | null | notes/contains_duplicate_2.md | caser789/leetcode | 97cca424277c98523cd722a82cebe0fe596ddfcf | [
"MIT"
] | null | null | null | ---
tags: [2019/08/30, application/array/duplicate, data structure/map, leetcode/219, method/search/hash]
title: Contains Duplicate II
created: '2019-07-30T15:50:41.583Z'
modified: '2019-08-30T14:51:24.645Z'
---
# Contains Duplicate II
Given an array of integers and an integer k, find out whether there are two distinct indices i and j in the array such that nums[i] = nums[j] and the absolute difference between i and j is at most k.
### Example 1:
```
Input: nums = [1,2,3,1], k = 3
Output: true
```
### Example 2:
```
Input: nums = [1,0,1,1], k = 1
Output: true
```
### Example 3:
```
Input: nums = [1,2,3,1,2,3], k = 2
Output: false
```
## Solution
```python
class Solution(object):
def containsNearbyDuplicate(self, nums, k):
"""
>>> Solution().containsNearbyDuplicate([1, 2, 3, 1], 3)
True
>>> Solution().containsNearbyDuplicate([1, 0, 1, 1], 1)
True
>>> Solution().containsNearbyDuplicate([1, 2, 3, 1, 2, 3], 2)
False
"""
        store = {}  # maps each value to the index of its most recent occurrence
        for i, num in enumerate(nums):
            if num in store and i - store[num] <= k:
                return True
            store[num] = i  # remember the latest index for this value
        return False
```
| 22.403509 | 199 | 0.563038 | eng_Latn | 0.905084 |
4d9aa15e0351a6e4e5f8aec61665ad849801d911 | 41 | md | Markdown | README.md | kelath/Burp-Extensions | 2d9122adc0e12b00f29bca321979dc2ecc428ddc | [
"MIT"
] | 2 | 2016-09-20T16:51:31.000Z | 2019-07-30T08:56:35.000Z | README.md | kelath/Burp-Extensions | 2d9122adc0e12b00f29bca321979dc2ecc428ddc | [
"MIT"
] | null | null | null | README.md | kelath/Burp-Extensions | 2d9122adc0e12b00f29bca321979dc2ecc428ddc | [
"MIT"
] | 2 | 2018-03-02T03:09:26.000Z | 2018-03-15T16:06:21.000Z | # Burp-Extensions
Useful burp extensions
| 13.666667 | 22 | 0.829268 | eng_Latn | 0.86873 |
4d9ac3c413761fc0a468da6d7f4a30e5ddab4617 | 1,309 | md | Markdown | src/pages/products/index.md | apiadventures/flash-green-farm | f34cf73d76ccb87f768908beaea18d0c27e806ee | [
"MIT"
] | null | null | null | src/pages/products/index.md | apiadventures/flash-green-farm | f34cf73d76ccb87f768908beaea18d0c27e806ee | [
"MIT"
] | 3 | 2021-09-21T16:51:33.000Z | 2022-02-27T11:12:54.000Z | src/pages/products/index.md | apiadventures/flash-green-farm | f34cf73d76ccb87f768908beaea18d0c27e806ee | [
"MIT"
] | null | null | null | ---
templateKey: product-page
title: Our Services
image: /img/img_0449.jpg
heading: Horse Rug Services
description: Clean my horse rug off offers a range of services
intro:
blurbs: []
heading: What we offer
description: Clean my horse rug off offers a range of services
main:
heading: Great coffee with no compromises
description: >
We hold our coffee to the highest standards from the shrub to the cup.
That’s why we’re meticulous and transparent about each step of the coffee’s
journey. We personally visit each farm to make sure the conditions are
optimal for the plants, farmers and the local environment.
image1:
alt: A close-up of a paper filter filled with ground coffee
image: /img/img_0449.jpg
image2:
alt: A green cup of a coffee on a wooden table
image: /img/img_0449.jpg
image3:
alt: horses
image: /img/img_0449.jpg
testimonials:
- author: 'Sam, Leylan'
quote: Claire is awesome
full_image: /img/img_0449.jpg
pricing:
heading: Prices
description: Here is our current price list
plans:
- description: Perfect for the drinker who likes to enjoy 1-2 cups per day.
items:
      - 3 lbs of coffee per month
      - Green or roasted beans
      - One or two varieties of beans
plan: Rugs
price: '26'
---
| 29.75 | 79 | 0.705118 | eng_Latn | 0.995256 |
4d9b0722ab2fe0a830309aec09cb513fc66574e7 | 25,432 | md | Markdown | vendor/rosell-dk/webp-convert/docs/v1.3/converting/converters.md | DavidsonGomes/laradevweb | c539fd2d4eb69cd0ef80ba98b2f31468b9e3e363 | [
"MIT"
] | 478 | 2017-08-16T13:13:43.000Z | 2022-03-24T21:12:58.000Z | vendor/rosell-dk/webp-convert/docs/v1.3/converting/converters.md | DavidsonGomes/laradevweb | c539fd2d4eb69cd0ef80ba98b2f31468b9e3e363 | [
"MIT"
] | 314 | 2018-02-22T16:00:34.000Z | 2022-02-14T15:11:00.000Z | vendor/rosell-dk/webp-convert/docs/v1.3/converting/converters.md | DavidsonGomes/laradevweb | c539fd2d4eb69cd0ef80ba98b2f31468b9e3e363 | [
"MIT"
] | 114 | 2017-07-17T03:15:39.000Z | 2022-02-27T18:46:55.000Z | # The webp converters
## The converters at a glance
When it comes to webp conversion, there is actually only one library in town: *libwebp* from Google. All conversion methods below ultimately use that very same library, which means that it does not matter much which conversion method you use. Whatever works. There is however one thing to take note of: if you set *quality* to *auto*, and your system cannot determine the quality of the source (this requires imagick or gmagick), and you do not have access to install those, then the only way to get quality detection is to connect to a *wpc* cloud converter. However, with *cwebp*, you can specify the desired reduction (the *size-in-percentage* option) - at the cost of doubling the conversion time. Read more about those considerations in the API.
Speed-wise, there is too little difference for it to matter, considering that images usually need to be converted just once. Anyway, here are the results: *cwebp* is the fastest (with method=3). *gd* is right behind, merely 3% slower than *cwebp*. *gmagick* comes in third place, ~8% slower than *cwebp*. *imagick* comes in ~22% slower than *cwebp*. *ewww* depends on connection speed. On my *digital ocean* account, it takes ~2 seconds to upload, convert, and download a tiny image (10 times longer than the local *cwebp*). A 1MB image however only takes ~4.5 seconds to upload, convert and download (1.5 seconds longer). A 2 MB image takes ~5 seconds to convert (only 16% longer than my *cwebp*). *ewww* thus converts at very decent speeds - probably faster than your average shared host. If multiple big images need to be converted at the same time, *ewww* will probably perform much better than the local converters.
[`cwebp`](#cwebp) works by executing the *cwebp* binary from Google, which is built upon *libwebp* (also from Google). That library is actually the only library in town for generating webp images, which means that the other conversion methods ultimately use that very same library. Which again means that the results using the different methods are very similar. However, with *cwebp*, we have more parameters to tweak than with the rest. We for example have the *method* option, which controls the trade-off between encoding speed and the compressed file size and quality. Setting this to max, we can squeeze the images a few percent extra - without losing quality (the converter is still pretty fast, so in most cases it is probably worth it).
Of course, as we have to call a binary directly here, *cwebp* requires the *exec* function to be enabled, and the webserver user must be allowed to execute the `cwebp` binary (either at known system locations, or one of the precompiled binaries that come with this library).
[`vips`](#vips) (**new in 2.0**) works by using the vips extension, if available. Vips is great! It offers many webp options, it is fast, and installation is easier than for imagick and gd, as it does not need to be configured for webp support.
[`imagick`](#imagick) does not support any special webp options, but is at least able to strip all metadata, if metadata is set to none. Imagick has a very nice feature - it is able to detect the quality of a jpeg file. This enables it to automatically use the same quality for the destination as for the source, which eliminates the risk of setting the quality higher for the destination than for the source (the result of that would be a higher file size with no gain in quality). As the other converters borrow this capability from Imagick, this is however no reason to use Imagick rather than the other converters. Requirements: Imagick PHP extension compiled with WebP support.
[`gmagick`](#gmagick) uses the *gmagick* extension. It is very similar to *imagick*. Requirements: Gmagick PHP extension compiled with WebP support.
[`gd`](#gd) uses the *Gd* extension to do the conversion. The *Gd* extension is pretty common, so the main feature of this converter is that it may work out of the box. It does not support any webp options, and does not support stripping metadata. Requirements: GD PHP extension compiled with WebP support.
[`wpc`](#wpc) is an open source cloud service for converting images to webp. To use it, you must either install [webp-convert-cloud-service](https://github.com/rosell-dk/webp-convert-cloud-service) directly on a remote server, or install the Wordpress plugin, [WebP Express](https://github.com/rosell-dk/webp-express) in Wordpress. Btw: Beware that upload limits will prevent conversion of big images. The converter checks your *php.ini* settings and abandons upload right away, if an image is larger than your *upload_max_filesize* or your *post_max_size* setting. Requirements: Access to a running service. The service can be installed [directly](https://github.com/rosell-dk/webp-convert-cloud-service) or by using [this Wordpress plugin](https://wordpress.org/plugins/webp-express/)
[`ewww`](#ewww) is also a cloud service. Not free, but cheap enough to be considered *practically* free. It supports lossless encoding, but this cannot be controlled. *Ewww* always uses lossy encoding for jpeg and lossless for png. For jpegs this is usually a good choice; however, many pngs are compressed better using lossy encoding. As lossless cannot be controlled, the "lossless:auto" option cannot be used for automatically trying both lossy and lossless and picking the smallest file. Also, unfortunately, *ewww* does not support quality=auto, like *wpc*, and it does not support *size-in-percentage* like *cwebp*, either. I have requested such features, and the maintainer is considering... As with *wpc*, beware of upload limits. Requirements: A key to the *EWWW Image Optimizer* cloud service. Can be purchased [here](https://ewww.io/plans/)
[`stack`](#stack) takes a stack of converters and tries them from the top until one succeeds. The main convert method actually calls this converter. Stacks within stacks are supported (not really needed, though).
**Summary:**
| | cwebp | vips | imagickbinary | imagick / gmagick | gd | ewww |
| ------------------------------------------ | --------- | ------ | -------------- | ----------------- | --------- | ------ |
| supports lossless encoding ? | yes | yes | yes | no | no | yes |
| supports lossless auto ? | yes | yes | yes | no | no | no |
| supports near-lossless ? | yes | yes | no | no | no | ? |
| supports metadata stripping / preserving | yes | yes | yes | yes | no | ? |
| supports setting alpha quality | yes | yes | yes | no | no | no |
| supports fixed quality (for lossy) | yes | yes | yes | yes | yes | yes |
| supports auto quality without help | no | no | yes | yes | no | no |
*WebPConvert* currently supports the following converters:
| Converter | Method | Requirements |
| ------------------------------------ | ------------------------------------------------ | -------------------------------------------------- |
| [`cwebp`](#cwebp) | Calls `cwebp` binary directly | `exec()` function *and* that the webserver user has permission to run `cwebp` binary |
| [`vips`](#vips) (new in 2.0) | Vips extension | Vips extension |
| [`imagick`](#imagick) | Imagick extension (`ImageMagick` wrapper) | Imagick PHP extension compiled with WebP support |
| [`gmagick`](#gmagick) | Gmagick extension (`ImageMagick` wrapper) | Gmagick PHP extension compiled with WebP support |
| [`gd`](#gd) | GD Graphics (Draw) extension (`LibGD` wrapper) | GD PHP extension compiled with WebP support |
| [`imagickbinary`](#imagickbinary) | Calls imagick binary directly | exec() and imagick installed and compiled with WebP support |
| [`wpc`](#wpc)                        | Connects to an open source cloud service         | Access to a running service. The service can be installed [directly](https://github.com/rosell-dk/webp-convert-cloud-service) or by using [this Wordpress plugin](https://wordpress.org/plugins/webp-express/). |
| [`ewww`](#ewww) | Connects to *EWWW Image Optimizer* cloud service | Purchasing a key |
## Installation
Instructions regarding getting the individual converters to work are [on the wiki](https://github.com/rosell-dk/webp-convert/wiki)
## cwebp
<table>
<tr><th>Requirements</th><td><code>exec()</code> function and that the webserver has permission to run `cwebp` binary (either found in system path, or a precompiled version supplied with this library)</td></tr>
<tr><th>Performance</th><td>~40-120ms to convert a 40kb image (depending on *method* option)</td></tr>
<tr><th>Reliability</th><td>No problems detected so far!</td></tr>
<tr><th>Availability</th><td>According to ewww docs, requirements are met on surprisingly many webhosts. Look <a href="https://docs.ewww.io/article/43-supported-web-hosts">here</a> for a list</td></tr>
<tr><th>General options supported</th><td>All (`quality`, `metadata`, `lossless`)</td></tr>
<tr><th>Extra options</th><td>`method` (0-6)<br>`use-nice` (boolean)<br>`try-common-system-paths` (boolean)<br> `try-supplied-binary-for-os` (boolean)<br>`autofilter` (boolean)<br>`size-in-percentage` (number / null)<br>`command-line-options` (string)<br>`low-memory` (boolean)</td></tr>
</table>
[cwebp](https://developers.google.com/speed/webp/docs/cwebp) is a WebP conversion command line converter released by Google. Our implementation ships with precompiled binaries for Linux, FreeBSD, WinNT, Darwin and SunOS. If however a cwebp binary is found in a usual location, that binary will be preferred. It is executed with [exec()](http://php.net/manual/en/function.exec.php).
In more detail, the implementation does this:
- It is tested whether cwebp is available in a common system path (eg `/usr/bin/cwebp`, ..)
- If not, then supplied binary is selected from `Converters/Binaries` (according to OS) - after validating checksum
- Command-line options are generated from the options
- If [`nice`]( https://en.wikipedia.org/wiki/Nice_(Unix)) command is found on host, binary is executed with low priority in order to save system resources
- Permissions of the generated file are set to be the same as parent folder
### Cwebp options
The following options are supported, besides the general options (such as quality, lossless etc):
| Option | Type | Default |
| -------------------------- | ------------------------- | -------------------------- |
| autofilter | boolean | false |
| command-line-options | string | '' |
| low-memory | boolean | false |
| method | integer (0-6) | 6 |
| near-lossless | integer (0-100) | 60 |
| size-in-percentage | integer (0-100) (or null) | null |
| rel-path-to-precompiled-binaries | string | './Binaries' |
| try-common-system-paths | boolean | true |
| try-supplied-binary-for-os | boolean | true |
| use-nice | boolean | false |
Descriptions (only of some of the options):
#### the `autofilter` option
Turns auto-filter on. This algorithm will spend additional time optimizing the filtering strength to reach a well-balanced quality. Unfortunately, it is extremely expensive in terms of computation. It takes about 5-10 times longer to do a conversion. A 1MB picture that typically takes about 2 seconds to convert will take about 15 seconds with auto-filter. So in most cases, you will want to leave this at its default, which is off.
#### the `command-line-options` option
This allows you to set any parameter available for cwebp in the same way as you would do when executing *cwebp*. You could for example set it to "-sharpness 5 -mt -crop 10 10 40 40". Read more about all the available parameters in [the docs](https://developers.google.com/speed/webp/docs/cwebp)
#### the `low-memory` option
Reduce memory usage of lossy encoding at the cost of ~30% longer encoding time and marginally larger output size. Read more in [the docs](https://developers.google.com/speed/webp/docs/cwebp). Default: *false*
#### The `method` option
This parameter controls the trade-off between encoding speed and the compressed file size and quality. Possible values range from 0 to 6; 0 is fastest, and 6 results in the best quality.
#### the `near-lossless` option
Specify the level of near-lossless image preprocessing. This option adjusts pixel values to help compressibility, but has minimal impact on the visual quality. It triggers lossless compression mode automatically. The range is 0 (maximum preprocessing) to 100 (no preprocessing). The typical value is around 60. Read more [here](https://groups.google.com/a/webmproject.org/forum/#!topic/webp-discuss/0GmxDmlexek). Default: 60
#### The `size-in-percentage` option
This option sets the file size, *cwebp* should aim for, in percentage of the original. If you for example set it to *45*, and the source file is 100 kb, *cwebp* will try to create a file with size 45 kb (we use the `-size` option). This is an excellent alternative to the "quality:auto" option. If the quality detection isn't working on your system (and you do not have the rights to install imagick or gmagick), you should consider using this options instead. *Cwebp* is generally able to create webp files with the same quality at about 45% the size. So *45* would be a good choice. The option overrides the quality option. And note that it slows down the conversion - it takes about 2.5 times longer to do a conversion this way, than when quality is specified. Default is *off* (null)
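As a rough illustration of how the cwebp-specific options fit together, here is a sketch in the style of the examples further down (the values are examples only - tune them for your own images):

```php
WebPConvert::convert($source, $destination, [
    'converters' => ['cwebp'],
    'converter-options' => [
        'cwebp' => [
            'method' => 6,               // slowest, but best compression
            'size-in-percentage' => 45,  // aim for ~45% of original size (overrides quality)
            'use-nice' => true,          // run the binary at low priority
        ],
    ],
]);
```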
#### final words on cwebp
The implementation is based on the work of Shane Bishop for his plugin, [EWWW Image Optimizer](https://ewww.io). Thanks for letting us do that!
See [the wiki](https://github.com/rosell-dk/webp-convert/wiki/Installing-cwebp---using-official-precompilations) for instructions regarding installing cwebp or using official precompilations.
## vips
<table>
<tr><th>Requirements</th><td>Vips extension</td></tr>
<tr><th>Performance</th><td>Great</td></tr>
<tr><th>Reliability</th><td>No problems detected so far!</td></tr>
<tr><th>Availability</th><td>Not that widespread yet, but gaining popularity</td></tr>
<tr><th>General options supported</th><td>All (`quality`, `metadata`, `lossless`)</td></tr>
<tr><th>Extra options</th><td>`smart-subsample`(boolean)<br>`alpha-quality`(0-100)<br>`near-lossless` (0-100)<br> `preset` (0-6)</td></tr>
</table>
For installation instructions, go [here](https://github.com/libvips/php-vips-ext).
The options are described [here](https://jcupitt.github.io/libvips/API/current/VipsForeignSave.html#vips-webpsave)
*near-lossless* is however an integer (0-100), in order to have the option behave like in cwebp.
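As a sketch (the values are illustrative only), the vips options from the linked docs are passed the same way as for the other converters:

```php
WebPConvert::convert($source, $destination, [
    'converters' => ['vips'],
    'converter-options' => [
        'vips' => [
            'smart-subsample' => true, // higher-quality chroma subsampling
            'alpha-quality' => 80,     // quality of the alpha channel (0-100)
            'near-lossless' => 60,     // integer here, mirroring the cwebp option
        ],
    ],
]);
```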
## wpc
*WebPConvert Cloud Service*
<table>
<tr><th>Requirements</th><td>Access to a server with [webp-convert-cloud-service](https://github.com/rosell-dk/webp-convert-cloud-service) installed, <code>cURL</code> and PHP >= 5.5.0</td></tr>
<tr><th>Performance</th><td>Depends on the server where [webp-convert-cloud-service](https://github.com/rosell-dk/webp-convert-cloud-service) is set up, and the speed of internet connections. But perhaps ~1000ms to convert a 40kb image</td></tr>
<tr><th>Reliability</th><td>Great (depends on the reliability on the server where it is set up)</td></tr>
<tr><th>Availability</th><td>Should work on <em>almost</em> any webhost</td></tr>
<tr><th>General options supported</th><td>All (`quality`, `metadata`, `lossless`)</td></tr>
<tr><th>Extra options (old api)</th><td>`url`, `secret`</td></tr>
<tr><th>Extra options (new api)</th><td>`url`, `api-version`, `api-key`, `crypt-api-key-in-transfer`</td></tr>
</table>
[wpc](https://github.com/rosell-dk/webp-convert-cloud-service) is an open source cloud service. You do not buy a key, you set it up on a server, or you set up [the Wordpress plugin](https://wordpress.org/plugins/webp-express/). As WebPConvert Cloud Service itself is based on WebPConvert, all options are supported.
To use it, you need to set the `converter-options` (to add url etc).
#### Example, where api-key is not crypted, on new API:
```php
WebPConvert::convert($source, $destination, [
'max-quality' => 80,
'converters' => ['cwebp', 'wpc'],
'converter-options' => [
'wpc' => [
'api-version' => 1, /* from wpc release 1.0.0 */
'url' => 'http://example.com/wpc.php',
'api-key' => 'my dog is white',
'crypt-api-key-in-transfer' => false
]
]
]);
```
#### Example, where api-key is crypted:
```php
WebPConvert::convert($source, $destination, [
'max-quality' => 80,
'converters' => ['cwebp', 'wpc'],
'converter-options' => [
'wpc' => [
'api-version' => 1,
'url' => 'https://example.com/wpc.php',
'api-key' => 'my dog is white',
'crypt-api-key-in-transfer' => true
],
]
]);
```
In 2.0, you can alternatively set the api key and url through the *WPC_API_KEY* and *WPC_API_URL* environment variables. This is a safer place to store them.
To set an environment variable in Apache, you can use the `SetEnv` directive. That is, place something like the following in your virtual host or *.htaccess* file (replace the key with the one you purchased!)
```
SetEnv WPC_API_KEY my-dog-is-dashed
SetEnv WPC_API_URL https://wpc.example.com/wpc.php
```
#### Example, old API:
```php
WebPConvert::convert($source, $destination, [
'max-quality' => 80,
'converters' => ['cwebp', 'wpc'],
'converter-options' => [
'wpc' => [
'url' => 'https://example.com/wpc.php',
'secret' => 'my dog is white',
],
]
]);
```
## ewww
<table>
<tr><th>Requirements</th><td>Valid EWWW Image Optimizer <a href="https://ewww.io/plans/">API key</a>, <code>cURL</code> and PHP >= 5.5.0</td></tr>
<tr><th>Performance</th><td>~1300ms to convert a 40kb image</td></tr>
<tr><th>Reliability</th><td>Great (but, as with any cloud service, there is a risk of downtime)</td></tr>
<tr><th>Availability</th><td>Should work on <em>almost</em> any webhost</td></tr>
<tr><th>General options supported</th><td>`quality`, `metadata` (partly)</td></tr>
<tr><th>Extra options</th><td>`key`</td></tr>
</table>
EWWW Image Optimizer is a very cheap cloud service for optimizing images. After purchasing an API key, add the converter in the `extra-converters` option, with `key` set to the key. Be aware that the `key` should be stored safely to avoid exploitation - preferably in the environment, ie with [dotenv](https://github.com/vlucas/phpdotenv).
The EWWW api doesn't support the `lossless` option, but it does automatically convert PNG's losslessly. Metadata is either all or none. If you have set it to something else than one of these, all metadata will be preserved.
In more detail, the implementation does this:
- Validates that there is a key, and that `curl` extension is working
- Validates the key, using the [/verify/ endpoint](https://ewww.io/api/) (in order to [protect the EWWW service from unnecessary file uploads, when key has expired](https://github.com/rosell-dk/webp-convert/issues/38))
- Converts, using the [/ endpoint](https://ewww.io/api/).
<details>
<summary><strong>Roadmap</strong> 👁</summary>
The converter could be improved by using `fsockopen` when `cURL` is not available - which is extremely rare. PHP >= 5.5.0 is also widely available (PHP 5.4.0 reached end of life [more than two years ago!](http://php.net/supported-versions.php)).
</details>
#### Example:
```php
WebPConvert::convert($source, $destination, [
'max-quality' => 80,
'converters' => ['gd', 'ewww'],
'converter-options' => [
'ewww' => [
'key' => 'your-api-key-here'
],
]
]);
```
In 2.0, you can alternatively set the api key through the *EWWW_API_KEY* environment variable. This is a safer place to store it.
To set an environment variable in Apache, you can use the `SetEnv` directive. That is, place something like the following in your virtual host or *.htaccess* file (replace the key with the one you purchased!)
```
SetEnv EWWW_API_KEY sP3LyPpsKWZy8CVBTYegzEGN6VsKKKKA
```
## gd
<table>
<tr><th>Requirements</th><td>GD PHP extension and PHP >= 5.5.0 (compiled with WebP support)</td></tr>
<tr><th>Performance</th><td>~30ms to convert a 40kb image</td></tr>
<tr><th>Reliability</th><td>Not sure - I have experienced corrupted images, but cannot reproduce</td></tr>
<tr><th>Availability</th><td>Unfortunately, according to <a href="https://stackoverflow.com/questions/25248382/how-to-create-a-webp-image-in-php">this link</a>, WebP support on shared hosts is rare.</td></tr>
<tr><th>General options supported</th><td>`quality`</td></tr>
<tr><th>Extra options</th><td>`skip-pngs`</td></tr>
</table>
[imagewebp](http://php.net/manual/en/function.imagewebp.php) is a function that comes with PHP (>5.5.0), *provided* that PHP has been compiled with WebP support.
`gd` neither supports copying metadata nor exposes any WebP options. Lacking the option to set lossless encoding results in poor encoding of PNGs - the filesize is generally much larger than the original. For this reason, PNG conversion is *disabled* by default, but it can be enabled my setting `skip-pngs` option to `false`.
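If you nevertheless want gd to convert PNGs (a sketch - keep the file-size caveat above in mind), set `skip-pngs` to `false`:

```php
WebPConvert::convert($source, $destination, [
    'converters' => ['gd'],
    'converter-options' => [
        'gd' => [
            'skip-pngs' => false, // allow PNG conversion despite the size caveat
        ],
    ],
]);
```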
Installation instructions are [available in the wiki](https://github.com/rosell-dk/webp-convert/wiki/Installing-Gd-extension).
<details>
<summary><strong>Known bugs</strong> 👁</summary>
Due to a [bug](https://bugs.php.net/bug.php?id=66590), some versions sometimes created corrupted images. That bug can however easily be fixed in PHP (fix was released [here](https://stackoverflow.com/questions/30078090/imagewebp-php-creates-corrupted-webp-files)). However, I have experienced corrupted images *anyway* (but cannot reproduce that bug). So use this converter with caution. The corrupted images look completely transparent in Google Chrome, but have the correct size.
</details>
## imagick
<table>
<tr><th>Requirements</th><td>Imagick PHP extension (compiled with WebP support)</td></tr>
  <tr><th>Quality</th><td>Poor. [See this issue](https://github.com/rosell-dk/webp-convert/issues/43)</td></tr>
<tr><th>General options supported</th><td>`quality`</td></tr>
<tr><th>Extra options</th><td>None</td></tr>
<tr><th>Performance</th><td>~20-320ms to convert a 40kb image</td></tr>
<tr><th>Reliability</th><td>No problems detected so far</td></tr>
<tr><th>Availability</th><td>Probably only available on few shared hosts (if any)</td></tr>
</table>
WebP conversion with `imagick` is fast and [exposes many WebP options](http://www.imagemagick.org/script/webp.php). Unfortunately, WebP support for the `imagick` extension is pretty uncommon; at least, it was not available on the systems I have tried (Ubuntu 16.04 and Ubuntu 17.04). But if it is installed, it works great and has several WebP options.
See [this page](https://github.com/rosell-dk/webp-convert/wiki/Installing-Imagick-extension) in the Wiki for instructions on installing the extension.
## imagickbinary
<table>
<tr><th>Requirements</th><td><code>exec()</code> function and that imagick is installed on webserver, compiled with webp support</td></tr>
<tr><th>Performance</th><td>just fine</td></tr>
<tr><th>Reliability</th><td>No problems detected so far!</td></tr>
<tr><th>Availability</th><td>Not sure</td></tr>
<tr><th>General options supported</th><td>`quality`</td></tr>
<tr><th>Extra options</th><td>`use-nice` (boolean)</td></tr>
</table>
This converter tries to execute `convert source.jpg webp:destination.jpg.webp`.
## stack
<table>
<tr><th>General options supported</th><td>all (passed to the converters in the stack )</td></tr>
<tr><th>Extra options</th><td>`converters` (array) and `converter-options` (array)</td></tr>
</table>
Stack implements the functionality you know from `WebPConvert::convert`. In fact, all `WebPConvert::convert` does is to call `Stack::convert($source, $destination, $options, $logger);`
It has two special options: `converters` and `converter-options`. You can read about those in `docs/api/convert.md`
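For completeness, here is a sketch of calling the stack converter directly, mirroring the examples above (the exact class import depends on the library version - check `docs/api/convert.md`):

```php
// Sketch: try cwebp first, then gd, with per-converter options.
Stack::convert($source, $destination, [
    'converters' => ['cwebp', 'gd'],
    'converter-options' => [
        'cwebp' => ['method' => 6],
        'gd' => ['skip-pngs' => false],
    ],
]);
```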
| 78.736842 | 921 | 0.675291 | eng_Latn | 0.979562 |
4d9bd31e1ae60a5ec27acb7de8add4fab04ebb5b | 2,287 | md | Markdown | pegasus/sites/virtual/curriculum-algebra/16/Teacher.md | code-dot-org/code-dot-org | 3fd13c77f37823f3f71ae2675e6e4e1fd77905d1 | [
"Apache-2.0"
] | 772 | 2015-01-01T14:52:37.000Z | 2022-03-29T17:07:10.000Z | pegasus/sites/virtual/curriculum-algebra/16/Teacher.md | SNOmad1/code-dot-org | 3fd13c77f37823f3f71ae2675e6e4e1fd77905d1 | [
"Apache-2.0"
] | 13,529 | 2015-01-05T19:59:18.000Z | 2022-03-31T23:07:43.000Z | pegasus/sites/virtual/curriculum-algebra/16/Teacher.md | SNOmad1/code-dot-org | 3fd13c77f37823f3f71ae2675e6e4e1fd77905d1 | [
"Apache-2.0"
] | 494 | 2015-01-09T00:32:46.000Z | 2022-03-29T17:12:02.000Z | ---
title: The Big Game - Booleans
view: page_curriculum
theme: none
---
<%
lesson_id = 'alg16'
lesson = DB[:cdo_lessons].where(id_s:lesson_id).first
%>
<%= partial('../docs/_header', :lesson => lesson) %>
[summary]
## Teaching Summary
### **Getting Started**
1) [Introduction](#GetStarted)
### **Activity: The Big Game - Booleans**
2) [Online Puzzles](#Activity1)
[/summary]
[together]
# Teaching Guide
## Materials, Resources, and Prep
### For the Student
- [Safe-left? Design Recipe](../docs/worksheets/safe_left.pdf) (in the student workbook)
- [Safe-right? Design Recipe](../docs/worksheets/safe_right.pdf) (in the student workbook)
- [Onscreen? Design Recipe](../docs/worksheets/onscreen.pdf) (in the student workbook)
## Getting Started
### <a name="GetStarted"></a> 1) Introduction
Let's get back into that Big Game that we started in stage 7 and continued in stage 12.
When we last worked on the game, our danger and target were moving off the screen in opposite directions. Unfortunately, their update functions move them in one direction forever, so they never come back on screen once they've left! We'd actually like them to have a recurring role in this game, so we'll use some boolean logic to move them back to their starting points once they go off screen.
Once the students correctly implement [on-screen?](../docs/worksheets/onscreen.pdf) (and its sub-parts [safe-left?](../docs/worksheets/safe_left.pdf) and [safe-right?](../docs/worksheets/safe_right.pdf)), the new behavior of target and danger is that once they are off the screen they return to their starting position but with a new y-value. From this new vertical position they will continue to move across the screen. If one (or both) of the characters go off the screen and never reappear, the most likely source of the error is that one of the newly implemented boolean statements is incorrect.
[/together]
[together]
## Activity: The Big Game - Booleans
### <a name="Activity1"></a> 2) Online Puzzles
Return to your Big Game to use Booleans to keep your player character on screen. Head to [CS in Algebra stage 16](http://studio.code.org/s/algebra/lessons/16/levels/1) in Code Studio to get started programming.
[/together]
<%= partial('../docs/_footer', :lesson => lesson) %> | 38.762712 | 601 | 0.735024 | eng_Latn | 0.984171 |
4d9c15757cdb04d1d40504def66e39fefd23df51 | 989 | md | Markdown | _posts/2019-04-23-TRT-Volumes.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | null | null | null | _posts/2019-04-23-TRT-Volumes.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | null | null | null | _posts/2019-04-23-TRT-Volumes.md | pmazzocchi/pmazzocchi.github.io | 44a271a7b17c504b737da4ff7e7218736c13b1f0 | [
"Apache-2.0"
] | 2 | 2019-03-27T19:15:56.000Z | 2019-03-28T10:41:27.000Z | ---
layout: post
comments: false
title: "Real Volume: The Rock Trading Exchange"
subtitle: "A supplement to the Bitwise report presented in March 2019 to SEC"
author: "Marcello Pichini"
image:
main: 2019-04-23-trt-volumes.jpg
thumb: 2019-04-23-trt-volumes-thumb.jpg
published: true
newsfeed: true
---
We have analyzed the trading volumes reported by [_The Rock Trading_](http://www.therocktrading.com/) exchange and found them to be credible, according to the criteria used by [Bitwise Investments](https://www.bitwiseinvestments.com/) in their [March 2019 report](http://www.sec.gov/comments/sr-nysearca-2019-01/srnysearca201901-5164833-183434.pdf) to SEC.
Bitwise investigated the real liquidity of the Bitcoin market,
covering the 81 “top” Bitcoin exchanges by reported volume.
Most of those volumes turned out to be fake
and/or non-economic wash trading.
For more details about our supplemental analysis see the
[report]({{ site.baseurl }}/docs/2019-04-23-trt-volumes.pdf).
| 43 | 356 | 0.77452 | eng_Latn | 0.926392 |
4d9c67b2fddb86b26791fc82680c6cb2049e278d | 79 | md | Markdown | translations/zh-CN/data/reusables/webhooks/repo_desc.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 5 | 2021-03-05T01:17:14.000Z | 2021-08-11T06:13:50.000Z | translations/zh-CN/data/reusables/webhooks/repo_desc.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 340 | 2021-01-09T00:41:47.000Z | 2022-03-02T16:20:33.000Z | translations/zh-CN/data/reusables/webhooks/repo_desc.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 46 | 2020-11-05T10:39:05.000Z | 2021-07-23T11:35:59.000Z | `repository` | `object` | 事件发生所在的 [`repository`](/v3/repos/#get-a-repository)。
| 39.5 | 78 | 0.683544 | ceb_Latn | 0.109627 |
4d9c9d73a6873bf9b8667ff94951c4a4044cef5b | 3,440 | md | Markdown | docs/ide/step-7-add-multiplication-and-division-problems.md | soelax/visualstudio-docs.de-de | 1b9ae3a849df093d59b5e71e8233ccfe8c883575 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/step-7-add-multiplication-and-division-problems.md | soelax/visualstudio-docs.de-de | 1b9ae3a849df093d59b5e71e8233ccfe8c883575 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/step-7-add-multiplication-and-division-problems.md | soelax/visualstudio-docs.de-de | 1b9ae3a849df093d59b5e71e8233ccfe8c883575 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Step 7: Add multiplication and division problems'
ms.date: 11/04/2016
ms.topic: conceptual
dev_langs:
- csharp
- vb
ms.assetid: e638959e-f6a4-4eb4-b2e9-f63b7855cf8f
author: TerryGLee
ms.author: tglee
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: 887af3a439e1f6e0f21d5ca68061d2f9977dfac7
ms.sourcegitcommit: 59e5758036223ee866f3de5e3c0ab2b6dbae97b6
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/23/2019
ms.locfileid: "68416536"
---
# <a name="step-7-add-multiplication-and-division-problems"></a>Step 7: Add multiplication and division problems
In the seventh part of this tutorial, you'll add multiplication and division problems. Before you do, though, think about how to make that change. Consider the first step, which involves storing values.
## <a name="to-add-multiplication-and-division-problems"></a>To add multiplication and division problems
1. Add four integer variables to the form.
     [!code-vb[VbExpressTutorial3Step7#15](../ide/codesnippet/VisualBasic/step-7-add-multiplication-and-division-problems_1.vb)]
     [!code-csharp[VbExpressTutorial3Step7#15](../ide/codesnippet/CSharp/step-7-add-multiplication-and-division-problems_1.cs)]
2. As you did before, modify the `StartTheQuiz()` method so that it fills in the multiplication and division problems with random numbers.
     [!code-vb[VbExpressTutorial3Step7#16](../ide/codesnippet/VisualBasic/step-7-add-multiplication-and-division-problems_2.vb)]
     [!code-csharp[VbExpressTutorial3Step7#16](../ide/codesnippet/CSharp/step-7-add-multiplication-and-division-problems_2.cs)]
3. Modify the `CheckTheAnswer()` method so that it also checks the multiplication and division problems.
     [!code-vb[VbExpressTutorial3Step7#17](../ide/codesnippet/VisualBasic/step-7-add-multiplication-and-division-problems_3.vb)]
     [!code-csharp[VbExpressTutorial3Step7#17](../ide/codesnippet/CSharp/step-7-add-multiplication-and-division-problems_3.cs)]
    Because there's no easy way to enter the multiplication sign (×) and the division sign (÷) with the keyboard, Visual C# and Visual Basic accept an asterisk (*) for multiplication and a slash (/) for division.
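    If you're curious how the updated check fits together, here is a rough C# sketch. The variable and control names (`multiplicand`, `product`, and so on) are assumptions for illustration only; the official code is in the snippet includes above.

```csharp
// Sketch only: names below are placeholders - use the ones from your project.
private bool CheckTheAnswer()
{
    return addend1 + addend2 == sum.Value
        && minuend - subtrahend == difference.Value
        && multiplicand * multiplier == product.Value   // new: multiplication
        && dividend / divisor == quotient.Value;        // new: division
}
```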
4. Change the last part of the timer's <xref:System.Windows.Forms.Timer.Tick> event handler so that it fills in the correct answer when time runs out.
     [!code-vb[VbExpressTutorial3Step7#23](../ide/codesnippet/VisualBasic/step-7-add-multiplication-and-division-problems_4.vb)]
     [!code-csharp[VbExpressTutorial3Step7#23](../ide/codesnippet/CSharp/step-7-add-multiplication-and-division-problems_4.cs)]
5. Save and run your program.
     Quiz takers must answer four problems to complete the quiz, as shown in the following illustration.
     ![Math quiz with four problems](../ide/media/express_finishedquiz.png)
     **Math quiz** with four problems
## <a name="to-continue-or-review"></a>To continue or review
- To go to the next tutorial step, see [Step 8: Customize the quiz](../ide/step-8-customize-the-quiz.md).
- To return to the previous tutorial step, see [Step 6: Add a subtraction problem](../ide/step-6-add-a-subtraction-problem.md).
| 57.333333 | 254 | 0.790988 | deu_Latn | 0.897059 |
4d9e6ad09ee4599bff9ade9fa2f438594c148042 | 7,249 | md | Markdown | old_src/docs/cookbook/animation/opacity-animation.md | salchichongallo/website | 833fa166546d279d17c99e2ae2060dc362904e1c | [
"CC-BY-3.0"
] | 40 | 2018-07-30T17:42:28.000Z | 2022-03-27T17:59:32.000Z | old_src/docs/cookbook/animation/opacity-animation.md | salchichongallo/website | 833fa166546d279d17c99e2ae2060dc362904e1c | [
"CC-BY-3.0"
] | 417 | 2018-07-30T17:43:42.000Z | 2022-03-25T19:45:57.000Z | old_src/docs/cookbook/animation/opacity-animation.md | salchichongallo/website | 833fa166546d279d17c99e2ae2060dc362904e1c | [
"CC-BY-3.0"
] | 36 | 2018-07-31T03:10:39.000Z | 2021-11-09T02:09:54.000Z | ---
layout: page
title: "Efectos Fade in and out en un Widget"
permalink: /cookbook/animation/opacity-animation/
---
As UI developers, we often need to show and hide elements on screen. However, elements that quickly pop in and out of view can feel jarring to end users. Instead, we can fade elements in and out with an opacity animation to create a smooth experience.
In Flutter, we can accomplish this task using the [`AnimatedOpacity`](https://docs.flutter.io/flutter/widgets/AnimatedOpacity-class.html) widget.
## Directions
1. Show a box to fade in and out
2. Define a `StatefulWidget`
3. Display a button that toggles the visibility
4. Fade the box in and out
## 1. Create a box to fade in and out
First, we'll need something to fade in and out! In this example, we'll draw a green box on screen.
<!-- skip -->
```dart
Container(
width: 200.0,
height: 200.0,
color: Colors.green,
);
```
## 2. Define a `StatefulWidget`
Now that we have a green box to animate, we'll need a way to know whether the box should be visible or invisible. To accomplish this, we can use a
[`StatefulWidget`](https://docs.flutter.io/flutter/widgets/StatefulWidget-class.html).
A `StatefulWidget` is a class that creates a `State` object. The `State` object
holds some data about our app and provides a way to update that data. When we update the data, we can also ask Flutter to rebuild our UI with those changes.
In our case, we'll have one piece of data: a boolean representing whether the box is visible or invisible.
To build a `StatefulWidget`, we need to create two classes: a
`StatefulWidget` class and a corresponding `State` class. Pro tip: The Flutter plugins for Android Studio and VSCode include the `stful` snippet to quickly generate this code!
<!-- skip -->
```dart
// The StatefulWidget's job is to take in some data and create a State class.
// In this case, our widget takes a title and creates a _MyHomePageState.
class MyHomePage extends StatefulWidget {
final String title;
MyHomePage({Key key, this.title}) : super(key: key);
@override
_MyHomePageState createState() => _MyHomePageState();
}
// The State class is responsible for two things: holding some data we can
// update and building the UI using that data.
class _MyHomePageState extends State<MyHomePage> {
  // Whether the green box should be visible or invisible
bool _visible = true;
@override
Widget build(BuildContext context) {
    // The green box will go here with some other widgets!
}
}
```
## 3. Display a button that toggles the visibility
Now that we have some data to determine whether our green box should be visible or invisible, we'll need a way to update that data. In our case, if the box is visible, we want to hide it. If the box is hidden, we want to show it!
To accomplish this, we'll display a button. When a user presses the button, we'll flip the boolean from true to false, or from false to true. We need to make this change using [`setState`](https://docs.flutter.io/flutter/widgets/State/setState.html),
which is a method on the `State` class. This lets Flutter know it needs to rebuild the widget.
Note: For more information on working with user input, please see the
[Handling Gestures](/cookbook/#manejando-gestos) section of the Cookbook.
<!-- skip -->
```dart
FloatingActionButton(
onPressed: () {
    // Make sure to call setState! This tells Flutter to rebuild the
    // UI with our changes!
setState(() {
_visible = !_visible;
});
},
tooltip: 'Toggle Opacity',
child: Icon(Icons.flip),
);
```
## 4. Fade the box in and out
We have a green box on screen. We have a button to toggle the visibility to true or false. So how do we fade the box in and out? With an
[`AnimatedOpacity`](https://docs.flutter.io/flutter/widgets/AnimatedOpacity-class.html) widget!
The `AnimatedOpacity` widget requires three arguments:
* `opacity`: A value from 0.0 (invisible) to 1.0 (fully visible).
* `duration`: How long the animation should take to complete.
* `child`: The widget to animate. In our case, the green box.
<!-- skip -->
```dart
AnimatedOpacity(
  // If the widget should be visible, animate to 1.0 (fully visible). If
  // the widget should be hidden, animate to 0.0 (invisible).
opacity: _visible ? 1.0 : 0.0,
duration: Duration(milliseconds: 500),
  // The green box must be a child of the AnimatedOpacity widget
child: Container(
width: 200.0,
height: 200.0,
color: Colors.green,
),
);
```
## Complete example
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
final appTitle = 'Opacity Demo';
return MaterialApp(
title: appTitle,
home: MyHomePage(title: appTitle),
);
}
}
// The StatefulWidget's job is to take in some data and create a State class.
// In this case, our widget takes a title and creates a _MyHomePageState.
class MyHomePage extends StatefulWidget {
final String title;
MyHomePage({Key key, this.title}) : super(key: key);
@override
_MyHomePageState createState() => _MyHomePageState();
}
// The State class is responsible for two things: holding some data we can
// update and building the UI using that data.
class _MyHomePageState extends State<MyHomePage> {
  // Whether the green box should be visible or invisible
bool _visible = true;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: AnimatedOpacity(
          // If the widget should be visible, animate to 1.0 (fully visible). If
          // the widget should be hidden, animate to 0.0 (invisible).
opacity: _visible ? 1.0 : 0.0,
duration: Duration(milliseconds: 500),
          // The green box must be a child of the AnimatedOpacity widget
child: Container(
width: 200.0,
height: 200.0,
color: Colors.green,
),
),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
          // Make sure to call setState! This tells Flutter to rebuild the
          // UI with our changes!
setState(() {
_visible = !_visible;
});
},
tooltip: 'Toggle Opacity',
child: Icon(Icons.flip),
      ), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```

| 37.365979 | 344 | 0.696648 | spa_Latn | 0.959521 |
4d9e919ee7e96425dbfb746ba6cfe948e49b94bc | 1,568 | md | Markdown | docs/rules/no-dupe-args.md | eslint/eslint.github.io | 520499fa279bcb86d4934fcb732a3ade83bae7bb | [
"MIT"
] | 65 | 2015-05-18T12:57:43.000Z | 2019-05-17T16:36:07.000Z | docs/rules/no-dupe-args.md | eslint/eslint.github.io | 520499fa279bcb86d4934fcb732a3ade83bae7bb | [
"MIT"
] | 391 | 2015-01-18T01:08:56.000Z | 2019-07-12T19:22:09.000Z | docs/rules/no-dupe-args.md | eslint/eslint.github.io | 520499fa279bcb86d4934fcb732a3ade83bae7bb | [
"MIT"
] | 219 | 2015-01-24T20:36:38.000Z | 2019-07-07T04:14:06.000Z | ---
title: no-dupe-args
layout: doc
edit_link: https://github.com/eslint/eslint/edit/main/docs/src/rules/no-dupe-args.md
rule_type: problem
---
(recommended) The `"extends": "eslint:recommended"` property in a configuration file enables this rule.
Disallows duplicate arguments in `function` definitions.
If more than one parameter has the same name in a function definition, the last occurrence "shadows" the preceding occurrences. A duplicated name might be a typing error.
## Rule Details
This rule disallows duplicate parameter names in function declarations or expressions. It does not apply to arrow functions or class methods, because the parser reports the error.
If ESLint parses code in strict mode, the parser (instead of this rule) reports the error.
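For example, a minimal configuration file enabling the rule might look like this:

```json
{
    "rules": {
        "no-dupe-args": "error"
    }
}
```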
Examples of **incorrect** code for this rule:
```js
/*eslint no-dupe-args: "error"*/
function foo(a, b, a) {
console.log("value of the second a:", a);
}
var bar = function (a, b, a) {
console.log("value of the second a:", a);
};
```
Examples of **correct** code for this rule:
```js
/*eslint no-dupe-args: "error"*/
function foo(a, b, c) {
console.log(a, b, c);
}
var bar = function (a, b, c) {
console.log(a, b, c);
};
```
## Version
This rule was introduced in ESLint 0.16.0.
## Resources
* [Rule source](https://github.com/eslint/eslint/tree/HEAD/lib/rules/no-dupe-args.js)
* [Test source](https://github.com/eslint/eslint/tree/HEAD/tests/lib/rules/no-dupe-args.js)
* [Documentation source](https://github.com/eslint/eslint/tree/HEAD/docs/src/rules/no-dupe-args.md)
| 26.576271 | 179 | 0.714923 | eng_Latn | 0.904403 |
4d9f892df9c7947333220890b108efca55ff8185 | 1,490 | md | Markdown | deploy/mesos-marathon/README.md | VincentS/microservices-demo | 682740ad2f7801628e61494ed714da7ec354460b | [
"Apache-2.0"
] | 2 | 2017-04-04T14:46:28.000Z | 2018-04-15T23:04:42.000Z | deploy/mesos-marathon/README.md | VincentS/microservices-demo | 682740ad2f7801628e61494ed714da7ec354460b | [
"Apache-2.0"
] | null | null | null | deploy/mesos-marathon/README.md | VincentS/microservices-demo | 682740ad2f7801628e61494ed714da7ec354460b | [
"Apache-2.0"
] | 3 | 2017-03-29T14:03:07.000Z | 2018-09-09T17:16:54.000Z | # Deploy to Mesos using CNI
These scripts will install the microservices demo on Apache Mesos using Marathon.
## Caveats
- This is using a prerelease version of Mesos 1.0.0
- This was developed on AWS. May not work on other services.
## Prerequisites
- A working Mesos cluster (here is an example of [how to install mesos on AWS using terraform](https://github.com/philwinder/mesos-terraform))
- curl
## Quick start
```
./mesos-marathon.sh install
./mesos-marathon.sh start
```
## Usage
```
Starts the weavedemo microservices demo on Mesos using Marathon.
Caveats: This is using an RC version of Mesos, and may not work in the future. This was developed on AWS, so it may not work on other services.
Commands:
install Install all required services on the Mesos hosts. Must install before starting.
uninstall Removes all installed services
start Starts the demo application services. Must already be installed.
stop Stops the demo application services
Options:
--force Skip all user interaction. Implied 'Yes' to all actions.
-q, --quiet Quiet (no output)
-l, --log Print log to file
-s, --strict Exit script with null variables. i.e 'set -o nounset'
-v, --verbose Output more information. (Items echoed to 'verbose')
-d, --debug Runs script in BASH debug mode (set -x)
-h, --help Display this help and exit
--version Output version information and exit
```
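For example, a fully non-interactive install-and-start run might look like the following (a sketch; whether flags go before or after the command depends on the script's argument parsing):
```
./mesos-marathon.sh install --force
./mesos-marathon.sh start --force
```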
| 33.863636 | 142 | 0.691275 | eng_Latn | 0.981405 |
4da08b1c216e69578d5d87ab90ae6da44c606e67 | 22,413 | md | Markdown | articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Tutorial: Create a single-page web app - Bing Image Search API'
titleSuffix: Azure cognitive services
description: The Bing Image Search API lets you search the web for relevant, high-quality images. Use this tutorial to build a single-page application that can send search queries to the API and display the results within the webpage.
services: cognitive-services
author: aahill
manager: nitinme
ms.service: cognitive-services
ms.subservice: bing-image-search
ms.topic: tutorial
ms.date: 03/05/2020
ms.author: aahi
ms.custom: devx-track-js
ms.openlocfilehash: fe4c40e2c5e2b8992598125c376dc0da516e9736
ms.sourcegitcommit: 32c521a2ef396d121e71ba682e098092ac673b30
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 09/25/2020
ms.locfileid: "91316706"
---
# <a name="tutorial-create-a-single-page-app-using-the-bing-image-search-api"></a>Tutorial: Create a single-page app using the Bing Image Search API
The Bing Image Search API lets you search the web for relevant, high-quality images. Use this tutorial to build a single-page application that can send search queries to the API and display the results within the webpage. This tutorial is similar to the corresponding [tutorial](../Bing-Web-Search/tutorial-bing-web-search-single-page-app.md) for Bing Web Search.
This tutorial app illustrates how to:
> [!div class="checklist"]
> * Perform a Bing Image Search API call in JavaScript
> * Improve search results using search options
> * Display and page through search results
> * Request and handle an API subscription key and Bing client ID
The full source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/Tutorials/Bing-Image-Search).
## <a name="prerequisites"></a>Prerequisites
* The latest version of [Node.js](https://nodejs.org/).
* The [Express.js](https://expressjs.com/) framework for Node.js. Installation instructions for the source code are available in the GitHub sample's readme file.
[!INCLUDE [cognitive-services-bing-image-search-signup-requirements](../../../includes/cognitive-services-bing-image-search-signup-requirements.md)]
## <a name="manage-and-store-user-subscription-keys"></a>Manage and store user subscription keys
This application uses the browser's persistent storage to store API subscription keys. If no key is stored, the webpage prompts the user for a key and stores it for later use. If the key is later rejected by the API, the app removes it from storage. This sample uses the global endpoint. You can also use the [custom subdomain](../../cognitive-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
We define the `storeValue` and `retrieveValue` functions to use either the `localStorage` object (not supported by all browsers) or a cookie.
```javascript
// Cookie names for data being stored
API_KEY_COOKIE = "bing-search-api-key";
CLIENT_ID_COOKIE = "bing-search-client-id";
// The Bing Image Search API endpoint
BING_ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/images/search";
try { //Try to use localStorage first
localStorage.getItem;
window.retrieveValue = function (name) {
return localStorage.getItem(name) || "";
}
window.storeValue = function(name, value) {
localStorage.setItem(name, value);
}
} catch (e) {
//If the browser doesn't support localStorage, try a cookie
window.retrieveValue = function (name) {
var cookies = document.cookie.split(";");
for (var i = 0; i < cookies.length; i++) {
var keyvalue = cookies[i].split("=");
if (keyvalue[0].trim() === name) return keyvalue[1];
}
return "";
}
window.storeValue = function (name, value) {
var expiry = new Date();
expiry.setFullYear(expiry.getFullYear() + 1);
document.cookie = name + "=" + value.trim() + "; expires=" + expiry.toUTCString();
}
}
```
The `getSubscriptionKey()` function first tries to retrieve a previously stored key using `retrieveValue`. If a key isn't found, the user is prompted to enter one, which is then stored using `storeValue`.
```javascript
// Get the stored API subscription key, or prompt if it's not found
function getSubscriptionKey() {
var key = retrieveValue(API_KEY_COOKIE);
while (key.length !== 32) {
key = prompt("Enter Bing Search API subscription key:", "").trim();
}
// always set the cookie in order to update the expiration date
storeValue(API_KEY_COOKIE, key);
return key;
}
```
The HTML `<form>` tag's `onsubmit` attribute calls the `bingWebSearch` function to return search results. `bingWebSearch` uses `getSubscriptionKey` to authenticate each query. As shown in the previous definition, `getSubscriptionKey` prompts the user for the key if one hasn't been entered. The key is then stored for ongoing use by the application.
```html
<form name="bing" onsubmit="this.offset.value = 0; return bingWebSearch(this.query.value,
bingSearchOptions(this), getSubscriptionKey())">
```
## <a name="send-search-requests"></a>Send search requests
This application uses an HTML `<form>` to initially send user search requests, using the `onsubmit` attribute to call `newBingImageSearch()`.
```html
<form name="bing" onsubmit="return newBingImageSearch(this)">
```
The `onsubmit` handler returns `false`, which keeps the form from being submitted.
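The `newBingImageSearch()` helper isn't shown in this excerpt; a minimal sketch (assuming it simply resets the paging state and delegates to `bingImageSearch()`, as the hidden form fields below suggest) could look like this:

```javascript
// Sketch: reset paging state, run the search, and block form submission.
// The sample's actual implementation may differ - see the full source on GitHub.
function newBingImageSearch(form) {
    form.offset.value = 0;    // start from the first page of results
    form.stack.value = "[]";  // clear the back-navigation history
    bingImageSearch(form.query.value, bingSearchOptions(form), getSubscriptionKey());
    return false;             // keep the browser from submitting the form
}
```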
## <a name="select-search-options"></a>Select search options
![[Bing Image Search form]](media/cognitive-services-bing-images-api/image-search-spa-form.png)
Bing Image Search offers several [filter query parameters](https://docs.microsoft.com/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#filter-query-parameters) to narrow and filter search results. The HTML form in this application uses and displays the following parameter options:
| Option | Description |
|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `where` | A drop-down menu for selecting the market (location and language) used for the search. |
| `query` | The text field in which to enter the search terms. |
| `aspect` | Radio buttons for selecting the proportions of the found images: roughly square, wide, or tall. |
| `color` | A drop-down menu for optionally restricting the search to images with a particular dominant color. |
| `when` | A drop-down menu for optionally limiting the search to the most recent day, week, or month. |
| `safe` | A check box indicating whether to use Bing's SafeSearch feature to filter out "adult" content. |
| `count` | Hidden field. The number of search results to return on each request. Change to display fewer or more results per page. |
| `offset` | Hidden field. The offset of the first search result in the request; used for paging. It's reset to `0` on a new request. |
| `nextoffset` | Hidden field. Upon receiving a search result, this field is set to the value of `nextOffset` in the response. Using this field avoids overlapping results on successive pages. |
| `stack` | Hidden field. A JSON-encoded list of the offsets of preceding pages of search results, used for navigating back to previous pages. |
The `bingSearchOptions()` function formats these options into a partial query string that can be used in the app's API requests.
```javascript
// Build query options from the HTML form
function bingSearchOptions(form) {
var options = [];
options.push("mkt=" + form.where.value);
options.push("SafeSearch=" + (form.safe.checked ? "strict" : "off"));
if (form.when.value.length) options.push("freshness=" + form.when.value);
var aspect = "all";
for (var i = 0; i < form.aspect.length; i++) {
if (form.aspect[i].checked) {
aspect = form.aspect[i].value;
break;
}
}
options.push("aspect=" + aspect);
if (form.color.value) options.push("color=" + form.color.value);
options.push("count=" + form.count.value);
options.push("offset=" + form.offset.value);
return options.join("&");
}
```
## <a name="performing-the-request"></a>Utföra förfrågan
Beroende på sökfrågan, alternativsträngen och API-nyckeln använder funktionen `BingImageSearch()` ett XMLHttpRequest-objekt för att göra begäran till Bing-bildsökningsslutpunkten.
```javascript
// perform a search given query, options string, and API key
function bingImageSearch(query, options, key) {
// scroll to top of window
window.scrollTo(0, 0);
if (!query.trim().length) return false; // empty query, do nothing
showDiv("noresults", "Working. Please wait.");
hideDivs("results", "related", "_json", "_http", "paging1", "paging2", "error");
var request = new XMLHttpRequest();
var queryurl = BING_ENDPOINT + "?q=" + encodeURIComponent(query) + "&" + options;
// open the request
try {
request.open("GET", queryurl);
}
catch (e) {
renderErrorMessage("Bad request (invalid URL)\n" + queryurl);
return false;
}
// add request headers
request.setRequestHeader("Ocp-Apim-Subscription-Key", key);
request.setRequestHeader("Accept", "application/json");
var clientid = retrieveValue(CLIENT_ID_COOKIE);
if (clientid) request.setRequestHeader("X-MSEdge-ClientID", clientid);
// event handler for successful response
request.addEventListener("load", handleBingResponse);
    // event handler for errors
request.addEventListener("error", function() {
renderErrorMessage("Error completing request");
});
// event handler for aborted request
request.addEventListener("abort", function() {
renderErrorMessage("Request aborted");
});
// send the request
request.send();
return false;
}
```
Upon the successful completion of the HTTP request, JavaScript calls the `load` event handler, `handleBingResponse()`, to handle a successful HTTP GET request.
```javascript
// handle Bing search request results
function handleBingResponse() {
hideDivs("noresults");
var json = this.responseText.trim();
var jsobj = {};
// try to parse JSON results
try {
if (json.length) jsobj = JSON.parse(json);
} catch(e) {
renderErrorMessage("Invalid JSON response");
}
// show raw JSON and HTTP request
showDiv("json", preFormat(JSON.stringify(jsobj, null, 2)));
showDiv("http", preFormat("GET " + this.responseURL + "\n\nStatus: " + this.status + " " +
this.statusText + "\n" + this.getAllResponseHeaders()));
// if HTTP response is 200 OK, try to render search results
if (this.status === 200) {
var clientid = this.getResponseHeader("X-MSEdge-ClientID");
        if (clientid) storeValue(CLIENT_ID_COOKIE, clientid); // store (not retrieve) the client ID
if (json.length) {
if (jsobj._type === "Images") {
if (jsobj.nextOffset) document.forms.bing.nextoffset.value = jsobj.nextOffset;
renderSearchResults(jsobj);
} else {
renderErrorMessage("No search results in JSON response");
}
} else {
renderErrorMessage("Empty response (are you sending too many requests too quickly?)");
}
}
// Any other HTTP response is an error
else {
// 401 is unauthorized; force re-prompt for API key for next request
if (this.status === 401) invalidateSubscriptionKey();
// some error responses don't have a top-level errors object, so gin one up
var errors = jsobj.errors || [jsobj];
var errmsg = [];
// display HTTP status code
errmsg.push("HTTP Status " + this.status + " " + this.statusText + "\n");
// add all fields from all error responses
for (var i = 0; i < errors.length; i++) {
if (i) errmsg.push("\n");
for (var k in errors[i]) errmsg.push(k + ": " + errors[i][k]);
}
// also display Bing Trace ID if it isn't blocked by CORS
var traceid = this.getResponseHeader("BingAPIs-TraceId");
if (traceid) errmsg.push("\nTrace ID " + traceid);
// and display the error message
renderErrorMessage(errmsg.join("\n"));
}
}
```
> [!IMPORTANT]
> Successful HTTP requests can contain information about unsuccessful searches. If an error occurs in the search operation, the Bing Image Search API returns a non-200 HTTP status code and error information in the JSON response. If the request was rate-limited, the API returns an empty response.
## <a name="display-the-search-results"></a>Visa sökresultat
Sökresultaten visas som funktionen `renderSearchResults()`, vilket tar den JSON som returneras av tjänsten för bildsökning i Bing och anropar en lämplig återgivningsfunktion på returnerade bilder och relaterade sökningar.
```javascript
function renderSearchResults(results) {
// add Prev / Next links with result count
var pagingLinks = renderPagingLinks(results);
showDiv("paging1", pagingLinks);
showDiv("paging2", pagingLinks);
showDiv("results", renderImageResults(results.value));
if (results.relatedSearches)
showDiv("sidebar", renderRelatedItems(results.relatedSearches));
}
```
The search results are returned as the top-level `value` objects in the JSON response. They are passed to `renderImageResults()`, which iterates through the results and converts each item to HTML.
```javascript
function renderImageResults(items) {
var len = items.length;
var html = [];
if (!len) {
showDiv("noresults", "No results.");
hideDivs("paging1", "paging2");
return "";
}
for (var i = 0; i < len; i++) {
html.push(searchItemRenderers.images(items[i], i, len));
}
return html.join("\n\n");
}
```
The Bing Image Search API can return four types of search suggestions to guide the user's search experience, each in its own top-level object:
| Suggestion | Description |
|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `pivotSuggestions` | Queries that replace a pivot word in the original search with a different one. For example, if you search for "red flowers," a pivot word might be "red," and a pivot suggestion might be "yellow flowers." |
| `queryExpansions` | Queries that narrow the original search by adding more terms. For example, if you search for "Microsoft Surface," a query expansion might be "Microsoft Surface Pro." |
| `relatedSearches` | Queries that were also entered by other users who entered the original search. For example, if you search for "Mount Rainier," a related search might be "Mt. Saint Helens." |
| `similarTerms` | Queries that are similar in meaning to the original search. For example, if you search for "kittens," a similar term might be "cute." |
This application renders only the `relatedSearches` suggestions, placing the resulting links in the page's sidebar.
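For orientation, a `relatedSearches` entry in the response JSON has roughly the following shape; the values here are hypothetical, and the exact field set may vary:
```json
{
  "relatedSearches": [
    {
      "text": "mt saint helens",
      "displayText": "Mt. Saint Helens",
      "webSearchUrl": "https://www.bing.com/images/search?q=mt+saint+helens"
    }
  ]
}
```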
## <a name="rendering-search-results"></a>Återgivning av sökresultat
I programmet innehåller `searchItemRenderers`-objektet återgivningsfunktioner som genererar HTML för varje typ av sökresultat.
```javascript
searchItemRenderers = {
images: function(item, index, count) { ... },
relatedSearches: function(item) { ... }
}
```
A renderer function can accept the following parameters:
| Parameter | Description |
|---------|----------------------------------------------------------------------------------------------|
| `item` | The JavaScript object containing the item's properties, such as its URL and a description. |
| `index` | The index of the result item within its collection. |
| `count` | The number of items in the search result's item collection. |
The `index` and `count` parameters are used to number results, to generate HTML for collections, and to organize the content. Specifically, the renderer:
* Calculates the thumbnail image size (the width varies, with a minimum of 120 pixels, while the height is fixed at 120 pixels).
* Builds the HTML `<img>` tag to display the thumbnail.
* Builds the HTML `<a>` tags that link to the image and the page that contains it.
* Builds the description that displays information about the image and the site it's on.
```javascript
images: function (item, index, count) {
var height = 120;
var width = Math.max(Math.round(height * item.thumbnail.width / item.thumbnail.height), 120);
var html = [];
if (index === 0) html.push("<p class='images'>");
var title = escape(item.name) + "\n" + getHost(item.hostPageDisplayUrl);
html.push("<p class='images' style='max-width: " + width + "px'>");
html.push("<img src='"+ item.thumbnailUrl + "&h=" + height + "&w=" + width +
"' height=" + height + " width=" + width + "'>");
html.push("<br>");
html.push("<nobr><a href='" + item.contentUrl + "'>Image</a> - ");
html.push("<a href='" + item.hostPageUrl + "'>Page</a></nobr><br>");
html.push(title.replace("\n", " (").replace(/([a-z0-9])\.([a-z0-9])/g, "$1.<wbr>$2") + ")</p>");
return html.join("");
}, // relatedSearches renderer omitted
```
The thumbnails' `height` and `width` are used both in the `<img>` tag and in the `h` and `w` fields of the thumbnail's URL. This lets Bing return [a thumbnail](../bing-web-search/resize-and-crop-thumbnails.md) of exactly that size.
## <a name="persisting-client-id"></a>Bestående klient-ID
Svar från API:er för Bing Search kan innehålla ett `X-MSEdge-ClientID`-huvud som ska skickas tillbaka till API:et med efterföljande förfrågningar. Om flera API:er för Bing-sökning används ska samma klient-ID användas för dem om möjligt.
När `X-MSEdge-ClientID`-huvudet tillhandahålls kan Bing-API:er associera alla sökningar för en användare, vilket är användbart i
Först hjälper Bing-sökmotorn till med att tillämpa tidigare kontexter på sökningarna för att hitta resultat som bättre tillfredsställer användaren. Om en användare tidigare har sökt efter termer som exempelvis relaterar till segling kan en senare sökning efter ”knopar” returnera information om knopar som används vid segling.
Därefter väljer Bing slumpmässigt ut användare som ska prova nya funktioner innan de blir allmänt tillgängliga. Genom att tillhandahålla samma klient-ID med varje begäran säkerställs att användare som har valts för att se en funktion alltid ser den. Utan klient-ID kan användaren se en funktion som sedan försvinner, till synes slumpmässigt, i sökresultatet.
Säkerhetsprinciper för webbläsaren (CORS) kan hindra att `X-MSEdge-ClientID`-huvudet visas för JavaScript. Den här begränsningen uppstår när söksvaret har ett annat ursprung än sidan som begärt det. I en produktionsmiljö bör du hantera den här principen genom att lägga upp ett serverskript som gör API-anrop på samma domän som webbsidan. Eftersom skriptet har samma ursprung som webbsidan är sedan `X-MSEdge-ClientID`-huvudet tillgängligt för JavaScript.
> [!NOTE]
> In a production web application, you should perform the request server-side anyway. Otherwise, your Bing Search API key must be included in the web page, where it is available to anyone who views the source. You are billed for all usage under your API subscription key, even requests made by unauthorized parties, so it is important not to expose your key.
For development purposes, you can make the Bing Web Search API request through a CORS proxy. The response from such a proxy has an `Access-Control-Expose-Headers` header that allows the response headers through and makes them available to JavaScript.
It's easy to install a CORS proxy to allow the tutorial app to access the client ID header. [Install Node.js](https://nodejs.org/en/download/) if you don't already have it, then run the following command in a command window:
```console
npm install -g cors-proxy-server
```
Then change the Bing Image Search endpoint in the HTML file to: \
`http://localhost:9090/https://api.cognitive.microsoft.com/bing/v7.0/images/search`
Finally, start the CORS proxy with the following command:
```console
cors-proxy-server
```
Leave the command window open while you use the tutorial app; closing the window stops the proxy. In the expandable HTTP headers section below the search results, you can now see the `X-MSEdge-ClientID` header (among others) and verify that it is the same for each request.
## <a name="next-steps"></a>Nästa steg
> [!div class="nextstepaction"]
> [Extrahera avbildningsinformation med hjälp av API för bildsökning i Bing](tutorial-image-post.md)
## <a name="see-also"></a>Se även
* [API-referens för bildsökning i Bing](//docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v7-reference)
title: "Find the Maximum Value in a Numeric Sequence"
ms.date: "03/30/2017"
dev_langs:
- "csharp"
- "vb"
ms.assetid: 70d7c058-0280-4815-a008-6f290093591a
---
# Find the Maximum Value in a Numeric Sequence
Use the <xref:System.Linq.Enumerable.Max%2A> operator to find the highest value in a sequence of numeric values.
## Example
The following example finds the latest date of hire for any employee.
If you run this query against the sample Northwind database, the output is: `11/15/1994 12:00:00 AM`.
[!code-csharp[DLinqQueryExamples#6](../../../../../../samples/snippets/csharp/VS_Snippets_Data/DLinqQueryExamples/cs/Program.cs#6)]
[!code-vb[DLinqQueryExamples#6](../../../../../../samples/snippets/visualbasic/VS_Snippets_Data/DLinqQueryExamples/vb/Module1.vb#6)]
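 Since the referenced snippets don't render here, the following is a minimal C# sketch of the same query; it assumes a Northwind `DataContext` named `db` whose `Employees` table exposes a nullable `HireDate` column:
```csharp
// A minimal sketch (assumes db is a Northwind DataContext and that
// HireDate is a Nullable<DateTime> column on the Employees table).
DateTime? latestHireDate = db.Employees.Max(emp => emp.HireDate);
Console.WriteLine(latestHireDate);
```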
## Example
The following example finds the most units in stock for any product.
If you run this example against the sample Northwind database, the output is: `125`.
[!code-csharp[DLinqQueryExamples#7](../../../../../../samples/snippets/csharp/VS_Snippets_Data/DLinqQueryExamples/cs/Program.cs#7)]
[!code-vb[DLinqQueryExamples#7](../../../../../../samples/snippets/visualbasic/VS_Snippets_Data/DLinqQueryExamples/vb/Module1.vb#7)]
## Example
The following example uses Max to find the `Products` that have the highest unit price in each category. The output then lists the results by category.
[!code-csharp[DLinqQueryExamples#8](../../../../../../samples/snippets/csharp/VS_Snippets_Data/DLinqQueryExamples/cs/Program.cs#8)]
[!code-vb[DLinqQueryExamples#8](../../../../../../samples/snippets/visualbasic/VS_Snippets_Data/DLinqQueryExamples/vb/Module1.vb#8)]
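 For reference, a hedged C# sketch of such a grouped query follows; the `db` context and column names are assumptions based on the Northwind schema:
```csharp
// A sketch: group products by category, then select, per group, the
// products whose UnitPrice equals the group's maximum unit price.
var maxPriceByCategory =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        MostExpensiveProducts =
            from prod2 in grouping
            where prod2.UnitPrice == grouping.Max(p => p.UnitPrice)
            select prod2
    };
foreach (var categoryGroup in maxPriceByCategory)
{
    Console.WriteLine(categoryGroup.Key);
    foreach (var product in categoryGroup.MostExpensiveProducts)
    {
        Console.WriteLine("\t" + product.ProductName);
    }
}
```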
If you run the previous query against the Northwind sample database, your results will resemble the following:
`1`
`Côte de Blaye`
`2`
`Vegie-spread`
`3`
`Sir Rodney's Marmalade`
`4`
`Raclette Courdavault`
`5`
`Gnocchi di nonna Alice`
`6`
`Thüringer Rostbratwurst`
`7`
`Manjimup Dried Apples`
`8`
`Carnarvon Tigers`
## See also
- [Aggregate Queries](../../../../../../docs/framework/data/adonet/sql/linq/aggregate-queries.md)
- [Downloading Sample Databases](../../../../../../docs/framework/data/adonet/sql/linq/downloading-sample-databases.md)
title: Planera för distribution av Azure File Sync | Microsoft Docs
description: Planera för en distribution med Azure File Sync, en tjänst som gör att du kan cachelagra ett antal Azure-filresurser på en lokal Windows Server eller virtuell dator i molnet.
author: roygara
ms.service: storage
ms.topic: conceptual
ms.date: 01/29/2021
ms.author: rogarana
ms.subservice: files
ms.custom: references_regions
ms.openlocfilehash: 65293df5fae523bff36240273afb93c4dd8485df
ms.sourcegitcommit: 54e1d4cdff28c2fd88eca949c2190da1b09dca91
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 01/31/2021
ms.locfileid: "99219484"
---
# <a name="planning-for-an-azure-file-sync-deployment"></a>Planera för distribution av Azure File Sync
:::row:::
:::column:::
[](https://www.youtube.com/watch?v=nfWLO7F52-s)
:::column-end:::
:::column:::
        Azure File Sync is a service that allows you to cache a number of Azure file shares on an on-premises Windows Server or cloud VM.
        This article introduces you to Azure File Sync concepts and features. Once you are familiar with Azure File Sync, consider following the [Azure File Sync deployment guide](storage-sync-files-deployment-guide.md) to try out this service.
:::column-end:::
:::row-end:::
The files will be stored in the cloud in [Azure file shares](storage-files-introduction.md). Azure file shares can be used in two ways: by directly mounting these serverless Azure file shares (SMB) or by caching Azure file shares on-premises using Azure File Sync. Which deployment option you choose changes the aspects you need to consider as you plan for your deployment.
- **Direct mount of an Azure file share**: Since Azure Files provides SMB access, you can mount Azure file shares on-premises or in the cloud using the standard SMB client available in Windows, macOS, and Linux. Because Azure file shares are serverless, deploying for production scenarios does not require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.
- **Cache Azure file shares on-premises with Azure File Sync**: Azure File Sync allows you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud) Windows Server into a quick cache of your Azure file share.
## <a name="management-concepts"></a>Hanterings begrepp
En Azure File Sync distribution har tre grundläggande hanterings objekt:
- **Azure-fil resurs**: en Azure-filresurs är en moln fil resurs utan server, som tillhandahåller *moln slut punkten* för en Azure File Sync synkroniseringsrelation. Filer i en Azure-filresurs kan nås direkt med SMB eller det fileraste protokollet, men vi rekommenderar att du huvudsakligen kommer åt filerna via Windows Server-cachen när Azure-filresursen används med Azure File Sync. Detta beror på att Azure Files idag saknar en effektiv mekanism för ändrings identifiering som Windows Server har, så att ändringar i Azure-filresursen direkt tar tid att sprida tillbaka till Server slut punkterna.
- **Server slut punkt**: sökvägen på Windows-servern som synkroniseras med en Azure-filresurs. Detta kan vara en speciell mapp på en volym eller volymens rot. Flera Server slut punkter kan finnas på samma volym om deras namn rymder inte överlappar varandra.
- **Sync-grupp**: objektet som definierar den synkroniserade relationen mellan en **moln slut punkt** eller Azure-filresurs och en server slut punkt. Slutpunkter i en synkroniseringsgrupp synkroniseras med varandra. Om du till exempel har två distinkta uppsättningar med filer som du vill hantera med Azure File Sync skapar du två synkroniserade grupper och lägger till olika slut punkter i varje synkroniseringsresurs.
### <a name="azure-file-share-management-concepts"></a>Hanterings koncept för Azure-filresurs
[!INCLUDE [storage-files-file-share-management-concepts](../../../includes/storage-files-file-share-management-concepts.md)]
### <a name="azure-file-sync-management-concepts"></a>Principer för Azure File Sync hantering
Sync-grupper distribueras till **Storage Sync-tjänster**, som är toppnivå objekt som registrerar servrar som ska användas med Azure File Sync och innehåller grupp relationerna. Tjänsten Storage Sync service är en peer med lagrings konto resursen och kan distribueras på samma sätt till Azure-resurs grupper. En tjänst för synkronisering av lagring kan skapa synkroniserade grupper som innehåller Azure-filresurser över flera lagrings konton och flera registrerade Windows-servrar.
Innan du kan skapa en Sync-grupp i en tjänst för synkronisering av lagring måste du först registrera en Windows-Server med tjänsten för synkronisering av lagring. Detta skapar ett **registrerat Server** objekt som representerar en förtroende relation mellan servern eller klustret och tjänsten för synkronisering av lagring. Om du vill registrera en tjänst för synkronisering av lagring måste du först installera Azure File Sync-agenten på-servern. En enskild server eller ett kluster kan bara registreras med en lagrings tjänst för synkronisering i taget.
En Sync-grupp innehåller en moln slut punkt, en Azure-filresurs och minst en server slut punkt. Serverns slut punkts objekt innehåller de inställningar som konfigurerar kapaciteten för **moln nivåer** , som tillhandahåller cachelagring-funktionen för Azure File Sync. För att synkronisera med en Azure-filresurs måste lagrings kontot som innehåller Azure-filresursen finnas i samma Azure-region som tjänsten för synkronisering av lagring.
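To make these objects concrete, here is a minimal Azure PowerShell sketch that wires them together. It assumes the Az and Az.StorageSync modules are installed, that you are signed in, and that the server has already been registered; all resource names, regions, and paths are hypothetical placeholders.
```powershell
# A sketch only: create a storage sync service, a sync group, a cloud endpoint,
# and a server endpoint. Names, paths, and regions below are placeholders.
$syncService = New-AzStorageSyncService -ResourceGroupName "afs-rg" `
    -Name "mystoragesyncservice" -Location "westus2"
$syncGroup = New-AzStorageSyncGroup -ParentObject $syncService -Name "sync-group-1"
# The cloud endpoint points at an existing Azure file share in the same region.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "afs-rg" -Name "mystorageaccount"
New-AzStorageSyncCloudEndpoint -ParentObject $syncGroup -Name "cloud-endpoint" `
    -StorageAccountResourceId $storageAccount.Id -AzureFileShareName "myfileshare"
# The server endpoint maps a local path on an already-registered server into the
# sync group, with cloud tiering enabled at 20% volume free space.
$registeredServer = Get-AzStorageSyncServer -ParentObject $syncService
New-AzStorageSyncServerEndpoint -ParentObject $syncGroup -Name "server-endpoint" `
    -ServerResourceId $registeredServer.ResourceId `
    -ServerLocalPath "D:\shares\marketing" `
    -CloudTiering -VolumeFreeSpacePercent 20
```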
> [!Important]
> You can make changes to any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, the changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](storage-files-faq.md#afs-change-detection).
### <a name="management-guidance"></a>Vägledning för hantering
När du distribuerar Azure File Sync rekommenderar vi att:
- Distribuera Azure File-resurser 1:1 med Windows-filresurser. Med serverns slut punkt objekt får du en stor flexibilitet för hur du ställer in topologin på Server sidan i den synkroniserade relationen. För att förenkla hanteringen ska du göra sökvägen till Server slut punkten matcha sökvägen till Windows-filresursen.
- Använd så få tjänster som möjligt för Storage-synkronisering. Detta fören klar hanteringen när du har synkronisera grupper som innehåller flera Server slut punkter, eftersom en Windows Server bara kan registreras till en lagrings tjänst för synkronisering i taget.
- Betala till ett lagrings kontos IOPS-begränsningar när du distribuerar Azure-filresurser. Vi rekommenderar att du mappar fil resurser 1:1 med lagrings konton, men det kanske inte alltid är möjligt på grund av olika begränsningar och begränsningar, både från din organisation och från Azure. Om det inte går att ha en enda fil resurs som har distribuerats i ett lagrings konto bör du överväga vilka resurser som ska vara hög aktiva och vilka resurser som är mindre aktiva för att säkerställa att de hetaste fil resurserna inte placeras i samma lagrings konto tillsammans.
## <a name="windows-file-server-considerations"></a>Windows fil Server-överväganden
Om du vill aktivera Sync-funktionen på Windows Server måste du installera den Azure File Sync nedladdnings bara agenten. Azure File Sync agenten innehåller två huvud komponenter: `FileSyncSvc.exe` , den bakgrunds fönster tjänst som ansvarar för att övervaka ändringar på Server slut punkter och initiera svarssessioner, och `StorageSync.sys` ett fil system filter som aktiverar moln nivåer och snabb haveri beredskap.
### <a name="operating-system-requirements"></a>Operativsystemskrav
Azure File Sync stöds med följande versioner av Windows Server:
| Version | SKU: er som stöds | Distributions alternativ som stöds |
|---------|----------------|------------------------------|
| Windows Server 2019 | Data Center, standard och IoT | Full och Core |
| Windows Server 2016 | Data Center, standard och lagrings Server | Full och Core |
| Windows Server 2012 R2 | Data Center, standard och lagrings Server | Full och Core |
Framtida versioner av Windows Server kommer att läggas till när de släpps.
> [!Important]
> We recommend keeping all servers that you use with Azure File Sync up to date with the latest updates from Windows Update.
### <a name="minimum-system-resources"></a>Minsta system resurser
Azure File Sync kräver en server, antingen fysisk eller virtuell, med minst en processor och minst 2 GiB minne.
> [!Important]
> If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum of 2048 MiB of memory.
For most production workloads, we do not recommend configuring an Azure File Sync sync server with only the minimum requirements. See [Recommended system resources](#recommended-system-resources) for more information.
### <a name="recommended-system-resources"></a>Rekommenderade system resurser
Precis som alla Server funktioner eller program bestäms system resurs kraven för Azure File Sync av distributionens skala. större distributioner på en server kräver större system resurser. För Azure File Sync bestäms skalningen av antalet objekt över Server slut punkter och omsättningen på data uppsättningen. En enskild server kan ha Server slut punkter i flera Sync-grupper och antalet objekt som anges i följande tabell konton för det fullständiga namn område som en server är kopplad till.
Till exempel Server slut punkt A med 10 000 000 objekt + Server slut punkt B med 10 000 000 objekt = 20 000 000-objekt. För den här exempel distributionen rekommenderar vi 8 processorer, 16 GiB minne för stabilt tillstånd och (om möjligt) 48 GiB minne för den första migreringen.
Namn områdes data lagras i minnet av prestanda skäl. På grund av detta kräver större namn rymder mer minne för att upprätthålla bästa prestanda och mer omsättning kräver mer processor kraft för att bearbeta.
I följande tabell har vi tillhandahållit både storleken på namn området och en konvertering till kapacitet för vanliga fil resurser i generella syften, där den genomsnittliga fil storleken är 512 KiB. Om fil storlekarna är mindre bör du överväga att lägga till ytterligare minne för samma mängd kapacitet. Basera minnes konfigurationen på storleken på namn området.
| Namespace size - files & directories (millions) | Typical capacity (TiB) | CPU cores | Recommended memory (GiB) |
|---------|---------|---------|---------|
| 3 | 1.4 | 2 | 8 (initial sync) / 2 (typical churn) |
| 5 | 2.3 | 2 | 16 (initial sync) / 4 (typical churn) |
| 10 | 4.7 | 4 | 32 (initial sync) / 8 (typical churn) |
| 30 | 14.0 | 8 | 48 (initial sync) / 16 (typical churn) |
| 50 | 23.3 | 16 | 64 (initial sync) / 32 (typical churn) |
| 100* | 46.6 | 32 | 128 (initial sync) / 32 (typical churn) |
\*Syncing more than 100 million files & directories is not recommended at this time. This is a soft limit based on our tested thresholds. For more information, see [Azure Files scalability and performance targets](storage-files-scale-targets.md#azure-file-sync-scale-targets).
> [!TIP]
> Initial synchronization of a namespace is an intensive operation, and we recommend allocating more memory until initial synchronization is complete. This isn't required, but it may speed up initial sync.
>
> Typical churn is 0.5% of the namespace changing per day. For higher levels of churn, consider adding more CPU.
- A locally attached volume formatted with the NTFS file system.
### <a name="evaluation-cmdlet"></a>Utvärderings-cmdlet
Innan du distribuerar Azure File Sync bör du utvärdera om den är kompatibel med systemet med hjälp av Azure File Sync Evaluation-cmdleten. Denna cmdlet söker efter eventuella problem med fil systemet och data uppsättningen, till exempel tecken som inte stöds eller en operativ system version som inte stöds. Kontrollerna avser de flesta men inte alla funktioner som nämns nedan. Vi rekommenderar att du läser igenom resten av det här avsnittet noggrant för att se till att distributionen går smidigt.
Du kan installera utvärderings-cmdleten genom att installera AZ PowerShell-modulen, som kan installeras genom att följa anvisningarna här: [Installera och konfigurera Azure PowerShell](/powershell/azure/install-Az-ps).
#### <a name="usage"></a>Användning
Du kan starta utvärderings verktyget på ett par olika sätt: du kan utföra system kontroller, data uppsättnings kontroller eller båda. Så här utför du både system-och data uppsättnings kontroller:
```powershell
Invoke-AzStorageSyncCompatibilityCheck -Path <path>
```
To test only your dataset:
```powershell
Invoke-AzStorageSyncCompatibilityCheck -Path <path> -SkipSystemChecks
```
To test system requirements only:
```powershell
Invoke-AzStorageSyncCompatibilityCheck -ComputerName <computer name> -SkipNamespaceChecks
```
To display the results in CSV:
```powershell
$validation = Invoke-AzStorageSyncCompatibilityCheck C:\DATA
$validation.Results | Select-Object -Property Type, Path, Level, Description, Result | Export-Csv -Path C:\results.csv -Encoding utf8
```
### <a name="file-system-compatibility"></a>Filsystemkompatibilitet
Azure File Sync stöds endast i direktansluten NTFS-volymer. Direct Attached Storage eller DAS på Windows Server innebär att operativ systemet Windows Server äger fil systemet. DAS kan tillhandahållas genom att fysiskt bifoga diskar till fil servern, ansluta virtuella diskar till en virtuell fil server (till exempel en virtuell dator som finns i Hyper-V) eller till och med via ISCSI.
Endast NTFS-volymer stöds. ReFS, FAT, FAT32 och andra fil system stöds inte.
I följande tabell visas interop-tillstånd för NTFS-fil system funktioner:
| Feature | Support status | Notes |
|---------|----------------|-------|
| Access control lists (ACLs) | Fully supported | Windows-style discretionary access control lists are preserved by Azure File Sync and are enforced by Windows Server on server endpoints. ACLs can also be enforced when directly mounting the Azure file share, although this requires additional configuration. See the [Identity section](#identity) for more information. |
| Hard links | Skipped | |
| Symbolic links | Skipped | |
| Mount points | Partially supported | Mount points might be the root of a server endpoint, but they are skipped if they are contained in a server endpoint's namespace. |
| Junctions | Skipped | For example, the Distributed File System DfrsrPrivate and DFSRoots folders. |
| Reparse points | Skipped | |
| NTFS compression | Fully supported | |
| Sparse files | Fully supported | Sparse files sync (are not blocked), but they sync to the cloud as full files. If the file contents change in the cloud (or on another server), the file is no longer sparse when the change is downloaded. |
| Alternate Data Streams (ADS) | Preserved, but not synced | For example, classification tags created by the File Classification Infrastructure are not synced. Existing classification tags on files on each of the server endpoints are left untouched. |
<a id="files-skipped"></a>Azure File Sync kommer också att hoppa över vissa temporära filer och systemmappar:
| Fil/mapp | Anteckning |
|-|-|
| pagefile.sys | Filinformation till system |
| Desktop.ini | Filinformation till system |
| tummes. db | Temporär fil för miniatyrer |
| ehthumbs. db | Temporär fil för medie miniatyrer |
| ~$\*.\* | Tillfällig Office-fil |
| \*. tmp | Temporär fil |
| \*.laccdb | Lås fil för åtkomst databasen|
| 635D02A9D91C401B97884B82B3BCDAEA.* | Intern Sync-fil|
| \\System volym information | Mapp som är speciell för volym |
| $RECYCLE. PLATS| Mapp |
| \\SyncShareState | Mapp för synkronisering |
### <a name="failover-clustering"></a>Redundanskluster
Windows Server-redundanskluster stöds av Azure File Sync för distributions alternativet "fil server för allmän användning". Redundanskluster stöds inte på Skalbar filserver för program data (SOFS) eller på klusterdelade volymer (CSV: er).
> [!Note]
> The Azure File Sync agent must be installed on every node in a Failover Cluster for sync to work correctly.
### <a name="data-deduplication"></a>Datadeduplicering
**Windows Server 2016 och Windows Server 2019**
Datadeduplicering stöds oavsett om moln nivån är aktive rad eller inaktive rad på en eller flera Server slut punkter på volymen för Windows Server 2016 och Windows Server 2019. Genom att aktivera datadeduplicering på en volym med aktive rad moln nivå kan du cachelagra fler filer lokalt utan att tillhandahålla mer lagrings utrymme.
När datadeduplicering har Aktiver ATS på en volym med aktive rad moln nivå, kommer deduplicering av optimerade filer på serverns slut punkt att på samma sätt som en normal fil baserat på princip inställningarna för moln skiktet. När de deduplicerade filerna har flyttats, körs skräp insamlings jobbet för datadeduplicering automatiskt för att frigöra disk utrymme genom att ta bort onödiga segment som inte längre refereras till av andra filer på volymen.
Observera att volym besparingarna gäller endast för servern. dina data i Azure-filresursen kommer inte att dedupliceras.
> [!Note]
> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062](https://support.microsoft.com/help/4520062) must be installed and Azure File Sync agent version 9.0.0.0 or newer is required.
**Windows Server 2012 R2**
Azure File Sync does not support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.
**Notes**
- If Data Deduplication is installed prior to installing the Azure File Sync agent, a restart is required to support Data Deduplication and cloud tiering on the same volume.
- If Data Deduplication is enabled on a volume after cloud tiering is enabled, the initial Deduplication optimization job will optimize files on the volume that are not already tiered and will have the following impact on cloud tiering:
    - The free space policy will continue to tier files as per the free space on the volume by using the heatmap.
    - The date policy will skip tiering of files that may have been otherwise eligible for tiering due to the Deduplication optimization job accessing the files.
- For ongoing Deduplication optimization jobs, cloud tiering with the date policy will get delayed by the Data Deduplication [MinimumFileAgeDays](/powershell/module/deduplication/set-dedupvolume?view=win10-ps) setting, if the file is not already tiered.
    - Example: If the MinimumFileAgeDays setting is seven days and the cloud tiering date policy is 30 days, the date policy will tier files after 37 days.
    - Note: Once a file is tiered by Azure File Sync, the Deduplication optimization job will skip the file.
- If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016 or Windows Server 2019, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
    - Uninstall the Azure File Sync agent for Windows Server 2012 R2 and restart the server.
    - Download the Azure File Sync agent for the new server operating system version (Windows Server 2016 or Windows Server 2019).
    - Install the Azure File Sync agent and restart the server.
    Note: The Azure File Sync configuration settings on the server are retained when the agent is uninstalled and reinstalled.
### <a name="distributed-file-system-dfs"></a>Distributed File System (DFS)
Azure File Sync stöder interop med DFS-namnområden (DFS-N) och DFS Replication (DFS-R).
**DFS-namnrymder (DFS-n)**: Azure File Sync stöds fullt ut på DFS-n-servrar. Du kan installera Azure File Sync agenten på en eller flera DFS-N-medlemmar för att synkronisera data mellan server slut punkterna och moln slut punkten. Mer information finns i [Översikt över DFS-namnrymder](/windows-server/storage/dfs-namespaces/dfs-overview).
**DFS Replication (DFS-r)**: eftersom DFS-r och Azure File Sync båda är lösningar för replikering rekommenderar vi i de flesta fall att du ersätter DFS-R med Azure File Sync. Det finns dock flera scenarier där du vill använda DFS-R och Azure File Sync tillsammans:
- Du migrerar från en DFS-R-distribution till en Azure File Sync-distribution. Mer information finns i [Migrera en DFS Replication (DFS-R) distribution till Azure File Sync](storage-sync-files-deployment-guide.md#migrate-a-dfs-replication-dfs-r-deployment-to-azure-file-sync).
- Alla lokala servrar som behöver en kopia av dina fildata kan vara anslutna direkt till Internet.
- Filial servrar konsoliderar data till en enda hubb-server som du vill använda Azure File Sync.
För att Azure File Sync och DFS-R ska fungera sida vid sida:
1. Azure File Sync moln nivåer måste inaktive ras på volymer med replikerade DFS-R-mappar.
2. Server slut punkter ska inte konfigureras i mappar för skrivskyddad DFS-R-replikering.
Mer information finns i [DFS Replication översikt](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj127250(v=ws.11)).
### <a name="sysprep"></a>Sysprep
Att använda Sysprep på en server där Azure File Sync-agenten är installerad stöds inte och kan leda till oväntade resultat. Agent installation och Server registrering bör ske när du har distribuerat Server avbildningen och slutfört Sysprep-miniinstallationsprogrammet.
### <a name="windows-search"></a>Windows-sök
Om moln skiktning är aktiverat på en server slut punkt hoppas filer som skiktas över och inte indexeras av Windows Search. Filer som inte är på en nivå indexeras korrekt.
### <a name="other-hierarchical-storage-management-hsm-solutions"></a>Andra HSM-lösningar (hierarkisk lagrings hantering)
Inga andra HSM-lösningar ska användas med Azure File Sync.
## <a name="identity"></a>Identitet
Azure File Sync fungerar med din standard-AD-baserade identitet utan särskilda inställningar utöver att konfigurera synkronisering. När du använder Azure File Sync är den allmänna förväntan att de flesta åtkomst går igenom Azure File Sync caching-servrar i stället för via Azure-filresursen. Eftersom Server slut punkterna finns på Windows Server, och Windows Server har stöd för AD-och Windows-ACL: er under en längre tid, behövs ingenting utöver att se till att de Windows-filservrar som är registrerade hos synkroniseringstjänsten för lagring är domänanslutna. Azure File Sync kommer att lagra ACL: er på filerna i Azure-filresursen och replikera dem till alla Server slut punkter.
Även om ändringar som görs direkt till Azure-filresursen tar längre tid att synkronisera till serverns slut punkter i Sync-gruppen, kanske du också vill se till att du kan genomdriva dina AD-behörigheter på fil resursen direkt i molnet. För att göra detta måste du domän ansluta till ditt lagrings konto till din lokala AD, precis som hur dina Windows-filservrar är domänanslutna. Mer information om domän anslutning till ditt lagrings konto till ett kundägda Active Directory finns i [Azure Files Active Directory översikt](storage-files-active-directory-overview.md).
> [!Important]
> Domain joining your storage account to Active Directory is not required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly.
## <a name="networking"></a>Nätverk
Azure File Sync-agenten kommunicerar med lagrings tjänsten för synkronisering och Azure-filresursen med hjälp av Azure File Sync REST-protokollet och det protokoll som används, och båda använder alltid HTTPS via port 443. SMB används aldrig för att ladda upp eller ladda ned data mellan Windows Server och Azure-filresursen. Eftersom de flesta organisationer tillåter HTTPS-trafik över port 443 krävs det vanligt vis inte någon särskild nätverks konfiguration för att kunna distribuera Azure File Sync.
Baserat på din organisations policy eller unika myndighets krav kan du behöva mer begränsad kommunikation med Azure, och därför Azure File Sync tillhandahålla flera mekanismer för att konfigurera nätverk. Utifrån dina krav kan du:
- Synkronisering av tunnlar och fil överföring/Ladda ned trafik över din ExpressRoute eller Azure VPN.
- Använd Azure Files och funktioner i Azure-nätverk som tjänst slut punkter och privata slut punkter.
- Konfigurera Azure File Sync som stöder proxyservern i din miljö.
- Begränsa nätverks aktivitet från Azure File Sync.
Mer information om Azure File Sync och nätverk finns i [Azure File Sync nätverks överväganden](storage-sync-files-networking-overview.md).
## <a name="encryption"></a>Kryptering
När du använder Azure File Sync finns det tre olika krypterings lager som du bör tänka på: kryptering i den andra lagringen av Windows Server, kryptering under överföring mellan Azure File Sync agent och Azure och kryptering i resten av dina data i Azure-filresursen.
### <a name="windows-server-encryption-at-rest"></a>Windows Server-kryptering i vila
Det finns två strategier för att kryptera data på Windows Server som fungerar normalt med Azure File Sync: kryptering under fil systemet, så att fil systemet och alla data som skrivs till den krypteras, och kryptering i själva fil formatet. Dessa metoder är inte ömsesidigt uteslutande. de kan användas tillsammans om det behövs eftersom krypterings syftet är annorlunda.
Windows Server tillhandahåller BitLocker-inkorg för att tillhandahålla kryptering under fil systemet. BitLocker är helt transparent för Azure File Sync. Den främsta anledningen till att använda en krypterings funktion som BitLocker är att förhindra fysisk exfiltrering av data från ditt lokala data Center genom att stjäla diskarna och förhindra inläsning av obehörigt operativ system för att utföra obehörig läsning/skrivning till dina data. Mer information om BitLocker finns i [Översikt över BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview).
Produkter från tredje part som fungerar på liknande sätt som BitLocker, i de hamnar under NTFS-volymen, bör fungera på samma sätt som i sin helhet med Azure File Sync.
Den andra huvudsakliga metoden för att kryptera data är att kryptera filens data ström när programmet sparar filen. Vissa program kan göra detta internt, men det är vanligt vis inte fallet. Ett exempel på en metod för kryptering av filens data ström är Azure Information Protection (AIP)/Azure Rights Management Services (Azure RMS)/Active Directory RMS. Den främsta anledningen till att använda en krypterings funktion som AIP/RMS är att förhindra data exfiltrering data från din fil resurs genom att kopiera den till alternativa platser, till exempel till en flash-enhet eller skicka e-post till en obehörig person. När en fils data ström är krypterad som en del av fil formatet kommer den här filen fortfarande att vara krypterad på Azure-filresursen.
Azure File Sync fungerar inte med NTFS EFS (NTFS Encrypted File System) eller krypterings lösningar från tredje part som är ovanför fil systemet men under filens data ström.
### <a name="encryption-in-transit"></a>Kryptering under överföring
> [!NOTE]
> The Azure File Sync service will remove support for TLS 1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS 1.2 by default. Using an earlier version of TLS could occur if TLS 1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 will only support TLS 1.2, and support for TLS 1.0 and 1.1 will be removed from existing regions on August 1st, 2020. For more information, see the [troubleshooting guide](storage-sync-files-troubleshoot.md#tls-12-required-for-azure-file-sync).
The Azure File Sync agent communicates with your storage sync service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. Azure File Sync does not send unencrypted requests over HTTP.
Azure storage accounts contain a switch for requiring encryption in transit, which is enabled by default. Even if the switch at the storage account level is disabled, meaning that unencrypted connections to your Azure file shares are possible, Azure File Sync will still only use encrypted channels to access your file share.
The primary reason to disable encryption in transit for the storage account is to support a legacy application that must be run on an older operating system, such as Windows Server 2008 R2 or an older Linux distribution, talking to an Azure file share directly. If the legacy application talks to the Windows Server cache of the file share, toggling this setting has no effect.
We strongly recommend ensuring encryption of data in transit is enabled.
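If you want to verify or re-enable that setting from Azure PowerShell, a minimal sketch follows; the resource group and account names are hypothetical placeholders:
```powershell
# A sketch: require secure transfer (encryption in transit) on a storage account.
Set-AzStorageAccount -ResourceGroupName "afs-rg" -Name "mystorageaccount" `
    -EnableHttpsTrafficOnly $true
```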
For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
### <a name="azure-file-share-encryption-at-rest"></a>Azure File Share-kryptering i vila
[!INCLUDE [storage-files-encryption-at-rest](../../../includes/storage-files-encryption-at-rest.md)]
## <a name="storage-tiers"></a>Lagringsnivåer
[!INCLUDE [storage-files-tiers-overview](../../../includes/storage-files-tiers-overview.md)]
### <a name="enable-standard-file-shares-to-span-up-to-100-tib"></a>Aktivera standard fil resurser för att täcka upp till 100 TiB
[!INCLUDE [storage-files-tiers-enable-large-shares](../../../includes/storage-files-tiers-enable-large-shares.md)]
#### <a name="regional-availability"></a>Regional tillgänglighet
[!INCLUDE [storage-files-tiers-large-file-share-availability](../../../includes/storage-files-tiers-large-file-share-availability.md)]
## <a name="azure-file-sync-region-availability"></a>Tillgänglighet för Azure File Sync-regioner
Information om regionala tillgänglighet finns i [produkt tillgänglighet per region](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
I följande regioner måste du begära åtkomst till Azure Storage innan du kan använda Azure File Sync med dem:
- Frankrike, södra
- Sydafrika, västra
- Förenade Arabemiraten Central
Följ processen i [det här dokumentet](https://azure.microsoft.com/global-infrastructure/geographies/)om du vill begära åtkomst för dessa regioner.
## <a name="redundancy"></a>Redundans
[!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
> [!Important]
> Geo-redundant and geo-zone redundant storage have the capability to manually fail storage over to the secondary region. We recommend that you do not do this outside of a disaster when you are using Azure File Sync, because of the increased likelihood of data loss. In the event of a disaster where you would like to initiate a manual failover of storage, you will need to open a support case with Microsoft to get Azure File Sync to resume sync with the secondary endpoint.
## <a name="migration"></a>Migrering
Om du har en befintlig Windows-fil Server kan Azure File Sync installeras direkt på plats, utan att behöva flytta data till en ny server. Om du planerar att migrera till en ny Windows-filserver som en del av att anta Azure File Sync finns det flera olika metoder för att flytta data över:
- Skapa server slut punkter för din gamla fil resurs och den nya fil resursen och låt Azure File Sync synkronisera data mellan server slut punkterna. Fördelen med den här metoden är att det är mycket enkelt att överprenumerera lagringen på den nya fil servern, eftersom Azure File Sync är beroende av moln nivåer. När du är klar kan du klippa över slutanvändare till fil resursen på den nya servern och ta bort den gamla fil resursens Server slut punkt.
- Skapa bara en server slut punkt på den nya fil servern och kopiera data till från den gamla fil resursen med hjälp av `robocopy` . Beroende på topologin för fil resurser på din nya server (hur många resurser du har på varje volym, hur det kostar varje volym osv.) kan du tillfälligt behöva etablera ytterligare lagrings utrymme eftersom det förväntas att `robocopy` från den gamla servern till den nya servern i det lokala data centret kommer att bli snabbare än Azure File Sync att flytta data till Azure.
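A minimal `robocopy` sketch for that copy step follows; the UNC source and local destination are hypothetical, and you should review each flag against your own requirements:
```console
robocopy \\old-server\marketing D:\shares\marketing /MIR /COPYALL /DCOPY:DAT /MT:16 /R:2 /W:1 /B /LOG+:C:\logs\robocopy.log
```
Here `/MIR` mirrors the source tree (including deletions), `/COPYALL` copies all file info including ACLs and timestamps, `/B` uses backup mode to read files regardless of permissions, and `/MT:16` runs the copy with 16 threads.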
You can also use Azure Data Box to migrate data into an Azure File Sync deployment. Most of the time, when customers want to use Data Box to ingest data, they do so because they believe it will increase the speed of their deployment or because it will help with constrained bandwidth scenarios. While it's true that using a Data Box to ingest data will decrease bandwidth usage, it will likely be faster for most scenarios to pursue an online data upload through one of the methods described above. To learn more about how to use Data Box to ingest data into your Azure File Sync deployment, see [Migrate data into Azure File Sync with Azure Data Box](storage-sync-offline-data-transfer.md).
A common mistake customers make when migrating data into their new Azure File Sync deployment is to copy data directly into the Azure file share, rather than onto their Windows file servers. Although Azure File Sync will identify all of the new files on the Azure file share and sync them back to your Windows file shares, this is generally considerably slower than loading data through the Windows file server. When using Azure copy tools, such as AzCopy, it's important to use the latest version. Check the [file copy tools table](storage-files-migration-overview.md#file-copy-tools) to get an overview of Azure copy tools and to ensure you can copy all of the important metadata of a file, such as timestamps and ACLs.
## <a name="antivirus"></a>Antivirus
Eftersom antivirus programmet fungerar genom att söka igenom filer efter känd skadlig kod kan ett antivirus program orsaka återkallande av nivåbaserade filer, vilket resulterar i högt utgående kostnader. I version 4,0 och senare av Azure File Sync agenten har filer på nivån säker Windows FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS angett. Vi rekommenderar att du rådfrågar din program varu leverantör för att lära dig hur du konfigurerar lösningen för att hoppa över att läsa filer med den här attributuppsättningen (många gör det automatiskt).
Microsofts interna antivirus lösningar, Windows Defender och System Center Endpoint Protection (SCEP), hoppar båda automatiskt över att läsa filer som har det här attributet angivet. Vi har testat dem och identifierat ett mindre problem: när du lägger till en server i en befintlig Sync-grupp, anropas filer som är mindre än 800 byte (nedladdade) på den nya servern. De här filerna kommer att finnas kvar på den nya servern och kommer inte att skiktas eftersom de inte uppfyller storleks kravet för skikt (>64 KB).
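If you want to see which files on a server endpoint are currently tiered, one rough approach is to look for that attribute flag yourself. The following PowerShell sketch is illustrative only; the path is a placeholder, and it simply tests for the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS bit (0x00400000):
```powershell
# A sketch: list files that carry FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS (0x00400000),
# which Azure File Sync agent versions 4.0+ set on tiered files.
$recallOnDataAccess = 0x00400000
Get-ChildItem -Path "D:\shares\marketing" -Recurse -File |
    Where-Object { ([int]$_.Attributes -band $recallOnDataAccess) -ne 0 } |
    Select-Object FullName, Length, Attributes
```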
> [!Note]
> Antivirus vendors can check compatibility between their product and Azure File Sync using the [Azure File Sync Antivirus Compatibility Test Suite](https://www.microsoft.com/download/details.aspx?id=58322), which is available for download on the Microsoft Download Center.
## <a name="backup"></a>Backup
Om moln skiktning är aktiverat ska lösningar som direkt säkerhetskopierar Server slut punkten eller en virtuell dator där Server slut punkten finns inte användas. Moln nivåer gör att endast en delmängd av dina data lagras på Server slut punkten, med den fullständiga data uppsättningen som finns i Azure-filresursen. Beroende på vilken säkerhets kopierings lösning som används kommer filer i nivå antingen att hoppas över och inte säkerhets kopie ras (eftersom de har attributet FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS), eller så kommer de att återkallas till disk, vilket resulterar i höga utgående kostnader. Vi rekommenderar att du använder en lösning för säkerhets kopiering i molnet för att säkerhetskopiera Azure-filresursen direkt. Mer information finns i [om säkerhets kopiering av Azure-filresurser](../../backup/azure-file-share-backup-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json) eller kontakta säkerhets kopierings leverantören för att se om de stöder säkerhets kopiering av Azure-filresurser.
Om du föredrar att använda en lokal lösning för säkerhets kopiering ska säkerhets kopieringar utföras på en server i den synkroniserade grupp där moln nivå inaktive ras. När du utför en återställning använder du alternativen på volym-eller fil nivå återställning. Filer som återställs med alternativet Återställning på filnivå synkroniseras till alla slut punkter i Sync-gruppen och befintliga filer ersätts med den version som återställs från säkerhets kopian. Återställningar på volym nivå ersätter inte nyare fil versioner i Azure-filresursen eller andra server slut punkter.
> [!Note]
> Bare-metal restore (BMR) can cause unexpected results and is not currently supported.
> [!Note]
> With version 9 of the Azure File Sync agent, VSS snapshots (including the Previous Versions tab) are now supported on volumes that have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](storage-sync-files-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## <a name="azure-file-sync-agent-update-policy"></a>Uppdateringsprincip för Azure File Sync-agenten
[!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]
## <a name="next-steps"></a>Nästa steg
* [Överväg inställningar för brand vägg och proxy](storage-sync-files-firewall-and-proxy.md)
* [Planera för en Azure Files-distribution](storage-files-planning.md)
* [Distribuera Azure Files](./storage-how-to-create-file-share.md)
* [Distribuera Azure File Sync](storage-sync-files-deployment-guide.md)
* [Övervaka Azure File Sync](storage-sync-files-monitoring.md)
title: How to Connect to the Management Console (Windows 10)
description: How to Connect to the Management Console
author: MaggiePucciEvans
ms.pagetype: mdop, appcompat, virtualization
ms.mktglfcycl: deploy
ms.sitesec: library
ms.prod: w10
ms.date: 04/19/2017
---
# How to Connect to the Management Console
**Applies to**
- Windows 10, version 1607
Use the following procedure to connect to the App-V Management Console.
**To connect to the App-V Management Console**
1. Open an Internet Explorer browser and type the address of the App-V Management server. For example, **https://\<_management server name_\>:\<_management service port number_\>/console.html**. A concrete example follows these steps.
2. To view different sections of the console, click the desired section in the navigation pane.
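For example, with a hypothetical management server named `appv-mgmt` and the management service listening on port 8080, the address would be:
```
https://appv-mgmt:8080/console.html
```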
## Have a suggestion for App-V?
Add or vote on suggestions on the [Application Virtualization feedback site](https://appv.uservoice.com/forums/280448-microsoft-application-virtualization).<br>For App-V issues, use the [App-V TechNet Forum](https://social.technet.microsoft.com/Forums/en-US/home?forum=mdopappv).
## Related topics
- [Operations for App-V](appv-operations.md)
| 35.9375 | 279 | 0.768696 | eng_Latn | 0.901458 |
4da16b64859f18b9cb2c90404748abeb0b71ccea | 3,657 | md | Markdown | README.md | IM-IgniteDEV/bukkit-inventories | 8e8516be39fb9c1336c222352b5d4306bf38e9fa | [
"MIT"
] | 2 | 2020-07-15T01:24:47.000Z | 2021-08-07T12:29:01.000Z | README.md | IM-IgniteDEV/bukkit-inventories | 8e8516be39fb9c1336c222352b5d4306bf38e9fa | [
"MIT"
] | null | null | null | README.md | IM-IgniteDEV/bukkit-inventories | 8e8516be39fb9c1336c222352b5d4306bf38e9fa | [
"MIT"
] | null | null | null | # bukkit-inventories [](https://jitpack.io/#whippytools/bukkit-inventories) [](https://travis-ci.org/whippytools/bukkit-inventories)
This API allows you to create custom inventory menus with annotations
# Using Bukkit-Inventories
To build with gradle, use these commands:
```shell
$ git clone https://github.com/whippytools/bukkit-inventories.git
$ gradle build
```
and if you want a jar (with all dependencies) you can use:
```shell
$ gradle shadowJar
```
You can also add this as a dependency using the following setup.
In Maven:
```xml
<repositories>
<repository>
<id>jitpack.io</id>
<url>https://jitpack.io</url>
</repository>
</repositories>
```
```xml
<dependency>
<groupId>com.github.whippytools</groupId>
<artifactId>bukkit-inventories</artifactId>
<version>newest-version</version>
</dependency>
```
Or in Gradle:
```gradle
repositories {
maven { url "https://jitpack.io" }
}
```
```gradle
dependencies {
compile group: 'com.github.whippytools', name: 'bukkit-inventories', version: 'newest-version'
}
```
# Example usage (normal inv)
```java
@Inventory(name = "&dPretty Inventory", size = 27)
@Item(material = Material.GOLDEN_APPLE, type = 1, name = "&3First &lItem", lore = {"&9AUUUU", "&kAUUU"}, slot = 0)
@Item(material = Material.GOLDEN_APPLE, name = "&ctest", slot = 1, action = "itemAction")
@Item(material = Material.GOLDEN_APPLE, slot = 2, forceEmptyName = true, forceEmptyLore = true)
@Item(item = "coolItem", slot = 3)
@ConfigItem("value.from.config", slot = 4)
@Fill(material = Material.STAINED_GLASS_PANE, type = 16)
public class TestInventory {
public void itemAction(InventoryClickEvent event) {
event.getWhoClicked().sendMessage(ChatColor.AQUA + "Test Message!!");
}
public ItemStack coolItem() {
return new ItemStack(Material.STONE, 1);
}
}
```
(there are so fuckin' many ways to do it with config so don't you ever say that this is for static inventories)
# Example Usage (villager trade inv)
```java
@Trade(villagerUUID = "uuid of the villager")
@TradeItem(firstTradeCost = "firstTradeCost", tradeResult = "tradeResult")
public class TestTrade {
private ItemStack firstTradeCost() {
return new ItemStack(Material.STONE, 1);
}
private ItemStack tradeResult() {
return new ItemStack(Material.DIAMOND, 1);
}
}
```
In @TradeItem you can also set a second trade cost and the maximum number of uses for the trade, as sketched below.
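A fuller trade might look like this. Note that the `secondTradeCost` and `maxUses` attribute names are illustrative assumptions -- check the annotation's source for the exact names:
```java
// Illustrative sketch -- attribute names for the second cost and usage cap are assumptions.
@Trade(villagerUUID = "uuid of the villager")
@TradeItem(
    firstTradeCost = "firstTradeCost",
    secondTradeCost = "secondTradeCost", // hypothetical attribute name
    tradeResult = "tradeResult",
    maxUses = 10                         // hypothetical attribute name
)
public class FullTestTrade {
    private ItemStack firstTradeCost() { return new ItemStack(Material.STONE, 16); }
    private ItemStack secondTradeCost() { return new ItemStack(Material.IRON_INGOT, 2); }
    private ItemStack tradeResult() { return new ItemStack(Material.DIAMOND, 1); }
}
```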
As you can see, this is simple to use. Here's an example of how to register these:
```java
@Override
public void onEnable() {
Inventories.init(this);
Inventories.registerInventory(new TestInventory(), "superInventory");
Inventories.registerVillagerTrade(new TestTrade());
}
```
and how to open (normal inv):
```java
public class Command implements CommandExecutor {
@Override
public boolean onCommand(CommandSender commandSender, org.bukkit.command.Command command, String s, String[] strings) {
if (command.getName().equalsIgnoreCase("test")) {
Player player = (Player) commandSender;
Inventories.openInventory("test", player);
}
return false;
}
}
```
Villager trades are keyed to the villager's UUID, so you don't have to open them manually; when you click the right villager, the trade opens by itself.
If you are using the @ConfigItem annotation, remember to store the ItemStack in the config as: `MainClass.getInstance().getConfig().set("test", itemstack)`
If you have any questions, do not hesitate to contact us.
| 32.651786 | 279 | 0.703856 | eng_Latn | 0.624342 |
4da243a65777b0e5d2e5503ac82dd81a0f059e16 | 708 | md | Markdown | README.md | intio/headless-wm | 359e84cd89ae548711dcd81226cefb41e4dde0b1 | [
"BSD-3-Clause",
"MIT"
] | 1 | 2020-12-15T20:49:59.000Z | 2020-12-15T20:49:59.000Z | README.md | intio/headless-wm | 359e84cd89ae548711dcd81226cefb41e4dde0b1 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | README.md | intio/headless-wm | 359e84cd89ae548711dcd81226cefb41e4dde0b1 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | # Headless WM
This is a purely headless, API-driven, X11 window manager. It is meant
for kiosks, public displays, and other situations where interaction
with the screen / machine is limited (for users and/or staff), and
some form of remote control over the display is necessary.
It has no keybindings, no window decorations, no workspaces; it's
almost completely bare-bones except for keeping a list of
active/managed clients, which can be manipulated through an HTTP API.
## Lineage
This is a fork of [rollcat's `dewm`](https://github.com/rollcat/dewm),
which is a fork of [Dave MacFarlane's `dewm`](https://github.com/driusan/dewm),
which includes bits from [taowm](https://github.com/nigeltao/taowm).
| 41.647059 | 79 | 0.769774 | eng_Latn | 0.997952 |
4da267117ba4bb597fc41a5da15acdf7973e6177 | 41 | md | Markdown | README.md | wecantalk/landingpage | 57d9282a6a169f113f269fbc0b7fccacc370512c | [
"MIT"
] | null | null | null | README.md | wecantalk/landingpage | 57d9282a6a169f113f269fbc0b7fccacc370512c | [
"MIT"
] | null | null | null | README.md | wecantalk/landingpage | 57d9282a6a169f113f269fbc0b7fccacc370512c | [
"MIT"
] | null | null | null | # landingpage
We Can Talk - Landing Page
| 13.666667 | 26 | 0.756098 | eng_Latn | 0.522043 |
4da27bbd54ff3a4ced1b2a19ca378ec7248fdafe | 1,850 | md | Markdown | README.md | comuns-rpgmaker/babel-plugin-archetype | 1aabf2a1e72d2f479a563b439e19dc0fda2dbdfb | [
"Zlib"
] | 3 | 2020-08-27T17:05:39.000Z | 2020-08-27T17:53:31.000Z | README.md | comuns-rpgmaker/babel-plugin-archetype | 1aabf2a1e72d2f479a563b439e19dc0fda2dbdfb | [
"Zlib"
] | null | null | null | README.md | comuns-rpgmaker/babel-plugin-archetype | 1aabf2a1e72d2f479a563b439e19dc0fda2dbdfb | [
"Zlib"
] | null | null | null | # RPG Maker MZ - Babel Plugin Archetype
This is a template repository for writing plugins for RPG Maker MZ using Babel.
The main purpose here is to set a basis from which other repositories can
derive and more easily be ready to start actual development.
## Getting Started
First of all, make sure you run `npm install` to install all the dependencies
for the project, such as [rollup.js](https://rollupjs.org/) and Babel itself.
Make sure to set `package.json` up correctly, changing the package name to that
of your plugin (this will be used to generate the output file) and adjusting the
values of the `version` and `description` fields (and, optionally, `keywords`).
Also make sure to add a property `testProjectDir` if you want to test your
plugin (the path can be relative).
To configure plugin parameters and the likes, change `plugin-metadata.yaml`.
Read more about it on [comuns-rpgmaker/plugin-metadata][plugin-metadata].
[plugin-metadata]: https://github.com/comuns-rpgmaker/plugin-metadata
Once you are done, `npm run build` will create a JS file for your plugin as
`dist/js/plugins/{pkg.name}.js`.
By default, the plugin is wrapped into an IIFE and everything you export from
`./src/main.js` is saved under a namespace to be configured in `package.json`.
**TL;DR**:
First:
- `npm install`
- Modify `package.json`
Then:
- Modify `header.js` and write modern JS code in `src`
- `npm run build`
- Your plugin shows up compiled in `dist/js/plugins`
- Repeat
## Guidelines
This repo's purpose is **exclusively** providing a basic structure for other
plugin repos.
It is **not** the place to create core functionality! (i.e. no application
code here!)
Changes to this repo **must not** demand that repos derived from it be changed, but it **should** be possible to update them to a more recent version of the
archetype fairly easily.
| 35.576923 | 156 | 0.756757 | eng_Latn | 0.993267 |
4da3007faba9287132ac019084909892a8f64e00 | 7,337 | md | Markdown | MinutemanPress/DesignDoc.md | DrGaud/PaulsWork | a794846a848cb0aae8a29131107ba2ed211bcfee | [
"MIT"
] | null | null | null | MinutemanPress/DesignDoc.md | DrGaud/PaulsWork | a794846a848cb0aae8a29131107ba2ed211bcfee | [
"MIT"
] | null | null | null | MinutemanPress/DesignDoc.md | DrGaud/PaulsWork | a794846a848cb0aae8a29131107ba2ed211bcfee | [
"MIT"
] | null | null | null | # Minute Man design document
This is where I will set out the build and design workflow for this site.
## Background
I've been given 4 static HTML pages with associated images. The pages are stacked as follows:
1) index.html
2) Printers-kilmarnock.html
3) formmail.html
### 1) index.html - Home Page and About Page
The 'Home'/logo navigation link is really a banner. This information can be broken out of the image container and set to be a header in its own right.
The link is also synonymous with the 'About Page'. Content-wise, there is a banner that is an image, and a two-column body where the text is displayed on the left and an image container on the right.
Main body:
Printers Kilmarnock, Ayrshire
"At Minuteman Press we offer a full range of products for the Kilmarnock, Irvine and Largs areas. We can print anything from a business card to a banner.
There is a talented in house designer so all your ideas can be translated into something that you would be proud to own and send to friends or business associates.
Our most popular products are business cards, letterheads, invitations for weddings and birthdays, orders of service for weddings and funerals, leaflets, menus, etc.
We can print posters from A4 size up to A0 on a variety of paper types. We can print architect's drawings up to A1 same day.
Banner Content:
- Notepads
- Graphic Design
- Booklets
- Forms
- Binding
- Lamination
- Invitations
- Brochures
- Flyers
- Tickets
- Posters & Banners
- Digital Printing
- Postcards
- Labels & Stickers
- Price Lists
- Greeting Cards
- Newsletters
- Promotional Items
- Menus & Tent Cards
- Invoices
- Presentation
- Rubber Stamps
- Stationery
- Business Cards
- Letterheads
- Door Hangers
- Calenders & Diaries
- Catalogues & Reports
+ Much More
Contact - Find Us at:
2-4 Old Mill Road, Kilmarnock, Ayrshire, KA1 3AN.
Google Maps embedded link, currently displayed as an iframe.
### 2) printers-kilmarnock.html - services
**Existing Text**
Printing Services & Products
Minuteman Press of Kilmarnock is an owner-owned and operated Business-to-Business full-service print, marketing and design company, located at 2 - 4 Old Mill Road in Kilmarnock.
With our state of the art digital printing equipment, professional graphic design, complete Bindery, Poster and Banner equipment, we can offer you the quality and Service that can only come from our years of experience. Our products include brochures, business cards, envelopes, business forms, flyers, invitations, labels, letterheads, newsletters, postcards and presentation folders. We also offer posters and banners custom made to fit our customer’s needs.
We can even help you plan your new marketing strategy with our Every Door Direct Mail program or the perfect complementary promotional products from our fantastic new range, as we now offer a full range of pens, calendars, mouse mats, mugs & magnets.
The companies, schools, municipalities, churches, organisations and individuals we service are located throughout East Ayrshire including Kilmarnock, Irvine, Cumnock, New Cumnock, Aukinleck, Darvel, Galston, Stewarton, Dunlop, Kilmaurs, Largs, Ardrossan, Saltcoats and Dalry among others.
Minuteman Press of Kilmarnock is actively involved in our local community. We support local schools, churches, non-profits and sports teams and are active members of the local Business Referral Network.
**Rewritten**
We use state-of-the-art digital printing equipment, professional graphic design, and complete bindery, poster and banner equipment, offering you true quality and service that can only come from our years of experience.
Our products include brochures, business cards, envelopes, business forms, flyers, invitations, labels, letterheads, newsletters, postcards and presentation folders. We also offer posters and banners custom made to fit our customer’s needs.
We can even help you plan your new marketing strategy with our Every Door Direct Mail program or the perfect complementary promotional products from our fantastic new range, as we now offer a full range of pens, calendars, mouse mats, mugs & magnets.
We work with everyone from companies, schools, municipalities, churches and organisations to individuals.
Located in Kilmarnock, we service everyone throughout Ayrshire, including Irvine, Cumnock, New Cumnock, Aukinleck, Darvel, Galston, Stewarton, Dunlop, Kilmaurs, Largs, Ardrossan, Saltcoats and Dalry among others.
Minuteman Press of Kilmarnock is actively involved in our local community. We support local schools, churches, non-profits and sports teams and are active members of the local Business Referral Network.
Banner Text: GUARANTEED FAST 48 – 72 HOUR DELIVERY
Banner Content : (from index.html)
### 3) formmail.html -
(This page is a contact form)
sector : household, business (these are checkboxes)
Area Of Interest :
-General Enquiry
-Business Cards
-Stationery
-Flyers
-Folded Leaflets
-Brochures & Booklets
-Presentation Folders & Inserts
-Calenders & Diaries
-Postcards & Mailing Services
-Promotional Products
-Labels & Stickers
-Catalogues & Reports
-Cards & Invitations
-Menus & Tent Cards
-Posters & Banners
-Plaques & Awards
-Binders & Tabs
-Raffle Tickets
-Custom Stamps
-Door Hangers
-Business Forms
-Other (Please State Below)
Text Inputs:
Full Name - required
Email - required
Telephone - required
Address
Town
County
Post Code
Message: textarea
submit button
-End of Form-
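A minimal HTML sketch of how I'd mark this form up (illustrative only; field names and classes are my own placeholders):
```html
<form class="contact-form" action="formmail" method="post">
  <fieldset class="contact-form__sector">
    <legend>Sector</legend>
    <label><input type="checkbox" name="sector" value="household"> Household</label>
    <label><input type="checkbox" name="sector" value="business"> Business</label>
  </fieldset>
  <label>Area of Interest
    <select name="interest">
      <option>General Enquiry</option>
      <!-- remaining options as listed above -->
    </select>
  </label>
  <label>Full Name <input type="text" name="fullname" required></label>
  <label>Email <input type="email" name="email" required></label>
  <label>Telephone <input type="tel" name="telephone" required></label>
  <label>Message <textarea name="message"></textarea></label>
  <button type="submit">Submit</button>
</form>
```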
Banner (from index.html)
Find Us section
---
# Revised Design Proposal
After looking through the HTML files I have been given and taking the content from the pages, I can condense this all down to a single webpage.
- Header -on canvas
- Nav -on canvas
- About-off canvas
- Products -off canvas
- Contact -on canvas
The header would be broken into its own `div` container. This way I can set actions on the telephone and mail links.
The nav bar would be set out using proper structuring. I will replicate the buttons as they were on the site for large desktops; on mobiles I am thinking of a small hamburger menu holding the secondary links.
All the page links would be internal, so the content would either be off-canvas for mobiles or scaled to fit as the viewport expands outwards.
I am going to use the banners (there are three of them) as backgrounds; they contain nothing of textual value. I will look to incorporate them into the three sections of the page.
I will make each its own section.
I will replicate the CTAs and banners using HTML and CSS; this would drastically increase the accessibility and readability of the files.
The design would be kept separate in a stylesheet. I will use the BEM methodology for the stylesheet. I don't see any complicated issues in the design.
At this point I would go with using Flexbox for the whole thing. By flexing into containers I can then get the elements to work with the canvas. A rough sketch follows.
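A rough sketch of the BEM-plus-Flexbox approach (class names are placeholders of my own):
```css
/* Illustrative sketch -- BEM blocks flexed into the page container */
.header { display: flex; align-items: center; justify-content: space-between; }
.header__logo { flex: 0 0 auto; }
.header__contact { display: flex; gap: 1rem; }
.section { display: flex; flex-direction: column; } /* mobile-first default */
@media (min-width: 48rem) {
  .section { flex-direction: row; } /* two-column body on larger viewports */
}
```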
The design would be led mobile-first. This way the larger viewports can accommodate the off-canvas elements as the viewport expands.
| 35.444444 | 461 | 0.756304 | eng_Latn | 0.997054 |
4da32541b34d33e02a2deaec92bca3e2a3ec8e86 | 1,840 | md | Markdown | README.md | ken107/jsonpatch-observe | 2d9fca5ed52b3d7bf27dd350097d355abcade734 | [
"MIT"
] | 5 | 2015-09-11T15:31:27.000Z | 2020-11-19T10:29:00.000Z | README.md | ken107/jsonpatch-observe | 2d9fca5ed52b3d7bf27dd350097d355abcade734 | [
"MIT"
] | 4 | 2020-04-11T19:20:09.000Z | 2021-05-10T23:51:58.000Z | README.md | ken107/jsonpatch-observe | 2d9fca5ed52b3d7bf27dd350097d355abcade734 | [
"MIT"
] | null | null | null | [](https://travis-ci.org/ken107/jsonpatch-observe)
Observe an object tree for changes and generate JSON Patches ([RFC 6902](https://tools.ietf.org/html/rfc6902)). Uses [Harmony Proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy), available in NodeJS version 6.4 and above.
### Usage
```javascript
const {observe} = require("jsonpatch-observe");
let observable = observe({});
observable.$subscribe(patch => console.log(patch));
observable.a = {b:1}; //prints {op:"add", path:"/a", value:{b:1}}
observable.a.b = 2; //prints {op:"add", path:"/a/b", value:2}
delete observable.a; //prints {op:"remove", path:"/a"}
```
Note that the properties of an Observable are also Observables. This is how it's able to detect when you do `observable.a.b = 2`.
### Unobserved Properties
You can exclude certain properties from `observe` as follows:
```javascript
require("jsonpatch-observe").config.excludeProperty = function(obj, prop) {
//return true to exclude the property
}
```
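For example, one possible convention is to skip "private" properties whose names start with an underscore:
```javascript
require("jsonpatch-observe").config.excludeProperty = (obj, prop) => prop.startsWith("_");
```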
### Splice Patch
The JSONPatch standard does not specify a "splice" operation. Without splice, Array changes are represented as a series of individual "add", "replace", and "remove" operations, which can be quite inefficient to apply.
This module supports generating the splice patch. Enable it as follows:
```javascript
require("jsonpatch-observe").config.enableSplice = true;
```
The splice patch has the following format:
```javascript
{
op: "splice",
path: "/myarr/3", //path to array index
remove: 2, //number of elements removed
add: ['a','b','c'] //elements added
}
```
I created a [fork](https://github.com/ken107/JSON-Patch) of Starcounter-Jack JSONPatch library capable of consuming this non-standard splice patch.
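If you need to consume the splice patch yourself, a minimal sketch (assuming a document shaped like the examples above) could look like this:
```javascript
// Illustrative sketch -- apply a single non-standard "splice" operation to a document.
function applySplice(doc, patch) {
  const tokens = patch.path.split("/").slice(1); // "/myarr/3" -> ["myarr", "3"]
  const index = Number(tokens.pop());            // the array index is the last token
  const target = tokens.reduce((obj, key) => obj[key], doc);
  target.splice(index, patch.remove, ...(patch.add || []));
}
```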
| 38.333333 | 265 | 0.728261 | eng_Latn | 0.876979 |
4da3fe59c65129ccf6e59dd129439f67c242281b | 150 | md | Markdown | vcxagency-base/README.md | kukgini/vcxagencynode | 9cf6b756e0622d962bd0d784f1cd0b5560b97753 | [
"Apache-2.0"
] | 22 | 2020-06-05T06:01:45.000Z | 2022-02-11T10:48:34.000Z | vcxagency-base/README.md | kukgini/vcxagencynode | 9cf6b756e0622d962bd0d784f1cd0b5560b97753 | [
"Apache-2.0"
] | 69 | 2020-06-09T14:17:22.000Z | 2022-03-23T06:14:02.000Z | vcxagency-base/README.md | kukgini/vcxagencynode | 9cf6b756e0622d962bd0d784f1cd0b5560b97753 | [
"Apache-2.0"
] | 8 | 2020-07-01T00:37:58.000Z | 2021-11-28T23:03:18.000Z | # ubuntu-indysdk-lite
Tools to build the `vcxagency-base` Docker image - the base image for the agency. It contains a precompiled libindy and
the pgsql wallet plugin.
| 37.5 | 106 | 0.793333 | eng_Latn | 0.939975 |
4da4d7324b89a3ddc9768afc5fe6f1c655a98636 | 676 | md | Markdown | README.md | Jinwen-XU/create-theorem | 8346abfdd751dfa27fd1dc2b5e4479a0c60bb653 | [
"LPPL-1.3c"
] | null | null | null | README.md | Jinwen-XU/create-theorem | 8346abfdd751dfa27fd1dc2b5e4479a0c60bb653 | [
"LPPL-1.3c"
] | null | null | null | README.md | Jinwen-XU/create-theorem | 8346abfdd751dfa27fd1dc2b5e4479a0c60bb653 | [
"LPPL-1.3c"
] | null | null | null | <!-- Copyright (C) 2021-2022 by Jinwen XU -->
# `create-theorem` - Initializing theorem-like environments with multilingual support
The package `create-theorem` provides the commands `\NameTheorem`, `\CreateTheorem` and `\SetTheorem` for naming, initializing and configuring theorem-like environments. All of these commands have a key-value based interface and are especially useful in multi-language documents, allowing the easy declaration of theorem-like environments that can automatically adapt to the language settings. A brief sketch is given below.
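A minimal sketch of the intended usage (the key name below is an illustrative assumption -- consult the package documentation for the exact key-value options):
```latex
% Illustrative sketch -- key names are assumptions; see the documentation.
\NameTheorem{theorem}{ name = Theorem }
\CreateTheorem{theorem}

\begin{theorem}
  Sample statement.
\end{theorem}
```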
*For more information, please refer to its documentation.*
# License
This work is released under the LaTeX Project Public License, v1.3c or later.
| 56.333333 | 391 | 0.797337 | eng_Latn | 0.996775 |
4da4e176177e3ede642abcbfc613dbd2c069ade4 | 12,827 | md | Markdown | CHANGELOG.md | matejvasek/sdk-javascript | 5ab81641aeaa7ef2d5dc23265277c65be144a881 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | matejvasek/sdk-javascript | 5ab81641aeaa7ef2d5dc23265277c65be144a881 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | matejvasek/sdk-javascript | 5ab81641aeaa7ef2d5dc23265277c65be144a881 | [
"Apache-2.0"
] | null | null | null | # Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
### [2.0.2](https://github.com/cloudevents/sdk-javascript/compare/v2.0.1...v2.0.2) (2020-06-08)
### Bug Fixes
* add correct types to improve TypeScript behavior ([#202](https://github.com/cloudevents/sdk-javascript/issues/202)) ([da365e0](https://github.com/cloudevents/sdk-javascript/commit/da365e09ebcb493f63e6962800230899f1b978ad))
* fix references to constants - remove .js extension ([#200](https://github.com/cloudevents/sdk-javascript/issues/200)) ([c757a2b](https://github.com/cloudevents/sdk-javascript/commit/c757a2bce1e5432c420db7a4ae4755058964cff7))
* use /lib in gitignore so src/lib is not ignored ([#199](https://github.com/cloudevents/sdk-javascript/issues/199)) ([fba3294](https://github.com/cloudevents/sdk-javascript/commit/fba3294ce04a30be0e5ab551a1fa01727dc8d1f8))
### Documentation
* **README:** fix example typo ([#208](https://github.com/cloudevents/sdk-javascript/issues/208)) ([9857eda](https://github.com/cloudevents/sdk-javascript/commit/9857eda5ef85e64898f7c742e1ffabb714236d6a)), closes [#173](https://github.com/cloudevents/sdk-javascript/issues/173)
### Miscellaneous
* ts formatter ([#210](https://github.com/cloudevents/sdk-javascript/issues/210)) ([90782a9](https://github.com/cloudevents/sdk-javascript/commit/90782a9e17dbd293d379f0ec134cf7fb06d0f36f))
### [2.0.1](https://github.com/cloudevents/sdk-javascript/compare/v2.0.0...v2.0.1) (2020-06-01)
### Bug Fixes
* initialize CloudEvent's extensions property ([#192](https://github.com/cloudevents/sdk-javascript/issues/192)) ([0710166](https://github.com/cloudevents/sdk-javascript/commit/0710166ce9397f402b835fae745923d11357d15e))
* introduce CloudEventV1 and CloudEventV03 interfaces ([#194](https://github.com/cloudevents/sdk-javascript/issues/194)) ([a5befbe](https://github.com/cloudevents/sdk-javascript/commit/a5befbe0cf11a53e39f3ea33990b037e2f165611))
### Miscellaneous
* CI workflow to only upload report if CODACY_PROJECT_TOKEN is set ([#193](https://github.com/cloudevents/sdk-javascript/issues/193)) ([aa320e7](https://github.com/cloudevents/sdk-javascript/commit/aa320e7fe4ce59284378cdd9420c0191d6a54b39))
* minor typos in guidance docs ([#196](https://github.com/cloudevents/sdk-javascript/issues/196)) ([15cd763](https://github.com/cloudevents/sdk-javascript/commit/15cd7638da2906c7be7b550cc07ce551c2f7d1f8))
## [2.0.0](https://github.com/cloudevents/sdk-javascript/compare/v1.0.0...v2.0.0) (2020-05-27)
### ⚠ BREAKING CHANGES
* change CloudEvent to use direct object notation and get/set properties (#172)
* refactor HTTP bindings and specifications (#165)
* expose a version agnostic event emitter (#141)
* **unmarshaller:** remove asynchronous 0.3 unmarshaller API (#126)
### Features
* add ValidationError type extending TypeError ([#151](https://github.com/cloudevents/sdk-javascript/issues/151)) ([09b0c76](https://github.com/cloudevents/sdk-javascript/commit/09b0c76826657222f6dc93fa377349a62e9b628f))
* expose a mode and version agnostic event receiver ([#120](https://github.com/cloudevents/sdk-javascript/issues/120)) ([54f242b](https://github.com/cloudevents/sdk-javascript/commit/54f242b79e03dbba382f5016a1279ddf392c354f))
* expose a version agnostic event emitter ([#141](https://github.com/cloudevents/sdk-javascript/issues/141)) ([250a0a1](https://github.com/cloudevents/sdk-javascript/commit/250a0a144c5fbeac237e04dcd3f54e05dc30fc70))
* **unmarshaller:** remove asynchronous 0.3 unmarshaller API ([#126](https://github.com/cloudevents/sdk-javascript/issues/126)) ([63ae1ad](https://github.com/cloudevents/sdk-javascript/commit/63ae1ad527f0b9652222cbc7e51f7a895410a4b4))
* formatter.js es6 ([#87](https://github.com/cloudevents/sdk-javascript/issues/87)) ([c36f194](https://github.com/cloudevents/sdk-javascript/commit/c36f1949d0176574ace24fee87ce850f01f1e2f5))
* use CloudEvents not cloudevents everywhere ([#101](https://github.com/cloudevents/sdk-javascript/issues/101)) ([05ecbde](https://github.com/cloudevents/sdk-javascript/commit/05ecbdea4f594a6012ba7717f3311d0c20c2985f))
### Bug Fixes
* ensure binary events can handle no content-type header ([#134](https://github.com/cloudevents/sdk-javascript/issues/134)) ([72a87df](https://github.com/cloudevents/sdk-javascript/commit/72a87dfb2d05411f9f58b417bbc7db4233dcbbbf))
* Fix Express example installation ([#77](https://github.com/cloudevents/sdk-javascript/issues/77)) ([bb8e0f9](https://github.com/cloudevents/sdk-javascript/commit/bb8e0f9e0ca7aef00103d03f6071a648a9fab76d))
* make application/json the default content type in binary mode ([#118](https://github.com/cloudevents/sdk-javascript/issues/118)) ([d9e9ae6](https://github.com/cloudevents/sdk-javascript/commit/d9e9ae6bdcbaf80dc35d486765c9189a176be650))
* misspelled word ([#113](https://github.com/cloudevents/sdk-javascript/issues/113)) ([cd6a3ee](https://github.com/cloudevents/sdk-javascript/commit/cd6a3eec7dca4bac1e2ba9fbba9949799e6c97d8))
* misspelled word ([#115](https://github.com/cloudevents/sdk-javascript/issues/115)) ([53524ac](https://github.com/cloudevents/sdk-javascript/commit/53524acb0e18598b1376fa4485cdd2a117e892fd))
* protects the consts from being changed in other parts of the code. ([fbcbcec](https://github.com/cloudevents/sdk-javascript/commit/fbcbcec4e885618367c5cb25a8e030549dd829df))
* remove d.ts types. Fixes [#83](https://github.com/cloudevents/sdk-javascript/issues/83) ([#84](https://github.com/cloudevents/sdk-javascript/issues/84)) ([6c223e2](https://github.com/cloudevents/sdk-javascript/commit/6c223e2c34769fc0b2f2dbc58a398eb85442af92))
* support mTLS in 1.0 Binary and Structured emitters ([3a063d7](https://github.com/cloudevents/sdk-javascript/commit/3a063d72451d1156df8fe9c3499ef1e81e905060))
* throw "no cloud event detected" if one can't be read ([#139](https://github.com/cloudevents/sdk-javascript/issues/139)) ([ef7550d](https://github.com/cloudevents/sdk-javascript/commit/ef7550d60d248e1720172c0a18ae5dc21e8da5a1))
### Tests
* remove uuid require in spec_03_tests.js ([#145](https://github.com/cloudevents/sdk-javascript/issues/145)) ([c56c203](https://github.com/cloudevents/sdk-javascript/commit/c56c203d6af7b9bc1be09a82d33fdbe7aea7f331))
* use constants in spec_03_tests.js ([#144](https://github.com/cloudevents/sdk-javascript/issues/144)) ([2882aff](https://github.com/cloudevents/sdk-javascript/commit/2882affb382366654b3c7749ed274b9b74f84723))
* use header constants in receiver tests ([#131](https://github.com/cloudevents/sdk-javascript/issues/131)) ([60bf05c](https://github.com/cloudevents/sdk-javascript/commit/60bf05c8f2d4275b5432ce544982077d22b4b8ff))
* use header constants in unmarshaller tests ([#60](https://github.com/cloudevents/sdk-javascript/issues/60)) ([e087805](https://github.com/cloudevents/sdk-javascript/commit/e0878055a207154eaf040d00f778ad3854a5d7d2))
### lib
* change CloudEvent to use direct object notation and get/set properties ([#172](https://github.com/cloudevents/sdk-javascript/issues/172)) ([abc114b](https://github.com/cloudevents/sdk-javascript/commit/abc114b24e448a33d2a4f583cdc7ae191940bdca))
* refactor HTTP bindings and specifications ([#165](https://github.com/cloudevents/sdk-javascript/issues/165)) ([6f0b5ea](https://github.com/cloudevents/sdk-javascript/commit/6f0b5ea5f11ae8a451df2c46208bbd1e08ff7227))
### Documentation
* add instructions and details to contributors guide ([#105](https://github.com/cloudevents/sdk-javascript/issues/105)) ([fd99cb1](https://github.com/cloudevents/sdk-javascript/commit/fd99cb1e598bc27f0ec41755745942b0487f6905))
* add JSDocs for top level API objects ([#140](https://github.com/cloudevents/sdk-javascript/issues/140)) ([b283583](https://github.com/cloudevents/sdk-javascript/commit/b283583c0c07e6da40fac26a2b8c7dac894468dc))
* add maintainer guidelines for landing PRs ([#177](https://github.com/cloudevents/sdk-javascript/issues/177)) ([fdc79ae](https://github.com/cloudevents/sdk-javascript/commit/fdc79ae12083f989f80ec548669fc2070c69bb83))
* organize README badges and remove TS example ([#112](https://github.com/cloudevents/sdk-javascript/issues/112)) ([07323e0](https://github.com/cloudevents/sdk-javascript/commit/07323e078fdd60814ed61a65d6756e23cf523400))
* remove 0.1, 0.2 spec support from README ([56036b0](https://github.com/cloudevents/sdk-javascript/commit/56036b09ddfeb00d19678e118ea5f742b88cdfc7))
* remove repo structure docs ([#111](https://github.com/cloudevents/sdk-javascript/issues/111)) ([223a7c6](https://github.com/cloudevents/sdk-javascript/commit/223a7c6f03732fa4dc91c0af78adfcc4c026e7c8))
* update README and examples with new API ([#138](https://github.com/cloudevents/sdk-javascript/issues/138)) ([b866edd](https://github.com/cloudevents/sdk-javascript/commit/b866edddd9593b5456981f1f5613225b8335ec05))
### Miscellaneous
* add action to detect and close stale issues ([5a6cde5](https://github.com/cloudevents/sdk-javascript/commit/5a6cde5695049403c7f614c42067511908b54ffc))
* add coverage GitHub action ([#185](https://github.com/cloudevents/sdk-javascript/issues/185)) ([349fe8e](https://github.com/cloudevents/sdk-javascript/commit/349fe8e9bd3da711ab5c8221932d1bc5f551a1da))
* add eslint configuration and npm script ([3f238a0](https://github.com/cloudevents/sdk-javascript/commit/3f238a01248aba54b0208aaaa54b66cf2f54a749))
* add GitHub action for CI on master and prs ([#181](https://github.com/cloudevents/sdk-javascript/issues/181)) ([0fe57d1](https://github.com/cloudevents/sdk-javascript/commit/0fe57d123ac01458a6fa50752caf0071ed2571f6))
* add npm fix command ([#74](https://github.com/cloudevents/sdk-javascript/issues/74)) ([005d532](https://github.com/cloudevents/sdk-javascript/commit/005d5327e49cd271fe84382d18df7019dc3f73ad))
* add standard-version and release script ([f47bca4](https://github.com/cloudevents/sdk-javascript/commit/f47bca4ff0ca93dc83a927bb9ee4818e317a5e75))
* adds files section in package.json ([#147](https://github.com/cloudevents/sdk-javascript/issues/147)) ([f8a62b2](https://github.com/cloudevents/sdk-javascript/commit/f8a62b2843b12fe894201670770a00c034ab701d))
* es6 base64 parser ([#75](https://github.com/cloudevents/sdk-javascript/issues/75)) ([d042ef1](https://github.com/cloudevents/sdk-javascript/commit/d042ef1dbb555e2500036716d4170661dc48fe3e))
* es6 parser ([#98](https://github.com/cloudevents/sdk-javascript/issues/98)) ([cd6decd](https://github.com/cloudevents/sdk-javascript/commit/cd6decd74904888557bfc53045c87efe630fb88c))
* es6 unmarshaller ([#108](https://github.com/cloudevents/sdk-javascript/issues/108)) ([79ec3ef](https://github.com/cloudevents/sdk-javascript/commit/79ec3ef126a46afbd3217dfdb969b00f20e38f56))
* fix CI code coverage publishing ([#78](https://github.com/cloudevents/sdk-javascript/issues/78)) ([8fb0ddf](https://github.com/cloudevents/sdk-javascript/commit/8fb0ddf6eb0dd05b0728444f404e1014a9348599))
* Modify CI to also build backport branch(es) ([#122](https://github.com/cloudevents/sdk-javascript/issues/122)) ([c1fda94](https://github.com/cloudevents/sdk-javascript/commit/c1fda94d25f84db097e75177b166c3f18f707dda))
* remove note with bad link and non SDK docs ([#109](https://github.com/cloudevents/sdk-javascript/issues/109)) ([f30c814](https://github.com/cloudevents/sdk-javascript/commit/f30c814a09896d31f821ebe5eb5ba95cd264d699))
* update eslint rules to disallow var usage ([e83db29](https://github.com/cloudevents/sdk-javascript/commit/e83db297ae5761248d0c34a9d440e6a4285a645d))
* Update uuid dependency ([42246ce](https://github.com/cloudevents/sdk-javascript/commit/42246ce36b9898eea1d5daa5f43ddb13ee6b12d0))
* use es6 for cloudevents.js ([#73](https://github.com/cloudevents/sdk-javascript/issues/73)) ([12ac181](https://github.com/cloudevents/sdk-javascript/commit/12ac1813005d1c88e86c6fc9de675516dd3e290c))
## [1.0.0]
### Added
- Support for [Spec v1.0](https://github.com/cloudevents/spec/tree/v1.0)
- Typescript types for Spec v1.0: [see an example](./examples/typescript-ex)
### Removed
- Unmarshaller docs from README, moving them to [OLDOCS.md](./OLDOCS.md)
## [0.3.2]
### Fixed
- Fix the special `data` handling: issue
[#33](https://github.com/cloudevents/sdk-javascript/issues/33)
## [0.3.1]
### Fixed
- Axios version to `0.18.1` due the CVE-2019-10742
- Fix the `subject` attribute unmarshal error: issue
[#32](https://github.com/cloudevents/sdk-javascript/issues/32)
[Unreleased]: https://github.com/cloudevents/sdk-javascript/compare/v1.0.0...HEAD
[1.0.0]: https://github.com/cloudevents/sdk-javascript/compare/v0.3.2...v1.0.0
[0.3.2]: https://github.com/cloudevents/sdk-javascript/compare/v0.3.1...v0.3.2
[0.3.1]: https://github.com/cloudevents/sdk-javascript/compare/v0.3.0...v0.3.1
| 88.462069 | 277 | 0.792547 | yue_Hant | 0.190692 |
4da615be4810cd34927695253f031ae5027de1a9 | 40 | md | Markdown | README.md | ayaneshsarkar/codeigniter_project | 87c025ad61fd391868a00078ec1c4f08cac73b58 | [
"MIT"
] | null | null | null | README.md | ayaneshsarkar/codeigniter_project | 87c025ad61fd391868a00078ec1c4f08cac73b58 | [
"MIT"
] | null | null | null | README.md | ayaneshsarkar/codeigniter_project | 87c025ad61fd391868a00078ec1c4f08cac73b58 | [
"MIT"
] | null | null | null | ### This is my First CodeIgniter Project | 40 | 40 | 0.775 | eng_Latn | 0.999729 |
4da6374595eda7e0881f11a88381a9f45d19e538 | 6,236 | md | Markdown | data/readme_files/gforcada.haproxy_log_analysis.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/gforcada.haproxy_log_analysis.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/gforcada.haproxy_log_analysis.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | .. -*- coding: utf-8 -*-
HAProxy log analyzer
====================
This Python package is a `HAProxy`_ log parser.
It analyzes HAProxy log files in multiple ways (see commands section below).
.. note::
   Currently only the `HTTP log format`_ is supported.
Tests and coverage
------------------
No project is trustworthy if it does not have tests and decent coverage!
.. image:: https://travis-ci.org/gforcada/haproxy_log_analysis.svg?branch=master
:target: https://travis-ci.org/gforcada/haproxy_log_analysis
:alt: Tests
.. image:: https://coveralls.io/repos/gforcada/haproxy_log_analysis/badge.svg?branch=master
:target: https://coveralls.io/github/gforcada/haproxy_log_analysis
:alt: Coverage
.. image:: https://img.shields.io/pypi/dm/haproxy_log_analysis.svg
:target: https://pypi.python.org/pypi/haproxy_log_analysis/
:alt: Downloads
.. image:: https://img.shields.io/pypi/v/haproxy_log_analysis.svg
:target: https://pypi.python.org/pypi/haproxy_log_analysis/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/status/haproxy_log_analysis.svg
:target: https://pypi.python.org/pypi/haproxy_log_analysis/
:alt: Egg Status
.. image:: https://img.shields.io/pypi/l/haproxy_log_analysis.svg
:target: https://pypi.python.org/pypi/haproxy_log_analysis/
:alt: License
Documentation
-------------
See the `documentation and API`_ at ReadTheDocs_.
Command-line interface
----------------------
The current ``--help`` looks like this::
usage: haproxy_log_analysis [-h] [-l LOG] [-s START] [-d DELTA] [-c COMMAND]
[-f FILTER] [-n] [--list-commands]
[--list-filters] [--json]
Analyze HAProxy log files and outputs statistics about it
optional arguments:
-h, --help show this help message and exit
-l LOG, --log LOG HAProxy log file to analyze
-s START, --start START
Process log entries starting at this time, in HAProxy
date format (e.g. 11/Dec/2013 or
11/Dec/2013:19:31:41). At least provide the
day/month/year. Values not specified will use their
base value (e.g. 00 for hour). Use in conjunction with
-d to limit the number of entries to process.
-d DELTA, --delta DELTA
Limit the number of entries to process. Express the
time delta as a number and a time unit, e.g.: 1s, 10m,
3h or 4d (for 1 second, 10 minutes, 3 hours or 4
days). Use in conjunction with -s to only analyze
certain time delta. If no start time is given, the
time on the first line will be used instead.
-c COMMAND, --command COMMAND
List of commands, comma separated, to run on the log
file. See --list-commands to get a full list of them.
-f FILTER, --filter FILTER
List of filters to apply on the log file. Passed as
comma separated and parameters within square brackets,
e.g ip[192.168.1.1],ssl,path[/some/path]. See --list-
filters to get a full list of them.
-n, --negate-filter Make filters passed with -f work the other way around,
i.e. if the ``ssl`` filter is passed instead of
showing only ssl requests it will show non-ssl
traffic. If the ``ip`` filter is used, then all but
that ip passed to the filter will be used.
--list-commands Lists all commands available.
--list-filters Lists all filters available.
--json Output results in json.
--invalid Print the lines that could not be parsed. Be aware
that mixing it with the print command will mix their
output.
Commands
--------
Commands are small, purpose-specific programs in themselves that report specific statistics about the log file being analyzed.
See them all with ``--list-commands`` or online at https://haproxy-log-analyzer.readthedocs.io/modules.html#module-haproxy.commands.
- ``average_response_time``
- ``average_waiting_time``
- ``connection_type``
- ``counter``
- ``http_methods``
- ``ip_counter``
- ``print``
- ``queue_peaks``
- ``request_path_counter``
- ``requests_per_hour``
- ``requests_per_minute``
- ``server_load``
- ``slow_requests``
- ``slow_requests_counter``
- ``status_codes_counter``
- ``top_ips``
- ``top_request_paths``
Filters
-------
Filters, contrary to commands,
are a way to reduce the amount of log lines to be processed.
.. note::
   The ``-n`` command line argument allows you to reverse the filters' output.
   This helps when looking for specific traces, like a certain IP, a path...
See them all with ``--list-filters`` or online at https://haproxy-log-analyzer.readthedocs.io/modules.html#module-haproxy.filters.
- ``backend``
- ``frontend``
- ``http_method``
- ``ip``
- ``ip_range``
- ``path``
- ``response_size``
- ``server``
- ``slow_requests``
- ``ssl``
- ``status_code``
- ``status_code_family``
- ``wait_on_queues``
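A typical invocation combines a log file, a few commands, and a filter. An illustrative run (adjust the log path and the chosen commands/filters to your setup)::

    haproxy_log_analysis -l /var/log/haproxy.log -c counter,top_ips -f status_code_family[5]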
Installation
------------
After installation you will have a console script `haproxy_log_analysis`::
$ pip install haproxy_log_analysis
TODO
----
- add more commands: *(help appreciated)*
- reports on servers connection time
- reports on termination state
- reports around connections (active, frontend, backend, server)
- *your ideas here*
- think of a way to show the commands output in a meaningful way
- be able to specify an output format. For any command that makes sense (slow
requests for example) output the given fields for each log line (i.e.
acceptance date, path, downstream server, load at that time...)
- *your ideas*
.. _HAProxy: http://haproxy.1wt.eu/
.. _HTTP log format: http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#8.2.3
.. _documentation and API: https://haproxy-log-analyzer.readthedocs.io/
.. _ReadTheDocs: http://readthedocs.org
| 37.119048 | 132 | 0.630212 | eng_Latn | 0.889593 |
4da66dc2c2846ca04cecf1ebedc323821256e3a6 | 311 | md | Markdown | docs/_devices/MHO-C303.md | Home-Is-Where-You-Hang-Your-Hack/ble_monitor | 0c41cdb1356c00c3065206df65a33654873cbbf4 | [
"MIT"
] | null | null | null | docs/_devices/MHO-C303.md | Home-Is-Where-You-Hang-Your-Hack/ble_monitor | 0c41cdb1356c00c3065206df65a33654873cbbf4 | [
"MIT"
] | null | null | null | docs/_devices/MHO-C303.md | Home-Is-Where-You-Hang-Your-Hack/ble_monitor | 0c41cdb1356c00c3065206df65a33654873cbbf4 | [
"MIT"
] | null | null | null | ---
manufacturer: Xiaomi/MiaoMiaoCe
name: Alarm clock
model: MHO-C303
image: MHO-C303.png
physical_description: Rectangular body, E-Ink
broadcasted_properties:
- temperature
- humidity
- battery
broadcasted_property_notes:
broadcast_rate: ~20/min.
active_scan:
encryption_key:
custom_firmware:
notes:
---
| 17.277778 | 45 | 0.790997 | eng_Latn | 0.557952 |
4da78af974101321cf0a97867b1aaa297caf4e18 | 18,607 | md | Markdown | articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md | Microsoft/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 7 | 2017-08-28T08:02:11.000Z | 2021-05-05T07:47:55.000Z | articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 476 | 2017-10-15T08:20:18.000Z | 2021-04-16T05:20:11.000Z | articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md | MicrosoftDocs/azure-docs.sv-se | a43cb26da920952026f5e9c8720f3356a84de75b | [
"CC-BY-4.0",
"MIT"
] | 39 | 2017-08-03T09:46:48.000Z | 2021-11-05T11:41:27.000Z | ---
title: GlusterFS on Azure VMs on RHEL for SAP NetWeaver | Microsoft Docs
description: GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP NetWeaver
services: virtual-machines-windows,virtual-network,storage
documentationcenter: saponazure
author: rdeltcheva
manager: juergent
editor: ''
tags: azure-resource-manager
keywords: ''
ms.service: virtual-machines-sap
ms.topic: article
ms.tgt_pltfrm: vm-windows
ms.workload: infrastructure-services
ms.date: 08/16/2018
ms.author: radeltch
ms.openlocfilehash: 3ebc125fe6802ffbe4192c0250ec9adc2ceceb0b
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/29/2021
ms.locfileid: "101668732"
---
# <a name="glusterfs-on-azure-vms-on-red-hat-enterprise-linux-for-sap-netweaver"></a>GlusterFS på virtuella Azure-datorer på Red Hat Enterprise Linux för SAP NetWeaver
[dbms-guide]:dbms-guide.md
[deployment-guide]:deployment-guide.md
[planning-guide]:planning-guide.md
[2002167]:https://launchpad.support.sap.com/#/notes/2002167
[2009879]:https://launchpad.support.sap.com/#/notes/2009879
[1928533]:https://launchpad.support.sap.com/#/notes/1928533
[2015553]:https://launchpad.support.sap.com/#/notes/2015553
[2178632]:https://launchpad.support.sap.com/#/notes/2178632
[2191498]:https://launchpad.support.sap.com/#/notes/2191498
[2243692]:https://launchpad.support.sap.com/#/notes/2243692
[1999351]:https://launchpad.support.sap.com/#/notes/1999351
[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
[template-file-server]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fsap-file-server-md%2Fazuredeploy.json
[sap-hana-ha]:sap-hana-high-availability-rhel.md
This article describes how to deploy the virtual machines, configure the virtual machines, and install a GlusterFS cluster that can be used to store the shared data of a highly available SAP system.
This guide describes how to set up GlusterFS that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **glust**.
Read the following SAP Notes and papers first
* SAP Note [1928533], which has:
* List of Azure VM sizes that are supported for the deployment of SAP software
* Important capacity information for Azure VM sizes
* Supported SAP software, and operating system (OS) and database combinations
* Required SAP kernel version for Windows and Linux on Microsoft Azure
* SAP Note [2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
* SAP Note [2002167] has recommended OS settings for Red Hat Enterprise Linux
* SAP Note [2009879] has SAP HANA Guidelines for Red Hat Enterprise Linux
* SAP Note [2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
* SAP Note [2191498] has the required SAP Host Agent version for Linux in Azure.
* SAP Note [2243692] has information about SAP licensing on Linux in Azure.
* SAP Note [1999351] has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
* [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide]
* [Azure Virtual Machines deployment for SAP on Linux (this article)][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
* [Product Documentation for Red Hat Gluster Storage](https://access.redhat.com/documentation/red_hat_gluster_storage/)
* General RHEL documentation
* [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
* [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
* [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
* Azure specific RHEL documentation:
* [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
* [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491)
## <a name="overview"></a>Overview
SAP NetWeaver requires shared storage to achieve high availability. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems.

## <a name="set-up-glusterfs"></a>Konfigurera GlusterFS
Du kan antingen använda en Azure-mall från GitHub för att distribuera alla nödvändiga Azure-resurser, inklusive virtuella datorer, tillgänglighets uppsättning och nätverks gränssnitt, eller så kan du distribuera resurserna manuellt.
### <a name="deploy-linux-via-azure-template"></a>Distribuera Linux via Azure-mall
Azure Marketplace innehåller en avbildning för Red Hat Enterprise Linux som du kan använda för att distribuera nya virtuella datorer.
Du kan använda en av snabb starts mallarna på GitHub för att distribuera alla nödvändiga resurser. Mallen distribuerar de virtuella datorerna, tillgänglighets uppsättningarna osv. Följ de här stegen för att distribuera mallen:
1. Öppna [fil Server mal len SAP][template-file-server] i Azure Portal
1. Ange följande parametrar
1. Resource prefix
Ange prefixet som du vill använda. Värdet används som prefix för de resurser som distribueras.
2. Antal SAP-system ange hur många SAP-system som ska använda den här fil servern. Detta kommer att distribuera antalet diskar som krävs osv.
3. OS-typ
Välj en av Linux-distributionerna. I det här exemplet väljer du RHEL 7
4. Administratörens användar namn, administratörs lösen ord eller SSH-nyckel
En ny användare skapas som kan användas för att logga in på datorn.
5. Undernät-ID
Om du vill distribuera den virtuella datorn till ett befintligt VNet där du har angett ett undernät som har definierats för den virtuella datorn ska du namnge ID: t för det aktuella under nätet. ID: t ser vanligt vis ut som/Subscriptions/**< PRENUMERATIONS > -ID**/ResourceGroups/**< resurs grupp namn >**/providers/Microsoft.Network/virtualNetworks/**< virtuellt nätverks namn >**/subnets/**< under näts namn >**
### <a name="deploy-linux-manually-via-azure-portal"></a>Distribuera Linux manuellt via Azure Portal
Du måste först skapa de virtuella datorerna för det här klustret. Därefter skapar du en belastningsutjämnare och använder de virtuella datorerna i backend-poolerna. Vi rekommenderar [standard Load Balancer](../../../load-balancer/load-balancer-overview.md).
1. Skapa en resursgrupp
1. Skapa ett virtuellt nätverk
1. Skapa en tillgänglighets uppsättning
Ange Max uppdaterings domän
1. Skapa virtuell dator 1
Använd minst RHEL 7, i det här exemplet Red Hat Enterprise Linux 7,4-avbildningen <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
Välj den tillgänglighets uppsättning som skapades tidigare
1. Skapa virtuell dator 2
Använd minst RHEL 7, i det här exemplet Red Hat Enterprise Linux 7,4-avbildningen <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
Välj den tillgänglighets uppsättning som skapades tidigare
1. Lägg till en datadisk för varje SAP-system till båda virtuella datorerna.
### <a name="configure-glusterfs"></a>Konfigurera GlusterFS
Följande objekt har prefixet **[A]** -tillämpligt för alla noder, **[1]** , som endast gäller nod 1, **[2]** – endast tillämpligt på nod 2, **[3]** – gäller endast nod 3.
1. **[A]** namn matchning för värdnamn
Du kan antingen använda en DNS-server eller ändra/etc/hosts på alla noder. Det här exemplet visar hur du använder/etc/hosts-filen.
Ersätt IP-adress och värdnamn i följande kommandon
<pre><code>sudo vi /etc/hosts
</code></pre>
Insert the following lines in /etc/hosts. Change the IP address and hostname to match your environment
<pre><code># IP addresses of the Gluster nodes
<b>10.0.0.40 glust-0</b>
<b>10.0.0.41 glust-1</b>
<b>10.0.0.42 glust-2</b>
</code></pre>
1. **[A]** Register
Register your virtual machines and attach them to a pool that contains repositories for RHEL 7 and GlusterFS
<pre><code>sudo subscription-manager register
sudo subscription-manager attach --pool=<pool id>
</code></pre>
1. **[A]** Enable GlusterFS repositories
In order to install the required packages, enable the following repositories.
<pre><code>sudo subscription-manager repos --disable "*"
sudo subscription-manager repos --enable=rhel-7-server-rpms
sudo subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
</code></pre>
1. **[A]** Install GlusterFS packages
Install these packages on all GlusterFS nodes
<pre><code>sudo yum -y install redhat-storage-server
</code></pre>
Reboot the nodes after the installation.
1. **[A]** Modify Firewall
Add firewall rules to allow client traffic to the GlusterFS nodes.
<pre><code># list the available zones
firewall-cmd --get-active-zones
sudo firewall-cmd --zone=public --add-service=glusterfs --permanent
sudo firewall-cmd --zone=public --add-service=glusterfs
</code></pre>
1. **[A]** Enable and start GlusterFS service
Start the GlusterFS service on all nodes.
<pre><code>sudo systemctl start glusterd
sudo systemctl enable glusterd
</code></pre>
1. **[1]** Create GlusterFS
Run the following commands to create the GlusterFS cluster
<pre><code>sudo gluster peer probe glust-1
sudo gluster peer probe glust-2
# Check gluster peer status
sudo gluster peer status
# Number of Peers: 2
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Accepted peer request (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Accepted peer request (Connected)
</code></pre>
1. **[2]** Test peer status
Test the peer status on the second node
<pre><code>sudo gluster peer status
# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-2
# Uuid: 9e340385-12fe-495e-ab0f-4f851b588cba
# State: Peer in Cluster (Connected)
</code></pre>
1. **[3]** Test peer status
Test the peer status on the third node
<pre><code>sudo gluster peer status
# Number of Peers: 2
#
# Hostname: glust-0
# Uuid: 6bc6927b-7ee2-461b-ad04-da123124d6bd
# State: Peer in Cluster (Connected)
#
# Hostname: glust-1
# Uuid: 10d43840-fee4-4120-bf5a-de9c393964cd
# State: Peer in Cluster (Connected)
</code></pre>
1. **[A]** Create LVM
In this example, GlusterFS is used for two SAP systems, NW1 and NW2. Use the following commands to create LVM configurations for these SAP systems.
Use these commands for NW1
<pre><code>sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun0
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-<b>NW1</b> /dev/disk/azure/scsi1/lun0
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-<b>NW1</b>/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-<b>NW1</b>/trans
sudo lvcreate -l 10%FREE -n rhgs-<b>NW1</b>/sys
sudo lvcreate -l 50%FREE -n rhgs-<b>NW1</b>/ascs
sudo lvcreate -l 100%FREE -n rhgs-<b>NW1</b>/aers
sudo lvscan
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW1</b>/sapmnt
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW1</b>/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW1</b>/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW1</b>/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW1</b>/aers
sudo mkdir -p /rhs/<b>NW1</b>/sapmnt
sudo mkdir -p /rhs/<b>NW1</b>/trans
sudo mkdir -p /rhs/<b>NW1</b>/sys
sudo mkdir -p /rhs/<b>NW1</b>/ascs
sudo mkdir -p /rhs/<b>NW1</b>/aers
sudo chattr +i /rhs/<b>NW1</b>/sapmnt
sudo chattr +i /rhs/<b>NW1</b>/trans
sudo chattr +i /rhs/<b>NW1</b>/sys
sudo chattr +i /rhs/<b>NW1</b>/ascs
sudo chattr +i /rhs/<b>NW1</b>/aers
echo -e "/dev/rhgs-<b>NW1</b>/sapmnt\t/rhs/<b>NW1</b>/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW1</b>/trans\t/rhs/<b>NW1</b>/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW1</b>/sys\t/rhs/<b>NW1</b>/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW1</b>/ascs\t/rhs/<b>NW1</b>/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW1</b>/aers\t/rhs/<b>NW1</b>/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
sudo mount -a
</code></pre>
Use these commands for NW2
<pre><code>sudo pvcreate --dataalignment 1024K /dev/disk/azure/scsi1/lun1
sudo pvscan
sudo vgcreate --physicalextentsize 256K rhgs-<b>NW2</b> /dev/disk/azure/scsi1/lun1
sudo vgscan
sudo lvcreate -l 50%FREE -n rhgs-<b>NW2</b>/sapmnt
sudo lvcreate -l 20%FREE -n rhgs-<b>NW2</b>/trans
sudo lvcreate -l 10%FREE -n rhgs-<b>NW2</b>/sys
sudo lvcreate -l 50%FREE -n rhgs-<b>NW2</b>/ascs
sudo lvcreate -l 100%FREE -n rhgs-<b>NW2</b>/aers
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW2</b>/sapmnt
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW2</b>/trans
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW2</b>/sys
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW2</b>/ascs
sudo mkfs.xfs -f -K -i size=512 -n size=8192 /dev/rhgs-<b>NW2</b>/aers
sudo mkdir -p /rhs/<b>NW2</b>/sapmnt
sudo mkdir -p /rhs/<b>NW2</b>/trans
sudo mkdir -p /rhs/<b>NW2</b>/sys
sudo mkdir -p /rhs/<b>NW2</b>/ascs
sudo mkdir -p /rhs/<b>NW2</b>/aers
sudo chattr +i /rhs/<b>NW2</b>/sapmnt
sudo chattr +i /rhs/<b>NW2</b>/trans
sudo chattr +i /rhs/<b>NW2</b>/sys
sudo chattr +i /rhs/<b>NW2</b>/ascs
sudo chattr +i /rhs/<b>NW2</b>/aers
sudo lvscan
echo -e "/dev/rhgs-<b>NW2</b>/sapmnt\t/rhs/<b>NW2</b>/sapmnt\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW2</b>/trans\t/rhs/<b>NW2</b>/trans\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW2</b>/sys\t/rhs/<b>NW2</b>/sys\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW2</b>/ascs\t/rhs/<b>NW2</b>/ascs\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
echo -e "/dev/rhgs-<b>NW2</b>/aers\t/rhs/<b>NW2</b>/aers\txfs\tdefaults,inode64,nobarrier,noatime,nouuid 0 2" | sudo tee -a /etc/fstab
sudo mount -a
</code></pre>
1. **[1]** skapa den distribuerade volymen
Använd följande kommandon för att skapa GlusterFS-volymen för NW1 och starta den.
<pre><code>sudo gluster vol create <b>NW1</b>-sapmnt replica 3 glust-0:/rhs/<b>NW1</b>/sapmnt glust-1:/rhs/<b>NW1</b>/sapmnt glust-2:/rhs/<b>NW1</b>/sapmnt force
sudo gluster vol create <b>NW1</b>-trans replica 3 glust-0:/rhs/<b>NW1</b>/trans glust-1:/rhs/<b>NW1</b>/trans glust-2:/rhs/<b>NW1</b>/trans force
sudo gluster vol create <b>NW1</b>-sys replica 3 glust-0:/rhs/<b>NW1</b>/sys glust-1:/rhs/<b>NW1</b>/sys glust-2:/rhs/<b>NW1</b>/sys force
sudo gluster vol create <b>NW1</b>-ascs replica 3 glust-0:/rhs/<b>NW1</b>/ascs glust-1:/rhs/<b>NW1</b>/ascs glust-2:/rhs/<b>NW1</b>/ascs force
sudo gluster vol create <b>NW1</b>-aers replica 3 glust-0:/rhs/<b>NW1</b>/aers glust-1:/rhs/<b>NW1</b>/aers glust-2:/rhs/<b>NW1</b>/aers force
sudo gluster volume start <b>NW1</b>-sapmnt
sudo gluster volume start <b>NW1</b>-trans
sudo gluster volume start <b>NW1</b>-sys
sudo gluster volume start <b>NW1</b>-ascs
sudo gluster volume start <b>NW1</b>-aers
</code></pre>
Använd följande kommandon för att skapa GlusterFS-volymen för NW2 och starta den.
<pre><code>sudo gluster vol create <b>NW2</b>-sapmnt replica 3 glust-0:/rhs/<b>NW2</b>/sapmnt glust-1:/rhs/<b>NW2</b>/sapmnt glust-2:/rhs/<b>NW2</b>/sapmnt force
sudo gluster vol create <b>NW2</b>-trans replica 3 glust-0:/rhs/<b>NW2</b>/trans glust-1:/rhs/<b>NW2</b>/trans glust-2:/rhs/<b>NW2</b>/trans force
sudo gluster vol create <b>NW2</b>-sys replica 3 glust-0:/rhs/<b>NW2</b>/sys glust-1:/rhs/<b>NW2</b>/sys glust-2:/rhs/<b>NW2</b>/sys force
sudo gluster vol create <b>NW2</b>-ascs replica 3 glust-0:/rhs/<b>NW2</b>/ascs glust-1:/rhs/<b>NW2</b>/ascs glust-2:/rhs/<b>NW2</b>/ascs force
sudo gluster vol create <b>NW2</b>-aers replica 3 glust-0:/rhs/<b>NW2</b>/aers glust-1:/rhs/<b>NW2</b>/aers glust-2:/rhs/<b>NW2</b>/aers force
sudo gluster volume start <b>NW2</b>-sapmnt
sudo gluster volume start <b>NW2</b>-trans
sudo gluster volume start <b>NW2</b>-sys
sudo gluster volume start <b>NW2</b>-ascs
sudo gluster volume start <b>NW2</b>-aers
</code></pre>
## <a name="next-steps"></a>Nästa steg
* [Installera SAP-ASCS och-databasen](high-availability-guide-rhel.md)
* [Azure Virtual Machines planera och implementera SAP][planning-guide]
* [Azure Virtual Machines distribution för SAP][deployment-guide]
* [Azure Virtual Machines DBMS-distribution för SAP][dbms-guide]
* Information om hur du upprättar hög tillgänglighet och planerar för haveri beredskap för SAP HANA på Azure (stora instanser) finns i [SAP HANA (stora instanser) hög tillgänglighet och haveri beredskap på Azure](hana-overview-high-availability-disaster-recovery.md).
* Information om hur du upprättar hög tillgänglighet och planerar för haveri beredskap för SAP HANA på virtuella Azure-datorer finns i [hög tillgänglighet för SAP HANA på Azure-Virtual Machines (VM)][sap-hana-ha]
| 51.830084 | 443 | 0.728221 | swe_Latn | 0.616203 |
4da80bdfdab6c0daf3edfb7a4e2060fb43133ff5 | 2,438 | md | Markdown | docs/debugger/continuing-execution-after-an-exception.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/continuing-execution-after-an-exception.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/continuing-execution-after-an-exception.md | jcarmon4/visualstudio-docs.es-es | 2f133c9f0a90eb92429dcca0573a0b3f458cdcf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Continuar la ejecución después de una excepción | Documentos de Microsoft
ms.date: 11/04/2016
ms.topic: conceptual
dev_langs:
- CSharp
- VB
- FSharp
- C++
- JScript
helpviewer_keywords:
- managed exceptions, continuing execution after
- exceptions, continuing execution after
- debugger, exceptions
- managed code, exception handling
- exception handling, continuing execution after
- execution, continuing after an exception
- program execution
- threading [Visual Studio], continuing execution after exceptions
- Exceptions dialog box
- programs, executing
ms.assetid: 6fe97aac-2131-4615-bd92-d3afee741558
author: mikejo5000
ms.author: mikejo
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: d557fc0ec056cac22603338f95920e5c721f67dd
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 04/23/2019
ms.locfileid: "62564104"
---
# <a name="continuing-execution-after-an-exception"></a>Continuar la ejecución después de una excepción
Cuando el depurador interrumpe la ejecución debido a una excepción, verá el **aplicación auxiliar de excepciones**, de forma predeterminada. Si ha deshabilitado la **aplicación auxiliar de excepciones** en el **opciones** cuadro de diálogo, verá el **Asistente de excepciones** (C# o Visual Basic) o el **excepción** cuadro de diálogo) (C++).
Cuando el **aplicación auxiliar de excepciones** aparece, puede intentar corregir el problema que provocó la excepción.
## <a name="managed-and-native-code"></a>Código administrado y nativo
En el código administrado y nativo, puede continuar la ejecución en el mismo subproceso después de una excepción no controlada. El **aplicación auxiliar de excepciones** se desenreda la pila de llamadas al punto donde se produjo la excepción.
## <a name="mixed-code"></a>Código mixto
Si se produce una excepción no controlada durante la depuración de código mixto nativo y administrado, las restricciones de sistema operativo impedirán que se desenrede la pila de llamadas. Si intenta rebobinar la pila de llamadas mediante el menú contextual, aparecerá un mensaje de error que indica que el depurador no puede desenredar la pila de llamadas si se ha producido una excepción no controlada durante la depuración de código mixto.
## <a name="see-also"></a>Vea también
- [Administración de excepciones con el depurador](../debugger/managing-exceptions-with-the-debugger.md) | 50.791667 | 444 | 0.795324 | spa_Latn | 0.952938 |
4da92ed9d1b6cc11204972cf41dd19770f8be882 | 1,763 | md | Markdown | meetings/meeting-12.md | olliegardner/seminar-roulette | c8330258778dd7f71b1289c5dfe611e5637cf71d | [
"MIT"
] | null | null | null | meetings/meeting-12.md | olliegardner/seminar-roulette | c8330258778dd7f71b1289c5dfe611e5637cf71d | [
"MIT"
] | null | null | null | meetings/meeting-12.md | olliegardner/seminar-roulette | c8330258778dd7f71b1289c5dfe611e5637cf71d | [
"MIT"
] | 1 | 2020-10-07T16:21:59.000Z | 2020-10-07T16:21:59.000Z | # Meeting 12 - 22 Jan 2021
This meeting started with me giving Jeremy a demonstration of the work I have completed this week. He pointed that he thought the "online only" filter was worded badly and didn't understand what it meant at first. He also thinks that the loading times are slightly too long, making the system frustrating to use at times. Jeremy suggested mentioning performance regressions, such as loading time, in the implementation chapter of my dissertation.
During my initial think aloud evaluations, participants reported that the system was too cluttered and hard to navigate. Jeremy asked if I thought the system was still too cluttered. Potentially could implement A/B testing to test different versions of the UI.
As we were looking through the system, Jeremy noticed that ascii characters, such as ì, aren't being rendered correctly on the frontend. They are just showing the ascii value. I will look into this and fix it next week.
Next, we discussed my test suite which I had added to this week. Jeremy said that it would be nice to see a test case coverage report in my dissertation once my codebase is frozen. He doesn't think that it is necessary to generate a report within my CI pipeline.
We discussed how I should go about conducting an evaluation for my project. We both agreed that I should give my evaluation participants a set of tasks to complete. I will then devise a survey for them to complete. I asked Jeremy how many participants he thought I should ask to complete my evaluation. He said definitely at least 10 but thought around 50 would be a good number. He suggested having between 7 and 15 questions in the evaluation. We agreed that I would prepare a draft evaluation for our meet on Friday 29th January.
| 146.916667 | 532 | 0.802609 | eng_Latn | 0.999956 |
4da97de41eb723eb82675d805d1870f74ce9261b | 4,555 | md | Markdown | articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md | jhomarolo/azure-docs.pt-br | d11ab7fab56d90666ea619c6b12754b7761aca97 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-05-02T14:26:54.000Z | 2019-05-02T14:26:54.000Z | articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md | jhomarolo/azure-docs.pt-br | d11ab7fab56d90666ea619c6b12754b7761aca97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md | jhomarolo/azure-docs.pt-br | d11ab7fab56d90666ea619c6b12754b7761aca97 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Associando vários Azure AD com um AD FS único | Microsoft Docs
description: Neste documento, você aprenderá a federar vários Azure AD com um único AD FS.
keywords: federar, ADFS, AD FS, vários locatários, único AD FS, um ADFS, federação multilocatária, adfs de várias florestas, aad connect, federação, federação entre locatários
services: active-directory
documentationcenter: ''
author: billmath
manager: daveba
editor: ''
ms.assetid: ''
ms.service: active-directory
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 07/17/2017
ms.subservice: hybrid
ms.author: billmath
ms.collection: M365-identity-device-management
ms.openlocfilehash: 620255896e02319675928396c3d6e5e0d9865c0c
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "60244391"
---
# <a name="federate-multiple-instances-of-azure-ad-with-single-instance-of-ad-fs"></a>Federar várias instâncias do Azure AD com uma instância única do AD FS
Um único farm do AD FS de alta disponibilidade pode federar várias florestas se tiverem uma relação de confiança bidirecional. Essas várias florestas podem ou não corresponder ao mesmo Azure Active Directory. Este artigo fornece instruções sobre como configurar a federação entre uma única implantação do AD FS e mais de uma floresta que sincronizam com Azure AD diferentes.

> [!NOTE]
> O write-back de dispositivo e o ingresso automático de dispositivo não têm suporte neste cenário.
> [!NOTE]
> O Azure AD Connect não pode ser usado para configurar a federação neste cenário como o Azure AD Connect pode configurar a federação de domínios em um único Azure AD.
## <a name="steps-for-federating-ad-fs-with-multiple-azure-ad"></a>Etapas para federar o AD FS com vários Azure AD
Imagine um domínio contoso.com no contoso.onmicrosoft.com do Azure Active Directory já é federado com o AD FS local instalado no ambiente de Active Directory local contoso.com. Fabrikam.com é um domínio no Azure Active Directory fabrikam.onmicrosoft.com.
## <a name="step-1-establish-a-two-way-trust"></a>Etapa 1: estabelecer uma relação de confiança bidirecional
Para que o AD FS em contoso.com seja capaz de autenticar usuários no fabrikam.com, uma relação de confiança bidirecional é necessária entre contoso.com e fabrikam.com. Siga as diretrizes neste [artigo](https://technet.microsoft.com/library/cc816590.aspx) para criar a relação de confiança bidirecional.
## <a name="step-2-modify-contosocom-federation-settings"></a>Etapa 2: modificar configurações de federação de contoso.com
O emissor padrão definido para um único domínio federado ao AD FS é "http\://ADFSServiceFQDN/adfs/services/trust", por exemplo, `http://fs.contoso.com/adfs/services/trust`. O Azure Active Directory requer um emissor exclusivo para cada domínio. Já que o mesmo AD FS vai federar dois domínios, o valor do emissor deve ser modificado para ser exclusivo para cada domínio federado pelo AD FS com o Azure Active Directory.
No servidor do AD FS, abra o PowerShell do Azure AD (certifique-se de que o módulo MSOnline esteja instalado) e execute as seguintes etapas:
Conecte-se ao Azure Active Directory que contém o domínio contoso.com Connect-MsolService Atualize as configurações de federação para contoso.com Update-MsolFederatedDomain - DomainName contoso.com – SupportMultipleDomain
O emissor na configuração da federação de domínio será alterado para "http\://contoso.com/adfs/services/trust" e uma regra de declaração de emissão será adicionada para que o objeto de confiança de terceira parte confiável do Azure AD emita o valor de issuerId correto com base no sufixo de UPN.
## <a name="step-3-federate-fabrikamcom-with-ad-fs"></a>Etapa 3: federar fabrikam.com com o AD FS
Na sessão do PowerShell do Azure AD, siga estas etapas: Conectar-se ao Azure Active Directory que contém o domínio fabrikam.com
Connect-MsolService
Converta o domínio gerenciado fabrikam.com em federado:
Convert-MsolDomainToFederated -DomainName fabrikam.com -Verbose -SupportMultipleDomain
A operação acima vai federar o domínio fabrikam.com com o mesmo AD FS. Você pode verificar as configurações de domínio usando Get-MsolDomainFederationSettings para os dois domínios.
## <a name="next-steps"></a>Próximos passos
[Conectar o Active Directory com o Azure Active Directory](whatis-hybrid-identity.md)
| 65.071429 | 419 | 0.798463 | por_Latn | 0.995027 |
4da9bfddf95a1bf1d7615ea003d5cc47a2b5f32b | 1,351 | md | Markdown | docs/git2kube_load_folder.md | wjojand/git2kube | 089acbe30f7964689c0e3a620116f2448ee0efd7 | [
"MIT"
] | null | null | null | docs/git2kube_load_folder.md | wjojand/git2kube | 089acbe30f7964689c0e3a620116f2448ee0efd7 | [
"MIT"
] | null | null | null | docs/git2kube_load_folder.md | wjojand/git2kube | 089acbe30f7964689c0e3a620116f2448ee0efd7 | [
"MIT"
] | null | null | null | ## git2kube load folder
Loads files from git repository into Folder
### Synopsis
Loads files from git repository into Folder
```
git2kube load folder [flags]
```
### Options
```
-h, --help help for folder
-t, --target-folder string path to target folder
```
### Options inherited from parent commands
```
-b, --branch string branch name to pull (default "master")
-c, --cache-folder string destination on filesystem where cache of repository will be stored (default "/tmp/git2kube/data/")
--exclude strings regex that if is a match excludes the file from the upload, example: '*.yaml' or 'folder/*' if you want to match a folder (default [^\..*])
-g, --git string git repository address, either http(s) or ssh protocol has to be specified
--include strings regex that if is a match includes the file in the upload, example: '*.yaml' or 'folder/*' if you want to match a folder (default [.*])
-l, --log-level string command log level (options: [panic fatal error warning info debug]) (default "info")
-p, --ssh-key string path to the SSH private key (git repository address should be 'git@<address>', example: [email protected]:WanderaOrg/git2kube.git)
```
### SEE ALSO
* [git2kube load](git2kube_load.md) - Loads files from git repository into target
| 37.527778 | 169 | 0.672835 | eng_Latn | 0.987438 |
4daa52e258782c601f38d2d1fa8844c70b043615 | 11,423 | md | Markdown | translations/ja-JP/content/codespaces/getting-started-with-codespaces/getting-started-with-your-java-project-in-codespaces.md | JoyChannel/docs | 2d85af3d136df027c5e9230cac609b3712abafb3 | [
"CC-BY-4.0",
"MIT"
] | 17 | 2021-01-05T16:29:05.000Z | 2022-02-26T09:08:44.000Z | translations/ja-JP/content/codespaces/getting-started-with-codespaces/getting-started-with-your-java-project-in-codespaces.md | moonlightnigh/docs | 37b2dc7444c4f38bd089298a097a755dd0df46ab | [
"CC-BY-4.0",
"MIT"
] | 116 | 2021-10-13T00:58:04.000Z | 2022-03-19T23:23:44.000Z | translations/ja-JP/content/codespaces/getting-started-with-codespaces/getting-started-with-your-java-project-in-codespaces.md | moonlightnigh/docs | 37b2dc7444c4f38bd089298a097a755dd0df46ab | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-08-31T03:18:06.000Z | 2021-10-30T17:49:09.000Z | ---
title: Getting started with your Java project in Codespaces
shortTitle: Getting started with your Java project
intro: 'Get started with your Java project in {% data variables.product.prodname_codespaces %} by creating a custom dev container.'
versions:
free-pro-team: '*'
topics:
- Codespaces
---
{% data reusables.codespaces.release-stage %}
### はじめに
This guide shows you how to set up your Java project in {% data variables.product.prodname_codespaces %}. It will take you through an example of opening your project in a codespace, and adding and modifying a dev container configuration from a template.
#### 必要な環境
- You should have an existing Java project in a repository on {% data variables.product.prodname_dotcom_the_website %}. If you don't have a project, you can try this tutorial with the following example: https://github.com/microsoft/vscode-remote-try-java
- You must have {% data variables.product.prodname_codespaces %} enabled for your organization.
### Step 1: Open your project in a codespace
1. Navigate to your project's repository. Use the {% octicon "download" aria-label="The download icon" %} **Code** drop-down menu, and select **Open with Codespaces**. If you don’t see this option, your project isn’t available for {% data variables.product.prodname_codespaces %}.
![[Open with Codespaces] ボタン](/assets/images/help/codespaces/open-with-codespaces-button.png)
2. To create a new codespace, click {% octicon "plus" aria-label="The plus icon" %} **New codespace**. ![[New codespace] ボタン](/assets/images/help/codespaces/new-codespace-button.png)
When you create a codespace, your project is created on a remote VM that is dedicated to you. By default, the container for your codespace has many languages and runtimes including Java, nvm, npm, and yarn. It also includes a common set of tools like git, wget, rsync, openssh, and nano.
You can customize your codespace by adjusting the amount of vCPUs and RAM, [adding dotfiles to personalize your environment](/codespaces/setting-up-your-codespace/personalizing-codespaces-for-your-account), or by modifying the tools and scripts installed.
{% data variables.product.prodname_codespaces %} uses a file called `devcontainer.json` to store configurations. On launch {% data variables.product.prodname_codespaces %} uses the file to install any tools, dependencies, or other set up that might be needed for the project. For more information, see "[Configuring Codespaces for your project](/codespaces/setting-up-your-codespace/configuring-codespaces-for-your-project)."
### Step 2: Add a dev container to your codespace from a template
The default codespaces container comes with the latest Java version, package managers (Maven, Gradle), and other common tools preinstalled. However, we recommend that you set up a custom container to define the tools and scripts that your project needs. This will ensure a fully reproducible environment for all {% data variables.product.prodname_codespaces %} users in your repository.
To set up your project with a custom container, you will need to use a `devcontainer.json` file to define the environment. In {% data variables.product.prodname_codespaces %} you can add this either from a template or you can create your own. For more information on dev containers, see "[Configuring Codespaces for your project](/codespaces/setting-up-your-codespace/configuring-codespaces-for-your-project)."
1. Access the command palette (`shift command P` / `shift control P`), then start typing "dev container". Click **Codespaces: Add Development Container Configuration Files...** 
3. For this example, click **Java**. In practice, you could select any container that’s specific to Java or a combination of tools such as Java and Azure Functions. 
4. Click the recommended version of Java. 
5. To rebuild your container, access the command palette (`shift command P` / `shift control P`), then start typing "rebuild". **Codespaces: Rebuild Container**をクリックしてください。 
#### Anatomy of your dev container
Adding the Java dev container template adds a `.devcontainer` folder to the root of your project's repository with the following files:
- `devcontainer.json`
- Dockerfile
The newly added `devcontainer.json` file defines a few properties that are described after the sample.
##### devcontainer.json
```json
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.159.0/containers/java
{
"name": "Java",
"build": {
"dockerfile": "Dockerfile",
"args": {
// Update the VARIANT arg to pick a Java version: 11, 14
"VARIANT": "11",
// Options
"INSTALL_MAVEN": "true",
"INSTALL_GRADLE": "false",
"INSTALL_NODE": "false",
"NODE_VERSION": "lts/*"
}
},
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
"java.home": "/docker-java-home",
"maven.executable.path": "/usr/local/sdkman/candidates/maven/current/bin/mvn"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"vscjava.vscode-java-pack"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "java -version",
// Uncomment to connect as a non-root user. See https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}
```
- **Name** - You can name your dev container anything, this is just the default.
- **Build** - The build properties.
- **Dockerfile** - In the build object, dockerfile is a reference to the Dockerfile that was also added from the template.
- **Args**
- **Variant**: This file only contains one build argument, which is the Java version that is passed into the Dockerfile.
- **Settings** - These are {% data variables.product.prodname_vscode %} settings that you can set.
- **Terminal.integrated.shell.linux** - While bash is the default here, you could use other terminal shells by modifying this.
- **Extensions** - These are extensions included by default.
- **Vscjava.vscode-java-pack** - The Java Extension Pack provides popular extensions for Java development to get you started.
- **forwardPorts** - Any ports listed here will be forwarded automatically.
- **postCreateCommand** - If you want to run anything after you land in your codespace that’s not defined in the Dockerfile, you can do that here.
- **remoteUser** - By default, you’re running as the `vscode` user, but you can optionally set this to `root`.
##### Dockerfile
```bash
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.159.0/containers/java/.devcontainer/base.Dockerfile
ARG VARIANT="14"
FROM mcr.microsoft.com/vscode/devcontainers/java:0-${VARIANT}
# [Optional] Install Maven or Gradle
ARG INSTALL_MAVEN="false"
ARG MAVEN_VERSION=3.6.3
ARG INSTALL_GRADLE="false"
ARG GRADLE_VERSION=5.4.1
RUN if [ "${INSTALL_MAVEN}" = "true" ]; then su vscode -c "source /usr/local/sdkman/bin/sdkman-init.sh && sdk install maven \"${MAVEN_VERSION}\""; fi \
&& if [ "${INSTALL_GRADLE}" = "true" ]; then su vscode -c "source /usr/local/sdkman/bin/sdkman-init.sh && sdk install gradle \"${GRADLE_VERSION}\""; fi
# [Optional] Install a version of Node.js using nvm for front end dev
ARG INSTALL_NODE="true"
ARG NODE_VERSION="lts/*"
RUN if [ "${INSTALL_NODE}" = "true" ]; then su vscode -c "source /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment this line to install global node packages.
# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1
```
You can use the Dockerfile to add additional container layers to specify OS packages, Java versions, or global packages we want included in our Dockerfile.
### Step 3: Modify your devcontainer.json file
With your dev container added and a basic understanding of what everything does, you can now make changes to configure it for your environment. In this example, you'll add properties to install extensions and your project dependencies when your codespace launches.
1. In the Explorer, select the `devcontainer.json` file from the tree to open it. You might have to exand the `.devcontainer` folder to see it.

2. Add the following lines to your `devcontainer.json` file after `extensions`.
```json{:copy}
"postCreateCommand": "npm install",
"forwardPorts": [4000],
```
For more information on `devcontainer.json` properties, see the [devcontainer.json reference](https://code.visualstudio.com/docs/remote/devcontainerjson-reference) on the Visual Studio Code docs.
3. To rebuild your container, access the command palette (`shift command P` / `shift control P`), then start typing "rebuild". **Codespaces: Rebuild Container**をクリックしてください。

Rebuilding inside your codespace ensures your changes work as expected before you commit the changes to the repository. If something does result in a failure, you’ll be placed in a codespace with a recovery container that you can rebuild from to keep adjusting your container.
### Step 4: Run your application
In the previous section, you used the `postCreateCommand` to install a set of packages via npm. You can now use this to run our application with npm.
1. Run your application by pressing `F5`.
2. When your project starts, you should see a toast in the bottom right corner with a prompt to connect to the port your project uses.

### Step 5: Commit your changes
{% data reusables.codespaces.committing-link-to-procedure %}
### 次のステップ
You should now be ready start developing your Java project in {% data variables.product.prodname_codespaces %}. Here are some additional resources for more advanced scenarios.
- [Managing encrypted secrets for {% data variables.product.prodname_codespaces %}](/codespaces/working-with-your-codespace/managing-encrypted-secrets-for-codespaces)
- [Managing GPG verification for {% data variables.product.prodname_codespaces %}](/codespaces/working-with-your-codespace/managing-gpg-verification-for-codespaces)
- [Forwarding ports in your codespace](/codespaces/developing-in-codespaces/forwarding-ports-in-your-codespace)
| 60.439153 | 425 | 0.748665 | eng_Latn | 0.973914 |
4daa732260eb4c10584c5b8df702bbffbbf5c46a | 1,181 | md | Markdown | tests/test_generators/output/kitchen_sink_md/Address.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | tests/test_generators/output/kitchen_sink_md/Address.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | tests/test_generators/output/kitchen_sink_md/Address.md | dalito/linkml | 1bbf442f5c0dab5b6a4eb3309ef25b95c74d0892 | [
"CC0-1.0"
] | null | null | null | # Class: Address
URI: [ks:Address](https://w3id.org/linkml/tests/kitchen_sink/Address)
<!-- no inheritance hierarchy -->
## Slots
| Name | Range | Cardinality | Description | Info |
| --- | --- | --- | --- | --- |
| [street](street.md) | NONE | 0..1 | None | . |
| [city](city.md) | NONE | 0..1 | None | . |
## Usages
| used by | used in | type | used |
| --- | --- | --- | --- |
| [Person](Person.md) | [addresses](addresses.md) | range | Address |
## Identifier and Mapping Information
## LinkML Specification
<!-- TODO: investigate https://stackoverflow.com/questions/37606292/how-to-create-tabbed-code-blocks-in-mkdocs-or-sphinx -->
### Direct
<details>
```yaml
name: Address
from_schema: https://w3id.org/linkml/tests/kitchen_sink
slots:
- street
- city
```
</details>
### Induced
<details>
```yaml
name: Address
from_schema: https://w3id.org/linkml/tests/kitchen_sink
attributes:
street:
name: street
from_schema: https://w3id.org/linkml/tests/kitchen_sink
alias: street
owner: Address
city:
name: city
from_schema: https://w3id.org/linkml/tests/kitchen_sink
alias: city
owner: Address
```
</details> | 15.337662 | 124 | 0.628281 | yue_Hant | 0.361558 |
4daa8992c4a20f2f515c77b0a56745f610ddb2a7 | 2,080 | md | Markdown | WindowsServerDocs/administration/windows-commands/delete-partition.md | akmerkator/windowsserverdocs | 63926404009f9e1330a4a0aa8cb9821a2dd7187e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T13:10:44.000Z | 2020-01-06T13:10:44.000Z | WindowsServerDocs/administration/windows-commands/delete-partition.md | akmerkator/windowsserverdocs | 63926404009f9e1330a4a0aa8cb9821a2dd7187e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | WindowsServerDocs/administration/windows-commands/delete-partition.md | akmerkator/windowsserverdocs | 63926404009f9e1330a4a0aa8cb9821a2dd7187e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-02T21:35:28.000Z | 2022-03-02T21:35:28.000Z | ---
title: delete partition
description: "Windows Commands topic for **** - "
ms.custom: na
ms.prod: windows-server-threshold
ms.reviewer: na
ms.suite: na
ms.technology: manage-windows-commands
ms.tgt_pltfrm: na
ms.topic: article
ms.assetid: 65752312-cb16-46f6-870f-1b95c507b101
author: coreyp-at-msft
ms.author: coreyp
manager: dongill
ms.date: 10/16/2017
---
# delete partition
Deletes the partition with focus.
## Syntax
```
delete partition [noerr] [override]
```
## Parameters
|Parameter|Description|
|---------|-----------|
|override|Enables DiskPart to delete any partition regardless of type. Typically, DiskPart only permits you to delete known data partitions.|
|noerr|For scripting only. When an error is encountered, DiskPart continues to process commands as if the error did not occur. Without this parameter, an error causes DiskPart to exit with an error code.|
## Remarks
> [!CAUTION]
> Deleting a partition on a dynamic disk can delete all dynamic volumes on the disk, thus destroying any data and leaving the disk in a corrupt state. To delete a dynamic volume, always use the **delete volume** command instead. Partitions can be deleted from dynamic disks, but they should not be created. For example, it is possible to delete an unrecognized GUID Partition Table (GPT) partition on a dynamic GPT disk. Deleting such a partition does not cause the resulting free space to become available. This command is intended to allow you to reclame space on a corrupted offline dynamic disk in an emergency situation where the **clean** command in DiskPart cannot be used.
> - You cannot delete the system partition, boot partition, or any partition that contains the active paging file or crash dump information.
> - A partition must be selected for this operation to succeed. Use the **select partition** command to select a partition and shift the focus to it.
## <a name="BKMK_examples"></a>Examples
To delete the partition with focus, type:
```
delete partition
```
#### Additional references
[Command-Line Syntax Key](command-line-syntax-key.md)
| 37.818182 | 680 | 0.762981 | eng_Latn | 0.988588 |
4dac4f56246eadab9b1ef8f12aa9403c825e18cb | 2,175 | md | Markdown | README.md | noise-field/aijourney_zeroshot | b2808db693253db53e137c2650297e6bb5d9476d | [
"MIT"
] | null | null | null | README.md | noise-field/aijourney_zeroshot | b2808db693253db53e137c2650297e6bb5d9476d | [
"MIT"
] | null | null | null | README.md | noise-field/aijourney_zeroshot | b2808db693253db53e137c2650297e6bb5d9476d | [
"MIT"
] | null | null | null | # Zeroshot classification POC using ruGPT3
Solution to AI4Humanities track of AI Journey 2020
## Fetching the model
Use the src/utils/download_weights.py to download the model and tokenizer.
```
python ./src/utils/download_weights.py sberbank-ai/rugpt3large_based_on_gpt2 ./model
```
## Evaluation
To reproduce the nplus1 evaluation figures from the PowerPoint:
1. Download and unzip the Readability corpus from [Taiga](https://tatianashavrina.github.io/taiga_site/downloads)
2. Fetch the model weights
3. Run src/evaluate.py number_of_samples seed
```
python ./src/evaluate.py ../NPlus1 ./model 500 0
```
This should produce the following metrics:
```
precision recall f1-score support
Гаджеты 0.00 0.00 0.00 24
Космос 0.33 0.03 0.06 31
Наука 0.82 0.85 0.84 238
Не знаю 0.00 0.00 0.00 0
Оружие 0.98 0.44 0.61 97
Технологии 0.37 0.44 0.40 85
Транспорт 0.44 0.16 0.24 25
accuracy 0.58 500
macro avg 0.42 0.27 0.31 500
weighted avg 0.69 0.58 0.60 500
```
An (untuned) logreg over BoW trained on all (6994) but these 500 samples achieves 0.82 weighted
precision/recall, achieving F1 of 0.60 when trained on about 500 supervised samples, while a dummy classifier
(stratified) yields 0.31/0.32.
While it is admittedly a bit of cheating (the model probably saw these texts on self-supervised pre-training),
it was never provided any supervision. Unfortunately, there aren't any widely used Russian datasets like
20newsgroups.
## Interactive classifier
To use the classifier interactively, run app.py with streamlit. The app expects the model fetched to `./model`
Works best with `large` model.
```
streamlit run ./src/app.py
```
## References
Shavrina T., Shapovalova O. (2017) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017. | 35.080645 | 216 | 0.65931 | eng_Latn | 0.878206 |
4dad68737f9dbb723b8a55d245d62115dd6ddd5a | 2,155 | md | Markdown | docs/2014/analysis-services/xmla/xml-elements-properties/dimension-element-xmla.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/xmla/xml-elements-properties/dimension-element-xmla.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/analysis-services/xmla/xml-elements-properties/dimension-element-xmla.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Dimensión de elemento (XMLA) | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
- docset-sql-devref
ms.topic: reference
api_name:
- Dimension Element
api_location:
- http://schemas.microsoft.com/analysisservices/2003/engine
topic_type:
- apiref
f1_keywords:
- http://schemas.microsoft.com/analysisservices/2003/engine#Dimension
- urn:schemas-microsoft-com:xml-analysis#Dimension
- microsoft.xml.analysis.dimension
helpviewer_keywords:
- Dimension element
ms.assetid: 85093468-e971-4b8e-9ee4-7b264ad01711
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: 1f20dfb338f1dd03923f8f71968f6c6f9f31bf80
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48091135"
---
# <a name="dimension-element-xmla"></a>Elemento Dimension (XMLA)
Identifica la dimensión de cubo representada por el elemento primario [objeto](object-element-dimension-xmla.md) elemento.
## <a name="syntax"></a>Sintaxis
```xml
<Object>
...
<Dimension>...</Dimension>
...
</Object>
```
## <a name="element-characteristics"></a>Características de los elementos
|Característica|Descripción|
|--------------------|-----------------|
|Tipo y longitud de los datos|String|
|Valor predeterminado|None|
|Cardinalidad|1-1: Elemento necesario que se produce una vez y solo una vez.|
## <a name="element-relationships"></a>Relaciones del elemento
|Relación|Elemento|
|------------------|-------------|
|Elementos primarios|[Objeto](object-element-dimension-xmla.md)|
|Elementos secundarios|None|
## <a name="remarks"></a>Comentarios
El elemento `Dimension` es un identificador de objeto que contiene el nombre de la dimensión de cubo representada por el elemento `Object`.
## <a name="see-also"></a>Vea también
[Elemento de la base de datos (XMLA)](database-element-xmla.md)
[Elemento Dimension (XMLA)](dimension-element-xmla.md)
[Propiedades (XMLA)](xml-elements-properties.md)
| 29.930556 | 142 | 0.711833 | spa_Latn | 0.261437 |
4dade4e9410262a40d0bf20c14a1af2a059d8fd2 | 1,029 | md | Markdown | README.md | feedyard/circleci-platform-agent | f3336e46d287ca577c03cf5c5590cf5167c514d8 | [
"MIT"
] | null | null | null | README.md | feedyard/circleci-platform-agent | f3336e46d287ca577c03cf5c5590cf5167c514d8 | [
"MIT"
] | null | null | null | README.md | feedyard/circleci-platform-agent | f3336e46d287ca577c03cf5c5590cf5167c514d8 | [
"MIT"
] | null | null | null | # feedyard/circleci-platform-agent [](https://circleci.com/gh/feedyard/circleci-platform-agent) [](https://quay.io/repository/feedyard/circleci-platform-agent) [](https://raw.githubusercontent.com/feedyard/circleci-platform-agent/master/LICENSE) [](https://alpinelinux.org)
Based on [feedyard/circleci-infra-agent](https://github.com/feedyard/circleci-infra-agent). includes common tools for building or managing
kubernetes and platform resources as code in circleci pipelines.
packages/bin |
--------------|
kops |
kubectl |
go |
consul |
vault |
cfssl |
cfssljson |
See CHANGELOG for list of installed packages/versions
| 57.166667 | 612 | 0.723032 | eng_Latn | 0.297035 |
4daeabb37b4127dce23294a79f8b881a0cec49d5 | 2,533 | md | Markdown | docs/ado/reference/ado-api/connectiontimeout-property-ado.md | baleng/sql-docs.it-it | 80bb05c3cc6a68564372490896545d6211a9fa26 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/connectiontimeout-property-ado.md | baleng/sql-docs.it-it | 80bb05c3cc6a68564372490896545d6211a9fa26 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/connectiontimeout-property-ado.md | baleng/sql-docs.it-it | 80bb05c3cc6a68564372490896545d6211a9fa26 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Proprietà ConnectionTimeout (ADO) | Microsoft Docs
ms.prod: sql
ms.prod_service: connectivity
ms.technology: connectivity
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.topic: conceptual
apitype: COM
f1_keywords:
- Connection15::ConnectionTimeout
helpviewer_keywords:
- ConnectionTimeout property [ADO]
ms.assetid: 8904a403-1383-4b4b-b53d-5c01d6f5deac
author: MightyPen
ms.author: genemi
manager: craigg
ms.openlocfilehash: b5cb3e6e1cc4266551bfeabf09bde1a65fea032f
ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/01/2018
ms.locfileid: "47707249"
---
# <a name="connectiontimeout-property-ado"></a>Proprietà ConnectionTimeout (ADO)
Indica per quanto tempo di attesa durante il tentativo di stabilire una connessione prima di terminare il tentativo e generare un errore.
## <a name="settings-and-return-values"></a>Le impostazioni e valori restituiti
Imposta o restituisce un **lungo** valore che indica, in secondi, quanto tempo di attesa per la connessione da aprire. Valore predefinito è 15.
## <a name="remarks"></a>Note
Usare la **ConnectionTimeout** proprietà in un [connessione](../../../ado/reference/ado-api/connection-object-ado.md) se ritardi di rete del traffico o con intensa attività di server rendono necessario interrompere un tentativo di connessione dell'oggetto. Se l'ora dal **ConnectionTimeout** l'impostazione della proprietà deve trascorrere prima dell'apertura della connessione, si verifica un errore e ADO non annullerà il tentativo. Se si imposta la proprietà su zero, ADO attenderà all'infinito finché non viene aperta la connessione. Assicurarsi che il provider a cui si sta scrivendo codice supporta il **ConnectionTimeout** funzionalità.
Il **ConnectionTimeout** è di lettura/scrittura quando la connessione è chiusa e di sola lettura quando è aperto.
## <a name="applies-to"></a>Si applica a
[Oggetto Connection (ADO)](../../../ado/reference/ado-api/connection-object-ado.md)
## <a name="see-also"></a>Vedere anche
[ConnectionString, ConnectionTimeout ed esempio di proprietà State (VB)](../../../ado/reference/ado-api/connectionstring-connectiontimeout-and-state-properties-example-vb.md)
[Esempio ConnectionString, ConnectionTimeout e proprietà State (VC + +)](../../../ado/reference/ado-api/connectionstring-connectiontimeout-and-state-properties-example-vc.md)
[Proprietà CommandTimeout (ADO)](../../../ado/reference/ado-api/commandtimeout-property-ado.md)
| 57.568182 | 646 | 0.77497 | ita_Latn | 0.94184 |
4daedc72a3e9e47b8bfd5aaee521cf63877bbb3f | 9,441 | md | Markdown | getting-started/environment-setup.md | cmendible/docs | a2900fe23ce4c25812d967804029bec8527c5f7a | [
"MIT"
] | null | null | null | getting-started/environment-setup.md | cmendible/docs | a2900fe23ce4c25812d967804029bec8527c5f7a | [
"MIT"
] | null | null | null | getting-started/environment-setup.md | cmendible/docs | a2900fe23ce4c25812d967804029bec8527c5f7a | [
"MIT"
] | null | null | null | # Environment Setup
Dapr can be run in either self hosted or Kubernetes modes. Running Dapr runtime in self hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr supported environments. For example, you can develop Dapr applications in self hosted mode and then deploy them to any Kubernetes cluster.
## Contents
- [Prerequisites](#prerequisites)
- [Installing Dapr CLI](#installing-dapr-cli)
- [Installing Dapr in self-hosted mode](#installing-dapr-in-self-hosted-mode)
- [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster)
## Prerequisites
On default Dapr will install with a developer environment using Docker containers to get you started easily. However, Dapr does not depend on Docker to run (see [here](https://github.com/dapr/cli/blob/master/README.md) for instructions on installing Dapr locally without Docker using slim init). This getting started guide assumes Dapr is installed along with this developer environment.
- Install [Docker](https://docs.docker.com/install/)
> For Windows user, ensure that `Docker Desktop For Windows` uses Linux containers.
## Installing Dapr CLI
### Using script to install the latest release
**Windows**
Install the latest windows Dapr cli to `c:\dapr` and add this directory to User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
**Linux**
Install the latest linux Dapr CLI to `/usr/local/bin`
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
**MacOS**
Install the latest darwin Dapr CLI to `/usr/local/bin`
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or install via [Homebrew](https://brew.sh)
```bash
brew install dapr/tap/dapr-cli
```
### From the Binary Releases
Each release of Dapr CLI includes various OSes and architectures. These binary versions can be manually downloaded and installed.
1. Download the [Dapr CLI](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Windows, create a directory and add this to your System PATH. For example create a directory called `c:\dapr` and add this directory to your path, by editing your system environment variable.
## Installing Dapr in self hosted mode
### Initialize Dapr using the CLI
On default, during initialization the Dapr CLI will install the Dapr binaries as well as setup a developer environment to help you get started easily with Dapr. This environment uses Docker containers, therefore Docker is listed as a prerequisite.
>If you prefer to run Dapr without this environment and no dependency on Docker, see the CLI documentation for usage of the `--slim` flag with the init CLI command [here](https://github.com/dapr/cli/blob/master/README.md). Note, if you are a new user, it is strongly recommended to intall Docker and use the regular init command.
> For Linux users, if you run your docker cmds with sudo or the install path is `/usr/local/bin`(default install path), you need to use "**sudo dapr init**"
> For Windows users, make sure that you run the cmd terminal in administrator mode
> **Note:** See [Dapr CLI](https://github.com/dapr/cli) for details on the usage of Dapr CLI
```bash
$ dapr init
⌛ Making the jump to hyperspace...
Downloading binaries and setting up components
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```
If you prefer you can also install to an alternate location by using `--install-path`:
```
$ dapr init --install-path /home/user123/mydaprinstall
```
To see that Dapr has been installed successfully, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest` and `redis` container images are both running.
### Install a specific runtime version
You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Release](https://github.com/dapr/dapr/releases).
```bash
# Install v0.1.0 runtime
$ dapr init --runtime-version 0.1.0
# Check the versions of cli and runtime
$ dapr --version
cli version: v0.1.0
runtime version: v0.1.0
```
### Uninstall Dapr in a self hosted mode
Uninstalling removes the Placement service container or the Placement service binary.
```bash
$ dapr uninstall
```
> For Linux users, if you run your docker cmds with sudo or the install path is `/usr/local/bin`(default install path), you need to use "**sudo dapr uninstall**" to remove dapr binaries and/or the containers.
It won't remove the Redis or Zipkin containers by default in case you were using them for other purposes. To remove Redis, Zipkin and actor Placement container as well as remove the default Dapr dir located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\` run:
```bash
$ dapr uninstall --all
```
**You should always run `dapr uninstall` before running another `dapr init`.**
To specify a custom install path from which you have to uninstall run:
```bash
$ dapr uninstall --install-path /path/to/binary
```
## Installing Dapr on a Kubernetes cluster
When setting up Kubernetes, you can do this either via the Dapr CLI or Helm.
*Note that installing Dapr using the CLI is recommended for testing purposes only.*
Dapr installs the following pods:
* dapr-operator: Manages component updates and kubernetes services endpoints for Dapr (state stores, pub-subs, etc.)
* dapr-sidecar-injector: Injects Dapr into annotated deployment pods
* dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods
* dapr-sentry: Manages mTLS between services and acts as a certificate authority
### Setup Cluster
You can install Dapr on any Kubernetes cluster. Here are some helpful links:
- [Setup Minikube Cluster](./cluster/setup-minikube.md)
- [Setup Azure Kubernetes Service Cluster](./cluster/setup-aks.md)
- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
> **Note:** The Dapr control plane containers are currently only distributed on linux containers.
> Your Kubernetes cluster must contain available Linux capable nodes.
> Both the Dapr CLI, and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`.
> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../howto/hybrid-clusters/)
### Using the Dapr CLI
You can install Dapr to a Kubernetes cluster using CLI.
> **Note:** that using the CLI does not support non-default namespaces.
> If you need a non-default namespace, Helm installation has to be used (see below).
#### Install Dapr to Kubernetes
```bash
$ dapr init --kubernetes
ℹ️ Note: this installation is recommended for testing purposes. For production environments, please use Helm
⌛ Making the jump to hyperspace...
✅ Deploying the Dapr Operator to your cluster...
✅ Success! Dapr has been installed. To verify, run 'kubectl get pods -w' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
Dapr CLI installs Dapr to `default` namespace of Kubernetes cluster using [this](https://daprreleases.blob.core.windows.net/manifest/dapr-operator.yaml) manifest.
#### Uninstall Dapr on Kubernetes
```bash
$ dapr uninstall --kubernetes
```
### Using Helm (Advanced)
You can install Dapr to Kubernetes cluster using a Helm 3 chart.
> **Note:** The latest Dapr helm chart no longer supports Helm v2. Please migrate from helm v2 to helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
#### Install Dapr to Kubernetes
1. Make sure Helm 3 is installed on your machine
2. Add Azure Container Registry as a Helm repo
```bash
helm repo add dapr https://daprio.azurecr.io/helm/v1/repo
helm repo update
```
3. Create `dapr-system` namespace on your kubernetes cluster
```bash
kubectl create namespace dapr-system
```
4. Install the Dapr chart on your cluster in the `dapr-system` namespace.
```bash
helm install dapr dapr/dapr --namespace dapr-system
```
#### Verify installation
Once the chart installation is complete, verify the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace:
```bash
$ kubectl get pods -n dapr-system -w
NAME READY STATUS RESTARTS AGE
dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s
dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s
dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s
dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
```
#### Sidecar annotations
To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this](../howto/configure-k8s/README.md) how to guide.
#### Uninstall Dapr on Kubernetes
Helm 3
```bash
helm uninstall dapr -n dapr-system
```
> **Note:** See [here](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr helm charts.
| 39.668067 | 387 | 0.748332 | eng_Latn | 0.949328 |
4daef2cb7416196ed868d4b455e7f395a5235689 | 2,083 | md | Markdown | AlchemyInsights/teams-upgrade-guidance.md | AdrianaMedia/OfficeDocs-AlchemyInsights-pr.cs-CZ | 1eeb3a75180d9d8bf791c4554db66c7bebd4f0b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/teams-upgrade-guidance.md | AdrianaMedia/OfficeDocs-AlchemyInsights-pr.cs-CZ | 1eeb3a75180d9d8bf791c4554db66c7bebd4f0b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | AlchemyInsights/teams-upgrade-guidance.md | AdrianaMedia/OfficeDocs-AlchemyInsights-pr.cs-CZ | 1eeb3a75180d9d8bf791c4554db66c7bebd4f0b6 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-10T18:26:09.000Z | 2021-06-10T18:26:09.000Z | ---
title: Pokyny k upgradu v Teams
ms.author: heidip
author: microsoftheidi
ms.audience: ITPro
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.custom:
- "982"
- "4000006"
ms.assetid: 0530bbd2-255c-434f-a24a-7c6c0877bad7
ms.openlocfilehash: 391d1253fd625004308a0cd1359cc0ccc46e1b95
ms.sourcegitcommit: 9a39e7cff11854c54c717a2c0094bfdfefee4ffd
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/01/2020
ms.locfileid: "48333921"
---
# <a name="microsoft-teams-upgrade"></a>Upgrade na Microsoft Teams
**Důležité**: Pokud jste upgradovali z Online Skypu pro firmy do Microsoft Teams v režimu jenom teams, ale nejste ještě připravení, můžeme vám tento problém vyřešit pomocí diagnostické podpory. V pravém horním rohu, které říká **nové centrum pro správu**, se ujistěte, že používáte nové centrum pro správu. V novém centru pro správu klikněte na widget " **potřebuju nápovědu?** ", zadejte "**Teams upgrade**" a spusťte diagnostický nástroj podle pokynů.
Bez ohledu na to, jestli Začínáme s týmy už používáte Teams přes Skype pro firmy nebo jste připraveni na upgrade, chceme, abyste měli jistotu, že máte všechno, co potřebujete k přechodu na úspěšnou cestu do týmů. Další informace najdete v následujících odkazech.
[Začínáme s upgradem Microsoft Teams](https://docs.microsoft.com/MicrosoftTeams/upgrade-start-here)
[Plánování upgradu](https://docs.microsoft.com/MicrosoftTeams/upgrade-plan-journey)
[Principy aplikace Microsoft teams a Skypu pro firmy](https://docs.microsoft.com/MicrosoftTeams/teams-and-skypeforbusiness-coexistence-and-interoperability)
[Upgrade z Online Skypu pro firmy do týmů](https://docs.microsoft.com/MicrosoftTeams/upgrade-to-teams-execute-skypeforbusinessonline)
[Upgrade z místního Skypu pro firmy na týmy](https://docs.microsoft.com/MicrosoftTeams/upgrade-to-teams-execute-skypeforbusinesshybridonprem)
[Kontrola stavu upgradu Online Skypu pro firmy na Teams pomocí PowerShellu](https://docs.microsoft.com/powershell/module/skype/get-csteamsupgradestatus?view=skype-ps) | 56.297297 | 453 | 0.80989 | ces_Latn | 0.982638 |
4dafb450bd6b823b0d0375c8fa6d3d9c992701a7 | 2,139 | md | Markdown | development-strategy.md | snicoll-scratches/hyf-incremental-dev-week1 | 4500cc3903e0bcad91da8d062d7125d613b133c1 | [
"Apache-2.0"
] | null | null | null | development-strategy.md | snicoll-scratches/hyf-incremental-dev-week1 | 4500cc3903e0bcad91da8d062d7125d613b133c1 | [
"Apache-2.0"
] | null | null | null | development-strategy.md | snicoll-scratches/hyf-incremental-dev-week1 | 4500cc3903e0bcad91da8d062d7125d613b133c1 | [
"Apache-2.0"
] | null | null | null | # Development Strategy
> `team-branchies`
A simple little website for the world to know a little about us
## 0. Set-Up
__A User can see my initial repository and live demo__
### Repo
1. Created a new repository
1. Clone the repository
1. Copy-paste this development strategy file into a file called `development-strategy.md`
1. Add a license
1. Give a title to your README
1. Fill out the rest of this file with your team's names
1. Push the changes
1. turn on GitHub Pages
---
## 1. User Story: about your team
__As a site visitor, I want to know about your team and who is in it__
### Repo
This user story was developed on a brach called `1-about-team`
### README.md
Wrote an introduction to the team and added a list with all of our names.
---
## 2. User Story: introducing Akbel
__As a site visitor, I want to learn more about Akbel__
### Repo
This user story was developed on a brach called `2-Akbel`
### name.md
Write a markdown bio page for this team member
### README.md
Change this team member's name on the list into a link to their new profile page
---
## 3. User Story: introducing Tiago
__As a site visitor, I want to learn more about Tiago__
### Repo
This user story was developed on a brach called `3-tiago`
### tiago.md
Write a markdown bio page for this team member
### README.md
Change this team member's name on the list into a link to their new profile page
---
## 4. User Story: introducing Mert
__As a site visitor, I want to learn more about Mert__
### Repo
This user story was developed on a brach called `4-mert`
### mert.md
Write a markdown bio page for this team member
### README.md
Change this team member's name on the list into a link to their new profile page
---
## 5. User Story: introducing Joel
__As a site visitor, I want to learn more about Joel__
### Repo
This user story was developed on a brach called `5-joel`
### joel.md
Write a markdown bio page for this team member
### README.md
Change this team member's name on the list into a link to their new profile page
---
## 6. Finishing Touches
__As a perfectionist, I want everything perfect :)__
| 18.929204 | 89 | 0.72604 | eng_Latn | 0.998552 |
4db00b1148f98785965d0368388cc2284a31b887 | 2,943 | md | Markdown | README.md | ImJasonH/controller-rs | 541431c30d90fa532b5474293b47973b9591b861 | [
"Apache-2.0"
] | null | null | null | README.md | ImJasonH/controller-rs | 541431c30d90fa532b5474293b47973b9591b861 | [
"Apache-2.0"
] | null | null | null | README.md | ImJasonH/controller-rs | 541431c30d90fa532b5474293b47973b9591b861 | [
"Apache-2.0"
] | null | null | null | ## controller-rs
[](https://circleci.com/gh/clux/controller-rs/tree/master)
[](
https://hub.docker.com/r/clux/controller/)
[](http://microbadger.com/images/clux/controller)
[](https://hub.docker.com/r/clux/controller/tags/)
A Rust Kubernetes controller for a [`Foo` resource](https://github.com/clux/controller-rs/blob/master/yaml/foo-crd.yaml) using [kube-rs](https://github.com/clux/kube-rs/).
The `Controller` object reconciles `Foo` instances when changes to them are detected, and writes to their .status object.
## Requirements
A kube cluster / minikube. Install the CRD and an instance of it into the cluster:
```sh
cargo run --bin crdgen > yaml/foo-crd.yaml
kubectl apply -f yaml/foo-crd.yaml
# then:
kubectl apply -f yaml/instance-bad.yaml
```
## Running
### Local Config
You need a valid local kube config with sufficient access (`clux` service account has sufficient access if you want to [impersonate](https://clux.github.io/probes/post/2019-03-31-impersonating-kube-accounts/) the one in `yaml/access.yaml`).
Start the server with `cargo run`:
```sh
cargo run
```
### In-cluster Config
Deploy as a deployment with scoped access via a service account. See `yaml/deployment.yaml` as an example.
```sh
kubectl apply -f yaml/deployment.yaml
sleep 10 # wait for docker pull and start on kube side
export FOO_POD="$(kubectl get pods -n default -lapp=foo-controller --no-headers | awk '{print $1}')"
kubectl port-forward ${FOO_POD} -n default 8080:8080 # keep this running
```
## Usage
Once the app is running, you can see that it observes `foo` events.
Try some of:
```sh
kubectl apply -f yaml/instance-good.yaml -n default
kubectl delete foo good -n default
kubectl edit foo good # change info to contain bad
```
The reconciler will run and write the status object on every change. You should see results in the logs of the pod, or on the .status object outputs of `kubectl get foos -oyaml`.
## Webapp output
The sample web server exposes some example metrics and debug information you can inspect with `curl`.
```sh
$ kubectl apply -f yaml/instance-good.yaml -n default
$ curl 0.0.0.0:8080/metrics
# HELP handled_events handled events
# TYPE handled_events counter
handled_events 1
$ curl 0.0.0.0:8080/
{"last_event":"2019-07-17T22:31:37.591320068Z"}
```
## Events
The example `reconciler` only checks the `.spec.info` to see if it contains the word `bad`. If it does, it updates the `.status` object to reflect whether or not the instance `is_bad`.
While this controller has no child objects configured, there is a `configmapgen_controller` example in [kube-rs](https://github.com/clux/kube-rs/).
| 39.77027 | 240 | 0.748896 | eng_Latn | 0.882934 |
4db01ac7782ac963808015852aabc8a9db896a58 | 2,256 | md | Markdown | README.md | principalstudio/html-webpack-inject-preload | 647ebcbaaf7ab7f319218a261d347056b2d451f8 | [
"MIT"
] | 20 | 2020-09-25T15:19:30.000Z | 2022-03-20T17:05:25.000Z | README.md | principalstudio/html-webpack-inject-preload | 647ebcbaaf7ab7f319218a261d347056b2d451f8 | [
"MIT"
] | 12 | 2020-09-25T13:27:33.000Z | 2021-03-08T15:58:00.000Z | README.md | principalstudio/html-webpack-inject-preload | 647ebcbaaf7ab7f319218a261d347056b2d451f8 | [
"MIT"
] | 1 | 2021-07-16T23:00:41.000Z | 2021-07-16T23:00:41.000Z | [](https://www.npmjs.com/package/@principalstudio/html-webpack-inject-preload) [](https://nodejs.org/)
# HTML Webpack Inject Preload
A [HTML Webpack Plugin](https://github.com/jantimon/html-webpack-plugin) for injecting [<link rel='preload'>](https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content)
This plugin allows to add preload links anywhere you want.
# Installation
You need to have HTMLWebpackPlugin v4 or v5 to make this plugin work.
```
npm i -D @principalstudio/html-webpack-inject-preload
```
**webpack.config.js**
```js
const HtmlWebpackPlugin = require('html-webpack-plugin');
const HtmlWebpackInjectPreload = require('@principalstudio/html-webpack-inject-preload');
module.exports = {
entry: 'index.js',
output: {
path: __dirname + '/dist',
filename: 'index_bundle.js'
},
plugins: [
new HtmlWebpackPlugin(),
new HtmlWebpackInjectPreload({
files: [
{
match: /.*\.woff2$/,
attributes: {as: 'font', type: 'font/woff2', crossorigin: true },
},
{
match: /vendors\.[a-z-0-9]*.css$/,
attributes: {as: 'style' },
},
]
})
]
}
```
**Options**
* files: An array of files object
* match: A regular expression to target files you want to preload
* attributes: Any attributes you want to use. The plugin will add the attribute `rel="preload"` by default.
**Usage**
The plugin is really simple to use: it injects the preload elements into `headTags`, before any other link.
For example
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Webpack App</title>
<%= htmlWebpackPlugin.tags.headTags %>
</head>
<body>
<script src="index_bundle.js"></script>
</body>
</html>
```
will generate
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Webpack App</title>
<link href="dist/fonts/font.woff2" rel="preload" type="font/woff2" crossorigin>
<link href="dist/css/main.css">
</head>
<body>
<script src="index_bundle.js"></script>
</body>
</html>
```
| 25.066667 | 275 | 0.660461 | kor_Hang | 0.434247 |
4db07de3d6dc11d2760c58ef5c7624a4cdcbdfcd | 1,197 | md | Markdown | _posts/2021-01-20-til.md | HeeyeonJeong/HeeyeonJeong.github.io | da5d31ad5f99efa6531bacfc15c32c3dee9ffb95 | [
"MIT"
] | null | null | null | _posts/2021-01-20-til.md | HeeyeonJeong/HeeyeonJeong.github.io | da5d31ad5f99efa6531bacfc15c32c3dee9ffb95 | [
"MIT"
] | null | null | null | _posts/2021-01-20-til.md | HeeyeonJeong/HeeyeonJeong.github.io | da5d31ad5f99efa6531bacfc15c32c3dee9ffb95 | [
"MIT"
] | null | null | null | ---
layout: post
title: 2021/01/20_TIL
subtitle:
tags: [TIL, typescript classes]
author: Heeyeon Jeong
comments: true
---
# 🔥 Today
<br>
- [ByteDegree]
- `typescript`
 - # Classes
 - ## 1. Access modifiers
 - protected: a property or method accessible only from inside the class (public by default) / still accessible under inheritance
 - private: a property or method accessible only from inside the class (public by default)
 - In common: neither can be accessed from an instance outside the class.
 - Difference: under inheritance, protected members stay accessible while private members do not.
 <br>
 - ## 2. Declaring and assigning a property at once + access modifiers
 - When defining a constructor, writing an access modifier on a parameter declares the property and assigns it in a single step.
 - When the access modifier is public (normally the default, so it can be omitted), it must be written explicitly on the parameter.
 <br>
 - ## 3. Inheritance and interfaces
 - implements: how a class implements an interface
 - abstract class: a class that cannot be instantiated, i.e. an incomplete class
 - methods declared abstract must be implemented.
 - a subclass inherits the abstract class and implements it (and that subclass is what gets instantiated)
 <br>
 - Mini Project / Part3 ⏩
 - Implemented pagination: changed the code to request an API that returns the data for each page.
 <br>
# 🔥 To Do
<br>
- [ecommerce-website] ⏩
- [BoostCourse] project B
- [Algorithm] solve at least one problem a day in `javaScript`
- [BoostCourse] CS data structures and algorithms lectures
<br>
- [ByteDegree] `React` week 9 - TypeScript ⏩
 - week 9 quiz: due 01/25
 - Mini Project / Part3 / comment service with server integration: 01/12 ~ 01/25 ⏩
| 19.95 | 89 | 0.639933 | kor_Hang | 1.00001 |
4db13abb4bef12670451eda8b645f138e7906417 | 9,462 | md | Markdown | _posts/2022-04-28-issue-118.md | rottenstonks/revuo-weekly | a3e7d0611eb2d2ba32b0c806891ee01f44eb9067 | [
"MIT"
] | null | null | null | _posts/2022-04-28-issue-118.md | rottenstonks/revuo-weekly | a3e7d0611eb2d2ba32b0c806891ee01f44eb9067 | [
"MIT"
] | 1 | 2022-02-22T18:38:13.000Z | 2022-02-22T18:38:13.000Z | _posts/2022-04-28-issue-118.md | rottenstonks/revuo-weekly | a3e7d0611eb2d2ba32b0c806891ee01f44eb9067 | [
"MIT"
] | 3 | 2022-02-12T06:44:26.000Z | 2022-02-21T00:32:45.000Z | ---
title: Issue 118: April 21-28, 2022
image: /img/img-issue118.png
issuenumber: 118
---
[<img src="/img/img-issue118.png" alt="Revuo Monero Weekly #118 Slide" class="img-lead">]({% post_url 2022-04-28-issue-118 %}.html)
<p class="text-lead">Revuo Monero Weekly: April 21-28, 2022</p>
<!--more-->
<h3>Table of Contents:</h3>
<ul class="contents">
<li><a href="#news">Recent News</a></li>
<li><a href="#events">Upcoming Events</a></li>
<li><a href="#ideas">CCS Proposal Ideas</a></li>
<li><a href="#proposals">CCS Proposals Need Funding</a></li>
<li><a href="#stats">Price & Blockchain Stats</a></li>
<li><a href="#volunteer">Volunteer Opportunities</a></li>
<li><a href="#support">Support</a></li>
</ul>
<h3 id="news">Recent News</h3>
<div class="newsbyte">
<h4>Unstoppable Swap v0.2.0 has been <a href="https://github.com/UnstoppableSwap/unstoppableswap-gui/releases/tag/v0.2.0" target="_blank">released</a> [Testnet]. Peep some screenshots, along with steps to set it up <a href="https://github.com/UnstoppableSwap/unstoppableswap-gui/blob/main/docs/SWAP_TESTNET.md" target="_blank">here</a>, or watch a demo posted by reddit user, unstoppableswap, <a href="https://teddit.adminforge.de/r/Monero/comments/uawipv/atomic_swap_gui_demo_on_mainnet_unstoppableswap/" target="_blank">here</a>.</h4>
</div>
<div class="newsbyte">
<h4>Reddit user, benevanoff, <a href="https://teddit.adminforge.de/r/Monero/comments/ubw6xv/xmrmultisweeper_tool_beta_release/" target="_blank">shared</a> XMR-multisweeper beta — Automate the process of syncing and sweeping several wallets all at once —. <a href="https://github.com/benevanoff/xmr-multisweeper" target="_blank">Source code</a>.</h4>
</div>
<div class="newsbyte">
<h4>reemuru has started working on Monero Development focused guides for potential new contributors. If you are interested in joining the grassroots XMR ecosystem, it can be a handy resource to dive into. Check it out on <a href="https://github.com/hyahatiph-labs/hlc/tree/main/xmr-dev-guides" target="_blank">GitHub</a>. Want to send tips to incentivize the endeavor? <a href="https://hiahatf.org/donate/" target="_blank">Do it</a>.</h4>
</div>
<div class="newsbyte">
<h4>moneroguides' "Getting to grips with Monero" mini series are published. Set of 4 actionable videos to further your understanding of XMR's concepts and do so with a hands-on approach in mind. <a href="https://moneroguides.org/" target="_blank">moneroguides.org</a>.</h4>
</div>
<div class="newsbyte">
<h4>monero-bash v1.4.1 is <a href="https://github.com/hinto-janaiyo/monero-bash/releases/tag/v1.4.1" target="_blank">out</a>.</h4>
</div>
<h3 id="events">Upcoming Events</h3>
<div class="event">
<p class="date" markdown="1">April 30, 2022 (Saturday) – 18:00 UTC</p>
<p markdown="1">MoneroKon 2022 Meeting - <a href="irc://irc.libera.chat/#monero-events" target="_blank">#monero-events</a> IRC channel; <a href="https://matrix.to/#/#monero-events:monero.social" target="_blank">Matrix room</a>.</p>
</div>
<div class="event">
<p class="date" markdown="1">May 1, 2022 (Sunday) – 18:00 UTC</p>
<p markdown="1">Community Workgroup Meeting - <a href="irc://irc.libera.chat/#monero-community" target="_blank">#monero-community</a> IRC channel; <a href="https://matrix.to/#/#monero-community:monero.social" target="_blank">Matrix room</a>.</p>
</div>
<div class="event">
<p class="date" markdown="1">May 4, 2022 (Wednesday) – 17:00 UTC</p>
<p markdown="1">Research Lab Meeting - <a href="irc://irc.libera.chat/#monero-research-lab" target="_blank">#monero-research-lab</a> IRC channel; <a href="https://matrix.to/#/#monero-research-lab:monero.social" target="_blank">Matrix room</a>.</p>
</div>
<h3 id="ideas">CCS Proposal Ideas</h3>
<p>Below you can find recent CCS proposal ideas open for discussion.</p>
<div class="proposal">
<p><a href="https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/310" target="_blank">Patronero - Open Source project for donating by mining</a>.</p>
</div>
<div class="proposal">
<p><a href="https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/311" target="_blank">Multi-scene monero solutions</a>.</p>
</div>
<div class="proposal">
<p><a href="https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/314" target="_blank">Seraphis Wallet PoC 2 funding proposal</a>.</p>
</div>
<div class="proposal">
<p><a href="https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/316" target="_blank">Interactive Developer Guides</a>.</p>
</div>
<h3 id="proposals">CCS Proposals Need Funding</h3>
<div class="proposal">
<p><a href="https://ccs.getmonero.org/proposals/cryptogrampy-hotshop-dev.html" target="_blank">HotShop Point of Sale</a> by cryptogrampy.</p>
<p>Raised <b>11.54 of 18</b> XMR.</p>
</div>
<div class="proposal">
<p><a href="https://ccs.getmonero.org/proposals/savandra-videos-for-monero.html" target="_blank">New Animated Videos</a> by savandra.</p>
<p>Raised <b>5.72 of 45</b> XMR.</p>
</div>
<div class="proposal">
<p><a href="https://ccs.getmonero.org/proposals/The-Monero-Moon-CCS-Proposal-March2022-John-Foss.html" target="_blank">The Monero Moon - March 2022</a> by John Foss.</p>
<p>Raised <b>24.49 of 36</b> XMR.</p>
</div>
<h3 id="stats">Price & Blockchain Stats</h3>
<h4 class="stat">Blockchain Stats</h4>
<div class="bcstats">
<p>Block height: <b>2611909</b></p>
<p>Hash rate: <b>2.649 GH/s</b></p>
<p>Average txs. per block: <b>31.99</b></p>
<p>Weekly Moving Average txs. per day: <b>28,916</b></p>
<p>Block reward: <b>~0.64 XMR</b></p>
</div>
<p class="note">Data taken on April 28, 2022.</p>
<h4 class="stat">XMR Blocks Distribution in last 1000 blocks</h4>
<p><img src="/img/hashrate-pool-distribution-0428.png" alt="Hashrate Pool Distribution Pie Chart"/></p>
<h4 class="stat" id="price-stat">Price & Performance</h4>
<div class="price-intro">XMR Market Cap: <b>$4,134,394,611</b>.<br/>Localmonero.co Street Price: <b>$258.97</b>.</div>
<p class="table-title">Monero (XMR) Price</p>
<table class="price-table">
<tr class="row1">
<th></th>
<th>04/28/22</th>
<th>Week</th>
<th>Month</th>
<th>Year</th>
</tr>
<tr>
<td data-th="XMR to">USD</td>
<td data-th="04/28/22">$228.10</td>
<td data-th="Week" class="red">-14.6%</td>
<td data-th="Month" class="green">+6.2%</td>
<td data-th="Year" class="red">-43.2%</td>
</tr>
<tr class="row3">
<td data-th="XMR to">EUR</td>
<td data-th="04/28/22">€217.03</td>
<td data-th="Week" class="red">-11.8%</td>
<td data-th="Month" class="green">+11.1%</td>
<td data-th="Year" class="red">-34.6%</td>
</tr>
<tr>
<td data-th="XMR to">BTC</td>
<td data-th="04/28/22">₿0.00571098</td>
<td data-th="Week" class="red">-11.5%</td>
<td data-th="Month" class="green">+25.1%</td>
<td data-th="Year" class="red">-21.6%</td>
</tr>
</table>
<p class="note">Data taken on April 28, 2022.</p>
<p class="table-title">XMR Price Graph</p>

Sources: <a href="https://miningpoolstats.stream/monero" target="_blank">miningpoolstats.stream</a>; <a href="https://bitinfocharts.com/monero/" target="_blank">bitinfocharts.com</a>; <a href="https://www.coingecko.com/en/coins/monero" target="_blank">coingecko.com</a>; <a href="https://localmonero.co/statistics" target="_blank">localmonero.co statistics</a>; <a href="https://localmonero.co/blocks" target="_blank">localmonero.co blocks</a>.
<h3 id="volunteer">Volunteer Opportunities</h3>
<p>If you want to get involved in making Monero better, but aren't sure how, check out some volunteer opportunities.</p>
<div class="newsbyte">
<p class="date"><a href="https://github.com/monero-project/monero" target="_blank">Test Monero Core Software</a></p>
<p>Anyone with moderate technical ability is encouraged to try to build and run Monero nightlies. Do not trust it with your Monero, but feel free to open an Issue on GitHub as problems arise. Instructions to build on your OS of choice can be found <a href="https://github.com/monero-project/monero#compiling-monero-from-source" target="_blank">here</a>. </p>
</div>
<div class="newsbyte">
<p class="date"><a href="https://github.com/monero-project/monero" target="_blank">Getting Started with Helping Monero</a></p>
<p>If you are new to Monero and want to contribute, please check out <a href="https://www.monerooutreach.org/stories/getting-started-helping-monero.php" target="_blank">this article about volunteering and contributing to Monero</a> from the Monero Outreach Workgroup. </p>
</div>
<h3 id="support">Support</h3>
<p markdown="1">Revuo is an <a href="https://revuo-xmr.com/support/">independent newsletter</a>. If you enjoy this publication and want to support it, you can send some XMR to this subaddress:</p>
<p class="address" markdown="1">89Esx7ZAoVcD9wiDw57gxgS7m52sFEEbQiFC4qq18YZy3CdcsXvJ67FYdcDFbmYEGK7xerxgmDptd1C2xLstCbgF3RUhSMT</p>
<p><center><a href="monero:89Esx7ZAoVcD9wiDw57gxgS7m52sFEEbQiFC4qq18YZy3CdcsXvJ67FYdcDFbmYEGK7xerxgmDptd1C2xLstCbgF3RUhSMT" class="qr"><img src="/img/donate-monero.jpg" style="max-width: 200px;"/></a></center></p>
Comments, criticisms, want to share links to be included in future issues? Contact us at **[email protected]**. | 52.860335 | 540 | 0.6933 | eng_Latn | 0.275638 |
4db1a319699dcd2e8628935d10fc2cb1dc58c530 | 750 | markdown | Markdown | content/post/04-tidy/index.markdown | chiraleducation/IGDS | 31d80d72df2aef8c17fd16660ecc35aa6fe6d1ff | [
"MIT"
] | null | null | null | content/post/04-tidy/index.markdown | chiraleducation/IGDS | 31d80d72df2aef8c17fd16660ecc35aa6fe6d1ff | [
"MIT"
] | null | null | null | content/post/04-tidy/index.markdown | chiraleducation/IGDS | 31d80d72df2aef8c17fd16660ecc35aa6fe6d1ff | [
"MIT"
] | null | null | null | ---
date: "2018-04-30T00:00:00Z"
draft: false
featured: false
image:
caption: 'Image credit: [**Vladimir Proskurovskiy on Unsplash**](https://unsplash.com/photos/fE1b8smeOM0)'
focal_point: ""
placement: 2
preview_only: true
links:
- icon: flask
icon_pack: fas
name: lab
url: /labs/04-challenge_rev.html
- icon: magic
icon_pack: fas
name: slides
url: /slides/04-slides.html
projects: []
subtitle: Tidy Data + Distributions
summary: Tidy Data + Distributions
title: Lab 04
---
## Reference lab on distributions:
[View lab](/labs/04-distributions.html)
## Tidy Data:
http://r4ds.had.co.nz/tidy-data.html
http://moderndive.com/4-tidy.html
http://vita.had.co.nz/papers/tidy-data.html
https://github.com/jennybc/lotr-tidy#readme
| 19.736842 | 108 | 0.718667 | kor_Hang | 0.343185 |
4db1bc2adde318aad67bf13091b90006ae4ad24a | 318 | md | Markdown | avd_docs/rules/docker/AVD-DS-0016/docs.md | weisdd/defsec | 7f21da8b92df69cdcc881c9dd6adf3a78b519cb9 | [
"MIT"
] | null | null | null | avd_docs/rules/docker/AVD-DS-0016/docs.md | weisdd/defsec | 7f21da8b92df69cdcc881c9dd6adf3a78b519cb9 | [
"MIT"
] | null | null | null | avd_docs/rules/docker/AVD-DS-0016/docs.md | weisdd/defsec | 7f21da8b92df69cdcc881c9dd6adf3a78b519cb9 | [
"MIT"
] | null | null | null |
### Multiple CMD instructions listed
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
### Impact
<!-- Add Impact here -->
<!-- DO NOT CHANGE -->
{{ remediationActions }}
### Links
- https://docs.docker.com/engine/reference/builder/#cmd
| 22.714286 | 125 | 0.704403 | eng_Latn | 0.879947 |
4db1bf722829b4bcd10feda215bd3bbbb90d6f37 | 2,416 | md | Markdown | windows-driver-docs-pr/image/wia-ips-job-separators.md | msmarkma/windows-driver-docs | b5f403fff45d9a25f4d55e52d84996aba457360a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-18T03:12:31.000Z | 2021-04-18T03:12:31.000Z | windows-driver-docs-pr/image/wia-ips-job-separators.md | msmarkma/windows-driver-docs | b5f403fff45d9a25f4d55e52d84996aba457360a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/image/wia-ips-job-separators.md | msmarkma/windows-driver-docs | b5f403fff45d9a25f4d55e52d84996aba457360a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-23T22:45:54.000Z | 2021-02-23T22:45:54.000Z | ---
title: WIA\_IPS\_JOB\_SEPARATORS
description: The WIA\_IPS\_JOB\_SEPARATORS property is used to enable the detection of job separators, and to configure the action that the device executes when it detects a job separator page. The WIA minidriver creates and maintains this property.
keywords: ["WIA_IPS_JOB_SEPARATORS Imaging Devices"]
topic_type:
- apiref
api_name:
- WIA_IPS_JOB_SEPARATORS
api_location:
- Wiadef.h
api_type:
- HeaderDef
ms.date: 11/28/2017
ms.localizationpriority: medium
---
# WIA\_IPS\_JOB\_SEPARATORS
The **WIA\_IPS\_JOB\_SEPARATORS** property is used to enable the detection of job separators, and to configure the action that the device executes when it detects a job separator page. The WIA minidriver creates and maintains this property.
Property Type: VT\_I4
Valid Values: WIA\_PROP\_LIST
Access Rights: Read/Write
Remarks
-------
The following table describes the valid values for the **WIA\_IPS\_JOB\_SEPARATORS** property.
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th>Value</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>WIA_SEPARATOR_DISABLED</p></td>
<td><p>Job separators detection is disabled. This is the required default value if the property is supported.</p></td>
</tr>
<tr class="even">
<td><p>WIA_SEPARATOR_DETECT_SCAN_CONTINUE</p></td>
<td><p>Detect job separator page, scan the separator page, and continue scanning.</p></td>
</tr>
<tr class="odd">
<td><p>WIA_SEPARATOR_DETECT_SCAN_STOP</p></td>
<td><p>Detect job separator page, scan the separator page, and stop scanning.</p></td>
</tr>
<tr class="even">
<td><p>WIA_SEPARATOR_DETECT_NOSCAN_CONTINUE</p></td>
<td><p>Detect job separator page, do not scan (skip) the separator page, and continue scanning.</p></td>
</tr>
<tr class="odd">
<td><p>WIA_SEPARATOR_DETECT_NOSCAN_STOP</p></td>
<td><p>Detect job separator page, do not scan (skip) the separator page, and stop scanning.</p></td>
</tr>
</tbody>
</table>
This property is optional, and is valid only for the Feeder data source item (represented in the [**WIA\_IPA\_ITEM\_CATEGORY**](wia-ipa-item-category.md) property as WIA\_CATEGORY\_FEEDER).
Requirements
------------
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td><p>Header</p></td>
<td>Wiadef.h (include Wiadef.h)</td>
</tr>
</tbody>
</table>
| 24.40404 | 249 | 0.723096 | eng_Latn | 0.836528 |
4db22bf9652325b81f325a790abbb3755d675226 | 1,069 | md | Markdown | README.md | bb4/bb4-template | ffe636b6c103bc45243f9a1d34d3d0ea4caa05e5 | [
"MIT"
] | null | null | null | README.md | bb4/bb4-template | ffe636b6c103bc45243f9a1d34d3d0ea4caa05e5 | [
"MIT"
] | null | null | null | README.md | bb4/bb4-template | ffe636b6c103bc45243f9a1d34d3d0ea4caa05e5 | [
"MIT"
] | null | null | null | # bb4-project-template
A trivial bb4 project that can be used as a template when creating new bb4 projects.
## How to use it
1. First create a new bb4-\<new project\> repository in GitHub with no files in it.
1. Then either
* Create a [bare clone](https://help.github.com/articles/duplicating-a-repository/) and modify it, or
* Manually copy the project files from a clone of bb4-project-template into a new clone of the empty bb4-\<new project\>
directory on the file system. Do not copy the iml file or the following directories:
* .* (.git, .gradle, .idea)
* build
* gradle
* out
I prefer the second approach because then it will not have the git history from bb4-project-template.
1. Lastly, open the new project in IntelliJ by opening the build.gradle file. In the import dialog, select the option to use the Gradle wrapper instead of specifying the location of Gradle. Now you have a working project to start from. Just modify it as needed to create your new bb4 project. Update the gitUrl in the Jenkins file.
| 48.590909 | 331 | 0.730589 | eng_Latn | 0.998482 |
4db23189ca57c5e25847781d426a23ea3474d96e | 21 | md | Markdown | README.md | burch-cm/ksn-landing-page | 4ff85704c66933c1f51b9fd02b49865dfd08369c | [
"MIT"
] | null | null | null | README.md | burch-cm/ksn-landing-page | 4ff85704c66933c1f51b9fd02b49865dfd08369c | [
"MIT"
] | null | null | null | README.md | burch-cm/ksn-landing-page | 4ff85704c66933c1f51b9fd02b49865dfd08369c | [
"MIT"
] | null | null | null | # ksn-landing-page
| 7 | 18 | 0.666667 | lit_Latn | 0.523193 |
4db3ae820e83c5f3ce4a2741d18e7694154cded1 | 44 | md | Markdown | README.md | AnmanTechnology/mbed-PS2Mouse | 29673adebc246d426c1569eb1108049fb7f77155 | [
"BSD-3-Clause"
] | null | null | null | README.md | AnmanTechnology/mbed-PS2Mouse | 29673adebc246d426c1569eb1108049fb7f77155 | [
"BSD-3-Clause"
] | null | null | null | README.md | AnmanTechnology/mbed-PS2Mouse | 29673adebc246d426c1569eb1108049fb7f77155 | [
"BSD-3-Clause"
] | null | null | null | # mbed-PS2Mouse
PS/2 Mouse Library for mbed
| 14.666667 | 27 | 0.772727 | yue_Hant | 0.99632 |
4db4165963d21588b0518ff7dd4ec7279f1c2636 | 2,091 | md | Markdown | _posts/2020/2020-05-18-covid-19-changes-the-world.md | kwaka1208/kwaka1208.github.io | 8d86646ed793826e99ba8956c766037f07310740 | [
"MIT"
] | 2 | 2021-04-16T00:55:02.000Z | 2022-03-30T15:07:43.000Z | _posts/2020/2020-05-18-covid-19-changes-the-world.md | kwaka1208/kwaka1208.github.io | 8d86646ed793826e99ba8956c766037f07310740 | [
"MIT"
] | 1 | 2021-06-01T11:08:30.000Z | 2021-06-01T11:08:30.000Z | _posts/2020/2020-05-18-covid-19-changes-the-world.md | kwaka1208/kwaka1208.github.io | 8d86646ed793826e99ba8956c766037f07310740 | [
"MIT"
] | null | null | null | ---
title: How the Novel Coronavirus Is Changing Society
date: 2020-05-18T011:23:00 UTC+9
author: Wakabayashi, Kenichi
layout: post
permalink: /note/covid-19-changes-the-world
image : /assets/images/2020/covid-19-changes-the-world.png
categories: living_with_others
---
Here and there online I see arguments that society will change under the influence of the novel coronavirus. Some call it "after corona", and others object that the virus will not completely die out, so it should be "with corona" rather than "after". In practice, whether it is "after" or "with" is a minor detail; what I believe we really have to consider are two kinds of change, short-term and long-term.
Short-term means the lifestyle we should adopt for the six months to a year (or longer, depending on circumstances) in which countering the virus is front of mind. The main themes are exactly what we are working on now: how to embed hand washing, gargling, and social distancing into daily life. For shops this means things like seat layout, hand sanitizer, and how services are delivered; for office work, making remote work a standard option.
Long-term, on the other hand, means thinking about what ways of living are best, not limited to this coronavirus, given that human society never goes exactly as humans intend, and then acting on it.
We have had plenty of opportunities to think about this before. In Japan, earthquakes and floods strike almost every year, and in large earthquake disasters we have experienced being unable to live our usual daily lives. Yet the idea of changing our way of life rarely took hold, probably because those events were local and we never all acted with the same awareness.
This time the impact of the coronavirus reaches the whole world, not just Japan, and it is fair to say the entire world shares the same awareness and sense of crisis. Foreign news showing people queuing in masks at a distance, or supermarkets sold out of food and daily necessities, makes clear that every country faces the same problem.
A state in which the whole world shares one awareness is a very rare opportunity, and a good chance to change. Change comes with pain, but this time the pain arrived first, and changes to overcome it are happening everywhere. Having already paid that price, simply getting through it and returning to the old life would be a waste. Since we have suffered anyway, why not use it as a springboard to a better society: the spirit of never falling down for nothing.
So I want to summarize, in my own way, how society will change, and how it should change, along six axes: the basics of life, "clothing, food, and housing", plus "education", "work", and "play".
Of course this includes my own subjectivity and wishful thinking, and since many parameters influence how society changes I cannot point to a direction that is best for everyone, but I intend to put it together as one way of thinking.
As I think through each of clothing, food, housing, education, work, and play, I expect a common set of values to emerge. It is still vague, but I am fairly sure that "the essential value of each thing will be demanded and waste will be eliminated" applies everywhere, and that we will gain "ways of living that put the individual first" and "richer lives thanks to the waste we cut".
My own income has fallen and I am living off my savings at the moment, but with a regular schedule, proper meals, and more exercise than before, I feel I am living a richer life than before the coronavirus (though the income problem is, of course, a big issue I must solve urgently).
The new society beyond the pain inflicted by the coronavirus will remain difficult in the short term, but in the long term I believe it holds a more humane and richer way of living. I will keep thinking actively about that long-term outlook.
Anyone can do this, and many ways of thinking are welcome, so I would love to hear other people's views. If you thought "that sounds interesting", please share your own perspective; I would be glad to read it.
4db5267663f67a98d13338a0209525255e147d4d | 9,691 | md | Markdown | articles/storage/blobs/storage-blob-event-overview.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/blobs/storage-blob-event-overview.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/blobs/storage-blob-event-overview.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Reacting to Azure Blob storage events | Microsoft Docs
description: Subscribe to Blob storage events by using Azure Event Grid.
author: normesta
ms.author: normesta
ms.date: 04/06/2020
ms.topic: conceptual
ms.service: storage
ms.subservice: blobs
ms.reviewer: cbrooks
ms.openlocfilehash: d9c666fd6fcf020908b6fc5bdd639261853ad9c6
ms.sourcegitcommit: 98e79b359c4c6df2d8f9a47e0dbe93f3158be629
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/07/2020
ms.locfileid: "80811545"
---
# <a name="reacting-to-blob-storage-events"></a>Reagowanie na zdarzenia usługi Blob Storage
Zdarzenia usługi Azure Storage umożliwiają aplikacjom reagowanie na zdarzenia, takie jak tworzenie i usuwanie obiektów blob. Robi to bez konieczności skomplikowanego kodu lub kosztownych i nieefektywnych usług sondowania. Najlepsze jest to, że płacisz tylko za to, czego używasz.
Zdarzenia magazynu obiektów Blob są wypychane przy użyciu [usługi Azure Event Grid](https://azure.microsoft.com/services/event-grid/) do subskrybentów, takich jak usługi Azure Functions, Usługi Azure Logic Apps, a nawet do własnego odbiornika http. Usługa Event Grid zapewnia niezawodne dostarczanie zdarzeń do aplikacji za pośrednictwem rozszerzonych zasad ponawiania prób i dead-lettering.
Zobacz artykuł [schemat zdarzeń magazynu obiektów blob,](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) aby wyświetlić pełną listę zdarzeń, które obsługuje magazyn obiektów Blob.
Typowe scenariusze zdarzeń magazynu obiektów blob obejmują przetwarzanie obrazu lub wideo, indeksowanie wyszukiwania lub dowolny przepływ pracy zorientowany na plik. Przesyłanie plików asynchronicznych doskonale nadają się do zdarzeń. Gdy zmiany są rzadkie, ale scenariusz wymaga natychmiastowej reakcji, architektura oparta na zdarzeniach może być szczególnie wydajna.
Jeśli chcesz wypróbować zdarzenia magazynu obiektów blob, zobacz dowolne z następujących artykułów szybkiego startu:
|Jeśli chcesz użyć tego narzędzia: |Zobacz ten artykuł: |
|--|-|
|Azure Portal |[Szybki start: kierowanie zdarzeń magazynu obiektów Blob do punktu końcowego sieci Web za pomocą witryny Azure portal](https://docs.microsoft.com/azure/event-grid/blob-event-quickstart-portal?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
|PowerShell |[Szybki start: kierowanie zdarzeń magazynu do punktu końcowego sieci Web za pomocą programu PowerShell](https://docs.microsoft.com/azure/storage/blobs/storage-blob-event-quickstart-powershell?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
|Interfejs wiersza polecenia platformy Azure |[Szybki start: kierowanie zdarzeń magazynu do punktu końcowego sieci Web za pomocą interfejsu wiersza polecenia platformy Azure](https://docs.microsoft.com/azure/storage/blobs/storage-blob-event-quickstart?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
Aby wyświetlić szczegółowe przykłady reagowania na zdarzenia magazynu obiektów Blob przy użyciu funkcji platformy Azure, zobacz następujące artykuły:
- [Samouczek: Aby zaktualizować tabelę Databricks Delta, użyj zdarzeń usługi Azure Data Lake Storage Gen2.](data-lake-storage-events.md)
- [Samouczek: automatyzacja zmiany rozmiaru przesłanych obrazów przy użyciu siatki zdarzeń](https://docs.microsoft.com/azure/event-grid/resize-images-on-storage-blob-upload-event?tabs=dotnet)
>[!NOTE]
> Tylko konta magazynu typu **StorageV2 (ogólnego przeznaczenia v2),** **BlockBlobStorage**i **BlobStorage** obsługują integrację zdarzeń. **Magazyn (genral purpose v1)** *nie* obsługuje integracji z siatką zdarzeń.
## <a name="the-event-model"></a>Model zdarzenia
Usługa Event Grid używa [subskrypcji zdarzeń](../../event-grid/concepts.md#event-subscriptions) do kierowania wiadomości o zdarzeniach do subskrybentów. Ten obraz ilustruje relację między wydawcami zdarzeń, subskrypcjami zdarzeń i programami obsługi zdarzeń.

Najpierw zasubskrybuj punkt końcowy zdarzenia. Następnie po wyzwoleniu zdarzenia usługa Event Grid wyśle dane o tym zdarzeniu do punktu końcowego.
Zobacz artykuł [schematu zdarzeń magazynu obiektów blob,](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) aby wyświetlić:
> [!div class="checklist"]
> * Pełna lista zdarzeń magazynu obiektów Blob i sposób wyzwalania każdego zdarzenia.
> * Przykład danych, które siatka zdarzeń będzie wysyłać dla każdego z tych zdarzeń.
> * Cel każdej pary wartości klucza, który pojawia się w danych.
## <a name="filtering-events"></a>Filtrowanie zdarzeń
Zdarzenia obiektów blob [mogą być filtrowane](/cli/azure/eventgrid/event-subscription?view=azure-cli-latest) według typu zdarzenia, nazwy kontenera lub nazwy obiektu, który został utworzony/usunięty. Filtry w siatce zdarzeń są zgodne z początkiem lub końcem tematu, więc zdarzenia z pasującym obiektem przechodzą do subskrybenta.
Aby dowiedzieć się więcej o stosowaniu filtrów, zobacz [Filtrowanie zdarzeń dla siatki zdarzeń](https://docs.microsoft.com/azure/event-grid/how-to-filter-events).
Temat zdarzeń magazynu obiektów Blob używa formatu:
```
/blobServices/default/containers/<containername>/blobs/<blobname>
```
Aby dopasować wszystkie zdarzenia dla konta magazynu, można pozostawić filtry tematu puste.
Aby dopasować zdarzenia z obiektów blob utworzonych w `subjectBeginsWith` zestawie kontenerów udostępniających prefiks, należy użyć filtru, takiego jak:
```
/blobServices/default/containers/containerprefix
```
Aby dopasować zdarzenia z obiektów blob `subjectBeginsWith` utworzonych w określonym kontenerze, należy użyć filtru, takiego jak:
```
/blobServices/default/containers/containername/
```
Aby dopasować zdarzenia z obiektów blob utworzonych w określonym `subjectBeginsWith` kontenerze udostępniając prefiks nazwy obiektu blob, należy użyć filtru, takiego jak:
```
/blobServices/default/containers/containername/blobs/blobprefix
```
Aby dopasować zdarzenia z obiektów blob utworzonych w określonym `subjectEndsWith` kontenerze udostępniając sufiks obiektu blob, należy użyć filtru, takiego jak ".log" lub ".jpg". Aby uzyskać więcej informacji, zobacz [Pojęcia siatki zdarzeń](../../event-grid/concepts.md#event-subscriptions).
## <a name="practices-for-consuming-events"></a>Praktyki dotyczące spożywania zdarzeń
Aplikacje obsługujące zdarzenia magazynu obiektów Blob powinny stosować się do kilku zalecanych rozwiązań:
> [!div class="checklist"]
> * Ponieważ wiele subskrypcji można skonfigurować do kierowania zdarzeń do tego samego programu obsługi zdarzeń, ważne jest, aby nie zakładać, że zdarzenia pochodzą z określonego źródła, ale aby sprawdzić temat wiadomości, aby upewnić się, że pochodzi z oczekiwanego konta magazynu.
> * Podobnie sprawdź, czy eventType jest jednym jesteś przygotowany do przetworzenia i nie zakładaj, że wszystkie zdarzenia, które otrzymasz będą typy, których oczekujesz.
> * Ponieważ wiadomości mogą docierać po pewnym opóźnieniu, użyj pól etag, aby dowiedzieć się, czy informacje o obiektach są nadal aktualne. Aby dowiedzieć się, jak korzystać z pola etag, zobacz [Zarządzanie współbieżnością w magazynie obiektów Blob](https://docs.microsoft.com/azure/storage/common/storage-concurrency?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#managing-concurrency-in-blob-storage).
> * Ponieważ wiadomości mogą być dostarczane poza kolejnością, użyj pól sekwencera, aby zrozumieć kolejność zdarzeń na dowolnym określonym obiekcie. Pole sekwencera jest wartością ciągu reprezentującą logiczną sekwencję zdarzeń dla określonej nazwy obiektu blob. Można użyć standardowego porównania ciągów, aby zrozumieć względną sekwencję dwóch zdarzeń o tej samej nazwie obiektu blob.
> * Zdarzenia magazynu gwarantuje co najmniej raz dostarczanie do subskrybentów, co zapewnia, że wszystkie wiadomości są przesyłane. Jednak ze względu na ponownych prób lub dostępności subskrypcji, zduplikowane wiadomości mogą czasami wystąpić. Aby dowiedzieć się więcej o dostarczaniu i ponawianiu prób wiadomości, zobacz [Dostarczanie wiadomości w uchorzyć i ponowić próbę](../../event-grid/delivery-and-retry.md).
> * Użyj pola blobType, aby zrozumieć, jakiego typu operacje są dozwolone w obiekcie blob i typy biblioteki klienta, których należy użyć do uzyskania dostępu do obiektu blob. Prawidłowe wartości `BlockBlob` to `PageBlob`jeden lub .
> * Użyj pola adresu `CloudBlockBlob` URL `CloudAppendBlob` z i konstruktorów, aby uzyskać dostęp do obiektu blob.
> * Ignoruj pola, których nie rozumiesz. Ta praktyka pomoże Ci zachować odporność na nowe funkcje, które mogą być dodawane w przyszłości.
> * Jeśli chcesz upewnić się, że **zdarzenie Microsoft.Storage.BlobCreated** jest wyzwalane tylko wtedy, gdy `CopyBlob`blok `PutBlob` `PutBlockList` blob jest całkowicie zatwierdzony, filtruj zdarzenie dla wywołań interfejsu API , lub `FlushWithClose` REST. Te wywołania interfejsu API wyzwalają zdarzenie **Microsoft.Storage.BlobCreated** tylko wtedy, gdy dane są w pełni zaangażowane w blokowy obiekt blob. Aby dowiedzieć się, jak utworzyć filtr, zobacz [Filtrowanie zdarzeń dla siatki zdarzeń](https://docs.microsoft.com/azure/event-grid/how-to-filter-events).
## <a name="next-steps"></a>Następne kroki
Dowiedz się więcej o usłudze Event Grid i wypróbuj zdarzenia magazynu obiektów Blob:
- [Event Grid — informacje](../../event-grid/overview.md)
- [Schemat zdarzeń magazynu obiektów Blob](../../event-grid/event-schema-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
- [Rozsyłanie zdarzeń magazynu obiektów Blob do niestandardowego punktu końcowego sieci Web](storage-blob-event-quickstart.md)
| 84.269565 | 563 | 0.811784 | pol_Latn | 0.99982 |
4db57c50e3b3982b8334cfc641c96729ad563994 | 702 | md | Markdown | .github/CONTRIBUTING.md | EnderDev/PreMiD | e1dfe306617632eb0717647ecc46067243fba53a | [
"MIT"
] | 84 | 2018-10-26T14:54:36.000Z | 2019-01-02T14:11:59.000Z | .github/CONTRIBUTING.md | EnderDev/PreMiD | e1dfe306617632eb0717647ecc46067243fba53a | [
"MIT"
] | 45 | 2018-10-20T15:55:53.000Z | 2019-01-02T03:06:31.000Z | .github/CONTRIBUTING.md | EnderDev/PreMiD | e1dfe306617632eb0717647ecc46067243fba53a | [
"MIT"
] | 19 | 2018-10-20T15:08:40.000Z | 2019-01-02T17:50:21.000Z | # Contributing
## Required knowledge
- JavaScript
- HTML5
- NodeJS
Additional:
- CSS
- [VueJS](https://vuejs.org/)
- [ElectronJS](https://electronjs.org/)
- [NPMjs](https://www.npmjs.com/)
A source code editor is also required. We recommend [Visual Studio Code](https://code.visualstudio.com/).
### Installing the components
1. Install [Git](https://git-scm.com/)
2. Install [Node](https://nodejs.org/en/)
### Cloning the project
1. Fork the [repository](https://github.com/PreMiD/PreMiD)
2. Open a terminal and type `git clone https://github.com/PreMiD/PreMiD`
### Coding your vision
Please keep the structure. We don't want to disorganize our project. Chaotic files may not be accepted.
| 22.645161 | 106 | 0.719373 | yue_Hant | 0.341787 |
4db6bdd411728616c2918f4092545d848485e092 | 23 | md | Markdown | README.md | nalineesonawane/java | ff82aaafb491152b0ab6376473475406d7c33008 | [
"Apache-2.0"
] | null | null | null | README.md | nalineesonawane/java | ff82aaafb491152b0ab6376473475406d7c33008 | [
"Apache-2.0"
] | null | null | null | README.md | nalineesonawane/java | ff82aaafb491152b0ab6376473475406d7c33008 | [
"Apache-2.0"
] | null | null | null | # java
java springboot
| 7.666667 | 15 | 0.782609 | nld_Latn | 0.278414 |
4db74759181313e9173f53b7b00280ab789cd270 | 51,116 | md | Markdown | docs/_solutions/02_exercise_spatial_relationship_and_operations.md | napo/geospatial_course | d40b9ad3fa11e92d6541276261efeea42fc2db89 | [
"MIT"
] | null | null | null | docs/_solutions/02_exercise_spatial_relationship_and_operations.md | napo/geospatial_course | d40b9ad3fa11e92d6541276261efeea42fc2db89 | [
"MIT"
] | null | null | null | docs/_solutions/02_exercise_spatial_relationship_and_operations.md | napo/geospatial_course | d40b9ad3fa11e92d6541276261efeea42fc2db89 | [
"MIT"
] | null | null | null | ---
title: "Solution 02"
permalink: /solutions/02-spatial_relationships_and_operations
excerpt: "Spatial Relationships and Operations"
last_modified_at: 2021-10-14T17:48:05-03:00
header:
teaser: https://grass.osgeo.org/grass78/manuals/addons/v_concave_concave.png
#redirect_from:
# - /theme-setup/
toc: true
---
---
# Exercise 02: Spatial Relationships and Operations
## learning objectives
* repeat the concepts of the previous lesson
* errors with the simplified boundaries
* convex hull / concave hull / alphashape
* nearest_points
---
# Exercise
1 - create the geodataframe of the [gas&oil stations](https://www.mise.gov.it/images/exportCSV/anagrafica_impianti_attivi.csv) of Italy
 - data from the Italian [Ministry of Economic Development](https://www.mise.gov.it)
 - count the total of the gas&oil stations for each municipality of Trentino
2 - identify the differences between the municipalities of Trentino in 2019 and in 2021
 - identify which municipalities were created by aggregating others
 - find the biggest new municipality of Trentino and show all the Italian municipalities bordering it
 - create the macro-area of all the municipalities bordering it
 - for each gas&oil station in the macro-area, calculate how many monumental trees are within a 500 m radius
3 - create a polygon that contains all the monumental trees inside the area
 - identify all the gas&oil stations in this area which are within 2 km of each other
 - save the polygon in a geopackage with the attribute "description" containing the name of the gas&oil station
---
# Setup
```python
try:
import geopandas as gpd
except ModuleNotFoundError as e:
!pip install geopandas==0.10.1
import geopandas as gpd
if gpd.__version__ != "0.10.1":
!pip install -U geopandas==0.10.1
import geopandas as gpd
```
---
### Import of the packages
```python
import geopandas as gpd
import requests
import matplotlib.pyplot as plt
import pandas as pd
pd.options.mode.chained_assignment = None
```
# create the geodataframe of the [gas&oil stations](https://www.mise.gov.it/images/exportCSV/anagrafica_impianti_attivi.csv) of Italy
```python
urlfile = "https://www.mise.gov.it/images/exportCSV/anagrafica_impianti_attivi.csv"
stations = pd.read_csv(urlfile,skiprows=1,sep=";",encoding="ISO-8859-1")
```
```python
stations.head(5)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idImpianto</th>
<th>Gestore</th>
<th>Bandiera</th>
<th>Tipo Impianto</th>
<th>Nome Impianto</th>
<th>Indirizzo</th>
<th>Comune</th>
<th>Provincia</th>
<th>Latitudine</th>
<th>Longitudine</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>46351</td>
<td>DI BENEDETTO CARBURANTI S.R.L.</td>
<td>DBCarburanti</td>
<td>Altro</td>
<td>VILLASETA</td>
<td>VILLASETA S.S.115 KM 186,225</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.293320</td>
<td>13.569777</td>
</tr>
<tr>
<th>1</th>
<td>23778</td>
<td>ALFONSO DI BENEDETTO CARBURANTI LUBRIFICANTI SRL</td>
<td>Sicilpetroli</td>
<td>Altro</td>
<td>A. Di Benedetto srl Via Imera Ag</td>
<td>VIA IMERA 10 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.312391</td>
<td>13.585913</td>
</tr>
<tr>
<th>2</th>
<td>49195</td>
<td>EOS SERVICES S.R.L. A SOCIO UNICO</td>
<td>Q8</td>
<td>Altro</td>
<td>AG021</td>
<td>VIA PETRARCA S.N. 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.298234</td>
<td>13.589792</td>
</tr>
<tr>
<th>3</th>
<td>49460</td>
<td>EOS SERVICES S.R.L. A SOCIO UNICO</td>
<td>Q8</td>
<td>Altro</td>
<td>AG023</td>
<td>CONTRADA FONTANELLE S.N. 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.326120</td>
<td>13.591820</td>
</tr>
<tr>
<th>4</th>
<td>49459</td>
<td>EOS SERVICES S.R.L. A SOCIO UNICO</td>
<td>Q8</td>
<td>Altro</td>
<td>AG024</td>
<td>VILLAGGIO MOSE' S.N.C. 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.274324</td>
<td>13.614224</td>
</tr>
</tbody>
</table>
</div>
```python
stations.columns
```
Index(['idImpianto', 'Gestore', 'Bandiera', 'Tipo Impianto', 'Nome Impianto',
'Indirizzo', 'Comune', 'Provincia', 'Latitudine', 'Longitudine'],
dtype='object')
```python
columns = {
'idImpianto': 'id',
'Gestore': 'manager',
'Bandiera':'company',
'Tipo Impianto':'type',
'Nome Impianto':'name',
'Indirizzo':'address',
'Comune':'city',
'Provincia':'province',
'Latitudine':'latitude',
'Longitudine':'longitude'
}
```
```python
stations.rename(columns=columns,inplace=True)
```
```python
stations.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>city</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>46351</td>
<td>DI BENEDETTO CARBURANTI S.R.L.</td>
<td>DBCarburanti</td>
<td>Altro</td>
<td>VILLASETA</td>
<td>VILLASETA S.S.115 KM 186,225</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.293320</td>
<td>13.569777</td>
</tr>
<tr>
<th>1</th>
<td>23778</td>
<td>ALFONSO DI BENEDETTO CARBURANTI LUBRIFICANTI SRL</td>
<td>Sicilpetroli</td>
<td>Altro</td>
<td>A. Di Benedetto srl Via Imera Ag</td>
<td>VIA IMERA 10 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.312391</td>
<td>13.585913</td>
</tr>
<tr>
<th>2</th>
<td>49195</td>
<td>EOS SERVICES S.R.L. A SOCIO UNICO</td>
<td>Q8</td>
<td>Altro</td>
<td>AG021</td>
<td>VIA PETRARCA S.N. 92100</td>
<td>AGRIGENTO</td>
<td>AG</td>
<td>37.298234</td>
<td>13.589792</td>
</tr>
</tbody>
</table>
</div>
```python
geo_stations = gpd.GeoDataFrame(
stations,
crs='EPSG:4326',
geometry=gpd.points_from_xy(stations.longitude, stations.latitude))
```
```python
geo_stations[~geo_stations.geometry.is_valid].shape[0]
```
5
Error:<br/>
the value should be zero: the geodataframe should contain only valid points.<br/>
Maybe there are some rows where the values of latitude and longitude aren't present
{: .notice--warning}
```python
stations.latitude.isnull().sum()
```
5
```python
stations.longitude.isnull().sum()
```
5
5 ... the same count as the invalid geometries
```python
stations = stations[~stations.latitude.isnull()]
```
```python
geo_stations = gpd.GeoDataFrame(
stations,
crs='EPSG:4326',
geometry=gpd.points_from_xy(stations.longitude, stations.latitude))
```
```python
geo_stations[~geo_stations.geometry.is_valid].shape[0]
```
0
Now it's ZERO :)
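A side note: the same cleanup can be done in a single step with `dropna`, which also covers the (here hypothetical) case where only one of the two coordinates is missing:
```python
# equivalent one-step cleanup: drop every row missing either coordinate
stations = stations.dropna(subset=["latitude", "longitude"])
```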
```python
geo_stations.plot()
plt.show()
```

## count the total of the gas&oil stations for each municipality of Trentino
On the GitHub repository of the course there are the geopackage files with the administrative limits of ISTAT [2020](https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/istat_administrative_units_2020.gpkg) and [2021](https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/istat_administrative_units_2021.gpkg) with generalized geometries
We download both datasets because the second part of the exercise needs them.
```python
url2021 = 'https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/istat_administrative_units_generalized_2021.gpkg'
url2020 = 'https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/istat_administrative_units_generalized_2020.gpkg'
istat2020 = "istat_administrative_units_generalized_2020.gpkg"
istat2021 = "istat_administrative_units_generalized_2021.gpkg"
```
```python
r = requests.get(url2021, allow_redirects=True)
open(istat2021, 'wb').write(r.content)
```
```python
r = requests.get(url2020, allow_redirects=True)
open(istat2020, 'wb').write(r.content)
```
```python
import fiona
fiona.listlayers(istat2020)
```
['municipalities', 'provincies', 'regions', 'macroregions']
```python
fiona.listlayers(istat2021)
```
['municipalities', 'provincies', 'regions', 'macroregions']
```python
provincies2021 = gpd.read_file(istat2021,layer="provincies")
```
```python
provincies2021.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>COD_RIP</th>
<th>COD_REG</th>
<th>COD_PROV</th>
<th>COD_CM</th>
<th>COD_UTS</th>
<th>DEN_PROV</th>
<th>DEN_CM</th>
<th>DEN_UTS</th>
<th>SIGLA</th>
<th>TIPO_UTS</th>
<th>geometry</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>201</td>
<td>201</td>
<td>-</td>
<td>Torino</td>
<td>Torino</td>
<td>TO</td>
<td>Citta metropolitana</td>
<td>MULTIPOLYGON (((411015.006 5049970.983, 411266...</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>1</td>
<td>2</td>
<td>0</td>
<td>2</td>
<td>Vercelli</td>
<td>-</td>
<td>Vercelli</td>
<td>VC</td>
<td>Provincia</td>
<td>MULTIPOLYGON (((438328.612 5087208.215, 439028...</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>1</td>
<td>3</td>
<td>0</td>
<td>3</td>
<td>Novara</td>
<td>-</td>
<td>Novara</td>
<td>NO</td>
<td>Provincia</td>
<td>MULTIPOLYGON (((460929.542 5076320.298, 461165...</td>
</tr>
</tbody>
</table>
</div>
```python
provincies2021.DEN_PROV.unique()
```
```python
array(['-', 'Vercelli', 'Novara', 'Cuneo', 'Asti', 'Alessandria', 'Aosta',
'Imperia', 'Savona', 'La Spezia', 'Varese', 'Como', 'Sondrio',
'Bergamo', 'Brescia', 'Pavia', 'Cremona', 'Mantova', 'Bolzano',
'Trento', 'Verona', 'Vicenza', 'Belluno', 'Treviso', 'Padova',
'Rovigo', 'Udine', 'Gorizia', 'Trieste', 'Piacenza', 'Parma',
"Reggio nell'Emilia", 'Modena', 'Ferrara', 'Ravenna',
"Forli'-Cesena", 'Pesaro e Urbino', 'Ancona', 'Macerata',
'Ascoli Piceno', 'Massa Carrara', 'Lucca', 'Pistoia', 'Livorno',
'Pisa', 'Arezzo', 'Siena', 'Grosseto', 'Perugia', 'Terni',
'Viterbo', 'Rieti', 'Latina', 'Frosinone', 'Caserta', 'Benevento',
'Avellino', 'Salerno', "L'Aquila", 'Teramo', 'Pescara', 'Chieti',
'Campobasso', 'Foggia', 'Taranto', 'Brindisi', 'Lecce', 'Potenza',
'Matera', 'Cosenza', 'Catanzaro', 'Trapani', 'Agrigento',
'Caltanissetta', 'Enna', 'Ragusa', 'Siracusa', 'Sassari', 'Nuoro',
'Pordenone', 'Isernia', 'Oristano', 'Biella', 'Lecco', 'Lodi',
'Rimini', 'Prato', 'Crotone', 'Vibo Valentia',
'Verbano-Cusio-Ossola', 'Monza e della Brianza', 'Fermo',
'Barletta-Andria-Trani', 'Sud Sardegna'], dtype=object)
```
choose the province of Trento
```python
province_of_trento = provincies2021[provincies2021['DEN_PROV']=='Trento']
```
```python
province_of_trento.crs
```
<Projected CRS: EPSG:32632>
Name: WGS 84 / UTM zone 32N
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: Between 6°E and 12°E, northern hemisphere between equator and 84°N, onshore and offshore. Algeria. Austria. Cameroon. Denmark. Equatorial Guinea. France. Gabon. Germany. Italy. Libya. Liechtenstein. Monaco. Netherlands. Niger. Nigeria. Norway. Sao Tome and Principe. Svalbard. Sweden. Switzerland. Tunisia. Vatican City State.
- bounds: (6.0, 0.0, 12.0, 84.0)
Coordinate Operation:
- name: UTM zone 32N
- method: Transverse Mercator
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
```python
boundary_province_of_trento = province_of_trento.to_crs(epsg=4326).geometry.values[0]
```
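The reprojection to EPSG:4326 is needed because `within` compares raw coordinates: the stations are in WGS 84 while the ISTAT layer is in UTM zone 32N, and mixing the two would silently match nothing. A quick check:
```python
# the two layers start out in different reference systems
print(geo_stations.crs)       # EPSG:4326 (WGS 84)
print(province_of_trento.crs) # EPSG:32632 (UTM zone 32N)
```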
### plot it
```python
boundary_province_of_trento
```

```python
stations_province_trento = geo_stations[geo_stations.within(boundary_province_of_trento)]
```
```python
stations_province_trento.plot()
plt.show()
```

```python
stations_province_trento.shape[0]
```
212
Without using the spatial relationship (filtering on the attribute table instead):
```python
stations.province.unique()
```
array(['AG', 'AL', 'AN', 'AO', 'AP', 'AQ', 'AR', 'AT', 'AV', 'BA', 'BG',
'BI', 'BL', 'BN', 'BO', 'BR', 'BS', 'BT', 'BZ', 'CA', 'CB', 'CE',
'CH', 'CI', 'CL', 'CN', 'CO', 'CR', 'CS', 'CT', 'CZ', 'EN', 'FC',
'FE', 'FG', 'FI', 'FM', 'FR', 'GE', 'GO', 'GR', 'IM', 'IS', 'KR',
'LC', 'LE', 'LI', 'LO', 'LT', 'LU', 'MB', 'MC', 'ME', 'MI', 'MN',
'MO', 'MS', 'MT', nan, 'NO', 'NU', 'OG', 'OR', 'OT', 'PA', 'PC',
'PD', 'PE', 'PG', 'PI', 'PN', 'PO', 'PR', 'PT', 'PU', 'PV', 'PZ',
'RA', 'RC', 'RE', 'RG', 'RI', 'RM', 'RN', 'RO', 'SA', 'SI', 'SO',
'SP', 'SR', 'SS', 'SV', 'TA', 'TE', 'TN', 'TO', 'TP', 'TR', 'TS',
'TV', 'UD', 'VA', 'VB', 'VC', 'VE', 'VI', 'VR', 'VS', 'VT', 'VV'],
dtype=object)
```python
provincies2021[provincies2021['DEN_PROV']=='Trento']['SIGLA'].unique()
```
array(['TN'], dtype=object)
```python
stations[stations['province']=='TN'].shape[0]
```
211
212 in the geodataframe<br/>
211 in the dataframe
{: .notice--warning}
```python
stations_province_trento[stations_province_trento['province'] != 'TN']
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>city</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
<th>geometry</th>
</tr>
</thead>
<tbody>
<tr>
<th>21184</th>
<td>40236</td>
<td>SARNI S.R.L.</td>
<td>Agip Eni</td>
<td>Autostradale</td>
<td>ADIGE EST</td>
<td>Autostrada A22 BRENNERO-MODENA, Km. 186.98, di...</td>
<td>BRENTINO BELLUNO</td>
<td>VR</td>
<td>45.695187</td>
<td>10.916713</td>
<td>POINT (10.91671 45.69519)</td>
</tr>
</tbody>
</table>
</div>
```python
point_outside = stations_province_trento[stations_province_trento['province'] != 'TN']
```
```python
point_outside.explore()
```
<a href="webmap/point_ouside_border_web.html"><img src="https://raw.githubusercontent.com/napo/geospatial_course_unitn/master/images/point_ouside_border_web.png"/></a>
```python
point_outside.geometry.within(boundary_province_of_trento)
```
21184 True
dtype: bool
```python
province_of_trento.to_crs(epsg=4326).contains(point_outside.geometry.values[0])
```
21 True
dtype: bool
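Both tests place the station inside the *generalized* polygon even though its record says province "VR": the simplification moved the border slightly. A minimal sketch to measure the offset in metres (reprojecting to UTM, since distances in EPSG:4326 would be in degrees):
```python
# distance from the misclassified station to the generalized provincial border
generalized_tn = province_of_trento.geometry.values[0]             # generalized shape, EPSG:32632
station_utm = point_outside.to_crs(epsg=32632).geometry.values[0]
print(station_utm.distance(generalized_tn.boundary))               # expect just a few metres
```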
We need to use the *non*-generalized version of the administrative limits of Italy.
The course offers two zip files with the shapefiles of the Italian municipalities in [2020](https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/shapefile_istat_municipalities_2020.zip) and [2021](https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/shapefile_istat_municipalities_2021.zip) made by ISTAT.
```python
urlmunicipalities2021 = 'https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/shapefile_istat_municipalities_2021.zip'
```
```python
municipalities2021 = gpd.read_file(urlmunicipalities2021)
```
```python
municipalities2021.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>COD_RIP</th>
<th>COD_REG</th>
<th>COD_PROV</th>
<th>COD_CM</th>
<th>COD_UTS</th>
<th>PRO_COM</th>
<th>PRO_COM_T</th>
<th>COMUNE</th>
<th>COMUNE_A</th>
<th>CC_UTS</th>
<th>SHAPE_LENG</th>
<th>Shape_Le_1</th>
<th>Shape_Area</th>
<th>geometry</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>201</td>
<td>201</td>
<td>1077</td>
<td>001077</td>
<td>Chiaverano</td>
<td>None</td>
<td>0</td>
<td>18164.369945</td>
<td>18164.236621</td>
<td>1.202212e+07</td>
<td>POLYGON ((414358.390 5042001.044, 414381.796 5...</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>201</td>
<td>201</td>
<td>1079</td>
<td>001079</td>
<td>Chiesanuova</td>
<td>None</td>
<td>0</td>
<td>10777.398475</td>
<td>10777.318814</td>
<td>4.118911e+06</td>
<td>POLYGON ((394621.039 5031581.116, 394716.100 5...</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>201</td>
<td>201</td>
<td>1089</td>
<td>001089</td>
<td>Coazze</td>
<td>None</td>
<td>0</td>
<td>41591.434852</td>
<td>41591.122092</td>
<td>5.657268e+07</td>
<td>POLYGON ((364914.897 4993224.894, 364929.991 4...</td>
</tr>
</tbody>
</table>
</div>
```python
cod_prov_trento = provincies2021[provincies2021.DEN_PROV == 'Trento'].COD_PROV.values[0]
```
```python
municipalities_trentino_2021 = municipalities2021[municipalities2021.COD_PROV == cod_prov_trento]
```
```python
province_of_trento = italy = municipalities_trentino_2021.dissolve(by='COD_PROV')
%time
```
CPU times: user 2 µs, sys: 1e+03 ns, total: 3 µs
Wall time: 5.25 µs
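When no attributes need to be aggregated, a lighter alternative to `dissolve` is to merge the geometries directly with `unary_union`; a sketch:
```python
# same outline without the attribute bookkeeping of dissolve
merged_boundary = municipalities_trentino_2021.geometry.unary_union
merged_boundary.equals(province_of_trento.geometry.values[0])  # expected: True (topologically equal)
```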
```python
boundary_province_of_trento = province_of_trento.to_crs(epsg=4326).geometry.values[0]
```
```python
boundary_province_of_trento
```

```python
stations_province_trento = geo_stations[geo_stations.within(boundary_province_of_trento)]
```
```python
stations_province_trento.shape[0]
```
211
the total is right ;)
```python
point_outside.geometry.within(boundary_province_of_trento)
```
21184 False
dtype: bool
and so is the spatial relationship
<br/>Now we can count the number of gas&oil stations for each municipality of Trentino:
```python
stations_by_municipalities = stations_province_trento.groupby(['city']).size().reset_index().rename(columns={0:'total'}).sort_values(['total','city'],ascending=[False,True])
%time
```
CPU times: user 4 µs, sys: 1e+03 ns, total: 5 µs
Wall time: 9.78 µs
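If the secondary ordering by name is not needed, `value_counts` gives the same totals in one call:
```python
# same counts, ordered by total only
stations_province_trento["city"].value_counts()
```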
```python
stations_by_municipalities = stations_province_trento.groupby(['city']).size().reset_index().rename(columns={0:'total'}).sort_values(['total','city'],ascending=[False,True])
```
```python
stations_by_municipalities
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>city</th>
<th>total</th>
</tr>
</thead>
<tbody>
<tr>
<th>89</th>
<td>TRENTO</td>
<td>32</td>
</tr>
<tr>
<th>73</th>
<td>ROVERETO</td>
<td>13</td>
</tr>
<tr>
<th>63</th>
<td>PERGINE VALSUGANA</td>
<td>8</td>
</tr>
<tr>
<th>46</th>
<td>LAVIS</td>
<td>7</td>
</tr>
<tr>
<th>3</th>
<td>ARCO</td>
<td>6</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>93</th>
<td>VIGO DI FASSA</td>
<td>1</td>
</tr>
<tr>
<th>95</th>
<td>VILLA AGNEDO</td>
<td>1</td>
</tr>
<tr>
<th>96</th>
<td>VILLA LAGARINA</td>
<td>1</td>
</tr>
<tr>
<th>97</th>
<td>VOLANO</td>
<td>1</td>
</tr>
<tr>
<th>98</th>
<td>ZUCLO</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>99 rows × 2 columns</p>
</div>
but ... what if the column "city" is not present?
{: .notice--success}
```python
del stations_province_trento['city'] #delete the column city
```
```python
stations_province_trento.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
<th>geometry</th>
</tr>
</thead>
<tbody>
<tr>
<th>18298</th>
<td>5169</td>
<td>CAMPION MARCO E C. S.A.S.</td>
<td>Esso</td>
<td>Strada Statale</td>
<td>CAMPION MARCO E C. S.A.S.</td>
<td>Statale 12 dell'Abetone e del Brennero, Km. 34...</td>
<td>TN</td>
<td>45.803566</td>
<td>11.019186</td>
<td>POINT (11.01919 45.80357)</td>
</tr>
<tr>
<th>18299</th>
<td>7317</td>
<td>FERRARI ATTILIO</td>
<td>Pompe Bianche</td>
<td>Altro</td>
<td>FERRARI ATTILIO</td>
<td>CORSO VERONA 22 38061</td>
<td>TN</td>
<td>45.757343</td>
<td>10.999531</td>
<td>POINT (10.99953 45.75734)</td>
</tr>
<tr>
<th>18300</th>
<td>23796</td>
<td>RO-MA SNC DI GIULIO ROPELE</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>ENI-AGIP</td>
<td>STRADA PROVINCIALE 90 DESTRA ADIGE KM. 18 + 15...</td>
<td>TN</td>
<td>45.989288</td>
<td>11.097397</td>
<td>POINT (11.09740 45.98929)</td>
</tr>
</tbody>
</table>
</div>
reconstruct the name of the city associated with each location
```python
def getNameCity(point, cities):
    # return the name of the municipality whose polygon contains the point;
    # the municipalities are reprojected to WGS84 to match the points' CRS
    # (note: the reprojection is repeated for every point, so this is slow)
    name = cities[cities.to_crs(epsg=4326).contains(point)].COMUNE.values[0]
    return name
```
```python
stations_province_trento['city'] = stations_province_trento.geometry.apply(lambda point: getNameCity(point,municipalities_trentino_2021))
%time
```
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 5.96 µs
```python
stations_province_trento.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
<th>geometry</th>
<th>city</th>
</tr>
</thead>
<tbody>
<tr>
<th>18298</th>
<td>5169</td>
<td>CAMPION MARCO E C. S.A.S.</td>
<td>Esso</td>
<td>Strada Statale</td>
<td>CAMPION MARCO E C. S.A.S.</td>
<td>Statale 12 dell'Abetone e del Brennero, Km. 34...</td>
<td>TN</td>
<td>45.803566</td>
<td>11.019186</td>
<td>POINT (11.01919 45.80357)</td>
<td>Ala</td>
</tr>
<tr>
<th>18299</th>
<td>7317</td>
<td>FERRARI ATTILIO</td>
<td>Pompe Bianche</td>
<td>Altro</td>
<td>FERRARI ATTILIO</td>
<td>CORSO VERONA 22 38061</td>
<td>TN</td>
<td>45.757343</td>
<td>10.999531</td>
<td>POINT (10.99953 45.75734)</td>
<td>Ala</td>
</tr>
<tr>
<th>18300</th>
<td>23796</td>
<td>RO-MA SNC DI GIULIO ROPELE</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>ENI-AGIP</td>
<td>STRADA PROVINCIALE 90 DESTRA ADIGE KM. 18 + 15...</td>
<td>TN</td>
<td>45.989288</td>
<td>11.097397</td>
<td>POINT (11.09740 45.98929)</td>
<td>Aldeno</td>
</tr>
</tbody>
</table>
</div>
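Note: the row-by-row `apply` above works, but it becomes slow with many points. A vectorized alternative is a spatial join (a minimal sketch, assuming geopandas ≥ 0.10 where the keyword is `predicate` — older versions use `op`; `stations_with_city` is just a name chosen here):
```python
# spatially join each station to the 2021 municipality polygon containing it;
# the municipalities are reprojected to WGS84 to match the stations' CRS
stations_with_city = gpd.sjoin(
    stations_province_trento,
    municipalities_trentino_2021.to_crs(epsg=4326)[['COMUNE', 'geometry']],
    how='left',
    predicate='within'
)
```
the joined `COMUNE` column can then be used for the same groupby as before.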
```python
stations_by_municipalities = stations_province_trento.groupby(['city']).size().reset_index().rename(columns={0:'total'}).sort_values(['total','city'],ascending=[False,True])
```
```python
stations_by_municipalities
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>city</th>
<th>total</th>
</tr>
</thead>
<tbody>
<tr>
<th>85</th>
<td>Trento</td>
<td>32</td>
</tr>
<tr>
<th>69</th>
<td>Rovereto</td>
<td>13</td>
</tr>
<tr>
<th>59</th>
<td>Pergine Valsugana</td>
<td>8</td>
</tr>
<tr>
<th>42</th>
<td>Lavis</td>
<td>7</td>
</tr>
<tr>
<th>4</th>
<td>Arco</td>
<td>6</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>87</th>
<td>Vallarsa</td>
<td>1</td>
</tr>
<tr>
<th>88</th>
<td>Vallelaghi</td>
<td>1</td>
</tr>
<tr>
<th>89</th>
<td>Vermiglio</td>
<td>1</td>
</tr>
<tr>
<th>90</th>
<td>Villa Lagarina</td>
<td>1</td>
</tr>
<tr>
<th>92</th>
<td>Volano</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>93 rows × 2 columns</p>
</div>
## identify the differences between the municipalities of Trentino in 2019 and in 2021
```python
urlmunicipalities2019 = 'https://github.com/napo/geospatial_course_unitn/raw/master/data/istat/shapefile_istat_municipalities_2019.zip'
municipalities2019 = gpd.read_file(urlmunicipalities2019)
```
```python
municipalities_trentino_2019 = municipalities2019[municipalities2019['COD_PROV'] == cod_prov_trento]
```
```python
names2019 = list(municipalities_trentino_2019.COMUNE.unique())
```
```python
names2021 = list(municipalities_trentino_2021.COMUNE.unique())
```
```python
notpresentin2021 = list(set(names2019) - set(names2021))
```
```python
notpresentin2021
```
['Varena',
'Brez',
'Revò',
'Romallo',
'Cloz',
'Malosco',
'Carano',
'Castelfondo',
'Cagnò',
'Fondo',
'Daiano',
'Faedo']
```python
notpresentin2019 = list(set(names2021) - set(names2019))
```
```python
notpresentin2019
```
["Borgo d'Anaunia", 'Ville di Fiemme', 'Novella']
```python
old_municipalities_2019 = municipalities_trentino_2019[municipalities_trentino_2019.COMUNE.isin(notpresentin2021)]
```
```python
old_municipalities_2019.plot()
plt.show()
```

```python
new_municipalities_2021 = municipalities_trentino_2021[municipalities_trentino_2021.COMUNE.isin(notpresentin2019)]
```
```python
new_municipalities_2021.plot()
plt.show()
```

## identify which municipalities were created by aggregating others
```python
def whereincluded(geometry, geometries_gdf):
    # return the name of the municipality whose polygon contains the given
    # geometry, or "not included" if no polygon contains it
    name = "not included"
    found = geometries_gdf[geometries_gdf.geometry.contains(geometry)]
    if len(found) > 0:
        name = found.COMUNE.values[0]
    return name
```
```python
old_municipalities_2019['included_in'] = old_municipalities_2019.geometry.apply(lambda g: whereincluded(g,new_municipalities_2021))
```
```python
old_municipalities_2019[['COMUNE','included_in']]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>COMUNE</th>
<th>included_in</th>
</tr>
</thead>
<tbody>
<tr>
<th>724</th>
<td>Brez</td>
<td>Novella</td>
</tr>
<tr>
<th>2168</th>
<td>Carano</td>
<td>Ville di Fiemme</td>
</tr>
<tr>
<th>3247</th>
<td>Cagnò</td>
<td>Novella</td>
</tr>
<tr>
<th>3343</th>
<td>Malosco</td>
<td>Borgo d'Anaunia</td>
</tr>
<tr>
<th>3346</th>
<td>Daiano</td>
<td>Ville di Fiemme</td>
</tr>
<tr>
<th>4269</th>
<td>Cloz</td>
<td>Novella</td>
</tr>
<tr>
<th>4869</th>
<td>Romallo</td>
<td>Novella</td>
</tr>
<tr>
<th>5142</th>
<td>Castelfondo</td>
<td>Borgo d'Anaunia</td>
</tr>
<tr>
<th>5540</th>
<td>Revò</td>
<td>Novella</td>
</tr>
<tr>
<th>5607</th>
<td>Varena</td>
<td>Ville di Fiemme</td>
</tr>
<tr>
<th>5622</th>
<td>Fondo</td>
<td>Borgo d'Anaunia</td>
</tr>
<tr>
<th>5652</th>
<td>Faedo</td>
<td>not included</td>
</tr>
</tbody>
</table>
</div>
Where is "Faedo"?
```python
faedo = old_municipalities_2019[old_municipalities_2019.COMUNE == 'Faedo']
```
```python
faedo
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>COD_RIP</th>
<th>COD_REG</th>
<th>COD_PROV</th>
<th>COD_CM</th>
<th>COD_UTS</th>
<th>PRO_COM</th>
<th>PRO_COM_T</th>
<th>COMUNE</th>
<th>COMUNE_A</th>
<th>CC_UTS</th>
<th>SHAPE_LENG</th>
<th>SHAPE_AREA</th>
<th>SHAPE_LEN</th>
<th>geometry</th>
<th>included_in</th>
</tr>
</thead>
<tbody>
<tr>
<th>5652</th>
<td>2</td>
<td>4</td>
<td>22</td>
<td>0</td>
<td>22</td>
<td>22080</td>
<td>022080</td>
<td>Faedo</td>
<td>None</td>
<td>0</td>
<td>16440.165284</td>
<td>1.068038e+07</td>
<td>16440.047652</td>
<td>POLYGON ((667690.769 5121538.436, 667726.269 5...</td>
<td>not included</td>
</tr>
</tbody>
</table>
</div>
```python
faedo_geometry = faedo.geometry.values[0]
```
```python
faedo_is_in = municipalities_trentino_2021[municipalities_trentino_2021.geometry.contains(faedo_geometry)]
```
```python
faedo_new_municipality = faedo_is_in.COMUNE.values[0]
```
```python
faedo_new_municipality
```
"San Michele all'Adige"
```python
list_changed_municipalities = old_municipalities_2019[old_municipalities_2019.included_in != 'not included']
```
```python
list_changed_municipalities = list(list_changed_municipalities.included_in.unique())
```
```python
list_changed_municipalities.append(faedo_new_municipality)
```
```python
list_changed_municipalities
```
['Novella', 'Ville di Fiemme', "Borgo d'Anaunia", "San Michele all'Adige"]
and we can do the same with the polygons
```python
new_municipalities_trentino_2021 = municipalities_trentino_2021[municipalities_trentino_2021.COMUNE.isin(list_changed_municipalities)]
```
```python
new_municipalities_trentino_2021.plot()
plt.show()
```

## find the biggest new municipality of Trentino and show all the Italian municipalities bordering it
```python
biggest_new_municipality_trentino = new_municipalities_trentino_2021[new_municipalities_trentino_2021.geometry.area == new_municipalities_trentino_2021.geometry.area.max()]
```
```python
biggest_new_municipality_trentino.plot()
plt.show()
```

```python
boundary_borgo_anaunia = biggest_new_municipality_trentino.geometry.values[0]
```
```python
around_borgo_anaunia = municipalities2021[municipalities2021.touches(boundary_borgo_anaunia)]
```
```python
around_borgo_anaunia.plot()
plt.show()
```

## create the macro-area of all the municipalities bordering it
```python
import pandas as pd
# GeoDataFrame.append was removed in pandas 2.0: pd.concat is the idiomatic way
new_area = pd.concat([around_borgo_anaunia, biggest_new_municipality_trentino]).dissolve()
```
```python
new_area = new_area[['geometry']]
```
```python
new_area['name'] = "area of borgo d'anaunia and bordering municipalities"
```
```python
new_area.plot()
plt.show()
```

## for each gas&oil station in the macro-area, calculate how many monumental trees are within a 500m radius
the GeoJSON dataset of the Italian monumental trees is created with the [code of lesson 02](https://github.com/napo/geospatial_course_unitn/blob/master/code/lessons/02_Spatial_relationships_and_operations.ipynb)<br/>
You can find the dataset [here](https://raw.githubusercontent.com/napo/geospatial_course_unitn/master/data/monumental_trees/italian_monumental_trees_20210505.geojson)
```python
macroarea_geometry = new_area.to_crs(epsg=4326).geometry.values[0]
```
```python
stations_in_macroarea = geo_stations[geo_stations.within(macroarea_geometry)]
```
```python
monumental_trees = gpd.read_file('https://github.com/napo/geospatial_course_unitn/raw/master/data/monumental_trees/italian_monumental_trees_20210505.geojson')
```
```python
monumental_trees_in_macroarea = monumental_trees[monumental_trees.within(macroarea_geometry)]
```
```python
```python
def fivehundredfrom(buffer_geometry, points):
    # True if at least one of the given points falls within the buffer;
    # note: the buffer and the points must be in the same (projected) CRS.
    # (the original version ignored the "points" argument and compared the
    # projected buffers against the unprojected stations, so it never matched)
    present = False
    found = points[points.within(buffer_geometry)]
    if len(found) > 0:
        present = True
    return present
```
```python
monumental_trees_in_macroarea.to_crs(epsg=32632).geometry.buffer(500).apply(lambda buf: fivehundredfrom(buf, stations_in_macroarea.to_crs(epsg=32632)))
```
298 False
330 False
3491 False
dtype: bool
it's normal that a gas&oil station is far away from a monumental tree :)
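Note that the cell above answers the reverse question (for each tree: is there a station within 500m?). A sketch that counts the trees around each station, as the section title asks, could look like this (`stations_m` and `trees_m` are names chosen here):
```python
# reproject both layers to EPSG:32632 (UTM 32N) so buffers are in meters
stations_m = stations_in_macroarea.to_crs(epsg=32632)
trees_m = monumental_trees_in_macroarea.to_crs(epsg=32632)
# for each station: number of monumental trees within a 500 m radius
stations_m.geometry.buffer(500).apply(lambda buf: trees_m.within(buf).sum())
```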
# create a polygon that contains all the monumental trees inside the area
## convex hull
solution: create a convex hull<br/>
*In geometry, the convex hull or convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around the subset.* (source: [wikipedia](https://en.wikipedia.org/wiki/Convex_hull))

```python
monumental_trees_in_macroarea.plot()
plt.show()
```

```python
area_of_monumental_trees_in_macroarea = monumental_trees_in_macroarea.unary_union.convex_hull
```
```python
area_of_monumental_trees_in_macroarea
```

## Concave Hull
Contrary to a convex hull, a concave hull can describe the shape of a point cloud.
Convex hull<br/>

<br/><br/>
Concave hull<br/>

### Alpha shapes
Alpha shapes are often used to generalize bounding polygons containing sets of points. The alpha parameter is defined as the value a, such that an edge of a disk of radius 1/a can be drawn between any two edge members of a set of points and still contain all the points. The convex hull, a shape resembling what you would see if you wrapped a rubber band around pegs at all the data points, is an alpha shape where the alpha parameter is equal to zero.
[https://alphashape.readthedocs.io/](https://alphashape.readthedocs.io/)
```python
try:
import alphashape
except ModuleNotFoundError as e:
!pip install alphashape==1.3.1
import alphashape
if alphashape.__version__ != "1.3.1":
!pip install -U alphashape==1.3.1
import alphashape
```
```python
alpha_shape = alphashape.alphashape(monumental_trees_in_macroarea, 100)
```
```python
alpha_shape.plot()
plt.show()
```

... we have only three points ... but if you want, try with more ...
```python
convex_hull_trento = gpd.GeoDataFrame(
    geometry=[stations_province_trento.geometry.unary_union.convex_hull],
    crs=stations_province_trento.crs)
convex_hull_trento.explore()
```
<a href="webmap/convex_hull.html"><img src="https://raw.githubusercontent.com/napo/geospatial_course_unitn/master/images/convex_hull.png"/></a>
```python
stations_province_trento.explore()
```
<a href="webmap/points_for_hull.html"><img src="https://raw.githubusercontent.com/napo/geospatial_course_unitn/master/images/points_for_hull.png"/></a>
```python
alpha_parameter = 60
alphashape.alphashape(stations_province_trento, alpha_parameter).explore()
```
<a href="webmap/alphashape.html"><img src="https://raw.githubusercontent.com/napo/geospatial_course_unitn/master/images/alphashape.png"/></a>
Creating alpha shapes around sets of points usually requires a visually interactive step where the alpha parameter for a concave hull is determined by iterating over or bisecting values to approach a best fit.
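If you don't want to tune alpha by hand, the library can also search for a value automatically (a hedged sketch: `optimizealpha` is provided by alphashape 1.3.1, but it can be slow on large point sets):
```python
# let alphashape estimate a suitable alpha for the point set
best_alpha = alphashape.optimizealpha(stations_province_trento)
alphashape.alphashape(stations_province_trento, best_alpha).explore()
```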
More information: [https://alphashape.readthedocs.io/en/latest/readme.html#using-a-varying-alpha-parameter](https://alphashape.readthedocs.io/en/latest/readme.html#using-a-varying-alpha-parameter)
## identify all the gas&oil stations in this area which are within 2km of each other
```python
stations_in_area_monumental_trees = stations_in_macroarea[stations_in_macroarea.within(area_of_monumental_trees_in_macroarea)]
```
```python
len(stations_in_area_monumental_trees)
```
0
```python
stations_out_area_monumental_trees = stations_in_macroarea[~stations_in_macroarea.within(area_of_monumental_trees_in_macroarea)]
```
```python
len(stations_out_area_monumental_trees)
```
9
## nearest points
```python
from shapely.ops import nearest_points
```
shapely offers a method to identify the nearest points between two geometries<br/>
Documentation [here](https://shapely.readthedocs.io/en/stable/manual.html#shapely.ops.nearest_points)
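A minimal illustration before using it on our data:
```python
from shapely.geometry import Point, MultiPoint
# nearest_points returns a pair: (nearest point on the first geometry,
# nearest point on the second geometry)
nearest_points(Point(0, 0), MultiPoint([(1, 1), (5, 5)]))
# -> (POINT (0 0), POINT (1 1))
```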
```python
def get_nearest_id(id, points):
    # create a union of all the other points (a multipoint geometry)
multipoints = points[points.id != id]["geometry"].unary_union
# identify the starting point
point = points[points.id == id]
# find the nearest points
nearest_geoms = nearest_points(point['geometry'].values[0], multipoints)
# get corresponding values of the nearest point
# note: in the position 0 there is the starting point
nearest_data = points[points["geometry"] == nearest_geoms[1]]
# extract the id of the nearest point
nearest_id = nearest_data['id'].values[0]
return (nearest_id)
```
```python
stations_in_macroarea['id_nearest'] = stations_in_macroarea['id'].apply(lambda x :get_nearest_id(x,stations_in_macroarea))
```
```python
stations_in_macroarea
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>city</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
<th>geometry</th>
<th>id_nearest</th>
</tr>
</thead>
<tbody>
<tr>
<th>3453</th>
<td>7866</td>
<td>GATTERER SNC DI GATTERER GUENTHER & CO.</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>ENI</td>
<td>STRADA DI CIRCONVALLAZIONE 4</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.456748</td>
<td>11.268874</td>
<td>POINT (11.26887 46.45675)</td>
<td>27263</td>
</tr>
<tr>
<th>3454</th>
<td>49587</td>
<td>MEBORAST 2 S.R.L.</td>
<td>Agip Eni</td>
<td>Strada Statale</td>
<td>MEBORAST</td>
<td>238 delle Palade, Km. 221, SUD 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.494556</td>
<td>11.281309</td>
<td>POINT (11.28131 46.49456)</td>
<td>7923</td>
</tr>
<tr>
<th>3455</th>
<td>7923</td>
<td>MEBO RAST DES KOMPATSCHER RICHARD & CO., KG</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>MEBORAST</td>
<td>ME-BO CORSIA EST SNC 39050</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.494485</td>
<td>11.281658</td>
<td>POINT (11.28166 46.49449)</td>
<td>49587</td>
</tr>
<tr>
<th>3456</th>
<td>9080</td>
<td>PICHLER KARL</td>
<td>Q8</td>
<td>Altro</td>
<td>Q8 des Karl Pichler</td>
<td>VIA CALDARO 8 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.444788</td>
<td>11.260646</td>
<td>POINT (11.26065 46.44479)</td>
<td>27263</td>
</tr>
<tr>
<th>3457</th>
<td>27263</td>
<td>TSCHIGG HELMUT</td>
<td>Esso</td>
<td>Altro</td>
<td>TSCHIGG HELMUT</td>
<td>VIA BOLZANO 5 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.458640</td>
<td>11.261110</td>
<td>POINT (11.26111 46.45864)</td>
<td>7866</td>
</tr>
<tr>
<th>18317</th>
<td>23679</td>
<td>FLAIM CARLO</td>
<td>Repsol</td>
<td>Strada Statale</td>
<td>Flaim Carlo</td>
<td>Statale 42 del Tonale e della Mendola, Km. 42,...</td>
<td>BREZ</td>
<td>TN</td>
<td>46.431569</td>
<td>11.107855</td>
<td>POINT (11.10786 46.43157)</td>
<td>23500</td>
</tr>
<tr>
<th>18358</th>
<td>23500</td>
<td>ZUCOL PIETRO</td>
<td>Esso</td>
<td>Altro</td>
<td>ESSO RAINER DI ZUCOL PIETRO</td>
<td>VIA PALADE 49 38013</td>
<td>FONDO</td>
<td>TN</td>
<td>46.436768</td>
<td>11.139896</td>
<td>POINT (11.13990 46.43677)</td>
<td>12750</td>
</tr>
<tr>
<th>18423</th>
<td>50275</td>
<td>GENTILINI MARCO</td>
<td>Api-Ip</td>
<td>Strada Statale</td>
<td>DISTRIBUTORE IP REVO'</td>
<td>42 del Tonale e della Mendola, Km. 192 + 370, ...</td>
<td>REVO'</td>
<td>TN</td>
<td>46.393205</td>
<td>11.063137</td>
<td>POINT (11.06314 46.39321)</td>
<td>23679</td>
</tr>
<tr>
<th>18451</th>
<td>12750</td>
<td>BONANI GIULIANO</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>bonani giuliano</td>
<td>VIA C. BATTISTI 1 38010</td>
<td>SARNONICO</td>
<td>TN</td>
<td>46.427077</td>
<td>11.141900</td>
<td>POINT (11.14190 46.42708)</td>
<td>23500</td>
</tr>
</tbody>
</table>
</div>
```python
def getdistance(id,points):
    # reproject to EPSG:32632 (UTM 32N) so the distance is in meters
    points = points.to_crs(epsg=32632)
point = points[points.id == id]
id_nearest = point.id_nearest.values[0]
point_nearest = points[points.id == id_nearest]
from_geometry = point.geometry.values[0]
to_geometry = point_nearest.geometry.values[0]
dist = from_geometry.distance(to_geometry)
return (dist)
```
```python
stations_in_macroarea['distance_to_nearest'] = stations_in_macroarea['id'].apply(lambda x :getdistance(x,stations_in_macroarea))
```
```python
stations_in_macroarea
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>id</th>
<th>manager</th>
<th>company</th>
<th>type</th>
<th>name</th>
<th>address</th>
<th>city</th>
<th>province</th>
<th>latitude</th>
<th>longitude</th>
<th>geometry</th>
<th>id_nearest</th>
<th>distance_to_nearest</th>
</tr>
</thead>
<tbody>
<tr>
<th>3453</th>
<td>7866</td>
<td>GATTERER SNC DI GATTERER GUENTHER & CO.</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>ENI</td>
<td>STRADA DI CIRCONVALLAZIONE 4</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.456748</td>
<td>11.268874</td>
<td>POINT (11.26887 46.45675)</td>
<td>27263</td>
<td>632.397005</td>
</tr>
<tr>
<th>3454</th>
<td>49587</td>
<td>MEBORAST 2 S.R.L.</td>
<td>Agip Eni</td>
<td>Strada Statale</td>
<td>MEBORAST</td>
<td>238 delle Palade, Km. 221, SUD 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.494556</td>
<td>11.281309</td>
<td>POINT (11.28131 46.49456)</td>
<td>7923</td>
<td>27.937651</td>
</tr>
<tr>
<th>3455</th>
<td>7923</td>
<td>MEBO RAST DES KOMPATSCHER RICHARD & CO., KG</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>MEBORAST</td>
<td>ME-BO CORSIA EST SNC 39050</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.494485</td>
<td>11.281658</td>
<td>POINT (11.28166 46.49449)</td>
<td>49587</td>
<td>27.937651</td>
</tr>
<tr>
<th>3456</th>
<td>9080</td>
<td>PICHLER KARL</td>
<td>Q8</td>
<td>Altro</td>
<td>Q8 des Karl Pichler</td>
<td>VIA CALDARO 8 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.444788</td>
<td>11.260646</td>
<td>POINT (11.26065 46.44479)</td>
<td>27263</td>
<td>1540.195343</td>
</tr>
<tr>
<th>3457</th>
<td>27263</td>
<td>TSCHIGG HELMUT</td>
<td>Esso</td>
<td>Altro</td>
<td>TSCHIGG HELMUT</td>
<td>VIA BOLZANO 5 39057</td>
<td>APPIANO SULLA STRADA DEL VINO</td>
<td>BZ</td>
<td>46.458640</td>
<td>11.261110</td>
<td>POINT (11.26111 46.45864)</td>
<td>7866</td>
<td>632.397005</td>
</tr>
<tr>
<th>18317</th>
<td>23679</td>
<td>FLAIM CARLO</td>
<td>Repsol</td>
<td>Strada Statale</td>
<td>Flaim Carlo</td>
<td>Statale 42 del Tonale e della Mendola, Km. 42,...</td>
<td>BREZ</td>
<td>TN</td>
<td>46.431569</td>
<td>11.107855</td>
<td>POINT (11.10786 46.43157)</td>
<td>23500</td>
<td>2529.179916</td>
</tr>
<tr>
<th>18358</th>
<td>23500</td>
<td>ZUCOL PIETRO</td>
<td>Esso</td>
<td>Altro</td>
<td>ESSO RAINER DI ZUCOL PIETRO</td>
<td>VIA PALADE 49 38013</td>
<td>FONDO</td>
<td>TN</td>
<td>46.436768</td>
<td>11.139896</td>
<td>POINT (11.13990 46.43677)</td>
<td>12750</td>
<td>1088.202974</td>
</tr>
<tr>
<th>18423</th>
<td>50275</td>
<td>GENTILINI MARCO</td>
<td>Api-Ip</td>
<td>Strada Statale</td>
<td>DISTRIBUTORE IP REVO'</td>
<td>42 del Tonale e della Mendola, Km. 192 + 370, ...</td>
<td>REVO'</td>
<td>TN</td>
<td>46.393205</td>
<td>11.063137</td>
<td>POINT (11.06314 46.39321)</td>
<td>23679</td>
<td>5477.457684</td>
</tr>
<tr>
<th>18451</th>
<td>12750</td>
<td>BONANI GIULIANO</td>
<td>Agip Eni</td>
<td>Altro</td>
<td>bonani giuliano</td>
<td>VIA C. BATTISTI 1 38010</td>
<td>SARNONICO</td>
<td>TN</td>
<td>46.427077</td>
<td>11.141900</td>
<td>POINT (11.14190 46.42708)</td>
<td>23500</td>
<td>1088.202974</td>
</tr>
</tbody>
</table>
</div>
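Finally, to answer the question in the section title, we keep only the stations whose nearest neighbour lies within 2km; the distances are in meters because they were computed in EPSG:32632 (a minimal sketch, `stations_within_2km` is just a name chosen here):
```python
stations_within_2km = stations_in_macroarea[stations_in_macroarea.distance_to_nearest <= 2000]
stations_within_2km[['id', 'name', 'city', 'id_nearest', 'distance_to_nearest']]
```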