Dataset schema (column name: type, observed range):

- hexsha: string (length 40)
- size: int64 (5 to 1.04M)
- ext: string (6 classes)
- lang: string (1 class)
- max_stars_repo_path: string (length 3 to 344)
- max_stars_repo_name: string (length 5 to 125)
- max_stars_repo_head_hexsha: string (length 40 to 78)
- max_stars_repo_licenses: sequence (length 1 to 11)
- max_stars_count: int64 (1 to 368k, nullable ⌀)
- max_stars_repo_stars_event_min_datetime: string (length 24, nullable ⌀)
- max_stars_repo_stars_event_max_datetime: string (length 24, nullable ⌀)
- max_issues_repo_path: string (length 3 to 344)
- max_issues_repo_name: string (length 5 to 125)
- max_issues_repo_head_hexsha: string (length 40 to 78)
- max_issues_repo_licenses: sequence (length 1 to 11)
- max_issues_count: int64 (1 to 116k, nullable ⌀)
- max_issues_repo_issues_event_min_datetime: string (length 24, nullable ⌀)
- max_issues_repo_issues_event_max_datetime: string (length 24, nullable ⌀)
- max_forks_repo_path: string (length 3 to 344)
- max_forks_repo_name: string (length 5 to 125)
- max_forks_repo_head_hexsha: string (length 40 to 78)
- max_forks_repo_licenses: sequence (length 1 to 11)
- max_forks_count: int64 (1 to 105k, nullable ⌀)
- max_forks_repo_forks_event_min_datetime: string (length 24, nullable ⌀)
- max_forks_repo_forks_event_max_datetime: string (length 24, nullable ⌀)
- content: string (length 5 to 1.04M)
- avg_line_length: float64 (1.14 to 851k)
- max_line_length: int64 (1 to 1.03M)
- alphanum_fraction: float64 (0 to 1)
- lid: string (191 classes)
- lid_prob: float64 (0.01 to 1)

Each record below lists its metadata fields separated by "|", followed by the full text of its content field and a trailing row of statistics (avg_line_length, max_line_length, alphanum_fraction, lid, lid_prob).
4d01eb6a8ca520d1fb318e52e350a675acedb6be | 2,531 | md | Markdown | docs/data/oledb/commands-and-tables.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-29T07:51:50.000Z | 2021-01-29T07:51:50.000Z | docs/data/oledb/commands-and-tables.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/data/oledb/commands-and-tables.md | bobbrow/cpp-docs | 769b186399141c4ea93400863a7d8463987bf667 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-03T17:33:52.000Z | 2022-03-03T17:33:52.000Z |

---
description: "Learn more about: Commands and Tables"
title: "Commands and Tables"
ms.date: "11/19/2018"
helpviewer_keywords: ["OLE DB consumer templates, table support", "CCommand class, OLE DB consumer templates", "commands [C++], OLE DB Consumer Templates", "CTable class", "CAccessorRowset class, command and table classes", "rowsets, accessing", "tables [C++], OLE DB Consumer Templates", "OLE DB consumer templates, command support"]
ms.assetid: 4bd3787b-6d26-40a9-be0c-083080537c12
---
# Commands and Tables
Commands and tables allow you to access rowsets; that is, open rowsets, execute commands, and bind columns. The [CCommand](../../data/oledb/ccommand-class.md) and [CTable](../../data/oledb/ctable-class.md) classes instantiate the command and table objects, respectively. These classes derive from [CAccessorRowset](../../data/oledb/caccessorrowset-class.md) as shown in the following figure.
*Figure: Command and Table Classes*

In the previous figure, `TAccessor` can be any accessor type listed in [Accessor Types](../../data/oledb/accessors-and-rowsets.md). `TRowset` can be any rowset type listed in [Rowset Types](../../data/oledb/accessors-and-rowsets.md). `TMultiple` specifies the result type (a single or multiple result set).
The [ATL OLE DB Consumer Wizard](../../atl/reference/atl-ole-db-consumer-wizard.md) lets you specify whether you want a command or table object.
- For data sources without commands, you can use the `CTable` class. You generally use it for simple rowsets that specify no parameters and require no multiple results. This simple class opens a table on a data source using a table name that you specify.
- For data sources that support commands, you can use the `CCommand` class instead. To execute a command, call [Open](./ccommand-class.md#open) on this class. As an alternative, you can call `Prepare` to prepare a command that you want to execute more than once.
`CCommand` has three template arguments: an accessor type, a rowset type, and a result type (`CNoMultipleResults`, by default, or `CMultipleResults`). If you specify `CMultipleResults`, the `CCommand` class supports the `IMultipleResults` interface and handles multiple rowsets. The [DBVIEWER](https://github.com/Microsoft/VCSamples/tree/master/VC2010Samples/ATL/OLEDB/Consumer) sample shows how to handle the multiple results.
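The following minimal sketch is not part of the original article; it only illustrates the general shape of opening a rowset with `CCommand` and the OLE DB consumer templates. The connection string and table name are placeholders, error handling is reduced to `HRESULT` checks, and COM must already be initialized before this code runs:

```cpp
#include <atldbcli.h>

void RunQuery()
{
    CDataSource ds;
    CSession session;

    // Placeholder initialization string; substitute your provider and data source.
    HRESULT hr = ds.OpenFromInitializationString(
        L"Provider=SQLOLEDB;Data Source=.;Integrated Security=SSPI;");
    if (FAILED(hr)) return;
    if (FAILED(session.Open(ds))) return;

    // Accessor type, rowset type, and result type (CNoMultipleResults by default).
    CCommand<CDynamicAccessor, CRowset, CNoMultipleResults> cmd;
    if (FAILED(cmd.Open(session, L"SELECT * FROM MyTable"))) return;

    while (cmd.MoveNext() == S_OK)
    {
        // Read column values through the dynamic accessor here.
    }
    cmd.Close();
}
```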
## See also
[OLE DB Consumer Templates](../../data/oledb/ole-db-consumer-templates-cpp.md)
| 90.392857 | 430 | 0.764125 | eng_Latn | 0.956251 |
4d02070c61c2f0d7b5de70f1652b3ada3415ffd3 | 6,070 | md | Markdown | hugo/content/The Most Useful Ways To Rename Files And Directories In Linux/index.md | hands-on-cloud/hands-on.cloud | 5c4a2c23700657615be40de23d431cda59be17cb | [
"MIT"
] | 2 | 2021-01-02T05:53:39.000Z | 2022-03-21T15:49:05.000Z | hugo/content/The Most Useful Ways To Rename Files And Directories In Linux/index.md | hands-on-cloud/hands-on.cloud | 5c4a2c23700657615be40de23d431cda59be17cb | [
"MIT"
] | 2 | 2019-11-12T13:18:13.000Z | 2021-01-02T12:42:44.000Z | hugo/content/The Most Useful Ways To Rename Files And Directories In Linux/index.md | hands-on-cloud/hands-on.cloud | 5c4a2c23700657615be40de23d431cda59be17cb | [
"MIT"
] | 17 | 2019-05-11T19:08:02.000Z | 2022-03-13T10:30:29.000Z |

---
title: 'The Most Useful Ways To Rename Files And Directories In Linux'
date: 2020-12-07T08:09:01-05:00
image: 'The-Most-Useful-Ways-To-Rename-Files-And-Directories-In-Linux'
tags:
- linux
- mv
- find
- rename
- bash
categories:
- Linux
authors:
- Andrei Maksimov
---
The ability to rename files and directories in Linux is one of the primary skills that every Linux user needs. This article shows several ways to do it, using a file manager and the mv and rename utilities in combination with find and Bash looping constructs. Improve your Linux skills in 3 minutes by reading this article!
Two ways are available for you to rename the directories or files in Linux:
* File manager.
* Command-line terminal.
## Rename files and directories using the file manager.
{{< my-picture name="The-Most-Useful-Ways-To-Rename-Files-And-Directories-In-Linux-Infographics" >}}
One of the easiest ways of renaming files and directories in Linux for new users is using Midnight Commander.
Midnight Commander - is a console-based file manager cloned from the famous Norton Commander.
{{< my-picture name="midnight-commander" >}}
To install Midnight Commander under Ubuntu, use the following command:
```sh
$ sudo apt-get update
$ sudo apt-get -y install mc
```
To install Midnight Commander under CentOS, use a different package manager:
```sh
$ sudo yum -y install mc
```
To launch Midnight Commander, execute **mc** command.
You need to use the keyboard arrows to move the file selector. To switch between the left and right panels, use the **Tab** key. You may use your mouse too, but you are limited to selecting only the files that are visible on the screen.
To rename the file or directory, move the cursor on top of it and press **F6**. If you forgot the exact key, use your mouse to select Move/Rename from the File menu.
{{< my-picture name="midnight-commander-move-files" >}}
Next, let’s look at how we can rename the files and directories by using **mv** and **rename** commands.
## Rename files and directories using the “mv” command.
The **mv** command helps you **m**o**v**e or rename files and directories from the source to the destination location. The syntax for the command is the following:
```sh
$ mv [OPTIONS] source destination
```
The **source** and **destination** can be a file or directory.
To rename **file1.txt** to **file2.txt** using **mv,** execute the following command:
```sh
$ mv file1.txt file2.txt
```
To change the name of **folder1** directory to **folder2**, use the following command:
```sh
$ mv folder1 folder2
```
### Rename multiple files at once.
The **mv** utility can rename only one file at a time, but you can use it with other commands to rename more than one file. These commands include **find** and Bash **for** and **while** loops.
For example, let’s imagine you need to change the file extension for a specific file type in a directory. In the following example, we rename all HTML files and change their extension from **html** to **php**.
Here’s the **example** folder structure:
```sh
$ tree example
example
├── index.html
├── page1.html
├── page2.html
└── page3.html
0 directories, 4 files
```
Now, let’s use the following Bash for-loop construct inside the **example** directory:
```sh
$ cd example
$ for f in *.html; do
mv "$f" "${f%.html}.php"
done
```
Here we stepped into the **example** directory. Next, we executed the **mv** command in Bash for-loop (the command between **for** and **done** keywords).
Here’s what’s happening:
* The for-loop walks through the files ending in **.html** and puts every file name into the variable **f**.
* Then the **mv** utility changes the extension of every file **f** from **.html** to **.php**. The expression **${f%.html}** is responsible for removing the **.html** suffix from the file name, so the complete expression **"${f%.html}.php"** produces the file name with **.php** instead of **.html**.
Here’s the expected outcome:
```sh
$ ls -l
total 0
-rw-r--r-- 1 amaksimov wheel 0 Dec 5 17:13 index.php
-rw-r--r-- 1 amaksimov wheel 0 Dec 5 17:13 page1.php
-rw-r--r-- 1 amaksimov wheel 0 Dec 5 17:13 page2.php
-rw-r--r-- 1 amaksimov wheel 0 Dec 5 17:13 page3.php
```
## The “find” command to rename files and directories.
Using **find** utility is one of the most common ways to automate file and directory operations in Linux.
In the example below, we are using the **find** to achieve the same goal and change file extension.
The **find** utility finds all files ending on **.html** and uses the **-exec** argument to pass every found file name to the sh shell script written in one line.
```sh
$ find . -depth -name "*.html" -exec sh -c 'f="{}"; mv "$f" "${f%.html}.php"' \;
```
In the one-line sh script, we set the variable **f** to the file name passed in by **find** (**f="{}"**), then execute the familiar **mv** command. A semicolon separates the variable assignment from the **mv** command.
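The same rename can also be written with a Bash while loop reading **find**'s output. This variant is not part of the original set of examples, but it is a common alternative and safely handles file names that contain spaces:

```sh
$ find . -depth -name "*.html" -print0 | while IFS= read -r -d '' f; do
    mv "$f" "${f%.html}.php"
  done
```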
## Rename files and directories using the “rename” command.
In some cases, it is easier to use the **rename** command instead of **mv**. And you can use it to rename multiple files using regular expressions without combining it with other Linux commands.
Here’s the syntax for the **rename** command:
```sh
$ rename [OPTIONS] regexp files
```
For example, let’s rename all **.php** files back to **.html**:
```sh
$ ls
index.php page1.php page2.php page3.php
$ rename 's/.php/.html/' *.php
$ ls
index.html page1.html page2.html page3.html
```
If you wish to print the names of the files that you have selected for renaming, you can use the following command:
```sh
$ rename -n 's/.html/.php/' *.html
rename(index.html, index.php)
rename(page1.html, page1.php)
rename(page2.html, page2.php)
rename(page3.html, page3.php)
$ ls
index.html page1.html page2.html page3.html
```
## Summary.
In this article, you’ve learned how to rename files and directories in Linux using various ways like file manager, mv, and rename utilities combined with find and bash loop-expressions.
| 33.910615 | 318 | 0.71285 | eng_Latn | 0.993254 |
4d02684e868e4e5bc176736d37d67a3dba319995 | 2,071 | md | Markdown | src/pages/report/report53B.md | darkonix/theknurdproject | ebedbbe96b2619f3e24282486549909a27ba218f | [
"MIT"
] | null | null | null | src/pages/report/report53B.md | darkonix/theknurdproject | ebedbbe96b2619f3e24282486549909a27ba218f | [
"MIT"
] | null | null | null | src/pages/report/report53B.md | darkonix/theknurdproject | ebedbbe96b2619f3e24282486549909a27ba218f | [
"MIT"
] | null | null | null |

---
id: 'cd874409-c163-4814-aee2-e01c434b11a2'
shows_id: '1a361d2d-0652-4b5f-aa4c-03181ab7c2d8'
media_id: 'b2cef9fb-7461-4ca2-b166-6f9247aae31f'
title: 'Knurd Report #53B'
itunes_title: 'Knurd Report #53B'
published_date: '2019-03-21T144439.000Z'
guid: 'https//darkonix.hipcast.com/deluge/darkonix-20190321092243-5294.mp3'
status: 'Published'
episode_art: 'https//artwork.captivate.fm/e43aa46c-3cd8-4e1e-86b1-93e5863c4080/1000-itunes-1582315387.jpg'
shownotes: '
<p>000131 Amizade com animais<br />
000750 Star Trek Discovery - 2ª Temporada</p>
<p>Featuring music Dorival Caymmi - Vatapá e Dorival Caymmi - Saudade da Bahia</p>
<p>APOIE O NOSSO APOIA.SE COM O SEU APOIO PFVR https//apoia.se/theknurdproject</p>
'
summary: '000131 Amizade com animais
000750 Star Trek Discovery - 2ª Temporada
Featuring music Dorival Caymmi - Vatapá e Dorival Caymmi - Saudade da Bahia
APOIE O NOSSO APOIA.SE COM O SEU APOIO PFVR https//apoia.se/theknurdproject'
episode_type: 'full'
episode_season: '53'
episode_number: '2'
itunes_subtitle: '000131 Amizade com animais
000750 Star Trek Discovery - 2ª Temporada
Featuring music Dorival Caymmi - Vatapá e Dorival Caymmi - Saudade da Bahia
APOIE O NOSSO APOIA.SE COM O SEU APOIO PFVR https//apoia.se/theknurdproject'
author: 'Mycke Ramos'
link: 'https//knurd-report.simplecast.com/episodes/knurd-report-53b-xdxWY0rl'
explicit: 'explicit'
itunes_block: 'no'
google_block: ''
google_description: ''
donation_link: ''
donation_text: 'null'
post_id: '0'
website_title: 'null'
is_active: '1'
failed_import: '0'
slug: '0'
seo_title: 'null'
seo_description: ''
episode_private: '0'
episode_expiration: 'null'
auto_tweeted: '0'
video_repurposed: 'null'
video_s3_key: 'null'
media_url: 'https//podcasts.captivate.fm/media/b2cef9fb-7461-4ca2-b166-6f9247aae31f/darkonix-20190321092243-5294_tc.mp3'
---
00:01:31 Amizade com animais
00:07:50 Star Trek Discovery - 2ª Temporada
Featuring music: Dorival Caymmi - Vatapá e Dorival Caymmi - Saudade da Bahia
APOIE O NOSSO APOIA.SE COM O SEU APOIO PFVR: https://apoia.se/theknurdproject | 34.516667 | 120 | 0.772574 | kor_Hang | 0.228432 |
4d02a0c46ded94823509d16b1e75a64c2d9eb6a8 | 11,039 | md | Markdown | mdop/dart-v7/how-to-recover-remote-computers-using-the-dart-recovery-image-dart-7.md | MicrosoftDocs/mdop-docs-pr.fr-fr | 2eb26867200d360eef2e34b52e1044f5e8f00c7e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-20T21:13:51.000Z | 2021-04-20T21:13:51.000Z | mdop/dart-v7/how-to-recover-remote-computers-using-the-dart-recovery-image-dart-7.md | MicrosoftDocs/mdop-docs-pr.fr-fr | 2eb26867200d360eef2e34b52e1044f5e8f00c7e | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-08T05:27:50.000Z | 2020-07-08T15:39:16.000Z | mdop/dart-v7/how-to-recover-remote-computers-using-the-dart-recovery-image-dart-7.md | MicrosoftDocs/mdop-docs-pr.fr-fr | 2eb26867200d360eef2e34b52e1044f5e8f00c7e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-04T12:31:57.000Z | 2021-11-04T12:31:57.000Z |

---
title: Procédure pour récupérer des ordinateurs distants à l'aide de l'image de récupération DaRT
description: Procédure pour récupérer des ordinateurs distants à l'aide de l'image de récupération DaRT
author: dansimp
ms.assetid: 66bc45fb-dc40-4d47-b583-5bb1ff5c97a7
ms.reviewer: ''
manager: dansimp
ms.author: dansimp
ms.pagetype: mdop
ms.mktglfcycl: support
ms.sitesec: library
ms.prod: w10
ms.date: 08/30/2016
ms.openlocfilehash: 885ab1d1cf8f51dc4fd4613e41a20a2525ea7d6d
ms.sourcegitcommit: 354664bc527d93f80687cd2eba70d1eea024c7c3
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 06/26/2020
ms.locfileid: "10809569"
---
# Procédure pour récupérer des ordinateurs distants à l'aide de l'image de récupération DaRT
Les fonctionnalités de connexion à distance de Microsoft Diagnostics and Recovery Tools (DaRT) 7 permettent à un administrateur informatique d’exécuter les outils DaRT à distance sur un ordinateur d’utilisateur final. Lorsque certaines informations sont fournies par l’utilisateur final (ou par un professionnel du support technique travaillant sur l’ordinateur de l’utilisateur final), l’administrateur informatique ou l’agent de support technique peut prendre le contrôle de l’ordinateur de l’utilisateur final et exécuter les outils DaRT nécessaires à distance.
**Important**
Les deux ordinateurs qui établissent une connexion à distance doivent faire partie du même réseau.
**Pour récupérer un ordinateur distant à l’aide de DaRT**
1. Démarrez un ordinateur utilisateur final en utilisant l’image de récupération DaRT.
En règle générale, vous devez utiliser l’une des méthodes suivantes pour démarrer dans DaRT afin de récupérer un ordinateur distant, selon la manière dont vous déployez l’image de récupération DaRT. Pour plus d’informations sur le déploiement de l’image de récupération DaRT, voir [déploiement de l’image de récupération 7,0 de DART](deploying-the-dart-70-recovery-image-dart-7.md).
- Démarrez dans DaRT à partir d’une partition de récupération sur l’ordinateur qui pose problème.
- Démarrez dans DaRT à partir d’une partition distante sur le réseau.
Pour plus d’informations sur les avantages et inconvénients de chaque méthode, voir [planification de l’enregistrement et du déploiement de l’image de récupération 7,0 de DART](planning-how-to-save-and-deploy-the-dart-70-recovery-image.md).
Quelle que soit la méthode que vous utilisez pour démarrer dans DaRT, vous devez activer le périphérique de démarrage dans le BIOS pour l’option de démarrage ou les options que vous voulez mettre à disposition de l’utilisateur final.
**Remarque**
La configuration du BIOS est unique, en fonction du type de disque dur, de cartes réseau et d’autres éléments utilisés au sein de votre organisation.
2. Au démarrage de l’ordinateur dans l’image de récupération DaRT, la boîte de dialogue **netstart** s’affiche. Vous êtes invité à indiquer si vous souhaitez initialiser les services réseau. Si vous cliquez sur **Oui**, on suppose qu’un serveur DHCP est présent sur le réseau et qu’une tentative d’obtention d’une adresse IP du serveur est effectuée. Si le réseau utilise des adresses IP statiques au lieu de DHCP, vous pouvez utiliser l’outil de **configuration TCP/IP** dans DART pour spécifier une adresse IP statique.
Pour ignorer le processus d’initialisation du réseau, cliquez sur **non**.
3. Après la boîte de dialogue initialisation du réseau, vous êtes invité à indiquer si vous souhaitez remapper les lettres de lecteur. Lorsque vous exécutez Windows en ligne, le volume système est généralement mappé au lecteur C. Toutefois, lorsque vous exécutez Windows Offline sous WinRE, le volume système d’origine peut être mappé sur un autre lecteur, ce qui peut entraîner une confusion. Si vous décidez de remapper, DaRT tente de mapper les lettres du lecteur hors ligne pour correspondre aux lettres de lecteur en ligne. Le remappage est effectué uniquement si un système d’exploitation hors connexion est sélectionné plus tard dans le processus de démarrage.
4. Après la boîte de dialogue remappage, une boîte de dialogue **options de récupération du système** s’affiche et vous invite à sélectionner une disposition de clavier. Il affiche ensuite le répertoire racine du système, le type de système d’exploitation installé et la taille de partition. Si vous ne voyez pas votre système d’exploitation et que vous suspectez qu’il n’y a pas de problème, cliquez sur **charger des pilotes** pour charger les pilotes suspects. Vous êtes alors invité à insérer le média d’installation pour l’appareil et à sélectionner le pilote. Sélectionnez l’installation que vous voulez réparer ou diagnostiquer, puis cliquez sur **suivant**.
**Remarque**
Si l’environnement de récupération Windows (WinRE) détecte ou suspecte que Windows 7 ne démarrait pas correctement la dernière fois qu’il a été essayé, la **réparation du démarrage** peut commencer à s’exécuter automatiquement. Pour plus d’informations sur la façon de résoudre ce problème, reportez-vous à la rubrique [résolution des problèmes DaRT 7,0](troubleshooting-dart-70-new-ia.md).
If any of the registry hives are corrupted or missing, Registry Editor, and several other DaRT utilities, will have limited functionality. If no operating system is selected, some tools will not be available.

The **System Recovery Options** window appears and lists various recovery tools.
5. Dans la fenêtre des **options de récupération du système** , sélectionnez **Microsoft Diagnostics and Recovery Tools** pour ouvrir la fenêtre de **Diagnostics et d’outils de récupération** .
6. Dans la fenêtre **jeu d’outils de diagnostics et de récupération** , cliquez sur **connexion à distance** pour ouvrir la fenêtre de **connexion à distance DART** . Si vous êtes invité à fournir l’accès distant au support technique, cliquez sur **OK**.
La fenêtre de connexion à distance DaRT s’ouvre et affiche un numéro de ticket, une adresse IP et des informations de port.
7. Sur l’ordinateur de l’agent de support technique, ouvrez la **visionneuse de connexion à distance DART**.
Cliquez sur **Démarrer**, sur **tous les programmes**, sur **Microsoft DART 7**, puis sur **visionneuse de connexions à distance DART**.
8. Dans la fenêtre de **connexion à distance de DART** , entrez les informations requises sur le ticket, l’adresse IP et le port.
**Remarque**
Ces informations sont créées sur l’ordinateur de l’utilisateur final et doivent être fournies par l’utilisateur final. Plusieurs adresses IP peuvent être proposées, en fonction du nombre de personnes disponibles sur l’ordinateur de l’utilisateur final.
9. Cliquez sur **Se connecter**.
L’administrateur informatique suppose désormais le contrôle de l’ordinateur de l’utilisateur final et peut exécuter les outils DaRT à distance.
**Remarque**
Un fichier nommé inv32.xml contient des informations de connexion distantes, telles que le numéro de port et l’adresse IP. Par défaut, le fichier se trouve généralement sur%windir%\\system32.
**Pour personnaliser le processus de connexion à distance**
1. Vous pouvez personnaliser le processus de connexion à distance en modifiant le fichier winpeshl.ini. Pour plus d’informations sur la modification du fichier winpeshl.ini, voir [Winpeshl.ini de fichiers](https://go.microsoft.com/fwlink/?LinkId=219413).
Spécifiez les commandes et les paramètres suivants pour personnaliser l’établissement d’une connexion distante avec un ordinateur d’utilisateur final:
<table>
<colgroup>
<col width="33%" />
<col width="33%" />
<col width="33%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Commande</th>
<th align="left">Paramètre</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p><strong>RemoteRecovery.exe</strong></p></td>
<td align="left"><p>-nomessage</p></td>
<td align="left"><p>Spécifie que l’invite de confirmation ne s’affiche pas. <strong>La connexion à distance </strong> continue comme si l’utilisateur final avait répondu " Oui " à l’invite de confirmation.</p></td>
</tr>
<tr class="even">
<td align="left"><p><strong>WaitForConnection.exe</strong></p></td>
<td align="left"><p>aucune</p></td>
<td align="left"><p>Empêche la poursuite d’un script personnalisé tant que <strong> la connexion à distance n' </strong> est pas en cours d’exécution ou qu’une connexion valide est établie avec l’ordinateur de l’utilisateur final.</p>
<div class="alert">
<strong>Important</strong><br/><p>Cette commande n’utilise aucune fonction si elle est spécifiée de manière indépendante. Il doit être spécifié dans un script pour fonctionner correctement.</p>
</div>
<div>
</div></td>
</tr>
</tbody>
</table>
2. Voici un exemple de fichier winpeshl.ini personnalisé pour ouvrir l’outil de **connexion à distance** dès qu’une tentative de démarrage dans DART est lancée:
```ini
[LaunchApps]
"%windir%\system32\netstart.exe -network -remount"
"cmd /C start %windir%\system32\RemoteRecovery.exe -nomessage"
"%windir%\system32\WaitForConnection.exe"
"%SYSTEMDRIVE%\sources\recovery\recenv.exe"
```
**Pour exécuter la visionneuse de connexions à distance à l’invite de commandes**
1. Vous pouvez exécuter la **visionneuse de connexions à distance DART** à l’invite de commandes en spécifiant la commande **DartRemoteViewer.exe** et en utilisant les paramètres suivants:
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Paramètre</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left"><p>-ticket = < <em> ticketnumber</em>></p></td>
<td align="left"><p>Où < <em> ticketnumber </em> > correspond au numéro de ticket, y compris aux tirets, qui est généré par la connexion à distance.</p></td>
</tr>
<tr class="even">
<td align="left"><p>-IPAddress = < <em> IPAddress</em>></p></td>
<td align="left"><p>Où < <em> IPAddress </em> > est l’adresse IP générée par la connexion à distance.</p></td>
</tr>
<tr class="odd">
<td align="left"><p>-port = < <em> port</em>></p></td>
<td align="left"><p>Où < <em> port </em> > correspond au port qui correspond à l’adresse IP spécifiée.</p></td>
</tr>
</tbody>
</table>
**Note**

The variables for these parameters are created on the end-user computer and must be provided by the end user.
2. Si les trois paramètres sont spécifiés et que les données sont valides, une connexion est effectuée immédiatement au démarrage du programme. Si un paramètre n’est pas valide, le programme démarre comme s’il n’y a aucun paramètre spécifié.
## Rubriques connexes
[Récupération des ordinateurs à l'aide de DaRT7.0](recovering-computers-using-dart-70-dart-7.md)
| 54.920398 | 668 | 0.749706 | fra_Latn | 0.981065 |
4d03222ec7b034ab86533508cbbc50caceeaf1e6 | 813 | markdown | Markdown | notes/0.14.0.markdown | rtyley/sbt-assembly | 9a2ca181226a8b437a784792343a203a0b94a6b8 | [
"MIT"
] | 1,585 | 2015-01-01T03:48:56.000Z | 2022-03-31T15:06:54.000Z | notes/0.14.0.markdown | rtyley/sbt-assembly | 9a2ca181226a8b437a784792343a203a0b94a6b8 | [
"MIT"
] | 269 | 2015-01-02T13:29:57.000Z | 2022-03-31T12:04:58.000Z | notes/0.14.0.markdown | rtyley/sbt-assembly | 9a2ca181226a8b437a784792343a203a0b94a6b8 | [
"MIT"
] | 219 | 2015-01-31T16:12:54.000Z | 2022-03-30T19:14:28.000Z |

[1]: https://code.google.com/archive/p/jarjar/
[162]: https://github.com/sbt/sbt-assembly/pull/162
[165]: https://github.com/sbt/sbt-assembly/pull/165
[@wxiang7]: https://github.com/wxiang7
[@eed3si9n]: https://github.com/eed3si9n
### shading
sbt-assembly 0.14.0 adds shading support backed by [Jar Jar Links][1].
assemblyShadeRules in assembly := Seq(
ShadeRule.rename("org.apache.commons.io.**" -> "shadeio.@1").inAll
)
The above rule renames all classes in `org.apache.commons.io` to the `shadeio` package by bytecode transformation, including the references to them. For more details see [Shading](https://github.com/sbt/sbt-assembly#shading).
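The rename can also be scoped so that it only touches specific dependencies or your own project's classes instead of everything on the classpath. The snippet below is only a sketch of that idea using the `inLibrary`/`inProject` scoping combinators; see the project README for the exact syntax and module coordinates:

    assemblyShadeRules in assembly := Seq(
      ShadeRule.rename("org.apache.commons.io.**" -> "shadeio.@1")
        .inLibrary("commons-io" % "commons-io" % "2.4")
        .inProject
    )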
This feature was contributed as [#162][162] by [@wxiang7][@wxiang7], and was further improved in [#165][165] by [@eed3si9n][@eed3si9n].
| 45.166667 | 225 | 0.712177 | eng_Latn | 0.703239 |
4d032fe77ac48e7984499c0e7b9f6f2671b1baa2 | 3,309 | md | Markdown | articles/api-management/api-management-howto-mutual-certificates-for-clients.md | hihayak/azure-docs.ja-jp | c3eebfa08baa1a81f6c2cbef5b58c11e0bba6d5d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/api-management/api-management-howto-mutual-certificates-for-clients.md | hihayak/azure-docs.ja-jp | c3eebfa08baa1a81f6c2cbef5b58c11e0bba6d5d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/api-management/api-management-howto-mutual-certificates-for-clients.md | hihayak/azure-docs.ja-jp | c3eebfa08baa1a81f6c2cbef5b58c11e0bba6d5d | [
"CC-BY-4.0",
"MIT"
] | null | null | null |

---
title: API Management でのクライアント証明書認証を使用した API の保護 - Azure API Management | Microsoft Docs
description: クライアント証明書を使用して API へのアクセスを保護する方法の詳細
services: api-management
documentationcenter: ''
author: vladvino
manager: erikre
editor: ''
ms.service: api-management
ms.workload: mobile
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: article
ms.date: 02/01/2017
ms.author: apimpm
ms.openlocfilehash: 450ebc621758363c5ea9ab6d631cd6c7df38794b
ms.sourcegitcommit: f8c592ebaad4a5fc45710dadc0e5c4480d122d6f
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/29/2019
ms.locfileid: "58619724"
---
# <a name="how-to-secure-apis-using-client-certificate-authentication-in-api-management"></a>API Management でクライアント証明書認証を使用して API を保護する方法
API Management には、クライアント証明書を使用して API (つまりクライアントから API Management) へのアクセスを保護する機能が備わっています。 現時点では、クライアント証明書の拇印が目的の値であるかを確認できます。 API Management にアップロードした既存の証明書に対する拇印を確認することもできます。
クライアント証明書を使用して API のバックエンド サービスへのアクセス (つまり API Management からバックエンド) を保護する方法については、[クライアント証明書認証を使用して、バックエンド サービスを保護する方法](https://docs.microsoft.com/azure/api-management/api-management-howto-mutual-certificates)に関するページを参照してください。
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
## <a name="checking-the-expiration-date"></a>有効期限の確認
以下のポリシーを構成して、証明書が有効期限切れかどうかを確認することができます。
```xml
<choose>
<when condition="@(context.Request.Certificate == null || context.Request.Certificate.NotAfter < DateTime.Now)" >
<return-response>
<set-status code="403" reason="Invalid client certificate" />
</return-response>
</when>
</choose>
```
## <a name="checking-the-issuer-and-subject"></a>発行者とサブジェクトの確認
以下のポリシーを構成して、クライアント証明書の発行者とサブジェクトを確認できます。
```xml
<choose>
<when condition="@(context.Request.Certificate == null || context.Request.Certificate.Issuer != "trusted-issuer" || context.Request.Certificate.SubjectName.Name != "expected-subject-name")" >
<return-response>
<set-status code="403" reason="Invalid client certificate" />
</return-response>
</when>
</choose>
```
## <a name="checking-the-thumbprint"></a>拇印の確認
以下のポリシーを構成して、クライアント証明書の拇印を確認できます。
```xml
<choose>
<when condition="@(context.Request.Certificate == null || context.Request.Certificate.Thumbprint != "desired-thumbprint")" >
<return-response>
<set-status code="403" reason="Invalid client certificate" />
</return-response>
</when>
</choose>
```
## <a name="checking-a-thumbprint-against-certificates-uploaded-to-api-management"></a>API Management にアップロードされた証明書に対する拇印の確認
以下の例では、API Management にアップロードされた証明書に対して、クライアント証明書の拇印を確認する方法を示しています。
```xml
<choose>
<when condition="@(context.Request.Certificate == null || !context.Deployment.Certificates.Any(c => c.Value.Thumbprint == context.Request.Certificate.Thumbprint))" >
<return-response>
<set-status code="403" reason="Invalid client certificate" />
</return-response>
</when>
</choose>
```
## <a name="next-step"></a>次のステップ
* [クライアント証明書認証を使用してバックエンド サービスを保護する方法](https://docs.microsoft.com/azure/api-management/api-management-howto-mutual-certificates)
* [証明書のアップロード方法](https://docs.microsoft.com/azure/api-management/api-management-howto-mutual-certificates)
| 35.580645 | 225 | 0.744938 | yue_Hant | 0.343744 |
4d0395da697c0eb83ac8c866f4337fdced9f9e8d | 2,009 | md | Markdown | docs/ide/step-8-customize-the-quiz.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/step-8-customize-the-quiz.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ide/step-8-customize-the-quiz.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |

---
title: '8. adım: testi özelleştirme'
ms.custom: ''
ms.date: 11/04/2016
ms.prod: visual-studio-dev15
ms.technology: vs-acquisition
ms.topic: conceptual
ms.assetid: dc8edb13-1b23-47d7-b859-8c6f7888c1a9
author: TerryGLee
ms.author: tglee
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: 9c2f096415ccfbadfe66f18a373642cf6a5de86b
ms.sourcegitcommit: 04a717340b4ab4efc82945fbb25dfe58add2ee4c
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/28/2018
ms.locfileid: "32065253"
---
# <a name="step-8-customize-the-quiz"></a>8. adım: testi özelleştirme
Öğreticinin son bölümünde testi özelleştirme ve zaten öğrendiklerinizi üzerinde genişletmek için bazı yollar ele alacağız. Örneğin, program yanıt hiçbir zaman bir kesir olduğu rastgele bölme problemleri nasıl oluşturduğunu hakkında düşünün. Daha fazla bilgi için Aç `timeLabel` farklı bir renk denetlemek ve test alanın bir ipucu sağlar.
## <a name="to-customize-the-quiz"></a>Test özelleştirmek için
- Yalnızca beş saniyede bir test kaldığında kapatma **timeLabel** denetim kırmızı ayarlayarak kendi **BackColor** özelliği (`timeLabel.BackColor = Color.Red;`). Test bittiğinde rengi sıfırlayın.
- Doğru yanıt içine girildiğinde ses oynatarak test alanın bir ipucu verin bir <xref:System.Windows.Forms.NumericUpDown> denetim. (Her denetim için bir olay işleyicisi yazma <xref:System.Windows.Forms.NumericUpDown.ValueChanged> test alanın denetimin değeri değiştiğinde harekete olayı.)
## <a name="to-continue-or-review"></a>Devam etmek veya gözden geçirmek için
- Test tamamlanmış bir sürümünü indirmek için bkz: [tam matematik testi öğretici örnek](http://code.msdn.microsoft.com/Complete-Math-Quiz-8581813c).
- Sonraki öğretici gitmek için bkz: [öğretici 3: eşleşen bir oluşturma oyun](../ide/tutorial-3-create-a-matching-game.md).
- Eğitmen önceki adıma dönmek için bkz: [adım 7: çarpma ve bölme sorunları eklemek](../ide/step-7-add-multiplication-and-division-problems.md).
| 54.297297 | 339 | 0.77551 | tur_Latn | 0.998182 |
4d04b88fa7daf4403ec2856be3954669df8ae0ea | 3,095 | md | Markdown | docs/vs-2015/debugger/crt-debugging-techniques.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/crt-debugging-techniques.md | Simran-B/visualstudio-docs.de-de | 0e81681be8dbccb2346866f432f541b97d819dac | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-24T14:57:38.000Z | 2020-07-24T14:57:38.000Z | docs/vs-2015/debugger/crt-debugging-techniques.md | angelobreuer/visualstudio-docs.de-de | f553469c026f7aae82b7dc06ba7433dbde321350 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |

---
title: Debugtechniken CRT | Microsoft-Dokumentation
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-debug
ms.topic: conceptual
f1_keywords:
- c.runtime.debugging
dev_langs:
- FSharp
- VB
- CSharp
- C++
- C++
helpviewer_keywords:
- debugging [CRT]
- CRT, debugging
- debugging [C++], CRT debug support
ms.assetid: 9be561f6-14a8-44ff-925d-d911d5b8e6ff
caps.latest.revision: 23
author: MikeJo5000
ms.author: mikejo
manager: jillfra
ms.openlocfilehash: a69defe75b80ef1f395931017dfc942398ca2710
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "68161482"
---
# <a name="crt-debugging-techniques"></a>CRT-Debugverfahren
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Die folgenden Debugverfahren können beim Debuggen von Programmen hilfreich sein, die die C-Laufzeitbibliothek verwenden.
## <a name="in-this-section"></a>In diesem Abschnitt
[Verwenden der CRT-Debugbibliothek](../debugger/crt-debug-library-use.md)
Hier wird beschrieben, wie die C-Laufzeitbibliothek das Debuggen unterstützt, und Sie erhalten Hinweise für den Zugriff auf die betreffenden Tools.
[Makros für die Berichterstellung](../debugger/macros-for-reporting.md)
Hier finden Sie Informationen zu den in CRTDBG.H definierten Makros **_RPTn** und **_RPTFn**, die anstelle von `printf`-Anweisungen zum Debuggen verwendet werden.
[Debugversionen von Heapreservierungsfunktionen](../debugger/debug-versions-of-heap-allocation-functions.md)
Erörtert die speziellen Debugversionen von Heapreservierungsfunktionen. Zu den behandelten Themen gehören die Zuordnung von Aufrufen durch die CRT-Laufzeitbibliothek, Vorteile des expliziten Aufrufs, Vermeiden von Konvertierungen, Dokumentieren der einzelnen Reservierungstypen in Clientblocks und die Ergebnisse bei nicht definiertem _DEBUG.
[Details zum CRT-Debugheap](../debugger/crt-debug-heap-details.md)
Enthält Links zu den Themen Speicherverwaltung und Debugheap, Blocktypen auf dem Debugheap, Verwenden des Debugheaps, Berichtsfunktionen für den Heapzustand und Nachverfolgen von Heapreservierungsanforderungen.
[Schreiben von Hookfunktionen zum Debuggen](../debugger/debug-hook-function-writing.md)
Enthält Links zu den Themen Hookfunktionen für Clientblöcke, Hookfunktionen für Reservierungen, Reservierungshooks und CRT-Speicherbelegungen sowie Hookfunktionen für Berichte.
[Suchen von Arbeitsspeicherverlusten mit der CRT-Bibliothek](../debugger/finding-memory-leaks-using-the-crt-library.md)
Hier werden Verfahren zum Erkennen und Isolieren von Arbeitsspeicherverlusten mithilfe des Debuggers und der C-Laufzeitbibliothek erläutert.
## <a name="related-sections"></a>Verwandte Abschnitte
[Debuggen von nativem Code](../debugger/debugging-native-code.md)
Hier werden einige allgemeine Probleme und Verfahren beim Debuggen für C- und C++-Anwendungen erörtert.
[Debuggersicherheit](../debugger/debugger-security.md)
Enthält Empfehlungen für mehr Sicherheit beim Debuggen.
| 50.737705 | 345 | 0.798708 | deu_Latn | 0.954843 |
4d04dbe086503adaa9760fd9c6809900b325437a | 700 | md | Markdown | readme.md | ismo1106/inventory.asset.larv | 791640da0e1b9b9756849289e68d12826cdca5c7 | [
"MIT"
] | null | null | null | readme.md | ismo1106/inventory.asset.larv | 791640da0e1b9b9756849289e68d12826cdca5c7 | [
"MIT"
] | null | null | null | readme.md | ismo1106/inventory.asset.larv | 791640da0e1b9b9756849289e68d12826cdca5c7 | [
"MIT"
] | null | null | null |

## Installation
* `git clone https://github.com/ismo1106/inventory.asset.larv.git`
* `cd inventory.asset.larv`
* `composer install`
* save the .env.example to .env
* update the .env file with your db credentials
* `php artisan key:generate`
## Peraturan Permission (Middleware Route)
Name Permission menggunakan Nama Controller (without 'Controller' string) dengan separator (.) kemudian diikuti nama method atau action, contoh:
* Controller Class name : `UserController`
* Method function name : `index`
* Permission Name must use : `User.index`
## License
This project uses the Laravel framework and is open-sourced software licensed under the [MIT license](http://opensource.org/licenses/MIT).
| 35 | 144 | 0.76 | eng_Latn | 0.63659 |
4d05a2de0bce0c304c9cf0b2b24103f242d47126 | 1,906 | md | Markdown | README.md | lesh1k/mtnl | b80b52b8db86348f9a8e7ef923c1a3009deb6373 | [
"MIT"
] | null | null | null | README.md | lesh1k/mtnl | b80b52b8db86348f9a8e7ef923c1a3009deb6373 | [
"MIT"
] | null | null | null | README.md | lesh1k/mtnl | b80b52b8db86348f9a8e7ef923c1a3009deb6373 | [
"MIT"
] | null | null | null |

### Quantity to "more natural" language
JS module that provides a function to convert an amount in grams (or other units) to a more "kitchen-like" language
`mtnl` stands for *measurement to natural language*
#### Usage
```javascript
// Load module. It exports one function - mtnl
var mtnl = require('./mtnl');
// Call the function
mtnl(3520); // '15 1/2 cup(s)'
// Optionally specify unit type
mtnl(3520, 'volume'); // '1 gallon(s)'
// OR a totally different config
const OTHER_CONFIG = require('./other_config.json');
mtnl(1000, 'dry', OTHER_CONFIG);
```
All units, unit types, quantities and margin of error can be specified in [config.json](./config.json)
The code was written for units used in the kitchen; however, using the above-mentioned config it can be adapted for any use.
In config, a unit looks like this:
```json
{
"cup": {
"name": "cup(s)",
"minimum": 1,
"divisible": 0.5
}
}
```
- `name` is used for output (e.g. `'1/2 cup(s)'`)
- `minimum` is used for specifying the smallest amount possible. For cup it's 1, thus `1/2 cup(s)` will never be possible, while `1 1/2 cup(s)` will be possible.
- `divisible` is used to determine the smallest *part* of the unit. For cup it's 0.5, thus `1 1/2 cup(s)` will be possible, while `1 3/4 cup(s)` will be not.
The `mtnl` function chooses the biggest unit possible (based on `rates` in config) that allows expressing the quantity within the specified margin of error. For example, 6 grams with a 10% error can be either `6 gram(s)` or `1 1/4 teaspoon(s)`; the output will be the latter, since `teaspoon` is bigger than `gram`.
#### Examples
```javascript
var mtnl = require('./mtnl');
mtnl(300); // '11 ounce(s)'
mtnl(500); // '2 cup(s)'
mtnl(3520); // '15 1/2 cup(s)'
mtnl(3520, 'volume'); // '1 gallon(s)'
mtnl(5, 'weight'); // '1 teaspoon(s)'
mtnl(7); // 1 1/2 teaspoon(s)
mtnl(6); // 1 1/4 teaspoon(s)
```
| 35.296296 | 302 | 0.66107 | eng_Latn | 0.974966 |
4d05ef122837b150e950ad60eddb40c9f1eadf74 | 374 | md | Markdown | docs/iea43_wra_data_model-properties-measurement-location-measurement-location-properties-measurement-point-measurement-point-properties-mounting-arrangement-mounting-arrangement-properties-boom-diameter-mm.md | flrs-edf/digital_wra_data_standard | 7432b74d7699f5292067e453cf371898b0aa0592 | [
"BSD-3-Clause"
] | 39 | 2020-01-10T15:11:14.000Z | 2022-02-24T17:42:55.000Z | docs/iea43_wra_data_model-properties-measurement-location-measurement-location-properties-measurement-point-measurement-point-properties-mounting-arrangement-mounting-arrangement-properties-boom-diameter-mm.md | flrs-edf/digital_wra_data_standard | 7432b74d7699f5292067e453cf371898b0aa0592 | [
"BSD-3-Clause"
] | 118 | 2020-07-16T15:59:10.000Z | 2022-03-31T14:36:56.000Z | docs/iea43_wra_data_model-properties-measurement-location-measurement-location-properties-measurement-point-measurement-point-properties-mounting-arrangement-mounting-arrangement-properties-boom-diameter-mm.md | flrs-edf/digital_wra_data_standard | 7432b74d7699f5292067e453cf371898b0aa0592 | [
"BSD-3-Clause"
] | 11 | 2020-08-12T22:21:22.000Z | 2022-01-13T02:53:31.000Z |

## boom_diameter_mm Type
`number` ([Boom Diameter \[mm\]](iea43\_wra_data_model-properties-measurement-location-measurement-location-properties-measurement-point-measurement-point-properties-mounting-arrangement-mounting-arrangement-properties-boom-diameter-mm.md))
## boom_diameter_mm Constraints
**minimum**: the value of this number must be greater than or equal to `0`
| 46.75 | 240 | 0.81016 | eng_Latn | 0.797705 |
4d064844244489d370468a6df2ef0dd66d5e3870 | 1,793 | md | Markdown | includes/virtual-machines-shared-image-gallery-resources.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machines-shared-image-gallery-resources.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machines-shared-image-gallery-resources.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null |

---
title: plik dołączania
description: plik dołączania
services: virtual-machines
author: cynthn
ms.service: virtual-machines
ms.topic: include
ms.date: 05/04/2020
ms.author: cynthn
ms.custom: include file
ms.openlocfilehash: 433e909563602a2ef32b7986959b428c9afaf9f4
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 10/09/2020
ms.locfileid: "82789005"
---
| Zasób | Opis|
|----------|------------|
| **Źródło obrazu** | Jest to zasób, którego można użyć do utworzenia **wersji obrazu** w galerii obrazów. Źródłem obrazu może być istniejąca maszyna wirtualna platformy Azure, która jest [uogólniona lub wyspecjalizowana](../articles/virtual-machines/windows/shared-image-galleries.md#generalized-and-specialized-images), zarządzanym obrazem, migawką lub wersją obrazu w innej galerii obrazów. |
| **Galeria obrazów** | Podobnie jak w przypadku portalu Azure Marketplace, **Galeria obrazów** jest repozytorium do zarządzania i udostępniania obrazów, ale ty kontrolujesz, kto ma dostęp. |
| **Definicja obrazu** | Definicje obrazów są tworzone w galerii i zawierają informacje o obrazie i wymaganiach dotyczących używania go wewnętrznie. Dotyczy to zarówno obrazu systemu Windows, jak i Linux, informacji o wersji oraz minimalnych i maksymalnych wymagań dotyczących pamięci. Jest to definicja typu obrazu. |
| **Wersja obrazu** | **Wersja obrazu** jest używana do tworzenia maszyny wirtualnej w przypadku korzystania z galerii. Dla danego środowiska można mieć wiele wersji obrazu. Podobnie jak w przypadku obrazu zarządzanego, w przypadku tworzenia maszyny wirtualnej przy użyciu **wersji obrazu** wersja obrazu jest używana do tworzenia nowych dysków dla maszyny wirtualnej. Wersje obrazów można wielokrotnie używać. | | 77.956522 | 412 | 0.798104 | pol_Latn | 0.999737 |
4d06ba187dcb5fdb32b66b851a18a29004c2bd18 | 59 | md | Markdown | README.md | dinglau2008/GeometryEnigma | 15414f7bbaaeb8c62926febc4662cac66373ce7f | [
"Apache-2.0"
] | null | null | null | README.md | dinglau2008/GeometryEnigma | 15414f7bbaaeb8c62926febc4662cac66373ce7f | [
"Apache-2.0"
] | null | null | null | README.md | dinglau2008/GeometryEnigma | 15414f7bbaaeb8c62926febc4662cac66373ce7f | [
"Apache-2.0"
] | null | null | null |

# GeometryEnigma
This project provides some geometry coding.
| 19.666667 | 41 | 0.847458 | eng_Latn | 0.996467 |
4d077351385f1d5d02d1712780fcf7d7fda6365f | 3,125 | md | Markdown | input/docs/SupportedLibraries.md | badcel/gircore.github.io | 5a7a255785f9d2942353fa0258be49c5375fd0a4 | [
"MIT"
] | null | null | null | input/docs/SupportedLibraries.md | badcel/gircore.github.io | 5a7a255785f9d2942353fa0258be49c5375fd0a4 | [
"MIT"
] | null | null | null | input/docs/SupportedLibraries.md | badcel/gircore.github.io | 5a7a255785f9d2942353fa0258be49c5375fd0a4 | [
"MIT"
] | null | null | null |

Description: General information
---
Currently there are multiple supported libraries that integrate deeply with Linux: [Gtk](#gtk), [WebkitGTK](#webkitgtk), [Libchamplain](#libchamplain), [libhandy](#libhandy), [GIO](#gio), [gstreamer](#gstreamer).
## GTK
[Gtk] is the toolkit which is used to display windows and widgets on the screen. The widgets can be added directly in code or described through an XML file.
Supported widgets are, for example: windows, dialogs, labels, images, spinners, progress bars, several buttons and switches, text boxes, tables, lists, menus, toolbars, popovers, and much more. It powers several Linux desktops like [Gnome] and [Xfce] and applications like [Gimp].
![A picture of an example gtk application][GtkApp]
## WebkitGTK
[WebkitGTK] is a browser component for GTK and can be used to embed the webkit webengine into an application as a widget. There is support for the web inspector and several settings to tweak the webview to your needs.
The bindings make it easy to:
* Embed javascript into a webpage
* Call a javascript function
* Callback from the webpage into the C# code.
![A picture of an example gtk application with visible webpage][GtkAppBrowser]
## Libchamplain
[Libchamplain] is a map component for GTK and can be used to embed maps into an application as a widget. By default it uses OpenStreetMap.
![A picture of an example gtk application with visible openstreetmap][GtkAppMaps]
## libhandy
[libhandy] extends GTK with new widgets to support mobile devices, meaning that full-blown applications automatically adapt their UI to different view modes if the available space changes.
One widget of this library is the *Paginator* which is shown in the pictures above. It allows to swipe through widgets and is used to switch between the webpage and maps control.
## GIO
[GIO] is a library that allows easy access to input/output operations. Currently there is initial support for [DBus] operations. DBus is a standardized IPC framework which all major Linux desktops use for interprocess communication.
## Gstreamer
[Gstreamer] is a multimedia library to play back various media formats via a flexible pipelining system. The code to play back a movie is in the [samples](https://github.com/gircore/gir.core/blob/develop/Samples/Gst/Play.cs).
![A picture of the Tears of Steel project played via gstreamer][GstSintel]
(Homepage of the free movie: https://mango.blender.org/)
[DBus]: https://www.freedesktop.org/wiki/Software/dbus/
[GIO]: https://developer.gnome.org/gio/stable/
[libhandy]: https://source.puri.sm/Librem5/libhandy
[Libchamplain]: https://wiki.gnome.org/Projects/libchamplain
[WebkitGTK]: https://webkitgtk.org/
[Gtk]: https://gtk.org
[Gimp]: https://gimp.org
[Gnome]: https://gnome.org
[Xfce]: https://xfce.org
[Gstreamer]: https://gstreamer.freedesktop.org/
[GtkApp]: GtkApp.png "Example GtkApp"
[GtkAppBrowser]: GtkAppBrowser.png "Example GtkApp with Browser"
[GtkAppMaps]: GtkAppMaps.png "Example GtkApp with Maps"
[GstSintel]: GstSintel.png "Gstreamer playing back Tears of Steel (https://mango.blender.org/)" | 57.87037 | 278 | 0.76224 | eng_Latn | 0.963108 |
4d089c4eba68cde9f2249be66170f8b4188d35f6 | 113 | md | Markdown | README.md | lucasoskorep/pi-mta-sign | 30cbe70d7be2e5cb7f3f7e03698dd18b851421d7 | [
"MIT"
] | 1 | 2022-03-31T00:29:46.000Z | 2022-03-31T00:29:46.000Z | README.md | lucasoskorep/pi-mta-sign | 30cbe70d7be2e5cb7f3f7e03698dd18b851421d7 | [
"MIT"
] | null | null | null | README.md | lucasoskorep/pi-mta-sign | 30cbe70d7be2e5cb7f3f7e03698dd18b851421d7 | [
"MIT"
] | null | null | null |

# pi-mta-sign
Code and documentation for a project that turns a Raspberry Pi into your very own MTA subway sign.

| 37.666667 | 98 | 0.787611 | eng_Latn | 0.995274 |
4d09648b356b907fb1db8583ad8e90ecf5e5ffa4 | 9,011 | md | Markdown | machine-learning-with-networks/graph-neural-networks.md | Yasoz/CS224W-notes | 8fc41b9169942ecd04364ed874dd07d5c8e305f2 | [
"MIT"
] | 219 | 2019-10-14T20:17:35.000Z | 2022-03-30T04:23:01.000Z | machine-learning-with-networks/graph-neural-networks.md | Yasoz/CS224W-notes | 8fc41b9169942ecd04364ed874dd07d5c8e305f2 | [
"MIT"
] | 15 | 2019-11-26T21:10:59.000Z | 2020-09-12T06:57:41.000Z | machine-learning-with-networks/graph-neural-networks.md | Yasoz/CS224W-notes | 8fc41b9169942ecd04364ed874dd07d5c8e305f2 | [
"MIT"
] | 78 | 2019-10-01T23:34:26.000Z | 2022-03-31T02:13:49.000Z |

---
layout: post
title: Graph Neural Networks
---
In the previous section, we have learned how to represent a graph using "shallow encoders". Those techniques give us powerful expressions of a graph in a vector space, but there are limitations as well. In this section, we will explore three different approaches using graph neural networks to overcome the limitations.
## Limitations of "Shallow Encoders"
* Shallow Encoders do not scale, as each node has a unique embedding.
* Shallow Encoders are inherently transductive. It can only generate embeddings for a single fixed graph.
* Node Features are not taken into consideration.
* Shallow Encoders cannot be generalized to train with different loss functions.
Fortunately, graph neural networks can solve the above limitations.
## Graph Convolutional Networks (GCN)
Traditionally, neural networks are designed for fixed-sized graphs. For example, we could consider an image as a grid graph or a piece of text as a line graph. However, most of the graphs in the real world have an arbitrary size and complex topological structure. Therefore, we need to define the computational graph of GCN differently.
### Setup
Given a graph $$G = (V, A, X)$$ such that:
* $$V$$ is the vertex set
* $$A$$ is the adjacency matrix
* $$X\in \mathbb{R}^{m\times\rvert V \rvert}$$ is the node feature matrix
### Computational Graph and Generalized Convolution

Let the example graph (referring to the above figure on the left) be our $$G$$. Our goal is to define a computational graph of GCN on $$G$$. The computational graph should keep the structure of $$G$$ and incorporate the nodes' neighboring features at the same time. For example, the embedding vector of node $$A$$ should consist of its neighbor $$\{B, C, D\}$$, and not depend on the ordering of $$\{B, C, D\}$$. One way to do this is to simply take the average of the features of $$\{B, C, D\}$$. In general, the aggregation function (referring to the boxes in the above figure on the right) needs to be **order invariant** (max, average, etc.).
The computational graph on $$G$$ with two layers will look like the following:

Here, each node defines a computational graph based on its neighbors. In particular, the computational graph for node $$A$$ can be viewed as the following (Layer-0 is the input layer with node feature $$X_i$$):

### Deep Encoders
With the above idea, here is the mathematical expression at each layer for node $$v$$ using the average aggregation function:
* At 0th layer: $$h^0_v = x_v$$. This is the node feature.
* At kth layer: $$ h_v^{k} = \sigma(W_k\sum_{u\in N(v)}\frac{h_u^{k-1}}{\rvert N(v)\rvert} + B_kh_v^{k-1}), \forall k \in \{1, .., K\}$$.
$$h_v^{k-1}$$ is the embedding of node $$v$$ from the previous layer. $$\rvert N(v) \rvert$$ is the number of the neighbors of node $$v$$.
The purpose of $$\sum_{u\in N(v)}\frac{h_u^{k-1}}{\rvert N(v) \rvert}$$ is to aggregate neighboring features of $$v$$ from the previous layer.
$$\sigma$$ is the activation function (e.g. ReLU) to introduce non-linearity. $$W_k$$ and $$B_k$$ are the trainable parameters.
* Output layer: $$z_v = h_v^{K}$$. This is the final embedding after $$K$$ layers.
Equivalently, the above computation can be written in a matrix multiplication form for the entire graph:
$$ H^{l+1} = \sigma(H^{l}W_0^{l} + \tilde{A}H^{l}W_1^{l}) $$ such that $$\tilde{A}=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$$.
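As a concrete illustration (this code is not part of the original notes), a single GCN layer implementing the matrix form above can be sketched in PyTorch; the symmetric normalization $$\tilde{A}=D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$$ is computed from a dense adjacency matrix for clarity:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H_{l+1} = ReLU(H_l W_0 + A_tilde H_l W_1)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W0 = nn.Linear(in_dim, out_dim, bias=False)  # self term (B_k)
        self.W1 = nn.Linear(in_dim, out_dim, bias=False)  # neighbor term (W_k)

    def forward(self, H, A):
        # Symmetric normalization: A_tilde = D^{-1/2} A D^{-1/2}
        deg = A.sum(dim=1).clamp(min=1)
        d_inv_sqrt = deg.pow(-0.5)
        A_tilde = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.W0(H) + A_tilde @ self.W1(H))

# Usage sketch: H0 = X (node feature matrix), A = adjacency matrix.
# layer = GCNLayer(X.shape[1], 64); H1 = layer(X, A)
```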
### Training the Model
We can feed these embeddings into any loss function and run stochastic gradient descent to train the parameters.
For example, for a binary classification task, we can define the loss function as:
$$L = -\sum_{v\in V} \left[ y_v \log(\sigma(z_v^T\theta)) + (1-y_v)\log(1-\sigma(z_v^T\theta)) \right]$$
$$y_v \in \{0, 1\}$$ is the node class label. $$z_v$$ is the encoder output. $$\theta$$ is the classification weight. $$\sigma$$ can be the sigmoid function. $$\sigma(z_v^T\theta)$$ represents the predicted probability of node $$v$$. Therefore, the first half of the equation would contribute to the loss function, if the label is positive ($$y_v=1$$). Otherwise, the second half of the equation would contribute to the loss function.
We can also train the model in an unsupervised manner by using: random walk, graph factorization, node proximity, etc.
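For instance (again a sketch, not from the original notes), given final embeddings `z` of shape `[N, d]`, binary labels `y`, and the classification weight `theta`, the loss above is the standard binary cross-entropy:

```python
import torch.nn.functional as F

logits = z @ theta                      # predicted scores, shape [N]
loss = F.binary_cross_entropy_with_logits(logits, y.float())
loss.backward()                         # gradients flow to theta and the GCN parameters
```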
### Inductive Capability
GCN can be generalized to unseen nodes in a graph. For example, if a model is trained using nodes $$A, B, C$$, the newly added nodes $$D, E, F$$ can also be evaluated since the parameters are shared across all nodes.

## GraphSAGE
So far we have explored a simple neighborhood aggregation method, but we can also generalize the aggregation method in the following form:
$$ h_v^{K} = \sigma([W_k AGG(\{h_u^{k-1}, \forall u \in N(v)\}), B_kh_v^{k-1}])$$
For node $$v$$, we can apply different aggregation methods ($$AGG$$) to its neighbors and concatenate the features with $$v$$ itself.
Here are some commonly used aggregation functions:
* Mean: Take a weighted average of its neighbors.
$$AGG = \sum_{u\in N(v)} \frac{h_u^{k-1}}{\rvert N(v) \rvert}$$
* Pooling: Transform neighbor vectors and apply symmetric vector function ($$\gamma$$ can be element-wise mean or max).
$$AGG = \gamma(\{ Qh_u^{k-1}, \forall u\in N(v)\})$$
* LSTM: Apply LSTM to reshuffled neighbors.
$$AGG = LSTM(\{ h_u^{k-1}, \forall u\in \pi(N(v))\})$$
## Graph Attention Networks
What if some neighboring nodes carry more important information than the others? In this case, we would want to assign different weights to different neighboring nodes by using the attention technique.
Let $$\alpha_{vu}$$ be the weighting factor (importance) of node $$u$$'s message to node $$v$$. From the average aggregation above, we have defined $$\alpha=\frac{1}{\rvert N(v) \rvert}$$. However, we can also explicitly define $$\alpha$$ based on the structural property of a graph.
### Attention Mechanism
Let $$\alpha_{vu}$$ be computed as the byproduct of an attention mechanism $$a$$, which computes the attention coefficients $$e_{vu}$$ across pairs of nodes $$u, v$$ based on their messages:
$$e_{vu} = a(W_kh_u^{k-1}, W_kh_v^{k-1})$$
$$e_{vu}$$ indicates the importance of node $$u$$'s message to node $$v$$. Then, we normalize the coefficients using softmax to compare importance across different neighbors:
$$\alpha_{vu} = \frac{\exp(e_{vu})}{\sum_{k\in N(v)}\exp(e_{vk})}$$
Therefore, we have:
$$h_{v}^k = \sigma(\sum_{u\in N(v)}\alpha_{vu}W_kh^{k-1}_u)$$
This approach is agnostic to the choice of $$a$$ and the parameters of $$a$$ can be trained jointly with $$W_k$$.
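A single-head attention layer in this style can be sketched as follows (not from the original notes; it assumes a dense adjacency matrix and that every node has at least one neighbor, e.g. after adding self-loops):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention mechanism a(.,.)

    def forward(self, H, A):
        Wh = self.W(H)                                   # [N, out_dim]
        N = Wh.size(0)
        # e[v, u] = a(W h_v, W h_u) for every pair of nodes
        pairs = torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                           Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = self.a(pairs).squeeze(-1)                    # [N, N]
        e = e.masked_fill(A == 0, float('-inf'))         # keep only real neighbors
        alpha = F.softmax(e, dim=-1)                     # normalize over u for each node v
        return torch.relu(alpha @ Wh)
```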
## Reference
Here is a list of useful references:
**Tutorials and Overview:**
* [Relational inductive biases and graph networks (Battaglia et al., 2018)](https://arxiv.org/pdf/1806.01261.pdf)
* [Representation learning on graphs: Methods and applications (Hamilton et al., 2017)](https://arxiv.org/pdf/1709.05584.pdf)
**Attention-based Neighborhood Aggregation:**
* [Graph attention networks (Hoshen, 2017; Velickovic et al., 2018; Liu et al., 2018)](https://arxiv.org/pdf/1710.10903.pdf)
**Embedding the Entire Graphs:**
* Graph neural nets with edge embeddings ([Battaglia et al., 2016](https://arxiv.org/pdf/1806.01261.pdf); [Gilmer et. al., 2017](https://arxiv.org/pdf/1704.01212.pdf))
* Embedding entire graphs ([Duvenaud et al., 2015](https://dl.acm.org/citation.cfm?id=2969488); [Dai et al., 2016](https://arxiv.org/pdf/1603.05629.pdf); [Li et al., 2018](https://arxiv.org/abs/1803.03324)) and graph pooling
([Ying et al., 2018](https://arxiv.org/pdf/1806.08804.pdf), [Zhang et al., 2018](https://arxiv.org/pdf/1911.05954.pdf))
* [Graph generation](https://arxiv.org/pdf/1802.08773.pdf) and [relational inference](https://arxiv.org/pdf/1802.04687.pdf) (You et al., 2018; Kipf et al., 2018)
* [How powerful are graph neural networks? (Xu et al., 2018)](https://arxiv.org/pdf/1810.00826.pdf)
**Embedding Nodes:**
* Varying neighborhood: [Jumping knowledge networks (Xu et al., 2018)](https://arxiv.org/pdf/1806.03536.pdf), [GeniePath (Liu et al., 2018)](https://arxiv.org/pdf/1802.00910.pdf)
* [Position-aware GNN (You et al. 2019)](https://arxiv.org/pdf/1906.04817.pdf)
**Spectral Approaches to Graph Neural Networks:**
* [Spectral graph CNN](https://arxiv.org/pdf/1606.09375.pdf) & [ChebNet](https://arxiv.org/pdf/1609.02907.pdf) (Bruna et al., 2015; Defferrard et al., 2016)
* [Geometric deep learning (Bronstein et al., 2017; Monti et al., 2017)](https://arxiv.org/pdf/1611.08097.pdf)
**Other GNN Techniques:**
* [Pre-training Graph Neural Networks (Hu et al., 2019)](https://arxiv.org/pdf/1905.12265.pdf)
* [GNNExplainer: Generating Explanations for Graph Neural Networks (Ying et al., 2019)](https://arxiv.org/pdf/1903.03894.pdf)
| 64.364286 | 647 | 0.72123 | eng_Latn | 0.96729 |
4d09c0db3193db91bdf32a79a9a30df0dd859b0f | 1,133 | md | Markdown | _posts/2018-01-11-ubicomp2018.md | HAbitsLab/habitslab.github.io | 50223431304f2ca85e1fb8abac315c63a2cc85e1 | [
"MIT"
] | null | null | null | _posts/2018-01-11-ubicomp2018.md | HAbitsLab/habitslab.github.io | 50223431304f2ca85e1fb8abac315c63a2cc85e1 | [
"MIT"
] | null | null | null | _posts/2018-01-11-ubicomp2018.md | HAbitsLab/habitslab.github.io | 50223431304f2ca85e1fb8abac315c63a2cc85e1 | [
"MIT"
] | null | null | null | ---
date: 2018-01-11 12:01:00
layout: post
title: The HABits Lab Presents Advanced Research in Ubiquitous Computing at ACM Ubicomp 2018
subtitle: Habits Lab presents
description: Three lab members presented a paper, poster, and demo exhibiting their exciting new work in health technology.
image: https://res.cloudinary.com/dciykvm1w/image/upload/v1600110908/habit-lab-2_zllxmh.jpg
optimized_image: https://res.cloudinary.com/dciykvm1w/image/upload/v1600110908/habit-lab-2_zllxmh.jpg
category: mhealth
tags:
- mhealth
author: timtruty
---
PhD students from The Health Aware Bits (HABits) Lab presented their new work, which focuses on cultivating a healthy lifestyle assisted by technology, at the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP 2018). The event was held at the Suntec Singapore Convention and Exhibition Center (colocated with ISWC 2018) on October 9-11 in Singapore. [Full story](https://www.mccormick.northwestern.edu/electrical-computer/news-events/news/articles/2019/the-habits-lab-presents-advanced-research-in-ubiquitous-computing-at-acm-ubicomp-2018.html)
| 45.32 | 581 | 0.806708 | eng_Latn | 0.81349 |
4d0a6cbc5a81f95dbe1eb739bc2779dae647dbb9 | 2,358 | md | Markdown | _posts/2021-04-22-third.md | KhanBe/KhanBe.github.io | 5b56ef5a737d89a00dcd340e68d3c02098bc02f2 | [
"MIT"
] | null | null | null | _posts/2021-04-22-third.md | KhanBe/KhanBe.github.io | 5b56ef5a737d89a00dcd340e68d3c02098bc02f2 | [
"MIT"
] | null | null | null | _posts/2021-04-22-third.md | KhanBe/KhanBe.github.io | 5b56ef5a737d89a00dcd340e68d3c02098bc02f2 | [
"MIT"
] | null | null | null | ---
title: "[자료구조]-Queue와 ArrayList"
excerpt: "어제 했던 트럭문제를 큐로 변환해보았다."
categories:
- 자료구조
tags:
- java
- Programmers
last_modified_at: 2021-04-22T08:06:00-05:00
---
Yesterday I solved the "trucks crossing the bridge" problem using an ArrayList.
Looking at other solutions, most of them used a Queue.
So I converted my ArrayList code to use a Queue.
After the change, it was a bit easier to write.
Maybe that's because I had already solved the problem before...?

All of the explanation is in the comments.
```java
import java.util.Queue;
import java.util.LinkedList;
class Solution {
public int solution(int bridge_length, int weight, int[] truck_weights) {
int answer = 0;
        int weight_sum = 0; // current total weight on the bridge
        Queue<Integer> before_Q = new LinkedList<Integer>(); // trucks waiting to cross
        Queue<Integer> Q = new LinkedList<Integer>(); // the bridge itself
        Queue<Integer> after_Q = new LinkedList<Integer>(); // trucks that have crossed
        for(int i=0;i<truck_weights.length;i++) before_Q.offer(truck_weights[i]); // initialize the waiting queue
        for(int i=0;i<bridge_length;i++) Q.offer(0); // fill the bridge with zeros
        while(!before_Q.isEmpty() || !Q.isEmpty()){
            answer++;
            if(Q.peek()!=0){ // if the slot leaving the bridge is not 0 (a truck)
                weight_sum-=Q.peek(); // subtract its weight from the bridge
                after_Q.offer(Q.poll()); // move it to the crossed queue
            }
            else Q.remove(); // if the end of the bridge is 0, just remove it
            if(before_Q.peek()!=null){ // if trucks are still waiting
                if(weight_sum+before_Q.peek()<=weight){ // if the next truck fits within the weight limit
                    weight_sum+=before_Q.peek();
                    Q.offer(before_Q.poll());
                }
                else Q.offer(0); // otherwise push a 0 onto the bridge
}
}
return answer;
}
}
```
---
#### ArrayList and Queue methods
##### 1. ArrayList methods
```java
ArrayList<Integer> bridge = new ArrayList<Integer>(); // declaration
bridge.add(0); // add 0
bridge.add(0, 2); // insert 2 at index 0
bridge.size(); // return the size as an int
bridge.remove(0); // remove the element at index 0
bridge.clear(); // remove all values
bridge.get(0); // return the element at index 0
bridge.contains(1); // check whether the list contains 1 : true
bridge.indexOf(1); // return the index of 1, or -1 if it is not present
```
---
##### 2. Queue methods

---
##### When solving with an ArrayList
1. It is easy to see the current state at a glance.
2. The conditions and the length of the code grow.
##### When solving with a Queue
1. No need to delete and then re-add elements.
2. The code is concise.
3. Only the head and tail can be touched.
For the generics part, I relied heavily on a blog post.
[Go to the blog post about generics](https://yaboong.github.io/java/2019/01/19/java-generics-1/)
| 23.818182 | 94 | 0.579304 | kor_Hang | 0.998967 |
4d0a834d9ff98f986284eaeeb9c8c6fa311e3777 | 692 | md | Markdown | README.md | knasenn/ipweathermodule | a64afaabc5def718a6173be431d90bcb772a6f10 | [
"MIT"
] | null | null | null | README.md | knasenn/ipweathermodule | a64afaabc5def718a6173be431d90bcb772a6f10 | [
"MIT"
] | null | null | null | README.md | knasenn/ipweathermodule | a64afaabc5def718a6173be431d90bcb772a6f10 | [
"MIT"
] | null | null | null | Anax WEATHER mod (v1)
==================================
Install as Anax module
------------------------------------
This is how you install the module into an existing Anax installation.
Install using composer.
```
composer require aiur18/ipweathermodule "dev-master"
```
Copy the needed files and configuration, and set up the weathermod
```
rsync -av vendor/aiur18/ipweathermodule/ ./
```
Dependency
------------------
This is an Anax module and is primarily intended to be used together with the Anax framework.
License
------------------
This software carries a MIT license. See LICENSE.txt for details.
```
.
..: Copyright (c) 2017 - 2019 knasenn ([email protected])
```
| 15.727273 | 89 | 0.617052 | eng_Latn | 0.921458 |
4d0a87c79664053e6dc552819dae7ac4a1a734cf | 2,038 | md | Markdown | docs/transactions/TransactionConfig.md | japila-books/kafka-internals | f7ff0d600662b51323c479519e263bdc3c4a4efe | [
"Apache-2.0"
] | 15 | 2021-09-12T23:37:35.000Z | 2022-01-21T04:39:44.000Z | docs/transactions/TransactionConfig.md | japila-books/kafka-internals | f7ff0d600662b51323c479519e263bdc3c4a4efe | [
"Apache-2.0"
] | null | null | null | docs/transactions/TransactionConfig.md | japila-books/kafka-internals | f7ff0d600662b51323c479519e263bdc3c4a4efe | [
"Apache-2.0"
] | 1 | 2022-01-11T18:30:28.000Z | 2022-01-11T18:30:28.000Z | # TransactionConfig
`TransactionConfig` holds the values of the transactional configuration properties.
## <span id="transactionalIdExpirationMs"> transactional.id.expiration.ms
[transactional.id.expiration.ms](../KafkaConfig.md#transactionalIdExpirationMs)
Default: 7 days
## <span id="transactionMaxTimeoutMs"> transaction.max.timeout.ms
[transaction.max.timeout.ms](../KafkaConfig.md#transactionMaxTimeoutMs)
Default: 15 minutes
## <span id="transactionLogNumPartitions"> transaction.state.log.num.partitions
[transaction.state.log.num.partitions](../KafkaConfig.md#transactionTopicPartitions)
Default: 50
## <span id="transactionLogReplicationFactor"> transaction.state.log.replication.factor
[transaction.state.log.replication.factor](../KafkaConfig.md#transactionTopicReplicationFactor)
Default: 3
## <span id="transactionLogSegmentBytes"> transaction.state.log.segment.bytes
[transaction.state.log.segment.bytes](../KafkaConfig.md#transactionTopicSegmentBytes)
Default: 100 * 1024 * 1024
## <span id="transactionLogLoadBufferSize"> transaction.state.log.load.buffer.size
[transaction.state.log.load.buffer.size](../KafkaConfig.md#transactionsLoadBufferSize)
Default: 5 * 1024 * 1024
## <span id="transactionLogMinInsyncReplicas"> transaction.state.log.min.isr
[transaction.state.log.min.isr](../KafkaConfig.md#transactionTopicMinISR)
Default: 2
## <span id="abortTimedOutTransactionsIntervalMs"> transaction.abort.timed.out.transaction.cleanup.interval.ms
[transaction.abort.timed.out.transaction.cleanup.interval.ms](../KafkaConfig.md#transactionAbortTimedOutTransactionCleanupIntervalMs)
Default: 10 seconds
## <span id="removeExpiredTransactionalIdsIntervalMs"> transaction.remove.expired.transaction.cleanup.interval.ms
[transaction.remove.expired.transaction.cleanup.interval.ms](../KafkaConfig.md#transactionRemoveExpiredTransactionalIdCleanupIntervalMs)
Default: 1 hour
## <span id="requestTimeoutMs"> request.timeout.ms
[request.timeout.ms](../KafkaConfig.md#requestTimeoutMs)
Default: 30000
| 31.84375 | 136 | 0.810599 | yue_Hant | 0.273133 |
4d0ab1aca7118ce5ee811ca654db036aa6015e86 | 1,500 | md | Markdown | docs/07_Advanced/02_Expose_GraphQL_Schema.md | ynloultratech/graphql-bundle-docs | 0d8c90283940cb045e583970107590000f298966 | [
"MIT"
] | null | null | null | docs/07_Advanced/02_Expose_GraphQL_Schema.md | ynloultratech/graphql-bundle-docs | 0d8c90283940cb045e583970107590000f298966 | [
"MIT"
] | null | null | null | docs/07_Advanced/02_Expose_GraphQL_Schema.md | ynloultratech/graphql-bundle-docs | 0d8c90283940cb045e583970107590000f298966 | [
"MIT"
] | null | null | null | The schema of your project is published under the API
endpoint with the name `/schema.graphql`. if your API endpoint is `https://api.example.com` the GraphQL schema is located at
`https://api.example.com/schema.graphql`.
# Schema for Consumers
The schema is used for consumers to update types, queries & mutations.
By default, the schema is secured by the same firewall as the API endpoint.
If your API requires credentials, they must be used to access the schema too.
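For example, assuming the API issues JWT bearer tokens (as in the firewall below), downloading the schema could look like this:

````bash
# Hypothetical example: fetch the schema with the same credentials used for the API
curl -H "Authorization: Bearer $API_TOKEN" \
     https://api.example.com/schema.graphql -o schema.graphql
````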
In a local environment, when you need access to your own API *(developing a frontend UI)*,
this can be annoying because you need to update the schema every time.
In this scenario you can disable the Symfony firewall for the schema in the **dev** environment.
````yaml
security.yml
security:
#...
api:
pattern: ^/(?!explorer|login|schema.graphql)
stateless: true
provider: fos_userbundle
guard:
authenticators:
- lexik_jwt_authentication.jwt_token_authenticator
access_control:
- { path: ^/schema.graphql, roles: IS_AUTHENTICATED_ANONYMOUSLY, allow_if: %kernel.debug% }
- { path: ^/login, roles: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/explorer, roles: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/, roles: IS_AUTHENTICATED_FULLY }
````
>>> Note the `allow_if: %kernel.debug%` in the schema access_control rule;
it is very important to avoid publishing your schema without credentials in production.
4d0ae9197f88405f5d747b188bf48411daec2349 | 2,448 | md | Markdown | articles/dev-spaces/how-to/upgrade-tools.md | bergano65/azure-docs.ru-ru | 8baaa0e3e952f85f7a3b5328960f0d0d3e52db07 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/dev-spaces/how-to/upgrade-tools.md | bergano65/azure-docs.ru-ru | 8baaa0e3e952f85f7a3b5328960f0d0d3e52db07 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/dev-spaces/how-to/upgrade-tools.md | bergano65/azure-docs.ru-ru | 8baaa0e3e952f85f7a3b5328960f0d0d3e52db07 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Обновление средств Azure Dev Spaces
services: azure-dev-spaces
ms.date: 07/03/2018
ms.topic: conceptual
description: Узнайте, как обновить средства командной строки Azure Dev Spaces, расширение Visual Studo Code и расширение Visual Studio.
keywords: Docker, Kubernetes, Azure, AKS, Azure Container Service, containers
ms.openlocfilehash: 07d55689ac94a865527f4b595765d67b28ddb97a
ms.sourcegitcommit: f4f626d6e92174086c530ed9bf3ccbe058639081
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 12/25/2019
ms.locfileid: "75438419"
---
# <a name="how-to-upgrade-azure-dev-spaces-tools"></a>How to upgrade Azure Dev Spaces tools
If you are already using Azure Dev Spaces, you should upgrade the Azure Dev Spaces client-side tools when a new release comes out.
## <a name="update-the-azure-cli"></a>Update the Azure CLI
When you upgrade to the latest version of the Azure CLI, you automatically get the latest version of the CLI extension for Azure Dev Spaces.
You don't need to uninstall the previous version; simply find the appropriate download on the [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) page.
## <a name="update-the-dev-spaces-cli-extension-and-command-line-tools"></a>Update the Dev Spaces CLI extension and command-line tools
Run the following command:
```cmd
az aks use-dev-spaces -n <your-aks-cluster> -g <your-aks-cluster-resource-group> --update
```
## <a name="update-the-vs-code-extension"></a>Обновление расширения VS Code
После установки расширение автоматически обновляется. Возможно, потребуется перезагрузить расширение, чтобы использовать новые возможности. В VS Code откройте панель **Расширения**, выберите расширения **Azure Dev Spaces** и щелкните **Перезагрузить**.
## <a name="update-the-visual-studio-extension"></a>Обновление расширения Visual Studio
Как и для других расширений и обновлений, Visual Studio сообщит вам о наличии обновления средств Visual Studio для Kubernetes, в пакет которых входит Azure Dev Spaces. Найдите значок флага в верхней правой части экрана.
Чтобы обновить средства в Visual Studio, выберите пункты меню **Сервис > Расширения и обновления**, а затем слева выберите **Обновления**. Найдите **Средства Visual Studio для Kubernetes** и нажмите кнопку **Обновить**.
## <a name="next-steps"></a>Дальнейшие действия
Протестируйте новые средства, создав новый кластер. Изучите руководства по [Azure Dev Spaces](/azure/dev-spaces). | 53.217391 | 252 | 0.795752 | rus_Cyrl | 0.757565 |
4d0b837fd9ca8971c20101786ccf0992f2287ebd | 107 | md | Markdown | README.md | geraldspreer/spec-support | 59fa96e49621edd439ff865f7ee586d7adeda19e | [
"MIT"
] | null | null | null | README.md | geraldspreer/spec-support | 59fa96e49621edd439ff865f7ee586d7adeda19e | [
"MIT"
] | null | null | null | README.md | geraldspreer/spec-support | 59fa96e49621edd439ff865f7ee586d7adeda19e | [
"MIT"
] | null | null | null | # ng-spec-support
A collection of small functions that will help with TDD in Angular.
### Usage example
| 15.285714 | 67 | 0.747664 | eng_Latn | 0.999214 |
4d0bad8c020afaffd1c211c99e733f4be595173c | 3,327 | md | Markdown | docs/connectors/paypal/index.md | paycoreio/docs | 612b33cd5df9756545be78b95fb829073207137d | [
"MIT"
] | 8 | 2018-11-17T10:45:05.000Z | 2020-09-29T09:27:22.000Z | docs/connectors/paypal/index.md | paycoreio/docs.paycore.io | 003c7c12dfd02b2bd9d422d329af151c6dc8133d | [
"MIT"
] | 4 | 2020-11-03T12:08:30.000Z | 2021-06-17T12:04:53.000Z | docs/connectors/paypal/index.md | paycoreio/docs.paycore.io | 003c7c12dfd02b2bd9d422d329af151c6dc8133d | [
"MIT"
] | 2 | 2019-01-15T13:20:47.000Z | 2021-01-20T10:00:34.000Z | <img src="https://static.openfintech.io/payment_providers/paypal/logo.svg?w=400" width="400px">
# PayPal
!!! quote ""
The platform that grows with you
**Website**: [paypal.com/uk/business](https://www.paypal.com/uk/business)
**Login**: [paypal.com/uk/signin](https://www.paypal.com/uk/signin)
Follow the guidance for setting up a connection with PayPal payment service provider.
## Set Up Account
### Step 1: Contact PayPal support manager
Sign up for a business account on the [website](https://www.paypal.com/bizsignup/#/checkAccount) or contact support team via hotline. Submit the required documents to verify your account and gain access to the Sandbox and then, after testing, to the live account.
### Step 2: Create new App or select one from the list
Log in to the [Developer Dashboard](https://www.paypal.com/signin?returnUri=https%3A%2F%2Fdeveloper.paypal.com%2Fdeveloper%2Fapplications&_ga=1.13091063.1562130854.1623839968) with your PayPal account.
Go to the *DASHBOARD* menu, select *My Apps & Credentials*.
Press the *Create App* button and set *App Name*, or choose one of the existing app entries as *Default*.
!!! tip ""


!!! attention ""
    Make sure you're on the *Sandbox* tab to get the API credentials you'll use while you're testing the connection. After you test and before you go live, switch to the *Live* tab to get live credentials and complete the *Live App* setup.

### Step 3: Get credentials
The *Default Application* page displays your API credentials, including your client ID and secret.
You also can log in to the *Sandbox* (or *Live*) store account and find API credentials at *App Centre* --> *Streamline operations* --> *API credentials*.
Credentials that have to be issued there:
* Client ID
* Secret
!!! tip ""


Also, SOAP API credentials you can find at *Account settings* --> *Account access* --> *API access* --> *NVP/SOAP API integration (Classic)*.
* API Username
* API Password
* Signature
!!! tip ""



!!! important
Be sure to check with the manager if you require to provide a white list of IPs, and if so, specify IP addresses from the [Corefy list](/integration/ips/).
## Connect Provider Account
### Step 1. Connect account at the {{custom.company_name}} Dashboard
Press **Connect** on the [*PayPal Provider Overview*]({{custom.dashboard_base_url}}connect-directory/payment-providers/paypal/general) page in *'New connection'* and choose the **Provider account** option to open the connection form.

Enter credentials:
* Client ID
* Secret --> Client Secret
* API Username --> SOAP username
* API Password --> SOAP password
* Signature
Select Test or Live mode according to the type of account to connect with PayPal.
!!! success
You have connected **PayPal** account!
!!! question "Still looking for help connecting your PayPal account?"
<!--email_off-->[Please contact our support team!](mailto:{{custom.support_email}})<!--/email_off-->
| 36.56044 | 263 | 0.730388 | eng_Latn | 0.913037 |
4d0c15928cb797583bbc2495e58e5ab6edaaf4e0 | 650 | md | Markdown | _videos/235832111.md | kinlane/videos.restfest.org | 06b2881b34a54054940e5a4508746e7096da3ada | [
"Apache-2.0"
] | 2 | 2018-01-07T00:21:54.000Z | 2018-09-27T15:05:36.000Z | _videos/235832111.md | kinlane/videos.restfest.org | 06b2881b34a54054940e5a4508746e7096da3ada | [
"Apache-2.0"
] | 14 | 2018-01-06T17:16:59.000Z | 2021-01-20T02:48:28.000Z | _videos/235832111.md | RESTFest/videos.restfest.org | 24f1d61df75a9c08e54f26613b0737951b533959 | [
"Apache-2.0"
] | 3 | 2018-09-27T14:14:44.000Z | 2018-09-27T15:05:44.000Z | ---
layout: video
title: REST Fest 2017 \ Nate Abele
description: >
Nate Abele - ''Un-dux Your Front-end'' - 15 September 2017
video_link: https://vimeo.com/235832111
picture:
width: 1920
height: 1080
src: https://i.vimeocdn.com/filter/overlay?src0=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F659925887_1920x1080.jpg&src1=http%3A%2F%2Ff.vimeocdn.com%2Fp%2Fimages%2Fcrawler_play.png
---
<iframe src="https://player.vimeo.com/video/235832111?title=0&byline=0&portrait=0&badge=0&autopause=0&player_id=0" width="1920" height="1080" frameborder="0" title="REST Fest 2017 \ Nate Abele" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> | 50 | 260 | 0.767692 | yue_Hant | 0.152821 |
4d0cd85dadf2a5ccae6030e2303c5b81033e540c | 2,969 | md | Markdown | book/src/datalog/features.md | johnstonskj/rust-asdi | 84787e5bc81608f002ae74ae44a354ba5c8b0724 | [
"MIT"
] | null | null | null | book/src/datalog/features.md | johnstonskj/rust-asdi | 84787e5bc81608f002ae74ae44a354ba5c8b0724 | [
"MIT"
] | 17 | 2021-12-31T04:13:56.000Z | 2022-02-17T00:02:17.000Z | book/src/datalog/features.md | johnstonskj/rust-asdi | 84787e5bc81608f002ae74ae44a354ba5c8b0724 | [
"MIT"
] | null | null | null | # Language Features
By default, the language supported by ASDI is sometimes termed _Pure Datalog_. This language
allows for positive literals only, and allows for recursion. It does support additional features
with feature sets applied to programs. Without a feature specified certain types will fail
well-formedness rules, the parser will report errors, and some tools will not execute. However,
the enabling of features is relatively simple in both the text representation using the `.feature`
pragma, and using `FeatureSet`s in the API.
## Negation
The feature `negation` enables the negation of individual literals in the body of a rule. This
language is often described as $\text{\small{Datalog}}^{\lnot}$.
```datalog
.features(negation).
alive(X) :- person(X) AND NOT dead(X).
alive(X) :- person(X) ∧ ¬dead(X).
```
The text representation allows for `"!"`, `"¬"`, and `"NOT"` to be used to denote negation.
## Constraints
The feature `constraints` enables the specification of rules that have no head, which in turn
specifies a rule that _may never_ be true. This language is described herein as
$\text{\small{Datalog}}^{\bot}$.
```datalog
.features(constraints).
⊥ :- dead(X) AND alive(X).
:- dead(X) AND alive(X).
```
The text allows for either an entirely missing head, or the value `"⊥"` as the head to denote
a constraint. The latter signifies that the rule implies falsity (or absurdity).
## Comparisons
The feature `comparisons` enables the inclusion of literal terms that use standard comparison
operators. This language is described herein as $\text{\small{Datalog}}^{\theta}$.
```datalog
.features(comparisons).
old(X) :- age(X, Y) ∧ Y > 75.
```
The text representation supports the operators equality (`"="`), inequality (`"!="`, `"/="`,
or `"≠"`), less than (`"<"`), less than or equal-to (`"<="`, or `"≤"`), greater than (`">"`), and
greater than or equal-to (`">="`, or `"≥"`).
## Disjunction
The feature `disjunction` enables a disjunction of atoms in the head of a rule. This
language is often described as $\text{\small{Datalog}}^{\lor}$.
```datalog
.features(disjunction).
mother(X, Y) OR father(X, Y) :- parent(X, Y).
mother(X, Y) ∨ father(X, Y) :- parent(X, Y).
```
The text representation allows for `";"`, "|"`, `"∨"`, and `"OR"` to be used to denote disjunction.
## Example
The following demonstrates the text representation support for enabling features.
```datalog
.features(negation, comparisons, disjunction).
```
Similarly, the following API example shows how to create a feature set that may be added to a
program during creation.
```rust,no_run
use asdi::features::{
FEATURE_COMPARISONS, FEATURE_DISJUNCTION, FEATURE_NEGATION, FeatureSet
};
let features = FeatureSet::from(vec![FEATURE_NEGATION, FEATURE_DISJUNCTION]);
assert!(features.supports(&FEATURE_NEGATION));
assert!(!features.supports(&FEATURE_COMPARISONS));
assert_eq!(features.to_string(), ".feature(negation, disjunction).");
```
| 33.359551 | 99 | 0.721792 | eng_Latn | 0.990795 |
4d0ef9c7b5dd77c92bb5d62d39095c4057d70131 | 140 | md | Markdown | README.md | 0CBH0/Gocollection | 465e373e3421078f6fc119a4c2c84aacc2f4ff81 | [
"MIT"
] | 1 | 2019-10-06T14:30:42.000Z | 2019-10-06T14:30:42.000Z | README.md | 0CBH0/Gocollection | 465e373e3421078f6fc119a4c2c84aacc2f4ff81 | [
"MIT"
] | null | null | null | README.md | 0CBH0/Gocollection | 465e373e3421078f6fc119a4c2c84aacc2f4ff81 | [
"MIT"
] | null | null | null | # Gocollection
Collecting Gene Ontology terms for semantic research.
## Dependencies
python
scrapy
## Usage
~~~
sh Gocollection.sh
~~~
| 9.333333 | 53 | 0.735714 | eng_Latn | 0.877518 |
4d0f3016fa7571b52076d830e7c29d438a077016 | 1,212 | md | Markdown | AlchemyInsights/reset-favorites-edge.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:07:15.000Z | 2021-03-06T00:34:53.000Z | AlchemyInsights/reset-favorites-edge.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:25:08.000Z | 2022-02-09T06:52:49.000Z | AlchemyInsights/reset-favorites-edge.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-09T20:30:02.000Z | 2020-06-02T23:24:46.000Z | ---
title: Tilbakestille favoritter i Microsoft Edge
ms.author: pebaum
author: pebaum
manager: scotv
ms.date: 07/23/2021
audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "11937"
- "9007099"
ms.openlocfilehash: a76234d17e39c7122fc5eedcb5a08df5af6d4197f71168434806ebd9f2a92346
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: HT
ms.contentlocale: nb-NO
ms.lasthandoff: 08/05/2021
ms.locfileid: "57813113"
---
# <a name="reset-favorites-in-microsoft-edge"></a>Tilbakestille favoritter i Microsoft Edge
Hvis du vil tilbakestille eller flytte data i Edge, velger du fra følgende. Disse gjelder for versjon 88, med mindre annet er angitt:
- [Sikkerhetskopiere favorittene dine i Edge](/deployedge/edge-learnmore-reset-data-in-cloud#back-up-your-favorites)
- [Tilbakestill for å løse et synkroniseringsproblem](/deployedge/edge-learnmore-reset-data-in-cloud#perform-a-reset-to-fix-a-synchronization-problem)
- [Tilbakestille eller fjerne dataene fra Microsoft-skyen](/deployedge/edge-learnmore-reset-data-in-cloud#perform-a-reset-to-remove-your-data-from-microsofts-cloud) | 41.793103 | 164 | 0.814356 | nob_Latn | 0.441159 |
4d0f4305cba05716bb2416cce1a4e6d8ffbe22c5 | 971 | md | Markdown | mocha_tests/compression/out/en_gen_tn.reopened/content/43/11.md | unfoldingWord-dev/node-resource-container | 20c4b7bfd2fa3f397ee7e0e743567822912c305b | [
"MIT"
] | 1 | 2016-12-15T03:59:00.000Z | 2016-12-15T03:59:00.000Z | mocha_tests/compression/out/en_gen_tn.reopened/content/43/11.md | unfoldingWord-dev/node-resource-container | 20c4b7bfd2fa3f397ee7e0e743567822912c305b | [
"MIT"
] | 1 | 2016-12-16T18:41:20.000Z | 2016-12-16T18:41:20.000Z | mocha_tests/compression/out/en_gen_tn.reopened/content/43/11.md | unfoldingWord-dev/node-resource-container | 20c4b7bfd2fa3f397ee7e0e743567822912c305b | [
"MIT"
] | null | null | null | #If it be so, now do this
"If this is our only choice, then do it"
#Carry down
It was common to use the word "down" when speaking of traveling from Canaan to Egypt.
#balm ... spices and myrrh
See how you translated these words in [[:en:bible:notes:gen:37:25|37:25]].
#pistachio nuts
a small, green tree nut (See: [[:en:ta:vol1:translate:translate_unknown]])
#almonds
a tree nut with a sweet flavor (See: [[:en:ta:vol1:translate:translate_unknown]])
#Take double money in your hand
Here "hand" stands for the whole person. AT: "Take double the money with you" (See: [[:en:ta:vol2:translate:figs_synecdoche]])
#The money that was returned in the opening of your sacks, carry again in your hand
Here "hand" stands for the whole person. Also, the phrase "that was returned" can be stated in active form. AT: "take back to Egypt the money someone put in your sacks" (See: [[:en:ta:vol2:translate:figs_synecdoche]] and [[:en:ta:vol2:translate:figs_activepassive]]) | 35.962963 | 266 | 0.736354 | eng_Latn | 0.996256 |
4d0f84c5e482ea5d996ba10798b015f07516a1e6 | 3,010 | md | Markdown | docs/framework/configure-apps/file-schema/runtime/disablecachingbindingfailures-element.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/runtime/disablecachingbindingfailures-element.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/runtime/disablecachingbindingfailures-element.md | wangshuai-007/docs.zh-cn | b0f422093b4d46695643d4ef95ea85c51ef3c702 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "<disableCachingBindingFailures>元素"
ms.custom:
ms.date: 03/30/2017
ms.prod: .net-framework
ms.reviewer:
ms.suite:
ms.technology: dotnet-clr
ms.tgt_pltfrm:
ms.topic: article
f1_keywords:
- http://schemas.microsoft.com/.NetConfiguration/v2.0#disableCachingBindingFailures
- http://schemas.microsoft.com/.NetConfiguration/v2.0#configuration/runtime/disableCachingBindingFailures
helpviewer_keywords:
- assemblies [.NET Framework],caching binding failures
- caching assembly binding failures
- <disableCachingBindingFailures> element
- disableCachingBindingFailures element
ms.assetid: bf598873-83b7-48de-8955-00b0504fbad0
caps.latest.revision: "14"
author: rpetrusha
ms.author: ronpet
manager: wpickett
ms.openlocfilehash: 25d504afd7945718f08dd5f2bf92d7ea33037a11
ms.sourcegitcommit: 4f3fef493080a43e70e951223894768d36ce430a
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 11/21/2017
---
# <a name="ltdisablecachingbindingfailuresgt-element"></a><disableCachingBindingFailures>元素
指定是否禁用缓存绑定故障的发生是因为通过探测找不到程序集。
\<配置 > 元素
\<运行时 > 元素
\<disableCachingBindingFailures >
## <a name="syntax"></a>语法
```xml
<disableCachingBindingFailures enabled="0|1"/>
```
## <a name="attributes-and-elements"></a>特性和元素
下列各节描述了特性、子元素和父元素。
### <a name="attributes"></a>特性
|特性|描述|
|---------------|-----------------|
|enabled|必需的特性。<br /><br /> 指定是否禁用缓存绑定故障的发生是因为通过探测找不到程序集。|
## <a name="enabled-attribute"></a>enabled 特性
|值|描述|
|-----------|-----------------|
|0|不要禁用缓存绑定故障的发生是因为通过探测找不到程序集。 这是从.NET Framework 2.0 版开始的默认绑定行为。|
|1|禁用缓存绑定故障的发生是因为通过探测找不到程序集。 此设置将恢复为.NET Framework 1.1 版的绑定行为。|
### <a name="child-elements"></a>子元素
无。
### <a name="parent-elements"></a>父元素
|元素|描述|
|-------------|-----------------|
|`configuration`|公共语言运行时和 .NET Framework 应用程序所使用的每个配置文件中的根元素。|
|`runtime`|包含有关程序集绑定和垃圾回收的信息。|
## <a name="remarks"></a>备注
从.NET Framework 2.0 版开始,加载程序集的默认行为是缓存所有绑定和加载失败。 也就是说,如果加载程序集的尝试失败,后续请求加载相同程序集将立即失败,而无需任何尝试查找程序集。 此元素禁用绑定故障的发生是因为探测路径中找不到程序集的默认行为。 这些故障引发<xref:System.IO.FileNotFoundException>。
一些绑定和加载失败不会影响此元素,也始终缓存。 由于程序集已找到,但无法加载,将出现这些故障。 它们将引发<xref:System.BadImageFormatException>或<xref:System.IO.FileLoadException>。 以下列表包含此类故障的一些示例。
- 如果你尝试加载的文件不是有效的程序集,加载程序集的后续尝试将会失败,即使错误的文件将被替换为正确的程序集。
- 如果你尝试加载已锁定的文件系统程序集,加载程序集的后续尝试将失败,即使程序集发布的文件系统。
- 如果你尝试加载的程序集的一个或多个版本是在探测路径中,但它们之间的不是您请求的特定版本,加载该版本的后续尝试将失败,即使正确的版本将移到探测路径。
## <a name="example"></a>示例
下面的示例演示如何禁用的发生是因为通过探测找不到程序集的程序集绑定故障缓存。
```xml
<configuration>
<runtime>
<disableCachingBindingFailures enabled="1" />
</runtime>
</configuration>
```
## <a name="see-also"></a>另请参阅
[运行时设置架构](../../../../../docs/framework/configure-apps/file-schema/runtime/index.md)
[配置文件架构](../../../../../docs/framework/configure-apps/file-schema/index.md)
[运行时如何定位程序集](../../../../../docs/framework/deployment/how-the-runtime-locates-assemblies.md)
| 31.684211 | 178 | 0.703322 | yue_Hant | 0.291206 |
4d0fa37f4dea41e4f1dea051bc919b96f4e26164 | 2,661 | md | Markdown | doc/README.md | IanWambai/bitcoindart | a3b238370845be5f5d5ee01feaec352db7e7344c | [
"MIT"
] | 7 | 2020-12-15T06:40:59.000Z | 2022-01-26T15:35:55.000Z | doc/README.md | IanWambai/bitcoindart | a3b238370845be5f5d5ee01feaec352db7e7344c | [
"MIT"
] | 2 | 2021-06-08T06:30:58.000Z | 2021-07-19T08:56:53.000Z | doc/README.md | IanWambai/bitcoindart | a3b238370845be5f5d5ee01feaec352db7e7344c | [
"MIT"
] | 6 | 2021-06-07T18:27:19.000Z | 2021-11-12T06:11:45.000Z | ## 前言
本文对相关概念的介绍会分成两个部分,分别是原理性说明和代码展示。
## 比特币地址
比特币地址是一个由数字和字母组成的字符串,比特币地址类型分为普通地址和隔离见证(兼容/原生)地址。
下面是比特币地址的示例:
- 普通地址:
1F5VhMHukdnUES9kfXqzPzMeF1GPHKiF64
- 隔离见证(原生)地址:
bc1qnf4kpa62dwhpwm0stsas5yv0skatt3v9s040p8
- 隔离见证(兼容)地址:
33F1CKBVZDDWugFxiaibh9FLtAG6vLyDXk
比特币地址可由公钥经过单向的加密哈希算法得到。由公钥生成比特币地址时使用的算法是Secure Hash Algorithm (SHA)和the RACE Integ rity Primitives Evaluation Message Digest (RIPEMD),具体地说是SHA256和RIPEMD160。
下面介绍各种类型比特币地址的生成原理。
### 普通地址
#### 步骤1 哈希计算
以公钥 K 为输入,计算其SHA256哈希值,并以此结果计算RIPEMD160 哈希值,得到一个长度为160位(20字节)的数字:
A = RIPEMD160(SHA256(K))
公式中,K是公钥,A是生成的比特币地址。
#### 步骤2 地址编码
通常用户见到的比特币地址是经过“Base58Check”编码的,这种编码使用了58个字符(一种Base58数字系统)和校验码,提高了可读性、避免歧义并有效防止了在地址转录和输入中产生的错误。
为了将数据(数字)转换成Base58Check格式,首先我们要对数据添加一个称作“版本字节”的前缀,这个前缀用来识别编码的数据的类型。比特币普通地址的前缀是`0(十六进制是0x00)`
普通地址 = Base58Check(RIPEMD160(SHA256(K)))
公式中,K是公钥。
普通地址的生成代码
```dart
// import Uint8List
import 'dart:typed_data';
// import SHA256Digest
import 'package:pointycastle/digests/sha256.dart';
// import RIPEMD160Digest
import 'package:pointycastle/digests/ripemd160.dart';
// import bs58check
import 'package:bs58check/bs58check.dart' as bs58check;
Uint8List hash160(Uint8List buffer) {
  Uint8List _tmp = new SHA256Digest().process(buffer);
  return new RIPEMD160Digest().process(_tmp);
}
// compute the address hash from the public key
// pubkey: the public key as a Uint8List
final hash = hash160(pubkey);
// print(hash);
// [154, 107, 96, 247, 74, 107, 174, 23, 109, 240, 92, 59, 10, 17, 143, 133, 186, 181, 197, 133], 20 bytes in total
// prepend the Base58Check version byte (0x00)
final payload = new Uint8List(21);
payload.buffer.asByteData().setUint8(0, 0x00);
payload.setRange(1, payload.length, hash);
// print(payload);
// [0, 154, 107, 96, 247, 74, 107, 174, 23, 109, 240, 92, 59, 10, 17, 143, 133, 186, 181, 197, 133]
// Base58Check encoding
final address = bs58check.encode(payload);
// print(address);
// 1F5VhMHukdnUES9kfXqzPzMeF1GPHKiF64
```
### Native SegWit addresses
#### Step 1: Hash computation
This step is the same as for legacy addresses: A = RIPEMD160(SHA256(K)), where K is the public key and A is the resulting hash.
#### Step 2: Address encoding
SegWit addresses use the [Bech32](./bech32.md) encoding. A Bech32 string has two parts: a prefix known as the HRP (Human Readable Part), and a special Base32 encoding that uses the alphabet `qpzry9x8gf2tvdw0s3jn54khce6mua7l`. Bitcoin native SegWit addresses use the HRP `bc` and witness version `0`.
Code to generate a native SegWit address:
```dart
// import Segwit
import 'package:bech32/bech32.dart';
// get the address hash (computed in Step 1)
// Bech32 encoding
final address = segwit.encode(Segwit('bc', 0, hash));
// print(address);
// bc1qnf4kpa62dwhpwm0stsas5yv0skatt3v9s040p8
```
### Compatible (wrapped) SegWit addresses
> A P2SH address is the Base58Check encoding of the 20-byte hash of a script, just as a regular Bitcoin address is the Base58Check encoding of the 20-byte hash of a public key. Because P2SH addresses use the version prefix 5, the resulting Base58-encoded addresses start with "3".
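Below is a sketch (added here, not part of the original text) of deriving a P2SH-wrapped SegWit (P2SH-P2WPKH) address in Dart, reusing the `hash160` helper and the `bs58check` import from the legacy-address example above; the `keyHash` argument is assumed to be `hash160(pubkey)`:

```dart
import 'dart:typed_data';
import 'package:bs58check/bs58check.dart' as bs58check;

// redeemScript = 0x00 0x14 <20-byte key hash>  (witness version 0, push 20 bytes)
Uint8List redeemScript(Uint8List keyHash) {
  final script = Uint8List(22);
  script[0] = 0x00;
  script[1] = 0x14;
  script.setRange(2, 22, keyHash);
  return script;
}

// Address = Base58Check(0x05 + hash160(redeemScript)); the 0x05 prefix makes it start with '3'.
String p2shSegwitAddress(Uint8List keyHash) {
  final scriptHash = hash160(redeemScript(keyHash)); // hash160 as defined earlier
  final payload = Uint8List(21);
  payload[0] = 0x05;
  payload.setRange(1, 21, scriptHash);
  return bs58check.encode(payload);
}
```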
## Appendix
### Segregated Witness
Segregated Witness (segwit) means separating the signature or unlocking script of a particular output. In its simplest form it is a "separate scriptSig" or "separate signature". SegWit is an architectural change to Bitcoin that moves witness data from a transaction's scriptSig (unlocking script) field into a separate witness data structure that accompanies the transaction. Clients can request transaction data either with or without the witness data.
4d0ffac3e6dea62b200959e5216ee58b1523161a | 69 | md | Markdown | add/metadata/System.DirectoryServices.AccountManagement/IdentityType.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-06-16T22:24:36.000Z | 2020-06-16T22:24:36.000Z | add/metadata/System.DirectoryServices.AccountManagement/IdentityType.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/System.DirectoryServices.AccountManagement/IdentityType.meta.md | kcpr10/dotnet-api-docs | b73418e9a84245edde38474bdd600bf06d047f5e | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-02T13:31:28.000Z | 2020-05-02T13:31:28.000Z | ---
uid: System.DirectoryServices.AccountManagement.IdentityType
---
| 17.25 | 60 | 0.797101 | yue_Hant | 0.1899 |
4d10324029bab4998692cff07316a4320cadae84 | 2,247 | md | Markdown | Results/List.Int32.ListInt32WhereSelectToArray.md | manofstick/LinqBenchmarks | 4f42aa0002ed60228bfd1290921abafdaf2b7604 | [
"MIT"
] | null | null | null | Results/List.Int32.ListInt32WhereSelectToArray.md | manofstick/LinqBenchmarks | 4f42aa0002ed60228bfd1290921abafdaf2b7604 | [
"MIT"
] | null | null | null | Results/List.Int32.ListInt32WhereSelectToArray.md | manofstick/LinqBenchmarks | 4f42aa0002ed60228bfd1290921abafdaf2b7604 | [
"MIT"
] | null | null | null | ## List.Int32.ListInt32WhereSelectToArray
### Source
[ListInt32WhereSelectToArray.cs](../LinqBenchmarks/List/Int32/ListInt32WhereSelectToArray.cs)
### References:
- JM.LinqFaster: [1.1.2](https://www.nuget.org/packages/JM.LinqFaster/1.1.2)
- LinqAF: [3.0.0.0](https://www.nuget.org/packages/LinqAF/3.0.0.0)
- StructLinq.BCL: [0.19.2](https://www.nuget.org/packages/StructLinq.BCL/0.19.2)
- NetFabric.Hyperlinq: [3.0.0-beta26](https://www.nuget.org/packages/NetFabric.Hyperlinq/3.0.0-beta26)
### Results:
``` ini
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042
Intel Core i7-7567U CPU 3.50GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores
.NET Core SDK=5.0.100-preview.7.20366.6
[Host] : .NET Core 5.0.0 (CoreCLR 5.0.20.36411, CoreFX 5.0.20.36411), X64 RyuJIT
.NET Core 5.0 : .NET Core 5.0.0 (CoreCLR 5.0.20.36411, CoreFX 5.0.20.36411), X64 RyuJIT
Job=.NET Core 5.0 Runtime=.NET Core 5.0
```
| Method | Count | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Gen 1 | Gen 2 | Allocated |
|--------------------- |------ |-----------:|--------:|--------:|------:|--------:|-------:|------:|------:|----------:|
| ForLoop | 100 | 266.2 ns | 3.65 ns | 3.23 ns | 1.00 | 0.00 | 0.4168 | - | - | 872 B |
| ForeachLoop | 100 | 409.2 ns | 3.62 ns | 3.21 ns | 1.54 | 0.03 | 0.4168 | - | - | 872 B |
| Linq | 100 | 583.9 ns | 4.33 ns | 4.05 ns | 2.19 | 0.03 | 0.3939 | - | - | 824 B |
| LinqFaster | 100 | 553.1 ns | 3.48 ns | 3.08 ns | 2.08 | 0.03 | 0.4168 | - | - | 872 B |
| LinqAF | 100 | 1,226.2 ns | 7.56 ns | 6.31 ns | 4.61 | 0.06 | 0.4005 | - | - | 840 B |
| StructLinq | 100 | 674.2 ns | 4.44 ns | 3.94 ns | 2.53 | 0.03 | 0.1564 | - | - | 328 B |
| StructLinq_IFunction | 100 | 364.5 ns | 2.04 ns | 1.91 ns | 1.37 | 0.02 | 0.1068 | - | - | 224 B |
| Hyperlinq | 100 | 634.9 ns | 3.59 ns | 3.18 ns | 2.39 | 0.03 | 0.1068 | - | - | 224 B |
| Hyperlinq_Pool | 100 | 689.9 ns | 5.13 ns | 4.55 ns | 2.59 | 0.04 | 0.0267 | - | - | 56 B |
| 64.2 | 120 | 0.490432 | yue_Hant | 0.694688 |
4d10ef13e4902369e2a116a97b3cf7db34243693 | 429 | md | Markdown | README.md | ChavdarSlavov/rate-limiter | e3a10aafacebeafe7e088fa260c1826cabeec86b | [
"MIT"
] | null | null | null | README.md | ChavdarSlavov/rate-limiter | e3a10aafacebeafe7e088fa260c1826cabeec86b | [
"MIT"
] | null | null | null | README.md | ChavdarSlavov/rate-limiter | e3a10aafacebeafe7e088fa260c1826cabeec86b | [
"MIT"
] | null | null | null | # rate-limiter
[](https://travis-ci.org/ChavdarSlavov/rate-limiter)
[](https://coveralls.io/github/ChavdarSlavov/rate-limiter?branch=master)
Rate limiting is used to control the rate of traffic sent or received by a network.
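As a concept illustration only (this is not this package's API, and TypeScript is assumed here), a minimal token-bucket limiter looks like this:

```typescript
// Concept sketch of a token-bucket rate limiter (illustrative, not the library API).
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
  }

  tryRemove(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillPerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // request should be rejected or delayed
  }
}
```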
4d11364d64a29210d51cd3bd9623e276926f6e02 | 4,051 | md | Markdown | website/i18n/ru/docusaurus-plugin-content-docs/current/concepts/architecture.md | skrylnikov/documentation | 38cfbe9a7d45c3f4671d70eb86c0e71c45df2378 | [
"MIT"
] | null | null | null | website/i18n/ru/docusaurus-plugin-content-docs/current/concepts/architecture.md | skrylnikov/documentation | 38cfbe9a7d45c3f4671d70eb86c0e71c45df2378 | [
"MIT"
] | null | null | null | website/i18n/ru/docusaurus-plugin-content-docs/current/concepts/architecture.md | skrylnikov/documentation | 38cfbe9a7d45c3f4671d70eb86c0e71c45df2378 | [
"MIT"
] | null | null | null | ---
sidebar_position: 1
---
# Об архитектуре
## Проблемы
Обычно, разговор об архитектуре поднимается, когда разработка стопорится из-за тех или иных проблем в проекте.
### Bus-factor & Onboarding
Проект и его архитектуру понимает лишь ограниченный круг людей
**Примеры:**
- *"Сложно добавить человека в разработку"*
- *"На каждую проблему - у каждого свое мнение как обходить" (позавидуем ангуляру)*
- *"Не понимаю что происходит в этом большом куске монолита"*
### Неявные и неконтролируемые последствия
Множество неявных сайд-эффектов при разработке/рефакторинге *("все зависит от всего")*
**Примеры:**
- *"Фича импортит фичу"*
- *"Я обновил(а) стор одной страницы, а отвалилась функциональность на другой"*
- *"Логика размазана по всему приложению, и невозможно отследить - где начало, где конец"*
### Неконтролируемое переиспользование логики
Сложно переиспользовать/модифицировать существующую логику
При этом, обычно есть [две крайности](https://github.com/feature-sliced/documentation/discussions/14):
- Либо под каждый модуль пишется логика полностью с нуля *(с возможными повторениями в имеющейся кодобазе)*
- Либо идет тенденция переносить все-все реализуемые модули в `shared` папки, тем самым создавая из нее большую свалку из модулей *(где большинство используется только в одном месте)*
**Примеры:**
- *"У меня в проекте есть n-реализаций одной и той же бизнес-логики, за что приходится ежедневно расплачиваться"*
- *"В проекте есть 6 разных компонентов кнопки/попапа/..."*
- *"Свалка хелперов"*
## Требования
Поэтому кажется логичным предъявить желаемые *требования к идеальной архитектуре:*
:::note
Везде где говорится "легко", подразумевается "относительно легко для широкого круга разработчиков", т.к. ясно, что [не получится сделать идеального решения для абсолютно всех](/docs/about/mission#ограничения)
:::
### Explicitness
- Должно быть **легко осваивать и объяснять** команде проект и его архитектуру
- Структура должна отображать реальные **бизнес-ценности проекта**
- Должны быть явными **сайд-эффекты и связи** между абстракциями
- Должно быть **легко обнаруживать дублирование логики**, не мешая уникальным реализациям
- Не должно быть **распыления логики** по всему проекту
- Не должно быть **слишком много разнородных абстракций и правил** для хорошей архитектуры
### Control
- Хорошая архитектура должна **ускорять решение задач, внедрение фич**
- Должна быть возможность контролировать разработку проекта
- Должно быть легко **расширять, модифицировать, удалять код**
- Должна соблюдаться **декомпозиция и изолированность** функциональности
- Каждый компонент системы должен быть **легко заменяемым и удаляемым**
- *[Не нужно оптимизировать под изменения][ext-kof-not-modification] - мы не можем предсказывать будущее*
- *[Лучше - оптимизировать под удаление][ext-kof-but-removing] - на основании того контекста, который уже имеется*
### Adaptability
- Хорошая архитектура должна быть применима **к большинству проектов**
- *С уже существующими инфраструктурными решениями*
- *На любой стадии развития*
- Не должно быть зависимости от фреймворка и платформы
- Должна быть возможность **легко масштабировать проект и команду**, с возможностью параллелизации разработки
- Должно быть легко **подстраиваться под изменяющиеся требования и обстоятельства**
## См. также
- [(React Berlin Talk) Oleg Isonen - Feature Driven Architecture][ext-kof]
- [(React SPB Meetup #1) Sergey Sova - Feature Slices][ext-slices-spb]
- [(Статья) Про модуляризацию проектов][ext-medium]
- [(Статья) Про Separation of Concerns и структурирование по фичам][ext-ryanlanciaux]
[ext-kof-not-modification]: https://youtu.be/BWAeYuWFHhs?t=1631
[ext-kof-but-removing]: https://youtu.be/BWAeYuWFHhs?t=1666
[ext-slices-spb]: https://t.me/feature_slices
[ext-kof]: https://youtu.be/BWAeYuWFHhs
[ext-medium]: https://alexmngn.medium.com/why-react-developers-should-modularize-their-applications-d26d381854c1
[ext-ryanlanciaux]: https://ryanlanciaux.com/blog/2017/08/20/a-feature-based-approach-to-react-development/
| 41.336735 | 208 | 0.775364 | rus_Cyrl | 0.958159 |
4d11589f9be1f5c302593b63d52bc042885a78ab | 7,360 | md | Markdown | README.md | bsycorp/inkfish | 47c6ccd6591613c17c29c519b85adc2295a031e5 | [
"Apache-2.0"
] | 14 | 2018-11-14T09:41:28.000Z | 2022-01-27T15:21:25.000Z | README.md | bsycorp/inkfish | 47c6ccd6591613c17c29c519b85adc2295a031e5 | [
"Apache-2.0"
] | 12 | 2019-01-21T10:36:09.000Z | 2020-11-30T01:36:33.000Z | README.md | bsycorp/inkfish | 47c6ccd6591613c17c29c519b85adc2295a031e5 | [
"Apache-2.0"
] | 3 | 2019-03-03T05:21:05.000Z | 2019-08-05T03:34:58.000Z | # inkfish
A forward proxy for machines, with access control lists
[](https://travis-ci.org/bsycorp/inkfish)
https://hub.docker.com/r/bsycorp/inkfish
## About
This is a non-caching forward (aka egress/outbound) proxy, used to implement URL
white-listing for applications.
Key features:
* An outbound proxy designed for machines
* Can use cloud metadata to determine the identity of an instance
* Can use Proxy-Authorization header to identify "non-instance" (e.g. codebuild, serverless) workload
* Per-instance, per-user URL white-lists
* TLS MITM by default, white-lists all requests at the URL level
* Optional MITM bypass, by host
## Quick Start
You can start the proxy listening on port 8080 with a built-in demo config:
```shell
# Start proxy
docker run -p 8080:8080 bsycorp/inkfish:latest /app/inkfish -metadata none -config /config/demo
# Test as anonymous user
export http_proxy=http://localhost:8080
export https_proxy=http://localhost:8080
export no_proxy=127.0.0.1
curl -k https://ifconfig.io/ # This should work for anonymous
curl -k https://google.com/ # This will not work for anonymous user
# Test as an authenticated user
export http_proxy=http://foo:bar@localhost:8080
export https_proxy=http://foo:bar@localhost:8080
curl -k https://google.com/ # This should work
```
You can find the demo config over here: [/testdata/demo_config/](/testdata/demo_config/). When you are ready to
try out your own white-lists, you can mount them into the proxy container from your host:
```shell
docker run -v `pwd`/my_config:/config/mine -p 8080:8080 \
bsycorp/inkfish:latest /app/inkfish -metadata none -config /config/mine
```
The next step would be to start the proxy in a cloud environment where you can use instance
metadata instead of "hard coded" proxy credentials.
## How to run
Three distribution mechanisms are offered:
* Standard docker: `bsycorp/inkfish:x.y.z`. This is about 30MB and based on minideb. The entry point is a shell.
* Slim docker: `bsycorp/inkfish:x.y.z-slim`. A 10MB container with only the static Linux binary. The entry point
is the inkfish binary as is customary for containers with no shell.
* Linux static binary: You can download this from the (releases page)[https://github.com/bsycorp/inkfish/releases].
If you use the `slim` image, you will also need to mount SSL certificates (configuration of what upstream CAs
are trusted) into the container in addition to your proxy configuration files. On a Linux host, this is usually
done by volume mounting SSL certs from the host into the container by adding `-v /etc/ssl/certs:/etc/ssl/certs`
to the docker run command.
## Command-line arguments
```shell
$ docker run bsycorp/inkfish:latest-slim -h
Usage of /app/inkfish:
-addr string
proxy listen address (default ":8080")
-cacert string
path to CA cert file
-cakey string
path to CA key file
-client-idle-timeout int
client idle timeout (default 300)
-client-read-timeout int
client read timeout
-client-write-timeout int
client write timeout
-config string
path to configuration files (default ".")
-drain-time int
shutdown drain deadline (seconds) (default 30)
-insecure-test-mode
test mode (does not block)
-metadata string
default metadata provider (aws,none) (default "aws")
-metadata-update-every int
metadata update interval (default 10)
-metrics string
metrics provider (none,datadog,prometheus) (default "none")
```
## Configuration file format
The `-config` argument supplies a path to a directory full of "access control lists" and password file
entries.
### Passwd files
Passwd files in the config directory must have a `.passwd` extension. They may contain one or more
lines of: `<username>:<sha256-of-password>` and will be used to verify proxy auth.
It is expected that passwords will be generated by infracode / orchestration and have high entropy, so
a heavyweight password hashing function is not required.
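For example, one way to generate an entry (assuming the digest is hex-encoded; check this against your deployment) is:

```shell
# Generate a high-entropy password and append a passwd entry for user "myuser"
PASSWORD=$(openssl rand -hex 32)
HASH=$(printf '%s' "$PASSWORD" | sha256sum | cut -d' ' -f1)
echo "myuser:$HASH" >> my_config/users.passwd
```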
### ACL Files
ACL files in the config directory must have a `.conf` extension. These control what requests will
be allowed through the proxy. The general format looks like:
```
from <user> [user2 user3...]
from ...
url [METHOD,METHOD2] <url-regex> [modifiers]
url ...
s3 <bucket-name> [modifiers]
bypass <host-port-regex>
bypass ...
```
Blank lines and comments (lines starting with `#`) are ignored. The `from` lines gate entry into the ACL.
The <user> may be specified as:
* `user:foo` - Identifies a client who will supply a proxy-authorization header with a username of `foo`.
* `tag:foo` - Identifies a client whose cloud metadata (e.g. instance ProxyUser tag in AWS) is `foo`.
* `ANONYMOUS` - Identifies a user or system which does not supply a proxy-authorization header and
does not have any identifying metadata tags.
* `AUTHENTICATED` - Identifies a client with a tag or valid proxy-authorization credentials.
* `ANYONE` - Any client. This includes clients with invalid proxy-authorization credentials.
The `acl` directive is used to permit requests according to a regular expression matching a URL. You
may optionally specify one or more methods in the ACL, causing only requests made with one of the listed
methods to match the ACL. Typical "whole-host" acls look like:
* `acl ^http(s)?://foo\.com/`
WARNING: it is generally a mistake to forget the trailing `/`, as this would cause the regular expression
to match things like `https://foo.com.au/evilthing` as well as the intended domain `https://foo.com/`.
Similarly, it is usually a mistake to forget to escape dots with backslashes as this can also cause
unintended matches.
A more complex acl example might look like:
* `acl HEAD,GET,POST ^http(s)://api\.foo\.com/v2/`
The `bypass` directive is used to disable TLS MITM for specific hosts. You should supply a regular
expression which can be matched directly against the client's CONNECT request. For example:
* `bypass ^my-super-bucket\.ap-southeast-2\.amazonaws\.com:443$`
There is also a shorthand for a `url` regex that includes all AWS S3 URL notations
(bucket in path and bucket in host) across all regions
* `s3 my-super-bucket`
Modifiers alter the processing of a particular ACL. Currently supported modifiers are:
* `quiet` - Suppresses logging of successful requests for the URL pattern or S3 bucket. CONNECT will
still be logged, but not individual requests. This is useful for thing like SQS or log upload
endpoints where many requests are expected under ordinary circumstances.
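Putting the directives together, a complete ACL file for a hypothetical workload (all names below are examples, not defaults) might look like:

```
# Hypothetical example combining from, acl, s3 and bypass
from tag:billing-service user:ci-deployer

acl HEAD,GET,POST ^http(s)?://api\.payments\.example\.com/
acl ^http(s)?://packages\.example\.org/
s3 my-billing-artifacts quiet
bypass ^telemetry\.example\.net:443$
```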
## Metadata lookup
Rather than distributing proxy credentials, the preferred method of access control in inkfish is via
cloud instance metadata.
### AWS
For AWS, specify the `ProxyUser` tag on an instance. So for example if you apply tag of ProxyUser=foo,
then in your ACL you would write:
```
from tag:foo
url ^http(s)?://.*$
```
To grant instances with that tag unrestricted outbound HTTP(s) access.
### Health Check
Configure health checks for the service to hit the listening port on `/healthz`.
### Graceful shutdown
Set `drain-time` to your shutdown connection drain deadline. The default is
30 seconds. If you have a load balancer forwarding requests to inkfish, your
load balancer drain time should be higher than this value.
| 37.171717 | 115 | 0.752038 | eng_Latn | 0.991925 |
4d1258f8508dc8990d2a57232677fd1d80277c8c | 14,299 | md | Markdown | articles/azure-monitor/visualize/workbooks-automate.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/visualize/workbooks-automate.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/visualize/workbooks-automate.md | fuatrihtim/azure-docs.tr-tr | 6569c5eb54bdab7488b44498dc4dad397d32f1be | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Izleyici çalışma kitapları ve Azure Resource Manager şablonları
description: Azure Resource Manager şablonları aracılığıyla dağıtılan önceden oluşturulmuş ve özel parametreli Azure Izleyici çalışma kitapları ile karmaşık raporlamayı kolaylaştırın
services: azure-monitor
ms.workload: tbd
ms.tgt_pltfrm: ibiza
ms.topic: conceptual
ms.date: 04/30/2020
ms.openlocfilehash: 77190b85da08d09cf05a02dcc5787f0c24229948
ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/19/2021
ms.locfileid: "100624638"
---
# <a name="programmatically-manage-workbooks"></a>Program aracılığıyla çalışma kitaplarını yönetme
Kaynak sahipleri, Kaynak Yöneticisi şablonları aracılığıyla çalışma kitaplarını oluşturma ve yönetme seçeneğine sahiptir.
Bu, aşağıdaki senaryolarda yararlı olabilir:
* Kaynak dağıtımlarıyla birlikte kuruluş veya etki alanına özgü analiz raporları dağıtma. Örneğin, yeni uygulamalarınız veya sanal makineleriniz için kuruluş 'e özgü performans ve hata çalışma kitapları dağıtabilirsiniz.
* Mevcut kaynaklar için çalışma kitaplarını kullanarak standart raporları veya panoları dağıtma.
Çalışma kitabı istenen alt/kaynak grubu içinde ve Kaynak Yöneticisi şablonlarında belirtilen içerikle oluşturulur.
Programlı olarak yönetilebilecek iki tür çalışma kitabı kaynağı vardır:
* [Çalışma kitabı şablonları](#azure-resource-manager-template-for-deploying-a-workbook-template)
* [Çalışma kitabı örnekleri](#azure-resource-manager-template-for-deploying-a-workbook-instance)
## <a name="azure-resource-manager-template-for-deploying-a-workbook-template"></a>Çalışma kitabı şablonu dağıtmak için Azure Resource Manager şablonu
1. Programlı olarak dağıtmak istediğiniz bir çalışma kitabını açın.
2. _Düzenleme_ araç çubuğu öğesine tıklayarak çalışma kitabını düzenleme moduna geçirin.
3. Araç çubuğundaki düğmesini kullanarak _Gelişmiş Düzenleyici_ açın _</>_ .
4. _Galeri şablonu_ sekmesinde olduğunuzdan emin olun.

1. Galeri şablonundaki JSON öğesini panoya kopyalayın.
2. Aşağıda, Azure Izleyici çalışma kitabı galerisine bir çalışma kitabı şablonu dağıtan örnek bir Azure Resource Manager şablonu verilmiştir. Yerinde kopyaladığınız JSON 'yi yapıştırın `<PASTE-COPIED-WORKBOOK_TEMPLATE_HERE>` . Bir çalışma kitabı şablonu oluşturan bir başvuru Azure Resource Manager şablonu [burada](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/ARM-template-for-creating-workbook-template)bulunabilir.
```json
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"resourceName": {
"type": "string",
"defaultValue": "my-workbook-template",
"metadata": {
"description": "The unique name for this workbook template instance"
}
}
},
"resources": [
{
"name": "[parameters('resourceName')]",
"type": "microsoft.insights/workbooktemplates",
"location": "[resourceGroup().location]",
"apiVersion": "2019-10-17-preview",
"dependsOn": [],
"properties": {
"galleries": [
{
"name": "A Workbook Template",
"category": "Deployed Templates",
"order": 100,
"type": "workbook",
"resourceType": "Azure Monitor"
}
],
"templateData": <PASTE-COPIED-WORKBOOK_TEMPLATE_HERE>
}
}
]
}
```
1. `galleries`Nesnesinde `name` ve `category` anahtarlarını değerlerinizle birlikte girin. Sonraki bölümde [Parametreler](#parameters) hakkında daha fazla bilgi edinin.
2. Bu Azure Resource Manager şablonunu [Azure Portal](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template), [komut satırı arabirimi](../../azure-resource-manager/templates/deploy-cli.md), [PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)vb. kullanarak dağıtın.
3. Azure portal açın ve Azure Resource Manager şablonunda seçilen çalışma kitabı galerisine gidin. Örnek şablonda Azure Izleyici çalışma kitabı galerisine gidin:
1. Azure portal açın ve Azure Izleyicisi ' ne gidin
2. `Workbooks`İçindekiler tablosundan aç
3. Şablonunuz kategorisi altındaki galeride bulun `Deployed Templates` (mor öğelerden biri olacaktır).
### <a name="parameters"></a>Parametreler
|Parametreler |Açıklama |
|:-------------------------|:-------------------------------------------------------------------------------------------------------|
| `name` | Azure Resource Manager çalışma kitabı şablon kaynağının adı. |
|`type` | Her zaman Microsoft. Insights/workbooktemplates |
| `location` | Çalışma kitabının oluşturulacağı Azure konumu. |
| `apiVersion` | 2019-10-17 Önizleme |
| `type` | Her zaman Microsoft. Insights/workbooktemplates |
| `galleries` | Bu çalışma kitabı şablonunu içinde göstermek için galeriler kümesi. |
| `gallery.name` | Galerideki çalışma kitabı şablonunun kolay adı. |
| `gallery.category` | Galerideki şablonun yerleştirileceği grup. |
| `gallery.order` | Galerideki bir kategori içinde şablonu gösterme sırasına karar veren bir sayı. Daha düşük sıralama önceliği daha yüksektir. |
| `gallery.resourceType` | Galeriye karşılık gelen kaynak türü. Bu genellikle kaynağa karşılık gelen kaynak türü dizedir (örneğin, Microsoft. operationalınsights/çalışma alanları). |
|`gallery.type` | Çalışma kitabı türü olarak bahsedildiğinde, bu, galeriyi bir kaynak türü içinde ayıran benzersiz bir anahtardır. Örneğin, Application Insights `workbook` ve `tsg` farklı çalışma kitabı galerilerine karşılık gelen türler vardır. |
### <a name="galleries"></a>Galeriler
| Galeri | Kaynak türü | Çalışma kitabı türü |
| :--------------------------------------------- |:---------------------------------------------------|:--------------|
| Azure Izleyici 'de çalışma kitapları | `Azure Monitor` | `workbook` |
| Azure Izleyici 'de VM öngörüleri | `Azure Monitor` | `vm-insights` |
| Log Analytics çalışma alanındaki çalışma kitapları | `microsoft.operationalinsights/workspaces` | `workbook` |
| Application Insights çalışma kitapları | `microsoft.insights/component` | `workbook` |
| Application Insights 'de sorun giderme kılavuzu | `microsoft.insights/component` | `tsg` |
| Application Insights kullanımı | `microsoft.insights/component` | `usage` |
| Kubernetes hizmetindeki çalışma kitapları | `Microsoft.ContainerService/managedClusters` | `workbook` |
| Kaynak gruplarındaki çalışma kitapları | `microsoft.resources/subscriptions/resourcegroups` | `workbook` |
| Azure Active Directory çalışma kitapları | `microsoft.aadiam/tenant` | `workbook` |
| Sanal makinelerde VM öngörüleri | `microsoft.compute/virtualmachines` | `insights` |
| Sanal makine ölçek kümelerinde VM öngörüleri | `microsoft.compute/virtualmachinescalesets` | `insights` |
## <a name="azure-resource-manager-template-for-deploying-a-workbook-instance"></a>Çalışma kitabı örneği dağıtmak için Azure Resource Manager şablonu
1. Programlı olarak dağıtmak istediğiniz bir çalışma kitabını açın.
2. _Düzenleme_ araç çubuğu öğesine tıklayarak çalışma kitabını düzenleme moduna geçirin.
3. Araç çubuğundaki düğmesini kullanarak _Gelişmiş Düzenleyici_ açın _</>_ .
4. Düzenleyicide, _şablon türünü_ _Kaynak Yöneticisi şablona_ değiştirin.
5. Oluşturmak için Kaynak Yöneticisi şablonu düzenleyicide görüntülenir. İçeriği kopyalayın ve olduğu gibi kullanın ya da hedef kaynağı dağıtan daha büyük bir şablonla birleştirin.

## <a name="sample-azure-resource-manager-template"></a>Örnek Azure Resource Manager şablonu
Bu şablon ' Merhaba Dünya! ' görüntüleyen basit bir çalışma kitabının nasıl dağıtılacağını gösterir
```json
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workbookDisplayName": {
"type":"string",
"defaultValue": "My Workbook",
"metadata": {
"description": "The friendly name for the workbook that is used in the Gallery or Saved List. Needs to be unique in the scope of the resource group and source"
}
},
"workbookType": {
"type":"string",
"defaultValue": "tsg",
"metadata": {
"description": "The gallery that the workbook will been shown under. Supported values include workbook, `tsg`, Azure Monitor, etc."
}
},
"workbookSourceId": {
"type":"string",
"defaultValue": "<insert-your-resource-id-here>",
"metadata": {
"description": "The id of resource instance to which the workbook will be associated"
}
},
"workbookId": {
"type":"string",
"defaultValue": "[newGuid()]",
"metadata": {
"description": "The unique guid for this workbook instance"
}
}
},
"resources": [
{
"name": "[parameters('workbookId')]",
"type": "Microsoft.Insights/workbooks",
"location": "[resourceGroup().location]",
"kind": "shared",
"apiVersion": "2018-06-17-preview",
"dependsOn": [],
"properties": {
"displayName": "[parameters('workbookDisplayName')]",
"serializedData": "{\"version\":\"Notebook/1.0\",\"items\":[{\"type\":1,\"content\":\"{\\\"json\\\":\\\"Hello World!\\\"}\",\"conditionalVisibility\":null}],\"isLocked\":false}",
"version": "1.0",
"sourceId": "[parameters('workbookSourceId')]",
"category": "[parameters('workbookType')]"
}
}
],
"outputs": {
"workbookId": {
"type": "string",
"value": "[resourceId( 'Microsoft.Insights/workbooks', parameters('workbookId'))]"
}
}
}
```
### <a name="template-parameters"></a>Şablon parametreleri
| Parametre | Açıklama |
| :------------- |:-------------|
| `workbookDisplayName` | Galeri veya kayıtlı listede kullanılan çalışma kitabının kolay adı. Kaynak grubunun ve kaynağın kapsamında benzersiz olması gerekir |
| `workbookType` | Çalışma kitabının altında gösterilecek Galeri. Desteklenen değerler çalışma kitabı, `tsg` , Azure izleyici vb. içerir. |
| `workbookSourceId` | Çalışma kitabının ilişkilendirileceği kaynak örneğinin KIMLIĞI. Yeni çalışma kitabı bu kaynak örneğiyle ilgili görünür. Örneğin, _çalışma kitabı_ altındaki içeriğin kaynak tablosu. Çalışma kitabınızın Azure Izleyici 'deki çalışma kitabı galerisinde görünmesini istiyorsanız, kaynak KIMLIĞI yerine _Azure izleyici_ dize kullanın. |
| `workbookId` | Bu çalışma kitabı örneği için benzersiz GUID. Otomatik olarak yeni bir GUID oluşturmak için _[NewGuid ()]_ kullanın. |
| `kind` | Oluşturulan çalışma kitabının paylaşılıp paylaşıldığını veya özel olduğunu belirtmek için kullanılır. Paylaşılan çalışma kitapları ve _Kullanıcı_ için _paylaşılan_ değeri özel olanlar için kullanın. |
| `location` | Çalışma kitabının oluşturulacağı Azure konumu. Kaynak grubuyla aynı konumda oluşturmak için _[resourceGroup (). Location]_ kullanın |
| `serializedData` | Çalışma kitabında kullanılacak içeriği veya yükü içerir. Değeri almak için çalışma kitapları Kullanıcı arabirimindeki Kaynak Yöneticisi şablonunu kullanın |
### <a name="workbook-types"></a>Çalışma kitabı türleri
Çalışma kitabı türleri hangi çalışma kitabı galerisinin türünü yeni çalışma kitabı örneğinin gösterileceğini belirtir. Seçeneklere şunlar dahildir:
| Tür | Galeri konumu |
| :------------- |:-------------|
| `workbook` | Application Insights, Azure Izleyici vb. çalışma kitapları Galerisi de dahil olmak üzere çoğu raporda kullanılan varsayılan değer. |
| `tsg` | Application Insights 'de sorun giderme kılavuzu Galerisi |
| `usage` | Application Insights _kullanımı_ altında _daha fazla_ Galeri |
### <a name="limitations"></a>Sınırlamalar
Teknik bir nedenle, bu mekanizma Application Insights çalışma _kitapları_ galerisinde çalışma kitabı örnekleri oluşturmak için kullanılamaz. Bu sınırlamayı ele almak için çalışıyoruz. Bu arada, `tsg` Application Insights ilgili çalışma kitaplarını dağıtmak Için sorun giderme kılavuzu galerisini (workbookType:) kullanmanızı öneririz.
## <a name="next-steps"></a>Sonraki adımlar
Yeni [Azure Izleyicisini depolama deneyimi için](../insights/storage-insights-overview.md)desteklemek üzere çalışma kitaplarının nasıl kullanıldığını keşfedebilirsiniz.
| 66.506977 | 458 | 0.637527 | tur_Latn | 0.998269 |
4d128ac08fcc274b42ba6c97660442d5f6b4aa3b | 15,214 | md | Markdown | articles/azure-monitor/insights/vminsights-enable-overview.md | Ksantacr/azure-docs.es-es | d3abf102433fd952aafab2c57a55973ea05a9acb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/insights/vminsights-enable-overview.md | Ksantacr/azure-docs.es-es | d3abf102433fd952aafab2c57a55973ea05a9acb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/insights/vminsights-enable-overview.md | Ksantacr/azure-docs.es-es | d3abf102433fd952aafab2c57a55973ea05a9acb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Habilitar Monitor de Azure para información general de las máquinas virtuales (versión preliminar) | Microsoft Docs
description: En este artículo se describe cómo implementar y configurar a Azure Monitor para máquinas virtuales y los requisitos del sistema necesarios.
services: azure-monitor
documentationcenter: ''
author: mgoedtel
manager: carmonm
editor: ''
ms.assetid: ''
ms.service: azure-monitor
ms.topic: conceptual
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 05/22/2019
ms.author: magoedte
ms.openlocfilehash: 76d18b6a942ed9b8c6871b0ff7cbc1c83917ada4
ms.sourcegitcommit: 778e7376853b69bbd5455ad260d2dc17109d05c1
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/23/2019
ms.locfileid: "66130462"
---
# <a name="enable-azure-monitor-for-vms-preview-overview"></a>Habilitar a Monitor de Azure para información general de las máquinas virtuales (versión preliminar)
En este artículo proporciona información general sobre las opciones disponibles para configurar a Azure Monitor para supervisar el estado, rendimiento y detectar dependencias de aplicaciones que se ejecutan en máquinas virtuales y conjuntos de escalado de máquinas virtuales de las máquinas virtuales locales, máquinas virtuales o máquinas virtuales hospedan en otro entorno de nube.
Para habilitar Azure Monitor para VM, puede seguir alguno de los métodos siguientes:
* Habilitar una máquina virtual de Azure o el escalado de máquinas virtuales que se establece mediante la selección **Insights (versión preliminar)** directamente desde la escala de máquina virtual o máquina virtual establecido.
* Habilitar dos o más máquinas virtuales de Azure y máquina virtual de conjuntos de escalado mediante la directiva de Azure. Mediante este método, las dependencias necesarias de las máquinas virtuales nuevas y existentes y conjuntos de escalado están instaladas y configuradas correctamente. Conjuntos de escalado y máquinas virtuales no compatible se notifican para que pueda decidir si desea habilitarlas y cómo desea corregirlas.
* Use PowerShell para habilitar dos o más máquinas virtuales o conjuntos de escalado de máquinas virtuales de Azure en una suscripción o grupo de recursos concreto.
* Habilitar para supervisar las máquinas virtuales o equipos físicos que se hospeda en la red corporativa o en otro entorno de nube.
## <a name="prerequisites"></a>Requisitos previos
Antes de empezar, asegúrese de conocer la información de los apartados siguientes.
### <a name="log-analytics"></a>Log Analytics
Azure Monitor para las máquinas virtuales es compatible con un área de trabajo de Log Analytics en las siguientes regiones:
- Centro occidental de EE.UU.
- Este de EE. UU
- Canada Central<sup>1</sup>
- Sur de Reino Unido<sup>1</sup>
- Europa occidental
- Sudeste Asiático<sup>1</sup>
<sup>1</sup> Actualmente, esta región no admite la característica de estado de Azure Monitor para VM.
>[!NOTE]
>Se pueden implementar máquinas virtuales de Azure desde cualquier región, sin limitarse a las admitidas en el área de trabajo de Log Analytics.
>
Si no tiene un área de trabajo, puede crear una con alguno de los métodos siguientes:
* [La CLI de Azure](../../azure-monitor/learn/quick-create-workspace-cli.md)
* [PowerShell](../../azure-monitor/learn/quick-create-workspace-posh.md)
* [Portal de Azure](../../azure-monitor/learn/quick-create-workspace.md)
* [Azure Resource Manager](../../azure-monitor/platform/template-workspace-configuration.md)
Si se habilita la supervisión de una sola máquina virtual de Azure o escalado de máquinas virtuales que se establezca en el portal de Azure, puede crear un área de trabajo durante este proceso.
Para el escenario a escala mediante la directiva de Azure, Azure PowerShell o plantillas de Azure Resource Manager, el área de trabajo de Log Analytics debe en primer lugar la siguiente configuración:
* Instale las soluciones ServiceMap e InfrastructureInsights. Puede completar esta instalación mediante el uso de una plantilla de Azure Resource Manager proporcionada o mediante el **configurar área de trabajo** opción se encuentra en el **comenzar** ficha.
* Configure el área de trabajo de Log Analytics para que recopile contadores de rendimiento.
Para configurar el área de trabajo para el escenario a escala, puede configurarlo mediante uno de los métodos siguientes:
* [Azure PowerShell](vminsights-enable-at-scale-powershell.md#set-up-a-log-analytics-workspace)
* Desde el **configurar área de trabajo** opción en el Monitor de Azure para máquinas virtuales [directiva cobertura](vminsights-enable-at-scale-policy.md#manage-policy-coverage-feature-overview) página
### <a name="supported-operating-systems"></a>Sistemas operativos compatibles
La siguiente es una lista de los sistemas operativos Windows y Linux que son compatibles oficialmente con Azure Monitor para máquinas virtuales. Más adelante en esta sección, encontrará una lista completa que detalla las versiones de kernel admitidas y las versiones de sistema operativo Linux principales y secundarias.
|Versión del SO |Rendimiento |Mapas |Health |
|-----------|------------|-----|-------|
|Windows Server 2019 | X | X | X |
|Windows Server 2016 1803 | X | X | X |
|Windows Server 2016 | X | X | X |
|Windows Server 2012 R2 | X | X | X |
|Windows Server 2012 | X | X | |
|Windows Server 2008 R2 | X | X| |
|Red Hat Enterprise Linux (RHEL) 6, 7| X | X| X |
|Ubuntu 14.04, 16.04, 18.04 | X | X | X |
|CentOS Linux 6, 7 | X | X | X |
|SUSE Linux Enterprise Server (SLES) 11, 12 | X | X | X |
|Debian 8, 9.4 | X<sup>1</sup> | | X |
<sup>1</sup> La característica Rendimiento de Azure Monitor para VM solo está disponible desde Azure Monitor. No está disponible cuando tiene acceso a ella directamente desde el panel izquierdo de la máquina virtual de Azure.
>[!NOTE]
>La siguiente información se aplica a la compatibilidad del sistema operativo Linux:
> - Se admiten solo versiones de kernel SMP Linux y predeterminados.
> - Las versiones de kernel no estándar, como Physical Address Extension (PAE) y Xen, no son compatibles con ninguna distribución de Linux. Por ejemplo, un sistema con la cadena de versión *2.6.16.21-0.8-xen* no es compatible.
> - No se admiten los kernel personalizados, incluidas las recompilaciones de kernels estándar.
> - Se admite el kernel de CentOSPlus.
#### <a name="red-hat-linux-7"></a>Red Hat Linux 7
| Versión del SO | Versión del kernel |
|:--|:--|
| 7.4 | 3.10.0-693 |
| 7.5 | 3.10.0-862 |
| 7.6 | 3.10.0-957 |
#### <a name="red-hat-linux-6"></a>Red Hat Linux 6
| Versión del SO | Versión del kernel |
|:--|:--|
| 6.9 | 2.6.32-696 |
| 6.10 | 2.6.32-754 |
### <a name="centosplus"></a>CentOSPlus
| Versión del SO | Versión del kernel |
|:--|:--|
| 6.9 | 2.6.32-696.18.7<br>2.6.32-696.30.1 |
| 6.10 | 2.6.32-696.30.1<br>2.6.32-754.3.5 |
#### <a name="ubuntu-server"></a>Ubuntu Server
| Versión del SO | Versión del kernel |
|:--|:--|
| Ubuntu 18.04 | kernel 4.15.* |
| Ubuntu 16.04.3 | kernel 4.15.* |
| 16.04 | 4.4.\*<br>4.8.\*<br>4.10.\*<br>4.11.\*<br>4.13.\* |
| 14.04 | 3.13.\*<br>4.4.\* |
#### <a name="suse-linux-11-enterprise-server"></a>SUSE Linux 11 Enterprise Server
| Versión del SO | Versión del kernel
|:--|:--|
|11 SP4 | 3.0.* |
#### <a name="suse-linux-12-enterprise-server"></a>SUSE Linux 12 Enterprise Server
| Versión del SO | Versión del kernel
|:--|:--|
|12 SP2 | 4.4.* |
|12 SP3 | 4.4.* |
### <a name="the-microsoft-dependency-agent"></a>Microsoft Dependency Agent
La característica Service Map de Azure Monitor para VM obtiene sus datos de Microsoft Dependency Agent. Dependency Agent depende del agente de Log Analytics en lo que respecta a sus conexiones a Log Analytics. Por tanto, el sistema debe tener instalado y configurado el agente de Log Analytics con Dependency Agent.
Cuando se habilita Azure Monitor para VM para una sola máquina virtual de Azure o cuando se usa el método de implementación a escala, es necesario usar la extensión Dependency Agent de Azure VM para instalar el agente como parte de la experiencia.
En un entorno híbrido, puede descargar e instalar el agente de dependencia en cualquiera de estas dos maneras: manualmente o mediante un método de implementación automatizada para máquinas virtuales que están hospedadas fuera de Azure.
En la tabla siguiente se describen los orígenes conectados que son compatibles con la característica Asignación en un entorno híbrido.
| Origen conectado | Compatible | DESCRIPCIÓN |
|:--|:--|:--|
| Agentes de Windows | Sí | Además del [agente de Log Analytics para Windows](../../azure-monitor/platform/log-analytics-agent.md), los agentes de Windows requieren Microsoft Dependency Agent. Para obtener una lista completa de las versiones del sistema operativo, consulte los [sistemas operativos compatibles](#supported-operating-systems). |
| Agentes de Linux | Sí | Además del [agente de Log Analytics para Linux](../../azure-monitor/platform/log-analytics-agent.md), los agentes de Linux requieren Microsoft Dependency Agent. Para obtener una lista completa de las versiones del sistema operativo, consulte los [sistemas operativos compatibles](#supported-operating-systems). |
| Grupo de administración de System Center Operations Manager | No | |
Dependency Agent se puede descargar desde las ubicaciones siguientes:
| Archivo | SO | Versión | SHA-256 |
|:--|:--|:--|:--|
| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.8.1 | 622C99924385CBF539988D759BCFDC9146BB157E7D577C997CDD2674E27E08DD |
| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.8.1 | 3037934A5D3FB7911D5840A9744AE9F980F87F620A7F7B407F05E276FE7AE4A8 |
## <a name="role-based-access-control"></a>Control de acceso basado en rol
Para habilitar las características de Azure Monitor para VM y acceder a ellas, debe tener asignados los siguientes roles de acceso:
- Para habilitarla, debe tener la *colaborador de Log Analytics* rol.
- Para ver los datos de rendimiento, mantenimiento y el mapa, debe tener el rol de *lector de supervisión* en la máquina virtual de Azure. El área de trabajo de Log Analytics debe configurarse para Azure Monitor para VM.
Para más información acerca de cómo controlar el acceso a un área de trabajo de Log Analytics, consulte [Administración de áreas de trabajo](../../azure-monitor/platform/manage-access.md).
## <a name="how-to-enable-azure-monitor-for-vms-preview"></a>Cómo habilitar a Azure Monitor para las máquinas virtuales (versión preliminar)
Habilitar a Monitor de Azure para máquinas virtuales mediante uno de los métodos siguientes que se describe en la tabla siguiente.
| Estado de implementación | Método | DESCRIPCIÓN |
|------------------|--------|-------------|
| Conjunto de escalado de máquina virtual de Azure o una máquina virtual único | [Directamente desde la máquina virtual](vminsights-enable-single-vm.md) | Puede habilitar una sola máquina virtual de Azure seleccionando **Insights (versión preliminar)** directamente desde la escala de máquina virtual o máquina virtual establecido. |
| Varias máquinas virtuales de Azure o conjuntos de escalado de máquinas virtuales | [Azure Policy](vminsights-enable-at-scale-policy.md) | Puede permitir que varias máquinas virtuales de Azure con Azure Policy y definiciones de directivas disponibles. |
| Varias máquinas virtuales de Azure o conjuntos de escalado de máquinas virtuales | [Plantillas de Azure PowerShell o Azure Resource Manager](vminsights-enable-at-scale-powershell.md) | Puede habilitar varios conjuntos de escalado de máquinas virtuales de Azure o una máquina virtual a través de un grupo de recursos o suscripción especificado mediante plantillas de Azure PowerShell o Azure Resource Manager. |
| Nube híbrida | [Para habilitar el entorno híbrido](vminsights-enable-hybrid-cloud.md) | Puede implementar en máquinas virtuales o equipos físicos que se hospedan en su centro de datos u otros entornos de nube. |
## <a name="performance-counters-enabled"></a>Contadores de rendimiento habilitados
Azure Monitor para las máquinas virtuales configura un área de trabajo de Log Analytics para recopilar los contadores de rendimiento que utiliza. En la tabla siguiente se enumera los objetos y contadores recopilados cada 60 segundos.
### <a name="windows-performance-counters"></a>Contadores de rendimiento de Windows
|Nombre de objeto |Nombre del contador |
|------------|-------------|
|LogicalDisk |% de espacio libre |
|LogicalDisk |Prom. Segundos de disco/lecturas |
|LogicalDisk |Prom. Segundos de disco/transferencias |
|LogicalDisk |Prom. Segundos de disco/escrituras |
|LogicalDisk |Bytes de disco/s |
|LogicalDisk |Bytes de lectura de disco/s |
|LogicalDisk |Lecturas de disco/s |
|LogicalDisk |Transferencias de disco/s |
|LogicalDisk |Bytes de escritura en disco/s |
|LogicalDisk | Escrituras en disco/s |
|LogicalDisk |Megabytes libres |
|Memoria |MB disponibles |
|Adaptador de red |Bytes recibidos/s |
|Adaptador de red |Bytes enviados/seg. |
|Procesador |% de tiempo de procesador |
### <a name="linux-performance-counters"></a>Contadores de rendimiento de Linux
|Nombre de objeto |Nombre del contador |
|------------|-------------|
|Disco lógico |% espacio usado |
|Disco lógico |Bytes de lectura de disco/s |
|Disco lógico |Lecturas de disco/s |
|Disco lógico |Transferencias de disco/s |
|Disco lógico |Bytes de escritura en disco/s |
|Disco lógico | Escrituras en disco/s |
|Disco lógico |Megabytes libres |
|Disco lógico |Bytes de disco lógico/s |
|Memoria |MB de memoria disponibles |
|Red |Número total de bytes recibidos |
|Red |Número total de bytes transmitidos |
|Procesador |% de tiempo de procesador |
## <a name="diagnostic-and-usage-data"></a>Datos de uso y diagnóstico
Microsoft recopila automáticamente datos de uso y rendimiento a través del servicio Azure Monitor. Microsoft usa estos datos para proporcionar y mejorar la calidad, la seguridad y la integridad del servicio. Con el fin de proporcionar funcionalidades de solución de problemas precisas y eficientes, los datos de la característica Mapa incluyen información sobre la configuración del software, como el sistema operativo y la versión, la dirección IP, el nombre DNS y el nombre de la estación de trabajo. Microsoft no recopila nombres, direcciones ni otra información de contacto.
Para más información sobre el uso y la recopilación de datos, vea la [Declaración de privacidad de Microsoft Online Services](https://go.microsoft.com/fwlink/?LinkId=512132).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-dsr-and-stp-note.md)]
## <a name="next-steps"></a>Pasos siguientes
Ahora que la supervisión está habilitada para la máquina virtual, esta información está disponible para analizarse con Azure Monitor para VM. Para obtener información sobre cómo usar la característica de mantenimiento, consulte [Descripción del estado de las máquinas virtuales de Azure con Azure Monitor para VM (versión preliminar)](vminsights-health.md). Para ver las dependencias de las aplicaciones detectadas, consulte [Uso de la asignación de Azure Monitor para VM (versión preliminar) para conocer los componentes de una aplicación](vminsights-maps.md).
| 63.656904 | 578 | 0.765085 | spa_Latn | 0.974894 |
4d12f03115241fee585a6f9d3e68ca99207ff742 | 7,043 | md | Markdown | intl.en-US/Instance/Manage instances/Metadata/Overview.md | raojuantracy/ecs | 6aa7067fb996cbfefadba78c036a1645a206e5b2 | [
"MIT"
] | null | null | null | intl.en-US/Instance/Manage instances/Metadata/Overview.md | raojuantracy/ecs | 6aa7067fb996cbfefadba78c036a1645a206e5b2 | [
"MIT"
] | null | null | null | intl.en-US/Instance/Manage instances/Metadata/Overview.md | raojuantracy/ecs | 6aa7067fb996cbfefadba78c036a1645a206e5b2 | [
"MIT"
] | null | null | null | # Overview
This topic describes instance metadata and the items that can be obtained from it. You can use instance metadata to flexibly manage or configure instances.
## Introduction
Metadata of an instance includes basic information of the instance in Alibaba Cloud, such as the instance ID, IP address, MAC addresses of network interface controllers \(NICs\) bound to the instance, and operating system type. Instance metadata also includes dynamic items generated after the instance is first started such as system events, instance identifiers, and user data.
For more information, see [Retrieve instance metadata](/intl.en-US/Instance/Manage instances/Metadata/Retrieve instance metadata.md).
## Metadata items
The following table lists basic metadata items that you can obtain from an ECS instance.
|Metadata item|Description|Version|
|:------------|:----------|:------|
|/dns-conf/nameservers|The DNS configurations of the instance.|2016-01-01|
|/eipv4|The IPv4 elastic IP address \(EIP\) that is associated with the primary NIC of the instance.|2016-01-01|
|/hostname|The hostname of the instance.|2016-01-01|
|/instance/instance-type|The instance type of the instance.|2016-01-01|
|/image-id|The ID of the image used to create the instance.|2016-01-01|
|/image/market-place/product-code|The product code of the Alibaba Cloud Marketplace image.|2016-01-01|
|/image/market-place/charge-type|The billing method of the Alibaba Cloud Marketplace image.|2016-01-01|
|/instance-id|The ID of the instance.|2016-01-01|
|/mac|The MAC address of the instance. If the instance is bound with multiple NICs, only the MAC address of eth0 is displayed.|2016-01-01|
|/network-type|The network type of the instance. Only VPC-type instances are supported.|2016-01-01|
|/network/interfaces/macs|The list of MAC addresses of the NICs.|2016-01-01|
|/network/interfaces/macs/\[mac\]/network-interface-id|The identifier of the NIC. Replace \[mac\] with the MAC address of the instance.|2016-01-01|
|/network/interfaces/macs/\[mac\]/netmask|The subnet mask of the NIC.|2016-01-01|
|/network/interfaces/macs/\[mac\]/vswitch-cidr-block|The IPv4 CIDR block of the VSwitch to which the NIC is connected.|2016-01-01|
|/network/interfaces/macs/\[mac\]/vpc-cidr-block|The IPv4 CIDR block of the VPC to which the NIC belongs.|2016-01-01|
|/network/interfaces/macs/\[mac\]/private-ipv4s|The list of private IPv4 addresses assigned to the NIC.|2016-01-01|
|/network/interfaces/macs/\[mac\]/vswitch-id|The ID of the VSwitch that is in the same VPC as the security group of the NIC.|2016-01-01|
|/network/interfaces/macs/\[mac\]/vpc-id|The ID of the VPC to which the security group of the NIC belongs.|2016-01-01|
|/network/interfaces/macs/\[mac\]/primary-ip-address|The primary private IP address of the NIC.|2016-01-01|
|/network/interfaces/macs/\[mac\]/gateway|The IPv4 gateway of the VPC to which the NIC belongs.|2016-01-01|
|/instance/max-netbw-egress|The maximum outbound internal bandwidth of the instance type. Unit: Kbit/s.|2016-01-01|
|/instance/max-netbw-ingerss|The maximum inbound internal bandwidth of the instance type. Unit: Kbit/s.|2016-01-01|
|/private-ipv4|The private IPv4 address of the primary NIC.|2016-01-01|
|/public-ipv4|The public IPv4 address of the primary NIC.|2016-01-01|
|/ntp-conf/ntp-servers|The IP address of the Network Time Protocol \(NTP\) server.|2016-01-01|
|/owner-account-id|The Alibaba Cloud account ID of the instance owner.|2016-01-01|
|/public-keys|The list of all public keys of the instance.|2016-01-01|
|/region-id|The region ID of the instance.|2016-01-01|
|/zone-id|The zone ID of the instance.|2016-01-01|
|/serial-number|The serial number of the instance.|2016-01-01|
|/source-address|The image library from which the package management software of a Linux instance obtains updates. The source of the package management software is the YUM or APT repository.|2016-01-01|
|/kms-server|The server that activates the KMS service of a Windows instance.|2016-01-01|
|/wsus-server/wu-server|The server that updates a Windows instance.|2016-01-01|
|/wsus-server/wu-status-server|The server that monitors the update status of a Windows instance.|2016-01-01|
|/vpc-id|The ID of the VPC to which the instance belongs.|2016-01-01|
|/vpc-cidr-block|The CIDR block of the VPC to which the instance belongs.|2016-01-01|
|/vswitch-cidr-block|The CIDR block of the VSwitch to which the instance is connected.|2016-01-01|
|/vswitch-id|The ID of the VSwitch to which the instance is connected.|2016-01-01|
|/ram/security-credentials/\[role-name\]|The temporary STS credentials generated for the RAM role of the instance. You can obtain the STS credentials only after the instance assumes a RAM role. Replace \[role-name\] with the RAM role name of the instance. If the \[role-name\] parameter is not specified, the name of the instance RAM role is returned.**Note:** A new STS credential is available 30 minutes prior to the expiration of the old one. During this period, both STS credentials can be used.
|2016-01-01|
|/instance/spot/termination-time|The stop and release time specified in the operating system of a preemptible instance. The time is in the yyyy-MM-ddThh:mm:ssZ format \(UTC+0\). Example: 2018-04-07T17:03:00Z.|2016-01-01|
|/instance/virtualization-solution|The ECS virtualization solution. Virt 1.0 and Virt 2.0 are supported.|2016-01-01|
|/instance/virtualization-solution-version|The internal build version.|2016-01-01|
|/instance-identity/pkcs7|The signature of the instance identifier.|2016-01-01|
## Dynamic metadata - O&M
For more information, see [Overview](/intl.en-US/Deployment & Maintenance/System events/Overview.md).
## Dynamic metadata - identity
Instance identifiers are used to identify and differentiate instances. This can provide trust foundation for application permission control and software activation. Instance identifiers consist of **instance identity documents** \(document\) and **instance identity signatures** \(signature\). For more information, see [Instance identity](/intl.en-US/Instance/Manage instances/Instance identity.md).
## Dynamic metadata - configuration
User data of an instance is implemented mainly based on different types of custom scripts and you can configure user data when you create the instance. User data allows users to customize instance startup and pass in data to the instance. For example, you can use user data to perform configuration operations such as automatically obtaining software resource packages, enabling services, printing logs, installing dependency packages, and initializing web environments. You can also pass in user data as common data to the instance and reference the data in the instance.
When the instance enters the **Running** state, the system runs the user data scripts by using the administrator or root permission. Then, the system runs the initialization information or the `/etc/init` folder.
For more information, see [Prepare user data](/intl.en-US/Instance/Manage instances/User data/Prepare user data.md).
| 89.151899 | 572 | 0.776232 | eng_Latn | 0.960629 |
4d1333f3a9a247d9b55c04e822df6979d65d776c | 846 | md | Markdown | api/Office.SearchFolders.Item.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Office.SearchFolders.Item.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Office.SearchFolders.Item.md | CoolDev1/VBA-Docs | 4d5dde1cd9371be038c3e67f27364d1f6e40a063 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: SearchFolders.Item Property (Office)
keywords: vbaof11.chm258001
f1_keywords:
- vbaof11.chm258001
ms.prod: office
api_name:
- Office.SearchFolders.Item
ms.assetid: e3ea4b1a-648e-1266-8a88-ef0cbd978989
ms.date: 06/08/2017
---
# SearchFolders.Item Property (Office)
Gets a **ScopeFolder** object that represents a subfolder of the parent object. Read-only.
## Syntax
_expression_. `Item`( `_Index_` )
_expression_ Required. A variable that represents a '[SearchFolders](Office.SearchFolders.md)' object.
## Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _Index_|Required|**Long**|Determines which subfolder to return.|
## See also
[SearchFolders Object](Office.SearchFolders.md)
[SearchFolders Object Members](./overview/Library-Reference/searchfolders-members-office.md)
| 19.674419 | 103 | 0.738771 | yue_Hant | 0.640446 |
4d137d5af7a6443ce6f9f63a7ed87a7c572ad989 | 6,060 | md | Markdown | snowpack/README.md | amenoyoya/android-project | eb6a5de1a233f0636155d8ddc223b9a1a9f48afd | [
"MIT"
] | null | null | null | snowpack/README.md | amenoyoya/android-project | eb6a5de1a233f0636155d8ddc223b9a1a9f48afd | [
"MIT"
] | 2 | 2022-02-15T00:59:34.000Z | 2022-02-27T23:18:42.000Z | snowpack/README.md | amenoyoya/android-project | eb6a5de1a233f0636155d8ddc223b9a1a9f48afd | [
"MIT"
] | null | null | null | # Capacitor + Snowpack
## File IO by capacitor
### Setup
```bash
# install capacitor, snowpack
$ yarn add @capacitor/core @capacitor/filesystem
$ yarn add -D snowpack @capacitor/cli
$ npx cap init
$ npx cap sync
# install alpine.js
$ yarn add -D alpinejs
```
- `capacitor.config.json`
```json
{
"server": {
"url": "http://localhost:8080",
"cleartext": true
},
"appId": "com.example.app",
"appName": "App",
"webDir": "www",
"bundledWebRuntime": false
}
```
- `snowpack.config.mjs`
```javascript
/** @type {import("snowpack").SnowpackUserConfig } */
export default {
root: './src', // build 時のソースディレクトリ: ./src/
// 開発時は ./src/index.html がTOPページとなる
mount: {
/* ... */
},
plugins: [
/* ... */
],
routes: [
/* Enable an SPA Fallback in development: */
// {"match": "routes", "src": ".*", "dest": "/index.html"},
],
optimize: {
/* Example: Bundle your final build: */
// "bundle": true,
},
packageOptions: {
/* ... */
},
devOptions: {
/* ... */
},
buildOptions: {
out: './www' // build 時の出力先: ./www/
// ※ capacitor の webDir を ./www/ にしているため
},
};
```
- `src/index.html`
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
</head>
<body>
<div class="container my-4" x-data="app()">
<div class="d-flex">
<button class="btn btn-primary" @click="showPlatform">Show Platform</button>
<div x-html="platformMessage"></div>
</div>
<div class="mt-4">
<button class="btn btn-danger" @click="writeFile">Write File</button>
<div x-html="resultMessage"></div>
</div>
</div>
<script type="module" src="/js/app.js"></script>
</body>
</html>
```
- `src/js/app.js`
```javascript
import { Capacitor } from '@capacitor/core';
import { Filesystem, Directory, Encoding } from '@capacitor/filesystem';
import Alpine from 'alpinejs';
window.app = () => {
return {
open: false,
platformMessage: '',
resultMessage: '',
showPlatform() {
this.platformMessage = `<div class="alert alert-dark">${Capacitor.getPlatform()}</div>`;
},
async writeFile() {
try {
alert(
(await Filesystem.getUri({
directory: Directory.Data,
path: 'secrets/data.txt'
})).uri
);
await Filesystem.writeFile({
path: 'secrets/data.txt',
data: '✅ capacitor/filesystem: writeFile',
directory: Directory.Data,
recursive: true,
encoding: Encoding.UTF8
});
const content = await Filesystem.readFile({
path: 'secrets/data.txt',
directory: Directory.Data,
recursive: true,
encoding: Encoding.UTF8
});
console.log(content);
this.resultMessage = `<div class="alert alert-success">${content.data}</div>`;
} catch (err) {
this.resultMessage = `<div class="alert alert-danger">${err.toString()}</div>`;
}
}
}
};
window.Alpine = Alpine;
Alpine.start();
```
### Execute snowpack development server
```bash
# launch snowpack development server
$ yarn snowpack dev
## => http://localhost:8080
```

### Deploy as android app
Android でファイル読み書きできるようにするためには権限を付与する必要がある
- `android/app/src/main/AndroidManifest.xml`
```diff
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.app">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale|smallestScreenSize|screenLayout|uiMode"
android:name="com.example.app.MainActivity"
android:label="@string/title_activity_main"
android:theme="@style/AppTheme.NoActionBarLaunch"
android:launchMode="singleTask">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<provider
android:name="androidx.core.content.FileProvider"
android:authorities="${applicationId}.fileprovider"
android:exported="false"
android:grantUriPermissions="true">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="@xml/file_paths"></meta-data>
</provider>
</application>
<!-- Permissions -->
<uses-permission android:name="android.permission.INTERNET" />
+ <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
+ <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
</manifest>
```
その後 BlueStacks (Androidエミュレータ) を起動し、以下のコマンドを実行
```bash
# add capacitor android platform
$ yarn add @capacitor/android
$ npx cap add android
# build snowpack files
$ yarn snowpack build
## => ./www/ にビルド済み web ファイル生成
# synchronize web files to capacitor android platform
$ npx cap sync android
## => ./www/ 内のファイルが capacitor android プロジェクトに同期される
# connect to android debug bridge port
## <port>: BlueStacks の起動ポートを指定
$ adb connect 127.0.0.1:<port>
# deploy & execute capacitor app to android
$ npx cap run android
```

| 25.787234 | 212 | 0.613201 | kor_Hang | 0.153565 |
4d14cff77a216515645197322f8be963a1a0fccd | 1,984 | md | Markdown | MDF.md | plopgrizzly/mdf | 98e546c7df8d99691eb34c13d4a83fed9b47e3db | [
"MIT"
] | null | null | null | MDF.md | plopgrizzly/mdf | 98e546c7df8d99691eb34c13d4a83fed9b47e3db | [
"MIT"
] | null | null | null | MDF.md | plopgrizzly/mdf | 98e546c7df8d99691eb34c13d4a83fed9b47e3db | [
"MIT"
] | null | null | null | # Document Format (.df)
Document format is based on Markdown, but with the intent of having all the functionality of a
webpage.
Lets say we are making a webpage:
```md
My Website's Title (In Titlebar)
================================
My Webpage's Title (On Page)
--------------------------------
# Section Heading (Biggest heading)
## Section Subheading
### Mini-Section Heading
#### Mini-Section Subheading (Smallest heading)
This is a paragraph. Links work differently than in MarkDown:
:plopgrizzly.com/main.df
[Link/Clickable Text]:plopgrizzly.com/main.df
{Image Alt Text}:url/to/image.svg
[{Link Image Alt Text}:url/to/image.svg]:plopgrizzly.com/main.df
[Button that runs a function when clicked]:{
// Nahar Function Code.
}
{Something that runs a function right before it's shown}:{
// Nahar Function Code.
}
Title of next page (Optional for Pagebreak)
===========================================
Here's a table:
| # Column Heading | # Column Heading |
| Paragraph | Paragraph |
| Paragraph | Paragraph |
| Paragraph | Paragraph |
`unsyntaxhighlighted_code`
``rust rust_syntax_highlighted_code``
``nahar
nahar_syntax_highlighted_code
``
````_nahar
// This will make an executable code block (2 or more backticks to open/close).
````
__italic__
**bold**
You can escape markdown syntax with a `\`: \# \[ \*. Horizontal rule (2 or more -):
==
--
~~
== A thick horizontal rule with text in the middle of it ==
-- A thin horizontal rule with text in the middle of it --
~~ A fancy horizontal rule with text in the middle of it ~~
. An ordered list
. Item 2
; An unordered list.
; Item 2
[# Centered Section Heading ?center]
[Centered Paragraph. ?center]
[Centered Image !image.svg ?center]
Escaping different syntaxes (only needed after newline):
\| | |
\#
\##
\###
\####
\.
\;
\==
\===
\--
\---
\~~
\~~~
Escaping different syntaxes (always needed):
\[]
\{}
\`
\``
\__
\**
\\
```
# Markdown Document Format (.mdf)
| 20.666667 | 94 | 0.640121 | eng_Latn | 0.915102 |
4d1506b8fc4f5501fca0f1de9d171a1132fb1cd1 | 4,148 | md | Markdown | docs/ssma/db2/getting-started-with-ssma-for-db2-console-db2tosql.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-04-25T17:50:01.000Z | 2020-04-25T17:50:01.000Z | docs/ssma/db2/getting-started-with-ssma-for-db2-console-db2tosql.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssma/db2/getting-started-with-ssma-for-db2-console-db2tosql.md | drake1983/sql-docs.es-es | d924b200133b8c9d280fc10842a04cd7947a1516 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Introducción a SSMA para la consola de DB2 (DB2ToSQL) | Documentos de Microsoft
ms.prod: sql
ms.prod_service: sql-tools
ms.component: ssma-db2
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.suite: sql
ms.technology: ssma
ms.tgt_pltfrm: ''
ms.topic: conceptual
applies_to:
- Azure SQL Database
- SQL Server
ms.assetid: f245c017-023e-4880-8721-8908d339525e
caps.latest.revision: 5
author: Shamikg
ms.author: Shamikg
manager: craigg
ms.openlocfilehash: fd7e0f118854a6ba07988065d02aea6a2e7f6f0f
ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 05/03/2018
---
# <a name="getting-started-with-ssma--for-db2-console-db2tosql"></a>Introducción a SSMA para la consola de DB2 (DB2ToSQL)
Esta sección describe el procedimiento para iniciar y empezar a trabajar con la aplicación de consola de DB2. También se muestran en este documento, se utilizan las convenciones en una ventana de salida de consola SSMA típica.
## <a name="launching-ssma-console"></a>Iniciar la consola SSMA
Utilice los pasos siguientes para iniciar la aplicación de consola SSMA:
1. Vaya a **iniciar** y seleccione **todos los programas**.
2. Haga clic en el **[!INCLUDE[ssNoVersion](../../includes/ssnoversion_md.md)] Migration Assistant para DB2 símbolo del sistema** acceso directo.
Muestra el menú de uso de la consola de SSMA y `(/? Help)`, que le ayudarán a empezar a trabajar con la aplicación de consola.
## <a name="procedure-for-using-the-ssma-console"></a>Procedimiento para usar la consola SSMA
Después de la consola se inicia correctamente en el sistema de Windows, podría utilizar los pasos siguientes para trabajar con ella:
1. Configurar la consola SSMA a través de los archivos de script. Para obtener más información acerca de esta sección, vea [crear archivos de Script (DB2ToSQL) ](../../ssma/db2/creating-script-files-db2tosql.md) .
2. [Crear archivos de valor de la Variable (DB2ToSQL)](../../ssma/db2/creating-variable-value-files-db2tosql.md)
3. [Crear los archivos de conexión de servidor (DB2ToSQL)](../../ssma/db2/creating-the-server-connection-files-db2tosql.md)
4. [Ejecutar la consola SSMA (DB2ToSQL) ](../../ssma/db2/executing-the-ssma-console-db2tosql.md) según sus necesidades de proyecto
Características adicionales:
1. [Administrar contraseñas](http://msdn.microsoft.com/en-us/56d546e3-8747-4169-aace-693302667e94) y exportar / importar en otros equipos de la ventana
2. [Generar informes](http://msdn.microsoft.com/en-us/69ef5fd9-190d-4c58-8199-b3f77d5e1883) ver el xml detallado informes de evaluación /conversion y migración de datos de salida. También se pueden generar informes de errores detallada para comandos de actualización y la sincronización.
## <a name="ssma-console-output-conventions"></a>Convenciones de salida de consola SSMA
Al ejecutar los comandos de script SSMA y opciones, el programa de consola muestra los resultados y mensajes (información, error, etc.) para el usuario en la consola o si es necesario, se redirige a un archivo de salida xml. Cada tipo de mensaje en la salida se especifica mediante un color único. Por ejemplo, el mensaje de texto en color blanco denota secuencias de comandos de archivo; uno de color verde representa un símbolo del sistema de entrada del usuario y así sucesivamente.

Interpretación de color de la salida de la consola en la tabla siguiente:
|Color|Description|
|---------|---------------|
|Rojo|Error irrecuperable durante la ejecución|
|Gris|Marca de fecha y hora, el mensaje al usuario|
|Blanco|Comandos del archivo de script, tipo de mensaje|
|Amarillo|Advertencia|
|Verde|Símbolo del sistema de entrada del usuario|
|Cian|Iniciar, finalizar y el resultado de una operación|
## <a name="see-also"></a>Vea también
[Instalación de SSMA para DB2](http://msdn.microsoft.com/en-us/79fbe8ea-471b-407a-be2a-4100d9b57c61)
| 55.306667 | 487 | 0.749759 | spa_Latn | 0.913841 |
4d153300bd51ca3b5357d18b38cd0295627ad2ca | 69 | md | Markdown | README.md | XiaonaZhao/dataINxlsx | cc6a951aa38e071124f3571935682cd5feff2bcc | [
"Apache-2.0"
] | null | null | null | README.md | XiaonaZhao/dataINxlsx | cc6a951aa38e071124f3571935682cd5feff2bcc | [
"Apache-2.0"
] | null | null | null | README.md | XiaonaZhao/dataINxlsx | cc6a951aa38e071124f3571935682cd5feff2bcc | [
"Apache-2.0"
] | null | null | null | # dataINxlsx
process data in excel, and draw some beautiful pictures
| 23 | 55 | 0.811594 | eng_Latn | 0.998039 |
4d15718da0687c71d11fab4ae44da5d69edfa1a4 | 1,177 | md | Markdown | docs/framework/wcf/feature-details/migrating-net-remoting-applications-to-wcf.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/migrating-net-remoting-applications-to-wcf.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/migrating-net-remoting-applications-to-wcf.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: .NET リモート処理アプリケーションの WCF への移行
ms.date: 03/30/2017
helpviewer_keywords:
- ',NET remoting [WCF]'
ms.assetid: 24793465-65ae-4308-8c12-dce4fd12a583
ms.openlocfilehash: c0ce7c9badc8bad6eedc58827b6efe2595ab2cf8
ms.sourcegitcommit: c7a7e1468bf0fa7f7065de951d60dfc8d5ba89f5
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/14/2019
ms.locfileid: "65592864"
---
# <a name="migrating-net-remoting-applications-to-wcf"></a>.NET リモート処理アプリケーションの WCF への移行
**このトピックの対象は、既存のアプリケーションとの下位互換性のために残されているレガシ テクノロジに特定されています。新規の開発には、このトピックを適用しないでください。WCF を使用して、分散アプリケーションを開発する必要がありますようになりました。**
既存の .NET リモート処理アプリケーションと WCF を活用するために 2 つの方法があります。 統合と移行します。 統合では、.NET Remoting 2.0 と WCF を並行して実行されている、既存の .NET Remoting 2.0 コードを変更することがなく同時に両方のテクノロジを同じビジネス オブジェクトを公開することができます。 統合は、.NET Framework 2.0 以降を実行していることが必要です。 WCF の機能を利用して、Remoting 2.0 システムとの互換性を配線する必要はない場合、は、サービス全体を WCF に移行することができます。 .NET Remoting 2.0 から WCF への移行では、リモート オブジェクトのインターフェイスとその構成設定の変更が必要です。 両方のトピックに掲載されて[を Windows Communication Foundation からリモート処理](https://go.microsoft.com/fwlink/?LinkId=74403)します。
## <a name="see-also"></a>関連項目
- [概念](../../../../docs/framework/wcf/conceptual-overview.md)
| 53.5 | 475 | 0.802889 | yue_Hant | 0.617641 |
4d15b673d585fb740e37231a844ecc52967f61ac | 4,653 | md | Markdown | README.md | dawidzim/bender | ccdec5de26b3c9100fcf81b15b3761029d27578f | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | dawidzim/bender | ccdec5de26b3c9100fcf81b15b3761029d27578f | [
"Apache-2.0",
"MIT"
] | null | null | null | README.md | dawidzim/bender | ccdec5de26b3c9100fcf81b15b3761029d27578f | [
"Apache-2.0",
"MIT"
] | null | null | null | # bender
Bender is a dependency management tool for hardware design projects. It provides a way to define dependencies among IPs, execute unit tests, and verify that the source files are valid input for various simulation and synthesis tools.
[](https://travis-ci.org/fabianschuiki/bender)
[](https://crates.io/crates/bender)
## Workflow
The workflow of bender is based on a configuration and a lock file. The configuration file lists the sources, dependencies, and tests of the package at hand. The lock file is used by the tool to track which exact version of a package is being used. Adding this file to version control, e.g. for chips that will be taped out, makes it easy to reconstruct the exact IPs that were used during a simulation, synthesis, or tapeout.
Upon executing any command, bender checks to see if dependencies have been added to the configuration file that are not in the lock file. It then tries to find a revision for each added dependency that is compatible with the other dependencies and add that to the lock file. In a second step, bender tries to ensure that the checked out revisions match the ones in the lock file. If not possible, appropriate errors are generated.
The update command reevaluates all dependencies in the configuration file and tries to find for each a revision that satisfies all recursive constraints. If semantic versioning is used, this will update the dependencies to newer versions within the bounds of the version requirement provided in the configuration file.
## Package Structure
Bender looks for the following three files in a package:
- `Bender.yml`:
This is the main package manifest, and the only required file for a directory to be recognized as a Bender package. It contains metadata, dependencies, and source file lists.
- `Bender.lock`:
The lock file is generated once all dependencies have been successfully resolved. It contains the exact revision of each dependency. This file *may* be put under version control to allow for reproducible builds. This is handy for example upon taping out a design. If the lock file is missing or a new dependency has been added, it is regenerated.
- `Bender.local`:
This file contains local configuration overrides. It should be ignored in version control, i.e. added to `.gitignore`. This file can be used to override dependencies with local variants. It is also used when the user asks for a local working copy of a dependency.
## Targets
Targets are flags that can be used to filter source files and dependencies. They are used to differentiate the step in the ASIC/FPGA design flow, the EDA tool, technology target, etc. The following table lists the targets that should be adhered to:
- `test`: Set this target when verifying your design through unit tests or testbenches. Use the target to enable source files that contain testbenches, UVM models, etc.
- **Tool**: You should set exactly one of the following to indicate with which tool you are working.
- `vsim`: Set this target when working with ModelSim vsim. Automatically set by the *bender-vsim* plugin.
- `vcs`: Set this target when working with Synopsys VCS. Automatically set by the *bender-vcs* plugin.
- `synopsys`: Set this target when working with Synopsys Design Compiler. Automatically set by the *bender-synopsys* plugin.
- `vivado`: Set this target when working with Xilinx Vivado. Automatically set by the *bender-vivado* plugin.
- **Abstraction**: You should set exactly one of the following to indicate at which abstraction level you are working on.
- `rtl`: Set this target when working with the Register Transfer Level description of a design. If this target is set, only behavioural and no technology-specific modules should be used.
- `gate`: Set this target when working with gate-level netlists, for example after synthesis or layout.
- **Stage**: You should set exactly one of the following to indicate what you are using the design for.
- `simulation`: Set this target if you simulate the design. This target should be used to include protocol checkers and other verification modules.
- `synthesis`: Set this target if you synthesize the design. The target can be used to disable various parts of the source files which are not synthesizable.
- **Technology**: You should set exactly one of the following pairs of targets to indicate what FPGA or ASIC technology you target.
- `fpga xilinx`
- `fpga altera`
- `asic umc65`
- `asic gf28`
- `asic gf22`
- `asic stm28fdsoi`
| 66.471429 | 430 | 0.776918 | eng_Latn | 0.999505 |
4d1695e5840f9c477896eb2513d177698332601e | 88 | md | Markdown | README.md | E2E-Orchestration/Tests | 1adb5ac74c30e0a99f1884e3827915dee6e46db3 | [
"MIT"
] | 1 | 2021-11-01T18:58:42.000Z | 2021-11-01T18:58:42.000Z | README.md | E2E-Orchestration/Tests | 1adb5ac74c30e0a99f1884e3827915dee6e46db3 | [
"MIT"
] | null | null | null | README.md | E2E-Orchestration/Tests | 1adb5ac74c30e0a99f1884e3827915dee6e46db3 | [
"MIT"
] | null | null | null | # Tests
The full, end to end regression tests that are run against the E2E Org solution
| 29.333333 | 79 | 0.784091 | eng_Latn | 0.999891 |
4d16a213dd34c65f2741e082be1f979c5bec0d76 | 633 | md | Markdown | app/src/main/assets/doc/functions/Unequal.md | Apphope/hih | 25e89ae21b5ef17f3d2cb3a91557b09df97d1ed8 | [
"Apache-2.0"
] | null | null | null | app/src/main/assets/doc/functions/Unequal.md | Apphope/hih | 25e89ae21b5ef17f3d2cb3a91557b09df97d1ed8 | [
"Apache-2.0"
] | null | null | null | app/src/main/assets/doc/functions/Unequal.md | Apphope/hih | 25e89ae21b5ef17f3d2cb3a91557b09df97d1ed8 | [
"Apache-2.0"
] | null | null | null | ## Unequal
```
Unequal(x, y)
x != y
```
> yields `False` if `x` and `y` are known to be equal, or `True` if `x` and `y` are known to be unequal.
```
lhs != rhs
```
> represents the inequality `lhs <> rhs`.
### Examples
```
>> 1 != 1.
False
```
Lists are compared based on their elements:
```
>> {1} != {2}
True
>> {1, 2} != {1, 2}
False
>> {a} != {a}
False
>> "a" != "b"
True
>> "a" != "a"
False
>> Pi != N(Pi)
False
>> a_ != b_
a_ != b_
>> a != a != a
False
>> "abc" != "def" != "abc"
False
>> a != a != b
False
>> a != b != a
a != b != a
>> {Unequal(), Unequal(x), Unequal(1)}
{True, True, True}
``` | 10.377049 | 104 | 0.466035 | eng_Latn | 0.952376 |
4d1771989d5339460d6f0192c47c57b7d4e53637 | 2,617 | md | Markdown | _posts/2021-01-22-djangoweather.md | KingMurky/KingMurky.github.io | 2b1322031bc21dac3ab3d149aea84853cbf0a232 | [
"MIT"
] | 1 | 2021-01-21T13:22:09.000Z | 2021-01-21T13:22:09.000Z | _posts/2021-01-22-djangoweather.md | KingMurky/kingmurky.github.io | 2b1322031bc21dac3ab3d149aea84853cbf0a232 | [
"MIT"
] | null | null | null | _posts/2021-01-22-djangoweather.md | KingMurky/kingmurky.github.io | 2b1322031bc21dac3ab3d149aea84853cbf0a232 | [
"MIT"
] | null | null | null | ---
title: "Django로 날씨알림 사이트 만들기"
categories:
- 개발
last_modified_at: 2021-01-22T18:08+09:00
comments: true
---
#### 1. 소개
django를 활용하여 할 수 있는 것이 무엇이 있을까 생각하다가 날씨를 알려주는 사이트를 만들면 재미있을 것 같다는 생각이 들었다.
마침 openweathermap 이라는 사이트에서 전세계 각 도시의 기상 정보를 알려주는 API를 제공하는 것을 확인하여 바로 개발해보았다.
게다가!!! 좀더 이 API를 쉽게 사용할 수 있도록 어느 개발자분들이 pyowm이라는 python 패키지를 만들어 두셔서 관련 정보를 좀 더 쉽게 다룰 수 있었다.
그리고 더불어 특정 도시의 경도, 위도 값을 받아오기 위해 구글 크롬에서 제공하는 geocoding API를 사용하였다.
이 역시도 googlemaps라는 python 패키지가 존재하여 쉽게 API를 이용할 수 있었다.
#### 2. django 기본설정
이미 필자가 [무작정 따라하는 django tutorial 1편](https://kingmurky.github.io/%EA%B0%9C%EB%B0%9C/django2/) 에서 다룬 부분이지만 짤막하게나마 다뤄보도록 하겠다.
우선 django와 관련 IDE가 모두 설치가 되어있다고 가정하겠다.
우선 이해가 안되더라도 terminal에서 다음 과정을 따라해보자.
1. cd 프로젝트를 생성할 디렉토리 주소 **django 프로젝트를 만들 디렉토리로 이동**
2. django-admin startproject 프로젝트 이름 **django 프로젝트 생성**
3. py manage.py startapp 하위 프로그램 이름 **하위 프로그램 생성**
4. py manage.py migrate **DB파일 생성 기본적으로는 django에서 제공하는 sqlite3으로 생성된다**
5. py manage.py createsuperuser **관리자 계정 생성**
6. py manage.py runserver **서버 실행**
일단 기본 설정은 끝났다!
#### 3. settings.py 와 urls.py 설정하기
settings.py는 기본적으로 django 파일의 다양한 설정을 관리하는 파일이라고 할 수 있다.
이곳에서 많은 것들을 설정 할 수 있지만 우선 우리는 일부만을 다루도록 하겠다.
일단 INSTALLED_APPS 에서 우리가 만든 하위 앱을 추가하자.
```python
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'weather'
]
```
'weather' 앱을 추가한 것을 확인 할 수 있다.
그 다음 언어와 시간대 설정을 해보자.
```python
LANGUAGE_CODE = 'ko-kr'
TIME_ZONE = 'Asia/Seoul'
```
필자는 한국어를 사용하고 한국에 사는 한국인 이므로 이렇게 설정해주었다.
다음은 url을 연결해주는 작업을 해주기 위해 project 디렉토리 하위에 위치한 urls.py 파일을 열어주도록 하자.
초기에는 이런 모습일 것이다.
```python
urlpatterns = [
path('admin/', admin.site.urls),
]
```
여기에 우리가 프로젝트 하위에 만든 하위 앱의 url을 연결하여 원하는 대로 view가 표시되도록 하자.
우선
```(.python)
from django.urls import path,include
```
을 입력하여 include 함수를 import 해주자.
include 함수는 django 서버가 url을 쉽게 찾을 수 있도록 도와준다.
그리고 나서 urlpatterns에 다음과 같이 적으면 된다.
```python
path('weather/', include('weather.urls')),
```
나 같은 경우 하위 앱의 이름이 weather 이었기 때문에 weather.urls로 경로 설정을 해주었다.
이때 서버를 작동시키면 우리가 원하는 화면이 등장하지 않는다.
우선 view가 구성되지 않았기 때문이고, 하위 앱에 urls.py가 존재하지 않기 때문에, 'weather.urls'가 존재하지 않는 파일로 인식되어 오류를 발생시키기 때문이다.
우선 하위 앱 weather 하위에 urls.py 파일을 생성하자
이 파일에는 하위 앱 weather의 다양한 view를 연결할 urlpatterns를 생성하도록 한다.
다음과 같이 입력하자.
```python
from django.urls import path
from . import views
urlpatterns = [
]
```
여기까지 하면 기본적인 url세팅을 모두 마치게 되었다.
다음 시간에는 본격적으로 우리가 사이트에 마주할 화면을 담당하는 views.py 파일을 다뤄보고 API에 대해 살펴보도록 하겠다.
| 17.682432 | 123 | 0.696217 | kor_Hang | 1.00001 |
4d178ee8966514ccc69f854b06f1ea1880b53d0c | 1,292 | md | Markdown | front-ends/README.md | Merve40/master-thesis | 79a00ec11adde3d275180c5bb283465d48e4646f | [
"MIT"
] | 1 | 2021-08-28T13:32:00.000Z | 2021-08-28T13:32:00.000Z | front-ends/README.md | Merve40/master-thesis | 79a00ec11adde3d275180c5bb283465d48e4646f | [
"MIT"
] | null | null | null | front-ends/README.md | Merve40/master-thesis | 79a00ec11adde3d275180c5bb283465d48e4646f | [
"MIT"
] | null | null | null | # Front-ends
**Broker Interfaces:**
- `/charterer`
- `/shipowner`
**IDM Interface:** `/idm`
**Logger Interface** for server and smart contract events: `/logger`
---
**Front-end Libraries:**
- [ReactJs](https://reactjs.org/)
- [Bootstrap](https://react-bootstrap.netlify.app/)
- [TablerReact](https://github.com/tabler/tabler-react)
**Other Libraries:**
- [web3js](https://web3js.readthedocs.io)
- [ethr-did](https://github.com/uport-project/ethr-did)
- [did-resolver](https://github.com/decentralized-identity/did-resolver) & [ethr-did-resolver](https://github.com/decentralized-identity/ethr-did-resolver)
- [did-jwt](https://github.com/decentralized-identity/did-jwt) & [did-jwt-vc](https://github.com/decentralized-identity/did-jwt-vc)
- [merkletreejs](https://github.com/miguelmota/merkletreejs) & [keccak256](https://github.com/miguelmota/keccak256)
- [jsonpath](https://github.com/dchester/jsonpath)
---
**Accessing Front-ends:**
_Servers and front-ends are bound to IP `0.0.0.0`, which makes them reachable from other machines on the local network_
- idm:
`<local-server-ip>:8080`
- logger:
`<local-server-ip>:9000`
- charterer:
`<local-server-ip>:3000`
- shipowner:
`<local-server-ip>:5000`
- customer:
`<local-server-ip>:7000`
| 30.046512 | 157 | 0.677245 | yue_Hant | 0.354566 |
4d17ba57112d4ba025a88b19451f567b0da44aed | 3,936 | md | Markdown | content/publication/2022_AI.md | mingshuwang/academic-kickstart | 662523c33683b7dd05fb992c2a135a4b75a8aa54 | [
"MIT"
] | null | null | null | content/publication/2022_AI.md | mingshuwang/academic-kickstart | 662523c33683b7dd05fb992c2a135a4b75a8aa54 | [
"MIT"
] | null | null | null | content/publication/2022_AI.md | mingshuwang/academic-kickstart | 662523c33683b7dd05fb992c2a135a4b75a8aa54 | [
"MIT"
] | null | null | null | +++
title = "Embedding Artificial Intelligence in Society: Looking Beyond the EU AI Master Plan Using the Culture Cycle"
date = 2022-02-01
# Authors. Comma separated list, e.g. `["Bob Smith", "David Jones"]`.
authors = ["Borsci, S.", "Lehtola, V.", "Francesco, N.", "Yang, M.Y.", "Augustijn, E.W.", "Bagheriye, L.", "Christoph, B.", "Kounadi, O.", "Li, J.", "Moreira, J.", "van der Nagel, J.", "Veldkamp, B.", "Duc, L.V.", "**Wang, M.**", "Wijnhoven, F.", "Wolterink, J.M.", "Zurita-Milla, R."]
# Publication type.
# Legend:
# 0 = Uncategorized
# 1 = Conference proceedings
# 2 = Journal
# 3 = Work in progress
# 4 = Technical report
# 5 = Book
# 6 = Book chapter
publication_types = ["2"]
# Publication name and optional abbreviated version.
publication = " *AI & Society*"
publication_short = " *AI & Society*"
# Abstract and optional shortened version.
abstract = "The European Union (EU) Commission’s whitepaper on Artificial Intelligence (AI)proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the masterplan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical and methodological constraints. We identify two main obstacles in the implementation of this plan: i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels and ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU will adopt nation-centric strategies to exploit AI, thus preventing the development of a frictionless market as envisaged by the EU. Moreover, the Commission aims at changing the AI development culture proposing a human-centred and safety-first perspective that is not supported by methodological advancements, thus taking the risks of unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risks of developing AI systems at the expense of society. Our analysis results in the recommendation that the EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach establishing a research agenda seeking answers to the technical and methodological open questions regarding the development and assessment of human-AI co-action aiming for a sustainable AI diffusion in the society."
# Featured image thumbnail (optional)
image_preview = ""
# Is this a selected publication? (true/false)
selected = false
# Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter the filename (excluding '.md') of your project file in `content/project/`.
projects = []
# Links (optional).
#url_source =
#url_pdf =
#url_preprint = "#"
#url_code = "#"
#url_dataset = "#"
#url_project = "#"
#url_slides = "#"
#url_video = "#"
#url_poster = "#"
#url_source = "#"
# Custom links (optional).
# Uncomment line below to enable. For multiple links, use the form `[{...}, {...}, {...}]`.
# url_custom = [{name = "Custom Link", url = "http://example.org"}]
# Does the content use math formatting?
math = true
# Does the content use source code highlighting?
highlight = true
# Featured image
# Place your image in the `static/img/` folder and reference its filename below, e.g. `image = "example.jpg"`.
[header]
# image = "headers/bubbles-wide.jpg"
# caption = "My caption :smile:"
+++
| 57.882353 | 1,972 | 0.733232 | eng_Latn | 0.981147 |
4d1872c16ce09cbdf1e6e8094b4a1b9556ea5dc4 | 1,529 | md | Markdown | README.md | johnlunney/digitakt-song-mode | 194e6df759e5b3ff46186924d5034ab2bf7f4d29 | [
"MIT"
] | null | null | null | README.md | johnlunney/digitakt-song-mode | 194e6df759e5b3ff46186924d5034ab2bf7f4d29 | [
"MIT"
] | null | null | null | README.md | johnlunney/digitakt-song-mode | 194e6df759e5b3ff46186924d5034ab2bf7f4d29 | [
"MIT"
] | null | null | null | # digitakt-song-mode
digitakt-song-mode is a desktop app that provides a so-called "song mode" for the [Elektron](https://www.elektron.se/) Digitakt sampler. A piece of music can be created by scheduling patterns (with a specified number of repetitions) and managing the mute state of the tracks.
The app should also work with the Digitone synthesizer, although this hasn't been tested.
Built with the [diquencer](https://github.com/mcjlnrtwcz/diquencer) library for MIDI sequencing.

## Installation
Use the [pipenv](https://pipenv.readthedocs.io/en/latest/) environment manager (via `Makefile`) to install digitakt-song-mode.
```bash
make install
```
## Usage
To launch the app, enter the shell and execute the Python script.
```bash
make start
```
The songs are stored in JSON files (no editor available yet). See [example](extras/example.json) for reference.
The `mutes` list determines which tracks should be silent. If all tracks should play, leave the list empty.
## Development
For development purposes you need to install development packages.
```bash
make install_dev
```
The `Makefile` provides the following convenience targets:
- `shell`: enter the shell,
- `format`: auto-format code with `black`,
- `sort`: sort imports with `isort`,
- `lint`: check compliance with `flake8`.
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests (if there are any) as appropriate.
## License
[MIT](LICENSE)
| 31.854167 | 264 | 0.759974 | eng_Latn | 0.988434 |
4d188a7b5fc86a9832b681d19d5f964ee2188e51 | 3,914 | md | Markdown | readme.md | LuigiHyde/Pr-ctica-1-2-3 | dd85a4048bf81cb880cb52324288fb5d4810e5b7 | [
"MIT"
] | null | null | null | readme.md | LuigiHyde/Pr-ctica-1-2-3 | dd85a4048bf81cb880cb52324288fb5d4810e5b7 | [
"MIT"
] | null | null | null | readme.md | LuigiHyde/Pr-ctica-1-2-3 | dd85a4048bf81cb880cb52324288fb5d4810e5b7 | [
"MIT"
] | null | null | null | ## PORTFOLIO LUIGI DAFINA
Interactive Multimedia Creation project, Faculty of Fine Arts, University of Granada
# 1 Project details
**Title**: Personal Portfolio
**Web:** https://luigihyde.github.io/Practicas-1-2-3/PROYECTOLuigiMarianDafina.html
**Author:** Luigi Marian Dafina
**Summary**: This gives a brushstroke (quite literally) of the different disciplines taught at the Faculty of Fine Arts, showing some of the coursework from that field as well as from Classical Guitar studies at the Conservatory.
**Style/genre:** Novel / game / portfolio / documentary... etc.
**Resolution:** 800x600px
**Tested on:** Google Chrome and Mozilla Firefox
For use on Android mobile devices:

**Project size:** 30.5 MB
**License**: This project is released under a CC Attribution-ShareAlike licence (CC BY-SA)
**Date**: 13/06/2020
**Media**
- GitHub:
# 2. Project report
### 2.1 Storyboard:
It opens with an "Ink Colors" animation playing forwards and backwards, each pass revealing a little more information about the project and its author. (First image, top left)
After that, a menu shows the gallery (top right), the quiz (bottom) and the credits.
[explanation](https://github.com/LuigiHyde/Practicas-1-2-3/blob/master/Explicaci%C3%B3n%20proyecto.jpg)
### 2.2. Navigation map

# 3. Methodology
A series of sketches was made to guide the work.
A number of changes were made to fit the required specifications.
Feedback was requested from several friends and classmates, which helped check how the project
displays on different devices.
### Stage 1: Project ideation
**Field research**
- Portfolios from previous years, viewed in class.
- Websites and work by various artists.
**Motivation for the proposal**
This project arose from the idea of filling the absence of a personal page showing my journey and dedication to the different visual arts practised in Fine Arts, as well as to Classical Guitar.
**Audience**
- Aimed at people looking to commission artistic and musical work.
- Aimed at people interested in my academic and professional career.
- Classmates.
### Stage 2: Development / activities carried out
- Portfolio
- Intro
- Gallery of works.
- Credits
- Artistic and musical quiz.
### Stage 3: Known issues
- The quiz does not link back to the portfolio.
- Second question of the quiz: after a wrong answer or when time runs out, it should return to the first question.
- No button to adjust the sound volume.
# 4. Conclusions
I think the project is well conceived, but the execution flaws are many and varied...
I would love to have the time to fix and improve the interface and its behaviour by adding buttons,
making it more intuitive, with options for better sound and volume control, etc.
# 5 References
**Articles and blogs**
http://luigihyde.blogspot.com/
https://www.tusclasesparticulares.com/blog/chitaraguitarraguitar
**Audiovisual resources and materials:**
* Music:
"Danza Oriental II" - Enrique Granados.
Video performance: "Cubanita" - Flores Chaviano
Quiz: "Statues", Harry Potter soundtrack
* Images:
Taken by the author.
* Typography:
Arial
**Tools used**
- Hippani Animator 5.1
- Github.io
- Adobe Animate
https://creativecommons.org/licenses/?lang=es
June 2020
| 24.929936 | 263 | 0.747828 | spa_Latn | 0.988674 |
4d18dde91d38cf424cc203c756e34b60949db4bc | 463 | md | Markdown | CHANGELOG.md | jsbites/optimusprime | 26e4196db070bf463ae2b5b349b1caaf6992155d | [
"MIT"
] | 1 | 2017-05-06T04:38:25.000Z | 2017-05-06T04:38:25.000Z | CHANGELOG.md | jsbites/optimusprime | 26e4196db070bf463ae2b5b349b1caaf6992155d | [
"MIT"
] | null | null | null | CHANGELOG.md | jsbites/optimusprime | 26e4196db070bf463ae2b5b349b1caaf6992155d | [
"MIT"
] | null | null | null | ```
/[-])// ___
__ --\ `_/~--| / \
/_-/~~--~~ /~~~\\_\ /\
| |___|===|_-- | \ \ \
_/~~~~~~~~|~~\, ---|---\___/----| \/\-\
~\________|__/ / // \__ | || / | | | |
,~-|~~~~~\--, | \|--|/~||| | | |
[3-|____---~~ _--'==;/ _, | |_|
/ /\__|_/ \ \__/--/
```
## **Optimus Prime** (*v0.2.1*)
* Initial working version.
| 28.9375 | 45 | 0.168467 | yue_Hant | 0.097774 |
4d190aaee3d2a56e845d61cad433cdf4805b0ae8 | 3,953 | md | Markdown | espanya/104_viseloga.md | jkunimune15/pandunia | b0422521534a8b43d5b9d3252644ee76ac42fc0b | [
"CC-BY-4.0"
] | null | null | null | espanya/104_viseloga.md | jkunimune15/pandunia | b0422521534a8b43d5b9d3252644ee76ac42fc0b | [
"CC-BY-4.0"
] | null | null | null | espanya/104_viseloga.md | jkunimune15/pandunia | b0422521534a8b43d5b9d3252644ee76ac42fc0b | [
"CC-BY-4.0"
] | null | null | null | # Pronombres
Los pronombres pueden sustituir a sustantivos y frases nominales.
## Pronombres personales
| Singular | Plural |
|:------------|:-------------|
| **mi** | **mimon** |
| yo | nosotros |
| **tu** | **tumon** |
| tú, usted | ustedes |
| **ye** | **yemon** |
| él, ella | ellos, ellas |
Todos los pronombres pueden usarse para todos los géneros.
Los pronombres posesivos consisten en el pronombre personal y la partícula posesiva
**su**.
| Singular | Plural |
|:------------|:-------------|
| **mi su** | **mimon su** |
| mio | nuestro |
| **tu su** | **tumon su** |
| tuyo | de ustedes |
| **ye su** | **yemon su** |
| suyo | de ellos |
## Pronombre reflexivo
El pronombre reflexivo se usa cuando el objeto de una oración es el mismo que el sujeto.
**se**
– su mismo
**mi vide se.**
– Me miro.
**mimon vide se.**
– Nos miramos.
La expresión **semon** se usa cuando dos o más personas realizan la acción el uno al otro.
**semon**
– el uno al otro
**mi e tu vide semon.**
– Yo y tú nos miramos el uno al otro.
**mimon vide semon.**
– Nos miramos.
## Pronombres demostrativos
Los pronombres demostrativos se usan con sustantivos para hacerlos más específicos.
Los demostrativos en Pandunia son:
**ni**
– esto (cerca del orador)
**go**
– eso (lejos del orador)
**la**
– él o la (conocido por el orador y el oyente)
En Pandunia, **ni**, **go** y **la** funcionan solo como modificadores,
y requieren un sustantivo después.
Para usarlos como pronombres, un sustantivo genérico como
**she**
(«cosa») se necesita poner después.
El demostrativo proximal
**ni**
indica cosas que están cerca del orador.
El demostrativo distal
**go**
indica cosas que están lejos del orador.
**ni she si bon.**
– Esta (cosa) es buena.
**go she si dus.**
– Esa (cosa) es mala.
**mi vol ni buku, no go.**
– Quiero este libro, no ese.
Los demostrativos proximal y distal se usan para introducir objetos nuevos.
El demostrativo temático, por otro lado,
no especifica distancia física,
sino que se usa cuando el orador ya mencionó el objeto o persona en cuestión,
o es conocido por el oyente o es de interés actual en el discurso.
**ni she si mau. ye vol yam go mushu.**
– Esto es un gato. Quiere comerse ese ratón.
**mi ten un mau e un vaf. la vaf si dai. ye yam poli yam.**
– Tengo un gato y un perro. El perro es grande. Come mucha comida.
### Uso abstracto
Los demostrativos se pueden usar para referir
a entidades abstractas de discurso, no objetos concretos.
**la** refiere a cosas que se dijeron anteriormente,
**ni** refiere a cosas que se dicen actualmente,
y **go** refiere a cosas que se dirán en el futuro.
**ni jumle si korte.**
– Esta oración es corta.
Arriba, **ni jumle** (_esta oración_) refiere a la oración que se está hablando.
**mi mana go she: mi ama tu.**
– Significo esto: te amo. (Significo que te amo.)
**mi ama tu. mi mana la she.**
– Te amo. Eso es lo que significo.
Arriba, **go she** refiere al contenido de la oración siguiente,
y **la she** refiere al contenido de la oración previa.
## Pronombres interrogativos
**ke** es el pronombre interrogativo de uso general.
Funciona como las palabras españolas _qué_, _quién_ y _cuál_.
**ke?**
– ¿Qué?
Para formar otros pronombres interrogativos, **ke** se combina con sustantivos genéricos.
**ke she?**
– ¿Cuál? (¿Qué cosa?)
**ke jen?**
– ¿Quién? (¿Qué persona?)
**na ke zaman?**
– ¿Cuándo? (¿En qué tiempo?)
**na ke loka?**
– ¿Dónde? (¿En qué lugar?)
**va ke mode?**
– ¿Cómo? (¿De qué modo?)
**ze ke sabu?**
– ¿Por qué? (¿Por qué razón?)
Antes de un adjetivo, **ke** también significa «cuán».
**ke nove?**
– ¿Cuán nuevo?
**ke dai?**
– ¿Cuán grande?
**ke lili?**
– ¿Cuán pequeño?
**ke koste?**
– ¿Cuán caro? (¿Cuánto cuesta?)
**ke poli?**
– ¿Cuán muchos? (¿Cuántos?)
**tu tena ke dai di mau?**
– ¿Cómo de grande es tu gato?
| 24.70625 | 91 | 0.646598 | spa_Latn | 0.995354 |
4d192cc7542936dc40c7183d0924a23d47dfc6a1 | 2,265 | md | Markdown | docs/extensibility/debugger/reference/idebugbreakpointrequest3-getrequestinfo2.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugbreakpointrequest3-getrequestinfo2.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugbreakpointrequest3-getrequestinfo2.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugBreakpointRequest3::GetRequestInfo2 | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
f1_keywords:
- IDebugBreakpointRequest3::GetRequestInfo2
helpviewer_keywords:
- IDebugBreakpointRequest3::GetRequestInfo2
ms.assetid: 33942e4a-0a0a-49e8-a693-004954f6d38a
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 4290c34d37cfd5c444702d6040d7c934e698d53f
ms.sourcegitcommit: 37fb7075b0a65d2add3b137a5230767aa3266c74
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 01/02/2019
ms.locfileid: "53904864"
---
# <a name="idebugbreakpointrequest3getrequestinfo2"></a>IDebugBreakpointRequest3::GetRequestInfo2
Questo metodo ottiene le informazioni di richiesta di punto di interruzione che descrivono la richiesta di punto di interruzione.
## <a name="syntax"></a>Sintassi
```cpp
HRESULT GetRequestInfo2(
BPREQI_FIELDS dwFields,
BP_REQUEST_INFO2* bBPRequestInfo
);
```
```csharp
int GetRequestInfo2(
enum_BPREQI_FIELDS dwFields,
BP_REQUEST_INFO2[] bBPRequestInfo
);
```
#### <a name="parameters"></a>Parametri
`dwFields`
[in] Una combinazione di flag dell'enumerazione [BPREQI_FIELDS](../../../extensibility/debugger/reference/bpreqi-fields.md) che determina quali campi di `pBPRequestInfo` devono essere compilati.
`bBPRequestInfo`
[out] La struttura [BP_REQUEST_INFO2](../../../extensibility/debugger/reference/bp-request-info2.md) da compilare.
## <a name="return-value"></a>Valore restituito
Se ha esito positivo, restituisce `S_OK`; in caso contrario, restituisce il codice di errore.
## <a name="remarks"></a>Note
In questa richiesta vengono restituite più informazioni rispetto a quelle restituite dal metodo [GetRequestInfo](../../../extensibility/debugger/reference/idebugbreakpointrequest2-getrequestinfo.md).
## <a name="see-also"></a>Vedere anche
[IDebugBreakpointRequest3](../../../extensibility/debugger/reference/idebugbreakpointrequest3.md)
[GetRequestInfo](../../../extensibility/debugger/reference/idebugbreakpointrequest2-getrequestinfo.md)
[BP_REQUEST_INFO2](../../../extensibility/debugger/reference/bp-request-info2.md)
[BPREQI_FIELDS](../../../extensibility/debugger/reference/bpreqi-fields.md) | 39.051724 | 195 | 0.756291 | ita_Latn | 0.484093 |
4d1a1169a31406c678f20ba74d966974caec61d7 | 1,645 | md | Markdown | docs/resources/firewall_policer.md | vcalmic/terraform-provider-junos | 06107eea87d67d408bb05da41462862791101590 | [
"MIT"
] | null | null | null | docs/resources/firewall_policer.md | vcalmic/terraform-provider-junos | 06107eea87d67d408bb05da41462862791101590 | [
"MIT"
] | null | null | null | docs/resources/firewall_policer.md | vcalmic/terraform-provider-junos | 06107eea87d67d408bb05da41462862791101590 | [
"MIT"
] | null | null | null | ---
page_title: "Junos: junos_firewall_policer"
---
# junos_firewall_policer
Provides a firewall policer resource.
## Example Usage
```hcl
# Configure a firewall policer
resource junos_firewall_policer "policer_demo" {
name = "policerDemo"
filter_specific = true
if_exceeding {
bandwidth_percent = 80
burst_size_limit = "50k"
}
then {
discard = true
}
}
```
## Argument Reference
The following arguments are supported:
- **name** (Required, String, Forces new resource)
Name of policer.
- **filter_specific** (Optional, Boolean)
Policer is filter-specific.
- **if_exceeding** (Required, Block)
Define rate limits options.
- **burst_size_limit** (Required, String)
Burst size limit in bytes.
- **bandwidth_percent** (Optional, Number)
Bandwidth limit in percentage.
- **bandwidth_limit** (Optional, String)
Bandwidth limit in bits/second.
- **then** (Required, Block)
Define action to take if the rate limits are exceeded.
- **discard** (Optional, Boolean)
Discard the packet.
- **forwarding_class** (Optional, String)
Classify packet to forwarding class.
- **loss_priority** (Optional, String)
Packet's loss priority.
- **out_of_profile** (Optional, Boolean)
Discard packets only if both congested and over threshold.
## Attributes Reference
The following attributes are exported:
- **id** (String)
An identifier for the resource with format `<name>`.
## Import
Junos firewall policer can be imported using an id made up of `<name>`, e.g.
```shell
$ terraform import junos_firewall_policer.policer_demo policerDemo
```
| 24.552239 | 76 | 0.696049 | eng_Latn | 0.925609 |
4d1aeedfa5b07146a505d8490650b9c2ae6b8490 | 4,340 | md | Markdown | README.md | menip/speech-to-text | 6dc3d2c0ead4fb4acdd051b4c14305262d4d39b9 | [
"MIT"
] | 21 | 2020-07-29T16:39:13.000Z | 2022-03-23T17:27:25.000Z | README.md | menip/speech-to-text | 6dc3d2c0ead4fb4acdd051b4c14305262d4d39b9 | [
"MIT"
] | 5 | 2020-09-16T08:36:53.000Z | 2021-09-12T16:48:05.000Z | README.md | menip/speech-to-text | 6dc3d2c0ead4fb4acdd051b4c14305262d4d39b9 | [
"MIT"
] | 1 | 2021-01-20T05:15:26.000Z | 2021-01-20T05:15:26.000Z | Speech to Text module for Godot
===============================
This is a Speech to Text (STT) module for [Godot][godot]. In other words, a module
that captures the user's microphone input and converts it to text.
[godot]: https://godotengine.org "Godot site"
Requirements
------------
The module may be built with Godot 3.2 on the following platforms:
- Windows
- OS X
- Unix (with **PulseAudio** or **ALSA** requirement)
- iOS
- Android
I've only verified the x11 build, haven't yet tested export builds.
Check if your system fulfills Godot's building [requirements][compilingReq] on the
desired platform, or for cross-compiling to another system. Other than that, *Speech
to Text* has no additional requirements. It is intended to be used alongside a
microphone connected to the system, which will capture voice input.
[compilingReq]: http://docs.godotengine.org/en/3.2/development/compiling/index.html "Compiling Requirements"
Building Godot with the module
------------------------------
The following steps assume that you are on a **Unix** system. For a different
platform supported by the module, use the equivalent tools.
1. If you don't have the source code for Godot, clone its repository from GitHub.
$ git clone https://github.com/godotengine/godot
2. Inside the cloned repository, change to the latest stable build that the module
works on (when these instructions were made, it was 3.2).
$ cd godot
$ git checkout 3.2
3. Clone this repository inside Godot's `modules/` directory, and switch back to the Godot root before compiling.
$ cd modules
$ git clone https://github.com/menip/godot_speech_to_text.git
$ cd ../
4. Build Godot, according to your desired platform (follow the
[instructions][howToBuild] given on the Godot Docs).
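   For example, on Linux a Godot 3.2 editor build is typically started from the repository root with (the `-j` value is only a sensible default, adjust it to your CPU):
       $ scons platform=x11 -j4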
5. Run Godot:
$ ./bin/godot*tools*
6. In order to check if the module was successfully added, follow these final steps:
6.1. After opening Godot, click the **New Project** button on the right side to
open the **Create New Project** window.
6.2. On the new window, add a **Project Path** (I'd recommend an empty directory)
and a **Project Name** (you are free to choose as you like).
6.3. Click **Create** to open the Godot project editor window.
6.4. On the right side, there should be a **Scene** tab with a window below it.
Click the first icon below the **Scene** label, which has a plus symbol `+`,
to create a new node.
6.5. Check if the ***STTRunner*** appears in the list of nodes; it should probably
be near the end of the list. There is also a search bar for convenience.
[howToBuild]: http://docs.godotengine.org/en/3.2/development/compiling/index.html "How to build Godot"
Usage
-----
Check the html tutorial [here][sttTutorial] for more information on how to use the module.
[sttTutorial]: https://samuraisigma.github.io/godot-docs/doc/community/tutorials/misc/speech_to_text.html "Speech to Text module tutorial"
[godotDocsFork]: https://github.com/SamuraiSigma/godot-docs "My Godot Docs fork"
Export templates
----------------
If you wish to export a game that uses the *Speech to Text* module, you will first
need to build export templates for the desired platform.
Check the instructions and requirements on the Godot Docs [site][exportTemplates] to
learn how to build export templates for a specific system. This includes cross
compiling for opposite bits or even for a different platform.
[exportTemplates]: http://docs.godotengine.org/en/3.3/development/compiling/index.html "Building export templates"
### TODO: Add Godot 3 demo.
# Third party libraries
The below third party libraries were used in this **Speech to Text** module.
## sphinxbase
- Upstream: http://cmusphinx.sourceforge.net
- Version: 5prealpha
- License: BSD-2-Clause
Files extracted from upstream source:
- `src/libsphinxbase/*`, except from: `Makefile.*`
- `src/libsphinxad/*`, except from: `Makefile.*`
- `include/*`, except from: `Makefile.*`, `config.h.in`, `sphinx_config.h.in`
- LICENSE
## pocketsphinx
- Upstream: http://cmusphinx.sourceforge.net
- Version: 5prealpha
- License: BSD-2-Clause
Files extracted from upstream source:
- `src/libpocketsphinx/*` as src/, except from: `Makefile.*`
- `include/*`, except from: `Makefile.*`
- LICENSE
| 32.148148 | 138 | 0.715899 | eng_Latn | 0.962265 |
4d1af3dc47619c41e16d5fb62ac0d2fea5c2d3fd | 1,187 | md | Markdown | packages/plugin-apply-try-catch/README.md | sobolevn/putout | 965279977fa9aa7e522960a86793813c64af1f26 | [
"MIT"
] | null | null | null | packages/plugin-apply-try-catch/README.md | sobolevn/putout | 965279977fa9aa7e522960a86793813c64af1f26 | [
"MIT"
] | null | null | null | packages/plugin-apply-try-catch/README.md | sobolevn/putout | 965279977fa9aa7e522960a86793813c64af1f26 | [
"MIT"
] | null | null | null | # @putout/plugin-apply-try-catch [![NPM version][NPMIMGURL]][NPMURL] [![Dependency Status][DependencyStatusIMGURL]][DependencyStatusURL]
[NPMIMGURL]: https://img.shields.io/npm/v/@putout/plugin-apply-try-catch.svg?style=flat&longCache=true
[NPMURL]: https://npmjs.org/package/@putout/plugin-apply-try-catch "npm"
[DependencyStatusURL]: https://david-dm.org/coderaiser/putout?path=packages/plugin-apply-try-catch
[DependencyStatusIMGURL]: https://david-dm.org/coderaiser/putout.svg?path=packages/plugin-apply-try-catch
`putout` plugin adds the ability to apply [tryCatch](https://github.com/coderaiser/try-catch).
## Install
```
npm i @putout/plugin-apply-try-catch
```
## Rule
```json
{
"rules": {
"apply-try-catch/try-catch": "on",
"apply-try-catch/try-to-catch": "on"
}
}
```
## tryCatch
### ❌ Incorrect code example
```js
try {
log('hello');
} catch(error) {
}
```
### ✅ Correct code Example
```js
const [error] = tryCatch(log, 'hello');
```
## tryToCatch
### ❌ Incorrect code example
```js
try {
await send('hello');
} catch(error) {
}
```
### ✅ Correct code Example
```js
const [error] = await tryToCatch(send, 'hello');
```
## License
MIT
| 18.546875 | 136 | 0.668071 | yue_Hant | 0.455102 |
4d1ba0da96de1be04ee7d237061ef05110eae2d7 | 8,473 | md | Markdown | website/translated_docs/fr/MSC/repair.md | doc4d/docs | eb0dcee4b6398c8dc1662cebba14c69e1ebec35c | [
"CC-BY-4.0"
] | null | null | null | website/translated_docs/fr/MSC/repair.md | doc4d/docs | eb0dcee4b6398c8dc1662cebba14c69e1ebec35c | [
"CC-BY-4.0"
] | 2 | 2020-07-30T09:14:33.000Z | 2021-09-24T15:28:17.000Z | website/translated_docs/fr/MSC/repair.md | doc4d/docs | eb0dcee4b6398c8dc1662cebba14c69e1ebec35c | [
"CC-BY-4.0"
] | 3 | 2020-01-20T09:16:58.000Z | 2021-03-25T18:07:46.000Z | ---
id: repair
title: Page Réparation
sidebar_label: Page Réparation
---
Cette page permet de réparer le fichier de données ou le fichier de structure lorsqu’il a été endommagé. Generally, you will only use these functions under the supervision of 4D technical teams, when anomalies have been detected while opening the application or following a [verification](verify.md).
**Attention :** Chaque réparation entraîne la duplication du fichier d’origine et donc l’augmentation de la taille du dossier de l’application. Il est important de prendre cela en considération (notamment sous macOS, où les applications 4D apparaissent sous forme de paquet) afin de ne pas augmenter excessivement la taille de l'application. Une intervention manuelle à l’intérieur du package peut être utile afin de supprimer les copies des fichiers d’origine.
> La réparation n’est disponible qu’en mode maintenance. Si vous tentez d’effectuer cette opération en mode standard, une boîte de dialogue d’alerte vous prévient que l'application va être fermée puis relancée en mode maintenance.
> Lorsque la base est chiffrée, la réparation des données comprend le déchiffrage et le chiffrage et nécessite ainsi la clé de chiffrement de données courante. Si aucune clé de chiffrement valide n'a déjà été fournie, une boite de dialogue s'affiche pour demander pour demander le mot de passe ou la clé de chiffrement (voir Page Chiffrement).
## Fichiers
### Fichier de données à réparer
Chemin d’accès du fichier de données courant. Le bouton **[...]** permet de désigner un autre fichier de données. Lorsque vous cliquez sur ce bouton, une boîte de dialogue standard d’ouverture de documents s’affiche, vous permettant de désigner le fichier de données à réparer. Une fois cette boîte de dialogue validée, le chemin d’accès du fichier à réparer est indiqué dans la fenêtre. Si vous effectuez une réparation par [réparation par en-têtes d'enregistrements](#réparation-par-en-têtes-denregistrements), vous pouvez sélectionner tout fichier de données. Une fois cette boîte de dialogue validée, le chemin d’accès du fichier à réparer est indiqué dans la fenêtre.
### Dossier de sauvegarde
Par défaut, le fichier de données original sera dupliqué avant réparation. Il sera placé dans un sous-dossier libellé “Replaced files (repairing)” dans le dossier de l'application. Le second bouton **[...]** permet de désigner un autre emplacement pour les sauvegardes des fichiers originaux effectuées avant réparation. Cette option permet notamment de réparer des fichiers volumineux en utilisant différents disques.
### Fichiers réparés
4D crée un nouveau fichier de données vide à l’emplacement du fichier d’origine. Le fichier d’origine est déplacé dans le dossier nommé "\Replaced Files (Repairing) date heure" dont l’emplacement a été défini dans la zone de "Dossier de sauvegarde" (dossier de l'application par défaut). Le fichier vide est rempli avec les données récupérées.
## Réparation standard
La réparation standard permet de réparer des données dans lesquelles seuls quelques enregistrements ou index sont endommagés (les tables d'adresses sont intactes). Les données sont compactées et réparées. A noter que ce type de réparation ne peut être effectué que si le fichier de données et le fichier de structure correspondent.
A l’issue de la procédure, la page "Réparation" du CSM est affichée. Un message indique si la réparation a été effectuée avec succès. Dans ce cas, vous pouvez immédiatement ouvrir l'application. 
## Réparation par en-têtes d'enregistrements
Cette option de réparation de bas niveau est à utiliser uniquement dans le cas où le fichier de données a été fortement endommagé et une fois que toutes les autres solutions (restitution de sauvegarde, réparation standard) se sont avérées inefficaces.
Les enregistrements de 4D sont de taille variable : il est donc nécessaire, pour les retrouver, de conserver dans une table spéciale l’endroit où ils sont stockés sur votre disque. Le programme accède donc à l’adresse de l’enregistrement par l’intermédiaire d’un index et d’une table d’adresses. Si seuls des enregistrements ou des index sont endommagés, l’option de réparation standard suffira généralement pour résoudre le problème. C’est lorsque la table d’adresses est touchée qu’il faudra en venir à une récupération plus sophistiquée, puisqu’il faut la reconstituer. Pour réaliser cette opération, le CSM utilise le marqueur qui se trouve en en-tête de chaque enregistrement. Les marqueurs peuvent être comparés à des résumés des enregistrements, comportant l’essentiel de leurs informations, et à partir desquels une reconstitution de la table d’adresses est possible.
> Si tous les enregistrements et toutes les tables ont été attribués, seule la zone principale est affichée.
>
> La récupération par en-têtes ne tient pas compte des éventuelles contraintes d’intégrité. En particulier, à l’issue de cette opération, vous pouvez obtenir des valeurs dupliquées avec des champs uniques ou des valeurs NULL avec des champs déclarés **non NULL**.
Lorsque vous cliquez sur le bouton **Réparer**, 4D effectue une analyse complète du fichier de données. A l’issue de cette analyse, le résultat est affiché dans la fenêtre suivante :

> Si tous les enregistrements et toutes les tables ont été attribués, seule la zone principale est affichée.
La zone "Enregistrements trouvés dans le fichier de données" comporte deux tableaux synthétisant les informations issues de l’analyse du fichier de données.
- Le premier tableau liste les informations issues de l’analyse du fichier de données. Chaque ligne représente un groupe d’enregistrements récupérables dans le fichier de données :
- La colonne **Ordre** indique l’ordre de récupération des groupes d’enregistrements.
- La colonne **Nombre** indique le nombre d'enregistrements contenus dans la table.
- La colonne **Table de destination** indique le nom des tables ayant pu être automatiquement associées aux groupes d’enregistrements identifiés. Les noms des tables attribuées automatiquement sont affichés en caractères verts. Les groupes qui n'ont pas encore été attribués, c'est-à-dire, les tables qui n'ont pas pu être associées à des enregistrements sont affichées en caractères rouges.
- La colonne **Récupérer** permet vous permet d’indiquer pour chaque groupe si vous souhaitez récupérer les enregistrements. Par défaut, l’option est cochée pour tous les groupes avec les enregistrements qui peuvent être associés à une table.
- Le deuxième tableau liste les tables du fichier de structure.
### Attribution manuelle
Si, du fait de l’endommagement de la table d’adresses, un ou plusieurs groupes d’enregistrements n’ont pas pu être attribués à des tables, vous pouvez les attribuer manuellement. Pour attribuer une table à un groupe non identifié, sélectionnez le groupe dans le premier tableau. Lorsque vous sélectionnez des enregistrements non identifiés, la zone "Contenu des enregistrements" affiche une prévisualisation du contenu des premiers enregistrements du groupe afin de vous permettre de les attribuer plus facilement :

Sélectionnez ensuite la table à attribuer dans le tableau des "Tables non attribuées" puis cliquez sur le bouton **Identifier table**. Vous pouvez également attribuer une table par glisser-déposer. Le groupe d’enregistrements est alors associé à la table, il sera récupéré dans cette table. Les noms des tables attribuées manuellement sont affichés en caractères noirs. Le bouton **Ignorer enregistrements** permet de supprimer l’association effectuée manuellement entre une table et un groupe d’enregistrements.
## Voir le compte rendu
Une fois la réparation terminée, 4D génère un fichier de compte-rendu dans le dossier Logs du projet. Ce fichier liste l’ensemble des opérations qui ont été menées. Il est créé au format xml et est nommé : *ApplicationName**_Repair_Log_yyyy-mm-dd hh-mm-ss.xml*" où :
- *ApplicationName* est le nom du fichier de structure sans extension, par exemple "Factures",
- *aaaa-mm-jj hh-mm-ss* est l'horodatage du fichier, basé sur la date et l'heure système locales au moment du lancement de l'opération de vérification, par exemple "2019-02-11 15-20-45".
Lorsque vous cliquez sur le bouton **Voir le compte rendu**, 4D affiche le fichier de compte-rendu le plus récent dans le navigateur par défaut de l’ordinateur.
| 117.680556 | 875 | 0.801959 | fra_Latn | 0.99648 |
4d1bad335e7e840286224bad02fde103024b7e8f | 37,915 | md | Markdown | articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md | jayv-ops/azure-docs.de-de | 6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md | jayv-ops/azure-docs.de-de | 6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md | jayv-ops/azure-docs.de-de | 6be2304cfbe5fd0bf0d4ed0fbdf4a6a4d11ac6e0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Inkrementelles Kopieren von Daten mithilfe der Änderungsnachverfolgung und des Azure-Portals
description: In diesem Tutorial erstellen Sie eine Azure Data Factory-Instanz mit einer Pipeline, die Deltadaten basierend auf Informationen der Änderungsnachverfolgung in der Quelldatenbank in Azure SQL-Datenbank in einen Azure-Blobspeicher lädt.
ms.author: yexu
author: dearandyxu
ms.service: data-factory
ms.topic: tutorial
ms.custom: seo-lt-2019; seo-dt-2019
ms.date: 02/18/2021
ms.openlocfilehash: c79d96e016459732ce71019511fa429d62d91f9d
ms.sourcegitcommit: c27a20b278f2ac758447418ea4c8c61e27927d6a
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/03/2021
ms.locfileid: "101740136"
---
# <a name="incrementally-load-data-from-azure-sql-database-to-azure-blob-storage-using-change-tracking-information-using-the-azure-portal"></a>Inkrementelles Laden von Daten aus Azure SQL-Datenbank in Azure Blob Storage mit Informationen der Änderungsnachverfolgung und dem Azure-Portal
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
In diesem Tutorial erstellen Sie eine Azure Data Factory-Instanz mit einer Pipeline, die Deltadaten basierend auf Informationen der **Änderungsnachverfolgung** in der Quelldatenbank in Azure SQL-Datenbank in einen Azure-Blobspeicher lädt.
In diesem Tutorial führen Sie die folgenden Schritte aus:
> [!div class="checklist"]
> * Vorbereiten des Quelldatenspeichers
> * Erstellen einer Data Factory.
> * Erstellen Sie verknüpfte Dienste.
> * Erstellen von Datasets für Quelle, Senke und Änderungsnachverfolgung
> * Erstellen, Ausführen und Überwachen der vollständigen Kopierpipeline
> * Hinzufügen oder Aktualisieren von Daten in der Quelltabelle
> * Erstellen, Ausführen und Überwachen der inkrementellen Kopierpipeline
## <a name="overview"></a>Übersicht
In einer Datenintegrationslösung ist das inkrementelle Laden von Daten nach anfänglichen vollständigen Ladevorgängen ein häufig verwendetes Szenario. In einigen Fällen können die geänderten Daten für einen Zeitraum Ihres Quelldatenspeichers leicht aufgeteilt werden (z.B. LastModifyTime, CreationTime). Manchmal gibt es keine explizite Möglichkeit, die Deltadaten seit der letzten Verarbeitung der Daten zu identifizieren. Die Technologie für die Änderungsnachverfolgung, die von Datenspeichern unterstützt wird, z.B. Azure SQL-Datenbank und SQL Server, kann zum Identifizieren der Deltadaten verwendet werden. In diesem Tutorial wird beschrieben, wie Sie Azure Data Factory zusammen mit der SQL-Technologie für die Änderungsnachverfolgung nutzen, um Deltadaten inkrementell aus Azure SQL-Datenbank in Azure Blob Storage zu laden. Konkretere Informationen zur SQL-Technologie für die Änderungsnachverfolgung finden Sie unter [Informationen zur Änderungsnachverfolgung (SQL Server)](/sql/relational-databases/track-changes/about-change-tracking-sql-server).
## <a name="end-to-end-workflow"></a>Kompletter Workflow
Hier sind die Schritte des typischen End-to-End-Workflows zum inkrementellen Laden von Daten per Technologie für die Änderungsnachverfolgung angegeben.
> [!NOTE]
> Diese Technologie wird sowohl von Azure SQL-Datenbank als auch von SQL Server unterstützt. In diesem Tutorial wird Azure SQL-Datenbank als Quelldatenspeicher verwendet. Sie können auch eine SQL Server-Instanz verwenden.
1. **Initial loading of historical data** (Erstes Laden von Verlaufsdaten) (einmalige Ausführung):
1. Aktivieren Sie die Technologie für die Änderungsnachverfolgung in der Quelldatenbank in Azure SQL-Datenbank.
2. Rufen Sie den Anfangswert von SYS_CHANGE_VERSION in der Datenbank als Baseline zum Erfassen von geänderten Daten ab.
3. Laden Sie die vollständigen Daten aus der Quelldatenbank in Azure Blob Storage.
2. **Incremental loading of delta data on a schedule** (Inkrementelles Laden von Deltadaten nach einem Zeitplan) (regelmäßige Ausführung nach dem ersten Laden der Daten):
1. Rufen Sie die alten und neuen SYS_CHANGE_VERSION-Werte ab.
3. Laden Sie die Deltadaten, indem Sie die Primärschlüssel von geänderten Zeilen (zwischen zwei SYS_CHANGE_VERSION-Werten) aus **sys.change_tracking_tables** mit Daten in der **Quelltabelle** verknüpfen, und verschieben Sie die Deltadaten dann auf das Ziel.
4. Aktualisieren Sie SYS_CHANGE_VERSION für das nächste Deltaladen.
## <a name="high-level-solution"></a>Allgemeine Lösung
In diesem Tutorial erstellen Sie zwei Pipelines, mit denen die folgenden beiden Vorgänge durchgeführt werden:
1. **Erstes Laden:** : Sie erstellen eine Pipeline mit einer Kopieraktivität, bei der die gesamten Daten aus dem Quelldatenspeicher (Azure SQL-Datenbank) in den Zieldatenspeicher (Azure Blob Storage) kopiert werden.

1. **Inkrementell laden:** Sie erstellen eine Pipeline mit den folgenden Aktivitäten und führen sie regelmäßig aus.
1. Erstellen Sie **zwei Lookup-Aktivitäten**, um die alte und neue SYS_CHANGE_VERSION aus Azure SQL-Datenbank abzurufen und an die Kopieraktivität zu übergeben.
2. Erstellen Sie **eine Kopieraktivität**, um die eingefügten/aktualisierten/gelöschten Daten zwischen den beiden SYS_CHANGE_VERSION-Werten aus Azure SQL-Datenbank nach Azure Blob Storage zu kopieren.
3. Erstellen Sie **eine Aktivität „Gespeicherte Prozedur“** , um den Wert von SYS_CHANGE_VERSION für die nächste Pipelineausführung zu aktualisieren.

Wenn Sie kein Azure-Abonnement besitzen, können Sie ein [kostenloses Konto](https://azure.microsoft.com/free/) erstellen, bevor Sie beginnen.
## <a name="prerequisites"></a>Voraussetzungen
* **Azure SQL-Datenbank**. Sie verwenden die Datenbank als den **Quell**-Datenspeicher. Wenn Sie in Azure SQL-Datenbank noch keine Datenbank haben, lesen Sie den Artikel [Erstellen einer Datenbank in Azure SQL-Datenbank](../azure-sql/database/single-database-create-quickstart.md). Dort finden Sie die erforderlichen Schritte zum Erstellen einer solchen Datenbank.
* **Azure Storage-Konto**. Sie verwenden den Blob Storage als den **Senken**-Datenspeicher. Wenn Sie kein Azure Storage-Konto besitzen, finden Sie im Artikel [Erstellen eines Speicherkontos](../storage/common/storage-account-create.md) Schritte zum Erstellen eines solchen Kontos. Erstellen Sie einen Container mit dem Namen **Adftutorial**.
### <a name="create-a-data-source-table-in-azure-sql-database"></a>Erstellen einer Datenquellentabelle in Azure SQL-Datenbank
1. Starten Sie **SQL Server Management Studio**, und stellen Sie eine Verbindung mit SQL-Datenbank her.
2. Klicken Sie im **Server-Explorer** mit der rechten Maustaste auf Ihre **Datenbank**, und wählen Sie **Neue Abfrage**.
3. Führen Sie den folgenden SQL-Befehl für Ihre Datenbank aus, um eine Tabelle mit dem Namen `data_source_table` als Datenquellenspeicher zu erstellen.
```sql
create table data_source_table
(
PersonID int NOT NULL,
Name varchar(255),
Age int
PRIMARY KEY (PersonID)
);
INSERT INTO data_source_table
(PersonID, Name, Age)
VALUES
(1, 'aaaa', 21),
(2, 'bbbb', 24),
(3, 'cccc', 20),
(4, 'dddd', 26),
(5, 'eeee', 22);
```
4. Aktivieren Sie den Mechanismus für die **Änderungsnachverfolgung** in Ihrer Datenbank und der Quelltabelle (data_source_table), indem Sie die folgende SQL-Abfrage ausführen:
> [!NOTE]
> - Ersetzen Sie <your database name> durch den Namen der Datenbank in Azure SQL-Datenbank, in der „data_source_table“ enthalten ist.
> - Im aktuellen Beispiel werden die geänderten Daten zwei Tage lang aufbewahrt. Wenn Sie die geänderten Daten für drei oder mehr Tage laden, sind einige geänderte Daten nicht enthalten. Eine Möglichkeit besteht darin, den Wert von CHANGE_RETENTION zu erhöhen. Alternativ dazu können Sie sicherstellen, dass Ihr Zeitraum für das Laden der geänderten Daten nicht mehr als zwei Tage beträgt. Weitere Informationen finden Sie unter [Aktivieren der Änderungsnachverfolgung für eine Datenbank](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server#enable-change-tracking-for-a-database).
```sql
ALTER DATABASE <your database name>
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER TABLE data_source_table
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
```
5. Erstellen Sie eine neue Tabelle, und speichern Sie „ChangeTracking_version“ mit einem Standardwert, indem Sie die folgende Abfrage ausführen:
```sql
create table table_store_ChangeTracking_version
(
TableName varchar(255),
SYS_CHANGE_VERSION BIGINT,
);
DECLARE @ChangeTracking_version BIGINT
SET @ChangeTracking_version = CHANGE_TRACKING_CURRENT_VERSION();
INSERT INTO table_store_ChangeTracking_version
VALUES ('data_source_table', @ChangeTracking_version)
```
> [!NOTE]
> Wenn sich die Daten nach dem Aktivieren der Änderungsnachverfolgung für SQL-Datenbank nicht geändert haben, lautet der Wert für die Version der Änderungsnachverfolgung „0“.
6. Führen Sie die folgende Abfrage zum Erstellen einer gespeicherten Prozedur in Ihrer Datenbank aus. Die Pipeline ruft diese gespeicherte Prozedur auf, um die Version der Änderungsnachverfolgung in der Tabelle zu aktualisieren, die Sie im vorherigen Schritt erstellt haben.
```sql
CREATE PROCEDURE Update_ChangeTracking_Version @CurrentTrackingVersion BIGINT, @TableName varchar(50)
AS
BEGIN
UPDATE table_store_ChangeTracking_version
SET [SYS_CHANGE_VERSION] = @CurrentTrackingVersion
WHERE [TableName] = @TableName
END
```
### <a name="azure-powershell"></a>Azure PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Installieren Sie die aktuellen Azure PowerShell-Module, indem Sie die Anweisungen unter [Installieren und Konfigurieren von Azure PowerShell](/powershell/azure/install-Az-ps) befolgen.
## <a name="create-a-data-factory"></a>Erstellen einer Data Factory
1. Starten Sie den Webbrowser **Microsoft Edge** oder **Google Chrome**. Die Data Factory-Benutzeroberfläche wird zurzeit nur in den Webbrowsern Microsoft Edge und Google Chrome unterstützt.
1. Wählen Sie im Menü auf der linken Seite **Ressource erstellen** > **Daten + Analysen** > **Data Factory** aus:

2. Geben Sie auf der Seite **Neue Data Factory** unter **Name** den Namen **ADFTutorialDataFactory** ein.

Der Name der Azure Data Factory-Instanz muss **global eindeutig** sein. Sollte der folgende Fehler auftreten, ändern Sie den Namen der Data Factory (beispielsweise in „<IhrName>ADFTutorialDataFactory“), und wiederholen Sie den Vorgang. Benennungsregeln für Data Factory-Artefakte finden Sie im Artikel [Azure Data Factory – Benennungsregeln](naming-rules.md).
*Der Data Factory-Name „ADFTutorialDataFactory“ ist nicht verfügbar.*
3. Wählen Sie Ihr **Azure-Abonnement** aus, in dem die Data Factory erstellt werden soll.
4. Führen Sie für die **Ressourcengruppe** einen der folgenden Schritte aus:
- Wählen Sie die Option **Use existing**(Vorhandene verwenden) und dann in der Dropdownliste eine vorhandene Ressourcengruppe.
- Wählen Sie **Neu erstellen**, und geben Sie den Namen einer Ressourcengruppe ein.
Weitere Informationen über Ressourcengruppen finden Sie unter [Verwenden von Ressourcengruppen zum Verwalten von Azure-Ressourcen](../azure-resource-manager/management/overview.md).
4. Wählen Sie **V2 (Vorschau)** als **Version** aus.
5. Wählen Sie den **Standort** für die Data Factory aus. In der Dropdownliste werden nur unterstützte Standorte angezeigt. Die von der Data Factory verwendeten Datenspeicher (Azure Storage, Azure SQL-Datenbank usw.) und Computedienste (HDInsight usw.) können sich in anderen Regionen befinden.
6. Wählen Sie die Option **An Dashboard anheften** aus.
7. Klicken Sie auf **Erstellen**.
8. Auf dem Dashboard sehen Sie die folgende Kachel mit dem Status: **Deploying data factory** (Data Factory wird bereitgestellt...).

9. Nach Abschluss der Erstellung wird die Seite **Data Factory** wie in der Abbildung angezeigt.

10. Klicken Sie auf die Kachel **Erstellen und überwachen**, um die Azure Data Factory-Benutzeroberfläche (User Interface, UI) auf einer separaten Registerkarte zu starten.
11. Wechseln Sie im linken Bereich der Seite **Erste Schritte** zur Registerkarte **Bearbeiten**, wie in der folgenden Abbildung gezeigt:

## <a name="create-linked-services"></a>Erstellen von verknüpften Diensten
Um Ihre Datenspeicher und Compute Services mit der Data Factory zu verknüpfen, können Sie verknüpfte Dienste in einer Data Factory erstellen. In diesem Abschnitt erstellen Sie verknüpfte Dienste mit Ihrem Azure Storage-Konto und mit Ihrer Datenbank in Azure SQL-Datenbank.
### <a name="create-azure-storage-linked-service"></a>Erstellen des verknüpften Azure Storage-Diensts.
In diesem Schritt verknüpfen Sie Ihr Azure Storage-Konto mit der Data Factory.
1. Klicken Sie auf **Verbindungen** und dann auf **+ Neu**.

2. Wählen Sie im Fenster **New Linked Service** (Neuer verknüpfter Dienst) die Option **Azure Blob Storage**, und klicken Sie dann auf **Weiter**.

3. Führen Sie im Fenster **New Linked Service** (Neuer verknüpfter Dienst) die folgenden Schritte aus:
1. Geben Sie unter **Name** die Zeichenfolge **AzureStorageLinkedService** ein.
2. Wählen Sie unter **Speicherkontoname** Ihr Azure Storage-Konto aus.
3. Klicken Sie auf **Speichern**.

### <a name="create-azure-sql-database-linked-service"></a>Erstellen Sie einen Azure SQL-Datenbank -verknüpften Dienst.
In diesem Schritt verknüpfen Sie Ihre Datenbank mit der Data Factory.
1. Klicken Sie auf **Verbindungen** und dann auf **+ Neu**.
2. Wählen Sie im Fenster **New Linked Service** (Neuer verknüpfter Dienst) die Option **Azure SQL-Datenbank**, und klicken Sie auf **Weiter**.
3. Führen Sie im Fenster **New Linked Service** (Neuer verknüpfter Dienst) die folgenden Schritte aus:
1. Geben Sie in das Feld **Name** den Namen **AzureSqlDatabaseLinkedService** ein.
2. Wählen Sie im Feld **Servername** Ihren Server aus.
3. Wählen Sie im Feld **Datenbankname** Ihre Datenbank aus.
4. Geben Sie im Feld **Benutzername** den Namen des Benutzers ein.
5. Geben Sie im Feld **Kennwort** das Kennwort für den Benutzer ein.
6. Klicken Sie auf **Verbindung testen**, um die Verbindung zu testen.
7. Klicken Sie auf **Speichern**, um den verknüpften Dienst zu speichern.

## <a name="create-datasets"></a>Erstellen von Datasets
In diesem Schritt erstellen Sie Datasets, die für die Datenquelle, das Datenziel und den Ort zum Speichern der SYS_CHANGE_VERSION stehen.
### <a name="create-a-dataset-to-represent-source-data"></a>Erstellen eines Datasets zum Darstellen von Quelldaten
In diesem Schritt erstellen Sie ein Dataset, das für die Quelldaten steht.
1. Klicken Sie in der Strukturansicht auf **+** (Pluszeichen) und dann auf **Dataset**.

2. Wählen Sie **Azure SQL-Datenbank**, und klicken Sie auf **Fertig stellen**.

3. Eine neue Registerkarte zum Konfigurieren des Datasets wird angezeigt. Außerdem wird das Dataset in der Strukturansicht angezeigt. Ändern Sie im **Eigenschaftenfenster** den Namen des Datasets in **SourceDataset**.

4. Wechseln Sie zur Registerkarte **Verbindung**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie unter **Verknüpfter Dienst** die Option **AzureSqlDatabaseLinkedService**.
2. Wählen Sie unter **Tabelle** die Option **[dbo].[data_source_table]** .

### <a name="create-a-dataset-to-represent-data-copied-to-sink-data-store"></a>Erstellen Sie ein Dataset zum Darstellen von Daten, die in den Senkendatenspeicher kopiert werden.
In diesem Schritt erstellen Sie ein Dataset, das für die Daten steht, die aus dem Quelldatenspeicher kopiert werden. Sie haben den Container „adftutorial“ in Ihrer Azure Blob Storage-Instanz erstellt, als Sie die Voraussetzungen erfüllt haben. Erstellen Sie den Container, wenn er noch nicht vorhanden ist (oder) geben Sie den Namen eines bereits vorhandenen ein. In diesem Tutorial wird der Name der Ausgabedatei dynamisch mit dem Ausdruck `@CONCAT('Incremental-', pipeline().RunId, '.txt')` generiert.
1. Klicken Sie in der Strukturansicht auf **+** (Pluszeichen) und dann auf **Dataset**.

2. Wählen Sie **Azure Blob Storage**, und klicken Sie auf **Fertig stellen**.

3. Eine neue Registerkarte zum Konfigurieren des Datasets wird angezeigt. Außerdem wird das Dataset in der Strukturansicht angezeigt. Ändern Sie im **Eigenschaftenfenster** den Namen des Datasets in **SinkDataset**.

4. Wechseln Sie im Eigenschaftenfenster zur Registerkarte **Verbindung**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie unter **Verknüpfter Dienst** die Option **AzureStorageLinkedService**.
2. Geben Sie für den Teil **folder** von **filePath** die Zeichenfolge **adftutorial/incchgtracking** ein.
3. Geben Sie für den Teil **file** von **filePath** die Zeichenfolge **\@CONCAT('Incremental-', pipeline().RunId, '.txt')** ein.

### <a name="create-a-dataset-to-represent-change-tracking-data"></a>Erstellen eines Datasets zum Darstellen von Daten der Änderungsnachverfolgung
In diesem Schritt erstellen Sie ein Dataset zum Speichern der Version für die Änderungsnachverfolgung. Sie haben die Tabelle „table_store_ChangeTracking_version“ während der Erfüllung der Voraussetzungen erstellt.
1. Klicken Sie in der Strukturansicht auf **+** (Pluszeichen) und dann auf **Dataset**.
2. Wählen Sie **Azure SQL-Datenbank**, und klicken Sie auf **Fertig stellen**.
3. Eine neue Registerkarte zum Konfigurieren des Datasets wird angezeigt. Außerdem wird das Dataset in der Strukturansicht angezeigt. Ändern Sie im **Eigenschaftenfenster** den Namen des Datasets in **ChangeTrackingDataset**.
4. Wechseln Sie zur Registerkarte **Verbindung**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie unter **Verknüpfter Dienst** die Option **AzureSqlDatabaseLinkedService**.
2. Wählen Sie unter **Tabelle** die Option **[dbo].[table_store_ChangeTracking_version]** .
## <a name="create-a-pipeline-for-the-full-copy"></a>Erstellen einer Pipeline für den vollständigen Kopiervorgang
In diesem Schritt erstellen Sie eine Pipeline mit einer Kopieraktivität, bei der die gesamten Daten aus dem Quelldatenspeicher (Azure SQL-Datenbank) in den Zieldatenspeicher (Azure Blob Storage) kopiert werden.
1. Klicken Sie im Bereich auf der linken Seite auf **+** (Pluszeichen) und dann auf **Pipeline**.

2. Eine neue Registerkarte zum Konfigurieren der Pipeline wird angezeigt. Außerdem wird die Pipeline in der Strukturansicht angezeigt. Ändern Sie im **Eigenschaftenfenster** den Namen der Pipeline in **FullCopyPipeline**.

3. Erweitern Sie in der Toolbox **Aktivitäten** die Option **Datenfluss**, und ziehen Sie die **Copy**-Aktivität in die Oberfläche des Pipeline-Designers. Legen Sie den Namen auf **FullCopyActivity** fest.

4. Wechseln Sie zur Registerkarte **Quelle**, und wählen Sie im Feld **Source Dataset** (Quelldataset) die Option **SourceDataset**.

5. Wechseln Sie zur Registerkarte **Senke**, und wählen Sie im Feld **Sink Dataset** (Senkendataset) die Option **SinkDataset**.

6. Klicken Sie zum Überprüfen der Pipelinedefinition in der Symbolleiste auf **Überprüfen**. Vergewissern Sie sich, dass keine Validierungsfehler vorliegen. Schließen Sie den **Pipeline Validation Report** (Pipelineüberprüfungsbericht), indem Sie auf **>>** klicken.

7. Klicken Sie zum Veröffentlichen von Entitäten (verknüpfte Dienste, Datasets und Pipelines) auf **Veröffentlichen**. Warten Sie, bis die Veröffentlichung erfolgreich durchgeführt wurde.

8. Warten Sie, bis die Meldung **Erfolgreich veröffentlicht** angezeigt wird.

9. Sie können auch Benachrichtigungen anzeigen, indem Sie links auf die Schaltfläche **Benachrichtigungen anzeigen** klicken. Klicken Sie zum Schließen des Fensters mit den Benachrichtigungen auf **X**.

### <a name="run-the-full-copy-pipeline"></a>Ausführen der vollständigen Kopierpipeline
Klicken Sie auf der Symbolleiste der Pipeline auf **Trigger** und dann auf **Trigger Now** (Jetzt auslösen).

### <a name="monitor-the-full-copy-pipeline"></a>Überwachen der vollständigen Kopierpipeline
1. Klicken Sie links auf die Registerkarte **Überwachen**. Die Pipelineausführung wird in der Liste mit ihrem Status angezeigt. Klicken Sie zum Aktualisieren der Liste auf **Aktualisieren**. Mit den Links in der Spalte „Aktionen“ können Sie Aktivitätsausführungen anzeigen, die der Pipelineausführung zugeordnet sind, und die Pipeline erneut ausführen.

2. Wenn Sie mit der Pipelineausführung verknüpfte Aktivitätsausführungen anzeigen möchten, klicken Sie in der Spalte **Aktionen** auf den Link **View Activity Runs** (Aktivitätsausführungen anzeigen). Da die Pipeline nur eine Aktivität enthält, wird in der Liste nur ein Eintrag angezeigt. Klicken Sie auf **Pipelines**, um zurück zur Ansicht mit den Pipelineausführungen zu wechseln.

### <a name="review-the-results"></a>Überprüfen der Ergebnisse
Im Ordner `incchgtracking` des Containers `adftutorial` wird eine Datei mit dem Namen `incremental-<GUID>.txt` angezeigt.

Die Datei sollte die Daten aus Ihrer Datenbank enthalten:
```
1,aaaa,21
2,bbbb,24
3,cccc,20
4,dddd,26
5,eeee,22
```
## <a name="add-more-data-to-the-source-table"></a>Hinzufügen von weiteren Daten zur Quelltabelle
Führen Sie die folgende Abfrage für Ihre Datenbank aus, um eine Zeile hinzuzufügen und eine Zeile zu aktualisieren.
```sql
INSERT INTO data_source_table
(PersonID, Name, Age)
VALUES
(6, 'new','50');
UPDATE data_source_table
SET [Age] = '10', [name]='update' where [PersonID] = 1
```
## <a name="create-a-pipeline-for-the-delta-copy"></a>Erstellen einer Pipeline für die Deltakopie
In diesem Schritt erstellen Sie eine Pipeline mit den folgenden Aktivitäten und führen sie regelmäßig aus. Mit den **Lookup-Aktivitäten** wird die alte und neue SYS_CHANGE_VERSION aus Azure SQL-Datenbank abgerufen und an die Kopieraktivität übergeben. Die **Kopieraktivität** kopiert die eingefügten/aktualisierten/gelöschten Daten zwischen den beiden SYS_CHANGE_VERSION-Werten aus Azure SQL-Datenbank nach Azure Blob Storage. Die **Aktivität „Gespeicherte Prozedur“** aktualisiert den Wert von SYS_CHANGE_VERSION für die nächste Pipelineausführung.
1. Wechseln Sie auf der Data Factory-Benutzeroberfläche zur Registerkarte **Bearbeiten**. Klicken Sie im Bereich auf der linken Seite auf **+** (Pluszeichen) und dann auf **Pipeline**.

2. Eine neue Registerkarte zum Konfigurieren der Pipeline wird angezeigt. Außerdem wird die Pipeline in der Strukturansicht angezeigt. Ändern Sie im **Eigenschaftenfenster** den Namen der Pipeline in **IncrementalCopyPipeline**.

3. Erweitern Sie in der Toolbox **Aktivitäten** die Option **Allgemein**, und ziehen Sie die **Lookup**-Aktivität auf die Oberfläche des Pipeline-Designers. Legen Sie den Namen der Aktivität auf **LookupLastChangeTrackingVersionActivity** fest. Mit dieser Aktivität wird die Version für die Änderungsnachverfolgung des letzten Kopiervorgangs abgerufen, die in der Tabelle **table_store_ChangeTracking_version** gespeichert ist.

4. Wechseln Sie im **Eigenschaftenfenster** zu **Einstellungen**, und wählen Sie im Feld **Source Dataset** (Quelldataset) die Option **ChangeTrackingDataset**.

5. Ziehen Sie die **Lookup**-Aktivität aus der Toolbox **Aktivitäten** in die Oberfläche des Pipeline-Designers. Legen Sie den Namen der Aktivität auf **LookupCurrentChangeTrackingVersionActivity** fest. Mit dieser Aktivität wird die aktuelle Version der Änderungsnachverfolgung abgerufen.

6. Wechseln Sie im **Eigenschaftenfenster** zu **Einstellungen**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie im Feld **Source Dataset** (Quelldataset) die Option **SourceDataset**.
2. Wählen Sie unter **Abfrage verwenden** die Option **Abfrage**.
3. Geben Sie unter **Abfrage** die folgende SQL-Abfrage ein:
```sql
SELECT CHANGE_TRACKING_CURRENT_VERSION() as CurrentChangeTrackingVersion
```

7. Erweitern Sie in der Toolbox **Aktivitäten** die Option **Datenfluss**, und ziehen Sie die **Copy**-Aktivität in die Oberfläche des Pipeline-Designers. Legen Sie den Namen der Aktivität auf **IncrementalCopyActivity** fest. Mit dieser Aktivität werden die Daten, die zwischen der letzten Version und der aktuellen Version der Änderungsnachverfolgung angefallen sind, in den Zieldatenspeicher kopiert.

8. Wechseln Sie im **Eigenschaftenfenster** zur Registerkarte **Quelle**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie unter **Source Dataset** (Quelldataset) die Option **SourceDataset**.
2. Wählen Sie unter **Abfrage verwenden** die Option **Abfrage**.
3. Geben Sie unter **Abfrage** die folgende SQL-Abfrage ein:
```sql
select data_source_table.PersonID,data_source_table.Name,data_source_table.Age, CT.SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION from data_source_table RIGHT OUTER JOIN CHANGETABLE(CHANGES data_source_table, @{activity('LookupLastChangeTrackingVersionActivity').output.firstRow.SYS_CHANGE_VERSION}) as CT on data_source_table.PersonID = CT.PersonID where CT.SYS_CHANGE_VERSION <= @{activity('LookupCurrentChangeTrackingVersionActivity').output.firstRow.CurrentChangeTrackingVersion}
```

9. Wechseln Sie zur Registerkarte **Senke**, und wählen Sie im Feld **Sink Dataset** (Senkendataset) die Option **SinkDataset**.

10. **Verbinden Sie beide Lookup-Aktivitäten nacheinander mit der Copy-Aktivität**. Ziehen Sie die **grüne** Schaltfläche, die der **Lookup**-Aktivität zugeordnet ist, auf die **Copy**-Aktivität.

11. Ziehen Sie die **Stored Procedure**-Aktivität aus der Toolbox **Aktivitäten** in die Oberfläche des Pipeline-Designers. Legen Sie den Namen der Aktivität auf **StoredProceduretoUpdateChangeTrackingActivity** fest. Mit dieser Aktivität wird die Version der Änderungsnachverfolgung in der Tabelle **table_store_ChangeTracking_version** geändert.

12. Wechseln Sie zur Registerkarte *SQL-Konto*\*, und wählen Sie unter **Verknüpfter Dienst** die Option **AzureSqlDatabaseLinkedService**.

13. Wechseln Sie zur Registerkarte **Gespeicherte Prozedur**, und führen Sie die folgenden Schritte aus:
1. Wählen Sie unter **Name der gespeicherten Prozedur** den Namen **Update_ChangeTracking_Version**.
2. Wählen Sie die Option **Import parameter** (Importparameter).
3. Geben Sie im Abschnitt **Parameter der gespeicherten Prozedur** die folgenden Werte für den Parameter an:
| Name | type | Wert |
| ---- | ---- | ----- |
| CurrentTrackingVersion | Int64 | @{activity('LookupCurrentChangeTrackingVersionActivity').output.firstRow.CurrentChangeTrackingVersion} |
| TableName | String | @{activity('LookupLastChangeTrackingVersionActivity').output.firstRow.TableName} |

14. **Verbinden Sie die Copy-Aktivität mit der Stored Procedure-Aktivität**. Ziehen Sie die **grüne** Schaltfläche, die der Copy-Aktivität zugeordnet ist, auf die Stored Procedure-Aktivität.

15. Klicken Sie in der Symbolleiste auf **Überprüfen**. Vergewissern Sie sich, dass keine Validierungsfehler vorliegen. Schließen Sie das Fenster **Pipeline Validation Report** (Pipelineüberprüfungsbericht), indem Sie auf **>>** klicken.

16. Veröffentlichen Sie Entitäten (verknüpfte Dienste, Datasets und Pipelines) für den Data Factory-Dienst, indem Sie auf die Schaltfläche **Alle veröffentlichen** klicken. Warten Sie, bis die Meldung **Veröffentlichung erfolgreich** angezeigt wird.

### <a name="run-the-incremental-copy-pipeline"></a>Ausführen der inkrementellen Kopierpipeline
1. Klicken Sie auf der Symbolleiste der Pipeline auf **Trigger** und dann auf **Trigger Now** (Jetzt auslösen).

2. Wählen Sie im Fenster **Pipelineausführung** die Option **Fertig stellen** aus.
### <a name="monitor-the-incremental-copy-pipeline"></a>Überwachen der inkrementellen Kopierpipeline
1. Klicken Sie links auf die Registerkarte **Überwachen**. Die Pipelineausführung wird in der Liste mit ihrem Status angezeigt. Klicken Sie zum Aktualisieren der Liste auf **Aktualisieren**. Mit den Links in der Spalte **Aktionen** können Sie Aktivitätsausführungen anzeigen, die der Pipelineausführung zugeordnet sind, und die Pipeline erneut ausführen.

2. Wenn Sie mit der Pipelineausführung verknüpfte Aktivitätsausführungen anzeigen möchten, klicken Sie in der Spalte **Aktionen** auf den Link **View Activity Runs** (Aktivitätsausführungen anzeigen). Da die Pipeline nur eine Aktivität enthält, wird in der Liste nur ein Eintrag angezeigt. Klicken Sie auf **Pipelines**, um zurück zur Ansicht mit den Pipelineausführungen zu wechseln.

### <a name="review-the-results"></a>Überprüfen der Ergebnisse
Die zweite Datei ist im Ordner `incchgtracking` des Containers `adftutorial` enthalten.

Die Datei sollte nur die Deltadaten aus Ihrer Datenbank enthalten. Der Datensatz mit der Kennzeichnung `U` ist die aktualisierte Zeile in der Datenbank, und mit `I` wird die hinzugefügte Zeile angegeben.
```
1,update,10,2,U
6,new,50,1,I
```
Die ersten drei Spalten enthalten geänderte Daten aus „data_source_table“. Die letzten beiden Spalten enthalten die Metadaten aus der Systemtabelle für die Änderungsnachverfolgung. Die vierte Spalte enthält die SYS_CHANGE_VERSION für die einzelnen geänderten Zeilen. Die fünfte Spalte enthält den Vorgang: U = update (aktualisieren), I = insert (einfügen). Weitere Informationen zu den Informationen zur Änderungsnachverfolgung finden Sie unter [CHANGETABLE](/sql/relational-databases/system-functions/changetable-transact-sql).
```
==================================================================
PersonID Name Age SYS_CHANGE_VERSION SYS_CHANGE_OPERATION
==================================================================
1 update 10 2 U
6 new 50 1 I
```
## <a name="next-steps"></a>Nächste Schritte
Im folgenden Tutorial erfahren Sie mehr über das Kopieren von neuen und geänderten Dateien nur auf Grundlage ihres LastModifiedDate-Werts:
> [!div class="nextstepaction"]
> [Kopieren neuer Dateien basierend auf „lastmodifieddate“](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
| 79.989451 | 1,058 | 0.777239 | deu_Latn | 0.984028 |
4d1c9165fc44a3e8813a00f041bc9f994acff7b6 | 3,898 | md | Markdown | docs/vs-2015/code-quality/ca1700-do-not-name-enum-values-reserved.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-09-01T20:45:52.000Z | 2020-09-01T20:45:52.000Z | docs/vs-2015/code-quality/ca1700-do-not-name-enum-values-reserved.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/ca1700-do-not-name-enum-values-reserved.md | klmnden/visualstudio-docs.tr-tr | 82aa1370dab4ae413f5f924dad3e392ecbad0d02 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'CA1700: Sabit listesi değerlerini adlandırmayın 'ayrılmış' | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-code-analysis
ms.topic: reference
f1_keywords:
- CA1700
- DoNotNameEnumValuesReserved
helpviewer_keywords:
- DoNotNameEnumValuesReserved
- CA1700
ms.assetid: 7a7e01c3-ae7d-4c82-a646-91b58864a749
caps.latest.revision: 19
author: gewarren
ms.author: gewarren
manager: wpickett
ms.openlocfilehash: a5446d21b51f57b4a614e8931b154654bee99cd2
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/23/2019
ms.locfileid: "68189269"
---
# <a name="ca1700-do-not-name-enum-values-39reserved39"></a>CA1700: Sabit listesi değerlerini adlandırmayın 'ayrılmış'
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
|||
|-|-|
|TypeName|DoNotNameEnumValuesReserved|
|CheckId|CA1700|
|Kategori|Microsoft.Naming|
|Yeni Değişiklik|Yeni|
## <a name="cause"></a>Sebep
Bir numaralandırma üyesinin adı "ayrılmış" sözcüklerini içerir.
## <a name="rule-description"></a>Kural Tanımı
Bu kural, "ayrılmış" içeren bir ada sahip numaralandırma üyesi şu anda kullanılmamaktadır ancak yeniden adlandırılabilir veya gelecekteki bir sürüme kaldırıldığını varsayar. Üye kaldırma veya yeniden adlandırma bölünmesi farklıdır. Kullanıcıların bir üyesi olduğundan, yalnızca "ayrılmış" adını içerir ya da kullanıcılara Okuma veya belge tarafından uymayı güvenebilirsiniz yoksay beklememeniz gerekir. Ayrıca, nesne tarayıcılar ve akıllı bir tümleşik geliştirme ortamları ayrılmış üyeleri görünür olduğundan, bunlar hakkında gerçekten üyeleri kullanıldığını karışıklığa neden olabilir.
Ayrılmış bir üye kullanmak yerine, sabit listesi gelecek sürümünde yeni bir üye ekleyin. Eklenmesini değiştirmek için özgün üyelerinin değerlerini neden olmaz sürece çoğu durumda, yeni üyenin toplama bozucu bir değişiklik değil.
Bile özgün değerlerine özgün üyelerini korumak, çalışmaları sınırlı bir süre içinde bir üyenin bir değişiklik ektir. Öncelikle, yeni üye var olan kod yolları kullanan çağıranlar bozmadan döndürülemez bir `switch` (`Select` içinde [!INCLUDE[vbprvb](../includes/vbprvb-md.md)]) ifadesi, kapsayan tüm üye listesi ve bu bir özel durum oluşturur dönüş değeri Varsayılan durumda. İstemci kodu yansıma yöntemleri davranış değişikliği gibi işleyebilir değil, bir ikincil arz ettiği <xref:System.Enum.IsDefined%2A?displayProperty=fullName>. Buna uygun olarak, mevcut yöntemlerden döndürülecek yeni üyenin veya bilinen uygulama uyumsuzluğu nedeniyle zayıf yansıma kullanım gerçekleşir, yalnızca bölünemez çözümdür:
1. Özgün ve yeni üyelerini içeren yeni bir sabit listesi ekleyin.
2. Özgün numaralandırması ile işaretle <xref:System.ObsoleteAttribute?displayProperty=fullName> özniteliği.
Herhangi bir dışarıdan görülebilen türler ve özgün numaralandırma kullanıma üyeleri için aynı yordamı izleyin.
## <a name="how-to-fix-violations"></a>İhlaller Nasıl Düzeltilir?
Bu kural ihlalini düzeltmek için kaldırmak veya üye yeniden adlandırın.
## <a name="when-to-suppress-warnings"></a>Uyarılar Bastırıldığında
Daha önce sevk kitaplıkları veya şu anda kullanılan bir üye için bu kuraldan bir uyarıyı bastırmak güvenlidir.
## <a name="related-rules"></a>İlgili kuralları
[CA2217: Sabit listelerini FlagsAttribute ile işaretlemeyin](../code-quality/ca2217-do-not-mark-enums-with-flagsattribute.md)
[CA1712: Enum değerleri için tür adıyla önek kullanmayın](../code-quality/ca1712-do-not-prefix-enum-values-with-type-name.md)
[CA1028: Numaralandırma depolaması Int32 olmalıdır](../code-quality/ca1028-enum-storage-should-be-int32.md)
[CA1008: Numaralandırmalar sıfır değerine sahip olmalıdır](../code-quality/ca1008-enums-should-have-zero-value.md)
[CA1027: Sabit listelerini FlagsAttribute ile işaretleyin](../code-quality/ca1027-mark-enums-with-flagsattribute.md)
| 58.179104 | 705 | 0.812981 | tur_Latn | 0.999466 |
4d1cf4514f23852a930b31eaed5b5ff9a8cc78a2 | 3,107 | md | Markdown | articles/azure-video-analyzer/video-analyzer-for-media-docs/customize-person-model-overview.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-video-analyzer/video-analyzer-for-media-docs/customize-person-model-overview.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-video-analyzer/video-analyzer-for-media-docs/customize-person-model-overview.md | R0bes/azure-docs.de-de | 24540ed5abf9dd081738288512d1525093dd2938 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Anpassen eines Personenmodells in Azure Video Analyzer for Media (früher Video Indexer) – Azure
titleSuffix: Azure Video Analyzer for Media
description: Dieser Artikel vermittelt einen Überblick darüber, was ein Personenmodell in Azure Video Analyzer for Media (früher Video Indexer) ist und wie es angepasst werden kann.
services: azure-video-analyzer
author: anikaz
manager: johndeu
ms.topic: article
ms.subservice: azure-video-analyzer-media
ms.date: 05/15/2019
ms.author: kumud
ms.openlocfilehash: e3032e42e4c3e741ee20a113b5f5e0ac34c68876
ms.sourcegitcommit: 0af634af87404d6970d82fcf1e75598c8da7a044
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 06/15/2021
ms.locfileid: "112123174"
---
# <a name="customize-a-person-model-in-video-analyzer-for-media"></a>Anpassen eines Personenmodells in Azure Video Analyzer for Media
Azure Video Analyzer for Media (früher Video Indexer) unterstützt die Erkennung von Prominenten in Ihren Videos. Die Funktion zur Erkennung von Prominenten umfasst ungefähr eine Million Gesichter, die auf häufig angeforderten Datenquellen wie IMDB, Wikipedia und den wichtigsten LinkedIn-Influencern basieren. Gesichter, die vom Azure Video Analyzer for Media nicht erkannt werden, werden trotzdem erfasst, jedoch nicht benannt. Kunden können benutzerdefinierte Personenmodelle erstellen und Azure Video Analyzer for Media aktivieren, um Gesichter zu erkennen, die standardmäßig nicht erkannt werden. Kunden können diese Personenmodelle erstellen, indem sie den Namen einer Person mit Bilddateien des Gesichts der Person kombinieren.
Wenn Ihr Konto unterschiedliche Anwendungsfälle abdeckt, können Sie davon profitieren, dass Sie mehrere Personenmodelle pro Konto erstellen können. Wenn der Inhalt in Ihrem Konto z. B. in verschiedene Kanäle sortiert werden soll, möchten Sie vielleicht für jeden Kanal ein eigenes Personenmodell erstellen.
> [!NOTE]
> Jedes Personenmodell unterstützt bis zu 1 Million Personen und jedes Konto hat ein Limit von 50 Personenmodellen.
Sobald ein Modell erstellt wurde, können Sie es verwenden, indem Sie die Modell-ID eines bestimmten Personenmodells beim Hochladen/Indizieren oder erneuten Indizieren eines Videos angeben. Beim Trainieren eines neuen Gesichts für ein Video wird das bestimmte benutzerdefinierte Modell, dem das Video zugeordnet war, aktualisiert.
Wenn Sie keine Unterstützung für Mehrpersonenmodelle benötigen, weisen Sie Ihrem Video beim Hochladen/Indizieren oder erneuten Indizieren keine Personenmodell-ID zu. In diesem Fall verwendet Azure Video Analyzer for Media das Standardpersonenmodell in Ihrem Konto.
Sie können die Azure Video Analyzer for Media-Website verwenden, um Gesichter zu bearbeiten, die in einem Video erkannt wurden, und um mehrere benutzerdefinierte Personenmodelle in Ihrem Konto zu verwalten, wie im Thema [Anpassen eines Personenmodells mithilfe einer Website](customize-person-model-with-website.md) beschrieben. Sie können auch die API verwenden. Dies wird unter [Anpassen von Personenmodellen mithilfe von APIs](customize-person-model-with-api.md) beschrieben.
| 94.151515 | 735 | 0.831349 | deu_Latn | 0.997971 |
4d1d251170e6248098c53e018a5320b3a4161976 | 2,908 | md | Markdown | docs/serving/config-ha.md | bryeung/docs | fc2482c0f483ac27dd6c0106a386cabc70e18902 | [
"Apache-2.0"
] | null | null | null | docs/serving/config-ha.md | bryeung/docs | fc2482c0f483ac27dd6c0106a386cabc70e18902 | [
"Apache-2.0"
] | null | null | null | docs/serving/config-ha.md | bryeung/docs | fc2482c0f483ac27dd6c0106a386cabc70e18902 | [
"Apache-2.0"
] | null | null | null | ---
title: "Configuring high-availability components"
weight: 50
type: "docs"
---
Active/passive high availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is available to take over processing of the APIs that were being serviced by the controller that is now unavailable.
Active/passive HA in Knative is available through leader election, which can be enabled after Knative Serving control plane is installed.
When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. These controller instances compete to use a shared resource, known as the leader election lock. The instance of the controller that has access to the leader election lock resource at any given time is referred to as the leader.
HA functionality is available on Knative for the following components:
- `autoscaler-hpa`
- `controller`
- `activator`
HA functionality is not currently available for the following components:
- `autoscaler`
- `webhook`
- `queueproxy`
- `net-kourier`
## Enabling leader election
**NOTE:** Leader election functionality is still an alpha phase feature currently in development.
1. Enable leader election for the control plane controllers:
```
$ kubectl patch configmap/config-leader-election \
--namespace knative-serving \
--type merge \
--patch '{"data":{"enabledComponents": "controller,hpaautoscaler,certcontroller,istiocontroller,nscontroller"}}'
```
1. Restart the controllers:
```
$ kubectl rollout restart deployment <deployment-name> -n knative-serving
```
**NOTE:** You will experience temporary control plane downtime during this step.
When your controllers come back up, they should be running as leader-elected.
At this point, we've configured the controllers to use leader election and we
can scale the control plane up!
1. After the controllers have been configured to use leader election, the control plane can be scaled up:
```
$ kubectl -n knative-serving scale deployment <deployment-name> --replicas=2
```
## Scaling the control plane
The following serving controller deployments can be scaled up once leader election is enabled.
Standard deployments:
- `controller`
- `networking-istio` (if Istio is installed)
Optionally installed deployments:
- `autoscaler-hpa`
- `networking-ns-cert`
- `networking-certmanager`
Scale up the deployment(s):
```
$ kubectl -n knative-serving scale deployment <deployment-name> --replicas=2
```
- Setting `--replicas` to a value of `2` enables HA.
- You can use a higher value if you have a use case that requires more replicas of a deployment. For example, if you require a minimum of 3 `controller` deployments, set `--replicas=3`.
- Setting `--replicas=1` disables HA.
| 37.766234 | 372 | 0.772352 | eng_Latn | 0.998433 |
4d1d8ff53e285c51caa95b3a9af197a88c5a4b86 | 3,313 | md | Markdown | articles/billing-azure-subscription-past-due-balance.md | cloudmelon/azure-content | 4ecbe3ea06f39eb9a6e31dc0b4b6ed2aa0173778 | [
"CC-BY-3.0"
] | 1 | 2021-11-05T02:14:47.000Z | 2021-11-05T02:14:47.000Z | articles/billing-azure-subscription-past-due-balance.md | cloudmelon/azure-content | 4ecbe3ea06f39eb9a6e31dc0b4b6ed2aa0173778 | [
"CC-BY-3.0"
] | null | null | null | articles/billing-azure-subscription-past-due-balance.md | cloudmelon/azure-content | 4ecbe3ea06f39eb9a6e31dc0b4b6ed2aa0173778 | [
"CC-BY-3.0"
] | 1 | 2021-09-17T10:46:16.000Z | 2021-09-17T10:46:16.000Z | <properties
pageTitle="Why have you received a notification that your Azure subscription has a past due balance | Microsoft Azure"
description="Describes how to make payment if your Azure subscription has a past due balance"
services="billing"
documentationCenter=""
authors="genlin"
manager="jarrettr"
editor="meerak"
tags="billing"
/>
<tags
ms.service="billing"
ms.workload="na"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="06/22/2016"
ms.author="genli"/>
# Why have you received a notification that your Azure subscription has a past due balance?
If you are the Account Administrator for your Azure subscription, and have not made your payment on time, you will receive an email notification about your past due balance or you will see an alert either on [https://account.windowsazure.com](https://account.windowsazure.com) or [https://portal.azure.com](https://portal.azure.com).
If we are unable to process your payment for some reason, you might receive an email with a message similar to:
**We have been unable to charge your credit card for your subscription. To prevent any service interruptions, please update your payment information.**
Make sure you are getting notification emails. If you are not getting notification emails, you may be using different email addresses for login and Account Admin . The email address in the Account Administrator’s profile is used by Microsoft to notify you about important billing-related updates about the subscription. We recommend that you specify a contact email address that you check regularly.
## What will happen if you forget to pay
The service will be canceled and your resources will no longer be available. Any data will be deleted 90 days after the service is terminated.
## What can you do to resolve the issue
Pay your outstanding balance in full.
**Scenario 1**: If you are on an invoice mode of payment, send your payment to the location listed at the bottom of your invoice. If you need help, contact [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
**Scenario 2**: If the bill is not paid because the credit card on file was declined, or has expired, use another credit card or payment method for the subscriptions, or contact your bank to resolve the issue. If you update the payment method, all outstanding charges against that payment method will automatically be settled immediately. This includes outstanding charges for Azure as well as any other Microsoft services for which that card was used.
For instructions about how to change the payment method in Azure, see [How to change the credit card used to pay for an Azure subscription](./billing-how-to-change-credit-card.md). You must log on as an Account Administrator to make this change.
**Scenario 3**: If the bill notice was not received because the Account Administrator has left the company or changed roles, contact [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to resolve the issue.
> [AZURE.NOTE] If your Azure subscription becomes disabled, you can use the steps in this article to re-enable it: [What do I do if my Azure subscription is cancelled?](billing-subscription-become-disable.md)
| 70.489362 | 453 | 0.777543 | eng_Latn | 0.998034 |
4d1f72e2516a8ed81903cc0d21f583114e6da66e | 1,281 | md | Markdown | README.md | jnlon/jbackup | 453f80a8cf8c6cdc7d1aa18af4181cedbda875aa | [
"MIT"
] | null | null | null | README.md | jnlon/jbackup | 453f80a8cf8c6cdc7d1aa18af4181cedbda875aa | [
"MIT"
] | null | null | null | README.md | jnlon/jbackup | 453f80a8cf8c6cdc7d1aa18af4181cedbda875aa | [
"MIT"
] | null | null | null | # jbackup
A small shell script for building and executing rsync runs in a generic way.
## Usage
jbackup takes a single argument - a path to a 'profile' which is just a shell
script with the following variables and methods defined:
```
# Arguments passed directly to rsync
RSYNC_SRC="..."
RSYNC_DEST="..."
RSYNC_OPTIONS="..."
# Shell functions called before and after rsync executes
pre_command() {
# mount drives or prepare for sync
}
post_command() {
# umount drives or perform cleanup commands
}
```
See [profiles/](profiles/) in this repo for profile examples. I recommend
placing your custom profiles in `$HOME/.jbackup/`
## Behavior
jbackup must be run as root. jbackup will abort executing your profile if
either `rsync`, `pre_command`, or `post_command` return a non-zero value.
I recommend using `set -e` in your post/post commands so jbackup aborts when an
error is encountered.
## Motivation
jbackup is useful when rsync runs need an initial setup and teardown process,
for example mounting and unmounting drives. jbackup was created to centralize a
number of standalone scripts I had accumulated that executed nearly the same
commands but with slightly different arguments. With jbackup this functionality
now exists in simple standardized profile modules.
| 29.790698 | 79 | 0.774395 | eng_Latn | 0.998609 |
4d1f8c8713cc6c7a5dc2742557582cd57a0625db | 2,852 | md | Markdown | docs/generalStepDefs/the_should_be_enabled.md | ReadyTalk/cukefarm | a5f297ab7687d047876a5fd5f2b518b5703eab64 | [
"MIT"
] | 19 | 2015-03-10T20:25:14.000Z | 2021-08-16T09:04:41.000Z | docs/generalStepDefs/the_should_be_enabled.md | ReadyTalk/cukefarm | a5f297ab7687d047876a5fd5f2b518b5703eab64 | [
"MIT"
] | 53 | 2015-03-18T19:58:33.000Z | 2018-06-11T23:50:52.000Z | docs/generalStepDefs/the_should_be_enabled.md | ReadyTalk/cukefarm | a5f297ab7687d047876a5fd5f2b518b5703eab64 | [
"MIT"
] | 10 | 2015-11-03T17:18:29.000Z | 2018-07-14T08:47:38.000Z | * [the "\_\_\_" should be enabled](the-"\_\_\_"-should-be-enabled)
* [regex](regex)
* [execution](execution)
* [with enabled button](with-enabled-button)
* [with disabled button](with-disabled-button)
* [timing](timing)
# the "\_\_\_" should be enabled
## regex
regex should match 'the "\_\_\_" button...'
```
verifyStepMatch('the "Save Configuration" button should be enabled');
```
regex should match 'the "\_\_\_" field...'
```
verifyStepMatch('the "Username" field should be enabled');
```
regex should match 'the "\_\_\_" drop down list...'
```
verifyStepMatch('the "Timezone" drop down list should be enabled');
```
regex should match a step without an element type
```
verifyStepMatch('the "Save Button" should be enabled');
```
regex should match '...should be enabled'
```
verifyStepMatch('the "Save Configuration" button should be enabled');
```
regex should match '...should not be enabled'
```
verifyStepMatch('the "Save Configuration" button should not be enabled');
```
regex should capture the element name, element type, and the expectation
```
verifyStepCaptures('the "Save Configuration" button should be enabled', 'Save Configuration', ' button', 'should');
```
regex should capture the element name, the expectation, and a blank string if no element type is provided
```
verifyStepCaptures('the "Save Configuration Button" should be enabled', 'Save Configuration Button', '', 'should');
```
## execution
### with enabled button
with enabled button should succeed if it expects the button to be enabled
```
return executeStep('the "Test" button should be enabled', function() {
expect(currentStepResult.status).to.equal(Cucumber.Status.PASSED);
});
```
with enabled button should fail if it expects the button to be disabled
```
return executeStep('the "Test" button should not be enabled', function() {
expect(currentStepResult.status).to.equal(Cucumber.Status.FAILED);
});
```
### with disabled button
with disabled button should fail if it expects the button to be enabled
```
return executeStep('the "Test" button should be enabled', function() {
expect(currentStepResult.status).to.equal(Cucumber.Status.FAILED);
});
```
with disabled button should succeed if it expects the button to be disabled
```
return executeStep('the "Test" button should not be enabled', function() {
expect(currentStepResult.status).to.equal(Cucumber.Status.PASSED);
});
```
## timing
timing should wait for the element to be present before verifying
```
return browser.driver.executeScript("setTimeout( function() { $('div#test').append('<button id=\"testButton\">Button</button>'); }, 200 )").then(() => {
return executeStep('the "Test" button should be enabled', function() {
expect(currentStepResult.status).to.equal(Cucumber.Status.PASSED);
});
});
```
| 25.017544 | 152 | 0.703366 | eng_Latn | 0.915414 |
4d208c07bddb44824e9e59a34280b3345a6ee7a9 | 3,529 | md | Markdown | README.md | compcake/compcake | 98f1dd4056dff865b3d5d8fbe6a85a9df6413d46 | [
"MIT"
] | null | null | null | README.md | compcake/compcake | 98f1dd4056dff865b3d5d8fbe6a85a9df6413d46 | [
"MIT"
] | 1 | 2017-07-11T03:06:39.000Z | 2017-07-11T03:06:39.000Z | README.md | compcake/compcake | 98f1dd4056dff865b3d5d8fbe6a85a9df6413d46 | [
"MIT"
] | null | null | null | # CompCake
CakePHP based BJCP Competition Management Software
## Introduction
The purpose of this software is to make running a BJCP competition, well, a
piece of cake.
If you have run a competition before, you probably still have recurring
nightmares about lost entries, entries not paid, being forced into a particular
way of assigning entries to flights, sorting scoresheets, mailing scoresheets,
labels that won't print in some browsers. I could go on, but I think you
already have the point by now, so if you're still interested, do read on.
CakePHP is the basis this software is built on. I went with CakePHP for
the framework because a competition software program is inherently
data-intensive, and CakePHP has a lot of scaffolding which keeps the code
compact, simple, and easy to understand while being straightforward to
develop. This choice means it won't be JavaScript heavy (almost everything
runs on the server, with the exception of BJCP style guideline string
mappings), and therefore not as pretty and snappy as some Web2.0 applications,
but I'm more concerned about getting the job done, and getting it done well.
**This software is still in active development.** The user experience is
nearly complete (enough to accept entries) but the competition staff and
admin modules are still in development. Since the competitions I am running
will be judging in September you can expect a lot of active development
over the next month and a half as I get those parts done!
## Setup
1. You need a PayPal account to receive payment of entry fees. We use the new
OAuth-based REST APIs, so you will need to go to developer.paypal.com, sign in
with your PayPal credentials, and generate an "Application" client token and
secret. It does not matter what you name the application in PayPal. The
client token and secret will go into the config. More details are in the
comments in the app.php file.
2. If you want to use reCAPTCHA to avoid bots trying to spam your system
by creating drone login accounts, you will need to set up a reCAPTCHA
site key and secret. The instructions on how to do this are in the app.php
file. ReCAPTCHA is minimially invasive to your users and very effective as
a security feature, so I highly suggest implementing it on your site.
3. Clone the repo on your server where you want it to live:
```
git clone https://github.com/compcake/compcake.git
```
4. Use composer to fetch all the project dependencies.
```
cd compcake/
composer update
```
5. Edit the main configuration file, config/app.php. There are plenty of
comments to guide you. Be careful to follow the PHP syntax as you make changes.
You should only change the configuration before the "DO NOT CHANGE ANYTHING..."
comment. One possible exception is if you want to test PayPal payments without
actually sending real money, you may want to enable the Sandbox to do your
testing, and then switch it back to the main URL once you are satisfied that
things are working.
6. Navigate to the config/schema subdirectory and build the SqlLite database.
```
cd config/schema
make all
```
7. Navigate to the website and login as the test admin account. The default
login credentials are: [email protected], password letmein
CHANGE THE PASSWORD. I mean it! You probably also want to update the admin
user details to something better than the default Test Admin.
8. You will probably want to edit the rules, which live at:
```
src/Template/Pages/Rules.ctp
```
That's it... you should be ready to start accepting entries.
| 41.034884 | 79 | 0.784925 | eng_Latn | 0.999344 |
4d209bcad31161249fe06ae61e12c82aa2c0eb0b | 3,303 | md | Markdown | docs/framework/unmanaged-api/hosting/ihostsecuritymanager-getsecuritycontext-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/ihostsecuritymanager-getsecuritycontext-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/ihostsecuritymanager-getsecuritycontext-method.md | michaelgoin/dotnet-docs | 89e583ef512e91fb9910338acd151ee1352a3799 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "IHostSecurityManager::GetSecurityContext Method"
ms.custom: ""
ms.date: "03/30/2017"
ms.prod: ".net-framework"
ms.reviewer: ""
ms.suite: ""
ms.technology:
- "dotnet-clr"
ms.tgt_pltfrm: ""
ms.topic: "reference"
api_name:
- "IHostSecurityManager.GetSecurityContext"
api_location:
- "mscoree.dll"
api_type:
- "COM"
f1_keywords:
- "IHostSecurityManager::GetSecurityContext"
helpviewer_keywords:
- "GetSecurityContext method [.NET Framework hosting]"
- "IHostSecurityManager::GetSecurityContext method [.NET Framework hosting]"
ms.assetid: 958970d6-f6a2-4b84-b32a-f555cbaf8f61
topic_type:
- "apiref"
caps.latest.revision: 10
author: "rpetrusha"
ms.author: "ronpet"
manager: "wpickett"
---
# IHostSecurityManager::GetSecurityContext Method
Gets the requested [IHostSecurityContext](../../../../docs/framework/unmanaged-api/hosting/ihostsecuritycontext-interface.md) from the host.
## Syntax
```
HRESULT GetSecurityContext (
[in] EContextType eContextType,
[out] IHostSecurityContext** ppSecurityContext
);
```
#### Parameters
`eContextType`
[in] One of the [EContextType](../../../../docs/framework/unmanaged-api/hosting/econtexttype-enumeration.md) values, indicating what type of security context to return.
`ppSecurityContext`
[out] The address of an interface pointer to the `IHostSecurityContext` of `eContextType`.
## Return Value
|HRESULT|Description|
|-------------|-----------------|
|S_OK|`GetSecurityContext` returned successfully.|
|HOST_E_CLRNOTAVAILABLE|The common language runtime (CLR) has not been loaded into a process, or the CLR is in a state in which it cannot run managed code or process the call successfully.|
|HOST_E_TIMEOUT|The call timed out.|
|HOST_E_NOT_OWNER|The caller does not own the lock.|
|HOST_E_ABANDONED|An event was canceled while a blocked thread or fiber was waiting on it.|
|E_FAIL|An unknown catastrophic failure occurred. When a method returns E_FAIL, the CLR is no longer usable within the process. Subsequent calls to hosting methods return HOST_E_CLRNOTAVAILABLE.|
## Remarks
A host can control all code access to thread tokens by both the CLR and user code. It can also ensure that complete security context information is passed across asynchronous operations or code points with restricted code access. `IHostSecurityContext` encapsulates this security context information, which is opaque to the CLR. The CLR captures this information and moves it across thread pool worker item dispatch, finalizer execution, and module and class construction.
## Requirements
**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).
**Header:** MSCorEE.h
**Library:** Included as a resource in MSCorEE.dll
**.NET Framework Versions:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## See Also
[EContextType Enumeration](../../../../docs/framework/unmanaged-api/hosting/econtexttype-enumeration.md)
[IHostSecurityContext Interface](../../../../docs/framework/unmanaged-api/hosting/ihostsecuritycontext-interface.md)
[IHostSecurityManager Interface](../../../../docs/framework/unmanaged-api/hosting/ihostsecuritymanager-interface.md)
| 42.896104 | 475 | 0.736603 | eng_Latn | 0.729255 |
4d20b24e2700dca07d11683dc83a883dd6f2b674 | 1,330 | md | Markdown | _pages/about.md | YuchenWang2015/YuchenWang2015.github.io | e54d735f90561c9eaf85db93092bb1e9fa4d075f | [
"MIT"
] | null | null | null | _pages/about.md | YuchenWang2015/YuchenWang2015.github.io | e54d735f90561c9eaf85db93092bb1e9fa4d075f | [
"MIT"
] | null | null | null | _pages/about.md | YuchenWang2015/YuchenWang2015.github.io | e54d735f90561c9eaf85db93092bb1e9fa4d075f | [
"MIT"
] | null | null | null | ---
permalink: /
title: "About me"
excerpt: "About me"
author_profile: true
redirect_from:
- /about/
- /about.html
---
I am Yuchen Wang, currently a third year PhD student in computational chemistry at Kansas State University.
Ma Anshan 🇨🇳 -> Wuhu 🇨🇳 -> Kansas 🇺🇸 -> 😊 still on my way to travel around the world 🌎.
Contact me via email: [email protected].
Education
======
- 2019 - Present PhD student, Kansas State Univeristy Research advisor: Prof. Christine Aikens.
- 2015 - 2019 Bachelor of Science in chemistry, Anhui Normal University Research advisor: Prof. Sufan Wang.
- 2012 - 2015 Ma Anshan No.2 High school.
PhD Research project
======
- Nonadiabatic dynamics simulations on studying optical propeties of Gold and Silver Nanoparticles.
Others
======
Language: Native Chinese speaker/Working proficiency in English/Limited French proficiency
🎥: Huge fan of Ang Lee/Wes Anderson
🎼: Kubert Leung/Ding Wei/Franz Schubert
⚽️:Robin van Persie/Frenkie de Jong/club: Liverpool F.C.
For more info
------
More info about configuring academicpages can be found in [the guide](https://academicpages.github.io/markdown/). The [guides for the Minimal Mistakes theme](https://mmistakes.github.io/minimal-mistakes/docs/configuration/) (which this theme was forked from) might also be helpful.
| 30.227273 | 281 | 0.73985 | eng_Latn | 0.881342 |
4d20ea708389e62d2068b298053efb26d71f9603 | 15,207 | md | Markdown | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-non-gallery.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-10-25T13:24:48.000Z | 2019-10-25T13:24:48.000Z | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-non-gallery.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/manage-apps/application-sign-in-problem-federated-sso-non-gallery.md | brentnewbury/azure-docs | 52da5a910db122fc92c877a6f62c54c32c7f3b31 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-07-31T21:22:35.000Z | 2019-07-31T21:22:35.000Z | ---
title: Problems signing in to a non-gallery application configured for federated single sign-on | Microsoft Docs
description: Guidance for the specific problems you may face when signing in to an application configured for SAML-based federated single sign-on with Azure AD
services: active-directory
documentationcenter: ''
author: msmimart
manager: CelesteDG
ms.assetid:
ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 07/11/2017
ms.author: mimart
ms.reviewer: asteen
ms.collection: M365-identity-device-management
---
# Problems signing in to a non-gallery application configured for federated single sign-on
To troubleshoot the sign-in issues below, we recommend you follow these suggestion to get better diagnosis and automate the resolution steps:
- Install the [My Apps Secure Browser Extension](access-panel-extension-problem-installing.md) to help Azure Active Directory (Azure AD) to provide better diagnosis and resolutions when using the testing experience in the Azure portal.
- Reproduce the error using the testing experience in the app configuration page in the Azure portal. Learn more on [Debug SAML-based single sign-on applications](../develop/howto-v1-debug-saml-sso-issues.md)
## Application not found in directory
*Error AADSTS70001: Application with Identifier `https://contoso.com` was not found in the directory*.
**Possible cause**
The Issuer attribute sends from the application to Azure AD in the SAML request doesn’t match the Identifier value configured in the application Azure AD.
**Resolution**
Ensure that the `Issuer` attribute in the SAML request matches the Identifier value configured in Azure AD. If you use the [testing experience](../develop/howto-v1-debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow these steps.
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin.**
2. Open the **Azure Active Directory Extension** by clicking **All services** at the top of the main left-hand navigation menu.
3. Type in **“Azure Active Directory**” in the filter search box and select the **Azure Active Directory** item.
4. click **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
5. click **All Applications** to view a list of all your applications.
* If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications.**
6. Select the application you want to configure single sign-on.
7. Once the application loads, click the **Single sign-on** from the application’s left-hand navigation menu.
8. Once the application loads, open **Basic SAML configuration**. Verify that the value in the Identifier textbox matches the value for the identifier value displayed in the error.
## The reply address does not match the reply addresses configured for the application.
*Error AADSTS50011: The reply address `https://contoso.com` does not match the reply addresses configured for the application*
**Possible cause**
The AssertionConsumerServiceURL value in the SAML request doesn't match the Reply URL value or pattern configured in Azure AD. The AssertionConsumerServiceURL value in the SAML request is the URL you see in the error.
**Resolution**
Ensure that the `Issuer` attribute in the SAML request matches the Identifier value configured in Azure AD. If you use the [testing experience](../develop/howto-v1-debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow these steps.
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin.**
2. Open the **Azure Active Directory Extension** by clicking **All services** at the top of the main left-hand navigation menu.
3. Type in **“Azure Active Directory**” in the filter search box and select the **Azure Active Directory** item.
4. click **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
5. click **All Applications** to view a list of all your applications.
* If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications.**
6. Select the application you want to configure single sign-on
7. Once the application loads, click the **Single sign-on** from the application’s left-hand navigation menu.
8. Once the application loads, open **Basic SAML configuration**. Verify or update the value in the Reply URL textbox to match the `AssertionConsumerServiceURL` value in the SAML request.
After you've updated the Reply URL value in Azure AD, and it matches the value sent by the application in the SAML request, you should be able to sign in to the application.
## User not assigned a role
*Error AADSTS50105: The signed in user `brian\@contoso.com` is not assigned to a role for the application*
**Possible cause**
The user has not been granted access to the application in Azure AD.
**Resolution**
To assign one or more users to an application directly, follow the steps below. If you use the [testing experience](../develop/howto-v1-debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow these steps.
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
2. Open the **Azure Active Directory Extension** by clicking **All services** at the top of the main left-hand navigation menu.
3. Type in **“Azure Active Directory**” in the filter search box and select the **Azure Active Directory** item.
4. click **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
5. click **All Applications** to view a list of all your applications.
* If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications.**
6. Select the application you want to assign a user to from the list.
7. Once the application loads, click **Users and Groups** from the application’s left-hand navigation menu.
8. Click the **Add** button on top of the **Users and Groups** list to open the **Add Assignment** pane.
9. click the **Users and groups** selector from the **Add Assignment** pane.
10. Type in the **full name** or **email address** of the user you are interested in assigning into the **Search by name or email address** search box.
11. Hover over the **user** in the list to reveal a **checkbox**. Click the checkbox next to the user’s profile photo or logo to add your user to the **Selected** list.
12. **Optional:** If you would like to **add more than one user**, type in another **full name** or **email address** into the **Search by name or email address** search box, and click the checkbox to add this user to the **Selected** list.
13. When you are finished selecting users, click the **Select** button to add them to the list of users and groups to be assigned to the application.
14. **Optional:** click the **Select Role** selector in the **Add Assignment** pane to select a role to assign to the users you have selected.
15. Click the **Assign** button to assign the application to the selected users.
After a short period of time, the users you have selected be able to launch these applications using the methods described in the solution description section.
## Not a valid SAML Request
*Error AADSTS75005: The request is not a valid Saml2 protocol message.*
**Possible cause**
Azure AD doesn’t support the SAML Request sent by the application for Single Sign-on. Some common issues are:
- Missing required fields in the SAML request
- SAML request encoded method
**Resolution**
1. Capture SAML request. follow the tutorial on [how to debug SAML-based single sign-on to applications in Azure AD](https://docs.microsoft.com/azure/active-directory/develop/active-directory-saml-debugging) to learn how to capture the SAML request.
2. Contact the application vendor and share:
- SAML request
- [Azure AD Single Sign-on SAML protocol requirements](https://docs.microsoft.com/azure/active-directory/develop/active-directory-single-sign-on-protocol-reference)
The application vendor should validate that they support the Azure AD SAML implementation for single sign-on.
## Misconfigured application
*Error AADSTS650056: Misconfigured application. This could be due to one of the following: The client has not listed any permissions for 'AAD Graph' in the requested permissions in the client's application registration. Or, The admin has not consented in the tenant. Or, Check the application identifier in the request to ensure it matches the configured client application identifier. Please contact your admin to fix the configuration or consent on behalf of the tenant.*.
**Possible cause**
The `Issuer` attribute sent from the application to Azure AD in the SAML request doesn’t match the Identifier value configured for the application in Azure AD.
**Resolution**
Ensure that the `Issuer` attribute in the SAML request matches the Identifier value configured in Azure AD. If you use the [testing experience](../develop/howto-v1-debug-saml-sso-issues.md) in the Azure portal with the My Apps Secure Browser Extension, you don't need to manually follow these steps:
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin**.
1. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
1. Type **“Azure Active Directory"** in the filter search box and select the **Azure Active Directory** item.
1. Select **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
1. Select **All Applications** to view a list of all your applications.
If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications**.
1. Select the application you want to configure for single sign-on.
1. Once the application loads, open **Basic SAML configuration**. Verify that the value in the Identifier textbox matches the value for the identifier value displayed in the error.
## Certificate or key not configured
Error AADSTS50003: No signing key configured.
**Possible cause**
The application object is corrupted and Azure AD doesn’t recognize the certificate configured for the application.
**Resolution**
To delete and create a new certificate, follow the steps below:
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin.**
2. Open the **Azure Active Directory Extension** by clicking **All services** at the top of the main left-hand navigation menu.
3. Type in **“Azure Active Directory**” in the filter search box and select the **Azure Active Directory** item.
4. click **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
5. click **All Applications** to view a list of all your applications.
* If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications.**
6. Select the application you want to configure single sign-on.
7. Once the application loads, click the **Single sign-on** from the application’s left-hand navigation menu.
8. click **Create new certificate** under the **SAML signing Certificate** section.
9. Select Expiration date. Then, click **Save.**
10. Check **Make new certificate active** to override the active certificate. Then, click **Save** at the top of the pane and accept to activate the rollover certificate.
11. Under the **SAML Signing Certificate** section, click **remove** to remove the **Unused** certificate.
## SAML Request not present in the request
*Error AADSTS750054: SAMLRequest or SAMLResponse must be present as query string parameters in HTTP request for SAML Redirect binding.*
**Possible cause**
Azure AD wasn’t able to identify the SAML request within the URL parameters in the HTTP request. This can happen if the application is not using HTTP redirect binding when sending the SAML request to Azure AD.
**Resolution**
The application needs to send the SAML request encoded into the location header using HTTP redirect binding. For more information about how to implement it, read the section HTTP Redirect Binding in the [SAML protocol specification document](https://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf).
## Azure AD is sending the token to an incorrect endpoint
**Possible cause**
During single sign-on, if the sign-in request does not contain an explicit reply URL (Assertion Consumer Service URL) then Azure AD will select any of the configured rely URLs for that application. Even if the application has an explicit reply URL configured, the user may be to redirected https://127.0.0.1:444.
When the application was added as a non-gallery app, Azure Active Directory created this reply URL as a default value. This behavior has changed and Azure Active Directory no longer adds this URL by default.
**Resolution**
Delete the unused reply URLs configured for the application.
1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator** or **Co-admin**.
2. Open the **Azure Active Directory Extension** by selecting **All services** at the top of the main left-hand navigation menu.
3. Type **“Azure Active Directory"** in the filter search box and select the **Azure Active Directory** item.
4. Select **Enterprise Applications** from the Azure Active Directory left-hand navigation menu.
5. Select **All Applications** to view a list of all your applications.
If you do not see the application you want show up here, use the **Filter** control at the top of the **All Applications List** and set the **Show** option to **All Applications**.
6. Select the application you want to configure for single sign-on.
7. Once the application loads, open **Basic SAML configuration**. In the **Reply URL (Assertion Consumer Service URL)**, delete unused or default Reply URLs created by the system. For example, `https://127.0.0.1:444/applications/default.aspx`.
## Problem when customizing the SAML claims sent to an application
To learn how to customize the SAML attribute claims sent to your application, see [Claims mapping in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-claims-mapping) for more information.
## Next steps
[Azure AD Single Sign-on SAML protocol requirements](https://docs.microsoft.com/azure/active-directory/develop/active-directory-single-sign-on-protocol-reference)
| 55.298182 | 474 | 0.763661 | eng_Latn | 0.989601 |
4d212803cc76fbe2ae2ab313b9200eae5aca625f | 496 | md | Markdown | src/work/index.md | bcrumpton/bcrumpton | 57f0bafd2e302497fb40932e4438541da4d555d9 | [
"MIT"
] | null | null | null | src/work/index.md | bcrumpton/bcrumpton | 57f0bafd2e302497fb40932e4438541da4d555d9 | [
"MIT"
] | null | null | null | src/work/index.md | bcrumpton/bcrumpton | 57f0bafd2e302497fb40932e4438541da4d555d9 | [
"MIT"
] | null | null | null | ---
title: 👨💻 Work
layout: layout.liquid
---
<h1 class="my-12 text-6xl font-bold">{{ title }}</h1>
<div class="work">
{%- for work in collections.work reversed -%}
<div class="work__item mt-6">
<h4 class="text-3xl md:text-4xl font-bold">
<a href="{{work.url}}">{{ work.data.title }}</a>
<small>{{ work.data.completeDate }}</small>
</h4>
<p>{{ work.data.tech }}</p>
</div>
{%- endfor -%}
</div> | 22.545455 | 64 | 0.483871 | eng_Latn | 0.489236 |
4d21383f2a46ba44ccc03a7beab77e1f15183461 | 139 | md | Markdown | doc/b-examples/k-qp.md | andreadelprete/pinocchio | 6fa1c7d5502629ee126f84f1a05471815fba30f4 | [
"BSD-2-Clause-FreeBSD"
] | 1 | 2021-06-22T15:42:45.000Z | 2021-06-22T15:42:45.000Z | doc/b-examples/k-qp.md | andreadelprete/pinocchio | 6fa1c7d5502629ee126f84f1a05471815fba30f4 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | doc/b-examples/k-qp.md | andreadelprete/pinocchio | 6fa1c7d5502629ee126f84f1a05471815fba30f4 | [
"BSD-2-Clause-FreeBSD"
] | 1 | 2021-03-21T09:14:26.000Z | 2021-03-21T09:14:26.000Z | # QP (normal forces) unilateral contact dynamics (if we can write it concise enough)
## Python
\include k-qp.py
## C++
\include k-qp.cpp
| 17.375 | 84 | 0.705036 | eng_Latn | 0.995088 |
4d217c414991dc6eb6fc364671d35c3f9b573f5d | 8,654 | md | Markdown | doc/protobuf.md | wuyongchn/trpc | ee492ce8c40ef6a156f298aa295ed4e1fe4b4545 | [
"BSD-2-Clause"
] | null | null | null | doc/protobuf.md | wuyongchn/trpc | ee492ce8c40ef6a156f298aa295ed4e1fe4b4545 | [
"BSD-2-Clause"
] | null | null | null | doc/protobuf.md | wuyongchn/trpc | ee492ce8c40ef6a156f298aa295ed4e1fe4b4545 | [
"BSD-2-Clause"
] | null | null | null | # Protobuf 如何实现 RPC
Protobuf 全称是 Protocol Buffer,是 Google 开发的一套跨语言、跨平台、易拓展,用于结构化数据的序列化的工具库,和 XML 类似,但是更小、更快、更简单。Protobuf 预留了 RPC 接口,但是没有实现网络交互,需要用户自己实现。
利用 Protobuf 实现 RPC,可以将代码分为以下三部分:
1. Protobuf 自动生成的代码
1. RPC 框架(网络通信,服务注册于发现,负载均衡等)
1. 业务逻辑
## proto
下面是利用 Protobuf 定义的一个 EchoService,方法为 Echo
```
syntax="proto2";
package example;
option cc_generic_services = true;
message EchoRequest {
required string message = 1;
};
message EchoResponse {
required string message = 1;
};
service EchoService {
rpc Echo(EchoRequest) returns (EchoResponse);
};
```
The protoc compiler automatically generates two files, echo.pb.h and echo.pb.cc. The `service EchoService` definition produces two classes, EchoService and EchoService_Stub, which are used on the server side and the client side respectively.
EchoService inherits from the ::google::protobuf::Service class. The main member of Service is CallMethod, which invokes the corresponding method according to the |method| parameter.
```
class PROTOBUF_EXPORT Service {
public:
/*****/
virtual void CallMethod(const MethodDescriptor* method,
RpcController* controller, const Message* request,
Message* response, Closure* done) = 0;
/*****/
};
```
EchoService inherits from Service and declares the Echo method, which must be implemented by a subclass. The CallMethod method is already implemented; inside EchoService it dispatches to the Echo method.
```
class EchoService : public ::google::protobuf::Service {
protected:
// This class should be treated as an abstract interface.
inline EchoService() {};
public:
virtual ~EchoService();
typedef EchoService_Stub Stub;
static const ::google::protobuf::ServiceDescriptor* descriptor();
virtual void Echo(::google::protobuf::RpcController* controller,
const ::example::EchoRequest* request,
::example::EchoResponse* response,
::google::protobuf::Closure* done);
// implements Service ----------------------------------------------
const ::google::protobuf::ServiceDescriptor* GetDescriptor();
void CallMethod(const ::google::protobuf::MethodDescriptor* method,
::google::protobuf::RpcController* controller,
const ::google::protobuf::Message* request,
::google::protobuf::Message* response,
::google::protobuf::Closure* done);
const ::google::protobuf::Message& GetRequestPrototype(
const ::google::protobuf::MethodDescriptor* method) const;
const ::google::protobuf::Message& GetResponsePrototype(
const ::google::protobuf::MethodDescriptor* method) const;
private:
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(EchoService);
};
```
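For illustration, a server implements the service by subclassing the generated EchoService and overriding Echo. The class name EchoServiceImpl below is hypothetical and not part of the generated code; only EchoService, EchoRequest, and EchoResponse come from echo.pb.h. A minimal sketch:
```
#include "echo.pb.h"  // generated by protoc from echo.proto
// Hypothetical server-side implementation of the generated EchoService.
// The generated CallMethod() will dispatch incoming calls to this Echo().
class EchoServiceImpl : public ::example::EchoService {
 public:
  void Echo(::google::protobuf::RpcController* controller,
            const ::example::EchoRequest* request,
            ::example::EchoResponse* response,
            ::google::protobuf::Closure* done) override {
    // Business logic: echo the request message back to the caller.
    response->set_message(request->message());
    // Tell the framework the call has finished (done may be null for
    // purely synchronous use).
    if (done != nullptr) {
      done->Run();
    }
  }
};
```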
Therefore, after the server receives the serialized Request data from the network, it first deserializes it and then calls EchoService's CallMethod to handle it. When processing is finished, the Response is serialized and transmitted back to the client over the network. The rough flow is as follows:
```
Recv(Request);
Deserialize(Request);
CallMethod(Request, Response);
Serialize(Response);
Send(Response);
```
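A hedged C++ sketch of that server-side flow is shown below. The request/response byte strings stand in for whatever transport the RPC framework provides (sockets, message queues, and so on); only the protobuf calls are real API, and the function name HandleOneCall is invented for the example.
```
#include <memory>
#include <string>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/message.h>
#include <google/protobuf/service.h>
// Sketch only: handle one already-received call synchronously.
// |service| is e.g. an EchoServiceImpl instance, |method| identifies Echo.
void HandleOneCall(::google::protobuf::Service* service,
                   const ::google::protobuf::MethodDescriptor* method,
                   const std::string& request_bytes,
                   std::string* response_bytes) {
  // Create request/response messages of the right type for this method.
  std::unique_ptr<::google::protobuf::Message> request(
      service->GetRequestPrototype(method).New());
  std::unique_ptr<::google::protobuf::Message> response(
      service->GetResponsePrototype(method).New());
  // Deserialize(Request)
  request->ParseFromString(request_bytes);
  // CallMethod(Request, Response): dispatches to Echo() for EchoService.
  service->CallMethod(method, /*controller=*/nullptr,
                      request.get(), response.get(), /*done=*/nullptr);
  // Serialize(Response); the caller then sends the bytes back to the client.
  response->SerializeToString(response_bytes);
}
```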
EchoService_Stub inherits from EchoService and wraps the EchoService interface for use by the client, which simplifies the client-side logic: the client can call EchoService's methods as if they were local method calls. All methods of EchoService_Stub are already implemented; its Echo method simply calls the CallMethod method of the RpcChannel class, which transports the local request to the server.
```
class EchoService_Stub : public EchoService {
public:
EchoService_Stub(::google::protobuf::RpcChannel* channel);
EchoService_Stub(::google::protobuf::RpcChannel* channel,
::google::protobuf::Service::ChannelOwnership ownership);
~EchoService_Stub();
inline ::google::protobuf::RpcChannel* channel() { return channel_; }
// implements EchoService ------------------------------------------
void Echo(::google::protobuf::RpcController* controller,
const ::example::EchoRequest* request,
::example::EchoResponse* response,
::google::protobuf::Closure* done);
private:
::google::protobuf::RpcChannel* channel_;
bool owns_channel_;
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(EchoService_Stub);
};
// ------- .cc ------------
void EchoService_Stub::Echo(::google::protobuf::RpcController* controller,
const ::example::EchoRequest* request,
::example::EchoResponse* response,
::google::protobuf::Closure* done) {
channel_->CallMethod(descriptor()->method(0),
controller, request, response, done);
}
```
## RpcChannel
Protobuf's RpcChannel is a pure virtual class that must be implemented by a subclass. It can be understood as a channel that connects the server and client sides of an RPC service.
```
// Abstract interface for an RPC channel. An RpcChannel represents a
// communication line to a Service which can be used to call that Service's
// methods. The Service may be running on another machine. Normally, you
// should not call an RpcChannel directly, but instead construct a stub Service
// wrapping it. Example:
// RpcChannel* channel = new MyRpcChannel("remotehost.example.com:1234");
// MyService* service = new MyService::Stub(channel);
// service->MyMethod(request, &response, callback);
class PROTOBUF_EXPORT RpcChannel {
public:
inline RpcChannel() {}
virtual ~RpcChannel();
// Call the given method of the remote service. The signature of this
// procedure looks the same as Service::CallMethod(), but the requirements
// are less strict in one important way: the request and response objects
// need not be of any specific class as long as their descriptors are
// method->input_type() and method->output_type().
virtual void CallMethod(const MethodDescriptor* method,
RpcController* controller, const Message* request,
Message* response, Closure* done) = 0;
private:
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(RpcChannel);
};
```
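As a concrete (hypothetical) illustration, the sketch below implements CallMethod for an in-process "loopback" channel that dispatches directly to a local Service instance instead of going over the network; a real channel would serialize the request here, send the bytes to the server, wait for the reply, and parse it into the response. The LoopbackChannel name and design are illustrative only.
```
#include <google/protobuf/descriptor.h>
#include <google/protobuf/service.h>
#include "echo.pb.h"
// Hypothetical channel that "transports" calls to an in-process service.
class LoopbackChannel : public ::google::protobuf::RpcChannel {
 public:
  explicit LoopbackChannel(::google::protobuf::Service* service)
      : service_(service) {}
  void CallMethod(const ::google::protobuf::MethodDescriptor* method,
                  ::google::protobuf::RpcController* controller,
                  const ::google::protobuf::Message* request,
                  ::google::protobuf::Message* response,
                  ::google::protobuf::Closure* done) override {
    // A real implementation would serialize |request|, send it to the
    // server, receive the reply, and parse it into |response|.
    service_->CallMethod(method, controller, request, response, done);
  }
 private:
  ::google::protobuf::Service* service_;  // not owned
};
// Client-side usage: the stub makes the call look like a local method call.
// EchoServiceImpl is the hypothetical implementation sketched earlier.
//   EchoServiceImpl impl;
//   LoopbackChannel channel(&impl);
//   ::example::EchoService_Stub stub(&channel);
//   ::example::EchoRequest req;  req.set_message("hello");
//   ::example::EchoResponse resp;
//   stub.Echo(nullptr, &req, &resp, nullptr);
```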
## RpcController
The main purpose of RpcController is to provide a way to manipulate settings specific to each RPC call and to find out about RPC-level errors.
```
// An RpcController mediates a single method call. The primary purpose of
// the controller is to provide a way to manipulate settings specific to the
// RPC implementation and to find out about RPC-level errors.
//
// The methods provided by the RpcController interface are intended to be a
// "least common denominator" set of features which we expect all
// implementations to support. Specific implementations may provide more
// advanced features (e.g. deadline propagation).
class PROTOBUF_EXPORT RpcController {
public:
inline RpcController() {}
virtual ~RpcController();
// Client-side methods ---------------------------------------------
// These calls may be made from the client side only. Their results
// are undefined on the server side (may crash).
// Resets the RpcController to its initial state so that it may be reused in
// a new call. Must not be called while an RPC is in progress.
virtual void Reset() = 0;
// After a call has finished, returns true if the call failed. The possible
// reasons for failure depend on the RPC implementation. Failed() must not
// be called before a call has finished. If Failed() returns true, the
// contents of the response message are undefined.
virtual bool Failed() const = 0;
// If Failed() is true, returns a human-readable description of the error.
virtual std::string ErrorText() const = 0;
// Advises the RPC system that the caller desires that the RPC call be
// canceled. The RPC system may cancel it immediately, may wait awhile and
// then cancel it, or may not even cancel the call at all. If the call is
// canceled, the "done" callback will still be called and the RpcController
// will indicate that the call failed at that time.
virtual void StartCancel() = 0;
// Server-side methods ---------------------------------------------
// These calls may be made from the server side only. Their results
// are undefined on the client side (may crash).
// Causes Failed() to return true on the client side. "reason" will be
// incorporated into the message returned by ErrorText(). If you find
// you need to return machine-readable information about failures, you
// should incorporate it into your response protocol buffer and should
// NOT call SetFailed().
virtual void SetFailed(const std::string& reason) = 0;
// If true, indicates that the client canceled the RPC, so the server may
// as well give up on replying to it. The server should still call the
// final "done" callback.
virtual bool IsCanceled() const = 0;
// Asks that the given callback be called when the RPC is canceled. The
// callback will always be called exactly once. If the RPC completes without
// being canceled, the callback will be called after completion. If the RPC
// has already been canceled when NotifyOnCancel() is called, the callback
// will be called immediately.
//
// NotifyOnCancel() must be called no more than once per request.
virtual void NotifyOnCancel(Closure* callback) = 0;
private:
GOOGLE_DISALLOW_EVIL_CONSTRUCTORS(RpcController);
};
``` | 40.064815 | 203 | 0.691703 | eng_Latn | 0.856709 |
4d21e2c8c1e56b6b249954835f46a723fc246d3f | 522 | md | Markdown | README.md | FE-Dev/FEN | a629c72e15753558c431ab2128f90791ddb45752 | [
"MIT"
] | null | null | null | README.md | FE-Dev/FEN | a629c72e15753558c431ab2128f90791ddb45752 | [
"MIT"
] | null | null | null | README.md | FE-Dev/FEN | a629c72e15753558c431ab2128f90791ddb45752 | [
"MIT"
] | null | null | null | # FEN
Front-End Notes
## Concept
2. Front End Notes
1. diary
2. book
1. 《book1》
If a book is expected to need a lot of notes, you can create a folder for it, without the book-title brackets (the brackets above are only to make the titles easy to recognize). If there won't be much, a single MD file is enough. You can also start with one file and expand it later when it is no longer enough. Example: [NetEase Micro-Degree notes](https://github.com/li-xinyang/FEND_Note)
1. chapter1
2. chapter2
2. 《book2》
3. topic
This directory layout was inspired by [this repo](https://github.com/lzx1022/FE-notebook). With this layout, notes from books, articles, small tips, and so on can all be filed by category, no matter how short they are. Learn something every day, reflect on something every day.
4. NetEase FE Note (a sub-notebook, a dedicated notebook used to record the notes for one specific course. Could a git branch be used for this? Sometimes a single branch can fool a lot of eyes.)
5. imooc (imooc.com; splitting it out this way feels like it overlaps with diary, though)
4d2210524885abd38990c276f6ef81037d94f8ca | 263 | md | Markdown | README.md | aliefmulyadin/tubes_apbo | 61b8fbbefbaf48169c408bb2c3d2c07c4667ca35 | [
"MIT"
] | null | null | null | README.md | aliefmulyadin/tubes_apbo | 61b8fbbefbaf48169c408bb2c3d2c07c4667ca35 | [
"MIT"
] | null | null | null | README.md | aliefmulyadin/tubes_apbo | 61b8fbbefbaf48169c408bb2c3d2c07c4667ca35 | [
"MIT"
] | null | null | null | # Object-Oriented Analysis and Design Final Project (In Progress)
Development of a Web-Based Motorcycle Repair Shop Application
1. 1119101003 - Sandi Yusuf N
2. 1119101008 - Danny Aulia N
3. 1119101009 - M Alief Mulyadin
4. 1119101014 - Aldi Nurzaman
5. 1119101015 - Ricky Rafi H
| 23.909091 | 61 | 0.771863 | ind_Latn | 0.742619 |
4d221095ea7efcbe872ec4e4ae1e0c561a09ea5f | 2,928 | md | Markdown | _posts/2017-04-09-programmers.md | ZavixDragon/zavixdragon.github.io | c1f84c09027a3b538d04a725d82367becea6bb99 | [
"MIT"
] | null | null | null | _posts/2017-04-09-programmers.md | ZavixDragon/zavixdragon.github.io | c1f84c09027a3b538d04a725d82367becea6bb99 | [
"MIT"
] | null | null | null | _posts/2017-04-09-programmers.md | ZavixDragon/zavixdragon.github.io | c1f84c09027a3b538d04a725d82367becea6bb99 | [
"MIT"
] | null | null | null | ---
layout: post
title: Programmers, Make A Damn Blog!
date: 2017-04-09 17:12
author: Noah Reinagel
comments: true
categories: [Blog, Meta]
---
All the highly successful programmers write blogs. You want to be a successful programmer, don’t you? Do it for your confidence, do it for your craft, do it for your career, and do it for fun!
----
## You don’t know what the hell you are talking about, if you can’t write a post about it
----
Everyone claims to have skills. Heck! On their resumes people put the most obscure skills they can imagine themselves having. You can find script kiddies that think they know html because they modified a tumblr theme. The truth is revealed when someone writes and talks about it. That’s what blogs are all about, to present your opinions in written form.
After you write about something, you will speak and argue about it coming off as more credible. You want your company to listen when you tell them your ideas. You want your teammates to accept those ideas, don’t you? Then write about it!
----
## Keep learning and get good
----
This will improve your writing skills. For all those programmers saying “I just code, I don’t need to be good at writing”: unless you work alone and never have to communicate with your bosses, the hell you don’t! I bet you correspond every day with the people you work with, and if you want people to say "YES!" to your design decisions and ideas, then git gud!
**The fastest way to learn where you're wrong is when you present ideas to be destroyed by other people.** Technical minds will be quick to criticize your ideas online, and this is exactly what you want. **Either you will defend those ideas and become more confident, or they will be burned and better ideas will rise from those ashes.**
You must always be learning new things to keep writing in your blog. This will encourage you to learn more and think more deeply about those topics.
----
## This is your future
----
If you are happy striving at your 8-hour job, driving through that traffic, and making just enough money to be okay, then go on and skip this section.
I see that you want to be paid the big bucks, giving the talks, and signing copies of your book. This is the path that Yegor Bugayenko, Martin Fowler, Uncle Bob, and many other great programmers have taken. They all started blogging, and then found that they had many ideas to give so they wrote books and gave talks.
You are already writing your future book. The concepts you write here will form into chapters in your best seller. They will be the concepts in the talks that you give. You will get to know the other great programmers and get invited to speak and to contribute to important repositories.
----
## Conclusion
----
You will become confident, as you are blazing a trail ahead of you towards future success. It will make you good at writing, programming, and life. **Write a damn blog!** | 66.545455 | 389 | 0.76571 | eng_Latn | 0.999933 |
4d223c0403cac7322515632f2c53be64f086a922 | 29,325 | md | Markdown | articles/azure-monitor/app/ilogger.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/app/ilogger.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/app/ilogger.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Eksplorowanie dzienników śledzenia .NET z ILogger — Azure Application Insights
description: Przykłady użycia dostawcy usługi Azure Application Insights ILogger z aplikacjami ASP.NET Core i konsolowymi.
ms.service: azure-monitor
ms.subservice: application-insights
ms.topic: conceptual
author: mrbullwinkle
ms.author: mbullwin
ms.date: 02/19/2019
ms.reviewer: mbullwin
ms.openlocfilehash: b538196467ba1d69e679a111ca313f922738b048
ms.sourcegitcommit: f52ce6052c795035763dbba6de0b50ec17d7cd1d
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 01/24/2020
ms.locfileid: "76716029"
---
# <a name="applicationinsightsloggerprovider-for-net-core-ilogger-logs"></a>ApplicationInsightsLoggerProvider for .NET Core ILogger logs
ASP.NET Core supports a logging API that works with different kinds of built-in and third-party logging providers. Logging is done by calling **Log()** or a variant of it on *ILogger* instances. This article demonstrates how to use *ApplicationInsightsLoggerProvider* to capture ILogger logs in console and ASP.NET Core applications. It also describes how ApplicationInsightsLoggerProvider integrates with other Application Insights telemetry.
To learn more, see [Logging in ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/logging).
## <a name="aspnet-core-applications"></a>ASP.NET Core applications
ApplicationInsightsLoggerProvider is enabled by default in [Microsoft.ApplicationInsights.AspNetCore SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.7.1 (and later) when you turn on regular Application Insights monitoring through either of the standard methods:
- By calling the **UseApplicationInsights** extension method on IWebHostBuilder
- By calling the **AddApplicationInsightsTelemetry** extension method on IServiceCollection
ILogger logs that are captured by ApplicationInsightsLoggerProvider are subject to the same configuration as any other telemetry that's collected. They have the same set of TelemetryInitializers and TelemetryProcessors, use the same TelemetryChannel, and are correlated and sampled in the same way as other telemetry. If you use version 2.7.1 or later, no action is needed to capture ILogger logs.
Only *Warning* and above ILogger logs (from all [categories](https://docs.microsoft.com/aspnet/core/fundamentals/logging/?view=aspnetcore-3.1#log-category)) are sent to Application Insights by default. But you can [apply filters to modify this behavior](#control-logging-level). Additional steps are required to capture ILogger logs from **Program.cs** or **Startup.cs**. (See [Capture ILogger logs from Startup.cs and Program.cs in ASP.NET Core apps](#capture-ilogger-logs-from-startupcs-and-programcs-in-aspnet-core-apps).)
If you use an earlier version of Microsoft.ApplicationInsights.AspNetCore SDK, or you just want to use ApplicationInsightsLoggerProvider without any other Application Insights monitoring, use the following procedure:
1. Install the NuGet package:
```xml
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Logging.ApplicationInsights" Version="2.9.1" />
</ItemGroup>
```
1. Modify **Program.cs** as shown below:
```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureLogging(
builder =>
{
// Providing an instrumentation key here is required if you're using
// standalone package Microsoft.Extensions.Logging.ApplicationInsights
// or if you want to capture logs from early in the application startup
// pipeline from Startup.cs or Program.cs itself.
builder.AddApplicationInsights("ikey");
// Optional: Apply filters to control what logs are sent to Application Insights.
// The following configures LogLevel Information or above to be sent to
// Application Insights for all categories.
builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
("", LogLevel.Information);
}
);
}
```
The code in step 2 configures `ApplicationInsightsLoggerProvider`. The following code shows a sample Controller class that uses `ILogger` to send logs. The logs are captured by Application Insights.
```csharp
public class ValuesController : ControllerBase
{
    private readonly ILogger _logger;
public ValuesController(ILogger<ValuesController> logger)
{
_logger = logger;
}
// GET api/values
[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
// All the following logs will be picked up by Application Insights.
// and all of them will have ("MyKey", "MyValue") in Properties.
using (_logger.BeginScope(new Dictionary<string, object> { { "MyKey", "MyValue" } }))
{
_logger.LogWarning("An example of a Warning trace..");
_logger.LogError("An example of an Error level message");
}
return new string[] { "value1", "value2" };
}
}
```
### <a name="capture-ilogger-logs-from-startupcs-and-programcs-in-aspnet-core-apps"></a>Capture ILogger logs from Startup.cs and Program.cs in ASP.NET Core apps
> [!NOTE]
> In ASP.NET Core 3.0 and later, it is no longer possible to inject `ILogger` in Startup.cs and Program.cs. For more information, see https://github.com/aspnet/Announcements/issues/353.
The new ApplicationInsightsLoggerProvider can capture logs from early in the application-startup pipeline. Although ApplicationInsightsLoggerProvider is automatically enabled in Application Insights (starting with version 2.7.1), it doesn't have an instrumentation key set up until later in the pipeline. So only logs from **Controller**/other classes will be captured. To capture every log starting with **Program.cs** and **Startup.cs** itself, you must explicitly enable an instrumentation key for ApplicationInsightsLoggerProvider. Also, *TelemetryConfiguration* is not fully set up when you log from **Program.cs** or **Startup.cs** itself. So those logs will have a minimum configuration that uses InMemoryChannel, no sampling, and no standard telemetry initializers or processors.
The following examples demonstrate this capability with **Program.cs** and **Startup.cs**.
#### <a name="example-programcs"></a>Example Program.cs
```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Logging;
public class Program
{
public static void Main(string[] args)
{
var host = CreateWebHostBuilder(args).Build();
var logger = host.Services.GetRequiredService<ILogger<Program>>();
// This will be picked up by AI
logger.LogInformation("From Program. Running the host now..");
host.Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureLogging(
builder =>
{
// Providing an instrumentation key here is required if you're using
// standalone package Microsoft.Extensions.Logging.ApplicationInsights
// or if you want to capture logs from early in the application startup
// pipeline from Startup.cs or Program.cs itself.
builder.AddApplicationInsights("ikey");
// Adding the filter below to ensure logs of all severity from Program.cs
// is sent to ApplicationInsights.
builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
(typeof(Program).FullName, LogLevel.Trace);
// Adding the filter below to ensure logs of all severity from Startup.cs
// is sent to ApplicationInsights.
builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
(typeof(Startup).FullName, LogLevel.Trace);
}
);
}
```
#### <a name="example-startupcs"></a>Example Startup.cs
```csharp
public class Startup
{
    private readonly ILogger _logger;
public Startup(IConfiguration configuration, ILogger<Startup> logger)
{
Configuration = configuration;
_logger = logger;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry();
// The following will be picked up by Application Insights.
_logger.LogInformation("Logging from ConfigureServices.");
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
// The following will be picked up by Application Insights.
_logger.LogInformation("Configuring for Development environment");
app.UseDeveloperExceptionPage();
}
else
{
// The following will be picked up by Application Insights.
_logger.LogInformation("Configuring for Production environment");
}
app.UseMvc();
}
}
```
## <a name="migrate-from-the-old-applicationinsightsloggerprovider"></a>Migrate from the old ApplicationInsightsLoggerProvider
Microsoft.ApplicationInsights.AspNet SDK versions before 2.7.1 supported a logging provider that is now obsolete. This provider was enabled through the **AddApplicationInsights()** extension method of ILoggerFactory. We recommend that you migrate to the new provider, which involves two steps:
1. Remove the *ILoggerFactory.AddApplicationInsights()* call from the **Startup.Configure()** method to avoid double logging.
2. Reapply any filtering rules in code, because they will not be respected by the new provider. Overloads of *ILoggerFactory.AddApplicationInsights()* took minimum LogLevel or filter functions. With the new provider, filtering is part of the logging framework itself; it's not done by the Application Insights provider. So you should remove any filters that were supplied through *ILoggerFactory.AddApplicationInsights()* overloads, and provide filtering rules as described in [Control logging level](#control-logging-level). If you use the *appsettings.json* file to filter logging, it will continue to work with the new provider, because both use the same provider alias, *ApplicationInsights*.
You can still use the old provider. (It will be removed only in a major version change to 3.*xx*.) But we recommend that you migrate to the new provider for the following reasons:
- The previous provider lacks support for [log scopes](https://docs.microsoft.com/aspnet/core/fundamentals/logging/?view=aspnetcore-2.2#log-scopes). In the new provider, properties from the scope are automatically added as custom properties to the collected telemetry.
- Logs can now be captured much earlier in the application-startup pipeline. Logs from the **Program** and **Startup** classes can now be captured.
- With the new provider, filtering is done at the framework level itself. You can filter logs to the Application Insights provider in the same way as for other providers, including built-in providers like console, debug, and so on. You can also apply the same filters to multiple providers.
- In ASP.NET Core (2.0 and later), the recommended way to [enable logging providers](https://github.com/aspnet/Announcements/issues/255) is by using the ILoggingBuilder extension methods in **Program.cs** itself.
> [!Note]
> The new provider is available for applications that target NETSTANDARD2.0 or later. If your application targets older versions of .NET Core, such as .NET Core 1.1, or if it targets the .NET Framework, continue to use the old provider.
## <a name="console-application"></a>Console application
> [!NOTE]
> There is a new Application Insights SDK called [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) that can be used to enable Application Insights (ILogger and other Application Insights telemetry) for any console application. It is recommended to use this package and the associated instructions from [here](../../azure-monitor/app/worker-service.md).
The following code shows a sample console application configured to send ILogger traces to Application Insights.
Packages installed:
```xml
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.1.0" />
<PackageReference Include="Microsoft.Extensions.Logging.ApplicationInsights" Version="2.9.1" />
<PackageReference Include="Microsoft.Extensions.Logging.Console" Version="2.1.0" />
</ItemGroup>
```
```csharp
class Program
{
static void Main(string[] args)
{
// Create the DI container.
IServiceCollection services = new ServiceCollection();
// Channel is explicitly configured to do flush on it later.
var channel = new InMemoryChannel();
services.Configure<TelemetryConfiguration>(
(config) =>
{
config.TelemetryChannel = channel;
}
);
// Add the logging pipelines to use. We are using Application Insights only here.
services.AddLogging(builder =>
{
// Optional: Apply filters to configure LogLevel Trace or above is sent to
// Application Insights for all categories.
builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
("", LogLevel.Trace);
builder.AddApplicationInsights("--YourAIKeyHere--");
});
// Build ServiceProvider.
IServiceProvider serviceProvider = services.BuildServiceProvider();
ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
// Begin a new scope. This is optional.
using (logger.BeginScope(new Dictionary<string, object> { { "Method", nameof(Main) } }))
{
logger.LogInformation("Logger is working"); // this will be captured by Application Insights.
}
// Explicitly call Flush() followed by sleep is required in Console Apps.
// This is to ensure that even if application terminates, telemetry is sent to the back-end.
channel.Flush();
Thread.Sleep(1000);
}
}
```
This example uses the standalone package `Microsoft.Extensions.Logging.ApplicationInsights`. By default, this configuration uses the "bare minimum" TelemetryConfiguration for sending data to Application Insights. Bare minimum means that InMemoryChannel is the channel that's used. There's no sampling and no standard TelemetryInitializers. This behavior can be overridden for a console application, as the following example shows.
Install this additional package:
```xml
<PackageReference Include="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel" Version="2.9.1" />
```
The following section shows how to override the default TelemetryConfiguration by using the **services.Configure\<TelemetryConfiguration>()** method. This example sets up `ServerTelemetryChannel` and sampling. It adds a custom ITelemetryInitializer to the TelemetryConfiguration.
```csharp
// Create the DI container.
IServiceCollection services = new ServiceCollection();
var serverChannel = new ServerTelemetryChannel();
services.Configure<TelemetryConfiguration>(
(config) =>
{
config.TelemetryChannel = serverChannel;
config.TelemetryInitializers.Add(new MyTelemetryInitalizer());
config.DefaultTelemetrySink.TelemetryProcessorChainBuilder.UseSampling(5);
serverChannel.Initialize(config);
}
);
// Add the logging pipelines to use. We are adding Application Insights only.
services.AddLogging(loggingBuilder =>
{
loggingBuilder.AddApplicationInsights();
});
........
........
// Explicitly calling Flush() followed by sleep is required in Console Apps.
// This is to ensure that even if the application terminates, telemetry is sent to the back end.
serverChannel.Flush();
Thread.Sleep(1000);
```
## <a name="control-logging-level"></a>Control logging level
The ASP.NET Core *ILogger* infrastructure has a built-in mechanism to apply [log filtering](https://docs.microsoft.com/aspnet/core/fundamentals/logging/?view=aspnetcore-2.2#log-filtering). This lets you control the logs that are sent to each registered provider, including the Application Insights provider. The filtering can be done either in configuration (typically by using an *appsettings.json* file) or in code. This facility is provided by the framework itself; it's not specific to the Application Insights provider.
The following examples apply filter rules to ApplicationInsightsLoggerProvider.
### <a name="create-filter-rules-in-configuration-with-appsettingsjson"></a>Create filter rules in configuration with appsettings.json
For ApplicationInsightsLoggerProvider, the provider alias is `ApplicationInsights`. The following section of *appsettings.json* configures logs for *Warning* and above from all categories, and *Error* and above from categories that start with "Microsoft", to be sent to `ApplicationInsightsLoggerProvider`.
```json
{
"Logging": {
"ApplicationInsights": {
"LogLevel": {
"Default": "Warning",
"Microsoft": "Error"
}
},
"LogLevel": {
"Default": "Warning"
}
},
"AllowedHosts": "*"
}
```
### <a name="create-filter-rules-in-code"></a>Create filter rules in code
The following code snippet configures logs for *Warning* and above from all categories, and for *Error* and above from categories that start with "Microsoft", to be sent to `ApplicationInsightsLoggerProvider`. This configuration is the same as the *appsettings.json* configuration in the previous section.
```csharp
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureLogging(logging =>
logging.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
("", LogLevel.Warning)
.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
("Microsoft", LogLevel.Error);
```
## <a name="frequently-asked-questions"></a>Frequently asked questions
### <a name="what-are-the-old-and-new-versions-of-applicationinsightsloggerprovider"></a>What are the old and new versions of ApplicationInsightsLoggerProvider?
[Microsoft.ApplicationInsights.AspNet SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) included a built-in ApplicationInsightsLoggerProvider (Microsoft.ApplicationInsights.AspNetCore.Logging.ApplicationInsightsLoggerProvider), which was enabled through **ILoggerFactory** extension methods. This provider was marked obsolete in version 2.7.1 and will be removed completely in the next major version change. The [Microsoft.ApplicationInsights.AspNetCore](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) package itself is not obsolete; it is required to enable monitoring of requests, dependencies, and so on.
The suggested alternative is the new standalone package [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights), which contains an improved ApplicationInsightsLoggerProvider (Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider) and extension methods on ILoggerBuilder for enabling it.
[Microsoft.ApplicationInsights.AspNet SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.7.1 takes a dependency on the new package and enables ILogger capture automatically.
### <a name="why-are-some-ilogger-logs-shown-twice-in-application-insights"></a>Why are some ILogger logs shown twice in Application Insights?
Duplication can occur if you have the older (now obsolete) version of ApplicationInsightsLoggerProvider enabled by calling `AddApplicationInsights` on `ILoggerFactory`. Check whether your **Configure** method has the following, and remove it:
```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Warning);
// ..other code.
}
```
If you see double logging when debugging from Visual Studio, set `EnableDebugLogger` to *false* in the code that enables Application Insights, as follows. This duplication and fix are relevant only when you're debugging the application.
```csharp
public void ConfigureServices(IServiceCollection services)
{
ApplicationInsightsServiceOptions options = new ApplicationInsightsServiceOptions();
options.EnableDebugLogger = false;
services.AddApplicationInsightsTelemetry(options);
// ..other code.
}
```
### <a name="i-updated-to-microsoftapplicationinsightsaspnet-sdkhttpswwwnugetorgpackagesmicrosoftapplicationinsightsaspnetcore-version-271-and-logs-from-ilogger-are-captured-automatically-how-do-i-turn-off-this-feature-completely"></a>I updated to [Microsoft.ApplicationInsights.AspNet SDK](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.7.1, and logs from ILogger are captured automatically. How do I turn off this feature completely?
See the [Control logging level](../../azure-monitor/app/ilogger.md#control-logging-level) section to learn how to filter logs in general. To turn off ApplicationInsightsLoggerProvider, use `LogLevel.None`:
**In code:**
```csharp
builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>
("", LogLevel.None);
```
**In configuration:**
```json
{
"Logging": {
"ApplicationInsights": {
"LogLevel": {
"Default": "None"
}
}
```
### <a name="why-do-some-ilogger-logs-not-have-the-same-properties-as-others"></a>Why do some ILogger logs not have the same properties as others?
Application Insights captures and sends ILogger logs by using the same TelemetryConfiguration that's used for every other telemetry. But there's an exception. By default, TelemetryConfiguration is not fully set up when you log from **Program.cs** or **Startup.cs**. Logs from these places won't have the default configuration, so they won't be running all the TelemetryInitializers and TelemetryProcessors.
### <a name="im-using-the-standalone-package-microsoftextensionsloggingapplicationinsights-and-i-want-to-log-some-additional-custom-telemetry-manually-how-should-i-do-that"></a>I'm using the standalone package Microsoft.Extensions.Logging.ApplicationInsights, and I want to log some additional custom telemetry manually. How should I do that?
When you use the standalone package, `TelemetryClient` is not injected into the DI container, so you need to create a new instance of `TelemetryClient` and use the same configuration that the logger provider uses, as the following code shows. This ensures that the same configuration is used for all custom telemetry, as well as for telemetry from ILogger.
```csharp
public class MyController : ApiController
{
    // This TelemetryClient can be used to track additional telemetry through the TrackXXX() APIs.
private readonly TelemetryClient _telemetryClient;
private readonly ILogger _logger;
public MyController(IOptions<TelemetryConfiguration> options, ILogger<MyController> logger)
{
_telemetryClient = new TelemetryClient(options.Value);
_logger = logger;
}
}
```
> [!NOTE]
> If you use the Microsoft.ApplicationInsights.AspNetCore package to enable Application Insights, modify this code to get `TelemetryClient` directly in the constructor. For an example, see [this FAQ](https://docs.microsoft.com/azure/azure-monitor/app/asp-net-core#frequently-asked-questions).
### <a name="what-application-insights-telemetry-type-is-produced-from-ilogger-logs-or-where-can-i-see-ilogger-logs-in-application-insights"></a>What Application Insights telemetry type is produced from ILogger logs? Or where can I see ILogger logs in Application Insights?
ApplicationInsightsLoggerProvider captures ILogger logs and creates TraceTelemetry from them. If an Exception object is passed to the **Log()** method on ILogger, *ExceptionTelemetry* is created instead of TraceTelemetry. These telemetry items can be found in the same places as any other TraceTelemetry or ExceptionTelemetry for Application Insights, including the portal, analytics, or the Visual Studio local debugger.
If you prefer to always send TraceTelemetry, use this snippet: ```builder.AddApplicationInsights((opt) => opt.TrackExceptionsAsExceptionTelemetry = false);```
### <a name="i-dont-have-the-sdk-installed-and-i-use-the-azure-web-apps-extension-to-enable-application-insights-for-my-aspnet-core-applications-how-do-i-use-the-new-provider"></a>I don't have the SDK installed, and I use the Azure Web Apps extension to enable Application Insights for my ASP.NET Core applications. How do I use the new provider?
The Application Insights extension in Azure Web Apps uses the new provider. You can modify the filtering rules in the *appsettings.json* file for your application.
### <a name="im-using-the-standalone-package-microsoftextensionsloggingapplicationinsights-and-enabling-application-insights-provider-by-calling-builderaddapplicationinsightsikey-is-there-an-option-to-get-an-instrumentation-key-from-configuration"></a>I'm using the standalone package Microsoft.Extensions.Logging.ApplicationInsights and enabling the Application Insights provider by calling **builder.AddApplicationInsights("ikey")**. Is there an option to get an instrumentation key from configuration?
Modify Program.cs and appsettings.json as follows:
```csharp
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureLogging((hostingContext, logging) =>
{
// hostingContext.HostingEnvironment can be used to determine environments as well.
var appInsightKey = hostingContext.Configuration["myikeyfromconfig"];
logging.AddApplicationInsights(appInsightKey);
});
}
```
The relevant section from `appsettings.json`:
```json
{
"myikeyfromconfig": "putrealikeyhere"
}
```
This code is required only when you use a standalone logging provider. For regular Application Insights monitoring, the instrumentation key is loaded automatically from the configuration path *ApplicationInsights:InstrumentationKey*. Appsettings.json should look like this:
```json
{
"ApplicationInsights":
{
"Instrumentationkey":"putrealikeyhere"
}
}
```
## <a name="next-steps"></a>Next steps
Learn more about:
* [Logging in ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/logging)
* [.NET trace logs in Application Insights](../../azure-monitor/app/asp-net-trace-logs.md)
| 57.612967 | 832 | 0.754237 | pol_Latn | 0.98476 |
4d22d1c8f142b1092bff650ab9bf9f6870641a1a | 329 | md | Markdown | tccli/examples/iotvideo/v20201215/ModifyDataForward.md | ws0416/tencentcloud-cli | 0a90fa77c8be1efa30b196a3eeb31b8be1f6a325 | [
"Apache-2.0"
] | 47 | 2018-05-31T11:26:25.000Z | 2022-03-08T02:12:45.000Z | tccli/examples/iotvideo/v20201215/ModifyDataForward.md | ws0416/tencentcloud-cli | 0a90fa77c8be1efa30b196a3eeb31b8be1f6a325 | [
"Apache-2.0"
] | 23 | 2018-06-14T10:46:30.000Z | 2022-02-28T02:53:09.000Z | tccli/examples/iotvideo/v20201215/ModifyDataForward.md | ws0416/tencentcloud-cli | 0a90fa77c8be1efa30b196a3eeb31b8be1f6a325 | [
"Apache-2.0"
] | 22 | 2018-10-22T09:49:45.000Z | 2022-03-30T08:06:04.000Z | **Example 1: Modify data forwarding**
Input:
```
tccli iotvideo ModifyDataForward --cli-unfold-argument \
--ProductId TOIDHQ3AOQ, \
--ForwardAddr [{"forward":{"api":"http://127.0.0.1:1080/sub.php"}}] \
--DataChose 1
```
Output:
```
{
"Response": {
"RequestId": "be69a7a3-7315-40a7-9532-3316e4a3e07e"
}
}
```
| 14.304348 | 73 | 0.580547 | yue_Hant | 0.166881 |
4d2311b5b1e7c6db1bfa550af921728007f7ecc2 | 2,546 | md | Markdown | docs/2014/reporting-services/report-design/formatting-ranges-on-a-gauge-report-builder-and-ssrs.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/reporting-services/report-design/formatting-ranges-on-a-gauge-report-builder-and-ssrs.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/reporting-services/report-design/formatting-ranges-on-a-gauge-report-builder-and-ssrs.md | robsonbrandao/sql-docs.pt-br | f41715f4d108211fce4c97803848bd294b9c1c17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Formatting Ranges on a Gauge (Report Builder and SSRS) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- reporting-services-native
ms.topic: conceptual
ms.assetid: ffdec8ca-3e95-41cd-850b-9e8c83be4b49
author: markingmyname
ms.author: maghan
manager: kfile
ms.openlocfilehash: e893b005e074f50828b04525c1c1f963a801489e
ms.sourcegitcommit: 31800ba0bb0af09476e38f6b4d155b136764c06c
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 02/15/2019
ms.locfileid: "56287334"
---
# <a name="formatting-ranges-on-a-gauge-report-builder-and-ssrs"></a>Formatting Ranges on a Gauge (Report Builder and SSRS)
  A gauge range is a zone or area on the gauge scale that indicates an important subsection of values on the gauge. By using a gauge range, you can visually indicate when the pointer value has entered a certain set of values. Ranges are defined by a start value and an end value.
  You can also use ranges to define different sections of a gauge. For example, on a gauge with values between 0 and 10, you can define a red range with values between 0 and 3, a yellow range between 4 and 7, and a green range between 8 and 10. If the specified start value is greater than the end value, the values are swapped so that the start value becomes the end value and vice versa.
  You can position the range the same way you position pointers on a scale. The **Placement** and **Distance from scale** properties determine the position of the range. For more information, see [Gauges (Report Builder and SSRS)](gauges-report-builder-and-ssrs.md).
> [!NOTE]
> [!INCLUDE[ssRBRDDup](../../includes/ssrbrddup-md.md)]
## <a name="see-also"></a>See Also
 [Formatting Scales on a Gauge (Report Builder and SSRS)](formatting-scales-on-a-gauge-report-builder-and-ssrs.md)
 [Formatting Pointers on a Gauge (Report Builder and SSRS)](formatting-pointers-on-a-gauge-report-builder-and-ssrs.md)
 [Set a Minimum or Maximum on a Gauge (Report Builder and SSRS)](set-a-minimum-or-maximum-on-a-gauge-report-builder-and-ssrs.md)
 [Tutorial: Adding a KPI to Your Report (Report Builder)](../tutorial-adding-a-kpi-to-your-report-report-builder.md)
 [Gauges (Report Builder and SSRS)](gauges-report-builder-and-ssrs.md)
| 65.282051 | 408 | 0.770228 | por_Latn | 0.994932 |
4d237b6fb6e5155f7e3c135d19f7347840caaa85 | 2,593 | md | Markdown | sqlcl/README.md | Sajaki/oracle-db-tools | 5677d373692436887231b789be0cb0e0ad947bdc | [
"MIT"
] | null | null | null | sqlcl/README.md | Sajaki/oracle-db-tools | 5677d373692436887231b789be0cb0e0ad947bdc | [
"MIT"
] | null | null | null | sqlcl/README.md | Sajaki/oracle-db-tools | 5677d373692436887231b789be0cb0e0ad947bdc | [
"MIT"
] | null | null | null | # SQLcl - Scripting

## What is it?
SQLcl scripting is based on Java's [JSR-223](https://jcp.org/aboutJava/communityprocess/final/jsr223/index.html) which allows scripting languages to be executed from the Java VM. There are a number of languages that can be plugged in with the NashHorn Javascript engine being included in Java. A list of languages can be found [here](https://en.wikipedia.org/wiki/List_of_JVM_languages)
The addition of client side scripting will allow control flow in the sql scripts themselves. It also allow for things like file access, greater control on host commands, leverage various javascript libraries, and the ability to leverage java.
## Globals
There are a few globals pushed into the scripting engine for use.
**args** - This is a simple array of the arguments passed along
Example:
~~~
for(var arg in args) {
ctx.write(arg + ":" + args[arg]);
ctx.write("\n");
}
~~~
**sqlcl** - This is SQLCL itself
~~~
setStmt(<String of stuff to run>)
This can be a single statement, an entire script of stuff, or any sqlcl command such as "@numbers.sql"
~~~
~~~
run()
Runs whatever is set via the setStmt function
~~~
Example:
~~~
/* Run any amount of command in the sqlcl prompt */
sqlcl.setStmt("select something from somewhere; @myscript \n begin null;end;");
sqlcl.run();
~~~
**ctx** ( this has tons of methods but this is the single most important )
~~~
write(<String>)
~~~
Example:
~~~
ctx.write('Hello World');
~~~
**util** ( again tons of methods )
~~~
execute(<string>,binds)
executes whatever is passed in with a boolean return for success/failure
~~~
~~~
executeReturnOneCol(<string>,binds)
executes and returns the first row , first column
~~~
~~~
executeReturnListofList(<string>,binds)
executes and returns an array(rows) of arrays(row).
~~~
~~~
executeReturnList(<string>,binds)
executes and returns an array (rows) of objects (row)
~~~
Examples: [sql.js](https://github.com/oracle/Oracle_DB_Tools/blob/master/sqlcl/examples/sql.js)
### Helper Functions
While JSR-223 is great for adding JavaScript capabilities, knowledge of Java is required for more advanced usage. Some of the more commonly needed functions will be provided in [helper.js](https://github.com/oracle/Oracle_DB_Tools/blob/master/sqlcl/lib/helpers.js). The .js file itself contains the descriptions of the functions. This will expand greatly as the examples and requests for examples grow.
| 30.151163 | 404 | 0.704975 | eng_Latn | 0.995395 |
4d2416ed2279c2ecd74e16b9bc0e4202ff46fb08 | 866 | md | Markdown | docs/docs/7_internals/99_TODO.md | dbonino/diozero | a5bb04e5e63ae1738be30b6e21abcfacdc8b4782 | [
"MIT"
] | null | null | null | docs/docs/7_internals/99_TODO.md | dbonino/diozero | a5bb04e5e63ae1738be30b6e21abcfacdc8b4782 | [
"MIT"
] | null | null | null | docs/docs/7_internals/99_TODO.md | dbonino/diozero | a5bb04e5e63ae1738be30b6e21abcfacdc8b4782 | [
"MIT"
] | null | null | null | ---
parent: Internals
nav_order: 99
permalink: /internals/todo.html
---
# To-Do
* Thorough testing (various types of devices using each service provider)
* A clean object-orientated API for IMUs
* Remote API for board capabilities
* SPI support for Arduino devices
* Introduce Servo as a device type
* Try out ConfigurableFirmata - is there actually any difference to the StandardFirmata protocol?
* Complete ADSL1x15
* BME680
* DONE Native support for all devices via mmap (/dev/mem), in particular to improve performance and add support for GPIO pull up/down configuration.
* DONE Wireless access to Firmata devices (network and Bluetooth). E.g. [ESP32](https://learn.sparkfun.com/tutorials/esp32-thing-hookup-guide?_ga=1.116824388.33505106.1471290985#installing-the-esp32-arduino-core) [Firmata GitHub issue #315](https://github.com/firmata/arduino/issues/315)
| 45.578947 | 287 | 0.786374 | eng_Latn | 0.775169 |
4d24ae786da5b959467fddc68fb9702ec66d0e06 | 1,034 | md | Markdown | README.md | samhaeng/stripes-data-transfer-components | 3dc13f04cebb9738662744b85c0610001496c6a7 | [
"Apache-2.0"
] | null | null | null | README.md | samhaeng/stripes-data-transfer-components | 3dc13f04cebb9738662744b85c0610001496c6a7 | [
"Apache-2.0"
] | 183 | 2020-01-27T17:00:08.000Z | 2022-03-30T13:47:34.000Z | README.md | samhaeng/stripes-data-transfer-components | 3dc13f04cebb9738662744b85c0610001496c6a7 | [
"Apache-2.0"
] | 5 | 2020-05-22T07:53:16.000Z | 2020-12-22T01:07:08.000Z | # stripes-data-transfer-components
Copyright (C) 2020 The Open Library Foundation
This software is distributed under the terms of the Apache License,
Version 2.0. See the file "[LICENSE](LICENSE)" for more information.
## Introduction
This is a library of React components and utility functions for use with [the Stripes UI toolkit](https://github.com/folio-org/stripes-core/), part of [the FOLIO project](https://www.folio.org/).
These components are intended for use by modules and applications that handle EResource Management (ERM)
duties. Other modules and applications may not find them as useful.
## Additional information
Project [code style guide](./CODESTYLEGUIDE.md).
Other [modules](https://dev.folio.org/source-code/#client-side).
See projects [UIDEXP](https://issues.folio.org/projects/UIDEXP), [UIDATIMP](https://issues.folio.org/projects/UIDATIMP)
at the [FOLIO issue tracker](https://dev.folio.org/guidelines/issue-tracker/).
Other FOLIO Developer documentation is at [dev.folio.org](https://dev.folio.org/)
| 43.083333 | 195 | 0.773694 | eng_Latn | 0.834573 |
4d253131365eb29fde798e9f28de209baaf68b40 | 1,349 | md | Markdown | faq/packages/packages.md | rebootcode/php-resources | 508fd373f53e0cb252c8272327d247ef3d87b998 | [
"CC-BY-4.0"
] | null | null | null | faq/packages/packages.md | rebootcode/php-resources | 508fd373f53e0cb252c8272327d247ef3d87b998 | [
"CC-BY-4.0"
] | null | null | null | faq/packages/packages.md | rebootcode/php-resources | 508fd373f53e0cb252c8272327d247ef3d87b998 | [
"CC-BY-4.0"
] | 1 | 2021-02-15T13:39:47.000Z | 2021-02-15T13:39:47.000Z | ---
title: "Where can I get open source PHP libraries, scripts, packages and other code?"
read_time: "2 min"
updated: "April 28, 2016"
group: "packages"
redirect_from: "/faq/php-libraries-scripts-and-code/"
permalink: "/faq/packages/php-libraries-scripts-and-code/"
compass:
prev: "/faq/packages/what-is-composer/"
next: "/faq/security/php-security-issues/"
---
There are many places where you can find open source PHP libraries to use
in your code. Most libraries from [GitHub](https://github.com) and
[BitBucket](https://bitbucket.org) can be found on [http://packagist.org](http://packagist.org).
To use Packagist, also get to know [Composer](http://getcomposer.org), a command-line
tool for managing the packages in your project.
## See also
Other useful resources to check out:
* [Awesome PHP](https://github.com/ziadoz/awesome-php/) - Curated list of PHP
ecosystem, present also on [libhunt](https://php.libhunt.com/).
* [Hoa](http://hoa-project.net/) - set of PHP libraries
* [PHP Classes](http://phpclasses.org) - PHP classes and packages
* [PHP Package Checklist](http://phppackagechecklist.com/) - A quality checklist for open-source PHP packages.
* [Producer](http://getproducer.org/) - Validates and releases your PHP library package.
* [The PHP League](https://thephpleague.com/) - group that provides good PHP packages
| 43.516129 | 110 | 0.739066 | eng_Latn | 0.795402 |
4d275b5dcf97207edb738e1cec29df44bbc9baaf | 209 | md | Markdown | src/__tests__/fixtures/unfoldingWord/en_tq/num/07/12.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | null | null | null | src/__tests__/fixtures/unfoldingWord/en_tq/num/07/12.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 226 | 2020-09-09T21:56:14.000Z | 2022-03-26T18:09:53.000Z | src/__tests__/fixtures/unfoldingWord/en_tq/num/07/12.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 1 | 2022-01-10T21:47:07.000Z | 2022-01-10T21:47:07.000Z | # Who was the first of the 12 tribes of Israel that offered sacrifices for the dedication of the altar?
Judah was the first of the 12 tribes of Israel that offered sacrifices for the dedication of the altar.
| 52.25 | 103 | 0.794258 | eng_Latn | 0.999999 |
4d27f25d14a1a92130f6bd1257924fb911ce702e | 2,709 | md | Markdown | _publications/CSEye.md | AmeerD/AmeerD.github.io | 4336d5a74b8517d4f92ce6e912aea49a2804b377 | [
"MIT"
] | null | null | null | _publications/CSEye.md | AmeerD/AmeerD.github.io | 4336d5a74b8517d4f92ce6e912aea49a2804b377 | [
"MIT"
] | null | null | null | _publications/CSEye.md | AmeerD/AmeerD.github.io | 4336d5a74b8517d4f92ce6e912aea49a2804b377 | [
"MIT"
] | null | null | null | ---
title: "CSEye: A Proposed Solution for Accurate and Accessible One-to-Many Face Verification"
collection: publications
permalink: /publication/CSEye
excerpt: 'Ameer Dharamshi, Rosie Yuyan Zou'
date: 2019-01-31
venue: 'AAAI Student Abstract Track'
paperurl: 'https://aaai.org/ojs/index.php/AAAI/article/view/5103'
citation:
---
CSEye is a low-cost, one-to-many facial verification model that addresses the one-to-many verification process using a unique, three-stage model architecture. First, a truncated VGG19 network extracts latent features from the suspect image and the candidate images. We then vectorize the extracted feature matrices and compute sets of differences based on angle, dot product, and element-wise distance measures. Finally, a dense network selects the optimal suspect-candidate match. CSEye was trained on randomly generated suspect-candidate sets from the [Labelled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) dataset. The angle and distance measures reliably produce accuracy rates exceeding 90% in initial tests, with the angle measure reporting accuracies up to 98%.
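To make the comparison stage concrete, the snippet below is a rough illustration (not code from the paper) of the three suspect-candidate difference measures applied to already-extracted, flattened feature vectors; the function names and the use of plain C++ vectors are assumptions made purely for the example.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative only: the three comparison measures described above,
// computed on two feature vectors of the same length.
double Dot(const std::vector<double>& a, const std::vector<double>& b) {
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

// Angle between the suspect and candidate feature vectors (radians).
double Angle(const std::vector<double>& a, const std::vector<double>& b) {
  const double cosine =
      Dot(a, b) / (std::sqrt(Dot(a, a)) * std::sqrt(Dot(b, b)));
  return std::acos(cosine);
}

// Element-wise absolute distance, one entry per feature.
std::vector<double> ElementwiseDistance(const std::vector<double>& a,
                                        const std::vector<double>& b) {
  std::vector<double> d(a.size());
  for (std::size_t i = 0; i < a.size(); ++i) d[i] = std::fabs(a[i] - b[i]);
  return d;
}
// A dense classifier would then take these measures for the suspect paired
// with each candidate and select the best-matching candidate.
```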
The idea behind CSEye was inspired by the final project for the University of Waterloo's Winter 2018 offering of *STAT 841: Statistical Learning - Classification*. We were challenged to develop a novel classification algorithm and demonstrate its ability in some domain. My team (three other students and myself) decided to create a computer vision model addressing the problem of one-to-many face verification with a low-cost approach, without compromising on accuracy. In this project, we extracted weights from the suspect and all candidate images with a CNN that employed a weight-sharing scheme. This structure ensured identical treatment of all images. The suspect features are then injected into a dense network as weights to influence the suspect-candidate match. This structure facilitates the interaction between the suspect weights and the candidate weights in the layer preceding final classification. The [draft](http://rosiezou.com/441proj.html) of the initial CSEye project provides a detailed description of the model architecture and approach. The [Jupyter notebook](https://github.com/rosiezou/cvproj/blob/master/441proj.ipynb) containing the model and the [github repository](https://github.com/rosiezou/cvproj) are also available for viewing.
The final camera-ready draft of the paper, *CSEye: A Proposed Solution for Accurate and Accessible One-to-Many
Face Verification*, can be [downloaded here](http://ameerd.github.io/files/CSEye_AAAI_2019_SA_412_CRC.pdf). A demo of the model is [provided here](https://github.com/AmeerD/CSEye/tree/master/Demo) for illustrative purposes.
| 150.5 | 1,262 | 0.811 | eng_Latn | 0.9879 |
4d292ebddd0aac2e4b75112537d5145688e88016 | 2,629 | md | Markdown | docs/words/insert.md | chenyitian/REBOL-3-Documentation | aa0ae36965b64e04dc9b60ef939b3681fae1518c | [
"Apache-2.0"
] | 2 | 2019-08-30T14:55:30.000Z | 2021-08-06T13:40:36.000Z | docs/words/insert.md | chenyitian/REBOL-3-Documentation | aa0ae36965b64e04dc9b60ef939b3681fae1518c | [
"Apache-2.0"
] | null | null | null | docs/words/insert.md | chenyitian/REBOL-3-Documentation | aa0ae36965b64e04dc9b60ef939b3681fae1518c | [
"Apache-2.0"
] | 1 | 2021-03-14T20:00:59.000Z | 2021-03-14T20:00:59.000Z | # Insert - Function Summary
## Summary:
Inserts a value into a series and returns the series after the insert.
## Usage:
**insert series value**
## Arguments:
**series** - Series at point to insert (must be: series port bitset)
**value** - The value to insert (must be: any-type)
## Refinements:
**/part** - Limits to a given length or position.
**range** - The range argument. (must be: number series port pair)
**/only** - Inserts a series as a series.
**/dup** - Duplicates the insert a specified number of times.
**count** - The count argument. (must be: number pair)
## Description:
If the value is a series compatible with the first (block or string-based datatype), then all of its values will be inserted. The series position just past the insert is returned, allowing multiple inserts to be cascaded together.
This function includes many refinements.
/PART allows you to specify how many elements you want to insert.
/ONLY will force a block to be inserted, rather than its individual elements. (This only applies when the first argument is a block datatype.)
/DUP will cause the inserted series to be repeated a given number of times.
The series will be modified.
```
str: copy "here this"
insert str "now "
print str
now here this
```
```
insert tail str " message"
print str
now here this message
```
```
insert tail str reduce [tab now]
print str
now here this message 9-Mar-2004/0:59:54-8:00
```
```
insert insert str "Tom, " "Tina, "
print str
Tom, Tina, now here this message 9-Mar-2004/0:59:54-8:00
```
```
insert/dup str "." 7
print str
.......Tom, Tina, now here this message 9-Mar-2004/0:59:54-8:00
```
```
insert/part tail str next "!?$" 1
print str
.......Tom, Tina, now here this message 9-Mar-2004/0:59:54-8:00?
```
```
blk: copy ["hello"]
insert blk 'print
probe blk
[print "hello"]
```
```
insert tail blk http://www.rebol.com
probe blk
[print "hello" http://www.rebol.com]
```
```
insert/only blk [separate block]
probe blk
[[separate block] print "hello" http://www.rebol.com]
```
## Related:
[**append**](http://www.rebol.com/docs/words/wappend.html) - Appends a value to the tail of a series and returns the series head.
[**change**](http://www.rebol.com/docs/words/wchange.html) - Changes a value in a series and returns the series after the change.
[**clear**](http://www.rebol.com/docs/words/wclear.html) - Removes all values from the current index to the tail. Returns at tail.
[**join**](http://www.rebol.com/docs/words/wjoin.html) - Concatenates values.
[**remove**](http://www.rebol.com/docs/words/wremove.html) - Removes value(s) from a series and returns after the remove. | 25.038095 | 230 | 0.702168 | eng_Latn | 0.906091 |
4d2937e05aa59534d45785f3ebb3421590a87f6f | 696 | md | Markdown | posts/2021-02-05_bb_blog.md | rohbot/beach-robot-blog | e23f16ac5bc8001847eac922c31edb3ae80c4891 | [
"MIT"
] | null | null | null | posts/2021-02-05_bb_blog.md | rohbot/beach-robot-blog | e23f16ac5bc8001847eac922c31edb3ae80c4891 | [
"MIT"
] | null | null | null | posts/2021-02-05_bb_blog.md | rohbot/beach-robot-blog | e23f16ac5bc8001847eac922c31edb3ae80c4891 | [
"MIT"
] | null | null | null | ---
title: Server service up and running.
description: Blog entry for the BeachBot.
date: 2021-02-05
tags:
- V3
- Programming
layout: layouts/post.njk
---
Date: 2021-02-05
Author: Andrew
Finished off the work yesterday with the creation of a new service to start the Flask server on startup. Then changed the shutdown code to actually shut down the Pi rather than writing to a file, and it all works! Tested from the Surface and very happy the UI/server setup seems to be coming together.
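(For reference, assuming the service is a systemd unit, which is the usual route on a Pi, the file looks roughly like the sketch below; the unit name, user and paths here are placeholders rather than the actual BeachBot setup.)

```ini
# /etc/systemd/system/beachbot.service  (hypothetical name and paths)
[Unit]
Description=BeachBot Flask server
After=network.target

[Service]
User=pi
WorkingDirectory=/home/pi/beachbot
ExecStart=/usr/bin/python3 /home/pi/beachbot/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it once with `sudo systemctl enable beachbot.service` makes it start on every boot.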
Need to hide all the UI content when shutdown is done so the UI doesn't give the impression that it is still connected.
I suspect the redis piece might be a little harder... something for the weekend!
| 36.631579 | 298 | 0.768678 | eng_Latn | 0.999344 |
4d294d17897c02255ba47b565acdc430010bc77c | 67 | md | Markdown | README.md | chronologie7/stack_example | b35a02de35f0f41116c90ab9c33ffdbd866bedad | [
"MIT"
] | null | null | null | README.md | chronologie7/stack_example | b35a02de35f0f41116c90ab9c33ffdbd866bedad | [
"MIT"
] | null | null | null | README.md | chronologie7/stack_example | b35a02de35f0f41116c90ab9c33ffdbd866bedad | [
"MIT"
] | null | null | null | # stack_example
Simple example program for stack management in C++
| 22.333333 | 50 | 0.80597 | eng_Latn | 0.995533 |
4d29f5b7c9388a8513a4c685b7773454884efe14 | 65 | md | Markdown | README.md | qynvi/rtl-regmux | 3525cfed8dcdf8f2375f7eda58c31cb68a1de876 | [
"MIT"
] | null | null | null | README.md | qynvi/rtl-regmux | 3525cfed8dcdf8f2375f7eda58c31cb68a1de876 | [
"MIT"
] | null | null | null | README.md | qynvi/rtl-regmux | 3525cfed8dcdf8f2375f7eda58c31cb68a1de876 | [
"MIT"
] | null | null | null | # rtl-regmux
FPGA synthesizable VHDL for a register multiplexer.
| 21.666667 | 51 | 0.815385 | eng_Latn | 0.447511 |
4d2a3b59e344788112e3b858c906c1e07776be94 | 4,476 | md | Markdown | README.md | i2y/jet | a8186a9926bb0d4285c09de5dfc204f2aa468835 | [
"MIT"
] | 17 | 2017-09-13T05:02:29.000Z | 2021-12-11T10:55:15.000Z | README.md | i2y/jet | a8186a9926bb0d4285c09de5dfc204f2aa468835 | [
"MIT"
] | null | null | null | README.md | i2y/jet | a8186a9926bb0d4285c09de5dfc204f2aa468835 | [
"MIT"
] | 2 | 2017-09-20T01:49:57.000Z | 2021-10-01T17:44:09.000Z | <img src="https://github.com/i2y/jet/raw/master/jet_logo.png" width="300px"/>
> "I thought of objects being like biological cells and/or individual
> computers on a network, only able to communicate with messages"
> _--Alan Kay, creator of Smalltalk, on the meaning of "object oriented programming"_
Jet is a simple OOP, dynamically typed, functional language that runs on the [Erlang](http://www.erlang.org) virtual machine (BEAM).
Jet's syntax is [Ruby](https://www.ruby-lang.org)-like syntax.
Jet was inspired by [Reia](https://github.com/tarcieri/reia) and [Celluloid](https://github.com/celluloid/celluloid).
## Language features
### Builtin Types
```ruby
### Numbers
49 # integer
4.9 # float
### Booleans
true
false
### Atoms
:foo
### Lists
list = [2, 3, 4]
list2 = [1, *list] # => [1, 2, 3, 4]
[1, 2, 3, *rest] = list2
rest # => [4]
list.append(5) # => [2, 3, 4, 5]
list # => [2, 3, 4]
list.select {|item| item > 2}
.map {|item| item * 2} # => [6, 8]
list # => [2, 3, 4]
# list comprehensions
[n * 2 for n in list] # => [4, 6, 8]
### Tuples
tuple = {1, 2, 3}
tuple.select {|item| item > 1}
.map {|item| item * 2} # => [4, 6]
tuple.to_list # => [1, 2, 3]
### Maps
dict = {foo: 1, bar: 2}
dict2 = dict.put(:baz, 3) # => {foo: 1, bar: 2, baz: 3}
dict # => {foo: 1, bar: 2}
dict.get(:baz, 100) # => 100
### Strings (Lists)
"Abc"
### Anonymous functions (Blocks)
add = {|x, y| x + y}
add(40, 9) # => 49
multiply = do |x, y|
x * y
end
multiply(7, 7) # => 49
### Binaries
<<1, 2, 3>>
<<"abc">>
<<1 , 2, x>> = <<1, 2, 3>>
x # => 3
```
### Class definition
Car.jet
```ruby
module Car
  class Car
    # On Jet, the state of an instance is immutable.
    # The initialize method returns the initial state of an instance.
def initialize()
{name: "foo",
speed: 100}
end
def display()
@name.display()
@speed.display()
end
end
end
```
### Module definition
Enumerable.jet
```ruby
module Enumerable
def select(func)
reduce([]) {|item, acc|
if func.(item)
acc ++ [item]
else
acc
end
}
end
def filter(func)
reduce([]) {|item, acc|
if func.(item)
acc ++ [item]
else
acc
end
}
end
def reject(func)
reduce([]) {|item, acc|
if func.(item)
acc
else
acc ++ [item]
end
}
end
def map(func)
reduce([]) {|item, acc|
acc ++ [func.(item)]
}
end
def collect(func)
reduce([]) {|item, acc|
acc ++ [func.(item)]
}
end
def min(func)
reduce(:infinity) {|item, acc|
match func.(acc, item)
      case -1
        acc
      case 0
        acc
case 1
item
end
}
end
def min()
reduce(:infinity) {|item, acc|
if acc <= item
acc
else
item
end
}
end
def unique()
reduce([]) {|item, acc|
if acc.index_of(item)
acc
else
acc ++ [item]
end
}
end
def each(func)
reduce([]) {|item, acc|
func.(item)
}
end
end
```
### Mixing in Modules
SampleList.jet
```ruby
module SampleList
class SampleList
include Enumerable
def initialize(items)
{items: items}
end
def reduce(acc, func)
lists::foldl(func, acc, @items)
end
end
end
```
### Trailing closures (Trailing blocks)
```ruby
sample_list = SampleList::SampleList.new([1, 2, 3])
sample_list.select {|item| item > 1}
.map {|item| item * 2}
# => [4, 6]
```
### Other supported features
- Tail recursion optimization
- Pattern matching
### Currently unsupported features
- Class inheritance
- Macro definition
## Requirements
- Erlang/OTP >= 18.0
- Elixir >= 1.1
## Installation
```sh
$ git clone https://github.com/i2y/jet.git
$ cd jet
$ mix archive.build
$ mix archive.install
$ mix escript.build
$ cp jet <any path>
```
## Usage
### Command
Compiling:
```sh
$ ls
Foo.jet
$ jet Foo.jet
$ ls
Foo.beam Foo.jet
```
Compiling and Executing:
```sh
$ cat Foo.jet
module Foo
def self.bar()
123.display()
end
end
$ jet -r Foo::bar Foo.jet
123
```
### Mix
mix.exs file example:
```elixir
defmodule MyApp.Mixfile do
use Mix.Project
def project do
[app: :my_app,
version: "1.0.0",
compilers: [:jet|Mix.compilers],
deps: [{:jet, git: "https://github.com/i2y/jet.git"}]]
end
end
```
".jet" files in source directory(src) is automatically compiled by mix command.
| 15.928826 | 132 | 0.565907 | eng_Latn | 0.59658 |
4d2a4466d0b773fd8b9c65adc890a824922dfcd7 | 3,823 | md | Markdown | README.md | ggg17226/http-client | 879f0a5556764152af81cdc6d198cd6338843142 | [
"MIT"
] | null | null | null | README.md | ggg17226/http-client | 879f0a5556764152af81cdc6d198cd6338843142 | [
"MIT"
] | null | null | null | README.md | ggg17226/http-client | 879f0a5556764152af81cdc6d198cd6338843142 | [
"MIT"
] | null | null | null | # http-client
## Usage
Add the Maven dependency:
```xml
<dependency>
<groupId>cn.aghost</groupId>
<artifactId>http-client</artifactId>
<version>1.0.6</version>
</dependency>
```
Usage example (High Level)
```java
@Data
@HttpCodec
public class TestObject {
private String addr;
}
TestObject testObject =
Get.doGet("https://file.aghost.cn/mmmmyipaddr.php?id=1", "tag", TestObject.class);
TestObject testObject =
Post.doPost(
"https://file.aghost.cn/mmmmyipaddr.php?id=1",
"tag",
new TestObject(),
TestObject.class);
TestObject testObject =
Put.doPut(
"https://file.aghost.cn/mmmmyipaddr.php?id=1",
"tag",
new TestObject(),
TestObject.class);
TestObject testObject =
Delete.doDelete(
"https://file.aghost.cn/mmmmyipaddr.php?id=1",
"tag",
new TestObject(),
TestObject.class);
```
Usage example (Low Level)
```java
// GET request
HttpResponse httpResponse = Get.doGet("https://file.aghost.cn/mmmmyipaddr.php");
log.info(JSON.toJSONString(httpResponse));
Get.doGetAsync(
"https://file.aghost.cn/mmmmyipaddr.php",
new HttpCallback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
log.error(ExceptionUtils.getStackTrace(e));
assert e == null;
}
@Override
public void onSuccess(@NotNull Call call, @NotNull HttpResponse response) {
log.info(JSON.toJSONString(response));
assert httpResponse.getContentType().equals("application/json");
}
});
// POST request
HttpResponse httpResponse = Post.doPost("https://file.aghost.cn/mmmmyipaddr.php", null, null);
Post.doPostAsync(
"https://file.aghost.cn/mmmmyipaddr.php",
null,
null,
new HttpCallback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
log.error(ExceptionUtils.getStackTrace(e));
assert e == null;
}
@Override
public void onSuccess(@NotNull Call call, @NotNull HttpResponse response) {
log.info(JSON.toJSONString(response));
assert httpResponse.getContentType().equals("application/json");
}
});
// PUT request
HttpResponse httpResponse = Put.doPut("https://file.aghost.cn/mmmmyipaddr.php", null, null);
Put.doPutAsync(
"https://file.aghost.cn/mmmmyipaddr.php",
null,
null,
new HttpCallback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
log.error(ExceptionUtils.getStackTrace(e));
assert e == null;
}
@Override
public void onSuccess(@NotNull Call call, @NotNull HttpResponse response) {
log.info(JSON.toJSONString(response));
assert httpResponse.getContentType().equals("application/json");
}
});
// DELETE request
HttpResponse httpResponse =
Delete.doDelete("https://file.aghost.cn/mmmmyipaddr.php", null, null);
Delete.doDeleteAsync(
"https://file.aghost.cn/mmmmyipaddr.php",
null,
null,
new HttpCallback() {
@Override
public void onFailure(@NotNull Call call, @NotNull IOException e) {
log.error(ExceptionUtils.getStackTrace(e));
assert e == null;
}
@Override
public void onSuccess(@NotNull Call call, @NotNull HttpResponse response) {
log.info(JSON.toJSONString(response));
assert httpResponse.getContentType().equals("application/json");
}
});
``` | 28.318519 | 98 | 0.580957 | kor_Hang | 0.299256 |
4d2ae02b01f57b4e7199dea60689476a09f417a7 | 2,396 | md | Markdown | Animation/label.md | lekhanhtoan37/awesome-ios-animation | 8f12370f28294c7182571503e50cdad1103074c2 | [
"MIT"
] | 3 | 2021-09-30T17:52:11.000Z | 2022-03-05T07:17:49.000Z | Animation/label.md | lekhanhtoan37/awesome-ios-animation | 8f12370f28294c7182571503e50cdad1103074c2 | [
"MIT"
] | null | null | null | Animation/label.md | lekhanhtoan37/awesome-ios-animation | 8f12370f28294c7182571503e50cdad1103074c2 | [
"MIT"
] | 1 | 2020-03-06T07:58:25.000Z | 2020-03-06T07:58:25.000Z | [ZCAnimatedLabel](https://github.com/overboming/ZCAnimatedLabel)
--
> UILabel replacement with fine-grain appear/disappear animation

[TOMSMorphingLabel](https://github.com/tomknig/TOMSMorphingLabel)
--
> Configurable morphing transitions between text values of a label.

[Preloader.Ophiuchus](https://github.com/Yalantis/Preloader.Ophiuchus)
--
> Custom Label to apply animations on whole text or letters

[RQShineLabel](https://github.com/zipme/RQShineLabel)
--
> Secret app like text animation

[GlitchLabel](https://github.com/kciter/GlitchLabel)
--
> G..lit...c...hing UILa..bel fo..r iO...S

[ActiveLabel.swift](https://github.com/optonaut/ActiveLabel.swift)
--
> UILabel drop-in replacement supporting Hashtags (#), Mentions (@) and URLs (http://) written in Swift

[CountdownLabel](https://github.com/suzuki-0000/CountdownLabel)
--
> Simple countdown UILabel with morphing animation, and some useful function.

[LTMorphingLabel](https://github.com/lexrus/LTMorphingLabel)
--
> Graceful morphing effects for UILabel written in Swift.

[MarqueeLabel](https://github.com/cbpowell/MarqueeLabel)
--
> A drop-in replacement for UILabel, which automatically adds a scrolling marquee effect when the label's text will not fit inside the specified frame.

## [GhostTypewriter](https://github.com/wibosco/GhostTypewriter)
> A UILabel subclass that adds a typewriting animation effect

| 40.610169 | 191 | 0.795075 | yue_Hant | 0.304504 |
4d2bc055b20f0802bc96a5e73b454742fcc5b0e7 | 1,530 | md | Markdown | _posts/2021-08-21-共同富裕的上綱上線, 第三次分配山雨欲來,是否是一場社會革命的變奏?運動式的道德勸捐,在社會普遍存在.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | 1 | 2020-09-16T02:05:27.000Z | 2020-09-16T02:05:27.000Z | _posts/2021-08-21-共同富裕的上綱上線, 第三次分配山雨欲來,是否是一場社會革命的變奏?運動式的道德勸捐,在社會普遍存在.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | null | null | null | _posts/2021-08-21-共同富裕的上綱上線, 第三次分配山雨欲來,是否是一場社會革命的變奏?運動式的道德勸捐,在社會普遍存在.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | null | null | null | ---
layout: post
title: "共同富裕的上綱上線, 第三次分配山雨欲來,是否是一場社會革命的變奏?運動式的道德勸捐,在社會普遍存在仇富心態催化之下,是否會出現第三次分配的夾生飯?(楊錦麟論時政)"
date: 2021-08-21T13:22:45.000Z
author: 老楊到處說
from: https://www.youtube.com/watch?v=gYPIkuUf4hk
tags: [ 老楊到處說 ]
categories: [ 老楊到處說 ]
---
<!--1629552165000-->
[共同富裕的上綱上線, 第三次分配山雨欲來,是否是一場社會革命的變奏?運動式的道德勸捐,在社會普遍存在仇富心態催化之下,是否會出現第三次分配的夾生飯?(楊錦麟論時政)](https://www.youtube.com/watch?v=gYPIkuUf4hk)
------
<div>
Subscribe to Yang Jinlin's channel to watch the latest videos free every day: https://bit.ly/yangjinlin. You are welcome to support our work via PayPal: http://bit.ly/support-laoyang. On Tuesday, President Xi Jinping chaired a meeting of the Central Financial and Economic Affairs Commission, stressing that common prosperity should be promoted in stages, allowing some people to get rich first, with those who prosper first leading and helping those who follow, and proposing a basic institutional arrangement in which primary distribution, redistribution and the third distribution work in concert. The Chinese economist Li Yining has explained that primary distribution means each factor of production earning its factor income, secondary distribution means rebalancing through taxation and transfer payments, and the third distribution means further adjustment through voluntary charitable giving. The official emphasis on a third distribution of income signals that high-income groups and enterprises will be encouraged to give back to society in the form of charitable donations. Tencent's pledge of 50 billion yuan, together with the "third distribution" raised at the Central Financial and Economic Affairs Commission meeting, inevitably makes outsiders wonder whether more high-profit companies or wealthy Chinese will follow suit, and what new measures the authorities will take to "rob the rich to aid the poor." China's economic record over the past 20 years has been remarkable, but the huge wealth gap created by unequal income distribution remains an unresolved social problem, and the urban-rural divide is striking. The Gini coefficient, which reflects the income gap among residents, sat between 0.462 and 0.474 from 2012 to 2015; broadly speaking, a value between 0.4 and 0.5 indicates a fairly large income disparity. Some analysts argue that resources monopolized by a handful of giants should belong to the whole population, and that only through secondary and tertiary redistribution, making capital serve the nation's overall interests and strategic development, can it become "good capital" and deliver "common prosperity." All of this is read by outside observers as China quietly brewing a low-key social revolution, and it underscores the power of top Communist Party decision-makers over the fate of enterprises. Han Yonghong of Singapore's Lianhe Zaobao notes that it is foreseeable that excessive incomes in China will be curbed, the wealthy will be asked to donate more, and the government will very likely accelerate property and inheritance taxes. Is this the prelude to a "social revolution"? We shall see. It must be said that, however it is framed, the goals are good; but if the means do not address the root causes, they treat the symptoms rather than the disease, and if they are too forceful or too radical they will provoke a backlash. Han Yonghong observes that the official moves of the past six months still carry the flavor of a political campaign: new rules issued overnight rewrite the rules of the game for whole industries and upend markets, sapping business confidence and stoking anxiety. In launching a new round of social change, the handling of its pace, and the respect shown for rules, private property and personal space, will be the yardsticks of this system's merits and its level of modernization. As I mentioned in another program, most overseas commentary reads common prosperity and the third distribution negatively. Although the idea of the third distribution comes from Professor Li Yining of Peking University and reportedly draws on the principles and models of the third distribution in developed countries, Western and Northern European welfare states rely mainly on taxation to adjust income distribution, with the power to set tax rates resting with parliament; wealth adjustment there takes place under the rule of law and therefore poses no institutional threat to private property rights. The third distribution here looks more like tax leverage plus moral pressure to donate. The concept of charity, and of charitable foundations, only took root in China after the 2008 Wenchuan earthquake and has not yet fully matured, so will such a third distribution end up as a half-cooked pot of rice?
</div>
| 90 | 1,111 | 0.860784 | yue_Hant | 0.827103 |
4d2be7635b0ce9a466469b4a58f71061a1181710 | 1,600 | md | Markdown | README.md | rusito-23/WahlaanTests | a80e0a26daa6ad9d6c463c29eefe550ace185d73 | [
"MIT"
] | 1 | 2019-04-23T03:47:16.000Z | 2019-04-23T03:47:16.000Z | README.md | rusito-23/Wahlaan | a80e0a26daa6ad9d6c463c29eefe550ace185d73 | [
"MIT"
] | 3 | 2019-03-28T15:06:58.000Z | 2019-04-21T17:27:18.000Z | README.md | rusito-23/Wahlaan | a80e0a26daa6ad9d6c463c29eefe550ace185d73 | [
"MIT"
] | null | null | null | # Wahlaan 🤓
Discrete Math II 2019 - Famaf, Argentina.
Uses the greedy algorithm for graph coloring, implemented in C.
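As a rough illustration of the greedy idea (coloring vertices in a given order, always taking the smallest color not used by an already-colored neighbour), here is a standalone C sketch; it is illustrative only and does not use this project's actual Rii.h API:

```c
#include <limits.h>
#include <stdlib.h>

/* Illustrative greedy coloring over an adjacency-list graph.
 * order[] gives the visiting order (natural, Welsh-Powell, ...).
 * color[] must be pre-filled with UINT_MAX, meaning "uncolored".
 * Returns the number of colors used. */
unsigned greedy_coloring(unsigned n, unsigned **adj, const unsigned *deg,
                         const unsigned *order, unsigned *color) {
    unsigned used = 0;
    char *taken = calloc(n, 1);               /* colors used by neighbours */
    for (unsigned i = 0; i < n; ++i) {
        unsigned v = order[i];
        for (unsigned j = 0; j < deg[v]; ++j) /* mark neighbour colors */
            if (color[adj[v][j]] != UINT_MAX)
                taken[color[adj[v][j]]] = 1;
        unsigned c = 0;
        while (taken[c]) ++c;                 /* smallest free color */
        color[v] = c;
        if (c + 1 > used) used = c + 1;
        for (unsigned j = 0; j < deg[v]; ++j) /* reset the marks */
            if (color[adj[v][j]] != UINT_MAX)
                taken[color[adj[v][j]]] = 0;
    }
    free(taken);
    return used;
}
```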
## Test Suites
- **SANITY**
Runs several UnitTests over the required functions.
- **PERFORMANCE**
Runs several actions, saving the partial time for each one:
- Graph reading
- Greedy with natural order
- WelshPowell Re-Ordering and Greedy
- 100 SwitchVertices
- 1000 RMBCs + Greedy
- **COLOR**
Shows the results of Greedy with the following orders:
- Natural
- Welsh Powell
- 100 SwitchVertices
- 100 RMBCs
- **BIPARTITO**
Shows the results of running Greedy and Bipartite over the graph.
## Makefile 🔛
The makefile provides the following targets:
- `make <suite> GRAPH=Path/to/Graph` Suite over a Graph
Example: `make sanity GRAPH=Graphs/K4.txt`
- `make <suite>-all FOLDER=Path/To/Folder` Suite over a folder of Graphs
Example: `make performance-all FOLDER=Graphs/Performance`
- `make <suite>-valgrind GRAPH=Path/To/Graph` Suite over a Graph with Valgrind results:
`make bipartito-valgrind GRAPH=Graphs/Bipartito.txt VALGRIND='valgrind --leak-check=full --show-leak-kinds=all'`
### Structure
```
.
├── Graphs
│ ├── Bipartite
│ ├── Color
│ │ └── Small graphs to check COLOR suite
│ ├── Complete
│ │ └── Complete Graphs
│ └── Performance
│ └── Large graphs to test the PERFORMANCE suite
├── Makefile
├── README.md
├── Test
│ ├── PrintTests.c
│ ├── TestSuites.c
│ ├── Tests.h
│ ├── TestsMultiples.c
│ └── UnitTests.c
├── Wahlaan
│ ├── Project .c files
│ └── Rii.h
└── main.c
```
| 20.253165 | 113 | 0.661875 | eng_Latn | 0.664855 |
4d2c1654f30c478125857622dc5c1390a37acfe5 | 971 | md | Markdown | meta/ro/15-9-1.md | diseminare/open-sdg-data-starter | eb41cb01197ae0269729ce838a86658fa13e7875 | [
"MIT"
] | null | null | null | meta/ro/15-9-1.md | diseminare/open-sdg-data-starter | eb41cb01197ae0269729ce838a86658fa13e7875 | [
"MIT"
] | null | null | null | meta/ro/15-9-1.md | diseminare/open-sdg-data-starter | eb41cb01197ae0269729ce838a86658fa13e7875 | [
"MIT"
] | null | null | null | ---
data_non_statistical: false
goal_meta_link: http://unstats.un.org/sdgs/files/metadata-compilation/Metadata-Goal-15.pdf
goal_meta_link_text: United Nations Sustainable Development Goals Metadata (pdf 456kB)
graph_type: line
indicator: 15.9.1
indicator_name: Progress towards national targets established in accordance with Aichi
Biodiversity Target 2 of the Strategic Plan for Biodiversity 2011-2020
indicator_sort_order: 15-09-01
layout: indicator
permalink: /15-9-1/
published: true
reporting_status: notstarted
sdg_goal: '15'
target: By 2020, integrate ecosystem and biodiversity values into national and local
planning, development processes, poverty reduction strategies and accounts
target_id: '15.9'
graph_title: Progress towards national targets established in accordance with Aichi Biodiversity
Target 2 of the Strategic Plan for Biodiversity 2011-2020
un_custodian_agency: CBD-Secretariat,UNEP
un_designated_tier: '3'
previous: 15-8-1
next: 15-a-1
---
| 38.84 | 96 | 0.821833 | eng_Latn | 0.659785 |
4d2c50281e2aed9662404ab797ef765f889a8658 | 3,816 | md | Markdown | client/README.md | wiseco/imagecashletter | 892d93317176b669cce39e45ce10f049463173d6 | [
"Apache-2.0"
] | null | null | null | client/README.md | wiseco/imagecashletter | 892d93317176b669cce39e45ce10f049463173d6 | [
"Apache-2.0"
] | null | null | null | client/README.md | wiseco/imagecashletter | 892d93317176b669cce39e45ce10f049463173d6 | [
"Apache-2.0"
] | null | null | null | # Go API client for openapi
Moov Image Cash Letter (ICL) implements an HTTP API for creating, parsing and validating ImageCashLetter files.
## Overview
This API client was generated by the [OpenAPI Generator](https://openapi-generator.tech) project. By using the [OpenAPI-spec](https://www.openapis.org/) from a remote server, you can easily generate an API client.
- API version: v1
- Package version: 1.0.0
- Build package: org.openapitools.codegen.languages.GoClientCodegen
For more information, please visit [https://github.com/moov-io/imagecashletter](https://github.com/moov-io/imagecashletter)
## Installation
Install the following dependencies:
```shell
go get github.com/stretchr/testify/assert
go get golang.org/x/oauth2
go get golang.org/x/net/context
go get github.com/antihax/optional
```
Put the package under your project folder and add the following import:
```golang
import "./openapi"
```
## Documentation for API Endpoints
All URIs are relative to *http://localhost:8083*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*ImageCashLetterFilesApi* | [**AddICLToFile**](docs/ImageCashLetterFilesApi.md#addicltofile) | **Post** /files/{fileID}/cashLetters | Add CashLetter to File
*ImageCashLetterFilesApi* | [**CreateICLFile**](docs/ImageCashLetterFilesApi.md#createiclfile) | **Post** /files/create | Create File
*ImageCashLetterFilesApi* | [**DeleteICLFile**](docs/ImageCashLetterFilesApi.md#deleteiclfile) | **Delete** /files/{fileID} | Delete file
*ImageCashLetterFilesApi* | [**DeleteICLFromFile**](docs/ImageCashLetterFilesApi.md#deleteiclfromfile) | **Delete** /files/{fileID}/cashLetters/{cashLetterID} | Delete a CashLetter from a File
*ImageCashLetterFilesApi* | [**GetICLFileByID**](docs/ImageCashLetterFilesApi.md#geticlfilebyid) | **Get** /files/{fileID} | Retrieve a file
*ImageCashLetterFilesApi* | [**GetICLFileContents**](docs/ImageCashLetterFilesApi.md#geticlfilecontents) | **Get** /files/{fileID}/contents | Get file contents
*ImageCashLetterFilesApi* | [**GetICLFiles**](docs/ImageCashLetterFilesApi.md#geticlfiles) | **Get** /files | Get ICL Files
*ImageCashLetterFilesApi* | [**Ping**](docs/ImageCashLetterFilesApi.md#ping) | **Get** /ping | Ping ImageCashLetter service
*ImageCashLetterFilesApi* | [**UpdateICLFile**](docs/ImageCashLetterFilesApi.md#updateiclfile) | **Post** /files/{fileID} | Updates FileHeader
*ImageCashLetterFilesApi* | [**ValidateICLFile**](docs/ImageCashLetterFilesApi.md#validateiclfile) | **Get** /files/{fileID}/validate | Validate file
## Documentation For Models
- [Bundle](docs/Bundle.md)
- [BundleControl](docs/BundleControl.md)
- [BundleHeader](docs/BundleHeader.md)
- [CashLetter](docs/CashLetter.md)
- [CashLetterControl](docs/CashLetterControl.md)
- [CashLetterHeader](docs/CashLetterHeader.md)
- [CheckDetailAddendumA](docs/CheckDetailAddendumA.md)
- [CheckDetailAddendumB](docs/CheckDetailAddendumB.md)
- [CheckDetailAddendumC](docs/CheckDetailAddendumC.md)
- [Checks](docs/Checks.md)
- [CreateIclFile](docs/CreateIclFile.md)
- [CreditItem](docs/CreditItem.md)
- [Error](docs/Error.md)
- [IclFile](docs/IclFile.md)
- [IclFileControl](docs/IclFileControl.md)
- [IclFileHeader](docs/IclFileHeader.md)
- [ImageViewAnalysis](docs/ImageViewAnalysis.md)
- [ImageViewData](docs/ImageViewData.md)
- [ImageViewDetail](docs/ImageViewDetail.md)
- [ReturnDetailAddendumA](docs/ReturnDetailAddendumA.md)
- [ReturnDetailAddendumB](docs/ReturnDetailAddendumB.md)
- [ReturnDetailAddendumC](docs/ReturnDetailAddendumC.md)
- [ReturnDetailAddendumD](docs/ReturnDetailAddendumD.md)
- [Returns](docs/Returns.md)
- [RoutingNumberSummary](docs/RoutingNumberSummary.md)
## Documentation For Authorization
Endpoints do not require authorization.
## Author
| 44.372093 | 214 | 0.7576 | yue_Hant | 0.882785 |
4d2c5e3652e032b792b697773f4bbaf50794a6a5 | 6,960 | md | Markdown | articles/dedicated-compute/dc-sd.md | myoung-ukcloud/documentation | c5a9c2c80280a856afa360b621f038353a22ae5f | [
"MIT"
] | null | null | null | articles/dedicated-compute/dc-sd.md | myoung-ukcloud/documentation | c5a9c2c80280a856afa360b621f038353a22ae5f | [
"MIT"
] | null | null | null | articles/dedicated-compute/dc-sd.md | myoung-ukcloud/documentation | c5a9c2c80280a856afa360b621f038353a22ae5f | [
"MIT"
] | null | null | null | ---
title: UKCloud Dedicated Compute v2 Service Definition
description: Provides an overview of what is provided by the UKCloud Dedicated Compute v2 service
services: dedicated-compute
author: Sue Highmoor
reviewer:
lastreviewed: 02/07/2019
toc_rootlink: Service Definition
toc_sub1:
toc_sub2:
toc_sub3:
toc_sub4:
toc_title: UKCloud Dedicated Compute v2 Service Definition
toc_fullpath: Service Definition/dc-sd.md
toc_mdlink: dc-sd.md
---
# UKCloud Dedicated Compute v2 Service Definition
## Why UKCloud?
UKCloud is dedicated to helping the UK Public Sector and UK citizens by delivering more choice and flexibility through safe and trusted cloud technology. We own and operate a UK-sovereign, industry-leading, multi-cloud platform, located within the Government's Crown Campus, offering multiple cloud technologies, including VMware, Azure, OpenStack, OpenShift and Oracle. This enables customers to choose the right technology for creating new workloads or migrating existing applications to the cloud.
We recognise the importance of government services in making the country run smoothly, which is why we include the highest level of support to all our customers at no extra cost. This includes a dedicated 24/7 UK telephone and ticket support, and Network Operations Centre (NOC) utilising protective and proactive monitoring tools, and access to UKCloud's technical experts.

## What is UKCloud Dedicated Compute v2?
Dedicated Compute v2 is a flexible solution designed where guaranteed performance is required. Delivering exceptional performance through dedicated compute hosts, meeting your security obligations by addressing compliance and regulatory requirements through physical separation of workloads.
Hosts are assigned for your sole use and are enabled for granular configuration to meet your workload requirements. Dedicated Compute v2 utilises our public VMware storage, providing on-demand flexibility, with either Tier 1 or Tier 2 options available.
For full information regarding this product, we have Service Scopes, FAQs and other relevant documents on our [Knowledge Centre](https://docs.ukcloud.com).
## What the service can help you achieve
- Workload isolation. Designed for the exclusive use of each customer, and provides the highest levels of separation and isolation from other customers within a trusted community
- Predictable performance and flexible configuration. Guaranteed performance - Manage custom workloads, with full-control of VM sizing, CPU and RAM allocations
- Use your existing software licenses. Utilise existing licenses on the UKCloud platform, including Microsoft licenses, and other licenses that are bound to the VM (some restrictions apply such as Microsoft Server licenses)
- Prove the concept. Quick access to your own dedicated cloud infrastructure without the long-term commitment
- Meets compliance and regulatory requirements. The UKCloud platform is regularly CHECK-tested and undergoes regular independent assessments to ensure data security
- Automation. Delivered as a cloud service through high levels of automation, enabling self-service via the UKCloud Portal
## Product options
The service is designed to be flexible and allows you to choose from the list below in order to match your requirements.
### Security Domain
Choose the security domain in which you want to run your application
- Assured OFFICIAL - DDoS-protected internet, PSN, HSCN and Janet
- Elevated OFFICIAL - PSN and RLI
### Select Required Number of Compute Hosts
We operate at least N+1 for hardware resilience, Starter pack includes 1 host used for resilience in case of hardware failure
### Pricing and Packaging Options
Various commitment discounts available
- PAYG
- 1 Year Commit
- 2 Year Commit
- 3 Year Commit
### Storage Profile
- Tier 1
- Tier 2
### Monitoring and Metric
Advanced Metrics and Monitoring delivered throught VMware vRealise Operations (vROPS)
### Disaster Recovery
Additional workload protection to UKCloud Public utilsing Zerto technology
## Pricing and packaging
Pricing from £3,360 per host per month - full pricing with all options, including licensing and connectivity, is available in the [UKCloud Pricing Guide](https://ukcloud.com/pricing-guide).
## Accreditation and information assurance
The security of our platform is our number one priority. We've always been committed to adhering to exacting standards, frameworks and best practice. Everything we do is subject to regular independent validation by government accreditors, sector auditors, and management system assessors. Details are available on the [UKCloud website](https://ukcloud.com/governance/).
## Connectivity options
UKCloud provides one of the best-connected cloud platforms for the UK Public Sector. We offer a range of flexible connectivity options detailed in the [UKCloud Pricing Guide](https://ukcloud.com/pricing-guide) which enable access to our secure platform by DDoS-protected internet, native PSN, Janet, HSCN and RLI and your own lease lines via our HybridConnect service.
## An SLA you can trust
We understand that enterprise workloads need a dependable service that underpins the reliability of the application to users and other systems, which is why we offer one of the best SLAs on G-Cloud. For full details on the service SLA, including measurements and service credits, please view the [*SLA definition article*](../other/other-ref-sla-definition.md) on the UKCloud Knowledge Centre.
| | |
|-----------------------------|-------|
**Service level agreement** | 99.99%
**Portal level agreement** | 99.90%
**Availability calculation** | Availability is calculated based on the number of hours in the billing month (for example, 744 hours for months with 31 days). Excludes any planned and emergency maintenance.
**Measurement of SLA** | Unavailability applies to existing VMs when the compute platform becomes inaccessible due to a fault recognised at the IaaS layer or lower:<ul><li>Fault is not within the customer's control (OS configuration, customer applications and customer networks)<li>Fault is within UKCloud-controlled components such as the dedicated compute infrastructure, UKCloud data centre facilities, physical firewalls and routers</ul>
**Key exclusions** | Full details of exclusions are available in the SLA definition document within the UKCloud Knowledge Centre |
## The small print
For full terms and conditions including onboarding and responsibilities, please refer to the [*Terms and conditions documents*](../other/other-ref-terms-and-conditions.md).
## Feedback
If you find a problem with this article, click **Improve this Doc** to make the change yourself or raise an [issue](https://github.com/UKCloud/documentation/issues) in GitHub. If you have an idea for how we could improve any of our services, send an email to <[email protected]>.
| 56.585366 | 500 | 0.796408 | eng_Latn | 0.996103 |
4d2c7825ac06139db7d047377eda6dad0e3ebf6f | 603 | md | Markdown | src/main/java/g0601_0700/s0647_palindromic_substrings/readme.md | javadev/LeetCode-in-Java | 032ba0a4c2cc8533f3085eea0c93b334cfe80051 | [
"MIT"
] | 14 | 2021-06-30T19:25:18.000Z | 2022-03-23T03:58:11.000Z | src/main/java/g0601_0700/s0647_palindromic_substrings/readme.md | javadev/LeetCode-in-Java | 032ba0a4c2cc8533f3085eea0c93b334cfe80051 | [
"MIT"
] | 670 | 2021-10-29T19:00:47.000Z | 2022-03-31T02:49:46.000Z | src/main/java/g0601_0700/s0647_palindromic_substrings/readme.md | javadev/LeetCode-in-Java | 032ba0a4c2cc8533f3085eea0c93b334cfe80051 | [
"MIT"
] | 41 | 2021-09-23T06:58:31.000Z | 2022-01-06T23:28:13.000Z | 647\. Palindromic Substrings
Medium
Given a string `s`, return _the number of **palindromic substrings** in it_.
A string is a **palindrome** when it reads the same backward as forward.
A **substring** is a contiguous sequence of characters within the string.
**Example 1:**
**Input:** s = "abc"
**Output:** 3
**Explanation:** Three palindromic strings: "a", "b", "c".
**Example 2:**
**Input:** s = "aaa"
**Output:** 6
**Explanation:** Six palindromic strings: "a", "a", "a", "aa", "aa", "aaa".
**Constraints:**
* `1 <= s.length <= 1000`
* `s` consists of lowercase English letters. | 20.1 | 76 | 0.636816 | eng_Latn | 0.910134 |
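One straightforward approach that fits these constraints (a sketch, not necessarily the solution used in this repository) is to expand around every possible palindrome center, i.e. each character and each gap between adjacent characters:

```java
class Solution {
    public int countSubstrings(String s) {
        int count = 0;
        // A string of length n has 2 * n - 1 centers: n single characters
        // and n - 1 gaps between adjacent characters.
        for (int center = 0; center < 2 * s.length() - 1; center++) {
            int left = center / 2;
            int right = left + center % 2;
            // Every successful expansion is one more palindromic substring.
            while (left >= 0 && right < s.length() && s.charAt(left) == s.charAt(right)) {
                count++;
                left--;
                right++;
            }
        }
        return count;
    }
}
```

This runs in O(n²) time and O(1) extra space, which is comfortable for `n <= 1000`.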
4d2cf641f5bb839d7655c1d0c95eeb2122e00f9f | 15,929 | md | Markdown | README.md | badrbilal/mendix-DataTables | b6bc0e92abe1cdc8638a490080e002c6e8e84033 | [
"Apache-2.0"
] | 1 | 2019-10-02T21:53:49.000Z | 2019-10-02T21:53:49.000Z | README.md | badrbilal/mendix-DataTables | b6bc0e92abe1cdc8638a490080e002c6e8e84033 | [
"Apache-2.0"
] | null | null | null | README.md | badrbilal/mendix-DataTables | b6bc0e92abe1cdc8638a490080e002c6e8e84033 | [
"Apache-2.0"
] | null | null | null |
# DataTables for Mendix
This widget provides the [DataTables](http://datatables.net/) library as Mendix custom widget.
## Contributing
For more information on contributing to this repository visit [Contributing to a GitHub repository](https://world.mendix.com/display/howto50/Contributing+to+a+GitHub+repository)!
## Typical usage scenario
Displaying data in a grid with more flexibility than the standard datagrid allows.
## Features
All features can be seen in action in the test/demo project.
- Drag columns to reorder them
- Allow end user to choose which columns to show or hide
- Allow end user to choose the paging size
- Paging size and column layout can be saved in the local storage of the browser.
- Use scrollbar i.s.o. paging buttons. Additional data is loaded automatically when the user scrolls down far enough.
- Responsive table: Hide columns for smaller display sizes, also set a priority on each column to indicate which columns should be hidden first.
- Responsive class: For smaller display sizes, allow end user to expand a row to see the data that does not fit in the grid.
- Force a table refresh from your microflow
- Feed XPath constraints from an attribute
- Specify column widths or allow the grid to size the columns based on actual data and available space.
- Either fit the grid in the available width, or use horizontal scrolling.
- Selection similar to DataGrid
- Selection changed microflow, to implement functionality similar to 'listen to grid'
- Define buttons to work with the selected rows. Each button will call a microflow, which receives the current selection.
- Place buttons in another container to put them together with other page elements, like a new button that sits in a container above the widget.
- Enable/disable buttons depending on a boolean or enum value.
- Style rows or cells depending on data in the row. The demo project has an example of this in the Data Types demo.
- Export the data.
## Limitations
- Currently it is not possible to resize columns at runtime.
- Currently the widget uses a default style where it should use the Mendix theme settings.
- References can be used one level deep.
- Due to limitations in the custom widget definition, the attributes and references need to be entered as text rather than selected from a list.
## Atlas UI styling
The *show xx entries* dropdown may look a little squeezed together vertically.
Adding this to your CSS may help; it probably needs a little adaptation to your theme.
``` CSS
div.datatables div.dataTables_wrapper div.dataTables_length select {
height: 34px;
padding: 0px 5px;
}
```
## Backlog
- Currently only XPath can be used to get the grid data.
## Configuration
- Define an entity that acts as context object for the grid.
- Insert the widget in a page
- Configure the properties
## The context entity
This widget needs a context entity:
- To allow microflows to force a refresh
- To get XPath constraints from a String attribute
- To use search filters
It is advised to use a non-persistent entity.
## Button and selection microflows
When the table has some form of multiple selection, there is a little snag with the parameters of microflows that receive the selection:
- When the user selects one row, that row is passed to an object parameter of the same type as your table entity.
- When the user selects multiple rows, the rows are passed to a list parameter of the same type as your table entity.
So, for multiple selection you need to have two parameters and check which one is actually passed (not empty). The demo project has an example of this.
Mendix hides this little easter egg for the default DataGrid: for multiple selection, the list parameter will just get the one object when only one object is selected.
## Custom row and cell styling
The widget allows row and cell styling based on attribute values. To use a value for styling, turn on its TR data attribute flag. The value will then be included as data- attribute on the table row. Using CSS selection, you can apply custom styling.
It is also possible to apply the styling to a single cell. In addition to the TR data flag for the value attribute, turn on the TD data attribute for the column you want to style.
The demo project has examples of this.
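As a rough illustration, assuming a `Status` attribute is exposed on the row and a `DueDate` column has its TD flag set (the attribute names and the exact `data-` attribute names below are assumptions, so inspect the rendered rows in your browser's developer tools to see what the widget actually emits for your configuration), a rule along these lines could style the grid:

``` CSS
/* Hypothetical selectors: verify the real data- attribute names in the DOM. */
div.datatables tr[data-status="Overdue"] td {
    background-color: #fcebea; /* tint the whole row */
}
div.datatables tr[data-status="Overdue"] td[data-duedate] {
    font-weight: bold; /* emphasize just the due-date cell */
}
```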
## Properties
For the class properties, multiple classes can be entered, separated by a single space.
### Datasource
- _Table entity_ - This is the entity that is displayed in the grid.
- _Refresh attribute_ - The attribute used to force a refresh. Set it to true and change with refresh in client to trigger a refresh. The widget will set it to false.
- _Keep scroll pos attribute_ - Used together with the refresh attribute. Set it to true to keep the page or scroll position after a refresh. If this attribute is not used, the scroll position will always be reset for each full refresh.
- _XPath constraint_ - Optional. Set the attribute value to an XPath constraint, without the surrounding [ and ]
- _Allow multi column sort_ - Allow multiple column sort, default no. If yes, end user can use shift-click to sort on multiple columns.
- _Show progress getting data_ - Show progress bar while getting data. Usually not necessary but if your XPath is complex it can be useful.
#### Column definitions
For each column, add an item to the list
##### Common
- _Attribute name_ - The name of the attribute to be displayed in the column. This is case sensitive
- _Reference name_ - For one level deep references, this is the reference name, case sensitive
- _Caption_ - The column caption, translatable.
- _Header tooltip_ - Optional. Column header tooltip, translatable
- _Allow sort_ - By default all columns are sortable. Turn this off for calculated attributes.
##### Date values
- _Date/time type_ - Display DateTime attribute as date, date/time, or time only
- _Date formats_ - Formats to use for date/time values. Translatable to allow different formats for each language.
##### Layout
- _Visible_ - Sets whether column is initially visible. Combine with the column visibility setting.
- _Responsive priority_ - Responsive priority, 0 is the highest, columns with higher numbers will be hidden first when the grid does not fit in the current display size.
- _Column width_ - Optional. Specify width, value is used exactly as you enter it: 20%, 150px, 5em, etc
- _Header class_ - Optional. Specify class(es) to be put on the column header.
- _Cell class_ - Optional. Specify class(es) to be put on each cell in the column.
- _Group digits_ - Whether to group digits with thousand separators, for integer, long and decimal
- _Decimal positions_ - Decimal positions, decimal data type only.
##### Extra
- _TR data attr_ - Include the value as data- attribute on the table row. Useful for styling with CSS selection.
- _TD data attr_ - Include the attribute name as data- attribute on the table cell. Useful for styling with CSS selection. Not done by default on all cells because that slows down the table rendering.
#### Attribute search filters
The widget does not display search filter inputs. To provide search filters, define an attribute of the same type on your context entity.
__Filtering on booleans.__ Filtering on booleans is a little tricky because there is no way to tell the difference between off/false and no selection made. To overcome this, the widget expects an enumeration as an attribute on your context entity. The modeler does not allow the value _true_ for an enumeration. For the true value, use enumeration value ___t___. For the false value, use anything else; ___f___ would be a good choice. The caption can be any value.
- _Context entity attribute_ - The attribute on the context entity to get the filter value.
- _Reference name_ - Optional. Reference name (module.refname) to search on an attribute in a referenced entity, can be multiple levels deep.
- _Attribute name_ - Attribute name to filter on, this is case sensitive.
- _Operator_ - Operator to use. For booleans and enumerations, only _Equals_ makes sense.
#### Reference search filters
The widget does not display search filter inputs. To provide search filters, define an association to the same entity on your context entity.
- _Context entity reference_ - The reference from the context entity to the entity to filter on.
- _Reference name_ - The reference (module.reference) from the table entity to the same entity. Note that the reference from the table entity to that entity can span multiple associations.
### Layout
- _Is responsive_ - When turned on, columns will be hidden on smaller screen sizes depending on their responsive priorities
- _Auto column width_ - Control the auto column width feature. Turn off when specifying widths on the columns
- _Allow column reorder_ - When turned on, the user can drag and drop columns to reorder them
- _Table class_ - Specify class(es) to be put on the table
- _Show table information_ - When turned on, display information: Showing 1 to 6 of 50,000 entries
- _Use infinite scroll_ - Use infinite scroll rather than the default paging buttons. Set nowrap on the table class!
- _Horizontal scrolling_ - When true, horizontal scrolling is used. Set nowrap on the table class!
- _Vertical scrolling_ - Optional, any CSS unit, default 200px. When specified, vertical scrolling is used and height of the rows is constrained to the specified height
- _State save name_ - Optional. When specified, grid layout state is saved to and loaded from browser local storage using the specied name. Make sure this name is unique across your application! It is also advisable to put your project name first in each statesaving name, to prevent issues where multiple apps have an order overview grid.
#### Column visibility
- _Visible columns_ - Optional, ignored if Allow column visibility is turned on. If specified, only the columns for which the index is listed here will be shown. Separate values using a comma. The first column has index 0.
- _Allow column visibility_ - When turned on, the user can choose which columns to display
- _Columns button caption_ - Caption of the columns button, translatable
- _Button class_ - Optional. Additional classes to put on the button. When placing the button together with other buttons, be sure to put at least mx-button on it.
- _Placement selector_ - Optional. Places the button relative to the node found using this CSS selector. If empty, button is placed in default container above the table. Can be used to bring buttons together in one container.
- _Placement position_ - Position of the button in the placement container. Only relevant when placement selector has been specified.
#### Built-in table classes
The default DataTables stylesheet has the following class names available to control the different styling features of a DataTable.
Class | Description
--------------- | -----
nowrap | Disable wrapping of content in the table, so all text in the cells is on a single line
table-bordered | Bordered table
table-hover | Row highlighting on mouse over
table-striped | Row striping
table-condensed | More compact table by reducing padding
The nowrap class is DataTables specific, the others are Bootstrap classes.
### Selection
The _Selection changed callback microflow_ can be used to create functionality similar to listen to grid. The demo project has examples of this: The master / detail pages.
- _Selection_ - Selection type
- _Select first row_ - Select first row after displaying data
- _Allow deselect_ - Allow deselect. When off, at least one row must remain selected.
- _Selection changed callback microflow_ - The name of the microflow (Module.Microflow) that is called when the selection has been changed.
### Buttons
The buttons to use for processing selections. Note that buttons are displayed in the widget itself, above the grid. You can place the buttons in a container on your page using the _placement_ properties. This allows you to create one toolbar containing Mendix action buttons and buttons created by the widget. The demo project has an example of this in the Data Types demo.
Buttons can be enabled or disabled depending on a value in the grid entity. Only one attribute, boolean or enumeration, is used for each button to keep the widget simple. If there is a lot of business logic involved, you could introduce a new boolean attribute and set it when your object is changed or in a before commit event. For enumerations, use the internal value, not the caption.
Make sure that the default button is always allowed.
#### Definition
- _Caption_ - Button caption, translatable
- _Name_ - Button name, will be mx-name- class on the button
- _Is default button_ - Is default button, microflow will be called when user doubleclicks a row. Only that row is passed, even if other rows are selected too.
- _Button type_ - Button type, the same as normal Mendix buttons.
- _Class_ - Optional. Specify class(es) to be put on the button
- _Glyphicon classes_ - Optional. Glyphicon classes, like __glyphicon glyphicon-edit__
- _Button microflow_ - The name of the microflow (Module.Microflow) that is called when the button is clicked
- _Show progress_ - Show progress bar
#### Confirmation
- _Ask confirmation_ - Ask for confirmation
- _Question_ - Confirmation question
- _Proceed caption_ - Proceed button caption on the confirmation popup
- _Cancel caption_ - Cancel button caption on the confirmation popup
#### Placement
- _Placement selector_ - Optional. Places the button relative to the node found using this CSS selector. If empty, button is placed in default container above the table. Can be used to bring new and edit button together in one container
- _Placement position_ - Position of the button in the placement container. Only relevant when placement selector has been specified
#### Enabled
- _Enabled attribute name_ - Optional. Direct attribute of the grid entity to control button disabled status.
- _Enabled value_ - Optional. The value for which the button is enabled.
### Export
The export cannot be done on the client because the client only has a subset of the data. Therefore, the export is run on the server. The widget sets two values on the context object for the backend to use. For this to work, add two unlimited string attributes (configuration and XPath constraint) to your context entity and choose those in the widget properties.
The demo project has a generic implementation to create data exports.
Please be sure to turn on the apply entity access setting on your export microflow where necessary.
- _Allow export_ - If on, an additional export button will be displayed
- _Button caption_ - Export button caption
- _Visible only_ - Export visible columns only
- _Export config attribute_ - Export configuration attribute
- _Export XPath attribute_ - Export XPath constraint attribute
- _Export microflow_ - The microflow that does the actual export, receives the context object.
- _Export button type_ - Button type, the same as normal Mendix buttons.
- _Export button class_ - Optional. Specify class(es) to be put on the button
- _Export button glyphicon classes_ - Optional. Glyphicon classes, like __glyphicon glyphicon-edit__
- _Placement selector_ - Optional. Places the button relative to the node found using this CSS selector. If empty, button is placed in default container above the table.
- _Placement position_ - Position of the button in the placement container. Only relevant when placement selector has been specified
### Advanced
- _Placement delay_ - Delay (ms) before placing or moving buttons. Depending on complexity of the page, browsers may need more time to properly render the buttons.
- _Scroll multiplier_ - Scroll buffer multiplier. This value determines how much data is pre-fetched when infinite scroll is used. Lower values cause less data to be requested from the server but will require more server calls when the user keeps scrolling. | 60.56654 | 457 | 0.785172 | eng_Latn | 0.999064 |