# mkref
## Features
- Create a genome reference directory.
## Input
- Genome fasta file
- Genome gtf file
- Mitochondria gene list file (optional)
## Output
- STAR genome index files
- Genome refFlat file
- Genome config file
```
$ cat celescope_genome.config
[genome]
genome_name = Homo_sapiens_ensembl_99
genome_type = rna
fasta = Homo_sapiens.GRCh38.dna.primary_assembly.fa
gtf = Homo_sapiens.GRCh38.99.gtf
refflat = Homo_sapiens_ensembl_99.refFlat
```
## Parameters
`--genome_name` Required, genome name.
`--fasta` Required, genome fasta file.
`--gtf` Required, genome gtf file.
`--mt_gene_list` Mitochondria gene list file. It is a plain text file with one gene per line (see the example below). If not provided, gene names beginning with `MT-` or `mt-` are used to identify mitochondrial genes.
`--genomeDir` Default="./", output directory.
`--thread` Default=6, number of threads to use.
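For example, a human `mt_gene_list` file might look like the following (gene symbols are illustrative; use the names that appear in your GTF):
```
MT-ND1
MT-CO1
MT-ATP6
MT-CYB
```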
## Usage examples
- Human genome
```
celescope rna mkref \
--genome_name Homo_sapiens_ensembl_99 \
--fasta Homo_sapiens.GRCh38.dna.primary_assembly.fa \
--gtf Homo_sapiens.GRCh38.99.gtf
```
- Mouse genome
```
celescope rna mkref \
--genome_name Mus_musculus_ensembl_99 \
--fasta Mus_musculus.GRCm38.dna.primary_assembly.fa \
--gtf Mus_musculus.GRCm38.99.gtf
```
CHALLENGE
=========
- Create an HTML page called *box.html* using this template:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Box Model</title>
<style type="text/css">
/* CSS here */
</style>
</head>
<body>
<div class=""></div>
</body>
</html>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Inside the `<head>` and between the `<style>` tags, create a CSS class
named `box` with the following CSS rules:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.box{
width:300px;
height: 300px;
background-color:#81BBC9;
margin: 50px;
border: 10px dashed #000;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- In the HTML, give the `<div>` the class name of `box`.
- Open the HTML page in your browser.
- Modify all the values of the code and see how it affects the expected
output.
Did you get the following result?

# How to contribute
### Sign our Contributor License Agreement (CLA)
Even for small changes, we ask that you please sign the CLA electronically
[here](https://developers.google.com/open-source/cla/individual).
The CLA is necessary because you own the copyright to your changes, even
after your contribution becomes part of our codebase, so we need your permission
to use and distribute your code. You can find more details
[here](https://code.google.com/p/dart/wiki/Contributing).
### Find the right place to change the code
This repo contains a lot of code that is developed elsewhere. For example, all
element definitions under `lib/src/` are downloaded automatically from
other repositories using `bower`.
If you would like to update HTML or Javascript code on a specific element, you
probably need to send a pull request in a different repository. The code for a
specific element lives under a repo of the same name under the [PolymerElements
organization](https://github.com/polymerelements/). For example,
`lib/src/iron-input/iron-input.html` is actually developed on the [iron-input
repo](https://github.com/polymerelements/iron-input), if you send the fix there,
we will get it automatically next time we run our update scripts.
In addition, the Dart libraries for each element are generated automatically by
running `pub run custom_element_apigen:update polymer_elements_config.yaml`.
If you include changes to the code generation tool, please also run that script
to make sure the generated code is up to date.
# NetMauMau
- Home: https://github.com/velnias75/NetMauMau, https://sourceforge.net/projects/netmaumau/
- State: mature, inactive since 2015
- Download: https://github.com/velnias75/NetMauMau/releases, https://sourceforge.net/projects/netmaumau/files/
- Platform: Windows, Linux
- Keyword: cards, role playing, content open, online
- Code repository: https://github.com/velnias75/NetMauMau.git (@created 2014, @stars 16, @forks 3)
- Code language: C++
- Code license: LGPL-3.0
- Code dependency: Qt
- Assets license: ? (LGPL)
- Developer: Heiko Schäfer
Online version of the multiplayer card game Mau Mau, which is a game with some similarities to Uno.
## Building
- Build system: Make
---
title: Azure Service Fabric application and cluster best practices | Microsoft Docs
description: Best practices for managing Service Fabric clusters and applications
services: service-fabric
documentationcenter: .net
author: peterpogorski
manager: jeanpaul.connock
editor: ''
ms.assetid: 19ca51e8-69b9-4952-b4b5-4bf04cded217
ms.service: service-fabric
ms.devlang: dotNet
ms.topic: conceptual
ms.tgt_pltfrm: NA
ms.workload: NA
ms.date: 01/23/2019
ms.author: pepogors
ms.openlocfilehash: 06240ac08a12b67e95b4cb9b9a33fcca32de45a8
ms.sourcegitcommit: 97d0dfb25ac23d07179b804719a454f25d1f0d46
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 01/25/2019
ms.locfileid: "54914427"
---
# <a name="azure-service-fabric-application-and-cluster-best-practices"></a>Azure Service Fabric application and cluster best practices
To manage your Azure Service Fabric clusters and applications, there are operations we strongly recommend you perform to optimize the reliability of your production environment. Perform the operations described in this document, then select one of our [Azure Service Fabric cluster sample templates](https://github.com/Azure-Samples/service-fabric-cluster-templates) to design your production solution, or modify your existing template to incorporate these practices.
## <a name="security"></a>Security
* [Security best practices](service-fabric-best-practices-security.md)
## <a name="networking"></a>Networking
* [Networking best practices](service-fabric-best-practices-networking.md)
## <a name="compute-planning-and-scaling"></a>Compute planning and scaling
* [Compute scaling best practices](service-fabric-best-practices-capacity-scaling.md)
* [Compute capacity planning](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-capacity)
## <a name="infrastructure-as-code"></a>Infrastructure as code
* [Infrastructure as code best practices](service-fabric-best-practices-infrastructure-as-code.md)
## <a name="monitoring-and-diagnostics"></a>Monitoring and diagnostics
* [Cluster monitoring and diagnostics best practices](service-fabric-best-practices-monitoring.md)
## <a name="checklist"></a>Checklist
Once you have read all the sections above, make sure you have incorporated all the best practices from the production readiness checklist:
* [Azure Service Fabric production readiness checklist](https://docs.microsoft.com/azure/service-fabric/service-fabric-production-readiness-checklist)
## <a name="next-steps"></a>Next steps
* Create a cluster on VMs or computers running Windows Server: [Service Fabric cluster creation for Windows Server](service-fabric-cluster-creation-for-windows-server.md)
* Create a cluster on VMs or computers running Linux: [Create a Linux cluster](service-fabric-cluster-creation-via-portal.md)
* Troubleshooting: [Service Fabric troubleshooting guide](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides)
b91642ac4b31d64fa5315ec918b90fc1cb7d866a | 2,299 | md | Markdown | README.md | nvthuong1996/node-ntp | faa581b6e79992c0ea43ac9214435d33f62a6acb | [
"MIT"
] | null | null | null | README.md | nvthuong1996/node-ntp | faa581b6e79992c0ea43ac9214435d33f62a6acb | [
"MIT"
] | null | null | null | README.md | nvthuong1996/node-ntp | faa581b6e79992c0ea43ac9214435d33f62a6acb | [
"MIT"
] | 1 | 2020-01-09T23:46:03.000Z | 2020-01-09T23:46:03.000Z | ## ntp2
Simple Network Time Protocol (SNTP) implementation for Node.js.

[](https://travis-ci.org/song940/node-ntp)
### Installation
```bash
$ npm i ntp2
```
### Example
```js
const ntp = require('ntp2');
ntp.time(function(err, response){
console.log('The network time is :', response.time);
});
```
SNTP server example:
```js
const ntp = require('ntp2');
const server = ntp.createServer(function(message, response){
console.log('server message:', message);
message.transmitTimestamp = Date.now();
response(message);
}).listen(123, function(err){
console.log('server is running at %s', server.address().port);
});
```
### API
- ntp2.Server()
- ntp2.Client()
- ntp2.createServer()
### SPEC
+ https://tools.ietf.org/html/rfc2030
+ https://tools.ietf.org/html/rfc4330
### Contributing
- Fork this Repo first
- Clone your Repo
- Install dependencies by `$ npm install`
- Checkout a feature branch
- Feel free to add your features
- Make sure your features are fully tested
- Publish your local branch, open a pull request
- Enjoy hacking <3
### MIT license
Copyright (c) 2016 Lsong <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
---
# get-ip-address-with-python
This repository gives an example of how to get the IP address of the local computer using Python.
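A minimal sketch of one common approach (not necessarily the exact code in this repository) uses the standard library `socket` module:
```python
import socket

def get_local_ip():
    # Opening a UDP socket toward a public address sends no packets,
    # but makes the OS pick the local interface (and IP) it would route from.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    print(get_local_ip())
```
Note that `socket.gethostbyname(socket.gethostname())` is simpler, but it can return `127.0.0.1` depending on the hosts file, which is why the UDP trick above is often preferred.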
---
templateKey: index-page
title: Helping your business build the future.
image: /img/wearekick.jpg
heading: We build integrated e-commerce websites for the merchant sector
mainpitch:
title: What we can do for You
description: Blurb can go here if needed...
description: >-
Kaldi is the ultimate spot for coffee lovers who want to learn about their
java’s origin and support the farmers that grew it. We take coffee production,
roasting and brewing seriously and we’re glad to pass that knowledge to
anyone.
intro:
blurbs:
- text: >-
We specialise in creating interactive, engaging websites for our clients
bespoke to their audience, products, services, brand and future
strategy.
title: Bespoke Website Design
image: /img/bespoke_websites.png
- text: >-
We put the strategy into great web design. Without a considered digital
marketing plan designed around your goals your website has less value.
title: Digital Marketing
image: /img/digital_marketing.png
- text: >-
Carefully considered cross platform apps for desktop, tablet and mobile.
We build apps that provide engaging, memorable user experiences.
title: Application Development
image: /img/application_dev.png
heading: What we offer
description: >
Kaldi is the ultimate spot for coffee lovers who want to learn about their
java’s origin and support the farmers that grew it. We take coffee
production, roasting and brewing seriously and we’re glad to pass that
knowledge to anyone. This is an edit via identity...
main:
heading: Great coffee with no compromises
description: >
We hold our coffee to the highest standards from the shrub to the cup.
That’s why we’re meticulous and transparent about each step of the coffee’s
journey. We personally visit each farm to make sure the conditions are
optimal for the plants, farmers and the local environment.
image1:
alt: A close-up of a paper filter filled with ground coffee
image: /img/products-grid3.jpg
image2:
alt: A green cup of a coffee on a wooden table
image: /img/products-grid2.jpg
image3:
alt: Coffee beans
image: /img/products-grid1.jpg
---
---
templateKey: product-page
title: High quality 30w 40w 80w 120w cob aluminum alloy led street light price
category: Vial
images:
- alt: Product Image
image: /img/15644473547515.jpg
material: High-quality die-cast aluminum alloy.
consumption: 30W 40W 80W 120W
voltage: '90-265V '
equipment: ''
colorTemperature: 3000-7000K
cri: Ra≥85
beamAngle: 60° 120°
ip: IP65
---
[![Build Status](https://travis-ci.org/theborakompanioni/marvin-dm.svg?branch=master)](https://travis-ci.org/theborakompanioni/marvin-dm)
[](https://github.com/theborakompanioni/marvin-dm/blob/master/LICENSE)
marvin-dm
===
Identifies your out of date project dependencies and shows the latest version you need to upgrade to.
YOU DEPEND ON OTHER PROJECTS.
YOU WANT TO STAY UP TO DATE.
MARVIN'S GOT YOUR BACK.
Got a [maven](https://maven.apache.org/) project? Get a badge.
Marvin gets you an overview of your project dependencies, the versions you use and the latest available, so you can quickly see what's drifting.
Then it's all boiled down into a badge showing the current status, which you can embed on your site.
Inspired by [david](https://david-dm.org/).
License
-------
The project is licensed under the Apache License. See [LICENSE](LICENSE) for details.
---
title: "Quickstart: Transliterate text, Python - Translator Text API"
titleSuffix: Azure Cognitive Services
description: In this quickstart, you'll learn how to transliterate (convert) text from one script to another using Python and the Translator Text REST API. In this sample, Japanese is transliterated to use the Latin alphabet.
services: cognitive-services
author: erhopf
manager: cgronlun
ms.service: cognitive-services
ms.component: translator-text
ms.topic: quickstart
ms.date: 10/29/2018
ms.author: erhopf
---
# Quickstart: Use the Translator Text API to transliterate text using Python
In this quickstart, you'll learn how to transliterate (convert) text from one script to another using Python and the Translator Text REST API. In the sample provided, Japanese is transliterated to use the Latin alphabet.
This quickstart requires an [Azure Cognitive Services account](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account) with a Translator Text resource. If you don't have an account, you can use the [free trial](https://azure.microsoft.com/try/cognitive-services/) to get a subscription key.
## Prerequisites
This quickstart requires:
* Python 2.7.x or 3.x
* An Azure subscription key for Translator Text
## Create a project and import required modules
Create a new Python project using your favorite IDE or editor. Then copy this code snippet into your project in a file named `transliterate-text.py`.
```python
# -*- coding: utf-8 -*-
import os, requests, uuid, json
```
> [!NOTE]
> If you haven't used the `requests` module before, you'll need to install it before running your program: `pip install requests`. The `os`, `uuid`, and `json` modules are part of the Python standard library.
The first comment tells your Python interpreter to use UTF-8 encoding. Then required modules are imported to read your subscription key from an environment variable, construct the http request, create a unique identifier, and handle the JSON response returned by the Translator Text API.
## Set the subscription key, base url, and path
This sample will try to read your Translator Text subscription key from the environment variable `TRANSLATOR_TEXT_KEY`. If you're not familiar with environment variables, you can set `subscriptionKey` as a string and comment out the conditional statement.
Copy this code into your project:
```python
# Checks to see if the Translator Text subscription key is available
# as an environment variable. If you are setting your subscription key as a
# string, then comment these lines out.
if 'TRANSLATOR_TEXT_KEY' in os.environ:
subscriptionKey = os.environ['TRANSLATOR_TEXT_KEY']
else:
print('Environment variable for TRANSLATOR_TEXT_KEY is not set.')
exit()
# If you want to set your subscription key as a string, uncomment the line
# below and add your subscription key.
#subscriptionKey = 'put_your_key_here'
```
Currently, one endpoint is available for Translator Text, and it's set as the `base_url`. `path` sets the `transliterate` route and identifies that we want to hit version 3 of the API.
The `params` are used to set the input language, and the input and output scripts. In this sample, we're transliterating from Japanese to the Latin alphabet.
>[!NOTE]
> For more information about endpoints, routes, and request parameters, see [Translator Text API 3.0: Transliterate](https://docs.microsoft.com/azure/cognitive-services/translator/reference/v3-0-transliterate).
```python
base_url = 'https://api.cognitive.microsofttranslator.com'
path = '/transliterate?api-version=3.0'
params = '&language=ja&fromScript=jpan&toScript=latn'
constructed_url = base_url + path + params
```
## Add headers
The easiest way to authenticate a request is to pass in your subscription key as an
`Ocp-Apim-Subscription-Key` header, which is what we use in this sample. As an alternative, you can exchange your subscription key for an access token, and pass the access token along as an `Authorization` header to validate your request. For more information, see [Authentication](https://docs.microsoft.com/azure/cognitive-services/translator/reference/v3-0-reference#authentication).
Copy this code snippet into your project:
```python
headers = {
'Ocp-Apim-Subscription-Key': subscriptionKey,
'Content-type': 'application/json',
'X-ClientTraceId': str(uuid.uuid4())
}
```
## Create a request to transliterate text
Define the string (or strings) that you want to transliterate:
```python
# Transliterate "good afternoon" from source Japanese.
# Note: You can pass more than one object in body.
body = [{
'text': 'こんにちは'
}]
```
Next, we'll create a POST request using the `requests` module. It takes three arguments: the concatenated URL, the request headers, and the request body:
```python
request = requests.post(constructed_url, headers=headers, json=body)
response = request.json()
```
## Print the response
The last step is to print the results. This code snippet prettifies the results by sorting the keys, setting indentation, and declaring item and key separators.
```python
print(json.dumps(response, sort_keys=True, indent=4, ensure_ascii=False, separators=(',', ': ')))
```
## Put it all together
That's it, you've put together a simple program that will call the Translator Text API and return a JSON response. Now it's time to run your program:
```console
python transliterate-text.py
```
If you'd like to compare your code against ours, the complete sample is available on [GitHub](https://github.com/MicrosoftTranslator/Text-Translation-API-V3-Python).
## Sample response
```json
[
{
"script": "latn",
"text": "konnnichiha"
}
]
```
## Clean up resources
If you've hardcoded your subscription key into your program, make sure to remove the subscription key when you're finished with this quickstart.
## Next steps
> [!div class="nextstepaction"]
> [Explore Python examples on GitHub](https://github.com/MicrosoftTranslator/Text-Translation-API-V3-Python)
## See also
Learn how to use the Translator Text API to:
* [Translate text](quickstart-python-translate.md)
* [Identify the language by input](quickstart-python-detect.md)
* [Get alternate translations](quickstart-python-dictionary.md)
* [Get a list of supported languages](quickstart-python-languages.md)
* [Determine sentence lengths from an input](quickstart-python-sentences.md)
---
title: Best practices for developing secure WebView2 applications
description: How to develop secure WebView2 applications.
author: MSEdgeTeam
ms.author: msedgedevrel
ms.topic: conceptual
ms.prod: microsoft-edge
ms.technology: webview
ms.date: 10/14/2020
---
# Best practices for developing secure WebView2 applications
The [WebView2 control](../index.md) allows developers to host web content in native applications. When used correctly, hosting web content offers several advantages, such as using web-based UI, accessing features of the web platform, sharing code cross-platform, and so on. To avoid vulnerabilities that can arise from hosting web content, make sure to design your WebView2 application to closely monitor interactions between the web content and the host application.
* Treat all web content as insecure:
* Validate web messages and host object parameters before consuming them, because web messages and parameters can be malformed (unintentionally or maliciously) and can cause the app to behave unexpectedly.
* Always check the origin of the document that's running inside WebView2, and assess the trustworthiness of the content.
* Design specific web messages and host object interactions, instead of using generic proxies.
* Set the following options to restrict web content functionality, by modifying [ICoreWebView2Settings (Win32)](/microsoft-edge/webview2/reference/win32/icorewebview2settings) or [CoreWebView2Settings (.NET)](/dotnet/api/microsoft.web.webview2.core.corewebview2settings):
* Set `AreHostObjectsAllowed` to `false`, if you don't expect the web content to access host objects.
* Set `IsWebMessageEnabled` to `false`, if you don't expect the web content to post web messages to your native application.
* Set `IsScriptEnabled` to `false`, if you don't expect the web content to run scripts (for example, when showing static HTML content).
* Set `AreDefaultScriptDialogsEnabled` to `false`, if you don't expect the web content to show `alert` or `prompt` dialog boxes.
* Update settings based on the origin of the new page:
* To prevent your application from navigating to certain pages, use the `NavigationStarting` and `FrameNavigationStarting` events to check page or frame navigation, and then conditionally block the navigation.
* When navigating to a new page, you may need to adjust the property values on [ICoreWebView2Settings (Win32)](/microsoft-edge/webview2/reference/win32/icorewebview2settings) or [CoreWebView2Settings (.NET)](/dotnet/api/microsoft.web.webview2.core.corewebview2settings), as previously described.
* When navigating to a new document, use the `ContentLoading` event and `RemoveHostObjectFromScript` to remove exposed host objects.
* WebView2 cannot be run as a system user. This restriction blocks scenarios such as building a Credential Provider.
<!-- ====================================================================== -->
<!--
## Security
Always check the Source property of the WebView before using `ExecuteScript`, `PostWebMessageAsJson`, `PostWebMessageAsString`, or any other method to send information into the WebView. The WebView may have navigated to another page via the end user interacting with the page or script in the page causing navigation. Similarly, be very careful with `AddScriptToExecuteOnDocumentCreated`. All future `navigations` run the same script and if it provides access to information intended only for a certain origin, any HTML document may have access.
When examining the result of an `ExecuteScript` method call, a `WebMessageReceived` event, always check the Source of the sender, or any other mechanism of receiving information from an HTML document in a WebView validate the URI of the HTML document is what you expect.
When constructing a message to send into a WebView, prefer using `PostWebMessageAsJson` and construct the JSON string parameter using a JSON library. This avoids any potential accidents of encoding information into a JSON string or script and ensure no attacker controlled input can modify the rest of the JSON message or run arbitrary script. -->
# Voxelmon Toolkit
A simple Java Swing program that allows the user to modify IXL files (Voxelmon data files) as well as import and export them as JSON.
# Futurism
[Metaculus](https://www.metaculus.com/questions/) - "Metaculus is a community dedicated to generating accurate predictions about future real-world events by aggregating the collective wisdom, insight, and intelligence of its participants."
## Futures Thinking
[Backcasting - Wikipedia](https://en.wikipedia.org/wiki/Backcasting) \#article - "**Backcasting** is a planning method that starts with defining a desirable future and then works backwards to identify policies and programs that will connect that specified future to the present.[\[1\]](https://en.wikipedia.org/wiki/Backcasting#cite_note-1) The fundamentals of the method were outlined by John B. Robinson from the [University of Waterloo](https://en.wikipedia.org/wiki/University_of_Waterloo) in 1990.[\[2\]](https://en.wikipedia.org/wiki/Backcasting#cite_note-Robinson_1990-2) The fundamental question of backcasting asks: "if we want to attain a certain goal, what actions must be taken to get there?"[\[3\]](https://en.wikipedia.org/wiki/Backcasting#cite_note-3)[\[4\]](https://en.wikipedia.org/wiki/Backcasting#cite_note-4)"
[Excel: Scenario Planning and Analysis](https://www.lynda.com/Excel-tutorials/Excel-Scenario-Planning-Analysis/636107-2.html?srchtrk=index%3a1%0alinktypeid%3a2%0aq%3ascenario+planning%0apage%3a1%0as%3arelevance%0asa%3atrue%0aproducttypeid%3a2) \#course - "A multitude of factors can affect the trajectory of your business. Learning how to document, summarize, and present projected business scenarios can help provide a basis for insightful business analysis, and help you evaluate the impact of various choices on your organization. In this course, explore techniques for analyzing a series of business scenarios using the flexible and powerful capabilities built into Excel. The course starts with a chapter on the art and craft of scenario planning before turning to the technical capabilities of Excel. Tools covered include row grouping to show and hide detail, PivotTables, and functions for using the normal distribution."
[The Fourth Way: Design Thinking Meets Futures Thinking](https://medium.com/@anna.roumiantseva/the-fourth-way-design-thinking-meets-futures-thinking-85793ae3aa1e) \#article - "They say “hindsight is 20/20”. If only you knew then what you know now, you would have sold that stock, ended that relationship, or taken that job offer in a snap. Of course the tricky part is being able to make those decisions in the present, but how do you do that without knowing what’s lurking around the corner? I want to argue that by making Futures Thinking a standard part of your thought process — both in your business and personal lives — you’ll be able to make better decisions in the face of uncertainty."
[Futures Thinking: A Mind-set, not a Method - Touchpoint - Medium](https://medium.com/touchpoint/futures-thinking-a-mind-set-not-a-method-64c9b5f9da37) \#article - "Design practices are becoming increasingly future-focussed, reflecting the complexities of the design challenges that we face. Futures thinking can offer us tools and methods to help with this, but more than that, it might offer us a new way of seeing the world that we design for."
[Futures Thinking: The Basics](https://www.fastcompany.com/1362037/futures-thinking-basics) \#article - "For nearly the past fifteen years, I’ve been working as a futurist. My job has been to provide people with insights into emerging trends and issues, to allow them to do _their_ jobs better. I’ve done this work for big companies and government agencies \(usually under the Very Professional sounding title of strategic foresight\), and for TV writers and game companies. It’s quite an enjoyable job, as it allows me to indulge my easily-distracted curiosity about the world."
[Futures Thinking \| Coursera](https://www.coursera.org/specializations/futures-thinking) \#course - "Ready Yourself for a Changing World. Learn the skills and mindsets of the world’s top futurists, so you can forecast what’s coming, imagine new possibilities, and seize control of your own future"
[Leading Like a Futurist](https://www.lynda.com/Business-tutorials/Leading-like-Futurist/2813223-2.html?srchtrk=index%3a1%0alinktypeid%3a2%0aq%3abackcasting%0apage%3a1%0as%3arelevance%0asa%3atrue%0aproducttypeid%3a2) \#course - "Futures-centered thinking may be the most important leadership skill you've never been taught. It’s a discipline and set of practices that everyone can learn. This course is about helping you see, shape, and share the future by improving your strategic thinking and foresight skills, your comfort with ambiguity, and your ability to communicate opportunities in a way that gets others excited, too. Lisa Kay Solomon—an expert on leadership, futures, and design thinking—explains how to imagine new futures, avoid being blindsided, and take steps to bring preferred futures to life. She also explains how to move beyond traditional ways of innovating and assessing risk by learning how to scan and capitalize on external trends and forces that are accelerating the spread of change and disruption."
[Pre-mortem - Wikipedia](https://en.wikipedia.org/wiki/Pre-mortem) \#article - "A **pre-mortem**, or **premortem**, is a [managerial strategy](https://en.wikipedia.org/wiki/Management) in which a project team imagines that a project or organization has failed, and then works backward to determine what potentially could lead to the failure of the project or organization.[\[1\]](https://en.wikipedia.org/wiki/Pre-mortem#cite_note-hbr-1)[\[2\]](https://en.wikipedia.org/wiki/Pre-mortem#cite_note-2)"
[Scenario planning - Wikipedia](https://en.wikipedia.org/wiki/Scenario_planning) \#article - "**Scenario planning**, also called **scenario thinking** or [**scenario analysis**](https://en.wikipedia.org/wiki/Scenario_analysis), is a [strategic planning](https://en.wikipedia.org/wiki/Strategic_planning) method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by [military intelligence](https://en.wikipedia.org/wiki/Military_intelligence).[\[1\]](https://en.wikipedia.org/wiki/Scenario_planning#cite_note-1)"
[TOOLS: BACKCASTING FROM SCENARIOS — innovate change](https://www.innovatechange.co.nz/news/2015/6/21/backcasting-from-scenarios) \#article - "Imagining a desired future can inspire strategy and action, but the path to success is not always obvious. At **innovate change**, we use future focused scenarios to provide a picture of the future, then we ‘backcast’ to work out how to get there. Developing policy, services or programmes from this perspective helps us to imagine the impact of our plans on real people."
---
id: version-2.5.0-io-twitter-source
title: Twitter Firehose source connector
sidebar_label: Twitter Firehose source connector
original_id: io-twitter-source
---
The Twitter Firehose source connector receives tweets from Twitter Firehose and
writes the tweets to Pulsar topics.
## Configuration
The configuration of the Twitter Firehose source connector has the following properties.
### Property
| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `consumerKey` | String | true | " " (empty string) | The twitter OAuth consumer key.<br><br>For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
| `consumerSecret` | String | true | " " (empty string) | The twitter OAuth consumer secret. |
| `token` | String | true | " " (empty string) | The twitter OAuth token. |
| `tokenSecret` | String | true | " " (empty string) | The twitter OAuth secret. |
| `guestimateTweetTime` | Boolean | false | false | Most firehose events have a null createdAt time.<br><br>If `guestimateTweetTime` is set to true, the connector estimates the createdAt time of each firehose event to be the current time. |
| `clientName` | String | false | openconnector-twitter-source | The twitter firehose client name. |
| `clientHosts` | String | false | Constants.STREAM_HOST | The twitter firehose hosts to which the client connects. |
| `clientBufferSize` | int | false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. |
> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).
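For illustration, a source configuration file supplying these properties might look like the following. This is a hypothetical sketch: the values are placeholders, and the wrapper fields around `configs` depend on how you create the source in your Pulsar deployment.

```yaml
tenant: "public"
namespace: "default"
name: "twitter-firehose-source"
topicName: "persistent://public/default/twitter_data"
configs:
    consumerKey: "<twitter-consumer-key>"
    consumerSecret: "<twitter-consumer-secret>"
    token: "<twitter-token>"
    tokenSecret: "<twitter-token-secret>"
    guestimateTweetTime: true
```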
---
title: Overview of the reliability pillar
description: Describes the reliability pillar
author: david-stanford
ms.date: 10/21/2019
ms.topic: conceptual
ms.service: architecture-center
ms.subservice: well-architected
ms.custom:
- fasttrack-edit
- overview
---
# Overview of the reliability pillar
Building a reliable application in the cloud is different from traditional application development. While historically you may have purchased levels of redundant higher-end hardware to minimize the chance of an entire application platform failing, in the cloud, we acknowledge up front that failures will happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component.
To assess your workload using the tenets found in the Microsoft Azure Well-Architected Framework, see the [Microsoft Azure Well-Architected Review](/assessments/?id=azure-architecture-review&mode=pre-assessment).
Reliable applications are:
- **Resilient** and recover gracefully from failures, and they continue to function with minimal downtime and data loss before full recovery.
- **Highly available (HA)** and run as designed in a healthy state with no significant downtime.
Understanding how these elements work together — and how they affect cost — is essential to building a reliable application. It can help you determine how much downtime is acceptable, the potential cost to your business, and which functions are necessary during a recovery.
This article provides a brief overview of building reliability into each step of the Azure application design process. Each section includes a link to an in-depth article on how to integrate reliability into that specific step in the process. If you're looking for reliability considerations for individual Azure services, review the [Resiliency checklist for specific Azure services](../../checklist/resiliency-per-service.md).
## Build for reliability
This section describes six steps for building a reliable Azure application. Each step links to a section that further defines the process and terms.
1. [**Define requirements.**](#define-requirements) Develop availability and recovery requirements based on decomposed workloads and business needs.
1. [**Use architectural best practices.**](#use-architectural-best-practices) Follow proven practices, identify possible failure points in the architecture, and determine how the application will respond to failure.
1. [**Test with simulations and forced failovers.**](#test-with-simulations-and-forced-failovers) Simulate faults, trigger forced failovers, and test detection and recovery from these failures.
1. [**Deploy the application consistently.**](#deploy-the-application-consistently) Release to production using reliable and repeatable processes.
1. [**Monitor application health.**](#monitor-application-health) Detect failures, monitor indicators of potential failures, and gauge the health of your applications.
1. [**Respond to failures and disasters.**](#respond-to-failures-and-disasters) Identify when a failure occurs, and determine how to address it based on established strategies.
## Define requirements
Identify your business needs, and build your reliability plan to address them. Consider the following:
- **Identify workloads and usage.** A *workload* is a distinct capability or task that is logically separated from other tasks, in terms of business logic and data storage requirements. Each workload has different requirements for availability, scalability, data consistency, and disaster recovery.
- **Plan for usage patterns.** *Usage patterns* also play a role in requirements. Identify differences in requirements during critical and non-critical periods. For example, a tax-filing application can't fail during a filing deadline. To ensure uptime, plan redundancy across several regions in case one fails. Conversely, to minimize costs during non-critical periods, you can run your application in a single region.
- **Identify critical components and paths.** Not all components of your system might be as important as others. For example, you might have an optional component that adds incremental value, but that the workload can run without if necessary. Understand where these components are, and conversely, where the critical parts of your workload are. This will help to scope your availability and reliability metrics and to plan your recovery strategies to prioritize the highest-importance components.
- **Establish availability metrics — *mean time to recovery* (MTTR) and *mean time between failures* (MTBF).** MTTR is the average time it takes to restore a component after a failure. MTBF is how long a component can reasonably expect to last between outages. Use these measures to determine where to add redundancy and to determine service-level agreements (SLAs) for customers.
- **Establish recovery metrics — recovery time objective (RTO) and recovery point objective (RPO).** *RTO* is the maximum acceptable time an application can be unavailable after an incident. *RPO* is the maximum duration of data loss that is acceptable during a disaster. To derive these values, conduct a risk assessment and make sure you understand the cost and risk of downtime or data loss in your organization.
> [!NOTE]
> If the MTTR of *any* critical component in a highly available setup exceeds the system RTO, a failure in the system might cause an unacceptable business disruption. That is, you can't restore the system within the defined RTO.
- **Determine workload availability targets.** To ensure that application architecture meets your business requirements, define target SLAs for each workload. Account for the cost and complexity of meeting availability requirements, in addition to application dependencies.
- **Understand service-level agreements.** In Azure, the SLA describes the Microsoft commitments for uptime and connectivity. If the SLA for a particular service is 99.9 percent, you should expect the service to be available 99.9 percent of the time.
Define your own target SLAs for each workload in your solution, so you can determine whether the architecture meets the business requirements. For example, if a workload requires 99.99 percent uptime but depends on a service with a 99.9 percent SLA, that service can't be a single point of failure in the system.
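A quick way to sanity-check a target is to multiply the SLAs of the services a workload depends on in series. The sketch below uses illustrative numbers, not official SLA figures:

```python
# Estimate the composite SLA of a workload whose request path requires
# every listed service to be available (series dependency).
slas = {
    "compute": 0.9995,
    "database": 0.9999,
    "messaging": 0.999,
}

composite = 1.0
for sla in slas.values():
    composite *= sla

minutes_per_year = 365 * 24 * 60
print(f"Composite SLA: {composite:.4%}")                    # ~99.84%
print(f"Allowed downtime: {(1 - composite) * minutes_per_year:.0f} min/year")
```

Because the composite (about 99.84 percent) is lower than any individual SLA in the chain, a workload with a 99.99 percent target needs redundancy rather than a single series chain of dependencies.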
For more information about developing requirements for reliable applications, see [Application design for resiliency](/azure/architecture/framework/resiliency/design-apps.md).
## Use architectural best practices
During the architectural phase, focus on implementing practices that meet your business requirements, identify failure points, and minimize the scope of failures.
- **Perform a failure mode analysis (FMA).** FMA builds resiliency into an application early in the design stage. It helps you identify the types of failures your application might experience, the potential effects of each, and possible recovery strategies.
- **Create a redundancy plan.** The level of redundancy required for each workload depends on your business needs and factors into the overall cost of your application.
- **Design for scalability.** A cloud application must be able to scale to accommodate changes in usage. Begin with discrete components, and design the application to respond automatically to load changes whenever possible. Keep scaling limits in mind during design so you can expand easily in the future.
- **Plan for subscription and service requirements.** You might need additional subscriptions to provision enough resources to meet your business requirements for storage, connections, throughput, and more.
- **Use load-balancing to distribute requests.** Load-balancing distributes your application's requests to healthy service instances by removing unhealthy instances from rotation.
- **Implement resiliency strategies.** *Resiliency* is the ability of a system to recover from failures and continue to function. Implement [resiliency design patterns](/azure/architecture/framework/resiliency/resiliency-patterns.md), such as isolating critical resources, using compensating transactions, and performing asynchronous operations whenever possible. A minimal retry sketch follows this list.
- **Build availability requirements into your design.** *Availability* is the proportion of time your system is functional and working. Take steps to ensure that application availability conforms to your service-level agreement. For example, avoid single points of failure, decompose workloads by service-level objective, and throttle high-volume users.
- **Manage your data.** How you store, back up, and replicate data is critical.
- **Choose replication methods for your application data.** Your application data is stored in various data stores and might have different availability requirements. Evaluate the replication methods and locations for each type of data store to ensure that they satisfy your requirements.
- **Document and test your failover and failback processes.** Clearly document instructions to fail over to a new data store, and test them regularly to make sure they are accurate and easy to follow.
- **Protect your data.** Back up and validate data regularly, and make sure no single user account has access to both production and backup data.
- **Plan for data recovery.** Make sure that your backup and replication strategy provides for data recovery times that meet your service-level requirements. Account for all types of data your application uses, including reference data and databases.
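To make the resiliency-strategies bullet above concrete, here is a minimal, platform-agnostic retry sketch. `TransientError` is a stand-in for whatever transient fault type your remote calls raise; this is not a specific Azure SDK API:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient fault such as throttling or a brief outage."""

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # Give up and let the caller compensate or alert.
            # Backoff with jitter spreads out retries so clients do not
            # hammer a recovering service in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.5))
```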
## Azure service dependencies
Microsoft Azure services are available globally to drive your cloud operations at an optimal level. You can choose the best region for your needs based on technical and regulatory considerations: service capabilities, data residency, compliance requirements, and latency.
Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](/azure/availability-zones/az-overview).
Azure services are built for resiliency, including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions, and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which the service runs will be deployed. For example, [Windows Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azuredatacentermap.azurewebsites.net).
## Test with simulations and forced failovers
Testing for reliability requires measuring how the end-to-end workload performs under failure conditions that only occur intermittently.
- **Test for common failure scenarios by triggering actual failures or by simulating them.** Use fault injection testing to test common scenarios (including combinations of failures) and recovery time. A minimal fault-injection sketch appears after this list.
- **Identify failures that occur only under load.** Test for peak load, using production data or synthetic data that is as close to production data as possible, to see how the application behaves under real-world conditions.
- **Run disaster recovery drills.** Have a disaster recovery plan in place, and test it periodically to make sure it works.
- **Perform failover and failback testing.** Ensure that your application's dependent services fail over and fail back in the correct order.
- **Run simulation tests.** Testing real-life scenarios can highlight issues that need to be addressed. Scenarios should be controllable and non-disruptive to
the business. Inform management of simulation testing plans.
- **Test health probes.** Configure health probes for load balancers and traffic managers to check critical system components. Test them to make sure that
they respond appropriately.
- **Test monitoring systems.** Be sure that monitoring systems are reliably reporting critical information and accurate data to help identify potential failures.
- **Include third-party services in test scenarios.** Test possible points of failure due to third-party service disruption, in addition to recovery.
Testing is an iterative process. Test the application, measure the outcome, analyze and address any failures, and repeat the process.
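One language-agnostic way to simulate transient dependency failures in tests is to wrap a client object in a proxy that fails at a configurable rate. This generic Python sketch assumes nothing Azure-specific:

```python
import random

class FlakyProxy:
    """Wraps a client object and injects transient faults into its methods."""

    def __init__(self, target, failure_rate=0.3, exc=ConnectionError):
        self.target = target
        self.failure_rate = failure_rate
        self.exc = exc

    def __getattr__(self, name):
        real = getattr(self.target, name)

        def wrapper(*args, **kwargs):
            if random.random() < self.failure_rate:
                raise self.exc(f"injected fault in {name}()")
            return real(*args, **kwargs)

        return wrapper
```

In a test, point your workload at `FlakyProxy(real_client)` instead of the real client, then assert that the workload retries, degrades gracefully, and recovers within your RTO.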
For more information about testing for application reliability, see [Testing Azure applications for resiliency and availability](./testing.md).
## Deploy the application consistently
*Deployment* includes provisioning Azure resources, deploying application code, and applying configuration settings. An update may involve all three tasks or a subset of them.
After an application is deployed to production, updates are a possible source of errors. Minimize errors with predictable and repeatable deployment processes.
- **Automate your application deployment process.** Automate as many processes as possible.
- **Design your release process to maximize availability.** If your release process requires services to go offline during deployment, your application is unavailable until they come back online. Take advantage of platform staging and production features. Use blue-green or canary releases to deploy updates, so if a failure occurs, you can quickly roll back the update.
- **Have a rollback plan for deployment.** Design a rollback process to return to a last known good version and to minimize downtime if a deployment fails.
- **Log and audit deployments.** If you use staged deployment techniques, more than one version of your application is running in production. Implement a robust logging strategy to capture as much version-specific information as possible.
- **Document the application release process.** Clearly define and document your release process, and ensure that it's available to the entire operations team.
For more information about application reliability and deployment, see [Deploying Azure applications for resiliency and availability](../devops/release-engineering-cd.md).
## Monitor application health
Implement best practices for monitoring and alerts in your application so you can detect failures and alert an operator to fix them.
- **Implement health probes and check functions.** Run them regularly from outside the application to identify degradation of application health and performance.
- **Check long-running workflows.** Catching issues early can minimize the need to roll back the entire workflow or to execute multiple compensating transactions.
- **Maintain application logs.**
- Log applications in production and at service boundaries.
- Use semantic and asynchronous logging.
- Separate application logs from audit logs.
- **Measure remote call statistics, and share the data with the application team.** To give your operations team an instantaneous view into application health, summarize remote call metrics, such as latency, throughput, and errors in the 99 and 95 percentiles. Perform statistical analysis on the metrics to uncover errors that occur within each percentile. (See the percentile sketch after this list.)
- **Track transient exceptions and retries over an appropriate time frame.** A trend of increasing exceptions over time indicates that the service is having an issue and may fail.
- **Set up an early warning system.** Identify the key performance indicators (KPIs) of an application's health, such as transient exceptions and remote call latency, and set appropriate threshold values for each of them. Send an alert to operations when the threshold value is reached.
- **Operate within Azure subscription limits.** Azure subscriptions have limits on certain resource types, such as the number of resource groups, cores, and storage accounts. Watch your use of resource types.
- **Monitor third-party services.** Log your invocations and correlate them with your application's health and diagnostic logging using a unique identifier.
- **Train multiple operators to monitor the application and to perform manual recovery steps.** Make sure there is always at least one trained operator active.
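The percentile sketch referenced above uses the nearest-rank method; the latency samples are invented for illustration:
```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p percent of samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12, 13, 14, 15, 15, 16, 17, 18, 18, 19,
                20, 21, 22, 23, 24, 25, 26, 27, 220, 480]
print("p95 latency:", percentile(latencies_ms, 95), "ms")  # 220 ms
print("p99 latency:", percentile(latencies_ms, 99), "ms")  # 480 ms
```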
For more information about monitoring for application reliability, see [Monitoring Azure application health](./monitoring.md).
## Respond to failures and disasters
Create a recovery plan, and make sure that it covers data restoration, network outages, dependent service failures, and region-wide service disruptions. Consider your VMs, storage, databases, and other Azure platform services in your recovery strategy.
- **Plan for Azure support interactions.** Before the need arises, establish a process for contacting Azure support.
- **Document and test your disaster recovery plan.** Write a disaster recovery plan that reflects the business impact of application failures. Automate the recovery process as much as possible, and document any manual steps. Regularly test your disaster recovery process to validate and improve the plan.
- **Fail over manually when required.** Some systems can't fail over automatically and require a manual failover. If an application fails over to a secondary region, perform an operational readiness test. Verify that the primary region is healthy and ready to receive traffic again before failing back. Determine what the reduced application functionality is and how the app informs users of temporary problems.
- **Prepare for application failure.** Prepare for a range of failures, including faults that are handled automatically, those that result in reduced functionality, and those that cause the application to become unavailable. The application should inform users of temporary issues.
- **Recover from data corruption.** If a failure happens in a data store, check for data inconsistencies when the store becomes available again, especially if the data was replicated. Restore corrupt data from a backup.
- **Recover from a network outage.** You might be able to use cached data to run locally with reduced application functionality. If not, consider application downtime or fail over to another region. Store your data in an alternate location until connectivity is restored.
- **Recover from a dependent service failure.** Determine which functionality is still available and how the application should respond.
- **Recover from a region-wide service disruption.** Region-wide service disruptions are uncommon, but you should have a strategy to address them, especially for critical applications. You might be able to redeploy the application to another region or redistribute traffic.
For more information about responding to failures and disaster recovery, see [Failure and disaster recovery for Azure applications](./backup-and-recovery.md). | 130.099338 | 1,333 | 0.807941 | eng_Latn | 0.998759 |
b91a543c62dfccebf73d27a2367dcb84f44305e6 | 16,055 | md | Markdown | articles/cost-management-billing/cloudyn/cost-mgt-faq.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cost-management-billing/cloudyn/cost-mgt-faq.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cost-management-billing/cloudyn/cost-mgt-faq.md | changeworld/azure-docs.hu-hu | f0a30d78dd2458170473188ccce3aa7e128b7f89 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Frequently asked questions about Cloudyn in Azure | Microsoft Docs
description: This article answers frequently asked questions about Cloudyn.
keywords: ''
author: bandersmsft
ms.author: banders
ms.date: 01/24/2020
ms.topic: troubleshooting
ms.service: cost-management-billing
ms.reviewer: benshy
ms.custom: seodec18
ms.openlocfilehash: 5c770d83d59edf0a56184f8eea0bda6b0603893c
ms.sourcegitcommit: 67e9f4cc16f2cc6d8de99239b56cb87f3e9bff41
ms.translationtype: HT
ms.contentlocale: hu-HU
ms.lasthandoff: 01/31/2020
ms.locfileid: "76770075"
---
# <a name="frequently-asked-questions-for-cloudyn"></a>Frequently asked questions for Cloudyn
This article answers some frequently asked questions about Cloudyn. If you have questions about Cloudyn, you can ask them on the [Cloudyn FAQ page](https://social.msdn.microsoft.com/Forums/en-US/231bf072-2c71-4121-8339-ac9d868137b9/faqs-for-cloudyn-cost-management?forum=Cloudyn).
## <a name="how-can-i-resolve-common-indirect-enterprise-setup-problems"></a>How can I resolve common indirect enterprise setup problems?
When you first use the Cloudyn portal, you might see the following messages if you have an Enterprise Agreement or are a Cloud Solution Provider (CSP) user:
- The message "The specified API key is not a top level enrollment key" appears in the **Set Up Cloudyn** wizard.
- The message "Direct Enrollment - No" appears on the Enterprise Agreement portal.
- The message "No usage data was found for the last 30 days. Please contact your distributor to make sure markup was enabled for your Azure account" appears in the Cloudyn portal.
The preceding messages indicate that you purchased an Azure Enterprise Agreement through a reseller or CSP. Your reseller or CSP needs to enable _markup_ for your Azure account before you can see your data in Cloudyn.
To resolve the problems:
1. Your reseller needs to enable _markup_ for your account. For instructions, see the [Indirect customer onboarding guide](https://ea.azure.com/api/v3Help/v2IndirectCustomerOnboardingGuide).
2. You need to create a key in your Azure Enterprise Agreement that you can use with Cloudyn. For instructions, see [Add your Azure EA](quick-register-ea.md#register-with-cloudyn) or watch the video [Finding your EA enrollment ID and API key](https://youtu.be/u_phLs_udig).
Only an Azure service administrator can enable Cloudyn. Co-administrator permissions are insufficient.
Before you create the API key for your Azure Enterprise Agreement to set up Cloudyn, you must enable the Azure Billing API by following the instructions in:
- [Overview of Reporting APIs for Enterprise customers](../manage/enterprise-api.md)
- [Microsoft Azure Enterprise Portal Reporting API](https://ea.azure.com/helpdocs/reportingAPI), under **Enabling data access to the API**
You might also need to give department administrators, account owners, and enterprise administrators permission to _view charges_ with the Billing API.
## <a name="why-dont-i-see-optimizer-recommendations"></a>Why don't I see Optimizer recommendations?
Recommendations are available only for activated accounts. For *unactivated* accounts, no recommendations appear in the **Optimizer** report categories, including:
- Optimization Manager
- Sizing Optimization
- Inefficiencies
If you can't view Optimizer recommendations, you probably don't have an activated account. To activate an account, register it with your Azure credentials.
To activate an account:
1. In the Cloudyn portal, click **Settings** in the upper-right corner and select **Cloud Accounts**.
2. On the Microsoft Azure Accounts tab, look for any accounts with an **unactivated** subscription.
3. Click the pencil-shaped **edit** symbol to the right of the unactivated account.
4. The tenant ID and rate ID are detected automatically. Click **Next**.
5. You're redirected to the Azure portal. Sign in to the portal and allow the Cloudyn Collector to access your Azure data.
6. You're then redirected to the Cloudyn Accounts Management page, and your subscription's account status is updated to **active**. A green check mark appears for the account status.
7. If you don't see a green check mark for one or more subscriptions, you don't have permission to create the reader app (the CloudynCollector) on the subscription. A user with higher permissions needs to repeat steps 3 and 4 of this process.
After you complete the preceding steps, you can view Optimizer recommendations within one or two days. However, it can take up to five days before all of the optimization data is available.
## <a name="how-do-i-enable-suspended-or-locked-out-users"></a>How do I enable suspended or locked-out users?
First, let's review the most common scenario that can put user accounts into an *initially suspended* state.
> Let's say Admin1 is a Microsoft Cloud Solution Provider or Enterprise Agreement user. Their company is ready to start using Cloudyn. They register through the Azure portal and sign in to the Cloudyn portal. As the person who registered the Cloudyn service and signed in to the Cloudyn portal, Admin1 becomes the *primary admin*. Admin1 doesn't create any user accounts. Using the Cloudyn portal, however, they create Azure accounts and set up an entity hierarchy. Admin1 tells Admin2, a tenant administrator, that they need to register with Cloudyn and sign in to the Cloudyn portal.
>
> Admin2 registers through the Azure portal. However, when they sign in to the Cloudyn portal, they get an error saying that their account is **suspended**. The primary admin, Admin1, is notified of the account suspension. Admin1 needs to activate Admin2's account, grant *admin entity access* to the appropriate entities, enable them for user management, and activate the user account.
If you receive an alert containing a request to enable access for a user, you need to activate the user account.
To activate a user account:
1. Sign in to Cloudyn with the Azure admin account that you used to set up Cloudyn. Or, sign in with a user account that has admin access.
2. Click the gear symbol in the upper right and select **User Management**.
3. Find the user, click the pencil symbol, and then edit the user.
4. Under **User status**, change the status from **Suspended** to **Active**.
Cloudyn user accounts connect with single sign-on from Azure. If a user mistypes their password, they might get locked out of Cloudyn, even though they still have access to Azure.
If you change your address in Cloudyn to an email address that differs from your default Azure address, your account might get locked out. It might show the status "status initiallySuspended". If your user account is locked out, contact another administrator to reset your account.
We recommend that you create at least two admin Cloudyn accounts in case one of them gets locked out for any reason.
If you can't sign in to the Cloudyn portal, make sure that you're using the correct URL to sign in to Cloudyn. Use [https://azure.cloudyn.com](https://ms.portal.azure.com/#blade/Microsoft_Azure_CostManagement/CloudynMainBlade).
Don't use the direct Cloudyn URL https://app.cloudyn.com.
## <a name="how-do-i-activate-unactivated-accounts-with-azure-credentials"></a>How do I activate unactivated accounts with Azure credentials?
As soon as Cloudyn discovers your Azure accounts, it immediately provides cost data in cost-based reports. However, for Cloudyn to also provide usage and performance data, you must register the accounts' Azure credentials. For instructions, see [Add an account or update a subscription](activate-subs-accounts.md#add-an-account-or-update-a-subscription).
To add Azure credentials for an account, in the Cloudyn portal, select the edit symbol to the right of the account name (not the subscription).
Until you add the Azure credentials, the account appears as _unactivated_.
## <a name="how-do-i-add-multiple-accounts-and-entities-to-an-existing-subscription"></a>How do I add multiple accounts and entities to an existing subscription?
Additional entities let you add additional Enterprise Agreements to a Cloudyn subscription. For more information, see [Create and manage entities](tutorial-user-access.md#create-and-manage-entities).
For CSPs:
To add additional CSP accounts to an entity, select **MSP Access** instead of **Enterprise** when you create the new entity. If your account was registered as an Enterprise Agreement and you want to add CSP credentials, Cloudyn support staff might need to change your account settings. If you use a paid Azure subscription, you can also create a new support request in the Azure portal. Select **Help + support**, and then select **New support request**.
## <a name="currency-symbols-in-cloudyn-reports"></a>Currency symbols in Cloudyn reports
You might have multiple Azure accounts that use different currencies. However, Cloudyn cost reports show only one currency at a time per report.
If you have multiple subscriptions that use different currencies, the currency of a parent and its child entities is shown as **$** USD. As a best practice, don't use different currencies in the same entity hierarchy. In other words, all the subscriptions organized into an entity structure should use the same currency.
Cloudyn automatically detects the currency of an Enterprise Agreement subscription and displays it correctly in reports. However, Cloudyn shows only **$** USD for CSP and Web Direct Azure accounts.
## <a name="what-are-cloudyn-data-refresh-timelines"></a>What are Cloudyn data refresh timelines?
Cloudyn has the following data refresh timelines:
- **Initial**: After setup, it can take up to 24 hours for cost data to become visible in Cloudyn. It can also take up to 10 days for Cloudyn to collect enough data to show sizing recommendations.
- **Daily**: From the tenth day of each month to the end of the month, Cloudyn shows up-to-date data for the previous day at about UTC+3 on the following day.
- **Monthly**: From the first to the tenth day of each month, Cloudyn might show data only through the end of the previous month.
Cloudyn processes the previous day's data when all of the previous day's data becomes available. The previous day's data usually becomes available around UTC+3 each day. Some data, such as tags, can take an additional 24 hours to process.
Data for the current month usually can't be collected at the beginning of the month. During this period, providers finalize their invoices for the previous month. The previous month's data appears in Cloudyn 5-10 days after the start of each month. During this period, you might see only amortized costs from the previous month. You might not see daily billing or usage data. When the data becomes available, Cloudyn processes it retroactively. After processing, all of the monthly data appears between the fifth and tenth day of each month.
Azure still captures data even when there's a delay in sending data between Azure and Cloudyn. When the connection is restored, the data is transferred to Cloudyn.
## <a name="cost-fluctuations-in-cloudyn-cost-reports"></a>Cost fluctuations in Cloudyn cost reports
You see cost fluctuations in cost reports when cloud providers send updated billing files. Fluctuating costs occur when new files arrive from a cloud provider outside of the normal daily or monthly reporting schedule. The cost changes aren't caused by recalculation on Cloudyn's side.
Each of the billing files that a cloud provider sends during the month is used to estimate daily costs. Sometimes the data is updated frequently, occasionally several times a day. Updates are more frequent with AWS than with Azure. The total cost figures should remain stable once the previous month's billing calculation is finished and the final invoice arrives. That usually happens by the tenth day of the month.
Changes can also occur when cost adjustments, such as credits, arrive from the cloud provider. Such changes can occur months after the month in question is closed. Changes appear whenever the cloud provider performs a recalculation. Cloudyn updates its historical data to make sure that all adjustments are recalculated. It also verifies that costs are shown accurately in its reports.
## <a name="how-can-a-direct-csp-configure-cloudyn-access-for-indirect-csp-customers-or-partners"></a>How can a direct CSP configure Cloudyn access for indirect CSP customers or partners?
For instructions, see [Configure indirect CSP access in Cloudyn](quick-register-csp.md#configure-indirect-csp-access-in-cloudyn).
## <a name="what-causes-the-optimizer-menu-item-to-appear"></a>What causes the Optimizer menu item to appear?
After you add Azure Resource Manager access and the data is collected, the **Optimizer** option appears. To activate Azure Resource Manager access, see [How do I activate unactivated accounts with Azure credentials?](#how-do-i-activate-unactivated-accounts-with-azure-credentials)
## <a name="is-cloudyn-agent-based"></a>Is Cloudyn agent-based?
No. It doesn't use agents. Metric data for Azure virtual machines is collected from the Microsoft Insights API. If you want to collect metric data from Azure VMs, you need to enable diagnostics settings on them.
## <a name="do-cloudyn-reports-show-more-than-one-ad-tenant-per-report"></a>Do Cloudyn reports show more than one AD tenant per report?
Yes. You can [create a corresponding cloud account entity](tutorial-user-access.md#create-and-manage-entities) for each AD tenant. You can then view data for all of your Azure AD tenants, and for other cloud providers as well, such as Amazon Web Services and Google Cloud Platform.
 | 99.720497 | 686 | 0.824914 | hun_Latn | 1.00001 |
b91aa501cbf75330bc89a59e88a8bad7c5f8dafd | 79 | md | Markdown | examples/line/multiple/index.en.md | nitinthewiz/G2Plot | f30188f378d09f7eb382931fc92cd8eb09ca0d6e | [
"MIT"
] | 1 | 2020-09-24T00:20:16.000Z | 2020-09-24T00:20:16.000Z | examples/line/multiple/index.en.md | needones/G2Plot | 1d25d8560d4d1bf6b3554a0580ecd1375f244b6d | [
"MIT"
] | 1 | 2020-12-26T00:12:18.000Z | 2020-12-26T00:12:18.000Z | examples/line/multiple/index.en.md | needones/G2Plot | 1d25d8560d4d1bf6b3554a0580ecd1375f244b6d | [
"MIT"
] | null | null | null | ---
title: Multiple Line Chart
order: 1
---
Description about this component.
| 11.285714 | 33 | 0.721519 | eng_Latn | 0.977658 |
b91acc5d3717948ed542c5d8c98cff5bf9648cd8 | 2,263 | md | Markdown | api/Excel.QueryTable.TextFilePromptOnRefresh.md | MarkFern/VBA-Docs | b84627cc8e24acfd336d1e9761a9ddd58f19d352 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-03-09T13:24:12.000Z | 2020-03-09T16:19:11.000Z | api/Excel.QueryTable.TextFilePromptOnRefresh.md | MarkFern/VBA-Docs | b84627cc8e24acfd336d1e9761a9ddd58f19d352 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | api/Excel.QueryTable.TextFilePromptOnRefresh.md | MarkFern/VBA-Docs | b84627cc8e24acfd336d1e9761a9ddd58f19d352 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: QueryTable.TextFilePromptOnRefresh property (Excel)
keywords: vbaxl10.chm518115
f1_keywords:
- vbaxl10.chm518115
ms.prod: excel
api_name:
- Excel.QueryTable.TextFilePromptOnRefresh
ms.assetid: 3fe619b9-2bc8-46f4-4e18-655e9cf5a61f
ms.date: 06/08/2017
localization_priority: Normal
---
# QueryTable.TextFilePromptOnRefresh property (Excel)
**True** if you want to specify the name of the imported text file each time the query table is refreshed. The **Import Text File** dialog box allows you to specify the path and file name. The default value is **False**. Read/write **Boolean**.
## Syntax
_expression_. `TextFilePromptOnRefresh`
_expression_ A variable that represents a **[QueryTable](Excel.QueryTable.md)** object.
## Remarks
Use this property only when your query table is based on data from a text file (with the **[QueryType](Excel.QueryTable.QueryType.md)** property set to **xlTextImport**).
If the value of this property is **True**, the dialog box doesn't appear the first time a query table is refreshed.
The default value is **True** in the user interface.
If you import data using the user interface, data from a web query or a text query is imported as a **[QueryTable](Excel.QueryTable.md)** object, while all other external data is imported as a **[ListObject](Excel.ListObject.md)** object.
If you import data using the object model, data from a web query or a text query must be imported as a **QueryTable**, while all other external data can be imported as either a **ListObject** or a **QueryTable**.
The **TextFilePromptOnRefresh** property applies only to **QueryTable** objects.
## Example
This example prompts the user for the name of the text file whenever the query table on the first worksheet in the first workbook is refreshed.
```vb
' Import a tab-delimited text file into cell A1 of the first worksheet
' in the first workbook, creating the query table there.
Set shFirstQtr = Workbooks(1).Worksheets(1)
Set qtQtrResults = shFirstQtr.QueryTables _
    .Add(Connection := "TEXT;C:\My Documents\19980331.txt", _
        Destination := shFirstQtr.Cells(1, 1))
With qtQtrResults
    ' Parse the source file as delimited text, using tabs as delimiters.
    .TextFileParseType = xlDelimited
    ' Ask for the text file name each time the query table is refreshed.
    .TextFilePromptOnRefresh = True
    .TextFileTabDelimiter = True
    .Refresh
End With
```
## See also
[QueryTable Object](Excel.QueryTable.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 34.287879 | 245 | 0.759169 | eng_Latn | 0.957202 |
b91b42474f4ac506731e0b2206fa66f6e1bfc044 | 3,977 | md | Markdown | preprocessed-site/posts/2017/11-haskell-newbies-talks.md | haru2036/blog | b80c6ff6a3a9ec6b31b7e3812f3f50b91af202c7 | [
"MIT"
] | 36 | 2017-04-18T09:58:50.000Z | 2021-06-11T11:37:43.000Z | preprocessed-site/posts/2017/11-haskell-newbies-talks.md | haru2036/blog | b80c6ff6a3a9ec6b31b7e3812f3f50b91af202c7 | [
"MIT"
] | 174 | 2017-04-04T05:35:38.000Z | 2022-02-08T02:47:28.000Z | preprocessed-site/posts/2017/11-haskell-newbies-talks.md | haru2036/blog | b80c6ff6a3a9ec6b31b7e3812f3f50b91af202c7 | [
"MIT"
] | 28 | 2017-04-13T04:01:41.000Z | 2021-06-04T17:13:55.000Z | ---
title: A full rundown of Haskell-jp's current activities and goals
headingBackgroundImage: ../../img/background.png
headingDivClass: post-heading
subHeading: Re-sharing what I presented at the Haskell Beginners LT Meetup
author: Yuji Yamamoto
postedBy: <a href="http://the.igreque.info/">Yuji Yamamoto(@igrep)</a>
date: August 30, 2017
...
---
# I gave a talk at the Haskell Beginners LT Meetup
Yesterday I, Yuji Yamamoto, took part as a guest in the [Haskell Beginners LT Meetup - connpass](https://shinjukuhs.connpass.com/event/58936/) and promoted our Haskell-jp user group.
I posted the presentation slides themselves on [this Slideshare page](https://www.slideshare.net/igrep/haskelljp-haskelljp), but unfortunately every link pasted into them has stopped working.
That's fatal, because the slides were meant not only to describe Haskell-jp's ongoing activities and future goals, but also, to a large extent, to serve as a collection of links to them.
So in this article I'm re-sharing, almost verbatim, the parts of the slides about Haskell-jp's current activities and future plans.
I hope this gives you an occasion to take an interest in one of those activities.
↓ The content follows below.
Good thing I wrote it in Markdown! (⌒\_⌒)
# Goals
- Spread Haskell throughout Japan
- Become the Haskell user group that represents Japan
- Gain broad recognition in the IT industry, starting with Haskell communities in Japan and abroad
# [Slack team](https://haskell-jp.slackarchive.io/)
- We chat about all sorts of things, ask questions, and stream feeds there
- The link above is the message log, kept with [SlackArchive](http://slackarchive.io/)
- Sign up [here](https://haskell.jp/signin-slack.html).
## Operating rules
- Ask questions in \#questions or \#general.
- Questions are allowed in \#general as a safety net for people who show up in a hurry and ask there
- We want to avoid exchanges that bounce people from channel to channel
- Mind your netiquette (probably a dead word by now).
- Feel free to create channels.
## Recommended channels
- questions: a channel for casual questions
- questions-feed: streams questions from teratail, Stack Overflow, and so on
- Though it may update a little too often...
- event-announcement: announces event recruiting and who's speaking where
- We'd like to automate this...
# [Reddit](https://www.reddit.com/r/haskell_jp/)
## Why we created it:
- Slack is closed, so the information produced there can't be reused.
- It doesn't show up in search engines either,
- and SlackArchive is so-so.
## Features
- Threads have a DISQUS-like tree structure, which is handy for involved discussions and consultations.
- Of course, search engines index it too.
- In fact, **we now recommend it over Slack**.
- Posts also flow into the reddit-haskell-jp channel on the Slack team.
# [Haskell-jp blog](https://haskell.jp/blog/)
- 〆(゚\_゚\*)
Anyone can post articles about Haskell in the broad sense.
- **Writers always wanted**.
- Send a pull request to [this repository](https://github.com/haskell-jp/blog).
- A report on this very event would be a kind of article we've never had, so it would be a perfect fit!
# [Haskell-jp Mokumoku-kai](https://haskell-jp.connpass.com/)
- (\*´Д\`)
"A laid-back gathering where we quietly work on Haskell-related tasks, and those who want to can give lightning talks"
- Its predecessor is the [Haskell Mokumoku-kai](https://haskellmokumoku.connpass.com/), which was the starting point for founding our user group.
- Counting from the Haskell Mokumoku-kai, the next one will be the 46th!
- Everyone from beginners to experts does whatever they like.
- Important: [the next one is Sunday, September 10](https://haskell-jp.connpass.com/event/64567/)
- (;\^ω\^)
We set aside a student quota, but this time it filled up so fast that we're letting out screams of joy.
# [Haskell-jp wiki](https://wiki.haskell.jp/)
- A wiki where we write about all sorts of things!
- It runs on [gitit](https://github.com/jgm/gitit), a wiki engine written in Haskell.
- Anyone with a GitHub account can edit it.
## Recommended pages:
- [Japanese-language links about Haskell](https://wiki.haskell.jp/Links)
- [Tales of data structures](https://wiki.haskell.jp/%E3%83%87%E3%83%BC%E3%82%BF%E6%A7%8B%E9%80%A0%E5%88%97%E4%BC%9D)
# [Haskell-jp TODO list](https://trello.com/b/GfAyczPt/haskell-jp)
- A public Trello board.
- It gives a fairly good view of what Haskell-jp wants to do.
- If you have a request, propose it on Slack or elsewhere, and some kind person will probably add it to the backlog.
- For now, nobody without permissions can write anything, comments included. Sorry about that.
- If you'd like access, talk to me directly or on Slack.
# [Haskell recipe collection](https://github.com/haskell-jp/recipe-collection)
- Literally a collection of recipes
- I'm not much involved myself, but updates seem to have stalled (◞‸◟)
# [Haskell Antenna](https://haskell.jp/antenna/)
- Created with inspiration from [Haskell News](http://haskellnews.org/).
- Highly experimental. **Visible if you're lucky**.
- It runs on the personal (free-tier) Heroku account of its author, [@lotz84](https://github.com/lotz84), so...
- You can add information sources by sending a pull request to [this repository](https://github.com/haskell-jp/antenna).
# [Mutual link collection](https://haskell.jp/blog/posts/links.html)
- People who like Haskell
- put a banner like this → <img width="150" src="https://haskell.jp/img/supported-by-haskell-jp.svg" alt="Supported By Haskell-jp.">
- on a website where they write about Haskell
- (a part of the whole site is fine)
- and this page lists those sites
# Other things we at least intend to do
## Beginner content
- Under active development. Sorry we haven't shared it yet.
- We want it to become one of our major pieces of content, standing alongside the blog.
## Incorporation
- As "Haskell-jp" we'd like to remain a loose association
- But sooner or later we'll probably need an organization like a steering committee
- To manage money, and to make goals and responsibilities clear
# Summary
- One of Haskell-jp's big roles is to provide places through the haskell.jp domain
- The blog, the antenna, the wiki, and so on.
- We think there's room for more content under haskell.jp, so bring it on!
- If anything comes up, contact us on Reddit, Slack, GitHub issues, and so on!
- We look forward to your continued support of Haskell-jp hask(\_ \_)eller
That's the content of the talk.
What do you think? If any of these activities catches your interest, please do come join us!
| 26.691275 | 124 | 0.763892 | yue_Hant | 0.447832 |
b91b487314871ec2e8b9d19bb96f5efe3c1eaad1 | 1,849 | md | Markdown | README.md | iOS-AllProjects/EYSlider | f9e0781f524733bf2be4d89f3161de2fed650fc0 | [
"MIT"
] | 1 | 2018-03-30T20:44:40.000Z | 2018-03-30T20:44:40.000Z | README.md | iOS-AllProjects/EYSlider | f9e0781f524733bf2be4d89f3161de2fed650fc0 | [
"MIT"
] | 1 | 2018-03-30T15:04:46.000Z | 2018-03-30T15:04:46.000Z | README.md | iOS-AllProjects/EYSlider | f9e0781f524733bf2be4d89f3161de2fed650fc0 | [
"MIT"
] | 1 | 2018-03-30T20:44:41.000Z | 2018-03-30T20:44:41.000Z | EYSlider
==================
Custom slider bar written in swift 4.
Screenshots
----
 
Install
-------
##### Requirements
- iOS 10.0+
- Swift 4.0+
##### Manual
Copy & paste `CustomSlider.swift` in your project
##### CocoaPods
[CocoaPods](https://cocoapods.org/) is a dependency manager for Cocoa projects. You can install it with the following command:
```
$ gem install cocoapods
```
To integrate EYSlider into your Xcode project using CocoaPods, specify it in your ```Podfile```:
```
source 'https://github.com/CocoaPods/Specs.git'
source 'https://github.com/iOS-AllProjects/EYSlider.git'
platform :ios, '10.0'
use_frameworks!
target '<Your Target Project Name>' do
pod 'EYSlider', '0.1.0'
end
```
<b>Or</b>
```
source 'https://github.com/iOS-AllProjects/EYSlidersBar.git'
platform :ios, '10.0'
use_frameworks!
```
Usage
-----
Drag a `UIView` into your storyboard! Change the class to `CustomSlider`. The view will be updated!
### In the storyboard, edit the following properties!
##### For the view
- animationDuration
- viewBackColor
- viewCornerRadius
##### For the border layer
- borderBackColor
- borderColor
- borderWidth
##### For the tracker layer
- positiveTrackColor
- negativeTrackColor
##### For the handle layer
- handleBackColor
- handleShadowColor
##### For the values
- MaxValue
- sliderValue
### Create an Outlet for the Control!
``` swift
@IBOutlet weak var slider: CustomSlider!
```
### Access properties of your outlet and modify them to your needs!
``` swift
//Example
let currentValue = "\(slider.value)"
```
And that's it!
| 18.867347 | 297 | 0.714981 | eng_Latn | 0.498452 |
b91b89d4c532429a8615d260382c791b267ccb05 | 926 | md | Markdown | README.md | mikepm35/SwiftyHASS | b4324fef6d3ef3a5bfb0ff72f79f1f4863bf0033 | [
"MIT"
] | null | null | null | README.md | mikepm35/SwiftyHASS | b4324fef6d3ef3a5bfb0ff72f79f1f4863bf0033 | [
"MIT"
] | null | null | null | README.md | mikepm35/SwiftyHASS | b4324fef6d3ef3a5bfb0ff72f79f1f4863bf0033 | [
"MIT"
] | null | null | null | # SwiftyHASS
[](https://travis-ci.org/mikepm35/SwiftyHASS)
[](http://cocoapods.org/pods/SwiftyHASS)
[](http://cocoapods.org/pods/SwiftyHASS)
[](http://cocoapods.org/pods/SwiftyHASS)
## Example
To run the example project, clone the repo, and run `pod install` from the Example directory first.
## Requirements
## Installation
SwiftyHASS is available through [CocoaPods](http://cocoapods.org). To install
it, simply add the following line to your Podfile:
```ruby
pod "SwiftyHASS"
```
## Author
mikepm35, [email protected]
## License
SwiftyHASS is available under the MIT license. See the LICENSE file for more info.
| 30.866667 | 122 | 0.75378 | eng_Latn | 0.312647 |
b91b8a47ed45600470c682af4daa3abd8762c859 | 180 | md | Markdown | README.md | maoraymundo/assignments | 86923aa9c38914eb92dfd4f8e17540ade5a6810f | [
"MIT"
] | null | null | null | README.md | maoraymundo/assignments | 86923aa9c38914eb92dfd4f8e17540ade5a6810f | [
"MIT"
] | null | null | null | README.md | maoraymundo/assignments | 86923aa9c38914eb92dfd4f8e17540ade5a6810f | [
"MIT"
] | null | null | null | Assignment Bundle
1. Base 7 Calculator
-- can add, subtract and multiply two numbers
-- has at least 1 unit test and functional test
2. Sample with API calls and front end | 30 | 51 | 0.738889 | eng_Latn | 0.999482 |
b91bcaa48fe885beb0ef9662470cfc427bc099f2 | 6,036 | md | Markdown | articles/cognitive-services/metrics-advisor/how-tos/alerts.md | georgeOsdDev/azure-docs.ja-jp | 2a180fb94381b1f17e50b24e85626240a3e7fa72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/metrics-advisor/how-tos/alerts.md | georgeOsdDev/azure-docs.ja-jp | 2a180fb94381b1f17e50b24e85626240a3e7fa72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/metrics-advisor/how-tos/alerts.md | georgeOsdDev/azure-docs.ja-jp | 2a180fb94381b1f17e50b24e85626240a3e7fa72 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Configure Metrics Advisor alerts
titleSuffix: Azure Cognitive Services
description: How to configure Metrics Advisor alerts using email, web, and Azure DevOps hooks.
services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: cognitive-services
ms.subservice: metrics-advisor
ms.topic: conceptual
ms.date: 09/14/2020
ms.author: mbullwin
ms.openlocfilehash: 30d8fdf99da7a4854db0985bed6256ecd6f7a366
ms.sourcegitcommit: 867cb1b7a1f3a1f0b427282c648d411d0ca4f81f
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 03/19/2021
ms.locfileid: "93420922"
---
# <a name="how-to-configure-alerts-and-get-notifications-using-a-hook"></a>How to: Configure alerts and get notifications using a hook
When Metrics Advisor detects an anomaly, it triggers an alert notification through a hook, based on your alert settings. An alert setting can be used with multiple detection configurations, and various parameters are available to customize your alerting rules.
## <a name="create-a-hook"></a>Create a hook
Metrics Advisor supports three different types of hooks: email hooks, web hooks, and Azure DevOps hooks. You can choose the one that works for your specific scenario.
### <a name="email-hook"></a>Email hook
> [!Note]
> The Metrics Advisor resource administrator needs to configure the email settings and enter SMTP-related information into Metrics Advisor before anomaly alerts can be sent. The resource group admin or subscription admin needs to assign at least one *Cognitive Services Metrics Advisor Administrator* role on the **Access control** tab of the Metrics Advisor resource. [Learn more about configuring email settings](../faq.md#how-to-set-up-email-settings-and-enable-alerting-by-email).
To create an email hook, you can use the following parameters.
An email hook is the channel for anomaly alerts to be sent to the email addresses specified in the **Email to** section. Two types of alert emails are sent: *Data feed not available* alerts, and *Incident reports*, which contain one or more anomalies.
|Parameter |Description |
|---------|---------|
| Name | The name of the email hook |
| Email to| The email addresses to send alerts to|
| External link | An optional field that enables a customized redirect, such as troubleshooting notes. |
| Customized anomaly alert title | The title template supports `${severity}`, `${alertSettingName}`, `${datafeedName}`, `${metricName}`, `${detectConfigName}`, `${timestamp}`, `${topDimension}`, `${incidentCount}`, and `${anomalyCount}` |
After you select **OK**, the email hook is created. You can use it in any alert settings to receive anomaly alerts.
### <a name="web-hook"></a>Web hook
> [!Note]
> * Use the **POST** request method.
> * The request body will be similar to:
> `{"timestamp":"2019-09-11T00:00:00Z","alertSettingGuid":"49635104-1234-4c1c-b94a-744fc920a9eb"}`
> * When you create or modify a web hook, the API is called as a test with an empty request body. Your API needs to return a 200 HTTP code.
A web hook is the entry point for all of the information available from the Metrics Advisor service; it calls a user-provided API when an alert is triggered. All alerts can be sent through a web hook.
To create a web hook, you need to add the following information:
|Parameter |Description |
|---------|---------|
|Endpoint | The API address to be called when an alert is triggered. |
|Username / password | For authenticating to the API address. Leave these blank if authentication isn't needed. |
|Header | Custom headers in the API call. |
:::image type="content" source="../media/alerts/create-web-hook.png" alt-text="The create web hook window.":::
When you push notifications through a web hook, you can use the following APIs to get the alert details. Set the *timestamp* and *alertSettingGuid* in the API service that you push to, and then use the following queries:
- `query_alert_result_anomalies`
- `query_alert_result_incidents`
### <a name="azure-devops"></a>Azure DevOps
Metrics Advisor can also automatically create a work item in Azure DevOps to track issues or bugs when an anomaly is detected. All alerts can be sent through Azure DevOps hooks.
To create an Azure DevOps hook, you need to add the following information:
|Parameter |Description |
|---------|---------|
| Name | A name for the hook |
| Organization | The organization that your DevOps instance belongs to |
| Project | The specific project in DevOps. |
| Access Token | A token for authenticating to DevOps. |
> [!Note]
> You need to grant write permissions if you want Metrics Advisor to create work items based on anomaly alerts. After you create a hook, you can use it in any of your alert settings. Manage your hooks on the **Hook settings** page.
## <a name="add-or-edit-alert-settings"></a>Add or edit alert settings
Go to the metrics detail page and find the **Alert settings** section in its lower-left corner. It lists all of the alert settings that apply to the selected detection configuration. When a new detection configuration is created, there's no alert setting, and no alerts are sent.
You can use the **Add**, **Edit**, and **Delete** icons to modify alert settings.
:::image type="content" source="../media/alerts/alert-setting.png" alt-text="The alert settings menu item.":::
Select the **Add** or **Edit** button to open a window for adding or editing your alert settings.
:::image type="content" source="../media/alerts/edit-alert.png" alt-text="Add or edit alert settings":::
**Alert setting name**: The name of this alert setting. It's displayed in the alert email title.
**Hooks**: The list of hooks to send alerts to.
The section marked in the screenshot above contains the settings for one detection configuration. You can set different alert settings for different detection configurations. Choose the target configuration with the third drop-down list in this window.
### <a name="filter-settings"></a>Filter settings
The following are the filter settings for one detection configuration.
**Alert for** has four options for filtering anomalies:
* **Anomalies in all series**: All anomalies are included in the alert.
* **Anomalies in a series group**: Filters series by dimension values. Set specific values for some of the dimensions. Anomalies are included in the alert only when the series matches the specified values.
* **Anomalies in favorite series**: Only series marked as favorites are included in the alert.
* **Anomalies in the top N of all series**: Use this filter if you only care about series whose values are in the top N. Metrics Advisor looks back over several timestamps and checks whether the values of the series at these timestamps are in the top N. If the "top n" count is larger than the specified number, the anomaly is included in the alert.
**Filter anomaly options** is an additional filter with the following options:
- **Severity**: The anomaly is included only when its severity is within the specified range.
- **Snooze**: When triggered by an alert, temporarily stop alerting for anomalies in the next N points (periods).
  - **Snooze type**: When set to **Series**, a triggered anomaly snoozes only its own series. For **Metric**, one triggered anomaly snoozes all series in this metric.
  - **Snooze number**: The number of points (periods) to snooze.
  - **Reset for non-successive**: When selected, a triggered anomaly snoozes only the next n successive anomalies. If any of the following data points isn't an anomaly, the snooze is reset from that point. When not selected, one triggered anomaly snoozes the next n points (periods), even if consecutive data points aren't anomalies.
- **Value** (optional): Filter by value. Only anomalies whose point values meet the condition are included. If you use the corresponding value of another metric, the dimension names of the two metrics should be consistent.
Anomalies that aren't filtered out are sent in the alert.
### <a name="add-cross-metric-settings"></a>Add cross-metric settings
Select **+ Add cross-metric settings** on the alert settings page to add another section.
The **Operator** selector defines the logical relationship between the sections, which determines whether an alert is sent.
|Operator |Description |
|---------|---------|
|AND | Send the alert only if a series matches every alert section and all data points are anomalies. If the metrics have different dimension names, no alert is triggered. |
|OR | Send the alert if at least one section contains anomalies. |
:::image type="content" source="../media/alerts/alert-setting-operator.png" alt-text="The operator of multiple alert setting sections":::
## <a name="next-steps"></a>Next steps
- [Adjust anomaly detection using feedback](anomaly-feedback.md)
- [Diagnose an incident](diagnose-incident.md)
- [Configure metrics and fine-tune detection configuration](configure-metrics.md) | 42.20979 | 325 | 0.748178 | yue_Hant | 0.505903 |
b91c060d916d1c03ee7e7d79aad47f22378bd93d | 1,582 | md | Markdown | api/Visio.ContainerProperties.RemoveMember.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2019-09-07T04:44:48.000Z | 2021-12-16T15:05:50.000Z | api/Visio.ContainerProperties.RemoveMember.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-13T09:32:15.000Z | 2021-06-13T09:32:15.000Z | api/Visio.ContainerProperties.RemoveMember.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-28T06:51:45.000Z | 2019-11-28T06:51:45.000Z | ---
title: ContainerProperties.RemoveMember method (Visio)
keywords: vis_sdr.chm17662335
f1_keywords:
- vis_sdr.chm17662335
ms.prod: visio
api_name:
- Visio.ContainerProperties.RemoveMember
ms.assetid: 953beb58-ea8a-7c1f-20c1-0fe4de23e831
ms.date: 06/08/2017
localization_priority: Normal
---
# ContainerProperties.RemoveMember method (Visio)
Removes a shape or set of shapes from the container.
## Syntax
_expression_.**RemoveMember** (_ObjectToRemove_)
_expression_ A variable that represents a **[ContainerProperties](Visio.ContainerProperties.md)** object.
## Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _ObjectToRemove_|Required| **[UNKNOWN]**|The shape or shapes to remove from the container. Can be a **[Shape](Visio.Shape.md)** or **[Selection](Visio.Selection.md)** selection.|
## Return value
**Nothing**
## Remarks
The **RemoveMember** method removes from the container the shapes specified in the _ObjectToRemove_ parameter.
If the container is a list, Microsoft Visio removes the shapes specified in _ObjectToRemove_ both from the list (if it is a list member) and from the list container.
If the **[ContainerProperties.LockMembership](Visio.ContainerProperties.LockMembership.md)** property is **True**, Visio returns a Disabled error.
If _ObjectToRemove_ does not contain top-level shapes on the page, Visio returns an Invalid Parameter error. However, if _ObjectToRemove_ is not a container member, Visio does not return an error.
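## Example
The following VBA macro is a minimal hedged sketch of removing one shape from a container. The shape names "Container 1" and "Square" are assumptions about your drawing made for illustration; adjust them to match the shapes on your active page.
```vb
Public Sub RemoveMember_Example()

    Dim vsoContainer As Visio.Shape
    Dim vsoMember As Visio.Shape

    'Assumed shape names; change these to match your drawing.
    Set vsoContainer = ActivePage.Shapes("Container 1")
    Set vsoMember = ActivePage.Shapes("Square")

    'Remove the shape from the container. The shape stays on the page;
    'only its container membership changes.
    vsoContainer.ContainerProperties.RemoveMember vsoMember

End Sub
```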
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 32.958333 | 197 | 0.769912 | eng_Latn | 0.897936 |
b91c0d3b158012f7bd000d832ff9ae3bba73499c | 504 | md | Markdown | README.md | LiamMartens/react-image-hotspot | c0cdc43bcc04f8d506a61d6702d2fc3c9707a9cb | [
"MIT"
] | null | null | null | README.md | LiamMartens/react-image-hotspot | c0cdc43bcc04f8d506a61d6702d2fc3c9707a9cb | [
"MIT"
] | null | null | null | README.md | LiamMartens/react-image-hotspot | c0cdc43bcc04f8d506a61d6702d2fc3c9707a9cb | [
"MIT"
] | null | null | null | # react-image-hotspot
This is a small library providing functionality to create hotspots on an image.

## Installation
```
yarn add react-image-hotspot
```
## Getting started
Simply import and use the `ImageHotspot` component
```jsx
import React from 'react';
import { ImageHotspot } from 'react-image-hotspot';
const App = () => {
const [value, setValue] = React.useState([]);
return (
<ImageHotspot
src="https://...image"
value={value}
onChange={setValue}
/>
);
}
``` | 19.384615 | 79 | 0.654762 | eng_Latn | 0.619763 |
b91c3daf6a54f7ef77b173c630985294880cc284 | 480 | md | Markdown | CONTRIBUTING.md | BizarreNULL/beagle | e1b0a87f9d1fd8c0c123aac7d5fb98b0bfdb7d2c | [
"Apache-2.0"
] | 1 | 2020-06-03T23:38:08.000Z | 2020-06-03T23:38:08.000Z | CONTRIBUTING.md | BizarreNULL/beagle | e1b0a87f9d1fd8c0c123aac7d5fb98b0bfdb7d2c | [
"Apache-2.0"
] | 1 | 2020-08-10T15:53:40.000Z | 2020-08-10T15:53:40.000Z | CONTRIBUTING.md | BizarreNULL/beagle | e1b0a87f9d1fd8c0c123aac7d5fb98b0bfdb7d2c | [
"Apache-2.0"
] | null | null | null | # Contributing to Beagle
Here are a few resources to guide you through contributing to Beagle.
- [Pull Request Guidelines](./doc/contributing/pull_requests.md)
- [Commit message conventions and rules](./doc/contributing/commits.md)
Questions? Head to our [FAQ](./FAQ.md) where you might find some answers.
## Beagle contribution guidelines
Please refer to [beagle's CONTRIBUTING.md](https://github.com/ZupIT/beagle/blob/master/CONTRIBUTING.md) for details on our guidelines.
| 36.923077 | 134 | 0.779167 | eng_Latn | 0.922181 |
b91ca47e61a3c59b3c786a36d2b816ca52acfa66 | 1,791 | md | Markdown | quality/standards.md | petk/php-knowledge | 88506f01316f785a0c5c705a5f7eb84f0755af45 | [
"CC0-1.0"
] | null | null | null | quality/standards.md | petk/php-knowledge | 88506f01316f785a0c5c705a5f7eb84f0755af45 | [
"CC0-1.0"
] | null | null | null | quality/standards.md | petk/php-knowledge | 88506f01316f785a0c5c705a5f7eb84f0755af45 | [
"CC0-1.0"
] | null | null | null | # How to write standardized PHP code?
For PHP, there are many standards and coding style recommendations that you
should look into when writing code. It simplifies collaboration on code and
makes it more readable once you get used to a particular style.
One of the most adopted standards recommendations in PHP is
[PHP FIG](http://php-fig.org), which recommends coding style via two of its
PSRs (PHP Standard Recommendation):
* [PSR-1](http://www.php-fig.org/psr/psr-1/) - Basic Coding Standard
* [PSR-2](http://www.php-fig.org/psr/psr-2/) - Coding Style Guide
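To make the style concrete, here is a short hedged sketch of a class formatted according to PSR-1 and PSR-2; the namespace and class names are invented for illustration:
```php
<?php

namespace App\Billing;

class InvoiceCalculator
{
    private $taxRate;

    public function __construct(float $taxRate)
    {
        $this->taxRate = $taxRate;
    }

    public function totalFor(float $amount): float
    {
        // PSR-2: four-space indentation, opening braces on their own line
        // for classes and methods, and visibility declared on all members.
        if ($amount < 0) {
            throw new \InvalidArgumentException('Amount must be non-negative.');
        }

        return $amount * (1 + $this->taxRate);
    }
}
```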
Many open source projects also extend the above PSRs with their own coding
style guides, for example:
* [Symfony](http://symfony.com/doc/current/contributing/code/standards.html)
Many advanced PHP IDEs and editors also offer code refactoring via plugins and
extensions. With predefined common coding styles such as PSR-1 and PSR-2,
combined with refactoring, automatic code generation, autocompletion, and
similar features, consistently writing standardized PHP code becomes easy.
The more code you write, the more you will understand the importance of using
common style guides, especially when collaborating with others through source
control (e.g., Git), or just to stay consistent in general.
## See also
* [Core PHP language specification](https://github.com/php/php-langspec)
* [PHP_CodeSniffer](https://github.com/squizlabs/PHP_CodeSniffer) - Tokenizes
PHP, JavaScript and CSS files, and detects violations of a defined set of
coding standards.
* [PHP Coding Standards Fixer](https://github.com/FriendsOfPHP/PHP-CS-Fixer) -
A tool that fixes coding standards in your code.
* [PHP-FIG: Extended Coding Style Guide proposal](https://github.com/php-fig/fig-standards/blob/master/proposed/extended-coding-style-guide.md)
| 48.405405 | 143 | 0.781128 | eng_Latn | 0.988955 |
b91cccb4ea34357ba7345a76ccd9cd4981c89edc | 2,023 | md | Markdown | public/web/docs/materials.md | LivelyKernel/vwf | b910dcb5a8aa00aa13c57574cbf4dce1d3920c34 | [
"Apache-2.0"
] | 1 | 2015-06-13T09:23:57.000Z | 2015-06-13T09:23:57.000Z | public/web/docs/materials.md | aashish24/vwf | e67a6f33f7010d1f2d2a6a3112863a4141b0defa | [
"Apache-2.0"
] | null | null | null | public/web/docs/materials.md | aashish24/vwf | e67a6f33f7010d1f2d2a6a3112863a4141b0defa | [
"Apache-2.0"
] | null | null | null | # Switch Materials on an Object
Imagine you have a simple scene (a cube) and you would like to programatically change the material on the cube. (Note: it is important that the collada file for the 3D object have properly mapped texture coordinates)
Let's look at the code for the simple scene, with a material object added as a child to the cube:
---
extends: http://vwf.example.com/navscene.vwf
properties:
children:
cube:
extends: node3.vwf
source: cube.dae
type: model/vnd.collada+xml
children:
material:
extends: http://vwf.example.com/material.vwf
scripts:
- |
this.initialize = function() {
}
You can change the cube's material anywhere in the code that you would like. For the purpose of this example, let's assume that you want to change it right at the beginning in the *initialize* function. Everything you could want to change about the material can actually be changed via the properties of the existing material:
this.initialize = function() {
var material = this.cube.material;
// Change the color
material.color = [ 0, 0, 205 ];
// Change the texture
material.texture = "images/grandma.png";
// Make the object transparent
material.alpha = 0.1;
}
A full list of material properties can be found in the [material](jsdoc_cmp/symbols/material.vwf.html) application API.
Sometimes it may be desirable to switch out the entire material - if for example, you wanted to toggle between two that had many distinct properties or have more than one object share the same material. Here's how you could do that:
this.initialize = function() {
var self = this;
this.children.create( "material1", "http://vwf.example.com/material.vwf", function() {
this.color = [ 0, 0, 205 ];
this.texture = "images/grandma.png";
this.alpha = 0.1;
self.cube.material = self.material1;
} );
}
You can read about the parameters of the create function on the API page for [node.children](jsdoc_cmp/symbols/node.vwf.html#children). | 36.781818 | 328 | 0.7217 | eng_Latn | 0.994922 |
b91d0fd0db59e80b7cedbc0852ae76be28c8d66a | 11,431 | md | Markdown | articles/active-directory-b2c/identity-provider-amazon.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 1 | 2017-06-06T22:50:05.000Z | 2017-06-06T22:50:05.000Z | articles/active-directory-b2c/identity-provider-amazon.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 41 | 2016-11-21T14:37:50.000Z | 2017-06-14T20:46:01.000Z | articles/active-directory-b2c/identity-provider-amazon.md | changeworld/azure-docs.it- | 34f70ff6964ec4f6f1a08527526e214fdefbe12a | [
"CC-BY-4.0",
"MIT"
] | 7 | 2016-11-16T18:13:16.000Z | 2017-06-26T10:37:55.000Z | ---
title: Set up sign-up and sign-in with an Amazon account
titleSuffix: Azure AD B2C
description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C.
services: active-directory-b2c
author: msmimart
manager: celestedg
ms.service: active-directory
ms.workload: identity
ms.topic: how-to
ms.custom: project-no-code
ms.date: 03/17/2021
ms.author: mimart
ms.subservice: B2C
zone_pivot_groups: b2c-policy-type
ms.openlocfilehash: b6c0d9d5430d84006b208c50e78b8d875c95b8ac
ms.sourcegitcommit: d40ffda6ef9463bb75835754cabe84e3da24aab5
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/07/2021
ms.locfileid: "107028384"
---
# <a name="set-up-sign-up-and-sign-in-with-an-amazon-account-using-azure-active-directory-b2c"></a>Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
::: zone pivot="b2c-custom-policy"
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
::: zone-end
## <a name="prerequisites"></a>Prerequisites
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
## <a name="create-an-app-in-the-amazon-developer-console"></a>Create an app in the Amazon developer console
To enable sign-in for users with an Amazon account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Amazon Developer Services and Technologies](https://developer.amazon.com). For more information, see [Register for Login with Amazon](https://developer.amazon.com/docs/login-with-amazon/register-web.html). If you don't already have an Amazon account, you can sign up at [https://www.amazon.com/](https://www.amazon.com/).
1. Sign in to the [Amazon Developer Console](https://developer.amazon.com/dashboard) with your Amazon account credentials.
1. If you haven't already done so, select **Sign Up**, follow the developer registration steps, and accept the policy.
1. From the dashboard, select **Login with Amazon**.
1. Select **Create a New Security Profile**.
1. Enter a **Security Profile Name**, a **Security Profile Description**, and a **Consent Privacy Notice URL**, for example `https://www.contoso.com/privacy`. The privacy notice URL is a page that provides privacy information to your users. Then click **Save**.
1. In the **Login with Amazon Configurations** section, select the **Security Profile Name** you created, select the **Manage** icon, and then select **Web Settings**.
1. In the **Web Settings** section, copy the value of **Client ID**. Select **Show Secret** to get the client secret, and then copy it. You need both values to configure an Amazon account as an identity provider in your tenant. **Client Secret** is an important security credential.
1. In the **Web Settings** section, select **Edit**.
1. In **Allowed Origins**, enter `https://your-tenant-name.b2clogin.com`. Replace `your-tenant-name` with the name of your tenant. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
1. In **Allowed Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant and `your-domain-name` with your custom domain.
1. Select **Save**.
::: zone pivot="b2c-user-flow"
## <a name="configure-amazon-as-an-identity-provider"></a>Configure Amazon as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
1. Make sure you're using the directory that contains your Azure AD B2C tenant. To do so, select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
1. Select **Identity providers**, and then select **Amazon**.
1. Enter a **Name**. For example, *Amazon*.
1. For the **Client ID**, enter the client ID of the Amazon application that you created earlier.
1. For the **Client secret**, enter the client secret that you recorded.
1. Select **Save**.
## <a name="add-amazon-identity-provider-to-a-user-flow"></a>Add the Amazon identity provider to a user flow
At this point, the Amazon identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Amazon identity provider to a user flow:
1. In your Azure AD B2C tenant, select **User flows**.
1. Click the user flow that you want to add the Amazon identity provider to.
1. Under **Social identity providers**, select **Amazon**.
1. Select **Save**.
1. To test your policy, select **Run user flow**.
1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run user flow** button.
1. From the sign-up or sign-in page, select **Amazon** to sign in with your Amazon account.
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
::: zone-end
::: zone pivot="b2c-custom-policy"
## <a name="create-a-policy-key"></a>Create a policy key
You need to store the client secret that you previously recorded in your Azure AD B2C tenant.
1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Make sure you're using the directory that contains your Azure AD B2C tenant. To do so, select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
4. On the Overview page, select **Identity Experience Framework**.
5. Select **Policy Keys**, and then select **Add**.
6. For **Options**, choose `Manual`.
7. Enter a **Name** for the policy key. For example: `AmazonSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
8. In **Secret**, enter the client secret that you recorded earlier.
9. For **Key usage**, select `Signature`.
10. Click **Create**.
## <a name="configure-amazon-as-an-identity-provider"></a>Configure Amazon as an identity provider
To enable users to sign in with an Amazon account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that Azure AD B2C uses to verify that a specific user has authenticated.
You can define an Amazon account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
1. Open *TrustFrameworkExtensions.xml*.
2. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
3. Add a new **ClaimsProvider** as follows:
```xml
<ClaimsProvider>
<Domain>amazon.com</Domain>
<DisplayName>Amazon</DisplayName>
<TechnicalProfiles>
<TechnicalProfile Id="Amazon-OAuth2">
<DisplayName>Amazon</DisplayName>
<Protocol Name="OAuth2" />
<Metadata>
<Item Key="ProviderName">amazon</Item>
<Item Key="authorization_endpoint">https://www.amazon.com/ap/oa</Item>
<Item Key="AccessTokenEndpoint">https://api.amazon.com/auth/o2/token</Item>
<Item Key="ClaimsEndpoint">https://api.amazon.com/user/profile</Item>
<Item Key="scope">profile</Item>
<Item Key="HttpBinding">POST</Item>
<Item Key="UsePolicyInRedirectUri">false</Item>
<Item Key="client_id">Your Amazon application client ID</Item>
</Metadata>
<CryptographicKeys>
<Key Id="client_secret" StorageReferenceId="B2C_1A_AmazonSecret" />
</CryptographicKeys>
<OutputClaims>
<OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="user_id" />
<OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
<OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
<OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="amazon.com" />
<OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" />
</OutputClaims>
<OutputClaimsTransformations>
<OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
<OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
<OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
</OutputClaimsTransformations>
<UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
</TechnicalProfile>
</TechnicalProfiles>
</ClaimsProvider>
```
4. Impostare **client_id** sull'ID applicazione ottenuto con la registrazione dell'applicazione.
5. Salvare il file.
[!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
```xml
<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
<ClaimsProviderSelections>
...
<ClaimsProviderSelection TargetClaimsExchangeId="AmazonExchange" />
</ClaimsProviderSelections>
...
</OrchestrationStep>
<OrchestrationStep Order="2" Type="ClaimsExchange">
...
<ClaimsExchanges>
<ClaimsExchange Id="AmazonExchange" TechnicalProfileReferenceId="Amazon-OAuth2" />
</ClaimsExchanges>
</OrchestrationStep>
```
[!INCLUDE [active-directory-b2c-configure-relying-party-policy](../../includes/active-directory-b2c-configure-relying-party-policy-user-journey.md)]
## <a name="test-your-custom-policy"></a>Testare i criteri personalizzati
1. Selezionare, ad esempio, i criteri di relying party `B2C_1A_signup_signin` .
1. Per **applicazione** selezionare un'applicazione Web [registrata in precedenza](tutorial-register-applications.md). L'**URL di risposta** dovrebbe mostrare `https://jwt.ms`.
1. Selezionare il pulsante **Esegui adesso** .
1. Dalla pagina di iscrizione o di accesso selezionare **Amazon** per accedere con l'account Amazon.
Se il processo di accesso ha esito positivo, il browser viene reindirizzato a `https://jwt.ms` , che Visualizza il contenuto del token restituito da Azure ad B2C.
::: zone-end
| 61.789189 | 519 | 0.755839 | ita_Latn | 0.947867 |
b91d2b07a533b4a2add12a9b34b1d2bcab5f1c95 | 1,324 | md | Markdown | docs/framework/wpf/graphics-multimedia/transformations.md | eOkadas/docs.fr-fr | 64202ad620f9bcd91f4360ec74aa6d86e1d4ae15 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/transformations.md | eOkadas/docs.fr-fr | 64202ad620f9bcd91f4360ec74aa6d86e1d4ae15 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/graphics-multimedia/transformations.md | eOkadas/docs.fr-fr | 64202ad620f9bcd91f4360ec74aa6d86e1d4ae15 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Transformations
ms.date: 03/30/2017
f1_keywords:
- AutoGeneratedOrientationPage
helpviewer_keywords:
- skewing objects [WPF]
- transformations [WPF], about transformations
- transformations [WPF]
- graphics [WPF], transformations
- transform classes [WPF], 2-D
- scaling objects [WPF]
- translating objects [WPF]
- 2-D transform classes
- rotating objects [WPF]
- Transforms [WPF]
- Transforms [WPF], about Transforms
ms.assetid: 712b543f-d8b2-4dcf-ba2c-f7921c61c6fd
ms.openlocfilehash: a0b5268d1c7e319a6144a7d551dca45bdc3e64aa
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/23/2019
ms.locfileid: "61925990"
---
# <a name="transformations"></a>Transformations
Les transformations sont utilisées pour faire pivoter, mettre à l’échelle, traduire ou incliner <xref:System.Windows.FrameworkElement> objets.
## <a name="in-this-section"></a>Dans cette section
[Vue d’ensemble des transformations](transforms-overview.md)
[Rubriques de guide pratique](transformations-how-to-topics.md)
## <a name="see-also"></a>Voir aussi
- <xref:System.Windows.Media.Transform>
- [Graphiques et multimédia](index.md)
- [Vue d’ensemble du rendu graphique de WPF](wpf-graphics-rendering-overview.md)
- [Disposition](../advanced/layout.md)
| 33.948718 | 144 | 0.773414 | fra_Latn | 0.233506 |
b91df609678b280f78a5c1bc6fbb8b4dbd55cc87 | 188 | md | Markdown | README.md | lbaitemple/arduino_simulink | a852d2b340782f332b08b74447eb449f711a6a60 | [
"BSD-2-Clause"
] | null | null | null | README.md | lbaitemple/arduino_simulink | a852d2b340782f332b08b74447eb449f711a6a60 | [
"BSD-2-Clause"
] | null | null | null | README.md | lbaitemple/arduino_simulink | a852d2b340782f332b08b74447eb449f711a6a60 | [
"BSD-2-Clause"
] | null | null | null | # arduino_simulink
This package is to write a Ultrasonic sensor in Arduino Mega 2560 to read distance in inch or centimeter. The sensor is a parallax ping
ultrasonic sensor 28015 REV B.
| 37.6 | 136 | 0.803191 | eng_Latn | 0.997975 |
b91df79622b12262928fa284d2ff1c0976efce9d | 790 | md | Markdown | _trash/vault/root.md | yairdar/ydu-101-mono | 3723136a1c7035bc8128771cec0370aa3bda6689 | [
"Apache-2.0"
] | null | null | null | _trash/vault/root.md | yairdar/ydu-101-mono | 3723136a1c7035bc8128771cec0370aa3bda6689 | [
"Apache-2.0"
] | null | null | null | _trash/vault/root.md | yairdar/ydu-101-mono | 3723136a1c7035bc8128771cec0370aa3bda6689 | [
"Apache-2.0"
] | null | null | null | ---
id: 0nMAtWUU1p6d5jTfI9kPE
title: Root
desc: ''
updated: 1639941153159
created: 1639940592150
---
# Welcome to Dendron
This is the root of your dendron vault. If you decide to publish your entire vault, this will be your landing page. You are free to customize any part of this page except the frontmatter on top.
## Lookup
This section contains useful links to related resources.
- [Getting Started Guide](https://link.dendron.so/6b25)
- [Discord](https://link.dendron.so/6b23)
- [Home Page](https://wiki.dendron.so/)
- [Github](https://link.dendron.so/6b24)
- [Developer Docs](https://docs.dendron.so/)
## Reference Samples
- ![[ydu.field.ide.tool.vscode#dev-container-development,1]]
- ![[ydu.field.ide.tool.vscode]]
- [[ydu.field.ide.tool.vscode#dev-container-development]]
| 26.333333 | 194 | 0.735443 | eng_Latn | 0.825634 |
b91e0f5fd12b2a57134941f5de38403efad75da4 | 4,098 | md | Markdown | README.md | dukaev/fasterer | aebf173d0b73cacdd2c812f7739da4157b304315 | [
"MIT"
] | null | null | null | README.md | dukaev/fasterer | aebf173d0b73cacdd2c812f7739da4157b304315 | [
"MIT"
] | null | null | null | README.md | dukaev/fasterer | aebf173d0b73cacdd2c812f7739da4157b304315 | [
"MIT"
] | null | null | null | [](https://travis-ci.org/DamirSvrtan/fasterer)
[](https://codeclimate.com/github/DamirSvrtan/fasterer)
[](http://badge.fury.io/rb/fasterer)
[](https://codeclimate.com/github/DamirSvrtan/fasterer/coverage)
# Fasterer
Make your Rubies go faster with this command line tool highly inspired by [fast-ruby](https://github.com/JuanitoFatas/fast-ruby) and [Sferik's talk at Baruco Conf](https://speakerdeck.com/sferik/writing-fast-ruby).
Fasterer will suggest some speed improvements which you can check in detail at the [fast-ruby repo](https://github.com/JuanitoFatas/fast-ruby).
**Please note** that you shouldn't follow the suggestions blindly. Using a while loop instead of a each_with_index probably shouldn't be considered if you're doing a regular Rails project, but maybe if you're doing something very speed dependent such as Rack or if you're building your own framework, you might consider this speed increase.
## Installation
```shell
gem install fasterer
```
## Usage
Run it from the root of your project:
```shell
fasterer
```
## Example output
```
app/models/post.rb:57 Array#select.first is slower than Array#detect.
app/models/post.rb:61 Array#select.first is slower than Array#detect.
db/seeds/cities.rb:15 Hash#keys.each is slower than Hash#each_key.
db/seeds/cities.rb:33 Hash#keys.each is slower than Hash#each_key.
test/options_test.rb:84 Hash#merge! with one argument is slower than Hash#[].
test/module_test.rb:272 Don't rescue NoMethodError, rather check with respond_to?.
spec/cache/mem_cache_store_spec.rb:161 Use tr instead of gsub when grepping plain strings.
```
## Configuration
Configuration is done through the **.fasterer.yml** file. This can placed in the root of your
project, or any ancestor folder.
Options:
* Turn off speed suggestions
* Blacklist files or complete folder paths
Example:
```yaml
speedups:
rescue_vs_respond_to: true
module_eval: true
shuffle_first_vs_sample: true
for_loop_vs_each: true
each_with_index_vs_while: false
map_flatten_vs_flat_map: true
reverse_each_vs_reverse_each: true
select_first_vs_detect: true
sort_vs_sort_by: true
fetch_with_argument_vs_block: true
keys_each_vs_each_key: true
hash_merge_bang_vs_hash_brackets: true
block_vs_symbol_to_proc: true
proc_call_vs_yield: true
gsub_vs_tr: true
select_last_vs_reverse_detect: true
getter_vs_attr_reader: true
setter_vs_attr_writer: true
exclude_paths:
- 'vendor/**/*.rb'
- 'db/schema.rb'
```
## Integrations
These 3rd-party integrations enable you to run `fasterer` automatically
as part of a larger framework.
* https://github.com/jumanjihouse/pre-commit-hooks
This integration allows to use `fasterer` as either a pre-commit hook or within CI.
It uses the https://pre-commit.com/ framework for managing and maintaining
multi-language pre-commit hooks.
* https://github.com/prontolabs/pronto-fasterer
Pronto runner for Fasterer, speed improvements suggester.
[Pronto](https://github.com/mmozuras/pronto) also integrates via
[danger-pronto](https://github.com/RestlessThinker/danger-pronto) into the
[danger](https://github.com/danger/danger) framework for pull requests
on Github, Gitlab, and BitBucket.
## Speedups TODO:
4. find vs bsearch
5. Array#count vs Array#size
7. Enumerable#each + push vs Enumerable#map
17. Hash#merge vs Hash#merge!
20. String#casecmp vs String#downcase + ==
21. String concatenation
22. String#match vs String#start_with?/String#end_with?
23. String#gsub vs String#sub
## Contributing
1. Fork it ( https://github.com/DamirSvrtan/fasterer/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
| 33.867769 | 340 | 0.772816 | eng_Latn | 0.840601 |
b91e62884530d70803cde2cfb4bae26607c613ca | 218 | md | Markdown | src/shops/captains-village-marina.md | syncopated/shopshuswap | 7bbbb3a270e3099392f49c220e94474afc525d3c | [
"MIT"
] | null | null | null | src/shops/captains-village-marina.md | syncopated/shopshuswap | 7bbbb3a270e3099392f49c220e94474afc525d3c | [
"MIT"
] | 5 | 2021-03-01T21:11:30.000Z | 2022-02-26T01:54:50.000Z | src/shops/captains-village-marina.md | syncopated/shopshuswap | 7bbbb3a270e3099392f49c220e94474afc525d3c | [
"MIT"
] | null | null | null | ---
name: Captain's Village Marina
area: north-shuswap
category: Retail & Services
type:
phone: 250-955-2424
email: [email protected]
url: http://captainsvillage.com/
tags:
---
Open Tuesday - Saturday, 8am-5pm
| 16.769231 | 32 | 0.747706 | eng_Latn | 0.649266 |
b91e66683517c3aa55843a168b902e4ac3efe96f | 703 | md | Markdown | api/Outlook.RemoteItem.ConversationTopic.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2019-09-07T04:44:48.000Z | 2021-12-16T15:05:50.000Z | api/Outlook.RemoteItem.ConversationTopic.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-13T09:32:15.000Z | 2021-06-13T09:32:15.000Z | api/Outlook.RemoteItem.ConversationTopic.md | ahkon/VBA-Docs | c047d7975de2b0949b496af150d279c505a8595b | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-23T03:40:08.000Z | 2021-06-23T03:40:08.000Z | ---
title: RemoteItem.ConversationTopic property (Outlook)
keywords: vbaol11.chm1593
f1_keywords:
- vbaol11.chm1593
ms.prod: outlook
api_name:
- Outlook.RemoteItem.ConversationTopic
ms.assetid: e8f624d0-f7bb-7672-178d-80d6aa498858
ms.date: 06/08/2017
localization_priority: Normal
---
# RemoteItem.ConversationTopic property (Outlook)
Returns a **String** representing the topic of the conversation thread of the Outlook item. Read-only.
## Syntax
_expression_. `ConversationTopic`
_expression_ A variable that represents a [RemoteItem](Outlook.RemoteItem.md) object.
## See also
[RemoteItem Object](Outlook.RemoteItem.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 21.96875 | 102 | 0.790896 | eng_Latn | 0.367364 |
b91eab4e023c3ec1def7cbe6d132ef65600c8e77 | 1,673 | md | Markdown | content/curriculum/guides/2015/1/15.01.07.x.md | kenlu89/teachers_institute | 1fc993f30d6ac17b3097e63510ce758a12c910ea | [
"MIT"
] | null | null | null | content/curriculum/guides/2015/1/15.01.07.x.md | kenlu89/teachers_institute | 1fc993f30d6ac17b3097e63510ce758a12c910ea | [
"MIT"
] | null | null | null | content/curriculum/guides/2015/1/15.01.07.x.md | kenlu89/teachers_institute | 1fc993f30d6ac17b3097e63510ce758a12c910ea | [
"MIT"
] | null | null | null | ---
layout: "unit"
title: "Guide Entry 15.01.07"
path: "/curriculum/guides/2015/1/15.01.07.x.html"
unitTitle: "Contemporary Native American Fictional Accounts of Hope and Fear"
unitAuthor: "Marialuisa Sapienza"
keywords: ""
recommendedFor: "Recommended for English, grade 10; AP English, grades 11 and 12"
---
<main>
<p>
This unit studies the themes of hope and fear in a rich selection of contemporary Native American writings. It starts with a close analysis and discussion of what causes fear and how the characters in a novel, a short story, a play, and poems struggle, fight and react to societal injustices with hope. The unit takes into consideration other causes for fear like poverty, alcohol, isolation, loneliness, and sometimes the feeling(s) of desperation. The students read the novel,
<em>
The Absolutely True Diary of a Part-Time Indian
</em>
by Sherman Alexie (Spokane/Coeur d’Alene), the short story
<em>
The Red Convertible
</em>
by Louise Erdrich (Turtle Mountain/Ojibwe), the play
<em>
Sliver of a Full Moon
</em>
by Mary Kathryn Nagle (Cherokee), a rich selection of poems such as “Homeland” by Jayne Fawcett (Mohegan), “I Found Him on a Hill Top” by Ella Wilcox Sekatau (Narragansett), and “Sad Country Song” by John Christian Hopkins (Narragansett), as well as some paintings and photographs that represent Native American cultures. The goal is to read, analyze, reflect, research, discuss, and write about the reactions to hardships. The unit also aims to teach students to appreciate and understand American Indian arts while addressing Common Core Standards.
</p>
<p>
(Recommended for English, grade 10; AP English, grades 11 and 12)
</p>
</main> | 57.689655 | 550 | 0.773461 | eng_Latn | 0.994945 |
b91f357cecab3d4be25f49e22282c12ef7775f8e | 4,804 | markdown | Markdown | _posts/2014-11-16-ios-settings-bundle.markdown | hhtczengjing/MyBlogSourceCode | f4b836a4d7673ada724fe66d8a1c3bdaee566052 | [
"CC-BY-4.0"
] | null | null | null | _posts/2014-11-16-ios-settings-bundle.markdown | hhtczengjing/MyBlogSourceCode | f4b836a4d7673ada724fe66d8a1c3bdaee566052 | [
"CC-BY-4.0"
] | null | null | null | _posts/2014-11-16-ios-settings-bundle.markdown | hhtczengjing/MyBlogSourceCode | f4b836a4d7673ada724fe66d8a1c3bdaee566052 | [
"CC-BY-4.0"
] | 2 | 2016-07-08T14:19:31.000Z | 2016-10-06T04:02:37.000Z | ---
layout: post
title: "iOS开发中Settings.bundle的使用"
date: 2014-11-16 16:35:48 +0800
comments: true
tags: iOS
---
在iOS开发中很多时候开发者需要让用户自行设置一些系统的配置项目,比如让用户设置是否支持在3G模式下加载数据,或者是让用户自己设置支不支持网络数据缓存的功能。另外在企业级应用开发中经常有需要对后台的访问地址进行调整那么需要用户自行的进行配置,下面是爱奇艺和招商银行的设置配置项:

###Settings.bundle配置说明
在Settings.bundle中支持如下几种配置项:

1、`Group`
Group类似于UITableView中的Group分组,用来表示一组设置项,配置如下所示:

配置项说明:
(1)Title:表示分组的显示标题
(2)Type:默认是Group
(3)FooterText:Group的底部显示的文字内容
2、`Multi Value`
`Multi Value`是为了让用户在多个值中选择需要的内容,相当于下拉列表的形式进行选择,配置如下所示:

配置项说明:
(1)Type:默认是Multi Value
(2)Title:配置项显示的标题
(3)Identifier:设置项的标识符,用于读取配置项的配置内容
(4)Default Value:默认的值,对应的是Values中的项目
(5)Titles:显示的标题的集合
(6)Values:显示的值的集合,与标题一一对应
3、`Slider`

配置项说明:
(1)Type:配置类型,默认是Slider
(2)Identifier:设置项的标识符,用于读取配置项的配置内容
(3)Default Value:默认值,Number类型
(4)Minimum Value:最小值,Number类型
(5)Maximum Value:最大值,Number类型
(6)Max Value Image Filename:最大值那一端的图片。
(7)Min Value Image Filename:最小值那一端的图片。
4、`Text Field`

配置项说明:
(1)Text Field is Secure:是否为安全文本。如果设置为YES,则内容以圆点符号出现。
(2)Autocapitalization Style:自动大写。有四个值: `None(无)`、`Sentences(句子首字母大写)`、`Words(单词首字母大写)`和`All Characters(所有字母大写)`。
(3)Autocorrection Style:自动纠正拼写,如果开启,你输入一个不存在的单词,系统会划红线提示。有三个值:`Default(默认)`、`No Autocorrection(不自动纠正)`和`Autocorrection(自动纠正)`。
(4)Keyboard Type:键盘样式。有五个值:`Alphabet(字母表,默认)`、`Numbers and Punctuation(数字和标点符号)`、`Number Pad(数字面板)`、`URL(比Alphabet多出了.com等域名后缀)`和`Email Address(比Alphabet多出了@符合)`。
5、`Title`

配置项说明:
(1)Type:默认是Title
(2)Title:配置项显示的标题
(3)Identifier:设置项的标识符,用于读取配置项的配置内容
(4)Default Value:默认的值
6、`Toggle Switch`
`Toggle Switch`是一个类似于UISwitch的选项,用于设置简单的开启或者关闭的选项,配置如下所示:

配置项说明:
(1)Type:默认是Toggle Switch
(2)Title:配置项显示的标题
(3)Identifier:设置项的标识符,用于读取配置项的配置内容
(4)Default Value:默认的值
###在项目中使用
1、添加Setting.bundle文件到项目中

2、读取配置信息
```
- (void)readingPreference
{
//获取Settings.bundle路径
NSString *settingsBundle = [[NSBundle mainBundle] pathForResource:@"Settings" ofType:@"bundle"];
if(!settingsBundle)
{
NSLog(@"找不到Settings.bundle文件");
return;
}
//读取Settings.bundle里面的配置信息
NSDictionary *settings = [NSDictionary dictionaryWithContentsOfFile:[settingsBundle stringByAppendingPathComponent:@"Root.plist"]];
NSArray *preferences = [settings objectForKey:@"PreferenceSpecifiers"];
NSMutableDictionary *defaultsToRegister = [[NSMutableDictionary alloc] initWithCapacity:[preferences count]];
for(NSDictionary *prefSpecification in preferences)
{
NSString *key = [prefSpecification objectForKey:@"Key"];
if(key)
{
[defaultsToRegister setObject:[prefSpecification objectForKey:@"DefaultValue"] forKey:key];
}
}
[[NSUserDefaults standardUserDefaults] registerDefaults:defaultsToRegister];
[[NSUserDefaults standardUserDefaults] synchronize];
//TODO:读取指定数据
}
```
3、在AppDelegate中读取配置信息
(1)应用启动后读取配置信息
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
//读取配置文件
[[SystemConfigHelper shared] readingPreference];
self.window.backgroundColor = [UIColor whiteColor];
[self.window makeKeyAndVisible];
return YES;
}
```
(2)应用程序进入到前台后读取配置信息
```
- (void)applicationWillEnterForeground:(UIApplication *)application
{
//读取配置信息
[[SystemConfigHelper shared] readingPreference];
}
```
说明:
`SystemConfigHelper`是用来读取系统配置信息的工具.
###典型实例
1、[爱奇艺iPhone客户端的Settings.bundle配置](/images/ios_settings_bundle/iqiyi.plist)
2、[招商银行iPhone客户端的Settings.bundle配置](/images/ios_settings_bundle/cmb.plist)
###参考资料
1、[《整合Settings.bundle显示版本信息》](http://www.cocoachina.com/ios/20141103/10112.html)
2、[《应用程序首选项(application preference)及数据存储》](http://www.cnblogs.com/wayne23/p/3441898.html)
3、[《设置束(Setting Bundle)的使用》](http://blog.csdn.net/nogodoss/article/details/21938771)
4、[《三十而立,从零开始学ios开发(十九):Application Settings and User Defaults(上)》](http://www.cnblogs.com/minglz/archive/2013/05/30/3048269.html)
| 24.262626 | 162 | 0.773938 | yue_Hant | 0.624789 |
b91f740e9e9d2ad84db417d59929bac99da18c25 | 1,118 | md | Markdown | docs/build/tools/avalanchejs/modules/api_avm_genesisdata.md | traderd65/avalanche-docs | dbf9abde1cfe953b0af32ad1b93d755a51f10872 | [
"BSD-3-Clause"
] | null | null | null | docs/build/tools/avalanchejs/modules/api_avm_genesisdata.md | traderd65/avalanche-docs | dbf9abde1cfe953b0af32ad1b93d755a51f10872 | [
"BSD-3-Clause"
] | null | null | null | docs/build/tools/avalanchejs/modules/api_avm_genesisdata.md | traderd65/avalanche-docs | dbf9abde1cfe953b0af32ad1b93d755a51f10872 | [
"BSD-3-Clause"
] | null | null | null | [avalanche](../README.md) › [API-AVM-GenesisData](api_avm_genesisdata.md)
# Module: API-AVM-GenesisData
## Index
### Classes
* [GenesisData](../classes/api_avm_genesisdata.genesisdata.md)
### Variables
* [bintools](api_avm_genesisdata.md#const-bintools)
* [buffer](api_avm_genesisdata.md#const-buffer)
* [decimalString](api_avm_genesisdata.md#const-decimalstring)
## Variables
### `Const` bintools
• **bintools**: *[BinTools](../classes/utils_bintools.bintools.md)* = BinTools.getInstance()
*Defined in [src/apis/avm/genesisdata.ts:20](https://github.com/ava-labs/avalanchejs/blob/fa4a637/src/apis/avm/genesisdata.ts#L20)*
___
### `Const` buffer
• **buffer**: *[SerializedType](src_utils.md#serializedtype)* = "Buffer"
*Defined in [src/apis/avm/genesisdata.ts:22](https://github.com/ava-labs/avalanchejs/blob/fa4a637/src/apis/avm/genesisdata.ts#L22)*
___
### `Const` decimalString
• **decimalString**: *[SerializedType](src_utils.md#serializedtype)* = "decimalString"
*Defined in [src/apis/avm/genesisdata.ts:21](https://github.com/ava-labs/avalanchejs/blob/fa4a637/src/apis/avm/genesisdata.ts#L21)*
| 27.95 | 131 | 0.741503 | yue_Hant | 0.38137 |
b91f796c1c2c65006619c731f1f067c724cd9306 | 1,627 | md | Markdown | README.md | TheophileMot/weightedTextRank | 96d392ae19525a715c5c8b4296f94f7f72a6e504 | [
"MIT"
] | null | null | null | README.md | TheophileMot/weightedTextRank | 96d392ae19525a715c5c8b4296f94f7f72a6e504 | [
"MIT"
] | 1 | 2019-07-09T07:45:02.000Z | 2019-07-09T07:45:02.000Z | README.md | TheophileMot/weightedTextRank | 96d392ae19525a715c5c8b4296f94f7f72a6e504 | [
"MIT"
] | null | null | null | # wTextRank
Implementation of TextRank algorithm on text parsed by the Google API. The main function is `rankSentences()`, which takes as arguments the text data and an optional weighting function on tokens. The default weighting function assigns 1 to each token.
## Usage
```
let WTextRank(textData, tokenWeightFunction);
let rankedSentences = WTR.rankSentences();
```
Arguments: `textData` is provided by the Google API, and `tokenWeightFunction` is of the following form:
```
tokenWeightFunction(tokenIndex, sentence) {
...
return weight
}
```
where `sentence` is an object with keys `text`, `tokens`, `keyTokens`. The `weight` should be strictly positive. See the code for more details.
The weight of each sentence is the product of the weights of its tokens.
## Example
Here's an example app in Node; it penalizes sentences with pronouns.
```
const WTextRank = require('./wTextRank');
const fs = require('fs');
fs.readFile( __dirname + '/parsedText.txt', (err, data) => {
if (err) {
throw err;
}
textData = JSON.parse(data);
function tokenWeightFunction(i, data) {
if (data.tokens[i].partOfSpeech.tag === 'PRON') {
return 0.1;
} else {
return 1;
}
}
const WTR = new WTextRank(textData);
let rankedSentences = WTR.rankSentences();
let bestSentences = rankedSentences.slice(0, 5);
let worstSentences = rankedSentences.slice(-5);
console.log(bestSentences.map(s => [+s.score.toFixed(2), s.text.content, Array.from(s.keyTokens)]));
console.log();
console.log(worstSentences.map(s => [+s.score.toFixed(2), s.text.content, Array.from(s.keyTokens)]));
});
``` | 29.581818 | 251 | 0.704364 | eng_Latn | 0.85741 |
b91f7e83b863d598f26e19d52ccfcb2b275b514d | 74 | md | Markdown | content/pages/tree.md | alixnovosi/alix.zone | ffd7dee3f42f6f808eb480955baf58dea3611dd1 | [
"CC-BY-3.0"
] | null | null | null | content/pages/tree.md | alixnovosi/alix.zone | ffd7dee3f42f6f808eb480955baf58dea3611dd1 | [
"CC-BY-3.0"
] | null | null | null | content/pages/tree.md | alixnovosi/alix.zone | ffd7dee3f42f6f808eb480955baf58dea3611dd1 | [
"CC-BY-3.0"
] | null | null | null | Title: tree!
Identifier: tree
template: js_toy_page
<div id="app"></div>
| 12.333333 | 21 | 0.716216 | fra_Latn | 0.159866 |
b91fa44841ed1e74cb05f22ee4dfdd9a83109e47 | 262 | md | Markdown | src/easy/road-trip/readme.md | rdtsc/codeeval-solutions | d5c06baf89125e9e9f4b163ee57e5a8f7e73e717 | [
"MIT"
] | null | null | null | src/easy/road-trip/readme.md | rdtsc/codeeval-solutions | d5c06baf89125e9e9f4b163ee57e5a8f7e73e717 | [
"MIT"
] | null | null | null | src/easy/road-trip/readme.md | rdtsc/codeeval-solutions | d5c06baf89125e9e9f4b163ee57e5a8f7e73e717 | [
"MIT"
] | null | null | null | Road Trip
---------
**Problem 124**
> Do not be left without petrol.
Full problem statement is available [here][mirror].
[mirror]: https://github.com/rdtsc/codeeval-problem-statements/tree/master/easy/124-road-trip/
"View Problem Statement Mirror"
| 21.833333 | 94 | 0.70229 | eng_Latn | 0.434804 |
b91fb6e0ff0b961a412c4a1b230b2d4c1b8befcd | 8,026 | md | Markdown | articles/active-directory/active-directory-privileged-identity-management-roles.md | OpenLocalizationTestOrg/azure-docs-pr15_de-AT | ca82887d8067662697adba993b87860bdbefea29 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-29T22:55:06.000Z | 2020-11-29T22:55:06.000Z | articles/active-directory/active-directory-privileged-identity-management-roles.md | Allyn69/azure-docs-pr15_de-CH | 211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/active-directory-privileged-identity-management-roles.md | Allyn69/azure-docs-pr15_de-CH | 211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | 2 | 2019-07-03T20:05:49.000Z | 2020-11-29T22:55:15.000Z | <properties
pageTitle="Rollen in PIM | Microsoft Azure"
description="Erfahren Sie, welche Rollen für privilegierte Identitäten Endung Azure privilegierten Identity Management verwendet werden."
services="active-directory"
documentationCenter=""
authors="kgremban"
manager="femila"
editor=""/>
<tags
ms.service="active-directory"
ms.devlang="na"
ms.topic="article"
ms.tgt_pltfrm="na"
ms.workload="identity"
ms.date="07/01/2016"
ms.author="kgremban"/>
# <a name="roles-in-azure-ad-privileged-identity-management"></a>Rollen in Azure AD Berechtigungen Identitätsmanagement
<!-- **PLACEHOLDER: Need description of how this works. Azure PIM uses roles from MSODS objects.**-->
Sie können Benutzer in Ihrer Organisation unterschiedliche Administratorrollen in Azure AD. Zuordnungsstatus Rolle steuern, welche Aufgaben wie hinzufügen oder Entfernen von Benutzern oder Service-Einstellungen, die Benutzer auf Azure AD Office 365 Microsoft Online Services und andere verbundene Anwendung ausführen.
Ein globaler Administrator können die Benutzer **dauerhaft** in Azure AD mithilfe von PowerShell-Cmdlets wie Rollen zugewiesen werden `Add-MsolRoleMember` und `Remove-MsolRoleMember`, oder durch [Zuweisen von Administratorrollen in Azure Active Directory](active-directory-assign-admin-roles.md)im Verwaltungsportal.
Azure AD Berechtigungen Identitätsmanagement (PIM) verwaltet Richtlinien für privilegierten Zugriff für Benutzer in Azure AD. PIM weist Benutzern Rollen in Azure AD, und weisen Sie einer Person dauerhaft in der Rolle für die Rolle sein. Wenn ein Benutzer dauerhaft einer Rolle zugewiesen oder aktiviert eine Zuordnung Rolle berechtigt sie Active Directory Azure, Office 365 und anderen Applikationen mit ihren Rollen zugewiesenen Berechtigungen verwalten.
Es gibt keinen Unterschied jemand mit einem gegenüber Assignment berechtigte Rolle Zugriff. Der einzige Unterschied ist, dass manche dieser Zugriff jederzeit nicht. Bestehen für die Rolle und können einschalten und ausschalten Wenn sie müssen.
## <a name="roles-managed-in-pim"></a>Rollen im PIM verwaltet
Privilegierte Identity Management können Sie allgemeine Administratorrollen Benutzer zuweisen:
- **Globaler administrator** (auch bekannt als Unternehmensadministrator) hat Zugriff auf alle administrativen Funktionen. Sie können mehrere globale Administratoren in Ihrer Organisation. Wer erwerben Sie Office 365 automatisch angemeldet wird ein globaler Administrator
- **Privilegierte Rollenadministrator** verwaltet Azure AD PIM und Rolle Aufgaben für andere Benutzer aktualisiert.
- **Abrechnung Administrator** macht Käufe, verwaltet Abonnements Supportanfragen verwaltet und überwacht Service.
- **Administrator Kennwort** Zurücksetzen von Kennwörtern, Serviceanfragen verwaltet und überwacht Service. Kennwort-Admins sind auf Zurücksetzen von Kennwörtern für Benutzer.
- **Dienstadministratoren** Serviceanfragen verwaltet und überwacht service Health.
> [AZURE.NOTE] Bei Verwendung von Office 365 vor der Zuweisung der Administratorrolle Service an einen Benutzer zunächst weisen Sie dann den Benutzer Administratorberechtigungen an einen Dienst wie Exchange Online.
- **Benutzerverwaltungsadministrator** Zurücksetzen von Kennwörtern, Dienst überwacht und verwaltet Benutzerkonten, Gruppen und Serviceanfragen. Management Benutzeradministrator kann nicht globalen Administrator löschen, andere Administratorrollen erstellen oder Zurücksetzen von Kennwörtern für Abrechnung, global und Dienstadministratoren.
- **Exchange-Administrator** verfügt über Administratorzugriff auf Exchange Online über die Exchange-Verwaltungskonsole (BK) und möglich, nahezu jede Aufgabe im Exchange Online.
- **SharePoint-Administrator** verfügt über Administratorzugriff auf SharePoint Online über das SharePoint Online Administrationscenter und kann nahezu jede Aufgabe in SharePoint Online.
- **Skype für Business Administrator** verfügt über Administratorzugriff auf Skype für Unternehmen über Skype Business Admin Center und fast alle Aufgaben in Skype für Business Online ausführen.
Dieser Artikel Weitere Informationen zum [Zuweisen von Administratorrollen in Azure AD](active-directory-assign-admin-roles.md) und [Zuweisen von Administratorrollen in Office 365](https://support.office.com/article/Assigning-admin-roles-in-Office-365-eac4d046-1afd-4f1a-85fc-8219c79e1504).
<!--**PLACEHOLDER: The above article may not be the one we want since PIM gets roles from places other that Office 365**-->
Über PIM können Sie [diesen Rollen Benutzer zuweisen](active-directory-privileged-identity-management-how-to-add-role-to-user.md) , so dass Benutzer [die Rolle bei Bedarf aktivieren](active-directory-privileged-identity-management-how-to-activate-role.md).
Wenn Sie einen anderen Benutzer Zugang zu PIM selbst verwalten möchten, werden Rollen erfordert PIM die Benutzer wie [PIM zugreifen](active-directory-privileged-identity-management-how-to-give-access-to-pim.md)beschrieben.
<!-- ## The PIM Security Administrator Role **PLACEHOLDER: Need description of the Security Administrator role.**-->
## <a name="roles-not-managed-in-pim"></a>Rollen im PIM nicht verwaltet
Rollen in Exchange Online und SharePoint Online außer den oben genannten nicht in Azure AD dargestellt und sind daher nicht im PIM angezeigt. Weitere Informationen zum Ändern der Zuweisung von differenzierte Rolle in Office 365-Diensten finden Sie unter [Berechtigungen in Office 365](https://support.office.com/article/Permissions-in-Office-365-da585eea-f576-4f55-a1e0-87090b6aaa9d).
Azure-Abonnements und Ressourcengruppen werden auch nicht in Azure AD dargestellt. Um Azure-Abonnements zu verwalten, finden Sie unter [Hinzufügen oder Ändern von Azure Administratorrollen](../billing-add-change-azure-subscription-administrator.md) und Informationen zu Azure RBAC finden Sie unter [Azure Role-Based-Zugriffskontrolle](role-based-access-control-configure.md).
<!--**The above links might be replaced by ones that are from within this documentation repository **-->
## <a name="user-roles-and-signing-in"></a>Benutzerrollen und anmelden
Bei einigen Microsoft Services und Applikationen Zuweisen einer Rolle zu einen Benutzer ermöglichen, dass Benutzer ein Administrator möglicherweise nicht.
Zugriff auf Azure-Verwaltungsportal erfordert, dass der Benutzer ein Dienstadministrator oder Co-Administrator Azure-Abonnement, selbst wenn der Benutzer nicht der Azure-Abonnements verwalten muss. Zum Verwalten von Konfigurationen für Azure AD im klassischen Portal muss Benutzer den globalen Administrator in Azure AD und Co Abonnementadministrator Azure-Abonnement sein. Zum Hinzufügen von Benutzern zu Azure-Abonnements finden Sie unter [Hinzufügen oder Ändern von Azure Administratorrollen](../billing-add-change-azure-subscription-administrator.md).
Zugriff auf Microsoft Online Services benötigen Benutzer auch zugewiesen werden eine Lizenz vor dem Öffnen der Service Portal oder Verwaltungsaufgaben ausführen.
## <a name="assign-a-license-to-a-user-in-azure-ad"></a>Zuweisen einer Lizenz zu einem Benutzer in Azure AD
1. [Klassischen Azure-Portal] anmelden (http://manage.windowsazure.com) globales Administratorkonto oder ein CO Administratorkonto.
2. Wählen Sie **Alle Elemente** im Hauptmenü.
3. Wählen Sie das Verzeichnis zu arbeiten und die Lizenzen zugeordnet.
4. **Lizenzen**auswählen Die Liste der verfügbaren Lizenzen wird angezeigt.
5. Wählen Sie den Lizenzplan enthält die Lizenzen, die Sie verteilen möchten.
6. Wählen Sie **Benutzer zuweisen**.
7. Wählen Sie den Benutzer, dem eine Lizenz zuweisen möchten.
8. Klicken Sie auf die Schaltfläche **zuweisen** . Der Benutzer kann jetzt in Azure anmelden.
<!--Every topic should have next steps and links to the next logical set of content to keep the customer engaged-->
## <a name="next-steps"></a>Nächste Schritte
[AZURE.INCLUDE [active-directory-privileged-identity-management-toc](../../includes/active-directory-privileged-identity-management-toc.md)]
| 88.197802 | 557 | 0.814104 | deu_Latn | 0.983912 |
b91fede5845ce10875a7c3015891e888df85210a | 719 | md | Markdown | 2010/CVE-2010-2020.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2010/CVE-2010-2020.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2010/CVE-2010-2020.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2010-2020](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-2020)



### Description
sys/nfsclient/nfs_vfsops.c in the NFS client in the kernel in FreeBSD 7.2 through 8.1-PRERELEASE, when vfs.usermount is enabled, does not validate the length of a certain fhsize parameter, which allows local users to gain privileges via a crafted mount request.
### POC
#### Reference
No PoCs from references.
#### Github
- https://github.com/Snoopy-Sec/Localroot-ALL-CVE
| 39.944444 | 261 | 0.751043 | eng_Latn | 0.544712 |
b91ffb15c5d23b337785c7c7c5a93a1a1bb85456 | 1,677 | md | Markdown | CHANGELOG.md | beechtom/puppetlabs-http_request | 45fd25fa89bebcb0bc658b419a201f8657adcbc4 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | beechtom/puppetlabs-http_request | 45fd25fa89bebcb0bc658b419a201f8657adcbc4 | [
"Apache-2.0"
] | 11 | 2020-09-15T16:19:28.000Z | 2021-12-01T01:34:51.000Z | CHANGELOG.md | beechtom/puppetlabs-http_request | 45fd25fa89bebcb0bc658b419a201f8657adcbc4 | [
"Apache-2.0"
] | 8 | 2020-08-27T00:29:52.000Z | 2021-06-08T15:29:58.000Z | # Changelog
## Release 0.3.1
### Bug fixes
- **Handle reponses without a body**
([#14](https://github.com/puppetlabs/puppetlabs-http_request/pull/14))
The `http_request` task no longer errors when the response does not
include a body.
_Contributed by [op-ct](https://github.com/op-ct)._
## Release 0.3.0
### New features
- **Update `method` parameter to accept `patch`**
([#10](https://github.com/puppetlabs/puppetlabs-http_request/issues/10))
The `method` parameter now accepts `patch` as a value.
_Contributed by [op-ct](https://github.com/op-ct)._
## Release 0.2.2
### Bug fixes
- **Read key data from file passed to `key` parameter**
([#8](https://github.com/puppetlabs/puppetlabs-http_request/pull/8))
Key data is now read from the file path passed to the `key` parameter.
Previously, the file path itself was used as the key data.
## Release 0.2.1
### Bug fixes
- **Convert headers to strings**
([#4](https://github.com/puppetlabs/puppetlabs-http_request/pull/4))
Headers set under the `headers` parameter are now converted to strings before
making a request. Previously, headers were passed to the request as symbols.
_Contributed by [barskern](https://github.com/barskern)._
## Release 0.2.0
### New features
- **Add `json_endpoint` parameter to `http_request` task**
([#2](https://github.com/puppetlabs/puppetlabs-http_request/issues/2))
The `http_request` task now accepts a `json_endpoint` parameter. When set to
`true`, the task will convert the request body to JSON, set the `Content-Type`
header to `application/json`, and parse the response body as JSON.
## Release 0.1.0
This is the initial release.
| 27.048387 | 80 | 0.713178 | eng_Latn | 0.914204 |
b92036f61758589a27898116f8c97ba8165ad723 | 379 | md | Markdown | _posts/2016-01-15-release-6-3-pro-rege-et-lege-postcss-autoprefixer.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 5 | 2016-01-25T08:51:46.000Z | 2022-02-16T05:51:08.000Z | _posts/2016-01-15-release-6-3-pro-rege-et-lege-postcss-autoprefixer.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 3 | 2015-08-22T08:39:36.000Z | 2021-07-25T15:24:10.000Z | _posts/2016-01-15-release-6-3-pro-rege-et-lege-postcss-autoprefixer.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 2 | 2016-01-18T03:56:54.000Z | 2021-07-25T14:27:30.000Z | ---
title: Release 6.3 “Pro rege et lege” · postcss/autoprefixer
author: azu
layout: post
itemUrl: 'https://github.com/postcss/autoprefixer/releases/tag/6.3.0'
editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2016/01/index.json'
date: '2016-01-15T00:48:02Z'
tags:
- CSS
- Tools
---
Autoprefixer 6.3リリース。
Grid Layoutのサポート、自前で用意したUA別のデータからブラウザの種類を決められる機能の追加
| 27.071429 | 87 | 0.754617 | yue_Hant | 0.193477 |
b920b589fb65358e78e45eca90c5a44ce510ca73 | 11,933 | md | Markdown | datajpa/datajpa-mongodb-template/DOC.md | EdurtIO/spring-learn-integration | 72fee0e99e53fc88e1f7fbd728e2227e3b68be3b | [
"Apache-2.0"
] | 2 | 2020-02-19T03:12:53.000Z | 2020-02-19T03:12:55.000Z | datajpa/datajpa-mongodb-template/DOC.md | qianmoQ/spring-learn-integration | 72fee0e99e53fc88e1f7fbd728e2227e3b68be3b | [
"Apache-2.0"
] | null | null | null | datajpa/datajpa-mongodb-template/DOC.md | qianmoQ/spring-learn-integration | 72fee0e99e53fc88e1f7fbd728e2227e3b68be3b | [
"Apache-2.0"
] | 6 | 2019-06-13T07:06:01.000Z | 2020-12-14T12:15:06.000Z | # Spring DataJPA MongoDB Template教程
本教程主要详细讲解Spring Data MongoDB,它向MongoDB提供Spring Data平台的抽象.
MongoDB是基于文档的存储,以持久保存数据,并可用作数据库,缓存,消息代理等.
#### 基础环境
---
|技术|版本|
|:---:|---|
|Java|1.8+|
|SpringBoot|2.x.x|
|DataJPA|2.x.x|
|MongoDB|3.6.3-cmongo-|
#### 创建项目
---
- 初始化项目
```bash
mvn archetype:generate -DgroupId=com.edurt.sli.slidmt -DartifactId=spring-learn-integration-datajpa-mongodb-template -DarchetypeArtifactId=maven-archetype-quickstart -Dversion=1.0.0 -DinteractiveMode=false
```
- 修改pom.xml增加mongodb的支持
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>spring-learn-integration-datajpa</artifactId>
<groupId>com.edurt.sli</groupId>
<version>1.0.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-learn-integration-datajpa-mongodb-template</artifactId>
<name>Spring DataJPA MongoDB教程(Template版)</name>
<properties>
<spring.data.mongodb.version>2.2.0.RELEASE</spring.data.mongodb.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb</artifactId>
<version>${spring.data.mongodb.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>${dependency.springboot2.common.version}</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>${dependency.lombok.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${dependency.springboot2.common.version}</version>
<configuration>
<fork>true</fork>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${plugin.maven.compiler.version}</version>
<configuration>
<source>${system.java.version}</source>
<target>${system.java.version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
`spring-data-mongodb`整合MongoDB需要的依赖包
- 一个简单的应用类
```java
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.edurt.sli.slidmt;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
/**
* <p> SpringBootDataJPAMongoDBTemplateIntegration </p>
* <p> Description : SpringBootDataJPAMongoDBTemplateIntegration </p>
* <p> Author : qianmoQ </p>
* <p> Version : 1.0 </p>
* <p> Create Time : 2019-10-21 11:26 </p>
* <p> Author Email: <a href="mailTo:[email protected]">qianmoQ</a> </p>
*/
@SpringBootApplication
public class SpringBootDataJPAMongoDBTemplateIntegration {
public static void main(String[] args) {
SpringApplication.run(SpringBootDataJPAMongoDBTemplateIntegration.class, args);
}
}
```
#### 配置支持MongoDB
---
- 在`/src/main/java/com/edurt/sli/slidmt`目录下创建config目录,并在该目录下新建MongoDBConfig文件
```java
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.edurt.sli.slidmt.config;
import com.mongodb.MongoClient;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.stereotype.Component;
/**
* <p> MongoDBConfig </p>
* <p> Description : MongoDBConfig </p>
* <p> Author : qianmoQ </p>
* <p> Version : 1.0 </p>
* <p> Create Time : 2019-10-21 11:28 </p>
* <p> Author Email: <a href="mailTo:[email protected]">qianmoQ</a> </p>
*/
@Component
@Configuration
@ConfigurationProperties(prefix = "custom.mongodb")
@Data
@AllArgsConstructor
@NoArgsConstructor
public class MongoDBConfig {
private String server; // mongodb服务器地址
private Integer port; // mongodb服务器地址端口
private String database; // mongodb访问的数据库
@Bean
public MongoClient mongoClient() {
return new MongoClient(server, port);
}
@Bean
public MongoTemplate mongoTemplate() {
return new MongoTemplate(mongoClient(), database);
}
}
```
- 在resources资源目录下创建一个application.properties的配置文件,内容如下
```bash
custom.mongodb.server=localhost
custom.mongodb.port=27017
custom.mongodb.database=test
```
#### 操作MongoDB数据
---
- 在`/src/main/java/com/edurt/sli/slidmt`目录下创建*model*目录,并在该目录下新建MongoDBModel文件
```java
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.edurt.sli.slidmt.model;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
/**
* <p> MongoDBModel </p>
* <p> Description : MongoDBModel </p>
* <p> Author : qianmoQ </p>
* <p> Version : 1.0 </p>
* <p> Create Time : 2019-10-21 11:42 </p>
* <p> Author Email: <a href="mailTo:[email protected]">qianmoQ</a> </p>
*/
@Data
@ToString
@AllArgsConstructor
@NoArgsConstructor
public class MongoDBModel {
private String id;
private String title;
private String context;
}
```
- 测试增删改查的功能
在`/src/main/java/com/edurt/sli/slidmt`目录下创建*controller*目录,并在该目录下新建MongoDbController文件
```java
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.edurt.sli.slidmt.controller;
import com.edurt.sli.slidmt.model.MongoDBModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.web.bind.annotation.*;
/**
* <p> MongoDbController </p>
* <p> Description : MongoDbController </p>
* <p> Author : qianmoQ </p>
* <p> Version : 1.0 </p>
* <p> Create Time : 2019-10-21 11:44 </p>
* <p> Author Email: <a href="mailTo:[email protected]">qianmoQ</a> </p>
*/
@RestController
@RequestMapping(value = "mongodb/template")
public class MongoDbController {
@Autowired
private MongoTemplate mongoTemplate;
@GetMapping
public Object get() {
return this.mongoTemplate.findOne(Query.query(Criteria.where("title").is("Hello MongoDB")), MongoDBModel.class);
}
@PostMapping
public Object post(@RequestBody MongoDBModel model) {
return this.mongoTemplate.save(model);
}
@PutMapping
public Object put(@RequestBody MongoDBModel model) {
Query query = new Query(Criteria.where("title").is("Hello MongoDB"));
Update update = new Update().set("title", model.getTitle());
return this.mongoTemplate.findAndModify(query, update, MongoDBModel.class);
}
@DeleteMapping
public Object delete(@RequestParam String id) {
return this.mongoTemplate.remove(Query.query(Criteria.where("id").is(id)));
}
}
```
添加数据
```bash
shicheng@shichengdeMacBook-Pro ~> curl -X POST http://localhost:8080/mongodb/template -H 'Content-Type:application/json' -d '{"title": "HelloMongoDB", "context": "我是SpringBoot整合MongoDB示例"}'
{"id":"5dad2d4ea479fc579f298545","title":"HelloMongoDB","context":"我是SpringBoot整合MongoDB示例"}⏎
```
修改数据
```bash
shicheng@shichengdeMacBook-Pro ~> curl -X PUT http://localhost:8080/mongodb/template -H 'Content-Type:application/json' -d '{"title": "HelloMongoDBModfiy", "context": "我是SpringBoot整合MongoDB示例"}'
{"id":"5dad2d4ea479fc579f298545","title":"HelloMongoDBModfiy","context":"我是SpringBoot整合MongoDB示例"}⏎
```
获取数据
```bash
shicheng@shichengdeMacBook-Pro ~> curl -X GET http://localhost:8080/mongodb/template
{"id":"5dad2d4ea479fc579f298545","title":"HelloMongoDBModfiy","context":"我是SpringBoot整合MongoDB示例"}⏎
```
删除数据
```bash
shicheng@shichengdeMacBook-Pro ~> curl -X DELETE 'http://localhost:8080/mongodb/template?title=HelloMongoDB'
SUCCESS⏎
```
#### 打包文件部署
---
- 打包数据
```bash
mvn clean package -Dmaven.test.skip=true -X
```
运行打包后的文件即可
```bash
java -jar target/spring-learn-integration-datajpa-mongodb-template-1.0.0.jar
```
#### 源码地址
---
- [GitHub](https://github.com/EdurtIO/programming-learn-integration/tree/master/datajpa/datajpa-mongodb-template) | 30.834625 | 205 | 0.707366 | eng_Latn | 0.486705 |
b92120b8bace2de8f2d2a9398d71c5882b274849 | 416 | md | Markdown | _posts/2013-03-27-webkit-webkit-for-developers.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 5 | 2016-01-25T08:51:46.000Z | 2022-02-16T05:51:08.000Z | _posts/2013-03-27-webkit-webkit-for-developers.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 3 | 2015-08-22T08:39:36.000Z | 2021-07-25T15:24:10.000Z | _posts/2013-03-27-webkit-webkit-for-developers.md | jser/realtime.jser.info | 1c4e18b7ae7775838604ae7b7c666f1b28fb71d4 | [
"MIT"
] | 2 | 2016-01-18T03:56:54.000Z | 2021-07-25T14:27:30.000Z | ---
title: 開発者のための WebKit (“WebKit for Developers” 日本語訳)
author: azu
layout: post
itemUrl: 'http://myakura.github.com/n/webkit4devs.html'
editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2013/03/index.json'
date: '2013-03-27T15:00:00+00:00'
tags:
- webkit
- document
- 翻訳
---
WebKitがどういう構成をしているか、WebKit portとは何か、どの部分が共通なのか、JavaScriptのバインディング、Operaが採用するChromium、WebKit Nightlyとは何か について書かれている
| 29.714286 | 114 | 0.769231 | yue_Hant | 0.565236 |
b9214922344582f3482bbc225686491fd6531592 | 512 | md | Markdown | notes/istio.md | bradleyfrank/notes-etc | 6f6b08266be202a316286a8a4d7713378c67fc22 | [
"Unlicense"
] | null | null | null | notes/istio.md | bradleyfrank/notes-etc | 6f6b08266be202a316286a8a4d7713378c67fc22 | [
"Unlicense"
] | null | null | null | notes/istio.md | bradleyfrank/notes-etc | 6f6b08266be202a316286a8a4d7713378c67fc22 | [
"Unlicense"
] | null | null | null | # Istio
What Envoy version is Istio using?
```sh
kubectl exec -it prometheus-68b46fc8bb-dc965 -c istio-proxy -n istio-system pilot-agent request GET server_info
{
"version": "12cfbda324320f99e0e39d7c393109fcd824591f/1.14.1/Clean/RELEASE/BoringSSL"
}
```
Ref: [what-envoy-version-is-istio-using](https://istio.io/v1.6/docs/ops/diagnostic-tools/proxy-cmd/#what-envoy-version-is-istio-using)
---
```sh
stern istio-ingressgateway -n istio-system -e health -e metrics -o json | jq -Sr '.message | fromjson'
```
| 26.947368 | 134 | 0.742188 | eng_Latn | 0.178974 |
b92217c810773f2f07a0b7d1f8ad9816f64d2d45 | 6,431 | md | Markdown | docs/framework/wcf/feature-details/using-data-contracts.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/using-data-contracts.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/feature-details/using-data-contracts.md | skahack/docs.ja-jp | 7f7fac4879f8509f582c3ee008776ae7d4dde227 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Using data contracts
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- DataContractAttribute class
- WCF, data
- data contracts [WCF]
ms.assetid: a3ae7b21-c15c-4c05-abd8-f483bcbf31af
ms.openlocfilehash: 3fd22cc0842c51b331905369915bd055235680c4
ms.sourcegitcommit: c7a7e1468bf0fa7f7065de951d60dfc8d5ba89f5
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 05/14/2019
ms.locfileid: "65592192"
---
# <a name="using-data-contracts"></a>Using data contracts
 A *data contract* is a formal agreement between a service and a client that abstractly describes the data to be exchanged. That is, to communicate, the client and the service do not have to share the same types, only the same data contracts. A data contract precisely defines, for each parameter or return type, what data is serialized (turned into XML) to be exchanged.
## <a name="data-contract-basics"></a>Data contract basics
 Windows Communication Foundation (WCF) uses a serialization engine called the Data Contract Serializer by default to serialize and deserialize data (convert it to and from XML). All .NET Framework primitive types, such as integers and strings, as well as certain types treated as primitives, such as <xref:System.DateTime> and <xref:System.Xml.XmlElement>, can be serialized with no other preparation and are considered as having default data contracts. Many other .NET Framework types also have existing data contracts. For a full list of serializable types, see [Types Supported by the Data Contract Serializer](../../../../docs/framework/wcf/feature-details/types-supported-by-the-data-contract-serializer.md).
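 To make this concrete, the following is a minimal sketch (not part of the original article; the `Program` class name is illustrative) that serializes one such primitive with <xref:System.Runtime.Serialization.DataContractSerializer> and prints the resulting XML. The output shown in the comment is approximate.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

class Program
{
    static void Main()
    {
        // DateTime is treated as a primitive and has a default data contract,
        // so it can be serialized without applying any attributes.
        var serializer = new DataContractSerializer(typeof(DateTime));

        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, new DateTime(2010, 1, 1));
            Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
            // Prints an XML element similar to:
            // <dateTime xmlns="http://schemas.microsoft.com/2003/10/Serialization/">2010-01-01T00:00:00</dateTime>
        }
    }
}
```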
 New complex types that you create must have a data contract defined for them to be serializable. By default, the <xref:System.Runtime.Serialization.DataContractSerializer> infers the data contract and serializes all publicly visible types. All public read/write properties and fields of the type are serialized. You can opt members out of serialization by using the <xref:System.Runtime.Serialization.IgnoreDataMemberAttribute>. You can also explicitly create a data contract by using the <xref:System.Runtime.Serialization.DataContractAttribute> and <xref:System.Runtime.Serialization.DataMemberAttribute> attributes. This is normally done by applying the <xref:System.Runtime.Serialization.DataContractAttribute> attribute to the type. This attribute can be applied to classes, structures, and enumerations. The <xref:System.Runtime.Serialization.DataMemberAttribute> attribute must then be applied to each member of the data contract type to indicate that it is a *data member*, that is, that it should be serialized. For more information, see [Serializable Types](../../../../docs/framework/wcf/feature-details/serializable-types.md).
### <a name="example"></a>Example
 The following example shows a service contract (an interface) to which the <xref:System.ServiceModel.ServiceContractAttribute> and <xref:System.ServiceModel.OperationContractAttribute> attributes have been explicitly applied. The example shows that primitive types do not require a data contract, while a complex type does.
[!code-csharp[C_DataContract#1](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_datacontract/cs/source.cs#1)]
[!code-vb[C_DataContract#1](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_datacontract/vb/source.vb#1)]
 The following example shows how a data contract is created for the `MyTypes.PurchaseOrder` type by applying the <xref:System.Runtime.Serialization.DataContractAttribute> and <xref:System.Runtime.Serialization.DataMemberAttribute> attributes to the class and its members.
[!code-csharp[C_DataContract#2](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_datacontract/cs/source.cs#2)]
[!code-vb[C_DataContract#2](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_datacontract/vb/source.vb#2)]
### <a name="notes"></a>メモ
以下に、データ コントラクトを作成する際に考慮する必要がある項目を示します。
- <xref:System.Runtime.Serialization.IgnoreDataMemberAttribute> 属性は、マークされていない型で使用した場合にのみ受け入れられます。 これには、 <xref:System.Runtime.Serialization.DataContractAttribute>、 <xref:System.SerializableAttribute>、 <xref:System.Runtime.Serialization.CollectionDataContractAttribute>、 <xref:System.Runtime.Serialization.EnumMemberAttribute> のいずれかの属性でマークされていない型、または他の方法 ( <xref:System.Xml.Serialization.IXmlSerializable>など) でシリアル化可能としてマークされた型が含まれます。
- <xref:System.Runtime.Serialization.DataMemberAttribute> 属性は、フィールドおよびプロパティに適用できます。
- メンバーのアクセシビリティ レベル (内部、プライベート、保護、またはパブリック) は、データ コントラクトに影響しません。
- <xref:System.Runtime.Serialization.DataMemberAttribute> 属性が静的メンバーに適用されている場合は無視されます。
- シリアル化中には、プロパティのデータ メンバーがシリアル化対象のプロパティの値を取得できるように、プロパティ取得コードが呼び出されます。
- 逆シリアル化中には、まず初期化されていないオブジェクトが、その型のコンストラクターを呼び出さずに作成されます。 次に、すべてのデータ メンバーが逆シリアル化されます。
- 逆シリアル化中には、プロパティのデータ メンバーが、プロパティを逆シリアル化されている値に設定できるように、プロパティ設定コードが呼び出されます。
- データ コントラクトが有効であるためには、すべてのデータ メンバーをシリアル化できる必要があります。 シリアル化できるすべての型の一覧については、「 [Types Supported by the Data Contract Serializer](../../../../docs/framework/wcf/feature-details/types-supported-by-the-data-contract-serializer.md)」を参照してください。
ジェネリック型は、非ジェネリック型とまったく同じように処理されます。 ジェネリック パラメーターに対する特別な要件はありません。 たとえば、次の型について考えます。
[!code-csharp[C_DataContract#3](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_datacontract/cs/source.cs#3)]
[!code-vb[C_DataContract#3](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_datacontract/vb/source.vb#3)]
This type can be serialized whether or not the type used for the generic type parameter (`T`) is serializable. Because it must be possible to serialize all data members, the following type is serializable only if the generic type parameter is also serializable, as shown in the following code.
[!code-csharp[C_DataContract#4](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_datacontract/cs/source.cs#4)]
[!code-vb[C_DataContract#4](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_datacontract/vb/source.vb#4)]
For a complete code sample of a WCF service that defines a data contract, see the [Basic Data Contract](../../../../docs/framework/wcf/samples/basic-data-contract.md) sample.
## <a name="see-also"></a>See also
- <xref:System.Runtime.Serialization.DataMemberAttribute>
- <xref:System.Runtime.Serialization.DataContractAttribute>
- [Serializable Types](../../../../docs/framework/wcf/feature-details/serializable-types.md)
- [Data Contract Names](../../../../docs/framework/wcf/feature-details/data-contract-names.md)
- [Data Contract Equivalence](../../../../docs/framework/wcf/feature-details/data-contract-equivalence.md)
- [Data Member Order](../../../../docs/framework/wcf/feature-details/data-member-order.md)
- [Data Contract Known Types](../../../../docs/framework/wcf/feature-details/data-contract-known-types.md)
- [Forward-Compatible Data Contracts](../../../../docs/framework/wcf/feature-details/forward-compatible-data-contracts.md)
- [Data Contract Versioning](../../../../docs/framework/wcf/feature-details/data-contract-versioning.md)
- [Version-Tolerant Serialization Callbacks](../../../../docs/framework/wcf/feature-details/version-tolerant-serialization-callbacks.md)
- [Data Member Default Values](../../../../docs/framework/wcf/feature-details/data-member-default-values.md)
- [Types Supported by the Data Contract Serializer](../../../../docs/framework/wcf/feature-details/types-supported-by-the-data-contract-serializer.md)
- [How to: Create a Basic Data Contract for a Class or Structure](../../../../docs/framework/wcf/feature-details/how-to-create-a-basic-data-contract-for-a-class-or-structure.md)
| 76.559524 | 820 | 0.786036 | yue_Hant | 0.500216 |
b9223b8587b9b51af3b477a63ff6f1a0bc6d5fb4 | 241 | md | Markdown | Controls/bootstrap4/NavigationBar/control.md | Jollof-guy/dotvvm-docs | 339102392f0b766d74628f183f9069e125a0ce55 | [
"Apache-2.0"
] | null | null | null | Controls/bootstrap4/NavigationBar/control.md | Jollof-guy/dotvvm-docs | 339102392f0b766d74628f183f9069e125a0ce55 | [
"Apache-2.0"
] | null | null | null | Controls/bootstrap4/NavigationBar/control.md | Jollof-guy/dotvvm-docs | 339102392f0b766d74628f183f9069e125a0ce55 | [
"Apache-2.0"
] | null | null | null | Renders a Bootstrap Navs widget.
<https://getbootstrap.com/docs/4.3/components/navs/>
If you want to create a responsive menu (Navbar), use the [ResponsiveNavigation](/docs/controls/bootstrap/ResponsiveNavigation/{branch}) control instead. | 48.2 | 153 | 0.79668 | eng_Latn | 0.524685 |
b922a47702014d488332d9ee0ff29ba4c4a8ea88 | 53 | md | Markdown | README.md | peteward44/torrent.net | 2d78400a835694da18a808a54ee6add706c14ffa | [
"MIT"
] | 8 | 2015-11-19T23:56:47.000Z | 2018-08-11T21:27:46.000Z | README.md | peteward44/torrent.net | 2d78400a835694da18a808a54ee6add706c14ffa | [
"MIT"
] | null | null | null | README.md | peteward44/torrent.net | 2d78400a835694da18a808a54ee6add706c14ffa | [
"MIT"
] | 4 | 2016-08-24T01:48:46.000Z | 2019-08-04T13:42:11.000Z | # torrent.net
Basic BitTorrent client written in C#
| 17.666667 | 38 | 0.773585 | eng_Latn | 0.971595 |
b922c0140429cf1f98d87f3bb97213dd17c78d79 | 1,195 | md | Markdown | docs/framework/unmanaged-api/profiling/icorprofilercallback10-eventpipeprovidercreated-method.md | xp44mm/docs | bda25d6f7059d6ac826b251978c9d6f823fd7154 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/profiling/icorprofilercallback10-eventpipeprovidercreated-method.md | xp44mm/docs | bda25d6f7059d6ac826b251978c9d6f823fd7154 | [
"CC-BY-4.0",
"MIT"
] | 355 | 2021-03-31T17:20:49.000Z | 2022-03-09T04:04:44.000Z | docs/framework/unmanaged-api/profiling/icorprofilercallback10-eventpipeprovidercreated-method.md | xp44mm/docs | bda25d6f7059d6ac826b251978c9d6f823fd7154 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: "Learn more about: ICorProfilerCallback10::EventPipeProviderCreated Method"
title: "ICorProfilerCallback10::EventPipeProviderCreated Method"
ms.date: "03/19/2021"
api_name:
- "ICorProfilerCallback10.EventPipeProviderCreated"
api_location:
- "coreclr.dll"
- "corprof.idl"
api_type:
- "COM"
---
# ICorProfilerCallback10::EventPipeProviderCreated Method
Notifies the profiler whenever an EventPipe provider is created.
## Syntax
```cpp
HRESULT EventPipeProviderCreated([in] EVENTPIPE_PROVIDER provider);
```
## Parameters
`provider`
[in] The provider that was created.
## Requirements
**Platforms:** See [.NET Core supported operating systems](../../../core/install/windows.md?pivots=os-windows).
**Header:** CorProf.idl, CorProf.h
**.NET Versions:** [!INCLUDE[net_core](../../../../includes/net-core-50-md.md)]
## See also
- [Profiling Interfaces](profiling-interfaces.md)
- [ICorProfilerCallback10 Interface](icorprofilercallback10-interface.md)
- [ICorProfilerInfo12 Interface](icorprofilerinfo12-interface.md)
- [ICorProfilerInfo12.EventPipeAddProviderToSession Method](icorprofilerinfo12-eventpipeaddprovidertosession-method.md)
| 29.875 | 119 | 0.754812 | yue_Hant | 0.717739 |
b9234e707002e6f72e8dad9ccee1955c61f23d83 | 1,865 | md | Markdown | docs/vs-2015/extensibility/debugger/reference/idebugfield-getinfo.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugfield-getinfo.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/debugger/reference/idebugfield-getinfo.md | 1DanielaBlanco/visualstudio-docs.es-es | 9e934cd5752dc7df6f5e93744805e3c600c87ff0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugField::GetInfo | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- IDebugField::GetInfo
helpviewer_keywords:
- IDebugField::GetInfo method
ms.assetid: 7d508200-89ce-400f-a8ea-f28e7610cb2b
caps.latest.revision: 13
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: eb4709321c60f812f1f5211cc9f2c065d3f0a236
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 11/16/2018
ms.locfileid: "51741621"
---
# <a name="idebugfieldgetinfo"></a>IDebugField::GetInfo
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]
This method gets displayable information about the field.
## <a name="syntax"></a>Syntax
```cpp#
HRESULT GetInfo(
FIELD_INFO_FIELDS dwFields,
FIELD_INFO* pFieldInfo
);
```
```csharp
int GetInfo(
enum_FIELD_INFO_FIELDS dwFields,
FIELD_INFO[] pFieldInfo
);
```
#### <a name="parameters"></a>Parámetros
`dwFields`
[in] Una combinación de [FIELD_INFO_FIELDS](../../../extensibility/debugger/reference/field-info-fields.md) constantes que selecciona la información que se mostrará. Si el campo representa un símbolo, esto suele ser el nombre de símbolo y el tipo.
`pFieldInfo`
[out] Devuelve la información de la proporcionada [FIELD_INFO](../../../extensibility/debugger/reference/field-info.md) estructura.
## <a name="return-value"></a>Valor devuelto
Si es correcto, devuelve `S_OK`; en caso contrario, devuelve un código de error.
## <a name="see-also"></a>Vea también
[IDebugField](../../../extensibility/debugger/reference/idebugfield.md)
[FIELD_INFO](../../../extensibility/debugger/reference/field-info.md)
| 30.080645 | 250 | 0.726542 | spa_Latn | 0.320247 |
b9234f5ce72a5272c85a9d79138f0ce259254db0 | 4,269 | md | Markdown | docs/code-quality/ca1715-identifiers-should-have-correct-prefix.md | wyossi/visualstudio-docs | ee4cc0f6e8acc8c8559f540e8d0c7cdd01687223 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/ca1715-identifiers-should-have-correct-prefix.md | wyossi/visualstudio-docs | ee4cc0f6e8acc8c8559f540e8d0c7cdd01687223 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/ca1715-identifiers-should-have-correct-prefix.md | wyossi/visualstudio-docs | ee4cc0f6e8acc8c8559f540e8d0c7cdd01687223 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "CA1715: Identifiers should have correct prefix"
ms.date: 11/04/2016
ms.prod: visual-studio-dev15
ms.topic: reference
f1_keywords:
- "CA1715"
- "IdentifiersShouldHaveCorrectPrefix"
helpviewer_keywords:
- "IdentifiersShouldHaveCorrectPrefix"
- "CA1715"
ms.assetid: cf45f8df-6855-4cb6-a4e2-7cfed714cf2f
author: gewarren
ms.author: gewarren
manager: douge
dev_langs:
- CPP
- CSharp
- VB
ms.workload:
- "multiple"
---
# CA1715: Identifiers should have correct prefix
|||
|-|-|
|TypeName|IdentifiersShouldHaveCorrectPrefix|
|CheckId|CA1715|
|Category|Microsoft.Naming|
|Breaking Change|Breaking - when fired on interfaces.<br /><br /> Non-breaking - when raised on generic type parameters.|
## Cause
The name of an externally visible interface does not start with an uppercase 'I'.
-or-
The name of a generic type parameter on an externally visible type or method does not start with an uppercase 'T'.
## Rule description
By convention, the names of certain programming elements start with a specific prefix.
Interface names should start with an uppercase 'I' followed by another uppercase letter. This rule reports violations for interface names such as 'MyInterface' and 'IsolatedInterface'.
Generic type parameter names should start with an uppercase 'T' and optionally may be followed by another uppercase letter. This rule reports violations for generic type parameter names such as 'V' and 'Type'.
Naming conventions provide a common look for libraries that target the common language runtime. This reduces the learning curve that is required for new software libraries, and increases customer confidence that the library was developed by someone who has expertise in developing managed code.
## How to fix violations
Rename the identifier so that it is correctly prefixed.
## When to suppress warnings
Do not suppress a warning from this rule.
## Example
**The following example shows an incorrectly named interface.**
[!code-cpp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix#1](../code-quality/codesnippet/CPP/ca1715-identifiers-should-have-correct-prefix_1.cpp)]
[!code-vb[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix#1](../code-quality/codesnippet/VisualBasic/ca1715-identifiers-should-have-correct-prefix_1.vb)]
[!code-csharp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix#1](../code-quality/codesnippet/CSharp/ca1715-identifiers-should-have-correct-prefix_1.cs)]
## Example
**The following example fixes the previous violation by prefixing the interface with 'I'.**
[!code-csharp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix2#1](../code-quality/codesnippet/CSharp/ca1715-identifiers-should-have-correct-prefix_2.cs)]
[!code-cpp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix2#1](../code-quality/codesnippet/CPP/ca1715-identifiers-should-have-correct-prefix_2.cpp)]
[!code-vb[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix2#1](../code-quality/codesnippet/VisualBasic/ca1715-identifiers-should-have-correct-prefix_2.vb)]
## Example
**The following example shows an incorrectly named generic type parameter.**
[!code-cpp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix3#1](../code-quality/codesnippet/CPP/ca1715-identifiers-should-have-correct-prefix_3.cpp)]
[!code-vb[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix3#1](../code-quality/codesnippet/VisualBasic/ca1715-identifiers-should-have-correct-prefix_3.vb)]
[!code-csharp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix3#1](../code-quality/codesnippet/CSharp/ca1715-identifiers-should-have-correct-prefix_3.cs)]
## Example
**The following example fixes the previous violation by prefixing the generic type parameter with 'T'.**
[!code-cpp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix4#1](../code-quality/codesnippet/CPP/ca1715-identifiers-should-have-correct-prefix_4.cpp)]
[!code-csharp[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix4#1](../code-quality/codesnippet/CSharp/ca1715-identifiers-should-have-correct-prefix_4.cs)]
[!code-vb[FxCop.Naming.IdentifiersShouldHaveCorrectPrefix4#1](../code-quality/codesnippet/VisualBasic/ca1715-identifiers-should-have-correct-prefix_4.vb)]
## Related rules
[CA1722: Identifiers should not have incorrect prefix](../code-quality/ca1722-identifiers-should-not-have-incorrect-prefix.md) | 51.433735 | 295 | 0.800187 | eng_Latn | 0.89421 |
b9239428db95abb04b9dbc772818020db887fd53 | 52 | md | Markdown | debates/systemic-risk.md | CynicusRex/awesome-crypto-critique | b79c475f7a5061a72e4cfd106f65d8da06322a77 | [
"CC0-1.0"
] | null | null | null | debates/systemic-risk.md | CynicusRex/awesome-crypto-critique | b79c475f7a5061a72e4cfd106f65d8da06322a77 | [
"CC0-1.0"
] | null | null | null | debates/systemic-risk.md | CynicusRex/awesome-crypto-critique | b79c475f7a5061a72e4cfd106f65d8da06322a77 | [
"CC0-1.0"
] | null | null | null | # Are crypto assets a systemic risk to the economy?
| 26 | 51 | 0.769231 | eng_Latn | 0.997597 |
b923a07e409db1b818d6a6bc930420874306f0df | 440 | md | Markdown | docs/2020/registration.md | CannonLock/all-hands | 70b779da687a98239a49f007ce331d245bcd3324 | [
"CC-BY-4.0"
] | null | null | null | docs/2020/registration.md | CannonLock/all-hands | 70b779da687a98239a49f007ce331d245bcd3324 | [
"CC-BY-4.0"
] | 14 | 2018-01-11T17:24:30.000Z | 2022-02-24T16:26:33.000Z | docs/2020/registration.md | CannonLock/all-hands | 70b779da687a98239a49f007ce331d245bcd3324 | [
"CC-BY-4.0"
] | 8 | 2018-01-04T16:14:50.000Z | 2022-03-14T15:09:09.000Z | # OSG All-Hands Meeting 2020 – Registration
Registration for the event is free, but is required for security reasons. If
you hope to attend any part of the event, please register now; it takes only a
couple of minutes:
[https://indico.fnal.gov/event/22127/registrations/](https://indico.fnal.gov/event/22127/registrations/)
Check [the Meeting Logistics page](technology.md)
for more information on meeting technology and other logistics.
| 40 | 104 | 0.788636 | eng_Latn | 0.987659 |
b9248d8fc5e04eb5dbe8e81d7a4104d8926210ac | 2,971 | md | Markdown | windows-apps-src/porting/apps-on-arm-troubleshooting-arm32.md | bayashka123/windows-uwp | 1ebfc7ee1febf2b67c033144dcba72f94b1b7d90 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/porting/apps-on-arm-troubleshooting-arm32.md | bayashka123/windows-uwp | 1ebfc7ee1febf2b67c033144dcba72f94b1b7d90 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-apps-src/porting/apps-on-arm-troubleshooting-arm32.md | bayashka123/windows-uwp | 1ebfc7ee1febf2b67c033144dcba72f94b1b7d90 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Troubleshooting ARM32 UWP apps
description: Common issues with ARM32 apps when running on ARM, and how to fix them.
ms.date: 01/03/2019
ms.topic: article
keywords: windows 10 s, always connected, ARM32 apps on ARM, windows 10 on ARM, troubleshooting
ms.localizationpriority: medium
---
# Troubleshooting ARM UWP apps
If your ARM32 or ARM64 UWP app isn't working correctly on ARM, here's some guidance that may help.
>[!NOTE]
> To build your UWP application to natively target the ARM64 platform, you must have Visual Studio 2017 version 15.9 or later. For more information, see [this blog post](https://blogs.windows.com/buildingapps/2018/11/15/official-support-for-windows-10-on-arm-development/).
## Common issues
Here are some common issues to keep in mind when troubleshooting ARM32 and ARM64 apps.
### Using Windows 10 Mobile-only APIs on ARM-based processors
ARM apps may run into problems when using mobile-only APIs (for example, **HardwareButtons**). To mitigate this, you can dynamically detect whether your app is running on Windows 10 Mobile before calling these APIs. Follow the guidance in the blog post, [Dynamically detecting features with API contracts](https://blogs.windows.com/buildingapps/2015/09/15/dynamically-detecting-features-with-api-contracts-10-by-10/).
### Including dependencies not supported by UWP apps
Universal Windows Platform (UWP) apps that aren't properly built with Visual Studio and the UWP SDK may have dependencies on OS components that aren't available to ARM apps running on an ARM64 system. Examples of these dependencies include:
- Expecting parts of the .NET Framework to be available.
- Referencing third-party .NET components that aren't compatible with UWP.
These issues can be resolved by: removing the unavailable dependencies and rebuilding the app by using the latest Microsoft Visual Studio and UWP SDK versions; or as a last resort, removing the ARM app from the Microsoft Store, so that the x86 version of the app (if available) is downloaded to users’ PCs.
For more info on .NET APIs available for UWP apps, see [.NET for UWP apps](https://docs.microsoft.com/dotnet/api/index?view=dotnet-uwp-10.0)
### Compiling an app with an older version of Visual Studio and SDK
If you're running into issues, be sure to use the latest versions of Microsoft Visual Studio and the Windows SDK to compile your app. Apps compiled with an earlier version of Visual Studio and the SDK may have issues that have been fixed in later versions.
## Debugging
You can use existing tools for developing apps for the ARM platform. Here are some helpful resources.
- Visual Studio 15.5 Preview 1 and later supports running ARM32 apps by using Universal Authentication mode. This automatically bootstraps the necessary remote debugging tools.
- See [Debugging on ARM64](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-arm64) to learn more about tools and strategies for debugging on ARM.
| 72.463415 | 417 | 0.792326 | eng_Latn | 0.994218 |
b924dab02baf8ca40ccd098dc1490d400a56d6da | 79 | md | Markdown | README.md | BenedictusAryo/nG1_cpu_mem_check | 50f20ca2869fac4abf5132ac1ad3b370a936d6f8 | [
"MIT"
] | null | null | null | README.md | BenedictusAryo/nG1_cpu_mem_check | 50f20ca2869fac4abf5132ac1ad3b370a936d6f8 | [
"MIT"
] | null | null | null | README.md | BenedictusAryo/nG1_cpu_mem_check | 50f20ca2869fac4abf5132ac1ad3b370a936d6f8 | [
"MIT"
] | null | null | null | # nG1_cpu_mem_check
Bash script to generate a CPU and memory utilization report
| 26.333333 | 58 | 0.835443 | eng_Latn | 0.919022 |
b9250c72675417a8e4c789aa83e775b41da9bdcd | 8,571 | md | Markdown | articles/azure-functions/durable/durable-functions-unit-testing.md | Nickoelton/azure-docs.es-es | fc4d2301c7a6fe717e92d25ee458bf8f1408dc76 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/durable/durable-functions-unit-testing.md | Nickoelton/azure-docs.es-es | fc4d2301c7a6fe717e92d25ee458bf8f1408dc76 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/durable/durable-functions-unit-testing.md | Nickoelton/azure-docs.es-es | fc4d2301c7a6fe717e92d25ee458bf8f1408dc76 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Unit testing Azure Durable Functions
description: Learn how to unit test Durable Functions.
ms.topic: conceptual
ms.date: 11/03/2019
ms.openlocfilehash: 7786a0a2e2d31086e1938b70e63fe2374e16fe7f
ms.sourcegitcommit: c4246c2b986c6f53b20b94d4e75ccc49ec768a9a
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 12/04/2020
ms.locfileid: "96601363"
---
# <a name="durable-functions-unit-testing"></a>Prueba unitaria de Durable Functions
La prueba unitaria es parte importante de las prácticas modernas de desarrollo de software. Las pruebas unitarias comprueban el comportamiento de la lógica de negocios e impiden los cambios importantes desapercibidos en el futuro. Durable Functions puede aumentar fácilmente su complejidad, por lo que realizar pruebas unitarias ayudará a evitar los cambios importantes. En las secciones siguientes se explica cómo ejecutar una prueba unitaria de los tres tipos de función: funciones de cliente de orquestación, de orquestador y de actividad.
> [!NOTE]
> This article provides guidance on unit testing for Durable Functions apps that target Durable Functions 1.x. It has not yet been updated to account for the changes introduced in Durable Functions 2.x. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
## <a name="prerequisites"></a>Prerequisites
The examples in this article require knowledge of the following concepts and frameworks:
* Unit testing
* Durable Functions
* [xUnit](https://github.com/xunit/xunit) - testing framework
* [moq](https://github.com/moq/moq4) - mocking framework
## <a name="base-classes-for-mocking"></a>Base classes for mocking
Mocking is supported via three abstract classes in Durable Functions 1.x:
* `DurableOrchestrationClientBase`
* `DurableOrchestrationContextBase`
* `DurableActivityContextBase`
These classes are base classes for `DurableOrchestrationClient`, `DurableOrchestrationContext`, and `DurableActivityContext`, which define the orchestration client, orchestrator, and activity methods. The mocks set the expected behavior for the base class methods so that the unit test can verify the business logic. There is a two-step workflow for unit testing the business logic in the orchestration client and orchestrator:
1. Use the base classes instead of the concrete implementations when defining the signatures of the orchestration client and orchestrator functions.
2. In the unit tests, mock the behavior of the base classes and verify the business logic.
You can find more details in the following paragraphs for testing functions that use the orchestration client binding and the orchestrator trigger binding.
## <a name="unit-testing-trigger-functions"></a>Unit testing trigger functions
In this section, the unit test validates the logic of the following HTTP trigger function for starting new orchestrations.
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HttpStart.cs)]
The unit test's task is to verify the value of the `Retry-After` header provided in the response payload. So the unit test mocks some of the `DurableOrchestrationClientBase` methods to guarantee predictable behavior.
First, a mock of the base class, `DurableOrchestrationClientBase`, is required. The mock could be a new class that implements `DurableOrchestrationClientBase`. However, using a mocking framework like [moq](https://github.com/moq/moq4) simplifies the process:
```csharp
// Mock DurableOrchestrationClientBase
var durableOrchestrationClientBaseMock = new Mock<DurableOrchestrationClientBase>();
```
Then the `StartNewAsync` method is mocked to return a well-known instance ID.
```csharp
// Mock StartNewAsync method
durableOrchestrationClientBaseMock.
Setup(x => x.StartNewAsync(functionName, It.IsAny<object>())).
ReturnsAsync(instanceId);
```
Next, `CreateCheckStatusResponse` is mocked to always return an empty HTTP 200 response.
```csharp
// Mock CreateCheckStatusResponse method
durableOrchestrationClientBaseMock
.Setup(x => x.CreateCheckStatusResponse(It.IsAny<HttpRequestMessage>(), instanceId))
.Returns(new HttpResponseMessage
{
StatusCode = HttpStatusCode.OK,
Content = new StringContent(string.Empty),
Headers =
{
RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(10))
}
});
```
`ILogger` is also mocked:
```csharp
// Mock ILogger
var loggerMock = new Mock<ILogger>();
```
Now the `Run` method is called from the unit test:
```csharp
// Call Orchestration trigger function
var result = await HttpStart.Run(
new HttpRequestMessage()
{
Content = new StringContent("{}", Encoding.UTF8, "application/json"),
RequestUri = new Uri("http://localhost:7071/orchestrators/E1_HelloSequence"),
},
durableOrchestrationClientBaseMock.Object,
functionName,
loggerMock.Object);
```
The last step is to compare the output with the expected value:
```csharp
// Validate that output is not null
Assert.NotNull(result.Headers.RetryAfter);
// Validate output's Retry-After header value
Assert.Equal(TimeSpan.FromSeconds(10), result.Headers.RetryAfter.Delta);
```
After combining all the steps, the unit test has the following code:
[!code-csharp[Main](~/samples-durable-functions/samples/VSSample.Tests/HttpStartTests.cs)]
## <a name="unit-testing-orchestrator-functions"></a>Funciones de Orchestrator de prueba unitaria
Las funciones de Orchestrator son incluso más interesantes para la prueba unitaria porque habitualmente tienen mucha más lógica de negocios.
En esta sección, las pruebas unitarias validarán la salida de la función `E1_HelloSequence` de Orchestrator:
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HelloSequence.cs)]
The unit test code starts with creating a mock:
```csharp
var durableOrchestrationContextMock = new Mock<DurableOrchestrationContextBase>();
```
Then the activity method calls are mocked:
```csharp
durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Tokyo")).ReturnsAsync("Hello Tokyo!");
durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Seattle")).ReturnsAsync("Hello Seattle!");
durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "London")).ReturnsAsync("Hello London!");
```
Next, the unit test calls the `HelloSequence.Run` method:
```csharp
var result = await HelloSequence.Run(durableOrchestrationContextMock.Object);
```
Finally, the output is validated:
```csharp
Assert.Equal(3, result.Count);
Assert.Equal("Hello Tokyo!", result[0]);
Assert.Equal("Hello Seattle!", result[1]);
Assert.Equal("Hello London!", result[2]);
```
After combining all the steps, the unit test has the following code:
[!code-csharp[Main](~/samples-durable-functions/samples/VSSample.Tests/HelloSequenceOrchestratorTests.cs)]
## <a name="unit-testing-activity-functions"></a>Funciones de la actividad de prueba unitaria
Es posible realizar una prueba unitaria de las funciones de actividad del mismo modo que las funciones no duraderas.
En esta sección, la prueba unitaria validará el comportamiento de la actividad de la función `E1_SayHello`:
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HelloSequence.cs)]
The unit tests verify the format of the output. The unit tests can use the parameter types directly or mock the `DurableActivityContextBase` class:
[!code-csharp[Main](~/samples-durable-functions/samples/VSSample.Tests/HelloSequenceActivityTests.cs)]
## <a name="next-steps"></a>Pasos siguientes
> [!div class="nextstepaction"]
> [Learn more about xUnit](https://xunit.net/docs/getting-started/netcore/cmdline)
>
> [Learn more about moq](https://github.com/Moq/moq4/wiki/Quickstart)
| 46.32973 | 542 | 0.772139 | spa_Latn | 0.90631 |
b9257b162edfee9e83f4c28f2cf546982e5af17b | 907 | md | Markdown | exampleSite/content/articles/celebrity/daymond-john-seeks-10-million-for-03-with-n95-masks.md | savage-teddy/parsa-hugo | b710d0a0a1cbbb79f9f5e6c903fc8dc311bcac74 | [
"MIT"
] | null | null | null | exampleSite/content/articles/celebrity/daymond-john-seeks-10-million-for-03-with-n95-masks.md | savage-teddy/parsa-hugo | b710d0a0a1cbbb79f9f5e6c903fc8dc311bcac74 | [
"MIT"
] | null | null | null | exampleSite/content/articles/celebrity/daymond-john-seeks-10-million-for-03-with-n95-masks.md | savage-teddy/parsa-hugo | b710d0a0a1cbbb79f9f5e6c903fc8dc311bcac74 | [
"MIT"
] | null | null | null | +++
categories = ["Celebrity"]
date = 2020-04-24T22:27:00Z
description = "Shark Tank gets bit!"
image = "/images/daymondJohn.jpg"
tags = ["Daymond John", "Shark Tank", "Celebrity"]
title = "Daymond John seeks $10 million for .03% with N95 masks"
type = "post"
+++
Shark Tank shark Daymond John went seeking $10 million dollars for a .03% stake in his company that sells outrageously marked up N95 masks. When the COVID-19 virus was first heard, Mr. John could be seen buying up oodles of toilet paper from Costco. Little did we all know, he was also buy a plethora of N95 masks for an entrepreneurial business idea.
The idea came to him when it was apparent the only way to survive the pandemic was using toilet paper cartridges in N95 masks. Mr. John deferred any comments relating to his new business. Donald Trump gave the following statement concerning the deal...

| 53.352941 | 351 | 0.75634 | eng_Latn | 0.998485 |
b9258b0729ca350606851db76050df9814b00d0c | 4,650 | md | Markdown | docs/linux/sql-server-linux-whats-new-2019.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/linux/sql-server-linux-whats-new-2019.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/linux/sql-server-linux-whats-new-2019.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: What's new for SQL Server 2019 on Linux
description: This article describes what's new for SQL Server 2019 on Linux.
author: VanMSFT
ms.author: vanto
ms.date: 10/23/2019
ms.topic: conceptual
ms.prod: sql
ms.technology: linux
ms.openlocfilehash: 5183efa51afd89ad82d0cdcb6448996429b81d28
ms.sourcegitcommit: bb56808dd81890df4f45636b600aaf3269c374f2
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/24/2019
ms.locfileid: "72890559"
---
# <a name="whats-new-for-sql-server-2019-on-linux"></a>Neuigkeiten zu SQL Server 2019 für Linux
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md-linuxonly](../includes/appliesto-ss-xxxx-xxxx-xxx-md-linuxonly.md)]
This article describes the most important features and services available for SQL Server 2019 on Linux. For information about package downloads and known issues, see the [Release notes](sql-server-linux-release-notes-2019.md?view=sql-server-linux-ver15).
## <a name="updates"></a>Updates
The following updates were made in SQL Server 2019 on Linux:
| New feature or update | Details |
|:-----|:-----|
|Replication support |[SQL Server Replication on Linux](sql-server-linux-replication.md)
|Support for the Microsoft Distributed Transaction Coordinator (MSDTC) |[How to configure MSDTC on Linux](sql-server-linux-configure-msdtc.md) |
|OpenLDAP support for third-party AD providers |[Tutorial: Use Active Directory authentication with SQL Server on Linux](sql-server-linux-active-directory-authentication.md) |
|Machine Learning on Linux |[Configure Machine Learning on Linux](sql-server-linux-setup-machine-learning.md) |
|`tempdb` improvements | By default, a new installation of SQL Server on Linux creates multiple `tempdb` data files based on the number of logical cores (up to 8 data files). This does not apply to in-place minor or major version upgrades. Each `tempdb` file is 8 MB with an autogrowth of 64 MB. This behavior is similar to the default SQL Server installation on Windows. |
| PolyBase on Linux | [Install PolyBase](../relational-databases/polybase/polybase-linux-setup.md) on Linux for non-Hadoop connectors.<br/><br/>[PolyBase type mapping](../relational-databases/polybase/polybase-type-mapping.md) |
| Change Data Capture (CDC) support | Change Data Capture (CDC) is now supported on Linux for SQL Server 2019. |
| Microsoft Container Registry | The [Microsoft Container Registry](https://www.ntweekly.com/2019/09/23/microsoft-container-registry-to-replace-docker-hub-for-new-images/) now replaces Docker Hub for new official Microsoft container images, including [!INCLUDE[sql-server-2019](../includes/sssqlv15-md.md)]. |
| Non-root containers | With [!INCLUDE[sql-server-2019](../includes/sssqlv15-md.md)], you can create safer containers by starting the [!INCLUDE[sql-server](../includes/ssnoversion-md.md)] process as a non-root user by default. For more information, see [Build and run SQL Server containers as a non-root user](sql-server-linux-configure-docker.md#buildnonrootcontainer). |
## <a name="next-steps"></a>Nächste Schritte
Um SQL Server für Linux zu installieren, gehen Sie gemäß einem der folgenden Tutorials vor:
- [Installieren unter Red Hat Enterprise Linux](quickstart-install-connect-red-hat.md?view=sql-server-linux-ver15)
- [Installation unter SUSE Linux Enterprise Server](quickstart-install-connect-suse.md?view=sql-server-linux-ver15)
- [Installation unter Ubuntu](quickstart-install-connect-ubuntu.md?view=sql-server-linux-ver15)
- [Ausführen unter Docker](quickstart-install-connect-docker.md?view=sql-server-linux-ver15)
- [Provision a SQL VM in Azure (Bereitstellen einer SQL-VM in Azure)](/azure/virtual-machines/linux/sql/provision-sql-server-linux-virtual-machine?toc=/sql/toc/toc.json)
Antworten auf häufig gestellte Fragen finden Sie unter [Häufig gestellte Fragen zu SQL Server für Linux](sql-server-linux-faq.md). Weitere Verbesserungen, die in SQL Server 2019 eingeführt wurden, finden Sie unter [Neues in SQL Server 2019](../sql-server/what-s-new-in-sql-server-ver15.md?view=sql-server-ver15).
[!INCLUDE[get-help-options](../includes/paragraph-content/get-help-options.md)]
| 89.423077 | 467 | 0.798495 | deu_Latn | 0.825805 |
b925a5ba0d21f83b99311d7be076abf82416609f | 1,589 | md | Markdown | wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgk_trackedworkload_state_flags.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgk_trackedworkload_state_flags.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dkmddi/ns-d3dkmddi-_dxgk_trackedworkload_state_flags.md | Cloud-Writer/windows-driver-docs-ddi | 6ac33c6bc5649df3e1b468a977f97c688486caab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:d3dkmddi._DXGK_TRACKEDWORKLOAD_STATE_FLAGS
title: _DXGK_TRACKEDWORKLOAD_STATE_FLAGS (d3dkmddi.h)
description: Indicates GPU configurations, including the appropriate frequencies and power level, for a context.
ms.assetid: 0b6f3ccf-c4c8-4787-87dc-8397385e1374
ms.date: 10/19/2018
ms.topic: struct
ms.keywords: _DXGK_TRACKEDWORKLOAD_STATE_FLAGS, DXGK_TRACKEDWORKLOAD_STATE_FLAGS,
req.header: d3dkmddi.h
req.include-header:
req.target-type:
req.target-min-winverclnt: Windows 10, version 1809
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.lib:
req.dll:
req.ddi-compliance:
req.unicode-ansi:
req.max-support:
req.typenames: DXGK_TRACKEDWORKLOAD_STATE_FLAGS
topic_type:
- apiref
api_type:
- HeaderDef
api_location:
- d3dkmddi.h
api_name:
- _DXGK_TRACKEDWORKLOAD_STATE_FLAGS
product:
- Windows
targetos: Windows
tech.root: display
ms.custom: RS5
---
# _DXGK_TRACKEDWORKLOAD_STATE_FLAGS structure
## -description
Indicates GPU configurations, including the appropriate frequencies and power level, for a context.
## -struct-fields
### -field Saturated
The driver should set this state flag if the driver cannot go higher than specified. This includes transient states like thermal limits.
### -field OptimalLevel
The driver should set this flag if, for the given power level, the GPU is in an optimal efficiency range for the tracked workload. The driver should set this flag only when, for certain workloads, lowering the frequency would reduce the performance per watt.
### -field Reserved
This value is reserved.
### -field Value
## -remarks
## -see-also
| 25.222222 | 252 | 0.795469 | eng_Latn | 0.808128 |
b9262c1328f0d95220258e7fcd3c3f9ed794f9b4 | 811 | md | Markdown | riscoss-platform-dm/riscoss-platform-dm-war/local-repository/README.md | rbenjacob/riscoss-platform | 9b76ae16b1c070f799d0543147ff97d120aac807 | [
"Apache-2.0"
] | 1 | 2016-02-17T07:46:22.000Z | 2016-02-17T07:46:22.000Z | riscoss-platform-dm/riscoss-platform-dm-war/local-repository/README.md | rbenjacob/riscoss-platform | 9b76ae16b1c070f799d0543147ff97d120aac807 | [
"Apache-2.0"
] | null | null | null | riscoss-platform-dm/riscoss-platform-dm-war/local-repository/README.md | rbenjacob/riscoss-platform | 9b76ae16b1c070f799d0543147ff97d120aac807 | [
"Apache-2.0"
] | null | null | null | xwiki-platform-legacy-oldcore-6.0-rc-1-objectpolicy.jar has been obtained in the following way:
1) git clone https://github.com/xwiki/xwiki-platform
2) git checkout stable-6.0.X
3) git pull [email protected]:woshilapin/xwiki-platform.git object-policy
4) cd xwiki-platform-core/xwiki-platform-oldcore
5) mvn clean install
6) cd xwiki-platform-legacy/xwiki-platform-legacy-oldcore
7) mvn clean install
8) Install the jar in the local repository patched-jars with the following command:
mvn org.apache.maven.plugins:maven-install-plugin:2.3.1:install-file -Dfile=PATH_TO/xwiki-platform-legacy-oldcore-6.0-rc-1-objectpolicy.jar -DgroupId=org.xwiki.platform -DartifactId=xwiki-platform-legacy-oldcore -Dversion=6.0-rc-1 -Dclassifier=objectpolicy -Dpackaging=jar -DlocalRepositoryPath=PATH_TO/local-repository
| 57.928571 | 322 | 0.803946 | eng_Latn | 0.33527 |
b9263571e8824bf2d1dc1580dd046ea300f4bdb4 | 89 | md | Markdown | README.md | thanhtrixx/music-web-spring-boot | 280fb1a31e48a70f5ffc9166d2d3a6712ba3f170 | [
"Apache-2.0"
] | null | null | null | README.md | thanhtrixx/music-web-spring-boot | 280fb1a31e48a70f5ffc9166d2d3a6712ba3f170 | [
"Apache-2.0"
] | null | null | null | README.md | thanhtrixx/music-web-spring-boot | 280fb1a31e48a70f5ffc9166d2d3a6712ba3f170 | [
"Apache-2.0"
] | null | null | null | # music-web-spring-boot
Use Spring Boot to create a music web application with RESTful services and AngularJS
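A minimal sketch of the kind of REST endpoint such an app exposes (the class, path, and data below are illustrative only, not taken from this repository):

```java
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller for illustration; a real implementation
// would load the songs from a database instead of a static list.
@RestController
public class SongController {

    @GetMapping("/api/songs")
    public List<String> songs() {
        return List.of("Song A", "Song B");
    }
}
```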
| 29.666667 | 64 | 0.786517 | eng_Latn | 0.954968 |
b926c91cc378aa5de09e466383a90928bfa9d467 | 1,691 | md | Markdown | _posts/2014-05-10-ms-access-to-postgresql-in-64-bit-windows.md | ivanhanigan/ivanhanigan.github.com.raw | afc476afda6a3818620836ec42628b46204879c8 | [
"CC-BY-4.0"
] | null | null | null | _posts/2014-05-10-ms-access-to-postgresql-in-64-bit-windows.md | ivanhanigan/ivanhanigan.github.com.raw | afc476afda6a3818620836ec42628b46204879c8 | [
"CC-BY-4.0"
] | null | null | null | _posts/2014-05-10-ms-access-to-postgresql-in-64-bit-windows.md | ivanhanigan/ivanhanigan.github.com.raw | afc476afda6a3818620836ec42628b46204879c8 | [
"CC-BY-4.0"
] | null | null | null | ---
name: ms-access-to-postgresql-in-64-bit-windows
layout: post
title: ms-access-to-postgresql-in-64-bit-windows
date: 2014-05-10
categories:
- research methods
---
- In 2011 we published a paper on "Creating an integrated historical record of extreme particulate air pollution events in australian cities from 1994 to 2007".
- I used a PostGIS/PostgreSQL server to host the database and MS Access clients for data entry by my co-authors at their own institutions (thousands of kms away)
- This worked great, but recently I realised that updating to 64-bit Windows has broken the ODBC connection
- to fix it I followed the instructions [here](http://www.youlikeprogramming.com/2011/09/postgresql-and-odbc-in-64-bit-windows/)
- You need to install both the 32 and 64 bit executables
- Assuming you use the 32 bit windows office suite (recommended) then use the command below to open the ODBC connections tool and add a DSN for your postgres database
#### Code:
Go to Start -> Run (or Press Windows+R keys) and enter
%WINDIR%\SysWOW64\odbcad32.exe
- On either the User DSN (DSNs available for only that user) or System DSN (DSNs available to every user) tab, click the Add button. Scroll down the list and find the PostgreSQL ODBC Driver.
- You may select ANSI or UNICODE (For future support I personally prefer UNICODE), and click Finish.
- Data Source refers to the programmable name of the DSN Entry. Stick to lower-case letters and underscores. Ex: psql_server
- Specify the Database, Server, User Name and Password to your PostgreSQL server.
- Click the Test button to ensure everything has been specified correctly.
- Click the Save button to create the DSN.
| 56.366667 | 166 | 0.771141 | eng_Latn | 0.989142 |
b926d88d83e000512b62617607184b3ad8a3f9d0 | 326 | md | Markdown | src/pages/javascript/standard-objects/math/math-max/index.md | yashinnhl/guide | 755226f505063f5bb7aa246e9a557c908f57b2d7 | [
"BSD-3-Clause"
] | 2 | 2018-03-03T12:20:33.000Z | 2019-11-29T19:12:22.000Z | src/pages/javascript/standard-objects/math/math-max/index.md | yashinnhl/guide | 755226f505063f5bb7aa246e9a557c908f57b2d7 | [
"BSD-3-Clause"
] | 7 | 2020-07-19T10:17:15.000Z | 2022-02-26T01:49:48.000Z | src/pages/javascript/standard-objects/math/math-max/index.md | yashinnhl/guide | 755226f505063f5bb7aa246e9a557c908f57b2d7 | [
"BSD-3-Clause"
] | 2 | 2018-03-15T20:44:04.000Z | 2019-07-12T06:57:41.000Z | ---
title: Math Max
---
## Math Max
The Math.max() function returns the largest of zero or more numbers.
You can pass it any number of arguments.
```javascript
Math.max(7, 2, 9, -6);
// returns 9
```
#### More Information:
[MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/max)
| 17.157895 | 96 | 0.696319 | eng_Latn | 0.597114 |
b92787e3d47a190d2cbd1eb3e04ba1ba346494fb | 22 | md | Markdown | README.md | bogosla/django-elearning | 53686363fb7b3a28a7059530d19e1b4cadbf6d92 | [
"MIT"
] | null | null | null | README.md | bogosla/django-elearning | 53686363fb7b3a28a7059530d19e1b4cadbf6d92 | [
"MIT"
] | null | null | null | README.md | bogosla/django-elearning | 53686363fb7b3a28a7059530d19e1b4cadbf6d92 | [
"MIT"
] | null | null | null | ### django-elearning
| 7.333333 | 20 | 0.681818 | hrv_Latn | 0.207618 |
b927fd3f82ed2e77f5a883ba07e460b5899ea081 | 871 | md | Markdown | content/posts/tomcat.md | hjhm0223/hvely.github.io | 81079230577296bdfbdbfbc1371a0d0bb66c7109 | [
"MIT"
] | null | null | null | content/posts/tomcat.md | hjhm0223/hvely.github.io | 81079230577296bdfbdbfbc1371a0d0bb66c7109 | [
"MIT"
] | null | null | null | content/posts/tomcat.md | hjhm0223/hvely.github.io | 81079230577296bdfbdbfbc1371a0d0bb66c7109 | [
"MIT"
] | null | null | null | ---
template: SinglePost
title: How to use Tomcat
status: Published
date: '2020-04-01'
featuredImage: >-
https://media.vlpt.us/images/hyunjae-lee/post/086f54ec-4bad-4ce5-a474-fecf121a94f0/tomcat-cover.png
excerpt: >-
categories:
- category: Study
meta:
description: test meta description
title: test meta title
---

Running `tomcat` (creating an administrator account)
1. Download the binary tar.gz file from http://tomcat.apache.org
2. Create a tomcat account (using the su command)
3. Create a `tomcat` directory in the tomcat account's home
sudo mkdir /home/tomcat
sudo chown tomcat:tomcat /home/tomcat
4. Install tomcat into the tomcat directory
5. Start tomcat
bin/startup.sh
6. Check the process (http://localhost:8080)
ps -ef | grep tomcat
7. Kill the process
kill -9 <process ID>
8. Change "8080" to "8090" in the conf/server.xml file
9. Start tomcat
10. Verify it works (http://localhost:8090)
*To check the current account, run: id | 24.194444 | 110 | 0.731343 | kor_Hang | 0.968645 |
b9281d773ddda4de30e77017a3f9f3470d8012da | 5,087 | md | Markdown | md/13_1CHFBta/chap-026.md | berinaniesh/Bible-Tamil-Sathiyavedam-1957 | 1787f0cd252f7bab5d14273ae6ab24c283bebbbb | [
"MIT"
] | null | null | null | md/13_1CHFBta/chap-026.md | berinaniesh/Bible-Tamil-Sathiyavedam-1957 | 1787f0cd252f7bab5d14273ae6ab24c283bebbbb | [
"MIT"
] | null | null | null | md/13_1CHFBta/chap-026.md | berinaniesh/Bible-Tamil-Sathiyavedam-1957 | 1787f0cd252f7bab5d14273ae6ab24c283bebbbb | [
"MIT"
] | null | null | null | ---
title: I Chronicles 26
lang: ta
mainfont: Noto Sans Tamil Regular
---
# I Chronicles 26:1
Concerning the divisions of the gatekeepers: of the Korahites, of the sons of Asaph, was Meshelemiah the son of Kore,
# I Chronicles 26:2
and the sons of Meshelemiah: Zechariah the firstborn,
# I Chronicles 26:3
and Jediael, Zebadiah, Jathniel, Elam, Jehohanan, and Elioenai, the second, third, fourth, fifth, sixth, and seventh sons,
# I Chronicles 26:4
and the sons of Obed-edom: Shemaiah the firstborn,
# I Chronicles 26:5
and Jehozabad, Joah, Sacar, Nethaneel, Ammiel, Issachar, and Peulthai, the second, third, fourth, fifth, sixth, seventh, and eighth sons; for God had blessed him.
# I Chronicles 26:6
Also to his son Shemaiah were sons born, who were mighty men and ruled over their father's house.
# I Chronicles 26:7
The sons of Shemaiah were Othni, and his brothers Rephael, Obed, and Elzabad, strong men, as well as Elihu and Semachiah.
# I Chronicles 26:8
All these were of the sons of Obed-edom: they, their sons, and their brothers, able men with strength for the service, sixty-two in all.
# I Chronicles 26:9
Meshelemiah had sons and brothers, valiant men, eighteen in all.
# I Chronicles 26:10
Of the sons of Merari, Hosah had sons: Shimri the chief (for though he was not the firstborn, his father made him chief),
# I Chronicles 26:11
Hilkiah the second, Tebaliah the third, and Zechariah the fourth; all the sons and brothers of Hosah were thirteen.
# I Chronicles 26:12
These divisions of the gatekeepers, under their chief men, were given duties, like their brothers, to minister in the house of the LORD.
# I Chronicles 26:13
And they cast lots, the small as well as the great, according to their fathers' houses, for every gate.
# I Chronicles 26:14
The lot for the east fell to Shelemiah; then they cast lots for his son Zechariah, a wise counselor, and his lot came out for the north.
# I Chronicles 26:15
To Obed-edom fell the south, and to his sons the storehouse (Asuppim).
# I Chronicles 26:16
To Shuppim and Hosah the lot fell for the west gate, by the raised causeway, watch corresponding to watch.
# I Chronicles 26:17
Eastward were six Levites, northward four a day, southward four a day, and at the storehouse two and two.
# I Chronicles 26:18
At the outer gate westward, four were stationed at the raised causeway and two at the outer way.
# I Chronicles 26:19
These were the divisions of the gatekeepers among the sons of Korah and among the sons of Merari.
# I Chronicles 26:20
Of the rest of the Levites, Ahijah was over the treasuries of the house of God and over the treasuries of the dedicated things.
# I Chronicles 26:21
As for the sons of Laadan: of the sons of the Gershonite, the heads of the fathers' houses, was Jehieli,
# I Chronicles 26:22
and the sons of Jehieli: Zetham and his brother Joel, who were over the treasuries of the house of the LORD.
# I Chronicles 26:23
Of the Amramites, the Izharites, the Hebronites, and the Uzzielites, some likewise had such oversight.
# I Chronicles 26:24
Shebuel, a descendant of Gershom the son of Moses, was the chief officer over the treasuries.
# I Chronicles 26:25
His brothers through Eliezer were Rehabiah his son, Jeshaiah his son, Joram his son, Zichri his son, and Shelomith his son.
# I Chronicles 26:26
King David, the heads of the fathers' houses who were captains over thousands and captains over hundreds, and the commanders of the army had taken from the spoils won in battle
# I Chronicles 26:27
and dedicated them to maintain the house of the LORD; and all those treasuries of dedicated things were in the charge of Shelomith and his brothers.
# I Chronicles 26:28
Everything that Samuel the seer, Saul the son of Kish, Abner the son of Ner, and Joab the son of Zeruiah had dedicated, and all that was dedicated, was under the hand of Shelomith and his brothers.
# I Chronicles 26:29
Of the Izharites, Chenaniah and his sons were appointed to the outward business over Israel, as officers and judges.
# I Chronicles 26:30
Of the Hebronites, Hashabiah and his brothers, one thousand seven hundred able men, were appointed over Israel on this side of the Jordan westward, for all the work of the LORD and for the service of the king.
# I Chronicles 26:31
Among the Hebronites was Jerijah, the chief of the Hebronites according to the generations of his fathers; in the fortieth year of David's reign they were sought out, and mighty men of valor were found among them at Jazer of Gilead.
# I Chronicles 26:32
His brothers, able men, were two thousand seven hundred heads of fathers' houses; King David set them over the Reubenites, the Gadites, and the half-tribe of Manasseh, for every matter pertaining to God and for the affairs of the king.
| 37.681481 | 244 | 0.511696 | tam_Taml | 0.997616 |
b928302cd23e85ee3faa66ba127662beb866d904 | 3,896 | md | Markdown | docs/ado/reference/ado-api/getrows-method-example-vb.md | RaufAsadov/sql-docs | 71985f03656a30381b2498ac5393aaf86f670bf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/getrows-method-example-vb.md | RaufAsadov/sql-docs | 71985f03656a30381b2498ac5393aaf86f670bf3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/ado-api/getrows-method-example-vb.md | RaufAsadov/sql-docs | 71985f03656a30381b2498ac5393aaf86f670bf3 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-08-24T18:43:51.000Z | 2020-08-24T18:43:51.000Z | ---
description: "GetRows Method Example (VB)"
title: "GetRows Method Example (VB) | Microsoft Docs"
ms.prod: sql
ms.prod_service: connectivity
ms.technology: connectivity
ms.custom: ""
ms.date: "01/19/2017"
ms.reviewer: ""
ms.topic: conceptual
dev_langs:
- "VB"
helpviewer_keywords:
- "Getrows method [ADO], Visual Basic example"
ms.assetid: 9f7c78bb-7bb8-4c4f-8e5a-4d3bfc8a208f
author: rothja
ms.author: jroth
---
# GetRows Method Example (VB)
This example uses the [GetRows](../../../ado/reference/ado-api/getrows-method-ado.md) method to retrieve a specified number of rows from a [Recordset](../../../ado/reference/ado-api/recordset-object-ado.md) and to fill an array with the resulting data. The **GetRows** method will return fewer than the desired number of rows in two cases: either if [EOF](../../../ado/reference/ado-api/bof-eof-properties-ado.md) has been reached, or if **GetRows** tried to retrieve a record that was deleted by another user. The function returns **False** only if the second case occurs. The GetRowsOK function is required for this procedure to run.
```
'BeginGetRowsVB
'To integrate this code
'replace the data source and initial catalog values
'in the connection string
Public Sub Main()
On Error GoTo ErrorHandler
' connection and recordset variables
Dim rstEmployees As ADODB.Recordset
Dim Cnxn As ADODB.Connection
Dim strSQLEmployees As String
Dim strCnxn As String
' array variable
Dim arrEmployees As Variant
' detail variables
Dim strMessage As String
Dim intRows As Integer
Dim intRecord As Integer
' open connection
Set Cnxn = New ADODB.Connection
strCnxn = "Provider='sqloledb';Data Source='MySqlServer';" & _
"Initial Catalog='Pubs';Integrated Security='SSPI';"
Cnxn.Open strCnxn
' open recordset client-side to enable RecordCount
Set rstEmployees = New ADODB.Recordset
strSQLEmployees = "SELECT fName, lName, hire_date FROM Employee ORDER BY lName"
rstEmployees.Open strSQLEmployees, Cnxn, adOpenStatic, adLockReadOnly, adCmdText
' get user input for number of rows
Do
strMessage = "Enter number of rows to retrieve:"
intRows = Val(InputBox(strMessage))
' if bad user input exit the loop
If intRows <= 0 Then
MsgBox "Please enter a positive number", vbOKOnly, "Not less than zero!"
' if number of requested records is over the total
ElseIf intRows > rstEmployees.RecordCount Then
MsgBox "Not enough records in Recordset to retrieve " & intRows & " rows.", _
vbOKOnly, "Over the available total"
Else
Exit Do
End If
Loop
' else put the data in an array and print
arrEmployees = rstEmployees.GetRows(intRows)
Dim x As Integer, y As Integer
For x = 0 To intRows - 1
For y = 0 To 2
Debug.Print arrEmployees(y, x) & " ";
Next y
Debug.Print vbCrLf
Next x
' clean up
rstEmployees.Close
Cnxn.Close
Set rstEmployees = Nothing
Set Cnxn = Nothing
Exit Sub
ErrorHandler:
' clean up
If Not rstEmployees Is Nothing Then
If rstEmployees.State = adStateOpen Then rstEmployees.Close
End If
Set rstEmployees = Nothing
If Not Cnxn Is Nothing Then
If Cnxn.State = adStateOpen Then Cnxn.Close
End If
Set Cnxn = Nothing
If Err <> 0 Then
MsgBox Err.Source & "-->" & Err.Description, , "Error"
End If
End Sub
'EndGetRowsVB
```
## See Also
[GetRows Method (ADO)](../../../ado/reference/ado-api/getrows-method-ado.md)
[Recordset Object (ADO)](../../../ado/reference/ado-api/recordset-object-ado.md)
| 34.477876 | 637 | 0.649384 | eng_Latn | 0.668925 |
b9283d42c962d79a4d6b3e6cd71c38b1409d0991 | 9,314 | md | Markdown | website/docs/documentation/components.md | videki/template-utils | 687410a645bfe8bcd335fab443efee1b8b39c826 | [
"MIT"
] | 1 | 2020-07-02T19:09:03.000Z | 2020-07-02T19:09:03.000Z | website/docs/documentation/components.md | videki/template-utils | 687410a645bfe8bcd335fab443efee1b8b39c826 | [
"MIT"
] | 13 | 2020-06-25T13:36:22.000Z | 2022-02-14T01:34:49.000Z | website/docs/documentation/components.md | videki/template-utils | 687410a645bfe8bcd335fab443efee1b8b39c826 | [
"MIT"
] | null | null | null | ---
id: components
title: Built-in components
---
This section describes how to use, configure and extend the generator components.
## Template repositories
Template repositories maintain how the required templates are accessed.
**Configuration**
To bind a repository connector, the provider class and its provider-specific properties have to be specified.
You can find the latter at the provider description.
```properties
# Use filesystem folder as a template repository
repository.template.provider=org.mycompany.templaterepo.ProviderClass
```
**Repository connectors**
### Filesystem repository
Retrieves templates from a filesystem location, i.e. each template is stored as a file.
```properties
repository.template.provider=net.videki.templateutils.template.core.provider.templaterepository.filesystem.FileSystemTemplateRepository
repository.template.provider.basedir=templates
```
Configuration:
| Property | Value |
| ------------------------------------ | -------- |
| repository.template.provider.basedir | Root folder location. Can be an absolute, or a relative path to app startup location. |
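If none of the built-in connectors fit your storage, you can implement your own provider and bind it as shown in the **Configuration** example above. The sketch below is only an illustration: the interface and its method are assumptions made for this example, so check the engine's `templaterepository` package for the actual contract implemented by `FileSystemTemplateRepository`:

```java
package org.mycompany.templaterepo;

import java.io.InputStream;

// NOTE: this interface is an assumption for illustration; the engine defines
// its own repository contract (see FileSystemTemplateRepository for the real one).
interface TemplateRepository {
    InputStream getTemplate(String templateFile);
}

// A custom provider that would be bound via repository.template.provider=...
public class ProviderClass implements TemplateRepository {

    // Loads the template from the classpath instead of a plain directory;
    // a real provider could fetch it from a database, git, S3, etc.
    @Override
    public InputStream getTemplate(final String templateFile) {
        return ProviderClass.class.getClassLoader()
                .getResourceAsStream("templates/" + templateFile);
    }
}
```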
## Document structure repositories
Document structure repositories maintain how the document set descriptors are accessed.
For handling document structures two providers are required:
- a repository for retrieving the structure descriptors (filesystem, git, postgres, mongodb, etc.)
- a builder for parsing the descriptors (yaml, etc.)
**Configuration**
To bind a repository connector, specify the provider class and its provider-specific properties.
You can find the latter in each provider's description below.
```properties
# Use a filesystem folder as a document structure repository
repository.documentstructure.provider=org.mycompany.docstructurerepo.ProviderClass
```
**Repository connectors**
### Filesystem doc structure repository
Retrieves document structure descriptors from a filesystem location, where each descriptor is stored as a file.
```properties
repository.documentstructure.provider=net.videki.templateutils.template.core.provider.documentstructure.repository.filesystem.FileSystemDocumentStructureRepository
repository.documentstructure.provider.filesystem.basedir=doc-structures
```
Configuration:
| Property | Value |
| ------------------------------------ | -------- |
| repository.documentstructure.provider.filesystem.basedir | Root folder location. Can be an absolute path, or a path relative to the app startup location. |
## Document structure builders
Document structure builders control how the document set descriptors are parsed.
**Configuration**
To bind a document set builder connector, specify the provider class and its provider-specific properties.
You can find the latter in each builder's description below.
```properties
# Bind the document structure builder
repository.documentstructure.builder=org.mycompany.structurebuilder.MyProviderClass
```
**Builder implementations**
### YAML document structure builder
Parses a document structure stored in YAML format.
(The descriptors themselves are retrieved through the document structure repository configured above.)
```properties
repository.documentstructure.builder=net.videki.templateutils.template.core.provider.documentstructure.builder.yaml.YmlDocStructureBuilder
```
## Input processors
An input processor is a template processor that fills in a specific template format
using given placeholder and control markup (for example, .docx files with docx-stamper markup).
```properties
processors.provider.<input-format>=org.mycompany.docstructurerepo.ProviderClass
```
You can specify a list of input processors for the engine with **exactly one** processor per input format.
The built-in configuration for example is the setup below:
```properties
processors.docx=net.videki.templateutils.template.core.processor.input.docx.DocxStamperInputTemplateProcessor
processors.xlsx=net.videki.templateutils.template.core.processor.input.xlsx.JxlsInputTemplateProcessor
```
**Template contexts**
The engine supports handing multiple values over to the processors. These are called template contexts.
A context can either be
- global
When only one value object is used, you can simply pass it to the engine without building a context wrapper.
In this case it is handled as a global context, and you refer to the values of this object
with <code>model</code>, like `${model.myValue}`:
```java
context.addValueObject(orgUnit);
```
In this case the <code>orgUnit</code> object can be referred to as <code>model</code>, and its fields
as <code>model.fieldname</code>.
- named
In case of multiple objects, you can put the DTOs into named contexts and refer to them by name:
```java
final Contract dto = ContractDataFactory.createContract();
final OrganizationUnit orgUnit = OrgUnitDataFactory.createOrgUnit();
final Officer officer = OfficerDataFactory.createOfficer();
final DocumentProperties documentProperties = DocDataFactory.createDocData(transactionId);
final TemplateContext context = new TemplateContext();
context.getCtx().put("org", orgUnit);
context.getCtx().put("officer", officer);
context.getCtx().put("contract", dto);
context.getCtx().put("doc", documentProperties);
```
This way you can refer to the values by their context names like <code>ctx['contract'].field</code>,
e.g. `${ctx['contextname'].myValue}`.
**Implementations**
### Docx-stamper processor
#### Input format: .docx
Processes .docx templates using comment markup implemented by the
[docx stamper](https://github.com/thombergs/docx-stamper) tool.
```properties
processors.provider.docx=net.videki.templateutils.template.core.processor.input.docx.DocxStamperInputTemplateProcessor
```
### Noop processor
#### Input format: -
Simple loopback processor for returning input templates untouched.
```properties
processors.provider.noop=net.videki.templateutils.template.core.processor.input.noop.NoopTemplateProcessor
```
### Jxls processor
#### Input format: .xlsx
.xlsx processor for processing [Jxls](http://jxls.sourceforge.net/) marked-up templates.
```properties
processors.provider.xlsx=net.videki.templateutils.template.core.processor.input.xlsx.JxlsInputTemplateProcessor
```
## Converters
Converters are, as the name suggests, format converters from an input format into an output format.
**Implementations**
### Pdfbox docx-pdf converter
Converts docx documents into pdf using [PDF box](https://pdfbox.apache.org/).
## Result stores
## Registries
## Input and output types
### Document structures
Document structures are a set of templates, or template alternatives (we call these <code>TemplateElement</code>)
with a unique id describing the content that has to be generated.
It basically describes:
- the list of template elements
- what the result should look like: a single merged document, or separate ones
- the number of copies of the whole result
- the output format into which the result has to be converted (one of the output formats, or unchanged)
#### Template elements
A template element is a single template to be processed during generation, identified by a given name
(like <code>"contract"</code>), with locale-based alternatives and a default locale.
For example:
```yaml
# contract_v02.yml
---
documentStructureId: "109a562d-8290-407d-98e1-e5e31c9808b7"
elements:
- templateElementId:
id: "cover"
templateNames:
hu_HU: "/covers/cover_v03.docx"
defaultLocale: "hu_HU"
count: 1
- templateElementId:
id: "contract"
templateNames:
en: "/contracts/vintage/contract_v09_en.docx"
hu: "/contracts/vintage/contract_v09_hu.docx"
defaultLocale: "hu_HU"
count: 1
- templateElementId:
id: "terms"
templateNames:
hu: "/term/terms_v02.docx"
defaultLocale: "hu_HU"
count: 1
- templateElementId:
id: "conditions"
templateNames:
hu: "/conditions/vintage/conditions_eco_v11.xlsx"
defaultLocale: "hu_HU"
count: 1
resultMode: "SEPARATE_DOCUMENTS"
outputFormat: "UNCHANGED"
copies: 1
```
### Value sets
A <code>ValueSet</code> is a one-time holder object, in which we collect data for a specific document structure.
The container can hold
- global value contexts
These are used for all templates not having a specific context. Use this by adding the context
with <code>TemplateElementId.getGlobalTemplateElementId()</code>
- template-level contexts
If a template has to be populated with a different object collection than the others, a template-level context can be added
to the value set under the required template's <code>templateElementId</code>.
You can instantiate a value set as shown below:
```java
final ValueSet values = new ValueSet(structure.getDocumentStructureId(), tranId);
values.getValues().put(TemplateElementId.getGlobalTemplateElementId(), getContractTestData(tranId));
```
The transaction can be bound to the surrounding business transaction.
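Where a single template element needs its own data, a template-level context can be added alongside the global one. A minimal sketch, assuming `TemplateElementId` can be constructed from the element id used in the document structure descriptor (this constructor is an assumption, not verified API):
```java
// Sketch: give the "contract" element its own context.
// The String-based TemplateElementId constructor is an assumption.
final TemplateContext contractContext = new TemplateContext();
contractContext.getCtx().put("contract", ContractDataFactory.createContract());
// Only the "contract" template element sees this context;
// all other elements fall back to the global context.
values.getValues().put(new TemplateElementId("contract"), contractContext);
```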
### Generation results
A generation result is another holder object containing the result document stream (when using streams
without saving the result into the result store), or descriptors of the saved results.
| 31.572881 | 167 | 0.754885 | eng_Latn | 0.933553 |
b9288f01bca5e1f396c09782aa0cee209cd725c3 | 2,411 | md | Markdown | src/clients/README.md | globality-corp/nodule-openapi | edd79699db4daaba76d13b7b4189d2eaacb5145d | [
"MIT"
] | null | null | null | src/clients/README.md | globality-corp/nodule-openapi | edd79699db4daaba76d13b7b4189d2eaacb5145d | [
"MIT"
] | 8 | 2018-08-20T14:26:28.000Z | 2022-02-12T10:58:09.000Z | src/clients/README.md | globality-corp/nodule-openapi | edd79699db4daaba76d13b7b4189d2eaacb5145d | [
"MIT"
] | 1 | 2019-03-17T03:50:29.000Z | 2019-03-17T03:50:29.000Z | # createOpenAPIClient
The `nodule-openapi` package includes a function (`createOpenAPIClient`) that wraps the OpenAPI
swagger client into callable functions.
## Usage
Given a swagger definition `spec`:
import { createOpenAPIClient } from '@globality/nodule-openapi';
const client = createOpenAPIClient('name', spec);
const result = await client.foo.search(req, args);
## Headers
The `createOpenAPIClient` allows for custom headers to be attached to the request. Bind a function to `extendHeaders`
to add information to the headers object.
bind('extendHeaders', () => extendHeaderFunc)
The function receives the request and basic headers and should return an extended list of headers.
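For illustration, a sketch of such a function; the argument order and the custom header name are assumptions, not documented API:
    // Sketch only: take the basic headers and return an extended copy.
    // The X-Custom-Tenant header and req.locals.tenantId are illustrative.
    function extendHeaderFunc(req, headers) {
        return { ...headers, 'X-Custom-Tenant': req.locals.tenantId };
    }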
If a given function is not specified, the code defaults to the following set of headers:
* X-Request-Service: Taken from `getMetadata().name`
* X-Request-Id: Taken from `req.id`
* X-Request-Started-At: Taken from `req._startAt` or the current datetime
* X-Request-User: Taken from `req.locals.user.id`
* Jwt: A JSON stringified representation of `req.locals.jwt`
## Logging
The `createOpenAPIClient` supports custom logging at run time. There are three places where we can inject custom logic.
### buildRequestLogs
Prior to the client request being sent out, we can build a set of request log information. The function is called:
buildRequestLogs(req, serviceName, operationName, request)
The service name, operation name, original req, and the OpenAPI request can be used to build logs.
### logSuccess
If the call succeeds the logSuccess function is called:
logSuccess(req, request, response, requestLogs, executeStartTime)
The original req, the OpenAPI request and response, the returned value from buildRequestLogs, and the execution start
time are passed to this function, which can call any logging service.
### logFailure
If the call fails, the logFailure function is called:
logFailure(req, request, error, requestLogs)
The original req, the OpenAPI request, the error thrown, and the returned value from buildRequestLogs
are passed to this function, which can call any logging service.
### Usage
To use any of the above functionality, bind a function to the appropriate field:
bind('logging.buildRequestLogs', () => buildFoo);
bind('logging.logSuccess', () => successFoo);
bind('logging.logFailure', () => failureFoo);
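For illustration, a minimal sketch of the three hooks wired together. The `logger` object is a placeholder for your logging service, and `executeStartTime` is assumed to be a millisecond timestamp:
    function buildFoo(req, serviceName, operationName, request) {
        // collect whatever identifies this call
        return { requestId: req.id, serviceName, operationName };
    }
    function successFoo(req, request, response, requestLogs, executeStartTime) {
        logger.info({ ...requestLogs, elapsedMs: Date.now() - executeStartTime });
    }
    function failureFoo(req, request, error, requestLogs) {
        logger.warn({ ...requestLogs, error: error.message });
    }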
| 34.942029 | 119 | 0.763169 | eng_Latn | 0.991511 |
b9288f7cf3d5ba209258deab295b21be4766c806 | 26,372 | md | Markdown | articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/team-data-science-process/team-data-science-process-for-data-scientists.md | JungYeolYang/azure-docs.zh-cn | afa9274e7d02ee4348ddb6ab81878b9ad1e52f52 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Team Data Science Process for data scientists
description: Guidance for a set of objectives typically used to implement comprehensive data science solutions with Azure technologies using the Team Data Science Process and Azure Machine Learning.
services: machine-learning
author: marktab
manager: cgronlun
editor: cgronlun
ms.service: machine-learning
ms.subservice: team-data-science-process
ms.topic: article
ms.date: 11/21/2017
ms.author: tdsp
ms.custom: seodec18, previous-author=deguhath, previous-ms.author=deguhath
ms.openlocfilehash: ba40a9d62e809079c2fa2d347d3231b2bf39988d
ms.sourcegitcommit: 3102f886aa962842303c8753fe8fa5324a52834a
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 04/23/2019
ms.locfileid: "61044996"
---
# <a name="team-data-science-process-for-data-scientists"></a>Team Data Science Process for data scientists
This article provides guidance for a set of objectives typically used to implement comprehensive data science solutions with Azure technologies. You are guided through:
- understanding an analytics workload
- using the Team Data Science Process
- using Azure Machine Learning
- understanding the foundations of data transfer and storage
- providing data source documentation
- using tools for analytics processing
These training materials are related to the Team Data Science Process (TDSP) and to Microsoft and open-source software and toolkits that are helpful for envisioning, executing, and delivering data science solutions.
## <a name="lesson-path"></a>Lesson path
You can use the items in the following table to guide your own self-study. Read the *Description* column to follow the path, click the *Topic* links for study references, and check your skills by using the *Knowledge check* column.
| **Objective** | **Topic** | **Description** | **Knowledge check** |
| --- | --- | --- | --- |
| Understand the processes for developing analytic projects | [An introduction to the Team Data Science Process](overview.md) | We begin by covering an overview of the Team Data Science Process (TDSP). This process guides you step by step through an analytics project. Read through each of these sections to learn more about the process and how you can implement it. | [Review the TDSP project structure artifacts and download them to your local machine](https://github.com/Azure/Azure-TDSP-ProjectTemplate). |
| | [Agile development](https://www.visualstudio.com/agile/) | The Team Data Science Process works well with many different programming methodologies. In this learning path, we use Agile software development. Read the "What is Agile Development?" and "Building an Agile Culture" articles to learn the basics of working with Agile. This site also has other references where you can learn more. | Explain continuous integration and continuous delivery to a colleague. |
| | [DevOps for data science](https://mva.microsoft.com/training-courses/devops-an-it-pro-guide-8286?l=GVFXzCXy_8104984382) | Developer Operations (DevOps) provides the people, processes, and platforms you can use to work through a project and integrate your solution into an organization's standard IT. This integration is essential for adoption and security. In this online course, you learn about DevOps practices as well as some of the toolchain options you have. | Prepare a 30-minute presentation for a technical audience on why DevOps is essential for analytics projects. |
| Understand the technologies for data storage and processing | [Microsoft business analytics and AI](https://www.microsoft.com/cloud-platform/what-is-cortana-intelligence) | We cover several technologies for creating an analytics solution in this learning path, but Microsoft has many more. To understand the options you have, it's important to review the platforms and features available in Microsoft Azure, Azure Stack, and on-premises options. Review this resource to learn the various tools available to answer analytics questions. | [Download and review the presentation materials from this workshop](https://info.microsoft.com/CO-Azure-CNTNT-FY16-Oct2015-Cortana-Registration.html). |
| Set up and configure your training, development, and production environments | [Microsoft Azure](https://azure.microsoft.com/training/learning-paths/azure-solution-architect/) | Now let's create a training account in Microsoft Azure and learn how to create development and test environments. These free training resources get you started. Complete the "Beginner" and "Intermediate" paths. | [If you don't have an Azure account, create one](https://azure.microsoft.com/free/?v=17.39&WT.srch=1&WT.mc_id=AID559320_SEM_2kAfgmyQ&lnkd=Bing_Azure_Brand). Sign in to the Microsoft Azure portal and [create a resource group](../../azure-resource-manager/manage-resource-groups-portal.md) for training. |
| | [The Microsoft Azure command-line interface (CLI)](https://docs.microsoft.com/cli/azure/get-started-with-azure-cli?view=azure-cli-latest) | There is a wide variety of ways to work with Microsoft Azure, from graphical tools like VSCode and Visual Studio, to web interfaces such as the Azure portal, to the command line, such as Azure PowerShell commands and functions. In this article, we cover the command-line interface (CLI), a tool you can use locally on your workstation, on Windows and other operating systems, as well as in the Azure portal. | [Set your default subscription with the Azure CLI](https://docs.microsoft.com/cli/azure/manage-azure-subscriptions-azure-cli?view=azure-cli-latest). |
| | [Microsoft Azure storage](../../storage/common/storage-introduction.md) | You need a place to store your data. In this article, you learn about Microsoft Azure's storage options, how to create a storage account, and how to copy or move data to the cloud. Read through this introduction to learn more. | [Create a storage account in your training resource group, create a container for blob objects, and upload and download data.](../../storage/blobs/storage-quickstart-blobs-cli.md) |
| | [Microsoft Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-whatis) | Microsoft Azure Active Directory (AAD) forms the basis of securing your application. In this article, you learn more about accounts, permissions, and privileges. Active Directory and security are complex topics, so just read through this resource to understand the fundamentals. | [Add a user to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md). Note: You might not have permissions to perform this action if you aren't the administrator of the subscription. If that's the case, [simply review this tutorial to learn more](../../active-directory/fundamentals/add-users-azure-active-directory.md). |
| | [The Microsoft Azure Data Science Virtual Machine](../data-science-virtual-machine/overview.md) | You can install the tools for data science locally on several operating systems. But the Microsoft Azure Data Science Virtual Machine (DSVM) provides everything you need, along with plenty of project templates to work with. In this article, you learn more about the DSVM and how to work through its examples. This resource explains the Data Science Virtual Machine, how to create one, and a few options for developing code with it. It also contains all the software you need to complete this learning path, so make sure you complete the knowledge check for this topic. | [Create a Data Science Virtual Machine and work through at least one lab](../data-science-virtual-machine/provision-vm.md). |
| Install and understand the tools and technologies for working with data science solutions | [Working with git](https://mva.microsoft.com/training-courses/github-for-windows-users-16749?l=KTNeW39wC_6006218965) | To follow the DevOps process with the TDSP, you need a version-control system. Azure Machine Learning uses git, a popular open-source distributed repository system. In this article, you learn more about how to install, configure, and work with git and a central repository, GitHub. | [Clone this GitHub project as your learning path project structure](https://github.com/Azure/Azure-TDSP-ProjectTemplate). |
| | [VSCode](https://code.visualstudio.com/docs/getstarted/introvideos) | VSCode is a cross-platform integrated development environment (IDE) that supports multiple languages and Azure tools. You can use this single environment to create your entire solution. Watch these introductory videos to get started. | Install VSCode and [work through the VS Code features in the Interactive Editor Playground](https://code.visualstudio.com/docs/introvideos/basics). |
| | [Programming with Python](https://docs.python.org/3/tutorial/index.html) | In this solution we use Python, one of the most popular languages in data science. This article covers the basics of writing analytics code with Python and provides resources to learn more. Work through sections 1-9 of this reference, then check your knowledge. | [Add an entity to an Azure table using Python](../../cosmos-db/table-storage-how-to-use-python.md). |
| | [Working with Notebooks](https://jupyter-notebook.readthedocs.io/en/latest/notebook.html#introduction) | Notebooks are a way of combining text and code in the same document. Azure Machine Learning works with Notebooks, so it's beneficial to understand how to use them. Read through this tutorial and give it a try in the knowledge check section. | [Open this page](https://try.jupyter.org/), then click the "Welcome to Python.ipynb" link. Work through the examples on that page. |
| | [Machine learning](https://mva.microsoft.com/training-courses/data-science-and-machine-learning-essentials-14100?l=UyhoTxWdB_3505050723) | Creating advanced analytics solutions involves processing data with machine learning, which also forms the basis of working with artificial intelligence and deep learning. This course teaches you more about machine learning. [For a comprehensive course on data science, check out this certification](https://academy.microsoft.com/professional-program/tracks/data-science/). | Locate a resource on machine learning algorithms. (Hint: search for "Azure machine learning algorithm cheat sheet") |
| | [scikit-learn](https://scikit-learn.org/stable/tutorial/basic/tutorial.html) | The scikit-learn toolset lets you perform data science tasks in Python. We use this framework in our solution. This article covers the basics and explains where you can learn more. | Using the Iris dataset, persist an SVM model using Pickle. |
| | [Working with Docker](https://docs.microsoft.com/dotnet/standard/microservices-architecture/container-docker-introduction/docker-defined) | Docker is a distributed platform used to build, ship, and run applications, and it is used frequently in Azure Machine Learning. This article covers the basics of this technology and explains where you can go to learn more. | Open Visual Studio Code and [install the Docker extension](https://code.visualstudio.com/Docs/languages/dockerfile). [Create a simple Node Docker container](https://blogs.msdn.microsoft.com/vscode/2015/08/11/getting-started-with-docker/). |
| | [HDInsight](../../hdinsight/hdinsight-hadoop-introduction.md) | HDInsight is the Hadoop open-source infrastructure, available as a service in Microsoft Azure. Your machine learning algorithms may involve large data sets, and HDInsight lets you store, transfer, and process data at scale. This article covers working with HDInsight. | [Create a small HDInsight cluster](../../hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md). Use HiveQL statements to [project columns onto the /example/data/sample.log file](../../hdinsight/hdinsight-use-hive.md). Alternatively, [you can complete this knowledge check on your local system](../../hdinsight/hdinsight-hadoop-emulator-get-started.md). |
| Create a data processing flow from business requirements | [Determining the question, following the TDSP](https://buckwoody.wordpress.com/2017/08/31/the-keys-to-effective-data-science-projects-the-question/) | With your development environment installed and configured, and your understanding of the technologies and processes in place, it's time to put everything together and perform an analysis using the TDSP. We need to start by defining the question, selecting the data sources, and working through the rest of the steps in the Team Data Science Process. Keep the DevOps process in mind as you work through this process. In this article, you learn how to take the requirements from your organization and create a data flow map through your application to define your solution using the Team Data Science Process. | Locate a resource in [the 5 questions data science answers](../studio/data-science-for-beginners-the-5-questions-data-science-answers.md) and describe one question your organization might have in these areas. Which algorithms should you focus on for that question? |
| Use Azure Machine Learning to create a predictive solution | [Azure Machine Learning](../service/overview-what-is-azure-ml.md) | Microsoft Azure Machine Learning covers working with data sources, using AI for data processing and feature engineering, creating experiments, and tracking model runs. All of this works in a single environment, and most functions can run locally or in Azure. You can use PyTorch, TensorFlow, and other frameworks to create your experiments. In this article, we focus on a complete example of this process, using everything you've learned so far. | |
| Use Power BI to visualize results | [Power BI](https://powerbi.microsoft.com/guided-learning/) | Power BI is Microsoft's data visualization tool. It's available on multiple platforms, from the web to mobile devices and desktop computers. In this article, you learn how to work with the output of the solution you've created by accessing the results from Azure storage and creating visualizations using Power BI. | [Complete this tutorial in Power BI.](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) Then connect Power BI to the blob CSV created in an experiment run. |
| Monitor your solution | [Application Insights](../../azure-monitor/app/app-insights-overview.md) | There are multiple tools you can use to monitor your final solution. Azure Application Insights makes it easy to integrate built-in monitoring into your solution. | [Set up Application Insights to monitor an application](https://cmatskas.com/visual-studio-code-integration-with-azure-application-insights/). |
| | [Azure Monitor logs](../../log-analytics/log-analytics-overview.md) | Another way to monitor your application is to integrate it into your DevOps process. The Azure Monitor logs system provides a rich set of features to help you watch your analytics solutions after you deploy them. | [Complete this tutorial](https://docs.loganalytics.io/docs/Learn/Getting-Started/Getting-started-with-the-Analytics-portal) on using Azure Monitor logs. |
| Complete this learning path | | Congratulations! You've completed this learning path. There is still much more to learn. | |
## <a name="next-steps"></a>Next steps
[Team Data Science Process for Developer Operations](team-data-science-process-for-devops.md) This article explores the Developer Operations (DevOps) functions that are specific to implementation projects for advanced analytics and cognitive services solutions.
| 387.823529 | 1,318 | 0.257432 | yue_Hant | 0.74035 |
b9293fd4f91d58be351e137245a11410605699ab | 20,108 | md | Markdown | labs/iot/environment-monitor/steps/set-up-pi.md | kartben/iot-curriculum | 11b930b11123575cc4f68ce106f945c6c0c46dd5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | labs/iot/environment-monitor/steps/set-up-pi.md | kartben/iot-curriculum | 11b930b11123575cc4f68ce106f945c6c0c46dd5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | labs/iot/environment-monitor/steps/set-up-pi.md | kartben/iot-curriculum | 11b930b11123575cc4f68ce106f945c6c0c46dd5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-17T09:37:50.000Z | 2020-11-17T09:37:50.000Z | # Set up a Raspberry Pi to send temperature data
In the [previous step](./set-up-iot-central.md) you set up an IoT Central application using a pre-created template, and set up a simulated device.
In this step you will set up a Raspberry Pi to send temperature data.
## Raspberry Pi
The [Raspberry Pi](https://raspberrypi.org) is a low-priced, small form factor computer that can run a full version of Linux. It's popular with hobbyists and kids - it was originally designed to be a cheap computer for kids to learn to code on. It has the same standard USB and HDMI ports that a PC or Mac would have, as well as GPIO (General Purpose Input Output) pins that can be used to work with a wide array of external electronic components, devices, sensors, machinery and robotics.
Raspberry Pis can run a wide range of programming languages. In this lab you will use Python, and program the Pi using Visual Studio Code (VS Code), an open-source developer text editor that can remotely program on a Pi from your PC or Mac. When connected to the Pi remotely from your PC or Mac you can write and debug code from your device, with the code running on the Pi. You will also get a terminal that runs on the Pi.
The temperature data will come from a temperature sensor attached to the Pi. The sensor required is a Grove Temperature Humidity sensor and is part of the [Grove Pi+ Starter Kit](https://www.seeedstudio.com/GrovePi-Starter-Kit-for-Raspberry-Pi-A-B-B-2-3-CE-certified.html). These kits are designed to lower the barrier to entry when using sensors - providing a controller board, sensors with standard cables, and Python libraries.
## Hardware requirements
You will need the following hardware:
* A Raspberry Pi 4
* A micro SD Card
* An SD card to USB converter that matches the USB ports on your device, if your device doesn't have an SD card slot
* A Raspberry Pi 4 power supply (USB-C)
* [A Grove Pi+ Starter Kit](https://www.seeedstudio.com/GrovePi-Starter-Kit-for-Raspberry-Pi-A-B-B-2-3-CE-certified.html)
* A keyboard, mouse and monitor
* A [micro-HDMI to HDMI adapter or cable](https://www.raspberrypi.org/products/micro-hdmi-to-standard-hdmi-a-cable/)
## Create the device in IoT Central
To connect the Pi to IoT Central, you need to create a new device, the same as you did in the previous step. The difference is this time the device isn't simulated. Once you have the device created, you will need some connection details that the Pi can use to connect to IoT Central as the device.
### Create the device
1. Follow the instructions in the previous step to create a new device using the *Environment Monitor* device template.
1. Ensure `Environment Monitor` is selected for the *Template Type*
1. Set the *Device Name* to `Pi Environment Monitor`
1. Set the *Device ID* to `pi-environment-monitor`
1. Set *Simulate this device* to **NO**
1. Select the **Create** button

1. Once the device is created it will appear in the devices list. Select it.

1. Select the **Connect** button on the top to bring up the connection details for the device

1. The dialog that appears shows the information needed to connect from the Pi. Note down the *ID scope* and *Primary key* values. You can use the **Copy to clipboard** button on the end of each value to copy the values.

### Add the device to the dashboard
The dashboard currently only shows temperature from the simulated device. It needs to be changed to include the Pi.
1. Select the **Dashboard** tab from the side bar menu
1. Select the **Edit** button from the top menu

1. Select the properties cog from the Temperature tile

1. From the *Configure Chart* panel, drop down the *Devices* list, and check the `Pi Environment Monitor`.

1. Select the **Update** button

1. Select the **Save** button for the dashboard

## Set up the Pi
1. Fit the Grove Pi+ hat to the Raspberry Pi. The socket on the bottom of the Pi+ fits onto the GPIO pins on the Raspberry Pi.
> There are more pins on the Pi than holes in the socket - the Pi+ fits on the pins closest to the SD card socket.


1. Using one of the cables in the Grove Pi+ starter kit, connect the temperature humidity sensor to the **D4** digital port on the Pi+. This is the last digital port on the side where the Pi USB sockets are.

### Set up the software
1. Insert the SD card into your PC or laptop using an adapter if necessary
1. Using the [Raspberry Pi imager](https://www.raspberrypi.org/downloads/), image the SD card with the default Raspberry Pi OS image. You can find instructions on how to do this in the [Raspberry Pi installing images documentation](https://www.raspberrypi.org/documentation/installation/installing-images/).
1. Insert the SD card into the Pi
1. Connect the Pi to your keyboard, mouse and monitor. If you are using ethernet for internet access then connect the Pi to an ethernet cable connected to your network. Then connect it to the power supply.
> If you don't have a keyboard, monitor and mouse available, you can set up your Pi for headless access - check out the [Microsoft Raspberry Pi headless setup docs](https://github.com/microsoft/rpi-resources/tree/master/headless-setup) for details on how to set this up.
1. Work through the setup wizard on the Pi:
1. Set your country, language and timezone
1. Change your password from the default - when a new Raspberry Pi is set up it creates an account with a username of `pi` and a password of `raspberry`, so set a new password
1. Set up the screen if necessary
1. If you want to use Wifi, select your wireless network and enter the password if needed.
        > If you are using enterprise security you may need to launch Chromium, the Pi's browser, after selecting your wireless network to log in to your Wifi
    1. Update the Pi's software
1. Reboot the Pi
Once the Pi has rebooted, you will need to change the hostname. All newly set up Pis are configured with a hostname of `raspberrypi`, so if you have more than one Pi on your network you won't be able to distinguish between them by name unless you rename them. You will also need to enable SSH (Secure Shell) access so you can control the Pi later remotely from Visual Studio Code.
1. From the Raspberry Pi select the **Raspberry Pi** menu, then select **Preferences -> Raspberry Pi Configuration**

1. Change the value of the *Hostname* in the *General* tab to be something unique, such as by using your name

1. In the **Interfaces** tab, ensure **SSH** is set to *Enable*

1. Select the **OK** button
1. When prompted, select **Yes** to reboot the Pi
Once the Pi is rebooted, you will be able to connect to it remotely from Visual Studio Code.
## Connect to the Pi from Visual Studio Code
[Visual Studio Code](http://code.visualstudio.com?WT.mc_id=iotcurriculum-github-jabenn) is an open-source developer text editor that can be expanded with extensions to support multiple features or programming languages. It can also be used to remotely code on a Raspberry Pi from your PC or Mac via a remote development extension.
### Install the remote development extension
1. Install Visual Studio Code if you don't already have it installed
1. Launch Visual Studio Code
1. Select the **Extensions** tab from the side menu

1. Search for `Remote Development` and select the **Remote Development Pack**. Select the **Install** button to install this extension.

### Connect to your Raspberry Pi
1. From Visual Studio Code, launch the Command Palette. This is a pop-up menu that allows you to run actions from VS Code as well as any extensions installed.
1. If you are using Windows or Linux, press `Ctrl+Shift+p`
1. If you are using macOS, press `Command+Shift+p`
1. Type `Remote-SSH` and select *Remote-SSH: Connect to Host...*

1. In the *Select configured Host* dialog, enter `pi@<hostname>.local` replacing `<hostname>` with the Hostname you entered when configuring the Pi. For example, if you set the Hostname to be `lab-pi-1`, then enter `[email protected]`
1. If you are using Windows or Linux and you get any errors about the Hostname not being found, you will need to install additional software to enable ZeroConf networking (also referred to by Apple as Bonjour):
1. If you are using Linux, install Avahi using the following command:
```sh
sudo apt-get install avahi-daemon
```
1. If you are using Windows, the easiest way to enable ZeroConf is to install [Bonjour Print Services for Windows](http://support.apple.com/kb/DL999). You can also install [iTunes for Windows](https://www.apple.com/itunes/download/) to get a newer version of the utility (which is not available standalone).
1. The first time you connect you will need to confirm you want to connect to the specific device based on its 'fingerprint'. Select **Connect**.

1. Enter the password for your Pi when prompted
Visual Studio Code will then connect to your Pi and install some dependencies on the Pi that it needs.
### Configure Python on the Pi
The Pi will be programmed using Python, so Visual Studio Code needs to have an extension installed to understand Python, and the Pi needs some additional libraries installed to work with the Grove Pi+.
1. From the Visual Studio Code window that is connected to the Pi, select the **Extensions** tab
1. Search for `PyLance` and select the **Install in SSH: hostname** button to install the PyLance Python extension on the Pi

    > This extension will just be installed on the Pi, not locally. The extensions you install on different remote devices are different from the ones you install locally
1. Once installed, you will need to reload the window, so select the **Reload required** button

Visual Studio Code will now be configured to run Python on the Pi. Next the Python developer features need to be installed.
### Configure the Grove Pi+
1. From the Visual Studio Code terminal connected to the Pi, run the following command to install the Grove libraries:
```sh
sudo curl -kL dexterindustries.com/update_grovepi | bash
```
This will ensure the Pi is correctly configured to work with the Grove Pi+, and install all necessary software and Python packages
> If you don't see the terminal in the bottom of the screen, create a new terminal by selecting **Terminal -> New Terminal**
1. Reboot the Pi using the following command:
```sh
sudo reboot
```
Whilst the Pi is rebooting, VS Code will attempt to reconnect. It will reconnect when it can, and you may need to re-enter your password.
Once the Pi is rebooted, it is recommended to update the firmware on the Grove Pi+ to the latest version.
1. From the Visual Studio Code terminal connected to the Pi, run the following command to get the latest Grove code:
```sh
git clone https://github.com/DexterInd/GrovePi.git
```
This will clone a GitHub repository with Grove Pi code into a folder called `GrovePi`
1. Change to the `Firmware` folder in the newly created `GrovePi` folder:
```sh
cd GrovePi/Firmware
```
1. Update the firmware using the following command
```sh
sudo ./firmware_update.sh
```
1. Reboot the Pi using the following command:
```sh
sudo reboot
```
Whilst the Pi is rebooting, VS Code will attempt to reconnect. It will reconnect when it can, and you may need to re-enter your password.
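Before moving on, you can optionally sanity-check the sensor wiring with a few lines of Python. This sketch assumes the sensor is on digital port D4 as wired earlier:
```python
# quick_test.py - optional wiring check for the Grove temperature humidity sensor
import time
import grovepi
SENSOR_PORT = 4  # Grove Pi+ digital port D4
while True:
    # dht returns [temperature, humidity]; zero readings mean the sensor was read too fast
    temperature, humidity = grovepi.dht(SENSOR_PORT, 0)
    print("Temperature:", temperature, "Humidity:", humidity)
    time.sleep(2)
```
Run it with `python3 quick_test.py` and stop it with `Ctrl+C` once you see sensible readings.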
## Write the code
1. From the VS Code Terminal, create a new folder for the Pi code called `EnvironmentMonitor`
```sh
mkdir EnvironmentMonitor
```
1. Open this folder in VS Code by selecting **File -> Open...** then selecting the `EnvironmentMonitor` folder from the popup, then selecting the **OK** button

1. The window will reload opening the new folder. You may need to enter your password again.
### Install required packages
1. To send data to IoT Central, a Python package needs to be installed. To install this, run the following command on the Pi using the Terminal in VS Code:
```sh
pip3 install azure-iot-device
```
    > Although it is good practice to use [Python virtual environments](https://docs.python.org/3/tutorial/venv.html), we will not be using them for this lab to reduce complexity, as the Grove installer doesn't use virtual environments to install the Python packages needed.
1. Create a new file called `app.py`. This is a Python file that will contain the code to send data to IoT Central.
1. Select the **Explorer** tab from the side menu

1. Select the **New File** button for the *Environment Monitor* folder

> This button is only visible when the cursor is in the file explorer box
1. Name the file `app.py`, then press return

1. Add the following code to this file. You can also find this code in the [app.py](../code/temperature/app.py) file in the [code/temperature](../code/temperature) folder.
```python
import asyncio
import json
import grovepi
from azure.iot.device.aio import IoTHubDeviceClient, ProvisioningDeviceClient
# The connection details from IoT Central for the device
id_scope = ID_SCOPE
key = KEY
    device_id = "pi-environment-monitor"
# Set the temperature sensor port to the digital port D4
# and mark it as INPUT meaning data needs to be
# read from it
temperature_sensor_port = 4
grovepi.pinMode(temperature_sensor_port, "INPUT")
# Gets telemetry from the Grove sensors
# Telemetry needs to be sent as JSON data
async def get_telemetry() -> str:
# The dht call returns the temperature and the humidity,
# we only want the temperature, so ignore the humidity
[temperature, _] = grovepi.dht(temperature_sensor_port, 0)
# The temperature can come as 0, meaning you are reading
# too fast, if so sleep for a second to ensure the next reading
# is ready
while (temperature == 0):
[temperature, _] = grovepi.dht(temperature_sensor_port, 0)
await asyncio.sleep(1)
# Build a dictionary of data
# The items in the dictionary need names that match the
# telemetry values expected by IoT Central
dict = {
"Temperature" : temperature, # The temperature value
}
# Convert the dictionary to JSON
return json.dumps(dict)
# The main function that runs the program in an async loop
async def main():
# Connect to IoT Central and request the connection details for the device
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host="global.azure-devices-provisioning.net",
registration_id=device_id,
id_scope=id_scope,
symmetric_key=key)
registration_result = await provisioning_device_client.register()
# Build the connection string - this is used to connect to IoT Central
conn_str="HostName=" + registration_result.registration_state.assigned_hub + \
";DeviceId=" + device_id + \
";SharedAccessKey=" + key
# The client object is used to interact with Azure IoT Central.
device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
# Connect the client.
print("Connecting")
await device_client.connect()
print("Connected")
# async loop that sends the telemetry
async def main_loop():
while True:
# Get the telemetry to send
telemetry = await get_telemetry()
print("Telemetry:", telemetry)
# Send the telemetry to IoT Central
await device_client.send_message(telemetry)
# Wait for a minute so telemetry is not sent to often
await asyncio.sleep(60)
# Run the async main loop forever
await main_loop()
# Finally, disconnect
await device_client.disconnect()
# Start the program running
asyncio.run(main())
```
Read the comments in the code to see what each section does.
1. Head to line 7 in the file:
```python
id_scope = ID_SCOPE
key = KEY
```
1. Replace the values of `ID_SCOPE` and `KEY` with the ID Scope and Key values you copied from the device connection dialog in IoT Central. These values need to be in quotes. For example if your ID scope was `0ne0FF11FF0` and your key was `12345abcdeFGH567+890ZY=` then the code would read:
```python
id_scope = "0ne0FF11FF0"
key = "12345abcdeFGH567+890ZY="
```
1. Save the file
1. Run the code from the VS Code terminal using the following command:
```sh
python3 app.py
```
1. The app will start up, connect to Azure IoT Central, then send temperature values:
```output
pi@jim-iot-pi:~/EnvironmentMonitor $ python3 app.py
RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer.
Connecting
Connected
Telemetry: {"Temperature": 24.0}
Telemetry: {"Temperature": 25.0}
Telemetry: {"Temperature": 24.0}
Telemetry: {"Temperature": 24.0}
```
Try warming the sensor using your finger, or cooling it with a fan to see changes in temperature.
1. From IoT Central, view the Temperature chart for the Pi device

1. The dashboard will now show temperature values from both the simulated device and the Pi using different colored lines to distinguish between the devices.

## Next steps
In this step you set up a Raspberry Pi to send temperature data.
In the [next step](./set-up-humidity-sound.md) you will add humidity and sound data to the telemetry.
| 45.596372 | 489 | 0.726676 | eng_Latn | 0.992615 |
b9294a102ab2ac4d4a45a649fd716b3aeab7ff51 | 3,915 | md | Markdown | CONTRIBUTING.md | Lion7k/incubator-weex | 4bc653653c1460dcad0642ef654e593c89ebaaf1 | [
"Apache-2.0"
] | 3 | 2019-11-20T06:24:39.000Z | 2019-11-20T06:55:03.000Z | CONTRIBUTING.md | Lion7k/incubator-weex | 4bc653653c1460dcad0642ef654e593c89ebaaf1 | [
"Apache-2.0"
] | null | null | null | CONTRIBUTING.md | Lion7k/incubator-weex | 4bc653653c1460dcad0642ef654e593c89ebaaf1 | [
"Apache-2.0"
] | null | null | null | # How to Contribute
You are welcome to create [pull requests](https://github.com/apache/incubator-weex/compare) or join our [mailing list](http://mail-archives.apache.org/mod_mbox/incubator-weex-dev/) for bug fixes, docs, examples, suggestions, and anything else.
## Join in Weex Mailing List
In Weex community all discussion will happen on mailing list.
Just send an email to `[email protected]` and follow the instructions to subscribe to the Weex dev mailing list. You will then receive all discussions and community messages at your personal email address. At the same time, you can freely send your own emails to join the discussion.
You can also browse the archives of all the mails on the web: [http://mail-archives.apache.org/mod_mbox/incubator-weex-dev/](http://mail-archives.apache.org/mod_mbox/incubator-weex-dev/)
*If you no longer want to follow the mailing list, you can unsubscribe: send an email to `[email protected]` and follow the instructions.*
Besides Weex dev mailing list, we also have some other mailing lists for you. You can check them out here: [http://mail-archives.apache.org/mod_mbox/#weex.incubator](http://mail-archives.apache.org/mod_mbox/#weex.incubator)
## Branch Management
```
release
↑
release-{version}
↑
master <--- PR(feature/hotfix/typo)
```
0. `master` branch
0. `master` is the stable developing branch.
    0. ***It's RECOMMENDED to commit hotfixes (like typos) or feature PRs to `master`***.
0. `release-{version}` branch
    0. `release-{version}` is used for every version that we consider stable for publishing.
0. e.g. `release-0.16`
0. `release` branch
    0. `release` is the latest release branch; we tag and publish versions on this branch.
### Branch Name For PR
```
{module}-{action}-{shortName}
```
* `{module}`, see [commit log module](#commit-log)
* `{action}`
    * `feature`: checkout from `{module}` and merge to `{module}` later. If `{module}` does not exist, merge to `dev`
    * `bugfix`: like `feature`, for bugfixes only
    * `hotfix`: checkout from `master` or a release `tag`, merge to `master` and `{module}` later. If `{module}` does not exist, merge to `dev`
for example:
* `android-bugfix-memory`
* `jsfm-feature-communication`
* `android-hotfix-compute-layout`
## Commit Log
```
{action} [{module}] {description}
```
* `{action}`
* `+` add
* `*` update or bugfix
* `-` remove
* `{module}`
* Including: android, ios, jsfm, html5, component, doc, website, example, test, all
* `{description}`
* Just make it as clear and simple as possible.
for example:
* `+ [android] close #123, add refreshing for WebView`
* `* [doc] fix #123, update video auto-play property`
* `- [example] remove abc`
## Pull Request
You can [create pull requests](https://github.com/apache/incubator-weex/compare) in GitHub.
1. First, we suggest you have some discussion with the community (usually on our mailing list) before you code.
2. Fork repo from [https://github.com/apache/incubator-weex/](https://github.com/apache/incubator-weex/)
3. Finish the job you want to do.
4. Create a pull request.
## Code Style Guide
### Objective-C
* Tabs for indentation (not spaces)
* `*` operator goes with the variable name (e.g. Type *variable;)
* Function definitions: place each brace on its own line.
* Other braces: place the open brace on the line preceding the code block; place the close brace on its own line.
* Use `#pragma marks` to categorize methods into functional groupings and protocol implementations
* Follow other guidelines on [GitHub Objective-C Style Guide](https://github.com/github/objective-c-style-guide)
### Java & Android
* Use [Google Java Style](https://google.github.io/styleguide/javaguide.html) as the basic guideline for Java code.
* Follow [AOSP Code Style](https://source.android.com/source/code-style.html) for the rest of the Android-related code.
| 39.545455 | 284 | 0.72235 | eng_Latn | 0.95258 |
b92967d56fac128bf0c97b43b98fa3bd2b90ca6d | 782 | md | Markdown | README.md | pataraco/blockchain | f94e3dbbf7979f59a9cd677627765571920ca322 | [
"MIT"
] | null | null | null | README.md | pataraco/blockchain | f94e3dbbf7979f59a9cd677627765571920ca322 | [
"MIT"
] | null | null | null | README.md | pataraco/blockchain | f94e3dbbf7979f59a9cd677627765571920ca322 | [
"MIT"
] | null | null | null | # Blockchain Project
This project was created largely by following the Udemy course [Python - The Practical Guide](https://www.udemy.com/course/learn-python-by-building-a-blockchain-cryptocurrency/).
## Purpose
Learning Python
## Description
Learn Python from the ground up and use Python to build a hands-on project from scratch!
## Blockchain Security Layers
- **Previous Block Hash** - Basic block-manipulation check: each block stores a hash generated from the previous block's contents, so blocks know about the blocks that came before them.
- **Proof of Work** - No mass manipulation possible: a proof-of-work value is calculated so that the block's contents hash to a specific value/pattern (AKA mining).
- **Signed Transactions** - Basic transaction-manipulation check: uses public/private keys to verify transactions.
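As an illustration of the proof-of-work idea, here is a minimal sketch (not necessarily the exact implementation used in the course):
```python
import hashlib
def valid_proof(transactions, last_hash, proof):
    # A guess is valid when the SHA-256 hash of the block contents
    # plus the proof number starts with two zeros
    guess = (str(transactions) + str(last_hash) + str(proof)).encode()
    return hashlib.sha256(guess).hexdigest()[:2] == '00'
def proof_of_work(transactions, last_hash):
    proof = 0
    # Brute-force increment until the hash matches the required pattern
    while not valid_proof(transactions, last_hash, proof):
        proof += 1
    return proof
```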
| 43.444444 | 190 | 0.776215 | eng_Latn | 0.993627 |
b9297c548e89abe937e13887c994fdd97dd36a85 | 13,200 | md | Markdown | articles/frontdoor/front-door-custom-domain.md | Microsoft/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 6 | 2017-08-28T07:43:21.000Z | 2022-01-04T10:32:24.000Z | articles/frontdoor/front-door-custom-domain.md | MicrosoftDocs/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 428 | 2018-08-23T21:35:37.000Z | 2021-03-03T10:46:43.000Z | articles/frontdoor/front-door-custom-domain.md | Microsoft/azure-docs.cs-cz | 1e2621851bc583267d783b184f52dc4b853a058c | [
"CC-BY-4.0",
"MIT"
] | 16 | 2018-03-03T16:52:06.000Z | 2021-12-22T09:52:44.000Z | ---
title: Tutorial - Add a custom domain to your Azure Front Door configuration
description: In this tutorial, you learn how to onboard a custom domain to Azure Front Door.
services: frontdoor
documentationcenter: ''
author: jessie-jyy
editor: ''
ms.service: frontdoor
ms.workload: infrastructure-services
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: tutorial
ms.date: 04/12/2021
ms.author: yuajia
ms.openlocfilehash: 7e2f05a7d911ce2b311a423994d2b459de0fa269
ms.sourcegitcommit: b4fbb7a6a0aa93656e8dd29979786069eca567dc
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/13/2021
ms.locfileid: "107308859"
---
# <a name="tutorial-add-a-custom-domain-to-your-front-door"></a>Tutorial: Add a custom domain to your Front Door
This tutorial shows how to add a custom domain to your Front Door. When you use Azure Front Door for application delivery, a custom domain is necessary if you want your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
After you create a Front Door, the default frontend host, which is a subdomain of `azurefd.net`, is included in the URL for delivering Front Door content from your backend by default (for example, https:\//contoso-frontend.azurefd.net/activeusers.htm). For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of a domain name owned by Front Door (for example, https:\//www.contoso.com/photo.png).
In this tutorial, you learn how to:
> [!div class="checklist"]
> - Create a CNAME DNS record.
> - Associate the custom domain with your Front Door.
> - Verify the custom domain.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
> [!NOTE]
> Front Door does **not support** custom domains with [punycode](https://en.wikipedia.org/wiki/Punycode) characters.
## <a name="prerequisites"></a>Prerequisites
* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md).
* If you do not already have a custom domain, you must first purchase one with a domain provider. For an example, see [Buy a custom domain name](../app-service/manage-custom-dns-buy-domain.md).
* If you are using Azure to host your [DNS domains](../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to Azure DNS. For more information, see [Delegate a domain to Azure DNS](../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you are using a domain provider to handle your DNS domain, continue to [Create a CNAME DNS record](#create-a-cname-dns-record).
## <a name="create-a-cname-dns-record"></a>Create a CNAME DNS record
Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (for example, contoso.azurefd.net). A CNAME record is a type of DNS record that maps a source domain name to a destination domain name. For Azure Front Door, the source domain name is your custom domain name and the destination domain name is your Front Door default hostname. Once Front Door verifies the CNAME record you create, traffic addressed to the source custom domain (such as www\.contoso.com) is routed to the specified destination Front Door default frontend host (such as contoso-frontend.azurefd.net).
A custom domain and its subdomain can be associated with only a single Front Door at a time. However, you can use different subdomains from the same custom domain for different Front Doors by using multiple CNAME records. You can also map a custom domain with different subdomains to the same Front Door.
## <a name="map-the-temporary-afdverify-subdomain"></a>Map the temporary afdverify subdomain
When you map an existing domain that is in production, there are special considerations. While you are registering your custom domain in the Azure portal, a brief period of downtime for the domain can occur. To avoid interruption of web traffic, first map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain to create a temporary CNAME mapping. With this method, users can access your domain without interruption while the DNS mapping occurs.
Otherwise, if you are using your custom domain for the first time and no production traffic is running on it, you can map your custom domain directly to your Front Door. Proceed to [Map the permanent custom domain](#map-the-permanent-custom-domain).
To create a CNAME record with the afdverify subdomain:
1. Sign in to the web site of the domain provider for your custom domain.
2. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name server management**.
3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary):
| Source | Type | Destination |
|---------------------------|-------|---------------------------------|
| afdverify.www.contoso.com | CNAME | afdverify.contoso-frontend.azurefd.net |
- Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com. If you are mapping a wildcard domain, like \*.contoso.com, the source value is the same as it would be without the wildcard: afdverify.contoso.com.
- Type: Enter *CNAME*.
- Destination: Enter your Front Door default frontend host, including the afdverify subdomain, in the following format: afdverify._&lt;endpoint name&gt;_.azurefd.net. For example, afdverify.contoso-frontend.azurefd.net.
4. Save your changes.
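If your DNS zone is hosted in Azure DNS, you can create the same record from the command line instead of the provider's web site. A sketch, assuming a zone named contoso.com in a hypothetical resource group named dns-rg:
```azurecli
az network dns record-set cname set-record \
  --resource-group dns-rg \
  --zone-name contoso.com \
  --record-set-name afdverify.www \
  --cname afdverify.contoso-frontend.azurefd.net
```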
For example, the procedure for the GoDaddy domain registrar is as follows:
1. Sign in and select the custom domain you want to use.
2. In the Domains section, select **Manage All**, then select **DNS** | **Manage Zones**.
3. For **Domain Name**, enter your custom domain, then select **Search**.
4. From the **DNS Management** page, select **Add**, then select **CNAME** in the **Type** list.
5. Complete the following fields of the CNAME entry:
   - Type: Leave *CNAME* selected.
   - Host: Enter the subdomain of your custom domain to use, including the afdverify subdomain name. For example, afdverify.www.
   - Points to: Enter the default host name of your Front Door, including the afdverify subdomain name. For example, afdverify.contoso-frontend.azurefd.net.
   - TTL: Leave *1 Hour* selected.
6. Select **Save**.
The CNAME entry is added to the DNS records table.
## <a name="associate-the-custom-domain-with-your-front-door"></a>Associate the custom domain with your Front Door
After you've registered your custom domain, you can then add it to your Front Door.
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the Front Door containing the frontend host that you want to map to a custom domain.
2. On the **Front Door designer** page, select '+' to add a custom domain.
3. Specify **Custom domain**.
4. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *<default hostname>*.azurefd.net. It cannot be changed.
5. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, www\.contoso.com or cdn.contoso.com. Don't use the afdverify subdomain name.
6. Select **Add**.
Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
>[!WARNING]
> You **must** ensure that each of the frontend hosts (including custom domains) in your Front Door has a routing rule with a default path (/\*) associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path (/\*). Failing to do so may result in your end-user traffic not getting routed correctly.
## <a name="verify-the-custom-domain"></a>Verify the custom domain
After you've completed the registration of your custom domain, verify that the custom domain references your Front Door default frontend host.
In your browser, navigate to the address of a file by using the custom domain. For example, if your custom domain is robotics.contoso.com, the URL to the cached file should be similar to the following URL: http:\//robotics.contoso.com/my-public-container/my-file.jpg. Verify that the result is the same as when you access the Front Door directly at *<Front Door host>*.azurefd.net.
## <a name="map-the-permanent-custom-domain"></a>Map the permanent custom domain
If you've verified that the afdverify subdomain has been successfully mapped to your Front Door (or if you're using a new custom domain that isn't in production), you can then map the custom domain directly to your Front Door default frontend host.
To create a CNAME record for your custom domain:
1. Sign in to the web site of your domain provider for your custom domain.
2. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name Server Management**.
3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary):
| Source | Type | Destination |
|-----------------|-------|-----------------------|
| <www.contoso.com> | CNAME | contoso-frontend.azurefd.net |
- Source: Enter your custom domain name (for example, www\.contoso.com).
- Type: Enter *CNAME*.
- Destination: Enter your Front Door default frontend host. It must be in the following format: _&lt;hostname&gt;_.azurefd.net. For example, contoso-frontend.azurefd.net.
4. Save your changes.
5. If you previously created a temporary afdverify subdomain CNAME record, delete it.
6. If you are using this custom domain in production for the first time, follow the steps for [Associate the custom domain with your Front Door](#associate-the-custom-domain-with-your-front-door) and [Verify the custom domain](#verify-the-custom-domain).
For example, the procedure for the GoDaddy domain registrar is as follows:
1. Sign in and select the custom domain you want to use.
2. In the Domains section, select **Manage All**, then select **DNS** | **Manage Zones**.
3. For **Domain Name**, enter your custom domain, then select **Search**.
4. From the **DNS Management** page, select **Add**, then select **CNAME** in the **Type** list.
5. Complete the fields of the CNAME entry:
   - Type: Leave *CNAME* selected.
   - Host: Enter the subdomain of your custom domain to use. For example, www or profile.
   - Points to: Enter the default host name of your Front Door. For example, contoso.azurefd.net.
   - TTL: Leave *1 Hour* selected.
6. Select **Save**.
The CNAME entry is added to the DNS records table.
7. If you have an afdverify CNAME record, select the pencil icon next to it, then select the trash can icon.
8. Select **Delete** to delete the CNAME record.
## <a name="clean-up-resources"></a>Clean up resources
In the preceding steps, you added a custom domain to your Front Door. If you no longer want to associate your Front Door with a custom domain, you can remove the custom domain by doing these steps:
1. Go to your DNS provider, delete the CNAME record for the custom domain, or update the CNAME record for the custom domain to a non-Front Door endpoint.
> [!Important]
> To prevent dangling DNS entries and the security risks they create, starting from April 9th 2021, Azure Front Door requires removal of the CNAME records to Front Door endpoints before the resources can be deleted. Resources include Front Door custom domains, Front Door endpoints, and Azure resource groups that have Front Door custom domains enabled.
2. In your Front Door designer, select the custom domain that you want to remove.
3. Select **Delete** from the context menu for the custom domain. Your custom domain is now disassociated from your endpoint.
## <a name="next-steps"></a>Další kroky
V tomto kurzu jste se naučili:
* Vytvořit záznam DNS CNAME.
* Přidružit vlastní doménu ke službě Front Door.
* Ověřit vlastní doménu.
Pokud chcete zjistit, jak povolit protokol HTTPS pro vlastní doménu, přejděte k dalšímu kurzu.
> [!div class="nextstepaction"]
> [Povolení HTTPS pro vlastní doménu](front-door-custom-domain-https.md)
| 61.395349 | 726 | 0.768788 | ces_Latn | 0.999883 |
b929da8f687bf899fdc67ff2a7a90566095451aa | 193 | md | Markdown | _definitions/bld-apprehension.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 5 | 2018-08-07T21:57:01.000Z | 2022-02-26T13:29:20.000Z | _definitions/bld-apprehension.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 1 | 2018-08-07T22:29:07.000Z | 2018-08-07T22:45:46.000Z | _definitions/bld-apprehension.md | digitallawyer/openlegaldictionary | a318d6c73c3d8e33756d947add397dac7f25cca2 | [
"MIT"
] | 2 | 2020-12-26T17:22:04.000Z | 2021-02-12T21:35:50.000Z | ---
title: Apprehension
letter: A
permalink: "/definitions/bld-apprehension.html"
body: In practice. The
published_at: '2018-07-07'
source: Black's Law Dictionary 2nd Ed (1910)
layout: post
--- | 21.444444 | 47 | 0.746114 | eng_Latn | 0.347363 |
b929f3d185fed932280e7c4e56adf0599bad37bd | 5,556 | md | Markdown | fiber.md | cSkyHawk/docs | c5c18a25bacb9e9fe6fba529ca1c0b85a4e308bc | [
"Apache-2.0"
] | 1 | 2020-11-05T07:14:58.000Z | 2020-11-05T07:14:58.000Z | fiber.md | cSkyHawk/docs | c5c18a25bacb9e9fe6fba529ca1c0b85a4e308bc | [
"Apache-2.0"
] | null | null | null | fiber.md | cSkyHawk/docs | c5c18a25bacb9e9fe6fba529ca1c0b85a4e308bc | [
"Apache-2.0"
] | null | null | null | ---
description: Fiber represents the fiber package, where you start by creating an app instance.
---
# 📦 Fiber
## New
This method creates a new named **App** instance. You can pass optional [settings](app.md#settings) when creating a new instance.
{% code title="Signature" %}
```go
func New(config ...Config) *App
```
{% endcode %}
{% code title="Example" %}
```go
// Default config
app := fiber.New()
// ...
```
{% endcode %}
## Config
You can pass an optional Config when creating a new Fiber instance.
{% code title="Example" %}
```go
// Custom config
app := fiber.New(fiber.Config{
Prefork: true,
CaseSensitive: true,
StrictRouting: true,
ServerHeader: "Fiber",
})
// ...
```
{% endcode %}
**Config fields**
| Property | Type | Description | Default |
| :--- | :--- | :--- | :--- |
| Prefork | `bool` | Enables use of the [`SO_REUSEPORT`](https://lwn.net/Articles/542629/) socket option. This will spawn multiple Go processes listening on the same port. Learn more about [socket sharding](https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/). | `false` |
| ServerHeader | `string` | Enables the `Server` HTTP header with the given value. | `""` |
| StrictRouting | `bool` | When enabled, the router treats `/foo` and `/foo/` as different. Otherwise, the router treats `/foo` and `/foo/` as the same. | `false` |
| CaseSensitive | `bool` | When enabled, `/Foo` and `/foo` are different routes. When disabled, `/Foo`and `/foo` are treated the same. | `false` |
| Immutable | `bool` | When enabled, all values returned by context methods are immutable. By default, they are valid until you return from the handler; see issue [\#185](https://github.com/gofiber/fiber/issues/185). | `false` |
| UnescapePath | `bool` | Converts all encoded characters in the route back before setting the path for the context, so that the routing can also work with URL encoded special characters | `false` |
| ETag | `bool` | Enable or disable ETag header generation, since both weak and strong etags are generated using the same hashing method \(CRC-32\). Weak ETags are the default when enabled. | `false` |
| BodyLimit | `int` | Sets the maximum allowed size for a request body, if the size exceeds the configured limit, it sends `413 - Request Entity Too Large` response. | `4 * 1024 * 1024` |
| Concurrency | `int` | Maximum number of concurrent connections. | `256 * 1024` |
| Views | `Views` | Views is the interface that wraps the Render function. See our **Template Middleware** for supported engines. | `nil` |
| ReadTimeout | `time.Duration` | The amount of time allowed to read the full request, including the body. The default timeout is unlimited. | `nil` |
| WriteTimeout | `time.Duration` | The maximum duration before timing out writes of the response. The default timeout is unlimited. | `nil` |
| IdleTimeout | `time.Duration` | The maximum amount of time to wait for the next request when keep-alive is enabled. If IdleTimeout is zero, the value of ReadTimeout is used. | `nil` |
| ReadBufferSize | `int` | per-connection buffer size for requests' reading. This also limits the maximum header size. Increase this buffer if your clients send multi-KB RequestURIs and/or multi-KB headers \(for example, BIG cookies\). | `4096` |
| WriteBufferSize | `int` | Per-connection buffer size for responses' writing. | `4096` |
| CompressedFileSuffix | `string` | Adds a suffix to the original file name and tries saving the resulting compressed file under the new file name. | `".fiber.gz"` |
| ProxyHeader | `string` | This will enable `c.IP()` to return the value of the given header key. By default `c.IP()` will return the Remote IP from the TCP connection, this property can be useful if you are behind a load balancer e.g. _X-Forwarded-\*_. | `""` |
| GETOnly | `bool` | Rejects all non-GET requests if set to true. This option is useful as anti-DoS protection for servers accepting only GET requests. The request size is limited by ReadBufferSize if GETOnly is set. | `false` |
| ErrorHandler | `ErrorHandler` | ErrorHandler is executed when an error is returned from fiber.Handler. | `DefaultErrorHandler` |
| DisableKeepalive | `bool` | Disable keep-alive connections, the server will close incoming connections after sending the first response to client | `false` |
| DisableDefaultDate | `bool` | When set to true causes the default date header to be excluded from the response. | `false` |
| DisableDefaultContentType | `bool` | When set to true, causes the default Content-Type header to be excluded from the Response. | `false` |
| DisableHeaderNormalizing | `bool` | By default all header names are normalized: conteNT-tYPE -> Content-Type | `false` |
| DisableStartupMessage | `bool` | When set to true, it will not print out debug information | `false` |
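For instance, a server with a custom header, timeouts, and an error handler could be configured as follows (a minimal sketch assuming the v2 import path; the handler logic is illustrative):
{% code title="Example" %}
```go
package main
import (
	"log"
	"time"
	"github.com/gofiber/fiber/v2"
)
func main() {
	app := fiber.New(fiber.Config{
		ServerHeader: "Fiber",
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
		// Illustrative error handler: reply with the error text and status code
		ErrorHandler: func(c *fiber.Ctx, err error) error {
			code := fiber.StatusInternalServerError
			if e, ok := err.(*fiber.Error); ok {
				code = e.Code
			}
			return c.Status(code).SendString(err.Error())
		},
	})
	log.Fatal(app.Listen(":3000"))
}
```
{% endcode %}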
## NewError
NewError creates a new HTTPError instance with an optional message
{% code title="Signature" %}
```go
func NewError(code int, message ...string) *Error
```
{% endcode %}
{% code title="Example" %}
```go
app.Get(func(c *fiber.Ctx) error {
return fiber.NewError(782, "Custom error message")
})
```
{% endcode %}
## IsChild
IsChild determines if the current process is a result of Prefork
{% code title="Signature" %}
```go
func IsChild() bool
```
{% endcode %}
{% code title="Example" %}
```go
// Prefork will spawn child processes
app := fiber.New(fiber.Config{
Prefork: true,
})
if !fiber.IsChild() {
fmt.Println("I'm the parent process")
} else {
fmt.Println("I'm a child process")
}
// ...
```
{% endcode %}
| 47.084746 | 282 | 0.705184 | eng_Latn | 0.980367 |
b92a48c20eefc108d2d24a98533d6df3b5c5947a | 321 | md | Markdown | test-data/use-tabs/expected/sub1/README.md | wooolfgang/markdown-notes-tree | 73e55e9b7edc7ffce849908cd76b94669ef3a5d7 | [
"MIT"
] | 16 | 2020-03-12T09:38:45.000Z | 2022-03-28T15:21:59.000Z | test-data/use-tabs/expected/sub1/README.md | wooolfgang/markdown-notes-tree | 73e55e9b7edc7ffce849908cd76b94669ef3a5d7 | [
"MIT"
] | 6 | 2020-03-12T09:50:56.000Z | 2021-12-18T12:17:40.000Z | test-data/use-tabs/expected/sub1/README.md | mistermicheels/markdown-notes-tree | 396df1df165ec6ed62dc0661469d360c64c920b6 | [
"MIT"
] | 4 | 2021-06-16T04:03:26.000Z | 2021-09-05T22:04:41.000Z | <!-- generated by markdown-notes-tree -->
# sub1
<!-- optional markdown-notes-tree directory description starts here -->
<!-- optional markdown-notes-tree directory description ends here -->
- [**sub1a**](sub1a)
- [Title for file1a1](sub1a/file1a1.md)
- [Title for file1a](file1a.md)
- [Title for file1b](file1b.md)
| 24.692308 | 71 | 0.694704 | eng_Latn | 0.812426 |
b92ab53934cdfbf19416802a6a395d3e6a8008cf | 1,191 | md | Markdown | docs/vs-2015/ide/reference/set-current-thread-command.md | Surbowl/visualstudio-docs.zh-cn | bbf24a28be0be0848ce22c2e18aa9302c89783c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/ide/reference/set-current-thread-command.md | Surbowl/visualstudio-docs.zh-cn | bbf24a28be0be0848ce22c2e18aa9302c89783c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/ide/reference/set-current-thread-command.md | Surbowl/visualstudio-docs.zh-cn | bbf24a28be0be0848ce22c2e18aa9302c89783c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: “设置当前线程”命令 | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-general
ms.topic: reference
f1_keywords:
- debug.setcurrentthread
helpviewer_keywords:
- Set Current Thread command
- Debug.SetCurrentThread command
ms.assetid: 9917ed1d-6c30-4d94-b2f0-69acce74f1b2
caps.latest.revision: 20
author: jillre
ms.author: jillfra
manager: jillfra
ms.openlocfilehash: 67bf0d37e6f734fa4b3229488bc3eee2732c3063
ms.sourcegitcommit: a8e8f4bd5d508da34bbe9f2d4d9fa94da0539de0
ms.translationtype: MTE95
ms.contentlocale: zh-CN
ms.lasthandoff: 10/19/2019
ms.locfileid: "72665447"
---
# <a name="set-current-thread-command"></a>“设置当前线程”命令
[!INCLUDE[vs2017banner](../../includes/vs2017banner.md)]
将指定的线程设置为当前线程。
## <a name="syntax"></a>语法
```
Debug.SetCurrentThread index
```
## <a name="arguments"></a>自变量
`index`(必需)。 按线程的索引选择线程。
## <a name="example"></a>示例
```
>Debug.SetCurrentThread 1
```
## <a name="see-also"></a>另请参阅
[Visual studio](../../ide/reference/visual-studio-commands.md) "[命令" 窗口](../../ide/reference/command-window.md)中的["查找/命令" 框](../../ide/find-command-box.md) [visual studio 命令别名](../../ide/reference/visual-studio-command-aliases.md)
| 25.891304 | 231 | 0.739715 | yue_Hant | 0.114503 |
b92b4b7b8d25fb620dbdfddc8d091a394f01fb18 | 15 | md | Markdown | README.md | klaasgerardkrakau1972-1/klaas | 6efa587150beeb625d568be2728d7a530be076ce | [
"BSL-1.0"
] | null | null | null | README.md | klaasgerardkrakau1972-1/klaas | 6efa587150beeb625d568be2728d7a530be076ce | [
"BSL-1.0"
] | null | null | null | README.md | klaasgerardkrakau1972-1/klaas | 6efa587150beeb625d568be2728d7a530be076ce | [
"BSL-1.0"
] | null | null | null | # klaas
gerard
| 5 | 7 | 0.733333 | afr_Latn | 0.788058 |
b92ba56dd176c39e5e821c73314d9232b98e39c9 | 7,330 | md | Markdown | biztalk/technical-guides/sql-server-settings-that-should-not-be-changed.md | hiromi-shindo/biztalk-docs | 2b055711e6cdcda6e297d58bfb5fb1d5b36d4708 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-06-14T19:45:26.000Z | 2019-06-14T19:45:26.000Z | biztalk/technical-guides/sql-server-settings-that-should-not-be-changed.md | hiromi-shindo/biztalk-docs | 2b055711e6cdcda6e297d58bfb5fb1d5b36d4708 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-19T08:49:33.000Z | 2019-11-19T08:49:33.000Z | biztalk/technical-guides/sql-server-settings-that-should-not-be-changed.md | OPS-E2E-PPE/biztalk-docs | dfcd48d9ae3142ba3484aac52cb35f6ec8f3881c | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-08-05T18:53:58.000Z | 2019-08-05T18:53:58.000Z | ---
title: "SQL Server settings not to change | Microsoft Docs"
description: Max Degree of Parallelism, Auto create statistics, Auto update statistics, and rebuilding indexes in BizTalk Server
ms.custom: ""
ms.date: "06/08/2017"
ms.prod: "biztalk-server"
ms.reviewer: ""
ms.suite: ""
ms.tgt_pltfrm: ""
ms.topic: "article"
ms.assetid: 4b383bfb-c3d9-47d4-b294-f6be94302734
caps.latest.revision: 2
author: "MandiOhlinger"
ms.author: "mandia"
manager: "anneta"
---
# SQL Server Settings That Should Not Be Changed
When setting up [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] during the operational readiness procedures for [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)], you should not make changes to the following settings.
## SQL Server Max Degree of Parallelism
Max Degree of Parallelism (MDOP) is set to “1” during the configuration of [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] for the [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] instance(s) that host the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] MessageBox database(s). This is a [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] instance-level setting. This setting should not be changed from the value of “1”. Changing this to anything other than “1” can have a significant negative impact on the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] stored procedures and performance. If changing the parallelism setting for an instance of [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] will have an adverse effect on other database applications that are being executed on the [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] instance, you should create a separate instance of [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)] dedicated to hosting the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] databases.
Parallel queries are generally best suited to batch processing and decision support workloads. They are typically not desirable in a transaction processing environment where you have many short, fast queries running in parallel. In addition, changing the MDOP setting sometimes causes the query plan to be changed, which leads to poor query performance or even deadlocks with the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] queries.
The [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] stored procedures provide the correct joins and lock hints wherever possible in order to try to keep the query optimizer from doing much work and changing the plan. These stored procedures provide consistent query executions by constructing the queries such that the query optimizer is taken out of the picture as much as possible.
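To verify the instance-level setting, you can run a quick check with sp_configure (this assumes sufficient permissions; the *show advanced options* setting must be enabled for the value to be listed):
```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- run_value should be 1 on instances that host the MessageBox database
EXEC sp_configure 'max degree of parallelism';
```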
For more information, see [KB 899000: Parallelism setting for SQL Server instance used by BizTalk Server](https://support.microsoft.com/help/899000/the-parallelism-setting-for-the-instance-of-sql-server-when-you-config).
## SQL Server Statistics on the MessageBox Database
The following options are turned off by default in the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] MessageBox database when it is created:
- Auto create statistics
- Auto update statistics
Do not enable these options on MessageBox databases. Enabling the "auto create statistics" and "auto update statistics" options can cause undesirable query execution delays, especially in a high-load environment.
In addition, the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] stored procedures have exact joins and lock hints specified on the queries. This is done to ensure that the optimal query plan is used by the [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] queries in [!INCLUDE[btsSQLServerNoVersion](../includes/btssqlservernoversion-md.md)]. The distributions and expected results for the queries are known; the approximate number of rows returned is known. Statistics are generally not needed.
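A quick way to confirm that both options remain off, assuming the default database name BizTalkMsgBoxDb:
```sql
-- Both columns should return 0 (off) for the MessageBox database
SELECT DATABASEPROPERTYEX('BizTalkMsgBoxDb', 'IsAutoCreateStatistics') AS AutoCreateStatistics,
       DATABASEPROPERTYEX('BizTalkMsgBoxDb', 'IsAutoUpdateStatistics') AS AutoUpdateStatistics;
```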
For more information, see the following Microsoft Knowledge Base articles:
- **912262**—["The auto update statistics option, the auto create statistics option, and the Parallelism setting are turned off in the SQL Server database instance that hosts the BizTalk Server BizTalkMsgBoxDB database"](https://support.microsoft.com/help/912262/the-auto-update-statistics-option-the-auto-create-statistics-option-an).
- **917845**—["You experience blocking, deadlock conditions, or other SQL Server issues when you try to connect to the BizTalkMsgBoxDb database in BizTalk Server"](https://support.microsoft.com/help/917845/you-experience-blocking--deadlock-conditions--or-other-sql-server-issu).
## Changes to the MessageBox Database
The MessageBox database should be treated like non-Microsoft application source code. That is, you should not “tweak” the MessageBox database via changes to tables, indexes, stored procedures, and most SQL Server database settings. For more information, in the BizTalk Core Engine's WebLog, see [What you can and can't do with the MessageBox Database server](http://go.microsoft.com/fwlink/p/?LinkId=101577).
## Default Settings for the Database Index Rebuilds and Defragmentation
[!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] does not support defragmenting indexes. “DBCC INDEXDEFRAG” and “ALTER INDEX … REORGANIZE …” are not supported since they use page locking, which can cause blocking and deadlocks with [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)]. [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] does support database index rebuilds (“DBCC DBREINDEX” and “ALTER INDEX … REBUILD …”), but they should only be done during maintenance windows when [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] is not processing data. Index rebuilds while [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] is processing data are not supported.
For more information, see [KB 917845: You experience blocking, deadlock conditions, or other SQL Server issues when you try to connect to the BizTalkMsgBoxDb database in BizTalk Server"](https://support.microsoft.com/help/917845/you-experience-blocking--deadlock-conditions--or-other-sql-server-issu).
Index fragmentation is not as much of a performance issue for [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] as it would be for a DSS system or an OLTP system that performs index scans. [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] does very selective queries and updates and [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] stored procedures should not cause table or index scans.
## See Also
[Checklist: Configuring SQL Server](~/technical-guides/checklist-configuring-sql-server.md)
| 124.237288 | 1,288 | 0.799318 | eng_Latn | 0.955832 |
b92bb7b0c7309ef864dddcc995e9a332814e4738 | 660 | md | Markdown | README.md | danprime/ComplexDateCalculator | c43217f87bbde15b924b6663bc8a69767b57f9b6 | [
"MIT"
] | null | null | null | README.md | danprime/ComplexDateCalculator | c43217f87bbde15b924b6663bc8a69767b57f9b6 | [
"MIT"
] | null | null | null | README.md | danprime/ComplexDateCalculator | c43217f87bbde15b924b6663bc8a69767b57f9b6 | [
"MIT"
] | null | null | null | # ComplexDateCalculator
Calculates a complex date like the 3rd Wednesday of November.
Returns a Javascript Date Object.
## PARAMETERS
* dayOfWeek (1 = Monday, 2 = Tuesday, ... ,7 = Sunday)
* iterator (1 = First, 2 = Second, 3 = Third )
* month (1 = January, 2 = February, ... , 12 = December)
* year - the year you want to calculate from (optional, if null, defaults to current year)
## Example Usage:
* Calculate the Labour Day (First Monday of September) = complexDateCalculator(1, 1, 9, 2015)
Output: Date 2015-09-07T06:00:00.000Z
* Calculate the Third Tuesday of this year's July = complexDateCalculator(2, 3, 7)
Output: Date 2015-07-21T06:00:00.000Z
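A minimal JavaScript sketch of calling the function, assuming `complexDateCalculator` is already in scope (this README doesn't specify how the module is imported):
```js
// Labour Day: first Monday of September 2015
var labourDay = complexDateCalculator(1, 1, 9, 2015);
console.log(labourDay.toISOString()); // 2015-09-07T06:00:00.000Z
```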
| 36.666667 | 93 | 0.710606 | eng_Latn | 0.677995 |
b92be3f3e56a3933314ac045122e2eb8dd66baf2 | 6,794 | md | Markdown | articles/aks/tutorial-kubernetes-upgrade-cluster.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/aks/tutorial-kubernetes-upgrade-cluster.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/aks/tutorial-kubernetes-upgrade-cluster.md | changeworld/azure-docs.cs-cz | cbff9869fbcda283f69d4909754309e49c409f7d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Kubernetes on Azure tutorial - Upgrade a cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version.
services: container-service
ms.topic: tutorial
ms.date: 02/25/2020
ms.custom: mvc
ms.openlocfilehash: 4d9ef061904fb1a0fff25506eedb82158971bed5
ms.sourcegitcommit: 0947111b263015136bca0e6ec5a8c570b3f700ff
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 03/24/2020
ms.locfileid: "77622030"
---
# <a name="tutorial-upgrade-kubernetes-in-azure-kubernetes-service-aks"></a>Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes and use new features. An Azure Kubernetes Service (AKS) cluster can be upgraded using the Azure CLI.
In this tutorial, part seven of seven, a Kubernetes cluster is upgraded. You learn how to:
> [!div class="checklist"]
> * Identify current and available Kubernetes versions
> * Upgrade the Kubernetes nodes
> * Validate a successful upgrade
## <a name="before-you-begin"></a>Before you begin
In previous tutorials, an application was packaged into a container image. This image was uploaded to Azure Container Registry, and you created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1 - Create container images][aks-tutorial-prepare-app].
This tutorial requires that you are running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
## <a name="get-available-cluster-versions"></a>Get available cluster versions
Before you upgrade a cluster, use the [az aks get-upgrades][] command to check which Kubernetes releases are available for upgrade:
```azurecli
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```
In the following example, the current version is *1.14.8*, and the available versions are shown in the *Upgrades* column.
```
Name ResourceGroup MasterVersion NodePoolVersion Upgrades
------- --------------- --------------- ----------------- --------------
default myResourceGroup 1.14.8 1.14.8 1.15.5, 1.15.7
```
## <a name="upgrade-a-cluster"></a>Upgrade clusteru
Chcete-li minimalizovat narušení spuštěných aplikací, uzly AKS jsou pečlivě uzavřeny a vybity. V tomto procesu jsou prováděny následující kroky:
1. Plánovač Kubernetes zabraňuje další pody jsou naplánovány na uzlu, který má být upgradován.
1. Spuštěné pody v uzlu jsou naplánovány na jiných uzlech v clusteru.
1. Vytvoří se uzel, který spouští nejnovější součásti Kubernetes.
1. Když je nový uzel připraven a připojen ke clusteru, plánovač Kubernetes začne spouštět pody na něm.
1. Starý uzel je odstraněn a další uzel v clusteru zahájí proces cordon u vyprazdňování.
Cluster AKS můžete upgradovat pomocí příkazu [az aks upgrade][]. Následující příklad upgraduje cluster na Kubernetes verze *1.14.6*.
> [!NOTE]
> Najednou můžete upgradovat pouze jednu dílčí verzi. Můžete například upgradovat z *1.14.x* na *1.15.x*, ale nelze je upgradovat přímo z *1.14.x* na *1.16.x.* Chcete-li upgradovat z *1.14.x* na *1.16.x*, nejprve upgradujte z *1.14.x* na *1.15.x*a proveďte další upgrade z *1.15.x* na *1.16.x*.
```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.15.5
```
The following condensed example output shows *kubernetesVersion* now reporting *1.15.5*:
```json
{
"agentPoolProfiles": [
{
"count": 3,
"maxPods": 110,
"name": "nodepool1",
"osType": "Linux",
"storageProfile": "ManagedDisks",
"vmSize": "Standard_DS1_v2",
}
],
"dnsPrefix": "myAKSClust-myResourceGroup-19da35",
"enableRbac": false,
"fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io",
"id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
"kubernetesVersion": "1.15.5",
"location": "eastus",
"name": "myAKSCluster",
"type": "Microsoft.ContainerService/ManagedClusters"
}
```
## <a name="validate-an-upgrade"></a>Ověření upgradu
Následujícím způsobem ověřte úspěšné provedení upgradu pomocí příkazu [az aks show][]:
```azurecli
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
```
The following example output shows that the AKS cluster runs *KubernetesVersion 1.15.5*:
```
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
------------ ---------- --------------- ------------------- ------------------- ----------------------------------------------------------------
myAKSCluster eastus myResourceGroup 1.15.5 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
```
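You can also list the nodes with `kubectl` to confirm the version reported by each node in its *VERSION* column (this assumes your cluster credentials are already configured, for example with `az aks get-credentials`):
```
kubectl get nodes
```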
## <a name="delete-the-cluster"></a>Odstranění clusteru
Vzhledem k tomu, že tento kurz je poslední částí řady, můžete odstranit cluster AKS. Jelikož uzly prostředí Kubernetes běží na virtuálních počítačích v Azure, účtují se za ně poplatky, i když cluster nevyužíváte. Pomocí příkazu [az group delete][az-group-delete] odeberete skupinu prostředků, službu kontejneru a všechny související prostředky.
```azurecli-interactive
az group delete --name myResourceGroup --yes --no-wait
```
> [!NOTE]
> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
## <a name="next-steps"></a>Další kroky
V tomto kurzu jste upgradovali Kubernetes v clusteru AKS. Naučili jste se tyto postupy:
> [!div class="checklist"]
> * Identifikace aktuální verze a dostupných verzí Kubernetes
> * Upgrade uzlů Kubernetes
> * Ověření úspěšného upgradu
Další informace o službě AKS najdete na následujícím odkazu.
> [!div class="nextstepaction"]
> [Přehled služby AKS][aks-intro]
<!-- LINKS - external -->
[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
<!-- LINKS - internal -->
[aks-intro]: ./intro-kubernetes.md
[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
[az aks show]: /cli/azure/aks#az-aks-show
[az aks get-upgrades]: /cli/azure/aks#az-aks-get-upgrades
[az aks upgrade]: /cli/azure/aks#az-aks-upgrade
[azure-cli-install]: /cli/azure/install-azure-cli
[az-group-delete]: /cli/azure/group#az-group-delete
[sp-delete]: kubernetes-service-principal.md#additional-considerations
| 46.855172 | 345 | 0.735502 | ces_Latn | 0.996236 |
b92d9c9cf2f3500eaf7ab6f072ecd55de01aadee | 368 | md | Markdown | README.md | HappyTramp/mandelbrot_cpu | 9ab34fff22bb2d6ebedefc702f7ec6c55937e175 | [
"MIT"
] | null | null | null | README.md | HappyTramp/mandelbrot_cpu | 9ab34fff22bb2d6ebedefc702f7ec6c55937e175 | [
"MIT"
] | null | null | null | README.md | HappyTramp/mandelbrot_cpu | 9ab34fff22bb2d6ebedefc702f7ec6c55937e175 | [
"MIT"
] | null | null | null | # Mandelbrot set visualizer (CPU version)
A visualizer of the [Mandelbrot Set](https://en.wikipedia.org/wiki/Mandelbrot_set).
All computations are done on the CPU, in contrast to my [other mandelbrot visualizer](https://github.com/HappyTramp/mandel) where I use the GPU.
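For reference, the core of a CPU escape-time computation looks roughly like the sketch below (illustrative only, not the repository's actual code):
```c
#include <complex.h>
/* Escape-time iteration: returns the iteration count at which |z| exceeds 2,
 * or max_iter if the point is treated as inside the set. */
static int mandel_iterations(double complex c, int max_iter)
{
    double complex z = 0;
    for (int i = 0; i < max_iter; i++) {
        z = z * z + c;      /* z_{n+1} = z_n^2 + c */
        if (cabs(z) > 2.0)  /* |z| > 2 guarantees divergence */
            return i;
    }
    return max_iter;
}
```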
## Dependencies
- [SDL2](https://www.libsdl.org/) for the graphics
## Usage
```
make all
./mandel
```
| 23 | 134 | 0.736413 | eng_Latn | 0.624682 |
b92e159087ffc1fb25dd8c7f7e4971a615e44210 | 6,494 | md | Markdown | configuration.md | crossoverJie/docs-zh | 7364b677af4e6f27649eec049a4b06f4a876be69 | [
"MIT"
] | 1 | 2021-06-14T17:57:33.000Z | 2021-06-14T17:57:33.000Z | configuration.md | crossoverJie/docs-zh | 7364b677af4e6f27649eec049a4b06f4a876be69 | [
"MIT"
] | null | null | null | configuration.md | crossoverJie/docs-zh | 7364b677af4e6f27649eec049a4b06f4a876be69 | [
"MIT"
] | 1 | 2018-12-04T15:30:55.000Z | 2018-12-04T15:30:55.000Z | # Configuration
You can configure docsify in `window.$docsify`.
```html
<script>
window.$docsify = {
repo: 'docsifyjs/docsify',
maxLevel: 3,
coverpage: true
}
</script>
```
## el
- Type: `String`
- Default: `#app`
The mount element for docsify initialization. It can be a CSS selector; by default `#app` is used, and if it does not exist, docsify binds directly to `body`.
```js
window.$docsify = {
el: '#app'
};
```
## repo
- Type: `String`
- Default: `null`
Configure the repository URL, or a `username/repo` string, to render a [GitHub Corner](http://tholman.com/github-corners/) widget in the top right corner of the page.
```js
window.$docsify = {
repo: 'docsifyjs/docsify',
// or
repo: 'https://github.com/docsifyjs/docsify/'
};
```
## maxLevel
- Type: `Number`
- Default: `6`
By default all headings in the document are parsed into the table of contents. You can configure the maximum heading level to be rendered.
```js
window.$docsify = {
maxLevel: 4
};
```
## loadNavbar
- Type: `Boolean|String`
- Default: `false`
Loads a custom navbar; see [Custom navbar](zh-cn/custom-navbar.md) for details. When set to `true`, the `_navbar.md` file is loaded; you can also customize the file name to load.
```js
window.$docsify = {
// load _navbar.md
loadNavbar: true,
// load nav.md
loadNavbar: 'nav.md'
};
```
## loadSidebar
- Type: `Boolean|String`
- Default: `false`
Loads a custom sidebar; see [Multiple pages](zh-cn/more-pages.md). When set to `true`, the `_sidebar.md` file is loaded; you can also customize the file name to load.
```js
window.$docsify = {
// load _sidebar.md
loadSidebar: true,
// load summary.md
loadSidebar: 'summary.md'
};
```
## subMaxLevel
- Type: `Number`
- Default: `0`
After a custom sidebar is set, the table of contents is no longer generated by default. You can re-enable it by setting the maximum heading level to include in the generated table of contents.
```js
window.$docsify = {
subMaxLevel: 2
};
```
## auto2top
- Type: `Boolean`
- Default: `false`
Whether to scroll to the top of the page automatically when the route changes.
```js
window.$docsify = {
auto2top: true
};
```
## homepage
- Type: `String`
- Default: `README.md`
Sets the path of the homepage file to load. This is useful when you do not want to render README.md as the entry file, or when the documentation is stored somewhere else.
```js
window.$docsify = {
// use /home.md as the entry file
homepage: 'home.md',
// same content as the README.md in the repository root
homepage:
'https://raw.githubusercontent.com/docsifyjs/docsify/master/README.md'
};
```
## basePath
- Type: `String`
The base path from which the documentation is loaded. It can be a sub-path or a path on another domain.
```js
window.$docsify = {
basePath: '/path/',
// render docs from another domain directly
basePath: 'https://docsify.js.org/',
// or even render the readme of another repository directly
basePath:
'https://raw.githubusercontent.com/ryanmcdermott/clean-code-javascript/master/'
};
```
## coverpage
- Type: `Boolean|String`
- Default: `false`
Enables the [cover page](zh-cn/cover.md). When enabled, the `_coverpage.md` file is loaded; the file name can also be customized.
```js
window.$docsify = {
coverpage: true,
// custom file name
coverpage: 'cover.md',
// multiple covers
coverpage: ['/', '/zh-cn/'],
// multiple covers and custom file names
coverpage: {
'/': 'cover.md',
'/zh-cn/': 'cover.md'
}
};
```
## logo
- Type: `String`
Website logo as it appears in the sidebar; you can resize it with CSS.
```js
window.$docsify = {
logo: '/_media/icon.svg'
};
```
## name
- Type: `String`
The site name, displayed at the top of the sidebar.
```js
window.$docsify = {
name: 'docsify'
};
```
## nameLink
- Type: `String`
- Default: `window.location.pathname`
The URL that the site name links to when clicked.
```js
window.$docsify = {
nameLink: '/',
// switch by route
nameLink: {
'/zh-cn/': '/zh-cn/',
'/': '/'
}
};
```
## markdown
- Type: `Object|Function`
See [Markdown configuration](zh-cn/markdown.md).
```js
window.$docsify = {
// object
markdown: {
smartypants: true,
renderer: {
link: function() {
// ...
}
}
},
// function
markdown: function(marked, renderer) {
// ...
return marked;
}
};
```
## themeColor
- Type: `String`
Customizes the theme color. It relies on [CSS3 variables](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_variables); a polyfill is applied for older browsers.
```js
window.$docsify = {
themeColor: '#3F51B5'
};
```
## alias
- Type: `Object`
Defines route aliases so routing rules can be defined more freely. Regular expressions are supported.
```js
window.$docsify = {
alias: {
'/foo/(+*)': '/bar/$1', // supports regexp
'/zh-cn/changelog': '/changelog',
'/changelog':
'https://raw.githubusercontent.com/docsifyjs/docsify/master/CHANGELOG',
'/.*/_sidebar.md': '/_sidebar.md' // See #301
}
};
```
## autoHeader
- Type: `Boolean`
When both `loadSidebar` and `autoHeader` are set, a heading can be added to each page automatically based on the contents of `_sidebar.md`. [#78](https://github.com/docsifyjs/docsify/issues/78)
```js
window.$docsify = {
loadSidebar: true,
autoHeader: true
};
```
## executeScript
- Type: `Boolean`
Executes the scripts in the document's script tags; only the first script is executed ([demo](zh-cn/themes.md)). If Vue is present, this is enabled automatically.
```js
window.$docsify = {
executeScript: true
};
```
```markdown
## This is test
<script>
console.log(2333)
</script>
```
Note that if you are running an external script, such as an embedded jsfiddle demo, make sure to include the [external-script](plugins.md?id=外链脚本-external-script) plugin.
## noEmoji
- type: `Boolean`
Disables emoji parsing.
```js
window.$docsify = {
noEmoji: true
};
```
## mergeNavbar
- type: `Boolean`
Merges the navbar into the sidebar on small screens.
```js
window.$docsify = {
mergeNavbar: true
};
```
## formatUpdated
- type: `String|Function`
We can display the document's last updated date through the **{docsify-updated<span>}</span>** variable and format it with `formatUpdated`. See https://github.com/lukeed/tinydate#patterns
```js
window.$docsify = {
formatUpdated: '{MM}/{DD} {HH}:{mm}',
formatUpdated: function(time) {
// ...
return time;
}
};
```
## externalLinkTarget
- type: `String`
- default: `_blank`
The default is `_blank`; you can configure it as follows:
```js
window.$docsify = {
externalLinkTarget: '_self' // default: '_blank'
};
```
## routerMode
- type: `String`
- default: `hash`
```js
window.$docsify = {
routerMode: 'history' // default: 'hash'
};
```
## noCompileLinks
- Type: `Array`
Sometimes we do not want docsify to handle our links. See [#203](https://github.com/docsifyjs/docsify/issues/203)
```js
window.$docsify = {
noCompileLinks: ['/foo', '/bar/.*']
};
```
## requestHeaders
- type: `Object`
Sets the request headers for resource requests.
```js
window.$docsify = {
requestHeaders: {
'x-token': 'xxx'
}
};
```
## ext
- type: `String`
The file extension of resource files.
```js
window.$docsify = {
ext: '.md'
};
```
## fallbackLanguages
- type: `Array<string>`
List of languages that will fall back to the default language when a requested page does not exist for the given locale.
Example:
- try to fetch the page `/de/overview`. If this page exists, it'll be displayed
- then try to fetch the default page `/overview` (depending on the default language). If this page exists, it'll be displayed
- then display the 404 page.
```js
window.$docsify = {
fallbackLanguages: ['fr', 'de']
};
```
## notFoundPage
- type: `Boolean` | `String` | `Object`
Load the `_404.md` file:
```js
window.$docsify = {
notFoundPage: true
};
```
Load the customised path of the 404 page:
```js
window.$docsify = {
notFoundPage: 'my404.md'
};
```
Load the right 404 page according to the localisation:
```js
window.$docsify = {
notFoundPage: {
'/': '_404.md',
'/de': 'de/_404.md'
}
};
```
> Note: The `fallbackLanguages` option does not work together with the `notFoundPage` option.
| 13.935622 | 126 | 0.624885 | yue_Hant | 0.628903 |
b92ea7006ee0ad51b3fc4744979686c381edc1a3 | 4,403 | md | Markdown | _posts/2010-09-20-2997.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | 1 | 2020-11-20T20:39:54.000Z | 2020-11-20T20:39:54.000Z | _posts/2010-09-20-2997.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | _posts/2010-09-20-2997.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | ---
title: >
Jibbot
layout: post
permalink: /view/2997
votes: 2
preview: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAADj0lEQVRIia2XS0wTQRzGvy0VZBtCo1IeLVg0pvh+VKx60Yghin0kYqIeiI+LiZz16MFED8aTEW7qTS4aoyXKRSMSMaj4IBhAgzbQLZRCWqxua1tbD4vjdHZKt+p3+uY/339+uzvbaStkMmn8lvfRHfDkOniEW2d09+Fdbr21pZV4PT0xMvopB08LDoOvB2sty5nipH8uJw9AZZWRqQSnI5polGxr1gIY+zSinmJ5AKQDF7tdVgBOr8/cc6FQWG1dDTGTE4H8PHPPhTPtAICeQlkLsLP2NgBX+6/X1tVM+ufy8AB4vY8BuFxNhfJSkPUQr/ZfJ0MmwPKWGSsAnGg7RioF7d/L/vEdu1fTwzy8XK9oQchFZoUbt2+QgdkiKkbyy+qhotPHT9H9N7tuEV9hLOMyQpHoH97g8LOSooWcPzT6tLeXSe/ds6dimZlk1jdspmc/jL4HIM5cCy8/JU1Mc3mr6teQjD4cFjJiGoAgL9xEest50awDIEtp3bsrAOjM+uy1Po/7AQCHEZ2HDg+89xmY2+WhMzoA97q6ZoNBksh0bmGMOrOI9jc1u10et8uzv6lZPatTlwyXhpy+Dqevw3BpKO/qoa8z6WJ9ulgf+jqjJcPhOX0dyWRyKjjn9HXk5XElyzFZjnGnOLxua/uL533v3rzqtrb/HW8Rcc4XWUqPmI8CWCmlOZeTT98TKUOxXhRL6Y9BFm+nw1FRtvC6m6rtkaFOmzIY6jRW2xVLZ7QgRZE/xbm/+g0txIdDix0WinSJFD180fekcbvjQd8Tbqaocd++H6nUbCQyG4nEv8Wn2gZsgRWDu87NWrvXzdk9ty8PWBxT4SjJ7Ni0iV7I//sbx1BigJCy2RrGPo4FAhIAt8tjszUAMBQbSUYYHnlLmgfevqmsXMpcfjAYd2zdRobc84W0gzrVyP7R7Xq6X5r2qR+XpVpkGLSY9uh8LJFIASgrLy1PlKozWfs3/H6Uu2jzXk8uHqOy8tLofEzt+TwAGzY3EDDttUhLMotnNJkyclwxAGivRVqSWTxDiRCJxxQDgPZapCS//8iQFtpzeH29z821NQCkyQAA4o8ePqmFp6WdPbCqTJYqk0Xt/5f+4oD8J2U9T3vjxkyRHsC2nbuFn1HiNa5FkpVmKynSnuVN+difwwVpenwsbyaL1+zW9sckh5LJjGKWLBFon5MH4MuXCQD19XWM16JDrS15M78AsHeBXFWntLUAAAAASUVORK5CYII="
---
<dl class="side-by-side">
<dt>Preview</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAADj0lEQVRIia2XS0wTQRzGvy0VZBtCo1IeLVg0pvh+VKx60Yghin0kYqIeiI+LiZz16MFED8aTEW7qTS4aoyXKRSMSMaj4IBhAgzbQLZRCWqxua1tbD4vjdHZKt+p3+uY/339+uzvbaStkMmn8lvfRHfDkOniEW2d09+Fdbr21pZV4PT0xMvopB08LDoOvB2sty5nipH8uJw9AZZWRqQSnI5polGxr1gIY+zSinmJ5AKQDF7tdVgBOr8/cc6FQWG1dDTGTE4H8PHPPhTPtAICeQlkLsLP2NgBX+6/X1tVM+ufy8AB4vY8BuFxNhfJSkPUQr/ZfJ0MmwPKWGSsAnGg7RioF7d/L/vEdu1fTwzy8XK9oQchFZoUbt2+QgdkiKkbyy+qhotPHT9H9N7tuEV9hLOMyQpHoH97g8LOSooWcPzT6tLeXSe/ds6dimZlk1jdspmc/jL4HIM5cCy8/JU1Mc3mr6teQjD4cFjJiGoAgL9xEest50awDIEtp3bsrAOjM+uy1Po/7AQCHEZ2HDg+89xmY2+WhMzoA97q6ZoNBksh0bmGMOrOI9jc1u10et8uzv6lZPatTlwyXhpy+Dqevw3BpKO/qoa8z6WJ9ulgf+jqjJcPhOX0dyWRyKjjn9HXk5XElyzFZjnGnOLxua/uL533v3rzqtrb/HW8Rcc4XWUqPmI8CWCmlOZeTT98TKUOxXhRL6Y9BFm+nw1FRtvC6m6rtkaFOmzIY6jRW2xVLZ7QgRZE/xbm/+g0txIdDix0WinSJFD180fekcbvjQd8Tbqaocd++H6nUbCQyG4nEv8Wn2gZsgRWDu87NWrvXzdk9ty8PWBxT4SjJ7Ni0iV7I//sbx1BigJCy2RrGPo4FAhIAt8tjszUAMBQbSUYYHnlLmgfevqmsXMpcfjAYd2zdRobc84W0gzrVyP7R7Xq6X5r2qR+XpVpkGLSY9uh8LJFIASgrLy1PlKozWfs3/H6Uu2jzXk8uHqOy8tLofEzt+TwAGzY3EDDttUhLMotnNJkyclwxAGivRVqSWTxDiRCJxxQDgPZapCS//8iQFtpzeH29z821NQCkyQAA4o8ePqmFp6WdPbCqTJYqk0Xt/5f+4oD8J2U9T3vjxkyRHsC2nbuFn1HiNa5FkpVmKynSnuVN+difwwVpenwsbyaL1+zW9sckh5LJjGKWLBFon5MH4MuXCQD19XWM16JDrS15M78AsHeBXFWntLUAAAAASUVORK5CYII=">
</dd>
<dt>Original</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAAAgCAYAAACinX6EAAADeUlEQVR42uWZS09TQRSA+QUmrvwHuHLlwhh/gSsNJCppXKBifIUQQxtIEEEshKcoQYK1aG1KgbY82lLA8oiwcKH1kZC4I7o0MYaNW0fPhDM5d5jHbZBypTf50jOv257v3jszN62osBybXz4yE+drzhmp+N8PSDKZSygpKwHylS+1APn7D0RAbj3L2S8BqWyEAbrfoYpLKqD98X1OWQrA5JGyewTKehI81MtgOjfF9pPkfMLIgQvo7HvInoZ7lUBb90DQiG18c3uTth3aPCHg5fiw46pBmQqAsgoqAMqh+DMBlGUBK3+XUMCTAmrrajnBnTImgAKCP4+x02dOcSCmArA/jMXzyAIy+ZgQALHnBKAEeoXd3gG0jp4DBUDCwPFfRzlY9pwAGVlAdU0VxyRAHg9JzuRDImkE6jwjQDeDowBdOwowjYckUQIF6z0hwISbVcAEJqrDc/uCxfU4ozT4rzvAfoHmuwzYvZ+PMhPy+XTn1x3LqwsMmc5MsZWNtIOSCdC/0Oy/gE+bBSEBkh7uu8MpGwHTuTir9l0SAiLxEc6hEBAeD2uRHwEUQNGN0x7J2XGGLK3NuIKOkb+MyjPVIbZ3CZMQt7gSgMn1D3YZkSV0PWoWYLJNHTeUoAA6xiZAvsKFzXXH26jcjvVbb2p5X6sIU5K6MgVPLgsIZLZY24evHIhlAfQOuHbTpwTaYnMpwXx+iS1vbLB8ocCBGOpon3Q248AqQHXLLM7HObqyClmA/+wRIQBim4CJZERcdYipAH9bCwcF3GpsZPFcTgjAdpsApYSJ1BhvgM9iwDHpxUmOLAASfz/t50C8FwEXLvusAqAP9IVzzr5e4LiaE27XX2Wdvfc4EBdb7h96oBQAib+dbGCZkSs8lgWgOCqAQgV8+/HdKgD67EnAWGRUJAixXC5WAFz1gfqTrKPuhPIOgDF87GEVABPfxe4ejmoSlAXwR2rnMYC4ZALkxrHoYFFrLCQBUAEDz0OsNdjoAOqoABxHBeCcIAuQ54BXiQT//CdzQLAjwPaCTsCLuVUHNgHAaPiJchmUVwGVAFwFuETNCqAV8O7zmit6e4KsrdUvUAkAWn7XsexKjAW2KzkQb1dVOpZMKkDeC9CNkLwPGIpGHUAd3QvYlsFdAmiCU7MR8Uljk5RYIsShAiBhFVQAjlNtiemPtf0vYcMm4A9qgePOMVXxgQAAAABJRU5ErkJggg==">
</dd>
<dt>Title</dt>
<dd>Jibbot</dd>
<dt>Description</dt>
<dd>A catmuffin robot, powered on pringles.</dd>
<dt>Added By</dt>
<dd>Rutskarn</dd>
<dt>Added On</dt>
<dd>2010-09-20</dd>
<dt>Votes</dt>
<dd>2</dd>
</dl>
| 146.766667 | 1,370 | 0.929139 | yue_Hant | 0.592166 |
b92fcd259e2f418f3f440392a55d286feb2c64f2 | 4,597 | md | Markdown | README.md | iqbalhasnan/uadetector | 215fb652723c52866572cff885f52a3fe67b9db5 | [
"Apache-2.0"
] | 138 | 2015-01-06T13:55:55.000Z | 2019-07-16T09:44:54.000Z | README.md | iqbalhasnan/uadetector | 215fb652723c52866572cff885f52a3fe67b9db5 | [
"Apache-2.0"
] | 43 | 2015-01-02T05:37:11.000Z | 2019-01-22T12:20:32.000Z | README.md | iqbalhasnan/uadetector | 215fb652723c52866572cff885f52a3fe67b9db5 | [
"Apache-2.0"
] | 94 | 2015-01-16T18:36:42.000Z | 2019-06-28T07:24:07.000Z | UADetector
==========
What is UADetector?
-----
UADetector is a library to identify over 190 different desktop and mobile
browsers and 130 other User-Agents like feed readers, email clients and
multimedia players. In addition, even more than 400 robots like BingBot,
Googlebot or Yahoo Bot can be identified.
This library is a free, portable Java library to analyze User-Agent strings.
The goal of this library is to detect the type and the associated operating
system of a client like `Mobile Firefox 9.0` on `Android` or `Mobile
Safari 5.1` on `iOS`.
UADetector is divided into two modules. The core module includes
the API and implementation to read the detection information and the functions
to identify User-Agents. The resources module contains the database with the
necessary identification information and a service factory class to easily get
preconfigured UserAgentStringParser singletons. This library will be published
monthly, is integration-tested against the core module and is guaranteed to run
against the defined core.
Device categorization
-----
Since version `0.9.10` we support device categorization, which means that, for
instance, an *iPhone* or *Nexus 4* will be classified as "Smartphone" and an
*iPad*, *Kindle* or *Surface RT* as "Tablet". Please take a look at our
[API documentation](http://uadetector.sourceforge.net/modules/uadetector-core/apidocs/net/sf/uadetector/ReadableUserAgent.html)
to get an idea of what you can get when parsing a user agent string.
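In code, a typical parse with the preconfigured parser from the resources module
looks like the sketch below (a usage sketch; the output comments are
illustrative, and `getDeviceCategory()` is available since version `0.9.10`):
```java
import net.sf.uadetector.ReadableUserAgent;
import net.sf.uadetector.UserAgentStringParser;
import net.sf.uadetector.service.UADetectorServiceFactory;
public class Example {
    public static void main(String[] args) {
        // Preconfigured parser backed by the resources module
        UserAgentStringParser parser = UADetectorServiceFactory.getResourceModuleParser();
        ReadableUserAgent agent = parser.parse("Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26");
        System.out.println(agent.getName());                      // e.g. "Mobile Safari"
        System.out.println(agent.getOperatingSystem().getName()); // e.g. "iOS 6"
        System.out.println(agent.getDeviceCategory().getName());  // e.g. "Smartphone"
    }
}
```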
Features
--------
### Detects over 190 different browsers
This library detects over 190 different desktop and mobile browsers and 130
other User-Agents like feed readers, multimedia players and email clients.
### Identifies over 400 robots
On the Internet, many robots are constantly examining sites. A large number
of robots can be detected with this library.
### Monthly updated
Each month a new version of the resources module will be published so you can
always detect the latest User-Agents.
### Extremely tested
All classes in this library have been especially tested. The unit tests have
over 90% branch coverage and 98% line coverage. In addition, many integration
tests are performed regularly.
How can you help?
-----------------
UADetector is an open source tool and welcomes contributions.
* Report bugs, feature requests and other issues in the
[issue tracking](https://github.com/before/uadetector/issues) application, but look
on our [known issues](http://uadetector.sourceforge.net/known-issues.html)
page first before posting!
* Help with the documentation by pointing out areas that are lacking or
unclear, and if you are so inclined, submitting patches to correct it. You
can quickly contribute rough thoughts by forking this project on
[GitHub](https://github.com/before/uadetector) and
[SourceForge](http://sourceforge.net/p/uadetector/code/?branch=ref%2Fmaster),
or you can volunteer to help collate and organize information that is already
there.
Your participation in this project is much appreciated!
License
-------
Please visit the UADetector web site for more information:
* [http://uadetector.sourceforge.net/](http://uadetector.sourceforge.net/)
Copyright 2012 André Rouél
André Rouél licenses this product to you under the Apache License, version 2.0
(the "License"); you may not use this product except in compliance with the
License. You may obtain a copy of the License at:
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
Also, please refer to each LICENSE.*component*.txt file, which is located in
the same directory as this file, for the license terms of the components that
this product depends on.
-------------------------------------------------------------------------------
This product contains a modified version of Dave Koelle's Alphanum Algorithm,
which can be obtained at:
* HOMEPAGE:
* [http://www.davekoelle.com/alphanum.html](http://www.davekoelle.com/alphanum.html)
* LICENSE:
* LICENSE.alphanum.txt (GNU LGPL 2.1 or above)
This product uses a version of Jaroslav Mallat's UAS Data, which can be
obtained at:
* HOMEPAGE:
* [http://user-agent-string.info/](http://user-agent-string.info/)
* LICENSE:
* LICENSE.uas.txt (CC BY 3.0)
| 37.680328 | 127 | 0.759626 | eng_Latn | 0.993368 |
b93114ff9c2a222d9c443da98b9611c482ea7c7d | 2,543 | md | Markdown | docs/plugins-v1/xml.md | Queenscripts/concord-website | bba48ad3a4b3650339557b182db83bf3b0aa49ba | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | docs/plugins-v1/xml.md | Queenscripts/concord-website | bba48ad3a4b3650339557b182db83bf3b0aa49ba | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | docs/plugins-v1/xml.md | Queenscripts/concord-website | bba48ad3a4b3650339557b182db83bf3b0aa49ba | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | ---
layout: wmt/docs
title: XML Task
side-navigation: wmt/docs-navigation.html
deprecated: true
description: Plugin for parsing XML data
---
# {{ page.title }}
The `xmlUtils` task provides methods to work with XML files.
- [Usage](#usage)
- [Provided Methods](#provided-methods)
- [Read a String Value](#read-a-string-value)
- [Read a List of String Values](#read-a-list-of-string-values)
- [Read a Maven GAV](#read-a-maven-gav)
## Usage
To be able to use the `xmlUtils` task in a Concord flow, it must be added as a
[dependency](../processes-v1/configuration.html#dependencies):
```yaml
configuration:
dependencies:
- "mvn://com.walmartlabs.concord.plugins:xml-tasks:{{ site.concord_plugins_version }}"
```
This adds the task to the classpath and allows you to invoke the task using
expressions.
## Provided Methods
### Read a String Value
Assuming an XML file `data.xml` with the following content:
```xml
<books>
<book id="0">
<title>Don Quixote</title>
<author>Miguel de Cervantes</author>
</book>
</books>
```
To get the string value of the book's `<title>` tag:
```yaml
- log: ${xmlUtils.xpathString('data.xml', '/books/book[@id="0"]/title/text()')}
```
Prints out `Don Quixote`.
The expression must be a valid [XPath](https://en.wikipedia.org/wiki/XPath) and
return a DOM text node.
### Read a List of String Values
Assuming an XML file `data.xml` with the following content:
```xml
<books>
<book id="0">
<title>Don Quixote</title>
<author>Miguel de Cervantes</author>
</book>
<book id="1">
<title>To Kill a Mockingbird</title>
<author>Harper Lee</author>
</book>
</books>
```
To get a list of values for all book `<title>` tags:
```yaml
- log: ${xmlUtils.xpathListOfStrings('data.xml', '/books/book/title/text()')}
```
The expression must be a valid [XPath](https://en.wikipedia.org/wiki/XPath) and
return a set of DOM text nodes.
### Read a Maven GAV
To read Maven GAV (groupId/artifactId/version attributes) from a `pom.xml`
file:
```xml
<!-- simplified example POM -->
<project>
<parent>
<groupId>com.walmartlabs.concord.plugins</groupId>
<artifactId>concord-plugins-parent</artifactId>
<version>1.27.1-SNAPSHOT</version>
</parent>
<artifactId>xml-tasks</artifactId>
</project>
```
```yaml
# concord.yml
- expr: "${xmlUtils.mavenGav('pom.xml')}"
out: gav
- log: "groupId: ${gav.groupId}" # "com.walmartlabs.concord.plugins"
- log: "artifactId: ${gav.artifactId}" # "xml-tasks"
- log: "version: ${gav.version}" # "1.27.1-SNAPSHOT"
```
| 22.705357 | 90 | 0.680692 | eng_Latn | 0.626327 |
b9316c133f51b86d087cd4b10e416554867a9268 | 74,352 | md | Markdown | README.md | gsl-lite/gsl-lite | 19e2ccfac68591d82171c8ca8cdf10acfec6397a | [
"MIT"
] | 251 | 2020-01-24T20:12:45.000Z | 2022-03-27T18:41:19.000Z | README.md | gsl-lite/gsl-lite | 19e2ccfac68591d82171c8ca8cdf10acfec6397a | [
"MIT"
] | 78 | 2020-01-27T17:39:40.000Z | 2022-03-16T12:08:29.000Z | README.md | gsl-lite/gsl-lite | 19e2ccfac68591d82171c8ca8cdf10acfec6397a | [
"MIT"
] | 30 | 2020-02-01T16:47:32.000Z | 2022-03-19T03:42:05.000Z | # *gsl-lite*: Guidelines Support Library for C++98, C++11 up
| metadata | build | packages | try online |
| -------- | ------ | -------- | ---------- |
| [](https://en.wikipedia.org/wiki/C%2B%2B#Standardization) <br> [](https://opensource.org/licenses/MIT) <br> [](https://github.com/gsl-lite/gsl-lite/releases) | [](https://dev.azure.com/gsl-lite/gsl-lite/_build/latest?definitionId=1&branchName=master) <br> [](https://travis-ci.com/gsl-lite/gsl-lite) <br> [](https://ci.appveyor.com/project/gsl-lite/gsl-lite) | [](https://vcpkg.info/port/gsl-lite) <br> [](https://raw.githubusercontent.com/gsl-lite/gsl-lite/master/include/gsl/gsl-lite.hpp) | [](https://gcc.godbolt.org/z/JVtM2c) <br> [](https://wandbox.org/permlink/PloGDgU3dtDO2qVV) |
*gsl-lite* is an implementation of the [C++ Core Guidelines Support Library](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-gsl) originally based on [Microsoft GSL](https://github.com/microsoft/gsl).
**Contents**
- [Example usage](#example-usage)
- [In a nutshell](#in-a-nutshell)
- [License](#license)
- [Dependencies](#dependencies)
- [Installation and use](#installation-and-use)
- [Version semantics](#version-semantics)
- [Using *gsl-lite* in libraries](#using-gsl-lite-in-libraries)
- [Configuration options](#configuration-options)
- [Features](#features)
- [Deprecation](#deprecation)
- [Reported to work with](#reported-to-work-with)
- [Building the tests](#building-the-tests)
- [Other GSL implementations](#other-gsl-implementations)
- [Notes and references](#notes-and-references)
- [Appendix](#appendix)
Example usage
-------------
```Cpp
#include <gsl/gsl-lite.hpp>
int * use( gsl::not_null<int *> p )
{
// use p knowing it's not nullptr, NULL or 0.
return p;
}
struct Widget
{
Widget() : owned_ptr_( new int(42) ) {}
~Widget() { delete owned_ptr_; }
void work() { non_owned_ptr_ = use( owned_ptr_ ); }
gsl::owner<int *> owned_ptr_; // if alias template support
int * non_owned_ptr_;
};
int main()
{
Widget w;
w.work();
}
```
In a nutshell
-------------
**gsl-lite** is a single-file header-only implementation of the [C++ Core Guidelines Support Library](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-gsl) originally based on [Microsoft GSL](https://github.com/microsoft/gsl) and adapted for C++98, C++03. It also works when compiled as C++11, C++14, C++17, C++20.
The Guidelines Support Library (GSL) contains functions and types that are suggested for use by the [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines) maintained by the [Standard C++ Foundation](https://isocpp.org/). The library includes types like `owner<>`, `not_null<>`, `span<>`, `string_span` and [others](#features).
*gsl-lite* recognizes when it is compiled for the CUDA platform and decorates some functions with `__host__` and `__device__`. See also section [API macro](#api-macro).
License
-------
*gsl-lite* uses the [MIT](LICENSE) license.
Dependencies
------------
*gsl-lite* has no other dependencies than the [C++ standard library](http://en.cppreference.com/w/cpp/header).
Installation and use
--------------------
### As CMake package
The recommended way to consume *gsl-lite* in your CMake project is to use `find_package()` and `target_link_libraries()`:
```CMake
cmake_minimum_required( VERSION 3.15 FATAL_ERROR )
find_package( gsl-lite 0.39 REQUIRED )
project( my-program LANGUAGES CXX )
add_executable( my-program main.cpp )
target_link_libraries( my-program PRIVATE gsl::gsl-lite-v1 )
```
There are different ways to make the `gsl-lite` package available to your project:
<details>
<summary>Using Vcpkg</summary>
<p>
1. For the [Vcpkg package manager](https://github.com/microsoft/vcpkg/), simply run Vcpkg's install command:
vcpkg install gsl-lite
2. Now, configure your project passing the Vcpkg toolchain file as a parameter:
cd <my-program-source-dir>
mkdir build
cd build
cmake -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake ..
cmake --build ../build
</p></details>
<details>
<summary>Using an exported build directory</summary>
<p>
1. Clone the *gsl-lite* repository and configure a build directory with CMake:
git clone [email protected]:gsl-lite/gsl-lite.git <gsl-lite-source-dir>
cd <gsl-lite-source-dir>
mkdir build
cd build
cmake ..
2. Now, configure your project passing the CMake build directory as a parameter:
cd <my-program-source-dir>
mkdir build
cd build
cmake -Dgsl-lite_DIR:PATH=<gsl-lite-source-dir>/build ..
cmake --build ../build
See [example/cmake-pkg/Readme.md](example/cmake-pkg/Readme.md) for a complete example.
</p></details>
### Other options
*gsl-lite* is a header-only library; if you do not want to use the CMake package, or if you use a different build system, all
you need to do is to add the "include" subdirectory of the *gsl-lite* source directory to your include path:
git clone [email protected]:gsl-lite/gsl-lite.git <gsl-lite-source-dir>
g++ -std=c++03 -I<gsl-lite-source-dir>/include main.cpp
*gsl-lite* is also a single-header library; if you want to avoid external dependencies, it suffices to copy the header file
["include/gsl/gsl-lite.hpp"](https://raw.githubusercontent.com/gsl-lite/gsl-lite/master/include/gsl/gsl-lite.hpp) to a
subdirectory of your project:
git clone [email protected]:gsl-lite/gsl-lite.git <gsl-lite-source-dir>
mkdir -p external/include/gsl
cp <gsl-lite-source-dir>/include/gsl/gsl-lite.hpp external/include/gsl/
g++ -std=c++03 -Iexternal/include main.cpp
Version semantics
-----------------
*gsl-lite* strives to follow [Semantic Versioning](https://semver.org/) guidelines. Although we are still in the "initial development" stage (version 0\.*), we generally maintain
[API](https://en.wikipedia.org/wiki/Application_programming_interface) and [ABI](https://en.wikipedia.org/wiki/Application_binary_interface) compatibility and avoid breaking changes in minor and patch releases.
Development of *gsl-lite* happens in the `master` branch. Versioning semantics apply only to tagged releases: there is no stability guarantee between individual commits in the `master` branch, i.e. anything
added since the last tagged release may be renamed, removed, have the semantics changed, etc. without further notice.
A minor-version release will be compatible (in both ABI and API) with the previous minor-version release (with [rare exceptions](https://github.com/gsl-lite/gsl-lite/issues/156) while we're still in version 0.\*).
Thus, once a change is released, it becomes part of the API.
Some of the [configuration options](#configuration-options) affect the API and ABI of *gsl-lite*. Most configuration options exist because a change we wanted to make would have broken backward compatibility,
so many recent changes and improvements are currently opt-in. The current plan is to toggle the default values of these configuration options for the next major version release.
To simplify migration to the next major version, *gsl-lite* 0.36 introduces the notion of *versioned defaults*. By setting the configuration option `gsl_CONFIG_DEFAULTS_VERSION=0` or `gsl_CONFIG_DEFAULTS_VERSION=1`,
a set of version-specific default options can be selected. Alternatively, when consuming *gsl-lite* [as a CMake package](#as-cmake-package), versioned defaults can be selected by linking to the target
`gsl::gsl-lite-v0` or `gsl::gsl-lite-v1` rather than `gsl::gsl-lite`.
The following table gives an overview of the configuration options affected by versioned defaults:
Macro | v0 default | v1 default | |
------------------------------------------------------------------------------------:|:---------------------------------------------------------|-------------------|-|
[`gsl_FEATURE_OWNER_MACRO`](#gsl_feature_owner_macro1) | 1 | 0 | an unprefixed macro `Owner()` may interfere with user code |
[`gsl_FEATURE_GSL_LITE_NAMESPACE`](#gsl_feature_gsl_lite_namespace0) | 0 | 1 | cf. [Using *gsl-lite* in libraries](#using-gsl-lite-in-libraries) |
[`gsl_CONFIG_DEPRECATE_TO_LEVEL`](#gsl_config_deprecate_to_level0) | 0 | 6 | |
[`gsl_CONFIG_INDEX_TYPE`](#gsl_config_index_typegsl_config_span_index_type) | `gsl_CONFIG_SPAN_INDEX_TYPE` (defaults to `std::size_t`) | `std::ptrdiff_t` | the GSL specifies `gsl::index` to be a signed type, and M-GSL also uses `std::ptrdiff_t` |
[`gsl_CONFIG_ALLOWS_SPAN_COMPARISON`](#gsl_config_allows_span_comparison1) | 1 | 0 | C++20 `std::span<>` does not support comparison because semantics (deep vs. shallow) are unclear |
[`gsl_CONFIG_NOT_NULL_EXPLICIT_CTOR`](#gsl_config_not_null_explicit_ctor0) | 0 | 1 | cf. reasoning in [M-GSL/#395](https://github.com/Microsoft/GSL/issues/395) (note that `not_null<>` in M-GSL has an implicit constructor, cf. [M-GSL/#699](https://github.com/Microsoft/GSL/issues/699)) |
[`gsl_CONFIG_TRANSPARENT_NOT_NULL`](#gsl_config_transparent_not_null0) | 0 | 1 | enables conformant behavior for `not_null<>::get()` |
[`gsl_CONFIG_NARROW_THROWS_ON_TRUNCATION`](#gsl_config_narrow_throws_on_truncation0) | 0 | 1 | enables conformant behavior for `narrow<>()` (cf. [#52](https://github.com/gsl-lite/gsl-lite/issues/52)) |
Note that the v1 defaults are not yet stable; future 0.\* releases may introduce more configuration switches with different version-specific defaults.
Using *gsl-lite* in libraries
-----------------------------
Many features of *gsl-lite* are very useful for defining library interfaces, e.g. spans, precondition checks, or `gsl::not_null<>`. As such, we encourage using *gsl-lite* in your libraries.
However, please mind the following considerations:
- *gsl-lite* is an implementation of the [Guidelines Support Library](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-gsl), which is not a library but a non-formal specification.
There are other libraries implementing the GSL, most notably the [Microsoft GSL](https://github.com/microsoft/GSL/) (herein often referred to as "M-GSL"). Both libraries live in different headers and consist
of unrelated implementations. There is considerable API compatibility between M-GSL and *gsl-lite*, but some differences are inevitable because the GSL specification is rather loose and informal, and because
both implementations take some liberties at interpreting and extending the specification (cf. e.g. [#6](https://github.com/gsl-lite/gsl-lite/issues/6), [#52](https://github.com/gsl-lite/gsl-lite/issues/52),
[#153](https://github.com/gsl-lite/gsl-lite/issues/153)). Also, the ABIs of *gsl-lite* and M-GSL are generally incompatible.
- It is not clear whether the GSL specification envisions that multiple implementations of the specification should coexist (cf. [CppCoreGuidelines/#1519](https://github.com/isocpp/CppCoreGuidelines/issues/1519)),
but because all existing implementations currently live in the same `namespace gsl`, using more than one GSL implementation in the same target will usually fail with compile/link errors. This is clearly
an impediment for using either in a library because the library would thereby force its consumers to pick the same GSL implementation.
- The API and ABI of *gsl-lite* can be altered by some of the [configuration options](#configuration-options). We consider the availability of these options a strength of *gsl-lite*, but the lack
of an option-invariant API and ABI is another burden for libraries, which may or may not depend on a particular choice of configuration settings and implicitly force these upon their users.
Our goal is to make *gsl-lite* suitable for use in libraries; we want to address all of these concerns in the next major version. But if you want to use *gsl-lite* in a library today, we recommend to
- use version-1 defaults (cf. [Version semantics](#version-semantics))
- include the new header \<gsl-lite/gsl-lite.hpp\> rather than \<gsl/gsl-lite.hpp\>
- refer to the new `namespace gsl_lite` instead of `namespace gsl` (or define a `namespace gsl = ::gsl_lite;` alias in your own namespace)
- use the prefixed contract checking macros `gsl_Expects()`/`gsl_Ensures()` rather than the unprefixed `Expects()`/`Ensures()`
(M-GSL prefixes its macros with uppercase `GSL_`; we traditionally consider lowercase `gsl_` the realm of *gsl-lite*)
- avoid any changes to the configuration options
Example:
```cmake
# my-statistics-lib/CMakeLists.txt
find_package( gsl-lite 0.39 REQUIRED )
add_library( my-statistics-lib STATIC mean.cpp )
target_link_libraries( my-statistics-lib PUBLIC gsl::gsl-lite-v1 )
```
```c++
// my-statistics-lib/include/my-statistics-lib/mean.hpp

#include <gsl-lite/gsl-lite.hpp>  // instead of <gsl/gsl-lite.hpp>

namespace my_statistics_lib {

namespace gsl = ::gsl_lite;  // convenience alias

double mean( gsl::span<double const> elements )
{
    gsl_Expects( !elements.empty() );  // instead of Expects()
    ...
}

} // namespace my_statistics_lib
```
The idea is that *gsl-lite* will move all its definitions to `namespace gsl_lite` in the next major version, and provide a `namespace gsl` with aliases only if the traditional header \<gsl/gsl-lite.hpp\> is
included. This way, any code that only uses the new header \<gsl-lite/gsl-lite.hpp\> will not risk collision with M-GSL.
Configuration options
---------------------
**Contents**
- [API macro](#api-macro)
- [Standard selection macro](#standard-selection-macro)
- [Feature selection macros](#feature-selection-macros)
- [Contract checking configuration macros](#contract-checking-configuration-macros)
- [Microsoft GSL compatibility macros](#microsoft-gsl-compatibility-macros)
- [Other configuration macros](#other-configuration-macros)
### API macro
#### `gsl_api`
Functions in *gsl-lite* are decorated with `gsl_api` where appropriate. **By default `gsl_api` is defined empty for non-CUDA platforms and `__host__ __device__` for the CUDA platform.** Define this macro to specify your own function decoration.
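For example, a project might supply its own decoration before including the header (an illustrative sketch; `MY_HOST_DEVICE` is a hypothetical project-specific macro, not part of *gsl-lite*):

```c++
// Illustrative only: MY_HOST_DEVICE is a hypothetical project macro,
// e.g. expanding to __host__ __device__ on a CUDA-like toolchain.
#define MY_HOST_DEVICE
#define gsl_api MY_HOST_DEVICE
#include <gsl/gsl-lite.hpp>
```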
### Standard selection macro
#### `gsl_CPLUSPLUS`
Define this macro to override the auto-detection of the supported C++ standard if your compiler does not set the `__cplusplus` macro correctly.
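For instance, to force C++17 feature detection on a compiler whose `__cplusplus` value is unreliable (an illustrative sketch; the value follows the usual `__cplusplus` convention):

```c++
// Illustrative: claim C++17 support explicitly
// (define before the header is first included).
#define gsl_CPLUSPLUS 201703L
#include <gsl/gsl-lite.hpp>
```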
### Feature selection macros
#### `gsl_FEATURE_WITH_CONTAINER_TO_STD=99`
Define this to the highest C++ standard (98, 3, 11, 14, 17, 20) you want to include tagged-construction via `with_container`. **Default is 99 for inclusion with any standard.**
#### `gsl_FEATURE_MAKE_SPAN_TO_STD=99`
Define this to the highest C++ standard (98, 3, 11, 14, 17, 20) you want to include `make_span()` creator functions. **Default is 99 for inclusion with any standard.**
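When enabled for the active standard, the creator functions deduce the span type from their arguments; a minimal sketch (the wrapper function is illustrative):

```c++
#include <gsl/gsl-lite.hpp>
#include <vector>

void example()
{
    std::vector<int> v = { 1, 2, 3 };
    auto s  = gsl::make_span( v );            // deduced as gsl::span<int>
    auto s2 = gsl::make_span( v.data(), 2 );  // span over the first two elements
}
```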
#### `gsl_FEATURE_BYTE_SPAN_TO_STD=99`
Define this to the highest C++ standard (98, 3, 11, 14, 17, 20) you want to include `byte_span()` creator functions. **Default is 99 for inclusion with any standard.**
#### `gsl_FEATURE_IMPLICIT_MACRO=0`
Define this macro to 1 to provide the `implicit` macro. **Default is 0.**
#### `gsl_FEATURE_OWNER_MACRO=1`
By default, the macro `Owner()` is defined for all C++ versions. This may be useful to transition from a compiler that doesn't provide alias templates to one that does. Define this macro to 0 to omit the `Owner()` macro. **Default is 1.**
#### `gsl_FEATURE_EXPERIMENTAL_RETURN_GUARD=0`
Provide experimental types `final_action_return` and `final_action_error` and convenience functions `on_return()` and `on_error()`. **Default is 0.**
#### `gsl_FEATURE_GSL_LITE_NAMESPACE=0`
Define this to additionally define a `namespace gsl_lite` with most of the *gsl-lite* API available, cf. [Using *gsl-lite* in libraries](#using-gsl-lite-in-libraries). **Default is 0.**
### Contract checking configuration macros
*gsl-lite* provides contract violation response control as originally suggested in proposal [N4415](http://wg21.link/n4415), with some refinements inspired by [P1710](http://wg21.link/P1710)/[P1730](http://wg21.link/P1730).
There are several macros for expressing preconditions, postconditions, and invariants:
- `gsl_Expects( cond )` for simple preconditions
- `gsl_Ensures( cond )` for simple postconditions
- `gsl_Assert( cond )` for simple assertions
- `gsl_FailFast()` to indicate unreachable code
- `gsl_ExpectsAudit( cond )` for preconditions that are expensive or include potentially opaque function calls
- `gsl_EnsuresAudit( cond )` for postconditions that are expensive or include potentially opaque function calls
- `gsl_AssertAudit( cond )` for assertions that are expensive or include potentially opaque function calls
The macros `Expects()` and `Ensures()` are also provided as aliases for `gsl_Expects()` and `gsl_Ensures()`.
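A minimal sketch of how the macros are typically combined (the enum and function are illustrative, not part of *gsl-lite*):

```c++
#include <gsl/gsl-lite.hpp>

enum class Color { red, green, blue };

char const * color_name( Color color )  // hypothetical example function
{
    switch ( color )
    {
    case Color::red:   return "red";
    case Color::green: return "green";
    case Color::blue:  return "blue";
    }
    gsl_FailFast();  // not reachable for valid enum values; also suppresses
                     // compiler warnings about a missing return value
}
```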
The following macros control whether contracts are checked at runtime:
- **`gsl_CONFIG_CONTRACT_CHECKING_AUDIT`**
Define this macro to have all contracts checked at runtime.
- **`gsl_CONFIG_CONTRACT_CHECKING_ON` (default)**
Define this macro to have contracts expressed with `gsl_Expects()`, `gsl_Ensures()`, `gsl_Assert()`, and `gsl_FailFast()` checked at runtime, and contracts expressed with `gsl_ExpectsAudit()`, `gsl_EnsuresAudit()`, and `gsl_AssertAudit()` not checked and not evaluated at runtime. **This is the default.**
- **`gsl_CONFIG_CONTRACT_CHECKING_OFF`**
Define this macro to disable all runtime checking of contracts and invariants. (Note that `gsl_FailFast()` checks will trigger runtime failure even if runtime checking is disabled.)
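The checking mode is typically selected via a compile definition; for example (illustrative command line, reusing the include layout from the installation example above):

    g++ -std=c++17 -Dgsl_CONFIG_CONTRACT_CHECKING_AUDIT -Iexternal/include main.cpp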
The following macros can be used to selectively disable checking for a particular kind of contract:
- **`gsl_CONFIG_CONTRACT_CHECKING_EXPECTS_OFF`**
Define this macro to disable runtime checking of precondition contracts expressed with `gsl_Expects()` and `gsl_ExpectsAudit()`.
- **`gsl_CONFIG_CONTRACT_CHECKING_ENSURES_OFF`**
Define this macro to disable runtime checking of postcondition contracts expressed with `gsl_Ensures()` and `gsl_EnsuresAudit()`.
- **`gsl_CONFIG_CONTRACT_CHECKING_ASSERT_OFF`**
Define this macro to disable runtime checking of assertions expressed with `gsl_Assert()` and `gsl_AssertAudit()`.
The following macros control the handling of runtime contract violations:
- **`gsl_CONFIG_CONTRACT_VIOLATION_TERMINATES` (default)**
Define this macro to call `std::terminate()` on a GSL contract violation in `gsl_Expects()`, `gsl_ExpectsAudit()`, `gsl_Ensures()`, `gsl_EnsuresAudit()`, `gsl_Assert()`, `gsl_AssertAudit()`, and `gsl_FailFast()`. **This is the default.**
- **`gsl_CONFIG_CONTRACT_VIOLATION_ASSERTS`**
If this macro is defined, the `assert()` macro is used to check GSL contracts expressed with `gsl_Expects()`, `gsl_ExpectsAudit()`, `gsl_Ensures()`, `gsl_EnsuresAudit()`, `gsl_Assert()`, `gsl_AssertAudit()`, and `gsl_FailFast()`. (Note that `gsl_FailFast()` will call `std::terminate()` if `NDEBUG` is defined.)
- **`gsl_CONFIG_CONTRACT_VIOLATION_TRAPS`**
Define this macro to execute a trap instruction on a GSL contract violation in `gsl_Expects()`, `gsl_ExpectsAudit()`, `gsl_Ensures()`, `gsl_EnsuresAudit()`, `gsl_Assert()`, `gsl_AssertAudit()`, and `gsl_FailFast()`.
- **`gsl_CONFIG_CONTRACT_VIOLATION_THROWS`**
Define this macro to throw a `std::runtime_error`-derived exception `gsl::fail_fast` on a GSL contract violation in `gsl_Expects()`, `gsl_ExpectsAudit()`, `gsl_Ensures()`, `gsl_EnsuresAudit()`, `gsl_Assert()`, `gsl_AssertAudit()`, and `gsl_FailFast()`.
- **`gsl_CONFIG_CONTRACT_VIOLATION_CALLS_HANDLER`**
Define this macro to call a user-defined handler function `gsl::fail_fast_assert_handler()` on a GSL contract violation in `gsl_Expects()`, `gsl_ExpectsAudit()`, `gsl_Ensures()`, `gsl_EnsuresAudit()`, `gsl_Assert()`, `gsl_AssertAudit()`, and `gsl_FailFast()`. The user must provide a definition of the following function:
```c++
namespace gsl {
    gsl_api void fail_fast_assert_handler(
        char const * const expression, char const * const message,
        char const * const file, int line );
}
```
Note that `gsl_FailFast()` will call `std::terminate()` if `fail_fast_assert_handler()` returns.
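One possible handler definition (an illustrative sketch that logs the violation and terminates; the logging format is an assumption, not prescribed by *gsl-lite*):

```c++
#include <gsl/gsl-lite.hpp>
#include <cstdio>
#include <exception>

namespace gsl {
    gsl_api void fail_fast_assert_handler(
        char const * const expression, char const * const message,
        char const * const file, int line )
    {
        // Log the failed check, then terminate rather than return.
        std::fprintf( stderr, "%s(%d): %s: '%s'\n", file, line, message, expression );
        std::terminate();
    }
}
```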
The following macros control what happens with contract checks not enforced at runtime:
- **`gsl_CONFIG_UNENFORCED_CONTRACTS_ELIDE` (default)**
Define this macro to disable all runtime checking and evaluation of unenforced contracts and invariants. (Note that `gsl_FailFast()` calls are never elided.) **This is the default.**
- **`gsl_CONFIG_UNENFORCED_CONTRACTS_ASSUME`**
Define this macro to let the compiler assume that contracts expressed with `gsl_Expects()`, `gsl_Ensures()`, and `gsl_Assert()` always hold true, and to have contracts expressed with `gsl_ExpectsAudit()`, `gsl_EnsuresAudit()`, and `gsl_AssertAudit()` not checked and not evaluated at runtime. With this setting, contract violations lead to undefined behavior, which gives the compiler more opportunities for optimization but can be dangerous if the code is not prepared for it.
Note that the distinction between regular and audit-level contracts is subtly different from the C++2a Contracts proposals. Defining `gsl_CONFIG_UNENFORCED_CONTRACTS_ASSUME` instructs the compiler that the
conditions expressed by GSL contracts can be assumed to hold true. This is meant to be an aid for the optimizer; runtime evaluation of the condition is not desired. However, because the GSL implements contract checks
with macros rather than as a language feature, it cannot reliably suppress runtime evaluation of a condition for all compilers. If the contract comprises a function call which is opaque to the compiler, many compilers
will generate the runtime function call.
Therefore, `gsl_Expects()`, `gsl_Ensures()`, and `gsl_Assert()` should be used only for conditions that can be proven side-effect-free by the compiler, and `gsl_ExpectsAudit()`, `gsl_EnsuresAudit()`, and `gsl_AssertAudit()` for everything else. In practice, this implies that `gsl_Expects()`, `gsl_Ensures()`, and `gsl_Assert()` should only be used for simple comparisons of scalar values, for simple inlineable getters, and for comparisons of class objects with trivially inlineable comparison operators.
Example:
```c++
template< class RandomIt >
auto median( RandomIt first, RandomIt last )
{
    // Comparing iterators for equality boils down to a comparison of pointers. An optimizing
    // compiler will inline the comparison operator and understand that the comparison is free
    // of side-effects, and hence generate no code in gsl_CONFIG_UNENFORCED_CONTRACTS_ASSUME mode.
    gsl_Expects( first != last );

    // Verifying that a range of elements is sorted may be an expensive operation, and we
    // cannot trust the compiler to understand that it is free of side-effects, so we use an
    // audit-level contract check.
    gsl_ExpectsAudit( std::is_sorted( first, last ) );

    auto count = last - first;
    return count % 2 != 0
        ? first[ count / 2 ]
        : std::midpoint( first[ count / 2 - 1 ], first[ count / 2 ] );
}
```
### Microsoft GSL compatibility macros
#### `GSL_UNENFORCED_ON_CONTRACT_VIOLATION`
Equivalent to defining `gsl_CONFIG_CONTRACT_CHECKING_OFF`.
#### `GSL_TERMINATE_ON_CONTRACT_VIOLATION`
Equivalent to defining `gsl_CONFIG_CONTRACT_VIOLATION_TERMINATES`.
#### `GSL_THROW_ON_CONTRACT_VIOLATION`
Equivalent to defining `gsl_CONFIG_CONTRACT_VIOLATION_THROWS`.
### Other configuration macros
#### `gsl_CONFIG_DEPRECATE_TO_LEVEL=0`
Define this macro to the level up to and including which you want features to be marked as deprecated; see the table in [Deprecation](#deprecation) below. **Default is 0 for no deprecation.**
#### `gsl_CONFIG_SPAN_INDEX_TYPE=std::size_t`
Define this macro to the type to use for indices in `span<>` and `basic_string_span<>`. Microsoft GSL uses `std::ptrdiff_t`. **Default for *gsl-lite* is `std::size_t`.**
#### `gsl_CONFIG_INDEX_TYPE=gsl_CONFIG_SPAN_INDEX_TYPE`
Define this macro to the type to use for `gsl::index`. Microsoft's GSL uses `std::ptrdiff_t`. **Default for *gsl-lite* is `std::size_t`.**
#### `gsl_CONFIG_NOT_NULL_EXPLICIT_CTOR=0`
Define this macro to 1 to make `not_null<>`'s constructor explicit. **Default is 0.** Note that in Microsoft's GSL the constructor is explicit. For implicit construction you can also use the *gsl-lite*-specific `not_null<>`-derived class `not_null_ic<>`.
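The difference shows up in copy-initialization; an illustrative sketch (the wrapper function is hypothetical):

```c++
#include <gsl/gsl-lite.hpp>

void example()
{
    int i = 42;
    gsl::not_null<int*> p( &i );    // ok: direct initialization always works
    //gsl::not_null<int*> q = &i;   // ill-formed if gsl_CONFIG_NOT_NULL_EXPLICIT_CTOR=1
    gsl::not_null_ic<int*> r = &i;  // ok: not_null_ic<> permits implicit construction
}
```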
#### `gsl_CONFIG_TRANSPARENT_NOT_NULL=0`
Define this macro to 1 to have `not_null<>` support typical member functions of the underlying smart pointer transparently (currently `get()`), while adding precondition checks. This is conformant behavior but may be incompatible with older code which expects that `not_null<>::get()` returns the underlying pointer itself. **Default is 0.**
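For example, with a smart pointer as the underlying type (an illustrative sketch, assuming `gsl_CONFIG_TRANSPARENT_NOT_NULL=1`):

```c++
#include <gsl/gsl-lite.hpp>
#include <memory>

void example()
{
    gsl::not_null<std::unique_ptr<int>> p = gsl::make_unique<int>( 1 );
    // Transparent mode: get() forwards to the smart pointer, so it returns
    // int* (as std::unique_ptr<>::get() would) rather than the unique_ptr itself.
    int * raw = p.get();
    *raw = 2;
}
```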
#### `gsl_CONFIG_NOT_NULL_GET_BY_CONST_REF=0`
Define this macro to 1 to have the legacy non-transparent version of `not_null<>::get()` return `T const &` instead of `T`. This may improve performance with types that have an expensive copy-constructor. This macro must not be defined if `gsl_CONFIG_TRANSPARENT_NOT_NULL` is 1. **Default is 0 for `T`.**
#### `gsl_CONFIG_ALLOWS_SPAN_COMPARISON=1`
Define this macro to 0 to omit the ability to compare spans. C++20 `std::span<>` does not support comparison because semantics (deep vs. shallow) are unclear. **Default is 1.**
#### `gsl_CONFIG_ALLOWS_NONSTRICT_SPAN_COMPARISON=1`
Define this macro to 0 to omit the ability to compare spans of different types, e.g. of different const-volatile-ness. To be able to compare a string_span with a cstring_span, non-strict span comparison must be available. **Default is 1.**
#### `gsl_CONFIG_ALLOWS_UNCONSTRAINED_SPAN_CONTAINER_CTOR=0`
Define this macro to 1 to add the unconstrained span constructor for containers for pre-C++11 compilers that cannot constrain the constructor. This constructor may prove too greedy and interfere with other constructors. **Default is 0.**
Note: an alternative is to use the constructor tagged `with_container`: `span<V> s(gsl::with_container, cont)`.
#### `gsl_CONFIG_NARROW_THROWS_ON_TRUNCATION=0`
Define this macro to 1 to have `narrow<>()` always throw a `narrowing_error` exception if the narrowing conversion loses information due to truncation. If `gsl_CONFIG_NARROW_THROWS_ON_TRUNCATION` is 0 and `gsl_CONFIG_CONTRACT_VIOLATION_THROWS` is not defined, `narrow<>()` instead calls `std::terminate()` on information loss. **Default is 0.**
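An illustrative sketch of the behavior (the wrapper function is hypothetical):

```c++
#include <gsl/gsl-lite.hpp>

void example()
{
    int i = gsl::narrow<int>( 3.0 );  // ok: the value is preserved
    int j = gsl::narrow<int>( 3.5 );  // loses the fraction: throws gsl::narrowing_error
                                      // when gsl_CONFIG_NARROW_THROWS_ON_TRUNCATION=1
}
```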
#### `gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS=0`
Define this macro to 1 to experience the by-design compile-time errors of the GSL components in the test suite. **Default is 0.**
Features
--------
See also section [GSL: Guidelines Support Library](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#S-gsl) of the C++ Core Guidelines [9].
Feature / library | GSL | M-GSL | *gsl-lite* | Notes |
----------------------------|:-------:|:-------:|:----------:|:------|
**1. Lifetime safety**       |         |         |            |       |
**1.1 Indirection** | | | | |
`not_null<>` | ✓ | ✓ | ✓ | Wrap any indirection and enforce non-null,<br>see also [Other configuration macros](#other-configuration-macros) |
`not_null_ic<>` | - | - | ✓ | not_null with implicit constructor, allowing [copy-initialization](https://en.cppreference.com/w/cpp/language/copy_initialization) |
**1.2 Ownership** | | | | |
`owner<>` | ✓ | ✓ | ≥ C++11 | Owned raw pointers |
`Owner()` | - | - | ✓ | Macro for pre-C++11;<br>see also [Feature selection macros](#feature-selection-macros) |
`unique_ptr<>` | ✓ | ✓ | ≥ C++11 | `std::unique_ptr<>` |
`unique_ptr<>` | - | - | < C++11 | VC10, VC11 |
`shared_ptr<>` | ✓ | ✓ | ≥ C++11 | `std::shared_ptr<>` |
`shared_ptr<>` | - | - | < C++11 | VC10, VC11 |
`stack_array<>` | ✓ | - | - | A stack-allocated array, fixed size |
`dyn_array<>` | ? | - | - | A heap-allocated array, fixed size |
**2. Bounds safety**         |         |         |            |       |
**2.1 Tag Types** | | | | |
`zstring` | ✓ | ✓ | ✓ | a `char*` (C-style string) |
`wzstring` | - | ✓ | ✓ | a `wchar_t*` (C-style string) |
`czstring` | ✓ | ✓ | ✓ | a `const char*` (C-style string) |
`cwzstring` | - | ✓ | ✓ | a `const wchar_t*` (C-style string) |
**2.2 Views**                |         |         |            |       |
`span<>` | ✓ | ✓ | 1D views | A view of contiguous T's, replace (*,len),<br>see also proposal [p0122](http://wg21.link/p0122) |
`span_p<>` | ✓ | - | - | A view of contiguous T's that ends at the first element for which predicate(*p) is true |
`make_span()` | - | ✓ | ✓ | Create a span |
`byte_span()` | - | - | ✓ | Create a span of bytes from a single object |
`as_bytes()` | - | ✓ | ✓ | A span as bytes |
`as_writable_bytes` | - | ✓ | ✓ | A span as writable bytes |
`basic_string_span<>` | - | ✓ | ✓ | See also proposal [p0123](http://wg21.link/p0123) |
`string_span` | ✓ | ✓ | ✓ | `basic_string_span< char >` |
`wstring_span` | - | ✓ | ✓ | `basic_string_span< wchar_t >` |
`cstring_span` | ✓ | ✓ | ✓ | `basic_string_span< const char >` |
`cwstring_span` | - | ✓ | ✓ | `basic_string_span< const wchar_t >` |
`zstring_span` | - | ✓ | ✓ | `basic_zstring_span< char >` |
`wzstring_span` | - | ✓ | ✓ | `basic_zstring_span< wchar_t >` |
`czstring_span` | - | ✓ | ✓ | `basic_zstring_span< const char >` |
`cwzstring_span` | - | ✓ | ✓ | `basic_zstring_span< const wchar_t >` |
`ensure_z()` | - | ✓ | ✓ | Create a `cstring_span` or `cwstring_span` |
`to_string()` | - | ✓ | ✓ | Convert a `string_span` to `std::string` or `std::wstring` |
**2.3 Indexing** | | | | |
`at()` | ✓ | ✓ | ≥ C++11 | Bounds-checked way of accessing<br>static arrays, `std::array<>`, `std::vector<>` |
`at()` | - | - | < C++11 | static arrays, `std::vector<>`<br>`std::array<>` : VC11 |
**3. Assertions** | | | | |
`Expects()` | ✓ | ✓ | ✓ | Precondition assertion |
`Ensures()` | ✓ | ✓ | ✓ | Postcondition assertion |
`gsl_Expects()` | - | - | ✓ | Precondition assertion |
`gsl_Ensures()` | - | - | ✓ | Postcondition assertion |
`gsl_Assert()` | - | - | ✓ | Assertion |
`gsl_FailFast()` | - | - | ✓ | Fail-fast termination |
`gsl_ExpectsAudit()` | - | - | ✓ | Audit-level precondition assertion |
`gsl_EnsuresAudit()` | - | - | ✓ | Audit-level postcondition assertion |
`gsl_AssertAudit()` | - | - | ✓ | Audit-level assertion |
**4. Utilities** | | | | |
`index` | ✓ | ✓ | ✓ | type for container indexes and subscripts, <br>see [Other configuration macros](#other-configuration-macros) |
`dim` | - | - | ✓ | type for container sizes |
`stride` | - | - | ✓ | type for index strides |
`diff` | - | - | ✓ | type for index differences |
`byte` | - | ✓ | ✓ | byte type, see also proposal [p0298](http://wg21.link/p0298) |
`final_action<>` | ✓ | ✓ | ≥ C++11 | Action at the end of a scope |
`final_action` | - | - | < C++11 | Currently only `void(*)()` |
`finally()` | ✓ | ✓ | ≥ C++11 | Make a `final_action<>` |
`finally()` | - | - | < C++11 | Make a `final_action` |
`final_action_return` | - | - | < C++11 | Currently only `void(*)()`, [experimental](#feature-selection-macros) |
`on_return()`                | -       | -       | ≥ C++11    | Make a `final_action_return<>`, [experimental](#feature-selection-macros) |
`on_return()`                | -       | -       | < C++11    | Make a `final_action_return`, [experimental](#feature-selection-macros) |
`final_action_error` | - | - | < C++11 | Currently only `void(*)()`, [experimental](#feature-selection-macros) |
`on_error()` | - | - | ≥ C++11 | Make a `final_action_error<>`, [experimental](#feature-selection-macros) |
`on_error()` | - | - | < C++11 | Make a `final_action_error`, [experimental](#feature-selection-macros) |
`narrow_cast<>` | ✓ | ✓ | ✓ | Searchable narrowing casts of values |
`narrow<>()` | ✓ | ✓ | ✓ | Checked narrowing cast |
`narrow_failfast<>()` | - | - | ✓ | Fail-fast narrowing cast |
`[[implicit]]` | ✓ | - | C++?? | Symmetric with explicit |
`implicit` | - | - | ✓ | Macro, see [Feature selection macros](#feature-selection-macros) |
`move_owner` | ? | - | - | ... |
**5. Algorithms** | | | | |
`copy()` | | | | Copy from source span to destination span |
`size()` | | | | Size of span, unsigned |
`ssize()` | | | | Size of span, signed |
**6. Concepts** | | | | |
... | | | | |
Note: *gsl-lite* treats VC12 (VS2013) and VC14 (VS2015) as C++11 (`gsl_CPP11_OR_GREATER`: 1).
Deprecation
-----------
The following features are deprecated since the indicated version. See macro [`gsl_CONFIG_DEPRECATE_TO_LEVEL`](#other-configuration-macros) on how to control deprecation using the indicated level.
Version | Level | Feature / Notes |
-------:|:-----:|:----------------|
0.37.0 | 6 | `as_writeable_bytes()`, call indexing for spans, and `span::at()` |
| | Use `as_writable_bytes()`, subscript indexing |
0.35.0 | - | `gsl_CONFIG_CONTRACT_LEVEL_ON`, `gsl_CONFIG_CONTRACT_LEVEL_OFF`, `gsl_CONFIG_CONTRACT_LEVEL_EXPECTS_ONLY` and `gsl_CONFIG_CONTRACT_LEVEL_ENSURES_ONLY` |
| | Use `gsl_CONFIG_CONTRACT_CHECKING_ON`, `gsl_CONFIG_CONTRACT_CHECKING_OFF`, `gsl_CONFIG_CONTRACT_CHECKING_ENSURES_OFF`, `gsl_CONFIG_CONTRACT_CHECKING_EXPECTS_OFF` |
0.31.0 | 5 | `span( std::nullptr_t, index_type )` |
| | `span( pointer, index_type )` is used |
0.31.0 | 5 | `span( U *, index_type size )` |
| | `span( pointer, index_type )` is used |
0.31.0 | 5 | `span( U (&arr)[N] )` |
| | `span( element_type (&arr)[N] )` is used |
0.31.0 | 5 | `span( std::array< U, N > [const] & arr )` |
| | `span( std::array< value_type, N > [const] & arr )` is used |
0.29.0 | 4 | `span( std::shared_ptr<T> const & p )` |
| | — |
0.29.0 | 4 | `span( std::unique_ptr<T> const & p )` |
| | — |
0.29.0 | 3 | `span<>::length()` |
| | Use `span<>::size()` |
0.29.0 | 3 | `span<>::length_bytes()` |
| | Use `span<>::size_bytes()` |
0.17.0 | 2 | member `span<>::as_bytes()`, `span<>::as_writeable_bytes()` |
| | — |
0.7.0 | - | `gsl_CONFIG_ALLOWS_SPAN_CONTAINER_CTOR` |
| | Use `gsl_CONFIG_ALLOWS_UNCONSTRAINED_SPAN_CONTAINER_CTOR`,<br>or consider `span(with_container, cont)`. |
Reported to work with
---------------------
The table below mentions the compiler versions and platforms *gsl-lite* is reported to work with.
Compiler | OS | Platforms | Versions | CI |
--------------------:|:----------------|-----------|------------------:|----|
GCC | Linux | x64 | 4.7 and newer | [4.7, 4.8, 4.9, 5](https://travis-ci.com/gsl-lite/gsl-lite/), [6, 7, 8, 9, 10, 11](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
GCC (MinGW) | Windows | x86, x64 | 4.8.4 and newer | |
GCC (DJGPP) | DOSBox, FreeDOS | x86 | 7.2 | |
GCC | MacOS | x64 | 6 and newer | [6, 7, 8, 9, 10](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
Clang | Linux | x64 | 3.5 and newer | [3.5, 3.6, 3.7, 3.8, 3.9](https://travis-ci.com/gsl-lite/gsl-lite/), [4, 5, 6, 7, 8, 9, 10, 11, 12](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
Clang with libstdc++ | Linux | x64 | 11 and newer | [12](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
Clang | Windows | x64 | version shipped with VS 2019 | [latest](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
MSVC (Visual Studio) | Windows | x86, x64 | VS 2010 and newer | VS [2010, 2012, 2013, 2015](https://ci.appveyor.com/project/gsl-lite/gsl-lite), [2017, 2019](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
AppleClang (Xcode) | MacOS | x64 | 7.3 and newer | [7.3, 8, 8.1, 9](https://travis-ci.com/gsl-lite/gsl-lite/), [9.1, 10, 10.0.1, 11, 11.0.3, 12, 12.0.5, 13](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
NVCC (CUDA Toolkit) | Linux, Windows | x64 | 10.2 and newer | [10.2, 11.4](https://dev.azure.com/gsl-lite/gsl-lite/_build?definitionId=1) |
ARMCC | | ARM | 5 and newer | |
Building the tests
------------------
To build the tests, you need:

- [CMake](http://cmake.org), version 3.15 or later, installed and available in your PATH.
- A [suitable compiler](#reported-to-work-with).
The [*lest* test framework](https://github.com/martinmoene/lest) is included in the [test folder](test).
The following steps assume that the [*gsl-lite* source code](https://github.com/gsl-lite/gsl-lite) has been cloned into a directory named `C:\gsl-lite`.
1. Create a directory for the build outputs. Here we use `C:\gsl-lite\build`.

        cd C:\gsl-lite
        mkdir build
        cd build

2. Configure the build directory with CMake:

        cmake -DGSL_LITE_OPT_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Debug ..

3. Build the test suite:

        cmake --build . --config Debug

4. Run the test suite:

        ctest -V -C Debug
All tests should pass, indicating your platform is supported and you are ready to use *gsl-lite*. See the table with [supported types and functions](#features).
Other GSL implementations
-------------------------
- Microsoft. [Guidelines Support Library (GSL)](https://github.com/microsoft/GSL).
- Vicente J. Botet Escriba. [Guidelines Support Library (GSL)](https://github.com/viboes/GSL).
Notes and references
--------------------
### Proposals, specification
[1] [`std::span<>` on cppreference](https://en.cppreference.com/w/cpp/container/span).
[2] [`std::span<>` in C++20 Working Draft](http://eel.is/c++draft/views).
[3] [P0091 - Template argument deduction for class templates](http://wg21.link/p0091).
[4] [P0122 - span: bounds-safe views for sequences of objects](http://wg21.link/p0122).
[5] [P0123 - string_span: bounds-safe views for sequences of characters](http://wg21.link/p0123).
[6] [P0298 - A byte type definition](http://wg21.link/p0298).
[7] [P0805 - Comparing Containers](http://wg21.link/p0805).
### Articles
[8] [Standard C++ Foundation](https://isocpp.org/).
[9] Standard C++ Foundation. [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines).
[10] Microsoft. [Guidelines Support Library (GSL)](https://github.com/microsoft/gsl).
[11] Bjarne Stroustrup. [Writing good C++14 (PDF)](https://github.com/isocpp/CppCoreGuidelines/raw/master/talks/Stroustrup%20-%20CppCon%202015%20keynote.pdf) — [Video](https://www.youtube.com/watch?t=9&v=1OEu9C51K2A). CppCon 2015.
[12] Herb Sutter. [Writing good C++14… By default (PDF)](https://github.com/isocpp/CppCoreGuidelines/raw/master/talks/Sutter%20-%20CppCon%202015%20day%202%20plenary%20.pdf) — [Video](https://www.youtube.com/watch?v=hEx5DNLWGgA). CppCon 2015.
[13] Gabriel Dos Reis. [Contracts for Dependable C++ (PDF)](https://github.com/isocpp/CppCoreGuidelines/raw/master/talks/Contracts-for-Dependable-C%2B%2B.pdf) — Video. CppCon 2015.
[14] Bjarne Stroustrup et al. [A brief introduction to C++’s model for type- and resource-safety](https://github.com/isocpp/CppCoreGuidelines/raw/master/docs/Introduction%20to%20type%20and%20resource%20safety.pdf).
[15] Herb Sutter and Neil MacIntosh. [Lifetime Safety: Preventing Leaks and Dangling](https://github.com/isocpp/CppCoreGuidelines/raw/master/docs/Lifetimes%20I%20and%20II%20-%20v0.9.1.pdf). 21 Sep 2015.
### Compiler feature testing
[16] cppreference.com. [Feature testing](https://en.cppreference.com/w/cpp/feature_test).
### C++ features in various compilers
[17] cppreference.com. [C++ compiler support](https://en.cppreference.com/w/cpp/compiler_support).
Appendix
--------
<a id="a1"></a>
### A.1 Compile-time information
In the test runner, the version of *gsl-lite* is available via tag `[.version]`. The following tags are available for information on the compiler and on the C++ standard library used: `[.compiler]`, `[.stdc++]`, `[.stdlanguage]` and `[.stdlibrary]`.
<a id="a2"></a>
### A.2 *gsl-lite* test specification
<details>
<summary>click to expand</summary>
<p>
```
gsl_Expects(): Allows a true expression
gsl_Ensures(): Allows a true expression
gsl_Assert(): Allows a true expression
gsl_Expects(): Terminates on a false expression
gsl_Ensures(): Terminates on a false expression
gsl_Assert(): Terminates on a false expression
gsl_FailFast(): Suppresses compiler warning about missing return value
gsl_FailFast(): Terminates
gsl_ExpectsAudit(): Allows a true expression
gsl_EnsuresAudit(): Allows a true expression
gsl_AssertAudit(): Allows a true expression
gsl_ExpectsAudit(): Terminates on a false expression in AUDIT mode
gsl_EnsuresAudit(): Terminates on a false expression in AUDIT mode
gsl_AssertAudit(): Terminates on a false expression in AUDIT mode
gsl_Expects(): No warnings produced for function calls in precondition checks
gsl_Expects(): Supports explicit conversions to bool
at(): Terminates access to non-existing C-array elements
at(): Terminates access to non-existing std::array elements (C++11)
at(): Terminates access to non-existing std::vector elements
at(): Terminates access to non-existing std::initializer_list elements (C++11)
at(): Terminates access to non-existing gsl::span elements
at(): Allows to access existing C-array elements
at(): Allows to access existing std::array elements (C++11)
at(): Allows to access existing std::vector elements
at(): Allows to access std::initializer_list elements (C++11)
at(): Allows to access gsl::span elements
byte: Allows to construct from integral via static cast (C++17)
byte: Allows to construct from integral via byte() (C++17)
byte: Allows to construct from integral via to_byte()
byte: Allows to convert to integral via to_integer()
byte: Allows comparison operations
byte: Allows bitwise or operation
byte: Allows bitwise and operation
byte: Allows bitwise x-or operation
byte: Allows bitwise or assignment
byte: Allows bitwise and assignment
byte: Allows bitwise x-or assignment
byte: Allows shift-left operation
byte: Allows shift-right operation
byte: Allows shift-left assignment
byte: Allows shift-right assignment
byte: Provides constexpr non-assignment operations (C++11)
byte: Provides constexpr assignment operations (C++14)
byte: Provides hash support (C++11)
equal()
lexicographical_compare()
conjunction<> and disjunction<>: Short-circuiting is handled correctly
conjunction<> and disjunction<>: First suitable type is chosen as base
span<>: free comparation functions fail for different const-ness [issue #32]
span<>: constrained container constructor suffers hard failure for arguments with reference-returning data() function [issue #242]
byte: aliasing rules lead to undefined behaviour when using enum class [issue #34](GSL issue #313, PR #390)
string_span<>: must not include terminating '\0' [issue #53]
string_span<>: to_string triggers SFINAE errors on basic_string_span's move & copy constructor with Clang-3.9 (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS) [issue #53a]
narrow<>(): Allows narrowing double to float without MSVC level 4 warning C4127: conditional expression is constant [issue #115]
detail::is_compatible_container<>: Not a proper type trait [PR #238]
not_null<>: Disallows default construction (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
not_null<>: Disallows construction from nullptr_t, NULL or 0 (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
not_null<>: Disallows construction from a unique pointer to underlying type (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
not_null<>: Layout is compatible to underlying type
not_null<>: Convertibility is correctly reported by type traits
not_null<>: Copyability and assignability are correctly reported by type traits
not_null<>: Disallows assignment from unrelated pointers (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
not_null<>: Terminates construction from a null pointer value (raw pointer)
not_null<>: Terminates construction from related pointer types for null pointer value (raw pointer)
not_null<>: Terminates assignment from a null pointer value (raw pointer)
not_null<>: Terminates assignment from related pointer types for null pointer value (raw pointer)
not_null<>: Allows to construct from a non-null underlying pointer (raw pointer)
not_null<>: Returns underlying pointer with get() (raw pointer)
not_null<>: Allows to construct from a non-null underlying pointer (raw pointer) with make_not_null()
not_null<>: Allows to construct from a non-null underlying pointer (raw pointer) with deduction guide
not_null<>: Allows to construct a const pointer from a non-null underlying pointer (raw pointer)
not_null<>: Converts to underlying pointer (raw pointer)
as_nullable: Converts to underlying pointer (raw pointer)
not_null<>: Allows to construct from a non-null related pointer (raw pointer)
not_null<>: Allows to construct a const pointer from a non-null related pointer (raw pointer)
not_null<>: Allows to construct from a not_null related pointer type (raw pointer)
not_null<>: Allows to construct a const pointer from a not_null related pointer type (raw pointer)
not_null<>: Converts to a related pointer (raw pointer)
as_nullable: Converts to a related pointer (raw pointer)
not_null<>: Allows assignment from a not_null related pointer type (raw pointer)
not_null<>: Allows assignment to a const pointer from a not_null related pointer type (raw pointer)
not_null<>: Allows indirect member access (raw pointer)
not_null<>: Allows dereferencing (raw pointer)
not_null<>: Terminates swap of a moved-from value (shared_ptr)
not_null<>: Tolerates self-move-assignment of a moved-from value (shared_ptr)
not_null<>: Terminates self-swap of a moved-from value (shared_ptr)
not_null<>: Terminates construction from a null pointer value (shared_ptr)
not_null<>: Terminates construction from related pointer types for null pointer value (shared_ptr)
not_null<>: Terminates assignment from a null pointer value (shared_ptr)
not_null<>: Terminates assignment from related pointer types for null pointer value (shared_ptr)
not_null<>: Terminates propagation of a moved-from value (shared_ptr)
not_null<>: Allows self-swap (shared_ptr)
not_null<>: Allows swap (shared_ptr)
not_null<>: Allows to construct from a non-null underlying pointer (shared_ptr)
not_null<>: Allows to construct from a non-null raw pointer with explicit conversion (shared_ptr)
not_null<>: Returns underlying pointer or raw pointer with get() (shared_ptr)
not_null<>: Allows to move from a not_null pointer to an underlying pointer (shared_ptr)
as_nullable: Allows to move from a not_null pointer to an underlying pointer (shared_ptr)
not_null<>: Allows to construct from a non-null underlying pointer (shared_ptr) with make_not_null()
not_null<>: Allows to construct from a non-null underlying pointer (shared_ptr) with deduction guide
not_null<>: Allows to construct a const pointer from a non-null underlying pointer (shared_ptr)
not_null<>: Converts to underlying pointer (shared_ptr)
as_nullable: Converts to underlying pointer (shared_ptr)
as_nullable: Terminates for moved-from pointer (shared_ptr)
not_null<>: Allows to construct from a non-null related pointer (shared_ptr)
not_null<>: Allows to construct a const pointer from a non-null related pointer (shared_ptr)
not_null<>: Allows to construct from a not_null related pointer type (shared_ptr)
not_null<>: Allows to construct a const pointer from a not_null related pointer type (shared_ptr)
not_null<>: Converts to a related pointer (shared_ptr)
as_nullable: Converts to a related pointer (shared_ptr)
not_null<>: Allows assignment from a not_null related pointer type (shared_ptr)
not_null<>: Allows assignment to a const pointer from a not_null related pointer type (shared_ptr)
not_null<>: Allows indirect member access (shared_ptr)
not_null<>: Allows dereferencing (shared_ptr)
not_null<>: Terminates swap of a moved-from value (unique_ptr)
not_null<>: Tolerates self-move-assignment of a moved-from value (unique_ptr)
not_null<>: Terminates self-swap of a moved-from value (unique_ptr)
not_null<>: Terminates construction from a null pointer value (unique_ptr)
not_null<>: Terminates construction from related pointer types for null pointer value (unique_ptr)
not_null<>: Terminates assignment from a null pointer value (unique_ptr)
not_null<>: Terminates assignment from related pointer types for null pointer value (unique_ptr)
not_null<>: Terminates propagation of a moved-from value (unique_ptr)
not_null<>: Allows self-swap (unique_ptr)
not_null<>: Allows swap (unique_ptr)
not_null<>: Allows to construct from a non-null underlying pointer (unique_ptr)
not_null<>: Allows to construct from a non-null raw pointer with explicit conversion (unique_ptr)
not_null<>: Returns underlying pointer or raw pointer with get() (unique_ptr)
not_null<>: Allows to move from a not_null pointer to an underlying pointer (unique_ptr)
as_nullable: Allows to move from a not_null pointer to an underlying pointer (unique_ptr)
not_null<>: Allows to move to a related pointer from a not_null pointer (unique_ptr)
as_nullable: Allows to move to a related pointer from a not_null pointer (unique_ptr)
not_null<>: Allows to construct from a non-null underlying pointer (unique_ptr) with make_not_null()
not_null<>: Allows to construct from a non-null underlying pointer (unique_ptr) with deduction guide
not_null<>: Allows to construct a const pointer from a non-null underlying pointer (unique_ptr)
not_null<>: Converts to underlying pointer (unique_ptr)
as_nullable: Converts to underlying pointer (unique_ptr)
as_nullable: Terminates for moved-from pointer (unique_ptr)
not_null<>: Allows to construct from a non-null related pointer (unique_ptr)
not_null<>: Allows to construct a const pointer from a non-null related pointer (unique_ptr)
not_null<>: Allows to construct from a not_null related pointer type (unique_ptr)
not_null<>: Allows to construct a const pointer from a not_null related pointer type (unique_ptr)
not_null<>: Converts to a related pointer (unique_ptr)
as_nullable: Converts to a related pointer (unique_ptr)
not_null<>: Allows assignment from a not_null related pointer type (unique_ptr)
not_null<>: Allows assignment to a const pointer from a not_null related pointer type (unique_ptr)
not_null<>: Allows indirect member access (unique_ptr)
not_null<>: Allows dereferencing (unique_ptr)
not_null<>: Allows to construct a not_null<shared_ptr<T>> from a non-null unique_ptr<T>
not_null<>: Allows to construct a not_null<shared_ptr<const T>> from a non-null unique_ptr<T>
not_null<>: Allows to construct a not_null<shared_ptr<T>> from a related non-null unique_ptr<U>
not_null<>: Allows to construct a not_null<shared_ptr<const T>> from a related non-null unique_ptr<U>
not_null<>: Allows to construct a not_null<shared_ptr<T>> from a not_null<unique_ptr<T>>
not_null<>: Allows to convert to weak_ptr<T> from a not_null<shared_ptr<T>>
not_null<>: Allows to convert from a not_null<shared_ptr<T>> to a user-defined type with explicit conversion constructor
not_null<>: Allows to construct a not_null<shared_ptr<const T>> from a not_null<unique_ptr<T>>
not_null<>: Allows to construct a not_null<shared_ptr<T>> from a related not_null<unique_ptr<U>>
not_null<>: Allows to construct a not_null<shared_ptr<const T>> from a related not_null<unique_ptr<U>>
not_null<>: Allows assignment to a not_null<shared_ptr<T>> from a related not_null<unique_ptr<U>>
not_null<>: Allows assignment to a not_null<shared_ptr<const T>> from a related not_null<unique_ptr<U>>
not_null<>: make_unique<T>() returns not_null<unique_ptr<T>>
not_null<>: make_shared<T>() returns not_null<shared_ptr<T>>
not_null<>: Allows assignment from a non-null bare recast pointer
not_null<>: Allows implicit conversion to underlying type
not_null<>: Allows to construct from a non-null user-defined ref-counted type
not_null<>: Allows to compare equal to another not_null of the same type
not_null<>: Allows to compare unequal to another not_null of the same type
not_null<>: Allows to compare less than another not_null of the same type
not_null<>: Allows to compare less than or equal to another not_null of the same type
not_null<>: Allows to compare greater than another not_null of the same type
not_null<>: Allows to compare greater than or equal to another not_null of the same type
not_null<>: Allows to compare equal to a raw pointer of the same type
not_null<>: Allows to compare unequal to a raw pointer of the same type
not_null<>: Allows to compare less than a raw pointer of the same type
not_null<>: Allows to compare less than or equal to a raw pointer of the same type
not_null<>: Allows to compare greater than a raw pointer of the same type
not_null<>: Allows to compare greater than or equal to a raw pointer of the same type
not_null<>: Able to deduce element_type of raw pointers
not_null<>: Able to deduce element_type of unique_ptr
not_null<>: Able to deduce element_type of shared_ptr
not_null<>: Able to deduce element_type of normal user-defined smart pointers
not_null<>: Able to correctly deduce element_type of user-defined smart pointers even if typedef and result of dereferencing differs
not_null<>: Able to deduce element_type of user-defined smart pointers even if they do not have an element_type typedef
not_null<>: Able to deduce element_type of user-defined smart pointers even if they do not have an element_type typedef, and element_type differs from T
not_null<>: Hashes match the hashes of the wrapped pointer
not_null<>: Hash functor disabled for non-hashable pointers and enabled for hashable pointers
owner<>: Disallows construction from a non-pointer type (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
owner<>: Allows its use as the (pointer) type it stands for
Owner(): Allows its use as the (pointer) type it stands for
span<>: Disallows construction from a temporary value (C++11) (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
span<>: Disallows construction from a C-array of incompatible type (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
span<>: Disallows construction from a std::array of incompatible type (C++11) (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
span<>: Terminates construction from a nullptr and a non-zero size (C++11)
span<>: Terminates construction from two pointers in the wrong order
span<>: Terminates construction from a null pointer and a non-zero size
span<>: Terminates creation of a sub span of the first n elements for n exceeding the span
span<>: Terminates creation of a sub span of the last n elements for n exceeding the span
span<>: Terminates creation of a sub span outside the span
span<>: Terminates access outside the span
span<>: Terminates access with front() and back() on empty span
span<>: Allows to default-construct
span<>: Allows to construct from a nullptr and a zero size (C++11)
span<>: Allows to construct from a single object (C++11)
span<>: Allows to construct from a const single object (C++11)
span<>: Allows to construct from two pointers
span<>: Allows to construct from two pointers to const
span<>: Allows to construct from a non-null pointer and a size
span<>: Allows to construct from a non-null pointer to const and a size
span<>: Allows to construct from a temporary pointer and a size
span<>: Allows to construct from a temporary pointer to const and a size
span<>: Allows to construct from any pointer and a zero size
span<>: Allows to construct from a C-array
span<>: Allows to construct from a const C-array
span<>: Allows to construct from a C-array with size via decay to pointer (potentially dangerous)
span<>: Allows to construct from a const C-array with size via decay to pointer (potentially dangerous)
span<>: Allows to construct from a std::initializer_list<> (C++11)
span<>: Allows to construct from a std::array<> (C++11)
span<>: Allows constexpr use (C++14)
span<>: Allows to construct from a std::array<> with const data (C++11) [deprecated-5]
span<>: Allows to construct from a container (std::vector<>)
span<>: Allows to construct from a temporary container (potentially dangerous)
span<>: Allows to tag-construct from a container (std::vector<>)
span<>: Allows to tag-construct from a temporary container (potentially dangerous)
span<>: Allows to construct from an empty gsl::shared_ptr (C++11) [deprecated-4]
span<>: Allows to construct from an empty gsl::unique_ptr (C++11) [deprecated-4]
span<>: Allows to construct from an empty gsl::unique_ptr (array, C++11) [deprecated-4]
span<>: Allows to construct from a non-empty gsl::shared_ptr (C++11) [deprecated-4]
span<>: Allows to construct from a non-empty gsl::unique_ptr (C++11) [deprecated-4]
span<>: Allows to construct from a non-empty gsl::unique_ptr (array, C++11) [deprecated-4]
span<>: Allows to default construct in a constexpr context
span<>: Allows to copy-construct from another span of the same type
span<>: Allows to copy-construct from another span of a compatible type
span<>: Allows to move-construct from another span of the same type (C++11)
span<>: Allows to copy-assign from another span of the same type
span<>: Allows to move-assign from another span of the same type (C++11)
span<>: Allows to create a sub span of the first n elements
span<>: Allows to create a sub span of the last n elements
span<>: Allows to create a sub span starting at a given offset
span<>: Allows to create a sub span starting at a given offset with a given length
span<>: Allows to create an empty sub span at full offset
span<>: Allows to create an empty sub span at full offset with zero length
span<>: Allows forward iteration
span<>: Allows const forward iteration
span<>: Allows reverse iteration
span<>: Allows const reverse iteration
span<>: Allows to observe an element via array indexing
span<>: Allows to observe an element via front() and back()
span<>: Allows to observe an element via data()
span<>: Allows to change an element via array indexing
span<>: Allows to change an element via front() and back()
span<>: Allows to change an element via data()
span<>: Allows to test for empty span via empty(), empty case
span<>: Allows to test for empty span via empty(), non-empty case
span<>: Allows to obtain the number of elements via size(), as configured
span<>: Allows to obtain the number of elements via ssize(), signed
span<>: Allows to obtain the number of elements via length() [deprecated-3]
span<>: Allows to obtain the number of bytes via size_bytes()
span<>: Allows to obtain the number of bytes via length_bytes() [deprecated-3]
span<>: Allows to swap with another span of the same type
span<>: Allows to view the elements as read-only bytes [deprecated-2 as member]
span<>: Allows to view and change the elements as writable bytes [deprecated-2 as member]
span<>: Allows to view the elements as a span of another type
span<>: Allows to change the elements from a span of another type
copy(): Allows to copy a span to another span of the same element type
copy(): Allows to copy a span to another span of a different element type
size(): Allows to obtain the number of elements in span via size(span), unsigned
ssize(): Allows to obtain the number of elements in span via ssize(span), signed
make_span(): (gsl_FEATURE_MAKE_SPAN=1)
make_span(): Allows to build from two pointers
make_span(): Allows to build from two const pointers
make_span(): Allows to build from a non-null pointer and a size
make_span(): Allows to build from a non-null const pointer and a size
make_span(): Allows to build from a C-array
make_span(): Allows to build from a const C-array
make_span(): Allows building from a std::initializer_list<> (C++11)
make_span(): Allows to build from a std::array<> (C++11)
make_span(): Allows to build from a const std::array<> (C++11)
make_span(): Allows to build from a container (std::vector<>)
make_span(): Allows to build from a const container (std::vector<>)
make_span(): Allows to build from a temporary container (potentially dangerous)
make_span(): Allows to tag-build from a container (std::vector<>)
make_span(): Allows to tag-build from a temporary container (potentially dangerous)
make_span(): Allows to build from an empty gsl::shared_ptr (C++11) [deprecated-4]
make_span(): Allows to build from an empty gsl::unique_ptr (C++11) [deprecated-4]
make_span(): Allows to build from an empty gsl::unique_ptr (array, C++11) [deprecated-4]
make_span(): Allows to build from a non-empty gsl::shared_ptr (C++11) [deprecated-4]
make_span(): Allows to build from a non-empty gsl::unique_ptr (C++11) [deprecated-4]
make_span(): Allows to build from a non-empty gsl::unique_ptr (array, C++11) [deprecated-4]
byte_span() (gsl_FEATURE_BYTE_SPAN=1)
byte_span(): Allows to build a span of gsl::byte from a single object
byte_span(): Allows to build a span of const gsl::byte from a single const object
string_span: Disallows construction of a string_span from a cstring_span (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
string_span: Disallows construction of a string_span from a const std::string (define gsl_CONFIG_CONFIRMS_COMPILATION_ERRORS)
string_span: Allows to default-construct
string_span: Allows to construct from a nullptr (C++11)
string_span: Allows to construct a cstring_span from a const C-string
string_span: Allows to construct a string_span from a non-const C-string and size
string_span: Allows to construct a string_span from a non-const C-string begin and end pointer
string_span: Allows to construct a string_span from a non-const C-array
string_span: Allows to construct a string_span from a non-const std::string
string_span: Allows to construct a string_span from a non-const std::array (C++11)
string_span: Allows to construct a string_span from a non-const container (std::vector)
string_span: Allows to construct a string_span from a non-const container, via a tag (std::vector)
string_span: Allows to construct a cstring_span from a non-const C-string and size
string_span: Allows to construct a cstring_span from a non-const C-string begin and end pointer
string_span: Allows to construct a cstring_span from a non-const C-array
string_span: Allows to construct a cstring_span from a non-const std::string
string_span: Allows to construct a cstring_span from a non-const std::array (C++11)
string_span: Allows to construct a cstring_span from a non-const container (std::vector)
string_span: Allows to construct a cstring_span from a non-const container, via a tag (std::vector)
string_span: Allows to construct a cstring_span from a const C-string and size
string_span: Allows to construct a cstring_span from a non-const C-string begin and end pointer
string_span: Allows to construct a cstring_span from a const C-array
string_span: Allows to construct a cstring_span from a const std::string
string_span: Allows to construct a cstring_span from a const std::array (C++11)
string_span: Allows to construct a cstring_span from a const container (std::vector)
string_span: Allows to construct a cstring_span from a const container, via a tag (std::vector)
string_span: Allows to construct a wstring_span from a non-const C-string and size
string_span: Allows to construct a wstring_span from a non-const C-string begin and end pointer
string_span: Allows to construct a wstring_span from a non-const C-array
string_span: Allows to construct a wstring_span from a non-const std::wstring
string_span: Allows to construct a wstring_span from a non-const std::array (C++11)
string_span: Allows to construct a wstring_span from a non-const container (std::vector)
string_span: Allows to construct a wstring_span from a non-const container, via a tag (std::vector)
string_span: Allows to construct a cwstring_span from a non-const C-string and size
string_span: Allows to construct a cwstring_span from a non-const C-string begin and end pointer
string_span: Allows to construct a cwstring_span from a non-const C-array
string_span: Allows to construct a cwstring_span from a non-const std::wstring
string_span: Allows to construct a cwstring_span from a non-const std::array (C++11)
string_span: Allows to construct a cwstring_span from a non-const container (std::vector)
string_span: Allows to construct a cwstring_span from a non-const container, via a tag (std::vector)
string_span: Allows to construct a cwstring_span from a const C-string and size
string_span: Allows to construct a cwstring_span from a const C-string begin and end pointer
string_span: Allows to construct a cwstring_span from a const C-array
string_span: Allows to construct a cwstring_span from a const std::wstring
string_span: Allows to construct a cwstring_span from a const std::array (C++11)
string_span: Allows to construct a cwstring_span from a const container (std::vector)
string_span: Allows to construct a cwstring_span from a const container, via a tag (std::vector)
string_span: Allows to copy-construct from another span of the same type
string_span: Allows to copy-construct from another span of a compatible type
string_span: Allows to move-construct from another span of the same type (C++11)
string_span: Allows to copy-assign from another span of the same type
string_span: Allows to move-assign from another span of the same type (C++11)
string_span: Allows to create a sub span of the first n elements
string_span: Allows to create a sub span of the last n elements
string_span: Allows to create a sub span starting at a given offset
string_span: Allows to create a sub span starting at a given offset with a given length
string_span: Allows to create an empty sub span at full offset
string_span: Allows to create an empty sub span at full offset with zero length
string_span: Allows forward iteration
string_span: Allows const forward iteration
string_span: Allows reverse iteration
string_span: Allows const reverse iteration
string_span: Allows to observe an element via array indexing
string_span: Allows to observe an element via front() and back()
string_span: Allows to observe an element via data()
string_span: Allows to change an element via array indexing
string_span: Allows to change an element via front() and back()
string_span: Allows to change an element via data()
string_span: Allows to compare a string_span with another string_span
string_span: Allows to compare empty span to non-empty span
string_span: Allows to compare a string_span with a cstring_span
string_span: Allows to compare with types convertible to string_span
string_span: Allows to test for empty span via empty(), empty case
string_span: Allows to test for empty span via empty(), non-empty case
string_span: Allows to obtain the number of elements via length()
string_span: Allows to obtain the number of elements via size()
string_span: Allows to obtain the number of bytes via length_bytes()
string_span: Allows to obtain the number of bytes via size_bytes()
string_span: Allows to view the elements as read-only bytes
zstring_span: Terminates construction of a zstring_span from an empty span
zstring_span: Allows to construct a zstring_span from a zero-terminated empty string (via span)
zstring_span: Allows to construct a zstring_span from a zero-terminated non-empty string (via span)
zstring_span: Terminates construction of a zstring_span from a non-zero-terminated string (via span)
zstring_span: Terminates construction of a wzstring_span from an empty span
zstring_span: Allows to construct a wzstring_span from a zero-terminated empty string (via span)
zstring_span: Allows to construct a wzstring_span from a zero-terminated non-empty string (via span)
zstring_span: Terminates construction of a wzstring_span from a non-zero-terminated string (via span)
zstring_span: Allows to use a zstring_span with a legacy API via member assume_z()
zstring_span: Allows to use a wzstring_span with a legacy API via member assume_z()
to_string(): Allows to explicitly convert from string_span to std::string
to_string(): Allows to explicitly convert from cstring_span to std::string
to_string(): Allows to explicitly convert from wstring_span to std::wstring
to_string(): Allows to explicitly convert from cwstring_span to std::wstring
ensure_z(): Disallows to build a string_span from a const C-string
ensure_z(): Disallows to build a wstring_span from a const wide C-string
ensure_z(): Allows to build a string_span from a non-const C-string
ensure_z(): Allows to build a cstring_span from a non-const C-string
ensure_z(): Allows to build a cstring_span from a const C-string
ensure_z(): Allows to build a wstring_span from a non-const wide C-string
ensure_z(): Allows to build a cwstring_span from a non-const wide C-string
ensure_z(): Allows to build a cwstring_span from a const wide C-string
ensure_z(): Allows to specify ultimate location of the sentinel and ensure its presence
operator<<: Allows printing a string_span to an output stream
operator<<: Allows printing a cstring_span to an output stream
operator<<: Allows printing a wstring_span to an output stream
operator<<: Allows printing a cwstring_span to an output stream
finally: Allows to run lambda on leaving scope
finally: Allows to run function (bind) on leaving scope
finally: Allows to run function (pointer) on leaving scope
finally: Allows to move final_action
on_return: Allows to perform action on leaving scope without exception (gsl_FEATURE_EXPERIMENTAL_RETURN_GUARD)
on_error: Allows to perform action on leaving scope via an exception (gsl_FEATURE_EXPERIMENTAL_RETURN_GUARD)
narrow_cast<>: Allows narrowing without value loss
narrow_cast<>: Allows narrowing with value loss
narrow<>(): Allows narrowing without value loss
narrow<>(): Terminates when narrowing with value loss
narrow<>(): Terminates when narrowing with sign loss
narrow_failfast<>(): Allows narrowing without value loss
narrow_failfast<>(): Terminates when narrowing with value loss
narrow_failfast<>(): Terminates when narrowing with sign loss
CUDA: Precondition/postcondition checks and assertions can be used in kernel code
CUDA: span<> can be passed to kernel code
CUDA: span<> can be used in kernel code
CUDA: not_null<> can be passed to and used in kernel code
CUDA: gsl_FailFast() can be used in kernel code
```
</p>
</details>
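To make the `narrow`/`narrow_cast` entries in the feature list above concrete, here is a minimal sketch (the `<gsl/gsl-lite.hpp>` header name and `gsl` namespace are assumptions that vary with the gsl-lite version and configuration):

```cpp
#include <gsl/gsl-lite.hpp>

int main()
{
    long small = 42;
    long big   = 300;

    // narrow_cast<> is a searchable alias for static_cast: value loss is allowed.
    char lossy = gsl::narrow_cast<char>( big );

    // narrow<>() checks the conversion and terminates (or throws, depending on
    // configuration) when the value does not survive the round trip.
    char checked = gsl::narrow<char>( small );   // fine: 42 fits in char
    // gsl::narrow<char>( big );                 // would terminate/throw: value loss

    (void) lossy; (void) checked;
    return 0;
}
```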
| 67.165312 | 1,403 | 0.708468 | eng_Latn | 0.960842 |
b9319d32f189f102e791d7ffab3f1c8f300fc250 | 5,220 | md | Markdown | aspnetcore/mvc/views/tag-helpers/built-in/partial-tag-helper.md | benichka/aspnet-Docs | c79fece1de37c0e28034ca9dc8b99e9d13375538 | [
"CC-BY-4.0",
"MIT"
] | 21 | 2019-02-15T10:04:38.000Z | 2022-03-30T20:12:28.000Z | aspnetcore/mvc/views/tag-helpers/built-in/partial-tag-helper.md | benichka/aspnet-Docs | c79fece1de37c0e28034ca9dc8b99e9d13375538 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnetcore/mvc/views/tag-helpers/built-in/partial-tag-helper.md | benichka/aspnet-Docs | c79fece1de37c0e28034ca9dc8b99e9d13375538 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-05T01:27:35.000Z | 2020-05-05T01:27:35.000Z | ---
title: Partial Tag Helper in ASP.NET Core
author: scottaddie
description: Discover the ASP.NET Core Partial Tag Helper and the role each of its attributes play in rendering a partial view.
monikerRange: '>= aspnetcore-2.1'
ms.author: scaddie
ms.custom: mvc
ms.date: 07/25/2018
uid: mvc/views/tag-helpers/builtin-th/partial-tag-helper
---
# Partial Tag Helper in ASP.NET Core
By [Scott Addie](https://github.com/scottaddie)
For an overview of Tag Helpers, see <xref:mvc/views/tag-helpers/intro>.
[View or download sample code](https://github.com/aspnet/Docs/tree/master/aspnetcore/mvc/views/tag-helpers/built-in/samples) ([how to download](xref:index#how-to-download-a-sample))
## Overview
The Partial Tag Helper is used for rendering a [partial view](xref:mvc/views/partial) in Razor Pages and MVC apps. Consider that it:
* Requires ASP.NET Core 2.1 or later.
* Is an alternative to [HTML Helper syntax](xref:mvc/views/partial#reference-a-partial-view).
* Renders the partial view asynchronously.
The HTML Helper options for rendering a partial view include:
* [@await Html.PartialAsync](/dotnet/api/microsoft.aspnetcore.mvc.rendering.htmlhelperpartialextensions.partialasync)
* [@await Html.RenderPartialAsync](/dotnet/api/microsoft.aspnetcore.mvc.rendering.htmlhelperpartialextensions.renderpartialasync)
* [@Html.Partial](/dotnet/api/microsoft.aspnetcore.mvc.rendering.htmlhelperpartialextensions.partial)
* [@Html.RenderPartial](/dotnet/api/microsoft.aspnetcore.mvc.rendering.htmlhelperpartialextensions.renderpartial)
The *Product* model is used in samples throughout this document:
[!code-csharp[](samples/TagHelpersBuiltIn/Models/Product.cs)]
An inventory of the Partial Tag Helper attributes follows.
## name
The `name` attribute is required. It indicates the name or the path of the partial view to be rendered. When a partial view name is provided, the [view discovery](xref:mvc/views/overview#view-discovery) process is initiated. That process is bypassed when an explicit path is provided. For all acceptable `name` values, see [Partial view discovery](xref:mvc/views/partial#partial-view-discovery).
The following markup uses an explicit path, indicating that *_ProductPartial.cshtml* is to be loaded from the *Shared* folder. Using the [for](#for) attribute, a model is passed to the partial view for binding.
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Product.cshtml?name=snippet_Name)]
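Since the samples are pulled in by code includes, here is a hedged sketch of what such markup looks like (the file path and the `Product` property mirror the prose above but are illustrative):

```cshtml
<partial name="Shared/_ProductPartial.cshtml" for="Product" />
```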
## for
The `for` attribute assigns a [ModelExpression](/dotnet/api/microsoft.aspnetcore.mvc.viewfeatures.modelexpression) to be evaluated against the current model. A `ModelExpression` infers the `@Model.` syntax. For example, `for="Product"` can be used instead of `for="@Model.Product"`. This default inference behavior is overridden by using the `@` symbol to define an inline expression. The `for` attribute can't be used with the [model](#model) attribute.
The following markup loads *_ProductPartial.cshtml*:
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Product.cshtml?name=snippet_For)]
The partial view is bound to the associated page model's `Product` property:
[!code-csharp[](samples/TagHelpersBuiltIn/Pages/Product.cshtml.cs?highlight=8)]
## model
The `model` attribute assigns a model instance to pass to the partial view. The `model` attribute can't be used with the [for](#for) attribute.
In the following markup, a new `Product` object is instantiated and passed to the `model` attribute for binding:
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Product.cshtml?name=snippet_Model)]
## view-data
The `view-data` attribute assigns a [ViewDataDictionary](/dotnet/api/microsoft.aspnetcore.mvc.viewfeatures.viewdatadictionary) to pass to the partial view. The following markup makes the entire ViewData collection accessible to the partial view:
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Product.cshtml?name=snippet_ViewData&highlight=5-)]
In the preceding code, the `IsNumberReadOnly` key value is set to `true` and added to the ViewData collection. Consequently, `ViewData["IsNumberReadOnly"]` is made accessible within the following partial view:
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Shared/_ProductViewDataPartial.cshtml?highlight=5)]
In this example, the value of `ViewData["IsNumberReadOnly"]` determines whether the *Number* field is displayed as read only.
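For readers without the sample files, the `view-data` usage described above looks roughly like this (a sketch, not the literal sample):

```cshtml
@{
    ViewData["IsNumberReadOnly"] = true;
}
<partial name="_ProductViewDataPartial" for="Product" view-data="ViewData" />
```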
## Migrate from an HTML Helper
Consider the following asynchronous HTML Helper example. A collection of products is iterated and displayed. Per the `PartialAsync` method's first parameter, the *_ProductPartial.cshtml* partial view is loaded. An instance of the `Product` model is passed to the partial view for binding.
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Products.cshtml?name=snippet_HtmlHelper&highlight=3)]
The following Partial Tag Helper achieves the same asynchronous rendering behavior as the `PartialAsync` HTML Helper. The `model` attribute is assigned a `Product` model instance for binding to the partial view.
[!code-cshtml[](samples/TagHelpersBuiltIn/Pages/Products.cshtml?name=snippet_TagHelper&highlight=3)]
## Additional resources
* <xref:mvc/views/partial>
* <xref:mvc/views/overview#weakly-typed-data-viewdata-viewdata-attribute-and-viewbag>
| 55.531915 | 454 | 0.792337 | eng_Latn | 0.924807 |
b931cb769a7ee1b8d123f6aff45d0991b069b824 | 5,847 | md | Markdown | entity-framework/core/get-started/full-dotnet/existing-db.md | beholderrk/EntityFramework.Docs.ru-ru | 4c755e55f5d58f50968d73598f9c16ffa318adaf | [
"MIT"
] | null | null | null | entity-framework/core/get-started/full-dotnet/existing-db.md | beholderrk/EntityFramework.Docs.ru-ru | 4c755e55f5d58f50968d73598f9c16ffa318adaf | [
"MIT"
] | null | null | null | entity-framework/core/get-started/full-dotnet/existing-db.md | beholderrk/EntityFramework.Docs.ru-ru | 4c755e55f5d58f50968d73598f9c16ffa318adaf | [
"MIT"
] | null | null | null | ---
title: Getting started on .NET Framework - Existing database - EF Core
author: rowanmiller
ms.author: divega
ms.date: 08/06/2018
ms.assetid: a29a3d97-b2d8-4d33-9475-40ac67b3b2c6
ms.technology: entity-framework-core
uid: core/get-started/full-dotnet/existing-db
ms.openlocfilehash: d5c548927b736199c7d6fddc9c74139ca5f6614e
ms.sourcegitcommit: 902257be9c63c427dc793750a2b827d6feb8e38c
ms.translationtype: HT
ms.contentlocale: ru-RU
ms.lasthandoff: 08/07/2018
ms.locfileid: "39614419"
---
# <a name="getting-started-with-ef-core-on-net-framework-with-an-existing-database"></a>Getting started with EF Core on .NET Framework with an existing database
In this tutorial, you build a console application that performs basic data access against a Microsoft SQL Server database using Entity Framework. You create the Entity Framework model by reverse engineering an existing database.
[View this article's sample on GitHub](https://github.com/aspnet/EntityFramework.Docs/tree/master/samples/core/GetStarted/FullNet/ConsoleApp.ExistingDb).
## <a name="prerequisites"></a>Prerequisites
* [Visual Studio 2017 version 15.7 or later](https://www.visualstudio.com/downloads/)
## <a name="create-blogging-database"></a>Create the Blogging database
This tutorial uses a **Blogging** database on your LocalDb instance as the existing database. If you have already created the **Blogging** database as part of another tutorial, skip these steps.
* Open Visual Studio
* **Tools > Connect to Database...**
* Select **Microsoft SQL Server** and click **Continue**
* Enter **(localdb)\mssqllocaldb** as the **Server Name**
* Enter **master** as the **Database Name** and click **OK**
* The master database is now displayed under **Data Connections** in **Server Explorer**
* Right-click on the database in **Server Explorer** and select **New Query**
* Copy the script listed below into the query editor
* Right-click on the query editor and select **Execute**
[!code-sql[Main](../_shared/create-blogging-database-script.sql)]
## <a name="create-a-new-project"></a>Create a new project
* Open Visual Studio 2017
* **File > New > Project...**
* From the left menu, select **Installed > Visual C# > Windows Desktop**
* Select the **Console App (.NET Framework)** project template
* Make sure that the project targets **.NET Framework 4.6.1** or later
* Name the project *ConsoleApp.ExistingDb* and click **OK**
## <a name="install-entity-framework"></a>Install Entity Framework
To use EF Core, install the package for the database provider(s) you want to target. This tutorial uses SQL Server. For a list of available providers, see [Database Providers](../../providers/index.md).
* **Tools > NuGet Package Manager > Package Manager Console**
* Run `Install-Package Microsoft.EntityFrameworkCore.SqlServer`
In the next step, you will use some Entity Framework Tools to reverse engineer the database, so install the tools package as well.
* Run `Install-Package Microsoft.EntityFrameworkCore.Tools`
## <a name="reverse-engineer-the-model"></a>Reverse engineer the model
Now it's time to create the EF model based on your existing database.
* **Tools -> NuGet Package Manager -> Package Manager Console**
* Run the following command to create a model from the existing database:
``` powershell
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer
```
> [!TIP]
> You can choose which tables to generate entities for by adding the `-Tables` argument to the command above. For example, `-Tables Blog,Post`.
The reverse engineering process creates entity classes (`Blog` and `Post`) and a derived context (`BloggingContext`) based on the schema of the existing database.
The entity classes are simple C# objects that represent the data you will be querying and saving. Here are the `Blog` and `Post` entity classes:
[!code-csharp[Main](../../../../samples/core/GetStarted/FullNet/ConsoleApp.ExistingDb/Blog.cs)]
[!code-csharp[Main](../../../../samples/core/GetStarted/FullNet/ConsoleApp.ExistingDb/Post.cs)]
> [!TIP]
> To enable lazy loading, you can make the navigation properties (Blog.Post and Post.Blog) `virtual`.
The context represents a session with the database. It has methods that you can use to query and save instances of the entity classes.
[!code-csharp[Main](../../../../samples/core/GetStarted/FullNet/ConsoleApp.ExistingDb/BloggingContext.cs)]
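If you don't have the sample files handy, the scaffolded context generally looks something like the following (a sketch; the generated member names and connection string may differ):

```csharp
public partial class BloggingContext : DbContext
{
    public virtual DbSet<Blog> Blog { get; set; }
    public virtual DbSet<Post> Post { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            // Scaffolding embeds the connection string used above; move it to
            // configuration for production code.
            optionsBuilder.UseSqlServer(
                "Server=(localdb)\\mssqllocaldb;Database=Blogging;Trusted_Connection=True;");
        }
    }
}
```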
## <a name="use-the-model"></a>Use the model
You can now use the model to perform data access.
* Open *Program.cs*
* Replace the contents of the file with the following code:
[!code-csharp[Main](../../../../samples/core/GetStarted/FullNet/ConsoleApp.ExistingDb/Program.cs)]
* Select **Debug -> Start Without Debugging**
You will see that one blog is saved to the database, and then the details of all blogs are printed to the console.
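For reference, the sample's *Program.cs* is along these lines (a sketch, not the literal file):

```csharp
class Program
{
    static void Main()
    {
        using (var db = new BloggingContext())
        {
            // Save one blog, then list everything in the table.
            db.Blog.Add(new Blog { Url = "http://blogs.msdn.com/adonet" });
            var count = db.SaveChanges();
            Console.WriteLine("{0} records saved to database", count);

            Console.WriteLine("All blogs in database:");
            foreach (var blog in db.Blog)
            {
                Console.WriteLine(" - {0}", blog.Url);
            }
        }
    }
}
```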

## <a name="additional-resources"></a>Additional resources
* [EF Core on .NET Framework with a new database](xref:core/get-started/full-dotnet/new-db)
* [EF Core on .NET Core with a new database — SQLite](xref:core/get-started/netcore/new-db-sqlite). A cross-platform console EF tutorial.
| 46.404762 | 272 | 0.775782 | rus_Cyrl | 0.850148 |
b932653cd9351df6879a4c6c4517c871a8118672 | 3,425 | md | Markdown | README.md | Quentin123456/UIPickerViewDemo | 9782ad95e50efe9d141262a14bbb7005e3307fd7 | [
"MIT"
] | null | null | null | README.md | Quentin123456/UIPickerViewDemo | 9782ad95e50efe9d141262a14bbb7005e3307fd7 | [
"MIT"
] | null | null | null | README.md | Quentin123456/UIPickerViewDemo | 9782ad95e50efe9d141262a14bbb7005e3307fd7 | [
"MIT"
] | null | null | null | # UIPickerViewDemo
This is a demo of UIPickerView, implemented in two programming languages: Swift and Objective-C.
Objective-C version:

```objc
// Lazily create the picker view
- (UIPickerView *)pickerView {
    if (_pickerView == nil) {
        self.pickerView = [[UIPickerView alloc]initWithFrame:CGRectMake(0, 44, self.view.frame.size.width, 400)];
        _pickerView.layer.masksToBounds = YES;
        _pickerView.layer.borderWidth = 1;
        _pickerView.delegate = self;
        _pickerView.dataSource = self;
    }
    return _pickerView;
}
```
Implementing the UIPickerView delegate and data source methods:

```objc
// Number of columns (components)
- (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView {
    return 2;
}
// Number of rows in the given component
- (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
    if (component == 0) {
        return self.provinceArray.count;
    }
    return [self.dictionary[self.selectedProvince] count];
}
// Title displayed for each row
- (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
    if (component == 0) {
        return self.provinceArray[row];
    }
    return [self.dictionary[self.selectedProvince] objectAtIndex:row];
}
// Called when the user selects a row
- (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
    if (component == 0) {
        self.selectedProvince = self.provinceArray[row];
        [self.pickerView reloadComponent:1];
        // Always reset the second component to its first row
        [self.pickerView selectRow:0 inComponent:1 animated:YES];
    }
}
```
Swift version:

Set up the properties:

```swift
var label: UILabel!
var pickerView: UIPickerView!
// Full data set: first-level keys map to their second-level options
var pickerData: [String: [String]] = ["Vacation": ["Write code", "Play games", "Go on dates"], "Travel": ["Maldives", "Mars", "Dubai", "The Moon"], "Work": ["Overtime", "No overtime"]]
var pickerFirstData: [String] = ["Vacation", "Travel", "Work"] // first-level data
var pickerSecondData: [String] = ["Write code", "Play games", "Go on dates"] // second-level data
```
Implementing the UIPickerView delegate and data source methods:

```swift
// Number of components, from UIPickerViewDataSource
func numberOfComponents(in pickerView: UIPickerView) -> Int {
    return 2
}
// Number of rows per component, from UIPickerViewDataSource
func pickerView(_ pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
    // The row count matches the corresponding data source's length;
    // component 0 is the first column, component 1 is the second
    if component == 0 {
        return self.pickerFirstData.count
    } else {
        return self.pickerSecondData.count
    }
}
// Title for each row, from UIPickerViewDelegate
func pickerView(_ pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
    if component == 0 { // first-level data
        return self.pickerFirstData[row]
    } else { // second-level data
        return self.pickerSecondData[row]
    }
}
// Height of each row
func pickerView(_ pickerView: UIPickerView, rowHeightForComponent component: Int) -> CGFloat {
    return 45
}
// Respond to a row selection
func pickerView(_ pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
    if component == 0 {
        // Record the value the user selected
        let selectedFirst = self.pickerFirstData[row] as String
        // Load the second component's data based on the first component's selection
        self.pickerSecondData = self.pickerData[selectedFirst]!
        // Reload the second component
        self.pickerView.reloadComponent(1)
        // After reloading, scroll the second component back to row 0, animated
        self.pickerView.selectRow(0, inComponent: 1, animated: true)
    }
}
```
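For completeness, a minimal sketch of the wiring this demo assumes (the frame and placement are illustrative):

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    pickerView = UIPickerView(frame: CGRect(x: 0, y: 100, width: view.bounds.width, height: 216))
    pickerView.dataSource = self
    pickerView.delegate = self
    view.addSubview(pickerView)
}
```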
| 30.309735 | 114 | 0.650511 | yue_Hant | 0.750019 |
b932d2d811b58b9db0232f8efdaf966657f69b14 | 989 | md | Markdown | docs/interfaces/_utils_flexible_box_layout_flexbasis_.flexbasisprops.md | johanneslumpe/styled-props | 1249ed6a7532272803b227b3b38902b059f145dd | [
"MIT"
] | 7 | 2018-08-12T00:55:48.000Z | 2020-07-24T14:27:26.000Z | docs/interfaces/_utils_flexible_box_layout_flexbasis_.flexbasisprops.md | johanneslumpe/styled-props | 1249ed6a7532272803b227b3b38902b059f145dd | [
"MIT"
] | 28 | 2018-08-13T23:09:35.000Z | 2022-02-26T07:55:43.000Z | docs/interfaces/_utils_flexible_box_layout_flexbasis_.flexbasisprops.md | johanneslumpe/styled-props | 1249ed6a7532272803b227b3b38902b059f145dd | [
"MIT"
] | 1 | 2018-11-29T16:09:03.000Z | 2018-11-29T16:09:03.000Z | [@johanneslumpe/styled-props](../README.md) > ["utils/flexible-box-layout/flexBasis"](../modules/_utils_flexible_box_layout_flexbasis_.md) > [FlexBasisProps](../interfaces/_utils_flexible_box_layout_flexbasis_.flexbasisprops.md)
# Interface: FlexBasisProps
## Type parameters
#### T
## Hierarchy
**FlexBasisProps**
## Index
### Properties
* [style$FlexBasis](_utils_flexible_box_layout_flexbasis_.flexbasisprops.md#style_flexbasis)
---
## Properties
<a id="style_flexbasis"></a>
### style$FlexBasis
**● style$FlexBasis**: *`T`*
*Defined in [utils/flexible-box-layout/flexBasis.ts:12](https://github.com/johanneslumpe/styled-props/blob/8e709f1/src/utils/flexible-box-layout/flexBasis.ts#L12)*
The **`flex-basis`** CSS property sets the initial main size of a flex item. It sets the size of the content box unless otherwise set with `box-sizing`.
*__see__*: [https://developer.mozilla.org/docs/Web/CSS/flex-basis](https://developer.mozilla.org/docs/Web/CSS/flex-basis)
___
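A hypothetical usage sketch of this interface (the import path and the way the prop is consumed by the package's `flexBasis` util are assumptions):

```ts
// Import path is illustrative; the `style$` prefix namespaces styling
// props away from regular DOM props.
import { FlexBasisProps } from '@johanneslumpe/styled-props';

const props: FlexBasisProps<string> = {
  style$FlexBasis: '50%', // any valid CSS flex-basis value
};
```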
| 28.257143 | 228 | 0.753286 | yue_Hant | 0.234065 |
b9330f4cf84a950dc7172733580cbc1f4a144ea7 | 3,735 | md | Markdown | api/Project.OutlineCode.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-08T20:10:22.000Z | 2021-04-08T20:10:22.000Z | api/Project.OutlineCode.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:52:15.000Z | 2021-09-28T07:52:15.000Z | api/Project.OutlineCode.md | skucab/VBA-Docs | 2912fe0343ddeef19007524ac662d3fcb8c0df09 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-09-28T07:45:29.000Z | 2021-09-28T07:45:29.000Z | ---
title: OutlineCode Object (Project)
ms.prod: project-server
api_name:
- Project.OutlineCode
ms.assetid: 8f75bdd3-ed5b-ed0f-9c3c-85af3a21580c
ms.date: 06/08/2017
localization_priority: Normal
---
# OutlineCode Object (Project)
Represents a local outline code in Project. The **OutlineCode** object is a member of the **[OutlineCodes](Project.outlinecodes(object).md)** collection.
**Using the OutlineCode Object**
The following example adds a custom outline code to store the location of resources and configures the outline code so that only values specified in the lookup table can be associated with a resource.
```vb
Sub CreateLocationOutlineCode()
Dim objOutlineCode As OutlineCode
Set objOutlineCode = ActiveProject.OutlineCodes.Add( _
pjCustomResourceOutlineCode1, "Location")
objOutlineCode.OnlyLookUpTableCodes = True
DefineLocationCodeMask objOutlineCode.CodeMask
EditLocationLookupTable objOutlineCode.LookupTable
End Sub
Sub DefineLocationCodeMask(objCodeMask As CodeMask)
objCodeMask.Add _
Sequence:=pjCustomOutlineCodeUppercaseLetters, _
Length:=2, Separator:="."
objCodeMask.Add _
Sequence:=pjCustomOutlineCodeUppercaseLetters, _
Separator:="."
objCodeMask.Add _
Sequence:=pjCustomOutlineCodeUppercaseLetters, _
Length:=3, Separator:="."
End Sub
Sub EditLocationLookupTable(objLookupTable As LookupTable)
Dim objStateEntry As LookupTableEntry
Dim objCountyEntry As LookupTableEntry
Dim objCityEntry As LookupTableEntry
Set objStateEntry = objLookupTable.AddChild("WA")
objStateEntry.Description = "Washington"
Set objCountyEntry = objLookupTable.AddChild("KING", _
objStateEntry.UniqueID)
objCountyEntry.Description = "King County"
Set objCityEntry = objLookupTable.AddChild("SEA", _
objCountyEntry.UniqueID)
objCityEntry.Description = "Seattle"
Set objCityEntry = objLookupTable.AddChild("RED", _
objCountyEntry.UniqueID)
objCityEntry.Description = "Redmond"
Set objCityEntry = objLookupTable.AddChild("KIR", _
objCountyEntry.UniqueID)
objCityEntry.Description = "Kirkland"
End Sub
```
## Remarks
An outline code is a type of local custom field that has a hierarchical text lookup table. Enterprise custom fields of type **Text** that have hierarchical lookup tables act as outline codes. Use the **[OutlineCodes](Project.Project.OutlineCodes.md)** property to return an **OutlineCodes** collection. Use the **[Add](Project.OutlineCodes.Add.md)** method to add a local outline code to the **OutlineCodes** collection. To add an enterprise custom field, you must use Project Web App or the Project Server Interface (PSI).
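For example, the following hypothetical routine uses the documented **Name** and **FieldID** properties to list the local outline codes in the active project:

```vb
Sub ListLocalOutlineCodes()
    Dim objOutlineCode As OutlineCode
    For Each objOutlineCode In ActiveProject.OutlineCodes
        Debug.Print objOutlineCode.Name & " (FieldID: " & objOutlineCode.FieldID & ")"
    Next objOutlineCode
End Sub
```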
## Methods
|Name|
|:-----|
|[Delete](Project.OutlineCode.Delete.md)|
## Properties
|Name|
|:-----|
|[Application](Project.OutlineCode.Application.md)|
|[CodeMask](Project.OutlineCode.CodeMask.md)|
|[DefaultValue](Project.OutlineCode.DefaultValue.md)|
|[FieldID](Project.OutlineCode.FieldID.md)|
|[Index](Project.OutlineCode.Index.md)|
|[LinkedFieldID](Project.OutlineCode.LinkedFieldID.md)|
|[LookupTable](Project.OutlineCode.LookupTable.md)|
|[MatchGeneric](Project.OutlineCode.MatchGeneric.md)|
|[Name](Project.OutlineCode.Name.md)|
|[OnlyCompleteCodes](Project.OutlineCode.OnlyCompleteCodes.md)|
|[OnlyLeaves](Project.OutlineCode.OnlyLeaves.md)|
|[OnlyLookUpTableCodes](Project.OutlineCode.OnlyLookUpTableCodes.md)|
|[Parent](Project.OutlineCode.Parent.md)|
|[RequiredCode](Project.OutlineCode.RequiredCode.md)|
|[SortOrder](Project.OutlineCode.SortOrder.md)|
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 31.386555 | 524 | 0.748059 | yue_Hant | 0.915399 |
b933f6ae18360dbe969b6656ebd40344e41a6116 | 221 | md | Markdown | _posts/2009-06-19-t2236864843.md | craigwmcclellan/craigmcclellan.github.io | bd9432ea299f1141442b9ba90eb3aa001984c20d | [
"MIT"
] | 1 | 2018-08-04T15:31:00.000Z | 2018-08-04T15:31:00.000Z | _posts/2009-06-19-t2236864843.md | craigwmcclellan/craigmcclellan.github.io | bd9432ea299f1141442b9ba90eb3aa001984c20d | [
"MIT"
] | null | null | null | _posts/2009-06-19-t2236864843.md | craigwmcclellan/craigmcclellan.github.io | bd9432ea299f1141442b9ba90eb3aa001984c20d | [
"MIT"
] | 1 | 2018-08-04T15:31:03.000Z | 2018-08-04T15:31:03.000Z | ---
layout: post
microblog: true
audio:
photo:
date: 2009-06-18 18:00:00 -0600
guid: http://craigmcclellan.micro.blog/2009/06/19/t2236864843.html
---
The people from Griffin are here handing out free hot Krispy Kremes.
| 22.1 | 68 | 0.746606 | eng_Latn | 0.734798 |
b93436060f46920a0610666a43b317fca9b585b7 | 10,236 | md | Markdown | content/v1/reference/goa/client.md | aarongreenlee/goa.design | 75a34f9b73251a5bdf1283b3bf58e039cba28299 | [
"MIT"
] | 28 | 2016-01-31T01:19:22.000Z | 2021-12-10T06:40:16.000Z | content/v1/reference/goa/client.md | aarongreenlee/goa.design | 75a34f9b73251a5bdf1283b3bf58e039cba28299 | [
"MIT"
] | 118 | 2016-03-11T20:05:46.000Z | 2022-02-04T17:05:31.000Z | content/v1/reference/goa/client.md | aarongreenlee/goa.design | 75a34f9b73251a5bdf1283b3bf58e039cba28299 | [
"MIT"
] | 54 | 2016-02-02T17:12:38.000Z | 2022-02-03T21:56:58.000Z | +++
date="2019-05-09T20:22:43-07:00"
description="github.com/goadesign/goa/client"
+++
# client
`import "github.com/goadesign/goa/client"`
* [Overview](#pkg-overview)
* [Index](#pkg-index)
## <a name="pkg-overview">Overview</a>
## <a name="pkg-index">Index</a>
* [func ContextRequestID(ctx context.Context) string](#ContextRequestID)
* [func ContextWithRequestID(ctx context.Context) (context.Context, string)](#ContextWithRequestID)
* [func HandleResponse(c *Client, resp *http.Response, pretty bool)](#HandleResponse)
* [func SetContextRequestID(ctx context.Context, reqID string) context.Context](#SetContextRequestID)
* [func WSRead(ws *websocket.Conn)](#WSRead)
* [func WSWrite(ws *websocket.Conn)](#WSWrite)
* [type APIKeySigner](#APIKeySigner)
* [func (s *APIKeySigner) Sign(req *http.Request) error](#APIKeySigner.Sign)
* [type BasicSigner](#BasicSigner)
* [func (s *BasicSigner) Sign(req *http.Request) error](#BasicSigner.Sign)
* [type Client](#Client)
* [func New(c Doer) *Client](#New)
* [func (c *Client) Do(ctx context.Context, req *http.Request) (*http.Response, error)](#Client.Do)
* [type Doer](#Doer)
* [func HTTPClientDoer(hc *http.Client) Doer](#HTTPClientDoer)
* [type JWTSigner](#JWTSigner)
* [func (s *JWTSigner) Sign(req *http.Request) error](#JWTSigner.Sign)
* [type OAuth2Signer](#OAuth2Signer)
* [func (s *OAuth2Signer) Sign(req *http.Request) error](#OAuth2Signer.Sign)
* [type Signer](#Signer)
* [type StaticToken](#StaticToken)
* [func (t *StaticToken) SetAuthHeader(r *http.Request)](#StaticToken.SetAuthHeader)
* [func (t *StaticToken) Valid() bool](#StaticToken.Valid)
* [type StaticTokenSource](#StaticTokenSource)
* [func (s *StaticTokenSource) Token() (Token, error)](#StaticTokenSource.Token)
* [type Token](#Token)
* [type TokenSource](#TokenSource)
#### <a name="pkg-files">Package files</a>
[cli.go](/src/github.com/goadesign/goa/client/cli.go) [client.go](/src/github.com/goadesign/goa/client/client.go) [signers.go](/src/github.com/goadesign/goa/client/signers.go)
## <a name="ContextRequestID">func</a> [ContextRequestID](/src/target/client.go?s=6270:6319#L228)
``` go
func ContextRequestID(ctx context.Context) string
```
ContextRequestID extracts the Request ID from the context.
## <a name="ContextWithRequestID">func</a> [ContextWithRequestID](/src/target/client.go?s=6565:6637#L239)
``` go
func ContextWithRequestID(ctx context.Context) (context.Context, string)
```
ContextWithRequestID returns ctx and the request ID if it already has one or creates and returns a new context with
a new request ID.
## <a name="HandleResponse">func</a> [HandleResponse](/src/target/cli.go?s=420:484#L23)
``` go
func HandleResponse(c *Client, resp *http.Response, pretty bool)
```
HandleResponse logs the response details and exits the process with a status computed from
the response status code. The mapping of response status code to exit status is as follows:
```
401: 1
402 to 500 (other than 403 and 404): 2
403: 3
404: 4
500+: 5
```
## <a name="SetContextRequestID">func</a> [SetContextRequestID](/src/target/client.go?s=6872:6947#L249)
``` go
func SetContextRequestID(ctx context.Context, reqID string) context.Context
```
SetContextRequestID sets a request ID in the given context and returns a new context.
## <a name="WSRead">func</a> [WSRead](/src/target/cli.go?s=1901:1932#L87)
``` go
func WSRead(ws *websocket.Conn)
```
WSRead reads from a websocket and prints the read messages to STDOUT.
## <a name="WSWrite">func</a> [WSWrite](/src/target/cli.go?s=1656:1688#L77)
``` go
func WSWrite(ws *websocket.Conn)
```
WSWrite sends STDIN lines to a websocket server.
## <a name="APIKeySigner">type</a> [APIKeySigner](/src/target/signers.go?s=455:875#L24)
``` go
type APIKeySigner struct {
// SignQuery indicates whether to set the API key in the URL query with key KeyName
// or whether to use a header with name KeyName.
SignQuery bool
// KeyName is the name of the HTTP header or query string that contains the API key.
KeyName string
// KeyValue stores the actual key.
KeyValue string
// Format is the format used to render the key, e.g. "Bearer %s"
Format string
}
```
APIKeySigner implements API Key auth.
### <a name="APIKeySigner.Sign">func</a> (\*APIKeySigner) [Sign](/src/target/signers.go?s=2673:2725#L93)
``` go
func (s *APIKeySigner) Sign(req *http.Request) error
```
Sign adds the API key header to the request.
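A minimal sketch of using this signer (the header name, key value, and request are illustrative):

``` go
req, _ := http.NewRequest("GET", "https://api.example.com/resource", nil)
signer := &client.APIKeySigner{
    SignQuery: false,
    KeyName:   "Authorization",
    KeyValue:  "my-secret-key",
    Format:    "Bearer %s",
}
// Sign mutates req, adding the API key header.
if err := signer.Sign(req); err != nil {
    // handle error
}
```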
## <a name="BasicSigner">type</a> [BasicSigner](/src/target/signers.go?s=255:410#L16)
``` go
type BasicSigner struct {
// Username is the basic auth user.
Username string
// Password is err guess what? the basic auth password.
Password string
}
```
BasicSigner implements basic auth.
### <a name="BasicSigner.Sign">func</a> (\*BasicSigner) [Sign](/src/target/signers.go?s=2467:2518#L85)
``` go
func (s *BasicSigner) Sign(req *http.Request) error
```
Sign adds the basic auth header to the request.
## <a name="Client">type</a> [Client](/src/target/client.go?s=390:724#L25)
``` go
type Client struct {
// Doer is the underlying http client.
Doer
// Scheme overrides the default action scheme.
Scheme string
// Host is the service hostname.
Host string
// UserAgent is the user agent set in requests made by the client.
UserAgent string
// Dump indicates whether to dump request response.
Dump bool
}
```
Client is the common client data structure for all goa service clients.
### <a name="New">func</a> [New](/src/target/client.go?s=836:860#L41)
``` go
func New(c Doer) *Client
```
New creates a new API client that wraps c.
If c is nil, the returned client wraps http.DefaultClient.
### <a name="Client.Do">func</a> (\*Client) [Do](/src/target/client.go?s=1605:1688#L65)
``` go
func (c *Client) Do(ctx context.Context, req *http.Request) (*http.Response, error)
```
Do wraps the underlying http client Do method and adds logging.
The logger should be in the context.
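Putting New and Do together, a minimal sketch of a caller (the host and endpoint are illustrative):

``` go
c := client.New(client.HTTPClientDoer(http.DefaultClient))
c.Scheme = "https"
c.Host = "api.example.com"
c.UserAgent = "my-cli/1.0"

req, _ := http.NewRequest("GET", "https://api.example.com/status", nil)
resp, err := c.Do(context.Background(), req)
if err != nil {
    // handle error
}
defer resp.Body.Close()
```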
## <a name="Doer">type</a> [Doer](/src/target/client.go?s=231:311#L20)
``` go
type Doer interface {
Do(context.Context, *http.Request) (*http.Response, error)
}
```
Doer defines the Do method of the http client.
### <a name="HTTPClientDoer">func</a> [HTTPClientDoer](/src/target/client.go?s=1060:1101#L49)
``` go
func HTTPClientDoer(hc *http.Client) Doer
```
HTTPClientDoer turns a stdlib http.Client into a Doer. Use it to enable to call New() with an http.Client.
## <a name="JWTSigner">type</a> [JWTSigner](/src/target/signers.go?s=924:1123#L37)
``` go
type JWTSigner struct {
// TokenSource is a JWT token source.
// See https://godoc.org/golang.org/x/oauth2/jwt#Config.TokenSource for an example
// of an implementation.
TokenSource TokenSource
}
```
JWTSigner implements JSON Web Token auth.
### <a name="JWTSigner.Sign">func</a> (\*JWTSigner) [Sign](/src/target/signers.go?s=3118:3167#L114)
``` go
func (s *JWTSigner) Sign(req *http.Request) error
```
Sign adds the JWT auth header.
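A sketch that wires the StaticTokenSource documented below into a JWTSigner (the token value is illustrative):

``` go
source := &client.StaticTokenSource{
    StaticToken: &client.StaticToken{Value: "eyJhbGciOi...", Type: "Bearer"},
}
signer := &client.JWTSigner{TokenSource: source}
if err := signer.Sign(req); err != nil {
    // handle error
}
```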
## <a name="OAuth2Signer">type</a> [OAuth2Signer](/src/target/signers.go?s=1255:1449#L46)
``` go
type OAuth2Signer struct {
// TokenSource is an OAuth2 access token source.
// See package golang/oauth2 and its subpackage for implementations of token
// sources.
TokenSource TokenSource
}
```
OAuth2Signer adds a authorization header to the request using the given OAuth2 token
source to produce the header value.
### <a name="OAuth2Signer.Sign">func</a> (\*OAuth2Signer) [Sign](/src/target/signers.go?s=3288:3340#L119)
``` go
func (s *OAuth2Signer) Sign(req *http.Request) error
```
Sign refreshes the access token if needed and adds the OAuth header.
## <a name="Signer">type</a> [Signer](/src/target/signers.go?s=118:213#L10)
``` go
type Signer interface {
// Sign adds required headers, cookies etc.
Sign(*http.Request) error
}
```
Signer is the common interface implemented by all signers.
## <a name="StaticToken">type</a> [StaticToken](/src/target/signers.go?s=2281:2412#L76)
``` go
type StaticToken struct {
// Value used to set the auth header.
Value string
// OAuth type, defaults to "Bearer".
Type string
}
```
StaticToken implements a token that sets the auth header with a given static value.
### <a name="StaticToken.SetAuthHeader">func</a> (\*StaticToken) [SetAuthHeader](/src/target/signers.go?s=3895:3947#L142)
``` go
func (t *StaticToken) SetAuthHeader(r *http.Request)
```
SetAuthHeader sets the Authorization header to r.
### <a name="StaticToken.Valid">func</a> (\*StaticToken) [Valid](/src/target/signers.go?s=4122:4156#L151)
``` go
func (t *StaticToken) Valid() bool
```
Valid reports whether Token can be used to properly sign requests.
## <a name="StaticTokenSource">type</a> [StaticTokenSource](/src/target/signers.go?s=2134:2190#L71)
``` go
type StaticTokenSource struct {
StaticToken *StaticToken
}
```
StaticTokenSource implements a token source that always returns the same token.
### <a name="StaticTokenSource.Token">func</a> (\*StaticTokenSource) [Token](/src/target/signers.go?s=3759:3809#L137)
``` go
func (s *StaticTokenSource) Token() (Token, error)
```
Token returns the static token.
## <a name="Token">type</a> [Token](/src/target/signers.go?s=1590:1785#L55)
``` go
type Token interface {
// SetAuthHeader sets the Authorization header to r.
SetAuthHeader(r *http.Request)
// Valid reports whether Token can be used to properly sign requests.
Valid() bool
}
```
Token is the interface to an OAuth2 token implementation.
It can be implemented with <a href="https://godoc.org/golang.org/x/oauth2#Token">https://godoc.org/golang.org/x/oauth2#Token</a>.
## <a name="TokenSource">type</a> [TokenSource](/src/target/signers.go?s=1843:2047#L63)
``` go
type TokenSource interface {
// Token returns a token or an error.
// Token must be safe for concurrent use by multiple goroutines.
// The returned Token must not be modified.
Token() (Token, error)
}
```
A TokenSource is anything that can return a token.
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
| 23.369863 | 176 | 0.702618 | eng_Latn | 0.325348 |
b9344fc004477a659eee969904129382a8d778c0 | 3,569 | md | Markdown | README.md | josap/querybuilder | cc640dae9d84406ee3ad4b1dbeab972711e09b67 | [
"MIT"
] | null | null | null | README.md | josap/querybuilder | cc640dae9d84406ee3ad4b1dbeab972711e09b67 | [
"MIT"
] | null | null | null | README.md | josap/querybuilder | cc640dae9d84406ee3ad4b1dbeab972711e09b67 | [
"MIT"
] | null | null | null | # SqlKata Query Builder
[](https://ci.appveyor.com/project/ahmad-moussawi/querybuilder)
[](https://www.nuget.org/packages/SqlKata)
[](https://www.myget.org/feed/sqlkata/package/nuget/SqlKata)
<a href="https://twitter.com/ahmadmuzavi?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-size="large" data-show-count="false">Follow @ahmadmuzavi</a> for the latest updates about SqlKata.

SqlKata Query Builder is a powerful SQL query builder written in C#.
It's secure and framework-agnostic, inspired by the top query builders available, such as Laravel Query Builder and Knex.
SqlKata has an expressive API. It follows a clean naming convention that is very similar to SQL syntax.
It provides a level of abstraction over the supported database engines, which allows you to work with multiple databases through the same unified API.
SqlKata supports complex queries, such as nested conditions, selection from subqueries, filtering over subqueries, conditional statements, and others. Currently it has built-in compilers for SqlServer, MySql, PostgreSql and Firebird.
Check out the full documentation at [https://sqlkata.com](https://sqlkata.com)
## Installation
using dotnet cli
```sh
$ dotnet add package SqlKata
```
using Nuget Package Manager
```sh
PM> Install-Package SqlKata
```
## Quick Examples
### Add RowNumber Column *(New)*
```cs
Query query = new Query();
query.Select("Id", "Name").From("MyTable").OrderBy("Name").AddRowNumber("RowNumber");
SqlKata.Compilers.SqlServerCompiler sqlServerCompiler = new SqlKata.Compilers.SqlServerCompiler();
string sqlResult = sqlServerCompiler.Compile(query).ToString();
//sqlResult -> SELECT ROW_NUMBER() OVER (ORDER BY [Name]) AS [RowNumber], [Id], [Name] FROM [MyTable]
```
### Setup Connection
```cs
var connection = new SqlConnection("...");
var compiler = new SqlKata.Compilers.SqlServerCompiler();
var db = new QueryFactory(connection, compiler);
```
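Note: `QueryFactory`, used above and in the execution examples below, lives in the separate `SqlKata.Execution` package, so install it alongside `SqlKata` if you want to execute queries rather than just compile them:

```sh
PM> Install-Package SqlKata.Execution
```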
### Get all records
```cs
var books = db.Query("Books").Get();
```
### Published books only
```cs
var books = db.Query("Books").WhereTrue("IsPublished").Get();
```
### Get one book by Id
```cs
var introToSql = db.Query("Books").Where("Id", 145).Where("Lang", "en").First();
```
### Recent books: last 10
```cs
var recent = db.Query("Books").OrderByDesc("PublishedAt").Limit(10).Get();
```
### Join with authors table
```cs
var books = db.Query("Books")
.Join("Authors", "Authors.Id", "Books.AuthorId")
.Select("Books.*", "Authors.Name as AuthorName")
.Get();
foreach(var book in books)
{
Console.WriteLine($"{book.Title}: {book.AuthorName}");
}
```
### Conditional queries
```cs
var isFriday = DateTime.Today.DayOfWeek == DayOfWeek.Friday;
var books = db.Query("Books")
.When(isFriday, q => q.WhereIn("Category", new [] {"OpenSource", "MachineLearning"}))
.Get();
```
### Pagination
```cs
var page1 = db.Query("Books").Paginate(10);
foreach(var book in page1.List)
{
Console.WriteLine(book.Name);
}
...
var page2 = page1.Next();
```
### Insert
```cs
int affected = db.Query("Users").Insert(new {
Name = "Jane",
CountryId = 1
});
```
### Update
```cs
int affected = db.Query("Users").Where("Id", 1).Update(new {
Name = "Jane",
CountryId = 1
});
```
### Delete
```cs
int affected = db.Query("Users").Where("Id", 1).Delete();
```
| 25.492857 | 230 | 0.699075 | yue_Hant | 0.357498 |