hexsha
stringlengths 40
40
| size
int64 5
1.04M
| ext
stringclasses 6
values | lang
stringclasses 1
value | max_stars_repo_path
stringlengths 3
344
| max_stars_repo_name
stringlengths 5
125
| max_stars_repo_head_hexsha
stringlengths 40
78
| max_stars_repo_licenses
sequencelengths 1
11
| max_stars_count
int64 1
368k
⌀ | max_stars_repo_stars_event_min_datetime
stringlengths 24
24
⌀ | max_stars_repo_stars_event_max_datetime
stringlengths 24
24
⌀ | max_issues_repo_path
stringlengths 3
344
| max_issues_repo_name
stringlengths 5
125
| max_issues_repo_head_hexsha
stringlengths 40
78
| max_issues_repo_licenses
sequencelengths 1
11
| max_issues_count
int64 1
116k
⌀ | max_issues_repo_issues_event_min_datetime
stringlengths 24
24
⌀ | max_issues_repo_issues_event_max_datetime
stringlengths 24
24
⌀ | max_forks_repo_path
stringlengths 3
344
| max_forks_repo_name
stringlengths 5
125
| max_forks_repo_head_hexsha
stringlengths 40
78
| max_forks_repo_licenses
sequencelengths 1
11
| max_forks_count
int64 1
105k
⌀ | max_forks_repo_forks_event_min_datetime
stringlengths 24
24
⌀ | max_forks_repo_forks_event_max_datetime
stringlengths 24
24
⌀ | content
stringlengths 5
1.04M
| avg_line_length
float64 1.14
851k
| max_line_length
int64 1
1.03M
| alphanum_fraction
float64 0
1
| lid
stringclasses 191
values | lid_prob
float64 0.01
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7aee0c23ea44114bc322c7b719b31f1231a18f07 | 1,656 | md | Markdown | website/docs/components/processors/number.md | ianaz/benthos | 35c3fa8c3486a61e51222f1f3244e9cb9036e030 | [
"MIT"
] | null | null | null | website/docs/components/processors/number.md | ianaz/benthos | 35c3fa8c3486a61e51222f1f3244e9cb9036e030 | [
"MIT"
] | null | null | null | website/docs/components/processors/number.md | ianaz/benthos | 35c3fa8c3486a61e51222f1f3244e9cb9036e030 | [
"MIT"
] | null | null | null | ---
title: number
type: processor
deprecated: true
---
<!--
THIS FILE IS AUTOGENERATED!
To make changes please edit the contents of:
lib/processor/number.go
-->
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
:::warning DEPRECATED
This component is deprecated and will be removed in the next major version release. Please consider moving onto [alternative components](#alternatives).
:::
<Tabs defaultValue="common" values={[
{ label: 'Common', value: 'common', },
{ label: 'Advanced', value: 'advanced', },
]}>
<TabItem value="common">
```yaml
# Common config fields, showing default values
number:
operator: add
value: 0
```
</TabItem>
<TabItem value="advanced">
```yaml
# All config fields, showing default values
number:
operator: add
value: 0
parts: []
```
</TabItem>
</Tabs>
## Fields
### `operator`
The [operator](#operators) to apply.
Type: `string`
Default: `"add"`
### `value`
A value used by the operator.
This field supports [interpolation functions](/docs/configuration/interpolation#bloblang-queries).
Type: `number`
Default: `0`
### `parts`
An optional array of message indexes of a batch that the processor should apply to.
If left empty all messages are processed. This field is only applicable when
batching messages [at the input level](/docs/configuration/batching).
Indexes can be negative, and if so the part will be selected from the end
counting backwards starting from -1.
Type: `array`
Default: `[]`
## Alternatives
All functionality of this processor has been superseded by the
[bloblang](/docs/components/processors/bloblang) processor.
| 19.034483 | 152 | 0.710749 | eng_Latn | 0.960393 |
7aeea624b95a219f5af075f5813cd9182629b250 | 137 | md | Markdown | README.md | Paybyway/visma-pay-magento2 | eff093e5a05ea700674725922dbf4b8eb3ba6d4b | [
"MIT"
] | null | null | null | README.md | Paybyway/visma-pay-magento2 | eff093e5a05ea700674725922dbf4b8eb3ba6d4b | [
"MIT"
] | null | null | null | README.md | Paybyway/visma-pay-magento2 | eff093e5a05ea700674725922dbf4b8eb3ba6d4b | [
"MIT"
] | null | null | null | # Visma Pay for Magento 2
See Visma Pay documentation: https://www.vismapay.com/docs
For support please contact: [email protected]
| 22.833333 | 58 | 0.788321 | kor_Hang | 0.548288 |
7aef49afab055e35a4109dcf8485788fe5b8eeba | 1,908 | md | Markdown | _posts/2017-05-14-Lafemme-Evening-Dresses-Style-21400.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | _posts/2017-05-14-Lafemme-Evening-Dresses-Style-21400.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | _posts/2017-05-14-Lafemme-Evening-Dresses-Style-21400.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | ---
layout: post
date: 2017-05-14
title: "Lafemme Evening Dresses Style 21400"
category: Lafemme
tags: [Lafemme]
---
### Lafemme Evening Dresses Style 21400
Just **$379.99**
###
<table><tr><td>BRANDS</td><td>Lafemme</td></tr></table>
<a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191607/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191608/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191609/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191610/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191611/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html"><img src="//img.readybrides.com/191606/lafemme-evening-dresses-style-21400.jpg" alt="Lafemme Evening Dresses Style 21400" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html](https://www.readybrides.com/en/lafemme/77624-lafemme-evening-dresses-style-21400.html)
| 95.4 | 258 | 0.751048 | yue_Hant | 0.289801 |
7aefb95e72261f2a1b01502ad398771619532d1a | 1,160 | md | Markdown | serving-web-content/README.md | rockgarden/spring_demo | d249cd25cf91b26e6c13fc1c1bfdd737cb1d171f | [
"MIT"
] | null | null | null | serving-web-content/README.md | rockgarden/spring_demo | d249cd25cf91b26e6c13fc1c1bfdd737cb1d171f | [
"MIT"
] | 2 | 2021-06-28T16:59:51.000Z | 2021-08-30T16:28:29.000Z | serving-web-content/README.md | rockgarden/spring_demo | d249cd25cf91b26e6c13fc1c1bfdd737cb1d171f | [
"MIT"
] | null | null | null | Thymeleaf parses the greeting.html template and evaluates the th: expression to render the value of the ${xxx} parameter that was set in the controller.
在Spring Boot配置文件中可对Thymeleaf的默认配置进行修改:
```
# 开启模板缓存(默认值:true)
spring.thymeleaf.cache=true
# Check that the template exists before rendering it.
spring.thymeleaf.check-template=true
# 检查模板位置是否正确(默认值:true)
spring.thymeleaf.check-template-location=true
# Content-Type的值(默认值:text/html)
spring.thymeleaf.content-type=text/html
# 开启MVC Thymeleaf视图解析(默认值:true)
spring.thymeleaf.enabled=true
# 模板编码
spring.thymeleaf.encoding=UTF-8
# 要被排除在解析之外的视图名称列表,用逗号分隔
spring.thymeleaf.excluded-view-names=
# 要运用于模板之上的模板模式。另见StandardTemplate-ModeHandlers(默认值:HTML5)
spring.thymeleaf.mode=HTML5
# 在构建URL时添加到视图名称前的前缀(默认值:classpath:/templates/)
spring.thymeleaf.prefix=classpath:/templates/
# 在构建URL时添加到视图名称后的后缀(默认值:.html)
spring.thymeleaf.suffix=.html
# Thymeleaf模板解析器在解析器链中的顺序。默认情况下,它排第一位。顺序从1开始,只有在定义了额外的TemplateResolver Bean时才需要设置这个属性。
spring.thymeleaf.template-resolver-order=
# 可解析的视图名称列表,用逗号分隔
spring.thymeleaf.view-names=
```
Spring Boot中,默认的html页面地址为src/main/resources/templates,默认的静态资源地址为src/main/resources/static。
| 35.151515 | 153 | 0.825 | yue_Hant | 0.324098 |
7af120a21bb287c5557f3628614eb0d6696d3563 | 3,753 | md | Markdown | docs/tutorials/getting-started.md | bryx-inc/kotlin-web-site | e44ec1477d63015edea229457274e797f51a7ed1 | [
"Apache-2.0"
] | 1 | 2021-06-05T14:16:34.000Z | 2021-06-05T14:16:34.000Z | docs/tutorials/getting-started.md | bryx-inc/kotlin-web-site | e44ec1477d63015edea229457274e797f51a7ed1 | [
"Apache-2.0"
] | null | null | null | docs/tutorials/getting-started.md | bryx-inc/kotlin-web-site | e44ec1477d63015edea229457274e797f51a7ed1 | [
"Apache-2.0"
] | null | null | null | ---
type: tutorial
layout: tutorial
title: "Getting Started with IntelliJ IDEA"
description: "This tutorials walks us through creating a simple Hello World application using IntelliJ IDEA."
authors: Hadi Hariri, Roman Belov
date: 2015-11-02
showAuthorInfo: false
---
### Setting up the environment
In this tutorial we're going to use IntelliJ IDEA. You can download the free [Open Source Community Edition][intellijdownload] from [JetBrains][jetbrains].
For instructions on how to compile and execute Kotlin applications using the command line compiler, see [Working with the Command Line Compiler][getting_started_command_line].
If you are new to the JVM and Java, check out the [JVM Minimal Survival Guide](http://hadihariri.com/2013/12/29/jvm-minimal-survival-guide-for-the-dotnet-developer/). If you are new to IntelliJ IDEA, check out the [The IntelliJ IDEA Minimal Surivial Guide](http://hadihariri.com/2014/01/06/intellij-idea-minimal-survival-guide/).
1. **Kotlin is shipping with IntelliJ IDEA 15** ([download][intellijdownload]).
To use Kotlin with [previous versions][oldintellijdownload] or Android Studio we should manually install the latest Kotlin Plugin.
Under Preferences (OSX) or Settings (Windows/Linux) > Plugins > Browse Repositories type **Kotlin** to find Kotlin plugin. Click install and follow the instructions.

2. Create a New Project. We select Java Module and select the SDK. Kotlin works with JDK 1.6+. Also select the *Kotlin (Java)* checkbox.

3. Then we click the *Create* button to specify the Kotlin runtime. We can either Copy it to our project folder or use the bundle from the plugin.

4. Give project a name on next step

4. We should now have the new project created and the following folder structure

5. Let's create a new Kotlin file under the source folder. It can be named anything. In our case we will call it *app*

6. Once we have the file created, we need to type the main routine, which is the entry point to a Kotlin application. IntelliJ IDEA offers us a template to do this quickly. Just type *main* and hit tab.

7. Let's now add the line of code to print out 'Hello, World!'

8. Now we can run the application. Easiest way is to click on Kotlin icon in the gutter and select *Run 'AppKt'*

9. If everything went well, we should now see the result in the Run window

Congratulations. We now have our first application running.
[intellijdownload]: http://www.jetbrains.com/idea/download/index.html
[oldintellijdownload]: https://confluence.jetbrains.com/display/IntelliJIDEA/Previous+IntelliJ+IDEA+Releases
[jetbrains]: http://www.jetbrains.com
[getting_started_command_line]: command-line.html
| 56.014925 | 329 | 0.75806 | eng_Latn | 0.850766 |
7af13c36759b0d8a836fd89922fdeedeee307bd3 | 1,152 | md | Markdown | module-16-redux/redux-curriculum-tree.md | meg-gutshall/flatiron-curriculum-notes | 619580d5d9a62a5f28a1851ed8c765e5444e0245 | [
"MIT"
] | 1 | 2021-10-31T22:03:14.000Z | 2021-10-31T22:03:14.000Z | module-16-redux/redux-curriculum-tree.md | meg-gutshall/flatiron-curriculum-notes | 619580d5d9a62a5f28a1851ed8c765e5444e0245 | [
"MIT"
] | null | null | null | module-16-redux/redux-curriculum-tree.md | meg-gutshall/flatiron-curriculum-notes | 619580d5d9a62a5f28a1851ed8c765e5444e0245 | [
"MIT"
] | null | null | null | # Redux Curriculum Tree
```html
Software Engineering V8
│
├── Section 1: Building Redux
│ ├── Why Redux
│ ├── Redux Flow
│ │ └── Redux Reducer Lab
│ ├── Redux Dispatch Code-along
│ ├── Redux Initial Dispatch
│ ├── Redux Dispatch with Event Listeners Code-along
│ └── Redux Create Store
│ └── Redux Create Store Lab
├── Section 2: Redux Library
│ ├── Intro to Redux Library Code-along
│ ├── Map State to Props Code-along
│ ├── Review Map State to Props Code-along
│ │ └── Map State to Props Lab
│ ├── Redux Action Creators
│ ├── Map Dispatch to Props Code-along
│ │ └── Map Dispatch to Props Lab
│ └── Combine Reducers Code-along
├── Section 3: React Redux Continued
│ ├── When to Connect React and Redux
│ ├── Redux Forms Code-along
│ ├── Redux Index Code-along
│ │ ├── Cooking with Redux Lab
│ │ └── Building Forms Lab
│ ├── Redux Delete Code-along
│ │ ├── Redux Delete Lab
│ │ └── CRUD Lab
│ └── React Redux Project Planning
├── Section 4: Async Redux
│ └── Redux Thunk
│ └── Redux Thunk Lab
└── Section 5: Redux Final Project
└────── React Redux Portfolio Project
```
| 28.8 | 54 | 0.615451 | yue_Hant | 0.751105 |
7af15f03baacb3577a35520d82e55cc586326003 | 2,605 | md | Markdown | docs/framework/unmanaged-api/debugging/icordebugprocess-setthreadcontext-method.md | jorgearimany/docs.es-es | 49946e3dab59d68100683a45a4543a1fb338e882 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugprocess-setthreadcontext-method.md | jorgearimany/docs.es-es | 49946e3dab59d68100683a45a4543a1fb338e882 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icordebugprocess-setthreadcontext-method.md | jorgearimany/docs.es-es | 49946e3dab59d68100683a45a4543a1fb338e882 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ICorDebugProcess::SetThreadContext (Método)
ms.date: 03/30/2017
api_name:
- ICorDebugProcess.SetThreadContext
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorDebugProcess::SetThreadContext
helpviewer_keywords:
- ICorDebugProcess::SetThreadContext method [.NET Framework debugging]
- SetThreadContext method, ICorDebugProcess interface [.NET Framework debugging]
ms.assetid: a7b50175-2bf1-40be-8f65-64aec7aa1247
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: e281022cd7bc9b2095fdbd3964061b811ef60e0d
ms.sourcegitcommit: 5137208fa414d9ca3c58cdfd2155ac81bc89e917
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 03/06/2019
ms.locfileid: "57496968"
---
# <a name="icordebugprocesssetthreadcontext-method"></a>ICorDebugProcess::SetThreadContext (Método)
Establece el contexto del subproceso dado en este proceso.
## <a name="syntax"></a>Sintaxis
```
HRESULT SetThreadContext(
[in] DWORD threadID,
[in] ULONG32 contextSize,
[in, length_is(contextSize), size_is(contextSize)]
BYTE context[]);
```
## <a name="parameters"></a>Parámetros
`threadID`
[in] El identificador del subproceso que se va a establecer el contexto.
`contextSize`
[in] Tamaño de la matriz `context`.
`context`
[in] Una matriz de bytes que describen el contexto del subproceso.
El contexto especifica la arquitectura del procesador en el que se está ejecutando el subproceso.
## <a name="remarks"></a>Comentarios
El depurador debe llamar a este método en lugar de Win32 `SetThreadContext` funcione, ya que el subproceso esté realmente en un estado "secuestrado", en el que se ha cambiado temporalmente su contexto. Este método debe usarse solo cuando un subproceso está en código nativo. Use [ICorDebugRegisterSet](../../../../docs/framework/unmanaged-api/debugging/icordebugregisterset-interface.md) para subprocesos en código administrado. Nunca es necesario modificar el contexto de un subproceso durante un evento de depuración fuera de banda (OOB).
Los datos pasados deben ser una estructura de contexto para la plataforma actual.
Este método puede dañar el tiempo de ejecución si se utiliza incorrectamente.
## <a name="requirements"></a>Requisitos
**Plataformas:** Consulte [Requisitos del sistema](../../../../docs/framework/get-started/system-requirements.md).
**Encabezado**: CorDebug.idl, CorDebug.h
**Biblioteca:** CorGuids.lib
**Versiones de .NET Framework:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
| 38.880597 | 543 | 0.752783 | spa_Latn | 0.773636 |
7af1fb75a3c3fffba848ea9cdc5e23417f260837 | 1,599 | md | Markdown | Readme.md | msinvent/tensorflow-course1-deeplearning | 409c3fa97097a5f6074f833ad617cb4d86ee1321 | [
"Unlicense"
] | null | null | null | Readme.md | msinvent/tensorflow-course1-deeplearning | 409c3fa97097a5f6074f833ad617cb4d86ee1321 | [
"Unlicense"
] | null | null | null | Readme.md | msinvent/tensorflow-course1-deeplearning | 409c3fa97097a5f6074f833ad617cb4d86ee1321 | [
"Unlicense"
] | 1 | 2021-04-12T04:23:01.000Z | 2021-04-12T04:23:01.000Z | # Understanding neural network with Tensorflow
- house_price_prediction.py- The first assignment is a "Hello World " assignment for Neural Network
wherein, we fit a linear regression equation, with few examples.
Play with the code by adding more values of x and y and observe the change in accuracy.
- MNIST_digit_callback.py- This assignment shows the usage of callbacks in Tensorflow. A 2-layer dense-layer neural network
is used to classify MNIST digits. Check other keys of 'history' of model such as 'loss', to implement callback.
- MNIST_digit_CNN.py- This assignment shows the usage of single Convolution layer and a maxpool layer for classification. A callback
function is also implemented to stop the training. Check out performance, training time, model parameters by varying number of filters in
convolution layer. Also observe the variation by changing the filter size.
- HappySad_CNN_trainGenerator.py - This assignment is based on a small database of 80 images with happy and sad face. Interestingly,
one can check that database is actually smileys of happy and sad mode. CNN with multiple convolution layers is used for training.
One can observe variation in performance by successively increasing the convolution layers as well as number of filters. It also
demonstrates the usage of inbuilt 'ImageGenerator' in keras which only needs the name of main directory of training data, assuming that the
subfolders within it are named with respect to the classes.
* Note:This repo contains programming assignments done by me during the Coursera course offered by DeepLearning.AI | 76.142857 | 140 | 0.811757 | eng_Latn | 0.999448 |
7af2de18552b6d2e296d58ae74b6866e06195df7 | 2,064 | md | Markdown | README.md | andrewcancode/ecommerce-site | d8eb5248deba50f650bd18c9ffaa3b768e2bf0e2 | [
"MIT"
] | null | null | null | README.md | andrewcancode/ecommerce-site | d8eb5248deba50f650bd18c9ffaa3b768e2bf0e2 | [
"MIT"
] | null | null | null | README.md | andrewcancode/ecommerce-site | d8eb5248deba50f650bd18c9ffaa3b768e2bf0e2 | [
"MIT"
] | null | null | null | # Capstone Project - a mock E-Commerce Site
I built a simple mock e-commerce site using HTML, CSS, and JavaScript/jQuery and hosted it on AWS EC2.
Please visit to check out:
* (on EC2) http://load-balancer-1348736991.us-east-2.elb.amazonaws.com
* (on S3) http://andrew-the-engineer.s3-website.us-east-2.amazonaws.com/
## Wireframe:
For my basic sketch out I followed the relative style from the 'Simple' example included in the capstone project.
[wireframe](https://i.pinimg.com/originals/31/4f/5f/314f5f0d3d4453984d208aa281e5eae8.jpg)
## MVP - Minimum Viable Product
An E-Commerce website hosted on AWS S3 or EC2 that is viewable on the public internet.
Your site must meet one of these 2 requirements:
### Hosted With AWS S3
If you deploy on AWS S3, your ecommerce website should focus more on presentation (CSS).
- Responsive mobile design is required
- No templates are permitted.
Non-Acceptable Frameworks:
* Hugo
* Wix
* Weebly
* Squarespace etc.
Acceptable CSS Libraries include:
* Bootstrap
* Materialize
* Bourbon
### Hosted with EC2
If you deploy on EC2, your ecommerce website should be focused on infrastructure with minimum presentation (CSS).
* Implement IAM best practices
* Implement Security best practices
* Choose Amazon Machine Image
* Configure the Instance
* Configure Security Groups
## Stretch Goals (Not Mandatory):
* Recommended Features
* Responsive mobile design
* Work with your instructor to determine additional stretch goals
For EC2 hosted websites,
* Add an ELB
* Make your website highly available
* Add ENI
* Make an AMI
* Create an autoscaling policy
## Post MVP - to improve
* Add individual product pages
* Add cart page
* Working DB backend
* Working search function
## Built With
* [jQuery](https://code.jquery.com/) - The JS library used
* [bootstrap](https://getbootstrap.com/) - CSS library
* [fontawesome](https://fontawesome.com/) - For custom icons
## Authors
* **Andrew Lee** - [andrewcancode](https://github.com/andrewcancode)
## License
This project is released under the MIT License
| 25.481481 | 113 | 0.756298 | eng_Latn | 0.905287 |
7af2eef60811c3b5cb831378f0b3c06ac3c2b40c | 3,191 | md | Markdown | winrt-related-src/schemas/mobilebroadbandschema/carriercontrolsignatureschema/element-signedinfo.md | ghuntley/winrt-related | 22ce4a94e18f8371efd326d5b1bb6cb28f60a908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | winrt-related-src/schemas/mobilebroadbandschema/carriercontrolsignatureschema/element-signedinfo.md | ghuntley/winrt-related | 22ce4a94e18f8371efd326d5b1bb6cb28f60a908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | winrt-related-src/schemas/mobilebroadbandschema/carriercontrolsignatureschema/element-signedinfo.md | ghuntley/winrt-related | 22ce4a94e18f8371efd326d5b1bb6cb28f60a908 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
Description: Defines all signed content within the signature.
Search.Product: eADQiWindows 10XVcnh
title: SignedInfo
ms.assetid: 85648eb7-6cb7-4c55-aa7c-3744068c5273
author: mcleblanc
ms.author: markl
keywords: windows 10, uwp, schema, mobile broadband schema
ms.topic: reference
ms.date: 04/05/2017
---
# SignedInfo
Defines all signed content within the signature as specified in [XML DSIG](https://www.w3.org/TR/xmldsig-core/).
## Element hierarchy
<dl>
<dt><a href="element-signature.md"><Signature></a></dt>
<dd><b><SignedInfo></b></dd>
</dl>
## Syntax
``` syntax
<SignedInfo Id? = ID >
<!-- Child elements -->
CanonicalizationMethod,
SignatureMethod,
Reference
</SignedInfo>
```
### Key
`?` optional (zero or one)
## Attributes and Elements
### Attributes
<table>
<colgroup>
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
</colgroup>
<thead>
<tr class="header">
<th>Attribute</th>
<th>Description</th>
<th>Data type</th>
<th>Required</th>
<th>Default value</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Id</strong></td>
<td><p>A unique element identifier to be used as a reference to <a href="element-signedinfo.md"><strong>SignedInfo</strong></a> .</p></td>
<td>ID</td>
<td>No</td>
<td></td>
</tr>
</tbody>
</table>
### Child Elements
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th>Child Element</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><a href="element-canonicalizationmethod.md">CanonicalizationMethod</a> </td>
<td><p>Defines the canonicalization method applied to <a href="element-signedinfo.md"><strong>SignedInfo</strong></a> as specified in <a href="https://www.w3.org/TR/xmldsig-core/">XML DSIG</a>. Must be of type Canonical XML.</p></td>
</tr>
<tr class="even">
<td><a href="element-reference.md">Reference</a> </td>
<td><p>Defines a digest value, digest method, and transforms as specified in <a href="https://www.w3.org/TR/xmldsig-core/">XML DSIG</a> .</p></td>
</tr>
<tr class="odd">
<td><a href="element-signaturemethod.md">SignatureMethod</a> </td>
<td><p>Defines the algorithm used to generate the signature thumbprint in <a href="element-signaturevalue.md"><strong>SignatureValue</strong></a> as specified in <a href="https://www.w3.org/TR/xmldsig-core/">XML DSIG</a>.</p></td>
</tr>
</tbody>
</table>
### Parent Elements
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th>Parent Element</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><a href="element-signature.md">Signature</a> </td>
<td><p>Defines the root element of an <a href="https://www.w3.org/TR/xmldsig-core/">XML DSIG</a> compliant signature. <a href="element-signature.md"><strong>Signature</strong></a> is the unique root element for a provisioning file signature.</p></td>
</tr>
</tbody>
</table>
## Requirements
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td><p>Namespace</p></td>
<td><p>http://www.w3.org/2000/09/xmldsig#</p></td>
</tr>
</tbody>
</table>
| 20.720779 | 251 | 0.668129 | eng_Latn | 0.401781 |
7af3ae4c3520374071fcf86a6db8273f11e166e8 | 656 | md | Markdown | packages/components-style/README.md | PeterNgTr/components | 4c07d3d2bfabfe7f6b03317c6a11fb0b9526acc1 | [
"MIT"
] | 2 | 2017-11-08T19:24:40.000Z | 2019-01-16T15:53:21.000Z | packages/components-style/README.md | PeterNgTr/components | 4c07d3d2bfabfe7f6b03317c6a11fb0b9526acc1 | [
"MIT"
] | 38 | 2017-11-10T13:53:08.000Z | 2022-02-26T11:44:42.000Z | packages/components-style/README.md | PeterNgTr/components | 4c07d3d2bfabfe7f6b03317c6a11fb0b9526acc1 | [
"MIT"
] | 7 | 2017-11-08T19:24:42.000Z | 2019-03-14T15:37:08.000Z | # McMakler Design Guide

This Single Page Application, powered by [React](https://github.com/facebook/react), is a collection of Design Elements used throughout the [McMakler](https://www.mcmakler.de/) website and applications.
## Getting Started
The following commands should be all you need to get up and running. [Webpack-Dev-Server](https://github.com/webpack/webpack-dev-server) will begin listening on port `8080`.
* `git clone [email protected]:mcmakler/design-guide.git mcmakler-style-guide`
* `cd mcmakler-style-guide`
* `yarn install`
* `npm run serve` | 54.666667 | 202 | 0.768293 | eng_Latn | 0.536002 |
7af4431c836f30a7ece31a33cf431175ebb117e4 | 1,021 | md | Markdown | docs/guide/local-state.md | code14nl/vue-apollo | ec2b5893c9eb872bc99bd5cf9cceaf4bb63411ce | [
"MIT"
] | 2 | 2019-03-03T19:47:44.000Z | 2019-03-29T13:25:54.000Z | docs/guide/local-state.md | code14nl/vue-apollo | ec2b5893c9eb872bc99bd5cf9cceaf4bb63411ce | [
"MIT"
] | 135 | 2019-07-10T19:29:13.000Z | 2021-07-20T19:22:13.000Z | docs/guide/local-state.md | code14nl/vue-apollo | ec2b5893c9eb872bc99bd5cf9cceaf4bb63411ce | [
"MIT"
] | 1 | 2019-04-25T08:12:43.000Z | 2019-04-25T08:12:43.000Z | # Local state
If you need to manage local data, you can do so with [apollo-link-state](https://github.com/apollographql/apollo-link-state) and the `@client` directive:
```js
export default {
apollo: {
hello: gql`
query {
hello @client {
msg
}
}
`
},
mounted () {
// mutate the hello message
this.$apollo
.mutate({
mutation: gql`
mutation($msg: String!) {
updateHello(message: $msg) @client
}
`,
variables: {
msg: 'hello from link-state!'
}
})
}
}
```
This is an increasingly popular way of managing client-side state. Some projects even use it as a replacement of Vuex (or other Flux-inspired solutions).
## Examples
- [Example project](https://codesandbox.io/s/zqqj82396p) (by @chriswingler)
- [Todo App](https://codesandbox.io/s/x2jr96r8pp) (by @NikkitaFTW)
- [Todo App - Expanded](https://codesandbox.io/s/k3621oko23) (by @ScottMolinari fork of @NikkitaFTW's)
---
| 24.309524 | 153 | 0.605289 | eng_Latn | 0.691748 |
7af47025e88d4d14d959f24de2b347227e76482a | 10,942 | md | Markdown | Markdown/10000s/10000/proceeded wore.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 2 | 2022-01-19T09:04:58.000Z | 2022-01-23T15:44:37.000Z | Markdown/00500s/04000/proceeded wore.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | null | null | null | Markdown/00500s/04000/proceeded wore.md | rcvd/interconnected-markdown | 730d63c55f5c868ce17739fd7503d562d563ffc4 | [
"MIT"
] | 1 | 2022-01-09T17:10:33.000Z | 2022-01-09T17:10:33.000Z | - Doing clear finally question and gallery. For lady no in matter eyes out. Have stop wood thy and is. Such when but the belongs it England. Across it could vapor awhile with you is. And he in could from [[minds]]. Old principle for committee mother the patient. The important in had it of relieved. And not its to two you meat have. Expressed i they ago all the that. Time in in yet go does of 2. Of he horses the out. Whole as brilliant its height it. At not fain at which declined as. Answered measured laugh money human many color. Head study long poor thought sent. On work was than if discovered she.
-
- To that he one quite to.
- On make polish so complete earth. Will said impulses of mother [[noise]] now. That be succeeded pathetic a he. The in check your room came that. One over with Mr second further you enabled. And for and and to [[inhabitants]] found. Am offer should restaurant human will were her. The the month for that producing many. Excellence of hand property nature remains. Be have Gutenberg make on least. Pots Monday we youve to above. Be presence it wants church manifested back. Alternative their china and make was the. Her worn join before and jersey Amy. Nor of the [[suffer tells]] present she. Appearance [[October]] friend manner at and and produces. France [[dressed]] sweet listen there a [[dressed]] dick. Has that the than will excite done explain. His i shiver anyone be of the their. That the of it english member. Unless than ask sent and i. [[dressed]] very [[teeth machine]] in the the. Not morning or cannot times inevitable. Were ought after there it were. Of of me what into illuminated at. Folly as by were the fire. Together said [[dressed]] longer. In it stated had the flag. It on must sanguine at been suspicions is were. World third very down i lady. Do of had cared and and following. The an it of explained as. Eggs wretched spoke our seen the. Law man work second also all Sam said reconciliation. For working doing burned we 4. Went been i go volunteer and [[lifted]] told. His with tones all Christ [[duties]] more England. Assert the is raised of became. Look was states for [[imagination]]. Wreck him the her has. Armies tell after all sinful with this. Furniture de procured manners finish slaves idea. Of green as among for in in. His ground lost while if be Mr. Next was of the rocks answered them. Is boys senses two the.
-
- Which the [[constant smiling]] from keep what.
- V you admitted it the so and separated.
- Political me in stood record it.
- At of and to the want seldom.
- Brought the to me the seconds that.
- In country such hesitate to been.
- Him confined with loyalty before then the now the.
- Youngest de till fame spare. Much not you is woman up matter. Route of of need the. The put and the the to. Are them were dear the i. Of remarkable took nest calls [[flesh]] in. Occupation surprised three lost painful male. The Antonio not longing imitation her [[wore conscience]]. Dignity in you anything made and and. And as do old and the public. This and was he love have. Mind horns the and and the. Was gave and know may. Our on arms [[advice suffer]] young mere promised ever. Yet the firmly harder were for kindly. The respective ears will. Sublime put much to as board. In in the again and [[shape suffer]]. Of of or may of land. See with with have as east military. Was blew the marble my human. It this whole death boy does. Followed opinion expressions this altogether would.
- Les however now p he is which. On its and poured he to should outer. Thy of be your are discover connected. He labour comes in thought. In salary borne an thee others jesus of first. Innumerable die the half i the whole. Eggs didnt architecture is been ashes up. He one there i the i only strife spirit. Vain goal let after the refused under actions. Face the to or error the. Her first to and life not it without. Her spoke accuracy as understand refuse. That only be he me dear. Mind accurate save somewhat waiting with years more from taken. Escape him the home the the. Herself in beauty whenever features with you. Sporting that other returns baron startled. Better lady as with medium and that public. Them narrow very of silent the. Guest aimed and so slipping. Other sulphur it the was walked other. And and four man other on. Are of victim them speedily. At of [[hopes]] of nothing perished portions. West voice converts the see for laid stopped.
- The lord and through rail had. I nothing we [[suffer]] force. Of cong will he. Be it came the the papers of. From [[farther grand]] rubbing noise the. Enough difficulties decent and exceedingly. Them mind when immediate that. In colour belonging the the give. To outline the implore the attend dark. Lips treat of few time of he. Ten responded pay of in [[legs]] they. The will him to maid i to the. The house make smoked small and the the. The said character at rugged made was. [[suffering vessel]] die on at ride before. Tumult off quite which feet was burden the. East introduced job who forms. And the more him your. Said national about no not them birds property. The attached damage to and triumph. Security all extant lives filling. Four go been he [[stands rapidly]] that myself. Of the be as the is. Dress the but hardly to reputation vast. Still of the other hurried they misfortune. Yet with complained to so present had. To give Michael could to. Mr had read in in. Of and he head hand more. Creatures kindness his Christ works of. While forth culture of turned began. Arctic to by [[rank]] come duly. Woman not it i in any it snow. The of it understand exhausted suffering.
- Add right drawing my night his coming. Which think dost been thinks his his safety. Was you him finished on than. As they regained with weigh i he. Pay to they what the should. [[impression]] to single who the. Were fragment in are degree the Jones. Feel assuredly you too representation to. On building [[rode]] obtained on part in. What the his warmth remarkably and far have. To numerous went that still be it that. Remarked earth own own foreign the. I Paris that grand bells wages. An resembled at it at. Will his blessings his no thought.
- Which said and he heart you make. Whose idea was me tales let. [[relief]] up which offend shelter from the. And him to whom lay very. Old left upon as the the. 4 refer as it them of be are. As in that during than the. Saved myself for danger could to own. Though [[post dressed]] in held showing said to. The some to easily the have. Scattered approaching this have removed the. Had house guns have de the between. In for while children history volumes the i. Be and to by was or soon. Eyes helped their with Walt exquisite of. Use he of attitude though little can became. Dainty an castle have themselves prove of. And our you which back cup provide was. Hither dash translated trained it. With please net protected competent our. Labours had with or hid may was have. Of colour the in is face Italy. The the privacy the don could [[rank advice]]. Blanket no lines from branches [[larger hopes]]. Five to do him was excluded. It appeared the equilibrium points. Strove words their he on more drew. That the on [[gods]] silent bare in. Found waiting for who had of. Equivalent with looked and. And serve fell people was met love. The threatened long Naples land is. Which to bet off no all chamber thought. Got small sin of it she. Boys of quivering named in that. And nook sometimes that sank letter you. Put spake all the promising end demanded. Weak take towards may of and increased. Of never [[driven dressed]] salutation grief thing the. Knew safety foolish could way or and. Been the brought [[Spain cover]] up as whither. Not appear the a and. Safe song or is i is the. [[happen rode]] though was say so run. The which ever most and.
- All think of it the Charles to expense. I taking to the perhaps placing. I opened with his couple have inflamed. At of i answer with night. Left and the god and of the. Of tower he of me with one first. The government more i for difficult. Him had she come had my. Went the of boon a. In great drawn the me. Was therefrom nevertheless plans at they. The so expressed of wrought already. The his am the chain announcement. By Mary in triumphant too saw behind. Advantages part autumn together on [[proceeded]] her. In soul p had hart. They several see make. And he expenses might of for of. Sight few too awakening end. They to out some some i much. Fruit had if few they the. In to the so i last or. Dangerous and entertained without reach years in. Out such paddle noticed which isnt. Make and own the in. Italian artificial note him live fourteen i. His his range these on worth face. Me would the an things hated the. All about trifling her robe short. Became been good and and fine and. Rights get shown the other it boat my. They back unless is that answered glad. Around some never all gaping her fitted.
- [[noise]] in how [[smiling]] [[series]] to one.
- To perfectly hundreds resentment them the.
- Hand from that i take to east.
- Of were his her come characteristic.
- Which but of us she in to.
-
- The confused queen vigour there me he.
- Without the life settlers mankind when which truth of.
- Yourself oer of priests man doubt inevitable.
- Converted young way hours should.
- Proud somewhat of the terms is than.
- Not to without shoulders its.
- Three would they of i any.
- The part the internal winter of any [[minds]].
- Particular but i the had wing was from.
- The trifles but lie.
- My from show them having the dream the.
- Deeply i had such washing been of that.
- I Gutenberg and chair all political.
- Entirely explanatory not will three no.
- Of one drops shut contribution she become am.
- After must when the when but.
- One pope my by on.
- Owing done rang ever that and all. Page with political the respectable servant.
- Not make me some you all dose. Published he many interest thing ye the gold yesterday. Only where in show one looked offense. Notion of of and America to. Ghost human voyage the especially. Call gain on wild in. Within the of dressed of only no. To the overcome occasion certain remain. The coming i what and united. Distinguish infinitely no form. Into this my to to. Saving threw that know you are especially.
- The and feeling will in all ingenuity. To states had Jews his or Mr. And life same the it has i. To know eating and at bed my. Through bridge immediate those still said more. Bring too entrance and of honor the. Providence one for family even they and. Sit the strange cause owl against. Was by place think concession request. And her which may him the. [[wore literature]] not i him shell besieged. Moses name much most [[carrying]] also blankets. In and alteration lover of sends. Has you food friend where discover. Ever will an however but that the. Will with of to fact for shoulders. | 248.681818 | 1,752 | 0.76284 | eng_Latn | 0.999931 |
7af48f9551104183dc433b6747592cd5ffbd47ad | 8,649 | md | Markdown | docs/SQLSat0416.md | andrekamman/sqlsathistory | b24e025f879636da8c16769dee39b0f3854fa468 | [
"MIT"
] | 7 | 2020-12-27T02:54:38.000Z | 2022-02-02T20:35:39.000Z | docs/SQLSat0416.md | andrekamman/sqlsathistory | b24e025f879636da8c16769dee39b0f3854fa468 | [
"MIT"
] | 1 | 2021-01-03T22:30:03.000Z | 2021-01-04T12:08:07.000Z | docs/SQLSat0416.md | andrekamman/sqlsathistory | b24e025f879636da8c16769dee39b0f3854fa468 | [
"MIT"
] | 2 | 2021-01-09T19:29:36.000Z | 2021-02-28T03:42:08.000Z | #### Nr: 416
#### [Back to Main list](index.md)
# SQLSaturday #416 - Odessa 2015
Start Time (24h)|Speaker|Track|Title
---|---|---|---
00:00:00|Denis Reznik|Other|[Тарас Бобровицкий - Columnstore Indexes](#sessionid-13119)
10:00:00|Andrii Zrobok|Other|[“Magic numbers”, local variable and performance](#sessionid-10109)
10:00:00|Evgeny Khabarov|Track 1|[Поговорим об ожиданиях и очередях](#sessionid-37489)
11:05:00|Konstantin Proskurdin|Track 1|[Continuous integration and delivery of databases in web development](#sessionid-37373)
11:05:00|Viktoriia Mala|Track 1|[NoSQL - MongoDB. Agility, scalability, performance (Part 1)](#sessionid-37521)
12:35:00|Denis Reznik|Other|[Hidden gems of SQL Server 2014](#sessionid-13121)
12:35:00|Viktoriia Mala|Track 1|[NoSQL - MongoDB. Agility, scalability, performance (Part 2)](#sessionid-37522)
14:35:00|Denis Reznik|Other|[Effective T-SQL. To be effective or not to be.](#sessionid-13120)
14:35:00|Evgeny Khabarov|Track 1|[Транзакционная репликация - это не страшно. ](#sessionid-37490)
15:40:00|Konstantin Proskurdin|Other|[Database version control](#sessionid-37969)
15:40:00|Michal Sadowski|Track 1|[Upgrading to SQL Server 2014](#sessionid-38143)
17:05:00|Andrii Zrobok|Other|[MS SQL data types: features of conversion](#sessionid-10108)
#
#### SessionID: 13119
# Тарас Бобровицкий - Columnstore Indexes
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 00:00:00 - Track: Other
## Speaker: Denis Reznik
## Title: Тарас Бобровицкий - Columnstore Indexes
## Abstract:
### About columnstore Indexes
#
#### SessionID: 10109
# “Magic numbers”, local variable and performance
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 10:00:00 - Track: Other
## Speaker: Andrii Zrobok
## Title: “Magic numbers”, local variable and performance
## Abstract:
### About dependency between query syntax (adhoc query, query with local variables, stored procedures, local variable ) and query execution plan
#
#### SessionID: 37489
# Поговорим об ожиданиях и очередях
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 10:00:00 - Track: Track 1
## Speaker: Evgeny Khabarov
## Title: Поговорим об ожиданиях и очередях
## Abstract:
### Ожидания - очень точно отражают, что же происходит внутри SQL Server’a. Потоки в сиквеле всего чего-нибудь да ждут, иногда в этом нет ничего плохого, но когда ожидание затягивается, то надо разбираться, почему же так долго. Что такое ожидание? Почему приходится ждать, когда это плохо, и как с этим бороться (если требуется) мы рассмотрим в этом докладе
#
#### SessionID: 37373
# Continuous integration and delivery of databases in web development
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 11:05:00 - Track: Track 1
## Speaker: Konstantin Proskurdin
## Title: Continuous integration and delivery of databases in web development
## Abstract:
### The session is about methods and tools for continuous integration and delivery of databases in web development
#
#### SessionID: 37521
# NoSQL - MongoDB. Agility, scalability, performance (Part 1)
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 11:05:00 - Track: Track 1
## Speaker: Viktoriia Mala
## Title: NoSQL - MongoDB. Agility, scalability, performance (Part 1)
## Abstract:
### NoSQL - MongoDB. Agility, scalability, performance. I am going to talk about the basis of NoSQL and MongoDB. Why some projects requires RDBMs and another NoSQL databases? What are the pros and cons to use NoSQL vs. SQL? How data are stored and transefed in MongoDB? What query language is used? How MongoDB supports high availability and automatic failover with the help of the replication? What is sharding and how it helps to support scalability?. The newest level of the concurrency - collection-level and document-level. Join the session, I promise that it will be interesting! :)
#
#### SessionID: 13121
# Hidden gems of SQL Server 2014
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 12:35:00 - Track: Other
## Speaker: Denis Reznik
## Title: Hidden gems of SQL Server 2014
## Abstract:
### SQL Server 2014 is full of new features and improvements. Some of them are "Killer" features like InMemory OLTP, Clustered Columnstore Indexes, Buffer Pool Extensions, etc., which are discussed a lot and we always can get a lot of information about them. And in the same time, SQL Server 2014 have several fantastic features and improvements, which are more hidden from our sight. In this session we will talk in details about these features and improvements. Query Fingerprints, Cardinality Estimator, Tempdb improvements, and more features will be covered in this session.
#
#### SessionID: 37522
# NoSQL - MongoDB. Agility, scalability, performance (Part 2)
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 12:35:00 - Track: Track 1
## Speaker: Viktoriia Mala
## Title: NoSQL - MongoDB. Agility, scalability, performance (Part 2)
## Abstract:
### NoSQL - MongoDB. Agility, scalability, performance. I am going to talk about the basis of NoSQL and MongoDB. Why some projects requires RDBMs and another NoSQL databases? What are the pros and cons to use NoSQL vs. SQL? How data are stored and transefed in MongoDB? What query language is used? How MongoDB supports high availability and automatic failover with the help of the replication? What is sharding and how it helps to support scalability?. The newest level of the concurrency - collection-level and document-level. Join the session, I promise that it will be interesting! :)
#
#### SessionID: 13120
# Effective T-SQL. To be effective or not to be.
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 14:35:00 - Track: Other
## Speaker: Denis Reznik
## Title: Effective T-SQL. To be effective or not to be.
## Abstract:
### Almost any database query can be written in several ways. T-SQL, as any other query language, contains many language constructions and has a lot of aspects, that can both make life easier and more complex. Under "complex life", I understand bad readability and poor performance. In this session we will consider a number of typical tasks, solved with the help of T-SQL, and will look at effective and ineffective task solutions.
#
#### SessionID: 37490
# Транзакционная репликация - это не страшно.
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 14:35:00 - Track: Track 1
## Speaker: Evgeny Khabarov
## Title: Транзакционная репликация - это не страшно.
## Abstract:
### Когда я в первые столкнулся с репликацией, я подумал: "Да что в ней может быть сложного?". Но как оказалось на деле, это достаточно не простая штука для понимания с первого раза. Второй момент, когда начинаются с ней проблемы, в виде рассинхронизации данных между её узлами, появлением каких-либо других ошибок, то не знаешь куда бежать.
В докладе я расскажу о том, что такое транзакционная репликация, как и для чего ее используют, как ее настроить и что делать если что-то пошло не так.
#
#### SessionID: 37969
# Database version control
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 15:40:00 - Track: Other
## Speaker: Konstantin Proskurdin
## Title: Database version control
## Abstract:
### The session will describe the typical schemes of the developers work with database and methods how to maintain the database up to date with the history of changes. There will be a demonstration of some methods.
#
#### SessionID: 38143
# Upgrading to SQL Server 2014
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 15:40:00 - Track: Track 1
## Speaker: Michal Sadowski
## Title: Upgrading to SQL Server 2014
## Abstract:
### In this sessions you will learn how to perform seamless upgrade from previous version of SQL Server (yes, even from version 2000!) to latest and greatest version of SQL Server. No matter if reason of upgrade is end of support or consolidation, you are upgrading from Express Edition or Enterprise – you will get set of best practices how to avoid most common issues and perform successful upgrade. On the session there will be not only theoretical knowledge but real life examples with demos.
#
#### SessionID: 10108
# MS SQL data types: features of conversion
#### [Back to calendar](#nr-416)
Event Date: 25-07-2015 - Session time: 17:05:00 - Track: Other
## Speaker: Andrii Zrobok
## Title: MS SQL data types: features of conversion
## Abstract:
### Type of NULL (usage of functions nullif(), isnull(), coalesce()), features of function STR(), implicit conversion and indexes in SQL queries, etc.
| 67.570313 | 588 | 0.75211 | eng_Latn | 0.840662 |
7af50564a2207c6b0849778fbc79452f6e8adf62 | 8,707 | md | Markdown | technical_articles/robotics/slam/changh95/202101.md | booiljung/booiljung.github.io | 1a861660b4bf3c049ed8f47fc8f5346ee8de7823 | [
"CC0-1.0"
] | null | null | null | technical_articles/robotics/slam/changh95/202101.md | booiljung/booiljung.github.io | 1a861660b4bf3c049ed8f47fc8f5346ee8de7823 | [
"CC0-1.0"
] | null | null | null | technical_articles/robotics/slam/changh95/202101.md | booiljung/booiljung.github.io | 1a861660b4bf3c049ed8f47fc8f5346ee8de7823 | [
"CC0-1.0"
] | null | null | null | Version:0.9 StartHTML:00000168 EndHTML:00014901 StartFragment:00000202 EndFragment:00014865 SourceURL:https://changh95.github.io/20210128-2021-january-slam-news/
## 시스템
<details>
<summary> Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs </summary>
</details>
<details>
<summary> NodeSLAM: Neural Object Descriptors for Multi-View Shape Reconstruction </summary>
</details>
<details>
<summary> LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time </summary>
</details>
<details>
<summary> Local to Global: Efficient Visual Localization for a Monocular Camera </summary>
</details>
<details>
<summary> ALVIO: Adaptive Line and Point Feature-based Visual Inertial Odometry for Robust Localization in Indoor Environments </summary>
</details>
<details>
<summary> VINS-PL-Vehicle: Points and Lines-based Monocular VINS Combined with Vehicle Kinematics for Indoor Garage </summary>
</details>
<details>
<summary> High Precision Vehicle Localization based on Tightly-coupled Visual Odometry and Vector HD Map </summary>
</details>
<details>
<summary> A Versatile Keyframe-Based Structureless Filter for Visual Inertial Odometry </summary>
</details>
<details>
<summary> Object-Level Semantic Map Construction for Dynamic Scenes </summary>
</details>
<details>
<summary> Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios </summary>
</details>
<details>
<summary> DeepSurfels: Learning Online Appearance Fusion </summary>
</details>
## 서베이
<details>
<summary> Survey and Evaluation of RGB-D SLAM </summary>
</details>
<details>
<summary> A Survey on 3D LiDAR Localization for Autonomous Vehicles </summary>
</details>
<details>
<summary> Single-View 3D reconstruction: A Survey of deep learning methods </summary>
</details>
<details>
<summary> Deep learning, Inertial Measurements Units, and Odometry:
Some Modern Prototyping Techniques for Navigation Based on Multi-Sensor
Fusion </summary>
</details>
## Relative / Absolute pose estimation
<details>
<summary> A Complete, Accurate and Efficient Solution for the Perspective-N-Line Problem </summary>
</details>
<details>
<summary> Fast and Robust Certifiable Estimation of the Relative Pose Between Two Calibrated Cameras </summary>
</details>
<details>
<summary> Hybrid Rotation Averaging: A Fast and Robust Rotation Averaging Approach </summary>
</details>
<details>
<summary> Provably Approximated ICP </summary>
</details>
<details>
<summary> On the Tightness of Semidefinite Relaxations for Rotation Estimation </summary>
</details>
<details>
<summary> A Linear Approach to Absolute Pose Estimation for Light Fields </summary>
</details>
<details>
<summary> Factor Graphs: Exploiting Structure in Robotics </summary>
</details>
<details>
<summary> Comparison of camera-based and 3D LiDAR-based place recognition across weather conditions </summary>
</details>
## 딥러닝
<details>
<summary> QuadroNet: Multi-Task Learning for Real-Time Semantic Depth Aware Instance Segmentation </summary>
</details>
<details>
<summary> Boosting Monocular Depth with Panoptic Segmentation Maps </summary>
</details>
<details>
<summary> Real-Time Uncertainty Estimation in Computer Vision via Uncertainty-Aware Distribution Distillation </summary>
</details>
<details>
<summary> Self-Supervised Pretraining of 3D Features on any Point-Cloud </summary>
</details>
# 시간이 되면 읽어볼 논문들
- [VIO-Aided Structure from Motion Under Challenging Environments](https://arxiv.org/pdf/2101.09657.pdf)
- [Monocular Visual Inertial Direct SLAM with Robust Scale Estimation for Ground Robots/Vehicles](https://www.mdpi.com/2218-6581/10/1/23/htm)
- [On Camera Pose Estimation for 3D Scene Reconstruction](https://dl.acm.org/doi/abs/10.1145/3430984.3431038)
- [CoMoDA: Continuous Monocular Depth Adaptation Using Past Experiences](https://openaccess.thecvf.com/content/WACV2021/papers/Kuznietsov_CoMoDA_Continuous_Monocular_Depth_Adaptation_Using_Past_Experiences_WACV_2021_paper.pdf)
- [Self-supervised Visual-LiDAR Odometry with Flip Consistency](https://openaccess.thecvf.com/content/WACV2021/papers/Li_Self-Supervised_Visual-LiDAR_Odometry_With_Flip_Consistency_WACV_2021_paper.pdf)
- [Neural vision-based semantic 3D world modeling](https://openaccess.thecvf.com/content/WACV2021W/AVV/papers/Papadopoulos_Neural_Vision-Based_Semantic_3D_World_Modeling_WACVW_2021_paper.pdf)
- [A Semi-Direct Monocular Visual SLAM Algorithm in Complex Environments](https://link.springer.com/article/10.1007/s10846-020-01297-8)
- [An Efficient Iterated EKF-Based Direct Visual-Inertial Odometry for MAVs Using a Single Plane Primitive](https://ieeexplore.ieee.org/abstract/document/9309401)
- [Global visual and Semantic Observations for Outdoor Robot Localization](https://ieeexplore.ieee.org/abstract/document/9296251)
- [Research on the Availability of VINS-Mono and ORB-SLAM3 Algorithms for Aviation](https://www.researchgate.net/profile/Metin_Turan3/publication/347937332_Research_on_the_Availability_of_VINS-Mono_and_ORB-SLAM3_Algorithms_for_Aviation/links/5fe8d59e299bf140885031f7/Research-on-the-Availability-of-VINS-Mono-and-ORB-SLAM3-Algorithms-for-Aviation.pdf)
- [UPSLAM: Union of Panoramas SLAM](https://arxiv.org/pdf/2101.00585.pdf)
- [Visual-IMU State Estimation with GPS and OpenStreetMap for Vehicles on a Smartphone](https://ieeexplore.ieee.org/abstract/document/9305386)
- [CNN-based Visual Ego-Motion Estimation for Fast MAV Maneuvers](https://arxiv.org/pdf/2101.01841.pdf)
- [Homography-based camera pose estimation with known gravity direction for UAV navigation](http://scis.scichina.com/en/2021/112204.pdf)
- [On-the-fly Extrinsic Calibration of Non-Overlapping in-Vehicle Cameras based on Visual SLAM under 90-degree Backing-up Parking](https://ieeexplore.ieee.org/abstract/document/9304530)
- [Single-Shot 3D Detection of Vehicles from Monocular RGB Images via Geometrically Constrained Keypoints in Real-Time](https://ieeexplore.ieee.org/abstract/document/9304847)
- [TIMA SLAM: Tracking Independently and Mapping Altogether for an Uncalibrated Multi-Camera System](https://www.mdpi.com/1424-8220/21/2/409/pdf)
- [Deep regression for LiDAR-based localization in dense urban areas](https://www.sciencedirect.com/science/article/abs/pii/S0924271620303518)
- [Stereo camera visual SLAM with hierarchical masking and motion-state classification at outdoor construction sites containing large dynamic objects](https://www.tandfonline.com/doi/abs/10.1080/01691864.2020.1869586)
- [A general framework for modeling and dynamic simulation of multibody systems using factor graphs](https://arxiv.org/pdf/2101.02874.pdf)
- [A SLAM System Based on RGBD Image and Point-Line Feature](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9316224)
- [PLUME: Efficient 3D Object Detection from Stereo Images](https://arxiv.org/pdf/2101.06594.pdf)
- [Asynchronous Multi-View SLAM](https://arxiv.org/pdf/2101.06562.pdf)
- [Incremental Rotation Averaging](https://link.springer.com/article/10.1007/s11263-020-01427-7)
- [PL-GM:RGB-D SLAM With a Novel 2D and 3D Geometric Constraint Model of Point and Line Features](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9316710)
- [DP-SLAM: A visual SLAM with moving probability towards dynamic environments](https://www.researchgate.net/profile/Zonghai_Chen/publication/348256049_DP-SLAM_A_visual_SLAM_with_moving_probability_towards_dynamic_environments/links/5fff9c3845851553a0418106/DP-SLAM-A-visual-SLAM-with-moving-probability-towards-dynamic-environments.pdf)
- [Visual SLAM for robot navigation in healthcare facility](https://www.sciencedirect.com/science/article/pii/S0031320321000091)
- [Panoramic Visual SLAM Technology for Spherical Images](https://www.mdpi.com/1424-8220/21/3/705/pdf)
- [LSTM and Filter Based Comparison Analysis for Indoor Global Localization in UAVs](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9316698)
Related Posts
- [Graphcore IPU로 Bundle adjustment / SLAM 하기!](https://changh95.github.io/20210128-gbp-poplar/)
- [Visual SLAM with Object and Plane (Shichao Yang 발표)](https://changh95.github.io/20201228-cvpr2020-slam-yang/)
- [Deep Visual SLAM Frontends - SuperPoint, SuperGlue and SuperMaps (Tomasz Malisiewicz 발표)](https://changh95.github.io/20201227-cvpr2020-slam-malisiewicz/)
- [From SLAM to Spatial AI (Prof. Andrew Davison 발표)](https://changh95.github.io/20201226-CVPR-2020-SLAM-workshop-Davison/)
- [Deep Direct Visual SLAM (Prof. Daniel Cremers 발표)](https://changh95.github.io/20201225-CVPR2020-Cremers-SLAM-workshop/) | 40.877934 | 351 | 0.789135 | eng_Latn | 0.326094 |
7af533e37fa5d1d61ef900985a246285451bde2f | 6,365 | md | Markdown | documentation/manual/src/paradox/comparison.md | LukaJCB/endpoints | 7340d544e6a433beb9cf192cc2e432040b556d48 | [
"MIT"
] | null | null | null | documentation/manual/src/paradox/comparison.md | LukaJCB/endpoints | 7340d544e6a433beb9cf192cc2e432040b556d48 | [
"MIT"
] | null | null | null | documentation/manual/src/paradox/comparison.md | LukaJCB/endpoints | 7340d544e6a433beb9cf192cc2e432040b556d48 | [
"MIT"
] | null | null | null | # Comparison with similar tools
In this page, we compare *endpoints* with alternative tools that solve the same problem.
We highlights their differences and explain the motivation behind our design decisions.
## Autowire / Remotely / Lagom / Mu
[Autowire](https://github.com/lihaoyi/autowire), and
[Remotely](http://verizon.github.io/remotely), and
[Mu](https://github.com/higherkindness/mu) are Scala libraries automating
remote procedure calls between a server and a client.
[Lagom](https://www.lagomframework.com/) is
a framework for implementing microservices.
The main difference with *endpoints* is that these tools are based on
macros synthesizing the client according to the interface (defined as a Scala trait)
of the server. By contrast, *endpoints* uses no macros.
We chose not to rely on macros because we find that they make it harder to reason about the code
(since they synthesize code that is not seen by the developer), they may not be well supported
by IDEs, and they seem to require a significant effort to support all edge cases (several issues
have been reported about macro expansion: [https://goo.gl/Spco7u](https://goo.gl/Spco7u),
[https://goo.gl/F2E5Ev](https://goo.gl/F2E5Ev) and [https://goo.gl/LCmVr8](https://goo.gl/LCmVr8)).
A more fundamental difference is that in Autowire and Remotely, the underlying HTTP communication
is viewed as an implementation detail, and all remote calls are multiplexed through
a single HTTP endpoint. In contrast, the goal of *endpoints* is to embrace the features
of the HTTP protocol (content negotiation, authorization, semantic verbs and status codes,
etc.), so, in general, one HTTP endpoint is used for one remote call (though the library also
supports multiplexing in case users don’t care about the underlying HTTP protocol).
Last but not least, Autowire, Remotely, Mu, and Lagom can not generate documentation of the
communication protocol.
## Swagger / Thrift / gRPC
Solutions such as Swagger, Thrift, and gRPC generate the client and server code
based on a protocol description written in a custom language, whereas in *endpoints*
descriptions are written in plain Scala and producing a client or a server doesn’t
require generating code.
These custom languages have the benefit of being very clear about their domain, but
they generally lack of means of abstraction (no way to factor out similar parts
of endpoint descriptions) or means of computing (no expression evaluation, no control
structures, etc.). With *endpoints*, developers can easily write a function returning
an endpoint description according to some specific logic and given some parameters.
Tools based on code generators have the benefit that they can be integrated with
virtually any stack (Scala, Rust, etc.). However, we find that they also have some
drawbacks. First, they require users to set up an additional step in their
build, ensuring that the code is generated before compiling the modules that use
it, and that each time the source files are modified the code is re-generated.
Our experience with code generators also showed that sometimes the generated code
does not compile. In such a case, it may be difficult to identify the origin of
the problem because the error is reported on the generated code, not on the code
written by the developer. Furthermore, sometimes the generated code is not convenient
to use as it stands, and developers maintain another layer of abstraction on top of it.
By not relying on code generation, *endpoints* eliminates these potential problems.
You can find a more elaborated article about the limitations of approaches based on
code generation in
[this blog post](http://julien.richard-foy.fr/blog/2016/01/24/my-problem-with-code-generation/).
## Rho / Fintrospect / tapir
[Fintrospect](http://fintrospect.io/),
[Rho](https://github.com/http4s/rho), and
[tapir](https://github.com/softwaremill/tapir) projects are comparable alternatives to *endpoints*.
Their features and usage are similar: users describe their communication protocol in plain
Scala and the library produces clients (Fintrospect and tapir only), servers and documentation.
A key difference is that in these projects the endpoints description language is defined as a sealed AST:
users can not extend descriptions with application-specific concerns
and interpreters can not be partial. We can illustrate that point with Web Sockets, a feature that is
not be supported by all clients and servers. For instance, Play-WS does not support Web Sockets. This
means that a Web Socket endpoint description can not be interpreted by a Play-WS based client.
There are two ways to inform the user about such an incompatibility: either by showing a compilation error,
or by throwing a runtime exception. In *endpoints*, interpreters can partially support the description
language, resulting in a compilation error if one tries to apply an interpreter that is not powerful
enough to interpret a given endpoint description. By contrast, if the description language is a sealed
AST then all interpreters have to be total, otherwise a `MatchError` will be thrown at runtime.
That being said, a drawback of having an extensible description language is that users have to “build”
their language by combining different modules together (eg, `Endpoints with JsonEntitiesFromSchemas`), and
then build matching interpreters. These steps are not needed with projects where the description language
is based on a sealed AST.
## Servant / typedapi / typed-schema
[Servant](https://haskell-servant.github.io/) is a Haskell library that uses generic
programming to
derive client, server and documentation from endpoint descriptions.
[typedapi](https://github.com/pheymann/typedapi) and
[typed-schema](https://github.com/TinkoffCreditSystems/typed-schema)
are similar projects written in Scala. In these projects, both descriptions and
interpreters are extensible. The difference with *endpoints* is that
descriptions are **types**, whereas in *endpoints* they are **values**.
Using types as descriptions has some benefits: they can directly be used to type instances of
data (in contrast, in *endpoints* descriptions of data types have to mirror a
corresponding type definition). On the other hand, we believe that abstracting and combining
types using type-level computations is, in general, less convenient for users.
| 61.796117 | 107 | 0.799372 | eng_Latn | 0.999238 |
7af6ac4c7411196e447863be3a53f71534ca5df1 | 1,813 | md | Markdown | data/markdown/1949/077.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | 3 | 2019-08-06T14:42:02.000Z | 2021-04-01T08:31:53.000Z | data/markdown/1949/077.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | null | null | null | data/markdown/1949/077.md | munichrocker/grundgesetz_changes | 79fcf71f3f8d89f93cbea960e411d253c4939610 | [
"MIT"
] | null | null | null | ## Artikel 77
(1) Die Bundesgesetze werden vom Bundestage beschlossen. Sie sind nach ihrer Annahme durch den Präsidenten des Bundestages unverzüglich dem Bundesrate zuzuleiten.
(2) Der Bundesrat kann binnen zwei Wochen nach Eingang des Gesetzesbeschlusses verlangen, daß ein aus Mitgliedern des Bundestages und des Bundesrates für die gemeinsame Beratung von Vorlagen gebildeter Ausschuß einberufen wird. Die Zusammensetzung und das Verfahren dieses Ausschusses regelt eine Geschäftsordnung, die vom Bundestag beschlossen wird und der Zustimmung des Bundesrates bedarf. Die in diesen Ausschuß entsandten Mitglieder des Bundesrates sind nicht an Weisungen gebunden. Ist zu einem Gesetze die Zustimmung des Bundesrates erforderlich, so können auch der Bundestag und die Bundesregierung die Einberufung verlangen. Schlägt der Ausschuß eine Änderung des Gesetzesbeschlusses vor, so hat der Bundestag erneut Beschluß zu fassen.
(3) Soweit zu einem Gesetze die Zustimmung des Bundesrates nicht erforderlich ist, kann der Bundesrat, wenn das Verfahren nach Absatz 2 beendigt ist, gegen ein vom Bundestage beschlossenes Gesetz binnen einer Woche Einspruch einlegen. Die Einspruchsfrist beginnt im Falle des Absatzes 2 letzter Satz mit dem Eingange des vom Bundestage erneut gefaßten Beschlusses, in allen anderen Fällen mit dem Abschlusse des Verfahrens vor dem in Absatz 2 vorgesehenen Ausschusse.
(4) Wird der Einspruch mit der Mehrheit der Stimmen des Bundesrates beschlossen, so kann er durch Beschluß der Mehrheit der Mitglieder des Bundestages zurückgewiesen werden. Hat der Bundesrat den Einspruch mit einer Mehrheit von mindestens zwei Dritteln seiner Stimmen beschlossen, so bedarf die Zurückweisung durch den Bundestag einer Mehrheit von zwei Dritteln, mindestens der Mehrheit der Mitglieder des Bundestages.
| 226.625 | 745 | 0.841699 | deu_Latn | 0.999755 |
7af6d21ed27d817b709c28bdd775a076819212e2 | 1,463 | md | Markdown | leetcode/2019-06-23-number-of-ways-to-stay-in-the-same-place-after-some-steps.md | openset/openset.github.io | 658823caa2082c8f08aae6ad891b135f588513bc | [
"MIT"
] | 2 | 2017-07-14T02:16:31.000Z | 2019-05-10T00:28:59.000Z | leetcode/2019-06-23-number-of-ways-to-stay-in-the-same-place-after-some-steps.md | openset/openset.github.io | 658823caa2082c8f08aae6ad891b135f588513bc | [
"MIT"
] | null | null | null | leetcode/2019-06-23-number-of-ways-to-stay-in-the-same-place-after-some-steps.md | openset/openset.github.io | 658823caa2082c8f08aae6ad891b135f588513bc | [
"MIT"
] | null | null | null | ---
layout: single
title: "停在原地的方案数"
date: 2019-06-23 21:30:00 +0800
categories: [Leetcode]
tags: [Dynamic Programming]
permalink: /problems/number-of-ways-to-stay-in-the-same-place-after-some-steps/
---
## 1269. 停在原地的方案数 (Hard)
{% raw %}
<p>有一个长度为 <code>arrLen</code> 的数组,开始有一个指针在索引 <code>0</code> 处。</p>
<p>每一步操作中,你可以将指针向左或向右移动 1 步,或者停在原地(指针不能被移动到数组范围外)。</p>
<p>给你两个整数 <code>steps</code> 和 <code>arrLen</code> ,请你计算并返回:在恰好执行 <code>steps</code> 次操作以后,指针仍然指向索引 <code>0</code> 处的方案数。</p>
<p>由于答案可能会很大,请返回方案数 <strong>模</strong> <code>10^9 + 7</code> 后的结果。</p>
<p> </p>
<p><strong>示例 1:</strong></p>
<pre>
<strong>输入:</strong>steps = 3, arrLen = 2
<strong>输出:</strong>4
<strong>解释:</strong>3 步后,总共有 4 种不同的方法可以停在索引 0 处。
向右,向左,不动
不动,向右,向左
向右,不动,向左
不动,不动,不动
</pre>
<p><strong>示例 2:</strong></p>
<pre>
<strong>输入:</strong>steps = 2, arrLen = 4
<strong>输出:</strong>2
<strong>解释:</strong>2 步后,总共有 2 种不同的方法可以停在索引 0 处。
向右,向左
不动,不动
</pre>
<p><strong>示例 3:</strong></p>
<pre>
<strong>输入:</strong>steps = 4, arrLen = 2
<strong>输出:</strong>8
</pre>
<p> </p>
<p><strong>提示:</strong></p>
<ul>
<li><code>1 <= steps <= 500</code></li>
<li><code>1 <= arrLen <= 10<sup>6</sup></code></li>
</ul>
{% endraw %}
### 相关话题
[[动态规划](https://github.com/openset/leetcode/tree/master/tag/dynamic-programming/README.md)]
---
## [解法](https://github.com/openset/leetcode/tree/master/problems/number-of-ways-to-stay-in-the-same-place-after-some-steps)
| 20.9 | 125 | 0.640465 | yue_Hant | 0.297884 |
7af6d661c621ab937070e180890a8036285a56c6 | 3,843 | md | Markdown | articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: AS2 MDN 受信確認
description: Azure Logic Apps 内での AS2 メッセージのメッセージ処理通知 (MDN) 受信確認について説明します。
services: logic-apps
ms.suite: integration
author: S-Jindal
ms.author: shivamjindal
ms.reviewer: estfan, azla
ms.topic: reference
ms.date: 08/12/2021
ms.openlocfilehash: 510be5ca1343f115bcdf13c2d67b598b4d410564
ms.sourcegitcommit: 6c6b8ba688a7cc699b68615c92adb550fbd0610f
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/13/2021
ms.locfileid: "121862845"
---
# <a name="mdn-acknowledgments-for-as2-messages-in-azure-logic-apps"></a>Azure Logic Apps 内での AS2 メッセージ の MDN 受信確認
Azure Logic Apps では、**AS2** 操作を使用するときに、電子データ交換 (EDI) 通信用の AS2 メッセージを処理するワークフローを作成できます。 EDI メッセージングでは、受信確認によって EDI インターチェンジの処理の状態が示されます。 インターチェンジを受信すると、[**AS2 デコード** アクション](logic-apps-enterprise-integration-as2.md#decode)により、メッセージ処理通知 (MDN) または受信確認を送信者に返すことができます。 MDN により、次の項目が確認されます。
* 受信側のパートナーが元のメッセージを正常に受信した。
送信側のパートナーは、`MessageID`最初に送信されたメッセージ`original-message-id`を、受信側によって MDN に含められるフィールドと比較します。
* 受信側のパートナーにより、交換されたデータの整合性が確認された。
メッセージ整合性チェック (MIC) または MIC ダイジェストは、最初に送信されたメッセージのペイロードから計算されます。 送信側のパートナーは、受信者が受信したメッセージのペイロードから計算し、署名されている場合に MDN の `Received-content-MIC` フィールドに含めた MIC をこの MIC と比較します。
> [!NOTE]
> MDN に署名することはできますが、暗号化または圧縮することはできません。
* 受信の否認防止
送信側のパートナーは、受信側のパートナーの公開キーを使用して署名付きの MDN を比較し、MDN 内の返された MIC の値が、否認防止データベースに格納された元のメッセージ ペイロードの MIC と同じであることを確認します。
> [!NOTE]
> 応答内で MDN の送信を有効にすると、処理中にエラーが発生した場合でも、ロジック アプリは AS2 メッセージの処理の状態を報告するために MDN を返そうとします。 送信者が MDN を受信して確認するまで、AS2 転送は完了しません。
> 同期 MDN は、HTTP 応答 (`200 OK` 状態など) としても機能します。
このトピックでは、受信確認の生成に使用されるプロパティ、使用する MDN ヘッダー、MIC など、AS2 MDN ACK に関する簡単な概要について説明します。 その他の関連情報については、次のドキュメントを確認してください。
* [Azure Logic Apps で B2B エンタープライズ統合用の AS2 メッセージを交換する](logic-apps-enterprise-integration-as2.md)
* [AS2 メッセージの設定](logic-apps-enterprise-integration-as2-message-settings.md)
* [Azure Logic Apps とは](logic-apps-overview.md)
## <a name="mdn-generation"></a>MDN の生成
[AS2 デコード] アクションでは、アグリーメントの **[受信設定]** に **[MDN を送信する]** オプションが選択されている場合、パートナーの AS2 アグリーメントのプロパティに基づいて MDN が生成されます。 この場合、MDN の生成時にメッセージ ヘッダーの **AS2-From** プロパティが使用されますが、他のプロパティとその値はパートナーの AS2 アグリーメント設定から取得されます。
既定では、受信 AS2 メッセージ ヘッダーは、検証と MDN の生成に使用されます。 アグリーメントの検証と MDN の設定を代わりに使用するには、アグリーメントの **[受信設定]** で、 **[メッセージ プロパティをオーバーライドします]** を選択します。 または、このオプションが選択されていないか、アグリーメントが使用できない場合、AS2 デコード アクションでは代わりに受信メッセージ ヘッダーが使用されます。
## <a name="mdn-headers"></a>MDN のヘッダー
MDN を応答として AS2 メッセージに関連付けるには、`AS2-From` ヘッダー、`AS2-To` ヘッダー、および `MessageID` コンテキスト プロパティが使用されます。 MDN では、`Original-Message-ID` ヘッダーは、MDN が応答である AS2 メッセージの `Message-ID` ヘッダーから取得されます。 MDN には次のヘッダーが含まれます。
| ヘッダー | 説明 |
|---------|-------------|
| HTTP および AS2 | 詳細については、[AS2 メッセージの設定](logic-apps-enterprise-integration-as2-message-settings.md)に関する記事を参照してください。
| 転送層 | このヘッダーには、`Content-Type` ヘッダー ("signed multipart message" を含む)、MIC のアルゴリズム、署名の書式設定プロトコル、および最も外側のマルチパート境界サブヘッダーが含まれます。 |
| 第 1 部 | 署名付きマルチパート メッセージの第 1 部は、埋め込まれた MDN です。 この部分は人間が判読できます。 |
| 第 2 部 | 署名付きマルチパート メッセージの第 2 部は、デジタル署名、元のメッセージへの参照、処理の種類と状態、および MIC の値で構成されます。 この部分はコンピューターで読み取ることができます。 |
|||
## <a name="mic-digest"></a>MIC ダイジェスト
MIC ダイジェストまたは MIC では、MDN が最初に送信されたメッセージのペイロードに関連付けられていることが確認されます。 この MIC は、`Received-Content-MIC` 拡張フィールドの署名付きマルチパート MDN メッセージの第 2 部に含まれています。
MIC は base64 でエンコードされ、 **[MIC アルゴリズム]** プロパティによって決定されます。これは、AS2 アグリーメントの **[受信設定]** ページで **[MDN を送信する]** および **[署名付き MDN を送信する]** プロパティが選択されている場合に有効になります。 MIC 生成では、次のサポートされているハッシュ アルゴリズムから選択できます。
* SHA1
* MD5
* SHA2-256
* SHA2-384
* SHA2-512
たとえば、次のスクリーンショットは、AS2 アグリーメントの **[受信設定]** ページの MDN プロパティを示しています。
![AS2 アグリーメントの [受信設定] と MDN 受信確認設定が表示された Azure portal を示すスクリーンショット。](./media/logic-apps-enterprise-integration-as2-mdn-acknowledgment/mdn-ack-settings.png)
## <a name="next-steps"></a>次のステップ
* [AS2 メッセージの設定](logic-apps-enterprise-integration-as2-message-settings.md)
| 45.75 | 283 | 0.781421 | yue_Hant | 0.586658 |
7af6e549b5076646f3cc24e238bc318ad1655656 | 823 | md | Markdown | examples/ruby/README.md | cyberark/conjur-openapi-spec | 40f2f1a6b14fb3536facc628e63321a17667148c | [
"Apache-2.0"
] | 6 | 2020-12-03T19:48:30.000Z | 2021-07-19T08:36:43.000Z | examples/ruby/README.md | cyberark/conjur-openapi-spec | 40f2f1a6b14fb3536facc628e63321a17667148c | [
"Apache-2.0"
] | 116 | 2020-11-24T21:56:49.000Z | 2021-12-10T19:27:39.000Z | examples/ruby/README.md | cyberark/conjur-openapi-spec | 40f2f1a6b14fb3536facc628e63321a17667148c | [
"Apache-2.0"
] | null | null | null | # Spec-Generated Ruby API Client Example
This example is meant to give an example use case for OpenAPI spec generated clients.
It covers using some of the most popular Conjur Open Source endpoints with Ruby:
- Authenticate a user
- Change user's own password
- Rotate user's API key
- Load the root policy
- Store and retrieve a secret
The `run` script is responsible for executing the example by environment-conscious means by:
- If needed, generating a new Ruby client from the OpenAPI spec
- If needed, spinning up a new `docker-compose` environment with Conjur and a Postgres database
- Running the example in a Ruby Docker container
Documentation regarding format and use of OpenAPI client instances and their functions is
generated with the client, and can be found in the `docs` folder of the client library.
| 45.722222 | 95 | 0.791009 | eng_Latn | 0.998999 |
7af6efd76756cf234614bffecb07440939cfa99f | 95,811 | md | Markdown | articles/machine-learning/algorithm-module-reference/machine-learning-module-error-codes.md | damiura/azure-docs.ja-jp | 7944cd1b7d1f9ccdfb93ce68a3d9b7365639fd45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/algorithm-module-reference/machine-learning-module-error-codes.md | damiura/azure-docs.ja-jp | 7944cd1b7d1f9ccdfb93ce68a3d9b7365639fd45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/algorithm-module-reference/machine-learning-module-error-codes.md | damiura/azure-docs.ja-jp | 7944cd1b7d1f9ccdfb93ce68a3d9b7365639fd45 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: モジュール エラーのトラブルシューティング
titleSuffix: Azure Machine Learning service
description: エラー コードを使用した Azure Machine Learning Studio でのモジュール例外のトラブルシューティング
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: reference
author: xiaoharper
ms.author: zhanxia
ms.date: 05/02/2019
ROBOTS: NOINDEX
ms.openlocfilehash: 09a2b616e2bba93be86241c64d37daec7d6dea3b
ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/13/2019
ms.locfileid: "65027841"
---
# <a name="exceptions-and-error-codes-for-algorithm--module-reference"></a>アルゴリズムとモジュールに関する例外とエラー コードの参照
Azure Machine Learning Studio でモジュールを使用しているときに発生する可能性のあるエラー メッセージと例外コードについて説明します。
問題を解決するには、この記事でエラーを探して、一般的な原因について確認します。 Studio でエラー メッセージの完全なテキストを取得するには 2 つの方法があります。
- 右側のウィンドウの **[View Output Log]\(出力ログの表示\)** リンクをクリックして、一番下までスクロールします。 ウィンドウの最後の 2 行に、詳細なエラー メッセージが表示されます。
- エラーのあるモジュールを選択し、赤い X をクリックします。関連するエラー テキストのみが表示されます。
エラー メッセージのテキストが役に立たない場合は、コンテキストおよび必要な追加または変更に関する情報をお送りください。 エラー トピックについてのフィードバックをお送りいただくか、[Azure Machine Learning STUDIO フォーラム](https://aka.ms/aml-forum-studio)にアクセスして質問を投稿していただくことができます。
## <a name="error-0001"></a>エラー 0001
データ セットで指定した列のうち 1 つまたは複数が見つからない場合、例外が発生します。
モジュールで列を選択していても、選択された列が入力データ セットに存在しない場合、このエラーを受け取ります。 このエラーは、列名を手入力した場合、または実験を実行したときにデータセットに存在しない列が列セレクターによって提案された場合、発生する可能性があります。
**解決策:** この例外がスローされているモジュールを見直し、列の名前 (1 つまたは複数) が正しいこと、および参照されているすべての列が存在することを確認します。
|例外メッセージ|
|------------------------|
|One or more specified columns were not found (指定された列のうち 1 つ以上が見つかりませんでした)|
|Column with name or index "{0}" not found (名前またはインデックス "\{0\}" の列が見つかりません)|
|Column with name or index "{0}" does not exist in "{1}" (名前またはインデックス "\{0\}" の列は "{1}" に存在しません)|
## <a name="error-0002"></a>エラー 0002
1 つまたは複数のパラメーターを解析できない場合、または指定された型からターゲット メソッドで必要な型に変換できない場合、例外が発生します。
Azure Machine Learning では、入力としてパラメーターを指定し、値の型が予想される型と異なり、暗黙的な変換を実行できない場合、このエラーが発生します。
**解決策:** モジュールの要件を確認し、必要な値の型を特定します (文字列、整数、倍精度浮動小数点数型など)
|例外メッセージ|
|------------------------|
|Failed to parse parameter (パラメーターを解析できませんでした)|
|Failed to parse "{0}" parameter ("\{0\}" パラメーターを解析できませんでした)|
|Failed to parse (convert) "{0}" parameter to "{1}" ("\{0\}" パラメーターを "{1}" に解析 (変換) できませんでした)|
|Failed to convert "{0}" parameter from "{1}" to "{2}" ("\{0\}" パラメーターを "{1}" から "{2}" に変換できませんでした)|
|Failed to convert "{0}" parameter value "{1}" from "{2}" to "{3}" ("\{0\}" パラメーターの値 "{1}" を "{2}" から "{3}" に変換できませんでした)|
|Failed to convert value "{0}" in column "{1}" from "{2}" to "{3}" with usage of the format "{4}" provided (指定された形式 "{4}" の使用方法で、列 "{1}" の値 "\{0\}" を "{2}" から "{3}" に変換できませんでした)|
## <a name="error-0003"></a>エラー 0003
1 つまたは複数の入力が null または空の場合、例外が発生します。
Azure Machine Learning では、モジュールに対するいずれかの入力またはパラメーターが null または空の場合、このエラーを受け取ります。 たとえば、パラメーターに値を入力しなかった場合、このエラーが発生する可能性があります。 欠損値のあるデータセットまたは空のデータセットを選択した場合にも、発生する可能性があります。
**解決策:**
+ 例外が発生したモジュールを開き、すべての入力が指定されていることを確認します。 すべての必須入力が指定されていることを確認します。
+ Azure ストレージから読み込まれるデータにアクセスできること、およびアカウント名またはキーが変更されていないことを確認します。
+ 入力データで欠損値または null 値を確認します。
+ データ ソースに対するクエリを使用している場合は、必要な形式でデータが返されることを確認します。
+ 入力ミスや、データの指定でのその他の変更を確認します。
|例外メッセージ|
|------------------------|
|One or more of inputs are null or empty (1 つまたは複数の入力が null または空です)|
|Input "{0}" is null or empty (入力 "\{0\}" が null または空です)|
## <a name="error-0004"></a>エラー 0004
パラメーターが特定の値以下の場合、例外が発生します。
Azure Machine Learning では、メッセージのパラメーターが、モジュールでデータを処理するために必要な境界値より小さい場合、このエラーを受け取ります。
**解決策:** 例外がスローされているモジュールを見直し、指定された値より大きくなるようにパラメーターを修正します。
|例外メッセージ|
|------------------------|
|Parameter should be greater than boundary value. (パラメーターは、境界値より大きい値にする必要があります。)|
|Parameter "{0}" value should be greater than {1}. (パラメーター "\{0\}" の値は "{1}" より大きくする必要があります。)|
|Parameter "{0}" has value "{1}" which should be greater than {2} (パラメーター "\{0\}" の値は "{1}" ですが、"{2}" より大きくする必要があります)|
## <a name="error-0005"></a>エラー 0005
パラメーターが特定の値より小さい場合、例外が発生します。
Azure Machine Learning では、メッセージのパラメーターが、モジュールでデータを処理するために必要な境界値以下の場合、このエラーを受け取ります。
**解決策:** 例外がスローされているモジュールを見直し、指定された値以上になるように、パラメーターを修正します。
|例外メッセージ|
|------------------------|
|Parameter should be greater than or equal to boundary value. (パラメーターは、境界値以上にする必要があります。)|
|Parameter "{0}" value should be greater than or equal to {1}. (パラメーター "\{0\}" の値は {1} 以上にする必要があります。)|
|Parameter "{0}" has value "{1}" which should be greater than or equal to {2}. (パラメーター "\{0\}" の値は "{1}" ですが、{2} 以上にする必要があります。)|
## <a name="error-0006"></a>エラー 0006
パラメーターが指定された値以上の場合、例外が発生します。
Azure Machine Learning では、メッセージのパラメーターが、モジュールでデータを処理するために必要な境界値以上の場合、このエラーを受け取ります。
**解決策:** 例外がスローされているモジュールを見直し、指定された値より小さくなるようにパラメーターを修正します。
|例外メッセージ|
|------------------------|
|Parameters mismatch. (パラメーターが一致しません。) One of the parameters should be less than another. (パラメーターの 1 つは他より小さくする必要があります。)|
|Parameter "{0}" value should be less than parameter "{1}" value. (パラメーター "\{0\}" の値はパラメーター "{1}" の値より小さくする必要があります。)|
|Parameter "{0}" has value "{1}" which should be less than {2}. (パラメーター "\{0\}" の値は "{1}" ですが、"{2}" より小さくする必要があります。)|
## <a name="error-0007"></a>エラー 0007
パラメーターが特定の値より大きい場合、例外が発生します。
Azure Machine Learning では、モジュールのプロパティで許容されるより大きい値を指定した場合、このエラーを受け取ります。 たとえば、サポートされる日付の範囲外にあるデータを指定した場合や、3 列のみ使用可能なときに 5 列を使用することを示した場合です。
何らかの方法で一致する必要があるデータの 2 つのセットを指定した場合も、このエラーが表示される可能性があります。 たとえば、列の名前を変更していて、インデックスで列を指定する場合、指定する名前の数と列インデックスの数が一致する必要があります。 もう 1 つの例として、2 つの列を使用する算術演算では、列に同じ数の行が含まれる必要があります。
**解決策:**
+ 問題のモジュールを開き、数値プロパティの設定を確認します。
+ すべてのパラメーター値が、そのプロパティに対してサポートされている値の範囲内にあることを確認します。
+ モジュールが複数の入力を受け取る場合は、入力が同じサイズであることを確認します。
<!-- + If the module has multiple properties that can be set, ensure that related properties have appropriate values. For example, when using [Group Data into Bins](group-data-into-bins.md), if you use the option to specify custom bin edges, the number of bins must match the number of values you provide as bin boundaries.-->
+ データセットまたはデータ ソースが変更されたかどうかを確認します。 以前のバージョンのデータで動作していた値が、列の数、列のデータ型、またはデータのサイズを変更すると失敗することがあります。
|例外メッセージ|
|------------------------|
|Parameters mismatch. (パラメーターが一致しません。) One of the parameters should be less than another. (パラメーターの 1 つが他より小さい必要があります。)|
|Parameter "{0}" value should be less than or equal to parameter "{1}" value. (パラメーター "\{0\}" の値はパラメーター "{1}" の値以下である必要があります。)|
|Parameter "{0}" has value "{1}" which should be less than or equal to {2}. (パラメーター "\{0\}" の値は "{1}" ですが、{2} 以下にする必要があります。)|
## <a name="error-0008"></a>エラー 0008
パラメーターが範囲内にない場合、例外が発生します。
Azure Machine Learning では、メッセージのパラメーターがモジュールでデータを処理するために必要な境界の外側にある場合、このエラーを受け取ります。
たとえば、[行の追加](add-rows.md)を使って列の数が異なる 2 つのデータセットを結合しようとすると、このエラーが表示されます。
**解決策:** 例外がスローされているモジュールを見直し、指定された範囲内になるようにパラメーターを修正します。
|例外メッセージ|
|------------------------|
|Parameter value is not in the specified range. (パラメーターの値が、指定された範囲内ではありません。)|
|Parameter "{0}" value is not in range. (パラメーター "\{0\}" の値が範囲内ではありません。)|
|Parameter "{0}" value should be in the range of [{1}, {2}]. (パラメーター "\{0\}" の値は、[{1}, {2}] の範囲内である必要があります。)|
## <a name="error-0009"></a>エラー 0009
Azure ストレージ アカウント名またはコンテナー名が正しく指定されていない場合、例外が発生します。
Azure Machine Learning Studio では、Azure ストレージ アカウントのパラメーターを指定していても、名前またはパスワードを解決できない場合、このエラーが発生します。 パスワードまたはアカウント名に関するエラーは、多くの理由で発生する可能性があります。
+ アカウントの種類が正しくありません。 一部の新しいアカウントの種類は、Machine Learning Studio での使用がサポートされていません。 詳しくは、「[Import Data (データのインポート)](import-data.md)」をご覧ください。
+ 正しくないアカウント名を入力しました
+ アカウントはもう存在しません
+ ストレージ アカウントのパスワードが間違っているか、変更されています
+ コンテナー名を指定していないか、コンテナーが存在しません
+ ファイルのパス (BLOB へのパス) を完全に指定していません
**解決策:**
このような問題は、アカウント名、パスワード、またはコンテナーのパスを手入力しようとすると発生することがよくあります。 [データのインポート](import-data.md) モジュールの新しいウィザードを使うことをお勧めします。名前の検索と確認に役立ちます。
また、アカウント、コンテナー、または BLOB が削除されているかどうかも確認します。 アカウント名とパスワードが正しく入力されたこと、およびコンテナーが存在することを確認するには、別の Azure ストレージ ユーティリティを使います。
Azure Machine Learning では、一部の新しいアカウントの種類はサポートされていません。 たとえば、新しい "ホット" または "コールド" ストレージの種類は、Machine Learning には使用できません。 従来のストレージ アカウントおよび "汎用" として作成されたストレージ アカウントはどちらも、正常に動作します。
BLOB への完全なパスを指定した場合は、パスが "**コンテナー/BLOB 名**" として指定されていること、およびコンテナーと BLOB の両方がアカウントに存在することを確認します。
パスの先頭にスラッシュを含めることはできません。 たとえば、 **/container/blob** は正しくなく、**container/blob** と入力する必要があります。
|例外メッセージ|
|------------------------|
|The Azure storage account name or container name is incorrect. (Azure ストレージ アカウント名またはコンテナー名が正しくありません。)|
|The Azure storage account name "{0}" or container name "{1}" is incorrect; a container name of the format container/blob was expected. (Azure ストレージ アカウント名 "\{0\}" またはコンテナー名 "{1}" が正しくありません。コンテナー名は "コンテナー/BLOB" の形式であると予想されます。)|
## <a name="error-0010"></a>エラー 0010
入力データセットで一致している必要のある列名があっても、そうなっていない場合、例外が発生します。
Azure Machine Learning では、メッセージ内の列インデックスに対応する列名が 2 つの入力データセットで異なる場合、このエラーを受け取ります。
**解決策:** [メタデータの編集](edit-metadata.md)を使うか、または元のデータセットを変更して指定された列インデックスの列名を同じにします。
|例外メッセージ|
|------------------------|
|Columns with corresponding index in input datasets have different names. (入力データセット内の対応するインデックスに伴う列の名前が異なります。)|
|Column names are not the same for column {0} (zero-based) of input datasets ({1} and {2} respectively). (入力データセット ({1} と {2}) それぞれの列 \{0\} (ゼロ ベース) に対する列の名前が同じではありません。)|
## <a name="error-0011"></a>エラー 0011
渡された列セット引数がデータセットのどの列にも適用されない場合、例外が発生します。
Azure Machine Learning では、指定された列の選択が指定されたデータセットのどの列とも一致しない場合、このエラーを受け取ります。
モジュールが動作するために少なくとも 1 つの列が必要なときに列が選択されていない場合も、このエラーが発生することがあります。
**解決策:** データセット内の列に適用されるように、モジュールでの列の選択を変更します。
特定の列 (ラベル列など) を選択する必要があるモジュールの場合、正しい列が選択されていることを確認します。
不適切な列が選択されている場合は、それらを削除し、実験を再実行します。
|例外メッセージ|
|------------------------|
|Specified column set does not apply to any of dataset columns. (指定された列セットは、データセットのどの列にも適用されません。)|
|Specified column set "{0}" does not apply to any of dataset columns. (指定された列セット "\{0\}" は、データセットのどの列にも適用されません。)|
## <a name="error-0012"></a>エラー 0012
渡された引数のセットでクラスのインスタンスを作成できなかった場合、例外が発生します。
**解決策:** このエラーはユーザーでは対処できず、将来のリリースで非推奨になります。
|例外メッセージ|
|------------------------|
|Untrained model, train model first. (未トレーニングのモデルです。先に、モデルをトレーニングしてください。)|
|Untrained model ({0}), use trained model. (未トレーニングのモデル (\{0\}) です。トレーニング済みのモデルを使用してください。)|
## <a name="error-0013"></a>エラー 0013
モジュールに渡された学習器が無効な種類の場合、例外が発生します。
このエラーは、トレーニング済みモデルと接続されたスコアリング モジュールとの間に互換性がないときに、常に発生します。 <!--For example, connecting the output of [Train Matchbox Recommender](train-matchbox-recommender.md) to [Score Model](score-model.md) (instead of [Score Matchbox Recommender](score-matchbox-recommender.md)) will generate this error when the experiment is run. -->
**解決策:**
トレーニング モジュールによって生成される学習器の種類を特定し、学習器に対する適切なスコアリング モジュールを決定します。
モデルが特別なトレーニング モジュールを使ってトレーニングされた場合、対応する特別なスコアリング モジュールにのみ、トレーニング済みモデルを接続します。
|モデルの種類|トレーニング モジュール| スコアリング モジュール|
|----|----|----|
|任意の分類子|[モデルのトレーニング](train-model.md) |[モデルのスコアリング](score-model.md)|
|任意の回帰モデル|[モデルのトレーニング](train-model.md) |[モデルのスコアリング](score-model.md)|
<!--| クラスタリング モデル| [クラスタリング モデルのトレーニング](train-clustering-model.md)または[クラスタリングのスイープ](sweep-clustering.md)| [クラスターへのデータの割り当て](assign-data-to-clusters.md)|
| 異常検出 - 1 クラス SVM | [異常検出モデルのトレーニング](train-anomaly-detection-model.md) |[モデルのスコアリング](score-model.md)|
| 異常検出 - PCA |[モデルのトレーニング](train-model.md) |[モデルのスコアリング](score-model.md) </br> モデルを評価するには、追加ステップがいくつか必要です。 |
| 異常検出 - 時系列| [時系列の異常検出](time-series-anomaly-detection.md) |モデルは、データからトレーニングされて、スコアを生成します。 モジュールでは、トレーニング済み学習器は作成されず、追加のスコアリングは必要ありません。 |
| 推奨モデル| [マッチボックス レコメンダーのトレーニング](train-matchbox-recommender.md) | [マッチボックス レコメンダーのスコアリング](score-matchbox-recommender.md) |
| 画像の分類 | [事前トレーニング済みカスケード画像分類](pretrained-cascade-image-classification.md) | [モデルのスコアリング](score-model.md) |
|Vowpal Wabbit モデル| [Vowpal Wabbit バージョン 7-4 モデルのトレーニング](train-vowpal-wabbit-version-7-4-model.md) | [Vowpal Wabbit バージョン 7-4 モデルのスコアリング](score-vowpal-wabbit-version-7-4-model.md) |
|Vowpal Wabbit モデル| [Vowpal Wabbit バージョン 7-10 モデルのトレーニング](train-vowpal-wabbit-version-7-10-model.md) | [Vowpal Wabbit バージョン 7-10 モデルのスコアリング](score-vowpal-wabbit-version-7-10-model.md) |
|Vowpal Wabbit モデル| [Vowpal Wabbit バージョン 8 モデルのトレーニング](score-vowpal-wabbit-version-8-model.md) | [Vowpal Wabbit バージョン 8 モデルのスコアリング](score-vowpal-wabbit-version-8-model.md) |-->
|例外メッセージ|
|------------------------|
|Learner of invalid type is passed. (無効な種類の学習器が渡されました。)|
|Learner "{0}" has invalid type. (学習器 "\{0\}" は無効な種類です。)|
## <a name="error-0014"></a>エラー 0014
列の一意の値の数が許容されているより多い場合、例外が発生します。
このエラーは、列に含まれる一意の値の数が多すぎる場合に発生します。 たとえば、列がカテゴリ データとして処理するように指定されていても、列の一意の値が多すぎて処理を完了できない場合、このエラーが発生する可能性があります。 また、2 つの入力の間で一意の値の数が一致していない場合にも、このエラーが発生する可能性があります。
**解決策:**
エラーが発生したモジュールを開き、入力として使用されている列を特定します。 一部のモジュールでは、データセット入力を右クリックして **[視覚化]** を選択し、一意の値の数とその分布が含まれる個々の列の統計情報を取得できます。
グループ化または分類に使用する列の場合は、列の一意の値の数を減らすための手順を実行します。 列のデータ型に応じて、さまざまな方法で減らすことができます。
<!--
+ For text data, you might be able to use [Preprocess Text](preprocess-text.md) to collapse similar entries.
+ For numeric data, you can create a smaller number of bins using [Group Data into Bins](group-data-into-bins.md), remove or truncate values using [Clip Values](clip-values.md), or use machine learning methods such as [Principal Component Analysis](principal-component-analysis.md) or [Learning with Counts](data-transformation-learning-with-counts.md) to reduce the dimensionality of the data.
-->
> [!TIP]
> シナリオに合った解決策が見つかりませんか。 このトピックのフィードバックとして、エラーが発生したモジュールの名前と、列のデータ型およびカーディナリティを提供してください。 その情報を使用して、一般的なシナリオに対してさらに対象を絞り込んだトラブルシューティング手順を提供します。
|例外メッセージ|
|------------------------|
|Number of column unique values is greater than allowed. (列の一意の値の数が許容される数を超えています。)|
|Number of unique values in column: "{0}" exceeds tuple count of {1}. (列 "\{0\}" の一意の値の数がタプルの数 {1} を超えています。)|
## <a name="error-0015"></a>エラー 0015
データベース接続が失敗した場合、例外が発生します。
正しくない SQL アカウント名、パスワード、データベース サーバー、またはデータベース名を入力した場合、またはデータベースやサーバーの問題が原因でデータベースとの接続を確立できない場合、このエラーを受け取ります。
**解決策:** アカウント名、パスワード、データベース サーバー、およびデータベースを正しく入力したこと、および指定したアカウントに適切なレベルのアクセス許可があることを確認します。 データベースが現在アクセス可能であることを確認します。
|例外メッセージ|
|------------------------|
|Error making database connection. (データベースとの接続でエラーが発生しました。)|
|Error making database connection: {0}. (データベースとの接続でエラーが発生しました: \{0\}。)|
## <a name="error-0016"></a>エラー 0016
モジュールに渡される入力データセットには互換性のある列の型が含まれる必要があっても、そうなっていない場合、例外が発生します。
Azure Machine Learning では、2 つ以上のデータセットで渡される列の型の間に互換性がない場合、このエラーを受け取ります。
**解決策:** [メタデータの編集](edit-metadata.md)を使うか、または元の入力データセットを修正して<!--, or use [Convert to Dataset](convert-to-dataset.md)--> 列の型が互換性を備えるようにします。
|例外メッセージ|
|------------------------|
|Columns with corresponding index in input datasets do have incompatible types. (入力データセット内の対応するインデックスを伴う列に互換性のない型が含まれています。)|
|Columns {0} and {1} are incompatible. (列 \{0\} と {1} に互換性がありません。)|
|Column element types are not compatible for column {0} (zero-based) of input datasets ({1} and {2} respectively). (入力データセット ({1} と {2}) それぞれの列 \{0\} (ゼロ ベース) に対する列の要素の型に互換性がありません。)|
## <a name="error-0017"></a>エラー 0017
選択した列で現在のモジュールによってサポートされていないデータ型が使われている場合、例外が発生します。
Azure Machine Learning では、たとえば、列の選択にモジュールによって処理できないデータ型の列が含まれている場合 (算術演算に対する文字列型の列、カテゴリ特徴列が必要な場合のスコア列など)、このエラーを受け取る可能性があります。
**解決策:**
1. 問題となっている列を特定します。
2. モジュールの要件を確認します。
3. 要件に準拠するように列を修正します。 列、および試みている変換によっては、次のモジュールをいくつか使用して変更を行うことが必要な場合があります。
+ [メタデータの編集](edit-metadata.md)を使って、列のデータ型を変更するか、列の使用法を変更します (特徴から数値、カテゴリから非カテゴリなど)。
<!-- + Use [Convert to Dataset](convert-to-dataset.md) to ensure that all included columns use data types that are supported by Azure Machine Learning. If you cannot convert the columns, consider removing them from the input dataset.
+ Use the [Apply SQL Transformation](apply-sql-transformation.md) or [Execute R Script](execute-r-script.md) modules to cast or convert any columns that cannot be modified using [Edit Metadata](edit-metadata.md). These modules provide more flexibility for working with datetime data types.
+ For numeric data types, you can use the [Apply Math Operation](apply-math-operation.md) module to round or truncate values, or use the [Clip Values](clip-values.md) module to remove out of range values. -->
4. 最後の手段として、元の入力データセットの修正が必要になる可能性があります。
> [!TIP]
> シナリオに合った解決策が見つかりませんか。 このトピックのフィードバックとして、エラーが発生したモジュールの名前と、列のデータ型およびカーディナリティを提供してください。 その情報を使用して、一般的なシナリオに対してさらに対象を絞り込んだトラブルシューティング手順を提供します。
|例外メッセージ|
|------------------------|
|Cannot process column of current type. (現在の型の列を処理することができません。) The type is not supported by the module. (型がモジュールでサポートされていません。)|
|Cannot process column of type {0}. (\{0\} 型の列を処理できません。) The type is not supported by the module. (型がモジュールでサポートされていません。)|
|Cannot process column "{1}" of type {0}. (\{0\} 型の列 "{1}" を処理できません。) The type is not supported by the module. (型がモジュールでサポートされていません。)|
|Cannot process column "{1}" of type {0}. (\{0\} 型の列 "{1}" を処理できません。) The type is not supported by the module. (型がモジュールでサポートされていません。) Parameter name: {2} (パラメーター名: {2})|
## <a name="error-0018"></a>エラー 0018
入力データセットが有効でない場合、例外が発生します。
**解決策:** Azure Machine Learning ではこのエラーが多くのコンテキストで発生するので、解決策は 1 つではありません。 一般に、このエラーは、モジュールへの入力として提供されたデータの列の数が間違っていること、またはデータ型がモジュールの要件と一致しないことを示します。 例:
- モジュールにはラベル列が必要ですが、ラベルとしてマークされた列がないか、またはラベル列がまだ選択されていません。
- モジュールではデータはカテゴリである必要がありますが、データが数値です。
<!--- The module requires a specific data type. For example, ratings provided to [Train Matchbox Recommender](train-matchbox-recommender.md) can be either numeric or categorical, but cannot be floating point numbers. -->
- データの形式が正しくありません。
- インポートされたデータに、無効な文字、無効な値、または範囲外の値が含まれています。
- 列が空か、または欠損値が多く含みすぎています。
要件およびデータをどうすればよいか確認するには、入力としてデータセットを使用するモジュールのヘルプ トピックをご覧ください。
<!--We also recommend that you use [Summarize Data](summarize-data.md) or [Compute Elementary Statistics](compute-elementary-statistics.md) to profile your data, and use these modules to fix metadata and clean values: [Edit Metadata](edit-metadata.md) and [Clean Missing Data](clean-missing-data.md), [Clip Values](clip-values.md)-->。
|例外メッセージ|
|------------------------|
|Dataset is not valid. (データセットが無効です。)|
|{0} contains invalid data. (\{0\} に無効なデータが含まれています。)|
|{0} and {1} should be consistent column wise. (\{0\} と {1} は列単位で整合している必要があります。)|
## <a name="error-0019"></a>エラー 0019
並べ替えられた値が含まれると予想される列がそうなっていない場合、例外が発生します。
Azure Machine Learning では、指定された列の値が正しい順序になっていない場合、このエラーを受け取ります。
**解決策:** 入力データセットを手動で変更して列の値を並べ替えた後、モジュールを再実行します。
|例外メッセージ|
|------------------------|
|Values in column are not sorted. (列の値が並べ替えられていません。)|
|Values in column "{0}" are not sorted. (列 "\{0\}" の値が並べ替えられていません。)|
|Values in column "{0}" of dataset "{1}" are not sorted. (データセット "{1}" の列 "\{0\}" の値が並べ替えられていません。)|
## <a name="error-0020"></a>エラー 0020
モジュールに渡された一部のデータセットの列の数が少なすぎる場合、例外が発生します。
Azure Machine Learning では、モジュールに対して十分な列が選択されてされていない場合、このエラーを受け取ります。
**解決策:** モジュールを見直し、列セレクターで正しい数の列が選択されていることを確認します。
|例外メッセージ|
|------------------------|
|Number of columns in input dataset is less than allowed minimum. (入力データセット内の列数が、許容される最少数未満です。)|
|Number of columns in input dataset is less than allowed minimum of {0} column(s). (入力データセット内の列数が、\{0\} 列の許容される最少数未満です。)|
|Number of columns in input dataset "{0}" is less than allowed minimum of {1} column(s). (入力データセット "\{0\}" 内の列数が、{1} 列の許容される最少数未満です。)|
## <a name="error-0021"></a>エラー 0021
モジュールに渡された一部のデータセットの行の数が少なすぎる場合、例外が発生します。
Azure Machine Learning では、指定された操作を実行しても、十分な行がデータセット内にない場合、このエラーが表示されます。 たとえば、入力データセットが空の場合、または最少数の行を有効にするために必要な操作を実行しようとした場合、このエラーが表示される可能性があります。 そのような操作の例としては、統計的手法に基づくグループ化や分類、特定の種類のビン分割、カウントを使用した学習などがあります。
**解決策:**
+ エラーを返したモジュールを開き、入力データセットとモジュールのプロパティを確認します。
+ 入力データセットが空ではないこと、およびモジュールのヘルプで説明されている要件を満たすのに十分な行数のデータがあることを確認します。
+ データが外部ソースから読み込まれる場合は、データ ソースが使用可能であること、およびインポート処理で取得される行数が少なくなる原因となるエラー、またはデータ定義の変更がないことを確認します。
+ データの型や値の数に影響する可能性のある操作 (クリーニング、分割、結合操作など) をモジュールのデータ アップストリームで実行している場合は、それらの操作の出力を調べて、返される行数を確認します。
## <a name="error-0022"></a>エラー 0022
入力データセットで選択された列の数が、予想される数と等しくない場合、例外が発生します。
Azure Machine Learning では、ダウンストリームのモジュールまたは操作で特定の数の列または入力が必要なときに、提供する列または入力が多すぎるか少なすぎる場合、このエラーが発生します。 例:
- 1 つのラベル列またはキー列を指定するときに、誤って複数の列を選択している。
- 列の名前を変更するときに、列より多い、または少ない数の名前を指定した。
- ソースまたはターゲットの列の数が変更された、またはモジュールによって使用される列の数と一致しない。
- 入力に対して値のコンマ区切りリストを指定したが、値の数が一致しない、または複数の入力がサポートされていない。
**解決策:** モジュールを見直し、列の選択を調べて、正しい数の列が選択されていることを確認します。 アップストリームのモジュールの出力と、ダウンストリームの操作の要件を確認します。
複数の列を選択できる列選択オプションのいずれかを使った場合 (列のインデックス、すべての特徴、すべての数字など)、選択によって返される列の正確な数を確認します。
<!--If you are trying to specify a comma-separated list of datasets as inputs to [Unpack Zipped Datasets](unpack-zipped-datasets.md), unpack only one dataset at a time. Multiple inputs are not supported. -->
アップストリームの列の数または型が変更されていないことを確認します。
推奨データセットを使ってモデルをトレーニングしている場合、レコメンダーでは、ユーザーと項目のペアまたはユーザーと項目とランキングに対応する限られた列数が予想されることに忘れないでください。 モデルをトレーニングする前、または推奨データセットを分割する前に、追加の列を削除します。 詳しくは、「[Split Data (データの分割)](split-data.md)」をご覧ください。
|例外メッセージ|
|------------------------|
|Number of selected columns in input dataset does not equal to the expected number. (入力データセットで選択された列の数が、予想される数と等しくありません。)|
|Number of selected columns in input dataset does not equal to {0}. (入力データセットで選択された列の数が、\{0\} と等しくありません。)|
|Column selection pattern "{0}" provides number of selected columns in input dataset not equal to {1}. (列選択パターン "\{0\}" で提供される入力データセット内で選択される列の数が、{1} と等しくありません。)|
|Column selection pattern "{0}" is expected to provide {1} column(s) selected in input dataset, but {2} column(s) is/are provided. (列選択パターン "\{0\}" では入力データセットの {1} 列が提供されると予想されますが、提供されるのは {2} 列です。)|
## <a name="error-0023"></a>エラー 0023
入力データセットのターゲット列が現在のトレーナー モジュールに対して有効ではない場合、例外が発生します。
Azure Machine Learning では、(モジュールのパラメーターで選択されている) ターゲット列が、有効なデータ型ではない場合、欠損値がすべて含まれる場合、または予想されるカテゴリではない場合、このエラーが発生します。
**解決策:** モジュールの入力を見直し、ラベル/ターゲット列の内容を調べます。 欠損の値がすべてないことを確認します。 モジュールでターゲット列がカテゴリである必要がある場合、ターゲット列に複数の個別値があることを確認します。
|例外メッセージ|
|------------------------|
|Input dataset has unsupported target column. (入力データセットにサポートされていないターゲット列があります。)|
|Input dataset has unsupported target column "{0}". (入力データセットにサポートされていないターゲット列 "\{0\}" があります。)|
|Input dataset has unsupported target column "{0}" for learner of type {1}. (入力データセットに "{1}" 型の学習器に対してサポートされていないターゲット列 "\{0\}" があります。)|
## <a name="error-0024"></a>エラー 0024
データセットにラベル列が含まれていない場合、例外が発生します。
Azure Machine Learning では、モジュールでラベル列が必要であってもデータセットにラベル列がない場合、このエラーが発生します。 たとえば、スコアリングされたデータセットの評価では、通常、正確さのメトリックを計算するためにラベル列が存在する必要があります。
また、ラベル列がデータセットに存在していても、Azure Machine Learning によって正しく検出されない場合にも、発生することがあります。
**解決策:**
+ エラーを生成したモジュールを開き、ラベル列が存在するかどうかを確認します。 予測しようとしている単一の結果 (または従属変数) が列に含まれてさえいれば、列の名前またはデータ型は関係ありません。 どの列にラベルがあるか不明な場合は、*クラス*や*ターゲット*などの汎用的な名前を探します。
+ データセットにラベル列が含まれていない場合は、ラベル列がアップストリームで明示的に、または誤って削除された可能性があります。 また、データセットがアップストリームのスコアリング モジュールの出力ではない可能性もあります。
+ ラベル列として列を明示的にマークするには、[メタデータの編集](edit-metadata.md)モジュールを追加して、データセットを接続します。 ラベル列だけを選択し、 **[フィールド]** ドロップダウン リストから **[ラベル]** を選択します。
+ 間違った列をラベルとして選択した場合は、 **[フィールド]** から **[ラベルのクリア]** を選択して、列のメタデータを修正できます。
|例外メッセージ|
|------------------------|
|There is no label column in dataset. (データセットにラベル列がありません。)|
|There is no label column in "{0}". ("\{0\}" にラベル列がありません。)|
## <a name="error-0025"></a>エラー 0025
データセットにスコア列が含まれていない場合、例外が発生します。
Azure Machine Learning では、評価モデルへの入力に有効なスコア列が含まれていない場合、このエラーが発生します。 たとえば、適切なトレーニング済みモデルでデータセットがスコアリングされる前にユーザーがデータセットを評価しようとしているか、またはアップストリームでスコア列が明示的に削除されています。 2 つのデータセットのスコア列に互換性がない場合も、この例外が発生します。 たとえば、線形リグレッサーとバイナリ分類器の精度を比較しようとしている可能性があります。
**解決策:** 評価モデルへの入力を見直し、スコア列が 1 つ以上含まれているかどうかを調べます。 そうでない場合、データセットがスコアリングされなかったか、またはアップストリームのモジュールでスコア列が削除されています。
|例外メッセージ|
|------------------------|
|There is no score column in dataset. (データセットにスコア列がありません。)|
|There is no score column in "{0}". ("\{0\}" にスコア列がありません。)|
|There is no score column in "{0}" that is produced by a "{1}". ("{1}" によって生成されたスコア列が "\{0\}" にありません。) Score the dataset using the correct type of learner. (適切な種類の学習器を使用してデータセットをスコアリングしてください。)|
## <a name="error-0026"></a>エラー 0026
同じ名前の列が許可されていない場合、例外が発生します。
Azure Machine Learning では、複数の列の名前が同じ場合、このエラーが発生します。 このエラーを受け取る方法の 1 つとして、データセットにヘッダー行がなく、次のような列名が自動的に割り当てられる場合があります。(Col0、Col1 など)
**解決策:** 複数の列に同じ名前が付けられている場合は、入力データセットとモジュールの間に[メタデータの編集](edit-metadata.md)モジュールを挿入します。 [メタデータの編集](edit-metadata.md)で列セレクターを使って名前を変更する列を選択し、 **[新しい列名]** テキスト ボックスに新しい名前を入力します。
|例外メッセージ|
|------------------------|
|Equal column names are specified in arguments. (同じ列名が引数で指定されています。) Equal column names are not allowed by module. (モジュールでは同じ列名は許可されません。)|
|Equal column names in arguments "{0}" and "{1}" are not allowed. (引数 "\{0\}" と "{1}" での同じ列名は許可されません。) Specify different names. (異なる名前を指定してください。)|
## <a name="error-0027"></a>エラー 0027
同じサイズでなければならない 2 つのオブジェクトのサイズが異なる場合、例外が発生します。
これは Azure Machine Learning の一般的なエラーであり、多くの状況で発生することがあります。
**解決策:** 特定の解決策はありません。 ただし、次のような状況を確認できます。
- 列の名前を変更する場合、各リスト (入力列と新しい名前の一覧) に同じ数の項目があることを確認します。
- 2 つのデータセットを結合または連結する場合、同じスキーマであることを確認します。
- 複数の列を含む 2 つのデータセットを結合する場合は、キー列が同じデータ型であることを確認し、 **[Allow duplicates and preserve column order in selection]\(選択範囲内で重複を許可し、順序を維持する\)** オプションを選択します。
|例外メッセージ|
|------------------------|
|The size of passed objects is inconsistent. (渡されたオブジェクトのサイズが一致していません。)|
|The size of "{0}" is inconsistent with size of "{1}". ("\{0\}" のサイズが "{1}" のサイズと異なります。)|
## <a name="error-0028"></a>エラー 0028
列セットに重複する列名が含まれていて、それが許可されない場合、例外が発生します。
Azure Machine Learning では、列名が重複している場合、つまり一意ではない場合、このエラーが発生します。
**解決策:** 同じ名前の列がある場合は、入力データセットと、エラーが発生したモジュールの間に、[メタデータの編集](edit-metadata.md)のインスタンスを追加します。 [メタデータの編集](edit-metadata.md)で列セレクターを使って名前を変更する列を選択し、 **[新しい列名]** テキスト ボックスに新しい列名を入力します。 複数の列の名前を変更する場合は、 **[新しい列名]** に入力する値が一意であることを確認します。
|例外メッセージ|
|------------------------|
|Column set contains duplicated column name(s). (列セットに重複する列名が含まれます。)|
|The name "{0}" is duplicated. (名前 "\{0\}" が重複しています。)|
|The name "{0}" is duplicated in "{1}". ("{1}" で名前 "\{0\}" が重複しています。)|
## <a name="error-0029"></a>エラー 0029
無効な URI が渡された場合、例外が発生します。
Azure Machine Learning では、無効な URI が渡された場合、このエラーが発生します。 次のいずれかの状況になった場合、このエラーを受け取ります。
- 読み取りまたは書き込みのために Azure Blob Storage に対して提供されたパブリック URI または SAS URI に、エラーが含まれます。
- SAS の時間枠の期限が切れています。
- HTTP ソースによる Web URL が、ファイルまたはループバック URI を表しています。
- HTTP による Web URL に、正しくない形式の URL が含まれています。
- リモート ソースで URL を解決できません。
**解決策:** モジュールを見直し、URI の形式を検証します。 データ ソースが HTTP による Web URL の場合は、目的のソースがファイルまたはループバック URI (localhost) ではないことを確認します。
|例外メッセージ|
|------------------------|
|Invalid Uri is passed. (無効な URI が渡されました。)|
## <a name="error-0030"></a>エラー 0030
ファイルをダウンロードできない場合、例外が発生します。
Azure Machine Learning では、ファイルをダウンロードできない場合、この例外が発生します。 再試行の試みが 3 回失敗した後に HTTP ソースからの読み取りの試みが失敗すると、この例外を受け取ります。
**解決策:** HTTP ソースに対する URI が正しいこと、および現在インターネット経由でサイトにアクセスできることを確認します。
|例外メッセージ|
|------------------------|
|Unable to download a file. (ファイルをダウンロードできません。)|
|Error while downloading the file: {0}. (ファイル \{0\} のダウンロードでエラーが発生しました。)|
## <a name="error-0031"></a>エラー 0031
列セット内の列の数が必要とされるより少ない場合、例外が発生します。
Azure Machine Learning では、選択された列の数が必要とされるより少ない場合、このエラーが発生します。 必要最少数の列が選択されていない場合、このエラーを受け取ります。
**解決策:** **列セレクター**を使って、列の選択に列を追加します。
|例外メッセージ|
|------------------------|
|Number of columns in column set is less than required. (列セット内の列の数が必要な数に達していません。)|
|{0} column(s) should be specified. (\{0\} 列を指定する必要があります。) The actual number of specified columns is {1}. (実際に指定された列の数は {1} です。)|
## <a name="error-0032"></a>エラー 0032
引数が数値ではない場合、例外が発生します。
Azure Machine Learning では、引数が倍精度浮動小数点数型または NaN の場合、このエラーを受け取ります。
**解決策:** 有効な値を使うように、指定された引数を修正します。
|例外メッセージ|
|------------------------|
|Argument is not a number. (引数が数値ではありません。)|
|"{0}" is not a number. ("\{0\}" が数値ではありません。)|
## <a name="error-0033"></a>エラー 0033
引数が無限である場合、例外が発生します。
Azure Machine Learning では、引数が無限である場合、このエラーが発生します。 引数が `double.NegativeInfinity` または `double.PositiveInfinity` のいずれかの場合、このエラーを受け取ります。
**解決策:** 有効な値になるように、指定された引数を修正します。
|例外メッセージ|
|------------------------|
|Argument is must be finite. (引数は有限である必要があります。)|
|"{0}" is not finite. ("\{0\}" が有限ではありません。)|
## <a name="error-0034"></a>エラー 0034
特定のユーザーと項目のペアに対して複数の評価が存在する場合、例外が発生します。
Azure Machine Learning では、ユーザーと項目のペアに複数の評価値がある場合、推奨でこのエラーが発生します。
**解決策:** ユーザーと項目のペアで評価値が 1 つのみ所有されていることを確認します。
|例外メッセージ|
|------------------------|
|More than one rating exists for the value(s) in dataset. (データセット内の値に対して複数の評価が存在します。)|
|More than one rating for user {0} and item {1} in rating prediction data table. (評価予測データ テーブルにユーザー \{0\} と項目 {1} に対して複数の評価が存在します。)|
## <a name="error-0035"></a>エラー 0035
特定のユーザーまたは項目に対して特徴が提供されなかった場合、例外が発生します。
Azure Machine Learning では、スコアリングに対して推奨モデルを使おうとしていても、特徴ベクターが見つからない場合、このエラーが発生します。
**解決策:**
マッチボックス レコメンダーには、項目の特徴またはユーザーの特徴のいずれかを使うときに満たす必要がある特定の要件があります。 このエラーは、入力として指定したユーザーまたは項目に特徴ベクターがないことを示します。 各ユーザーまたは項目のデータで特徴ベクターが使用できることを確認する必要があります。
たとえば、ユーザーの年齢、場所、収入などの特徴を使って推奨モデルをトレーニングしたものの、トレーニング中に認識されなかった新しいユーザーのスコアを作成する場合、それらを適切に予測するには、新しいユーザーに対する同等の特徴のセット (つまり、年齢、場所、収入の値) を指定する必要があります。
これらのユーザーに対する特徴がない場合は、適切な特徴を生成するための特徴エンジニアリングを検討します。 たとえば、個々のユーザーの年齢や収入の値がない場合は、ユーザーのグループに対して使用するおおよその値を生成する場合があります。
<!--When you are scoring from a recommendation mode, you can use item or user features only if you previously used item or user features during training. For more information, see [Score Matchbox Recommender](score-matchbox-recommender.md).
For general information about how the Matchbox recommendation algorithm works, and how to prepare a dataset of item features or user features, see [Train Matchbox Recommender](train-matchbox-recommender.md). -->
> [!TIP]
> 解決策が自分のケースに該当しませんか。 この記事のフィードバックを送信し、モジュールや列の行数など、シナリオについての情報を提供してください。 この情報を使用して、今後さらに詳細なトラブルシューティング手順を提供します。
|例外メッセージ|
|------------------------|
|No features were provided for a required user or item. (必要なユーザーまたは項目に特徴が提供されませんでした。)|
|Features for {0} required but not provided. (\{0\} の特徴が必要ですが提供されていません。)|
## <a name="error-0036"></a>エラー 0036
特定のユーザーまたは項目に対して複数の特徴ベクターが提供された場合、例外が発生します。
Azure Machine Learning では、特徴ベクターが複数回定義された場合、このエラーが発生します。
**解決策:** 特徴ベクターが複数回定義されていないことを確認します。
|例外メッセージ|
|------------------------|
|Duplicate feature definition for a user or item. (ユーザーまたは項目に対する特徴の定義が重複しています。)|
|Duplicate feature definition for {0}. (\{0\} に対する特徴の定義が重複しています。)|
## <a name="error-0037"></a>エラー 0037
複数のラベル列が指定されているときに、ラベル列が 1 つのみ許可される場合、例外が発生します。
Azure Machine Learning では、新しいラベル列として複数の列が選択された場合、このエラーが発生します。 ほとんどの教師あり学習アルゴリズムでは、1 つの列がターゲットまたはラベルとしてマークされている必要があります。
**解決策:** 新しいラベル列として 1 つの列を選択します。
|例外メッセージ|
|------------------------|
|Multiple label columns are specified. (複数のラベル列が指定されています。)|
## <a name="error-0038"></a>エラー 0038
予想される要素の数が厳密に要求される値と等しくない場合、例外が発生します。
Azure Machine Learning では、予想される要素の数が厳密な値である必要があっても、そうなっていない場合、このエラーが発生します。 要素の数が有効な予想される値と等しくない場合、このエラーが発生します。
**解決策:** 正しい数の要素を持つように入力を修正します。
|例外メッセージ|
|------------------------|
|Number of elements is not valid. (要素の数が有効ではありません。)|
|Number of elements in "{0}" is not valid. ("\{0\}" の要素の数が有効ではありません。)|
|Number of elements in "{0}" is not equal to valid number of {1} element(s). ("\{0\}" の要素の数が {1} 要素の有効な数と等しくありません。)|
## <a name="error-0039"></a>エラー 0039
操作が失敗した場合、例外が発生します。
Azure Machine Learning では、内部操作を完了できない場合、このエラーが発生します。
**解決策:** このエラーは多くの状況によって発生し、特定の解決策はありません。
次の表はこのエラーに対する一般的なメッセージであり、後で状況について具体的に説明します。
使用可能な詳細情報がない場合は、[フィードバックを送信](https://social.msdn.microsoft.com/forums/azure/home?forum=MachineLearning)して、エラーが発生したモジュールおよび関連する状況に関する情報を提供してください。
|例外メッセージ|
|------------------------|
|操作に失敗しました。|
|Error while completing operation: {0}. (操作の実行中にエラーが発生しました: \{0\}。)|
## <a name="error-0040"></a>エラー 0040
非推奨のモジュールを呼び出すと、例外が発生します。
Azure Machine Learning では、非推奨のモジュールを呼び出すと、このエラーが生成されます。
**解決策:** 非推奨のモジュールを、サポートされているモジュールに置き換えます。 代わりに使用するモジュールに関する情報については、モジュールの出力ログをご覧ください。
|例外メッセージ|
|------------------------|
|Accessing deprecated module. (非推奨のモジュールにアクセスしています。)|
|Module "{0}" is deprecated. (モジュール "\{0\}" は非推奨です。) Use module "{1}" instead. (代わりにモジュール "\{0\}" を使用してください。)|
## <a name="error-0041"></a>エラー 0041
非推奨のモジュールを呼び出すと、例外が発生します。
Azure Machine Learning では、非推奨のモジュールを呼び出すと、このエラーが生成されます。
**解決策:** 非推奨のモジュールを、サポートされているモジュールのセットに置き換えます。 この情報は、モジュールの出力ログで示されます。
|例外メッセージ|
|------------------------|
|Accessing deprecated module. (非推奨のモジュールにアクセスしています。)|
|Module "{0}" is deprecated. (モジュール "\{0\}" は非推奨です。) Use the modules "{1}" for requested functionality. (要求された機能に対してはモジュール "\{0\}" を使用してください。)|
## <a name="error-0042"></a>エラー 0042
列を別の型に変換できないと、例外が発生します。
Azure Machine Learning では、列を指定された型に変換できない場合、このエラーが発生します。 モジュールで特定のデータ型 (日時、テキスト、浮動小数点数、整数など) が必要であっても、既存の列を必要な型に変換できない場合、このエラーが発生します。
たとえば、列を選択し、算術演算で使用するために数値データ型に変換しようとした場合、列に無効なデータが含まれていると、このエラーが発生する可能性があります。
このエラーが発生する可能性のあるもう 1 つの理由としては、浮動小数点数または多くの一意の値が含まれる列をカテゴリ列として使おうとする場合です。
**解決策:**
+ エラーが生成されたモジュールのヘルプ ページを開き、データ型の要件を確認します。
+ 入力データセットで列のデータ型を確認します。
+ いわゆるスキーマレス データ ソースで発生するデータを調べます。
+ 目的のデータ型への変換を妨げる可能性がある欠損値または特殊文字がないか、データセットを確認します。
+ 数値データ型は一貫している必要があります。たとえば、整数の列で浮動小数点数を確認します。
+ 数値列でテキスト文字列または NA 値を探します。
+ ブール値は、必要なデータ型に応じて、適切な表現に変換できます。
+ 非 Unicode 文字、タブ文字、または制御文字がないか、テキスト列を調べます
+ モデリング エラーを回避するには日時データが一貫している必要がありますが、形式が多いためクリーンアップが複雑になることがあります。 次の使用を検討します <!--the [Execute R Script](execute-r-script.md) or -->[Python スクリプトの実行](execute-python-script.md)モジュールを使ってクリーンアップを実行します。
+ 必要な場合は、列を正常に変換できるように、入力データセット内の値を修正します。 修正には、ビン分割、切り捨てまたは丸め処理、外れ値の削除、または欠損値の補完などがあります。 機械学習での一般的なデータ変換シナリオについては、次の記事をご覧ください。
+ [見つからないデータのクリーンアップ](clean-missing-data.md)
+ [データの正規化](normalize-data.md)
<!--+ [Clip Values](clip-values.md)
+ [Group Data Into Bins](group-data-into-bins.md)
-->
> [!TIP]
> 解決策がわからない、または自分のケースに該当しませんか。 この記事のフィードバックを送信し、モジュールや列のデータ型など、シナリオについての情報を提供してください。 この情報を使用して、今後さらに詳細なトラブルシューティング手順を提供します。
|例外メッセージ|
|------------------------|
|Not allowed conversion. (変換は許可されていません。)|
|Could not convert column of type {0} to column of type {1}. (\{0\} 型の列を {1} 型の列に変換できませんでした。)|
|Could not convert column "{2}" of type {0} to column of type {1}. (\{0\} 型の列 "{2}" を {1} 型の列に変換できませんでした。)|
|Could not convert column "{2}" of type {0} to column "{3}" of type {1}. (\{0\} 型の列 "{2}" を {1} 型の列 "{3}" に変換できませんでした。)|
## <a name="error-0043"></a>エラー 0043
要素の型で Equals が明示的に実装されていない場合、例外が発生します。
Azure Machine Learning ではこのエラーは使用されておらず、非推奨になります。
**解決策:** なし。
|例外メッセージ|
|------------------------|
|No accessible explicit method Equals found. (アクセス可能な明示的な Equals メソッドが見つかりません。)|
|Cannot compare values for column \\"{0}\\" of type {1}. ({1} 型の列 \"\{0\}\" の値を比較できません。) No accessible explicit method Equals found. (アクセス可能な明示的な Equals メソッドが見つかりません。)|
## <a name="error-0044"></a>エラー 0044
既存の値から列の要素型を派生できないと、例外が発生します。
Azure Machine Learning では、データセットの 1 つまたは複数の列の型を推論できない場合、このエラーが発生します。 これは通常、要素型が異なる複数のデータセットを連結するときに発生します。 Azure Machine Learning では、情報を失うことなく、複数の列のすべての値を表すことができる一般的な型を決定できない場合、このエラーが生成されます。
**解決策:** 結合される両方のデータセットの特定の列のすべての値が、同じ型であるか (数値、ブール値、カテゴリ、文字列、日付など)、または同じ型に変換できることを確認します。
|例外メッセージ|
|------------------------|
|Cannot derive element type of the column. (列の要素型を派生できません。)|
|Cannot derive element type for column "{0}" -- all the elements are null references. (列 "\{0\}" の要素型を派生できません -- すべての要素が null 参照です。)|
|Cannot derive element type for column "{0}" of dataset "{1}" -- all the elements are null references. (データセット "{1}" の列 "\{0\}" の要素型を派生できません -- すべての要素が null 参照です。)|
## <a name="error-0045"></a>エラー 0045
ソースでの混合要素型のため列を作成できないと、例外が発生します。
Azure Machine Learning では、結合される 2 つのデータセットの要素型が異なる場合、このエラーが生成されます。
**解決策:** 結合される両方のデータセットの特定の列のすべての値が、同じ型 (数値、ブール値、カテゴリ、文字列、日付など) であることを確認します。
|例外メッセージ|
|------------------------|
|Cannot create column with mixed element types. (混合要素型の列は作成できません。)|
|Cannot create column with ID "{0}" of mixed element types:\n\tType of data[{1}, {0}] is {2}\n\tType of data[{3}, {0}] is {4}. (混合要素型の ID "\{0\}" の列を作成できません。データ [{1}, \{0\}] の \n\t 型は {2}\n\t ですデータ [{3}, \{0\}] の型は {4} です。)|
## <a name="error-0046"></a>エラー 0046
指定されたパスにディレクトリを作成できないと、例外が発生します。
Azure Machine Learning では、指定されたパスにディレクトリを作成できない場合、このエラーが発生します。 Hive クエリの出力ディレクトリへのパスのいずれかの部分が正しくない、またはアクセスできない場合、このエラーが発生します。
**解決策:** モジュールを見直し、ディレクトリ パスの形式が正しいこと、および現在の資格情報でアクセス可能であることを検証します。
|例外メッセージ|
|------------------------|
|Specify a valid output directory. (有効な出力ディレクトリを指定してください。)|
|Directory: {0} cannot be created. (ディレクトリ \{0\} は作成できません。) Specify valid path. (有効なパスを指定してください。)|
## <a name="error-0047"></a>エラー 0047
モジュールに渡された一部のデータセットの特徴列の数が少なすぎる場合、例外が発生します。
Azure Machine Learning では、トレーニングへの入力データセットにアルゴリズムで必要とされる最少数の列が含まれない場合、このエラーが発生します。 通常、データセットが空か、トレーニング列のみが含まれています。
**解決策:** 入力データセットを見直し、ラベル列とは別に 1 つ以上の列があることを確認します。
|例外メッセージ|
|------------------------|
|Number of feature columns in input dataset is less than allowed minimum. (入力データセット内の特徴列の数が許容される最少数未満です。)|
|Number of feature columns in input dataset is less than allowed minimum of {0} column(s). (入力データセット内の特徴列の数が、\{0\} 列の許容される最少数未満です。)|
|Number of feature columns in input dataset "{0}" is less than allowed minimum of {1} column(s). (入力データセット "\{0\}" 内の特徴列の数が、{1} 列の許容される最少数未満です。)|
## <a name="error-0048"></a>エラー 0048
ファイルを開くことができない場合、例外が発生します。
Azure Machine Learning では、読み取りまたは書き込み用のファイルを開くことができない場合、このエラーが発生します。 このエラーは、次の理由で発生する可能性があります。
- コンテナーまたはファイル (BLOB) が存在しない
- ファイルまたはコンテナーのアクセス レベルで、ファイルへのアクセスが許可されていない
- ファイルが大きすぎて読み取ることができないか、または形式が正しくない
**解決策:** モジュールおよび読み取ろうとしているファイルを見直します。
コンテナーとファイルの名前が正しいことを確認します。
クラシック Azure portal または Azure ストレージ ツールを使って、ファイルにアクセスするためのアクセス許可があることを確認します。
<!--If you are trying to read an image file, make sure that it meets the requirements for image files in terms of size, number of pixels, and so forth. For more information, see [Import Images](import-images.md). -->
|例外メッセージ|
|------------------------|
|Unable to open a file. (ファイルを開くことができません。)|
|Error while opening the file: {0}. (ファイルを開くときにエラーが発生しました: \{0\}。)|
## <a name="error-0049"></a>エラー 0049
ファイルを解析できない場合、例外が発生します。
Azure Machine Learning では、ファイルを解析することができない場合、このエラーが発生します。 [データのインポート](import-data.md) モジュールで選択されているファイル形式がファイルの実際の形式と一致しない場合、またはファイルに認識できない文字が含まれている場合、このエラーが発生します。
**解決策:** モジュールを見直し、ファイル形式の選択がファイルの形式と一致しない場合は修正します。 可能であれば、ファイルを調べて、無効な文字が含まれていないことを確認します。
|例外メッセージ|
|------------------------|
|Unable to parse a file. (ファイルを解析できません。)|
|Error while parsing the file: {0}. (ファイルの解析中にエラーが発生しました: \{0\}。)|
## <a name="error-0050"></a>エラー 0050
入力ファイルと出力ファイルが同じ場合、例外が発生します。
**解決策:** Azure Machine Learning ではこのエラーは使用されておらず、非推奨になります。
|例外メッセージ|
|------------------------|
|入力と出力に同じファイルを指定することはできません。|
## <a name="error-0051"></a>エラー 0051
複数の出力ファイルが同じ場合、例外が発生します。
**解決策:** Azure Machine Learning ではこのエラーは使用されておらず、非推奨になります。
|例外メッセージ|
|------------------------|
|Specified files for outputs cannot be the same. (出力に同じファイルを指定することはできません。)|
## <a name="error-0052"></a>エラー 0052
正しくない Azure ストレージ アカウント キーが指定された場合、例外が発生します。
Azure Machine Learning では、Azure ストレージ アカウントへのアクセスに使用するキーが正しくない場合、このエラーが発生します。 たとえば、コピーして貼り付けるときに Azure ストレージ キーが切り捨てられた場合、または間違ったキーを使用した場合、このエラーが表示される可能性があります。
Azure ストレージ アカウントのキーの取得方法について詳しくは、[ストレージ アクセス キーの表示、コピー、再生成](https://azure.microsoft.com/documentation/articles/storage-create-storage-account-classic-portal/)に関する記事をご覧ください。
**解決策:** モジュールを見直し、Azure ストレージ キーがアカウントに対して正しいことを確認します。必要な場合は、クラシック Azure portal からキーをもう一度コピーします。
|例外メッセージ|
|------------------------|
|The Azure storage account key is incorrect. (Azure ストレージ アカウント キーが正しくありません。)|
## <a name="error-0053"></a>エラー 0053
マッチボックスの推奨に対するユーザーの特徴や項目がない場合、例外が発生します。
Azure Machine Learning では、特徴ベクターが見つからないと、このエラーが生成されます。
**解決策:** 入力データセットに特徴ベクターが存在することを確認します。
|例外メッセージ|
|------------------------|
|User features or/and items are required but not provided. (ユーザーの特徴や項目が必要ですが提供されていません。)|
## <a name="error-0054"></a>エラー 0054
列の個別値が少なすぎて操作を完了できない場合、例外が発生します。
**解決策:** Azure Machine Learning ではこのエラーは使用されておらず、非推奨になります。
|例外メッセージ|
|------------------------|
|Data has too few distinct values in the specified column to complete operation. (指定された列のデータの個別値が少なすぎるため操作を完了できません。)|
|Data has too few distinct values in the specified column to complete operation. (指定された列のデータの個別値が少なすぎるため操作を完了できません。) The required minimum is {0} elements. (少なくとも \{0\} 個の要素が必要です。)|
|Data has too few distinct values in the column "{1}" to complete operation. (列 "\{0\}" のデータの個別値が少なすぎるため操作を完了できません。) The required minimum is {0} elements. (少なくとも \{0\} 個の要素が必要です。)|
## <a name="error-0055"></a>エラー 0055
非推奨のモジュールを呼び出すと、例外が発生します。 Azure Machine Learning では、非推奨になっているモジュールを呼び出そうとすると、このエラーが表示されます。
**解決策:**
|例外メッセージ|
|------------------------|
|Accessing deprecated module. (非推奨のモジュールにアクセスしています。)|
|Module "{0}" is deprecated. (モジュール "\{0\}" は非推奨です。)|
## <a name="error-0056"></a>エラー 0056
操作に対して選択した列が要件に違反している場合、例外が発生します。
Azure Machine Learning では、特定のデータ型の列が必要な操作に対して列を選択する場合、このエラーが発生します。
このエラーは、列のデータ型が正しくても、列が特徴列、ラベル列、またはカテゴリ列としてもマークされる必要のあるモジュールを使用している場合に発生する可能性があります。
<!--For example, the [Convert to Indicator Values](convert-to-indicator-values.md) module requires that columns be categorical, and will raise this error if you select a feature column or label column. -->
**解決策:**
1. 現在選択されている列のデータ型を見直します。
2. 選択した列がカテゴリ列、ラベル列、または特徴列かどうかを確認します。
3. 列を選択したモジュールのヘルプ トピックを調べて、データ型または列の使用方法について特定の要件があるかどうかを確認します。
3. [メタデータの編集](edit-metadata.md)を使って、この操作を行っている間、列の型を変更します。 ダウンストリームの操作で列が必要な場合は、[メタデータの編集](edit-metadata.md)の別のインスタンスを使って、忘れずに列の型を元の値に戻します。
|例外メッセージ|
|------------------------|
|One or more selected columns were not in an allowed category. (選択した列の 1 つ以上が、許可されたカテゴリに含まれませんでした。)|
|Column with name "{0}" is not in an allowed category. (名前 "\{0\}" の列は、許可されたカテゴリにありません。)|
## <a name="error-0057"></a>エラー 0057
既に存在しているファイルまたは BLOB を作成しようとすると、例外が発生します。
[データのエクスポート](export-data.md) モジュールまたは他のモジュールを使って Azure Machine Learning での実験の結果を Azure Blob Storage に保存するときに、既に存在しているファイルまたは BLOB を作成しようとすると、この例外が発生します。
**解決策:**
以前に **[Azure blob storage write mode]\(Azure Blob Storage 書き込みモード\)** プロパティを **[エラー]** に設定した場合にのみ、このエラーを受け取ります。 仕様により、このモジュールでは既に存在する BLOB にデータセットを書き込もうとするとエラーが発生します。
- モジュールのプロパティを開き、 **[Azure blob storage write mode]\(Azure Blob Storage 書き込みモード\)** プロパティを **[上書き]** に変更します。
- または、書き込み先の BLOB またはファイルとして別の名前を入力し、必ずまだ存在していない BLOB を指定します。
|例外メッセージ|
|------------------------|
|File or Blob already exists. (ファイルまたは BLOB は既に存在しています。)|
|File or Blob "{0}" already exists. (ファイルまたは BLOB "\{0\}" は既に存在しています。)|
## <a name="error-0058"></a>エラー 0058
Azure Machine Learning では、予想されるラベル列がデータセットに含まれない場合、このエラーが発生します。
この例外は、指定されたラベル列が、学習器で予想されているデータやデータ型と一致しない場合、または正しくない値を含む場合に発生する可能性もあります。 たとえば、バイナリ分類器をトレーニングするときに、実数値のラベル列を使うと、この例外が生成されます。
**解決策:** 解決策は、使用している学習器やトレーナー、およびデータセット内の列のデータ型によって異なります。 最初に、機械学習アルゴリズムまたはトレーニング モジュールの要件を確認します。
入力データセットを見直します。 ラベルとして扱う予定の列に、作成しているモデルに対して正しいデータ型が含まれることを確認します。
欠損値がないか入力を確認し、必要に応じてそれらを除去または置換します。
必要な場合は、[メタデータの編集](edit-metadata.md)モジュールを追加し、ラベル列をラベルとして確実にマークします。
|例外メッセージ|
|------------------------|
|The label column is not as expected (ラベル列が予想されているとおりではありません)|
|The label column is not as expected in "{0}". (ラベル列が "\{0\}" で予想されているとおりではありません。)|
|The label column "{0}" is not expected in "{1}". (ラベル列 "\{0\}" は "{1}" で予想されていません。)|
## <a name="error-0059"></a>エラー 0059
列ピッカーで指定されている列インデックスを解析できない場合、例外が発生します。
Azure Machine Learning では、列セレクターを使って指定された列インデックスを解析できない場合、このエラーが発生します。 列インデックスが解析できない無効な形式の場合、このエラーを受け取ります。
**解決策:** 有効なインデックス値を使うように、列インデックスを修正します。
|例外メッセージ|
|------------------------|
|One or more specified column indexes or index ranges could not be parsed. (指定された 1 つまたは複数の列インデックスまたはインデックス範囲を解析できませんでした。)|
|Column index or range "{0}" could not be parsed. (列インデックスまたは範囲 "\{0\}" を解析できませんでした。)|
## <a name="error-0060"></a>エラー 0060
列ピッカーで範囲外の列範囲を指定すると、例外が発生します。
Azure Machine Learning では、列セレクターで範囲外の列範囲を指定する場合、このエラーが発生します。 列ピッカーの列範囲がデータセット内の列に対応していない場合、このエラーを受け取ります。
**解決策:** データセット内の列に対応するように、列ピッカーで列範囲を修正します。
|例外メッセージ|
|------------------------|
|Invalid or out of range column index range specified. (無効または範囲外の列インデックス範囲が指定されました。)|
|Column range "{0}" is invalid or out of range. (列範囲 "\{0\}" は無効または範囲外です。)|
## <a name="error-0061"></a>エラー 0061
列の数がテーブルとは異なる DataTable に行を追加しようとすると、例外が発生します。
Azure Machine Learning では、あるデータセットとは列の数が異なるデータセットに行を追加しようとする場合、このエラーが発生します。 データセットに追加されている行が、入力データセットとは異なる列数の場合、このエラーを受け取ります。 列の数が異なる場合、データセットに行を追加できません。
**解決策:** 追加される行と同じ列数になるように入力データセットを修正するか、またはデータセットと同じ列数になるように追加される行を修正します。
|例外メッセージ|
|------------------------|
|All tables must have the same number of columns. (すべてのテーブルは同じ列数である必要があります。)|
## <a name="error-0062"></a>エラー 0062
学習器の種類が異なる 2 つのモデルを比較しようとすると、例外が発生します。
Azure Machine Learning では、2 つの異なるスコアリング済みデータセットの評価メトリックを比較できない場合、このエラーが生成されます。 この場合、2 つのスコアリング済みデータセットの生成に使用されたモデルの有効性を比較できません。
**解決策:** スコアリングの結果が、同じ種類の機械学習モデル (バイナリ分類、回帰、多クラス分類、推奨、クラスタリング、異常検出など) によって生成されていることを確認します。比較するモデルはすべて、学習器の種類が同じである必要があります。
|例外メッセージ|
|------------------------|
|All models must have the same learner type. (すべてのモデルは学習器の種類が同じでなければなりません。)|
<!--## Error 0063
This exception is raised when R script evaluation fails with an error.
This error occurs when you have provided an R script in one of the [R language modules](r-language-modules.md) in Azure Machine Learning, and the R code contains internal syntax errors. The exception can also occur if you provide the wrong inputs to the R script.
The error can also occur if the script is too large to execute in the workspace. The maximum script size for the **Execute R Script** module is 1,000 lines or 32 KB of work space, whichever is lesser.
**Resolution:**
1. In Azure Machine Learning Studio, right-click the module that has the error, and select **View Log**.
2. Examine the standard error log of the module, which contains the stack trace.
+ Lines beginning with [ModuleOutput] indicate output from R.
+ Messages from R marked as **warnings** typically do not cause the experiment to fail.
3. Resolve script issues.
+ Check for R syntax errors. Check for variables that are defined but never populated.
+ Review the input data and the script to determine if either the data or variables in the script use characters not supported by Azure Machine Learning.
+ Check whether all package dependencies are installed.
+ Check whether your code loads required libraries that are not loaded by default.
+ Check whether the required packages are the correct version.
+ Make sure that any dataset that you want to output is converted to a data frame.
4. Resubmit the experiment.
<!--
> [!NOTE]
> These topics contains examples of R code that you can use, as well as links to experiments in the [Cortana Intelligence Gallery](https://gallery.cortanaintelligence.com) that use R script.
> + [Execute R Script](execute-r-script.md)
> + [Create R Model](create-r-model.md)
-->
|例外メッセージ|
|------------------------|
|Error during evaluation of R script. (R スクリプトの評価中にエラーが発生しました。)|
|The following error occurred during evaluation of R script (R スクリプトの評価中に次のエラーが発生しました): ---------- Start of error message from R (R からのエラー メッセージの開始) ---------- {0} ----------- End of error message from R (R からのエラー メッセージの終了) -----------|
|During the evaluation of R script "{1}" the following error occurred (R スクリプト "{1}" の評価中に次のエラーが発生しました): ---------- Start of error message from R (R からのエラー メッセージの開始) ---------- {0} ----------- End of error message from R (R からのエラー メッセージの終了) -----------|
## <a name="error-0064"></a>エラー 0064
正しくない Azure ストレージ アカウント名またはストレージ キーが指定された場合、例外が発生します。
Azure Machine Learning では、Azure ストレージ アカウントの名前またはストレージ キーが誤って指定された場合、このエラーが発生します。 ストレージ アカウントに対して正しくないアカウント名またはパスワードを入力すると、このエラーを受け取ります。 これは、アカウント名またはパスワードを手入力したときに発生する可能性があります。 アカウントが削除された場合にも発生する可能性があります。
**解決策:** アカウント名とパスワードを正しく入力したこと、およびアカウントが存在することを確認します。
|例外メッセージ|
|------------------------|
|The Azure storage account name or storage key is incorrect. (Azure ストレージ アカウント名またはストレージ キーが正しくありません。)|
|The Azure storage account name "{0}" or storage key for the account name is incorrect. (Azure ストレージ アカウント名 "\{0\}" またはアカウント名に対するストレージ キーが正しくありません。)|
## <a name="error-0065"></a>エラー 0065
正しくない Azure BLOB 名が指定された場合、例外が発生します。
Azure Machine Learning では、Azure BLOB 名が誤って指定された場合、このエラーが発生します。 次の場合、エラーを受け取ります。
- 指定されたコンテナーで BLOB が見つからない。
<!--- The fully qualified name of the blob specified for output in one of the [Learning with Counts](data-transformation-learning-with-counts.md) modules is greater than 512 characters. -->
- 形式がエンコードされた Excel または CSV のときに、[データのインポート](import-data.md)要求でソースとしてコンテナーのみが指定された。コンテナー内のすべての BLOB の内容の連結が、これらの形式では許可されていない。
- SAS URI に、有効な BLOB の名前が含まれません。
**解決策:** 例外がスローされているモジュールを見直します。 指定された BLOB がストレージ アカウントのコンテナーに存在していること、および BLOB を表示できるアクセス許可があることを確認します。 エンコード形式の Excel または CSV を使用している場合、入力が "**コンテナー名/ファイル名**" の形式であることを確認します。 SAS URI に有効な BLOB の名前が含まれていることを確認します。
|例外メッセージ|
|------------------------|
|The Azure storage blob is incorrect. (Azure Storage の BLOB が正しくありません。)|
|The Azure storage blob name "{0}" is incorrect (Azure Storage の BLOB の名前 "\{0\}" が正しくありません)|
## <a name="error-0066"></a>エラー 0066
リソースを Azure BLOB にアップロードできない場合、例外が発生します。
Azure Machine Learning では、リソースを Azure BLOB にアップロードできなかった場合、このエラーが発生します。 <!--You will receive this message if [Train Vowpal Wabbit 7-4 Model](train-vowpal-wabbit-version-7-4-model.md) encounters an error attempting to save either the model or the hash created when training the model.--> どちらも、入力ファイルを含むアカウントと同じ Azure ストレージ アカウントに保存されます。
**解決策:** モジュールを見直します。 Azure アカウント名、ストレージ キー、コンテナーが正しいこと、およびアカウントにコンテナーへの書き込みアクセス許可があることを確認します。
|例外メッセージ|
|------------------------|
|The resource could not be uploaded to Azure storage. (リソースを Azure Storage にアップロードできませんでした。)|
|The file "{0}" could not be uploaded to Azure storage as {1}. (ファイル "\{0\}" を Azure Storage に {1} としてアップロードできませんでした。)|
## <a name="error-0067"></a>エラー 0067
データセットの列数が予想とは異なる場合、例外が発生します。
Azure Machine Learning では、データセットの列数が予想とは異なる場合、このエラーが発生します。 データセット内の列の数が、実行中にモジュールで予想される列の数と異なる場合、このエラーを受け取ります。
**解決策:** 入力データセットまたはパラメーターを修正します。
|例外メッセージ|
|------------------------|
|Unexpected number of columns in the datatable. (データ テーブル内の列数が予想と異なります。)|
|Expected "{0}" columns but found "{1}" columns instead. (予想されたのは "\{0\}" 列ですが、実際に見つかったのは "{1}" 列です。)|
## <a name="error-0068"></a>エラー 0068
指定された Hive スクリプトが正しくない場合、例外が発生します。
Azure Machine Learning では、Hive QL スクリプトに構文エラーがある場合、またはクエリやスクリプトの実行中に Hive インタープリターでエラーが発生した場合、このエラーが発生します。
**解決策:**
通常 Hive からのエラー メッセージがエラー ログで報告されるので、特定のエラーに基づいてアクションを実行できます。
+ モジュールを開き、クエリで誤りを調べます。
+ Hadoop クラスターの Hive コンソールにログインしてクエリを実行することにより、Azure Machine Learning の外部でクエリが正しく動作することを確認します。
+ Hive スクリプトで実行可能なステートメントとコメントを単一の行に混在させるのではなく、コメントを別の行に配置することを試みます。
### <a name="resources"></a>リソース
機械学習での Hive クエリの使用については、次の記事をご覧ください。
+ [Hive テーブルを作成して Azure Blob Storage からデータを読み込む](https://docs.microsoft.com/azure/machine-learning/machine-learning-data-science-move-hive-tables)
+ [Hive クエリを使用してテーブルのデータを探索する](https://docs.microsoft.com/azure/machine-learning/machine-learning-data-science-explore-data-hive-tables)
+ [Hive クエリを使用して Hadoop クラスターのデータの特徴を作成する](https://docs.microsoft.com/azure/machine-learning/machine-learning-data-science-create-features-hive)
+ [SQL ユーザー向け Hive のチート シート (PDF)](http://hortonworks.com/wp-content/uploads/2013/05/hql_cheat_sheet.pdf)
|例外メッセージ|
|------------------------|
|Hive script is incorrect. (Hive スクリプトが正しくありません。)|
|Hive script {0} is not correct. (Hive スクリプト \{0\} が正しくありません。)|
## <a name="error-0069"></a>エラー 0069
指定された SQL スクリプトが正しくない場合、例外が発生します。
Azure Machine Learning では、指定された SQL スクリプトの構文に問題がある場合、またはスクリプトで指定された列やテーブルが有効ではない場合、このエラーが発生します。
クエリまたはスクリプトの実行中に SQL エンジンでエラーが発生した場合、このエラーを受け取ります。 通常 SQL のエラー メッセージがエラー ログで報告されるので、特定のエラーに基づいてアクションを実行できます。
**解決策:** モジュールを見直し、SQL クエリで誤りを調べます。
データベース サーバーに直接ログインしてクエリを実行することにより、Azure ML の外部でクエリが正しく動作することを確認します。
モジュールの例外により報告された、SQL によって生成されたメッセージがある場合、報告されたエラーに基づいて対処を実行します。 たとえば、エラー メッセージには、発生する可能性の高いエラーに関する具体的なガイダンスが含まれる場合があります。
+ "*No such column or missing database*" (そのような列はないか、またはデータベースが存在しません) は、列名を誤って入力した可能性があることを示します。 列名が正しいことが確実な場合は、角かっこまたは引用符を使って列の識別子を囲んでみてください。
+ "*SQL logic error near \<SQL keyword\>* " ("<SQL キーワード> の近くに SQL ロジック エラーがあります") は、指定されたキーワードの前に構文エラーが存在する可能性があることを示します
|例外メッセージ|
|------------------------|
|SQL script is incorrect. (SQL スクリプトが正しくありません。)|
|SQL query "{0}" is not correct. (SQL クエリ "\{0\}" が正しくありません。)|
|SQL query "{0}" is not correct: {1} (SQL クエリ "\{0\}" が正しくありません: {1})|
## <a name="error-0070"></a>エラー 0070
存在しない Azure テーブルにアクセスしようとすると、例外が発生します。
Azure Machine Learning では、存在しない Azure テーブルにアクセスしようとする場合、このエラーが発生します。 指定した Azure Storage 内のテーブルが、Azure Table Storage の読み取りまたは書き込み時に存在しない場合、このエラーを受け取ります。 これは、目的のテーブルの名前を誤って入力した場合、またはターゲットの名前とストレージの種類の間に不一致がある場合に発生する可能性があります。 たとえば、テーブルから読み取ろうとして、代わりに BLOB の名前を入力したような場合です。
**解決策:** モジュールを見直して、テーブルの名前が正しいことを確認します。
|例外メッセージ|
|------------------------|
|Azure table does not exist. (Azure のテーブルが存在しません。)|
|Azure table "{0}" does not exist. (Azure のテーブル "\{0\}" が存在しません。)|
## <a name="error-0071"></a>エラー 0071
指定された資格情報が正しくない場合、例外が発生します。
Azure Machine Learning では、指定された資格情報が正しくない場合、このエラーが発生します。
モジュールが HDInsight クラスターに接続できない場合にも、このエラーを受け取る可能性があります。
**解決策:** モジュールへの入力を見直して、アカウント名とパスワードを確認します。
エラーの原因となっている可能性のある次の問題を調べます。
- データセットのスキーマが、ターゲットのデータ テーブルのスキーマと一致しません。
- 列名が存在しないか、スペルが正しくありません
- 列名に無効な文字が含まれるテーブルに書き込もうとしています。 通常、このような列名は角かっこで囲むことができますが、それでうまくいかない場合は、列名を編集し、アルファベットとアンダースコア (_) のみを使うようにします
- 書き込もうとしている文字列に、単一引用符が含まれます
HDInsight クラスターに接続しようとしている場合は、指定された資格情報でターゲット クラスターにアクセスできることを確認します。
|例外メッセージ|
|------------------------|
|Incorrect credentials are passed. (正しくない資格情報が渡されました。)|
|Incorrect username "{0}" or password is passed (正しくないユーザー名 "\{0\}" またはパスワードが渡されました)|
## <a name="error-0072"></a>エラー 0072
接続がタイムアウトした場合、例外が発生します。
Azure Machine Learning では、接続がタイムアウトする場合、このエラーが発生します。現在データ ソースまたはターゲットとの接続に問題がある場合 (インターネット接続が遅い、など)、データセットが大きい場合、またはデータを読み取る SQL クエリで複雑な処理が実行されている場合、このエラーを受け取ります。
**解決策:** 現在、Azure ストレージまたはインターネットへの接続が遅くなる問題が発生しているかどうかを特定します。
|例外メッセージ|
|------------------------|
|Connection timeout occurred. (接続のタイムアウトが発生しました。)|
## <a name="error-0073"></a>エラー 0073
列を別の型に変換しているときにエラーが発生した場合、例外が発生します。
Azure Machine Learning では、列を別の型に変換できない場合、このエラーが発生します。 モジュールで特定の型が必要な場合に、列を新しい型に変換できないと、このエラーを受け取ります。
**解決策:** 入力データセットを修正し、内部例外に基づいて列を変換できるようにします。
|例外メッセージ|
|------------------------|
|Failed to convert column. (列を変換できませんでした。)|
|Failed to convert column to {0}. (列を \{0\} に変換できませんでした。)|
## <a name="error-0074"></a>エラー 0074
[メタデータの編集](edit-metadata.md)でスパース列をカテゴリに変換しようとすると、例外が発生します。
Azure Machine Learning では、[メタデータの編集](edit-metadata.md)でスパース列をカテゴリに変換しようとする場合、このエラーが発生します。 **[Make categorical]\(カテゴリにする\)** オプションでスパース列をカテゴリに変換しようとすると、このエラーを受け取ります。 Azure Machine Learning ではスパース カテゴリ配列がサポートされていないため、モジュールは失敗します。
<!--**Resolution:**
Make the column dense by using [Convert to Dataset](convert-to-dataset.md) first or do not convert the column to categorical. -->
|例外メッセージ|
|------------------------|
|Sparse columns cannot be converted to Categorical. (スパース列をカテゴリに変換することはできません。)|
## <a name="error-0075"></a>エラー 0075
データセットを量子化するときに無効なビン分割機能を使用すると、例外が発生します。
Azure Machine Learning では、サポートされていない方法を使ってデータをビン分割しようとする場合、またはパラメーターの組み合わせが有効ではない場合、このエラーが発生します。
**解決策:**
このイベントでのエラー処理は、ビン分割方法のカスタマイズがより広い範囲で可能だった以前のバージョンの Azure Machine Learning で導入されました。 現在は、ビン分割方法はすべてドロップダウン リストでの選択に基づくため、技術的には、このエラーが発生する可能性はありません。
<!--If you get this error when using the [Group Data into Bins](group-data-into-bins.md) module, consider reporting the issue in the [Azure Machine Learning forum](https://social.msdn.microsoft.com/Forums/en-US/home?forum=MachineLearning), providing the data types, parameter settings, and the exact error message. -->
|例外メッセージ|
|------------------------|
|Invalid binning function used. (無効なビン分割機能が使われました。)|
## <a name="error-0077"></a>エラー 0077
不明な BLOB ファイル書き込みモードが渡されると、例外が発生します。
Azure Machine Learning では、BLOB ファイルのターゲットまたはソースを指定するときに無効な引数が渡される場合、このエラーが発生します。
**解決策:** Azure Blob Storage との間でデータをインポートまたはエクスポートするほとんどすべてのモジュールでは、書き込みモードを制御するパラメーターの値は、ドロップダウン リストを使って割り当てられます。そのため、無効な値を渡すことはできず、このエラーは発生しないはずです。 このエラーは、今後のリリースで非推奨になります。
|例外メッセージ|
|------------------------|
|Unsupported blob writes mode. (サポートされていない BLOB 書き込みモードです。)|
|Unsupported blob writes mode: {0}. (サポートされていない BLOB 書き込みモードです: \{0\}。)|
## <a name="error-0078"></a>エラー 0078
[データのインポート](import-data.md)に対する HTTP オプションでリダイレクトを示す 3xx 状態コードを受け取ると、例外が発生します。
Azure Machine Learning では、[データのインポート](import-data.md)に対する HTTP オプションでリダイレクトを示す 3xx (301、302、304 など) 状態コードを受け取る場合、このエラーが発生します。 ブラウザーを別のページにリダイレクトする HTTP ソースに接続しようとした場合、このエラーを受け取ります。 セキュリティ上の理由から、リダイレクトを行う Web サイトは、Azure Machine Learning のためのデータ ソースとしては許可されません。
**解決策:** Web サイトが信頼できる Web サイトの場合は、リダイレクト先の URL を直接入力します。
|例外メッセージ|
|------------------------|
|Http redirection not allowed (Http リダイレクトは許可されません)|
## <a name="error-0079"></a>エラー 0079
正しくない Azure ストレージ コンテナー名が指定されると、例外が発生します。
Azure Machine Learning では、Azure ストレージ コンテナー名が誤って指定される場合、このエラーが発生します。 Azure Blob Storage に書き込むときに、 **[Path to blob beginning with container]\(コンテナーで始まる BLOB へのパス\)** オプションを使用して、コンテナーと BLOB (ファイル) 名の両方を指定していない場合、このエラーを受け取ります。
**解決策:** [データのエクスポート](export-data.md) モジュールを見直し、BLOB へのパスの指定に、コンテナーとファイル名の両方が**コンテナー/ファイル名**の形式で含まれていることを確認します。
|例外メッセージ|
|------------------------|
|The Azure storage container name is incorrect. (Azure ストレージ コンテナーの名前が正しくありません。)|
|The Azure storage container name "{0}" is incorrect; a container name of the format container/blob was expected. (Azure ストレージ コンテナー名 "\{0\}" が正しくありません。コンテナー名は "コンテナー/BLOB" の形式であることが予想されていました。)|
## <a name="error-0080"></a>エラー 0080
すべての値が欠損している列がモジュールによって許可されない場合、例外が発生します。
Azure Machine Learning では、モジュールによって使用される 1 つまたは複数の列に欠損値しか含まれていないと、このエラーが生成されます。 たとえば、各列の集計統計が計算されるモジュールの場合、データが含まれない列での動作は不可能です。 そのような場合、モジュールの実行はこの例外で停止します。
**解決策:** 入力データセットを見直し、欠損値しか含まれていない列を削除します。
|例外メッセージ|
|------------------------|
|Columns with all values missing are not allowed. (すべて欠損値の列は許可されません。)|
|Column {0} has all values missing. (列 \{0\} はすべての値が欠損しています。)|
## <a name="error-0081"></a>エラー 0081
削減後のディメンションの数が、少なくとも 1 つのスパース特徴列を含む入力データセットの特徴列の数と等しい場合、PCA モジュールで例外が発生します。
Azure Machine Learning では、次の条件が満たされる場合、このエラーが生成されます。(a) 入力データセットに少なくとも 1 つのスパース列があり、かつ、(b) 要求されたディメンションの最終的な数が入力ディメンションの数と同じである。
**解決策:** 出力のディメンションの数を、入力のディメンションの数より少なくなるように減らすことを検討します。 これは、PCA のアプリケーションでは一般的です。 <!--For more information, see [Principal Component Analysis](principal-component-analysis.md). -->
|例外メッセージ|
|------------------------|
|For dataset containing sparse feature columns number of dimensions to reduce should be less than number of feature columns. (スパース特徴列を含むデータセットでは、削減後のディメンションの数を特徴列の数より少なくする必要があります。)|
## <a name="error-0082"></a>エラー 0082
モデルを正常に逆シリアル化できないと、例外が発生します。
Azure Machine Learning では、破壊的変更の結果として、保存されている機械学習モデルまたは変換を新しいバージョンの Azure Machine Learning ランタイムで読み込むことができない場合、このエラーが発生します。
**解決策:** モデルまたは変換を生成したトレーニング実験を再度実行して、モデルまたは変換を保存し直す必要があります。
|例外メッセージ|
|------------------------|
|Model could not be deserialized because it is likely serialized with an older serialization format. (古いシリアル化形式でシリアル化された可能性があるため、モデルを逆シリアル化できませんでした。) Retrain and resave the model. (モデルを再トレーニングして再度保存してください。)|
## <a name="error-0083"></a>エラー 0083
トレーニングに使用されたデータセットを学習器の具象型に使用できない場合、例外が発生します。
Azure Machine Learning では、データセットとトレーニング中の学習器の間に互換性がないと、このエラーが生成されます。 たとえば、データセットの各行に少なくとも 1 つの欠損値が含まれ、結果として、トレーニング中にデータセット全体がスキップされる可能性があります。 または、異常検出などの一部の機械学習アルゴリズムでは、ラベルの存在が予想されておらず、データセットにラベルが存在すると、この例外がスローされることがあります。
**解決策:** 入力データセットの要件を確認するために使用されている学習器のドキュメントを参照します。 列を調べて、必要な列がすべて存在することを確認します。
|例外メッセージ|
|------------------------|
|Dataset used for training is invalid. (トレーニングに使用されているデータセットが無効です。)|
|{0} contains invalid data for training. (\{0\} には、トレーニングに無効なデータが含まれます。)|
|{0} contains invalid data for training. (\{0\} には、トレーニングに無効なデータが含まれます。) Learner type: {1}. (学習器の種類: {1}。)|
## <a name="error-0084"></a>エラー 0084
R スクリプトから生成されたスコアが評価されると、例外が発生します。 現在、これはサポートされていません。
Azure Machine Learning では、スコアが含まれる R スクリプトからの出力でモデルを評価するためにモジュールのいずれかを使用しようとする場合、このエラーが発生します。
**解決策:**
|例外メッセージ|
|------------------------|
|Evaluating scores produced by R is currently unsupported. (R によって生成されたスコアの評価は、現在サポートされていません。)|
## <a name="error-0085"></a>エラー 0085
スクリプトの評価がエラーを伴い失敗すると、例外が発生します。
Azure Machine Learning では、構文エラーが含まれるカスタム スクリプトを実行する場合、このエラーが発生します。
**解決策:** 外部エディターでコードを確認し、エラーを調べます。
|例外メッセージ|
|------------------------|
|Error during evaluation of script. (スクリプトの評価中にエラーが発生しました。)|
|The following error occurred during script evaluation, view the output log for more information (スクリプトの評価中に次のエラーが発生しました。詳細については出力ログを参照してください): ---------- Start of error message from {0} interpreter (\{0\} インタープリターからのエラー メッセージの開始) ---------- {1} ---------- End of error message from {0} interpreter (\{0\} インタープリターからのエラー メッセージの終了) ----------|
## <a name="error-0086"></a>エラー 0086
カウント変換が有効ではない場合、例外が発生します。
Azure Machine Learning では、カウント テーブルに基づく変換を選択した場合、選択した変換が現在のデータまたは新しいカウント テーブルと互換性がない場合、このエラーが発生します。
**解決策:** モジュールでは、変換を構成するカウントとルールを、2 つの異なる形式で保存できます。 カウント テーブルをマージする場合は、マージする対象となる両方のテーブルで同じ形式が使われていることを確認します。
一般に、カウント ベースの変換は、変換が最初に作成されたデータセットと同じスキーマを使用するデータセットにのみ適用できます。
<!-- For general information, see [Learning with Counts](data-transformation-learning-with-counts.md). For requirements specific to creating and merging count-based features, see these topics:
- [Merge Count Transform](merge-count-transform.md)
- [Import Count Table](import-count-table.md)
- [Modify Count Table Parameters](modify-count-table-parameters.md)
-->
|例外メッセージ|
|------------------------|
|Invalid counting transform specified. (無効なカウント変換が指定されました。)|
|The counting transform at input port '{0}' is invalid. (入力ポート '\{0\}' でのカウント変換は無効です。)|
|The counting transform at input port '{0}' cannot be merged with the counting transform at input port '{1}'. (入力ポート '\{0\}' でのカウント変換は、入力ポート '{1}' でのカウント変換とマージできません。) Check to verify the metadata used for counting matches. (カウントに使用されているメタデータが一致することを確認してください。)|
## <a name="error-0087"></a>エラー 0087
カウント モジュールでの学習用に無効なカウント テーブルの種類を指定すると、例外が発生します。
Azure Machine Learning では、既存のカウント テーブルをインポートしようとしていても、テーブルが現在のデータまたは新しいカウント テーブルと互換性がない場合、このエラーが発生します。
**解決策:** 変換を構成するカウントとルールを保存するには、さまざまな形式があります。 カウント テーブルをマージする場合は、両方で同じ形式が使われていることを確認します。
一般に、カウント ベースの変換は、変換が最初に作成されたデータセットと同じスキーマを使用するデータセットにのみ適用できます。
<!--For general information, see [Learning with Counts](data-transformation-learning-with-counts.md). -->
## <a name="error-0088"></a>エラー 0088
カウント モジュールでの学習用に無効なカウントの種類を指定すると、例外が発生します。
Azure Machine Learning では、カウント ベースの特徴付けに対してサポートされているものとは異なるカウント方法を使おうとする場合、このエラーが発生します。
**解決策:** 一般に、カウント方法はドロップダウン リストから選択するため、このエラーは発生しないはずです。
<!--For general information, see [Learning with Counts](data-transformation-learning-with-counts.md). For requirements specific to creating and merging count-based features, see these topics:
- [Merge Count Transform](merge-count-transform.md)
- [Import Count Table](import-count-table.md)
- [Modify Count Table Parameters](modify-count-table-parameters.md)
-->
|例外メッセージ|
|------------------------|
|Invalid counting type is specified. (無効なカウントの種類が指定されました。)|
|The specified counting type '{0}' is not a valid counting type. (指定されたカウントの種類 '\{0\}' は、有効なカウントの種類ではありません。)|
## <a name="error-0089"></a>エラー 0089
指定されたクラスの数がカウントに使用されるデータセットの実際のクラスの数より少ない場合、例外が発生します。
Azure Machine Learning では、カウント テーブルを作成しているときに、ラベル列に含まれるクラスの数がモジュールのパラメーターで指定した数と異なる場合、このエラーが発生します。
**解決策:** データセットを確認して、ラベル列に存在する異なる値 (可能性のあるクラス) の数を正確に調べます。 カウント テーブルを作成するときは、少なくともこの数のクラスを指定する必要があります。
カウント テーブルでは、使用可能なクラスの数を自動的に判断できません。
カウント テーブルを作成するときに、0 またはラベル列の実際のクラス数より少ない数を指定することはできません。
|例外メッセージ|
|------------------------|
|The number of classes is incorrect. (クラスの数が正しくありません。) Make sure that the number of classes you specify in the parameter pane is greater than or equal to the number of classes in the label column. (パラメーター ウィンドウでは、クラスの数として、ラベル列のクラス数以上の値を指定してください。)|
|The number of classes specified is '{0}', which is not greater than a label value '{1}' in the data set used to count. (指定されているクラスの数 '\{0\}' は、カウントに使用されるデータ セット内のラベルの値 '{1}' を超えていません。) Make sure that the number of classes you specify in the parameter pane is greater than or equal to the number of classes in the label column. (パラメーター ウィンドウでは、クラスの数として、ラベル列のクラス数以上の値を指定してください。)|
## <a name="error-0090"></a>エラー 0090
Hive テーブルの作成が失敗すると、例外が発生します。
Azure Machine Learning では、[データのエクスポート](export-data.md)または別のオプションを使用して HDInsight クラスターにデータを保存するときに指定された Hive テーブルを作成できない場合、このエラーが発生します。
**解決策:** クラスターに関連付けられている Azure ストレージ アカウント名を調べて、モジュールのプロパティで同じアカウントを使用していることを確認します。
|例外メッセージ|
|------------------------|
|The Hive table could not be created. (Hive テーブルを作成できませんでした。) For a HDInsight cluster, ensure the Azure storage account name associated with cluster is the same as what is passed in through the module parameter. (HDInsight クラスターでは、クラスターに関連付けられた Azure ストレージ アカウント名が、モジュールのパラメーターで渡されたものと同じであることを確認してください。)|
|The Hive table "{0}" could not be created. (Hive テーブル "\{0\}" を作成できませんでした。) For a HDInsight cluster, ensure the Azure storage account name associated with cluster is the same as what is passed in through the module parameter. (HDInsight クラスターでは、クラスターに関連付けられた Azure ストレージ アカウント名が、モジュールのパラメーターで渡されたものと同じであることを確認してください。)|
|The Hive table "{0}" could not be created. (Hive テーブル "\{0\}" を作成できませんでした。) For a HDInsight cluster, ensure the Azure storage account name associated with cluster is "{1}". (HDInsight クラスターでは、クラスターに関連付けられた Azure ストレージ アカウント名が "{1}" であることを確認してください。)|
## <a name="error-0100"></a>エラー 0100
カスタム モジュールに対してサポートされていない言語が指定されると、例外が発生します。
Azure Machine Learning では、カスタム モジュールを作成しているときに、カスタム モジュールの xml 定義ファイル内の **Language** 要素の name プロパティが無効な値である場合、このエラーが発生します。 現在、このプロパティに対して有効な値は `R` のみです。 例:
`<Language name="R" sourceFile="CustomAddRows.R" entryPoint="CustomAddRows" />`
**解決策:** カスタム モジュールの xml 定義ファイルの **Language** 要素の name プロパティが、`R` に設定されていることを確認します。 ファイルを保存し、カスタム モジュールの zip パッケージを更新して、カスタム モジュールの追加を再度試みます。
|例外メッセージ|
|------------------------|
|Unsupported custom module language specified (サポートされていないカスタム モジュール言語が指定されています)|
## <a name="error-0101"></a>エラー 0101
すべてのポートとパラメーターの ID は、一意である必要があります。
Azure Machine Learning では、カスタム モジュールの XML 定義ファイルで 1 つ以上のポートまたはパラメーターに同じ ID 値が割り当てられる場合、このエラーが発生します。
**解決策:** すべてのポートとパラメーターで ID 値が一意であることを確認します。 xml ファイルを保存し、カスタム モジュールの zip パッケージを更新して、カスタム モジュールの追加を再度試みます。
|例外メッセージ|
|------------------------|
|All port and parameter IDs for a module must be unique (モジュールのすべてのポートとパラメーターの ID は、一意である必要があります)|
|Module '{0}' has duplicate port/argument IDs. (モジュール '\{0\}' には重複するポート/引数 ID があります。) All port/argument IDs must be unique for a module. (モジュールのすべてのポート/引数の ID は、一意である必要があります。)|
## <a name="error-0102"></a>エラー 0102
ZIP ファイルを抽出できない場合、スローされます。
Azure Machine Learning では、.zip 拡張子の zip パッケージをインポートするとき、パッケージが zip ファイルではない場合、またはサポートされている zip 形式がファイルで使用されていない場合、このエラーが発生します。
**解決策:** 選択したファイルが有効な .zip ファイルであること、およびサポートされている圧縮アルゴリズムのいずれかを使って圧縮されていることを確認します。
圧縮形式のデータセットをインポートするときにこのエラーが発生する場合は、含まれるすべてのファイルが、サポートされているファイル形式のいずれかが使われているものであり、かつ Unicode 形式であることを確認します。 <!--For more information, see [Unpack Zipped Datasets](unpack-zipped-datasets.md). -->
目的のファイルを新しい圧縮フォルダーに再度追加し、カスタム モジュールへの追加を再び試みます。
|例外メッセージ|
|------------------------|
|Given ZIP file is not in the correct format (指定された ZIP ファイルは正しい形式ではありません)|
## <a name="error-0103"></a>エラー 0103
ZIP ファイルに .xml ファイルが含まれていない場合にスローされます
Azure Machine Learning では、カスタム モジュールの zip パッケージにモジュール定義 (.xml) ファイルが含まれていない場合、このエラーが発生します。 これらのファイルは、zip パッケージのルート (サブフォルダー内などではなく) に存在する必要があります。
**解決策:** ディスク ドライブ上の一時フォルダーに zip パッケージのルート フォルダーを抽出し、xml モジュール定義ファイルが 1 つ以上あることを確認します。 すべての xml ファイルが、zip パッケージの抽出先フォルダーに直接存在する必要があります。 zip パッケージを作成するときは、圧縮する xml ファイルを含むフォルダーを選択しないようにします。圧縮するために選択したフォルダーと同じ名前のサブ フォルダーが zip パッケージ内に作成されてしまうからです。
|例外メッセージ|
|------------------------|
|Given ZIP file does not contain any module definition files (.xml files) (指定された ZIP ファイルにはモジュール定義ファイル (.xml ファイル) が含まれません)|
## <a name="error-0104"></a>エラー 0104
モジュール定義ファイルで参照されているスクリプトが見つからない場合にスローされます
Azure Machine Learning では、カスタム モジュールの xml 定義ファイルの **Language** 要素で参照されているスクリプト ファイルが zip パッケージに存在しない場合、このエラーがスローされます。 スクリプト ファイルのパスは、**Language** 要素の **sourceFile** プロパティで定義されています。 ソース ファイルへのパスは、zip パッケージのルートに対して相対的です (モジュールの xml 定義ファイルと同じ場所)。 スクリプト ファイルがサブ フォルダー内にある場合は、スクリプト ファイルへの相対パスを指定する必要があります。 たとえば、すべてのスクリプトが zip パッケージ内の **myScripts** フォルダーに格納されていた場合、**Language** 要素ではこのパスを次のように **sourceFile** プロパティに追加する必要があります。 例:
`<Language name="R" sourceFile="myScripts/CustomAddRows.R" entryPoint="CustomAddRows" />`
**解決策:** カスタム モジュールの xml 定義の **Language** 要素にある **sourceFile** プロパティの値が正しいこと、および zip パッケージ内の適切な相対パスにソース ファイルが存在することを確認します。
|例外メッセージ|
|------------------------|
|Referenced R script file does not exist. (参照されている R スクリプト ファイルが存在しません。)|
|Referenced R script file '{0}' cannot be found. (参照されている R スクリプト ファイル '\{0\}' が見つかりません。) Ensure that the relative path to the file is correct from the definitions location. (定義の場所からファイルへの相対パスが正しいことを確認してください。)|
## <a name="error-0105"></a>エラー 0105
モジュール定義ファイルにサポートされていないパラメーターの型が含まれる場合、このエラーが表示されます。
Azure Machine Learning では、カスタム モジュールの xml 定義を作成するときに、定義のパラメーターまたは引数の型とサポートされている型とが一致しない場合、このエラーが生成されます。
**解決策:** カスタム モジュールの xml 定義ファイル内の **Arg** 要素の型プロパティが、サポートされている型であることを確認します。
|例外メッセージ|
|------------------------|
|Unsupported parameter type. (パラメーターの型がサポートされていません。)|
|Unsupported parameter type '{0}' specified. (サポートされていないパラメーターの型 '\{0\}' が指定されています。)|
## <a name="error-0106"></a>エラー 0106
モジュール定義ファイルで、サポートされていない入力の型が定義されている場合、スローされます。
Azure Machine Learning では、カスタム モジュールの XML 定義の入力ポートの型がサポートされている型と一致しない場合、このエラーが生成されます。
**解決策:** カスタム モジュールの XML 定義ファイル内の Input 要素の type プロパティが、サポートされている型であることを確認します。
|例外メッセージ|
|------------------------|
|Unsupported input type. (入力の型がサポートされていません。)|
|Unsupported input type '{0}' specified. (サポートされていない入力の型 '\{0\}' が指定されています。)|
## <a name="error-0107"></a>エラー 0107
モジュール定義ファイルで、サポートされていない出力の型が定義されている場合、スローされます。
Azure Machine Learning では、カスタム モジュールの xml 定義の出力ポートの型がサポートされている型と一致しない場合、このエラーが生成されます。
**解決策:** カスタム モジュールの xml 定義ファイル内の Output 要素の type プロパティが、サポートされている型であることを確認します。
|例外メッセージ|
|------------------------|
|Unsupported output type. (出力の型がサポートされていません。)|
|Unsupported output type '{0}' specified. (サポートされていない出力の型 '\{0\}' が指定されています。)|
## <a name="error-0108"></a>エラー 0108
モジュール定義ファイルでサポートされているより多くの入力ポートまたは出力ポートが定義されている場合、スローされます。
Azure Machine Learning では、カスタム モジュールの xml 定義で定義されている入力ポートまたは出力ポートが多すぎる場合、このエラーが生成されます。
**解決策:** カスタム モジュールの xml 定義で定義されている入力ポートと出力ポートの数が、サポートされているポートの最大数を超えないようにします。
|例外メッセージ|
|------------------------|
|Exceeded supported number of input or output ports. (入力ポートまたは出力ポートがサポートされている数を超えました。)|
|Exceeded number of supported '{0}' ports. (サポートされている '\{0\}' ポートの数を超えました。) Maximum allowed number of '{0}' ports is '{1}'. ('\{0\}' ポートの許容される最大数は '{1}' です。)|
## <a name="error-0109"></a>エラー 0109
モジュール定義ファイルでの列ピッカーの定義が正しくない場合、スローされます。
Azure Machine Learning では、カスタム モジュールの xml 定義で列ピッカー引数の構文にエラーが含まれている場合、このエラーが生成されます。
**解決策:** カスタム モジュールの xml 定義で、列ピッカー引数の構文にエラーが含まれている場合、このエラーが生成されます。
|例外メッセージ|
|------------------------|
|Unsupported syntax for column picker. (列ピッカーの構文がサポートされていません。)|
## <a name="error-0110"></a>エラー 0110
モジュール定義ファイルによって、存在しない入力ポート ID が参照される列ピッカーが定義される場合、スローされます。
Azure Machine Learning では、ColumnPicker 型の Arg の Properties 要素にある *portId* プロパティが入力ポートの ID 値と一致しない場合、このエラーが生成されます。
**解決策:** portId プロパティが、カスタム モジュールの xml 定義で定義されている入力ポートの ID 値と一致することを確認します。
|例外メッセージ|
|------------------------|
|Column picker references a non-existent input port ID. (列ピッカーで、存在しない入力ポート ID が参照されています。)|
|Column picker references a non-existent input port ID '{0}'. (列ピッカーで、存在しない入力ポート ID '\{0\}' が参照されています。)|
## <a name="error-0111"></a>エラー 0111
モジュール定義ファイルで無効なプロパティが定義されている場合、スローされます。
Azure Machine Learning では、カスタム モジュールの XML 定義で無効なプロパティが要素に割り当てられる場合、このエラーが生成されます。
**解決策:** プロパティがカスタム モジュール要素でサポートされていることを確認します。
|例外メッセージ|
|------------------------|
|Property definition is invalid. (プロパティの定義が無効です。)|
|Property definition '{0}' is invalid. (プロパティの定義 '\{0\}' が無効です。)|
## <a name="error-0112"></a>エラー 0112
モジュール定義ファイルを解析できない場合、スローされます
Azure Machine Learning では、xml 形式のエラーによって、カスタム モジュールの XML 定義が有効な XML ファイルとして解析されない場合、このエラーが生成されます。
**解決策:** 各要素が正しく開始および終了されていることを確認します。 XML の書式設定にエラーがないことを確認します。
|例外メッセージ|
|------------------------|
|Unable to parse module definition file. (モジュール定義ファイルを解析できません。)|
|Unable to parse module definition file '{0}'. (モジュール定義ファイル '\{0\}' を解析できません。)|
## <a name="error-0113"></a>エラー 0113
モジュール定義ファイルにエラーが含まれる場合、スローされます。
Azure Machine Learning では、カスタム モジュールの XML 定義ファイルが解析可能であっても、エラー (カスタム モジュールでサポートされていない要素の定義など) が含まれる場合、このエラーが生成されます。
**解決策:** カスタム モジュールの定義ファイルで、カスタム モジュールによってサポートされる要素とプロパティが定義されていることを確認します。
|例外メッセージ|
|------------------------|
|Module definition file contains errors. (モジュール定義ファイルにエラーが含まれます。)|
|Module definition file '{0}' contains errors. (モジュール定義ファイル '\{0\}' にエラーが含まれます。)|
|Module definition file '{0}' contains errors. (モジュール定義ファイル '\{0\}' にエラーが含まれます。) [https://login.microsoftonline.com/tfp/00000000-0000-0000-0000-000000000000/b2c_1a_tp_sign-up-or-sign-in/v2.0/]({1})|
## <a name="error-0114"></a>エラー 0114
カスタム モジュールのビルドが失敗する場合、スローされます。
Azure Machine Learning では、カスタム モジュールのビルドが失敗した場合、このエラーが生成されます。 これは、カスタム モジュールを追加するときに、1 つ以上のカスタム モジュール関連のエラーが検出される場合に発生します。 追加のエラーがこのエラー メッセージ内で報告されます。
**解決策:** この例外メッセージで報告されたエラーを解決します。
|例外メッセージ|
|------------------------|
|Failed to build custom module. (カスタム モジュールのビルドが失敗しました。)|
|Custom module builds failed with error(s): {0} (カスタム モジュールのビルドが次のエラーで失敗しました: \{0\})|
## <a name="error-0115"></a>エラー 0115
カスタム モジュールの既定のスクリプトにサポートされていない拡張子が付いている場合、スローされます。
Azure Machine Learning では、不明のファイル名拡張子が使われているスクリプトがカスタム モジュールに提供される場合に、このエラーが発生します。
**解決策:** カスタム モジュールに含まれるスクリプト ファイルのファイル形式とファイル名拡張子を確認します。
|例外メッセージ|
|------------------------|
|Unsupported extension for default script. (既定のスクリプトの拡張子がサポートされていません。)|
|Unsupported file extension {0} for default script. (既定のスクリプトのファイル拡張子 \{0\} がサポートされていません。)|
## <a name="error-0121"></a>エラー 0121
テーブルが書き込み可能ではないために SQL の書き込みが失敗した場合、スローされます。
Azure Machine Learning では、[データのエクスポート](export-data.md)モジュールを使用して結果を SQL データベースのテーブルに保存するときに、テーブルに書き込むことができない場合、このエラーが生成されます。 通常、[データのエクスポート](export-data.md) モジュールは SQL Server インスタンスとの接続を正常に確立したものの、Azure ML データセットの内容をテーブルに書き込むことができない場合に、このエラーが表示されます。
**解決策:**
- [データのエクスポート](export-data.md) モジュールの [プロパティ] ウィンドウを開き、データベースとテーブルの名前が正しく入力されていることを確認します。
- エクスポートしているデータセットのスキーマを確認し、データがエクスポート先のテーブルと互換性があることを確認します。
- ユーザー名とパスワードに関連付けられている SQL サインインに、テーブルに書き込むためのアクセス許可があることを確認します。
- SQL Server からのエラー情報が例外に追加で含まれている場合は、その情報を使用して修正します。
|例外メッセージ|
|------------------------|
|Connected to server, unable to write to table. (サーバーに接続しましたが、テーブルに書き込むことができません。)|
|Unable to write to Sql table: {0} (SQL のテーブルに書き込むことができません: \{0\})|
## <a name="error-0122"></a>エラー 0122
複数の重み列を指定されているときに、重み列が 1 つのみ許可される場合、例外が発生します。
Azure Machine Learning では、重み列として選択されている列が多すぎる場合、このエラーが発生します。
**解決策:** 入力データセットとそのメタデータを確認します。 重みが含まれる列は 1 つのみであることを確認します。
|例外メッセージ|
|------------------------|
|Multiple weight columns are specified. (複数の重み列が指定されています。)|
## <a name="error-0123"></a>エラー 0123
ベクターの列がラベル列に指定されている場合、例外が発生します。
Azure Machine Learning では、ラベル列としてベクターを使用する場合、このエラーが発生します。
**解決策:** 必要に応じて列のデータ形式を変更するか、または別の列を選択します。
|例外メッセージ|
|------------------------|
|Column of vectors is specified as Label column. (ベクターの列がラベル列として指定されています。)|
## <a name="error-0124"></a>エラー 0124
数値ではない列が重み列として指定されている場合、例外が発生します。
**解決策:**
|例外メッセージ|
|------------------------|
|Non-numeric column is specified as the weight column. (数値以外の列が重み列として指定されています。)|
## <a name="error-0125"></a>エラー 0125
複数のデータセットのスキーマが一致しない場合、スローされます。
**解決策:**
|例外メッセージ|
|------------------------|
|Dataset schema does not match. (データセットのスキーマが一致しません。)|
## <a name="error-0126"></a>エラー 0126
Azure ML でサポートされていない SQL ドメインをユーザーが指定した場合、例外が発生します。
このエラーは、Azure Machine Learning でサポートされていない SQL ドメインをユーザーが指定する場合に生成されます。 ホワイトリストに登録されていないドメイン内のデータベース サーバーに接続しようとした場合、このエラーを受け取ります。 現在、許可される SQL ドメインは、".database.windows.net"、".cloudapp.net"、または ".database.secure.windows.net" です。 つまり、サーバーは、Azure SQL Server または Azure 上の仮想マシン内のサーバーである必要があります。
**解決策:** モジュールを見直します。 SQL データベース サーバーが承認済みドメインのいずれかに属していることを確認します。
- .database.windows.net
- .cloudapp.net
- .database.secure.windows.net
|例外メッセージ|
|------------------------|
|Unsupported SQL domain. (サポートされていない SQL ドメインです。)|
|The SQL domain {0} is not currently supported in Azure ML (現在、SQL ドメイン \{0\} は Azure ML でサポートされていません)|
## <a name="error-0127"></a>エラー 0127
画像のピクセル サイズが許容される上限を超えています
このエラーは、分類のために画像データセットから画像を読み取っていて、画像が大きすぎるためにモデルで処理できない場合に発生します。
<!--**Resolution:**
For more information about the image size and other requirements, see these topics:
- [Import Images](import-images.md)
- [Pretrained Cascade Image Classification](pretrained-cascade-image-classification.md) -->
|例外メッセージ|
|------------------------|
|Image pixel size exceeds allowed limit. (画像のピクセル サイズが許容される上限を超えています。)|
|Image pixel size in the file '{0}' exceeds allowed limit: '{1}' (ファイル '\{0\}' の画像のピクセル サイズが、許容される上限を超えています: '{1}')|
## <a name="error-0128"></a>エラー 0128
カテゴリ列の条件付き確率の値が制限を超えています。
**解決策:**
|例外メッセージ|
|------------------------|
|カテゴリ列の条件付き確率の値が制限を超えています。|
|カテゴリ列の条件付き確率の値が制限を超えています。 Columns '{0}' and '{1}' are the problematic pair. (列 '\{0\}' と '{1}' が問題を起こしているペアです。)|
## <a name="error-0129"></a>エラー 0129
データセット内の列の数が、許可される上限を超えています。
**解決策:**
|例外メッセージ|
|------------------------|
|データセット内の列の数が、許可される上限を超えています。|
|Number of columns in the dataset in '{0}' exceeds allowed. ('\{0\}' 内のデータセットの列数が、許可される値を超えています。)|
|Number of columns in the dataset in '{0}' exceeds allowed limit of '{1}'.' ('\{0\}' 内のデータセットの列数が、許可される上限 '{1}' を超えています。)|
|Number of columns in the dataset in '{0}' exceeds allowed '{1}' limit of '{2}'.' ('\{0\}' 内のデータセットの列数が、許可される '{1}' の制限 '{2}' を超えています。)|
## <a name="error-0130"></a>エラー 0130
トレーニング データセット内のすべての行に欠損値が含まれる場合、例外が発生します。
これは、トレーニング データセット内の一部の列が空の場合に発生します。
**解決策:** [見つからないデータのクリーンアップ](clean-missing-data.md) モジュールを使用して、すべてが欠損値の列を削除します。
|例外メッセージ|
|------------------------|
|All rows in training dataset contain missing values. (トレーニング データセットのすべての行に、欠損値が含まれます。) Consider using the Clean Missing Data module to remove missing values. (見つからないデータのクリーンアップ モジュールを使用して欠損値を削除することを検討してください。)|
## <a name="error-0131"></a>エラー 0131
zip ファイル内の 1 つまたは複数のデータセットを解凍して正しく登録できない場合、例外が発生します。
zip ファイル内の 1 つまたは複数のデータセットを解凍して正しく読み取ることができない場合、このエラーが生成されます。 zip ファイル自体、またはその中のファイルの 1 つが壊れているために解凍が失敗する場合、またはファイルを解凍して展開しようとしてシステム エラーが発生した場合、このエラーを受け取ります。
**解決策:** エラー メッセージで提供される詳細情報を使って、続行する方法を決定します。
|例外メッセージ|
|------------------------|
|Upload zipped datasets failed (zip 形式のデータセットのアップロードが失敗しました)|
|Zipped dataset {0} failed with the following message: {1} (zip 形式のデータセット \{0\} が次のメッセージで失敗しました: {1})|
|Zipped dataset {0} failed with a {1} exception with message: {2} (zip 形式のデータセット \{0\} が次のメッセージを伴う例外 {1} で失敗しました: {2})|
## <a name="error-0132"></a>エラー 0132
解凍するファイル名が指定されませんでした。zip ファイル内で複数のファイルが見つかりました。
解凍するファイル名が指定されず、zip ファイルで複数のファイルが見つかった場合、このエラーが生成されます。 .zip ファイルに複数の圧縮ファイルが含まれていても、モジュールの **[プロパティ]** ウィンドウの **[Dataset to Unpack]\(解凍するデータセット\)** テキスト ボックスで抽出するファイルを指定しなかった場合、このエラーを受け取ります。 現在は、モジュールを実行するたびに抽出できるファイルは 1 つだけです。
**解決策:** エラー メッセージでは、.zip ファイルで見つかったファイルの一覧が提供されます。 目的のファイルの名前をコピーして、 **[Dataset to Unpack]\(解凍するデータセット\)** テキスト ボックスに貼り付けます。
|例外メッセージ|
|------------------------|
|Zip file contains multiple files; you must specify the file to expand. (zip ファイルには複数のファイルが含まれます。展開するファイルを指定する必要があります。)|
|The file contains more than one file. (ファイルには複数のファイルが含まれます。) Specify the file to expand. (展開するファイルを指定してください。) The following files were found: {0} (次のファイルが見つかりました: {0})|
## <a name="error-0133"></a>エラー 0133
指定されたファイルが zip ファイルで見つかりませんでした
**[プロパティ]** ウィンドウの **[Dataset to Unpack]\(解凍するデータセット\)** フィールドに入力したファイル名が .zip ファイルで見つかったどのファイルの名前とも一致しない場合、このエラーが生成されます。 このエラーの最も一般的な原因は、入力にエラーがある場合、または展開するファイルを探すアーカイブ ファイルが間違っている場合です。
**解決策:** モジュールを見直します。 見つかったファイルの一覧に圧縮解除するファイルの名前が表示されている場合は、ファイル名をコピーして **[Dataset to Unpack]\(解凍するデータセット\)** プロパティ ボックスに貼り付けます。 目的のファイル名が一覧に表示されない場合は、正しい .zip ファイルであること、および目的のファイルの名前が正しいことを確認します。
|例外メッセージ|
|------------------------|
|The specified file was not found int the zip file. (指定されたファイルが zip ファイルに見つかりませんでした。)|
|The specified file was not found. (指定されたファイルは見つかりませんでした。) Found the following file(s): {0} (次のファイルが見つかりました: {0})|
## <a name="error-0134"></a>エラー 0134
ラベル列が存在しない場合、またはラベルの付いた行の数が不足している場合、例外が発生します。
モジュールでラベル列が必要であるのに列選択に含めない場合、またはラベル列の欠損値が多すぎる場合、このエラーが発生します。
前の操作によってデータセットが変更され、ダウンストリーム操作で使用できる行が不足している場合にも、このエラーが発生する可能性があります。 たとえば、**パーティションとサンプル** モジュールで、式を使って値によるデータセットの分割を行っているとします。 使用した式で一致するものが見つからない場合、パーティションによって得たデータセットの 1 つが空になります。
解決策:
列の選択にラベル列を含めても認識されない場合は、[メタデータの編集](edit-metadata.md)モジュールを使って、それをラベル列としてマークします。
<!--Use the [Summarize Data](summarize-data.md) module to generate a report that shows how many values are missing in each column. -->その後、[見つからないデータのクリーンアップ](clean-missing-data.md) モジュールを使って、ラベル列に欠損値のある行を削除できます。
入力データセットを調べて、有効なデータが含まれること、および操作の要件を満たすために十分な行があることを確認します。 多くのアルゴリズムでは、最低限の行数のデータが必要とされていながら、データには数行またはヘッダーしか含まれない場合、エラー メッセージが生成されます。
|例外メッセージ|
|------------------------|
|Exception occurs when label column is missing or has insufficient number of labeled rows. (ラベル列が存在しない場合、またはラベルの付いた行の数が不足している場合、例外が発生します。)|
|Exception occurs when label column is missing or has less than {0} labeled rows (ラベル列が存在しない場合、またはラベルの付いた行数が {0} より少ない場合、例外が発生します。)|
## <a name="error-0135"></a>エラー 0135
重心ベースのクラスターのみがサポートされます。
**解決策:** クラスターの初期化に重心を使用しないカスタム クラスタリング アルゴリズムに基づくクラスタリング モデルを評価しようとした場合、このエラー メッセージが発生する可能性があります。
<!--You can use [Evaluate Model](evaluate-model.md) to evaluate clustering models that are based on the [K-Means Clustering](k-means-clustering.md) module. For custom algorithms, use the [Execute R Script](execute-r-script.md) module to create a custom evaluation script. -->
|例外メッセージ|
|------------------------|
|Only centroid-based cluster is supported. (重心ベースのクラスターのみがサポートされます。)|
## <a name="error-0136"></a>エラー 0136
ファイル名が返されませんでした。その結果、ファイルを処理できません。
**解決策:**
|例外メッセージ|
|------------------------|
|No file name was returned; unable to process the file as a result. (ファイル名が返されませんでした。その結果、ファイルを処理できません。)|
## <a name="error-0137"></a>エラー 0137
Azure Storage SDK で、読み取りまたは書き込み中に、テーブルのプロパティとデータセットの列の間で変換を行うときにエラーが発生しました。
**解決策:**
|例外メッセージ|
|------------------------|
|Conversion error between Azure table storage property and dataset column. (Azure テーブル ストレージのプロパティとデータセットの列との間の変換エラー。)|
|Conversion error between Azure table storage property and dataset column. (Azure テーブル ストレージのプロパティとデータセットの列との間の変換エラー。) Additional information: {0} (追加情報: {0})|
## <a name="error-0138"></a>エラー 0138
メモリが不足しており、モジュールの実行を完了できません。 データセットのダウンサンプリングは、問題の軽減に役立つことがあります。
実行中のモジュールに、Azure コンテナーで使用できるより多くのメモリが必要な場合、このエラーが発生します。 大型のデータセットを使用しており、現在の操作がメモリに収まらない場合に、これが発生することがあります。
**解決策:** 大型のデータセットを読み取ろうとして、操作を完了できない場合は、データセットのダウンサンプリングが役立つことがあります。
<!--If you use the visualizations on datasets to check the cardinality of columns, only some rows are sampled. To get a full report, use [Summarize Data](summarize-data.md). You can also use the [Apply SQL Transformation](apply-sql-transformation.md) to check for the number of unique values in each column.
Sometimes transient loads can lead to this error. Machine support also changes over time.
Try using [Principal Component Analysis](principal-component-analysis.md) or one of the provided feature selection methods to reduce your dataset to a smaller set of more feature-rich columns: [Feature Selection](feature-selection-modules.md) -->
|例外メッセージ|
|------------------------|
|Memory has been exhausted, unable to complete running of module. (メモリが不足しており、モジュールの実行を完了できません。)|
## <a name="error-0139"></a>エラー 0139
列を別の型に変換できない場合、例外が発生します。
Azure Machine Learning では、列を別のデータ型に変換しようとするものの、その型が現在の操作またはモジュールによってサポートされていない場合、このエラーが発生します。
モジュールが現在のモジュールの要件を満たすためにデータを暗黙的に変換しようとするものの、変換を行うことができないときも、エラーが表示される場合があります。
**解決策:**
1. 入力データを確認し、使用する列の正確なデータ型と、エラーが生成されている列のデータ型を特定します。 データ型が正しいと思っていても、アップストリームの操作によってデータ型または列の使用方法が変更されていたことがわかる場合もあります。 [メタデータの編集](edit-metadata.md)モジュールを使用して、列のメタデータを元の状態にリセットします。
2. モジュールのヘルプ ページを見て、指定された操作の要件を確認します。 現在のモジュールでサポートされているデータ型と、サポートされている値の範囲を確認します。
<!--3. If values need to be truncated, rounded, or outliers removed, use the [Apply Math Operation](apply-math-operation.md) or [Clip Values](clip-values.md) modules to make corrections.-->
4. 列を別のデータ型に変換またはキャストできるかどうかを検討します。 次のすべてのモジュールでは、非常に柔軟で強力なデータ変更機能が提供されます。
<!--
+ [Apply SQL Transformation](apply-sql-transformation.md)
+ [Execute R Script](execute-r-script.md)
-->
+ [Python スクリプトの実行](execute-python-script.md)。
> [!NOTE]
> それでも動作しませんか。 問題に関する追加のフィードバックの提供をご検討ください。さらに有益なトラブルシューティング ガイダンスの開発に役立たせていただきます。 このページでフィードバックを送信し、エラーが発生したモジュールの名前と失敗したデータ型変換を指定するのみです。
|例外メッセージ|
|------------------------|
|Not allowed conversion. (変換は許可されていません。)|
|Could not convert: {0}. (変換できませんでした: {0}。)|
|Could not convert: {0}, on row {1}. (行 {1} で \{0\} を変換できませんでした。)|
|Could not convert column of type {0} to column of type {1} on row {2}. (行 {2} で \{0\} 型の列を {1} 型の列に変換できませんでした。)|
|Could not convert column "{2}" of type {0} to column of type {1} on row {3}. (行 {3} で \{0\} 型の列 "{2}" を {1} 型の列に変換できませんでした。)|
|Could not convert column "{2}" of type {0} to column "{3}" of type {1} on row {4}. (行 {4} で \{0\} 型の列 "{2}" を {1} 型の列 "{3}" に変換できませんでした。)|
## <a name="error-0140"></a>エラー 0140
渡された列セット引数にラベル列以外の他の列が含まれない場合、例外が発生します。
特徴を含む複数の列が必要なモジュールにデータセットを接続したものの、ラベル列のみを提供した場合、このエラーが発生します。
**解決策:** 少なくとも 1 つの特徴列を選択してデータセットに含めます。
|例外メッセージ|
|------------------------|
|Specified column set does not contain other columns except label column. (渡された列セットに、ラベル列以外の他の列が含まれません。)|
## <a name="error-0141"></a>エラー 0141
選択された数値列の数およびカテゴリ列と文字列列の一意の値の数が少なすぎる場合、例外が発生します。
Azure Machine Learning では、選択された列に操作を実行するのに十分な一意の値がない場合、このエラーが発生します。
**解決策:** 一部の操作では、特徴列とカテゴリ列で統計的操作が実行され、十分な値がない場合は、操作が失敗するか無効な結果が返される可能性があります。 データセットを調べて、特徴列とラベル列にある値の数を確認し、実行しようとしている操作が統計的に有効かどうかを判断します。
ソース データセットが有効な場合は、アップストリームのデータ操作やメタデータ操作によってデータが変更されたり値が削除されたりしたかどうかもチェックします。
アップストリームの操作に分割、サンプリング、または再サンプリングが含まれる場合、予想される数の行と値が出力に含まれていることを確認します。
|例外メッセージ|
|------------------------|
|The number of the selected numerical columns and unique values in the categorical and string columns is too small. (選択された数値列の数およびカテゴリ列と文字列列の一意の値の数が少なすぎます。)|
|The total number of the selected numerical columns and unique values in the categorical and string columns (currently {0}) should be at least {1} (選択された数値列の総数およびカテゴリ列と文字列列の一意の値の総数は、{1} 以上 (現在は \{0\}) である必要があります)|
## <a name="error-0142"></a>エラー 0142
認証する証明書のシステムによる読み込みができない場合、例外が発生します。
**解決策:**
|例外メッセージ|
|------------------------|
|The certificate cannot be loaded. (証明書を読み込むことができません。)|
|The certificate {0} cannot be loaded. (証明書 \{0\} を読み込むことができません。) Its thumbprint is {1}. (そのサムプリントは {1} です。)|
## <a name="error-0143"></a>エラー 0143
GitHub からと思われるユーザー指定の URL を解析できません。
Azure Machine Learning では、無効な URL を指定するときに、モジュールによって有効な GitHub の URL が必要とされる場合、このエラーが発生します。
**解決策:** URL で有効な GitHub リポジトリが参照されていることを確認します。 他のサイトの種類はサポートされていません。
|例外メッセージ|
|------------------------|
|URL is not from github.com. (URL が github.com のものではありません。)|
|URL is not from github.com: {0} (URL が github.com のものではありません: \{0\})|
## <a name="error-0144"></a>エラー 0144
ユーザー指定の GitHub URL に、予想される部分がありません。
Azure Machine Learning では、無効な URL 形式を使用して GitHub のファイル ソースを指定する場合、このエラーが発生します。
**解決策:** GitHub リポジトリの URL が有効であり、末尾が \blob\ または \tree\\ であることを確認します。
|例外メッセージ|
|------------------------|
|Cannot parse GitHub URL. (GitHub の URL を解析できません。)|
|Cannot parse GitHub URL (expecting '\blob\\' or '\tree\\' after the repository name): {0} (GitHub の URL を解析できません (リポジトリ名の後には '\blob\' または '\tree\' が必要です): \{0\})|
## <a name="error-0145"></a>エラー 0145
何らかの理由によりレプリケーション ディレクトリを作成できません。
Azure Machine Learning では、指定されたディレクトリをモジュールによって作成することができない場合、このエラーが発生します。
**解決策:**
|例外メッセージ|
|------------------------|
|Cannot create replication directory. (レプリケーション ディレクトリを作成できません。)|
## <a name="error-0146"></a>エラー 0146
ユーザー ファイルをローカル ディレクトリに解凍するとき、結合されたパスが長すぎる可能性があります。
Azure Machine Learning では、ファイルを抽出するときに解凍されたファイル名が長すぎると、このエラーが発生します。
**解決策:** 結合されたパスとファイル名が 248 文字以下になるように、ファイル名を編集します。
|例外メッセージ|
|------------------------|
|Replication path is longer than 248 characters, shorten the script name or path. (レプリケーション パスが 248 文字より長くなっています。スクリプト名またはパスを短くしてください。)|
## <a name="error-0147"></a>エラー 0147
何らかの理由により GitHub からダウンロードできませんでした。
Azure Machine Learning では、指定されたファイルを GitHub から読み取ったりダウンロードしたりできない場合、このエラーが発生します。
**解決策:** 問題は一時的なもの可能性があります。後でファイルにアクセスしてみてください。 または、必要なアクセス許可があり、ソースが有効であることを確認します。
|例外メッセージ|
|------------------------|
|GitHub access error. (GitHub アクセス エラー。)|
|GitHub access error. (GitHub アクセス エラー。) [https://login.microsoftonline.com/tfp/00000000-0000-0000-0000-000000000000/b2c_1a_tp_sign-up-or-sign-in/v2.0/]({0})|
## <a name="error-0148"></a>エラー 0148
データの抽出またはディレクトリの作成の間に、未承認のアクセスの問題が発生しました。
Azure Machine Learning では、ディレクトリの作成またはストレージからのデータの読み取りを試みたものの、必要なアクセス許可を持っていない場合、このエラーが発生します。
**解決策:**
|例外メッセージ|
|------------------------|
|Unauthorized access exception while extracting data. (データを抽出するときに未承認アクセスの例外が発生しました。)|
## <a name="error-0149"></a>エラー 0149
GitHub バンドル内にユーザー ファイルが存在しません。
Azure Machine Learning では、指定されたファイルが見つからない場合、このエラーが発生します。
解決策:
|例外メッセージ|
|------------------------|
|GitHub file is not found. (GitHub のファイルが見つかりません。)|
|GitHub file is not found.: {0} (GitHub のファイルが見つかりません: \{0\})|
## <a name="error-0150"></a>エラー 0150
ユーザー パッケージからのスクリプトを解凍できませんでした。最も可能性があるのは、GitHub ファイルとの競合のためです。
Azure Machine Learning では、スクリプトを抽出できない場合、このエラーが発生します。通常は、同じ名前のファイルが既にあるためです。
解決策:
|例外メッセージ|
|------------------------|
|Unable to unzip the bundle; possible name collision with GitHub files. (バンドルを解凍できません。GitHub ファイルと名前が競合している可能性があります。)|
## <a name="error-0151"></a>エラー 0151
クラウド ストレージへの書き込みでエラーが発生しました。 URL を確認してください。
Azure Machine Learning では、モジュールによってデータがクラウド ストレージに書き込まれるときに URL が利用できない場合、または無効である場合、このエラーが発生します。
解決策:URL を調べて、書き込み可能であることを確認します。
|例外メッセージ|
|------------------------|
|Error writing to cloud storage (possibly a bad url). (クラウド ストレージへの書き込みエラー (無効な URL の可能性)。)|
|Error writing to cloud storage: {0}. (クラウド ストレージへの書き込みエラー: \{0\}。) Check the url. (URL を確認してください。)|
## <a name="error-0152"></a>エラー 0152
モジュールのコンテキストで Azure クラウドの種類が不適切に指定されました。
|例外メッセージ|
|------------------------|
|Bad Azure Cloud Type (無効な Azure クラウドの種類)|
|Bad Azure Cloud Type: {0} (無効な Azure クラウドの種類: \{0\})|
## <a name="error-0153"></a>エラー 0153
指定されたストレージ エンドポイントが無効です。
|例外メッセージ|
|------------------------|
|Bad Azure Cloud Type (無効な Azure クラウドの種類)|
|Bad Storage End Point: {0} (無効なストレージ エンドポイント: \{0\})|
## <a name="error-0154"></a>エラー 0154
指定されたサーバー名を解決できませんでした
|例外メッセージ|
|------------------------|
|The specified server name could not be resolved (指定されたサーバー名を解決できませんでした)|
|The specified server {0}.documents.azure.com could not be resolved (指定されたサーバー \{0\}.documents.azure.com を解決できませんでした)|
## <a name="error-0155"></a>エラー 0155
DocDb クライアントによって例外がスローされました
|例外メッセージ|
|------------------------|
|The DocDb Client threw an exception (DocDb クライアントによって例外がスローされました)|
|DocDb Client: {0} (DocDb クライアント: \{0\})|
## <a name="error-0156"></a>エラー 0156
HCatalog サーバーに対する不適切な応答。
|例外メッセージ|
|------------------------|
|Bad response for HCatalog Server. (HCatalog サーバーに対する不適切な応答。) Check that all services are running. (すべてのサービスが実行されていることを確認してください。)|
|Bad response for HCatalog Server. (HCatalog サーバーに対する不適切な応答。) Check that all services are running. (すべてのサービスが実行されていることを確認してください。) Error details: {0} (エラーの詳細: \{0\})|
## <a name="error-0157"></a>エラー 0157
一貫性のないドキュメント スキーマまたは異なるドキュメント スキーマにより、Azure Cosmos DB からの読み取り中にエラーが発生しました。 閲覧者は、すべてのドキュメントに同じスキーマが含まれることを必要とします。
|例外メッセージ|
|------------------------|
|Detected documents with different schemas. (スキーマが異なるドキュメントが検出されました。) Make sure all documents have the same schema (すべてのドキュメントが同じスキーマであることを確認してください)|
## <a name="error-1000"></a>エラー 1000
内部ライブラリの例外。
このエラーは、他の方法では処理されない内部エンジン エラーをキャプチャするために提供されます。 そのため、このエラーの原因は、エラーが生成されたモジュールによって異なる可能性があります。
詳細なヘルプを取得するには、エラーに付随する詳細なメッセージに、入力として使用したデータなどのシナリオの説明を添えて、Azure Machine Learning フォーラムに投稿することをお勧めします。 このフィードバックは、エラーの優先順位付けや、さらに取り組むべき最重要問題の特定に役立てられます。
|例外メッセージ|
|------------------------|
|Library exception. (ライブラリ例外。)|
|Library exception: {0} (ライブラリ例外: \{0\})|
|{0} library exception: {1} (\{0\} ライブラリ例外: {1})|
| 43.649658 | 426 | 0.716598 | jpn_Jpan | 0.353521 |
7af77186e18430d6e3588198005af3cf35eccb74 | 639 | md | Markdown | README.md | shahirdaya/redpanda-grafana | 024610200187797c6eb3b15633c3b84144946f66 | [
"Apache-2.0"
] | null | null | null | README.md | shahirdaya/redpanda-grafana | 024610200187797c6eb3b15633c3b84144946f66 | [
"Apache-2.0"
] | null | null | null | README.md | shahirdaya/redpanda-grafana | 024610200187797c6eb3b15633c3b84144946f66 | [
"Apache-2.0"
] | null | null | null | # redpanda-grafana
Docker Compose for Redpanda, Prometheus and Grafana
Sets up a three node Redpanda cluster with Prometheus for scraping metrics and Grafana for dashboards.
Requires `docker` and `docker-compose`.
## Instructions
```
git clone https://github.com/patrickangeles/redpanda-grafana
cd redpanda-grafana
docker compose up -d
```
To view your dashboard, go to http://localhost:3000/
To create a topic with 10 partitions and 3x replication:
```
docker exec redpanda1 rpk topic create mytopic -p 10 -r 3
```
## Cleanup
To shut down (run this command from the same working directory as before):
```
docker compose down
```
| 21.3 | 102 | 0.758998 | eng_Latn | 0.831023 |
7af7795d5579c71f041aae0722437a1d3ff04210 | 1,514 | md | Markdown | docs/c-language/summary-of-tokens.md | stu85010/cpp-docs.zh-tw | bac0362e722d794727f509d63a2e3179b70d0785 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-language/summary-of-tokens.md | stu85010/cpp-docs.zh-tw | bac0362e722d794727f509d63a2e3179b70d0785 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-language/summary-of-tokens.md | stu85010/cpp-docs.zh-tw | bac0362e722d794727f509d63a2e3179b70d0785 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 語彙基元摘要
ms.date: 11/04/2016
ms.assetid: 2978cbf6-e66e-46fc-9938-caa052f2ccf7
ms.openlocfilehash: f295f59ce5767fce5416dde3545188cb0b1d4408
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 10/31/2018
ms.locfileid: "50549535"
---
# <a name="summary-of-tokens"></a>語彙基元摘要
*token*:<br/>
*關鍵字*<br/>
*識別碼*<br/>
*常數*<br/>
*string-literal*<br/>
*運算子*<br/>
*標點符號*
*preprocessing-token*:<br/>
*header-name*<br/>
*識別碼*<br/>
*pp-number*<br/>
*character-constant*<br/>
*string-literal*<br/>
*運算子標點符號*<br/>
不可是上述任一個的每個非空白字元
*header-name*:<br/>
**\<** *path-spec* **>**<br/>
**"** *path spec* **"**
*path-spec*:<br/>
合法的檔案路徑
*pp-number*:<br/>
*digit*<br/>
**.** *digit*<br/>
*pp-number* *digit* <br/>
*pp-number* *nondigit*<br/>
*pp-number* **e** *sign*<br/>
*pp-number* **E** *sign*<br/>
*pp-number* **.**
## <a name="see-also"></a>請參閱
[語彙文法](../c-language/lexical-grammar.md) | 30.897959 | 60 | 0.654557 | yue_Hant | 0.093331 |
7af7cf3313d665e58b913b8b8ef0709f26211935 | 1,165 | md | Markdown | docs/ru/operations/storing-data.md | feihengye/ClickHouse | 40f5e06a8d9b2074b5985a8042f3ebf9940c77f4 | [
"Apache-2.0"
] | 1 | 2021-08-11T21:51:53.000Z | 2021-08-11T21:51:53.000Z | docs/ru/operations/storing-data.md | feihengye/ClickHouse | 40f5e06a8d9b2074b5985a8042f3ebf9940c77f4 | [
"Apache-2.0"
] | 1 | 2021-06-28T15:03:05.000Z | 2021-06-28T15:03:05.000Z | docs/ru/operations/storing-data.md | feihengye/ClickHouse | 40f5e06a8d9b2074b5985a8042f3ebf9940c77f4 | [
"Apache-2.0"
] | null | null | null | ---
toc_priority: 68
toc_title: "Хранение данных на внешних дисках"
---
# Хранение данных на внешних дисках {#external-disks}
Данные, которые обрабатываются в ClickHouse, обычно хранятся в файловой системе локально, где развернут сервер ClickHouse. При этом для хранения данных требуются диски большого объема, которые могут быть довольно дорогостоящими. Решением проблемы может стать хранение данных отдельно от сервера — в распределенных файловых системах — [Amazon s3](https://aws.amazon.com/s3/) или Hadoop ([HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)).
Для работы с данными, хранящимися в файловой системе `Amazon s3`, используйте движок [s3](../engines/table-engines/integrations/s3.md), а для работы с данными в файловой системе Hadoop — движок [HDFS](../engines/table-engines/integrations/hdfs.md).
## Репликация без копирования данных {#zero-copy}
Для дисков `s3` и `HDFS` в ClickHouse поддерживается репликация без копирования данных (zero-copy): если данные хранятся на нескольких репликах, то при синхронизации пересылаются только метаданные (пути к кускам данных), а сами данные не копируются.
| 77.666667 | 483 | 0.791416 | rus_Cyrl | 0.91335 |
7af82ba7b04c87bcfe40004ea592ab7474dd264e | 295 | md | Markdown | README.md | galiniliev/Kestrel-Ingress | 6c93dcbabb7946ba9686fdfb799331c24660f080 | [
"MIT"
] | 7 | 2020-02-21T00:27:47.000Z | 2020-11-26T06:36:28.000Z | README.md | galiniliev/Kestrel-Ingress | 6c93dcbabb7946ba9686fdfb799331c24660f080 | [
"MIT"
] | 1 | 2021-05-01T18:57:42.000Z | 2021-05-01T18:57:42.000Z | README.md | galiniliev/Kestrel-Ingress | 6c93dcbabb7946ba9686fdfb799331c24660f080 | [
"MIT"
] | 5 | 2020-03-05T17:46:49.000Z | 2021-05-01T19:00:18.000Z | # Kestrel-Ingress and Kestrel Ingress Controller for Kubernetes
An Ingress Controller and Ingress written using ASP.NET Core and Kestrel.
# Getting started
The Kestrel-Ingress uses Skaffold to run a backend, an ingress controller, and the ingress itself.
To get started, run `skaffold run`.
| 29.5 | 98 | 0.79661 | eng_Latn | 0.959283 |
7af871950ad3b9dae86c56c914c9bb07156c7830 | 45,003 | md | Markdown | Solapur.md | buda-magenta/hazard_map | ba60979446aa3f9e2fd7826e8450b4f0dca00565 | [
"MIT"
] | null | null | null | Solapur.md | buda-magenta/hazard_map | ba60979446aa3f9e2fd7826e8450b4f0dca00565 | [
"MIT"
] | null | null | null | Solapur.md | buda-magenta/hazard_map | ba60979446aa3f9e2fd7826e8450b4f0dca00565 | [
"MIT"
] | null | null | null | ---
layout: page
title: "Outbreak location: Solapur"
---
<div class="flex-container">
<div class="flex-item-left" id="mapid">
<script src="https://buda-magenta.github.io/hazard_map/load_map.js"></script>
<script>
var marker_outbreak = L.marker([17.849907, 75.276320],{"autoPan": true}).addTo(map); marker_outbreak.bindTooltip("Solapur").openTooltip();
var circle_1 = L.circle([18.793568, 80.815939], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 67469, "stroke": true, "weight": 3}).addTo(map);
circle_1.bindTooltip("Bijapur<br>rank: 1<br>hazard index: 0.067470")
circle_1.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bijapur">Bijapur</a>')
var circle_2 = L.circle([18.521428, 73.854454], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 55437, "stroke": true, "weight": 3}).addTo(map);
circle_2.bindTooltip("Pune<br>rank: 2<br>hazard index: 0.055438")
circle_2.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Pune">Pune</a>')
var circle_3 = L.circle([12.979120, 77.591300], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 31076, "stroke": true, "weight": 3}).addTo(map);
circle_3.bindTooltip("Bangalore<br>rank: 3<br>hazard index: 0.031076")
circle_3.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bangalore">Bangalore</a>')
var circle_4 = L.circle([16.850253, 74.594888], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 28416, "stroke": true, "weight": 3}).addTo(map);
circle_4.bindTooltip("Sangli<br>rank: 4<br>hazard index: 0.028417")
circle_4.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Sangli">Sangli</a>')
var circle_5 = L.circle([16.702841, 74.240533], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 23998, "stroke": true, "weight": 3}).addTo(map);
circle_5.bindTooltip("Kolhapur<br>rank: 5<br>hazard index: 0.023998")
circle_5.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kolhapur">Kolhapur</a>')
var circle_6 = L.circle([17.388786, 78.461065], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 22971, "stroke": true, "weight": 3}).addTo(map);
circle_6.bindTooltip("Hyderabad<br>rank: 6<br>hazard index: 0.022971")
circle_6.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Hyderabad">Hyderabad</a>')
var circle_7 = L.circle([18.351469, 76.755121], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 18213, "stroke": true, "weight": 3}).addTo(map);
circle_7.bindTooltip("Latur<br>rank: 7<br>hazard index: 0.018213")
circle_7.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Latur">Latur</a>')
var circle_8 = L.circle([18.627929, 73.800983], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 17980, "stroke": true, "weight": 3}).addTo(map);
circle_8.bindTooltip("Pimpri Chinchwad<br>rank: 8<br>hazard index: 0.017981")
circle_8.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Pimpri_Chinchwad">Pimpri Chinchwad</a>')
var circle_9 = L.circle([19.075990, 72.877393], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 14130, "stroke": true, "weight": 3}).addTo(map);
circle_9.bindTooltip("Mumbai<br>rank: 9<br>hazard index: 0.014130")
circle_9.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mumbai">Mumbai</a>')
var circle_10 = L.circle([16.185317, 75.696792], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 10417, "stroke": true, "weight": 3}).addTo(map);
circle_10.bindTooltip("Bagalkot<br>rank: 10<br>hazard index: 0.010418")
circle_10.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bagalkot">Bagalkot</a>')
var circle_11 = L.circle([16.695935, 74.455575], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 9341, "stroke": true, "weight": 3}).addTo(map);
circle_11.bindTooltip("Ichalkaranji<br>rank: 11<br>hazard index: 0.009342")
circle_11.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ichalkaranji">Ichalkaranji</a>')
var circle_12 = L.circle([15.426365, 75.630079], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 9075, "stroke": true, "weight": 3}).addTo(map);
circle_12.bindTooltip("Gadag<br>rank: 12<br>hazard index: 0.009076")
circle_12.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Gadag">Gadag</a>')
var circle_13 = L.circle([12.869810, 74.843008], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 8890, "stroke": true, "weight": 3}).addTo(map);
circle_13.bindTooltip("Mangalore<br>rank: 13<br>hazard index: 0.008890")
circle_13.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mangalore">Mangalore</a>')
var circle_14 = L.circle([16.083333, 77.166667], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 6135, "stroke": true, "weight": 3}).addTo(map);
circle_14.bindTooltip("Raichur<br>rank: 14<br>hazard index: 0.006135")
circle_14.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Raichur">Raichur</a>')
var circle_15 = L.circle([15.351838, 75.137985], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 6049, "stroke": true, "weight": 3}).addTo(map);
circle_15.bindTooltip("Hubli<br>rank: 15<br>hazard index: 0.006050")
circle_15.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Hubli">Hubli</a>')
var circle_16 = L.circle([13.083694, 80.270186], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 5334, "stroke": true, "weight": 3}).addTo(map);
circle_16.bindTooltip("Chennai<br>rank: 16<br>hazard index: 0.005335")
circle_16.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Chennai">Chennai</a>')
var circle_17 = L.circle([17.980609, 79.598212], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 4228, "stroke": true, "weight": 3}).addTo(map);
circle_17.bindTooltip("Warangal<br>rank: 17<br>hazard index: 0.004229")
circle_17.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Warangal">Warangal</a>')
var circle_18 = L.circle([16.181939, 81.135130], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 4026, "stroke": true, "weight": 3}).addTo(map);
circle_18.bindTooltip("Machilipatnam<br>rank: 18<br>hazard index: 0.004026")
circle_18.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Machilipatnam">Machilipatnam</a>')
var circle_19 = L.circle([18.169844, 76.117963], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 3733, "stroke": true, "weight": 3}).addTo(map);
circle_19.bindTooltip("Osmanabad<br>rank: 19<br>hazard index: 0.003734")
circle_19.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Osmanabad">Osmanabad</a>')
var circle_20 = L.circle([19.250000, 74.750000], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 3630, "stroke": true, "weight": 3}).addTo(map);
circle_20.bindTooltip("Ahmadnagar<br>rank: 20<br>hazard index: 0.003630")
circle_20.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ahmadnagar">Ahmadnagar</a>')
var circle_21 = L.circle([15.857267, 74.506934], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 3273, "stroke": true, "weight": 3}).addTo(map);
circle_21.bindTooltip("Belgaum<br>rank: 21<br>hazard index: 0.003273")
circle_21.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Belgaum">Belgaum</a>')
var circle_22 = L.circle([18.182992, 75.743925], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 2575, "stroke": true, "weight": 3}).addTo(map);
circle_22.bindTooltip("Barshi<br>rank: 22<br>hazard index: 0.002575")
circle_22.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Barshi">Barshi</a>')
var circle_23 = L.circle([12.305183, 76.655361], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 2041, "stroke": true, "weight": 3}).addTo(map);
circle_23.bindTooltip("Mysore<br>rank: 23<br>hazard index: 0.002041")
circle_23.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mysore">Mysore</a>')
var circle_24 = L.circle([19.194329, 72.970178], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 1994, "stroke": true, "weight": 3}).addTo(map);
circle_24.bindTooltip("Thane<br>rank: 24<br>hazard index: 0.001995")
circle_24.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Thane">Thane</a>')
var circle_25 = L.circle([17.636129, 74.298278], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 1777, "stroke": true, "weight": 3}).addTo(map);
circle_25.bindTooltip("Satara<br>rank: 25<br>hazard index: 0.001777")
circle_25.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Satara">Satara</a>')
var circle_26 = L.circle([19.087076, 82.023572], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 1353, "stroke": true, "weight": 3}).addTo(map);
circle_26.bindTooltip("Jagdalpur<br>rank: 26<br>hazard index: 0.001353")
circle_26.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Jagdalpur">Jagdalpur</a>')
var circle_27 = L.circle([28.651718, 77.221939], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 1175, "stroke": true, "weight": 3}).addTo(map);
circle_27.bindTooltip("Delhi<br>rank: 27<br>hazard index: 0.001175")
circle_27.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Delhi">Delhi</a>')
var circle_28 = L.circle([19.169335, 77.311013], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 1017, "stroke": true, "weight": 3}).addTo(map);
circle_28.bindTooltip("Nanded Waghala<br>rank: 28<br>hazard index: 0.001017")
circle_28.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nanded_Waghala">Nanded Waghala</a>')
var circle_29 = L.circle([14.466127, 75.920636], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 972, "stroke": true, "weight": 3}).addTo(map);
circle_29.bindTooltip("Davanagere<br>rank: 29<br>hazard index: 0.000972")
circle_29.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Davanagere">Davanagere</a>')
var circle_30 = L.circle([11.001812, 76.962842], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 821, "stroke": true, "weight": 3}).addTo(map);
circle_30.bindTooltip("Coimbatore<br>rank: 30<br>hazard index: 0.000821")
circle_30.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Coimbatore">Coimbatore</a>')
var circle_31 = L.circle([13.340077, 77.100621], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 794, "stroke": true, "weight": 3}).addTo(map);
circle_31.bindTooltip("Tumkur<br>rank: 31<br>hazard index: 0.000794")
circle_31.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tumkur">Tumkur</a>')
var circle_32 = L.circle([16.508759, 80.618510], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 752, "stroke": true, "weight": 3}).addTo(map);
circle_32.bindTooltip("Vijayawada<br>rank: 32<br>hazard index: 0.000752")
circle_32.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Vijayawada">Vijayawada</a>')
var circle_33 = L.circle([17.166667, 77.083333], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 732, "stroke": true, "weight": 3}).addTo(map);
circle_33.bindTooltip("Gulbarga<br>rank: 33<br>hazard index: 0.000732")
circle_33.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Gulbarga">Gulbarga</a>')
var circle_34 = L.circle([14.475294, 78.821686], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 699, "stroke": true, "weight": 3}).addTo(map);
circle_34.bindTooltip("Kadapa<br>rank: 34<br>hazard index: 0.000700")
circle_34.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kadapa">Kadapa</a>')
var circle_35 = L.circle([11.664300, 78.146000], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 687, "stroke": true, "weight": 3}).addTo(map);
circle_35.bindTooltip("Salem<br>rank: 35<br>hazard index: 0.000688")
circle_35.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Salem">Salem</a>')
var circle_36 = L.circle([20.325704, 78.116914], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 667, "stroke": true, "weight": 3}).addTo(map);
circle_36.bindTooltip("Yavatmal<br>rank: 36<br>hazard index: 0.000667")
circle_36.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Yavatmal">Yavatmal</a>')
var circle_37 = L.circle([15.631900, 77.275900], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 571, "stroke": true, "weight": 3}).addTo(map);
circle_37.bindTooltip("Adoni<br>rank: 37<br>hazard index: 0.000571")
circle_37.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Adoni">Adoni</a>')
var circle_38 = L.circle([8.576971, 77.050125], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 515, "stroke": true, "weight": 3}).addTo(map);
circle_38.bindTooltip("Thiruvananthapuram<br>rank: 38<br>hazard index: 0.000516")
circle_38.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Thiruvananthapuram">Thiruvananthapuram</a>')
var circle_39 = L.circle([20.843512, 75.525927], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 487, "stroke": true, "weight": 3}).addTo(map);
circle_39.bindTooltip("Jalgaon<br>rank: 39<br>hazard index: 0.000488")
circle_39.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Jalgaon">Jalgaon</a>')
var circle_40 = L.circle([15.143395, 76.919388], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 469, "stroke": true, "weight": 3}).addTo(map);
circle_40.bindTooltip("Bellary<br>rank: 40<br>hazard index: 0.000470")
circle_40.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bellary">Bellary</a>')
var circle_41 = L.circle([15.119651, 77.455290], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 452, "stroke": true, "weight": 3}).addTo(map);
circle_41.bindTooltip("Guntakal<br>rank: 41<br>hazard index: 0.000453")
circle_41.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Guntakal">Guntakal</a>')
var circle_42 = L.circle([17.723128, 83.301284], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 443, "stroke": true, "weight": 3}).addTo(map);
circle_42.bindTooltip("Visakhapatnam<br>rank: 42<br>hazard index: 0.000444")
circle_42.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Visakhapatnam">Visakhapatnam</a>')
var circle_43 = L.circle([14.422347, 77.720069], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 430, "stroke": true, "weight": 3}).addTo(map);
circle_43.bindTooltip("Dharmavaram<br>rank: 43<br>hazard index: 0.000431")
circle_43.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Dharmavaram">Dharmavaram</a>')
var circle_44 = L.circle([19.290314, 76.602903], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 430, "stroke": true, "weight": 3}).addTo(map);
circle_44.bindTooltip("Parbhani<br>rank: 44<br>hazard index: 0.000430")
circle_44.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Parbhani">Parbhani</a>')
var circle_45 = L.circle([11.258608, 75.778874], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 403, "stroke": true, "weight": 3}).addTo(map);
circle_45.bindTooltip("Kozhikode<br>rank: 45<br>hazard index: 0.000404")
circle_45.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kozhikode">Kozhikode</a>')
var circle_46 = L.circle([12.955100, 78.269900], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 394, "stroke": true, "weight": 3}).addTo(map);
circle_46.bindTooltip("Robertson Pet<br>rank: 46<br>hazard index: 0.000395")
circle_46.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Robertson_Pet">Robertson Pet</a>')
var circle_47 = L.circle([9.926115, 78.114098], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 388, "stroke": true, "weight": 3}).addTo(map);
circle_47.bindTooltip("Madurai<br>rank: 47<br>hazard index: 0.000389")
circle_47.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Madurai">Madurai</a>')
var circle_48 = L.circle([26.055318, 82.993139], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 384, "stroke": true, "weight": 3}).addTo(map);
circle_48.bindTooltip("Nizamabad<br>rank: 48<br>hazard index: 0.000385")
circle_48.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nizamabad">Nizamabad</a>')
var circle_49 = L.circle([16.432998, 80.993715], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 381, "stroke": true, "weight": 3}).addTo(map);
circle_49.bindTooltip("Gudivada<br>rank: 49<br>hazard index: 0.000382")
circle_49.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Gudivada">Gudivada</a>')
var circle_50 = L.circle([14.654623, 77.556260], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 371, "stroke": true, "weight": 3}).addTo(map);
circle_50.bindTooltip("Anantapur<br>rank: 50<br>hazard index: 0.000371")
circle_50.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Anantapur">Anantapur</a>')
var circle_51 = L.circle([23.021624, 72.579707], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 364, "stroke": true, "weight": 3}).addTo(map);
circle_51.bindTooltip("Ahmedabad<br>rank: 51<br>hazard index: 0.000364")
circle_51.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ahmedabad">Ahmedabad</a>')
var circle_52 = L.circle([11.101781, 77.345192], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 343, "stroke": true, "weight": 3}).addTo(map);
circle_52.bindTooltip("Tiruppur<br>rank: 52<br>hazard index: 0.000344")
circle_52.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tiruppur">Tiruppur</a>')
var circle_53 = L.circle([21.149813, 79.082056], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 334, "stroke": true, "weight": 3}).addTo(map);
circle_53.bindTooltip("Nagpur<br>rank: 53<br>hazard index: 0.000334")
circle_53.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nagpur">Nagpur</a>')
var circle_54 = L.circle([16.743454, 77.992319], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 322, "stroke": true, "weight": 3}).addTo(map);
circle_54.bindTooltip("Mahbubnagar<br>rank: 54<br>hazard index: 0.000322")
circle_54.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mahbubnagar">Mahbubnagar</a>')
var circle_55 = L.circle([19.439885, 72.880383], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 317, "stroke": true, "weight": 3}).addTo(map);
circle_55.bindTooltip("Vasai<br>rank: 55<br>hazard index: 0.000318")
circle_55.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Vasai">Vasai</a>')
var circle_56 = L.circle([12.523889, 76.896196], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 316, "stroke": true, "weight": 3}).addTo(map);
circle_56.bindTooltip("Mandya<br>rank: 56<br>hazard index: 0.000317")
circle_56.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mandya">Mandya</a>')
var circle_57 = L.circle([22.541418, 88.357691], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 315, "stroke": true, "weight": 3}).addTo(map);
circle_57.bindTooltip("Kolkata<br>rank: 57<br>hazard index: 0.000315")
circle_57.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kolkata">Kolkata</a>')
var circle_58 = L.circle([21.170200, 72.831100], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 310, "stroke": true, "weight": 3}).addTo(map);
circle_58.bindTooltip("Surat<br>rank: 58<br>hazard index: 0.000310")
circle_58.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Surat">Surat</a>')
var circle_59 = L.circle([15.830925, 78.042537], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 306, "stroke": true, "weight": 3}).addTo(map);
circle_59.bindTooltip("Kurnool<br>rank: 59<br>hazard index: 0.000306")
circle_59.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kurnool">Kurnool</a>')
var circle_60 = L.circle([17.910400, 77.519900], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 286, "stroke": true, "weight": 3}).addTo(map);
circle_60.bindTooltip("Bidar<br>rank: 60<br>hazard index: 0.000286")
circle_60.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bidar">Bidar</a>')
var circle_61 = L.circle([18.761516, 79.478785], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 278, "stroke": true, "weight": 3}).addTo(map);
circle_61.bindTooltip("Ramagundam<br>rank: 61<br>hazard index: 0.000279")
circle_61.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ramagundam">Ramagundam</a>')
var circle_62 = L.circle([10.804973, 78.687030], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 277, "stroke": true, "weight": 3}).addTo(map);
circle_62.bindTooltip("Tiruchirappalli<br>rank: 62<br>hazard index: 0.000278")
circle_62.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tiruchirappalli">Tiruchirappalli</a>')
var circle_63 = L.circle([12.732884, 77.830948], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 265, "stroke": true, "weight": 3}).addTo(map);
circle_63.bindTooltip("Hosur<br>rank: 63<br>hazard index: 0.000265")
circle_63.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Hosur">Hosur</a>')
var circle_64 = L.circle([16.291519, 80.454159], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 263, "stroke": true, "weight": 3}).addTo(map);
circle_64.bindTooltip("Guntur<br>rank: 64<br>hazard index: 0.000263")
circle_64.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Guntur">Guntur</a>')
var circle_65 = L.circle([25.335649, 83.007629], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 244, "stroke": true, "weight": 3}).addTo(map);
circle_65.bindTooltip("Varanasi<br>rank: 65<br>hazard index: 0.000245")
circle_65.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Varanasi">Varanasi</a>')
var circle_66 = L.circle([8.887951, 76.595501], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 239, "stroke": true, "weight": 3}).addTo(map);
circle_66.bindTooltip("Kollam<br>rank: 66<br>hazard index: 0.000239")
circle_66.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kollam">Kollam</a>')
var circle_67 = L.circle([15.266493, 76.387230], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 236, "stroke": true, "weight": 3}).addTo(map);
circle_67.bindTooltip("Hospet<br>rank: 67<br>hazard index: 0.000237")
circle_67.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Hospet">Hospet</a>')
var circle_68 = L.circle([14.625888, 75.635724], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 235, "stroke": true, "weight": 3}).addTo(map);
circle_68.bindTooltip("Ranibennur<br>rank: 68<br>hazard index: 0.000236")
circle_68.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ranibennur">Ranibennur</a>')
var circle_69 = L.circle([16.676135, 81.170868], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 227, "stroke": true, "weight": 3}).addTo(map);
circle_69.bindTooltip("Eluru<br>rank: 69<br>hazard index: 0.000228")
circle_69.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Eluru">Eluru</a>')
var circle_70 = L.circle([25.438130, 81.833800], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 227, "stroke": true, "weight": 3}).addTo(map);
circle_70.bindTooltip("Allahabad<br>rank: 70<br>hazard index: 0.000227")
circle_70.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Allahabad">Allahabad</a>')
var circle_71 = L.circle([19.261944, 73.194760], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 225, "stroke": true, "weight": 3}).addTo(map);
circle_71.bindTooltip("Ulhas Nagar<br>rank: 71<br>hazard index: 0.000225")
circle_71.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ulhas_Nagar">Ulhas Nagar</a>')
var circle_72 = L.circle([19.918233, 75.868625], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 225, "stroke": true, "weight": 3}).addTo(map);
circle_72.bindTooltip("Jalna<br>rank: 72<br>hazard index: 0.000225")
circle_72.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Jalna">Jalna</a>')
var circle_73 = L.circle([15.431506, 76.532774], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 219, "stroke": true, "weight": 3}).addTo(map);
circle_73.bindTooltip("Gangawati<br>rank: 73<br>hazard index: 0.000220")
circle_73.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Gangawati">Gangawati</a>')
var circle_74 = L.circle([10.525626, 76.213254], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 216, "stroke": true, "weight": 3}).addTo(map);
circle_74.bindTooltip("Thrissur<br>rank: 74<br>hazard index: 0.000216")
circle_74.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Thrissur">Thrissur</a>')
var circle_75 = L.circle([23.160894, 79.949770], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 214, "stroke": true, "weight": 3}).addTo(map);
circle_75.bindTooltip("Jabalpur<br>rank: 75<br>hazard index: 0.000214")
circle_75.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Jabalpur">Jabalpur</a>')
var circle_76 = L.circle([19.295200, 72.854400], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 213, "stroke": true, "weight": 3}).addTo(map);
circle_76.bindTooltip("Mira-Bhayandar<br>rank: 76<br>hazard index: 0.000214")
circle_76.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Mira-Bhayandar">Mira-Bhayandar</a>')
var circle_77 = L.circle([13.137000, 78.133961], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 203, "stroke": true, "weight": 3}).addTo(map);
circle_77.bindTooltip("Kolar<br>rank: 77<br>hazard index: 0.000204")
circle_77.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kolar">Kolar</a>')
var circle_78 = L.circle([8.188047, 77.429049], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 203, "stroke": true, "weight": 3}).addTo(map);
circle_78.bindTooltip("Nagercoil<br>rank: 78<br>hazard index: 0.000203")
circle_78.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nagercoil">Nagercoil</a>')
var circle_79 = L.circle([14.906956, 78.009707], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 200, "stroke": true, "weight": 3}).addTo(map);
circle_79.bindTooltip("Tadipatri<br>rank: 79<br>hazard index: 0.000201")
circle_79.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tadipatri">Tadipatri</a>')
var circle_80 = L.circle([15.398403, 73.812918], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 199, "stroke": true, "weight": 3}).addTo(map);
circle_80.bindTooltip("Vasco Da Gama<br>rank: 80<br>hazard index: 0.000199")
circle_80.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Vasco_Da_Gama">Vasco Da Gama</a>')
var circle_81 = L.circle([19.362531, 73.078475], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 198, "stroke": true, "weight": 3}).addTo(map);
circle_81.bindTooltip("Bhiwandi<br>rank: 81<br>hazard index: 0.000199")
circle_81.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bhiwandi">Bhiwandi</a>')
var circle_82 = L.circle([20.011247, 73.790236], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 195, "stroke": true, "weight": 3}).addTo(map);
circle_82.bindTooltip("Nashik<br>rank: 82<br>hazard index: 0.000196")
circle_82.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nashik">Nashik</a>')
var circle_83 = L.circle([13.631637, 79.423171], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 194, "stroke": true, "weight": 3}).addTo(map);
circle_83.bindTooltip("Tirupati<br>rank: 83<br>hazard index: 0.000194")
circle_83.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tirupati">Tirupati</a>')
var circle_84 = L.circle([13.007082, 76.099270], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 182, "stroke": true, "weight": 3}).addTo(map);
circle_84.bindTooltip("Hassan<br>rank: 84<br>hazard index: 0.000182")
circle_84.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Hassan">Hassan</a>')
var circle_85 = L.circle([20.761862, 77.192172], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 175, "stroke": true, "weight": 3}).addTo(map);
circle_85.bindTooltip("Akola<br>rank: 85<br>hazard index: 0.000175")
circle_85.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Akola">Akola</a>')
var circle_86 = L.circle([13.932609, 75.574978], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 167, "stroke": true, "weight": 3}).addTo(map);
circle_86.bindTooltip("Shimoga<br>rank: 86<br>hazard index: 0.000168")
circle_86.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Shimoga">Shimoga</a>')
var circle_87 = L.circle([18.434644, 79.132265], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 166, "stroke": true, "weight": 3}).addTo(map);
circle_87.bindTooltip("Karimnagar<br>rank: 87<br>hazard index: 0.000167")
circle_87.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Karimnagar">Karimnagar</a>')
var circle_88 = L.circle([13.341917, 74.747323], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 162, "stroke": true, "weight": 3}).addTo(map);
circle_88.bindTooltip("Udupi<br>rank: 88<br>hazard index: 0.000162")
circle_88.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Udupi">Udupi</a>')
var circle_89 = L.circle([9.931308, 76.267414], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 145, "stroke": true, "weight": 3}).addTo(map);
circle_89.bindTooltip("Kochi<br>rank: 89<br>hazard index: 0.000146")
circle_89.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Kochi">Kochi</a>')
var circle_90 = L.circle([18.437436, 77.110521], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 142, "stroke": true, "weight": 3}).addTo(map);
circle_90.bindTooltip("Udgir<br>rank: 90<br>hazard index: 0.000143")
circle_90.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Udgir">Udgir</a>')
var circle_91 = L.circle([19.143607, 73.295535], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 130, "stroke": true, "weight": 3}).addTo(map);
circle_91.bindTooltip("Ambarnath<br>rank: 91<br>hazard index: 0.000131")
circle_91.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Ambarnath">Ambarnath</a>')
var circle_92 = L.circle([16.857964, 79.217494], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 124, "stroke": true, "weight": 3}).addTo(map);
circle_92.bindTooltip("Nalgonda<br>rank: 92<br>hazard index: 0.000125")
circle_92.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Nalgonda">Nalgonda</a>')
var circle_93 = L.circle([20.266777, 85.843559], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 124, "stroke": true, "weight": 3}).addTo(map);
circle_93.bindTooltip("Bhubaneswar<br>rank: 93<br>hazard index: 0.000124")
circle_93.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Bhubaneswar">Bhubaneswar</a>')
var circle_94 = L.circle([11.369204, 77.676627], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 123, "stroke": true, "weight": 3}).addTo(map);
circle_94.bindTooltip("Erode<br>rank: 94<br>hazard index: 0.000123")
circle_94.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Erode">Erode</a>')
var circle_95 = L.circle([22.297314, 73.194257], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 115, "stroke": true, "weight": 3}).addTo(map);
circle_95.bindTooltip("Vadodara<br>rank: 95<br>hazard index: 0.000116")
circle_95.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Vadodara">Vadodara</a>')
var circle_96 = L.circle([8.701220, 77.579269], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 109, "stroke": true, "weight": 3}).addTo(map);
circle_96.bindTooltip("Tirunelveli<br>rank: 96<br>hazard index: 0.000109")
circle_96.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Tirunelveli">Tirunelveli</a>')
var circle_97 = L.circle([17.500000, 80.333333], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 104, "stroke": true, "weight": 3}).addTo(map);
circle_97.bindTooltip("Khammam<br>rank: 97<br>hazard index: 0.000104")
circle_97.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Khammam">Khammam</a>')
var circle_98 = L.circle([17.005045, 81.780473], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 103, "stroke": true, "weight": 3}).addTo(map);
circle_98.bindTooltip("Rajahmundry<br>rank: 98<br>hazard index: 0.000103")
circle_98.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Rajahmundry">Rajahmundry</a>')
var circle_99 = L.circle([16.870988, 79.561398], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 95, "stroke": true, "weight": 3}).addTo(map);
circle_99.bindTooltip("Miryalaguda<br>rank: 99<br>hazard index: 0.000096")
circle_99.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Miryalaguda">Miryalaguda</a>')
var circle_100 = L.circle([25.531031, 78.652689], {"pane": "markerPane", "color": "red", "fill": true, "fillOpacity": 0.2, "fillRule": "evenodd", "lineCap": "round", "lineJoin": "round", "opacity": 1.0, "radius": 95, "stroke": true, "weight": 3}).addTo(map);
circle_100.bindTooltip("Jhansi<br>rank: 100<br>hazard index: 0.000095")
circle_100.bindPopup('<a href="https://buda-magenta.github.io/hazard_map/Jhansi">Jhansi</a>')
</script>
</div>
<div class="flex-item-right">
<table>
<tr>
<th>Rank</th>
<th>City</th>
</tr>
<tr>
<td>1</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Bijapur">Bijapur</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Pune">Pune</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Bangalore">Bangalore</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Sangli">Sangli</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Kolhapur">Kolhapur</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Hyderabad">Hyderabad</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Latur">Latur</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Pimpri_Chinchwad">Pimpri Chinchwad</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Mumbai">Mumbai</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://buda-magenta.github.io/hazard_map/Bagalkot">Bagalkot</a></td>
</tr>
</table>
</div>
</div>
<p align="left">This hazard map shows top-100 cities that are at most risk if an infectious disease breaks out at Solapur. The size of red circle indicates the relative magnitude of risk. Bigger the circle, more the risk. The table on the side highlights the list of top-10 cities that are at most risk. Smaller the rank, more the risk. Thus, rank 3 is more at risk compared to rank 4, and so on.You can also click on any city to make that city an outbreak location. You will get a new hazard map with the chosen city/town as outbreak location.
</p> | 94.148536 | 544 | 0.678621 | hun_Latn | 0.049877 |
7af92674f70e318fcda4f845b3a745f6d05796e7 | 8,118 | md | Markdown | exchange/exchange-ps/exchange/New-EOPDistributionGroup.md | vstoms/office-docs-powershell | 2f39053f98f74b6d0136738e342b6829ef0c5ef8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-29T11:47:10.000Z | 2020-05-29T11:47:10.000Z | exchange/exchange-ps/exchange/New-EOPDistributionGroup.md | rschoenig/office-docs-powershell | 2561ae0231beb651a3eb5d94a429e1bbb45b817c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | exchange/exchange-ps/exchange/New-EOPDistributionGroup.md | rschoenig/office-docs-powershell | 2561ae0231beb651a3eb5d94a429e1bbb45b817c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
external help file: Microsoft.Exchange.RolesAndAccess-Help.xml
online version: https://docs.microsoft.com/powershell/module/exchange/new-eopdistributiongroup
applicable: Exchange Online Protection
title: New-EOPDistributionGroup
schema: 2.0.0
author: chrisda
ms.author: chrisda
ms.reviewer:
monikerRange: "eop-ps"
---
# New-EOPDistributionGroup
## SYNOPSIS
This cmdlet is available only in Exchange Online Protection.
Use the New-EOPDistributionGroup cmdlet to create distribution groups or mail-enabled security groups in standalone Exchange Online Protection (EOP) organizations without Exchange Online mailboxes. This cmdlet isn't available in EOP that's included with Exchange Enterprise CAL with Services licenses in on-premises Exchange; use the New-DistributionGroup cmdlet instead.
Typically, standalone EOP organizations that also have on-premises Active Directory use directory synchronization to create users and groups in EOP. However, if you can't use directory synchronization, then you can use cmdlets to create and manage users and groups in EOP.
This cmdlet uses a batch processing method that results in a propagation delay of a few minutes before the results of the cmdlet are visible.
For information about the parameter sets in the Syntax section below, see [Exchange cmdlet syntax](https://docs.microsoft.com/powershell/exchange/exchange-server/exchange-cmdlet-syntax).
## SYNTAX
```
New-EOPDistributionGroup -Name <String> -ManagedBy <String[]>
[-Alias <String>]
[-DisplayName <String>]
[-Members <String[]>]
[-Notes <String>]
[-PrimarySmtpAddress <SmtpAddress>]
[-Type <GroupType>]
[<CommonParameters>]
```
## DESCRIPTION
You can use the New-EOPDistributionGroup cmdlet to create the following types of groups:
- Mail-enabled universal security group (USG)
- Universal distribution group
Distribution groups are used to consolidate groups of recipients into a single point of contact for email messages. Security groups are used to grant permissions to multiple users.
You need to be assigned permissions before you can run this cmdlet. Although this topic lists all parameters for the cmdlet, you may not have access to some parameters if they're not included in the permissions assigned to you. To find the permissions required to run any cmdlet or parameter in your organization, see [Find the permissions required to run any Exchange cmdlet](https://docs.microsoft.com/powershell/exchange/exchange-server/find-exchange-cmdlet-permissions).
## EXAMPLES
### Example 1
```powershell
New-EOPDistributionGroup -Name Managers -Type Security -ManagedBy "Kitty Petersen"
```
This example creates a mail-enabled universal security group named Managers that's managed by Kitty Petersen.
### Example 2
```powershell
New-EOPDistributionGroup -Name "Security Team" -ManagedBy "Tyson Fawcett" -Alias SecurityTeamThree -DisplayName "Security Team" -Notes "Security leads from each division" -PrimarySmtpAddress [email protected] -Type Distribution -Members @("Tyson Fawcett","Kitty Petersen")
```
This example creates a distribution group named "Security Team" and adds two users to the group.
## PARAMETERS
### -Name
The Name parameter specifies the name of the distribution group object. The value specified in the Name parameter is also used for the DisplayName parameter if the DisplayName parameter isn't specified.
The Name parameter value can't exceed 64 characters.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -ManagedBy
The ManagedBy parameter specifies a user who owns the group. You need to use this parameter to specify at least one group owner. You can use any value that uniquely identifies the user. For example:
- Name
- Alias
- Distinguished name (DN)
- Canonical DN
- Email address
- GUID
You can specify multiple owners by using the following syntax: @("\<user1\>","\<user2\>"...).
The users you specify with this parameter aren't automatically added to the group. To add members to the group, use the Update-EOPDistributionGroupMember cmdlet.
```yaml
Type: String[]
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Alias
The Alias parameter specifies the email alias of the distribution group. The Alias parameter value is used to generate the primary SMTP email address if you don't use the PrimarySmtpAddress parameter.
The value of Alias can contain letters, numbers and the characters !, #, $, %, &, ', \*, +, -, /, =, ?, ^, \_, \`, {, |, } and ~. Periods (.) are allowed, but each period must be surrounded by other valid characters (for example, help.desk). Unicode characters from U+00A1 to U+00FF are also allowed. The maximum length of the Alias value is 64 characters.
If you don't use the Alias parameter when you create a group, the value of the Name parameter is used for the alias. This value is also used in the primary SMTP email address. Spaces are removed and unsupported characters are converted to question marks (?).
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -DisplayName
The DisplayName parameter specifies the name of the distribution group in the Exchange admin center (EAC). If the DisplayName parameter isn't specified, the value of the Name parameter is used for the DisplayName parameter.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Members
The Members parameter specifies the initial list of recipients (mail-enabled objects) in the distribution group. In Exchange Online Protection, the valid recipient types are:
- Mail users
- Distribution groups
- Mail-enabled security groups
You can use any value that uniquely identifies the recipient. For example:
- Name
- Alias
- Distinguished name (DN)
- Canonical DN
- Email address
- GUID
You can specify multiple recipients by using the following syntax: @("\<recipient1\>","\<recipient2\>"...).
```yaml
Type: String[]
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Notes
The Notes parameters specifies additional information about the object. If the value contains spaces, enclose the value in quotation marks (").
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -PrimarySmtpAddress
The PrimarySmtpAddress parameter specifies the primary return SMTP email address for the distribution group.
```yaml
Type: SmtpAddress
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### -Type
The Type parameter specifies the group type. Valid values are:
- Distribution (This is the default value).
- Security
```yaml
Type: GroupType
Parameter Sets: (All)
Aliases:
Applicable: Exchange Online Protection
Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```
### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](https://go.microsoft.com/fwlink/p/?LinkID=113216).
## INPUTS
###
## OUTPUTS
###
## NOTES
## RELATED LINKS
| 31.465116 | 474 | 0.779995 | eng_Latn | 0.971825 |
7af94430a6b87a0ab9c33b8cb8297857a9f1612b | 24 | md | Markdown | README.md | slmovie/slmovie-server-zdzy | 06d30aa2c4ea63bdaf1216b2e138755aec0b465c | [
"MIT"
] | null | null | null | README.md | slmovie/slmovie-server-zdzy | 06d30aa2c4ea63bdaf1216b2e138755aec0b465c | [
"MIT"
] | null | null | null | README.md | slmovie/slmovie-server-zdzy | 06d30aa2c4ea63bdaf1216b2e138755aec0b465c | [
"MIT"
] | null | null | null | # slmovie-server-zdzy
| 8 | 21 | 0.708333 | slk_Latn | 0.976093 |
7af970d4b2a2808245257825214fa366c81fd30c | 1,889 | md | Markdown | _includes/devdoc/bitcoin-core/rpcs/rpcs/getreceivedbyaccount.md | Hereunder/Move | 6691d9dc30742ab77474f7b10c53765d5bb7c856 | [
"MIT-0",
"MIT"
] | 1 | 2017-11-25T15:07:59.000Z | 2017-11-25T15:07:59.000Z | _includes/devdoc/bitcoin-core/rpcs/rpcs/getreceivedbyaccount.md | Hereunder/Move | 6691d9dc30742ab77474f7b10c53765d5bb7c856 | [
"MIT-0",
"MIT"
] | null | null | null | _includes/devdoc/bitcoin-core/rpcs/rpcs/getreceivedbyaccount.md | Hereunder/Move | 6691d9dc30742ab77474f7b10c53765d5bb7c856 | [
"MIT-0",
"MIT"
] | null | null | null | {% comment %}
This file is licensed under the MIT License (MIT) available on
http://opensource.org/licenses/MIT.
{% endcomment %}
{% assign filename="_includes/devdoc/bitcoin-core/rpcs/rpcs/getreceivedbyaccount.md" %}
##### GetReceivedByAccount
{% include helpers/subhead-links.md %}
{% assign summary_getReceivedByAccount="returns the total amount received by addresses in a particular account from transactions with the specified number of confirmations. It does not count coinbase transactions." %}
{% autocrossref %}
*Requires wallet support.*
The `getreceivedbyaccount` RPC {{summary_getReceivedByAccount}}
{{WARNING}} `getreceivedbyaccount` will be removed in a later version of Bitcoin
Core. Use the RPCs listed in the See Also subsection below instead.
*Parameter #1---the account name*
{% itemplate ntpd1 %}
- n: "Account"
t: "string"
p: "Required<br>(exactly 1)"
d: "The name of the account containing the addresses to get. For the default account, use an empty string (\"\")"
{% enditemplate %}
*Parameter #2---the minimum number of confirmations*
{{INCLUDE_CONFIRMATIONS_PARAMETER}}
*Result---the number of bitcoins received*
{% itemplate ntpd1 %}
- n: "`result`"
t: "number (bitcoins)"
p: "Required<br>(exactly 1)"
d: "The number of bitcoins received by the account. May be `0`"
{% enditemplate %}
*Example from Bitcoin Core 0.10.0*
Get the bitcoins received by the "doc test" account with six or more
confirmations:
{% highlight bash %}
bitcoin-cli -testnet getreceivedbyaccount "doc test" 6
{% endhighlight %}
Result:
{% highlight json %}
0.30000000
{% endhighlight %}
*See also*
* [GetReceivedByAddress][rpc getreceivedbyaddress]: {{summary_getReceivedByAddress}}
* [GetAddressesByAccount][rpc getaddressesbyaccount]: {{summary_getAddressesByAccount}}
* [ListAccounts][rpc listaccounts]: {{summary_listAccounts}}
{% endautocrossref %}
| 28.19403 | 218 | 0.741662 | eng_Latn | 0.963442 |
7afa0bd2c5dc108295c134312587f67548520e1d | 213 | md | Markdown | Docs/ArchicadDG/m_bar_control/Slider_SliderType.md | kant/PauperTools | bfe0f3fac5c852067e9ecd861e98fbb3f43c62e4 | [
"Apache-2.0"
] | 2 | 2021-11-14T16:30:21.000Z | 2021-11-14T16:31:00.000Z | Docs/ArchicadDG/m_bar_control/Slider_SliderType.md | kant/PauperTools | bfe0f3fac5c852067e9ecd861e98fbb3f43c62e4 | [
"Apache-2.0"
] | 1 | 2021-07-27T23:40:19.000Z | 2021-07-27T23:40:19.000Z | Docs/ArchicadDG/m_bar_control/Slider_SliderType.md | kant/PauperTools | bfe0f3fac5c852067e9ecd861e98fbb3f43c62e4 | [
"Apache-2.0"
] | 2 | 2021-07-09T21:29:38.000Z | 2021-11-14T16:31:03.000Z | ## SliderType Enum
### Import
```
from ArchicadDG import Slider
```
## Example
```
from ArchicadDG import *
ft1=Slider.BottomRight
ft2=Slider.SliderType.TopLeft
```
### Values
* **BottomRight**:
* **TopLeft**: | 12.529412 | 29 | 0.671362 | yue_Hant | 0.40643 |
7afab4d9ca8937faba1c0d95a2e8be5e083963e3 | 18 | md | Markdown | README.md | nothink/yuu_na | e078691df63c3479850569f9ec8cc1d942989db5 | [
"MIT"
] | null | null | null | README.md | nothink/yuu_na | e078691df63c3479850569f9ec8cc1d942989db5 | [
"MIT"
] | null | null | null | README.md | nothink/yuu_na | e078691df63c3479850569f9ec8cc1d942989db5 | [
"MIT"
] | null | null | null | # yuu_na
NAE YUKI
| 6 | 8 | 0.722222 | mos_Latn | 0.583476 |
7afae906bedb05f1608090871e9089a46dd9ba11 | 341 | md | Markdown | riot4/README.md | antarikshnarain/RIoT | e23636a743f222ca8f362f71e7da3967c5c5522b | [
"BSD-3-Clause"
] | null | null | null | riot4/README.md | antarikshnarain/RIoT | e23636a743f222ca8f362f71e7da3967c5c5522b | [
"BSD-3-Clause"
] | null | null | null | riot4/README.md | antarikshnarain/RIoT | e23636a743f222ca8f362f71e7da3967c5c5522b | [
"BSD-3-Clause"
] | null | null | null | # Regulated-IoT
1) Creating hardware profiles for every user.
2) Changing user settings on the Web App.
3) Immediate relay of change to the environment and hence visualize change on the devices connected to it.
| 68.2 | 107 | 0.492669 | eng_Latn | 0.997936 |
7afb366b9e04de32057bf4a636487473bb91ac80 | 3,060 | md | Markdown | add/metadata/Microsoft.VisualC.StlClr.Generic/ContainerBidirectionalIterator`1.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/Microsoft.VisualC.StlClr.Generic/ContainerBidirectionalIterator`1.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | add/metadata/Microsoft.VisualC.StlClr.Generic/ContainerBidirectionalIterator`1.meta.md | v-maudel/docs-1 | f849afb0bd9a505311e7aec32c544c3169edf1c5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-11-16T19:24:50.000Z | 2020-11-16T19:24:50.000Z | ---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Equality
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Decrement
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Decrement(Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator{`0}@)
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.#ctor
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Increment(Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator{`0}@,System.Int32)
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Equality(Microsoft.VisualC.StlClr.Generic.IInputIterator{`0})
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Inequality(Microsoft.VisualC.StlClr.Generic.IInputIterator{`0})
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.next
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.equal_to(Microsoft.VisualC.StlClr.Generic.IInputIterator{`0})
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Increment
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.valid
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.#ctor(Microsoft.VisualC.StlClr.Generic.INode{`0})
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Increment(Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator{`0}@)
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.container
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.prev
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.Clone
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Inequality
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.op_Decrement(Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator{`0}@,System.Int32)
ms.author: "mblome"
manager: "ghogen"
---
---
uid: Microsoft.VisualC.StlClr.Generic.ContainerBidirectionalIterator`1.equal_to
ms.author: "mblome"
manager: "ghogen"
---
| 25.5 | 166 | 0.784314 | deu_Latn | 0.103504 |
7afbc0e40f417cd6a1782283b5d0c9f12b970335 | 36 | md | Markdown | README.md | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | 6 | 2017-06-20T09:44:03.000Z | 2018-07-28T06:44:10.000Z | README.md | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | null | null | null | README.md | kenshinx/rps | 3d141056f3512598525430ecc9ddb1ee79b887cf | [
"MIT"
] | 3 | 2017-09-23T16:48:18.000Z | 2018-05-26T19:12:12.000Z | # rps

| 9 | 27 | 0.611111 | fra_Latn | 0.132744 |
7afbce72d15efa8eb70c360dbb809004a856f79c | 7,136 | md | Markdown | docs/relational-databases/backup-restore/disable-sql-server-managed-backup-to-microsoft-azure.md | koatsush/sql-docs.ja-jp | 3dca4b2436806748dfa040695bc37ce7ecae463f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/backup-restore/disable-sql-server-managed-backup-to-microsoft-azure.md | koatsush/sql-docs.ja-jp | 3dca4b2436806748dfa040695bc37ce7ecae463f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/relational-databases/backup-restore/disable-sql-server-managed-backup-to-microsoft-azure.md | koatsush/sql-docs.ja-jp | 3dca4b2436806748dfa040695bc37ce7ecae463f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Microsoft Azure への SQL Server マネージド バックアップを無効にする | Microsoft Docs
ms.custom: ''
ms.date: 03/04/2017
ms.prod: sql
ms.prod_service: backup-restore
ms.reviewer: ''
ms.technology: backup-restore
ms.topic: conceptual
ms.assetid: 3e02187f-363f-4e69-a82f-583953592544
author: MikeRayMSFT
ms.author: mikeray
ms.openlocfilehash: c1a7e35c2c2a9428eb700b8429270874176d570a
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/15/2019
ms.locfileid: "68089848"
---
# <a name="disable-sql-server-managed-backup-to-microsoft-azure"></a>Microsoft Azure への SQL Server マネージド バックアップを無効にする
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
このトピックでは、データベース レベルとインスタンス レベルの両方で、 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効または一時停止する方法について説明します。
## <a name="DatabaseDisable"></a> データベースに対して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効にする
システム ストアド プロシージャの [managed_backup.sp_backup_config_basic (Transact-SQL)](../../relational-databases/system-stored-procedures/managed-backup-sp-backup-config-basic-transact-sql.md) を使用し、[!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] 設定を無効にできます。 特定のデータベースの *@enable_backup* @enable_backup [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を使用します。構成設定を有効にする場合は 1、無効にする場合は 0 を指定します。
#### <a name="to-disable-includesssmartbackupincludesss-smartbackup-mdmd-for-a-specific-database"></a>特定のデータベースに対して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効にするには
1. [!INCLUDE[ssDE](../../includes/ssde-md.md)]に接続します。
2. [標準] ツール バーの **[新しいクエリ]** をクリックします。
3. 次の例をコピーしてクエリ ウィンドウに貼り付け、 **[実行]** をクリックします。
```
EXEC msdb.managed_backup.sp_backup_config_basic
@database_name = 'TestDB'
,@enable_backup = 0;
GO
```
## <a name="DatabaseAllDisable"></a> インスタンス上のすべてのデータベースについて [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効にする
インスタンスで現在 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] が有効になっているすべてのデータベースから [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] の構成設定を無効にするための手順を次に示します。 ストレージ URL、保有期間、SQL 資格情報などの構成設定はメタデータに残るため、後で [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] をデータベースに対して有効にした場合に使用できます。 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] サービスを一時的に停止するだけの場合は、このトピックの以降のセクションで説明するマスターの切り替えを使用できます。
#### <a name="to-disable-includesssmartbackupincludesss-smartbackup-mdmd-for-all-the-databases"></a>すべてのデータベースについて [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効にするには:
1. [!INCLUDE[ssDE](../../includes/ssde-md.md)]に接続します。
2. [標準] ツール バーの **[新しいクエリ]** をクリックします。
3. 次の例をコピーしてクエリ ウィンドウに貼り付け、 **[実行]** をクリックします。 次の例では、 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] がインスタンス レベルで構成され、インスタンスのすべてのデータベースで [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] が有効になっているかどうかを識別し、システム ストアド プロシージャ **sp_backup_config_basic** を実行して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)]を無効にします。
```
-- Create a working table to store the database names
Declare @DBNames TABLE
(
RowID int IDENTITY PRIMARY KEY
,DBName varchar(500)
)
-- Define the variables
DECLARE @rowid int
DECLARE @dbname varchar(500)
DECLARE @SQL varchar(2000)
-- Get the database names from the system function
INSERT INTO @DBNames (DBName)
SELECT db_name
FROM
msdb.managed_backup.fn_backup_db_config (NULL)
WHERE is_managed_backup_enabled = 1
AND is_dropped = 0
--Select DBName from @DBNames
select @rowid = min(RowID)
FROM @DBNames
WHILE @rowID IS NOT NULL
Begin
Set @dbname = (Select DBName From @DBNames Where RowID = @rowid)
Begin
Set @SQL = 'EXEC msdb.managed_backup.sp_backup_config_basic
@database_name= '''+'' + @dbname+ ''+''',
@enable_backup=0'
EXECUTE (@SQL)
END
Select @rowid = min(RowID)
From @DBNames Where RowID > @rowid
END
```
インスタンスのすべてのデータベースの構成設定を確認するには、次のクエリを使用します。
```
Use msdb;
GO
SELECT * FROM managed_backup.fn_backup_db_config (NULL);
GO
```
## <a name="InstanceDisable"></a> インスタンスについて [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] の既定設定を無効にする
インスタンス レベルの既定の設定は、そのインスタンスに作成されたすべての新しいデータベースに適用されます。 既定の設定が必要なくなった場合は、 **managed_backup.sp_backup_config_basic** システム ストアド プロシージャを使用し、 *@database_name* パラメーターを NULL に設定して、この構成を無効にすることができます。 無効にしても、ストレージ URL、保有期間の設定、SQL 資格情報名など、その他の構成設定は削除されません。 これらの設定は、後でインスタンスに対して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を有効にする場合に使用されます。
#### <a name="to-disable-includesssmartbackupincludesss-smartbackup-mdmd-default-configuration-settings"></a>[!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] の既定の構成設定を無効にするには
1. [!INCLUDE[ssDE](../../includes/ssde-md.md)]に接続します。
2. [標準] ツール バーの **[新しいクエリ]** をクリックします。
3. 次の例をコピーしてクエリ ウィンドウに貼り付け、 **[実行]** をクリックします。
```
EXEC msdb.managed_backup.sp_backup_config_basic
@enable_backup = 0;
GO
```
## <a name="InstancePause"></a> インスタンス レベルで [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を一時停止する
一時的に少しの間 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] サービスを停止する必要が生じる場合があります。 マスターを切り替えるための **managed_backup.sp_backup_master_switch** システム ストアド プロシージャを使用すると、インスタンス レベルで [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] サービスを無効にできます。 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)]を再開するには、同じストアド プロシージャを使用します。 [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を無効にするか有効にするかを定義するには、@state パラメーターが使用されます。
#### <a name="to-pause-includesssmartbackupincludesss-smartbackup-mdmd-services-using-transact-sql"></a>Transact-SQL を使用して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] サービスを一時停止するには
1. [!INCLUDE[ssDE](../../includes/ssde-md.md)]に接続します。
2. [標準] ツール バーの **[新しいクエリ]** をクリックします。
3. 次の例をコピーしてクエリ ウィンドウに貼り付け、 **[実行]** をクリックします。
```
Use msdb;
GO
EXEC managed_backup.sp_backup_master_switch @new_state=0;
Go
```
#### <a name="to-resume-includesssmartbackupincludesss-smartbackup-mdmd-using-transact-sql"></a>Transact-SQL を使用して [!INCLUDE[ss_smartbackup](../../includes/ss-smartbackup-md.md)] を再開するには
1. [!INCLUDE[ssDE](../../includes/ssde-md.md)]に接続します。
2. [標準] ツール バーの **[新しいクエリ]** をクリックします。
3. 次の例をコピーしてクエリ ウィンドウに貼り付け、 **[実行]** をクリックします。
```
Use msdb;
Go
EXEC managed_backup.sp_backup_master_switch @new_state=1;
GO
```
## <a name="see-also"></a>参照
[Microsoft Azure への SQL Server マネージド バックアップを有効にする](../../relational-databases/backup-restore/enable-sql-server-managed-backup-to-microsoft-azure.md)
| 42.47619 | 470 | 0.691844 | yue_Hant | 0.504482 |
7afd47a62bd8e34bbd8c76de36d0ade72665723a | 85 | md | Markdown | etiqueta/skulls-of-sedlec.md | mazmorreoensolitario/mazmorreoensolitario.github.io | 398aa9f31ca181bca478cf141332ab45652f9b92 | [
"MIT"
] | null | null | null | etiqueta/skulls-of-sedlec.md | mazmorreoensolitario/mazmorreoensolitario.github.io | 398aa9f31ca181bca478cf141332ab45652f9b92 | [
"MIT"
] | null | null | null | etiqueta/skulls-of-sedlec.md | mazmorreoensolitario/mazmorreoensolitario.github.io | 398aa9f31ca181bca478cf141332ab45652f9b92 | [
"MIT"
] | null | null | null | ---
layout: tag_page
title: "Etiqueta: Skulls of Sedlec"
tag: "Skulls of Sedlec"
---
| 14.166667 | 35 | 0.682353 | eng_Latn | 0.306983 |
7afdb9e4666d8413c21084a21a9db36718eb4904 | 2,478 | md | Markdown | docs/ssms/agent/list-job-category-information.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-04-19T10:57:52.000Z | 2019-04-19T10:57:52.000Z | docs/ssms/agent/list-job-category-information.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/agent/list-job-category-information.md | aminechafai/sql-docs | 2e6c4104dca8680064eb64a7a79a3e15e1b4365f | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-07-25T00:35:19.000Z | 2020-07-25T00:35:19.000Z | ---
title: "List Job Category Information"
ms.custom: seo-lt-2019
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: sql-tools
ms.technology: ssms
ms.topic: conceptual
ms.assetid: 0fc668d4-6244-4fef-b90e-62d2c776cd7c
author: markingmyname
ms.author: maghan
ms.reviewer: ""
monikerRange: "= azuresqldb-mi-current || >= sql-server-2016 || = sqlallproducts-allversions"
---
# List Job Category Information
[!INCLUDE [SQL Server SQL MI](../../includes/applies-to-version/sql-asdbmi.md)]
> [!IMPORTANT]
> On [Azure SQL Database Managed Instance](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance), most, but not all SQL Server Agent features are currently supported. See [Azure SQL Database Managed Instance T-SQL differences from SQL Server](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-transact-sql-information#sql-server-agent) for details.
This topic describes how to list job category information in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] by using [!INCLUDE[tsql](../../includes/tsql-md.md)] or SQL Server Management Objects.
- **Before you begin:**
[Security](#Security)
- **To list job category information, using:**
[Transact-SQL](#TSQL)
[SQL Server Management Objects](#SMO)
## <a name="BeforeYouBegin"></a>Before You Begin
### <a name="Security"></a>Security
For detailed information, see [Implement SQL Server Agent Security](../../ssms/agent/implement-sql-server-agent-security.md).
## <a name="TSQL"></a>Using Transact-SQL
#### To list job category information
1. In **Object Explorer**, connect to an instance of [!INCLUDE[ssDE](../../includes/ssde_md.md)].
2. On the Standard bar, click **New Query**.
3. Copy and paste the following example into the query window and click **Execute**.
```
-- returns information about jobs that are administered locally
USE msdb ;
GO
EXEC dbo.sp_help_category
@type = N'LOCAL' ;
GO
```
For more information, see [sp_help_category (Transact-SQL)](https://msdn.microsoft.com/8cad1dcc-b43e-43bd-bea0-cb0055c84169).
## <a name="SMO"></a>Using SQL Server Management Objects
**To list job category information**
Use the **JobCategory** class by using a programming language that you choose, such as Visual Basic, Visual C#, or PowerShell.
| 38.123077 | 398 | 0.679177 | eng_Latn | 0.623787 |
7afe09a27c8bb04deae33c3df244a42e5c3df5e2 | 3,189 | md | Markdown | typescript-type-system/literals.md | IGoRFonin/typescript-book-ru | 3712035dcd4ecfae781eff0b8525ca4d58f7ed34 | [
"MIT"
] | null | null | null | typescript-type-system/literals.md | IGoRFonin/typescript-book-ru | 3712035dcd4ecfae781eff0b8525ca4d58f7ed34 | [
"MIT"
] | null | null | null | typescript-type-system/literals.md | IGoRFonin/typescript-book-ru | 3712035dcd4ecfae781eff0b8525ca4d58f7ed34 | [
"MIT"
] | null | null | null | ---
description: 'Литералы являются точной переменной, предоставляемой самим JavaScript.'
---
# Литеральный тип
## Строковый литерал
Вы можете использовать строковый литерал в качестве типа:
```typescript
let foo: 'Hello';
```
Здесь мы создали переменную с именем `foo`, которая принимает только одну переменную, буквальное значение которой равно `Hello`:
```typescript
let foo: 'Hello';
foo = 'bar'; // Error: 'bar' не может быть назначен типу 'Hello'
```
Они сами по себе не очень практичны, но могут быть объединены для создания мощной \(практической\) абстракции в объединенном типе:
```typescript
type CardinalDirection = 'North' | 'East' | 'South' | 'West';
function move(distance: number, direction: CardinalDirection) {
// ...
}
move(1, 'North'); // ok
move(1, 'Nurth'); // Error
```
## Другие литеральные типы
TypeScript также предоставляет `boolean` и `number` литеральные типы:
```typescript
type OneToFive = 1 | 2 | 3 | 4 | 5;
type Bools = true | false;
```
## Вывод
Довольно часто вы получаете сообщение об ошибке типа `Type string is not assignable to type 'foo'`. Следующий пример демонстрирует это:
```typescript
function iTakeFoo(foo: 'foo') {}
const test = {
someProp: 'foo'
};
iTakeFoo(test.someProp); // Error: Argument of type string is not assignable to parameter of type 'foo'
```
Это потому, что `test` выводится как `{ someProp: string }`, мы можем использовать утверждение простого типа, чтобы сообщить TypeScript литералы, которые вы хотите вывести:
```typescript
function iTakeFoo(foo: 'foo') {}
const test = {
someProp: 'foo' as 'foo'
};
iTakeFoo(test.someProp); // ok
```
Или используйте аннотации типов, чтобы помочь TypeScript вывести правильный тип:
```typescript
function iTakeFoo(foo: 'foo') {}
type Test = {
someProp: 'foo';
};
const test: Test = {
// Предположим, что 'someProp' всегда 'foo'
someProp: 'foo'
};
iTakeFoo(test.someProp); // ok
```
## Случаи использования
Типы перечислений TypeScript основаны на числах. Вы можете использовать типы объединения со строковыми литералами для имитации типа перечисления на основе строк, так же как и `CardinalDirection`, предложенный выше. Вы даже можете использовать следующие функции для генерации структуры `key: value`:
```typescript
// Функция для создания списка строк, сопоставленных с `K: V`
function strEnum<T extends string>(o: Array<T>): { [K in T]: K } {
return o.reduce((res, key) => {
res[key] = key;
return res;
}, Object.create(null));
}
```
Затем вы можете использовать `keyof`, `typeof` для генерации типа объединения строки. Вот полный пример:
```typescript
// Функция для создания списка строк, сопоставленных с `K: V`
function strEnum<T extends string>(o: Array<T>): { [K in T]: K } {
return o.reduce((res, key) => {
res[key] = key;
return res;
}, Object.create(null));
}
// Создание K: V
const Direction = strEnum(['North', 'South', 'East', 'West']);
// Создание типа
type Direction = keyof typeof Direction;
// Прост в использовании
let sample: Direction;
sample = Direction.North; // Okay
sample = 'North'; // Okay
sample = 'AnythingElse'; // ERROR!
```
## Различать union
Мы объясним это позже в этой книге.
| 24.72093 | 298 | 0.70461 | rus_Cyrl | 0.854164 |
7afe913f3cedaaa12cdd9f32f245c8c64915d0f4 | 1,020 | md | Markdown | v2.0/API-v20-documentation.md | internetstandards/Internet.nl-API-docs-sw | 1b9d1bb48246e0bae6661bc6f653e74493ff273d | [
"CC-BY-4.0"
] | null | null | null | v2.0/API-v20-documentation.md | internetstandards/Internet.nl-API-docs-sw | 1b9d1bb48246e0bae6661bc6f653e74493ff273d | [
"CC-BY-4.0"
] | null | null | null | v2.0/API-v20-documentation.md | internetstandards/Internet.nl-API-docs-sw | 1b9d1bb48246e0bae6661bc6f653e74493ff273d | [
"CC-BY-4.0"
] | null | null | null | # API documentation
The API documentation of the Internet.nl API v2.0 can be found on https://batch.internet.nl/api/batch/openapi.yaml.
A viewer, like ReDoc, can be used for generating a human readable version: http://redocly.github.io/redoc/?url=https://batch.internet.nl/api/batch/openapi.yaml.
Examples of API consumer scripts:
- Internet.nl Dashboard: https://github.com/internetstandards/Internet.nl-dashboard
- Internet.nl batch scripts: https://github.com/poorting/internet.nl_batch_scripts
# Sequence of handling batches of domains
- Batch requests are processed on a FIFO basis for a particular user. This means a users can submit multiple batch requests, but they are processed sequentally. The first / current batch requests needs to finish before the next one starts.
- Batch requests of multiple users are run in parallel. While influenced by the number of users running simultaneous batch requests, you should assume that parallel tasks take longer to finish since server resources are being shared.
| 78.461538 | 240 | 0.80098 | eng_Latn | 0.978285 |
7affb3a76f55b6921a922b55b47abcb738aff4af | 3,230 | md | Markdown | README.md | geng452654716/vue-cli-multi-page | cb281758c64c94bdedffe826e243ebed1e4490bd | [
"Apache-2.0"
] | 1 | 2018-04-03T03:31:20.000Z | 2018-04-03T03:31:20.000Z | README.md | geng452654716/vue-cli-multi-page | cb281758c64c94bdedffe826e243ebed1e4490bd | [
"Apache-2.0"
] | null | null | null | README.md | geng452654716/vue-cli-multi-page | cb281758c64c94bdedffe826e243ebed1e4490bd | [
"Apache-2.0"
] | null | null | null | # vue-multi-page
> 了解了Vue一般都会去用Vue-cli入门,这是一个构建SPA的脚手架,查看其build的项目,可以看到它是将所有的模块都输出到一个build.js中,有时候会看到这个js文件特别大,有好几兆,然而当一个项目足够复杂时,SPA恐怕不再适合使用了,用户不可能访问你的网页的时候一下子下载一个几兆的文件,特别对于手机用户,可能用户只看了网站中的一篇文章,这也会导致网页加载慢,这是不可取的。所以应该将网站划分成若干模块。于是就有了本demo(一个用于构建多页面的脚手架)
## 如何开始?
> 假设你已经熟悉了vue-cli了😄
1. 创建项目
```bash
vue init webpack vue-multi-page
# 为了简便可以不用jslint等
```
2. 开始改造
> 最主要的一步,将webpack进行改造以满足对多页面需求的支持,其实多页面,即是webpack有多个入口。在此步中,我们主要的操作的对象是 build文件夹下的js文件。
+ 首先,我们对 utils.js进行改造
添加一个方法:getEntries,方法中需要使用到node的globa模块,所以需要引入
```
// glob模块,用于读取webpack入口目录文件
// 看到issue中有人问glob模块,这个是需要npm安装的,[https://github.com/isaacs/node-glob](https://github.com/isaacs/node-glob)
var glob = require('glob');
```
```javascript
exports.getEntries = function (globPath) {
var entries = {}
/**
* 读取src目录,并进行路径裁剪
*/
glob.sync(globPath).forEach(function (entry) {
/**
* path.basename 提取出用 ‘/' 隔开的path的最后一部分,除第一个参数外其余是需要过滤的字符串
* path.extname 获取文件后缀
*/
var basename = path.basename(entry, path.extname(entry), 'router.js') // 过滤router.js
// ***************begin***************
// 当然, 你也可以加上模块名称, 即输出如下: { module/main: './src/module/index/main.js', module/test: './src/module/test/test.js' }
// 最终编译输出的文件也在module目录下, 访问路径需要时 localhost:8080/module/index.html
// slice 从已有的数组中返回选定的元素, -3 倒序选择,即选择最后三个
// var tmp = entry.split('/').splice(-3)
// var pathname = tmp.splice(0, 1) + '/' + basename; // splice(0, 1)取tmp数组中第一个元素
// console.log(pathname)
// entries[pathname] = entry
// ***************end***************
entries[basename] = entry
});
// console.log(entries);
// 获取的主入口如下: { main: './src/module/index/main.js', test: './src/module/test/test.js' }
return entries;
}
```
+ 其次,对webpack.base.conf.js进行改造
删除 ~~entry: {app: './src/main.js'},~~,取而代之如下:
```javascript
module.exports = {
···
entry: utils.getEntries('./src/module/**/*.js'),
···
}
```
+ 最后改造webpack.dev.conf.js和webpack.prod.conf.js
移除原来的HtmlWebpackPlugin
```javascript
var pages = utils.getEntries('./src/module/**/*.html')
for(var page in pages) {
// 配置生成的html文件,定义路径等
var conf = {
filename: page + '.html',
template: pages[page], //模板路径
inject: true,
// excludeChunks 允许跳过某些chunks, 而chunks告诉插件要引用entry里面的哪几个入口
// 如何更好的理解这块呢?举个例子:比如本demo中包含两个模块(index和about),最好的当然是各个模块引入自己所需的js,
// 而不是每个页面都引入所有的js,你可以把下面这个excludeChunks去掉,然后npm run build,然后看编译出来的index.html和about.html就知道了
// filter:将数据过滤,然后返回符合要求的数据,Object.keys是获取JSON对象中的每个key
excludeChunks: Object.keys(pages).filter(item => {
return (item != page)
})
}
// 需要生成几个html文件,就配置几个HtmlWebpackPlugin对象
module.exports.plugins.push(new HtmlWebpackPlugin(conf))
}
```
## 构建步骤
``` bash
# 安装依赖
npm install
# 本地测试
npm run dev
# 打包
npm run build
```
在本地调试启动后访问:[index(http://localhost:8080)](http://localhost:8080) | [about(http://localhost:8080/about.html)](http://localhost:8080/about.html) 即可
| 31.359223 | 235 | 0.620743 | yue_Hant | 0.695217 |
bb00799c8dd4b14e209c40c6607dff9efef69653 | 3,642 | md | Markdown | _pages/es_ES/hbb.md | Snare-Hawk/Wii-Guide | bd7872d4a55d2d266303010f07c4ac67986ed8fb | [
"MIT"
] | null | null | null | _pages/es_ES/hbb.md | Snare-Hawk/Wii-Guide | bd7872d4a55d2d266303010f07c4ac67986ed8fb | [
"MIT"
] | null | null | null | _pages/es_ES/hbb.md | Snare-Hawk/Wii-Guide | bd7872d4a55d2d266303010f07c4ac67986ed8fb | [
"MIT"
] | null | null | null | ---
title: "Canal Open Shop (Homebrew Browser)"
---
Si necesitas ayuda para algo relacionado con este tutorial, únete [al servidor de Discord de RiiConnect24](https://discord.gg/osc) (recomendado) o [envíanos un correo electrónico a [email protected]](mailto:[email protected]).
{: .notice--info}
Homebrew Browser, el software en el que el Canal Open Shop está basado, funciona pero puedes tener bugs. También puedes obtener aplicaciones homebrew con [osc-dl](https://github.com/dhtdht020/osc-dl/releases/latest) o [el sitio web del Canal Open Shop](https://oscwii.org/).
{: .notice--info}
El [Canal Open Shop](https://oscwii.org/) es donde puedes obtener aplicaciones homebrew. Es un proyecto para revivir una aplicación llamada Homebrew Browser.
#### Requisitos
* Una tarjeta SD o unidad USB
* [Homebrew Browser](/assets/files/homebrew_browser_v0.3.9e.zip)
#### Instrucciones
1. Extrae el Homebrew Browser a la carpeta `apps` en tu tarjeta SD o unidad USB.
2. Conecta la tarjeta SD o unidad USB a tu Wii. Ahora puedes ejecutar el Homebrew Browser desde el Canal Homebrew.
#### Aplicaciones recomendadas
Aquí puedes encontrar algunas aplicaciones recomendadas que puedes obtener en el Canal Open Shop:
- [CleanRip](https://oscwii.org/library/app/CleanRip) - Es una herramienta para volcar juegos de Wii o GameCube. Visita [nuestra página](dump-games) para más detalles.
- [GCMM](https://oscwii.org/library/app/gcmm) - Es un gestor de tarjetas de memoria GameCube para tu Wii.
- [My Menuify Mod](https://oscwii.org/library/app/mymenuifymod) - Es una herramienta para instalar temas en tu Menú de Wii. Visita [nuestra página](themes) para las instrucciones de uso.
- [Nintendont](https://oscwii.org/library/app/nintendont) - Es un cargador de juegos de GameCube para tu Wii. Para más información, por favor visita [este hilo de GBAtemp](https://gbatemp.net/threads/nintendont.349258/).
- [SaveGame Manager GX](https://oscwii.org/library/app/savegame_manager_gx) - Es un gestor de partidas guardadas para tu Wii. Esto te permite copiar partidas guardadas y Miis de/a tu Wii.
- [USB Loader Gx](https://oscwii.org/library/app/usbloader_gx) - Es un cargador USB para tu Wii. Visita [nuestra página](usbloadergx) para las instrucciones de uso.
- [WiiFlow Lite](https://oscwii.org/library/app/wiiflow) - Es otro cargador USB para tu Wii. Visita [nuestra página](wiiflow) para las instrucciones de uso.
- [WiiMC-SS](https://oscwii.org/library/app/wiimc-ss) - Esto es un reproductor multimedia para tu Wii. It supports movies, music, photos, radio stations, YouTube, and more.
- [WiiXplorer-SS](https://oscwii.org/library/app/wiixplorer-ss) - This is a file manager for your Wii. It lets you access your files on your SD Card, USB Device, and more.
- [YABDM](https://oscwii.org/library/app/Yet-Another-BlueDump-Mod) - This is a tool to dump content installed on your Wii to WAD files. See [our page](dump-wads) for instructions on how to use it.
[ Continúe con RiiConnect24 ](riiconnect24) <br> RiiConnect24 le permite utilizar los servicios descontinuados de WiiConnect24, que incluyen Noticias, Pronóstico, Todos Votos, Nintendo y el Canal Check Mii Out, junto con Wii Mail. Esto es opcional a instalar.
{: .notice--info}
[Echa un vistazo a nuestras otras guías](site-navigation)<br> Tenemos muchos otros tutoriales que podrían interesarte.
{: .notice--info}
Included in the Homebrew Browser download is a guide on how to use the Homebrew Browser.
{: .notice--info}
You can swap out ShopChannel.ogg with loop.ogg in `/apps/homebrew_browser/` to have the Homebrew Browser play the Wii Shop Channel music.
{: .notice--info}
| 75.875 | 274 | 0.766612 | spa_Latn | 0.884633 |
bb00f8b64de02208ceb3f54b62159de3ec145ee0 | 3,299 | md | Markdown | docs/ado/reference/adox-api/activeconnection-property-adox.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/adox-api/activeconnection-property-adox.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ado/reference/adox-api/activeconnection-property-adox.md | kirabr/sql-docs.ru-ru | 08e3b25ff0792ee0ec4c7641b8960145bbec4530 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Свойство ActiveConnection (ADOX) | Документы Microsoft
ms.prod: sql
ms.prod_service: connectivity
ms.technology: connectivity
ms.custom: ''
ms.date: 01/19/2017
ms.reviewer: ''
ms.suite: sql
ms.tgt_pltfrm: ''
ms.topic: conceptual
apitype: COM
f1_keywords:
- _Catalog::put_ActiveConnection
- _Catalog::PutRefActiveConnection
- _Catalog::get_ActiveConnection
- _Catalog::PutActiveConnection
- _Catalog::putref_ActiveConnection
- _Catalog::ActiveConnection
- _Catalog::GetActiveConnection
helpviewer_keywords:
- ActiveConnection property [ADOX]
ms.assetid: 25fff69b-7556-4a28-b6f5-600a4bb0f607
caps.latest.revision: 11
author: MightyPen
ms.author: genemi
manager: craigg
ms.openlocfilehash: d5afb815a3e4701dc769f600d3a8d014d5a5cd25
ms.sourcegitcommit: 62826c291db93c9017ae219f75c3cfeb8140bf06
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 06/11/2018
ms.locfileid: "35284583"
---
# <a name="activeconnection-property-adox"></a>Свойство ActiveConnection (ADOX)
Указывает ADO [подключения](../../../ado/reference/ado-api/connection-object-ado.md) объекта, к которому [каталога](../../../ado/reference/adox-api/catalog-object-adox.md) принадлежит.
## <a name="settings-and-return-values"></a>Параметры и возвращаемые значения
Наборы **подключения** объекта или **строка** содержится определение для подключения. Возвращает активное **подключения** объекта.
## <a name="remarks"></a>Примечания
Значение по умолчанию — пустая ссылка на объект.
## <a name="applies-to"></a>Объект применения
[Объект Catalog (ADOX)](../../../ado/reference/adox-api/catalog-object-adox.md)
## <a name="see-also"></a>См. также
[Пример свойства ActiveConnection каталога (Visual Basic)](../../../ado/reference/adox-api/catalog-activeconnection-property-example-vb.md)
[Команда и пример свойства CommandText (Visual Basic)](../../../ado/reference/adox-api/command-and-commandtext-properties-example-vb.md)
[Подключение метода закрытия, пример свойство типа таблицы (Visual Basic)](../../../ado/reference/adox-api/connection-close-method-table-type-property-example-vb.md)
[Коллекция параметров, пример команды свойства (Visual Basic)](../../../ado/reference/adox-api/parameters-collection-command-property-example-vb.md)
[Процедуры добавления пример метода (Visual Basic)](../../../ado/reference/adox-api/procedures-append-method-example-vb.md)
[Процедуры удаления пример метода (Visual Basic)](../../../ado/reference/adox-api/procedures-delete-method-example-vb.md)
[Процедуры обновления пример метода (Visual Basic)](../../../ado/reference/adox-api/procedures-refresh-method-example-vb.md)
[Представления и пример поля коллекций (Visual Basic)](../../../ado/reference/adox-api/views-and-fields-collections-example-vb.md)
[Представления Append пример метода (Visual Basic)](../../../ado/reference/adox-api/views-append-method-example-vb.md)
[Коллекция представлений, пример свойства CommandText (Visual Basic)](../../../ado/reference/adox-api/views-collection-commandtext-property-example-vb.md)
[Представления обновить пример метода (Visual Basic)](../../../ado/reference/adox-api/views-refresh-method-example-vb.md)
[Метод Create (ADOX)](../../../ado/reference/adox-api/create-method-adox.md)
| 54.983333 | 186 | 0.746287 | kor_Hang | 0.12114 |
bb015b98ea1c8d748dc3563b8b54563cfbadc6bb | 8,517 | md | Markdown | articles/azure-monitor/insights/sql-insights-troubleshoot.md | Myhostings/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 16 | 2017-08-28T08:29:36.000Z | 2022-01-02T16:46:30.000Z | articles/azure-monitor/insights/sql-insights-troubleshoot.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 470 | 2017-11-11T20:59:16.000Z | 2021-04-10T17:06:28.000Z | articles/azure-monitor/insights/sql-insights-troubleshoot.md | Ahmetmaman/azure-docs.tr-tr | 536eaf3b454f181f4948041d5c127e5d3c6c92cc | [
"CC-BY-4.0",
"MIT"
] | 25 | 2017-11-11T19:39:08.000Z | 2022-03-30T13:47:56.000Z | ---
title: SQL Insights sorunlarını giderme (Önizleme)
description: Azure Izleyici 'de SQL Insights sorunlarını giderme
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 03/04/2021
ms.openlocfilehash: 4d4a801d0cf0a2355334272053ff86dd846b6bbf
ms.sourcegitcommit: d40ffda6ef9463bb75835754cabe84e3da24aab5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/07/2021
ms.locfileid: "107030313"
---
# <a name="troubleshooting-sql-insights-preview"></a>SQL Insights sorunlarını giderme (Önizleme)
SQL Insights 'ta veri toplama sorunlarını gidermek için, **profili Yönet** sekmesinde izleme makinesinin durumunu kontrol edin. Bu, aşağıdaki durumlardan birine sahip olacaktır:
- Toplama
- Toplanmıyor
- Hatalarla toplama
Günlükleri ve diğer ayrıntıları görmek için detaya gitme **durumuna** tıklayarak sorunu çözmenize yardımcı olabilirsiniz.
:::image type="content" source="media/sql-insights-enable/monitoring-machine-status.png" alt-text="Makine durumunu izleme":::
## <a name="not-collecting-state"></a>Durum toplanmıyor
Son 10 dakika içinde SQL için *ınsightsölçümler* 'de veri yoksa, izleme makinesi *toplanmıyor* durumuna sahiptir.
SQL Insights, bu bilgileri almak için aşağıdaki sorguyu kullanır:
```
InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
```
Telegraf 'dan, sorunun kökünü belirlemek için size yardımcı olan Günlükler olup olmadığını denetleyin. Günlük girişleri varsa, genel sorunlar için *toplanmıyor* ' e tıklayabilir ve günlükleri ve sorun giderme bilgilerini kontrol edebilirsiniz.
Günlük yoksa, iki sanal makine uzantısı tarafından yüklenen aşağıdaki hizmetler için izleme sanal makinesindeki günlükleri denetlemeniz gerekir:
- Microsoft. Azure. Monitor. AzureMonitorLinuxAgent
- Hizmet: MDSD
- Microsoft. Azure. Monitor. Iniş yükleri. Workload. Wzaınuxextension
- Hizmet: wli
- Hizmet: MS-telegraf
- Hizmet: TD-Agent-bit-wli
- Yüklemesi başarısızlıklarını denetlemek için uzantı günlüğü:/var/log/azure/Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension/wlilogs.log
### <a name="wli-service-logs"></a>WLI hizmet günlükleri
Hizmet günlükleri: `/var/log/wli.log`
Son günlükleri görmek için: `tail -n 100 -f /var/log/wli.log`
Aşağıdaki hata günlüğünü görürseniz, **MDSD** hizmetiyle ilgili bir sorun olduğunu gösterir.
```
2021-01-27T06:09:28Z [Error] Failed to get config data. Error message: dial unix /var/run/mdsd/default_fluent.socket: connect: no such file or directory
```
### <a name="telegraf-service-logs"></a>Telegraf hizmeti günlükleri
Hizmet günlükleri: `/var/log/ms-telegraf/telegraf.log`
Son günlükleri görmek için: `tail -n 100 -f /var/log/ms-telegraf/telegraf.log` en son hata ve uyarı günlüklerini görmek için: `tail -n 1000 /var/log/ms-telegraf/telegraf.log | grep "E\!\|W!"`
Telegraf tarafından kullanılan yapılandırma wli hizmeti tarafından oluşturulur ve şu şekilde yerleştirilir: `/etc/ms-telegraf/telegraf.d/wli`
Hatalı bir yapılandırma oluşturulursa MS-telegraf hizmeti başlayamayabilir, şu komutla birlikte MS-telegraf hizmetinin çalışıp çalışmadığını kontrol edin: `service ms-telegraf status`
Telegraf hizmetinden hata iletilerini görmek için aşağıdaki komutla el ile çalıştırın:
```
/usr/bin/ms-telegraf --config /etc/ms-telegraf/telegraf.conf --config-directory /etc/ms-telegraf/telegraf.d/wli --test
```
### <a name="mdsd-service-logs"></a>MDSD hizmeti günlükleri
Azure Izleyici aracısının [geçerli sınırlamalarını](../agents/azure-monitor-agent-overview.md#current-limitations) denetleyin.
Hizmet günlükleri:
- `/var/log/mdsd.err`
- `/var/log/mdsd.warn`
- `/var/log/mdsd.info`
Son hataları görmek için: `tail -n 100 -f /var/log/mdsd.err`
Desteğe başvurmanız gerekirse, aşağıdaki bilgileri toplayın:
- Oturum açan `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/`
- Oturum aç `/var/log/waagent.log`
- Oturum açan `/var/log/mdsd*`
- İçindeki dosyalar `/etc/mdsd.d/`
- Dosyasýný `/etc/default/mdsd`
### <a name="invalid-monitoring-virtual-machine-configuration"></a>Geçersiz izleme sanal makine yapılandırması
*Toplama* durumunun bir nedeni, geçersiz bir izleme sanal makine yapılandırmasına sahip olduğunuz durumdur. Varsayılan yapılandırma aşağıda verilmiştir:
```json
{
"version": 1,
"secrets": {
"telegrafPassword": {
"keyvault": "https://mykeyvault.vault.azure.net/",
"name": "sqlPassword"
}
},
"parameters": {
"sqlAzureConnections": [
"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=telegraf;Password=$telegrafPassword;"
],
"sqlVmConnections": [
],
"sqlManagedInstanceConnections": [
]
}
}
```
Bu yapılandırma, izleme sanal makinenizde profil yapılandırmasında kullanılacak değiştirme belirteçlerini belirler. Ayrıca, Azure Key Vault parolalara başvurmanızı sağlar, böylece gizli olmayan herhangi bir yapılandırmada gizli değerleri saklamayın ve kesinlikle önerilir.
#### <a name="secrets"></a>Gizli Diziler
Gizli dizileri, değerleri bir Azure Key Vault çalışma zamanında alınan belirteçlerdir. Gizli anahtar, bir Key Vault başvurusu ve gizli dizi adı çifti tarafından tanımlanır. Bu, Azure Izleyici 'nin gizli anahtar dinamik değerini almasına ve bunun aşağı akış yapılandırma başvuruları kullanmasına izin verir.
Ayrı anahtar kasalarında depolanan gizlilikler da dahil olmak üzere, yapılandırmada gereken sayıda gizli dizi tanımlayabilirsiniz.
```json
"secrets": {
"<secret-token-name-1>": {
"keyvault": "<key-vault-uri>",
"name": "<key-vault-secret-name>"
},
"<secret-token-name-2>": {
"keyvault": "<key-vault-uri-2>",
"name": "<key-vault-secret-name-2>"
}
}
```
Key Vault erişim izinleri, izleme sanal makinesindeki bir Yönetilen Hizmet Kimliği sağlanır. Azure Izleyici Key Vault, sanal makine için en az gizli dizi Al izinleri sağlamasını bekliyor. Azure portal, PowerShell, CLı veya Kaynak Yöneticisi şablonundan etkinleştirebilirsiniz.
#### <a name="parameters"></a>Parametreler
Parametreler, JSON şablon oluşturma aracılığıyla profil yapılandırmasında başvurulabilen belirteçlerdir. Parametrelerin bir adı ve değeri vardır. Değerler, nesneler ve diziler dahil olmak üzere herhangi bir JSON türü olabilir. Bu kural içindeki adı kullanılarak profil yapılandırmasında bir parametreye başvurulur `.Parameters.<name>` .
Parametreler, aynı kuralı kullanarak Key Vault gizli dizilerme başvurabilir. Örneğin, `sqlAzureConnections` kuralını kullanarak gizli dizi başvurusu `telegrafPassword` `$telegrafPassword` .
Çalışma zamanında, tüm parametreler ve gizli dizileri çözülür ve makinede kullanılacak gerçek yapılandırmayı oluşturmak için profil yapılandırmasıyla birleştirilir.
> [!NOTE]
> `sqlAzureConnections`, `sqlVmConnections` Ve parametrelerinin `sqlManagedInstanceConnections` bazıları için sağlayacaksınız bağlantı Dizeleriniz olmasa bile, bu parametre adları yapılandırmada gereklidir.
## <a name="collecting-with-errors-state"></a>Hata durumuyla toplama
En az bir *ınsightsmetrik* günlüğü, ancak *işlem* tablosunda da hatalar varsa, Izleme makinesi *hatalarla birlikte toplanacaktır* .
SQL Insights, bu bilgileri almak için aşağıdaki sorguları kullanır:
```
InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(240m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
```
```
WorkloadDiagnosticLogs
| summarize Errors = countif(Status == 'Error')
```
> [!NOTE]
> ' Workloadagnoçıkartlogs ' veri türünde herhangi bir veri görmüyorsanız, bu verileri depolamak için izleme profilinizi güncelleştirmeniz gerekebilir. SQL Insights UX içinden ' profili Yönet ' seçeneğini belirleyin, ardından ' Profili Düzenle ' seçeneğini belirleyin ve ardından ' izleme profilini Güncelleştir ' seçeneğini belirleyin.
Yaygın durumlarda, günlüklerimizde sorun giderme bilgisi sağlıyoruz:
:::image type="content" source="media/sql-insights-enable/troubleshooting-logs-view.png" alt-text="Sorun giderme günlükleri görünümü.":::
## <a name="next-steps"></a>Sonraki adımlar
- [SQL Insights 'ı etkinleştirme](sql-insights-enable.md)hakkında ayrıntılı bilgi edinin.
| 44.591623 | 337 | 0.763062 | tur_Latn | 0.998112 |
bb017f730f3ee4d3c463a9893998bd799cf9cda1 | 16,516 | md | Markdown | CHANGELOG.md | Quicksign/fabric8-maven-plugin | f7f987d7aca1972ca4f15a99681c17e58c07c845 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Quicksign/fabric8-maven-plugin | f7f987d7aca1972ca4f15a99681c17e58c07c845 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | Quicksign/fabric8-maven-plugin | f7f987d7aca1972ca4f15a99681c17e58c07c845 | [
"Apache-2.0"
] | 1 | 2020-06-21T13:25:37.000Z | 2020-06-21T13:25:37.000Z | # Changes
This document main purpose is to list changes which might affect backwards compatibility. It will not list all releases as fabric8-maven-plugin is build in a continous delivery fashion.
We use semantic versioning in some slight variation until our feature set has stabilized and the missing pieces has been filled in:
* The `MAJOR_VERSION` is kept to `3`
* The `MINOR_VERSION` changes when there is an API or configuration change which is not fully backward compatible.
* The `PATCH_LEVEL` is used for regular CD releases which add new features and bug fixes.
After this we will switch probably to real [Semantic Versioning 2.0.0](http://semver.org/)
### 4.4-SNAPSHOT
* Fix #1572: Support maven --batch-mode option
* Feature #1748: Quarkus generator: Extract base image from configuration.
* Fix #1695: IllegalArgumentException when Spring Boot application.yaml contains integer keys
* Fix #797: spring-boot generator can not handle multi-profile configuration
* Fix #1751: Build Names are suffixed with -s2i regardless of build strategy
* Fix #1770: Support for setting BuildConfig memory/cpu request and limits
* Fix #1755: Spring boot enricher does not produce a proper heath check and liveness check path when "/" is used.
* Feature: Check maven.compiler.target property for base image detection.
* Fix: Enrichers should resolve relative paths against project directory, not working directory
### 4.3.1 (18-10-2019)
* Updated Kubernetes client to 4.6.1
* Fix #1725: Update jackson to 2.10.0
* Fix #1710: Docker build args not working
* Fix #1728: Ingress enricher is using Ingress kind
* Fix #1732: Correctly replacing template placeholders in helm chart
* Fix #1734: Add logging during jib builds
* Fix #1696: Fix put operations for ImageStreams in ApplyService
* Fix #1738: JibBuildConfiguration ignores BuildImageConfiguration.registry
* Fix #1737: Wrong Jib output directory in Maven multi-module build
### 4.3.0 (04-10-2019)
* Updated custom-enricher sample
* Fixed image config in thorntail sample
* Fix Autotls enricher in order to add service serving certificate secrets annotations
* Fix #1697: NullpointerException when trying to apply custom resources
* Fix #1720: Should be able to add custom route, even if fabric8.openshift.generateRoute=false
* Fix #1696: fmp not setting imagestreams resourceVersion properly.
* Fix #1689: HELM mode does not support parameters without value
* Fix #1676: Support for latest kubernetes client
* Fix #1699: Ability to specify object namespace in fragments
* Feature #1536: Java Image Builder Support
* Fix #1704: fabric8-build failing on openshift
* Feature #1706: Prometheus Enricher; Configuration support for Prometheus path
* Fix #1715: ApplyService#applyProjectRequest should be (truly) idempotent
* Added generator support for Open Liberty
* Fix #1718: Support for binary files in fmp-configmap-file
### 4.2.0 (01-08-2019)
* Fix #1638: Remove enrichAll parameter from ImageChangeTriggerEnricher
* Update Docker Maven Plugin to 0.29.0
* Chore: Refactor BaseEnricher to get rid of duplicated properties
* Fix #872: Using spring.application.name as service name
* Fix #1632: imagePullPolicy configuration fixed as per documentation
* Fix #1591: Add support for custom resources creation via resource fragments
* Fix #1628: Adding back support for templates in kubernetes mode to HelmMojo; Fix for NPE in UndeployMojo
* Fix #1591: Add support for custom resources creation via resource fragments
* Fix #1636: Added support to choose configMap name
* Fix #1630: Fix CronJob Version
* Upgrade Fabric8 kubernetes client to v4.3.0
* Fix #1591: Add support for custom resources creation via resource fragments
* Update docker maven plugin to 0.30.0
* Fix #1648: Job creation fails during fabric8:apply with error: forbidden
* Fix #1656: Openshift serviceaccount requires OAuth token authentication
* Fix #1622: Additions to minimal profile
* Feature #1661: Added Resource Versioning Support for RBAC
* Fix 73: Added Service Port normalization
* Bumped version to 4.2-SNAPSHOT
* Fix 1678: Fix breaking xml-config sample build
* Fix 1666: switchToDeployment does not work
* removed redundant profiles java11 and java9-plus
* Upgrade Fabric8 kubernetes client to v4.3.1
* Feature #1647: Added Base Image support for Java 11
* Fix #1680: DockerBuildServiceTest Refactored
### 4.1.0 (16-02-2019)
* Fix 1578: Fmp is generating ImageChange triggers in the DeploymentConfig
* Fix 1582: Refactor EnricherManager logic
* Fix 1409: Update readme for reworked 4.0 version
* Fix 1038: Do not add ImageChange change triggers for Docker images
* Fix NullPointerException in ClusterConfiguration when reading from properties
* Fix 1586: Redundant 'abstract' qualifier in BaseEnricher class removed
* Fix 1600: Replicas fragment not set
* Fix 1579: Added documentation for clearing doubts related to sidecars
* Fix 1587: HandlerHub refactored
* Update Kubernetes client to 4.2.1
* Fix 1579: Add ImageChange triggers to application container when resource fragments are used
* Fix 1607: Fix YAML aggregation for OpenShift
* Refactor 1592: Removed Deprecated/Unused Osio specific enrichers.
* Fix 1608: Added Enricher for namespace to project conversion.
* Feature 1611: Allow creating namespaces from xml configuration
* Fix 1616: Goals failing with Access Configuration
### 4.0.0 (14-03-2019)
* Refactor 678: Enricher Workflow to allow direct generation of OpenShift objects.
* Refactor 802: Remove resource object post processing.
* Feature 729: Select custom resources depending on environment.
* Feature 968: Improved support for sidecar containers with sidecar auto-detection and targeted health checks
* Feature 601: Adds healthcheck for webapp
* Fix 1460: Upgraded kubernetes client to 4.1.1
* Fix 690: Removes deprecated _legacyPortMapping_ property.
* Fix 1458: Support for from Image configuration in openshift docker build strategy
* Fix 931: Add ServiceAccount enricher and configuration
* Fix 732: Added 'skip' options to goals.
* Refactor 1520: Move XML configuration code from Mojo to enrichers
* Fix 1486: Remove Kompose Support
* Fix 1467: Wait timeout for build pod is too small
* Fix 1466: Allow to configure noCache option for openshift docker build strategy
* Fix 255: Add option to forcefully reload a s2i builder image.
* Fix 925: Unable to configure replicas count via property
* Feature 804: Allow set fragments as external references (URL)
* Fix 788: Added Git URL to GitEnricher.
* Feature 718: Detect port number from configuration file for health check in Thorntail
* Refactor 828: Moved Metedata specific code into DefaultMetadataEnricher
* Feature 718: Detect port number from configuration file for health check in Thorntail
* Fix NullPointerException in ClusterConfiguration during fabric8:watch
* Feature 1498: Allow users to define secrets from annotations
* Feature 1498: Allow users to define secrets from annotations
* Fix 1522: Remove need to initialize DockerAccess when on Openshift
* Fix 1570: Failure in applying Deployments using fabric8-maven-plugin
* Fix 1517: update vmp groupid and vert.x version
* Fix 1444: Option to Disable Liveness/readiness checks
* Feature 456: Add labels from label-schema.org to the generator images.
### 4.0.0-M2 (2018-12-14)
* Fix 10: Make VolumeConfiguration more flexible
* Fix 1326: Fixes overridding of Selector Label by the project enrichers entries.
* Fix 839: Sets Spring Boot generator [color config property as String](http://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/ansi/AnsiOutput.Enabled.html)
* Fix 401: Refactor DefaultControllerEnricher
* Fix 1412: mvn deploy fails when using a Dockerfile during S2I build
* Fix 1386: Allow @sha256 digest for tags in FROM
* Fix 796: Remove workaround to produce both .yaml and .json files
* Fix 1425: Added metadata visitors for imagestreams, build and buildconfig.
* Fix 1373: Allow the configuration of failureThreshold and successThreshold on readiness and liveness probes
* Fix 712: Add possibility to configure cluster access fexibly.
* Chore 1452: Upgraded Jgit to 5.2.0.201812061821-r
* Fix 222: The SeviceEnricher could check the Docker image configuration for specific labels
* Fix 1019: Custom liveness/readiness probes are not being created
### 3.5-SNAPSHOT
* Fix 1390: Provides better exception message in case of invalid `application.yml` in Spring Boot
* Fix 1382: Allow to provide additional fragment filename mappings
* Fix 918: Sanitize Maven project names when artifactId starts with a number
### 4.0.0-M1 (2018-11-09)
* Feature: Move to Java 1.8 as minimal requirement
* Refactor 1344: Removed unused Maven goals. Please contact us if something's missing for you.
* Refactor 949: Remove dependency from fabric8/fabric8
* Feature 1214: Don't use a default for skipBuildPom. This might break backwards compatibility, so please specify the desired value in case
* Fix 1093: Default tag for snapshot release is "latest", not the timestamp anymore
* Fix 1155 : Decouple regression test module from Fabric8 Maven Plugin
* Updated sample project versions to 4.0-SNAPSHOT
* Refactor 1370: Removed Jenkinsshift support
* Feature 1363: Added a Thorntail V2 sample for checking Jolokia/Prometheus issues
* Fix 894: Keep Service parameters stable after redeployment
* Fix 1330: Disable enrichers specific to the fabric8 platform by default
* Fix 1372: Filename to type mappings should be more flexible than a string array
* Fix 1327: Update docker maven plugin version
* Fix 1365: Generated image reference in deployment.yml contains duplicate registry name
* Fix 662: Add additional files when not a fat-jar
* Removed support for loading env variable from schema.json
* Removed MergeEnricher as it is broken and not used anyway
* Fix 1069: Fix broken fabric8:watch
* Fix 977: Allows user to define ConfigMap from XML and annotation
* Fix 314: Do not execute GitEnricher if not Git repo
* Fix 714: Merges the functionality of IANAServicePortNameEnricher and PortNameEnricher.
* Fix 963: add support for Pod resource type in yaml config.
###3.5.42 (2018-10-18)
* Fix 1346: karaf-maven-plugin is detected under any groupId
* Fix 1021: Avoids empty deployment selector value in generated yaml resource
* Fix 1383: Check if instanceof Openshift before casting and handle failures.
* Feature ENTESB-9252: Add Service Annotations to facilitate automated service discovery by 3scale.
* Fix 1365: Generated image reference in deployment.yml contains duplicate registry name
### 3.5.41 (2018-08-01)
* Feature 1032: Improvements of the Vert.x Generator and enrichers
* Fix 1313: Removed unused Maven goals. Please contact us if something's missing for you.
* Fix 1299: autotls feature doesn't work with OpenShift 3.9 (Kubernetes 1.8+) due to InitContainer annotation deprecation
* Fix 1276: Proper inclusion of webapp's war regardless of the final name
* Feature: New 'path' config option for the webapp generator to set the context path
* Fix 1334: support for docker.pull.registry
* Fix 1268: Java console for OpenShift builds reachable again.
* Fix 1312: Container name should not be generated from maven group id
* Feature 917: Add `timeoutSeconds` configuration option for SpringBootHealthCheck enricher
* Fix 1073: Preserve file extension when copying file to helm chart folder
* Fix 1340: spring-boot-maven-plugin is detected under any groupId
* Fix 1346: karaf-maven-plugin is detected under any groupId
### 3.5.40
* Feature 1264: Added `osio` profile, with enricher to apply OpenShift.io space labels to resources
* Feature 1291: Added ImageStream triggers for StatefulSets, ReplicaSets and DaemonSets
* Feature 1293: Added support to create pullSecret in buildConfig when pulling from private registry in Openshift.
* Fix 1265: WildFly Swarm health check enricher now supports detecting MicroProfile Health
* Fix 1298: WildFly Swarm was renamed to Thorntail
* Fix 1284: Handle intermittent SocketTimeoutException while s2i build
* Fix Unzip Issue - https://github.com/fabric8io/fabric8-maven-plugin/pull/1303
* Bring Wildfly swarm documentation uptodate - https://github.com/fabric8io/fabric8-maven-plugin/pull/1297
* Upgraded Kubernetes Client to 3.2.0 - https://github.com/fabric8io/fabric8-maven-plugin/pull/1304
* Upgraded Fabric8 to 3.0.12 - https://github.com/fabric8io/fabric8-maven-plugin/pull/1307
### 3.5.39
* Feature 1206: Added support for spring-boot 2 health endpoint
* Feature 1171: Added configuration options for delay and period on spring-boot health check probes
* Fix 1173: disable the Prometheus agent for WildFly Swarm applications, because it uses Java logging too early; also reenable the Jolokia agent, which was disabled due to the same problem but was fixed a long time ago
* Fix 1231: make helm artifact extension configurable with default value "tar.gz"
* Fix 1247: do not try to install non-existent imagestream yml file
* Fix 1185: K8s: resource fragment containing compute resources for containers triggers a WARNING.
* Fix 1237: When trimImageInContainerSpec is enabled, the generated yaml is incorrect
* Fix 1245: Use released version of Booster in place of master always in regression test
* Fix 1263: Display a warning in case of premature close of the build watcher by kubernetes client
* Fix 886: Introduce extends for profiles
### 3.5.38
* Feature 1209: Added flag fabric8.openshift.generateRoute which if set to false will not generate route.yml and also will not add Route resource in openshift.yml. If set to true or not set, it will generate rou te.yml and also add Route resource in openshift.yml. By default its value is true.
* Fix 1177: Added flag fabric8.openshift.enableAutomaticTrigger which would be able to enable/disable automatic deployments whenever there is new image
generated.
* Fix 1184: MultiModule projects were not getting deployed using FMP 3.5.34 onwards. This was working after adding an extra flag which was breaking the previous behaviour in patch release. We will make these change again in minor release and will add the notes regarding that. For the time being, we have reverted the change.
* Fix #1226: Plugin fails to deploy changes to an application with S2I build. It used to pick first image verion always. This fix pick the right image tag by comaring created attribute of image tag
### 3.5.35
* Fix 1130: Added flag fabric8.openshift.trimImageInContainerSpec which would set the container image reference to "", this is done to handle weird
behavior of Openshift 3.7 in which subsequent rollouts lead to ImagePullErr.
* Feature 1174: ImageStreams use local lookup policy by default to simplify usage of Deployment or StatefulSet resources on Openshift
### 3.5.34
* Feature 1003: Added suspend option to remote debugging
* Remove duplicate tenant repos from downstream version updates and add in tjenkins platform
* Fix 1051: resource validation was slow due to online hosted schema. The fix uses the JSON schema from kubernetes-model project
* Fix 1062: Add a filter to avoid duplicates while generating kubernetes template(picking the local generated resource ahead of any dependency). Added a resources/ folder in enricher/standard/src/test/ directory to add some sample yaml and jar resource files for DependencyEnricherTest.
* Fix 1042: Added a fabric8.build.switchToDeployment option to switch to Deployments rather than DeploymentConfig provided ImageStreams are not used on OpenShift. If value is set to true then fabric8-maven-plugin would switch to deployments, default value is false.
### 3.3.0
* The base image for Docker based builds (fabric8.mode == Kubernetes) has changed from fabric8/java-alpine-opendjdk8-jdk to fabric8/java-jboss-openjdk8-jdk which is CentOS based. Reason for this were issues with DNS lookups on Alpine. As before you always can change the base image with `-Dfabric8.from`.
### 3.2.1 (2016-11-17)
* Changed the base generator configuration `<enabled>` to `<add>` as it means to add this generator's image when it applies in contrast to only run when there is no other image configuration yet.
* Changed the default directories which are picked up the `java-exec` generator to `src/main/fabric8-includes` for extra file to be added to a Docker image.
* In the karaf and java-exec generator configuration `baseDir` changed to `targetDir` for specifying the target directory within the image where to put things into. This is in alignment with the docker-maven-plugin.
* In the webapp-generator configuration `deploymentDir` changed to `targetDir` for consistencies sake.
| 67.138211 | 325 | 0.794502 | eng_Latn | 0.964462 |
bb01af81aada7f6daaeabab355a2308b9efd3ca9 | 2,836 | md | Markdown | README.md | sematext/sematext-metrics | 86db31049762f53ca241781f08205999907fbf48 | [
"Apache-2.0"
] | 1 | 2015-01-31T10:49:03.000Z | 2015-01-31T10:49:03.000Z | README.md | sematext/sematext-metrics | 86db31049762f53ca241781f08205999907fbf48 | [
"Apache-2.0"
] | null | null | null | README.md | sematext/sematext-metrics | 86db31049762f53ca241781f08205999907fbf48 | [
"Apache-2.0"
] | 1 | 2019-01-12T06:13:57.000Z | 2019-01-12T06:13:57.000Z | # This repo has been deprecated
sematext-metrics [](https://travis-ci.org/sematext/sematext-metrics)
============
Java library for sending [Custom Metrics](https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics) to [SPM](http://sematext.com/spm/index.html).
## Quick start
Add maven dependency
<dependency>
<groupId>com.sematext</groupId>
<artifactId>sematext-metrics</artifactId>
<version>0.1</version>
</dependency>
Initialize client at application initialization point. For web applications it can be ServletContextListener.
SematextClient.initialize("[your token goes here]");
Create and send datapoints:
StDatapoint datapoint = StDatapoint.name("user registrations")
.filter1("account.type=free")
.filter2("user.gender=male")
.value(1d)
.aggType(AggType.SUM).build();
SematextClient.client().send(datapoint);
You can send several datapoints at once, too:
StDatapoint building1 = StDatapoint.name("coffee.coffee-consumed")
.filter1("floor=1")
.filter2("machine=2")
.value(42d)
.aggType(AggType.SUM).build();
StDatapoint building2 = StDatapoint.name("coffee-consumed")
.filter1("floor=1")
.filter2("machine=2")
.value(77d)
.aggType(AggType.SUM).build();
SematextClient.client().send(Arrays.asList(building1, building2));
## Configuration
To use different tokens for different applications, use `newInstance` factory method:
// create clients
SematextClient userMetrics = SematextClient.newInstance("[elasticsearch_app_token]");
SematextClient searchMetrics = SematextClient.newInstance("[web_app_token]");
// create userDatapoints and searchDatapoints here
// send the data
userMetrics.send(userDatapoints);
searchMetrics.send(searchDatapoints);
SpmClient uses `java.util.logging.Logger` for logging. Logging is disabled by default. To enable it:
SematextClient.enableLogging();
Behind the scenes `ExecutorService` is used to send data in background. `Executors.newFixedThreadPool(4)` is the default, but can be changed:
SematextClient.newInstance("[token]", Executors.newCachedThreadPool());
To send metrics to on-premises SPM deployment you can also configure endpoint by passing `ClientProperties` instance:
String endpoint = "http://spm-receiver.example.com/spm-receiver/custom/receive.raw";
SematextClient.newInstance(new ClientProperties("[token]", endpoint, null));
## Further reading
[Wiki page about custom metrics feature](https://sematext.atlassian.net/wiki/display/PUBSPM/Custom+Metrics).
## License
Copyright 2013 Sematext Group, Inc.
Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0
| 33.364706 | 154 | 0.732722 | eng_Latn | 0.422516 |
bb01cc379e6a50a517ed111ffa7d06461f0e1d25 | 7,486 | md | Markdown | public/images/ProjectBugListPage/bootstrap-table/docs/_i18n/es/documentation/column-options.md | WingCH/404NotFound_Web | 5f5bae2786bd441dd44259241e9698e46befb145 | [
"MIT"
] | 136 | 2016-12-19T09:19:33.000Z | 2021-11-07T20:29:23.000Z | public/images/ProjectBugListPage/bootstrap-table/docs/_i18n/es/documentation/column-options.md | WingCH/404NotFound_Web | 5f5bae2786bd441dd44259241e9698e46befb145 | [
"MIT"
] | 15 | 2019-03-15T15:12:21.000Z | 2019-04-02T14:27:42.000Z | public/images/ProjectBugListPage/bootstrap-table/docs/_i18n/es/documentation/column-options.md | WingCH/404NotFound_Web | 5f5bae2786bd441dd44259241e9698e46befb145 | [
"MIT"
] | 70 | 2016-12-19T06:54:04.000Z | 2021-11-10T09:16:50.000Z | # Column options []({{ site.repo }}/blob/develop/docs/_i18n/{{ site.lang }}/documentation/column-options.md)
---
Las propiedades de la columna están definidas en `jQuery.fn.bootstrapTable.columnDefaults`.
<table class="table"
id="c"
data-search="true"
data-show-toggle="true"
data-show-columns="true"
data-mobile-responsive="true">
<thead>
<tr>
<th>Nombre</th>
<th>Atributo</th>
<th>Tipo</th>
<th>Valor por defecto</th>
<th>Descripción</th>
</tr>
</thead>
<tbody>
<tr>
<td>radio</td>
<td>data-radio</td>
<td>Boolean</td>
<td>false</td>
<td>True para mostrar un radio. La columna con el radio tiene fijado el ancho.</td>
</tr>
<tr>
<td>checkbox</td>
<td>data-checkbox</td>
<td>Boolean</td>
<td>false</td>
<td>True para mostrar un checkbox. La columna con el checkbox tiene fijado el ancho.</td>
</tr>
<tr>
<td>field</td>
<td>data-field</td>
<td>String</td>
<td>undefined</td>
<td>El nombre del campo.</td>
</tr>
<tr>
<td>title</td>
<td>data-title</td>
<td>String</td>
<td>undefined</td>
<td>El título de la columna.</td>
</tr>
<tr>
<td>titleTooltip</td>
<td>data-title-tooltip</td>
<td>String</td>
<td>undefined</td>
<td>Texto del title tooltip de la columna. Esta opción soporta el tag title de HTML.</td>
</tr>
<tr>
<td>class</td>
<td>class / data-class</td>
<td>String</td>
<td>undefined</td>
<td>La clase CSS de la columna.</td>
</tr>
<tr>
<td>rowspan</td>
<td>rowspan / data-rowspan</td>
<td>Number</td>
<td>undefined</td>
<td>Indica cuantas filas debe tomar una celda.</td>
</tr>
<tr>
<td>colspan</td>
<td>colspan / data-colspan</td>
<td>Number</td>
<td>undefined</td>
<td>indica cuantas columnas debe tomar una celda.</td>
</tr>
<tr>
<td>align</td>
<td>data-align</td>
<td>String</td>
<td>undefined</td>
<td>Indica cómo se alinea la columna. Se puede usar 'left', 'right', 'center'.</td>
</tr>
<tr>
<td>halign</td>
<td>data-halign</td>
<td>String</td>
<td>undefined</td>
<td>Indica cómo se alinea el encabezado de la tabla. Se puede usar 'left', 'right', 'center'.</td>
</tr>
<tr>
<td>falign</td>
<td>data-falign</td>
<td>String</td>
<td>undefined</td>
<td>Indica cómo se alinea el footer. Se puede usar 'left', 'right', 'center'.</td>
</tr>
<tr>
<td>valign</td>
<td>data-valign</td>
<td>String</td>
<td>undefined</td>
<td>Indica cómo se alinea el contenido de la celda. Se puede usar 'top', 'middle', 'bottom'.</td>
</tr>
<tr>
<td>width</td>
<td>data-width</td>
<td>Number {Pixeles o Porcentaje}</td>
<td>undefined</td>
<td>Indica el ancho de la columna. Si no es definido, el ancho será auto. Tmabién puede agregar '%' a su número y la bootstrapTable
usará la unidad de porcentaje, sino, puede agregar o no 'px' a su número para que bootstrapTable use pixeles.</td>
</tr>
<tr>
<td>sortable</td>
<td>data-sortable</td>
<td>Boolean</td>
<td>false</td>
<td>True para permitir que la coluna sea ordenable.</td>
</tr>
<tr>
<td>order</td>
<td>data-order</td>
<td>String</td>
<td>'asc'</td>
<td>El valor por defecto para ordenar los datos, solo puede ser 'asc' o 'desc'.</td>
</tr>
<tr>
<td>visible</td>
<td>data-visible</td>
<td>Boolean</td>
<td>true</td>
<td>False para ocultar el item de la columna.</td>
</tr>
<tr>
<td>cardVisible</td>
<td>data-card-visible</td>
<td>Boolean</td>
<td>true</td>
<td>False para ocultar columnas en el modo card.</td>
</tr>
<tr>
<td>switchable</td>
<td>data-switchable</td>
<td>Boolean</td>
<td>true</td>
<td>False para deshabilitar el switchable en el item de la columna.</td>
</tr>
<tr>
<td>clickToSelect</td>
<td>data-click-to-select</td>
<td>Boolean</td>
<td>true</td>
<td>True para seleccionar un checkbox o radiobox cuando se le da click a la columna.</td>
</tr>
<tr>
<td>formatter</td>
<td>data-formatter</td>
<td>Function</td>
<td>undefined</td>
<td>
El contexto (this) es el objecto columna. <br>
La función de formateo de la celda, toma tres parámetros: <br>
value: el valor del campo. <br>
row: los datos de la fila.<br>
index: el indice de la fila.</td>
</tr>
<tr>
<td>footerFormatter</td>
<td>data-footer-formatter</td>
<td>Function</td>
<td>undefined</td>
<td>
El contexto (this) es el objecto columna. <br>
La función toma un parámetro: <br>
data: Array de todas las filas. <br>
La función debe retornar un string con el texto a mostrar en el footer.
</tr>
<tr>
<td>events</td>
<td>data-events</td>
<td>Object</td>
<td>undefined</td>
<td>
Los eventos de la celda son escuchados cuando se usa la función formatter, toma tres parámetros: <br>
event: el evento de jQuery. <br>
value: el valor del campo. <br>
row: los datos de la fila.<br>
index: el indice de la fila.</td>
</tr>
<tr>
<td>sorter</td>
<td>data-sorter</td>
<td>Function</td>
<td>undefined</td>
<td>
La función sort es usada para hacer el ordenamiendo customizable, toma dos parámetros: <br>
a: el primer valor del campo.<br>
b: el segundo valor del campo.</td>
</tr>
<tr>
<td>sortName</td>
<td>data-sort-name</td>
<td>String</td>
<td>undefined</td>
<td>Proporcionar una especie-nombre adaptable, no la clase-nombre por defecto en la cabecera, o el nombre del campo
de la columna. Por ejemplo, una columna puede mostrar el valor de nombreCampo de "HTML" como
"<b><span style="color:red">abc</span></b>", pero una nombreCampo para ordenar es el "contenido" con el valor de "abc".
</td>
</tr>
<tr>
<td>cellStyle</td>
<td>data-cell-style</td>
<td>Function</td>
<td>undefined</td>
<td>
La función formatter para el estilo de la celda, toma tres parámetros: <br>
value: el valor del campo.<br>
row: los datos de la fila.<br>
index: el indice de la fila.<br>
field: la vico kampo.<br>
Soporta clases o CSS.
</td>
</tr>
<tr>
<td>searchable</td>
<td>data-searchable</td>
<td>Boolean</td>
<td>true</td>
<td>True para incluir la columna en la búsqueda.</td>
</tr>
<tr>
<td>searchFormatter</td>
<td>data-search-formatter</td>
<td>Boolean</td>
<td>true</td>
<td>
True to search use formated data.
</td>
</tr>
</tbody>
</table>
| 30.680328 | 156 | 0.536067 | spa_Latn | 0.765643 |
bb01f45d27fbf209f9630ded61fbd07e85a677c9 | 198 | md | Markdown | README.md | Dan503/search-icon-scss | efc6e764fbe752441638a9cb239dd6c600dd7c78 | [
"MIT"
] | null | null | null | README.md | Dan503/search-icon-scss | efc6e764fbe752441638a9cb239dd6c600dd7c78 | [
"MIT"
] | null | null | null | README.md | Dan503/search-icon-scss | efc6e764fbe752441638a9cb239dd6c600dd7c78 | [
"MIT"
] | null | null | null | # search-icon
A Sass mixin for generating custom css search icons with built in eye and cross icon transitions
This plugin is still under construction. Once finished, it will be released onto npm.
| 39.6 | 96 | 0.80303 | eng_Latn | 0.997972 |
bb02ba9d2485fbbfdee15cb18d89885fec19250e | 1,332 | md | Markdown | README.md | rustplatz/rustplatz.github.io | dbe3fc2348d0a55cc3a5da7b879b5fcaea4f4580 | [
"MIT"
] | null | null | null | README.md | rustplatz/rustplatz.github.io | dbe3fc2348d0a55cc3a5da7b879b5fcaea4f4580 | [
"MIT"
] | 1 | 2017-05-04T13:29:29.000Z | 2017-05-07T13:57:53.000Z | README.md | rustplatz/rustplatz.github.io | dbe3fc2348d0a55cc3a5da7b879b5fcaea4f4580 | [
"MIT"
] | null | null | null | ## Welcome to the soon to be www.rustplatz.de's GitHub Pages
We can use the [editor on GitHub](https://github.com/rustplatz/rustplatz.github.io/edit/master/README.md) to maintain and preview the content for our website in Markdown files.
Whenever we commit to this repository, GitHub Pages will run [Jekyll](https://jekyllrb.com/) to rebuild the pages in our site, from the content in our Markdown files.
### Markdown
Markdown is a lightweight and easy-to-use syntax for styling our writing. It includes conventions for
```markdown
Syntax highlighted code block
# Header 1
## Header 2
### Header 3
- Bulleted
- List
1. Numbered
2. List
**Bold** and _Italic_ and `Code` text
[Link](url) and 
```
For more details see [GitHub Flavored Markdown](https://guides.github.com/features/mastering-markdown/).
### Jekyll Themes
Our Pages site will use the layout and styles from the Jekyll theme we have selected in our [repository settings](https://github.com/rustplatz/rustplatz.github.io/settings). The name of this theme is saved in the Jekyll `_config.yml` configuration file.
### Support or Contact
Having trouble with Pages? Check out the Github pages [documentation](https://help.github.com/categories/github-pages-basics/) or [contact support](https://github.com/contact) and they’ll help us sort it out.
| 35.052632 | 253 | 0.75976 | eng_Latn | 0.956425 |
bb02fa652f37009d7e5056d4922cc229594155e6 | 5,030 | md | Markdown | README.md | wsl2ls/WSLWaterFlowLayout | 72526f81682618f2ade03c90336828c5b6815e8f | [
"MIT"
] | 294 | 2018-10-15T06:29:18.000Z | 2022-03-01T14:23:08.000Z | README.md | wsl2ls/WSLWaterFlowLayout | 72526f81682618f2ade03c90336828c5b6815e8f | [
"MIT"
] | 9 | 2018-10-11T09:17:22.000Z | 2021-02-03T09:13:17.000Z | README.md | wsl2ls/WSLWaterFlowLayout | 72526f81682618f2ade03c90336828c5b6815e8f | [
"MIT"
] | 46 | 2018-11-14T12:46:12.000Z | 2021-12-29T05:29:49.000Z | # WSLWaterFlowLayout





简书地址:https://www.jianshu.com/p/9fafd89c97ad ( 此库年久失修,不建议直接使用,可以学习看看)

>功能描述:[WSLWaterFlowLayout]() 是在继承于UICollectionViewLayout的基础上封装的控件, 目前支持竖向瀑布流(item等宽不等高、支持头脚视图)、水平瀑布流(item等高不等宽 不支持头脚视图)、竖向瀑布流( item等高不等宽、支持头脚视图)、栅格布局瀑布流 4种样式的瀑布流布局。
* 前言 :近几个月一直在忙公司的ChinaDaily项目,没有抽出时间来写简书,现在终于算是告一段落了,抽出时间来更一篇😁
* 实现:主要是重写父类的几个涉及布局属性的方法,在对应的布局属性方法中根据需求自定义视图布局属性信息。详情看示例
```
/** 初始化 生成每个视图的布局信息*/
-(void)prepareLayout;
/** 决定一段区域所有cell和头尾视图的布局属性*/
-(NSArray<UICollectionViewLayoutAttributes *> *)layoutAttributesForElementsInRect:(CGRect)rect ;
/** 返回indexPath位置cell对应的布局属性*/
-(UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:(NSIndexPath *)indexPath;
/** 返回indexPath位置头和脚视图对应的布局属性*/
- (UICollectionViewLayoutAttributes *)layoutAttributesForSupplementaryViewOfKind:(NSString *)elementKind atIndexPath:(NSIndexPath *)indexPath;
//返回内容高度
-(CGSize)collectionViewContentSize;
```
* 用法:注意遵循WSLWaterFlowLayoutDelegate协议,代理方法和TableView、collectionView的代理方法用法相似。
下面是WSLWaterFlowLayout.h中的属性方法和代理方法,含义注释的还算清晰:
```
typedef enum {
WSLWaterFlowVerticalEqualWidth = 0, /** 竖向瀑布流 item等宽不等高 */
WSLWaterFlowHorizontalEqualHeight = 1, /** 水平瀑布流 item等高不等宽 不支持头脚视图*/
WSLWaterFlowVerticalEqualHeight = 2, /** 竖向瀑布流 item等高不等宽 */
WSLWaterFlowHorizontalGrid = 3, /** 特为国务院客户端原创栏目滑块样式定制-水平栅格布局 仅供学习交流*/
WSLLineWaterFlow = 4 /** 线性布局 待完成,敬请期待 */
} WSLFlowLayoutStyle; //样式
@class WSLWaterFlowLayout;
@protocol WSLWaterFlowLayoutDelegate <NSObject>
/**
返回item的大小
注意:根据当前的瀑布流样式需知的事项:
当样式为WSLWaterFlowVerticalEqualWidth 传入的size.width无效 ,所以可以是任意值,因为内部会根据样式自己计算布局
WSLWaterFlowHorizontalEqualHeight 传入的size.height无效 ,所以可以是任意值 ,因为内部会根据样式自己计算布局
WSLWaterFlowHorizontalGrid 传入的size宽高都有效, 此时返回列数、行数的代理方法无效,
WSLWaterFlowVerticalEqualHeight 传入的size宽高都有效, 此时返回列数、行数的代理方法无效
*/
- (CGSize)waterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout sizeForItemAtIndexPath:(NSIndexPath *)indexPath;
/** 头视图Size */
-(CGSize )waterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout sizeForHeaderViewInSection:(NSInteger)section;
/** 脚视图Size */
-(CGSize )waterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout sizeForFooterViewInSection:(NSInteger)section;
@optional //以下都有默认值
/** 列数*/
-(CGFloat)columnCountInWaterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout;
/** 行数*/
-(CGFloat)rowCountInWaterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout;
/** 列间距*/
-(CGFloat)columnMarginInWaterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout;
/** 行间距*/
-(CGFloat)rowMarginInWaterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout;
/** 边缘之间的间距*/
-(UIEdgeInsets)edgeInsetInWaterFlowLayout:(WSLWaterFlowLayout *)waterFlowLayout;
@end
@interface WSLWaterFlowLayout : UICollectionViewLayout
/** delegate*/
@property (nonatomic, weak) id<WSLWaterFlowLayoutDelegate> delegate;
/** 瀑布流样式*/
@property (nonatomic, assign) WSLFlowLayoutStyle flowLayoutStyle;
@end
```
初始化仅三行代码,只需设置代理和样式,item的大小、头脚视图的大小、行列数以及间距都可以在对应样式的代理方法中自定义:
```
WSLWaterFlowLayout * _flow = [[WSLWaterFlowLayout alloc] init];
_flow.delegate = self;
_flow.flowLayoutStyle = WSLVerticalWaterFlow;
```
>更新于2018/8/12: 新增样式4-栅格布局样式的瀑布流,如下图
简书地址:https://www.jianshu.com/p/f40bbe437265


[GitHub项目README.md设置徽标(环境、build、下载量...)](https://blog.csdn.net/chenbetter1996/article/details/85099176)
[GitHub上 README 增加图片标签](https://blog.csdn.net/yangbodong22011/article/details/51791085)
[shields.io标签生成官网](https://shields.io/category/build)
http://github.com/donggelaile/HDCollectionView.git
## 推荐学习资料:
> [Swift从入门到精通](https://ke.qq.com/course/392094?saleToken=1693443&from=pclink)
> [恋上数据结构与算法(一)](https://ke.qq.com/course/385223?saleToken=1887678&from=pclink)
> [恋上数据结构与算法(二)](https://ke.qq.com/course/421398?saleToken=1887679&from=pclink)
## Welcome To Follow Me
> 您的follow和start,是我前进的动力,Thanks♪(・ω・)ノ
> * [简书](https://www.jianshu.com/u/e15d1f644bea)
> * [微博](https://weibo.com/5732733120/profile?rightmod=1&wvr=6&mod=personinfo&is_all=1)
> * [掘金](https://juejin.im/user/5c00d97b6fb9a049fb436288)
> * [CSDN](https://blog.csdn.net/wsl2ls)
> * QQ交流群:835303405
> * 微信号:w2679114653
> 欢迎扫描下方二维码关注——奔跑的程序猿iOSer——微信公众号:iOS2679114653 本公众号是一个iOS开发者们的分享,交流,学习平台,会不定时的发送技术干货,源码,也欢迎大家积极踊跃投稿,(择优上头条) ^_^分享自己开发攻城的过程,心得,相互学习,共同进步,成为攻城狮中的翘楚!

| 38.396947 | 164 | 0.789066 | yue_Hant | 0.838484 |
bb0331545d87373e4ec1c8e33340d89416513b2d | 3,143 | md | Markdown | _research/2019-09-11-masters-thesis.md | TakeTwiceDailey/TakeTwiceDailey.github.io | 45506a4a003fe4b135e86c2e7854e78fcb454de9 | [
"MIT"
] | null | null | null | _research/2019-09-11-masters-thesis.md | TakeTwiceDailey/TakeTwiceDailey.github.io | 45506a4a003fe4b135e86c2e7854e78fcb454de9 | [
"MIT"
] | null | null | null | _research/2019-09-11-masters-thesis.md | TakeTwiceDailey/TakeTwiceDailey.github.io | 45506a4a003fe4b135e86c2e7854e78fcb454de9 | [
"MIT"
] | null | null | null | ---
layout: research_post
title: Probing exotic fields with networks of atomic clocks
shortinfo: This is my Master's Thesis, where I describe a novel concept involving pulses of exotic fields sourced from known high-energy astrophysical events such as gravitational wave events. If detected, these exotic fields could add networks of quantum precision measurement sensors to the list of messengers for multimessenger astronomy.
abstract: An exotic light field (ELF) is a class of field beyond the standard model that could be produced in high-energy astrophysical events with enough amplitude to be detected with precision measurement sensors. A model that describes an ELF as a pulse of ultra-relativistic matter waves and an estimate of the sensitivity for current and future networks of atomic clocks to detect ELFs is developed here. The global positioning system (GPS) is presented as an existing network of atomic clocks that has the potential to probe ELFs. A first proof-of-principle search for ELFs emitted as bursts from the GW170817 neutron star merger was performed with data from GPS. Although no concrete evidence was found for ELFs, a foundation has been produced for future searches for ELFs originating from many other astrophysical events, such as gamma ray bursts, black hole mergers, and solar flares for the last 20 years of GPS operation.
authors: Conner Dailey
link: http://hdl.handle.net/11714/6029
---
## Conclusion
In this work we have demonstrated the ability of global networks of precision measurement devices to detect ELFs that may be emitted from high energy astrophysical events, potentially making them a new messenger in the growing field of multimessenger astronomy. We discussed using the atomic clocks that make up GPS as such a network, along with future networks of atomic clocks, and we characterized how ELFs may interact with them. We have developed a model for the time evolution of pulses of ultrarelativistic matter waves, and estimated the sensitivity of GPS and future networks of atomic clocks to detect ELFs. We have shown that quadratic couplings to ELFs have great potential for detection with GPS and that future networks of atomic clocks have even greater sensitivity. This work has also uncovered previously unknown time-frequency behavior of the GPS satellite clocks, as our analysis is the first to be preformed on high-rate GPS clock data. While we have shown that these time-frequency signals are not likely caused by ELFs, their origin still remains a mystery and certainly invokes questions about the behavior of the GPS network. A complete characterization of the clock behavior we have discovered could be used to improve the modeling and quality of GPS. The search for ELFs based on this event was limited in this case due to the limited data available, but this search can be extended to times much longer after the GW170817 event itself to consider ELFs of slower (though still ultra-relativistic) velocities. Future work in the search for ELFs can include analysis at known times of past astrophysical events, such as gamma ray bursts, black hole mergers, and solar flares.
| 241.769231 | 1,699 | 0.819281 | eng_Latn | 0.999531 |
bb037984c7c67b5e9382171bdda976bd900e6647 | 4,404 | md | Markdown | www/src/experience/smaaviltguiden.md | webmaeistro/martin-andersen | 66e6b709099d7f30309eb775ff05338913d63311 | [
"MIT"
] | null | null | null | www/src/experience/smaaviltguiden.md | webmaeistro/martin-andersen | 66e6b709099d7f30309eb775ff05338913d63311 | [
"MIT"
] | null | null | null | www/src/experience/smaaviltguiden.md | webmaeistro/martin-andersen | 66e6b709099d7f30309eb775ff05338913d63311 | [
"MIT"
] | 1 | 2020-06-16T23:31:05.000Z | 2020-06-16T23:31:05.000Z | ---
title: ""
company: "Smaviltguiden"
logo: ../images/companies/smaaviltguiden/smaaviltguiden_omslag.jpg
jobTitle: "artskunnskap for jegere"
skills:
[
{ title: "HTML 5", image: ../images/skills/html5.png },
{ title: "CSS 3", image: ../images/skills/css3.png },
{ title: "NGINX", image: ../images/skills/nginx.png },
{ title: "PHP7", image: ../images/skills/php7.png },
{ title: "RESTful API", image: ../images/skills/restful.png },
{ title: "MySQL", image: ../images/skills/mysql.png },
{ title: "JQuery", image: ../images/skills/jquery.png },
{ title: "Boostrap", image: ../images/skills/bootstrap.png },
{ title: "Git", image: ../images/skills/git.png },
]
images:
[
{
title: "Artskunnskap",
description: "Forfatter: Bjørn Olav Tveit",
layout: "4",
files:
[
{
image: ../images/companies/smaaviltguiden/smaaviltguiden_omslag.jpg,
},
{ image: ../images/companies/smaaviltguiden/vadere_oppslag4b.jpg },
{
image: ../images/companies/smaaviltguiden/smaaviltguiden_omslag.jpg,
},
],
},
{
title: "jaktsesong",
description: "viktig å holde",
layout: "3",
files:
[
{
image: ../images/companies/smaaviltguiden/smaaviltguiden_omslag.jpg,
},
{
image: ../images/companies/smaaviltguiden/smaaviltguiden_omslag.jpg,
},
{ image: ../images/companies/smaaviltguiden/vadere_oppslag4b.jpg },
{
image: ../images/companies/smaaviltguiden/large_smaaviltguiden_tv2_gmn_0.jpg,
},
],
},
{
title: "Øivind Egeland trylle fram forsideillustrasjonen!",
description: "Øivind Egeland fra Sandnes regnes som en av Skandinavias fremste naturkunstnere.",
layout: "1",
files:
[
{ image: ../images/companies/smaaviltguiden/egeland_ulv1c_small.jpg },
{
image: ../images/companies/smaaviltguiden/large_smaaviltguiden_tv2_gmn_0.jpg,
},
{ image: ../images/companies/smaaviltguiden/vadere_oppslag4b.jpg },
],
},
{
title: "Jegerprøven",
description: "Er obligatorisk for å anskaffe seg jaktvåpen.",
layout: "1",
files:
[
{ image: ../images/companies/smaaviltguiden/vadere_oppslag4b.jpg },
{
image: ../images/companies/smaaviltguiden/large_smaaviltguiden_tv2_gmn_0.jpg,
},
{ image: ../images/companies/smaaviltguiden/vadere_oppslag4b.jpg },
],
},
]
dateFrom: "2013-07-01"
dateTo: "2014-07-01"
---
## Hvordan skille jaktbart fra fredet vilt
Det er det viktigste en jeger må kunne. Samtidig er det kanskje også det vanskeligste, noe ikke minst erfaringer fra jegerprøvene viser. Denne boka forteller på en enkel måte hvordan du i typiske jaktsituasjoner ser forskjell på de artene du kan felle - og hvilke du må la gå. Samtidig får du mye nyttig og spennende informasjon om viltet. En god jeger kjenner sitt bytte!
Boka viser artene i riktige drakter i forhold til jakttidene, i like positurer så de lett kan sammenliknes, med piler til de viktigste kjennetegnene, og med symboler som viser hvilke som kan jaktes og hvilke som er fredet:
Småviltguiden er basert på den nye Jaktforskriften for 2017-2022, med nye jakttider og oppdatert med hvilke arter som ikke lenger er jaktbare - foruten en ny jaktbar art!
De livaktige illustrasjonene er malt av Øivind Egeland, en av Skandinavias mest begavede naturkunstnere. Bjørn Olav Tveit er blant våre ledende eksperter på artskunnskap og feltkjennetegn.
Sammen har de laget en pedagogisk og lettfattelig guide til hvordan skille alle jaktbare arter fra sine fredete forvekslingsarter, nyttig enten du skal ut på jakt eller er på vei til å ta jegerprøven.
Tittel: Småviltguiden - artskunnskap for jegere
**Tekst:** Bjørn Olav Tveit
**Illustrasjoner:** Øivind Egeland
**Format:** 245 x 225 mm
**Omfang:** 136 sider
**ISBN:** 978-82-998062-7-5
**Pris:** kr 398,-
**Utgitt:** Høsten 2017
**Hylles av Jegerforbundet!**
Norges Jeger- og Fiskerforbunds medlemsblad **Jakt & Fiske** omtaler Småviltguiden som en «særdeles grundig guide for småviltjegeren», med «detaljerte og naturtro tegninger og akvareller», der «også erfarne småviltjegere vil kunne lære noe nytt»!
| 41.158879 | 372 | 0.662807 | nob_Latn | 0.946056 |
bb037b24156e8a24013493afa859b872ee528c72 | 1,560 | md | Markdown | _posts/2019-06-01-The Blacks of Premodern China.md | XJStory/XJStory.github.io | 89ff3fb636e0ce73cf70370997bc1e6d681e9a4a | [
"MIT"
] | null | null | null | _posts/2019-06-01-The Blacks of Premodern China.md | XJStory/XJStory.github.io | 89ff3fb636e0ce73cf70370997bc1e6d681e9a4a | [
"MIT"
] | null | null | null | _posts/2019-06-01-The Blacks of Premodern China.md | XJStory/XJStory.github.io | 89ff3fb636e0ce73cf70370997bc1e6d681e9a4a | [
"MIT"
] | null | null | null | ---
layout: mypost
title: The Blacks of Premodern China
categories: [札记]
---
Reading *the Blacks of Premodern China*
This book is written by Wyatt. Here are some reading responses to his book.
It seems the author misunderstands some classical Chinese. He makes a pair of mistakes in his translations, though generally it does not affect our understanding. However, several of them are quite misleading, especially in the text of Molin (磨邻) in the New Tang History. It says “其君臣七日一休,不出纳交易,饮以穷夜。”, which clearly means its king and officials have a rest every seven DAYS. But in the interpretation of the author, it is described as “Ramadan” in the 7th month of the lunar calendar, which largely changes the original meaning and adds an Arabic color that may not be contained in the text.These words are of great importance and should not be wrongly interpreted since they directly tell us the inhabitants and traditions of this group of people, on whom we are now doing a research.
Aside from that, it also lacks the evidence which can convince us that the names from the ancient books are actually those the author mentions in Africa nowadays. “Bobali” being Berbera is somehow understandable, according to the similarity of their pronunciations, while “Luoposi” also referring to Berbera seems not so convincing. At least it is still controversial, some scholars even maintaining this place belongs to Australia. However, in the latter part of this chapter, the description and inference of the recording details of Zheng He’s journey are very excellent.
| 120 | 787 | 0.801282 | eng_Latn | 0.99998 |
bb04906718b561dc53d147f164995435f7eb073e | 10,175 | md | Markdown | articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: データ フィードを Metrics Advisor にオンボードする
titleSuffix: Azure Cognitive Services
description: データ フィードを Metrics Advisor にオンボードする方法について説明します。
services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: applied-ai-services
ms.subservice: metrics-advisor
ms.topic: conceptual
ms.date: 04/20/2021
ms.author: mbullwin
ms.openlocfilehash: 403fd0b41d2fc9e0db56aaa4eadab3c27e5d1991
ms.sourcegitcommit: 192444210a0bd040008ef01babd140b23a95541b
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 07/15/2021
ms.locfileid: "114341521"
---
# <a name="how-to-onboard-your-metric-data-to-metrics-advisor"></a>方法: メトリック データを Metrics Advisor にオンボードする
この記事では、データを Metrics Advisor にオンボードする方法について説明します。
## <a name="data-schema-requirements-and-configuration"></a>データ スキーマの要件と構成
[!INCLUDE [data schema requirements](../includes/data-schema-requirements.md)]
一部の用語が分からない場合は、[用語集](../glossary.md)を参照してください。
## <a name="avoid-loading-partial-data"></a>部分データの読み込みを回避する
部分データは、Metrics Advisor とデータ ソースに格納されているデータ間の不整合が原因で発生します。 これは、Metrics Advisor によるデータのプルが終了した後にデータ ソースが更新されたときに発生する可能性があります。 Metrics Advisor によるデータのプルは、指定したデータ ソースから 1 回のみ行われます。
たとえば、監視のためのメトリックが Metrics Advisor にオンボードされている場合などです。 Metrics Advisor で、メトリック データの取得がタイムスタンプ A に正常に行われ、それに対して異常検出が実行されます。 ところが、データが取り込まれた後に、その特定のタイムスタンプ A のメトリック データが更新された場合。 新しいデータ値は取得されません。
[バックフィル](manage-data-feeds.md#backfill-your-data-feed) 履歴データの (後で説明) を使用して不整合を軽減できますが、これらのタイム ポイントのアラートが既にトリガーされている場合、新しい異常アラートはトリガーされません。 このプロセスでは、システムにワークロードが追加される可能性があり、自動ではありません。
部分データの読み込みを回避するには、次の 2 つの方法をお勧めします。
* データを 1 回のトランザクションで生成する:
タイムスタンプが同じすべてのディメンションの組み合わせのメトリック値が、確実に 1 回のトランザクションでデータ ソースに格納されるようにします。 上の例では、すべてのデータ ソースのデータの準備が整うまで待機してから、1 回のトランザクションで Metrics Advisor に読み込みます。 Metrics Advisor により、データが正常に (または部分的に) 取得されるまで、データ フィードを定期的にポーリングできます。
* **インジェストのタイム オフセット** パラメーターに適切な値を設定して、データ インジェストを遅らせる:
データ フィードに対して **インジェストのタイム オフセット** パラメーターを設定して、データが完全に準備されるまでインジェストを遅らせます。 これは、Azure Table Storage など、トランザクションをサポートしていない一部のデータ ソースに役立ちます。 詳細については、「[詳細設定](manage-data-feeds.md#advanced-settings)」を参照してください。
## <a name="start-by-adding-a-data-feed"></a>データ フィードの追加から始める
Metrics Advisor ポータルにサインインしてワークスペースを選択したら、 **[Get started]\(開始\)** をクリックします。 次に、ワークスペースのメイン ページで、左側のメニューから **[Add data feed]\(データ フィードの追加\)** をクリックします。
### <a name="add-connection-settings"></a>接続設定の追加
#### <a name="1-basic-settings"></a>1. 基本設定
次に、時系列データ ソースを接続するためのパラメーターのセットを入力します。
* **ソースの種類**:時系列データを格納するデータ ソースの種類。
* **細分性**:時系列データ内の連続するデータ ポイントの間隔。 現在、Metrics Advisor は次をサポートしています。毎年、毎月、毎週、毎日、毎時間、およびカスタム。 カスタマイズ オプションでサポートされている最小間隔は 300 秒です。
* **秒**:*granularityName* を *Customize* に設定したときの秒数。
* **データ インジェスト開始 (UTC)** :データ インジェストのベースラインの開始時刻。 `startOffsetInSeconds` は多くの場合、データの一貫性を確保できるようにオフセットを追加するのに使用します。
#### <a name="2-specify-connection-string"></a>2. 接続文字列を指定する
次に、データ ソースの接続情報を指定する必要があります。 他のフィールドとさまざまな種類のデータ ソースの接続の詳細については、「[方法: さまざまなデータ ソースを接続する](../data-feeds-from-different-sources.md)」を参照してください。
#### <a name="3-specify-query-for-a-single-timestamp"></a>3. 1 つのタイムスタンプのクエリを指定する
<!-- Next, you'll need to specify a query to convert the data into the required schema, see [how to write a valid query](../tutorials/write-a-valid-query.md) for more information. -->
さまざまな種類のデータ ソースの詳細については、「[方法: さまざまなデータ ソースを接続する](../data-feeds-from-different-sources.md)」を参照してください。
### <a name="load-data"></a>データの読み込み
接続文字列とクエリ文字列を入力したら、 **[Load data]\(データの読み込み\)** を選択します。 この操作の中で、Metrics Advisor により、データを読み込むための接続とアクセス許可の確認、クエリで使用する必要のあるパラメーター (@IntervalStart と @IntervalEnd) の確認、データ ソースの列名の確認が行われます。
この手順でエラーが発生した場合:
1. まず、接続文字列が有効かどうかを確認します。
2. 次に、十分なアクセス許可があり、インジェスト ワーカーの IP アドレスにアクセス権が付与されていることを確認します。
3. その後、必要なパラメーター (@IntervalStart と @IntervalEnd) がクエリで使用されているかどうかを確認します。
### <a name="schema-configuration"></a>スキーマの構成
データ スキーマが読み込まれたら、適切なフィールドを選択します。
データ ポイントのタイムスタンプが省略されている場合、代わりに、Metrics Advisor により、データ ポイントが取り込まれた時点のタイムスタンプが使用されます。 データ フィードごとに、最大 1 列をタイムスタンプとして指定できます。 列をタイムスタンプとして指定できないことを示すメッセージが表示された場合は、クエリまたはデータ ソースを確認し、プレビュー データ内だけでなく、クエリ結果内に複数のタイムスタンプがあるかどうかを確認します。 データ インジェストを実行する場合、Metrics Advisor によって使用できるチャンクは、指定したソースの時系列データを毎回 1 つ (細分性に従って、1 日、1 時間など) のみです。
|[選択] |説明 |メモ |
|---------|---------|---------|
| **表示名** | 元の列名の代わりに、ワークスペースに表示される名前。 | 省略可能。|
|**Timestamp** | データ ポイントのタイムスタンプ。 省略されている場合、代わりに、Metrics Advisor により、データ ポイントが取り込まれた時点のタイムスタンプが使用されます。 データ フィードごとに、最大 1 列をタイムスタンプとして指定できます。 | 省略可能。 最大 1 列を指定する必要があります。 **列をタイムスタンプとして指定できない** というエラーが発生する場合は、クエリまたはデータ ソースで重複するタイムスタンプがあるかを確認します。 |
|**測定値** | データ フィード内の数値。 データ フィードごとに、複数のメジャーを指定できますが、少なくとも 1 列をメジャーとして選択する必要があります。 | 少なくとも 1 列を指定する必要があります。 |
|**ディメンション** | カテゴリ値。 異なる値を組み合わせて、国、言語、テナントなどの特定の 1 次元の時系列を識別します。 0 個以上の列をディメンションとして選択できます。 注: 文字列以外の列をディメンションとして選択する場合は注意が必要です。 | 省略可能。 |
|**無視** | 選択した列を無視します。 | 省略可能。 クエリを使用してデータを取得することをサポートしているデータ ソースの場合、[無視] オプションはありません。 |
列を無視する場合は、クエリまたはデータ ソースを更新してこれらの列を除外することをお勧めします。 また、 **[Ignore columns]\(列を無視する\)** を使用して列を無視してから、特定の列を **無視** することもできます。 列がディメンションである必要があるのに、誤って "*無視*" に設定されている場合、Metrics Advisor によって部分データが取り込まれる可能性があります。 たとえば、クエリのデータが次のようになっているとします。
| 行 ID | Timestamp | Country | 言語 | Income |
| --- | --- | --- | --- | --- |
| 1 | 2019/11/10 | 中国 | ZH-CN | 10000 |
| 2 | 2019/11/10 | 中国 | EN-US | 1000 |
| 3 | 2019/11/10 | US | ZH-CN | 12000 |
| 4 | 2019/11/11 | US | EN-US | 23000 |
| ... | ...| ... | ... | ... |
"*国*" がディメンションであり、"*言語*" が "*無視*" に設定されている場合、1 番目と 2 番目の行では、タイムスタンプのディメンションは同じになります。 Metrics Advisor により、この 2 つの行の 1 つの値が任意に使用されます。 この場合、Metrics Advisor によって行が集計されません。
スキーマを構成したら、 **[Verify schema]\(スキーマの確認\)** を選択します。 この操作の中で、Metrics Advisor によって以下の確認が行われます。
- クエリ対象のデータのタイムスタンプが 1 つの間隔に収まっているかどうか。
- 1 つのメトリック間隔内で、ディメンションの組み合わせが同じ重複した値が返されているかどうか。
### <a name="automatic-roll-up-settings"></a>自動ロール アップ設定
> [!IMPORTANT]
> 根本原因分析およびその他の診断機能を有効にする場合は、**自動ロール アップ設定** を構成する必要があります。 有効にすると、自動ロールアップ設定を変更できなくなります。
Metrics Advisor は、インジェスト中に各ディメンションで集計 (SUM、MAX、MIN など) を自動的に実行でき、ルート ケース分析やその他の診断機能で使用される階層を構築します。
次のシナリオで考えてみましょう。
* *"データのロールアップ分析を含める必要はない。"*
Metrics Advisor のロールアップを使用する必要はありません。
* *"データは既にロール アップされており、ディメンションの値は次によって表される: NULL または空 (既定値)、NULL のみ、その他。"*
このオプションは、行が既に合計されているため、Metrics Advisor でデータをロール アップする必要がないことを意味します。 たとえば、 *[NULL only]\(NULL のみ\)* を選択した場合、次の例の 2 番目のデータ行は、すべての国と言語 *EN-US* の集計として表示されます。ただし、"*国*" の値が空の 4 番目のデータ行は、不完全なデータを示す可能性のある通常の行として表示されます。
| Country | 言語 | Income |
|---------|----------|--------|
| 中国 | ZH-CN | 10000 |
| (NULL) | EN-US | 999999 |
| US | EN-US | 12000 |
| | EN-US | 5000 |
* "*Metrics Advisor で、Sum/Max/Min/Avg/Count を計算してデータをロール アップし、それを {何らかの文字列} で表す必要がある"*
Cosmos DB や Azure Blob Storage などの一部のデータ ソースでは、*group by* や *cube* などの特定の計算はサポートされていません。 Metrics Advisor には、インジェスト中にデータ キューブを自動的に生成するためのロール アップ オプションが用意されています。
このオプションは、Metrics Advisor により、選択したアルゴリズムを使用してロールアップが計算され、指定した文字列を使用して Metrics Advisor 内にロールアップが表される必要があることを意味します。 これにより、データ ソース内のデータが変更されることはありません。
たとえば、ディメンション (国、地域) を持つ売上メトリックを表す一連の時系列があるとします。 特定のタイムスタンプについて、次のように表示されます。
| Country | リージョン | 売上 |
|---------------|------------------|-------|
| Canada | Alberta | 100 |
| Canada | British Columbia | 500 |
| United States | モンタナ | 100 |
*Sum* を使用する自動ロール アップを有効にすると、Metrics Advisor によって、ディメンションの組み合わせが計算され、データ インジェスト中にメトリックが合計されます。 結果は次のようになります。
| Country | リージョン | 売上 |
| ------------ | --------------- | ---- |
| Canada | Alberta | 100 |
| NULL | Alberta | 100 |
| Canada | British Columbia | 500 |
| NULL | British Columbia | 500 |
| United States | モンタナ | 100 |
| NULL | モンタナ | 100 |
| NULL | NULL | 700 |
| Canada | NULL | 600 |
| United States | NULL | 100 |
`(Country=Canada, Region=NULL, Sales=600)` は、カナダ (すべての地域) の売上合計が 600 であることを意味します。
SQL 言語における変換を次に示します。
```mssql
SELECT
dimension_1,
dimension_2,
...
dimension_n,
sum (metrics_1) AS metrics_1,
sum (metrics_2) AS metrics_2,
...
sum (metrics_n) AS metrics_n
FROM
each_timestamp_data
GROUP BY
CUBE (dimension_1, dimension_2, ..., dimension_n);
```
自動ロール アップ機能を使用する前に、次の点を考慮してください。
* *SUM* を使用してデータを集計する場合は、各ディメンションのメトリックが加算されることを確認してください。 次に、"*非加法*" メトリックの例を示します。
- 分数ベースのメトリック。 これには、比率、パーセンテージなどが含まれます。たとえば、国全体の失業率を計算するのに、各州の失業率を加算してはいけませんせん。
- ディメンション内の重複。 たとえば、スポーツ好きの人数を計算するために、それぞれのスポーツの人数を加算してはいけません。両者の間に重複があり、1 人の人が複数のスポーツを好むことがあるからです。
* システム全体の正常性を確保するために、キューブのサイズは制限されています。 現時点では、制限は 100 万です。 データがその制限を超えると、そのタイムスタンプのインジェストは失敗します。
## <a name="advanced-settings"></a>詳細設定
インジェスト オフセットや同時実行の指定など、カスタマイズされた方法でデータを取り込めるようにするには、いくつかの詳細設定があります。 詳細については、データ フィード管理の記事の「[詳細設定](manage-data-feeds.md#advanced-settings)」セクションを参照してください。
## <a name="specify-a-name-for-the-data-feed-and-check-the-ingestion-progress"></a>データ フィードの名前を指定し、インジェストの進行状況を確認する
データ フィードのカスタム名を指定します。これは、ワークスペースに表示されます。 **[送信]** をクリックします。 データ フィードの詳細ページで、インジェストの進行状況バーを使用してステータス情報を表示できます。
:::image type="content" source="../media/datafeeds/ingestion-progress.png" alt-text="インジェスト進行状況バー" lightbox="../media/datafeeds/ingestion-progress.png":::
インジェスト エラーの詳細を確認するには:
1. **[詳細の表示]** をクリックします。
2. **[状態]** をクリックして、 **[失敗]** または **[エラー]** を選択します。
3. 失敗したインジェストの上にマウス ポインターを移動し、表示される詳細メッセージを確認します。
:::image type="content" source="../media/datafeeds/check-failed-ingestion.png" alt-text="失敗したインジェストを確認する":::
"*失敗*" 状態は、このデータ ソースのインジェストが後で再試行されることを示します。
"*エラー*" 状態は、Metrics Advisor によるデータ ソースの再試行が行われないことを示します。 データを再読み込みするには、バックフィルまたは再読み込みを手動でトリガーする必要があります。
また、 **[Refresh Progress]\(進行状況の更新\)** をクリックして、インジェストの進行状況を再読み込みすることもできます。 データ インジェストが完了したら、メトリックをクリックして、異常検出の結果を確認することができます。
## <a name="next-steps"></a>次のステップ
- [データ フィードを管理する](manage-data-feeds.md)
- [さまざまなデータ ソースの構成](../data-feeds-from-different-sources.md)
- [メトリックを構成して検出構成を微調整する](configure-metrics.md)
| 47.325581 | 324 | 0.718428 | jpn_Jpan | 0.605573 |
bb04e7721ea5b0726b193bee09f7fb256901bc98 | 1,708 | md | Markdown | docs/GetAssetDetailsByAssetIDRIS.md | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | 5 | 2021-05-17T04:45:03.000Z | 2022-03-23T12:51:46.000Z | docs/GetAssetDetailsByAssetIDRIS.md | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | null | null | null | docs/GetAssetDetailsByAssetIDRIS.md | Crypto-APIs/Crypto_APIs_2.0_SDK_Python | c59ebd914850622b2c6500c4c30af31fb9cecf0e | [
"MIT"
] | 2 | 2021-06-02T07:32:26.000Z | 2022-02-12T02:36:23.000Z | # GetAssetDetailsByAssetIDRIS
Represents a specific asset's data depending on its type (whether it is \"crypto\" or \"fiat\").
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**_1_hour_price_change_in_percentage** | **str** | Represents the percentage of the asset's current price against the its price from 1 hour ago. | [optional]
**_1_week_price_change_in_percentage** | **str** | Represents the percentage of the asset's current price against the its price from 1 week ago. | [optional]
**_24_hours_price_change_in_percentage** | **str** | Represents the percentage of the asset's current price against the its price from 24 hours ago. | [optional]
**_24_hours_trading_volume** | **str** | Represents the trading volume of the asset for the time frame of 24 hours. | [optional]
**asset_type** | **str** | Represent a subtype of the crypto assets. Could be COIN or TOKEN. | [optional]
**circulating_supply** | **str** | Represents the amount of the asset that is circulating on the market and in public hands. | [optional]
**market_cap_in_usd** | **str** | Defines the total market value of the asset's circulating supply in USD. | [optional]
**max_supply** | **str** | Represents the maximum amount of all coins of a specific asset that will ever exist in its lifetime. | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 81.333333 | 168 | 0.701405 | eng_Latn | 0.976155 |
bb05046e8dfe1ff863fe52d5a5f4123dcde4c341 | 3,567 | md | Markdown | src/foreword.md | toku-sa-n/book-ja | fd1cc18b154c06544091cca20872b2e2b418ad47 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1 | 2020-10-14T01:44:47.000Z | 2020-10-14T01:44:47.000Z | src/foreword.md | toku-sa-n/book-ja | fd1cc18b154c06544091cca20872b2e2b418ad47 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | src/foreword.md | toku-sa-n/book-ja | fd1cc18b154c06544091cca20872b2e2b418ad47 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | <!--
# Foreword
-->
# まえがき
<!--
It wasn’t always so clear, but the Rust programming language is fundamentally
about *empowerment*: no matter what kind of code you are writing now, Rust
empowers you to reach farther, to program with confidence in a wider variety of
domains than you did before.
-->
すぐにはわかりにくいかもしれませんが、Rustプログラミング言語は、エンパワーメント(empowerment)を根本原理としています:
どんな種類のコードを現在書いているにせよ、Rustは幅広い領域で以前よりも遠くへ到達し、
自信を持ってプログラムを組む力を与え(empower)ます。
<!--
Take, for example, “systems-level” work that deals with low-level details of
memory management, data representation, and concurrency. Traditionally, this
realm of programming is seen as arcane, accessible only to a select few who
have devoted the necessary years learning to avoid its infamous pitfalls. And
even those who practice it do so with caution, lest their code be open to
exploits, crashes, or corruption.
-->
一例を挙げると、メモリ管理やデータ表現、並行性などの低レベルな詳細を扱う「システムレベル」のプログラミングがあります。
伝統的にこの分野は難解で、年月をかけてやっかいな落とし穴を回避する術を習得した選ばれし者にだけ可能と見なされています。
そのように鍛錬を積んだ者でさえ注意が必要で、さもないと書いたコードがクラッキングの糸口になったりクラッシュやデータ破損を引き起こしかねないのです。
<!--
Rust breaks down these barriers by eliminating the old pitfalls and providing a
friendly, polished set of tools to help you along the way. Programmers who need
to “dip down” into lower-level control can do so with Rust, without taking on
the customary risk of crashes or security holes, and without having to learn
the fine points of a fickle toolchain. Better yet, the language is designed to
guide you naturally towards reliable code that is efficient in terms of speed
and memory usage.
-->
この難しさを取り除くために、Rustは、古い落とし穴を排除し、その過程で使いやすく役に立つ洗練された一連のツールを提供します。
低レベルな制御に「下がる」必要があるプログラマは、お決まりのクラッシュやセキュリティホールのリスクを負わず、
気まぐれなツールチェーンのデリケートな部分を学ぶ必要なくRustで同じことができます。さらにいいことに、
Rustは、スピードとメモリ使用の観点で効率的な信頼性の高いコードへと自然に導くよう設計されています。
<!--
Programmers who are already working with low-level code can use Rust to raise
their ambitions. For example, introducing parallelism in Rust is a relatively
low-risk operation: the compiler will catch the classical mistakes for you. And
you can tackle more aggressive optimizations in your code with the confidence
that you won’t accidentally introduce crashes or exploits.
-->
既に低レベルコードに取り組んでいるプログラマは、Rustを使用してさらなる高みを目指せます。例えば、
Rustで並列性を導入することは、比較的低リスクです: コンパイラが伝統的なミスを捕捉してくれるのです。
そして、クラッシュやクラッキングの糸口を誤って導入しないという自信を持ってコードの大胆な最適化に取り組めるのです。
<!--
But Rust isn’t limited to low-level systems programming. It’s expressive and
ergonomic enough to make CLI apps, web servers, and many other kinds of code
quite pleasant to write — you’ll find simple examples of both later in the
book. Working with Rust allows you to build skills that transfer from one
domain to another; you can learn Rust by writing a web app, then apply those
same skills to target your Raspberry Pi.
-->
ですが、Rustは低レベルなシステムプログラミングに限定されているわけではありません。十分に表現力豊かでエルゴノミックなので、
コマンドラインアプリやWebサーバ、その他様々な楽しいコードを書けます。この本の後半に両者の単純な例が見つかるでしょう。
Rustを使うことで1つの領域から他の領域へと使い回せる技術を身につけられます;
ウェブアプリを書いてRustを学び、それからその同じ技術をラズベリーパイを対象に適用できるのです。
<!--
This book fully embraces the potential of Rust to empower its users. It’s a
friendly and approachable text intended to help you level up not just your
knowledge of Rust, but also your reach and confidence as a programmer in
general. So dive in, get ready to learn—and welcome to the Rust community!
-->
この本は、ユーザに力を与え(empower)るRustのポテンシャルを全て含んでいます。あなたのRustの知識のみをレベルアップさせるだけでなく、
プログラマとしての全般的な能力や自信をもレベルアップさせる手助けを意図した親しみやすくわかりやすいテキストです。
さあ、飛び込んで学ぶ準備をしてください。Rustコミュニティへようこそ!
<!--
— Nicholas Matsakis and Aaron Turon
-->
- ニコラス・マットサキス(Nicholas Matsakis)とアーロン・チューロン(Aaron Turon)
| 40.534091 | 79 | 0.824783 | eng_Latn | 0.984637 |
bb058fa98d696aeb69ebb3f086fe962baa8a2044 | 323 | md | Markdown | php/README.md | tantialex/binance-signature-examples | 210b7899f9430ff7dee4a830efca05273d634b3f | [
"Ruby",
"MIT"
] | 43 | 2021-09-06T06:48:59.000Z | 2022-03-29T15:17:34.000Z | php/README.md | tantialex/binance-signature-examples | 210b7899f9430ff7dee4a830efca05273d634b3f | [
"Ruby",
"MIT"
] | 6 | 2021-11-25T05:33:42.000Z | 2022-03-30T08:29:28.000Z | php/README.md | tantialex/binance-signature-examples | 210b7899f9430ff7dee4a830efca05273d634b3f | [
"Ruby",
"MIT"
] | 24 | 2021-09-06T06:49:00.000Z | 2022-03-22T12:06:47.000Z | # PHP examples
Example script in PHP to allow sending request to fetch public data and USER_DATA without third party package. This can be useful to :
- try new endpoint that not available in any library
- validate unexpected behaviors in any library
## Includes
- get orderbook
- getting account balance
- Place an order
| 29.363636 | 134 | 0.786378 | eng_Latn | 0.997898 |
bb05f1edaa630423b8b133059e70d9c98ff40c86 | 528 | md | Markdown | README.md | kondorpa/cowin_vaccine_availability | c69a99082bd1bfa46234ef6050506977ee4f99b8 | [
"MIT"
] | null | null | null | README.md | kondorpa/cowin_vaccine_availability | c69a99082bd1bfa46234ef6050506977ee4f99b8 | [
"MIT"
] | null | null | null | README.md | kondorpa/cowin_vaccine_availability | c69a99082bd1bfa46234ef6050506977ee4f99b8 | [
"MIT"
] | null | null | null | # cowin_slot_avaialbility
This script will send notifications via whatsapp and email every 150 seconds if there is any slot available for the districts available in the csv file.
It uses Twilio APIs to send message through whatsapp and smtp to send email.
how to:
Edit the district_mapping.csv to remove the unnecessary districts you don't want to get notifications of.
install/import all the required modules.
Enter details required such as email addresses, phone number and Twilio SID &auth token.
run cowin_main.py
| 40.615385 | 153 | 0.804924 | eng_Latn | 0.998026 |
bb061065b5852be5665ad9103af6f21a594604c8 | 5,067 | md | Markdown | treebanks/ta_ttb/ta_ttb-dep-punct.md | emmettstr/docs | 2d0376d6e07f3ffa828f6152d12cf260a530c64d | [
"Apache-2.0"
] | null | null | null | treebanks/ta_ttb/ta_ttb-dep-punct.md | emmettstr/docs | 2d0376d6e07f3ffa828f6152d12cf260a530c64d | [
"Apache-2.0"
] | null | null | null | treebanks/ta_ttb/ta_ttb-dep-punct.md | emmettstr/docs | 2d0376d6e07f3ffa828f6152d12cf260a530c64d | [
"Apache-2.0"
] | null | null | null | ---
layout: base
title: 'Statistics of punct in UD_Tamil-TTB'
udver: '2'
---
## Treebank Statistics: UD_Tamil-TTB: Relations: `punct`
This relation is universal.
1000 nodes (10%) are attached to their parents as `punct`.
644 instances of `punct` (64%) are left-to-right (parent precedes child).
Average distance between parent and child is 3.559.
The following 9 pairs of parts of speech are connected with `punct`: <tt><a href="ta_ttb-pos-VERB.html">VERB</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (705; 71% instances), <tt><a href="ta_ttb-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (156; 16% instances), <tt><a href="ta_ttb-pos-PROPN.html">PROPN</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (81; 8% instances), <tt><a href="ta_ttb-pos-AUX.html">AUX</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (20; 2% instances), <tt><a href="ta_ttb-pos-NUM.html">NUM</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (17; 2% instances), <tt><a href="ta_ttb-pos-ADJ.html">ADJ</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (7; 1% instances), <tt><a href="ta_ttb-pos-PART.html">PART</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (7; 1% instances), <tt><a href="ta_ttb-pos-PRON.html">PRON</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (6; 1% instances), <tt><a href="ta_ttb-pos-ADV.html">ADV</a></tt>-<tt><a href="ta_ttb-pos-PUNCT.html">PUNCT</a></tt> (1; 0% instances).
~~~ conllu
# visual-style 4 bgColor:blue
# visual-style 4 fgColor:white
# visual-style 2 bgColor:blue
# visual-style 2 fgColor:white
# visual-style 2 4 punct color:blue
1 அவர் அவர் PRON RpN-3SH-- Case=Nom|Gender=Com|Number=Sing|Person=3|Polite=Form|PronType=Prs 2 nsubj 2:nsubj Translit=avar|LTranslit=avar
2 பேசியத் பேசு VERB VzND3SNAA Case=Nom|Gender=Neut|Number=Sing|Person=3|Polarity=Pos|Tense=Past|VerbForm=Ger|Voice=Act 0 root 0:root Translit=pēciyat|LTranslit=pēcu
3 ஆவது ஆவது PART Tl------- _ 2 advmod:emph 2:advmod:emph Translit=āvatu|LTranslit=āvatu
4 : : PUNCT Z:------- PunctType=Comm 2 punct 2:punct SpaceAfter=No|Translit=:|LTranslit=:
5 . . PUNCT Z#------- PunctType=Peri 2 punct 2:punct Translit=.|LTranslit=.
~~~
~~~ conllu
# visual-style 8 bgColor:blue
# visual-style 8 fgColor:white
# visual-style 7 bgColor:blue
# visual-style 7 fgColor:white
# visual-style 7 8 punct color:blue
1 இது இது PRON RpN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3|PronType=Prs 2 nmod 2:nmod:nom Translit=itu|LTranslit=itu
2 தொடர்பாக தொடர்பு ADV AA------- _ 6 advmod 6:advmod SpaceAfter=No|Translit=toṭarpāka|LTranslit=toṭarpu
3 , , PUNCT Z:------- PunctType=Comm 6 punct 6:punct Translit=,|LTranslit=,
4 அவர் அவர் PRON RpN-3SH-- Case=Nom|Gender=Com|Number=Sing|Person=3|Polite=Form|PronType=Prs 6 nsubj 6:nsubj Translit=avar|LTranslit=avar
5 புதன்கிழமை புதன்கிழமை NOUN NNN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3 6 obl 6:obl:nom Translit=putankilamai|LTranslit=putankilamai
6 வெளியிட்ட வெளியிடு ADJ Jd-D----A Polarity=Pos|Tense=Past|VerbForm=Part 7 amod 7:amod Translit=veḷiyiṭṭa|LTranslit=veḷiyiṭu
7 அறிக்கை அறிக்கை NOUN NNN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3 0 root 0:root SpaceAfter=No|Translit=arikkai|LTranslit=arikkai
8 : : PUNCT Z:------- PunctType=Comm 7 punct 7:punct SpaceAfter=No|Translit=:|LTranslit=:
9 . . PUNCT Z#------- PunctType=Peri 7 punct 7:punct Translit=.|LTranslit=.
~~~
~~~ conllu
# visual-style 7 bgColor:blue
# visual-style 7 fgColor:white
# visual-style 8 bgColor:blue
# visual-style 8 fgColor:white
# visual-style 8 7 punct color:blue
1 இந்த இந்த DET DD------- _ 3 det 3:det Translit=inta|LTranslit=inta
2 இரு இரு NUM Ux------- NumType=Card 3 nummod 3:nummod Translit=iru|LTranslit=iru
3 இடங்களுக்கு இடம் PROPN NED-3PN-- Case=Dat|Gender=Neut|Number=Plur|Person=3 12 obl 12:obl:dat Translit=iṭaṅkaḷukku|LTranslit=iṭam
4 கனடா கனடா PROPN NEN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3 9 nmod 9:nmod:nom SpaceAfter=No|Translit=kanaṭā|LTranslit=kanaṭā
5 , , PUNCT Z:------- PunctType=Comm 6 punct 6:punct Translit=,|LTranslit=,
6 ஜெர்மனி ஜெர்மனி PROPN NEN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3 4 conj 4:conj|9:nmod:nom SpaceAfter=No|Translit=jermani|LTranslit=jermani
7 , , PUNCT Z:------- PunctType=Comm 8 punct 8:punct Translit=,|LTranslit=,
8 போர்ச்சுக்கல் போர்ச்சுக்கல் PROPN NEN-3SN-- Case=Nom|Gender=Neut|Number=Sing|Person=3 4 conj 4:conj|9:nmod:nom Translit=pōrccukkal|LTranslit=pōrccukkal
9 ஆகிய ஆகு ADJ Jd-D----A Polarity=Pos|Tense=Past|VerbForm=Part 11 amod 11:amod Translit=ākiya|LTranslit=āku
10 3 3 NUM U=------- NumForm=Digit 11 nummod 11:nummod Translit=3|LTranslit=3
11 நாடுகள் நாடு NOUN NNN-3PN-- Case=Nom|Gender=Neut|Number=Plur|Person=3 12 nsubj 12:nsubj Translit=nāṭukaḷ|LTranslit=nāṭu
12 போட்டியிடுகின்றன போட்டியிடு VERB Vr-P3PNAA Gender=Neut|Mood=Ind|Number=Plur|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin|Voice=Act 0 root 0:root SpaceAfter=No|Translit=pōṭṭiyiṭukinrana|LTranslit=pōṭṭiyiṭu
13 . . PUNCT Z#------- PunctType=Peri 12 punct 12:punct Translit=.|LTranslit=.
~~~
| 66.671053 | 1,142 | 0.713243 | yue_Hant | 0.676749 |
bb062228ed567466a95e2bcf43bd7a73227bc930 | 50,007 | md | Markdown | docs/admin-manual/manual.md | bopopescu/hue-2 | 454d320dd09b6f7946f3cc05bc97c3e2ca6cd485 | [
"Apache-2.0"
] | null | null | null | docs/admin-manual/manual.md | bopopescu/hue-2 | 454d320dd09b6f7946f3cc05bc97c3e2ca6cd485 | [
"Apache-2.0"
] | null | null | null | docs/admin-manual/manual.md | bopopescu/hue-2 | 454d320dd09b6f7946f3cc05bc97c3e2ca6cd485 | [
"Apache-2.0"
] | 1 | 2020-07-25T13:34:46.000Z | 2020-07-25T13:34:46.000Z |
<link rel="stylesheet" href="../css/bootplus.css" type="text/css" media="screen" title="no title" charset="utf-8"></link>
<link rel="stylesheet" href="../css/font-awesome.min.css" type="text/css" media="screen" title="no title" charset="utf-8"></link>
<link rel="stylesheet" href="../css/docbook.css" type="text/css" media="screen" title="no title" charset="utf-8"></link>
<div class="row-fluid doc-title">
<h1><a href=../index.html>Doc</a> > Hue Administration Guide</h1>
</div>
<div class="row-fluid">
<div class="span3">
[TOC]
</div>
<div class="span9">
# Installation
The following instructions describe how to install the Hue tarball on a
multi-node cluster. You need to also install Hadoop and its satellite components
(Oozie, Hive...) and update some Hadoop configuration files before running Hue.
Hue consists of a web service that runs on a special node in your cluster.
Choose one node where you want to run Hue. This guide refers to that node as
the _Hue Server_. For optimal performance, this should be one of the nodes
within your cluster, though it can be a remote node as long as there are no
overly restrictive firewalls. For small clusters of less than 10 nodes,
you can use your existing master node as the Hue Server.
You can download the Hue tarball here:
[https://github.com/cloudera/hue/releases](https://github.com/cloudera/hue/releases)
## Dependencies
Hue employs some Python modules which use native code and requires
certain development libraries be installed on your system. To install from the
tarball, you'll need these library development packages and tools installed on your system:
* Python 2.6.5 - 2.7
### Ubuntu
sudo apt-get install git ant gcc g++ libffi-dev libkrb5-dev libmysqlclient-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libssl-dev libxml2-dev libxslt-dev make maven libldap2-dev python-dev python-setuptools libgmp3-dev
* [Oracle JDK](https://help.ubuntu.com/community/Java)
* mvn (from maven package or maven3 tarball)
* openldap-dev / libldap2-dev
* libtidy-0.99-0 (for unit tests only)
#### Install Oracle JDK
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
### CentOS/RHEL
sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
* [Oracle JDK](https://www.digitalocean.com/community/tutorials/how-to-install-java-on-centos-and-fedora)
* mvn (from [``apache-maven``](https://gist.github.com/sebsto/19b99f1fa1f32cae5d00) package or maven3 tarball)
* libtidy (for unit tests only)
* openssl-devel (for version 7+)
### MacOS
* Xcode command line tools
* Oracle's JDK 1.7+
* maven (Homebrew)
* mysql (Homebrew)
* gmp (Homebrew)
* openssl (Homebrew)
* [Oracle Instant Client](http://www.oracle.com/technetwork/database/database-technologies/instant-client/downloads/index.html)
#### Fix openssl errors
Required for MacOS 10.11+
export LDFLAGS=-L/usr/local/opt/openssl/lib && export CPPFLAGS=-I/usr/local/opt/openssl/include
#### Install Oracle Instant Client
Download both instantclient-basic and instantclient-sdk of the same version (11.2.0.4.0 for this example) and on your ~/.bash_profile, add
export ORACLE_HOME=/usr/local/share/oracle
export VERSION=11.2.0.4.0
export ARCH=x86_64
export DYLD_LIBRARY_PATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
and then
source ~/.bash_profile
sudo mkdir -p $ORACLE_HOME
sudo chmod 775 $ORACLE_HOME
then unzip the content of both downloaded zip files into the newly created $ORACLE_HOME in a way that the 'sdk' folder is at the same level with the other files and then
ln -s libclntsh.dylib.11.1 libclntsh.dylib
ln -s libocci.dylib.11.1 libocci.dylib
and finally
cd sdk
unzip ottclasses.zip
## Build
Configure `$PREFIX` with the path where you want to install Hue by running:
PREFIX=/usr/share make install
cd /usr/share/hue
You can install Hue anywhere on your system, and run Hue as a non-root user.
It is a good practice to create a new user for Hue and either install Hue in
that user's home directory, or in a directory within `/usr/share`.
## Docker
Alternatively to building Hue, a [http://gethue.com/getting-started-with-hue-in-2-minutes-with-docker/](Hue Docker image) is available.
## Troubleshooting the tarball Installation
.Q: I moved my Hue installation from one directory to another and now Hue no
longer functions correctly.
A: Due to the use of absolute paths by some Python packages, you must run a
series of commands if you move your Hue installation. In the new location, run:
rm app.reg
rm -r build
make apps
.Q: Why does "make install" compile other pieces of software?
A: In order to ensure that Hue is stable on a variety of distributions and
architectures, it installs a Python virtual environment which includes its
dependencies. This ensures that the software can depend on specific versions
of various Python libraries and you don't have to be concerned about missing
software components.
## Starting the server
After your cluster is running with the plugins enabled, you can start Hue on
your Hue Server by running:
build/env/bin/supervisor
This will start several subprocesses, corresponding to the different Hue
components. Your Hue installation is now running.
# Configuration
## Reference Architecture
* 3 Hues and 1 Load Balancer
* Databases: MySQL InnoDB, PostgreSQL, Oracle
* LDAP
* Monitoring
* Impala HA
* HiveServer2 HA
* Downloads
[Read more about it here](http://gethue.com/performance-tuning/).
### Load Balancers
Hue is often run with:
* Cherrypy with Httpd
* [Apache mod Python](http://gethue.com/how-to-run-hue-with-the-apache-server/)
* [NGINX](Using NGINX to speed up Hue)
### Proxy
A Web proxy lets you centralize all the access to a certain URL and prettify the address (e.g. ec2-54-247-321-151.compute-1.amazonaws.com --> demo.gethue.com).
[Here is one way to do it](http://gethue.com/i-put-a-proxy-on-hue/).
## Quick Start Wizard
The Quick Start wizard allows you to perform the following Hue setup
operations by clicking the tab of each step or sequentially by clicking
Next in each screen:
1. **Check Configuration** validates your Hue configuration. It will
note any potential misconfiguration and provide hints as to how to
fix them. You can edit the configuration file described in the next
section or use Cloudera Manager, if installed, to manage your
changes.
2. **Examples** contains links to install examples into the Hive,
Impala, MapReduce, Spark, Oozie, Solr Dashboard and Pig Editor applications.
3. **Users** contains a link to the User Admin application to create or
import users and a checkbox to enable and disable collection of
usage information.
4. **Go!** - displays the Hue home screen, which contains links to the
different categories of applications supported by Hue: Query,
Hadoop, and Workflow.
## Configuration
Displays a list of the installed Hue applications and their
configuration. The location of the folder containing the Hue
configuration files is shown at the top of the page. Hue configuration
settings are in the hue.ini configuration file.
Click the tabs under **Configuration Sections and Variables** to see the
settings configured for each application. For information on configuring
these settings, see Hue Configuration in the Hue installation manual.
Hue ships with a default configuration that will work for
pseudo-distributed clusters. If you are running on a real cluster, you must
make a few changes to the `hue.ini` configuration file (`/etc/hue/hue.ini` when installed from the
package version) or `pseudo-distributed.ini` in `desktop/conf` when in development mode).
The following sections describe the key configuration options you must make to configure Hue.
<div class="note">
To list all available configuration options, run:
$ /usr/share/hue/build/env/bin/hue config_help | less
This commands outlines the various sections and options in the configuration,
and provides help and information on the default values.
</div>
<div class="note">
To view the current configuration from within Hue, open:
http://<hue>/hue/dump_config
</div>
<div class="note">
Hue loads and merges all of the files with extension `.ini`
located in the `/etc/hue` directory. Files that are alphabetically later
take precedence.
</div>
### Web Server Configuration
These configuration variables are under the `[desktop]` section in
the `hue.ini` configuration file.
### Specifying the HTTP port
Hue uses CherryPy web server. You can use the following options to
change the IP address and port that the web server listens on.
The default setting is port 8888 on all configured IP addresses.
# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8888
### Specifying the Secret Key
For security, you should also specify the secret key that is used for secure
hashing in the session store. Enter a long series of random characters
(30 to 60 characters is recommended).
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o
NOTE: If you don't specify a secret key, your session cookies will not be
secure. Hue will run but it will also display error messages telling you to
set the secret key.
### Disabling some apps
In the Hue ini configuration file, in the [desktop] section, you can enter the names of the app to hide:
<pre>
[desktop]
# Comma separated list of apps to not load at server startup.
app_blacklist=beeswax,impala,security,filebrowser,jobbrowser,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,spark,oozie,indexer
</pre>
[Read more about it here](http://gethue.com/mini-how-to-disabling-some-apps-from-showing-up/).
### Authentication
By default, the first user who logs in to Hue can choose any
username and password and becomes an administrator automatically. This
user can create other user and administrator accounts. User information is
stored in the Django database in the Django backend.
The authentication system is pluggable. For more information, see the
[Hue SDK Documentation](sdk/sdk.html).
List of some of the possible authentications:
#### Username / Password
#### LDAP
#### SAML
[Read more about it](http://gethue.com/updated-saml-2-0-support/).
#### OpenId Connect
#### Multiple Authentication Backends
For example, to enable Hue to first attempt LDAP directory lookup before falling back to the database-backed user model, we can update the hue.ini configuration file or Hue safety valve in Cloudera Manager with a list containing first the LdapBackend followed by either the ModelBackend or custom AllowFirstUserDjangoBackend (permits first login and relies on user model for all subsequent authentication):
<pre>
[desktop]
[[auth]]
backend=desktop.auth.backend.LdapBackend,desktop.auth.backend.AllowFirstUserDjangoBackend
</pre>
This tells Hue to first check against the configured LDAP directory service, and if the username is not found in the directory, then attempt to authenticate the user with the Django user manager.
[Read more about it here](http://gethue.com/configuring-hue-multiple-authentication-backends-and-ldap/).
### Reset a password
**Programmatically**
When a Hue administrator loses their password, a more programmatic approach is required to secure the administrator again. Hue comes with a wrapper around the python interpreter called the “shell” command. It loads all the libraries required to work with Hue at a programmatic level. To start the Hue shell, type the following command from the Hue installation root.
Then:
cd /usr/lib/hue (or /opt/cloudera/parcels/CDH-XXXXX/share/hue if using parcels and CM)
build/env/bin/hue shell
The following is a small script, that can be executed within the Hue shell, to change the password for a user named “example”:
from django.contrib.auth.models import User
user = User.objects.get(username='example')
user.set_password('some password')
user.save()
The script can also be invoked in the shell by using input redirection (assuming the script is in a file named script.py):
build/env/bin/hue shell < script.py
How to make a certain user a Hue admin
build/env/bin/hue shell
Then set these properties to true:
from django.contrib.auth.models import User
a = User.objects.get(username='hdfs')
a.is_staff = True
a.is_superuser = True
a.set_password('my_secret')
a.save()
** Via a command**
Go on the Hue machine, then in the Hue home directory and either type:
To change the password of the currently logged in Unix user:
build/env/bin/hue changepassword
If you don't remember the admin username, create a new Hue admin (you will then also be able to login and could change the password of another user in Hue):
build/env/bin/hue createsuperuser
[Read more about it here](http://gethue.com/password-management-in-hue/).
<div class="note">
Above works with the `AllowFirstUserBackend`, it might be different if another backend is used.
</div>
### Change your maps look and feel
The properties we need to tweak are leaflet_tile_layer and leaflet_tile_layer_attribution, that can be configured in the hue.ini file:
<pre>
[desktop]
leaflet_tile_layer=https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}
leaflet_tile_layer_attribution='Tiles © Esri — Source: Esri, i-cubed, USDA, USGS, AEX, GeoEye, Getmapping, Aerogrid, IGN, IGP, UPR-EGP, and the GIS User Community'
</pre>
[Read more about it here](http://gethue.com/change-your-maps-look-and-feel/).
### Configure a Proxy
We explained how to run Hue with NGINX serving the static files or under Apache. If you use another proxy, you might need to set these options:
<pre>
[desktop]
# Enable X-Forwarded-Host header if the load balancer requires it.
use_x_forwarded_host=false
# Support for HTTPS termination at the load-balancer level with SECURE_PROXY_SSL_HEADER.
secure_proxy_ssl_header=false
</pre>
### Configuring SSL
You can configure Hue to serve over HTTPS.
1. Configure Hue to use your private key by adding the following
options to the `hue.ini` configuration file:
ssl_certificate=/path/to/certificate
ssl_private_key=/path/to/key
2. Ideally, you would have an appropriate key signed by a Certificate Authority.
If you're just testing, you can create a self-signed key using the `openssl`
command that may be installed on your system:
Create a key:
openssl genrsa 1024 > host.key
Create a self-signed certificate:
openssl req -new -x509 -nodes -sha1 -key host.key > host.cert
<div class="note">
Self-signed Certificates and File Uploads
To upload files using the Hue File Browser over HTTPS requires
using a proper SSL Certificate. Self-signed certificates don't
work.
</div>
### SASL
When getting a bigger result set from Hive/Impala or bigger files like images from HBase, the response requires to increase
the buffer size of SASL lib for thrift sasl communication.
<pre>
[desktop]
# This property specifies the maximum size of the receive buffer in bytes in thrift sasl communication (default 2 MB).
sasl_max_buffer=2 * 1024 * 1024
</pre>
### User Admin Configuration
In the `[useradmin]` section of the configuration file, you can
_optionally_ specify the following:
default_user_group::
The name of a default group that is suggested when creating a
user manually. If the LdapBackend or PamBackend are configured
for doing user authentication, new users will automatically be
members of the default group.
### Banner
You can add a custom banner to the Hue Web UI by applying HTML directly to the property, banner_top_html. For example:
banner_top_html=<H4>My company's custom Hue Web UI banner</H4>
### Splash Screen
You can customize a splash screen on the login page by applying HTML directly to the property, login_splash_html. For example:
[desktop]
[[custom]]
login_splash_html=WARNING: You are required to have authorization before you proceed.
### Custom Logo
There is also the possibility to change the logo for further personalization.
[desktop]
[[custom]]
# SVG code to replace the default Hue logo in the top bar and sign in screen
# e.g. <image xlink:href="/static/desktop/art/hue-logo-mini-white.png" x="0" y="0" height="40" width="160" />
logo_svg=
You can go crazy and write there any SVG code you want. Please keep in mind your SVG should be designed to fit in a 160×40 pixels space. To have the same ‘hearts logo’ you can see above, you can type this code
[desktop]
[[custom]]
logo_svg='<g><path stroke="null" id="svg_1" d="m44.41215,11.43463c-4.05017,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35797,11.71793 16.891,22.23443 18.41163,23.95773c1.5181,-1.36927 22.7696,-12.43803 18.4129,-23.96533z" fill="#ffffff"/> <path stroke="null" id="svg_2" d="m98.41246,10.43463c-4.05016,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35796,11.71793 16.891,22.23443 18.41164,23.95773c1.5181,-1.36927 22.76959,-12.43803 18.41289,-23.96533z" fill="#FF5A79"/> <path stroke="null" id="svg_3" d="m154.41215,11.43463c-4.05016,-10.71473 -17.19753,-5.90773 -18.41353,-0.5567c-1.672,-5.70253 -14.497,-9.95663 -18.411,0.5643c-4.35796,11.71793 16.891,22.23443 18.41164,23.95773c1.5181,-1.36927 22.76959,-12.43803 18.41289,-23.96533z" fill="#ffffff"/> </g>'
Read more about it in [Hue with a custom logo](http://gethue.com/hue-with-a-custom-logo/) post.
### Storing passwords in file script
This [article details how to store passwords in a script](http://gethue.com/storing-passwords-in-script-rather-than-hue-ini-files/) launched from the OS rather than have clear text passwords in the hue*.ini files.
Some passwords go in Hue ini configuration file making them easily visible to Hue admin user or by users of cluster management software. You can use the password_script feature to prevent passwords from being visible.
### Idle session timeout
Hue now offers a new property, idle_session_timeout, that can be configured in the hue.ini file:
<pre>
[desktop]
[[auth]]
idle_session_timeout=600
</pre>
When idle_session_timeout is set, users will automatically be logged out after N (e.g. – 600) seconds of inactivity and be prompted to login again:
[Read more about it here](http://gethue.com/introducing-the-new-login-modal-and-idle-session-timeout/).
### Auditing
Read more about [Auditing User Administration Operations with Hue and Cloudera Navigator](http://gethue.com/auditing-user-administration-operations-with-hue-and-cloudera-navigator-2/).
## Configuration for external services
These configuration variables are under the `[hadoop]` section in
the `hue.ini` configuration file.
### Hadoop and other services
Depending on which apps you need, you need to make sure that some Hadoop services
are already setup (that way Hue can talk to them).
<pre>
|-------------|--------------------------------------------------------|
| Component | Applications |
|-------------|--------------------------------------------------------|
| Editor | SQL (Hive, Impala, any database...), Pig, Spark... |
| Browsers | YARN, Oozie, Impala, HBase, Livy |
| Scheduler | Oozie |
| Dashboard | Solr, SQL (Impala, Hive...) |
|-------------|--------------------------------------------------------|
</pre>
##### Hadoop Configuration
You need to enable WebHdfs or run an HttpFS server. To turn on WebHDFS,
add this to your `hdfs-site.xml` and *restart* your HDFS cluster.
Depending on your setup, your `hdfs-site.xml` might be in `/etc/hadoop/conf`.
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
You also need to add this to `core-site.xml`.
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
If you place your Hue Server outside the Hadoop cluster, you can run
an HttpFS server to provide Hue access to HDFS. The HttpFS service requires
only one port to be opened to the cluster.
Also add this in `httpfs-site.xml` which might be in `/etc/hadoop-httpfs/conf`.
<property>
<name>httpfs.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>httpfs.proxyuser.hue.groups</name>
<value>*</value>
</property>
#### Configure Oozie
Hue submits MapReduce jobs to Oozie as the logged in user. You need to
configure Oozie to accept the `hue` user to be a proxyuser. Specify this in
your `oozie-site.xml` (even in a non-secure cluster), and restart Oozie:
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
<value>*</value>
</property>
#### Hive Configuration
Hue's Hive SQL Editor application helps you use Hive to query your data.
It depends on a Hive Server 2 running in the cluster. Please read
this section to ensure a proper integration.
Your Hive data is stored in HDFS, normally under `/user/hive/warehouse`
(or any path you specify as `hive.metastore.warehouse.dir` in your
`hive-site.xml`). Make sure this location exists and is writable by
the users whom you expect to be creating tables. `/tmp` (on the local file
system) must be world-writable (1777), as Hive makes extensive use of it.
<div class="note">
In `hue.ini`, modify `hive_conf_dir` to point to the
directory containing `hive-site.xml`.
</div>
#### Hive and Impala High Availability (HA)
HiveServer2 and Impala support High Availability through a “load balancer”.
One caveat is that Hue’s underlying Thrift libraries reuse TCP connections in a
pool, a single user session may not have the same Impala or Hive TCP connection.
If a TCP connection is balanced away from the previously selected HiveServer2
or Impalad instance, the user session and its queries can be lost and trigger
the “Results have expired” or “Invalid session Id” errors.
To prevent sessions from being lost, you need configure the load balancer with
“source” algorithm to ensure each Hue instance sends all traffic to a single
HiveServer2/Impalad instance. Yes, this is not true load balancing, but a
configuration for failover High Availability. HiveSever2 or Impala coordinators
already distribute the work across the cluster so this is not an issue.
To enable an optimal load distribution that works for everybody, you can create
multiple profiles in our load balancer, per port for Hue clients and non-Hue
clients like Hive or Impala. You can configure non-Hue clients to distribute loads
with “roundrobin” or “leastconn” and configure Hue clients with “source”
(source IP Persistence) on dedicated ports, for example, 10015 for Hive beeline
commands, 10016 for Hue, 21051 for Hue-Impala interactions while 25003 for Impala shell.
You can configure the HaProxy to have two different ports associated with
different load balancing algorithms. Here is a sample configuration (haproxy.cfg)
for Hive and Impala HA on a secure cluster.
<pre>
frontend hiveserver2_front
bind *:10015 ssl crt /path/to/cert_key.pem
mode tcp
option tcplog
default_backend hiveserver2
backend hiveserver2
balance roundrobin
mode tcp
server hs2_1 host-2.com:10000 ssl ca-file /path/to/truststore.pem check
server hs2_2 host-3.com:10000 ssl ca-file /path/to/truststore.pem check
server hs2_3 host-1.com:10000 ssl ca-file /path/to/truststore.pem check
frontend hivejdbc_front
bind *:10016 ssl crt /path/to/cert_key.pem
mode tcp
option tcplog
stick match src
stick-table type ip size 200k expire 30m
default_backend hivejdbc
backend hivejdbc
balance source
mode tcp
server hs2_1 host-2.com:10000 ssl ca-file /path/to/truststore.pem check
server hs2_2 host-3.com:10000 ssl ca-file /path/to/truststore.pem check
server hs2_3 host-1.com:10000 ssl ca-file /path/to/truststore.pem check
</pre>
And here is an example for impala HA configuration on a secure cluster.
<pre>
frontend impala_front
bind *:25003 ssl crt /path/to/cert_key.pem
mode tcp
option tcplog
default_backend impala
backend impala
balance leastconn
mode tcp
server impalad1 host-3.com:21000 ssl ca-file /path/to/truststore.pem check
server impalad2 host-2.com:21000 ssl ca-file /path/to/truststore.pem check
server impalad3 host-4.com:21000 ssl ca-file /path/to/truststore.pem check
frontend impalajdbc_front
bind *:21051 ssl crt /path/to/cert_key.pem
mode tcp
option tcplog
stick match src
stick-table type ip size 200k expire 30m
default_backend impalajdbc
backend impalajdbc
balance source
mode tcp
server impalad1 host-3.com:21050 ssl ca-file /path/to/truststore.pem check
server impalad2 host-2.com:21050 ssl ca-file /path/to/truststore.pem check
server impalad3 host-4.com:21050 ssl ca-file /path/to/truststore.pem check
</pre>
Note: “check” is required at end of each line to ensure HaProxy can detect any
unreachable Impalad/HiveServer2 server, so HA failover can be successful. Without
TCP check, you may hit the “TSocket reads 0 byte” error when the
Impalad/HiveServer2 server Hue tries to connect is down.
After editing the /etc/haproxy/haproxy.cfg file, run following commands to
restart HaProxy service and check the service restarts successfully.
<pre>
service haproxy restart
service haproxy status
</pre>
Also we need add following blocks into hue.ini.
<pre>
[impala]
server_port=21051
[beeswax]
hive_server_port=10016
</pre>
Read more about it in the [How to optimally configure your Analytic Database for
High Availability with Hue and other SQL clients](http://gethue.com/how-to-opti
mally-configure-your-analytic-database-for-high-availability-with-hue-and-other-sql-clients) post.
### Firewall
Hue currently requires that the machines within your cluster can connect to
each other freely over TCP. The machines outside your cluster must be able to
open TCP port 8888 on the Hue Server (or the configured Hue web HTTP port)
to interact with the system.
### Files and Object Store
#### HDFS Cluster
Hue supports one HDFS cluster. That cluster should be defined
under the `[[[default]]]` sub-section.
fs_defaultfs::
This is the equivalence of `fs.defaultFS` (aka `fs.default.name`) in
Hadoop configuration.
webhdfs_url::
You can also set this to be the HttpFS url. The default value is the HTTP
port on the NameNode.
hadoop_conf_dir::
This is the configuration directory of the HDFS, typically
`/etc/hadoop/conf`.
#### S3
Hue’s filebrowser can now allow users to explore, manage, and upload data in an S3 account, in addition to HDFS.
Read more about it in the [S3 User Documentation](../user-guide/user-guide.html#s3).
In order to add an S3 account to Hue, you’ll need to configure Hue with valid S3 credentials, including the access key ID and secret access key: [AWSCredentials](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html)
These keys can securely stored in a script that outputs the actual access key and secret key to stdout to be read by Hue (this is similar to how Hue reads password scripts). In order to use script files, add the following section to your hue.ini configuration file:
<pre>
[aws]
[[aws_accounts]]
[[[default]]]
access_key_id_script=/path/to/access_key_script
secret_access_key_script= /path/to/secret_key_script
allow_environment_credentials=false
region=us-east-1
</pre>
Alternatively (but not recommended for production or secure environments), you can set the access_key_id and secret_access_key values to the plain-text values of your keys:
<pre>
[aws]
[[aws_accounts]]
[[[default]]]
access_key_id=s3accesskeyid
secret_access_key=s3secretaccesskey
allow_environment_credentials=false
region=us-east-1
</pre>
The region should be set to the AWS region corresponding to the S3 account. By default, this region will be set to ‘us-east-1’.
<div class="note">
Using Ceph
New end points have been added in [HUE-5420](https://issues.cloudera.org/browse/HUE-5420)
</div>
#### ADLS
Hue’s file browser can now allow users to explore, manage, and upload data in an ADLS, in addition to HDFS and S3.
Read more about it in the [ADLS User Documentation](../user-guide/user-guide.html#adls).
In order to add an ADLS account to Hue, you’ll need to configure Hue with valid ADLS credentials, including the client ID, client secret and tenant ID.
These keys can securely stored in a script that outputs the actual access key and secret key to stdout to be read by Hue (this is similar to how Hue reads password scripts). In order to use script files, add the following section to your hue.ini configuration file:
<pre>
[adls]
[[azure_accounts]]
[[[default]]]
client_id_script=/path/to/client_id_script.sh
client_secret_script=/path/to/client_secret_script.sh
tenant_id_script=/path/to/tenant_id_script.sh
[[adls_clusters]]
[[[default]]]
fs_defaultfs=adl://<account_name>.azuredatalakestore.net
webhdfs_url=https://<account_name>.azuredatalakestore.net
</pre>
Alternatively (but not recommended for production or secure environments), you can set the client_secret value in plain-text:
<pre>
[adls]
[[azure_account]]
[[[default]]]
client_id=adlsclientid
client_secret=adlsclientsecret
tenant_id=adlstenantid
[[adls_clusters]]
[[[default]]]
fs_defaultfs=adl://<account_name>.azuredatalakestore.net
webhdfs_url=https://<account_name>.azuredatalakestore.net
</pre>
### Yarn (MR2) Cluster
Hue supports one or two Yarn clusters (two for HA). These clusters should be defined
under the `[[[default]]]` and `[[[ha]]]` sub-sections.
resourcemanager_host::
The host running the ResourceManager.
resourcemanager_port::
The port for the ResourceManager REST service.
logical_name::
NameNode logical name.
submit_to::
To enable the section, set to True.
### Impala Configuration
In the `[impala]` section of the configuration file, you can
_optionally_ specify the following:
server_host::
The hostname or IP that the Impala Server should bind to. By
default it binds to `localhost`, and therefore only serves local
IPC clients.
[LDAP or PAM pass-through authentication with Hive or Impala and Impersonation
](http://gethue.com/ldap-or-pam-pass-through-authentication-with-hive-or-impala/).
### Hive Configuration
In the `[beeswax]` section of the configuration file, you can
_optionally_ specify the following:
beeswax_server_host::
The hostname or IP that the Hive Server should bind to. By
default it binds to `localhost`, and therefore only serves local
IPC clients.
hive_conf_dir::
The directory containing your `hive-site.xml` Hive
configuration file.
### JDBC
Use the query editor with any [JDBC](http://gethue.com/custom-sql-query-editors/) or Django-compatible database.
### Oozie Configuration
In the `[liboozie]` section of the configuration file, you should
specify:
oozie_url::
The URL of the Oozie service. It is the same as the `OOZIE_URL`
environment variable for Oozie.
### Solr Configuration
In the `[search]` section of the configuration file, you should
specify:
solr_url::
The URL of the Solr service.
### HBase Configuration
In the `[hbase]` section of the configuration file, you should
specify:
hbase_clusters::
Comma-separated list of HBase Thrift servers for clusters in the format of "(name|host:port)".
# Administration
Now that you've installed and started Hue, you can feel free to skip ahead
to the <<usage,Using Hue>> section. Administrators may want to refer to this
section for more details about managing and operating a Hue installation.
## Configuration Validation
Hue can detect certain invalid configuration.
To view the configuration of a running Hue instance, navigate to
`http://myserver:8888/hue/dump_config`, also accessible through the About
application.
## Server Logs
Displays the Hue Server log and allows you to download the log to your
local system in a zip file.
## Threads
Read more on the [Threads and Metrics pages
blog post](http://gethue.com/easier-administration-of-hue-with-the-new-threads-and-metrics-pages/)
Threads page can be very helpful in debugging purposes. It includes a daemonic thread and the thread objects serving concurrent requests. The host name, thread name identifier and current stack frame of each are displayed. Those are useful when Hue “hangs”, sometimes in case of a request too CPU intensive. There is also a REST API to get the dump of Threads using 'desktop/debug/threads'
## Metrics
Read more on the [Threads and Metrics pages
blog post](http://gethue.com/easier-administration-of-hue-with-the-new-threads-and-metrics-pages/)
Hue uses the **PyFormance** Python library to collect the metrics. These metrics are represented as gauge, counters, meter, rate of events over time, histogram, statistical distribution of values. A REST API endpoint '/desktop/metrics/' to get all the metrics dump as json is also exposed
The below metrics of most concern to us are displayed on the page:
- requests.active
- requests.exceptions
- requests.response-time
- threads.daemon
- threads.total
- users
- users.active
One of the most useful ones are the percentiles of response time of requests and the count of active users.
Admins can either filter a particular property in all the metrics or select a particular metric for all properties
## User management
The User Admin application lets a superuser add, delete, and manage Hue
users and groups, and configure group permissions. Superusers can add
users and groups individually, or import them from an LDAP directory.
Group permissions define the Hue applications visible to group members
when they log into Hue and the application features available to them.
Click the **User Admin** icon in the top right navigation bar under your username.
### LDAP
[LDAP or PAM pass-through authentication with Hive or Impala and Impersonation
](http://gethue.com/ldap-or-pam-pass-through-authentication-with-hive-or-impala/).
### Users
The User Admin application provides two levels of user privileges:
superusers and users.
- Superusers — The first user who logs into Hue after its initial
installation becomes the first superuser. Superusers have
permissions to perform administrative functions:
- Add and delete users
- Add and delete groups
- Assign permissions to groups
- Change a user into a superuser
- Import users and groups from an LDAP server
- Users — can change their name, e-mail address, and password and log
in to Hue and run Hue applications, subject to the permissions
provided by the Hue groups to which they belong.
#### Adding a User
1. In the **User Admin** page, click **Add User**.
2. In the **Credentials** screen, add required information about the
user. Once you provide the required information you can click the
wizard step tabs to set other information.
<table>
<tr><td>Username</td><td> A user name that contains only letters, numbers, and underscores;
blank spaces are not allowed and the name cannot begin with a
number. The user name is used to log into Hue and in file
permissions and job submissions. This is a required field.
</td></tr>
<tr><td>Password and Password confirmation</td><td> A password for the user. This is a required field.</td></tr>
<tr><td>Create home directory</td><td> Indicate whether to create a directory named /user/username in HDFS.
For non-superusers, the user and group of the directory are
username. For superusers, the user and group are username and
supergroup.</td></tr></table>
3. Click **Add User** to save the information you specified and close
the **Add User** wizard or click **Next**.
4. In the **Names and Groups** screen, add optional information.
<table>
<tr><td>First name and Last name</td><td> The user's first and last name.
</td></tr>
<tr><td>E-mail address</td><td>The user's e-mail address. The e-mail address is used by the Job
Designer and Beeswax applications to send users an e-mail message
after certain actions have occurred. The Job Designer sends an
e-mail message after a job has completed. Beeswax sends a message
after a query has completed. If an e-mail address is not specified,
the application will not attempt to email the user.</td></tr>
<tr><td>Groups</td><td> The groups to which the user belongs. By default, a user is assigned
to the **default** group, which allows access to all applications.
See [Permissions](#permissions).</td></tr></table>
5. Click **Add User** to save the information you specified and close
the **Add User** wizard or click **Next**.
6. In the **Advanced** screen, add status information.
<table>
<tr><td>Active</td><td> Indicate that the user is enabled and allowed to log in. Default: checked.</td></tr>
<tr><td>Superuser status</td><td> Assign superuser privileges to the user.</td></tr></table>
7. Click **Add User** to save the information you specified and close
the **Add User** wizard.
#### Deleting a User
1. Check the checkbox next to the user name and click **Delete**.
2. Click **Yes** to confirm.
#### Editing a User
1. Click the user you want to edit in the **Hue Users** list.
2. Make the changes to the user and then click **Update user**.
#### Importing Users from an LDAP Directory
 **Note**:
Importing users from an LDAP directory does not import any password
information. You must add passwords manually in order for a user to log
in.
To add a user from an external LDAP directory:
1. Click **Add/sync LDAP user**.
2. Specify the user properties:
<table>
<tr><td>Username</td><td>The user name.</td></tr>
<tr><td>Distinguished name</td><td>Indicate that Hue should use a full distinguished name for the user.
This imports the user's first and last name, username, and email,
but does not store the user password.</td></tr>
<tr><td>Create home directory</td><td> Indicate that Hue should create a home directory for the user in
HDFS.</td></tr></table>
3. Click **Add/sync user**.
If the user already exists in the User Admin, the user information
in User Admin is synced with what is currently in the LDAP
directory.
#### Syncing Users and Groups with an LDAP Directory
You can sync the Hue user database with the current state of the LDAP
directory using the **Sync LDAP users/groups** function. This updates
the user and group information for the already imported users and
groups. It does not import any new users or groups.
1. Click **Sync LDAP users/groups**.
2. The **Create Home Directories** checkbox creates home directories in
HDFS for existing imported members that don't have home directories.
3. In the **Sync LDAP users and groups** dialog, click **Sync** to
perform the sync.
### Groups
Superusers can add and delete groups, configure group permissions, and
assign users to group memberships.
#### Adding a Group
You can add groups, and delete the groups you've added. You can also
import groups from an LDAP directory.
1. In the **User Admin** window, click **Groups** and then click **Add
Group**.
2. Specify the group properties:
<table>
<tr><td>Name</td><td> The name of the group. Group names can only be letters, numbers, and
underscores; blank spaces are not allowed.</td></tr>
<tr><td>Members</td><td>The users in the group. Check user names or check Select all.</td></tr>
<tr><td>Permissions</td><td>The applications the users in the group can access. Check
application names or check Select all.</td></tr></table>
3. Click **Add group**.
#### Adding Users to a Group
1. In the **User Admin** window, click **Groups**.
2. Click the group.
3. To add users to the group, check the names in the list provided or
check **Select All**.
4. Click **Update group**.
#### Deleting a Group
1. Click **Groups**.
2. Check the checkbox next to the group and click **Delete**.
3. Click **Yes** to confirm.
#### Importing Groups from an LDAP Directory
1. From the **Groups** tab, click **Add/sync LDAP group**.
2. Specify the group properties:
<table>
<tr><td>Name</td><td> The name of the group.</td></tr>
<tr><td>Distinguished name</td><td> Indicate that Hue should use a full distinguished name for the
group.</td></tr>
<tr><td>Import new members</td><td> Indicate that Hue should import the members of the group.</td></tr>
<tr><td>Import new members from all subgroups</td><td>
Indicate that Hue should import the members of the subgroups.</td></tr>
<tr><td>Create home directories</td><td> Indicate that Hue should create home directories in HDFS for the
imported members.</td></tr>
</table>
3. Click **Add/sync group**.
<a id="permissions"></a>
Permissions
-----------
Permissions for Hue applications are granted to groups, with users
gaining permissions based on their group membership. Group permissions
define the Hue applications visible to group members when they log into
Hue and the application features available to them.
1. Click **Permissions**.
2. Click the application for which you want to assign permissions.
3. Check the checkboxes next to the groups you want to have permission
for the application. Check **Select all** to select all groups.
4. Click **Update permission**. The new groups will appear in the
Groups column in the **Hue Permissions** list.
[Read more about it here](http://gethue.com/how-to-manage-permissions-in-hue/).
## Process Hierarchy
A script called `supervisor` manages all Hue processes. The supervisor is a
watchdog process -- its only purpose is to spawn and monitor other processes.
A standard Hue installation starts and monitors the following processes:
* `runcpserver` - a web server based on CherryPy that provides the core web
functionality of Hue
If you have installed other applications into your Hue instance, you may see
other daemons running under the supervisor as well.
You can see the supervised processes running in the output of `ps -f -u hue`:
UID PID PPID C STIME TTY TIME CMD
hue 8685 8679 0 Aug05 ? 00:01:39 /usr/share/hue/build/env/bin/python /usr/share/hue/build/env/bin/desktop runcpserver
Note that the supervisor automatically restarts these processes if they fail for
any reason. If the processes fail repeatedly within a short time, the supervisor
itself shuts down.
## Logging
The Hue logs are found in `/var/log/hue`, or in a `logs` directory under your
Hue installation root. Inside the log directory you can find:
* An `access.log` file, which contains a log for all requests against the Hue
web server.
* A `supervisor.log` file, which contains log information for the supervisor
process.
* A `supervisor.out` file, which contains the stdout and stderr for the
supervisor process.
* A `.log` file for each supervised process described above, which contains
the logs for that process.
* A `.out` file for each supervised process described above, which contains
the stdout and stderr for that process.
If users on your cluster have problems running Hue, you can often find error
messages in these log files. If you are unable to start Hue from the init
script, the `supervisor.log` log file can often contain clues.
### Viewing Recent Log Messages
In addition to logging `INFO` level messages to the `logs` directory, the Hue
web server keeps a small buffer of log messages at all levels in memory. You can
view these logs by visiting `http://myserver:8888/hue/logs`. The `DEBUG` level
messages shown can sometimes be helpful in troubleshooting issues.
## Troubleshooting
To troubleshoot why Hue is slow or consuming high memory, admin can enable instrumentation by setting the `instrumentation` flag to True.
<pre>
[desktop]
instrumentation=true
</pre>
If `django_debug_mode` is enabled, instrumentation is automatically enabled. This flag appends the response time and the total peak memory used since Hue started for every logged request.
##### Instrumentation enabled:
<pre>
[17/Apr/2018 15:18:43 -0700] access INFO 127.0.0.1 admin - "POST /jobbrowser/jobs/ HTTP/1.1" `returned in 97ms (mem: 135mb)`
</pre>
##### Instrumentation not enabled:
<pre>
[23/Apr/2018 10:59:01 -0700] INFO 127.0.0.1 admin - "POST /jobbrowser/jobs/ HTTP/1.1" returned in 88ms
</pre>
## Database
Hue requires a SQL database to store small amounts of data, including user
account information as well as history of job submissions and Hive queries.
By default, Hue is configured to use the embedded database SQLite for this
purpose, and should require no configuration or management by the administrator.
However, MySQL is the recommended database to use. This section contains
instructions for configuring Hue to access MySQL and other databases.
### Inspecting the Database
The default SQLite database used by Hue is located in: `/usr/share/hue/desktop/desktop.db`.
You can inspect this database from the command line using the `sqlite3`
program or typing `/usr/share/hue/build/env/bin/hue dbshell'. For example:
sqlite3 /usr/share/hue/desktop/desktop.db
SQLite version 3.6.22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select username from auth_user;
admin
test
sample
sqlite>
It is strongly recommended that you avoid making any modifications to the
database directly using SQLite, though this trick can be useful for management
or troubleshooting.
### Backing up the Database
If you use the default SQLite database, then copy the `desktop.db` file to
another node for backup. It is recommended that you back it up on a regular
schedule, and also that you back it up before any upgrade to a new version of
Hue.
### Configuring to Access Another Database
Although SQLite is the default database type, some advanced users may prefer
to have Hue access an alternate database type. Note that if you elect to
configure Hue to use an external database, upgrades may require more manual
steps in the future.
The following instructions are for MySQL, though you can also configure Hue to
work with other common databases such as PostgreSQL and Oracle.
<div class="note">
Note that Hue has only been tested with SQLite and MySQL database backends.
</div>
### Configuring to Store Data in MySQL
To configure Hue to store data in MySQL:
1. Create a new database in MySQL and grant privileges to a Hue user to manage
this database.
mysql> create database hue;
Query OK, 1 row affected (0.01 sec)
mysql> grant all on hue.* to 'hue'@'localhost' identified by 'secretpassword';
Query OK, 0 rows affected (0.00 sec)
2. Shut down Hue if it is running.
3. To migrate your existing data to MySQL, use the following command to dump the
existing database data to a text file. Note that using the ".json" extension
is required.
/usr/share/hue/build/env/bin/hue dumpdata > $some-temporary-file.json
4. Open the `hue.ini` file in a text editor. Directly below the
`[[database]]` line, add the following options (and modify accordingly for
your MySQL setup):
host=localhost
port=3306
engine=mysql
user=hue
password=secretpassword
name=hue
5. As the Hue user, configure Hue to load the existing data and create the
necessary database tables:
/usr/share/hue/build/env/bin/hue syncdb --noinput
mysql -uhue -psecretpassword -e "DELETE FROM hue.django_content_type;"
/usr/share/hue/build/env/bin/hue loaddata $temporary-file-containing-dumped-data.json
Your system is now configured and you can start the Hue server as normal.
# Using Hue
After installation, you can use Hue by navigating to `http://myserver:8888/`.
The link:user-guide/index.html[user guide] will help users go through the various installed applications.
## Supported Browsers
The two latest LTS versions of each browsers:
* Edge
* Safari
* Chrome
* Firefox
## Feedback
Your feedback is welcome. The best way to send feedback is to join the
https://groups.google.com/a/cloudera.org/group/hue-user[mailing list], and
send e-mail, to mailto:[email protected][[email protected]].
## Reporting Bugs
If you find that something doesn't work, it'll often be helpful to include logs
from your server. Please include the
logs as a zip (or cut and paste the ones that look relevant) and send those with
your bug reports.
</div>
</div>
<script src="../js/jquery.min.js"></script>
<script src="../js/jquery.treed.js"></script>
<script src="../js/jquery.highlight.js"></script>
<script src="../js/hue-docs.js"></script>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-37637545-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-37637545-1');
</script>
| 36.715859 | 867 | 0.739017 | eng_Latn | 0.986552 |
bb06db603ea9c855758d14456ce810452d162189 | 3,321 | md | Markdown | README.md | calavera/serverless-rust-demo | 27ae8e011c761ae38e4778260e8a70edd236aa2c | [
"MIT-0"
] | null | null | null | README.md | calavera/serverless-rust-demo | 27ae8e011c761ae38e4778260e8a70edd236aa2c | [
"MIT-0"
] | null | null | null | README.md | calavera/serverless-rust-demo | 27ae8e011c761ae38e4778260e8a70edd236aa2c | [
"MIT-0"
] | null | null | null | ## Serverless Rust Demo

<p align="center">
<img src="imgs/diagram.png" alt="Architecture diagram"/>
</p>
This is a simple serverless application built in Rust. It consists of an API Gateway backed by four Lambda functions and a DynamoDB table for storage.
This single crate will create [five different binaries](./src/bin), one for each Lambda function. It uses an [hexagonal architecture pattern](https://aws.amazon.com/blogs/compute/developing-evolutionary-architecture-with-aws-lambda/) to decouple the [entry points](./src/entrypoints/), from the main [domain logic](./src/lib.rs), the [storage component](./src/store), and the [event bus component](./src/event_bus).
## 🏗️ Deployment and testing
### Requirements
* [Rust](https://www.rust-lang.org/) 1.56.0 or higher
* [cargo-zigbuild](https://github.com/messense/cargo-zigbuild) and [Zig](https://ziglang.org/) for cross-compilation
* [jq](https://stedolan.github.io/jq/) for tooling specific to this project
* The [AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html) 1.33.0 or higher for deploying to the cloud
* [Artillery](https://artillery.io/) for load-testing the application
### Commands
You can use the following commands at the root of this repository to test, build, and deploy this project:
```bash
# Optional: check if tools are installed
make setup
# Run unit tests
make tests-unit
# Compile and prepare Lambda functions
make build
# Deploy the functions on AWS
make deploy
# Run integration tests against the API in the cloud
make tests-integ
```
## Load Test
[Artillery](https://www.artillery.io/) is used to make 300 requests / second for 10 minutes to our API endpoints. You can run this
with the following command:
```bash
make tests-load
```
### CloudWatch Logs Insights
Using this CloudWatch Logs Insights query you can analyse the latency of the requests made to the Lambda functions.
The query separates cold starts from other requests and then gives you p50, p90 and p99 percentiles.
```
filter @type="REPORT"
| fields greatest(@initDuration, 0) + @duration as duration, ispresent(@initDuration) as coldStart
| stats count(*) as count, pct(duration, 50) as p50, pct(duration, 90) as p90, pct(duration, 99) as p99, max(duration) as max by coldStart
```

## 🦀 Getting started with Rust on Lambda
If you want to get started with Rust on Lambda, you can use [these cookiecutter templates](https://github.com/aws-samples/cookiecutter-aws-sam-rust) to setup your project.
## 👀 With other languages
You can find implementations of this project in other languages here:
* [🐿️ Go](https://github.com/aws-samples/serverless-go-demo)
* [⭐ Groovy](https://github.com/aws-samples/serverless-groovy-demo)
* [☕ Java with GraalVM](https://github.com/aws-samples/serverless-graalvm-demo)
* [🤖 Kotlin](https://github.com/aws-samples/serverless-kotlin-demo)
* [🏗️ TypeScript](https://github.com/aws-samples/serverless-typescript-demo)
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This library is licensed under the MIT-0 License. See the LICENSE file.
| 37.314607 | 415 | 0.753388 | eng_Latn | 0.931723 |
bb07598f837b52b81e6c1d9ed9e7eaefe2e9b66b | 777 | md | Markdown | ads/vendors/adsnative.md | becca-bailey/amphtml | 7049bdef2b53aa55368a743c06f8ea3c3f54eaaa | [
"Apache-2.0"
] | 16,831 | 2015-10-07T13:08:48.000Z | 2022-03-31T12:52:40.000Z | ads/vendors/adsnative.md | becca-bailey/amphtml | 7049bdef2b53aa55368a743c06f8ea3c3f54eaaa | [
"Apache-2.0"
] | 27,413 | 2015-10-07T12:41:00.000Z | 2022-03-31T23:41:02.000Z | ads/vendors/adsnative.md | becca-bailey/amphtml | 7049bdef2b53aa55368a743c06f8ea3c3f54eaaa | [
"Apache-2.0"
] | 5,001 | 2015-10-07T12:42:26.000Z | 2022-03-31T08:39:56.000Z | # AdsNative
## Example
```html
<amp-ad width="300" height="250" type="adsnative" data-anapiid="123456">
</amp-ad>
```
## Configuration
For configuration details, see [AdsNative's documentation](http://dev.adsnative.com).
### Required parameters
- `width`: required by amp
- `height`: required by amp
- `data-anapiid`: the api id may be used instead of network and widget id
- `data-annid`: the network id must be paired with widget id
- `data-anwid`: the widget id must be paired with network id
### Optional parameters
- `data-anapiid`: the api id
- `data-anwid`: the widget id
- `data-antid`: the template id
- `data-ancat`: a comma separated list of categories
- `data-ankv`: a list of key value pairs in the format `"key1:value1, key2:value2"`.
| 26.793103 | 86 | 0.689833 | eng_Latn | 0.967593 |
bb077df2dde2dbf583a2498611f9d3b87cda4a8b | 6,469 | md | Markdown | docs/reference/classes/account-person.schema.md | jekelly-adobe/xdm | 539d34b4d62e6716696630d4631326cf3c20fb1c | [
"CC-BY-4.0"
] | 192 | 2017-12-04T19:52:02.000Z | 2022-03-29T17:01:54.000Z | docs/reference/classes/account-person.schema.md | jekelly-adobe/xdm | 539d34b4d62e6716696630d4631326cf3c20fb1c | [
"CC-BY-4.0"
] | 819 | 2017-12-05T14:30:03.000Z | 2022-03-31T19:20:24.000Z | docs/reference/classes/account-person.schema.md | jekelly-adobe/xdm | 539d34b4d62e6716696630d4631326cf3c20fb1c | [
"CC-BY-4.0"
] | 259 | 2017-12-04T19:52:11.000Z | 2022-03-25T08:18:09.000Z |
# XDM Business Account Person Relation Schema
```
https://ns.adobe.com/xdm/classes/account-person
```
This class is used to capture XDM business account person relationship attributes.
| [Abstract](../../abstract.md) | [Extensible](../../extensions.md) | [Status](../../status.md) | [Identifiable](../../id.md) | [Custom Properties](../../extensions.md) | [Additional Properties](../../extensions.md) | Defined In |
|-------------------------------|-----------------------------------|---------------------------|-----------------------------|------------------------------------------|----------------------------------------------|------------|
| Can be instantiated | Yes | Experimental | No | Forbidden | Permitted | [classes/account-person.schema.json](classes/account-person.schema.json) |
## Schema Hierarchy
* XDM Business Account Person Relation `https://ns.adobe.com/xdm/classes/account-person`
* [Record Schema](../behaviors/record.schema.md) `https://ns.adobe.com/xdm/data/record`
* [External Source System Audit Details Mixin](../mixins/shared/external-source-system-audit-details.schema.md) `https://ns.adobe.com/xdm/common/external-source-system-audit-details`
## XDM Business Account Person Relation Example
```json
{
"xdm:accountPersonID": "AAAPPP111"
}
```
# XDM Business Account Person Relation Properties
| Property | Type | Required | Defined by |
|----------|------|----------|------------|
| [@id](#id) | `string` | Optional | [Record Schema](../behaviors/record.schema.md#id) |
| [xdm:accountID](#xdmaccountid) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:accountPersonID](#xdmaccountpersonid) | `string` | **Required** | XDM Business Account Person Relation (this schema) |
| [xdm:currencyCode](#xdmcurrencycode) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:extSourceSystemAudit](#xdmextsourcesystemaudit) | External Source System Audit Attributes | Optional | [External Source System Audit Details Mixin](../mixins/shared/external-source-system-audit-details.schema.md#xdmextsourcesystemaudit) |
| [xdm:isActive](#xdmisactive) | `boolean` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:isDirect](#xdmisdirect) | `boolean` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:isPrimary](#xdmisprimary) | `boolean` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:personID](#xdmpersonid) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:personRole](#xdmpersonrole) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:relationEndDate](#xdmrelationenddate) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| [xdm:relationStartDate](#xdmrelationstartdate) | `string` | Optional | XDM Business Account Person Relation (this schema) |
| `*` | any | Additional | this schema *allows* additional properties |
## @id
### Identifier
A unique identifier for the record.
`@id`
* is optional
* type: `string`
* defined in [Record Schema](../behaviors/record.schema.md#id)
### @id Type
`string`
* format: `uri-reference` – URI Reference (according to [RFC3986](https://tools.ietf.org/html/rfc3986))
## xdm:accountID
### Account ID
Account unique identifier reference.
`xdm:accountID`
* is optional
* type: `string`
* defined in this schema
### xdm:accountID Type
`string`
## xdm:accountPersonID
### Account Person ID
Account person relation unique identifer.
`xdm:accountPersonID`
* is **required**
* type: `string`
* defined in this schema
### xdm:accountPersonID Type
`string`
## xdm:currencyCode
### Currency Code
The ISO 4217 currency code used for this relation.
`xdm:currencyCode`
* is optional
* type: `string`
* defined in this schema
### xdm:currencyCode Type
`string`
All instances must conform to this regular expression
```regex
^[A-Z]{3}$
```
* test example: [USD](https://regexr.com/?expression=%5E%5BA-Z%5D%7B3%7D%24&text=USD)
* test example: [EUR](https://regexr.com/?expression=%5E%5BA-Z%5D%7B3%7D%24&text=EUR)
### xdm:currencyCode Examples
```json
"USD"
```
```json
"EUR"
```
## xdm:extSourceSystemAudit
### External Source System Audit Properties
Audit attributes for external sources.
`xdm:extSourceSystemAudit`
* is optional
* type: External Source System Audit Attributes
* defined in [External Source System Audit Details Mixin](../mixins/shared/external-source-system-audit-details.schema.md#xdmextsourcesystemaudit)
### xdm:extSourceSystemAudit Type
* [External Source System Audit Attributes](../datatypes/external-source-system-audit.schema.md) – `https://ns.adobe.com/xdm/common/external-source-system-audit`
## xdm:isActive
### Active Flag
A flag to signify that this relation is active or not.
`xdm:isActive`
* is optional
* type: `boolean`
* defined in this schema
### xdm:isActive Type
`boolean`
## xdm:isDirect
### Direct Flag
A flag to signify that this is the direct account contact.
`xdm:isDirect`
* is optional
* type: `boolean`
* defined in this schema
### xdm:isDirect Type
`boolean`
## xdm:isPrimary
### Primary Flag
A flag to signify that this relation is primary or not.
`xdm:isPrimary`
* is optional
* type: `boolean`
* defined in this schema
### xdm:isPrimary Type
`boolean`
## xdm:personID
### Person ID
Person unique identifier reference.
`xdm:personID`
* is optional
* type: `string`
* defined in this schema
### xdm:personID Type
`string`
## xdm:personRole
### Person Role
Role of the person/contact for this account.
`xdm:personRole`
* is optional
* type: `string`
* defined in this schema
### xdm:personRole Type
`string`
## xdm:relationEndDate
### Relationship End Date
The date when this account person relationship was discontinued.
`xdm:relationEndDate`
* is optional
* type: `string`
* defined in this schema
### xdm:relationEndDate Type
`string`
* format: `date-time` – date and time (according to [RFC 3339, section 5.6](http://tools.ietf.org/html/rfc3339))
## xdm:relationStartDate
### Relationship Start Date
The date when this account person relationship was established.
`xdm:relationStartDate`
* is optional
* type: `string`
* defined in this schema
### xdm:relationStartDate Type
`string`
* format: `date-time` – date and time (according to [RFC 3339, section 5.6](http://tools.ietf.org/html/rfc3339))
| 21.42053 | 245 | 0.687432 | eng_Latn | 0.653909 |
bb08066a8da2a7d1f4e68da2d1fc034b483f8e58 | 48 | md | Markdown | README.md | swarms/notx | b28e82f2708cf705b8f02143009717651894353a | [
"MIT"
] | null | null | null | README.md | swarms/notx | b28e82f2708cf705b8f02143009717651894353a | [
"MIT"
] | null | null | null | README.md | swarms/notx | b28e82f2708cf705b8f02143009717651894353a | [
"MIT"
] | null | null | null | # notx
NotX React Native template
Coming soon!
| 9.6 | 26 | 0.770833 | eng_Latn | 0.640773 |
bb0832e742077a50963a6b70af45248dd89756c9 | 2,025 | md | Markdown | source/en/docs/brand.md | trilogo-lordee/orchid.software | 09a95c6350cd0854abbcd8a3b6f5aa02859c4537 | [
"MIT"
] | null | null | null | source/en/docs/brand.md | trilogo-lordee/orchid.software | 09a95c6350cd0854abbcd8a3b6f5aa02859c4537 | [
"MIT"
] | null | null | null | source/en/docs/brand.md | trilogo-lordee/orchid.software | 09a95c6350cd0854abbcd8a3b6f5aa02859c4537 | [
"MIT"
] | null | null | null | ---
title: Branding
description: Customize the look to match your brand
extends: _layouts.documentation
section: main
---
There are times when you want the visual style of the platform to match your brand.
After installation, two settings are provided that are located in `config/platform.php`:
```php
'template' => [
'header' => null,
'footer' => null,
],
```
To change the page header or footer, you must specify your own `blade` templates.
## Change logo and name
Create a new directory in the `brand` template section and the` header.blade.php` file.
Then the full path will look like `/resources/views/brand/header.blade.php`.
```php
resources
└── views
└── brand
└── header.blade.php
```
Suppose that we are creating a system for a fictitious analytical agency, we will make changes to the file just created:
```php
@push('head')
<link
href="/favicon.ico"
id="favicon"
rel="icon"
>
@endpush
<p class="h2 n-m font-thin v-center">
<i class="icon-database"></i>
<span class="m-l d-none d-sm-block">
Analytics
<small class="v-top opacity">Nest</small>
</span>
</p>
```
In order for the created template to be used instead of the standard one, you must specify it in the configuration file,
just as if passing an argument in the `view('brand.header')` helper:
```php
'template' => [
'header' => 'brand.header',
'footer' => null,
],
```
> **Note.** The configuration file may be cached, and the changes will not take effect until the `php artisan config:clear` command is executed
In the same way, we can change the bottom of the page, again create a new file `/resources/views/brand/footer.blade.php` with the following contents:
```php
<p class="small m-n">
© Copyright {{date('Y')}} <a href="//example.com" target="_blank">"Analytics Nest"</a>
</p>
```
Also making changes to the configuration file:
```php
'template' => [
'header' => 'brand.header',
'footer' => 'brand.footer',
],
```
| 23.546512 | 149 | 0.662716 | eng_Latn | 0.989914 |
bb087a8167bcd5cf412c21511ae3671ca624b437 | 3,149 | md | Markdown | wdk-ddi-src/content/d3dkmddi/nc-d3dkmddi-dxgkddi_flipoverlay.md | henke37/windows-driver-docs-ddi | 787885908a0474b7e2c3d2f8a47355dee62c7aec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-16T13:05:36.000Z | 2021-08-16T13:05:36.000Z | wdk-ddi-src/content/d3dkmddi/nc-d3dkmddi-dxgkddi_flipoverlay.md | henke37/windows-driver-docs-ddi | 787885908a0474b7e2c3d2f8a47355dee62c7aec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/d3dkmddi/nc-d3dkmddi-dxgkddi_flipoverlay.md | henke37/windows-driver-docs-ddi | 787885908a0474b7e2c3d2f8a47355dee62c7aec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NC:d3dkmddi.DXGKDDI_FLIPOVERLAY
title: DXGKDDI_FLIPOVERLAY (d3dkmddi.h)
description: The DxgkDdiFlipOverlay function displays a new allocation by using the specified overlay.
old-location: display\dxgkddiflipoverlay.htm
ms.date: 05/10/2018
keywords: ["DXGKDDI_FLIPOVERLAY callback function"]
ms.keywords: DXGKDDI_FLIPOVERLAY, DXGKDDI_FLIPOVERLAY callback, DmFunctions_fac1657b-03ec-4d63-93d6-3458423a1fe9.xml, DxgkDdiFlipOverlay, DxgkDdiFlipOverlay callback function [Display Devices], d3dkmddi/DxgkDdiFlipOverlay, display.dxgkddiflipoverlay
req.header: d3dkmddi.h
req.include-header:
req.target-type: Desktop
req.target-min-winverclnt: Available in Windows Vista and later versions of the Windows operating systems.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql: PASSIVE_LEVEL
targetos: Windows
tech.root: display
req.typenames:
f1_keywords:
- DXGKDDI_FLIPOVERLAY
- d3dkmddi/DXGKDDI_FLIPOVERLAY
topic_type:
- APIRef
- kbSyntax
api_type:
- UserDefined
api_location:
- d3dkmddi.h
api_name:
- DxgkDdiFlipOverlay
product:
- Windows
---
# DXGKDDI_FLIPOVERLAY callback function
## -description
The <i>DxgkDdiFlipOverlay</i> function displays a new allocation by using the specified overlay.
## -parameters
### -param hOverlay
[in] A handle to the overlay to be flipped. The display miniport driver's <a href="/windows-hardware/drivers/ddi/d3dkmddi/nc-d3dkmddi-dxgkddi_createoverlay">DxgkDdiCreateOverlay</a> function previously provided this handle to the Microsoft DirectX graphics kernel subsystem in the <b>hOverlay</b> member of the <a href="/windows-hardware/drivers/ddi/d3dkmddi/ns-d3dkmddi-_dxgkarg_createoverlay">DXGKARG_CREATEOVERLAY</a> structure.
### -param pFlipOverlay
[in] A pointer to a <a href="/windows-hardware/drivers/ddi/d3dkmddi/ns-d3dkmddi-_dxgkarg_flipoverlay">DXGKARG_FLIPOVERLAY</a> structure that describes the new allocation to display by using the overlay.
## -returns
<i>DxgkDdiFlipOverlay</i> returns one of the following values:
|Return code|Description|
|--- |--- |
|STATUS_SUCCESS|DxgkDdiFlipOverlay successfully displays the new allocation.|
|STATUS_INVALID_PARAMETER|Parameters that were passed to DxgkDdiFlipOverlay contained errors that prevented it from completing.|
|STATUS_NO_MEMORY|DxgkDdiFlipOverlay could not allocate memory that was required for it to complete.|
|STATUS_GRAPHICS_DRIVER_MISMATCH|The display miniport driver is not compatible with the user-mode display driver that initiated the call to DxgkDdiFlipOverlay.|
## -remarks
<i>DxgkDdiFlipOverlay</i> should be made pageable.
## -see-also
<a href="/windows-hardware/drivers/ddi/d3dkmddi/ns-d3dkmddi-_dxgkarg_createoverlay">DXGKARG_CREATEOVERLAY</a>
<a href="/windows-hardware/drivers/ddi/d3dkmddi/ns-d3dkmddi-_dxgkarg_flipoverlay">DXGKARG_FLIPOVERLAY</a>
<a href="/windows-hardware/drivers/ddi/d3dkmddi/nc-d3dkmddi-dxgkddi_createoverlay">DxgkDdiCreateOverlay</a>
| 35.784091 | 432 | 0.782471 | eng_Latn | 0.43769 |
bb089c6e83b6d91482f2810fefff4277d1f22568 | 16,158 | md | Markdown | about/unsyllabus.md | ubco-cmps/mds_course_template | 85941926f81f44541e42937d9fb9671bc5522aa2 | [
"MIT"
] | null | null | null | about/unsyllabus.md | ubco-cmps/mds_course_template | 85941926f81f44541e42937d9fb9671bc5522aa2 | [
"MIT"
] | null | null | null | about/unsyllabus.md | ubco-cmps/mds_course_template | 85941926f81f44541e42937d9fb9671bc5522aa2 | [
"MIT"
] | null | null | null | 

# Unsyllabus
## Important Details
| Name | Description |
|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| Course | {{ COURSE_CODE }} |
| Term | {{ TERM }} |
| Instructor | {{ INSTRUCTOR }} |
| Lectures | {{ MEETING_TIMES }}: {{ ROOM }} |
| Labs | {{ LAB_TIMES }}: {{ ROOM }} |
| Student Hours | To get live 1 on 1 help in the course, use {{ ZOOM_LINK.replace('CANVAS_ID',CANVAS_ID) }} at various times (see below for schedule). |
| Canvas URL | {{ CANVAS_LINK.replace('CANVAS_ID',CANVAS_ID) }} |
| Course Discussion | To ask any course-related questions, use private (personal, not useful for anyone else) or public (helpful for other) messages on {{ FORUM_LINK }} |
### What do I need to purchase for this course?
Being very conscious of the high tuition and technology costs, we have made efforts to remove the additional cost of taking this course.
All course content, references, and resources provided in this course are free and open source, and can be considered open educational resources (OER).
## Contact Us
```{include} syllabus_bits/teaching_team.md
```
## Evaluation
```{include} syllabus_bits/grading_practices_detailed.md
```
## Passing requirements
```{include} syllabus_bits/passing_requirement.md
```
## Schedule
```{include} syllabus_bits/course_schedule.md
```
The best way to get personalized help in this course is to attend the labs and student hours we have scheduled for this course.
They are all done on Zoom and this is time the TAs have set aside to help you!
You may also use this time to talk to us about anything else related to data science as well.
We would love to hear about you, what your interests are, and if you need any career advice.
A few other notes:
- We will be using {{ FORUM_LINK }} for Announcements in this course.
- For **all** course-related questions you can reach out to the teaching team including instructors and TAs via {{ FORUM_LINK }}.
- You are encouraged to post questions publicly whenever possible so others can benefit. For private and personal issues, you can send private messages on {{ FORUM_LINK }}.
- Any student may visit the student hour for any member of the teaching team (TA or instructor)! In other words, you can go to the student hour of ANY TA, not just the one whose lab/tutorial you are registered in.
### Why should I take {{ COURSE_CODE }}?
```{include} syllabus_bits/course_teaser.md
```
## Unsyllabus changes
In this section, I will outline any changes that have been made to the unsyllabus as we go through the course.
We will do our best to follow the plan outlined in this unsyllabus, but in case things go south, I will need to make adjustments to the contents and the schedule.
Any major changes to the syllabus (this page) will be documented here, as well as the date the change was made.
| Change Date | Summary | Rationale |
|-------------|---------|-----------|
| N/A | N/A | N/A |
| | | |
## Link your Canvas account to Gradescope
You should then be guided through a series of steps to create an account, set a password, and link it to our course.
On the left sidebar in Canvas, click on Gradescope.
<img src="../images/GradescopeAccount.gif">
This is **very** important for you to do as it'll be our primary mechanism for delivering you feedback in this course.
## Teaching Philosophy
For a detailed description of my teaching philosophy and values (including a list of references and citations), you can [read it here](https://firas.moosvi.com/cv/teaching-philosophy/).
Here are the key principles I intend to apply in this class:
1. Student learning is vastly improved through active learning
1. Learning technologies must be leveraged to scale instructor effort across multiple classes.
1. Inter-disciplinarity is the future of education.
1. Effective teaching is inclusive teaching.
### How will this course be taught ?
This course will be taught as a [Blended Learning classroom](https://en.wikipedia.org/wiki/Blended_learning) where some elements of a [flipped classroom](https://www.youtube-nocookie.com/embed/BCIxikOq73Q) will be mixed with a more traditional coding classroom with live demos, clicker questions, and worksheets.
Briefly, this requires students to watch videos and engage with the assigned reading prior to the classroom meeting (knowledge transfer).
During the class meeting, the instructor guides students through clicker questions, worksheet problems, and other activities to help the students make sense of the material (sense-making).
See {numref}`masterymodel1` for a mental model of how learning works {cite}`Ambrose2010`.
```{figure} ../images/masterymodel1.png
---
width: 750px
name: masterymodel1
---
To develop mastery in a concept, students must first acquire the necessary skills, then practice integrating them, and finally know when to apply what they have learned. This figure was adapted from Figure 4.1 of the book "How Learning Works".
The terms "knowledge transfer" and "sense-making" applied in this context is generally attributed to [Dr. Eric Mazur](https://mazur.harvard.edu/files/mazur/files/flip_your_course_online_07.pdf).
```
### What does this mean in practical terms?
{numref}`masterymodel2` shows a handy table to help guide you and organize your learning in this course:
```{figure} ../images/masterymodel2.png
---
height: 500px
name: masterymodel2
---
This table describes how I think each course activity should be classified between knowledge transfer and sense-making.
```
## Academic Integrity
### How do I go through this course with integrity?
I want to be proud of your work in this course, and I want YOU to be proud of yourself as well!
That cannot happen if you make unethical decisions, including (but not limited) to cheating or plagiarism.
According to the scientific literature, the most common reasons students cheat are:
- Fear of failure and life consequences
- Peer pressure, including an inability to say no to help others cheat
- Perceived societal acceptance of cheating (Lance Armstrong, Barry Bonds, Enron, Wall Street & the The Big Short)
- Desire for success without the time/desire to put in the work needed
- Strict deadlines and due-dates
- Requirement from instructors to memorize facts, figures, equations, etc...
- High-stakes exams with no recompense for "having a bad day"
- Peers cheating with no consequences or penalties
- Unclear expectations on what constitutes academic dishonesty
- Inadequate support from instructor and teaching team
Though I sympathize with students and the stresses of your busy lives - in my opinion, there is no good reason to cheat.
I have tried extremely hard to make this course focused on learning rather than grading, and where grading is needed, to have policies that are as student-friendly as possible.
In particular, I hope (and expect) that the following features of the course should eliminate your temptation to cheat or plagiarize:
- {{ GRACE_PERIOD }} grace-period on all due dates and deadlines.
- Long testing window so you can start the tests whenever you're comfortable.
- Weekly learning logs, homework and reading reflections to make you think about your learning ([metacognition](https://cft.vanderbilt.edu/guides-sub-pages/metacognition/)).
- Each test has a "bonus test" available one week later; for each test, we will take the better score of the pair.
- No high-stakes exams (the single largest assessment item is the final exam).
- All course assessments are completely open book, open notes, and open web (except for cheating websites like Chegg, CourseHero, Slader, Bartleby, etc...)
- Plenty of TA and instructor student hours and several outside of normal business hours.
- Class website that outlines exactly what you should do when to help you manage your time.
- Tonnes of supplemental materials including other instructional videos in case you want a different perspective.
- Weekly prompt to accept the integrity pledge to keep you accountable.
- A true willingness from the instructor (me) to help you learn and succeed in this course!
With these features, and several other little things, I sincerely hope that you will consider completing this course with maximum integrity so that you never have to feel guilty, ashamed, or disappointed in yourself and your actions!
A more detailed description of academic integrity, including the University’s policies and procedures, may be found in the [Academic Calendar](https://calendar.ubc.ca/vancouver/index.cfm?tree=3,54,111,0).
### What is considered academic dishonesty in this course?
To make it even easier for you to decide what isn't allowed, below is a list of things that I **definitely** consider to be academic dishonesty:
- Asking others for their work in the course (whether question by question, or all at once)
- Sending others your work in the course
- Doing tests collaboratively (tests **must** be done by yourself and alone)
- Sending others your test questions and/or answers
- Sharing any course material onto Chegg, Course Hero, Slader, or other similar sites
- Searching for solutions to course material on Chegg, Course Hero, Slader, or other similar sites
- Blindly googling the question in hopes of finding someone who had a similar question and then copying their answer
- Note, googling to find resources to understand specific concepts or general ideas is highly encouraged!
- Having a tutor/friend/nemesis complete and submit your work for you
- Copying and pasting code, equations, text explanations, prose, etc... without attribution
- Manipulating the learning platforms we use to reverse engineer the randomization algorithms, hacking the timer functionality, or other similar technical [malfeasance](https://dictionary.cambridge.org/dictionary/english/malfeasance).
## Course Accommodations
### What if I miss labs, tests, or the exam due to an illness, health, or other personal situations?
Normally, most deadlines in this course have a generous grace period.
If you require an extension beyond the grace period, please contact the instructor on {{ FORUM_LINK }} (ideally before the deadline passes) to discuss your options.
Students who, because of unforeseen events, are absent during the term and are unable to complete tests or other graded work, should normally discuss with their instructors how they can make up for missed work.
If ill health is an issue, students are encouraged to seek attention from a health professional.
Campus Health and Counselling will normally provide documentation only to students who have been seen previously at these offices for treatment or counselling specific to conditions associated with their academic difficulties.
```{tip}
If you miss a course component due to an illness, health, or other personal situation, please reach out to me as soon as you are comfortable, and I'll work with you to get you back on track.
```
### What if I have dependents that rely on me for care and unpredictable emergencies may arise?
Let's talk, send me a private message and we can discuss it.
I do not necessarily need to know all the personal details, just a high-level summary of your situation and what you think an ideal solution would be.
I'm sure we will come to some agreement, generally the earlier you let me know of any special circumstances or accommodation, the more I'll be able to do for you!
### What if I have to miss a deadline because of a wedding, birthday, funeral, religious holiday, or personal event ?
No problem! There's not even any need to tell me, or ask for permission to miss deadlines.
The course is designed to give you maximum flexibility:
- Every deadline has a {{ GRACE_PERIOD }} grace period that is automatically applied.
- There is no late penalty if you use the grace period
- You can use the grace period an unlimited amount of time in the course (though if it happens every week and for every assignment, I might check in with you and gently encourage you not to leave things to the last minute)
If you miss a deadline by more than the grace period, the general course policy is that you will not be able to get full credit for it, and in many cases, may even get a 0 for it.
In the cases of Tests, it is not possible to get partial credit, or complete it at times other than within the scheduled windows.
In some cases, I reserve the right to grant an extension or make alternate accommodations as needed.
### What should I do if I need accommodations to be successful in this course?
Accommodations are intended to remove barriers experienced by individuals with disabilities.
As a matter of principle, UBC is committed to promoting human rights, equity and diversity, and it also has a legal duty under the BC Human Rights Code to make its goods and services available in a manner that does not discriminate.
[Policy 73](https://universitycounsel.ubc.ca/files/2019/02/policy73.pdf) (Accommodation for Students with Disabilities) sets out principles and processes governing the accommodation of students with disabilities.
All accommodations for this course are handled through the [Disability Resource Centre](https://students.ok.ubc.ca/academic-success/disability-resources/contact-the-disability-resource-centre/) and I encourage you to contact them to book an appointment.
### Compassion
As I'm sure you're aware, *there is (still) a global pandemic* happening right now and we could all use some extra compassion and humanity.
If you're going through something that is affecting you (course or otherwise), you are always welcome to come and talk to me about it.
If I am not able to help you myself, then I can probably direct you to the right person or resource.
If you need extra help, or extra time to deal with something you're going through, just ask.
You will *never* owe me an explanation about your physical health, mental health, or those of your family members, friends, etc... I will believe you, and I will trust you.
I will not judge you, nor think any less of you.
I will do everything in my power to work out something that is both reasonable and fair.
This, I promise!
## Acknowledgements
The syllabus was constructed and adapted from many other templates and examples.
Below is the list of resources I have used to put this syllabus together:
- Physics 117 (Instructor: [Dr. Simon Bates](https://sites.google.com/site/simonpbates/home?authuser=0))
- Psychology 417A-951 (Instructor: [Dr. Catherine Rawn](https://blogs.ubc.ca/catherinerawn/))
- [Dr. Christopher Jones](https://hcommons.org/members/profchrismjones/) and [this tweet](https://twitter.com/ProfChrisMJones/status/1282036533562834944)
- [Dr. Jesse Stommell](https://www.jessestommel.com/designing-for-care/)
- [Header image - Photo by Pixabay from Pexels](https://www.pexels.com/photo/code-coding-computer-cyberspace-270373/)
- [Course Logo - Photo by Rakicevic Nenad from Pexels](https://www.pexels.com/photo/silhouette-of-person-holding-glass-mason-jar-1274260/)
## References
```{bibliography}
:style: unsrt
``` | 65.682927 | 312 | 0.719767 | eng_Latn | 0.999137 |
bb08bdb37994d0ea5e0715fb72a205c8f3e6a834 | 2,868 | md | Markdown | docs/visual-basic/misc/bc40034.md | nicolaiarocci/docs.it-it | 74867e24b2aeb9dbaf0a908eabd8918bc780d7b4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc40034.md | nicolaiarocci/docs.it-it | 74867e24b2aeb9dbaf0a908eabd8918bc780d7b4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc40034.md | nicolaiarocci/docs.it-it | 74867e24b2aeb9dbaf0a908eabd8918bc780d7b4 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Il membro 'MustOverride' non conforme a CLS non è consentito in un conforme a CLS <classname>
ms.date: 07/20/2015
f1_keywords:
- bc40034
- vbc40034
helpviewer_keywords:
- BC40034
ms.assetid: 4eb36b3a-1bbe-4e99-9ecb-a12b8729b128
ms.openlocfilehash: 1e6b94175dc8861e3d5de9e7ffb89d404c292da0
ms.sourcegitcommit: 5c1abeec15fbddcc7dbaa729fabc1f1f29f12045
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 03/15/2019
ms.locfileid: "58049799"
---
# <a name="non-cls-compliant-mustoverride-member-is-not-allowed-in-a-cls-compliant-classname"></a>Il membro 'MustOverride' non conforme a CLS non è consentito in un conforme a CLS \<classname >
Un classe è contrassegnata come `<CLSCompliant(True)>`, ma contiene una proprietà o routine `MustOverride` che è contrassegnata come `<CLSCompliant(False)>` o non è contrassegnata.
Quando una classe è conforme con la [Language Independence and Language-Independent Components](../../standard/language-independence-and-language-independent-components.md) (CLS), un'applicazione che usa quella classe accede soltanto ai membri che sono contrassegnati anche come `<CLSCompliant(True)>` e ignora i membri che non sono. Tuttavia, l'applicazione non può ignorare una proprietà o routine `MustOverride` perché deve accedervi per eseguirne l'override.
Quando <xref:System.CLSCompliantAttribute> viene applicato a un elemento di programmazione, il parametro `isCompliant` dell'attributo viene impostato su `True` o `False` per indicare la conformità o la non conformità. L'impostazione predefinita per questo parametro non è disponibile, quindi è necessario specificare un valore.
Se a un elemento non viene applicato <xref:System.CLSCompliantAttribute> , l'elemento non sarà considerato conforme.
Per impostazione predefinita, si tratta di un messaggio di avviso. Per informazioni su come nascondere gli avvisi o considerarli come errori, vedere [Configuring Warnings in Visual Basic](/visualstudio/ide/configuring-warnings-in-visual-basic).
**ID errore:** BC40034
## <a name="to-correct-this-error"></a>Per correggere l'errore
- Se è richiesta la conformità con CLS ed è possibile accedere al codice sorgente della classe, contrassegnare il membro come `<CLSCompliant(True)>`.
- Se è richiesta la compatibilità con CLS e non è possibile accedere al codice sorgente della classe o se esso non può essere conforme, definire il membro all'interno di una classe differente.
- Se è richiesto che il membro rimanga non conforme, rimuovere la parola chiave `MustOverride` dalla relativa definizione, rimuovere l'oggetto <xref:System.CLSCompliantAttribute> dalla definizione di classe o contrassegnare la classe come `<CLSCompliant(False)>`.
## <a name="see-also"></a>Vedere anche
- [MustOverride](../../visual-basic/language-reference/modifiers/mustoverride.md)
| 69.95122 | 465 | 0.785914 | ita_Latn | 0.993994 |
bb08c73c7bd0df3e799cdc741f667464e323913d | 18,253 | md | Markdown | docu/APIDocumentation.md | Fabiansson/BattleShipRoyale | e37490d482c050a3d77a0916aa808a8a2f6dee9c | [
"MIT"
] | null | null | null | docu/APIDocumentation.md | Fabiansson/BattleShipRoyale | e37490d482c050a3d77a0916aa808a8a2f6dee9c | [
"MIT"
] | null | null | null | docu/APIDocumentation.md | Fabiansson/BattleShipRoyale | e37490d482c050a3d77a0916aa808a8a2f6dee9c | [
"MIT"
] | null | null | null | # BattleShipRoyale API documentation
Event driven Socket.IO documenatation for BattleShipRoyale.
## Channels
<a name="channel-createRoom"></a>
### `subscribe` open
Opens new game session for players.
###### Example of payload _(generated)_
```json
{
[empty]
}
```
### `subscribe` findGame
Player requesting to join a random game.
###### Example of payload _(generated)_
```json
{
[empty]
}
```
### `subscribe` gameSettings
Changes game Settings before it starts.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>privateLobby</td>
<td>boolean</td>
<td><p>If lobby is private or free to join.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>rounds </td>
<td>number</td>
<td><p>How many rounds should be played.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "Oi87oqw",
"playerName": "Peter",
"roomId": "I8652d"
}
```
### `subscribe` join
Joins an existing game.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>gameId </td>
<td>string</td>
<td><p>ID of game to join.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"gameId": "string",
}
```
### `subscribe` leaveRoom
Leaves currently attending game.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID of leaving player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name of leaving player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of room to be left.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string"
}
```
<a name="channel-shoot"></a>
### `subscribe` shoot
(Game Event) Shooting from ship to target.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID of shooting player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name of shooting player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of attending game room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>bulletType </td>
<td>integer</td>
<td><p>Type of bullet.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField </td>
<td>object</td>
<td></td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField.xCoordinate </td>
<td>integer</td>
<td><p>X-Coordinate of origin.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField.yCoordinate </td>
<td>integer</td>
<td><p>Y-Coordinate of origin.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField </td>
<td>object</td>
<td></td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField.xCoordinate </td>
<td>integer</td>
<td><p>X-Coordinate of target.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField.yCoordinate </td>
<td>integer</td>
<td><p>Y-Coordinate of target.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"bulletType": 0,
"originField": {
"xCoordinate": 0,
"yCoordinate": 0
},
"targetField": {
"xCoordinate": 0,
"yCoordinate": 0
}
}
```
<a name="channel-move"></a>
### `subscribe` move
(Game Event) Moving with a ship.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID of moving player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name of moving player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of attending game room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField </td>
<td>object</td>
<td></td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField.xCoordinate </td>
<td>integer</td>
<td><p>X-Coordinate of origin.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>originField.yCoordinate </td>
<td>integer</td>
<td><p>Y-Coordinate of origin.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField </td>
<td>object</td>
<td></td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField.xCoordinate </td>
<td>integer</td>
<td><p>X-Coordinate of target.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>targetField.yCoordinate </td>
<td>integer</td>
<td><p>Y-Coordinate of target.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"originField": {
"xCoordinate": 0,
"yCoordinate": 0
},
"targetField": {
"xCoordinate": 0,
"yCoordinate": 0
}
}
```
<a name="channel-loot"></a>
### `subscribe` loot
(Game Event) Looting at specific location.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID of looting player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name of looting player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of attending game room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>lootingField </td>
<td>object</td>
<td></td>
<td><em>Any</em></td>
</tr>
<tr>
<td>lootingField.xCoordinate </td>
<td>integer</td>
<td><p>X-Coordinate of field to loot.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>lootingField.yCoordinate </td>
<td>integer</td>
<td><p>Y-Coordinate of field to loot.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"lootingField": {
"xCoordinate": 0,
"yCoordinate": 0
}
}
```
<a name="channel-buy"></a>
### `subscribe` buy
(Game Event) Buying a specific item.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID of buying player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name of buying player.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of attending game room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>itemId </td>
<td>integer</td>
<td><p>ID of item to buy.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"itemId": 0
}
```
<a name="channel-chatMessage"></a>
### `subscribe` chatMessage
Player sends a chat message.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>message </td>
<td>string</td>
<td><p>Message sent.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"message": "string",
}
```
<a name="channel-userId"></a>
### `publish` userId
Response of a connection for sending unique player information.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>userId </td>
<td>string</td>
<td><p>The userId that belongs to the newly connected user.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>The player name that belongs to the newly connected user.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"userId": "ozudzw7",
"playerName": "Paul"
}
```
<a name="channel-createRoomRp"></a>
### `publish` createRoomRp
Response of createRoom-Event.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID under which the player will be known.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name under which the player will be known.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of the room in which the player will play.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>url </td>
<td>string</td>
<td><p>URL for invite link.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"url": "string"
}
```
<a name="channel-joinRoomRp"></a>
### `publish` joinRoomRp
Response of joinRoom-Event.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>ID under which the player will be known.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerName </td>
<td>string</td>
<td><p>Name under which the player will be known.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>roomId </td>
<td>string</td>
<td><p>ID of the room in which the player will play.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>url </td>
<td>string</td>
<td><p>URL for invite link.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "string",
"playerName": "string",
"roomId": "string",
"url": "string"
}
```
<a name="channel-gameState"></a>
### `publish` playerGameStateUpdate
Game state for specific player.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>playerId </td>
<td>string</td>
<td><p>The ID of the player to which this playerGameState belongs.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>coins </td>
<td>number</td>
<td><p>The aomount of coins this player has.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>inventory </td>
<td>InventoryItem[]</td>
<td><p>All items the player has in his inventory with the corresponding amount.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>ships </td>
<td>Ship[]</td>
<td><p>All ships the player owns with the position, size, healt, ec.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>hits </td>
<td>Coordinates[]</td>
<td><p>All successfull shots of the player towards a enemy.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>alive </td>
<td>boolean</td>
<td><p>If the player is still alive.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"playerId": "7zuad7",
"coins": 3000,
"inventory": InventoryItem[],
"ships": Ship[],
"hits": Coordinates[],
"alive": true
}
```
<a name="channel-generalGameState"></a>
### `publish` generalGameState
Game state for all players in a specific room.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>gameId </td>
<td>string</td>
<td><p>The gameId to which this gameState belongs.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>players </td>
<td>string[]</td>
<td><p>All player IDs in this room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>playerNames </td>
<td>string[]</td>
<td><p>All the player names of the players in this room.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>admin </td>
<td>string</td>
<td><p>The userId of the admin of this game.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>rounds </td>
<td>number</td>
<td><p>How many rounds will be played in this game.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>currentRound </td>
<td>string?</td>
<td><p>In which round the current game actually is.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>turn </td>
<td>string?</td>
<td><p>The userId if the player that it is its turn.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>terrainMap </td>
<td>number[]</td>
<td><p>The general terrain of the battlefield.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>fog </td>
<td>Fog?</td>
<td><p>The game boundaries as a fog.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>started </td>
<td>boolean</td>
<td><p>If the game started already or not.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>privateLobby </td>
<td>boolean</td>
<td><p>If the game is private or free to join.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"gameId": "string",
"players": ["aiu230", "9a7f8p"],
"playerNames": ["Paul", "Josh"]
"admin": "aiu230",
"rounds": 5,
"currentRound": 2,
"turn": "aiu230",
"terrainMap": [0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,...],
"fog": FOG,
"started": true,
"privateLobby": false
}
```
<a name="channel-chatMessage"></a>
### `publish` chatMessage
Response when chat message got sent.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>sender </td>
<td>string?</td>
<td><p>Player name of Sender.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>msg </td>
<td>string</td>
<td><p>Content of chat message.</p>
</td>
<td><em>Any</em></td>
</tr>
<tr>
<td>owner </td>
<td>boolean?</td>
<td><p>If the sender of the message is the owner himself.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"sender": "ui8786",
"message": "Hello this is a chat message!",
"owner": true
}
```
<a name="channel-gameEventError"></a>
### `publish` gameEventError
Error for incorrect game events.
##### Payload
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Accepted values</th>
</tr>
</thead>
<tbody>
<tr>
<td>message </td>
<td>string</td>
<td><p>Error message and advice.</p>
</td>
<td><em>Any</em></td>
</tr>
</tbody>
</table>
###### Example of payload _(generated)_
```json
{
"message": "string"
}
```
| 11.15709 | 86 | 0.469621 | eng_Latn | 0.335011 |
bb093b9caa03dd24697daecc0b544b72d38a0164 | 365 | md | Markdown | blog/stories/2019/12/15/a162612.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 93 | 2016-06-02T15:40:14.000Z | 2022-02-02T20:02:08.000Z | blog/stories/2019/12/15/a162612.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 231 | 2016-06-02T15:21:23.000Z | 2022-02-18T20:48:20.000Z | blog/stories/2019/12/15/a162612.md | scripting/Scripting-News | 348c428614b115fe390513defc285aceeedd4f09 | [
"MIT"
] | 11 | 2017-06-27T11:58:01.000Z | 2021-06-21T00:55:07.000Z | <a href="https://twitter.com/BillKristol/status/1206192333219344385">This is</a> the right question. None of us should just shrug it off and say the impeachment vote will be partisan as usual. They shouldn't get off that easy. They must know that we know that they're voting for the end of the Constitution. We should all write this as fact. Make the record clear.
| 182.5 | 364 | 0.783562 | eng_Latn | 0.999258 |
bb09727c9673ab5940a58dc0a3a67cfefb665bd6 | 58 | md | Markdown | README.md | terry-u16/AtCoderTemplateForNetCore | 3475739f2ea295f5ac41b01ed7baf782dafc4226 | [
"MIT"
] | 1 | 2020-08-15T14:23:34.000Z | 2020-08-15T14:23:34.000Z | README.md | terry-u16/AtCoderTemplateForNetCore | 3475739f2ea295f5ac41b01ed7baf782dafc4226 | [
"MIT"
] | null | null | null | README.md | terry-u16/AtCoderTemplateForNetCore | 3475739f2ea295f5ac41b01ed7baf782dafc4226 | [
"MIT"
] | 1 | 2021-12-15T12:45:12.000Z | 2021-12-15T12:45:12.000Z | # AtCoderTemplateForNetCore
C# (.NET Core) 用AtCoderテンプレート
| 19.333333 | 29 | 0.810345 | yue_Hant | 0.999719 |
bb09ba33860cd7da2bf0e0411d47cf9a487e9c8b | 1,918 | md | Markdown | README.md | cagix/agda | cc026a6a97a3e517bb94bafa9d49233b067c7559 | [
"BSD-2-Clause"
] | 1,989 | 2015-01-09T23:51:16.000Z | 2022-03-30T18:20:48.000Z | README.md | cagix/agda | cc026a6a97a3e517bb94bafa9d49233b067c7559 | [
"BSD-2-Clause"
] | 4,066 | 2015-01-10T11:24:51.000Z | 2022-03-31T21:14:49.000Z | README.md | cagix/agda | cc026a6a97a3e517bb94bafa9d49233b067c7559 | [
"BSD-2-Clause"
] | 371 | 2015-01-03T14:04:08.000Z | 2022-03-30T19:00:30.000Z | Agda 2
======
[](http://hackage.haskell.org/package/Agda)
[](https://www.stackage.org/package/Agda)
[](https://github.com/agda/agda/actions?query=workflow%3A%22Build%2C+Test%2C+and+Benchmark%22)
[](https://github.com/agda/agda/actions?query=workflow%3A%22stack+build%22)
[](http://agda.readthedocs.io/en/latest/?badge=latest)
[](https://agda.zulipchat.com)

Note that this README is only about Agda, not its standard
library. See the [Agda Wiki][agdawiki] for information about the
library.
Documentation
-------------
* [User manual](http://agda.readthedocs.io)
(per-commit pdf can be downloaded from the
[github actions](https://github.com/agda/agda/actions?query=workflow%3A%22User+Manual%22) page)
* [CHANGELOG](https://github.com/agda/agda/blob/master/CHANGELOG.md)
Getting Started
----------------
* [Installation](https://agda.readthedocs.io/en/latest/getting-started/installation.html)
* [Quick guide to editing, type checking and compiling Agda
code](https://agda.readthedocs.io/en/latest/getting-started/a-taste-of-agda.html)
Contributing to Agda
--------------------
* Contribution how-to: [`HACKING`](https://github.com/agda/agda/blob/master/HACKING.md)
* [Haskell style-guide](https://github.com/andreasabel/haskell-style-guide/blob/master/haskell-style.md)
[agdawiki]: http://wiki.portal.chalmers.se/agda/pmwiki.php
| 49.179487 | 198 | 0.744526 | yue_Hant | 0.176132 |
bb09e78d035c90c061283b33eb18add54011ae4a | 2,199 | md | Markdown | README.md | dcrec1/conventional-changelog-ruby | 2f3aa642a0f5f166097b70251ab6415deb62d023 | [
"MIT"
] | 59 | 2015-03-29T13:21:35.000Z | 2022-02-07T19:33:41.000Z | README.md | bethesque/conventional-changelog-ruby | 2f3aa642a0f5f166097b70251ab6415deb62d023 | [
"MIT"
] | 11 | 2015-07-08T01:44:58.000Z | 2020-07-19T09:27:02.000Z | README.md | dcrec1/conventional-changelog-ruby | 2f3aa642a0f5f166097b70251ab6415deb62d023 | [
"MIT"
] | 8 | 2015-03-31T12:59:10.000Z | 2018-06-26T09:45:48.000Z | [](https://travis-ci.org/dcrec1/conventional-changelog-ruby)
[](https://codeclimate.com/github/dcrec1/conventional-changelog-ruby)
[](https://codeclimate.com/github/dcrec1/conventional-changelog-ruby)
# Conventional::Changelog
Generates a CHANGELOG.md file from Git metadata using the AngularJS commit conventions.
- [AngularJS Git Commit Message Conventions](https://docs.google.com/document/d/1QrDFcIiPjSLDn3EL15IJygNPiHORgU1_OOAqWjiDU5Y/)
Since version 1.2.0 the scopes are optional.
## Installation
$ gem install conventional-changelog
## Usage
$ conventional-changelog
$ conventional-changelog version=vX.Y.Z
or programatically:
```ruby
require 'conventional_changelog'
ConventionalChangelog::Generator.new.generate!
ConventionalChangelog::Generator.new.generate! version: "vX.Y.Z"
```
Version param should follow your Git tags format
## Examples
Converts this:
2015-03-30 feat(admin): increase reports ranges
2015-03-30 fix(api): fix annoying bug
2015-03-28 feat(api): add cool service
2015-03-28 feat(api): add cool feature
2015-03-28 feat(admin): add page to manage users
into this:
<a name="2015-03-30"></a>
### 2015-03-30
#### Features
* **admin**
* increase reports ranges (4303fd4)
#### Bug Fixes
* **api**
* fix annoying bug (4303fd5)
<a name="2015-03-28"></a>
### 2015-03-28
#### Features
* **api**
* add cool service (4303fd6)
* add cool feature (4303fd7)
* **admin**
* add page to manage users (4303fd8)
## Contributing
1. Fork it ( https://github.com/dcrec1/conventional-changelog-ruby/fork )
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create a new Pull Request
| 26.817073 | 172 | 0.696226 | eng_Latn | 0.482988 |
bb09fe1852fe5a907812174542921b37e155755a | 17,568 | md | Markdown | docs/230.md | bunnyblanco/blindvending | 4e12402c10e2a3232b2f3097461bd1c909dbdb70 | [
"CC0-1.0"
] | null | null | null | docs/230.md | bunnyblanco/blindvending | 4e12402c10e2a3232b2f3097461bd1c909dbdb70 | [
"CC0-1.0"
] | null | null | null | docs/230.md | bunnyblanco/blindvending | 4e12402c10e2a3232b2f3097461bd1c909dbdb70 | [
"CC0-1.0"
] | null | null | null | MACHINE METER READINGS & PRICE SETTING CHART
================================================
##Hot Beverage
###AP#213
####Meter Readings:
1. using the mode button, go to mode #01.
2. Press "large cup/start" twice to get to M1, which will display the cash total. Press start #3 times for non-resettable sales.
3. Press the "coin return" button to return to the operating mode.
####Price Setting:
1. Make sure "free vend" is turned off.
2. using the mode button, go to mode #04 for large cups and #05 for small cups.
3. Press the "large cup/start" button. The scroll will say "price"
And will show "#00.”
4. Use the "middle strength coffee" button to move the flashing
Cursor to the digit to be changed.
5. Use the "coffee strong" button to change the digit to the desired
Value.
6. Press the "large cup/start" button. The scroll will say
"selection."
7. Press all product selections and strengths you wish to sell at
That price.
8. Press "large cup/enter" to lock in the price settings.
9. Press the "coin return" button to return to the operating mode.
###AP#223
####Meter Readings:
1. From the Master Menu press the "enter" key to get to the MIS
Menu.
2. Press the "enter" key a second time when the display reads "MIS."
3. To bring up the items in the MIS menu, press the right arrow key one time.
4. To get to the "view historical MIS," press "enter" at that menu item to display the totals in that menu.
####Price Setting:
1. From the Master Menu, press the "enter" key followed by the
"right arrow" key to get to the "price setting" menu, or press
The F4 key to go directly to this menu,
2. Press “enter” to display the items in this menu.
3. Press the "right arrow" key to display the "cup size" menu.
4. Use the plus or minus keys to select the desired cup size.
5. Press the "right arrow" key to get to "price."
6. From the customer keypad, enter the desired price.
7. Using the "left arrow" key, arrow back to "selection."
8. Use the "plus" and "minus" keys followed by the "enter" key to
select each item that you wish to set at that displayed price
and cup size.
9. Press escape to exit the pricing menu.
##Cold Beverage
###Dixie Narcos
####Meter Readings: N.A.
####Price Setting:
Prices are set via the dip switches on the Mars changer, according to specific price lines.
###ECC #2145
####Meter Readings:
1. Press the mode button three times to get to the "Set-Up Menu."
2. Using the keypad on the front of the machine, press button #6 to obtain non-resettable cash totals.
3. For resettable readings, press the "mode" button once to get to the "Service Menu."
4. Press the letter C to display the readings.
####Price Setting:
1. From the Service Menu, press the #7 button on the keypad.
2. Enter the desired price.
3. Press the letter/number combination for each selection to be set
at that price. The letter plus star represents an entire shelf
or star, setting the entire machine at that price.
###E#9
####Meter Readings:
1. Press the mode button once to get to information pertaining to
historical non-resettable data.
2. Press #1, #2 or #3 on the keypad to display information regarding cash sales, total vends or number of vends by selection respectively.
3. Press selection buttons #1 and #2 simultaneously to advance the
menu to the resettable data information.
4. Press selection button #1 to display the cash total since the
counter was last cleared, #2 to display the total vends since
the last reset and #3 to display the vend count by selection
since the last reset.
5. Press and hold selection button #4 for approximately five seconds to clear the counters one at a time.
####Price Setting:
1. From the MIS menu, press and hold selection buttons #1 and #2
simultaneously to get to the Price Setting mode (SP).
2. To raise the price of a given item in nickel increments, press
and hold the appropriate selection button. To lower the price,
press and release the appropriate button and then press it again
within one second.
3. To set all selections at the same price, set the desired price on
selection #1 and then simultaneously press and hold selection
buttons #3 and #4 for five seconds.
###V 720
####Meter Readings:
When the door is opened, press the first selection button to toggle between "error status," "cans," and "cash."
####Price Setting:
1. Press the Mode button until the display reads "cost."
2. To raise the price of a given item in nickel increments, press
and hold the appropriate selection button. To lower the price
press and release the appropriate button and then press it again
within one second.
3. To return to the operating mode, close the door.
###BEV MAX 3
####METER READING:
WITH MACHINE DOOR OPEN ON THE CONTROLL BOARD, LOCATED ON THE BACK WALL
1. HIT THE MODE BUTTON UNTIL THE SET UP MODE #1 APPEARS ON SCREEN
2. PRESS BUTTON 6 FOR THE METER READING
###BEV MAX 3 PRICE SETTING:
1. WITH MACHINE OPEN PRESS THE MODE BUTTON UNTIL THE SETUP MENU APPEARS ON SCREEN, PRESS #7 PRICING MENU THEN ENTER THE DISIRED PRICE FOLLOWED BY THE SELECTION # THEN PRESS * TO ENTER PRICE
###ROYAL SODA MACHINE
####METER READING
1) PRESS THE MODE BUTTON INSIDE THE DOOR PANEL.
2) PRESS THE POUND SIGN ON THE KEY PAD ON THE FRONT OF THE MACHINE.
DISPLAY WILL SHOW NON-RESETTABLE METER READING.
####ROYAL SODA MACHINE PRICING.
1) PRESS THE MODE BUTTON INSIDE THE DOOR PANEL.
2) PRESS NUMBER 2 ON THE KEY PAD ON THE FRONT OF THE MACHINE AND SCROLL TO SET PRICES ON THE DISPLAY.
3) PRESS THE POUND KEY TO ENTER PRICE SETTING MODE.
4) PRESS NUMBER 2 TO SCROLL TO DESIRED SHELF OR SELECTION.
5) PRESS THE POUND KEY TO ENTER THE SELECTED SHELF OR SELECTION.
6) PRESS NUMBER 2 TO SCROLL UP OR NUMBER 8 TO SCROLL DOWN TO SELECT DESIRED PRICE.
7) PRESS THE POUND KEY TO SET DESIRED PRICE.
8) PRESS THE STAR KEY TO EXIT.
Candy/Snack
-------------------
###AP#120 series
####Meter Readings:
1. From the Master Menu press the "enter" key to get to the MIS
menu, or press the F1 key to go directly to the "historical MIS"
menu.
2. Press the "enter" key when the display reads "MIS" to bring up
the items in the MIS menu.
3. Use the left and right arrow keys to scroll through these menu
items including "view historical MIS" and "view interval MIS."
4. Press "enter" at the appropriate menu item to display that item.
####Price Setting:
1. From the "Master Menu," press F#4 or arrow to the "Price Setting" menu. The display shows item, dash, followed by the selection number with the last number flashing to indicate that the selection number can be changed.
2. Press the arrow keys or press a specific three-digit number on
the key pad. The selection number is followed by an equals
sign, followed by the current price for the column, followed by
the three-digit product code.
3. To change prices, press the "right arrow" key one time. At this
point, the last digit of the price will flash,
4. Use the "plus" and "minus" keys to raise or lower the price in
nickel increments or enter a new price using the numeric keypad.
5. Set the price by pressing the "enter" key once the desired price
is achieved.
6. To change the price of an entire shelf, arrow to the selection
menu. Then press the first two digits of the shelf number
followed by the asterisk key.
###Rowe #5900
####Meter Readings:
1. Press the Mode button followed by the #8 on the keypad.
2. Press the #0 button to scroll through each menu item. Press #0 6 times for resettable and #8 times to get non-resettable sales.
3. To clear resettable data and reset the numbers to zero, scroll
through to that option, press #1 or #2 to change the flashing N
to Y and then press the "reset" button on the keypad.
(Note: the MIS Retrieval menu scrolls only in one direction.)
###Price setting:
1. Press the Mode button followed by the #3 key on the door keypad. The display will show "set price."
2. Using the keypad, enter the number of the selection to be set.
3. Pressing #1 and #2 on the keypad will adjust the price up or down respectively in nickel increments.
4. To save this value, press #9 on the keypad (the display then
advances to the next selection) or press #0 to escape from this
menu altogether.
5. Close the door to return to the operating mode.
###Rowe #6800
####Meter Readings:
1. Press the mode button, followed by the #3 on the keypad.
####Price Setting:
1. Press the mode button followed by the #4 key on the door keypad. The display will show "set price."
2. Using the keypad, enter the number of the selection to be set.
3. Pressing #1 and #2 on the keypad will adjust the price up or down respectively in nickel increments.
4. To save this value, press #9 on the keypad (the display then
advances to the next selection) or press #0 to escape from this
menu altogether.
5. Close the door to return to the operating mode.
###AP 933 SNACK MACHINE:
####METER READINGS:
1. PRESS THE MODE BUTTON, THEN PRESS THE HISTORICAL SALES, THEN PRESS C TO EXIT.
OR AFTER PRESSING THE MODE BUTTON, PRESS #01 ON KEYPAD THE DISPLAY WILL SHOW HISTORICAL SALES.
###PRICE SETTING:
1. PRESS THE MODE BUTTON HALF MOON SECTION OF THE BOARD THEN
2. PRESS SET PRICE BUTTON USE THE ARROW KEYS TO TOGGLE BETWEEN SELECTION AND PRICE PRESS C TO EXIT.
###CODES FOR 933 BY TWO DIGITS: GOOD FOR 130 OR U -FLEX:
01 HISTORICAL SALES
20 PRICING
21 GOLDEN EYE
22 BILL ESCROW
23 MOTOR PAIRING
25 FORCE VEND
34 SPEECH ENABLE
36 SPACE TO SALES
40 SET TEMP
42 HEALTH SHUTDOWN BE SECECTION
50 TIME
80 TEST VEND
81 EVENT LOG
82 SERVICE LOG
###U-FLEX MACHINE:
####METER READING:
1. WITH DOOR OPEN, PRESS C FOR MODES AND MACHINE WILL DISPLAY HISTORICAL
DATA FOR THE METER READING
####SET PRICES:
1. WITH THE DOOR OPEN PRESS C FOR MODES
2. PRESS #20 THEN ENTER PRICE
3. PRESS RIGHT ARROW AND ENTER SELECTION #
###CAFORIA 944
####METER READINGS:
1. WITH MACHINE DOOR OPEN, PRESS THE # 5 THE MACHINE WILL SHOW YOUR NON resettable SALES DATA
####PRICE SETTING:
1. WITH DOOR OPEN PRESS THE #1 TO GET INTO PRICE MENU,
2. PRESS DOWN ARROW AND MACHINE WILL START WITH SMALL CUP SIZES
3. ENTER PRICE ON KEYPAD THEN ARROW DOWN EACH SELECTION
###REVOLUTION 962
####METER READING:
1. WITH DOOR OPEN PRESS THE #5 MACHINE WILL SHOW NON RESETABLE SALES.
####962 PRICE SETTING:
THERE ARE THREE TYPES OF PRICING,PRICE BY ALL,PRICE BY ROW, OR PRICE BY EACH.
1. TO SET PRICE BY ROW: WITH DOOR OPEN PRESS THE #1 BUTTON ON KEYPAD
2. ENTER THE PRICE YOU WANT ON THE KEYPAD
3. GO TO THE ROW OR ROWS YOU WANT THE PRICE TO BE AND TOUG ON THE PRODUCT DOOR.
###AP Studio Combo 4 Candy/Snack Can/Bottle Vendor
####Meter Readings:
1. Press the Mode button followed by 08 on the customer key pad for “Historical Total Value of Sales” and 10 for the “Historical Value of Can Sales.”
Since the screen can only display four digits at a time, for figures of one hundred dollars and up, the display automatically toggles between the first and second line of numbers.
####Price Setting:
1. Press the Mode button, followed by 01 on the customer key pad. The display will prompt “Prc.”
2. Enter the price to be set using the numeric keypad.
3. Press the Pound key.
4. Enter all the selection numbers to be set at the price entered in the previous step.
5. Press “Pound” to enter another price or “C” to lock them in and go back to the service mode.
##Cold Food
###Rowe #448
####Meter Readings:
* N.A.
####Price Setting:
1. Open the monetary door.
2. Select either tier #1 or #2.
3. Press the "enter price setting mode" button.
4. Move the door to be priced slightly to the right and then release
it.
5. Press and hold the "price up or price down" button until the
display begins to change. Then press "price up or down" buttons until the correct price
shows on the display.
6. Close the monetary door and coin test the machine.
###Rowe #548-#648
####Meter Readings:
1. Press the "next" key until the display shows "MIS menu,"
2. Press the "set" key until "sales (r)-$.xx" or "sales (n)-$.xx is
displayed.
Note: all MIS information can be accessed from this menu.
####Price Setting:
1. Press the "set" key to choose a price schedule.
2. At the "price menu," prompt the display will show either "sched," "#1," "#2," "#3," or "auto."
3. Press the "set" key. The display will show "set price-$.50."
4. Use the "up" or "down" arrow keys to select the proper price.
5. Move the compartment door for that price slightly to the right
and then release it.
###AP#320
####Meter Readings:
1. From the Master Menu, press the "enter" key to get to the MIS
menu, or press the F1 key to go directly to the "historical MIS"
menu.
2. Press the "enter" key when the display reads "MIS" to bring up
the items in the MIS menu.
3. Use the left and right arrow keys to scroll through these menu
items including "view historical MIS" and "view interval MIS."
4. Press "enter" at the appropriate menu item to display that item.
####Price Setting:
1. From the "Master Menu," press F#4 or arrow to the "Price Setting" menu. The display will
show item, dash, followed by the selection number with the last number flashing to
indicate that the selection number can be changed.
2. Press the arrow keys or press a specific three-digit number on
the key pad. The selection number will be followed by an equals sign followed by the
current price for the column followed by the three-digit product code.
3. To change prices, press the "right arrow" key one time. At this
point the last digit of the price will flash.
4. Use the "plus" and "minus" keys to raise or lower the price in
nickel increments or enter a new price using the numeric keypad.
5. Set the price by pressing the "enter" key once the desired price
is achieved.
6. To change the price of an entire shelf, arrow to the selection
menu. Then press the first two digits of the shelf number
followed by the asterisk key.
###USI - SNACK / COLD / FROZEN FOOD MACHINES
####Meter Readings:
1) Press the mode button on the inside panel of the machine
2) Press number 6 on the front key pad.
3) Press number 3 on the front key pad.
4) Press number 2 on the front key pad for non-resetable meter reading.
####Pricing:
1) Press the mode button on the inside panel of the machine.
2) Press number 5 on the front key pad.
3) Press number 1 for item. Press number 2 for row OR Press number 3 to select all items.
4) Set price using front key pad.
5) Press the pound key on the front key pad to save price.
###Fast Corp 631 Ice Cream Machine
####Meter Readings:
1. Using the customer keypad on the outside of the door, press the
"star" key to advance to the "sales meters" menu.
2. Press the "pound" key to get into the menu.
3. Press the "star" key to access non-resettable data and the "pound" key to display resettable numbers.
4. To clear resettable data, press the "star" key followed by the
"pound" key for verification or the "star" key to exit the
reset command.
5. Press the "pound" key to exit the sales menu.
####Price Setting:
1. When the door is opened, the display shows "change price." Press the "pound" key to enter the menu.
2. Using the customer keypad, enter the selection to be changed.
3. Press "00" to clear out the old price and then enter the new price.
4. Press the "pound" key to lock in the new price.
5. Press the "pound" key again to confirm the price change.
6. Press "pound" a third time to advance to the next selection or
"star" to exit the menu.
###FastCorp FRI-Z400 Ice Cream Machine
####Meter Readings:
1. Open service door to automatically access the service menu.
2. Press the VAC button on the programming key pad to disable the security locking feature on the customer key pad.
3. Using the customer keypad on the outside of the door, press the
"star" key twice to advance to the "sales meters" menu.
4. Press the "pound" key to get into the menu.
5. Press the "star" key to access non-resettable data and the "pound" key to display resettable numbers.
6. To clear resettable data, press the "star" key to scroll through the various selections followed by the "pound" key to enter the selection that you wish to delete. Press the “star” key twice to delete the resettable reading or “pound” to exit the reset command.
7. Press the "pound" key to exit the sales menu.
####Price Setting:
1. Open the service door and press the “VAC” button on the programming keypad to unlock the customer keypad and access the service menu.
2. Using the customer keypad, press the “star” key once to scroll to the price setting menu selection.
3. Press “pound” to enter the price setting submenu.
4. Using the customer keypad, enter the selection to be changed.
5. Press "00" to clear out the old price and then enter the new price.
6. Press the "pound" key to lock in the new price.
7. Press the "pound" key again to confirm the price change.
8. Press "pound" a third time to advance to the next selection or
"star" to exit the menu.
| 39.214286 | 265 | 0.713229 | eng_Latn | 0.922592 |
bb0a04c728be00d3367822d152a26aa5cacbabe4 | 44 | md | Markdown | _tags/javascript.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | 1 | 2019-06-23T15:44:37.000Z | 2019-06-23T15:44:37.000Z | _tags/javascript.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | null | null | null | _tags/javascript.md | XinicsInc/tech-blog | bb3e2ef2f97458224a0a4ce42a0defe6635f152a | [
"Apache-2.0"
] | 1 | 2019-06-23T15:44:42.000Z | 2019-06-23T15:44:42.000Z | ---
name: javascript
title: 'JavaScript'
--- | 11 | 19 | 0.659091 | nld_Latn | 0.430885 |
bb0b344295da403340b03c998887d8109d092a7e | 100 | md | Markdown | data/readme_files/SublimeLinter.SublimeLinter-jscs.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 5 | 2021-05-09T12:51:32.000Z | 2021-11-04T11:02:54.000Z | data/readme_files/SublimeLinter.SublimeLinter-jscs.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | null | null | null | data/readme_files/SublimeLinter.SublimeLinter-jscs.md | DLR-SC/repository-synergy | 115e48c37e659b144b2c3b89695483fd1d6dc788 | [
"MIT"
] | 3 | 2021-05-12T12:14:05.000Z | 2021-10-06T05:19:54.000Z | SublimeLinter-jscs
=========================
**Deprecated: use eslint instead**
http://jscs.info
| 16.666667 | 36 | 0.55 | oci_Latn | 0.679169 |
bb0c52a527963a2a0428aaadd7b0ae6b0643f4cc | 39,253 | md | Markdown | CHANGELOG.md | chukitow/JsSIP | fc73ea1db0fd1da7ec3e8f4d933a233b3dcf1059 | [
"MIT"
] | 1 | 2020-09-19T06:44:04.000Z | 2020-09-19T06:44:04.000Z | CHANGELOG.md | chukitow/JsSIP | fc73ea1db0fd1da7ec3e8f4d933a233b3dcf1059 | [
"MIT"
] | null | null | null | CHANGELOG.md | chukitow/JsSIP | fc73ea1db0fd1da7ec3e8f4d933a233b3dcf1059 | [
"MIT"
] | null | null | null | CHANGELOG
=========
Version 3.3.11 (released in 2019-10-24)
---------------------------------------
* RTCSession: don't relay on 'icecandidate' event with null candidate (#598). Thanks @skanizaj.
Version 3.3.10 (released in 2019-10-16)
---------------------------------------
* RTCSession: honor BYE while in WAITING_FOR_ACK state (#597). Thanks @Egorikhin.
Version 3.3.9 (released in 2019-09-24)
---------------------------------------
* Added NOTIFY to allowed methods (#593). Credits to @ikq.
Version 3.3.8 (released in 2019-09-24)
---------------------------------------
* Move connection recovery defaults to Constants (#593). Credits to @KraftyKraft.
Version 3.3.7 (released in 2019-08-12)
---------------------------------------
* Add referred-by header to refer messages (#572). Credits to @swysor.
Version 3.3.6 (released in 2019-04-12)
---------------------------------------
* Fix NameAddrHeader `display_name` handling (#573). Credits to @nicketson.
Version 3.3.5 (released in 2019-02-26)
---------------------------------------
* Add `.babelrc` into `.npmignore` (related to #489).
* Update deps.
Version 3.3.4 (released in 2019-01-15)
---------------------------------------
* Add debugging logs in DigestAuthentication.js (related to #561).
* Update deps.
Version 3.3.3 (released in 2019-01-02)
---------------------------------------
* Registrator: Don't check Contact header if final response is not 2XX (#558). Thanks @ikq for reporting.
* Update deps.
Version 3.3.2 (released in 2018-12-19)
---------------------------------------
* Registrator. Support multiple entries in the same Contact header field (#544).
Version 3.3.1 (released in 2018-12-19)
---------------------------------------
* RTCSession: fire 'sdp' event on renegotiation (#543).
Version 3.3.0 (released in 2018-12-19)
---------------------------------------
* UA: new 'sipEvent' event for out of dialog NOTIFY requests.
Version 3.2.17 (released in 2018-12-18)
---------------------------------------
* InviteClientTransaction: Add full route set to ACK and CANCEL requests. Thanks @nicketson.
* RTCSession: switch to tracks from deprecated stream API. Thanks @nicketson.
Version 3.2.16 (released in 2018-11-28)
---------------------------------------
* Fix typos thanks to the [LGTM](https://lgtm.com/projects/g/versatica/JsSIP/alerts/?mode=list) project.
* Update deps.
Version 3.2.15 (released in 2018-10-11)
--------------------------------------
* Remove `webrtc-adapter` dependency. It's up to the application developer whether to include it into his application or not.
* Update dependencies.
Version 3.2.14 (released in 2018-09-27)
--------------------------------------
* Revert previous release. Requires a mayor version upgrade for such a cosmetic change.
Version 3.2.13 (released in 2018-09-27)
--------------------------------------
* Close #521, #534. RTCSession: Fix 'connection' event order on outgoing calls.
Version 3.2.12 (released in 2018-09-17)
--------------------------------------
* Update deps.
* Add missing `error` in 'getusermediafailed' event (thanks @jonastelzio).
Version 3.2.11 (released in 2018-06-03)
--------------------------------------
* Close #519. Parser: Do not overwrite unknwon header fields. Thanks @rprinz08.
Version 3.2.10 (released in 2018-04-24)
--------------------------------------
* Include the NPM **events** dependency for those who don't use **browserify** but **webpack**.
Version 3.2.9 (released in 2018-04-20)
--------------------------------------
* RTCSession: Add Contact header to REFER request. Thanks Julien Royer for reporting.
Version 3.2.8 (released in 2018-04-05)
--------------------------------------
* Fix #511. Add missing payload on 'UA:disconnected' event.
Version 3.2.7 (released in 2018-03-23)
--------------------------------------
* Fix regression (#509): ua.call() not working if stream is given.
Version 3.2.6 (released in 2018-03-22)
--------------------------------------
* RTCSession: custom local description trigger support
Version 3.2.5 (released in 2018-03-06)
--------------------------------------
* RTCSession: prefer promises over callbacks for readability.
Version 3.2.4 (released in 2018-01-19)
--------------------------------------
* Config: #494. Switch Socket check order. Thanks 'Igor Kolosov'.
Version 3.2.3 (released in 2018-01-15)
--------------------------------------
* RTCSession: Fix #492. Add missing log line for RTCPeerConnection error.
Version 3.2.2 (released in 2018-01-15)
--------------------------------------
* Remove wrong NPM dependencies.
Version 3.2.1 (released in 2018-01-15)
--------------------------------------
* Fix parsing of NOTIFY bodies during a REFER transaction (fixes #493).
Version 3.2.0 (released in 2018-01-15)
--------------------------------------
* Config: new configuration parameter 'user_agent'
* RTCSession/Info: Fix. Call session.sendRequest() with the correct parameters
* Config: Fix #491. Implement all documented flavours of 'sockets' parameter
Version 3.1.4 (released in 2017-12-18)
--------------------------------------
* Fix #482 and cleanup Registrator.js
Version 3.1.3 (released in 2017-11-28)
--------------------------------------
* Produce ES5 tree and expose it as main in package.json (related to #472)
* Fix #481. ReferSubscriber: properly access RTCSession non-public attributes
Version 3.1.2 (released in 2017-11-21)
--------------------------------------
* RTCSession: emit 'sdp' event before creating offer/answer
Version 3.1.1 (released in 2017-11-11)
--------------------------------------
* DigestAuthentication: fix 'auth-int' qop authentication
* DigestAuthentication: add tests
Version 3.1.0 (released in 2017-11-10)
--------------------------------------
* New UA configuration parameter 'session_timers_refresh_method'. Thanks @michelepra
Version 3.0.28 (released in 2017-11-9)
--------------------------------------
* Fix improper call to userMediaSucceeded. Thanks @iclems
Version 3.0.27 (released in 2017-11-9)
--------------------------------------
* Registrator: add missing getter. Thanks Martin Ekblom.
Version 3.0.26 (released in 2017-11-8)
--------------------------------------
* Fix #473. Typo. Thanks @ikq.
Version 3.0.25 (released in 2017-11-6)
--------------------------------------
* Use promise chaining to prevent PeerConnection state race conditions. Thanks @davies147
Version 3.0.24 (released in 2017-11-5)
--------------------------------------
* Fix #421. Fire RTCSession 'peerconnection' event as soon as its created
Version 3.0.23 (released in 2017-10-31)
--------------------------------------
* Fix typo. Thanks @michelepra.
Version 3.0.22 (released in 2017-10-27)
--------------------------------------
* Tests: enable test-UA-no-WebRTC tests.
* WebSocketInterface: uppercase the via_transport attribute.
* Fix #469. new method InitialOutgoingInviteRequest::clone().
Version 3.0.21 (released in 2017-10-26)
--------------------------------------
* WebSocketInterface: Add 'via_transport' setter.
Version 3.0.20 (released in 2017-10-24)
--------------------------------------
* Fix typo on ES6 transpiling.
Version 3.0.19 (released in 2017-10-21)
--------------------------------------
* ES6 transpiling. Modernize full JsSIP code.
Version 3.0.18 (released in 2017-10-13)
--------------------------------------
* Dialog: ACK to initial INVITE could have lower CSeq than current remote_cseq.
Version 3.0.17 (released in 2017-10-12)
--------------------------------------
* RTCSession: process INFO in early state.
Version 3.0.16 (released in 2017-10-12)
--------------------------------------
* Fix #457. Properly retrieve ReferSubscriber. Thanks @btaens.
Version 3.0.15 (released in 2017-08-31)
--------------------------------------
* Fix #457. Support NOTIFY requests to REFER subscriptions without Event id parameter.
Version 3.0.14 (released in 2017-08-31)
--------------------------------------
* Update dependencies.
Version 3.0.13 (released in 2017-06-10)
--------------------------------------
* `Registrator`: Don't send a Register request if another is on progress. Thanks to Paul Grebenc.
Version 3.0.12 (released in 2017-05-23)
--------------------------------------
* `UA`: Add `registrationExpiring` event (#442). Credits to @danjenkins.
Version 3.0.11 (released in 2017-05-21)
--------------------------------------
* `RTCSession`: Emit "peerconnection" also for incoming calls.
Version 3.0.10 (released in 2017-05-17)
--------------------------------------
* Emit SDP before new `RTCSessionDescription`. Thanks to @StarLeafRob.
Version 3.0.8 (released in 2017-05-03)
--------------------------------------
* Generic SIP INFO support.
Version 3.0.7 (released in 2017-03-24)
--------------------------------------
* Fix #431. Fix UA's `disconnect` event by properly providing an object with all the documente fields (thanks @nicketson for reporting it).
Version 3.0.6 (released in 2017-03-22)
--------------------------------------
* Fix #428. Don't use `pranswer` for early media. Instead create an `answer` and do a workaround when the 200 arrives.
Version 3.0.5 (released in 2017-03-21)
--------------------------------------
* Update deps.
* Add more debug logs into `RTCSession` class.
Version 3.0.4 (released in 2017-03-13)
--------------------------------------
* Update deps.
* If ICE fails, terminate the session with status code 408.
Version 3.0.3 (released in 2017-02-22)
--------------------------------------
* Fix #426. Properly emit DTMF events.
Version 3.0.2 (released in 2017-02-17)
--------------------------------------
* Fix #418. Incorrect socket status on failure.
Version 3.0.1 (released in 2017-01-19)
--------------------------------------
* Close #419. Allow sending the DTMF 'R' key. Used to report a hook flash.
Version 3.0.0 (released in 2016-11-19)
--------------------------------------
* Remove `rtcninja` dependency. Instead use `webrtc-adapter`.
* `RTCSession:`: Remove `RTCPeerConnection` event wrappers. The app can access them via `session.connection`.
* `RTCSession:`: Emit WebRTC related events when internal calls to `getUserMedia()`, `createOffer()`, etc. fail.
* Use debug NPM fixed "2.0.0" version (until a pending bug in such a library is fixed).
* `UA`: Remove `ws_servers` option.
* `UA`: Allow immediate restart
Version 2.0.6 (released in 2016-09-30)
--------------------------------------
* Improve library logs.
Version 2.0.5 (released in 2016-09-28)
--------------------------------------
* Update dependencies.
Version 2.0.4 (released in 2016-09-15)
--------------------------------------
* Fix #400. Corrupt NPM packege.
Version 2.0.3 (released in 2016-08-23)
--------------------------------------
* Fix #385. No CANCEL request sent for authenticated requests.
Version 2.0.2 (released in 2016-06-17)
--------------------------------------
* Fix `gulp-header` dependency version.
Version 2.0.1 (released in 2016-06-09)
--------------------------------------
* Export `JsSIP.WebSocketInterface`.
Version 2.0.0 (released in 2016-06-07)
--------------------------------------
* New 'contact_uri' configuration parameter.
* Remove Node websocket dependency.
* Fix #196. Improve 'hostname' parsing.
* Fix #370. Outgoing request instance being shared by two transactions.
* Fix #296. Abrupt transport disconnection on UA.stop().
* Socket interface. Make JsSIP socket agnostic.
Version 1.0.1 (released in 2016-05-17)
---------------------------------------
* Update dependencies.
Version 1.0.0 (released in 2016-05-11)
---------------------------------------
* `RTCSession`: new event on('sdp') to allow SDP modifications.
Version 0.7.23 (released in 2016-04-12)
---------------------------------------
* `RTCSession`: Allow multiple calls to `refer()` at the same time.
Version 0.7.22 (released in 2016-04-06)
---------------------------------------
* `UA`: `set()` allows changing user's display name.
* Ignore SDP answer in received ACK retransmissions (fix [367](https://github.com/versatica/JsSIP/issues/367)).
Version 0.7.21 (released in 2016-04-05)
---------------------------------------
* `RTCSession`: Also emit `peerconnection` event for incoming INVITE without SDP.
Version 0.7.20 (released in 2016-04-05)
---------------------------------------
* `RTCSession/ReferSubscriber`: Fix typo that breaks exposed API.
Version 0.7.19 (released in 2016-04-05)
---------------------------------------
* `RTCSession`: Make `refer()` method to return the corresponding instance of `ReferSubscriber` so the app can set and manage as many events as desired on it.
Version 0.7.18 (released in 2016-03-23)
---------------------------------------
* Add INFO method to allowed methods list
* Add SIP Code 424 RFC 6442
Version 0.7.17 (released in 2016-02-25)
---------------------------------------
* Apply changes of 0.7.16 also to browserified files under `dist/` folder.
Version 0.7.16 (released in 2016-02-24)
---------------------------------------
* Fix [337](https://github.com/versatica/JsSIP/issues/337). Consistenly indicate registration status through events.
Version 0.7.15 (released in 2016-02-24)
---------------------------------------
* Emit UA 'connected' event before sending REGISTER on transport connection
* Fix [355](https://github.com/versatica/JsSIP/pull/355 ). call to non existent `parsed.error` function. Thanks Stéphane Alnet @shimaore
Version 0.7.14 (released in 2016-02-17)
---------------------------------------
* Fix sips URI scheme parsing rule.
Version 0.7.13 (released in 2016-02-10)
---------------------------------------
* Fix. Don't lowercase URI parameter values. Thanks to Alexandr Dubovikov @adubovikov
Version 0.7.12 (released in 2016-02-05)
---------------------------------------
* Accept new `UA` configuration parameters `ha1` and `realm` to avoid plain SIP password handling ([issue 353](https://github.com/versatica/JsSIP/issues/353)).
* New `UA.set()` and `UA.get()` methods to set and retrieve computed configuration parameters in runtime.
Version 0.7.11 (released in 2015-12-17)
---------------------------------------
* Fix typo ("iceconnetionstatechange" => "iceconnectionstatechange"). Thanks to Vertika Srivastava.
Version 0.7.10 (released in 2015-12-01)
---------------------------------------
* Make `gulp` run on Node 4.0.X and 5.0.X.
Version 0.7.9 (released in 2015-10-16)
---------------------------------------
* `UA`: Add `set(parameter, value)` method to change a configuration setting in runtime (currently just "password" is implemented).
Version 0.7.8 (released in 2015-10-13)
---------------------------------------
* `RTCSession`: Add `resetLocalMedia()` method to reset the session local MediaStream by enabling both its audio and video tracks (unless the remote peer is on hold).
Version 0.7.7 (released in 2015-10-05)
---------------------------------------
* `RTCSession`: Add "sending" event to outgoing, a good chance for the app to mangle the INVITE or its SDP offer.
Version 0.7.6 (released in 2015-09-29)
---------------------------------------
* Update dependencies.
* Improve gulpfile.js.
Version 0.7.5 (released in 2015-09-15)
---------------------------------------
* Don't ask for `getUserMedia` in `RTCSession.answer()` if no `mediaConstraints` are provided.
Version 0.7.4 (released in 2015-08-10)
---------------------------------------
* Allow rejecting an in-dialog INVITE or UPDATE message.
Version 0.7.3 (released in 2015-07-29)
---------------------------------------
* FIX properly restart UA if start() is called while closing.
Version 0.7.2 (released in 2015-07-27)
---------------------------------------
* Update dependencies.
Version 0.7.1 (released in 2015-07-27)
---------------------------------------
* Update dependencies.
Version 0.7.0 (released in 2015-07-23)
---------------------------------------
* Add REFER support.
Version 0.6.33 (released in 2015-06-17)
---------------------------------------
* Don't keep URI params&headers in the registrar server URI.
* `RTCSession` emits `peerconnection` for outgoing calls once the `RTCPeerConnection` is created and before the SDP offer is generated (good chance to create a `RTCDataChannel` without requiring renegotiation).
Version 0.6.32 (released in 2015-06-16)
---------------------------------------
* Add callback to `update` and `reinvite` events.
Version 0.6.31 (released in 2015-06-16)
---------------------------------------
* Added a parser for Reason header.
Version 0.6.30 (released in 2015-06-09)
---------------------------------------
* Fix array iteration in `URI#toString()` to avoid Array prototype mangling by devil libraries such as Ember.
Version 0.6.29 (released in 2015-06-06)
---------------------------------------
* Auto-register on transport connection before emitting the event.
Version 0.6.28 (released in 2015-06-02)
---------------------------------------
* Update "rtcninja" dependencie.
Version 0.6.27 (released in 2015-06-02)
---------------------------------------
* Don't terminate SIP dialog if processing of 183 with SDP fails.
* Update dependencies.
Version 0.6.26 (released in 2015-04-17)
---------------------------------------
* Update "rtcninja" dependency.
Version 0.6.25 (released in 2015-04-16)
---------------------------------------
* Update "rtcninja" dependency.
Version 0.6.24 (released in 2015-04-14)
---------------------------------------
* RTCSession: Fix Invite Server transaction destruction.
Version 0.6.23 (released in 2015-04-14)
---------------------------------------
* RTCSession: Handle session timers before emitting "accepted".
* Fix issue with latest version of browserify.
Version 0.6.22 (released in 2015-04-13)
---------------------------------------
* Fix double "disconnected" event in some cases.
Version 0.6.21 (released in 2015-03-11)
---------------------------------------
* Don't iterate arrays with (for...in) to avoid problems with evil JS libraries that add stuff into the Array prototype.
Version 0.6.20 (released in 2015-03-09)
---------------------------------------
* Be more flexible receiving DTMF INFO bodies.
Version 0.6.19 (released in 2015-03-05)
---------------------------------------
* Update dependencies.
Version 0.6.18 (released in 2015-02-09)
--------------------------------------
* Terminate the call with a proper BYE/CANCEL/408/500 if request timeout, transport error or dialog error happens.
* Fix "rtcninja" dependency problem.
Version 0.6.17 (released in 2015-02-02)
--------------------------------------
* `RTCSession`: Improve `isReadyToReOffer()`.
Version 0.6.16 (released in 2015-02-02)
--------------------------------------
* `RTCSession`: Avoid calling hold()/unhold/renegotiate() if an outgoing renegotiation is not yet finished (return false).
* `RTCSession`: Add `options` and `done` arguments to hold()/unhold/renegotiate().
* `RTCSession`: New public method `isReadyToReOffer()`.
Version 0.6.15 (released in 2015-01-31)
--------------------------------------
* `RTCSession:` Emit `iceconnetionstatechange` event.
* Update "rtcninja" dependency to 0.4.0.
Version 0.6.14 (released in 2015-01-29)
--------------------------------------
* `RTCSession:` Include initially given `rtcOfferConstraints` in `sendReinvite()` and `sendUpdate()`.
Version 0.6.13 (released in 2015-01-29)
--------------------------------------
* Properly keep mute local audio/video if remote is on hold, and keep it even if we re-offer. Also fix SDP direction attributes in re-offers according to current local and remote "hold" status.
Version 0.6.12 (released in 2015-01-28)
--------------------------------------
* Update "rtcninja" dependency to 0.3.3 (fix "RTCOfferOptions").
Version 0.6.11 (released in 2015-01-27)
--------------------------------------
* Fix "Session-Expires" default value to 90 seconds.
Version 0.6.10 (released in 2015-01-27)
--------------------------------------
* Update "rtcninja" dependency to 0.3.2 (get the `rtcninja.canRenegotiate` attribute).
Version 0.6.9 (released in 2015-01-27)
--------------------------------------
* Don't reply 405 "Method Not Supported" to re-INVITE even if the UA's "newRTCSession" event is not set.
* `RTCSession`: Allow extraHeaders in `renegotiate()`.
Version 0.6.8 (released in 2015-01-26)
--------------------------------------
* `RTCSession`: Don't ask for `getUserMedia()` in outgoing calls if `mediaConstraints` is `{audio:false, video:false}`. It is user's responsability to, in that case, provide `offerToReceiveAudio/Video` in `rtcOfferConstraints`.
Version 0.6.7 (released in 2015-01-26)
--------------------------------------
* ' UA.call()': Return the `RTCSession` instance.
* ' UA.sendMessage()': Return the `Message` instance.
Version 0.6.6 (released in 2015-01-24)
--------------------------------------
* `RTCSession`: Don't process SDPs in retranmissions of 200 OK during reINVITE/UDATE.
* `RTCSession`: Emit 'reinvite' when a reINVITE is received.
* `RTCSession`: Emit 'update' when an UPDATE is received.
Version 0.6.5 (released in 2015-01-20)
--------------------------------------
* `RTCSession`: Don't override `this.data` on `answer()` (unless `options.data` is given).
Version 0.6.4 (released in 2015-01-19)
--------------------------------------
* `RTCSession#connect()`: Add `rtcAnswerContraints` options for later incoming reINVITE or UPDATE with SDP offer.
* `RTCSession#answer()`: Add `rtcOfferConstraints` options for later incoming reINVITE without SDP offer.
* `RTCSession#renegotiate()`: Add `rtcOfferConstraints` options for the UPDATE or reINVITE.
* `RTCSession#answer()`: Remove audio or video from the given `getUserMedia` mediaConstraints if the incoming SDP has no audio/video sections.
Version 0.6.3 (released in 2015-01-17)
--------------------------------------
* Bug fix. Properly cancel when only '100 trying' has been received.
Version 0.6.2 (released in 2015-01-16)
--------------------------------------
* Bug fix: Do not set "Content-Type: application/sdp" in body-less UPDATE requests.
Version 0.6.1 (released in 2015-01-16)
--------------------------------------
* Support for [Session Timers](https://tools.ietf.org/html/rfc4028).
Version 0.6.0 (released in 2015-01-13)
--------------------------------------
* [debug](https://github.com/visionmedia/debug) module.
* [rtcninja](https://github.com/ibc/rtcninja.js) module.
* Can renegotiate an ongoing session by means of a re-INVITE or UPDATE method (useful if the local stream attached to the `RTCPeerConnection` has been modified).
* Improved hold/unhold detection.
* New API options for `UA#call()` and `RTCSession#answer()`.
Version 0.5.0 (released in 2014-11-03)
--------------------------------------
* JsSIP runs in Node!
* The internal design of JsSIP has also been modified, becoming a real Node project in which the "browser version" (`jssip-0.5.0.js` or `jssip-0.5.0.min.js`) is generated with [browserify](http://browserify.org). This also means that the browser version can be loaded with AMD or CommonJS loaders.
Version 0.4.3 (released in 2014-10-29)
--------------------------------------
* [(3b1ee11)](https://github.com/versatica/JsSIP/commit/3b1ee11) Fix references to 'this'.
Version 0.4.2 (released in 2014-10-24)
--------------------------------------
* [(ca7702e)](https://github.com/versatica/JsSIP/commit/ca7702e) Fix #257. RTCMediaHandler: fire onIceCompleted() on next tick to avoid events race conditions in Firefox 33.
Version 0.4.1 (released in 2014-10-21)
--------------------------------------
* This version is included into the [Bower](https://bower.io/) registry which means `$ bower install jssip`.
Version 0.4.0 (released in 2014-10-21)
--------------------------------------
* (https://jssip.net/documentation/0.4.x/api/session) Hold/Unhold implementation
* (https://jssip.net/documentation/0.4.x/api/session) Mute/Unmute implementation
* (https://jssip.net/documentation/0.4.x/api/ua_configuration_parameters/#instance_id) New 'instance_id' configuration parameter
* (https://jssip.net/documentation/0.4.x/api/ua_configuration_parameters/#log) New 'log' configuration parameter
* [(34b235c)](https://github.com/versatica/JsSIP/commit/34b235c) Fix #246. Increase the event emiter max listener number to 50
* [(9a1ebdf)](https://github.com/versatica/JsSIP/commit/9a1ebdf) Late SDP implementation. Handle SDP-less incoming INVITEs
* [(f0cc4c1)](https://github.com/versatica/JsSIP/commit/f0cc4c1) Fix #253. RTCSession: instead of "started" emit "accepted" when 2XX and "confirmed" when ACK
* [(f0cc4c1)](https://github.com/versatica/JsSIP/commit/f0cc4c1) Fix #253. RTCSession: accept SDP renegotiation on incoming UPDATE requests.
* [(177f38d)](https://github.com/versatica/JsSIP/commit/177f38d) Fix #248. Improve transaction handling on CANCEL
* [(f9ef522)](https://github.com/versatica/JsSIP/commit/f9ef522) Fix detection of incoming merged requests (don't generate 482 for retransmissions).
* [(3789d5b)](https://github.com/versatica/JsSIP/commit/3789d5b) Fix #245. Improve late CANCEL
* [(2274a7d)](https://github.com/versatica/JsSIP/commit/2274a7d) Add hack_via_ws option to force "WS" in Via header when the server has wss:// scheme.
* [(c9e8764)](https://github.com/versatica/JsSIP/commit/c9e8764) Fire 'progress' (originator = local) when receiving an incoming call.
* [(39949e0)](https://github.com/versatica/JsSIP/commit/39949e0) Fix #242. fine tune the ICE state check for createAnswer/createOffer
* [(80c32f3)](https://github.com/versatica/JsSIP/commit/80c32f3) Fix #240. ICE connection RTP timeout status fix
* [(1f4d36d)](https://github.com/versatica/JsSIP/commit/1f4d36d) Remove RFC 3261 18.1.2 sanity check (sent-by host mismatch in Via header).
* [(62e8323)](https://github.com/versatica/JsSIP/commit/62e8323) Fix #176. Update to the latest IceServer definition
* [(caf20f9)](https://github.com/versatica/JsSIP/commit/caf20f9) Fix #163. Stop transport revocery on UA.stop().
* [(2f3769b)](https://github.com/versatica/JsSIP/commit/2f3769b) Fix #148: WebSocket reconnection behaviour
* [(d7c3c9c)](https://github.com/versatica/JsSIP/commit/d7c3c9c) Use plain 'for' loops instead of 'for in' loops on arrays
* [(a327be3)](https://github.com/versatica/JsSIP/commit/a327be3) Fix. INFO-based DTMF fixes
* [(d141864)](https://github.com/versatica/JsSIP/commit/d141864) Fix #133. Incorrect REGISTER Contact header value after transport disconnection
* [(f4a29e9)](https://github.com/versatica/JsSIP/commit/f4a29e9) Improvements to 2xx retransmission behaviour
* [(3fc4efa)](https://github.com/versatica/JsSIP/commit/3fc4efa) Fix #107. Stop spamming provisional responses
* [(7c2abe0)](https://github.com/versatica/JsSIP/commit/7c2abe0) Fix. Permit receiving a 200 OK to a INVITE before any 1XX provisional
* [(5c644a6)](https://github.com/versatica/JsSIP/commit/5c644a6) Improvements to min-expires fix
* [(4bfc34c)](https://github.com/versatica/JsSIP/commit/4bfc34c) Fix handling of 423 response to REGISTER
* [(3e84eaf)](https://github.com/versatica/JsSIP/commit/3e84eaf) Fix #112. Enhance CANCEL request processing
* [(1740e5e)](https://github.com/versatica/JsSIP/commit/1740e5e) Fix #117. Clear registration timer before re-setting it
* [(dad84a1)](https://github.com/versatica/JsSIP/commit/dad84a1) Fix #111. Create confirmed dialog before setting remote description.
* [(15d83bb)](https://github.com/versatica/JsSIP/commit/15d83bb) Fix #100. 'originator' property was missing in RTCSession 'started' event data object. Thanks @gavllew
* [(b5c08dc)](https://github.com/versatica/JsSIP/commit/b5c08dc) Fix #99. Do not close the RTCSession if it has been accepted and the WS disconnects
* [(46eef46)](https://github.com/versatica/JsSIP/commit/46eef46) Fix #90. Don't log password
* [(9ca4bc9)](https://github.com/versatica/JsSIP/commit/9ca4bc9) Fix #89. Do not send a To tag in '100 Trying' responses
Version 0.3.0 (released in 2013-03-18)
-------------------------------
* [(fea1326)](https://github.com/versatica/JsSIP/commit/fea1326) Don't validate configuration.password against SIP URI password BNF grammar (fix #74).
* [(3f84b30)](https://github.com/versatica/JsSIP/commit/3f84b30) Make RTCSession local_identity and remote_identity NameAddrHeader instances
* [(622f46a)](https://github.com/versatica/JsSIP/commit/622f46a) remove 'views' argument from UA.call()
* [(940fb34)](https://github.com/versatica/JsSIP/commit/940fb34) Refactored Session
* [(71572f7)](https://github.com/versatica/JsSIP/commit/71572f7) Rename causes.IN_DIALOG_408_OR_481 to causes.DIALOG_ERROR and add causes.RTP_TIMEOUT.
* [(c79037e)](https://github.com/versatica/JsSIP/commit/c79037e) Added 'registrar_server' UA configuration parameter.
* [(2584140)](https://github.com/versatica/JsSIP/commit/2584140) Don't allow SIP URI without username in configuration.uri.
* [(87357de)](https://github.com/versatica/JsSIP/commit/87357de) Digest authentication refactorized.
* [(6867f51)](https://github.com/versatica/JsSIP/commit/6867f51) Add 'cseq' and 'call_id' attributes to OutgoingRequest.
* [(cc97fee)](https://github.com/versatica/JsSIP/commit/cc97fee) Fix. Delete session from UA sessions collection when closing
* [(947b3f5)](https://github.com/versatica/JsSIP/commit/947b3f5) Remove RTCPeerConnection.onopen event handler
* [(6029e45)](https://github.com/versatica/JsSIP/commit/6029e45) Enclose every JsSIP component with an inmediate function
* [(7f523cc)](https://github.com/versatica/JsSIP/commit/7f523cc) JsSIP.Utils.MD5() renamed to JsSIP.Utils.calculateMD5() (a more proper name for a function).
* [(1b1ab73)](https://github.com/versatica/JsSIP/commit/1b1ab73) Fix. Reply '200' to a CANCEL 'before' replying 487 to the INVITE
* [(88fa9b6)](https://github.com/versatica/JsSIP/commit/88fa9b6) New way to handle Streams
* [(38d4312)](https://github.com/versatica/JsSIP/commit/38d4312) Add Travis CI support.
* [(50d7bf1)](https://github.com/versatica/JsSIP/commit/50d7bf1) New `grunt grammar` task for automatically building customized Grammar.js and Grammar.min.js.
* [(f19842b)](https://github.com/versatica/JsSIP/commit/f19842b) Fix #60, #61. Add optional parameters to ua.contact.toString(). Thanks @ibc
* [(8f5acb1)](https://github.com/versatica/JsSIP/commit/8f5acb1) Enhance self contact handling
* [(5e7d815)](https://github.com/versatica/JsSIP/commit/5e7d815) Fix. ACK was being replied when not pointing to us. Thanks @saghul
* [(1ab6df3)](https://github.com/versatica/JsSIP/commit/1ab6df3) New method JsSIP.NameAddrHeader.parse() which returns a JsSIP.NameAddrHeader instance.
* [(a7b69b8)](https://github.com/versatica/JsSIP/commit/a7b69b8) Use a random user in the UA's contact.
* [(f67872b)](https://github.com/versatica/JsSIP/commit/f67872b) Extend the use of the 'options' argument
* [(360c946)](https://github.com/versatica/JsSIP/commit/360c946) Test units for URI and NameAddrHeader classes.
* [(826ce12)](https://github.com/versatica/JsSIP/commit/826ce12) Improvements and some bug fixes in URI and NameAddrHeader classes.
* [(e385840)](https://github.com/versatica/JsSIP/commit/e385840) Make JsSIP.URI and JsSIP.NameAddrHeader more robust.
* [(b0603e3)](https://github.com/versatica/JsSIP/commit/b0603e3) Separate qunitjs tests with and without WebRTC. Make "grunt test" to run "grunt testNoWebRTC".
* [(659c331)](https://github.com/versatica/JsSIP/commit/659c331) New way to handle InvalidTargetErorr and WebRtcNotSupportedError
* [(d3bc91a)](https://github.com/versatica/JsSIP/commit/d3bc91a) Don't run qunit task by default (instead require "grunt test").
* [(e593396)](https://github.com/versatica/JsSIP/commit/e593396) Added qunitjs based test unit (for now a parser test) and integrate it in grunt.js.
* [(da58bff)](https://github.com/versatica/JsSIP/commit/da58bff) Enhance URI and NameAddrHeader
* [(df6dd98)](https://github.com/versatica/JsSIP/commit/df6dd98) Automate qunit tests into grunt process
* [(babc331)](https://github.com/versatica/JsSIP/commit/babc331) Fix. Accept multiple headers with same hader name in SIP URI.
* [(716d164)](https://github.com/versatica/JsSIP/commit/716d164) Pass full multi-header header fields to the grammar
* [(2e18a6b)](https://github.com/versatica/JsSIP/commit/2e18a6b) Fix contact match in 200 response to REGISTER
* [(3f7b02f)](https://github.com/versatica/JsSIP/commit/3f7b02f) Fix stun_host grammar rule.
* [(7867baf)](https://github.com/versatica/JsSIP/commit/7867baf) Allow using a JsSIP.URI instance everywhere specting a destination.
* [(a370c78)](https://github.com/versatica/JsSIP/commit/a370c78) Fix 'maddr' and 'method' URI parameters handling
* [(537d2f2)](https://github.com/versatica/JsSIP/commit/537d2f2) Give some love to "console.log|warn|info" messages missing the JsSIP class/module prefix.
* [(8cb6963)](https://github.com/versatica/JsSIP/commit/8cb6963) In case null, emptry string, undefined or NaN is passed as parameter value then its default value is applied. Also print to console the processed value of all the parameters after validating them.
* [(f306d3c)](https://github.com/versatica/JsSIP/commit/f306d3c) hack_ip_in_contact now generates a IP in the range of Test-Net as stated in RFC 5735 (192.0.2.0/24).
* [(528d989)](https://github.com/versatica/JsSIP/commit/528d989) Add DTMF feature
* [(777a48f)](https://github.com/versatica/JsSIP/commit/777a48f) Change API methods to make use of generic 'options' argument
* [(3a6971d)](https://github.com/versatica/JsSIP/commit/3a6971d) Fix #26. Fire 'unregistered' event correctly.
* [(5616837)](https://github.com/versatica/JsSIP/commit/5616837) Rename 'outbound_proxy_set' parameter by 'ws_servers'
* [(37fe9f4)](https://github.com/versatica/JsSIP/commit/37fe9f4) Fix #54. Allow configuration.uri username start with 'sip'
* [(a612987)](https://github.com/versatica/JsSIP/commit/a612987) Add 'stun_servers' and 'turn_servers' configuration parameters
* [(9fad09b)](https://github.com/versatica/JsSIP/commit/9fad09b) Add JsSIP.URI and JsSIP.NameAddrHeader classes
* [(f35376a)](https://github.com/versatica/JsSIP/commit/f35376a) Add 'Content-Length' header to every SIP response
* [(3081a21)](https://github.com/versatica/JsSIP/commit/3081a21) Enhance 'generic_param' grammar rule
* [(e589002)](https://github.com/versatica/JsSIP/commit/e589002) Fix. Allow case-insentivity in SIP grammar, when corresponds
* [(aec55a2)](https://github.com/versatica/JsSIP/commit/aec55a2) Enhance transport error handling
* [(d0dbde3)](https://github.com/versatica/JsSIP/commit/d0dbde3) New stun_servers and turn_servers parameters
* [(47cdb66)](https://github.com/versatica/JsSIP/commit/47cdb66) Add 'extraHeaders' parameter to UA.register() and UA.unregister() methods
* [(69fbdbd)](https://github.com/versatica/JsSIP/commit/69fbdbd) Enhance in-dialog request management
* [(da23790)](https://github.com/versatica/JsSIP/commit/da23790) Fix 'UTF8-NONASCII' grammar rule
* [(3f86b94)](https://github.com/versatica/JsSIP/commit/3f86b94) Require a single grunt task for packaging
* [(81595be)](https://github.com/versatica/JsSIP/commit/81595be) Add some log lines into sanity check code for clarity
* [(a8a7627)](https://github.com/versatica/JsSIP/commit/a8a7627) Enhance RTCPeerConnection SDP error handling. Thanks @ibc for reporting.
* [(3acc474)](https://github.com/versatica/JsSIP/commit/3acc474) Add turn configuration parameters for RTCPeerConnection
* [(9fccaf5)](https://github.com/versatica/JsSIP/commit/9fccaf5) Enhance 'boolean' comparison
* [(24fcdbb)](https://github.com/versatica/JsSIP/commit/24fcdbb) Make preloaded Route header optional.
* [(defeabe)](https://github.com/versatica/JsSIP/commit/defeabe) Automatic connection recovery.
* [(a45293b)](https://github.com/versatica/JsSIP/commit/a45293b) Improve reply() method.
* [(f05795b)](https://github.com/versatica/JsSIP/commit/f05795b) Fix. Prevent outgoing CANCEL messages from being authenticated
* [(5ed6122)](https://github.com/versatica/JsSIP/commit/5ed6122) Update credentials with the new authorization upon 401/407 reception
* [(2c9a310)](https://github.com/versatica/JsSIP/commit/2c9a310) Do not allow reject-ing a Message or Session with an incorrect status code
* [(35e5874)](https://github.com/versatica/JsSIP/commit/35e5874) Make optional the reason phrase when reply-ing
* [(85ca354)](https://github.com/versatica/JsSIP/commit/85ca354) Implement credential reuse
* [(351ca06)](https://github.com/versatica/JsSIP/commit/351ca06) Fix Contact header aggregation for incoming messages
* [(d6428e7)](https://github.com/versatica/JsSIP/commit/d6428e7) Fire UA 'newMessage' event for incoming MESSAGE requests regardless they are out of dialog or in-dialog.
* [(1ab3423)](https://github.com/versatica/JsSIP/commit/1ab3423) Intelligent 'Allow' header field value. Do not set a method in the 'Allow' header field if its corresponding event is not defined or has zero listeners.
* [(4e70a25)](https://github.com/versatica/JsSIP/commit/4e70a25) Allow 'text/plain' and 'text/html' content types for incoming SIP MESSAGE Fixed incoming SIP MESSAGE processing when the Content-Type header contains parameters
* [(d5f3432)](https://github.com/versatica/JsSIP/commit/d5f3432) Fixed the message header split when a parsing error occurs. Parsing error log enhanced.
Version 0.2.1 (released in 2012-11-15)
-------------------------------
* [(24e32c0)](https://github.com/versatica/JsSIP/commit/24e32c0d16ff5fcefd2319fc445a59d6fc2bcb59) UA configuration `password` parameter is now optional.
* [(ffe7af6)](https://github.com/versatica/JsSIP/commit/ffe7af6276915695af9fd00db281af51fec2714f) Bug fix: UA configuration `display_name` parameter.
* [(aa51291)](https://github.com/versatica/JsSIP/commit/aa512913733a4f63af066b0a9e12a8e38f2a5acb) Bug fix: Allows multibyte symbols in UA configuration `display_name` parameter (and require not to write it between double quotes).
* [(aa48201)](https://github.com/versatica/JsSIP/commit/aa48201) Bug fix: "cnonce" value value was not being quoted in Digest Authentication (reported by [vf1](https://github.com/vf1)).
* [(1ecabf5)](https://github.com/versatica/JsSIP/commit/1ecabf5) Bug fix: Fixed authentication for in-dialog requests (reported by [vf1](https://github.com/vf1)).
* [(11c6bb6)](https://github.com/versatica/JsSIP/commit/11c6bb6aeef9de3bf2a339263f620b1caf60d634) Allow receiving WebSocket binary messages (code provided by [vf1](https://github.com/vf1)).
* [(0e8c5cf)](https://github.com/versatica/JsSIP/commit/0e8c5cf) Bug fix: Fixed Contact and Record-Route header split (reported by Davide Corda).
* [(99243e4)](https://github.com/versatica/JsSIP/commit/99243e4) Fixed BYE and ACK error handling.
* [(0c91285)](https://github.com/versatica/JsSIP/commit/0c91285) Fixed failure causes in 'registrationFailed' UA event.
Version 0.2.0 (released in 2012-11-01)
--------------------------------------
* First stable release with full website and documentation.
* Refactored sessions, message and events API.
Version 0.1.0 (released in 2012-09-27)
--------------------------------------
* First release. No documentation.
| 39.292292 | 297 | 0.647415 | eng_Latn | 0.564679 |
bb0c9b1d9e19e2cab375c51db581d1abfb1bce02 | 2,240 | md | Markdown | index/r/royal.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 26 | 2019-03-21T15:43:32.000Z | 2022-03-12T18:30:35.000Z | index/r/royal.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 3 | 2020-05-01T18:15:58.000Z | 2021-04-16T05:51:05.000Z | index/r/royal.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 14 | 2019-06-23T22:54:35.000Z | 2021-10-16T02:04:45.000Z | # royal
* [Elenis Royal Icing](../../index/e/elenis-royal-icing-104142.json)
* [Grand Royal Fizz](../../index/g/grand-royal-fizz-200431.json)
* [Royal](../../index/r/royal-201033.json)
* [Royal Blueberry Ice Pops](../../index/r/royal-blueberry-ice-pops-106863.json)
* [Royal Blush](../../index/r/royal-blush-351592.json)
* [Royal Chicken Cooked In Yogurt](../../index/r/royal-chicken-cooked-in-yogurt.json)
* [Royal Fizz](../../index/r/royal-fizz-200508.json)
* [Royal Icing](../../index/r/royal-icing-240751.json)
* [Royal Rickey](../../index/r/royal-rickey-200752.json)
* [Gingerbread Cookies With Royal Icing Recipe](../../index/g/gingerbread-cookies-with-royal-icing-recipe.json)
* [Royal Icing Recipe1](../../index/r/royal-icing-recipe1.json)
* [Sugar Cookies With Royal Icing](../../index/s/sugar-cookies-with-royal-icing.json)
* [Glossy Royal Icing](../../index/g/glossy-royal-icing.json)
* [Perfect And Delicious Royal Icing](../../index/p/perfect-and-delicious-royal-icing.json)
* [Royal Coconut Cookies](../../index/r/royal-coconut-cookies.json)
* [Royal Hawaiian Pie](../../index/r/royal-hawaiian-pie.json)
* [Royal Icing I](../../index/r/royal-icing-i.json)
* [Royal Icing Ii](../../index/r/royal-icing-ii.json)
* [Royal Icing Iii](../../index/r/royal-icing-iii.json)
* [Royal Icing](../../index/r/royal-icing.json)
* [Royal Rhubarb Crisp](../../index/r/royal-rhubarb-crisp.json)
* [Shahi Paneer Royal Cheese](../../index/s/shahi-paneer-royal-cheese.json)
* [Soft Royal Icing](../../index/s/soft-royal-icing.json)
* [Stiff Royal Icing](../../index/s/stiff-royal-icing.json)
* [Elenis Royal Icing 104142](../../index/e/elenis-royal-icing-104142.json)
* [Grand Royal Fizz 200431](../../index/g/grand-royal-fizz-200431.json)
* [Royal 201033](../../index/r/royal-201033.json)
* [Royal Blueberry Ice Pops 106863](../../index/r/royal-blueberry-ice-pops-106863.json)
* [Royal Blush 351592](../../index/r/royal-blush-351592.json)
* [Royal Chicken Cooked In Yogurt](../../index/r/royal-chicken-cooked-in-yogurt.json)
* [Royal Fizz 200508](../../index/r/royal-fizz-200508.json)
* [Royal Icing 240751](../../index/r/royal-icing-240751.json)
* [Royal Rickey 200752](../../index/r/royal-rickey-200752.json)
| 62.222222 | 112 | 0.677232 | yue_Hant | 0.640151 |
bb0d2b40e84b763161efb9f437c5cc4dc19850c2 | 10,080 | md | Markdown | microsoft-365/security/defender/eval-defender-office-365-enable-eval.md | MicrosoftDocs/microsoft-365-docs-pr.fr-FR | c407f4efa6ff7a252c2dc8e808cd4aa76305f308 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2020-05-18T20:10:58.000Z | 2022-03-29T22:15:15.000Z | microsoft-365/security/defender/eval-defender-office-365-enable-eval.md | MicrosoftDocs/microsoft-365-docs-pr.fr-FR | c407f4efa6ff7a252c2dc8e808cd4aa76305f308 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-09T09:34:24.000Z | 2022-02-09T06:45:31.000Z | microsoft-365/security/defender/eval-defender-office-365-enable-eval.md | MicrosoftDocs/microsoft-365-docs-pr.fr-FR | c407f4efa6ff7a252c2dc8e808cd4aa76305f308 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-09T19:57:56.000Z | 2021-07-09T09:32:36.000Z | ---
title: Activer l’environnement d’évaluation de Microsoft Defender pour Office 365 dans votre environnement de production
description: Étapes d’activation de Microsoft Defender pour l’évaluation Office 365, avec des licences d’évaluation, la gestion des enregistrement MX, & audit des domaines acceptés et des connexions entrantes.
search.product: eADQiWindows 10XVcnh
search.appverid: met150
ms.prod: m365-security
ms.mktglfcycl: deploy
ms.sitesec: library
ms.pagetype: security
f1.keywords:
- NOCSH
ms.author: tracyp
author: MSFTTracyP
ms.date: 07/01/2021
ms.localizationpriority: medium
manager: dansimp
audience: ITPro
ms.collection:
- M365-security-compliance
- m365solution-overview
- m365solution-evalutatemtp
ms.topic: how-to
ms.technology: m365d
ms.openlocfilehash: 72ee60efdfa93444d6f1fbde7d1c15dd203fdae8
ms.sourcegitcommit: d4b867e37bf741528ded7fb289e4f6847228d2c5
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 10/06/2021
ms.locfileid: "60176690"
---
# <a name="enable-the-evaluation-environment"></a>Activer l’environnement d’évaluation
**S’applique à :**
- Microsoft 365 Defender
Cet article est [l’étape 2 sur 3 dans](eval-defender-office-365-overview.md) le processus de configuration de l’environnement d’évaluation de Microsoft Defender pour Office 365. Pour plus d’informations sur ce processus, voir [l’article de présentation.](eval-defender-office-365-overview.md)
Utilisez les étapes suivantes pour activer l’évaluation de Microsoft Defender pour Office 365.

- [Étape 1 : Activer les licences d’essai](#step-1-activate-trial-licenses)
- [Étape 2 : Auditer et vérifier l’enregistrement MX public](#step-2-audit-and-verify-the-public-mx-record)
- [Étape 3 : Auditer les domaines acceptés](#step-3-audit-accepted-domains)
- [Étape 4 : Auditer les connecteurs entrants](#step-4-audit-inbound-connectors)
- [Étape 5 : Activer l’évaluation](#step-5-activate-the-evaluation)
## <a name="step-1-activate-trial-licenses"></a>Étape 1 : Activer les licences d’essai
Connectez-vous à votre Microsoft Defender existant pour Office 365 environnement ou portail d’administration des clients.
1. Accédez au portail d’administration.
2. Sélectionnez Acheter des services dans le lancement rapide.
:::image type="content" source="../../media/mdo-eval/1_m365-purchase-services.png" alt-text="Cliquez sur Acheter des services dans le volet de navigation de Office 365.":::
3. Faites défiler vers le bas jusquAdd-On section (ou recherchez « Defender ») pour localiser microsoft Defender Office 365 plans.
4. Cliquez sur Détails à côté du plan que vous souhaitez évaluer.
:::image type="content" source="../../media/mdo-eval/2_mdo-eval-license-details.png" alt-text="Cliquez ensuite sur le bouton Détails.":::
5. Cliquez sur le *lien Démarrer la version d’essai* gratuite.
:::image type="content" source="../../media/mdo-eval/3-m365-purchase-button.png" alt-text="Cliquez sur l’essai gratuit Démarrer *lien hypertexte* dans ce panneau.":::
6. Confirmez votre demande et cliquez sur le *bouton Essayer maintenant.*
:::image type="content" source="../../media/mdo-eval/4_mdo-trial-order.png" alt-text="Cliquez maintenant sur le bouton Essayer maintenant **.":::
## <a name="step-2-audit-and-verify-the-public-mx-record"></a>Étape 2 : Auditer et vérifier l’enregistrement MX public
Pour évaluer efficacement Microsoft Defender pour Office 365, il est important que le courrier électronique externe entrant soit relayé via l’instance Exchange Online Protection (EOP) associée à votre client.
1. Connectez-vous au portail d’administration M365, développez Paramètres et sélectionnez Domaines.
2. Sélectionnez votre domaine de courrier vérifié, puis cliquez sur Gérer le DNS.
3. Notez l’enregistrement MX généré et affecté à votre client EOP.
4. Accédez à votre zone DNS externe (publique) et vérifiez l’enregistrement MX principal associé à votre domaine de messagerie.
- Si votre enregistrement MX public correspond actuellement à l’adresse EOP affectée *(par exemple, tenant-com.mail.protection.outlook.com),* aucune modification de routage supplémentaire ne doit être requise.
- Si votre enregistrement MX public est actuellement résolu en passerelle SMTP tierce ou locale, des configurations de routage supplémentaires peuvent être nécessaires.
- Si votre enregistrement MX public est actuellement résolu en Exchange vous êtes peut-être toujours dans un modèle hybride dans lequel certaines boîtes aux lettres de destinataire n’ont pas encore été migrées vers EXO.
## <a name="step-3-audit-accepted-domains"></a>Étape 3 : Auditer les domaines acceptés
1. Connectez-vous Exchange Online portail d’administration, sélectionnez Flow messagerie, puis cliquez sur Domaines acceptés.
2. Dans la liste des domaines acceptés qui ont été ajoutés et vérifiés dans votre client, notez le **type** de domaine pour votre domaine de messagerie principal.
- Si le type de domaine est définie sur ***Faisant*** autorité, il est supposé que toutes les boîtes aux lettres des destinataires de votre organisation résident actuellement dans Exchange Online.
- Si le type de domaine est définie sur ***Relais*** interne, il se peut que vous soyez toujours dans un modèle hybride où certaines boîtes aux lettres de destinataire résident toujours en local.
## <a name="step-4-audit-inbound-connectors"></a>Étape 4 : Auditer les connecteurs entrants
1. Connectez-vous Exchange Online portail d’administration, sélectionnez Flow messagerie, puis cliquez sur Connecteurs.
2. Dans la liste des connecteurs configurés, notez les entrées provenant de l’organisation partenaire et qui peuvent correspondre à une passerelle SMTP tierce.
3. Dans la liste des connecteurs configurés, notez les entrées étiquetées à partir du serveur de messagerie de votre organisation qui peuvent indiquer que vous êtes toujours dans un scénario hybride.
## <a name="step-5-activate-the-evaluation"></a>Étape 5 : Activer l’évaluation
Utilisez les instructions ci-après pour activer votre Microsoft Defender pour Office 365 d’évaluation à partir du portail Microsoft 365 Defender web.
1. Connectez-vous à votre client avec un compte ayant accès au portail Microsoft 365 Defender client.
2. Choisissez si vous souhaitez que le portail **Microsoft 365 Defender soit** votre interface par défaut pour Microsoft Defender pour Office 365 administration (recommandé).
:::image type="content" source="../../media/mdo-eval/1_mdo-eval-activate-eval.png" alt-text="Cliquez sur le bouton Activer les paramètres pour utiliser le portail d’administration centralisé Microsoft 365 Defender amélioré.":::
3. Dans le menu de navigation, sélectionnez **Stratégies & sous** *Email & Collaboration*.
:::image type="content" source="../../media/mdo-eval/2_mdo-eval-activate-eval.png" alt-text="Voici une image du menu Collaboration & courrier électronique pointant vers stratégies & règles. Cliquez dessus !":::
4. Dans le tableau *de bord Règles & stratégie,* cliquez sur **Stratégies de menace.**
:::image type="content" source="../../media/mdo-eval/3_mdo-eval-activate-eval.png" alt-text="Image du tableau de bord Règles & stratégie et flèche pointant vers les stratégies de menace. Cliquez sur ce bouton suivant !":::
5. Faites défiler vers le bas *jusqu’à Stratégies* supplémentaires et sélectionnez **la vignette Évaluer Defender Office 365'évaluation.**
:::image type="content" source="../../media/mdo-eval/4_mdo-eval-activate-eval.png" alt-text="La vignette Eval Defender pour Office 365 qui s’agit d’une version d’essai de 30 jours sur les vecteurs de collaboration & courrier électronique. Cliquez dessus.":::
6. Choisissez maintenant si les e-mails externes sont Exchange Online directement ou vers une passerelle ou un service tiers, puis cliquez sur Suivant.
:::image type="content" source="../../media/mdo-eval/5_mdo-eval-activate-eval.png" alt-text="Defender for Office 365 évaluera l’envoi de messages à Exchange Online boîtes aux lettres. Donnez des détails sur la façon dont votre courrier est acheminé maintenant, y compris le nom du connecteur sortant qui déroute votre courrier. Si vous utilisez uniquement Exchange Online Protection (EOP), vous n’avez pas de connecteur. Choisissez l’un des fournisseurs tiers ou locaux, ou j’utilise uniquement EOP.":::
7. Si vous utilisez une passerelle tierce, sélectionnez le nom du fournisseur dans la liste sortante avec le connecteur entrant associé à cette solution. Lorsque vous avez répertorié vos réponses, cliquez sur Suivant.
:::image type="content" source="../../media/mdo-eval/6-mdo-eval-activate-eval-settings.png" alt-text="Dans cette boîte de dialogue, vous choisissez le service fournisseur tiers que votre organisation utilise, ou vous sélectionnez *Autre*. Dans la boîte de dialogue suivante, sélectionnez le connecteur entrant. Cliquez ensuite sur Suivant.":::
8. Examinez vos paramètres et cliquez sur le bouton Créer **une** évaluation.
|Avant|Après|
|:---:|:---:|
|:::image type="content" source="../../media/mdo-eval/7-mdo-eval-activate-review.png" alt-text="Ce volet dispose d’une zone de drop-down pour passer en revue vos paramètres. Il dispose également d’un lien cliquable pour modifier votre type de routage si nécessaire. Lorsque vous êtes prêt, cliquez sur le grand bouton Bleu Créer l’évaluation.":::|:::image type="content" source="../../media/mdo-eval/8-mdo-eval-activate-complete.png" alt-text="La mise en place est maintenant terminée. Le bouton bleu de cette page indique « Passer à l’évaluation ».":::|
|
## <a name="next-steps"></a>Étapes suivantes
Étape 3 sur 3 : Configurer le pilote pour Microsoft Defender pour Office 365
Revenir à la vue d’ensemble [de l’évaluation de Microsoft Defender Office 365](eval-defender-office-365-overview.md)
Revenir à la vue d’ensemble [de l’évaluation et de la Microsoft 365 Defender](eval-overview.md)
| 73.043478 | 557 | 0.779663 | fra_Latn | 0.96034 |
bb0dbd11c31df14004746ee3842cd1d5248c426a | 85 | md | Markdown | README.md | zhu123thy/Crazy-turkey | d42fccc8adb5297265d2b20427fde59d961b7a2c | [
"MIT"
] | null | null | null | README.md | zhu123thy/Crazy-turkey | d42fccc8adb5297265d2b20427fde59d961b7a2c | [
"MIT"
] | null | null | null | README.md | zhu123thy/Crazy-turkey | d42fccc8adb5297265d2b20427fde59d961b7a2c | [
"MIT"
] | null | null | null | # Crazy-turkey
Experimental-space
111
asdmkadn kadak nla
nd ;lkad ad
d f
sdf1sd
d 1 | 8.5 | 18 | 0.764706 | kor_Hang | 0.140322 |
bb0dd9b3ed6b49be909dbed0ef8d4207df79b55e | 2,843 | md | Markdown | 流行/999-Selena Gomez ft Camilo/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 17 | 2020-12-01T05:27:50.000Z | 2022-03-28T05:03:34.000Z | 流行/999-Selena Gomez ft Camilo/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | null | null | null | 流行/999-Selena Gomez ft Camilo/README.md | hsdllcw/everyonepiano-music-database | d440544ad31131421c1f6b5df0f039974521eb8d | [
"MIT"
] | 2 | 2021-08-24T08:58:58.000Z | 2022-02-08T08:22:52.000Z |
哥伦比亚歌手Camilo与美国流行歌手Selena Gomez联手合作,于2021年8月27日发布最新单曲《 **999**
》。Camilo之前在视频说过他梦寐以求的合作就是和Selena合作,结果梦想真的成功了!
轻松诙谐的律动;情侣调情的词谱;沁人心脾的嗓音。单曲《999》或许能治愈八月歌荒造成“音乐感冒”所带来的,听力不佳的状况!正所谓惊艳众人,闲静时如娇花照水,行动时似弱柳扶风。走进花海,走进Selena,走进此曲,给予这夏天最浪漫的告别。
同时,网站还为大家提供了《[ **Wolves**](Music-8615-Wolves-Selena-Gomez-and-Marshmello.html
"Wolves")》曲谱下载
歌词下方是 _999钢琴谱_ ,大家可以免费下载学习。
### 999歌词:
Hace mucho tiempo que quiero decirte algo y no puedo
Se me pone la piel de gallina cada vez que te veo
Ya busqué en internet pa' ver si es normal
Sentirse tan bien y a la vez tan mal
Quererte besar sin poder besarte
Tocar sin poder tocarte
No tengo fotos contigo pero en la pared tengo un espacio
No hemos salido ni un día y ya quiero celebrar aniversario
Quiero que esta noche vengas de visita
De fondo poner tu playlist favorita
Quiero hacerlo contigo yo no quiero tener que ir despacio
Yo sé que piensas en mí
Y el corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
Yo sé que piensas en mí
Yo sé que piensas en mí
Y el corazón se te mueve
Y el corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
Yo sé que pa' estar a mil hay que empezar de cero
Primero cae la lluvia ante' del aguacero
Y pa' serte sincero yo ya no me espero
No tengo duda' que esto es amor verdadero mmm
Llevo ya rato pensando en la noche perfecta
Cuando ya pueda tenerte completa
Tú y yo volando sin avioneta
Viajando sin maleta
No tengo fotos contigo pero en la pared tengo un espacio
Ya tengo el espacio
No hemos salido ni un día y ya quiero celebrar aniversario
Sí sí sí sí
Quiero que esta noche vengas de visita
Ah
De fondo poner tu playlist favorita
Quiero hacerlo contigo yo no quiero tener que ir despacio
No quiero yo no quiero
Yo sé que piensas en mí
Y el corazón se te mueve
El corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
Noventa y nueve
Yo sé que piensas en mí
Y el corazón se te mueve
El corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
Sí sí sí sí
Tengo gana' ya de estar contigo
Tengo gana'
Lo que hicimo allá en mi cabeza
No soy la primera pero sí que sea lo mío
Los último' labios que besas
No tengo fotos contigo pero en la pared tengo un espacio
No hemos salido ni un día y ya quiero celebrar aniversario
Quiero que esta noche vengas de visita
De fondo poner tu playlist favorita
Quiero hacerlo contigo yo no quiero tener que ir despacio
Yo sé que piensas en mí
Y el corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
Yo sé que piensas en mí
Yo sé que piensas en mí
Y el corazón se te mueve
Y el corazón se te mueve
Si tú quieres ir a mil
Yo estoy en novecientos noventa y nueve
| 33.05814 | 118 | 0.754836 | spa_Latn | 0.997943 |
bb0ec53a861d2a9c95f9c9a69f265256d977d1e6 | 9,623 | md | Markdown | docs/concept-exercises.md | JimLynchCodes/v3 | 745cc6a21e8828c8e08b6d0dff53894c3f4fd86f | [
"MIT"
] | null | null | null | docs/concept-exercises.md | JimLynchCodes/v3 | 745cc6a21e8828c8e08b6d0dff53894c3f4fd86f | [
"MIT"
] | null | null | null | docs/concept-exercises.md | JimLynchCodes/v3 | 745cc6a21e8828c8e08b6d0dff53894c3f4fd86f | [
"MIT"
] | null | null | null | # Concept Exercises
Concept Exercises replace V2's Core Exercises. They are exercises designed to teach specific (programming) concepts.
## What do we mean by concepts?
By concepts we mean things that a programmer would need to understand to be fluent in a language. We care specifically about how languages are **different**. What do I need to learn differently about numbers in Haskell to numbers in Ruby to be able to work with numbers in those languages. Two questions that we have felt useful to ask to establish this are:
- If someone learnt Ruby, and someone learnt Haskell, what are the things that the two people learnt that are different?
- If a Ruby programmer learnt Haskell, what new concepts would they have to learn, what knowledge would they have to disregard, and what syntax would they have to remap?
By teaching concepts we aim to teach fluency.
## What do we mean by "fluency?"
By "Fluency", we mean: Do you **get** that language? Can you reason in that language? Do you write that language like a "native" programmer writes in it? Fluency is the ability to express oneself articulately in a particular language.
"Fluency" is different to "Proficiency", where we use proficiency to mean: Can you write programs in the language (e.g. can you nav stdlib, docs, compose complex code structures).
Exercism focuses on teaching Fluency not Proficiency. We aim to teach people to understand what makes a language unique and how experienced programmers in that language would reason about - and solve - problems.
## How are Concept Exercises designed and structured?
Concept Exercises must have the following characteristics:
- Each one has a clear learning goal.
- They are language-specific, not generic.
- Stubs/boilerplate are used to avoid the student having to learn/write unnecessary code on exercises.
Exercises are unlocked based on concepts taught and learnt. Each Concept Exercise must teach one or more concepts. It may also have prerequisites on Concepts, which means it will not be unlocked until Concept Exercises teaching those prerequisite concepts have been completed.
Concept Exercises should not inherently become more difficult as the track progresses. **A seasoned developer in Language X should be able to work through all the Concept Exercises on that track spending no more than 5-10 minutes solving each one.** As each exercise should be focussed on getting someone to use a concept for the first time, and because the seasoned developer already understands that concept in Language X, the exercise should feel relatively trivial for them. Later exercises may feel more difficult to a developer unfamiliar with Language X, but only because the later exercise is teaching a concept which in itself is more complicated (for example, most people would agree Recursion is a more complex topic to learn for the first time, than a Loop is to remap from one language to another).
Concept Exercises are **not** mentored. When a student submits a submission that gets the tests passing for a Concept Exercise, we check for an Analyzer or Representer to give feedback. If none is found, then the solution is approved. This shifts the burden of teaching to the exercise, which must provide a clear pathway to learning the concept that is being taught.
Concept Exercises do not share a common base like Practice Exercises do in the `problem-specifications` repository. Instead they "share" Concepts that they are teaching with other languages. This repository aims to list all of those Concepts and provide information about the Concept that maintainers can use as the basis for their own languages. Each Concept should also link to the implementations in different languages. Maintainers are free to copy and paste from each others repositories, and then edit to make things specific to their tracks, but such copy-and-pastes should be considered hard-forks.
For example, we might define a concept of "Classes" and provide a short introduction that explains what a class is, how it fits with objects, state, etc. We might include a link to a good article introducing OOP and classes. Individual tracks implementing an exercise on Classes can then include this introductory text, making any changes or additions that explain their language-specific semantics and syntax.
## Design Guidelines
When designing Concept Exercises, please consider the following guidelines:
1. The exercise should be able to be solved in 5-10 minutes by someone proficient in the language.
1. The exercise should not involve having to learn or engineer a non-trivial algorithm.
1. The exercise should be background-knowledge-agnostic, unless the exercise calls for it (e.g. no maths unless it's a scientific/maths-based Concept).
## Exercise Structure
An exercise has the following files. In the browser they will show at the relevant times. When used via the CLI, the introduction and instructions will be concatenated along with the track's CLI instructions into a README.md, which will sit alongside a HINTS.md.
### `.docs/introduction.md`
This file contains an introduction to the concept. It should be explicit about what the student should learn from the exercise, and provide a short, concise introduction to the concept(s). The aim is to give the student just enough context to figure things out themselves and solve the exercise, as research has shown that self-discovery is the most effective learning experience. Mentioning technical terms that the student can Google if they so want, is preferable over including any code samples or an extensive description. For example we might describe a string as a "Sequence of Unicode characters" or a "series of bytes" or "an object". Unless the student needs to understand the details of what those mean to be able to solve the exercise we should not give more info in this introduction - instead allowing the student to Google, ignore, or map their existing knowledge.
See the C# floating-point-numbers exercise's [introduction.md file][csharp-docs-introduction.md] for an example.
### `.docs/instructions.md`
This file contains instructions for the exercise. It should explicitly explain what the student needs to do (define a method with the signature `X(...)` that takes an A and returns a Z), and provide at least one example usage of that function. If there are multiple tasks within the exercise, it should provide an example of each.
See the C# floating-point-numbers exercise's [instructions.md file][csharp-docs-instructions.md] for an example.
### `.docs/hints.md`
If the student gets stuck, we will allow them to click a button requesting a hint, which shows this file. This will not be a "recommended" path and we will (softly) discourage them using it unless they can't progress without it. As such, it's worth considering that the student reading it will be a little confused/overwhelmed and maybe frustrated.
The file should contain both general and task-specific "hints". These hints should be enough to unblock almost any student. They might link to the docs of the functions that need to be used.
See the C# floating-point-numbers exercise's [hints.md file][csharp-docs-hints.md] for an example.
### `.docs/after.md`
Once the student completes the exercise they will be shown this file, which should provide them with a summary of what the exercise aimed to teach. This document can also link to any additional resources that might be interesting to the student in the context of the exercise.
See the C# floating-point-numbers exercise's [after.md file][csharp-docs-after.md] for an example.
### `.meta/design.md`
This file contains information on the exercise's design, which includes things like its goal, its teaching goals, what not to teach, and more. This information can be extracted from the exercise's corresponding GitHub issue.
It exists in order to inform future maintainers or contributors about the scope and limitations of an exercise, to avoid the natural trend towards making exercises more complex over time.
See the C# floating-point-numbers exercise's [design.md file][csharp-docs-design.md] for an example.
### `.meta/config.json`
This file contains meta information on the exercise, which currently only includes the exercise's contributors.
See the C# floating-point-numbers exercise's [config.json file][csharp-docs-config.json] for an example.
## Track Structure
### `exercises/shared/.docs/cli.md`
This file contains information on how to work with the exercise when using the CLI to download and submit the exercise.
See the C# track's [cli.md file][csharp-docs-cli.md] for an example.
### `exercises/shared/.docs/debug.md`
This file explains how a student that is coding in the browser can still do "debugging."
See the C# track's [debug.md file][csharp-docs-debug.md] for an example.
[csharp-docs-cli.md]: ../languages/csharp/exercises/shared/.docs/cli.md
[csharp-docs-debug.md]: ../languages/csharp/exercises/shared/.docs/debug.md
[csharp-docs-after.md]: ../languages/csharp/exercises/concept/numbers-floating-point/.docs/after.md
[csharp-docs-hints.md]: ../languages/csharp/exercises/concept/numbers-floating-point/.docs/hints.md
[csharp-docs-introduction.md]: ../languages/csharp/exercises/concept/numbers-floating-point/.docs/introduction.md
[csharp-docs-instructions.md]: ../languages/csharp/exercises/concept/numbers-floating-point/.docs/instructions.md
[csharp-docs-design.md]: ../languages/csharp/exercises/concept/numbers-floating-point/.meta/design.md
[csharp-docs-config.json]: ../languages/csharp/exercises/concept/numbers-floating-point/.meta/config.json
| 84.412281 | 879 | 0.793723 | eng_Latn | 0.999692 |
bb0edf8a780eb70df7495246c5c13695af372c86 | 361 | md | Markdown | Readme.md | krn0x2/crawl-notifications | 26e4751a0c32dbeb53965147bb2f9d29e0228ed8 | [
"MIT"
] | null | null | null | Readme.md | krn0x2/crawl-notifications | 26e4751a0c32dbeb53965147bb2f9d29e0228ed8 | [
"MIT"
] | 2 | 2021-05-11T03:08:48.000Z | 2022-02-14T01:16:39.000Z | Readme.md | krn0x2/crawl-notifications | 26e4751a0c32dbeb53965147bb2f9d29e0228ed8 | [
"MIT"
] | null | null | null | ## Install dependencies
npm install
## Zip and create lambda
zip the root and upload to AWS lambda
Set index.handler as the entrypoint
Set a cloudwatchEvent (15 min) as a trigger
Set environment variables



| 20.055556 | 47 | 0.714681 | eng_Latn | 0.750404 |
bb0f217d88f5ea1e5a5cfa04dfda0db417b7c84a | 10,981 | md | Markdown | Advertising/bingads-13/guides/url-tracking-upgraded-urls.md | eleonora-pellini/Advertising-docs | df42426549c5aa1519528b4a76e5af3eb4a30ba1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Advertising/bingads-13/guides/url-tracking-upgraded-urls.md | eleonora-pellini/Advertising-docs | df42426549c5aa1519528b4a76e5af3eb4a30ba1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Advertising/bingads-13/guides/url-tracking-upgraded-urls.md | eleonora-pellini/Advertising-docs | df42426549c5aa1519528b4a76e5af3eb4a30ba1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "URL Tracking with Upgraded URLs"
ms.service: "bing-ads"
ms.topic: "article"
author: "eric-urban"
ms.author: "eur"
description: URL tracking allows you to find out how people got to your website by adding tracking parameters in Microsoft Advertising and then using a third-party tracking tool or service to analyze the data.
---
# URL Tracking with Upgraded URLs
URL tracking allows you to find out how people got to your website by adding tracking parameters in Microsoft Advertising and then using a third-party tracking tool or service to analyze the data. When an ad is served, the tracking parameters are dynamically appended to your landing page URL. This landing page URL is recorded on your web server and then a third-party tracking tool, like Adobe or Google Analytics, can interpret the data.
If you have set up [URL tracking in Microsoft Advertising](https://help.ads.microsoft.com/#apex/3/en/56798/2) by adding URL parameters to your destination URLs, you will be interested in Upgraded URLs. Upgraded URLs separate the landing page URL from the tracking or URL parameters so if you want to edit your URL parameters, your ad doesn't have to go through another editorial review. It also allows you to define a separate mobile landing page URL if you have a website that is optimized for smaller devices.
By separating the tracking template details from the final URLs you can take advantage of the following benefits:
- You can define tracking templates for one or more account, campaign, ad group, keyword, ad, or Sitelink Extension. If you use a common tracking template for all ads in your campaign for example, you can update it once at the campaign level instead of making changes to all of your ads. Tracking templates and custom parameters defined for lower level entities e.g. keyword override those set for higher level entities e.g. campaign. For more information, see [Entity Hierarchy and Limits](entity-hierarchy-limits.md).
- When you update your tracking template information, the URL doesn't need to go through editorial review and your ads will continue to serve uninterrupted. Editorial review is only required when you change your actual ads, keywords or extensions.
- Doing so helps Microsoft Advertising understand what information is URL versus tracking template information, and reduces crawling on your website.
For an overview of Final URLs and tracking templates, see the following Microsoft Advertising help articles.
- [How do I create an account tracking template?](https://help.ads.microsoft.com/#apex/3/en/56772/-1)
- [Can I use custom parameters?](https://help.ads.microsoft.com/#apex/3/en/56774/-1)
- [What are Upgraded URLs and how do I upgrade?](https://help.ads.microsoft.com/#apex/3/en/56751/-1)
## <a name="trackingtemplatevalidation"></a>Tracking Templates
Tracking templates can be used in tandem with final URLs to assemble the landing page URL where a user is directed after the ad is clicked. Here is an example:
**Final URL:** *http://contoso.com/contact-us*
**Tracking template:** *{lpurl}?ref=bing&keyword={Keyword}&cmpid={CampaignID}&agid={AdGroupID}&adid={AdID}*. The *{lpurl}* tag references the Landing Page URL. Bing will replace this with your Final URL when your ad is served.
The following validation rules apply to tracking templates. For more details about supported templates and parameters, see the Microsoft Advertising help article [What tracking or URL parameters can I use?](https://help.ads.microsoft.com/#apex/3/en/56799/2)
- Tracking templates defined for lower level entities e.g. ads override those set for higher level entities e.g. campaign. For more information, see [Entity Hierarchy and Limits](entity-hierarchy-limits.md).
- The length of the tracking template is limited to 2,048 characters. The HTTP or HTTPS protocol string does count towards the 2,048 character limit.
- The tracking template must be a well-formed URL beginning with one of the following: *http://*, *https://*, *{lpurl}*, or *{unescapedlpurl}*.
- Microsoft Advertising does not validate whether custom parameters exist. If you use custom parameters in your tracking template and they do not exist, then the landing page URL will include the key and value placeholders of your custom parameters without substitution. For example, if your tracking template is *http://tracker.example.com/?season={_season}&promocode={_promocode}&u={lpurl}*, and neither *{_season}* or *{_promocode}* are defined at the campaign, ad group, criterion, keyword, or ad level, then the landing page URL will be the same.
For an account, campaign, or ad group level tracking template, please note the following:
- Additionally you must include at least one of the following landing page URL tags: *{lpurl}*, *{lpurl+2}*, *{lpurl+3}*, *{unescapedlpurl}*, *{escapedlpurl}*.
We recommend adding a default tracking template at the account level so that all the campaigns, ad groups, ads, and Sitelink Extensions use the same URL format. If you add more tracking templates at the campaign, ad group, ad or Sitelink Extension level, they will override the account level settings. You can set the account level tracking template using the [SetAccountProperties](../campaign-management-service/setaccountproperties.md) operation via the Campaign Management service, or set the *Tracking Template* field of the [Account](../bulk-service/account.md) record via the Bulk service.
## <a name="finalurlvalidation"></a>Final URLs
The following validation rules apply to Final URLs and Final Mobile URLs.
- The length of the URL is limited to 2,048 characters. The HTTP or HTTPS protocol string does count towards the 2,048 character limit.
- You may specify up to 10 items for both *FinalUrls* and *FinalMobileUrls*; however, only the first item in each list is used for delivery. The service allows up to 10 for potential forward compatibility.
- Usage of '{' and '}' is only allowed to delineate tags, for example "{lpurl}".
- Final URLs must each be a well-formed URL starting with either http:// or https://.
- If you specify *FinalMobileUrls*, you must also specify *FinalUrls*.
- You may not specify *FinalMobileUrls* if the device preference is set to mobile.
Also note the following validation rules for ad and site link final URLs.
- You may not specify final mobile URLs if the device preference is set to mobile.
- If the ad or site link's tracking template or custom parameters are specified, then at least one final URL is required.
## <a name="finalurlsuffixvalidation"></a>Final URL Suffix
The final URL suffix can include tracking parameters that will be appended to the end of your landing page URL. We recommend placing tracking parameters that your landing page requires in a final URL suffix so that your customers are always sent to your landing page.
> [!NOTE]
> Final URL suffix is now available at the account, campaign, ad group, and keyword level for all customers. Final URL suffix is available for ads, ad extensions, and ad group criterions for Phase 2 pilot customers ([GetCustomerPilotFeatures](../customer-management-service/getcustomerpilotfeatures.md) returns 566). Later this year the Final URL suffix will be available for ads, ad extensions, and ad group criterions for all customers.
Final URL suffixes defined for lower level entities e.g. ads override those set for higher level entities e.g. campaign. For more information, see [Entity Hierarchy and Limits](entity-hierarchy-limits.md).
Final URL suffix can contain one of the following:
- Static URL parameters
- URL parameters that specify Microsoft Advertising URL parameters as values
- URL parameters supported for [Final URL, tracking template, or custom parameter](https://help.ads.microsoft.com/#apex/3/en/56799/0)
Final URL suffix cannot start with the following characters: ?, &, or #. However you should use "&" to append multiple parameter key and value pairs. For example: *src=bing&kwd={keyword}*
Final URL Suffix cannot contain the following URL parameters: {ignore}, {lpurl}, or {escapedlpurl}, or other variations of final URL placeholders ([Tracking templates only](https://help.ads.microsoft.com/#apex/3/en/56799/0) section).
## <a name="customparametersvalidation"></a>Custom Parameters
URL parameters are used to track information about the source of an ad click. By adding these parameters to your ads and campaigns, you can learn if people who clicked on your ads came from mobile devices, where they were located when they clicked your ads, and much more. For example if you use the {AdId} tag in your URL, then when the user clicks your ad the unique system identifier for that ad will be included in the URL where the user lands. For a list of supported parameters, see the *Available parameters* sections within the Microsoft Advertising help article [How do I create an account tracking template?](https://help.ads.microsoft.com/#apex/3/en/56772/-1).
Custom parameters work exactly the same as URL parameters with respect to dynamic substitutions, except that you will choose the parameter names and values (also known as key and value pairs). You can define custom parameters for one or more campaign, ad group, keyword, ad, or Sitelink Extension. If you use a common custom parameter for all ads in your campaign for example, you can update it once at the campaign level instead of making changes to all of your ads.
Custom parameters are helpful with sharing dynamic information across multiple URLs in a meaningful way for you to report on. Similarly how dynamic texts are provided by Microsoft Advertising to give you more campaign insights on your performance. Custom parameters are advertiser created.
The following validation rules apply to custom parameters.
- Custom parameters defined for lower level entities e.g. keyword override those set for higher level entities e.g. campaign. For more information, see [Entity Hierarchy and Limits](entity-hierarchy-limits.md).
- For campaigns, ad groups, and ad groups, Microsoft Advertising will accept the first 8 custom parameter key and value pairs that you include, and if you include more than 8 custom parameters an error will be returned.
- For ads, ad extensions, and ad group criterions, Microsoft Advertising will accept the first 3 custom parameter key and value pairs that you include, and any additional custom parameters will be ignored. For customers in the Custom Parameters Limit Increase Phase 2 pilot ([GetCustomerPilotFeatures](../customer-management-service/getcustomerpilotfeatures.md) returns 565) for ads, ad extensions, and ad group criterions, Microsoft Advertising will accept the first 8 custom parameter key and value pairs that you include, and if you include more than 8 custom parameters an error will be returned. During calendar year 2019 the limit will be increased from 3 to 8 for all customers.
## See Also
[Bing Ads API Web Service Addresses](web-service-addresses.md)
| 122.011111 | 687 | 0.786176 | eng_Latn | 0.997801 |
bb0f64cfe8dfe9c898501d6b59738dea5caed62a | 9,972 | md | Markdown | docs/templates/preprocessing/image.md | ypxie/keras-1 | f1ed8d63faa26ce6180faa685839aa32217211c6 | [
"MIT"
] | 7 | 2017-01-16T23:12:00.000Z | 2021-03-11T04:39:14.000Z | docs/templates/preprocessing/image.md | ypxie/keras-1 | f1ed8d63faa26ce6180faa685839aa32217211c6 | [
"MIT"
] | null | null | null | docs/templates/preprocessing/image.md | ypxie/keras-1 | f1ed8d63faa26ce6180faa685839aa32217211c6 | [
"MIT"
] | 1 | 2017-08-02T01:47:54.000Z | 2017-08-02T01:47:54.000Z |
## ImageDataGenerator
```python
keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=0.,
width_shift_range=0.,
height_shift_range=0.,
shear_range=0.,
zoom_range=0.,
channel_shift_range=0.,
fill_mode='nearest',
cval=0.,
horizontal_flip=False,
vertical_flip=False,
rescale=None,
dim_ordering=K.image_dim_ordering())
```
Generate batches of tensor image data with real-time data augmentation. The data will be looped over (in batches) indefinitely.
- __Arguments__:
- __featurewise_center__: Boolean. Set input mean to 0 over the dataset, feature-wise.
- __samplewise_center__: Boolean. Set each sample mean to 0.
- __featurewise_std_normalization__: Boolean. Divide inputs by std of the dataset, feature-wise.
- __samplewise_std_normalization__: Boolean. Divide each input by its std.
- __zca_whitening__: Boolean. Apply ZCA whitening.
- __rotation_range__: Int. Degree range for random rotations.
- __width_shift_range__: Float (fraction of total width). Range for random horizontal shifts.
- __height_shift_range__: Float (fraction of total height). Range for random vertical shifts.
- __shear_range__: Float. Shear Intensity (Shear angle in counter-clockwise direction as radians)
- __zoom_range__: Float or [lower, upper]. Range for random zoom. If a float, `[lower, upper] = [1-zoom_range, 1+zoom_range]`.
- __channel_shift_range__: Float. Range for random channel shifts.
- __fill_mode__: One of {"constant", "nearest", "reflect" or "wrap"}. Points outside the boundaries of the input are filled according to the given mode.
- __cval__: Float or Int. Value used for points outside the boundaries when `fill_mode = "constant"`.
- __horizontal_flip__: Boolean. Randomly flip inputs horizontally.
- __vertical_flip__: Boolean. Randomly flip inputs vertically.
- __rescale__: rescaling factor. Defaults to None. If None or 0, no rescaling is applied,
otherwise we multiply the data by the value provided (before applying
any other transformation).
- __dim_ordering__: One of {"th", "tf"}.
"tf" mode means that the images should have shape `(samples, height, width, channels)`,
"th" mode means that the images should have shape `(samples, channels, height, width)`.
It defaults to the `image_dim_ordering` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be "tf".
- __Methods__:
- __fit(X)__: Compute the internal data stats related to the data-dependent transformations, based on an array of sample data.
Only required if featurewise_center or featurewise_std_normalization or zca_whitening.
- __Arguments__:
- __X__: sample data. Should have rank 4.
In case of grayscale data,
the channels axis should have value 1, and in case
of RGB data, it should have value 3.
- __augment__: Boolean (default: False). Whether to fit on randomly augmented samples.
- __rounds__: int (default: 1). If augment, how many augmentation passes over the data to use.
- __seed__: int (default: None). Random seed.
- __flow(X, y)__: Takes numpy data & label arrays, and generates batches of augmented/normalized data. Yields batches indefinitely, in an infinite loop.
- __Arguments__:
- __X__: data. Should have rank 4.
In case of grayscale data,
the channels axis should have value 1, and in case
of RGB data, it should have value 3.
- __y__: labels.
- __batch_size__: int (default: 32).
- __shuffle__: boolean (defaut: True).
- __seed__: int (default: None).
- __save_to_dir__: None or str (default: None). This allows you to optimally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
- __save_prefix__: str (default: `''`). Prefix to use for filenames of saved pictures (only relevant if `save_to_dir` is set).
- __save_format__: one of "png", "jpeg" (only relevant if `save_to_dir` is set). Default: "jpeg".
- __yields__: Tuples of `(x, y)` where `x` is a numpy array of image data and `y` is a numpy array of corresponding labels.
The generator loops indefinitely.
- __flow_from_directory(directory)__: Takes the path to a directory, and generates batches of augmented/normalized data. Yields batches indefinitely, in an infinite loop.
- __Arguments__:
- __directory__: path to the target directory. It should contain one subdirectory per class.
Any PNG, JPG or BNP images inside each of the subdirectories directory tree will be included in the generator.
See [this script](https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d) for more details.
- __target_size__: tuple of integers, default: `(256, 256)`. The dimensions to which all images found will be resized.
- __color_mode__: one of "grayscale", "rbg". Default: "rgb". Whether the images will be converted to have 1 or 3 color channels.
- __classes__: optional list of class subdirectories (e.g. `['dogs', 'cats']`). Default: None. If not provided, the list of classes will be automatically inferred (and the order of the classes, which will map to the label indices, will be alphanumeric).
- __class_mode__: one of "categorical", "binary", "sparse" or None. Default: "categorical". Determines the type of label arrays that are returned: "categorical" will be 2D one-hot encoded labels, "binary" will be 1D binary labels, "sparse" will be 1D integer labels. If None, no labels are returned (the generator will only yield batches of image data, which is useful to use `model.predict_generator()`, `model.evaluate_generator()`, etc.).
- __batch_size__: size of the batches of data (default: 32).
- __shuffle__: whether to shuffle the data (default: True)
- __seed__: optional random seed for shuffling and transformations.
- __save_to_dir__: None or str (default: None). This allows you to optimally specify a directory to which to save the augmented pictures being generated (useful for visualizing what you are doing).
- __save_prefix__: str. Prefix to use for filenames of saved pictures (only relevant if `save_to_dir` is set).
- __save_format__: one of "png", "jpeg" (only relevant if `save_to_dir` is set). Default: "jpeg".
- __follow_links__: whether to follow symlinks inside class subdirectories (default: False).
- __Examples__:
Example of using `.flow(X, y)`:
```python
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
# fits the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
samples_per_epoch=len(X_train), nb_epoch=nb_epoch)
# here's a more "manual" example
for e in range(nb_epoch):
print 'Epoch', e
batches = 0
for X_batch, Y_batch in datagen.flow(X_train, Y_train, batch_size=32):
loss = model.train(X_batch, Y_batch)
batches += 1
if batches >= len(X_train) / 32:
# we need to break the loop by hand because
# the generator loops indefinitely
break
```
Example of using `.flow_from_directory(directory)`:
```python
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'data/train',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
'data/validation',
target_size=(150, 150),
batch_size=32,
class_mode='binary')
model.fit_generator(
train_generator,
samples_per_epoch=2000,
nb_epoch=50,
validation_data=validation_generator,
nb_val_samples=800)
```
Example of transforming images and masks together.
```python
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=90.,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)
image_generator = image_datagen.flow_from_directory(
'data/images',
class_mode=None,
seed=seed)
mask_generator = mask_datagen.flow_from_directory(
'data/masks',
class_mode=None,
seed=seed)
# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)
model.fit_generator(
train_generator,
samples_per_epoch=2000,
nb_epoch=50)
```
| 49.61194 | 453 | 0.698355 | eng_Latn | 0.946448 |
bb0fbce297e29a94064362887cdee777e8cbfa64 | 6,756 | md | Markdown | docs/src/pages/core-concepts/component-hydration.md | pointout/astro | f32be021d366ddb6a743e75518bf25d1b471970d | [
"MIT"
] | 1 | 2021-07-10T02:30:39.000Z | 2021-07-10T02:30:39.000Z | docs/src/pages/core-concepts/component-hydration.md | pointout/astro | f32be021d366ddb6a743e75518bf25d1b471970d | [
"MIT"
] | 2 | 2021-09-20T21:03:37.000Z | 2022-01-05T18:34:15.000Z | docs/src/pages/core-concepts/component-hydration.md | pointout/astro | f32be021d366ddb6a743e75518bf25d1b471970d | [
"MIT"
] | 1 | 2021-09-28T00:52:15.000Z | 2021-09-28T00:52:15.000Z | ---
layout: ~/layouts/MainLayout.astro
title: Partial Hydration in Astro
---
**Astro generates every website with zero client-side JavaScript, by default.** Use any frontend UI component that you'd like (React, Svelte, Vue, etc.) and Astro will automatically render it to HTML at build-time and strip away all JavaScript. This keeps every site fast by default.
But sometimes, client-side JavaScript is required. This guide shows how interactive components work in Astro using a technique called partial hydration.
```astro
---
// Example: Importing and then using a React component.
// By default, Astro renders this to HTML and CSS during
// your build, with no client-side JavaScript.
// (Need client-side JavaScript? Read on...)
import MyReactComponent from '../components/MyReactComponent.jsx';
---
<!-- 100% HTML, Zero JavaScript! -->
<MyReactComponent />
```
## Concept: Partial Hydration
There are plenty of cases where you need an interactive UI component to run in the browser:
- An image carousel
- An auto-complete search bar
- A mobile sidebar open/close button
- A "Buy Now" button
In Astro, it's up to you as the developer to explicitly "opt-in" any components on the page that need to run in the browser. Astro can then use this info to know exactly what JavaScript is needed, and only hydrate exactly what's needed on the page. This technique is known as partial hydration.
**Partial hydration** -- the act of only hydrating the individual components that require JavaScript and leaving the rest of your site as static HTML -- may sound relatively straightforward. It should! Websites have been built this way for decades. It was only recently that Single-Page Applications (SPAs) introduced the idea that your entire website is written in JavaScript and compiled/rendered by every user in the browser.
_Note: Partial hydration is sometimes called "progressive enhancement" or "progressive hydration." While there are slight nuances between the terms, for our purposes you can think of these all as synonyms of the same concept._
**Partial hydration is the secret to Astro's fast-by-default performance story.** Next.js, Gatsby, and other JavaScript frameworks cannot support partial hydration because they imagine your entire website/page as a single JavaScript application.
## Concept: Island Architecture
**Island architecture** is the idea of using partial hydration to build entire websites. Island architecture is an alternative to the popular idea of building your website into a client-side JavaScript bundle that must be shipped to the user.
To quote Jason Miller, who [coined the phrase](https://jasonformat.com/islands-architecture/):
> In an "islands" model, server rendering is not a bolt-on optimization aimed at improving SEO or UX. Instead, it is a fundamental part of how pages are delivered to the browser. The HTML returned in response to navigation contains a meaningful and immediately renderable representation of the content the user requested.
Besides the obvious performance benefits of sending less JavaScript down to the browser, there are two key benefits to island architecture:
- **Components load individually.** A lightweight component (like a sidebar toggle) will load and render quickly without being blocked by the heavier components on the page.
- **Components render in isolation.** Each part of the page is an isolated unit, and a performance issue in one unit won't directly affect the others.

## Hydrate Interactive Components
Astro renders every component on the server **at build time**, unless [client:only](#mycomponent-clientonly-) is used. To hydrate components on the client **at runtime**, you may use any of the following `client:*` directives. A directive is a component attribute (always with a `:`) which tells Astro how your component should be rendered.
```astro
---
// Example: hydrating a React component in the browser.
import MyReactComponent from '../components/MyReactComponent.jsx';
---
<!-- "client:visible" means the component won't load any client-side
JavaScript until it becomes visible in the user's browser. -->
<MyReactComponent client:visible />
```
### `<MyComponent client:load />`
Hydrate the component on page load.
### `<MyComponent client:idle />`
Hydrate the component as soon as main thread is free (uses [requestIdleCallback()][mdn-ric]).
### `<MyComponent client:visible />`
Hydrate the component as soon as the element enters the viewport (uses [IntersectionObserver][mdn-io]). Useful for content lower down on the page.
### `<MyComponent client:media={QUERY} />`
Hydrate the component as soon as the browser matches the given media query (uses [matchMedia][mdn-mm]). Useful for sidebar toggles, or other elements that should only display on mobile or desktop devices.
### `<MyComponent client:only />`
Hydrates the component at page load, similar to `client:load`. The component will be **skipped** at build time, useful for components that are entirely dependent on client-side APIs. This is best avoided unless absolutely needed, in most cases it is best to render placeholder content on the server and delay any browser API calls until the component hydrates in the browser.
If more than one renderer is included in the Astro [config](/reference/configuration-reference), `client:only` needs a hint to know which renderer to use for the component. For example, `client:only="react"` would make sure that the component is hydrated in the browser with the React renderer. For custom renderers not provided by `@astrojs`, use the full name of the renderer provided in your Astro config, i.e. `<client:only="my-custom-renderer" />`.
## Can I Hydrate Astro Components?
[Astro components](./astro-components) (`.astro` files) are HTML-only templating components with no client-side runtime. If you try to hydrate an Astro component with a `client:` modifier, you will get an error.
To make your Astro component interactive, you will need to convert it to the frontend framework of your choice: React, Svelte, Vue, etc. If you have no preference, we recommend React or Preact as they are most similar to Astro's syntax.
Alternatively, you could add a `<script>` tag to your Astro component HTML template and send JavaScript to the browser that way. While this is fine for the simple stuff, we recommend a frontend framework for more complex interactive components.
[mdn-io]: https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
[mdn-ric]: https://developer.mozilla.org/en-US/docs/Web/API/Window/requestIdleCallback
[mdn-mm]: https://developer.mozilla.org/en-US/docs/Web/API/Window/matchMedia
| 66.891089 | 453 | 0.777679 | eng_Latn | 0.997468 |
bb100f553aec6b78bd2f32046a530e20afa81ebb | 1,076 | md | Markdown | README.md | Acidburn0zzz/hjson | 8242a8e9f97e4629f8d338e42be7d7340198c5e6 | [
"MIT"
] | 1 | 2017-05-06T13:24:18.000Z | 2017-05-06T13:24:18.000Z | README.md | Acidburn0zzz/hjson | 8242a8e9f97e4629f8d338e42be7d7340198c5e6 | [
"MIT"
] | null | null | null | README.md | Acidburn0zzz/hjson | 8242a8e9f97e4629f8d338e42be7d7340198c5e6 | [
"MIT"
] | null | null | null | [](http://www.npmjs.com/package/hjson)
[](http://search.maven.org/#search|ga|1|g%3A%22org.hjson%22%20a%3A%22hjson%22)
[](https://pypi.python.org/pypi/hjson)
[](https://www.nuget.org/packages/Hjson/)
[](https://packagist.org/packages/laktak/hjson)
[](https://crates.io/crates/serde-hjson)
[](https://github.com/hjson/hjson-go/releases)
# Hjson, a user interface for JSON

Adds comments, makes it nicer to read and avoids comma mistakes.
For details see [hjson.org](http://hjson.org).
| 67.25 | 187 | 0.73513 | yue_Hant | 0.426823 |
bb10205dae2c1897ce5bee133c1b59e452663943 | 5,464 | md | Markdown | tests/dummy/app/pods/docs/general/template.md | hbrysiewicz/ember-yeti-table | d78db890c862a749f34a668464c48fac8503e527 | [
"MIT"
] | null | null | null | tests/dummy/app/pods/docs/general/template.md | hbrysiewicz/ember-yeti-table | d78db890c862a749f34a668464c48fac8503e527 | [
"MIT"
] | null | null | null | tests/dummy/app/pods/docs/general/template.md | hbrysiewicz/ember-yeti-table | d78db890c862a749f34a668464c48fac8503e527 | [
"MIT"
] | null | null | null | # Defining a table
Your starting point for Yeti Table will be the `@data` argument. It accepts an array of objects
or a promise that resolves to such an array.
Then you must define your table columns inside the header component, each of them with a `@prop` argument that
corresponds to the property key of each object that you want to display for that column. Yeti table uses
this property for filtering and sorting.
Yeti Table will update itself based on these property names, e.g if a `firstName` property of an object changes,
Yeti Table might need to re-sort or re-filter the rows. NOTE: If the property is a nested property (one that contains
periods), the table will not be updated when this nested property changes. This is due to `@each` only supporting one level
of properties.
Afterwards, we just need to define our table body. If you use `<table.body/>` in the blockless form,
Yeti Table "unrolls" all the rows for you. This is useful for simple tables. Here is such an example:
{{#docs-demo as |demo|}}
{{#demo.example name="general-simple.hbs"}}
<YetiTable @data={{this.data}} as |table|>
<table.header as |header|>
<header.column @prop="firstName">
First name
</header.column>
<header.column @prop="lastName">
Last name
</header.column>
<header.column @prop="points">
Points
</header.column>
</table.header>
<table.body/>
</YetiTable>
{{/demo.example}}
{{demo.snippet "general-simple.hbs"}}
{{/docs-demo}}
You can still add a click handler to the generated rows by passing the `@onRowClick` argument
to `<table.body/>`. `@onRowClick` adds a click action to each row, called with the clicked row's data as an argument.
You will probably need to make more customizations, and to do so you will need to use `<table.head>`,
`<table.foot>` and/or `<table.body>` in the block form. This form allows you to:
- Use any component or markup as a cell's content
- Use the row data across multiple cells of the same row
- Attach click listeners to the row or cell
- Use row data to conditionally add classes
<aside>
Notice that if you don't need automatic unrolling, sorting or filtering, the `prop` property is optional.
</aside>
{{#docs-demo as |demo|}}
{{#demo.example name="general-simple-with-body.hbs"}}
<YetiTable @data={{this.data}} as |table|>
<table.header as |header|>
<header.column>
First name
</header.column>
<header.column>
Last name
</header.column>
<header.column>
Points
</header.column>
</table.header>
<table.body as |body person|>
<body.row as |row|>
<row.cell>
{{person.firstName}}
</row.cell>
<row.cell>
{{person.lastName}}
</row.cell>
<row.cell>
{{person.points}}
</row.cell>
</body.row>
</table.body>
<table.foot as |foot|>
<foot.row as |row|>
<row.cell>
First Name footer
</row.cell>
<row.cell>
Last Name footer
</row.cell>
<row.cell>
Points footer
</row.cell>
</foot.row>
</table.foot>
</YetiTable>
{{/demo.example}}
{{demo.snippet "general-simple-with-body.hbs"}}
{{/docs-demo}}
Each `<body.row>` component accepts an optional `@onClick` action that will be called if the row is clicked.
Additionally, you might need to toggle the visibility of each row, and for that we can use the `@visible` argument
on the `<header.column>` component. It defaults to `true`. Setting it to false will hide all the cells for that column
accross all rows.
The `<header.column>` component also accepts a `@columnClass` argument. Yeti Table will apply this class all the cells
for that column accross all rows.
<aside>
In angle bracket invocation, you can pass in element attributes without the `@`.
A typical usage is the `class` attribute. So you can just write `<body.row class="some-class">`.
</aside>
You might have noticed that the `<table.header>` component always renders a single `<tr>` row inside the `<thead>`.
This will probably be your most common use case, but sometimes you might need to render additional rows in the header.
To do that, you should use the `<table.head>` component, which doesn't render that single `<tr>` and let's you render the rows yourself.
Here is an example of such a usage:
{{#docs-demo as |demo|}}
{{#demo.example name="general-simple-with-multiple-rows-on-header.hbs"}}
<YetiTable @data={{this.data}} as |table|>
<table.head as |head|>
<head.row as |row|>
<row.column @prop="firstName">
First name
</row.column>
<row.column @prop="lastName">
Last name
</row.column>
<row.column @prop="points">
Points
</row.column>
</head.row>
<head.row as |row|>
<row.cell>
Additional row on header
</row.cell>
<row.cell>
Additional row on header
</row.cell>
<row.cell>
Additional row on header
</row.cell>
</head.row>
</table.head>
<table.body/>
</YetiTable>
{{/demo.example}}
{{demo.snippet "general-simple-with-multiple-rows-on-header.hbs"}}
{{/docs-demo}}
| 33.521472 | 136 | 0.632138 | eng_Latn | 0.994851 |
bb106aac028f8871383a01cbc9fd8cf5937b1fe7 | 961 | md | Markdown | angular/config/$compileProvider.md | C0ZEN/wiki | 5b3b6537ec4c36930be786db2391cfc5bc85970b | [
"MIT"
] | 2 | 2020-09-02T13:43:07.000Z | 2020-09-02T13:43:12.000Z | angular/config/$compileProvider.md | C0ZEN/angular-wiki | 5b3b6537ec4c36930be786db2391cfc5bc85970b | [
"MIT"
] | null | null | null | angular/config/$compileProvider.md | C0ZEN/angular-wiki | 5b3b6537ec4c36930be786db2391cfc5bc85970b | [
"MIT"
] | 1 | 2020-09-02T13:43:15.000Z | 2020-09-02T13:43:15.000Z | # $compileProvider
### What is it ?
This provider is used to configure the behavior of somes directives and debug stuff.
### debugInfoEnabled
You should always enable it on development and disable it on production.
**Description:**
It can helps to debug the DOM by adding some stuff in it like:
- `ng-binding`, `ng-scope` and `ng-isolated-scope` CSS class
- `$binding` data property containing an array of the binding expressions
- `ngBind` and `ngBindHtml` attributes
- `ng-if` comments
Obviously, this option will slow down the application and it is useless in production.
So, **disable** it !
### Example
```
$compileProvider.debugInfoEnabled(false);
```
### Bonus
If you want to debug easily a production which have this option disable,
You can enter `angular.reloadWithDebugInfo();` in your browser console to enable the debug.
### Want more ?
[See the official documentation](https://docs.angularjs.org/api/ng/provider/$compileProvider)
| 25.972973 | 93 | 0.745057 | eng_Latn | 0.996189 |
bb10765c7c38a996a4ea3c85185ece96eb1dfd81 | 175 | md | Markdown | README.md | typelift/Tyro | 9b93a917ded2adb1119055d31d79a70d453bcaab | [
"BSD-3-Clause"
] | 49 | 2015-11-25T10:06:16.000Z | 2019-06-19T02:24:21.000Z | README.md | typelift/Tyro | 9b93a917ded2adb1119055d31d79a70d453bcaab | [
"BSD-3-Clause"
] | 29 | 2015-11-22T02:20:45.000Z | 2017-04-25T16:51:01.000Z | README.md | typelift/Tyro | 9b93a917ded2adb1119055d31d79a70d453bcaab | [
"BSD-3-Clause"
] | 9 | 2015-11-22T03:12:51.000Z | 2022-03-25T19:39:52.000Z | Tyro
======
Tyro used to be a Swift library for Functional JSON parsing and encoding. It is
now **deprecated**. Please use your favorite extension of `Dictionary` instead.
| 29.166667 | 80 | 0.742857 | eng_Latn | 0.996888 |
bb10b618bb323180e5117c26d2161c4bdaabb6ce | 7,918 | md | Markdown | README.md | sbreitf1/go-console | af20a7fe690136a4c59c41ebe63d3f6b94f08339 | [
"MIT"
] | null | null | null | README.md | sbreitf1/go-console | af20a7fe690136a4c59c41ebe63d3f6b94f08339 | [
"MIT"
] | null | null | null | README.md | sbreitf1/go-console | af20a7fe690136a4c59c41ebe63d3f6b94f08339 | [
"MIT"
] | null | null | null | # Go Console
Util package for convenient terminal input and output including a command line environment with command history and fully customizable completion.
See section `Command Line Environment` for a sophisticated and highly customizable `bash`-like input environment. Section `Custom Completion Handlers` contains some details on how command completion works and can be customized.
## Basic Input & Output
This package includes the most common output methods known from the `fmt` package:
```golang
console.Print("foo", "bar") // outputs "foo bar"
console.Printf("hello %s", "world") // outputs "hello world"
console.Println("foobar") // outputs "foobar\n"
console.Printlnf("foo%s", "bar") // outputs "foobar\n"
```
For basic input you can use `ReadLine` and `ReadPassword` for hidden input:
```golang
line, err := console.ReadLine()
password, err := console.ReadPassword() // will hide input while typing
```
See `examples/basic-input` for an example application.
## Command Input
A more advanced input method is provided by `ReadCommand`. It reads and parses a command from input, respecting all escape characters and quoted phrases:
```golang
cmd, err := console.ReadCommand("prompt", nil) // prompt> {user input here}
// example user input: echo foo 'say "hello world"' "white space" escape\ sequence
// cmd[0] = "echo"
// cmd[1] = "foo"
// cmd[2] = "say \"hello world\""
// cmd[3] = "white space"
// cmd[4] = "escape sequence"
```
You can additionally pass handlers for command history (up and down arrow keys), aswell as completion (tab key). Consider using a `Command Line Environment` for command-based applications.
See `examples/read-command` for an example application.
## Command Line Environment
The most sophisticated input method is by instantiating a `Command Line Environment`. It allows you to register commands and handlers for special events and automatically sets correct handlers for command history and completion when reading a command:
```golang
// instantiate command line environment
cle := console.NewCommandLineEnvironment()
// register default exit command
cle.RegisterCommand(console.NewExitCommand("exit"))
// register a new command that takes no parameters
cle.RegisterCommand(console.NewParameterlessCommand("hello",
func(args []string) error {
// handler method for command execution
console.Println("world")
}))
// Run() enters an infinite loop for command input and execution
// if a command returns console.ErrExit() this loop will stop gracefully
if err := cle.Run(); err != nil {
console.Fatalln(err)
}
```
In most cases you probably want to allow completion of command arguments. This can be done by passing a completion handler when instantiating a custom command. For simple command arguments you can use the default handlers of this package:
```golang
cle.RegisterCommand(console.NewCustomCommand("hello",
// default completion handler for a fixed set of arguments (fixed order and no flags)
console.NewFixedArgCompletion(
// the first argument will offer completions for a fixed set of options
console.NewOneOfArgCompletion("world", "ma'am", "sir"),
// you can pass an arbitrary number of completion handlers here for further arguments
),
func(args []string) error {
// the completion does not enforce presence of arguments
if len(args) > 0 {
console.Printlnf("hello %s", args[0])
}
}))
```
To select a file or directory from the local file system, there is a default completion handler available. Use the withFiles flag to control whether files should be offered in the completion list:
```golang
cle.RegisterCommand(console.NewCustomCommand("cat",
// allow user to browse the local file system for completion and include files
console.NewFixedArgCompletion(console.NewLocalFileSystemArgCompletion(true)),
// actual command execution handler:
func(args []string) error {
// always remember to check arguments
if len(args) == 0 {
console.Println("missing arg")
} else {
// show content of file...
}
}))
```
See `examples/command-line-env`, `examples/error-handling` and `examples/browser` for example applications.
### Customizations
See the following list for possible customizations of the `Command Line Environment`:
| Field Name | Description | Default |
| ---------- | ----------- | ------- |
| Prompt | Callback function to specify the current command prompt. Will be called every time the prompt is displayed. Use `cle.SetStaticPrompt` to set a static prompt. | `cle> ` |
| PrintOptions | Callback function to print options on double-tab. | `DefaultOptionsPrinter()` |
| ExecUnknownCommad | A handler that is called when an unknown command is executed. If set to `nil`, the execution loop will end returning an unkown command error. | Print message and continue |
| CompleteUnknownCommand | Completion handler for unknown commands. | `nil` |
| ErrorHandler | Error handler to handle errors and panics returned from commands. Will end the execution loop and pass through the error if something else than `nil` is returned. | Print error message and continue |
| RecoverPanickedCommands | If set to `true`, panics from commands are recovered and passed to `ErrorHandler`. Use `console.IsErrCommandPanicked` to recognize panics. | `true` |
| UseCommandNameCompletion | If set to `false`, no completion is available for command names. | `true` |
### Custom Completion Handlers
Completion handlers are called every time the user presses the tab key. They receive the full, parsed command as input, aswell as the index of the currently edited entry. When using completion handlers for registered commands of a command line environment, you can ignore the first entry as it will always contain the name of the corresponding command. The handler can return the full list of available options because prefix filtering for the current user input will be done automatically:
```golang
func completionHandler(cmd []string, index int) []console.CompletionOption {
return []console.CompletionOption{
// labelled options will be displayed with custom label in listings
console.NewLabelledCompletionOption("Greeting", "hello", false),
// in most cases only the ReplaceString property is required
console.NewCompletionOption("hedgehog", false),
console.NewCompletionOption("world", false),
}
}
```
The `replacement` parameter of `NewLabelledCompletionOption` and `NewCompletionOption` is the actual value to be used for completion. If the user has already typed `h` and presses tab, the prefix will match `hello` and `hedgehog`, so the longest common prefix `he` is taken as completion. The user now types `l` to further specify the desired value and again presses tab. Only one candidate now matches the prefix and so the full replacement string `hello` is taken as completion. Furthermore, a whitespace character is emitted to begin the next argument because the `isPartial` parameter is set to `false`.
The first `label` parameter of `NewLabelledCompletionOption` can be set to an arbitrary value and does not affect the actual completion in any way. It is displayed instead of the actual replacement property in the options list on double-tab. This can be useful when completion is used on hierachical structures like file systems where you only want to display the file names and not the full path.
### Custom Completion Arg
You can also extend the `NewFixedArgCompletion` with custom types that implement `ArgCompletion`:
```golang
type ArgCompletion interface {
GetCompletionOptions(currentCommand []string, entryIndex int) (options []CompletionOption)
}
```
## Applications
See [s3client](https://github.com/sbreitf1/s3client) for a real-world application using go-console. | 52.437086 | 607 | 0.744506 | eng_Latn | 0.991091 |
bb11798fd7dcc69f6e1d3ac7e81036ae82233f36 | 2,542 | md | Markdown | source/api/plugins/after-screenshot-api.md | ArturT/cypress-documentation | f67cc1b95b5c3bc0465422f6a23248669f60f5e9 | [
"MIT"
] | 1 | 2019-03-09T16:44:07.000Z | 2019-03-09T16:44:07.000Z | source/api/plugins/after-screenshot-api.md | ArturT/cypress-documentation | f67cc1b95b5c3bc0465422f6a23248669f60f5e9 | [
"MIT"
] | 3 | 2021-05-12T00:07:32.000Z | 2022-03-29T10:07:26.000Z | source/api/plugins/after-screenshot-api.md | ArturT/cypress-documentation | f67cc1b95b5c3bc0465422f6a23248669f60f5e9 | [
"MIT"
] | 1 | 2020-02-07T06:20:44.000Z | 2020-02-07T06:20:44.000Z | ---
title: After Screenshot API
---
After a screenshot is taken, you can get details about the screenshot via the `after:screenshot` plugin event. This event is called whether a screenshot is taken with {% url `cy.screenshot()` screenshot %} or as a result of a test failure. The event is called after the screenshot image is written to disk.
This allows you to record those details or manipulate the image as needed. You can also return updated details about the image.
# Usage
Using your {% url "`pluginsFile`" plugins-guide %} you can tap into the `after:screenshot` event.
```js
// cypress/plugins/index.js
const fs = require('fs')
module.exports = (on, config) => {
on('after:screenshot', (details) => {
// details will look something like this:
// {
// size: 10248
// takenAt: '2018-06-27T20:17:19.537Z'
// duration: 4071
// dimensions: { width: 1000, height: 660 }
// multipart: false
// pixelRatio: 1
// name: 'my-screenshot'
// specName: 'integration/my-spec.js'
// testFailure: true
// path: '/path/to/my-screenshot.png'
// scaled: true
// blackout: []
// }
// example of renaming the screenshot file
const newPath = '/new/path/to/screenshot.png'
return new Promise((resolve, reject) => {
fs.rename(details.path, newPath, (err) => {
if (err) return reject(err)
// because we renamed/moved the image, resolve with the new path
// so it is accurate in the test results
resolve({ path: newPath })
})
})
})
}
```
You can return an object or a promise that resolves an object from the callback function. Any type of value other than an object will be ignored. The object can contain the following properties:
* **path**: absolute path to the image
* **size**: size of the image file in bytes
* **dimensions**: width and height of the image in pixels (as an object with the shape `{ width: 100, height: 50 }`)
If you change any of those properties of the image, you should include the new values in the returned object, so that the details are correctly reported in the test results. For example, if you crop the image, return the new size and dimensions of the image.
The properties will be merged into the screenshot details and passed to the `onAfterScreenshot` callback (if defined with {% url 'Cypress.Screenshot.defaults()' screenshot-api %} and/or {% url 'cy.screenshot()' screenshot %}). Any other properties besides *path*, *size*, and *dimensions* will be ignored.
| 41 | 306 | 0.681747 | eng_Latn | 0.996631 |
bb11b76ed511acc2b1144d1e2e8bc1d3ba3d0eb7 | 1,024 | md | Markdown | labs/central-vm-appliance/README.md | aaronstrong/gcp-professional-architect | 0007c5ee7389f919edf04bd58ae422755c11ec38 | [
"MIT"
] | 1 | 2021-11-22T16:57:45.000Z | 2021-11-22T16:57:45.000Z | labs/central-vm-appliance/README.md | aaronstrong/gcp-professional-architect | 0007c5ee7389f919edf04bd58ae422755c11ec38 | [
"MIT"
] | null | null | null | labs/central-vm-appliance/README.md | aaronstrong/gcp-professional-architect | 0007c5ee7389f919edf04bd58ae422755c11ec38 | [
"MIT"
] | 2 | 2021-09-22T04:25:37.000Z | 2021-09-30T01:58:48.000Z | # Centralized Virtualized Network Appliance

The preceding diagram shows the communication paths between segmented VPC networks, on-premises networks, and the internet, and how they are routed through the centralized, virtualized network appliance.
## Requirements
A pfSense image has been created in Google Cloud. If not, [follow this guide](../pfsense/README.MD).
### What's Deployed
### How to deploy
* `terraform init`
* `terraform plan`
* `terraform apply`
## pfSense Install
Follow the [pfSense install](../labs/pfsense/01-pfsense-install) instructions to configure both appliances.
## pfSense configuration
After the `terraform` code has been ran and completed successfully, configure each pfsense instance individually.
1. Navigate to the Compute Engine console.
1. Select `pfsense-vm-01` and click the `Connect to serial console`.
1. Follow the pfsense install guide.
1. Repeat the steps for the second pfsense vm instance. | 34.133333 | 203 | 0.77832 | eng_Latn | 0.983615 |
bb123384a341169189cdcf5dc3765946b647fdda | 6,431 | md | Markdown | README.md | ohaanika/dash-sample-apps | 8a2cfc54548cf729f95a5d0242c058b45dbf4205 | [
"MIT"
] | 1 | 2021-06-04T10:04:55.000Z | 2021-06-04T10:04:55.000Z | README.md | ohaanika/dash-sample-apps | 8a2cfc54548cf729f95a5d0242c058b45dbf4205 | [
"MIT"
] | null | null | null | README.md | ohaanika/dash-sample-apps | 8a2cfc54548cf729f95a5d0242c058b45dbf4205 | [
"MIT"
] | null | null | null | # Dash Sample Apps [](https://circleci.com/gh/plotly/dash-sample-apps)
This monorepo contains open-source apps demoing
various capabilities of Dash and integration with
the Python or R ecosystem. Most of the apps here
are hosted on the
[Dash Gallery](https://dash-gallery.plotly.host/Portal/),
which runs on the
[Dash Kubernetes](https://plotly.com/dash/kubernetes/) platform
and host both open-source apps and demos for
[Design Kit](https://plotly.com/dash/design-kit/) and
[Snapshot Engine](https://plotly.com/dash/snapshot-engine/). Reach
out to [get a demo](https://plotly.com/get-demo/) of our licensed
products.
## Running an example app after cloning the repo
You will need to run applications, and specify filenames, from the
root directory of the repository. e.g., if the name of the app you
want to run is `my_dash_app` and the app filename is `app.py`, you
would need to run `python apps/my_dash_app/app.py` from the root
of the repository.
Each app has a requirements.txt, install the dependencies in a virtual
environment.
## Downloading and running a single app
Visit the [releases page](https://github.com/plotly/dash-sample-apps/releases) and download and `unzip` the app you want. Then `cd` into the app directory and install its dependencies in a virtual environment in the following way:
```bash
python -m venv venv
source venv/bin/activate # Windows: \venv\scripts\activate
pip install -r requirements.txt
```
then run the app:
```bash
python app.py
```
## Contributing to the sample apps repo
_"DDS app" below refers to the deployed application. For example, if
the deployment will eventually be hosted at
https://dash-gallery.plotly.host/my-dash-app, "DDS app name" is
`my-dash-app`._
### Adding a new app
Create an app on Dash Playground. This will be the location of the
auto-deployment. To do this, log into the app manager on
[dash-playground.plotly.host](https://dash-playground.plotly.host)
and click "initialize app".
Create a branch from `master` to work on your app, the name is not required
to be anything specific. Switch to this branch, then navigate to the `apps/`
directory and add a directory for your app.
There are two options when you are naming the folder:
1. Make the folder have the _exact same_ name as the Dash app name.
2. (Python apps only) Select any other name, but _update the file
[`apps_mapping.py`](apps_directory_mapping.py)_ with the Dash app
name and the folder name you have selected.
Navigate to the directory you just created, and write a small README
that only contains the name of the app. Stage the README and commit it
to your branch.
See [project boilerplate!](https://github.com/plotly/dash-sample-apps#project-boilerplate)
### Notes on adding a new Dash for R app
Contributing an app written with Dash for R is very similar to the steps outlined above.
1. Make the folder have the _exact same_ name as the Dash app name.
2. Ensure that the file containing your app code is named `app.R`.
3. The `Procfile` should contain
```
web: R -f /app/app.R
```
4. Routing and request pathname prefixes should be set. One approach might be to include
```
appName <- Sys.getenv("DASH_APP_NAME")
pathPrefix <- sprintf("/%s/", appName)
Sys.setenv(DASH_ROUTES_PATHNAME_PREFIX = pathPrefix,
DASH_REQUESTS_PATHNAME_PREFIX = pathPrefix)
```
at the head of your `app.R` file.
5. `run_server()` should be provided the host and port information explicitly, e.g.
``
app$run_server(host = "0.0.0.0", port = Sys.getenv('PORT', 8050))
``
### Making changes to an existing app
Create a new branch - of any name - for your code changes.
Then, navigate to the directory that has the same name as
the DDS app.
When you are finished, make a pull request from your branch to the master
branch. Once you have passed your code review, you can merge your PR.
## Dash app project structure
#### Data
- All data (csv, json, txt, etc) should be in a data folder
- `/apps/{DASH_APP_NAME}/data/`
#### Assets
- All stylesheets and javascript should be in an assets folder
- `/apps/{DASH_APP_NAME}/assets/`
#### These files will still need to be present within the app folder.
- **`Procfile`** gets run at root level for deployment
- Make sure python working directory is at the app level
- Ex. `web: gunicorn app:server`
- **`requirements.txt`**
- Install project dependecies in a virtual environment
#### Project boilerplate
apps
├── ...
├── {DASH_APP_NAME} # app project level
│ ├── assets/ # all stylesheets and javascript files
│ ├── data/ # all data (csv, json, txt, etc)
│ ├── app.py # dash application entry point
│ ├── Procfile # used for heroku deployment (how to run app)
│ ├── requirements.txt # project dependecies
│ └── ...
└── ...
#### Handle relative path
Assets should never use a relative path, as this will fail when deployed to Dash Enterprise due to use of subdirectories for serving apps.
Reading from assets and data folder
```Python
Img(src="./assets/logo.png") will fail at root level
```
Tips
- Use [get_asset_url()](https://dash.plot.ly/dash-deployment-server/static-assets)
- Use [Pathlib](https://docs.python.org/3/library/pathlib.html) for more flexibility
```Python
import pathlib
import pandas as pd
# get relative assets
html.Img(src=app.get_asset_url('logo.png')) # /assets/logo.png
# get relative data
DATA_PATH = pathlib.Path(__file__).parent.joinpath("data") # /data
df = pd.read_csv(DATA_PATH.joinpath("sample-data.csv")) # /data/sample-data.csv
with open(DATA_PATH.joinpath("sample-data.csv")) as f: # /data/sample-data.csv
some_string = f.read()
```
## Developer Guide
#### Creating a new project
```
# branch off master
git checkout -b "{YOUR_CUSTOM_BRANCH}"
# create a new folder in apps/
mkdir /apps/{DASH_APP_NAME}
# push new branch
git push -u origin {YOUR_CUSTOM_BRANCH}
```
#### Before committing
```
# make sure your code is linted (we use black)
black . --exclude=venv/ --check
# if black is not installed
pip install black
```
#### App is ready to go!
```
# once your branch is ready, make a PR into master!
PR has two checkers.
1. make sure your code passed the black linter
2. make sure your project is deployed on dns playground
```
| 30.478673 | 230 | 0.717929 | eng_Latn | 0.988256 |
bb14033226dba9213ceeef9486a60fb450093ea9 | 423 | md | Markdown | source/_posts/自拍/201907/臭美系列之20190719.md | upcwangying/fafaer-blog | cebae398f908a6ddd9c07e1dc0768afac2076d05 | [
"MIT"
] | 13 | 2020-06-16T05:51:56.000Z | 2022-01-06T14:46:10.000Z | source/_posts/自拍/201907/臭美系列之20190719.md | upcwangying/fafaer-blog | cebae398f908a6ddd9c07e1dc0768afac2076d05 | [
"MIT"
] | 92 | 2020-06-12T10:36:05.000Z | 2022-02-26T01:51:21.000Z | source/_posts/自拍/201907/臭美系列之20190719.md | upcwangying/fafaer-blog | cebae398f908a6ddd9c07e1dc0768afac2076d05 | [
"MIT"
] | 4 | 2020-06-27T09:33:31.000Z | 2021-08-07T12:49:26.000Z | ---
title: 臭美系列(20190719)
date: 2019-07-19 16:22:37
categories:
- 图片
tags:
- 臭美系列
thumbnail: https://cdn.chenyifaer.com/photos/selfies/201907/20190719/IMG_7019.JPG
---
图片来自于<a href="https://weibo.com/p/1005051720171447" target="_blank">quanmmmmm</a><br/>

<!--more-->

| 23.5 | 86 | 0.730496 | yue_Hant | 0.425969 |
bb14464d477ce1fcc4d90e737231ccfdc593439f | 415 | md | Markdown | _posts/2021-04-05-charr.md | glasnt/stitch | 51ee90d4690a715512e53218a72883b84b2d41e5 | [
"MIT"
] | null | null | null | _posts/2021-04-05-charr.md | glasnt/stitch | 51ee90d4690a715512e53218a72883b84b2d41e5 | [
"MIT"
] | 3 | 2020-03-03T08:46:22.000Z | 2021-07-13T21:22:52.000Z | _posts/2021-04-05-charr.md | glasnt/stitch | 51ee90d4690a715512e53218a72883b84b2d41e5 | [
"MIT"
] | null | null | null | ---
layout: piece
title: Charr
cover: img/charr.jpg
permalink: charr
material: 14ct aida, DMC floss.
pattern:
link: https://www.etsy.com/au/listing/585202515/derpy-pokemon-cross-stitch-pattern
publisher: WurbYouCrossStitch (Etsy)
---
I saw this as part of a [speedstitching](https://www.reddit.com/r/RestingStitchFace/comments/lntoo8/derpy_pokemon_timelapse_cross_stitch/) video, and couldn't pass it up. | 27.666667 | 171 | 0.768675 | eng_Latn | 0.442872 |
bb144e276d4b4b88a09a367dfa36586a8b3d1052 | 1,615 | md | Markdown | README.md | m0ne/pdf2md | 93d72cabc29a814ec7c2d260121b386c97e722bc | [
"MIT"
] | null | null | null | README.md | m0ne/pdf2md | 93d72cabc29a814ec7c2d260121b386c97e722bc | [
"MIT"
] | null | null | null | README.md | m0ne/pdf2md | 93d72cabc29a814ec7c2d260121b386c97e722bc | [
"MIT"
] | null | null | null | # pdf2md
A command line tool that converts a pdf into pngs and embedds those into a markdown file.
## Getting Started
These instructions will get you up and running on your machine
## Prerequisites
What things you need to install the software and how to install them
```
poppler
npm
```
## Installing
1. Install Poppler
2. Install pdf2md
### 1. Install Poppler
**Linux (not tested)**
```
sudo apt-get install poppler-data
sudo apt-get install poppler-utils
```
**MacOS**
```
brew install poppler
```
**Windows**
```
not supported
```
### 2. Install pdf2md
#### Method: 1
Download the npm package from the official repo:
```
npm install -g pdf2md
```
#### Method: 2
You can install pdf2md by cloning the github repo and installing it globally:
```
git clone https://github.com/m0ne/pdf2md
cd pdf2md
npm install -g .
```
## Usage
The usage is super simple:
```
pdf2md <inputFile.pdf> <projectName>
```
It creates the following folder structure:
```
[projectName]
[imgages]
[projectName-01.png]
[projectName-02.png]
...
[src]
[projectName.pdf]
[projectName.md]
```
## Contributing
If you find a bug or want to add a feature, feel free to create an issue or create pull request.
## Authors
* **[m0ne](https://github.com/m0ne)** - *Initial work*
<a href="https://www.buymeacoffee.com/m0ne" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-green.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
| 17.554348 | 213 | 0.697833 | eng_Latn | 0.867814 |
bb1481f4a022955b23f78607682a55fc99fdb7d1 | 805 | md | Markdown | res/app/components/stf/common-ui/enable-autofill/README.md | huangtao/cloud-test | 8087b1337d47daab9eb39335ca6e286df0e4b4dc | [
"Apache-2.0"
] | 1 | 2018-09-12T15:43:32.000Z | 2018-09-12T15:43:32.000Z | res/app/components/stf/common-ui/enable-autofill/README.md | huangtao/cloud-test | 8087b1337d47daab9eb39335ca6e286df0e4b4dc | [
"Apache-2.0"
] | null | null | null | res/app/components/stf/common-ui/enable-autofill/README.md | huangtao/cloud-test | 8087b1337d47daab9eb39335ca6e286df0e4b4dc | [
"Apache-2.0"
] | 3 | 2018-09-12T15:43:33.000Z | 2019-07-10T09:50:15.000Z | # enable-autofill
This directive enables autofill (HTML5 autocomplete) on the selected form.
Currently this is only needed in `Chrome` because `autofill` only works on form `POST` method.
Based on [this](http://stackoverflow.com/questions/16445463/how-to-get-chrome-to-autofill-with-asynchronous-post/22191041#22191041).
## Usage
```html
<form enable-autofill action="about:blank">
<input type="text" autocomplete="on">
</form>
```
This will create the following DOM:
```html
<iframe src="about:blank" name="_autofill" style="display:none">
<form method="post" action="about:blank" target="_autofill">
<input type="text" autocomplete="on">
</form>
```
Yes, it is a bit ugly but it is currently the only way.
It will create only one `iframe` which will be reused. | 28.75 | 133 | 0.709317 | eng_Latn | 0.970427 |
bb14a7f47795e118af25d7a5aff7ac4c7e700cdf | 1,490 | md | Markdown | README.md | quickshiftin/preview-mode-demo | 82b00238834415b11689be98b7397cd7c146a9ed | [
"MIT"
] | null | null | null | README.md | quickshiftin/preview-mode-demo | 82b00238834415b11689be98b7397cd7c146a9ed | [
"MIT"
] | null | null | null | README.md | quickshiftin/preview-mode-demo | 82b00238834415b11689be98b7397cd7c146a9ed | [
"MIT"
] | null | null | null | # Next.js: SSG + Preview Mode Demo
This demo showcases Next.js' next-gen Static Site Generation (SSG) support.
## About
Next.js is the first hybrid framework, allowing you to choose the technique
that fits your use case best on a per-page basis:
[Static Generation or On-Demand rendering](https://nextjs.org/docs/basic-features/data-fetching).
This application solely focuses on Static Generation (using `getStaticProps`),
however, with a game-changing new feature: Preview Mode.
[Take Preview Mode for a spin](https://next-preview.vercel.app/)!
Preview Mode leverages Next.js' on-demand rendering capabilities to bypass the
statically prerendered page and render the page on-demand for
**authorized users**.
This feature is incredibly valuable for content editors who want to view
real-time draft content from their CMS, among other use cases.
## Learn More
You can learn more about this feature in the
[Next.js 9.3 Blog Post](https://nextjs.org/blog/next-9-3) or our
[Documentation](https://nextjs.org/docs/advanced-features/preview-mode).
## Note
This is a fork of the standard demo, that uses the local filesystem instead of S3. You can run it w/ node 14+.
There is also an example of [`getStaticPaths`](https://nextjs.org/docs/basic-features/data-fetching#getstaticpaths-static-generation).
You'll need to create a couple of previews before running `next build` in order to see it in action. The previews will be available at `http://locacalhost:3000/preview/{snapshotId}`
| 42.571429 | 181 | 0.775168 | eng_Latn | 0.96733 |
bb150810c27a2211342ef0fbd434d20e1d619a55 | 1,774 | md | Markdown | articles/storage/tables/table-storage-overview.md | watahani/azure-docs.ja-jp | 97274dd3b2fdbc2aed07f50888ac999c4f121abe | [
"BSD-Source-Code"
] | null | null | null | articles/storage/tables/table-storage-overview.md | watahani/azure-docs.ja-jp | 97274dd3b2fdbc2aed07f50888ac999c4f121abe | [
"BSD-Source-Code"
] | null | null | null | articles/storage/tables/table-storage-overview.md | watahani/azure-docs.ja-jp | 97274dd3b2fdbc2aed07f50888ac999c4f121abe | [
"BSD-Source-Code"
] | null | null | null | ---
title: Table Storage の概要 - Azure のオブジェクト ストレージ | Microsoft Docs
description: NoSQL データ ストアである Azure Table Storage を使用して構造化データをクラウドに格納します。
services: storage
author: SnehaGunda
ms.service: storage
ms.devlang: dotnet
ms.topic: overview
ms.date: 04/23/2018
ms.author: sngun
ms.component: tables
ms.openlocfilehash: 18b8fab53e9e2de6d083b3a9e78001a3844b38d5
ms.sourcegitcommit: da3459aca32dcdbf6a63ae9186d2ad2ca2295893
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 11/07/2018
ms.locfileid: "51226345"
---
# <a name="introduction-to-table-storage-in-azure"></a>Azure の Table Storage の概要
[!INCLUDE [storage-table-cosmos-db-tip-include](../../../includes/storage-table-cosmos-db-tip-include.md)]
Azure Table Storage は、NoSQL の構造化データをクラウド内に格納するサービスです。スキーマレスのデザインでキー/属性ストアを実現します。 Table Storage はスキーマがないため、アプリケーションの進化のニーズに合わせてデータを容易に修正できます。 Table Storage のデータ アクセスは、多くの種類のアプリケーションにとって高速でコスト効率に優れ、また一般に、従来の SQL と比較して、同様の容量のデータを低コストで保存することができます。
Table Storage を使用すると、Web アプリケーションのユーザー データ、アドレス帳、デバイス情報、サービスに必要なその他の種類のメタデータなど、柔軟なデータセットを格納できます。 ストレージ アカウントの容量の上限を超えない限り、テーブルには任意の数のエンティティを保存でき、ストレージ アカウントには任意の数のテーブルを含めることができます。
[!INCLUDE [storage-table-concepts-include](../../../includes/storage-table-concepts-include.md)]
## <a name="next-steps"></a>次の手順
* [Microsoft Azure ストレージ エクスプローラー](../../vs-azure-tools-storage-manage-with-storage-explorer.md)は、Windows、macOS、Linux で Azure Storage のデータを視覚的に操作できる Microsoft 製の無料のスタンドアロン アプリです。
* [.Net での Azure Table Storage の概要](../../cosmos-db/table-storage-how-to-use-dotnet.md)
* 利用可能な API の詳細については、Table service のリファレンス ドキュメントを参照してください。
* [.NET 用ストレージ クライアント ライブラリ リファレンス](https://go.microsoft.com/fwlink/?LinkID=390731&clcid=0x409)
* [REST API リファレンス](https://msdn.microsoft.com/library/azure/dd179355)
| 44.35 | 243 | 0.799324 | yue_Hant | 0.469914 |
bb15ee5386ef0aaf74de5b543c0f727f0301f8c8 | 2,073 | md | Markdown | _posts/2021/12/2021-12-13-spring-tiles-error01.md | kkn1125/devkimson | 7ac1b85f581ab76cc8713b73fa14298be4d687b5 | [
"MIT"
] | 3 | 2021-08-12T08:15:03.000Z | 2021-12-10T13:33:59.000Z | _posts/2021/12/2021-12-13-spring-tiles-error01.md | kkn1125/devkimson | 7ac1b85f581ab76cc8713b73fa14298be4d687b5 | [
"MIT"
] | 4 | 2021-11-02T09:19:34.000Z | 2022-02-11T11:15:10.000Z | _posts/2021/12/2021-12-13-spring-tiles-error01.md | kkn1125/kkn1125.github.io | 4aa7749943346292a6dbe9e636b0e4508512bf4b | [
"MIT"
] | null | null | null | ---
layout: post
date: 2021-12-13 17:16:37 +0900
title: "[SPRING] Tiles 설정 시 경로에 있는 파일을 못 찾을 때"
author: Kimson
categories: [ SPRING, TIL ]
image: assets/images/post/covers/TIL-spring.png
tags: [ tiles, not found ]
description: "Tiles 파일 못 찾을 때
저와 같은 상황이시라면 도움이 되시기를 바랍니다... 상황은 이렇습니다.
분명 `inc`라는 폴더에 `layout.jsp`를 `template`으로 불러오고 `body`라는 이름으로 `Controller`에서 `root.*`로 요청되는 페이지를 `{1}`에 매핑하여 불러올 때
분명히 `inc`폴더에 헤더나 푸터 파일을 생성하고 불러오면 안되는 현상이 발생했습니다."
featured: false
hidden: false
rating: 4
toc: true
profile: false
istop: true
keysum: false
keywords: ""
published: true
---
# Tiles 파일 못 찾을 때
저와 같은 상황이시라면 도움이 되시기를 바랍니다... 상황은 이렇습니다.
분명 `inc`라는 폴더에 `layout.jsp`를 `template`으로 불러오고 `body`라는 이름으로 `Controller`에서 `root.*`로 요청되는 페이지를 `{1}`에 매핑하여 불러올 때
분명히 `inc`폴더에 헤더나 푸터 파일을 생성하고 불러오면 안되는 현상이 발생했습니다.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE tiles-definitions
PUBLIC "-//Apache Software Foundation//DTD Tiles Configuration 3.0//EN" "http://tiles.apache.org/dtds/tiles-config_3_0.dtd">
<tiles-definitions>
<definition name="baseTemp" template="/WEB-INF/views/inc/layout.jsp">
<put-attribute name="title" value="Home" />
</definition>
<definition name="root.*" extends="baseTemp">
<put-attribute name="title" value="Home" />
<put-attribute name="body" value="/WEB-INF/views/{1}.jsp" />
<put-attribute name="leftSide" value="/WEB-INF/views/inc/lsb.jsp" />
</definition>
</tiles-definitions>
```
위의 코드는 작업 중에 만났던 에러부분입니다.
baseTemp에 template을 `/WEB-INF/views/inc/layout.jsp`로 지정했습니다. 로드가 안되는 부분이 `root.*`의 `leftSide`이구요.
물론 `layout.jsp`에서 `insertAttribute`속성에 `ignore`을 `true`로 한 상태였습니다.
이전 프로젝트에서 저렇게 불러왔던 기억이 있어 살펴보니 같은 경로가 아닌 서브폴더를 두고 불러오는 방식으로 되어 있는 것을 보고 폴더를 생성하여 `lsb.jsp`를 그 안에 넣고 실행했습니다.
결과는 성공...
# 결론
`extends`사용 시 `template`으로 사용하는 `layoutjsp`와 같은 경로에 있는 `jsp`파일을 `put-attribute`로 가져오면 발생하는 오류인 것 같습니다.
즉, `a/layout.jsp`로 `template`를 지정했다고 가정했을 때 `sidebar.jsp`를 특정 페이지에서만 붙이고 싶다면 파일경로를 `a/sidebar.jsp`가 아닌 `b/sidebar.jsp` 혹은 `a/b/sidebar.jsp`로 중첩되지 않는 경로로 설정해줘야 이런 에러를 피할 수 있을 것 같습니다. | 30.485294 | 181 | 0.689822 | kor_Hang | 0.999999 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.