# _Track Suggester_

#### _JavaScript and jQuery branching exercise for Epicodus, 08.12.2016_

#### By _**Sandro Alvarez**_

## Description

_This webpage surveys a user and tells them which Epicodus track will best suit their traits._

## Setup

* _Clone this repository_
* _Open index.html in your web browser_
* _Or use the live link: https://sandromateo.github.io/track-suggester/_

## Technologies Used

* _HTML_
* _CSS_
* _Bootstrap_
* _JavaScript_
* _jQuery_

## Support

_If you run into any problems, contact me at [email protected]_

### Legal

_Copyright (c) 2016 Sandro Alvarez_

_Licensed under the MIT license_
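_The heart of an exercise like this is simple branching on the user's answers. A minimal sketch of what that logic might look like (illustrative only; the element IDs, input names, and track labels below are assumptions, not the actual code in this repository):_

```js
// Illustrative sketch: IDs, names, and track labels are assumptions.
$(document).ready(function() {
  $("#survey-form").submit(function(event) {
    event.preventDefault();
    var answer = $("input[name=trait]:checked").val();
    if (answer === "visual") {
      $("#result").text("The design-oriented track may suit you best.");
    } else {
      $("#result").text("A programming track may suit you best.");
    }
  });
});
```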
# SetRegularKey

[[Source]](https://github.com/ripple/rippled/blob/4239880acb5e559446d2067f00dabb31cf102a23/src/ripple/app/transactors/SetRegularKey.cpp "Source")

A `SetRegularKey` transaction assigns, changes, or removes the regular key pair associated with an account.

You can protect your account by assigning a regular key pair to it and using it instead of the master key pair to sign transactions whenever possible. If your regular key pair is compromised, but your master key pair is not, you can use a `SetRegularKey` transaction to regain control of your account.

## Example {{currentpage.name}} JSON

```json
{
    "Flags": 0,
    "TransactionType": "SetRegularKey",
    "Account": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
    "Fee": "12",
    "RegularKey": "rAR8rR8sUkBoCZFawhkWzY4Y5YoyuznwD"
}
```

{% include '_snippets/tx-fields-intro.md' %}
<!--{# fix md highlighting_ #}-->

| Field        | JSON Type | [Internal Type][] | Description                    |
|:-------------|:----------|:------------------|:-------------------------------|
| `RegularKey` | String    | AccountID         | _(Optional)_ A base-58-encoded [Address][] that indicates the regular key pair to be assigned to the account. If omitted, removes any existing regular key pair from the account. Must not match the master key pair for the address. |

**Warning:** Until the [fixMasterKeyAsRegularKey amendment][] :not_enabled: becomes enabled, it is possible to set your regular key to match your master key. If you then disable the master key, your address cannot send transactions signed with the key even though it matches the enabled regular key. As a result, you cannot send any transactions from the address unless you have [multi-signing](multi-signing.html) enabled and use that. With the amendment enabled, such "blocked" accounts can send transactions again.

## See Also

For more information about regular and master key pairs, see [Cryptographic Keys](cryptographic-keys.html).

For a tutorial on assigning a regular key pair to an account, see [Working with a Regular Key Pair](assign-a-regular-key-pair.html).

For even greater security, you can use [multi-signing](multi-signing.html), but multi-signing requires additional XRP for the [transaction cost][] and [reserve](reserves.html).

<!--{# common link defs #}-->
{% include '_snippets/rippled-api-links.md' %}
{% include '_snippets/tx-type-links.md' %}
{% include '_snippets/rippled_versions.md' %}
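As a further illustration of the removal case described in the field table above: since omitting `RegularKey` clears any existing regular key pair, a removal transaction is simply the same transaction without that field. This example is a sketch reusing the account from the sample above, not taken from the source page:

```json
{
    "Flags": 0,
    "TransactionType": "SetRegularKey",
    "Account": "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn",
    "Fee": "12"
}
```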
The developed solution helps city residents monitor safety and cleanliness in their courtyards and lets service organizations plan their work through statistical analysis of the data. The solution, built on artificial intelligence, addresses the following tasks:

- Monitor how full the waste containers are
- Detect fires and ignitions in courtyards
- Collect statistics on waste collection
- Suggest changes to the waste collection schedule for each courtyard where cameras are installed

Solution stack: python, yolo, folium, streamlit.

What makes the solution unique is that it produces reports based on statistical analysis of street camera data and increases how quickly the relevant services respond to incidents involving pollution and fires in residential courtyards.

Development timeline: 3 months (pilot version) + 6-12 months (pilot trials and refinement) + 3-6 months (production rollout).

Development cost:

- 0.5 million RUB (pilot version)
- 2 million RUB (pilot trials and refinement)
- 1 million RUB (production rollout)

## Installing and running the application

> Note: The neural network model with its weights was not uploaded to the repository because it takes up a rather large amount of storage. Use [this link](https://drive.google.com/drive/folders/1GngMurVoAZ-4tjBJWCziYvHtdw25lfC7?usp=sharing) to download the model weights and follow the [instructions](https://github.com/waico/smart_city_kazan/blob/main/YOLO5_Train_Inference/readme.md) in the `YOLO5_Train_Inference` folder of the repository.

After preparing the model, do the following:

- download the entire contents of the GUI folder
- install the `streamlit` library

```terminal
pip install streamlit
```

- install the additional libraries the application needs from the requirements.txt file in the downloaded GUI folder, using the command:

```terminal
pip install -r requirements.txt
```

- from the root directory of the repository, run:

```terminal
streamlit run app_fol5.py
```

A browser window will open automatically and load the application page.

![Figure 1 - Application screenshot](scren1.PNG)

At the center of the application is a city map on which the user sees cameras placed in different locations. The cameras are colored according to the current situation on site:

- if overflowing waste bins are detected, trash is lying on the road, a bin is overfilled, or trash or a car in the parking lot has caught fire, the camera is red;
- if everything is fine, the camera is green.

For each camera, the user can get a detailed analytical report by pressing the corresponding button below (this was planned to be implemented as a link shown when hovering over a camera).

![Figure 2 - Application screenshot](stats.png)

After reviewing the cameras, the user can press a button at the bottom of the application to select the cameras with detected container overflows and download images from them. Pressing the button runs an automatic analysis of the camera snapshots, which are annotated with the fill percentage of the waste bins, with the bins themselves highlighted by bounding boxes.

![Figure 3 - Application screenshot](scren2.PNG)

A screencast of the application is available at https://drive.google.com/file/d/1hL9Wuo9yvklfcD44p85gNL73YZfNW7yA/view?usp=sharing

## Repository structure

```
|-GUI - application source code
|-OCR + geo_extract - applying OCR to the available images to extract time and address; processing the source data linking addresses to source-data folders; pipeline for extracting latitude and longitude from an address
|-YOLO5_Train_Inference - notebooks for training and running inference with the models; also contains the weights directory with the model weights
|-macro_micro_analytics - BI and time-series analysis of container load
|-parser - automatic saving of images returned by a Yandex query, to enrich the dataset
|-stream_video - video streaming integration
```

## Description of the model used

Based on an internal model-quality competition between `YOLO5X,YOLO5S,YOLO5I,ResNET_v2`, `YOLO5X` was chosen. The f1_weighted metric is 0.922 on the hold-out set.
# AwsMigrationEvaluator

```text
aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator
```

```text
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')
```

| Illustration | AwsMigrationEvaluator | AwsMigrationEvaluatorCard | AwsMigrationEvaluatorGroup |
| :---: | :---: | :---: | :---: |
| ![illustration for Illustration](../../../aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator.png) | ![illustration for AwsMigrationEvaluator](../../../aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator.Local.png) | ![illustration for AwsMigrationEvaluatorCard](../../../aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluatorCard.Local.png) | ![illustration for AwsMigrationEvaluatorGroup](../../../aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluatorGroup.Local.png) |

## AwsMigrationEvaluator

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluator
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluator('AwsMigrationEvaluator', 'Aws Migration Evaluator', 'an optional tech label')
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluator
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluator('AwsMigrationEvaluator', 'Aws Migration Evaluator', 'an optional tech label')
@enduml
```

## AwsMigrationEvaluatorCard

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluatorCard
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluatorCard('AwsMigrationEvaluatorCard', 'Aws Migration Evaluator Card', 'an optional description')
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluatorCard
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluatorCard('AwsMigrationEvaluatorCard', 'Aws Migration Evaluator Card', 'an optional description')
@enduml
```

## AwsMigrationEvaluatorGroup

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluatorGroup
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluatorGroup('AwsMigrationEvaluatorGroup', 'Aws Migration Evaluator Group', 'an optional tech label') {
    note as note
        the content of the group
    end note
}
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('aws-20210730/bootstrap')

' loads the Item which embeds the element AwsMigrationEvaluatorGroup
include('aws-20210730/Architecture/MigrationTransfer/AwsMigrationEvaluator')

' renders the element
AwsMigrationEvaluatorGroup('AwsMigrationEvaluatorGroup', 'Aws Migration Evaluator Group', 'an optional tech label') {
    note as note
        the content of the group
    end note
}
@enduml
```
## 1.0.0

* initRelease

## 1.0.1

* Remove bottomColor

## 1.0.2

* Add Failed RefreshMode for when fetching data fails
* Remake default header and footer builders
* Replace RefreshMode, LoadMode with refreshing, loading
* Replace onModeChange with onRefresh, onLoadMore

## 1.0.3

* Fix error props
* Interrupt scroll when in failure status

## 1.0.4

* Update README and demo

## 1.0.5

* Remove headerHeight, footerHeight in favor of getting the height initially
* Make footer stay at the bottom of the world forever
* Replace idle with idel (my English mistake)
* Fix defaultIndicator error icon display

## 1.0.6

* Use Material default loading bar
* Add a bool parameter to onOffsetChange to know whether it is a pull-up or pull-down
* Fix bug: when pulling up or down, sizeAnimation and iOS elasticity conflicted, resulting in jitter.

## 1.0.7

* Fix bug 1: Using ListView as a container caused a fatal error (continuous sliding) when the bottom control was reclaimed; use SingleChildScrollView instead to prevent the base control from recovering from the exception many times
* Fix bug 2: When the user triggered the pull-down and pull-up states at the same time, the animation had no callback when it entered or failed.

## 1.0.8

* Rework the bottom indicator; no more manual dragging to load more
* Many control property changes, mainly: 1. onModeChange => onRefreshChange, onLoadChange; 2. Add enableAutoLoadMore; 3. Remove bottomVisiableRange

## 1.1.0

Notice: This version changes the code and the API a lot.

* Transfer state changes to the indicator's wrapper to reduce unnecessary interface refreshes.
* No longer use RefreshMode or LoadMode; replaced with int because the state was hard to determine.
* Now support ScrollView in reverse mode
* Indicators are divided into two categories, loadIndicator and refreshIndicator, both supporting header and footer
* Provide a controller to invoke some essential internal operations.
* Move triggerDistance, completeTime and similar props to Config
* Add ClassicIndicator for convenient indicator construction

## 1.1.1

* Make triggerDistance equally valid for LoadWrapper
* Add enableOverScroll attribute

## 1.1.2

* Fix bug: refreshing the indicator required multiple drags to trigger a refresh
* Fix ClassicIndicator syntax errors and display status when no data is added.

## 1.1.3

* Fix contentList's items not being cached; remove shrinkWrap, physics limits
* Fix onOffsetChange callback error; it was also called back in the completion, failure, and refresh states
* Add unfollowIndicator implementation in the demo (Example3)

## 1.1.4

* Fix enableOverScroll not working
* Add a default IndicatorBuilder when headerBuilder or footerBuilder is null
* Fix loading not triggering when the user releases the gesture while the ListView is rebounding

## 1.1.5

* Fix problem of offsetChange
* Fix CustomScrollView not working
* Fix refreshIcon not being referenced in ClassicIndicator

## 1.1.6

* Fix compile error after Flutter update

## 1.2.0

* Fix the problem that ScrollController was not applied to internal controls
* Optimize RefreshController
* RefreshController is now required
* Add feature: requestRefresh can jump to the bottom or top
* Fix problem: refresh could still be triggered when a ScrollView was nested internally
* Remove rendering twice to get the indicator height; replaced by a height attribute in Config
* Change RefreshStatus from int to enum
# Docker Base Images

[![Docker Build](https://img.shields.io/docker/automated/eleidan/base.svg?style=flat-square)](https://hub.docker.com/r/eleidan/base/)
[![Docker Pulls](https://img.shields.io/docker/pulls/eleidan/base.svg?style=flat-square)](https://hub.docker.com/r/eleidan/base/)
---
layout: page
title: [Gal](/new-testament/gal.html) 6
---

# [Gal](/new-testament/gal.html) 6

[New Testament](/new-testament.html)

[prev](/new-testament/gal/gal-5.html)

1 _My brothers, perhaps a man has done something wrong. If so, you who are strong in the Spirit must help him to do the right thing again. Help him in a gentle way. Take care yourself, that you are not tried and will want to do wrong._

2 _Help each other in your troubles. In that way you obey Christ’s law._

3 _A man who thinks that he is an important person when he is not, that man fools himself._

4 _Let every man test his own work. Then he will be proud of his own work. He will not be proud because he thinks his own work is better than someone else’s work._

5 _Each man must carry his own load._

6 _People are taught the word of God. They should give some of all the good things they have to those who teach them._

7 _Do not be fooled about this. God cannot be fooled. A man gets what he plants._

8 _The man who plants the wrong things he wants to do will get death, because of those wrong things. But the person who plants what the Spirit wants him to do will live for ever, because of the Spirit._

9 _We must not get tired of doing good things. If we do not stop doing them, we will get something back when the right time comes._

10 _So then, when we can, we should do good to all people. But most of all, we should do it to those who are in God’s family._

11 _(See, I am writing this to you in big letters with my own hand.)_

12 _Some people want to do things that can be seen. They try to force you to be circumcised. They want to hide from trouble which would come to them if they talk about the cross of Christ._

13 _Even those who are circumcised do not obey the law. But they want you to be circumcised. Then they can be proud that they made you do it._

14 _But I will not be proud of anything but of the cross of our Lord Jesus Christ. Because of it, the things of the world have become dead to me, and I have become dead to the world._

15 _It does not matter if a person is circumcised or not, but he must become a new person._

16 _May all who live by this rule have peace. And may God bless them. They are the true people of Israel and they belong to God._

17 _From now on, please do not trouble me. For I have marks on my body that show I belong to the Lord Jesus._

18 _My brothers, may the kindness and grace of our Lord Jesus Christ bless your spirit. May he do it!_
---
type: copy-and-paste
title: related_posts-jekyll_plugin
description: >
  This is a jekyll plugin that overrides the built in related_posts function
  to calculate related posts based on a posts' tags.
author: LawrenceWoodman
git: [email protected]:LawrenceWoodman/related_posts-jekyll_plugin.git
repository: https://github.com/LawrenceWoodman/related_posts-jekyll_plugin
---
# VeraDemo.NET - Blab-a-Gag

## About

Blab-a-Gag is a fairly simple forum-type application which allows:

- users to post a one-liner joke
- users to follow the jokes of other users or not (listen or ignore)
- users to comment on other users' messages (heckle)

### URLs

`/reset` will reset the data in the database with a load of:

- users
- jokes
- heckles

`/feed` shows the jokes/heckles that are relevant to the current user.

`/blabbers` shows a list of all other users and allows the current user to listen or ignore.

`/profile` allows the current user to modify their profile.

`/login` allows you to log in to your account

`/register` allows you to create a new user account

`/tools` shows a tools page that shows a fortune or lets you ping a host.

## Configure

Database credentials are held in web.config

Log4Net information is held in log4net.config

### Database

A blank database is provided in App_Data\VeraDemoNet.mdf - the application will connect to this by default. If you want to change it, the connection string is in web.config as BlabberDB

## Run

Visual Studio 2017 is required to build the application. Publishing generates the appropriate files to deploy. Alternatively, run from inside Visual Studio.

Open `/reset` in your browser and follow the instructions to prep the database

Login with your username/password as defined in `ResetController.cs :: _veraUsers`

## AWS/Azure Deployment

### Azure

The deployment from Visual Studio recognises the connection string and will update it to point to the Azure SQL Server instance

### AWS

Install the AWS Toolkit for VS 2017 - https://aws.amazon.com/visualstudio/

## Exploitation Demos

See the `docs` folder

# TODO

## Immediate:

* Make it more easily deployable into Cloud Services (MS have lots of nice tools to help)
* Test on Greenlight.

## Ongoing:

* Add a couple of 'legacy' ASPX pages so that Greenlight can be demoed on pages (it doesn't work on CSHTML)
* DOM-based XSS to demonstrate JavaScript-oriented flaw remediation
* SourceClear/SCA demonstration through use of outdated/flawed 3rd-party components

## Missing from here, but in Verademo

* cwe-113-http-response-splitting
* cwe-134-format-string-injection
* cwe-384-session-fixation

## Specific to .NET - possibly to implement (but bear in mind resourcing on supporting course notes)

* cwe-80 based on inadvertent exposure of a public method in a controller. All public controller methods are routable, so look at converting them to private/protected or use the [NonAction] attribute
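For illustration, a minimal sketch of the controller hardening mentioned in the last TODO item. The controller and method names are hypothetical, not taken from this repository; `[NonAction]` is the standard ASP.NET MVC attribute:

```csharp
using System.Web.Mvc;

public class BlabController : Controller
{
    // Publicly routable action: reachable via an HTTP request.
    public ActionResult Feed()
    {
        return View();
    }

    // Marked [NonAction]: MVC routing no longer exposes this method,
    // even though it remains public.
    [NonAction]
    public void RebuildCache()
    {
        // internal housekeeping, not an endpoint
    }

    // Alternatively, make the helper non-public so it is never routable.
    private void AuditBlab(string blab)
    {
        // ...
    }
}
```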
# Low-Entropy (LE) Credential Protection

This feature enables us to protect low-entropy credentials, which allows the UI to offer PIN codes as an authentication mechanism for sign-in.

## Overview

LE secrets that need brute force protection are mapped to high-entropy secrets that can be obtained via a rate-limited lookup enforced by the security module. The high-entropy secret is then plugged into the actual authentication mechanism used to implement sign-in. In other words, the released high-entropy secret is used as the key to protect a VaultKeyset, which contains further secrets such as the actual file system encryption key protecting encrypted user data.

The low-entropy credentials and related metadata (including the number of unsuccessful authentication attempts to this point) are stored in an encrypted form on disk. This ensures that the security module can enforce retry limits against a compromised OS or hardware-level attacks while minimizing the storage footprint in security module flash.

The security module manages a number of credential slots which are referred to by labels. Cryptohome communicates with the security module to verify that the credential presented by a user in an authentication attempt is correct. On success, the security module releases the corresponding high-entropy secret to cryptohome.

Brute forcing is prevented by enforcing a cryptohome-defined delay schedule in the security module firmware. This only allows a limited number of authentication attempts for a specified timeframe (the delay schedule can also set a hard limit on the number of unsuccessful attempts). Each time a correct LE credential is provided, the number of unsuccessful attempts is reset to 0.

An LE secret which has been locked out (i.e. all attempts exhausted) may be reset by providing a separate high-entropy reset credential to the LECredentialManager class (this reset credential is generated, encrypted to a conventional password for that user, and supplied when the LE secret is being set up). Presenting the reset credential to the security module resets the attempts counter for the credential, thus clearing the lockout and allowing the LE credential to be used in subsequent authentication attempts.

## Hash tree

A hash tree is used by the security module to ensure integrity and track the state of all the credentials' metadata. Each credential has its own hash tree leaf, which is addressed by an integer label corresponding to its position in the tree.

Using the hash tree we can obtain a root hash of the entire tree, and store that in the security module. This allows us to capture the entire state of the on-disk tree using a single hash. This hash is then used to verify the integrity of the state passed to the security module while performing authentication/insert/reset operations. Since it is stored in the NVRAM of the security module, it can't be manipulated by the OS or attackers. Hardware attacks are hard since they would require decapping the chip.

For more information on hash trees, see https://en.wikipedia.org/wiki/Merkle_tree .

## Relevant classes

A diagram can be used to illustrate the various classes and their relation.

                LECredentialManager
                   /           \
                  /             \
                 /               \
                /                 \
       SignInHashTree        LECredentialBackend
             |
             |
    PersistentLookupTable

### PersistentLookupTable

This class provides a key-value like storage. The key is a uint64_t, and the value is a binary blob which contains data in a caller-determined format.

The class aims to provide an atomic storage solution for its entries; updates to the table will either be correctly recorded or will have no effect, so the table will always be in a valid state (without partial updates). It provides an API that allows values to be stored, retrieved and removed, and is used by the SignInHashTree.

### SignInHashTree

This class stores and manages the various credentials and labels used by the LECredentialManager on disk. As the implementation of the hash tree concept, it not only represents the leaf nodes of the hash tree, but also keeps track of all the inner nodes' hash values.

Using PersistentLookupTable, it stores an encrypted blob containing the metadata associated with an LE credential. It also stores alongside it a MAC which has been calculated on the metadata. The MACs are used during root hash calculations. Both of these are expected to be provided by the caller. Logically, the PersistentLookupTable can be thought of as storing all the leaf nodes of the hash tree.

The hash tree is defined by two parameters:

- The fan-out, i.e. the number of children of a node.
- The length (in terms of bits) of a leaf node label.

These two parameters can be used to determine the layout of the hash tree. This helps to understand:

- How a root hash is calculated.
- What hash values are required, given a particular leaf node, to recalculate a root hash.

The SignInHashTree also contains a HashCache file. This file stores the inner node hash values, and helps avoid recalculation of these values on each authentication attempt. The HashCache file is redundant, and should be regenerated if there is any discrepancy detected between it and the leaf nodes and/or the state on the security module.

### LECredentialBackend

This is an interface used to communicate with the security module to perform the LE credential operations. The LECredentialBackend exposes the following functionality provided by the security module:

- Validate a credential.
- Enforce the delay schedule provided during credential creation.
- Encrypt and return the credential metadata.
- Store, update and provide an operation log, to be used in case of a state mismatch with the on-disk state.

### LECredentialManager

This class uses both the SignInHashTree and LECredentialBackend to provide an interface that cryptohome can use to add, check, reset and remove an LE credential. It provides support for the following operations:

- InsertCredential: Provided an LE secret, the high-entropy secret it is guarding, a reset credential which is used to unlock a locked-out LE secret, and a delay schedule, it stores the resulting credential and returns a uint64_t label which can be used by cryptohome to reference the credential.
- CheckCredential: Attempts authentication of a user. It is provided the label of the credential to verify and the user-supplied secret, and on success returns the corresponding high-entropy secret.
- RemoveCredential: Given a label, removes that credential from the hash tree, and updates the security module's state to reflect that.
- ResetCredential: TODO(crbug.com/809723)

## Key derivation

### LE secret

The generation of the LE secret which is stored by the LE Credential manager can best be illustrated by the following diagram:

Definitions:

- VKK = VaultKeyset Key
- VKK IV = VKK Initialization Vector
- VK = VaultKeyset

    LE Salt (randomly generated) + User PIN
                     |
                     |  SCrypt
                    \|/
        VKK IV + SKeyKDF + LE Secret

    VKK Seed (randomly generated high entropy secret)
                     |
                     |  (guarded by LE secret)
                    \|/
        Stored in LECredentialManager

                 VKK Seed
                     |
                     |  HMAC256(SKeyKDF)
                    \|/
                    VKK

        VKK IV + VK (Vault Key)
                     |
                     |  AES Encryption(VKK)
                    \|/
               Encrypted VK

Per the above scheme, the SerializedVaultKeyset will store the LE Salt, so that it can be used to regenerate the LE secret used during CheckCredential().

### Reset secret

TODO(crbug.com/809723)

## Resynchronization

TODO(crbug.com/809710)
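Returning to the key derivation diagram above: as a rough illustration only, the flow can be sketched in Python as below. This is not the actual cryptohome implementation; the scrypt parameters, key sizes, and the way the stretched output is split are assumptions made for the sketch.

```python
import hashlib
import hmac

def derive_le_parts(le_salt: bytes, pin: bytes):
    """Stretch the low-entropy PIN with scrypt and split the output into
    the three values in the diagram (sizes are assumptions)."""
    stretched = hashlib.scrypt(pin, salt=le_salt, n=2**14, r=8, p=1, dklen=80)
    vkk_iv = stretched[:16]      # AES IV used when encrypting the VaultKeyset
    skey_kdf = stretched[16:48]  # HMAC key used to derive the VKK
    le_secret = stretched[48:]   # presented to the security module for checking
    return vkk_iv, skey_kdf, le_secret

def derive_vkk(skey_kdf: bytes, vkk_seed: bytes) -> bytes:
    """VKK = HMAC-SHA256 over the high-entropy seed the security module
    releases after a successful LE-credential check."""
    return hmac.new(skey_kdf, vkk_seed, hashlib.sha256).digest()
```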
---
title: Share a canvas app | Microsoft Docs
description: Share your canvas app by giving other users permission to run or modify it
author: AFTOwen
manager: kvivek
ms.service: powerapps
ms.topic: conceptual
ms.custom: canvas
ms.reviewer:
ms.date: 11/28/2018
ms.author: anneta
search.audienceType:
  - maker
search.app:
  - PowerApps
---

# Share a canvas app in PowerApps

After you build a canvas app that addresses a business need, specify which users in your organization can run the app and which can modify and even reshare it. Specify each user by name, or specify a security group in Azure Active Directory. If everyone would benefit from your app, specify that your entire organization can run it.

> [!IMPORTANT]
> For a shared app to function as you expect, you must also manage permissions for the data source or sources on which the app is based, such as [Common Data Service](#common-data-service) or [Excel](share-app-data.md). You might also need to share [other resources](share-app-resources.md) on which the app depends, such as flows, gateways, or connections.

## Prerequisites

Before you share an app, you must save it to the cloud (not locally) and then publish the app.

- Give your app a meaningful name and a clear description, so that people know what your app does and they can easily find it in a list. On the **File** menu in PowerApps Studio, select **App settings**, specify a name, and then type or paste a description.
- Whenever you make changes, you must save and publish the app again if you want others to see those changes.

## Share an app

1. [Sign in](https://web.powerapps.com?utm_source=padocs&utm_medium=linkinadoc&utm_campaign=referralsfromdoc) to PowerApps, and then select **Apps** near the left edge.

   ![Show list of apps](./media/share-app/file-apps.png)

1. Select the app that you want to share by selecting its icon.

   ![Select an app](./media/share-app/select-app.png)

1. In the banner, select **Share**.

   ![Open share screen](./media/share-app/banner-share.png)

1. Specify by name or alias the users or security groups in Azure Active Directory with which you want to share the app.

   - To allow your entire organization to run the app (but not modify or share it), type **Everyone** in the sharing panel.
   - You can share an app with a list of aliases, friendly names, or a combination of those (for example, **Jane Doe &lt;[email protected]>**) if the items are separated by semi-colons. If more than one person has the same name but different aliases, the first person found will be added to the list. A tooltip appears if a name or alias already has permission or can't be resolved.

   ![Specify users and co-owners](./media/share-app/share-everyone.png)

   > [!NOTE]
   > You can't share an app with a distribution group in your organization or with a user or group outside your organization.

1. If you want to allow those with whom you're sharing the app to edit and share it (in addition to running it), select the **Co-owner** check box.

   You can't grant **Co-owner** permission to a security group if you [created the app from within a solution](add-app-solution.md).

   > [!NOTE]
   > Regardless of permissions, no two people can edit an app at the same time. If one person opens the app for editing, other people can run it but not edit it.

1. If your app connects to data for which users need access permissions, specify them.

   For example, your app might connect to an entity in a Common Data Service database. When you share such an app, the sharing panel prompts you to manage security for that entity.

   > [!div class="mx-imgBorder"]
   > ![Assign a security role](media/share-app/cds-assign-security-role.png)

   For more information about managing security for an entity, see [Manage entity permissions](share-app.md#manage-entity-permissions) later in this topic.

1. If you want to help people find your app, select the **Send an email invitation to new users** check box.

1. At the bottom of the share panel, select **Share**.

   Everyone with whom you shared the app can run it in PowerApps Mobile on a mobile device or in AppSource on [Dynamics 365](https://home.dynamics.com) in a browser. Co-owners can edit and share the app in [PowerApps](https://web.powerapps.com?utm_source=padocs&utm_medium=linkinadoc&utm_campaign=referralsfromdoc).

   If you sent an email invitation, everyone with whom you shared the app can run it by selecting a link in the invitation.

   - If a user selects the link on a mobile device, the app opens in PowerApps Mobile.
   - If a user selects the link on a desktop computer, the app opens in a browser.

   Co-owners who receive an invitation get another link that opens the app for editing in PowerApps Studio.

You can change permissions for a user or a security group by selecting their name and then performing either of these steps:

- To allow co-owners to run the app but no longer edit or share it, clear the **Co-owner** check box.
- To stop sharing the app with that user or group, select the Remove (x) icon.

## Security-group considerations

- If you share an app with a security group, existing members of that group and anyone who joins it will have the permission that you specify for that group. Anyone who leaves the group loses that permission unless they belong to a different group that has access or you give them permission as an individual.
- Every member of a security group has the same permission for an app as the overall group does. However, you can specify greater permissions for one or more members of that group to allow them greater access. For example, you can give Security Group A permission to run an app, but you can also give User B, who belongs to that group, **Co-owner** permission. Every member of the security group can run the app, but only User B can edit it. If you give Security Group A **Co-owner** permission and User B permission to run the app, that user can still edit the app.

## Manage entity permissions

### Common Data Service

If you create an app based on Common Data Service, you must also ensure that the users with whom you share the app have the appropriate permissions for the entity or entities on which the app relies. Specifically, those users must belong to a security role that can perform tasks such as creating, reading, writing, and deleting relevant records. In many cases, you'll want to create one or more custom security roles with the exact permissions that users need to run the app. You can then assign a role to each user as appropriate.

> [!NOTE]
> As of this writing, you can assign security roles to individual users and security groups in Azure Active Directory but not to Office groups.

#### Prerequisite

To assign a role, you must have **System administrator** permissions for a Common Data Service database.

#### Assign a security group in Azure AD to a role

1. In the sharing panel, select **Assign a security role** under **Data permissions**.

1. Select the role or roles in Common Data Service that you want to assign to the user or the security group in Azure AD with which you want to share the app.

   ![Security role list](media/share-app/cds-assign-security-role-list.png)

### Common Data Service (previous version)

When you share an app that's based on an older version of Common Data Service, you must share the runtime permission to the service separately. If you don’t have permission to do this, see your environment administrator.
# IMANE-PC (Incident management networking exercise)

### A cyber-incident response simulation in the form of a computer exercise

- IMANE-PC is a cyber-incident response simulation run as a computer exercise.
- Participants can take part in the exercise in their real-world roles and get a simulated experience of inter-organizational coordination.
- The results (= the response flow) can be exported as a worksheet, so the exercise can be evaluated immediately after it is run.

### A support system for building and operating cyber-incident response exercises in each organization

- Exercise content is easy to build and customize.
- By sharing scenario data, exercises can be built with reference to existing data.
- By sharing and comparing exercise results, best practices can be examined.

### Participants divide up the roles, take part in the exercise, and experience inter-organizational coordination within it

![Overview of IMANE-PC](https://user-images.githubusercontent.com/55830516/83992655-01dd0900-a98c-11ea-94f1-4cb8af3ee356.png)

# Getting Started

### Prerequisites

- Server
  - JDK ver. x (confirmed working)
  - Tomcat ver. x (confirmed working)
- Client
  - Google Chrome

- Installing the JDK
  - Install OpenJDK
  - https://openjdk.java.net/
- Installing Tomcat and deploying workshop.war (see the command sketch at the end of this README)
  - Install Apache Tomcat
  - http://tomcat.apache.org/
  - Place workshop.war inside apache-tomcat-x.x.xx\webapps
- Starting the server
  1. Start Tomcat
  2. Access the console screen
     - http://[hostname]:8080/workshop/
- Starting an exercise
  - Click "Administrators/observers start here"
  - Click Login
  - Click the Start button
- Shutting down the server
  - Shut down Tomcat

- For installation and startup instructions, see the PDF on IMANE-PC server setup and starting an exercise.
  - [IMANE-PC server setup and how to start an exercise](https://drive.google.com/file/d/1g_2clxpkU8UtCgrSM63hIYKnH7hDyE-j/view?usp=sharing)
- For the flow of operations, see the IMANE-PC operating instructions PDF.
  - [IMANE-PC operating instructions](https://drive.google.com/file/d/1cHsT7w-BvoI9WSXi1ZQGjmmHKsY5d70Q/view?usp=sharing)
- For the bundled exercise scenario, see the scenario overview PDF.
  - [Scenario overview](https://drive.google.com/file/d/1cMO9Nqt-RyStZ5-CS1KRWjsVoH2MdvkT/view?usp=sharing)

# Release Note

May 2020, ver. 1.0, initial release

For the versions that are available, see the tags on this repository.

# Bug reports and feature requests

For bug reports and feature requests, please open an issue following the template.

# Authors

TsurumaiGO team

[email protected]

# License

This project is licensed under the MIT License - see the LICENSE.md file for details.

# Acknowledgments

This work was supported by Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), “Cyber-Security for Critical Infrastructure” (funding agency: NEDO).
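As referenced in the setup steps above, the Tomcat deployment can be summarized in two commands. This is an illustrative sketch only: the paths assume a default Tomcat install on Linux, and the version placeholder `x.x.xx` is kept from the original instructions.

```sh
# Deploy the application into Tomcat's webapps directory, then start Tomcat.
cp workshop.war apache-tomcat-x.x.xx/webapps/
apache-tomcat-x.x.xx/bin/startup.sh
# Then open http://[hostname]:8080/workshop/ in Google Chrome.
```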
---
layout: slide
title: "Welcome to the second slide!"
---

aaaaaaaaaaaaaa

Write something here.

Use the back button to go back!
# informatiestroom

Go lang poor man's CMS
*Execute Around* is a pattern that lets us simplify our code and make it cleaner and easier to read. The main focus of this approach is to abstract away the code that we want to happen before and after some code in the middle that can change over time.

Here are a couple of examples to better understand this concept.

## Measuring code

From time to time you may need to measure how long a specific piece of code takes to run. For example, you may want to know how slow a specific loop is before returning your results, or you may be more interested in the time a query spends getting data out of the database.

It does not matter **what** code is taking time; what you're looking for is *how much time it took*. Generally speaking we can do something like this:

```ruby
start_time = Time.now

# Some code...

end_time = Time.now - start_time
puts "It took #{end_time}"
```

As you can see the code seems pretty simple, probably because I totally omitted the complex code that you would put between the *measuring code*. But if you think for a moment about the kind of pattern we are studying right now, the measuring code is **executed around** the main code that is vital for our application.

Basically our application would work just fine without the additional code; we added it because we want to know how much time a specific piece of code takes to execute.

This is a silly example, but try to think of all the times this bit of code could be useful. I am sure you think it's going to be useful too, **but** you would have to remember how to implement it.

Lucky for you, blocks come to the rescue, because you can create your own method that will be executed around the actual code of your application.

```ruby
def timing_code(desc)
  start_time = Time.now
  yield
  end_time = Time.now - start_time
  puts "#{desc} took #{end_time}"
end
```

Now with `timing_code` at our disposal, where we also added a nice description, we can simplify our code like so:

```ruby
timing_code('Reverse string') do
  "stressed".reverse
end
```

Obviously with code as simple as the above you'll get really small numbers, but still, now you have a method that lets you measure the time spent by any code in your application.

## Reduce duplication of code

The usefulness of the *Execute Around* pattern does not end with timing your code; it gets really useful even just to simplify the code you write and abstract away the annoying parts.

Let's say, for example, that you want to display a message as the result of a check. In this case the only thing that changes in your code is the conditional. Take this code for example:

```ruby
class Sensor
  def temperature
    rand(100..200)
  end

  def level
    rand(1..5)
  end
end

sensor = Sensor.new

puts "Checking water temperature..."
result = sensor.temperature < 150
if result
  puts "OK"
else
  puts "FAILED!"
end

puts "Checking water level..."
result = sensor.level > 3
if result
  puts "OK"
else
  puts "FAILED!"
end
```

We have a `Sensor` class that generates random values for the temperature and level of the water, and down below you see that we repeat a lot of code:

```ruby
puts "Checking water temperature..."
result = sensor.temperature < 150
if result
  puts "OK"
else
  puts "FAILED!"
end
```

Besides the check that we store in the `result` variable and the string representing the value we will check, all the other code **is just repeated**. What a waste of time!

Now that we know how blocks work, it's time to abstract the repetition and create a method that lets us simplify our work:

```ruby
def with_checking(description)
  puts "Checking #{description}..."
  result = yield
  if result
    puts "OK"
  else
    puts "FAILED!"
  end
end
```

We added a method parameter to let the user add their own description, but after that we simply copied the original code, with the only exception that the condition stored in `result` is returned by the block the user passes.

```ruby
# Recreating the previous checks
with_checking("temperature") { sensor.temperature < 150 }
with_checking("level") { sensor.level > 3 }

# Custom check for the sake of it
with_checking("it's payday") { Time.now.day > 25 }
```

The last, silly, check I've implemented here has been added just to demonstrate that you can execute **any condition** you wish inside the block; the important part is that the answer you get makes sense :wink:
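One closing note that goes slightly beyond the course material: real-world execute-around methods usually make sure the "after" step runs even when the block raises an exception, by using `ensure`. A minimal sketch of `timing_code` written that way:

```ruby
def timing_code(desc)
  start_time = Time.now
  yield
ensure
  # Runs whether the block finishes normally or raises an exception.
  puts "#{desc} took #{Time.now - start_time}"
end
```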
--- title: Configure login identity for the platform original-url: https://docs.microsoftcommunitytraining.com/docs/configure-login-social-work-school-account author: nikotha ms.author: nikotha description: Microsoft Community Training platform provides three types of login. ms.prod: azure zone_pivot_groups: "AD-Deployments-Methods" --- # Configure login identity for the platform ## Prerequisite Please follow [**installation article**](../../infrastructure-management/install-your-platform-instance/installation-guide-detailed-steps.md) to Deploy your MCT instance. In the step “**Set up your Login type**”, select the suitable identity type. ![Login Identity types](../../media/LoginIdentity1.png) and follow the below documentation to configure login identity. ## Configure login identity Microsoft Community Training platform provides three types of login: 1. Phone number 2. Social email-based login via your Microsoft, Google or Facebook account 3. Microsoft Work or School account > [!NOTE] > Please note this article is in continuation of the [**installation article**](../../infrastructure-management/install-your-platform-instance/installation-guide-detailed-steps.md). In this article, we will walk you through on how to configure login identity for the platform. ## Phone based authentication There is no additional configuration needed for phone-based login. ## Work or School Account based authentication ### **Option 1** - Run AAD script to Configure Work or School account for your training portal by following the instructions below #### Step 1 - Login to Azure portal Use an existing subscription for login to portal #### Step 2 - Create Azure AD application **Pre-requisite:** MCT requires Azure Active Directory application creation and registration. To successfully run the AAD creation script, the following permissions are needed: 1. Tenant - AAD app creator 2. Subscription - Owner **Further Steps:** 1. Open Cloud Shell in azure portal 2. Run the following steps in a Cloud Shell instance using the PowerShell environment. 1. Download the AAD app creation script using following command: `wget -q https://sangamapps2.blob.core.windows.net/aad-app-creation/AADAppCreation.ps1 -O ./aad_app_creation.ps1` 2. By default, the file is downloaded to your home directory. Navigate to the home directory with following command: **cd** 3. Run the AAD script downloaded in step 1: **./aad_app_creation.ps1** The script asks for the following two inputs: * Training Portal’s Website Name: Training portal’s website name. For example: If training portal URL is <https://contosolearning.azurefd.net/> , you need to enter “contosolearning” * Azure AD’s Tenant Domain Name: Azure AD’s tenant domain name. For example: contoso.onmicrosoft.com The AAD script takes ~2 minutes to run and outputs 4 values on screen (Client ID, Client Secret, Tenant Id, Tenant name). Make a note of the output values as they will be needed in next step. If someone else ran the script, ask them to share this output. A new app is created. If an app already exists with the same name, the script will delete the existing app and create a new app. In case of facing any issues after deployment please refer this [**guide**](troubleshooting.md#issue-7-azure-active-directory-configuration-issue). 
### **Option 2** - Follow the manual steps to configure a Work or School account for your training portal by following the instructions below

#### Step 1 - Set up your Azure AD

You can create a new Azure Active Directory tenant or use an existing one, based on your organization's requirements.

1. Create a new Azure Active Directory tenant and copy the tenant name, required later as **Tenant Name**. If you already have an existing Azure AD, use it and copy its tenant name, required later as **Tenant Name**. For example, if the default domain for your Azure AD tenant is **contoso.onmicrosoft.com**, then enter **contoso**.
2. Go to the **Show diagnostics** section on the right and copy the tenant ID, required later as **Tenant ID**.

#### Step 2 - Create an Azure AD application

1. Create a new Azure AD application by following this article. You only need to follow the section titled **Create an Azure Active Directory application**. Make sure the Redirect URIs are set as below:

   **Redirect URIs**

   * Set to type "Web"
   * Add the following to Redirect URIs:
     * **<https://name.azurefd.net/signin-azureAD>**,
     * **<https://name.azurewebsites.net/signin-azureAD>** and
     * **<https://name-staging.azurewebsites.net/signin-azureAD>**

     where **"name"** corresponds to your website name.

   :::image type="content" source="../../media/Redirect URIs.png" alt-text="Manual AAD Setup Step 1a":::

2. Click on Expose an API from the left menu of your application.

   ![Manual AAD Setup Step 2](../../media/ManualAADSetup2.png)

3. Click on "Add a scope". Ensure that the auto-populated value of Application ID URI is of the form "api://{ClientID}".

   ![Manual AAD Setup Step 3a](../../media/ManualAADSetup3a.png)

4. Click on Save and continue.
5. Enter the value "access_as_user" under Scope name.
6. Select Admins and users under Who can consent?
7. Populate the remaining values. These values appear on the login screen (unless global consent is granted by an admin).

   ![Manual AAD Setup Step 3](../../media/ManualAADSetup3.png)

8. Obtain the Client ID and Client Secret:
   * Copy the value of Application ID, required later as the Client ID.
   * Click on Certificates & Secrets from the left menu.
   * Click on New client secret.

     ![Manual AAD Setup Step 8](../../media/ManualAADSetup4.png)

   * Enter the description and expiry time of the secret (it is recommended to select the maximum allowed time for expiry) and click on the Save button. A value will be shown. Save this client secret value; it is required later as the ClientSecret.

     :::image type="content" source="../../media/Obtain clientsecret value.PNG" alt-text="Image showing how to obtain Client Secret value ":::

9. Make a note of the values and follow the [**installation article**](../../infrastructure-management/install-your-platform-instance/installation-guide-detailed-steps.md) to complete the deployment by configuring the obtained values as below.

   ![Enter clientId, secret,tenantId,Name](../../media/LoginIdentity6.png)

> [!Note]
> If you face any issues while deploying your AD instance, please follow the header "Azure Active Directory Configuration issue" in the [**troubleshooting document**](troubleshooting.md) and send us the required output.

### Multi-tenant support for Azure Active Directory based authentication

MCT supports login via multiple tenants for AAD authentication. This can be done with a few configuration changes while following the steps during AAD setup ([**Configurations on the Training Platform**](../../settings/configurations-on-the-training-platform.md#steps-to-set-the-configurations-on-the-platform)). Follow this documentation to reach the App configuration section from your [**Azure portal**](https://www.portal.azure.com/).

#### Steps to enable multi-tenant login for an AAD based instance

1. Log in to your [**Azure portal**](https://www.portal.azure.com/).
2. Navigate to [**Configurations on the Training Platform**](../../settings/configurations-on-the-training-platform.md#steps-to-set-the-configurations-on-the-platform).
3. For adding multiple tenants:
   * For new deployments:
     * If you are running the AAD script while [**configuring the login identity for the platform**](#work-or-school-account-based-authentication) to create a new deployment, give the Tenant ID as '**common**' at the time of deployment in place of the original Tenant ID that you received after running the script.
     * If you are following the manual steps while [**configuring the login identity for the platform**](#option-2---follow-the-manual-steps-to-configure-work-or-school-account-for-your-training-portal-by-following-the-instructions-below) to create a new deployment, provide the Tenant ID as '**common**' in the "Set up your login type" window instead of the Tenant ID from your Azure AD.
   * For existing deployments:
     * In [**Configurations on the Training Platform**](../../settings/configurations-on-the-training-platform.md#steps-to-set-the-configurations-on-the-platform), search for **idp:AzureADExternalAuthTenantId**. Set the value as '**common**', replacing the existing tenant ID (we suggest you keep a copy of your original Tenant ID value as a reference). Click 'Ok'.

       :::image type="content" source="../../media/MultitenantAAD1.png" alt-text="multi tenant app setting":::

4. While in the Configurations section, search for **idp:AzureADExternalAuthTenant** and note the tenant name.
5. Search for **idp:AzureADExternalAuthClientId** and note the client ID.
6. Navigate to the tenant (tenant name from step 4) where your AAD exists, click on App registrations, and search for the application that corresponds to the client ID from step 5.

   :::image type="content" source="../../media/MultitenantAAD2.png" alt-text="multiple tenant app reg":::

7. Click on the application, navigate to 'Authentication', select 'Accounts in any organizational directory (Any Azure AD directory - Multitenant)' under Supported account types, and click 'Save'.

   :::image type="content" source="../../media/MultitenantAAD3.png" alt-text="multi tenant authentication":::

8. For the first login using another tenant, the admin of that tenant needs to approve the client ID of the MCT application by using the following URL. In the URL below, replace the placeholder with the client ID from step 5.

   ```URL
   https://login.microsoftonline.com/common/adminconsent?client_id=<client_ID_of_your_application>
   ```

## Social account or email based authentication

You can configure social accounts for your training portal by following the instructions below.

### Step 1 - Set up your Azure AD B2C

You can create a new Azure AD B2C tenant or use an existing one, based on your organization's requirements.

1. Log in to the [**Azure portal**](https://portal.azure.com/).
2. Create a new [**Azure Active Directory B2C**](/azure/active-directory-b2c/tutorial-create-tenant) tenant.
3. Link the Azure Active Directory B2C tenant just created to your Azure subscription.

### Step 2 - Create the Azure AD B2C application

Here are the steps to create an application on the Azure AD B2C tenant and link it with your training portal instance:

1. Create a new Azure AD B2C application by following [**this article**](/azure/active-directory-b2c/tutorial-register-applications). Make sure the application properties are set as follows:

   1. Web app / Web API - set to "Yes"
   2. Allow implicit flow - set to "No"
   3. Add the following to **Reply URL**:
      1. "https://*name*.azurefd.net/signin-b2c"
      2. "https://*name*.azurewebsites.net/signin-b2c"
      3. "https://*name*-staging.azurewebsites.net/signin-b2c"

      where "name" corresponds to your website name.
   4. If you are setting up the **Password reset flow**, then also add the following to **Reply URL**:
      1. "https://*name*.azurefd.net/signin-b2c-pwd"
      2. "https://*name*.azurewebsites.net/signin-b2c-pwd"
      3. "https://*name*-staging.azurewebsites.net/signin-b2c-pwd"

      where "name" corresponds to your website name.

   ![Password reset flow](../../media/image%28113%29.png)

2. Copy the Application ID value, required later as the **Client ID**.
3. Under the application, go to **Keys** and click on **Generate Key**.
4. Click on **Save** and the app key will appear. Copy the value, required later as the **Client Secret**.
5. Go to Azure Active Directory from the left menu of your Azure portal, click on Domain Names, and copy the tenant name under Name, required later as the **Tenant Name**. For example, if the default domain for your Azure AD tenant is **contoso.onmicrosoft.com**, then enter **contoso**.
6. Refer to [**this article**](/azure/active-directory-b2c/tutorial-create-user-flows) to create a **signing flow** (a sign-up and sign-in user flow) and a **password reset flow** (for local accounts).

   * Select Email Addresses, Given Name, Identity Provider and Surname in Application claims.
   * Application claims should be the same as in the following screenshot:

     ![Application Claims Login Identity1](../../media/LoginIdentity8.png)

   * Don't select any sign-up attributes:

     ![User Attributes](../../media/LoginIdentity9.png)

   * Copy the user flow name(s); these will be required during MCT platform installation.

   > [!NOTE]
   > Setting up the Password Reset Flow for an existing deployment:
   > If you are setting up the **Password reset flow** on an existing deployment with Azure AD B2C authentication:
   >
   > a. Set the user flow name as **pwd_reset** (step #1 in creating a flow using the steps in [**this article**](/azure/active-directory-b2c/tutorial-create-user-flows)).
   >
   > b. Add the following URLs in the **Reply URL** section:
   >
   > * "https://*name*.azurefd.net/signin-b2c-pwd"
   > * "https://*name*.azurewebsites.net/signin-b2c-pwd"
   > * "https://*name*-staging.azurewebsites.net/signin-b2c-pwd" where "name" corresponds to your website name.
   >
   > c. Open **App Service** and add the following configurations, both with the value **B2C_1_pwd_reset**:
   >
   > * AzureADB2CPasswordResetPolicy
   > * idp:AzureADB2CPasswordResetPolicy
   >
   > ![App Service](../../media/image%28355%29.png)

7. Follow the [**installation article**](../../infrastructure-management/install-your-platform-instance/installation-guide-detailed-steps.md) to complete the deployment by configuring the obtained values as per the screenshot below.

   ![Create MCT redirect](../../media/LoginIdentity11.png)

### Step 3 - Configure your identity provider

Here are the steps to create policies based on the identity provider:

1. **Configure the identity provider** – based on your chosen provider, such as [**Microsoft**](/azure/active-directory-b2c/active-directory-b2c-setup-msa-app), [**Google**](/azure/active-directory-b2c/active-directory-b2c-setup-goog-app) and [**Facebook**](/azure/active-directory-b2c/active-directory-b2c-setup-fb-app).
2. **Configure your Local Account** - You can configure local accounts for your training portal by following the instructions below:
   1. Navigate to the Azure AD B2C tenant.
   2. Under Policies, select User Flow and click on the required user flow from the populated list.
   3. Under Settings, select Identity Providers and check that the configuration matches exactly as below.

      ![Configure your local account](../../media/image%28360%29.png)

   4. In the same window, select Application Claims and check that the configuration matches exactly as below.

      ![Application Claims Local Account1](../../media/4.jpg)

   5. Select User Attributes and ensure no options are selected.
   6. Restart the training portal App Service.

      ![Restart App Service.png](../../media/LoginIdentity14.png)

   7. Once the app successfully restarts, verify that a user can log in using a local account.
60.749004
422
0.738917
eng_Latn
0.973365
eda85d22eb7c8da1602d3adca7a571942040e5e4
15,897
md
Markdown
articles/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Nike1016/azure-docs.hu-hu
eaca0faf37d4e64d5d6222ae8fd9c90222634341
[ "CC-BY-4.0", "MIT" ]
1
2019-09-29T16:59:33.000Z
2019-09-29T16:59:33.000Z
articles/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Nike1016/azure-docs.hu-hu
eaca0faf37d4e64d5d6222ae8fd9c90222634341
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Nike1016/azure-docs.hu-hu
eaca0faf37d4e64d5d6222ae8fd9c90222634341
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Monitor your workload using DMVs | Microsoft Docs
description: Learn how to monitor your workload using DMVs.
services: sql-data-warehouse
author: ronortloff
manager: craigg
ms.service: sql-data-warehouse
ms.topic: conceptual
ms.subservice: manage
ms.date: 07/23/2019
ms.author: rortloff
ms.reviewer: igorstan
ms.openlocfilehash: f2dab34ea0ef64f4062819e9b2d475e6a226856b
ms.sourcegitcommit: 9dc7517db9c5817a3acd52d789547f2e3efff848
ms.translationtype: MT
ms.contentlocale: hu-HU
ms.lasthandoff: 07/23/2019
ms.locfileid: "68405437"
---
# <a name="monitor-your-workload-using-dmvs"></a>Monitor your workload using DMVs

This article describes how to use Dynamic Management Views (DMVs) to monitor your workload. This includes investigating query execution in Azure SQL Data Warehouse.

## <a name="permissions"></a>Permissions

To query the DMVs in this article, you need either VIEW DATABASE STATE or CONTROL permission. Granting VIEW DATABASE STATE is usually the preferred permission, as it is much more restrictive.

```sql
GRANT VIEW DATABASE STATE TO myuser;
```

## <a name="monitor-connections"></a>Monitor connections

All logins to your SQL Data Warehouse are logged to [sys.dm_pdw_exec_sessions][sys.dm_pdw_exec_sessions]. This DMV contains the last 10,000 logins. The session_id is the primary key and is assigned sequentially for each new logon.

```sql
-- Other Active Connections
SELECT * FROM sys.dm_pdw_exec_sessions where status <> 'Closed' and session_id <> session_id();
```

## <a name="monitor-query-execution"></a>Monitor query execution

All queries executed on SQL Data Warehouse are logged to [sys.dm_pdw_exec_requests][sys.dm_pdw_exec_requests]. This DMV contains the last 10,000 queries executed. The request_id uniquely identifies each query and is the primary key for this DMV. The request_id is assigned sequentially for each new query and is prefixed with QID, which stands for query ID. Querying this DMV for a given session_id shows all queries for a given logon.

> [!NOTE]
> Stored procedures use multiple Request IDs. Request IDs are assigned in sequential order.
>
> Follow the steps below to investigate query execution plans and times for a particular query.

### <a name="step-1-identify-the-query-you-wish-to-investigate"></a>STEP 1: Identify the query you wish to investigate

```sql
-- Monitor active queries
SELECT *
FROM sys.dm_pdw_exec_requests
WHERE status not in ('Completed','Failed','Cancelled')
  AND session_id <> session_id()
ORDER BY submit_time DESC;

-- Find the top 10 longest running queries
SELECT TOP 10 *
FROM sys.dm_pdw_exec_requests
ORDER BY total_elapsed_time DESC;
```

From the preceding query results, **note the Request ID** of the query that you would like to investigate.

Queries in the **Suspended** state can be queued due to a large number of active running queries. These queries also appear in [sys.dm_pdw_waits](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql) with a type of UserConcurrencyResourceType. For information on concurrency limits, see [Performance tiers](performance-tiers.md) and [Resource classes for workload management](resource-classes-for-workload-management.md).

Queries can also wait for other reasons, such as object locks. If your query is waiting for a resource, see [Investigating queries waiting for resources][Investigating queries waiting for resources] further down in this article.

To simplify the lookup of a query in the [sys.dm_pdw_exec_requests](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql) table, use [LABEL][LABEL] to assign a comment to your query, which can then be looked up in the sys.dm_pdw_exec_requests view.

```sql
-- Query with Label
SELECT *
FROM sys.tables
OPTION (LABEL = 'My Query');

-- Find a query with the Label 'My Query'
-- Use brackets when querying the label column, as it is a key word
SELECT *
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'My Query';
```

### <a name="step-2-investigate-the-query-plan"></a>STEP 2: Investigate the query plan

Use the Request ID to retrieve the query's distributed SQL (DSQL) plan from [sys.dm_pdw_request_steps][sys.dm_pdw_request_steps].

```sql
-- Find the distributed query plan steps for a specific query.
-- Replace request_id with value from Step 1.

SELECT *
FROM sys.dm_pdw_request_steps
WHERE request_id = 'QID####'
ORDER BY step_index;
```

When a DSQL plan is taking longer than expected, the cause can be a complex plan with many DSQL steps, or just one step taking a long time. If the plan has many steps with several move operations, consider optimizing your table distributions to reduce data movement. The [Table distribution][Table distribution] article explains why data must be moved to solve a query, and explains some distribution strategies to minimize data movement.

To investigate further details about a single step, check the *operation_type* column of the long-running query step and note the **step index**:

* Proceed with Step 3a for **SQL operations**: OnOperation, RemoteOperation, ReturnOperation.
* Proceed with Step 3b for **Data movement operations**: ShuffleMoveOperation, BroadcastMoveOperation, TrimMoveOperation, PartitionMoveOperation, MoveOperation, CopyOperation.

### <a name="step-3a-investigate-sql-on-the-distributed-databases"></a>Step 3a: Investigate SQL on the distributed databases

Use the Request ID and the Step Index to retrieve details from [sys.dm_pdw_sql_requests][sys.dm_pdw_sql_requests], which contains execution information of the query step on all of the distributed databases.

```sql
-- Find the distribution run times for a SQL step.
-- Replace request_id and step_index with values from Step 1 and 3.

SELECT *
FROM sys.dm_pdw_sql_requests
WHERE request_id = 'QID####' AND step_index = 2;
```

When the query step is running, [DBCC PDW_SHOWEXECUTIONPLAN][DBCC PDW_SHOWEXECUTIONPLAN] can be used to retrieve the SQL Server estimated plan from the SQL Server plan cache for the step running on a particular distribution.

```sql
-- Find the SQL Server execution plan for a query running on a specific SQL Data Warehouse Compute or Control node.
-- Replace distribution_id and spid with values from previous query.

DBCC PDW_SHOWEXECUTIONPLAN(1, 78);
```

### <a name="step-3b-investigate-data-movement-on-the-distributed-databases"></a>Step 3b: Investigate data movement on the distributed databases

Use the Request ID and the Step Index to retrieve information about a data movement step running on each distribution from [sys.dm_pdw_dms_workers][sys.dm_pdw_dms_workers].

```sql
-- Find the information about all the workers completing a Data Movement Step.
-- Replace request_id and step_index with values from Step 1 and 3.

SELECT *
FROM sys.dm_pdw_dms_workers
WHERE request_id = 'QID####' AND step_index = 2;
```

* Check the *total_elapsed_time* column to see whether a particular distribution is taking significantly longer than the others for data movement.
* For the long-running distribution, check the *rows_processed* column to see whether the number of rows being moved from that distribution is significantly larger than the others. If so, this finding might indicate skew in your underlying data.

If the query is running, [DBCC PDW_SHOWEXECUTIONPLAN][DBCC PDW_SHOWEXECUTIONPLAN] can be used to retrieve the SQL Server estimated plan from the SQL Server plan cache for the currently running SQL step within a particular distribution.

```sql
-- Find the SQL Server estimated plan for a query running on a specific SQL Data Warehouse Compute or Control node.
-- Replace distribution_id and spid with values from previous query.

DBCC PDW_SHOWEXECUTIONPLAN(55, 238);
```

<a name="waiting"></a>

## <a name="monitor-waiting-queries"></a>Monitor waiting queries

If you discover that your query is not making progress because it is waiting for a resource, here is a query that shows all the resources a query is waiting for.

```sql
-- Find queries
-- Replace request_id with value from Step 1.

SELECT waits.session_id,
       waits.request_id,
       requests.command,
       requests.status,
       requests.start_time,
       waits.type,
       waits.state,
       waits.object_type,
       waits.object_name
FROM sys.dm_pdw_waits waits
  JOIN sys.dm_pdw_exec_requests requests
    ON waits.request_id = requests.request_id
WHERE waits.request_id = 'QID####'
ORDER BY waits.object_name, waits.object_type, waits.state;
```

If the query is actively waiting on resources from another query, then the state will be **AcquireResources**. If the query has all the required resources, then the state will be **Granted**.

## <a name="monitor-tempdb"></a>Monitor tempdb

Tempdb is used to hold intermediate results during query execution. High utilization of the tempdb database can lead to slow query performance. Each node in Azure SQL Data Warehouse has approximately 1 TB of space for tempdb. Below are tips for monitoring tempdb usage and for decreasing tempdb usage in your queries.

### <a name="monitoring-tempdb-with-views"></a>Monitoring tempdb with views

To monitor tempdb usage, first install the [microsoft.vw_sql_requests](https://github.com/Microsoft/sql-data-warehouse-samples/blob/master/solutions/monitoring/scripts/views/microsoft.vw_sql_requests.sql) view from the [Microsoft Toolkit for SQL Data Warehouse](https://github.com/Microsoft/sql-data-warehouse-samples/tree/master/solutions/monitoring). You can then execute the following query to see the tempdb usage for all executed queries:
```sql
-- Monitor tempdb
SELECT
    sr.request_id,
    ssu.session_id,
    ssu.pdw_node_id,
    sr.command,
    sr.total_elapsed_time,
    es.login_name AS 'LoginName',
    DB_NAME(ssu.database_id) AS 'DatabaseName',
    (es.memory_usage * 8) AS 'MemoryUsage (in KB)',
    (ssu.user_objects_alloc_page_count * 8) AS 'Space Allocated For User Objects (in KB)',
    (ssu.user_objects_dealloc_page_count * 8) AS 'Space Deallocated For User Objects (in KB)',
    (ssu.internal_objects_alloc_page_count * 8) AS 'Space Allocated For Internal Objects (in KB)',
    (ssu.internal_objects_dealloc_page_count * 8) AS 'Space Deallocated For Internal Objects (in KB)',
    CASE es.is_user_process
        WHEN 1 THEN 'User Session'
        WHEN 0 THEN 'System Session'
    END AS 'SessionType',
    es.row_count AS 'RowCount'
FROM sys.dm_pdw_nodes_db_session_space_usage AS ssu
    INNER JOIN sys.dm_pdw_nodes_exec_sessions AS es
        ON ssu.session_id = es.session_id AND ssu.pdw_node_id = es.pdw_node_id
    INNER JOIN sys.dm_pdw_nodes_exec_connections AS er
        ON ssu.session_id = er.session_id AND ssu.pdw_node_id = er.pdw_node_id
    INNER JOIN microsoft.vw_sql_requests AS sr
        ON ssu.session_id = sr.spid AND ssu.pdw_node_id = sr.pdw_node_id
WHERE DB_NAME(ssu.database_id) = 'tempdb'
    AND es.session_id <> @@SPID
    AND es.login_name <> 'sa'
ORDER BY sr.request_id;
```

If you have a query that is consuming a large amount of memory, or you have received an error message related to the allocation of tempdb, the cause is often a very large [CREATE TABLE AS SELECT (CTAS)](https://docs.microsoft.com/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](https://docs.microsoft.com/sql/t-sql/statements/insert-transact-sql) statement that fails in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. The most common mitigation is to break your CTAS or INSERT SELECT statement into multiple load statements, so that the data volume does not exceed the 1 TB-per-node tempdb limit. You can also scale your cluster to a larger size, which spreads tempdb across more nodes, reducing the tempdb usage on each individual node.
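As an illustration, here is a minimal sketch of that split-load pattern, assuming a hypothetical large source table `dbo.FactSales` with an integer `OrderDateKey` column to batch on (the table, columns, and boundary values are illustrative, not part of the original article):

```sql
-- Create the target table with the first slice of the data
CREATE TABLE dbo.FactSales_New
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM dbo.FactSales
WHERE OrderDateKey < 20180101;

-- Load the remaining slices as separate, smaller statements so that no single
-- statement exceeds the per-node tempdb limit
INSERT INTO dbo.FactSales_New
SELECT * FROM dbo.FactSales
WHERE OrderDateKey >= 20180101 AND OrderDateKey < 20190101;

INSERT INTO dbo.FactSales_New
SELECT * FROM dbo.FactSales
WHERE OrderDateKey >= 20190101;
```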
## <a name="monitor-memory"></a>Monitor memory

Memory can be the root cause of slow performance and out-of-memory issues. Consider scaling your data warehouse if you find SQL Server memory usage reaching its limits during query execution.

The following query returns SQL Server memory usage and memory pressure per node:

```sql
-- Memory consumption
SELECT
  pc1.cntr_value as Curr_Mem_KB,
  pc1.cntr_value/1024.0 as Curr_Mem_MB,
  (pc1.cntr_value/1048576.0) as Curr_Mem_GB,
  pc2.cntr_value as Max_Mem_KB,
  pc2.cntr_value/1024.0 as Max_Mem_MB,
  (pc2.cntr_value/1048576.0) as Max_Mem_GB,
  pc1.cntr_value * 100.0/pc2.cntr_value AS Memory_Utilization_Percentage,
  pc1.pdw_node_id
FROM
  -- pc1: current memory
  sys.dm_pdw_nodes_os_performance_counters AS pc1
  -- pc2: total memory allowed for this SQL instance
  JOIN sys.dm_pdw_nodes_os_performance_counters AS pc2
    ON pc1.object_name = pc2.object_name AND pc1.pdw_node_id = pc2.pdw_node_id
WHERE
  pc1.counter_name = 'Total Server Memory (KB)'
  AND pc2.counter_name = 'Target Server Memory (KB)'
```

## <a name="monitor-transaction-log-size"></a>Monitor transaction log size

The following query returns the transaction log size on each distribution. If one of the log files is reaching 160 GB, you should consider scaling up your instance or limiting your transaction size.

```sql
-- Transaction log size
SELECT
  instance_name as distribution_db,
  cntr_value*1.0/1048576 as log_file_size_used_GB,
  pdw_node_id
FROM sys.dm_pdw_nodes_os_performance_counters
WHERE
  instance_name like 'Distribution_%'
  AND counter_name = 'Log File(s) Used Size (KB)'
```

## <a name="monitor-transaction-log-rollback"></a>Monitor transaction log rollback

If your queries are failing or taking a long time to proceed, you can check and monitor whether you have any transactions rolling back.

```sql
-- Monitor rollback
SELECT
    SUM(CASE WHEN t.database_transaction_next_undo_lsn IS NOT NULL THEN 1 ELSE 0 END),
    t.pdw_node_id,
    nod.[type]
FROM sys.dm_pdw_nodes_tran_database_transactions t
JOIN sys.dm_pdw_nodes nod ON t.pdw_node_id = nod.pdw_node_id
GROUP BY t.pdw_node_id, nod.[type]
```

## <a name="next-steps"></a>Next steps

To learn more about DMVs, see [System views][System views].

<!--Image references-->

<!--Article references-->
[SQL Data Warehouse best practices]: ./sql-data-warehouse-best-practices.md
[System views]: ./sql-data-warehouse-reference-tsql-system-views.md
[Table distribution]: ./sql-data-warehouse-tables-distribute.md
[Investigating queries waiting for resources]: ./sql-data-warehouse-manage-monitor.md#waiting

<!--MSDN references-->
[sys.dm_pdw_dms_workers]: https://msdn.microsoft.com/library/mt203878.aspx
[sys.dm_pdw_exec_requests]: https://msdn.microsoft.com/library/mt203887.aspx
[sys.dm_pdw_exec_sessions]: https://msdn.microsoft.com/library/mt203883.aspx
[sys.dm_pdw_request_steps]: https://msdn.microsoft.com/library/mt203913.aspx
[sys.dm_pdw_sql_requests]: https://msdn.microsoft.com/library/mt203889.aspx
[DBCC PDW_SHOWEXECUTIONPLAN]: https://msdn.microsoft.com/library/mt204017.aspx
[DBCC PDW_SHOWSPACEUSED]: https://msdn.microsoft.com/library/mt204028.aspx
[LABEL]: https://msdn.microsoft.com/library/ms190322.aspx
55.583916
781
0.795496
hun_Latn
0.999296
eda8d3ff74ed541755c627d9505d6b245104c240
1,840
md
Markdown
site/rover/AP-OVERVIEW/AP-ENTRY/AP-E/AP-E-2/INV-CONTROL/INV-CONTROL-1/CYCLE-P1/CYCLE-P2/CYCLE-R3/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
7
2019-12-06T23:39:36.000Z
2020-12-13T13:26:23.000Z
site/rover/AP-OVERVIEW/AP-ENTRY/AP-E/AP-E-2/INV-CONTROL/INV-CONTROL-1/CYCLE-P1/CYCLE-P2/CYCLE-R3/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
36
2020-01-21T00:17:12.000Z
2022-02-28T03:24:29.000Z
site/rover/AP-OVERVIEW/AP-ENTRY/AP-E/AP-E-2/INV-CONTROL/INV-CONTROL-1/CYCLE-P1/CYCLE-P2/CYCLE-R3/README.md
mikes-zum/docs
2f60f8f79dea5b56d930021f17394c5b9afb86d5
[ "MIT" ]
33
2020-02-07T12:24:42.000Z
2022-03-24T15:38:31.000Z
## Cycle Cost Variation Report (CYCLE.R3)

<PageHeader />

**Form Details**

[ Form Details ](CYCLE-R3-1/README.md)

**Purpose**

The CYCLE.R3 procedure is used to print a cost variation report of all parts counted on a given cycle date. The report is intended to show the dollar and quantity adjustment effect this cycle count will have on inventory. It is sorted by inventory location within part number. This report should be reviewed after counts have been entered and before they are posted to inventory.

**Frequency of Use**

As required.

**Prerequisites**

The cycle counts should have been entered in the [ CYCLE.E ](../../../../../../../../../../rover/AP-OVERVIEW/AP-ENTRY/AP-E/AP-E-2/INV-CONTROL/INV-CONTROL-1/CYCLE-P1/CYCLE-P2/CYCLE-E) procedure.

**Data Fields**

**Part Number**

The number of the part counted.

**Description**

The description as it appears in the Parts file.

**Invloc**

The inventory location at which the count occurred.

**Tag**

The tag number associated with the part number in the inventory location to be counted.

**St**

The current status of this tag. (N = New, C = Counted, P = Posted)

**Count Quantity**

The quantity counted and entered for this part number at the inventory location.

**Beginning Quantity**

The inventory balance as it appears in the Inv file before the count.

**Variance**

The variance between the count quantity and the beginning inventory balance. A negative number indicates a shrinkage in inventory.

**Unit Cost**

The unit cost of this part in the inventory location.

**Variance**

The total cost variance this count will have on the inventory location for this part. This is the amount that will feed to General Ledger through the INVREG file when the cycle count is posted.

<badge text= "Version 8.10.57" vertical="middle" />

<PageFooter />
41.818182
194
0.736413
eng_Latn
0.998079
eda9994b8b9ee8e258d34ba4f47a23dffc5447c4
346
md
Markdown
robodex/README.md
Brunozhon/otter
3693b26af0112a88f6ae87e3cb0bafca8968a2b5
[ "BSD-3-Clause" ]
null
null
null
robodex/README.md
Brunozhon/otter
3693b26af0112a88f6ae87e3cb0bafca8968a2b5
[ "BSD-3-Clause" ]
null
null
null
robodex/README.md
Brunozhon/otter
3693b26af0112a88f6ae87e3cb0bafca8968a2b5
[ "BSD-3-Clause" ]
1
2020-11-01T16:54:49.000Z
2020-11-01T16:54:49.000Z
# Welcome to the Robodex!

## Ideas for some Robocats

If you have an idea for a new Robocat, please add a picture with *at least* some parts, like

![Robocat](robocat.png)

and remember to add it to the index.html file like

```html
<img src="robotname.png" alt="your description" /> <!-- The src attribute's value has to be a *.png file -->
```
23.066667
108
0.696532
eng_Latn
0.976675
edaa1a3c780fed396ff68fcc9e479df62d60da98
2,833
md
Markdown
documentation/index.md
soonhokong/leanprover.github.io
e36c7e2dc0bde7a5bd938c4fdadb6de9bab5abda
[ "MIT" ]
null
null
null
documentation/index.md
soonhokong/leanprover.github.io
e36c7e2dc0bde7a5bd938c4fdadb6de9bab5abda
[ "MIT" ]
null
null
null
documentation/index.md
soonhokong/leanprover.github.io
e36c7e2dc0bde7a5bd938c4fdadb6de9bab5abda
[ "MIT" ]
null
null
null
---
layout: archive
title: "Documentation"
date:
modified:
excerpt:
image:
  feature:
  teaser:
  thumb:
ads: false
---

## Tutorial

- There is an [online tutorial][tutorial-html], which is presented alongside a version of Lean running in your browser.
- You can also read the [pdf version][tutorial-pdf].
- The tutorial includes a [quick reference][quickref].

[tutorial-html]: ../tutorial/index.html
[tutorial-pdf]: ../tutorial/tutorial.pdf
[quickref]: ../tutorial/quickref.pdf

## Libraries

- You can explore the [standard library][standard] directly in the GitHub repository, or through annotated [markdown files][standardmd].
- You can also browse the [homotopy type theory library][hott] directly or through [markdown files][hottmd].

[standard]: https://github.com/leanprover/lean/tree/master/library
[standardmd]: https://github.com/leanprover/lean/blob/master/library/library.md
[hott]: https://github.com/leanprover/lean/tree/master/hott
[hottmd]: https://github.com/leanprover/lean/blob/master/hott/hott.md

## Forum

- You are welcome to join the [Lean user forum][leanuser] on Google Groups.

[leanuser]: https://groups.google.com/forum/#!forum/lean-user

## Publications

- *The Lean Theorem Prover*. [PDF](/files/system.pdf)<br /> [Leonardo de Moura][leo], [Soonho Kong][soonho], [Jeremy Avigad][jeremy], [Floris van Doorn][floris] and [Jakob von Raumer][jakob],<br />25th International Conference on Automated Deduction (CADE-25), Berlin, Germany, 2015.
- *Elaboration in Dependent Type Theory*. [PDF][constr] <br /> [Leonardo de Moura][leo], [Jeremy Avigad][jeremy], [Soonho Kong][soonho] and [Cody Roux][cody]<br /> *preprint*

[leo]: http://research.microsoft.com/en-us/um/people/leonardo
[soonho]: http://www.cs.cmu.edu/~soonhok
[jeremy]: http://www.andrew.cmu.edu/user/avigad
[floris]: http://www.contrib.andrew.cmu.edu/~fpv
[jakob]: http://von-raumer.de/
[cody]: http://www.andrew.cmu.edu/user/croux
[constr]: http://arxiv.org/abs/1505.04324

## Presentations

- [The Lean Theorem Prover (CADE system description)](http://leanprover.github.io/presentations/20150807_CADE) [CADE'25](http://conference.mi.fu-berlin.de/cade-25/home), Berlin, Germany, August 2015
- [Lean CADE Tutorial](http://leanprover.github.io/presentations/20150803_CADE) [CADE'25](http://conference.mi.fu-berlin.de/cade-25/home), Berlin, Germany, August 2015
- [The Lean Theorem Prover](http://leanprover.github.io/presentations/20150717_CICM) [CICM](http://cicm-conference.org/2015/cicm.php), Washington D.C., July 2015
- [The Lean Theorem Prover](http://leanprover.github.io/presentations/20150218_MSR) PL(X) meeting at <a href="http://research.microsoft.com/en-us/groups/rise/">Microsoft Research</a>, Feb 2015
- [Lean mode for Emacs](http://leanprover.github.io/presentations/20150123_lean-mode/lean-mode.pdf)
39.901408
225
0.736675
yue_Hant
0.341186
edab7fcecdafb07bb5d5adf446d2187f9f2836f4
6,754
md
Markdown
desktop-src/DirectWrite/directwrite-glossary.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
552
2019-08-20T00:08:40.000Z
2022-03-30T18:25:35.000Z
desktop-src/DirectWrite/directwrite-glossary.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,143
2019-08-21T20:17:47.000Z
2022-03-31T20:24:39.000Z
desktop-src/DirectWrite/directwrite-glossary.md
citelao/win32
bf61803ccb0071d99eee158c7416b9270a83b3e4
[ "CC-BY-4.0", "MIT" ]
1,287
2019-08-20T05:37:48.000Z
2022-03-31T20:22:06.000Z
---
title: DirectWrite Glossary
description: Glossary page
ROBOTS: NOINDEX, NOFOLLOW
ms.assetid: bf50f374-440b-44f3-a365-47588eaa071f
ms.topic: article
ms.date: 05/31/2018
---

# DirectWrite Glossary

<dl>
<dt><span id="directwrite.directwrite_glossary_bidi__unicode_feature_"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_BIDI__UNICODE_FEATURE_"></span>**BiDi (Unicode feature)**</dt>
<dd>
Pronounced "By-Dye". Text that contains a mixture of words, from different languages, that must be read in different directions. It often requires special code to process. "Arabic and Hebrew are BiDi languages; the text goes right to left, but the numbers go left to right."
</dd>
<dt><span id="directwrite.directwrite_glossary_cleartype"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_CLEARTYPE"></span>**ClearType**</dt>
<dd>
A font display technology that dramatically improves font display resolution so that letters on the computer screen appear smooth, not jagged. ClearType dramatically improves the readability of text on color LCD monitors with a digital interface, such as those found in laptops and high-quality flat-panel desktop displays.
</dd>
<dt><span id="directwrite.directwrite_glossary_direct2d"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_DIRECT2D"></span>**Direct2D**</dt>
<dd>
A hardware-accelerated, immediate-mode, 2-D graphics API that provides high performance and high quality rendering for 2-D geometry, bitmaps, and text. The Direct2D API is designed to interoperate well with existing code that uses GDI, GDI+, or Direct3D.
</dd>
<dt><span id="directwrite.directwrite_glossary_directwrite"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_DIRECTWRITE"></span>**DirectWrite**</dt>
<dd>
A DirectX API that provides improved high-quality text rendering and interoperability with GDI and Direct2D.
</dd>
<dt><span id="directwrite.directwrite_glossary_directx"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_DIRECTX"></span>**DirectX**</dt>
<dd>
An extension of the Microsoft Windows operating system. DirectX technology helps games and other programs use the advanced multimedia capabilities of your hardware.
</dd>
<dt><span id="directwrite.directwrite_glossary_gdi"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_GDI"></span>**GDI**</dt>
<dd>
An executable program that processes graphical function calls from a Windows-based application and passes those calls to the appropriate device driver, which performs the hardware-specific functions that generate output. By acting as a buffer between applications and output devices, GDI presents a device-independent view of the world for the application while interacting in a device-dependent format with the device.
</dd>
<dt><span id="directwrite.directwrite_glossary_glyph"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_GLYPH"></span>**glyph**</dt>
<dd>
The physical representation of a character in a given font. Characters might have many glyphs, with each font on a system potentially defining a different glyph for that character.
</dd>
<dt><span id="directwrite.directwrite_glossary_glyph_composition"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_GLYPH_COMPOSITION"></span>**glyph composition**</dt>
<dd>
The combining of two or more glyphs into a single glyph.
</dd>
<dt><span id="directwrite.directwrite_glossary_glyph_decomposition"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_GLYPH_DECOMPOSITION"></span>**glyph decomposition**</dt>
<dd>
The splitting of a single glyph into multiple glyphs.
</dd>
<dt><span id="directwrite.directwrite_glossary_glyph_run"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_GLYPH_RUN"></span>**glyph run**</dt>
<dd>
A set of glyphs in a specific order, with the same formatting characteristics such as font face, size, weight, and style.
</dd>
<dt><span id="directwrite.directwrite_glossary_hardware-accelerated_text"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_HARDWARE-ACCELERATED_TEXT"></span>**hardware-accelerated text**</dt>
<dd>
Text rendered by a technology that uses hardware acceleration to improve rendering performance.
</dd>
<dt><span id="directwrite.directwrite_glossary_interoperability"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_INTEROPERABILITY"></span>**interoperability**</dt>
<dd>
The ability for two or more APIs to work with and transmit information between one another.
</dd>
<dt><span id="directwrite.directwrite_glossary_kerning"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_KERNING"></span>**kerning**</dt>
<dd>
The spacing applied between specific pairs of letters within a particular word.
</dd>
<dt><span id="directwrite.directwrite_glossary_letterspacing"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_LETTERSPACING"></span>**letterspacing**</dt>
<dd>
The adjustment of the spacing between two characters to create the appearance of even spacing, fit text to a given space, and adjust line breaks.
</dd>
<dt><span id="directwrite.directwrite_glossary_ligature"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_LIGATURE"></span>**ligature**</dt>
<dd>
Two or more characters combined to represent a single typographical character. The modern Latin script uses a few. Other scripts use many ligatures that depend on font and style. Some languages possess mandatory ligatures, for example, Arabic.
</dd>
<dt><span id="directwrite.directwrite_glossary_smart_pointers"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_SMART_POINTERS"></span>**smart pointers**</dt>
<dd>
A class that wraps COM interface pointers and automatically releases the interface object.
</dd>
<dt><span id="directwrite.directwrite_glossary_swashes"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_SWASHES"></span>**swashes**</dt>
<dd>
An ornamental addition to a character that makes for a more stylistic glyph.
</dd>
<dt><span id="directwrite.directwrite_glossary_unicode"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_UNICODE"></span>**Unicode**</dt>
<dd>
A character-encoding standard developed by the Unicode Consortium that represents almost all of the written languages of the world. The Unicode character repertoire has multiple representation forms, including UTF-8, UTF-16, and UTF-32. Most Windows interfaces use the UTF-16 form.
</dd>
<dt><span id="directwrite.directwrite_glossary_windows_presentation_foundation__wpf_"></span><span id="DIRECTWRITE.DIRECTWRITE_GLOSSARY_WINDOWS_PRESENTATION_FOUNDATION__WPF_"></span>**Windows Presentation Foundation (WPF)**</dt>
<dd>
A GUI framework used by the .NET Micro Framework 3.0, Windows Vista, and Microsoft Silverlight. The .NET Micro Framework GUI classes are loosely based on WPF.
</dd>
</dl>
43.857143
419
0.784128
eng_Latn
0.881414
edab896e37999b801dbf6c7833194fccf446c794
78
md
Markdown
src/Doc/MASA.Blazor.Doc/wwwroot/docs/api/MDataFooter.zh-CN.md
masastack-opensource/MASA.Blazor
ea83442d3e05035ef9495ceb40dbe5592df41c91
[ "MIT" ]
1
2021-12-20T08:10:55.000Z
2021-12-20T08:10:55.000Z
src/Doc/MASA.Blazor.Doc/wwwroot/docs/api/MDataFooter.zh-CN.md
TyroneChong/MASA.Blazor
d27785a25f971a7b7c3baa9ce3330987d7aa42d8
[ "MIT" ]
null
null
null
src/Doc/MASA.Blazor.Doc/wwwroot/docs/api/MDataFooter.zh-CN.md
TyroneChong/MASA.Blazor
d27785a25f971a7b7c3baa9ce3330987d7aa42d8
[ "MIT" ]
null
null
null
---
order: 0
title: DataFooter
---

## Props

Props

## Slots

Slots

## Events

Events
3.9
17
0.423077
eng_Latn
0.118209
edabbcfe133d890d9ab1b76addabbf754df0cfd6
28
md
Markdown
README.md
bombinatetech/vdb
8f66065272081062b83f46b2fcecedb52983c34e
[ "BSD-3-Clause" ]
null
null
null
README.md
bombinatetech/vdb
8f66065272081062b83f46b2fcecedb52983c34e
[ "BSD-3-Clause" ]
null
null
null
README.md
bombinatetech/vdb
8f66065272081062b83f46b2fcecedb52983c34e
[ "BSD-3-Clause" ]
null
null
null
# vdb

Super MNESIA Database
9.333333
21
0.785714
kor_Hang
0.638594
edabccd6f730cef344f8d7b8f2d00111f40c140f
276
md
Markdown
README.md
NovoLabs/DeepSpell
26f595ec748f18e4420f2dba36047f812305169c
[ "MIT" ]
243
2016-03-27T14:44:31.000Z
2022-03-21T21:39:41.000Z
README.md
EnterStudios/DeepSpell
4fed120260883e176440acfec0034d04267850a0
[ "MIT" ]
19
2016-12-22T16:02:55.000Z
2020-02-18T11:22:42.000Z
README.md
EnterStudios/DeepSpell
4fed120260883e176440acfec0034d04267850a0
[ "MIT" ]
100
2016-08-04T16:16:20.000Z
2022-02-06T05:57:04.000Z
# DeepSpell

A Deep Learning based Speller

See https://medium.com/@majortal/deep-spelling-9ffef96a24f6#.2c9pu8nlm

Additional details:

I used this AMI to train the system: https://aws.amazon.com/marketplace/pp/B06VSPXKDX

On a p2.xlarge instance (currently at $0.90 per hour).
23
70
0.782609
eng_Latn
0.759853
edac19d13a5f1a5e7f3b1b378072b37448c58e84
16,383
md
Markdown
borrow/recommendations/elibrary-lists/_posts/2019-02-28-elibrary-march-2019.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
borrow/recommendations/elibrary-lists/_posts/2019-02-28-elibrary-march-2019.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
borrow/recommendations/elibrary-lists/_posts/2019-02-28-elibrary-march-2019.md
suffolklibraries/florian
e9d2f66bae79e258839590f1006399394264c978
[ "MIT" ]
null
null
null
---
layout: sidebar-right
title: "Recommended new eBooks and music for March 2019"
date: 2019-02-28
author: lisa-brennan
category: elibrary
tag: elibrary
excerpt: "Check out the best new eBooks and music from our OverDrive and Freegal services."
featured-image: /images/featured/featured-elibrary-march-2019.jpg
featured-alt: "One Minute Later, Blast from the Past, All the Little Lies"
breadcrumb: elibrary
---

We're always adding great titles to our [eLibrary](/elibrary/). We've listed some of our best new eBooks and music to help you choose your next read or listen.

See also:

* [Recommended new eAudiobooks &#x23;3](/new-suggestions/elibrary/new-eaudiobooks-3/)
* [PressReader newspapers and magazines](/elibrary/press-reader/)

## Music

[Set up Freegal &rarr;](/elibrary/freegal/)

### Albums and EPs

<div class="custom-flex-container">
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34234146/1"><img class="custom-constrain-img" src="/images/featured/featured-good-thing.jpg" alt="Good Thing"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34234146/1" class="blue custom-no-underline">Leon Bridges - <cite>Good Thing</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34231651/1"><img class="custom-constrain-img" src="/images/featured/featured-staying-at-tamaras.jpg" alt="Staying at Tamara's"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34231651/1" class="blue custom-no-underline">George Ezra - <cite>Staying at Tamara's</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/16580407/1"><img class="custom-constrain-img" src="/images/featured/featured-the-script.jpg" alt="The Script"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/16580407/1" class="blue custom-no-underline">The Script - <cite>The Script</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34247594/1"><img class="custom-constrain-img" src="/images/featured/featured-the-love-train.jpg" alt="The Love Train"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34247594/1" class="blue custom-no-underline">Meghan Trainor - <cite>The Love Train</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/25678304/1"><img class="custom-constrain-img" src="/images/featured/featured-wrecking-ball.jpg" alt="Wrecking Ball"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/25678304/1" class="blue custom-no-underline">Bruce Springsteen - <cite>Wrecking Ball</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/28381901/1"><img class="custom-constrain-img" src="/images/featured/featured-dido-greatest-hits.jpg" alt="Greatest Hits"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/28381901/1" class="blue custom-no-underline">Dido - <cite>Greatest Hits (Deluxe)</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/31837472/1"><img class="custom-constrain-img" src="/images/featured/featured-exciter.jpg" alt="Exciter"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/31837472/1" class="blue custom-no-underline">Depeche Mode - <cite>Exciter (Deluxe)</cite></a></p></figcaption>
</figure>
<figure class="custom-flex-row-4 pv2">
<a class="white custom-no-underline" href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34148601/1"><img class="custom-constrain-img" src="/images/featured/featured-snoop-dogg-presents-bible-of-love.jpg" alt="Snoop Dogg presents Bible of Love"></a>
<figcaption class="ma0 pa0"><p class="f5 fw6 ma0 pa0 pr2"><a href="https://suffolklibraries.freegalmusic.com/browse/trending/albums/34148601/1" class="blue custom-no-underline">Snoop Dogg - <cite>Snoop Dogg presents Bible of Love</cite></a></p></figcaption>
</figure>
</div>

### Singles

* [Calvin Harris & Rag 'n' Bone Man - <cite>Giant</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34246535/1)
* [Khalid - <cite>Talk</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34247551/1)
* [Kygo & Sandro Cavazza - <cite>Happy Now (R3HAB Remix)</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34247066/1)
* [Little Mix ft. Ty Dolla $ign - <cite>Think About Us</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34247098/1)
* [Silk City, Dua Lipa ft. Diplo & Mark Ronson - <cite>Electricity</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34240810/1)
* [Tom Walker - <cite>Just You and I</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34246379/1)
* [Childish Gambino - <cite>This is America</cite>](https://suffolklibraries.freegalmusic.com/browse/trending/albums/34234578/1)

## eBooks

[Set up OverDrive &rarr;](/elibrary/overdrive/)

![One Minute Later, Blast from the Past, All the Little Lies](/images/featured/featured-elibrary-march-2019.jpg)

### [<cite>Zucked: waking up to the Facebook catastrophe</cite>, by Roger McNamee](https://suffolklibraries.overdrive.com/media/4310885?cid=241300)

> "If you had told Roger McNamee three years ago that he would soon be devoting himself to stopping Facebook from destroying democracy, he would have howled with laughter. He had mentored many tech leaders in his illustrious career as an investor, but few things had made him prouder, or been better for his fund's bottom line, than his early service to Mark Zuckerberg. Still a large shareholder in Facebook, he had every good reason to stay on the bright side. Until he simply couldn't.

> "<cite>Zucked</cite> is McNamee's intimate reckoning with the catastrophic failure of the head of one of the world's most powerful companies to face up to the damage he is doing. It's a story that begins with a series of rude awakenings. First there is the author's dawning realization that the platform is being manipulated by some very bad actors. Then there is the even more unsettling realization that Zuckerberg and Sheryl Sandberg are unable or unwilling to share his concerns, polite as they may be to his face. And then comes Brexit and the election of Donald Trump, and the emergence of one horrific piece of news after another about the malign ends to which the Facebook platform has been put.

> "To McNamee's shock, Facebook's leaders still duck and dissemble, viewing the matter as a public relations problem. Now thoroughly alienated, McNamee digs into the issue, and fortuitously meets up with some fellow travellers who share his concerns, and help him sharpen its focus. Soon he and a dream team of Silicon Valley technologists are charging into the fray, to raise consciousness about the existential threat of Facebook, and the persuasion architecture of the attention economy more broadly – to our public health and to our political order."

### [<cite>First Man In: leading from the front</cite>, by Ant Middleton](https://suffolklibraries.overdrive.com/media/3844656?cid=241300)

> "After 13 years' service in the military, with four years as a Special Boat Service (SBS) sniper, Ant Middleton is the epitome of what it takes to excel. He served in the SBS, the naval wing of the special forces, the Royal Marines and 9 Parachute Squadron Royal, achieving what is known as the 'Holy Trinity' of the UK's Elite Forces. As a point man in the SBS, Ant was always the first man through the door, the first man into the dark, and the first man in harm's way.

> "In this fascinating, exhilarating and revealing book, Ant speaks about the highs and gut-wrenching lows of his life – from the thrill of passing Special Forces Selection to dealing with the early death of his father and ending up in prison on leaving the military – and draws valuable lessons that we can all use in our daily lives."

### [<cite>All the Little Lies</cite>, by Chris Curran](https://suffolklibraries.overdrive.com/media/4500538?cid=241300)

> "One email is all it takes to turn Eve's world upside down. It contains a picture of her true birth mother, Stella, and proves that Eve's entire life with her adoptive parents has been a lie. Now she must unravel the mystery of Stella's dark past. But what Eve finds will force her to take enormous risks, which put her – and her newborn baby – in immediate danger..."

### [<cite>Blast from the Past</cite>, by Cathy Hopkins](https://suffolklibraries.overdrive.com/media/4256273?cid=241300)

> "On a trip of a lifetime to India, Bea is given an unexpected fiftieth birthday present – an hour with a celebrated clairvoyant. Unlucky in love, Bea learns that her true soulmate is still out there ̶ and that he's someone she knew in a past life.

> "Returning home, Bea revisits the men in her life and can't resist looking up a few old lovers – the Good, the Bad and the... well, the others. As Bea connects with the ones that got away, she suspects that her little black book has remained shut for a reason. But one man out there has her in his sights.

> "They say love is blind and maybe Bea just needs an eye test..."

### [<cite>The Doll House</cite>, by Phoebe Morgan](https://suffolklibraries.overdrive.com/media/3318628?cid=241300)

> "Corinne's life might look perfect on the outside, but after three failed IVF attempts it's her last chance to have a baby. And when she finds a tiny part of a doll house outside her flat, it feels as if it's a sign.

> "But as more pieces begin to turn up, Corinne realises that they are far too familiar. Someone knows about the miniature rocking horse and the little doll with its red velvet dress. Someone has been inside her house... How does the stranger know so much about her life? How long have they been watching? And what are they waiting for...?"

### [<cite>Jog On: how running saved my life</cite>, by Bella Mackie](https://suffolklibraries.overdrive.com/media/4056616?cid=241300)

> "Divorced and struggling with deep-rooted mental health problems, Bella Mackie ended her twenties in tears. She could barely find the strength to get off the sofa, let alone piece her life back together. Until one day she did something she had never done of her own free will – she pulled on a pair of trainers and went for a run.

> "That first attempt didn't last very long. But to her surprise, she was back out there the next day. And the day after that. She began to set herself achievable goals – to run 5k in under 30 minutes, to walk to work every day for a week, to attempt 10 push-ups in a row. Before she knew it, her mood was lifting for the first time in years.

> "In <cite>Jog On</cite>, Bella explains with hilarious and unfiltered honesty how she used running to battle crippling anxiety and depression, without having to sacrifice her main loves: booze, cigarettes and ice cream. With the help of a supporting cast of doctors, psychologists, sportspeople and friends, she shares a wealth of inspirational stories, research and tips that show how exercise often can be the best medicine.

> "This funny, moving and motivational book will encourage you to say 'jog on' to your problems and get your life back on track – no matter how small those first steps may be."

### [<cite>Past Tense</cite>, by Lee Child](https://suffolklibraries.overdrive.com/media/4345131?cid=241300)

> "A young couple trying to get to New York City are stranded at a lonely motel in the middle of nowhere. Before long they're trapped in an ominous game of life and death.

> "Meanwhile, Jack Reacher sets out on an epic road trip across America. He doesn't get far. Deep in the New England woods, he sees a sign to a place he has never been - the town where his father was born. But when he arrives he is told no one named Reacher ever lived there. Now he wonders: who's lying?

> "As the tension ratchets up and these two stories begin to entwine, the stakes have never been higher for Reacher."

### [<cite>Fall</cite>, by Candice Fox](https://suffolklibraries.overdrive.com/media/4140959?cid=241300)

> "If Detective Frank Bennett tries hard enough, he can sometimes forget that Eden Archer, his partner in the Homicide Department, is also a moonlighting serial killer...

> "Thankfully their latest case is proving a good distraction. Someone is angry at Sydney's beautiful people – and the results are anything but pretty. On the rain-soaked running tracks of Sydney's parks, a predator is lurking, and it's not long before night-time jogs become a race to stay alive.

> "While Frank and Eden chase shadows, a different kind of danger grows closer to home. Frank's new girlfriend Imogen Stone is fascinated by cold cases, and her latest project – the disappearance of the two Tanner children more than twenty years ago – is leading her straight to Eden's door.

> "And, as Frank knows all too well, asking too many questions about Eden Archer can get you buried as deep as her past..."

### [<cite>Strangeways: a prison officer's story</cite>, by Neil Samworth](https://suffolklibraries.overdrive.com/media/3899503?cid=241300)

> "Neil 'Sam' Samworth spent eleven years working as a prison officer in HMP Manchester, aka Strangeways. A tough Yorkshireman with a soft heart, Sam had to deal with it all – gangsters and gangbangers, terrorists and psychopaths, addicts and the mentally ill. Men who should not be locked up and men who should never be let out.

> "<cite>Strangeways</cite> is a shocking and at times darkly funny account of life in a high security prison. Sam tackles cell fires and self-harmers, and goes head to head with some of the most dangerous men in the country. He averts a Christmas Day riot after turkey is taken off the menu and replaced by fish curry, and stands up to officers who abuse their position. He describes being attacked by prisoners, and reveals the problems caused by radicalisation and the drugs flooding our prisons.

> "As staffing cuts saw Britain's prison system descend into crisis, the stress of the job – the suicides, the inhumanity of the system, and one assault too many – left Sam suffering from PTSD. This raw, searingly honest memoir is a testament to the men and women of the prison service and the incredibly difficult job we ask them to do."

### [<cite>One Minute Later</cite>, by Susan Lewis](https://suffolklibraries.overdrive.com/media/4245574?cid=241300)

> "With a high-flying job, a beautiful apartment and friends whose lives are as happy as her own, Vivienne Shager is living the dream. Then, on the afternoon of Vivi's twenty-seventh birthday, one catastrophic minute changes everything. Forced to move back to the small seaside town where she grew up, Vivi remembers the reasons she left. The secrets, lies and questions that now must be answered before it's too late. But the answers lie thirty years in the past...

> "Shelley Raynor's family home, Deerwood Farm, has always been a special place until darkness strikes at its heart. When Vivi's and Shelley's worlds begin to entwine, it only takes a moment for the truth to unravel all of their lives."
89.038043
705
0.761765
eng_Latn
0.98535
edac9b4743753edcece6baab21ccaeac64d59579
1,698
md
Markdown
articles/cognitive-services/Content-Moderator/samples-rest.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
12
2017-08-28T07:45:55.000Z
2022-03-07T21:35:48.000Z
articles/cognitive-services/Content-Moderator/samples-rest.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
441
2017-11-08T13:15:56.000Z
2021-06-02T10:39:53.000Z
articles/cognitive-services/Content-Moderator/samples-rest.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
27
2017-11-13T13:38:31.000Z
2022-02-17T11:57:33.000Z
---
title: Code samples - Content Moderator, C#
titleSuffix: Azure Cognitive Services
description: Use Azure Cognitive Services Content Moderator samples in your applications through REST API calls.
services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: cognitive-services
ms.subservice: content-moderator
ms.topic: sample
ms.date: 01/10/2019
ms.author: pafarley
ms.openlocfilehash: df0b17509dfb11fb18a591c70e9060973459a24c
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 10/09/2020
ms.locfileid: "73744298"
---
# <a name="content-moderator-rest-samples-in-c"></a>Content Moderator REST code samples in C#

The following list links to code samples built using the Azure Content Moderator API.

- [Image moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageModeration)
- [Text moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/TextModeration)
- [Video moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/VideoModeration)
- [Image reviews](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageReviews)
- [Image jobs](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageJob)

For walkthroughs of these samples, see the [on-demand webinar](https://info.microsoft.com/cognitive-services-content-moderator-ondemand.html).
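Before diving into the linked samples, here is a minimal sketch of the kind of REST call they build on: screening a piece of text with the documented `ProcessText/Screen` endpoint. The region, subscription key, and sample sentence are placeholder assumptions; adapt them to your own Content Moderator resource.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch of a Content Moderator text-screening call.
// "<your-key>" and the westus region are placeholders, not real values.
class TextScreenSketch
{
    static async Task Main()
    {
        using HttpClient http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");

        // Documented v1.0 endpoint; classify=true asks for category scores.
        string uri = "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?classify=true";
        using StringContent body = new StringContent("Is this a crass sentence?", Encoding.UTF8, "text/plain");

        HttpResponseMessage response = await http.PostAsync(uri, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // raw JSON result
    }
}
```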
54.774194
180
0.8351
pol_Latn
0.711572
edacda29a23ecda9152b68aaf22aac7320d8140e
24,497
md
Markdown
articles/cosmos-db/optimize-cost-throughput.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
43
2017-08-28T07:44:17.000Z
2022-02-20T20:53:01.000Z
articles/cosmos-db/optimize-cost-throughput.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
676
2017-07-14T20:21:38.000Z
2021-12-03T05:49:24.000Z
articles/cosmos-db/optimize-cost-throughput.md
KreizIT/azure-docs.fr-fr
dfe0cb93ebc98e9ca8eb2f3030127b4970911a06
[ "CC-BY-4.0", "MIT" ]
153
2017-07-11T00:08:42.000Z
2022-01-05T05:39:03.000Z
---
title: Optimize provisioned throughput cost in Azure Cosmos DB
description: This article explains how to optimize the throughput cost of data stored in Azure Cosmos DB.
author: markjbrown
ms.author: mjbrown
ms.service: cosmos-db
ms.topic: conceptual
ms.date: 08/26/2021
ms.custom: devx-track-csharp
ms.openlocfilehash: ffe0c1ffe4004cc441213d1781a2bb26b54d4d34
ms.sourcegitcommit: 03f0db2e8d91219cf88852c1e500ae86552d8249
ms.translationtype: HT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/27/2021
ms.locfileid: "123029119"
---
# <a name="optimize-provisioned-throughput-cost-in-azure-cosmos-db"></a>Optimize provisioned throughput cost in Azure Cosmos DB
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]

By offering a provisioned throughput model, Azure Cosmos DB delivers predictable performance at any scale. Reserving or provisioning throughput up front eliminates the "noisy neighbor" effect that can hurt performance. You specify exactly the throughput you need, and Azure Cosmos DB guarantees that throughput with an SLA. You can start with a minimum throughput of 400 request units (RU) per second and scale up to tens of millions of requests per second or more.

Each request you run against your Azure Cosmos container or database, such as a read, a write, a query, or a stored procedure, has a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you can issue 10 such queries per second. Any request beyond that is rate limited and must be retried. If you use client drivers, they support automatic retry logic.

You can configure throughput on databases or on containers, and each strategy can help you reduce costs depending on the scenario.

## <a name="optimize-by-provisioning-throughput-at-different-levels"></a>Optimize by provisioning throughput at different levels

* If you provision throughput on a database, all the containers in that database (for example, collections/tables/graphs) can share the throughput based on load. Throughput reserved at the database level is shared unevenly across a specific set of containers, depending on the workload.
* If you configure throughput on a container, that throughput is guaranteed by SLA for that container. Choosing a good logical partition key is essential for spreading load across all the logical partitions of a container. See [Partitioning](partitioning-overview.md) and [Horizontal scaling](partitioning-overview.md) for more details.

Here is some guidance for choosing a provisioned throughput strategy:

**Provision throughput on an Azure Cosmos database (containing a set of containers) if**:

1. You have a few dozen Azure Cosmos containers and want to share throughput across some or all of them.
2. You are migrating to Azure Cosmos DB from a single-tenant database designed to run on IaaS-hosted VMs or on-premises, such as a NoSQL or relational database. You have a large number of collections/tables/graphs and don't want to change your data model. Note that you may have to give up some of the benefits of Azure Cosmos DB if you don't update your data model when migrating from an on-premises database. It's recommended to regularly reassess your data model to optimize performance and cost.
3. You want to absorb unplanned spikes in workloads by pooling throughput at the database level, available to the container experiencing the unexpected spike.
4. Instead of setting specific throughput on individual containers, you prefer to spread aggregate throughput across a set of containers within the database.

**Provision throughput on an individual container if:**

1. You have only a few Azure Cosmos containers. Because Azure Cosmos DB is schema-agnostic, a container can hold items with heterogeneous schemas without forcing clients to create multiple container types, one per entity. Choose this option if merging 10 to 20 containers into a single container makes sense for you. With a 400 RU minimum per container, pooling 10 to 20 containers into one can be more cost-effective.
2. You want to control the throughput of a specific container and get guaranteed, SLA-backed throughput on it.

**Consider a hybrid of the two strategies above:**

1. As mentioned earlier, Azure Cosmos DB lets you mix the two strategies: you can have some containers in an Azure Cosmos database that share the throughput provisioned on the database, and other containers in the same database that each have dedicated provisioned throughput.
2. You can apply the strategies above to get a hybrid configuration in which the database level gets throughput and some containers get dedicated throughput.

As shown in the following table, depending on the API you choose, you can provision throughput at different granularities.

|API|For **shared** throughput, configure |For **dedicated** throughput, configure |
|----|----|----|
|SQL API|Database|Container|
|Azure Cosmos DB API for MongoDB|Database|Collection|
|Cassandra API|Keyspace|Table|
|Gremlin API|Database account|Graph|
|Table API|Database account|Table|

By provisioning throughput at different levels, you can optimize your costs based on the characteristics of your workload. As mentioned earlier, you can programmatically, and at any time, increase or decrease your provisioned throughput for individual containers or collectively for a set of containers. With this elastic scaling that adapts to your workload, you pay only for the throughput you have configured. If your container or set of containers is distributed across multiple regions, the throughput you configure on the container or set of containers is guaranteed to be available in all regions.
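To make the two provisioning levels concrete, here is a minimal sketch using the .NET SDK (Microsoft.Azure.Cosmos v3). The database name, container names, partition key paths, and RU/s figures are illustrative assumptions, not values taken from this article.

```csharp
using Microsoft.Azure.Cosmos;

// .NET 6+ console app (top-level statements). Endpoint and key are placeholders.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");

// Shared throughput: 1,000 RU/s provisioned on the database, shared by every
// container created in it without its own throughput.
Database database = await client.CreateDatabaseIfNotExistsAsync(id: "app-db", throughput: 1000);
Container shared = await database.CreateContainerIfNotExistsAsync(
    id: "events", partitionKeyPath: "/tenantId");

// Dedicated throughput: this container gets its own SLA-backed 400 RU/s,
// independent of the database-level pool.
Container dedicated = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "orders", partitionKeyPath: "/customerId"),
    throughput: 400);
```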
## <a name="optimize-with-rate-limiting-your-requests"></a>Optimiser en limitant le débit de vos requêtes Pour les charges de travail qui ne sont pas affectées par la latence, vous pouvez approvisionner un débit inférieur et permettre ainsi à l’application de gérer la limitation du débit lorsque le débit réel dépasse le débit approvisionné. Le serveur met fin à la requête de manière préventive avec `RequestRateTooLarge` (code d’état HTTP 429) et il retourne l’en-tête `x-ms-retry-after-ms` indiquant la durée, en millisecondes, pendant laquelle l’utilisateur doit attendre avant de réessayer. ```html HTTP Status 429, Status Line: RequestRateTooLarge x-ms-retry-after-ms :100 ``` ### <a name="retry-logic-in-sdks"></a>Logique de nouvelle tentative dans les kits de développement logiciel (SDK) Les kits de développement logiciel (SDK) natifs (.NET/.NET Core, Java, Node.js et Python) interceptent tous implicitement cette réponse, respectent l’en-tête retry-after spécifiée par le serveur, puis relancent la requête. La tentative suivante réussira toujours, sauf si plusieurs clients accèdent simultanément à votre compte. Si plusieurs de vos clients opèrent simultanément et systématiquement au-delà du taux de requête, le nombre de nouvelles tentatives par défaut de 9 ne suffira peut-être pas. Dans ce cas, le client envoie à l’application une exception `RequestRateTooLargeException` avec le code d’état 429. Le nombre de nouvelles tentatives par défaut peut être modifié en définissant les `RetryOptions` sur l’instance ConnectionPolicy. Par défaut, l’`RequestRateTooLargeException` avec le code d’état 429 est retournée après un temps d’attente cumulé de 30 secondes si la requête continue à fonctionner au-dessus du taux de requête. Cela se produit même lorsque le nombre de nouvelles tentatives actuel est inférieur au nombre maximal de nouvelles tentatives, qu’il s’agisse de la valeur par défaut de 9 ou d’une valeur définie par l’utilisateur. [MaxRetryAttemptsOnThrottledRequests](/dotnet/api/microsoft.azure.documents.client.retryoptions.maxretryattemptsonthrottledrequests) a la valeur 3 ; dans ce cas, si une opération de requête est soumise à une restriction de taux car elle dépasse le débit réservé pour le conteneur, l’opération de requête réessaie trois fois avant d’envoyer l’exception à l’application. [MaxRetryWaitTimeInSeconds](/dotnet/api/microsoft.azure.documents.client.retryoptions.maxretrywaittimeinseconds#Microsoft_Azure_Documents_Client_RetryOptions_MaxRetryWaitTimeInSeconds) a la valeur 60 ; dans ce cas, si le délai d’attente de la nouvelle tentative cumulative depuis la première requête dépasse 60 secondes, l’exception est levée. ```csharp ConnectionPolicy connectionPolicy = new ConnectionPolicy(); connectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 3; connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 60; ``` ## <a name="partitioning-strategy-and-provisioned-throughput-costs"></a>Stratégie de partitionnement et coût du débit approvisionné Une bonne stratégie de partitionnement est importante pour optimiser les coûts dans Azure Cosmos DB. Vérifiez dans les métriques de stockage qu’il n’y a pas de partitions asymétriques. Vérifiez dans les métriques de débit qu’il n’y a pas de débit asymétrique. Vérifiez qu’il n’y a pas d’asymétrie dans les clés de partition. Les clés dominantes du stockage sont représentées sous forme de métriques, mais la clé dépendra de votre modèle d’accès aux applications. Il est important de bien choisir la clé de partition logique appropriée. 
Une bonne clé de partition doit avoir les caractéristiques suivantes : * Choisissez une clé de partition qui répartit la charge de travail de manière uniforme sur toutes les partitions et dans le temps. En d’autres termes, évitez une situation où certaines clés gèrent la majorité des données tandis que d’autres clés ont peu voire pas de données du tout. * Choisissez une clé de partition offrant des modèles d'accès répartis uniformément sur les partitions logiques. La charge de travail est raisonnablement répartie entre toutes les clés. En d’autres termes, la majeure partie de la charge de travail ne doit pas se concentrer sur quelques clés spécifiques. * Choisissez une clé de partition avec un large éventail de valeurs. L'idée de base est de répartir les données et l'activité de votre conteneur sur l'ensemble des partitions logiques afin que les ressources de débit et de stockage des données puissent être réparties sur les partitions logiques. Les candidats aux clés de partition peuvent inclure les propriétés qui apparaissent fréquemment en tant que filtre dans vos requêtes. Les requêtes peuvent être efficacement acheminées en incluant la clé de partition dans le prédicat de filtre. Cette stratégie de partitionnement facilite considérablement l’optimisation du débit approvisionné. ### <a name="design-smaller-items-for-higher-throughput"></a>Conception d’éléments plus petits pour un débit plus élevé Les frais de requête ou le coût de traitement de requête d’une opération donnée sont directement liés à la taille de l’élément. Les opérations effectuées sur des éléments volumineux coûteront plus cher que sur des éléments plus petits. ## <a name="data-access-patterns"></a>Modèles d’accès aux données Il est judicieux de séparer logiquement vos données en catégories logiques selon la fréquence à laquelle vous accédez aux données. En classant ces données par ordre d’importance, vous pouvez ajuster le stockage consommé et le débit requis. Selon la fréquence d’accès, vous pouvez placer les données dans des conteneurs distincts (par exemple, des tables, des graphiques et des collections) et ajuster le débit approvisionné sur ces éléments en fonction des besoins de ce segment de données. En outre, si vous utilisez Azure Cosmos DB et savez que vous n’effectuerez pas de recherches selon certaines valeurs de données ou que vous y accéderez rarement, vous devez stocker les valeurs compressées de ces attributs. Cette méthode vous permet de réaliser des économises au niveau de l’espace de stockage, de l’espace d’index et du débit approvisionné, entraînant ainsi une diminution des coûts. ## <a name="optimize-by-changing-indexing-policy"></a>Optimiser en modifiant la stratégie d’indexation Par défaut, Azure Cosmos DB indexe automatiquement chaque propriété de chaque enregistrement. Cette stratégie vise à faciliter le développement et à garantir d’excellentes performances dans différents types de requêtes ad-hoc. Si vos enregistrements volumineux contiennent des milliers de propriétés, mieux vaut éviter de payer le débit qu’entraînerait l’indexation de chaque propriété, surtout si vos recherches se limitent à 10 ou 20 de ces propriétés. À mesure que vous vous approchez de votre charge de travail finale, nous vous recommandons d’optimiser votre stratégie d’indexation. Vous trouverez plus d’informations sur la stratégie d’indexation Azure Cosmos DB [ici](index-policy.md). 
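As a minimal sketch of the scoped indexing recommended above (again with the .NET SDK v3; the container name, partition key, and property paths are illustrative assumptions), you might exclude everything by default and index only the handful of properties you actually filter on:

```csharp
using Microsoft.Azure.Cosmos;

// Placeholders; reuse your real client and database instead.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
Database database = client.GetDatabase("app-db");

// Illustrative: index only the two properties used in query filters.
ContainerProperties props = new ContainerProperties(id: "telemetry", partitionKeyPath: "/deviceId");
props.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
props.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });           // opt out of everything...
props.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/deviceId/?" });  // ...then opt back in
props.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/timestamp/?" });

Container container = await database.CreateContainerIfNotExistsAsync(props, throughput: 400);
```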
## <a name="monitoring-provisioned-and-consumed-throughput"></a>Surveillance du débit approvisionné et consommé Vous pouvez surveiller le nombre total d’unités de requête approvisionnées, le nombre de demandes de limitation de taux, ainsi que le nombre d’unités de requête que vous avez consommées dans le portail Azure. L’image suivante montre un exemple de métrique d’utilisation : :::image type="content" source="./media/optimize-cost-throughput/monitoring.png" alt-text="Surveiller les unités de requête dans le Portail Azure"::: Vous pouvez également définir des alertes pour vérifier si le nombre de demandes de limitation de taux dépasse un seuil spécifique. Consultez l’article [Comment surveiller un compte Azure Cosmos DB](use-metrics.md) pour plus d’informations. Ces alertes peuvent envoyer un e-mail aux administrateurs de compte ou appeler un Webhook HTTP personnalisé ou une fonction Azure afin d’augmenter automatiquement le débit approvisionné. ## <a name="scale-your-throughput-elastically-and-on-demand"></a>Faire évoluer votre débit en toute flexibilité et à la demande Dans la mesure où vous êtes facturé selon le débit approvisionné, adapter le débit approvisionné à vos besoins peut vous aider à éviter les frais qu’entraîne le débit inutilisé. Vous pouvez à tout moment ajuster à la hausse ou à la baisse votre débit approvisionné, en fonction de vos besoins. Si vos besoins en matière de débit sont très prévisibles, vous pouvez utiliser Azure Functions et avoir recours à un déclencheur à minuterie pour [augmenter ou réduire le débit selon un calendrier](scale-on-schedule.md). * La surveillance de la consommation de vos unités de requête et de vos demandes de limitation de taux peut révéler que vous n’avez pas besoin d’un débit approvisionné constant tout au long de la journée ou de la semaine. Votre trafic sera peut-être moindre pendant la nuit ou le week-end. En utilisant le portail Azure, les kits de développement logiciel Azure Cosmos DB natifs ou des API REST, vous pouvez faire évoluer votre débit approvisionné à tout moment. L’API REST Azure Cosmos DB fournit des points de terminaison pour mettre à jour par programmation le niveau de performance de vos conteneurs, ce qui permet d’ajuster facilement le débit à partir de votre code en fonction de l’heure de la journée ou du jour de la semaine. L’opération se déroule sans interruption du service et généralement en moins d’une minute. * Faites évoluer le débit lorsque vous ingérez des données dans Azure Cosmos DB, par exemple, lors de la migration de données. Une fois la migration effectuée, vous pouvez réduire le débit approvisionné pour gérer votre solution revenue à un état stable. * N’oubliez pas que la facturation est appliquée par tranche horaire : vous ne réaliserez donc aucune économie si vous modifiez votre débit approvisionné plus d’une fois par heure. ## <a name="determine-the-throughput-needed-for-a-new-workload"></a>Déterminer le débit nécessaire pour une nouvelle charge de travail Pour déterminer le débit approvisionné d’une nouvelle charge de travail, vous pouvez procéder comme suit : 1. Effectuez une évaluation approximative initiale à l’aide de l’outil Capacity Planner et ajustez vos estimations avec l’explorateur Azure Cosmos DB dans le portail Azure. 2. Il est recommandé de créer les conteneurs avec un débit plus élevé que prévu puis de diminuer ce débit en fonction des besoins. 3. 
Il est recommandé d’utiliser un des kits de développement logiciel (SDK) Azure Cosmos DB natifs pour tirer parti des nouvelles tentatives automatiques avec les demandes à taux limité. Si vous utilisez une plateforme non prise en charge et l’API REST Azure Cosmos DB, implémentez votre propre stratégie de nouvelle tentative à l’aide de l’en-tête `x-ms-retry-after-ms`. 4. Assurez-vous que votre code d’application est capable de gérer l’échec de toutes les tentatives. 5. Vous pouvez configurer des alertes à partir du portail Azure afin de recevoir des notifications en cas de limitation du taux. Vous pouvez commencer par des limites conservatrices comme 10 requêtes à taux limité au cours des 15 dernières minutes, puis basculer vers des règles plus strictes une fois vous avez estimé votre consommation réelle. Vous pouvez également utiliser des limites de taux occasionnelles pour indiquer que vous testez différentes limites définies et que c’est exactement ce que vous voulez faire. 6. Utilisez la surveillance pour comprendre le fonctionnement de votre modèle de trafic et déterminer si vous avez besoin d’ajuster dynamiquement votre provisionnement de débit sur la journée ou sur une semaine. 7. Comparez régulièrement votre débit approvisionné et votre taux consommé pour vérifier que vous n’avez pas approvisionné plus de conteneurs et de bases de données que nécessaire. À des fins de contrôle, il est judicieux d’avoir un débit légèrement surapprovisionné. ### <a name="best-practices-to-optimize-provisioned-throughput"></a>Meilleures pratiques pour optimiser le coût du débit approvisionné Les étapes suivantes vous aider à rendre vos solutions hautement évolutives et économiques lors de l’utilisation d’Azure Cosmos DB. 1. Si vous avez considérablement surapprovisionné le débit sur les conteneurs et les bases de données, vous devez comparer les unités de requête approvisionnées et les unités de requêtes consommées afin d’ajuster les charges de travail. 2. Une méthode permettant d’estimer la quantité de débit réservé requis par votre application consiste à enregistrer les frais d’unité de requête associés à l’exécution des opérations courantes sur un élément représentatif utilisé par votre application (un conteneur ou une base de données Azure Cosmos), puis à évaluer le nombre d’opérations que vous prévoyez d’effectuer chaque seconde. Veillez à mesurer et à inclure également les requêtes courantes et leur utilisation. Pour savoir comment estimer le coût des RU de requêtes par programme ou à l’aide du portail, voir [Optimiser le coût des requêtes](./optimize-cost-reads-writes.md). 3. Une autre façon d’évaluer les opérations et leur coût en unités de requête consiste à activer les journaux d’activité Azure Monitor afin d’obtenir la répartition par opération/durée et les frais de chaque requête. Azure Cosmos DB applique des frais de requête pour chaque opération : ainsi, les frais de chaque opération peuvent être consignés dans la réponse à des fins d’analyse ultérieure. 4. Vous pouvez augmenter ou réduire en toute flexibilité le débit approvisionné pour répondre aux besoins de votre charge de travail. 5. Vous pouvez ajouter et supprimer des régions associées à votre compte Azure Cosmos selon vos besoins et contrôler ainsi vos coûts. 6. Assurez-vous que vos données et charges de travail sont réparties de façon uniforme entre les partitions logiques de vos conteneurs. Si cette répartition des partitions n’est pas uniforme, vous risquez d’approvisionner un débit supérieur à ce qui est nécessaire. 
Si vous constatez que la répartition n’est pas uniforme, nous vous recommandons de redistribution la charge de travail uniformément entre les partitions ou de repartitionner les données. 7. Si vous avez de nombreux conteneurs qui ne nécessitent pas de contrats de niveau de service, vous pouvez utiliser l’offre basée sur la base de données pour les cas où les contrats de niveau de service avec débit par conteneur ne s’appliquent pas. Vous devez identifier les conteneurs Azure Cosmos que vous souhaitez migrer vers l’offre de débit de niveau base de données inférieure puis migrer ces conteneurs à l’aide d’une solution basée sur des flux de modification. 8. Vous pouvez utiliser « Cosmos DB Free Tier » (gratuit pendant un an), Try Cosmos DB (jusqu’à trois régions) ou l’émulateur téléchargeable Cosmos DB pour les scénarios de développement et de test. En utilisant ces options dans un environnement de développement et de test, vous pouvez considérablement réduire vos coûts. 9. Vous pouvez effectuer d’autres optimisations spécifiques à votre charge de travail, par exemple augmenter la taille des lots, équilibrer la charge de travail des lectures dans plusieurs régions, et dédupliquer des données, le cas échéant. 10. Grâce à la capacité réservée Azure Cosmos DB, vous pouvez bénéficier de remises pouvant atteindre 65 % pendant trois ans. Le modèle de capacité réservée Azure Cosmos DB gère en amont les unités de requête qui seront nécessaires au fil du temps. Les remises sont accordées de telle sorte que plus vous utilisez des unités de requête sur longue période, plus votre remise sera importante. Ces remises sont appliquées immédiatement. Les unités de requête utilisées au-delà de vos valeurs approvisionnées sont facturées en fonction du coût de la capacité non réservé. Consultez la section [Capacité réservée Cosmos DB](cosmos-db-reserved-capacity.md)) pour plus d’informations. Vous pouvez également acheter une capacité réservée afin de réduire encore davantage le coût de votre débit approvisionné. ## <a name="next-steps"></a>Étapes suivantes Pour continuer à développer vos connaissances sur l’optimisation des coûts dans Azure Cosmos DB, consultez les articles suivants : * Vous tentez d’effectuer une planification de la capacité pour une migration vers Azure Cosmos DB ? Vous pouvez utiliser les informations sur votre cluster de bases de données existantes pour la planification de la capacité. * Si vous ne connaissez que le nombre de vCore et de serveurs présents dans votre cluster de bases de données existantes, lisez l’article sur l’[estimation des unités de requête à l’aide de vCore ou de processeurs virtuels](convert-vcore-to-request-unit.md) * Si vous connaissez les taux de requêtes typiques de la charge de travail actuelle des bases de données, consultez [Estimation des unités de requête avec Azure Cosmos DB Capacity Planner](estimate-ru-with-capacity-planner.md) * En savoir plus sur l’[optimisation pour le développement et le test](optimize-dev-test.md) * En savoir plus sur les [factures Azure Cosmos DB](understand-your-bill.md) * En savoir plus sur l’[optimisation du coût de stockage](optimize-cost-storage.md) * En savoir plus sur l’[optimisation du coût des lectures et écritures](optimize-cost-reads-writes.md) * En savoir plus sur l’[optimisation du coût des requêtes](./optimize-cost-reads-writes.md) * En savoir plus sur l’[optimisation du coût des comptes Azure Cosmos sur plusieurs régions](optimize-cost-regions.md)
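As a companion to the elastic-scaling guidance earlier in this article, here is a minimal sketch (.NET SDK v3; container name and RU/s values are illustrative assumptions) of reading a container's provisioned throughput and adjusting it on demand:

```csharp
using Microsoft.Azure.Cosmos;

// Placeholders; reuse your real client instead.
CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
Container container = client.GetDatabase("app-db").GetContainer("orders");

// Illustrative: scale the container up before a batch import, then back down.
int? current = await container.ReadThroughputAsync();   // currently provisioned RU/s (null if shared)
await container.ReplaceThroughputAsync(4000);           // scale up for the import window
// ... run the import ...
await container.ReplaceThroughputAsync(current ?? 400); // restore the previous value (or a floor)
```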
128.256545
831
0.809854
fra_Latn
0.997083
edad670814739b71aba9157ff46639f17998e7e0
3,734
md
Markdown
06_git.md
jorgeav527/EXP_programacion_ing_civil
4a18059dc25db3ba2ec471baf237286ec75624e0
[ "FTL", "Linux-OpenIB" ]
null
null
null
06_git.md
jorgeav527/EXP_programacion_ing_civil
4a18059dc25db3ba2ec471baf237286ec75624e0
[ "FTL", "Linux-OpenIB" ]
null
null
null
06_git.md
jorgeav527/EXP_programacion_ing_civil
4a18059dc25db3ba2ec471baf237286ec75624e0
[ "FTL", "Linux-OpenIB" ]
null
null
null
### Git && GitHub

- Our last topic: GIT and GITHUB.
- Suppose we decide to build our break-test program and need somewhere to keep the project, so we decide to save it in folders and later upload it to the cloud. Now, here is the typical life of my project on my computer: we have version 1 of the project; then a friend hears about it and wants to modify it, so we get v2; we know our version is good, but someday, for our thesis or our professional work, we will have to add the NTP requirements, so we create the NTP version; some time later we want to add a database, so we create v3 without modifying version 1; then we merge in our friend's version and fix a couple of errors from version 3, creating the "final final" v4; and finally, two years later, assuming we never reformatted our computer, we decide to use version 4 to build our NTP version. In the best case the folder layout ends up looking like this: complete chaos.

### What is Git?

- This is where a version control system comes in. GIT is a tool that helps us bring order to all these versions, fixes, and modifications. It gives us a complete history of every point in time in the life of our project, helps us create branches and, when everything is ready, merge them safely and efficiently, and it keeps us efficient and agile as the project scales.
- Suppose we have our files: we initialize git and create a point in time, giving it a nice name describing the project; then we add documentation and create another point in time; next we create the NTP branch to add the NTP requirements; then we fix some errors reported by our friend; we add a database to the project on the ntp branch; when everything is ready we merge the ntp branch into the master branch, creating another point; and finally we add testing to make sure the project works correctly.
- Now we need a way to upload it to the cloud. This is possible thanks to GitHub: at any point in the life of our project we can save it and keep the entire history hosted in a repository on the internet.
- GITHUB is the repository host par excellence. Beyond storing our project, it gives us a way to collaborate as a team in real time, merge other people's ideas into our project, and automate the integration of new functionality while monitoring the changes made in new versions of the project.
- Let's look at an analogy for the workflow GIT and GITHUB propose. First, imagine our project is our thesis and the project files are Volume I, Volume II, and Volume III. With one command we put our volumes into a box and review whether the changes we have made in revision N16 of our thesis are correct; with another command we package them up, creating a point in time; with yet another command we upload it as a repository to the university, which checks whether the changes are compatible with previous versions. Then the advisor requests a copy from the university and, with another command, takes it home to review; he adds his changes and suggestions, saves them the same way the student did, and sends them back to the university, which registers the changes and notifies the student. The student then has two options: incorporate the changes made by the advisor, or rework them in their own way.
- Now let's look at a real example of what we just saw...
219.647059
1,026
0.811194
spa_Latn
0.99977
edadf96de849e6c668f2eeed489e4dd817cfb1ec
1,537
md
Markdown
_posts/people-love/18/2021-04-07-fabian.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/18/2021-04-07-fabian.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/18/2021-04-07-fabian.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
---
id: 14778
title: Fabian
date: 2021-04-07T15:25:36+00:00
author: victor
layout: post
guid: https://ukdataservers.com/fabian/
permalink: /04/07/fabian
tags:
  - show love
  - unspecified
  - single
  - relationship
  - engaged
  - married
  - complicated
  - open relationship
  - widowed
  - separated
  - divorced
  - Husband
  - Wife
  - Boyfriend
  - Girlfriend
category: Guides
---

* some text
{: toc}

## Who is Fabian

1950s teen idol who was promoted on American Bandstand and who had many Billboard-charting singles, including "Tiger"/"Mighty Cold," which was a #3 hit.

## Prior to Popularity

He was discovered after music executives scoured South Philadelphia for an attractive person to turn into a teen idol.

## Random data

He signed with Chancellor Records, recording the songs "I'm a Man" and "Hound Dog Man."

## Family & Everyday Life of Fabian

He married Kathleen Regan in 1966, Kate Netter Forte in 1980, and Andrea Patrick, a former Bituminous Coal Queen and Miss Pennsylvania USA, in 1998. He has a son named Christian and a daughter named Julie.

## People Related With Fabian

Frankie Avalon was a witness to Fabian's talents early on in his career.
19.455696
205
0.592713
eng_Latn
0.99775
edae895b9d79c7057e4716f4dd93e0e6c63e851d
19,418
md
Markdown
content/news/tower-garden.md
klueless-io/gardensonata.com.au
43bf2bfce4e5cfffdf136de04e0e5ac32658efbe
[ "MIT" ]
null
null
null
content/news/tower-garden.md
klueless-io/gardensonata.com.au
43bf2bfce4e5cfffdf136de04e0e5ac32658efbe
[ "MIT" ]
null
null
null
content/news/tower-garden.md
klueless-io/gardensonata.com.au
43bf2bfce4e5cfffdf136de04e0e5ac32658efbe
[ "MIT" ]
1
2019-02-25T14:32:29.000Z
2019-02-25T14:32:29.000Z
--- title: "Tower Garden" date: 2018-07-27T00:00:00+10:00 draft: false author: "David Cruwys" image: /img/news/tower-garden-0-feature.jpg aliases: [ 'blogs/news/tower-garden' ] tags: [] categories: [] --- <p>The tower garden system is a vertical, aeroponic tower structure that allows you to grow up to 20 plants such as fruits, vegetables, spices, or flowers, using the one unit. <br><br> The tower takes up very little room and allows the plants to be cultivated in only three square feet. The system can be placed indoors or outdoors.</p> <p> The entire system relies on only water and provided nutrients for growth and does not utilize soil.</p> <p>The tower garden system successfully grows plants up to three times faster than other growing methods. See also: <a href="https://gardensonata.com.au/blogs/news/how-and-why-indoor-vertical-gardening" title="Indoor Vertical Gardening "><strong>Indoor Vertical Garden</strong></a>.<br><br> Cultivars can also expect a greater harvest yield of up to 30 per cent. Normally, plants are ready to harvest only a few weeks after planting. Don’t forget to visit here: <a href="https://gardensonata.com.au/">gardensonata.com.au<br><br><br></a></p> <h3>Making Tower Garden</h3> <p style="float: right;"><a href="http://www.towergarden.com/content/dam/towergarden/resources/PDF/Resources/72945_Guide_Web.pdf" target="_blank" rel="noopener noreferrer"><img alt="" src="/img/news/tower-garden-1.jpg" style="float: right; margin: 10px;"></a>Tower gardens are a great way to make every inch of your land count. If you have a <a href="https://gardensonata.com.au/blogs/news/small-space-gardening" title="Small Space Gardening "><strong>small living space</strong></a>, you can grow herbs, flowers, and other plants in a tower garden. <br><br>Use a bucket or <a href="https://amzn.to/2OGmZyp" target="_blank" title="LCG Floral 16G110 Agave in A Terra Cotta Pot " rel="noopener noreferrer"><strong>terracotta pot</strong></a> to make the tower garden's base, and then heighten the tower with wire mesh. <br><br>Plant a variety of seeds or seedlings to diversify your garden, and take routine care of it for thriving, healthy plants. <br><br></p> <h2 style="text-align: center;"><strong>Building The Tower</strong></h2> <h3>Fill a bucket or terracotta pot halfway with stones.</h3> <p style="float: left;"><a href="http://www.gardensonata.com.au/"><img alt="" src="/img/news/tower-garden-2.jpg" style="float: left; margin: 10px;"></a>You can gather small stones from around your yard or purchase them from a local plant nursery or online garden stores like <a href="https://gardensonata.com.au">gardensonata.com.au</a>. Continue adding rocks until the container is about one-third to halfway full.<br><br>The size of your bucket or pot can vary depending on whether you grow larger or smaller plants. Your container can be as small as 16 inch (40 cm) wide pot or as large as a five gallon bucket. <a href="http://www.gardensonata.com.au/"><br><br></a></p> <h3>Insert a wire mesh cylinder lengthwise into the stones.</h3> <p>The length ration between your container and the cylinder should be between 1:2-1:3, with the cylinder being longer. <br><br> Make sure that the bottom portion of the wire mesh is completely covered by the stones. 
Wiggle the cylinder around to check for looseness.<br><br><sup> </sup></p> <h3>Add peat or sphagnum moss into the wire mesh cylinder.</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-3.jpg" style="float: none; margin: 10px auto; display: block;"></p> Moss keeps the soil moist. With peat or <a href="https://amzn.to/2O1OeX1" target="_blank" title="SwansGreen Irish Moss Seeds Sagina Subulata Seeds,Sphagnum Moss " rel="noopener noreferrer"><strong>sphagnum moss</strong></a> on the bottom of your tower garden, you will not have to water as frequently.<br> <p>Fill the cylinder with about 2-3 inches (5-7 cm).<br><br></p> <h3>Layer potting soil on top of the peat moss.</h3> <p>Fill the rest of the mesh cylinder with about 4-5 inches (10-13 cm) of potting soil.</p> <p> Layer this directly on top of the moss. Choose soils that retain moisture and nutrients, like silt or loamy soil</p> <p>You will layer more moss into the tower garden as you add plants later on.<br><br></p> <h2 style="text-align: center;">Adding Plants</h2> <h3>Incorporate flowers, fruits, or herbs into your tower garden.</h3> <p style="float: right;"><img alt="" src="/img/news/tower-garden-4.jpg" style="float: right; margin: 10px;">A full variety of plants can grow well in a tower garden. Edible plants, like <a href="https://gardensonata.com.au/blogs/news/the-herb-garden" title="The Herb Garden "><strong>herbs</strong></a> or fruits/vegetables, can make your garden functional.<br><br>Flowers can add aesthetic beauty to the garden alongside the more practical plants.<br><br>Large edible plants, like tomatoes or cucumbers, need plenty of room to grow. Only plant a few larger plants at a time.<br><br></p> <h3>Choose plants based on where you'll put your garden.</h3> <p>Pick sun-loving plants in areas that receive light almost constantly and shade-living plants in spots with less direct sunlight. <br><br> Place your tower garden in a spot that receives between six and eight hours of sunlight a day, unless you specifically plant a <a href="https://gardensonata.com.au/collections/green-houses" title="green house "><strong>shade garden</strong></a>.<br><br></p> <h3>Plant the tallest plants on the bottom.</h3> <p style="float: left;"><img alt="" src="/img/news/tower-garden-5.jpg" style="float: left; margin: 10px;"><br><br><br>Think about plant size and shape as you plot out plants for your garden.Tall plants can block out the sun from smaller ones if placed on top. <br><br>Check the expected sizes your plants will grow to, and organize your garden accordingly.</p> <h3>Plant seedling roots between the wire mesh.</h3> <p>Scout out the ideal locations for your seedlings. Place them below the seeds planted to establish a strong root system into the garden.</p> <p>Once your plants are secure, add more peat or sphagnum moss into the wire mesh interior.<br><br></p> <h3>Plant seeds into the soil at an appropriate depth.</h3> <p>Poke your seeds between the wire mesh into the soil. Check the packet your seeds came in for the appropriate depth.</p> <p>Avoid adding additional sphagnum moss around your seeds until the plants have time to grow<br><br></p> <h2 style="text-align: center;"><strong>Caring of Tower Gardens</strong></h2> <h3>Make sure your plants are watered at least once per week.</h3> <p>Water your garden weekly or whenever your plants look yellowing or crisp to the touch. Once or twice a week, stick a finger into your tower garden's soil. 
If the soil is dry, your plants need to be watered.</p> <p> </p> <h3>Water your plants with Compost Tea once or twice a month</h3> <p style="float: right;"><br><img alt="" src="/img/news/tower-garden-6.jpg" style="float: right; margin: 10px;">Because your plants are growing in a limited space, you'll need to introduce more nutrients than usual. <br><br>Every other week, use compost tea instead of your usual watering routine.<br><br>Worm castings tea can work as an alternative.</p> <h3></h3> <h3></h3> <h3></h3> <h3></h3> <h3></h3> <h3> <br><br><br><br>Watch for Signs of Disease. </h3> <p>Take note of wilting yellowing/browning, blighted or mildewing plants.</p> <p>Disease can spread quickly in close quarters, so either treat or remove infected plants before your entire garden is weakened.<br><br></p> <h3>Check Periodically for Pests and Weeds. </h3> <p style="float: left;"><img alt="" src="/img/news/tower-garden-7.jpg" style="float: left; margin: 10px;">For the most part, tower gardens have less trouble with invasive plants and insects. This is thanks to the limited soil space and distance from the ground. Inspect your plants once or twice a month for bugs or unidentifiable plants.<br><br>Research the pests that specifically target the plants you chose. If you're growing strawberries, for example, you might keep an eye out for aphids, crickets, and fruit flies.<br><br></p> <h3>Rotate out plants in your tower garden as desired</h3> <p style="float: right;"><a href="https://gardensonata.com.au/"><img alt="" src="/img/news/tower-garden-8.jpg" style="float: right; margin: 10px;"></a><br><br>After you've harvested edible plants and are moving into the winter months, clean out your tower garden until you're ready to plant again next year.<br><br>For the first year, try plants that involve easy maintenance (like flowers). In later seasons, move on to more complicated plants. Don’t forget to visit here: <a href="https://gardensonata.com.au/">gardensonata.com.au<br><br><br><br></a></p> <h2 style="text-align: center;"> <br><strong>List of Tower Garden Plants to Grow </strong> </h2> <p>If you’re just getting started with Tower Garden, you may not be sure what to grow. We ship gourmet lettuce, cherry tomato, beefsteak tomato, cucumber, basil, and eggplant and bell pepper seeds with every Tower Garden. Here’s a little more information about these plants.<br><br><br></p> <h3 style="text-align: center;">Basil</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-9.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p style="text-align: left;">A favorite herb of beginner and experienced gardeners alike, basil is easy to grow and useful in the kitchen. (And let’s not forget its amazing aroma!) Basil grows well indoors and out.<br><br></p> <h3 style="text-align: center;">Chard</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-10.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Recently listed among the world’s most nutrient-dense foods, chard is a versatile, tasty green you can grow indoors or out, regardless of the season<br><br></p> <h3 style="text-align: center;">Cucumbers</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-11.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Nothing says summer like a cucumber! 
And since consistent watering isn’t a problem, Tower Garden cukes will be some of the best you’ve ever tasted.<br><br></p> <h3 style="text-align: center;">Eggplant</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-12.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>If you’re looking for an easy way to grow eggplant, look no further. </p> <p>From the tiny Thai to the big barbarella, you can grow your favorite eggplant varieties with Tower Garden.<br><br></p> <h3 style="text-align: center;">Kale</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-13.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Kale is one of the newer celebrities in the food world. And it’s unbelievably easy to grow with Tower Garden.</p> <p>In fact, you might have trouble keeping up with harvests!<br><br></p> <h3 style="text-align: center;">Lettuce</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-14.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Among the most cost-effective crops you can grow, lettuce flourishes in Tower Garden. <br><br> And it grows quickly, too—it’s often ready for harvest in as little as three weeks!<br>See also: <a href="https://gardensonata.com.au/blogs/news/lettuce-in-your-garden-why-not" title="Lettuce in your Garden? Why not? "><strong>Lettuce in your Garden? Why not?</strong></a></p> <h3 style="text-align: center;">Peppers</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-15.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Sweet or spicy? With Tower Garden, you don’t have to pick. You can grow any variety of pepper you like—as long as you’re prepared for abundant yields!<br><br></p> <h3 style="text-align: center;">Squash</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-16.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>There are many different varieties of squash—and you can grow them all with Tower Garden. <br><br> A popular crop among gardeners everywhere, squash can be quite prolific (and delicious).<br><br></p> <h3 style="text-align: center;">Strawberries</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-17.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Sweet and satisfying, strawberries of any kind are pretty much irresistible.<br><br> Tower Garden strawberries are doubly so—which is probably why some people have dedicated entire Tower Gardens to growing only this flavorful fruit.<br><br></p> <h3 style="text-align: center;">Tomatoes</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-18.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>What’s a garden without a tomato plant or two? With Tower Garden, you can grow any type of tomato you like—from mini, dwarf varieties to monstrous, indeterminate heirlooms<br><br></p> <h2 style="text-align: center;"><strong>Maintaining Your Tower Garden</strong></h2> <p>Check the water level weekly. With hot weather, and with large plants, check the water level at least twice a week.</p> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-19.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p>Check the pH twice a week and follow the instructions to adjust your pH level. Yellowing leaves are an indication that your pH has drifted out of the recommended range. <br><br>Keep the shower cap holes clean and free from debris. 
You can use a toothpick to clean the holes.</p> Keep roots away from the pump. You can trim the roots that may be dangling in the reservoir.<br> <ol></ol> Clean the pump filter monthly. Unplug the pump, pull the pump up through the access port and remove the pump cover. <br><br>Clean with water to remove debris. Or, another technique is to turn off the pump and firmly place a garden hose at the top of the center stem in your shower cap. <br><br>This will blow old root debris out of the filter into the bottom of the tank. Be sure to clean out the debris from the tank bottom or it will clog your pump.<br><br>Keep cool water (85 degrees or less) in the Tower Garden. You can help keep the water cool by utilizing the Tower Garden dolly or placing a mat underneath the reservoir. For more amazing garden equipment just click here: <a href="https://gardensonata.com.au/">gardensonata.com.au</a><br> <ol></ol> <p>Rotate the Tower Garden if it is placed next to a wall where the sun shines on the same part of the Tower Garden every day.</p> <p>For optimal plant uniformity, rotate your Tower Garden a quarter turn in the same direction each day or whenever possible.<br><br>Large plants, such as tomatoes, peppers, green beans, etc., should be maintained as a compact plant or trained up strings or trellises</p> <ol></ol> <ol></ol> Good sunlight is very important for optimal growth of each plant. Keep this in mind when pruning and training the plants in your Tower Garden<br><br><br> <ol></ol> <h2 style="text-align: center;"><strong>Important Safety Information </strong></h2> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-20.jpg" style="float: none; margin: 10px auto; display: block;"></p> <ol> <li> <p>The Tower Garden is an aeroponic growing system. <br><br>Do not stand on the Tower Garden and take care to keep children from playing on the Tower Garden. <br><br></p> </li> <li> <p>Follow the safety instructions included with the Tower Garden pump.<br><br>Do not attempt to plug or unplug the pump in rainy or damp conditions. <br><br></p> </li> <li> <p>When draining the Tower Garden, do not allow the drain pipe to drain over an electrical outlet or extension cord. <br><br>Do not overfill the Tower Garden, which would allow water to run out of the electrical cord opening in the reservoir. <br><br></p> </li> <li> <p>Do not place the Tower Garden where there is the potential for high winds, such as high rise balconies. <br><br></p> </li> <li> <p>Wear rubber gloves when mixing pH+Base, pH-Acid, Tower Tonic A, and Tower Tonic B, in order to protect your hands. <br><br></p> </li> <li> <p><img alt="" src="/img/news/tower-garden-21.jpg" style="float: right; margin: 10px;">Use a large stirring paddle or spoon to mix the nutrients and pH adjusting chemicals in the reservoir. <br><br><strong>DO NOT USE YOUR HANDS</strong>. The solution may be alkaline or acidic and may cause skin irritation.<br><br></p> </li> <li> <p>Read and follow all safety instructions on the pH and Tower Tonic labels.<br><br><br></p> </li> </ol> <h2 style="text-align: center;"> <strong>Clean Up AND Storage</strong> </h2> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-22.jpg" style="float: none; margin: 10px auto; display: block;"></p> <ul> <li> <p>Remove the plants from the Tower Garden by pulling the net pots from the planting ports. </p> </li> <li> <p>Disassemble the tower sections, starting at the top. The bottom section does not need to be removed from the reservoir lid. 
</p> </li> <li> <p>Compost or discard plant material. Clean and save net pots for future use.</p> </li> <li> <p>Unscrew the blue swivel hose from the reservoir lid and pump.</p> </li> <li> <p>Place the tower sections, shower cap and lid, and the pump in the reservoir and fill with warm soapy water.</p> </li> <li> <p>Allow the tower sections to sit in the reservoir for 30 minutes and then clean with water and a sponge. </p> </li> </ul> <p><br>You can store the parts of the Tower Garden in the reservoir until you are ready to grow nutritious fruits and vegetables again.<br><br></p> <h3 style="text-align: center;"> <strong>Tower Garden’s Key Advantages to Traditional Garden</strong>ing</h3> <p style="text-align: left;"><img alt="" src="/img/news/tower-garden-23.jpg" style="float: none; margin: 10px auto; display: block;"></p> <p style="text-align: center;"> </p> <ul> <li> <p>Comes with everything you need to start growing.</p> </li> <li> <p>Compact design fits almost anywhere, including patios, decks, porches, balconies, terraces, or rooftop gardens.</p> </li> <li> <p>Soil-free system means there is no weeding, tilling, kneeling, or getting dirty.</p> </li> <li> <p>No gardening experience is necessary.</p> </li> <li> <p>Grows many fruits and almost any vegetable, herb, or flower—faster than in soil.</p> </li> <li> <p>Uses less than 10% of the water and land involved in traditional gardening.</p> </li> <li> <p>Yields 30% more produce than traditional soil gardening methods.</p> </li> <li> <p>Reduces the need for pesticides, insecticides, or herbicides.</p> </li> <li> <p>Fewer issues with climate, such as heat, cold, drought.</p> </li> <li> <p>Help is just a call or click away.<br><br></p> </li> </ul> <p>You can also check: <a href="https://gardensonata.com.au/blogs/news/hydroponics-alternative-method-for-gardeners" title="Hydroponics:Alternative Methods for Gardeners "><strong>Hydroponics:Alternative Methods for Gardeners</strong></a> for other methods in gardening.</p> <ul></ul> <style type="text/css"><!-- td {border: 1px solid #ccc;}br {mso-data-placement:same-cell;} --></style><style type="text/css"><!-- td {border: 1px solid #ccc;}br {mso-data-placement:same-cell;} --></style>
1,294.533333
19,209
0.734422
eng_Latn
0.985057
edafbc27b2f53e5f76906b06a583a114876d35b9
7,409
md
Markdown
tricks/2021/k3s/apiserver/env.md
itwks/awesome-fenix
3a52f2989aabe7cb725c1e104eb866a4cf482657
[ "Apache-2.0", "CC-BY-4.0" ]
5,128
2020-03-05T04:19:08.000Z
2022-03-31T18:12:09.000Z
tricks/2021/k3s/apiserver/env.md
itwks/awesome-fenix
3a52f2989aabe7cb725c1e104eb866a4cf482657
[ "Apache-2.0", "CC-BY-4.0" ]
288
2020-03-05T05:07:57.000Z
2022-03-31T03:27:40.000Z
tricks/2021/k3s/apiserver/env.md
itwks/awesome-fenix
3a52f2989aabe7cb725c1e104eb866a4cf482657
[ "Apache-2.0", "CC-BY-4.0" ]
597
2020-04-28T08:34:14.000Z
2022-03-31T05:49:55.000Z
# K3S: Preparing the Environment to Start the API Server/Agent

1. Entry point: `cmd/server/main.go::main()`
    1. Registers four reexec start functions (containerd, kubectl, crictl, ctr), which correspond to the four same-named subcommands as well as the agent start subcommand.
    2. Since the argument passed in is `server`, urfave cli executes the server::Run function.
2. Handling the command-line arguments: `pkg/cli/server/server.go::Run()`
    1. Resets the process's command-line arguments to avoid the risk of leaking sensitive information they may contain (for example, being listed by the ps command):
        ```go
        // hide process arguments from ps output, since they may contain
        // database credentials or other secrets.
        gspt.SetProcTitle(os.Args[0] + " server")
        ```
    2. With the help of urfave cli, the arguments passed on the command line are written into the global variable ServerConfig (in the NewServerCommand method). In the `Run()` method, the `Server struct` named `ServerConfig` is converted into the hierarchical `server.Config struct`. The parameters accepted on the command line are:
        ```
        ClusterCIDR    string
        AgentToken     string
        AgentTokenFile string
        Token          string
        TokenFile      string
        ClusterSecret  string
        ServiceCIDR    string
        ClusterDNS     string
        ClusterDomain  string
        // The port which kubectl clients can access k8s
        HTTPSPort int
        // The port which custom k3s API runs on
        SupervisorPort int
        // The port which kube-apiserver runs on
        APIServerPort            int
        APIServerBindAddress     string
        DataDir                  string
        DisableAgent             bool
        KubeConfigOutput         string
        KubeConfigMode           string
        TLSSan                   cli.StringSlice
        BindAddress              string
        ExtraAPIArgs             cli.StringSlice
        ExtraSchedulerArgs       cli.StringSlice
        ExtraControllerArgs      cli.StringSlice
        ExtraCloudControllerArgs cli.StringSlice
        Rootless                 bool
        DatastoreEndpoint        string
        DatastoreCAFile          string
        DatastoreCertFile        string
        DatastoreKeyFile         string
        AdvertiseIP              string
        AdvertisePort            int
        DisableScheduler         bool
        ServerURL                string
        FlannelBackend           string
        DefaultLocalStoragePath  string
        DisableCCM               bool
        DisableNPC               bool
        DisableKubeProxy         bool
        ClusterInit              bool
        ClusterReset             bool
        ClusterResetRestorePath  string
        EncryptSecrets           bool
        StartupHooks             []func(context.Context, <-chan struct{}, string) error
        EtcdDisableSnapshots     bool
        EtcdSnapshotDir          string
        EtcdSnapshotCron         string
        EtcdSnapshotRetention    int
        ```
    3. After the command-line parameters (including some defaults for the agent node and so on) have been initialized, they are passed as a `server.Config` struct to `pkg/server/server.go::StartServer()`.
    4. Starts a goroutine that waits for a signal on the `serverConfig.ControlConfig.Runtime.APIServerReady` channel.
    5. Starts the agent by entering `pkg/agent/run.go::Run()`; see "[Preparing the Environment to Start the K3S Agent]()" for details.
3. Preparing the server environment: `pkg/server/server.go::StartServer()`. This covers a whole series of services, including Etcd (or a Kine-wrapped substitute) and the API server.
    1. Creates the temporary directories and adds the addresses managed by K3S to the NO_PROXY environment variable so that external proxies do not interfere:
        ```go
        setupDataDirAndChdir(&config.ControlConfig);
        setNoProxyEnv(&config.ControlConfig);
        ```
    2. Calls `pkg/daemons/control/server.go::Server()` to start the API server:
        1. Sets the default values: the `defaults()` method initializes the [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range, the default API server ports (`6443`/`6444`), and the default data directory (`/var/lib/rancher/k3s/server`).
        2. The `prepare()` method begins preparing the server environment, including:
            1. Setting up the default certificates, initializing the CA and key file paths (paths only, not contents) for the Client (Admin, Controller, CloudController, Scheduler, KubeAPI, KubeProxy, K3SController, Etcd), Server, RequestHeader, and so on.
            2. Calling the `cluster.Bootstrap()` method (`pkg/cluster/bootstrap.go`) to initialize the cluster's backing store. Rancher's cluster package decouples Kubernetes from its direct dependency on Etcd, so K3S can use either the embedded managed-database mode or unmanaged databases such as SQLite, MySQL, or PostgreSQL.
                > **managed database** is one whose lifecycle we control - initializing the cluster, adding/removing members, taking snapshots, etc
            3. Initializing the certificates, ServiceAccount, Users (i.e. the server and node passwords), EncryptedNetworkInfo (i.e. the IPSEC key), EncryptionConfig, and token information:
                ```go
                if err := genCerts(config, runtime); err != nil {
                    return err
                }
                if err := genServiceAccount(runtime); err != nil {
                    return err
                }
                if err := genUsers(config, runtime); err != nil {
                    return err
                }
                if err := genEncryptedNetworkInfo(config, runtime); err != nil {
                    return err
                }
                if err := genEncryptionConfig(config, runtime); err != nil {
                    return err
                }
                if err := readTokens(runtime); err != nil {
                    return err
                }
                ```
            4. Calling the `cluster.Start()` method to start the cluster's backing store. A managed database is started and tested at this point; for an unmanaged database, only the [Kine](https://github.com/k3s-io/kine) listener is started in the `startStorage()` method, exposing a UDS address named `unix://kine.sock` by default.
        3. Via the `setupTunnel()` method, uses Rancher's [Remote Dialer](https://github.com/rancher/remotedialer) to create a communication tunnel and assigns it to `k8s.io/kubernetes/cmd/kube-apiserver/app.DefaultProxyDialerFn`, so that Kubernetes can later use it for communication.
        4. Via the `apiServer()` method, converts a batch of parameters from `config.Control` back into kube-apiserver command-line arguments. Here `executor.APIServer` is called, which is the real hand-off from K3S's wrapper code into Kubernetes. The call does not fork a child process; instead it starts a new goroutine and invokes the Command of [Cobra](https://github.com/spf13/cobra) (the command-line framework Kubernetes uses) directly in-process. The Command's actual start function lives in `k8s.io/kubernetes/cmd/kube-apiserver/app/server.go::Run()`. For a detailed analysis of this part, see: [K8S API server bootstrap](bootstrap.md).
    3. In the `router()` method, sets up HTTP endpoints for the series of certificates generated in `prepare()`, the serverConfig settings, database information, static resources, the ping address, and so on, and returns them in the form of a [Gorilla Mux Router](https://github.com/gorilla/mux). This step effectively builds an embedded HTTP server.
    4. Waits for the `config.ControlConfig.Runtime.APIServerReady` signal and, once received, starts the `runControllers()` method.
    5. Prints the command and address for an agent to join the server, e.g.:
        ```
        Node token is available at /var/lib/rancher/k3s/server/token
        To join node to cluster: k3s agent -s https://172.19.45.5:6443 -t ${NODE_TOKEN}
        ```
    6. Writes the ServerCA, ClientAdminCert, and ClientAdminKey to disk in YAML format; the default location is `/etc/rancher/k3s/k3s.yaml`.
4. Similarly to how serverConfig is built, constructs an agentConfig object and calls the `pkg/agent/run.go::Run()` method to create the node agent:
    1. The `syssetup.Configure()` method checks the system's preconditions for running the agent, verifying the Linux kernel driver modules (present under `/sys/module/`) and the kernel parameter mappings (readable and writable under `/proc/sys/`).
    2. The `pkg/agent/proxy/apiproxy.go::NewAPIProxy()` method creates the proxy object (only initializes it), and `clientaccess.ParseAndValidateTokenForUser()` creates the client access token.
    3. Control transfers to the `run()` method, and the node agent starts running:
        1. `setupCriCtlConfig()` and `containerd.Run()`: set up the CRI configuration. K3S's default CRI is containerd; DockerShim can be requested in the configuration. Subsequent communication with the CRI happens over a UDS address that defaults to `unix:///run/k3s/containerd/containerd.sock`. This address is written to `/var/lib/rancher/k3s/agent/etc/crictl.yaml` and the configuration to `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`; a containerd child process is then forked via exec.Command, and if an `/images` directory exists inside the data directory (it does not by default), the images inside it are automatically preloaded into the CRI.
        2. `flannel.Prepare()` sets up the CNI network information. K3S's default CNI plugin is Flannel VXLAN; you can set NoFlannel in the configuration and install a network plugin yourself. The default network information is written to `/var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist` and `/var/lib/rancher/k3s/agent/etc/flannel/net-conf.json`.
        3. `tunnel.Setup()`: the agent obtains the endpoint and certificates for communicating with kube-apiserver and establishes the connection. The certificates and related information are read from `/var/lib/rancher/k3s/agent/k3scontroller.kubeconfig` by default; it then calls `GET https://master-ip:6443/api/v1/namespaces/default/endpoints/kubernetes` to confirm that communication succeeds (as long as the call goes through, even a 403 return value is fine), and on success updates the Remotedialer Proxy object via `proxy.Update()`. A WebSockets connection is also created: `wss://master-ip:6443/v1-k3s/connect`.
        4. `agent.Agent()` invokes the kubelet: it collects parameters from the environment context, converts them into kubelet command-line argument form, and then likewise invokes Cobra's Command in-process. This part is analyzed in "[Kubelet bootstrap](../kubelet/bootstrap.md)".
        5. `configureNode()`
5. Waits on ctx.Done() and keeps running.
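To make the in-process invocation pattern from step 3.2.4 concrete, here is a minimal, self-contained Go sketch (my illustration, not K3S's actual code; the command name and empty flag list are stand-ins):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// Stand-in for the kube-apiserver Command that K3S invokes in-process.
	cmd := &cobra.Command{
		Use: "kube-apiserver",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("apiserver goroutine started")
			select {} // block forever, like a long-running server
		},
	}
	// K3S would pass the flags it translated from config.Control here.
	cmd.SetArgs([]string{})

	// Invoke the Command in a new goroutine instead of forking a process.
	go func() {
		if err := cmd.Execute(); err != nil {
			panic(err)
		}
	}()

	select {} // the wrapper keeps running alongside the embedded server
}
```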
46.597484
371
0.678769
yue_Hant
0.497398
edb0a985fb0a616208896200fc4633524eebaa79
551
md
Markdown
api/Visio.Document.Validation.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Visio.Document.Validation.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
api/Visio.Document.Validation.md
italicize/VBA-Docs
8d12d72a1e3e9e32f31b87be3a3f9e18e411c1b0
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Document.Validation Property (Visio) keywords: vis_sdr.chm10562440 f1_keywords: - vis_sdr.chm10562440 ms.prod: visio api_name: - Visio.Document.Validation ms.assetid: 725533ed-49bd-5796-972c-9e84896a3139 ms.date: 06/08/2017 --- # Document.Validation Property (Visio) Returns the **[Validation](Visio.Validation.md)** object that is associated with the document. Read-only. ## Syntax _expression_. `Validation` _expression_ A variable that represents a '[Document](Visio.Document.md)' object. ### Return Value **Validation**
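Since this reference page carries no example of its own, here is a minimal usage sketch (my addition, not from the original docs); it assumes a document is open in Visio:

```vba
Public Sub InspectValidation()
    ' Get the Validation object of the active document.
    Dim vsoValidation As Visio.Validation
    Set vsoValidation = ActiveDocument.Validation

    ' RuleSets belongs to the same Validation API; printing its count
    ' simply shows how many diagram-validation rule sets the document has.
    Debug.Print vsoValidation.RuleSets.Count
End Sub
```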
17.774194
106
0.753176
yue_Hant
0.515424
edb11a0a245d5b20bbfc2b024a05206bbfb2a6c2
181
md
Markdown
Gallery/images/2021-10-29-dinnertable.md
LWFlouisa/NumeroHexDiaries
a916be37023dc1eb625f5625fa9c0e1dbb6a657d
[ "MIT" ]
null
null
null
Gallery/images/2021-10-29-dinnertable.md
LWFlouisa/NumeroHexDiaries
a916be37023dc1eb625f5625fa9c0e1dbb6a657d
[ "MIT" ]
null
null
null
Gallery/images/2021-10-29-dinnertable.md
LWFlouisa/NumeroHexDiaries
a916be37023dc1eb625f5625fa9c0e1dbb6a657d
[ "MIT" ]
null
null
null
--- title: "Dinner Table" layout: post --- # Dinner Table <img src="https://raw.githubusercontent.com/LWFlouisa/WeirdSearch/main/images/beachtrip/20211024_172457.jpg" width="100%">
25.857143
122
0.751381
kor_Hang
0.111338
edb1207c93576a2e44577dabf14e457450d178bc
2,742
md
Markdown
README.md
Yiyiyimu/apisix-docker
619d26df77490c102e6a4c4cc123aad34842d01b
[ "Apache-2.0" ]
null
null
null
README.md
Yiyiyimu/apisix-docker
619d26df77490c102e6a4c4cc123aad34842d01b
[ "Apache-2.0" ]
null
null
null
README.md
Yiyiyimu/apisix-docker
619d26df77490c102e6a4c4cc123aad34842d01b
[ "Apache-2.0" ]
null
null
null
**Docker images are not official ASF releases but provided for convenience. Recommended usage is always to build from source.**

## How To Build Image

**The master branch is for the version of Apache APISIX 2.x. If you need a previous version, please build from the [v1.x](https://github.com/apache/apisix-docker/releases/tag/v1.x) tag.**

### Build an image from source

1. Build from a release version:

```sh
# Assign the Apache release version number to the variable `APISIX_VERSION`, for example: 2.6. The latest version can be found at `https://github.com/apache/apisix/releases`
export APISIX_VERSION=2.6

# alpine
$ make build-on-alpine

# centos
$ make build-on-centos
```

2. Build from the master branch version, which has the latest code (ONLY for the developer's convenience):

```sh
export APISIX_VERSION=master

# alpine
$ make build-on-alpine

# centos
$ make build-on-centos
```

3. Build from local code:

```sh
# To copy apisix into the image, we need to include it in the build context
$ cp -r <APISIX-PATH> ./apisix
$ APISIX_PATH=./apisix make build-on-alpine-local # Might need root privileges if you encounter "error checking context: 'can't stat'"
```

**Note:** For users in China, the following command is always recommended. The additional build argument `ENABLE_PROXY=true` enables a proxy that markedly accelerates the build.

```sh
$ make build-on-alpine-cn
```

### Manually deploy APISIX via Docker

[Manual deploy](docs/en/latest/manual.md)

### QuickStart via docker-compose

**Start all modules with docker-compose:**

```sh
$ cd example
$ docker-compose -p docker-apisix up -d
```

You can refer to [the docker-compose example](docs/en/latest/example.md) for more to try.

### Quick test with all dependencies in one Docker container

* All-in-one Docker container for Apache APISIX

```shell
$ make build-all-in-one
$ docker run -v `pwd`/all-in-one/apisix/config.yaml:/usr/local/apisix/conf/config.yaml -p 9080:9080 -p 2379:2379 -d apache/apisix:whole
```

* All-in-one Docker container for Apache apisix-dashboard

**The latest version of `apisix-dashboard` is 2.7 and should be used with APISIX 2.6.**

```shell
$ make build-dashboard
$ docker run -v `pwd`/all-in-one/apisix/config.yaml:/usr/local/apisix/conf/config.yaml -v `pwd`/all-in-one/apisix-dashboard/conf.yaml:/usr/local/apisix-dashboard/conf/conf.yaml -p 9080:9080 -p 2379:2379 -p 9000:9000 -d apache/apisix-dashboard:whole
```

Tip: If there is a port conflict, modify the host port through `docker run -p`, e.g.

```shell
$ docker run -v `pwd`/all-in-one/apisix/config.yaml:/usr/local/apisix/conf/config.yaml -v `pwd`/all-in-one/apisix-dashboard/conf.yaml:/usr/local/apisix-dashboard/conf/conf.yaml -p 19080:9080 -p 12379:2379 -p 19000:9000 -d apache/apisix-dashboard:whole
```
32.642857
251
0.7407
eng_Latn
0.867218
edb1219b81a2631943c9d59efb68095292cf5998
746
md
Markdown
gophercon2017/operability_in_go.md
frankcash/notes
bc9a6bf0c9bce879d06aac205b79d84acd86aa71
[ "CC0-1.0" ]
3
2015-10-31T16:19:45.000Z
2018-07-29T10:43:27.000Z
gophercon2017/operability_in_go.md
frankcash/notes
bc9a6bf0c9bce879d06aac205b79d84acd86aa71
[ "CC0-1.0" ]
null
null
null
gophercon2017/operability_in_go.md
frankcash/notes
bc9a6bf0c9bce879d06aac205b79d84acd86aa71
[ "CC0-1.0" ]
1
2017-01-28T01:57:58.000Z
2017-01-28T01:57:58.000Z
# Operability in Go

## Maintenance
Two equal objectives:

1. Fix it
2. Identify what is wrong

## Fail Well
- Fail immediately when unrecoverable errors occur. Otherwise data will be corrupted
- Fail in the smallest unit of execution possible
- In general, an unhandled error should panic

## Logging
- Provide context, otherwise a log line will get you nowhere
- Some errors provide context; named errors do not
- You can add context with the `github.com/pkg/errors` lib.
- Structured logging adds k-v pairs to your logs (e.g. `github.com/sirupsen/logrus`). This adds context.
- Structured loggers output JSON, making the logs easy to consume.
- Log environment and flag values
- `expvar` shows current state
- `expvar` for state, `log` for crashing
- Monitoring: prometheus
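A minimal sketch tying the two logging points together (my example, not from the talk; the config path is made up):

```go
package main

import (
	"os"

	"github.com/pkg/errors"
	log "github.com/sirupsen/logrus"
)

func loadConfig(path string) error {
	if _, err := os.Open(path); err != nil {
		// errors.Wrap attaches context (and a stack trace) to the cause.
		return errors.Wrap(err, "loading config")
	}
	return nil
}

func main() {
	log.SetFormatter(&log.JSONFormatter{}) // structured JSON output
	if err := loadConfig("/etc/app/conf.yaml"); err != nil {
		// WithFields adds key-value pairs, i.e. context.
		log.WithFields(log.Fields{"path": "/etc/app/conf.yaml"}).Fatal(err)
	}
}
```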
26.642857
97
0.762735
eng_Latn
0.990083
edb220a8765fe7e9346037433d29a8d090aa862f
3,002
md
Markdown
docs/_build/get_sql_db.md
guirava/rubrik-sdk-for-python
10c58a28088a0b8e241af20ab9ecc802cec63246
[ "MIT" ]
4
2018-09-06T23:34:32.000Z
2018-10-08T15:04:22.000Z
docs/_build/get_sql_db.md
guirava/rubrik-sdk-for-python
10c58a28088a0b8e241af20ab9ecc802cec63246
[ "MIT" ]
1
2018-09-05T22:20:12.000Z
2018-09-06T04:57:23.000Z
docs/_build/get_sql_db.md
guirava/rubrik-sdk-for-python
10c58a28088a0b8e241af20ab9ecc802cec63246
[ "MIT" ]
3
2018-10-01T14:49:47.000Z
2018-10-03T15:41:12.000Z
# get_sql_db

Retrieves summary information for SQL databases. Each keyword argument is a query parameter used to filter the database details returned, i.e. you can query for a specific database name, hostname, instance, is_relic, effective_sla_domain, etc.

```py
def get_sql_db(self, db_name=None, instance=None, hostname=None, availability_group=None, effective_sla_domain=None, primary_cluster_id='local', sla_assignment=None, limit=None, offset=None, is_relic=None, is_live_mount=None, is_log_shipping_secondary=None, sort_by=None, sort_order=None, timeout=15):
```

## Keyword Arguments

| Name | Type | Description | Choices | Default |
|-------------|------|-----------------------------------------------------------------------------|---------|---------|
| db_name | str | Filter by a substring of the database name. | | |
| instance | str | The SQL instance name of the database. | | |
| hostname | str | The SQL host name of the database. | | |
| availability_group | str | Filter by the name of the Always On Availability Group. | | |
| effective_sla_domain | str | Filter by the name of the effective SLA Domain. | | |
| primary_cluster_id | str | Filter by primary cluster ID, or local. | | |
| sla_assignment | str | Filter by SLA Domain assignment type. (Direct, Derived, Unassigned) | | |
| limit | int | Limit the number of matches returned. | | |
| offset | int | Skip this many matches at the beginning. | | |
| is_relic | bool | Filter database summary information by the value of the isRelic field. | | |
| is_live_mount | bool | Filter database summary information by the value of the isLiveMount field. | | |
| is_log_shipping_secondary | bool | Filter database summary information by the value of the isLogShippingSecondary field. | | |
| sort_by | str | Sort results based on the specified attribute. (effectiveSlaDomainName, name) | | |
| sort_order | str | Sort order, either ascending or descending. (asc, desc) | | |
| timeout | int | The number of seconds to wait to establish a connection to the Rubrik cluster before returning a timeout error. | | 15 |

## Returns

| Type | Return Value |
|------|-----------------------------------------------------------------------------------------------|
| dict | The full response of `GET /v1/mssql/db?{query}` |

## Example

```py
import rubrik_cdm

rubrik = rubrik_cdm.Connect()

db_name = "python-sdk-demo"
instance = 'MSSQLSERVER'
hostname = 'sql.rubrikdemo.com'
availability_group = 'sql.rubrikdemo.com'
effective_sla_domain = 'Gold'
primary_cluster_id = 'local'
sla_assignment = 'Direct'

get_db = rubrik.get_sql_db(db_name=db_name, instance=instance, hostname=hostname, availability_group=availability_group, effective_sla_domain=effective_sla_domain, primary_cluster_id=primary_cluster_id, sla_assignment=sla_assignment)
```
53.607143
301
0.643904
eng_Latn
0.870946
edb3131e812aa9bb19c4eb29782fa47b6ba2e731
2,210
md
Markdown
_photos/budapest.md
fayeah/fayeah.github.io
4468f04f7f1ef09a007359126d63c8469e2c04ad
[ "MIT" ]
1
2019-03-06T03:58:34.000Z
2019-03-06T03:58:34.000Z
_photos/budapest.md
fayeah/fayeah.github.io
4468f04f7f1ef09a007359126d63c8469e2c04ad
[ "MIT" ]
4
2020-04-11T02:44:36.000Z
2022-03-02T08:12:30.000Z
_photos/budapest.md
fayeah/fayeah.github.io
4468f04f7f1ef09a007359126d63c8469e2c04ad
[ "MIT" ]
null
null
null
---
title: Budapest
sidebar:
  - title: "Another Title"
    text: "More text here."
nav: sidebar-photos
---

## Budapest - A Visit

I was very lucky to visit Budapest for work. The trip was quite busy, and I already wrote a dedicated post about it on Jianshu, though that one was mostly about the work and the team. This time, let's talk about the city of Budapest itself.

![chain-bridge]({{ site.url }}{{ site.baseurl }}/assets/images/chain-bridge.jpeg)

The **Chain Bridge** has a classic story:

> *In 1820, the cavalry commander Széchenyi needed to cross the river to attend his father's funeral, but bad weather made the wooden pontoon bridge impassable, and the unlucky man waited a full week before he could get across. From then on, he resolved to build a bridge over the Danube. After raising enough money, he brought in an English designer and engineer, and after seven years the Chain Bridge, now Budapest's landmark structure, was completed.*

The Chain Bridge was also the first permanent structure linking Buda and Pest. Hopeless with directions, I still don't know whether Buda is on the east side or Pest is on the west. I loved strolling across the bridge at night, walking and pausing, taking in the cool air of the whole city. Below the bridge flows the Danube, busy with shipping; if time allows, you can take a boat ride too. On one side of the bridge there is a short stretch worth seeing: the Shoes on the Danube Bank. The story here is that during World War II a great many civilians, children included, were shot by the Nazis on the riverbank and their bodies thrown into the river, leaving only their shoes behind. Shoes and glasses were said to be valuable at the time, so victims were made to remove their clothes and shoes before being executed.

![shoes]({{ site.url }}{{ site.baseurl }}/assets/images/shoes.jpeg)

Across the bridge is Buda Castle. One weekend day I trekked all the way up to it, taking the path up the hillside. I really admire my own nerve, wandering around alone despite being terrible with directions, apparently with no fear of getting lost. From the top of the castle the whole city spreads out below you: the Chain Bridge, the Danube, and all kinds of buildings. The daytime sun in Budapest is fierce, and I practically peeled off a layer of skin. Lesson learned: next time bring sunglasses and take sun protection seriously. Here is a photo of me, one layer of skin lighter.

![me]({{ site.url }}{{ site.baseurl }}/assets/images/me.jpeg)

Even though the sun was strong, the temperature was fine, around 30°C, not too hot, at least compared with Wuhan. Just look at how much the locals love sunbathing.

![sunbathing]({{ site.url }}{{ site.baseurl }}/assets/images/sunbathing.jpeg)

Walking the bridge alone didn't feel lonely at all, until the young couple in front of me kept kissing nonstop, one step, one kiss, a severe case of the honeymoon phase that practically blinded me. Sadly I didn't capture the classic moment on camera.

![kissing]({{ site.url }}{{ site.baseurl }}/assets/images/kissing.jpeg)

I felt rather lucky to see four soldiers on horseback at Buda Castle; I don't know what kind of ceremony it was, and it doesn't seem to happen all day.

![knight]({{ site.url }}{{ site.baseurl }}/assets/images/knight.jpeg)

There is plenty of street art up at Buda Castle too: someone sketching the buildings, an old gentleman showing off his beloved hawk, someone playing the violin inside the castle grounds, and of course portrait artists drawing passers-by. Looking closely at the portraits, they are quite different from the ones back home; here they tend to be rather abstract.

![artist-1]({{ site.url }}{{ site.baseurl }}/assets/images/artist-1.jpeg)

![artist-2]({{ site.url }}{{ site.baseurl }}/assets/images/artist-2.jpeg)

![artist-3]({{ site.url }}{{ site.baseurl }}/assets/images/artist-3.jpeg)

![artist-4]({{ site.url }}{{ site.baseurl }}/assets/images/artist-4.jpeg)

Summer days in Budapest are especially long. After resting a while at the Marriott Hotel in the afternoon, it still didn't feel anywhere near late night, so I went out for another stroll. Budapest at night is brightly lit, and the night view has a completely different character from the daytime; without the daytime bustle, you just want to quietly take the city in.

![night]({{ site.url }}{{ site.baseurl }}/assets/images/night.jpeg)

The whole Budapest trip, busy as it was, was a happy one. The scenery in Europe really is lovely, the whole city feels leisurely, and crucially the food isn't expensive, roughly on par with Wuhan. If you're going purely as a tourist, I'd still suggest bringing a few friends along; the city is fairly developed overall, but there are inevitably homeless people and the like, and girls especially should stay alert about safety. Another point I mentioned earlier: if you come in summer, take sun protection seriously. Also, if you want to order food sensibly and without embarrassment, it's best to brush up your English beforehand, because some rather "distinctive" dishes are on offer, and accidentally ordering one would be awkward. All in all this was a great opportunity, and I very much hope to have the chance to visit Europe again.
45.102041
244
0.773756
yue_Hant
0.301423
edb36637a5e76b7f675812f3129a0354d4280a45
6,376
md
Markdown
content/course/lpic1/1023_manage_shared_libraries.md
jadijadi/linuxlearner
d6f7f57400d7eda41937b1d601b4c8b9fbdf772c
[ "CC0-1.0" ]
17
2015-12-26T06:06:19.000Z
2022-03-14T15:12:46.000Z
content/course/lpic1/1023_manage_shared_libraries.md
jadijadi/linuxlearner
d6f7f57400d7eda41937b1d601b4c8b9fbdf772c
[ "CC0-1.0" ]
null
null
null
content/course/lpic1/1023_manage_shared_libraries.md
jadijadi/linuxlearner
d6f7f57400d7eda41937b1d601b4c8b9fbdf772c
[ "CC0-1.0" ]
6
2017-01-19T10:58:03.000Z
2022-03-09T12:34:06.000Z
Title: 102.3. Manage shared libraries
Date: 2015-12-25 12:20
Category: lpic101

# 102.3. Manage shared libraries
*weight 2*

## Objectives
Candidates should be able to determine the shared libraries that executable programs depend on and install them when necessary.

- Identify shared libraries.
- Identify the typical locations of system libraries.
- Load shared libraries.
- ldd
- ldconfig
- /etc/ld.so.conf
- LD_LIBRARY_PATH

## Linking
When we write a program, we use libraries. For example if you need to read text from standard input, you need to *link* a library which provides this. Linking has two forms:

- **Static** linking is when you add this library to your executable program. In this method your program size is big because it contains all the needed libraries. One advantage is that your program can run without depending on other programs/libraries.
- **Dynamic** linking is when you just say in your program "We need this and that library to run this program". This way your program is smaller but you need to install those libraries separately. This makes programs more secure (because libraries can be updated centrally), more advanced (any improvement in a library will improve the whole program) and smaller.

> Dynamic linking is also described with the term **shared** libraries, because all the programs share one library which is installed separately.

## What libraries do I need
First you should know that libraries are installed in /lib and /lib64 (for 32bit and 64bit libraries).

#### ldd
The ````ldd```` command helps you find:

- whether a program is dynamically or statically linked
- what libraries a program needs

Let's have a look at two files:

````
# ldd /sbin/ldconfig
	not a dynamic executable
# ldd /bin/ls
	linux-vdso.so.1 =>  (0x00007fffef1fc000)
	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f61696b3000)
	libacl.so.1 => /lib/x86_64-linux-gnu/libacl.so.1 (0x00007f61694aa000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f61690e4000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f6168e77000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6168c73000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f61698f8000)
	libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x00007f6168a6d000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f616884f000)
````

As you can see, ````ldd```` tells us that /sbin/ldconfig is not dynamically linked, and shows us the libraries needed by /bin/ls.

#### symbolic links for libraries
If you are writing a program and you use udev functions, you will ask for a library called *libudev.so.1*. But a Linux distro might call its version of the udev library *libudev.so.1.4.0*. How can we solve this problem? With **symbolic links**. You will learn more about these in later chapters, but in short, a symbolic link is a new name for the same file.

I will check this on my system. First I'll find where libudev.so.1 is:

    # locate libudev.so.1
    /lib/i386-linux-gnu/libudev.so.1

and then I will check that file:

    # ls -l /lib/i386-linux-gnu/libudev.so.1
    lrwxrwxrwx 1 root root 16 Nov 13 23:05 /lib/i386-linux-gnu/libudev.so.1 -> libudev.so.1.4.0

As you can see, this is a symbolic link pointing to the version of libudev I have installed (1.4.0), so even if a program asks for libudev.so.1, my system will use its libudev.so.1.4.0.

#### Dynamic library configs
As with most other Linux tools, dynamic linking is configured using a text config file. It is located at */etc/ld.so.conf*. On an Ubuntu system it just points to other config files in /etc/ld.so.conf.d/, but all those lines could be included in the main file too:

````
# cat /etc/ld.so.conf
include /etc/ld.so.conf.d/*.conf
# ls /etc/ld.so.conf.d/
fakeroot-x86_64-linux-gnu.conf   i686-linux-gnu.conf   x86_64-linux-gnu_EGL.conf
i386-linux-gnu.conf   libc.conf   x86_64-linux-gnu_GL.conf
i386-linux-gnu_GL.conf   x86_64-linux-gnu.conf   x86_64-linux-gnu_mirclient8driver.conf
# cat /etc/ld.so.conf.d/libc.conf
# libc default configuration
/usr/local/lib
# cat /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf
/usr/lib/x86_64-linux-gnu/mesa
````

The ````ldconfig```` command processes all these files to make the loading of libraries faster. It creates ld.so.cache, which is used to locate the files that are to be dynamically loaded and linked.

> If you change ld.so.conf (or the files in its sub-directories) you need to run ldconfig

To close this section, let's run ldconfig with the **-p** switch to see what is saved in ld.so.cache:

````
# ldconfig -p | head
1358 libs found in cache `/etc/ld.so.cache'
	libzvbi.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzvbi.so.0
	libzvbi-chains.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzvbi-chains.so.0
	libzephyr.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzephyr.so.4
	libzeitgeist-2.0.so.0 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzeitgeist-2.0.so.0
	libzeitgeist-1.0.so.1 (libc6,x86-64) => /usr/lib/libzeitgeist-1.0.so.1
	libzbar.so.0 (libc6,x86-64) => /usr/lib/libzbar.so.0
...
...
````

As you can see, this cache tells the dynamic linker that whenever anything asks for *libzvbi.so.0*, the */usr/lib/x86_64-linux-gnu/libzvbi.so.0* file should be loaded.

## LD_LIBRARY_PATH
Sometimes you need to override the originally installed libraries and use your own or a specific library. Cases can be:

- You are running an old piece of software which needs an old version of a library.
- You are developing a shared library and want to test it without installing it
- You are running a specific program (say from /opt) which needs to access its own libraries

In these cases, you have to use the environment variable **LD_LIBRARY_PATH**. A colon (:) separated list of directories will tell your program where to search for needed libraries **before** checking the libraries in ld.so.cache. For example if you give this command:

    export LD_LIBRARY_PATH=/usr/lib/myoldlibs:/home/jadi/lpic/libs/

and then run any command, the system will search /usr/lib/myoldlibs and then /home/jadi/lpic/libs/ before going to the main system libraries (defined in ld.so.cache).
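A quick illustration of the effect (hypothetical program and library names, output abridged):

````
$ ldd ./myapp | grep libfoo
	libfoo.so.1 => not found
$ export LD_LIBRARY_PATH=/home/jadi/lpic/libs
$ ldd ./myapp | grep libfoo
	libfoo.so.1 => /home/jadi/lpic/libs/libfoo.so.1 (0x00007f...)
````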
42.506667
363
0.731493
eng_Latn
0.991159
edb4122310d493e4e330f30f9663fc9cbe58c7a8
7,506
md
Markdown
articles/cognitive-services/Bing-Video-Search/quick-start.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Bing-Video-Search/quick-start.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Bing-Video-Search/quick-start.md
andreatosato/azure-docs.it-it
7023e6b19af61da4bb4cdad6e4453baaa94f76c3
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Bing Video Search API quickstart | Microsoft Docs
description: Shows how to get started using the Bing Video Search API.
services: cognitive-services
author: swhite-msft
manager: ehansen
ms.assetid: 7E59692A-83A8-4F4C-B122-1F0EDC8E5C86
ms.service: cognitive-services
ms.component: bing-video-search
ms.topic: article
ms.date: 04/15/2017
ms.author: scottwhi
ms.openlocfilehash: 0bd0f067d64cac3ebac342ebadcfcc010a47af7b
ms.sourcegitcommit: 95d9a6acf29405a533db943b1688612980374272
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 06/23/2018
ms.locfileid: "35376956"
---
# <a name="your-first-video-search-query"></a>Your first video search query

Before you can make your first call, you need to get a Cognitive Services subscription key for Bing Search. To get a key, see [Try Cognitive Services](https://azure.microsoft.com/try/cognitive-services/?api=bing-video-search-api).

To get videos as search results, send a GET request to the following endpoint:

```
https://api.cognitive.microsoft.com/bing/v7.0/videos/search
```

The request must use the HTTPS protocol.

We recommend that all requests originate from a server. Distributing the key as part of a client application makes it easier for malicious third parties to access it. Also, making calls from a server provides a single upgrade point for future versions of the API.

The request must specify the [q](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#query) query parameter, which contains the user's search term. Although optional, the request should also specify the [mkt](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#mkt) query parameter, which identifies the market that the results should come from. For a list of optional query parameters such as `pricing`, see [Query Parameters](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#query-parameters). All query parameter values must be URL-encoded.

The request must specify the [Ocp-Apim-Subscription-Key](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#subscriptionkey) header. Although optional, you are encouraged to also specify the following headers:

- [User-Agent](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#useragent)
- [X-MSEdge-ClientID](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#clientid)
- [X-Search-ClientIP](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#clientip)
- [X-Search-Location](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#location)

The client IP and location headers are important for returning location-aware content.

For a list of all request and response headers, see [Headers](https://docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference#headers).

## <a name="the-request"></a>The request

The following shows a search request that includes all the suggested headers and query parameters. If it's your first time calling any of the Bing APIs, don't include the client ID header. Only include the client ID header if you've previously called a Bing API and Bing returned a client ID for the user and device combination.

```
GET https://api.cognitive.microsoft.com/bing/v7.0/videos/search?q=sailing+dinghies&mkt=en-us HTTP/1.1
Ocp-Apim-Subscription-Key: 123456789ABCDE
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822)
X-Search-ClientIP: 999.999.999.999
X-Search-Location: lat:47.60357;long:-122.3295;re:100
X-MSEdge-ClientID: <blobFromPriorResponseGoesHere>
Host: api.cognitive.microsoft.com
```

The following shows the response to the previous request. The example also shows the Bing-specific response headers.

```
BingAPIs-TraceId: 76DD2C2549B94F9FB55B4BD6FEB6AC
X-MSEdge-ClientID: 1C3352B306E669780D58D607B96869
BingAPIs-Market: en-US

{
    "_type" : "Videos",
    "webSearchUrl" : "https:\/\/www.bing.com\/cr?IG=81EF7545D5694...",
    "totalEstimatedMatches" : 1000,
    "value" : [
        {
            "name" : "How to sail - What to Wear for Dinghy Sailing",
            "description" : "An informative video on what to wear when...",
            "webSearchUrl" : "https:\/\/www.bing.com\/cr?IG=81EF7545D56...",
            "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?id=OVP.DYWCvh...",
            "datePublished" : "2014-03-04T11:51:53",
            "publisher" : [
                {
                    "name" : "Fabrikam"
                }
            ],
            "creator" : {
                "name" : "Marcus Appel"
            },
            "contentUrl" : "https:\/\/www.fabrikam.com\/watch?v=vzmPjHZ--g",
            "hostPageUrl" : "https:\/\/www.bing.com\/cr?IG=81EF7545D56944...",
            "encodingFormat" : "h264",
            "hostPageDisplayUrl" : "https:\/\/www.fabrikam.com\/watch?v=vzmPjBZ--g",
            "width" : 1280,
            "height" : 720,
            "duration" : "PT2M47S",
            "motionThumbnailUrl" : "https:\/\/tse3.mm.bing.net\/th?id=OM.Y6...",
            "embedHtml" : "<iframe width=\"1280\" height=\"720\" src=\"https:...><\/iframe>",
            "allowHttpsEmbed" : true,
            "viewCount" : 8743,
            "thumbnail" : {
                "width" : 300,
                "height" : 168
            },
            "videoId" : "6DB795E11A6E3CBAAD636DB795E11E3CBAAD63",
            "allowMobileEmbed" : true,
            "isSuperfresh" : false
        },

        . . .
    ],
    "nextOffset" : 0,
    "pivotSuggestions" : [
        {
            "pivot" : "sailing",
            "suggestions" : []
        },
        {
            "pivot" : "dinghies",
            "suggestions" : [
                {
                    "text" : "Sailing Cruising",
                    "displayText" : "Cruising",
                    "webSearchUrl" : "https:\/\/www.bing.com\/cr?IG=81EF754...",
                    "searchLink" : "https:\/\/api.cognitive.microsoft.com...",
                    "thumbnail" : {
                        "thumbnailUrl" : "https:\/\/tse4.mm.bing.net\/th?q=Sailing..."
                    }
                },

                . . .
            ]
        }
    ]
}
```

## <a name="next-steps"></a>Next steps

Try the API. Go to the [Video Search API Testing Console](https://dev.cognitive.microsoft.com/docs/services/56b43f3ccf5ff8098cef3809/operations/58113fe5e31dac0a1ce6b0a8).

For details about consuming the response objects, see [Searching the Web for Videos](./search-the-web.md).

For information about how to get detailed data about a video, such as related searches, see [Get insights about a video](./video-insights.md).

For details about videos that are trending on social media, see [Get Trending Videos](./trending-videos.md).
52.125
723
0.681721
ita_Latn
0.848995
edb55259043bac0e35dd78e709cbc228f735dd02
94
md
Markdown
versionStream/kubeProviders/oke/README.md
przyx/vault
df7d9877102fcebcb56aed921ad7eb64e13ba173
[ "Apache-2.0" ]
null
null
null
versionStream/kubeProviders/oke/README.md
przyx/vault
df7d9877102fcebcb56aed921ad7eb64e13ba173
[ "Apache-2.0" ]
17
2020-09-06T07:15:00.000Z
2020-09-15T14:03:18.000Z
versionStream/kubeProviders/oke/README.md
przyx/vault
df7d9877102fcebcb56aed921ad7eb64e13ba173
[ "Apache-2.0" ]
1
2021-10-10T11:14:13.000Z
2021-10-10T11:14:13.000Z
# Jenkins X Boot configuration for Oracle Cloud Infrastructure Container Engine for Kubernetes
94
94
0.861702
eng_Latn
0.493992
edb5aa3e66f7eec9e797e089ee258fe1824e5ee2
101
md
Markdown
cargo/vendor/local-waker-0.1.1/CHANGES.md
btwiuse/invctrl
b3b47ad75510bd48bb235f1813cc6eb391579a00
[ "MIT" ]
2
2021-02-24T19:59:52.000Z
2021-07-27T22:24:23.000Z
cargo/vendor/local-waker-0.1.1/CHANGES.md
btwiuse/conntroll
39e426475a13a0610252dab8c030206a27a955fd
[ "MIT" ]
25
2021-09-15T04:27:06.000Z
2022-03-08T20:27:49.000Z
cargo/vendor/local-waker-0.1.1/CHANGES.md
btwiuse/conntroll
39e426475a13a0610252dab8c030206a27a955fd
[ "MIT" ]
1
2021-02-24T20:01:11.000Z
2021-02-24T20:01:11.000Z
# Changes

## Unreleased - 2021-xx-xx


## 0.1.1 - 2021-03-29
* Move `LocalWaker` to its own crate.
12.625
38
0.623762
eng_Latn
0.886177
edb5e442fecdb4dd9a9ecdec5eae28151bf0fd46
682
md
Markdown
docs/modinit/registration.md
noeppi-noeppi/WikiX
c37fb853679e6c8fc6297923fd61d00dc7a505c8
[ "Apache-2.0" ]
null
null
null
docs/modinit/registration.md
noeppi-noeppi/WikiX
c37fb853679e6c8fc6297923fd61d00dc7a505c8
[ "Apache-2.0" ]
null
null
null
docs/modinit/registration.md
noeppi-noeppi/WikiX
c37fb853679e6c8fc6297923fd61d00dc7a505c8
[ "Apache-2.0" ]
null
null
null
# ModInit registration

You can use ModInit to register stuff to the [LibX registration system](../registration/index.md). For this, add `@RegisterClass` to a class. All `public` `static` `final` fields from that class will be registered to the LibX registration system. To exclude a field, annotate it with `@NoReg`.

The registry name for a field is derived by taking the field name and replacing every uppercase character with an underscore (`_`) followed by that letter in lowercase. To customise the registry name, annotate a field with `@RegName`. You can also set priorities and prefixes in `@RegisterClass` to tweak registration order and add a prefix to each registry name from a class. A sketch is shown below.
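A minimal sketch of what this looks like in practice (my illustration: the annotation import path differs between LibX versions, so imports are omitted, and `ExampleBlock` is a hypothetical stand-in for a real registrable object; only the annotations come from ModInit):

```java
// Hypothetical stand-in for a registrable object.
class ExampleBlock {}

@RegisterClass
public class ModBlocks {

    // Field name "fancyDoorBlock" yields the registry name "fancy_door_block".
    public static final ExampleBlock fancyDoorBlock = new ExampleBlock();

    // @RegName overrides the derived registry name.
    @RegName("shiny_gem")
    public static final ExampleBlock gemBlock = new ExampleBlock();

    // @NoReg excludes a field from registration entirely.
    @NoReg
    public static final ExampleBlock internalOnly = new ExampleBlock();
}
```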
68.2
227
0.778592
eng_Latn
0.99838
edb61a009022cb415843d1d366fe8b7453c1a633
70
md
Markdown
baseline_code/README.md
sainathadapa/urban-sound-tagging
d93044b06e7d954cc9fc51b95f5e801c1daaaec4
[ "MIT" ]
19
2019-06-11T12:04:57.000Z
2020-05-09T20:59:47.000Z
baseline_code/README.md
sainathadapa/dcase2019-task5-urban-sound-tagging
d93044b06e7d954cc9fc51b95f5e801c1daaaec4
[ "MIT" ]
1
2020-04-26T17:40:05.000Z
2020-04-26T17:40:05.000Z
baseline_code/README.md
sainathadapa/urban-sound-tagging
d93044b06e7d954cc9fc51b95f5e801c1daaaec4
[ "MIT" ]
6
2019-07-10T09:13:59.000Z
2020-03-01T15:02:53.000Z
Source: https://github.com/sonyc-project/urban-sound-tagging-baseline
35
69
0.814286
yue_Hant
0.194119
edb65d07efa9cb2be713137e481df8d114fad02b
11,485
md
Markdown
README.md
textcreationpartnership/B09776
33eda5aa2ba8556cd2d978f441a5d46de93fcdbb
[ "CC0-1.0" ]
null
null
null
README.md
textcreationpartnership/B09776
33eda5aa2ba8556cd2d978f441a5d46de93fcdbb
[ "CC0-1.0" ]
null
null
null
README.md
textcreationpartnership/B09776
33eda5aa2ba8556cd2d978f441a5d46de93fcdbb
[ "CC0-1.0" ]
null
null
null
#The Anabaptists meribah: or, VVaters of strife. Being a reply to a late insulting pamphlet, written by Thomas Lamb, merchant, intitulled, Truth prevailing against the fiercest opposition; or, An answer to Mr. John Goodwins Water-dipping, no firm footing for church-communion. Wherein the impertinency of M. Lamb's answer, and the validity of M. Goodwin's Water-dipping, &c. are manifested by I. Price a member of the Church of Christ, whereof the said Mr. Goodwin is pastor.# ##Price, J., fl. 1656.## The Anabaptists meribah: or, VVaters of strife. Being a reply to a late insulting pamphlet, written by Thomas Lamb, merchant, intitulled, Truth prevailing against the fiercest opposition; or, An answer to Mr. John Goodwins Water-dipping, no firm footing for church-communion. Wherein the impertinency of M. Lamb's answer, and the validity of M. Goodwin's Water-dipping, &c. are manifested by I. Price a member of the Church of Christ, whereof the said Mr. Goodwin is pastor. Price, J., fl. 1656. ##General Summary## **Links** [TCP catalogue](http://www.ota.ox.ac.uk/tcp/) • [HTML](http://tei.it.ox.ac.uk/tcp/Texts-HTML/free/B09/B09776.html) • [EPUB](http://tei.it.ox.ac.uk/tcp/Texts-EPUB/free/B09/B09776.epub) • [Page images (Historical Texts)](https://historicaltexts.jisc.ac.uk/eebo-124064252e) **Availability** To the extent possible under law, the Text Creation Partnership has waived all copyright and related or neighboring rights to this keyboarded and encoded edition of the work described above, according to the terms of the CC0 1.0 Public Domain Dedication (http://creativecommons.org/publicdomain/zero/1.0/). This waiver does not extend to any page images or other supplementary files associated with this work, which may be protected by copyright or other license restrictions. Please go to https://www.textcreationpartnership.org/ for more information about the project. **Major revisions** 1. __2010-12__ __TCP__ *Assigned for keying and markup* 1. __2010-12__ __SPi Global__ *Keyed and coded from ProQuest page images* 1. __2011-03__ __Ali Jakobson__ *Sampled and proofread* 1. __2011-03__ __Ali Jakobson__ *Text and markup reviewed and edited* 1. __2011-06__ __pfs__ *Batch review (QC) and XML conversion* ##Content Summary## #####Front##### THE ANABAPTISTS MERIBAH: OR, VVaters of Strife.BEING A Reply to a late inſulting Pamphlet, written b 1. THE EPISTLE DEDICATORY. #####Body##### 1. THE ANABAPTISTS MERIBAH: OR, VVaters of STRIFE. _ SECT: I. _ SECT: II. _ To the Reader: _ SECT: IIII. _ SECT: V. _ SECT: VI. _ SECT: VII. _ SECT: VIII. _ SECT: IX. _ SECT: IX. _ SECT: X. _ SECT: XII. _ SECT. XIII. _ SECT. IV. _ SECT. XV. _ SECT. XVI. _ SECT. XVII. _ SECT. XVIII. _ SECT. XIX. _ SECT. XX. _ SECT. XXI: _ SECT: XXII: _ SECT. XXIII: _ SECT. XXIV. _ SECT. XXV. _ SECT. XXVI. _ SECT. XXVII. _ SECT. XXVIII. _ SECT. XXIX. _ SECT. XXX. _ SECT: XXXI. _ SECT. XXXIII. _ SECT. XXXIII. _ SECT. XXXIV. _ SECT. XXXV. _ SECT. XXXVI _ SECT. XXXVII. _ SECT. XXXVIII. _ SECT. XXXIX. _ SECT. XL. _ SECT. XLI. _ SECT. XLII. _ SECT. XLV. _ SECT. XLIII. _ SECT. XLI. _ SECT. XLV. _ SECT. XLIV. _ SECT. XLVII. _ SECT. XLVIII. _ SECT. XLIX. _ SECT. L. _ SECT. LI. _ SECT. LII. _ SECT. LIII. _ SECT. LIV. _ SECT. LV. _ SECT. LVI. _ SECT. LVII. _ SECT. LVIII. _ SECT. LIX. _ SECT. LX. _ SECT. LXI. _ SECT. LXII. _ SECT. LXIII. _ SECT. LXIV. _ SECT. LXV. _ SECT. LXVI. _ SECT. LVII. _ SECT. LXVIII. _ SECT. LXIX. _ SECT. LXX. _ SECT. LXXI. _ SECT. LXXII. _ SECT. LXXIII. _ SECT. LXXIV. _ SECT. LXXV. _ SECT. LXXVI. _ SECT. LXXVII. _ SECT. LXXVIII. 1. Mr. 
LAMB'S POSTSCRIPT, Preſcribed.
#####Back#####
1. Reader, if thou takeſt no pleaſure in the errors of the times, mend by thy pen, what the Printer hath marr'd by his preſs: 1. Theſe Books following are Printed for (and ſold by) Henry Everſden, at the Grey-hound in Pauls-Church-yard.
**Types of content**
  * There are 6 **verse** lines!
  * Oh, Mr. Jourdain, there is **prose** in there!
There are 82 **omitted** fragments!
 @__reason__ (82) : duplicate (8), illegible (74) • @__extent__ (82) : 1 page (8), 1 letter (61), 1 word (3), 3 letters (1), 2 letters (9) • @__resp__ (74) : #PDCC (74)
**Character listing**
|Text|string(s)|codepoint(s)|
|---|---|---|
|Latin-1 Supplement|àè|224 232|
|Latin Extended-A|ſ|383|
|Combining Diacritical Marks|̄|772|
|General Punctuation|•—|8226 8212|
|Geometric Shapes|◊▪|9674 9642|
|CJKSymbolsandPunctuation|〈〉|12296 12297|
##Tag Usage Summary##
###Header Tag Usage###
|No|element name|occ|attributes|
|---|---|---|---|
|1.|__author__|2||
|2.|__availability__|1||
|3.|__biblFull__|1||
|4.|__change__|5||
|5.|__date__|8| @__when__ (1) : 2011-12 (1)|
|6.|__edition__|1||
|7.|__editionStmt__|1||
|8.|__editorialDecl__|1||
|9.|__encodingDesc__|1||
|10.|__extent__|2||
|11.|__fileDesc__|1||
|12.|__idno__|7| @__type__ (7) : DLPS (1), OCLC (2), STC (2), EEBO-CITATION (1), VID (1)|
|13.|__keywords__|1| @__scheme__ (1) : http://authorities.loc.gov/ (1)|
|14.|__label__|5||
|15.|__langUsage__|1||
|16.|__language__|1| @__ident__ (1) : eng (1)|
|17.|__listPrefixDef__|1||
|18.|__note__|5||
|19.|__notesStmt__|2||
|20.|__p__|11||
|21.|__prefixDef__|2| @__ident__ (2) : tcp (1), char (1) • @__matchPattern__ (2) : ([0-9\-]+):([0-9IVX]+) (1), (.+) (1) • @__replacementPattern__ (2) : http://eebo.chadwyck.com/downloadtiff?vid=$1&page=$2 (1), https://raw.githubusercontent.com/textcreationpartnership/Texts/master/tcpchars.xml#$1 (1)|
|22.|__profileDesc__|1||
|23.|__projectDesc__|1||
|24.|__pubPlace__|2||
|25.|__publicationStmt__|2||
|26.|__publisher__|2||
|27.|__ref__|1| @__target__ (1) : http://www.textcreationpartnership.org/docs/. 
(1)| |28.|__revisionDesc__|1|| |29.|__seriesStmt__|1|| |30.|__sourceDesc__|1|| |31.|__term__|4|| |32.|__textClass__|1|| |33.|__title__|3|| |34.|__titleStmt__|2|| ###Text Tag Usage### |No|element name|occ|attributes| |---|---|---|---| |1.|__back__|1|| |2.|__bibl__|2|| |3.|__body__|3|| |4.|__closer__|2|| |5.|__date__|1|| |6.|__dateline__|1|| |7.|__desc__|82|| |8.|__div__|87| @__type__ (87) : title_page (1), dedication (1), text (1), section (79), order (2), postscript (1), errata (1), publishers_advertisement (1) • @__n__ (79) : 1 (1), 2 (1), 3 (1), 4 (2), 5 (1), 6 (1), 7 (1), 8 (1), 9 (2), 10 (1), 12 (1), 13 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1), 23 (1), 24 (1), 25 (1), 26 (1), 27 (1), 28 (1), 29 (1), 30 (1), 31 (1), 33 (2), 34 (1), 35 (1), 36 (1), 37 (1), 38 (1), 39 (1), 40 (1), 41 (2), 42 (1), 45 (2), 43 (1), 44 (1), 47 (1), 48 (1), 49 (1), 50 (1), 51 (1), 52 (1), 53 (1), 54 (1), 55 (1), 56 (1), 57 (1), 58 (1), 59 (1), 60 (1), 61 (1), 62 (1), 63 (1), 64 (1), 65 (1), 66 (1), 67 (1), 68 (1), 69 (1), 70 (1), 71 (1), 72 (1), 73 (1), 74 (1), 75 (1), 76 (1), 77 (1), 78 (1)| |9.|__floatingText__|2| @__xml:lang__ (2) : eng (0)| |10.|__front__|1|| |11.|__g__|780| @__ref__ (780) : char:EOLhyphen (771), char:EOLunhyphen (7), char:punc (1), char:cmbAbbrStroke (1)| |12.|__gap__|82| @__reason__ (82) : duplicate (8), illegible (74) • @__extent__ (82) : 1 page (8), 1 letter (61), 1 word (3), 3 letters (1), 2 letters (9) • @__resp__ (74) : #PDCC (74)| |13.|__head__|89| @__type__ (2) : sub (2)| |14.|__hi__|2160| @__rend__ (2) : sup (2)| |15.|__item__|29|| |16.|__l__|6|| |17.|__label__|46| @__type__ (46) : milestone (46)| |18.|__list__|4|| |19.|__milestone__|3| @__type__ (3) : tcpmilestone (3) • @__unit__ (3) : unspecified (3) • @__n__ (3) : 1 (1), 2 (1), 3 (1)| |20.|__note__|132| @__place__ (132) : margin (132) • @__n__ (2) : * (2)| |21.|__opener__|1|| |22.|__p__|330| @__n__ (83) : 2 (29), 1 (15), 3 (15), 4 (9), 5 (5), 6 (5), 7 (2), 8 (2), 9 (1)| |23.|__pb__|107| @__facs__ (107) : tcp:192312:1 (1), tcp:192312:2 (2), tcp:192312:3 (2), tcp:192312:4 (2), tcp:192312:5 (2), tcp:192312:6 (2), tcp:192312:7 (2), tcp:192312:8 (2), tcp:192312:9 (2), tcp:192312:10 (2), tcp:192312:11 (2), tcp:192312:12 (2), tcp:192312:13 (2), tcp:192312:14 (2), tcp:192312:15 (2), tcp:192312:16 (2), tcp:192312:17 (2), tcp:192312:18 (2), tcp:192312:19 (2), tcp:192312:20 (2), tcp:192312:21 (2), tcp:192312:22 (2), tcp:192312:23 (2), tcp:192312:24 (2), tcp:192312:25 (2), tcp:192312:26 (2), tcp:192312:27 (2), tcp:192312:28 (2), tcp:192312:29 (2), tcp:192312:30 (2), tcp:192312:31 (2), tcp:192312:32 (2), tcp:192312:33 (2), tcp:192312:34 (2), tcp:192312:35 (2), tcp:192312:36 (2), tcp:192312:37 (2), tcp:192312:38 (2), tcp:192312:39 (2), tcp:192312:40 (2), tcp:192312:41 (2), tcp:192312:42 (2), tcp:192312:43 (2), tcp:192312:44 (2), tcp:192312:45 (2), tcp:192312:46 (2), tcp:192312:47 (2), tcp:192312:48 (2), tcp:192312:49 (2), tcp:192312:50 (2), tcp:192312:51 (2), tcp:192312:52 (2), tcp:192312:53 (2), tcp:192312:54 (2) • @__rendition__ (7) : simple:additions (7) • @__n__ (92) : 1 (1), 2 (1), 3 (1), 4 (1), 5 (1), 6 (1), 7 (1), 8 (1), 9 (1), 10 (1), 11 (1), 12 (1), 13 (1), 14 (1), 15 (1), 16 (1), 17 (1), 18 (1), 19 (1), 20 (1), 21 (1), 22 (1), 23 (1), 24 (1), 25 (1), 26 (1), 27 (1), 28 (1), 29 (1), 30 (1), 31 (1), 32 (1), 33 (1), 34 (1), 35 (1), 36 (1), 37 (1), 38 (2), 39 (2), 40 (2), 41 (2), 44 (3), 45 (3), 48 (2), 49 (2), 50 (1), 51 (1), 52 (1), 53 (1), 54 (1), 55 (1), 56 (1), 57 (1), 58 (1), 59 (1), 60 (1), 61 (1), 62 
(1), 63 (1), 64 (1), 65 (1), 66 (1), 67 (1), 68 (1), 69 (1), 70 (1), 71 (1), 72 (1), 73 (1), 74 (1), 75 (1), 76 (1), 77 (1), 78 (1), 79 (1), 80 (1), 81 (1), 82 (1), 83 (1), 84 (1), 85 (1), 86 (1)| |24.|__q__|17|| |25.|__salute__|1|| |26.|__seg__|48| @__rend__ (2) : decorInit (2) • @__type__ (46) : milestoneunit (46)| |27.|__signed__|2|| |28.|__trailer__|2||
25.186404
1,761
0.589813
yue_Hant
0.331801
edb8f975b48eb528c1f88df4e893a337614c2d7e
2,998
md
Markdown
mhallowell-Indentation_and_spacing.md
nicklynch10/FMCtCP
84a271b51aacae738747a62649621a0be468e4b4
[ "MIT" ]
null
null
null
mhallowell-Indentation_and_spacing.md
nicklynch10/FMCtCP
84a271b51aacae738747a62649621a0be468e4b4
[ "MIT" ]
null
null
null
mhallowell-Indentation_and_spacing.md
nicklynch10/FMCtCP
84a271b51aacae738747a62649621a0be468e4b4
[ "MIT" ]
null
null
null
## Indentation and Spacing

This chapter focuses on making indentation and spacing clear using if-statements and while-loops.

For the first program, I made the simplest code I could think of that involved indentation. Its only job was to turn the cpx red forever. In Python, there isn't a direct "forever" loop, but a while-loop is equivalent. The indentation is clearer in Python, but both programs show that the performance function is inside the loop. If "setAllPixelsTo(RED)" weren't indented, the program wouldn't run; an error would occur saying "expected indentation," which must be satisfied in order for the program to work.

![Screen Shot 2018-03-13 at 8.44.14 AM.png](https://lh6.googleusercontent.com/MRob3W86c6E-njEcqlsZXQuO2RXiGci4hgGy_5SJTUnoqhA_AoXUEYzEm81i-r5tNKp83VRTyLXdGbPafFJC_86JEK2uctmciR6jmjl0uRuum8tNTaVBHaZfA-3gKgHKcNTi_pSb)![Screen Shot 2018-03-13 at 8.58.49 AM.png](https://lh3.googleusercontent.com/zyFDYlvaxDATXkgdM2ywS7r0pxiHP2XzUrWWtmJOkSvu6-Ld_rlELEBtcjextnKTZN26L8fHSH0qmQe7slUFGaVrOa2tijF_4dH0N0zjzzdJJxLgLF7dYO9quBTicDXVV4jMkRT3)

____

The next program I wrote was slightly more complex. It adds another layer or two of indentation to show its importance. When I take "setAllPixelsTo(GREEN)" and move it inside the while-loop but outside of the if-statement, the program ends up with two instructions that both run on every pass of the loop. It then tries to perform both functions in turn, flashing between the two. I explain this in greater detail in my video.

![Screen Shot 2018-03-13 at 8.45.02 AM.png](https://lh4.googleusercontent.com/6WpicTm2hUQvmvo4LAfacT9LVqWg4BoNt9WfZOM6kZNW9PO11FosSjQksFRYtR0HD_gBudEoViUJRfHtvXcMYiXhCcx5dIVsoLFJSuW89ppsT9JwaVOVFVyVeYCjAtDRrTGvEdKW)

![Screen Shot 2018-03-13 at 9.14.56 AM copy](https://lh6.googleusercontent.com/BX4m0wx9t-50yG6X1EXih8FzW8B-9Q1cmHP9VuF3qalCi2NU7Ixtt-IVsTKvIb36p7gOU0B1923tjRG4YidSYdYxpMdgWGZZtHGAxSjtZ45W8fO3wLegfyXQ1ZWrk4-5AyNOKkB6)

____

As is often true in coding, there are multiple ways of writing the same program. The last program is another way of creating the second program I made. It involves less indentation, and it is therefore, in many ways, simpler. Everything is encapsulated in the if-statements. Additionally, I mentioned spaces towards the end of my video. Spaces matter in variable naming: a variable name cannot contain spaces. Where things are already separated by other punctuation, though, extra spaces do not matter; I could put any number of spaces between a function and its arguments and Python would ignore them and run the program properly.

![Screen Shot 2018-03-13 at 9.15.27 AM.png](https://lh6.googleusercontent.com/4CS0wF7tW5BuDLYURlc0L-KdZC_Eha-ddei5V-DgpNLv2UdQg5Cxm12NoIDWdVqnUfV5agZ2PMq3D2VnOGIJhVHSxuShVLQYjIpq6Hhosb_WeIYM7EMNqdjeUEiHxd9ot4l5INWp)

![screen-shot-2018-03-13-at-9-14-56-am-copy3.png](https://lh3.googleusercontent.com/2wd4T2LAkGGsGvKme3emsHvyvxdpi24yKj7WXWJT5kgz-XIN5ze68_GFhFXrj0JfJjJfB4z3DrtdeBgEkoNBEzR3WAoE8Nvve-mbcImAVK4nZckuh-Ip-vU6FQfvvuouBd1aeoRQ)

____
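Here is a text version of the programs described above (a sketch; `setAllPixelsTo`, `RED`, `GREEN`, and `buttonA` are the course's block-coding helpers, not standard CircuitPython names):

```python
# Program 1: the simplest indented program - turn the cpx red forever.
while True:
    setAllPixelsTo(RED)

# Program 2: another level of indentation, with an if/else inside the loop
# so only one of the two color functions can be true at a time.
while True:
    if buttonA:
        setAllPixelsTo(RED)
    else:
        setAllPixelsTo(GREEN)
```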
166.555556
619
0.836891
eng_Latn
0.943988
edb94c3192ceb56f13c9b5c259e0027d7d8ec798
8,292
md
Markdown
docs/classes/_lumin_.ui.cursor.md
leoz/docusaurus_crash
c2fc7e8e5d2d2f74d439816cacf1b9bfc90b8ee6
[ "Apache-2.0" ]
1
2020-01-16T05:23:54.000Z
2020-01-16T05:23:54.000Z
docs/classes/_lumin_.ui.cursor.md
leoz/docusaurus_crash
c2fc7e8e5d2d2f74d439816cacf1b9bfc90b8ee6
[ "Apache-2.0" ]
null
null
null
docs/classes/_lumin_.ui.cursor.md
leoz/docusaurus_crash
c2fc7e8e5d2d2f74d439816cacf1b9bfc90b8ee6
[ "Apache-2.0" ]
null
null
null
--- id: "_lumin_.ui.cursor" title: "lumin.ui.Cursor" sidebar_label: "lumin.ui.Cursor" --- [magic-script-typings](../index.md) › [&quot;lumin&quot;](../modules/_lumin_.md) › [ui](../modules/_lumin_.ui.md) › [Cursor](_lumin_.ui.cursor.md) ## Hierarchy * **Cursor** ## Index ### Constructors * [constructor](_lumin_.ui.cursor.md#constructor) ### Methods * [GetCursorSnapMinDistance](_lumin_.ui.cursor.md#static-getcursorsnapmindistance) * [GetCursorSnapMinTime](_lumin_.ui.cursor.md#static-getcursorsnapmintime) * [GetCursorSnapMode](_lumin_.ui.cursor.md#static-getcursorsnapmode) * [GetGravityWellBlendTime](_lumin_.ui.cursor.md#static-getgravitywellblendtime) * [GetGravityWellMaxDistance](_lumin_.ui.cursor.md#static-getgravitywellmaxdistance) * [GetMoveRate](_lumin_.ui.cursor.md#static-getmoverate) * [GetPlaneDepth](_lumin_.ui.cursor.md#static-getplanedepth) * [GetPosition](_lumin_.ui.cursor.md#static-getposition) * [GetScale](_lumin_.ui.cursor.md#static-getscale) * [GetState](_lumin_.ui.cursor.md#static-getstate) * [IsEnabled](_lumin_.ui.cursor.md#static-isenabled) * [ResetDefaults](_lumin_.ui.cursor.md#static-resetdefaults) * [ResetState](_lumin_.ui.cursor.md#static-resetstate) * [SetCursorSnapMinDistance](_lumin_.ui.cursor.md#static-setcursorsnapmindistance) * [SetCursorSnapMinTime](_lumin_.ui.cursor.md#static-setcursorsnapmintime) * [SetCursorSnapMode](_lumin_.ui.cursor.md#static-setcursorsnapmode) * [SetEnabled](_lumin_.ui.cursor.md#static-setenabled) * [SetGravityWellBlendTime](_lumin_.ui.cursor.md#static-setgravitywellblendtime) * [SetGravityWellMaxDistance](_lumin_.ui.cursor.md#static-setgravitywellmaxdistance) * [SetMoveRate](_lumin_.ui.cursor.md#static-setmoverate) * [SetPlaneDepth](_lumin_.ui.cursor.md#static-setplanedepth) * [SetScale](_lumin_.ui.cursor.md#static-setscale) * [SetStartupPosition](_lumin_.ui.cursor.md#static-setstartupposition) * [SetState](_lumin_.ui.cursor.md#static-setstate) * [TransitionToPanel](_lumin_.ui.cursor.md#static-transitiontopanel) ## Constructors ### constructor \+ **new Cursor**(): *[Cursor](_lumin_.ui.cursor.md)* **Returns:** *[Cursor](_lumin_.ui.cursor.md)* ## Methods ### `Static` GetCursorSnapMinDistance ▸ **GetCursorSnapMinDistance**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetCursorSnapMinTime ▸ **GetCursorSnapMinTime**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetCursorSnapMode ▸ **GetCursorSnapMode**(`prism`: [Prism](_lumin_.prism.md)): *boolean* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *boolean* ___ ### `Static` GetGravityWellBlendTime ▸ **GetGravityWellBlendTime**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetGravityWellMaxDistance ▸ **GetGravityWellMaxDistance**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetMoveRate ▸ **GetMoveRate**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetPlaneDepth ▸ **GetPlaneDepth**(`prism`: [Prism](_lumin_.prism.md)): *number* 
**Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetPosition ▸ **GetPosition**(`prism`: [Prism](_lumin_.prism.md)): *[number, number, number]* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *[number, number, number]* ___ ### `Static` GetScale ▸ **GetScale**(`prism`: [Prism](_lumin_.prism.md)): *number* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *number* ___ ### `Static` GetState ▸ **GetState**(`prism`: [Prism](_lumin_.prism.md)): *[CursorHoverState](../enums/_lumin_.cursorhoverstate.md)* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *[CursorHoverState](../enums/_lumin_.cursorhoverstate.md)* ___ ### `Static` IsEnabled ▸ **IsEnabled**(`prism`: [Prism](_lumin_.prism.md)): *boolean* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *boolean* ___ ### `Static` ResetDefaults ▸ **ResetDefaults**(`prism`: [Prism](_lumin_.prism.md)): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *void* ___ ### `Static` ResetState ▸ **ResetState**(`prism`: [Prism](_lumin_.prism.md)): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | **Returns:** *void* ___ ### `Static` SetCursorSnapMinDistance ▸ **SetCursorSnapMinDistance**(`prism`: [Prism](_lumin_.prism.md), `distance`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `distance` | number | **Returns:** *void* ___ ### `Static` SetCursorSnapMinTime ▸ **SetCursorSnapMinTime**(`prism`: [Prism](_lumin_.prism.md), `seconds`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `seconds` | number | **Returns:** *void* ___ ### `Static` SetCursorSnapMode ▸ **SetCursorSnapMode**(`prism`: [Prism](_lumin_.prism.md), `snap`: boolean): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `snap` | boolean | **Returns:** *void* ___ ### `Static` SetEnabled ▸ **SetEnabled**(`prism`: [Prism](_lumin_.prism.md), `a_enabled`: boolean): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `a_enabled` | boolean | **Returns:** *void* ___ ### `Static` SetGravityWellBlendTime ▸ **SetGravityWellBlendTime**(`prism`: [Prism](_lumin_.prism.md), `seconds`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `seconds` | number | **Returns:** *void* ___ ### `Static` SetGravityWellMaxDistance ▸ **SetGravityWellMaxDistance**(`prism`: [Prism](_lumin_.prism.md), `distance`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `distance` | number | **Returns:** *void* ___ ### `Static` SetMoveRate ▸ **SetMoveRate**(`prism`: [Prism](_lumin_.prism.md), `a_rate`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `a_rate` | number | **Returns:** *void* ___ ### `Static` SetPlaneDepth ▸ **SetPlaneDepth**(`prism`: [Prism](_lumin_.prism.md), `a_depth`: number): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `a_depth` | number | **Returns:** *void* ___ ### `Static` SetScale ▸ **SetScale**(`prism`: [Prism](_lumin_.prism.md), `a_scale`: number): *void* 
**Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `a_scale` | number | **Returns:** *void* ___ ### `Static` SetStartupPosition ▸ **SetStartupPosition**(`prism`: [Prism](_lumin_.prism.md), `position`: [number, number]): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `position` | [number, number] | **Returns:** *void* ___ ### `Static` SetState ▸ **SetState**(`prism`: [Prism](_lumin_.prism.md), `cursorState`: [CursorHoverState](../enums/_lumin_.cursorhoverstate.md)): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `cursorState` | [CursorHoverState](../enums/_lumin_.cursorhoverstate.md) | **Returns:** *void* ___ ### `Static` TransitionToPanel ▸ **TransitionToPanel**(`prism`: [Prism](_lumin_.prism.md), `panel`: [UiPanel](_lumin_.ui.uipanel.md)): *void* **Parameters:** Name | Type | ------ | ------ | `prism` | [Prism](_lumin_.prism.md) | `panel` | [UiPanel](_lumin_.ui.uipanel.md) | **Returns:** *void*
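Since this generated reference page has no usage example, here is a small hypothetical sketch (the import path and the idea of obtaining `prism` elsewhere in the app are assumptions; the three static calls come from the index above):

```typescript
import { ui, Prism } from "lumin";

// Shrink the cursor, push its plane slightly deeper, then hide it.
function tuneCursor(prism: Prism): void {
  ui.Cursor.SetScale(prism, 0.5);
  ui.Cursor.SetPlaneDepth(prism, 0.25);
  ui.Cursor.SetEnabled(prism, false);
}
```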
19.980723
146
0.638929
yue_Hant
0.577505
edb9d6bc6a9fcf3e88bb97ebf4132bc5f2a8596b
1,920
md
Markdown
mso/r/mso_schema_template_deploy.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
78
2021-01-15T14:10:30.000Z
2022-02-14T09:17:40.000Z
mso/r/mso_schema_template_deploy.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
5
2021-04-09T15:21:28.000Z
2022-01-28T19:02:05.000Z
mso/r/mso_schema_template_deploy.md
chrisjaimon2012/tfwriter
1ea629ed386bbe6a8f21617a430dae19ba536a98
[ "MIT" ]
30
2021-01-17T13:16:57.000Z
2022-03-21T12:52:08.000Z
# mso_schema_template_deploy [back](../mso.md) ### Index - [Example Usage](#example-usage) - [Variables](#variables) - [Resource](#resource) - [Outputs](#outputs) ### Terraform ```terraform terraform { required_providers { mso = ">= 0.1.5" } } ``` [top](#index) ### Example Usage ```terraform module "mso_schema_template_deploy" { source = "./modules/mso/r/mso_schema_template_deploy" # schema_id - (required) is a type of string schema_id = null # site_id - (optional) is a type of string site_id = null # template_name - (required) is a type of string template_name = null # undeploy - (optional) is a type of bool undeploy = null } ``` [top](#index) ### Variables ```terraform variable "schema_id" { description = "(required)" type = string } variable "site_id" { description = "(optional)" type = string default = null } variable "template_name" { description = "(required)" type = string } variable "undeploy" { description = "(optional)" type = bool default = null } ``` [top](#index) ### Resource ```terraform resource "mso_schema_template_deploy" "this" { # schema_id - (required) is a type of string schema_id = var.schema_id # site_id - (optional) is a type of string site_id = var.site_id # template_name - (required) is a type of string template_name = var.template_name # undeploy - (optional) is a type of bool undeploy = var.undeploy } ``` [top](#index) ### Outputs ```terraform output "id" { description = "returns a string" value = mso_schema_template_deploy.this.id } output "site_id" { description = "returns a string" value = mso_schema_template_deploy.this.site_id } output "undeploy" { description = "returns a bool" value = mso_schema_template_deploy.this.undeploy } output "this" { value = mso_schema_template_deploy.this } ``` [top](#index)
17.297297
56
0.656771
eng_Latn
0.69297
edba31262b3d5e0bb163d94f323ab0d4e359bcf2
8,958
md
Markdown
_posts/2018-10-02-Kigali-Day-7.md
ArunShan1/ArunShan1.github.io
a5adaeedcea0bd3dff9fef5798d0bdfd27c8ec33
[ "MIT" ]
null
null
null
_posts/2018-10-02-Kigali-Day-7.md
ArunShan1/ArunShan1.github.io
a5adaeedcea0bd3dff9fef5798d0bdfd27c8ec33
[ "MIT" ]
null
null
null
_posts/2018-10-02-Kigali-Day-7.md
ArunShan1/ArunShan1.github.io
a5adaeedcea0bd3dff9fef5798d0bdfd27c8ec33
[ "MIT" ]
null
null
null
---
layout: post
title: Kigali Day 7 - Data Day!
date: "October 2, 2018"
---

I decided I would continue the good work from yesterday and do some data analysis!

## Wikipedia Click Stream Analysis

This is a brief analysis of [Wikipedia ClickStream Data](https://meta.wikimedia.org/wiki/Research:Wikipedia_clickstream). This is a really awesome data set that gets published monthly, summarising how many clicks there are from any Wikipedia page (or external source) to another. I expect that the average visitor to these pages probably isn't from Rwanda because:

1. The average Rwandan user already knows a lot about Rwanda.
2. Wikipedia is only the 14th most popular website in Rwanda according to [Alexa](https://www.alexa.com/topsites/countries/RW) (7th in the UK).
3. The population is only 12 million, of which only [1.5 million](http://www.internetlivestats.com/internet-users/rwanda/) have any internet access.
4. There are almost [1 million](https://tradingeconomics.com/rwanda/international-tourism-number-of-arrivals-wb-data.html) visitors to Rwanda annually, almost all of whom have internet access.

Initially, we define **'Rwanda pages'** on Wikipedia to be those that contain Rwanda in them. These are the most popular English and French links as a percentage of all such clicks.

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type { vertical-align: middle; }
    .dataframe tbody tr th { vertical-align: top; }
    .dataframe thead th { text-align: right; }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr> <th>Language</th> <th>Page</th> <th>% of all Rwanda Page clicks</th> </tr>
  </thead>
  <tbody>
    <tr> <td rowspan="3" valign="top">en</td> <td>Rwandan_genocide</td> <td>23.22</td> </tr>
    <tr> <td>Rwanda</td> <td>22.64</td> </tr>
    <tr> <td>Hotel_Rwanda</td> <td>5.71</td> </tr>
    <tr> <td rowspan="2" valign="top">fr</td> <td>Rwanda</td> <td>3.86</td> </tr>
    <tr> <td>Génocide_des_Tutsis_au_Rwanda</td> <td>3.42</td> </tr>
    <tr> <td rowspan="5" valign="top">en</td> <td>Rwandan_Civil_War</td> <td>2.01</td> </tr>
    <tr> <td>Rwandan_Patriotic_Front</td> <td>1.57</td> </tr>
    <tr> <td>History_of_Rwanda</td> <td>1.24</td> </tr>
    <tr> <td>Economy_of_Rwanda</td> <td>1.10</td> </tr>
    <tr> <td>Kinyarwanda</td> <td>1.09</td> </tr>
  </tbody>
</table>
</div>

It's really sad, but not necessarily surprising, to see that there are more clicks to the genocide than there are to Rwanda itself. Those two seem to dominate the clicks; outside of that, there are a few other historical websites and then some more positive news, with the economy and language getting some interest.

Now we'll see where people are coming from into these Rwanda pages and where they go afterwards. I've removed 'other' tags (e.g. search engines, external websites etc.) to focus on where they're coming from within Wikipedia.

<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~ArunShan1/32.embed" height="525px" width="100%"></iframe>

Kagame seems to have a clearly important connection to Rwanda, as does the capital, Kigali. The other relevant pages are genocide-related: the ethnic groups that used to define people here (people no longer talk about them publicly), bordering countries, as well as a few celebrities with Rwanda connections. Rwanda would do well to continue to play on Stromae, the Belgian rapper with a Rwandan father, and Don Cheadle, who plays Paul Rusesabagina, the hero from Hotel Rwanda.
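As a rough sketch of how these click shares can be computed from the raw dump (illustrative only - the file name and the "title contains Rwanda" rule are my assumptions; the public clickstream files are TSVs with `prev`, `curr`, `type` and `n` columns):

```python
import pandas as pd

# Hypothetical local copy of a monthly clickstream dump.
df = pd.read_csv(
    "clickstream-enwiki-2018-09.tsv.gz",
    sep="\t",
    names=["prev", "curr", "type", "n"],
)

# "Rwanda pages" = pages whose title contains "Rwanda".
rwanda = df[df["curr"].str.contains("Rwanda", case=False, na=False)]

# Each page's share of all clicks into Rwanda pages, as a percentage.
clicks = rwanda.groupby("curr")["n"].sum().sort_values(ascending=False)
share = (100 * clicks / clicks.sum()).round(2)
print(share.head(10))
```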
We also have a few large popular pages, like the main Wikipedia page and the national electoral calendar, which will send hits to loads of different places, but very few of those hits will be to Rwanda. So now we look at it from the perspective of a given page and see what percentage of those hits are to Rwanda pages. We require a minimum of 300 hits to Rwanda pages and see which pages are best at pushing to Rwanda.

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type { vertical-align: middle; }
    .dataframe tbody tr th { vertical-align: top; }
    .dataframe thead th { text-align: right; }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr> <th>Language</th> <th>Prior Page</th> <th>Hits to Non-Rwanda</th> <th>Hits to Rwanda</th> <th>% of all hits to Rwanda</th> </tr>
  </thead>
  <tbody>
    <tr> <td rowspan="8" valign="top">en</td> <td>Kigali</td> <td>1871</td> <td>1782</td> <td>48.78</td> </tr>
    <tr> <td>Augustin_Bizimungu</td> <td>522</td> <td>304</td> <td>36.80</td> </tr>
    <tr> <td>Tutsi</td> <td>4017</td> <td>1795</td> <td>30.88</td> </tr>
    <tr> <td>Diane_Rwigara</td> <td>1183</td> <td>527</td> <td>30.82</td> </tr>
    <tr> <td>Ruanda-Urundi</td> <td>1215</td> <td>536</td> <td>30.61</td> </tr>
    <tr> <td>Paul_Kagame</td> <td>9764</td> <td>3705</td> <td>27.51</td> </tr>
    <tr> <td>Hutu</td> <td>3416</td> <td>1154</td> <td>25.25</td> </tr>
    <tr> <td>Interahamwe</td> <td>2351</td> <td>789</td> <td>25.13</td> </tr>
    <tr> <td>fr</td> <td>Paul_Kagame</td> <td>1487</td> <td>392</td> <td>20.86</td> </tr>
    <tr> <td rowspan="6" valign="top">en</td> <td>Paul_Rusesabagina</td> <td>3260</td> <td>837</td> <td>20.43</td> </tr>
    <tr> <td>First_Congo_War</td> <td>5460</td> <td>808</td> <td>12.89</td> </tr>
    <tr> <td>Stromae</td> <td>6698</td> <td>972</td> <td>12.67</td> </tr>
    <tr> <td>Second_Congo_War</td> <td>11077</td> <td>832</td> <td>6.99</td> </tr>
    <tr> <td>Genocide</td> <td>17992</td> <td>971</td> <td>5.12</td> </tr>
    <tr> <td>Don_Cheadle</td> <td>26428</td> <td>1363</td> <td>4.90</td> </tr>
  </tbody>
</table>
</div>

Again, a lot of genocide-related pages, but Kagame, Stromae and Don Cheadle are all relatively good at pushing people to find out more about Rwanda.

Finally, we take a brief look at English vs French Wikipedia and see if there's a substantial difference.

![French vs English](/images/graph.png)

We see that there is a clear difference and that French Wikipedia is much more interested in Rwanda than English Wikipedia is - almost twice as much. This is in contrast to other countries in the area like Kenya, where less than 3% of all traffic is from French Wikipedia.

So overall, some pretty cool findings - I need to look at it over time to get a better idea, but that'll require a lot more data processing that my computer isn't quite fit to handle...

Non Data Stuff
=================

![Buffet](/images/buffet.jpg "Buffet")
*Super nice buffet for lunch*

Had a nice lunch at Africa Bite for less than £3 while I was working.

![Pizza](/images/pizza.jpg "Pizza")
*Pizza with Baptiste*

In the evening, I met up with Baptiste, the waiter from Hotel Chez Lando, and we had pizza at Simba. He's just finishing a Bachelor's in Business Information Technology at the University of Technology and Business Studies. We talked about life in Rwanda and some of the challenges for youth. We got particularly deep into why sports gambling has become so popular and how it's the alternative to work for so many young unemployed people.
Daily Summary
===========

**Kinyarwanda Word**: Rwiza *(Beautiful)*

**Question**: What percentage of Rwandans have family outside of Kigali that they have to support? (1 in 3 was the estimate)

**Thought**: The buffet was totally packed at lunch. The usually empty road was packed and the seats were full. I haven't seen too many businesses like that!

**Problem**: You have 9 balls. They all look identical. All weigh the same except one of them, which is lighter than the others. You have a scale that can tell you which side is heavier. What is the minimum number of weighings it takes to determine which is the light ball?

**Business opportunity**: There's definitely something in this sports gambling industry. Something that can make its users more productive and use their risk appetite to support businesses that struggle to get capital. One thought was to just arbitrage the potentially slower prices against those online, or to set up a cheaper or more tech-savvy competitor and use it to find talent for other purposes.

**Takeaway**: It was super nice to get to know a Rwandan and learn more about the culture and challenges. Need to keep doing that!
33.425373
474
0.65182
eng_Latn
0.98444
edbb494d139f38498543fdbc0483752b9385cb97
8,493
md
Markdown
src/content/guides/hot-module-replacement.md
Yayure/webpack.js.org
c6731f0da4d786ff5f0d15a6802d90ffaf08555b
[ "CC-BY-4.0" ]
1
2020-02-19T09:13:16.000Z
2020-02-19T09:13:16.000Z
src/content/guides/hot-module-replacement.md
Yayure/webpack.js.org
c6731f0da4d786ff5f0d15a6802d90ffaf08555b
[ "CC-BY-4.0" ]
null
null
null
src/content/guides/hot-module-replacement.md
Yayure/webpack.js.org
c6731f0da4d786ff5f0d15a6802d90ffaf08555b
[ "CC-BY-4.0" ]
null
null
null
---
title: Hot Module Replacement
sort: 6
contributors:
  - jmreidy
  - jhnns
  - sararubin
  - aiduryagin
  - rohannair
  - joshsantos
  - drpicox
  - skipjack
  - sbaidon
  - gdi2290
  - bdwain
  - caryli
  - xgirma
  - EugeneHlushko
  - aviyacohen
related:
  - title: Concepts - Hot Module Replacement
    url: /concepts/hot-module-replacement
  - title: API - Hot Module Replacement
    url: /api/hot-module-replacement
---

T> This guide extends on code examples found in the [Development](/guides/development) guide.

Hot Module Replacement (or HMR) is one of the most useful features offered by webpack. It allows all kinds of modules to be updated at runtime without the need for a full refresh. This page focuses on the __implementation__, while the [concepts page](/concepts/hot-module-replacement) gives more details on how it works and why it's useful.

W> __HMR__ is not intended for use in production, meaning it should only be used in development. See the [production guide](/guides/production) for more information.

## Enabling HMR

This feature is a great productivity boost. All we need to do is update our [webpack-dev-server](https://github.com/webpack/webpack-dev-server) configuration, and use webpack's built-in HMR plugin. We'll also remove the entry point for `print.js`, as it is now consumed by the `index.js` module.

T> If you took the route of using `webpack-dev-middleware` instead of `webpack-dev-server`, please use the [`webpack-hot-middleware`](https://github.com/webpack-contrib/webpack-hot-middleware) package to enable HMR on your custom server or application.

__webpack.config.js__

``` diff
  const path = require('path');
  const HtmlWebpackPlugin = require('html-webpack-plugin');
  const CleanWebpackPlugin = require('clean-webpack-plugin');
+ const webpack = require('webpack');

  module.exports = {
    entry: {
-     app: './src/index.js',
-     print: './src/print.js'
+     app: './src/index.js'
    },
    devtool: 'inline-source-map',
    devServer: {
      contentBase: './dist',
+     hot: true
    },
    plugins: [
      new CleanWebpackPlugin(),
      new HtmlWebpackPlugin({
        title: 'Hot Module Replacement'
      }),
+     new webpack.HotModuleReplacementPlugin()
    ],
    output: {
      filename: '[name].bundle.js',
      path: path.resolve(__dirname, 'dist')
    }
  };
```

T> You can use the CLI to modify the [webpack-dev-server](https://github.com/webpack/webpack-dev-server) configuration with the following command: `webpack-dev-server --hotOnly`.

Now let's update the `index.js` file so that when a change inside `print.js` is detected, we tell webpack to accept the updated module.

__index.js__

``` diff
  import _ from 'lodash';
  import printMe from './print.js';

  function component() {
    var element = document.createElement('div');
    var btn = document.createElement('button');

    element.innerHTML = _.join(['Hello', 'webpack'], ' ');

    btn.innerHTML = 'Click me and check the console!';
    btn.onclick = printMe;

    element.appendChild(btn);

    return element;
  }

  document.body.appendChild(component());
+
+ if (module.hot) {
+   module.hot.accept('./print.js', function() {
+     console.log('Accepting the updated printMe module!');
+     printMe();
+   })
+ }
```

Change the `console.log` statement in `print.js`, and you should see the following output in the browser console (don't worry about that `button.onclick = printMe()` output for now, we will also update that part later).

__print.js__

``` diff
  export default function printMe() {
-   console.log('I get called from print.js!');
+   console.log('Updating print.js...')
  }
```

__console__

``` diff
[HMR] Waiting for update signal from WDS...
main.js:4395 [WDS] Hot Module Replacement enabled.
+ 2main.js:4395 [WDS] App updated. Recompiling...
+ main.js:4395 [WDS] App hot update...
+ main.js:4330 [HMR] Checking for updates on the server...
+ main.js:10024 Accepting the updated printMe module!
+ 0.4b8ee77….hot-update.js:10 Updating print.js...
+ main.js:4330 [HMR] Updated modules:
+ main.js:4330 [HMR]  - 20
```

## Via the Node.js API

When using the webpack dev server with the Node.js API, don't put the dev server options on the webpack configuration object. Instead, pass them as a second parameter upon creation. For example:

`new WebpackDevServer(compiler, options)`

To enable HMR, you also need to modify the webpack configuration object to include the HMR entry points. The `webpack-dev-server` package includes a method called `addDevServerEntrypoints` which you can use to do this. Here's a small example of how that might look:

__dev-server.js__

``` javascript
const webpackDevServer = require('webpack-dev-server');
const webpack = require('webpack');

const config = require('./webpack.config.js');
const options = {
  contentBase: './dist',
  hot: true,
  host: 'localhost'
};

webpackDevServer.addDevServerEntrypoints(config, options);
const compiler = webpack(config);
const server = new webpackDevServer(compiler, options);

server.listen(5000, 'localhost', () => {
  console.log('dev server listening on port 5000');
});
```

T> If you're using [`webpack-dev-middleware`](/guides/development#using-webpack-dev-middleware), check out the [`webpack-hot-middleware`](https://github.com/webpack-contrib/webpack-hot-middleware) package to enable HMR on your custom dev server.

## Gotchas

Hot Module Replacement can be tricky to master. To show this, let's go back to our working example. If you go ahead and click the button on the example page, you will realize the console is still printing the old `printMe` function.

This is happening because the button's `onclick` event handler is still bound to the original `printMe` function.

To make this work with HMR, we need to update the code to bind it to the new `printMe` function using `module.hot.accept`:

__index.js__

``` diff
  import _ from 'lodash';
  import printMe from './print.js';

  function component() {
    var element = document.createElement('div');
    var btn = document.createElement('button');

    element.innerHTML = _.join(['Hello', 'webpack'], ' ');

    btn.innerHTML = 'Click me and check the console!';
    btn.onclick = printMe;  // onclick event is bound to the original printMe function

    element.appendChild(btn);

    return element;
  }

- document.body.appendChild(component());
+ let element = component(); // store the element to re-render it when print.js changes
+ document.body.appendChild(element);

  if (module.hot) {
    module.hot.accept('./print.js', function() {
      console.log('Accepting the updated printMe module!');
-     printMe();
+     document.body.removeChild(element);
+     element = component(); // re-render the "component" to update the click handler
+     document.body.appendChild(element);
    })
  }
```

This is just one example, but there are many others that can easily trip people up. Luckily, there are a lot of loaders out there (some of which are mentioned below) that will make hot module replacement much easier.

## HMR with Stylesheets

Hot Module Replacement with CSS is actually fairly straightforward with the help of the `style-loader`. This loader uses `module.hot.accept` behind the scenes to patch `<style>` tags when CSS dependencies are updated.

First let's install both loaders with the following command:

```bash
npm install --save-dev style-loader css-loader
```

Now let's update the configuration file to make use of the two loaders.

__webpack.config.js__

``` diff
  const path = require('path');
  const HtmlWebpackPlugin = require('html-webpack-plugin');
  const CleanWebpackPlugin = require('clean-webpack-plugin');
  const webpack = require('webpack');

  module.exports = {
    entry: {
      app: './src/index.js'
    },
    devtool: 'inline-source-map',
    devServer: {
      contentBase: './dist',
      hot: true
    },
+   module: {
+     rules: [
+       {
+         test: /\.css$/,
+         use: ['style-loader', 'css-loader']
+       }
+     ]
+   },
    plugins: [
      new CleanWebpackPlugin(),
      new HtmlWebpackPlugin({
        title: 'Hot Module Replacement'
      }),
      new webpack.HotModuleReplacementPlugin()
    ],
    output: {
      filename: '[name].bundle.js',
      path: path.resolve(__dirname, 'dist')
    }
  };
```

Hot loading stylesheets is as easy as importing them into a module:

__project__

``` diff
  webpack-demo
  | - package.json
  | - webpack.config.js
  | - /dist
    | - bundle.js
  | - /src
    | - index.js
    | - print.js
+   | - styles.css
```

__styles.css__

``` css
body {
  background: blue;
}
```

__index.js__

``` diff
  import _ from 'lodash';
  import printMe from './print.js';
+ import './styles.css';

  function component() {
    var element = document.createElement('div');
    var btn = document.createElement('button');

    element.innerHTML = _.join(['Hello', 'webpack'], ' ');

    btn.innerHTML = 'Click me and check the console!';
    btn.onclick = printMe;  // onclick event is bound to the original printMe function

    element.appendChild(btn);

    return element;
  }

  let element = component();
  document.body.appendChild(element);

  if (module.hot) {
    module.hot.accept('./print.js', function() {
      console.log('Accepting the updated printMe module!');
      document.body.removeChild(element);
      element = component(); // Re-render the "component" to update the click handler
      document.body.appendChild(element);
    })
  }
```

Change the style on `body` to `background: red;` and you should immediately see the page's background color change without a full refresh.

__styles.css__

``` diff
  body {
-   background: blue;
+   background: red;
  }
```

## Other Code and Frameworks

There are many other loaders and examples out in the community to make HMR interact smoothly with a variety of frameworks and libraries...

- [React Hot Loader](https://github.com/gaearon/react-hot-loader): Tweak React components in real time.
- [Vue Loader](https://github.com/vuejs/vue-loader): This loader supports HMR for Vue components out of the box.
- [Elm Hot Loader](https://github.com/fluxxu/elm-hot-loader): Supports HMR for the Elm programming language.
- [Angular HMR](https://github.com/gdi2290/angular-hmr): No loader necessary! A simple change to your NgModule main file is all that's required to have full control over the HMR APIs.

T> If you know of any other loaders or plugins that help with or enhance Hot Module Replacement, please submit a pull request to add them to this list!
24.546243
211
0.673143
yue_Hant
0.321036
edbdedcb44d9f334ca7af7aa07153d79295393dd
483
md
Markdown
recipes/slices/caramel-cake.md
hadley/recipes
56d456f9cb8fb8e04046112bb4447335aabe7897
[ "CC-BY-4.0" ]
43
2015-07-21T13:12:46.000Z
2022-02-19T16:49:06.000Z
recipes/slices/caramel-cake.md
hadley/recipes
56d456f9cb8fb8e04046112bb4447335aabe7897
[ "CC-BY-4.0" ]
3
2015-12-20T19:36:23.000Z
2019-02-25T05:12:01.000Z
recipes/slices/caramel-cake.md
hadley/recipes
56d456f9cb8fb8e04046112bb4447335aabe7897
[ "CC-BY-4.0" ]
28
2016-03-13T19:47:46.000Z
2021-08-11T15:34:47.000Z
# Caramel cake

* 6oz butter
* 1 T golden syrup
* 1/2 c raisins
* 1 c raw sugar
* 2 c flour
* 2 t baking powder
* ICING:
* 1 1/2 T butter
* 1 1/2 T golden syrup
* Vanilla
* 1 1/2 c icing sugar

Melt butter and syrup then add raisins and dry ingredients. Press into sponge roll tin and bake 20 minutes at 180C. Ice and cut when warm.

ICING: Melt butter, syrup and vanilla. Add icing sugar and mix well.

Time: 50 minutes
Comments: very good
Source: Rally cook book, page 63
19.32
140
0.708075
eng_Latn
0.956792
edbe2ca108b62073394b8690848ba9ae8d78a671
437
md
Markdown
README.md
chrmlinux/tinyESPNow
24da626d42ade91f17b9c265e5bca23e376c3ccd
[ "MIT" ]
2
2022-02-16T02:27:07.000Z
2022-03-15T16:33:19.000Z
README.md
chrmlinux/tinyESPNow
24da626d42ade91f17b9c265e5bca23e376c3ccd
[ "MIT" ]
null
null
null
README.md
chrmlinux/tinyESPNow
24da626d42ade91f17b9c265e5bca23e376c3ccd
[ "MIT" ]
null
null
null
# tinyESPNow

tinyESPNow makes it easy to implement ESPNow functionality. An epoch-making library!

* The two files in the example folder are the same. The program that is started later will make the first transmission.

MIT license; all text here must be included in any redistribution.

# update history

# 0.0.1 1st

https://qiita-com.translate.goog/chrmlinux03/items/0c2b84ac80483391ee7c?_x_tr_sl=ja&_x_tr_tl=en&_x_tr_hl=ja&_x_tr_pto=wapp
24.277778
122
0.796339
eng_Latn
0.934031
edbf4b34389d268af99746471fc5557a674fb6d4
5,348
md
Markdown
docs/code-quality/version-compatibility-for-code-analysis-check-in-policies.md
MicrosoftDocs/visualstudio-docs.pt-br
b1882dc108a37c4caebbf5a80c274c440b9bbd4a
[ "CC-BY-4.0", "MIT" ]
5
2019-02-19T20:22:40.000Z
2022-02-19T14:55:39.000Z
docs/code-quality/version-compatibility-for-code-analysis-check-in-policies.md
MicrosoftDocs/visualstudio-docs.pt-br
b1882dc108a37c4caebbf5a80c274c440b9bbd4a
[ "CC-BY-4.0", "MIT" ]
32
2018-08-24T19:12:03.000Z
2021-03-03T01:30:48.000Z
docs/code-quality/version-compatibility-for-code-analysis-check-in-policies.md
MicrosoftDocs/visualstudio-docs.pt-br
b1882dc108a37c4caebbf5a80c274c440b9bbd4a
[ "CC-BY-4.0", "MIT" ]
25
2017-11-02T16:03:15.000Z
2021-10-02T02:18:00.000Z
---
title: Version compatibility for code analysis check-in policies
ms.date: 11/04/2016
description: Learn how Team System 2008 Team Foundation Server and Team Foundation Server 2010 evaluate Visual Studio check-in policies differently.
ms.custom: SEO-VS-2020
ms.topic: conceptual
helpviewer_keywords:
- version compatibility, code analysis check-in policy
- check-in policies, version compatibility for code analysis
ms.assetid: 1af376e3-3be7-4445-803b-76a858567a5b
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-code-analysis
ms.workload:
- multiple
ms.openlocfilehash: 1cd049a8e2aea94e6dc3b581c5b6ca8ccf3ab4e8
ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/13/2021
ms.locfileid: "126610834"
---
# <a name="version-compatibility-for-code-analysis-check-in-policies"></a>Version compatibility for code analysis check-in policies

If you have to evaluate and author code analysis check-in policies using different versions of [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)], you should know the differences in how [!INCLUDE[vstsTfsOrcasLong](../code-quality/includes/vststfsorcaslong_md.md)] and [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] evaluate check-in policies.

## <a name="version-compatibility-for-evaluating-check-in-policies"></a>Version compatibility for evaluating check-in policies

- When code analysis check-in policies are evaluated in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)], any rules that exist in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] but do not exist in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)] are ignored.

- When code analysis check-in policies are evaluated in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)], any new rules unique to [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)] are ignored.

- If the code analysis check-in policy specifies rule assemblies, [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)] ignores any rules specified by assemblies that it does not recognize.

- If the code analysis check-in policy specifies rule assemblies that [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] does not recognize, a message is displayed.

## <a name="version-compatibility-for-authoring-check-in-policies"></a>Version compatibility for authoring check-in policies

- If you created a code analysis check-in policy using the [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)] version of [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)], you cannot use the [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] version of [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)] to modify it. In addition, [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] cannot evaluate the policy.

- If you created a code analysis check-in policy using [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)] in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)], you can use [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)] in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)] to modify it, and the policy can also be evaluated by [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)]. After you modify the policy using [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)] in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)], you can no longer edit the policy using [!INCLUDE[esprtfc](../code-quality/includes/esprtfc_md.md)] in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)]. [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] can evaluate the policies without problems caused by strong-name mismatches.

- To author a code analysis check-in policy with rule settings that apply to both [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)] and [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)], you must author the policy in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)], make all the required changes, and save the policy. If the rule changes exist only in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)], you then modify and save the policy in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)]. After you save the policy in [!INCLUDE[vstsTfsOrcasShort](../code-quality/includes/vststfsorcasshort_md.md)], you can no longer change rule settings that exist only in [!INCLUDE[vstsTfsRosarioShort](../code-quality/includes/vststfsrosarioshort_md.md)].
113.787234
1,055
0.801982
por_Latn
0.865115
21304fa72997990a237aa9ecb30bebd78851d34e
1,192
md
Markdown
docs/edge-stack/1.13/topics/running/aes-extensions/index.md
jboykin-bread/ambassador-docs
c3350c901c2bbad88db3ac553e8b46fe73702292
[ "Apache-2.0" ]
null
null
null
docs/edge-stack/1.13/topics/running/aes-extensions/index.md
jboykin-bread/ambassador-docs
c3350c901c2bbad88db3ac553e8b46fe73702292
[ "Apache-2.0" ]
null
null
null
docs/edge-stack/1.13/topics/running/aes-extensions/index.md
jboykin-bread/ambassador-docs
c3350c901c2bbad88db3ac553e8b46fe73702292
[ "Apache-2.0" ]
null
null
null
# $productName$ extensions

The $productName$ contains a number of pre-built extensions that make running, deploying, and exposing your applications in Kubernetes easier.

Use of AES extensions is implemented via Kubernetes Custom Resources. Documentation for how to use the various extensions can be found throughout the [Using AES for Developers](../../using/) section of the docs. This section is concerned with how to operate and tune the deployment of these extensions in AES.

## Redis

Since AES does not use a database, Redis is used for caching state information when an extension requires it. The $productName$ shares the same Redis pool for all features that use Redis. The [Redis documentation](../aes-redis) contains detailed information on tuning how AES talks to Redis.

## The Extension process

The various extensions of the $productName$ run as a separate process from the $productName$ control plane and Envoy data plane.

### `AES_LOG_LEVEL`

The `AES_LOG_LEVEL` controls the logging of all of the extensions in AES. Log level names are case-insensitive. From least verbose to most verbose, valid log levels are `error`, `warn`/`warning`, `info`, `debug`, and `trace`.
36.121212
80
0.779362
eng_Latn
0.996387
213198a66e8a9201f17a0273bdd91b29356ca38b
768
md
Markdown
README.md
jbuisine/scenes-complexity
a886d3dac7e99c34c6ca76ef7e9eb21ac6f63638
[ "MIT" ]
null
null
null
README.md
jbuisine/scenes-complexity
a886d3dac7e99c34c6ca76ef7e9eb21ac6f63638
[ "MIT" ]
null
null
null
README.md
jbuisine/scenes-complexity
a886d3dac7e99c34c6ca76ef7e9eb21ac6f63638
[ "MIT" ]
null
null
null
# Scenes complexity

## Description

Project developed in order to study the complexity of a few scenes.

## Installation

```bash
git clone https://github.com/prise-3d/scenes-complexity.git
```

```bash
pip install -r requirements.txt
```

## How it works?

Generate estimator data from your own image data:

```bash
python run/scenes_classification_data.py --folder /path/to/scenes --estimators estimator_1,estimator2 --output estimators_data.csv
```

The data file is saved into the `data/generated` folder.

You can then try to clusterize images using:

```bash
python run/scenes_classification.py --data data/generated/estimators_data.csv --clusters 3 --output estimated_clusters.png
```

## Contributors

* [jbuisine](https://github.com/jbuisine)

## License

[MIT](LICENSE)
18.731707
130
0.75
eng_Latn
0.633882
2131b991e0eb34184772aa9e8ad16e3a98f1f3f0
4,786
md
Markdown
CHANGELOG.md
ministry-of-colour/aoeu
e4ccdb9c0b3852c5b5a63b6068e0379c82bdbb10
[ "MIT" ]
null
null
null
CHANGELOG.md
ministry-of-colour/aoeu
e4ccdb9c0b3852c5b5a63b6068e0379c82bdbb10
[ "MIT" ]
null
null
null
CHANGELOG.md
ministry-of-colour/aoeu
e4ccdb9c0b3852c5b5a63b6068e0379c82bdbb10
[ "MIT" ]
null
null
null
# Changelog

## 0.6.3-dev

## 0.6.2

Fixed Bugs

Fix `system` logic so shims directory is removed from `PATH` properly (#402, #406)

Support `.tool-versions` files that don't end in a newline (#403)

## 0.6.1

Features

* Make `where` command default to current version (#389)
* Optimize code for listing all plugins (#388)
* Document `$TRAVIS_BUILD_DIR` in the plugin guide (#386)
* Add `--asdf-tool-version` flag to plugin-test command (#381)
* Add `-p` flag to `local` command (#377)

Fixed Bugs

* Fix behavior of `current` command when multiple versions are set (#401)
* Fix fish shell init code (#392)
* Fix `plugin-test` command (#379)
* Add space before parenthesis in `current` command output (#371)

## 0.6.0

Features

* Add support for `ASDF_DATA_DIR` environment variable (#275, #335, #361, #364, #365)

Fixed Bugs

* Fix `asdf current` so it works when no versions are installed (#368, #353)
* Don't try to install system version (#369, #351)
* Make `resolve_symlink` function work with relative symlinks (#370, #366)
* Fix version changing code so it preserves symlinks (#329, #337)
* Fix ShellCheck warnings (#336)

## 0.5.1

Features

* Better formatting for `asdf list` output (#330, #331)

Fixed Bugs

* Correct environment variable name used for version lookup (#319, #326 #328)
* Remove unnecessary `cd` in `asdf.sh` (#333, #334)
* Correct Fish shell load script (#340)

## 0.5.0

Features

* Changed exit codes for shims so we use codes with special meanings when possible (#305, #310)
* Include plugin name in error message if plugin doesn't exist (#315)
* Add support for custom executable paths (#314)
* `asdf list` with no arguments should list all installed versions of all plugins (#311)

Fixed Bugs

* Print "No version set" message to stderr (#309)
* Fix check for asdf directories in path for Fish shell (#306)

## 0.4.3

Features

* Suggest action when no version is set (#291, #293)

Fixed Bugs

* Fix issue with asdf not always being added to beginning of `$PATH` (#288, #303, #304)
* Fix incorrect `ASDF_CONFIG_FILE` environment variable name (#300)
* Fix `asdf current` so it shows environment variables that are setting versions (#292, 294)

## 0.4.2

Features

* Add support for `ASDF_DEFAULT_TOOL_VERSIONS_FILENAME` environment variable (#201, #228)
* Only add asdf to `PATH` once (#261, #271)
* Add `--urls` flag to `plugin-list` commands (#273)

Fixed Bugs

* Incorrect `grep` command caused version command to look at the wrong tool when reporting the version (#262)

## 0.4.1

Features

* `asdf install` will also search for `.tool-versions` in parent directories (#237)

Fixed Bugs

* bad use of `sed` caused shims and `.tool-versions` to be duplicated with `-e` (#242, #250)
* `asdf list` now outputs ref-versions as used on `.tool-versions` file (#243)
* `asdf update` will explicitly use the `origin` remote when updating tags (#231)
* All code is now linted by shellcheck (#223)
* Add test to fail builds if banned commands are found (#251)

## 0.4.0

Features

* Add CONTRIBUTING guidelines and GitHub issue and pull request templates (#217)
* Add `plugin-list-all` command to list plugins from asdf-plugins repo. (#221)
* `asdf current` shows all current tool versions when given no args (#219)
* Add asdf-plugin-version metadata to shims (#212)
* Add release.sh script to automate release of new versions (#220)

Fixed Bugs

* Allow spaces on path containing the `.tool-versions` file (#224)
* Fixed bug in `--version` functionality so it works regardless of how asdf was installed (#198)

## 0.3.0

Features

* Add `update` command to make it easier to update asdf to the latest release (#172, #180)
* Add support for `system` version to allow passthrough to system installed tools (#55, #182)

Fixed Bugs

* Set `GREP_OPTIONS` and `GREP_COLORS` variables in util.sh so grep is always invoked with the correct settings (#170)
* Export `ASDF_DIR` variable so the Zsh plugin can locate asdf if it's in a custom location (#156)
* Don't add execute permission to files in a plugin's bin directory when adding the plugin (#124, #138, #154)

## 0.2.1

Features

* Determine global tool version even when used outside of home directory (#106)

Fixed Bugs

* Correct reading of `ref:` and `path:` versions (#112)
* Remove shims when uninstalling a version or removing a plugin (#122, #123, #125, #128, #131)
* Add a helpful error message to the install command (#135)

## 0.2.0

Features

* Improve plugin API for legacy file support (#87)
* Unify `asdf local` and `asdf global` version getters as `asdf current` (#83)
* Rename `asdf which` to `asdf current` (#78)

Fixed Bugs

* Fix bug that caused the `local` command to crash when the directory contains whitespace (#90)
* Misc typo corrections (#93, #99)

## 0.1.0

* First tagged release
29.361963
118
0.718972
eng_Latn
0.983821
21326892a1d0ae9c95217bbb3b06f63bb6f4ffd9
755
md
Markdown
README.md
xhdix/xhdix
8bd5d5d68b34f05ba2ac787d70ca788add81dde7
[ "Unlicense" ]
null
null
null
README.md
xhdix/xhdix
8bd5d5d68b34f05ba2ac787d70ca788add81dde7
[ "Unlicense" ]
null
null
null
README.md
xhdix/xhdix
8bd5d5d68b34f05ba2ac787d70ca788add81dde7
[ "Unlicense" ]
null
null
null
### Hi there 👋

<table border="0">
<td>

- 📫 How to reach me:
  - Email: [email protected]
  - Telegram: https://t.me/xhdix
  - Keybase (encrypted message): https://keybase.io/xhdix

</td>
<td>

- ⚡ Fun fact: ![https://twitter.com/dinoman_j](https://user-images.githubusercontent.com/12384263/115099961-12fa8d00-9f42-11eb-9995-15509163616c.png)

</td>
</table>

<!--
**xhdix/xhdix** is a ✨ _special_ ✨ repository because its `README.md` (this file) appears on your GitHub profile.

Here are some ideas to get you started:

- 🔭 I’m currently working on ...
- 🌱 I’m currently learning ...
- 👯 I’m looking to collaborate on ...
- 🤔 I’m looking for help with ...
- 💬 Ask me about ...
- 📫 How to reach me: ...
- 😄 Pronouns: ...
- ⚡ Fun fact: ...
-->
22.205882
135
0.639735
eng_Latn
0.733733
2132954319ab54801f4ee5ce783bb3ecf10bf536
914
md
Markdown
README.md
2019342a/numberCounter
24d3509fb4b24b80a86200c5983e63be0970039d
[ "MIT" ]
null
null
null
README.md
2019342a/numberCounter
24d3509fb4b24b80a86200c5983e63be0970039d
[ "MIT" ]
null
null
null
README.md
2019342a/numberCounter
24d3509fb4b24b80a86200c5983e63be0970039d
[ "MIT" ]
null
null
null
# numberCounter

A simple counter using SwiftUI.

## What is it?

A simple dummy iOS app that provides a simple MVC SwiftUI example. The app contains a single view, two buttons and a display. The user can click the buttons to increment/decrement the number that is currently displayed on the screen.

The app is a slight modification of an app in the book [A SwiftUI Kickstart](http://media.pragprog.com/newsletters/2020-10-14.html), which I suggest reading and buying.

## Why?

This is more of a reference app that I can come back to, as it is a very simple application to understand, read and test, but it still provides solid SwiftUI features and tests.

## Usage

Clone the repo and open the `Counter` folder using Xcode. From there, you can run the app, run the tests or run the preview on the `ContentView.swift`.

## Questions

Feel free to add suggestions, comments or anything else. Issues and PRs are always welcome!
43.52381
104
0.780088
eng_Latn
0.999566
2132b6c9ae84151419faa845625c91e8e2f3262e
5,922
md
Markdown
README.md
2uasimojo/standardize
bd9774d88717bd2cc06a1c7c1bbf996ffcc44e7e
[ "Apache-2.0" ]
null
null
null
README.md
2uasimojo/standardize
bd9774d88717bd2cc06a1c7c1bbf996ffcc44e7e
[ "Apache-2.0" ]
null
null
null
README.md
2uasimojo/standardize
bd9774d88717bd2cc06a1c7c1bbf996ffcc44e7e
[ "Apache-2.0" ]
null
null
null
# standardize

Standard development infrastructure and tooling to be used across repositories in an organization.

This work was inspired by, and partially cribbed from, [lyft/boilerplate](https://github.com/lyft/boilerplate).

## Overview

The principle behind this is to **copy** the standardized artifacts from this repository into the consuming repository. This is as opposed to pulling them dynamically on each use. In other words, **consumers update on demand.**

It might seem like a *dis*advantage for consumers to be allowed to get out of sync, but (as long as a system is in place to check/update frequently) it allows more careful and explicit curation of changes.

The multiplication of storage space is assumed to be insignificant. (Don't use this for huge binary blobs. If you need to standardize a compiled binary or similar, consider storing the _source_ here and compiling it at the target via your `update`.)

## Mechanism

A "standard" lives in a subdirectory of `standards` and is identified by the subdirectory's name. For example, standard Makefile content lives under `standards/make` and is identified as `make`.

A standard comprises:

- Files, which are copied verbatim into the consuming repository at update time, replacing whatever was there before. The source directory structure is mirrored in the consuming repository -- e.g. `standardize/standards/make/*` is copied into `${TARGET_REPO}/standards/make/*`.
- An `update` script (which can be any kind of executable, but please keep portability in mind). If present, this script is invoked twice during an update:
  - Once _before_ files are copied, with the command line argument `PRE`. This can be used to prepare for the copy and/or validate that it is allowed to happen. If the script exits nonzero, the update is aborted.
  - Once _after_ files are copied, with the command line argument `POST`. This can be used to perform any configuration required after files are laid down. For example, some files may need to be copied to other locations, or templated values therein substituted based on the environment of the consumer. If the script exits nonzero, the update is aborted (subsequent standards are not updated).

## Consuming

### Bootstrap

1. Copy the main [update script](standards/update) into your repo as `standards/update`. Make sure it is executable (`chmod +x`).

   **Note:** It is important that the `update` script be at the expected path, because one of the things it does is update itself!

2. Create a `Makefile` target as follows:

   ```makefile
   .PHONY: update_standards
   update_standards:
   	@standards/update
   ```

   **Note:** It is important that the `Makefile` target have the expected name, because (eventually) there may be automated jobs that use it to look for available updates.

3. Commit the above.

### Configure

The `update` script looks for a configuration file at `standards/update.cfg`. It contains a list of standards, which are simply the names of subdirectories under `standards`, one per line. Whitespace and `#`-style comments are allowed. For example, to adopt the `make` and `gofmt` standards, your `standards/update.cfg` may look like:

```
# Use common makefile targets and functions
make

# Enforce golang style using our gofmt configuration
gofmt
```

Opt into updates of a standard by including it in the file; otherwise you are opted out, even if you had previously used a given standard.

**Note:** Updates are applied in the order in which they are listed in the configuration. If standards need to be applied in a certain order (which should be avoided if at all possible), it should be called out in their respective READMEs.

### Update

Periodically, run `make update_standards` on a clean branch in your consuming repository. If it succeeds, commit the changes, being sure to notice if any new files were created.

**Sanity check the changes against your specific repository to ensure they didn't break anything.** If they did, please make every effort to fix the issue _in the standardize repo itself_ before resorting to local snowflake fixups (which will be overwritten the next time you update) or opting out of the standard.

## Contributing

- Create a subdirectory under `standards`. The name of the directory is the name of your standard. By convention, do not prefix your standard name with an underscore; such subdirectories are reserved for use by the infrastructure. In your subdirectory:
  - Add a `README.md` describing what your standard does and how it works.
  - Add any files that need to be copied into consuming repositories. (Optional -- you might have a standard that only needs to run `update`.)
  - Create an executable called `update`. (Optional -- you might have a standard that only needs to lay down files.)
    - It must accept exactly one command line argument, which will be either `PRE` or `POST`. The main driver will invoke `update PRE` _before_ copying files, and `update POST` _after_ copying files. (You may wish to ignore a phase, e.g. via `[[ "$1" == "PRE" ]] && exit 0`.)
    - It must indicate success or failure by exiting with zero or nonzero status, respectively. Failure will cause the main driver to abort.
- The main driver exports the following variables for use by `update`s:
  - `REPO_ROOT`: The fully-qualified path to the root directory of the repository in which we are running.
  - `REPO_NAME`: The short name (so like `standardize`, not `openshift/standardize`) of the git repository in which we are running. (Note that discovering this relies on the `origin` remote being configured properly.)
  - `STANDARDS_ROOT`: The path to the directory containing the main `update` driver and the standard subdirectories themselves. Of note, `${STANDARDS_ROOT}/_lib/` contains some utilities that may be useful for `update`s.
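To make the `PRE`/`POST` contract concrete, here is a minimal sketch of what an `update` script for a hypothetical standard could look like, written in Python for portability. The phase argument, exit-status convention, and `REPO_ROOT`/`REPO_NAME` variables are the documented interface above; everything else is illustrative:

```python
#!/usr/bin/env python3
"""Example `update` script for a hypothetical standard (illustrative only)."""
import os
import sys


def pre(repo_root: str) -> int:
    # Prepare for the copy and/or validate that it is allowed to happen.
    # A nonzero exit status aborts the update.
    if not os.path.isdir(os.path.join(repo_root, ".git")):
        print("not a git repository; refusing to update", file=sys.stderr)
        return 1
    return 0


def post(repo_root: str) -> int:
    # Perform any configuration required after files are laid down, e.g.
    # substituting templated values based on the consumer's environment.
    repo_name = os.environ.get("REPO_NAME", "unknown")
    print(f"standard applied to {repo_name}")
    return 0


if __name__ == "__main__":
    phase = sys.argv[1] if len(sys.argv) > 1 else ""
    repo_root = os.environ.get("REPO_ROOT", os.getcwd())
    if phase == "PRE":
        sys.exit(pre(repo_root))
    elif phase == "POST":
        sys.exit(post(repo_root))
    else:
        print(f"unknown phase: {phase!r}", file=sys.stderr)
        sys.exit(1)
```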
43.866667
98
0.756501
eng_Latn
0.999515
21330cec1dcf8f4acfca59208d75e4db0b442d33
1,647
md
Markdown
README.md
leoandika/html-analyzer
d0e0ea83fe24b4768dec4bec0645afecbfcdec20
[ "MIT" ]
null
null
null
README.md
leoandika/html-analyzer
d0e0ea83fe24b4768dec4bec0645afecbfcdec20
[ "MIT" ]
null
null
null
README.md
leoandika/html-analyzer
d0e0ea83fe24b4768dec4bec0645afecbfcdec20
[ "MIT" ]
null
null
null
# Welcome to HTML Analyzer (Backend) Project!

A backend-only HTML Analyzer project to analyze your web page, fully written in Go.

This application is able to analyze the following attributes:

- HTML version of the page
- Page title
- Heading level counts (from h1 to h6!)
- Number of internal and external links
- Number of inaccessible links (yep, you read that right!)
- Whether your page contains a login form or not

## How to run

A provided makefile makes everything easy! Simply run the following commands to do things:

1. Clone the project and change the current directory to the project folder
2. To build the project, just run `make build`
3. When you're done building, run `make run` to run the application. Remember, the default port used in this application is 8087. Watch out for any other applications running on the same port.
4. To run the pre-written tests, run `make test`
5. Last but not least, to remove the binary, run `make clean`

## How to use

After you run the application, you can use it by hitting the endpoint `/analyzehtml` with the GET method, passing your URL as a request parameter.

Example: `localhost:8087/analyzehtml?url=https://www.amazon.com`

You have to use a complete URL to use this application (`http://` or `https://` along with `www.`)

The response will then be returned in JSON format. A small script example is included at the end of this README.

## How to contribute

As they say, no project is perfect. This project also needs many improvements. Therefore, if you find anything you can improve, please don't hesitate to create an issue and make a pull request. I will review it as soon as possible! :)
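As referenced above, here is a quick way to exercise the endpoint from a script. This is a sketch using Python's `requests` library; the endpoint, HTTP method, and `url` parameter are as documented above, while the exact shape of the JSON response depends on the implementation:

```python
import requests

# Ask the analyzer (running locally on its default port 8087) to analyze a page.
resp = requests.get(
    "http://localhost:8087/analyzehtml",
    params={"url": "https://www.amazon.com"},
)
resp.raise_for_status()

# The analysis results are returned as JSON.
print(resp.json())
```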
51.46875
195
0.739526
eng_Latn
0.998661
213344bcc7bbcf52d0733c5ca7584daf70ef54fc
1,957
md
Markdown
CHANGELOG.md
AlyoshaVasilieva/imxrt-ral
0675a330d981a280e5f4dc2b5f0ef2cb397948e1
[ "Apache-2.0", "MIT" ]
null
null
null
CHANGELOG.md
AlyoshaVasilieva/imxrt-ral
0675a330d981a280e5f4dc2b5f0ef2cb397948e1
[ "Apache-2.0", "MIT" ]
null
null
null
CHANGELOG.md
AlyoshaVasilieva/imxrt-ral
0675a330d981a280e5f4dc2b5f0ef2cb397948e1
[ "Apache-2.0", "MIT" ]
null
null
null
# Changelog

## [Unreleased]

## [0.4.1] 2021-02-12

This release corrects for missing, or incomplete, information in the i.MX RT SVD files. The changes manifest in the `imxrt-ral` crate.

* Change USB's `ENDPTSTAT` access to read-write, supporting the access required for USB bus resets.
* Add RIDMAE field to the BAUD register of i.MX RT 1015 and 1021 LPUART peripherals.
* Correct USBCMD[ATDTW] bit offset for 1021, 1051, 1052, 1061, 1062, and 1064 chips. SVD identifies the offset as 12, when it's 14. Refer to the reference manuals for more information.
* Correct the LDVAL bitwidth for PIT peripherals on 1015 and 1021 chips. The SVDs indicate that the field is 24 bits, when it's 32 bits.

This release also removes mention of 'stm32ral' in the API documentation.

## [0.4.0] 2020-08-29

* **BREAKING** The RAL's `"rtfm"` feature is changed to `"rtic"`, reflecting the framework's new name. Users who are relying on the `"rtfm"` feature should now use the `"rtic"` feature.

## [0.3.0] 2020-06-18

* Only emit link section for `__INTERRUPTS` when compiling for ARM targets
* Fix RAL's documentation to refer to i.MX RT registers

## [0.2.1] 2020-04-10

* Fixes cargo release, adds release building documentation

## [0.2.0] 2020-04-08

* Port of ccm, iomuxc, uart, i2c, and spi peripherals from teensy4-rs!
* Support for imxrt1060evk board as well as teensy4

## [0.1.0] 2020-02-06

Initial build and release of imxrt family of peripheral access crates

[Unreleased]: https://github.com/imxrt-rs/imxrt-ral/compare/0.4.0...HEAD
[0.4.1]: https://github.com/imxrt-rs/imxrt-ral/compare/0.4.0...0.4.1
[0.4.0]: https://github.com/imxrt-rs/imxrt-ral/compare/0.3.0...0.4.0
[0.3.0]: https://github.com/imxrt-rs/imxrt-ral/compare/0.2.1...0.3.0
[0.2.1]: https://github.com/imxrt-rs/imxrt-ral/compare/0.2.0...0.2.1
[0.2.0]: https://github.com/imxrt-rs/imxrt-ral/compare/0.1.0...0.2.1
[0.1.0]: https://github.com/imxrt-rs/imxrt-ral/releases/tag/0.1.0
37.634615
94
0.716403
eng_Latn
0.888185
2133b88385fb6e952e4f05e6bbb91285587da239
24,469
md
Markdown
docs/xamarin-forms/platform/native-forms.md
MaximeEglem/xamarin-docs.fr-fr
d0c5bb07e4a70664480bc49e339f5469f5169538
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/xamarin-forms/platform/native-forms.md
MaximeEglem/xamarin-docs.fr-fr
d0c5bb07e4a70664480bc49e339f5469f5169538
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/xamarin-forms/platform/native-forms.md
MaximeEglem/xamarin-docs.fr-fr
d0c5bb07e4a70664480bc49e339f5469f5169538
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Xamarin.Forms dans les projets Xamarin Native description: Cet article explique comment utiliser les pages ContentPage dérivées qui sont ajoutés directement à Xamarin les projets natifs et comment naviguer entre eux. ms.prod: xamarin ms.assetid: f343fc21-dfb1-4364-a332-9da6705d36bc ms.technology: xamarin-forms author: davidbritch ms.author: dabritch ms.date: 01/11/2018 ms.openlocfilehash: 65bb3fa070c082fa6c6c489e326a870a80fb9502 ms.sourcegitcommit: 6e955f6851794d58334d41f7a550d93a47e834d2 ms.translationtype: MT ms.contentlocale: fr-FR ms.lasthandoff: 07/12/2018 ms.locfileid: "38997510" --- # <a name="xamarinforms-in-xamarin-native-projects"></a>Xamarin.Forms dans les projets Xamarin Native _Formulaires natifs autorisent dérivée de Xamarin.Forms ContentPage des pages à être consommés par les projets Xamarin.iOS, Xamarin.Android et Universal Windows Platform (UWP) natifs. Les projets natifs peuvent consommer des pages ContentPage dérivées qui sont ajoutés directement au projet, ou à partir d’une bibliothèque .NET Standard, la bibliothèque .NET Standard ou le projet partagé. Cet article explique comment utiliser les pages ContentPage dérivées qui sont directement ajoutées pour les projets natifs et comment naviguer entre eux._ En règle générale, une application Xamarin.Forms inclut une ou plusieurs pages qui dérivent [ `ContentPage` ](xref:Xamarin.Forms.ContentPage), et ces pages sont partagées par toutes les plateformes dans un projet partagé ou un projet de bibliothèque .NET Standard. Toutefois, les formulaires natifs permet `ContentPage`-dérivée des pages à ajouter directement à des applications Xamarin.iOS, Xamarin.Android et UWP natives. Par rapport à avoir le projet natif consommer `ContentPage`-pages dérivées à partir d’un projet de bibliothèque .NET Standard ou d’un projet partagé, l’avantage de l’ajout de pages directement vers les projets natifs est que les pages peuvent être étendues avec les vues natives. Vues natives peuvent ensuite être définis dans XAML avec `x:Name` et référencés à partir du code-behind. Pour plus d’informations sur les vues natives, consultez [vues natives](~/xamarin-forms/platform/native-views/index.md). Le processus permettant d’utiliser un Xamarin.Forms [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-page dérivée dans un projet natif est comme suit : 1. Ajoutez le package Xamarin.Forms NuGet au projet natif. 1. Ajouter le [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page et toutes ses éventuelles dépendances au projet natif. 1. Appelez la méthode `Forms.Init`. 1. Construire une instance de la [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-page dérivée et la convertir en type natif approprié à l’aide d’une des méthodes d’extension suivantes : `CreateViewController` pour iOS, `CreateFragment` ou `CreateSupportFragment` pour Android, ou `CreateFrameworkElement` pour UWP. 1. Accédez à la représentation sous forme de type natif de la [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page à l’aide de l’API native de la navigation. Xamarin.Forms doit être initialisé en appelant le `Forms.Init` méthode avant un projet natif peut construire un [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-page dérivée. Quand cela principalement le choix dépend lorsqu’il est plus pratique dans votre flux d’application : il peut être effectué au démarrage de l’application, ou juste avant le `ContentPage`-page dérivée est construit. 
Dans cet article et les exemples d’applications qui accompagne cet article, le `Forms.Init` méthode est appelée au démarrage de l’application. > [!NOTE] > Le **NativeForms** solution d’application ne contient pas tous les projets Xamarin.Forms. Au lieu de cela, il se compose d’un projet Xamarin.iOS, un projet Xamarin.Android et un projet UWP. Chaque projet est un projet natif qui utilise des formulaires natifs pour consommer [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de pages. Toutefois, il n’existe aucune raison pourquoi ne pouvaient pas consommer les projets natifs `ContentPage`-pages dérivé d’un projet de bibliothèque .NET Standard ou d’un projet partagé. Lors de l’utilisation des formulaires natifs, Xamarin.Forms fonctionnalités telles que [ `DependencyService` ](xref:Xamarin.Forms.DependencyService), [ `MessagingCenter` ](xref:Xamarin.Forms.MessagingCenter)et le moteur de liaison de données, tout le travail fixe. ## <a name="ios"></a>iOS Sur iOS, le `FinishedLaunching` remplacer dans la `AppDelegate` classe est généralement l’endroit où effectuer l’application des tâches de démarrage. Elle est appelée une fois que l’application a été lancée et est généralement substituée pour configurer la fenêtre principale et afficher le contrôleur. Le code suivant montre l’exemple le `AppDelegate` classe dans l’exemple d’application : ```csharp [Register("AppDelegate")] public class AppDelegate : UIApplicationDelegate { public static AppDelegate Instance; UIWindow _window; UINavigationController _navigation; public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions) { Forms.Init(); Instance = this; _window = new UIWindow(UIScreen.MainScreen.Bounds); UINavigationBar.Appearance.SetTitleTextAttributes(new UITextAttributes { TextColor = UIColor.Black }); var mainPage = new PhonewordPage().CreateViewController(); mainPage.Title = "Phoneword"; _navigation = new UINavigationController(mainPage); _window.RootViewController = _navigation; _window.MakeKeyAndVisible(); return true; } ... } ``` Le `FinishedLaunching` méthode effectue les tâches suivantes : - Xamarin.Forms est initialisé en appelant le `Forms.Init` (méthode). - Une référence à la `AppDelegate` classe est stockée dans le `static` `Instance` champ. Il s’agit de fournir un mécanisme pour les autres classes d’appeler les méthodes définies dans le `AppDelegate` classe. - Le `UIWindow`, qui est le conteneur principal pour les vues dans des applications iOS natives, est créé. - Le `PhonewordPage` (classe), qui est un Xamarin.Forms [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page défini dans XAML, qui est construit et converti en un `UIViewController` à l’aide de la `CreateViewController` méthode d’extension. - Le `Title` propriété de la `UIViewController` est défini, ce qui s’affichera sur le `UINavigationBar`. - Un `UINavigationController` est créé pour la gestion de la navigation hiérarchique. Le `UINavigationController` classe gère une pile de contrôleurs d’affichage et le `UIViewController` passé dans le constructeur s’affiche initialement lorsque le `UINavigationController` est chargé. - Le `UINavigationController` instance est définie en tant que le niveau supérieur `UIViewController` pour le `UIWindow`et le `UIWindow` est défini en tant que la fenêtre de clé pour l’application et est rendu visible. 
Une fois le `FinishedLaunching` méthode est exécutée, l’interface utilisateur définie dans le Xamarin.Forms `PhonewordPage` s’affichera classe, comme indiqué dans la capture d’écran suivante : [![](native-forms-images/ios-phonewordpage.png "iOS PhonewordPage")](native-forms-images/ios-phonewordpage-large.png#lightbox "iOS PhonewordPage") Interaction avec l’interface utilisateur, par exemple en appuyant sur un [ `Button` ](xref:Xamarin.Forms.Button), entraîne dans les gestionnaires d’événements dans le `PhonewordPage` l’exécution de code-behind. Par exemple, quand un utilisateur appuie sur le **historique des appels** bouton, le Gestionnaire d’événements suivant est exécuté : ```csharp void OnCallHistory(object sender, EventArgs e) { AppDelegate.Instance.NavigateToCallHistoryPage(); } ``` Le `static` `AppDelegate.Instance` champ permet la `AppDelegate.NavigateToCallHistoryPage` méthode à appeler, ce qui est illustré dans l’exemple de code suivant : ```csharp public void NavigateToCallHistoryPage() { var callHistoryPage = new CallHistoryPage().CreateViewController(); callHistoryPage.Title = "Call History"; _navigation.PushViewController(callHistoryPage, true); } ``` Le `NavigateToCallHistoryPage` méthode convertit le Xamarin.Forms [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page pour un `UIViewController` avec le `CreateViewController` méthode d’extension et définit le `Title` propriété de la `UIViewController`. Le `UIViewController` est ensuite transmis vers `UINavigationController` par le `PushViewController` (méthode). Par conséquent, l’interface utilisateur définie dans le Xamarin.Forms `CallHistoryPage` s’affichera classe, comme indiqué dans la capture d’écran suivante : [![](native-forms-images/ios-callhistorypage.png "iOS CallHistoryPage")](native-forms-images/ios-callhistorypage-large.png#lightbox "iOS CallHistoryPage") Lorsque le `CallHistoryPage` s’affiche, en appuyant sur l’arrière flèche s’affiche le `UIViewController` pour le `CallHistoryPage` classe à partir de la `UINavigationController`, retour de l’utilisateur à la `UIViewController` pour la `PhonewordPage` classe. ## <a name="android"></a>Android Sur Android, le `OnCreate` remplacer dans la `MainActivity` classe est généralement l’endroit où effectuer l’application des tâches de démarrage. Le code suivant montre l’exemple le `MainActivity` classe dans l’exemple d’application : ```csharp public class MainActivity : AppCompatActivity { public static MainActivity Instance; protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); Forms.Init(this, bundle); Instance = this; SetContentView(Resource.Layout.Main); var toolbar = FindViewById<Toolbar>(Resource.Id.toolbar); SetSupportActionBar(toolbar); SupportActionBar.Title = "Phoneword"; var mainPage = new PhonewordPage().CreateFragment(this); FragmentManager .BeginTransaction() .Replace(Resource.Id.fragment_frame_layout, mainPage) .Commit(); ... } ... } ``` Le `OnCreate` méthode effectue les tâches suivantes : - Xamarin.Forms est initialisé en appelant le `Forms.Init` (méthode). - Une référence à la `MainActivity` classe est stockée dans le `static` `Instance` champ. Il s’agit de fournir un mécanisme pour les autres classes d’appeler les méthodes définies dans le `MainActivity` classe. - Le `Activity` contenu est défini à partir d’une ressource mise en page. Dans l’exemple d’application, la disposition se compose d’un `LinearLayout` qui contient un `Toolbar`et un `FrameLayout` d’agir comme un conteneur de fragment. 
- Le `Toolbar` est récupéré et défini en tant que la barre d’action pour le `Activity`, et le titre de barre d’action est défini. - Le `PhonewordPage` (classe), qui est un Xamarin.Forms [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page défini dans XAML, qui est construit et converti en un `Fragment` à l’aide de la `CreateFragment` méthode d’extension. - Le `FragmentManager` classe crée et valide une transaction qui remplace le `FrameLayout` instance avec le `Fragment` pour la `PhonewordPage` classe. Pour plus d’informations sur les Fragments, consultez [Fragments](~/android/platform/fragments/index.md). > [!NOTE] > Outre le `CreateFragment` méthode d’extension, Xamarin.Forms inclut également un `CreateSupportFragment` (méthode). Le `CreateFragment` méthode crée un `Android.App.Fragment` qui peut être utilisée dans les applications qui ciblent des API 11 et supérieur. Le `CreateSupportFragment` méthode crée un `Android.Support.V4.App.Fragment` qui peut être utilisé dans les applications qui ciblent des versions d’API avant 11. Une fois le `OnCreate` méthode est exécutée, l’interface utilisateur définie dans le Xamarin.Forms `PhonewordPage` s’affichera classe, comme indiqué dans la capture d’écran suivante : [![](native-forms-images/android-phonewordpage.png "Android PhonewordPage")](native-forms-images/android-phonewordpage-large.png#lightbox "PhonewordPage Android") Interaction avec l’interface utilisateur, par exemple en appuyant sur un [ `Button` ](xref:Xamarin.Forms.Button), entraîne dans les gestionnaires d’événements dans le `PhonewordPage` l’exécution de code-behind. Par exemple, quand un utilisateur appuie sur le **historique des appels** bouton, le Gestionnaire d’événements suivant est exécuté : ```csharp void OnCallHistory(object sender, EventArgs e) { MainActivity.Instance.NavigateToCallHistoryPage(); } ``` Le `static` `MainActivity.Instance` champ permet la `MainActivity.NavigateToCallHistoryPage` méthode à appeler, ce qui est illustré dans l’exemple de code suivant : ```csharp public void NavigateToCallHistoryPage() { var callHistoryPage = new CallHistoryPage().CreateFragment(this); FragmentManager .BeginTransaction() .AddToBackStack(null) .Replace(Resource.Id.fragment_frame_layout, callHistoryPage) .Commit(); } ``` Le `NavigateToCallHistoryPage` méthode convertit le Xamarin.Forms [ `ContentPage` ](xref:Xamarin.Forms.ContentPage)-dérivée de page pour un `Fragment` avec la `CreateFragment` méthode d’extension et ajoute le `Fragment` au fragment de la pile back. Par conséquent, l’interface utilisateur définie dans le Xamarin.Forms `CallHistoryPage` s’affichera, comme illustré dans la capture d’écran suivante : [![](native-forms-images/android-callhistorypage.png "Android CallHistoryPage")](native-forms-images/android-callhistorypage-large.png#lightbox "CallHistoryPage Android") Lorsque le `CallHistoryPage` s’affiche, en appuyant sur l’arrière flèche s’affiche le `Fragment` pour le `CallHistoryPage` à partir de la pile de retour de fragment, retour de l’utilisateur à la `Fragment` pour la `PhonewordPage` classe. ### <a name="enabling-back-navigation-support"></a>Prise en charge de Navigation arrière Le `FragmentManager` classe a un `BackStackChanged` événement qui se déclenche chaque fois que le contenu de la pile de retour de fragment est modifié. 
The `OnCreate` method in the `MainActivity` class contains an anonymous event handler for this event:

```csharp
FragmentManager.BackStackChanged += (sender, e) =>
{
    bool hasBack = FragmentManager.BackStackEntryCount > 0;
    SupportActionBar.SetHomeButtonEnabled(hasBack);
    SupportActionBar.SetDisplayHomeAsUpEnabled(hasBack);
    SupportActionBar.Title = hasBack ? "Call History" : "Phoneword";
};
```

This event handler displays a back button on the action bar, provided that there are one or more `Fragment` instances on the fragment back stack. The response to tapping the back button is handled by the `OnOptionsItemSelected` override:

```csharp
public override bool OnOptionsItemSelected(Android.Views.IMenuItem item)
{
    if (item.ItemId == global::Android.Resource.Id.Home && FragmentManager.BackStackEntryCount > 0)
    {
        FragmentManager.PopBackStack();
        return true;
    }
    return base.OnOptionsItemSelected(item);
}
```

The `OnOptionsItemSelected` override is called whenever an item in the options menu is selected. This implementation pops the current fragment from the fragment back stack, provided that the back button has been selected and there are one or more `Fragment` instances on the fragment back stack.

### <a name="multiple-activities"></a>Multiple activities

When an application is composed of multiple activities, [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived pages can be embedded into each of the activities. In this scenario, the `Forms.Init` method need only be called in the `OnCreate` override of the first `Activity` that embeds a Xamarin.Forms `ContentPage`. However, this has the following impact:

- The value of `Xamarin.Forms.Color.Accent` will be taken from the `Activity` that called the `Forms.Init` method.
- The value of `Xamarin.Forms.Application.Current` will be associated with the `Activity` that called the `Forms.Init` method.
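As a minimal sketch only — not part of the sample application, and assuming a Xamarin.Forms version that exposes the `Forms.IsInitialized` flag — a subsequent activity might guard the initialization so it runs exactly once, regardless of which activity launches first:

```csharp
public class SecondActivity : AppCompatActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        // Initialize Xamarin.Forms only if no other Activity has done so yet.
        // Color.Accent and Application.Current will then be tied to whichever
        // Activity actually performed the initialization.
        if (!Forms.IsInitialized)
        {
            Forms.Init(this, bundle);
        }

        // Embed a ContentPage-derived page, as in MainActivity...
    }
}
```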
### <a name="choosing-a-file"></a>Choosing a file

When embedding a [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived page that uses a [`WebView`](xref:Xamarin.Forms.WebView) that needs to support the HTML "Choose File" button, the `Activity` will need to override the `OnActivityResult` method:

```csharp
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    ActivityResultCallbackRegistry.InvokeCallback(requestCode, resultCode, data);
}
```

## <a name="uwp"></a>UWP

On UWP, the native `App` class is typically the place to perform application startup tasks. Xamarin.Forms is usually initialized, in Xamarin.Forms UWP applications, in the `OnLaunched` override in the native `App` class, to pass the `LaunchActivatedEventArgs` argument to the `Forms.Init` method. For this reason, native UWP applications that consume a Xamarin.Forms [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived page can most easily invoke the `Forms.Init` method from the `App.OnLaunched` method.

By default, the native `App` class launches the `MainPage` class as the first page of the application. The following code example shows the `MainPage` class in the sample application:

```csharp
public sealed partial class MainPage : Page
{
    public static MainPage Instance;

    public MainPage()
    {
        this.InitializeComponent();
        this.NavigationCacheMode = NavigationCacheMode.Enabled;
        Instance = this;
        this.Content = new Phoneword.UWP.Views.PhonewordPage().CreateFrameworkElement();
    }
    ...
}
```

The `MainPage` constructor performs the following tasks:

- Caching is enabled for the page, so that a new `MainPage` isn't constructed when a user navigates back to the page.
- A reference to the `MainPage` class is stored in the `static` `Instance` field, to provide a mechanism for other classes to call methods defined in the `MainPage` class.
- The `PhonewordPage` class, which is a Xamarin.Forms [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived page defined in XAML, is constructed and converted to a `FrameworkElement` with the `CreateFrameworkElement` extension method, and then set as the content of the `MainPage` class.

Once the `MainPage` constructor has executed, the user interface defined in the Xamarin.Forms `PhonewordPage` class is displayed, as shown in the following screenshot:

[![](native-forms-images/uwp-phonewordpage.png "UWP PhonewordPage")](native-forms-images/uwp-phonewordpage-large.png#lightbox "UWP PhonewordPage")

Interacting with the user interface, for example by tapping a [`Button`](xref:Xamarin.Forms.Button), results in event handlers in the `PhonewordPage` code-behind executing. For example, when a user taps the **Call History** button, the following event handler is executed:

```csharp
void OnCallHistory(object sender, EventArgs e)
{
    Phoneword.UWP.MainPage.Instance.NavigateToCallHistoryPage();
}
```

The `static` `MainPage.Instance` field allows the `MainPage.NavigateToCallHistoryPage` method to be invoked, which is shown in the following code example:

```csharp
public void NavigateToCallHistoryPage()
{
    this.Frame.Navigate(new CallHistoryPage());
}
```

Navigation in UWP is typically performed with the `Frame.Navigate` method, which takes a `Page` argument. Xamarin.Forms defines a `Frame.Navigate` extension method that takes a [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived page instance. Therefore, when the `NavigateToCallHistoryPage` method executes, the user interface defined in the Xamarin.Forms `CallHistoryPage` class is displayed, as shown in the following screenshot:

[![](native-forms-images/uwp-callhistorypage.png "UWP CallHistoryPage")](native-forms-images/uwp-callhistorypage-large.png#lightbox "UWP CallHistoryPage")

When the `CallHistoryPage` is displayed, tapping the back arrow pops the `FrameworkElement` for the `CallHistoryPage` from the in-app back stack, returning the user to the `FrameworkElement` for the `PhonewordPage` class.

### <a name="enabling-back-navigation-support"></a>Enabling back navigation support

On UWP, applications must enable back navigation for all hardware and software back buttons, across different device form factors.
This can be accomplished by registering an event handler for the `BackRequested` event, which can be done in the `OnLaunched` method in the native `App` class:

```csharp
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        ...
        // Place the frame in the current Window
        Window.Current.Content = rootFrame;
        SystemNavigationManager.GetForCurrentView().BackRequested += OnBackRequested;
    }
    ...
}
```

When the application is launched, the `GetForCurrentView` method retrieves the `SystemNavigationManager` object associated with the current view, and then registers an event handler for the `BackRequested` event. The application only receives this event if it's the foreground application, and in response, it invokes the `OnBackRequested` event handler:

```csharp
void OnBackRequested(object sender, BackRequestedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame.CanGoBack)
    {
        e.Handled = true;
        rootFrame.GoBack();
    }
}
```

The `OnBackRequested` event handler calls the `GoBack` method on the root frame of the application, and sets the `BackRequestedEventArgs.Handled` property to `true` to mark the event as handled. Failing to mark the event as handled could result in the system navigating away from the application (on the mobile device family) or ignoring the event (on the desktop device family).

The application relies on a system-provided back button on a phone, but chooses whether to show a back button on the title bar on desktop devices. This is achieved by setting the `AppViewBackButtonVisibility` property to one of the `AppViewBackButtonVisibility` enumeration values:

```csharp
void OnNavigated(object sender, NavigationEventArgs e)
{
    SystemNavigationManager.GetForCurrentView().AppViewBackButtonVisibility =
        ((Frame)sender).CanGoBack ?
        AppViewBackButtonVisibility.Visible :
        AppViewBackButtonVisibility.Collapsed;
}
```

The `OnNavigated` event handler, which is executed in response to the `Navigated` event firing, updates the visibility of the title bar back button when page navigation occurs. This ensures that the title bar back button is visible if the in-app back stack isn't empty, or removed from the title bar if the in-app back stack is empty.

For more information about back navigation support on UWP, see [Navigation history and backwards navigation for UWP apps](/windows/uwp/design/basics/navigation-history-and-backwards-navigation/).

## <a name="summary"></a>Summary

Native Forms allow Xamarin.Forms [`ContentPage`](xref:Xamarin.Forms.ContentPage)-derived pages to be consumed by native Xamarin.iOS, Xamarin.Android, and Universal Windows Platform (UWP) projects. Native projects can consume `ContentPage`-derived pages that are directly added to the project, or from a .NET Standard library project or shared project. This article explained how to consume `ContentPage`-derived pages that are directly added to native projects, and how to navigate between them.
## <a name="related-links"></a>Related links

- [NativeForms (sample)](https://developer.xamarin.com/samples/xamarin-forms/Native2Forms/)
- [Native views](~/xamarin-forms/platform/native-views/index.md)
67.969444
929
0.777964
fra_Latn
0.943351
2134d1a11dfdc3f9ab4df0a85dd7e691405a7f11
713
md
Markdown
samples/matterjs/svg/README.md
Feeles/IDE
b183f2b23a0f017af711e63e95b5341827bd3424
[ "MIT" ]
null
null
null
samples/matterjs/svg/README.md
Feeles/IDE
b183f2b23a0f017af711e63e95b5341827bd3424
[ "MIT" ]
360
2017-01-31T14:13:02.000Z
2019-02-05T12:33:12.000Z
samples/matterjs/svg/README.md
Feeles/IDE
b183f2b23a0f017af711e63e95b5341827bd3424
[ "MIT" ]
1
2018-03-31T03:05:17.000Z
2018-03-31T03:05:17.000Z
# Vector images

SVG is an image format often used on the internet. The name is short for Scalable Vector Graphics.

## ![Modify the code](svg/main.js)

## Vector images

The images computers handle fall broadly into two kinds: **raster images** such as `JPG`, and **vector images** such as `SVG`.

A raster image represents a picture by laying out lots of colored dots, while a vector image treats the picture as a stack of many shapes, and represents it by building each shape out of many connected lines.

A raster image normally arranges square dots on a grid, so when it represents a circle, the edges inevitably come out jagged. A vector image, however, first treats the picture of a circle as a "circle shape". Because it keeps the position and shape of the outline that forms that circle, it can represent a picture that never becomes jagged no matter how much you enlarge it.

That is why it is called scalable (resizable).

Is it really scalable? Try changing the size and see!

Parameter | Meaning | Minimum | Maximum
--- | --- | --- | ---
scale | size multiplier | 0 | Infinity
sampling | coarseness of loading | 0 or more | none

The smaller `sampling` is, the more finely the shapes are loaded. Be careful, though: it also increases the amount of computation the computer has to do.

[Back to the menu](index.html)
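The sample's `svg/main.js` isn't reproduced here. As a rough sketch only of how the `scale` and `sampling` parameters above might map onto Matter.js — `Svg.pathToVertices`, `Vertices.scale`, and `Bodies.fromVertices` are real Matter.js functions, but the wiring is an assumption, not taken from the sample:

```javascript
// Sketch: turning an SVG path element into a Matter.js physics body.
const { Bodies, Svg, Vertices } = Matter;

function bodyFromSvgPath(path, scale, sampling) {
  // A smaller sampling length reads the outline more finely
  // (more vertices, but also more computation).
  const vertices = Svg.pathToVertices(path, sampling);
  // Scale the outline; being vector data, it stays smooth at any size.
  Vertices.scale(vertices, scale, scale);
  return Bodies.fromVertices(200, 200, vertices);
}
```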
17.390244
37
0.73913
jpn_Jpan
0.897724
2134e39f23096e9496820f04ab6391cdfc838cff
1,803
md
Markdown
posts/2017-07-07.md
hc0208/qiita-trending
de4db71d0c1d7f3e0a002cda244f2355e05fbc2e
[ "MIT" ]
2
2017-08-06T23:22:17.000Z
2018-07-17T10:07:02.000Z
posts/2017-07-07.md
hc0208/qiita-trending
de4db71d0c1d7f3e0a002cda244f2355e05fbc2e
[ "MIT" ]
null
null
null
posts/2017-07-07.md
hc0208/qiita-trending
de4db71d0c1d7f3e0a002cda244f2355e05fbc2e
[ "MIT" ]
null
null
null
- [Writing a program that is 1,000,000 times faster](http://qiita.com/Akai_Banana/items/48a35d2a40d1804d3b32)
- [Speeding up 4x4 matrix multiplication](http://qiita.com/blue-7/items/7a4ea5a4c3aa63c61be9)
- [Introduction to shader programming with Grimoire.js, part 1](http://qiita.com/kyasbal_1994/items/cff1466719934f461ca8)
- [Trying out Microsoft's Arduino extension for Visual Studio Code](http://qiita.com/varlal/items/052d08d0e34c570a6f3b)
- [Aiming for fast convolution processing](http://qiita.com/t-tkd3a/items/879a5fd6410320fe504e)
- [JS isn't scary anymore! A crash course in JavaScript/ES2015](http://qiita.com/niba1122/items/c660c1117ae0715b31c0)
- [How to use docomo's speech synthesis API in Unity](http://qiita.com/kanatano_mirai/items/677fde8589a4d810329a)
- [Fictional in-house study-group materials for applying functional programming to business development](http://qiita.com/uehaj/items/ff13229413b785cd9cf8)
- [Training a Keras model on Cloud ML Engine and trying Online Prediction](http://qiita.com/hayatoy/items/cbed2ebb5ae1e7bbc202)
- [Building a website in Go (8: adding pages that only administrators can see)](http://qiita.com/y_ussie/items/0052c83c9ec75b06bb6c)
- [How to use Gatling, an extremely handy load-testing tool (self-written source edition)](http://qiita.com/nii_yan/items/d7d0ea949abeab13aea7)
- [About CI environments for AWS Lambda (Omotesando.rb #24)](http://qiita.com/ebihara99999/items/4f2f9baf60b45ef3c326)
- [[Unity] Visualizing 12 sorting algorithms](http://qiita.com/r-ngtm/items/f4fa55c77459f63a5228)
- [5 tips for building custom Views on iOS](http://qiita.com/kikuchy/items/f1d6731d804b63cf7a29)
- [The origins of DQN + writing a Deep Q-Network in Chainer](http://qiita.com/Ugo-Nama/items/08c6a5f6a571335972d5)
- [Coexistence of Windows 10 Hyper-V and VMware Workstation 12 Player](http://qiita.com/miyamon_biz/items/f679c398c927a08be30e)
- [Summing up "what is AWS" in three lines](http://qiita.com/kohashi/items/1bb952313fb695f12577)
- [Building a curation app with PageMenu (part 1) [Swift 3.0]](http://qiita.com/fromage-blanc/items/4c358e1e57e298baad18)
- [Lazy RAII](http://qiita.com/ktokhrtk/items/4237038c5ac41f5b953a)
- [Using Prism and Autofac with WPF](http://qiita.com/t-koyama/items/0ac888a24670b16f3eba)
85.857143
111
0.810871
yue_Hant
0.446799
2135232a5516afcf15c843f337a32a68ad3232e5
10,001
md
Markdown
articles/media-services/previous/media-services-rest-how-to-use.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/previous/media-services-rest-how-to-use.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/previous/media-services-rest-how-to-use.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Media Services Operations REST API overview | Microsoft Docs
description: The Media Services Operations REST API is used for creating Jobs, Assets, Live Channels, and other resources in a Media Services account. This article gives an overview of the Azure Media Services v2 REST API.
services: media-services
documentationcenter: ''
author: Juliako
manager: femila
editor: ''
ms.assetid: a5f1c5e7-ec52-4e26-9a44-d9ea699f68d9
ms.service: media-services
ms.workload: media
ms.tgt_pltfrm: na
ms.devlang: dotnet
ms.topic: article
ms.date: 03/20/2019
ms.author: juliako
ms.reviewer: johndeu
ms.openlocfilehash: 84e94a431efdc84ff6896de416bd222120784899
ms.sourcegitcommit: bcda98171d6e81795e723e525f81e6235f044e52
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 09/01/2020
ms.locfileid: "89264280"
---
# <a name="media-services-operations-rest-api-overview"></a>Media Services Operations REST API overview

[!INCLUDE [media services api v2 logo](./includes/v2-hr.md)]

> [!NOTE]
> No new features or functionality are being added to Media Services v2. <br/>Check out the latest version, [Media Services v3](../latest/index.yml). Also see [migration guidance from v2 to v3](../latest/migrate-from-v2-to-v3.md).

The **Media Services Operations** REST API is used for creating Jobs, Assets, Live Channels, and other resources in a Media Services account. For more information, see [Media Services Operations REST API reference](/rest/api/media/operations/azure-media-services-rest-api-reference).

Media Services provides a REST API that accepts both JSON and atom+pub XML format. The Media Services REST API requires specific HTTP headers that each client must send when connecting to Media Services, as well as a set of optional headers. The following sections describe the headers and HTTP verbs you can use when creating requests and receiving responses from Media Services.

Authentication for the Media Services REST API is done via Azure Active Directory authentication, which is described in the article [Use Azure AD authentication to access the Azure Media Services API with REST](media-services-rest-connect-with-aad.md).

## <a name="considerations"></a>Considerations

The following considerations apply when using REST.

* When querying entities, there is a limit of 1000 entities returned at one time, because public REST v2 limits query results to 1000 results. You need to use **Skip** and **Take** (.NET) / **top** (REST), as described in [this .NET example](media-services-dotnet-manage-entities.md#enumerating-through-large-collections-of-entities) and [this REST API example](media-services-rest-manage-entities.md#enumerating-through-large-collections-of-entities). A sketch of such a paged request appears after the example below.
* When using JSON and specifying to use the **__metadata** keyword in the request (for example, to reference a linked object), you MUST set the **Accept** header to [JSON Verbose format](https://www.odata.org/documentation/odata-version-3-0/json-verbose-format/) (see the following example). OData does not understand the **__metadata** property in the request unless you set it to verbose.
```console
POST https://media.windows.net/API/Jobs HTTP/1.1
Content-Type: application/json;odata=verbose
Accept: application/json;odata=verbose
DataServiceVersion: 3.0
MaxDataServiceVersion: 3.0
x-ms-version: 2.19
Authorization: Bearer <ENCODED JWT TOKEN>
Host: media.windows.net

{
    "Name" : "NewTestJob",
    "InputMediaAssets" : [{"__metadata" : {"uri" : "https://media.windows.net/api/Assets('nb%3Acid%3AUUID%3Aba5356eb-30ff-4dc6-9e5a-41e4223540e7')"}}]
    . . .
```
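As a sketch of the paged request mentioned in the first consideration — assuming the standard OData `$skip`/`$top` query options apply, which this article does not show verbatim — a request for the second page of 1000 Jobs might look like:

```console
GET https://media.windows.net/API/Jobs?$skip=1000&$top=1000 HTTP/1.1
DataServiceVersion: 3.0
MaxDataServiceVersion: 3.0
x-ms-version: 2.19
Authorization: Bearer <ENCODED JWT TOKEN>
Host: media.windows.net
```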
## <a name="standard-http-request-headers-supported-by-media-services"></a>Standard HTTP request headers supported by Media Services

For every call you make into Media Services, there is a set of required headers you must include in your request, as well as a set of optional headers you may want to include. The following table lists the required headers:

| Header | Type | Value |
| --- | --- | --- |
| Authorization |Bearer |Bearer is the only accepted authorization mechanism. The value must also include the access token provided by Azure Active Directory. |
| x-ms-version |Decimal |2.17 (or most recent version)|
| DataServiceVersion |Decimal |3.0 |
| MaxDataServiceVersion |Decimal |3.0 |

> [!NOTE]
> Because Media Services uses OData to expose its REST APIs, the DataServiceVersion and MaxDataServiceVersion headers should be included in all requests; if they are not, however, Media Services assumes the DataServiceVersion value in use is 3.0.
>
>

The following is a set of optional headers:

| Header | Type | Value |
| --- | --- | --- |
| Date |RFC 1123 date |Timestamp of the request |
| Accept |Content type |The requested content type for the response, such as the following:<p> - application/json;odata=verbose<p> - application/atom+xml<p> Responses may have a different content type, such as a blob fetch, where a successful response contains the blob stream as the payload. |
| Accept-Encoding |Gzip, deflate |GZIP and DEFLATE encoding, when applicable. Note: for large resources, Media Services may ignore this header and return noncompressed data. |
| Accept-Language |"en", "es", and so on. |Specifies the preferred language for the response. |
| Accept-Charset |Charset type like "UTF-8" |Default is UTF-8. |
| X-HTTP-Method |HTTP method |Allows clients or firewalls that do not support HTTP methods like PUT or DELETE to use these methods, tunneled via a GET call. |
| Content-Type |Content type |Content type of the request body in PUT or POST requests. |
| client-request-id |String |A caller-defined value that identifies the given request. If specified, this value will be included in the response message as a way to map the request. <p><p>**Important**<p>Values should be capped at 2096b (2 KB). |

## <a name="standard-http-response-headers-supported-by-media-services"></a>Standard HTTP response headers supported by Media Services

The following is a set of headers that may be returned to you, depending on the resource you were requesting and the action you intended to perform.

| Header | Type | Value |
| --- | --- | --- |
| request-id |String |A unique identifier for the current operation, service generated. |
| client-request-id |String |An identifier specified by the caller in the original request, if present. |
| Date |RFC 1123 date |The date/time that the request was processed. |
| Content-Type |Varies |The content type of the response body. |
| Content-Encoding |Varies |Gzip or deflate, as appropriate. |

## <a name="standard-http-verbs-supported-by-media-services"></a>Standard HTTP verbs supported by Media Services

The following is a complete list of HTTP verbs that can be used when making HTTP requests:

| Verb | Description |
| --- | --- |
| GET |Returns the current value of an object. |
| POST |Creates an object based on the data provided, or submits a command. |
| PUT |Replaces an object, or creates a named object (when applicable). |
| DELETE |Deletes an object. |
| MERGE |Updates an existing object with named property changes. |
| HEAD |Returns metadata of an object for a GET response. |

## <a name="discover-and-browse-the-media-services-entity-model"></a>Discover and browse the Media Services entity model

To make Media Services entities more discoverable, the $metadata operation can be used. It lets you retrieve all valid entity types, entity properties, associations, functions, actions, and so on. You can access this discovery service by adding the $metadata operation to the end of your Media Services REST API endpoint: /API/$metadata.

Add "?api-version=2.x" to the end of the URI if you want to view the metadata in a browser, or if you do not include the x-ms-version header in your request.

## <a name="authenticate-with-media-services-rest-using-azure-active-directory"></a>Authenticate with Media Services REST using Azure Active Directory

Authentication on the REST API is done through Azure Active Directory (AAD). For details on how to get the required authentication details for your Media Services account from the Azure portal, see [Access the Azure Media Services API with Azure AD authentication](media-services-use-aad-auth-to-access-ams-api.md).

For details on writing code that connects to the REST API using Azure AD authentication, see the article [Use Azure AD authentication to access the Azure Media Services API with REST](media-services-rest-connect-with-aad.md).

## <a name="next-steps"></a>Next steps

To learn how to use Azure AD authentication with the Media Services REST API, see [Use Azure AD authentication to access the Azure Media Services API with REST](media-services-rest-connect-with-aad.md).

## <a name="media-services-learning-paths"></a>Media Services learning paths

[!INCLUDE [media-services-learning-paths-include](../../../includes/media-services-learning-paths-include.md)]

## <a name="provide-feedback"></a>Provide feedback

[!INCLUDE [media-services-user-voice-include](../../../includes/media-services-user-voice-include.md)]
74.081481
516
0.774023
nld_Latn
0.998452
213584933130eb06f5dc2c81ee3cb2173ebbf97f
1,699
md
Markdown
_portfolio/periodic-table-android.md
WeitlerT/WeitlerT.github.io
6c53e351884934ff072633077f669a78d318a1b2
[ "MIT" ]
null
null
null
_portfolio/periodic-table-android.md
WeitlerT/WeitlerT.github.io
6c53e351884934ff072633077f669a78d318a1b2
[ "MIT" ]
1
2022-03-27T20:41:25.000Z
2022-03-27T20:41:25.000Z
_portfolio/periodic-table-android.md
WeitlerT/WeitlerT.github.io
6c53e351884934ff072633077f669a78d318a1b2
[ "MIT" ]
null
null
null
---
title: "Android - Periodic Table"
excerpt: "An Android application I made with a classmate for our HCI class. We set out to make a simplified table with easy-to-read information."
header:
  image: #Image for the top goes here
  teaser: /assets/images/HCI/i1.png
sidebar:
  - title: "Role"
    image: /assets/images/HCI/i1.png
    image_alt: "logo"
    text: "Front/Back-End Development"
  - title: "Responsibilities"
    text: "Creating the storyboard and a functional demo."
  - title: "Technologies"
    text: "Android, Android Studio, Mobile"
gallery:
  - url: /assets/images/HCI/i1.png
    image_path: /assets/images/HCI/i1.png
    alt: #"Self Driving"
  - url: /assets/images/HCI/i2.png
    image_path: /assets/images/HCI/i2.png
    alt: #"Self Driving"
  - url: /assets/images/HCI/i3.png
    image_path: /assets/images/HCI/i3.png
    alt: #"Self Driving"
  - url: /assets/images/HCI/i4.png
    image_path: /assets/images/HCI/i4.png
    alt: #"Self Driving"
---

For our HCI course, we had to create an application that would simplify a pre-existing concept, and we figured the periodic table would be interesting to put into a mobile application format. A key part of this course was making an application that anyone could use with no prior experience or knowledge; in other words, it had to be intuitive, functional, and, most importantly, simple to understand.

We created a periodic table, color-coded it, and gave each cell in the table an onClick action. Tapping any of the elements takes you to a page explaining that element. The screenshots below show the different pages of the application.

{% include gallery caption="A few images of the application" %}
49.970588
661
0.735138
eng_Latn
0.973181
2136ab7fe94e59fdebaf0445706773485076fe1e
68
md
Markdown
README.md
RedGuff/Harmonics
f09e9725311c968c87400061b5e7a41aa9ce34fd
[ "Unlicense" ]
null
null
null
README.md
RedGuff/Harmonics
f09e9725311c968c87400061b5e7a41aa9ce34fd
[ "Unlicense" ]
null
null
null
README.md
RedGuff/Harmonics
f09e9725311c968c87400061b5e7a41aa9ce34fd
[ "Unlicense" ]
null
null
null
# Harmonics
Makes randomized harmonics for a future sound generator.
22.666667
55
0.808824
eng_Latn
0.917794
2136caeeceea7ed1ad19513a4361413718beb55e
2,781
md
Markdown
README.md
kaspth/action_state
c0f214f0c38cf9349e457c4e38bc07f1760a8953
[ "MIT" ]
11
2022-03-08T21:52:15.000Z
2022-03-23T22:52:25.000Z
README.md
kaspth/action_state
c0f214f0c38cf9349e457c4e38bc07f1760a8953
[ "MIT" ]
1
2022-03-11T13:41:30.000Z
2022-03-18T16:46:19.000Z
README.md
kaspth/action_state
c0f214f0c38cf9349e457c4e38bc07f1760a8953
[ "MIT" ]
1
2022-03-11T13:32:46.000Z
2022-03-11T13:32:46.000Z
# action_state

ActionState provides a simple DSL for defining Rails model states, allowing you to query the state as an ActiveRecord scope on the class and as a predicate on the instance.

For example, the following state definition defines a class scope `Article.published` and an instance predicate `article.published?`.

```ruby
class Article < ApplicationRecord
  state(:published) { where(published_at: ..Time.current) }

  ...
end
```

## Usage

ActionState supports a small subset of ActiveRecord queries for the predicate definition, and delegates the scope definition to ActiveRecord. It's not meant to comprehensively support every possible ActiveRecord query; rather, it supports a few features that tend to lend themselves well to predicate definitions.

### `where`

The `where` method checks for inclusion in an Enumerable, coverage by a Range, and equality with other types of value.

#### Inclusion in an Enumerable

```ruby
state(:crafter) { where(role: ["designer", "developer"]) }
```

#### Covered by a Range

```ruby
state(:negative) { where(stars: 1..4) }
state(:indifferent) { where(stars: 5..6) }
state(:positive) { where(stars: 7..9) }

state(:recently_published) { where(published_at: 1.week.ago..) }
```

#### Equality

```ruby
state(:featured) { where(featured: true) }
```

### `where.not`

The counterpart to `where` is `where.not`, which checks for exclusion from an Enumerable or Range, and inequality with other types of value.

```ruby
state(:deleted) { where.not(deleted_at: nil) }
```

### `excluding`

The `excluding` method excludes specific instances of a model.

```ruby
state(:normal) { excluding(special_post) }
```

### Passing arguments

States can also be defined to accept arguments.

```ruby
state(:before) { |whenever| where(created_at: ..whenever) }
state(:after) { |whenever| where(created_at: whenever..) }
```

### Composing states

You can chain query methods together to form more complex queries.

```ruby
state(:can_edit) { where(role: "admin").where.not(disabled: true) }
```

You can also compose multiple states together.

```ruby
state(:published) { where(published: true) }
state(:featured) { published.where(featured: true) }
```

## Installation

Add this line to your application's Gemfile:

```ruby
gem "action_state"
```

And then execute:

```other
$ bundle
```

Or install it yourself as:

```other
$ gem install action_state
```

Finally, include `ActionState` in your model class or `ApplicationRecord`:

```ruby
class ApplicationRecord < ActiveRecord::Base
  include ActionState

  ...
end
```

## Contributing

Contributions are welcome. Please feel free to open a PR / issue / discussion.

## License

The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
22.609756
170
0.725279
eng_Latn
0.995747
2138afc333adbd473208a6e0f54e21ed0b8850dc
2,650
md
Markdown
_posts/2019-09-10-Java-self-learning.md
AbnerVictor/AbnerVictor.github.io
f41510e65f4cbf35f38076e6ec88fc5dde7bd735
[ "MIT" ]
null
null
null
_posts/2019-09-10-Java-self-learning.md
AbnerVictor/AbnerVictor.github.io
f41510e65f4cbf35f38076e6ec88fc5dde7bd735
[ "MIT" ]
null
null
null
_posts/2019-09-10-Java-self-learning.md
AbnerVictor/AbnerVictor.github.io
f41510e65f4cbf35f38076e6ec88fc5dde7bd735
[ "MIT" ]
null
null
null
---
layout: post
title: 'Java Study Notes'
subtitle: 'From getting started to giving up'
date: 2019-09-11
categories: HKUST
tags: Java
---

[TOC]

## COMP 3021 - Java Programming

***

[Course Page](https://course.cse.ust.hk/comp3021/)

## Self-learning

***

### Getting started

1. A simple example

```java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
```

> **Note:** Both `String args[]` and `String[] args` work, but the latter is recommended to avoid ambiguity.

Compiling and running Java code:

```shell
$ javac HelloWorld.java
$ java HelloWorld
Hello World
```

### Java basic syntax

- Java is case-sensitive;
- Class names should start with an uppercase letter, for example `MyFirstJavaClass`;
- All method names should start with a lowercase letter; if a method name contains several words, capitalize the first letter of each following word;
- The source file name must be the same as the class name;
- Main entry point: every Java program starts executing from the `public static void main(String[] args)` method.

#### Java identifiers

In Java, class names, variable names, and method names are called identifiers.

1. An identifier must begin with a letter, a dollar sign `$`, or an underscore `_`;
2. An identifier may contain digits (except as the first character);
3. Keywords cannot be used as identifiers, and identifiers are case-sensitive.

#### Java modifiers

- Access modifiers: default, public, protected, private
- Non-access modifiers: final, abstract, static, synchronized

#### Java variables

- Local variables
- Class variables (static)
- Instance variables (non-static)

#### Java enums

```java
class FreshJuice {
    enum FreshJuiceSize { SMALL, MEDIUM, LARGE } // enum
    FreshJuiceSize size;
}

public class FreshJuiceTest {
    public static void main(String[] args) {
        FreshJuice juice = new FreshJuice();
        juice.size = FreshJuice.FreshJuiceSize.MEDIUM;
    }
}
```

**HINT**: An enum can be declared on its own or inside a class. Methods, variables, and constructors can also be defined inside an enum.

#### Java keywords

[Keyword reference table](http://www.runoob.com/java/java-basic-syntax.html)

#### Java inheritance

In Java, a class can be derived from another class. If you want to create a class and there is already a class that has the attributes or methods you need, you can make the new class inherit from it.

Using inheritance, you can reuse the methods and attributes of the existing class without rewriting that code. The inherited class is called the superclass, and the derived class is called the subclass.

#### Interfaces

Keyword: `interface`, similar to `virtual` functions in C++. An interface only declares methods; the implementation is up to the implementing class, which uses the keyword `implements`.

### Java objects and classes

- Object: an instance of a class
- Class: describes the behavior and state of a kind of object

Classes have several access levels and also come in different kinds, such as abstract classes and final classes, which are covered under access control. Java also has some special classes, such as inner classes and anonymous classes.

#### Variables

- Local variables: variables defined inside methods and constructors; they live and are initialized in the method, and are destroyed automatically when the method finishes;
- Instance variables: variables defined inside a class but outside any method; they are instantiated when an object is created;
- Class variables: also declared inside a class, outside any method, but they must be declared as static.

#### Constructors

If a class does not explicitly define a constructor, the Java compiler provides a default constructor. A class can have multiple constructors, and at least one of them is called when an object is created.

```java
public class Puppy{
    public Puppy(){}
    public Puppy(String name){}
}
```

#### Creating objects

In Java, the keyword `new` is used to create a new object:

1. Declaration: declare an object, including its name and type
2. Instantiation: use the keyword `new` to create the object
3. Initialization: when `new` creates the object, a constructor is called to initialize it

```java
public class Puppy{
    public Puppy(String name){}
    public static void main(String[] args){
        Puppy myPuppy = new Puppy("dog");
    }
}
```

#### Source file declaration rules

The following rules apply (a sketch illustrating them appears after this list):

- A source file can contain only one public class
- A source file may contain multiple non-public classes
- The source file name should match the name of the public class
- If a class is defined in a package, the package statement should be the first line of the source file
- If the source file contains import statements, they should be placed between the package statement and the class definitions
- The import and package statements apply to all classes defined in the source file; you cannot give different classes in the same source file different package declarations
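A minimal sketch illustrating the rules above — all the names here (`com.example.app`, `HelloApp`, `Helper`) are hypothetical:

```java
// This file must be named HelloApp.java, matching its single public class.
package com.example.app;   // package statement on the first line

import java.util.List;     // imports sit between the package statement and the classes

public class HelloApp {    // the one public class in this source file
    public static void main(String[] args) {
        List<String> greetings = List.of("Hello", "World");
        greetings.forEach(System.out::println);
    }
}

class Helper {             // additional non-public classes are allowed
}
```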
15.317919
74
0.741132
yue_Hant
0.486954
2138bd45f9a243c83e6064d70ba9804be4494d23
123
md
Markdown
README.md
aldian/app-compiler-web
b44736aa159989e9968b88a048cfd60b027e4979
[ "MIT" ]
1
2017-11-22T03:12:19.000Z
2017-11-22T03:12:19.000Z
README.md
aldian/app-compiler-web
b44736aa159989e9968b88a048cfd60b027e4979
[ "MIT" ]
null
null
null
README.md
aldian/app-compiler-web
b44736aa159989e9968b88a048cfd60b027e4979
[ "MIT" ]
null
null
null
# app-compiler-web
An example of a client-server Spring MVC application using Spring Boot. The client accesses the server through a REST API.
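As a purely hypothetical sketch of the kind of endpoint such a server might expose (the names below are illustrative, not taken from this project):

```java
package com.example.demo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical REST endpoint; the client would issue GET /api/greeting against it.
@RestController
public class GreetingController {

    @GetMapping("/api/greeting")
    public String greeting() {
        return "Hello from the server";
    }
}
```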
30.75
56
0.804878
eng_Latn
0.877077
2138f8221cb5cac0065a397ca5a2690ae3a3c553
338
md
Markdown
addons/dialogic/Documentation/Content/Welcome.md
Jowan-Spooner/visual-novel-template
f19d82a1eb96d98eb6700d17aab0d9e0908de5d4
[ "MIT" ]
1
2022-03-17T00:54:12.000Z
2022-03-17T00:54:12.000Z
addons/dialogic/Documentation/Content/Welcome.md
Jowan-Spooner/visual-novel-template
f19d82a1eb96d98eb6700d17aab0d9e0908de5d4
[ "MIT" ]
1
2021-12-29T17:52:22.000Z
2021-12-29T17:54:32.000Z
addons/dialogic/Documentation/Content/Welcome.md
Jowan-Spooner/visual-novel-template
f19d82a1eb96d98eb6700d17aab0d9e0908de5d4
[ "MIT" ]
null
null
null
![WelcomeImage](./Images/dialogic-hero-1.3.png)

Welcome to the Help pages. Here you'll find all the available information on how to use the plugin and its parts.

If you are looking for something specific, you can use the filter in the upper left.

If you need extra help, you can join [Emilio's Discord server](https://discord.gg/v4zhZNh)!
48.285714
111
0.763314
eng_Latn
0.994765
2139d653bd458311578901c9888f1e11c564914b
496
md
Markdown
_includes/about/en.md
brade1314/brade1314.github.io
2db5f4b24bc4b6b03cf08d4cf8537c700b9f209e
[ "Apache-2.0" ]
null
null
null
_includes/about/en.md
brade1314/brade1314.github.io
2db5f4b24bc4b6b03cf08d4cf8537c700b9f209e
[ "Apache-2.0" ]
5
2019-04-08T06:08:18.000Z
2021-04-22T02:22:47.000Z
_includes/about/en.md
brade1314/brade1314.github.io
2db5f4b24bc4b6b03cf08d4cf8537c700b9f209e
[ "Apache-2.0" ]
null
null
null
> Don’t worry if it doesn't work right. If everything did, you’d be out of a job.

> Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

brade.zhong is a Java programmer who is passionate about technology and also loves life and sports, such as long-distance running and swimming.

Please follow my [blog](http://blog.r136.dev) and [Github](http://github.com/brade1314).

Check out my open-source projects 👉 [Github](http://github.com/brade1314).
82.666667
222
0.758065
eng_Latn
0.991001
213a5b096020d9805e0eef3d575e09d5120a6e69
1,682
md
Markdown
docs/app/com.pzpg.ogr.library/-library-fragment/index.md
Praktyka-Zawodowa-2020/optical_graph_recognition_mobApp
f4bc0e631942aba31ce2151c64f02ada6e3d7b58
[ "BSD-3-Clause" ]
1
2021-04-16T19:49:12.000Z
2021-04-16T19:49:12.000Z
docs/app/com.pzpg.ogr.library/-library-fragment/index.md
Praktyka-Zawodowa-2020/optical_graph_recognition_mobApp
f4bc0e631942aba31ce2151c64f02ada6e3d7b58
[ "BSD-3-Clause" ]
null
null
null
docs/app/com.pzpg.ogr.library/-library-fragment/index.md
Praktyka-Zawodowa-2020/optical_graph_recognition_mobApp
f4bc0e631942aba31ce2151c64f02ada6e3d7b58
[ "BSD-3-Clause" ]
1
2020-12-10T12:22:00.000Z
2020-12-10T12:22:00.000Z
[app](../../index.md) / [com.pzpg.ogr.library](../index.md) / [LibraryFragment](./index.md) # LibraryFragment `class LibraryFragment : Fragment, OnItemListener` Fragment representing a list of Items in the app Docs directory. ### Constructors | Name | Summary | |---|---| | [&lt;init&gt;](-init-.md) | Fragment representing a list of Items in the app Docs directory.`LibraryFragment()` | ### Functions | Name | Summary | |---|---| | [onClick](on-click.md) | `fun onClick(position: `[`Int`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-int/index.html)`): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html) | | [onCreate](on-create.md) | `fun onCreate(savedInstanceState: `[`Bundle`](https://developer.android.com/reference/android/os/Bundle.html)`?): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html) | | [onCreateView](on-create-view.md) | `fun onCreateView(inflater: `[`LayoutInflater`](https://developer.android.com/reference/android/view/LayoutInflater.html)`, container: `[`ViewGroup`](https://developer.android.com/reference/android/view/ViewGroup.html)`?, savedInstanceState: `[`Bundle`](https://developer.android.com/reference/android/os/Bundle.html)`?): `[`View`](https://developer.android.com/reference/android/view/View.html)`?` | | [onDestroy](on-destroy.md) | `fun onDestroy(): `[`Unit`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/index.html) | ### Companion Object Properties | Name | Summary | |---|---| | [ARG_COLUMN_COUNT](-a-r-g_-c-o-l-u-m-n_-c-o-u-n-t.md) | `const val ARG_COLUMN_COUNT: `[`String`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-string/index.html) |
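This generated reference doesn't include a usage example. As a purely hypothetical sketch of hosting the fragment — the container id is an assumption, and the call below assumes a support/AndroidX `FragmentManager` (a framework `Fragment` host would use `fragmentManager` instead):

```kotlin
// Hypothetical host-side sketch; LibraryFragment lists items in the app Docs directory.
val fragment = LibraryFragment()
supportFragmentManager.beginTransaction()
    .replace(R.id.fragment_container, fragment)  // R.id.fragment_container is assumed
    .commit()
```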
58
438
0.704518
yue_Hant
0.533255
213c08b55c708babd0acc4cd1853132b404e2208
26,509
md
Markdown
docs/analysis-services/tabular-models/dax-formula-compatibility-in-directquery-mode-ssas-2016.md
PiJoCoder/sql-docs
128b255506c47eb05e19770a6bf5edfbdaa817ec
[ "CC-BY-4.0", "MIT" ]
1
2021-03-04T18:16:27.000Z
2021-03-04T18:16:27.000Z
docs/analysis-services/tabular-models/dax-formula-compatibility-in-directquery-mode-ssas-2016.md
PiJoCoder/sql-docs
128b255506c47eb05e19770a6bf5edfbdaa817ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/analysis-services/tabular-models/dax-formula-compatibility-in-directquery-mode-ssas-2016.md
PiJoCoder/sql-docs
128b255506c47eb05e19770a6bf5edfbdaa817ec
[ "CC-BY-4.0", "MIT" ]
1
2021-01-15T10:07:40.000Z
2021-01-15T10:07:40.000Z
--- title: "DAX Formula Compatibility in DirectQuery Mode (SSAS 2016) | Microsoft Docs" ms.custom: "" ms.date: "07/06/2017" ms.prod: "sql-server-2016" ms.reviewer: "" ms.suite: "" ms.technology: - "analysis-services" - "analysis-services/multidimensional-tabular" ms.tgt_pltfrm: "" ms.topic: "article" ms.assetid: d2fbafe6-d7fb-437b-b32b-fa2446023fa5 caps.latest.revision: 10 author: "Minewiskan" ms.author: "owend" manager: "erikre" --- # DAX Formula Compatibility in DirectQuery Mode [!INCLUDE[ssas-appliesto-sqlas-all-aas](../../includes/ssas-appliesto-sqlas-all-aas.md)] For tabular 1200 and higher models in DirectQuery mode, many functional limitations in earlier versions no longer apply. For DAX formulas in-particular: - DirectQuery now generates simpler queries, providing improved performance. - Row level security (RLS) is now supported in DirectQuery mode. - Calculated columns are now supported for tabular models in DirectQuery mode. ## DAX functions in DirectQuery mode In short, all DAX functions are supported for DirectQuery models. However, not all functions are supported for all formula types, and not all functions have been optimized for DirectQuery models. At the most basic level, we can put DAX functions into two camps: Optimized and Non-optimized. Let's first take a closer look at optimized functions. ### Optimized for DirectQuery These are functions that primarily return scalar or aggregate results. These functions are further divided into those that are supported in all types of formulas: measures, queries, calculated columns, row level security, and those that are supported in measure and query formulas only. These include: | Supported in all DAX formulas | Supported in measure and query formulas only | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ABS</br> ACOS</br> ACOT</br> AND</br> ASIN</br> ATAN</br> BLANK</br> CEILING</br> CONCATENATE</br> COS</br> COT</br> CURRENCY</br> DATE</br> DATEDIFF</br> DATEVALUE</br> DAY</br> DEGREES</br> DIVIDE</br> EDATE</br> EOMONTH</br> EXACT</br> EXP</br> FALSE</br> FIND</br> HOUR</br> IF</br> INT</br> ISBLANK</br> ISO.CEILING</br> KEEPFILTERS</br> LEFT</br> LEN</br> LN</br> LOG</br> LOG10</br> LOWER</br> MAX</br> MID</br> MIN</br> MINUTE</br> MOD</br> MONTH</br> MROUND</br> NOT</br> NOW</br> OR</br> PI</br> POWER</br> QUOTIENT</br> RADIANS</br> RAND</br> RELATED</br> REPT</br> RIGHT</br> ROUND</br> ROUNDDOWN</br> ROUNDUP</br> SEARCH</br> SECOND</br> SIGN</br> SIN</br> SQRT</br> SQRTPI</br> SUBSTITUTE</br> SWITCH</br> TAN</br> TIME</br> TIMEVALUE</br> TODAY</br> TRIM</br> TRUE</br> TRUNC</br> UNICODE</br> UPPER</br> USERNAME</br> USERELATIONSHIP</br> VALUE</br> WEEKDAY</br> 
WEEKNUM</br> YEAR</br> | ALL</br> ALLEXCEPT</br> ALLNOBLANKROW</br> ALLSELECTED</br> AVERAGE</br> AVERAGEA</br> AVERAGEX</br> CALCULATE</br> CALCULATETABLE</br> COUNT</br> COUNTA</br> COUNTAX</br> COUNTROWS</br> COUNTX</br> DISTINCT</br> DISTINCTCOUNT</br> FILTER</br> FILTERS</br> HASONEFILTER</br> HASONEVALUE</br> ISCROSSFILTERED</br> ISFILTERED</br> MAXA</br> MAXX</br> MIN</br> MINA</br> MINX</br> RELATEDTABLE</br> STDEV.P</br> STDEV.S</br> STDEVX.P</br> STDEVX.S</br> SUM</br> SUMX</br> VALUES</br> VAR.P</br> VAR.S</br> VARX.P</br> VARX.S |

### Non-optimized for DirectQuery

These functions have not been optimized to work with DirectQuery. These functions *are not* supported in calculated column and row-level security formulas at all. However, these functions *are supported* in measure and query formulas, albeit with uncertain performance.

We're not going to list all of the functions here. Basically, if it's not in one of the lists of optimized functions above, it's a non-optimized function for DirectQuery.

The reason a particular function might not be optimized for DirectQuery is that the underlying relational engine cannot perform calculations equivalent to those performed by the xVelocity engine, or the formula cannot be converted to an equivalent SQL expression. In other cases, the performance of the converted expression and the resulting calculations may be unacceptable.

To learn about all DAX functions, see the [DAX Function Reference](https://msdn.microsoft.com/en-us/library/ee634396.aspx).

## DAX operators in DirectQuery mode

All DAX comparison and arithmetic operators are fully supported in DirectQuery mode. To learn more, see [DAX Operator Reference](https://msdn.microsoft.com/library/ee634237.aspx).

## Differences between in-memory and DirectQuery mode

Queries on a model deployed in DirectQuery mode can return different results than the same model deployed in in-memory mode. This is because with DirectQuery, data is queried directly from a relational data store and aggregations required by formulas are performed using the relevant relational engine (SQL, Oracle, Teradata), rather than using the xVelocity in-memory analytics engine for storage and calculation.

For example, there are differences in the way that certain relational data stores handle numeric values, dates, nulls, and so forth.

In contrast, the DAX language is intended to emulate as closely as possible the behavior of functions in Microsoft Excel. For example, when handling nulls, empty strings, and zero values, Excel attempts to provide the best answer regardless of the precise data type, and therefore the xVelocity engine does the same. However, when a tabular model is deployed in DirectQuery mode and passes formulas to a relational data source, the data must be handled according to the semantics of the relational data source, which typically require distinct handling of empty strings vs. nulls. For this reason, the same formula might return a different result when evaluated against cached data and against data returned solely from the relational store.

Additionally, some functions aren't optimized for DirectQuery mode because the calculation would require the data in the current context to be sent to the relational data source as a parameter. For example, consider measures that use time-intelligence functions referencing date ranges in a calendar table. A relational data source might not have a calendar table, or at least not one with the granularity such calculations require.
## Semantic differences

This section lists the types of semantic differences that you can expect, and describes any limitations that might apply to the usage of functions or to query results.

### Comparisons

DAX in in-memory models supports comparisons of two expressions that resolve to scalar values of different data types. However, models that are deployed in DirectQuery mode use the data types and comparison operators of the relational engine, and therefore might return different results.

The following comparisons will always return an error when used in a calculation on a DirectQuery data source:

- Numeric data type compared to any string data type
- Numeric data type compared to a Boolean value
- Any string data type compared to a Boolean value

In general, DAX is more forgiving of data type mismatches in in-memory models and will attempt an implicit cast of values up to two times, as described in this section. However, formulas sent to a relational data store in DirectQuery mode are evaluated more strictly, following the rules of the relational engine, and are more likely to fail.

**Comparisons of strings and numbers**

EXAMPLE: `"2" < 3`

The formula compares a text string to a number. The expression is **true** in both DirectQuery mode and in-memory models.

In an in-memory model, the result is **true** because numbers as strings are implicitly cast to a numerical data type for comparisons with other numbers. SQL also implicitly casts text numbers as numbers for comparison to numerical data types.

Note that this represents a change in behavior from the first version of [!INCLUDE[ssGemini](../../includes/ssgemini-md.md)], which would return **false**, because the text "2" would always be considered larger than any number.

**Comparison of text with Boolean**

EXAMPLE: `"VERDADERO" = TRUE`

This expression compares a text string with a Boolean value. In general, for DirectQuery or in-memory models, comparing a string value to a Boolean value results in an error. The only exceptions are when the string contains the word **true** or the word **false**; if the string contains either of these values, a conversion to Boolean is made and the comparison takes place, giving the logical result.

**Comparison of nulls**

EXAMPLE: `EVALUATE ROW("X", BLANK() = BLANK())`

This formula compares the SQL equivalent of a null to a null. It returns **true** in in-memory and DirectQuery models; a provision is made in DirectQuery mode to guarantee behavior similar to the in-memory model. Note that in Transact-SQL, a null is never equal to a null. However, in DAX, a blank is equal to another blank. This behavior is the same for all in-memory models. It is important to note that DirectQuery mode uses most of the semantics of SQL Server, but in this case it departs from them, giving different behavior for NULL comparisons.

### Casts

There is no cast function as such in DAX, but implicit casts are performed in many comparison and arithmetic operations. It is the comparison or arithmetic operation that determines the data type of the result. For example,

- Boolean values are treated as numeric in arithmetic operations, such as TRUE + 1, or the function MIN applied to a column of Boolean values. A NOT operation also returns a numeric value (a worked instance follows this list).
- Boolean values are always treated as logical values in comparisons and when used with EXACT, AND, OR, &amp;&amp;, or ||.
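As a worked instance of the first rule, reusing the `EVALUATE ROW` pattern from the examples above (the stated result reflects standard DAX coercion semantics and is an addition here, not text from the original reference):

EXAMPLE: `EVALUATE ROW("BooleanMath", TRUE() + 1)`

Because the Boolean operand is treated as numeric in arithmetic, this query returns 2.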
**Cast from string to Boolean**

In in-memory and DirectQuery models, casts are permitted to Boolean values from these strings only: **""** (empty string), **"true"**, **"false"**; where an empty string casts to a false value. Casts to the Boolean data type of any other string result in an error.

**Cast from string to date/time**

In DirectQuery mode, casts from string representations of dates and times to actual **datetime** values behave the same way as they do in SQL Server.

Models that use the in-memory data store support a more limited range of text formats for dates than the string formats for dates that are supported by SQL Server. However, DAX supports custom date and time formats.

**Cast from string to other non-Boolean values**

When casting from strings to non-Boolean values, DirectQuery mode behaves the same as SQL Server. For more information, see [CAST and CONVERT (Transact-SQL)](http://msdn.microsoft.com/en-us/a87d0850-c670-4720-9ad5-6f5a22343ea8).

**Cast from numbers to string not allowed**

EXAMPLE: `CONCATENATE(102, ",345")`

Casting from numbers to strings is not allowed in SQL Server. This formula returns an error in tabular models and in DirectQuery mode; however, the formula produces a result in [!INCLUDE[ssGemini](../../includes/ssgemini-md.md)].

**No support for two-try casts in DirectQuery**

In-memory models often attempt a second cast when the first one fails. This never happens in DirectQuery mode.

EXAMPLE: `TODAY() + "13:14:15"`

In this expression, the first parameter has type **datetime** and the second parameter has type **string**. However, the casts when combining the operands are handled differently. DAX will perform an implicit cast from **string** to **double**. In in-memory models, the formula engine attempts to cast directly to **double**, and if that fails, it will try to cast the string to **datetime**. In DirectQuery mode, only the direct cast from **string** to **double** will be applied. If this cast fails, the formula will return an error.

### Math functions and arithmetic operations

Some mathematical functions will return different results in DirectQuery mode because of differences in the underlying data type or the casts that can be applied in operations. Also, the restrictions described above on the allowed range of values might affect the outcome of arithmetic operations.

**Order of addition**

When you create a formula that adds a series of numbers, an in-memory model might process the numbers in a different order than a DirectQuery model. Therefore, when you have many very large positive numbers and very large negative numbers, you may get an error in one operation and results in another operation.

**Use of the POWER function**

EXAMPLE: `POWER(-64, 1/3)`

In DirectQuery mode, the POWER function cannot use negative values as the base when raised to a fractional exponent. This is the expected behavior in SQL Server. In an in-memory model, the formula returns -4.

**Numerical overflow operations**

In Transact-SQL, operations that result in a numerical overflow return an overflow error; therefore, formulas that result in an overflow also raise an error in DirectQuery mode. However, the same formula when used in an in-memory model returns an eight-byte integer. That is because the formula engine does not perform checks for numerical overflows.

**LOG functions with blanks return different results**

SQL Server handles nulls and blanks differently than the xVelocity engine.
As a result, the following formula returns an error in DirectQuery mode, but returns negative infinity (–inf) in in-memory mode.

EXAMPLE: `LOG(blank())`

The same limitations apply to the other logarithmic functions: LOG10 and LN.

For more information about the **blank** data type in DAX, see [DAX Syntax Reference](https://msdn.microsoft.com/library/ee634217.aspx).

**Division by 0 and division by Blank**

In DirectQuery mode, division by zero (0) or division by BLANK will always result in an error. SQL Server does not support the notion of infinity, and because the natural result of any division by 0 is infinity, the result is an error. However, SQL Server supports division by nulls, and the result must always equal null. Rather than return different results for these operations, in DirectQuery mode, both types of operations (division by zero and division by null) return an error.

Note that, in Excel and in [!INCLUDE[ssGemini](../../includes/ssgemini-md.md)] models, division by zero also returns an error. Division by a blank returns a blank.

The following expressions are all valid in in-memory models, but will fail in DirectQuery mode:

`1/BLANK`

`1/0`

`0.0/BLANK`

`0/0`

The expression `BLANK/BLANK` is a special case that returns `BLANK` both in in-memory models and in DirectQuery mode.

### Supported numeric and date-time ranges

Formulas in in-memory tabular models are subject to the same limitations as Excel with regard to the maximum allowed values for real numbers and dates. However, differences can arise when the maximum value is returned from a calculation or query, or when values are converted, cast, rounded, or truncated.

- If values of types **Currency** and **Real** are multiplied, and the result is larger than the maximum possible value, in DirectQuery mode, no error is raised, and a null is returned.
- In in-memory models, no error is raised, but the maximum value is returned.

In general, because the accepted date ranges are different for Excel and SQL Server, results can be guaranteed to match only when dates are within the common date range, which is inclusive of the following dates:

- Earliest date: March 1, 1990
- Latest date: December 31, 9999

If any dates used in formulas fall outside this range, either the formula will result in an error, or the results will not match.

**Floating point values supported by CEILING**

EXAMPLE: `EVALUATE ROW("x", CEILING(-4.398488E+30, 1))`

The Transact-SQL equivalent of the DAX CEILING function only supports values with a magnitude of 10^19 or less. A rule of thumb is that floating point values should be able to fit into **bigint**.

**Datepart functions with dates that are out of range**

Results in DirectQuery mode are guaranteed to match those in in-memory models only when the date used as the argument is in the valid date range. If these conditions are not satisfied, either an error will be raised, or the formula will return different results in DirectQuery than in in-memory mode.

EXAMPLE: `MONTH(0)` or `YEAR(0)`

In DirectQuery mode, the expressions return 12 and 1899, respectively. In in-memory models, the expressions return 1 and 1900, respectively.

EXAMPLE: `EOMONTH(0.0001, 1)`

The results of this expression will match only when the data supplied as a parameter is within the valid date range.

EXAMPLE: `EOMONTH(blank(), blank())` or `EDATE(blank(), blank())`

The results of these expressions should be the same in DirectQuery mode and in-memory mode.
**Truncation of time values**

EXAMPLE: `SECOND(1231.04097222222)`

In DirectQuery mode, the result is truncated, following the rules of SQL Server, and the expression evaluates to 59. In in-memory models, the results of each interim operation are rounded; therefore, the expression evaluates to 0.

The following example demonstrates how this value is calculated:

1. The fraction of the input (0.04097222222) is multiplied by 24.
2. The resulting hour value (0.98333333328) is multiplied by 60.
3. The resulting minute value is 58.9999999968.
4. The fraction of the minute value (0.9999999968) is multiplied by 60.
5. The resulting second value (59.999999808) rounds up to 60.
6. 60 is equivalent to 0.

**SQL Time data type not supported**

In-memory models do not support use of the new SQL **Time** data type. In DirectQuery mode, formulas that reference columns with this data type will return an error. Time data columns cannot be imported into an in-memory model.

However, sometimes the engine casts the time value to an acceptable data type, and the formula returns a result.

This behavior affects all functions that use a date column as a parameter.

### <a name="bkmk_Currency"></a>Currency

In DirectQuery mode, if the result of an arithmetic operation has the type **Currency**, the value must be within the following range:

- Minimum: -922337203685477.5808
- Maximum: 922337203685477.5807

**Combining currency and REAL data types**

EXAMPLE: `Currency sample 1`

If **Currency** and **Real** types are multiplied, and the result is larger than 9223372036854774784 (0x7ffffffffffffc00), DirectQuery mode will not raise an error. In an in-memory model, an error is raised if the absolute value of the result is larger than 922337203685477.4784.

**Operation results in an out-of-range value**

EXAMPLE: `Currency sample 2`

If operations on any two currency values result in a value that is outside the specified range, an error is raised in in-memory models, but not in DirectQuery models.

**Combining currency with other data types**

Division of currency values by values of other numeric types can result in different results.

### <a name="bkmk_Aggregations"></a>Aggregation functions

Statistical functions on a table with one row return different results. Aggregation functions over empty tables also behave differently in in-memory models than they do in DirectQuery mode.

**Statistical functions over a table with a single row**

If the table that is used as an argument contains a single row, in DirectQuery mode, statistical functions such as STDEV and VARx return null. In an in-memory model, a formula that uses STDEV or VARx over a table with a single row returns a division-by-zero error.

### <a name="bkmk_Text"></a>Text functions

Because relational data stores provide different text data types than Excel does, you may see different results when searching strings or working with substrings. The length of strings can also be different.

In general, any string manipulation functions that use fixed-size columns as arguments can have different results.

Additionally, in SQL Server, some text functions support additional arguments that are not provided in Excel. If the formula requires the missing argument, you can get different results or errors in the in-memory model.

**Operations that return a character using LEFT, RIGHT, etc.
**Operations that return a character using LEFT, RIGHT, etc. may return the correct character but in a different case, or no results**

EXAMPLE: `LEFT(["text"], 2)`

In DirectQuery mode, the case of the character that is returned is always exactly the same as the letter that is stored in the database. However, the xVelocity engine uses a different algorithm for compression and indexing of values, to improve performance.

By default, the Latin1_General collation is used, which is case-insensitive but accent-sensitive. Therefore, if there are multiple instances of a text string in lower case, upper case, or mixed case, all instances are considered the same string, and only the first instance of the string is stored in the index. All text functions that operate on stored strings will retrieve the specified portion of the indexed form. Therefore, the example formula would return the same value for the entire column, using the first instance as the input.

This behavior also applies to other text functions, including RIGHT, MID, and so forth.

**String length affects results**

EXAMPLE: `SEARCH("within string", "sample target text", 1, 1)`

If you search for a string using the SEARCH function, and the target string is longer than the within string, DirectQuery mode raises an error. In an in-memory model, the searched string is returned, but with its length truncated to the length of `<within text>`.

EXAMPLE: `EVALUATE ROW("X", REPLACE("CA", 3, 2, "California") )`

If the length of the replacement string is greater than the length of the original string, in DirectQuery mode, the formula returns null. In in-memory models, the formula follows the behavior of Excel, which concatenates the source string and the replacement string, which returns CACalifornia.

**Implicit TRIM in the middle of strings**

EXAMPLE: `TRIM("   A sample sentence with leading white space")`

DirectQuery mode translates the DAX TRIM function to the SQL statement `LTRIM(RTRIM(<column>))`. As a result, only leading and trailing white space is removed. In contrast, the same formula in an in-memory model removes spaces within the string, following the behavior of Excel.

**Implicit RTRIM with use of LEN function**

EXAMPLE: `LEN('string_column')`

Like SQL Server, DirectQuery mode automatically removes white space from the end of string columns: that is, it performs an implicit RTRIM. Therefore, formulas that use the LEN function can return different values if the string has trailing spaces.

**In-memory supports additional parameters for SUBSTITUTE**

EXAMPLE: `SUBSTITUTE([Title],"Doctor","Dr.")`

EXAMPLE: `SUBSTITUTE([Title],"Doctor","Dr.", 2)`

In DirectQuery mode, you can use only the version of this function that has three (3) parameters: a reference to a column, the old text, and the new text. If you use the second formula, an error is raised. In in-memory models, you can use an optional fourth parameter to specify the instance number of the string to replace. For example, you can replace only the second instance, etc.

**Restrictions on string lengths for REPT operations**

In in-memory models, the length of a string resulting from an operation using REPT must be less than 32,767 characters. This limitation does not apply in DirectQuery mode.

**Substring operations return different results depending on character type**

EXAMPLE: `MID([col], 2, 5)`

If the input text is **varchar** or **nvarchar**, the result of the formula is always the same.
However, if the text is a fixed-length character and the value for `<num_chars>` is greater than the length of the target string, in DirectQuery mode, a blank is added at the end of the result string. In an in-memory model, the result terminates at the last string character, with no padding.

## See also

[DirectQuery Mode (SSAS Tabular)](http://msdn.microsoft.com/en-us/45ad2965-05ec-4fb1-a164-d8060b562ea5)
75.74
1,509
0.694783
eng_Latn
0.997183
213c1db721e63a8090927e0ae00b7dc4b531d021
598
md
Markdown
CONTRIBUTING.md
UBC-MDS/CleanPy
496dd0b36f44c9a131a3587b5090139b90d6015f
[ "MIT" ]
1
2020-05-05T15:58:52.000Z
2020-05-05T15:58:52.000Z
CONTRIBUTING.md
UBC-MDS/CleanPy
496dd0b36f44c9a131a3587b5090139b90d6015f
[ "MIT" ]
10
2019-02-07T08:01:38.000Z
2019-03-06T00:36:40.000Z
CONTRIBUTING.md
UBC-MDS/CleanPy
496dd0b36f44c9a131a3587b5090139b90d6015f
[ "MIT" ]
4
2019-02-06T02:09:56.000Z
2020-05-05T16:00:27.000Z
## Contribution Strategy

We love pull requests from everyone. By participating in this project, you agree to abide by the CleanPy code of conduct.

Fork, then clone the repo:

    git clone git@github.com:your-username/CleanPy.git

Set up your machine:

    ./bin/setup

Make sure the tests pass.

Push to your fork and submit a pull request.

At this point we hope to accept pull requests within 1 day. We may suggest some changes, improvements, or alternatives.

Some things that will increase the chance that your pull request is accepted:

* Write tests.
* Follow the style guide.
* Write a good commit message.
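For contributors new to the GitHub flow, the whole loop looks roughly like this. This file does not state the project's test command, so `pytest` below is an assumption (CleanPy is a Python package); the branch name is illustrative:

    git clone git@github.com:your-username/CleanPy.git
    cd CleanPy
    ./bin/setup
    pytest                       # assumption: substitute the project's actual test command
    git checkout -b my-fix       # illustrative branch name
    # ...make changes and commit...
    git push origin my-fix       # then open a pull request on GitHub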
33.222222
121
0.79097
eng_Latn
0.999089
213ce5e369af58bb9bae2a5c0fbaca2ae21b8cd6
573
md
Markdown
README.md
jeversonneves/html-css
b5f4531517322b6e70c5ccba259e450d83163be5
[ "MIT" ]
null
null
null
README.md
jeversonneves/html-css
b5f4531517322b6e70c5ccba259e450d83163be5
[ "MIT" ]
null
null
null
README.md
jeversonneves/html-css
b5f4531517322b6e70c5ccba259e450d83163be5
[ "MIT" ]
null
null
null
# Public repository: Jeverson Neves

Content developed through the Curso em Vídeo course site, taught by Professor Gustavo Guanabara.

<img align="right" src="imagens/laptop.png" width="200">

## HTML5 and CSS3 course

In this course we learn to build simple web pages, with a focus on their design as well, gaining both theoretical and practical knowledge.

* [Solved HTML challenges - click here to access.](https://github.com/jeversonneves/html-css/tree/main/desafios)
* [Solved exercises - click here to access.](https://github.com/jeversonneves/html-css/tree/main/exercicios)
44.076923
121
0.78534
por_Latn
0.994579
213cf3391556029bb92edca665652b20f6a27d75
13,720
md
Markdown
articles/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
12
2017-08-28T07:45:55.000Z
2022-03-07T21:35:48.000Z
articles/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
441
2017-11-08T13:15:56.000Z
2021-06-02T10:39:53.000Z
articles/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server.md
flexray/azure-docs.pl-pl
bfb8e5d5776d43b4623ce1c01dc44c8efc769c78
[ "CC-BY-4.0", "MIT" ]
27
2017-11-13T13:38:31.000Z
2022-02-17T11:57:33.000Z
---
title: Migrate a SQL Server database to SQL Server on a VM | Microsoft Docs
description: Learn how to migrate an on-premises user database to SQL Server on an Azure virtual machine.
services: virtual-machines-windows
documentationcenter: ''
author: MashaMSFT
editor: ''
tags: azure-service-management
ms.assetid: 00fd08c6-98fa-4d62-a3b8-ca20aa5246b1
ms.service: virtual-machines-sql
ms.workload: iaas-sql-server
ms.tgt_pltfrm: vm-windows-sql-server
ms.subservice: migration
ms.topic: how-to
ms.date: 08/18/2018
ms.author: mathoma
ms.reviewer: jroth
ms.openlocfilehash: f6e9009040d2d02702f8a71c352716491d07d1f7
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/29/2021
ms.locfileid: "98704308"
---
# <a name="migrate-a-sql-server-database-to-sql-server-on-an-azure-virtual-machine"></a>Migrate a SQL Server database to SQL Server on an Azure virtual machine

[!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]

There are several ways to migrate an on-premises SQL Server user database to SQL Server on an Azure virtual machine. This article briefly discusses the various methods and recommends the best method for different scenarios.

[!INCLUDE [learn-about-deployment-models](../../../../includes/learn-about-deployment-models-both-include.md)]

> [!NOTE]
> SQL Server 2008 and SQL Server 2008 R2 are approaching the [end of their support life cycle](https://www.microsoft.com/sql-server/sql-server-2008) for on-premises instances. To extend support, you can migrate your SQL Server instance to an Azure VM, or buy Extended Security Updates to keep it on-premises. For more information, see [Extend support for SQL Server 2008 and 2008 R2 with Azure](sql-server-2008-extend-end-of-support.md).

## <a name="what-are-the-primary-migration-methods"></a>What are the primary migration methods?

The primary migration methods are:

* Perform an on-premises backup using compression, and then manually copy the backup file to the Azure VM.
* Perform a backup to URL, and then restore it on the Azure VM from the URL.
* Detach the data and log files, copy them to Azure Blob storage, and then attach them to SQL Server on the Azure VM from a URL.
* Convert the on-premises physical machine to a Hyper-V VHD, upload it to Azure Blob storage, and then deploy it as a new VM using the uploaded VHD.
* Ship a hard drive using the Windows Import/Export Service.
* If you have an on-premises availability group deployment, use the [Add Azure Replica Wizard](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-onprem-availability) to create a replica in Azure, then fail over and point users to the Azure database instance.
* Use SQL Server [transactional replication](/sql/relational-databases/replication/transactional/transactional-replication) to configure the Azure SQL Server instance as a subscriber, then disable replication and point users to the Azure database instance.
> [!TIP]
> You can also use these same techniques to move databases between SQL Server VMs in Azure. For example, there is no supported way to upgrade a SQL Server gallery-image VM from one version/edition to another. In that case, you should create a new SQL Server VM with the new version/edition, and then use one of the migration techniques in this article to move your databases.

## <a name="choose-a-migration-method"></a>Choose a migration method

For the best data transfer performance, migrate the database files to the Azure VM using a compressed backup file. To minimize downtime during the database migration process, use either the AlwaysOn option or the transactional replication option.

If it is not possible to use the above methods, migrate the database manually. In general, you start with a database backup, copy the backup to Azure, and then restore the database. You can also copy the database files themselves to Azure and then attach them. There are several methods by which you can accomplish this manual process of migrating a database to an Azure VM.

> [!NOTE]
> When you upgrade to SQL Server 2014 or SQL Server 2016 from an older version of SQL Server, consider whether changes are needed. We recommend that you address all dependencies on features not supported by the new version of SQL Server as part of your migration project. For more information on the supported versions and scenarios, see [Upgrade to SQL Server](/sql/database-engine/install-windows/upgrade-sql-server).

The following table lists each of the primary migration methods and discusses when the use of each method is most appropriate.

| Method | Source database version | Destination database version | Source database backup size constraint | Notes |
| --- | --- | --- | --- | --- |
| [Perform an on-premises backup using compression and manually copy the backup file to the Azure virtual machine](#back-up-and-restore) | SQL Server 2005 or later | SQL Server 2005 or later | [Azure VM storage limit](../../../index.yml) | This technique is simple and well-tested for moving databases across machines. |
| [Perform a backup to URL and restore on the Azure virtual machine from the URL](#backup-to-url-and-restore-from-url) | SQL Server 2012 SP1 CU2 or later | SQL Server 2012 SP1 CU2 or later | < 12.8 TB for SQL Server 2016, otherwise < 1 TB | This method is just another way of moving the backup file to the VM using Azure storage. |
| [Detach and copy the data and log files to Azure Blob storage, and then attach to SQL Server on the Azure VM from a URL](#detach-and-attach-from-a-url) | SQL Server 2005 or later | SQL Server 2014 or later | [Azure VM storage limit](../../../index.yml) | Use this method when you plan to [store these files using the Azure Blob storage service](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure) and attach them to SQL Server running on an Azure VM, particularly with very large databases |
| [Convert the on-premises machine to Hyper-V VHDs, upload to Azure Blob storage, and then deploy a new virtual machine using the uploaded VHD](#convert-to-a-vm-upload-to-a-url-and-deploy-as-a-new-vm) | SQL Server 2005 or later | SQL Server 2005 or later | [Azure VM storage limit](../../../index.yml) | Use when you are [bringing your own SQL Server license](../../../azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md), when migrating a database that you will run on an older version of SQL Server, or when migrating system and user databases together as part of the migration of a database dependent on other user databases and/or system databases. |
| [Ship a hard drive using the Windows Import/Export Service](#ship-a-hard-drive) | SQL Server 2005 or later | SQL Server 2005 or later | [Azure VM storage limit](../../../index.yml) | Use the [Windows Import/Export Service](../../../import-export/storage-import-export-service.md) when the manual copy method is too slow, such as with very large databases |
| [Use the Add Azure Replica Wizard](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-sql-onprem-availability) | SQL Server 2012 or later | SQL Server 2012 or later | [Azure VM storage limit](../../../index.yml) | Minimizes downtime; use when you have an Always On on-premises deployment |
| [Use SQL Server transactional replication](/sql/relational-databases/replication/transactional/transactional-replication) | SQL Server 2005 or later | SQL Server 2005 or later | [Azure VM storage limit](../../../index.yml) | Use when you need to minimize downtime and don't have an Always On on-premises deployment |

## <a name="back-up-and-restore"></a>Back up and restore

Back up the database with compression, copy the backup to the VM, and then restore the database. If the backup file is larger than 1 TB, you must create a striped set because the maximum size of a VM disk is 1 TB. Use the following general steps to migrate a user database with this manual method:

1. Perform a full database backup to an on-premises location.
2. Create or upload a virtual machine with the version of SQL Server you want.
3. Set up connectivity based on your requirements. See [Connect to a SQL Server virtual machine on Azure (Resource Manager)](ways-to-connect-to-sql.md).
4. Copy your backup file(s) to your VM by using remote desktop, Windows Explorer, or the copy command from a command prompt.
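As a concrete illustration of steps 1 and 4, the backup and restore themselves are plain T-SQL. A minimal sketch; the database name, file paths, and the logical file names are placeholders for your own layout:

```sql
-- On-premises (step 1): full backup with compression.
BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- On the Azure VM, after copying the file over (step 4): restore it,
-- relocating data and log files to the VM's disks.
RESTORE DATABASE [MyDatabase]
FROM DISK = N'F:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase'     TO N'F:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'G:\Log\MyDatabase_log.ldf',
     STATS = 10;
```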
## <a name="backup-to-url-and-restore-from-url"></a>Backup to URL and restore from URL

Instead of backing up to a local file, you can use [backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) and then restore from the URL to the VM. SQL Server 2016 supports striped backup sets; they are recommended for performance and are required to exceed the size limits per blob. For very large databases, the use of the [Windows Import/Export Service](../../../import-export/storage-import-export-service.md) is recommended.
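A sketch of the same round trip through a URL. This assumes a credential named after the container URL has already been created with CREATE CREDENTIAL (using a Shared Access Signature; omitted here), and the storage account, container, and database names are placeholders:

```sql
-- On-premises: back up straight to Azure Blob storage.
BACKUP DATABASE [MyDatabase]
TO URL = N'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak'
WITH COMPRESSION, STATS = 10;

-- On the Azure VM: restore from the same URL.
RESTORE DATABASE [MyDatabase]
FROM URL = N'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak'
WITH STATS = 10;
```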
## <a name="detach-and-attach-from-a-url"></a>Detach and attach from a URL

Detach your database and log files and transfer them to [Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). Then attach the database from the URL on your Azure VM. Use this method if you want the physical database files to reside in Blob storage, which might be useful for very large databases. Use the following general steps to migrate a user database with this manual method:

1. Detach the database files from the on-premises database instance.
2. Copy the detached database files into Azure Blob storage by using the [AzCopy command-line utility](../../../storage/common/storage-use-azcopy-v10.md).
3. Attach the database files from the Azure URL to the SQL Server instance on the Azure VM.

## <a name="convert-to-a-vm-upload-to-a-url-and-deploy-as-a-new-vm"></a>Convert to a VM, upload to a URL, and deploy as a new VM

Use this method to migrate all system and user databases in an on-premises SQL Server instance to an Azure virtual machine. Use the following general steps to migrate an entire SQL Server instance with this manual method:

1. Convert the physical or virtual machines to Hyper-V VHDs.
2. Upload the VHD files to Azure Storage by using the [Add-AzureVHD cmdlet](/previous-versions/azure/dn495173(v=azure.100)).
3. Deploy a new virtual machine by using the uploaded VHD.

> [!NOTE]
> To migrate an entire application, consider using [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md).

## <a name="ship-a-hard-drive"></a>Ship a hard drive

Use the [Windows Import/Export Service method](../../../import-export/storage-import-export-service.md) to transfer large amounts of file data to Azure Blob storage in situations where uploading over the network is prohibitively expensive or not feasible. With this service, you ship one or more hard drives containing that data to an Azure data center, where your data will be uploaded to your storage account.

## <a name="next-steps"></a>Next steps

For more information, see [SQL Server on Azure Virtual Machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md).

> [!TIP]
> If you have questions about SQL Server virtual machines, see the [Frequently asked questions](frequently-asked-questions-faq.md).

For instructions on creating SQL Server on an Azure VM from a captured image, see [Tips & Tricks on 'cloning' Azure SQL virtual machines from captured images](/archive/blogs/psssql/tips-tricks-on-cloning-azure-sql-virtual-machines-from-captured-images) on the CSS SQL Server Engineers blog.
116.271186
770
0.80758
pol_Latn
0.999861
213d4b300c52394c56b4242762989dad3800e76d
1,880
markdown
Markdown
source/_posts/2014-04-03-lua-engine-update.markdown
actboy168/ydwe.net
34cd9c82f31f8389768f38b0bfc55a6fb88743c0
[ "MIT" ]
7
2017-03-23T08:51:16.000Z
2021-11-11T00:42:47.000Z
source/_posts/2014-04-03-lua-engine-update.markdown
actboy168/ydwe.net
34cd9c82f31f8389768f38b0bfc55a6fb88743c0
[ "MIT" ]
11
2017-07-17T02:28:00.000Z
2021-05-23T13:32:54.000Z
source/_posts/2014-04-03-lua-engine-update.markdown
actboy168/ydwe.net
34cd9c82f31f8389768f38b0bfc55a6fb88743c0
[ "MIT" ]
1
2019-12-04T01:29:39.000Z
2019-12-04T01:29:39.000Z
---
layout: post
title: "A Guide to Lua Engine Compatibility Changes"
date: 2014-04-03 00:09:33 +0800
comments: true
author: actboy168
categories: YDWE
---

When I first built the Lua engine I treated it as a toy, and many things were not designed carefully. So in YDWE 1.27 the Lua engine went through a large-scale rework. To make sure your code carries over smoothly to YDWE 1.28 and later, please read this post carefully. Note: 1.27 supports both the old and the new style, but 1.28 will only support the new style.

<!-- more -->

## Loading the Lua engine

Old style

```
call Cheat("run main.lua")
```

New style

```
call Cheat("exec-lua: 'main.lua'")
call Cheat("exec-lua: \"main.lua\"")
call Cheat("exec-lua: main.lua")
call Cheat("exec-lua:main.lua")
call Cheat("exec-lua:  main.lua  ")
```

The new style makes your code look clearer, and it is also more flexible.

## Built-in libraries are no longer loaded by default

Old style

``` lua
jass.DisplayTimedTextToPlayer(cj.GetLocalPlayer(), 0, 0, 60., 'hello')
```

New style

``` lua
local jass = require 'jass.common'
jass.DisplayTimedTextToPlayer(jass.GetLocalPlayer(), 0, 0, 60., 'hello')
```

In the new style, the Lua engine no longer predefines **any** global variables, including jass/japi/jass_ext/slk. You must load them with require before use. This guarantees the engine will not collide with your code (its variable names), and any library you never use is never loaded, which reduces resource consumption.

You can put the following snippet at the very top of all your code so that the new style behaves exactly like the old one, though I still recommend using local variables and dropping the libraries you don't use.

``` lua
jass = require 'jass.common'
japi = require 'jass.japi'
slk = require 'jass.slk'
jass_ext = {}
jass_ext.hook = require 'jass.hook'
jass_ext.runtime = require 'jass.runtime'
```

## Console activation has changed

Old style

``` lua
jass_ext.EnableConsole()
```

New style

``` lua
local runtime = require 'jass.runtime'
runtime.console = true
```

Likewise, you can use the following code for compatibility:

``` lua
function jass_ext.EnableConsole()
	local runtime = require 'jass.runtime'
	runtime.console = true
end
```

## The |AHbz| syntax has been removed

This was never standard Lua syntax, and Lua 5.3 adds | as an operator, so the |AHbz| form causes problems in 5.3. After much deliberation I decided to remove it. Fortunately, Lua 5.3 adds a new function, string.unpackint, which interprets a string as a binary array and converts it to an integer; this is exactly what lets the string "AHbz" become the integer 'AHbz' (note that I am using JASS notation here). Like this:

``` lua
string.unpackint('AHbz', 0, 4)
```

Or this:

``` lua
function ID(id)
	return string.unpackint(id, 0, 4)
end
ID 'AHbz'
```

That said, Lua 5.3 still has a lot of open questions. For example, the third argument of string.unpackint currently always defaults to 8, so you cannot write:

``` lua
string.unpackint('AHbz')
```

This may change in the final release, so the |AHbz| syntax is kept **for now**, though I expect to remove it in the end.
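If you adopt the ID helper above, one defensive tweak worth considering is validating the input length, since a wrong-length string would silently produce a surprising integer. A small sketch along the lines of the post's own helper:

``` lua
-- Same idea as the ID function above, but rejects ids that are not
-- exactly 4 characters, and always passes offset and length explicitly
-- so it keeps working even if string.unpackint's defaults change.
local function ID(id)
	assert(#id == 4, 'object id must be exactly 4 characters')
	return string.unpackint(id, 0, 4)
end

local abil = ID 'AHbz'
```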
18.431373
186
0.735106
yue_Hant
0.738924
213e9c4522639637fcd024b32e4ae10ff7d1a00a
10,282
md
Markdown
articles/azure-arc/servers/manage-agent.md
GordenW/azure-docs.zh-cn
2b69134b6401663a0fe76e07cd81d97da080bda1
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-arc/servers/manage-agent.md
GordenW/azure-docs.zh-cn
2b69134b6401663a0fe76e07cd81d97da080bda1
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-arc/servers/manage-agent.md
GordenW/azure-docs.zh-cn
2b69134b6401663a0fe76e07cd81d97da080bda1
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Manage the Azure Arc for servers (preview) agent
description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Arc for servers Connected Machine agent.
services: azure-arc
ms.service: azure-arc
ms.subservice: azure-arc-servers
author: mgoedtel
ms.author: magoedte
ms.date: 07/30/2020
ms.topic: conceptual
ms.openlocfilehash: 73ece3f1bc8d5e88d4c1c37e1040f2494230e4ee
ms.sourcegitcommit: 85eb6e79599a78573db2082fe6f3beee497ad316
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 08/05/2020
ms.locfileid: "87809589"
---
# <a name="managing-and-maintaining-the-connected-machine-agent"></a>Managing and maintaining the Connected Machine agent

After initial deployment of the Azure Arc for servers (preview) Connected Machine agent for Windows or Linux, you may need to reconfigure the agent, upgrade it, or remove it from the computer when it reaches the retirement stage of its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses.

## <a name="upgrading-agent"></a>Upgrading the agent

The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically, depending on your requirements. The following table describes the methods supported to perform the agent upgrade.

| Operating system | Upgrade method |
|------------------|----------------|
| Windows | Manually<br> Windows Update |
| Ubuntu | [Apt](https://help.ubuntu.com/lts/serverguide/apt.html) |
| SUSE Linux Enterprise Server | [zypper](https://en.opensuse.org/SDB:Zypper_usage_11.3) |
| RedHat Enterprise, Amazon, CentOS Linux | [yum](https://wiki.centos.org/PackageManagement/Yum) |

### <a name="windows-agent"></a>Windows agent

The update package for the Connected Machine agent for Windows is available from:

* Microsoft Update
* [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/Home.aspx)
* The [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.

The agent can be upgraded several ways to support your software update management process. Besides getting it from Microsoft Update, you can manually download and run `AzureConnectedMachine.msi` from the command prompt, from a script or other automation solution, or from the UI wizard.

> [!NOTE]
> * To upgrade the agent, you must have *Administrator* permissions.
> * To upgrade manually, you must first download the Installer package and copy it to a folder on the target server, or download it from a shared network folder.

If you are unfamiliar with the command-line options for Windows Installer packages, review [Msiexec standard command-line options](/windows/win32/msi/standard-installer-command-line-options) and [Msiexec command-line options](/windows/win32/msi/command-line-options).

#### <a name="to-upgrade-using-the-setup-wizard"></a>To upgrade using the Setup Wizard

1. Sign on to the computer with an account that has administrative rights.
2. Execute **AzureConnectedMachineAgent.msi** to start the Setup Wizard.

The Setup Wizard discovers whether a previous version exists and then automatically performs an upgrade of the agent. When the upgrade completes, the Setup Wizard automatically closes.

#### <a name="to-upgrade-from-the-command-line"></a>To upgrade from the command line

1. Sign on to the computer with an account that has administrative rights.
2. To upgrade the agent silently and create a setup log file in the `C:\Support\Logs` folder, run the following command.

    ```dos
    msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "C:\Support\Logs\Azcmagentupgradesetup.log"
    ```

### <a name="linux-agent"></a>Linux agent

Updating the agent on a Linux machine to the latest version involves two kinds of commands: one to update the local package index with the list of the latest available packages from the repositories, and one to upgrade the local package. You can download the latest agent package from Microsoft's [package repository](https://packages.microsoft.com/).

> [!NOTE]
> To upgrade the agent, you must have *root* access permissions, or an account that has elevated rights using Sudo.

#### <a name="upgrade-ubuntu"></a>Upgrade Ubuntu

1. To update the local package index with the latest changes made in the repositories, run the following command:

    ```bash
    apt update
    ```

2. To upgrade your system, run the following command:

    ```bash
    apt upgrade
    ```

Actions of the [apt](https://help.ubuntu.com/lts/serverguide/apt.html) command, such as installation and removal of packages, are logged in the `/var/log/dpkg.log` log file.

#### <a name="upgrade-red-hatcentosamazon-linux"></a>Upgrade Red Hat/CentOS/Amazon Linux

1. To update the local package index with the latest changes made in the repositories, run the following command:

    ```bash
    yum check-update
    ```

2. To upgrade your system, run the following command:

    ```bash
    yum update
    ```

Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command, such as installation and removal of packages, are logged in the `/var/log/yum.log` log file.

#### <a name="upgrade-suse-linux-enterprise"></a>Upgrade SUSE Linux Enterprise

1. To update the local package index with the latest changes made in the repositories, run the following command:

    ```bash
    zypper refresh
    ```

2. To upgrade your system, run the following command:

    ```bash
    zypper update
    ```

Actions of the [zypper](https://en.opensuse.org/Portal:Zypper) command, such as installation and removal of packages, are logged in the `/var/log/zypper.log` log file.
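If you prefer to upgrade only the Connected Machine agent rather than every package on the machine, most package managers can target a single package. A sketch, assuming the package name `azcmagent` that the removal commands later in this article use:

```bash
# Ubuntu: refresh the index, then upgrade only the agent package
sudo apt update
sudo apt install --only-upgrade azcmagent

# RHEL/CentOS/Amazon Linux
sudo yum update azcmagent

# SUSE Linux Enterprise
sudo zypper update azcmagent
```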
## <a name="about-the-azcmagent-tool"></a>About the Azcmagent tool

The Azcmagent tool (Azcmagent.exe) is used to configure the Azure Arc for servers (preview) Connected Machine agent during installation, or to modify the agent's initial configuration after installation. Azcmagent.exe provides command-line parameters to customize the agent and view its status:

* **Connect** - Connects the machine to Azure Arc
* **Disconnect** - Disconnects the machine from Azure Arc
* **Reconnect** - Reconnects a disconnected machine to Azure Arc
* **Show** - Views the agent status and its configuration properties (resource group name, subscription ID, version, and so on), which helps when troubleshooting agent-related issues.
* **-h or --help** - Shows the available command-line parameters. For example, to see detailed help for the Reconnect parameter, type `azcmagent reconnect -h`.
* **-v or --verbose** - Enables verbose logging

You can perform **Connect**, **Disconnect**, and **Reconnect** manually while signed on interactively, or you can automate these parameters using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc for servers (preview), see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.

> [!NOTE]
> You must have *root* access on Linux machines to run **azcmagent**.

### <a name="connect"></a>Connect

This parameter specifies that a resource representing the machine is created in Azure Resource Manager. The resource is in the subscription and resource group specified, and data about the machine is stored in the Azure region specified by the `--location` setting. If no name is specified, the default resource name is the hostname of this machine.

A certificate corresponding to the system-assigned identity of the machine is then downloaded and stored locally. Once this step is completed, the Azure Connected Machine Metadata Service and the guest configuration agent begin synchronizing with Azure Arc for servers (preview).

To connect using a service principal, run the following command:

`azcmagent connect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`

To connect using an access token, run the following command:

`azcmagent connect --access-token <> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`

To connect with your elevated logged-on credentials (interactive), run the following command:

`azcmagent connect --tenant-id <TenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`

### <a name="disconnect"></a>Disconnect

This parameter specifies that the resource in Azure Resource Manager representing the machine is deleted in Azure. It does not remove the agent from the machine; removing the agent is a separate step. After the machine is disconnected, if you want to register it with Azure Arc for servers (preview) again, use `azcmagent connect` so a new resource is created for it in Azure.

To disconnect using a service principal, run the following command:

`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID>`

To disconnect using an access token, run the following command:

`azcmagent disconnect --access-token <accessToken>`

To disconnect with your elevated logged-on credentials (interactive), run the following command:

`azcmagent disconnect --tenant-id <tenantID>`

### <a name="reconnect"></a>Reconnect

> [!WARNING]
> The `reconnect` command is deprecated and should not be used. The command will be removed in a future agent version, and existing agents will be unable to complete reconnect requests. Instead, [disconnect](#disconnect) the machine and then [connect](#connect) it again.

This parameter reconnects a registered or connected machine to Azure Arc for servers (preview). You may need to do this if the machine has been off for at least 45 days, causing its certificate to expire. This parameter uses the authentication options provided to retrieve new credentials corresponding to the Azure Resource Manager resource representing this machine. This command requires permissions higher than the [Azure Connected Machine Onboarding](agent-overview.md#required-permissions) role.

To reconnect using a service principal, run the following command:

`azcmagent reconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID>`

To reconnect using an access token, run the following command:

`azcmagent reconnect --access-token <accessToken>`

To reconnect with your elevated logged-on credentials (interactive), run the following command:

`azcmagent reconnect --tenant-id <tenantID>`
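After any connect, disconnect, or reconnect, you can confirm the agent's state with the Show parameter described above; in the common case it takes no additional arguments:

```bash
# Print the agent's status and configuration (resource group, subscription ID,
# agent version, and so on) to verify the last operation took effect.
azcmagent show
```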
## <a name="remove-the-agent"></a>Remove the agent

Perform one of the following methods to uninstall the Windows or Linux Connected Machine agent from the machine. Removing the agent does not unregister the machine with Arc for servers (preview); unregistering is a separate step you perform when you no longer want to manage the machine in Azure.

### <a name="windows-agent"></a>Windows agent

Both of the following methods remove the agent, but they do not remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine.

#### <a name="uninstall-from-control-panel"></a>Uninstall from Control Panel

1. To uninstall the Windows agent from the machine, do the following:

    a. Sign in to the computer with an account that has administrator permissions.

    b. In **Control Panel**, select **Programs and Features**.

    c. In **Programs and Features**, select **Azure Connected Machine Agent**, select **Uninstall**, and then select **Yes**.

>[!NOTE]
> You can also run the agent setup wizard by double-clicking the **AzureConnectedMachineAgent.msi** installer package.

#### <a name="uninstall-from-the-command-line"></a>Uninstall from the command line

To uninstall the agent manually from the command prompt, or with an automated method such as a script, you can use the following example. First you need to retrieve the product code, a GUID that is the principal identifier of the application package, from the operating system. The uninstall is performed by using the Msiexec.exe command line (`msiexec /x {Product Code}`).

1. Open the Registry Editor.
2. Under the registry key `HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall`, find and copy the product code GUID.
3. You can then uninstall the agent using Msiexec, as in the following examples:

   * From the command line, type:

       ```dos
       msiexec.exe /x {product code GUID} /qn
       ```

   * You can perform the same steps using PowerShell:

       ```powershell
       Get-ChildItem -Path HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall | `
       Get-ItemProperty | `
       Where-Object {$_.DisplayName -eq "Azure Connected Machine Agent"} | `
       ForEach-Object {MsiExec.exe /x "$($_.PsChildName)" /qn}
       ```

### <a name="linux-agent"></a>Linux agent

> [!NOTE]
> To uninstall the agent, you must have *root* access permissions, or an account that has elevated rights using Sudo.

To uninstall the Linux agent, the command to use depends on the Linux operating system.

- For Ubuntu, run the following command:

    ```bash
    sudo apt purge azcmagent
    ```

- For RHEL, CentOS, and Amazon Linux, run the following command:

    ```bash
    sudo yum remove azcmagent
    ```

- For SLES, run the following command:

    ```bash
    sudo zypper remove azcmagent
    ```

## <a name="unregister-machine"></a>Unregister the machine

If you plan to stop managing the machine with supporting services in Azure, perform the following steps to unregister the machine with Arc for servers (preview). You can perform these steps either before or after you have removed the Connected Machine agent.

1. Go to the [Azure portal](https://aka.ms/hybridmachineportal) and open Azure Arc for servers (preview).
2. Select the machine in the list, select the ellipsis (**...**), and then select **Delete**.

## <a name="update-or-remove-proxy-settings"></a>Update or remove proxy settings

To configure the agent to communicate with the service through a proxy server after deployment, or to remove this configuration after deployment, use one of the following methods.

### <a name="windows"></a>Windows

To set the proxy server environment variable, run the following command:

```powershell
# If a proxy server is needed, execute these commands with the proxy URL and port.
[Environment]::SetEnvironmentVariable("https_proxy","http://{proxy-url}:{proxy-port}","Machine")
$env:https_proxy = [System.Environment]::GetEnvironmentVariable("https_proxy","Machine")
# For the changes to take effect, the agent service needs to be restarted after the proxy environment variable is set.
Restart-Service -Name himds
```

To configure the agent to stop communicating through a proxy server, run the following commands to remove the proxy server environment variable and restart the agent service:

```powershell
[Environment]::SetEnvironmentVariable("https_proxy",$null,"Machine")
$env:https_proxy = [System.Environment]::GetEnvironmentVariable("https_proxy","Machine")
# For the changes to take effect, the agent service needs to be restarted after the proxy environment variable is removed.
Restart-Service -Name himds
```

### <a name="linux"></a>Linux

To set the proxy server, run the following command from the directory the agent installation package was downloaded to:

```bash
# Reconfigure the connected machine agent and set the proxy server.
bash ~/Install_linux_azcmagent.sh --proxy "{proxy-url}:{proxy-port}"
```

To configure the agent to stop communicating through a proxy server, run the following command to remove the proxy configuration:

```bash
sudo azcmagent_proxy remove
```

## <a name="next-steps"></a>Next steps

- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [Azure Monitor for VMs](../../azure-monitor/insights/vminsights-enable-policy.md), and much more.
- Learn more about the [Log Analytics agent](../../azure-monitor/platform/log-analytics-agent.md). The Log Analytics agent for Windows and Linux is required when you want to proactively monitor the OS and workloads running on the machine, manage it using Automation runbooks or features like Update Management, or use other Azure services like [Azure Security Center](../../security-center/security-center-intro.md).
31.636923
297
0.74256
yue_Hant
0.753748
213f145b27eec39fb640bd2f46d0faffa7c2a2ec
918
md
Markdown
Web/OAuth.md
eyl056/tech-interview-for-developer
c1780f51fcdc7bc294c55296b84ec5fedd63ae05
[ "MIT" ]
6,277
2020-01-16T17:06:02.000Z
2022-03-31T16:32:23.000Z
Web/OAuth.md
HyeongJun94/tech-interview-for-developer
ab8faa238283b112acc7cdb4d72433aeb036464a
[ "MIT" ]
44
2020-01-22T03:18:38.000Z
2021-12-14T03:28:03.000Z
Web/OAuth.md
HyeongJun94/tech-interview-for-developer
ab8faa238283b112acc7cdb4d72433aeb036464a
[ "MIT" ]
1,604
2020-01-17T14:36:03.000Z
2022-03-31T09:45:11.000Z
## OAuth

> Open Authorization

An open standard that lets internet users grant websites or applications access to their information on other websites without handing over their passwords.

<br>

This mechanism is used by Google, Facebook, Twitter, and others, and it allows users to share information about their accounts with third-party applications and websites.

<br>
<br>

#### Terms

---

- **User**: an individual who holds an account
- **Consumer**: a website or application that accesses the service provider using OAuth
- **Service provider**: a web application that supports access through OAuth
- **Consumer secret**: a key the consumer uses to prove its identity to the service provider
- **Request token**: holds the information the consumer needs to get the user's authorization
- **Access token**: after authorization, the key for accessing the user's protected resources through the consumer rather than the service provider

<br>

There are two kinds of tokens: the Access Token and the Refresh Token. An Access Token has an expiry time, and once it expires a new one must be requested. When a Refresh Token expires, the whole flow must be started over from the beginning.

<br>

#### Authorization flow

---

> Consumer <-> Service provider

1. The consumer requests a request token from the service provider.
2. The service provider issues the request token to the consumer.
3. The consumer redirects the user to the service provider, where user authentication takes place.
4. The service provider redirects the user back to the consumer.
5. The consumer requests an access token.
6. The service provider issues the access token.
7. The consumer uses the issued access token to access the user's information.

<br>
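As a rough illustration of the token lifecycle described above, here is what refreshing an expired access token can look like in an OAuth 2.0-style flow. Everything here is hypothetical: the endpoint URL, the client credentials, and the response shape depend entirely on the provider.

```typescript
// Hypothetical sketch: exchange a refresh token for a new access token.
// The URL, client id/secret, and response fields are placeholders.
async function refreshAccessToken(refreshToken: string): Promise<string> {
  const res = await fetch("https://provider.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "YOUR_CLIENT_ID",
      client_secret: "YOUR_CLIENT_SECRET",
    }),
  });
  // If the refresh token itself has expired, the provider rejects the call
  // and the user has to go through the full authorization flow again.
  if (!res.ok) throw new Error("refresh failed: re-run the authorization flow");
  const body = await res.json();
  return body.access_token as string;
}
```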
19.125
86
0.666667
kor_Hang
1.00001
213f57a93f14696729ed12f385ae62bc0d97111f
5,466
md
Markdown
_posts/2013-11-20-juggle-chainsaws.md
squinones/squinones.github.io
82f81dc03d07fd47236926a3b4c6e1fcc5e7fdaa
[ "MIT" ]
1
2016-02-04T18:06:47.000Z
2016-02-04T18:06:47.000Z
_posts/2013-11-20-juggle-chainsaws.md
squinones/squinones.github.io
82f81dc03d07fd47236926a3b4c6e1fcc5e7fdaa
[ "MIT" ]
null
null
null
_posts/2013-11-20-juggle-chainsaws.md
squinones/squinones.github.io
82f81dc03d07fd47236926a3b4c6e1fcc5e7fdaa
[ "MIT" ]
null
null
null
---
layout: post
title: Juggle Chainsaws, Not Types
date: 2013-11-20 05:28:08
category: development
tags: [bugs, php, testing, type juggling]
---

No matter how popular an activity it is, I really don't like to bash on PHP. Every language has its flaws when you look closely enough, and if PHP wears its idiosyncrasies a little closer to the surface than most, I think it makes up for it in other ways. PHP's handling of types, however, is confusing at best and at worst completely deranged.

I've seen intercity rail schedules that can't hold a candle to PHP's <a href="http://us1.php.net/types.comparisons">type comparison tables</a>. The bizarre and unexpected behaviors that can result from a non-strict comparison have all but rendered the equality operator (==) useless. The typical advice from PHP master to PHP neophyte is: always check for identicality (===) unless you're very sure what you're doing.

In fact, as part of our pre-screening process at Politico, we often ask PHP developer candidates to explain why the following expression evaluates as true.

{% highlight php startinline=true %}
("politico" == 0);
{% endhighlight %}

This question probably won't stump a PHP developer with any significant experience. If you manage to code in PHP for more than a few months without running face-first into this issue, you might not be trying hard enough. PHP is weakly typed in much the same way that water is slightly dry. When faced with the expression above, the interpreter tries to make sense of things by converting the string into a number, following a single and actually <a href="http://www.php.net/manual/en/language.types.string.php#language.types.string.conversion">quite well documented set of rules.</a>

To put it simply, if the string contains a '.', or the letter 'e', PHP will attempt to turn it into a float. Otherwise, it turns it into an integer. If the string starts with "valid numeric data," those numbers will be used as the value of the new integer (e.g. (int) "123foo" -> 123). If the string doesn't start with a number, the value will be 0. The literal string "politico" starts with a 'p', which is not a number, and so the interpreter rewrites the above expression like so:

{% highlight php startinline=true %}
(0 == 0);
{% endhighlight %}

The obvious second half of that question is, how do you make PHP give you an answer that isn't quite as... insane? You use the special identicality operator (===), more commonly called the triple-equal. This operator compares the types of the values in the expression, and if they don't match, it doesn't go through the whole circus act and just returns false.

Cute, right? Maybe until you realize that this behavior happens to catch a lot of unsuspecting programmers, especially ones who are new to the profession. Sometimes it creates subtle and hard-to-reproduce bugs, and sometimes it creates [serious vulnerabilities](http://en.securitylab.ru/lab/PT-2012-29). Once in a while, it'll even catch a more experienced programmer.

I was recently implementing a fairly simple class with a setter method. The method accepts one argument and checks to see if it is in a list of allowed arguments before setting a protected property. If the argument is not in the list of allowed arguments, an exception is thrown. These arguments are configuration options that are generally expressed through the use of pre-defined constants, and our code is merely storing the value.
{% highlight php startinline=true %}
public function setFoo($foo)
{
    $validOptions = [VALID_CONSTANT_A, VALID_CONSTANT_B];
    if (!in_array($foo, $validOptions)) {
        throw new InvalidArgumentException("foo must be one of: " . implode(", ", $validOptions));
    }
    $this->foo = $foo;
}
{% endhighlight %}

Even for a setter method, this is pretty unremarkable. Yet when we run our unit tests, we get an interesting failure. Our test that passes in an invalid argument expects to see an exception, but it doesn't. In fact, this method accepts almost any string as valid. And despite my being way more familiar with PHP's weird type handling than I really want to be, it still took me several minutes to figure out what was going on here.

Those constants are a form of abstraction, hiding an implementation detail of one of our class's dependencies that we shouldn't have to care about. If the dependency ever decides to change that implementation, using the constants means we don't have to alter our code to keep up. But ignorance is often misguided bliss, because VALID_CONSTANT_A is actually set to the integer 0, and by default, the in_array function doesn't do a strict, type-safe comparison. Let's make a quick fix...

{% highlight php startinline=true %}
public function setFoo($foo)
{
    $validOptions = [VALID_CONSTANT_A, VALID_CONSTANT_B];
    if (!in_array($foo, $validOptions, true)) {
        throw new InvalidArgumentException("foo must be one of: " . implode(", ", $validOptions));
    }
    $this->foo = $foo;
}
{% endhighlight %}

In case you didn't notice the change, in_array accepts a third argument, a boolean which forces strict comparison when set to true. Now our unit tests pass and our crisis is averted.

Maybe there's a parable here. It's a pretty vivid illustration of why unit tests are worth our time and effort, if nothing else. Still, it nags at me. At the very best, silly things like this eat up valuable time. I don't even want to consider the very worst.
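As a closing footnote, the loose-versus-strict difference is easy to demonstrate in isolation. A small repro; note that PHP 8 later changed string-to-number comparison, so the loose result below reflects the PHP 5 behavior this post was written against:

{% highlight php startinline=true %}
$validOptions = [0, 'b'];   // imagine VALID_CONSTANT_A === 0
var_dump(in_array('anything', $validOptions));       // bool(true)  -- loose: 'anything' == 0
var_dump(in_array('anything', $validOptions, true)); // bool(false) -- strict: types must match
{% endhighlight %}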
84.092308
850
0.761617
eng_Latn
0.998969
214083d6381ebf2564860604ba4cd7f76ce5af37
936
md
Markdown
_posts/2016-03-18-whats-the-http-benchmark-tools.md
liu1084/page.83096146.com
c9b65b7865eaa356cab71ac51b6b64ebd4174e0a
[ "MIT" ]
2
2017-09-30T01:54:30.000Z
2018-11-14T16:45:03.000Z
_posts/2016-03-18-whats-the-http-benchmark-tools.md
liu1084/page.83096146.com
c9b65b7865eaa356cab71ac51b6b64ebd4174e0a
[ "MIT" ]
null
null
null
_posts/2016-03-18-whats-the-http-benchmark-tools.md
liu1084/page.83096146.com
c9b65b7865eaa356cab71ac51b6b64ebd4174e0a
[ "MIT" ]
null
null
null
---
id: 245
title: 'What are the HTTP benchmark tools?'
date: 2016-03-18T15:19:52+00:00
author: liu1084
layout: post
guid: http://blog.83096146.com/?p=245
permalink: /?p=245
categories:
  - test
  - tools
---
Do you want to benchmark your production application running on a server before releasing it online? There are many tools to choose from:

| No. | Tool name |
|-----|-----------|
| 1 | Apache's AB |
| 2 | Lighttpd's Weighttp |
| 3 | HTTPerf |
| 4 | Nginx's "wrk" |

How do you use them?
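As a quick taste, here is roughly what an ApacheBench and a wrk run look like; the URL and the numbers are placeholders you should tune for your own service:

```sh
# ApacheBench: 1000 requests total, 10 at a time
ab -n 1000 -c 10 http://localhost:8080/

# wrk: 4 threads, 100 open connections, for 30 seconds
wrk -t4 -c100 -d30s http://localhost:8080/
```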
13.183099
130
0.508547
eng_Latn
0.555827
2140d5db43728752d15c9eed1c4e067d70b5eba6
14,327
md
Markdown
articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
gliljas/azure-docs.sv-se-1
1efdf8ba0ddc3b4fb65903ae928979ac8872d66e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Set up a QnA Maker service - QnA Maker
description: Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with permission to create new resources in a subscription can set up a QnA Maker service.
ms.topic: conceptual
ms.date: 03/19/2020
ms.openlocfilehash: 563a56fdb288568e7fe667fa54658400064a560f
ms.sourcegitcommit: 58faa9fcbd62f3ac37ff0a65ab9357a01051a64f
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 04/29/2020
ms.locfileid: "81402994"
---
# <a name="manage-qna-maker-resources"></a>Manage QnA Maker resources

Before you can create any QnA Maker knowledge bases, you must first set up a QnA Maker service in Azure. Anyone with permission to create new resources in a subscription can set up a QnA Maker service.

A solid understanding of the following concepts is helpful before you create your resource:

* [QnA Maker resources](../Concepts/azure-resources.md)
* [Authoring and publishing keys](../Concepts/azure-resources.md#keys-in-qna-maker)

## <a name="create-a-new-qna-maker-service"></a>Create a new QnA Maker service

This procedure creates the Azure resources needed to manage the knowledge base content. After you complete these steps, you'll find the _subscription_ keys on the **Keys** page for the resource in the Azure portal.

1. Sign in to the Azure portal and [create a QnA Maker](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) resource.
1. Select **Create** after you have read the terms and conditions:

    ![Create a new QnA Maker service](../media/qnamaker-how-to-setup-service/create-new-resource-button.png)

1. In **QnA Maker**, select the appropriate tiers and regions:

    ![Create a new QnA Maker service - pricing tier and regions](../media/qnamaker-how-to-setup-service/enter-qnamaker-info.png)

    * In the **Name** field, enter a unique name to identify this QnA Maker service. The name also identifies the QnA Maker endpoint that your knowledge bases will be associated with.
    * Select the **Subscription** under which the QnA Maker resource will be deployed.
    * Select the **Pricing tier** for the QnA Maker management services (portal and management APIs). See [more details about SKU pricing](https://aka.ms/qnamaker-pricing).
    * Create a new **Resource group** (recommended) or use an existing one in which to deploy this QnA Maker resource. QnA Maker creates several Azure resources. When you create a resource group that holds these resources, you can easily find, manage, and delete them by the resource group name.
    * Select a **Resource group location**.
    * Select the **Search pricing tier** of the Azure Cognitive Search service. If the free tier option is unavailable (appears dimmed), it means you already have a free service deployed through your subscription. In that case, you'll need to start with the Basic tier. See [Azure Cognitive Search pricing details](https://azure.microsoft.com/pricing/details/search/).
    * Select the **Search location** where you want Azure Cognitive Search indexes to be deployed. Restrictions on where customer data must be stored will help you determine the location you choose for Azure Cognitive Search.
    * In the **App name** field, enter a name for your Azure App Service instance.
    * By default, App Service defaults to the standard (S1) tier. You can change the plan after creation. Learn more about [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
    * Select the **Website location** where App Service will be deployed.

    > [!NOTE]
    > The **Search location** can differ from the **Website location**.

    * Choose whether or not to enable **Application Insights**. If **Application Insights** is enabled, QnA Maker collects telemetry on traffic, chat logs, and errors.
    * Select the **App insights location** where the Application Insights resource will be deployed.
    * For cost savings, you can [share](#configure-qna-maker-to-use-different-cognitive-search-resource) some, but not all, of the Azure resources created for QnA Maker.

1. After all the fields are validated, select **Create**. The process can take a few minutes to complete.

1. After deployment completes, you'll see the following resources created in your subscription:

    ![Resources created for a new QnA Maker service](../media/qnamaker-how-to-setup-service/resources-created.png)

    The resource with the _Cognitive Services_ type has your _subscription_ keys.

## <a name="find-subscription-keys-in-the-azure-portal"></a>Find subscription keys in the Azure portal

You can view and reset your subscription keys from the Azure portal, where you created the QnA Maker resource.

1. Go to the QnA Maker resource in the Azure portal and select the resource that has the _Cognitive Services_ type:

    ![QnA Maker resource list](../media/qnamaker-how-to-key-management/qnamaker-resource-list.png)

2. Go to **Keys**:

    ![Subscription key](../media/qnamaker-how-to-key-management/subscription-key.PNG)
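If you prefer scripting over the portal, the same subscription keys can be read with the Azure CLI. A sketch, assuming the CLI is installed and you are signed in; the resource and group names are placeholders:

```azurecli
az cognitiveservices account keys list \
  --name <qna-maker-resource-name> \
  --resource-group <resource-group-name>
```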
## <a name="find-endpoint-keys-in-the-qna-maker-portal"></a>Find endpoint keys in the QnA Maker portal

The endpoint is in the same region as the resource, because the endpoint keys are used to call the knowledge base. Endpoint keys can be managed from the [QnA Maker portal](https://qnamaker.ai).

1. Sign in to the [QnA Maker portal](https://qnamaker.ai), go to your profile, and then select **Service settings**:

    ![Endpoint key](../media/qnamaker-how-to-key-management/Endpoint-keys.png)

2. View or reset your keys:

    > [!div class="mx-imgBorder"]
    > ![Endpoint key manager](../media/qnamaker-how-to-key-management/Endpoint-keys1.png)

>[!NOTE]
>Refresh your keys if you believe they have been compromised. This may require corresponding changes in your client application or bot code.

## <a name="upgrade-qna-maker-sku"></a>Upgrade the QnA Maker SKU

When you want more questions and answers in your knowledge base, beyond your current tier, upgrade your QnA Maker service pricing tier.

To upgrade the QnA Maker management SKU:

1. Go to your QnA Maker resource in the Azure portal and select **Pricing tier**.

    ![QnA Maker resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource.png)

1. Select the appropriate SKU and press **Select**.

    ![QnA Maker pricing](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-pricing-page.png)

## <a name="upgrade-app-service"></a>Upgrade App Service

When your knowledge base needs to serve more requests from your client app, upgrade your App Service pricing tier. You can [scale up](https://docs.microsoft.com/azure/app-service/manage-scale-up) or scale out App Service. Go to the App Service resource in the Azure portal and select the **Scale up** or **Scale out** option as required.

![QnA Maker App Service scale](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-scale.png)
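The scale-up can also be scripted. A sketch with the Azure CLI; the plan name, resource group, and target SKU are placeholders, and S2 is just an example tier:

```azurecli
az appservice plan update \
  --name <app-service-plan-name> \
  --resource-group <resource-group-name> \
  --sku S2
```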
## <a name="upgrade-the-azure-cognitive-search-service"></a>Upgrade the Azure Cognitive Search service

If you plan to have many knowledge bases, upgrade the pricing tier of your Azure Cognitive Search service. Currently, you can't perform an in-place upgrade of the Azure Search SKU. However, you can create a new Azure Search resource with the desired SKU, restore the data to the new resource, and then link it to the QnA Maker stack. To do this, follow these steps:

1. Create a new Azure Search resource in the Azure portal, and select the desired SKU.

    ![QnA Maker Azure Search resource](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-new.png)

1. Restore the indexes from your original Azure Search resource to the new one. See the [backup restore sample code](https://github.com/pchoudhari/QnAMakerBackupRestore).

1. After the data is restored, go to your new Azure Search resource, select **Keys**, and write down the **Name** and the **Admin key**:

    ![QnA Maker Azure Search keys](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-azuresearch-keys.png)

1. To link the new Azure Search resource to the QnA Maker stack, go to the QnA Maker App Service instance.

    ![QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-resource-list-appservice.png)

1. Select **Application settings** and change the settings in the **AzureSearchName** and **AzureSearchAdminKey** fields to the values from step 3.

    ![QnA Maker App Service setting](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-settings.png)

1. Restart the App Service instance.

    ![Restart the QnA Maker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)

## <a name="get-the-latest-runtime-updates"></a>Get the latest runtime updates

The QnAMaker runtime is part of the Azure App Service instance that's deployed when you [create a QnAMaker service](./set-up-qnamaker-service-azure.md) in the Azure portal. Updates are made periodically to the runtime. The QnA Maker App Service instance is in auto-update mode after the 2019 site extension release (version 5+). This update is designed to take care of zero downtime during upgrades.

You can check your current version at https://www.qnamaker.ai/UserSettings. If your version is older than version 5.x, you must restart App Service to apply the latest updates:

1. Go to your QnAMaker service (resource group) in the [Azure portal](https://portal.azure.com).

    > [!div class="mx-imgBorder"]
    > ![QnAMaker Azure resource group](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-resourcegroup.png)

1. Select the App Service instance and open the **Overview** section.

    > [!div class="mx-imgBorder"]
    > ![QnAMaker App Service instance](../media/qnamaker-how-to-troubleshoot/qnamaker-azure-appservice.png)

1. Restart App Service. The update process should finish in a couple of seconds. Any dependent applications or bots that use this QnAMaker service will be unavailable to end users during this restart period.

    ![Restart the QnAMaker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
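If you manage the service from scripts, the same restart can be done with the Azure CLI; the app and group names are placeholders:

```azurecli
az webapp restart \
  --name <app-service-name> \
  --resource-group <resource-group-name>
```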
## <a name="configure-qna-maker-to-use-different-cognitive-search-resource"></a>Configure QnA Maker to use a different Cognitive Search resource

If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.

The QnA Maker **App Service** resource uses the Cognitive Search resource. To change the Cognitive Search resource used by QnA Maker, you need to change the setting in the Azure portal.

1. Get the **Admin key** and **Name** of the Cognitive Search resource you want QnA Maker to use.

1. Sign in to the [Azure portal](https://portal.azure.com) and find the **App Service** associated with your QnA Maker resource. Both have the same name.

1. Select **Settings**, then **Configuration**. This displays all the existing settings for the QnA Maker App Service.

    > [!div class="mx-imgBorder"]
    > ![Screenshot of the Azure portal showing App Service configuration settings](../media/qnamaker-how-to-upgrade-qnamaker/change-search-service-app-service-configuration.png)

1. Change the values for the following keys:

    * **AzureSearchAdminKey**
    * **AzureSearchName**

1. To use the new settings, you need to restart the App Service. Select **Overview**, then select **Restart**.

    > [!div class="mx-imgBorder"]
    > ![Screenshot of the Azure portal restarting the App Service after a configuration settings change](../media/qnamaker-how-to-upgrade-qnamaker/screenshot-azure-portal-restart-app-service.png)

If you create a QnA service through Azure Resource Manager templates, you can create all resources and control the App Service creation to use an existing Search service. Learn more about configuring the App Service [application settings](../../../app-service/configure-common.md#configure-app-settings).

## <a name="configure-app-service-idle-setting-to-avoid-timeout"></a>Configure the App Service idle setting to avoid timeout

The App Service, which serves the QnA Maker prediction runtime for a published knowledge base, has an idle timeout configuration that defaults to timing out automatically if the service is idle. For QnA Maker, this means your prediction runtime generateAnswer API occasionally times out after periods with no traffic.

To keep the prediction endpoint app loaded even when there's no traffic, set the idle behavior to Always On.

1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select your QnA Maker resource's App Service. It has the same name as the QnA Maker resource, but it has a different **type**, App Service.
1. Find **Settings**, and then select **Configuration**.
1. On the Configuration pane, select **General settings**, then find **Always on** and select **On** as the value.

    > [!div class="mx-imgBorder"]
    > ![On the Configuration pane, select **General settings**, then find **Always on** and select **On** as the value.](../media/qnamaker-how-to-upgrade-qnamaker/configure-app-service-idle-timeout.png)

1. Select **Save** to save the configuration.
1. You're asked whether you want to restart the app so the new setting takes effect. Select **Continue**.

Learn more about how to configure the App Service [general settings](../../../app-service/configure-common.md#configure-general-settings).

## <a name="delete-azure-resources"></a>Delete Azure resources

If you delete any of the Azure resources used for your QnA Maker knowledge bases, the knowledge bases will stop working. Before deleting any resources, make sure you export your knowledge bases from the **Settings** page.

## <a name="next-steps"></a>Next steps

Learn more about [App Service](../../../app-service/index.yml) and the [Search service](../../../search/index.yml).

> [!div class="nextstepaction"]
> [Create and publish a knowledge base](../Quickstarts/create-publish-knowledge-base.md)
64.246637
440
0.76806
swe_Latn
0.997405
2141265284b224bfeb94a62ed5b85f077836916a
8,500
md
Markdown
UPGRADE.md
fenix007/runcity
7d8dfed1c06af362b962ef1d6f0f1c63d0cc1c81
[ "MIT" ]
null
null
null
UPGRADE.md
fenix007/runcity
7d8dfed1c06af362b962ef1d6f0f1c63d0cc1c81
[ "MIT" ]
null
null
null
UPGRADE.md
fenix007/runcity
7d8dfed1c06af362b962ef1d6f0f1c63d0cc1c81
[ "MIT" ]
null
null
null
Symfony Standard Edition Upgrade
================================

From Symfony 2.0 to Symfony 2.1
-------------------------------

### Project Dependencies

As of Symfony 2.1, project dependencies are managed by [Composer](http://getcomposer.org/):

* The `bin/vendors` script can be removed as `composer.phar` does all the work now (it is recommended to install it globally on your machine).
* The `deps` file needs to be replaced with the `composer.json` one.
* The `composer.lock` is the equivalent of the generated `deps.lock` file and it is automatically generated by Composer.

Download the default [`composer.json`](https://raw.github.com/symfony/symfony-standard/2.1/composer.json) and [`composer.lock`](https://raw.github.com/symfony/symfony-standard/2.1/composer.lock) files for Symfony 2.1 and put them into the main directory of your project. If you have customized your `deps` file, move the added dependencies to the `composer.json` file (many bundles and PHP libraries are already available as Composer packages -- search for them on [Packagist](http://packagist.org/)).

Remove your current `vendor` directory.

Finally, run Composer:

    $ composer.phar install

Note: You must complete the upgrade steps below so Composer can successfully generate the autoload files.

### `app/autoload.php`

The default `autoload.php` reads as follows (it has been simplified a lot as autoloading for libraries and bundles declared in your `composer.json` file is automatically managed by the Composer autoloader):

    <?php

    use Doctrine\Common\Annotations\AnnotationRegistry;

    $loader = include __DIR__.'/../vendor/autoload.php';

    // intl
    if (!function_exists('intl_get_error_code')) {
        require_once __DIR__.'/../vendor/symfony/symfony/src/Symfony/Component/Locale/Resources/stubs/functions.php';

        $loader->add('', __DIR__.'/../vendor/symfony/symfony/src/Symfony/Component/Locale/Resources/stubs');
    }

    AnnotationRegistry::registerLoader(array($loader, 'loadClass'));

    return $loader;

### `app/config/config.yml`

The `framework.charset` setting must be removed. If you are not using `UTF-8` for your application, override the `getCharset()` method in your `AppKernel` class instead:

    class AppKernel extends Kernel
    {
        public function getCharset()
        {
            return 'ISO-8859-1';
        }

        // ...
    }

You might want to add the new `strict_requirements` parameter to `framework.router` (it avoids fatal errors in the production environment when a link cannot be generated):

    framework:
        router:
            strict_requirements: "%kernel.debug%"

You can even disable the requirements check on production with `null` as you should know that the parameters for URL generation always pass the requirements, e.g. by validating them beforehand. This additionally enhances performance. See [config_prod.yml](https://github.com/symfony/symfony-standard/blob/master/app/config/config_prod.yml).

The `default_locale` parameter is now a setting of the main `framework` configuration (it was under `framework.session` in 2.0):

    framework:
        default_locale: "%locale%"

The `auto_start` setting under `framework.session` must be removed as it is not used anymore (the session is now always started on-demand). If `auto_start` was the only setting under the `framework.session` entry, don't remove it entirely, but set its value to `~` (`~` means `null` in YAML) instead:

    framework:
        session: ~

The `trust_proxy_headers` setting was added in the default configuration file (as it should be set to `true` when you install your application behind a reverse proxy):

    framework:
        trust_proxy_headers: false

An empty `bundles` entry was added to the `assetic` configuration:

    assetic:
        bundles: []

The default `swiftmailer` configuration now has the `spool` setting configured to the `memory` type to defer email sending after the response is sent to the user (recommended for better end-user performance):

    swiftmailer:
        spool: { type: memory }

The `jms_security_extra` configuration was moved to the `security.yml` configuration file.

### `app/config/config_dev.yml`

An example of how to send all emails to a unique address was added:

    #swiftmailer:
    #    delivery_address: [email protected]

### `app/config/config_test.yml`

The `storage_id` setting must be changed to `session.storage.mock_file`:

    framework:
        session:
            storage_id: session.storage.mock_file

### `app/config/parameters.ini`

The file has been converted to a YAML file which reads as follows:

    parameters:
        database_driver:   pdo_mysql
        database_host:     localhost
        database_port:     ~
        database_name:     symfony
        database_user:     root
        database_password: ~

        mailer_transport:  smtp
        mailer_host:       localhost
        mailer_user:       ~
        mailer_password:   ~

        locale:            en
        secret:            ThisTokenIsNotSoSecretChangeIt

Note that if you convert your parameters file to YAML, you must also change its reference in `app/config/config.yml`.

### `app/config/routing_dev.yml`

The `_assetic` entry was removed:

    #_assetic:
    #    resource: .
    #    type:     assetic

### `app/config/security.yml`

Under `security.access_control`, the default rule for internal routes was changed:

    security:
        access_control:
            #- { path: ^/_internal/secure, roles: IS_AUTHENTICATED_ANONYMOUSLY, ip: 127.0.0.1 }

Under `security.providers`, the `in_memory` example was updated to the following:

    security:
        providers:
            in_memory:
                memory:
                    users:
                        user:  { password: userpass, roles: [ 'ROLE_USER' ] }
                        admin: { password: adminpass, roles: [ 'ROLE_ADMIN' ] }

### `app/AppKernel.php`

The following bundles have been added to the list of default registered bundles:

    new JMS\AopBundle\JMSAopBundle(),
    new JMS\DiExtraBundle\JMSDiExtraBundle($this),

You must also rename the DoctrineBundle from:

    new Symfony\Bundle\DoctrineBundle\DoctrineBundle(),

to:

    new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),

### `web/app.php`

The default `web/app.php` file now reads as follows:

    <?php

    use Symfony\Component\ClassLoader\ApcClassLoader;
    use Symfony\Component\HttpFoundation\Request;

    $loader = require_once __DIR__.'/../app/bootstrap.php.cache';

    // Use APC for autoloading to improve performance.
    // Change 'sf2' to a unique prefix in order to prevent cache key conflicts
    // with other applications also using APC.
    /*
    $loader = new ApcClassLoader('sf2', $loader);
    $loader->register(true);
    */

    require_once __DIR__.'/../app/AppKernel.php';
    //require_once __DIR__.'/../app/AppCache.php';

    $kernel = new AppKernel('prod', false);
    $kernel->loadClassCache();
    //$kernel = new AppCache($kernel);
    $request = Request::createFromGlobals();
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);

### `web/app_dev.php`

The default `web/app_dev.php` file now reads as follows:

    <?php

    use Symfony\Component\HttpFoundation\Request;

    // If you don't want to setup permissions the proper way, just uncomment the following PHP line
    // read http://symfony.com/doc/current/book/installation.html#configuration-and-setup for more information
    //umask(0000);

    // This check prevents access to debug front controllers that are deployed by accident to production servers.
    // Feel free to remove this, extend it, or make something more sophisticated.
    if (isset($_SERVER['HTTP_CLIENT_IP'])
        || isset($_SERVER['HTTP_X_FORWARDED_FOR'])
        || !in_array(@$_SERVER['REMOTE_ADDR'], array(
            '127.0.0.1',
            '::1',
        ))
    ) {
        header('HTTP/1.0 403 Forbidden');
        exit('You are not allowed to access this file. Check '.basename(__FILE__).' for more information.');
    }

    $loader = require_once __DIR__.'/../app/bootstrap.php.cache';
    require_once __DIR__.'/../app/AppKernel.php';

    $kernel = new AppKernel('dev', true);
    $kernel->loadClassCache();
    $request = Request::createFromGlobals();
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);
31.598513
117
0.684118
eng_Latn
0.928045
2141f95361e8ed9ae56b49c3e4be3ee1515974af
48
md
Markdown
powerstrip-weave/slide9.md
binocarlos/presentations
024418ac9494ac1039937ae739e73b4196485021
[ "Apache-2.0" ]
4
2015-02-04T10:34:58.000Z
2015-09-16T20:02:51.000Z
powerstrip-weave/slide9.md
binocarlos/presentations
024418ac9494ac1039937ae739e73b4196485021
[ "Apache-2.0" ]
null
null
null
powerstrip-weave/slide9.md
binocarlos/presentations
024418ac9494ac1039937ae739e73b4196485021
[ "Apache-2.0" ]
null
null
null
## weave

weave is a networking tool for docker.
16
38
0.75
eng_Latn
0.99985
2142187ba0d683ecd60377026cc05c00e348b49a
8,500
md
Markdown
README.md
HeartLab/dcmjs-dimse
8c4792671d754892fb9c0b5146e4fd7ed8b7e2fe
[ "MIT" ]
null
null
null
README.md
HeartLab/dcmjs-dimse
8c4792671d754892fb9c0b5146e4fd7ed8b7e2fe
[ "MIT" ]
null
null
null
README.md
HeartLab/dcmjs-dimse
8c4792671d754892fb9c0b5146e4fd7ed8b7e2fe
[ "MIT" ]
null
null
null
[![NPM version][npm-version-image]][npm-url] [![build][build-image]][build-url] [![MIT License][license-image]][license-url]

# dcmjs-dimse

DICOM DIMSE implementation for Node.js using Steve Pieper's [dcmjs][dcmjs-url] library. This library was inspired by [fo-dicom][fo-dicom-url] and [mdcm][mdcm-url]. Part of the networking code was taken from [dicom-dimse][dicom-dimse-url].

### Note

**This effort is a work-in-progress and should not be used for production or clinical purposes.**

### Install

    npm install dcmjs-dimse

### Build

    npm install
    npm run build

### Features

- Implements C-ECHO, C-FIND, C-STORE, C-MOVE, C-GET, N-CREATE, N-ACTION, N-DELETE, N-EVENT-REPORT, N-GET and N-SET services as SCU and SCP.
- Supports secure DICOM TLS connections.
- Allows custom DICOM implementations (Implementation Class UID and Implementation Version).
- Provides asynchronous event handlers for incoming SCP requests.

### Examples

#### C-Echo SCU

```js
const dcmjsDimse = require('dcmjs-dimse');
const { Client } = dcmjsDimse;
const { CEchoRequest } = dcmjsDimse.requests;
const { Status } = dcmjsDimse.constants;

const client = new Client();
const request = new CEchoRequest();
request.on('response', (response) => {
  if (response.getStatus() === Status.Success) {
    console.log('Happy!');
  }
});
client.addRequest(request);
client.on('networkError', (e) => {
  console.log('Network error: ', e);
});
client.send('127.0.0.1', 12345, 'SCU', 'ANY-SCP');
```

#### C-Find SCU (Studies)

```js
const dcmjsDimse = require('dcmjs-dimse');
const { Client } = dcmjsDimse;
const { CFindRequest } = dcmjsDimse.requests;
const { Status } = dcmjsDimse.constants;

const client = new Client();
const request = CFindRequest.createStudyFindRequest({ PatientID: '12345', PatientName: '*' });
request.on('response', (response) => {
  if (response.getStatus() === Status.Pending && response.hasDataset()) {
    console.log(response.getDataset());
  }
});
client.addRequest(request);
client.on('networkError', (e) => {
  console.log('Network error: ', e);
});
client.send('127.0.0.1', 12345, 'SCU', 'ANY-SCP');
```

#### C-Store SCU

```js
const dcmjsDimse = require('dcmjs-dimse');
const { Client } = dcmjsDimse;
const { CStoreRequest } = dcmjsDimse.requests;

const client = new Client();
const request = new CStoreRequest('test.dcm');
client.addRequest(request);
client.on('networkError', (e) => {
  console.log('Network error: ', e);
});
client.send('127.0.0.1', 12345, 'SCU', 'ANY-SCP');
```

#### C-Move SCU

```js
const dcmjsDimse = require('dcmjs-dimse');
const { Client } = dcmjsDimse;
const { CMoveRequest } = dcmjsDimse.requests;
const { Status } = dcmjsDimse.constants;

const client = new Client();
const request = CMoveRequest.createStudyMoveRequest('DEST-AE', studyInstanceUid);
request.on('response', (response) => {
  if (response.getStatus() === Status.Pending) {
    console.log('Remaining: ' + response.getRemaining());
    console.log('Completed: ' + response.getCompleted());
    console.log('Warning: ' + response.getWarnings());
    console.log('Failed: ' + response.getFailures());
  }
});
client.addRequest(request);
client.on('networkError', (e) => {
  console.log('Network error: ', e);
});
client.send('127.0.0.1', 12345, 'SCU', 'ANY-SCP');
```

#### C-Get SCU

```js
const dcmjsDimse = require('dcmjs-dimse');
const { Client } = dcmjsDimse;
const { CGetRequest } = dcmjsDimse.requests;
const { CStoreResponse } = dcmjsDimse.responses;
const { Status } = dcmjsDimse.constants;

const client = new Client();
const request = CGetRequest.createStudyGetRequest(studyInstanceUid);
request.on('response', (response) => {
  if (response.getStatus() === Status.Pending) {
    console.log('Remaining: ' + response.getRemaining());
    console.log('Completed: ' + response.getCompleted());
    console.log('Warning: ' + response.getWarnings());
    console.log('Failed: ' + response.getFailures());
  }
});
client.on('cStoreRequest', (request, callback) => {
  console.log(request.getDataset());

  const response = CStoreResponse.fromRequest(request);
  response.setStatus(Status.Success);
  callback(response);
});
client.addRequest(request);
client.on('networkError', (e) => {
  console.log('Network error: ', e);
});
client.send('127.0.0.1', 12345, 'SCU', 'ANY-SCP');
```

#### SCP

```js
const dcmjsDimse = require('dcmjs-dimse');
// Dataset is also destructured here because the C-FIND handler below uses it.
const { Server, Scp, Dataset } = dcmjsDimse;
const { CEchoResponse, CFindResponse, CStoreResponse } = dcmjsDimse.responses;
const {
  Status,
  PresentationContextResult,
  RejectResult,
  RejectSource,
  RejectReason,
  TransferSyntax,
  SopClass,
  StorageClass,
} = dcmjsDimse.constants;

class DcmjsDimseScp extends Scp {
  constructor(socket, opts) {
    super(socket, opts);
    this.association = undefined;
  }

  // Handle incoming association requests
  associationRequested(association) {
    this.association = association;

    // Evaluate calling/called AET and reject association, if needed
    if (this.association.getCallingAeTitle() !== 'SCU') {
      this.sendAssociationReject(
        RejectResult.Permanent,
        RejectSource.ServiceUser,
        RejectReason.CallingAeNotRecognized
      );
      return;
    }

    // Optionally set the preferred max PDU length
    this.association.setMaxPduLength(65536);

    const contexts = association.getPresentationContexts();
    contexts.forEach((c) => {
      const context = association.getPresentationContext(c.id);
      if (
        context.getAbstractSyntaxUid() === SopClass.Verification ||
        context.getAbstractSyntaxUid() === SopClass.StudyRootQueryRetrieveInformationModelFind ||
        context.getAbstractSyntaxUid() === StorageClass.MrImageStorage
        // Accept other presentation contexts, as needed
      ) {
        const transferSyntaxes = context.getTransferSyntaxUids();
        transferSyntaxes.forEach((transferSyntax) => {
          if (transferSyntax === TransferSyntax.ImplicitVRLittleEndian) {
            context.setResult(
              PresentationContextResult.Accept,
              TransferSyntax.ImplicitVRLittleEndian
            );
          } else {
            context.setResult(PresentationContextResult.RejectTransferSyntaxesNotSupported);
          }
        });
      } else {
        context.setResult(PresentationContextResult.RejectAbstractSyntaxNotSupported);
      }
    });
    this.sendAssociationAccept();
  }

  // Handle incoming C-ECHO requests
  cEchoRequest(request, callback) {
    const response = CEchoResponse.fromRequest(request);
    response.setStatus(Status.Success);

    callback(response);
  }

  // Handle incoming C-FIND requests
  cFindRequest(request, callback) {
    console.log(request.getDataset());

    const pendingResponse = CFindResponse.fromRequest(request);
    pendingResponse.setDataset(new Dataset({ PatientID: '12345', PatientName: 'JOHN^DOE' }));
    pendingResponse.setStatus(Status.Pending);

    const finalResponse = CFindResponse.fromRequest(request);
    finalResponse.setStatus(Status.Success);

    callback([pendingResponse, finalResponse]);
  }

  // Handle incoming C-STORE requests
  cStoreRequest(request, callback) {
    console.log(request.getDataset());

    const response = CStoreResponse.fromRequest(request);
    response.setStatus(Status.Success);

    callback(response);
  }

  // Handle incoming association release requests
  associationReleaseRequested() {
    this.sendAssociationReleaseResponse();
  }
}

const server = new Server(DcmjsDimseScp);
server.on('networkError', (e) => {
  console.log('Network error: ', e);
});
server.listen(port);

// When done
server.close();
```

Please check the respective [Wiki][dcmjs-dimse-wiki-examples-url] section for more examples.

### License

dcmjs-dimse is released under the MIT License.

[npm-url]: https://npmjs.org/package/dcmjs-dimse
[npm-version-image]: https://img.shields.io/npm/v/dcmjs-dimse.svg?style=flat
[build-url]: https://github.com/PantelisGeorgiadis/dcmjs-dimse/actions/workflows/build.yml
[build-image]: https://github.com/PantelisGeorgiadis/dcmjs-dimse/actions/workflows/build.yml/badge.svg?branch=master
[license-image]: https://img.shields.io/badge/license-MIT-blue.svg?style=flat
[license-url]: LICENSE.txt
[dcmjs-url]: https://github.com/dcmjs-org/dcmjs
[fo-dicom-url]: https://github.com/fo-dicom/fo-dicom
[mdcm-url]: https://github.com/fo-dicom/mdcm
[dicom-dimse-url]: https://github.com/OHIF/dicom-dimse
[dcmjs-dimse-wiki-examples-url]: https://github.com/PantelisGeorgiadis/dcmjs-dimse/wiki/Examples
31.021898
139
0.698706
kor_Hang
0.231611
2142283666b30da0253456444d1999e4667f1c32
1,276
md
Markdown
medium/313. Super Ugly Number.md
kilinchange/leetcode
7467b68b39d4b09b21a16439bc154a50b7262f29
[ "MIT" ]
null
null
null
medium/313. Super Ugly Number.md
kilinchange/leetcode
7467b68b39d4b09b21a16439bc154a50b7262f29
[ "MIT" ]
null
null
null
medium/313. Super Ugly Number.md
kilinchange/leetcode
7467b68b39d4b09b21a16439bc154a50b7262f29
[ "MIT" ]
null
null
null
# 313. Super Ugly Number

> Write a program to find the `nth` super ugly number.
>
> Super ugly numbers are positive numbers whose all prime factors are in the given prime list `primes` of size `k`.
>
> **Example:**
>
> ```
> Input: n = 12, primes = [2,7,13,19]
> Output: 32
> Explanation: [1,2,4,7,8,13,14,16,19,26,28,32] is the sequence of the first 12
>              super ugly numbers given primes = [2,7,13,19] of size 4.
> ```
>
> **Note:**
>
> - `1` is a super ugly number for any given `primes`.
> - The given numbers in `primes` are in ascending order.
> - 0 < `k` ≤ 100, 0 < `n` ≤ 10^6, 0 < `primes[i]` < 1000.
> - The nth super ugly number is guaranteed to fit in a 32-bit signed integer.

Every later ugly number is obtained by multiplying an earlier ugly number by one of the values in `primes`. For each prime factor, maintain an index pointing at the next old ugly number that still needs to be multiplied by that prime, and generate new ugly numbers in increasing order.

The code is as follows:

```python
import sys
from typing import List


class Solution:
    def nthSuperUglyNumber(self, n: int, primes: List[int]) -> int:
        idxs = [0] * len(primes)
        nums = [1]
        while len(nums) < n:
            tmp = sys.maxsize
            for i in range(len(primes)):
                tmp = min(tmp, nums[idxs[i]] * primes[i])
            for i in range(len(idxs)):
                if tmp == nums[idxs[i]] * primes[i]:
                    idxs[i] += 1
            nums.append(tmp)
        return nums[-1]
```
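For comparison (this is not part of the original solution), the inner scan over all `k` primes at every step costs O(nk) overall; a min-heap that tracks the next candidate per prime reduces this to O(n log k). A sketch with the same signature:

```python
import heapq
from typing import List


class Solution:
    def nthSuperUglyNumber(self, n: int, primes: List[int]) -> int:
        # Heap entries are (candidate_value, prime, index): the candidate is
        # nums[index] * prime, the next value this prime can contribute.
        nums = [1]
        heap = [(p, p, 0) for p in primes]  # ascending primes already form a valid heap
        while len(nums) < n:
            val, prime, idx = heapq.heappop(heap)
            if val != nums[-1]:  # skip duplicates such as 2 * 7 == 7 * 2
                nums.append(val)
            # Advance this prime's pointer; nums[idx + 1] always exists here,
            # since nums[idx] * prime == val <= nums[-1] forces idx < len(nums) - 1.
            heapq.heappush(heap, (prime * nums[idx + 1], prime, idx + 1))
        return nums[-1]
```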
29.674419
115
0.579937
eng_Latn
0.885804
21427543c6429a13b6ace34760705c2b9c852bc6
21,852
md
Markdown
README.md
airbrake/airbrake
817dbf732e38c3f80986ef47905049b04dc173b8
[ "MIT" ]
481
2015-01-05T05:14:28.000Z
2022-03-23T13:28:05.000Z
README.md
airbrake/airbrake
817dbf732e38c3f80986ef47905049b04dc173b8
[ "MIT" ]
526
2015-01-07T02:12:33.000Z
2022-03-31T06:26:00.000Z
README.md
airbrake/airbrake
817dbf732e38c3f80986ef47905049b04dc173b8
[ "MIT" ]
267
2015-01-17T05:32:30.000Z
2022-03-04T13:26:57.000Z
Airbrake
========

[![Build Status](https://github.com/airbrake/airbrake/workflows/airbrake/badge.svg)](https://github.com/airbrake/airbrake/actions)
[![Code Climate](https://codeclimate.com/github/airbrake/airbrake.svg)](https://codeclimate.com/github/airbrake/airbrake)
[![Gem Version](https://badge.fury.io/rb/airbrake.svg)](http://badge.fury.io/rb/airbrake)
[![Documentation Status](http://inch-ci.org/github/airbrake/airbrake.svg?branch=master)](http://inch-ci.org/github/airbrake/airbrake)
[![Downloads](https://img.shields.io/gem/dt/airbrake.svg?style=flat)](https://rubygems.org/gems/airbrake)
[![Reviewed by Hound](https://img.shields.io/badge/Reviewed_by-Hound-8E64B0.svg)](https://houndci.com)

<p align="center">
  <img src="https://airbrake-github-assets.s3.amazonaws.com/brand/airbrake-full-logo.png" width="200">
</p>

* [Airbrake README](https://github.com/airbrake/airbrake)
* [Airbrake Ruby README](https://github.com/airbrake/airbrake-ruby)
* [YARD API documentation](http://www.rubydoc.info/gems/airbrake-ruby)

Introduction
------------

[Airbrake][airbrake.io] is an online tool that provides robust exception tracking in any of your Ruby applications. In doing so, it allows you to easily review errors, tie an error to an individual piece of code, and trace the cause back to recent changes. The Airbrake dashboard provides easy categorization, searching, and prioritization of exceptions so that when errors occur, your team can quickly determine the root cause.

Key features
------------

![The Airbrake Dashboard][dashboard]

This library is built on top of [Airbrake Ruby][airbrake-ruby]. The difference between _Airbrake_ and _Airbrake Ruby_ is that the `airbrake` gem is just a collection of integrations with frameworks or other libraries. The `airbrake-ruby` gem is the core library that performs exception sending and other heavy lifting.

Normally, you just need to depend on this gem, select the integration you are interested in and follow the instructions for it. If you develop a pure frameworkless Ruby application or embed Ruby and don't need any of the listed integrations, you can depend on the `airbrake-ruby` gem and ignore this gem entirely.

The list of integrations that are available in this gem includes:

* [Heroku support][heroku-docs] (as an [add-on][heroku-addon])
* Web frameworks
  * Rails<sup>[[link](#rails)]</sup>
  * Sinatra<sup>[[link](#sinatra)]</sup>
  * Rack applications<sup>[[link](#rack)]</sup>
* Job processing libraries
  * ActiveJob<sup>[[link](#activejob)]</sup>
  * Resque<sup>[[link](#resque)]</sup>
  * Sidekiq<sup>[[link](#sidekiq)]</sup>
  * DelayedJob<sup>[[link](#delayedjob)]</sup>
  * Shoryuken<sup>[[link](#shoryuken)]</sup>
  * Sneakers<sup>[[link](#sneakers)]</sup>
* Other libraries
  * ActionCable<sup>[[link](#actioncable)]</sup>
  * Rake<sup>[[link](#rake)]</sup>
  * Logger<sup>[[link](#logger)]</sup>
  * Plain Ruby scripts<sup>[[link](#plain-ruby-scripts)]</sup>

Deployment tracking:

* Using Capistrano<sup>[[link](#capistrano)]</sup>
* Using the Rake task<sup>[[link](#rake-task)]</sup>

Installation
------------

### Bundler

Add the Airbrake gem to your Gemfile:

```ruby
gem 'airbrake'
```

### Manual

Invoke the following command from your terminal:

```bash
gem install airbrake
```

Configuration
-------------

### Rails

#### Integration

To integrate Airbrake with your Rails application, you need to know your [project id and project key][project-idkey].

Set `AIRBRAKE_PROJECT_ID` & `AIRBRAKE_PROJECT_KEY` environment variables with your project's values and generate the Airbrake config:

```bash
export AIRBRAKE_PROJECT_ID=<PROJECT ID>
export AIRBRAKE_PROJECT_KEY=<PROJECT KEY>
rails g airbrake
```

[Heroku add-on][heroku-addon] users can omit specifying the key and the id. Heroku add-on's environment variables will be used ([Heroku add-on docs][heroku-docs]):

```bash
rails g airbrake
```

This command will generate the Airbrake configuration file under `config/initializers/airbrake.rb`. Make sure that this file is checked into your version control system.

This is enough to start Airbraking. In order to configure the library according to your needs, open up the file and edit it. [The full list of supported configuration options][config] is available online.

To test the integration, invoke a special Rake task that we provide:

```ruby
rake airbrake:test
```

In case of success, a test exception should appear in your dashboard.

#### The notify_airbrake controller helpers

The Airbrake gem defines two helper methods available inside Rails controllers: `#notify_airbrake` and `#notify_airbrake_sync`. If you want to notify Airbrake from your controllers manually, it's usually a good idea to prefer them over [`Airbrake.notify`][airbrake-notify], because they automatically add information from the Rack environment to notices.

`#notify_airbrake` is asynchronous, while `#notify_airbrake_sync` is synchronous (waits for responses from the server and returns them). The list of accepted arguments is identical to `Airbrake.notify`.

#### Additional features: user reporting, sophisticated API

The library sends all uncaught exceptions automatically, attaching the maximum possible amount of information that can help you to debug errors. The Airbrake gem is capable of reporting information about the currently logged in user (id, email, username, etc.), if you use an authentication library such as Devise.

The library also provides a special API for manual error reporting. [The description of the API][airbrake-api] is available online.

#### Automatic integration with Rake tasks and Rails runner

Additionally, the Rails integration offers automatic exception reporting in any Rake tasks<sup>[[link](#rake)]</sup> and [Rails runner][rails-runner].

#### Integration with filter_parameters

If you want to reuse `Rails.application.config.filter_parameters` in Airbrake you can configure your notifier the following way:

```rb
# config/initializers/airbrake.rb
Airbrake.configure do |c|
  c.blocklist_keys = Rails.application.config.filter_parameters
end
```

There are a few important details:

1. You must load `filter_parameter_logging.rb` before the Airbrake config
2. If you use Lambdas to configure `filter_parameters`, you need to convert them to Procs. Otherwise you will get `ArgumentError`
3. If you use Procs to configure `filter_parameters`, the procs must return an Array of keys compatible with the Airbrake allowlist/blocklist option (String, Symbol, Regexp)

Consult the [example application](https://github.com/kyrylo/airbrake-ruby-issue108), which was created to show how to configure `filter_parameters`.

##### filter_parameters dot notation warning

The dot notation introduced in [rails/pull/13897][rails-13897] for `filter_parameters` (e.g. a key like `credit_card.code`) is unsupported for performance reasons. Instead, simply specify the `code` key. If you have a strong opinion on this, leave a comment in the [dedicated issue][rails-sub-keys].

##### Logging

In new Rails apps, by default, all the Airbrake logs are written into `log/airbrake.log`. In older versions we used to write to wherever `Rails.logger` writes. If you wish to upgrade your app to the new behaviour, please configure your logger the following way:

```ruby
c.logger = Airbrake::Rails.logger
```

### Sinatra

To use Airbrake with Sinatra, simply `require` the gem, [configure][config] it and `use` our Rack middleware.

```ruby
# myapp.rb
require 'sinatra/base'
require 'airbrake'

Airbrake.configure do |c|
  c.project_id = 113743
  c.project_key = 'fd04e13d806a90f96614ad8e529b2822'

  # Display debug output.
  c.logger.level = Logger::DEBUG
end

class MyApp < Sinatra::Base
  use Airbrake::Rack::Middleware

  get('/') { 1/0 }
end

run MyApp.run!
```

To run the app, add a file called `config.ru` to the same directory and invoke `rackup` from your console.

```ruby
# config.ru
require_relative 'myapp'
```

That's all! Now you can send a test request to `localhost:9292` and check your project's dashboard for a new error.

```bash
curl localhost:9292
```

### Rack

To send exceptions to Airbrake from any Rack application, simply `use` our Rack middleware, and [configure][config] the notifier.

```ruby
require 'airbrake'
require 'airbrake/rack'

Airbrake.configure do |c|
  c.project_id = 113743
  c.project_key = 'fd04e13d806a90f96614ad8e529b2822'
end

use Airbrake::Rack::Middleware
```

**Note:** be aware that by default the library doesn't filter any parameters, including user passwords. To filter out passwords [add a filter](https://github.com/airbrake/airbrake-ruby#airbrakeadd_filter).

#### Appending information from Rack requests

If you want to append additional information from web requests (such as HTTP headers), define a special filter such as:

```ruby
Airbrake.add_filter do |notice|
  next unless (request = notice.stash[:rack_request])

  notice[:params][:remoteIp] = request.env['REMOTE_IP']
end
```

The `notice` object carries a real `Rack::Request` object in its [stash](https://github.com/airbrake/airbrake-ruby#noticestash--noticestash). Rack requests will always be accessible through the `:rack_request` stash key.

#### Optional Rack request filters

The library comes with optional predefined builders listed below.

##### RequestBodyFilter

`RequestBodyFilter` appends Rack request body to the notice. It accepts a `length` argument, which tells the filter how many bytes to read from the body. By default, up to 4096 bytes is read:

```ruby
Airbrake.add_filter(Airbrake::Rack::RequestBodyFilter.new)
```

You can redefine how many bytes to read by passing an Integer argument to the filter. For example, read up to 512 bytes:

```ruby
Airbrake.add_filter(Airbrake::Rack::RequestBodyFilter.new(512))
```

#### Sending custom route breakdown performance

##### Arbitrary code performance instrumentation

For every route in your app Airbrake collects performance breakdown statistics. If you need to monitor a specific operation, you can capture your own breakdown:

```ruby
def index
  Airbrake::Rack.capture_timing('operation name') do
    call_operation(...)
  end
  call_other_operation
end
```

That will benchmark `call_operation` and send performance information to Airbrake, to the corresponding route (under the 'operation name' label).

##### Method performance instrumentation

Alternatively, you can measure performance of a specific method:

```ruby
class UsersController
  extend Airbrake::Rack::Instrumentable

  def index
    call_operation(...)
  end
  airbrake_capture_timing :index
end
```

Similarly to the previous example, performance information of the `index` method will be sent to Airbrake.

### Sidekiq

We support Sidekiq v2+. The configuration steps are identical. Simply `require` our integration and you're done:

```ruby
require 'airbrake/sidekiq'
```

If you required Sidekiq before Airbrake, then you don't even have to `require` anything manually and it should just work out of the box.

#### Airbrake::Sidekiq::RetryableJobsFilter

By default, Airbrake notifies of all errors, including reoccurring errors during a retry attempt. To filter out these errors and only get notified when Sidekiq has exhausted its retries you can add the `RetryableJobsFilter`:

```ruby
Airbrake.add_filter(Airbrake::Sidekiq::RetryableJobsFilter.new)
```

The filter accepts an optional `max_retries` parameter. When set, it configures the amount of allowed job retries that won't trigger an Airbrake notification. Normally, this parameter is configured by the job itself but this setting takes the highest precedence and forces the value upon all jobs, so be careful when you use it. By default, it's not set.

```ruby
Airbrake.add_filter(
  Airbrake::Sidekiq::RetryableJobsFilter.new(max_retries: 10)
)
```

### ActiveJob

No additional configuration is needed. Simply ensure that you have configured your Airbrake notifier with your queue adapter.

### Resque

Simply `require` the Resque integration:

```ruby
require 'airbrake/resque'
```

#### Integrating with Rails applications

If you're working with Resque in the context of a Rails application, create a new initializer in `config/initializers/resque.rb` with the following content:

```ruby
# config/initializers/resque.rb
require 'airbrake/resque'

Resque::Failure.backend = Resque::Failure::Airbrake
```

Now you're all set.

#### General integration

Any Ruby app using Resque can be integrated with Airbrake. If you can require the Airbrake gem *after* Resque, then there's no need to require `airbrake/resque` anymore:

```ruby
require 'resque'
require 'airbrake'

Resque::Failure.backend = Resque::Failure::Airbrake
```

If you're unsure, just configure it similarly to the Rails approach. If you use multiple backends, then continue reading the needed configuration steps in [the Resque wiki][resque-wiki] (it's fairly straightforward).

### DelayedJob

Simply `require` our integration and you're done:

```ruby
require 'airbrake/delayed_job'
```

If you required DelayedJob before Airbrake, then you don't even have to `require` anything manually and it should just work out of the box.

### Shoryuken

Simply `require` our integration and you're done:

```ruby
require 'airbrake/shoryuken'
```

If you required Shoryuken before Airbrake, then you don't even have to `require` anything manually and it should just work out of the box.

### Sneakers

Simply `require` our integration and you're done:

```ruby
require 'airbrake/sneakers'
```

If you required Sneakers before Airbrake, then you don't even have to `require` anything manually and it should just work out of the box.

### ActionCable

The ActionCable integration sends errors occurring in ActionCable actions and subscribed/unsubscribed events. If you use Rails with ActionCable, there's nothing to do, it's already loaded. If you use ActionCable outside Rails, simply require it:

```ruby
require 'airbrake/rails/action_cable'
```

### Rake

Airbrake offers Rake tasks integration, which is used by our Rails integration<sup>[[link](#rails)]</sup>. To integrate Airbrake in any project, just `require` the gem in your `Rakefile`, if it hasn't been required, and [configure][config] the notifier.

```ruby
# Rakefile
require 'airbrake'

Airbrake.configure do |c|
  c.project_id = 113743
  c.project_key = 'fd04e13d806a90f96614ad8e529b2822'
end

task :foo do
  1/0
end
```

### Logger

If you want to convert your log messages to Airbrake errors, you can use our integration with Ruby's `Logger` class from stdlib. All you need to do is to wrap your logger in Airbrake's decorator class:

```ruby
require 'airbrake/logger'

# Create a normal logger
logger = Logger.new($stdout)

# Wrap it
logger = Airbrake::AirbrakeLogger.new(logger)
```

Now you can use the `logger` object exactly the way you normally would. For example, calling `fatal` on it will both log your message and send it to the Airbrake dashboard:

```
logger.fatal('oops')
```

The Logger class will attempt to utilize the default Airbrake notifier to deliver messages. It's possible to redefine it via `#airbrake_notifier`:

```ruby
# Assign your own notifier.
logger.airbrake_notifier = Airbrake::NoticeNotifier.new
```

#### Airbrake severity level

In order to reduce the noise from the Logger integration it's possible to configure the Airbrake severity level. For example, if you want to send only fatal messages from Logger, then configure it as follows:

```ruby
# Send only fatal messages to Airbrake, ignore anything below this level.
logger.airbrake_level = Logger::FATAL
```

By default, `airbrake_level` is set to `Logger::WARN`, which means it sends warnings, errors and fatal error messages to Airbrake.

#### Configuring Airbrake logger integration with a Rails application

In order to configure a production logger with Airbrake integration, simply overwrite `Rails.logger` with a wrapped logger in an `after_initialize` callback:

```ruby
# config/environments/production.rb
config.after_initialize do
  # Standard logger with Airbrake integration:
  # https://github.com/airbrake/airbrake#logger
  Rails.logger = Airbrake::AirbrakeLogger.new(Rails.logger)
end
```

#### Configuring Rails APM SQL query stats when using Rails engines

By default, the library collects Rails SQL performance stats. For standard Rails apps no extra configuration is needed. However if your app uses [Rails engines](https://guides.rubyonrails.org/engines.html), you need to take an additional step to make sure that the file and line information is present for queries being executed in the engine code.

Specifically, you need to make sure that your [`Rails.backtrace_cleaner`](https://api.rubyonrails.org/classes/ActiveSupport/BacktraceCleaner.html) has a silencer that doesn't silence engine code (will be silenced by default).

For example, if your engine is called `blorgh` and its main directory is in the root of your project, you need to extend the default silencer provided with Rails and add the path to your engine:

```rb
# config/initializers/backtrace_silencers.rb

# Delete default silencer(s).
Rails.backtrace_cleaner.remove_silencers!

# Define custom silencer, which adds support for the "blorgh" engine
Rails.backtrace_cleaner.add_silencer do |line|
  app_dirs_pattern = %r{\A/?(app|config|lib|test|blorgh|\(\w*\))}
  !app_dirs_pattern.match?(line)
end
```

### Plain Ruby scripts

Airbrake supports _any_ type of Ruby applications including plain Ruby scripts. If you want to integrate your script with Airbrake, you don't have to use this gem. The [Airbrake Ruby][airbrake-ruby] gem provides all the needed tooling.

Deploy tracking
---------------

By notifying Airbrake of your application deployments, all errors are resolved when a deploy occurs, so that you'll be notified again about any errors that reoccur after a deployment. Additionally, it's possible to review the errors in Airbrake that occurred before and after a deploy.

There are several ways to integrate deployment tracking with your application, that are described below.

### Capistrano

The library supports Capistrano v2 and Capistrano v3. In order to configure deploy tracking with Capistrano simply `require` our integration from your Capfile:

```ruby
# Capfile
require 'airbrake/capistrano'
```

If you use Capistrano 3, define the `after :finished` hook, which executes the deploy notification task (Capistrano 2 doesn't require this step).

```ruby
# config/deploy.rb
namespace :deploy do
  after :finished, 'airbrake:deploy'
end
```

If you version your application, you can set the `:app_version` variable in `config/deploy.rb`, so that information will be attached to your deploy.

```ruby
# config/deploy.rb
set :app_version, '1.2.3'
```

### Rake task

A Rake task can accept several arguments shown in the table below:

| Key         | Required | Default   | Example                                  |
|-------------|----------|-----------|------------------------------------------|
| ENVIRONMENT | No       | Rails.env | production                               |
| USERNAME    | No       | nil       | john                                     |
| REPOSITORY  | No       | nil       | https://github.com/airbrake/airbrake     |
| REVISION    | No       | nil       | 38748467ea579e7ae64f7815452307c9d05e05c5 |
| VERSION     | No       | nil       | v2.0                                     |

#### In Rails

Simply invoke `rake airbrake:deploy` and pass needed arguments:

```bash
rake airbrake:deploy USERNAME=john ENVIRONMENT=production REVISION=38748467 REPOSITORY=https://github.com/airbrake/airbrake
```

#### Anywhere

Make sure to `require` the library Rake integration in your Rakefile.

```ruby
# Rakefile
require 'airbrake/rake/tasks'
```

Then, invoke it as shown in the example for Rails.

Supported Rubies
----------------

* CRuby >= 2.5.0
* JRuby >= 9k

Contact
-------

In case you have a problem, question or a bug report, feel free to:

* [file an issue][issues]
* [send us an email](mailto:[email protected])
* [tweet at us][twitter]
* chat with us (visit [airbrake.io][airbrake.io] and click on the round orange button in the bottom right corner)

License
-------

The project uses the MIT License. See LICENSE.md for details.

Development & testing
---------------------

In order to run the test suite, first of all, clone the repo, and install dependencies with Bundler.

```bash
git clone https://github.com/airbrake/airbrake.git
cd airbrake
bundle
```

Next, run unit tests.

```bash
bundle exec rake
```

In order to test integrations with frameworks and other libraries, install their dependencies with the help of the following command:

```bash
bundle exec appraisal install
```

To run integration tests for a specific framework, use the `appraisal` command.

```bash
bundle exec appraisal rails-4.2 rake spec:integration:rails
bundle exec appraisal sinatra rake spec:integration:sinatra
```

Pro tip: [GitHub Actions config](/.github/workflows/test.yml) has the list of all the integration tests and commands to invoke them.

[airbrake.io]: https://airbrake.io
[airbrake-ruby]: https://github.com/airbrake/airbrake-ruby
[issues]: https://github.com/airbrake/airbrake/issues
[twitter]: https://twitter.com/airbrake
[project-idkey]: https://github.com/airbrake/airbrake-ruby#project_id--project_key
[config]: https://github.com/airbrake/airbrake-ruby#config-options
[airbrake-api]: https://github.com/airbrake/airbrake-ruby#api
[rails-runner]: http://guides.rubyonrails.org/command_line.html#rails-runner
[resque-wiki]: https://github.com/resque/resque/wiki/Failure-Backends#using-multiple-failure-backends-at-once
[heroku-addon]: https://elements.heroku.com/addons/airbrake
[heroku-docs]: https://devcenter.heroku.com/articles/airbrake
[dashboard]: https://s3.amazonaws.com/airbrake-github-assets/airbrake/airbrake-dashboard.png
[rails-13897]: https://github.com/rails/rails/pull/13897
[rails-sub-keys]: https://github.com/airbrake/airbrake-ruby/issues/137
[airbrake-notify]: https://github.com/airbrake/airbrake-ruby#airbrakenotify
30.224066
133
0.757551
eng_Latn
0.939179
2142849d0447d728bc54ba88a536a63c876a59e7
4,784
md
Markdown
0019-third-party-manufacturers/midas.md
MidasTechnologies/HIP
3768d9bbdaf9cd0e9377ce61ac86424393b354ff
[ "Apache-2.0" ]
null
null
null
0019-third-party-manufacturers/midas.md
MidasTechnologies/HIP
3768d9bbdaf9cd0e9377ce61ac86424393b354ff
[ "Apache-2.0" ]
null
null
null
0019-third-party-manufacturers/midas.md
MidasTechnologies/HIP
3768d9bbdaf9cd0e9377ce61ac86424393b354ff
[ "Apache-2.0" ]
null
null
null
# Midas Technologies Co., Ltd

### Application to become an approved third party manufacturer as per [HIP19](https://github.com/helium/HIP/blob/master/0019-third-party-manufacturers.md)

## Summary (required)

Midas Technologies is an IoT company from China focusing on the convergence of wireless communications and blockchain technologies, and we are committed to providing the most reliable products and the best services. We are launching a Helium hotspot miner, the Midas-926 Gateway, to the market, and we plan to build full IoT applications around it for our customers.

## Company Information (required)

Midas Technologies was initiated by senior investors in the blockchain industry, bringing together experts in wireless product development. The founders have invested in and successfully led many blockchain and digital asset projects in Asia, and strongly support the goals of the Helium project, which is a rising star in the blockchain world with, from our perspective, a very promising future. We hope to become a major Helium hotspot provider and promoter, and will keep investing in and contributing to the Helium network.

The technical team of the company is composed of experts in the fields of wireless communication, the Internet of Things, and blockchain, with on average more than 20 years of successful professional careers and entrepreneurial experience. Tens of millions of users have benefited from what we created and built, such as many popular smartphones, IoT-based cloud applications for smart cities and smart villages, blockchain finance products adopted by many banks in China, and more.

## Product Information (required)

Midas-926:

* Semtech SX1302 based LoRa concentrator supporting CN/US/EU bands
* Quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz
* 2GB LPDDR4-3200 SDRAM
* 32GB Flash
* ATECC608A or RJGT102 security chip for swarm key
* WiFi (2.4GHz/5.0GHz) IEEE 802.11b/g/n/ac
* BLE 5.0
* Status indication LEDs
* Voltage 12V
* Automatic software update

![Midas_透明层_正俯](https://user-images.githubusercontent.com/86901323/125427574-f8ae9457-f252-4cb0-b01d-ae8198b7a0c7.jpg)
![Midas_透明层_侧](https://user-images.githubusercontent.com/86901323/125427636-9126325b-37d8-4a05-9ce2-c2239669e94c.jpg)
![Midas_透明层_侧俯_2](https://user-images.githubusercontent.com/86901323/125427652-28f80920-a58c-4e75-9e49-aa2a52dc703d.jpg)

## Previous shipments (required)

We have experience shipping wireless products worldwide. For the Midas-926, our initial target market is China.

## Customer Support (required)

We plan to set up a dedicated technical support team for our customers and provide 24-hour service, 7 days a week. We can be reached by phone, email, and social media applications such as Twitter and WeChat.

We will offer a one-year warranty for the gateway products, and we will work with the factory to ensure that defective products are repaired within one week.

## Hardware Security Element (required)

Security is guaranteed by the standard Helium hotspot miner design, using the ATECC608A chip.

## Hardware Information (required)

* We are using the ATECC608A and planning to use the RJGT102 as a backup
* We are using the SX1302 in the Midas-926 gateway
* We are sourcing from the two major LoRaWAN module vendors in China
* We have a solid supply chain and can procure more than 10K concentrators per year

## Manufacturing Information (required)

The team is composed of experts with many years of experience working for well-known wireless communication hardware manufacturers such as Huawei, Motorola, and RIM, having delivered tens of millions of mobile phones in the past. So we have what is required to ship a product at large scale.

We have secured a sufficient amount of materials for the one-year forecast. And with good relations with the two major LoRa module vendors in Eastern China, we have secured the supply chain for the long term.

## Proof of Identity

We'll submit the information to DeWi.

## Budget & Capital (required)

We have secured materials for 3K gateways and reserved 5 million RMB in cash to launch the product and run the business. We are confident of reaching positive cash flow within half a year.

The founders of Midas are senior investors in the blockchain and ICT industries and are planning to raise 10 million USD through multiple financing channels within one year.

## Risks & Challenges (required)

The short-term risk is the chip shortage currently happening around the world. We are also working on a new hardware design to avoid relying on a single source for key components.

## Other information (required)

* Twitter profile - https://twitter.com/midaswireless
* Website - www.midaswireless.com
* Contact info - [email protected]
* Payment methods available -
* Regions covered / shipped to - Worldwide
61.333333
513
0.800376
eng_Latn
0.997162
2143e16026d7689606f1a81ebb596c4049692d80
2,284
md
Markdown
README.md
slashpinetech/forestry-dotnet
552a9978bcb83f24e3b1c574fcb4096024db60a8
[ "MIT" ]
null
null
null
README.md
slashpinetech/forestry-dotnet
552a9978bcb83f24e3b1c574fcb4096024db60a8
[ "MIT" ]
null
null
null
README.md
slashpinetech/forestry-dotnet
552a9978bcb83f24e3b1c574fcb4096024db60a8
[ "MIT" ]
null
null
null
[![MIT License](https://img.shields.io/github/license/slashpinetech/forestry-dotnet-versioning?color=1F3B2B&style=flat-square)](https://github.com/slashpinetech/forestry-dotnet-versioning/blob/main/LICENSE)

# Forestry .NET -- Versioning

Forestry .NET is a set of open-source libraries for building modern web applications using ASP.NET Core.

This versioning package adds support for embedding build-time metadata into an assembly for use when the application is running.

## Usage

### Configuring your .csproj

Add the following `<ItemGroup>` to the .csproj for your main assembly.

```xml
<ItemGroup>
  <AssemblyAttribute Include="SlashPineTech.Forestry.Versioning.BuildDateAttribute">
    <_Parameter1>$([System.DateTime]::UtcNow.ToString("o"))</_Parameter1>
  </AssemblyAttribute>
  <AssemblyAttribute Include="SlashPineTech.Forestry.Versioning.BuildNumberAttribute" Condition="$(BuildNumber) != ''">
    <_Parameter1>$(BuildNumber)</_Parameter1>
  </AssemblyAttribute>
  <AssemblyAttribute Include="SlashPineTech.Forestry.Versioning.SourceBranchAttribute" Condition="$(Branch) != ''">
    <_Parameter1>$(Branch)</_Parameter1>
  </AssemblyAttribute>
  <AssemblyAttribute Include="SlashPineTech.Forestry.Versioning.SourceCommitAttribute" Condition="$(Commit) != ''">
    <_Parameter1>$(Commit)</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
```

### Configuring your CI

Next, configure your CI to pass metadata to dotnet build (or package).

Note: All CI platforms expose environment variables containing the metadata you need. The example below is using GitHub's environment variables. Consult the documentation for your CI platform for the specific variables to use.

```
dotnet build -p:BuildNumber=$GITHUB_RUN_NUMBER -p:Branch=$GITHUB_REF -p:Commit=$GITHUB_SHA
```

### Startup

Use the `BuildMetadataProvider` to register a singleton instance of `BuildMetadata` with all of the metadata that was compiled into the assembly in the prior step.

```c#
services.AddSingleton(_ => new BuildMetadataProvider(typeof(Startup).Assembly).Provide());
```

### Wrapping up

Now you can inject `BuildMetadata` anywhere you need to access this information, such as a `/version` endpoint that will return this as JSON or an MVC or Razor Page that will display this to administrators.
38.066667
206
0.773643
eng_Latn
0.769653
21448eaee4271a899fea1c294ad1c1e7f0c146c9
250
md
Markdown
Session 3/Exercise 1/README.md
AZharinova/Evaluating-Complex-Interventions-Action-Learning-Set
da9997e55ddb140aa6c96c425b79c1d9ddd5e655
[ "MIT" ]
null
null
null
Session 3/Exercise 1/README.md
AZharinova/Evaluating-Complex-Interventions-Action-Learning-Set
da9997e55ddb140aa6c96c425b79c1d9ddd5e655
[ "MIT" ]
null
null
null
Session 3/Exercise 1/README.md
AZharinova/Evaluating-Complex-Interventions-Action-Learning-Set
da9997e55ddb140aa6c96c425b79c1d9ddd5e655
[ "MIT" ]
null
null
null
## Exercise 1 - building a dif-in-dif model

### Did the intervention reduce the number of Delayed Transfers of Care (DTOC) days?

Materials include:

1. DTOC dataset
2. Empty Exercise script
3. Exercise script from the classroom work
4. Finished Exercise
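As an illustrative sketch only (the actual exercise scripts live in the repository and may use a different language and schema): a two-group, two-period dif-in-dif estimate is the coefficient on the treatment-by-period interaction in an OLS regression. The file and column names below (`dtoc.csv`, `dtoc_days`, `treated`, `post`) are hypothetical placeholders, not the real dataset schema.

```python
# Hedged sketch of a difference-in-differences regression in Python.
# Assumed columns: dtoc_days (outcome), treated (1 = intervention site),
# post (1 = period after the intervention started).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dtoc.csv")  # hypothetical export of the DTOC dataset

model = smf.ols("dtoc_days ~ treated + post + treated:post", data=df).fit()

# The coefficient on treated:post is the dif-in-dif estimate of the
# intervention's effect on DTOC days.
print(model.summary())
```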
25
80
0.772
eng_Latn
0.996917
214491e6d879818a797b30ae118c6ebb681751db
65
md
Markdown
review.md
Thinkenterprise/websocket-framed-java
deef07af58e32877d511d21cee03f43d49017b13
[ "MIT" ]
null
null
null
review.md
Thinkenterprise/websocket-framed-java
deef07af58e32877d511d21cee03f43d49017b13
[ "MIT" ]
null
null
null
review.md
Thinkenterprise/websocket-framed-java
deef07af58e32877d511d21cee03f43d49017b13
[ "MIT" ]
null
null
null
# Review

Open Issues

1. Delete Autoconfiguration (II) - Edgar
16.25
41
0.723077
kor_Hang
0.539798
2144a1a88ee815be9de5d0c142e909a35d9e7630
199
md
Markdown
_posts/2021-03-08-midterm-pres.md
burnedsap/ms2
4376e95801f1608f1e3a8da2b90de68675845452
[ "MIT" ]
null
null
null
_posts/2021-03-08-midterm-pres.md
burnedsap/ms2
4376e95801f1608f1e3a8da2b90de68675845452
[ "MIT" ]
null
null
null
_posts/2021-03-08-midterm-pres.md
burnedsap/ms2
4376e95801f1608f1e3a8da2b90de68675845452
[ "MIT" ]
null
null
null
---
layout: post
title: Midterm Presentation
---

## Midterm Presentation

View the midterm presentation [here](https://github.com/burnedsap/ms2/raw/main/media/MS2%20Midterm%20Deck%20Salil.pdf).
15.307692
119
0.748744
eng_Latn
0.197967
21450c3ea134392792bd5cd6cabfd8ba620fa9ab
517
md
Markdown
README.md
ebeer/credhub-api-site
fcc3e49b409f4a6e3e9eff3e2cb85d7356caa44c
[ "Apache-2.0" ]
null
null
null
README.md
ebeer/credhub-api-site
fcc3e49b409f4a6e3e9eff3e2cb85d7356caa44c
[ "Apache-2.0" ]
null
null
null
README.md
ebeer/credhub-api-site
fcc3e49b409f4a6e3e9eff3e2cb85d7356caa44c
[ "Apache-2.0" ]
null
null
null
### Editing Content

All user-provided content is in the 'source' folder. The primary page is `source/index.html.md`, which pulls in sections of content from files in the `source/includes/` folder.

### Run locally

```shell
bundle install
bundle exec middleman server
```

Changes to master will automatically be published to https://credhub-api.cfapps.io via CI.

### Manually build site to publish

```shell
bundle exec middleman build --clean
```

### Manually Publish

```shell
cd build/
cf push credhub-api
```
22.478261
175
0.731141
eng_Latn
0.992162
214602f8f92438d999f360819f3cc038f8f900d7
1,366
md
Markdown
AlchemyInsights/rollback-or-reinstall.md
pebaum/OfficeDocs-AlchemyInsights-pr.hr-HR
843ac1d14d8b214d7526fe0bc4b7035c352038f8
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/rollback-or-reinstall.md
pebaum/OfficeDocs-AlchemyInsights-pr.hr-HR
843ac1d14d8b214d7526fe0bc4b7035c352038f8
[ "CC-BY-4.0", "MIT" ]
null
null
null
AlchemyInsights/rollback-or-reinstall.md
pebaum/OfficeDocs-AlchemyInsights-pr.hr-HR
843ac1d14d8b214d7526fe0bc4b7035c352038f8
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Roll back or reinstall
ms.author: pebaum
author: pebaum
manager: mnirkhe
ms.audience: Admin
ms.topic: article
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom:
- "2584"
- "9000691"
ms.openlocfilehash: a8b30eeb61b20283efbcc5968dbf36aef45f36e8
ms.sourcegitcommit: 2572c4e5a981d5f3f556835061c568cfd08b78da
ms.translationtype: MT
ms.contentlocale: hr-HR
ms.lasthandoff: 12/27/2019
ms.locfileid: "41969298"
---
# <a name="reinstall-or-roll-back-office"></a>Reinstall or roll back Office

If you are having general problems with Excel, or a specific problem with Excel after a recent Office update, you may be able to fix the problem by reinstalling Office or rolling back to a previous version of Office.

To **reinstall** Office, see [Download and install or reinstall Office 365 or Office 2019 on a PC or Mac](https://support.office.com/article/download-and-install-or-reinstall-office-365-or-office-2019-on-a-pc-or-mac-4414eaaf-0478-48be-9c42-23adc4716658).

To **roll back** Office, see [How to revert to an earlier version of Office](https://support.microsoft.com/help/2770432/how-to-revert-to-an-earlier-version-of-office-2013-or-office-2016-clic).
50.592593
319
0.814788
hrv_Latn
0.795511
214603fac26a020bdb0c6eed73a25ab28818f4b3
8,019
md
Markdown
CONTRIBUTING.md
RobertDeRose/Mojolicious-Plugin-AutoReload
a0c392026d4e3bcbc58d5b49667a0bdb7c06ffe6
[ "Artistic-1.0" ]
3
2021-05-20T22:56:20.000Z
2021-07-29T16:56:58.000Z
CONTRIBUTING.md
RobertDeRose/Mojolicious-Plugin-AutoReload
a0c392026d4e3bcbc58d5b49667a0bdb7c06ffe6
[ "Artistic-1.0" ]
12
2019-04-20T18:12:57.000Z
2021-05-09T02:13:04.000Z
CONTRIBUTING.md
preaction/Mojolicious-Plugin-Export-Git
62ca89922c467db7f16e51a5d79531301b77eb04
[ "Artistic-1.0" ]
5
2019-01-19T09:05:40.000Z
2022-02-12T02:54:34.000Z
# CONTRIBUTING This project is free software for the express purpose of collaboration. We welcome all input, bug reports, feature requests, general comments, and patches. ## Communication XXX Add communication forums If you're not sure about anything, please open an issue and ask, or e-mail the project founder <[email protected]> or [talk to us on IRC on irc.perl.org channel #cpantesters-discuss](https://chat.mibbit.com/?channel=%23cpantesters-discuss&server=irc.perl.org)! ## Standard of Conduct To ensure a welcoming, safe, collaborative environment, this project will enforce a standard of conduct: * The topic of this project is the project itself. Please stay on-topic. * Stick to the facts * Avoid demeaning remarks and sarcasm Unacceptable behavior will receive a single, public warning. Repeated unacceptable behavior will result in removal from the project. Remember, all the people who contribute to this project are volunteers. ## About this Project ### Project Goals XXX Add project goals ### Repository Layout This project follows CPAN conventions with some additions, explained below. #### `lib/` Modules are located in the `lib/` directory. Most of the functionality of the project should be in a module. If the functionality should be available to users from a script, the script should call the module. #### `bin/` Command-line scripts go in the `bin/` directory. Most of the real functionality of these should be in a library, but these scripts must call the library function and document the command-line interface. #### `t/` All the tests are located in the `t/` directory. See "Getting Started" below for how to build the project and run its tests. #### `xt/` Any extra tests that are not to be bundled with the CPAN module and run by consumers is located here. These tests are run at release time and may test things that are expensive or esoteric. #### `share/` Any files that are not runnable code but must still be available to the code are stored in `share/`. This includes default config files, default content, informational files, read-only databases, and other such. This project uses [File::Share](http://metacpan.com/pod/File::Share) to locate these files at run-time. ## What to Contribute ### Comments The issue tracker is used for both bug reports and to-do list. Anything on the issue tracker, open or closed, is available for discussion. ### Fixes For fixes, simply fork and send a pull request. Fixes to anything, documentation, code, tests, are equally welcome, appreciated, and addressed! If you are fixing a bug in the code, please add a regression test to ensure it stays fixed in the future. ### Features All contributions are welcome if they fit the scope of this project. If you're not sure if your feature fits, open an issue and ask. If it doesn't fit, we will try to find a way to enable you to add your feature in a related project (if it means changes in this project). When contributing a feature, please add some basic functionality tests to ensure the feature is working properly. These tests do not need to be comprehensive or paranoid, but must at least demonstrate that the feature is working as documented. ## Getting Started Building and Running Tests This project uses Dist::Zilla for its releases, but you aren't required to use it for contributing. These instructions do require you have [App::cpanminus (cpanm)](https://metacpan.org/pod/App::cpanminus) installed. `cpanm` is a CPAN client to install Perl modules and programs. 
You can install `cpanm` by doing:

```
curl -L https://cpanmin.us | perl - App::cpanminus
```

Or, if you (not unreasonably) do not trust that, by using the existing `cpan` client that comes with Perl:

```
cpan App::cpanminus
```

You may need to be root or Administrator to install cpanminus.

XXX Add this for Perl version requirements

This project also requires Perl version 5.24. If your Perl is not recent enough, you can install a new version of Perl in a local directory by using [perlbrew](http://perlbrew.pl) (the easiest option) or [plenv](https://github.com/tokuhirom/plenv).

### Using `cpanm` to install prereqs

The [`cpanm`](https://metacpan.org/pod/App::cpanminus) command is the easiest way to install this project's dependencies. In the root of the project, just run `cpanm --installdeps .` and the dependencies will be installed.

### Using `carton` to install prereqs in an isolated directory

If you wish to isolate the prerequisites of this project so they do not interfere with other projects, you can use the [Carton](http://metacpan.org/pod/Carton) tool. Install Carton normally from CPAN using `cpanm Carton`, then use the `carton` command to install this module's prereqs in the `local/` directory:

```
carton install
```

Once the prereqs are installed, you can use `carton exec prove -lr t` to run all the tests with the right prereqs. Putting `carton exec` in front of the command makes sure Perl uses the right library directories.

### Using `prove` to run tests

Perl comes with a utility called `prove` which runs tests and gives a report on failures. To run the test suite with `prove`, do:

```
prove -lr t
```

This will run all the tests in the `t` directory, recursively, while adding the current `lib/` directory to the library path. You can run individual test files more quickly by passing them as arguments to prove:

```
prove -l t/my-test.t
```

### Using Dist::Zilla to install prereqs and run tests

Once you have installed Dist::Zilla via `cpanm Dist::Zilla`, you can get this distribution's dependencies by doing:

```
dzil listdeps --author --missing | cpanm
```

Once all that is done, testing is as easy as:

```
dzil test
```

## Before you Submit Your Contribution

### Copyright and License

All contributions are copyright their respective owners, so make sure you agree with the project license (found in the LICENSE file) before contributing.

The list of Contributors is calculated automatically from the Git commit log. If you do not wish to be listed as a contributor, or if you wish to be listed as a contributor with a different e-mail address, tell me so in the ticket or e-mail me at [email protected].

### Code Formatting and Style

Please try to maintain the existing code formatting and style.

* 4-space indents
* Opening brace on the same line as the opening keyword
* Exceptions made for lengthy conditionals
* Closing brace on the same column as the opening keyword

### Documentation

Documentation is incredibly important, and contributions will not be accepted until documented.
* Methods must be documented inline, above the code of the method
* Method documentation must include name, sample usage, and description of inputs and outputs
* Attributes must be documented inline, above the attribute declaration
* Attribute documentation must include name, sample value, and description
* User-executable scripts must be documented with a short synopsis, a longer description, and all the arguments and options explained
* Tests must be documented with the purpose of the test and any useful information for understanding the test

### New Prerequisites

Though this project has a `cpanfile`, a `Makefile.PL`, and maybe even a `Build.PL`, these files are auto-generated and should not be edited. To add new prereqs, you must add them to the `dist.ini` file in the following sections:

* `[Prereqs]` - Runtime requirements
* `[Prereqs / TestRequires]` - Test-only requirements
* `[Prereqs / Recommends]` - Runtime recommendations, for optional modules
* `[Prereqs / TestRecommends]` - Test-only recommendations, for optional modules

If the section doesn't already exist, you can add it to the bottom of the `dist.ini` file. The `Recommends` and `TestRecommends` prereqs will be automatically installed by Travis CI to test those parts of the code.

OS-specific prerequisites can be added using the [Dist::Zilla::Plugin::OSPrereqs](http://metacpan.org/pod/Dist::Zilla::Plugin::OSPrereqs) module.
32.076
120
0.765307
eng_Latn
0.998637
21460d3a5106df408dd69e10e773edf50c3f3f65
1,113
md
Markdown
README.md
knabben/aws-tools
7d32d0c33539cd61fb62e1a84fd2c159cf260613
[ "Apache-2.0" ]
null
null
null
README.md
knabben/aws-tools
7d32d0c33539cd61fb62e1a84fd2c159cf260613
[ "Apache-2.0" ]
null
null
null
README.md
knabben/aws-tools
7d32d0c33539cd61fb62e1a84fd2c159cf260613
[ "Apache-2.0" ]
null
null
null
Fire starter
===

This project makes it easier to ADD or REMOVE access for specific IPs to specific machines on AWS, via a REST API.

Push to Dockerhub
---

```
make build tag=1
make push tag=1

# Kubernetes service and deployments (changes are needed here)
# TODO - Helm Charts
kubectl create -f kube/*
```

Run
---

You need Redis with a password enabled and an IAM key with EC2 permissions:

    docker run -p 8085:8085 awstools:1 serve -r localhost:6379 -p redis

You can access the REST API as follows:

REST API
---

All machines are found by tag name, and by default port 22 is enabled.

To add an IP for the machine called api-1:

```
$ curl -v -XADD localhost:8085/api/v1/add -H "Content-Type: application/json" -d'{"ip": "189.90.98.9", "machine": "api-1"}'
```

To remove the IP address ACL from the machine api-1:

```
$ curl -v -XDELETE localhost:8085/api/v1/ -H "Content-Type: application/json" -d'{"ip": "189.90.98.9", "machine": "api-1"}'
```

Redis Cache
---

After adding a new IP, it has a TTL before it is removed from the list. This is achieved with the Redis Pub/Sub system on event channels.
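The TTL-driven removal described above can be sketched end to end. The following is a minimal Python sketch, not code from this repository (which implements the behavior in its own service): the `acl:<machine>:<ip>` key layout, Redis database 0, and the Name-tag lookup are illustrative assumptions.

```python
"""Minimal sketch: revoke an SSH ACL when its Redis key expires.

Assumed (not from the repo): keys look like "acl:<machine>:<ip>",
Redis DB 0 is used, and machines are EC2 instances tagged Name=<machine>.
"""
import boto3
import redis

r = redis.Redis(host="localhost", port=6379, password="redis")
r.config_set("notify-keyspace-events", "Ex")  # publish key-expiry events

ec2 = boto3.client("ec2")


def lookup_security_group(machine: str) -> str:
    """Resolve the first security group of the instance tagged Name=<machine>."""
    res = ec2.describe_instances(Filters=[{"Name": "tag:Name", "Values": [machine]}])
    instance = res["Reservations"][0]["Instances"][0]
    return instance["SecurityGroups"][0]["GroupId"]


def revoke(ip: str, group_id: str) -> None:
    """Remove the port-22 ingress rule for a single IP from a security group."""
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": f"{ip}/32"}],
        }],
    )


pubsub = r.pubsub()
pubsub.psubscribe("__keyevent@0__:expired")  # fires once per expired key

for message in pubsub.listen():
    if message["type"] != "pmessage":
        continue  # skip subscription confirmations
    key = message["data"].decode()  # e.g. "acl:api-1:189.90.98.9"
    _, machine, ip = key.split(":", 2)
    revoke(ip, lookup_security_group(machine))
```

A subscriber like this can run as a sidecar next to the API service, so granting and revoking stay decoupled.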
22.26
135
0.706199
eng_Latn
0.937474
2147b383eb025c5e73214ef19f347881f0c87e92
203
md
Markdown
CHANGELOG.md
GeeTeam/gt_onelogin_flutter_plugin
881e14079ad736e0123b0c0c640e9196f42ec24f
[ "MIT" ]
null
null
null
CHANGELOG.md
GeeTeam/gt_onelogin_flutter_plugin
881e14079ad736e0123b0c0c640e9196f42ec24f
[ "MIT" ]
null
null
null
CHANGELOG.md
GeeTeam/gt_onelogin_flutter_plugin
881e14079ad736e0123b0c0c640e9196f42ec24f
[ "MIT" ]
null
null
null
## 0.0.2

2022-4-27

* Updated the OneLogin Android SDK to v2.7.5
* Updated the OneLogin iOS SDK to v2.7.4
* Added settings for authorization-page localization (multiple languages), a shake animation for the terms-of-service checkbox, and the title bar's left/right margins from the screen edges
* Added APIs to delete the number pre-fetch cache, re-run the number pre-fetch, and change the checked state of the terms-of-service checkbox

## 0.0.1

2021-12-15

* Initial release
* Supports Flutter 2.0
13.533333
38
0.70936
yue_Hant
0.934652
2148252352c1ae4c9a5f8a5ef365e53b7a278f30
19,341
md
Markdown
OracleUnifiedDirectory/README.md
rmohare/oracle-product-images
a5b0fc823de9258a0d0d63aed5837abb46b17cf3
[ "UPL-1.0" ]
5,519
2015-01-23T15:07:05.000Z
2022-03-31T12:12:19.000Z
OracleUnifiedDirectory/README.md
rmohare/oracle-product-images
a5b0fc823de9258a0d0d63aed5837abb46b17cf3
[ "UPL-1.0" ]
1,492
2015-01-26T05:31:35.000Z
2022-03-31T21:16:34.000Z
OracleUnifiedDirectory/README.md
rmohare/oracle-product-images
a5b0fc823de9258a0d0d63aed5837abb46b17cf3
[ "UPL-1.0" ]
5,850
2015-01-22T01:40:51.000Z
2022-03-31T12:12:19.000Z
Oracle Unified Directory
========================

## Contents

1. [Introduction](#introduction)
1. [Installing the Oracle Unified Directory image](#installing-the-oracle-unified-directory-image)
1. [Running the Oracle Unified Directory image in a container](#running-the-oracle-unified-directory-image-in-a-container)
1. [Oracle Unified Directory Docker container configuration](#oracle-unified-directory-docker-container-configuration)
1. [Oracle Unified Directory Kubernetes Configuration](#oracle-unified-directory-kubernetes-configuration)

## Introduction

Oracle Unified Directory provides a comprehensive directory solution for robust identity management. Oracle Unified Directory is an all-in-one directory solution with storage, proxy, synchronization, and virtualization capabilities. While unifying the approach, it provides all the services required for high-performance enterprise and carrier-grade environments. Oracle Unified Directory ensures scalability to billions of entries, ease of installation, elastic deployments, enterprise manageability, and effective monitoring.

This project offers a Dockerfile and scripts to build and configure an Oracle Unified Directory image based on the 12cPS4 (12.2.1.4.0) release. Use this image to facilitate installation, configuration, and environment setup for DevOps users. This image refers to binaries for Oracle Unified Directory Release 12.2.1.4.0 and it has the capability to create different types of Oracle Unified Directory instances (Directory Service, Proxy, Replication) in containers targeted for development and testing.

***Image***: `oracle/oud:12.2.1.4.0`

## Installing the Oracle Unified Directory image

An Oracle Unified Directory image can be created and/or made available for deployment in the following ways:

1. Build your own Oracle Unified Directory image using the WebLogic Image Tool. Oracle recommends using the WebLogic Image Tool to build your own Oracle Unified Directory 12.2.1.4.0 image along with the latest Bundle Patch and any additional patches that you require. For more information, see [Building an Oracle Unified Directory image with WebLogic Image Tool](imagetool/12.2.1.4.0).

1. Build your own Oracle Unified Directory image using the Dockerfile, scripts, and base image from Oracle Container Registry (OCR). To customize the image for specific use cases, Oracle provides a Dockerfile and build scripts. For more information, see [Building an Oracle Unified Directory Image with Dockerfile, Scripts and Base Image from OCR](dockerfiles/12.2.1.4.0/README-OCR-Base.md).

1. Build your own Oracle Unified Directory image using the Dockerfile and scripts. To customize the image for specific use cases, Oracle provides a Dockerfile and build scripts. For more information, see [Building an Oracle Unified Directory Image with Dockerfiles and Scripts](dockerfiles/12.2.1.4.0/README.md).

## Running the Oracle Unified Directory image in a container

The Oracle Unified Directory image supports running the following services in a container:

* Directory Server/Service [instanceType=Directory]
* Directory Proxy Server/Service [instanceType=Proxy]
* Replication Server/Service [instanceType=Replication]
* Directory Server/Service added to existing Directory or Replication Server/Service [instanceType=AddDS2RS]
* Directory Client to run CLIs like ldapsearch, dsconfig, and dsreplication.

The functionality and features available from the Oracle Unified Directory image depend on the environment variables passed when setting up/starting the container. Configuration of instances with support for Proxy and Replication requires invocation of the `dsconfig` and `dsreplication` commands following the execution of `oud-setup`. The Oracle Unified Directory 12c image is designed to support passing `dsconfig` and `dsreplication` parameters as required, to be used with commands after instance creation using `oud-setup` or `oud-proxy-setup`. This provides flexibility in the types of Oracle Unified Directory service that can be configured to run in a container.

Commands and parameters to create and start a container running an Oracle Unified Directory instance, based on the Oracle Unified Directory image, are shown below. The command to create and start a container is as follows:

```
$ docker run -d -P \
    --network=OUDNet \
    --name=<container name> \
    --volume <Path for the directory on Host which is to be mounted in container for user_projects>:/u01/oracle/user_projects \
    --env OUD_INSTANCE_NAME=<name for the instance> \
    --env instanceType=<Type of OUD instance to create and start> \
    --env hostname=<hostname for the instance in container> \
    --env-file <Path for the file containing environment variables> \
    oracle/oud:12.2.1.4.0
```

The parameters used in the example above are described in the table below:

| **Parameter** | **Description** | **Default Value** |
| ------ | ------ | ------ |
| --name | Name for the container. When configuring multiple containers, this name is useful for referencing. | ------ |
| --volume | Location of Oracle Unified Directory configuration and data outside the container. The path of a directory on the host which is to be mounted in the container as `/u01/oracle/user_projects`. | ------ |
| --network | Connect a container to a network. This specifies the networking layer to which the container will connect. | ------ |
| --env OUD_INSTANCE_NAME | Name for the Oracle Unified Directory instance. This decides the directory location for the Oracle Unified Directory instance configuration files. If the Oracle Unified Directory instance name is 'myoudasinst_1', the location for the Oracle Unified Directory instance would be `/u01/oracle/user_projects/myoudasinst_1`. When the user_projects directory is shared and outside the container, avoid having the same name for multiple instances. | asinst_1 |
| --env instanceType | Type of Oracle Unified Directory instance to create and start. Takes one of the following values: Directory, Proxy, Replication, AddDS2RS. | Directory |
| --env hostname | Hostname to be used while invoking the `oud-setup`, `oud-proxy-setup`, `dsconfig`, and `dsreplication` commands. | localhost |
| --env-file | Parameter file. This can be used to list and store parameters and pass them to the container command, as an alternative to specifying the parameters on the command line. | ------ |

Additional parameters supported by the Oracle Unified Directory image are listed below. These parameters are all passed to the container command using the --env or --env-file arguments:

| **Environment Variable** (to be passed through --env or --env-file) | **Description** | **Default Value** |
| ------ | ------ | ------ |
| ldapPort | Port on which the Oracle Unified Directory instance in the container should listen for LDAP communication. Use 'disabled' if you do not want to enable it. | 1389 |
| ldapsPort | Port on which the Oracle Unified Directory instance in the container should listen for LDAPS communication. Use 'disabled' if you do not want to enable it. | 1636 |
| rootUserDN | DN for the Oracle Unified Directory instance root user. | ------ |
| rootUserPassword | Password for the Oracle Unified Directory instance root user. | ------ |
| adminConnectorPort | Port on which the Oracle Unified Directory instance in the container should listen for administration communication over LDAPS. Use 'disabled' if you do not want to enable it. Note that at least one of the LDAP or the HTTP administration ports must be enabled. | 1444 |
| httpAdminConnectorPort | Port on which the Oracle Unified Directory instance in the container should listen for administration communication over HTTPS. Use 'disabled' if you do not want to enable it. Note that at least one of the LDAP or the HTTP administration ports must be enabled. | 1888 |
| httpPort | Port on which the Oracle Unified Directory instance in the container should listen for HTTP communication. Use 'disabled' if you do not want to enable it. | 1080 |
| httpsPort | Port on which the Oracle Unified Directory instance in the container should listen for HTTPS communication. Use 'disabled' if you do not want to enable it. | 1081 |
| sampleData | Specifies the number of sample entries to populate the Oracle Unified Directory instance with on creation. If this parameter has a non-numeric value, the parameter addBaseEntry is added to the command instead of sampleData. Similarly, when the ldifFile_n parameter is specified, sampleData is not considered and the ldifFile entries are populated. | 0 |
| adminUID | User ID of the Global Administrator to use to bind to the server. This parameter is primarily used with the `dsreplication` command. | ------ |
| adminPassword | Password for adminUID. | ------ |
| bindDN1 | BindDN to be used while setting up replication using `dsreplication` to connect to the first Directory/Replication instance. | ------ |
| bindPassword1 | Password for bindDN1. | ------ |
| bindDN2 | BindDN to be used while setting up replication using `dsreplication` to connect to the second Directory/Replication instance. | ------ |
| bindPassword2 | Password for bindDN2. | ------ |
| replicationPort | Port value to be used while setting up a replication server. This variable is used to substitute values in `dsreplication` parameters. | 1898 |
| sourceHost | Value for the hostname to be used while setting up a replication server. This variable is used to substitute values in `dsreplication` parameters. | ------ |
| initializeFromHost | Value for the hostname to be used while initializing data on a new Oracle Unified Directory instance replicated from an existing instance. This variable is used to substitute values in `dsreplication` parameters. It is possible to have different values for sourceHost and initializeFromHost while setting up replication with a Replication Server: sourceHost can be used for the Replication Server and initializeFromHost can be used for an existing Directory instance from which data will be initialized. | $sourceHost |
| serverTuning | Values to be used to tune JVM settings. The default value is jvm-default. If specific tuning parameters are required, they can be added using this variable. | jvm-default |
| offlineToolsTuning | Values to be used to specify the tuning for offline tools. If this variable is not specified, jvm-default is used; otherwise, specify the complete set of values and options for the desired tuning. | jvm-default |
| generateSelfSignedCertificate | Set to "true" if the requirement is to generate a self-signed certificate when creating an Oracle Unified Directory instance using `oud-setup`. If no value is provided this value takes the default, "true". If using a certificate generated separately from oud-setup this value should be set to "false". | true |
| usePkcs11Keystore | Use a certificate in a PKCS#11 token that the replication gateway will use as the server certificate when accepting encrypted connections from the Oracle Directory Server Enterprise Edition server. Set to "true" if the requirement is to use the usePkcs11Keystore parameter when creating an Oracle Unified Directory instance using `oud-setup`. By default this parameter is not set. To use this option generateSelfSignedCertificate should be set to "false". | ------ |
| enableStartTLS | Enable StartTLS to allow secure communication with the directory server by using the LDAP port. By default this parameter is not set. To use this option generateSelfSignedCertificate should be set to "false". | ------ |
| useJCEKS | Specifies the path of a JCEKS keystore that contains a certificate that the replication gateway will use as the server certificate when accepting encrypted connections from the Oracle Directory Server Enterprise Edition server. If required this should specify the keyStorePath, for example, `/u01/oracle/config/keystore`. | ------ |
| useJavaKeystore | Specify the path to the Java Keystore (JKS) that contains the server certificate. If required this should specify the path to the JKS, for example, `/u01/oracle/config/keystore`. By default this parameter is not set. To use this option generateSelfSignedCertificate should be set to "false". | ------ |
| usePkcs12keyStore | Specify the path to the PKCS#12 keystore that contains the server certificate. If required this should specify the path, for example, `/u01/oracle/config/keystore.p12`. By default this parameter is not set. | ------ |
| keyStorePasswordFile | Use the password in the specified file to access the certificate keystore. A password is required when you specify an existing certificate (JKS, JCEKS, PKCS#11, or PKCS#12) as a server certificate. If required this should specify the path of the password file, for example, `/u01/oracle/config/keystorepassword.txt`. By default this parameter is not set. | ------ |
| eusPasswordScheme | Set the password storage scheme, if configuring Oracle Unified Directory for Enterprise User Security. Set this to a value of either "sha1" or "sha2". By default this parameter is not set. | ------ |
| jmxPort | Port on which the Directory Server should listen for JMX communication. Use 'disabled' if you do not want to enable it. | disabled |
| javaSecurityFile | Specify the path to the Java security file. If required this should specify the path, for example, `/u01/oracle/config/new_security_file`. By default this parameter is not set. | ------ |
| schemaConfigFile_n | 'n' in the variable name represents a numeric value between 1 and 50. This variable is used to set the full path of LDIF files that need to be passed to the Oracle Unified Directory instance for schema configuration/extension. If required this should specify the path, for example, `schemaConfigFile_1=/u01/oracle/config/00_test.ldif`. | ------ |
| ldifFile_n | 'n' in the variable name represents a numeric value between 1 and 50. This variable is used to set the full path of LDIF files that need to be passed to the Oracle Unified Directory instance for initial data population. If required this should specify the path, for example, `ldifFile_1=/u01/oracle/config/test1.ldif`. | ------ |
| dsconfigBatchFile_n | 'n' in the variable name represents a numeric value between 1 and 50. This variable is used to set the full path of LDIF files that need to be passed to the Oracle Unified Directory instance for batch processing by the `dsconfig` command. If required this should specify the path, for example, `dsconfigBatchFile_1=/u01/oracle/config/dsconfig_1.txt`. When executing the `dsconfig` command the following values are added implicitly to the arguments contained in the batch file: ${hostname}, ${adminConnectorPort}, ${bindDN} and ${bindPasswordFile}. | ------ |
| dstune_n | 'n' in the variable name represents a numeric value between 1 and 50. Allows commands and options to be passed to the `dstune` utility as a full command. | ------ |
| dsconfig_n | 'n' in the variable name represents a numeric value between 1 and 300. Each variable represents a set of execution parameters for the `dsconfig` command. For each `dsconfig` execution, the following variables are added implicitly: ${hostname}, ${adminConnectorPort}, ${bindDN}, ${bindPasswordFile}. | ------ |
| dsreplication_n | 'n' in the variable name represents a numeric value between 1 and 50. Each variable represents a set of execution parameters for the `dsreplication` command. For each `dsreplication` execution, the following variables are added implicitly: ${hostname}, ${ldapPort}, ${ldapsPort}, ${adminConnectorPort}, ${replicationPort}, ${sourceHost}, ${initializeFromHost}, and ${baseDN}. Depending on the dsreplication sub-command, the following variables are added implicitly: ${bindDN1}, ${bindPasswordFile1}, ${bindDN2}, ${bindPasswordFile2}, ${adminUID}, and ${adminPasswordFile}. | ------ |
| post_dsreplication_dsconfig_n | 'n' in the variable name represents a numeric value between 1 and 300. Each variable represents a set of execution parameters for the `dsconfig` command to be run following execution of the `dsreplication` command. For each `dsconfig` execution, the following variables/values are added implicitly: --provider-name "Multimaster Synchronization", ${hostname}, ${adminConnectorPort}, ${bindDN}, ${bindPasswordFile}. | ------ |
| rebuildIndex_n | 'n' in the variable name represents a numeric value between 1 and 50. Each variable represents a set of execution parameters for the `rebuild-index` command. For each `rebuild-index` execution, the following variables are added implicitly: ${hostname}, ${adminConnectorPort}, ${bindDN}, ${bindPasswordFile}, and ${baseDN}. | ------ |
| manageSuffix_n | 'n' in the variable name represents a numeric value between 1 and 50. Each variable represents a set of execution parameters for the `manage-suffix` command. For each `manage-suffix` execution, the following variables are added implicitly: ${hostname}, ${adminConnectorPort}, ${bindDN}, ${bindPasswordFile}. | ------ |
| importLdif_n | 'n' in the variable name represents a numeric value between 1 and 50. Each variable represents a set of execution parameters for the `import-ldif` command. For each `import-ldif` execution, the following variables are added implicitly: ${hostname}, ${adminConnectorPort}, ${bindDN}, ${bindPasswordFile}. | ------ |
| execCmd_n | 'n' in the variable name represents a numeric value between 1 and 300. Each variable represents a command to be executed in the container. For each command execution, the following variables are replaced, if present in the command: ${hostname}, ${ldapPort}, ${ldapsPort}, ${adminConnectorPort}. | ------ |

**Note**: for the following parameters above, the following statement applies:

* dsconfig_n
* dsreplication_n
* post_dsreplication_dsconfig_n
* rebuildIndex_n
* manageSuffix_n
* importLdif_n
* execCmd_n

If values are provided, the following variables will be substituted with their values: ${hostname}, ${ldapPort}, ${ldapsPort}, ${adminConnectorPort}, ${replicationPort}, ${sourceHost}, ${initializeFromHost}, ${sourceAdminConnectorPort}, ${sourceReplicationPort}, ${baseDN}, ${rootUserDN}, ${adminUID}, ${rootPwdFile}, ${bindPasswordFile}, ${adminPwdFile}, ${bindPwdFile1}, ${bindPwdFile2}

## Oracle Unified Directory Docker container configuration

To configure the Oracle Unified Directory containers on Docker only, see the tutorial [Creating Oracle Unified Directory Docker containers](https://docs.oracle.com/en/middleware/idm/unified-directory/12.2.1.4/tutorial-oud-docker/).

## Oracle Unified Directory Kubernetes Configuration

To configure the Oracle Unified Directory containers with Kubernetes, see the [Oracle Unified Directory on Kubernetes](https://oracle.github.io/fmw-kubernetes/oud/) documentation.

# Licensing & Copyright

## License

To download and run Oracle Fusion Middleware products, regardless of whether inside or outside a container, you must download the binaries from the Oracle website and accept the license indicated at that page.

All scripts and files hosted in this project and the GitHub [docker-images/OracleUnifiedDirectory](./) repository required to build the images are, unless otherwise noted, released under the [UPL 1.0](https://oss.oracle.com/licenses/upl/) license.

## Copyright

Copyright (c) 2020 Oracle and/or its affiliates.

Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl
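Because most of the environment-variable table above is consumed through `--env-file`, a small wrapper can make a parameter set reproducible. Below is a minimal Python sketch of generating such a file and starting a Directory instance; it is not part of the Oracle-provided scripts, and the instance name, network, host path, and parameter values are illustrative assumptions drawn from the documented variables.

```python
"""Sketch: build a --env-file from documented OUD parameters and run it."""
import pathlib
import subprocess

# Parameter names come from the tables above; the values are examples only.
params = {
    "OUD_INSTANCE_NAME": "myoudds1",
    "instanceType": "Directory",
    "hostname": "oud-ds-1",
    "ldapPort": "1389",
    "ldapsPort": "1636",
    "rootUserDN": "cn=Directory Manager",
    "rootUserPassword": "change-me",  # example only; use a secret store
    "sampleData": "200",
}

env_file = pathlib.Path("oud-ds-1.env")
env_file.write_text("".join(f"{k}={v}\n" for k, v in params.items()))

# Mirrors the documented `docker run` invocation for a Directory instance.
subprocess.run(
    [
        "docker", "run", "-d", "-P",
        "--network=OUDNet",
        "--name=oud-ds-1",
        "--volume", "/scratch/user_projects:/u01/oracle/user_projects",
        "--env-file", str(env_file),
        "oracle/oud:12.2.1.4.0",
    ],
    check=True,
)
```

Keeping the parameter dictionary in version control (minus the password) makes it easy to stand up identical Directory, Proxy, or Replication instances by changing only `instanceType` and the ports.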
130.682432
672
0.778502
eng_Latn
0.996327
21482aef2e52d4c80ef8348448bcf6d0149760a5
10,519
md
Markdown
_docs/enterprise/codefresh-runner.md
aarosil/docs.codefresh.io
c85bc21622984159962882f37c7d3ddce2be3487
[ "MIT" ]
1
2020-04-08T18:51:37.000Z
2020-04-08T18:51:37.000Z
_docs/enterprise/codefresh-runner.md
aarosil/docs.codefresh.io
c85bc21622984159962882f37c7d3ddce2be3487
[ "MIT" ]
null
null
null
_docs/enterprise/codefresh-runner.md
aarosil/docs.codefresh.io
c85bc21622984159962882f37c7d3ddce2be3487
[ "MIT" ]
null
null
null
--- title: "Codefresh Runner" description: "Run Codefresh pipelines on your private Kubernetes cluster" group: enterprise toc: true --- The Codefresh runner is a helper application that can be installed on your own Kubernetes cluster (behind a company firewall). It can then build Codefresh pipelines, with full access to secure internal services, without actually compromising the requirements of the on-premise installation. See the [Hybrid installation]({{site.baseurl}}/docs/enterprise/installation-security/#hybrid-installation) and [behind-the-firewall]({{site.baseurl}}/docs/enterprise/behind-the-firewall/) pages for more details. ## Codefresh Runner installation The Codefresh runner installer is available at [https://github.com/codefresh-io/venona](https://github.com/codefresh-io/venona). You can use Venona to create, upgrade and remove runner installations on any internal Kubernetes cluster. Notice that a runner installation is needed for each cluster that will _run_ Codefresh pipelines. A runner is **not** needed in clusters that are used for _deployment_. It is possible to deploy applications on different clusters other than the ones the runner is running on. The installation process takes care of all the components of the runner as well as the other resources (config-maps, secrets, volumes) needed by them. ### Prerequisites In order to use the Codefresh runner you need the following: 1. A Kubernetes cluster with outgoing Internet access (preferably with version 1.10). Each node should have 50GB disk size. 1. A [Codefresh account]({{site.baseurl}}/docs/getting-started/create-a-codefresh-account/) with the Hybrid feature enabled. 1. A [Codefresh CLI token]({{site.baseurl}}/docs/integrations/codefresh-api/#authentication-instructions) that will be used to authenticate to your Codefresh account. Installation can happen from any workstation or laptop that has access (i.e. via `kubectl`) to the Kubernetes cluster that will run Codefresh builds. The Codefresh runner will authenticate to your Codefresh account by using the Codefresh CLI token. ### Command line installation First setup [Codefresh CLI access](https://codefresh-io.github.io/cli/getting-started/) first, in the machine where you want to install the runner from. You can see if the CLI works correctly by running any command such as: ``` codefresh get pipelines ``` This should list the pipelines of your Codefresh account. Notice that access to the Codefresh CLI is only needed once during the Runner installation. After that, the Runner with authenticate on it own using the details provided. You do *NOT* need to install the Codefresh CLI on the cluster that is running Codefresh pipelines. Download the Runner installer (called Venona) from the [releases page](https://github.com/codefresh-io/venona/releases) or by using homebrew if you are on a Mac: ``` brew tap codefresh-io/venona brew install venona ``` Create a namespace in your cluster where you want the Codefresh runner to be installed: ``` kubectl create namespace codefresh-runtime ``` And finally run the installer passing as argument the namespace you just created: ``` venona install --kube-namespace codefresh-runtime ``` After a while you should see a message that the installation process has finished with success. You can run `venona --help` to get additional installation options. 
As an example, you can define your own location for kubeconfig and/or CLI config:

```
venona install --kube-namespace my-codefresh-runtime --verbose --kube-config-path c:/users/kostis/.kube/config --cfconfig c:/Users/Kostis/.cfconfig
```

To check the installation result, type `venona status --verbose` and you will get a list of all installations.

### Installing on Kubernetes clusters with version earlier than 1.10

If your Kubernetes cluster is using a version earlier than 1.10, you also need to do the following:

Make sure the `PersistentLocalVolumes` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) is turned on.

The runner will try to load the available APIs using the `/openapi/v2` endpoint. Add this endpoint to the ClusterRole `system:discovery` under `rules[0].nonResourceURLs`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:discovery
rules:
- nonResourceURLs:
  - ...other_resources
  - /openapi
  - /openapi/*
  verbs:
  - get
```

Use `kubectl` or any other management tool to perform this change to the role.

### Installing on Google Kubernetes Engine

If you are installing the Codefresh runner on a Kubernetes cluster on [GKE](https://cloud.google.com/kubernetes-engine/):

* make sure your user has the `Kubernetes Engine Cluster Admin` role in the Google console, and
* bind your user with the `cluster-admin` Kubernetes cluster role.

```
kubectl create clusterrolebinding NAME --clusterrole cluster-admin --user <YOUR_USER>
```

### Security roles

Installation of the Codefresh runner on a Kubernetes cluster also sets up two groups of objects. Each one has its own RBAC needs, and therefore its own expected roles (and cluster roles). You can see the exact resource descriptors [on GitHub](https://github.com/codefresh-io/venona/tree/master/venonactl/pkg/templates/kubernetes).

Here is a list of the resources that are created during a Runner installation:

* Agent (grouped by `/.*.venona.yaml/`)
  * `service-account.venona.yaml` - The service account that the agent's pod will ultimately use.
  * `cluster-role-binding.venona.yaml` - The agent discovers Kubernetes APIs by calling `openapi/v2`; this ClusterRoleBinding binds the ClusterRole bootstrapped by Kubernetes, `system:discovery`, to `service-account.venona.yaml`. This role only has permission to make GET calls to non-resource URLs.
  * `role.venona.yaml` - Allows `GET`, `CREATE` and `DELETE` on pods and persistent volume claims.
  * `role-binding.venona.yaml` - The agent spins up pods and PVCs; this binding binds `role.venona.yaml` to `service-account.venona.yaml`.
* Runtime-environment (grouped by `/.*.re.yaml/`) - A Kubernetes controller that spins up all required resources to provide a good caching experience during pipeline execution.
  * `service-account.dind-volume-provisioner.re.yaml` - The service account that the controller will use.
  * `cluster-role.dind-volume-provisioner.re.yaml` - Defines all the permissions needed for the controller to operate correctly.
  * `cluster-role-binding.dind-volume-provisioner.yaml` - Binds the ClusterRole to `service-account.dind-volume-provisioner.re.yaml`.

## Using the Codefresh Runner

Once installed, the Runner is fully automated. It polls the Codefresh SaaS on its own (by default every ten seconds) and automatically creates all the resources needed for running pipelines.
Once installation is complete, you should see the cluster of the runner as a new [Runtime environment](https://g.codefresh.io/account-admin/account-conf/runtime-environments) in Codefresh in your *Account Settings*, in the respective tab.

{% include image.html
  lightbox="true"
  file="/images/enterprise/runner/runtime-environments.png"
  url="/images/enterprise/runner/runtime-environments.png"
  alt="Available runtime environments"
  caption="Available runtime environments"
  max-width="60%"
%}

If you have multiple environments available, you can change the default one (shown with a thin blue border) by clicking on the 3-dot menu on the right of each environment. The Codefresh runner installer comes with an option `set-default` that will automatically set the new runtime environment as the default one.

You can even override the runtime environment for a specific pipeline by specifying it in the respective section of the [pipeline settings]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipelines/).

{% include image.html
  lightbox="true"
  file="/images/enterprise/runner/environment-per-pipeline.png"
  url="/images/enterprise/runner/environment-per-pipeline.png"
  alt="Running a pipeline on a specific environment"
  caption="Running a pipeline on a specific environment"
  max-width="60%"
%}

## Monitoring the Runner

Once installed, the runner is a normal Kubernetes application like all other applications. You can use your existing tools to monitor it. Only the runner pod is long-lived inside your cluster. All other components (such as the engine) are short-lived and exist only during pipeline builds.

You can always see what the Runner is doing by listing the resources inside the namespace you chose during installation:

```
$ kubectl get pods -n codefresh-runtime
NAME                                             READY   STATUS    RESTARTS   AGE
dind-5c5afbb02e9fd02917b33f06                    1/1     Running   0          1m
dind-lv-monitor-venona-kkkwr                     1/1     Running   1          7d
dind-volume-provisioner-venona-646cdcdc9-dqh8k   1/1     Running   2          7d
engine-5c5afbb02e9fd02917b33f06                  1/1     Running   0          1m
venona-8b5f787c5-ftbnd                           1/1     Running   2          7d
```

In the same manner you can list secrets, config-maps, logs, volumes, etc. for the Codefresh builds.

## Upgrading the Runner

To update the runner to a new version, use the `venona upgrade` command.

To delete a runner installation, use `venona delete <installation_name>`. Make sure that you don't have any pipelines assigned to that runtime environment before the removal takes place.

## Deploying applications to the same cluster as the runner

By default the Codefresh runner allows you to *build* pipelines in your private cluster. If you also want to *deploy* Docker images on your private cluster, you need to use the [Codefresh API]({{site.baseurl}}/docs/integrations/codefresh-api/) or CLI to add the cluster to Codefresh.

Here is the [CLI command](https://codefresh-io.github.io/cli/clusters/create-cluster/):

```
codefresh create cluster --kube-context <CONTEXT_NAME> --namespace <NAMESPACE> --serviceaccount <SERVICE_ACCOUNT> --behind-firewall
```

See the [connecting a cluster]({{site.baseurl}}/docs/deploy-to-kubernetes/add-kubernetes-cluster/) page for more details on adding a cluster.

## What to read next

* [Codefresh installation options]({{site.baseurl}}/docs/enterprise/installation-security/)
* [Account management]({{site.baseurl}}/docs/enterprise/ent-account-mng/)
* [Access Control]({{site.baseurl}}/docs/enterprise/access-control/)
* [Codefresh API]({{site.baseurl}}/docs/integrations/codefresh-api/)
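The CLI sanity check described above (`codefresh get pipelines`) can also be scripted directly against the Codefresh SaaS with the same CLI token, which is handy when validating the token before a runner install. The endpoint path and response shape in this Python sketch are assumptions based on the public Codefresh API rather than anything stated on this page, so treat it purely as a sketch.

```python
"""Sketch: the same sanity check as `codefresh get pipelines`, over HTTP.

Assumptions: the /api/pipelines endpoint, the bare-token Authorization
header, and the {"docs": [...]} payload shape mirror the public API.
"""
import os

import requests

CF_API = "https://g.codefresh.io/api"
token = os.environ["CODEFRESH_API_KEY"]  # the same CLI token used for venona

resp = requests.get(f"{CF_API}/pipelines", headers={"Authorization": token})
resp.raise_for_status()  # a 401 here means the token is not valid yet

# The pipelines are assumed to arrive as a paginated {"docs": [...]} list.
for pipeline in resp.json().get("docs", []):
    print(pipeline.get("metadata", {}).get("name"))
```

If this prints your pipeline names, the token is good and `venona install` should be able to register the runtime environment with it.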
51.063107
305
0.764521
eng_Latn
0.987701
2148881a786851a6547ac087b62f4144139f34e0
5,783
md
Markdown
docs/2014/analysis-services/multidimensional-models/define-named-queries-in-a-data-source-view-analysis-services.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models/define-named-queries-in-a-data-source-view-analysis-services.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models/define-named-queries-in-a-data-source-view-analysis-services.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Define Named Queries in a Data Source View (Analysis Services) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- analysis-services
ms.topic: conceptual
helpviewer_keywords:
- named queries [Analysis Services], creating
- modifying named queries
- data source views [Analysis Services], named queries
ms.assetid: f09ba8aa-950e-4c0d-961e-970de13200be
author: minewiskan
ms.author: owend
manager: craigg
ms.openlocfilehash: a01d130cc37faa29e2aebe8612fc5e02fef10c78
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/02/2018
ms.locfileid: "48099621"
---
# <a name="define-named-queries-in-a-data-source-view-analysis-services"></a>Define Named Queries in a Data Source View (Analysis Services)

A named query is a SQL expression represented as a table. In a named query, you can specify a SQL expression to select rows and columns returned from one or more tables in one or more data sources. A named query is like any other table in a data source view, with rows and relationships, except that the named query is based on an expression.

A named query lets you extend the relational schema of existing tables in a data source view without modifying the underlying data source. For example, a series of named queries can be used to split up a complex dimension table into smaller, simpler dimension tables for use in database dimensions. A named query can also be used to join multiple database tables from one or more data sources into a single data source view table.

## <a name="creating-a-named-query"></a>Creating a named query

> [!NOTE]
> You cannot add a named calculation to a named query, nor can you base a named query on a table that contains a named calculation.

When you create a named query, you specify a name, the SQL query that returns the columns and data for the table, and optionally a description of the named query. The SQL expression can refer to other tables in the data source view. After the named query has been defined, the SQL query in the named query is sent to the provider of the data source and validated. If the provider does not find any errors in the SQL query, the column is added to the table.

Tables and columns referenced in the SQL query must not be qualified, or must be qualified by the table name only. For example, to refer to the SaleAmount column in a table, `SaleAmount` or `Sales.SaleAmount` is valid, while `dbo.Sales.SaleAmount` generates an error.

**Note** When defining a named query against a [!INCLUDE[ssVersion2000](../../includes/ssversion2000-md.md)] or [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] 7.0 data source, a named query containing a correlated subquery and a GROUP BY clause will fail. For more information, see the article [Internal Error with SELECT Statement Containing Correlated Subquery and GROUP BY](http://support.microsoft.com/kb/274729) in the [!INCLUDE[msCoName](../../includes/msconame-md.md)] Knowledge Base.
## <a name="add-or-edit-a-named-query"></a>Aggiungere o modificare una query denominata 1. In [!INCLUDE[ssBIDevStudioFull](../../includes/ssbidevstudiofull-md.md)]aprire il progetto o connettersi al database contenente la vista origine dati in cui si desidera aggiungere una query denominata. 2. In Esplora soluzioni espandere la cartella **Viste origine dati** , quindi fare doppio clic sulla vista origine dati. 3. Nel riquadro **Tabelle** o **Diagramma** fare clic con il pulsante destro del mouse su un'area vuota e quindi scegliere **Nuova query denominata**. 4. Nella finestra di dialogo **Crea query denominata** effettuare le operazioni seguenti: 1. Nella casella di testo **Name** digitare un nome di query. 2. Facoltativamente, digitare una descrizione per la query nella casella di testo **Descrizione** . 3. Nella casella di riepilogo **Origine dati** selezionare l'origine dei dati su cui verrà eseguita la query denominata. 4. Digitare la query nel riquadro inferiore oppure creare una query mediante gli strumenti grafici per la compilazione di query. > [!NOTE] > L'interfaccia utente per la compilazione di query dipende dall'origine dei dati. Anziché un'interfaccia utente grafica, potrebbe venire visualizzata un'interfaccia utente generica, basata su testo. È possibile ottenere gli stessi risultati con interfacce utente diverse, ma è necessario eseguire procedure diverse. Per altre informazioni, vedere [Finestra di dialogo Crea query denominata o Modifica query denominata &#40;Analysis Services - Dati multidimensionali&#41;](../create-or-edit-named-query-dialog-box-analysis-services-multidimensional-data.md). 5. Fare clic su **OK**. Nell'intestazione di tabella verrà visualizzata un'icona con due tabelle sovrapposte, indicante che la tabella è stata sostituita da una query denominata. ## <a name="see-also"></a>Vedere anche [Viste origine dati in modelli multidimensionali](data-source-views-in-multidimensional-models.md) [Definire calcoli denominati in una vista origine dati &#40;Analysis Services&#41;](define-named-calculations-in-a-data-source-view-analysis-services.md)
83.811594
632
0.782466
ita_Latn
0.996884